Fault Exception

The experts below are selected from a list of 9 experts worldwide, ranked by the ideXlab platform.

Jian Feng - One of the best experts on this subject based on the ideXlab platform.

  • design of a peer to peer video on demand system with the consideration of Fault Exception
    2007
    Co-Authors: Jian Feng
    Abstract:

    Early departure of customers during video playback has a significant impact on the design of a peer-to-peer (P2P) video-on-demand (VoD) system, especially on the system bandwidth requirement and on the need for a well-defined Fault Exception mechanism. In this paper, we develop an analytical model to evaluate how system parameters such as the bandwidth requirement and the batching time of a P2P batching system are affected when early departure behavior is taken into account. A Fault Exception mechanism is also proposed. Computer simulations are performed to verify the correctness of the model. The results show that system resources can be utilized more effectively when customer departure behavior is incorporated into the system design.
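
The bandwidth effect described in this abstract can be illustrated with a toy simulation. The C sketch below is not the paper's analytical model: it simply batches a fixed number of arrivals, lets each customer leave early with some probability, and estimates how many streams the server must supply once the peer contribution of departed customers is lost. All constants (arrival count, peer upload fraction, departure probabilities) are illustrative assumptions.

```c
#include <stdio.h>
#include <stdlib.h>

/*
 * Minimal Monte Carlo sketch, not the paper's analytical model.
 * Customers arriving within one batching interval share a stream;
 * customers that stay to the end also upload to their peers, while
 * early departures stop contributing, so the server must supply the
 * difference.  All constants below are illustrative assumptions.
 */
#define TRIALS      10000
#define ARRIVALS    20     /* customers per batching interval        */
#define PEER_UPLOAD 0.5    /* fraction of a stream each peer uploads */

static double avg_server_streams(double depart_prob)
{
    double total = 0.0;
    for (int t = 0; t < TRIALS; t++) {
        int stayed = 0;
        for (int c = 0; c < ARRIVALS; c++)
            if ((double)rand() / RAND_MAX >= depart_prob)
                stayed++;                       /* watches to the end */

        double demand = stayed;                 /* one stream each    */
        double supply = stayed * PEER_UPLOAD;   /* peer contribution  */
        double server = demand - supply;
        if (server < 1.0)
            server = 1.0;                       /* at least the seed  */
        total += server;
    }
    return total / TRIALS;
}

int main(void)
{
    srand(42);
    for (double p = 0.0; p <= 0.5; p += 0.1)
        printf("departure prob %.1f -> avg server streams %.2f\n",
               p, avg_server_streams(p));
    return 0;
}
```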

Heonshik Shin - One of the best experts on this subject based on the ideXlab platform.

  • scratchpad memory management for portable systems with a memory management unit
    2006
    Co-Authors: Bernhard Egger, Jaejin Lee, Heonshik Shin
    Abstract:

    In this paper, we present a dynamic scratchpad memory allocation strategy targeting a horizontally partitioned memory subsystem for contemporary embedded processors. The memory subsystem is equipped with a memory management unit (MMU), and the physically addressed scratchpad memory (SPM) is mapped into the virtual address space. A small minicache is added to further reduce energy consumption and improve performance. Using the MMU's page Fault Exception mechanism, we track page accesses and copy frequently executed code sections into the SPM before they are executed. Because the minimal transfer unit between the external memory and the SPM is a single memory page, good code placement is of great importance for the success of our method. Based on profiling information, our postpass optimizer divides the application binary into pageable, cacheable, and uncacheable regions. The latter two are placed at fixed locations in the external memory, and only pageable code is copied on demand to the SPM from the external memory. Pageable code is grouped into sections whose sizes are equal to the physical page size of the MMU. We discuss code grouping techniques and also analyze the effect of the minicache on execution time and energy consumption. We evaluate our SPM allocation strategy with twelve embedded applications, including MPEG-4. Compared to a fully cached configuration, we achieve on average a 12% improvement in runtime performance and a 33% reduction in energy consumption by the memory system.
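
The page-fault-driven copying described in this abstract can be approximated in user space. The C sketch below is a POSIX illustration of the general idea rather than the authors' SPM manager: a PROT_NONE mapping stands in for code pages that are not yet resident, a heap buffer stands in for the external memory image, and a SIGSEGV handler plays the role of the page Fault Exception handler that copies a page in on first access. All names and sizes are illustrative assumptions.

```c
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/*
 * User-space sketch of demand loading driven by page Fault Exceptions
 * (a POSIX approximation of the idea, not the authors' system).  The
 * real scheme copies code pages into physically addressed scratchpad
 * memory; here a PROT_NONE mapping stands in for "not yet resident"
 * pages and an ordinary buffer stands in for the external memory.
 */
#define NPAGES 4

static char  *region;      /* virtual range that faults on first touch */
static char  *backing;     /* "external memory" copy of the contents   */
static size_t page_size;

static void on_fault(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    /* Round the faulting address down to its page boundary. */
    char *page = (char *)((uintptr_t)info->si_addr & ~(page_size - 1));
    size_t off = (size_t)(page - region);

    /* "Copy the page in" and make it accessible, then retry the access. */
    mprotect(page, page_size, PROT_READ | PROT_WRITE);
    memcpy(page, backing + off, page_size);
}

int main(void)
{
    page_size = (size_t)sysconf(_SC_PAGESIZE);
    backing   = malloc(NPAGES * page_size);
    memset(backing, 'A', NPAGES * page_size);

    region = mmap(NULL, NPAGES * page_size, PROT_NONE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa = {0};
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = on_fault;
    sigaction(SIGSEGV, &sa, NULL);

    /* The first touch of each page raises a Fault Exception that the
     * handler services, mimicking on-demand loading into the SPM. */
    printf("first byte of page 2: %c\n", region[2 * page_size]);
    return 0;
}
```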

Yuji Chiba - One of the best experts on this subject based on the ideXlab platform.

  • heap protection for java virtual machines
    2006
    Co-Authors: Yuji Chiba
    Abstract:

    Java virtual machine (JVM) crashes are often due to an invalid memory reference to the JVM heap. Before the bug that caused the invalid reference can be fixed, its location must be identified. It can be in either the JVM implementation or a native library written in C that is invoked from Java applications. To help system engineers identify the location, we implemented a feature using page protection that prevents threads executing native methods from referring to the JVM heap. This feature protects the JVM heap during native method execution; when native code refers to the JVM heap invalidly, it interrupts the execution by generating a page-Fault Exception and then reports the location where the page-Fault Exception was generated. This helps system engineers locate the bug in the native library. The runtime overhead of this feature averaged 4.4%, as estimated using the SPECjvm98, SPECjbb2000, and JFCMark benchmark suites.
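
The protection idea described in this abstract can be sketched with standard POSIX page protection. The C example below illustrates the general technique, not Chiba's JVM implementation: a stand-in "heap" is made inaccessible with mprotect() before a hypothetical buggy native method runs, so any stray reference raises a page Fault Exception whose faulting address is reported by the signal handler. All names are illustrative.

```c
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/*
 * Sketch of the page-protection idea, not the JVM implementation from
 * the paper: the "heap" is made inaccessible around a native call, so
 * a stray reference from native code raises a page Fault Exception
 * whose faulting address can be reported.  All names are illustrative.
 */
static char  *heap;
static size_t heap_size;

static void report_fault(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    fprintf(stderr, "invalid native reference to protected heap at %p\n",
            info->si_addr);
    _exit(1);   /* a real JVM would record a stack trace instead */
}

/* Hypothetical native method with a bug: it writes into the heap. */
static void buggy_native_method(void)
{
    heap[128] = 0;            /* stray write -> page Fault Exception */
}

int main(void)
{
    heap_size = (size_t)sysconf(_SC_PAGESIZE);
    heap = mmap(NULL, heap_size, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa = {0};
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = report_fault;
    sigaction(SIGSEGV, &sa, NULL);

    /* Protect the heap for the duration of the native call. */
    mprotect(heap, heap_size, PROT_NONE);
    buggy_native_method();                 /* caught by report_fault() */
    mprotect(heap, heap_size, PROT_READ | PROT_WRITE);
    return 0;
}
```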