Virtual Address

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 51489 Experts worldwide ranked by ideXlab platform

Koen De Bosschere - One of the best experts on this subject based on the ideXlab platform.

  • Java object header elimination for reduced memory consumption in 64-bit Virtual machines
    ACM Transactions on Architecture and Code Optimization, 2007
    Co-Authors: Kris Venstermans, Koen De Bosschere
    Abstract:

    Memory performance is an important design issue for contemporary computer systems given the huge processor/memory speed gap. This paper proposes a space-efficient Java object model for reducing the memory consumption of 64-bit Java Virtual machines. We completely eliminate the object header through typed Virtual Addressing (TVA) or implicit typing. TVA encodes the object type in the object's Virtual Address by allocating all objects of a given type in a contiguous memory segment. This allows for removing the type information as well as the status field from the object header. Whenever type and status information is needed, masking is applied to the object's Virtual Address for obtaining an offset into type and status information structures. Unlike previous work on implicit typing, we apply TVA to a selected number of frequently allocated object types, hence, the name selective TVA (STVA); this limits the amount of memory fragmentation. In addition to applying STVA, we also compress the type information block (TIB) pointers for all objects that do not fall under TVA. We implement the space-efficient Java object model in the 64-bit version of the Jikes RVM on an AIX IBM platform and compare its performance against the traditionally used Java object model using a multitude of Java benchmarks. We conclude that the space-efficient Java object model reduces memory consumption by on average 15% (and up to 45% for some benchmarks). About one-half the reduction comes from TIB pointer compression; the other one-half comes from STVA. In terms of performance, the space-efficient object model generally does not affect performance; however, for some benchmarks we observe statistically significant performance speedups, up to 20%.
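    The masking idea described above can be sketched as follows. This is a minimal illustration, not the Jikes RVM implementation: the segment size, table layout, and all names here are assumptions chosen for clarity.

    ```python
    # Hedged sketch of typed Virtual Addressing (TVA): each object type owns an
    # aligned virtual-memory segment, so the high bits of an object's address
    # identify its type and no per-object header word is needed.

    SEGMENT_BITS = 28                      # illustrative: 256 MB per type segment

    # Illustrative "type information" table, indexed by segment number.
    tib_table = {1: "java/lang/String", 2: "java/util/HashMap$Node"}

    def allocate(segment_id, offset):
        """Toy allocator: an object's address is (segment_id << SEGMENT_BITS) + offset."""
        return (segment_id << SEGMENT_BITS) + offset

    def type_of(address):
        """Recover the type by masking the address -- no header load required."""
        return tib_table[address >> SEGMENT_BITS]

    s = allocate(1, 0x40)                  # a "String" at offset 0x40 in segment 1
    assert type_of(s) == "java/lang/String"
    ```

    The fragmentation concern mentioned in the abstract follows directly from this layout: every type segment reserves address space whether or not it fills up, which is why the paper applies the scheme only selectively.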

  • Object-relative Addressing: compressed pointers in 64-bit Java Virtual machines
    Lecture Notes in Computer Science, 2007
    Co-Authors: Kris Venstermans, Lieven Eeckhout, Koen De Bosschere
    Abstract:

    64-bit Address spaces come at the price of pointers requiring twice as much memory as 32-bit Address spaces, resulting in increased memory usage. This paper reduces the memory usage of 64-bit pointers in the context of Java Virtual machines through pointer compression, called Object-Relative Addressing (ORA). The idea is to compress 64-bit raw pointers into 32-bit offsets relative to the referencing object's Virtual Address. Unlike previous work on the subject using a constant base Address for compressed pointers, ORA allows for applying pointer compression to Java programs that allocate more than 4GB of memory. Our experimental results using Jikes RVM and the SPECjbb and DaCapo benchmarks on an IBM POWER4 machine show that the overhead introduced by ORA is statistically insignificant on average compared to raw 64-bit pointer representation, while reducing the total memory usage by 10% on average and up to 14.5% for some applications.
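    The compression scheme above can be sketched in a few lines. This is an illustrative model of the arithmetic only, under the assumption that a reference field holds a signed 32-bit offset; the actual ORA handling of out-of-range references is not shown.

    ```python
    # Hedged sketch of Object-Relative Addressing (ORA): a reference field stores
    # a signed 32-bit offset from the *referencing* object's address rather than
    # an absolute 64-bit pointer, so the heap itself may exceed 4 GB.

    OFFSET_MIN = -(1 << 31)
    OFFSET_MAX = (1 << 31) - 1

    def compress(referrer_addr, target_addr):
        off = target_addr - referrer_addr
        if not (OFFSET_MIN <= off <= OFFSET_MAX):
            raise OverflowError("target outside the +/-2 GB range of the referrer")
        return off & 0xFFFFFFFF            # stored in a 32-bit field

    def decompress(referrer_addr, stored):
        off = stored - (1 << 32) if stored >= (1 << 31) else stored  # sign-extend
        return referrer_addr + off

    a, b = 0x7_0000_1000, 0x7_0000_0040    # two objects, both well above 4 GB
    field = compress(a, b)
    assert decompress(a, field) == b
    ```

    Note the contrast with constant-base compression: here the base of the offset moves with each referencing object, which is what lifts the 4 GB heap limit.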

  • Space-efficient 64-bit Java objects through selective typed Virtual Addressing
    Symposium on Code Generation and Optimization, 2006
    Co-Authors: Kris Venstermans, Lieven Eeckhout, Koen De Bosschere
    Abstract:

    Memory performance is an important design issue for contemporary systems given the ever increasing memory gap. This paper proposes a space-efficient Java object model for reducing the memory consumption of 64-bit Java Virtual machines. We propose selective typed Virtual Addressing (STVA) which uses typed Virtual Addressing (TVA) or implicit typing for reducing the header of 64-bit Java objects. The idea behind TVA is to encode the object's type in the object's Virtual Address. In other words, all objects of a given type are allocated in a contiguous memory segment. As such, the type information can be removed from the object's header which reduces the number of allocated bytes per object. Whenever type information is needed for the given object, masking is applied to the object's Virtual Address. Unlike previous work on implicit typing, we apply TVA to a selected number of frequently allocated and/or long-lived object types. This limits the amount of memory fragmentation. We implement STVA in the 64-bit version of the Jikes RVM on an AIX IBM platform and compare its performance against a traditional VM implementation without STVA using a multitude of Java benchmarks. We conclude that STVA reduces memory consumption by on average 15.5% (and up to 39% for some benchmarks). In terms of performance, STVA generally does not affect performance, however for some benchmarks we observe statistically significant performance speedups, up to 24%.

Xu Cheng - One of the best experts on this subject based on the ideXlab platform.

  • TAP prediction: reusing conditional branch predictor for indirect branches with target Address pointers
    International Conference on Computer Design, 2011
    Co-Authors: Zichao Xie, Dong Tong, Mingkai Huang, Xiaoyin Wang, Qinqing Shi, Xu Cheng
    Abstract:

    Indirect-branch prediction is becoming more important for modern processors as more programs are written in object-oriented languages. Previous hardware-based indirect-branch predictors generally require significant hardware storage or use aggressive algorithms which make the processor front-end more complex. In this paper, we propose a fast and cost-efficient indirect-branch prediction strategy, called Target Address Pointer (TAP) Prediction. TAP Prediction reuses the history-based branch direction predictor to detect occurrences of indirect branches, and then stores indirect-branch targets in the Branch Target Buffer (BTB). The key idea of TAP Prediction is to predict the Target Address Pointers, which generate Virtual Addresses to index the targets stored in the BTB, rather than to predict the indirect-branch targets directly. TAP Prediction also reuses the branch direction predictor to construct several small predictors. When fetching an indirect branch, these small predictors work in parallel to generate the target Address pointer. Then TAP prediction accesses the BTB to fetch the predicted indirect-branch target using the generated Virtual Address. This mechanism could achieve time cost comparable to that of dedicated-storage predictors, without requiring additional large amounts of storage. Our evaluation shows that for three representative direction predictors (Hybrid, Perceptrons, and O-GEHL), TAP schemes improve performance by 18.19%, 21.52%, and 20.59%, respectively, over the baseline processor with the most commonly-used BTB prediction. Compared with previous hardware-based indirect-branch predictors, the TAP-Perceptrons scheme achieves performance improvement equivalent to that provided by a 48KB TTC predictor, and it also outperforms the VPC predictor by 14.02%.
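    The pointer-instead-of-target indirection at the heart of the scheme can be sketched as below. The table size, hash, and function names are illustrative assumptions, not the paper's hardware design; the point is only that the predictor outputs a small pointer, and that pointer (combined with the branch address) forms the index into the BTB where the full target lives.

    ```python
    # Hedged sketch of TAP prediction's core indirection: predict a small
    # "target address pointer" and use it to form the BTB index holding the
    # 64-bit target, instead of predicting the target itself.

    BTB_ENTRIES = 4096
    btb = {}                                   # BTB index -> stored branch target

    def btb_index(branch_pc, pointer):
        # Combine the branch PC with the predicted pointer to form an index
        # (a virtual address into the BTB) -- illustrative hash, not the paper's.
        return (branch_pc ^ (pointer << 4)) % BTB_ENTRIES

    def train(branch_pc, pointer, target):
        btb[btb_index(branch_pc, pointer)] = target

    def predict(branch_pc, pointer):
        return btb.get(btb_index(branch_pc, pointer))

    train(0x400100, pointer=2, target=0x400800)   # one resolved virtual-call target
    assert predict(0x400100, 2) == 0x400800       # same pointer -> same target
    assert predict(0x400100, 3) is None           # different pointer, different slot
    ```

    Because the pointer is only a few bits wide, the existing direction-predictor storage can be reused to hold it, which is where the cost saving over dedicated target-storage predictors comes from.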

  • Dynamic memory demand estimating based on the guest operating system behaviors for Virtual machines
    International Symposium on Parallel and Distributed Processing and Applications, 2011
    Co-Authors: Yan Niu, Chun Yang, Xu Cheng
    Abstract:

    In the Virtualized environment, memory can be efficiently utilized if the dynamic memory demands of Virtual machines can be estimated at runtime. An efficient memory estimator should report the appropriate size of memory that the Virtual machine can make full use of while keeping reasonable performance. However, this appropriate size is hard to estimate accurately with low overhead. This paper presents a memory demand estimator based on the guest operating system behaviors architecturally visible to the Virtual machine monitor; it accurately reports the expected appropriate memory size with negligible overhead. The estimator consists of two components which, respectively, track the amount of memory residing in the Virtual Address space and the memory used as page cache that is only accessible in kernel mode. The experimental results show that the estimation error is only 0.4%~2.1%, and the runtime overhead is only 0.8% on average because no additional memory protection traps are introduced.

Kris Venstermans - One of the best experts on this subject based on the ideXlab platform.

  • Java object header elimination for reduced memory consumption in 64-bit Virtual machines
    ACM Transactions on Architecture and Code Optimization, 2007
    Co-Authors: Kris Venstermans, Koen De Bosschere
    Abstract:

    Memory performance is an important design issue for contemporary computer systems given the huge processor/memory speed gap. This paper proposes a space-efficient Java object model for reducing the memory consumption of 64-bit Java Virtual machines. We completely eliminate the object header through typed Virtual Addressing (TVA) or implicit typing. TVA encodes the object type in the object's Virtual Address by allocating all objects of a given type in a contiguous memory segment. This allows for removing the type information as well as the status field from the object header. Whenever type and status information is needed, masking is applied to the object's Virtual Address for obtaining an offset into type and status information structures. Unlike previous work on implicit typing, we apply TVA to a selected number of frequently allocated object types, hence, the name selective TVA (STVA); this limits the amount of memory fragmentation. In addition to applying STVA, we also compress the type information block (TIB) pointers for all objects that do not fall under TVA. We implement the space-efficient Java object model in the 64-bit version of the Jikes RVM on an AIX IBM platform and compare its performance against the traditionally used Java object model using a multitude of Java benchmarks. We conclude that the space-efficient Java object model reduces memory consumption by on average 15% (and up to 45% for some benchmarks). About one-half the reduction comes from TIB pointer compression; the other one-half comes from STVA. In terms of performance, the space-efficient object model generally does not affect performance; however, for some benchmarks we observe statistically significant performance speedups, up to 20%.

  • Object-relative Addressing: compressed pointers in 64-bit Java Virtual machines
    Lecture Notes in Computer Science, 2007
    Co-Authors: Kris Venstermans, Lieven Eeckhout, Koen De Bosschere
    Abstract:

    64-bit Address spaces come at the price of pointers requiring twice as much memory as 32-bit Address spaces, resulting in increased memory usage. This paper reduces the memory usage of 64-bit pointers in the context of Java Virtual machines through pointer compression, called Object-Relative Addressing (ORA). The idea is to compress 64-bit raw pointers into 32-bit offsets relative to the referencing object's Virtual Address. Unlike previous work on the subject using a constant base Address for compressed pointers, ORA allows for applying pointer compression to Java programs that allocate more than 4GB of memory. Our experimental results using Jikes RVM and the SPECjbb and DaCapo benchmarks on an IBM POWER4 machine show that the overhead introduced by ORA is statistically insignificant on average compared to raw 64-bit pointer representation, while reducing the total memory usage by 10% on average and up to 14.5% for some applications.

  • Space-efficient 64-bit Java objects through selective typed Virtual Addressing
    Symposium on Code Generation and Optimization, 2006
    Co-Authors: Kris Venstermans, Lieven Eeckhout, Koen De Bosschere
    Abstract:

    Memory performance is an important design issue for contemporary systems given the ever increasing memory gap. This paper proposes a space-efficient Java object model for reducing the memory consumption of 64-bit Java Virtual machines. We propose selective typed Virtual Addressing (STVA) which uses typed Virtual Addressing (TVA) or implicit typing for reducing the header of 64-bit Java objects. The idea behind TVA is to encode the object's type in the object's Virtual Address. In other words, all objects of a given type are allocated in a contiguous memory segment. As such, the type information can be removed from the object's header which reduces the number of allocated bytes per object. Whenever type information is needed for the given object, masking is applied to the object's Virtual Address. Unlike previous work on implicit typing, we apply TVA to a selected number of frequently allocated and/or long-lived object types. This limits the amount of memory fragmentation. We implement STVA in the 64-bit version of the Jikes RVM on an AIX IBM platform and compare its performance against a traditional VM implementation without STVA using a multitude of Java benchmarks. We conclude that STVA reduces memory consumption by on average 15.5% (and up to 39% for some benchmarks). In terms of performance, STVA generally does not affect performance, however for some benchmarks we observe statistically significant performance speedups, up to 24%.

Kai Li - One of the best experts on this subject based on the ideXlab platform.

  • Fast cluster failover using Virtual memory-mapped communication
    International Conference on Supercomputing, 1999
    Co-Authors: Yuanyuan Zhou, Peter M Chen, Kai Li
    Abstract:

    This paper proposes a novel way to use Virtual memory-mapped communication (VMMC) to reduce the failover time on clusters. With the VMMC model, applications’ Virtual Address space can be efficiently mirrored on remote memory either automatically or via explicit messages. When a machine fails, its applications can restart from the most recent checkpoints on the failover node with minimal memory copying and disk I/O overhead. This method requires little change to applications’ source code. We developed two fast failover protocols: the deliberate update failover protocol (DU) and the automatic update failover protocol (AU). The first can run on any system that supports VMMC, whereas the other requires special network interface support. We implemented these two protocols on two different clusters that supported VMMC communication. Our results with three transaction-based applications show that both protocols work quite well. The deliberate update protocol imposes 4-21% overhead when taking checkpoints every 2 seconds. If an application can tolerate 20% overhead, this protocol can fail over to another machine within 4 milliseconds in the best case and from 0.1 to 3 seconds in the worst case. The failover performance can be further improved by using special network interface hardware. The automatic update protocol is able to take checkpoints every 0.1 seconds with only 3-12% overhead. If 10% overhead is allowed, it can fail over applications from 0.01 to 0.4 seconds in the worst case.

  • Software support for Virtual memory-mapped communication
    International Conference on Parallel Processing, 1996
    Co-Authors: Cezary Dubnicki, Liviu Iftode, Edward W Felten, Kai Li
    Abstract:

    Virtual memory-mapped communication (VMMC) is a communication model providing direct data transfer between the sender's and receiver's Virtual Address spaces. This model eliminates operating system involvement in communication, provides full protection, supports user-level buffer management and zero-copy protocols, and minimizes software communication overhead. This paper describes system software support for the model including its API, operating system support, and software architecture, for two network interfaces designed in the SHRIMP project. Our implementations and experiments show that the VMMC model can indeed expose the available hardware performance to user programs. On two Pentium PCs with our prototype network interface hardware over a network, we have achieved user-to-user latency of 4.8 μs and sustained bandwidth of 23 MB/s, which is close to the peak hardware bandwidth. Software communication overhead is only a few user-level instructions.
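    The transfer model described above can be sketched as follows. The API names and region handles here are illustrative assumptions, not the SHRIMP interface; the sketch shows only the defining property, that a sender deposits data directly into a region of the receiver's Virtual Address space that the receiver has exported, with no intermediate OS buffering.

    ```python
    # Hedged sketch of the VMMC model: a receiver exports part of its Virtual
    # Address space; a sender then writes straight into it ("zero-copy" in spirit).

    class Node:
        def __init__(self):
            self.memory = bytearray(1 << 16)   # this node's Virtual Address space
            self.exports = {}                  # handle -> (base, length)

        def export_region(self, handle, base, length):
            """Receiver grants senders direct write access to [base, base+length)."""
            self.exports[handle] = (base, length)

        def vmmc_send(self, dest, handle, offset, data):
            """Sender deposits data directly into the receiver's exported memory."""
            base, length = dest.exports[handle]
            if offset + len(data) > length:
                raise ValueError("write exceeds exported region")
            dest.memory[base + offset : base + offset + len(data)] = data

    recv, send = Node(), Node()
    recv.export_region("buf", base=0x1000, length=256)
    send.vmmc_send(recv, "buf", 0, b"hello")
    assert bytes(recv.memory[0x1000:0x1005]) == b"hello"
    ```

    The export step is also where the protection claimed in the abstract comes from: a sender can only reach regions the receiver has explicitly exposed, and only within their bounds.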

Lieven Eeckhout - One of the best experts on this subject based on the ideXlab platform.

  • Object-relative Addressing: compressed pointers in 64-bit Java Virtual machines
    Lecture Notes in Computer Science, 2007
    Co-Authors: Kris Venstermans, Lieven Eeckhout, Koen De Bosschere
    Abstract:

    64-bit Address spaces come at the price of pointers requiring twice as much memory as 32-bit Address spaces, resulting in increased memory usage. This paper reduces the memory usage of 64-bit pointers in the context of Java Virtual machines through pointer compression, called Object-Relative Addressing (ORA). The idea is to compress 64-bit raw pointers into 32-bit offsets relative to the referencing object's Virtual Address. Unlike previous work on the subject using a constant base Address for compressed pointers, ORA allows for applying pointer compression to Java programs that allocate more than 4GB of memory. Our experimental results using Jikes RVM and the SPECjbb and DaCapo benchmarks on an IBM POWER4 machine show that the overhead introduced by ORA is statistically insignificant on average compared to raw 64-bit pointer representation, while reducing the total memory usage by 10% on average and up to 14.5% for some applications.

  • Space-efficient 64-bit Java objects through selective typed Virtual Addressing
    Symposium on Code Generation and Optimization, 2006
    Co-Authors: Kris Venstermans, Lieven Eeckhout, Koen De Bosschere
    Abstract:

    Memory performance is an important design issue for contemporary systems given the ever increasing memory gap. This paper proposes a space-efficient Java object model for reducing the memory consumption of 64-bit Java Virtual machines. We propose selective typed Virtual Addressing (STVA) which uses typed Virtual Addressing (TVA) or implicit typing for reducing the header of 64-bit Java objects. The idea behind TVA is to encode the object's type in the object's Virtual Address. In other words, all objects of a given type are allocated in a contiguous memory segment. As such, the type information can be removed from the object's header which reduces the number of allocated bytes per object. Whenever type information is needed for the given object, masking is applied to the object's Virtual Address. Unlike previous work on implicit typing, we apply TVA to a selected number of frequently allocated and/or long-lived object types. This limits the amount of memory fragmentation. We implement STVA in the 64-bit version of the Jikes RVM on an AIX IBM platform and compare its performance against a traditional VM implementation without STVA using a multitude of Java benchmarks. We conclude that STVA reduces memory consumption by on average 15.5% (and up to 39% for some benchmarks). In terms of performance, STVA generally does not affect performance, however for some benchmarks we observe statistically significant performance speedups, up to 24%.