Logical Address Space

14,000,000 Leading Edge Experts on the ideXlab platform


The Experts below are selected from a list of 1974 Experts worldwide ranked by ideXlab platform

James Dinan - One of the best experts on this subject based on the ideXlab platform.

  • Efficient Runtime Support for a Partitioned Global Logical Address Space
    International Conference on Parallel Processing, 2018
    Co-Authors: Brian D Larkins, John Snyder, James Dinan
    Abstract:

    Many HPC applications have successfully applied Partitioned Global Address Space (PGAS) parallel programming models to efficiently manage shared data that is distributed across multiple nodes in a distributed memory system. However, while the flat addressing model provided by PGAS systems is effective for regular array data, it renders such systems difficult to use with loosely structured or sparse data. This work proposes a partitioned global logical address space (PGLAS) model that naturally supports a variety of data models through the automatic mapping of an application-defined key space onto the underlying distributed memory system. We present an efficient implementation of the PGLAS model built atop a parallel distributed hash table (PDHT) and demonstrate that this model is amenable to offloading using the Portals 4 network programming interface. We demonstrate the effectiveness of PDHT using representative applications from the computational chemistry and genomics domains. Results indicate that PGLAS models such as PDHT provide a promising new method for parallelizing applications with non-regular data.
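    The core idea of a logically addressed space is that the runtime, not the programmer, resolves an application-defined key to a physical location. A minimal sketch of such a key-to-node mapping, assuming a simple hash-modulo placement scheme (the node count and function names here are hypothetical illustrations, not the actual PDHT API):

    ```python
    import hashlib

    NUM_NODES = 8  # hypothetical node count for illustration

    def owner_node(key: str, num_nodes: int = NUM_NODES) -> int:
        """Map an application-defined key to the node that owns it.

        A PGLAS-style runtime resolves logical keys to physical locations;
        the simplest scheme hashes the key and reduces it modulo the
        number of nodes, so any process can compute the owner locally.
        """
        digest = hashlib.sha256(key.encode()).digest()
        return int.from_bytes(digest[:8], "big") % num_nodes

    def route(key: str) -> str:
        """Show where a put/get on this key would be sent."""
        return f"key {key!r} -> node {owner_node(key)}"
    ```

    Because every process computes the same owner for a given key, puts and gets need no central directory; real implementations refine this with bucketing and network offload, but the deterministic key-to-owner mapping is the essential property.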

Brian D Larkins - One of the best experts on this subject based on the ideXlab platform.

  • Efficient Runtime Support for a Partitioned Global Logical Address Space
    International Conference on Parallel Processing, 2018
    Co-Authors: Brian D Larkins, John Snyder, James Dinan
    Abstract:

    (Abstract identical to the entry under James Dinan above.)

John Snyder - One of the best experts on this subject based on the ideXlab platform.

  • Efficient Runtime Support for a Partitioned Global Logical Address Space
    International Conference on Parallel Processing, 2018
    Co-Authors: Brian D Larkins, John Snyder, James Dinan
    Abstract:

    (Abstract identical to the entry under James Dinan above.)

Ronan Keryell - One of the best experts on this subject based on the ideXlab platform.

  • A Linear Algebra Framework for Static HPF Code Distribution
    1995
    Co-Authors: Corinne Ancourt, Fabien Coelho, François Irigoin, Ronan Keryell
    Abstract:

    High Performance Fortran (HPF) was developed to support data-parallel programming for SIMD and MIMD machines with distributed memory. The programmer is provided a familiar uniform logical address space and specifies the data distribution by directives. The compiler then exploits these directives to allocate arrays in the local memories, to assign computations to elementary processors, and to migrate data between processors when required. We show here that linear algebra is a powerful framework to encode HPF directives and to synthesize distributed code with space-efficient array allocation, tight loop bounds, and vectorized communications for INDEPENDENT loops. The generated code includes traditional optimizations such as guard elimination, message vectorization and aggregation, and overlap analysis. The systematic use of an affine framework makes it possible to prove the compilation scheme correct. An early version of this paper was presented at the Fourth International Workshop on Comp..
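    The affine mappings the abstract refers to can be made concrete with HPF's block-cyclic distribution: under `DISTRIBUTE (CYCLIC(b))` onto `p` processors, closed-form expressions give both the owning processor and the local offset of any global index. A small sketch of those standard formulas (0-based indexing assumed for illustration):

    ```python
    def block_cyclic_owner(i: int, b: int, p: int) -> int:
        """Processor owning global index i under CYCLIC(b) onto p processors."""
        return (i // b) % p

    def block_cyclic_local(i: int, b: int, p: int) -> int:
        """Local offset of global index i within its owner's memory."""
        return (i // (b * p)) * b + (i % b)
    ```

    For example, with block size 2 and 3 processors, global indices 0..5 land on processors 0, 0, 1, 1, 2, 2, and index 7 sits at local offset 3 on processor 0. Because both expressions are affine in `i` (up to integer division), the compiler can derive tight loop bounds and communication sets from them with linear-algebra machinery.

    ```python
    [block_cyclic_owner(i, 2, 3) for i in range(6)]  # -> [0, 0, 1, 1, 2, 2]
    ```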

Corinne Ancourt - One of the best experts on this subject based on the ideXlab platform.

  • A Linear Algebra Framework for Static HPF Code Distribution
    1995
    Co-Authors: Corinne Ancourt, Fabien Coelho, François Irigoin, Ronan Keryell
    Abstract:

    (Abstract identical to the entry under Ronan Keryell above.)