GROMACS


The experts below are selected from a list of 9,969 experts worldwide, ranked by the ideXlab platform.

Erik Lindahl - One of the best experts on this subject based on the ideXlab platform.

  • GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers
    SoftwareX, 2015
    Co-Authors: Mark James Abraham, Berk Hess, Roland Schulz, Jeremy C. Smith, Szilárd Páll, Teemu Murtola, Erik Lindahl
    Abstract:

    GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, prepa ...

  • Tackling exascale software challenges in molecular dynamics simulations with GROMACS
    2nd International Conference on Exascale Applications and Software (EASC), April 2-3, 2014, Stockholm, Sweden; 2015
    Co-Authors: Szilárd Páll, Berk Hess, Erik Lindahl, Carsten Kutzner, Mark James Abraham
    Abstract:

    GROMACS is a widely used package for biomolecular simulation, and over the last two decades it has evolved from small-scale efficiency to advanced heterogeneous acceleration and multi-level parallelism targeting some of the largest supercomputers in the world. Here, we describe some of the ways we have been able to realize this through the use of parallelization on all levels, combined with a constant focus on absolute performance. Release 4.6 of GROMACS uses SIMD acceleration on a wide range of architectures, GPU offloading acceleration, and both OpenMP and MPI parallelism within and between nodes, respectively. The recent work on acceleration made it necessary to revisit the fundamental algorithms of molecular simulation, including the concept of neighbor searching, and we discuss the present and future challenges we see for exascale simulation, in particular a very fine-grained task parallelism. We also discuss the software management, code peer review, and continuous integration testing required for a project of this complexity.
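
    As a rough illustration of the neighbor-searching step mentioned above, the sketch below builds a cell (grid) list so that pair interactions within a cutoff are found without an O(N^2) scan over all particle pairs. This is a minimal teaching example, not GROMACS's SIMD-friendly cluster-pair search; the box size, cutoff, and coordinates are placeholder values.

```python
# Illustrative cell-list neighbor search: replace the naive all-pairs scan with a
# grid of cells at least one cutoff wide, so only neighboring cells are examined.
# Teaching sketch only; not GROMACS's cluster-pair algorithm.
import numpy as np

def cell_list_pairs(coords, box, cutoff):
    """Return all index pairs (i, j), i < j, closer than `cutoff` in a cubic periodic box."""
    ncell = max(1, int(box // cutoff))            # cells per dimension, each at least `cutoff` wide
    cell_size = box / ncell
    cell_of = np.floor(coords / cell_size).astype(int) % ncell
    buckets = {}                                  # cell index (cx, cy, cz) -> particle indices
    for idx, c in enumerate(map(tuple, cell_of)):
        buckets.setdefault(c, []).append(idx)
    pairs = set()
    cutoff2 = cutoff * cutoff
    for (cx, cy, cz), members in buckets.items():
        for dx in (-1, 0, 1):                     # this cell plus its 26 periodic neighbors
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    nb = ((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
                    for i in members:
                        for j in buckets.get(nb, []):
                            if j <= i:
                                continue          # count each pair once
                            d = coords[i] - coords[j]
                            d -= box * np.round(d / box)   # minimum-image convention
                            if float(np.dot(d, d)) < cutoff2:
                                pairs.add((i, j))
    return pairs

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 5.0, size=(200, 3))     # 200 placeholder particles in a 5x5x5 nm box
print(len(cell_list_pairs(coords, box=5.0, cutoff=1.0)), "pairs within the cutoff")
```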

  • EASC - Tackling Exascale Software Challenges in Molecular Dynamics Simulations with GROMACS
    Lecture Notes in Computer Science, 2015
    Co-Authors: Szilárd Páll, Berk Hess, Carsten Kutzner, Mark James Abraham, Erik Lindahl
    Abstract:

    GROMACS is a widely used package for biomolecular simulation, and over the last two decades it has evolved from small-scale efficiency to advanced heterogeneous acceleration and multi-level parallelism targeting some of the largest supercomputers in the world. Here, we describe some of the ways we have been able to realize this through the use of parallelization on all levels, combined with a constant focus on absolute performance. Release 4.6 of GROMACS uses SIMD acceleration on a wide range of architectures, GPU offloading acceleration, and both OpenMP and MPI parallelism within and between nodes, respectively. The recent work on acceleration made it necessary to revisit the fundamental algorithms of molecular simulation, including the concept of neighbor searching, and we discuss the present and future challenges we see for exascale simulation, in particular a very fine-grained task parallelism. We also discuss the software management, code peer review, and continuous integration testing required for a project of this complexity.

  • Implementation of the CHARMM force field in GROMACS: analysis of protein stability effects from correction maps, virtual interaction sites, and water models
    Journal of Chemical Theory and Computation, 2010
    Co-Authors: Pär Bjelkmar, Berk Hess, Per Larsson, Michel A. Cuendet, Erik Lindahl
    Abstract:

    CHARMM27 is a widespread and popular force field for biomolecular simulation, and several recent algorithms such as implicit solvent models have been developed specifically for it. We have here implemented the CHARMM force field and all necessary extended functional forms in the GROMACS molecular simulation package, to make CHARMM-specific features available and to test them in combination with techniques for extended time steps, to make all major force fields available for comparison studies in GROMACS, and to test various solvent model optimizations, in particular the effect of Lennard-Jones interactions on hydrogens. The implementation has full support both for CHARMM-specific features, such as multiple potentials over the same dihedral angle and the grid-based energy correction map on the ϕ, ψ protein backbone dihedrals, and for all GROMACS features such as virtual hydrogen interaction sites that enable 5 fs time steps. The medium-to-long time effects of both the correction maps and virtual sites ha...
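
    The "multiple potentials over the same dihedral angle" referred to above follow the CHARMM proper-dihedral form V(φ) = Σ_n k_n [1 + cos(nφ − δ_n)]. The snippet below evaluates such a multi-term dihedral; the force constants, multiplicities, and phases are made-up illustration values, not CHARMM27 parameters.

```python
# Evaluate several periodic dihedral terms acting on the same angle,
# i.e. V(phi) = sum_n k_n * (1 + cos(n*phi - delta_n)).
# The parameters below are illustrative only, not CHARMM27 values.
import math

def dihedral_energy(phi, terms):
    """phi in radians; terms is a list of (k, n, delta) tuples."""
    return sum(k * (1.0 + math.cos(n * phi - delta)) for k, n, delta in terms)

# hypothetical multi-term dihedral: multiplicities 1, 2, and 3 on one angle
terms = [(2.0, 1, 0.0), (1.0, 2, math.pi), (0.5, 3, 0.0)]   # (kJ/mol, multiplicity, rad)
for deg in (0, 60, 120, 180):
    print(f"phi = {deg:3d} deg  ->  V = {dihedral_energy(math.radians(deg), terms):.3f} kJ/mol")
```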

  • Speeding up parallel GROMACS on high-latency networks
    Journal of computational chemistry, 2007
    Co-Authors: Carsten Kutzner, David Van Der Spoel, Erik Lindahl, Bert L De Groot, Martin Fechner, Udo W. Schmitt, Helmut Grubmüller
    Abstract:

    We investigate the parallel scaling of the GROMACS molecular dynamics code on Ethernet Beowulf clusters and what prerequisites are necessary for decent scaling even on such clusters with only limited bandwidth and high latency. GROMACS 3.3 scales well on supercomputers like the IBM p690 (Regatta) and on Linux clusters with a special interconnect like Myrinet or Infiniband. Because of the high single-node performance of GROMACS, however, on the widely used Ethernet switched clusters, the scaling typically breaks down when more than two computer nodes are involved, limiting the absolute speedup that can be gained to about 3 relative to a single-CPU run. With the LAM MPI implementation, the main scaling bottleneck is identified here as the all-to-all communication that is required every time step. During such an all-to-all communication step, a huge number of messages floods the network, and as a result many TCP packets are lost. We show that Ethernet flow control prevents network congestion and leads to substantial scaling improvements. For 16 CPUs, e.g., a speedup of 11 has been achieved. However, for more nodes this mechanism also fails. By optimizing the all-to-all routine to send the data in an ordered fashion, we show that it is possible to completely prevent packet loss for any number of multi-CPU nodes. Thus, the GROMACS scaling dramatically improves, even for switches that lack flow control. In addition, for the common HP ProCurve 2848 switch we find that the way the nodes are connected to the switch's ports is essential for optimum all-to-all performance. This is also demonstrated for the example of the Car-Parrinello MD code.
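
    One standard way to "send the data in an ordered fashion" during an all-to-all is a pairwise-exchange schedule in which every node has exactly one partner per round, so the switch never sees a burst of colliding messages. The sketch below prints such a schedule using XOR pairing for a power-of-two node count; it illustrates the general idea only and may differ from the exact ordering used in the paper.

```python
# Pairwise-exchange all-to-all schedule: in round r, rank i exchanges data with
# rank i XOR r, so every rank sends to and receives from exactly one partner per
# round. Illustrative schedule only; requires a power-of-two rank count.

def pairwise_alltoall_schedule(n_ranks):
    """Yield, per round, the list of (sender, receiver) pairs for that round."""
    assert n_ranks & (n_ranks - 1) == 0, "XOR pairing needs a power-of-two rank count"
    for r in range(1, n_ranks):                 # round 0 would be 'send to self'
        yield [(i, i ^ r) for i in range(n_ranks)]

for rnd, pairs in enumerate(pairwise_alltoall_schedule(8), start=1):
    print(f"round {rnd}: " + " ".join(f"{a}->{b}" for a, b in pairs))
```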

Helmut Grubmüller - One of the best experts on this subject based on the ideXlab platform.

  • A GPU-Accelerated Fast Multipole Method for GROMACS: Performance and Accuracy.
    Journal of chemical theory and computation, 2020
    Co-Authors: Bartosz Kohnke, Carsten Kutzner, Helmut Grubmüller
    Abstract:

    An important and computationally demanding part of molecular dynamics simulations is the calculation of long-range electrostatic interactions. Today, the prevalent method to compute these interactions is particle mesh Ewald (PME). The PME implementation in the GROMACS molecular dynamics package is extremely fast on individual GPU nodes. However, for large scale multinode parallel simulations, PME becomes the main scaling bottleneck as it requires all-to-all communication between the nodes; as a consequence, the number of exchanged messages scales quadratically with the number of involved nodes in that communication step. To enable efficient and scalable biomolecular simulations on future exascale supercomputers, clearly a method with a better scaling property is required. The fast multipole method (FMM) is such a method. As a first step on the path to exascale, we have implemented a performance-optimized, highly efficient GPU FMM and integrated it into GROMACS as an alternative to PME. For a fair performance comparison between FMM and PME, we first assessed the accuracies of the methods for various sets of input parameters. With parameters yielding similar accuracies for both methods, we determined the performance of GROMACS with FMM and compared it to PME for exemplary benchmark systems. We found that FMM with a multipole order of 8 yields electrostatic forces that are as accurate as PME with standard parameters. Further, for typical mixed-precision simulation settings, FMM does not lead to an increased energy drift with multipole orders of 8 or larger. Whereas an ≈50 000 atom simulation system with our FMM reaches only about a third of the performance with PME, for systems with large dimensions and inhomogeneous particle distribution, e.g., aerosol systems with water droplets floating in a vacuum, FMM substantially outperforms PME already on a single node.
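
    To make the contrast with PME concrete, the toy example below approximates the Coulomb potential of a compact charge cluster at a distant point using only its monopole and dipole moments, which is the core idea that FMM exploits hierarchically. The charges and positions are random placeholders, and the expansion is truncated far below the multipole order 8 discussed above.

```python
# Far-field multipole approximation versus a direct Coulomb sum (constant omitted).
# Toy demonstration of the FMM idea; not the GPU FMM implementation described above.
import numpy as np

rng = np.random.default_rng(1)
charges = rng.uniform(-1.0, 1.0, 50)
positions = rng.normal(0.0, 0.5, size=(50, 3))      # a compact source cluster near the origin
target = np.array([10.0, 0.0, 0.0])                  # a distant observation point

# direct sum: phi = sum_i q_i / |r - r_i|
direct = np.sum(charges / np.linalg.norm(target - positions, axis=1))

# multipole expansion about the origin, truncated after the dipole term
monopole = charges.sum()
dipole = (charges[:, None] * positions).sum(axis=0)
dist = np.linalg.norm(target)
approx = monopole / dist + np.dot(dipole, target) / dist**3

print(f"direct: {direct:.6f}  multipole(order 1): {approx:.6f}  "
      f"rel. error: {abs(approx - direct) / abs(direct):.2e}")
```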

  • More bang for your buck: Improved use of GPU nodes for GROMACS 2018
    Journal of computational chemistry, 2019
    Co-Authors: Carsten Kutzner, Bert L De Groot, Szilárd Páll, Martin Fechner, Ansgar Esztermann, Helmut Grubmüller
    Abstract:

    We identify hardware that is optimal to produce molecular dynamics (MD) trajectories on Linux compute clusters with the GROMACS 2018 simulation package. To this end, we benchmark the GROMACS performance on a diverse set of compute nodes and relate it to the costs of the nodes, which may include their lifetime costs for energy and cooling. In agreement with our earlier investigation using GROMACS 4.6 on hardware of 2014, the performance-to-price ratio of consumer GPU nodes is considerably higher than that of CPU nodes. However, with GROMACS 2018, the optimal CPU-to-GPU processing power balance has shifted even more toward the GPU. Hence, nodes optimized for GROMACS 2018 and later versions enable a significantly higher performance-to-price ratio than nodes optimized for older GROMACS versions. Moreover, the shift toward GPU processing makes it possible to cheaply upgrade old nodes with recent GPUs, yielding essentially the same performance as comparable brand-new hardware. © 2019 Wiley Periodicals, Inc.

  • More Bang for Your Buck: Improved use of GPU Nodes for GROMACS 2018
    arXiv: Distributed Parallel and Cluster Computing, 2019
    Co-Authors: Carsten Kutzner, Bert L De Groot, Szilárd Páll, Martin Fechner, Ansgar Esztermann, Helmut Grubmüller
    Abstract:

    We identify hardware that is optimal to produce molecular dynamics trajectories on Linux compute clusters with the GROMACS 2018 simulation package. To this end, we benchmark the GROMACS performance on a diverse set of compute nodes and relate it to the costs of the nodes, which may include their lifetime costs for energy and cooling. In agreement with our earlier investigation using GROMACS 4.6 on hardware of 2014, the performance-to-price ratio of consumer GPU nodes is considerably higher than that of CPU nodes. However, with GROMACS 2018, the optimal CPU-to-GPU processing power balance has shifted even more towards the GPU. Hence, nodes optimized for GROMACS 2018 and later versions enable a significantly higher performance-to-price ratio than nodes optimized for older GROMACS versions. Moreover, the shift towards GPU processing makes it possible to cheaply upgrade old nodes with recent GPUs, yielding essentially the same performance as comparable brand-new hardware.

  • A flexible, GPU-powered fast multipole method for realistic biomolecular simulations in GROMACS.
    Biophysical Journal, 2017
    Co-Authors: Bartosz Kohnke, Berk Hess, Holger Dachsel, R. Thomas Ullmann, Carsten Kutzner, Andreas Beckmann, David Haensel, Ivo Kabadshow, Helmut Grubmüller
    Abstract:

    A Flexible, GPU-Powered Fast Multipole Method for Realistic Biomolecular Simulations in GROMACS

  • Best bang for your buck: GPU nodes for GROMACS biomolecular simulations
    Journal of Computational Chemistry, 2015
    Co-Authors: Carsten Kutzner, Bert L De Groot, Szilárd Páll, Martin Fechner, Ansgar Esztermann, Helmut Grubmüller
    Abstract:

    The molecular dynamics simulation package GROMACS runs efficiently on a wide variety of hardware from commodity workstations to high performance computing clusters. Hardware features are well-exploited with a combination of single instruction multiple data, multithreading, and message passing interface (MPI)-based single program multiple data/multiple program multiple data parallelism while graphics processing units (GPUs) can be used as accelerators to compute interactions off-loaded from the CPU. Here, we evaluate which hardware produces trajectories with GROMACS 4.6 or 5.0 in the most economical way. We have assembled and benchmarked compute nodes with various CPU/GPU combinations to identify optimal compositions in terms of raw trajectory production rate, performance-to-price ratio, energy efficiency, and several other criteria. Although hardware prices are naturally subject to trends and fluctuations, general tendencies are clearly visible. Adding any type of GPU significantly boosts a node's simulation performance. For inexpensive consumer-class GPUs, this improvement is equally reflected in the performance-to-price ratio. Although memory errors in consumer-class GPUs could pass unnoticed, as these cards do not support error-checking-and-correction (ECC) memory, unreliable GPUs can be sorted out with memory-checking tools. Apart from the obvious determinants for cost-efficiency like hardware expenses and raw performance, the energy consumption of a node is a major cost factor. Over the typical hardware lifetime of a few years until replacement, the costs for electrical power and cooling can exceed the cost of the hardware itself. Taking that into account, nodes with a well-balanced ratio of CPU and consumer-class GPU resources produce the maximum amount of GROMACS trajectory over their lifetime.
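
    The cost model behind "trajectory produced over the hardware lifetime" can be written down in a few lines: hardware price plus electricity and cooling over the node's service life, divided into the trajectory it produces. The numbers below (prices, power draw, ns/day, lifetime) are invented placeholders for illustration, not benchmark results from the paper.

```python
# Back-of-the-envelope total cost of ownership, following the abstract's point that
# energy and cooling over a node's lifetime can exceed the purchase price.
# All numeric values are illustrative placeholders.

def lifetime_cost(hardware_eur, power_watts, years, eur_per_kwh=0.25, cooling_overhead=0.5):
    """Hardware price plus electricity, with cooling counted as a fraction of the power bill."""
    hours = years * 365 * 24
    energy_eur = power_watts / 1000.0 * hours * eur_per_kwh
    return hardware_eur + energy_eur * (1.0 + cooling_overhead)

def ns_per_euro(ns_per_day, hardware_eur, power_watts, years=5):
    """Trajectory produced over the node's lifetime divided by its total cost."""
    total_ns = ns_per_day * years * 365
    return total_ns / lifetime_cost(hardware_eur, power_watts, years)

# hypothetical CPU-only node versus CPU + consumer-GPU node
print("CPU node:     %.2f ns/EUR" % ns_per_euro(ns_per_day=20, hardware_eur=3000, power_watts=350))
print("CPU+GPU node: %.2f ns/EUR" % ns_per_euro(ns_per_day=60, hardware_eur=3800, power_watts=550))
```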

Björn Sommer - One of the best experts on this subject based on the ideXlab platform.

  • APL@Voro: A Voronoi-based membrane analysis tool for GROMACS trajectories
    Journal of chemical information and modeling, 2013
    Co-Authors: Gunther Lukat, Jens Krüger, Björn Sommer
    Abstract:

    APL@Voro is a new program developed to aid in the analysis of GROMACS trajectories of lipid bilayer simulations. It can read a GROMACS trajectory file, a PDB coordinate file, and a GROMACS index file to create a two-dimensional geometric representation of a bilayer. Voronoi diagrams and Delaunay triangulations—generated for different selection models of lipids—support the analysis of the bilayer. The values calculated on the geometric structures can be visualized in a user-friendly interactive environment and then plotted and exported to different file types. APL@Voro supports complex bilayers with a mix of various lipids and proteins. For the calculation of the projected area per lipid, a modification of the well-known Voronoi approach is presented, as well as a new approach for including atoms into an existing triangulation. The application of the developed software is discussed for three example systems simulated with GROMACS. The program is written in C++, is open source, and is a...
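
    A minimal sketch of the projected area-per-lipid idea: project the lipid headgroup positions onto the membrane plane, build a 2D Voronoi diagram, and assign each lipid the area of its bounded Voronoi cell. APL@Voro additionally handles periodic images, proteins, and the modified Voronoi/triangulation approaches mentioned above; the headgroup coordinates below are random placeholders.

```python
# Area per lipid from a 2D Voronoi diagram of projected headgroup positions.
# Unbounded boundary cells are skipped; placeholder coordinates only.
import numpy as np
from scipy.spatial import Voronoi

def polygon_area(pts):
    """Shoelace formula; vertices are sorted by angle around the centroid first."""
    c = pts.mean(axis=0)
    order = np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))
    x, y = pts[order, 0], pts[order, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def area_per_lipid(xy):
    """Map lipid index -> area of its bounded Voronoi cell."""
    vor = Voronoi(xy)
    areas = {}
    for lipid, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:      # skip unbounded cells at the edge
            continue
        areas[lipid] = polygon_area(vor.vertices[region])
    return areas

rng = np.random.default_rng(2)
heads = rng.uniform(0.0, 8.0, size=(128, 2))      # fake projected headgroup positions (nm)
areas = area_per_lipid(heads)
print(f"{len(areas)} interior lipids, mean area {np.mean(list(areas.values())):.3f} nm^2")
```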

  • Coarse-grained and all-atom MD simulations with GROMACS based on CELLmicrocosmos 2.2 model membranes
    Journal of Cheminformatics, 2011
    Co-Authors: Björn Sommer, Gunther Lukat, Tim Dingersen, Christian Gamroth, André J. Heissmann, Ralf Rotzoll, Sebastian Rubert, Alexander Schäfer, Jens Krüger
    Abstract:

    The CELLmicrocosmos MembraneEditor (CmME) [1] enables researchers to generate PDB [2] based membrane structures in a convenient way. The lipid distribution is computed by algorithms working on the outer shapes of the molecules. For this reason, the computation and visualization process is very fast, while the atomistic structure of each single molecule remains unchanged. PDB membranes can be exported to GROMACS [3], a molecular dynamics (MD) simulation program. In this new approach, the workflow between CmME and GROMACS has been improved. Two major strategies for the simulation of membranes are the all-atom (AA) and the coarse-grained (CG) approach. As a logical consequence of the shape-based principle of CmME, coarse-grained support has been implemented. The ongoing work is presented by comparing AA and CG structures generated with CmME, simulated with GROMACS, and reverse-parsed back into CmME. Newly implemented features of the CmME MD Edition:

    • The Membrane Shifter Tool, enabling membrane model repositioning
    • Shape generation taking periodic boundaries into account
    • A Molecule Editor, supporting the definition of CG particles
    • Advanced raft support, allowing the analysis of lipid values and raft-restricted computation of lipid distributions

    In addition, a GUI based on the CmME algorithm interface is being implemented, allowing convenient handling of CmME- and GROMACS-based workflows.
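
    As a generic illustration of the all-atom to coarse-grained step discussed above, the sketch below places each CG bead at the center of geometry of a group of atoms. The three-atoms-per-bead grouping is a made-up example, not the actual CmME (or MARTINI) mapping.

```python
# Generic AA -> CG mapping: one bead per atom group, placed at the group's center
# of geometry. The grouping below is hypothetical, for illustration only.
import numpy as np

def map_aa_to_cg(aa_coords, bead_groups):
    """aa_coords: (N, 3) array; bead_groups: list of atom-index lists, one per bead."""
    return np.array([aa_coords[idx].mean(axis=0) for idx in bead_groups])

aa = np.arange(27, dtype=float).reshape(9, 3)          # 9 dummy atom positions
groups = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]             # hypothetical bead definitions
print(map_aa_to_cg(aa, groups))                        # 3 coarse-grained bead positions
```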

Carsten Kutzner - One of the best experts on this subject based on the ideXlab platform.

  • A GPU-Accelerated Fast Multipole Method for GROMACS: Performance and Accuracy.
    Journal of chemical theory and computation, 2020
    Co-Authors: Bartosz Kohnke, Carsten Kutzner, Helmut Grubmüller
    Abstract:

    An important and computationally demanding part of molecular dynamics simulations is the calculation of long-range electrostatic interactions. Today, the prevalent method to compute these interactions is particle mesh Ewald (PME). The PME implementation in the GROMACS molecular dynamics package is extremely fast on individual GPU nodes. However, for large scale multinode parallel simulations, PME becomes the main scaling bottleneck as it requires all-to-all communication between the nodes; as a consequence, the number of exchanged messages scales quadratically with the number of involved nodes in that communication step. To enable efficient and scalable biomolecular simulations on future exascale supercomputers, clearly a method with a better scaling property is required. The fast multipole method (FMM) is such a method. As a first step on the path to exascale, we have implemented a performance-optimized, highly efficient GPU FMM and integrated it into GROMACS as an alternative to PME. For a fair performance comparison between FMM and PME, we first assessed the accuracies of the methods for various sets of input parameters. With parameters yielding similar accuracies for both methods, we determined the performance of GROMACS with FMM and compared it to PME for exemplary benchmark systems. We found that FMM with a multipole order of 8 yields electrostatic forces that are as accurate as PME with standard parameters. Further, for typical mixed-precision simulation settings, FMM does not lead to an increased energy drift with multipole orders of 8 or larger. Whereas an ≈50 000 atom simulation system with our FMM reaches only about a third of the performance with PME, for systems with large dimensions and inhomogeneous particle distribution, e.g., aerosol systems with water droplets floating in a vacuum, FMM substantially outperforms PME already on a single node.

  • More bang for your buck: Improved use of GPU nodes for GROMACS 2018
    Journal of computational chemistry, 2019
    Co-Authors: Carsten Kutzner, Bert L De Groot, Szilárd Páll, Martin Fechner, Ansgar Esztermann, Helmut Grubmüller
    Abstract:

    We identify hardware that is optimal to produce molecular dynamics (MD) trajectories on Linux compute clusters with the GROMACS 2018 simulation package. To this end, we benchmark the GROMACS performance on a diverse set of compute nodes and relate it to the costs of the nodes, which may include their lifetime costs for energy and cooling. In agreement with our earlier investigation using GROMACS 4.6 on hardware of 2014, the performance-to-price ratio of consumer GPU nodes is considerably higher than that of CPU nodes. However, with GROMACS 2018, the optimal CPU-to-GPU processing power balance has shifted even more toward the GPU. Hence, nodes optimized for GROMACS 2018 and later versions enable a significantly higher performance-to-price ratio than nodes optimized for older GROMACS versions. Moreover, the shift toward GPU processing makes it possible to cheaply upgrade old nodes with recent GPUs, yielding essentially the same performance as comparable brand-new hardware. © 2019 Wiley Periodicals, Inc.

  • More Bang for Your Buck: Improved use of GPU Nodes for GROMACS 2018
    arXiv: Distributed Parallel and Cluster Computing, 2019
    Co-Authors: Carsten Kutzner, Bert L De Groot, Szilárd Páll, Martin Fechner, Ansgar Esztermann, Helmut Grubmüller
    Abstract:

    We identify hardware that is optimal to produce molecular dynamics trajectories on Linux compute clusters with the GROMACS 2018 simulation package. To this end, we benchmark the GROMACS performance on a diverse set of compute nodes and relate it to the costs of the nodes, which may include their lifetime costs for energy and cooling. In agreement with our earlier investigation using GROMACS 4.6 on hardware of 2014, the performance-to-price ratio of consumer GPU nodes is considerably higher than that of CPU nodes. However, with GROMACS 2018, the optimal CPU-to-GPU processing power balance has shifted even more towards the GPU. Hence, nodes optimized for GROMACS 2018 and later versions enable a significantly higher performance-to-price ratio than nodes optimized for older GROMACS versions. Moreover, the shift towards GPU processing makes it possible to cheaply upgrade old nodes with recent GPUs, yielding essentially the same performance as comparable brand-new hardware.

  • GROmaρs: A GROMACS-based toolset to analyze density maps derived from molecular dynamics simulations
    Biophysical Journal, 2019
    Co-Authors: Rodolfo Briones, Carsten Kutzner, Christian Blau, Bert L De Groot, Camilo Aponte-Santamaría
    Abstract:

    We introduce a computational toolset, named GROmaρs, to obtain and compare time-averaged density maps from molecular dynamics simulations. GROmaρs efficiently computes density maps by fast multi-Ga ...
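
    The sketch below shows the basic operation behind a time-averaged density map: spread every atom onto a 3D grid with a Gaussian and average over trajectory frames. GROmaρs performs this efficiently inside GROMACS; here the frames are random placeholder coordinates and the Gaussian width is an arbitrary illustration value.

```python
# Time-averaged density map by Gaussian spreading of atom positions onto a 3D grid.
# Placeholder trajectory data and parameters; not the GROmaps implementation.
import numpy as np

def density_map(frames, box, n_bins=24, sigma=0.15):
    """frames: (n_frames, n_atoms, 3) coordinates in a cubic box of edge `box` (nm)."""
    edges = np.linspace(0.0, box, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    gx, gy, gz = np.meshgrid(centers, centers, centers, indexing="ij")
    grid_pts = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    rho = np.zeros(len(grid_pts))
    for frame in frames:
        for atom in frame:
            d2 = np.sum((grid_pts - atom) ** 2, axis=1)
            rho += np.exp(-d2 / (2.0 * sigma**2))    # Gaussian spread of this atom
    rho /= len(frames)                               # time average over frames
    return rho.reshape(n_bins, n_bins, n_bins)

rng = np.random.default_rng(3)
fake_frames = rng.uniform(0.0, 3.0, size=(5, 100, 3))    # 5 frames, 100 atoms, 3 nm box
rho = density_map(fake_frames, box=3.0)
print("grid shape:", rho.shape, " peak density (arb. units): %.2f" % rho.max())
```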

  • A flexible, GPU-powered fast multipole method for realistic biomolecular simulations in GROMACS.
    Biophysical Journal, 2017
    Co-Authors: Bartosz Kohnke, Berk Hess, Holger Dachsel, R. Thomas Ullmann, Carsten Kutzner, Andreas Beckmann, David Haensel, Ivo Kabadshow, Helmut Grubmüller
    Abstract:

    A Flexible, GPU-Powered Fast Multipole Method for Realistic Biomolecular Simulations in GROMACS

Gunther Lukat - One of the best experts on this subject based on the ideXlab platform.

  • APL@Voro: A Voronoi-based membrane analysis tool for GROMACS trajectories
    Journal of chemical information and modeling, 2013
    Co-Authors: Gunther Lukat, Jens Krüger, Björn Sommer
    Abstract:

    APL@Voro is a new program developed to aid in the analysis of GROMACS trajectories of lipid bilayer simulations. It can read a GROMACS trajectory file, a PDB coordinate file, and a GROMACS index file to create a two-dimensional geometric representation of a bilayer. Voronoi diagrams and Delaunay triangulations—generated for different selection models of lipids—support the analysis of the bilayer. The values calculated on the geometric structures can be visualized in a user-friendly interactive environment and then plotted and exported to different file types. APL@Voro supports complex bilayers with a mix of various lipids and proteins. For the calculation of the projected area per lipid, a modification of the well-known Voronoi approach is presented, as well as a new approach for including atoms into an existing triangulation. The application of the developed software is discussed for three example systems simulated with GROMACS. The program is written in C++, is open source, and is a...

  • Coarse-grained and all-atom MD simulations with GROMACS based on CELLmicrocosmos 2.2 model membranes
    Journal of Cheminformatics, 2011
    Co-Authors: Björn Sommer, Gunther Lukat, Tim Dingersen, Christian Gamroth, André J. Heissmann, Ralf Rotzoll, Sebastian Rubert, Alexander Schäfer, Jens Krüger
    Abstract:

    The CELLmicrocosmos MembraneEditor (CmME) [1] enables researchers to generate PDB [2] based membrane structures in a convenient way. The lipid distribution is computed by algorithms working on the outer shapes of the molecules. For this reason, the computation and visualization process is very fast, while the atomistic structure of each single molecule remains unchanged. PDB membranes can be exported to GROMACS [3], a molecular dynamics (MD) simulation program. In this new approach, the workflow between CmME and GROMACS has been improved. Two major strategies for the simulation of membranes are the all-atom (AA) and the coarse-grained (CG) approach. As a logical consequence of the shape-based principle of CmME, coarse-grained support has been implemented. The ongoing work is presented by comparing AA and CG structures generated with CmME, simulated with GROMACS, and reverse-parsed back into CmME. Newly implemented features of the CmME MD Edition:

    • The Membrane Shifter Tool, enabling membrane model repositioning
    • Shape generation taking periodic boundaries into account
    • A Molecule Editor, supporting the definition of CG particles
    • Advanced raft support, allowing the analysis of lipid values and raft-restricted computation of lipid distributions

    In addition, a GUI based on the CmME algorithm interface is being implemented, allowing convenient handling of CmME- and GROMACS-based workflows.