Simulation Code

The Experts below are selected from a list of 193,848 Experts worldwide, ranked by the ideXlab platform.

Susanne Kunkel - One of the best experts on this subject based on the ideXlab platform.

  • Corrigendum: Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.
    Frontiers in Neuroinformatics, 2018
    Co-Authors: Jakob Jordan, Tammo Ippen, Moritz Helias, Itaru Kitayama, Mitsuhisa Sato, Jun Igarashi, Markus Diesmann, Susanne Kunkel
    Abstract:

    Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers (vol 12, 2, 2018)

  • Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers
    Frontiers in Neuroinformatics, 2018
    Co-Authors: Jakob Jordan, Tammo Ippen, Moritz Helias, Itaru Kitayama, Mitsuhisa Sato, Jun Igarashi, Markus Diesmann, Susanne Kunkel
    Abstract:

    State-of-the-art software tools for neuronal network Simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, Simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST Simulation Code and we investigate its performance in different scaling scenarios of typical network Simulations. Our results show that the new data structures and communication scheme prepare the Simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
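
    To make the communication scheme concrete, the sketch below shows the presynaptic side of such a directed spike exchange in C++: each rank records, per local source neuron, only the ranks that host at least one of its targets, and a spike is appended solely to the send buffers of those ranks before the buffers are exchanged (e.g. with MPI_Alltoallv). All names (DirectedSpikeRouter, SpikeEvent, add_target_rank, route_spike) are hypothetical and chosen for illustration; this is not the actual NEST implementation.

        // Illustrative sketch of directed spike communication for sparse networks.
        // All names are hypothetical; the real NEST data structures differ.
        #include <algorithm>
        #include <cstddef>
        #include <unordered_map>
        #include <vector>

        struct SpikeEvent {
            unsigned source_gid;   // global id of the spiking neuron
            double   time_ms;      // spike time
        };

        class DirectedSpikeRouter {
        public:
            explicit DirectedSpikeRouter(std::size_t num_ranks)
                : send_buffers_(num_ranks) {}

            // Registered during network construction: 'rank' hosts at least
            // one target of the local neuron 'source_gid'.
            void add_target_rank(unsigned source_gid, std::size_t rank) {
                auto& ranks = target_ranks_[source_gid];
                if (std::find(ranks.begin(), ranks.end(), rank) == ranks.end())
                    ranks.push_back(rank);
            }

            // During simulation: route a spike only to ranks that actually
            // host targets, instead of broadcasting it to every rank.
            void route_spike(const SpikeEvent& ev) {
                auto it = target_ranks_.find(ev.source_gid);
                if (it == target_ranks_.end())
                    return;                      // no targets anywhere
                for (std::size_t rank : it->second)
                    send_buffers_[rank].push_back(ev);
            }

            // Per-rank send buffers, ready to be exchanged between ranks.
            const std::vector<std::vector<SpikeEvent>>& send_buffers() const {
                return send_buffers_;
            }

        private:
            std::unordered_map<unsigned, std::vector<std::size_t>> target_ranks_;
            std::vector<std::vector<SpikeEvent>> send_buffers_;
        };

    Filling buffers only for ranks that host targets keeps the communication volume proportional to the actual connectivity rather than to the total number of ranks, which is what makes the scheme pay off in the extremely sparse brain-scale regime.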

  • Spiking Network Simulation Code for Petascale Computers
    Frontiers in Neuroinformatics, 2014
    Co-Authors: Susanne Kunkel, Jun Igarashi, Maximilian Schmidt, Jochen Martin Eppler, Hans E Plesser, Gen Masumoto, Shin Ishii, Tomoki Fukai, Abigail Morrison
    Abstract:

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel Simulation Codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network Simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale Simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.
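
    As a purely schematic illustration of this double collapse, the C++ sketch below stores all local synapses of a given source by value in a flat, homogeneous container whenever they share a single static type, and falls back to a heterogeneous container of such parts only in the rare mixed case. The class names (ConnectorBase, HomConnector, HetConnector, StaticSynapse) are our own shorthand, not the actual NEST classes.

        // Schematic connection container exploiting the fact that, per compute
        // node and source neuron, synapses are few and usually of a single type.
        // Names are illustrative, not the actual NEST code.
        #include <memory>
        #include <vector>

        struct SpikeEvent { double time_ms; };

        // Common interface for all per-source connection containers.
        class ConnectorBase {
        public:
            virtual ~ConnectorBase() = default;
            virtual void send(const SpikeEvent& ev) = 0;
        };

        // Homogeneous case: all local synapses of this source share one static
        // type, so they are stored by value in a flat vector without any
        // per-synapse type tag or pointer indirection.
        template <typename SynapseT>
        class HomConnector : public ConnectorBase {
        public:
            void add(const SynapseT& syn) { synapses_.push_back(syn); }
            void send(const SpikeEvent& ev) override {
                for (auto& syn : synapses_)
                    syn.deliver(ev);
            }
        private:
            std::vector<SynapseT> synapses_;
        };

        // Heterogeneous fallback: a small collection of homogeneous connectors,
        // needed only when a source has local targets of different synapse types.
        class HetConnector : public ConnectorBase {
        public:
            void add(std::unique_ptr<ConnectorBase> part) {
                parts_.push_back(std::move(part));
            }
            void send(const SpikeEvent& ev) override {
                for (auto& part : parts_)
                    part->send(ev);
            }
        private:
            std::vector<std::unique_ptr<ConnectorBase>> parts_;
        };

        // Example synapse type: a fixed weight delivered on every presynaptic spike.
        struct StaticSynapse {
            double weight;
            void deliver(const SpikeEvent& /*ev*/) { /* add 'weight' to the target */ }
        };

    Because the homogeneous container dominates at brain scale, the per-synapse overhead of type information and pointers disappears exactly where memory is scarcest.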

Jun Igarashi - One of the best experts on this subject based on the ideXlab platform.

  • Corrigendum: Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.
    Frontiers in Neuroinformatics, 2018
    Co-Authors: Jakob Jordan, Tammo Ippen, Moritz Helias, Itaru Kitayama, Mitsuhisa Sato, Jun Igarashi, Markus Diesmann, Susanne Kunkel
    Abstract:

    Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers (vol 12, 2, 2018)

  • Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers
    Frontiers in Neuroinformatics, 2018
    Co-Authors: Jakob Jordan, Tammo Ippen, Moritz Helias, Itaru Kitayama, Mitsuhisa Sato, Jun Igarashi, Markus Diesmann, Susanne Kunkel
    Abstract:

    State-of-the-art software tools for neuronal network Simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, Simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST Simulation Code and we investigate its performance in different scaling scenarios of typical network Simulations. Our results show that the new data structures and communication scheme prepare the Simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  • Spiking Network Simulation Code for Petascale Computers
    Frontiers in Neuroinformatics, 2014
    Co-Authors: Susanne Kunkel, Jun Igarashi, Maximilian Schmidt, Jochen Martin Eppler, Hans E Plesser, Gen Masumoto, Shin Ishii, Tomoki Fukai, Abigail Morrison
    Abstract:

    Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel Simulation Codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network Simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale Simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.

Jakob Jordan - One of the best experts on this subject based on the ideXlab platform.

  • Corrigendum: Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.
    Frontiers in Neuroinformatics, 2018
    Co-Authors: Jakob Jordan, Tammo Ippen, Moritz Helias, Itaru Kitayama, Mitsuhisa Sato, Jun Igarashi, Markus Diesmann, Susanne Kunkel
    Abstract:

    Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers (vol 12, 2, 2018)

  • Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers
    Frontiers in Neuroinformatics, 2018
    Co-Authors: Jakob Jordan, Tammo Ippen, Moritz Helias, Itaru Kitayama, Mitsuhisa Sato, Jun Igarashi, Markus Diesmann, Susanne Kunkel
    Abstract:

    State-of-the-art software tools for neuronal network Simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, Simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST Simulation Code and we investigate its performance in different scaling scenarios of typical network Simulations. Our results show that the new data structures and communication scheme prepare the Simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

Volker Springel - One of the best experts on this subject based on the ideXlab platform.

  • An Implementation of Radiative Transfer in the Cosmological Simulation Code GADGET
    Monthly Notices of the Royal Astronomical Society, 2009
    Co-Authors: Margarita Petkova, Volker Springel
    Abstract:

    We present a novel numerical implementation of radiative transfer in the cosmological smoothed particle hydrodynamics (SPH) Simulation Code gadget. It is based on a fast, robust and photon-conserving integration scheme where the radiation transport problem is approximated in terms of moments of the transfer equation and by using a variable Eddington tensor as a closure relation, following the Optically Thin Variable Eddington Tensor suggestion of Gnedin & Abel. We derive a suitable anisotropic diffusion operator for use in the SPH discretization of the local photon transport, and we combine this with an implicit solver that guarantees robustness and photon conservation. This entails a matrix inversion problem of a huge, sparsely populated matrix that is distributed in memory in our parallel Code. We solve this task iteratively with a conjugate gradient scheme. Finally, to model photon sink processes we consider ionization and recombination processes of hydrogen, which is represented with a chemical network that is evolved with an implicit time integration scheme. We present several tests of our implementation, including single and multiple sources in static uniform density fields with and without temperature evolution, shadowing by a dense clump and multiple sources in a static cosmological density field. All tests agree quite well with analytical computations or with predictions from other radiative transfer Codes, except for shadowing. However, unlike most other radiative transfer Codes presently in use for studying re-ionization, our new method can be used on-the-fly during dynamical cosmological Simulation, allowing simultaneous treatments of galaxy formation and the re-ionization process of the Universe.
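
    In schematic form (the notation is simplified and follows the general optically thin variable Eddington tensor construction of Gnedin & Abel rather than the paper's exact equations), the evolved quantity is the photon number density and the closed moment equation has the anisotropic-diffusion structure that the SPH discretization then approximates:

        % Schematic closed moment equation with a variable Eddington tensor h^{ij};
        % kappa is the absorption coefficient, s_gamma the source term, L_s the
        % luminosity of source s. Simplified relative to the paper's equations.
        \begin{align}
          \frac{\partial n_\gamma}{\partial t}
            &= \frac{\partial}{\partial x^i}
               \left( \frac{c}{\kappa}\,
               \frac{\partial\,(h^{ij} n_\gamma)}{\partial x^j} \right)
               - c\,\kappa\, n_\gamma + s_\gamma , \\
          h^{ij} &= \frac{P^{ij}}{\operatorname{Tr} P} ,
          \qquad
          P^{ij}(\mathbf{x}) \propto \sum_{\mathrm{sources}\ s}
              \frac{L_s\,(x^i - x_s^i)(x^j - x_s^j)}{|\mathbf{x}-\mathbf{x}_s|^4} .
        \end{align}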

  • An Implementation of Radiative Transfer in the Cosmological Simulation Code GADGET
    arXiv: Astrophysics, 2008
    Co-Authors: Margarita Petkova, Volker Springel
    Abstract:

    We present a novel numerical implementation of radiative transfer in the cosmological smoothed particle hydrodynamics (SPH) Simulation Code {\small GADGET}. It is based on a fast, robust and photon-conserving integration scheme where the radiation transport problem is approximated in terms of moments of the transfer equation and by using a variable Eddington tensor as a closure relation, following the `OTVET'-suggestion of Gnedin & Abel. We derive a suitable anisotropic diffusion operator for use in the SPH discretization of the local photon transport, and we combine this with an implicit solver that guarantees robustness and photon conservation. This entails a matrix inversion problem of a huge, sparsely populated matrix that is distributed in memory in our parallel Code. We solve this task iteratively with a conjugate gradient scheme. Finally, to model photon sink processes we consider ionisation and recombination processes of hydrogen, which is represented with a chemical network that is evolved with an implicit time integration scheme. We present several tests of our implementation, including single and multiple sources in static uniform density fields with and without temperature evolution, shadowing by a dense clump, and multiple sources in a static cosmological density field. All tests agree quite well with analytical computations or with predictions from other radiative transfer Codes, except for shadowing. However, unlike most other radiative transfer Codes presently in use for studying reionisation, our new method can be used on-the-fly during dynamical cosmological Simulation, allowing simultaneous treatments of galaxy formation and the reionisation process of the Universe.
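
    As an illustration of the conjugate gradient step mentioned above, here is a minimal, self-contained serial solver in C++ for a symmetric positive-definite system stored in compressed sparse row form. The type and function names are ours, and the solver is deliberately unpreconditioned and serial; the Code described in the paper distributes the matrix and the iteration across compute nodes.

        // Minimal serial conjugate gradient for a sparse symmetric positive-
        // definite system A x = b in CSR format. Illustrative sketch only.
        #include <cmath>
        #include <cstddef>
        #include <vector>

        struct CsrMatrix {
            std::size_t n;                      // number of rows/columns
            std::vector<std::size_t> row_ptr;   // size n + 1
            std::vector<std::size_t> col_idx;   // column index per nonzero
            std::vector<double> val;            // value per nonzero
        };

        // Sparse matrix-vector product y = A x.
        static std::vector<double> spmv(const CsrMatrix& A, const std::vector<double>& x) {
            std::vector<double> y(A.n, 0.0);
            for (std::size_t i = 0; i < A.n; ++i)
                for (std::size_t k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k)
                    y[i] += A.val[k] * x[A.col_idx[k]];
            return y;
        }

        static double dot(const std::vector<double>& a, const std::vector<double>& b) {
            double s = 0.0;
            for (std::size_t i = 0; i < a.size(); ++i)
                s += a[i] * b[i];
            return s;
        }

        // Unpreconditioned conjugate gradient starting from x = 0.
        std::vector<double> conjugate_gradient(const CsrMatrix& A,
                                               const std::vector<double>& b,
                                               double tol = 1e-10,
                                               std::size_t max_iter = 1000) {
            std::vector<double> x(A.n, 0.0), r = b, p = b;
            double rs_old = dot(r, r);
            for (std::size_t it = 0; it < max_iter && std::sqrt(rs_old) > tol; ++it) {
                std::vector<double> Ap = spmv(A, p);
                double alpha = rs_old / dot(p, Ap);
                for (std::size_t i = 0; i < A.n; ++i) {
                    x[i] += alpha * p[i];
                    r[i] -= alpha * Ap[i];
                }
                double rs_new = dot(r, r);
                for (std::size_t i = 0; i < A.n; ++i)
                    p[i] = r[i] + (rs_new / rs_old) * p[i];
                rs_old = rs_new;
            }
            return x;
        }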

  • The Cosmological Simulation Code GADGET-2
    Monthly Notices of the Royal Astronomical Society, 2005
    Co-Authors: Volker Springel
    Abstract:

    We discuss the cosmological Simulation Code GADGET-2, a new massively parallel TreeSPH Code, capable of following a collisionless fluid with the N-body method, and an ideal gas by means of smoothed particle hydrodynamics (SPH). Our implementation of SPH manifestly conserves energy and entropy in regions free of dissipation, while allowing for fully adaptive smoothing lengths. Gravitational forces are computed with a hierarchical multipole expansion, which can optionally be applied in the form of a TreePM algorithm, where only short-range forces are computed with the ‘tree’ method while long-range forces are determined with Fourier techniques. Time integration is based on a quasi-symplectic scheme where long-range and short-range forces can be integrated with different time-steps. Individual and adaptive short-range time-steps may also be employed. The domain decomposition used in the parallelization algorithm is based on a space-filling curve, resulting in high flexibility and tree force errors that do not depend on the way the domains are cut. The Code is efficient in terms of memory consumption and required communication bandwidth. It has been used to compute the first cosmological N-body Simulation with more than 10^10 dark matter particles, reaching a homogeneous spatial dynamic range of 10^5 per dimension in a three-dimensional box. It has also been used to carry out very large cosmological SPH Simulations that account for radiative cooling and star formation, reaching total particle numbers of more than 250 million. We present the algorithms used by the Code and discuss their accuracy and performance using a number of test problems. GADGET-2 is publicly released to the research community. Keywords: methods: numerical; galaxies: interactions; dark matter.
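
    Schematically, the TreePM split referred to above divides the potential of a particle of mass m into a long-range part that is Gaussian-filtered on the split scale r_s and computed on the mesh with Fourier techniques, and a short-range remainder that the tree sums with an erfc truncation; the last line gives the corresponding short-range force kernel. This is the standard Ewald-type split, and constants and conventions may differ slightly from the Code's internal ones.

        % Schematic TreePM split of the gravitational potential and the
        % resulting short-range force kernel; r_s is the split scale.
        \begin{align}
          \phi^{\mathrm{long}}_{\mathbf{k}} &= \phi_{\mathbf{k}}\,
              \exp\!\left(-\mathbf{k}^2 r_s^2\right) , \\
          \phi^{\mathrm{short}}(r) &= -\,\frac{G m}{r}\,
              \operatorname{erfc}\!\left(\frac{r}{2 r_s}\right) , \\
          f^{\mathrm{short}}(r) &= \frac{G m}{r^2}
              \left[ \operatorname{erfc}\!\left(\frac{r}{2 r_s}\right)
              + \frac{r}{r_s \sqrt{\pi}}
                \exp\!\left(-\frac{r^2}{4 r_s^2}\right) \right] .
        \end{align}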

Markus Diesmann - One of the best experts on this subject based on the ideXlab platform.

  • Corrigendum: Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.
    Frontiers in Neuroinformatics, 2018
    Co-Authors: Jakob Jordan, Tammo Ippen, Moritz Helias, Itaru Kitayama, Mitsuhisa Sato, Jun Igarashi, Markus Diesmann, Susanne Kunkel
    Abstract:

    Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers (vol 12, 2, 2018)

  • Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers
    Frontiers in Neuroinformatics, 2018
    Co-Authors: Jakob Jordan, Tammo Ippen, Moritz Helias, Itaru Kitayama, Mitsuhisa Sato, Jun Igarashi, Markus Diesmann, Susanne Kunkel
    Abstract:

    State-of-the-art software tools for neuronal network Simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, Simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST Simulation Code and we investigate its performance in different scaling scenarios of typical network Simulations. Our results show that the new data structures and communication scheme prepare the Simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.