Numerical Code

The experts below are selected from a list of 24,711 experts worldwide, ranked by the ideXlab platform

A Levy - One of the best experts on this subject based on the ideXlab platform.

  • Modeling of Heat Transfer in Pneumatic Conveyer Using a Combined DEM-CFD Numerical Code
    Drying Technology, 2010
    Co-Authors: Tamir Brosh, A Levy
    Abstract:

    A combined discrete element method (DEM) and computational fluid dynamics (CFD) numerical code was developed for modeling and simulating the flow of particles through a conveying pipe. The DEM was used to simulate the motion of the particles in the gas flow; the compressible Reynolds-averaged Navier-Stokes (RANS) equations were used to describe the gas flow. During the initial heating/cooling and for small particle sizes, the conditions for assuming a uniform particle temperature (in particular Bi = h·d_p/k_s < 0.1) are not satisfied, so the temperature distribution within each particle must be taken into account. The equation of energy conservation for a spherical particle was applied in order to predict the temperature profile of each particle. The predictions of the numerical simulations for a single-particle flow in a pipe were compared successfully with published experimental data. Comparisons between the uniform particle temperature model, the particle temperature distribution model, and experimental data showed tha...
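
The abstract hinges on the Biot-number test for when a particle can be treated as isothermal. Below is a small, self-contained sketch of that check and of a finite-difference solution of the radial heat equation in a single sphere when the check fails; all property values are illustrative assumptions, and this is not the paper's coupled DEM-CFD code.

```python
import numpy as np

# Sketch of the uniform-temperature (lumped) check and, when it fails, an
# explicit finite-difference solution of the radial heat equation in one
# spherical particle.  All property values below are illustrative assumptions,
# not the parameters used in the paper.
h   = 120.0    # gas-side heat transfer coefficient [W/(m^2 K)]
d_p = 2.0e-3   # particle diameter [m]
k_s = 0.5      # particle thermal conductivity [W/(m K)]
rho = 1500.0   # particle density [kg/m^3]
c_p = 1200.0   # particle specific heat [J/(kg K)]
T_g, T_0 = 400.0, 300.0   # gas temperature and initial particle temperature [K]

Bi = h * d_p / k_s
print(f"Bi = {Bi:.2f} ->", "uniform particle temperature OK" if Bi < 0.1
      else "particle temperature distribution must be resolved")

# Explicit finite differences for rho*c_p*dT/dt = k_s*(1/r^2)*d/dr(r^2*dT/dr),
# with symmetry at r = 0 and the convective condition -k_s*dT/dr = h*(T - T_g)
# at the particle surface r = R.
R, N  = d_p / 2.0, 50
r     = np.linspace(0.0, R, N)
dr    = r[1] - r[0]
alpha = k_s / (rho * c_p)          # thermal diffusivity [m^2/s]
dt    = 0.4 * dr**2 / alpha        # stable explicit time step
T     = np.full(N, T_0)

for _ in range(2000):
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt * (
        (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dr**2
        + (2.0 / r[1:-1]) * (T[2:] - T[:-2]) / (2.0 * dr))
    Tn[0]  = Tn[1]                                                 # symmetry at the center
    Tn[-1] = (Tn[-2] + dr * h / k_s * T_g) / (1.0 + dr * h / k_s)  # convective surface
    T = Tn

print(f"after {2000 * dt:.2f} s: center {T[0]:.1f} K, surface {T[-1]:.1f} K")
```

With the assumed values Bi ≈ 0.48, so the uniform-temperature model is rejected and the particle center lags the surface during the transient; that lag is the distinction the paper's comparisons against experiment quantify.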


T Naab - One of the best experts on this subject based on the ideXlab platform.

  • VINE: A Numerical Code for Simulating Astrophysical Systems Using Particles. II. Implementation and Performance Characteristics
    Astrophysical Journal Supplement Series, 2009
    Co-Authors: Andrew F Nelson, M Wetzstein, T Naab
    Abstract:

    We continue our presentation of VINE. In this paper, we begin with a description of relevant architectural properties of the serial and shared-memory parallel computers on which VINE is intended to run, and describe their influences on the design of the code itself. We continue with a detailed description of a number of optimizations made to the layout of the particle data in memory and to our implementation of a binary tree used to access that data for use in gravitational force calculations and searches for SPH neighbor particles. We describe the modifications to the code necessary to obtain forces efficiently from special-purpose ‘GRAPE’ hardware, the interfaces required to allow transparent substitution of those forces in the code for those obtained from the tree, and the modifications necessary to use both tree and GRAPE together as a fused GRAPE/tree combination. We conclude with an extensive series of performance tests, which demonstrate that the code can be run efficiently and without modification in serial on small workstations or in parallel using OpenMP compiler directives on large-scale, shared-memory parallel machines. We analyze the effects of the code optimizations and estimate that they improve its overall performance by more than an order of magnitude over that obtained by many other tree codes. Scaled parallel performance of the gravity and SPH calculations, together the most costly components of most simulations, is nearly linear up to at least 120 processors on moderate-sized test problems using the Origin 3000 architecture, and to the maximum machine sizes available to us on several other architectures. At similar accuracy, performance of VINE used in GRAPE-tree mode is approximately a factor of two slower than that of VINE used in host-only mode. Further optimizations of the GRAPE/host communications could improve the speed by as much as a factor of three, but have not yet been implemented in VINE. Finally, we find that although parallel performance on small problems may reach a plateau beyond which more processors bring no additional speedup, performance never decreases, a factor important for running large simulations on many processors with individual time steps, where only a small fraction of the total particles require updates at any given moment. Subject headings: methods: numerical — methods: N-body simulations

  • VINE: A Numerical Code for Simulating Astrophysical Systems Using Particles. I. Description of the Physics and the Numerical Methods
    Astrophysical Journal Supplement Series, 2009
    Co-Authors: M Wetzstein, Andrew F Nelson, T Naab, Andreas Burkert
    Abstract:

    We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general-purpose modules describing a variety of physical processes commonly required in the astrophysical community, and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the Smoothed Particle Hydrodynamics (SPH) method, by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary ‘Press’ tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special-purpose ‘GRAPE’ hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared-memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel (2005), the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ∼4.6–4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with 8 processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the GNU General Public License. Subject headings: methods: numerical — methods: N-body simulations — galaxies: interactions
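
The two VINE abstracts above center on a particle tree that serves both the gravity calculation and the SPH neighbor search, and on a leapfrog (or Runge-Kutta-Fehlberg) integrator advancing the particles. A compact, illustrative sketch of those pieces follows; it is a toy, not VINE's Fortran 95 implementation, and the tree here is a simple recursive bisection rather than VINE's binary 'Press' tree.

```python
import numpy as np

# Toy illustration of two ingredients described in the VINE abstracts above:
# a binary particle tree used both for approximate gravity (node monopoles
# with an opening criterion) and for pruning SPH neighbor searches, driven by
# a kick-drift-kick leapfrog with a single global time step.  This is NOT
# VINE's 'Press' tree; G = 1 and the opening angle, softening and leaf size
# are illustrative assumptions.

class Node:
    def __init__(self, idx, pos, mass, leaf_size=8):
        self.idx  = idx
        self.mass = mass[idx].sum()
        self.com  = np.average(pos[idx], axis=0, weights=mass[idx])   # center of mass
        extent    = pos[idx].max(axis=0) - pos[idx].min(axis=0)
        self.size = extent.max()                                      # largest node extent
        self.left = self.right = None
        if len(idx) > leaf_size:                     # bisect along the longest axis
            order = idx[np.argsort(pos[idx, np.argmax(extent)])]
            half  = len(order) // 2
            self.left  = Node(order[:half], pos, mass, leaf_size)
            self.right = Node(order[half:], pos, mass, leaf_size)

def tree_accel(node, x, pos, mass, theta=0.6, eps=1e-2):
    """Gravitational acceleration at x from the particles in `node` (G = 1)."""
    if node.left is None:                            # leaf: direct summation
        d  = pos[node.idx] - x
        r2 = (d * d).sum(axis=1) + eps**2
        return (mass[node.idx, None] * d / r2[:, None]**1.5).sum(axis=0)
    d = node.com - x
    r = np.linalg.norm(d)
    if node.size < theta * r:                        # node far enough away: use monopole
        return node.mass * d / (r**2 + eps**2)**1.5
    return (tree_accel(node.left,  x, pos, mass, theta, eps) +
            tree_accel(node.right, x, pos, mass, theta, eps))

def neighbors(node, x, h, pos):
    """Indices of particles within a smoothing length h of x."""
    if np.linalg.norm(node.com - x) > h + np.sqrt(3.0) * node.size:   # prune this node
        return []
    if node.left is None:
        d = np.linalg.norm(pos[node.idx] - x, axis=1)
        return list(node.idx[d < h])
    return neighbors(node.left, x, h, pos) + neighbors(node.right, x, h, pos)

def accel_all(pos, mass):
    root = Node(np.arange(len(mass)), pos, mass)     # rebuild the tree each step
    return np.array([tree_accel(root, p, pos, mass) for p in pos])

def leapfrog(pos, vel, mass, dt, nsteps):
    """Kick-drift-kick leapfrog with one global time step for all particles."""
    acc = accel_all(pos, mass)
    for _ in range(nsteps):
        vel += 0.5 * dt * acc                        # half kick
        pos += dt * vel                              # drift
        acc  = accel_all(pos, mass)                  # forces at the new positions
        vel += 0.5 * dt * acc                        # half kick
    return pos, vel

rng  = np.random.default_rng(0)
n    = 500
pos  = rng.normal(size=(n, 3))
vel  = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)
pos, vel = leapfrog(pos, vel, mass, dt=1e-2, nsteps=5)
root = Node(np.arange(n), pos, mass)
print("neighbors of particle 0 within h = 0.3:", len(neighbors(root, pos[0], 0.3, pos)))
```

In the real code the force loop is what the optional GRAPE interface replaces, the integrator also supports individual per-particle time steps, and the per-particle loops are parallelized with OpenMP directives; none of that is attempted in this sketch.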

M Wetzstein - One of the best experts on this subject based on the ideXlab platform.

  • VINE: A Numerical Code for Simulating Astrophysical Systems Using Particles. II. Implementation and Performance Characteristics
    Astrophysical Journal Supplement Series, 2009
    Co-Authors: Andrew F Nelson, M Wetzstein, T Naab
    Abstract: see the entry under T Naab above.

  • VINE: A Numerical Code for Simulating Astrophysical Systems Using Particles. I. Description of the Physics and the Numerical Methods
    Astrophysical Journal Supplement Series, 2009
    Co-Authors: M Wetzstein, Andrew F Nelson, T Naab, Andreas Burkert
    Abstract: see the entry under T Naab above.

Andrew F Nelson - One of the best experts on this subject based on the ideXlab platform.

  • VINE: A Numerical Code for Simulating Astrophysical Systems Using Particles. II. Implementation and Performance Characteristics
    Astrophysical Journal Supplement Series, 2009
    Co-Authors: Andrew F Nelson, M Wetzstein, T Naab
    Abstract: see the entry under T Naab above.

  • VINE: A Numerical Code for Simulating Astrophysical Systems Using Particles. I. Description of the Physics and the Numerical Methods
    Astrophysical Journal Supplement Series, 2009
    Co-Authors: M Wetzstein, Andrew F Nelson, T Naab, Andreas Burkert
    Abstract: see the entry under T Naab above.

Zhendong Su - One of the best experts on this subject based on the ideXlab platform.

  • Mathematical Execution: A Unified Approach for Testing Numerical Code.
    arXiv: Programming Languages, 2016
    Co-Authors: Zhoulai Fu, Zhendong Su
    Abstract:

    This paper presents Mathematical Execution (ME), a new, unified approach for testing numerical code. The key idea is to (1) capture the desired testing objective via a representing function and (2) transform the automated testing problem into the problem of minimizing the representing function, which is then solved via mathematical optimization. The main feature of ME is that it directs input space exploration by only executing the representing function, thus avoiding static or symbolic reasoning about the program semantics, which is particularly challenging for numerical code. To illustrate this feature, we develop an ME-based algorithm for coverage-based testing of numerical code. We also show the potential of applying and adapting ME to other related problems, including path reachability testing, boundary value analysis, and satisfiability checking. To demonstrate ME's practical benefits, we have implemented CoverMe, a proof-of-concept realization for branch-coverage-based testing, and evaluated it on Sun's C math library (used in, for example, Android, Matlab, Java and JavaScript). We have compared CoverMe with random testing and Austin, a publicly available branch-coverage-based testing tool that supports numerical code (Austin combines symbolic execution and search-based heuristics). Our experimental results show that CoverMe achieves near-optimal and substantially higher coverage ratios than random testing on all tested programs, across all evaluated coverage metrics. Compared with Austin, CoverMe improves branch coverage from 43% to 91%, in significantly less time (6.9 vs. 6058.4 seconds on average).

  • Automated Backward Error Analysis for Numerical Code
    Conference on Object-Oriented Programming Systems Languages and Applications, 2015
    Co-Authors: Zhoulai Fu, Zhendong Su
    Abstract:

    Numerical code uses floating-point arithmetic and necessarily suffers from roundoff and truncation errors. Error analysis is the process of quantifying such uncertainty in the solution to a problem. Forward error analysis and backward error analysis are two popular paradigms of error analysis. Forward error analysis is more intuitive and has been explored and automated by the programming languages (PL) community. In contrast, although backward error analysis is preferred by numerical analysts and is the foundation of numerical stability, it is less known and largely unexplored by the PL community. To fill this gap, this paper presents an automated backward error analysis for numerical code to empower both numerical analysts and application developers. In addition, we use the computed backward error results to compute the condition number, an important quantity recognized by numerical analysts for measuring how sensitive a function is to changes or errors in the input. Experimental results on Intel X87 FPU functions and widely used GNU C Library functions demonstrate that our analysis is effective at analyzing the accuracy of floating-point programs.
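
To make the two abstracts above concrete, here is a toy illustration of the Mathematical Execution idea: a branch that random testing essentially never reaches is encoded as a non-negative representing function that is zero exactly on the covering inputs, and a generic numerical minimizer drives execution toward it. The program under test, the representing function, and the optimizer settings are all invented for this sketch; CoverMe itself instruments compiled C code rather than Python.

```python
import numpy as np
from scipy import optimize

def program_under_test(x):
    # A branch guarded by a tight numerical condition: inputs drawn uniformly
    # from [-100, 100] satisfy it with probability on the order of 1e-6.
    if abs(np.sin(x) - 0.5) < 1e-6:
        return "target branch taken"
    return "fell through"

def representing(v):
    # Representing function for the branch above: non-negative everywhere,
    # exactly zero iff the branch is taken, and sloping toward covering inputs.
    x = float(np.atleast_1d(v)[0])
    return max(0.0, abs(np.sin(x) - 0.5) - 1e-6)

rng = np.random.default_rng(0)

# Baseline: plain random testing almost never covers the branch.
hits = sum(program_under_test(x) == "target branch taken"
           for x in rng.uniform(-100.0, 100.0, size=100_000))
print("random testing hits:", hits)

# ME-style testing: restart a derivative-free minimizer until the
# representing function reaches zero, i.e. until the branch is covered.
for _ in range(20):
    res = optimize.minimize(representing, rng.uniform(-100.0, 100.0),
                            method="Nelder-Mead",
                            options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 2000})
    if res.fun == 0.0:
        x_star = float(res.x[0])
        print(f"minimization found x = {x_star:.8f} ->", program_under_test(x_star))
        break
else:
    print("branch not covered within the restart budget")
```

The off-the-shelf minimizer here is only a stand-in for the optimization backend the paper describes; the point is that coverage is reached by repeatedly executing the representing function, not by reasoning symbolically about the program.

The backward-error abstract can likewise be illustrated with a scalar example. For an invertible function such as log, the backward error of a computed result y_hat is the relative size of the input perturbation whose exact image is y_hat, and the condition number |x f'(x)/f(x)| bounds how that perturbation is amplified into forward error. The float32 evaluation and the choice of f = log are assumptions made for this sketch, not the x87/libm functions analyzed in the paper.

```python
import numpy as np

x = 1.2345678
y_exact = np.log(np.float64(x))          # reference result in double precision
y_hat   = float(np.log(np.float32(x)))   # lower-precision computation of log(x)

# Backward error: log is invertible, so the perturbed input that exactly
# maps to y_hat is exp(y_hat).
x_perturbed    = np.exp(y_hat)
backward_error = abs(x_perturbed - x) / abs(x)

# Condition number of log at x: |x * (1/x) / log(x)| = 1 / |log(x)|.
cond          = 1.0 / abs(y_exact)
forward_error = abs(y_hat - y_exact) / abs(y_exact)

print(f"backward error        : {backward_error:.3e}")
print(f"condition number      : {cond:.3e}")
print(f"forward error         : {forward_error:.3e}")
print(f"cond * backward error : {cond * backward_error:.3e}  (first-order bound on forward error)")
```

The last two printed quantities agree closely, which is the first-order relation (forward error roughly equals condition number times backward error) that such an analysis relies on.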
