Simulation Kernel


The Experts below are selected from a list of 1068 Experts worldwide ranked by the ideXlab platform

Philip A Wilsey - One of the best experts on this subject based on the ideXlab platform.

  • Annual Simulation Symposium - Redesigning the WARPED Simulation Kernel for analysis and application development
    36th Annual Simulation Symposium 2003., 2003
    Co-Authors: Dale E Martin, Philip A Wilsey, R.j. Hoekstra, E.r. Keiter, S.a. Hutchinson, T.v. Russo, L.j. Waters
    Abstract:

    WARPED is a publicly available Time Warp Simulation Kernel. The Kernel defines a standard interface to the application developer and is designed to provide a highly configurable environment for the integration of Time Warp optimizations. It is written in C++, uses the MPI message passing standard, and executes on a variety of parallel and distributed processing platforms. Version 2.0 of WARPED, described here, is distributed with several applications, and the configuration can be set so that a sequential Kernel implementation is instantiated. The Kernel supports LP clustering, various GVT algorithms, and numerous optimizations to adaptively adjust Simulation parameters at runtime.
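
    The rollback machinery that a Time Warp Kernel such as this must provide can be sketched in a few lines. The class and member names below are hypothetical, not the WARPED interface: an LP checkpoints its state before each optimistically executed event so that a straggler (an event arriving in the LP's past) can restore the last consistent snapshot.

```cpp
#include <cassert>
#include <iterator>
#include <map>

// Hypothetical Time Warp logical process (LP); illustrative only.
struct LogicalProcess {
    int state = 0;                 // application state (a simple counter)
    int lvt = 0;                   // local virtual time
    std::map<int, int> snapshots;  // event timestamp -> state before that event

    void execute(int t, int delta) {       // optimistically process event at t
        snapshots[t] = state;              // checkpoint before mutating state
        lvt = t;
        state += delta;
    }

    void rollback(int straggler_t) {       // undo events at time >= straggler_t
        auto it = snapshots.lower_bound(straggler_t);
        if (it == snapshots.end()) return; // nothing to undo
        state = it->second;                // state before the first undone event
        lvt = (it == snapshots.begin()) ? 0 : std::prev(it)->first;
        snapshots.erase(it, snapshots.end());
    }
};
```

    A real Kernel would additionally cancel the output messages of undone events (anti-messages) and reclaim snapshots older than GVT; both are omitted here.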

  • PADS - Software control systems for parallel Simulation
    Proceedings 16th Workshop on Parallel and Distributed Simulation, 2002
    Co-Authors: Radharamanan Radhakrishnan, Philip A Wilsey
    Abstract:

    Parallel Simulations using optimistic synchronization strategies, such as Time Warp, operate without regard to global synchronization, since this results in greater parallelism and lower synchronization cost. However, like virtual memory, the parallel simulators may end up thrashing instead of performing useful work. The complication in using a Time Warp simulator is then to configure it suitably for good performance and to avoid thrashing. Unfortunately, the optimal configuration is not generally static among different applications or even throughout an entire run of a single application. Thus, online feedback control systems are deployed to govern the adjustment of input parameters in our Time Warp Simulation Kernel. The design and implementation of effective feedback control systems can be difficult; the extra processing is pure overhead that must be absorbed by any performance gains delivered. The problem is further complicated when attempting to build a Simulation Kernel that is designed to operate efficiently with many different applications. In this paper, we introduce a control-centric architecture that is used to monitor and manage different parts of a Time Warp simulator. Specifically, we extend concepts from control theory, such as adaptive control and stability, to better understand and design hierarchically distributed run-time control systems for Time Warp based parallel Simulation.
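
    As a concrete, deliberately simplified illustration of such a feedback loop, the sketch below applies a proportional rule to one hypothetical tuning knob (the number of events an LP may execute optimistically per cycle); the paper's hierarchical controllers are far more general.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Illustrative proportional feedback rule (not the paper's controller).
// setpoint is the tolerated rollback fraction; gain sets how hard we react.
int adapt_batch(int batch, double rollback_frac,
                double setpoint = 0.1, double gain = 0.5) {
    double error = rollback_frac - setpoint;               // > 0 means thrashing
    long next = std::lround(batch * (1.0 - gain * error)); // shrink when thrashing
    return static_cast<int>(std::clamp(next, 1L, 1024L));  // keep the knob bounded
}
```

    The control cost matters: this arithmetic is negligible, but measuring the rollback fraction accurately across LPs is exactly the overhead the abstract warns must be absorbed by the gains.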

  • Workshop on Parallel and Distributed Simulation - Time Warp Simulation on clumps
    Proceedings Thirteenth Workshop on Parallel and Distributed Simulation. PADS 99. (Cat. No.PR00155), 1999
    Co-Authors: Girindra D. Sharma, Radharamanan Radhakrishnan, Umesh Kumar V. Rajasekaran, Nael Abu-ghazaleh, Philip A Wilsey
    Abstract:

    Traditionally, parallel discrete-event simulators based on the Time Warp synchronization protocol have been implemented using either the shared memory programming model or the distributed memory, message passing programming model. This was because the preferred hardware platform was either a shared memory multiprocessor workstation or a network of uniprocessor workstations. However, with the advent of "clumps" (clusters of shared memory multiprocessors), a change in this dichotomous view becomes necessary. This paper explores the design and implementation issues involved in exploiting this new platform for Time Warp Simulations. Specifically, it presents two generic strategies for implementing Time Warp simulators on clumps. In addition, we present our experiences in implementing these strategies on an extant distributed memory, message passing Time Warp simulator (WARPED). Preliminary performance results comparing the modified clump-specific Simulation Kernel to the unmodified distributed memory, message passing Simulation Kernel are also presented.

  • An Object-Oriented Time Warp Simulation Kernel
    Lecture Notes in Computer Science, 1998
    Co-Authors: Radharamanan Radhakrishnan, Dale E Martin, Malolan Chetlur, Philip A Wilsey
    Abstract:

    The design of a Time Warp Simulation Kernel is made difficult by the inherent complexity of the paradigm. Hence it becomes critical that the design of such complex Simulation Kernels follow established design principles, such as object-oriented design, so that the implementation is simple to modify and extend. In this paper, we present a compendium of our efforts in the design and development of an object-oriented Time Warp Simulation Kernel, called warped. warped is a publicly available Time Warp Simulation Kernel for experimentation and application development. The Kernel defines a standard interface to the application developer and is designed to provide a highly configurable environment for the integration of Time Warp optimizations. It is written in C++, uses the MPI message passing standard for communication, and executes on a variety of platforms including a network of SUN workstations, a SUN SMP workstation, the IBM SP1/SP2 multiprocessors, the Cray T3E, the Intel Paragon, and IBM-compatible PCs running Linux.

Radharamanan Radhakrishnan - One of the best experts on this subject based on the ideXlab platform.

  • PADS - Software control systems for parallel Simulation
    Proceedings 16th Workshop on Parallel and Distributed Simulation, 2002
    Co-Authors: Radharamanan Radhakrishnan, Philip A Wilsey
    Abstract:

    Parallel Simulations using optimistic synchronization strategies, such as Time Warp, operate without regard to global synchronization, since this results in greater parallelism and lower synchronization cost. However, like virtual memory, the parallel simulators may end up thrashing instead of performing useful work. The complication in using a Time Warp simulator is then to configure it suitably for good performance and to avoid thrashing. Unfortunately, the optimal configuration is not generally static among different applications or even throughout an entire run of a single application. Thus, online feedback control systems are deployed to govern the adjustment of input parameters in our Time Warp Simulation Kernel. The design and implementation of effective feedback control systems can be difficult; the extra processing is pure overhead that must be absorbed by any performance gains delivered. The problem is further complicated when attempting to build a Simulation Kernel that is designed to operate efficiently with many different applications. In this paper, we introduce a control-centric architecture that is used to monitor and manage different parts of a Time Warp simulator. Specifically, we extend concepts from control theory, such as adaptive control and stability, to better understand and design hierarchically distributed run-time control systems for Time Warp based parallel Simulation.

  • PADS - Parallel mixed-technology Simulation
    Proceedings Fourteenth Workshop on Parallel and Distributed Simulation, 2000
    Co-Authors: Peter Frey, Radharamanan Radhakrishnan
    Abstract:

    Circuit Simulation has proven to be one of the most important computer aided design (CAD) methods for the analysis and validation of integrated circuit designs. A popular approach to describing circuits for Simulation purposes is to use a hardware description language such as VHDL. Similar efforts have also been carried out in the analog domain, leading to tools such as SPICE. However, with the growing trend of hardware designs that contain both analog and digital components, design environments that seamlessly integrate analog and digital circuitry are needed. Simulation of such circuits is, however, exacerbated by the higher resource (CPU and memory) demands that arise when analog and digital models are integrated in a mixed-mode (analog and digital) Simulation. One solution to this problem is to use PDES algorithms on a distributed platform. However, a synchronization interface between the analog and digital Simulation environments is required to achieve integrated mixed-mode Simulation. In this paper, we present the issues involved in the construction of synchronization protocols which support mixed-mode Simulation in a distributed Simulation environment. The proposed synchronization protocols provide an interface between an optimistic (Time Warp based) discrete-event Simulation Kernel and any continuous time Simulation Kernel. Empirical and formal analyses were conducted to ensure correctness and completeness of the protocols, and the results of these analyses are also presented.
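
    One simple way to picture such a synchronization interface: the discrete-event Kernel grants the continuous solver permission to integrate only up to the next digital event time, so the analog state is consistent when that event fires. The sketch below is an assumption-laden toy (explicit Euler on dv/dt = -v), not the paper's protocol.

```cpp
#include <cassert>
#include <cmath>

// Analog side of one lockstep synchronization round: integrate dv/dt = -v
// with explicit Euler, never stepping past the granted window edge t_limit
// (the digital Kernel's next event timestamp). Illustrative names only.
double integrate_to(double v, double t, double t_limit, double dt) {
    while (t + dt <= t_limit + 1e-12) {  // stay inside the granted time window
        v += dt * (-v);                  // one Euler step of dv/dt = -v
        t += dt;
    }
    return v;                            // analog state valid at the window edge
}
```

    With an optimistic (Time Warp based) digital Kernel, the analog side must also be able to roll back its state when the grant itself is revoked, which is what makes these interfaces non-trivial.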

  • Workshop on Parallel and Distributed Simulation - Time Warp Simulation on clumps
    Proceedings Thirteenth Workshop on Parallel and Distributed Simulation. PADS 99. (Cat. No.PR00155), 1999
    Co-Authors: Girindra D. Sharma, Radharamanan Radhakrishnan, Umesh Kumar V. Rajasekaran, Nael Abu-ghazaleh, Philip A Wilsey
    Abstract:

    Traditionally, parallel discrete-event simulators based on the Time Warp synchronization protocol have been implemented using either the shared memory programming model or the distributed memory, message passing programming model. This was because the preferred hardware platform was either a shared memory multiprocessor workstation or a network of uniprocessor workstations. However, with the advent of "clumps" (clusters of shared memory multiprocessors), a change in this dichotomous view becomes necessary. This paper explores the design and implementation issues involved in exploiting this new platform for Time Warp Simulations. Specifically, it presents two generic strategies for implementing Time Warp simulators on clumps. In addition, we present our experiences in implementing these strategies on an extant distributed memory, message passing Time Warp simulator (WARPED). Preliminary performance results comparing the modified clump-specific Simulation Kernel to the unmodified distributed memory, message passing Simulation Kernel are also presented.

  • An Object-Oriented Time Warp Simulation Kernel
    Lecture Notes in Computer Science, 1998
    Co-Authors: Radharamanan Radhakrishnan, Dale E Martin, Malolan Chetlur, Philip A Wilsey
    Abstract:

    The design of a Time Warp Simulation Kernel is made difficult by the inherent complexity of the paradigm. Hence it becomes critical that the design of such complex Simulation Kernels follow established design principles, such as object-oriented design, so that the implementation is simple to modify and extend. In this paper, we present a compendium of our efforts in the design and development of an object-oriented Time Warp Simulation Kernel, called warped. warped is a publicly available Time Warp Simulation Kernel for experimentation and application development. The Kernel defines a standard interface to the application developer and is designed to provide a highly configurable environment for the integration of Time Warp optimizations. It is written in C++, uses the MPI message passing standard for communication, and executes on a variety of platforms including a network of SUN workstations, a SUN SMP workstation, the IBM SP1/SP2 multiprocessors, the Cray T3E, the Intel Paragon, and IBM-compatible PCs running Linux.

Dale E Martin - One of the best experts on this subject based on the ideXlab platform.

  • Annual Simulation Symposium - Redesigning the WARPED Simulation Kernel for analysis and application development
    36th Annual Simulation Symposium 2003., 2003
    Co-Authors: Dale E Martin, Philip A Wilsey, R.j. Hoekstra, E.r. Keiter, S.a. Hutchinson, T.v. Russo, L.j. Waters
    Abstract:

    WARPED is a publicly available Time Warp Simulation Kernel. The Kernel defines a standard interface to the application developer and is designed to provide a highly configurable environment for the integration of Time Warp optimizations. It is written in C++, uses the MPI message passing standard, and executes on a variety of parallel and distributed processing platforms. Version 2.0 of WARPED, described here, is distributed with several applications, and the configuration can be set so that a sequential Kernel implementation is instantiated. The Kernel supports LP clustering, various GVT algorithms, and numerous optimizations to adaptively adjust Simulation parameters at runtime.

  • An Object-Oriented Time Warp Simulation Kernel
    Lecture Notes in Computer Science, 1998
    Co-Authors: Radharamanan Radhakrishnan, Dale E Martin, Malolan Chetlur, Philip A Wilsey
    Abstract:

    The design of a Time Warp Simulation Kernel is made difficult by the inherent complexity of the paradigm. Hence it becomes critical that the design of such complex Simulation Kernels follow established design principles, such as object-oriented design, so that the implementation is simple to modify and extend. In this paper, we present a compendium of our efforts in the design and development of an object-oriented Time Warp Simulation Kernel, called warped. warped is a publicly available Time Warp Simulation Kernel for experimentation and application development. The Kernel defines a standard interface to the application developer and is designed to provide a highly configurable environment for the integration of Time Warp optimizations. It is written in C++, uses the MPI message passing standard for communication, and executes on a variety of platforms including a network of SUN workstations, a SUN SMP workstation, the IBM SP1/SP2 multiprocessors, the Cray T3E, the Intel Paragon, and IBM-compatible PCs running Linux.

  • HICSS (1) - WARPED: a time warp Simulation Kernel for analysis and application development
    Proceedings of HICSS-29: 29th Hawaii International Conference on System Sciences, 1996
    Co-Authors: Dale E Martin, T.j. Mcbrayer, Philip A Wilsey
    Abstract:

    WARPED is a publicly available Time Warp Simulation Kernel for experimentation and application development. The Kernel defines a standard interface to the application developer and is designed to provide a highly configurable environment for the integration of Time Warp optimizations. It is written in C++, uses the MPI (Message Passing Interface) standard and shared memory for communication, and executes on a variety of platforms including a network of SUN workstations, a SUN SMP workstation, the IBM SP1/SP2 multiprocessors, the Intel Paragon, and IBM-compatible PCs running Linux. WARPED is distributed with several applications and includes a sequential Kernel implementation for comparative analysis. The Kernel supports LP (logical process) clustering, various Time Warp algorithms, and several optimizations that dynamically adjust Simulation parameters.

Tang Wenjie - One of the best experts on this subject based on the ideXlab platform.

  • Multicore-Oriented Service Optimization of Parallel Discrete Event Simulation
    2020
    Co-Authors: Tang Wenjie, Yao Yiping
    Abstract:

    CPU development has entered the multicore era. Current parallel Simulation Kernels utilize multicore resources through multiple processes, which leads to inefficient synchronization and communication. This study optimizes two services based on the hierarchical parallel Simulation Kernel (HPSK) model to support high performance Simulation in a multithreaded paradigm. First, the paper proposes a protocol for EETS computation based on hybrid time management, which can be flexibly configured as an asynchronous EETS algorithm according to the application's characteristics. Second, the study proposes an event management algorithm based on the characteristics of event interaction, which creates events lock-free, commits events asynchronously, and transfers events by pointer, to eliminate the overhead of locks and to reduce memory usage. Experimental results from phold show that the optimized HPSK works well under different conditions.

  • A Hierarchical Parallel Discrete Event Simulation Kernel for Multicore Platform
    Cluster Computing, 2013
    Co-Authors: Tang Wenjie, Yao Yiping, Zhu Feng
    Abstract:

    CPU development has entered the multicore era. Current parallel Simulation Kernels utilize multicore resources through multiple processes, which leads to inefficient communication and synchronization. To fill this gap, we proposed the HPSK (hierarchical parallel Simulation Kernel) model, which schedules logical processes and executes events in parallel with a multithreaded paradigm. Based on this model, three key algorithms were proposed to support high performance: (1) An event management algorithm to improve the efficiency of event creation and release. It uses a lock-free creation and asynchronous commitment mechanism to decouple the relationship between threads, hence reducing the overhead of locks. (2) A pointer-based communication algorithm to improve the efficiency of communication between threads. It uses a buffer mechanism to avoid interrupting the execution of the target thread; the target thread reads events from the buffers when it needs them. By using ring-structure buffers, synchronization between sending and receiving threads can be eliminated. (3) An approximate method to compute the LBTS (Lower Bound on Time Stamp). It uses an asynchronous mechanism to avoid disturbing thread execution and a two-level filter mechanism to reduce redundant LBTS computation. A series of experiments with a modified phold model shows that HPSK achieves good performance for applications under different conditions. It can run 8× faster than μsik when event locality and lookahead are low.
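
    The pointer-based, ring-structure buffer idea in (2) can be sketched as a single-producer/single-consumer queue that exchanges event pointers without locks or payload copies. This is a generic SPSC ring assumed for illustration; HPSK's actual buffers are not reproduced here.

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>

struct Event { int timestamp; };

// Fixed-capacity SPSC ring: the producer thread only calls push, the
// consumer thread only calls pop, so no mutex is needed.
template <std::size_t N>
class SpscRing {
    Event* slots[N] = {};
    std::atomic<std::size_t> head{0};   // advanced by the consumer
    std::atomic<std::size_t> tail{0};   // advanced by the producer
public:
    bool push(Event* e) {               // producer side
        std::size_t t = tail.load(std::memory_order_relaxed);
        if (t - head.load(std::memory_order_acquire) == N) return false; // full
        slots[t % N] = e;               // hand over the pointer, no copy
        tail.store(t + 1, std::memory_order_release);
        return true;
    }
    Event* pop() {                      // consumer side
        std::size_t h = head.load(std::memory_order_relaxed);
        if (h == tail.load(std::memory_order_acquire)) return nullptr;   // empty
        Event* e = slots[h % N];
        head.store(h + 1, std::memory_order_release);
        return e;
    }
};
```

    Because each index is written by exactly one thread, acquire/release ordering on the two shared indices is sufficient; this is the sense in which send/receive synchronization is eliminated.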

  • HSK: A Hierarchical Parallel Simulation Kernel for Multicore Platform
    2011 IEEE Ninth International Symposium on Parallel and Distributed Processing with Applications, 2011
    Co-Authors: Tang Wenjie
    Abstract:

    CPU development has entered the multicore era. Due to a lack of thread-level support, most Simulation platforms cannot take full advantage of multicore. To fill this gap, we proposed a hierarchical parallel Simulation Kernel (HSK) model. The model has two layers. The first layer, named the process Kernel, is responsible for managing all thread Kernels on the second layer. The second layer is a group of thread Kernels, which are responsible for scheduling and advancing logical processes. Each thread Kernel is mapped onto an executing thread to advance the Simulation in parallel. In addition, two algorithms were proposed to support high performance: (1) To improve the communication efficiency between threads, we proposed a pointer-based communication mechanism. By using buffers, synchronization between threads can be eliminated. (2) To eliminate redundant Lower Bound on Time Stamp (LBTS) computation without interrupting thread execution, we employ an approximate method to compute LBTS asynchronously. A proof of validity is presented. The execution performance of HSK is demonstrated by a series of Simulation experiments with a modified phold model. HSK achieves good speedup for applications, especially with coarse-grained events.
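
    The LBTS quantity itself reduces to a minimum over per-thread progress and in-flight events. The sketch below shows only that synchronous core, with hypothetical names; the paper's contribution is computing it asynchronously with a two-level filter, which is not reproduced here.

```cpp
#include <algorithm>
#include <cassert>
#include <climits>
#include <vector>

// A safe lower bound on future event timestamps is the minimum over every
// thread's next local event time and every event still in transit between
// threads; events strictly below this bound can safely be committed.
// (Per-link lookahead, if any, would be added to the sender terms.)
int lbts(const std::vector<int>& next_local_event,
         const std::vector<int>& in_transit) {
    int bound = INT_MAX;
    for (int t : next_local_event) bound = std::min(bound, t);
    for (int t : in_transit)       bound = std::min(bound, t);
    return bound;
}
```

    The difficulty in a threaded Kernel is not this minimum but gathering its inputs without stalling the threads, which is exactly what the asynchronous approximation addresses.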

S.k. Shukla - One of the best experts on this subject based on the ideXlab platform.

  • HLDVT - Accelerating SystemC Simulations using GPUs
    2012 IEEE International High Level Design Validation and Test Workshop (HLDVT), 2012
    Co-Authors: Mahesh Nanjundappa, H.d. Patel, Anirudh M. Kaushik, S.k. Shukla
    Abstract:

    Recent developments in graphics processing unit (GPU) technology have invigorated interest in using GPUs to accelerate the Simulation of SystemC models. SystemC is extensively used for design space exploration and early performance analysis of hardware systems. SystemC's reference implementation provides a single-threaded Simulation Kernel. However, modern computing platforms offer substantially more compute power by means of multiple central processing units and multiple co-processors such as GPUs. This has piqued interest in parallelizing SystemC Simulations. Several of these efforts focus on utilizing the massive parallelism offered by GPUs as an alternate computing platform. In this paper, we present a summary of these recent research efforts that propose using GPUs for accelerating SystemC Simulation.

  • Towards a heterogeneous Simulation Kernel for system-level models: a SystemC Kernel for synchronous data flow models
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2005
    Co-Authors: H.d. Patel, S.k. Shukla
    Abstract:

    As SystemC gains popularity as a modeling language of choice for system-on-chip (SoC) designs, heterogeneous modeling in SystemC and efficient Simulation become increasingly important. However, in the current reference implementation, all SystemC models are simulated through a nondeterministic discrete-event (DE) Simulation Kernel that schedules events at run time; mimicking other models of computation (MoCs) using DE may become cumbersome. This sometimes results in too many delta cycles, hindering the Simulation performance of the model. SystemC also uses this Simulation Kernel as the target Simulation engine, which makes it difficult to express different MoCs naturally in SystemC. In an SoC model, different components may need to be naturally expressible in different MoCs. These components may be amenable to static-scheduling-based Simulation or other pre-Simulation optimization techniques. The goal is to create a Simulation framework for heterogeneous SystemC models and to gain efficiency and ease of use within the framework of the SystemC reference implementation. In this paper, a synchronous data flow (SDF) Kernel extension for SystemC is introduced. Experimental results showing improvement in Simulation time are also presented.
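
    The payoff that an SDF Kernel extension exploits follows from the SDF balance equations: fixed production and consumption rates determine relative firing counts before the Simulation starts, so no run-time event scheduling (and no delta cycles) is needed. A minimal sketch, assuming a single edge between two hypothetical actors:

```cpp
#include <cassert>
#include <numeric>
#include <utility>

// On an edge where actor A produces p tokens per firing and actor B consumes
// c per firing, the balance equation rA * p == rB * c fixes the smallest
// positive firing counts (rA, rB) per schedule period. This is illustration,
// not the paper's Kernel.
std::pair<int, int> repetitions(int p, int c) {
    int l = std::lcm(p, c);        // tokens exchanged per schedule period
    return { l / p, l / c };       // firings of A and B per period
}
```

    For a full graph, the repetition vector is the smallest positive integer solution across all edges simultaneously; the static schedule then simply replays that vector each period.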

  • ISVLSI - Towards a heterogeneous Simulation Kernel for system level models: a SystemC Kernel for synchronous data flow models
    IEEE Computer Society Annual Symposium on VLSI, 2004
    Co-Authors: H.d. Patel, S.k. Shukla
    Abstract:

    As SystemC gains popularity as a modelling language of choice for system-on-chip (SOC) designs, heterogeneous modelling in SystemC and efficient Simulation become increasingly important. However, in the current reference implementation, all SystemC models are simulated through a non-deterministic discrete-event Simulation Kernel, which schedules events at run-time. This sometimes results in too many delta cycles, hindering the Simulation performance of the model. The SystemC language also seems to target this Simulation Kernel as the target Simulation engine, making it difficult to express different models of computation (MOCs) naturally in SystemC. In an SOC model, different components may need to be naturally expressible in different MOCs. Some of these components may be amenable to static-scheduling-based Simulation or other pre-Simulation optimization techniques. Our goal is to create a Simulation framework for heterogeneous SystemC models, to gain efficiency and ease of use within the framework of the SystemC reference implementation. In this work, we focus on synchronous data flow (SDF) models, where the rates of data produced and consumed by a data flow node/block are known a priori. Compile-time knowledge of these rates allows the use of static scheduling, resulting in significant improvement in Simulation efficiency. We propose source-level hints to be provided by the model designer to help express SDF more naturally and to make the new Simulation Kernel execute special functionalities. Our experiments show significant improvement in Simulation time over the original models.

  • Towards a heterogeneous Simulation Kernel for system-level models: a SystemC Kernel for synchronous data flow models
    Great Lakes Symposium on VLSI, 2004
    Co-Authors: H.d. Patel, S.k. Shukla
    Abstract:

    As SystemC gains popularity as a modeling language of choice for system-on-chip (SOC) designs, heterogeneous modeling in SystemC and efficient Simulation become increasingly important. However, in the current reference implementation, all SystemC models are simulated through a non-deterministic discrete-event Simulation Kernel, which schedules events at run-time. This sometimes results in too many delta cycles, hindering the Simulation performance of the model. The SystemC language also seems to target this Simulation Kernel as the target Simulation engine. This makes it difficult to express different models of computation (MoCs) naturally in SystemC. In an SOC model, different components may need to be naturally expressible in different MoCs. Some of these components may be amenable to static-scheduling-based Simulation or other pre-Simulation optimization techniques. Our goal is to create a Simulation framework for heterogeneous SystemC models, to gain efficiency and ease of use within the framework of the SystemC reference implementation. In this paper, we focus on Synchronous Data Flow (SDF) models, where the rates of data produced and consumed by a data flow node/block are known a priori. In digital signal processing (DSP) applications, where relative sample rates are specified for each DSP component, such models are quite common. Compile-time knowledge of these rates allows the use of static scheduling, resulting in significant improvement in Simulation efficiency. We describe an alternate SystemC Kernel that exploits such static scheduling of SDF models. Our experiments show improvement in Simulation time over the original models and over the latest efficiency results from [20].