Algorithm Level

The Experts below are selected from a list of 369210 Experts worldwide ranked by ideXlab platform

Ramesh Karri - One of the best experts on this subject based on the ideXlab platform.

  • DAC - TAO: techniques for algorithm-level obfuscation during high-level synthesis
    Proceedings of the 55th Annual Design Automation Conference, 2018
    Co-Authors: Christian Pilato, Ramesh Karri, Francesco Regazzoni, Siddharth Garg
    Abstract:

    Intellectual Property (IP) theft costs semiconductor design companies billions of dollars every year. Unauthorized IP copies start from reverse engineering the given chip. Existing techniques to protect against IP theft aim to hide the IC's functionality, but they focus on manipulating the HDL descriptions. We propose TAO as a comprehensive solution based on high-level synthesis to raise the abstraction level and apply algorithmic obfuscation automatically. TAO includes several transformations that make the component hard to reverse engineer during chip fabrication; a key is later inserted to unlock the functionality. This is a promising approach to obfuscate large-scale designs despite the hardware overhead needed to implement the obfuscation.
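The lock-and-key idea behind this kind of obfuscation can be illustrated with a toy model. This is not one of TAO's actual transformations; the coefficient, key value, and function below are all hypothetical. The point is that a design constant is never stored in the clear, only its mask under a key, so the unlocked circuit computes correctly only when the right key is supplied.

```python
KEY = 0xA5  # correct unlock key (hypothetical value)

def fir_tap_locked(x, key):
    """Toy constant obfuscation: the real coefficient 7 never appears in
    the design; only its XOR-mask under the key is stored. Supplying the
    right key recovers the coefficient; any other key corrupts the output."""
    masked_coeff = 7 ^ KEY       # this is what an attacker sees in the netlist
    coeff = masked_coeff ^ key   # unmasked at run time with the supplied key
    return coeff * x

assert fir_tap_locked(3, KEY) == 21   # correct key: correct function
assert fir_tap_locked(3, 0x00) != 21  # wrong key: obfuscated behavior
```

Reverse engineering the masked netlist reveals only `0xA2`, not the coefficient, which is the sense in which the functionality is hidden until the key is inserted.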

  • Algorithm-level recomputing with shifted operands: a register transfer level concurrent error detection technique
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2006
    Co-Authors: Ramesh Karri
    Abstract:

    This paper presents Algorithm-level REcomputing with Shifted Operands (ARESO), a new register transfer (RT) level time-redundancy-based concurrent error detection (CED) technique. In REcomputing with Shifted Operands (RESO), operations (additions, subtractions, etc.) are carried out twice: once on the basic operands and once on the shifted operands. The results of the two operations are compared to detect an error. Although using RESO operators in RT-level designs is straightforward, it entails time and area overhead. In contrast, ARESO does not use specialized RESO operators. In ARESO, the entire algorithm is carried out twice: once on the basic input and once on the shifted input. The results of these two algorithm-level instantiations are compared to detect an error. By operating at the algorithm level, ARESO exploits RT-level scheduling, pipelining, operator chaining, and multicycling to incorporate user-specified error detection latencies, and it supports trade-offs among hardware, performance, and error detection latency. The authors validated ARESO on practical design examples using the Synopsys Behavior Compiler (BC), an industry-standard behavioral synthesis system.
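The underlying RESO check can be sketched in a few lines. This is a minimal illustration, not the paper's RT-level machinery: the stuck-at adder and shift distance are hypothetical, and Python integers stand in for fixed-width datapath words, so overflow effects are ignored.

```python
def faulty_add(a, b):
    """Adder whose result bit 0 is stuck at 0 (an injected permanent fault)."""
    return (a + b) & ~1

def reso_add(a, b, add=faulty_add, k=1):
    """RESO check: run the operation on the basic operands and again on
    operands shifted left by k bits, un-shift, and compare the results."""
    r1 = add(a, b)                 # pass 1: basic operands
    r2 = add(a << k, b << k) >> k  # pass 2: shifted operands, shifted back
    if r1 != r2:
        raise RuntimeError("RESO: result mismatch, fault detected")
    return r1
```

A fault-free operator produces matching results (`reso_add(5, 9, add=lambda a, b: a + b)` returns 14), while the stuck-at fault above corrupts the two passes differently and triggers the mismatch. ARESO applies this same shift-and-compare idea to a whole algorithm rather than to each individual operator.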

  • Selectively breaking data dependences to improve the utilization of idle cycles in algorithm-level re-computing data paths
    IEEE Transactions on Reliability, 2003
    Co-Authors: Ramesh Karri
    Abstract:

    Although algorithm-level re-computing techniques can trade off fault detection capability against time overhead in a concurrent error detection (CED) scheme, they incur 100% time overhead when the strongest CED capability is achieved. Using the idle cycles in the data path for the re-computation can reduce this time overhead. However, dependences between operations prevent the re-computation from fully utilizing the idle cycles. Deliberately breaking some of these data dependences can further reduce the time overhead associated with algorithm-level re-computing. Experimental results show that the proposed technique brings the time overhead down to 0-60%, with an associated hardware overhead of 12% to 50% depending on the design size.
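The effect of breaking a dependence can be shown with a toy list scheduler. Everything here is illustrative rather than taken from the paper: the two-adder data path, the operation names, and the "shared-register" dependence that serializes the re-computation behind the normal pass are all assumptions.

```python
def schedule(ops, deps, adders=1):
    """Greedy ASAP list scheduler for single-cycle add operations.
    deps[op] is the set of ops whose results op consumes."""
    done = {}                 # op -> cycle in which its result is ready
    pending = set(ops)
    cycle = 0
    while pending:
        ready = sorted(o for o in pending
                       if all(done.get(d, 10**9) <= cycle
                              for d in deps.get(o, ())))
        for o in ready[:adders]:          # issue at most `adders` ops per cycle
            done[o] = cycle + 1
            pending.discard(o)
        cycle += 1
    return max(done.values())             # schedule length in cycles

ops = ["a", "b", "c", "a2", "b2", "c2"]   # normal pass + re-computation pass

# A dependence from a2 to c (e.g. a shared register) forces the re-computation
# to trail the whole normal pass, even though the second adder sits idle:
chained = {"b": {"a"}, "c": {"b"}, "a2": {"c"}, "b2": {"a2"}, "c2": {"b2"}}
assert schedule(ops, chained, adders=2) == 6   # 100% time overhead

# Breaking that one dependence lets the re-computation fill the idle adder:
broken = {"b": {"a"}, "c": {"b"}, "b2": {"a2"}, "c2": {"b2"}}
assert schedule(ops, broken, adders=2) == 3    # 0% time overhead in this toy
```

The normal pass alone needs 3 cycles, so the chained schedule's 6 cycles is the 100%-overhead case from the abstract, and breaking the dependence recovers the idle cycles.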

  • Algorithm level re-computing using implementation diversity: a register transfer level concurrent error detection technique
    IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2002
    Co-Authors: Ramesh Karri
    Abstract:

    Concurrent error detection (CED) based on time redundancy entails performing the normal computation and a re-computation at different times and then comparing their results. Time redundancy implemented naively can only detect transient faults. We present two algorithm-level time-redundancy-based CED schemes that exploit register transfer level (RTL) implementation diversity to detect both transient and permanent faults. At the RTL, implementation diversity can be achieved either by changing the operation-to-operator allocation or by shifting the operands before re-computation. By exploiting allocation diversity and data diversity, a stuck-at fault affects the two results in two different ways. The proposed schemes yield good fault detection probability with very low area overhead. We used the Synopsys Behavior Compiler (BC) to validate the schemes.
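Why plain recomputation misses permanent faults while data diversity catches them can be sketched as follows. The stuck-at adder and the shift distance are hypothetical, chosen only to make the effect visible.

```python
def stuck_add(a, b):
    """Adder with a permanent fault: result bit 2 is stuck at 0."""
    return (a + b) & ~0b100

def recompute_same(a, b, add):
    """Naive time redundancy: rerun the identical operation and compare."""
    return add(a, b) == add(a, b)

def recompute_shifted(a, b, add, k=2):
    """Data diversity: rerun on shifted operands, un-shift, and compare."""
    return add(a, b) == add(a << k, b << k) >> k

# 3 + 1 = 4, but the stuck bit forces the faulty adder to produce 0.
assert recompute_same(3, 1, stuck_add) is True       # fault escapes undetected
assert recompute_shifted(3, 1, stuck_add) is False   # mismatch: fault detected
```

The permanent fault corrupts both identical runs in exactly the same way, so their results still match; shifting the operands makes the same physical fault hit different logical bits in the two runs, which is the "two different ways" the abstract refers to.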

  • Algorithm level recomputing using allocation diversity: a register transfer level approach to time redundancy-based concurrent error detection
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2002
    Co-Authors: Ramesh Karri
    Abstract:

    In this paper, the authors propose an algorithm-level time-redundancy-based concurrent error detection (CED) scheme that protects against both permanent and transient faults by exploiting hardware allocation diversity at the register transfer level. Although the normal computation and the recomputation are carried out on the same data path, the operation-to-operator allocation for the normal computation differs from that for the recomputation. The authors show that the proposed scheme provides very good CED capability with very low area overhead.
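A toy model of allocation diversity (the operators, operations, and fault are hypothetical): the same two operations run twice on the same pair of adders, but with the operation-to-operator binding swapped between passes, so a permanent fault in one adder perturbs the two passes differently.

```python
def adder_ok(a, b):
    """Fault-free adder instance."""
    return a + b

def adder_bad(a, b):
    """Adder instance with output bit 0 stuck at 1 (permanent fault)."""
    return (a + b) | 1

def run(binding, ops):
    """Evaluate each (name, a, b) operation on the adder chosen by `binding`."""
    return [binding[name](a, b) for name, a, b in ops]

ops = [("op1", 2, 2), ("op2", 10, 4)]
normal  = run({"op1": adder_bad, "op2": adder_ok}, ops)  # first allocation
recheck = run({"op1": adder_ok, "op2": adder_bad}, ops)  # swapped allocation
assert normal != recheck  # the permanent fault perturbs the passes differently
```

Had both passes used the identical allocation, every operation would hit the same faulty adder twice, the wrong results would agree, and the fault would escape; swapping the binding is what turns the permanent fault into a detectable mismatch.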

Yu Wen-xian - One of the best experts on this subject based on the ideXlab platform.

  • Key Design and Implementation of Algorithm-Level Data Fusion Testbed
    Computer Simulation, 2007
    Co-Authors: Yu Wen-xian
    Abstract:

    To address level-one multisensor data fusion, an algorithm-level testbed named STE-One was developed that can test and evaluate various data fusion structures, including structures with feedback. The structure of the software is introduced, and some key methods used to implement algorithm-level test and replacement are described, including the hierarchical design model, algorithm sorting, and data-pool technology.
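The "algorithm-level test and replacement" capability described here resembles a plug-in registry. The sketch below is hypothetical (the role name, function names, and 1-D contacts are illustrative, not STE-One's design); it shows how one fusion-stage algorithm can be swapped for another without touching the rest of the testbed.

```python
REGISTRY = {}

def register(role):
    """Decorator: file an algorithm under a fusion-stage role by name."""
    def deco(fn):
        REGISTRY.setdefault(role, {})[fn.__name__] = fn
        return fn
    return deco

@register("association")
def nearest_neighbor(track, contacts):
    """Associate the contact closest to the track (toy 1-D positions)."""
    return min(contacts, key=lambda c: abs(c - track))

@register("association")
def first_in_gate(track, contacts, gate=5.0):
    """Associate the first contact falling inside a fixed gate."""
    return next((c for c in contacts if abs(c - track) <= gate), None)

def run_fusion(assoc_name, track, contacts):
    """The testbed looks algorithms up by name, so they are replaceable."""
    assoc = REGISTRY["association"][assoc_name]
    return assoc(track, contacts)
```

Comparing `run_fusion("nearest_neighbor", ...)` against `run_fusion("first_in_gate", ...)` on the same scenario is exactly the kind of algorithm-level test-and-replace experiment such a testbed enables.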

Dhiraj K. Pradhan - One of the best experts on this subject based on the ideXlab platform.

  • VTS - Algorithm Level Fault Tolerance: A Technique to Cope with Long Duration Transient Faults in Matrix Multiplication Algorithms
    26th IEEE VLSI Test Symposium (vts 2008), 2008
    Co-Authors: C.a. Lisboa, Luigi Carro, C. Argyrides, Dhiraj K. Pradhan
    Abstract:

    For technologies beyond the 45 nm node, radiation-induced transients will last longer than one clock cycle. In this scenario, temporal redundancy techniques will no longer be able to cope with radiation-induced soft errors, while spatial redundancy techniques still impose high power and area overheads. A solution to this impasse is the use of algorithm-level techniques able to detect and correct errors at low cost. In this paper, a new approach to this problem is proposed and applied to the matrix multiplication algorithm. The proposed technique is compared to previously published fault tolerance techniques, and the costs of detection and recomputation for both approaches are compared and discussed.
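Algorithm-level error detection and correction for matrix multiplication is classically done with checksum encoding in the style of Huang-Abraham ABFT, the baseline this line of work builds on. A minimal sketch, with illustrative helper names:

```python
def abft_matmul(A, B):
    """Checksum-based ABFT for C = A*B: append a column-sum row to A and a
    row-sum column to B; the product then carries row and column checksums."""
    n = len(A)
    Af = A + [[sum(col) for col in zip(*A)]]   # column-checksum row
    Bf = [row + [sum(row)] for row in B]       # row-checksum column
    return [[sum(Af[i][k] * Bf[k][j] for k in range(n))
             for j in range(n + 1)] for i in range(n + 1)]

def check_and_correct(C):
    """Locate a single erroneous element at the crossing of the failing row
    and column checksums, repair it, and strip the checksums."""
    n = len(C) - 1
    bad_rows = [i for i in range(n) if sum(C[i][:n]) != C[i][n]]
    bad_cols = [j for j in range(n)
                if sum(C[i][j] for i in range(n)) != C[n][j]]
    if bad_rows and bad_cols:
        i, j = bad_rows[0], bad_cols[0]
        C[i][j] = C[i][n] - sum(C[i][k] for k in range(n) if k != j)
    return [row[:n] for row in C[:n]]

C = abft_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])
C[0][0] += 9  # inject an error that outlives any single recomputation window
assert check_and_correct(C) == [[19, 22], [43, 50]]
```

Because the check is on the algorithm's output rather than on a re-execution, even a long-duration transient that corrupts both runs of a temporally redundant scheme is caught and corrected here, which is the abstract's motivation for moving to the algorithm level.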

Jean Roy - One of the best experts on this subject based on the ideXlab platform.

  • Quantitative comparison of sensor fusion architectural approaches in an algorithm-level test bed
    Signal and Data Processing of Small Targets 1996, 1996
    Co-Authors: Jean Roy, Eloi Bosse, Nicolas Duclos-hindie
    Abstract:

    This paper presents the results of a quantitative comparison of two architectural options for a multi-sensor data fusion system. The first option is the centralized architecture: a single track file is maintained and updated using raw sensor measurements. The second option is the autonomous sensor fusion architecture: each sensor maintains its own track file, and the sensor tracks are then transmitted to a central processor responsible for fusing them into a master track file. Various performance trade-offs will typically be required in selecting the best multi-sensor data fusion architecture, since each approach has different benefits and disadvantages. The emphasis of this study is on measuring the quality of the fused tracks; the study was conducted with the CASE_ATTI (Concept Analysis and Simulation Environment for Automatic Target Tracking and Identification) testbed, which provides the algorithm-level test and replacement capability required for this kind of performance study.
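The two architectures can be contrasted in a deliberately simplified 1-D sketch (hypothetical data; real systems fuse full track states with covariances). For plain averaging of a static target the two pipelines happen to agree numerically, while in general they trade off communication bandwidth, accuracy, and robustness differently, which is what motivates a quantitative comparison.

```python
def centralized(measurements):
    """Centralized architecture: one track file updated from every raw
    measurement of every sensor (here, the mean of all measurements)."""
    flat = [m for sensor in measurements for m in sensor]
    return sum(flat) / len(flat)

def autonomous(measurements):
    """Autonomous architecture: each sensor forms its own track (a local
    mean), then a central node fuses the sensor tracks, weighting each
    track by the number of measurements behind it."""
    tracks = [(sum(s) / len(s), len(s)) for s in measurements]
    total = sum(n for _, n in tracks)
    return sum(est * n for est, n in tracks) / total

# Two sensors observing the same static target:
readings = [[1.0, 2.0, 3.0], [4.0, 6.0]]
assert centralized(readings) == autonomous(readings) == 3.2
```

The autonomous scheme transmits one track per sensor instead of every raw measurement, illustrating why the choice between the two is a bandwidth-versus-information trade-off rather than a pure accuracy question.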

  • Realistic simulations of surveillance sensors in an algorithm-level sensor fusion test-bed
    Signal and Data Processing of Small Targets 1995, 1995
    Co-Authors: Eloi Bosse, Nicolas Duclos-hindie, Jean Roy, Denis Dion
    Abstract:

    This paper presents simulations of IR and radar surveillance sensors to support an ongoing Multi-Sensor Data Fusion (MSDF) performance evaluation study for potential application to the Canadian Patrol Frigate midlife upgrade. The surveillance sensor models are used in an algorithm-level testbed that allows the investigation of advanced MSDF concepts. The sensor models take into account the sensors' design parameters and external environmental effects such as clutter, clouds, propagation, and jamming. The latest findings regarding the dominant perturbing effects on sensor detection performance are included. The sensor models can be used to generate contacts and false alarms while a scenario is running in multi-sensor data fusion studies.

  • CASE_ATTI: An Algorithm-Level Testbed for Multi-Sensor Data Fusion
    1995
    Co-Authors: Jean Roy, Eloi Bosse, Denis Dion
    Abstract:

    A key element in the anticipated information management problem on a naval platform is the ability to combine or fuse data, not only as a volume-reducing strategy, but also as a means to exploit the unique combinations of data that may be available. In this regard, the Command and Control Division at DREV is involved in multiple R&D activities in the field of local-area Multi-Sensor Data Fusion (MSDF) for naval command and control afloat. Many different approaches to MSDF have been investigated and developed recently in response to the ever-increasing importance of the subject. However, at this stage of development, no standard approach is generally accepted for all applications. A wide variety of techniques have been proposed for many diverse applications, and the system designer must choose the techniques best suited to a specific problem. One of the best tools to help the designer with such a choice is a computer simulation for proof-of-concept purposes. This document presents an overview of the CASE_ATTI (Concept Analysis and Simulation Environment for Automatic Target Tracking and Identification) algorithm-level simulation testbed that has been developed by DREV to support the theoretical work. CASE_ATTI provides the highly modular, structured, and flexible hardware/software environment necessary to study and compare various advanced MSDF concepts and schemes in order to demonstrate their applicability, feasibility, and performance. The document also discusses the use of CASE_ATTI to support an ongoing MSDF performance evaluation study in the context of the Canadian Patrol Frigate.

Denis Dion - One of the best experts on this subject based on the ideXlab platform.

  • Realistic simulations of surveillance sensors in an algorithm-level sensor fusion test-bed
    Signal and Data Processing of Small Targets 1995, 1995
    Co-Authors: Eloi Bosse, Nicolas Duclos-hindie, Jean Roy, Denis Dion
    Abstract:

    This paper presents simulations of IR and radar surveillance sensors to support an ongoing Multi-Sensor Data Fusion (MSDF) performance evaluation study for potential application to the Canadian Patrol Frigate midlife upgrade. The surveillance sensor models are used in an algorithm-level testbed that allows the investigation of advanced MSDF concepts. The sensor models take into account the sensors' design parameters and external environmental effects such as clutter, clouds, propagation, and jamming. The latest findings regarding the dominant perturbing effects on sensor detection performance are included. The sensor models can be used to generate contacts and false alarms while a scenario is running in multi-sensor data fusion studies.

  • CASE_ATTI: An Algorithm-Level Testbed for Multi-Sensor Data Fusion
    1995
    Co-Authors: Jean Roy, Eloi Bosse, Denis Dion
    Abstract:

    A key element in the anticipated information management problem on a naval platform is the ability to combine or fuse data, not only as a volume-reducing strategy, but also as a means to exploit the unique combinations of data that may be available. In this regard, the Command and Control Division at DREV is involved in multiple R&D activities in the field of local-area Multi-Sensor Data Fusion (MSDF) for naval command and control afloat. Many different approaches to MSDF have been investigated and developed recently in response to the ever-increasing importance of the subject. However, at this stage of development, no standard approach is generally accepted for all applications. A wide variety of techniques have been proposed for many diverse applications, and the system designer must choose the techniques best suited to a specific problem. One of the best tools to help the designer with such a choice is a computer simulation for proof-of-concept purposes. This document presents an overview of the CASE_ATTI (Concept Analysis and Simulation Environment for Automatic Target Tracking and Identification) algorithm-level simulation testbed that has been developed by DREV to support the theoretical work. CASE_ATTI provides the highly modular, structured, and flexible hardware/software environment necessary to study and compare various advanced MSDF concepts and schemes in order to demonstrate their applicability, feasibility, and performance. The document also discusses the use of CASE_ATTI to support an ongoing MSDF performance evaluation study in the context of the Canadian Patrol Frigate.