Precomputation

The experts below are selected from a list of 4,263 experts worldwide, ranked by the ideXlab platform.

Ingrid Verbauwhede - One of the best experts on this subject based on the ideXlab platform.

  • ARC - Time-Memory Trade-Off Attack on FPGA Platforms: UNIX Password Cracking
    Reconfigurable Computing: Architectures and Applications, 2006
    Co-Authors: Nele Mentens, Lejla Batina, Bart Preneel, Ingrid Verbauwhede
    Abstract:

    This paper presents a hardware architecture for UNIX password cracking using Hellman's time-memory trade-off; it is the first hardware design for a key search machine based on the rainbow variant proposed by Oechslin. The implementation target is the Berkeley BEE2 FPGA platform, which can perform 400 million password calculations per second. Our design targets passwords of length 48 bits (out of 56). This means that with one BEE2 module, the precomputation for one salt takes about 8 days and produces 56 gigabytes of storage; precomputing all salts within one year would require 92 BEE2 modules. Recovering an individual password then takes a few minutes on a Virtex-4 FPGA.
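
    The rainbow-table idea behind this design can be sketched at toy scale: the precomputation phase walks many hash/reduce chains and stores only their endpoints, trading a long offline computation (8 days per salt on the BEE2, in the paper) for fast online lookups. The Python below is an illustrative model over a 16-bit keyspace, not the paper's FPGA design; the hash function, reduction function, and table sizes are all stand-ins.

    ```python
    import hashlib

    KEY_SPACE = 2 ** 16     # toy 16-bit password space (the paper targets 48 bits)
    CHAIN_LEN = 32          # t: length of each precomputed chain
    N_CHAINS = 4096         # m: number of chains whose endpoints are stored

    def H(key: int) -> int:
        # Toy one-way function standing in for the UNIX crypt() computation.
        digest = hashlib.sha256(key.to_bytes(2, "big")).digest()
        return int.from_bytes(digest[:4], "big")

    def R(digest: int, col: int) -> int:
        # Rainbow reduction: a different reduction per column, which avoids
        # the chain merges that plague classic Hellman tables.
        return (digest + col) % KEY_SPACE

    def build_table() -> dict:
        # Precomputation phase: walk m chains of t hash/reduce steps each,
        # storing only (endpoint -> start) pairs.
        table = {}
        for start in range(N_CHAINS):
            k = start
            for col in range(CHAIN_LEN):
                k = R(H(k), col)
            table[k] = start
        return table

    def crack(table: dict, target: int):
        # Online phase: guess which column produced `target`, replay the
        # chain tail to an endpoint, and on a table hit rebuild the chain
        # from its start to look for a preimage (false alarms fall through).
        for col in range(CHAIN_LEN - 1, -1, -1):
            k = R(target, col)
            for c in range(col + 1, CHAIN_LEN):
                k = R(H(k), c)
            if k in table:
                cand = table[k]
                for c in range(CHAIN_LEN):
                    if H(cand) == target:
                        return cand
                    cand = R(H(cand), c)
        return None

    table = build_table()                 # offline: ~m*t hash evaluations
    secret = next(iter(table.values()))   # a key the table certainly covers
    found = crack(table, H(secret))       # online: ~t^2 hash evaluations
    assert found is not None and H(found) == H(secret)
    ```

    The storage/time numbers in the abstract follow from this structure: only m endpoints are stored per salt, while the online phase costs on the order of t² hash evaluations instead of an exhaustive search.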

José Monteiro - One of the best experts on this subject based on the ideXlab platform.

  • Sequential logic optimization for low power using input-disabling Precomputation architectures
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 1998
    Co-Authors: José Monteiro, Srinivas Devadas, A. Ghosh
    Abstract:

    Precomputation is a recently proposed logic optimization technique which selectively disables the inputs of a logic circuit, thereby reducing switching activity and power dissipation without changing logic functionality. In sequential precomputation, output values required in a particular clock cycle are selectively precomputed one clock cycle earlier, and the original logic circuit is "turned off" in the succeeding clock cycle. We target a general precomputation architecture for sequential logic circuits and show that it is significantly more powerful than the architecture previously treated in the literature. The very power of this architecture makes the synthesis of precomputation logic a challenging problem; we present a method to automatically synthesize precomputation logic for this architecture. Up to 66% reduction in power dissipation is possible using the proposed architecture. For many examples, the proposed architecture results in significantly less power dissipation than previously developed methods.
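
    The input-disabling idea can be seen in a behavioral sketch. A classic illustration in the precomputation literature is an n-bit comparator: if the most significant bits of the operands differ, the output is already determined, so the remaining input registers need not be loaded in the next cycle. The Python below is a functional model of that selection logic (names are illustrative, not from the paper); it counts how often the main logic could stay off, not actual power.

    ```python
    import random

    def compare_with_precomputation(a: int, b: int, n: int = 8):
        # Behavioral model of input-disabling precomputation for an n-bit
        # "A > B" comparator.  One cycle early, only the MSBs are examined;
        # if they differ, the result is precomputed and the other n-1 bit
        # registers are disabled.
        msb = n - 1
        a_msb, b_msb = (a >> msb) & 1, (b >> msb) & 1
        if a_msb != b_msb:
            return a_msb > b_msb, True     # precomputed: main logic stays off
        return a > b, False                # MSBs agree: full evaluation needed

    random.seed(0)
    trials = 10_000
    disabled = 0
    for _ in range(trials):
        a, b = random.randrange(256), random.randrange(256)
        result, precomputed = compare_with_precomputation(a, b)
        assert result == (a > b)           # functionality is unchanged
        disabled += precomputed
    print(f"main logic disabled on {disabled / trials:.0%} of random cycles")
    ```

    For uniformly random operands the MSBs differ about half the time, which is the intuition behind the large switching-activity savings reported for datapath comparators; the 66% figure above comes from the paper's benchmarks, not from this toy model.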

  • Power optimization of combinational modules using self-timed Precomputation
    1998 IEEE International Symposium on Circuits and Systems (ISCAS), 1998
    Co-Authors: A. Mota, José Monteiro, A. Oliveira
    Abstract:

    Precomputation has recently been proposed as a very effective power management technique. It works by preventing some of the inputs from being loaded into the input registers, thus significantly reducing the switching activity in the circuit. In this paper we present a self-timed approach to the precomputation of combinational logic circuits. This technique allows for maximum power savings without the need for a clock signal; however, it may incur some delay penalty. We describe how to achieve significant power reductions without increasing the maximum delay by choosing a judicious placement of the latches in the combinational logic circuit. Experimental results are presented for arithmetic modules, confirming that power dissipation can be greatly reduced with marginal increases in circuit area and almost no increase in delay.

  • Computer-Aided Design Techniques for Low Power Sequential Logic Circuits
    1996
    Co-Authors: José Monteiro, Srinivas Devadas
    Abstract:

    1 Introduction: power as a design constraint; organization of the book.
    2 Power Estimation: power dissipation model; switching activity estimation (simulation-based techniques; issues in probabilistic estimation; probabilistic techniques).
    3 A Power Estimation Method for Combinational Circuits: symbolic simulation; transparent latches; modeling inertial delay; power estimation results.
    4 Power Estimation for Sequential Circuits: pipelines; finite state machines, exact method (modeling temporal correlation, state probability computation, power estimation given state probabilities); finite state machines, approximate method (computing present-state line probabilities, Picard-Peano and Newton-Raphson methods, improving accuracy with m-expanded and k-unrolled networks, redundant state lines); results on sequential power estimation; modeling correlation of input sequences (completely and incompletely specified sequences, assembly programs, experimental results).
    5 Optimization Techniques for Low Power Circuits: transistor sizing; combinational logic-level optimization (path balancing, don't-care optimization, logic factorization, technology mapping); sequential optimization (state encoding, encoding in the datapath, gated clocks).
    6 Retiming for Low Power: review of retiming (basic concepts, applications); retiming for low power (cost function, verifying a given clock period, retiming constraints, executing the retiming); experimental results.
    7 Precomputation: subset input disabling architecture (example, synthesis of precomputation logic, multiple-output functions, datapath module examples, multiple-cycle precomputation, experimental results); complete input disabling architecture (example, synthesis, simplifying the original combinational logic block, multiple-output functions, experimental results); combinational precomputation (precomputation at the inputs, precomputation for arbitrary sub-circuits, experimental results); multiplexor-based precomputation.
    8 High-Level Power Estimation and Optimization: register-transfer-level power estimation (functional modules, controller, interconnect); behavioral-level synthesis for low power (transformation, scheduling, and allocation techniques; register-transfer-level optimizations).
    9 Conclusion: power estimation at the logic level; optimization techniques at the logic level; estimation and optimization at the RT level.

  • Optimization of Combinational and Sequential Logic Circuits for Low Power Using Precomputation
    Conference on Advanced Research in VLSI, 1995
    Co-Authors: José Monteiro, Srinivas Devadas, Jochen Rinderknecht, A. Ghosh
    Abstract:

    Precomputation is a recently proposed logic optimization technique which selectively disables the inputs of a sequential logic circuit, thereby reducing switching activity and power dissipation without changing logic functionality. In this paper, we present new precomputation architectures for both combinational and sequential logic and describe new precomputation-based logic synthesis methods that optimize logic circuits for low power. We present a general precomputation architecture for sequential logic circuits and show that it is significantly more powerful than the architectures previously treated in the literature. In this architecture, output values required in a particular clock cycle are selectively precomputed one clock cycle earlier, and the original logic circuit is "turned off" in the succeeding clock cycle. The very power of this architecture makes the synthesis of precomputation logic a challenging problem, and we present a method to automatically synthesize precomputation logic for this architecture. We also introduce a powerful precomputation architecture for combinational logic circuits that uses transmission gates or transparent latches to disable parts of the logic. Unlike in the sequential-circuit architecture, precomputation occurs in an early portion of a clock cycle, and parts of the combinational logic circuit are "turned off" in a later portion of the same clock cycle. Further, we are not restricted to performing precomputation on the primary inputs. Preliminary results obtained using the described methods are presented: up to 66 percent reductions in switching activity and power dissipation are possible using the proposed architectures. For many examples, the proposed architectures result in significantly less power dissipation than previously developed methods.

Hong Wang - One of the best experts on this subject based on the ideXlab platform.

  • Mitosis: A Speculative Multithreaded Processor Based on Precomputation Slices
    IEEE Transactions on Parallel and Distributed Systems, 2008
    Co-Authors: Carlos Madriles, Dean M. Tullsen, Hong Wang, Carlos García-quiñones, Jesús Sánchez, Pedro Marcuello, Antonio González, John P. Shen
    Abstract:

    This paper presents the Mitosis framework, a combined hardware-software approach to speculative multithreading, even in the presence of frequent dependences among threads. Speculative multithreading increases single-threaded application performance by exploiting thread-level parallelism speculatively, that is, executing code in parallel even when the compiler or runtime system cannot guarantee that the parallelism exists. The proposed approach is based on predicting/computing thread input values in software through a piece of code that is added at the beginning of each thread (the precomputation slice). A precomputation slice is expected to compute the correct thread input values most of the time, but not necessarily always. This allows aggressive optimization techniques to be applied to the slice to make it very short. This paper focuses on the microarchitecture that supports this execution model. The primary novelty of the microarchitecture is the hardware support for the execution and validation of precomputation slices. Additionally, this paper presents new architectures for the register file and the cache memory in order to support multiple versions of each variable and allow for efficient rollback in case of misspeculation. We show that the proposed microarchitecture, together with the compiler support, achieves an average speedup of 2.2 for applications that conventional nonspeculative approaches are not able to parallelize at all.
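
    The slice-plus-validation loop can be caricatured in a few lines. The sketch below is a sequential Python model, not the Mitosis microarchitecture: `heavy_stage` stands in for a speculative thread's body, `next_live_in` for its p-slice, and the validation step compares the slice's prediction against the value the non-speculative thread eventually produces. Here the slice is exact, so nothing is squashed; a real, aggressively pruned slice may occasionally mispredict and trigger a rollback.

    ```python
    def heavy_stage(x: int) -> int:
        # Stand-in for the body of a speculative thread (the expensive work).
        return sum(i * x for i in range(1000)) % 97

    def next_live_in(x: int) -> int:
        # Stand-in for the p-slice: the few instructions that produce the
        # next thread's input value.  Mitosis allows this to be an
        # aggressively optimized approximation; here it is exact.
        return (x * 31 + 7) % 101

    def run_speculative(n_iters: int, x0: int):
        results, squashes = [], 0
        x = x0
        for _ in range(n_iters):
            predicted = next_live_in(x)           # p-slice runs early...
            spec_result = heavy_stage(predicted)  # ...so this work could run
                                                  # in parallel on another thread
            x = next_live_in(x)                   # non-speculative thread catches up
            if predicted == x:
                results.append(spec_result)       # live-in validated: commit
            else:
                squashes += 1                     # misprediction: squash, redo
                results.append(heavy_stage(x))
        return results, squashes
    ```

    The design point this illustrates: because mispredicted threads are squashed and re-executed, the slice does not need to be always correct, only short and usually correct.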

  • MICRO - Dynamic speculative Precomputation
    Proceedings. 34th ACM IEEE International Symposium on Microarchitecture. MICRO-34, 2001
    Co-Authors: J.d. Collins, Dean M. Tullsen, Hong Wang, J.p. Shen
    Abstract:

    A large number of memory accesses in memory-bound applications are irregular, such as pointer dereferences, and can be effectively targeted by thread-based prefetching techniques like Speculative Precomputation. These techniques execute instructions, for example on an available SMT thread context, that have been extracted directly from the program they are trying to accelerate. Previously proposed techniques typically require manual user intervention to extract and optimize instruction sequences. This paper proposes Dynamic Speculative Precomputation, which performs all necessary instruction analysis, extraction, and optimization through the use of back-end instruction analysis hardware located off the processor's critical path. For a set of memory-limited benchmarks, an average speedup of 14% is achieved when constructing simple p-slices, and this gain grows to 33% when making use of aggressive optimizations.

  • ISCA - Speculative Precomputation: long-range prefetching of delinquent loads
    Proceedings of the 28th annual international symposium on Computer architecture - ISCA '01, 2001
    Co-Authors: J.d. Collins, Dean M. Tullsen, Hong Wang, D. Lavery, Christopher J. Hughes, J.p. Shen
    Abstract:

    This paper explores Speculative Precomputation, a technique that uses idle thread contexts in a multithreaded architecture to improve the performance of single-threaded applications. It attacks program stalls from data cache misses by pre-computing future memory accesses in available thread contexts and prefetching these data. This technique is evaluated by simulating the performance of a research processor based on the Itanium™ ISA supporting Simultaneous Multithreading. Two primary forms of Speculative Precomputation are evaluated. If only the non-speculative thread spawns speculative threads, performance gains of up to 30% are achieved when assuming ideal hardware. However, this speedup drops considerably with more realistic hardware assumptions. Permitting speculative threads to directly spawn additional speculative threads reduces the overhead associated with spawning threads and enables significantly more aggressive speculation, overcoming this limitation. Even with realistic costs for spawning threads, speedups as high as 169% are achieved, with an average speedup of 76%.
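
    The mechanism can be illustrated with a toy cache model: a helper thread executes only the address-generating slice of the loop, running a fixed distance ahead of the main thread and prefetching each delinquent load's target. Everything below (the cache-as-set, the access pattern, the `lookahead` knob) is an illustrative simplification, not the simulated Itanium SMT hardware.

    ```python
    def simulate(chain, lookahead):
        # Toy model of speculative precomputation: the helper thread runs the
        # pointer-chasing slice `lookahead` nodes ahead of the main thread
        # and prefetches each node into the cache before the main thread
        # issues the delinquent load.
        cache = set()
        misses = 0
        for i, addr in enumerate(chain):
            if lookahead and i + lookahead < len(chain):
                cache.add(chain[i + lookahead])   # helper thread's prefetch
            if addr not in cache:
                misses += 1                       # main thread stalls on a miss
                cache.add(addr)
        return misses

    chain = [(i * 17) % 1009 for i in range(1009)]  # irregular access pattern
    print(simulate(chain, lookahead=0))  # cold cache: every load misses
    print(simulate(chain, lookahead=8))  # prefetching hides all but the first few
    ```

    Only the first `lookahead` accesses miss once prefetching is enabled, which is the sense in which running the slice far enough ahead converts long-latency stalls into hits.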

A. Ghosh - One of the best experts on this subject based on the ideXlab platform.

  • Sequential logic optimization for low power using input-disabling Precomputation architectures
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 1998
    Co-Authors: José Monteiro, Srinivas Devadas, A. Ghosh
    Abstract:

    Precomputation is a recently proposed logic optimization technique which selectively disables the inputs of a logic circuit, thereby reducing switching activity and power dissipation without changing logic functionality. In sequential precomputation, output values required in a particular clock cycle are selectively precomputed one clock cycle earlier, and the original logic circuit is "turned off" in the succeeding clock cycle. We target a general precomputation architecture for sequential logic circuits and show that it is significantly more powerful than the architecture previously treated in the literature. The very power of this architecture makes the synthesis of precomputation logic a challenging problem; we present a method to automatically synthesize precomputation logic for this architecture. Up to 66% reduction in power dissipation is possible using the proposed architecture. For many examples, the proposed architecture results in significantly less power dissipation than previously developed methods.

  • Optimization of Combinational and Sequential Logic Circuits for Low Power Using Precomputation
    Conference on Advanced Research in VLSI, 1995
    Co-Authors: José Monteiro, Srinivas Devadas, Jochen Rinderknecht, A. Ghosh
    Abstract:

    Precomputation is a recently proposed logic optimization technique which selectively disables the inputs of a sequential logic circuit, thereby reducing switching activity and power dissipation without changing logic functionality. In this paper, we present new precomputation architectures for both combinational and sequential logic and describe new precomputation-based logic synthesis methods that optimize logic circuits for low power. We present a general precomputation architecture for sequential logic circuits and show that it is significantly more powerful than the architectures previously treated in the literature. In this architecture, output values required in a particular clock cycle are selectively precomputed one clock cycle earlier, and the original logic circuit is "turned off" in the succeeding clock cycle. The very power of this architecture makes the synthesis of precomputation logic a challenging problem, and we present a method to automatically synthesize precomputation logic for this architecture. We also introduce a powerful precomputation architecture for combinational logic circuits that uses transmission gates or transparent latches to disable parts of the logic. Unlike in the sequential-circuit architecture, precomputation occurs in an early portion of a clock cycle, and parts of the combinational logic circuit are "turned off" in a later portion of the same clock cycle. Further, we are not restricted to performing precomputation on the primary inputs. Preliminary results obtained using the described methods are presented: up to 66 percent reductions in switching activity and power dissipation are possible using the proposed architectures. For many examples, the proposed architectures result in significantly less power dissipation than previously developed methods.

  • Precomputation-Based Sequential Logic Optimization for Low Power
    IEEE Transactions on Very Large Scale Integration Systems, 1994
    Co-Authors: Mazhar Alidina, José Monteiro, Srinivas Devadas, A. Ghosh, Marios C Papaefthymiou
    Abstract:

    We address the problem of optimizing logic-level sequential circuits for low power. We present a powerful sequential logic optimization method that is based on selectively precomputing the output logic values of the circuit one clock cycle before they are required, and using the precomputed values to reduce internal switching activity in the succeeding clock cycle. We present two different precomputation architectures which exploit this observation. The primary optimization step is the synthesis of the precomputation logic, which computes the output values for a subset of input conditions. If the output values can be precomputed, the original logic circuit can be "turned off" in the next clock cycle and will have substantially reduced switching activity. The size of the precomputation logic determines the power dissipation reduction, area increase, and delay increase relative to the original circuit. Given a logic-level sequential circuit, we present an automatic method of synthesizing precomputation logic so as to achieve maximal reductions in power dissipation. We present experimental results on various sequential circuits. Up to 75% reductions in average switching activity and power dissipation are possible with marginal increases in circuit area and delay.

Ghassan Hamarneh - One of the best experts on this subject based on the ideXlab platform.

  • ISBI - Fast random walker image registration using Precomputation
    2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI), 2014
    Co-Authors: Shawn Andrews, Lisa Tang, Ghassan Hamarneh
    Abstract:

    In this paper, we introduce an extension to the random walker image registration method designed to increase the speed at which a registration is performed. Our method involves precomputing data from one of the images being registered while anticipating the acquisition of the second image, and then using this precomputed data to approximate the final transformation once the second image becomes available. The precomputation scheme incorporates a parameter controlling the trade-off between registration speed and accuracy that can be tuned when the registration is performed. Our results show that with precomputation, random walker image registration runs 3 to 10 times faster on volumetric images with only a 3% to 20% loss in registration accuracy.
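
    The precompute-then-finish pattern can be sketched with linear algebra. Random walker methods reduce to solving linear systems in the image's graph Laplacian; one way to shift that cost offline (an assumption of this sketch, standing in for whatever precomputation the authors actually store) is to compute a truncated eigenbasis of the first image's Laplacian ahead of time and project each online system onto it. The `k` retained modes play the role of the paper's speed/accuracy trade-off parameter.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 40, 10                     # n graph nodes; keep k precomputed modes

    # Offline phase: the first image is available long before the second, so
    # we precompute a truncated eigenbasis of its graph Laplacian L = D - W.
    W = rng.random((n, n))
    W = (W + W.T) / 2                 # symmetric edge weights for a toy graph
    np.fill_diagonal(W, 0)
    L = np.diag(W.sum(axis=1)) - W
    evals, evecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    U, S = evecs[:, :k], evals[:k]    # the k smoothest modes

    # Online phase: random walker formulations reduce to solving a system of
    # the form (L + c*I) x = b once the second image fixes b.  In the
    # precomputed basis this costs a few small matrix-vector products
    # instead of a fresh factorization of L.
    c, b = 0.5, rng.random(n)
    x_fast = U @ ((U.T @ b) / (S + c))
    x_exact = np.linalg.solve(L + c * np.eye(n), b)
    ```

    Increasing `k` shrinks the approximation error at the cost of more online work, mirroring the reported 3-10x speedup against a 3-20% accuracy loss; with `k = n` the projected solve is exact.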

  • Fast Random Walker with Priors Using Precomputation for Interactive Medical Image Segmentation
    Medical Image Computing and Computer-Assisted Intervention, 2010
    Co-Authors: Shawn Andrews, Ghassan Hamarneh, Ahmed Saad
    Abstract:

    Updating segmentation results in real time based on repeated user input is a reliable way to guarantee accuracy, paramount in medical imaging applications, while making efficient use of an expert's time. The random walker algorithm with priors is a robust method able to find a globally optimal probabilistic segmentation with an intuitive method for user input. However, like many other segmentation algorithms, it can be too slow for real-time user interaction. We propose a speedup to this popular algorithm based on offline precomputation, taking advantage of the time images are stored on servers prior to an analysis session. Our results demonstrate the benefits of our approach. For example, the segmentations found by the original random walker and by our new precomputation method for a given 3D image have a Dice similarity coefficient of 0.975, yet our method runs in 1/25th of the time.
