Hardware Fault

The Experts below are selected from a list of 29,037 Experts worldwide, ranked by the ideXlab platform.

Tudor Dumitras - One of the best experts on this subject based on the ideXlab platform.

  • Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
    USENIX Security Symposium, 2019
    Co-Authors: Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, Tudor Dumitras
    Abstract:

    Deep neural networks (DNNs) have been shown to tolerate "brain damage": cumulative changes to the network's parameters (e.g., pruning, numerical perturbations) typically result in a graceful degradation of classification accuracy. However, the limits of this natural resilience are not well understood in the presence of small adversarial changes to the DNN parameters' underlying memory representation, such as bit-flips that may be induced by Hardware Fault attacks. We study the effects of bitwise corruptions on 19 DNN models--six architectures on three image classification tasks--and we show that most models have at least one parameter that, after a specific bit-flip in their bitwise representation, causes an accuracy loss of over 90%. For large models, we employ simple heuristics to identify the parameters likely to be vulnerable and estimate that 40-50% of the parameters in a model might lead to an accuracy drop greater than 10% when individually subjected to such single-bit perturbations. To demonstrate how an adversary could take advantage of this vulnerability, we study the impact of an exemplary Hardware Fault attack, Rowhammer, on DNNs. Specifically, we show that a Rowhammer-enabled attacker co-located in the same physical machine can inflict significant accuracy drops (up to 99%) even with single bit-flip corruptions and no knowledge of the model. Our results expose the limits of DNNs' resilience against parameter perturbations induced by real-world Fault attacks. We conclude by discussing possible mitigations and future research directions towards Fault attack-resilient DNNs.
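
    The single-bit corruptions described above act on the IEEE-754 encoding of a parameter. As a rough, hypothetical illustration (a minimal Python sketch, not the authors' code), the snippet below flips one bit of a float32 weight and shows how a flip of the top exponent bit can turn a small weight into an astronomically large one, the kind of change that can cripple a model's accuracy:

    ```python
    # Hypothetical sketch: flip a chosen bit of a float32 value (bit 0 = LSB,
    # bit 31 = sign). Flipping bit 30 (the exponent's MSB) of a small positive
    # weight multiplies its magnitude by roughly 2^128.
    import struct

    def flip_bit(value: float, bit: int) -> float:
        (as_int,) = struct.unpack("<I", struct.pack("<f", value))
        (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
        return flipped

    w = 0.037                  # an arbitrary small weight (assumed value)
    print(flip_bit(w, 30))     # ~1.3e37: one bit-flip, an enormous weight
    ```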

  • Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
    arXiv: Cryptography and Security, 2019
    Co-Authors: Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, Tudor Dumitras
    Abstract:

    Deep neural networks (DNNs) have been shown to tolerate "brain damage": cumulative changes to the network's parameters (e.g., pruning, numerical perturbations) typically result in a graceful degradation of classification accuracy. However, the limits of this natural resilience are not well understood in the presence of small adversarial changes to the DNN parameters' underlying memory representation, such as bit-flips that may be induced by Hardware Fault attacks. We study the effects of bitwise corruptions on 19 DNN models---six architectures on three image classification tasks---and we show that most models have at least one parameter that, after a specific bit-flip in their bitwise representation, causes an accuracy loss of over 90%. We employ simple heuristics to efficiently identify the parameters likely to be vulnerable. We estimate that 40-50% of the parameters in a model might lead to an accuracy drop greater than 10% when individually subjected to such single-bit perturbations. To demonstrate how an adversary could take advantage of this vulnerability, we study the impact of an exemplary Hardware Fault attack, Rowhammer, on DNNs. Specifically, we show that a Rowhammer-enabled attacker co-located in the same physical machine can inflict significant accuracy drops (up to 99%) even with single bit-flip corruptions and no knowledge of the model. Our results expose the limits of DNNs' resilience against parameter perturbations induced by real-world Fault attacks. We conclude by discussing possible mitigations and future research directions towards Fault attack-resilient DNNs.
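
    The "simple heuristics" are not spelled out in this abstract; one plausible heuristic in that spirit (an assumption for illustration, not necessarily the authors' exact criterion) is to flag parameters whose float32 exponent MSB is 0, since flipping that bit to 1 inflates the parameter's magnitude by a factor of about 2^128:

    ```python
    # Hypothetical vulnerability heuristic (illustrative assumption, not the
    # paper's exact rule): flag float32 parameters whose exponent MSB (bit 30)
    # is 0, because a 0 -> 1 flip there scales the magnitude by ~2^128.
    import numpy as np

    def likely_vulnerable(weights: np.ndarray) -> np.ndarray:
        bits = weights.astype(np.float32).view(np.uint32)
        exponent_msb = (bits >> 30) & 1
        return exponent_msb == 0

    w = np.array([0.01, -0.5, 2.5e38, 1.2], dtype=np.float32)
    print(likely_vulnerable(w))    # [ True  True False  True]
    ```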

Sanghyun Hong - One of the best experts on this subject based on the ideXlab platform.

  • Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
    USENIX Security Symposium, 2019
    Co-Authors: Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, Tudor Dumitras
    Abstract:

    Deep neural networks (DNNs) have been shown to tolerate "brain damage": cumulative changes to the network's parameters (e.g., pruning, numerical perturbations) typically result in a graceful degradation of classification accuracy. However, the limits of this natural resilience are not well understood in the presence of small adversarial changes to the DNN parameters' underlying memory representation, such as bit-flips that may be induced by Hardware Fault attacks. We study the effects of bitwise corruptions on 19 DNN models--six architectures on three image classification tasks--and we show that most models have at least one parameter that, after a specific bit-flip in their bitwise representation, causes an accuracy loss of over 90%. For large models, we employ simple heuristics to identify the parameters likely to be vulnerable and estimate that 40-50% of the parameters in a model might lead to an accuracy drop greater than 10% when individually subjected to such single-bit perturbations. To demonstrate how an adversary could take advantage of this vulnerability, we study the impact of an exemplary Hardware Fault attack, Rowhammer, on DNNs. Specifically, we show that a Rowhammer-enabled attacker co-located in the same physical machine can inflict significant accuracy drops (up to 99%) even with single bit-flip corruptions and no knowledge of the model. Our results expose the limits of DNNs' resilience against parameter perturbations induced by real-world Fault attacks. We conclude by discussing possible mitigations and future research directions towards Fault attack-resilient DNNs.

  • Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
    arXiv: Cryptography and Security, 2019
    Co-Authors: Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, Tudor Dumitras
    Abstract:

    Deep neural networks (DNNs) have been shown to tolerate "brain damage": cumulative changes to the network's parameters (e.g., pruning, numerical perturbations) typically result in a graceful degradation of classification accuracy. However, the limits of this natural resilience are not well understood in the presence of small adversarial changes to the DNN parameters' underlying memory representation, such as bit-flips that may be induced by Hardware Fault attacks. We study the effects of bitwise corruptions on 19 DNN models---six architectures on three image classification tasks---and we show that most models have at least one parameter that, after a specific bit-flip in their bitwise representation, causes an accuracy loss of over 90%. We employ simple heuristics to efficiently identify the parameters likely to be vulnerable. We estimate that 40-50% of the parameters in a model might lead to an accuracy drop greater than 10% when individually subjected to such single-bit perturbations. To demonstrate how an adversary could take advantage of this vulnerability, we study the impact of an exemplary Hardware Fault attack, Rowhammer, on DNNs. Specifically, we show that a Rowhammer-enabled attacker co-located in the same physical machine can inflict significant accuracy drops (up to 99%) even with single bit-flip corruptions and no knowledge of the model. Our results expose the limits of DNNs' resilience against parameter perturbations induced by real-world Fault attacks. We conclude by discussing possible mitigations and future research directions towards Fault attack-resilient DNNs.

Andy M Tyrrell - One of the best experts on this subject based on the ideXlab platform.

  • Hardware Fault Tolerance: An Immunological Solution
    Systems, Man and Cybernetics, 2000
    Co-Authors: D W Bradley, Andy M Tyrrell
    Abstract:

    Since the advent of computers, numerous approaches have been taken to create Hardware systems that provide a high degree of reliability even in the presence of errors. This paper addresses the problem from a biological perspective, using the human immune system as a source of inspiration. The immune system uses many ingenious methods to provide reliable operation in the body and so may suggest how similar methods can be used in the future design of reliable computer systems. The paper addresses this challenge through the implementation of an immunised finite state machine-based counter. The proposed methods demonstrate how, through a process of self/non-self differentiation, the Hardware immune system creates a set of tolerance conditions to monitor the change in states of the Hardware. Potential Faults may then be flagged, assessed, and the appropriate recovery action taken.
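
    As a hedged illustration of the self/non-self idea (a minimal Python sketch of the concept, not the authors' Hardware implementation), the fragment below builds the tolerance conditions for a small modulo-N counter, the set of transitions a fault-free counter can make, and flags any observed transition that falls outside that set:

    ```python
    # Illustrative sketch of an "immunised" counter: the set of valid ("self")
    # state transitions is learned up front; any other transition is flagged
    # as non-self, i.e. a potential Fault.
    class ImmunisedCounter:
        def __init__(self, modulus: int):
            self.self_set = {(s, (s + 1) % modulus) for s in range(modulus)}
            self.state = 0

        def observe(self, next_state: int) -> bool:
            """Return True if the observed transition is valid ('self')."""
            ok = (self.state, next_state) in self.self_set
            self.state = next_state
            return ok

    counter = ImmunisedCounter(modulus=4)
    print(counter.observe(1))   # True:  0 -> 1 is a valid transition
    print(counter.observe(3))   # False: 1 -> 3 flags a potential Fault
    ```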

  • Immunotronics: Hardware Fault Tolerance Inspired by the Immune System
    International Conference on Evolvable Systems, 2000
    Co-Authors: D W Bradley, Andy M Tyrrell
    Abstract:

    A novel approach to Hardware Fault tolerance is proposed that takes inspiration from the human immune system as a method of Fault detection and removal. The immune system has inspired work within the areas of virus protection and pattern recognition, yet its application to Hardware Fault tolerance remains untouched. This paper introduces many of the ingenious methods provided by the immune system to provide reliable operation and suggests how such concepts can inspire novel methods of providing Fault tolerance in the design of state machine Hardware systems. Through a process of self/non-self recognition, the proposed Hardware immune system will learn to differentiate between acceptable and abnormal states and transitions within the 'immunised' system. Potential Faults can then be flagged and suitable recovery methods invoked to return the system to a safe state.

Vidroha Debroy - One of the best experts on this subject based on the ideXlab platform.

  • Mutant Generation for Embedded Systems Using Kernel-Based Software and Hardware Fault Simulation
    Information & Software Technology, 2011
    Co-Authors: Ahyoung Sung, Byoungju Choi, W. Eric Wong, Vidroha Debroy
    Abstract:

    Context: Mutation testing is a Fault-injection-based technique to help testers generate test cases for detecting specific and predetermined types of Faults. Objective: Before mutation testing can be effectively applied to embedded systems, traditional mutation testing needs to be modified. Injecting a Fault into an embedded system without causing any system failure or Hardware damage is a challenging task, as it requires some knowledge of the underlying layers such as the kernel and the corresponding Hardware. Method: We propose a set of mutation operators for embedded systems using kernel-based software and Hardware Fault simulation. These operators are designed for software developers so that they can use the mutation technique to test the entire system after the software is integrated with the kernel and Hardware devices. Results: A case study on a programmable logic controller for a digital reactor protection system in a nuclear power plant is conducted. Our results suggest that the proposed mutation operators are useful for Fault injection, as evidenced by the fact that Faults not injected by us were discovered in the subject software during the case study. Conclusion: We conclude that our mutation operators are useful for integration testing of an embedded system.
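
    To make the idea of a Fault-simulating mutant concrete, here is a hedged, self-contained sketch (all names are hypothetical; these are not the paper's operators): the mutant replaces a device read with a stuck value, simulating a Hardware Fault without touching real Hardware, and a good test suite should contain at least one test that "kills" the mutant by detecting the difference:

    ```python
    # Hypothetical example of a Hardware-Fault-simulating mutant (illustrative
    # only): the mutant makes the sensor read return a stuck-at value, so tests
    # can check whether the application-level Fault handling reacts to it.
    class FakeSensor:
        def read(self):
            return 42                  # normal temperature reading

    def read_sensor(sensor):
        return sensor.read()           # original behaviour

    def read_sensor_mutant(sensor):
        return 0xFF                    # mutant: simulated stuck-at-0xFF bus value

    def check_temperature(read, sensor, limit=85):
        return "FAULT" if read(sensor) >= limit else "OK"

    sensor = FakeSensor()
    print(check_temperature(read_sensor, sensor))         # OK
    print(check_temperature(read_sensor_mutant, sensor))  # FAULT: a test asserting
    # "OK" here would kill this mutant
    ```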

Pietro Frigo - One of the best experts on this subject based on the ideXlab platform.

  • Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
    USENIX Security Symposium, 2019
    Co-Authors: Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, Tudor Dumitras
    Abstract:

    Deep neural networks (DNNs) have been shown to tolerate "brain damage": cumulative changes to the network's parameters (e.g., pruning, numerical perturbations) typically result in a graceful degradation of classification accuracy. However, the limits of this natural resilience are not well understood in the presence of small adversarial changes to the DNN parameters' underlying memory representation, such as bit-flips that may be induced by Hardware Fault attacks. We study the effects of bitwise corruptions on 19 DNN models--six architectures on three image classification tasks--and we show that most models have at least one parameter that, after a specific bit-flip in their bitwise representation, causes an accuracy loss of over 90%. For large models, we employ simple heuristics to identify the parameters likely to be vulnerable and estimate that 40-50% of the parameters in a model might lead to an accuracy drop greater than 10% when individually subjected to such single-bit perturbations. To demonstrate how an adversary could take advantage of this vulnerability, we study the impact of an exemplary Hardware Fault attack, Rowhammer, on DNNs. Specifically, we show that a Rowhammer-enabled attacker co-located in the same physical machine can inflict significant accuracy drops (up to 99%) even with single bit-flip corruptions and no knowledge of the model. Our results expose the limits of DNNs' resilience against parameter perturbations induced by real-world Fault attacks. We conclude by discussing possible mitigations and future research directions towards Fault attack-resilient DNNs.

  • Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
    arXiv: Cryptography and Security, 2019
    Co-Authors: Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, Tudor Dumitras
    Abstract:

    Deep neural networks (DNNs) have been shown to tolerate "brain damage": cumulative changes to the network's parameters (e.g., pruning, numerical perturbations) typically result in a graceful degradation of classification accuracy. However, the limits of this natural resilience are not well understood in the presence of small adversarial changes to the DNN parameters' underlying memory representation, such as bit-flips that may be induced by Hardware Fault attacks. We study the effects of bitwise corruptions on 19 DNN models---six architectures on three image classification tasks---and we show that most models have at least one parameter that, after a specific bit-flip in their bitwise representation, causes an accuracy loss of over 90%. We employ simple heuristics to efficiently identify the parameters likely to be vulnerable. We estimate that 40-50% of the parameters in a model might lead to an accuracy drop greater than 10% when individually subjected to such single-bit perturbations. To demonstrate how an adversary could take advantage of this vulnerability, we study the impact of an exemplary Hardware Fault attack, Rowhammer, on DNNs. Specifically, we show that a Rowhammer-enabled attacker co-located in the same physical machine can inflict significant accuracy drops (up to 99%) even with single bit-flip corruptions and no knowledge of the model. Our results expose the limits of DNNs' resilience against parameter perturbations induced by real-world Fault attacks. We conclude by discussing possible mitigations and future research directions towards Fault attack-resilient DNNs.