Safety Analysis

The experts below are selected from a list of 608,754 experts worldwide, ranked by the ideXlab platform.

Robyn R Lutz - One of the best experts on this subject based on the ideXlab platform.

  • Safety analysis of software product lines using state-based modeling
    Journal of Systems and Software, 2007
    Co-Authors: Josh Dehlinger, Robyn R Lutz
    Abstract:

    The difficulty of managing variations and their potential interactions across an entire product line currently hinders safety analysis in safety-critical software product lines. The work described here contributes to a solution by integrating product-line safety analysis with model-based development. This approach provides a structured way to construct state-based models of a product line having significant safety-related variations, and to systematically explore the relationships between behavioral variations and potentially hazardous states through scenario-guided executions of the state model over the variations. The paper uses a product line of safety-critical medical devices to demonstrate and evaluate the technique and results.
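
    To make the approach concrete, here is a minimal sketch of scenario-guided execution over a product-line state model: each product-line member gets a transition relation derived from its selected features, scenarios are driven through each configuration, and entries into designated hazard states are flagged. The infusion-pump states, events, and the auto_resume variation point are invented for illustration; they are not the paper's actual model or tooling.

```python
# Minimal sketch: a state-based product-line model with one variation point,
# exercised by scenario-guided execution to surface hazardous states.
# Pump states, events, and the feature are hypothetical, not from the paper.

HAZARDOUS = {"infusing_unconfirmed"}  # states treated as hazards in this sketch

def make_transitions(has_auto_resume: bool):
    """Build the transition relation for one product-line member.

    `has_auto_resume` is the variation point: an optional feature that
    restarts infusion after an alarm without operator confirmation.
    """
    t = {
        ("idle", "start"): "infusing",
        ("infusing", "occlusion"): "alarm",
        ("alarm", "confirm"): "infusing",
        ("alarm", "stop"): "idle",
    }
    if has_auto_resume:
        # The optional feature adds a transition that bypasses confirmation.
        t[("alarm", "timeout")] = "infusing_unconfirmed"
    return t

def run_scenario(transitions, events, start="idle"):
    """Execute one scenario and report any hazardous states reached."""
    state, trace = start, [start]
    for e in events:
        state = transitions.get((state, e), state)  # ignore non-enabled events
        trace.append(state)
    return trace, [s for s in trace if s in HAZARDOUS]

scenario = ["start", "occlusion", "timeout"]
for feature in (False, True):
    trace, hazards = run_scenario(make_transitions(feature), scenario)
    print(f"auto_resume={feature}: {' -> '.join(trace)}; hazards: {hazards or 'none'}")
```

    Running the same scenario against both configurations shows how a single optional feature can open a path to a hazardous state that the base product cannot reach, which is exactly the kind of variation-to-hazard relationship the technique explores.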

  • Safety analysis of software product lines using state-based modeling
    International Symposium on Software Reliability Engineering, 2005
    Co-Authors: Josh Dehlinger, Robyn R Lutz
    Abstract:

    The analysis and management of variations (such as optional features) are central to the development of safety-critical software product lines. However, the difficulty of managing variations, and the potential interactions among them, across an entire product line currently hinders safety analysis in such systems. The work described here contributes to a solution by integrating safety analysis of a product line with model-based development. This approach provides a structured way to construct a state-based model of a product line having significant safety-related variations. The process uses and extends previous work on product-line software fault tree analysis to explore hazard-prone variation points, and then uses scenario-guided executions to exercise the state model over the variations as a means of validating the product-line safety properties. Using an available tool, relationships between behavioral variations and potentially hazardous states are systematically explored, and mitigation steps are identified. The paper uses a product line of embedded medical devices to demonstrate and evaluate the process and results.
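
    The fault-tree side of the process can be sketched in the same spirit, assuming a simple AND/OR tree whose leaves are guarded by variation points: pruning the guarded leaves per configuration shows which products a given failure combination can actually affect. The gate structure, event names, and the auto_resume feature below are hypothetical, not taken from the paper.

```python
# Hedged sketch in the style of product-line software fault tree analysis:
# a fault tree whose leaves may be guarded by variation points, evaluated
# per product configuration. Gates and event names are invented.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    name: str
    gate: Optional[str] = None               # "AND"/"OR" for gates, None for leaves
    children: List["Node"] = field(default_factory=list)
    requires_feature: Optional[str] = None   # leaf exists only if feature selected

def occurs(node, failed_events, features):
    """Does the top event occur, given failed basic events and selected features?"""
    if node.gate is None:  # basic event
        if node.requires_feature and node.requires_feature not in features:
            return False   # variation point prunes this leaf from the product
        return node.name in failed_events
    results = [occurs(c, failed_events, features) for c in node.children]
    return all(results) if node.gate == "AND" else any(results)

top = Node("overdose", "OR", [
    Node("dose_calc_error"),
    Node("unconfirmed_resume", "AND", [
        Node("alarm_timeout", requires_feature="auto_resume"),
        Node("operator_absent"),
    ]),
])

failed = {"alarm_timeout", "operator_absent"}
for features in (set(), {"auto_resume"}):
    print(features or "{}", "->",
          "top event occurs" if occurs(top, failed, features)
          else "top event does not occur")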

Suman Jana - One of the best experts on this subject based on the ideXlab platform.

  • Efficient formal safety analysis of neural networks
    arXiv preprint (cs.LG), 2018
    Co-Authors: Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana
    Abstract:

    Neural networks are increasingly deployed in real-world safety-critical domains such as autonomous driving, aircraft collision avoidance, and malware detection. However, these networks have been shown to often mispredict on inputs with minor adversarial or even accidental perturbations. The consequences of such errors can be disastrous, and even potentially fatal, as shown by the recent Tesla Autopilot crash. Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties, such as robustness against adversarial perturbations within a certain L-norm of a given image. An effective safety analysis system for a neural network must be able either to ensure that a safety property is satisfied by the network or to find a counterexample, i.e., an input for which the network violates the property. Unfortunately, most existing techniques for performing such analysis struggle to scale beyond very small networks, and those that can scale to larger networks suffer from high false positive rates and cannot produce concrete counterexamples when a property is violated. In this paper, we present a new, efficient approach for rigorously checking different safety properties of neural networks that outperforms existing approaches by multiple orders of magnitude. Our approach can check different safety properties and find concrete counterexamples for networks 10× larger than those supported by existing analysis techniques. We believe that our approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training of more robust neural networks.
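
    The "tight output bounds" idea can be illustrated with plain interval arithmetic, the coarse baseline that the paper's symbolic analysis substantially tightens: propagate an input box through the layers, then compare the output bounds against the safety property. The two-layer network, random weights, and the property below are invented for illustration, not the paper's benchmarks.

```python
# Simplified sketch of bound propagation for neural-network safety checking:
# naive interval arithmetic through a small ReLU network. The paper's actual
# method is far tighter; this only shows the certify-or-refine workflow.
import numpy as np

def interval_forward(layers, lo, hi):
    """Propagate an input box [lo, hi] through (W, b) layers with ReLU."""
    for i, (W, b) in enumerate(layers):
        pos, neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = pos @ lo + neg @ hi + b   # lower bound of W x + b over the box
        new_hi = pos @ hi + neg @ lo + b   # upper bound of W x + b over the box
        if i < len(layers) - 1:            # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        lo, hi = new_lo, new_hi
    return lo, hi

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), rng.normal(size=8)),
          (rng.normal(size=(2, 8)), rng.normal(size=2))]

x = np.array([0.5, -0.2, 0.1, 0.9])
eps = 0.05                                 # L-infinity perturbation budget
lo, hi = interval_forward(layers, x - eps, x + eps)

# Example safety property: output 0 always exceeds output 1 over the box.
# Certified if lo[0] > hi[1]; otherwise the bounds are inconclusive.
print("output bounds:", list(zip(lo.round(3), hi.round(3))))
print("property certified:", bool(lo[0] > hi[1]))
```

    If the bounds are too loose to certify the property, a real analyzer would refine them, for example by splitting the input range, or search the box for a concrete counterexample rather than reporting a false positive.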

  • Efficient formal safety analysis of neural networks
    Neural Information Processing Systems, 2018
    Co-Authors: Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana
    Abstract:

    Neural networks are increasingly deployed in real-world safety-critical domains such as autonomous driving, aircraft collision avoidance, and malware detection. However, these networks have been shown to often mispredict on inputs with minor adversarial or even accidental perturbations. The consequences of such errors can be disastrous, and even potentially fatal, as shown by the recent Tesla Autopilot crash. Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties, such as robustness against adversarial perturbations within a certain L-norm of a given image. An effective safety analysis system for a neural network must be able either to ensure that a safety property is satisfied by the network or to find a counterexample, i.e., an input for which the network violates the property. Unfortunately, most existing techniques for performing such analysis struggle to scale beyond very small networks, and those that can scale to larger networks suffer from high false positive rates and cannot produce concrete counterexamples when a property is violated. In this paper, we present a new, efficient approach for rigorously checking different safety properties of neural networks that outperforms existing approaches by multiple orders of magnitude. Our approach can check different safety properties and find concrete counterexamples for networks 10× larger than those supported by existing analysis techniques. We believe that our approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training of more robust neural networks.

Josh Dehlinger - One of the best experts on this subject based on the ideXlab platform.

  • Safety analysis of software product lines using state-based modeling
    Journal of Systems and Software, 2007
    Co-Authors: Josh Dehlinger, Robyn R Lutz
    Abstract:

    The difficulty of managing variations and their potential interactions across an entire product line currently hinders safety analysis in safety-critical software product lines. The work described here contributes to a solution by integrating product-line safety analysis with model-based development. This approach provides a structured way to construct state-based models of a product line having significant safety-related variations, and to systematically explore the relationships between behavioral variations and potentially hazardous states through scenario-guided executions of the state model over the variations. The paper uses a product line of safety-critical medical devices to demonstrate and evaluate the technique and results.

  • Safety analysis of software product lines using state-based modeling
    International Symposium on Software Reliability Engineering, 2005
    Co-Authors: Josh Dehlinger, Robyn R Lutz
    Abstract:

    The analysis and management of variations (such as optional features) are central to the development of safety-critical software product lines. However, the difficulty of managing variations, and the potential interactions among them, across an entire product line currently hinders safety analysis in such systems. The work described here contributes to a solution by integrating safety analysis of a product line with model-based development. This approach provides a structured way to construct a state-based model of a product line having significant safety-related variations. The process uses and extends previous work on product-line software fault tree analysis to explore hazard-prone variation points, and then uses scenario-guided executions to exercise the state model over the variations as a means of validating the product-line safety properties. Using an available tool, relationships between behavioral variations and potentially hazardous states are systematically explored, and mitigation steps are identified. The paper uses a product line of embedded medical devices to demonstrate and evaluate the process and results.

Mats P E Heimdahl - One of the best experts on this subject based on the ideXlab platform.

  • Model-based safety analysis of Simulink models using SCADE Design Verifier
    International Conference on Computer Safety, Reliability, and Security (SAFECOMP), 2005
    Co-Authors: Anjali Joshi, Mats P E Heimdahl
    Abstract:

    Safety analysis techniques have traditionally been performed manually by safety engineers. Because these analyses are based on an informal model of the system, they are unlikely to be complete, consistent, and error-free. Using precise formal models of the system as the basis of the analysis may help reduce errors and provide a more thorough analysis. Further, such models enable automated analysis, which may reduce the manual effort required. The process of creating system models suitable for safety analysis closely parallels the model-based development process increasingly used for critical system and software development. By leveraging existing tools and techniques, we can create formal safety models in tools that are familiar to engineers and reuse the static analysis infrastructure available for those tools. This paper reports our initial experience with model-based safety analysis on an example system taken from the ARP safety assessment guidelines document.
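
    As a toy illustration of what automated analysis over a formal model buys: the sketch below exhaustively enumerates the reachable states of a small hand-written transition system and checks a safety invariant, returning a counterexample path if one exists. SCADE Design Verifier is a proprietary proof engine, so this breadth-first reachability check and the tank-controller model are stand-ins chosen only to show the workflow.

```python
# Toy stand-in for automated safety analysis of a formal model: exhaustive
# (explicit-state) reachability checking of a safety invariant. The model
# and the invariant are assumptions made purely for illustration.
from collections import deque

def successors(state):
    """Hypothetical model: (valve_open, level). The pump fills while the
    valve is open; the controller should close it before overflow."""
    valve, level = state
    if valve:
        return [(level + 1 < 3, level + 1)]  # controller closes valve near limit
    return [(True, max(level - 1, 0))]       # drain, then reopen

def check_invariant(init, invariant):
    """BFS over reachable states; return a counterexample path or None."""
    frontier, parent = deque([init]), {init: None}
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            path = []
            while s is not None:             # reconstruct the violating path
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                frontier.append(t)
    return None                              # invariant holds on all states

cex = check_invariant((False, 0), invariant=lambda s: s[1] <= 3)  # never overflow
print("invariant holds" if cex is None else f"counterexample: {cex}")
```

    A real model checker proves the same kind of property symbolically over far larger state spaces, but the output is analogous: either a proof that the safety property holds or a concrete trace showing how it can be violated.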

Shiqi Wang - One of the best experts on this subject based on the ideXlab platform.

  • Efficient formal safety analysis of neural networks
    arXiv preprint (cs.LG), 2018
    Co-Authors: Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana
    Abstract:

    Neural networks are increasingly deployed in real-world safety-critical domains such as autonomous driving, aircraft collision avoidance, and malware detection. However, these networks have been shown to often mispredict on inputs with minor adversarial or even accidental perturbations. The consequences of such errors can be disastrous, and even potentially fatal, as shown by the recent Tesla Autopilot crash. Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties, such as robustness against adversarial perturbations within a certain L-norm of a given image. An effective safety analysis system for a neural network must be able either to ensure that a safety property is satisfied by the network or to find a counterexample, i.e., an input for which the network violates the property. Unfortunately, most existing techniques for performing such analysis struggle to scale beyond very small networks, and those that can scale to larger networks suffer from high false positive rates and cannot produce concrete counterexamples when a property is violated. In this paper, we present a new, efficient approach for rigorously checking different safety properties of neural networks that outperforms existing approaches by multiple orders of magnitude. Our approach can check different safety properties and find concrete counterexamples for networks 10× larger than those supported by existing analysis techniques. We believe that our approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training of more robust neural networks.

  • Efficient formal safety analysis of neural networks
    Neural Information Processing Systems, 2018
    Co-Authors: Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, Suman Jana
    Abstract:

    Neural networks are increasingly deployed in real-world safety-critical domains such as autonomous driving, aircraft collision avoidance, and malware detection. However, these networks have been shown to often mispredict on inputs with minor adversarial or even accidental perturbations. The consequences of such errors can be disastrous, and even potentially fatal, as shown by the recent Tesla Autopilot crash. Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties, such as robustness against adversarial perturbations within a certain L-norm of a given image. An effective safety analysis system for a neural network must be able either to ensure that a safety property is satisfied by the network or to find a counterexample, i.e., an input for which the network violates the property. Unfortunately, most existing techniques for performing such analysis struggle to scale beyond very small networks, and those that can scale to larger networks suffer from high false positive rates and cannot produce concrete counterexamples when a property is violated. In this paper, we present a new, efficient approach for rigorously checking different safety properties of neural networks that outperforms existing approaches by multiple orders of magnitude. Our approach can check different safety properties and find concrete counterexamples for networks 10× larger than those supported by existing analysis techniques. We believe that our approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training of more robust neural networks.