Threat Model

The Experts below are selected from a list of 91,059 Experts worldwide, ranked by the ideXlab platform

Jean Everson Martina - One of the best experts on this subject based on the ideXlab platform.

  • Threat Modelling Service Security as a Security Ceremony
    2016 11th International Conference on Availability, Reliability and Security (ARES), 2016
    Co-Authors: Taciane Martimiano, Jean Everson Martina
    Abstract:

    Security ceremonies are extensions of security protocols. One goal of ceremony designers is to be able to use symbolic evaluation methods to verify the claims embedded in ceremonies. Unfortunately, some pieces are still missing for that, such as a base description language and a threat model tailored to security ceremonies. Our contributions in this paper are: a proposal for message description syntax, an augmented threat model that encompasses the subtleties of security ceremonies, and a strategy for symbolic evaluation using First-Order Logic (FOL) and an automatic theorem prover. Furthermore, we propose a new threat model named Distributed Attacker (DA), which uses the adaptive threat model proposed by Carlos et al. and the Security Ceremony Concertina Traversal layers proposed by Bella et al. As a result, we present scenarios that can be formally analysed with our proposal.
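
A minimal sketch of the symbolic-evaluation strategy mentioned in the abstract above, assuming (this is not taken from the paper) that ceremony messages and attacker rules are emitted as first-order axioms in TPTP syntax for an off-the-shelf automatic theorem prover such as E or Vampire; all predicate and constant names here are illustrative assumptions.

```python
# Minimal sketch (not the paper's tool): encode a ceremony message and one
# attacker-knowledge rule as TPTP first-order axioms for an automatic
# theorem prover (e.g. E or Vampire). Predicate/constant names are assumed.

def tptp_axiom(name: str, formula: str) -> str:
    """Wrap a formula in TPTP 'fof' syntax."""
    return f"fof({name}, axiom, {formula})."

def ceremony_message(step: int, sender: str, receiver: str, payload: str) -> str:
    """Assert that 'payload' is sent from sender to receiver at a given step."""
    return tptp_axiom(f"msg_{step}", f"sends({sender}, {receiver}, {payload})")

axioms = [
    # Step 1: user U sends credential cred(user_u) to service S.
    ceremony_message(1, "user_u", "service_s", "cred(user_u)"),
    # Hedged attacker rule: an eavesdropping attacker learns anything sent
    # on a channel it can observe.
    tptp_axiom(
        "da_eavesdrop",
        "![A, B, M]: ((sends(A, B, M) & observes(attacker, A, B)) => knows(attacker, M))",
    ),
]

# Security claim to be checked by the prover: the attacker never learns the credential.
conjecture = "fof(goal, conjecture, ~knows(attacker, cred(user_u)))."

if __name__ == "__main__":
    # Emit a problem file that a prover can attempt to prove or refute.
    print("\n".join(axioms + [conjecture]))
```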

  • An Updated Threat Model for Security Ceremonies
    ACM Symposium on Applied Computing, 2013
    Co-Authors: Marcelo Carlomagno Carlos, Jean Everson Martina, Geraint Price, Ricardo Felipe Custodio
    Abstract:

    Since Needham and Schroeder introduced the idea of an active attacker, a great deal of research in protocol design and analysis has aimed at verifying protocols' claims against this type of attacker. Today, the Dolev-Yao threat model is the most widely accepted attacker model in the analysis of security protocols, and consequently several security protocols are considered secure against an attacker under Dolev-Yao's assumptions. With the introduction of the concept of ceremonies, which extends protocol design and analysis to include human peers, we can potentially find and solve security flaws that were previously not detectable. In this paper, we argue that even though Dolev-Yao's threat model can represent the most powerful attacker possible in a ceremony, this attacker is not realistic in certain scenarios, especially those involving human peers. We propose a dynamic threat model that can be adjusted for each ceremony, adapting the model and the ceremony analysis to realistic scenarios without degrading security while improving usability.
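
To make the idea of an adjustable, per-ceremony threat model concrete, here is a minimal sketch (an illustration under assumed capability names, not the authors' formalization) that assigns a subset of Dolev-Yao-style capabilities to each channel of a ceremony, so a weaker, more realistic attacker can be modelled on human-facing channels.

```python
# Minimal sketch (not the paper's formal model): an adjustable threat model
# that assigns a subset of Dolev-Yao-style capabilities to each channel of a
# ceremony. Capability names are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum, auto

class Capability(Enum):
    EAVESDROP = auto()   # read messages on the channel
    BLOCK = auto()       # drop messages
    FABRICATE = auto()   # inject new messages
    MODIFY = auto()      # alter messages in transit
    SPOOF = auto()       # impersonate a peer

# The full Dolev-Yao attacker has every capability on a network channel.
DOLEV_YAO = frozenset(Capability)

@dataclass
class ThreatModel:
    """Maps each channel (e.g. 'device->server', 'human->device') to the
    attacker capabilities assumed on that channel."""
    channels: dict[str, frozenset] = field(default_factory=dict)

    def can(self, channel: str, capability: Capability) -> bool:
        return capability in self.channels.get(channel, frozenset())

# Example: full Dolev-Yao on the network link, but only eavesdropping
# (e.g. shoulder-surfing) on the human-to-device link.
tm = ThreatModel(channels={
    "device->server": DOLEV_YAO,
    "human->device": frozenset({Capability.EAVESDROP}),
})

assert tm.can("device->server", Capability.MODIFY)
assert not tm.can("human->device", Capability.FABRICATE)
```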

Zhixiong Yang - One of the best experts on this subject based on the ideXlab platform.

  • Adversary-Resilient Distributed and Decentralized Statistical Inference and Machine Learning: An Overview of Recent Advances Under the Byzantine Threat Model
    IEEE Signal Processing Magazine, 2020
    Co-Authors: Zhixiong Yang, Arpita Gang, Waheed U Bajwa
    Abstract:

    Statistical inference and machine-learning algorithms have traditionally been developed for data available at a single location. Unlike this centralized setting, modern data sets are increasingly being distributed across multiple physical entities (sensors, devices, machines, data centers, and so on) for a multitude of reasons that range from storage, memory, and computational constraints to privacy concerns and engineering needs. This has necessitated the development of inference and learning algorithms capable of operating on noncolocated data. For this article, we divide such algorithms into two broad categories, namely, distributed algorithms and decentralized algorithms (see "Is It Distributed or Is It Decentralized?").

  • Adversary-Resilient Distributed and Decentralized Statistical Inference and Machine Learning: An Overview of Recent Advances Under the Byzantine Threat Model
    arXiv: Machine Learning, 2019
    Co-Authors: Zhixiong Yang, Arpita Gang, Waheed U Bajwa
    Abstract:

    While the last few decades have witnessed a huge body of work devoted to inference and learning in distributed and decentralized setups, much of this work assumes a non-adversarial setting in which individual nodes, apart from occasional statistical failures, operate as intended within the algorithmic framework. In recent years, however, cybersecurity threats from malicious non-state actors and rogue entities have forced practitioners and researchers to rethink the robustness of distributed and decentralized algorithms against adversarial attacks. As a result, we now have a plethora of algorithmic approaches that guarantee robustness of distributed and/or decentralized inference and learning under different adversarial threat models. Driven in part by the world's growing appetite for data-driven decision making, however, securing distributed/decentralized frameworks for inference and learning against adversarial threats remains a rapidly evolving research area. In this article, we provide an overview of some of the most recent developments in this area under the threat model of Byzantine attacks.
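
One representative family of defenses covered by such overviews replaces naive gradient averaging with a robust aggregation rule at the server. The sketch below is a generic illustration (not code from the article) of coordinate-wise median and trimmed-mean aggregation, which bound the influence of a limited number of Byzantine workers reporting arbitrary vectors.

```python
# Generic illustration (not from the article): robust aggregation of worker
# gradients under a Byzantine threat model. Instead of averaging, the server
# uses a coordinate-wise median or trimmed mean, which bounds the influence
# of up to b arbitrarily corrupted workers.
import numpy as np

def coordinate_median(gradients: np.ndarray) -> np.ndarray:
    """gradients: (num_workers, dim) array; returns a (dim,) aggregate."""
    return np.median(gradients, axis=0)

def trimmed_mean(gradients: np.ndarray, b: int) -> np.ndarray:
    """Discard the b largest and b smallest values in every coordinate,
    then average the rest. Requires num_workers > 2 * b."""
    sorted_grads = np.sort(gradients, axis=0)
    return sorted_grads[b:gradients.shape[0] - b].mean(axis=0)

rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 4))   # 8 honest workers
byzantine = np.full((2, 4), 1e6)                       # 2 Byzantine workers
grads = np.vstack([honest, byzantine])

print(grads.mean(axis=0))          # plain average is ruined by the Byzantine workers
print(coordinate_median(grads))    # close to the honest mean (~1.0)
print(trimmed_mean(grads, b=2))    # also close to the honest mean
```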

Feizi Soheil - One of the best experts on this subject based on the ideXlab platform.

  • Perceptual Adversarial Robustness: Defense Against Unseen Threat Models
    2021
    Co-Authors: Laidlaw Cassidy, Singla Sahil, Feizi Soheil
    Abstract:

    A key challenge in adversarial robustness is the lack of a precise mathematical characterization of human perception, which is used in the very definition of adversarial attacks that are imperceptible to human eyes. Most current attacks and defenses try to avoid this issue by considering restrictive adversarial threat models such as those bounded by $L_2$ or $L_\infty$ distance, spatial perturbations, etc. However, models that are robust against any one of these restrictive threat models are still fragile against other threat models. To resolve this issue, we propose adversarial training against the set of all imperceptible adversarial examples, approximated using deep neural networks. We call this threat model the neural perceptual threat model (NPTM); it includes adversarial examples with a bounded neural perceptual distance (a neural-network-based approximation of the true perceptual distance) to natural images. Through an extensive perceptual study, we show that the neural perceptual distance correlates well with human judgements of the perceptibility of adversarial examples, validating our threat model. Under the NPTM, we develop novel perceptual adversarial attacks and defenses. Because the NPTM is very broad, we find that Perceptual Adversarial Training (PAT) against a perceptual attack gives robustness against many other types of adversarial attacks. We test PAT on CIFAR-10 and ImageNet-100 against five diverse adversarial attacks. We find that PAT achieves state-of-the-art robustness against the union of these five attacks, more than doubling the accuracy over the next best model, without training against any of them. That is, PAT generalizes well to unforeseen perturbation types. This is vital in sensitive applications where a particular threat model cannot be assumed, and to the best of our knowledge, PAT is the first adversarial training defense with this property. (Published in ICLR 2021. Code and data are available at https://github.com/cassidylaidlaw/perceptual-adve)
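
As a rough illustration of the neural perceptual threat model, the sketch below uses the publicly available LPIPS distance as the neural approximation of perceptual distance and checks whether a perturbed image stays within a perceptual bound; the bound value and the LPIPS backbone are assumptions for illustration, not the paper's exact configuration.

```python
# Rough illustration of the neural perceptual threat model (NPTM): an
# adversarial example is admissible if its neural perceptual distance to the
# clean image is below a bound. Here LPIPS stands in for the perceptual
# distance; the bound and backbone are illustrative assumptions.
import torch
import lpips  # pip install lpips

perceptual = lpips.LPIPS(net="alex")  # AlexNet-based LPIPS distance

def in_nptm(clean: torch.Tensor, perturbed: torch.Tensor, bound: float = 0.5) -> bool:
    """clean/perturbed: NCHW tensors scaled to [-1, 1]."""
    with torch.no_grad():
        dist = perceptual(clean, perturbed).item()
    return dist <= bound

clean = torch.rand(1, 3, 32, 32) * 2 - 1
perturbed = (clean + 0.03 * torch.randn_like(clean)).clamp(-1, 1)
print(in_nptm(clean, perturbed))  # expected True for a small perturbation (bound is illustrative)
```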

  • Deep Partition Aggregation: Provable Defense against General Poisoning Attacks
    2021
    Co-Authors: Levine Alexander, Feizi Soheil
    Abstract:

    Adversarial poisoning attacks distort training data in order to corrupt the test-time behavior of a classifier. A provable defense provides a certificate for each test sample, which is a lower bound on the magnitude of any adversarial distortion of the training set that can corrupt the test sample's classification. We propose two novel provable defenses against poisoning attacks: (i) Deep Partition Aggregation (DPA), a certified defense against a general poisoning threat model, defined as the insertion or deletion of a bounded number of samples to the training set; by implication, this threat model also includes arbitrary distortions to a bounded number of images and/or labels; and (ii) Semi-Supervised DPA (SS-DPA), a certified defense against label-flipping poisoning attacks. DPA is an ensemble method in which base models are trained on partitions of the training set determined by a hash function. DPA is related both to subset aggregation, a well-studied ensemble method in classical machine learning, and to randomized smoothing, a popular provable defense against evasion attacks. Our defense against label-flipping attacks, SS-DPA, uses a semi-supervised learning algorithm as its base classifier model: each base classifier is trained using the entire unlabeled training set in addition to the labels for a partition. SS-DPA significantly outperforms the existing certified defense for label-flipping attacks on both MNIST and CIFAR-10: it provably tolerates, for at least half of test images, over 600 label flips (vs. < 200 label flips) on MNIST and over 300 label flips (vs. 175 label flips) on CIFAR-10. Against general poisoning attacks, where no prior certified defense exists, DPA can certify >= 50% of test images against over 500 poison image insertions on MNIST, and nine insertions on CIFAR-10. These results establish new state-of-the-art provable defenses against poisoning attacks. (Published at ICLR 2021.)
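
The sketch below is a simplified rendering of the DPA idea described above, not the authors' implementation: samples are routed to partitions by a hash of their contents, one base classifier votes per partition, and a conservative certificate follows because each inserted or deleted training sample can change at most one vote.

```python
# Simplified sketch of Deep Partition Aggregation (DPA); not the authors'
# implementation. Samples are routed to partitions by a hash of their
# contents, one base classifier is trained per partition, and prediction is
# a plurality vote. Because each inserted/deleted training sample touches at
# most one partition, a vote gap of g conservatively certifies robustness
# against floor((g - 1) / 2) poisoned samples.
import hashlib
from collections import Counter

def partition_index(sample_bytes: bytes, num_partitions: int) -> int:
    """Deterministic hash-based assignment of a training sample to a partition."""
    digest = hashlib.sha256(sample_bytes).hexdigest()
    return int(digest, 16) % num_partitions

def dpa_predict(votes: list) -> tuple:
    """votes: predicted label from each base classifier.
    Returns (plurality label, certified number of tolerated poisons)."""
    counts = Counter(votes)
    (top_label, top), *rest = counts.most_common()
    runner_up = rest[0][1] if rest else 0
    gap = top - runner_up
    return top_label, max(0, (gap - 1) // 2)

# Hypothetical example: votes of 250 base classifiers on one test input.
votes = [0] * 160 + [1] * 60 + [2] * 30
label, certified_radius = dpa_predict(votes)
print(label, certified_radius)  # 0, 49 -> robust to up to 49 poisons (simplified bound)
```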

  • Deep Partition Aggregation: Provable Defense against General Poisoning Attacks
    2020
    Co-Authors: Levine Alexander, Feizi Soheil
    Abstract:

    Adversarial poisoning attacks distort training data in order to corrupt the test-time behavior of a classifier. A provable defense provides a certificate for each test sample, which is a lower bound on the magnitude of any adversarial distortion of the training set that can corrupt the test sample's classification. We propose two provable defenses against poisoning attacks: (i) Deep Partition Aggregation (DPA), a certified defense against a general poisoning threat model, defined as the insertion or deletion of a bounded number of samples to the training set; by implication, this threat model also includes arbitrary distortions to a bounded number of images and/or labels; and (ii) Semi-Supervised DPA (SS-DPA), a certified defense against label-flipping poisoning attacks. DPA is an ensemble method in which base models are trained on partitions of the training set determined by a hash function. DPA is related to subset aggregation, a well-studied ensemble method in classical machine learning. DPA can also be viewed as an extension of randomized ablation (Levine & Feizi, 2020a), a certified defense against sparse evasion attacks, to the poisoning domain. Our label-flipping defense, SS-DPA, uses a semi-supervised learning algorithm as its base classifier model: we train each base classifier using the entire unlabeled training set in addition to the labels for a partition. SS-DPA outperforms the existing certified defense for label-flipping attacks (Rosenfeld et al., 2020): SS-DPA certifies >= 50% of test images against 675 label flips (vs. < 200 label flips with the existing defense) on MNIST and 83 label flips on CIFAR-10. Against general poisoning attacks (for which there is no prior certified defense), DPA certifies >= 50% of test images against > 500 poison image insertions on MNIST, and nine insertions on CIFAR-10. These results establish new state-of-the-art provable defenses against poisoning attacks.

Ricardo Felipe Custodio - One of the best experts on this subject based on the ideXlab platform.

  • An Updated Threat Model for Security Ceremonies
    ACM Symposium on Applied Computing, 2013
    Co-Authors: Marcelo Carlomagno Carlos, Jean Everson Martina, Geraint Price, Ricardo Felipe Custodio
    Abstract:

    Since Needham and Schroeder introduced the idea of an active attacker, a great deal of research in protocol design and analysis has aimed at verifying protocols' claims against this type of attacker. Today, the Dolev-Yao threat model is the most widely accepted attacker model in the analysis of security protocols, and consequently several security protocols are considered secure against an attacker under Dolev-Yao's assumptions. With the introduction of the concept of ceremonies, which extends protocol design and analysis to include human peers, we can potentially find and solve security flaws that were previously not detectable. In this paper, we argue that even though Dolev-Yao's threat model can represent the most powerful attacker possible in a ceremony, this attacker is not realistic in certain scenarios, especially those involving human peers. We propose a dynamic threat model that can be adjusted for each ceremony, adapting the model and the ceremony analysis to realistic scenarios without degrading security while improving usability.

Waheed U Bajwa - One of the best experts on this subject based on the ideXlab platform.