Privacy Threat

The experts below were selected from a list of 10,845 experts worldwide, ranked by the ideXlab platform.

Wouter Joosen - One of the best experts on this subject based on the ideXlab platform.

  • LINDDUN GO: A Lightweight Approach to Privacy Threat Modeling
    2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), 2020
    Co-Authors: Kim Wuyts, Laurens Sion, Wouter Joosen
    Abstract:

    Realizing privacy-preserving software requires the application of principles such as Privacy by Design (PbD), which demand that privacy be considered early in the software development lifecycle. While privacy threat modeling approaches such as LINDDUN provide a systematic and extensive assessment of a system's design, applying them requires the analyst to have (i) extensive privacy expertise and (ii) sufficient experience with the threat modeling process itself. There is thus a high startup cost to these techniques, and more lightweight privacy analysis approaches are needed to reduce this initial threshold. In this paper, we (i) discuss the requirements for early, lightweight privacy analysis approaches; (ii) present LINDDUN GO, a toolkit that supports lightweight privacy threat modeling; and (iii) describe the pilot studies conducted as a preliminary evaluation with industry professionals. The availability of lightweight privacy analysis approaches reduces the initial effort of privacy threat modeling and can therefore enable more widespread adoption of system privacy assessments in practice.

  • On the Applicability of Security and Privacy Threat Modeling for Blockchain Applications
    International Workshop on Security, 2019
    Co-Authors: Dimitri Van Landuyt, Laurens Sion, Emiel Vandeloo, Wouter Joosen
    Abstract:

    Elicitative threat modeling approaches, such as Microsoft STRIDE for security and LINDDUN for privacy, use Data Flow Diagrams (DFDs) to model the system under analysis. Distinguishing between external entities, processes, data stores, and data flows, these system models are particularly suited to modeling centralized, traditional multi-tiered system architectures.
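
    To make the DFD vocabulary concrete, here is a minimal sketch of the four element types these approaches distinguish; all class names and the example system are invented for illustration, not taken from any cited tool.

    ```python
    # Hypothetical representation of the four DFD element types used by
    # STRIDE/LINDDUN-style elicitation; names are illustrative only.
    from dataclasses import dataclass


    @dataclass
    class Element:
        name: str


    class ExternalEntity(Element):
        """e.g., a user's browser or a third-party service"""


    class Process(Element):
        """a unit of computation that transforms data"""


    class DataStore(Element):
        """persistent storage such as a database"""


    @dataclass
    class DataFlow:
        source: Element
        target: Element
        data: str  # description of the data being transmitted


    # A minimal DFD for a web shop: the browser talks to a front-end
    # process, which persists orders in a database.
    user = ExternalEntity("Customer browser")
    frontend = Process("Web front end")
    orders = DataStore("Order database")
    dfd = [
        DataFlow(user, frontend, "order form"),
        DataFlow(frontend, orders, "order record"),
    ]
    ```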

  • Privacy Risk Assessment for Data Subject-Aware Threat Modeling
    2019 IEEE Security and Privacy Workshops (SPW), 2019
    Co-Authors: Laurens Sion, Kim Wuyts, Dimitri Van Landuyt, Wouter Joosen
    Abstract:

    Regulatory efforts such as the General Data Protection Regulation (GDPR) embody a notion of privacy risk that is centered around the fundamental rights of data subjects. This is, however, a fundamentally different notion of privacy risk than the one commonly used in threat modeling, which is largely agnostic of the data subjects involved. This mismatch hampers the applicability of privacy threat modeling approaches such as LINDDUN in a Data Protection by Design (DPbD) context. In this paper, we present a data subject-aware privacy risk assessment model in specific support of privacy threat modeling activities. This model allows the threat modeler to draw upon a more holistic understanding of privacy risk while assessing the relevance of specific privacy threats to the system under design. Additionally, we propose a number of improvements to privacy threat modeling, such as enriching Data Flow Diagram (DFD) system models with appropriate risk inputs (e.g., information on data types and involved data subjects). Incorporating these risk inputs in DFDs, in combination with a risk estimation approach using Monte Carlo simulations, leads to a more comprehensive assessment of privacy risk. The proposed risk model has been integrated into a threat modeling tool prototype and validated in the context of a realistic eHealth application.
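
    As a rough illustration of the Monte Carlo idea, the sketch below estimates the risk of a single threat from uncertain likelihood and impact inputs. The triangular distributions, their ranges, and the risk = likelihood times impact model are assumptions of this sketch, not the paper's calibrated model.

    ```python
    # Illustrative Monte Carlo estimate of privacy risk for one threat.
    import random

    def simulate_risk(n_samples: int = 100_000) -> float:
        """Average risk over sampled likelihood/impact scenarios."""
        total = 0.0
        for _ in range(n_samples):
            # Probability that the threat materialises (assumed range).
            likelihood = random.triangular(0.05, 0.40, 0.15)
            # Impact on data subjects, normalised to [0, 1]; plausibly
            # higher for sensitive data types such as health records.
            impact = random.triangular(0.2, 1.0, 0.6)
            total += likelihood * impact
        return total / n_samples

    print(f"estimated mean risk: {simulate_risk():.3f}")
    ```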

  • Knowledge is Power: Systematic Reuse of Privacy Knowledge for Threat Elicitation
    IEEE Symposium on Security and Privacy, 2019
    Co-Authors: Kim Wuyts, Laurens Sion, Dimitri Van Landuyt, Wouter Joosen
    Abstract:

    Privacy threat modeling is difficult. Identifying relevant threats that cause privacy harm requires an extensive assessment of common potential privacy issues for all elements in the system under analysis. In practice, the outcome of a threat modeling exercise thus strongly depends on the level of experience and expertise of the analyst. However, capturing (at least part of) this privacy expertise in a reusable threat knowledge base (i.e., an inventory of common threat types), such as LINDDUN's and STRIDE's threat trees, can greatly improve the efficiency of the threat elicitation process and the overall quality of the identified threats. In this paper, we highlight the problems of current knowledge bases, such as limited semantics and a lack of instantiation logic, and discuss the requirements for a privacy threat knowledge base that streamlines threat elicitation efforts.
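
    A toy illustration of what such instantiation logic could look like: a knowledge base entry names the DFD element kind it applies to, so candidate threats can be enumerated mechanically rather than from the analyst's memory. The threat types, applicability rules, and hints below are invented for this sketch and do not reproduce the LINDDUN catalog.

    ```python
    # Hypothetical mini knowledge base in the spirit of threat trees.
    THREAT_KB = [
        {"type": "Linkability", "applies_to": "data_flow",
         "hint": "Can two flows be linked to the same data subject?"},
        {"type": "Identifiability", "applies_to": "data_store",
         "hint": "Can stored records be tied to an identified person?"},
        {"type": "Non-repudiation", "applies_to": "process",
         "hint": "Can a subject be held to an action they wish to deny?"},
    ]

    # A flat system model: (element name, element kind) pairs.
    MODEL = [
        ("browser -> front end", "data_flow"),
        ("order database", "data_store"),
        ("web front end", "process"),
    ]

    def elicit(model, kb):
        """Cross every model element with every applicable threat type."""
        for name, kind in model:
            for entry in kb:
                if entry["applies_to"] == kind:
                    yield f"{entry['type']} at '{name}': {entry['hint']}"

    for threat in elicit(MODEL, THREAT_KB):
        print(threat)
    ```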

  • Knowledge-Enriched Security and Privacy Threat Modeling
    International Conference on Software Engineering, 2018
    Co-Authors: Laurens Sion, Dimitri Van Landuyt, Koen Yskout, Wouter Joosen
    Abstract:

    Creating secure and privacy-protecting systems entails the simultaneous coordination of development activities along three different yet mutually influencing dimensions: translating (security and privacy) goals to design choices, analyzing the design for threats, and performing a risk analysis of these threats in light of the goals. These activities are often executed in isolation, and such a disconnect impedes prioritizing elicited threats, assessing which threats are sufficiently mitigated, and deciding which risks can be accepted. In the proposed TMaRA approach, we facilitate the simultaneous consideration of these dimensions by integrating support for threat modeling, risk analysis, and design decisions. Key risk assessment inputs are systematically modeled, and threat modeling efforts are fed back into the risk management process. This enables prioritizing threats based on their estimated risk, thereby providing decision support in the mitigation, acceptance, or transferral of risk for the system under design.
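
    A minimal sketch of the prioritization step this implies, assuming a simple risk = likelihood times impact score and an invented acceptance threshold; the threats and numbers are illustrative only.

    ```python
    # Rank elicited threats by estimated risk, then compare each against
    # an acceptance threshold to support mitigate/accept decisions.
    threats = [
        ("linkability of session logs", 0.30, 0.8),  # (name, likelihood, impact)
        ("identifiability in backups",  0.10, 0.9),
        ("disclosure via error pages",  0.05, 0.3),
    ]

    ACCEPTABLE_RISK = 0.10  # risks below this may be accepted

    for name, likelihood, impact in sorted(
            threats, key=lambda t: t[1] * t[2], reverse=True):
        risk = likelihood * impact
        decision = "mitigate" if risk > ACCEPTABLE_RISK else "accept"
        print(f"{risk:.3f}  {decision:8s}  {name}")
    ```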

Fernando Pérez-González - One of the best experts on this subject based on the ideXlab platform.

  • Privacy-Preserving Data Aggregation in Smart Metering Systems: An Overview
    IEEE Signal Processing Magazine, 2013
    Co-Authors: Zekeriya Erkin, Juan Ramón Troncoso-Pastoriza, Reginald L. Lagendijk, Fernando Pérez-González
    Abstract:

    Growing energy needs are forcing governments to look for alternative resources and for ways to better manage the energy grid and load balancing. As a major initiative, many countries, including the United Kingdom, the United States, and China, have already started deploying smart grids. One of the biggest advantages of smart grids over traditional energy grids is the ability to remotely read fine-granular measurements from each smart meter, which enables grid operators to balance load efficiently and offer adapted, time-dependent tariffs. However, collecting fine-granular data also poses a serious privacy threat to citizens, as illustrated by the 2009 decision of the Dutch Parliament to reject the deployment of smart meters over privacy considerations. It is therefore essential to enforce privacy rights without disrupting smart grid services such as billing and data aggregation. Secure signal processing (SSP) aims to protect sensitive data by means of encryption and provides tools to process the data under encryption, effectively addressing the smart metering privacy problem.
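
    The article surveys encryption-based SSP techniques; as a simplified stand-in for those, the sketch below uses additive masking, where pairwise random masks cancel in the sum, so the operator learns only the aggregate consumption and never an individual reading. The protocol and parameters are deliberately minimal for illustration.

    ```python
    # Minimal additive-masking aggregation: each pair of meters shares a
    # random mask that cancels out in the sum.
    import random

    Q = 2**61 - 1  # arithmetic is done modulo a large prime

    def masked_readings(readings):
        n = len(readings)
        masked = list(readings)
        for i in range(n):
            for j in range(i + 1, n):
                # Pairwise mask known only to meters i and j.
                r = random.randrange(Q)
                masked[i] = (masked[i] + r) % Q
                masked[j] = (masked[j] - r) % Q
        return masked

    readings = [312, 405, 238, 519]  # watt-hours in one interval
    masked = masked_readings(readings)
    assert sum(masked) % Q == sum(readings) % Q
    print("operator sees:", masked)
    print("aggregate:", sum(masked) % Q)
    ```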

Nadia Fawaz - One of the best experts on this subject based on the ideXlab platform.

  • How to Hide the Elephant (or the Donkey) in the Room: Practical Privacy Against Statistical Inference for Large Data
    IEEE Global Conference on Signal and Information Processing, 2013
    Co-Authors: Salman Salamatian, Flavio P Calmon, Nadia Fawaz, Amy Zhang, Sandilya Bhamidipati, Branislav Kveton, Pedro Oliveira, Nina Taft
    Abstract:

    We propose a practical methodology to protect a user's private data when they wish to publicly release correlated data in the hope of deriving some utility. Our approach relies on a general statistical inference framework that captures the privacy threat under inference attacks, given utility constraints. Under this framework, data is distorted before it is released, according to a privacy-preserving probabilistic mapping. This mapping is obtained by solving a convex optimization problem that minimizes information leakage under a distortion constraint. We address a practical challenge encountered when applying this theoretical framework to real-world data: the optimization may become intractable and face scalability issues when the data takes values in large alphabets or is high-dimensional. Our work makes two major contributions. First, we reduce the optimization size by introducing a quantization step, and we show how to generate privacy mappings under quantization. Second, we evaluate our method on a dataset showing correlations between political views and TV viewing habits, and we demonstrate that good privacy properties can be achieved with limited distortion, so as not to undermine the original purpose of the publicly released data, e.g., recommendations.
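
    The two quantities the optimization trades off can be computed directly for any candidate mapping. The sketch below does so for a toy two-by-two example; all distributions are invented for illustration, and a Hamming distortion measure is assumed.

    ```python
    # Leakage I(S; Y) about the private attribute S, versus the expected
    # distortion between released Y and original X, for one candidate
    # privacy-preserving mapping p(y|x).
    import numpy as np

    # Joint distribution p(s, x): rows = private S (e.g., political view),
    # columns = public X (e.g., a TV-viewing cluster after quantization).
    p_sx = np.array([[0.30, 0.10],
                     [0.10, 0.50]])

    # Candidate privacy mapping p(y|x): rows = x, columns = released y.
    p_y_given_x = np.array([[0.8, 0.2],
                            [0.3, 0.7]])

    p_x = p_sx.sum(axis=0)             # marginal of X
    p_sy = p_sx @ p_y_given_x          # joint of (S, Y) via S -> X -> Y
    p_s = p_sy.sum(axis=1)
    p_y = p_sy.sum(axis=0)

    # Information leakage I(S; Y) in bits.
    leakage = sum(
        p_sy[s, y] * np.log2(p_sy[s, y] / (p_s[s] * p_y[y]))
        for s in range(2) for y in range(2) if p_sy[s, y] > 0
    )

    # Expected Hamming distortion E[d(X, Y)], with d = 1 when y != x.
    distortion = sum(
        p_x[x] * p_y_given_x[x, y]
        for x in range(2) for y in range(2) if x != y
    )

    print(f"I(S;Y) = {leakage:.3f} bits, E[d(X,Y)] = {distortion:.3f}")
    ```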

  • Privacy Against Statistical Inference
    Allerton Conference on Communication Control and Computing, 2012
    Co-Authors: Flavio P Calmon, Nadia Fawaz
    Abstract:

    We propose a general statistical inference framework to capture the privacy threat incurred by a user who releases data to a passive but curious adversary, given utility constraints. We show that applying this general framework to the setting where the adversary uses the self-information cost function naturally leads to a non-asymptotic, information-theoretic approach for characterizing the best achievable privacy subject to utility constraints. Based on these results, we introduce two privacy metrics, namely average information leakage and maximum information leakage. We prove that under both metrics the resulting design problem of finding the optimal mapping from the user's data to a privacy-preserving output can be cast as a modified rate-distortion problem which, in turn, can be formulated as a convex program. Finally, we compare our framework with differential privacy.
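
    Written out from the abstract's description, and in our own notation (which may differ from the paper's), the average-leakage variant of the design problem takes the following rate-distortion-like shape, where S is the private data, X the data to be released, Y the distorted output, and d a distortion measure with budget D:

    ```latex
    \begin{align}
      \text{average leakage:} \quad & I(S;Y) = H(S) - H(S \mid Y) \\
      \text{design problem:}  \quad & \min_{p(y \mid x)} \; I(S;Y)
          \quad \text{s.t.} \quad \mathbb{E}[d(X,Y)] \le D
    \end{align}
    ```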

Ju Ren - One of the best experts on this subject based on the ideXlab platform.

  • Analyzing User-Level Privacy Attack Against Federated Learning
    IEEE Journal on Selected Areas in Communications, 2020
    Co-Authors: Mengkai Song, Zhibo Wang, Zhifei Zhang, Yang Song, Qian Wang, Ju Ren
    Abstract:

    Federated learning has emerged as an advanced privacy-preserving learning technique for mobile edge computing, in which the model is trained in a decentralized manner by the clients, preventing the server from directly accessing their private data. This learning mechanism significantly complicates attacks from the server side. Although state-of-the-art attack techniques incorporating generative adversarial networks (GANs) can construct class representatives of the global data distribution across all clients, it remains challenging to attack a specific client distinguishably (i.e., user-level privacy leakage), a stronger privacy threat in which the private data of a specific client is precisely recovered. To analyze the privacy leakage of federated learning, this paper makes a first attempt to explore user-level privacy leakage through an attack by a malicious server. We propose a framework incorporating a GAN with a multi-task discriminator, called multi-task GAN - Auxiliary Identification (mGAN-AI), which simultaneously discriminates the category, reality, and client identity of input samples. The novel discrimination on client identity enables the generator to recover the private data of a specified client. Unlike existing works that interfere with the federated learning process, the proposed method works “invisibly” on the server side. Furthermore, considering anonymization as a strategy for mitigating mGAN-AI, we propose a linkability attack that re-identifies anonymized updates beforehand by associating the client representatives. A novel Siamese network fusing identification and verification models is developed for measuring the similarity of representatives. The experimental results demonstrate the effectiveness of the proposed approaches and their superiority over the state of the art.
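
    The sketch below shows the general shape of a multi-task discriminator with the three output heads the paper describes (category, reality, and client identity). The convolutional trunk, layer sizes, and 28x28 single-channel input are our assumptions for illustration, not the paper's architecture; PyTorch is assumed to be available.

    ```python
    # Shared feature trunk with three task-specific heads.
    import torch
    import torch.nn as nn

    class MultiTaskDiscriminator(nn.Module):
        def __init__(self, num_classes: int, num_clients: int):
            super().__init__()
            self.features = nn.Sequential(   # shared convolutional trunk
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Flatten(),
            )
            feat_dim = 64 * 7 * 7  # for 28x28 inputs after two stride-2 convs
            self.category = nn.Linear(feat_dim, num_classes)  # which class
            self.reality = nn.Linear(feat_dim, 1)             # real vs. generated
            self.identity = nn.Linear(feat_dim, num_clients)  # which client

        def forward(self, x):
            h = self.features(x)
            return self.category(h), self.reality(h), self.identity(h)

    d = MultiTaskDiscriminator(num_classes=10, num_clients=5)
    logits_cat, logit_real, logits_id = d(torch.randn(4, 1, 28, 28))
    print(logits_cat.shape, logit_real.shape, logits_id.shape)
    ```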

Daehun Nyang - One of the best experts on this subject based on the ideXlab platform.

  • Code Authorship Identification Using Convolutional Neural Networks
    Future Generation Computer Systems, 2019
    Co-Authors: Mohammed Abuhamad, Jisu Rhim, Tamer Abuhmed, Sana Ullah, Sanggil Kang, Daehun Nyang
    Abstract:

    Although source code authorship identification poses a privacy threat to many open-source contributors, it is an important topic in the forensics field and enables many successful forensic applications, including ghostwriting detection, copyright dispute settlement, and other code analysis applications. This work proposes a convolutional neural network (CNN) based code authorship identification system. Our proposed system exploits term frequency-inverse document frequency (TF-IDF), word embedding modeling, and feature learning techniques for code representation. This representation is then fed into a CNN-based code authorship identification model to identify the code’s author. Evaluation results from applying our approach to data from Google Code Jam demonstrate an identification accuracy of up to 99.4% with 150 candidate programmers, and 96.2% with 1,600 programmers. The evaluation also shows high accuracy for programmer identification on real-world code samples from 1,987 public repositories on GitHub: 95% for 745 C programmers and 97% for the C++ programmers. These results indicate that the proposed approaches are not language-specific and can identify programmers across different programming languages.
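
    A minimal sketch of the general shape of such a pipeline, assuming token IDs from tokenized source code as input: an embedding layer followed by a 1D CNN that predicts the author. The vocabulary size, embedding width, filter count, and author count are illustrative guesses rather than the paper's configuration, and the TF-IDF step is omitted; PyTorch is assumed to be available.

    ```python
    # Token sequences -> embeddings -> 1D convolution -> author logits.
    import torch
    import torch.nn as nn

    class AuthorshipCNN(nn.Module):
        def __init__(self, vocab_size=5000, embed_dim=64, num_authors=150):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.conv = nn.Conv1d(embed_dim, 128, kernel_size=5, padding=2)
            self.pool = nn.AdaptiveMaxPool1d(1)   # max over token positions
            self.classify = nn.Linear(128, num_authors)

        def forward(self, token_ids):                  # (batch, seq_len) int ids
            x = self.embed(token_ids).transpose(1, 2)  # (batch, embed, seq)
            x = torch.relu(self.conv(x))
            x = self.pool(x).squeeze(-1)
            return self.classify(x)                    # author logits

    model = AuthorshipCNN()
    logits = model(torch.randint(0, 5000, (2, 400)))   # two code samples
    print(logits.shape)  # torch.Size([2, 150])
    ```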