Privacy Principle

The Experts below are selected from a list of 231 Experts worldwide, ranked by the ideXlab platform.

Simone Fischer-hübner - One of the best experts on this subject based on the ideXlab platform.

  • Tools for Achieving Usable Ex Post Transparency: A Survey
    IEEE Access, 2017
    Co-Authors: Patrick Murmann, Simone Fischer-hübner
    Abstract:

    Transparency of personal data processing is a basic Privacy Principle and a right that is well acknowledged by data protection legislation, such as the EU General Data Protection Regulation (GDPR). The objective of ex post transparency-enhancing tools (TETs) is to provide users with insight into what data have been processed about them and what possible consequences might arise after their data have been revealed, that is, ex post. This survey assesses the state of the art in the scientific literature on the usability of Privacy-enhancing ex post TETs and discusses them in terms of their common features and unique characteristics. The article first defines the scope of usable transparency in terms of the relevant Privacy Principles for providing transparency, taking the GDPR as a point of reference, and the usability Principles that are important for achieving transparency. These Principles for usable transparency serve as a reference for classifying and assessing the surveyed TETs. The retrieval and screening process for the publications is then described, as is the process for deriving the subsequent classification of the characteristics of the TETs. The survey looks not only into what is made transparent by the TETs but also into how transparency is actually achieved. A main contribution of this survey is a proposed classification that assesses the TETs based on their functionality, implementation, and evaluation as described in the literature. It concludes by discussing the trends and limitations of the surveyed TETs with regard to the defined scope of usable TETs and shows possible directions of future research for addressing these gaps. This survey provides researchers and developers of Privacy-enhancing technologies with an overview of the characteristics of state-of-the-art ex post TETs, on which they can base their work.

  • Transparency, Privacy and Trust – Technology for Tracking and Controlling My Data Disclosures: Does This Work?
    Trust Management X (IFIPTM), 2016
    Co-Authors: Simone Fischer-hübner, Julio Angulo, Farzaneh Karegar, Tobias Pulls
    Abstract:

    Transparency is a basic Privacy Principle and a social trust factor. However, in the age of cloud computing and big data, providing transparency becomes increasingly challenging. This paper discusses the Privacy requirements of the General Data Protection Regulation (GDPR) for providing ex-post transparency and presents how the transparency-enhancing tool Data Track can help to technically enforce those Principles. Open research challenges that remain from a Human-Computer Interaction (HCI) perspective are discussed as well.

  • HCI requirements for transparency and accountability tools for cloud service chains
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2015
    Co-Authors: Simone Fischer-hübner, John Sören Pettersson, Jesus Angulo
    Abstract:

    This paper elaborates HCI (Human-Computer Interaction) requirements for making cloud data protection tools comprehensible and trustworthy. The requirements and corresponding user interface design Principles are derived from our research and review work conducted to address, in particular, the following HCI challenges: How can users be guided to better comprehend the flow and traces of data on the Internet and in the cloud? How can individual end users be supported to make better-informed decisions on how their data can be used by cloud providers or others? How can the legal Privacy Principles of transparency and accountability be enforced by the user interfaces of cloud inspection tools? How can user interfaces help users to reassess their trust or distrust in services? The research methods that we used comprise stakeholder workshops, focus groups, controlled experiments, and usability tests, as well as literature and law reviews. The derived requirements and Principles are grouped into the following functional categories: (1) ex-ante transparency, (2) exercising data subject rights, (3) obtaining consent, (4) Privacy preference management, (5) Privacy policy management, (6) ex-post transparency, (7) audit configuration, (8) access control management, and (9) Privacy risk assessment. This broad categorization makes our results accessible and applicable to any developer within the field of usable Privacy and transparency-enhancing technologies for cloud service chains.
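
As a rough, hypothetical illustration of how a developer might apply this categorization in practice (not taken from the paper), the nine categories can be encoded as a checklist for auditing which requirements a cloud tool's user interface already addresses; all names below are illustrative.

```python
from enum import Enum

class HciCategory(Enum):
    """The nine functional categories from the abstract above,
    encoded so a tool's UI coverage can be audited programmatically."""
    EX_ANTE_TRANSPARENCY = 1
    DATA_SUBJECT_RIGHTS = 2
    OBTAINING_CONSENT = 3
    PRIVACY_PREFERENCE_MANAGEMENT = 4
    PRIVACY_POLICY_MANAGEMENT = 5
    EX_POST_TRANSPARENCY = 6
    AUDIT_CONFIGURATION = 7
    ACCESS_CONTROL_MANAGEMENT = 8
    PRIVACY_RISK_ASSESSMENT = 9

def coverage_gaps(covered: set) -> set:
    """Return the requirement categories a tool does not yet address."""
    return set(HciCategory) - covered

# A hypothetical tool that only handles consent and ex-post transparency:
gaps = coverage_gaps({HciCategory.OBTAINING_CONSENT,
                      HciCategory.EX_POST_TRANSPARENCY})
print(sorted(c.name for c in gaps))
```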

  • How can Cloud Users be Supported in Deciding on, Tracking and Controlling How their Data are Used?
    2014
    Co-Authors: Simone Fischer-hübner, Julio Angulo, Tobias Pulls
    Abstract:

    Transparency is a basic Privacy Principle and a factor of social trust. However, the processing of personal data along a cloud chain is often rather intransparent to the data subjects concerned. Transparency Enhancing Tools (TETs) can help users in deciding on, tracking, and controlling how their data are used in the cloud. However, TETs for enhancing Privacy also have to be designed to be both Privacy-preserving and usable. In this paper, we provide requirements for usable TETs for the cloud. The requirements were derived in two ways: at a stakeholder workshop and through a legal analysis. We discuss design Principles for usable Privacy policies and give examples of TETs that enable end users to track their personal data; we are developing these tools using both Privacy and usability as design criteria.

Calton Pu - One of the best experts on this subject based on the ideXlab platform.

  • ICDE - A General Proximity Privacy Principle
    2009 IEEE 25th International Conference on Data Engineering, 2009
    Co-Authors: Ting Wang, Shicong Meng, Bhuvan Bamba, Calton Pu
    Abstract:

    As an important Privacy threat in anonymized data publication, the proximity breach has recently been gaining increasing attention. Such a breach occurs when an adversary concludes with high confidence that the sensitive value of a victim individual falls within a set of proximate values, even if the adversary has low confidence about the exact value. Most existing research efforts focus on publishing data of specific types, e.g., (1) categorical sensitive data (different values have no sense of proximity) or (2) numerical sensitive data (different values can be strictly ordered), while failing to address the Privacy threats for a much wider range of data models in which the similarity of specific values is defined by arbitrary functions. In this work, we study the problem of protecting general proximity Privacy, with findings applicable to most existing data models. Specifically, we counter the attacks by introducing a novel Privacy Principle, ($\epsilon$, $\delta$)-dissimilarity. It requires that each sensitive value in a QI-group $G$ must be "dissimilar" to at least $\delta$ percent of all other values in $G$, where similarity is measured by $\epsilon$. We prove that ($\epsilon$, $\delta$)-dissimilarity, used in conjunction with $k$-anonymity, provides effective protection against linking attacks in terms of both exact association and proximate association. Furthermore, we present a theoretical analysis regarding the satisfiability of this Principle.
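
A minimal sketch of the check that this Principle implies, assuming $\delta$ is given as a fraction rather than a percentage and that the arbitrary similarity function is supplied by the caller; this illustrates the definition and is not the authors' implementation.

```python
from typing import Callable, Sequence

def satisfies_dissimilarity(qi_group: Sequence[float],
                            similar: Callable[[float, float], bool],
                            delta: float) -> bool:
    """(epsilon, delta)-dissimilarity for one QI-group: every sensitive
    value must be dissimilar to at least a delta fraction of the other
    values in the group, where `similar` encodes the epsilon threshold."""
    for i, v in enumerate(qi_group):
        others = [w for j, w in enumerate(qi_group) if j != i]
        if not others:
            continue
        dissimilar = sum(1 for w in others if not similar(v, w))
        if dissimilar / len(others) < delta:
            return False
    return True

# Numerical sensitive values with epsilon = 5 (absolute distance),
# requiring each value to be dissimilar to at least 60% of the others:
group = [10.0, 12.0, 30.0, 55.0, 90.0]
print(satisfies_dissimilarity(group, lambda x, y: abs(x - y) <= 5.0, 0.6))
```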

  • A General Proximity Privacy Principle
    2009 IEEE 25th International Conference on Data Engineering, 2009
    Co-Authors: Ting Wang, Shicong Meng, Bhuvan Bamba, Calton Pu
    Abstract:

    This work presents a systematic study of the problem of protecting general proximity Privacy, with findings applicable to most existing data models. Our contributions are multi-fold: we highlight and formulate proximity Privacy breaches in a data-model-neutral manner; we propose a new Privacy Principle, ($\epsilon$, $\delta$)$^k$-dissimilarity, with theoretically guaranteed protection against linking attacks in terms of both exact and proximate QI-SA associations; and we provide a theoretical analysis regarding the satisfiability of ($\epsilon$, $\delta$)$^k$-dissimilarity, pointing to promising solutions for fulfilling this Principle.

Tobias Pulls - One of the best experts on this subject based on the ideXlab platform.

  • Transparency, Privacy and Trust – Technology for Tracking and Controlling My Data Disclosures: Does This Work?
    Trust Management X (IFIPTM), 2016
    Co-Authors: Simone Fischer-hübner, Julio Angulo, Farzaneh Karegar, Tobias Pulls
    Abstract:

    Transparency is a basic Privacy Principle and a social trust factor. However, in the age of cloud computing and big data, providing transparency becomes increasingly challenging. This paper discusses the Privacy requirements of the General Data Protection Regulation (GDPR) for providing ex-post transparency and presents how the transparency-enhancing tool Data Track can help to technically enforce those Principles. Open research challenges that remain from a Human-Computer Interaction (HCI) perspective are discussed as well.

  • ISSE - Enhancing Transparency with Distributed Privacy-Preserving Logging
    ISSE 2013 Securing Electronic Business Processes, 2016
    Co-Authors: Roel Peeters, Tobias Pulls, Karel Wouters
    Abstract:

    Transparency of data processing is often a requirement for compliance with legislation and/or business requirements. Furthermore, it has been recognised as a key Privacy Principle, for example in the European Data Protection Directive. At the same time, transparency of the data processing should be limited to the users involved, in order to minimise the leakage of sensitive business information and to protect the Privacy of the employees (if any) performing the data processing.
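
The construction by Peeters, Pulls and Wouters is a distributed, cryptographically Privacy-preserving scheme; the sketch below (standard-library Python, hypothetical field names) only illustrates the underlying idea of a tamper-evident, hash-chained transparency log in which each entry commits to its predecessor.

```python
import hashlib
import json

def append_entry(prev_digest: bytes, event: dict) -> tuple:
    """Append one processing event to a hash-chained audit log.
    Because each digest covers the previous one, a data subject who
    later inspects the log can detect tampering or deleted entries."""
    payload = json.dumps(event, sort_keys=True).encode()
    digest = hashlib.sha256(prev_digest + payload).digest()
    return digest, {"event": event, "digest": digest.hex()}

head, log = b"\x00" * 32, []  # genesis digest
for action in ["collected email address", "shared data with processor X"]:
    head, entry = append_entry(head, {"subject": "alice", "action": action})
    log.append(entry)
print(json.dumps(log, indent=2))
```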

  • How can Cloud Users be Supported in Deciding on, Tracking and Controlling How their Data are Used?
    IFIP Advances in Information and Communication Technology (Privacy and Identity Management), 2013
    Co-Authors: Simone Fischer-hübner, Julio Angulo, Tobias Pulls
    Abstract:

    Transparency is a basic Privacy Principle and a factor of social trust. However, the processing of personal data along a cloud chain is often rather intransparent to the data subjects concerned. Transparency Enhancing Tools (TETs) can help users in deciding on, tracking, and controlling how their data are used in the cloud. However, TETs for enhancing Privacy also have to be designed to be both Privacy-preserving and usable. In this paper, we provide requirements for usable TETs for the cloud. The requirements were derived in two ways: at a stakeholder workshop and through a legal analysis. We discuss design Principles for usable Privacy policies and give examples of TETs that enable end users to track their personal data; we are developing these tools using both Privacy and usability as design criteria.

Ke Wang - One of the best experts on this subject based on the ideXlab platform.

  • Anonymizing bag-valued sparse data by semantic similarity-based clustering
    Knowledge and Information Systems, 2012
    Co-Authors: Ke Wang
    Abstract:

    Web query logs provide a wealth of information, but they also present serious Privacy risks. We preserve Privacy in publishing vocabularies extracted from a web query log by introducing vocabulary k-anonymity, which prevents the re-identification attack that reveals the real identities behind vocabularies. A vocabulary is a bag of query-terms extracted from the queries issued by a user at a specified granularity. Such bag-valued data are extremely sparse, which makes it hard to retain enough utility when enforcing k-anonymity. To the best of our knowledge, prior work does not solve this problem: some works target a different Privacy Principle (for example, differential Privacy), some deal with a different type of data (for example, set-valued or relational data), and some consider a different publication scenario (for example, publishing frequent keywords). To retain enough data utility, a semantic similarity-based clustering approach is proposed, which measures the semantic similarity between a pair of terms by the minimum path distance over a semantic network of terms such as WordNet, computes the semantic similarity between two vocabularies by a weighted bipartite matching, and publishes the typical vocabulary for each cluster of semantically similar vocabularies. Extensive experiments on the AOL query log show that our approach retains enough data utility in terms of loss metrics and in frequent pattern mining.
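
A compact sketch of the two similarity measures named in the abstract, assuming NLTK's WordNet interface and SciPy's assignment solver; the normalization by the larger bag size is an illustrative choice, not necessarily the paper's.

```python
# Requires: pip install nltk scipy numpy; then nltk.download("wordnet")
import numpy as np
from nltk.corpus import wordnet as wn
from scipy.optimize import linear_sum_assignment

def term_similarity(t1: str, t2: str) -> float:
    """Best WordNet path similarity over all sense pairs (0 if no path)."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(t1) for s2 in wn.synsets(t2)]
    return max(scores, default=0.0)

def vocabulary_similarity(v1: list, v2: list) -> float:
    """Weighted bipartite matching between two bags of query terms."""
    cost = np.array([[-term_similarity(a, b) for b in v2] for a in v1])
    rows, cols = linear_sum_assignment(cost)  # maximum-weight matching
    return -cost[rows, cols].sum() / max(len(v1), len(v2))

print(vocabulary_similarity(["car", "hotel"], ["automobile", "motel"]))
```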

  • On optimal anonymization for l+-diversity
    2010 IEEE 26th International Conference on Data Engineering (ICDE 2010), 2010
    Co-Authors: Junqiang Liu, Ke Wang
    Abstract:

    Publishing person-specific data while protecting Privacy is an important problem. Existing algorithms that enforce the Privacy Principle called l-diversity are heuristic-based due to the NP-hardness of the problem. Several questions remain open: can we get a significant gain in data utility from an optimal solution compared to heuristic ones; can we improve the utility by setting a distinct Privacy threshold per sensitive value; and is it practical to find an optimal solution efficiently for real-world datasets? This paper addresses these questions. Specifically, we present a pruning-based algorithm for finding an optimal solution to an extended form of the l-diversity problem. The novelty lies in several strong techniques: a novel structure for enumerating all solutions, methods for estimating cost lower bounds, and strategies for dynamically arranging the enumeration order and updating the lower bounds. The approach can be instantiated with any reasonable cost metric. Experiments on real-world datasets show that our algorithm is efficient and improves data utility.
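
The paper's l+-diversity extends plain l-diversity with per-value Privacy thresholds and searches for an optimal generalization; as background, here is a minimal sketch of the basic feasibility check that any such search must verify for every candidate partition (distinct l-diversity, the simplest variant):

```python
def distinct_l_diverse(sensitive_values: list, l: int) -> bool:
    """An equivalence class satisfies distinct l-diversity if it
    contains at least l different sensitive values."""
    return len(set(sensitive_values)) >= l

def partition_l_diverse(partition: list, l: int) -> bool:
    """A generalized table is l-diverse iff every class is."""
    return all(distinct_l_diverse(group, l) for group in partition)

# Two equivalence classes of sensitive (diagnosis) values:
partition = [["flu", "hiv", "flu"], ["cancer", "cancer", "cancer"]]
print(partition_l_diverse(partition, l=2))  # False: one class has 1 value
```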

Ting Wang - One of the best experts on this subject based on the ideXlab platform.

  • Differentially Private Releasing via Deep Generative Model (Technical Report).
    arXiv: Cryptography and Security, 2018
    Co-Authors: Xinyang Zhang, Shouling Ji, Ting Wang
    Abstract:

    Privacy-preserving releasing of complex data (e.g., image, text, audio) represents a long-standing challenge for the data mining research community. Due to the rich semantics of the data and the lack of a priori knowledge about the analysis task, excessive sanitization is often necessary to ensure Privacy, leading to significant loss of data utility. In this paper, we present dp-GAN, a general private releasing framework for semantic-rich data. Instead of sanitizing and then releasing the data, the data curator publishes a deep generative model that is trained on the original data in a differentially private manner; with this generative model, the analyst is able to produce an unlimited amount of synthetic data for arbitrary analysis tasks. In contrast to alternative solutions, dp-GAN highlights a set of key features: (i) it provides a theoretical Privacy guarantee by enforcing the differential Privacy Principle; (ii) it retains desirable utility in the released model, enabling a variety of otherwise impossible analyses; and (iii) most importantly, it achieves practical training scalability and stability by employing multi-fold optimization strategies. Through extensive empirical evaluation on benchmark datasets and analyses, we validate the efficacy of dp-GAN.
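
dp-GAN's training pipeline and its multi-fold optimization strategies are considerably more involved; the NumPy sketch below only illustrates the core differentially private gradient step (per-example clipping plus calibrated Gaussian noise, as in DP-SGD) that such private training applies to the generative model's updates.

```python
import numpy as np

def dp_gradient_step(per_example_grads: np.ndarray, clip_norm: float,
                     noise_multiplier: float,
                     rng: np.random.Generator) -> np.ndarray:
    """Clip each example's gradient to bound its influence, add Gaussian
    noise scaled to the clipping bound, then average over the batch."""
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=per_example_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 10))  # 32 per-example gradients of dimension 10
print(dp_gradient_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng))
```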

  • ICDE - A General Proximity Privacy Principle
    2009 IEEE 25th International Conference on Data Engineering, 2009
    Co-Authors: Ting Wang, Shicong Meng, Bhuvan Bamba, Calton Pu
    Abstract:

    As an important Privacy threat in anonymized data publication, the proximity breach has recently been gaining increasing attention. Such a breach occurs when an adversary concludes with high confidence that the sensitive value of a victim individual falls within a set of proximate values, even if the adversary has low confidence about the exact value. Most existing research efforts focus on publishing data of specific types, e.g., (1) categorical sensitive data (different values have no sense of proximity) or (2) numerical sensitive data (different values can be strictly ordered), while failing to address the Privacy threats for a much wider range of data models in which the similarity of specific values is defined by arbitrary functions. In this work, we study the problem of protecting general proximity Privacy, with findings applicable to most existing data models. Specifically, we counter the attacks by introducing a novel Privacy Principle, ($\epsilon$, $\delta$)-dissimilarity. It requires that each sensitive value in a QI-group $G$ must be "dissimilar" to at least $\delta$ percent of all other values in $G$, where similarity is measured by $\epsilon$. We prove that ($\epsilon$, $\delta$)-dissimilarity, used in conjunction with $k$-anonymity, provides effective protection against linking attacks in terms of both exact association and proximate association. Furthermore, we present a theoretical analysis regarding the satisfiability of this Principle.

  • A General Proximity Privacy Principle
    2009 IEEE 25th International Conference on Data Engineering, 2009
    Co-Authors: Ting Wang, Shicong Meng, Bhuvan Bamba, Calton Pu
    Abstract:

    This work presents a systematic study of the problem of protecting general proximity Privacy, with findings applicable to most existing data models. Our contributions are multi-fold: we highlight and formulate proximity Privacy breaches in a data-model-neutral manner; we propose a new Privacy Principle, ($\epsilon$, $\delta$)$^k$-dissimilarity, with theoretically guaranteed protection against linking attacks in terms of both exact and proximate QI-SA associations; and we provide a theoretical analysis regarding the satisfiability of ($\epsilon$, $\delta$)$^k$-dissimilarity, pointing to promising solutions for fulfilling this Principle.