The experts below are selected from a list of 896,472 experts worldwide, ranked by the ideXlab platform.
Ju Ren - One of the best experts on this subject based on the ideXlab platform.
-
Analyzing User-Level Privacy Attack Against Federated Learning
IEEE Journal on Selected Areas in Communications, 2020. Co-Authors: Mengkai Song, Zhibo Wang, Zhifei Zhang, Yang Song, Qian Wang, Ju Ren. Abstract: Federated learning has emerged as an advanced privacy-preserving learning technique for mobile edge computing, where the model is trained in a decentralized manner by the clients, preventing the server from directly accessing their private data. This learning mechanism significantly raises the difficulty of attacks from the server side. Although state-of-the-art attacking techniques that incorporate generative adversarial networks (GANs) can construct class representatives of the global data distribution across all clients, it remains challenging to attack a specific client distinguishably (i.e., user-level privacy leakage), a stronger privacy threat that precisely recovers the private data of a particular client. To analyze the privacy leakage of federated learning, this paper makes the first attempt to explore user-level privacy leakage through an attack by a malicious server. We propose a framework incorporating a GAN with a multi-task discriminator, called multi-task GAN with Auxiliary Identification (mGAN-AI), which simultaneously discriminates the category, reality, and client identity of input samples. The novel discrimination on client identity enables the generator to recover user-specified private data. Unlike existing works that interfere with the federated learning process, the proposed method works “invisibly” on the server side. Furthermore, considering anonymization as a mitigation strategy against mGAN-AI, we propose a beforehand linkability attack that re-identifies anonymized updates by associating client representatives. A novel Siamese network fusing the identification and verification models is developed to measure the similarity of representatives. The experimental results demonstrate the effectiveness of the proposed approaches and their superiority over the state-of-the-art.
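The multi-task discriminator described above scores each input sample on three axes at once: its class, whether it is real or generated, and which client it came from. A minimal sketch of that shape, with hypothetical layer sizes and names (forward pass only, random weights, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MultiTaskDiscriminator:
    """Shared feature extractor feeding three task heads (sketch)."""
    def __init__(self, in_dim=784, hidden=128, n_classes=10, n_clients=5):
        self.W1 = rng.normal(0, 0.01, (in_dim, hidden))     # shared features
        self.Wc = rng.normal(0, 0.01, (hidden, n_classes))  # category head
        self.Wr = rng.normal(0, 0.01, (hidden, 1))          # reality head
        self.Wi = rng.normal(0, 0.01, (hidden, n_clients))  # identity head

    def forward(self, x):
        h = relu(x @ self.W1)
        category = softmax(h @ self.Wc)               # which class?
        reality = 1.0 / (1.0 + np.exp(-(h @ self.Wr)))  # real or generated?
        identity = softmax(h @ self.Wi)               # which client produced it?
        return category, reality, identity

d = MultiTaskDiscriminator()
cat, real, ident = d.forward(rng.normal(size=(4, 784)))
print(cat.shape, real.shape, ident.shape)  # (4, 10) (4, 1) (4, 5)
```

The identity head is the novel piece: a gradient signal from it is what would let a generator target one client's data distribution rather than the global one.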
Mengkai Song
-
Analyzing User-Level Privacy Attack Against Federated Learning
IEEE Journal on Selected Areas in Communications, 2020. Co-Authors: Mengkai Song, Zhibo Wang, Zhifei Zhang, Yang Song, Qian Wang, Ju Ren. Abstract as listed above under Ju Ren.
-
Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning
arXiv: Learning, 2018. Co-Authors: Zhibo Wang, Mengkai Song, Zhifei Zhang, Yang Song, Qian Wang. Abstract: Federated learning, a mobile edge computing framework for deep learning, is a recent advance in privacy-preserving machine learning, where the model is trained in a decentralized manner by the clients (i.e., the data curators), preventing the server from directly accessing their private data. This learning mechanism significantly raises the difficulty of attacks from the server side. Although state-of-the-art attacking techniques that incorporate generative adversarial networks (GANs) can construct class representatives of the global data distribution across all clients, it remains challenging to attack a specific client distinguishably (i.e., user-level privacy leakage), a stronger privacy threat that precisely recovers the private data of a particular client. This paper makes the first attempt to explore user-level privacy leakage against federated learning through an attack by a malicious server. We propose a framework incorporating a GAN with a multi-task discriminator, which simultaneously discriminates the category, reality, and client identity of input samples. The novel discrimination on client identity enables the generator to recover user-specified private data. Unlike existing works that interfere with the training process of federated learning, the proposed method works "invisibly" on the server side. The experimental results demonstrate the effectiveness of the proposed attacking approach and its superiority over the state-of-the-art.
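The federated training loop both abstracts describe — clients compute updates on private data, the server only aggregates — follows the federated averaging pattern. A minimal sketch with hypothetical sizes and a linear-regression local step (illustrative, not the papers' setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(weights, X, y, lr=0.1):
    # One gradient step of least-squares regression on a client's
    # private data; only the resulting weights leave the client.
    grad = 2.0 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

global_w = np.zeros(3)
# Four clients, each holding 20 private samples the server never sees.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]

for _ in range(5):                       # five federated rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(updates, axis=0)  # server only averages client models

print(global_w.shape)  # (3,)
```

The attack surface the papers study sits exactly at the aggregation step: the server never sees `X` or `y`, but the per-client `updates` still carry information about each client's data.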
Win, Htet Myak
-
An Active Data-aware Cache Consistency Protocol for Data-Shipping DBMS Architectures
2017. Co-Authors: Win, Htet Myak. Abstract: In a data-shipping database system, data items are retrieved from the server machines, cached and processed at the client machines, and then shipped back to the server. Cache consistency controls typically rely on a centralized server or servers to enforce the necessary concurrency control. This work presents a consistency control protocol, Active Data-aware Cache Consistency (ADCC), that allows clients to be aware of the global state of their cached data via a two-tier directory. In addition, ADCC shifts concurrency control from the server to the clients.
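The data-shipping pattern above can be sketched as a server-side directory that records which clients cache which items and invalidates stale copies on write-back. This is a toy loosely inspired by the directory idea, with hypothetical names, not the ADCC protocol itself:

```python
class Server:
    def __init__(self, data):
        self.data = dict(data)
        self.directory = {}  # item key -> set of client ids caching it

    def fetch(self, client_id, key):
        # Record the caching client in the directory before shipping data.
        self.directory.setdefault(key, set()).add(client_id)
        return self.data[key]

    def write_back(self, client_id, key, value, clients):
        self.data[key] = value
        # Notify every other caching client so its copy is not stale.
        for cid in self.directory.get(key, set()) - {client_id}:
            clients[cid].invalidate(key)

class Client:
    def __init__(self, cid, server):
        self.cid, self.server, self.cache = cid, server, {}

    def read(self, key):
        if key not in self.cache:          # miss: ship the item from the server
            self.cache[key] = self.server.fetch(self.cid, key)
        return self.cache[key]

    def write(self, key, value, clients):
        self.cache[key] = value
        self.server.write_back(self.cid, key, value, clients)

    def invalidate(self, key):
        self.cache.pop(key, None)

srv = Server({"x": 1})
clients = {0: Client(0, srv), 1: Client(1, srv)}
clients[0].read("x"); clients[1].read("x")  # both clients now cache x
clients[0].write("x", 2, clients)           # invalidates client 1's copy
print(clients[1].read("x"))                 # re-fetches the fresh value: 2
```

A two-tier directory as described in the abstract would additionally replicate part of this bookkeeping at the clients, so they can observe the global state of their cached data without asking the server each time.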
Huaxiong Wang
-
Efficient Two-Server Password-Only Authenticated Key Exchange
IEEE Transactions on Parallel and Distributed Systems, 2013. Co-Authors: Xun Yi, San Ling, Huaxiong Wang. Abstract: Password-authenticated key exchange (PAKE) is a protocol in which a client and a server who share a password authenticate each other and, in the process, establish a cryptographic key by exchanging messages. In this setting, all the passwords necessary to authenticate clients are stored in a single server. If that server is compromised, for example by hacking or even an insider attack, all passwords stored on it are disclosed. In this paper, we consider a scenario where two servers cooperate to authenticate a client, so that if one server is compromised, the attacker still cannot impersonate the client using the information from the compromised server. Current solutions for two-server PAKE are either symmetric, in the sense that the two peer servers contribute equally to the authentication, or asymmetric, in the sense that one server authenticates the client with the help of the other. This paper presents a symmetric solution for two-server PAKE in which the client can establish a different cryptographic key with each of the two servers. Our protocol runs in parallel and is more efficient than the existing symmetric two-server PAKE protocol, and even more efficient than existing asymmetric two-server PAKE protocols in terms of parallel computation.
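The core idea motivating two-server PAKE — no single server should hold enough to recover or verify the password alone — can be illustrated by secret-sharing the password verifier between two servers. This toy is NOT the paper's protocol and is not a secure PAKE (a real scheme never reconstructs the verifier in one place, and PAKE also derives session keys); it only shows the split-storage threat model:

```python
import hashlib, secrets

def H(pw: str) -> int:
    """Hash a password to a 256-bit integer verifier (illustrative)."""
    return int.from_bytes(hashlib.sha256(pw.encode()).digest(), "big")

def register(pw: str):
    # Additively share the verifier: share1 XOR share2 == H(pw).
    # Each server stores one share; either share alone is a uniformly
    # random value that reveals nothing about the password.
    share1 = secrets.randbits(256)
    share2 = H(pw) ^ share1
    return share1, share2

def authenticate(pw_attempt: str, share1: int, share2: int) -> bool:
    # Both servers must contribute their share to check an attempt;
    # an attacker holding only one compromised share cannot.
    return H(pw_attempt) == (share1 ^ share2)

s1, s2 = register("correct horse")
print(authenticate("correct horse", s1, s2))  # True
print(authenticate("wrong guess", s1, s2))    # False
```

In the paper's symmetric setting, the two servers would additionally run the check without ever combining their shares at a single machine, and the client would end up with a distinct key per server.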
Qian Wang
-
Analyzing User-Level Privacy Attack Against Federated Learning
IEEE Journal on Selected Areas in Communications, 2020. Co-Authors: Mengkai Song, Zhibo Wang, Zhifei Zhang, Yang Song, Qian Wang, Ju Ren. Abstract as listed above under Ju Ren.
-
Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning
arXiv: Learning, 2018. Co-Authors: Zhibo Wang, Mengkai Song, Zhifei Zhang, Yang Song, Qian Wang. Abstract as listed above under Mengkai Song.