Deep Learning

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 235,641 Experts worldwide ranked by ideXlab platform

Dit-yan Yeung - One of the best experts on this subject based on the ideXlab platform.

  • A Survey on Bayesian Deep Learning
    ACM Computing Surveys, 2020
    Co-Authors: Hao Wang, Dit-yan Yeung
    Abstract:

    A comprehensive artificial intelligence system needs to not only perceive the environment with different “senses” (e.g., seeing and hearing) but also infer the world’s conditional (or even causal) relations and corresponding uncertainty. The past decade has seen major advances in many perception tasks, such as visual object recognition and speech recognition, using Deep Learning models. For higher-level inference, however, probabilistic graphical models with their Bayesian nature are still more powerful and flexible. In recent years, Bayesian Deep Learning has emerged as a unified probabilistic framework to tightly integrate Deep Learning and Bayesian models. In this general framework, the perception of text or images using Deep Learning can boost the performance of higher-level inference and, in turn, the feedback from the inference process is able to enhance the perception of text or images. This survey provides a comprehensive introduction to Bayesian Deep Learning and reviews its recent applications on recommender systems, topic models, control, and so on. We also discuss the relationship and differences between Bayesian Deep Learning and other related topics, such as Bayesian treatment of neural networks.
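A core ingredient of the framework described above is that the deep "perception" component reports uncertainty the Bayesian side can reason over. One lightweight and widely used way to obtain such uncertainty (a stand-in sketch, not the survey's own framework) is Monte Carlo dropout: keep dropout active at prediction time, sample many stochastic forward passes, and read off a predictive mean and spread. The weights and inputs below are made-up toy values.

```python
import math
import random

random.seed(0)

def mc_dropout_predict(weights, x, p_drop=0.5, n_samples=200):
    """Approximate a predictive mean and standard deviation by sampling
    dropout masks at inference time (MC dropout). This is a cheap proxy
    for a full Bayesian treatment of the weights."""
    preds = []
    for _ in range(n_samples):
        # Sample a binary keep-mask per weight; rescale so the
        # expectation matches the deterministic forward pass.
        masked = [w * (random.random() > p_drop) / (1 - p_drop) for w in weights]
        preds.append(sum(w * xi for w, xi in zip(masked, x)))
    mean = sum(preds) / n_samples
    var = sum((p - mean) ** 2 for p in preds) / n_samples
    return mean, math.sqrt(var)

weights = [0.5, -1.2, 0.8]   # a toy "trained" linear layer (hypothetical)
x = [1.0, 2.0, 3.0]
mean, std = mc_dropout_predict(weights, x)
print(mean, std)  # mean near the deterministic output, std > 0
```

The nonzero standard deviation is exactly the kind of uncertainty signal a downstream probabilistic model can consume, rather than a single point estimate.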

  • A Survey on Bayesian Deep Learning
    arXiv: Machine Learning, 2016
    Co-Authors: Hao Wang, Dit-yan Yeung
    Abstract:

    A comprehensive artificial intelligence system needs to not only perceive the environment with different 'senses' (e.g., seeing and hearing) but also infer the world's conditional (or even causal) relations and corresponding uncertainty. The past decade has seen major advances in many perception tasks such as visual object recognition and speech recognition using Deep Learning models. For higher-level inference, however, probabilistic graphical models with their Bayesian nature are still more powerful and flexible. In recent years, Bayesian Deep Learning has emerged as a unified probabilistic framework to tightly integrate Deep Learning and Bayesian models. In this general framework, the perception of text or images using Deep Learning can boost the performance of higher-level inference and, in turn, the feedback from the inference process is able to enhance the perception of text or images. This survey provides a comprehensive introduction to Bayesian Deep Learning and reviews its recent applications on recommender systems, topic models, control, etc. We also discuss the relationship and differences between Bayesian Deep Learning and other related topics such as Bayesian treatment of neural networks.

Vitaly Shmatikov - One of the best experts on this subject based on the ideXlab platform.

  • Privacy-preserving Deep Learning
    53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton 2015), 2016
    Co-Authors: Reza Shokri, Vitaly Shmatikov
    Abstract:

    Deep Learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of Deep Learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of Deep Learning techniques is directly proportional to the amount of data available for training. Massive data collection required for Deep Learning presents obvious privacy issues. Users’ personal, highly sensitive data such as photos and voice recordings is kept indefinitely by the companies that collect it. Users can neither delete it, nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extra-judicial surveillance. Many data owners (for example, medical institutions that may want to apply Deep Learning methods to clinical records) are prevented by privacy and confidentiality concerns from sharing the data and thus benefitting from large-scale Deep Learning. In this paper, we design, implement, and evaluate a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets. We exploit the fact that the optimization algorithms used in modern Deep Learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously. Our system lets participants train independently on their own datasets and selectively share small subsets of their models’ key parameters during training. This offers an attractive point in the utility/privacy tradeoff space: participants preserve the privacy of their respective data while still benefitting from other participants’ models and thus boosting their Learning accuracy beyond what is achievable solely on their own inputs. We demonstrate the accuracy of our privacy-preserving Deep Learning on benchmark datasets.
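The selective-sharing idea in the abstract can be sketched in miniature: each participant computes gradients on its own private data and uploads only the largest-magnitude coordinates to a shared model. This is a simplified, hypothetical rendering of the idea, not the paper's actual protocol (which runs asynchronously via a parameter server and includes additional privacy safeguards); the datasets and selection rule below are invented for illustration.

```python
def local_gradient(w, data):
    # Least-squares gradient computed on this participant's private (x, y) pairs.
    g = [0.0] * len(w)
    for x, y in data:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        for i, xi in enumerate(x):
            g[i] += 2 * err * xi / len(data)
    return g

def top_fraction(grad, frac=0.5):
    # Selective sharing: upload only the largest-magnitude coordinates.
    # A crude stand-in for "share small subsets of key parameters".
    k = max(1, int(len(grad) * frac))
    idx = sorted(range(len(grad)), key=lambda i: -abs(grad[i]))[:k]
    return {i: grad[i] for i in idx}

# Two participants with private datasets; both are consistent with y = 2*x0 - x1.
data_a = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0)]
data_b = [([1.0, 1.0], 1.0), ([2.0, 0.0], 4.0)]

w = [0.0, 0.0]        # shared global model
lr = 0.1
for _ in range(200):
    for data in (data_a, data_b):        # asynchronous in the real system
        shared = top_fraction(local_gradient(w, data))
        for i, g in shared.items():
            w[i] -= lr * g               # apply only the shared subset
print(w)  # should approach [2, -1] despite partial sharing
```

Neither participant ever reveals its raw data, yet both benefit from each other's updates, which is the utility/privacy trade-off the paper argues for.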

  • Privacy-preserving Deep Learning
    Allerton Conference on Communication Control and Computing, 2015
    Co-Authors: Reza Shokri, Vitaly Shmatikov
    Abstract:

    Deep Learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of Deep Learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of Deep Learning techniques is directly proportional to the amount of data available for training.

Hao Wang - One of the best experts on this subject based on the ideXlab platform.

  • A Survey on Bayesian Deep Learning
    ACM Computing Surveys, 2020
    Co-Authors: Hao Wang, Dit-yan Yeung
    Abstract:

    A comprehensive artificial intelligence system needs to not only perceive the environment with different “senses” (e.g., seeing and hearing) but also infer the world’s conditional (or even causal) relations and corresponding uncertainty. The past decade has seen major advances in many perception tasks, such as visual object recognition and speech recognition, using Deep Learning models. For higher-level inference, however, probabilistic graphical models with their Bayesian nature are still more powerful and flexible. In recent years, Bayesian Deep Learning has emerged as a unified probabilistic framework to tightly integrate Deep Learning and Bayesian models. In this general framework, the perception of text or images using Deep Learning can boost the performance of higher-level inference and, in turn, the feedback from the inference process is able to enhance the perception of text or images. This survey provides a comprehensive introduction to Bayesian Deep Learning and reviews its recent applications on recommender systems, topic models, control, and so on. We also discuss the relationship and differences between Bayesian Deep Learning and other related topics, such as Bayesian treatment of neural networks.

  • A Survey on Bayesian Deep Learning
    arXiv: Machine Learning, 2016
    Co-Authors: Hao Wang, Dit-yan Yeung
    Abstract:

    A comprehensive artificial intelligence system needs to not only perceive the environment with different 'senses' (e.g., seeing and hearing) but also infer the world's conditional (or even causal) relations and corresponding uncertainty. The past decade has seen major advances in many perception tasks such as visual object recognition and speech recognition using Deep Learning models. For higher-level inference, however, probabilistic graphical models with their Bayesian nature are still more powerful and flexible. In recent years, Bayesian Deep Learning has emerged as a unified probabilistic framework to tightly integrate Deep Learning and Bayesian models. In this general framework, the perception of text or images using Deep Learning can boost the performance of higher-level inference and, in turn, the feedback from the inference process is able to enhance the perception of text or images. This survey provides a comprehensive introduction to Bayesian Deep Learning and reviews its recent applications on recommender systems, topic models, control, etc. We also discuss the relationship and differences between Bayesian Deep Learning and other related topics such as Bayesian treatment of neural networks.

Reza Shokri - One of the best experts on this subject based on the ideXlab platform.

  • Privacy-preserving Deep Learning
    53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton 2015), 2016
    Co-Authors: Reza Shokri, Vitaly Shmatikov
    Abstract:

    Deep Learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of Deep Learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of Deep Learning techniques is directly proportional to the amount of data available for training. Massive data collection required for Deep Learning presents obvious privacy issues. Users’ personal, highly sensitive data such as photos and voice recordings is kept indefinitely by the companies that collect it. Users can neither delete it, nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extra-judicial surveillance. Many data owners (for example, medical institutions that may want to apply Deep Learning methods to clinical records) are prevented by privacy and confidentiality concerns from sharing the data and thus benefitting from large-scale Deep Learning. In this paper, we design, implement, and evaluate a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets. We exploit the fact that the optimization algorithms used in modern Deep Learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously. Our system lets participants train independently on their own datasets and selectively share small subsets of their models’ key parameters during training. This offers an attractive point in the utility/privacy tradeoff space: participants preserve the privacy of their respective data while still benefitting from other participants’ models and thus boosting their Learning accuracy beyond what is achievable solely on their own inputs. We demonstrate the accuracy of our privacy-preserving Deep Learning on benchmark datasets.

  • Privacy-preserving Deep Learning
    Allerton Conference on Communication Control and Computing, 2015
    Co-Authors: Reza Shokri, Vitaly Shmatikov
    Abstract:

    Deep Learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of Deep Learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of Deep Learning techniques is directly proportional to the amount of data available for training.

Sungroh Yoon - One of the best experts on this subject based on the ideXlab platform.

  • Deep Learning in bioinformatics
    Briefings in Bioinformatics, 2017
    Co-Authors: Sungroh Yoon
    Abstract:

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep Learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of Deep Learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review Deep Learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and Deep Learning architecture (i.e. Deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of Deep Learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply Deep Learning approaches in their bioinformatics studies.
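The convolutional models the review covers for omics data typically scan a one-hot-encoded sequence with learned filters. The sketch below illustrates the mechanic only, with a hand-set filter that scores the motif "TATA"; in real bioinformatics models the filters, the motif, and the sequence would all come from training data, not be fixed by hand as here.

```python
def one_hot(seq):
    # Encode a DNA string as a list of 4-dimensional indicator vectors.
    table = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
             "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}
    return [table[base] for base in seq]

def conv1d(x, kernel):
    # Slide the kernel over the sequence; each output is the sum of
    # per-position dot products (a single-filter 1D convolution).
    k = len(kernel)
    out = []
    for i in range(len(x) - k + 1):
        s = 0.0
        for j in range(k):
            s += sum(a * b for a, b in zip(x[i + j], kernel[j]))
        out.append(s)
    return out

# Hand-set filter matching "TATA" (illustrative; learned in practice).
kernel = [[0, 0, 0, 1], [1, 0, 0, 0], [0, 0, 0, 1], [1, 0, 0, 0]]
scores = conv1d(one_hot("GCGTATAGCG"), kernel)
best = max(range(len(scores)), key=lambda i: scores[i])
print(best, scores[best])  # window at index 3 ("TATA") scores 4.0
```

Stacking many such filters, nonlinearities, and pooling layers yields the convolutional architectures the review categorizes for omics, imaging, and signal-processing tasks.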