Deep Neural Network

The Experts below are selected from a list of 104,943 Experts worldwide, ranked by the ideXlab platform.

Damith C Ranasinghe - One of the best experts on this subject based on the ideXlab platform.

  • Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems
    Annual Computer Security Applications Conference, 2020
    Co-Authors: Bao Gia Doan, Ehsan Abbasnejad, Damith C Ranasinghe
    Abstract:

    We propose Februus, a new approach to neutralizing highly potent and insidious Trojan attacks on Deep Neural Network (DNN) systems at run-time. In Trojan attacks, an adversary activates a backdoor crafted in a Deep Neural Network model using a secret trigger, a Trojan, applied to any input to alter the model's decision to a target prediction, one determined by and known only to the attacker. Februus sanitizes the incoming input by surgically removing the potential trigger artifacts and restoring the input for the classification task. Februus enables effective Trojan mitigation with no loss of performance on sanitized inputs, whether Trojaned or benign. Our extensive evaluations on multiple infected models based on four popular datasets across three contrasting vision applications and trigger types demonstrate the high efficacy of Februus. We dramatically reduced attack success rates from 100% to near 0% in all cases (achieving 0% in multiple cases) and evaluated the generalizability of Februus against complex adaptive attacks; notably, we realized the first defense against the advanced partial Trojan attack. To the best of our knowledge, Februus is the first run-time backdoor defense capable of sanitizing Trojaned inputs without requiring anomaly detection methods, model retraining, or costly labeled data.
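
    The two-stage pipeline described above, locating the trigger region and then restoring the excised pixels, can be sketched as follows. This is a minimal numpy illustration under stated assumptions, not the authors' implementation: the paper localizes the trigger via GradCAM on the suspect model and restores the region with a GAN-based inpainter, whereas the saliency_map input and the mean-fill restoration below are placeholders.

    ```python
    import numpy as np

    def purify_input(image, saliency_map, quantile=0.99):
        """Februus-style sanitization sketch: mask the most-attended region,
        then restore it. Real trigger localization uses GradCAM; real
        restoration uses a GAN inpainter. Both are simplified here."""
        threshold = np.quantile(saliency_map, quantile)
        trigger_mask = saliency_map >= threshold      # suspected trigger pixels
        restored = image.copy()
        restored[trigger_mask] = image[~trigger_mask].mean(axis=0)  # naive "inpainting"
        return restored, trigger_mask

    # Toy usage: a 32x32 RGB image with a bright patch standing in for a trigger.
    rng = np.random.default_rng(0)
    img = rng.uniform(0.0, 1.0, size=(32, 32, 3))
    img[0:4, 0:4, :] = 1.0                            # planted trigger patch
    sal = np.zeros((32, 32))
    sal[0:4, 0:4] = 1.0                               # pretend GradCAM flagged it
    clean, mask = purify_input(img, sal)
    print(mask.sum(), "pixels sanitized")
    ```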

Jun Zhao - One of the best experts on this subject based on the ideXlab platform.

  • Relation Classification via Convolutional Deep Neural Network
    Coling, 2014
    Co-Authors: Daojian Zeng, Siwei Lai, Kang Liu, Guangyou Zhou, Jun Zhao
    Abstract:

    The state-of-the-art methods used for relation classification are primarily based on statistical machine learning, and their performance strongly depends on the quality of the extracted features. The extracted features are often derived from the output of pre-existing natural language processing (NLP) systems, which propagates the errors of the existing tools and hinders the performance of these systems. In this paper, we exploit a convolutional Deep Neural Network (DNN) to extract lexical and sentence level features. Our method takes all of the word tokens as input without complicated pre-processing. First, the word tokens are transformed to vectors by looking up word embeddings. Then, lexical level features are extracted according to the given nouns. Meanwhile, sentence level features are learned using a convolutional approach. These two levels of features are concatenated to form the final extracted feature vector. Finally, the features are fed into a softmax classifier to predict the relationship between two marked nouns. The experimental results demonstrate that our approach significantly outperforms the state-of-the-art methods.
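
    A minimal PyTorch sketch of the described pipeline, with illustrative sizes (the vocabulary, embedding dimension, filter count, and 19-way output are assumptions, not the paper's exact configuration): lexical features are the embeddings of the two marked nouns, sentence-level features come from a convolution with max-over-time pooling, and their concatenation feeds a softmax classifier (here, the module returns logits for a cross-entropy loss).

    ```python
    import torch
    import torch.nn as nn

    class RelationCNN(nn.Module):
        """Sketch: word embeddings -> lexical features (the two marked nouns)
        plus sentence-level convolutional features, concatenated and fed to
        a softmax classifier. All sizes are illustrative."""
        def __init__(self, vocab=5000, dim=50, filters=100, classes=19):
            super().__init__()
            self.emb = nn.Embedding(vocab, dim)
            self.conv = nn.Conv1d(dim, filters, kernel_size=3, padding=1)
            self.out = nn.Linear(2 * dim + filters, classes)    # lexical + sentence

        def forward(self, tokens, noun_idx):
            e = self.emb(tokens)                                # (batch, seq, dim)
            batch = torch.arange(e.size(0)).unsqueeze(1)
            lexical = e[batch, noun_idx].flatten(1)             # (batch, 2*dim)
            sent = self.conv(e.transpose(1, 2))                 # (batch, filters, seq)
            sent = sent.max(dim=2).values                       # max-over-time pooling
            return self.out(torch.cat([lexical, sent], dim=1))  # logits for softmax

    model = RelationCNN()
    toks = torch.randint(0, 5000, (2, 20))                      # two sentences, 20 tokens
    nouns = torch.tensor([[3, 7], [1, 15]])                     # positions of marked nouns
    print(model(toks, nouns).shape)                             # torch.Size([2, 19])
    ```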

Greg Mori - One of the best experts on this subject based on the ideXlab platform.

  • Constraint-Aware Deep Neural Network Compression
    European Conference on Computer Vision, 2018
    Co-Authors: Changan Chen, Frederick Tung, Naveen Vedula, Greg Mori
    Abstract:

    Deep Neural Network compression has the potential to bring modern resource-hungry Deep Networks to resource-limited devices. However, in many of the most compelling deployment scenarios of compressed Deep Networks, the operational constraints matter: for example, a pedestrian detection Network on a self-driving car may have to satisfy a latency constraint for safe operation. We propose the first principled treatment of Deep Network compression under operational constraints. We formulate the compression learning problem from the perspective of constrained Bayesian optimization, and introduce a cooling (annealing) strategy to guide the Network compression towards the target constraints. Experiments on ImageNet demonstrate the value of modelling constraints directly in Network compression.
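
    A toy sketch of the cooling strategy under loudly labeled assumptions: the accuracy and latency functions below are synthetic stand-ins for evaluating a compressed network, and simple hill climbing replaces the Gaussian-process acquisition a full constrained Bayesian optimization would use. What it illustrates is the annealed constraint bound, which starts loose and is tightened toward the operational target so the search can traverse initially infeasible configurations.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def accuracy(s):                  # toy proxy: more pruning, lower accuracy
        return 0.95 - 0.4 * s**2

    def latency(s):                   # toy proxy: more pruning, lower latency (ms)
        return 10.0 * (1.0 - s)

    target = 6.0                      # operational latency budget (ms)
    s = 0.05                          # start from a barely compressed network
    steps = 200
    for step in range(steps):
        t = step / (steps - 1)
        bound = (1 - t) * 10.0 + t * target                # cooled (annealed) bound
        c = float(np.clip(s + rng.normal(0, 0.05), 0, 1))  # local perturbation
        if latency(c) <= bound and (latency(s) > bound or accuracy(c) > accuracy(s)):
            s = c                                          # restore feasibility or improve

    print(f"sparsity {s:.2f}: accuracy {accuracy(s):.3f}, latency {latency(s):.2f} ms")
    ```

    By the final step the bound equals the true budget, so the returned configuration approximately satisfies the constraint while retaining as much accuracy as the search found along the way.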

Brian Kingsbury - One of the best experts on this subject based on the ideXlab platform.

  • Data Augmentation for Deep Neural Network Acoustic Modeling
    IEEE Transactions on Audio Speech and Language Processing, 2015
    Co-Authors: Xiaodong Cui, Vaibhava Goel, Brian Kingsbury
    Abstract:

    This paper investigates data augmentation for Deep Neural Network acoustic modeling based on label-preserving transformations to deal with data sparsity. Two data augmentation approaches, vocal tract length perturbation (VTLP) and stochastic feature mapping (SFM), are investigated for both Deep Neural Networks (DNNs) and convolutional Neural Networks (CNNs). The approaches are focused on increasing speaker and speech variations of the limited training data such that the acoustic models trained with the augmented data are more robust to such variations. In addition, a two-stage data augmentation scheme based on a stacked architecture is proposed to combine VTLP and SFM as complementary approaches. Experiments are conducted on Assamese and Haitian Creole, two development languages of the IARPA Babel program, and improved performance on automatic speech recognition (ASR) and keyword search (KWS) is reported.
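
    A sketch of the VTLP half of the recipe, assuming log-mel filterbank features arranged as a frames-by-bins matrix. Real VTLP warps the filterbank center frequencies during feature extraction; resampling finished features along the frequency axis, as below, is a simplification, and the 0.9 to 1.1 warp-factor range is a commonly used choice rather than necessarily the paper's setting.

    ```python
    import numpy as np

    def vtlp(features, alpha=None, rng=None):
        """Warp the frequency axis of (frames x bins) features by a random
        factor alpha, resampling back to the original bins with linear
        interpolation. A label-preserving transform: the transcript of the
        perturbed utterance is unchanged."""
        rng = np.random.default_rng() if rng is None else rng
        frames, bins = features.shape
        if alpha is None:
            alpha = rng.uniform(0.9, 1.1)             # typical warp-factor range
        src = np.clip(np.arange(bins) * alpha, 0, bins - 1)
        warped = np.empty_like(features)
        for i in range(frames):                       # resample each frame's spectrum
            warped[i] = np.interp(src, np.arange(bins), features[i])
        return warped

    feats = np.random.default_rng(0).normal(size=(100, 40))  # 100 frames, 40 mel bins
    augmented = [vtlp(feats) for _ in range(3)]              # three perturbed copies
    print(augmented[0].shape)                                # (100, 40)
    ```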

Manqing Dong - One of the best experts on this subject based on the ideXlab platform.

  • Deep Neural Network Hyperparameter Optimization with Orthogonal Array Tuning
    International Conference on Neural Information Processing, 2019
    Co-Authors: Xiang Zhang, Xiaocong Chen, Lina Yao, Manqing Dong
    Abstract:

    Deep learning algorithms have lately achieved excellent performance in a wide range of fields (e.g., computer vision). However, a severe challenge faced by Deep learning is its high dependency on hyper-parameters: results may fluctuate dramatically under different hyper-parameter configurations. Addressing this issue, this paper presents an efficient Orthogonal Array Tuning Method (OATM) for Deep learning hyper-parameter tuning. We describe the OATM approach in five detailed steps and elaborate on it using two widely used Deep Neural Network structures (Recurrent Neural Networks and Convolutional Neural Networks). The proposed method is compared to state-of-the-art hyper-parameter tuning methods, both manual (e.g., grid search and random search) and automatic (e.g., Bayesian optimization). The experimental results show that OATM significantly reduces tuning time compared to the state-of-the-art methods while preserving satisfactory performance.
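
    A sketch of the orthogonal-array idea with three illustrative 3-level hyper-parameters (the parameter names, levels, and the stand-in evaluate function are assumptions, not the paper's setup): the standard Taguchi L9 array covers every pairwise combination of levels in 9 runs, where a full grid over the same levels would need 27.

    ```python
    import numpy as np

    # First three columns of the standard Taguchi L9(3^4) orthogonal array:
    # each pair of columns contains every (level, level) combination exactly once.
    L9 = [
        (0, 0, 0), (0, 1, 1), (0, 2, 2),
        (1, 0, 1), (1, 1, 2), (1, 2, 0),
        (2, 0, 2), (2, 1, 0), (2, 2, 1),
    ]

    levels = {
        "learning_rate": [1e-4, 1e-3, 1e-2],
        "batch_size":    [32, 64, 128],
        "dropout":       [0.1, 0.3, 0.5],
    }

    def evaluate(cfg):
        # Stand-in for "train the network and return validation accuracy".
        return -abs(np.log10(cfg["learning_rate"]) + 3.0) - cfg["dropout"]

    names = list(levels)
    runs = [{n: levels[n][row[i]] for i, n in enumerate(names)} for row in L9]
    best = max(runs, key=evaluate)
    print(f"{len(runs)} runs instead of {3 ** len(names)}; best: {best}")
    ```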

  • Deep Neural Network Hyperparameter Optimization with Orthogonal Array Tuning
    arXiv: Learning, 2019
    Co-Authors: Xiang Zhang, Xiaocong Chen, Lina Yao, Manqing Dong
    Abstract:

    Abstract identical to the ICONIP 2019 entry above; this preprint additionally notes that the code is openly available on GitHub (this https URL).