Feature Representation

Dinggang Shen - One of the best experts on this subject based on the ideXlab platform.

  • Inherent Structure-Based Multiview Learning With Multitemplate Feature Representation for Alzheimer's Disease Diagnosis
    IEEE Transactions on Biomedical Engineering, 2016
    Co-Authors: Mingxia Liu, Daoqiang Zhang, Ehsan Adeli, Dinggang Shen
    Abstract:

    Multitemplate-based brain morphometric pattern analysis using magnetic resonance imaging has recently been proposed for automatic diagnosis of Alzheimer's disease (AD) and its prodromal stage (i.e., mild cognitive impairment or MCI). In such methods, multiview morphological patterns generated from multiple templates are used as the Feature Representation for brain images. However, existing multitemplate-based methods often simply assume that each class follows a specific type of data distribution (i.e., a single cluster), whereas in reality the underlying data distribution is not known in advance. In this paper, we propose an inherent structure-based multiview learning method using multiple templates for AD/MCI classification. Specifically, we first extract multiview Feature Representations for subjects using multiple selected templates and then cluster subjects within a specific class into several subclasses (i.e., clusters) in each view space. We then encode those subclasses with unique codes by considering both their original class information and their own distribution information, followed by a multitask Feature selection model. Finally, we learn an ensemble of view-specific support vector machine classifiers based on their respectively selected Features in each view and fuse their results to reach the final decision. Experimental results on the Alzheimer's Disease Neuroimaging Initiative database demonstrate that our method achieves promising results for AD/MCI classification compared to state-of-the-art multitemplate-based methods.
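
    The following Python sketch conveys the general flavor of this pipeline under simplifying assumptions: subclass encoding is reduced to per-class k-means in each view, multitask Feature selection is replaced by univariate selection, and the view-specific SVM outputs are fused by averaging posteriors. Names and parameters are illustrative, not the authors' implementation.

```python
# Minimal sketch of an inherent-structure multiview pipeline (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

def fit_multiview_ensemble(views, y, n_subclasses=3, k_features=50):
    """views: list of (n_subjects, n_features) arrays, one per template/view.
    y: integer class labels (e.g., 0 = NC, 1 = AD)."""
    models = []
    for X in views:
        # 1) cluster each class into subclasses to expose inherent structure
        sub_y = np.empty_like(y)
        for c in np.unique(y):
            idx = np.where(y == c)[0]
            km = KMeans(n_clusters=n_subclasses, n_init=10).fit(X[idx])
            sub_y[idx] = c * n_subclasses + km.labels_   # unique subclass codes
        # 2) select features that separate the subclass codes
        sel = SelectKBest(f_classif, k=min(k_features, X.shape[1])).fit(X, sub_y)
        # 3) view-specific SVM trained on the original class labels
        clf = SVC(kernel='linear', probability=True).fit(sel.transform(X), y)
        models.append((sel, clf))
    return models

def predict_fused(models, views):
    # fuse the view-specific posteriors by simple averaging
    probs = [clf.predict_proba(sel.transform(X)) for (sel, clf), X in zip(models, views)]
    return np.mean(probs, axis=0).argmax(axis=1)
```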

  • Latent Feature Representation With Stacked Auto-Encoder for AD/MCI Diagnosis
    Brain Structure & Function, 2015
    Co-Authors: Heungil Suk, Seongwhan Lee, Dinggang Shen
    Abstract:

    Recently, there has been great interest in computer-aided diagnosis of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). Unlike previous methods that considered simple low-level Features such as gray matter tissue volumes from MRI and mean signal intensities from PET, in this paper we propose a deep learning-based latent Feature Representation with a stacked auto-encoder (SAE). We believe that there exist complicated non-linear latent patterns inherent in the low-level Features, such as relations among Features. Combining this latent information with the original Features helps build a robust model for AD/MCI classification with high diagnostic accuracy. Furthermore, thanks to the unsupervised character of pre-training in deep learning, we can use target-unrelated samples to initialize the SAE parameters and then find optimal parameters by fine-tuning with the target-related samples, further enhancing classification performance across four binary classification problems: AD vs. healthy normal control (HC), MCI vs. HC, AD vs. MCI, and MCI converter (MCI-C) vs. MCI non-converter (MCI-NC). In our experiments on the ADNI dataset, we validated the effectiveness of the proposed method, obtaining accuracies of 98.8%, 90.7%, 83.7%, and 83.3% for AD/HC, MCI/HC, AD/MCI, and MCI-C/MCI-NC classification, respectively. We believe that deep learning can shed new light on neuroimaging data analysis, and our work demonstrates the applicability of this method to brain disease diagnosis.
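
    A minimal PyTorch sketch of the greedy, unsupervised pre-training of a stacked auto-encoder in the spirit of the abstract; layer sizes, the sigmoid activation, and optimizer settings are illustrative assumptions rather than the paper's configuration.

```python
# Greedy layer-wise SAE pre-training (illustrative sketch).
import torch
import torch.nn as nn

def pretrain_layer(X, in_dim, hid_dim, epochs=50, lr=1e-3):
    """Unsupervised pre-training of one auto-encoder layer; X is a float tensor."""
    enc, dec = nn.Linear(in_dim, hid_dim), nn.Linear(hid_dim, in_dim)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    for _ in range(epochs):
        h = torch.sigmoid(enc(X))
        loss = nn.functional.mse_loss(dec(h), X)   # reconstruct the layer input
        opt.zero_grad(); loss.backward(); opt.step()
    return enc, torch.sigmoid(enc(X)).detach()

def build_sae(X, dims=(100, 50, 25)):
    """Stack pre-trained encoders; unlabeled (target-unrelated) samples can be used here."""
    encoders, H = [], X
    for hid in dims:
        enc, H = pretrain_layer(H, H.shape[1], hid)
        encoders.append(enc)
    return encoders
```

    Fine-tuning then appends a classifier and retrains the whole stack on the labeled, target-related samples (see the classification sketch further below).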

  • Hierarchical Multi-Atlas Label Fusion With Multi-Scale Feature Representation and Label-Specific Patch Partition
    NeuroImage, 2015
    Co-Authors: Minjeong Kim, Gerard Sanroma, Qian Wang, Brent C Munsell, Dinggang Shen
    Abstract:

    Multi-atlas patch-based label fusion methods have been successfully used to improve segmentation accuracy in many important medical image analysis applications. In general, to achieve label fusion a single target image is first registered to several atlas images. After registration, a label is assigned to each point in the target image by determining the similarity between the underlying target image patch (centered at the target point) and the aligned image patch in each atlas image. To achieve the highest level of accuracy during the label fusion process, it is critical that the chosen patch similarity measurement accurately capture the tissue/shape appearance of the anatomical structure. One major limitation of existing state-of-the-art label fusion methods is that they often apply a fixed-size image patch throughout the entire label fusion procedure. Doing so may severely affect the fidelity of the patch similarity measurement, which in turn may not adequately capture the complex tissue appearance patterns expressed by the anatomical structure. To address this limitation, we advance the state of the art with three new label fusion contributions. First, each image patch is characterized by a multi-scale Feature Representation that encodes both local and semi-local image information, increasing the accuracy of the patch-based similarity measurement. Second, to limit the possibility of the patch-based similarity measurement being wrongly guided by the presence of multiple anatomical structures in the same image patch, each atlas image patch is further partitioned into a set of label-specific partial image patches according to the existing labels. Since image information is thereby semantically divided into different patterns, these label-specific atlas patches make the label fusion process more specific and flexible. Lastly, in order to correct target points that are mislabeled during label fusion, a hierarchical, coarse-to-fine iterative approach is used that gradually reduces the patch size. To evaluate the accuracy of our label fusion approach, the proposed method was used to segment the hippocampus in the ADNI dataset and 7.0 T MR images, sub-cortical regions in the LONI LPBA40 dataset, mid-brain regions in the SATA dataset from the MICCAI 2013 segmentation challenge, and a set of key internal gray matter structures in the IXI dataset. In all experiments, the segmentation results of the proposed hierarchical label fusion method with multi-scale Feature Representations and label-specific atlas patches are more accurate than those of several well-known state-of-the-art label fusion methods.
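
    The numpy sketch below conveys the core idea of patch-based label fusion with a multi-scale patch descriptor: atlas labels at a point are weighted by the similarity of their multi-scale patches to the target patch. The Gaussian weighting, the normalization, and the assumption that the point lies away from the image boundary are illustrative simplifications; the coarse-to-fine refinement and label-specific patch partition are omitted.

```python
# Patch-based label fusion with a multi-scale descriptor (illustrative sketch).
import numpy as np

def multiscale_patch(img, center, radii=(1, 2, 4)):
    """Concatenate patches of several radii around `center` (local + semi-local info)."""
    z, y, x = center
    feats = []
    for r in radii:
        p = img[z-r:z+r+1, y-r:y+r+1, x-r:x+r+1]
        feats.append(p.ravel() / (p.std() + 1e-8))   # crude per-scale normalization
    return np.concatenate(feats)

def fuse_label(target_img, atlas_imgs, atlas_labs, center, radii=(1, 2, 4), sigma=1.0):
    """Weight each aligned atlas patch by its similarity to the target patch."""
    f_t = multiscale_patch(target_img, center, radii)
    votes = {}
    for img, lab in zip(atlas_imgs, atlas_labs):
        f_a = multiscale_patch(img, center, radii)
        w = np.exp(-np.sum((f_t - f_a) ** 2) / (2 * sigma ** 2))
        l = lab[tuple(center)]
        votes[l] = votes.get(l, 0.0) + w
    return max(votes, key=votes.get)   # weighted majority vote
```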

  • Hierarchical Feature Representation and Multimodal Fusion With Deep Learning for AD/MCI Diagnosis
    NeuroImage, 2014
    Co-Authors: Heungil Suk, Seongwhan Lee, Dinggang Shen
    Abstract:

    Over the last decade, it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and also that fusion of different modalities can provide complementary information to enhance diagnostic accuracy. Here, we focus on the problems of both Feature Representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To the best of our knowledge, previous methods in the literature mostly used hand-crafted Features such as cortical thickness and gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal Features by simply concatenating them into a long vector or transforming them into a higher-dimensional kernel space. In this paper, we propose a novel method for a high-level latent and shared Feature Representation from neuroimaging modalities via deep learning. Specifically, we use a Deep Boltzmann Machine (DBM), a deep network with a restricted Boltzmann machine as a building block, to find a latent hierarchical Feature Representation from a 3D patch, and then devise a systematic method for a joint Feature Representation from the paired patches of MRI and PET with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on the ADNI dataset and compared with state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET.
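
    As context for the DBM mentioned above, the following numpy sketch trains its restricted Boltzmann machine building block with one step of contrastive divergence (CD-1) on, e.g., flattened 3D patches. Training a full (multimodal) DBM involves layer-wise pre-training and joint mean-field/stochastic updates that go well beyond this illustration.

```python
# Restricted Boltzmann machine building block trained with CD-1 (illustrative sketch).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(V, n_hidden, epochs=20, lr=0.05, seed=0):
    """V: (n_samples, n_visible) binary/normalized data, e.g. flattened 3D patches."""
    rng = np.random.default_rng(seed)
    n_vis = V.shape[1]
    W = 0.01 * rng.standard_normal((n_vis, n_hidden))
    b_v, b_h = np.zeros(n_vis), np.zeros(n_hidden)
    for _ in range(epochs):
        # positive phase: hidden activations given the data
        p_h = sigmoid(V @ W + b_h)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        # negative phase: one Gibbs step back to visible, then to hidden again
        p_v = sigmoid(h @ W.T + b_v)
        p_h2 = sigmoid(p_v @ W + b_h)
        # CD-1 gradient approximation
        W += lr * (V.T @ p_h - p_v.T @ p_h2) / len(V)
        b_v += lr * (V - p_v).mean(axis=0)
        b_h += lr * (p_h - p_h2).mean(axis=0)
    return W, b_v, b_h
```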

Heungil Suk - One of the best experts on this subject based on the ideXlab platform.

  • Latent Feature Representation With Stacked Auto-Encoder for AD/MCI Diagnosis
    Brain Structure & Function, 2015
    Co-Authors: Heungil Suk, Seongwhan Lee, Dinggang Shen
    Abstract:

    Recently, there has been great interest in computer-aided diagnosis of Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). Unlike previous methods that considered simple low-level Features such as gray matter tissue volumes from MRI and mean signal intensities from PET, in this paper we propose a deep learning-based latent Feature Representation with a stacked auto-encoder (SAE). We believe that there exist complicated non-linear latent patterns inherent in the low-level Features, such as relations among Features. Combining this latent information with the original Features helps build a robust model for AD/MCI classification with high diagnostic accuracy. Furthermore, thanks to the unsupervised character of pre-training in deep learning, we can use target-unrelated samples to initialize the SAE parameters and then find optimal parameters by fine-tuning with the target-related samples, further enhancing classification performance across four binary classification problems: AD vs. healthy normal control (HC), MCI vs. HC, AD vs. MCI, and MCI converter (MCI-C) vs. MCI non-converter (MCI-NC). In our experiments on the ADNI dataset, we validated the effectiveness of the proposed method, obtaining accuracies of 98.8%, 90.7%, 83.7%, and 83.3% for AD/HC, MCI/HC, AD/MCI, and MCI-C/MCI-NC classification, respectively. We believe that deep learning can shed new light on neuroimaging data analysis, and our work demonstrates the applicability of this method to brain disease diagnosis.

  • Hierarchical Feature Representation and Multimodal Fusion With Deep Learning for AD/MCI Diagnosis
    NeuroImage, 2014
    Co-Authors: Heungil Suk, Seongwhan Lee, Dinggang Shen
    Abstract:

    Over the last decade, it has been shown that neuroimaging can be a potential tool for the diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), and also that fusion of different modalities can provide complementary information to enhance diagnostic accuracy. Here, we focus on the problems of both Feature Representation and fusion of multimodal information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). To the best of our knowledge, previous methods in the literature mostly used hand-crafted Features such as cortical thickness and gray matter densities from MRI, or voxel intensities from PET, and then combined these multimodal Features by simply concatenating them into a long vector or transforming them into a higher-dimensional kernel space. In this paper, we propose a novel method for a high-level latent and shared Feature Representation from neuroimaging modalities via deep learning. Specifically, we use a Deep Boltzmann Machine (DBM), a deep network with a restricted Boltzmann machine as a building block, to find a latent hierarchical Feature Representation from a 3D patch, and then devise a systematic method for a joint Feature Representation from the paired patches of MRI and PET with a multimodal DBM. To validate the effectiveness of the proposed method, we performed experiments on the ADNI dataset and compared with state-of-the-art methods. In three binary classification problems of AD vs. healthy Normal Control (NC), MCI vs. NC, and MCI converter vs. MCI non-converter, we obtained maximal accuracies of 95.35%, 85.67%, and 74.58%, respectively, outperforming the competing methods. By visual inspection of the trained model, we observed that the proposed method could hierarchically discover the complex latent patterns inherent in both MRI and PET.

  • Deep Learning-Based Feature Representation for AD/MCI Classification
    Medical Image Computing and Computer-Assisted Intervention, 2013
    Co-Authors: Heungil Suk, Dinggang Shen
    Abstract:

    In recent years, there has been great interest in computer-aided diagnosis of Alzheimer's Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI). Unlike previous methods that consider simple low-level Features such as gray matter tissue volumes from MRI and mean signal intensities from PET, in this paper we propose a deep learning-based Feature Representation with a stacked auto-encoder. We believe that there exist latent complicated patterns, e.g., non-linear relations, inherent in the low-level Features. Combining this latent information with the original low-level Features helps build a robust model for AD/MCI classification with high diagnostic accuracy. Using the ADNI dataset, we conducted experiments showing that the proposed method is 95.9%, 85.0%, and 75.8% accurate for AD, MCI, and MCI-converter diagnosis, respectively.
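
    Complementing the SAE pre-training sketch given earlier, a minimal fine-tuning step that stacks the pre-trained encoders, appends a soft-max classifier, and trains end-to-end on labeled data might look as follows (layer sizes, epochs, and optimizer are assumptions, not the paper's setup).

```python
# Supervised fine-tuning of a pre-trained encoder stack (illustrative sketch).
import torch
import torch.nn as nn

def fine_tune(encoders, X, y, n_classes=2, epochs=100, lr=1e-4):
    """encoders: list of pre-trained nn.Linear layers; X float tensor, y long tensor."""
    layers = []
    for enc in encoders:                      # reuse the pre-trained weights
        layers += [enc, nn.Sigmoid()]
    layers.append(nn.Linear(encoders[-1].out_features, n_classes))
    model = nn.Sequential(*layers)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(model(X), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model
```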

Chunhua Yang - One of the best experts on this subject based on the ideXlab platform.

  • Hierarchical Quality-Relevant Feature Representation for Soft Sensor Modeling: A Novel Deep Learning Strategy
    IEEE Transactions on Industrial Informatics, 2020
    Co-Authors: Xiaofeng Yuan, Biao Huang, Yalin Wang, Chunhua Yang, Jiao Zhou, Weihua Gui
    Abstract:

    Deep learning is a recently developed Feature Representation technique for data with complicated structures and has great potential for soft sensing of industrial processes. However, most deep networks mainly focus on hierarchical Feature learning from the raw observed input data. For soft sensor applications, it is important to reduce irrelevant information and extract quality-relevant Features from the raw input data for quality prediction. To deal with this problem, a novel deep learning network for quality-relevant Feature Representation is proposed in this article, based on a stacked quality-driven autoencoder (SQAE). First, a quality-driven autoencoder (QAE) is designed by exploiting the quality data to guide Feature extraction, with the constraint that the learned Features should largely reconstruct both the input-layer data and the quality data at the output layer. In this way, quality-relevant Features can be captured by the QAE. Then, by stacking multiple QAEs to construct the deep SQAE network, SQAE can gradually reduce irrelevant Features and learn hierarchical quality-relevant Features. Finally, the high-level quality-relevant Features can be applied directly for soft sensing of the quality variables. The effectiveness and flexibility of the proposed deep learning model are validated on an industrial debutanizer column process.
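
    A hedged PyTorch sketch of a single quality-driven auto-encoder (QAE) layer as described: the hidden code is constrained to reconstruct the input and predict the quality variable at the same time. The loss weighting alpha and layer sizes are assumptions; stacking the trained hidden codes layer by layer would yield the SQAE.

```python
# One quality-driven auto-encoder (QAE) layer (illustrative sketch).
import torch
import torch.nn as nn

class QAE(nn.Module):
    def __init__(self, in_dim, hid_dim, q_dim=1):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)
        self.dec_x = nn.Linear(hid_dim, in_dim)   # reconstructs the input data
        self.dec_q = nn.Linear(hid_dim, q_dim)    # predicts the quality data

    def forward(self, x):
        h = torch.sigmoid(self.enc(x))
        return h, self.dec_x(h), self.dec_q(h)

def train_qae(X, Q, hid_dim, alpha=0.5, epochs=100, lr=1e-3):
    """X: (n, in_dim) float tensor of process variables; Q: (n, q_dim) quality data."""
    qae = QAE(X.shape[1], hid_dim, Q.shape[1])
    opt = torch.optim.Adam(qae.parameters(), lr=lr)
    for _ in range(epochs):
        h, x_hat, q_hat = qae(X)
        loss = nn.functional.mse_loss(x_hat, X) + alpha * nn.functional.mse_loss(q_hat, Q)
        opt.zero_grad(); loss.backward(); opt.step()
    return qae, h.detach()   # the hidden codes feed the next QAE when stacking
```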

  • Deep Learning-Based Feature Representation and Its Application for Soft Sensor Modeling With Variable-Wise Weighted SAE
    IEEE Transactions on Industrial Informatics, 2018
    Co-Authors: Xiaofeng Yuan, Biao Huang, Yalin Wang, Chunhua Yang
    Abstract:

    In modern industrial processes, soft sensors play an important role in effective process control, optimization, and monitoring. Feature Representation is one of the core factors in constructing accurate soft sensors. Recently, deep learning techniques have been developed for high-level abstract Feature extraction in pattern recognition, and they also have great potential for soft sensing applications. Hence, a deep stacked autoencoder (SAE) is introduced for soft sensing in this paper. For output prediction purposes, however, traditional deep learning algorithms cannot extract high-level output-related Features. Thus, a novel variable-wise weighted stacked autoencoder (VW-SAE) is proposed for hierarchical output-related Feature Representation, layer by layer. Through correlation analysis with the output variable, important variables are identified among the others in the input layer of each autoencoder and are assigned different weights accordingly. Variable-wise weighted autoencoders are then designed and stacked to form deep networks. An industrial application shows that the proposed VW-SAE gives better prediction performance than traditional multilayer neural networks and the standard SAE.
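
    The sketch below illustrates the variable-wise weighting idea: input variables more correlated with the output receive larger weights in the reconstruction loss of each auto-encoder layer. The exact weighting scheme and training details are assumptions for illustration, not the paper's implementation.

```python
# Variable-wise weighted auto-encoder layer (illustrative sketch).
import numpy as np
import torch
import torch.nn as nn

def variable_weights(X, y):
    """|Pearson correlation| of every input variable with the output variable y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.abs(corr)

def train_vw_ae(X_np, y_np, hid_dim, epochs=200, lr=1e-3):
    """X_np: (n, d) input variables; y_np: (n,) output variable."""
    w = torch.tensor(variable_weights(X_np, y_np), dtype=torch.float32)
    X = torch.tensor(X_np, dtype=torch.float32)
    enc, dec = nn.Linear(X.shape[1], hid_dim), nn.Linear(hid_dim, X.shape[1])
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    for _ in range(epochs):
        h = torch.sigmoid(enc(X))
        loss = (w * (dec(h) - X) ** 2).mean()   # variable-wise weighted reconstruction
        opt.zero_grad(); loss.backward(); opt.step()
    return enc   # hidden codes of the trained layer feed the next layer when stacking
```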

Xiaofeng Yuan - One of the best experts on this subject based on the ideXlab platform.

  • Hierarchical Quality-Relevant Feature Representation for Soft Sensor Modeling: A Novel Deep Learning Strategy
    IEEE Transactions on Industrial Informatics, 2020
    Co-Authors: Xiaofeng Yuan, Biao Huang, Yalin Wang, Chunhua Yang, Jiao Zhou, Weihua Gui
    Abstract:

    Deep learning is a recently developed Feature Representation technique for data with complicated structures and has great potential for soft sensing of industrial processes. However, most deep networks mainly focus on hierarchical Feature learning from the raw observed input data. For soft sensor applications, it is important to reduce irrelevant information and extract quality-relevant Features from the raw input data for quality prediction. To deal with this problem, a novel deep learning network for quality-relevant Feature Representation is proposed in this article, based on a stacked quality-driven autoencoder (SQAE). First, a quality-driven autoencoder (QAE) is designed by exploiting the quality data to guide Feature extraction, with the constraint that the learned Features should largely reconstruct both the input-layer data and the quality data at the output layer. In this way, quality-relevant Features can be captured by the QAE. Then, by stacking multiple QAEs to construct the deep SQAE network, SQAE can gradually reduce irrelevant Features and learn hierarchical quality-relevant Features. Finally, the high-level quality-relevant Features can be applied directly for soft sensing of the quality variables. The effectiveness and flexibility of the proposed deep learning model are validated on an industrial debutanizer column process.

  • Deep Learning-Based Feature Representation and Its Application for Soft Sensor Modeling With Variable-Wise Weighted SAE
    IEEE Transactions on Industrial Informatics, 2018
    Co-Authors: Xiaofeng Yuan, Biao Huang, Yalin Wang, Chunhua Yang
    Abstract:

    In modern industrial processes, soft sensors play an important role in effective process control, optimization, and monitoring. Feature Representation is one of the core factors in constructing accurate soft sensors. Recently, deep learning techniques have been developed for high-level abstract Feature extraction in pattern recognition, and they also have great potential for soft sensing applications. Hence, a deep stacked autoencoder (SAE) is introduced for soft sensing in this paper. For output prediction purposes, however, traditional deep learning algorithms cannot extract high-level output-related Features. Thus, a novel variable-wise weighted stacked autoencoder (VW-SAE) is proposed for hierarchical output-related Feature Representation, layer by layer. Through correlation analysis with the output variable, important variables are identified among the others in the input layer of each autoencoder and are assigned different weights accordingly. Variable-wise weighted autoencoders are then designed and stacked to form deep networks. An industrial application shows that the proposed VW-SAE gives better prediction performance than traditional multilayer neural networks and the standard SAE.

Jinhyeok Jang - One of the best experts on this subject based on the ideXlab platform.

  • Multi-Objective Based Spatio-Temporal Feature Representation Learning Robust to Expression Intensity Variations for Facial Expression Recognition
    IEEE Transactions on Affective Computing, 2019
    Co-Authors: Dae Hoe Kim, Wissam J Baddar, Jinhyeok Jang
    Abstract:

    Facial expression recognition (FER) is gaining importance in various emerging affective computing applications. In practice, achieving accurate FER is challenging due to large inter-personal variations, such as variations in expression intensity. In this paper, we propose a new spatio-temporal Feature Representation learning method for FER that is robust to expression intensity variations. The proposed method utilizes representative expression states (e.g., onset, apex, and offset of expressions), which can be identified in facial sequences regardless of expression intensity. The characteristics of facial expressions are encoded in two parts. In the first part, spatial image characteristics of the representative expression-state frames are learned via a convolutional neural network, with five objective terms proposed to improve the expression-class separability of the spatial Feature Representation. In the second part, temporal characteristics of the spatial Feature Representation from the first part are learned with a long short-term memory (LSTM) network over the facial expression sequence. Comprehensive experiments have been conducted on a deliberate expression dataset (MMI) and a spontaneous micro-expression dataset (CASME II). Experimental results show that the proposed method achieves higher recognition rates on both datasets than state-of-the-art methods.
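
    A compact PyTorch sketch of the two-part encoding described above: a small CNN learns spatial features per expression-state frame and an LSTM models their temporal order, with a classifier on the final state. The architecture sizes are illustrative and the five objective terms are not reproduced here.

```python
# CNN-per-frame spatial encoding followed by LSTM temporal modeling (illustrative sketch).
import torch
import torch.nn as nn

class SpatioTemporalFER(nn.Module):
    def __init__(self, n_classes=7, feat_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(              # per-frame spatial encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 4 * 4, feat_dim))
        self.lstm = nn.LSTM(feat_dim, 64, batch_first=True)   # temporal model
        self.head = nn.Linear(64, n_classes)

    def forward(self, frames):                 # frames: (batch, time, 1, H, W)
        b, t = frames.shape[:2]
        f = self.cnn(frames.reshape(b * t, *frames.shape[2:])).reshape(b, t, -1)
        out, _ = self.lstm(f)
        return self.head(out[:, -1])           # classify from the last hidden state
```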