Feature Extraction

The Experts below are selected from a list of 144,930 Experts worldwide, ranked by the ideXlab platform

Chulhee Lee - One of the best experts on this subject based on the ideXlab platform.

  • Feature Extraction based on the bhattacharyya distance
    Pattern Recognition, 2003
    Co-Authors: Euisun Choi, Chulhee Lee
    Abstract:

    In this paper, we present a Feature Extraction method by utilizing an error estimation equation based on the Bhattacharyya distance. We propose to use classification errors in the transformed Feature space, which are estimated using the error estimation equation, as a criterion for Feature Extraction. The construction of linear transformation for Feature Extraction is conducted using an iterative gradient descent algorithm, so that the estimated classification error is minimized. Due to the ability to predict error, it is possible to determine the minimum number of Features required for classification. Experimental results show that the proposed Feature Extraction method compares favorably with conventional methods.
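    As a rough illustration of the criterion involved (not the authors' code), the sketch below computes the closed-form Bhattacharyya distance between two Gaussian classes and the classical bound sqrt(P1*P2)*exp(-B) on the Bayes error, evaluated in a projected feature space y = W^T x. The bound stands in for the paper's error-estimation equation, and W is an arbitrary candidate projection rather than one produced by their gradient descent.

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Closed-form Bhattacharyya distance between two Gaussian class densities."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu2 - mu1
    term_mean = 0.125 * diff @ np.linalg.solve(cov, diff)
    term_cov = 0.5 * (np.linalg.slogdet(cov)[1]
                      - 0.5 * (np.linalg.slogdet(cov1)[1]
                               + np.linalg.slogdet(cov2)[1]))
    return term_mean + term_cov

def estimated_error(mu1, cov1, mu2, cov2, p1=0.5, p2=0.5):
    """Bhattacharyya bound on the two-class Bayes error: sqrt(p1*p2)*exp(-B)."""
    return np.sqrt(p1 * p2) * np.exp(-bhattacharyya_gaussian(mu1, cov1, mu2, cov2))

def error_after_projection(W, mu1, cov1, mu2, cov2):
    """Estimated error in the transformed feature space y = W.T @ x (W: d x m)."""
    return estimated_error(W.T @ mu1, W.T @ cov1 @ W, W.T @ mu2, W.T @ cov2 @ W)

# toy example: two 4-D Gaussian classes projected onto 2 features
rng = np.random.default_rng(0)
mu1, mu2 = np.zeros(4), np.array([1.0, 0.5, 0.0, 0.0])
A = rng.normal(size=(4, 4))
cov1, cov2 = A @ A.T + np.eye(4), np.eye(4)
W = rng.normal(size=(4, 2))          # an arbitrary candidate projection
print(error_after_projection(W, mu1, cov1, mu2, cov2))
```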

  • Feature Extraction based on the bhattacharyya distance for multimodal data
    International Geoscience and Remote Sensing Symposium, 2001
    Co-Authors: Euisun Choi, Chulhee Lee
    Abstract:

    In this paper, we propose a Feature Extraction method based on the Bhattacharyya distance for multimodal data. First, we estimate the classification error based on the Bhattacharyya distance between two multimodal classes that are approximated by a finite mixture of Gaussian distributions. Then we extract the Features that minimize the estimated classification error. In order to find such Features, we explore two search methods: sequential search and global search. Experiments show that the proposed Feature Extraction algorithm yields promising results.
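    A much-simplified stand-in for the search step sketched above: unimodal Gaussian classes instead of the paper's finite Gaussian mixtures, and a greedy (sequential) selection over the original coordinates instead of a search over linear feature vectors. It only illustrates how an estimated error can drive a sequential search.

```python
import numpy as np

def bhat_error(mu1, cov1, mu2, cov2, p1=0.5, p2=0.5):
    """Bhattacharyya bound on the two-class Gaussian Bayes error."""
    cov = 0.5 * (cov1 + cov2)
    d = mu2 - mu1
    b = 0.125 * d @ np.linalg.solve(cov, d) \
        + 0.5 * (np.linalg.slogdet(cov)[1]
                 - 0.5 * (np.linalg.slogdet(cov1)[1] + np.linalg.slogdet(cov2)[1]))
    return np.sqrt(p1 * p2) * np.exp(-b)

def sequential_search(mu1, cov1, mu2, cov2, n_feat):
    """Greedily add the coordinate that most reduces the estimated error."""
    chosen, err = [], None
    for _ in range(n_feat):
        candidates = [c for c in range(len(mu1)) if c not in chosen]
        errs = [bhat_error(mu1[chosen + [c]], cov1[np.ix_(chosen + [c], chosen + [c])],
                           mu2[chosen + [c]], cov2[np.ix_(chosen + [c], chosen + [c])])
                for c in candidates]
        best = int(np.argmin(errs))
        chosen.append(candidates[best])
        err = errs[best]
    return chosen, err

# toy example with 6 original features, keeping the best 3
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
cov1, cov2 = A @ A.T + np.eye(6), np.eye(6)
mu1, mu2 = np.zeros(6), rng.normal(size=6)
print(sequential_search(mu1, cov1, mu2, cov2, n_feat=3))
```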

  • analytical decision boundary Feature Extraction for neural networks for the recognition of unconstrained handwritten digits
    Systems Man and Cybernetics, 2000
    Co-Authors: Chulhee Lee
    Abstract:

    Although neural networks have been successfully applied for the recognition of unconstrained handwritten characters, there have been few efficient Feature Extraction algorithms, resulting in inefficient neural networks. We apply a decision boundary Feature Extraction algorithm to neural networks for the recognition of handwritten digits and reduce the computational cost and complexity of neural networks. Experiments show that the proposed Feature Extraction algorithm can reduce the number of Features significantly without sacrificing the performance.

  • Feature Extraction method using the bhattacharyya distance
    Journal of the Institute of Electronics Engineers of Korea, 2000
    Co-Authors: Euisun Choi, Chulhee Lee
    Abstract:

    In pattern classification, the Bhattacharyya distance has been used as a class separability measure. Furthermore, it has recently been reported that the Bhattacharyya distance can be used to estimate the error of the Gaussian ML classifier within a 1-2% margin. In this paper, we propose a Feature Extraction method utilizing the Bhattacharyya distance. In the proposed method, we first predict the classification error with the error estimation equation based on the Bhattacharyya distance. Then we find the Feature vectors that minimize the classification error using two search algorithms: sequential search and global search. Experimental results show that the proposed method compares favorably with conventional Feature Extraction methods. In addition, it is possible to determine how many Feature vectors are needed to achieve the same classification accuracy as in the original space.

  • Feature Extraction based on decision boundaries
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993
    Co-Authors: Chulhee Lee, D A Landgrebe
    Abstract:

    A novel approach to Feature Extraction for classification based directly on the decision boundaries is proposed. It is shown how discriminantly redundant Features and discriminantly informative Features are related to decision boundaries. A procedure to extract discriminantly informative Features based on a decision boundary is proposed. The proposed Feature Extraction algorithm has several desirable properties: (1) it predicts the minimum number of Features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and (2) it finds the necessary Feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal class means or equal class covariances as some previous algorithms do. Experiments show that the performance of the proposed algorithm compares favorably with those of previous algorithms.
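    A schematic sketch of the decision-boundary idea (not the authors' implementation): locate points on the boundary of any trained two-class discriminant h(x), estimate the unit normal to the boundary at each point, accumulate the outer products into a decision-boundary feature matrix, and read the informative directions off its dominant eigenvectors. The bisection, the finite-difference gradient and the random pairing of samples are simplifications made for brevity.

```python
import numpy as np

def boundary_point(h, x_a, x_b, tol=1e-6):
    """Bisect along the segment [x_a, x_b] (h changes sign between the ends)
    to locate a point on the decision boundary h(x) = 0."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.sign(h(x_a + mid * (x_b - x_a))) == np.sign(h(x_a)):
            lo = mid
        else:
            hi = mid
    return x_a + 0.5 * (lo + hi) * (x_b - x_a)

def unit_normal(h, x, eps=1e-4):
    """Unit normal to the boundary at x, from a finite-difference gradient."""
    g = np.array([(h(x + eps * e) - h(x - eps * e)) / (2 * eps)
                  for e in np.eye(len(x))])
    return g / np.linalg.norm(g)

def dbfe(h, X1, X2, n_pairs=200, seed=0):
    """Accumulate the decision-boundary feature matrix and eigendecompose it."""
    rng = np.random.default_rng(seed)
    M = np.zeros((X1.shape[1], X1.shape[1]))
    count = 0
    for _ in range(n_pairs):
        a, b = X1[rng.integers(len(X1))], X2[rng.integers(len(X2))]
        if np.sign(h(a)) == np.sign(h(b)):
            continue                              # this pair does not cross the boundary
        n = unit_normal(h, boundary_point(h, a, b))
        M += np.outer(n, n)
        count += 1
    vals, vecs = np.linalg.eigh(M / max(count, 1))
    order = np.argsort(vals)[::-1]                # informative features: large eigenvalues
    return vals[order], vecs[:, order]

# toy usage: a linear discriminant between two Gaussian clouds in 3-D
rng = np.random.default_rng(1)
X1 = rng.normal(loc=[0.0, 0.0, 0.0], size=(300, 3))
X2 = rng.normal(loc=[2.0, 1.0, 0.0], size=(300, 3))
h = lambda x: np.array([1.0, 0.5, 0.0]) @ x - 1.5   # a hand-picked discriminant
vals, _ = dbfe(h, X1, X2)
print(np.round(vals, 3))        # a single dominant eigenvalue: one feature suffices
```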

Jon Atli Benediktsson - One of the best experts on this subject based on the ideXlab platform.

  • Feature Extraction for hyperspectral imagery the evolution from shallow to deep overview and toolbox
    arXiv: Computer Vision and Pattern Recognition, 2020
    Co-Authors: Behnood Rasti, Xudong Kang, Danfeng Hong, Renlong Hang, Pedram Ghamisi, Jocelyn Chanussot, Jon Atli Benediktsson
    Abstract:

    Hyperspectral images provide detailed spectral information through hundreds of (narrow) spectral channels (also known as dimensionality or bands), with continuous spectral information that can accurately classify diverse materials of interest. The increased dimensionality of such data significantly enriches the information content but challenges conventional techniques (the so-called curse of dimensionality) for accurate analysis of hyperspectral images. Feature Extraction, as a vibrant field of research in the hyperspectral community, has evolved through decades of research to address this issue and extract informative Features suitable for data representation and classification. The advances in Feature Extraction have been inspired by two fields of research, the popularization of image and signal processing and of machine (deep) learning, leading to two types of Feature Extraction approaches, namely shallow and deep techniques. This article outlines the advances in Feature Extraction approaches for hyperspectral imagery by providing a technical overview of the state-of-the-art techniques, with useful entry points for students, researchers, and senior researchers who wish to explore novel investigations of this challenging topic. In more detail, this paper provides a bird's eye view over shallow (both supervised and unsupervised) and deep Feature Extraction approaches specifically dedicated to hyperspectral Feature Extraction and its application to hyperspectral image classification. Additionally, this paper compares 15 advanced techniques, with an emphasis on their methodological foundations, in terms of classification accuracies. Furthermore, the codes and libraries are shared at this https URL.

  • intrinsic image decomposition for Feature Extraction of hyperspectral images
    IEEE Transactions on Geoscience and Remote Sensing, 2015
    Co-Authors: Xudong Kang, Leyuan Fang, Jon Atli Benediktsson
    Abstract:

    In this paper, a novel Feature Extraction method based on intrinsic image decomposition (IID) is proposed for hyperspectral image classification. The proposed method consists of the following steps. First, the spectral dimension of the hyperspectral image is reduced with averaging-based image fusion. Then, the dimension reduced image is partitioned into several subsets of adjacent bands. Next, the reflectance and shading components of each subset are estimated with an optimization-based IID technique. Finally, pixel-wise classification is performed only on the reflectance components, which reflect the material-dependent properties of different objects. Experimental results show that, with the proposed Feature Extraction method, the support vector machine classifier is able to obtain much higher classification accuracy even when the number of training samples is quite small. This demonstrates that IID is indeed an effective way for Feature Extraction of hyperspectral images.
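    A minimal sketch of the first two steps (averaging-based fusion and partition into subsets of adjacent bands); the optimization-based IID that yields the reflectance components is the paper's own contribution and is not reproduced here. Array sizes and group sizes are illustrative assumptions.

```python
import numpy as np

def average_fusion(cube, n_out):
    """Step 1 (sketch): reduce the spectral dimension by averaging groups of
    adjacent bands.  cube: (rows, cols, n_bands) hyperspectral image."""
    edges = np.linspace(0, cube.shape[2], n_out + 1, dtype=int)
    return np.stack([cube[:, :, edges[i]:edges[i + 1]].mean(axis=2)
                     for i in range(n_out)], axis=2)

def partition_adjacent(fused, group_size):
    """Step 2 (sketch): split the fused image into subsets of adjacent bands;
    the paper then runs its optimization-based IID on each subset and keeps
    only the reflectance component for pixel-wise classification."""
    return [fused[:, :, i:i + group_size]
            for i in range(0, fused.shape[2], group_size)]

# toy usage on a random stand-in for a hyperspectral cube
cube = np.random.default_rng(0).random((64, 64, 103))
fused = average_fusion(cube, n_out=20)       # 103 bands -> 20 fused bands
subsets = partition_adjacent(fused, 4)       # subsets of 4 adjacent bands each
print(fused.shape, len(subsets))             # (64, 64, 20) 5
```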

  • classification of hyperspectral data using extended attribute profiles based on supervised and unsupervised Feature Extraction techniques
    International Journal of Image and Data Fusion, 2012
    Co-Authors: Prashanth Reddy Marpu, Jon Atli Benediktsson, Mauro Dalla Mura, Mattia Pedergnana, Stijn Peeters, Lorenzo Bruzzone
    Abstract:

    The classification of remote sensing data based on the exploitation of spatial Features extracted with morphological and attribute profiles has recently been gaining importance. With the development of efficient algorithms to construct the profiles for large datasets, such methods are becoming even more relevant. When dealing with hyperspectral imagery, the profiles are traditionally built on the first few principal components computed from the data. However, it needs to be determined whether other Feature reduction approaches are better suited to create base images for the profiles. In this article, we explore the use of profiles based on Features derived from three supervised Feature Extraction techniques (i.e. Discriminant Analysis Feature Extraction, Decision Boundary Feature Extraction and Non-parametric Weighted Feature Extraction) and two unsupervised Feature Extraction techniques (i.e. Principal Component Analysis (PCA) and Kernel PCA) in classification and compare the classification accuracies obtained.

  • classification using extended morphological attribute profiles based on different Feature Extraction techniques
    International Geoscience and Remote Sensing Symposium, 2011
    Co-Authors: Stijn Peeters, Jon Atli Benediktsson, Prashanth Reddy Marpu, Mauro Dalla Mura
    Abstract:

    Extended Morphological Attribute Profiles (EAPs) are an extension of Extended Morphological Profiles (EMPs). They are based on the more general Morphological Attribute Profiles (APs) rather than the conventional Morphological Profiles (MPs). EAPs are computed on a few of the first principal components (PCs) extracted from the multi-/hyper-spectral data. In this paper, we propose to compute EAPs on Features derived from supervised Feature Extraction techniques such as discriminant analysis Feature Extraction (DAFE), decision boundary Feature Extraction (DBFE) and non-parametric weighted Feature Extraction (NWFE) instead of using unsupervised principal component analysis (PCA).
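    A rough sketch of the profile construction, using scikit-image area openings/closings as a stand-in for the full set of attribute filters, with PCA base images as in the conventional EAP; the supervised variants discussed above (DAFE, DBFE, NWFE) would simply replace the PCA step. Thresholds and the number of base images are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from skimage.morphology import area_opening, area_closing

def to_uint8(img):
    """Rescale one base image to 8-bit so the attribute filters behave well."""
    lo, hi = img.min(), img.max()
    return np.uint8(255 * (img - lo) / (hi - lo + 1e-12))

def attribute_profile(band, thresholds=(25, 100, 400)):
    """A simplified attribute profile for one base image: area openings and
    closings at increasing area thresholds."""
    layers = [band]
    for t in thresholds:
        layers.append(area_opening(band, area_threshold=t))
        layers.append(area_closing(band, area_threshold=t))
    return np.stack(layers, axis=-1)

def eap(cube, n_base=3):
    """Build the profile stack on the first PCs of a hyperspectral cube; in the
    paper the base images may instead come from DAFE, DBFE or NWFE."""
    rows, cols, bands = cube.shape
    pcs = PCA(n_components=n_base).fit_transform(cube.reshape(-1, bands))
    pcs = pcs.reshape(rows, cols, n_base)
    profiles = [attribute_profile(to_uint8(pcs[:, :, i])) for i in range(n_base)]
    return np.concatenate(profiles, axis=-1)

cube = np.random.default_rng(0).random((64, 64, 103))
print(eap(cube).shape)                        # (64, 64, 21): 3 bases x 7 layers
```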

  • classification of hyperspectral images with extended attribute profiles and Feature Extraction techniques
    International Geoscience and Remote Sensing Symposium, 2010
    Co-Authors: Mauro Dalla Mura, Jon Atli Benediktsson, Lorenzo Bruzzone
    Abstract:

    In this paper we investigate the combined use of morphological attribute filters and Feature Extraction techniques for the classification of a high-resolution hyperspectral image. In greater detail, we propose to model the spatial information with Extended Attribute Profiles computed on the hyperspectral data and to reduce the high dimensionality of the computed morphological Features (which show a high degree of redundancy) with Feature Extraction techniques. The extracted Features are analyzed by two classifiers. The experimental analysis was carried out on a high-resolution hyperspectral image acquired by the airborne sensor ROSIS-03 over the University of Pavia, Italy. The results, compared to those obtained without Feature reduction, proved the importance of applying a Feature Extraction stage in the process.

Jian Zhu - One of the best experts on this subject based on the ideXlab platform.

  • denoising convolutional autoencoder based b mode ultrasound tongue image Feature Extraction
    International Conference on Acoustics Speech and Signal Processing, 2019
    Co-Authors: Dawei Feng, Huaimin Wang, Jian Zhu
    Abstract:

    B-mode ultrasound tongue imaging is widely used in the speech production field. However, there is a great need for efficient interpretation of the tongue image sequences. Inspired by the recent success of unsupervised deep learning approaches, we explore unsupervised convolutional network architectures for Feature Extraction from ultrasound tongue images, which can be helpful for clinical linguists and phoneticians. In a quantitative comparison between different unsupervised Feature Extraction approaches, the denoising convolutional autoencoder (DCAE)-based method outperforms the other Feature Extraction methods on the reconstruction task and the 2010 silent speech interface challenge. A Word Error Rate of 6.17% is obtained with DCAE, compared to the state-of-the-art value of 6.45% using the discrete cosine transform as the Feature extractor. Our codes are available at https://github.com/DeePBluE666/Source-code1.
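    A minimal PyTorch sketch of a denoising convolutional autoencoder used as a feature extractor; the layer sizes, noise level and single training step are illustrative assumptions, and the repository linked above remains the authoritative implementation.

```python
import torch
import torch.nn as nn

class DCAE(nn.Module):
    """A small denoising convolutional autoencoder for single-channel frames."""
    def __init__(self, latent_channels=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 32, 3, stride=2,
                               padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 3, stride=2,
                               padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DCAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

clean = torch.rand(8, 1, 64, 64)             # a toy batch of ultrasound-like frames
noisy = clean + 0.1 * torch.randn_like(clean)
loss = nn.functional.mse_loss(model(noisy), clean)   # reconstruct the clean frame
loss.backward()
opt.step()

features = model.encoder(noisy).flatten(1)   # bottleneck activations as features
print(features.shape)                        # torch.Size([8, 4096])
```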

  • denoising convolutional autoencoder based b mode ultrasound tongue image Feature Extraction
    arXiv: Image and Video Processing, 2019
    Co-Authors: Dawei Feng, Huaimin Wang, Jian Zhu
    Abstract:

    B-mode ultrasound tongue imaging is widely used in the speech production field. However, there is a great need for efficient interpretation of the tongue image sequences. Inspired by the recent success of unsupervised deep learning approaches, we explore unsupervised convolutional network architectures for Feature Extraction from ultrasound tongue images, which can be helpful for clinical linguists and phoneticians. In a quantitative comparison between different unsupervised Feature Extraction approaches, the denoising convolutional autoencoder (DCAE)-based method outperforms the other Feature Extraction methods on the reconstruction task and the 2010 silent speech interface challenge. A Word Error Rate of 6.17% is obtained with DCAE, compared to the state-of-the-art value of 6.45% using the discrete cosine transform as the Feature extractor. Our codes are available at this https URL.

Na Han - One of the best experts on this subject based on the ideXlab platform.

  • joint sparse representation and locality preserving projection for Feature Extraction
    International Journal of Machine Learning and Cybernetics, 2019
    Co-Authors: Wei Zhang, Peipei Kang, Xiaozhao Fang, Luyao Teng, Na Han
    Abstract:

    Traditional graph-based Feature Extraction methods use two separate procedures, i.e., graph learning and projection learning, to perform Feature Extraction. This makes the Feature Extraction result highly dependent on the quality of the initial fixed graph, which may not be the optimal one for Feature Extraction. In this paper, we propose a novel unsupervised Feature Extraction method, i.e., joint sparse representation and locality preserving projection (JSRLPP), in which graph construction and Feature Extraction are carried out simultaneously. Specifically, we adaptively learn the similarity matrix by sparse representation and, at the same time, learn the projection matrix by preserving the local structure. Compared with traditional Feature Extraction methods, our approach unifies graph learning and projection learning in a common framework and thus learns a graph more suitable for Feature Extraction. Experiments on several public image data sets demonstrate the effectiveness of the proposed algorithm.
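    For orientation, the sketch below implements classic locality preserving projection with a fixed kNN heat-kernel graph, i.e. the "separate graph learning, then projection learning" baseline that JSRLPP improves on by learning the sparse-representation graph jointly with the projection. Neighborhood size, kernel width and the regularizer are assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def lpp(X, n_components=2, n_neighbors=5, sigma=1.0):
    """Locality preserving projection on a fixed kNN graph.
    X: (n_samples, n_features); returns the embedding and the projection."""
    D_knn = kneighbors_graph(X, n_neighbors, mode='distance',
                             include_self=False).toarray()
    W = np.exp(-D_knn**2 / (2 * sigma**2)) * (D_knn > 0)   # heat-kernel weights
    W = np.maximum(W, W.T)                                 # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                              # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])            # regularized for stability
    _, vecs = eigh(A, B)                                   # generalized eigenproblem
    P = vecs[:, :n_components]                             # smallest eigenvalues
    return X @ P, P

X = np.random.default_rng(0).random((200, 30))
Y, P = lpp(X, n_components=5)
print(Y.shape, P.shape)                                    # (200, 5) (30, 5)
```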

  • a Feature Extraction method based on differential entropy and linear discriminant analysis for emotion recognition
    Sensors, 2019
    Co-Authors: Dong Wei Chen, Rui Miao, Wei Qi Yang, Yong Liang, Hao Heng Chen, Lan Huang, Chun Jian Deng, Na Han
    Abstract:

    Feature Extraction of electroencephalography (EEG) signals plays a significant role in the wearable computing field. Because of the practical applications of EEG-based emotion computation, researchers often use edge computing to reduce data transmission times; however, as EEG involves a large amount of data, determining how to effectively extract Features and reduce the amount of computation remains the focus of abundant research. Researchers have proposed many EEG Feature Extraction methods, but these methods have problems such as high time complexity and insufficient precision. The main purpose of this paper is to introduce an innovative method for obtaining reliable distinguishing Features from EEG signals. This Feature Extraction method combines differential entropy with Linear Discriminant Analysis (LDA) and can be applied to Feature Extraction of emotional EEG signals. We use a three-category sentiment EEG dataset to conduct experiments. The experimental results show that the proposed Feature Extraction method can significantly improve the performance of EEG classification: compared with the result on the original dataset, the average accuracy increases by 68%, which is 7% higher than the result obtained when only differential entropy is used for Feature Extraction. The total execution time shows that the proposed method has a lower time complexity.
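    A minimal sketch of one plausible differential-entropy-plus-LDA pipeline: band-pass filter each channel, take the Gaussian differential entropy 0.5*log(2*pi*e*variance) per band, and feed the resulting features to LDA. The band definitions, filter order and sampling rate are assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def differential_entropy_features(eeg, fs=200):
    """eeg: (n_channels, n_samples).  For each channel and band, band-pass
    filter and compute 0.5*log(2*pi*e*var), the differential entropy of a
    Gaussian signal, giving an (n_channels * n_bands,) feature vector."""
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        var = filtfilt(b, a, eeg, axis=1).var(axis=1)
        feats.append(0.5 * np.log(2 * np.pi * np.e * var))
    return np.concatenate(feats)

# toy example: 100 trials, 32 channels, 2 s at 200 Hz, 3 emotion classes
rng = np.random.default_rng(0)
X = np.array([differential_entropy_features(rng.standard_normal((32, 400)))
              for _ in range(100)])
y = rng.integers(0, 3, size=100)

lda = LinearDiscriminantAnalysis(n_components=2)   # LDA stage, as in the paper
Z = lda.fit_transform(X, y)
print(X.shape, Z.shape)                            # (100, 128) (100, 2)
```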

Dawei Feng - One of the best experts on this subject based on the ideXlab platform.

  • denoising convolutional autoencoder based b mode ultrasound tongue image Feature Extraction
    International Conference on Acoustics Speech and Signal Processing, 2019
    Co-Authors: Dawei Feng, Huaimin Wang, Jian Zhu
    Abstract:

    B-mode ultrasound tongue imaging is widely used in the speech production field. However, there is a great need for efficient interpretation of the tongue image sequences. Inspired by the recent success of unsupervised deep learning approaches, we explore unsupervised convolutional network architectures for Feature Extraction from ultrasound tongue images, which can be helpful for clinical linguists and phoneticians. In a quantitative comparison between different unsupervised Feature Extraction approaches, the denoising convolutional autoencoder (DCAE)-based method outperforms the other Feature Extraction methods on the reconstruction task and the 2010 silent speech interface challenge. A Word Error Rate of 6.17% is obtained with DCAE, compared to the state-of-the-art value of 6.45% using the discrete cosine transform as the Feature extractor. Our codes are available at https://github.com/DeePBluE666/Source-code1.

  • denoising convolutional autoencoder based b mode ultrasound tongue image Feature Extraction
    arXiv: Image and Video Processing, 2019
    Co-Authors: Dawei Feng, Huaimin Wang, Jian Zhu
    Abstract:

    B-mode ultrasound tongue imaging is widely used in the speech production field. However, there is a great need for efficient interpretation of the tongue image sequences. Inspired by the recent success of unsupervised deep learning approaches, we explore unsupervised convolutional network architectures for Feature Extraction from ultrasound tongue images, which can be helpful for clinical linguists and phoneticians. In a quantitative comparison between different unsupervised Feature Extraction approaches, the denoising convolutional autoencoder (DCAE)-based method outperforms the other Feature Extraction methods on the reconstruction task and the 2010 silent speech interface challenge. A Word Error Rate of 6.17% is obtained with DCAE, compared to the state-of-the-art value of 6.45% using the discrete cosine transform as the Feature extractor. Our codes are available at this https URL.