Spatial Feature

The experts below are selected from a list of 106,572 experts worldwide, ranked by the ideXlab platform.

Dacheng Tao - One of the best experts on this subject based on the ideXlab platform.

  • Simultaneous spectral-spatial feature selection and extraction for hyperspectral images
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Lefei Zhang, Xin Huang, Qian Zhang, Yuan Yan Tang, Dacheng Tao
    Abstract:

    In hyperspectral remote sensing data mining, it is important to take both spectral and spatial information into account, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature representation point of view, a natural approach to this situation is to concatenate the spectral and spatial features into a single high-dimensional vector and then apply a dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from different domains have different physical meanings and statistical properties, so such concatenation does not efficiently exploit the complementary properties among the features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional feature representation of the original multiple features remains a challenging task. To address these issues, we propose a novel feature learning framework, the simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral image spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that the proposed method is effective and efficient.
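The "naive" baseline this abstract argues against can be sketched in a few lines of numpy: stack the two feature sets and reduce the concatenated vector in one shot. This is only an illustrative sketch with made-up dimensions (PCA stands in for the generic "certain dimension reduction technique"; it is not the paper's joint selection-extraction algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, d_spec, d_spat, k = 200, 30, 10, 5

spectral = rng.normal(size=(n_pixels, d_spec))   # e.g. band reflectances
spatial = rng.normal(size=(n_pixels, d_spat))    # e.g. texture/morphology stats

# Naive concatenation baseline: stack both feature sets into one
# high-dimensional vector per pixel, then reduce with a single PCA.
X = np.hstack([spectral, spatial])
Xc = X - X.mean(axis=0)

# PCA via SVD: the top-k right singular vectors span the reduced subspace.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                                # n_pixels x k embedding
print(Z.shape)  # (200, 5)
```

The paper's point is that this single projection treats all 40 concatenated dimensions as statistically homogeneous, which the proposed joint selection-and-extraction framework avoids.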

  • Active transfer learning network: a unified deep joint spectral-spatial feature learning model for hyperspectral image classification
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Cheng Deng, Yumeng Xue, Xianglong Liu, Dacheng Tao
    Abstract:

    Deep learning has recently attracted significant attention in the field of hyperspectral image (HSI) classification. However, the construction of an efficient deep neural network (DNN) mostly relies on a large number of labeled samples being available. To address this problem, this paper proposes a unified deep network, combined with active transfer learning, that can be well trained for HSI classification using only minimally labeled training data. More specifically, deep joint spectral-spatial features are first extracted through hierarchical stacked sparse autoencoder (SSAE) networks. Active transfer learning is then exploited to transfer the pre-trained SSAE network and the limited training samples from the source domain to the target domain, where the SSAE network is subsequently fine-tuned using the limited labeled samples selected from both source and target domains by corresponding active learning strategies. The advantages of our proposed method are threefold: 1) the network can be effectively trained using only limited labeled samples with the help of novel active learning strategies; 2) the network is flexible and scalable enough to function across various transfer situations, including cross-dataset and intra-image; and 3) the learned deep joint spectral-spatial feature representation is more generic and robust than many joint spectral-spatial feature representations. Extensive comparative evaluations demonstrate that our proposed method significantly outperforms many state-of-the-art approaches, including both traditional and deep network-based methods, on three popular datasets.
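A core ingredient of any active learning strategy is picking the unlabeled samples the current model is least sure about. The paper does not specify its strategies in this abstract, so the sketch below shows one standard option, margin sampling, as an illustrative stand-in: query the samples whose top two class probabilities are closest together.

```python
import numpy as np

def margin_sampling(probs, n_query):
    """Return indices of the n_query samples whose top-2 class
    probabilities are closest (smallest margin = most uncertain)."""
    sorted_p = np.sort(probs, axis=1)
    margin = sorted_p[:, -1] - sorted_p[:, -2]
    return np.argsort(margin)[:n_query]

# Toy predicted class probabilities for three unlabeled samples.
probs = np.array([[0.90, 0.05, 0.05],   # confident  -> margin 0.85
                  [0.40, 0.32, 0.28],   # uncertain  -> margin 0.08
                  [0.50, 0.45, 0.05]])  # most uncertain -> margin 0.05
print(margin_sampling(probs, 2))  # picks samples 2 and 1
```

In the paper's setting, such a criterion would select which source- and target-domain samples get labeled for fine-tuning the SSAE network.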

  • Active transfer learning network: a unified deep joint spectral-spatial feature learning model for hyperspectral image classification
    IEEE Transactions on Geoscience and Remote Sensing, 2019
    Co-Authors: Cheng Deng, Yumeng Xue, Xianglong Liu, Dacheng Tao
    Abstract:

    Deep learning has recently attracted significant attention in the field of hyperspectral image (HSI) classification. However, the construction of an efficient deep neural network mostly relies on a large number of labeled samples being available. To address this problem, this paper proposes a unified deep network, combined with active transfer learning (TL), that can be well trained for HSI classification using only minimally labeled training data. More specifically, deep joint spectral-spatial features are first extracted through hierarchical stacked sparse autoencoder (SSAE) networks. Active TL is then exploited to transfer the pretrained SSAE network and the limited training samples from the source domain to the target domain, where the SSAE network is subsequently fine-tuned using the limited labeled samples selected from both source and target domains by the corresponding active learning (AL) strategies. The advantages of our proposed method are threefold: 1) the network can be effectively trained using only limited labeled samples with the help of novel AL strategies; 2) the network is flexible and scalable enough to function across various transfer situations, including cross-dataset and intra-image; and 3) the learned deep joint spectral-spatial feature representation is more generic and robust than many joint spectral-spatial feature representations. Extensive comparative evaluations demonstrate that our proposed method significantly outperforms many state-of-the-art approaches, including both traditional and deep network-based methods, on three popular data sets.

  • Simultaneous spectral-spatial feature selection and extraction for hyperspectral images
    IEEE Transactions on Systems Man and Cybernetics, 2018
    Co-Authors: Lefei Zhang, Xin Huang, Qian Zhang, Yuan Yan Tang, Dacheng Tao
    Abstract:

    In hyperspectral remote sensing data mining, it is important to take both spectral and spatial information into account, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature representation point of view, a natural approach to this situation is to concatenate the spectral and spatial features into a single high-dimensional vector and then apply a dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from different domains have different physical meanings and statistical properties, so such concatenation does not efficiently exploit the complementary properties among the features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional feature representation of the original multiple features remains a challenging task. To address these issues, we propose a novel feature learning framework, the simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral image spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that the proposed method is effective and efficient.

  • Tensor discriminative locality alignment for hyperspectral image spectral-spatial feature extraction
    IEEE Transactions on Geoscience and Remote Sensing, 2013
    Co-Authors: Liangpei Zhang, Lefei Zhang, Dacheng Tao, Xin Huang
    Abstract:

    In this paper, we propose a method for the dimensionality reduction (DR) of spectral-spatial features in hyperspectral images (HSIs), under the umbrella of multilinear algebra, i.e., the algebra of tensors. The proposed approach is a tensor extension of conventional supervised manifold-learning-based DR. In particular, we define a tensor organization scheme for representing a pixel's spectral-spatial feature and develop tensor discriminative locality alignment (TDLA) for removing redundant information for subsequent classification. The optimal solution of TDLA is obtained by alternately optimizing each mode of the input tensors. The methods are tested on three public real HSI data sets collected by the Hyperspectral Digital Imagery Collection Experiment, the Reflective Optics System Imaging Spectrometer, and the Airborne Visible/Infrared Imaging Spectrometer. The classification results show significant improvements in classification accuracy while using a small number of features.
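The key tensor operations here are mode-n unfolding and per-mode projection. The sketch below is a toy HOSVD-style stand-in for TDLA's alternating per-mode optimization (TDLA itself optimizes a discriminative objective, which this sketch does not): a pixel's spectral-spatial tensor (spatial window x bands) is reduced mode by mode, with illustrative sizes.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: matricize tensor T along `mode`."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# A pixel's spectral-spatial tensor: 7x7 spatial window, 30 bands.
rng = np.random.default_rng(1)
T = rng.normal(size=(7, 7, 30))
ranks = (3, 3, 10)

# One pass of per-mode subspace estimation: each mode keeps its top
# singular directions (TDLA alternates such mode-wise updates).
factors = []
for mode, r in enumerate(ranks):
    U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
    factors.append(U[:, :r])

# Project the tensor onto the reduced subspaces mode by mode.
core = T
for mode, U in enumerate(factors):
    core = np.moveaxis(
        np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
print(core.shape)  # (3, 3, 10)
```

The reduced core tensor (here 3x3x10 instead of 7x7x30) is what would be flattened and handed to the subsequent classifier.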

Liangpei Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Spectral-spatial unified networks for hyperspectral image classification
    IEEE Transactions on Geoscience and Remote Sensing, 2018
    Co-Authors: Liangpei Zhang, Fan Zhang
    Abstract:

    In this paper, we propose a spectral-spatial unified network (SSUN) with an end-to-end architecture for hyperspectral image (HSI) classification. Different from traditional spectral-spatial classification frameworks, where the spectral feature extraction (FE), spatial FE, and classifier training are separated, these processes are integrated into a unified network in our model. In this way, both FE and classifier training share a uniform objective function, and all the parameters in the network can be optimized at the same time. In the implementation of the SSUN, we propose a band grouping-based long short-term memory model and a multiscale convolutional neural network as the spectral and spatial feature extractors, respectively. In the experiments, three benchmark HSIs are utilized to evaluate the performance of the proposed method. The experimental results demonstrate that the SSUN can yield a competitive performance compared with existing methods.
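The band-grouping idea can be shown in isolation: the spectrum of one pixel is split into contiguous band groups, and each group becomes one timestep for the recurrent spectral branch. This is a minimal illustrative sketch (group count and values are made up; the SSUN's actual grouping and LSTM are not reproduced here):

```python
import numpy as np

def band_groups(spectrum, n_groups):
    """Split a pixel's spectrum into contiguous band groups; each group
    is one timestep for a recurrent spectral feature extractor."""
    return np.array_split(spectrum, n_groups)

spectrum = np.arange(12, dtype=float)        # 12 toy band values
groups = band_groups(spectrum, 3)            # 3 timesteps of 4 bands each
print([g.mean() for g in groups])  # [1.5, 5.5, 9.5]
```

Feeding groups rather than single bands shortens the sequence the recurrent model must process while preserving local spectral structure within each group.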

  • Hyperspectral image super-resolution by spectral mixture analysis and spatial-spectral group sparsity
    IEEE Geoscience and Remote Sensing Letters, 2016
    Co-Authors: Qiangqiang Yuan, Huanfeng Shen, Xiangchao Meng, Liangpei Zhang
    Abstract:

    Due to the limitations of hyperspectral sensors and optical imaging systems, there are several irreconcilable conflicts between the high spatial resolution and high spectral resolution of hyperspectral images (HSIs). Therefore, HSI super-resolution (SR) is regarded as an important preprocessing task for subsequent applications. In this letter, we use sparse representation to analyze the spectral and spatial features of HSIs. Considering the sparse characteristic of spectral unmixing and the high pattern repeatability of spatial-spectral blocks, we propose a novel HSI SR framework utilizing spectral mixture analysis and spatial-spectral group sparsity. By simultaneously combining the sparsity and the nonlocal self-similarity of the images in the spatial and spectral domains, the method not only maintains spectral consistency but also produces plenty of image details. Experiments on three hyperspectral data sets confirm that the proposed method is robust to noise and achieves better results than traditional methods.
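Spectral mixture analysis rests on the linear mixing model: each pixel spectrum is approximately a weighted sum of a few endmember signatures. The sketch below recovers toy abundances by unconstrained least squares; it is only the starting point of what the letter does (the actual framework adds sparsity and group-sparsity constraints on top), and all numbers are invented:

```python
import numpy as np

# Linear mixing model: pixel spectrum y ~ E @ a (endmembers x abundances).
rng = np.random.default_rng(2)
n_bands, n_end = 20, 3
E = rng.uniform(size=(n_bands, n_end))       # endmember signatures
a_true = np.array([0.6, 0.3, 0.1])           # true abundance fractions
y = E @ a_true + rng.normal(scale=1e-3, size=n_bands)  # noisy observation

# Unconstrained least-squares abundance estimate; real unmixing would
# also enforce nonnegativity, sum-to-one, and sparsity constraints.
a_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
print(np.round(a_hat, 2))
```

With a well-conditioned endmember matrix and low noise, the estimate lands close to the true abundances, which is what makes unmixing usable as a spectral-consistency anchor for super-resolution.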

  • Tensor discriminative locality alignment for hyperspectral image spectral-spatial feature extraction
    IEEE Transactions on Geoscience and Remote Sensing, 2013
    Co-Authors: Liangpei Zhang, Lefei Zhang, Dacheng Tao, Xin Huang
    Abstract:

    In this paper, we propose a method for the dimensionality reduction (DR) of spectral-spatial features in hyperspectral images (HSIs), under the umbrella of multilinear algebra, i.e., the algebra of tensors. The proposed approach is a tensor extension of conventional supervised manifold-learning-based DR. In particular, we define a tensor organization scheme for representing a pixel's spectral-spatial feature and develop tensor discriminative locality alignment (TDLA) for removing redundant information for subsequent classification. The optimal solution of TDLA is obtained by alternately optimizing each mode of the input tensors. The methods are tested on three public real HSI data sets collected by the Hyperspectral Digital Imagery Collection Experiment, the Reflective Optics System Imaging Spectrometer, and the Airborne Visible/Infrared Imaging Spectrometer. The classification results show significant improvements in classification accuracy while using a small number of features.

  • A multiscale urban complexity index based on 3D wavelet transform for spectral-spatial feature extraction and classification: an evaluation on the 8-channel WorldView-2 imagery
    International Journal of Remote Sensing, 2012
    Co-Authors: Xin Huang, Liangpei Zhang
    Abstract:

    The three-dimensional wavelet transform (3D-WT) processes a multispectral remotely sensed image as a cube and is hence able to simultaneously represent variation information in the joint spectral-spatial feature space. The urban complexity index (UCI) built on the 3D-WT is defined by comparing the amount of spectral and spatial variation, since natural features have relatively smaller spatial changes than spectral changes, whereas urban areas show more variation in the spatial domain. The calculation of the UCI is subject to the selection of window sizes; therefore, in this study, a multiscale UCI (M-UCI) is proposed by integrating the UCI features over different moving windows and decomposition levels. The performance of the M-UCI was evaluated on two WorldView-2 data sets over urban and suburban areas, respectively. Experimental results showed that the M-UCI was effective in integrating multiscale information contained in different windows and gave higher accuracies than the single-scale UCI. In the experiments, the proposed M-UCI was compared with a pixel shape index (PSI), which is a texture measure extracted from the spatial domain alone. It was revealed that the PSI was more effective for the classification of urban areas than natural landscapes, whereas the M-UCI was applicable to both urban and natural areas since it represented the joint spectral-spatial domains.
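The defining intuition, spatial variation divided by spectral variation, can be shown without any wavelet machinery. The sketch below uses plain finite differences as a crude stand-in for the 3D-WT subband energies (so it is the spirit of the UCI, not its actual definition): a spatially flat cube with changing spectra scores low, a spatially busy cube with flat spectra scores high.

```python
import numpy as np

rng = np.random.default_rng(5)

def complexity_index(cube):
    """Toy variation ratio: spatial change (across rows/cols) vs
    spectral change (across bands). High values suggest urban-like
    spatial complexity; a crude stand-in for the 3D-WT-based UCI."""
    spatial_var = (np.abs(np.diff(cube, axis=0)).mean()
                   + np.abs(np.diff(cube, axis=1)).mean())
    spectral_var = np.abs(np.diff(cube, axis=2)).mean()
    return spatial_var / (spectral_var + 1e-12)

# Spatially flat cube whose bands ramp from 0 to 1 -> "natural-like".
flat = np.tile(np.linspace(0.0, 1.0, 8), (5, 5, 1))
# Random spatial pattern repeated identically across bands -> "urban-like".
busy = np.tile(rng.uniform(size=(5, 5))[:, :, None], (1, 1, 8))
print(complexity_index(flat), complexity_index(busy))
```

The multiscale M-UCI would then average such ratios over several window sizes and decomposition levels rather than fixing one.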

  • Classification and extraction of spatial features in urban areas using high-resolution multispectral imagery
    IEEE Geoscience and Remote Sensing Letters, 2007
    Co-Authors: Xin Huang, Liangpei Zhang, Pingxiang Li
    Abstract:

    Classification and extraction of spatial features are investigated in urban areas from high spatial resolution multispectral imagery. The proposed approach consists of three steps. First, as an extension of our previous work [pixel shape index (PSI)], a structural feature set (SFS) is proposed to extract the statistical features of the direction-lines histogram. Second, some methods of dimension reduction, including independent component analysis, decision boundary feature extraction, and similarity-index feature selection, are implemented for the proposed SFS to reduce information redundancy. Third, four classifiers, the maximum-likelihood classifier, backpropagation neural network, probability neural network based on expectation-maximization training, and support vector machine, are compared to assess the SFS and other spatial feature sets. We evaluate the proposed approach on two QuickBird datasets, and the results show that the new set of reduced spatial features performs better than the existing length-width extraction algorithm and the PSI.
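The direction-lines idea underlying the PSI/SFS can be sketched concretely: from a center pixel, walk outward in each direction while the gray value stays close to the center's, and record the run lengths. This is a minimal 4-direction toy (the actual PSI uses many directions and derives histogram statistics from the lengths):

```python
import numpy as np

def direction_line_lengths(img, r, c, threshold, max_len=10):
    """For each of 4 directions, walk from (r, c) while the gray value
    stays within `threshold` of the center pixel; return run lengths.
    A tiny stand-in for the PSI/SFS direction-lines histogram."""
    dirs = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    center = img[r, c]
    lengths = []
    for dr, dc in dirs:
        n, rr, cc = 0, r + dr, c + dc
        while (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
               and abs(img[rr, cc] - center) <= threshold and n < max_len):
            n += 1
            rr += dr
            cc += dc
        lengths.append(n)
    return lengths

img = np.zeros((5, 5))
img[2, :] = 1.0  # a bright horizontal line, e.g. a road segment
print(direction_line_lengths(img, 2, 2, threshold=0.5))  # [2, 2, 0, 0]
```

Long runs in one direction and short runs in the other are exactly the kind of shape cue that separates elongated man-made structures from compact natural patches.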

Xin Huang - One of the best experts on this subject based on the ideXlab platform.

  • Simultaneous spectral-spatial feature selection and extraction for hyperspectral images
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Lefei Zhang, Xin Huang, Qian Zhang, Yuan Yan Tang, Dacheng Tao
    Abstract:

    In hyperspectral remote sensing data mining, it is important to take both spectral and spatial information into account, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature representation point of view, a natural approach to this situation is to concatenate the spectral and spatial features into a single high-dimensional vector and then apply a dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from different domains have different physical meanings and statistical properties, so such concatenation does not efficiently exploit the complementary properties among the features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional feature representation of the original multiple features remains a challenging task. To address these issues, we propose a novel feature learning framework, the simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral image spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that the proposed method is effective and efficient.

  • Simultaneous spectral-spatial feature selection and extraction for hyperspectral images
    IEEE Transactions on Systems Man and Cybernetics, 2018
    Co-Authors: Lefei Zhang, Xin Huang, Qian Zhang, Yuan Yan Tang, Dacheng Tao
    Abstract:

    In hyperspectral remote sensing data mining, it is important to take both spectral and spatial information into account, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature representation point of view, a natural approach to this situation is to concatenate the spectral and spatial features into a single high-dimensional vector and then apply a dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from different domains have different physical meanings and statistical properties, so such concatenation does not efficiently exploit the complementary properties among the features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful consensus low-dimensional feature representation of the original multiple features remains a challenging task. To address these issues, we propose a novel feature learning framework, the simultaneous spectral-spatial feature selection and extraction algorithm, for hyperspectral image spectral-spatial feature representation and classification. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that the proposed method is effective and efficient.

  • Tensor discriminative locality alignment for hyperspectral image spectral-spatial feature extraction
    IEEE Transactions on Geoscience and Remote Sensing, 2013
    Co-Authors: Liangpei Zhang, Lefei Zhang, Dacheng Tao, Xin Huang
    Abstract:

    In this paper, we propose a method for the dimensionality reduction (DR) of spectral-spatial features in hyperspectral images (HSIs), under the umbrella of multilinear algebra, i.e., the algebra of tensors. The proposed approach is a tensor extension of conventional supervised manifold-learning-based DR. In particular, we define a tensor organization scheme for representing a pixel's spectral-spatial feature and develop tensor discriminative locality alignment (TDLA) for removing redundant information for subsequent classification. The optimal solution of TDLA is obtained by alternately optimizing each mode of the input tensors. The methods are tested on three public real HSI data sets collected by the Hyperspectral Digital Imagery Collection Experiment, the Reflective Optics System Imaging Spectrometer, and the Airborne Visible/Infrared Imaging Spectrometer. The classification results show significant improvements in classification accuracy while using a small number of features.

  • A multiscale urban complexity index based on 3D wavelet transform for spectral-spatial feature extraction and classification: an evaluation on the 8-channel WorldView-2 imagery
    International Journal of Remote Sensing, 2012
    Co-Authors: Xin Huang, Liangpei Zhang
    Abstract:

    The three-dimensional wavelet transform (3D-WT) processes a multispectral remotely sensed image as a cube and is hence able to simultaneously represent variation information in the joint spectral-spatial feature space. The urban complexity index (UCI) built on the 3D-WT is defined by comparing the amount of spectral and spatial variation, since natural features have relatively smaller spatial changes than spectral changes, whereas urban areas show more variation in the spatial domain. The calculation of the UCI is subject to the selection of window sizes; therefore, in this study, a multiscale UCI (M-UCI) is proposed by integrating the UCI features over different moving windows and decomposition levels. The performance of the M-UCI was evaluated on two WorldView-2 data sets over urban and suburban areas, respectively. Experimental results showed that the M-UCI was effective in integrating multiscale information contained in different windows and gave higher accuracies than the single-scale UCI. In the experiments, the proposed M-UCI was compared with a pixel shape index (PSI), which is a texture measure extracted from the spatial domain alone. It was revealed that the PSI was more effective for the classification of urban areas than natural landscapes, whereas the M-UCI was applicable to both urban and natural areas since it represented the joint spectral-spatial domains.

  • Classification and extraction of spatial features in urban areas using high-resolution multispectral imagery
    IEEE Geoscience and Remote Sensing Letters, 2007
    Co-Authors: Xin Huang, Liangpei Zhang, Pingxiang Li
    Abstract:

    Classification and extraction of spatial features are investigated in urban areas from high spatial resolution multispectral imagery. The proposed approach consists of three steps. First, as an extension of our previous work [pixel shape index (PSI)], a structural feature set (SFS) is proposed to extract the statistical features of the direction-lines histogram. Second, some methods of dimension reduction, including independent component analysis, decision boundary feature extraction, and similarity-index feature selection, are implemented for the proposed SFS to reduce information redundancy. Third, four classifiers, the maximum-likelihood classifier, backpropagation neural network, probability neural network based on expectation-maximization training, and support vector machine, are compared to assess the SFS and other spatial feature sets. We evaluate the proposed approach on two QuickBird datasets, and the results show that the new set of reduced spatial features performs better than the existing length-width extraction algorithm and the PSI.

Xiaotong Yuan - One of the best experts on this subject based on the ideXlab platform.

  • Bidirectional-convolutional LSTM based spectral-spatial feature learning for hyperspectral image classification
    Remote Sensing, 2017
    Co-Authors: Qingshan Liu, Feng Zhou, Renlong Hang, Xiaotong Yuan
    Abstract:

    This paper proposes a novel deep learning framework named the bidirectional-convolutional long short-term memory (Bi-CLSTM) network to automatically learn the spectral-spatial features of hyperspectral images (HSIs). In the network, spectral feature extraction is treated as a sequence learning problem, and a recurrent connection operator across the spectral domain is used to address it. Meanwhile, inspired by the widely used convolutional neural network (CNN), a convolution operator across the spatial domain is incorporated into the network to extract the spatial features. In addition, to sufficiently capture the spectral information, a bidirectional recurrent connection is proposed. In the classification phase, the learned features are concatenated into a vector and fed to a softmax classifier via a fully connected operator. To validate the effectiveness of the proposed Bi-CLSTM framework, we compare it with six state-of-the-art methods, including the popular 3D-CNN model, on three widely used HSIs (i.e., Indian Pines, Pavia University, and Kennedy Space Center). The obtained results show that Bi-CLSTM can improve the classification performance by almost 1.5% as compared to 3D-CNN.
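The "bidirectional over the spectral axis" idea can be isolated from the full architecture. The sketch below runs a trivial exponential-smoothing recurrence (a deliberate stand-in for an LSTM cell, which it does not implement) over a pixel's band sequence in both directions and concatenates the two state sequences, mirroring how the Bi- part of Bi-CLSTM doubles the feature dimension:

```python
import numpy as np

def bidirectional_scan(bands, decay=0.5):
    """Toy bidirectional recurrence over the spectral axis: each band's
    state mixes its value with the previous state; forward and backward
    passes are concatenated (the 'Bi-' part of Bi-CLSTM)."""
    def scan(seq):
        h, out = 0.0, []
        for x in seq:
            h = decay * h + (1 - decay) * x
            out.append(h)
        return np.array(out)

    fwd = scan(bands)
    bwd = scan(bands[::-1])[::-1]
    return np.concatenate([fwd, bwd])

feats = bidirectional_scan(np.array([1.0, 0.0, 0.0, 1.0]))
print(feats.shape)  # (8,) -- twice the number of input bands
```

The backward pass lets a band's feature depend on bands above it in wavelength as well as below, which a single forward recurrence cannot provide.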

  • Bidirectional-convolutional LSTM based spectral-spatial feature learning for hyperspectral image classification
    arXiv: Computer Vision and Pattern Recognition, 2017
    Co-Authors: Qingshan Liu, Feng Zhou, Renlong Hang, Xiaotong Yuan
    Abstract:

    This paper proposes a novel deep learning framework named the bidirectional-convolutional long short-term memory (Bi-CLSTM) network to automatically learn the spectral-spatial features of hyperspectral images (HSIs). In the network, spectral feature extraction is treated as a sequence learning problem, and a recurrent connection operator across the spectral domain is used to address it. Meanwhile, inspired by the widely used convolutional neural network (CNN), a convolution operator across the spatial domain is incorporated into the network to extract the spatial features. In addition, to sufficiently capture the spectral information, a bidirectional recurrent connection is proposed. In the classification phase, the learned features are concatenated into a vector and fed to a softmax classifier via a fully connected operator. To validate the effectiveness of the proposed Bi-CLSTM framework, we compare it with several state-of-the-art methods, including the CNN framework, on three widely used HSIs. The obtained results show that Bi-CLSTM can improve the classification performance as compared to other methods.

Wenzhi Zhao - One of the best experts on this subject based on the ideXlab platform.

  • Spectral-spatial feature extraction for hyperspectral image classification: a dimension reduction and deep learning approach
    IEEE Transactions on Geoscience and Remote Sensing, 2016
    Co-Authors: Wenzhi Zhao, Shihong Du
    Abstract:

    In this paper, we propose a spectral-spatial feature-based classification (SSFC) framework that jointly uses dimension reduction and deep learning techniques for spectral and spatial feature extraction, respectively. In this framework, a balanced local discriminant embedding algorithm is proposed for spectral feature extraction from high-dimensional hyperspectral data sets. Meanwhile, a convolutional neural network is utilized to automatically find spatial-related features at high levels. Then, the fusion feature is extracted by stacking the spectral and spatial features together. Finally, the multiple-feature-based classifier is trained for image classification. Experimental results on well-known hyperspectral data sets show that the proposed SSFC method outperforms other commonly used methods for hyperspectral image classification.
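The fusion step, stack the reduced spectral features next to the CNN's spatial features before classification, is simple to show in numpy. In this sketch, PCA stands in for balanced local discriminant embedding and random vectors stand in for CNN outputs; all dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
spec = rng.normal(size=(n, 50))              # 50 raw bands per pixel

# Spectral branch: PCA as a stand-in for the paper's balanced local
# discriminant embedding; keep 8 components.
Xc = spec - spec.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
spec_feat = Xc @ Vt[:8].T

# Spatial branch: pretend a CNN produced a 16-D feature per pixel.
spat_feat = rng.normal(size=(n, 16))

# Fusion: stack both feature sets column-wise for the final classifier.
fused = np.hstack([spec_feat, spat_feat])
print(fused.shape)  # (100, 24)
```

Whatever classifier follows then sees one 24-D vector per pixel carrying both kinds of information.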

  • Spectral-spatial feature extraction for hyperspectral image classification: a dimension reduction and deep learning approach
    IEEE Transactions on Geoscience and Remote Sensing, 2016
    Co-Authors: Wenzhi Zhao
    Abstract:

    In this paper, we propose a spectral-spatial feature-based classification (SSFC) framework that jointly uses dimension reduction and deep learning techniques for spectral and spatial feature extraction, respectively. In this framework, a balanced local discriminant embedding algorithm is proposed for spectral feature extraction from high-dimensional hyperspectral data sets. Meanwhile, a convolutional neural network is utilized to automatically find spatial-related features at high levels. Then, the fusion feature is extracted by stacking the spectral and spatial features together. Finally, the multiple-feature-based classifier is trained for image classification. Experimental results on well-known hyperspectral data sets show that the proposed SSFC method outperforms other commonly used methods for hyperspectral image classification.

  • Spectral-spatial classification of hyperspectral images using deep convolutional neural networks
    Remote Sensing Letters, 2015
    Co-Authors: Jun Yue, Wenzhi Zhao, Shanjun Mao, Hui Liu
    Abstract:

    In this letter, a novel deep learning framework for hyperspectral image classification using both spectral and spatial features is presented. The framework is a hybrid of principal component analysis, deep convolutional neural networks (DCNNs), and logistic regression (LR). DCNNs, which hierarchically extract deep features, are introduced into hyperspectral image classification for the first time. The proposed technique consists of two steps. First, a feature map generation algorithm is presented to generate the spectral and spatial feature maps. Second, the DCNNs-LR classifier is trained to obtain useful high-level features and to fine-tune the whole model. Comparative experiments conducted on widely used hyperspectral data indicate that the DCNNs-LR classifier built in this proposed deep learning framework provides better classification accuracy than previous hyperspectral classification methods.
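The LR head of the DCNNs-LR hybrid is ordinary logistic regression on top of learned features. The sketch below trains one by plain gradient descent on synthetic 2-D "deep features" (the data and dimensions are invented; in the letter the inputs would be DCNN feature maps, not random vectors):

```python
import numpy as np

def train_logistic(X, y, lr=0.5, steps=300):
    """Binary logistic regression head (the LR part of DCNNs-LR),
    trained by full-batch gradient descent on feature matrix X."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)     # cross-entropy gradient step
    return w

# Two synthetic classes of "deep features", separated along one axis.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(size=(100, 2)),
               rng.normal(size=(100, 2)) + [3.0, 0.0]])
Xb = np.hstack([X, np.ones((200, 1))])       # append a bias column
y = np.r_[np.zeros(100), np.ones(100)]

w = train_logistic(Xb, y)
acc = ((1.0 / (1.0 + np.exp(-(Xb @ w))) > 0.5) == y).mean()
print(acc)
```

Because the LR head is differentiable, its gradient can flow back into the DCNN layers, which is what lets the combined model be fine-tuned end to end.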