Sensing Imagery

The Experts below are selected from a list of 42,957 Experts worldwide, ranked by the ideXlab platform.

Liangpei Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Multi-class geospatial object detection based on a position-sensitive balancing framework for high spatial resolution remote Sensing Imagery
    ISPRS Journal of Photogrammetry and Remote Sensing, 2018
    Co-Authors: Yanfei Zhong, Xiaobing Han, Liangpei Zhang
    Abstract:

    Multi-class geospatial object detection from high spatial resolution (HSR) remote Sensing Imagery is attracting increasing attention in a wide range of object-related civil and engineering applications. However, the distribution of objects in HSR remote Sensing Imagery is location-variable and complicated, and accurately detecting these objects is a critical problem. Thanks to the powerful feature extraction and representation capability of deep learning, integrated frameworks that couple region proposal generation with object detection have greatly improved multi-class geospatial object detection for HSR remote Sensing Imagery. However, because the convolution operations in a convolutional neural network (CNN) make the learned features largely translation-invariant, the classification stage is seldom affected, but the localization accuracy of the predicted bounding boxes in the detection stage is easily degraded. This dilemma between translation invariance in the classification stage and translation variance in the object detection stage has not been addressed for HSR remote Sensing Imagery, and it causes position accuracy problems for multi-class geospatial object detection with region proposal generation and object detection. To further improve the performance of the integrated region proposal generation and object detection framework for HSR remote Sensing Imagery, a position-sensitive balancing (PSB) framework is proposed in this paper for multi-class geospatial object detection. The proposed PSB framework takes full advantage of a fully convolutional network (FCN) built on a residual network, and applies a position-sensitive balancing strategy to resolve the dilemma between translation invariance in classification and translation variance in detection. In addition, a pre-training mechanism is utilized to accelerate the training procedure and increase the robustness of the proposed algorithm. The proposed algorithm is validated on a publicly available 10-class object detection dataset.
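
The position-sensitive balancing strategy described above is closely related to the position-sensitive score maps used in R-FCN-style detectors; the sketch below illustrates that pooling mechanism in NumPy to make the idea concrete. The 3x3 grid, class count, and toy inputs are illustrative assumptions, not the authors' implementation.

```python
# Minimal NumPy sketch of position-sensitive RoI pooling (R-FCN-style),
# the kind of mechanism a position-sensitive detection head relies on.
import numpy as np

def ps_roi_pool(score_maps, roi, num_classes, k=3):
    """score_maps: (k*k*num_classes, H, W); roi: (x0, y0, x1, y1) in pixels.
    Each of the k*k groups of maps scores one spatial bin of the object
    (top-left, ..., bottom-right); pooling each bin from its own dedicated
    map keeps the score sensitive to where the object parts actually are."""
    x0, y0, x1, y1 = roi
    bin_w, bin_h = (x1 - x0) / k, (y1 - y0) / k
    scores = np.zeros(num_classes)
    for i in range(k):                      # vertical bin index
        for j in range(k):                  # horizontal bin index
            ys = slice(int(y0 + i * bin_h), int(np.ceil(y0 + (i + 1) * bin_h)))
            xs = slice(int(x0 + j * bin_w), int(np.ceil(x0 + (j + 1) * bin_w)))
            for c in range(num_classes):
                m = (i * k + j) * num_classes + c   # dedicated map for (bin, class)
                scores[c] += score_maps[m, ys, xs].mean()
    return scores / (k * k)

# Toy usage: 4 classes, a 3x3 grid, a 48x48 feature map, and one RoI.
maps = np.random.rand(3 * 3 * 4, 48, 48)
print(ps_roi_pool(maps, roi=(8, 8, 40, 40), num_classes=4))
```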

  • A Multiscale and Multidepth Convolutional Neural Network for Remote Sensing Imagery Pan-Sharpening
    IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018
    Co-Authors: Qiangqiang Yuan, Yancong Wei, Xiangchao Meng, Huanfeng Shen, Liangpei Zhang
    Abstract:

    Pan-sharpening is a fundamental and significant task in the field of remote Sensing Imagery processing, in which high-resolution spatial details from panchromatic images are employed to enhance the spatial resolution of multispectral (MS) images. As the transformation from a low-spatial-resolution MS image to a high-resolution MS image is complex and highly nonlinear, and inspired by the powerful ability of deep neural networks to represent nonlinear relationships, we introduce multiscale feature extraction and residual learning into the basic convolutional neural network (CNN) architecture and propose a multiscale and multidepth CNN for the pan-sharpening of remote Sensing Imagery. Both the quantitative assessment results and the visual assessment confirm that the proposed network yields high-resolution MS images that are superior to the images produced by the compared state-of-the-art methods.
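
As an illustration of the ingredients named in the abstract (parallel multiscale convolution branches and residual learning), here is a minimal PyTorch sketch of a pan-sharpening network. The layer counts, channel widths, band count, and input arrangement are assumptions for the sketch, not the published multiscale and multidepth CNN.

```python
import torch
import torch.nn as nn

class MultiscaleBlock(nn.Module):
    """Three parallel convolution branches with different receptive fields."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.b3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        self.b7 = nn.Conv2d(in_ch, out_ch, kernel_size=7, padding=3)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([self.b3(x), self.b5(x), self.b7(x)], dim=1))

class PanSharpenNet(nn.Module):
    def __init__(self, ms_bands=4, feats=32):
        super().__init__()
        # Input: MS image upsampled to the PAN resolution, concatenated with PAN.
        self.head = nn.Conv2d(ms_bands + 1, feats, kernel_size=3, padding=1)
        self.ms1 = MultiscaleBlock(feats, feats)
        self.ms2 = MultiscaleBlock(3 * feats, feats)
        self.tail = nn.Conv2d(3 * feats, ms_bands, kernel_size=3, padding=1)

    def forward(self, ms_up, pan):
        x = torch.relu(self.head(torch.cat([ms_up, pan], dim=1)))
        x = self.ms2(self.ms1(x))
        # Residual learning: the network predicts the high-frequency detail
        # that is added back onto the upsampled MS image.
        return ms_up + self.tail(x)

# Toy usage: a 4-band 64x64 upsampled MS patch and the matching PAN patch.
net = PanSharpenNet()
out = net(torch.randn(1, 4, 64, 64), torch.randn(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 4, 64, 64])
```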

  • An efficient and robust integrated geospatial object detection framework for high spatial resolution remote Sensing Imagery
    Remote Sensing, 2017
    Co-Authors: Xiaobing Han, Yanfei Zhong, Liangpei Zhang
    Abstract:

    Geospatial object detection from high spatial resolution (HSR) remote Sensing Imagery is a significant and challenging problem when further analyzing object-related information for civil and engineering applications. However, low computational efficiency and the separation of the region generation and localization steps are two major obstacles to improving the performance of traditional convolutional neural network (CNN) based object detection methods. Although recent CNN-based object detection methods can extract features automatically, they still separate the feature extraction and detection stages, resulting in high time consumption and low efficiency. In addition, acquiring a large quantity of manually annotated samples of objects in HSR remote Sensing Imagery requires expert experience, which is expensive and unreliable. Despite the progress made in natural image object detection, the complex object distribution in HSR remote Sensing Imagery makes it difficult to apply these methods directly. To solve the above problems, a highly efficient and robust integrated geospatial object detection framework based on the faster region-based convolutional neural network (Faster R-CNN) is proposed in this paper. The proposed method realizes the integrated procedure by sharing features between the region proposal generation stage and the object detection stage. In addition, a pre-training mechanism is utilized to improve the efficiency of multi-class geospatial object detection by transfer learning from the natural Imagery domain to the HSR remote Sensing Imagery domain. Extensive experiments and comprehensive evaluations on a publicly available 10-class object detection dataset were conducted to evaluate the proposed method.
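
A minimal sketch of the general recipe the abstract describes, using the standard torchvision Faster R-CNN: start from a model pre-trained on natural images, replace the box predictor for the HSR object classes, and fine-tune with the backbone and RPN features shared between proposal generation and detection. The class count follows the 10-class dataset mentioned above; data loading, the exact backbone, and the training schedule are assumptions (torchvision >= 0.13 API).

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 10 + 1  # 10 geospatial object classes + background

# Backbone and detection heads pre-trained on natural imagery (COCO).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the classification/regression head for the remote sensing classes;
# backbone and RPN features stay shared, which is the "integrated" part.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# One (hypothetical) training step: images is a list of CHW tensors,
# targets a list of dicts with "boxes" (N x 4) and "labels" (N,).
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[30.0, 40.0, 120.0, 160.0]]),
            "labels": torch.tensor([3])}]
model.train()
losses = model(images, targets)           # dict of RPN and RoI losses
total = sum(loss for loss in losses.values())
total.backward()
print({k: float(v) for k, v in losses.items()})
```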

  • IGARSS - A benchmark for scene classification of high spatial resolution remote Sensing Imagery
    2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2015
    Co-Authors: Tianbi Jiang, Xin-yi Tong, Gui-song Xia, Liangpei Zhang
    Abstract:

    Scene classification of high-resolution remotely sensed Imagery has been widely investigated in recent years. However, there are few public, widely accepted, large-scale datasets for benchmarking different methods. This paper presents a new, large dataset consisting of 5000 high-resolution remote Sensing images, manually labeled into 20 semantic classes for scene classification. Each class includes more than 200 image samples with different appearances. Several classic classification algorithms are compared on this dataset. To our knowledge, this work is the first to provide a public benchmark dataset of this size for scene classification of high-resolution remote Sensing Imagery, together with comparative results and an analysis of various classic classification algorithms.
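
For context, here is a minimal scikit-learn sketch of the kind of classic baseline such a benchmark compares: simple hand-crafted features (color histograms as a stand-in) and a linear SVM. The feature choice and the synthetic stand-in data are assumptions; the benchmark's actual images and protocol are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

def color_histogram(img, bins=16):
    """img: HxWx3 uint8 array -> concatenated, normalized per-channel histogram."""
    feats = [np.histogram(img[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
    h = np.concatenate(feats).astype(float)
    return h / h.sum()

# Hypothetical inputs: `images` is a list of HxWx3 arrays and `labels` the
# corresponding scene-class indices (0..19); random data stands in here.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8) for _ in range(100)]
labels = rng.integers(0, 20, size=100)

X = np.stack([color_histogram(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = LinearSVC(max_iter=5000).fit(X_tr, y_tr)
print("toy accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```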

  • Non-local sparse unmixing for hyperspectral remote Sensing Imagery
    IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2014
    Co-Authors: Yanfei Zhong, Ruyi Feng, Liangpei Zhang
    Abstract:

    Sparse unmixing is a promising approach that acts as a semi-supervised unmixing strategy by assuming that the observed image signatures can be expressed as linear combinations of a number of pure spectral signatures that are known in advance. However, conventional sparse unmixing involves finding the optimal subset of signatures for the observed data in a very large standard spectral library, without considering the spatial information. In this paper, a new sparse unmixing algorithm based on non-local means, namely non-local sparse unmixing (NLSU), is proposed to perform the unmixing task for hyperspectral remote Sensing Imagery. In NLSU, the non-local means method is used as a regularizer for sparse unmixing to exploit the similar patterns and structures in the abundance image. The NLSU algorithm, based on the sparse spectral unmixing model, improves the spectral unmixing accuracy by incorporating the non-local spatial information by means of a weighted average over all the pixels in the abundance image. Five experiments with three simulated and two real hyperspectral images were performed to evaluate the performance of the proposed algorithm in comparison to the previous sparse unmixing methods: sparse unmixing via variable splitting and augmented Lagrangian (SUnSAL) and sparse unmixing via variable splitting augmented Lagrangian and total variation (SUnSAL-TV). The experimental results demonstrate that NLSU outperforms the other algorithms, with a better spectral unmixing accuracy, and is an effective spectral unmixing algorithm for hyperspectral remote Sensing Imagery.
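
A minimal NumPy sketch of the two ingredients the abstract combines: per-pixel L1-regularized (sparse) unmixing against a spectral library, followed by a non-local weighted averaging of the abundances. The ISTA solver, similarity weights, and toy library are assumptions and simplifications, not the NLSU optimization scheme itself.

```python
import numpy as np

def ista_unmix(Y, A, lam=0.05, iters=200):
    """Y: (bands, pixels) spectra, A: (bands, m) library -> (m, pixels) abundances."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(iters):
        G = A.T @ (A @ X - Y)               # gradient of the data-fit term
        X = np.maximum(X - G / L - lam / L, 0.0)   # soft threshold + nonnegativity
    return X

def nonlocal_smooth(X, h=0.1):
    """Weighted average of each pixel's abundances over all pixels, with weights
    from abundance similarity (a simple stand-in for patch-based non-local means)."""
    d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)   # pairwise distances
    W = np.exp(-d2 / (h ** 2))
    W /= W.sum(axis=1, keepdims=True)
    return X @ W.T

# Toy data: 5 library members, 3 active, 50 pixels, 30 bands.
rng = np.random.default_rng(1)
A = rng.random((30, 5))
X_true = np.zeros((5, 50)); X_true[:3] = rng.random((3, 50))
Y = A @ X_true + 0.01 * rng.standard_normal((30, 50))
X = nonlocal_smooth(ista_unmix(Y, A))
print("abundance RMSE:", np.sqrt(((X - X_true) ** 2).mean()))
```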

Yanfei Zhong - One of the best experts on this subject based on the ideXlab platform.

  • Multi-class geospatial object detection based on a position-sensitive balancing framework for high spatial resolution remote Sensing Imagery
    ISPRS Journal of Photogrammetry and Remote Sensing, 2018
    Co-Authors: Yanfei Zhong, Xiaobing Han, Liangpei Zhang
    Abstract:

    Multi-class geospatial object detection from high spatial resolution (HSR) remote Sensing Imagery is attracting increasing attention in a wide range of object-related civil and engineering applications. However, the distribution of objects in HSR remote Sensing Imagery is location-variable and complicated, and accurately detecting these objects is a critical problem. Thanks to the powerful feature extraction and representation capability of deep learning, integrated frameworks that couple region proposal generation with object detection have greatly improved multi-class geospatial object detection for HSR remote Sensing Imagery. However, because the convolution operations in a convolutional neural network (CNN) make the learned features largely translation-invariant, the classification stage is seldom affected, but the localization accuracy of the predicted bounding boxes in the detection stage is easily degraded. This dilemma between translation invariance in the classification stage and translation variance in the object detection stage has not been addressed for HSR remote Sensing Imagery, and it causes position accuracy problems for multi-class geospatial object detection with region proposal generation and object detection. To further improve the performance of the integrated region proposal generation and object detection framework for HSR remote Sensing Imagery, a position-sensitive balancing (PSB) framework is proposed in this paper for multi-class geospatial object detection. The proposed PSB framework takes full advantage of a fully convolutional network (FCN) built on a residual network, and applies a position-sensitive balancing strategy to resolve the dilemma between translation invariance in classification and translation variance in detection. In addition, a pre-training mechanism is utilized to accelerate the training procedure and increase the robustness of the proposed algorithm. The proposed algorithm is validated on a publicly available 10-class object detection dataset.

  • An efficient and robust integrated geospatial object detection framework for high spatial resolution remote Sensing Imagery
    Remote Sensing, 2017
    Co-Authors: Xiaobing Han, Yanfei Zhong, Liangpei Zhang
    Abstract:

    Geospatial object detection from high spatial resolution (HSR) remote Sensing Imagery is a significant and challenging problem when further analyzing object-related information for civil and engineering applications. However, low computational efficiency and the separation of the region generation and localization steps are two major obstacles to improving the performance of traditional convolutional neural network (CNN) based object detection methods. Although recent CNN-based object detection methods can extract features automatically, they still separate the feature extraction and detection stages, resulting in high time consumption and low efficiency. In addition, acquiring a large quantity of manually annotated samples of objects in HSR remote Sensing Imagery requires expert experience, which is expensive and unreliable. Despite the progress made in natural image object detection, the complex object distribution in HSR remote Sensing Imagery makes it difficult to apply these methods directly. To solve the above problems, a highly efficient and robust integrated geospatial object detection framework based on the faster region-based convolutional neural network (Faster R-CNN) is proposed in this paper. The proposed method realizes the integrated procedure by sharing features between the region proposal generation stage and the object detection stage. In addition, a pre-training mechanism is utilized to improve the efficiency of multi-class geospatial object detection by transfer learning from the natural Imagery domain to the HSR remote Sensing Imagery domain. Extensive experiments and comprehensive evaluations on a publicly available 10-class object detection dataset were conducted to evaluate the proposed method.

  • Non-local sparse unmixing for hyperspectral remote Sensing Imagery
    IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2014
    Co-Authors: Yanfei Zhong, Ruyi Feng, Liangpei Zhang
    Abstract:

    Sparse unmixing is a promising approach that acts as a semi-supervised unmixing strategy by assuming that the observed image signatures can be expressed as linear combinations of a number of pure spectral signatures that are known in advance. However, conventional sparse unmixing involves finding the optimal subset of signatures for the observed data in a very large standard spectral library, without considering the spatial information. In this paper, a new sparse unmixing algorithm based on non-local means, namely non-local sparse unmixing (NLSU), is proposed to perform the unmixing task for hyperspectral remote Sensing Imagery. In NLSU, the non-local means method is used as a regularizer for sparse unmixing to exploit the similar patterns and structures in the abundance image. The NLSU algorithm, based on the sparse spectral unmixing model, improves the spectral unmixing accuracy by incorporating the non-local spatial information by means of a weighted average over all the pixels in the abundance image. Five experiments with three simulated and two real hyperspectral images were performed to evaluate the performance of the proposed algorithm in comparison to the previous sparse unmixing methods: sparse unmixing via variable splitting and augmented Lagrangian (SUnSAL) and sparse unmixing via variable splitting augmented Lagrangian and total variation (SUnSAL-TV). The experimental results demonstrate that NLSU outperforms the other algorithms, with a better spectral unmixing accuracy, and is an effective spectral unmixing algorithm for hyperspectral remote Sensing Imagery.

  • An Adaptive Memetic Fuzzy Clustering Algorithm With Spatial Information for Remote Sensing Imagery
    IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2014
    Co-Authors: Yanfei Zhong, Liangpei Zhang
    Abstract:

    Due to its inherent complexity, remote Sensing image clustering is a challenging task. Recently, a number of spatially constrained clustering approaches have been proposed; however, their clustering quality usually depends on a parameter that controls the weight of the spatial information, and this parameter is difficult to determine. Meanwhile, the traditional methods for optimizing the objective functions of these clustering approaches often perform poorly because they cannot simultaneously provide both local and global search capabilities, and they rely on a single optimization method rather than hybridizing and combining existing algorithmic structures. In this paper, an adaptive fuzzy clustering algorithm with spatial information for remote Sensing Imagery (AFCM_S1) is proposed, which defines a new objective function with an adaptive spatial information weight by using the concept of entropy. In order to further enhance the optimization capability, an adaptive memetic fuzzy clustering algorithm with spatial information for remote Sensing Imagery (AMASFC) is also proposed. In AMASFC, the clustering problem is transformed into an optimization problem, and a memetic algorithm is utilized to optimize the proposed objective function, combining the global search ability of a differential evolution algorithm with a local search method using Gaussian local search (GLS). The optimal value of the specific parameter in GLS, which determines the local search efficiency, can be obtained by comparing the objective function increment for different values of the parameter. The experimental results using three remote Sensing images show that the two proposed algorithms are effective when compared with the traditional clustering algorithms.
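
To make the role of the spatial-information weight concrete, here is a minimal NumPy sketch of fuzzy c-means with a spatial term: standard FCM updates, with each pixel's memberships blended with the mean memberships of its 3x3 neighbourhood. The fixed blending weight `alpha` plays exactly the role of the parameter the paper makes adaptive, and the memetic optimization is omitted, so this is an illustration rather than AFCM_S1 or AMASFC.

```python
import numpy as np

def fcm_spatial(img, c=3, m=2.0, alpha=0.3, iters=30, eps=1e-9):
    """img: (H, W, B) image -> memberships (H, W, c) and centroids (c, B)."""
    H, W, B = img.shape
    X = img.reshape(-1, B).astype(float)
    rng = np.random.default_rng(0)
    U = rng.random((X.shape[0], c)); U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ X) / (Um.sum(axis=0)[:, None] + eps)          # centroids
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2) + eps
        U = 1.0 / (d2 ** (1.0 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)                          # FCM membership update
        # Spatial term: average memberships over a 3x3 neighbourhood.
        Ug = U.reshape(H, W, c)
        pad = np.pad(Ug, ((1, 1), (1, 1), (0, 0)), mode="edge")
        neigh = sum(pad[i:i + H, j:j + W] for i in range(3) for j in range(3)) / 9.0
        Ug = (1 - alpha) * Ug + alpha * neigh                      # fixed spatial weight
        U = (Ug / Ug.sum(axis=2, keepdims=True)).reshape(-1, c)
    return U.reshape(H, W, c), V

# Toy usage: a 32x32 three-band image.
img = np.random.rand(32, 32, 3)
labels = fcm_spatial(img, c=3)[0].argmax(axis=2)
print(labels.shape)  # (32, 32)
```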

  • Sub-pixel mapping based on artificial immune systems for remote Sensing Imagery
    Pattern Recognition, 2013
    Co-Authors: Yanfei Zhong, Liangpei Zhang
    Abstract:

    A new sub-pixel mapping strategy inspired by the clonal selection theory in artificial immune systems (AIS), namely the clonal selection sub-pixel mapping (CSSM) framework, is proposed for the sub-pixel mapping of remote Sensing Imagery, to provide detailed information on the spatial distribution of land cover within a mixed pixel. In CSSM, the sub-pixel mapping problem is formulated as assigning land-cover classes to the sub-pixels so as to maximize the spatial dependence, and it is solved by the clonal selection algorithm. Each antibody in CSSM represents a possible sub-pixel configuration of the pixel. CSSM evolves the antibody population by inheriting the biological properties of immune systems, i.e., cloning, mutation, and memory, to build a memory cell population with a diverse set of locally optimal solutions, from which the optimal sub-pixel mapping result is selected. Based on the CSSM framework, three sub-pixel mapping algorithms with different mutation operators have been developed: the clonal selection sub-pixel mapping algorithm based on Gaussian mutation (G-CSSM), Cauchy mutation (C-CSSM), and non-uniform mutation (N-CSSM). The three algorithms share the same sub-pixel mapping process and differ only in their mutation operators. The proposed algorithms are compared with the following sub-pixel mapping algorithms: direct neighboring sub-pixel mapping (DNSM), the sub-pixel mapping algorithm based on spatial attraction models (SASM), the back-propagation (BP) neural network sub-pixel mapping algorithm (BPSM), and the sub-pixel mapping algorithm based on a genetic algorithm (GASM), using both synthetic images (artificial and degraded synthetic images) and real remote Sensing Imagery. The experimental results demonstrate that the proposed approaches outperform the previous sub-pixel mapping algorithms, and hence provide an effective option for the sub-pixel mapping of remote Sensing Imagery.
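
A minimal NumPy sketch of clonal-selection-style sub-pixel mapping for a single mixed pixel: antibodies are candidate assignments of classes to the sub-pixel grid that respect the class fractions, fitness is a simple spatial-dependence score, and the population evolves by cloning, swap mutation, and elitist memory. The fitness function, population sizes, and the omission of neighbouring coarse pixels are simplifications relative to the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(grid):
    """Count same-class pairs among horizontally/vertically adjacent sub-pixels
    (a simple proxy for spatial dependence)."""
    return int((grid[:, :-1] == grid[:, 1:]).sum() + (grid[:-1, :] == grid[1:, :]).sum())

def swap_mutation(grid, n_swaps=1):
    g = grid.copy()
    S = g.shape[0]
    for _ in range(n_swaps):
        (i1, j1), (i2, j2) = rng.integers(0, S, size=(2, 2))
        g[i1, j1], g[i2, j2] = g[i2, j2], g[i1, j1]     # preserves class counts
    return g

def cssm(counts, S=4, pop=20, clones=5, gens=100):
    """counts: number of sub-pixels per class (must sum to S*S)."""
    base = np.repeat(np.arange(len(counts)), counts)
    population = [rng.permutation(base).reshape(S, S) for _ in range(pop)]
    best = max(population, key=fitness)
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        offspring = [swap_mutation(p) for p in population[:clones] for _ in range(clones)]
        population = sorted(population + offspring, key=fitness, reverse=True)[:pop]
        if fitness(population[0]) > fitness(best):
            best = population[0]           # memory: keep the best antibody seen
    return best

# Toy usage: a 4x4 sub-pixel grid with class fractions 0.5 / 0.25 / 0.25.
result = cssm(counts=[8, 4, 4], S=4)
print(result, fitness(result))
```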

Ran Tao - One of the best experts on this subject based on the ideXlab platform.

  • ORSIm detector: A novel object detection framework in optical remote Sensing Imagery using spatial-frequency channel features
    IEEE Transactions on Geoscience and Remote Sensing, 2019
    Co-Authors: Danfeng Hong, Jiaojiao Tian, Jocelyn Chanussot, Ran Tao
    Abstract:

    With the rapid development of spaceborne imaging techniques, object detection in optical remote Sensing Imagery has drawn much attention in recent decades. While many advanced works have been developed with powerful learning algorithms, the incomplete feature representation still cannot meet the demand for effectively and efficiently handling image deformations, particularly object scaling and rotation. To this end, we propose a novel object detection framework, called the optical remote Sensing Imagery detector (ORSIm detector), integrating diverse channel feature extraction, feature learning, fast image pyramid matching, and a boosting strategy. The ORSIm detector adopts a novel spatial-frequency channel feature (SFCF) that jointly considers rotation-invariant channel features constructed in the frequency domain and the original spatial channel features (e.g., color channels and gradient magnitude). Subsequently, we refine the SFCF using a learning-based strategy in order to obtain high-level, semantically meaningful features. In the test phase, we achieve fast and coarsely scaled channel computation by mathematically estimating a scaling factor in the image domain. Extensive experiments conducted on two different airborne data sets demonstrate the superiority and effectiveness of the method in comparison with previous state-of-the-art methods.
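
A minimal NumPy/scikit-learn sketch of the channel-feature-plus-boosting idea: per-window channel features (color channels and gradient magnitude; the rotation-invariant frequency-domain channels of SFCF are omitted) aggregated over a coarse grid, with a boosted classifier trained on object versus background windows. The window size, cell grid, and synthetic data are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def channel_features(win):
    """win: (H, W, 3) window -> cell-pooled color + gradient-magnitude channels."""
    gray = win.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    channels = np.dstack([win, mag[..., None]])           # H x W x 4 channels
    H, W, C = channels.shape
    # Aggregate each channel over a coarse 4x4 grid of cells.
    cells = channels.reshape(4, H // 4, 4, W // 4, C).mean(axis=(1, 3))
    return cells.ravel()                                   # 4*4*4 = 64 features

# Hypothetical training windows: positives contain a bright blob, negatives are noise.
rng = np.random.default_rng(0)
def make_window(positive):
    w = rng.random((32, 32, 3)) * 0.3
    if positive:
        w[12:20, 12:20] += 0.7
    return w

X = np.stack([channel_features(make_window(i < 100)) for i in range(200)])
y = np.array([1] * 100 + [0] * 100)
clf = GradientBoostingClassifier(n_estimators=100).fit(X, y)
print("training accuracy:", clf.score(X, y))
```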

  • ORSIm detector: A novel object detection framework in optical remote Sensing Imagery using spatial-frequency channel features
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Danfeng Hong, Jiaojiao Tian, Jocelyn Chanussot, Ran Tao
    Abstract:

    With the rapid development of spaceborne imaging techniques, object detection in optical remote Sensing Imagery has drawn much attention in recent decades. While many advanced works have been developed with powerful learning algorithms, the incomplete feature representation still cannot meet the demand for effectively and efficiently handling image deformations, particularly object scaling and rotation. To this end, we propose a novel object detection framework, called the optical remote Sensing Imagery detector (ORSIm detector), integrating diverse channel feature extraction, feature learning, fast image pyramid matching, and a boosting strategy. The ORSIm detector adopts a novel spatial-frequency channel feature (SFCF) that jointly considers rotation-invariant channel features constructed in the frequency domain and the original spatial channel features (e.g., color channels and gradient magnitude). Subsequently, we refine the SFCF using a learning-based strategy in order to obtain high-level, semantically meaningful features. In the test phase, we achieve fast and coarsely scaled channel computation by mathematically estimating a scaling factor in the image domain. Extensive experiments conducted on two different airborne datasets demonstrate the superiority and effectiveness of the method in comparison with previous state-of-the-art methods.

Christopher Kanan - One of the best experts on this subject based on the ideXlab platform.

  • Low-shot learning for the semantic segmentation of remote Sensing Imagery
    IEEE Transactions on Geoscience and Remote Sensing, 2018
    Co-Authors: Ronald Kemker, Ryan Luu, Christopher Kanan
    Abstract:

    Recent advances in computer vision using deep learning with RGB Imagery (e.g., object recognition and detection) have been made possible thanks to the development of large annotated RGB image data sets. In contrast, multispectral image (MSI) and hyperspectral image (HSI) data sets contain far fewer labeled images, in part due to the wide variety of sensors used. These annotations are especially limited for semantic segmentation, or pixelwise classification, of remote Sensing Imagery because it is labor intensive to generate image annotations. Low-shot learning algorithms can make effective inferences despite smaller amounts of annotated data. In this paper, we study low-shot learning using self-taught feature learning for semantic segmentation. We introduce: 1) an improved self-taught feature learning framework for HSI and MSI data and 2) a semisupervised classification algorithm. When these are combined, they achieve state-of-the-art performance on remote Sensing data sets that have little annotated training data available. These low-shot learning frameworks will reduce the manual image annotation burden and improve semantic segmentation performance for remote Sensing Imagery.
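
A minimal scikit-learn sketch of the low-shot recipe: features learned from unlabeled pixels (PCA as a simple stand-in for the paper's self-taught encoder), followed by a semi-supervised classifier trained with only a few labels per class. The band count, label budget, and synthetic pixels are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical hyperspectral pixels: 2000 pixels x 64 bands, 4 classes.
n, bands, classes = 2000, 64, 4
centers = rng.random((classes, bands))
y_full = rng.integers(0, classes, size=n)
X = centers[y_full] + 0.05 * rng.standard_normal((n, bands))

# 1) "Self-taught" feature learning on unlabeled data (no labels used here).
feats = PCA(n_components=10).fit_transform(X)

# 2) Low-shot, semi-supervised classification: only 5 labels per class,
#    the rest marked -1 (unlabeled) for self-training.
y_semi = np.full(n, -1)
for c in range(classes):
    idx = np.where(y_full == c)[0][:5]
    y_semi[idx] = c

clf = SelfTrainingClassifier(SVC(probability=True)).fit(feats, y_semi)
print("pixel accuracy:", (clf.predict(feats) == y_full).mean())
```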

  • Low-shot learning for the semantic segmentation of remote Sensing Imagery
    arXiv: Computer Vision and Pattern Recognition, 2018
    Co-Authors: Ronald Kemker, Ryan Luu, Christopher Kanan
    Abstract:

    Recent advances in computer vision using deep learning with RGB Imagery (e.g., object recognition and detection) have been made possible thanks to the development of large annotated RGB image datasets. In contrast, multispectral image (MSI) and hyperspectral image (HSI) datasets contain far fewer labeled images, in part due to the wide variety of sensors used. These annotations are especially limited for semantic segmentation, or pixel-wise classification, of remote Sensing Imagery because it is labor intensive to generate image annotations. Low-shot learning algorithms can make effective inferences despite smaller amounts of annotated data. In this paper, we study low-shot learning using self-taught feature learning for semantic segmentation. We introduce 1) an improved self-taught feature learning framework for HSI and MSI data and 2) a semi-supervised classification algorithm. When these are combined, they achieve state-of-the-art performance on remote Sensing datasets that have little annotated training data available. These low-shot learning frameworks will reduce the manual image annotation burden and improve semantic segmentation performance for remote Sensing Imagery.

Danfeng Hong - One of the best experts on this subject based on the ideXlab platform.

  • ORSIm detector: A novel object detection framework in optical remote Sensing Imagery using spatial-frequency channel features
    IEEE Transactions on Geoscience and Remote Sensing, 2019
    Co-Authors: Danfeng Hong, Jiaojiao Tian, Jocelyn Chanussot, Ran Tao
    Abstract:

    With the rapid development of spaceborne imaging techniques, object detection in optical remote Sensing Imagery has drawn much attention in recent decades. While many advanced works have been developed with powerful learning algorithms, the incomplete feature representation still cannot meet the demand for effectively and efficiently handling image deformations, particularly object scaling and rotation. To this end, we propose a novel object detection framework, called the optical remote Sensing Imagery detector (ORSIm detector), integrating diverse channel feature extraction, feature learning, fast image pyramid matching, and a boosting strategy. The ORSIm detector adopts a novel spatial-frequency channel feature (SFCF) that jointly considers rotation-invariant channel features constructed in the frequency domain and the original spatial channel features (e.g., color channels and gradient magnitude). Subsequently, we refine the SFCF using a learning-based strategy in order to obtain high-level, semantically meaningful features. In the test phase, we achieve fast and coarsely scaled channel computation by mathematically estimating a scaling factor in the image domain. Extensive experiments conducted on two different airborne data sets demonstrate the superiority and effectiveness of the method in comparison with previous state-of-the-art methods.

  • ORSIm detector: A novel object detection framework in optical remote Sensing Imagery using spatial-frequency channel features
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Danfeng Hong, Jiaojiao Tian, Jocelyn Chanussot, Ran Tao
    Abstract:

    With the rapid development of spaceborne imaging techniques, object detection in optical remote Sensing Imagery has drawn much attention in recent decades. While many advanced works have been developed with powerful learning algorithms, the incomplete feature representation still cannot meet the demand for effectively and efficiently handling image deformations, particularly object scaling and rotation. To this end, we propose a novel object detection framework, called the optical remote Sensing Imagery detector (ORSIm detector), integrating diverse channel feature extraction, feature learning, fast image pyramid matching, and a boosting strategy. The ORSIm detector adopts a novel spatial-frequency channel feature (SFCF) that jointly considers rotation-invariant channel features constructed in the frequency domain and the original spatial channel features (e.g., color channels and gradient magnitude). Subsequently, we refine the SFCF using a learning-based strategy in order to obtain high-level, semantically meaningful features. In the test phase, we achieve fast and coarsely scaled channel computation by mathematically estimating a scaling factor in the image domain. Extensive experiments conducted on two different airborne datasets demonstrate the superiority and effectiveness of the method in comparison with previous state-of-the-art methods.