Scale Feature

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 160,500 Experts worldwide, ranked by the ideXlab platform

Manuel Grizonnet - One of the best experts on this subject based on the ideXlab platform.

  • Large-Scale Feature selection with Gaussian mixture models for the classification of high dimensional remote sensing images
    IEEE Transactions on Computational Imaging, 2017
    Co-Authors: Adrien Lagrange, Mathieu Fauvel, Manuel Grizonnet
    Abstract:

    A large-scale feature selection wrapper is discussed for the classification of high-dimensional remote sensing images. An efficient implementation is proposed based on intrinsic properties of Gaussian mixture models and block matrices. The criterion function is split into two parts: one that is updated to test each feature, and one that needs to be updated only once per feature selection. This split saves substantial computation for each test. The algorithm is implemented in C++ and integrated into the Orfeo Toolbox. It has been compared to other classification algorithms on two high-dimensional remote sensing images. Results show that the approach provides good classification accuracy with low computation time.
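The two-part split of the criterion can be illustrated with a simplified sketch. The paper works with full Gaussian mixture models and block-matrix updates; the toy version below instead uses diagonal (independent-feature) Gaussians, where the class log-likelihood of a feature subset decomposes into a sum of per-feature terms, so the part cached once per selection round and the part recomputed per candidate are easy to see. All data and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data: features 0 and 1 are informative, the rest are noise.
n, d = 200, 10
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * y          # informative
X[:, 1] -= 1.5 * y          # informative

def per_feature_loglik(X, y):
    """Per-feature class-conditional Gaussian log-densities (diagonal model).

    With a diagonal covariance, the log-likelihood of a feature subset is the
    SUM of these per-feature terms, so subset scores decompose additively.
    """
    n, d = X.shape
    ll = np.zeros((n, d, 2))
    for c in (0, 1):
        mu = X[y == c].mean(axis=0)
        var = X[y == c].var(axis=0) + 1e-6
        ll[:, :, c] = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
    return ll

ll = per_feature_loglik(X, y)

def forward_select(ll, y, k):
    n, d = ll.shape[:2]
    selected = []
    remaining = set(range(d))
    base = np.zeros((n, 2))     # part updated only once per selection round
    for _ in range(k):
        # Part updated per candidate: add one feature's term to the cached base.
        scores = {j: ((base + ll[:, j, :]).argmax(axis=1) == y).mean()
                  for j in remaining}
        best = max(scores, key=scores.get)
        base += ll[:, best, :]  # fold the chosen feature into the cached part
        selected.append(best)
        remaining.remove(best)
    return selected

print(forward_select(ll, y, 2))
```

Because each candidate test touches only one additive term instead of re-evaluating the full model, the per-test cost is constant in the number of already-selected features, which is the source of the savings the abstract describes.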

Yichang James Tsai - One of the best experts on this subject based on the ideXlab platform.

  • Y-net: Multi-Scale Feature Aggregation Network with Wavelet Structure Similarity Loss Function for Single Image Dehazing
    arXiv: Computer Vision and Pattern Recognition, 2020
    Co-Authors: Haohsiang Yang, Chaohan Huck Yang, Yichang James Tsai
    Abstract:

    Single image dehazing is an ill-posed two-dimensional signal reconstruction problem. Recently, deep convolutional neural networks (CNNs) have been used successfully in many computer vision problems. In this paper, we propose a Y-net, named for its structure. This network reconstructs clear images by aggregating multi-scale feature maps. Additionally, we propose a Wavelet Structure SIMilarity (W-SSIM) loss function for the training step. In the proposed loss function, discrete wavelet transforms are applied repeatedly to divide the image into patches of different sizes, frequencies, and scales. The proposed loss function accumulates the SSIM loss of the various patches with respective ratios. Extensive experimental results demonstrate that the proposed Y-net with the W-SSIM loss function restores high-quality clear images and outperforms state-of-the-art algorithms. Code and models are available at https://github.com/dectrfov/Y-net
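The loss accumulation described above can be sketched in a simplified form: repeatedly decompose both images with a wavelet transform and sum a weighted SSIM loss over the subbands at each scale. This sketch substitutes a hand-rolled Haar transform for a general DWT and a single global SSIM statistic for the windowed SSIM the paper uses; the per-scale ratios are illustrative, not the paper's values.

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Global (single-window) SSIM; the paper uses windowed SSIM instead."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cxy + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def w_ssim_loss(pred, target, ratios=(0.5, 0.3, 0.2)):
    """Accumulate (1 - SSIM) over wavelet scales with fixed per-scale ratios."""
    loss = 0.0
    for r in ratios:
        bands_p = haar_dwt2(pred)
        bands_t = haar_dwt2(target)
        # Average the SSIM loss over the four subbands at this scale.
        loss += r * np.mean([1 - ssim_global(p, t)
                             for p, t in zip(bands_p, bands_t)])
        pred, target = bands_p[0], bands_t[0]   # recurse on the LL band
    return loss

img = np.random.default_rng(1).random((64, 64))
print(w_ssim_loss(img, img))        # identical images -> 0.0
print(w_ssim_loss(img, 1 - img))    # dissimilar images -> positive loss
```

Each recursion halves the resolution, so the ratios weight structural agreement at coarse scales against fine-scale detail, which is the trade-off the W-SSIM loss is designed to control.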

  • Y-net: Multi-Scale Feature Aggregation Network with Wavelet Structure Similarity Loss Function for Single Image Dehazing
    International Conference on Acoustics Speech and Signal Processing, 2020
    Co-Authors: Haohsiang Yang, Chaohan Huck Yang, Yichang James Tsai
    Abstract:

    Single image dehazing is an ill-posed two-dimensional signal reconstruction problem. Recently, deep convolutional neural networks (CNNs) have been used successfully in many computer vision problems. In this paper, we propose a Y-net, named for its structure. This network reconstructs clear images by aggregating multi-scale feature maps. Additionally, we propose a Wavelet Structure SIMilarity (W-SSIM) loss function for the training step. In the proposed loss function, discrete wavelet transforms are applied repeatedly to divide the image into patches of different sizes, frequencies, and scales. The proposed loss function accumulates the SSIM loss of the various patches with respective ratios. Extensive experimental results demonstrate that the proposed Y-net with the W-SSIM loss function restores high-quality clear images and outperforms state-of-the-art algorithms. Code and models are available at https://github.com/dectrfov/Y-net

Adrien Lagrange - One of the best experts on this subject based on the ideXlab platform.

  • Large-Scale Feature selection with Gaussian mixture models for the classification of high dimensional remote sensing images
    IEEE Transactions on Computational Imaging, 2017
    Co-Authors: Adrien Lagrange, Mathieu Fauvel, Manuel Grizonnet
    Abstract:

    A large-scale feature selection wrapper is discussed for the classification of high-dimensional remote sensing images. An efficient implementation is proposed based on intrinsic properties of Gaussian mixture models and block matrices. The criterion function is split into two parts: one that is updated to test each feature, and one that needs to be updated only once per feature selection. This split saves substantial computation for each test. The algorithm is implemented in C++ and integrated into the Orfeo Toolbox. It has been compared to other classification algorithms on two high-dimensional remote sensing images. Results show that the approach provides good classification accuracy with low computation time.

Haohsiang Yang - One of the best experts on this subject based on the ideXlab platform.

  • Y-net: Multi-Scale Feature Aggregation Network with Wavelet Structure Similarity Loss Function for Single Image Dehazing
    arXiv: Computer Vision and Pattern Recognition, 2020
    Co-Authors: Haohsiang Yang, Chaohan Huck Yang, Yichang James Tsai
    Abstract:

    Single image dehazing is an ill-posed two-dimensional signal reconstruction problem. Recently, deep convolutional neural networks (CNNs) have been used successfully in many computer vision problems. In this paper, we propose a Y-net, named for its structure. This network reconstructs clear images by aggregating multi-scale feature maps. Additionally, we propose a Wavelet Structure SIMilarity (W-SSIM) loss function for the training step. In the proposed loss function, discrete wavelet transforms are applied repeatedly to divide the image into patches of different sizes, frequencies, and scales. The proposed loss function accumulates the SSIM loss of the various patches with respective ratios. Extensive experimental results demonstrate that the proposed Y-net with the W-SSIM loss function restores high-quality clear images and outperforms state-of-the-art algorithms. Code and models are available at https://github.com/dectrfov/Y-net

  • Y-net: Multi-Scale Feature Aggregation Network with Wavelet Structure Similarity Loss Function for Single Image Dehazing
    International Conference on Acoustics Speech and Signal Processing, 2020
    Co-Authors: Haohsiang Yang, Chaohan Huck Yang, Yichang James Tsai
    Abstract:

    Single image dehazing is an ill-posed two-dimensional signal reconstruction problem. Recently, deep convolutional neural networks (CNNs) have been used successfully in many computer vision problems. In this paper, we propose a Y-net, named for its structure. This network reconstructs clear images by aggregating multi-scale feature maps. Additionally, we propose a Wavelet Structure SIMilarity (W-SSIM) loss function for the training step. In the proposed loss function, discrete wavelet transforms are applied repeatedly to divide the image into patches of different sizes, frequencies, and scales. The proposed loss function accumulates the SSIM loss of the various patches with respective ratios. Extensive experimental results demonstrate that the proposed Y-net with the W-SSIM loss function restores high-quality clear images and outperforms state-of-the-art algorithms. Code and models are available at https://github.com/dectrfov/Y-net

Yu Xue - One of the best experts on this subject based on the ideXlab platform.

  • Self-adaptive parameter and strategy based particle swarm optimization for large-scale feature selection problems with multiple classifiers
    Applied Soft Computing, 2020
    Co-Authors: Yu Xue, Tao Tang, Wei Pang, Alex X Liu
    Abstract:

    Feature selection has been widely used in classification to improve classification accuracy and reduce computational complexity. Recently, evolutionary computation (EC) has become an important approach to solving feature selection problems. However, as the datasets processed by classifiers become increasingly large and complex, more and more irrelevant and redundant features may exist, and there may be more local optima in the large-scale feature space. Traditional EC algorithms that have only one candidate solution generation strategy (CSGS) with fixed parameter values may therefore not perform well in searching for optimal feature subsets for large-scale feature selection problems. Furthermore, many existing studies use only one classifier to evaluate feature subsets; to show the effectiveness of evolutionary algorithms for feature selection, more classifiers should be tested. Thus, to efficiently solve large-scale feature selection problems and to show whether EC-based feature selection is efficient for more classifiers, a self-adaptive parameter and strategy based particle swarm optimization (SPS-PSO) algorithm using multiple classifiers is proposed in this paper. In SPS-PSO, a representation scheme for solutions and five CSGSs are used. To automatically adjust the CSGSs and their parameter values during the evolutionary process, a strategy self-adaptive mechanism and a parameter self-adaptive mechanism are employed within the framework of particle swarm optimization (PSO). Using these self-adaptive mechanisms, SPS-PSO can adjust both the CSGSs and their parameter values when solving different large-scale feature selection problems, and therefore has good global and local search ability on these problems.
    Moreover, four classifiers, i.e., k-nearest neighbor (KNN), linear discriminant analysis (LDA), extreme learning machine (ELM), and support vector machine (SVM), are individually used as evaluation functions to test the effectiveness of the feature subsets generated by SPS-PSO. Nine datasets from the UCI Machine Learning Repository and Causality Workbench are used in the experiments. All nine datasets have more than 600 dimensions, and two of them have more than 5,000 dimensions. The experimental results show that the strategy and parameter self-adaptive mechanisms improve the performance of the evolutionary algorithms, and that SPS-PSO achieves higher classification accuracy and more concise solutions than the other algorithms on the large-scale feature selection problems considered in this research. In addition, feature selection improves classification accuracy and reduces computational time for the various classifiers. Furthermore, KNN is a better surrogate model than the other classifiers used in these experiments.
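The wrapper setup the abstract describes can be sketched with a plain binary PSO using a single fixed update strategy and a leave-one-out KNN surrogate as the evaluation function. This is a deliberately minimal baseline, not SPS-PSO itself: it has no strategy or parameter self-adaptation, and the data, swarm size, and constants are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: only the first 3 of 20 features carry class information.
n, d = 150, 20
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d))
X[:, :3] += 1.5 * y[:, None]

def knn_accuracy(mask, k=5):
    """Leave-one-out KNN accuracy of a 0/1 feature mask (the surrogate)."""
    mask = mask.astype(bool)
    if not mask.any():
        return 0.0
    Z = X[:, mask]
    D = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(D, np.inf)          # leave each sample out of its own vote
    nn = D.argsort(1)[:, :k]
    pred = (y[nn].mean(1) > 0.5).astype(int)
    return (pred == y).mean()

# Plain binary PSO: one fixed update rule (SPS-PSO adapts among five CSGSs).
swarm = (rng.random((12, d)) < 0.5).astype(int)
vel = np.zeros((12, d))
pbest = swarm.copy()
pfit = np.array([knn_accuracy(m) for m in swarm])
for it in range(15):
    g = pbest[pfit.argmax()]             # global best feature mask
    r1, r2 = rng.random((2, 12, d))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - swarm) + 1.5 * r2 * (g - swarm)
    # Sigmoid transfer function maps velocities to bit-flip probabilities.
    swarm = (rng.random((12, d)) < 1 / (1 + np.exp(-vel))).astype(int)
    fit = np.array([knn_accuracy(m) for m in swarm])
    better = fit > pfit
    pbest[better], pfit[better] = swarm[better], fit[better]

best = pbest[pfit.argmax()]
print(pfit.max(), best[:3])
```

Swapping `knn_accuracy` for an LDA, ELM, or SVM scorer reproduces the multi-classifier comparison the paper runs; the KNN surrogate is simply the cheapest to sketch self-contained.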

  • Self-adaptive particle swarm optimization for large-scale feature selection in classification
    ACM Transactions on Knowledge Discovery From Data, 2019
    Co-Authors: Yu Xue, Bing Xue, Mengjie Zhang
    Abstract:

    Many evolutionary computation (EC) methods have been used to solve feature selection problems, and they perform well on most small-scale feature selection problems. However, as the dimensionality of feature selection problems increases, the solution space grows exponentially. Meanwhile, datasets contain more irrelevant features than relevant ones, which leads to many local optima in the huge solution space. Therefore, existing EC methods still suffer from stagnation in local optima on large-scale feature selection problems. Furthermore, large-scale feature selection problems on different datasets may have different properties, so an existing EC method with only one candidate solution generation strategy (CSGS) may perform poorly across different large-scale feature selection problems. In addition, it is time-consuming to find a suitable EC method and corresponding suitable parameter values for a given large-scale feature selection problem if we want to solve it effectively and efficiently. In this article, we propose a self-adaptive particle swarm optimization (SaPSO) algorithm for feature selection, particularly for large-scale feature selection. First, an encoding scheme for the feature selection problem is employed in SaPSO. Second, three important issues related to self-adaptive algorithms are investigated. After that, the SaPSO algorithm with a typical self-adaptive mechanism is proposed. The experimental results on 12 datasets show that the solution size obtained by the SaPSO algorithm is smaller than that of its EC counterparts on all datasets. The SaPSO algorithm also performs better than its non-EC and EC counterparts in terms of classification accuracy, not only on most training sets but also on most test sets. Furthermore, as the dimensionality of the feature selection problem increases, the advantages of SaPSO become more prominent. This highlights that the SaPSO algorithm is suitable for solving feature selection problems, particularly large-scale ones.
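The core idea behind a self-adaptive mechanism of this kind can be sketched in isolation: maintain a pool of candidate solution generation strategies and re-weight the probability of picking each one by its empirical success rate. The sketch below applies this to a toy hill climb on a continuous objective; the strategy pool, objective, and constants are all hypothetical stand-ins, not the paper's CSGSs or update rules.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical pool of candidate-solution-generation strategies (CSGSs):
# each proposes a new solution from the current one.
strategies = [
    lambda x: x + rng.normal(0, 0.5, x.size),        # small Gaussian move
    lambda x: x + rng.normal(0, 2.0, x.size),        # large Gaussian move
    lambda x: np.where(rng.random(x.size) < 0.2,     # reset a few coordinates
                       rng.uniform(-5, 5, x.size), x),
]

def sphere(x):                  # toy objective to minimize
    return float((x ** 2).sum())

probs = np.full(len(strategies), 1 / len(strategies))
success = np.ones(len(strategies))     # Laplace-smoothed success counts
trials = np.ones(len(strategies))

x = rng.uniform(-5, 5, 10)
fx = sphere(x)
for it in range(300):
    s = rng.choice(len(strategies), p=probs)
    cand = strategies[s](x)
    trials[s] += 1
    if sphere(cand) < fx:                # accept only improving moves
        x, fx = cand, sphere(cand)
        success[s] += 1
    # Self-adaptation: re-weight each strategy by its empirical success rate.
    rates = success / trials
    probs = rates / rates.sum()

print(round(fx, 3), np.round(probs, 2))
```

As the search converges, the selection probabilities shift toward whichever strategy is currently paying off, which is the behavior a strategy self-adaptive mechanism is meant to provide without hand-tuning a single update rule per problem.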