Hard Thresholding

The Experts below are selected from a list of 7,206 Experts worldwide, ranked by the ideXlab platform

Kwangcheng Chen - One of the best experts on this subject based on the ideXlab platform.

  • Sparse PCA via Hard Thresholding for Blind Source Separation
    International Conference on Acoustics Speech and Signal Processing, 2016
    Co-Authors: Kwangcheng Chen
    Abstract:

    Principal Component Analysis (PCA) is adopted in diverse areas including signal processing and machine learning. However, the derived principal components, being linear combinations of all the original variables, are hard to interpret in many applications, especially blind source separation. We therefore propose regularized PCA via Hard Thresholding, so that the derived loadings are sparse and easier to interpret. The adoption of Hard Thresholding brings two advantages. First, the method can be implemented with linear operators and is thus computationally efficient even in p ≫ n or large-p scenarios. Second, the threshold can be selected objectively from statistical decision theory, without domain knowledge. Moreover, simulations show that our method outperforms the L1-penalized method, making it a strong competitor to existing sparse PCA approaches.
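
The entry above describes the method only at a high level. As a rough illustration of the idea (not the authors' estimator), the sketch below computes ordinary PCA loadings and then hard-thresholds small entries; the threshold `tau` is a user-supplied constant here, whereas the paper selects it via statistical decision theory.

```python
import numpy as np

def sparse_pca_hard_threshold(X, n_components, tau):
    """Illustrative sketch: ordinary PCA loadings followed by hard thresholding.

    X            : (n, p) data matrix, rows are observations.
    n_components : number of principal components to keep.
    tau          : hard threshold; loading entries with |value| <= tau are set
                   to zero. (The paper derives its threshold from statistical
                   decision theory; here it is simply a user-supplied constant.)
    """
    Xc = X - X.mean(axis=0)                       # centre the data
    # Right singular vectors of the centred data matrix are the PCA loadings.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    loadings = Vt[:n_components].T                # shape (p, n_components)
    return np.where(np.abs(loadings) > tau, loadings, 0.0)

# Toy usage: two latent sources mixed into five observed variables.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 5))
print(sparse_pca_hard_threshold(X, n_components=2, tau=0.3))
```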

Xiaotong Yuan - One of the best experts on this subject based on the ideXlab platform.

  • Efficient Stochastic Gradient Hard Thresholding
    Neural Information Processing Systems, 2018
    Co-Authors: Pan Zhou, Xiaotong Yuan, Jiashi Feng
    Abstract:

    Stochastic gradient Hard Thresholding methods have recently been shown to work favorably for solving large-scale empirical risk minimization problems under sparsity or rank constraints. Despite their improved iteration complexity over full-gradient methods, the gradient evaluation and Hard Thresholding complexity of existing stochastic algorithms usually scales linearly with the data size, which can still be expensive when the data are huge, and the Hard Thresholding step can be as expensive as a singular value decomposition in rank-constrained problems. To address these deficiencies, we propose an efficient hybrid stochastic gradient Hard Thresholding (HSG-HT) method that provably has sample-size-independent gradient evaluation and Hard Thresholding complexity bounds. Specifically, we prove that the stochastic gradient evaluation complexity of HSG-HT scales linearly with the inverse of the sub-optimality, while its Hard Thresholding complexity scales only logarithmically. By applying the heavy-ball acceleration technique, we further propose an accelerated variant of HSG-HT with an improved dependence on the restricted condition number. Numerical results confirm our theoretical analysis and demonstrate the computational efficiency of the proposed methods.
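
To make the family of methods concrete, here is a minimal NumPy sketch of a stochastic-gradient hard-thresholding loop for sparsity-constrained least squares. It shows only the generic scheme the paper builds on; the hybrid batch-size schedule that gives HSG-HT its sample-size-independent complexity bounds is not reproduced, and the step size and batch size are illustrative choices.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x and zero the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def stochastic_iht(A, y, k, step=0.1, batch=32, iters=500, seed=0):
    """Sketch: minimise (1/2n)||Ax - y||^2 s.t. ||x||_0 <= k with mini-batch
    gradients followed by hard thresholding (the generic stochastic IHT scheme,
    not the paper's hybrid HSG-HT schedule)."""
    rng = np.random.default_rng(seed)
    n, p = A.shape
    x = np.zeros(p)
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)   # sample a mini-batch
        g = A[idx].T @ (A[idx] @ x - y[idx]) / batch     # stochastic gradient
        x = hard_threshold(x - step * g, k)              # gradient step + projection
    return x

# Toy usage: recover a 5-sparse vector from noisy random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((400, 100))
x_true = np.zeros(100)
x_true[[7, 23, 51, 68, 90]] = [1.5, -2.0, 1.0, 1.8, -1.2]
y = A @ x_true + 0.01 * rng.standard_normal(400)
print(np.nonzero(stochastic_iht(A, y, k=5))[0])   # nonzero indices of the estimate
```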

  • Training Skinny Deep Neural Networks with Iterative Hard Thresholding Methods
    arXiv: Computer Vision and Pattern Recognition, 2016
    Co-Authors: Xiaojie Jin, Xiaotong Yuan, Jiashi Feng, Shuicheng Yan
    Abstract:

    Deep neural networks have achieved remarkable success in a wide range of practical problems. However, due to their inherently large parameter space, deep models are notoriously prone to overfitting and difficult to deploy on portable devices with limited memory. In this paper, we propose an iterative Hard Thresholding (IHT) approach to train Skinny Deep Neural Networks (SDNNs). An SDNN has far fewer parameters yet can achieve competitive or even better performance than its full CNN counterpart. More concretely, the IHT approach trains an SDNN by alternating between two phases: (I) perform Hard Thresholding to drop connections with small activations and fine-tune the remaining significant filters; (II) re-activate the frozen connections and train the entire network to improve its overall discriminative capability. We verify the superiority of SDNNs in terms of efficiency and classification performance on four benchmark object recognition datasets, including CIFAR-10, CIFAR-100, MNIST and ImageNet. Experimental results clearly demonstrate that IHT can be used to train SDNNs based on various CNN architectures such as NIN and AlexNet.
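
The pruning step of phase (I) amounts to hard-thresholding a layer's weights and remembering which connections were dropped. The NumPy sketch below is only a schematic of that single step on one weight matrix; the paper's training pipeline, activation-based criterion, and fine-tuning loop are not reproduced.

```python
import numpy as np

def prune_layer(W, keep_ratio):
    """Phase (I) sketch: hard-threshold a layer's weight matrix.

    Keeps the `keep_ratio` fraction of weights with the largest magnitude and
    zeroes the rest, returning the pruned weights and a boolean mask. During
    fine-tuning only the kept weights would be updated; in phase (II) the mask
    is dropped and all connections are trained again. Illustrative only, not
    the authors' training code.
    """
    k = max(1, int(keep_ratio * W.size))
    thresh = np.partition(np.abs(W).ravel(), -k)[-k]   # k-th largest magnitude
    mask = np.abs(W) >= thresh
    return W * mask, mask

# Toy usage on a random "layer": keep 10% of the connections.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
W_sparse, mask = prune_layer(W, keep_ratio=0.10)
print(mask.mean())   # fraction of surviving connections, roughly 0.10
```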

  • Exact Recovery of Hard Thresholding Pursuit
    Neural Information Processing Systems, 2016
    Co-Authors: Xiaotong Yuan, Tong Zhang
    Abstract:

    Hard Thresholding Pursuit (HTP) is a class of truncated gradient descent methods for finding sparse solutions of $\ell_0$-constrained loss minimization problems. HTP-style methods have been shown to enjoy strong approximation guarantees and impressive numerical performance in high-dimensional statistical learning applications. However, the theoretical treatment of these methods has traditionally been restricted to the analysis of parameter estimation consistency; it remains an open problem to analyze their support recovery performance (a.k.a. sparsistency) for recovering the global minimizer of the original NP-hard problem. In this paper, we bridge this gap by showing, for the first time, that exact recovery of the global sparse minimizer is possible for HTP-style methods under conditions that bound the restricted strong condition number. We further show that HTP-style methods can recover the support of certain relaxed sparse solutions without assuming a bounded restricted strong condition number. Numerical results on simulated data confirm our theoretical predictions.

  • Gradient Hard Thresholding Pursuit for Sparsity-Constrained Optimization
    arXiv: Learning, 2013
    Co-Authors: Xiaotong Yuan, Tong Zhang
    Abstract:

    Hard Thresholding Pursuit (HTP) is an iterative greedy selection procedure for finding sparse solutions of underdetermined linear systems. The method has been shown to have strong theoretical guarantees and impressive numerical performance. In this paper, we generalize HTP from compressive sensing to the generic problem of sparsity-constrained convex optimization. The proposed algorithm iterates between a standard gradient descent step and a Hard Thresholding step, with or without debiasing. We prove that our method enjoys strong guarantees analogous to HTP in terms of rate of convergence and parameter estimation accuracy. Numerical evidence shows that our method is superior to state-of-the-art greedy selection methods on sparse logistic regression and sparse precision matrix estimation tasks.
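
For concreteness, a minimal NumPy sketch of the iterate described above (gradient step, hard thresholding to the k largest entries, optional debiasing by re-fitting on the selected support) is given below for the least-squares loss. The step size, iteration count, and toy data are illustrative choices, not the paper's.

```python
import numpy as np

def gradient_htp(A, y, k, step=1.0, iters=100, debias=True):
    """Sketch of a gradient Hard Thresholding pursuit for least squares."""
    n, p = A.shape
    x = np.zeros(p)
    for _ in range(iters):
        g = A.T @ (A @ x - y) / n                       # full gradient
        z = x - step * g                                # gradient descent step
        support = np.argpartition(np.abs(z), -k)[-k:]   # top-k support
        x = np.zeros(p)
        if debias:
            # Debiasing: re-fit the nonzero coefficients on the selected support.
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            x[support] = coef
        else:
            x[support] = z[support]
    return x

# Toy usage: noiseless recovery of a 3-sparse vector.
rng = np.random.default_rng(2)
A = rng.standard_normal((80, 40))
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [1.0, -2.0, 1.5]
y = A @ x_true
print(np.round(gradient_htp(A, y, k=3)[[3, 17, 29]], 2))   # estimates at the true support
```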

Jiashi Feng - One of the best experts on this subject based on the ideXlab platform.

  • Efficient Stochastic Gradient Hard Thresholding
    Neural Information Processing Systems, 2018
    Co-Authors: Pan Zhou, Xiaotong Yuan, Jiashi Feng
    Abstract:

    Stochastic gradient Hard Thresholding methods have recently been shown to work favorably for solving large-scale empirical risk minimization problems under sparsity or rank constraints. Despite their improved iteration complexity over full-gradient methods, the gradient evaluation and Hard Thresholding complexity of existing stochastic algorithms usually scales linearly with the data size, which can still be expensive when the data are huge, and the Hard Thresholding step can be as expensive as a singular value decomposition in rank-constrained problems. To address these deficiencies, we propose an efficient hybrid stochastic gradient Hard Thresholding (HSG-HT) method that provably has sample-size-independent gradient evaluation and Hard Thresholding complexity bounds. Specifically, we prove that the stochastic gradient evaluation complexity of HSG-HT scales linearly with the inverse of the sub-optimality, while its Hard Thresholding complexity scales only logarithmically. By applying the heavy-ball acceleration technique, we further propose an accelerated variant of HSG-HT with an improved dependence on the restricted condition number. Numerical results confirm our theoretical analysis and demonstrate the computational efficiency of the proposed methods.

  • Training Skinny Deep Neural Networks with Iterative Hard Thresholding Methods
    arXiv: Computer Vision and Pattern Recognition, 2016
    Co-Authors: Xiaojie Jin, Xiaotong Yuan, Jiashi Feng, Shuicheng Yan
    Abstract:

    Deep neural networks have achieved remarkable success in a wide range of practical problems. However, due to their inherently large parameter space, deep models are notoriously prone to overfitting and difficult to deploy on portable devices with limited memory. In this paper, we propose an iterative Hard Thresholding (IHT) approach to train Skinny Deep Neural Networks (SDNNs). An SDNN has far fewer parameters yet can achieve competitive or even better performance than its full CNN counterpart. More concretely, the IHT approach trains an SDNN by alternating between two phases: (I) perform Hard Thresholding to drop connections with small activations and fine-tune the remaining significant filters; (II) re-activate the frozen connections and train the entire network to improve its overall discriminative capability. We verify the superiority of SDNNs in terms of efficiency and classification performance on four benchmark object recognition datasets, including CIFAR-10, CIFAR-100, MNIST and ImageNet. Experimental results clearly demonstrate that IHT can be used to train SDNNs based on various CNN architectures such as NIN and AlexNet.

Thomas Blumensath - One of the best experts on this subject based on the ideXlab platform.

  • Accelerated iterative Hard Thresholding
    Signal Processing, 2012
    Co-Authors: Thomas Blumensath
    Abstract:

    The iterative Hard Thresholding algorithm (IHT) is a powerful and versatile algorithm for compressed sensing and other sparse inverse problems. The standard IHT implementation faces several challenges when applied to practical problems: the step size and sparsity parameters have to be chosen appropriately and, as IHT is based on a gradient descent strategy, convergence is only linear. Whilst the step size can be chosen adaptively, as suggested previously, this letter studies the use of acceleration methods to improve convergence speed. Based on recent suggestions in the literature, we show that a host of acceleration methods are also applicable to IHT. Importantly, we show that these modifications not only significantly increase the observed speed of the method, but also satisfy the same strong performance guarantees enjoyed by the original IHT method.
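
The letter surveys several acceleration schemes; as one concrete (and simpler) example of the kind of modification meant, the sketch below adds a heavy-ball style momentum term to the basic IHT update. The momentum constant and step-size rule here are illustrative and do not carry the letter's performance guarantees.

```python
import numpy as np

def momentum_iht(Phi, y, k, momentum=0.5, iters=200):
    """Sketch: IHT with a heavy-ball momentum term added to the gradient step."""
    n, p = Phi.shape
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2            # safe step from the spectral norm
    x_prev = np.zeros(p)
    x = np.zeros(p)
    for _ in range(iters):
        grad = Phi.T @ (Phi @ x - y)
        z = x - step * grad + momentum * (x - x_prev)   # gradient step + momentum
        keep = np.argpartition(np.abs(z), -k)[-k:]      # k largest-magnitude entries
        x_prev = x
        x = np.zeros(p)
        x[keep] = z[keep]
    return x
```

Usage mirrors plain IHT (same inputs and sparsity level k); the only change is the extra momentum state carried between iterations.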

  • Iterative Hard Thresholding for compressed sensing
    Applied and Computational Harmonic Analysis, 2009
    Co-Authors: Thomas Blumensath, Michael Davies
    Abstract:

    Compressed sensing is a technique for sampling compressible signals below the Nyquist rate whilst still allowing near-optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative Hard Thresholding algorithm when applied to the compressed sensing recovery problem, and show that the algorithm possesses several desirable properties, which are made precise in the main text of the paper.
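
The recursion analysed in this paper is the canonical IHT update x <- H_k(x + Phi^T (y - Phi x)), where H_k keeps the k largest-magnitude entries. A self-contained NumPy version, with the usual assumption that Phi has been scaled so that its spectral norm is below one, is sketched below; the toy problem sizes are illustrative.

```python
import numpy as np

def iht(Phi, y, k, iters=300):
    """Basic iterative Hard Thresholding: x <- H_k(x + Phi.T @ (y - Phi @ x)).

    The unit step size assumes the sensing matrix has been scaled so that its
    spectral norm is below one, as in the standard convergence analysis.
    """
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        z = x + Phi.T @ (y - Phi @ x)               # Landweber update
        keep = np.argpartition(np.abs(z), -k)[-k:]  # indices of the k largest entries
        x = np.zeros_like(z)
        x[keep] = z[keep]
    return x

# Toy usage: undersampled measurements of a 4-sparse signal.
rng = np.random.default_rng(5)
Phi = rng.standard_normal((80, 200))
Phi /= np.linalg.norm(Phi, 2) * 1.01                # enforce ||Phi||_2 < 1
x_true = np.zeros(200)
x_true[[10, 50, 120, 180]] = [1.5, -1.0, 2.0, 1.2]
y = Phi @ x_true
print(np.round(iht(Phi, y, k=4)[[10, 50, 120, 180]], 2))   # estimates at the true support
```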

  • Iterative Hard Thresholding and L0 Regularisation
    International Conference on Acoustics Speech and Signal Processing, 2007
    Co-Authors: Thomas Blumensath, Mehrdad Yaghoobi, Michael Davies
    Abstract:

    Sparse signal approximations are approximations that use only a small number of elementary waveforms to describe a signal. In this paper we prove the convergence of an iterative Hard Thresholding algorithm and show that the fixed points of that algorithm are local minima of the sparse approximation cost function, which measures both the reconstruction error and the number of elements in the representation. Simulation results suggest that the algorithm is comparable in performance to a commonly used alternative method.
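
For the l0-regularised cost ||y - Phi x||^2 + lam * ||x||_0, minimising coordinate-wise keeps a coefficient only if its magnitude exceeds sqrt(lam), which is where the hard threshold comes from. The sketch below iterates a Landweber step followed by that threshold, assuming Phi has been scaled so that ||Phi||_2 < 1; it illustrates this type of iteration with illustrative parameter choices rather than reproducing the paper's exact algorithm.

```python
import numpy as np

def iht_l0(Phi, y, lam, iters=200):
    """Sketch: iterative Hard Thresholding for ||y - Phi x||^2 + lam * ||x||_0.

    Each iteration takes a Landweber (gradient) step and then zeroes every
    coefficient whose magnitude is at most sqrt(lam); that threshold comes
    from minimising the penalised cost coordinate-wise. Assumes ||Phi||_2 < 1.
    """
    x = np.zeros(Phi.shape[1])
    tau = np.sqrt(lam)
    for _ in range(iters):
        z = x + Phi.T @ (y - Phi @ x)          # Landweber step
        x = np.where(np.abs(z) > tau, z, 0.0)  # element-wise hard threshold
    return x

# Toy usage with a scaled random dictionary.
rng = np.random.default_rng(3)
Phi = rng.standard_normal((50, 120))
Phi /= np.linalg.norm(Phi, 2) * 1.1            # enforce ||Phi||_2 < 1
x_true = np.zeros(120)
x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]
y = Phi @ x_true
print(np.round(iht_l0(Phi, y, lam=0.01)[[5, 40, 77]], 2))   # estimates at the true support
```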

Yun-bin Zhao - One of the best experts on this subject based on the ideXlab platform.

  • Newton-Step-Based Hard Thresholding Algorithms for Sparse Signal Recovery
    IEEE Transactions on Signal Processing, 2020
    Co-Authors: Nan Meng, Yun-bin Zhao
    Abstract:

    Sparse signal recovery, or compressed sensing, can be formulated as certain sparse optimization problems. Classic optimization theory indicates that Newton-like methods often have a numerical advantage over gradient methods for nonlinear optimization problems. In this paper, we propose the Newton-step-based iterative Hard Thresholding (NSIHT) and Newton-step-based Hard Thresholding pursuit (NSHTP) algorithms for sparse signal recovery. Different from traditional iterative Hard Thresholding (IHT) and Hard Thresholding pursuit (HTP), the proposed algorithms adopt a Newton-like search direction instead of the steepest descent direction. A theoretical analysis of the proposed algorithms is carried out, and sufficient conditions for guaranteed sparse signal recovery are established in terms of the restricted isometry property of the sensing matrix, one of the standard assumptions in compressed sensing and signal approximation. Empirical results on synthetic signal recovery indicate that the performance of the proposed algorithms is comparable to that of several existing algorithms, and their numerical behavior with respect to residual reduction and parameter changes is investigated through simulations.
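
As a rough illustration of replacing the steepest-descent direction with a Newton-like one, the sketch below uses a ridge-regularised direction solve(A^T A + ridge*I, A^T (y - A x)) before the top-k hard thresholding. The ridge term is this sketch's device for the singular Hessian when p > n; the paper's NSIHT and NSHTP construct their Newton steps differently, so this should be read only as an illustration of the general idea, with illustrative parameters and toy data.

```python
import numpy as np

def newton_like_iht(A, y, k, iters=100, ridge=1e-2):
    """Sketch: IHT-style iteration with a regularised Newton-like direction.

    Plain IHT thresholds x + mu * A.T @ (y - A @ x); here the steepest-descent
    direction is replaced by solve(A.T A + ridge*I, A.T (y - A x)). Not the
    paper's exact NSIHT/NSHTP update; parameters are illustrative.
    """
    n, p = A.shape
    H = A.T @ A + ridge * np.eye(p)                 # regularised (p x p) Hessian
    x = np.zeros(p)
    for _ in range(iters):
        d = np.linalg.solve(H, A.T @ (y - A @ x))   # Newton-like search direction
        z = x + d
        keep = np.argpartition(np.abs(z), -k)[-k:]  # keep the k largest entries
        x = np.zeros(p)
        x[keep] = z[keep]
    return x

# Toy usage: 4-sparse recovery from 60 random measurements.
rng = np.random.default_rng(4)
A = rng.standard_normal((60, 150))
x_true = np.zeros(150)
x_true[[2, 30, 55, 99]] = [1.2, -0.9, 1.4, 1.0]
y = A @ x_true
print(np.round(newton_like_iht(A, y, k=4)[[2, 30, 55, 99]], 2))   # estimates at the true support
```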