Sparse Approximation

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 23,280 Experts worldwide ranked by the ideXlab platform

Joel A Tropp - One of the best experts on this subject based on the ideXlab platform.

  • Computational Methods for Sparse Solution of Linear Inverse Problems
    Proceedings of the IEEE, 2010
    Co-Authors: Joel A Tropp, Stephen J Wright
    Abstract:

    The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.
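One of the workhorse algorithms covered by surveys of this kind is orthogonal matching pursuit (OMP). Below is a minimal NumPy sketch of the idea (our own illustration, not code from the paper; all names are ours): greedily select the dictionary column most correlated with the current residual, then re-fit the coefficients by least squares.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick up to k columns of A
    to approximate y, re-fitting coefficients by least squares."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# toy problem: y is an exact 2-sparse combination of columns of A
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
A /= np.linalg.norm(A, axis=0)            # unit-norm dictionary
x_true = np.zeros(10); x_true[[2, 7]] = [1.5, -2.0]
y = A @ x_true
x_hat = omp(A, y, k=2)
```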

  • Norms of Random Submatrices and Sparse Approximation
    Comptes Rendus Mathematique, 2008
    Co-Authors: Joel A Tropp
    Abstract:

    Many problems in the theory of sparse approximation require bounds on operator norms of a random submatrix drawn from a fixed matrix. The purpose of this Note is to collect estimates for several different norms that are most important in the analysis of l1-minimization algorithms. Several of these bounds have not appeared in detail.
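The quantities such bounds control are easy to probe numerically. A small Monte Carlo sketch (our own illustration, not code from the Note) estimates the typical spectral norm of a random s-column submatrix:

```python
import numpy as np

def random_submatrix_norms(A, s, trials=200, seed=0):
    """Estimate the typical spectral norm of a submatrix formed by
    sampling s columns of A uniformly without replacement."""
    rng = np.random.default_rng(seed)
    d = A.shape[1]
    norms = [np.linalg.norm(A[:, rng.choice(d, size=s, replace=False)], 2)
             for _ in range(trials)]
    return float(np.mean(norms)), float(np.max(norms))

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 40))
mean_norm, max_norm = random_submatrix_norms(A, s=5)
```

For a fixed matrix, the submatrix norm can never exceed the full spectral norm, but it is typically far smaller; bounds of the kind surveyed in the Note quantify that gap.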

  • Algorithms for Simultaneous Sparse Approximation. Part II: Convex Relaxation
    Signal Processing, 2006
    Co-Authors: Joel A Tropp
    Abstract:

    A simultaneous sparse approximation problem requests a good approximation of several input signals at once using different linear combinations of the same elementary signals. At the same time, the problem balances the error in approximation against the total number of elementary signals that participate. These elementary signals typically model coherent structures in the input signals, and they are chosen from a large, linearly dependent collection. The first part of this paper presents theoretical and numerical results for a greedy pursuit algorithm, called simultaneous orthogonal matching pursuit. The second part of the paper develops another algorithmic approach called convex relaxation. This method replaces the combinatorial simultaneous sparse approximation problem with a closely related convex program that can be solved efficiently with standard mathematical programming software. The paper develops conditions under which convex relaxation computes good solutions to simultaneous sparse approximation problems.
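A common form of this relaxation replaces the combinatorial row-support penalty with a convex mixed norm on the coefficient matrix. The sketch below is our own minimal illustration of that idea, using an l1/l2 row norm solved by proximal gradient descent; the paper itself solves the relaxation with general-purpose mathematical programming software.

```python
import numpy as np

def row_soft_threshold(X, t):
    """Proximal operator of t * (sum of row l2 norms): shrink each row."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return X * scale

def simultaneous_relaxation(A, Y, lam, steps=500):
    """Minimize 0.5*||A X - Y||_F^2 + lam * sum_i ||row_i(X)||_2
    by proximal gradient descent; rows of X are shared across signals."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L with L = ||A||_2^2
    X = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(steps):
        grad = A.T @ (A @ X - Y)
        X = row_soft_threshold(X - step * grad, step * lam)
    return X

# toy problem: five signals sharing the same two active atoms
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 15)); A /= np.linalg.norm(A, axis=0)
X_true = np.zeros((15, 4)); X_true[[3, 9], :] = rng.standard_normal((2, 4))
Y = A @ X_true
X_hat = simultaneous_relaxation(A, Y, lam=0.05)
```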

  • Algorithms for Simultaneous Sparse Approximation. Part I: Greedy Pursuit
    Signal Processing, 2006
    Co-Authors: Joel A Tropp, Anna C Gilbert, Martin J Strauss
    Abstract:

    A simultaneous sparse approximation problem requests a good approximation of several input signals at once using different linear combinations of the same elementary signals. At the same time, the problem balances the error in approximation against the total number of elementary signals that participate. These elementary signals typically model coherent structures in the input signals, and they are chosen from a large, linearly dependent collection. The first part of this paper proposes a greedy pursuit algorithm, called simultaneous orthogonal matching pursuit (S-OMP), for simultaneous sparse approximation. Then it presents some numerical experiments that demonstrate how a sparse model for the input signals can be identified more reliably given several input signals. Afterward, the paper proves that the S-OMP algorithm can compute provably good solutions to several simultaneous sparse approximation problems. The second part of the paper develops another algorithmic approach called convex relaxation, and it provides theoretical results on the performance of convex relaxation for simultaneous sparse approximation.
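A minimal NumPy sketch of the S-OMP idea (our own hedged illustration; variable names are not from the paper): at each step, pick the atom whose total correlation with all current residuals is largest, then refit every signal on the shared support.

```python
import numpy as np

def s_omp(A, Y, k):
    """Simultaneous OMP: one shared support for all columns of Y.
    At each step pick the atom with the largest total correlation
    (sum of absolute inner products) with the current residuals."""
    R = Y.copy()
    support = []
    for _ in range(k):
        scores = np.sum(np.abs(A.T @ R), axis=1)
        scores[support] = -np.inf          # never re-pick an atom
        support.append(int(np.argmax(scores)))
        # joint least-squares refit of all signals on the shared support
        C, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ C
    X = np.zeros((A.shape[1], Y.shape[1]))
    X[support, :] = C
    return X, sorted(support)

# toy problem: five signals sharing the same three active atoms
rng = np.random.default_rng(2)
A = rng.standard_normal((25, 12)); A /= np.linalg.norm(A, axis=0)
X_true = np.zeros((12, 5)); X_true[[1, 6, 10], :] = rng.standard_normal((3, 5))
Y = A @ X_true
X_hat, support = s_omp(A, Y, k=3)
```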

  • Designing Structured Tight Frames via an Alternating Projection Method
    IEEE Transactions on Information Theory, 2005
    Co-Authors: Joel A Tropp, Inderjit S Dhillon, Robert W Heath, Thomas Strohmer
    Abstract:

    Tight frames, also known as general Welch-bound-equality sequences, generalize orthonormal systems. Numerous applications, including communications, coding, and sparse approximation, require finite-dimensional tight frames that possess additional structural properties. This paper proposes an alternating projection method that is versatile enough to solve a huge class of inverse eigenvalue problems (IEPs), which includes the frame design problem. To apply this method, one needs only to solve a matrix nearness problem that arises naturally from the design specifications. Therefore, it is fast and easy to develop versions of the algorithm that target new design problems. Alternating projection will often succeed even when algebraic constructions are unavailable. To demonstrate that alternating projection is an effective tool for frame design, the paper studies some important structural properties in detail. First, it addresses the most basic design problem: constructing tight frames with prescribed vector norms. Then, it discusses equiangular tight frames, which are natural dictionaries for sparse approximation. Finally, it examines tight frames whose individual vectors have low peak-to-average-power ratio (PAR), which is a valuable property for code-division multiple-access (CDMA) applications. Numerical experiments show that the proposed algorithm succeeds in each of these three cases. The appendices investigate the convergence properties of the algorithm.
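The basic flavor of the method fits in a few lines: alternately project a random starting matrix onto the set of matrices with unit-norm columns and onto the set of tight frames (the latter projection replaces all singular values by the common frame bound). This is our own simplified reconstruction of the scheme for the most basic design problem, not the paper's full algorithm.

```python
import numpy as np

def nearest_tight(F, alpha):
    """Nearest alpha-tight frame: replace all singular values by alpha
    (a polar-decomposition step), so that F F^T = alpha^2 * I."""
    U, _, Vt = np.linalg.svd(F, full_matrices=False)
    return alpha * U @ Vt

def unit_norm_tight_frame(d, N, iters=200, seed=0):
    """Alternating projection between the unit-norm-column constraint
    and the tight-frame constraint."""
    alpha = np.sqrt(N / d)                 # frame bound forced by unit norms
    F = np.random.default_rng(seed).standard_normal((d, N))
    for _ in range(iters):
        F /= np.linalg.norm(F, axis=0)     # project onto unit-norm columns
        F = nearest_tight(F, alpha)        # project onto tight frames
    return F / np.linalg.norm(F, axis=0)   # finish on the norm constraint

F = unit_norm_tight_frame(3, 7)
```

At a fixed point, both constraints hold simultaneously, i.e. the columns are unit vectors and F F^T is (N/d) times the identity.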

Jianfeng Cai - One of the best experts on this subject based on the ideXlab platform.

  • Data-Driven Tight Frame for Multi-Channel Images and Its Application to Joint Color-Depth Image Reconstruction
    Journal of the Operations Research Society of China, 2015
    Co-Authors: Jin Wang, Jianfeng Cai
    Abstract:

    In image restoration, we usually assume that the underlying image has a good sparse approximation under a certain system. The wavelet tight frame system has been proven to be such an efficient system for sparsely approximating piecewise smooth images. Thus, it has been widely used in many practical image restoration problems. However, images from different scenarios are so diverse that no static wavelet tight frame system can sparsely approximate all of them well. To overcome this, Cai et al. (Appl Comput Harmon Anal 37:89–105, 2014) recently proposed a method that derives a data-driven tight frame adapted to the specific input image, leading to a better sparse approximation. The data-driven tight frame has been applied successfully to image denoising and CT image reconstruction. In this paper, we extend this data-driven tight frame construction method to multi-channel images. We construct a discrete tight frame system for each channel and assume that their sparse coefficients have a joint sparsity. The multi-channel data-driven tight frame construction scheme is applied to joint color and depth image reconstruction. Experimental results show that the proposed approach performs better than state-of-the-art joint color and depth image reconstruction approaches.
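The joint-sparsity assumption can be made concrete with a tiny sketch (our own illustration, not the paper's algorithm): coefficients at the same position across channels are kept or killed together, based on their joint l2 magnitude.

```python
import numpy as np

def joint_hard_threshold(channels, lam):
    """Hard-threshold coefficient arrays from several channels jointly:
    a position survives only if the l2 norm of the channel values at
    that position is at least lam."""
    stack = np.stack(channels)                   # (n_channels, ...) array
    joint = np.sqrt(np.sum(stack ** 2, axis=0))  # joint magnitude per position
    mask = joint >= lam
    return [c * mask for c in channels]

c1 = np.array([3.0, 0.1, 0.0])   # e.g. color-channel coefficients
c2 = np.array([0.0, 0.1, 4.0])   # e.g. depth-channel coefficients
t1, t2 = joint_hard_threshold([c1, c2], lam=1.0)
```

Note how the last position of the first channel survives even though it is zero there: the depth channel is large at that position, so the shared support keeps it.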

  • Blind Motion Deblurring from a Single Image Using Sparse Approximation
    Computer Vision and Pattern Recognition, 2009
    Co-Authors: Jianfeng Cai, Chaoqiang Liu, Zuowei Shen
    Abstract:

    Restoring a clear image from a single motion-blurred image due to camera shake has long been a challenging problem in digital imaging. Existing blind deblurring techniques either remove only simple motion blurring or need user interaction to work on more complex cases. In this paper, we present an approach to remove motion blurring from a single image by formulating blind deblurring as a new joint optimization problem, which simultaneously maximizes the sparsity of the blur kernel and the sparsity of the clear image under suitable redundant tight frame systems (a curvelet system for kernels and a framelet system for images). Without requiring any prior information about the blur kernel as input, our proposed approach is able to recover high-quality images from given blurred images. Furthermore, the new sparsity constraints under tight frame systems enable the application of a fast algorithm called linearized Bregman iteration to efficiently solve the proposed minimization problem. Experiments on both simulated and real images showed that our algorithm can effectively remove complex motion blurring from natural images.
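Linearized Bregman iteration itself is only a few lines. The sketch below (our own, written for the generic problem min mu*||x||_1 + (1/(2*delta))*||x||^2 subject to Ax = y, not the paper's frame-based deblurring model) alternates accumulating residual correlations with soft thresholding:

```python
import numpy as np

def soft(v, mu):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, y, mu=2.0, delta=None, iters=3000):
    """Linearized Bregman iteration: converges to the solution of
    min mu*||x||_1 + (1/(2*delta))*||x||^2  subject to  A x = y."""
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = v + A.T @ (y - A @ x)   # accumulate residual correlations
        x = delta * soft(v, mu)     # shrink, then scale
    return x

# toy underdetermined system with a 2-sparse solution
rng = np.random.default_rng(5)
A = rng.standard_normal((20, 40)); A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(40); x_true[[4, 17]] = [2.0, -1.5]
y = A @ x_true
x_hat = linearized_bregman(A, y)
```

Each iteration costs only two matrix-vector products and a componentwise shrinkage, which is what makes the method attractive for the large frame-coefficient problems arising in deblurring.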

David Kempe - One of the best experts on this subject based on the ideXlab platform.

  • Submodular Meets Spectral: Greedy Algorithms for Subset Selection, Sparse Approximation and Dictionary Selection
    International Conference on Machine Learning, 2011
    Co-Authors: Abhimanyu Das, David Kempe
    Abstract:

    We study the problem of selecting a subset of k random variables from a large set, in order to obtain the best linear prediction of another variable of interest. This problem can be viewed in the context of both feature selection and sparse approximation. We analyze the performance of widely used greedy heuristics, using insights from the maximization of submodular functions and spectral analysis. We introduce the submodularity ratio as a key quantity to help understand why greedy algorithms perform well even when the variables are highly correlated. Using our techniques, we obtain the strongest known approximation guarantees for this problem, both in terms of the submodularity ratio and the smallest k-sparse eigenvalue of the covariance matrix. We also analyze greedy algorithms for the dictionary selection problem, and significantly improve the previously known guarantees. Our theoretical analysis is complemented by experiments on real-world and synthetic data sets; the experiments show that the submodularity ratio is a stronger predictor of the performance of greedy algorithms than other spectral parameters.
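The greedy heuristic analyzed here is classical forward selection. A small self-contained sketch (our own illustration of the heuristic, not the paper's code) adds, at each step, the variable giving the largest gain in explained variance:

```python
import numpy as np

def forward_selection(X, y, k):
    """Greedy forward selection for linear prediction: at each step, add
    the column of X that most increases the fraction of sum(y^2)
    explained by the least-squares fit (an uncentered R^2)."""
    def r_squared(S):
        if not S:
            return 0.0
        coef, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
        resid = y - X[:, S] @ coef
        return 1.0 - np.sum(resid ** 2) / np.sum(y ** 2)
    support = []
    for _ in range(k):
        rest = [j for j in range(X.shape[1]) if j not in support]
        support.append(max(rest, key=lambda j: r_squared(support + [j])))
    return support, r_squared(support)

# toy data: the target is an exact 2-sparse linear model
rng = np.random.default_rng(6)
X = rng.standard_normal((50, 8))
y = 2.0 * X[:, 1] - 1.0 * X[:, 5]
support, score = forward_selection(X, y, k=2)
```

The submodularity-ratio analysis explains when this myopic procedure is near-optimal even though the marginal gains are not literally submodular for correlated variables.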

Guibo Ye - One of the best experts on this subject based on the ideXlab platform.

  • Data-Driven Tight Frame Construction and Image Denoising
    Applied and Computational Harmonic Analysis, 2014
    Co-Authors: Hui Ji, Zuowei Shen, Guibo Ye
    Abstract:

    Sparsity-based regularization methods for image restoration assume that the underlying image has a good sparse approximation under a certain system. Such a system can be a basis, a frame, or a general over-complete dictionary. One widely used class of such systems in image restoration is wavelet tight frames. There have been enduring efforts to seek wavelet tight frames under which a certain class of functions or images has a good sparse approximation. However, the structure of images varies greatly in practice, and a system that works well for one type of image may not work for another. This paper presents a method that derives a discrete tight frame system from the input image itself to provide a better sparse approximation to the input image. Such an adaptive tight frame construction scheme is applied to image denoising by constructing a tight frame tailored to the given noisy data. The experiments showed that the proposed approach performs better in image denoising than wavelet tight frames designed for a class of images. Moreover, by ensuring that the system derived from our approach is always a tight frame, our approach also runs much faster than other over-complete dictionary based approaches with comparable denoising performance.
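The construction alternates two cheap steps: hard-threshold the current frame coefficients of the image patches, then update the filters by an orthogonal Procrustes fit, which keeps the filter bank orthogonal and hence the induced system tight. Below is our own simplified NumPy sketch of that alternation on generic patch data (names and parameters are ours, not the paper's):

```python
import numpy as np

def data_driven_frame(G, lam, iters=20, seed=0):
    """Alternate hard thresholding of frame coefficients with an
    orthogonal (Procrustes) update of the analysis filters W; W stays
    orthogonal throughout, which is the tight-frame condition here."""
    p = G.shape[0]
    rng = np.random.default_rng(seed)
    # start from a random orthogonal filter bank
    W, _ = np.linalg.qr(rng.standard_normal((p, p)))
    for _ in range(iters):
        C = W @ G                           # analysis coefficients
        C[np.abs(C) < lam] = 0.0            # hard-threshold (sparsify)
        U, _, Vt = np.linalg.svd(C @ G.T)   # Procrustes: min ||W G - C||_F
        W = U @ Vt                          # nearest orthogonal update
    return W

rng = np.random.default_rng(3)
G = rng.standard_normal((16, 200))          # e.g. 4x4 patches as columns
W = data_driven_frame(G, lam=0.5)
```

Because each filter update is a single small SVD rather than a general dictionary-learning step, the scheme is much cheaper than over-complete dictionary methods, which matches the speed claim in the abstract.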

Nadler Boaz - One of the best experts on this subject based on the ideXlab platform.

  • The Trimmed Lasso: Sparse Recovery Guarantees and Practical Optimization by the Generalized Soft-Min Penalty
    2021
    Co-Authors: Amir Tal, Basri Ronen, Nadler Boaz
    Abstract:

    We present a new approach to solve the sparse approximation or best subset selection problem, namely to find a $k$-sparse vector ${\bf x}\in\mathbb{R}^d$ that minimizes the $\ell_2$ residual $\lVert A{\bf x}-{\bf y} \rVert_2$. We consider a regularized approach, whereby this residual is penalized by the non-convex $\textit{trimmed lasso}$, defined as the $\ell_1$-norm of ${\bf x}$ excluding its $k$ largest-magnitude entries. We prove that the trimmed lasso has several appealing theoretical properties, and in particular derive sparse recovery guarantees assuming successful optimization of the penalized objective. Next, we show empirically that directly optimizing this objective can be quite challenging. Instead, we propose a surrogate for the trimmed lasso, called the $\textit{generalized soft-min}$. This penalty smoothly interpolates between the classical lasso and the trimmed lasso, while taking into account all possible $k$-sparse patterns. The generalized soft-min penalty involves summation over $\binom{d}{k}$ terms, yet we derive a polynomial-time algorithm to compute it. This, in turn, yields a practical method for the original sparse approximation problem. Via simulations, we demonstrate its competitive performance compared to the current state of the art. Comment: 49 pages; 7 figures. To appear in SIAM Journal on Mathematics of Data Science (SIMODS).
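The trimmed lasso penalty itself is straightforward to compute for a single vector. A minimal sketch, taken directly from the definition in the abstract (the function name is ours): sort the magnitudes and sum all but the k largest, so the penalty vanishes exactly on k-sparse vectors.

```python
import numpy as np

def trimmed_lasso(x, k):
    """Trimmed lasso penalty: l1 norm of x excluding its k
    largest-magnitude entries. Equals 0 iff x is k-sparse."""
    mags = np.sort(np.abs(x))              # ascending order
    return float(mags[:len(x) - k].sum())  # drop the k largest

x = np.array([3.0, -0.5, 0.1, 2.0])
penalty = trimmed_lasso(x, k=2)            # excludes |3.0| and |2.0|
```

With k = 0 the penalty reduces to the ordinary l1 norm, which is the lasso end of the interpolation that the generalized soft-min surrogate smooths over.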