Greedy Algorithms

The experts below are selected from a list of 37,455 experts worldwide, ranked by the ideXlab platform.

Vladimir Temlyakov - One of the best experts on this subject based on the ideXlab platform.

  • Biorthogonal greedy algorithms in convex optimization
    arXiv: Numerical Analysis, 2020
    Co-Authors: Anton Dereventsov, Vladimir Temlyakov
    Abstract:

    The study of greedy approximation in the context of convex optimization is becoming a promising research direction, as greedy algorithms are actively employed to construct sparse minimizers of convex functions with respect to given sets of elements. In this paper we propose a unified way of analyzing a certain kind of greedy-type algorithm for the minimization of convex functions on Banach spaces. Specifically, we define the class of Weak Biorthogonal Greedy Algorithms for convex optimization, which contains a wide range of greedy algorithms. We analyze the introduced class of algorithms and establish the properties of convergence, rate of convergence, and numerical stability, the latter understood in the sense that the steps of the algorithm are allowed to be performed not precisely but with controlled computational inaccuracies. We show that the following well-known algorithms for convex optimization --- the Weak Chebyshev Greedy Algorithm (co) and the Weak Greedy Algorithm with Free Relaxation (co) --- belong to this class, and we introduce a new algorithm --- the Rescaled Weak Relaxed Greedy Algorithm (co). The numerical experiments presented demonstrate the practical performance of these greedy algorithms in the setting of convex minimization, compared to optimization with regularization, which is the conventional approach to constructing sparse minimizers.
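
The greedy step these algorithms share is easiest to see in a Hilbert-space toy case. The following is a minimal matching-pursuit-style sketch (not the authors' WCGA(co) or RWRGA(co) themselves) that minimizes the convex function E(x) = ||x - b||^2 / 2 over the span of a dictionary, at each step selecting the element most aligned with the negative gradient; the quadratic choice of E, the dimensions, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def greedy_minimize(D, b, n_steps):
    """Toy greedy minimization of E(x) = 0.5*||x - b||^2 over the span
    of the columns of D: the simplest Hilbert-space relative of the
    WCGA-type selection step, adding one dictionary element per step."""
    x = np.zeros_like(b)
    for _ in range(n_steps):
        grad = x - b                    # gradient of E at the current iterate
        scores = np.abs(D.T @ grad)     # greedy selection: alignment with -grad
        g = D[:, int(np.argmax(scores))]
        t = -(grad @ g) / (g @ g)       # exact 1-D line search along g
        x = x + t * g                   # sparse update: one element per step
    return x

d, N = 20, 100
D = rng.standard_normal((d, N))         # random dictionary (columns = elements)
b = rng.standard_normal(d)              # target of the quadratic objective
x10 = greedy_minimize(D, b, 10)
x50 = greedy_minimize(D, b, 50)
```

Because the line search is exact, the objective is non-increasing in the number of steps, which is the sparsity-versus-accuracy trade-off the abstract refers to.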

  • A unified way of analyzing some greedy algorithms
    Journal of Functional Analysis, 2019
    Co-Authors: Anton Dereventsov, Vladimir Temlyakov
    Abstract:

    In this paper we propose a unified way of analyzing a certain kind of greedy-type algorithm in Banach spaces. We define a class of Weak Biorthogonal Greedy Algorithms that contains a wide range of greedy algorithms. In particular, we show that the following well-known algorithms — the Weak Chebyshev Greedy Algorithm and the Weak Greedy Algorithm with Free Relaxation — belong to this class. We investigate the convergence, rate of convergence, and numerical stability of the Weak Biorthogonal Greedy Algorithms. Numerical stability is understood in the sense that the steps of the algorithm are allowed to be performed with controlled computational inaccuracies. We carry out a thorough analysis of the connection between the magnitude of those inaccuracies and the convergence properties of the algorithm. To emphasize the advantage of the proposed approach, we introduce a new greedy algorithm — the Rescaled Weak Relaxed Greedy Algorithm — from the above class, and derive its convergence results without analyzing the algorithm explicitly. Additionally, we explain how the proposed approach can be extended to some other types of greedy algorithms.

  • A unified way of analyzing some greedy algorithms
    arXiv: Numerical Analysis, 2018
    Co-Authors: Anton Dereventsov, Vladimir Temlyakov
    Abstract:

    A unified way of analyzing different greedy-type algorithms in Banach spaces is presented. We define a class of Weak Biorthogonal Greedy Algorithms and prove convergence and rate-of-convergence results for algorithms from this class. In particular, the following well-known algorithms --- the Weak Chebyshev Greedy Algorithm and the Weak Greedy Algorithm with Free Relaxation --- belong to this class. We also consider one more algorithm from the above class --- the Rescaled Weak Relaxed Greedy Algorithm. We discuss modifications of these algorithms that are motivated by applications, and we analyze their convergence and rate of convergence under the assumption that the steps of these algorithms may be performed with some errors. We call such algorithms approximate greedy algorithms. We prove convergence and rate-of-convergence results for the Approximate Weak Biorthogonal Greedy Algorithms; these results guarantee the stability of Weak Biorthogonal Greedy Algorithms.

  • Sparse approximation and recovery by greedy algorithms
    IEEE Transactions on Information Theory, 2014
    Co-Authors: Eugene Livshitz, Vladimir Temlyakov
    Abstract:

    We study sparse approximation by greedy algorithms. Our contribution is twofold. First, we prove exact recovery with high probability of random \(K\)-sparse signals within \(\lceil K(1+\epsilon)\rceil\) iterations of the orthogonal matching pursuit (OMP). This result shows that, in a probabilistic sense, the OMP is almost optimal for exact recovery. Second, we prove Lebesgue-type inequalities for the Weak Chebyshev Greedy Algorithm, a generalization of the weak orthogonal matching pursuit to the case of a Banach space. The main novelty of these results is the Banach space setting instead of a Hilbert space setting. However, even in the case of a Hilbert space, our results add some new elements to known results on Lebesgue-type inequalities for restricted isometry property dictionaries. Our technique is a development of the recent technique created by Zhang.
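
The OMP procedure whose recovery guarantees are analyzed above can be sketched in a few lines: greedily pick the dictionary column most correlated with the residual, then re-fit by least squares over the selected support. The Gaussian dictionary, the dimensions, and running exactly K iterations (rather than the paper's \(\lceil K(1+\epsilon)\rceil\)) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, n_iter):
    """Orthogonal matching pursuit: at each step select the column of A
    most correlated with the residual, then re-solve the least-squares
    problem restricted to the selected support."""
    support = []
    x = np.zeros(A.shape[1])
    residual = y.copy()
    for _ in range(n_iter):
        j = int(np.argmax(np.abs(A.T @ residual)))  # greedy selection
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef                            # re-fit on the support
        residual = y - A @ x
    return x, sorted(support)

# random Gaussian dictionary and a random 3-sparse signal
m, n, K = 64, 256, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
idx = rng.choice(n, K, replace=False)
x_true[idx] = rng.standard_normal(K)
x_hat, supp = omp(A, A @ x_true, n_iter=K)
```

With this many measurements relative to the sparsity, K iterations typically recover the support exactly, which is the "almost optimal" iteration count the result above makes precise.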

  • Sparse approximation and recovery by greedy algorithms in Banach spaces
    arXiv: Machine Learning, 2013
    Co-Authors: Vladimir Temlyakov
    Abstract:

    We study sparse approximation by greedy algorithms. We prove Lebesgue-type inequalities for the Weak Chebyshev Greedy Algorithm (WCGA), a generalization of the Weak Orthogonal Matching Pursuit to the case of a Banach space. The main novelty of these results is the Banach space setting instead of a Hilbert space setting. The results are proved for redundant dictionaries satisfying certain conditions. We then apply these general results to the case of bases. In particular, we prove that the WCGA provides almost optimal sparse approximation for the trigonometric system in $L_p$, $2\le p<\infty$.

David Kempe - One of the best experts on this subject based on the ideXlab platform.

  • Submodular meets spectral: greedy algorithms for subset selection, sparse approximation and dictionary selection
    International Conference on Machine Learning, 2011
    Co-Authors: Abhimanyu Das, David Kempe
    Abstract:

    We study the problem of selecting a subset of k random variables from a large set, in order to obtain the best linear prediction of another variable of interest. This problem can be viewed in the context of both feature selection and sparse approximation. We analyze the performance of widely used greedy heuristics, using insights from the maximization of submodular functions and spectral analysis. We introduce the submodularity ratio as a key quantity to help understand why greedy algorithms perform well even when the variables are highly correlated. Using our techniques, we obtain the strongest known approximation guarantees for this problem, both in terms of the submodularity ratio and the smallest k-sparse eigenvalue of the covariance matrix. We also analyze greedy algorithms for the dictionary selection problem, and significantly improve the previously known guarantees. Our theoretical analysis is complemented by experiments on real-world and synthetic data sets; the experiments show that the submodularity ratio is a stronger predictor of the performance of greedy algorithms than other spectral parameters.
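
The greedy heuristic whose guarantees are analyzed above is plain forward selection: repeatedly add the variable that most increases the squared multiple correlation R^2 with the target. The toy data, dimensions, and function names below are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def greedy_subset(X, y, k):
    """Forward greedy selection for linear prediction: at each step add
    the column of X that most increases R^2 of the least-squares fit."""
    chosen = []
    for _ in range(k):
        best_j, best_r2 = None, -np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            S = chosen + [j]
            coef, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
            resid = y - X[:, S] @ coef
            r2 = 1.0 - (resid @ resid) / (y @ y)   # explained variance
            if r2 > best_r2:
                best_j, best_r2 = j, r2
        chosen.append(best_j)
    return chosen

# toy data: y depends on columns 0 and 2 plus a little noise
n = 200
X = rng.standard_normal((n, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.standard_normal(n)
picked = greedy_subset(X, y, k=2)
```

On near-independent columns like these the heuristic finds the truly predictive variables; the submodularity ratio introduced above is what controls how far it can degrade when columns are highly correlated.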

  • Submodular meets spectral: greedy algorithms for subset selection, sparse approximation and dictionary selection
    arXiv: Machine Learning, 2011
    Co-Authors: Abhimanyu Das, David Kempe
    Abstract:

    We study the problem of selecting a subset of k random variables from a large set, in order to obtain the best linear prediction of another variable of interest. This problem can be viewed in the context of both feature selection and sparse approximation. We analyze the performance of widely used greedy heuristics, using insights from the maximization of submodular functions and spectral analysis. We introduce the submodularity ratio as a key quantity to help understand why greedy algorithms perform well even when the variables are highly correlated. Using our techniques, we obtain the strongest known approximation guarantees for this problem, both in terms of the submodularity ratio and the smallest k-sparse eigenvalue of the covariance matrix. We further demonstrate the wide applicability of our techniques by analyzing greedy algorithms for the dictionary selection problem, and significantly improve the previously known guarantees. Our theoretical analysis is complemented by experiments on real-world and synthetic data sets; the experiments show that the submodularity ratio is a stronger predictor of the performance of greedy algorithms than other spectral parameters.

Abhimanyu Das - One of the best experts on this subject based on the ideXlab platform.

  • Submodular meets spectral: greedy algorithms for subset selection, sparse approximation and dictionary selection
    International Conference on Machine Learning, 2011
    Co-Authors: Abhimanyu Das, David Kempe
    Abstract:

    We study the problem of selecting a subset of k random variables from a large set, in order to obtain the best linear prediction of another variable of interest. This problem can be viewed in the context of both feature selection and sparse approximation. We analyze the performance of widely used greedy heuristics, using insights from the maximization of submodular functions and spectral analysis. We introduce the submodularity ratio as a key quantity to help understand why greedy algorithms perform well even when the variables are highly correlated. Using our techniques, we obtain the strongest known approximation guarantees for this problem, both in terms of the submodularity ratio and the smallest k-sparse eigenvalue of the covariance matrix. We also analyze greedy algorithms for the dictionary selection problem, and significantly improve the previously known guarantees. Our theoretical analysis is complemented by experiments on real-world and synthetic data sets; the experiments show that the submodularity ratio is a stronger predictor of the performance of greedy algorithms than other spectral parameters.

  • Submodular meets spectral: greedy algorithms for subset selection, sparse approximation and dictionary selection
    arXiv: Machine Learning, 2011
    Co-Authors: Abhimanyu Das, David Kempe
    Abstract:

    We study the problem of selecting a subset of k random variables from a large set, in order to obtain the best linear prediction of another variable of interest. This problem can be viewed in the context of both feature selection and sparse approximation. We analyze the performance of widely used greedy heuristics, using insights from the maximization of submodular functions and spectral analysis. We introduce the submodularity ratio as a key quantity to help understand why greedy algorithms perform well even when the variables are highly correlated. Using our techniques, we obtain the strongest known approximation guarantees for this problem, both in terms of the submodularity ratio and the smallest k-sparse eigenvalue of the covariance matrix. We further demonstrate the wide applicability of our techniques by analyzing greedy algorithms for the dictionary selection problem, and significantly improve the previously known guarantees. Our theoretical analysis is complemented by experiments on real-world and synthetic data sets; the experiments show that the submodularity ratio is a stronger predictor of the performance of greedy algorithms than other spectral parameters.

Le Minh Hai Phong - One of the best experts on this subject based on the ideXlab platform.

  • A heuristic based on randomized greedy algorithms for the clustered shortest path tree problem
    Congress on Evolutionary Computation, 2019
    Co-Authors: Pham Dinh Thanh, Huynh Thi Thanh Binh, Do Dinh Dac, Nguyen Binh Long, Le Minh Hai Phong
    Abstract:

    Randomized greedy algorithms (RGAs) are approaches that incorporate random processes into greedy algorithms in order to solve problems whose structure is not well understood, as well as problems in combinatorial optimization. This paper introduces a new algorithm that combines the major features of RGAs with the Shortest Path Tree Algorithm (SPTA) to deal with the Clustered Shortest-Path Tree Problem (CluSPT). In our algorithm, the SPTA is used to determine the shortest path tree in each cluster, while a combination of the characteristics of RGAs and the search strategy of the SPTA is used to construct the edges connecting clusters. To evaluate the performance of the algorithm, various types of Euclidean benchmarks are selected. The experimental results show the strengths of the proposed algorithm in comparison with some existing algorithms. We also analyze the influence of the parameters on the performance of the algorithm.
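
The randomized-greedy idea itself can be sketched generically with a GRASP-style restricted candidate list: instead of always taking the single cheapest candidate, choose uniformly at random among the near-best ones. This is an illustration of RGAs in general, not the paper's CluSPT algorithm; the function name and parameters are assumptions.

```python
import random

def randomized_greedy(candidates, cost, k, alpha=0.3, seed=0):
    """GRASP-style construction: at each step build the restricted
    candidate list (RCL) of candidates whose cost is within an alpha
    fraction of the best, and pick one of them uniformly at random.
    alpha=0 reduces to plain greedy; alpha=1 is uniform random choice."""
    rng = random.Random(seed)
    remaining = list(candidates)
    solution = []
    for _ in range(k):
        remaining.sort(key=cost)
        c_min, c_max = cost(remaining[0]), cost(remaining[-1])
        threshold = c_min + alpha * (c_max - c_min)
        rcl = [c for c in remaining if cost(c) <= threshold]
        pick = rng.choice(rcl)
        solution.append(pick)
        remaining.remove(pick)
    return solution

# with alpha=0 the RCL holds only the cheapest candidate, so the
# construction reduces to deterministic greedy selection
greedy_pick = randomized_greedy(range(1, 11), lambda c: c, k=3, alpha=0.0)
# with a wider RCL the choice is randomized among near-best candidates
rand_pick = randomized_greedy(range(1, 11), lambda c: c, k=3, alpha=0.5, seed=1)
```

The randomization trades a little per-step quality for diversity across runs, which is why RGAs are typically restarted many times and the best constructed solution kept.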

Pham Dinh Thanh - One of the best experts on this subject based on the ideXlab platform.

  • A heuristic based on randomized greedy algorithms for the clustered shortest path tree problem
    Congress on Evolutionary Computation, 2019
    Co-Authors: Pham Dinh Thanh, Huynh Thi Thanh Binh, Do Dinh Dac, Nguyen Binh Long, Le Minh Hai Phong
    Abstract:

    Randomized greedy algorithms (RGAs) are approaches that incorporate random processes into greedy algorithms in order to solve problems whose structure is not well understood, as well as problems in combinatorial optimization. This paper introduces a new algorithm that combines the major features of RGAs with the Shortest Path Tree Algorithm (SPTA) to deal with the Clustered Shortest-Path Tree Problem (CluSPT). In our algorithm, the SPTA is used to determine the shortest path tree in each cluster, while a combination of the characteristics of RGAs and the search strategy of the SPTA is used to construct the edges connecting clusters. To evaluate the performance of the algorithm, various types of Euclidean benchmarks are selected. The experimental results show the strengths of the proposed algorithm in comparison with some existing algorithms. We also analyze the influence of the parameters on the performance of the algorithm.