The Experts below are selected from a list of 18,216 Experts worldwide ranked by the ideXlab platform.

Mark Tygert - One of the best experts on this subject based on the ideXlab platform.

  • Algorithm 971 an implementation of a Randomized Algorithm for principal component analysis
    ACM Transactions on Mathematical Software, 2017
    Co-Authors: Huamin Li, Arthur Szlam, George C Linderman, Kelly P Stanton, Yuval Kluger, Mark Tygert
    Abstract:

    Recent years have witnessed intense development of Randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for MathWorks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the Randomized Algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces).
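
A minimal sketch of the basic randomized low-rank scheme such implementations build on (this is an illustrative Gaussian-sampling sketch, not the Algorithm 971 MATLAB code itself): sample the range of A with a random test matrix, orthonormalize, and take the SVD of the small projected factor.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Rank-k approximation via a Gaussian range sample (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ G)             # orthonormal basis for an approximate range of A
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]

# quick check on a matrix with a rapidly decaying spectrum
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50)) * (0.5 ** np.arange(50))  # column j scaled by 0.5^j
U, s, Vt = randomized_svd(A, 10)
err = np.linalg.norm(A - (U * s) @ Vt, 2)  # spectral-norm approximation error
```

With fast spectral decay the error lands near the (k+1)st singular value, which is why these methods match classical techniques in accuracy on such inputs.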

  • a fast Randomized Algorithm for orthogonal projection
    SIAM Journal on Scientific Computing, 2011
    Co-Authors: E S Coakley, V. Rokhlin, Mark Tygert
    Abstract:

    We describe an Algorithm that, given any full-rank matrix $A$ having fewer rows than columns, can rapidly compute the orthogonal projection of any vector onto the null space of $A$, as well as the orthogonal projection onto the row space of $A$, provided that both $A$ and its adjoint $A^*$ can be applied rapidly to arbitrary vectors. As an intermediate step, the Algorithm solves the overdetermined linear least-squares regression involving $A^*$ and may therefore be used for this purpose as well. In many circumstances, the technique can accelerate interior-point methods for convex optimization, including linear programming (see, for example, Chapter 11 of [S. J. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia, 1997]). The basis of the Algorithm is an obvious but numerically unstable scheme (typically known as the method of normal equations); suitable use of a preconditioner yields numerical stability. We generate the preconditioner rapidly via a Randomized procedure that succeeds with extremely high probability. We provide numerical examples demonstrating the superior accuracy of the Randomized method over direct use of the normal equations.
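
A dense sketch of the two projections the abstract describes, built on the identity it states: solving the overdetermined least-squares problem min_x ‖A* x − v‖ yields A* x as the projection of v onto the row space of A, and v − A* x as the projection onto the null space. Here `np.linalg.lstsq` stands in for the paper's fast randomized preconditioned solver.

```python
import numpy as np

def projections(A, v):
    # least-squares regression involving A* (the adjoint of A)
    x, *_ = np.linalg.lstsq(A.conj().T, v, rcond=None)
    row = A.conj().T @ x        # component of v in the row space of A
    return row, v - row         # (row-space projection, null-space projection)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 100))   # full rank, fewer rows than columns
v = rng.standard_normal(100)
row, null = projections(A, v)
```

The null-space component is annihilated by A, and the two components are orthogonal and sum back to v.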

  • a Randomized Algorithm for the decomposition of matrices
    Applied and Computational Harmonic Analysis, 2011
    Co-Authors: Pergunnar Martinsson, V. Rokhlin, Mark Tygert
    Abstract:

    Given an m × n matrix A and a positive integer k, we describe a Randomized procedure for the approximation of A with a matrix Z of rank k. The procedure relies on applying Aᵀ to a collection of l random vectors, where l is an integer equal to or slightly greater than k; the scheme is efficient whenever A and Aᵀ can be applied rapidly to arbitrary vectors. The discrepancy between A and Z is of the same order as √(lm) times the (k + 1)st greatest singular value σ_(k+1) of A, with negligible probability of even moderately large deviations. The actual estimates derived in the paper are fairly complicated, but are simpler when l − k is a fixed small nonnegative integer. For example, according to one of our estimates for l − k = 20, the probability that the spectral norm ‖A − Z‖ is greater than 10 √((k + 20) m) σ_(k+1) is less than 10⁻¹⁷. The paper contains a number of estimates for ‖A − Z‖, including several that are stronger (but more detailed) than the preceding example; some of the estimates are effectively independent of m. Thus, given a matrix A of limited numerical rank, such that both A and Aᵀ can be applied rapidly to arbitrary vectors, the scheme provides a simple, efficient means for constructing an accurate approximation to a singular value decomposition of A. Furthermore, the Algorithm presented here operates reliably independently of the structure of the matrix A. The results are illustrated via several numerical examples.
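
A numerical sketch of the scheme and its error estimate: apply Aᵀ to l = k + 20 random vectors, orthonormalize, assemble a rank-k Z from the small factor, and check ‖A − Z‖ against the abstract's high-probability bound 10 √((k + 20) m) σ_(k+1). Gaussian test vectors are used here as a stand-in for whatever random vectors an implementation chooses.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 300, 80, 10
l = k + 20
# build A with known, decaying singular values
U0, _ = np.linalg.qr(rng.standard_normal((m, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 0.7 ** np.arange(n)
A = U0 @ (sigma[:, None] * V0.T)

Y = A.T @ rng.standard_normal((m, l))        # A^T applied to l random vectors
Q, _ = np.linalg.qr(Y)                       # n x l orthonormal basis of the sketched row space
Ub, sb, Vbt = np.linalg.svd(A @ Q, full_matrices=False)
Z = (Ub[:, :k] * sb[:k]) @ Vbt[:k] @ Q.T     # rank-k approximation of A

err = np.linalg.norm(A - Z, 2)
bound = 10 * np.sqrt((k + 20) * m) * sigma[k]  # the abstract's estimate with l - k = 20
```

In practice the observed error sits near σ_(k+1) itself, far inside the stated bound.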

  • a fast Randomized Algorithm for overdetermined linear least squares regression
    Proceedings of the National Academy of Sciences of the United States of America, 2008
    Co-Authors: V. Rokhlin, Mark Tygert
    Abstract:

    We introduce a Randomized Algorithm for overdetermined linear least-squares regression. Given an arbitrary full-rank m × n matrix A with m ≥ n, any m × 1 vector b, and any positive real number ε, the procedure computes an n × 1 vector x such that x minimizes the Euclidean norm ‖Ax − b‖ to relative precision ε. The Algorithm typically requires …
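
A sketch of the sketch-and-precondition idea behind fast randomized least squares: a small random sketch SA is cheap to factor, and its R factor preconditions A so well that the regression becomes numerically easy. A dense Gaussian S is used below for clarity; the paper's structured fast transform and its iterative solver are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2000, 50
A = rng.standard_normal((m, n)) * np.logspace(0, -6, n)   # very ill-conditioned columns
b = rng.standard_normal(m)

S = rng.standard_normal((4 * n, m)) / np.sqrt(4 * n)      # random sketch, 4n x m
_, R = np.linalg.qr(S @ A)                                # R factor of the sketched matrix
AR = np.linalg.solve(R.T, A.T).T                          # A @ inv(R), via a solve with R^T
y, *_ = np.linalg.lstsq(AR, b, rcond=None)                # well-conditioned regression
x = np.linalg.solve(R, y)                                 # undo the preconditioning

resid = np.linalg.norm(A @ x - b)
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
resid_ref = np.linalg.norm(A @ x_ref - b)                 # direct dense reference
```

The preconditioned matrix AR has modest condition number even though A's is around 10⁶, which is what lets an iterative solver converge in few steps in the real algorithm.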

  • A Randomized Algorithm for principal component analysis
    SIAM Journal on Matrix Analysis and Applications, 2008
    Co-Authors: V. Rokhlin, Arthur Szlam, Mark Tygert
    Abstract:

    Principal component analysis (PCA) requires the computation of a low-rank approximation to a matrix containing the data being analyzed. In many applications of PCA, the best possible accuracy of any rank-deficient approximation is at most a few digits (measured in the spectral norm, relative to the spectral norm of the matrix being approximated). In such circumstances, efficient Algorithms have not come with guarantees of good accuracy, unless one or both dimensions of the matrix being approximated are small. We describe an efficient Algorithm for the low-rank approximation of matrices that produces accuracy very close to the best possible, for matrices of arbitrary sizes. We illustrate our theoretical results via several numerical examples.
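
A sketch of why accuracy "very close to the best possible" needs more than a plain random projection when singular values decay slowly: a few power iterations (multiplying alternately by A and Aᵀ, with re-orthonormalization) sharpen the captured subspace. This is a simplified illustration of the idea, not the paper's algorithm verbatim.

```python
import numpy as np

def range_finder(A, l, q, seed=0):
    """Randomized range finder with q power iterations (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(A @ rng.standard_normal((A.shape[1], l)))
    for _ in range(q):
        Q, _ = np.linalg.qr(A.T @ Q)   # one half of a power iteration
        Q, _ = np.linalg.qr(A @ Q)     # the other half, with re-orthonormalization
    return Q

rng = np.random.default_rng(0)
m, n, l = 150, 60, 15
U0, _ = np.linalg.qr(rng.standard_normal((m, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
s_true = 1.0 / np.arange(1, n + 1)     # slowly decaying spectrum (hard case)
A = U0 @ (s_true[:, None] * V0.T)

def err(q):
    Q = range_finder(A, l, q)
    return np.linalg.norm(A - Q @ (Q.T @ A), 2)

e0, e2 = err(0), err(2)                # q = 0 vs q = 2 power iterations
```

With this spectrum, q = 2 brings the error close to the optimal value σ_(l+1), while q = 0 is noticeably contaminated by the tail.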

Ingrid Daubechies - One of the best experts on this subject based on the ideXlab platform.

  • theoretical and experimental analysis of a Randomized Algorithm for sparse fourier transform analysis
    Journal of Computational Physics, 2006
    Co-Authors: Anna C Gilbert, M Strauss, Ingrid Daubechies
    Abstract:

    We analyze a sublinear RAℓSFA (Randomized Algorithm for Sparse Fourier Analysis) that finds a near-optimal B-term Sparse representation R for a given discrete signal S of length N, in time and space poly(B, log(N)), following the approach given in [A.C. Gilbert, S. Guha, P. Indyk, S. Muthukrishnan, M. Strauss, Near-Optimal Sparse Fourier Representations via Sampling, STOC, 2002]. Its time cost poly(log(N)) should be compared with the superlinear Ω(N log N) time requirement of the Fast Fourier Transform (FFT). A straightforward implementation of the RAℓSFA, as presented in the theoretical paper [A.C. Gilbert, S. Guha, P. Indyk, S. Muthukrishnan, M. Strauss, Near-Optimal Sparse Fourier Representations via Sampling, STOC, 2002], turns out to be very slow in practice. Our main result is a greatly improved and practical RAℓSFA. We introduce several new ideas and techniques that speed up the Algorithm. Both rigorous and heuristic arguments for parameter choices are presented. Our RAℓSFA constructs, with probability at least 1 − δ, a near-optimal B-term representation R in time poly(B) log(N) log(1/δ) log(M)/ε² such that ‖S − R‖₂² ≤ (1 + ε)‖S − R_opt‖₂². Furthermore, this RAℓSFA implementation already beats the FFTW for not unreasonably large N. We extend the Algorithm to higher dimensional cases both theoretically and numerically. The crossover point lies at N ≈ 70,000 in one dimension, and at N ≈ 900 for data on an N × N grid in two dimensions for small-B signals where there is noise.
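
The benchmark RAℓSFA is measured against, computed directly: for a signal S, the optimal B-term Fourier representation R_opt keeps the B largest FFT coefficients, and the guarantee targets ‖S − R‖₂² ≤ (1 + ε)‖S − R_opt‖₂². The snippet below illustrates that benchmark with the superlinear FFT baseline (the test signal is invented); it is not the sublinear algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
t = np.arange(N)
# 4 real cosines (8 complex FFT bins) plus a little noise
S = sum((j + 1) * np.cos(2 * np.pi * f * t / N) for j, f in enumerate([3, 100, 250, 400]))
S = S + 0.01 * rng.standard_normal(N)

F = np.fft.fft(S)
idx = np.argsort(np.abs(F))[::-1][:8]   # keep the 8 largest complex coefficients
R_hat = np.zeros(N, dtype=complex)
R_hat[idx] = F[idx]
R = np.fft.ifft(R_hat).real             # the optimal 8-term representation R_opt
err_sq = np.linalg.norm(S - R) ** 2     # essentially just the noise energy
```

Computing this baseline costs Ω(N log N); the point of RAℓSFA is to get within a (1 + ε) factor of this error in time polylogarithmic in N.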

  • theoretical and experimental analysis of a Randomized Algorithm for sparse fourier transform analysis
    arXiv: Numerical Analysis, 2004
    Co-Authors: Anna C Gilbert, M Strauss, Ingrid Daubechies
    Abstract:

    We analyze a sublinear RAℓSFA (Randomized Algorithm for Sparse Fourier Analysis) that finds a near-optimal B-term Sparse Representation R for a given discrete signal S of length N, in time and space poly(B, log(N)), following the approach given in \cite{GGIMS}. Its time cost poly(log(N)) should be compared with the superlinear O(N log N) time requirement of the Fast Fourier Transform (FFT). A straightforward implementation of the RAℓSFA, as presented in the theoretical paper \cite{GGIMS}, turns out to be very slow in practice. Our main result is a greatly improved and practical RAℓSFA. We introduce several new ideas and techniques that speed up the Algorithm. Both rigorous and heuristic arguments for parameter choices are presented. Our RAℓSFA constructs, with probability at least 1 − δ, a near-optimal B-term representation R in time poly(B) log(N) log(1/δ) log(M)/ε² such that ‖S − R‖² ≤ (1 + ε)‖S − R_opt‖². Furthermore, this RAℓSFA implementation already beats the FFTW for not unreasonably large N. We extend the Algorithm to higher dimensional cases both theoretically and numerically. The crossover point lies at N = 70,000 in one dimension, and at N = 900 for data on an N × N grid in two dimensions for small-B signals where there is noise.

Roberto Tempo - One of the best experts on this subject based on the ideXlab platform.

  • a distributed Randomized Algorithm for relative localization in sensor networks
    European Control Conference, 2013
    Co-Authors: Chiara Ravazzi, Hideaki Ishii, Paolo Frasca, Roberto Tempo
    Abstract:

    This paper regards the relative localization problem in sensor networks. We propose for its solution a distributed Randomized Algorithm, which is based on input-driven consensus dynamics and features pairwise “gossip” communications and updates. Due to the randomness of the updates, the state of this Algorithm oscillates in time around a certain limit value. We show that the time-average of the state asymptotically converges, in the mean-square sense, to the least-squares solution of the localization problem. Furthermore, we describe an update scheme ensuring that the time-averaging process is accomplished in a fully distributed way.
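
A minimal sketch in the spirit of the abstract: noisy relative measurements b_uv ≈ x_u − x_v on the edges of a graph, pairwise "gossip" updates on one random edge at a time, and a running time-average of the oscillating state compared against the centralized least-squares solution. The toy uses a simple Kaczmarz-style pairwise update, not the paper's exact input-driven consensus dynamics, and the graph and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
x_true = rng.uniform(0.0, 10.0, n)
# ring plus chords as the measurement graph
edges = [(i, (i + 1) % n) for i in range(n)] + [(i, (i + 3) % n) for i in range(n)]
b = np.array([x_true[u] - x_true[v] + 0.1 * rng.standard_normal() for u, v in edges])

x = np.zeros(n)      # oscillating state
avg = np.zeros(n)    # its running time-average
for t in range(1, 20001):
    e = rng.integers(len(edges))
    u, v = edges[e]
    r = b[e] - (x[u] - x[v])   # disagreement with this edge's measurement
    x[u] += r / 2.0            # pairwise gossip update on the two endpoints
    x[v] -= r / 2.0
    avg += (x - avg) / t       # time-averaging smooths out the oscillation

# centralized least-squares reference (positions identifiable up to translation)
A = np.zeros((len(edges), n))
for i, (u, v) in enumerate(edges):
    A[i, u], A[i, v] = 1.0, -1.0
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
gap = np.linalg.norm((avg - avg.mean()) - (x_ls - x_ls.mean()))
```

The raw state x keeps oscillating around the answer; it is the time-average that settles near the least-squares fit, mirroring the abstract's convergence claim.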

  • Randomized Algorithms for quadratic stability of quantized sampled-data systems
    Automatica, 2004
    Co-Authors: Hideaki Ishii, Tamer Basar, Roberto Tempo
    Abstract:

    In this paper, we present a novel development of Randomized Algorithms for quadratic stability analysis of sampled-data systems with memoryless quantizers. The specific Randomized Algorithm employed generates a quadratic Lyapunov function and leads to realistic bounds on the performance of such systems. A key feature of this method is that the characteristics of quantizers are exploited to make the Algorithm computationally efficient.
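
A toy with the same randomized flavor: given a candidate quadratic Lyapunov function V(x) = xᵀPx for a linear system whose feedback passes through a memoryless uniform quantizer, sample random states and estimate how often V fails to decrease. The system, quantizer step, and P below are invented stand-ins, not the paper's construction or its specific algorithm.

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
B = np.array([0.1, 0.1])
K = np.array([-0.5, -0.5])

def quantize(u, step=0.05):
    return step * np.round(u / step)   # memoryless uniform quantizer

# candidate P: discrete Lyapunov equation of the UNQUANTIZED closed loop,
# solved by truncating the series P = sum_i (Acl^T)^i Q (Acl)^i with Q = I
Acl = A + np.outer(B, K)
P = np.eye(2)
for _ in range(500):
    P = np.eye(2) + Acl.T @ P @ Acl

rng = np.random.default_rng(0)
trials, violations = 10000, 0
for _ in range(trials):
    x = rng.standard_normal(2)
    x /= np.linalg.norm(x)              # unit sphere: quantization matters near the origin
    x_next = A @ x + B * quantize(K @ x)
    if x_next @ P @ x_next >= x @ P @ x:
        violations += 1
rate = violations / trials              # empirical probability of a Lyapunov violation
```

At this scale the quantization error is small relative to the guaranteed decrease, so the empirical violation rate is essentially zero.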

  • Randomized Algorithms for quadratic stability of quantized sampled-data systems
    Proceedings of the 2003 American Control Conference 2003., 2003
    Co-Authors: Hideaki Ishii, Tamer Basar, Roberto Tempo
    Abstract:

    We present a novel development of Randomized Algorithms for quadratic stability analysis of sampled-data systems with memoryless quantizers. The specific Randomized Algorithm employed generates a quadratic Lyapunov function and leads to realistic bounds on the performance of such systems. A key feature of this method is that the characteristics of quantizers are exploited to make the Algorithm computationally efficient.

Joseph Naor - One of the best experts on this subject based on the ideXlab platform.

  • a primal dual Randomized Algorithm for weighted paging
    Journal of the ACM, 2012
    Co-Authors: Nikhil Bansal, Niv Buchbinder, Joseph Naor
    Abstract:

    We study the weighted version of the classic online paging problem where there is a weight (cost) for fetching each page into the cache. We design a Randomized O(log k)-competitive online Algorithm for this problem, where k is the cache size. This is the first Randomized o(k)-competitive Algorithm and its competitive ratio matches the known lower bound for the problem, up to constant factors. More generally, we design an O(log(k/(k − h + 1)))-competitive online Algorithm for the version of the problem where the online Algorithm has cache size k and it is compared to an optimal offline solution with cache size h ≤ k. Our solution is based on a two-step approach. We first obtain an O(log k)-competitive fractional Algorithm based on an online primal-dual approach. Next, we obtain a Randomized Algorithm by rounding in an online manner the fractional solution to a probability distribution on the possible cache states. We also give an online primal-dual Randomized O(log N)-competitive Algorithm for the Metrical Task System problem (MTS) on a weighted star metric on N leaves.
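
For intuition about O(log k)-competitive randomized paging, here is the classic randomized marking algorithm (Fiat et al.) for the unweighted special case: on a fault with a full cache, evict a uniformly random unmarked page, starting a new phase when every cached page is marked. The paper's primal-dual algorithm extends this kind of guarantee to weighted pages; marking itself is an earlier, different algorithm and does not handle weights.

```python
import random

def randomized_marking(requests, k, seed=0):
    """Classic randomized marking for unweighted paging; returns the fault count."""
    rng = random.Random(seed)
    cache, marked = set(), set()
    faults = 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                unmarked = cache - marked
                if not unmarked:          # all cached pages marked: a new phase begins
                    marked.clear()
                    unmarked = set(cache)
                cache.discard(rng.choice(sorted(unmarked)))  # evict a random unmarked page
            cache.add(p)
        marked.add(p)                     # the requested page is always marked
    return faults

hits = randomized_marking([0, 1, 2] * 50, k=3)          # working set fits: k cold faults only
cycle = randomized_marking(list(range(4)) * 100, k=3, seed=1)  # adversarial (k+1)-page cycle
```

On the (k+1)-page cycle, deterministic LRU faults on every request, while marking's randomization keeps the expected faults per phase down to O(log k).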

  • a primal dual Randomized Algorithm for weighted paging
    Foundations of Computer Science, 2007
    Co-Authors: Nikhil Bansal, Niv Buchbinder, Joseph Naor
    Abstract:

    In the weighted paging problem there is a weight (cost) for fetching each page into the cache. We design a Randomized O(log k)-competitive online Algorithm for the weighted paging problem, where k is the cache size. This is the first Randomized o(k)-competitive Algorithm and its competitiveness matches the known lower bound on the problem. More generally, we design an O(log(k/(k − h + 1)))-competitive online Algorithm for the version of the problem where the online Algorithm has cache size k and is compared to an optimal offline Algorithm with cache size h ≤ k. Weighted paging is a special case (weighted star metric) of the well-known k-server problem, for which it is a major open question whether randomization can be useful in obtaining sub-linear competitive Algorithms. Therefore, abstracting and extending the insights from paging is a key step in the resolution of the k-server problem. Our solution for the weighted paging problem is based on a two-step approach. In the first step we obtain an O(log k)-competitive fractional Algorithm which is based on a novel online primal-dual approach. In the second step we obtain a Randomized Algorithm by rounding online the fractional solution to an actual distribution on integral cache solutions. We conclude with a Randomized O(log N)-competitive Algorithm for the well-studied Metrical Task System problem (MTS) on a metric defined by a weighted star on N leaves, improving upon a previous O(log² N)-competitive Algorithm of Blum et al. [9].

V. Rokhlin - One of the best experts on this subject based on the ideXlab platform.

  • a fast Randomized Algorithm for orthogonal projection
    SIAM Journal on Scientific Computing, 2011
    Co-Authors: E S Coakley, V. Rokhlin, Mark Tygert
    Abstract:

    We describe an Algorithm that, given any full-rank matrix $A$ having fewer rows than columns, can rapidly compute the orthogonal projection of any vector onto the null space of $A$, as well as the orthogonal projection onto the row space of $A$, provided that both $A$ and its adjoint $A^*$ can be applied rapidly to arbitrary vectors. As an intermediate step, the Algorithm solves the overdetermined linear least-squares regression involving $A^*$ and may therefore be used for this purpose as well. In many circumstances, the technique can accelerate interior-point methods for convex optimization, including linear programming (see, for example, Chapter 11 of [S. J. Wright, Primal-Dual Interior-Point Methods, SIAM, Philadelphia, 1997]). The basis of the Algorithm is an obvious but numerically unstable scheme (typically known as the method of normal equations); suitable use of a preconditioner yields numerical stability. We generate the preconditioner rapidly via a Randomized procedure that succeeds with extremely high probability. We provide numerical examples demonstrating the superior accuracy of the Randomized method over direct use of the normal equations.

  • a Randomized Algorithm for the decomposition of matrices
    Applied and Computational Harmonic Analysis, 2011
    Co-Authors: Pergunnar Martinsson, V. Rokhlin, Mark Tygert
    Abstract:

    Given an m × n matrix A and a positive integer k, we describe a Randomized procedure for the approximation of A with a matrix Z of rank k. The procedure relies on applying Aᵀ to a collection of l random vectors, where l is an integer equal to or slightly greater than k; the scheme is efficient whenever A and Aᵀ can be applied rapidly to arbitrary vectors. The discrepancy between A and Z is of the same order as √(lm) times the (k + 1)st greatest singular value σ_(k+1) of A, with negligible probability of even moderately large deviations. The actual estimates derived in the paper are fairly complicated, but are simpler when l − k is a fixed small nonnegative integer. For example, according to one of our estimates for l − k = 20, the probability that the spectral norm ‖A − Z‖ is greater than 10 √((k + 20) m) σ_(k+1) is less than 10⁻¹⁷. The paper contains a number of estimates for ‖A − Z‖, including several that are stronger (but more detailed) than the preceding example; some of the estimates are effectively independent of m. Thus, given a matrix A of limited numerical rank, such that both A and Aᵀ can be applied rapidly to arbitrary vectors, the scheme provides a simple, efficient means for constructing an accurate approximation to a singular value decomposition of A. Furthermore, the Algorithm presented here operates reliably independently of the structure of the matrix A. The results are illustrated via several numerical examples.

  • a fast Randomized Algorithm for overdetermined linear least squares regression
    Proceedings of the National Academy of Sciences of the United States of America, 2008
    Co-Authors: V. Rokhlin, Mark Tygert
    Abstract:

    We introduce a Randomized Algorithm for overdetermined linear least-squares regression. Given an arbitrary full-rank m × n matrix A with m ≥ n, any m × 1 vector b, and any positive real number ε, the procedure computes an n × 1 vector x such that x minimizes the Euclidean norm ‖Ax − b‖ to relative precision ε. The Algorithm typically requires …

  • A Randomized Algorithm for principal component analysis
    SIAM Journal on Matrix Analysis and Applications, 2008
    Co-Authors: V. Rokhlin, Arthur Szlam, Mark Tygert
    Abstract:

    Principal component analysis (PCA) requires the computation of a low-rank approximation to a matrix containing the data being analyzed. In many applications of PCA, the best possible accuracy of any rank-deficient approximation is at most a few digits (measured in the spectral norm, relative to the spectral norm of the matrix being approximated). In such circumstances, efficient Algorithms have not come with guarantees of good accuracy, unless one or both dimensions of the matrix being approximated are small. We describe an efficient Algorithm for the low-rank approximation of matrices that produces accuracy very close to the best possible, for matrices of arbitrary sizes. We illustrate our theoretical results via several numerical examples.

  • a Randomized Algorithm for the approximation of matrices
    2006
    Co-Authors: Franco Woolfe, V. Rokhlin, Edo Liberty, Mark Tygert
    Abstract:

    We introduce a Randomized procedure that, given an m × n matrix A and a positive integer k, approximates A with a matrix Z of rank k. The Algorithm relies on applying a structured l × m random matrix R to each column of A, where l is an integer near to, but greater than, k. The structure of R allows us to apply it to an arbitrary m × 1 vector at a cost proportional to m log(l); the resulting procedure can construct a rank-k approximation Z from the entries of A at a cost proportional to mn log(k) + l²(m + n). We prove …
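
A sketch of the structured-random-matrix idea: instead of a dense Gaussian R, compose random sign flips, an FFT, and random row subsampling (an SRFT-style transform; the paper's exact structured transform may differ in details). Applying this R to a length-m vector then costs O(m log m) rather than O(lm). On an exactly rank-k matrix, the l × n sketch RA already captures the row space, so the approximation assembled from it is essentially exact.

```python
import numpy as np

def srft(X, l, seed=0):
    """Apply an SRFT-style structured random l x m matrix to each column of X."""
    m = X.shape[0]
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=m)          # random diagonal sign flips
    rows = rng.choice(m, size=l, replace=False)      # random row subsampling
    return np.fft.fft(signs[:, None] * X, axis=0)[rows] / np.sqrt(l)

rng = np.random.default_rng(1)
m, n, k, l = 256, 60, 8, 24
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))   # exactly rank k
Y = srft(A, l)                        # the l x n sketch R A, via one FFT per column
Q, _ = np.linalg.qr(Y.conj().T)       # orthonormal basis of the sketched row space
Z = (A @ Q) @ Q.conj().T              # approximation assembled from the sketch
rel_err = np.linalg.norm(A - Z) / np.linalg.norm(A)
```

The FFT is what buys the m log(l)-type application cost the abstract quotes, at the price of a slightly larger l than a Gaussian sketch would need.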