The experts below are selected from a list of 145,080 experts worldwide ranked by the ideXlab platform.
Sundeep Rangan - One of the best experts on this subject based on the ideXlab platform.
-
Orthogonal Matching Pursuit: A Brownian Motion Analysis
IEEE Transactions on Signal Processing, 2012. Co-Authors: Alyson K. Fletcher, Sundeep Rangan.
Abstract: A well-known analysis of Tropp and Gilbert shows that orthogonal matching pursuit (OMP) can recover a k-sparse n-dimensional real vector from m = 4k log(n) noise-free linear measurements obtained through a random Gaussian measurement matrix, with a probability that approaches one as n → ∞. This work strengthens that result by showing that a lower number of measurements, m = 2k log(n − k), is in fact sufficient for asymptotic recovery. More generally, when the sparsity level satisfies k_min ≤ k ≤ k_max but is unknown, m = 2k_max log(n − k_min) measurements suffice. Furthermore, this number of measurements also suffices for detection of the sparsity pattern (support) of the vector in the presence of measurement errors, provided the signal-to-noise ratio (SNR) scales to infinity. The scaling m = 2k log(n − k) exactly matches the number of measurements required by the more complex lasso method for signal recovery under a similar SNR scaling.
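The OMP procedure analyzed in this abstract can be sketched in a few lines of NumPy. This is an illustrative implementation under my own assumptions (the function name `omp` and the demo problem sizes are mine, not the authors'), not the code used in the paper:

```python
import numpy as np

def omp(A, y, k):
    """Greedy orthogonal matching pursuit: select k columns of A,
    one at a time, whose least-squares fit best explains y."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(n)
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        correlations = np.abs(A.T @ residual)
        correlations[support] = -np.inf  # never reselect a column
        support.append(int(np.argmax(correlations)))
        # Least-squares fit on the selected support; update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat, sorted(support)

# Demo in the regime of the abstract: m = ceil(2k log(n - k)) noise-free
# Gaussian measurements of a k-sparse vector.
rng = np.random.default_rng(0)
n, k = 256, 5
m = int(np.ceil(2 * k * np.log(n - k)))
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 1.0
x_hat, support = omp(A, A @ x, k)
```

Note that the m = 2k log(n − k) guarantee is asymptotic (probability of success tends to one as n → ∞), so any single finite draw in the demo can still fail to recover the exact support.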
-
Orthogonal Matching Pursuit from Noisy Random Measurements: A New Analysis
Neural Information Processing Systems, 2009. Co-Authors: Sundeep Rangan, Alyson K. Fletcher.
Abstract: A well-known analysis of Tropp and Gilbert shows that orthogonal matching pursuit (OMP) can recover a k-sparse n-dimensional real vector from m = 4k log(n) noise-free linear measurements obtained through a random Gaussian measurement matrix, with a probability that approaches one as n → ∞. This work strengthens that result by showing that a lower number of measurements, m = 2k log(n − k), is in fact sufficient for asymptotic recovery. More generally, when the sparsity level satisfies k_min ≤ k ≤ k_max but is unknown, m = 2k_max log(n − k_min) measurements suffice. Furthermore, this number of measurements also suffices for detection of the sparsity pattern (support) of the vector in the presence of measurement errors, provided the signal-to-noise ratio (SNR) scales to infinity. The scaling m = 2k log(n − k) exactly matches the number of measurements required by the more complex lasso method for signal recovery under a similar SNR scaling.
Alyson K Fletcher - One of the best experts on this subject based on the ideXlab platform.
-
Orthogonal Matching Pursuit: A Brownian Motion Analysis
IEEE Transactions on Signal Processing, 2012. Co-Authors: Alyson K. Fletcher, Sundeep Rangan. (Abstract identical to the entry under Sundeep Rangan above.)
-
Orthogonal Matching Pursuit from Noisy Random Measurements: A New Analysis
Neural Information Processing Systems, 2009. Co-Authors: Sundeep Rangan, Alyson K. Fletcher. (Abstract identical to the entry under Sundeep Rangan above.)
Boris V. Strokopytov - One of the best experts on this subject based on the ideXlab platform.
-
Efficient calculation of a normal matrix–vector product for anisotropic full-matrix least-squares refinement of macromolecular structures
Journal of Applied Crystallography, 2009. Co-Authors: Boris V. Strokopytov.
Abstract: A novel algorithm is described for multiplying a normal equation matrix by an arbitrary real vector using the fast Fourier transform technique during anisotropic crystallographic refinement. The matrix–vector algorithm allows one to solve normal matrix equations using the conjugate-gradients or conjugate-directions technique without explicit calculation of a normal matrix. The anisotropic version of the algorithm has been implemented in a new version of the computer program FMLSQ. The updated program has been tested on several protein structures at high resolution. In addition, rapid methods for preconditioner and normal matrix–vector product calculations are described.
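The key idea in this abstract is that conjugate gradients never needs the normal matrix N = AᵀA explicitly, only the map v ↦ Nv. Below is a minimal matrix-free sketch of that pattern; FMLSQ's FFT-based crystallographic matvec is replaced by a dense stand-in (`lambda v: A.T @ (A @ v)`), and all names are illustrative, not from the paper:

```python
import numpy as np

def conjugate_gradient(apply_normal, rhs, tol=1e-10, max_iter=None):
    """Solve N x = rhs for symmetric positive-definite N, given only
    a function computing v -> N v; N itself is never formed."""
    x = np.zeros_like(rhs)
    r = rhs - apply_normal(x)   # initial residual
    p = r.copy()                # initial search direction
    rs = r @ r
    for _ in range(max_iter or 10 * rhs.size):
        Np = apply_normal(p)
        alpha = rs / (p @ Np)   # exact line search along p
        x = x + alpha * p
        r = r - alpha * Np
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # N-conjugate direction update
        rs = rs_new
    return x

# Least-squares demo: solve A^T A x = A^T b without forming A^T A.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 8))
b = rng.standard_normal(50)
x = conjugate_gradient(lambda v: A.T @ (A @ v), A.T @ b)
```

The same driver works unchanged if `apply_normal` is replaced by an FFT-based product, which is what makes full-matrix refinement tractable for large structures.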
-
How to multiply a matrix of normal equations by an arbitrary vector using FFT
Acta Crystallographica Section A: Foundations of Crystallography, 2008. Co-Authors: Boris V. Strokopytov.
Abstract: This paper describes a novel algorithm for multiplying a matrix of normal equations by an arbitrary real vector using the fast Fourier transform technique. The algorithm allows full-matrix least-squares refinement of macromolecular structures without explicit calculation of the normal matrix. The resulting equations have been implemented in a new computer program, FMLSQ. A preliminary version of the program has been tested on several protein structures. The consequences for crystallographic refinement of macromolecules are discussed in detail.
Vladimir Roubtsov - One of the best experts on this subject based on the ideXlab platform.
-
A geometric interpretation of coherent structures in Navier–Stokes flows
Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2009. Co-Authors: Ian Roulstone, Bertrand Banos, J. D. Gibbon, Vladimir Roubtsov.
Abstract: The pressure in the incompressible three-dimensional Navier–Stokes and Euler equations is governed by Poisson's equation; this equation is studied using the geometry of three-forms in six dimensions. By studying the linear algebra of the vector space of three-forms Λ³W, where W is a six-dimensional real vector space, we relate the characterization of non-degenerate elements of Λ³W to the sign of the Laplacian of the pressure, and hence to the balance between the vorticity and the rate of strain. When the Laplacian of the pressure, Δp, satisfies Δp > 0, the three-form associated with Poisson's equation is the real part of a decomposable complex form and an almost-complex structure can be identified. When Δp < 0, a real decomposable structure is identified. These results are discussed in the context of coherent structures in turbulence.
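The "balance between the vorticity and the rate of strain" refers to the standard identity obtained by taking the divergence of the incompressible momentum equation (unit density); it is stated here for context and is not quoted from the paper:

```latex
\Delta p \;=\; \tfrac{1}{2}\,\lvert\boldsymbol{\omega}\rvert^{2} \;-\; \operatorname{Tr}\!\left(S^{2}\right),
\qquad
S_{ij} \;=\; \tfrac{1}{2}\left(\partial_i u_j + \partial_j u_i\right),
\qquad
\boldsymbol{\omega} \;=\; \nabla \times \mathbf{u}.
```

Thus Δp > 0 where rotation dominates strain (vortical regions) and Δp < 0 where strain dominates, which is the sign dichotomy the three-form classification tracks.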
Zachary Scherr - One of the best experts on this subject based on the ideXlab platform.
-
On the number of rich lines in high-dimensional real vector spaces
Discrete and Computational Geometry, 2016. Co-Authors: Marton Hablicsek, Zachary Scherr.
Abstract: In this short note we use the polynomial Ham Sandwich Theorem to strengthen a recent result of Dvir and Gopi about the number of rich lines in high-dimensional Euclidean spaces. Our result shows that if there are sufficiently many rich lines incident to a set of points, then a large fraction of them must be contained in a hyperplane.
-
On the number of rich lines in high-dimensional real vector spaces
arXiv: Combinatorics, 2014. Co-Authors: Marton Hablicsek, Zachary Scherr.
Abstract: In this short note we use the polynomial partitioning lemma to strengthen a recent result of Dvir and Gopi about the number of rich lines in high-dimensional Euclidean spaces. Our result shows that if there are sufficiently many rich lines incident to a set of points, then a large fraction of them must be contained in a hyperplane.