The Experts below are selected from a list of 15,858 Experts worldwide ranked by the ideXlab platform.
Monique Barel - One of the best experts on this subject based on the ideXlab platform.
-
A Schur-based algorithm for computing bounds to the smallest eigenvalue of a Symmetric Positive Definite Toeplitz matrix
Linear Algebra and its Applications, 2008. Co-Authors: Nicola Mastronardi, Monique Barel, Raf Vandebril. Abstract: Recent progress in signal processing and estimation has generated considerable interest in the problem of computing the smallest eigenvalue of Symmetric Positive Definite Toeplitz matrices. Several algorithms have been proposed in the literature. They compute the smallest eigenvalue in an iterative fashion, many of them relying on the Levinson–Durbin solution of sequences of Yule–Walker systems. Exploiting the properties of two algorithms recently developed for estimating a lower and an upper bound of the smallest singular value of upper triangular matrices, respectively, an algorithm for computing bounds to the smallest eigenvalue of a Symmetric Positive Definite Toeplitz matrix is derived. The algorithm relies on the computation of the R factor of the QR-factorization of the Toeplitz matrix and the inverse of R. The simultaneous computation of R and R^{-1} is efficiently accomplished by the generalized Schur algorithm.
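The key observation behind the bounds can be checked numerically: for a symmetric positive definite T, the eigenvalues coincide with the singular values, and since T = QR with Q orthogonal, the singular values of T and of R are identical, so any bound on the smallest singular value of the triangular factor R bounds the smallest eigenvalue of T. A minimal numpy sketch (the 4×4 Toeplitz matrix below is a made-up, diagonally dominant example, not one from the paper):

```python
import numpy as np

# Hypothetical small SPD Toeplitz matrix: first column chosen so the
# matrix is diagonally dominant, hence positive definite.
c = np.array([4.0, 1.0, 0.5, 0.25])
n = len(c)
T = np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])

# R factor of the QR factorization of T (numpy returns only R for mode='r').
R = np.linalg.qr(T, mode='r')

# sigma_min(R) = sigma_min(T) = lambda_min(T) for SPD T, so bounding the
# smallest singular value of R bounds the smallest eigenvalue of T.
sigma_min_R = np.linalg.svd(R, compute_uv=False)[-1]
lambda_min_T = np.linalg.eigvalsh(T)[0]    # eigvalsh sorts ascending
assert abs(sigma_min_R - lambda_min_T) < 1e-8
```

The paper's contribution is to obtain R and R^{-1} cheaply via the generalized Schur algorithm, exploiting the Toeplitz structure; the dense QR above is only for illustrating the identity.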
-
Computing a Lower Bound of the Smallest Eigenvalue of a Symmetric Positive-Definite Toeplitz Matrix
IEEE Transactions on Information Theory, 2008. Co-Authors: Teresa Laudadio, Nicola Mastronardi, Monique Barel. Abstract: In this correspondence, several algorithms to compute a lower bound of the smallest eigenvalue of a Symmetric Positive-Definite Toeplitz matrix are described and compared in terms of accuracy and computational efficiency. Exploiting the Toeplitz structure of the considered matrix, new theoretical insights are derived and an efficient implementation of some of the aforementioned algorithms is provided.
-
A Schur-based algorithm for computing the smallest eigenvalue of a Symmetric Positive Definite Toeplitz matrix
2006. Co-Authors: Nicola Mastronardi, Monique Barel, Raf Vandebril. Abstract: Recent progress in signal processing and estimation has generated considerable interest in the problem of computing the smallest eigenvalue of Symmetric Positive Definite Toeplitz matrices. Several algorithms have been proposed in the literature. Many of them compute the smallest eigenvalue in an iterative fashion, relying on the Levinson–Durbin solution of sequences of Yule–Walker systems. Exploiting the properties of two algorithms recently developed for estimating a lower and an upper bound of the smallest singular value of upper triangular matrices, respectively, an algorithm for computing the smallest eigenvalue of a Symmetric Positive Definite Toeplitz matrix is derived. The algorithm relies on the computation of the R factor of the QR-factorization of the Toeplitz matrix and the inverse of R. The latter computation is efficiently accomplished by the generalized Schur algorithm.
Nicola Mastronardi - One of the best experts on this subject based on the ideXlab platform.
-
A Schur-based algorithm for computing bounds to the smallest eigenvalue of a Symmetric Positive Definite Toeplitz matrix
Linear Algebra and its Applications, 2008. Co-Authors: Nicola Mastronardi, Monique Barel, Raf Vandebril. Abstract: Recent progress in signal processing and estimation has generated considerable interest in the problem of computing the smallest eigenvalue of Symmetric Positive Definite Toeplitz matrices. Several algorithms have been proposed in the literature. They compute the smallest eigenvalue in an iterative fashion, many of them relying on the Levinson–Durbin solution of sequences of Yule–Walker systems. Exploiting the properties of two algorithms recently developed for estimating a lower and an upper bound of the smallest singular value of upper triangular matrices, respectively, an algorithm for computing bounds to the smallest eigenvalue of a Symmetric Positive Definite Toeplitz matrix is derived. The algorithm relies on the computation of the R factor of the QR-factorization of the Toeplitz matrix and the inverse of R. The simultaneous computation of R and R^{-1} is efficiently accomplished by the generalized Schur algorithm.
-
Computing a Lower Bound of the Smallest Eigenvalue of a Symmetric Positive-Definite Toeplitz Matrix
IEEE Transactions on Information Theory, 2008. Co-Authors: Teresa Laudadio, Nicola Mastronardi, Monique Barel. Abstract: In this correspondence, several algorithms to compute a lower bound of the smallest eigenvalue of a Symmetric Positive-Definite Toeplitz matrix are described and compared in terms of accuracy and computational efficiency. Exploiting the Toeplitz structure of the considered matrix, new theoretical insights are derived and an efficient implementation of some of the aforementioned algorithms is provided.
-
A Schur-based algorithm for computing the smallest eigenvalue of a Symmetric Positive Definite Toeplitz matrix
2006. Co-Authors: Nicola Mastronardi, Monique Barel, Raf Vandebril. Abstract: Recent progress in signal processing and estimation has generated considerable interest in the problem of computing the smallest eigenvalue of Symmetric Positive Definite Toeplitz matrices. Several algorithms have been proposed in the literature. Many of them compute the smallest eigenvalue in an iterative fashion, relying on the Levinson–Durbin solution of sequences of Yule–Walker systems. Exploiting the properties of two algorithms recently developed for estimating a lower and an upper bound of the smallest singular value of upper triangular matrices, respectively, an algorithm for computing the smallest eigenvalue of a Symmetric Positive Definite Toeplitz matrix is derived. The algorithm relies on the computation of the R factor of the QR-factorization of the Toeplitz matrix and the inverse of R. The latter computation is efficiently accomplished by the generalized Schur algorithm.
-
Computing the smallest eigenpair of a Symmetric Positive Definite Toeplitz matrix
SIAM Journal on Scientific Computing, 1999. Co-Authors: Nicola Mastronardi, Daniel Boley. Abstract: An algorithm for computing the smallest eigenvalue of a Symmetric Positive Definite Toeplitz matrix is presented. The eigenvalue is approximated from below by Newton's method applied to the characteristic polynomial of the matrix. The Newton step is calculated by a Levinson–Durbin-type recursion. Simultaneously, this recursion produces a realistic error bound for the current approximation without additional computational effort, as well as a simple and efficient way to compute the associated eigenvector.
Panayot S. Vassilevski - One of the best experts on this subject based on the ideXlab platform.
-
Direction-Preserving and Schur-Monotonic Semiseparable Approximations of Symmetric Positive Definite Matrices
SIAM Journal on Matrix Analysis and Applications, 2010. Co-Authors: Panayot S. Vassilevski. Abstract: For a given Symmetric Positive Definite matrix $A\in\mathbf{R}^{N\times N}$, we develop a fast and backward stable algorithm to approximate $A$ by a Symmetric Positive Definite semiseparable matrix, accurate to a constant multiple of any prescribed tolerance. In addition, this algorithm preserves the product, $AZ$, for a given matrix $Z\in\mathbf{R}^{N\times d}$, where $d\ll N$. Our algorithm guarantees the Positive Definiteness of the semiseparable matrix by embedding an approximation strategy inside a Cholesky factorization procedure to ensure that the Schur complements during the Cholesky factorization all remain Positive Definite after approximation. It uses a robust direction-preserving approximation scheme to ensure the preservation of $AZ$. We present numerical experiments and discuss the potential implications of our work.
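The invariant this algorithm maintains, that every Schur complement produced during a Cholesky-style elimination of an SPD matrix is itself SPD, is what makes positive definiteness survive the per-step approximation. This is not the paper's algorithm, only a numpy check of the invariant on a random SPD matrix of made-up size:

```python
import numpy as np

# Random SPD matrix by construction (B @ B.T is PSD; the shift makes it PD).
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B @ B.T + 6 * np.eye(6)

k = 2                          # leading block size to eliminate
A11, A12 = A[:k, :k], A[:k, k:]
A22 = A[k:, k:]

# Schur complement of the leading block: S = A22 - A12^T A11^{-1} A12.
# If A is SPD, S is SPD, so elimination can continue safely.
S = A22 - A12.T @ np.linalg.solve(A11, A12)
assert np.all(np.linalg.eigvalsh(S) > 0)
```

The paper's contribution is an approximation step interleaved with this elimination that is "Schur-monotonic", i.e. it never pushes a Schur complement out of the SPD cone.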
-
Direction-Preserving and Schur-Monotonic Semi-Separable Approximations of Symmetric Positive Definite Matrices
Lawrence Berkeley National Laboratory, 2010. Co-Authors: Panayot S. Vassilevski. Abstract: For a given Symmetric Positive Definite matrix $A \in \mathbf{R}^{N\times N}$, we develop a fast and backward stable algorithm to approximate $A$ by a Symmetric Positive-Definite semi-separable matrix, accurate to a constant multiple of any prescribed tolerance. In addition, this algorithm preserves the product, $AZ$, for a given matrix $Z \in \mathbf{R}^{N\times d}$, where $d \ll N$.
-
Direction-preserving and Schur-monotonic Semi-separable Approximations of Symmetric Positive Definite Matrices
2009. Co-Authors: Panayot S. Vassilevski. Abstract: For a given Symmetric Positive Definite matrix $A \in \mathbf{R}^{n\times n}$, we develop a fast and backward stable algorithm to approximate $A$ by a Symmetric Positive-Definite semi-separable matrix, accurate to any prescribed tolerance. In addition, this algorithm preserves the product, $AZ$, for a given matrix $Z \in \mathbf{R}^{n\times d}$, where $d \ll n$.
Raf Vandebril - One of the best experts on this subject based on the ideXlab platform.
-
A Schur-based algorithm for computing bounds to the smallest eigenvalue of a Symmetric Positive Definite Toeplitz matrix
Linear Algebra and its Applications, 2008. Co-Authors: Nicola Mastronardi, Monique Barel, Raf Vandebril. Abstract: Recent progress in signal processing and estimation has generated considerable interest in the problem of computing the smallest eigenvalue of Symmetric Positive Definite Toeplitz matrices. Several algorithms have been proposed in the literature. They compute the smallest eigenvalue in an iterative fashion, many of them relying on the Levinson–Durbin solution of sequences of Yule–Walker systems. Exploiting the properties of two algorithms recently developed for estimating a lower and an upper bound of the smallest singular value of upper triangular matrices, respectively, an algorithm for computing bounds to the smallest eigenvalue of a Symmetric Positive Definite Toeplitz matrix is derived. The algorithm relies on the computation of the R factor of the QR-factorization of the Toeplitz matrix and the inverse of R. The simultaneous computation of R and R^{-1} is efficiently accomplished by the generalized Schur algorithm.
-
A Schur-based algorithm for computing the smallest eigenvalue of a Symmetric Positive Definite Toeplitz matrix
2006. Co-Authors: Nicola Mastronardi, Monique Barel, Raf Vandebril. Abstract: Recent progress in signal processing and estimation has generated considerable interest in the problem of computing the smallest eigenvalue of Symmetric Positive Definite Toeplitz matrices. Several algorithms have been proposed in the literature. Many of them compute the smallest eigenvalue in an iterative fashion, relying on the Levinson–Durbin solution of sequences of Yule–Walker systems. Exploiting the properties of two algorithms recently developed for estimating a lower and an upper bound of the smallest singular value of upper triangular matrices, respectively, an algorithm for computing the smallest eigenvalue of a Symmetric Positive Definite Toeplitz matrix is derived. The algorithm relies on the computation of the R factor of the QR-factorization of the Toeplitz matrix and the inverse of R. The latter computation is efficiently accomplished by the generalized Schur algorithm.
-
A small note on the scaling of Symmetric Positive Definite semiseparable matrices
Numerical Algorithms, 2006. Co-Authors: Raf Vandebril, Gene H. Golub, Marc Van Barel. Abstract: In this paper we adapt a known method for diagonal scaling of Symmetric Positive Definite tridiagonal matrices to the semiseparable case. Based on the fact that a Symmetric Positive Definite tridiagonal matrix $T$ satisfies property A, one can easily construct a diagonal matrix $\hat{D}$ such that $\hat{D}T\hat{D}$ has the lowest condition number over all matrices $DTD$, for any choice of diagonal matrix $D$. Knowing that semiseparable matrices are the inverses of tridiagonal matrices, one can derive similar properties for semiseparable matrices. Here, we construct the optimal diagonal scaling of a semiseparable matrix, based on a new inversion formula for semiseparable matrices. Some numerical experiments are performed. In a first experiment we compare the condition numbers of the semiseparable matrices before and after the scaling. In a second experiment we compare the scalability of matrices coming from the reduction to semiseparable form with matrices coming from the reduction to tridiagonal form.
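The tridiagonal starting point is easy to demonstrate: for SPD matrices with property A, the Jacobi-style scaling D = diag(T)^{-1/2} minimizes the condition number of DTD over all diagonal D. A minimal numpy sketch on a made-up SPD tridiagonal matrix (only the condition-number improvement is checked here, not optimality):

```python
import numpy as np

# Hypothetical SPD tridiagonal matrix: strongly graded diagonal,
# small constant off-diagonals, diagonally dominant by construction.
n = 8
T = (np.diag(np.linspace(1.0, 100.0, n))
     + np.diag(0.3 * np.ones(n - 1), 1)
     + np.diag(0.3 * np.ones(n - 1), -1))

# Jacobi scaling D = diag(T)^{-1/2}; D T D then has unit diagonal.
D = np.diag(1.0 / np.sqrt(np.diag(T)))
cond_before = np.linalg.cond(T)
cond_after = np.linalg.cond(D @ T @ D)
assert cond_after < cond_before
```

The paper's point is that semiseparable matrices, being inverses of tridiagonal ones, admit an analogous optimal scaling, computed via a new inversion formula rather than from the diagonal directly.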
Zhaoshui He - One of the best experts on this subject based on the ideXlab platform.
-
Kernel sparse subspace clustering on Symmetric Positive Definite manifolds
Computer Vision and Pattern Recognition, 2016. Co-Authors: Zhaoshui He. Abstract: Sparse subspace clustering (SSC), as one of the most successful subspace clustering methods, has achieved notable clustering accuracy in computer vision tasks. However, SSC applies only to vector data in Euclidean space. Unfortunately, there is still no satisfactory approach to solve subspace clustering by the self-expressive principle for Symmetric Positive Definite (SPD) matrices, which is very useful in computer vision. In this paper, by embedding the SPD matrices into a Reproducing Kernel Hilbert Space (RKHS), a kernel subspace clustering method is constructed on the SPD manifold through an appropriate Log-Euclidean kernel, termed kernel sparse subspace clustering on the SPD Riemannian manifold (KSSCR). By exploiting the intrinsic Riemannian geometry within data, KSSCR can effectively characterize the geodesic distance between SPD matrices to uncover the underlying subspace structure. Experimental results on several famous datasets demonstrate that the proposed method achieves better clustering results than the state-of-the-art approaches.
-
Kernel sparse subspace clustering on Symmetric Positive Definite manifolds
arXiv: Computer Vision and Pattern Recognition, 2016. Co-Authors: Zhaoshui He. Abstract: Sparse subspace clustering (SSC), as one of the most successful subspace clustering methods, has achieved notable clustering accuracy in computer vision tasks. However, SSC applies only to vector data in Euclidean space. As such, there is still no satisfactory approach to solve subspace clustering by the self-expressive principle for Symmetric Positive Definite (SPD) matrices, which is very useful in computer vision. In this paper, by embedding the SPD matrices into a Reproducing Kernel Hilbert Space (RKHS), a kernel subspace clustering method is constructed on the SPD manifold through an appropriate Log-Euclidean kernel, termed kernel sparse subspace clustering on the SPD Riemannian manifold (KSSCR). By exploiting the intrinsic Riemannian geometry within data, KSSCR can effectively characterize the geodesic distance between SPD matrices to uncover the underlying subspace structure. Experimental results on two famous databases demonstrate that the proposed method achieves better clustering results than the state-of-the-art approaches.
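The Log-Euclidean kernel mentioned here has a standard Gaussian form, k(X, Y) = exp(−‖log X − log Y‖_F² / (2σ²)), where log is the matrix logarithm; for SPD matrices it can be computed through the eigendecomposition. A minimal sketch, with made-up 2×2 SPD matrices and a made-up bandwidth σ (the paper's actual kernel parameters and data are not reproduced here):

```python
import numpy as np

def spd_logm(X):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, U = np.linalg.eigh(X)          # real, positive eigenvalues for SPD X
    return (U * np.log(w)) @ U.T      # U diag(log w) U^T

def log_euclidean_kernel(X, Y, sigma=1.0):
    """Gaussian kernel on the Frobenius distance between matrix logs."""
    d = np.linalg.norm(spd_logm(X) - spd_logm(Y), 'fro')
    return np.exp(-d**2 / (2.0 * sigma**2))

X = np.array([[2.0, 0.5], [0.5, 1.0]])   # SPD (positive diagonal, det > 0)
Y = np.array([[1.5, 0.2], [0.2, 1.2]])   # SPD
k_xy = log_euclidean_kernel(X, Y)
assert 0.0 < k_xy <= 1.0                 # Gaussian kernels lie in (0, 1]
assert abs(log_euclidean_kernel(X, X) - 1.0) < 1e-12
```

Evaluating this kernel on all pairs of SPD descriptors yields the Gram matrix on which the RKHS self-expressive sparse coding of KSSCR operates.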