Positive Semidefinite

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 360 Experts worldwide ranked by ideXlab platform

Lei Wang - One of the best experts on this subject based on the ideXlab platform.

  • Positive Semidefinite metric learning using boosting-like algorithms
    Journal of Machine Learning Research, 2012
    Co-Authors: Chunhua Shen, Lei Wang, Junae Kim, Anton Van Den Hengel
    Abstract:

    The success of many machine learning and pattern recognition methods relies heavily upon the identification of an appropriate distance metric on the input data. It is often beneficial to learn such a metric from the input training data, instead of using a default one such as the Euclidean distance. In this work, we propose a boosting-based technique, termed BOOSTMETRIC, for learning a quadratic Mahalanobis distance metric. Learning a valid Mahalanobis distance metric requires enforcing the constraint that the matrix parameter to the metric remains Positive Semidefinite. Semidefinite programming is often used to enforce this constraint, but does not scale well and is not easy to implement. BOOSTMETRIC is instead based on the observation that any Positive Semidefinite matrix can be decomposed into a linear combination of trace-one rank-one matrices. BOOSTMETRIC thus uses rank-one Positive Semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting methods are easy to implement, efficient, and can accommodate various types of constraints. We extend traditional boosting algorithms in that the weak learner is a Positive Semidefinite matrix with trace and rank equal to one rather than a classifier or regressor. Experiments on various data sets demonstrate that the proposed algorithms compare favorably to state-of-the-art methods in terms of classification accuracy and running time.
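The decomposition that underpins BOOSTMETRIC can be checked numerically: the eigendecomposition of any trace-one Positive Semidefinite matrix writes it as a convex combination of trace-one rank-one matrices. A minimal NumPy sketch of this linear-algebra fact (an illustration, not the authors' implementation):

```python
import numpy as np

# Build a random Positive Semidefinite matrix with trace one.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
M = B @ B.T                      # B B^T is PSD by construction
M /= np.trace(M)                 # normalize so trace(M) = 1

# Eigendecomposition: M = sum_i w_i * v_i v_i^T with w_i >= 0.
w, V = np.linalg.eigh(M)
terms = [w[i] * np.outer(V[:, i], V[:, i]) for i in range(len(w))]

# Each outer product v_i v_i^T is rank-one with trace one, and the
# eigenvalues sum to trace(M) = 1, so they act as convex weights.
assert np.allclose(sum(terms), M)
assert np.isclose(w.sum(), 1.0)
```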

  • Positive Semidefinite metric learning using boosting-like algorithms
    arXiv: Computer Vision and Pattern Recognition, 2011
    Co-Authors: Chunhua Shen, Lei Wang, Junae Kim, Anton Van Den Hengel
    Abstract:

    The success of many machine learning and pattern recognition methods relies heavily upon the identification of an appropriate distance metric on the input data. It is often beneficial to learn such a metric from the input training data, instead of using a default one such as the Euclidean distance. In this work, we propose a boosting-based technique, termed BoostMetric, for learning a quadratic Mahalanobis distance metric. Learning a valid Mahalanobis distance metric requires enforcing the constraint that the matrix parameter to the metric remains Positive Semidefinite. Semidefinite programming is often used to enforce this constraint, but does not scale well and is not easy to implement. BoostMetric is instead based on the observation that any Positive Semidefinite matrix can be decomposed into a linear combination of trace-one rank-one matrices. BoostMetric thus uses rank-one Positive Semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting methods are easy to implement, efficient, and can accommodate various types of constraints. We extend traditional boosting algorithms in that the weak learner is a Positive Semidefinite matrix with trace and rank equal to one rather than a classifier or regressor. Experiments on various datasets demonstrate that the proposed algorithms compare favorably to state-of-the-art methods in terms of classification accuracy and running time.

  • Positive Semidefinite metric learning with boosting
    arXiv: Computer Vision and Pattern Recognition, 2009
    Co-Authors: Chunhua Shen, Lei Wang, Junae Kim, Anton Van Den Hengel
    Abstract:

    The learning of appropriate distance metrics is a critical problem in image classification and retrieval. In this work, we propose a boosting-based technique, termed BoostMetric, for learning a Mahalanobis distance metric. One of the primary difficulties in learning such a metric is to ensure that the Mahalanobis matrix remains Positive Semidefinite. Semidefinite programming is sometimes used to enforce this constraint, but does not scale well. BoostMetric is instead based on a key observation that any Positive Semidefinite matrix can be decomposed into a linear positive combination of trace-one rank-one matrices. BoostMetric thus uses rank-one Positive Semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting method is easy to implement, does not require tuning, and can accommodate various types of constraints. Experiments on various datasets show that the proposed algorithm compares favorably to state-of-the-art methods in terms of classification accuracy and running time.

  • PSDBoost: matrix-generation linear programming for Positive Semidefinite matrices learning
    Neural Information Processing Systems, 2008
    Co-Authors: Chunhua Shen, Alan H. Welsh, Lei Wang
    Abstract:

    In this work, we consider the problem of learning a Positive Semidefinite matrix. The critical issue is how to preserve Positive Semidefiniteness during the course of learning. Our algorithm is mainly inspired by LPBoost [1] and the general greedy convex optimization framework of Zhang [2]. We demonstrate the essence of the algorithm, termed PSDBoost (Positive Semidefinite Boosting), by focusing on a few different applications in machine learning. The proposed PSDBoost algorithm extends traditional boosting algorithms in that its parameter is a Positive Semidefinite matrix with trace equal to one instead of a classifier. PSDBoost is based on the observation that any trace-one Positive Semidefinite matrix can be decomposed into a linear convex combination of trace-one rank-one matrices, which serve as the base learners of PSDBoost. Numerical experiments are presented.
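The boosting-style update such a decomposition enables can be sketched as follows: a convex combination of trace-one rank-one matrices is itself Positive Semidefinite with trace one, so the PSD constraint never has to be enforced explicitly during learning. The step size `alpha` below is an illustrative placeholder, not the schedule used in PSDBoost:

```python
import numpy as np

rng = np.random.default_rng(1)

def rank_one_base_learner(n):
    """A trace-one rank-one 'base learner': u u^T with ||u||_2 = 1."""
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    return np.outer(u, u)

# Greedy boosting-style iteration: each step takes a convex combination
# of the current iterate and a new rank-one base learner, so both
# Positive Semidefiniteness and trace-one are preserved automatically.
n = 3
X = np.zeros((n, n))
for t in range(1, 11):
    alpha = 1.0 / t                  # illustrative step size only
    X = (1 - alpha) * X + alpha * rank_one_base_learner(n)

assert np.linalg.eigvalsh(X).min() >= -1e-12   # still PSD
assert np.isclose(np.trace(X), 1.0)            # still trace-one
```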

Chunhua Shen - One of the best experts on this subject based on the ideXlab platform.

  • Positive Semidefinite metric learning using boosting-like algorithms
    Journal of Machine Learning Research, 2012
    Co-Authors: Chunhua Shen, Lei Wang, Junae Kim, Anton Van Den Hengel
    Abstract:

    The success of many machine learning and pattern recognition methods relies heavily upon the identification of an appropriate distance metric on the input data. It is often beneficial to learn such a metric from the input training data, instead of using a default one such as the Euclidean distance. In this work, we propose a boosting-based technique, termed BOOSTMETRIC, for learning a quadratic Mahalanobis distance metric. Learning a valid Mahalanobis distance metric requires enforcing the constraint that the matrix parameter to the metric remains Positive Semidefinite. Semidefinite programming is often used to enforce this constraint, but does not scale well and is not easy to implement. BOOSTMETRIC is instead based on the observation that any Positive Semidefinite matrix can be decomposed into a linear combination of trace-one rank-one matrices. BOOSTMETRIC thus uses rank-one Positive Semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting methods are easy to implement, efficient, and can accommodate various types of constraints. We extend traditional boosting algorithms in that the weak learner is a Positive Semidefinite matrix with trace and rank equal to one rather than a classifier or regressor. Experiments on various data sets demonstrate that the proposed algorithms compare favorably to state-of-the-art methods in terms of classification accuracy and running time.

  • Positive Semidefinite metric learning using boosting-like algorithms
    arXiv: Computer Vision and Pattern Recognition, 2011
    Co-Authors: Chunhua Shen, Lei Wang, Junae Kim, Anton Van Den Hengel
    Abstract:

    The success of many machine learning and pattern recognition methods relies heavily upon the identification of an appropriate distance metric on the input data. It is often beneficial to learn such a metric from the input training data, instead of using a default one such as the Euclidean distance. In this work, we propose a boosting-based technique, termed BoostMetric, for learning a quadratic Mahalanobis distance metric. Learning a valid Mahalanobis distance metric requires enforcing the constraint that the matrix parameter to the metric remains Positive Semidefinite. Semidefinite programming is often used to enforce this constraint, but does not scale well and is not easy to implement. BoostMetric is instead based on the observation that any Positive Semidefinite matrix can be decomposed into a linear combination of trace-one rank-one matrices. BoostMetric thus uses rank-one Positive Semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting methods are easy to implement, efficient, and can accommodate various types of constraints. We extend traditional boosting algorithms in that the weak learner is a Positive Semidefinite matrix with trace and rank equal to one rather than a classifier or regressor. Experiments on various datasets demonstrate that the proposed algorithms compare favorably to state-of-the-art methods in terms of classification accuracy and running time.

  • Positive Semidefinite metric learning with boosting
    arXiv: Computer Vision and Pattern Recognition, 2009
    Co-Authors: Chunhua Shen, Lei Wang, Junae Kim, Anton Van Den Hengel
    Abstract:

    The learning of appropriate distance metrics is a critical problem in image classification and retrieval. In this work, we propose a boosting-based technique, termed BoostMetric, for learning a Mahalanobis distance metric. One of the primary difficulties in learning such a metric is to ensure that the Mahalanobis matrix remains Positive Semidefinite. Semidefinite programming is sometimes used to enforce this constraint, but does not scale well. BoostMetric is instead based on a key observation that any Positive Semidefinite matrix can be decomposed into a linear positive combination of trace-one rank-one matrices. BoostMetric thus uses rank-one Positive Semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting method is easy to implement, does not require tuning, and can accommodate various types of constraints. Experiments on various datasets show that the proposed algorithm compares favorably to state-of-the-art methods in terms of classification accuracy and running time.

  • PSDBoost: matrix-generation linear programming for Positive Semidefinite matrices learning
    Neural Information Processing Systems, 2008
    Co-Authors: Chunhua Shen, Alan H. Welsh, Lei Wang
    Abstract:

    In this work, we consider the problem of learning a Positive Semidefinite matrix. The critical issue is how to preserve Positive Semidefiniteness during the course of learning. Our algorithm is mainly inspired by LPBoost [1] and the general greedy convex optimization framework of Zhang [2]. We demonstrate the essence of the algorithm, termed PSDBoost (Positive Semidefinite Boosting), by focusing on a few different applications in machine learning. The proposed PSDBoost algorithm extends traditional boosting algorithms in that its parameter is a Positive Semidefinite matrix with trace equal to one instead of a classifier. PSDBoost is based on the observation that any trace-one Positive Semidefinite matrix can be decomposed into a linear convex combination of trace-one rank-one matrices, which serve as the base learners of PSDBoost. Numerical experiments are presented.

Anton Van Den Hengel - One of the best experts on this subject based on the ideXlab platform.

  • Positive Semidefinite metric learning using boosting-like algorithms
    Journal of Machine Learning Research, 2012
    Co-Authors: Chunhua Shen, Lei Wang, Junae Kim, Anton Van Den Hengel
    Abstract:

    The success of many machine learning and pattern recognition methods relies heavily upon the identification of an appropriate distance metric on the input data. It is often beneficial to learn such a metric from the input training data, instead of using a default one such as the Euclidean distance. In this work, we propose a boosting-based technique, termed BOOSTMETRIC, for learning a quadratic Mahalanobis distance metric. Learning a valid Mahalanobis distance metric requires enforcing the constraint that the matrix parameter to the metric remains Positive Semidefinite. Semidefinite programming is often used to enforce this constraint, but does not scale well and is not easy to implement. BOOSTMETRIC is instead based on the observation that any Positive Semidefinite matrix can be decomposed into a linear combination of trace-one rank-one matrices. BOOSTMETRIC thus uses rank-one Positive Semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting methods are easy to implement, efficient, and can accommodate various types of constraints. We extend traditional boosting algorithms in that the weak learner is a Positive Semidefinite matrix with trace and rank equal to one rather than a classifier or regressor. Experiments on various data sets demonstrate that the proposed algorithms compare favorably to state-of-the-art methods in terms of classification accuracy and running time.

  • Positive Semidefinite metric learning using boosting-like algorithms
    arXiv: Computer Vision and Pattern Recognition, 2011
    Co-Authors: Chunhua Shen, Lei Wang, Junae Kim, Anton Van Den Hengel
    Abstract:

    The success of many machine learning and pattern recognition methods relies heavily upon the identification of an appropriate distance metric on the input data. It is often beneficial to learn such a metric from the input training data, instead of using a default one such as the Euclidean distance. In this work, we propose a boosting-based technique, termed BoostMetric, for learning a quadratic Mahalanobis distance metric. Learning a valid Mahalanobis distance metric requires enforcing the constraint that the matrix parameter to the metric remains Positive Semidefinite. Semidefinite programming is often used to enforce this constraint, but does not scale well and is not easy to implement. BoostMetric is instead based on the observation that any Positive Semidefinite matrix can be decomposed into a linear combination of trace-one rank-one matrices. BoostMetric thus uses rank-one Positive Semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting methods are easy to implement, efficient, and can accommodate various types of constraints. We extend traditional boosting algorithms in that the weak learner is a Positive Semidefinite matrix with trace and rank equal to one rather than a classifier or regressor. Experiments on various datasets demonstrate that the proposed algorithms compare favorably to state-of-the-art methods in terms of classification accuracy and running time.

  • Positive Semidefinite metric learning with boosting
    arXiv: Computer Vision and Pattern Recognition, 2009
    Co-Authors: Chunhua Shen, Lei Wang, Junae Kim, Anton Van Den Hengel
    Abstract:

    The learning of appropriate distance metrics is a critical problem in image classification and retrieval. In this work, we propose a boosting-based technique, termed BoostMetric, for learning a Mahalanobis distance metric. One of the primary difficulties in learning such a metric is to ensure that the Mahalanobis matrix remains Positive Semidefinite. Semidefinite programming is sometimes used to enforce this constraint, but does not scale well. BoostMetric is instead based on a key observation that any Positive Semidefinite matrix can be decomposed into a linear positive combination of trace-one rank-one matrices. BoostMetric thus uses rank-one Positive Semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting method is easy to implement, does not require tuning, and can accommodate various types of constraints. Experiments on various datasets show that the proposed algorithm compares favorably to state-of-the-art methods in terms of classification accuracy and running time.

Hamza Fawzi - One of the best experts on this subject based on the ideXlab platform.

  • On representing the Positive Semidefinite cone using the second-order cone
    Mathematical Programming, 2019
    Co-Authors: Hamza Fawzi
    Abstract:

    The second-order cone plays an important role in convex optimization and has strong expressive abilities despite its apparent simplicity. Second-order cone formulations can also be solved more efficiently than Semidefinite programming problems in general. We consider the following question, posed by Lewis and Glineur, Parrilo, Saunderson: is it possible to express the general Positive Semidefinite cone using second-order cones? We provide a negative answer to this question and show that the $$3\times 3$$ Positive Semidefinite cone does not admit any second-order cone representation. In fact we show that the slice consisting of $$3\times 3$$ Positive Semidefinite Hankel matrices does not admit a second-order cone representation. Our proof relies on exhibiting a sequence of submatrices of the slack matrix of the $$3\times 3$$ Positive Semidefinite cone whose “second-order cone rank” grows to infinity.
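For contrast with this negative result for the $3\times 3$ cone, the $2\times 2$ Positive Semidefinite cone is second-order cone representable: the symmetric matrix with entries $a, b, b, c$ is Positive Semidefinite iff $a + c \ge \sqrt{(a-c)^2 + 4b^2}$. This is a standard fact, not a claim from the paper; a quick numerical check of the equivalence:

```python
import numpy as np

def psd_2x2(a, b, c):
    """[[a, b], [b, c]] is PSD iff its smallest eigenvalue is >= 0."""
    return np.linalg.eigvalsh(np.array([[a, b], [b, c]])).min() >= -1e-12

def soc_2x2(a, b, c):
    """Second-order cone form: a + c >= || (a - c, 2b) ||_2."""
    return a + c >= np.hypot(a - c, 2 * b) - 1e-12

# The two membership tests agree on random symmetric 2x2 matrices.
rng = np.random.default_rng(2)
for _ in range(1000):
    a, b, c = rng.uniform(-2, 2, size=3)
    assert psd_2x2(a, b, c) == soc_2x2(a, b, c)
```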

  • On polyhedral approximations of the Positive Semidefinite cone
    arXiv: Optimization and Control, 2018
    Co-Authors: Hamza Fawzi
    Abstract:

    Let $D$ be the set of $n\times n$ Positive Semidefinite matrices of trace equal to one, also known as the set of density matrices. We prove two results on the hardness of approximating $D$ with polytopes. First, we show that if $0 < \epsilon < 1$ and $A$ is an arbitrary matrix of trace equal to one, any polytope $P$ such that $(1-\epsilon)(D-A) \subset P \subset D-A$ must have linear programming extension complexity at least $\exp(c\sqrt{n})$, where $c > 0$ is a constant that depends on $\epsilon$. Second, we show that any polytope $P$ such that $D \subset P$ and such that the Gaussian width of $P$ is at most twice the Gaussian width of $D$ must have extension complexity at least $\exp(cn^{1/3})$. The main ingredient of our proofs is hypercontractivity of the noise operator on the hypercube.

  • A lower bound on the Positive Semidefinite rank of convex bodies
    SIAM Journal on Applied Algebra and Geometry, 2018
    Co-Authors: Hamza Fawzi, Mohab Safey El Din
    Abstract:

    The Positive Semidefinite rank of a convex body $C$ is the size of its smallest Positive Semidefinite formulation. We show that the Positive Semidefinite rank of any convex body $C$ is at least $\sqrt{\log d}$, where $d$ is the smallest degree of a polynomial that vanishes on the boundary of the polar of $C$. This improves upon the existing bound, which relies on results from quantifier elimination. Our proof relies on the Bezout bound applied to the Karush-Kuhn-Tucker conditions of optimality. We discuss the connection with the algebraic degree of Semidefinite programming and show that the bound is tight (up to constant factor) for random spectrahedra of suitable dimension.

  • A lower bound on the Positive Semidefinite rank of convex bodies
    arXiv: Optimization and Control, 2017
    Co-Authors: Hamza Fawzi, Mohab Safey El Din
    Abstract:

    The Positive Semidefinite rank of a convex body $C$ is the size of its smallest Positive Semidefinite formulation. We show that the Positive Semidefinite rank of any convex body $C$ is at least $\sqrt{\log d}$ where $d$ is the smallest degree of a polynomial that vanishes on the boundary of the polar of $C$. This improves on the existing bound which relies on results from quantifier elimination. The proof relies on the Bezout bound applied to the Karush-Kuhn-Tucker conditions of optimality. We discuss the connection with the algebraic degree of Semidefinite programming and show that the bound is tight (up to constant factor) for random spectrahedra of suitable dimension.

  • On representing the Positive Semidefinite cone using the second-order cone
    arXiv: Optimization and Control, 2016
    Co-Authors: Hamza Fawzi
    Abstract:

    The second-order cone plays an important role in convex optimization and has strong expressive abilities despite its apparent simplicity. Second-order cone formulations can also be solved more efficiently than Semidefinite programming in general. We consider the following question, posed by Lewis and Glineur, Parrilo, Saunderson: is it possible to express the general Positive Semidefinite cone using second-order cones? We provide a negative answer to this question and show that the 3x3 Positive Semidefinite cone does not admit any second-order cone representation. Our proof relies on exhibiting a sequence of submatrices of the slack matrix of the 3x3 Positive Semidefinite cone whose "second-order cone rank" grows to infinity. We also discuss the possibility of representing certain slices of the 3x3 Positive Semidefinite cone using the second-order cone.

Richard Z Robinson - One of the best experts on this subject based on the ideXlab platform.

  • Positive Semidefinite rank and nested spectrahedra
    Linear & Multilinear Algebra, 2018
    Co-Authors: Kaie Kubjas, Elina Robeva, Richard Z Robinson
    Abstract:

    The set of matrices of given Positive Semidefinite rank is semialgebraic. In this paper we study the geometry of this set, and in small cases we describe its boundary. For general values of Positive Semidefinite rank we provide a conjecture for the description of this boundary. Our proof techniques are geometric in nature and rely on nesting spectrahedra between polytopes.

  • Positive Semidefinite rank and nested spectrahedra
    arXiv: Algebraic Geometry, 2015
    Co-Authors: Kaie Kubjas, Elina Robeva, Richard Z Robinson
    Abstract:

    The set of matrices of given Positive Semidefinite rank is semialgebraic. In this paper we study the geometry of this set, and in small cases we describe its boundary. For general values of Positive Semidefinite rank we provide a conjecture for the description of this boundary. Our proof techniques are geometric in nature and rely on nesting spectrahedra between polytopes.

  • Positive Semidefinite rank
    Mathematical Programming, 2015
    Co-Authors: Hamza Fawzi, Joao Gouveia, Pablo A Parrilo, Richard Z Robinson, Rekha R Thomas
    Abstract:

    Let $M \in \mathbb{R}^{p \times q}$ be a nonnegative matrix. The Positive Semidefinite rank (psd rank) of $M$ is the smallest integer $k$ for which there exist Positive Semidefinite matrices $A_i, B_j$ of size $k \times k$ such that $M_{ij} = \mathrm{trace}(A_i B_j)$. The psd rank has many appealing geometric interpretations, including Semidefinite representations of polyhedra and information-theoretic applications. In this paper we develop and survey the main mathematical properties of psd rank, including its geometry, relationships with other rank notions, and computational and algorithmic aspects.
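The defining identity $M_{ij} = \mathrm{trace}(A_i B_j)$ can be illustrated by building a nonnegative matrix from an explicit psd factorization; the sizes and random factors below are arbitrary choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(3)
k, p, q = 2, 3, 4

def random_psd(k):
    """G G^T is Positive Semidefinite for any real matrix G."""
    G = rng.standard_normal((k, k))
    return G @ G.T

# A psd factorization of size k: one PSD matrix A_i per row and one
# PSD matrix B_j per column, with M_ij = trace(A_i B_j).
A = [random_psd(k) for _ in range(p)]
B = [random_psd(k) for _ in range(q)]
M = np.array([[np.trace(Ai @ Bj) for Bj in B] for Ai in A])

# trace(A_i B_j) >= 0 whenever both factors are PSD, so M is
# nonnegative, and by definition its psd rank is at most k.
assert (M >= 0).all()
```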

  • Worst-case results for Positive Semidefinite rank
    Mathematical Programming, 2015
    Co-Authors: Joao Gouveia, Richard Z Robinson, Rekha R Thomas
    Abstract:

    We present various worst-case results on the Positive Semidefinite (psd) rank of a nonnegative matrix, primarily in the context of polytopes. We prove that the psd rank of a generic $n$-dimensional polytope with $v$ vertices is at least $(nv)^{1/4}$, improving on previous lower bounds. For polygons with $v$ vertices, we show that psd rank cannot exceed $4\lceil v/6 \rceil$, which in turn shows that the psd rank of a $p \times q$ matrix of rank three is at most $4\lceil \min\{p,q\}/6 \rceil$. In general, a nonnegative matrix of rank $\binom{k+1}{2}$ has psd rank at least $k$, and we pose the problem of deciding whether the psd rank is exactly $k$. Using geometry and bounds on quantifier elimination, we show that this decision can be made in polynomial time when $k$ is fixed.
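The bounds stated in the abstract are easy to evaluate on concrete sizes; the helper names below are ours, and the numbers are purely illustrative arithmetic:

```python
from math import ceil

def polygon_psd_rank_upper(v):
    """A polygon with v vertices has psd rank at most 4 * ceil(v / 6)."""
    return 4 * ceil(v / 6)

def rank_three_matrix_upper(p, q):
    """A rank-three p x q nonnegative matrix has psd rank at most
    4 * ceil(min(p, q) / 6)."""
    return 4 * ceil(min(p, q) / 6)

def generic_polytope_lower(n, v):
    """A generic n-dimensional polytope with v vertices has
    psd rank at least (n * v) ** (1/4)."""
    return (n * v) ** 0.25

assert polygon_psd_rank_upper(6) == 4          # hexagon: at most 4
assert rank_three_matrix_upper(100, 7) == 8    # min(100, 7) = 7
assert abs(generic_polytope_lower(4, 4) - 2.0) < 1e-12   # 16 ** 0.25
```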

  • Positive Semidefinite rank
    arXiv: Optimization and Control, 2014
    Co-Authors: Hamza Fawzi, Joao Gouveia, Pablo A Parrilo, Richard Z Robinson, Rekha R Thomas
    Abstract:

    Let M be a p-by-q matrix with nonnegative entries. The Positive Semidefinite rank (psd rank) of M is the smallest integer k for which there exist Positive Semidefinite matrices $A_i, B_j$ of size $k \times k$ such that $M_{ij} = \text{trace}(A_i B_j)$. The psd rank has many appealing geometric interpretations, including Semidefinite representations of polyhedra and information-theoretic applications. In this paper we develop and survey the main mathematical properties of psd rank, including its geometry, relationships with other rank notions, and computational and algorithmic aspects.