Sum of Squares

The Experts below are selected from a list of 101,712 Experts worldwide ranked by the ideXlab platform

Antonis Papachristodoulou - One of the best experts on this subject based on the ideXlab platform.

  • Sparse Sum-of-Squares (SOS) optimization: a bridge between DSOS/SDSOS and SOS optimization for sparse polynomials
    Advances in Computing and Communications, 2019
    Co-Authors: Yang Zheng, Giovanni Fantuzzi, Antonis Papachristodoulou
    Abstract:

    Optimization over non-negative polynomials is fundamental for nonlinear systems analysis and control. This work investigates the relation between three tractable relaxations for optimizing over sparse non-negative polynomials: sparse Sum-of-Squares (SSOS) optimization, diagonally dominant Sum-of-Squares (DSOS) optimization, and scaled diagonally dominant Sum-of-Squares (SDSOS) optimization. We prove that the set of SSOS polynomials, an inner approximation of the cone of SOS polynomials, strictly contains the spaces of sparse DSOS/SDSOS polynomials. For problems with sparse polynomials, therefore, SSOS optimization is less conservative than its DSOS/SDSOS counterparts. Numerical results for large-scale sparse polynomial optimization problems demonstrate this fact, and also that SSOS optimization can be faster than DSOS/SDSOS methods despite requiring the solution of semidefinite programs instead of less expensive linear/second-order cone programs.
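
    The containment the paper proves can be seen at the level of Gram (certificate) matrices: an SOS certificate is a positive semidefinite Gram matrix, while a DSOS certificate additionally demands diagonal dominance. A minimal numerical sketch (assuming NumPy; an illustration of the certificate tests, not the paper's SSOS construction):

```python
import numpy as np

def is_diagonally_dominant(Q, tol=1e-9):
    """DSOS-style certificate test: nonnegative diagonal with
    Q[i,i] >= sum_{j != i} |Q[i,j]| in every row."""
    Q = np.asarray(Q, dtype=float)
    off = np.sum(np.abs(Q), axis=1) - np.abs(np.diag(Q))
    return bool(np.all(np.diag(Q) >= off - tol))

def is_psd(Q, tol=1e-9):
    """SOS-style certificate test: symmetric positive semidefinite Gram matrix."""
    return bool(np.min(np.linalg.eigvalsh(Q)) >= -tol)

# This Gram matrix is PSD (so it is a valid SOS certificate) but fails
# diagonal dominance, so it is not accepted as a DSOS certificate.
Q = np.array([[1.0, 0.9, 0.9],
              [0.9, 1.0, 0.9],
              [0.9, 0.9, 1.0]])
print(is_psd(Q), is_diagonally_dominant(Q))  # True False
```

    Every symmetric diagonally dominant matrix with nonnegative diagonal is PSD, but not conversely; this asymmetry is the source of the conservatism of DSOS relative to SOS.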

  • Sparse Sum-of-Squares (SOS) optimization: a bridge between DSOS/SDSOS and SOS optimization for sparse polynomials
    arXiv: Optimization and Control, 2018
    Co-Authors: Yang Zheng, Giovanni Fantuzzi, Antonis Papachristodoulou
    Abstract:

    Optimization over non-negative polynomials is fundamental for nonlinear systems analysis and control. We investigate the relation between three tractable relaxations for optimizing over sparse non-negative polynomials: sparse Sum-of-Squares (SSOS) optimization, diagonally dominant Sum-of-Squares (DSOS) optimization, and scaled diagonally dominant Sum-of-Squares (SDSOS) optimization. We prove that the set of SSOS polynomials, an inner approximation of the cone of SOS polynomials, strictly contains the spaces of sparse DSOS/SDSOS polynomials. When applicable, therefore, SSOS optimization is less conservative than its DSOS/SDSOS counterparts. Numerical results for large-scale sparse polynomial optimization problems demonstrate this fact, and also that SSOS optimization can be faster than DSOS/SDSOS methods despite requiring the solution of semidefinite programs instead of less expensive linear/second-order cone programs.

  • Generalised absolute stability and Sum of Squares
    Automatica, 2013
    Co-Authors: Edward J. Hancock, Antonis Papachristodoulou
    Abstract:

    This paper introduces a general framework for analysing nonlinear systems using absolute stability theory and Sum of Squares programming. The technique decomposes a vector field into a system with a polynomial vector field in feedback with a nonlinear memoryless term, which is contained in a generalised sector inequality with polynomial bounds. This decomposition can be used to model uncertainty in the nonlinearity or to bound difficult-to-analyse terms by simpler polynomial functions, such as time varying, non-polynomial or higher order nonlinearities. Conditions for stability and regions of attraction are found using polynomial and Lur'e type Lyapunov functions, which generalise those used for the derivation of the multivariable circle and Popov criteria in classical absolute stability. The technique extends both absolute stability theory and the applicability of Sum of Squares programming. The usefulness of the technique is demonstrated with illustrative examples.

  • A converse Sum of Squares Lyapunov result with a degree bound
    arXiv: Classical Analysis and ODEs, 2012
    Co-Authors: Matthew M. Peet, Antonis Papachristodoulou
    Abstract:

    Sum of Squares programming has been used extensively over the past decade for the stability analysis of nonlinear systems but several questions remain unanswered. In this paper, we show that exponential stability of a polynomial vector field on a bounded set implies the existence of a Lyapunov function which is a Sum-of-Squares of polynomials. In particular, the main result states that if a system is exponentially stable on a bounded nonempty set, then there exists an SOS Lyapunov function which is exponentially decreasing on that bounded set. The proof is constructive and uses the Picard iteration. A bound on the degree of this converse Lyapunov function is also given. This result implies that semidefinite programming can be used to answer the question of stability of a polynomial vector field with a bound on complexity.

  • A Converse Sum of Squares Lyapunov Result With a Degree Bound
    IEEE Transactions on Automatic Control, 2012
    Co-Authors: Matthew M. Peet, Antonis Papachristodoulou
    Abstract:

    Although Sum of Squares programming has been used extensively over the past decade for the stability analysis of nonlinear systems, several fundamental questions remain unanswered. In this paper, we show that exponential stability of a polynomial vector field on a bounded set implies the existence of a Lyapunov function which is a Sum of Squares of polynomials. In particular, the main result states that if a system is exponentially stable on a bounded nonempty set, then there exists a Sum of Squares Lyapunov function which is exponentially decreasing on that bounded set. Furthermore, we derive a bound on the degree of this converse Lyapunov function as a function of the continuity and stability properties of the vector field. The proof is constructive and uses the Picard iteration. Our result implies that semidefinite programming can be used to answer the question of stability of a polynomial vector field with a bound on complexity.
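
    The constructive proof rests on the Picard iteration $x_{k+1}(t) = x_0 + \int_0^t f(x_k(s))\,ds$, whose iterates converge to the flow of the vector field. A minimal numerical sketch of that iteration (assuming NumPy; this illustrates the iteration itself, not the Sum of Squares construction built on top of it):

```python
import numpy as np

def picard(f, x0, t, iters=30):
    """Picard iteration x_{k+1}(t) = x0 + int_0^t f(x_k(s)) ds,
    evaluated on a uniform time grid with the trapezoidal rule."""
    dt = t[1] - t[0]
    x = np.full_like(t, x0)
    for _ in range(iters):
        fx = f(x)
        # cumulative trapezoidal integral of f(x_k) from 0 to each grid point
        integral = np.concatenate(([0.0], np.cumsum((fx[1:] + fx[:-1]) * dt / 2)))
        x = x0 + integral
    return x

t = np.linspace(0.0, 1.0, 201)
x = picard(lambda x: -x, 1.0, t)       # xdot = -x, x(0) = 1
print(np.max(np.abs(x - np.exp(-t))))  # small: iterates approach e^{-t}
```

    For a Lipschitz vector field the iteration is a contraction on a short time interval, so the iterates converge uniformly to the true solution.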

David Steurer - One of the best experts on this subject based on the ideXlab platform.

  • High-dimensional estimation via Sum-of-Squares proofs
    Proceedings of the International Congress of Mathematicians (ICM 2018), 2019
    Co-Authors: Prasad Raghavendra, Tselil Schramm, David Steurer
    Abstract:

    Estimation is the computational task of recovering a hidden parameter $x$ associated with a distribution $D_x$, given a measurement $y$ sampled from the distribution. High dimensional estimation problems arise naturally in statistics, machine learning, and complexity theory. Many high dimensional estimation problems can be formulated as systems of polynomial equations and inequalities, and thus give rise to natural probability distributions over polynomial systems. Sum-of-Squares proofs provide a powerful framework to reason about polynomial systems, and further there exist efficient algorithms to search for low-degree Sum-of-Squares proofs. Understanding and characterizing the power of Sum-of-Squares proofs for estimation problems has been a subject of intense study in recent years. On one hand, there is a growing body of work utilizing Sum-of-Squares proofs for recovering solutions to polynomial systems when the system is feasible. On the other hand, a general technique referred to as pseudocalibration has been developed towards showing lower bounds on the degree of Sum-of-Squares proofs. Finally, the existence of Sum-of-Squares refutations of a polynomial system has been shown to be intimately connected to the existence of spectral algorithms. In this article we survey these developments.

  • Robust moment estimation and improved clustering via Sum of Squares
    Symposium on the Theory of Computing, 2018
    Co-Authors: Pravesh Kothari, Jacob Steinhardt, David Steurer
    Abstract:

    We develop efficient algorithms for estimating low-degree moments of unknown distributions in the presence of adversarial outliers, and design a new family of convex relaxations for k-means clustering based on the Sum-of-Squares method. As an immediate corollary, for any γ > 0, we obtain an efficient algorithm for learning the means of a mixture of k arbitrary distributions in $\mathbb{R}^d$ in time $d^{O(1/\gamma)}$ so long as the means have separation $\Omega(k^{\gamma})$. This in particular yields an algorithm for learning Gaussian mixtures with separation $\Omega(k^{\gamma})$, thus partially resolving an open problem of Regev and Vijayaraghavan (2017). The guarantees of our robust estimation algorithms improve in many cases significantly over the best previous ones obtained in recent works. We also show that the guarantees of our algorithms match information-theoretic lower bounds for the class of distributions we consider. These improved guarantees allow us to give improved algorithms for independent component analysis and learning mixtures of Gaussians in the presence of outliers. We also show a sharp upper bound on the Sum-of-Squares norms for moment tensors of any distribution that satisfies the Poincaré inequality. The Poincaré inequality is a central inequality in probability theory, and a large class of distributions satisfy it, including Gaussians, product distributions, strongly log-concave distributions, and any sum or uniformly continuous transformation of such distributions. As a consequence, all of the above algorithmic improvements hold for distributions satisfying the Poincaré inequality.
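
    As a numeric illustration of the failure mode robust estimation must handle (using the classical one-dimensional median as the robust baseline, not the paper's Sum-of-Squares algorithm): a single adversarial outlier can move the empirical first moment arbitrarily far.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=1000)      # inliers from N(0, 1)
corrupted = np.concatenate([clean, [1e6]])   # one adversarial outlier

# The empirical mean is dragged to roughly 1e6/1001, while the median,
# a classical robust estimator in one dimension, barely moves.
print(abs(np.mean(corrupted)))
print(abs(np.median(corrupted)))
```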

  • Exact tensor completion with Sum of Squares
    Conference on Learning Theory, 2017
    Co-Authors: Aaron Potechin, David Steurer
    Abstract:

    We obtain the first polynomial-time algorithm for exact tensor completion that improves over the bound implied by reduction to matrix completion. The algorithm recovers an unknown 3-tensor with $r$ incoherent, orthogonal components in $\mathbb R^n$ from $r\cdot \tilde O(n^{1.5})$ randomly observed entries of the tensor. This bound improves over the previous best one of $r\cdot \tilde O(n^{2})$ by reduction to exact matrix completion. Our bound also matches the best known results for the easier problem of approximate tensor completion (Barak & Moitra, 2015). Our algorithm and analysis extend seminal results for exact matrix completion (Candes & Recht, 2009) to the tensor setting via the Sum-of-Squares method. The main technical challenge is to show that a small number of randomly chosen monomials are enough to construct a degree-3 polynomial with precisely planted orthogonal global optima over the sphere and that this fact can be certified within the Sum-of-Squares proof system.

  • Polynomial-time tensor decompositions with Sum of Squares
    Foundations of Computer Science, 2016
    Co-Authors: Jonathan Shi, David Steurer
    Abstract:

    We give new algorithms based on the Sum-of-Squares method for tensor decomposition. Our results improve the best known running times from quasi-polynomial to polynomial for several problems, including decomposing random overcomplete 3-tensors and learning overcomplete dictionaries with constant relative sparsity. We also give the first robust analysis for decomposing overcomplete 4-tensors in the smoothed analysis model. A key ingredient of our analysis is to establish small spectral gaps in moment matrices derived from solutions to Sum-of-Squares relaxations. To enable this analysis we augment Sum-of-Squares relaxations with spectral analogs of maximum entropy constraints.

  • Fast spectral algorithms from Sum-of-Squares proofs: tensor decomposition and planted sparse vectors
    Symposium on the Theory of Computing, 2016
    Co-Authors: Samuel B Hopkins, Jonathan Shi, Tselil Schramm, David Steurer
    Abstract:

    We consider two problems that arise in machine learning applications: the problem of recovering a planted sparse vector in a random linear subspace and the problem of decomposing a random low-rank overcomplete 3-tensor. For both problems, the best known guarantees are based on the Sum-of-Squares method. We develop new algorithms inspired by analyses of the Sum-of-Squares method. Our algorithms achieve the same or similar guarantees as Sum-of-Squares for these problems but the running time is significantly faster. For the planted sparse vector problem, we give an algorithm with running time nearly linear in the input size that approximately recovers a planted sparse vector with up to constant relative sparsity in a random subspace of $\mathbb{R}^n$ of dimension up to $\Omega(\sqrt{n})$. These recovery guarantees match the best known ones of Barak, Kelner, and Steurer (STOC 2014) up to logarithmic factors. For tensor decomposition, we give an algorithm with running time close to linear in the input size (with exponent ≈ 1.125) that approximately recovers a component of a random 3-tensor over $\mathbb{R}^n$ of rank up to $\Omega(n^{4/3})$. The best previous algorithm for this problem due to Ge and Ma (RANDOM 2015) works up to rank $\Omega(n^{3/2})$ but requires quasipolynomial time.
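
    The flavour of these spectral methods can be seen with classical tensor power iteration on a planted rank-1 3-tensor, a far simpler setting than the overcomplete regime treated in the paper (assuming NumPy; illustrative only, not the paper's nearly-linear-time algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
a = rng.normal(size=n)
a /= np.linalg.norm(a)                    # planted unit component
T = np.einsum('i,j,k->ijk', a, a, a)      # rank-1 symmetric 3-tensor
T += 1e-3 * rng.normal(size=(n, n, n))    # small additive noise

# Tensor power iteration v <- T(I, v, v)/||T(I, v, v)||, warm-started
# near the component so the fixed-point behaviour is easy to see.
p = rng.normal(size=n)
p /= np.linalg.norm(p)
v = a + 0.3 * p
v /= np.linalg.norm(v)
for _ in range(30):
    v = np.einsum('ijk,j,k->i', T, v, v)
    v /= np.linalg.norm(v)

print(abs(np.dot(v, a)))  # close to 1: the planted component is recovered
```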

Aaron Potechin - One of the best experts on this subject based on the ideXlab platform.

  • Machinery for Proving Sum-of-Squares Lower Bounds on Certification Problems.
    arXiv: Computational Complexity, 2020
    Co-Authors: Aaron Potechin, Goutham Rajendran
    Abstract:

    In this paper, we construct general machinery for proving Sum-of-Squares lower bounds on certification problems by generalizing the techniques used by Barak et al. [FOCS 2016] to prove Sum-of-Squares lower bounds for planted clique. Using this machinery, we prove degree $n^{\epsilon}$ Sum-of-Squares lower bounds for tensor PCA, the Wishart model of sparse PCA, and a variant of planted clique which we call planted slightly denser subgraph.

  • A nearly tight Sum of Squares lower bound for the planted clique problem
    SIAM Journal on Computing, 2019
    Co-Authors: Boaz Barak, Jonathan A. Kelner, Samuel B Hopkins, Ankur Moitra, Pravesh Kothari, Aaron Potechin
    Abstract:

    We prove that with high probability over the choice of a random graph $G$ from the Erdős–Rényi distribution $G(n,1/2)$, the $n^{O(d)}$-time degree $d$ Sum-of-Squares (SOS) semidefinite programming...

  • Sum of Squares lower bounds from symmetry and a good story
    arXiv: Computational Complexity, 2017
    Co-Authors: Aaron Potechin
    Abstract:

    In this paper, we develop machinery for proving Sum of Squares lower bounds on symmetric problems based on the intuition that Sum of Squares has difficulty capturing integrality arguments, i.e. arguments that an expression must be an integer. Using this machinery, we prove a tight Sum of Squares lower bound for the following Turán-type problem: minimize the number of triangles in a graph $G$ with a fixed edge density. We also give an alternative proof of Grigoriev's Sum of Squares lower bound for the knapsack problem.

  • A note on property testing Sum of Squares and multivariate polynomial interpolation
    arXiv: Computational Complexity, 2017
    Co-Authors: Aaron Potechin, Liu Yang
    Abstract:

    In this paper, we investigate property testing whether or not a degree d multivariate polynomial is a Sum of Squares or is far from a Sum of Squares. We show that if we require that the property tester always accepts YES instances and uses random samples, $n^{\Omega(d)}$ samples are required, which is not much fewer than it would take to completely determine the polynomial. To prove this lower bound, we show that with high probability, multivariate polynomial interpolation matches arbitrary values on random points and the resulting polynomial has small norm. We then consider a particular polynomial which is non-negative yet not a Sum of Squares and use pseudo-expectation values to prove it is far from being a Sum of Squares.
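
    The classical example of a polynomial that is non-negative yet not a Sum of Squares is Motzkin's polynomial, non-negative by the AM-GM inequality applied to the monomials $x^4y^2$, $x^2y^4$ and $1$. Its non-negativity can be probed numerically (this is the textbook separating example, not necessarily the polynomial constructed in the paper):

```python
import numpy as np

def motzkin(x, y):
    """Motzkin's polynomial: non-negative on all of R^2, yet provably
    not a sum of squares of polynomials."""
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

xs = np.linspace(-2.0, 2.0, 401)
X, Y = np.meshgrid(xs, xs)
vals = motzkin(X, Y)
print(vals.min())  # ~0, attained near |x| = |y| = 1
```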

  • Exact tensor completion with Sum of Squares
    Conference on Learning Theory, 2017
    Co-Authors: Aaron Potechin, David Steurer
    Abstract:

    We obtain the first polynomial-time algorithm for exact tensor completion that improves over the bound implied by reduction to matrix completion. The algorithm recovers an unknown 3-tensor with $r$ incoherent, orthogonal components in $\mathbb R^n$ from $r\cdot \tilde O(n^{1.5})$ randomly observed entries of the tensor. This bound improves over the previous best one of $r\cdot \tilde O(n^{2})$ by reduction to exact matrix completion. Our bound also matches the best known results for the easier problem of approximate tensor completion (Barak & Moitra, 2015). Our algorithm and analysis extend seminal results for exact matrix completion (Candes & Recht, 2009) to the tensor setting via the Sum-of-Squares method. The main technical challenge is to show that a small number of randomly chosen monomials are enough to construct a degree-3 polynomial with precisely planted orthogonal global optima over the sphere and that this fact can be certified within the Sum-of-Squares proof system.

Matthew M. Peet - One of the best experts on this subject based on the ideXlab platform.

  • Stability analysis of sampled-data systems using Sum of Squares
    IEEE Transactions on Automatic Control, 2013
    Co-Authors: Alexandre Seuret, Matthew M. Peet
    Abstract:

    This article proposes a new approach to stability analysis of linear systems with sampled-data inputs or channels. The method, based on a variation of the discrete-time Lyapunov approach, provides stability conditions using functional variables subject to convex constraints. These stability conditions can be solved using the Sum of Squares methodology with little or no conservatism in both the synchronous and the asynchronous sampling case. Numerical examples are included to show convergence.
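
    The basic phenomenon, stability retained under fast sampling and lost under slow sampling, already appears in a scalar example with exact discretization (an elementary illustration, not the paper's functional-variable conditions):

```python
import numpy as np

def sampled_data_multiplier(a, b, k, h):
    """For xdot = a*x + b*u with u = -k*x(t_j) held constant over each
    sampling interval of length h, the exact one-step map is
    x(t_{j+1}) = (e^{ah} - (e^{ah} - 1)*b*k/a) * x(t_j); return that factor."""
    eah = np.exp(a * h)
    return eah - (eah - 1.0) * b * k / a

# Unstable plant xdot = x with feedback u = -2x: the sampled loop is
# stable exactly when |multiplier| < 1, i.e. for periods h < ln(3).
print(abs(sampled_data_multiplier(1.0, 1.0, 2.0, 0.5)))  # < 1: stable
print(abs(sampled_data_multiplier(1.0, 1.0, 2.0, 1.5)))  # > 1: unstable
```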

  • A converse Sum of Squares Lyapunov result with a degree bound
    arXiv: Classical Analysis and ODEs, 2012
    Co-Authors: Matthew M. Peet, Antonis Papachristodoulou
    Abstract:

    Sum of Squares programming has been used extensively over the past decade for the stability analysis of nonlinear systems but several questions remain unanswered. In this paper, we show that exponential stability of a polynomial vector field on a bounded set implies the existence of a Lyapunov function which is a Sum-of-Squares of polynomials. In particular, the main result states that if a system is exponentially stable on a bounded nonempty set, then there exists an SOS Lyapunov function which is exponentially decreasing on that bounded set. The proof is constructive and uses the Picard iteration. A bound on the degree of this converse Lyapunov function is also given. This result implies that semidefinite programming can be used to answer the question of stability of a polynomial vector field with a bound on complexity.

  • CDC - Bootstrap current optimization in Tokamaks using Sum-of-Squares polynomials
    2012 IEEE 51st IEEE Conference on Decision and Control (CDC), 2012
    Co-Authors: Aditya Gahlawat, Emmanuel Witrant, Matthew M. Peet, Mazen Alamir
    Abstract:

    In this paper we present a Lyapunov based feedback design strategy, by employing the Sum-of-Squares polynomials framework, to maximize the bootstrap current in tokamaks. The bootstrap current may play an important role in reducing the external energy input required for tokamak operation. The Sum-of-Squares polynomials framework allows us to algorithmically construct controllers. Additionally, we provide a heuristic to take into account the control input shape constraints which arise due to limitations on the actuators.

  • A Converse Sum of Squares Lyapunov Result With a Degree Bound
    IEEE Transactions on Automatic Control, 2012
    Co-Authors: Matthew M. Peet, Antonis Papachristodoulou
    Abstract:

    Although Sum of Squares programming has been used extensively over the past decade for the stability analysis of nonlinear systems, several fundamental questions remain unanswered. In this paper, we show that exponential stability of a polynomial vector field on a bounded set implies the existence of a Lyapunov function which is a Sum of Squares of polynomials. In particular, the main result states that if a system is exponentially stable on a bounded nonempty set, then there exists a Sum of Squares Lyapunov function which is exponentially decreasing on that bounded set. Furthermore, we derive a bound on the degree of this converse Lyapunov function as a function of the continuity and stability properties of the vector field. The proof is constructive and uses the Picard iteration. Our result implies that semidefinite programming can be used to answer the question of stability of a polynomial vector field with a bound on complexity.

Boaz Barak - One of the best experts on this subject based on the ideXlab platform.

  • A nearly tight Sum of Squares lower bound for the planted clique problem
    SIAM Journal on Computing, 2019
    Co-Authors: Boaz Barak, Jonathan A. Kelner, Samuel B Hopkins, Ankur Moitra, Pravesh Kothari, Aaron Potechin
    Abstract:

    We prove that with high probability over the choice of a random graph $G$ from the Erdős–Rényi distribution $G(n,1/2)$, the $n^{O(d)}$-time degree $d$ Sum-of-Squares (SOS) semidefinite programming...

  • A nearly tight Sum of Squares lower bound for the planted clique problem
    Foundations of Computer Science, 2016
    Co-Authors: Boaz Barak, Jonathan A. Kelner, Samuel B Hopkins, Ankur Moitra, Pravesh Kothari, Aaron Potechin
    Abstract:

    We prove that with high probability over the choice of a random graph G from the Erdős–Rényi distribution G(n,1/2), the n^{O(d)}-time degree d Sum-of-Squares semidefinite programming relaxation for the clique problem will give a value of at least n^{1/2 - c(d/log n)^{1/2}} for some constant c > 0. This yields a nearly tight n^{1/2 - o(1)} bound on the value of this program for any degree d = o(log n). Moreover we introduce a new framework that we call pseudo-calibration to construct Sum-of-Squares lower bounds. This framework is inspired by taking a computational analogue of Bayesian probability theory. It yields a general recipe for constructing good pseudo-distributions (i.e., dual certificates for the Sum-of-Squares semidefinite program), and sheds further light on the ways in which this hierarchy differs from others.
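
    The random model in question is straightforward to generate, which is convenient for experimenting with relaxations (a plain instance generator assuming NumPy; the pseudo-calibration construction itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 200, 30
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = A + A.T                                # adjacency matrix of G(n, 1/2)
clique = rng.choice(n, size=k, replace=False)
A[np.ix_(clique, clique)] = 1              # plant a k-clique
np.fill_diagonal(A, 0)

sub = A[np.ix_(clique, clique)]
print(sub.sum() == k * (k - 1))  # True: every pair in the planted set is adjacent
```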

  • Noisy tensor completion via the Sum of Squares hierarchy
    Conference on Learning Theory, 2016
    Co-Authors: Boaz Barak, Ankur Moitra
    Abstract:

    In the noisy tensor completion problem we observe $m$ entries (whose location is chosen uniformly at random) from an unknown $n_1 \times n_2 \times n_3$ tensor $T$. We assume that $T$ is entry-wise close to being rank $r$. Our goal is to fill in its missing entries using as few observations as possible. Let $n = \max(n_1, n_2, n_3)$. We show that if $m = n^{3/2} r$ then there is a polynomial time algorithm based on the sixth level of the Sum-of-Squares hierarchy for completing it. Our estimate agrees with almost all of $T$'s entries almost exactly and works even when our observations are corrupted by noise. This is also the first algorithm for tensor completion that works in the overcomplete case when $r > n$, and in fact it works all the way up to $r = n^{3/2}$. Our proofs are short and simple and are based on establishing a new connection between noisy tensor completion (through the language of Rademacher complexity) and the task of refuting random constraint satisfaction problems. This connection seems to have gone unnoticed even in the context of matrix completion. Furthermore, we use this connection to show matching lower bounds. Our main technical result is in characterizing the Rademacher complexity of the sequence of norms that arise in the Sum-of-Squares relaxations to the tensor nuclear norm. These results point to an interesting new direction: can we explore computational vs. sample complexity tradeoffs through the Sum-of-Squares hierarchy?

  • Noisy tensor completion via the Sum of Squares hierarchy
    arXiv: Learning, 2015
    Co-Authors: Boaz Barak, Ankur Moitra
    Abstract:

    In the noisy tensor completion problem we observe $m$ entries (whose location is chosen uniformly at random) from an unknown $n_1 \times n_2 \times n_3$ tensor $T$. We assume that $T$ is entry-wise close to being rank $r$. Our goal is to fill in its missing entries using as few observations as possible. Let $n = \max(n_1, n_2, n_3)$. We show that if $m = n^{3/2} r$ then there is a polynomial time algorithm based on the sixth level of the Sum-of-Squares hierarchy for completing it. Our estimate agrees with almost all of $T$'s entries almost exactly and works even when our observations are corrupted by noise. This is also the first algorithm for tensor completion that works in the overcomplete case when $r > n$, and in fact it works all the way up to $r = n^{3/2-\epsilon}$. Our proofs are short and simple and are based on establishing a new connection between noisy tensor completion (through the language of Rademacher complexity) and the task of refuting random constraint satisfaction problems. This connection seems to have gone unnoticed even in the context of matrix completion. Furthermore, we use this connection to show matching lower bounds. Our main technical result is in characterizing the Rademacher complexity of the sequence of norms that arise in the Sum-of-Squares relaxations to the tensor nuclear norm. These results point to an interesting new direction: can we explore computational vs. sample complexity tradeoffs through the Sum-of-Squares hierarchy?

  • STOC - Rounding Sum-of-Squares relaxations
    Proceedings of the forty-sixth annual ACM symposium on Theory of computing, 2014
    Co-Authors: Boaz Barak, Jonathan A. Kelner, David Steurer
    Abstract:

    We present a general approach to rounding semidefinite programming relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our approach is based on using the connection between these relaxations and the Sum-of-Squares proof system to transform a combining algorithm---an algorithm that maps a distribution over solutions into a (possibly weaker) solution---into a rounding algorithm that maps a solution of the relaxation to a solution of the original problem.