The experts below are selected from a list of 101,712 experts worldwide, ranked by the ideXlab platform.
Antonis Papachristodoulou - One of the best experts on this subject based on the ideXlab platform.

Sparse Sum of Squares (SOS) Optimization: A Bridge Between DSOS, SDSOS, and SOS Optimization for Sparse Polynomials
Advances in Computing and Communications, 2019. Co-Authors: Yang Zheng, Giovanni Fantuzzi, Antonis Papachristodoulou. Abstract: Optimization over nonnegative polynomials is fundamental for nonlinear systems analysis and control. This work investigates the relation between three tractable relaxations for optimizing over sparse nonnegative polynomials: sparse Sum-of-Squares (SSOS) optimization, diagonally dominant Sum-of-Squares (DSOS) optimization, and scaled diagonally dominant Sum-of-Squares (SDSOS) optimization. We prove that the set of SSOS polynomials, an inner approximation of the cone of SOS polynomials, strictly contains the spaces of sparse DSOS/SDSOS polynomials. For problems with sparse polynomials, therefore, SSOS optimization is less conservative than its DSOS/SDSOS counterparts. Numerical results for large-scale sparse polynomial optimization problems demonstrate this fact, and also that SSOS optimization can be faster than DSOS/SDSOS methods despite requiring the solution of semidefinite programs instead of less expensive linear/second-order cone programs.

Sparse Sum of Squares (SOS) Optimization: A Bridge Between DSOS, SDSOS, and SOS Optimization for Sparse Polynomials
arXiv: Optimization and Control, 2018. Co-Authors: Yang Zheng, Giovanni Fantuzzi, Antonis Papachristodoulou. Abstract: Optimization over nonnegative polynomials is fundamental for nonlinear systems analysis and control. We investigate the relation between three tractable relaxations for optimizing over sparse nonnegative polynomials: sparse Sum-of-Squares (SSOS) optimization, diagonally dominant Sum-of-Squares (DSOS) optimization, and scaled diagonally dominant Sum-of-Squares (SDSOS) optimization. We prove that the set of SSOS polynomials, an inner approximation of the cone of SOS polynomials, strictly contains the spaces of sparse DSOS/SDSOS polynomials. When applicable, therefore, SSOS optimization is less conservative than its DSOS/SDSOS counterparts. Numerical results for large-scale sparse polynomial optimization problems demonstrate this fact, and also that SSOS optimization can be faster than DSOS/SDSOS methods despite requiring the solution of semidefinite programs instead of less expensive linear/second-order cone programs.
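The conservatism gap between DSOS and SOS described in the two abstracts above comes from replacing the positive-semidefiniteness requirement on the Gram matrix Q (where p = z^T Q z for a monomial vector z) with diagonal dominance, which is checkable by a linear program instead of a semidefinite program. A minimal stdlib-only sketch of that check (illustrative only, not the authors' SSOS construction; the function name is hypothetical):

```python
def is_diagonally_dominant(Q):
    """DSOS sufficient condition: each diagonal entry of the symmetric Gram
    matrix dominates the absolute sum of the off-diagonal entries in its row.
    This implies Q is positive semidefinite, hence z^T Q z is a sum of
    squares -- and the check is linear, not semidefinite."""
    n = len(Q)
    return all(Q[i][i] >= sum(abs(Q[i][j]) for j in range(n) if j != i)
               for i in range(n))

# p(x) = x^4 + 2x^2 + 1 with monomial basis z = (1, x, x^2):
# its Gram matrix is diagonally dominant, so p is DSOS.
Q_dsos = [[1, 0, 0], [0, 2, 0], [0, 0, 1]]

# p(x, y) = (x + 2y)^2 + y^2 with basis z = (x, y): the Gram matrix
# [[1, 2], [2, 5]] is PSD (so p is SOS) but fails diagonal dominance,
# illustrating the conservatism of the DSOS inner approximation.
Q_sos_only = [[1, 2], [2, 5]]

print(is_diagonally_dominant(Q_dsos))      # True
print(is_diagonally_dominant(Q_sos_only))  # False
```

SDSOS sits between the two: it replaces diagonal dominance with scaled diagonal dominance, which is checkable by a second-order cone program.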

Generalised absolute stability and Sum of Squares
Automatica, 2013. Co-Authors: Edward J. Hancock, Antonis Papachristodoulou. Abstract: This paper introduces a general framework for analysing nonlinear systems using absolute stability theory and Sum of Squares programming. The technique decomposes a vector field into a system with a polynomial vector field in feedback with a nonlinear memoryless term, which is contained in a generalised sector inequality with polynomial bounds. This decomposition can be used to model uncertainty in the nonlinearity or to bound difficult-to-analyse terms, such as time-varying, non-polynomial or higher-order nonlinearities, by simpler polynomial functions. Conditions for stability and regions of attraction are found using polynomial and Lur'e-type Lyapunov functions, which generalise those used for the derivation of the multivariable circle and Popov criteria in classical absolute stability. The technique extends both absolute stability theory and the applicability of Sum of Squares programming. The usefulness of the technique is demonstrated with illustrative examples.
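The generalised sector inequality above can be illustrated numerically: for example, the memoryless nonlinearity φ(x) = sin x lies between the polynomial bounds x - x^3/6 and x for x in [0, π]. A sampled sanity check of such a sector condition (illustrative only; a sampled check is not a proof and is not the paper's SOS certificate):

```python
import math

def in_polynomial_sector(phi, lower, upper, xs, tol=1e-12):
    """Sampled check of a generalised sector condition with polynomial
    bounds: lower(x) <= phi(x) <= upper(x) at every sample point."""
    return all(lower(x) - tol <= phi(x) <= upper(x) + tol for x in xs)

# sin(x) is bounded below by its Taylor truncation x - x^3/6 and above
# by x on [0, pi], so it satisfies this polynomial sector there.
xs = [i * math.pi / 1000 for i in range(1001)]
ok = in_polynomial_sector(math.sin,
                          lambda x: x - x**3 / 6,
                          lambda x: x,
                          xs)
print(ok)  # True
```

In the paper's framework, a bound of this kind lets a non-polynomial nonlinearity be replaced by polynomial envelopes that SOS programming can handle.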

A Converse Sum of Squares Lyapunov Result with a Degree Bound
arXiv: Classical Analysis and ODEs, 2012. Co-Authors: Matthew M. Peet, Antonis Papachristodoulou. Abstract: Sum of Squares programming has been used extensively over the past decade for the stability analysis of nonlinear systems, but several questions remain unanswered. In this paper, we show that exponential stability of a polynomial vector field on a bounded set implies the existence of a Lyapunov function which is a Sum of Squares of polynomials. In particular, the main result states that if a system is exponentially stable on a bounded nonempty set, then there exists an SOS Lyapunov function which is exponentially decreasing on that bounded set. The proof is constructive and uses the Picard iteration. A bound on the degree of this converse Lyapunov function is also given. This result implies that semidefinite programming can be used to answer the question of stability of a polynomial vector field with a bound on complexity.

A Converse Sum of Squares Lyapunov Result With a Degree Bound
IEEE Transactions on Automatic Control, 2012. Co-Authors: Matthew M. Peet, Antonis Papachristodoulou. Abstract: Although Sum of Squares programming has been used extensively over the past decade for the stability analysis of nonlinear systems, several fundamental questions remain unanswered. In this paper, we show that exponential stability of a polynomial vector field on a bounded set implies the existence of a Lyapunov function which is a Sum of Squares of polynomials. In particular, the main result states that if a system is exponentially stable on a bounded nonempty set, then there exists a Sum of Squares Lyapunov function which is exponentially decreasing on that bounded set. Furthermore, we derive a bound on the degree of this converse Lyapunov function as a function of the continuity and stability properties of the vector field. The proof is constructive and uses the Picard iteration. Our result implies that semidefinite programming can be used to answer the question of stability of a polynomial vector field with a bound on complexity.
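The Picard iteration that drives the constructive proof in the two converse-Lyapunov abstracts above is the map x_{k+1}(t) = x_0 + ∫_0^t f(x_k(s)) ds, whose iterates converge to the solution of x' = f(x). A toy stdlib-only numerical version for the exponentially stable field f(x) = -x (a sketch with trapezoid quadrature, not the paper's construction):

```python
import math

def picard_iterate(f, x0, t_grid, iterations):
    """Picard iteration for x'(t) = f(x(t)), x(0) = x0, on a uniform time
    grid, approximating the integral with the cumulative trapezoid rule."""
    dt = t_grid[1] - t_grid[0]
    x = [x0] * len(t_grid)            # initial guess: the constant function
    for _ in range(iterations):
        fx = [f(v) for v in x]
        new = [x0]
        acc = 0.0
        for k in range(1, len(t_grid)):
            acc += 0.5 * (fx[k - 1] + fx[k]) * dt
            new.append(x0 + acc)
        x = new
    return x

# For f(x) = -x the iterates converge to the true solution x(t) = exp(-t),
# so the endpoint value approaches exp(-1) up to discretization error.
t = [i * 0.001 for i in range(1001)]              # t in [0, 1]
sol = picard_iterate(lambda v: -v, 1.0, t, 30)
err = abs(sol[-1] - math.exp(-1.0))
```

Each Picard iterate of a polynomial field is itself a polynomial, which is what makes a degree bound on the resulting converse Lyapunov function possible.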
David Steurer - One of the best experts on this subject based on the ideXlab platform.

High-Dimensional Estimation via Sum-of-Squares Proofs
Proceedings of the International Congress of Mathematicians (ICM 2018), 2019. Co-Authors: Prasad Raghavendra, Tselil Schramm, David Steurer. Abstract: Estimation is the computational task of recovering a hidden parameter $x$ associated with a distribution $D_x$, given a measurement $y$ sampled from the distribution. High-dimensional estimation problems arise naturally in statistics, machine learning, and complexity theory. Many high-dimensional estimation problems can be formulated as systems of polynomial equations and inequalities, and thus give rise to natural probability distributions over polynomial systems. Sum-of-Squares proofs provide a powerful framework to reason about polynomial systems, and furthermore there exist efficient algorithms to search for low-degree Sum-of-Squares proofs. Understanding and characterizing the power of Sum-of-Squares proofs for estimation problems has been a subject of intense study in recent years. On one hand, there is a growing body of work utilizing Sum-of-Squares proofs for recovering solutions to polynomial systems when the system is feasible. On the other hand, a general technique referred to as pseudo-calibration has been developed towards showing lower bounds on the degree of Sum-of-Squares proofs. Finally, the existence of Sum-of-Squares refutations of a polynomial system has been shown to be intimately connected to the existence of spectral algorithms. In this article we survey these developments.

Robust Moment Estimation and Improved Clustering via Sum of Squares
Symposium on the Theory of Computing, 2018. Co-Authors: Pravesh Kothari, Jacob Steinhardt, David Steurer. Abstract: We develop efficient algorithms for estimating low-degree moments of unknown distributions in the presence of adversarial outliers and design a new family of convex relaxations for k-means clustering based on the Sum-of-Squares method. As an immediate corollary, for any γ > 0, we obtain an efficient algorithm for learning the means of a mixture of k arbitrary distributions in ℝ^d in time d^{O(1/γ)} so long as the means have separation Ω(k^γ). This in particular yields an algorithm for learning Gaussian mixtures with separation Ω(k^γ), thus partially resolving an open problem of Regev and Vijayaraghavan (2017). The guarantees of our robust estimation algorithms improve in many cases significantly over the best previous ones, obtained in recent works. We also show that the guarantees of our algorithms match information-theoretic lower bounds for the class of distributions we consider. These improved guarantees allow us to give improved algorithms for independent component analysis and learning mixtures of Gaussians in the presence of outliers. We also show a sharp upper bound on the Sum-of-Squares norms for moment tensors of any distribution that satisfies the Poincaré inequality. The Poincaré inequality is a central inequality in probability theory, and a large class of distributions satisfy it, including Gaussians, product distributions, strongly log-concave distributions, and any sum or uniformly continuous transformation of such distributions. As a consequence, all of the above algorithmic improvements hold for distributions satisfying the Poincaré inequality.
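The need for robustness above shows up already for the first moment: a single adversarial outlier ruins the empirical mean, while even a crude trimmed mean survives. A toy stdlib-only illustration of this motivating failure mode (this is not the paper's Sum-of-Squares algorithm, and the data are made up):

```python
import statistics

# Ten inlier samples with true mean 1.0, plus one adversarial outlier.
inliers = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05, 1.0, 1.02, 0.98, 1.0]
corrupted = inliers + [1000.0]

def trimmed_mean(xs, k):
    """Drop the k smallest and k largest samples, then average the rest."""
    s = sorted(xs)
    return statistics.mean(s[k:len(s) - k])

naive = statistics.mean(corrupted)     # pulled far above 1.0 by the outlier
robust = trimmed_mean(corrupted, 1)    # stays close to the true mean 1.0
```

The paper's contribution is to get analogous robustness guarantees for higher-degree moments, where simple trimming no longer suffices.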

Exact Tensor Completion with Sum of Squares
Conference on Learning Theory, 2017. Co-Authors: Aaron Potechin, David Steurer. Abstract: We obtain the first polynomial-time algorithm for exact tensor completion that improves over the bound implied by reduction to matrix completion. The algorithm recovers an unknown 3-tensor with $r$ incoherent, orthogonal components in $\mathbb{R}^n$ from $r \cdot \tilde{O}(n^{1.5})$ randomly observed entries of the tensor. This bound improves over the previous best one of $r \cdot \tilde{O}(n^{2})$ obtained by reduction to exact matrix completion. Our bound also matches the best known results for the easier problem of approximate tensor completion (Barak & Moitra, 2015). Our algorithm and analysis extend seminal results for exact matrix completion (Candès & Recht, 2009) to the tensor setting via the Sum-of-Squares method. The main technical challenge is to show that a small number of randomly chosen monomials are enough to construct a degree-3 polynomial with precisely planted orthogonal global optima over the sphere, and that this fact can be certified within the Sum-of-Squares proof system.

Polynomial-Time Tensor Decompositions with Sum of Squares
Foundations of Computer Science, 2016. Co-Authors: Jonathan Shi, David Steurer. Abstract: We give new algorithms based on the Sum-of-Squares method for tensor decomposition. Our results improve the best known running times from quasi-polynomial to polynomial for several problems, including decomposing random overcomplete 3-tensors and learning overcomplete dictionaries with constant relative sparsity. We also give the first robust analysis for decomposing overcomplete 4-tensors in the smoothed analysis model. A key ingredient of our analysis is to establish small spectral gaps in moment matrices derived from solutions to Sum-of-Squares relaxations. To enable this analysis, we augment Sum-of-Squares relaxations with spectral analogs of maximum entropy constraints.

Fast Spectral Algorithms from Sum of Squares Proofs: Tensor Decomposition and Planted Sparse Vectors
Symposium on the Theory of Computing, 2016. Co-Authors: Samuel B. Hopkins, Jonathan Shi, Tselil Schramm, David Steurer. Abstract: We consider two problems that arise in machine learning applications: the problem of recovering a planted sparse vector in a random linear subspace and the problem of decomposing a random low-rank overcomplete 3-tensor. For both problems, the best known guarantees are based on the Sum-of-Squares method. We develop new algorithms inspired by analyses of the Sum-of-Squares method. Our algorithms achieve the same or similar guarantees as Sum-of-Squares for these problems, but the running time is significantly faster. For the planted sparse vector problem, we give an algorithm with running time nearly linear in the input size that approximately recovers a planted sparse vector with up to constant relative sparsity in a random subspace of ℝ^n of dimension up to Ω(√n). These recovery guarantees match the best known ones of Barak, Kelner, and Steurer (STOC 2014) up to logarithmic factors. For tensor decomposition, we give an algorithm with running time close to linear in the input size (with exponent ≈ 1.125) that approximately recovers a component of a random 3-tensor over ℝ^n of rank up to Ω(n^{4/3}). The best previous algorithm for this problem, due to Ge and Ma (RANDOM 2015), works up to rank Ω(n^{3/2}) but requires quasi-polynomial time.
Aaron Potechin - One of the best experts on this subject based on the ideXlab platform.

Machinery for Proving Sum-of-Squares Lower Bounds on Certification Problems
arXiv: Computational Complexity, 2020. Co-Authors: Aaron Potechin, Goutham Rajendran. Abstract: In this paper, we construct general machinery for proving Sum-of-Squares lower bounds on certification problems by generalizing the techniques used by Barak et al. [FOCS 2016] to prove Sum-of-Squares lower bounds for planted clique. Using this machinery, we prove degree-$n^{\epsilon}$ Sum-of-Squares lower bounds for tensor PCA, the Wishart model of sparse PCA, and a variant of planted clique which we call planted slightly denser subgraph.

A Nearly Tight Sum of Squares Lower Bound for the Planted Clique Problem
SIAM Journal on Computing, 2019. Co-Authors: Boaz Barak, Jonathan A. Kelner, Samuel B. Hopkins, Ankur Moitra, Pravesh Kothari, Aaron Potechin. Abstract: We prove that with high probability over the choice of a random graph $G$ from the Erdős-Rényi distribution $G(n,1/2)$, the $n^{O(d)}$-time degree-$d$ Sum-of-Squares (SOS) semidefinite programming...

Sum of Squares Lower Bounds from Symmetry and a Good Story
arXiv: Computational Complexity, 2017. Co-Authors: Aaron Potechin. Abstract: In this paper, we develop machinery for proving Sum of Squares lower bounds on symmetric problems based on the intuition that Sum of Squares has difficulty capturing integrality arguments, i.e., arguments that an expression must be an integer. Using this machinery, we prove a tight Sum of Squares lower bound for the following Turán-type problem: minimize the number of triangles in a graph $G$ with a fixed edge density. We also give an alternative proof of Grigoriev's Sum of Squares lower bound for the knapsack problem.

A Note on Property Testing Sum of Squares and Multivariate Polynomial Interpolation
arXiv: Computational Complexity, 2017. Co-Authors: Aaron Potechin, Liu Yang. Abstract: In this paper, we investigate property testing whether or not a degree-d multivariate polynomial is a Sum of Squares or is far from a Sum of Squares. We show that if we require that the property tester always accepts YES instances and uses random samples, $n^{\Omega(d)}$ samples are required, which is not much fewer than it would take to completely determine the polynomial. To prove this lower bound, we show that with high probability, multivariate polynomial interpolation matches arbitrary values on random points and the resulting polynomial has small norm. We then consider a particular polynomial which is nonnegative yet not a Sum of Squares and use pseudo-expectation values to prove it is far from being a Sum of Squares.
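The classic example of "nonnegative yet not a Sum of Squares" is the Motzkin polynomial M(x, y) = x^4 y^2 + x^2 y^4 - 3 x^2 y^2 + 1, which is nonnegative by the AM-GM inequality applied to the monomials x^4 y^2, x^2 y^4 and 1. A numeric spot-check (whether this is the specific polynomial used in the paper is not stated in the abstract above):

```python
def motzkin(x, y):
    """Motzkin polynomial: nonnegative everywhere by AM-GM applied to
    (x^4 y^2, x^2 y^4, 1), yet it admits no Sum of Squares decomposition."""
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

# Spot-check nonnegativity on a grid over [-2, 2]^2; the minimum value 0
# is attained at |x| = |y| = 1.
grid = [i / 10 - 2.0 for i in range(41)]
min_val = min(motzkin(x, y) for x in grid for y in grid)
print(motzkin(1, 1))  # 0
```

Sampling of course only suggests nonnegativity; the point of pseudo-expectation values, as in the abstract above, is to certify the stronger statement that such a polynomial is far from the SOS cone.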

Exact Tensor Completion with Sum of Squares
Conference on Learning Theory, 2017. Co-Authors: Aaron Potechin, David Steurer. Abstract: We obtain the first polynomial-time algorithm for exact tensor completion that improves over the bound implied by reduction to matrix completion. The algorithm recovers an unknown 3-tensor with $r$ incoherent, orthogonal components in $\mathbb{R}^n$ from $r \cdot \tilde{O}(n^{1.5})$ randomly observed entries of the tensor. This bound improves over the previous best one of $r \cdot \tilde{O}(n^{2})$ obtained by reduction to exact matrix completion. Our bound also matches the best known results for the easier problem of approximate tensor completion (Barak & Moitra, 2015). Our algorithm and analysis extend seminal results for exact matrix completion (Candès & Recht, 2009) to the tensor setting via the Sum-of-Squares method. The main technical challenge is to show that a small number of randomly chosen monomials are enough to construct a degree-3 polynomial with precisely planted orthogonal global optima over the sphere, and that this fact can be certified within the Sum-of-Squares proof system.
Matthew M. Peet - One of the best experts on this subject based on the ideXlab platform.

Stability Analysis of Sampled-Data Systems Using Sum of Squares
IEEE Transactions on Automatic Control, 2013. Co-Authors: Alexandre Seuret, Matthew M. Peet. Abstract: This article proposes a new approach to stability analysis of linear systems with sampled-data inputs or channels. The method, based on a variation of the discrete-time Lyapunov approach, provides stability conditions using functional variables subject to convex constraints. These stability conditions can be solved using the Sum of Squares methodology with little or no conservatism in both the synchronous and asynchronous sampling cases. Numerical examples are included to show convergence.
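The dependence of stability on the sampling period, which the conditions above quantify, shows up even in the simplest example: an integrator plant x' = u under zero-order-hold feedback u = -x(t_k) is stable exactly when the sampling period h lies in (0, 2). A toy simulation of that closed loop (illustrative only; the paper's method handles general linear dynamics and asynchronous sampling):

```python
def sampled_data_response(h, steps, x0=1.0):
    """Integrator plant x' = u with zero-order-hold feedback u = -x(t_k):
    between samples the input is constant, so x(t_{k+1}) = (1 - h) * x(t_k).
    Returns |x| after `steps` sampling periods of length h."""
    x = x0
    for _ in range(steps):
        x = (1.0 - h) * x
    return abs(x)

# The closed loop contracts iff |1 - h| < 1, i.e. for h in (0, 2).
stable = sampled_data_response(0.5, 50)    # h = 0.5: decays toward 0
unstable = sampled_data_response(2.5, 50)  # h = 2.5: |1 - h| = 1.5, diverges
```

For this scalar example the stability boundary can be read off exactly; the value of the SOS conditions in the paper is that they recover such boundaries with little conservatism for systems where no closed-form answer exists.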

A Converse Sum of Squares Lyapunov Result with a Degree Bound
arXiv: Classical Analysis and ODEs, 2012. Co-Authors: Matthew M. Peet, Antonis Papachristodoulou. Abstract: Sum of Squares programming has been used extensively over the past decade for the stability analysis of nonlinear systems, but several questions remain unanswered. In this paper, we show that exponential stability of a polynomial vector field on a bounded set implies the existence of a Lyapunov function which is a Sum of Squares of polynomials. In particular, the main result states that if a system is exponentially stable on a bounded nonempty set, then there exists an SOS Lyapunov function which is exponentially decreasing on that bounded set. The proof is constructive and uses the Picard iteration. A bound on the degree of this converse Lyapunov function is also given. This result implies that semidefinite programming can be used to answer the question of stability of a polynomial vector field with a bound on complexity.

CDC: Bootstrap Current Optimization in Tokamaks Using Sum-of-Squares Polynomials
2012 IEEE 51st Conference on Decision and Control (CDC), 2012. Co-Authors: Aditya Gahlawat, Emmanuel Witrant, Matthew M. Peet, Mazen Alamir. Abstract: In this paper we present a Lyapunov-based feedback design strategy, employing the Sum-of-Squares polynomials framework, to maximize the bootstrap current in tokamaks. The bootstrap current may play an important role in reducing the external energy input required for tokamak operation. The Sum-of-Squares polynomials framework allows us to construct controllers algorithmically. Additionally, we provide a heuristic to account for the control input shape constraints which arise due to limitations on the actuators.

A Converse Sum of Squares Lyapunov Result With a Degree Bound
IEEE Transactions on Automatic Control, 2012. Co-Authors: Matthew M. Peet, Antonis Papachristodoulou. Abstract: Although Sum of Squares programming has been used extensively over the past decade for the stability analysis of nonlinear systems, several fundamental questions remain unanswered. In this paper, we show that exponential stability of a polynomial vector field on a bounded set implies the existence of a Lyapunov function which is a Sum of Squares of polynomials. In particular, the main result states that if a system is exponentially stable on a bounded nonempty set, then there exists a Sum of Squares Lyapunov function which is exponentially decreasing on that bounded set. Furthermore, we derive a bound on the degree of this converse Lyapunov function as a function of the continuity and stability properties of the vector field. The proof is constructive and uses the Picard iteration. Our result implies that semidefinite programming can be used to answer the question of stability of a polynomial vector field with a bound on complexity.
Boaz Barak - One of the best experts on this subject based on the ideXlab platform.

A Nearly Tight Sum of Squares Lower Bound for the Planted Clique Problem
SIAM Journal on Computing, 2019. Co-Authors: Boaz Barak, Jonathan A. Kelner, Samuel B. Hopkins, Ankur Moitra, Pravesh Kothari, Aaron Potechin. Abstract: We prove that with high probability over the choice of a random graph $G$ from the Erdős-Rényi distribution $G(n,1/2)$, the $n^{O(d)}$-time degree-$d$ Sum-of-Squares (SOS) semidefinite programming...

A Nearly Tight Sum of Squares Lower Bound for the Planted Clique Problem
Foundations of Computer Science, 2016. Co-Authors: Boaz Barak, Jonathan A. Kelner, Samuel B. Hopkins, Ankur Moitra, Pravesh Kothari, Aaron Potechin. Abstract: We prove that with high probability over the choice of a random graph G from the Erdős-Rényi distribution G(n, 1/2), the n^{O(d)}-time degree-d Sum-of-Squares semidefinite programming relaxation for the clique problem will give a value of at least n^{1/2 - c(d/log n)^{1/2}} for some constant c > 0. This yields a nearly tight n^{1/2 - o(1)} bound on the value of this program for any degree d = o(log n). Moreover, we introduce a new framework that we call pseudo-calibration to construct Sum-of-Squares lower bounds. This framework is inspired by taking a computational analogue of Bayesian probability theory. It yields a general recipe for constructing good pseudo-distributions (i.e., dual certificates for the Sum-of-Squares semidefinite program), and sheds further light on the ways in which this hierarchy differs from others.
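The random objects in this lower bound are simple to generate: a sample from G(n, 1/2), optionally with a planted k-clique whose detection the SOS relaxation is shown to fail at. A stdlib-only sketch of the instance generation (the SOS relaxation itself is omitted; function names are illustrative):

```python
import random

def gnp_half(n, seed=0):
    """Adjacency sets of an Erdős-Rényi G(n, 1/2) random graph."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 0.5:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def plant_clique(adj, vertices):
    """Add all edges among `vertices`, planting a clique in place."""
    vs = list(vertices)
    for a in range(len(vs)):
        for b in range(a + 1, len(vs)):
            adj[vs[a]].add(vs[b])
            adj[vs[b]].add(vs[a])
    return adj

def is_clique(adj, vertices):
    """Check that every pair in `vertices` is joined by an edge."""
    vs = list(vertices)
    return all(vs[b] in adj[vs[a]]
               for a in range(len(vs)) for b in range(a + 1, len(vs)))

adj = plant_clique(gnp_half(50, seed=1), range(10))
print(is_clique(adj, range(10)))  # True
```

The theorem above says that even though G(n, 1/2) has max clique of size roughly 2 log n, the degree-d SOS relaxation reports a value near n^{1/2}, so it cannot distinguish such a graph from one with a much larger planted clique.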

Noisy Tensor Completion via the Sum of Squares Hierarchy
Conference on Learning Theory, 2016. Co-Authors: Boaz Barak, Ankur Moitra. Abstract: In the noisy tensor completion problem we observe m entries (whose locations are chosen uniformly at random) from an unknown n_1 × n_2 × n_3 tensor T. We assume that T is entry-wise close to being rank r. Our goal is to fill in its missing entries using as few observations as possible. Let n = max(n_1, n_2, n_3). We show that if m = n^{3/2} r, then there is a polynomial-time algorithm based on the sixth level of the Sum-of-Squares hierarchy for completing it. Our estimate agrees with almost all of T's entries almost exactly and works even when our observations are corrupted by noise. This is also the first algorithm for tensor completion that works in the overcomplete case when r > n, and in fact it works all the way up to r = n^{3/2}. Our proofs are short and simple and are based on establishing a new connection between noisy tensor completion (through the language of Rademacher complexity) and the task of refuting random constraint satisfaction problems. This connection seems to have gone unnoticed even in the context of matrix completion. Furthermore, we use this connection to show matching lower bounds. Our main technical result is in characterizing the Rademacher complexity of the sequence of norms that arise in the Sum-of-Squares relaxations to the tensor nuclear norm. These results point to an interesting new direction: can we explore computational versus sample complexity tradeoffs through the Sum-of-Squares hierarchy?

Noisy Tensor Completion via the Sum of Squares Hierarchy
arXiv: Learning, 2015. Co-Authors: Boaz Barak, Ankur Moitra. Abstract: In the noisy tensor completion problem we observe $m$ entries (whose locations are chosen uniformly at random) from an unknown $n_1 \times n_2 \times n_3$ tensor $T$. We assume that $T$ is entry-wise close to being rank $r$. Our goal is to fill in its missing entries using as few observations as possible. Let $n = \max(n_1, n_2, n_3)$. We show that if $m = n^{3/2} r$, then there is a polynomial-time algorithm based on the sixth level of the Sum-of-Squares hierarchy for completing it. Our estimate agrees with almost all of $T$'s entries almost exactly and works even when our observations are corrupted by noise. This is also the first algorithm for tensor completion that works in the overcomplete case when $r > n$, and in fact it works all the way up to $r = n^{3/2-\epsilon}$. Our proofs are short and simple and are based on establishing a new connection between noisy tensor completion (through the language of Rademacher complexity) and the task of refuting random constraint satisfaction problems. This connection seems to have gone unnoticed even in the context of matrix completion. Furthermore, we use this connection to show matching lower bounds. Our main technical result is in characterizing the Rademacher complexity of the sequence of norms that arise in the Sum-of-Squares relaxations to the tensor nuclear norm. These results point to an interesting new direction: can we explore computational versus sample complexity tradeoffs through the Sum-of-Squares hierarchy?

STOC: Rounding Sum-of-Squares Relaxations
Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, 2014. Co-Authors: Boaz Barak, Jonathan A. Kelner, David Steurer. Abstract: We present a general approach to rounding semidefinite programming relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our approach is based on using the connection between these relaxations and the Sum-of-Squares proof system to transform a "combining algorithm" (an algorithm that maps a distribution over solutions into a possibly weaker solution) into a rounding algorithm that maps a solution of the relaxation to a solution of the original problem.