Convex Optimization

The Experts below are selected from a list of 360 Experts worldwide ranked by the ideXlab platform.

Stephen Boyd - One of the best experts on this subject based on the ideXlab platform.

  • Tax-Aware Portfolio Construction via Convex Optimization
    Journal of Optimization Theory and Applications, 2021
    Co-Authors: Nicholas Moehle, Stephen Boyd, Mykel J. Kochenderfer, Andrew Ang
    Abstract:

    We describe an Optimization-based tax-aware portfolio construction method that adds tax liability to standard Markowitz-based portfolio construction. Our method produces a trade list that specifies the number of shares to buy of each asset and the number of shares to sell from each tax lot held. To avoid wash sales (in which some realized capital losses are disallowed), we assume that we trade monthly and cannot simultaneously buy and sell the same asset. The tax-aware portfolio construction problem is not Convex, but it becomes Convex when we specify, for each asset, whether we buy or sell it. It can be solved using standard mixed-integer Convex Optimization methods at the cost of very long solve times for some problem instances. We present a custom Convex relaxation of the problem that borrows curvature from the risk model. This relaxation can provide a good approximation of the true tax liability, while greatly enhancing computational tractability. This method requires the solution of only two Convex Optimization problems: the first determines whether we buy or sell each asset, and the second generates the final trade list. In our numerical experiments, our method almost always solves the nonConvex problem to optimality, and when it does not, it produces a trade list very close to optimal. Backtests show that the performance of our method is indistinguishable from that obtained using a globally optimal solution, but with significantly reduced computational effort.
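
    The central structural point of the abstract (fixing a buy-or-sell decision per asset restores Convexity) can be sketched in CVXPY. This is a simplified illustration with synthetic data, not the authors' implementation: tax lots and the tax-liability term are omitted, leaving only the Markowitz-style backbone.

    ```python
    # Simplified sketch (synthetic data): once each asset is marked "buy" or "sell",
    # the remaining trade-construction problem is a Convex quadratic program.
    import cvxpy as cp
    import numpy as np

    n = 10                                   # number of assets (illustrative)
    w0 = np.ones(n) / n                      # current portfolio weights
    mu = np.linspace(-0.01, 0.02, n)         # synthetic expected returns
    Sigma_sqrt = 0.2 * np.eye(n)             # square root of a synthetic diagonal risk model
    gamma = 5.0                              # risk-aversion weight

    sell_only = [0, 2, 4, 6, 8]              # hypothetical decision: these assets may only be sold
    buy_only = [1, 3, 5, 7, 9]               # ... and these may only be bought

    buy = cp.Variable(n, nonneg=True)        # weight bought per asset
    sell = cp.Variable(n, nonneg=True)       # weight sold per asset
    w = w0 + buy - sell                      # post-trade weights

    constraints = [cp.sum(w) == 1,
                   buy[sell_only] == 0,      # no buying an asset marked "sell"
                   sell[buy_only] == 0]      # no selling an asset marked "buy"

    objective = cp.Maximize(mu @ w - gamma * cp.sum_squares(Sigma_sqrt @ w))
    cp.Problem(objective, constraints).solve()
    print(np.round(w.value, 3))
    ```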

  • Tax-Aware Portfolio Construction via Convex Optimization
    arXiv: Optimization and Control, 2020
    Co-Authors: Nicholas Moehle, Stephen Boyd, Mykel J. Kochenderfer, Andrew Ang
    Abstract:

    We describe an Optimization-based tax-aware portfolio construction method that adds tax liability to a standard Markowitz-based portfolio construction approach that models expected return, risk, and transaction costs. Our method produces a trade list that specifies the number of shares to buy of each asset and the number of shares to sell from each tax lot held. To avoid wash sales (in which some realized capital losses are disallowed), we assume that we trade monthly, and cannot simultaneously buy and sell the same asset. The tax-aware portfolio construction problem is not Convex, but it becomes Convex when we specify, for each asset, whether we buy or sell it. It can be solved using standard mixed-integer Convex Optimization methods at the cost of very long solve times for some problem instances. We present a custom Convex relaxation of the problem that borrows curvature from the risk model. This relaxation can provide a good approximation of the true tax liability, while greatly enhancing computational tractability. This method requires the solution of only two Convex Optimization problems: the first determines whether we buy or sell each asset, and the second generates the final trade list. This method is therefore extremely fast even in the worst case. In our numerical experiments, which are based on a realistic tax-loss harvesting scenario, our method almost always solves the nonConvex problem to optimality, and when it does not, it produces a trade list very close to optimal. Backtests show that the performance of our method is indistinguishable from that obtained using a globally optimal solution, but with significantly reduced computational effort.

  • Differentiable Convex Optimization layers
    arXiv: Learning, 2019
    Co-Authors: Akshay Agrawal, Stephen Boyd, Shane Barratt, Brandon Amos, Steven Diamond, Zico J Kolter
    Abstract:

    Recent work has shown how to embed differentiable Optimization problems (that is, problems whose solutions can be backpropagated through) as layers within deep learning architectures. This method provides a useful inductive bias for certain problems, but existing software for differentiable Optimization layers is rigid and difficult to apply to new settings. In this paper, we propose an approach to differentiating through disciplined Convex programs, a subclass of Convex Optimization problems used by domain-specific languages (DSLs) for Convex Optimization. We introduce disciplined parametrized programming, a subset of disciplined Convex programming, and we show that every disciplined parametrized program can be represented as the composition of an affine map from parameters to problem data, a solver, and an affine map from the solver's solution to a solution of the original problem (a new form we refer to as affine-solver-affine form). We then demonstrate how to efficiently differentiate through each of these components, allowing for end-to-end analytical differentiation through the entire Convex program. We implement our methodology in version 1.1 of CVXPY, a popular Python-embedded DSL for Convex Optimization, and additionally implement differentiable layers for disciplined Convex programs in PyTorch and TensorFlow 2.0. Our implementation significantly lowers the barrier to using Convex Optimization problems in differentiable programs. We present applications in linear machine learning models and in stochastic control, and we show that our layer is competitive (in execution time) compared to specialized differentiable solvers from past work.
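
    A brief usage sketch, assuming the cvxpylayers companion package described in the paper (PyTorch interface); the parametrized problem below is a toy example chosen here for illustration, not one from the paper.

    ```python
    # Toy differentiable Convex Optimization layer: a disciplined parametrized program with
    # parameters (A, b) becomes a PyTorch layer whose solution can be backpropagated through.
    import cvxpy as cp
    import torch
    from cvxpylayers.torch import CvxpyLayer

    n, m = 2, 3
    x = cp.Variable(n)
    A = cp.Parameter((m, n))
    b = cp.Parameter(m)
    problem = cp.Problem(cp.Minimize(0.5 * cp.pnorm(A @ x - b, p=1)), [x >= 0])
    assert problem.is_dpp()                  # disciplined parametrized program check

    layer = CvxpyLayer(problem, parameters=[A, b], variables=[x])
    A_t = torch.randn(m, n, requires_grad=True)
    b_t = torch.randn(m, requires_grad=True)

    solution, = layer(A_t, b_t)              # forward pass: solve the problem at (A_t, b_t)
    solution.sum().backward()                # backward pass: gradients w.r.t. A_t and b_t
    print(A_t.grad, b_t.grad)
    ```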

  • Infeasibility detection in the alternating direction method of multipliers for Convex Optimization
    UKACC International Conference on Control, 2018
    Co-Authors: Goran Banjac, Paul J Goulart, Bartolomeo Stellato, Stephen Boyd
    Abstract:

    The alternating direction method of multipliers (ADMM) is a powerful operator splitting technique for solving structured Optimization problems. For Convex Optimization problems, it is well-known that the iterates generated by ADMM converge to a solution provided that it exists. If a solution does not exist then the ADMM iterates do not converge. Nevertheless, we show that the ADMM iterates yield conclusive information regarding problem infeasibility for a wide class of Convex Optimization problems including both quadratic and conic programs. In particular, we show that in the limit the ADMM iterates either satisfy a set of first-order optimality conditions or produce a certificate of either primal or dual infeasibility. Based on these results, we propose termination criteria for detecting primal and dual infeasibility in ADMM.
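
    As background for the operator-splitting setting, a minimal ADMM iteration for a (feasible) lasso problem is sketched below with standard residual-based termination. The paper's contribution concerns what the iterates reveal when the problem is infeasible and they do not converge, which this sketch does not implement.

    ```python
    # Minimal ADMM sketch for lasso: minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z.
    # Standard residual-based stopping only; infeasibility detection (the paper's topic) is not shown.
    import numpy as np

    def lasso_admm(A, b, lam, rho=1.0, eps=1e-8, max_iter=10000):
        m, n = A.shape
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
        AtA, Atb = A.T @ A, A.T @ b
        L = np.linalg.cholesky(AtA + rho * np.eye(n))        # factor once, reuse every iteration
        for _ in range(max_iter):
            x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
            z_old = z
            z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)   # soft-thresholding
            u = u + x - z                                    # scaled dual update
            primal_res = np.linalg.norm(x - z)
            dual_res = rho * np.linalg.norm(z - z_old)
            if primal_res < eps and dual_res < eps:          # meaningful only when a solution exists
                break
        return z

    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
    print(np.round(lasso_admm(A, b, lam=0.1), 3))
    ```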

  • A rewriting system for Convex Optimization problems
    arXiv: Optimization and Control, 2017
    Co-Authors: Akshay Agrawal, Steven Diamond, Robin Verschueren, Stephen Boyd
    Abstract:

    We describe a modular rewriting system for translating Optimization problems written in a domain-specific language to forms compatible with low-level solver interfaces. Translation is facilitated by reductions, which accept a category of problems and transform instances of that category to equivalent instances of another category. Our system proceeds in two key phases: analysis, in which we attempt to find a suitable solver for a supplied problem, and canonicalization, in which we rewrite the problem in the selected solver's standard form. We implement the described system in version 1.0 of CVXPY, a domain-specific language for mathematical and especially Convex Optimization. By treating reductions as first-class objects, our method makes it easy to match problems to solvers well-suited for them and to support solvers with a wide variety of standard forms.
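
    A small usage sketch against CVXPY 1.x's public interface; the toy problem is chosen here for illustration. The analysis phase picks among installed solvers, and get_problem_data exposes the canonicalized data and reduction chain produced by the rewriting system.

    ```python
    # Illustrative: solve a toy problem and inspect the rewriting system's output for one solver.
    import cvxpy as cp

    x = cp.Variable(3)
    problem = cp.Problem(cp.Minimize(cp.norm1(x - 1)), [cp.sum(x) == 1, x >= 0])

    print(cp.installed_solvers())              # solver targets available to the analysis phase
    problem.solve(solver=cp.SCS)               # canonicalize to SCS's conic standard form, then solve
    print(round(problem.value, 4))

    # The canonicalized problem data handed to SCS, plus the chain of reductions that
    # produced it (and that is later inverted to recover a solution of the original problem).
    data, chain, inverse_data = problem.get_problem_data(cp.SCS)
    print(list(data.keys()), type(chain).__name__)
    ```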

Martin J. Wainwright - One of the best experts on this subject based on the ideXlab platform.

  • Discussion: Latent variable graphical model selection via Convex Optimization
    Annals of Statistics, 2012
    Co-Authors: Martin J. Wainwright
    Abstract:

    Discussion of "Latent variable graphical model selection via Convex Optimization" by Venkat Chandrasekaran, Pablo A. Parrilo and Alan S. Willsky [arXiv:1008.1290].

  • Information-theoretic lower bounds on the oracle complexity of stochastic Convex Optimization
    IEEE Transactions on Information Theory, 2012
    Co-Authors: Alekh Agarwal, Peter L Bartlett, Pradeep Ravikumar, Martin J. Wainwright
    Abstract:

    Relative to the large literature on upper bounds on complexity of Convex Optimization, lesser attention has been paid to the fundamental hardness of these problems. Given the extensive use of Convex Optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic Convex Optimization in an oracle model of computation. We introduce a new notion of discrepancy between functions, and use it to reduce problems of stochastic Convex Optimization to statistical parameter estimation, which can be lower bounded using information-theoretic methods. Using this approach, we improve upon known results and obtain tight minimax complexity estimates for various function classes.
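
    For orientation, the quantity that such lower bounds control can be written as a minimax optimization error over all methods that issue T queries to a stochastic first-order oracle; the notation below is generic rather than the paper's.

    ```latex
    % Minimax oracle error over a function class F on a feasible set S, after T oracle queries:
    \epsilon^{*}(\mathcal{F}, \mathcal{S}; T)
      \;=\; \inf_{\mathsf{M}_{T}} \; \sup_{f \in \mathcal{F}} \;
            \mathbb{E}\!\left[\, f\!\left(x_{\mathsf{M}_{T}}\right) - \inf_{x \in \mathcal{S}} f(x) \,\right]
    % where x_{M_T} is the point returned by method M_T after T noisy gradient/value queries.
    % Up to problem-dependent (e.g., dimension) factors, this error is known to decay as
    % 1/\sqrt{T} for Convex Lipschitz classes and as 1/T for strongly Convex classes, which is
    % the sense in which the estimates are "tight".
    ```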

  • Information-theoretic lower bounds on the oracle complexity of Convex Optimization
    2010
    Co-Authors: Alekh Agarwal, Peter L Bartlett, Pradeep Ravikumar, Martin J. Wainwright
    Abstract:

    Relative to the large literature on upper bounds on complexity of Convex Optimization, lesser attention has been paid to the fundamental hardness of these problems. Given the extensive use of Convex Optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic Convex Optimization in an oracle model of computation. We improve upon known results and obtain tight minimax complexity estimates for various function classes.

  • Information-theoretic lower bounds on the oracle complexity of stochastic Convex Optimization
    arXiv: Machine Learning, 2010
    Co-Authors: Alekh Agarwal, Peter L Bartlett, Pradeep Ravikumar, Martin J. Wainwright
    Abstract:

    Relative to the large literature on upper bounds on complexity of Convex Optimization, lesser attention has been paid to the fundamental hardness of these problems. Given the extensive use of Convex Optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic Convex Optimization in an oracle model of computation. We improve upon known results and obtain tight minimax complexity estimates for various function classes.

  • Information-theoretic lower bounds on the oracle complexity of Convex Optimization
    Neural Information Processing Systems, 2009
    Co-Authors: Alekh Agarwal, Martin J. Wainwright, Peter L Bartlett, Pradeep Ravikumar
    Abstract:

    Despite a large literature on upper bounds on complexity of Convex Optimization, relatively less attention has been paid to the fundamental hardness of these problems. Given the extensive use of Convex Optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic Convex Optimization in an oracle model of computation. We improve upon known results and obtain tight minimax complexity estimates for various function classes. We also discuss implications of these results for understanding the inherent complexity of large-scale learning and estimation problems.

Beverley Mckeon - One of the best experts on this subject based on the ideXlab platform.

  • A low-order decomposition of turbulent channel flow via resolvent analysis and Convex Optimization
    Physics of Fluids, 2014
    Co-Authors: Rashad Moarref, Mihailo R Jovanovic, Joel A Tropp, A S Sharma, Beverley Mckeon
    Abstract:

    We combine resolvent-mode decomposition with techniques from Convex Optimization to optimally approximate velocity spectra in a turbulent channel. The velocity is expressed as a weighted sum of resolvent modes that are dynamically significant, non-empirical, and scalable with Reynolds number. To optimally represent direct numerical simulations (DNS) data at friction Reynolds number 2003, we determine the weights of resolvent modes as the solution of a Convex Optimization problem. Using only 12 modes per wall-parallel wavenumber pair and temporal frequency, we obtain close agreement with DNS-spectra, reducing the wall-normal and temporal resolutions used in the simulation by three orders of magnitude.
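
    An illustrative CVXPY sketch, with synthetic data and simplified to a single wavenumber-frequency combination, of the kind of Convex weight-selection problem the abstract describes: choose nonnegative mode weights so that the weighted sum of mode contributions matches a target spectrum.

    ```python
    # Illustrative only: fit nonnegative resolvent-mode weights to a synthetic target spectrum.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n_wall, n_modes = 50, 12                  # wall-normal grid points, modes per wavenumber/frequency pair
    Phi = rng.random((n_wall, n_modes))       # synthetic per-mode contributions to the spectrum
    target = Phi @ rng.random(n_modes)        # synthetic "DNS" spectrum to be approximated

    w = cp.Variable(n_modes, nonneg=True)     # mode weights to be determined
    problem = cp.Problem(cp.Minimize(cp.sum_squares(Phi @ w - target)))
    problem.solve()
    print(np.round(w.value, 3))
    ```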

  • A low-order decomposition of turbulent channel flow via resolvent analysis and Convex Optimization
    arXiv: Fluid Dynamics, 2014
    Co-Authors: Rashad Moarref, Mihailo R Jovanovic, Joel A Tropp, A S Sharma, Beverley Mckeon
    Abstract:

    We combine resolvent-mode decomposition with techniques from Convex Optimization to optimally approximate velocity spectra in a turbulent channel. The velocity is expressed as a weighted sum of resolvent modes that are dynamically significant, non-empirical, and scalable with Reynolds number. To optimally represent DNS data at friction Reynolds number 2003, we determine the weights of resolvent modes as the solution of a Convex Optimization problem. Using only 12 modes per wall-parallel wavenumber pair and temporal frequency, we obtain close agreement with DNS-spectra, reducing the wall-normal and temporal resolutions used in the simulation by three orders of magnitude.

Wei Ren - One of the best experts on this subject based on the ideXlab platform.

  • Continuous-time distributed subgradient algorithm for Convex Optimization with general constraints
    IEEE Transactions on Automatic Control, 2019
    Co-Authors: Yanan Zhu, Guanghui Wen, Guanrong Chen, Wei Ren
    Abstract:

    The distributed Convex Optimization problem is studied in this paper for any fixed and connected network with general constraints. To solve such an Optimization problem, a new type of continuous-time distributed subgradient Optimization algorithm is proposed based on the Karush–Kuhn–Tucker condition. By using tools from nonsmooth analysis and set-valued function theory, it is proved that the distributed Convex Optimization problem is solved on a network of agents equipped with the designed algorithm. For the case that the objective function is Convex but not strictly Convex, it is proved that the states of the agents associated with optimal variables could converge to an optimal solution of the Optimization problem. For the case that the objective function is strictly Convex, it is further shown that the states of agents associated with optimal variables could converge to the unique optimal solution. Finally, some simulations are performed to illustrate the theoretical analysis.
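
    For reference, the Karush–Kuhn–Tucker condition that the algorithm design starts from can be written, in generic notation (not necessarily the paper's), for minimizing a sum of local Convex objectives under Convex inequality and affine equality constraints.

    ```latex
    % Problem: minimize over x in R^n the sum of f_i(x), i = 1,...,N, subject to
    % g_j(x) <= 0 for j = 1,...,p (each g_j Convex) and Ax = b.
    % KKT conditions at an optimum x*, with multipliers lambda* and nu*:
    0 \in \sum_{i=1}^{N} \partial f_i(x^{\star})
          + \sum_{j=1}^{p} \lambda_j^{\star}\, \partial g_j(x^{\star})
          + A^{\top} \nu^{\star},
    \qquad
    g_j(x^{\star}) \le 0, \quad \lambda_j^{\star} \ge 0, \quad \lambda_j^{\star}\, g_j(x^{\star}) = 0, \quad
    A x^{\star} = b .
    ```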

  • Distributed Convex Optimization for continuous-time dynamics with time-varying cost function
    arXiv: Optimization and Control, 2015
    Co-Authors: Salar Rahili, Wei Ren
    Abstract:

    In this paper, a time-varying distributed Convex Optimization problem is studied for continuous-time multi-agent systems. Control algorithms are designed for the cases of single-integrator and double-integrator dynamics. Two discontinuous algorithms based on the signum function are proposed to solve the problem in each case. Then in the case of double-integrator dynamics, two continuous algorithms based on, respectively, a time-varying and a fixed boundary layer are proposed as continuous approximations of the signum function. Also, to account for inter-agent collision for physical agents, a distributed Convex Optimization problem with swarm tracking behavior is introduced for both single-integrator and double-integrator dynamics.
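
    A toy, single-agent illustration (not the paper's distributed algorithm) of why signum-based feedback suits a time-varying cost: for f(x, t) = (x - sin t)^2 the minimizer moves with speed at most 1, so a single-integrator agent with signum gain larger than 1 reaches it in finite time and then tracks it.

    ```python
    # Toy illustration only: signum-based tracking of a moving minimizer by a single integrator.
    import numpy as np

    dt, T = 1e-3, 10.0
    alpha = 2.0                               # gain exceeding the minimizer's maximum speed (here 1)
    x = 3.0                                   # single-integrator state, xdot = u
    for k in range(int(T / dt)):
        t = k * dt
        u = -alpha * np.sign(x - np.sin(t))   # discontinuous, signum-based control
        x += dt * u

    print(x, np.sin(T))                       # after a finite-time transient, x chatters tightly around sin(t)
    ```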

Alekh Agarwal - One of the best experts on this subject based on the ideXlab platform.

  • Information-theoretic lower bounds on the oracle complexity of stochastic Convex Optimization
    IEEE Transactions on Information Theory, 2012
    Co-Authors: Alekh Agarwal, Peter L Bartlett, Pradeep Ravikumar, Martin J. Wainwright
    Abstract:

    Relative to the large literature on upper bounds on complexity of Convex Optimization, lesser attention has been paid to the fundamental hardness of these problems. Given the extensive use of Convex Optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic Convex Optimization in an oracle model of computation. We introduce a new notion of discrepancy between functions, and use it to reduce problems of stochastic Convex Optimization to statistical parameter estimation, which can be lower bounded using information-theoretic methods. Using this approach, we improve upon known results and obtain tight minimax complexity estimates for various function classes.

  • Information-theoretic lower bounds on the oracle complexity of Convex Optimization
    2010
    Co-Authors: Alekh Agarwal, Peter L Bartlett, Pradeep Ravikumar, Martin J. Wainwright
    Abstract:

    Relative to the large literature on upper bounds on complexity of Convex Optimization, lesser attention has been paid to the fundamental hardness of these problems. Given the extensive use of Convex Optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic Convex Optimization in an oracle model of computation. We improve upon known results and obtain tight minimax complexity estimates for various function classes.

  • Information-theoretic lower bounds on the oracle complexity of stochastic Convex Optimization
    arXiv: Machine Learning, 2010
    Co-Authors: Alekh Agarwal, Peter L Bartlett, Pradeep Ravikumar, Martin J. Wainwright
    Abstract:

    Relative to the large literature on upper bounds on complexity of Convex Optimization, lesser attention has been paid to the fundamental hardness of these problems. Given the extensive use of Convex Optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic Convex Optimization in an oracle model of computation. We improve upon known results and obtain tight minimax complexity estimates for various function classes.

  • Optimal algorithms for online Convex Optimization with multi-point bandit feedback
    Conference on Learning Theory, 2010
    Co-Authors: Alekh Agarwal, Ofer Dekel, Lin Xiao
    Abstract:

    Bandit Convex Optimization is a special case of online Convex Optimization with partial information. In this setting, a player attempts to minimize a sequence of adversarially generated Convex loss functions, while only observing the value of each function at a single point. In some cases, the minimax regret of these problems is known to be strictly worse than the minimax regret in the corresponding full information setting. We introduce the multi-point bandit setting, in which the player can query each loss function at multiple points. When the player is allowed to query each function at two points, we prove regret bounds that closely resemble bounds for the full information case. This suggests that knowing the value of each loss function at two points is almost as useful as knowing the value of each function everywhere. When the player is allowed to query each function at d+1 points (d being the dimension of the space), we prove regret bounds that are exactly equivalent to full information bounds for smooth functions.
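
    A sketch of a standard two-point gradient estimator of the kind that powers the two-query setting; the loss and smoothing radius below are illustrative choices, not the paper's experiments.

    ```python
    # Two-point bandit feedback: query f at x + delta*u and x - delta*u (u uniform on the unit
    # sphere) and form a gradient estimate; in expectation it approximates the gradient of a
    # smoothed version of f, which is what makes two queries almost as good as full information.
    import numpy as np

    def two_point_gradient(f, x, delta, rng):
        d = x.size
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                               # uniform direction on the unit sphere
        return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

    rng = np.random.default_rng(0)
    f = lambda v: 0.5 * np.sum(v ** 2)                       # illustrative Convex loss, gradient = v
    x = np.ones(5)
    print(two_point_gradient(f, x, delta=1e-2, rng=rng), "vs true gradient", x)
    ```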

  • Information-theoretic lower bounds on the oracle complexity of Convex Optimization
    Neural Information Processing Systems, 2009
    Co-Authors: Alekh Agarwal, Martin J. Wainwright, Peter L Bartlett, Pradeep Ravikumar
    Abstract:

    Despite a large literature on upper bounds on complexity of Convex Optimization, relatively less attention has been paid to the fundamental hardness of these problems. Given the extensive use of Convex Optimization in machine learning and statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic Convex Optimization in an oracle model of computation. We improve upon known results and obtain tight minimax complexity estimates for various function classes. We also discuss implications of these results for understanding the inherent complexity of large-scale learning and estimation problems.