The experts below are selected from a list of 67,680 experts worldwide ranked by the ideXlab platform.
Kimchuan Toh - One of the best experts on this subject based on the ideXlab platform.
-
An augmented Lagrangian method with constraint generations for shape-constrained convex regression problems
arXiv: Optimization and Control, 2020. Co-Authors: Meixia Lin, Defeng Sun, Kimchuan Toh.
Abstract: The shape-constrained convex regression problem deals with fitting a convex function to observed data, where additional constraints are imposed, such as component-wise monotonicity and uniform Lipschitz continuity. This paper provides a unified framework for computing the least squares estimator of a multivariate shape-constrained convex regression function in $\mathbb{R}^d$. We prove that the least squares estimator is computable via solving an essentially constrained convex quadratic programming (QP) problem with $(n+1)d$ variables, $n(n-1)$ linear inequality constraints and $n$ possibly non-polyhedral inequality constraints, where $n$ is the number of data points. To efficiently solve the generally very large-scale convex QP, we design a proximal augmented Lagrangian method (proxALM) whose subproblems are solved by the semismooth Newton method (SSN). To further accelerate the computation when $n$ is huge, we design a practical implementation of the constraint generation method such that each reduced problem is efficiently solved by our proposed proxALM. Comprehensive numerical experiments, including those in the pricing of basket options and estimation of production functions in economics, demonstrate that our proposed proxALM outperforms the state-of-the-art algorithms, and the proposed acceleration technique further shortens the computation time by a large margin.
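The proximal ALM and constraint generation scheme described above can be imitated on a toy inequality-constrained least squares problem. Everything below is an illustrative sketch under simplifying assumptions: the function names are invented, and a plain gradient inner solver stands in for the semismooth Newton method used in the paper.

```python
import numpy as np

def prox_alm(y, A, b, sigma=1.0, tau=0.5, outer=200, inner=50):
    """Proximal augmented Lagrangian sketch for
        min_x 0.5*||x - y||^2  s.t.  A x <= b.
    The inner subproblem is minimized inexactly by gradient steps;
    the paper uses a semismooth Newton method (SSN) instead."""
    x = y.astype(float).copy()
    mu = np.zeros(A.shape[0])                    # multipliers for A x <= b
    step = 1.0 / (1.0 + sigma * np.linalg.norm(A, 2) ** 2 + tau)
    for _ in range(outer):
        x_prev = x.copy()
        for _ in range(inner):                   # inexact inner solve
            s = np.maximum(0.0, mu + sigma * (A @ x - b))
            grad = (x - y) + A.T @ s + tau * (x - x_prev)
            x = x - step * grad
        mu = np.maximum(0.0, mu + sigma * (A @ x - b))
    return x

def constraint_generation(y, A, b, tol=1e-6, rounds=20):
    """Solve using only an active subset of the rows of A, adding the
    most violated constraint after each reduced solve."""
    idx = []
    x = y.astype(float).copy()
    for _ in range(rounds):
        viol = A @ x - b
        j = int(np.argmax(viol))
        if viol[j] <= tol:                       # all constraints satisfied
            break
        if j not in idx:
            idx.append(j)
        x = prox_alm(y, A[idx], b[idx])          # reduced problem via proxALM
    return x
```

For instance, projecting the point (2, 2) onto the half-space x1 + x2 <= 2 returns approximately (1, 1), and the constraint generation loop only ever activates the single binding row.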
-
On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming
Mathematical Programming, 2019. Co-Authors: Ying Cui, Defeng Sun, Kimchuan Toh.
Abstract: Due to the possible lack of primal-dual-type error bounds, it was not clear whether the Karush–Kuhn–Tucker (KKT) residuals of the sequence generated by the augmented Lagrangian method (ALM) for solving convex composite conic programming (CCCP) problems converge superlinearly. In this paper, we resolve this issue by establishing the R-superlinear convergence of the KKT residuals generated by the ALM under only a mild quadratic growth condition on the dual of CCCP, with easy-to-implement stopping criteria for the augmented Lagrangian subproblems. This discovery may help to explain the good numerical performance of several recently developed semismooth Newton-CG based ALM solvers for linear and convex quadratic semidefinite programming.
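The quantity studied above, the KKT residual along the ALM iterate sequence, is easy to monitor on a toy equality-constrained QP. The sketch below uses an exact inner solve and hypothetical parameter choices purely to illustrate what "KKT residual of the ALM sequence" means; it makes no claim about the paper's convergence rates.

```python
import numpy as np

def alm_kkt_history(Q, c, A, b, sigma=10.0, iters=30):
    """ALM for  min 0.5 x'Qx + c'x  s.t.  Ax = b,  tracking the KKT
    residual ||Qx + c + A'lam|| + ||Ax - b|| after each outer step.
    Each subproblem is solved exactly via a linear system (toy sketch)."""
    lam = np.zeros(A.shape[0])
    hist = []
    for _ in range(iters):
        # argmin_x of the augmented Lagrangian is a linear solve
        H = Q + sigma * A.T @ A
        g = c + A.T @ lam - sigma * A.T @ b
        x = np.linalg.solve(H, -g)
        lam = lam + sigma * (A @ x - b)          # multiplier update
        kkt = (np.linalg.norm(Q @ x + c + A.T @ lam)
               + np.linalg.norm(A @ x - b))
        hist.append(kkt)
    return x, lam, hist
```

On, say, min 0.5||x||^2 - (x1 + x2) subject to x1 + x2 = 1, the iterates converge to x = (0.5, 0.5) with multiplier 0.5, and the recorded KKT residuals decrease monotonically to numerical zero.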
-
QSDPNAL: a two-phase augmented Lagrangian method for convex quadratic semidefinite programming
Mathematical Programming Computation, 2018. Co-Authors: Defeng Sun, Kimchuan Toh.
Abstract: In this paper, we present a two-phase augmented Lagrangian method, called QSDPNAL, for solving convex quadratic semidefinite programming (QSDP) problems with constraints consisting of a large number of linear equality and inequality constraints, a simple convex polyhedral set constraint, and a positive semidefinite cone constraint. A first-order algorithm which relies on the inexact Schur complement based decomposition technique is developed in QSDPNAL-Phase I with the aim of solving a QSDP problem to moderate accuracy or using it to generate a reasonably good initial point for the second phase. In QSDPNAL-Phase II, we design an augmented Lagrangian method (ALM) wherein the inner subproblem in each iteration is solved via inexact semismooth Newton based algorithms. Simple and implementable stopping criteria are designed for the ALM. Moreover, under mild conditions, we are able to establish the rate of convergence of the proposed algorithm and prove the R-(super)linear convergence of the KKT residual. In the implementation of QSDPNAL, we also develop efficient techniques for solving large-scale linear systems of equations under certain subspace constraints. More specifically, simpler and yet better conditioned linear systems are carefully designed to replace the original linear systems, and novel shadow sequences are constructed to alleviate the numerical difficulties brought about by the crucial subspace constraints. Extensive numerical results for various large-scale QSDPs show that our two-phase algorithm is highly efficient and robust in obtaining accurate solutions. The software reviewed as part of this submission was given the DOI (Digital Object Identifier) https://doi.org/10.5281/zenodo.1206980.
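A core building block behind the semismooth Newton phase of solvers like QSDPNAL is the metric projection onto the positive semidefinite cone, which is strongly semismooth. The helper below is a minimal standard sketch (eigenvalue thresholding), not QSDPNAL code.

```python
import numpy as np

def proj_psd(X):
    """Project a symmetric matrix onto the PSD cone by zeroing out the
    negative eigenvalues: Pi(X) = V diag(max(w, 0)) V'. This projection
    is strongly semismooth, which is the property that semismooth
    Newton based ALM solvers exploit in their inner subproblems."""
    w, V = np.linalg.eigh((X + X.T) / 2.0)       # symmetrize defensively
    return (V * np.maximum(w, 0.0)) @ V.T
```

For example, projecting diag(1, -2) yields diag(1, 0); the map is idempotent and always returns a PSD matrix.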
-
On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming
arXiv: Optimization and Control, 2017. Co-Authors: Ying Cui, Defeng Sun, Kimchuan Toh.
Abstract: Due to the possible lack of primal-dual-type error bounds, the superlinear convergence of the Karush-Kuhn-Tucker (KKT) residuals of the sequence generated by the augmented Lagrangian method (ALM) for solving convex composite conic programming (CCCP) has long been an outstanding open question. In this paper, we aim to resolve this issue by first conducting a convergence rate analysis for the ALM with Rockafellar's stopping criteria under only a mild quadratic growth condition on the dual of CCCP. More importantly, by further assuming that the Robinson constraint qualification holds, we establish the R-superlinear convergence of the KKT residuals of the iterative sequence under easy-to-implement stopping criteria for the augmented Lagrangian subproblems. Equipped with this discovery, we gain insightful interpretations of the impressive numerical performance of several recently developed semismooth Newton-CG based ALM solvers for solving linear and convex quadratic semidefinite programming.
-
QSDPNAL: a two-phase augmented Lagrangian method for convex quadratic semidefinite programming
arXiv: Optimization and Control, 2015. Co-Authors: Defeng Sun, Kimchuan Toh.
Abstract: In this paper, we present a two-phase augmented Lagrangian method, called QSDPNAL, for solving convex quadratic semidefinite programming (QSDP) problems with constraints consisting of a large number of linear equality and inequality constraints, a simple convex polyhedral set constraint, and a positive semidefinite cone constraint. A first-order algorithm which relies on the inexact Schur complement based decomposition technique is developed in QSDPNAL-Phase I with the aim of solving a QSDP problem to moderate accuracy or using it to generate a reasonably good initial point for the second phase. In QSDPNAL-Phase II, we design an augmented Lagrangian method (ALM) where the inner subproblem in each iteration is solved via inexact semismooth Newton based algorithms. Simple and implementable stopping criteria are designed for the ALM. Moreover, under mild conditions, we are able to establish the rate of convergence of the proposed algorithm and prove the R-(super)linear convergence of the KKT residual. In the implementation of QSDPNAL, we also develop efficient techniques for solving large-scale linear systems of equations under certain subspace constraints. More specifically, simpler and yet better conditioned linear systems are carefully designed to replace the original linear systems, and novel shadow sequences are constructed to alleviate the numerical difficulties brought about by the crucial subspace constraints. Extensive numerical results for various large-scale QSDPs show that our two-phase algorithm is highly efficient and robust in obtaining accurate solutions.
Defeng Sun - One of the best experts on this subject based on the ideXlab platform.
-
An augmented Lagrangian method with constraint generations for shape-constrained convex regression problems
arXiv: Optimization and Control, 2020. Co-Authors: Meixia Lin, Defeng Sun, Kimchuan Toh.
Abstract: The shape-constrained convex regression problem deals with fitting a convex function to observed data, where additional constraints are imposed, such as component-wise monotonicity and uniform Lipschitz continuity. This paper provides a unified framework for computing the least squares estimator of a multivariate shape-constrained convex regression function in $\mathbb{R}^d$. We prove that the least squares estimator is computable via solving an essentially constrained convex quadratic programming (QP) problem with $(n+1)d$ variables, $n(n-1)$ linear inequality constraints and $n$ possibly non-polyhedral inequality constraints, where $n$ is the number of data points. To efficiently solve the generally very large-scale convex QP, we design a proximal augmented Lagrangian method (proxALM) whose subproblems are solved by the semismooth Newton method (SSN). To further accelerate the computation when $n$ is huge, we design a practical implementation of the constraint generation method such that each reduced problem is efficiently solved by our proposed proxALM. Comprehensive numerical experiments, including those in the pricing of basket options and estimation of production functions in economics, demonstrate that our proposed proxALM outperforms the state-of-the-art algorithms, and the proposed acceleration technique further shortens the computation time by a large margin.
-
On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming
Mathematical Programming, 2019. Co-Authors: Ying Cui, Defeng Sun, Kimchuan Toh.
Abstract: Due to the possible lack of primal-dual-type error bounds, it was not clear whether the Karush–Kuhn–Tucker (KKT) residuals of the sequence generated by the augmented Lagrangian method (ALM) for solving convex composite conic programming (CCCP) problems converge superlinearly. In this paper, we resolve this issue by establishing the R-superlinear convergence of the KKT residuals generated by the ALM under only a mild quadratic growth condition on the dual of CCCP, with easy-to-implement stopping criteria for the augmented Lagrangian subproblems. This discovery may help to explain the good numerical performance of several recently developed semismooth Newton-CG based ALM solvers for linear and convex quadratic semidefinite programming.
-
QSDPNAL: a two-phase augmented Lagrangian method for convex quadratic semidefinite programming
Mathematical Programming Computation, 2018. Co-Authors: Defeng Sun, Kimchuan Toh.
Abstract: In this paper, we present a two-phase augmented Lagrangian method, called QSDPNAL, for solving convex quadratic semidefinite programming (QSDP) problems with constraints consisting of a large number of linear equality and inequality constraints, a simple convex polyhedral set constraint, and a positive semidefinite cone constraint. A first-order algorithm which relies on the inexact Schur complement based decomposition technique is developed in QSDPNAL-Phase I with the aim of solving a QSDP problem to moderate accuracy or using it to generate a reasonably good initial point for the second phase. In QSDPNAL-Phase II, we design an augmented Lagrangian method (ALM) wherein the inner subproblem in each iteration is solved via inexact semismooth Newton based algorithms. Simple and implementable stopping criteria are designed for the ALM. Moreover, under mild conditions, we are able to establish the rate of convergence of the proposed algorithm and prove the R-(super)linear convergence of the KKT residual. In the implementation of QSDPNAL, we also develop efficient techniques for solving large-scale linear systems of equations under certain subspace constraints. More specifically, simpler and yet better conditioned linear systems are carefully designed to replace the original linear systems, and novel shadow sequences are constructed to alleviate the numerical difficulties brought about by the crucial subspace constraints. Extensive numerical results for various large-scale QSDPs show that our two-phase algorithm is highly efficient and robust in obtaining accurate solutions. The software reviewed as part of this submission was given the DOI (Digital Object Identifier) https://doi.org/10.5281/zenodo.1206980.
-
On the R-superlinear convergence of the KKT residuals generated by the augmented Lagrangian method for convex composite conic programming
arXiv: Optimization and Control, 2017. Co-Authors: Ying Cui, Defeng Sun, Kimchuan Toh.
Abstract: Due to the possible lack of primal-dual-type error bounds, the superlinear convergence of the Karush-Kuhn-Tucker (KKT) residuals of the sequence generated by the augmented Lagrangian method (ALM) for solving convex composite conic programming (CCCP) has long been an outstanding open question. In this paper, we aim to resolve this issue by first conducting a convergence rate analysis for the ALM with Rockafellar's stopping criteria under only a mild quadratic growth condition on the dual of CCCP. More importantly, by further assuming that the Robinson constraint qualification holds, we establish the R-superlinear convergence of the KKT residuals of the iterative sequence under easy-to-implement stopping criteria for the augmented Lagrangian subproblems. Equipped with this discovery, we gain insightful interpretations of the impressive numerical performance of several recently developed semismooth Newton-CG based ALM solvers for solving linear and convex quadratic semidefinite programming.
-
QSDPNAL: a two-phase augmented Lagrangian method for convex quadratic semidefinite programming
arXiv: Optimization and Control, 2015. Co-Authors: Defeng Sun, Kimchuan Toh.
Abstract: In this paper, we present a two-phase augmented Lagrangian method, called QSDPNAL, for solving convex quadratic semidefinite programming (QSDP) problems with constraints consisting of a large number of linear equality and inequality constraints, a simple convex polyhedral set constraint, and a positive semidefinite cone constraint. A first-order algorithm which relies on the inexact Schur complement based decomposition technique is developed in QSDPNAL-Phase I with the aim of solving a QSDP problem to moderate accuracy or using it to generate a reasonably good initial point for the second phase. In QSDPNAL-Phase II, we design an augmented Lagrangian method (ALM) where the inner subproblem in each iteration is solved via inexact semismooth Newton based algorithms. Simple and implementable stopping criteria are designed for the ALM. Moreover, under mild conditions, we are able to establish the rate of convergence of the proposed algorithm and prove the R-(super)linear convergence of the KKT residual. In the implementation of QSDPNAL, we also develop efficient techniques for solving large-scale linear systems of equations under certain subspace constraints. More specifically, simpler and yet better conditioned linear systems are carefully designed to replace the original linear systems, and novel shadow sequences are constructed to alleviate the numerical difficulties brought about by the crucial subspace constraints. Extensive numerical results for various large-scale QSDPs show that our two-phase algorithm is highly efficient and robust in obtaining accurate solutions.
David Zhang - One of the best experts on this subject based on the ideXlab platform.
-
Fast augmented Lagrangian method for image smoothing with hyper-Laplacian gradient prior
Chinese Conference on Pattern Recognition, 2014. Co-Authors: Li Chen, Dongwei Ren, David Zhang, Hongzhi Zhang, Wangmeng Zuo.
Abstract: As a fundamental tool, $L_0$ gradient smoothing has found a flurry of applications. Inspired by the progress of research on the hyper-Laplacian prior, we propose a novel model, corresponding to the $L_p$-norm of gradients, for image smoothing, which can better maintain the general structure while diminishing insignificant texture and impulse-noise-like highlights. Algorithmically, we use the augmented Lagrangian method (ALM) to efficiently solve the optimization problem. Thanks to the fast convergence rate of the ALM, the proposed method is much faster than the $L_0$ gradient method. We apply the proposed method to natural image smoothing, cartoon artifact removal, and tongue image segmentation, and the experimental results validate the performance of the proposed algorithm.
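The splitting behind this family of methods can be sketched on a 1-D signal: introduce an auxiliary variable for the gradient, then alternate a quadratic solve with a shrinkage step inside ALM iterations. For simplicity the sketch below uses the $p=1$ soft-threshold in the shrinkage step; the paper's $L_p$ ($p<1$) shrinkage and 2-D operators are more elaborate, and all names here are illustrative.

```python
import numpy as np

def alm_smooth_1d(f, lam=0.5, sigma=2.0, iters=100):
    """ALM / variable-splitting sketch of gradient-prior smoothing:
        min_u 0.5*||u - f||^2 + lam*||d||_1   s.t.  d = D u,
    where D is the forward-difference operator. Alternates a linear
    u-solve, a soft-threshold d-update, and a multiplier update."""
    n = len(f)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]      # forward differences
    u = f.copy()
    d = D @ u
    mu = np.zeros(n - 1)
    H = np.eye(n) + sigma * D.T @ D               # u-subproblem matrix
    for _ in range(iters):
        u = np.linalg.solve(H, f + D.T @ (sigma * d - mu))
        t = D @ u + mu / sigma
        d = np.sign(t) * np.maximum(np.abs(t) - lam / sigma, 0.0)
        mu = mu + sigma * (D @ u - d)
    return u
```

On a step signal, the smoothed output keeps the step's general structure while achieving a lower objective value than the input itself, which is the qualitative behavior the prior is designed for.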
-
Fast gradient vector flow computation based on augmented Lagrangian method
Pattern Recognition Letters, 2013. Co-Authors: Dongwei Ren, Wangmeng Zuo, Xiaofei Zhao, Zhouchen Lin, David Zhang.
Abstract: Gradient vector flow (GVF) and generalized GVF (GGVF) have been widely applied in many image processing applications. The high cost of GVF/GGVF computation, however, has restricted their potential applications on images of large size. Motivated by progress in fast image restoration algorithms, we reformulate the GVF/GGVF computation problem as a convex optimization model with an equality constraint, and solve it using the inexact augmented Lagrangian method (IALM). With the fast Fourier transform (FFT), we provide two novel, simple and efficient algorithms for GVF/GGVF computation, respectively. To further improve the computational efficiency, a multiresolution approach is adopted to perform the GVF/GGVF computation in a coarse-to-fine manner. Experimental results show that the proposed methods can improve the computational speed of the original GVF/GGVF by one or two orders of magnitude, and are more efficient than the state-of-the-art methods for GVF/GGVF computation.
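The FFT step these algorithms rely on can be shown in isolation: after the ALM reformulation, each subproblem reduces to a screened Poisson equation, which the FFT diagonalizes so the solve costs one forward and one inverse transform. The periodic boundary conditions and the exact operator below are simplifying assumptions for illustration, not the paper's precise formulation.

```python
import numpy as np

def screened_poisson_fft(rhs, mu=1.0, sigma=1.0):
    """Solve (sigma*I - mu*Laplacian) u = rhs with periodic boundary
    conditions by FFT diagonalization -- the computational kernel that
    makes each ALM subproblem in GVF/GGVF computation cheap."""
    m, n = rhs.shape
    # eigenvalues of the periodic 5-point Laplacian, one per frequency
    ky = 2 * np.cos(2 * np.pi * np.arange(m) / m) - 2
    kx = 2 * np.cos(2 * np.pi * np.arange(n) / n) - 2
    lap = ky[:, None] + kx[None, :]
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / (sigma - mu * lap)))
```

A quick check: applying the discrete operator sigma*u - mu*Lap(u) to the returned solution reproduces the right-hand side to machine precision.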
-
An algorithm based on augmented Lagrangian method for generalized gradient vector flow computation
Chinese Conference on Pattern Recognition, 2012. Co-Authors: Dongwei Ren, Wangmeng Zuo, Xiaofei Zhao, David Zhang, Hongzhi Zhang.
Abstract: We propose a novel algorithm for the fast computation of generalized gradient vector flow (GGVF), whose high computational cost has restricted its potential applications on images of large size. We reformulate the GGVF problem as a convex optimization model with an equality constraint. Our approach is based on a variable splitting method to obtain an equivalent constrained optimization formulation, which is then addressed with the inexact augmented Lagrangian method (IALM). To further enhance the computational efficiency, the IALM is incorporated into a multiresolution approach. Experiments on a set of images with a variety of sizes show that the proposed method can improve the computational speed of the original GGVF by one or two orders of magnitude, and is comparable with the multigrid GGVF (MGGVF) method in terms of computational efficiency.
-
An augmented Lagrangian method for fast gradient vector flow computation
International Conference on Image Processing, 2011. Co-Authors: Wangmeng Zuo, Xiaofei Zhao, David Zhang.
Abstract: Gradient vector flow (GVF) and its generalization have been widely applied in many image processing applications. The high cost of GVF computation, however, has restricted their potential applications to images of large size. In this paper, motivated by progress in fast image restoration algorithms, we reformulate the GVF computation problem as a convex optimization model with an equality constraint, and solve it using a fast algorithm, the inexact augmented Lagrangian method (ALM). With the fast Fourier transform (FFT), we provide a novel, simple and efficient algorithm for GVF computation. Experimental results show that the proposed method can improve the computational speed by an order of magnitude, and is even more efficient for images of large size.
Sishaj P. Simon - One of the best experts on this subject based on the ideXlab platform.
-
Dynamic economic dispatch using Maclaurin series based Lagrangian method
Energy Conversion and Management, 2010. Co-Authors: S. Hemamalini, Sishaj P. Simon.
Abstract: Dynamic economic dispatch (DED) is one of the important optimization problems in power system operation. This paper proposes a Maclaurin series based Lagrangian method (MSL) to solve the DED problem for generating units with the valve-point effect, considering ramp-rate limits and the spinning reserve constraint. Using the Maclaurin series, the sine term used to model the valve-point effect is expanded and solved with the Lagrangian method. The feasibility of the proposed method is validated on a five-unit test system over 24 h. A minute-by-minute dispatch for a large system with 40 units is also carried out in this work. Test results obtained with the proposed approach are compared with other techniques in the literature. The results obtained substantiate the applicability of the proposed method for solving dynamic economic dispatch problems with non-smooth cost functions.
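The overall recipe, expanding the non-smooth valve-point sine term in a Maclaurin series so that a Lagrangian (equal incremental cost) solve goes through, can be sketched as below. The first-order truncation sin(x) ≈ x, the bisection on the system lambda, and all parameter names are illustrative assumptions; the paper's expansion and its ramp-rate and spinning reserve handling are more elaborate.

```python
import numpy as np

def maclaurin_dispatch(demand, a, b, c, e, f, pmin, pmax):
    """Economic dispatch by lambda iteration (bisection). The valve-point
    cost term e*sin(f*(pmin - P)) is linearized with the first-order
    Maclaurin expansion sin(x) ~ x, which merely shifts each unit's
    incremental cost b + 2cP by -e*f, keeping the solve smooth."""
    def output(lam):
        p = (lam - b + e * f) / (2.0 * c)   # from b + 2cP - e*f = lam
        return np.clip(p, pmin, pmax)       # respect generation limits
    lo, hi = 0.0, 1000.0
    for _ in range(200):                    # bisect on total generation
        lam = 0.5 * (lo + hi)
        if output(lam).sum() < demand:
            lo = lam
        else:
            hi = lam
    return output(0.5 * (lo + hi))
```

With the valve-point coefficients set to zero this reduces to classical equal-incremental-cost dispatch; e.g. two units with marginal costs 2 + 0.02P and 3 + 0.04P meeting a 200 MW demand settle at 150 MW and 50 MW.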
-
Dynamic economic dispatch with valve-point effect using Maclaurin series based Lagrangian method
International Journal of Computer Applications, 2010. Co-Authors: S. Hemamalini, Sishaj P. Simon.
Abstract: Dynamic economic dispatch (DED) plays a vital role in power generation, operation and control. It is a complicated, non-linear constrained problem. In this paper, a Maclaurin series based Lagrangian method (MSL) is used to solve the DED problem for generating units with the valve-point effect, considering ramp-rate limits. Using the Maclaurin series, the sine term used to model the valve-point effect is expanded and solved with the Lagrangian method. The feasibility of the proposed method is validated on a static economic dispatch problem for a forty-unit system and a DED problem for a five-unit test system over a 24-hour time interval. Results obtained with the proposed approach are compared with other techniques in the literature. The results obtained substantiate the applicability of the proposed method for solving static and dynamic economic dispatch problems with non-smooth cost functions.
Masayoshi Yamamoto - One of the best experts on this subject based on the ideXlab platform.
-
Evaluation of the Lagrangian method for deriving equivalent circuits of integrated magnetic components: a case study using the integrated winding coupled inductor
IEEE Transactions on Industry Applications, 2015. Co-Authors: Kazuhiro Umetani, Seikoh Arimura, Masayoshi Yamamoto, Ju Imaoka, Tetsuo Hirano.
Abstract: Recently, Lagrangian dynamics has been applied to transforming integrated magnetic components into equivalent circuits of transformers and inductors. This Lagrangian method is expected to yield an equivalent circuit with few components when applied to an integrated magnetic component with few flux paths that can be magnetized independently. However, the correctness of this method had not been verified. As a case study, this paper derives the equivalent circuit of the integrated winding coupled inductor using the Lagrangian method to evaluate its consistency with the magnetic circuit model and experimental behavior. As a result, the Lagrangian method yielded a simpler equivalent circuit than those obtained by conventional methods. Additionally, the equivalent circuit from the Lagrangian method is found to be functionally equivalent to the magnetic circuit model and consistent with the experiment. These results support that the Lagrangian method provides proper equivalent circuits and is useful for deriving simple equivalent circuits in some cases.
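The consistency check at the heart of this comparison can be imitated numerically. For independent flux paths, the magnetic circuit model gives flux phi = R^{-1} N i and linkage N' phi, hence an inductance matrix L = N' R^{-1} N (turns matrix N, diagonal reluctance matrix R); a Lagrangian whose magnetic energy term is 0.5 i' L i yields the same terminal equations. The helper below is an illustrative sketch of that relation, not the paper's derivation.

```python
import numpy as np

def inductance_from_reluctances(N, R):
    """Inductance matrix of a multi-winding magnetic component from its
    magnetic circuit model: with turns matrix N (flux paths x windings)
    and diagonal path reluctances R, the flux linkages are
    N' R^{-1} N i, so L = N' R^{-1} N. Both the magnetic circuit and
    the Lagrangian (energy) route must produce this same matrix."""
    return N.T @ np.linalg.solve(R, N)
```

For two windings of 10 and 5 turns sharing one path of reluctance 100, this gives self-inductances 1 and 0.25 with mutual inductance 0.5, and the matrix is symmetric as energy conservation requires.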
-
Evaluation of the Lagrangian method for deriving equivalent circuits of integrated magnetic components: a case study using the integrated winding coupled inductor
Energy Conversion Congress and Exposition, 2013. Co-Authors: Kazuhiro Umetani, Seikoh Arimura, Tetsuo Hirano, Jun Imaoka, Masayoshi Yamamoto.
Abstract: Recently, Lagrangian dynamics has been applied to transforming integrated magnetic components into equivalent circuits of basic magnetic components such as transformers and inductors. Although the method allows simple and systematic derivation in many cases, it sometimes leads to circuits different from those obtained by conventional methods. Hence, the Lagrangian method requires an evaluation of the equivalence of the transformation. As a case study, this paper derives equivalent circuits of the integrated winding coupled inductor using the Lagrangian method and a conventional method. The equivalent circuits are investigated to verify their consistency with the magnetic circuit model and experimental behavior. As a result, the Lagrangian method yields a simpler circuit than that obtained by the conventional method. Nonetheless, both circuits are found to be functionally equivalent to the magnetic circuit model and consistent with the experiment. The result suggests that the Lagrangian method provides a proper transformation and is useful for deriving simple equivalent circuits.