Convergence Rate

The Experts below are selected from a list of 161,418 Experts worldwide, ranked by the ideXlab platform

Pontus Giselsson - One of the best experts on this subject based on the ideXlab platform.

  • Tight global linear convergence rate bounds for Douglas–Rachford splitting
    Journal of Fixed Point Theory and Applications, 2017
    Co-Authors: Pontus Giselsson
    Abstract:

    Recently, several authors have shown local and global convergence rate results for Douglas–Rachford splitting under strong monotonicity, Lipschitz continuity, and cocoercivity assumptions. Most of these focus on the convex optimization setting. In the more general monotone inclusion setting, Lions and Mercier showed a linear convergence rate bound under the assumption that one of the two operators is strongly monotone and Lipschitz continuous. We show that this bound is not tight, meaning that no problem from the considered class converges exactly with that rate. In this paper, we present tight global linear convergence rate bounds for that class of problems. We also provide tight linear convergence rate bounds under the assumptions that one of the operators is strongly monotone and cocoercive, and that one of the operators is strongly monotone and the other is cocoercive. All our linear convergence results are obtained by proving the stronger property that the Douglas–Rachford operator is contractive.
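
    To make the contraction property concrete, here is a minimal numerical sketch of Douglas–Rachford splitting (not the paper's experiments; the quadratic f, the nonnegative-orthant indicator g, and the step size gamma below are illustrative assumptions). The printed fixed-point residual decays linearly whenever the Douglas–Rachford operator is contractive.

    ```python
    import numpy as np

    # Douglas-Rachford splitting for min f(x) + g(x), with assumed data:
    # f(x) = 0.5*x'Ax - b'x (strongly convex and smooth since A > 0) and
    # g the indicator of the nonnegative orthant.
    rng = np.random.default_rng(0)
    n = 20
    M = rng.standard_normal((n, n))
    A = M @ M.T + np.eye(n)        # positive definite
    b = rng.standard_normal(n)
    gamma = 1.0                    # step-size parameter

    def prox_f(v):
        # prox_{gamma*f}(v) solves (I + gamma*A) x = v + gamma*b
        return np.linalg.solve(np.eye(n) + gamma * A, v + gamma * b)

    def prox_g(v):
        # projection onto the nonnegative orthant
        return np.maximum(v, 0.0)

    z = np.zeros(n)
    for k in range(100):
        x = prox_f(z)
        y = prox_g(2 * x - z)
        z_new = z + y - x          # update of the governing sequence
        if k % 10 == 0:
            print(k, np.linalg.norm(z_new - z))  # decays linearly (contraction)
        z = z_new
    ```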

  • Tight linear convergence rate bounds for Douglas–Rachford splitting and ADMM
    2015 54th IEEE Conference on Decision and Control (CDC), 2015
    Co-Authors: Pontus Giselsson
    Abstract:

    Douglas–Rachford splitting and the alternating direction method of multipliers (ADMM) can be used to solve convex optimization problems that consist of a sum of two functions. Convergence rate estimates for these algorithms have received much attention lately. In particular, linear convergence rates have been shown by several authors under various assumptions. One such set of assumptions is strong convexity and smoothness of one of the functions in the minimization problem. The authors recently provided a linear convergence rate bound for such problems. In this paper, we show that this rate bound is tight for the class of problems under consideration.
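
    As a rough illustration of the setting (assumed data throughout; this is not the paper's benchmark), the following scaled-form ADMM sketch minimizes a strongly convex, smooth quadratic over a box, the kind of problem for which the linear rate bound applies.

    ```python
    import numpy as np

    # Scaled-form ADMM for min f(x) + g(z) s.t. x = z, with assumed data:
    # f(x) = 0.5*x'Qx - q'x (strongly convex, smooth) and g the indicator
    # of the box [-1, 1]^n.
    rng = np.random.default_rng(1)
    n = 10
    M = rng.standard_normal((n, n))
    Q = M @ M.T + np.eye(n)            # positive definite
    q = rng.standard_normal(n)
    rho = 1.0                          # penalty parameter

    x = z = u = np.zeros(n)
    for k in range(100):
        # x-update: minimize f(x) + (rho/2)||x - z + u||^2
        x = np.linalg.solve(Q + rho * np.eye(n), q + rho * (z - u))
        # z-update: Euclidean projection onto the box
        z = np.clip(x + u, -1.0, 1.0)
        # scaled dual update
        u = u + x - z
        if k % 10 == 0:
            print(k, np.linalg.norm(x - z))  # primal residual, decays linearly
    ```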

  • Tight linear convergence rate bounds for Douglas–Rachford splitting and ADMM
    arXiv: Optimization and Control, 2015
    Co-Authors: Pontus Giselsson
    Abstract:

    Douglas–Rachford splitting and the alternating direction method of multipliers (ADMM) can be used to solve convex optimization problems that consist of a sum of two functions. Convergence rate estimates for these algorithms have received much attention lately. In particular, linear convergence rates have been shown by several authors under various assumptions. One such set of assumptions is strong convexity and smoothness of one of the functions in the minimization problem. The authors recently provided a linear convergence rate bound for such problems. In this paper, we show that this rate bound is tight for many algorithm parameter choices.

Paul J Goulart - One of the best experts on this subject based on the ideXlab platform.

  • Tight global linear convergence rate bounds for operator splitting methods
    IEEE Transactions on Automatic Control, 2018
    Co-Authors: Goran Banjac, Paul J Goulart
    Abstract:

    In this paper, we establish necessary and sufficient conditions for global linear convergence rate bounds in operator splitting methods for a general class of convex optimization problems, where the associated fixed-point operator is strongly quasi-nonexpansive. We also provide a tight bound on the achievable convergence rate. Most existing results establishing global linear convergence in such methods require restrictive assumptions regarding strong convexity and smoothness of the constituent functions in the optimization problem. However, there are several examples in the literature showing that linear convergence is possible even when these properties do not hold. We provide a unifying analysis method for establishing global linear convergence based on linear regularity and show that many existing results are special cases of our approach. Moreover, we propose a novel linearly convergent splitting method for linear programming.
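
    A toy example of this phenomenon (the two lines, the angle theta, and the starting point are assumptions, and this is not the paper's linear programming method): alternating projections onto two lines through the origin converge linearly even though neither indicator function is strongly convex or smooth, because the pair of sets is linearly regular.

    ```python
    import numpy as np

    # Two lines through the origin at angle theta (assumed data); their
    # intersection is {0}. The projections are firmly nonexpansive, but the
    # underlying indicator functions are neither strongly convex nor smooth.
    theta = 0.3
    d1 = np.array([1.0, 0.0])                      # unit direction of line 1
    d2 = np.array([np.cos(theta), np.sin(theta)])  # unit direction of line 2

    def proj_line(x, d):
        # Euclidean projection onto span{d} (d is a unit vector)
        return (x @ d) * d

    x = np.array([2.0, 5.0])
    for k in range(10):
        x_new = proj_line(proj_line(x, d2), d1)    # one full sweep
        # the contraction factor approaches cos(theta)**2, the classical
        # linear rate for alternating projections onto linearly regular sets
        print(k, np.linalg.norm(x_new) / np.linalg.norm(x))
        x = x_new
    ```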

Y. Shamash - One of the best experts on this subject based on the ideXlab platform.

  • On maximizing the convergence rate for linear systems with input saturation
    IEEE Transactions on Automatic Control, 2003
    Co-Authors: Zongli Lin, Y. Shamash
    Abstract:

    In this note, we consider a few important issues related to the maximization of the convergence rate inside a given ellipsoid for linear systems with input saturation. For continuous-time systems, the control that maximizes the convergence rate is simply a bang-bang control. Through studying the system under the maximal convergence control, we reveal several fundamental results on set invariance. An important consequence of maximizing the convergence rate is that the maximal invariant ellipsoid is produced. We provide a simple method for finding the maximal invariant ellipsoid, and we also study the dependence of the maximal convergence rate on the Lyapunov function.
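
    A minimal simulation sketch of the bang-bang law (the matrices A, B, P, and the initial state are assumptions, not the note's data): minimizing d/dt (x'Px) pointwise over |u| <= 1 yields u = -sign(B'Px), and the printed Lyapunov values V(x) = x'Px decrease along the trajectory.

    ```python
    import numpy as np

    # Assumed stable system and Lyapunov matrix (A'P + PA < 0 holds for
    # these values), for illustration only.
    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])
    B = np.array([[0.0],
                  [1.0]])
    P = np.array([[3.0, 0.5],
                  [0.5, 0.5]])

    def u_bang(x):
        # bang-bang control: minimizes d/dt (x'Px) over |u| <= 1
        return -np.sign(B.T @ P @ x)

    x = np.array([1.0, -0.5])
    dt = 1e-3
    for k in range(5001):
        if k % 1000 == 0:
            print(k, float(x @ P @ x))   # V(x) = x'Px decreases
        x = x + dt * (A @ x + B @ u_bang(x)).ravel()  # forward-Euler step
    ```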

  • On maximizing the convergence rate for linear systems with input saturation
    Proceedings of the 2001 American Control Conference. (Cat. No.01CH37148), 2001
    Co-Authors: Zongli Lin, Y. Shamash
    Abstract:

    In this paper, we consider the problem of maximizing the convergence rate inside a given level set for both continuous-time and discrete-time systems with input saturation. We also provide simple methods for finding the largest ellipsoid of a given shape that can be made invariant with a saturated control. For the continuous-time case, the maximal convergence rate is achieved by a bang-bang type control with a simple switching scheme; a suboptimal convergence rate can be achieved with saturated high-gain linear feedback. For the discrete-time case, the maximal convergence rate is achieved by a coupled saturated linear feedback.
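
    A companion discrete-time sketch with saturated linear feedback u_k = sat(F x_k) (A, B, F, and the initial state are assumed for illustration, not taken from the paper): once the trajectory enters the region where the input no longer saturates, the closed loop A + BF is Schur stable and the state norm decays geometrically.

    ```python
    import numpy as np

    # Assumed data: double integrator discretized with step 0.1 and a
    # stabilizing gain F (A + B F is Schur stable, spectral radius ~0.91).
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    B = np.array([[0.005],
                  [0.1]])
    F = np.array([[-1.0, -1.8]])

    x = np.array([2.0, 1.0])
    for k in range(301):
        if k % 50 == 0:
            print(k, np.linalg.norm(x))   # geometric decay once unsaturated
        u = np.clip(F @ x, -1.0, 1.0)     # unit input saturation
        x = A @ x + (B @ u).ravel()
    ```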

Y M Chen - One of the best experts on this subject based on the ideXlab platform.

  • An improved Yuan–Agrawal method with rapid convergence rate for fractional differential equations
    Computational Mechanics, 2019
    Co-Authors: Y M Chen
    Abstract:

    Due to the merit of transforming fractional differential equations into ordinary differential equations, the Yuan and Agrawal method has attracted considerable research interest over the past decade. In this paper, the method is improved with major emphasis on enhancing its convergence rate. The key procedure is to transform the fractional derivative into an improper integral, which is integrated by a Gauss–Laguerre quadrature rule. However, the integration converges slowly due to the singularity and slow decay of the integrand. To solve these problems, we reproduce the integrand to circumvent the singularity and slow decay simultaneously. With the reproduced integrand, the convergence rate is estimated to be no slower than $O(n^{-2})$, with $n$ the number of quadrature nodes. In addition, we utilize a generalized Gauss–Laguerre rule to further improve the accuracy. Numerical examples are presented to validate the rapid convergence rate of the improved method, which incurs no additional computational burden compared to the original approach.
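
    The quadrature step is easy to demonstrate (the two integrands below are standard textbook examples, not the paper's kernel): Gauss–Laguerre quadrature targets integrals of the form $\int_0^\infty e^{-t} f(t)\,dt$ and converges rapidly for smooth $f$, but slowly when $f$ is singular at the origin, which is precisely the difficulty the improved method addresses.

    ```python
    import numpy as np
    from numpy.polynomial.laguerre import laggauss

    def gauss_laguerre(f, n):
        # n-node rule for integrals of the form int_0^inf e^(-t) f(t) dt
        t, w = laggauss(n)
        return w @ f(t)

    # Smooth integrand: int_0^inf e^(-t) cos(t) dt = 1/2
    smooth = lambda t: np.cos(t)
    # Singular integrand: int_0^inf e^(-t) t^(-1/2) dt = Gamma(1/2) = sqrt(pi)
    singular = lambda t: t ** (-0.5)

    for n in (5, 10, 20, 40):
        err_smooth = abs(gauss_laguerre(smooth, n) - 0.5)
        err_singular = abs(gauss_laguerre(singular, n) - np.sqrt(np.pi))
        print(f"n={n:3d}  smooth error={err_smooth:.2e}  singular error={err_singular:.2e}")
    ```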