Sequence Converges


The Experts below are selected from a list of 20259 Experts worldwide ranked by ideXlab platform

Mihai Postolache - One of the best experts on this subject based on the ideXlab platform.

  • forward backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators
    Arabian Journal of Mathematics, 2020
    Co-Authors: Vahid Dadashi, Mihai Postolache
    Abstract:

    In this paper, we construct a forward–backward splitting algorithm for approximating a zero of the sum of an $$\alpha$$-inverse strongly monotone operator and a maximal monotone operator. A strong convergence theorem is then proved under mild conditions. We then add a nonexpansive mapping to the algorithm and prove that the generated Sequence Converges strongly to a common element of the fixed point set of the nonexpansive mapping and the zero set of the sum of the monotone operators. We apply our main result to both equilibrium problems and convex programming.
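
    The core iteration described above can be sketched numerically. Below is a minimal illustration, assuming the inverse strongly monotone operator is the gradient of a smooth least-squares term and the maximal monotone operator is the normal cone of a box (so its resolvent is a projection); the toy problem, step size, and iteration count are assumptions for illustration, not the paper's setting.

    ```python
    import numpy as np

    # Forward-backward splitting sketch (illustrative): find a zero of A + B with
    #   A = grad f, f(x) = 0.5*||Mx - b||^2   (Lipschitz gradient, hence inverse strongly monotone)
    #   B = normal cone of the box [lo, hi]   (maximal monotone; its resolvent is the projection)

    rng = np.random.default_rng(0)
    M = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)
    lo, hi = -1.0, 1.0

    def A(x):                       # forward (explicit) operator
        return M.T @ (M @ x - b)

    def resolvent_B(y):             # backward (implicit) step: projection onto the box
        return np.clip(y, lo, hi)

    L = np.linalg.norm(M, 2) ** 2   # Lipschitz constant of A; A is (1/L)-inverse strongly monotone
    lam = 1.0 / L                   # step size chosen in (0, 2/L)

    x = np.zeros(5)
    for _ in range(500):
        x = resolvent_B(x - lam * A(x))   # x_{k+1} = J_{lam*B}(x_k - lam*A(x_k))

    print("approximate zero of A + B:", np.round(x, 4))
    ```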

Wen-jun Cao - One of the best experts on this subject based on the ideXlab platform.

  • On functional approximation of the equivalent control using learning variable structure control
    IEEE Transactions on Automatic Control, 2002
    Co-Authors: Wen-jun Cao
    Abstract:

    A learning variable structure control (LVSC) approach is proposed to obtain the equivalent control of a general class of multiple-input multiple-output (MIMO) variable structure systems under repeatable control tasks. LVSC synthesizes variable structure control (VSC) as the robust part, which stabilizes the system, and learning control (LC) as the "plug-in" intelligent part, which completely nullifies the effects of the matched uncertainties on the tracking error. A rigorous proof based on an energy function and functional analysis shows that the tracking error Sequence Converges uniformly to zero, and that the bounded LC Sequence Converges to the equivalent control almost everywhere.
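
    A schematic of this control structure, as a sketch only: the control is the sum of a robust switching (VSC) term and a learned feedforward term that is refined between repeated trials. The first-order plant, gains, and P-type learning law below are illustrative assumptions, not the paper's MIMO design.

    ```python
    import numpy as np

    # Learning variable structure control (LVSC) sketch over repeated trials (illustrative).
    # Plant: x' = -x + u + d, with a trial-invariant matched disturbance d and a repeatable
    # reference x_d. Control = learned feedforward + switching VSC term; the feedforward is
    # updated between trials with a simple P-type learning law.

    dt, T = 0.01, 2.0
    t = np.arange(0.0, T, dt)
    x_d = np.sin(np.pi * t)                        # repeatable reference trajectory
    d = 0.5 * np.sin(3.0 * t)                      # matched uncertainty (same every trial)

    k_vsc, gamma = 2.0, 0.8                        # switching gain and learning gain (assumed)
    u_learn = np.zeros_like(t)                     # learned control, refined trial by trial

    for trial in range(30):
        x = 0.0
        e_hist = np.zeros_like(t)
        for i in range(len(t)):
            e = x - x_d[i]                         # tracking error (also the sliding variable here)
            u = u_learn[i] - k_vsc * np.sign(e)    # learning part + robust VSC part
            x = x + dt * (-x + u + d[i])           # Euler step of the plant
            e_hist[i] = e
        u_learn = u_learn - gamma * e_hist         # between-trial learning update

    print("max tracking error on last trial:", np.max(np.abs(e_hist)))
    ```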

  • A Learning Variable Structure Controller of a Flexible One-Link Manipulator
    Journal of Dynamic Systems Measurement and Control, 2000
    Co-Authors: Wen-jun Cao
    Abstract:

    In this paper, tip regulation of a flexible one-link manipulator by Learning Variable Structure Control (LVSC) is investigated. The switching surface is designed according to a selected reference model that relocates the system poles to negative real ones, hence link vibration is eliminated. The proposed LVSC incorporates a learning mechanism to improve regulation accuracy. A rigorous proof shows that the tracking error Sequence Converges uniformly to zero and that the uniformly bounded learning control Sequence Converges to the equivalent control almost everywhere. For practical considerations, the learning mechanism is further conducted in the frequency domain by means of a Fourier series expansion, hence achieving better regulation performance. Numerical simulations confirm the effectiveness and robustness of the proposed approach. [S0022-0434(00)01804-9]
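
    The frequency-domain variant might be sketched as follows: the learned control is stored as a truncated Fourier series whose coefficients are updated between trials from the projection of the trial error onto the basis functions. The basis size, update gain, and the stand-in error signal are assumptions for illustration.

    ```python
    import numpy as np

    # Frequency-domain learning sketch (illustrative): the learned control is a truncated
    # Fourier series; after each trial its coefficients are corrected by the projection of
    # the trial error onto the corresponding basis functions.

    dt, T, N = 0.01, 2.0, 5                        # N harmonics retained
    t = np.arange(0.0, T, dt)
    w0 = 2.0 * np.pi / T

    cols = [np.ones_like(t)]
    for n in range(1, N + 1):
        cols += [np.cos(n * w0 * t), np.sin(n * w0 * t)]
    Phi = np.stack(cols, axis=1)                   # basis matrix, shape (len(t), 2N+1)

    coeffs = np.zeros(Phi.shape[1])                # Fourier coefficients of the learned control
    gamma = 0.5                                    # learning gain (assumed)

    def trial_error(u_learn):
        # Stand-in for a closed-loop trial: the error vanishes as u_learn approaches u_true.
        u_true = 1.0 + 0.8 * np.sin(w0 * t) - 0.3 * np.cos(2.0 * w0 * t)
        return u_learn - u_true

    for trial in range(50):
        e = trial_error(Phi @ coeffs)
        coeffs = coeffs - gamma * (Phi.T @ e) * (2.0 * dt / T)   # coefficient-wise correction

    print("residual error norm:", np.linalg.norm(trial_error(Phi @ coeffs)))
    ```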

  • A learning variable structure controller of a flexible one-link manipulator
    Proceedings of the 39th IEEE Conference on Decision and Control (Cat. No.00CH37187), 2000
    Co-Authors: Wen-jun Cao
    Abstract:

    Tip regulation of a flexible one-link manipulator by learning variable structure control (LVSC) is investigated. The sliding surface is designed according to a selected reference model that relocates the system poles to negative real ones, hence link vibration is eliminated. The proposed LVSC incorporates a learning mechanism to improve regulation accuracy. A rigorous proof shows that the state tracking error Sequence Converges uniformly to zero and that the uniformly bounded learning control Sequence Converges to the equivalent control almost everywhere.

Poom Kumam - One of the best experts on this subject based on the ideXlab platform.

Uday V. Shanbhag - One of the best experts on this subject based on the ideXlab platform.

  • On Stochastic Mirror-prox Algorithms for Stochastic Cartesian Variational Inequalities: Randomized Block Coordinate and Optimal Averaging Schemes
    Set-valued and Variational Analysis, 2018
    Co-Authors: Farzad Yousefian, Angelia Nedic, Uday V. Shanbhag
    Abstract:

    Motivated by multi-user optimization problems and non-cooperative Nash games in uncertain regimes, we consider stochastic Cartesian variational inequality problems where the set is given as the Cartesian product of a collection of component sets. First, we consider the case where the number of component sets is large and develop a randomized block stochastic mirror-prox algorithm, where at each iteration only a randomly selected block coordinate of the solution vector is updated through two consecutive projection steps. We show that when the mapping is strictly pseudo-monotone, the algorithm generates a Sequence of iterates that Converges to the solution of the problem almost surely. When the maps are strongly pseudo-monotone, we prove that the mean-squared error diminishes at the optimal rate. Second, we consider large-scale stochastic optimization problems with convex objectives and develop a new averaging scheme for the randomized block stochastic mirror-prox algorithm. We show that, by using a different set of weights than those employed in classical stochastic mirror-prox methods, the objective value of the averaged Sequence Converges to the optimal value in the mean sense at an optimal rate. Third, we consider stochastic Cartesian variational inequality problems and develop a stochastic mirror-prox algorithm that employs the new weighted averaging scheme. We show that the expected value of a suitably defined gap function Converges to zero at an optimal rate.
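
    A minimal sketch of one randomized block step with a Euclidean prox (so the mirror steps reduce to projections): at each iteration a block is drawn at random and only that block is updated through two consecutive projection steps using stochastic samples of the map. The monotone linear map, box constraints, noise model, and step-size rule below are illustrative assumptions, not the paper's general setting.

    ```python
    import numpy as np

    # Randomized block stochastic mirror-prox sketch with Euclidean prox (illustrative):
    # the feasible set is a Cartesian product of d boxes; each iteration draws one block
    # and applies two consecutive projection steps on that block using noisy map samples.

    rng = np.random.default_rng(1)
    d, n = 4, 3                                    # d component sets, each of dimension n
    dim = d * n
    S = rng.standard_normal((dim, dim))
    A = S @ S.T + (S - S.T)                        # monotone linear map (PSD part + skew part)
    b = rng.standard_normal(dim)

    def F_sample(x):                               # stochastic oracle: F(x) plus zero-mean noise
        return A @ x + b + 0.1 * rng.standard_normal(dim)

    def proj(v):                                   # projection of a block onto its box
        return np.clip(v, -1.0, 1.0)

    x = np.zeros(dim)
    L = np.linalg.norm(A, 2)
    for k in range(1, 5001):
        gamma = 1.0 / (L * np.sqrt(k))             # diminishing step size (assumed rule)
        i = rng.integers(d)                        # randomly selected block coordinate
        blk = slice(i * n, (i + 1) * n)
        y = x.copy()
        y[blk] = proj(x[blk] - gamma * F_sample(x)[blk])        # first projection step
        x_next = x.copy()
        x_next[blk] = proj(x[blk] - gamma * F_sample(y)[blk])   # second projection step
        x = x_next

    print("approximate solution of the Cartesian VI:", np.round(x, 3))
    ```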

  • On stochastic mirror-prox algorithms for stochastic Cartesian variational inequalities: randomized block coordinate, and optimal averaging schemes
    arXiv: Optimization and Control, 2016
    Co-Authors: Farzad Yousefian, Angelia Nedich, Uday V. Shanbhag
    Abstract:

    Motivated by multi-user optimization problems and non-cooperative Nash games in uncertain regimes, we consider stochastic Cartesian variational inequalities (SCVI) where the set is given as the Cartesian product of a collection of component sets. First, we consider the case where the number of component sets is large. For this type of problem, classical stochastic approximation methods and their prox generalizations are computationally inefficient, as each iteration becomes very costly. To address this challenge, we develop a randomized block stochastic mirror-prox (B-SMP) algorithm, where at each iteration only a randomly selected block coordinate of the solution is updated through two consecutive projection steps. Under standard assumptions on the problem and the settings of the algorithm, we show that when the mapping is strictly pseudo-monotone, the algorithm generates a Sequence of iterates that Converges to the solution of the problem almost surely. To derive rate statements, we assume that the maps are strongly pseudo-monotone and obtain a non-asymptotic mean-squared error of $\mathcal{O}\left(\frac{d}{k}\right)$, where $k$ is the iteration number and $d$ is the number of component sets. Second, we consider large-scale stochastic optimization problems with convex objectives. For this class of problems, we develop a new averaging scheme for the B-SMP algorithm. Unlike the classical averaging stochastic mirror-prox (SMP) method, where a decreasing set of weights is used for the averaging Sequence, here we consider a different set of weights characterized in terms of the stepsizes and a parameter. We show that, using such weights, the objective value of the averaged Sequence Converges to the optimal value in the mean sense at the rate $\mathcal{O}\left(\frac{\sqrt{d}}{\sqrt{k}}\right)$.
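
    The averaging scheme itself can be sketched in a few lines: instead of the classical decreasing weights, the averaged iterate weights each iterate by a quantity tied to its step size. The scalar stand-in iterates and the particular step-size Sequence below are assumptions for illustration.

    ```python
    import numpy as np

    # Weighted averaging sketch (illustrative): the averaged iterate weights each x_k by a
    # quantity tied to the step size gamma_k, rather than by the classical decreasing weights.

    K = 1000
    gammas = 1.0 / np.sqrt(np.arange(1, K + 1))    # step sizes gamma_k (assumed schedule)
    iterates = 1.0 / np.arange(1, K + 1)           # stand-in scalar iterates x_k

    running_avg = np.cumsum(gammas * iterates) / np.cumsum(gammas)   # x_bar_k, weights ~ gamma_k

    print("weighted average after K iterations:", running_avg[-1])
    ```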

Xiangsun Zhang - One of the best experts on this subject based on the ideXlab platform.

  • A smoothing Levenberg–Marquardt method for NCP
    Applied Mathematics and Computation, 2006
    Co-Authors: Ju-liang Zhang, Xiangsun Zhang
    Abstract:

    In this paper, we convert the nonlinear complementarity problem to an equivalent smooth nonlinear equation system by using a smoothing technique. Then we use a Levenberg–Marquardt type method to solve the nonlinear equation system. The method has the following merits: (i) any cluster point of the iteration Sequence is a solution of the $P_0$-NCP; (ii) it generates a bounded Sequence if the $P_0$-NCP has a nonempty and bounded solution set; (iii) if the generalized Jacobian is nonsingular at a solution point, then the whole Sequence Converges to the (unique) solution of the $P_0$-NCP superlinearly; (iv) for the $P_0$-NCP, if an accumulation point of the iteration Sequence satisfies the strict complementarity condition, then the whole Sequence Converges to this accumulation point superlinearly.
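
    A minimal sketch of the overall approach, assuming the smoothing is done with a smoothed Fischer–Burmeister function (the paper's exact smoothing function and parameter rules are not reproduced here): the NCP is rewritten as a smooth equation system H(x) = 0 and solved with damped Levenberg–Marquardt steps while the smoothing parameter is driven to zero. The toy linear complementarity problem and the simple schedules are illustrative assumptions.

    ```python
    import numpy as np

    # Smoothing Levenberg-Marquardt sketch for an NCP (illustrative): find x >= 0 with
    # F(x) >= 0 and x.F(x) = 0 by solving the smoothed Fischer-Burmeister system
    # H(x, mu) = 0 with LM steps while mu is driven to zero.

    M = np.array([[3.0, 1.0],
                  [1.0, 2.0]])                     # positive definite, so F is a P0 (even P) function
    q = np.array([-3.0, 1.0])

    def F(x):
        return M @ x + q

    def H(x, mu):                                  # smoothed Fischer-Burmeister residual
        a, b = x, F(x)
        return a + b - np.sqrt(a**2 + b**2 + 2.0 * mu**2)

    def jac(x, mu, eps=1e-7):                      # forward-difference Jacobian of H(., mu)
        n = len(x)
        J = np.zeros((n, n))
        h0 = H(x, mu)
        for j in range(n):
            e = np.zeros(n)
            e[j] = eps
            J[:, j] = (H(x + e, mu) - h0) / eps
        return J

    x, mu, lam = np.ones(2), 1.0, 1e-3
    for _ in range(60):
        r = H(x, mu)
        J = jac(x, mu)
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)   # damped LM step
        x = x + step
        mu = 0.5 * mu                              # shrink the smoothing parameter

    print("x:", np.round(x, 4), "F(x):", np.round(F(x), 4))   # expect x ~ (1, 0), F(x) ~ (0, 2)
    ```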
