The Experts below are selected from a list of 10,476 Experts worldwide, ranked by the ideXlab platform.
Ioannis K. Argyros - One of the best experts on this subject based on the ideXlab platform.
-
Improved Local Convergence of Newton's method under weak majorant condition
Journal of Computational and Applied Mathematics, 2020. Co-Authors: Ioannis K. Argyros, Saïd Hilout. Abstract: We provide a local convergence analysis for Newton's method under a weak majorant condition in a Banach space setting. Under the same information, our results provide a larger radius of convergence and tighter error estimates on the distances involved than before [14]. Special cases and numerical examples are also provided in this study.
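The Banach-space analysis above specializes to the familiar scalar Newton iteration. As a minimal sketch of the local convergence being studied (the function and starting point below are illustrative assumptions, not taken from the paper): started close enough to a simple root, the iteration converges quadratically.

```python
# Minimal scalar Newton iteration; f, df, and x0 are illustrative choices.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Iterate x_{k+1} = x_k - f(x_k) / f'(x_k) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

# Root of x**3 - 2 (the cube root of 2), starting inside the
# local convergence ball around the root.
root = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)
```

The point of the majorant-condition analysis is precisely to quantify how large that "close enough" ball is and how tight the error bounds are along the way.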
-
Local Convergence of iterative methods for solving equations and systems of equations using weight function techniques
Applied Mathematics and Computation, 2019. Co-Authors: Ioannis K. Argyros, Ramandeep Behl, J. A. Tenreiro Machado, Ali Saleh Alshomrani. Abstract: This paper analyzes the local convergence of several iterative methods for approximating a locally unique solution of a nonlinear equation in a Banach space. It is shown that the local convergence of these methods depends on hypotheses requiring the first-order derivative and the Lipschitz condition. The new approach expands the applicability of previous methods and formulates their theoretical radius of convergence. Several numerical examples originating from real-world problems illustrate the applicability of the technique in a wide range of nonlinear cases where previous methods cannot be used.
-
Extended Local Convergence for some inexact methods with applications
Journal of Mathematical Chemistry, 2019. Co-Authors: Ioannis K. Argyros, Á. Alberto Magreñán, M. J. Legaz, D. Moreno, Juan Antonio Sicilia. Abstract: We present local convergence results for inexact iterative procedures of high convergence order in a normed space, in order to approximate a locally unique solution. The hypotheses involve only Lipschitz conditions on the first Fréchet derivative of the operator involved, whereas earlier results involve Lipschitz-type hypotheses on derivatives higher than the first. In this way the applicability of these methods is extended, at lower computational cost. Special cases and applications are provided to show that the new results apply to solving such equations.
-
Local Convergence for composite Chebyshev-type methods
Communications in Advanced Mathematical Sciences, 2018. Co-Authors: Ioannis K. Argyros, Santhosh George. Abstract: We replace Chebyshev's method for solving equations, which requires the second derivative, by a second-derivative-free Chebyshev-type method. The local convergence analysis of the new method uses hypotheses only on the first derivative, in contrast to Chebyshev's method, which uses hypotheses on the second derivative. In this way we extend the applicability of the method. Numerical examples are also used to test the convergence criteria and to obtain error bounds as well as the radius of convergence.
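The second-derivative-free idea can be sketched in the scalar case. With y the Newton point, the Taylor estimate f(y) ≈ f''(x) f(x)² / (2 f'(x)²) turns Chebyshev's correction factor 1 + ½ f''(x) f(x)/f'(x)² into 1 + f(y)/f(x), yielding a third-order step that evaluates only f and f'. The formula below is a generic illustration of this device, not necessarily the authors' composite scheme.

```python
# Second-derivative-free Chebyshev-type step (generic scalar illustration).
def chebyshev_type(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx      # Newton predictor
        x = y - f(y) / dfx    # corrector: uses f(y)/f(x) in place of the f'' term
    return x

# Illustrative equation x**2 = 2, starting near the root.
root = chebyshev_type(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.5)
```

Note that each iteration reuses the single derivative evaluation f'(x) for both sub-steps, which is exactly what makes hypotheses on the second derivative unnecessary.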
-
Semi-Local Convergence in Right Abstract Fractional Calculus
Functional Numerical Methods: Applications to Abstract Fractional Calculus, 2017. Co-Authors: George A. Anastassiou, Ioannis K. Argyros. Abstract: We provide a semi-local convergence analysis for a class of iterative methods under generalized conditions, in order to solve equations in a Banach space setting. Applications are suggested, including Banach-space-valued functions of right fractional calculus, where all integrals are of Bochner type. This work follows [5].
Zbigniew Michalewicz - One of the best experts on this subject based on the ideXlab platform.
-
Analysis of Stability, Local Convergence, and Transformation Sensitivity of a Variant of the Particle Swarm Optimization Algorithm
IEEE Transactions on Evolutionary Computation, 2016. Co-Authors: Mohammad Reza Bonyadi, Zbigniew Michalewicz. Abstract: In this paper, we investigate three important properties (stability, local convergence, and transformation invariance) of a variant of particle swarm optimization (PSO) called standard PSO 2011 (SPSO2011). Through some experiments, we identify boundaries of coefficients for this algorithm that ensure particles converge to their equilibrium. Our experiments show that these convergence boundaries are: 1) dependent on the number of dimensions of the problem; 2) different from those of some other PSO variants; and 3) not affected by the stagnation assumption. We also determine boundaries for coefficients associated with different behaviors of particles before convergence, e.g., non-oscillatory and zigzagging, through analysis of particle positions in the frequency domain. In addition, we investigate the local convergence property of this algorithm and prove that it is not locally convergent. We provide a sufficient condition, with related proofs, for local convergence for a formulation that represents the updating rules of a large class of PSO variants. We modify SPSO2011 so that it satisfies this sufficient condition; hence, the modified algorithm is locally convergent. Also, we prove that the original standard PSO algorithm is not sensitive to rotation, scaling, and translation of the search space.
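For orientation, the update rules whose coefficients govern stability can be sketched with the classical inertia-weight PSO (note this is not the SPSO2011 variant analysed in the paper, which samples the attractor from a hypersphere to achieve rotation invariance). The coefficients w, c1, c2 below are chosen inside a commonly cited stability region; the objective and bounds are illustrative.

```python
import random

# Minimal classical inertia-weight PSO for minimising f on R^dim.
def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # personal best positions
    gbest = min(pbest, key=f)[:]               # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Sphere function: minimum 0 at the origin.
best = pso(lambda x: sum(v * v for v in x), dim=3)
```

Stability of particles (convergence to an equilibrium point) depends on w, c1, c2; the paper's contribution is that for SPSO2011 the analogous coefficient boundaries additionally depend on the problem dimension.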
-
SPSO 2011: Analysis of stability, local convergence, and rotation sensitivity
Genetic and Evolutionary Computation Conference, 2014. Co-Authors: Mohammad Reza Bonyadi, Zbigniew Michalewicz. Abstract: In a particle swarm optimization (PSO) algorithm it is essential to guarantee convergence of particles to a point in the search space (this property is called stability of particles). It is also important that the PSO algorithm converges to a local optimum (the local convergence property). Further, it is usually expected that the performance of the PSO algorithm is not affected by rotating the search space (the rotation sensitivity property). In this paper, these three properties, i.e. stability of particles, local convergence, and rotation sensitivity, are investigated for a variant of PSO called Standard PSO2011 (SPSO2011). We experimentally define boundaries for the parameters of this algorithm such that, if the parameters are selected within these boundaries, the particles are stable, i.e. they converge to a point in the search space. Also, we show that, unlike in earlier versions of PSO, these boundaries depend on the number of dimensions of the problem. Moreover, we show that the algorithm is not locally convergent in the general case. Finally, we provide a proof and experimental evidence that the algorithm is rotation invariant.
-
GECCO - SPSO 2011: Analysis of stability, local convergence, and rotation sensitivity
Proceedings of the 2014 Conference on Genetic and Evolutionary Computation - GECCO '14, 2014. Co-Authors: Mohammad Reza Bonyadi, Zbigniew Michalewicz. Abstract: In a particle swarm optimization (PSO) algorithm it is essential to guarantee convergence of particles to a point in the search space (this property is called stability of particles). It is also important that the PSO algorithm converges to a local optimum (the local convergence property). Further, it is usually expected that the performance of the PSO algorithm is not affected by rotating the search space (the rotation sensitivity property). In this paper, these three properties, i.e. stability of particles, local convergence, and rotation sensitivity, are investigated for a variant of PSO called Standard PSO2011 (SPSO2011). We experimentally define boundaries for the parameters of this algorithm such that, if the parameters are selected within these boundaries, the particles are stable, i.e. they converge to a point in the search space. Also, we show that, unlike in earlier versions of PSO, these boundaries depend on the number of dimensions of the problem. Moreover, we show that the algorithm is not locally convergent in the general case. Finally, we provide a proof and experimental evidence that the algorithm is rotation invariant.
Mohammad Reza Bonyadi - One of the best experts on this subject based on the ideXlab platform.
-
Analysis of Stability, Local Convergence, and Transformation Sensitivity of a Variant of the Particle Swarm Optimization Algorithm
IEEE Transactions on Evolutionary Computation, 2016. Co-Authors: Mohammad Reza Bonyadi, Zbigniew Michalewicz. Abstract: In this paper, we investigate three important properties (stability, local convergence, and transformation invariance) of a variant of particle swarm optimization (PSO) called standard PSO 2011 (SPSO2011). Through some experiments, we identify boundaries of coefficients for this algorithm that ensure particles converge to their equilibrium. Our experiments show that these convergence boundaries are: 1) dependent on the number of dimensions of the problem; 2) different from those of some other PSO variants; and 3) not affected by the stagnation assumption. We also determine boundaries for coefficients associated with different behaviors of particles before convergence, e.g., non-oscillatory and zigzagging, through analysis of particle positions in the frequency domain. In addition, we investigate the local convergence property of this algorithm and prove that it is not locally convergent. We provide a sufficient condition, with related proofs, for local convergence for a formulation that represents the updating rules of a large class of PSO variants. We modify SPSO2011 so that it satisfies this sufficient condition; hence, the modified algorithm is locally convergent. Also, we prove that the original standard PSO algorithm is not sensitive to rotation, scaling, and translation of the search space.
-
SPSO 2011: Analysis of stability, local convergence, and rotation sensitivity
Genetic and Evolutionary Computation Conference, 2014. Co-Authors: Mohammad Reza Bonyadi, Zbigniew Michalewicz. Abstract: In a particle swarm optimization (PSO) algorithm it is essential to guarantee convergence of particles to a point in the search space (this property is called stability of particles). It is also important that the PSO algorithm converges to a local optimum (the local convergence property). Further, it is usually expected that the performance of the PSO algorithm is not affected by rotating the search space (the rotation sensitivity property). In this paper, these three properties, i.e. stability of particles, local convergence, and rotation sensitivity, are investigated for a variant of PSO called Standard PSO2011 (SPSO2011). We experimentally define boundaries for the parameters of this algorithm such that, if the parameters are selected within these boundaries, the particles are stable, i.e. they converge to a point in the search space. Also, we show that, unlike in earlier versions of PSO, these boundaries depend on the number of dimensions of the problem. Moreover, we show that the algorithm is not locally convergent in the general case. Finally, we provide a proof and experimental evidence that the algorithm is rotation invariant.
-
GECCO - SPSO 2011: Analysis of stability, local convergence, and rotation sensitivity
Proceedings of the 2014 Conference on Genetic and Evolutionary Computation - GECCO '14, 2014. Co-Authors: Mohammad Reza Bonyadi, Zbigniew Michalewicz. Abstract: In a particle swarm optimization (PSO) algorithm it is essential to guarantee convergence of particles to a point in the search space (this property is called stability of particles). It is also important that the PSO algorithm converges to a local optimum (the local convergence property). Further, it is usually expected that the performance of the PSO algorithm is not affected by rotating the search space (the rotation sensitivity property). In this paper, these three properties, i.e. stability of particles, local convergence, and rotation sensitivity, are investigated for a variant of PSO called Standard PSO2011 (SPSO2011). We experimentally define boundaries for the parameters of this algorithm such that, if the parameters are selected within these boundaries, the particles are stable, i.e. they converge to a point in the search space. Also, we show that, unlike in earlier versions of PSO, these boundaries depend on the number of dimensions of the problem. Moreover, we show that the algorithm is not locally convergent in the general case. Finally, we provide a proof and experimental evidence that the algorithm is rotation invariant.
Saïd Hilout - One of the best experts on this subject based on the ideXlab platform.
-
Improved Local Convergence of Newton's method under weak majorant condition
Journal of Computational and Applied Mathematics, 2020. Co-Authors: Ioannis K. Argyros, Saïd Hilout. Abstract: We provide a local convergence analysis for Newton's method under a weak majorant condition in a Banach space setting. Under the same information, our results provide a larger radius of convergence and tighter error estimates on the distances involved than before [14]. Special cases and numerical examples are also provided in this study.
-
Local Convergence analysis of inexact Newton-like methods
The Journal of Nonlinear Sciences and Applications, 2009. Co-Authors: Ioannis K. Argyros, Saïd Hilout. Abstract: We provide a local convergence analysis of inexact Newton-like methods in a Banach space setting under flexible majorant conditions. By introducing a center-Lipschitz-type condition, we provide (under the same computational cost) a convergence analysis with the following advantages over earlier work [9]: finer error bounds on the distances involved, and a larger radius of convergence. Special cases and applications are also provided in this study.
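A Newton-like method in the simplest sense replaces the exact derivative with an approximation. The scalar sketch below freezes the derivative at the starting point (the simplified Newton iteration); the equation and starting point are illustrative assumptions. Convergence degrades from quadratic to linear, which is why quantifying the convergence radius and error bounds, as in the analysis above, matters.

```python
# "Frozen derivative" Newton-like iteration: B approximates f'(x0)
# and is never updated, trading quadratic for linear convergence.
def newton_like(f, B, x0, tol=1e-12, max_iter=200):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / B            # B held fixed instead of recomputing f'(x)
    return x

# Solve x**2 = 2 with the derivative frozen at x0 = 1.5.
root = newton_like(lambda x: x**2 - 2, B=2 * 1.5, x0=1.5)
```

The iteration still converges here because the frozen slope is close enough to f' near the root; a center-Lipschitz-type condition is exactly the kind of hypothesis that makes "close enough" precise.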
-
An improved Local Convergence analysis for Newton–Steffensen-type method
Journal of Applied Mathematics and Computing, 2009. Co-Authors: Ioannis K. Argyros, Saïd Hilout. Abstract: We provide a local convergence analysis for a Newton–Steffensen-type algorithm for solving nonsmooth perturbed variational inclusions in Banach spaces. Under new center-type conditions and the Aubin continuity property, we obtain linear local convergence of the Newton–Steffensen method. Our results compare favorably with related results obtained in (Argyros and Hilout, 2007, submitted; Hilout in J. Math. Anal. Appl. 339:753–761, 2008).
-
Local Convergence analysis for a certain class of inexact methods
The Journal of Nonlinear Sciences and Applications, 2008. Co-Authors: Ioannis K. Argyros, Saïd Hilout. Abstract: We provide a local convergence analysis for a certain class of inexact methods in a Banach space setting, in order to approximate a solution of a nonlinear equation [6]. The assumptions involve center-Lipschitz-type and radius-Lipschitz-type conditions [15], [8], [5]. Our results have the following advantages (under the same computational cost): larger radii and finer error bounds on the distances involved than in [8], [15] in many interesting cases. Numerical examples further validating the theoretical results are also provided in this study.
-
An improved Local Convergence analysis for a two-step Steffensen-type method
Journal of Applied Mathematics and Computing, 2008. Co-Authors: Ioannis K. Argyros, Saïd Hilout. Abstract: We study a class of Steffensen-type algorithms for solving nonsmooth variational inclusions in Banach spaces. We provide a local convergence analysis under an ω-conditioned divided difference and the Aubin continuity property. On the one hand, this work extends the results on local convergence of Steffensen's method for the resolution of nonlinear equations (see Amat and Busquier in Comput. Math. Appl. 49:13–22, 2005; J. Math. Anal. Appl. 324:1084–1092, 2006; Argyros in Southwest J. Pure Appl. Math. 1:23–29, 1997; Nonlinear Anal. 62:179–194, 2005; J. Math. Anal. Appl. 322:146–157, 2006; Rev. Colomb. Math. 40:65–73, 2006; Computational Theory of Iterative Methods, 2007). On the other hand, our approach improves the convergence ratio and enlarges the convergence ball under weaker hypotheses than the one given in Hilout (Commun. Appl. Nonlinear Anal. 14:27–34, 2007).
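The paper treats set-valued variational inclusions; the classical scalar Steffensen iteration it generalizes is easy to state. It is derivative-free, replacing f'(x) by the divided difference (f(x + f(x)) − f(x))/f(x), and converges quadratically near a simple root. The equation and starting point below are illustrative.

```python
# Classical scalar Steffensen iteration (derivative-free).
def steffensen(f, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        g = (f(x + fx) - fx) / fx   # divided difference approximating f'(x)
        x -= fx / g
    return x

root = steffensen(lambda x: x**2 - 2, x0=1.5)
```

Like Newton's method it is only locally convergent, so results that enlarge the convergence ball directly enlarge the set of admissible starting points.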
Hans Butler - One of the best experts on this subject based on the ideXlab platform.
-
CDC - A method to guarantee Local Convergence for sequential quadratic programming with poor Hessian approximation
2017 IEEE 56th Annual Conference on Decision and Control (CDC), 2017. Co-Authors: Tuan T. Nguyen, Mircea Lazar, Hans Butler. Abstract: Sequential quadratic programming (SQP) is a powerful class of algorithms for solving nonlinear optimization problems. Local convergence of SQP algorithms is guaranteed when the Hessian approximation used in each quadratic programming subproblem is close to the true Hessian. However, a good Hessian approximation can be expensive to compute, and low-cost Hessian approximations only guarantee local convergence under assumptions that are not always satisfied in practice. To address this problem, this paper proposes a simple method to guarantee local convergence for SQP with a poor Hessian approximation. The effectiveness of the proposed algorithm is demonstrated in a numerical example.
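The phenomenon described above can be sketched on a toy equality-constrained problem: each SQP step solves a KKT system built from a Hessian approximation B, and when B is poor the iteration converges only linearly (or, for a bad enough B, not at all). The problem, the scalar approximation B, and the starting point below are illustrative assumptions, not the paper's method.

```python
# SQP on: minimize x^2 + y^2 subject to x + y = 1 (solution (0.5, 0.5)),
# using a deliberately poor Hessian approximation B (true Hessian is 2I).

def solve(A, b):
    """Dense Gaussian elimination with partial pivoting (stdlib only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def sqp_step(x, y, B):
    g = [2.0 * x, 2.0 * y]          # gradient of the objective
    c = x + y - 1.0                 # equality-constraint residual
    # KKT system: [B A^T; A 0] [d; lam] = [-g; -c], with A = [1 1].
    K = [[B, 0.0, 1.0],
         [0.0, B, 1.0],
         [1.0, 1.0, 0.0]]
    d = solve(K, [-g[0], -g[1], -c])
    return x + d[0], y + d[1]

x, y = 3.0, 0.0
for _ in range(40):
    x, y = sqp_step(x, y, B=1.5)    # poor scalar Hessian approximation
# The iterates approach (0.5, 0.5), but only at a linear rate.
```

With the exact Hessian (B = 2 here) this quadratic problem would be solved in one step; the gap between B and the true Hessian is what slows, and can destroy, local convergence.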