The experts below are selected from a list of 261,114 experts worldwide ranked by the ideXlab platform.
Jie Shen - One of the best experts on this subject based on the ideXlab platform.
-
Convergence of the Polak-Ribière-Polyak Conjugate Gradient Method
Nonlinear Analysis: Theory, Methods & Applications, 2007. Co-authors: Zhenjun Shi, Jie Shen. Abstract: In this paper, we consider the global convergence of the Polak-Ribière-Polyak (PRP) conjugate gradient method for unconstrained optimization problems. A new Armijo-type line search is proposed for the original PRP method, and convergence properties are established under mild conditions. The new line search allows the PRP method to choose a suitable initial step size, reducing the number of function evaluations at each iteration and improving the method's performance. Numerical results show that the PRP method with the new Armijo-type line search is more efficient than similar methods in practical computation.
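The PRP iteration with a backtracking Armijo-type line search can be sketched as follows. This is a minimal illustration: the backtracking parameters (`shrink`, `sigma`) and the steepest-descent restart safeguard are assumptions of the sketch, not the paper's exact Armijo-type rule.

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-6, max_iter=1000, shrink=0.5, sigma=1e-4):
    """Sketch of the PRP conjugate gradient method with a
    backtracking Armijo line search (illustrative parameters)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking: shrink the trial step until the
        # sufficient-decrease condition holds.
        alpha = 1.0
        while f(x + alpha * d) > f(x) + sigma * alpha * (g @ d):
            alpha *= shrink
        x_new = x + alpha * d
        g_new = grad(x_new)
        # PRP parameter: beta_k = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2
        beta = g_new @ (g_new - g) / (g @ g)
        d = -g_new + beta * d
        if g_new @ d >= 0:  # safeguard: restart with steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x
```

The restart safeguard guarantees that every search direction is a descent direction, so the backtracking loop always terminates on smooth functions.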
-
Convergence of the Liu-Storey Conjugate Gradient Method
European Journal of Operational Research, 2007. Co-authors: Zhenjun Shi, Jie Shen. Abstract: The conjugate gradient method is a powerful approach for solving large-scale minimization problems. Liu and Storey developed a conjugate gradient method that performs well numerically but has no global convergence result under traditional line searches such as the Armijo, Wolfe, and Goldstein line searches. In this paper, a convergent version of the Liu-Storey (LS) conjugate gradient method is proposed for minimizing functions with Lipschitz continuous partial derivatives. By estimating the Lipschitz constant of the derivative of the objective function, an adequate step size can be found at each iteration, guaranteeing global convergence and improving the efficiency of the LS method in practical computation.
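The Liu-Storey conjugacy parameter itself is short enough to state directly. The formula below is the standard LS choice; the helper name is mine.

```python
import numpy as np

def beta_ls(g_new, g_old, d_old):
    """Liu-Storey conjugacy parameter:
    beta_LS = g_{k+1}^T (g_{k+1} - g_k) / (-d_k^T g_k)."""
    y = g_new - g_old
    return (g_new @ y) / (-(d_old @ g_old))
```

When the previous direction is the steepest-descent direction d_k = -g_k, the denominator equals ||g_k||^2 and the LS parameter coincides with the PRP parameter.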
-
Convergence Property and Modifications of a Memory Gradient Method
Asia-Pacific Journal of Operational Research, 2005. Co-authors: Zhenjun Shi, Jie Shen. Abstract: We study properties of a modified memory gradient method, including its global convergence and rate of convergence. Numerical results show that modified memory gradient methods are effective for solving large-scale minimization problems.
-
A New Super-Memory Gradient Method with Curve Search Rule
Applied Mathematics and Computation, 2005. Co-authors: Zhenjun Shi, Jie Shen. Abstract: In this paper, we propose a new super-memory gradient method with a curve search rule for unconstrained optimization problems. The method uses multi-step information from previous iterations and curve search rules to generate new iterates. This makes the new method converge stably and renders it more suitable for large-scale optimization problems than similar methods. We analyze the global convergence and convergence rate under mild conditions. Numerical experiments show that the new algorithms are feasible and effective in practical computation.
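A super-memory gradient direction combines the current negative gradient with several previous search directions. The generic form below illustrates that idea only; the paper's particular weight choices and curve search rule are not reproduced here.

```python
import numpy as np

def super_memory_direction(g, past_dirs, betas):
    """Generic super-memory gradient direction (illustrative form):
    d_k = -g_k + sum_i beta_i * d_{k-i}."""
    d = -g
    for beta, d_prev in zip(betas, past_dirs):
        d = d + beta * d_prev
    return d
```

With an empty memory this reduces to the steepest-descent direction, and with a single remembered direction it has the familiar conjugate-gradient shape.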
Zhenjun Shi - One of the best experts on this subject based on the ideXlab platform.
- Publications: the same four papers listed above under co-author Jie Shen.
Jinbao Jian - One of the best experts on this subject based on the ideXlab platform.
-
A Hybrid Conjugate Gradient Method with Descent Property for Unconstrained Optimization
Applied Mathematical Modelling, 2015. Co-authors: Jinbao Jian, Lin Han, Xianzhen Jiang. Abstract: In this paper, building on several well-known conjugate gradient methods, a new hybrid conjugate gradient method is presented for unconstrained optimization. The proposed method generates descent directions at every iteration; moreover, this property is independent of the line search used to determine the steplength. Under the Wolfe line search, the proposed method possesses global convergence. Medium-scale numerical experiments and their performance profiles are reported, showing that the proposed method is promising.
Yuhong Dai - One of the best experts on this subject based on the ideXlab platform.
-
A Barzilai-Borwein Conjugate Gradient Method
Science China Mathematics, 2016. Co-authors: Yuhong Dai, Caixia Kou. Abstract: The linear conjugate gradient method is an optimal method for convex quadratic minimization, owing to the Krylov subspace minimization property. The advent of the limited-memory BFGS method and the Barzilai-Borwein gradient method, however, heavily restricted the use of the conjugate gradient method for large-scale nonlinear optimization. This is due, to a great extent, to the requirement of a relatively exact line search at each iteration and the loss of conjugacy of the search directions on various occasions. By contrast, the limited-memory BFGS method and the Barzilai-Borwein gradient method share the so-called asymptotic one-stepsize-per-line-search property: the trial stepsize in the method is asymptotically accepted by the line search as the iterates approach the solution. This paper focuses on the analysis of the subspace minimization conjugate gradient method of Yuan and Stoer (1995). Specifically, by choosing the parameter in that method in combination with the Barzilai-Borwein idea, we obtain efficient Barzilai-Borwein conjugate gradient (BBCG) methods. Initial numerical experiments show that one of the variants, BBCG3, is especially efficient among many methods without line searches. This variant of the BBCG method may enjoy the asymptotic one-stepsize-per-line-search property and become a strong candidate for large-scale nonlinear optimization.
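The two classical Barzilai-Borwein stepsizes on which the BBCG parameter choice builds are standard and easy to state; the specific combination used in BBCG3 is not reproduced here.

```python
import numpy as np

def bb_stepsizes(x, x_prev, g, g_prev):
    """The two Barzilai-Borwein stepsizes:
    BB1 = s^T s / s^T y,  BB2 = s^T y / y^T y,
    with s = x_k - x_{k-1} and y = g_k - g_{k-1}."""
    s = x - x_prev
    y = g - g_prev
    return (s @ s) / (s @ y), (s @ y) / (y @ y)
```

For a quadratic with positive definite Hessian A (so that y = A s), both stepsizes are reciprocals of Rayleigh quotients of A and therefore lie between 1/lambda_max and 1/lambda_min, with BB2 never exceeding BB1.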
-
Alternate Step Gradient Method
Optimization, 2003. Co-author: Yuhong Dai. Abstract: The Barzilai and Borwein (BB) gradient method does not guarantee a descent in the objective function at each iteration, but performs better in practice than the classical steepest descent (SD) method. The BB method has found many successful applications and generalizations in linear systems, unconstrained optimization, convex-constrained optimization, stochastic optimization, etc. In this article, we propose a new gradient method that uses the SD and the BB steps alternately; hence the name "alternate step (AS) gradient method." Our theoretical and numerical analyses show that the AS method is a promising alternative to the BB method for linear systems. Unconstrained optimization algorithms related to the AS method are also discussed. In particular, a more efficient gradient algorithm is obtained by applying the idea of the AS method in the GBB algorithm of Raydan (1997). To establish a general R-linear convergence result for gradient methods, an important property of the stepsize is drawn in this...
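For the quadratic case, alternating SD and BB steps can be sketched as below. This is an illustrative scheme under my own choice of alternation pattern, not necessarily the paper's exact AS method.

```python
import numpy as np

def alternate_step(A, b, x0, iters=100):
    """Sketch of an alternate-step gradient method for the quadratic
    f(x) = 0.5 x^T A x - b^T x: even iterations use the exact
    steepest-descent (Cauchy) stepsize g^T g / g^T A g, odd iterations
    use the BB1 stepsize s^T s / s^T y (illustrative scheme)."""
    x = np.asarray(x0, dtype=float)
    g = A @ x - b
    x_prev, g_prev = None, None
    for k in range(iters):
        if np.linalg.norm(g) < 1e-12:
            break
        if k % 2 == 0 or x_prev is None:
            alpha = (g @ g) / (g @ (A @ g))  # steepest-descent step
        else:
            s, y = x - x_prev, g - g_prev
            alpha = (s @ s) / (s @ y)        # BB1 step
        x_prev, g_prev = x, g
        x = x - alpha * g
        g = A @ x - b
    return x
```

A design note: on a quadratic, the BB1 stepsize computed right after an exact SD step reproduces that SD stepsize, so this alternation effectively reuses each Cauchy step twice, which is what breaks the classical SD zigzag.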
-
New Properties of a Nonlinear Conjugate Gradient Method
Numerische Mathematik, 2001. Co-author: Yuhong Dai. Abstract: This paper provides several new properties of the nonlinear conjugate gradient method in [5]. First, the method is proved to have a certain self-adjusting property that is independent of the line search and the convexity of the function. Second, under mild assumptions on the objective function, the method is shown to be globally convergent with a variety of line searches. Third, we find that, instead of the negative gradient direction, the search direction defined by the nonlinear conjugate gradient method in [5] can be used to restart any optimization method while guaranteeing global convergence. Some numerical results are also presented.
-
An Efficient Hybrid Conjugate Gradient Method for Unconstrained Optimization
Annals of Operations Research, 2001. Co-authors: Yuhong Dai, Yaxiang Yuan. Abstract: Recently, we proposed a nonlinear conjugate gradient method that produces a descent search direction at every iteration and converges globally provided the line search satisfies the weak Wolfe conditions. In this paper, we study methods related to this new nonlinear conjugate gradient method. Specifically, if the size of the scalar β_k relative to the one in the new method belongs to some interval, then the corresponding methods are proved to be globally convergent; otherwise, we construct a convex quadratic example showing that the methods need not converge. Numerical experiments are made for two combinations of the new method and the Hestenes-Stiefel conjugate gradient method. The initial results show that one of the hybrid methods is especially efficient for the given test problems.
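One widely studied combination of this kind truncates the Hestenes-Stiefel parameter by the Dai-Yuan one. The sketch below shows that combination as an illustration; it may differ in detail from the interval condition the paper analyzes.

```python
import numpy as np

def beta_hybrid(g_new, g_old, d_old):
    """Illustrative hybrid of the Hestenes-Stiefel and Dai-Yuan
    conjugacy parameters:
      beta_HS = g_{k+1}^T y_k / d_k^T y_k
      beta_DY = ||g_{k+1}||^2 / d_k^T y_k
      beta    = max(0, min(beta_HS, beta_DY))
    with y_k = g_{k+1} - g_k."""
    y = g_new - g_old
    denom = d_old @ y
    beta_hs = (g_new @ y) / denom
    beta_dy = (g_new @ g_new) / denom
    return max(0.0, min(beta_hs, beta_dy))
```

Capping the HS value by the DY value keeps the globally convergent behavior of the DY choice while retaining the restart-like reaction of HS when consecutive gradients are nearly parallel.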
-
A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property
SIAM Journal on Optimization, 1999. Co-authors: Yuhong Dai, Yaxiang Yuan. Abstract: Conjugate gradient methods are widely used for unconstrained optimization, especially for large-scale problems. The strong Wolfe conditions are usually used in the analyses and implementations of conjugate gradient methods. This paper presents a new version of the conjugate gradient method that converges globally provided the line search satisfies the standard Wolfe conditions. The conditions on the objective function are also weak, being similar to those required by the Zoutendijk condition.
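The conjugacy parameter commonly associated with this method is the Dai-Yuan formula; assuming that is the version meant here, it reads:

```python
import numpy as np

def beta_dy(g_new, g_old, d_old):
    """Dai-Yuan conjugacy parameter:
    beta_DY = ||g_{k+1}||^2 / (d_k^T (g_{k+1} - g_k))."""
    return (g_new @ g_new) / (d_old @ (g_new - g_old))
```

Unlike the PRP and HS parameters, the numerator here depends only on the new gradient, which is what allows global convergence under the standard (rather than strong) Wolfe conditions.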
Xianzhen Jiang - One of the best experts on this subject based on the ideXlab platform.
- Publications: the same paper listed above under co-author Jinbao Jian.