Unconstrained Optimization



The Experts below are selected from a list of 18045 Experts worldwide ranked by ideXlab platform

Neculai Andrei - One of the best experts on this subject based on the ideXlab platform.

  • a derivative free two level random search method for Unconstrained Optimization
    2021
    Co-Authors: Neculai Andrei
    Abstract:

    The purpose of this chapter is to present a two-level random search method for Unconstrained Optimization and the corresponding algorithm. The idea of the algorithm is to randomly generate a number of trial points in some domains at two levels. At the first level, a number of trial points are generated around the initial point, where the minimizing function is evaluated. At the second level, a number of local trial points are generated around each trial point, where the minimizing function is evaluated again. The algorithm consists of a number of rules for replacing the trial points with local trial points and for generating new trial points in order to reach a point where the function value is smaller. Details of the algorithm are developed and discussed: the number of trial points, the number of local trial points, the bounds of the domains in which these points are generated, the reduction of these bounds, the reduction of the number of trial points, middle points, the local character of the search, the identification of the minimum points, and the line search used to accelerate the algorithm. Numerical examples illustrate the characteristics and performance of the algorithm. At the same time, some open problems are identified.
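    As an illustration of the two-level idea sketched in this abstract, here is a minimal Python version. It is not Andrei's exact algorithm: the sampling distributions, the replacement rule, the bound-reduction factor, and all parameter names (`n_trial`, `n_local`, `bound`, `shrink`) are assumptions chosen for readability, and the line-search acceleration is omitted.

```python
import random

def two_level_random_search(f, x0, n_trial=10, n_local=5, bound=1.0,
                            shrink=0.5, iters=50):
    """Illustrative sketch of a two-level random search (not the paper's
    exact rules). Level 1 samples trial points around the current best
    point; level 2 samples local trial points around each trial point.
    A trial point is replaced by a local trial point whenever the latter
    has a smaller function value, and the sampling bound is reduced as
    the search proceeds."""
    best_x, best_f = list(x0), f(x0)
    for _ in range(iters):
        # Level 1: trial points sampled around the current best point.
        trials = [[xi + random.uniform(-bound, bound) for xi in best_x]
                  for _ in range(n_trial)]
        for t in trials:
            # Level 2: local trial points sampled around each trial point.
            for _ in range(n_local):
                local = [ti + random.uniform(-bound * shrink, bound * shrink)
                         for ti in t]
                if f(local) < f(t):
                    t[:] = local  # replace trial point with the better local point
            if f(t) < best_f:
                best_x, best_f = list(t), f(t)
        bound *= shrink  # reduce the bounds of the sampling domains
    return best_x, best_f
```

    For instance, minimizing f(x) = x1^2 + x2^2 from the point (2, 2) drives the best function value toward zero as the sampling bounds shrink around successively better points.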

  • a new accelerated diagonal quasi newton updating method with scaled forward finite differences directional derivative for Unconstrained Optimization
    Optimization, 2020
    Co-Authors: Neculai Andrei
    Abstract:

    An accelerated diagonal quasi-Newton updating algorithm for Unconstrained Optimization is presented. The elements of the diagonal matrix approximating the Hessian are determined as scaling of the f...

  • a new diagonal quasi newton updating method with scaled forward finite differences directional derivative for Unconstrained Optimization
    Numerical Functional Analysis and Optimization, 2019
    Co-Authors: Neculai Andrei
    Abstract:

    A new diagonal quasi-Newton updating algorithm for Unconstrained Optimization is presented. The elements of the diagonal matrix approximating the Hessian are determined as scaled forward finite dif...

  • an adaptive conjugate gradient algorithm for large scale Unconstrained Optimization
    Journal of Computational and Applied Mathematics, 2016
    Co-Authors: Neculai Andrei
    Abstract:

    An adaptive conjugate gradient algorithm is presented. The search direction is computed as the sum of the negative gradient and a vector determined by minimizing the quadratic approximation of the objective function at the current point. Using a special approximation of the inverse Hessian of the objective function, which depends on a positive parameter, we get a search direction which satisfies both the sufficient descent condition and the Dai-Liao conjugacy condition. The parameter in the search direction is determined in an adaptive manner, by minimizing the largest eigenvalue of the matrix defining it in order to cluster all the eigenvalues. The global convergence of the algorithm is proved for uniformly convex functions. Using a set of 800 Unconstrained Optimization test problems, we show that our algorithm is significantly more efficient and more robust than the CG-DESCENT algorithm. By solving five applications from the MINPACK-2 test problem collection, with 10^6 variables, we show that the suggested adaptive conjugate gradient algorithm is the top performer versus CG-DESCENT.
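    For context, the Dai-Liao conjugacy condition referred to in this abstract has the following standard form (Dai and Liao, 2001); the paper's contribution is the adaptive choice of the parameter t, which is not reproduced here.

```latex
% Dai--Liao conjugacy condition, with s_k = x_{k+1} - x_k,
% y_k = g_{k+1} - g_k, and a parameter t > 0:
d_{k+1}^{T} y_k = -t\, g_{k+1}^{T} s_k ,
% and the generic search direction that satisfies it:
d_{k+1} = -g_{k+1} + \beta_k^{DL} d_k, \qquad
\beta_k^{DL} = \frac{g_{k+1}^{T} y_k - t\, g_{k+1}^{T} s_k}{d_k^{T} y_k}.
```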

  • a simple three term conjugate gradient algorithm for Unconstrained Optimization
    Journal of Computational and Applied Mathematics, 2013
    Co-Authors: Neculai Andrei
    Abstract:

    A simple three-term conjugate gradient algorithm which satisfies both the descent condition and the conjugacy condition is presented. This algorithm is a modification of the Hestenes and Stiefel algorithm (Hestenes and Stiefel, 1952) [10], or of that of Hager and Zhang (Hager and Zhang, 2005) [23], in such a way that the search direction is descent and satisfies the conjugacy condition. These properties are independent of the line search. The algorithm could also be considered as a modification of the memoryless BFGS quasi-Newton method. The new approximation of the minimum is obtained by the general Wolfe line search, together with a standard acceleration technique developed by Andrei (2009) [27]. For uniformly convex functions, under standard assumptions, the global convergence of the algorithm is proved. Numerical comparisons of the suggested three-term conjugate gradient algorithm with six other three-term conjugate gradient algorithms, using a set of 750 Unconstrained Optimization problems, show that all these computational schemes have similar performance, the suggested one being slightly faster and more robust. The proposed three-term conjugate gradient algorithm substantially outperforms the well-known Hestenes and Stiefel conjugate gradient algorithm, as well as the more elaborate CG_DESCENT algorithm. Using five applications from the MINPACK-2 test problem collection (Averick et al., 1992) [25], with 10^6 variables, we show that the suggested three-term conjugate gradient algorithm is the top performer versus CG_DESCENT.
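    To make the structure of such a method concrete, here is a Python sketch of a three-term Hestenes-Stiefel direction. It uses a standard descent-preserving three-term variant rather than Andrei's exact formulas, and a plain Armijo backtracking line search in place of the Wolfe line search with acceleration; all names are illustrative.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def three_term_hs(f, grad, x, iters=200, tol=1e-8):
    """Sketch of a three-term Hestenes-Stiefel method (a known descent
    variant, not the paper's exact formulas):
        d_{k+1} = -g_{k+1} + beta*d_k - theta*y_k,
        beta  = g_{k+1}.y_k / d_k.y_k,
        theta = g_{k+1}.d_k / d_k.y_k,
    which gives g_{k+1}.d_{k+1} = -||g_{k+1}||^2, i.e. the descent
    property holds independently of the line search."""
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        if math.sqrt(dot(g, g)) < tol:
            break
        # Backtracking Armijo line search along d (slope is negative).
        t, fx, slope = 1.0, f(x), dot(g, d)
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * slope:
            t *= 0.5
        x_new = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x_new)
        y = [gn - gi for gn, gi in zip(g_new, g)]
        dy = dot(d, y)
        if abs(dy) < 1e-12:
            d = [-gi for gi in g_new]  # restart with steepest descent
        else:
            beta = dot(g_new, y) / dy
            theta = dot(g_new, d) / dy
            d = [-gn + beta * di - theta * yi
                 for gn, di, yi in zip(g_new, d, y)]
        x, g = x_new, g_new
    return x
```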

Philippe L Toint - One of the best experts on this subject based on the ideXlab platform.

  • nonlinear stepsize control trust regions and regularizations for Unconstrained Optimization
    Optimization Methods & Software, 2013
    Co-Authors: Philippe L Toint
    Abstract:

    A class of algorithms for Unconstrained Optimization is introduced, which subsumes the classical trust-region algorithm and two of its newer variants, as well as the cubic and quadratic regularization methods. A unified theory of global convergence to first-order critical points is then described for this class.

  • evaluation complexity of adaptive cubic regularization methods for convex Unconstrained Optimization
    Optimization Methods & Software, 2012
    Co-Authors: Coralia Cartis, Nicholas I. M. Gould, Philippe L Toint
    Abstract:

    The adaptive cubic regularization algorithms described in Cartis, Gould and Toint [Adaptive cubic regularisation methods for Unconstrained Optimization Part II: Worst-case function- and derivative-evaluation complexity, Math. Program. (2010), doi:10.1007/s10107-009-0337-y (online)]; [Part I: Motivation, convergence and numerical results, Math. Program. 127(2) (2011), pp. 245–295] for Unconstrained (nonconvex) Optimization are shown to have improved worst-case efficiency in terms of the function- and gradient-evaluation count when applied to convex and strongly convex objectives. In particular, our complexity upper bounds match in order (as a function of the accuracy of approximation), and sometimes even improve, those obtained by Nesterov [Introductory Lectures on Convex Optimization, Kluwer Academic Publishers, Dordrecht, 2004; Accelerating the cubic regularization of Newton's method on convex problems, Math. Program. 112(1) (2008), pp. 159–181] and Nesterov and Polyak [Cubic regularization of Newton's m...

  • complexity bounds for second order optimality in Unconstrained Optimization
    Journal of Complexity, 2012
    Co-Authors: Coralia Cartis, Nicholas I. M. Gould, Philippe L Toint
    Abstract:

    This paper examines worst-case evaluation bounds for finding weak minimizers in Unconstrained Optimization. For the cubic regularization algorithm, Nesterov and Polyak (2006) [15] and Cartis et al. (2010) [3] show that at most O(ε^{-3}) iterations may have to be performed for finding an iterate which is within ε of satisfying second-order optimality conditions. We first show that this bound can be derived for a version of the algorithm which only uses one-dimensional global Optimization of the cubic model, and that it is sharp. We next consider the standard trust-region method and show that a bound of the same type may also be derived for this method, and that it is also sharp in some cases. We conclude by showing that a comparison of the bounds on the worst-case behaviour of the cubic regularization and trust-region algorithms favours the first of these methods.

  • adaptive cubic regularisation methods for Unconstrained Optimization part ii worst case function and derivative evaluation complexity
    Mathematical Programming, 2011
    Co-Authors: Coralia Cartis, Nicholas I. M. Gould, Philippe L Toint
    Abstract:

    An Adaptive Regularisation framework using Cubics (ARC) was proposed for Unconstrained Optimization and analysed in Cartis, Gould and Toint (Part I, Math Program, doi: 10.1007/s10107-009-0286-5, 2009), generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by Weiser, Deuflhard and Erdmann (Optim Methods Softw 22(3):413–431, 2007). In this companion paper, we further the analysis by providing worst-case global iteration complexity bounds for ARC and a second-order variant to achieve approximate first-order, and for the latter second-order, criticality of the iterates. In particular, the second-order ARC algorithm requires at most O(ε^{-3/2}) iterations, or equivalently, function- and gradient-evaluations, to drive the norm of the gradient of the objective below the desired accuracy ε, and O(ε^{-3}) iterations to reach approximate nonnegative curvature in a subspace. The orders of these bounds match those proved for Algorithm 3.3 of Nesterov and Polyak, which minimizes the cubic model globally on each iteration. Our approach is more general in that it allows the cubic model to be solved only approximately and may employ approximate Hessians.
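    At each iteration, ARC approximately minimizes a cubic model of the objective; with B_k an approximation of the Hessian and σ_k > 0 the adaptive regularisation weight, the model is:

```latex
% Cubic regularisation model minimised (approximately) at iteration k:
m_k(s) = f(x_k) + g_k^{T} s + \tfrac{1}{2}\, s^{T} B_k s
         + \tfrac{1}{3}\, \sigma_k \|s\|^{3}.
```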

  • adaptive cubic regularisation methods for Unconstrained Optimization part i motivation convergence and numerical results
    Mathematical Programming, 2011
    Co-Authors: Coralia Cartis, Nicholas I. M. Gould, Philippe L Toint
    Abstract:

    An Adaptive Regularisation algorithm using Cubics (ARC) is proposed for Unconstrained Optimization, generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by Weiser et al. (Optim Methods Softw 22(3):413–431, 2007). At each iteration of our approach, an approximate global minimizer of a local cubic regularisation of the objective function is determined, and this ensures a significant improvement in the objective so long as the Hessian of the objective is locally Lipschitz continuous. The new method uses an adaptive estimation of the local Lipschitz constant and approximations to the global model-minimizer which remain computationally-viable even for large-scale problems. We show that the excellent global and local convergence properties obtained by Nesterov and Polyak are retained, and sometimes extended to a wider class of problems, by our ARC approach. Preliminary numerical experiments with small-scale test problems from the CUTEr set show encouraging performance of the ARC algorithm when compared to a basic trust-region implementation.

Jinbao Jian - One of the best experts on this subject based on the ideXlab platform.

Keyvan Amini - One of the best experts on this subject based on the ideXlab platform.

  • a scaled three term conjugate gradient method for large scale Unconstrained Optimization problem
    Calcolo, 2019
    Co-Authors: Parvaneh Faramarzi, Keyvan Amini
    Abstract:

    The moving asymptote method is an efficient tool for solving structural Optimization problems. In this paper, a new scaled three-term conjugate gradient method is proposed by combining the moving asymptote technique with the conjugate gradient method. In this method, the scaling parameters are calculated using the idea of moving asymptotes. It is proved that the search directions generated always satisfy the sufficient descent condition, independently of the line search. We establish the global convergence of the proposed method with an Armijo-type line search. The numerical results show the efficiency of the new algorithm for solving large-scale Unconstrained Optimization problems.

  • an efficient nonmonotone trust region method for Unconstrained Optimization
    Numerical Algorithms, 2012
    Co-Authors: Masoud Ahookhosh, Keyvan Amini
    Abstract:

    Monotone trust-region methods are well-known techniques for solving Unconstrained Optimization problems. Although nonmonotone strategies can improve both the likelihood of finding the global optimum and the numerical performance of these approaches, the traditional nonmonotone strategy has some disadvantages. In order to overcome these drawbacks, we introduce a variant nonmonotone strategy and incorporate it into a trust-region framework to construct a more reliable approach. The new nonmonotone strategy is a convex combination of the maximum function value over some prior successful iterates and the current function value. It is proved that the proposed algorithm possesses global convergence to first-order and second-order stationary points under some classical assumptions. Preliminary numerical experiments indicate that the new approach is considerably promising for solving Unconstrained Optimization problems.
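    The reference value of such a variant nonmonotone strategy can be sketched as follows; the weighting parameter `eta` and the function names are illustrative, not the paper's notation.

```python
def nonmonotone_reference(f_hist, f_current, eta=0.8):
    """Convex combination of the maximum function value over some prior
    successful iterates (f_hist) and the current function value:

        R_k = eta * max(f_hist) + (1 - eta) * f_current

    With eta = 1 this reduces to the traditional max-based nonmonotone
    rule; with eta = 0 it reduces to the ordinary monotone rule.
    """
    return eta * max(f_hist) + (1.0 - eta) * f_current
```

    In a trust-region test, a trial step would then be accepted when the ratio of actual reduction measured against R_k to the model's predicted reduction is large enough.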

  • a nonmonotone trust region line search method for large scale Unconstrained Optimization
    Applied Mathematical Modelling, 2012
    Co-Authors: Masoud Ahookhosh, Keyvan Amini, Mohammad Reza Peyghami
    Abstract:

    We consider an efficient trust-region framework which employs a new nonmonotone line search technique for Unconstrained Optimization problems. Unlike the traditional nonmonotone trust-region method, our proposed algorithm avoids re-solving the subproblem whenever a trial step is rejected. Instead, it performs a nonmonotone Armijo-type line search in the direction of the rejected trial step to construct a new point. Theoretical analysis indicates that the new approach preserves global convergence to first-order critical points under classical assumptions. Moreover, superlinear and quadratic convergence are established under suitable conditions. Numerical experiments show the efficiency and effectiveness of the proposed approach for solving Unconstrained Optimization problems.
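    The fallback on rejection can be sketched as follows; the Armijo constants, the backtracking factor, and all names are illustrative assumptions, not the paper's values.

```python
def nonmonotone_armijo_step(f, x, s, slope, R, gamma=1e-4, shrink=0.5,
                            max_backtracks=30):
    """Illustrative sketch: when a trust-region trial step s is rejected,
    backtrack along s with a nonmonotone Armijo condition measured against
    the reference value R, instead of re-solving the subproblem with a
    smaller radius. slope = g^T s must be negative (descent direction)."""
    t = 1.0
    for _ in range(max_backtracks):
        # Nonmonotone Armijo test: sufficient decrease relative to R.
        if f([xi + t * si for xi, si in zip(x, s)]) <= R + gamma * t * slope:
            break
        t *= shrink  # backtrack along the rejected trial step
    return [xi + t * si for xi, si in zip(x, s)]
```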

  • a nonmonotone trust region method with adaptive radius for Unconstrained Optimization problems
    Computers & Mathematics With Applications, 2010
    Co-Authors: Masoud Ahookhosh, Keyvan Amini
    Abstract:

    In this paper, we incorporate a nonmonotone technique with the newly proposed adaptive trust-region radius (Shi and Guo, 2008) [4] in order to propose a new nonmonotone trust-region method with an adaptive radius for Unconstrained Optimization. Both nonmonotone techniques and adaptive trust-region radius strategies can improve trust-region methods in the sense of global convergence. The global convergence to first- and second-order critical points, together with the local superlinear and quadratic convergence of the new method, is established under some suitable conditions. Numerical results show that the new method is very efficient and robust for Unconstrained Optimization problems.

Xianzhen Jiang - One of the best experts on this subject based on the ideXlab platform.