Nonlinear Optimization


The experts below are selected from a list of 106,746 experts worldwide, ranked by the ideXlab platform.

Ruben Martinez-Cantin - One of the best experts on this subject based on the ideXlab platform.

Yunong Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Z-type neural dynamics for time-varying Nonlinear Optimization under a linear equality constraint with robot application
    Journal of Computational and Applied Mathematics, 2018
    Co-Authors: Mingzhi Mao, Frank Uhlig, Yunong Zhang
    Abstract:

    Nonlinear Optimization is widely important for science and engineering. Most research in Optimization has dealt with static Nonlinear Optimization, while little has been done on time-varying Nonlinear Optimization problems, which are generally more complicated and demanding. We study time-varying Nonlinear Optimization problems with time-varying linear equality constraints and adapt Z-type neural dynamics (ZTND) for solving such problems. Using a Lagrange-multiplier approach we construct a continuous ZTND model for such time-varying Optimizations. A new four-instant finite difference (FIFD) formula is proposed that helps us discretize the continuous ZTND model with high accuracy. We propose the FDZTND-K and FDZTND-U discrete models and compare their quality, and the advantage of the FIFD formula, with two standard Euler-discretization ZTND models, called EDZTND-K and EDZTND-U, that achieve lower accuracy. Theoretical convergence of our continuous and discrete models is proved and our methods are tested in numerical experiments. For a real-world application, we apply the FDZTND-U model to robot motion planning and show its feasibility in practice.
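    The Lagrange-multiplier construction behind such equality-constrained tracking can be sketched in a simplified form. This is not the paper's ZTND model: it is a hedged illustration that, at each sampled instant, takes one Newton step on the KKT conditions of the Lagrangian L(x, λ, t) = f(x, t) + λᵀ(A(t)x − b(t)); the toy objective, constraint, and function names are all assumptions for illustration.

    ```python
    import numpy as np

    def kkt_newton_step(x, lam, hess, grad, A, b):
        # One Newton step on the KKT system of the Lagrangian:
        #   [ H  A^T ] [dx  ]   [ -(grad + A^T lam) ]
        #   [ A   0  ] [dlam] = [ -(A x - b)        ]
        n, m = len(x), len(b)
        K = np.block([[hess, A.T], [A, np.zeros((m, m))]])
        rhs = -np.concatenate([grad + A.T @ lam, A @ x - b])
        d = np.linalg.solve(K, rhs)
        return x + d[:n], lam + d[n:]

    # Toy time-varying problem: minimize ||x - c(t)||^2 s.t. x0 + x1 = t.
    x, lam = np.zeros(2), np.zeros(1)
    for k in range(50):
        t = 0.1 * k
        c = np.array([np.sin(t), np.cos(t)])   # time-varying target
        hess = 2.0 * np.eye(2)                 # Hessian of ||x - c||^2
        grad = 2.0 * (x - c)                   # gradient at current x
        A = np.array([[1.0, 1.0]])             # time-invariant here for brevity
        b = np.array([t])                      # time-varying right-hand side
        x, lam = kkt_newton_step(x, lam, hess, grad, A, b)
    # Because the toy objective is quadratic, each step lands exactly on
    # that instant's constrained minimizer, so x satisfies x0 + x1 = t.
    ```

    For a genuinely nonlinear f, one step per instant only approximates the instantaneous minimizer, which is where higher-order discretizations such as the paper's FIFD formula earn their accuracy.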

  • Continuous and discrete Zhang dynamics for real-time varying Nonlinear Optimization
    Numerical Algorithms, 2016
    Co-Authors: Long Jin, Yunong Zhang
    Abstract:

    Online solution of time-varying Nonlinear Optimization problems is considered an important issue in the fields of scientific and engineering research. In this study, the continuous-time derivative (CTD) model and two gradient dynamics (GD) models are developed for real-time varying Nonlinear Optimization (RTVNO). A continuous-time Zhang dynamics (CTZD) model is then generalized and investigated for RTVNO to remedy the weaknesses of the CTD and GD models. For possible digital hardware realization, a discrete-time Zhang dynamics (DTZD) model, which can be further reduced to Newton-Raphson iteration (NRI), is also proposed and developed. Theoretical analyses indicate that the residual error of the CTZD model has an exponential convergence, and that the maximum steady-state residual error (MSSRE) of the DTZD model has an $O(\tau^2)$ pattern, with $\tau$ denoting the sampling gap. Simulation and numerical results further illustrate the efficacy and advantages of the proposed CTZD and DTZD models for RTVNO.
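    The Zhang-dynamics design recipe can be illustrated in the scalar case. This is a hedged sketch rather than the paper's exact DTZD model: the error $e(t) = \partial f/\partial x$ is forced to zero via $\dot e = -\gamma e$, giving $\dot x = -(\gamma f_x + f_{xt})/f_{xx}$, which an Euler discretization with sampling gap $\tau$ (and $h = \tau\gamma$) turns into $x_{k+1} = x_k - (h f_x + \tau f_{xt})/f_{xx}$; the toy objective and names are assumptions.

    ```python
    import math

    def dtzd_step(x, t, f_x, f_xx, f_xt, tau, h):
        # Discretized Zhang-dynamics step for scalar time-varying
        # minimization: x_{k+1} = x_k - (h*f_x + tau*f_xt) / f_xx.
        return x - (h * f_x(x, t) + tau * f_xt(x, t)) / f_xx(x, t)

    # Toy problem: track the minimizer of f(x, t) = (x - sin t)^2,
    # whose exact solution is x*(t) = sin t.
    tau, h = 0.01, 1.0
    x = 0.0
    for k in range(1000):
        t = k * tau
        x = dtzd_step(x, t,
                      lambda x, t: 2.0 * (x - math.sin(t)),   # f_x
                      lambda x, t: 2.0,                       # f_xx
                      lambda x, t: -2.0 * math.cos(t),        # f_xt
                      tau, h)
    # x now tracks sin(t) at t = 10.0 with an O(tau^2) lag, matching
    # the MSSRE pattern stated in the abstract.
    ```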

  • Discrete-time Zhang neural network for online time-varying Nonlinear Optimization with application to manipulator motion generation
    IEEE Transactions on Neural Networks, 2015
    Co-Authors: Long Jin, Yunong Zhang
    Abstract:

    In this brief, a discrete-time Zhang neural network (DTZNN) model is first proposed, developed, and investigated for online time-varying Nonlinear Optimization (OTVNO). Then, Newton iteration is shown to be derived from the proposed DTZNN model. In addition, to eliminate the explicit matrix-inversion operation, the quasi-Newton Broyden–Fletcher–Goldfarb–Shanno (BFGS) method is introduced, which can effectively approximate the inverse of the Hessian matrix. A DTZNN-BFGS model is thus proposed and investigated for OTVNO, which is the combination of the DTZNN model and the quasi-Newton BFGS method. Furthermore, theoretical analyses show that, with step-size $h=1$ and/or with zero initial error, the maximal residual error of the DTZNN model has an $O(\tau ^{2})$ pattern, whereas the maximal residual error of the Newton iteration has an $O(\tau )$ pattern, with $\tau $ denoting the sampling gap. Moreover, when $h\neq 1$ and $h\in (0,2)$ , the maximal steady-state residual error of the DTZNN model has an $O(\tau ^{2})$ pattern. Finally, an illustrative numerical experiment and an application example to manipulator motion generation are provided and analyzed to substantiate the efficacy of the proposed DTZNN and DTZNN-BFGS models for OTVNO.
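    The BFGS device the abstract refers to, maintaining an approximation of the inverse Hessian directly so that no matrix inversion is ever performed, can be sketched as follows. This is the standard BFGS inverse update rather than the paper's combined DTZNN-BFGS model, and the toy function is an assumption for illustration.

    ```python
    import numpy as np

    def bfgs_inverse_update(H, s, y):
        # Standard BFGS update of the inverse-Hessian approximation H:
        #   H+ = (I - rho*s*y^T) H (I - rho*y*s^T) + rho*s*s^T,
        # with s = x_{k+1} - x_k, y = g_{k+1} - g_k, rho = 1/(y^T s).
        rho = 1.0 / float(y @ s)
        I = np.eye(len(s))
        V = I - rho * np.outer(s, y)
        return V @ H @ V.T + rho * np.outer(s, s)

    # Toy check on f(x) = x0^2 + 2*x1^2, whose gradient is [2*x0, 4*x1].
    a = np.array([1.0, 1.0])          # previous iterate
    b = np.array([0.5, 0.2])          # current iterate
    s = b - a
    y = np.array([2*b[0], 4*b[1]]) - np.array([2*a[0], 4*a[1]])
    H_new = bfgs_inverse_update(np.eye(2), s, y)
    # By construction the update enforces the secant equation H_new @ y = s,
    # i.e. H_new mimics the true inverse Hessian along the latest step.
    ```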

  • Neural dynamics and Newton-Raphson iteration for Nonlinear Optimization
    Journal of Computational and Nonlinear Dynamics, 2014
    Co-Authors: Dongsheng Guo, Yunong Zhang
    Abstract:

    In this paper, a special type of neural dynamics (ND) is generalized and investigated for time-varying and static scalar-valued Nonlinear Optimization. In addition, for comparison purposes, the gradient-based neural dynamics (also termed gradient dynamics (GD)) is studied for Nonlinear Optimization. Moreover, for possible digital hardware realization, discrete-time ND (DTND) models are developed. With a linear activation function and a step size of 1, the DTND model reduces to Newton–Raphson iteration (NRI) for solving static Nonlinear Optimization problems. That is, the well-known NRI method can be viewed as a special case of the DTND model. Furthermore, the geometric representation of the ND models is given for time-varying Nonlinear Optimization. Numerical results demonstrate the efficacy and advantages of the proposed ND models for time-varying and static Nonlinear Optimization.
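    The Newton-Raphson iteration (NRI) that the DTND model reduces to is, for scalar static optimization, the classical recurrence $x_{k+1} = x_k - f'(x_k)/f''(x_k)$, which drives the first derivative to zero. A minimal sketch, with the function names and toy objective chosen here for illustration:

    ```python
    def newton_raphson_minimize(df, d2f, x0, tol=1e-10, max_iter=100):
        # NRI for scalar Nonlinear Optimization: find a stationary point
        # of f by iterating x <- x - f'(x)/f''(x).
        x = x0
        for _ in range(max_iter):
            step = df(x) / d2f(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    # Example: minimize f(x) = (x - 3)^2 + 1, so f'(x) = 2(x - 3), f''(x) = 2.
    x_star = newton_raphson_minimize(lambda x: 2.0 * (x - 3.0),
                                     lambda x: 2.0, x0=0.0)
    # For this quadratic the iteration converges to x = 3 in one step.
    ```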

Kai Yang - One of the best experts on this subject based on the ideXlab platform.

R G Harley - One of the best experts on this subject based on the ideXlab platform.

  • Particle swarm Optimization: basic concepts, variants and applications in power systems
    IEEE Transactions on Evolutionary Computation, 2008
    Co-Authors: Y Del Valle, G K Venayagamoorthy, Salman Mohagheghi, J C Hernandez, R G Harley
    Abstract:

    Many areas in power systems require solving one or more Nonlinear Optimization problems. While analytical methods might suffer from slow convergence and the curse of dimensionality, heuristics-based swarm intelligence can be an efficient alternative. Particle swarm Optimization (PSO), part of the swarm intelligence family, is known to effectively solve large-scale Nonlinear Optimization problems. This paper presents a detailed overview of the basic concepts of PSO and its variants. It also provides a comprehensive survey of the power system applications that have benefited from the powerful nature of PSO as an Optimization technique. For each application, the technical details required for applying PSO, such as its type, particle formulation (solution representation), and the most efficient fitness functions, are discussed.
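    The basic PSO scheme surveyed above updates each particle's velocity from three pulls: inertia, attraction to its personal best, and attraction to the swarm's global best. A minimal sketch of the canonical inertia-weight variant, with all parameter values and names chosen here for illustration (not taken from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pso_minimize(f, dim, n_particles=30, iters=200,
                     w=0.7, c1=1.5, c2=1.5, bound=5.0):
        # Canonical PSO: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
        x = rng.uniform(-bound, bound, (n_particles, dim))
        v = np.zeros((n_particles, dim))
        pbest = x.copy()
        pbest_val = np.apply_along_axis(f, 1, x)
        gbest = pbest[pbest_val.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            vals = np.apply_along_axis(f, 1, x)
            improved = vals < pbest_val            # update personal bests
            pbest[improved] = x[improved]
            pbest_val[improved] = vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()  # update global best
        return gbest, pbest_val.min()

    # Example: minimize the 3-D sphere function f(z) = ||z||^2.
    best, val = pso_minimize(lambda z: float(np.sum(z**2)), dim=3)
    ```

    The inertia weight `w` and acceleration coefficients `c1`, `c2` are the knobs the survey's variants mostly tune; values around w = 0.7, c1 = c2 = 1.5 are a common convergent choice.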

Saman K Halgamuge - One of the best experts on this subject based on the ideXlab platform.

  • A comparison of constraint-handling methods for the application of particle swarm Optimization to constrained Nonlinear Optimization problems
    Congress on Evolutionary Computation, 2003
    Co-Authors: G Coath, Saman K Halgamuge
    Abstract:

    We present a comparison of two constraint-handling methods used in the application of particle swarm Optimization (PSO) to constrained Nonlinear Optimization problems (CNOPs). A brief review of constraint-handling techniques for evolutionary algorithms (EAs) is given, followed by a direct comparison of two existing methods of enforcing constraints using PSO. The two methods considered are the application of nonstationary multistage penalty functions and the preservation of feasible solutions. Five benchmark functions are used for the comparison, and the results are examined to assess the performance of each method in terms of accuracy and rate of convergence. Conclusions are drawn and suggestions for the applicability of each method to real-world CNOPs are given.
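    The first of the two compared methods, nonstationary penalty functions, scores infeasible particles with the objective plus a penalty that grows with the iteration count, so constraints tighten as the swarm converges. A hedged sketch of that idea (the penalty schedule, exponents, and names here are illustrative assumptions, not the paper's exact multistage formulation):

    ```python
    def penalized_fitness(f, constraints, x, k, C=0.5, alpha=2.0):
        # Nonstationary penalty: fitness = f(x) + (C*k)^alpha * violation,
        # where k is the iteration count and each constraint is written
        # in the form g_i(x) <= 0 (so max(0, g_i(x)) is its violation).
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + (C * k) ** alpha * violation

    # Example: minimize f(x) = x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
    f = lambda x: x * x
    g = [lambda x: 1.0 - x]
    feasible = penalized_fitness(f, g, 2.0, k=10)    # no penalty: 4.0
    infeasible = penalized_fitness(f, g, 0.0, k=10)  # penalty (0.5*10)^2 = 25.0
    ```

    The alternative method compared in the paper, preserving feasible solutions, instead initializes and keeps particles inside the feasible region, so no penalty term is needed at all.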