The Experts below are selected from a list of 106,746 Experts worldwide, ranked by the ideXlab platform.
Ruben Martinez-Cantin - One of the best experts on this subject based on the ideXlab platform.
-
BayesOpt: A Bayesian Optimization Library for Nonlinear Optimization, Experimental Design and Bandits
arXiv: Learning, 2014. Co-Authors: Ruben Martinez-Cantin. Abstract: BayesOpt is a library with state-of-the-art Bayesian Optimization methods to solve Nonlinear Optimization, stochastic bandit, or sequential experimental design problems. Bayesian Optimization is sample efficient because it builds a posterior distribution to capture the evidence and prior knowledge about the target function. Built in standard C++, the library is extremely efficient while being portable and flexible. It includes a common interface for C, C++, Python, Matlab and Octave.
-
BayesOpt: A Bayesian Optimization Library for Nonlinear Optimization, Experimental Design and Bandits
Journal of Machine Learning Research, 2014. Co-Authors: Ruben Martinez-Cantin. Abstract: BayesOpt is a library with state-of-the-art Bayesian Optimization methods to solve Nonlinear Optimization, stochastic bandit, or sequential experimental design problems. Bayesian Optimization is characterized by being sample efficient, as it builds a posterior distribution to capture the evidence and prior knowledge of the target function. Built in standard C++, the library is extremely efficient while being portable and flexible. It includes a common interface for C, C++, Python, Matlab and Octave.
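The loop behind such a library can be sketched in a few dozen lines. The following minimal numpy version is not BayesOpt's actual API; the RBF kernel, length-scale, candidate grid, and toy objective are all illustrative assumptions. It fits a zero-mean Gaussian-process posterior to the observed samples and picks each new sample point by expected improvement:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def rbf(a, b, ell=0.2):
    # Squared-exponential kernel between two 1-D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Zero-mean GP regression: posterior mean and variance at test points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    # EI acquisition for minimization, computed pointwise.
    ei = np.zeros_like(mu)
    for i, (m, s) in enumerate(zip(mu, np.sqrt(var))):
        if s < 1e-9:
            continue                                  # already-sampled point
        z = (best - m) / s
        Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))        # standard normal CDF
        phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)      # standard normal PDF
        ei[i] = (best - m) * Phi + s * phi
    return ei

def bayesian_optimize(f, n_init=4, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 200)                 # candidate points
    X = rng.random(n_init)                            # initial design
    y = f(X)
    for _ in range(n_iter):
        mu, var = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, var, y.min()))]
        X = np.append(X, x_next)
        y = np.append(y, f(np.array([x_next])))
    k = np.argmin(y)
    return X[k], y[k]

# Toy objective with known minimizer x = 0.3.
best_x, best_f = bayesian_optimize(lambda x: (x - 0.3) ** 2)
```

The sample efficiency claimed in the abstract comes from the posterior: each new evaluation is placed where the model predicts either a low mean or high uncertainty, rather than on a fixed grid.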
Yunong Zhang
-
Z-Type Neural Dynamics for Time-Varying Nonlinear Optimization Under a Linear Equality Constraint With Robot Application
Journal of Computational and Applied Mathematics, 2018. Co-Authors: Mingzhi Mao, Frank Uhlig, Yunong Zhang. Abstract: Nonlinear Optimization is widely important for science and engineering. Most research in Optimization has dealt with static Nonlinear Optimization, while little has been done on time-varying Nonlinear Optimization problems, which are generally more complicated and demanding. We study time-varying Nonlinear Optimization with time-varying linear equality constraints and adapt Z-type neural dynamics (ZTND) to solve such problems. Using a Lagrange-multiplier approach, we construct a continuous ZTND model for such time-varying Optimizations. A new four-instant finite difference (FIFD) formula is proposed that helps discretize the continuous ZTND model with high accuracy. We propose the FDZTND-K and FDZTND-U discrete models and compare their quality, and the advantage of the FIFD formula, with two standard Euler-discretization ZTND models, called EDZTND-K and EDZTND-U, that achieve lower accuracy. Theoretical convergence of our continuous and discrete models is proved, and our methods are tested in numerical experiments. As a real-world application, we apply the FDZTND-U model to robot motion planning and show its feasibility in practice.
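On a toy time-varying QP, the Z-type design idea, forcing the KKT residual e(t) to obey e' = -gamma * e, can be sketched as follows. The objective, constraint, gain gamma, and plain Euler integration are illustrative assumptions; the paper's FDZTND models use a more accurate four-instant discretization.

```python
import numpy as np

# Toy problem: minimize 0.5*||x||^2 subject to x1 + x2 = b(t), b(t) = sin(t) + 2.
# With Lagrange multiplier lam and y = (x1, x2, lam), the KKT residual is
#   e(y, t) = [x + lam * a; a^T x - b(t)] = J y - c(t),  a = (1, 1).
J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])

def c(t):
    return np.array([0.0, 0.0, np.sin(t) + 2.0])

def c_dot(t):
    return np.array([0.0, 0.0, np.cos(t)])

def znd_track(T=2.0, tau=1e-3, gamma=10.0):
    # Z-type design: choose y' so that e' = -gamma * e, i.e.
    # J y' - c'(t) = -gamma * (J y - c(t))  =>  y' = J^{-1} (c'(t) - gamma * e).
    y = np.zeros(3)
    t = 0.0
    for _ in range(int(T / tau)):
        e = J @ y - c(t)
        y = y + tau * np.linalg.solve(J, c_dot(t) - gamma * e)   # Euler step
        t += tau
    return y, t

y, t = znd_track()
x_star = (np.sin(t) + 2.0) / 2.0   # analytic minimizer: x1 = x2 = b(t)/2
```

Because the time derivative c'(t) is fed into the dynamics, the model tracks the moving optimum instead of lagging behind it, which is the key difference from a static gradient scheme.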
-
Continuous and Discrete Zhang Dynamics for Real-Time Varying Nonlinear Optimization
Numerical Algorithms, 2016. Co-Authors: Long Jin, Yunong Zhang. Abstract: The online solution of time-varying Nonlinear Optimization problems is an important issue in scientific and engineering research. In this study, the continuous-time derivative (CTD) model and two gradient dynamics (GD) models are developed for real-time varying Nonlinear Optimization (RTVNO). A continuous-time Zhang dynamics (CTZD) model is then generalized and investigated for RTVNO to remedy the weaknesses of the CTD and GD models. For possible digital-hardware realization, a discrete-time Zhang dynamics (DTZD) model, which can be further reduced to Newton-Raphson iteration (NRI), is also proposed and developed. Theoretical analyses indicate that the residual error of the CTZD model has an exponential convergence, and that the maximum steady-state residual error (MSSRE) of the DTZD model has an O(τ²) pattern, with τ denoting the sampling gap. Simulation and numerical results further illustrate the efficacy and advantages of the proposed CTZD and DTZD models for RTVNO.
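The O(τ²)-versus-O(τ) behaviour is easy to reproduce on a toy scalar problem. In this sketch, the objective f(x,t) = (x - sin t)^2, the horizon, and the sampling gap are assumptions chosen for illustration; the DTZD-style step includes the time-derivative term, and dropping that term gives a plain Newton-type iteration for comparison:

```python
import numpy as np

def track(tau, use_time_derivative, T=10.0):
    # Minimize f(x, t) = (x - sin t)^2 at sampling instants t_k = k * tau.
    # Derivatives: f_x = 2(x - sin t), f_xx = 2, f_xt = -2 cos t.
    # DTZD-style step (h = 1): x_{k+1} = x_k - (h * f_x + tau * f_xt) / f_xx.
    # Dropping the tau * f_xt term yields a plain Newton-type iteration.
    x, errs = 0.0, []
    steps = int(T / tau)
    for k in range(steps):
        t = k * tau
        fx, fxx, fxt = 2.0 * (x - np.sin(t)), 2.0, -2.0 * np.cos(t)
        corr = tau * fxt if use_time_derivative else 0.0
        x = x - (fx + corr) / fxx
        errs.append(abs(x - np.sin(t + tau)))   # error at the next instant
    return max(errs[steps // 2:])               # steady-state residual error

tau = 0.01
e_dtzd   = track(tau, use_time_derivative=True)    # ~O(tau^2)
e_newton = track(tau, use_time_derivative=False)   # ~O(tau)
```

With τ = 0.01, the measured steady-state errors differ by roughly two orders of magnitude, matching the O(τ²) versus O(τ) patterns stated in the abstract.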
-
Discrete-Time Zhang Neural Network for Online Time-Varying Nonlinear Optimization With Application to Manipulator Motion Generation
IEEE Transactions on Neural Networks, 2015. Co-Authors: Long Jin, Yunong Zhang. Abstract: In this brief, a discrete-time Zhang neural network (DTZNN) model is first proposed, developed, and investigated for online time-varying Nonlinear Optimization (OTVNO). Newton iteration is then shown to be derivable from the proposed DTZNN model. To eliminate the explicit matrix-inversion operation, the quasi-Newton Broyden–Fletcher–Goldfarb–Shanno (BFGS) method is introduced, which effectively approximates the inverse of the Hessian matrix. A DTZNN-BFGS model, combining the DTZNN model and the quasi-Newton BFGS method, is thus proposed and investigated for OTVNO. Theoretical analyses show that, with step size $h=1$ and/or with zero initial error, the maximal residual error of the DTZNN model has an $O(\tau ^{2})$ pattern, whereas the maximal residual error of Newton iteration has an $O(\tau )$ pattern, with $\tau $ denoting the sampling gap. Moreover, when $h\neq 1$ and $h\in (0,2)$, the maximal steady-state residual error of the DTZNN model has an $O(\tau ^{2})$ pattern. Finally, an illustrative numerical experiment and an application example to manipulator motion generation are provided and analyzed to substantiate the efficacy of the proposed DTZNN and DTZNN-BFGS models for OTVNO.
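The role of the BFGS update can be illustrated on a toy time-varying quadratic: the inverse Hessian is never formed explicitly but estimated from observed gradient differences. The matrix Q, trajectory p(t), gains, and sampling gap below are assumptions for illustration, not the brief's experiment:

```python
import numpy as np

# Toy time-varying problem: f(x, t) = 0.5 x^T Q x - x^T p(t),
# gradient g = Q x - p(t), minimizer x*(t) = Q^{-1} p(t).
Q = np.array([[1.0, 0.2],
              [0.2, 0.8]])

def p(t):
    return np.array([np.sin(t), np.cos(t)])

def p_dot(t):
    return np.array([np.cos(t), -np.sin(t)])

def dtznn_bfgs(T=6.0, tau=0.01, h=1.0):
    x = np.zeros(2)
    H = np.eye(2)                     # BFGS estimate of the inverse Hessian
    for k in range(int(T / tau)):
        t = k * tau
        g = Q @ x - p(t)
        # DTZNN-style step with time-derivative term tau * g_t (g_t = -p'(t)),
        # using H in place of an explicit Hessian inverse.
        x_new = x - H @ (h * g + tau * (-p_dot(t)))
        s = x_new - x
        yg = (Q @ x_new - p(t)) - g   # gradient change at fixed t
        sy = s @ yg
        if sy > 1e-12:                # standard BFGS inverse-Hessian update
            rho = 1.0 / sy
            I = np.eye(2)
            H = (I - rho * np.outer(s, yg)) @ H @ (I - rho * np.outer(yg, s)) \
                + rho * np.outer(s, s)
        x = x_new
    t_end = int(T / tau) * tau
    return x, np.linalg.solve(Q, p(t_end))

x, x_star = dtznn_bfgs()
```

Because the curvature pairs (s, y) here satisfy y = Q s exactly, H converges toward Q⁻¹ along the explored directions, and the iterate tracks the moving minimizer without any matrix inversion inside the loop.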
-
Neural Dynamics and Newton–Raphson Iteration for Nonlinear Optimization
Journal of Computational and Nonlinear Dynamics, 2014. Co-Authors: Dongsheng Guo, Yunong Zhang. Abstract: In this paper, a special type of neural dynamics (ND) is generalized and investigated for time-varying and static scalar-valued Nonlinear Optimization. For comparison, gradient-based neural dynamics (also termed gradient dynamics, GD) is studied for Nonlinear Optimization. Moreover, for possible digital-hardware realization, discrete-time ND (DTND) models are developed. With a linear activation function and a step size of 1, the DTND model reduces to Newton–Raphson iteration (NRI) for solving static Nonlinear Optimization problems; that is, the well-known NRI method can be viewed as a special case of the DTND model. In addition, a geometric representation of the ND models is given for time-varying Nonlinear Optimization. Numerical results demonstrate the efficacy and advantages of the proposed ND models for time-varying and static Nonlinear Optimization.
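The reduction to Newton–Raphson iteration is easy to verify on a static scalar example. Here f(x) = x - ln x is a convex toy objective assumed for illustration; its minimizer is x = 1, where f'(x) = 1 - 1/x vanishes:

```python
def newton_raphson_minimize(fprime, fsecond, x0, iters=8):
    # x_{k+1} = x_k - f'(x_k) / f''(x_k): exactly the DTND model with a
    # linear activation function and step size 1, on a static problem.
    x = x0
    for _ in range(iters):
        x = x - fprime(x) / fsecond(x)
    return x

# f(x) = x - ln x on x > 0: f'(x) = 1 - 1/x, f''(x) = 1/x^2, minimum at x = 1.
x = newton_raphson_minimize(lambda x: 1.0 - 1.0 / x,
                            lambda x: 1.0 / x ** 2,
                            x0=0.5)
```

The iterates exhibit the usual quadratic convergence of NRI; for this objective the error satisfies e_{k+1} = e_k², so a handful of iterations reaches machine precision.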
Kai Yang
-
Inexact Primal–Dual Gradient Projection Methods for Nonlinear Optimization on Convex Set
Optimization, 2020. Co-Authors: Fan Zhang, Hao Wang, Jiashan Wang, Kai Yang. Abstract: In this paper, we propose a novel primal–dual inexact gradient projection method for Nonlinear Optimization problems with a convex-set constraint. This method only needs inexact computation of the projections onto the convex set for each iteration.
-
Inexact Primal–Dual Gradient Projection Methods for Nonlinear Optimization on Convex Set
arXiv: Optimization and Control, 2019. Co-Authors: Fan Zhang, Hao Wang, Jiashan Wang, Kai Yang. Abstract: In this paper, we propose a novel primal–dual inexact gradient projection method for Nonlinear Optimization problems with a convex-set constraint. This method only needs inexact computation of the projections onto the convex set at each iteration, consequently reducing the computational cost of projections per iteration. This feature is especially attractive for problems where the projections are computationally hard to calculate. A global convergence guarantee and an O(1/k) ergodic convergence rate of the optimality residual are provided under loose assumptions. We apply the proposed strategy to l1-ball-constrained problems. Numerical results show that our inexact gradient projection methods for solving l1-ball-constrained problems are more efficient than the exact methods.
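For the l1-ball case, the exact Euclidean projection has a well-known sort-based closed form, which a projected-gradient method calls once per iteration. This sketch uses the exact projection and random least-squares data, both assumptions for illustration; the paper's contribution is precisely that the projection may instead be computed inexactly:

```python
import numpy as np

def project_l1_ball(v, r=1.0):
    # Exact Euclidean projection onto {x : ||x||_1 <= r}, via sorting.
    if np.abs(v).sum() <= r:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]              # sorted magnitudes, descending
    css = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (css - r) / k > 0)[0][-1]
    theta = (css[rho] - r) / (rho + 1.0)      # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient(A, b, r=1.0, iters=200):
    # min 0.5 * ||A x - b||^2  subject to  ||x||_1 <= r.
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = project_l1_ball(x - (A.T @ (A @ x - b)) / L, r)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x = projected_gradient(A, b)
```

Each iteration costs one gradient evaluation plus one projection; when the projection is the bottleneck (e.g. for more complicated convex sets), solving it only approximately, as the paper proposes, reduces the per-iteration cost.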
R. G. Harley
-
Particle Swarm Optimization: Basic Concepts, Variants and Applications in Power Systems
IEEE Transactions on Evolutionary Computation, 2008. Co-Authors: Y. Del Valle, G. K. Venayagamoorthy, Salman Mohagheghi, J. C. Hernandez, R. G. Harley. Abstract: Many areas in power systems require solving one or more Nonlinear Optimization problems. While analytical methods might suffer from slow convergence and the curse of dimensionality, heuristics-based swarm intelligence can be an efficient alternative. Particle swarm Optimization (PSO), part of the swarm-intelligence family, is known to effectively solve large-scale Nonlinear Optimization problems. This paper presents a detailed overview of the basic concepts of PSO and its variants, and provides a comprehensive survey of the power-system applications that have benefited from the power of PSO as an Optimization technique. For each application, the technical details required for applying PSO, such as its type, particle formulation (solution representation), and the most efficient fitness functions, are also discussed.
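A minimal global-best PSO captures the basic concepts the survey reviews: inertia, cognitive and social velocity terms, and personal/global best memories. The parameter values, bounds, and sphere test function below are conventional choices assumed here for illustration:

```python
import numpy as np

def pso(f, dim=2, n=30, iters=100, bounds=(-5.0, 5.0), seed=0):
    # Global-best PSO: inertia weight w, cognitive factor c1, social factor c2.
    w, c1, c2 = 0.7, 1.5, 1.5
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))              # particle positions
    v = np.zeros((n, dim))                         # particle velocities
    pbest = x.copy()                               # personal best positions
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()           # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                 # keep particles in bounds
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Sphere function as a smoke test of the optimizer.
g, best = pso(lambda p: float(np.sum(p ** 2)))
```

The solution representation is simply a real vector here; for power-system applications, the survey notes that the particle formulation and fitness function must be chosen per problem.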
Saman K. Halgamuge
-
A Comparison of Constraint-Handling Methods for the Application of Particle Swarm Optimization to Constrained Nonlinear Optimization Problems
Congress on Evolutionary Computation, 2003. Co-Authors: G. Coath, Saman K. Halgamuge. Abstract: We present a comparison of two constraint-handling methods used in the application of particle swarm Optimization (PSO) to constrained Nonlinear Optimization problems (CNOPs). A brief review of constraint-handling techniques for evolutionary algorithms (EAs) is given, followed by a direct comparison of two existing methods of enforcing constraints using PSO: the application of non-stationary multistage penalty functions and the preservation of feasible solutions. Five benchmark functions are used for the comparison, and the results are examined to assess each method's accuracy and rate of convergence. Conclusions are drawn, and suggestions on the applicability of each method to real-world CNOPs are given.
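The non-stationary penalty approach can be sketched by letting the penalty factor grow with the iteration counter and re-evaluating remembered bests under the current factor. The toy constrained problem, penalty schedule h(k) = 10k, and PSO parameters below are illustrative assumptions, not the paper's benchmarks:

```python
import numpy as np

def constrained_pso(iters=150, n=40, seed=0):
    # Toy CNOP: minimize x1 + x2 subject to x1^2 + x2^2 <= 1.
    # Optimum: x* = (-1/sqrt(2), -1/sqrt(2)), f* = -sqrt(2) ~ -1.414.
    obj = lambda q: q[0] + q[1]
    viol = lambda q: max(0.0, q[0] ** 2 + q[1] ** 2 - 1.0)
    w, c1, c2 = 0.7, 1.5, 1.5
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 2.0, (n, 2))
    v = np.zeros((n, 2))
    pbest = x.copy()
    for k in range(1, iters + 1):
        h = 10.0 * k                          # non-stationary (growing) penalty
        fit = lambda q: obj(q) + h * viol(q) ** 2
        # Re-evaluate all remembered bests under the current penalty factor,
        # since the penalized fitness landscape changes every iteration.
        fx = np.array([fit(q) for q in x])
        fp = np.array([fit(q) for q in pbest])
        better = fx < fp
        pbest[better] = x[better]
        fp = np.minimum(fp, fx)
        g = pbest[np.argmin(fp)].copy()       # global best under current penalty
        r1, r2 = rng.random((n, 2)), rng.random((n, 2))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, -2.0, 2.0)
    return g

g = constrained_pso()
```

Because the penalty grows over the run, early exploration may cross the constraint boundary cheaply, while late iterations are pushed toward (near-)feasible solutions; the alternative compared in the paper instead discards infeasible positions outright.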