The experts below are selected from a list of 31,905 experts worldwide, ranked by the ideXlab platform.
T. Antczak - One of the best experts on this subject based on the ideXlab platform.
-
Exactness Property of the Exact Absolute Value Penalty Function Method for Solving Convex Nondifferentiable Interval-Valued Optimization Problems
Journal of Optimization Theory and Applications, 2018. Co-Authors: T. Antczak. Abstract: In this paper, the classical exact absolute value penalty function method is used for solving a nondifferentiable constrained interval-valued optimization problem with both inequality and equality constraints. The exactness of the penalization for the exact absolute value penalty function method is analyzed under the assumption that the functions constituting the considered nondifferentiable constrained optimization problem with the interval-valued objective function are convex. Conditions are given that guarantee the equivalence of the sets of LU-optimal solutions of the original constrained interval-valued extremum problem and of its associated penalized optimization problem with the interval-valued exact absolute value penalty function.
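The penalization described above can be sketched on a small toy problem. This is a hedged illustration only: the problem data below are invented, and the sketch uses an ordinary real-valued objective rather than the paper's interval-valued setting.

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex problem (invented, not from the paper):
#   minimize f(x) = x1^2 + x2^2
#   subject to g(x) = 1 - x1 - x2 <= 0 and h(x) = x1 - x2 = 0.
# The exact absolute value (l1) penalty replaces the constraints by
# c * (max(0, g(x)) + |h(x)|) added to the objective.
def f(x):
    return x[0] ** 2 + x[1] ** 2

def g(x):
    return 1.0 - x[0] - x[1]

def h(x):
    return x[0] - x[1]

def l1_penalty(x, c):
    return f(x) + c * (max(0.0, g(x)) + abs(h(x)))

# Exactness: for any c above the largest Lagrange multiplier (here 1),
# the penalized minimizer coincides with the constrained one, (0.5, 0.5).
res = minimize(lambda x: l1_penalty(x, c=10.0), x0=[0.0, 0.0],
               method="Nelder-Mead", options={"xatol": 1e-8, "fatol": 1e-8})
```

A derivative-free solver is used here because the l1 penalty is nonsmooth at the solution, where the inequality constraint is active and the equality residual vanishes.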
-
The Exactness Property of the Vector Exact l1 Penalty Function Method in Nondifferentiable Invex Multiobjective Programming
Numerical Functional Analysis and Optimization, 2016. Co-Authors: T. Antczak, Marcin Studniarski. Abstract: In this article, the vector exact l1 penalty function method used for solving nonconvex nondifferentiable multiobjective programming problems is analyzed. In this method, the vector penalized optimization problem with the vector exact l1 penalty function is defined. Conditions are given guaranteeing the equivalence of the sets of (weak) Pareto optimal solutions of the considered nondifferentiable multiobjective programming problem and of the associated vector penalized optimization problem with the vector exact l1 penalty function. This equivalence is established for nondifferentiable invex vector optimization problems. Some examples of vector optimization problems are presented to illustrate the results established in the article.
-
Vector Exponential Penalty Function Method for Nondifferentiable Multiobjective Programming Problems
Bulletin of the Malaysian Mathematical Sciences Society, 2016. Co-Authors: T. Antczak. Abstract: In this paper, a new vector exponential penalty function method for nondifferentiable multiobjective programming problems with inequality constraints is introduced. First, a sequence of vector penalized optimization problems with the vector exponential penalty function constructed for the original multiobjective programming problem is considered, and the convergence of this method is established. Further, the exactness property of a vector exact penalty function method is defined and analyzed in the context of the introduced vector exponential penalty function method. Conditions are given guaranteeing the equivalence of the sets of (weak) Pareto solutions of the considered nondifferentiable multiobjective programming problem and of the associated vector penalized optimization problem with the vector exact exponential penalty function. This equivalence is established for nondifferentiable vector optimization problems with inequality constraints in which the functions involved are r-invex.
-
A Lower Bound for the Penalty Parameter in the Exact Minimax Penalty Function Method for Solving Nondifferentiable Extremum Problems
Journal of Optimization Theory and Applications, 2013. Co-Authors: T. Antczak. Abstract: In this paper, we consider the exact minimax penalty function method used for solving a general nondifferentiable extremum problem with both inequality and equality constraints. We analyze the relationship between an optimal solution of the given constrained extremum problem and a minimizer of its associated penalized optimization problem with the exact minimax penalty function, under the assumption that the functions constituting the considered optimization problem are convex (with the exception of those equality constraint functions whose associated Lagrange multipliers are negative; these functions should be assumed to be concave). A lower bound for the penalty parameter is given such that, for every value of the penalty parameter above this threshold, the set of optimal solutions of the given extremum problem coincides with the set of minimizers of its associated penalized optimization problem with the exact minimax penalty function.
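The minimax penalty differs from the l1 penalty in that it charges the single largest constraint violation rather than their sum. A hedged sketch on an invented convex toy problem:

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex problem (invented, not from the paper):
#   minimize f(x) = x1^2 + x2^2
#   subject to g(x) = 1 - x1 - x2 <= 0 and h(x) = x1 - x2 = 0.
# The exact minimax penalty charges the *largest* violation:
#   P(x, c) = f(x) + c * max(0, g(x), |h(x)|)
def minimax_penalty(x, c):
    fx = x[0] ** 2 + x[1] ** 2
    violation = max(0.0, 1.0 - x[0] - x[1], abs(x[0] - x[1]))
    return fx + c * violation

# For c above the lower bound (here the sum of the Lagrange multipliers, 1),
# the penalized minimizer recovers the constrained solution (0.5, 0.5).
res = minimize(lambda x: minimax_penalty(x, c=10.0), x0=[0.0, 0.0],
               method="Nelder-Mead", options={"xatol": 1e-8, "fatol": 1e-8})
```

Note the different threshold: for the minimax penalty the lower bound involves the sum of the multipliers, whereas the l1 penalty only needs the largest one.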
-
Saddle Point Criteria and the Exact Minimax Penalty Function Method in Nonconvex Programming
Taiwanese Journal of Mathematics, 2013. Co-Authors: T. Antczak. Abstract: A new characterization of the exact minimax penalty function method is presented. The exactness of the penalization for the exact minimax penalty function method is analyzed in the context of saddle point criteria of the Lagrange function in a nonconvex differentiable optimization problem with both inequality and equality constraints. New conditions for the exactness of the exact minimax penalty function method are thus established under the assumption that the functions constituting the considered constrained optimization problem are invex with respect to the same function $\eta$ (with the exception of those equality constraint functions whose associated Lagrange multipliers are negative; these functions should be assumed to be incave with respect to the same function $\eta$). A threshold of the penalty parameter is given such that, for all penalty parameters exceeding this threshold, the equivalence holds between a saddle point of the Lagrange function in the considered constrained extremum problem and a minimizer of its associated penalized optimization problem with the exact minimax penalty function.
Anurag Jayswal - One of the best experts on this subject based on the ideXlab platform.
-
An Exact l1 Penalty Function Method for a Multi-Dimensional First-Order PDE Constrained Control Optimization Problem
European Journal of Control, 2020. Co-Authors: Anurag Jayswal. Abstract: In this paper, we use the exact l1 penalty function method to solve a multi-dimensional first-order PDE constrained control optimization problem. The relationships between the aforesaid problem and its associated penalized problem with the exact l1 penalty function are established. Further, we show that an optimal solution of the considered problem is a minimizer of its associated penalized problem under the hypothesis of a convex Lagrange functional. In addition, the theoretical results are illustrated with some examples.
-
Convergence of the Exponential Penalty Function Method for Multiobjective Fractional Programming Problems
Ain Shams Engineering Journal, 2014. Co-Authors: Anurag Jayswal, Sarita Choudhury. Abstract: In this paper, we extend the exponential penalty function method for multiobjective programming problems introduced by Liu and Feng (2010) to multiobjective fractional programming problems, and we analyze the relationship between weakly efficient solutions of the penalized problems and of the multiobjective fractional programming problem. Furthermore, we examine the convergence of this method for multiobjective fractional programming problems.
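The sequential character of the exponential penalty method can be sketched on a scalar toy problem. This is a hedged, single-objective illustration (the invented data below are not the paper's fractional multiobjective setting), using one common form of the exponential penalty:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Invented scalar problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0,
# whose solution is x* = 1. One common form of the exponential penalty solves
# a sequence of unconstrained problems
#   F_p(x) = f(x) + (1/p) * exp(p * g(x))
# with the parameter p driven to infinity.
def exp_penalized(x, p):
    return x ** 2 + (1.0 / p) * np.exp(p * (1.0 - x))

minimizers = []
for p in [1.0, 10.0, 100.0]:
    res = minimize_scalar(lambda x: exp_penalized(x, p),
                          bounds=(0.0, 5.0), method="bounded")
    minimizers.append(res.x)
# The unconstrained minimizers increase monotonically toward x* = 1.
```

Unlike the exact l1 and minimax penalties, this penalty is smooth but only asymptotically exact: each finite p gives a slightly infeasible minimizer, and convergence is obtained in the limit.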
-
An Exact l1 Exponential Penalty Function Method for Multiobjective Optimization Problems with Exponential-Type Invexity
Journal of the Operations Research Society of China, 2014. Co-Authors: Anurag Jayswal, Sarita Choudhury. Abstract: The purpose of this paper is to devise an exact l1 exponential penalty function method to solve multiobjective optimization problems with exponential-type invexity. The conditions governing the equivalence of the (weak) efficient solutions of the vector optimization problem and the (weak) efficient solutions of the associated unconstrained exponential penalized multiobjective optimization problem are studied. Examples are given to illustrate the obtained results.
Liansheng Zhang - One of the best experts on this subject based on the ideXlab platform.
-
On an Exact Penalty Function Method for Nonlinear Mixed Discrete Programming Problems and Its Applications in Search Engine Advertising Problems
Applied Mathematics and Computation, 2015. Co-Authors: Liansheng Zhang. Abstract: In this paper, we study a new exact and smooth penalty function for the nonlinear mixed discrete programming problem that augments the problem with only one extra variable, no matter how many constraints there are. Through this smooth and exact penalty function, we can transform the nonlinear mixed discrete programming problem into an unconstrained optimization model. We demonstrate that, under mild conditions, when the penalty parameter is sufficiently large, the optimizers of this penalty function are precisely the optimizers of the nonlinear mixed discrete programming problem. Alternatively, under some mild assumptions, the local exactness property is also presented. The numerical results demonstrate that the new penalty function is an effective and promising approach. As an important application, we solve an increasingly popular search engine advertising problem via the newly proposed penalty function.
-
On an Exact Penalty Function Method for Semi-Infinite Programming Problems
Journal of Industrial and Management Optimization, 2012. Co-Authors: Ka Fai Cedric Yiu, Yongjian Yang, Liansheng Zhang. Abstract: In this paper, we study a new exact and smooth penalty function for semi-infinite programming problems with continuous inequality constraints. Through this exact penalty function, we can transform a semi-infinite programming problem into an unconstrained optimization problem. We find that, under some reasonable conditions, when the penalty parameter is sufficiently large, a local minimizer of this penalty function is a local minimizer of the primal problem. Moreover, under some mild assumptions, the local exactness property is explored. The numerical results demonstrate that it is an effective and promising approach for solving constrained semi-infinite programming problems.
-
On a Refinement of the Convergence Analysis for the New Exact Penalty Function Method for Continuous Inequality Constrained Optimization Problems
Journal of Industrial and Management Optimization, 2012. Co-Authors: Liansheng Zhang. Abstract: This note provides a refinement of the convergence analysis of the recently proposed new exact penalty function method.
-
New Exact Penalty Function for Solving Constrained Finite Min-Max Problems
Applied Mathematics and Mechanics (English Edition), 2012. Co-Authors: Ka Fai Cedric Yiu, Liansheng Zhang. Abstract: This paper introduces a new exact and smooth penalty function to tackle constrained min-max problems. By using this new penalty function and adding just one extra variable, a constrained min-max problem is transformed into an unconstrained optimization problem. It is proved that, under certain reasonable assumptions and when the penalty parameter is sufficiently large, the minimizer of this unconstrained optimization problem is equivalent to the minimizer of the original constrained problem. Numerical results demonstrate that this penalty function method is an effective and promising approach for solving constrained finite min-max problems.
-
A New Exact Penalty Function Method for Continuous Inequality Constrained Optimization Problems
Journal of Industrial and Management Optimization, 2010. Co-Authors: Kok Lay Teo, Liansheng Zhang, Yanqin Bai. Abstract: In this paper, a computational approach based on a new exact penalty function method is devised for solving a class of continuous inequality constrained optimization problems. The continuous inequality constraints are first approximated by smooth functions in integral form. Then, we construct a new exact penalty function, where the summation of all these approximate smooth functions in integral form, called the constraint violation, is appended to the objective function. In this way, we obtain a sequence of approximate unconstrained optimization problems. It is shown that if the value of the penalty parameter is sufficiently large, then any local minimizer of the corresponding unconstrained optimization problem is a local minimizer of the original problem. For illustration, three examples are solved using the proposed method. From the solutions obtained, we observe that the values of their objective functions are amongst the smallest when compared with those obtained by other existing methods available in the literature. More importantly, our method finds a solution that satisfies the continuous inequality constraints.
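The integral constraint-violation construction can be sketched numerically. The toy problem and quadrature grid below are invented for illustration and are not one of the paper's three examples:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Invented illustration: minimize f(x) = x subject to the continuous
# constraint g(x, t) = t - x <= 0 for all t in [0, 1], i.e. x >= 1.
# The constraint violation is measured by an integral in smooth form,
#   Delta(x) = integral over [0, 1] of max(0, g(x, t))^2 dt,
# approximated here by a simple quadrature on a fixed grid.
t_grid = np.linspace(0.0, 1.0, 1001)
dt = t_grid[1] - t_grid[0]

def violation(x):
    return float(np.sum(np.maximum(0.0, t_grid - x) ** 2) * dt)

def penalized(x, c):
    return x + c * violation(x)

# As the penalty parameter c grows, the unconstrained minimizer approaches
# the constrained solution x* = 1 (here the gap behaves like 1/sqrt(c)).
res = minimize_scalar(lambda x: penalized(x, 1.0e4),
                      bounds=(-2.0, 3.0), method="bounded")
```

The quadratic violation term is smooth, which is what makes the unconstrained subproblems tractable; exactness in the paper's sense then comes from driving the penalty parameter sufficiently high.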
Zhiqing Meng - One of the best experts on this subject based on the ideXlab platform.
-
Augmented Lagrangian Objective Penalty Function
Numerical Functional Analysis and Optimization, 2015. Co-Authors: Zhiqing Meng, Chuangyin Dang, Rui Shen, Min Jiang. Abstract: The augmented Lagrangian function is one of the most important tools used in solving some constrained optimization problems. In this article, we study an augmented Lagrangian objective penalty function and a modified augmented Lagrangian objective penalty function for inequality constrained optimization problems. First, we prove the dual properties of the augmented Lagrangian objective penalty function, which are at least as good as those of the traditional Lagrangian function. Under some conditions, the saddle point of the augmented Lagrangian objective penalty function satisfies the first-order Karush-Kuhn-Tucker condition; in particular, for convex programming the saddle point exists when the Karush-Kuhn-Tucker condition holds. Second, we prove the dual properties of the modified augmented Lagrangian objective penalty function. For a global optimal solution, when the exactness of the modified augmented Lagrangian objective penalty function holds, its saddle point exists. The sufficient and necessary ...
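For context, the classical augmented Lagrangian that this work builds on can be sketched as follows. This is a hedged illustration of the standard Rockafellar form for inequality constraints on an invented toy problem, not the paper's objective-penalty variant (which additionally carries objective parameters):

```python
import numpy as np
from scipy.optimize import minimize

# Invented toy problem: minimize x1^2 + x2^2 subject to
# g(x) = 1 - x1 - x2 <= 0; solution (0.5, 0.5) with multiplier lambda = 1.
def aug_lagrangian(x, lam, c):
    g = 1.0 - x[0] - x[1]
    # Rockafellar's smooth augmented Lagrangian for an inequality constraint.
    return x[0] ** 2 + x[1] ** 2 + (max(0.0, lam + c * g) ** 2 - lam ** 2) / (2.0 * c)

lam, c, x = 0.0, 10.0, np.zeros(2)
for _ in range(20):
    # Minimize in x with the multiplier held fixed, then update the multiplier.
    x = minimize(lambda y: aug_lagrangian(y, lam, c), x, method="BFGS").x
    lam = max(0.0, lam + c * (1.0 - x[0] - x[1]))
```

The multiplier iteration converges linearly here (the error contracts by a factor of roughly 1/(1+c) per outer step), so both the iterate and the multiplier settle at their exact values without driving c to infinity, which is the practical appeal of augmented Lagrangian schemes over pure penalties.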
-
Second-Order Smoothing Objective Penalty Function for Constrained Optimization Problems
Numerical Functional Analysis and Optimization, 2014. Co-Authors: Min Jiang, Rui Shen, Xinsheng Xu, Zhiqing Meng. Abstract: In this article, a novel objective penalty function, as well as its second-order smoothing, is introduced for constrained optimization problems (COP). It is shown that an optimal solution of the second-order smoothing objective penalty optimization problem is an optimal solution of the original optimization problem under some mild conditions. Based on the second-order smoothing objective penalty function, an algorithm with better convergence is introduced. Numerical examples illustrate that this algorithm is efficient in solving COP.
-
A Smoothing Objective Penalty Function Algorithm for Inequality Constrained Optimization Problems
Numerical Functional Analysis and Optimization, 2011. Co-Authors: Zhiqing Meng, Chuangyin Dang, Min Jiang, Rui Shen. Abstract: In this article, a smoothing objective penalty function for inequality constrained optimization problems is presented. The article proves that this type of smoothing objective penalty function has good properties for solving inequality constrained optimization problems. Moreover, based on the penalty function, an algorithm is presented to solve inequality constrained optimization problems, and its convergence under some conditions is proved. Two numerical experiments show that a satisfactory approximate optimal solution can be obtained by the proposed algorithm.
-
A Penalty Function Method Based on Smoothing the Lower-Order Penalty Function
Journal of Computational and Applied Mathematics, 2011. Co-Authors: Xinsheng Xu, Zhiqing Meng, Rui Shen. Abstract: The paper introduces a smoothing technique for a lower-order penalty function for constrained optimization problems (COP). It is proved that the optimal solution of the smoothed penalty optimization problem is an ε²-approximate optimal solution of the original optimization problem under some mild assumptions. Based on the smoothed penalty function, an algorithm for solving COP is proposed and some numerical examples are given.
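A lower-order penalty raises the constraint violation to a power k < 1, which makes the penalty exact for moderate parameters but non-Lipschitz at the feasible boundary. The sketch below uses an invented scalar problem and one generic smoothing of the (g⁺)ᵏ term; the paper's specific smoothing formula may differ:

```python
from scipy.optimize import minimize_scalar

# Invented scalar problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0,
# solution x* = 1. Lower-order penalty: f(x) + c * max(0, g(x))^k with k < 1.
k, eps, c = 0.5, 1e-4, 10.0

def smoothed_lower_order_penalty(x):
    gp = max(0.0, 1.0 - x)                        # constraint violation g^+(x)
    # Generic smoothing of (g^+)^k near zero (assumed form, not the paper's):
    smooth = (gp ** 2 + eps ** 2) ** (k / 2.0) - eps ** k
    return x ** 2 + c * smooth

res = minimize_scalar(smoothed_lower_order_penalty,
                      bounds=(0.0, 3.0), method="bounded")
# Because the k < 1 term climbs steeply off the feasible set, the smoothed
# minimizer sits very close to the constrained solution x* = 1.
```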
-
A Penalty Function Algorithm with Objective Parameters for Nonlinear Mathematical Programming
Journal of Industrial and Management Optimization, 2009. Co-Authors: Zhiqing Meng, Chuangyin Dang. Abstract: In this paper, we present a penalty function with objective parameters for inequality constrained optimization problems. We prove that this type of penalty function has good properties for solving inequality constrained optimization problems. Moreover, based on the penalty function, we develop an algorithm to solve inequality constrained optimization problems and prove its convergence under some conditions. Numerical experiments show that we can obtain a satisfactory approximate solution for some constrained optimization problems, the same as with the exact penalty function.
Stefano Lucidi - One of the best experts on this subject based on the ideXlab platform.
-
A Derivative-Free Algorithm for Inequality Constrained Nonlinear Programming via Smoothing of an ℓ∞ Penalty Function
2016. Co-Authors: G. Liuzzi, Stefano Lucidi. Abstract: In this paper we consider inequality constrained nonlinear optimization problems where the first-order derivatives of the objective function and the constraints cannot be used. Our starting point is the possibility of transforming the original constrained problem into an unconstrained or linearly constrained minimization of a nonsmooth exact penalty function. This approach presents two main difficulties: the first is the nonsmoothness of this class of exact penalty functions, which may cause derivative-free codes to converge to nonstationary points of the problem; the second is the fact that the equivalence between stationary points of the constrained problem and those of the exact penalty function can only be stated when the penalty parameter is smaller than a threshold value which is not known a priori. In this paper we propose a derivative-free algorithm which overcomes the preceding difficulties and produces a sequence of points that admits a subsequence converging to a Karush-Kuhn-Tucker point of the constrained problem. In particular, the proposed algorithm is based on a smoothing of the nondifferentiable exact penalty function and includes an updating rule which, after at most a finite number of updates, is able to determine a "right value" for the penalty parameter. Furthermore, we present the results obtained on a real-world problem concerning the estimation of parameters in an insulin-glucose model of the human body.
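The idea of smoothing the ℓ∞ penalty before handing it to a derivative-free solver can be sketched as follows. This is a hedged illustration on an invented toy problem: the max term is smoothed here with log-sum-exp, whereas the paper's smoothing and its penalty-parameter updating rule differ in detail:

```python
import numpy as np
from scipy.optimize import minimize

# The nonsmooth l-infinity penalty f(x) + c * max(0, g_1(x), ..., g_m(x))
# can stall derivative-free codes at its kinks, so a smooth surrogate of the
# max is minimized instead. Invented toy problem:
#   minimize x1^2 + x2^2 subject to g(x) = 1 - x1 - x2 <= 0.
def smoothed_linf_penalty(x, c=10.0, mu=200.0):
    fx = x[0] ** 2 + x[1] ** 2
    g = 1.0 - x[0] - x[1]
    # (1/mu) * log(1 + exp(mu*g)) -> max(0, g) as mu -> infinity;
    # np.logaddexp keeps the computation overflow-safe.
    smooth_max = np.logaddexp(0.0, mu * g) / mu
    return fx + c * smooth_max

# Nelder-Mead stands in for the paper's derivative-free machinery.
res = minimize(smoothed_linf_penalty, x0=[0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})
# res.x lies near the constrained solution (0.5, 0.5), with a small bias
# controlled by the smoothing parameter mu.
```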
-
A Derivative-Free Algorithm for Inequality Constrained Nonlinear Programming via Smoothing of an ℓ∞ Penalty Function
SIAM Journal on Optimization, 2009. Co-Authors: G. Liuzzi, Stefano Lucidi. Abstract: In this paper we consider inequality constrained nonlinear optimization problems where the first-order derivatives of the objective function and the constraints cannot be used. Our starting point is the possibility of transforming the original constrained problem into an unconstrained or linearly constrained minimization of a nonsmooth exact penalty function. This approach presents two main difficulties: the first is the nonsmoothness of this class of exact penalty functions, which may cause derivative-free codes to converge to nonstationary points of the problem; the second is the fact that the equivalence between stationary points of the constrained problem and those of the exact penalty function can only be stated when the penalty parameter is smaller than a threshold value which is not known a priori. In this paper we propose a derivative-free algorithm which overcomes the preceding difficulties and produces a sequence of points that admits a subsequence converging to a Karush-Kuhn-Tucker point of the constrained problem. In particular, the proposed algorithm is based on a smoothing of the nondifferentiable exact penalty function and includes an updating rule which, after at most a finite number of updates, is able to determine a "right value" for the penalty parameter. Furthermore, we present the results obtained on a real-world problem concerning the estimation of parameters in an insulin-glucose model of the human body.
-
A Continuously Differentiable Exact Penalty Function for Nonlinear Programming Problems with Unbounded Feasible Set
Operations Research Letters, 1993. Co-Authors: G. Contaldi, G. Di Pillo, Stefano Lucidi. Abstract: In this paper we define a new continuously differentiable exact penalty function for the solution of general nonlinear programming problems. The distinguishing feature of this function is that a complete equivalence between its unconstrained minimization on an open perturbation of the feasible set and the solution of the original constrained problem can be established, without requiring boundedness of the feasible set of the constrained problem.
-
New Results on a Continuously Differentiable Exact Penalty Function
SIAM Journal on Optimization, 1992. Co-Authors: Stefano Lucidi. Abstract: The main motivation of this paper is to weaken the conditions that imply the correspondence between the solution of a constrained problem and the unconstrained minimization of a continuously differentiable function. In particular, a new continuously differentiable exact penalty function is proposed for the solution of nonlinear programming problems. Under mild assumptions, a complete equivalence can be established between the solution of the original constrained problem and the unconstrained minimization of this penalty function on a perturbation of the feasible set. This new penalty function and its exactness properties allow us to define globally and superlinearly convergent algorithms to solve nonlinear programming problems. As an example, a Newton-type algorithm is described which converges locally in one iteration for quadratic programming problems.