The Experts below are selected from a list of 2787 Experts worldwide ranked by the ideXlab platform
Mohamed Abouhawwash - One of the best experts on this subject based on the ideXlab platform.
-
A smooth Proximity Measure for optimality in multi-objective optimization using Benson’s method
Computers & Operations Research, 2020. Co-Authors: Mohamed Abouhawwash, Mohamed A Jameel. Abstract: Multi-objective optimization problems give rise to a set of trade-off Pareto-optimal solutions. To evaluate a set-based multi-objective optimization algorithm, such as an evolutionary multi-objective optimization (EMO) algorithm, for its convergence and diversity attainment, more than one performance metric is required. To measure the convergence aspect, a new Karush-Kuhn-Tucker Proximity Measure (KKTPM) was recently proposed based on the extent of satisfaction of KKT optimality conditions on the augmented achievement scalarization function (AASF) formulation. However, the Pareto-optimality of a point depends on a parameter that must be specified in the AASF formulation. In this paper, we use Benson’s method as a scalarized version of the multi-objective optimization problem, mainly because it is parameter-less and is popularly used in the multi-criterion decision-making (MCDM) literature. The proposed Benson’s-method-based metric (B-KKTPM) is applied to optimized solutions of popular EMO algorithms on standard two- to 10-objective test problems and to a few engineering design problems. B-KKTPM is able to determine the relative closeness of a set of trade-off solutions to the strictly efficient solutions without any prior knowledge of them. To reduce the computational cost of solving an optimization problem to compute B-KKTPM, we also propose a direct, but approximate, method. The results of our extensive study indicate that (i) the proposed optimization-based and direct B-KKTPMs can be used as a termination check for any optimization algorithm, and (ii) the direct B-KKTPM can replace the optimization-based version for a good trade-off between computational time and accuracy.
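The parameter-less character of Benson’s scalarization can be illustrated with a small sketch. The following is a hypothetical Python illustration, not code from the paper (the function names and the toy bi-objective problem are assumptions): for a candidate point x0, Benson’s subproblem maximizes the total slack sum(l_i) subject to f_i(x) + l_i <= f_i(x0) and l_i >= 0, and the candidate is efficient exactly when the optimum is zero. Here the subproblem is solved by brute-force grid search rather than a real optimizer.

```python
def benson_gap(x0, f, candidates):
    """Optimal value of Benson's subproblem for candidate x0:
    max sum(l)  s.t.  f_i(x) + l_i <= f_i(x0),  l_i >= 0.
    A value of zero means no feasible point dominates x0 (x0 is efficient)."""
    f0 = f(x0)
    best = 0.0
    for x in candidates:
        fx = f(x)
        slacks = [a - b for a, b in zip(f0, fx)]   # l_i = f_i(x0) - f_i(x)
        if all(s >= 0 for s in slacks):            # x weakly dominates x0
            best = max(best, sum(slacks))
    return best

# Toy bi-objective problem: f1 = x^2, f2 = (x - 2)^2, x in [0, 3];
# its Pareto-optimal set is the interval [0, 2].
f = lambda x: (x * x, (x - 2.0) ** 2)
grid = [3.0 * i / 300 for i in range(301)]
```

For x0 = 1.0 (Pareto-optimal) the gap is zero; for a dominated point such as x0 = 2.5 it is strictly positive, which is the parameter-free efficiency test the metric builds on.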
-
Evolutionary multi-objective optimization using Benson’s Karush-Kuhn-Tucker Proximity Measure
International Conference on Evolutionary Multi-criterion Optimization, 2019. Co-Authors: Mohamed Abouhawwash, Mohamed A Jameel. Abstract: Many Evolutionary Algorithms (EAs) have been proposed over the last decade to solve multi- and many-objective optimization problems. Although the EA literature is rich in performance metrics designed specifically to evaluate the convergence ability of these algorithms, most of these metrics require knowledge of the true Pareto-Optimal (PO) front. In this paper, we suggest a novel Karush-Kuhn-Tucker (KKT) based Proximity Measure using Benson’s method (we call it B-KKTPM). B-KKTPM can determine the relative closeness of any point to the true PO front without prior knowledge of this front. Finally, we integrate the proposed metric with two recent algorithms and apply it to several multi- and many-objective optimization problems. Results show that B-KKTPM can be used as a termination condition for an Evolutionary Multi-objective Optimization (EMO) approach.
-
Karush-Kuhn-Tucker Proximity Measure for multi-objective optimization based on numerical gradients
Genetic and Evolutionary Computation Conference, 2016. Co-Authors: Mohamed Abouhawwash. Abstract: A Measure for estimating the convergence characteristics of a set of non-dominated points obtained by a multi-objective optimization algorithm was developed recently. The Measure was developed based on the Karush-Kuhn-Tucker (KKT) optimality conditions, which require the gradients of the objective and constraint functions. In this paper, we extend the scope of the proposed KKT Proximity Measure by computing gradients numerically and evaluating the accuracy of the numerically computed KKT Proximity Measure against the same Measure computed with exact gradients. The results are encouraging and open up the possibility of applying the proposed KKTPM to non-differentiable problems as well.
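The numerical-gradient idea can be sketched in a few lines. A hedged illustration using central differences (the helper name and step size are assumptions, not the paper’s implementation):

```python
def num_grad(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at point x."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h   # forward-perturbed point in coordinate i
        xm[i] -= h   # backward-perturbed point in coordinate i
        grad.append((f(xp) - f(xm)) / (2.0 * h))
    return grad

# Check against an analytic gradient: f(x) = x1^2 + 3*x2 has gradient (2*x1, 3).
g = num_grad(lambda v: v[0] ** 2 + 3.0 * v[1], [1.5, -2.0])
```

Plugging such approximate gradients into the KKTPM computation is what lets the measure be applied when exact derivatives are unavailable.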
Mohamed A Jameel - One of the best experts on this subject based on the ideXlab platform.
-
A smooth Proximity Measure for optimality in multi-objective optimization using Benson’s method
Computers & Operations Research, 2020. Co-Authors: Mohamed Abouhawwash, Mohamed A Jameel. Abstract: Multi-objective optimization problems give rise to a set of trade-off Pareto-optimal solutions. To evaluate a set-based multi-objective optimization algorithm, such as an evolutionary multi-objective optimization (EMO) algorithm, for its convergence and diversity attainment, more than one performance metric is required. To measure the convergence aspect, a new Karush-Kuhn-Tucker Proximity Measure (KKTPM) was recently proposed based on the extent of satisfaction of KKT optimality conditions on the augmented achievement scalarization function (AASF) formulation. However, the Pareto-optimality of a point depends on a parameter that must be specified in the AASF formulation. In this paper, we use Benson’s method as a scalarized version of the multi-objective optimization problem, mainly because it is parameter-less and is popularly used in the multi-criterion decision-making (MCDM) literature. The proposed Benson’s-method-based metric (B-KKTPM) is applied to optimized solutions of popular EMO algorithms on standard two- to 10-objective test problems and to a few engineering design problems. B-KKTPM is able to determine the relative closeness of a set of trade-off solutions to the strictly efficient solutions without any prior knowledge of them. To reduce the computational cost of solving an optimization problem to compute B-KKTPM, we also propose a direct, but approximate, method. The results of our extensive study indicate that (i) the proposed optimization-based and direct B-KKTPMs can be used as a termination check for any optimization algorithm, and (ii) the direct B-KKTPM can replace the optimization-based version for a good trade-off between computational time and accuracy.
-
Evolutionary multi-objective optimization using Benson’s Karush-Kuhn-Tucker Proximity Measure
International Conference on Evolutionary Multi-criterion Optimization, 2019. Co-Authors: Mohamed Abouhawwash, Mohamed A Jameel. Abstract: Many Evolutionary Algorithms (EAs) have been proposed over the last decade to solve multi- and many-objective optimization problems. Although the EA literature is rich in performance metrics designed specifically to evaluate the convergence ability of these algorithms, most of these metrics require knowledge of the true Pareto-Optimal (PO) front. In this paper, we suggest a novel Karush-Kuhn-Tucker (KKT) based Proximity Measure using Benson’s method (we call it B-KKTPM). B-KKTPM can determine the relative closeness of any point to the true PO front without prior knowledge of this front. Finally, we integrate the proposed metric with two recent algorithms and apply it to several multi- and many-objective optimization problems. Results show that B-KKTPM can be used as a termination condition for an Evolutionary Multi-objective Optimization (EMO) approach.
Joydeep Dutta - One of the best experts on this subject based on the ideXlab platform.
-
An optimality theory based Proximity Measure for evolutionary multi-objective and many-objective optimization
International Conference on Evolutionary Multi-criterion Optimization, 2015. Co-Authors: Mohamed Abouhawwash, Joydeep Dutta. Abstract: Evolutionary multi- and many-objective optimization (EMO) methods attempt to find a set of Pareto-optimal solutions instead of a single optimal solution. To evaluate these algorithms, performance metrics either require knowledge of the true Pareto-optimal solutions or are ad hoc and heuristic-based. In this paper, we suggest a KKT Proximity Measure (KKTPM) that can provide an estimate of the Proximity of a set of trade-off solutions to the true Pareto-optimal solutions. Besides theoretical results, the proposed KKT Proximity Measure is computed for iteration-wise trade-off solutions obtained from specific EMO algorithms on two-, three-, five- and 10-objective optimization problems. The results amply indicate the usefulness of the proposed KKTPM as a termination criterion for an EMO algorithm.
-
Approximate KKT points and a Proximity Measure for termination
Journal of Global Optimization, 2013. Co-Authors: Joydeep Dutta, Rupesh Tulshyan, Ramnik Arora. Abstract: Karush-Kuhn-Tucker (KKT) optimality conditions are often checked to investigate whether a solution obtained by an optimization algorithm is a likely candidate for the optimum. In this study, we report that although the KKT conditions must all be satisfied at the optimal point, the extent of violation of the KKT conditions at points arbitrarily close to the KKT point is not smooth, making the KKT conditions difficult to use directly to evaluate the performance of an optimization algorithm. This happens due to the complementary slackness condition associated with the KKT optimality conditions. To overcome this difficulty, we define modified $\epsilon$-KKT points by relaxing the complementary slackness and equilibrium equations of the KKT conditions, and suggest a KKT-Proximity Measure, which is shown to reduce sequentially to zero as the iterates approach the KKT point. Besides the theoretical development defining the modified $\epsilon$-KKT point, we present extensive computer simulations of the proposed methodology on a set of iterates obtained through an evolutionary optimization algorithm, illustrating the working of our procedure on smooth and non-smooth problems. The results indicate that the proposed KKT-Proximity Measure can be used as a termination condition for optimization algorithms. As a by-product, the method helps to find Lagrange multipliers corresponding to near-optimal solutions, which can be of importance to practitioners. We also provide a comparison of our KKT-Proximity Measure with the stopping criteria used in popular commercial software.
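The flavor of the relaxed measure can be shown on a one-variable example. This is a hypothetical simplification of the paper’s construction (the specific problem, the quadratic error form, and the closed-form multiplier are assumptions): for min x^2 subject to 1 - x <= 0, the quantity below combines the stationarity residual with the relaxed complementary-slackness term and minimizes over the multiplier u >= 0.

```python
def kkt_error(x):
    """Relaxed KKT error for:  min x^2  s.t.  g(x) = 1 - x <= 0.
    error(u) = (f'(x) + u*g'(x))^2 + (u*g(x))^2, minimized over u >= 0."""
    gf, gg, g = 2.0 * x, -1.0, 1.0 - x
    # error(u) is quadratic in u; its unconstrained minimizer is
    # u = 2x / (1 + g^2), clipped at zero to respect u >= 0.
    u = max(0.0, 2.0 * x / (1.0 + g * g))
    return (gf + u * gg) ** 2 + (u * g) ** 2
```

At the KKT point x = 1 (with multiplier u = 2) the error vanishes, and it grows smoothly as the iterate moves away, which is the behavior that makes such a quantity usable as a termination signal.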
Tharo Soun - One of the best experts on this subject based on the ideXlab platform.
-
Using the Karush-Kuhn-Tucker Proximity Measure for solving bilevel optimization problems
Swarm and Evolutionary Computation, 2019. Co-Authors: Ankur Sinha, Tharo Soun. Abstract: A common technique to solve bilevel optimization problems is to reduce the problem to a single level and then solve it as a standard optimization problem. A number of single-level reduction formulations exist, but one of the most common is to replace the lower-level optimization problem with its Karush-Kuhn-Tucker (KKT) conditions. Such a reduction strategy has been widely used in the classical optimization as well as the evolutionary computation literature. However, the KKT conditions contain a set of non-linear equality constraints that are often hard to satisfy. In this paper, we discuss a single-level reduction of a bilevel problem using recently proposed relaxed KKT conditions. The conditions are relaxed and therefore approximate, but the error in terms of distance from the true lower-level KKT point is bounded. A Proximity Measure is associated with the new KKT conditions, which gives an idea of the KKT error and the distance from the optimum. We utilize this reduction method within an evolutionary algorithm to solve bilevel optimization problems. The proposed algorithm is compared against a number of recently proposed approaches. The idea is found to lead to significant computational savings, especially in lower-level function evaluations. The idea is promising and might be useful for further developments in bilevel optimization, in the domain of classical as well as evolutionary optimization research.
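The single-level reduction itself can be shown on a toy problem. A hedged sketch (the bilevel instance and function names are assumptions, not from the paper): the lower level min_y (y - x)^2 is convex, so its KKT stationarity condition 2*(y - x) = 0 pins down the optimal response y(x) = x, and substituting this into the upper level leaves an ordinary single-level problem in x alone.

```python
def lower_level_y(x):
    # Lower level: min_y (y - x)^2. Its KKT (stationarity) condition
    # 2*(y - x) = 0 gives the unique optimal response y(x) = x.
    return x

def reduced_upper(x):
    # Upper level F(x, y) = (x - 1)^2 + (y + 1)^2 with y replaced by the
    # lower-level KKT solution: the bilevel problem collapses to one level.
    y = lower_level_y(x)
    return (x - 1.0) ** 2 + (y + 1.0) ** 2

# Solve the reduced problem by a coarse grid search over x in [-2, 2];
# F(x) = 2*x^2 + 2, so the minimum sits at x = 0.
best_x = min((i / 100.0 for i in range(-200, 201)), key=reduced_upper)
```

In realistic problems the lower-level KKT system cannot be solved in closed form, which is where the relaxed conditions and the associated Proximity Measure come in: they let an evolutionary algorithm work with approximately KKT-feasible points while bounding the error.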
-
Evolutionary bilevel optimization using KKT Proximity Measure
2017 IEEE Congress on Evolutionary Computation (CEC), 2017. Co-Authors: Ankur Sinha, Tharo Soun. Abstract: Bilevel optimization problems are often reduced to a single level using Karush-Kuhn-Tucker (KKT) conditions; however, there are some inherent difficulties when it comes to satisfying the KKT constraints strictly. In this paper, we discuss a single-level reduction of a bilevel problem using approximate KKT conditions, which have recently been found to be more useful than the original, strict KKT conditions. We embed the recently proposed KKT Proximity Measure within an evolutionary algorithm to solve bilevel optimization problems. The idea is tested on a number of test problems, and comparison results are provided against a recently proposed evolutionary algorithm for bilevel optimization. The proposed idea leads to significant savings in lower-level function evaluations and shows promise for further use of KKT Proximity Measures in bilevel optimization algorithm development.
Sunith Bandaru - One of the best experts on this subject based on the ideXlab platform.
-
KKT Proximity Measure for testing convergence in smooth multi-objective optimization
Genetic and Evolutionary Computation Conference, 2011. Co-Authors: Rupesh Tulshyan, Sunith Bandaru. Abstract: An earlier study defined a KKT-Proximity Measure to test the convergence property of an evolutionary algorithm for solving single-objective optimization problems. In this paper, we extend this Measure to testing the convergence of a set of non-dominated solutions to the Pareto-optimal front for smooth multi-objective optimization problems. Simulation results of NSGA-II on different two- and three-objective test problems indicate the suitability of using the Proximity Measure as a convergence metric for terminating a simulation of an evolutionary multi-criterion optimization algorithm.
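Using a proximity-style quantity as a stopping rule looks roughly like the following hedged sketch (a toy unconstrained single-objective problem of my own; with no constraints the KKT residual reduces to the squared gradient norm):

```python
def minimize_with_kkt_stop(x0, lr=0.1, tol=1e-8, max_iter=1000):
    """Gradient descent on f(x) = (x - 3)^2, terminated when the
    proximity-style residual (here, the squared gradient norm) drops
    below tol, rather than after a fixed iteration budget."""
    x = x0
    for _ in range(max_iter):
        grad = 2.0 * (x - 3.0)
        if grad * grad < tol:   # proximity-style termination check
            break
        x -= lr * grad
    return x
```

The same pattern, with the multi-objective KKTPM in place of the gradient norm, is what the papers above propose as a principled termination criterion for EMO runs.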