Parameter Control

The experts below are selected from a list of 29,256 experts worldwide, ranked by the ideXlab platform.

Irene Moser - One of the best experts on this subject based on the ideXlab platform.

  • A Systematic Literature Review of Adaptive Parameter Control Methods for Evolutionary Algorithms
    ACM Computing Surveys, 2016
    Co-Authors: Aldeida Aleti, Irene Moser
    Abstract:

    Evolutionary algorithms (EAs) are robust stochastic optimisers that perform well over a wide range of problems. Their robustness, however, may be affected by several adjustable Parameters, such as mutation rate, crossover rate, and population size. Algorithm Parameters are usually problem-specific, and often have to be tuned not only to the problem but even the problem instance at hand to achieve ideal performance. In addition, research has shown that different Parameter values may be optimal at different stages of the optimisation process. To address these issues, researchers have shifted their focus to adaptive Parameter Control, in which Parameter values are adjusted during the optimisation process based on the performance of the algorithm. These methods redefine Parameter values repeatedly based on implicit or explicit rules that decide how to make the best use of feedback from the optimisation algorithm. In this survey, we systematically investigate the state of the art in adaptive Parameter Control. The approaches are classified using a new conceptual model that subdivides the process of adapting Parameter values into four steps that are present explicitly or implicitly in all existing approaches that tune Parameters dynamically during the optimisation process. The analysis reveals the major focus areas of adaptive Parameter Control research as well as gaps and potential directions for further development in this area.
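
    To make this feedback-driven view concrete, the sketch below adapts a single mutation rate on a toy OneMax problem: the algorithm's improvement is collected as feedback, assessed as an effect, attributed to the Parameter value that produced it, and used to select the next value by probability matching. This is a minimal illustration of the general idea rather than the survey's prescribed method; the candidate values, the recency-weighted update, and all function names are illustrative assumptions.

      import random

      CANDIDATES = [0.01, 0.05, 0.1, 0.2, 0.4]   # candidate mutation rates under control

      def onemax(bits):
          return sum(bits)

      def mutate(bits, rate):
          return [b ^ (random.random() < rate) for b in bits]

      def select_value(quality):
          # value selection: probability matching over the current quality estimates
          total = sum(quality.values())
          r = random.uniform(0.0, total)
          acc = 0.0
          for value, q in quality.items():
              acc += q
              if r <= acc:
                  return value
          return value

      def run(generations=200, pop_size=50, length=100, alpha=0.3):
          population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
          quality = {v: 1.0 for v in CANDIDATES}            # optimistic initial estimates
          for _ in range(generations):
              rate = select_value(quality)
              before = max(onemax(x) for x in population)   # feedback collection
              offspring = [mutate(x, rate) for x in population]
              population = sorted(population + offspring, key=onemax, reverse=True)[:pop_size]
              after = max(onemax(x) for x in population)
              effect = max(after - before, 0.0)             # effect assessment
              # quality attribution: recency-weighted average credited to the chosen value
              quality[rate] = (1 - alpha) * quality[rate] + alpha * effect
          return max(onemax(x) for x in population), quality

      print(run())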

  • Choosing the Appropriate Forecasting Model for Predictive Parameter Control
    Evolutionary Computation, 2014
    Co-Authors: Aldeida Aleti, Irene Moser, Indika Meedeniya, Lars Grunske
    Abstract:

    All commonly used stochastic optimisation algorithms have to be Parameterised to perform effectively. Adaptive Parameter Control (APC) is an effective method used for this purpose. APC repeatedly adjusts Parameter values during the optimisation process for optimal algorithm performance. The assignment of Parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for Parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future Parameter performance based on previous data. All considered prediction methods make assumptions that the time series data has to conform to for the projections to be accurate. Looking specifically at Parameters of evolutionary algorithms (EAs), we find that all standard EA Parameters, with the exception of population size, conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive Parameter Control outperforms state-of-the-art Parameter Control methods when the performance data adheres to the assumptions made by the prediction method. When a Parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.
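
    As an illustration of the prediction step, the sketch below keeps a short performance history for each candidate Parameter value, fits a least-squares line over a recent window, and extrapolates it one step ahead; the value with the best forecast is chosen. The linear fit stands in for the forecasting models compared in the paper, and the window size, candidate values, and greedy selection rule are assumptions made for the example.

      def forecast_next(history, window=10):
          # Least-squares linear trend over the most recent observations,
          # extrapolated one step ahead.
          y = history[-window:]
          if len(y) < 2:
              return y[-1] if y else 0.0
          n = len(y)
          x_mean = (n - 1) / 2
          y_mean = sum(y) / n
          sxx = sum((x - x_mean) ** 2 for x in range(n))
          sxy = sum((x - x_mean) * (v - y_mean) for x, v in enumerate(y))
          slope = sxy / sxx
          return y_mean + slope * (n - x_mean)

      def choose_value(histories):
          # Pick the candidate Parameter value with the highest forecast performance.
          return max(histories, key=lambda v: forecast_next(histories[v]))

      # Histories map a crossover-rate value to the rewards observed when it was used.
      histories = {0.6: [0.20, 0.25, 0.30], 0.8: [0.40, 0.35, 0.30], 0.9: [0.10, 0.10, 0.10]}
      print(choose_value(histories))   # 0.6: its rising trend beats the falling 0.8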

  • GECCO - Entropy-based adaptive range Parameter Control for evolutionary algorithms
    Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), 2013
    Co-Authors: Aldeida Aleti, Irene Moser
    Abstract:

    Evolutionary Algorithms are equipped with a range of adjustable Parameters, such as crossover and mutation rates, which significantly influence the performance of the algorithm. Practitioners usually do not have the knowledge and time to investigate the ideal Parameter values before the optimisation process. Furthermore, different Parameter values may be optimal for different problems, and even for different problem instances. In this work, we present a Parameter Control method which adjusts Parameter values during the optimisation process using the algorithm's performance as feedback. The approach is particularly effective with continuous Parameter intervals, which are adapted dynamically. Successful Parameter ranges are identified using an entropy-based clusterer, a method which outperforms state-of-the-art Parameter Control algorithms.
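
    The sketch below illustrates range adaptation in a much-simplified form: the Parameter is sampled from a continuous interval, the values that produced improvements are recorded, and the interval is periodically narrowed around them. A plain percentile heuristic stands in for the entropy-based clusterer, and the feedback signal is a synthetic placeholder, so this conveys only the general mechanism rather than the proposed method.

      import random

      def adapt_range(successes, low, high, keep=0.8):
          # Narrow the sampling interval to the central mass of successful values.
          # (A percentile heuristic replaces the paper's entropy-based clusterer.)
          if len(successes) < 10:
              return low, high
          s = sorted(successes)
          cut = int((1 - keep) / 2 * len(s))
          new_low, new_high = s[cut], s[-cut - 1]
          return (new_low, new_high) if new_high - new_low > 1e-3 else (low, high)

      low, high = 0.0, 1.0     # current interval for, e.g., the mutation rate
      successes = []
      for generation in range(100):
          rate = random.uniform(low, high)
          # Placeholder feedback: pretend mid-range rates succeed most often.
          improved = random.random() < 2 * rate * (1 - rate)
          if improved:
              successes.append(rate)
          if generation % 20 == 19:
              low, high = adapt_range(successes, low, high)
              successes.clear()
      print("adapted interval:", low, high)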

  • Studying feedback mechanisms for adaptive Parameter Control in evolutionary algorithms
    2013 IEEE Congress on Evolutionary Computation, 2013
    Co-Authors: Aldeida Aleti, Irene Moser
    Abstract:

    The performance of an Evolutionary Algorithm (EA) is greatly affected by the settings of its strategy Parameters. An effective solution to the Parameterisation problem is adaptive Parameter Control, which applies learning methods that use feedback from the optimisation process to evaluate the effect of Parameter value choices and adjust the Parameter values over the iterations. At every iteration, the performance of the EA is reported and employed by the feedback mechanism as an indication of the success of the Parameterisation of the algorithm instance. Many approaches to collecting information about the algorithm's performance exist in single-objective optimisation. In this work, we review the most recent and prominent approaches. In multiobjective optimisation, establishing a single scalar which can report the algorithm's performance as feedback for adaptive Parameter Control is a complex task. Existing performance measures of multiobjective optimisation are generally used as feedback for the optimisation process. We discuss the properties of these measures and present an empirical evaluation of the binary hypervolume and ϵ+-indicators as feedback for adaptive Parameter Control.
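
    For concreteness, the snippet below computes the additive binary ϵ+-indicator between two approximation sets (all objectives to be minimised) and turns it into a scalar feedback value by comparing the old and new fronts in both directions. The difference-of-indicators form of the feedback is an illustrative choice for the example, not necessarily the exact definition evaluated in the paper.

      def eps_indicator(A, B):
          # Additive binary epsilon indicator I_eps+(A, B) for minimisation: the smallest
          # shift eps such that every point of B is weakly dominated by some point of A
          # translated by eps in every objective.
          return max(min(max(ai - bi for ai, bi in zip(a, b)) for a in A) for b in B)

      def feedback(old_front, new_front):
          # Positive when the new approximation set epsilon-dominates the old one
          # more strongly than the other way around.
          return eps_indicator(old_front, new_front) - eps_indicator(new_front, old_front)

      old = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
      new = [(0.8, 2.9), (1.8, 1.9), (2.9, 0.8)]
      print(feedback(old, new))   # > 0: the new front improved on the old one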

Aldeida Aleti - One of the best experts on this subject based on the ideXlab platform.

  • A Systematic Literature Review of Adaptive Parameter Control Methods for Evolutionary Algorithms
    ACM Computing Surveys, 2016
    Co-Authors: Aldeida Aleti, Irene Moser
    Abstract:

    Evolutionary algorithms (EAs) are robust stochastic optimisers that perform well over a wide range of problems. Their robustness, however, may be affected by several adjustable Parameters, such as mutation rate, crossover rate, and population size. Algorithm Parameters are usually problem-specific, and often have to be tuned not only to the problem but even the problem instance at hand to achieve ideal performance. In addition, research has shown that different Parameter values may be optimal at different stages of the optimisation process. To address these issues, researchers have shifted their focus to adaptive Parameter Control, in which Parameter values are adjusted during the optimisation process based on the performance of the algorithm. These methods redefine Parameter values repeatedly based on implicit or explicit rules that decide how to make the best use of feedback from the optimisation algorithm. In this survey, we systematically investigate the state of the art in adaptive Parameter Control. The approaches are classified using a new conceptual model that subdivides the process of adapting Parameter values into four steps that are present explicitly or implicitly in all existing approaches that tune Parameters dynamically during the optimisation process. The analysis reveals the major focus areas of adaptive Parameter Control research as well as gaps and potential directions for further development in this area.

  • Choosing the Appropriate Forecasting Model for Predictive Parameter Control
    Evolutionary Computation, 2014
    Co-Authors: Aldeida Aleti, Irene Moser, Indika Meedeniya, Lars Grunske
    Abstract:

    All commonly used stochastic optimisation algorithms have to be Parameterised to perform effectively. Adaptive Parameter Control (APC) is an effective method used for this purpose. APC repeatedly adjusts Parameter values during the optimisation process for optimal algorithm performance. The assignment of Parameter values for a given iteration is based on previously measured performance. In recent research, time series prediction has been proposed as a method of projecting the probabilities to use for Parameter value selection. In this work, we examine the suitability of a variety of prediction methods for the projection of future Parameter performance based on previous data. All considered prediction methods make assumptions that the time series data has to conform to for the projections to be accurate. Looking specifically at Parameters of evolutionary algorithms (EAs), we find that all standard EA Parameters, with the exception of population size, conform largely to the assumptions made by the considered prediction methods. Evaluating the performance of these prediction methods, we find that linear regression provides the best results by a very small and statistically insignificant margin. Regardless of the prediction method, predictive Parameter Control outperforms state-of-the-art Parameter Control methods when the performance data adheres to the assumptions made by the prediction method. When a Parameter's performance data does not adhere to the assumptions made by the forecasting method, the use of prediction does not have a notable adverse impact on the algorithm's performance.

  • GECCO - Entropy-based adaptive range Parameter Control for evolutionary algorithms
    Proceedings of the 15th Annual Conference on Genetic and Evolutionary Computation (GECCO '13), 2013
    Co-Authors: Aldeida Aleti, Irene Moser
    Abstract:

    Evolutionary Algorithms are equipped with a range of adjustable Parameters, such as crossover and mutation rates, which significantly influence the performance of the algorithm. Practitioners usually do not have the knowledge and time to investigate the ideal Parameter values before the optimisation process. Furthermore, different Parameter values may be optimal for different problems, and even for different problem instances. In this work, we present a Parameter Control method which adjusts Parameter values during the optimisation process using the algorithm's performance as feedback. The approach is particularly effective with continuous Parameter intervals, which are adapted dynamically. Successful Parameter ranges are identified using an entropy-based clusterer, a method which outperforms state-of-the-art Parameter Control algorithms.

  • Studying feedback mechanisms for adaptive Parameter Control in evolutionary algorithms
    2013 IEEE Congress on Evolutionary Computation, 2013
    Co-Authors: Aldeida Aleti, Irene Moser
    Abstract:

    The performance of an Evolutionary Algorithm (EA) is greatly affected by the settings of its strategy Parameters. An effective solution to the Parameterisation problem is adaptive Parameter Control, which applies learning methods that use feedback from the optimisation process to evaluate the effect of Parameter value choices and adjust the Parameter values over the iterations. At every iteration, the performance of the EA is reported and employed by the feedback mechanism as an indication of the success of the Parameterisation of the algorithm instance. Many approaches to collecting information about the algorithm's performance exist in single-objective optimisation. In this work, we review the most recent and prominent approaches. In multiobjective optimisation, establishing a single scalar which can report the algorithm's performance as feedback for adaptive Parameter Control is a complex task. Existing performance measures of multiobjective optimisation are generally used as feedback for the optimisation process. We discuss the properties of these measures and present an empirical evaluation of the binary hypervolume and ϵ+-indicators as feedback for adaptive Parameter Control.

A. E. Eiben - One of the best experts on this subject based on the ideXlab platform.

  • EvoApplications - Evaluating Reward Definitions for Parameter Control
    Applications of Evolutionary Computation, 2015
    Co-Authors: Giorgos Karafotias, Mark Hoogendoorn, A. E. Eiben
    Abstract:

    Parameter Controllers for Evolutionary Algorithms (EAs) deal with adjusting Parameter values during an evolutionary run. Many ad hoc approaches have been presented for Parameter Control, but few generic Parameter Controllers exist. Recently, successful Parameter Control methods based on Reinforcement Learning (RL) have been suggested for one-off applications, i.e., relatively long runs with Controllers used out-of-the-box with no tailoring to the problem at hand. However, the reward function used was not investigated in depth, though it is a non-trivial factor with an important impact on the performance of an RL mechanism. In this paper, we address this issue by defining and comparing four alternative reward functions for such generic and RL-based EA Parameter Controllers. We conducted experiments with different EAs, test problems, and Controllers, and the results showed that the simplest reward function performs at least as well as the others, making it an ideal choice for generic out-of-the-box Parameter Control.
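
    The functions below sketch a few plausible reward definitions of this kind, all derived from the best-fitness history of a maximising run; they are illustrative stand-ins for such rewards, not the four definitions actually compared in the paper.

      def raw_improvement(best_history):
          # Reward = change in best fitness since the previous control step.
          return best_history[-1] - best_history[-2]

      def binary_improvement(best_history):
          # Reward = 1 if the best fitness improved at all, else 0.
          return 1.0 if best_history[-1] > best_history[-2] else 0.0

      def improvement_per_evaluation(best_history, evaluations_spent):
          # Reward = improvement normalised by the evaluations spent in the step.
          return (best_history[-1] - best_history[-2]) / max(evaluations_spent, 1)

      def relative_improvement(best_history):
          # Reward = improvement relative to the previous best, guarded against zero.
          prev = best_history[-2]
          return (best_history[-1] - prev) / (abs(prev) + 1e-12)

      history = [10.0, 10.5]
      print(raw_improvement(history), binary_improvement(history),
            improvement_per_evaluation(history, evaluations_spent=100),
            relative_improvement(history))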

  • Parameter Control in Evolutionary Algorithms: Trends and Challenges
    IEEE Transactions on Evolutionary Computation, 2015
    Co-Authors: Giorgos Karafotias, Mark Hoogendoorn, A. E. Eiben
    Abstract:

    More than a decade after the first extensive overview on Parameter Control, we revisit the field and present a survey of the state-of-the-art. We briefly summarize the development of the field and discuss existing work related to each major Parameter or component of an evolutionary algorithm. Based on this overview, we observe trends in the area, identify some (methodological) shortcomings, and give recommendations for future research.

  • GECCO - Generic Parameter Control with reinforcement learning
    Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14), 2014
    Co-Authors: Giorgos Karafotias, A. E. Eiben, Mark Hoogendoorn
    Abstract:

    Parameter Control in Evolutionary Computing stands for an approach to Parameter setting that changes the Parameters of an Evolutionary Algorithm (EA) on-the-fly during the run. In this paper we address the issue of a generic and Parameter-independent Controller that can be readily plugged into an existing EA and offer performance improvements by varying the EA Parameters during the problem solution process. Our approach is based on a careful study of Reinforcement Learning (RL) theory and the use of existing RL techniques. We present experiments using various state-of-the-art EAs solving different difficult problems. Results show that our RL Control method has very good potential in improving the quality of the solution found without requiring additional resources or time and with minimal effort from the designer of the application.
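
    A minimal sketch of such a plug-in Controller is shown below, using tabular Q-learning over a coarse description of the EA's state; the state features, the discrete action set of Parameter settings, and the learning constants are assumptions for the example and differ from the Controller design studied in the paper.

      import random
      from collections import defaultdict

      class TabularController:
          # Observes a coarse EA state each generation, picks one of a few discrete
          # Parameter settings, and learns from a scalar reward via Q-learning.
          def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
              self.actions = actions
              self.q = defaultdict(float)
              self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
              self.prev = None   # (state, action) awaiting its reward

          def act(self, state):
              if random.random() < self.epsilon:
                  action = random.choice(self.actions)
              else:
                  action = max(self.actions, key=lambda a: self.q[(state, a)])
              self.prev = (state, action)
              return action

          def observe(self, reward, next_state):
              state, action = self.prev
              best_next = max(self.q[(next_state, a)] for a in self.actions)
              target = reward + self.gamma * best_next
              self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

      # Usage in an EA loop: the state might be "improving"/"stagnating", an action a
      # (mutation rate, crossover rate) pair, and the reward a best-fitness delta.
      controller = TabularController(actions=[(0.01, 0.7), (0.05, 0.9), (0.2, 0.6)])
      mutation_rate, crossover_rate = controller.act("stagnating")
      # ... run one generation of the EA with these Parameter values ...
      controller.observe(reward=0.3, next_state="improving")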

  • GECCO (Companion) - Parameter Control: strategy or luck?
    Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13 Companion), 2013
    Co-Authors: Giorgos Karafotias, Mark Hoogendoorn, A. E. Eiben
    Abstract:

    Parameter Control mechanisms in evolutionary algorithms (EAs) dynamically change the values of the EA Parameters during a run. Research over the last two decades has delivered ample examples where an EA using a Parameter Control mechanism outperforms its static version with fixed Parameter values. However, very few have investigated why such Parameter Control approaches perform better. In principle, it could be the case that using different Parameter values alone is already sufficient and EA performance can be improved without sophisticated Control strategies. This paper investigates whether very simple random variation in Parameter values during an evolutionary run can already provide improvements over static values.
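
    The baseline in question is easy to reproduce: the sketch below runs a (1+1)-EA on OneMax once with a fixed mutation rate and once with a rate redrawn uniformly at random every generation. The problem, the rate interval, and the budget are illustrative choices, not the experimental setup of the paper.

      import random

      def onemax(bits):
          return sum(bits)

      def one_plus_one_ea(length=100, generations=2000, rate_policy=lambda: 0.01):
          # rate_policy is called once per generation: a constant lambda gives static
          # Parameter values, a random draw gives the random-variation baseline.
          parent = [random.randint(0, 1) for _ in range(length)]
          for _ in range(generations):
              rate = rate_policy()
              child = [b ^ (random.random() < rate) for b in parent]
              if onemax(child) >= onemax(parent):
                  parent = child
          return onemax(parent)

      static = one_plus_one_ea(rate_policy=lambda: 1 / 100)
      randomised = one_plus_one_ea(rate_policy=lambda: random.uniform(0.001, 0.1))
      print("static:", static, "random variation:", randomised)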

  • Why Parameter Control mechanisms should be benchmarked against random variation
    2013 IEEE Congress on Evolutionary Computation, 2013
    Co-Authors: Giorgos Karafotias, Mark Hoogendoorn, A. E. Eiben
    Abstract:

    Parameter Control mechanisms in evolutionary algorithms (EAs) dynamically change the values of the EA Parameters during a run. Research over the last two decades has delivered ample examples where an EA using a Parameter Control mechanism outperforms its static version with fixed Parameter values. However, very few have investigated why such Parameter Control approaches perform better. In principle, it could be the case that using different Parameter values alone is already sufficient and EA performance can be improved without sophisticated Control strategies, which raises an issue in the methodology used to evaluate Parameter Control mechanisms. This paper investigates whether very simple random variation in Parameter values during an evolutionary run can already provide improvements over static values. Results suggest that random variation of Parameters should be included in the benchmarks when evaluating a new Parameter Control mechanism.

Ana Gabriela Palomeque-ortiz - One of the best experts on this subject based on the ideXlab platform.

  • Self-adaptive and Deterministic Parameter Control in Differential Evolution for Constrained Optimization
    Constraint-Handling in Evolutionary Optimization, 2020
    Co-Authors: Efrén Mezura-montes, Ana Gabriela Palomeque-ortiz
    Abstract:

    In this chapter we present a modification of a Differential Evolution algorithm to solve constrained optimization problems. The changes include deterministic and self-adaptive Parameter Control for two of the Differential Evolution Parameters and also for two Parameters related to the constraint-handling mechanism. The proposed approach is extensively tested using a set of well-known test problems and performance measures found in the specialized literature. Besides comparing the final results obtained by the algorithm against those of its original version, we also discuss some interesting findings regarding the behavior of the approach and the values observed for each of the Controlled Parameters.

  • Parameter Control in Differential Evolution for constrained optimization
    2009 IEEE Congress on Evolutionary Computation, 2009
    Co-Authors: Efrén Mezura-montes, Ana Gabriela Palomeque-ortiz
    Abstract:

    In this paper we present the addition of Parameter Control to a differential evolution algorithm for constrained optimization. Three Parameters are self-adapted by encoding them within each individual, and a fourth Parameter is Controlled by a deterministic approach. A set of experiments is performed in order (1) to determine the performance of the modified algorithm with respect to its original version, (2) to analyze the behavior of the self-adaptive Parameter values, and (3) to compare it with state-of-the-art approaches. Based on the obtained results, some findings regarding the values of the DE Parameters, as well as of the Parameters related to the constraint-handling mechanism, are discussed.
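
    A rough sketch of this kind of scheme is given below: the scale factor F and crossover rate CR travel inside each individual and are occasionally resampled before use (a jDE-style rule), while an ϵ-tolerance used by the constraint-handling comparison shrinks deterministically over the generations. The test problem, the single constraint, the resampling probability, and the schedule are assumptions made for the example, not the authors' configuration.

      import random

      def objective(x):
          return sum(v * v for v in x)              # minimise the sphere function

      def violation(x):
          return max(0.0, 1.0 - sum(x))             # single constraint: sum(x) >= 1

      def better(xa, xb, eps):
          # Epsilon-level comparison: solutions whose violation is within eps are
          # compared by objective value, otherwise the less-violating one wins.
          va, vb = violation(xa), violation(xb)
          if va <= eps and vb <= eps:
              return objective(xa) <= objective(xb)
          return va <= vb

      def self_adaptive_de(dim=10, pop_size=20, generations=300, bounds=(-5.0, 5.0)):
          lo, hi = bounds
          pop = [{"x": [random.uniform(lo, hi) for _ in range(dim)],
                  "F": random.uniform(0.1, 1.0), "CR": random.random()}
                 for _ in range(pop_size)]
          for g in range(generations):
              eps = 1.0 - g / generations           # deterministic control: tolerance -> 0
              for i, ind in enumerate(pop):
                  # Self-adaptive control: F and CR are encoded in the individual and
                  # occasionally resampled before being used.
                  F = random.uniform(0.1, 1.0) if random.random() < 0.1 else ind["F"]
                  CR = random.random() if random.random() < 0.1 else ind["CR"]
                  a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
                  j_rand = random.randrange(dim)
                  trial = [a["x"][k] + F * (b["x"][k] - c["x"][k])
                           if (random.random() < CR or k == j_rand) else ind["x"][k]
                           for k in range(dim)]
                  if better(trial, ind["x"], eps):
                      pop[i] = {"x": trial, "F": F, "CR": CR}   # F and CR survive with the trial
          return min(pop, key=lambda p: (violation(p["x"]), objective(p["x"])))

      best = self_adaptive_de()
      print(objective(best["x"]), violation(best["x"]))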

Giorgos Karafotias - One of the best experts on this subject based on the ideXlab platform.

  • EvoApplications - Evaluating Reward Definitions for Parameter Control
    Applications of Evolutionary Computation, 2015
    Co-Authors: Giorgos Karafotias, Mark Hoogendoorn, A. E. Eiben
    Abstract:

    Parameter Controllers for Evolutionary Algorithms (EAs) deal with adjusting Parameter values during an evolutionary run. Many ad hoc approaches have been presented for Parameter Control, but few generic Parameter Controllers exist. Recently, successful Parameter Control methods based on Reinforcement Learning (RL) have been suggested for one-off applications, i.e., relatively long runs with Controllers used out-of-the-box with no tailoring to the problem at hand. However, the reward function used was not investigated in depth, though it is a non-trivial factor with an important impact on the performance of an RL mechanism. In this paper, we address this issue by defining and comparing four alternative reward functions for such generic and RL-based EA Parameter Controllers. We conducted experiments with different EAs, test problems, and Controllers, and the results showed that the simplest reward function performs at least as well as the others, making it an ideal choice for generic out-of-the-box Parameter Control.

  • Parameter Control in Evolutionary Algorithms: Trends and Challenges
    IEEE Transactions on Evolutionary Computation, 2015
    Co-Authors: Giorgos Karafotias, Mark Hoogendoorn, A. E. Eiben
    Abstract:

    More than a decade after the first extensive overview on Parameter Control, we revisit the field and present a survey of the state-of-the-art. We briefly summarize the development of the field and discuss existing work related to each major Parameter or component of an evolutionary algorithm. Based on this overview, we observe trends in the area, identify some (methodological) shortcomings, and give recommendations for future research.

  • GECCO - Generic Parameter Control with reinforcement learning
    Proceedings of the 2014 Conference on Genetic and Evolutionary Computation (GECCO '14), 2014
    Co-Authors: Giorgos Karafotias, A. E. Eiben, Mark Hoogendoorn
    Abstract:

    Parameter Control in Evolutionary Computing stands for an approach to Parameter setting that changes the Parameters of an Evolutionary Algorithm (EA) on-the-fly during the run. In this paper we address the issue of a generic and Parameter-independent Controller that can be readily plugged into an existing EA and offer performance improvements by varying the EA Parameters during the problem solution process. Our approach is based on a careful study of Reinforcement Learning (RL) theory and the use of existing RL techniques. We present experiments using various state-of-the-art EAs solving different difficult problems. Results show that our RL Control method has very good potential in improving the quality of the solution found without requiring additional resources or time and with minimal effort from the designer of the application.

  • GECCO (Companion) - Parameter Control: strategy or luck?
    Proceedings of the 15th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO '13 Companion), 2013
    Co-Authors: Giorgos Karafotias, Mark Hoogendoorn, A. E. Eiben
    Abstract:

    Parameter Control mechanisms in evolutionary algorithms (EAs) dynamically change the values of the EA Parameters during a run. Research over the last two decades has delivered ample examples where an EA using a Parameter Control mechanism outperforms its static version with fixed Parameter values. However, very few have investigated why such Parameter Control approaches perform better. In principle, it could be the case that using different Parameter values alone is already sufficient and EA performance can be improved without sophisticated Control strategies. This paper investigates whether very simple random variation in Parameter values during an evolutionary run can already provide improvements over static values.

  • Why Parameter Control mechanisms should be benchmarked against random variation
    2013 IEEE Congress on Evolutionary Computation, 2013
    Co-Authors: Giorgos Karafotias, Mark Hoogendoorn, A. E. Eiben
    Abstract:

    Parameter Control mechanisms in evolutionary algorithms (EAs) dynamically change the values of the EA Parameters during a run. Research over the last two decades has delivered ample examples where an EA using a Parameter Control mechanism outperforms its static version with fixed Parameter values. However, very few have investigated why such Parameter Control approaches perform better. In principle, it could be the case that using different Parameter values alone is already sufficient and EA performance can be improved without sophisticated Control strategies, which raises an issue in the methodology used to evaluate Parameter Control mechanisms. This paper investigates whether very simple random variation in Parameter values during an evolutionary run can already provide improvements over static values. Results suggest that random variation of Parameters should be included in the benchmarks when evaluating a new Parameter Control mechanism.