Stochastic Local Search


The experts below are selected from a list of 360 experts worldwide, ranked by the ideXlab platform

Thomas Stutzle - One of the best experts on this subject based on the ideXlab platform.

  • automatic design of hybrid Stochastic Local Search algorithms for permutation flowshop problems with additional constraints
    Operations Research Perspectives, 2021
    Co-Authors: Federico Pagnozzi, Thomas Stutzle
    Abstract:

    Automatic design of stochastic local search algorithms has been shown to be very effective in generating algorithms for the permutation flowshop problem under its most studied objectives, including makespan, flowtime, and total tardiness. The automatic design system uses a configuration tool to combine algorithmic components following a set of rules defined as a context-free grammar. In this paper we use the same system to tackle two of the most studied additional constraints for these objectives: sequence-dependent setup times and the no-idle constraint. Additional components were added to adapt the system to the new problems while keeping the grammar structure and the experimental setup intact. The experiments show that the generated algorithms outperform the state of the art in each case.

  • automatic design of hybrid Stochastic Local Search algorithms for permutation flowshop problems
    European Journal of Operational Research, 2019
    Co-Authors: Federico Pagnozzi, Thomas Stutzle
    Abstract:

    Stochastic local search (SLS) methods are at the core of many effective heuristics for tackling different permutation flowshop problems (PFSPs). Usually, such algorithms require a careful, manual algorithm engineering effort to reach high performance. An alternative to manual algorithm engineering is the automated design of effective SLS algorithms: building flexible algorithm frameworks and using automatic algorithm configuration techniques to instantiate high-performing algorithms. In this paper, we automatically generate new high-performing algorithms for some of the most widely studied variants of the PFSP. More in detail, we (i) develop a new algorithm framework, EMILI, that implements algorithm-specific and problem-specific building blocks; (ii) define the rules for composing algorithms from the building blocks; and (iii) employ an automatic algorithm configuration tool to search for high-performing algorithm configurations. With these ingredients, we automatically generate algorithms for the PFSP with the objectives makespan, total completion time, and total tardiness that outperform the best algorithms obtained by a manual algorithm engineering process.

  • Stochastic Local Search algorithms an overview
    Handbook of Computational Intelligence, 2015
    Co-Authors: Holger H Hoos, Thomas Stutzle
    Abstract:

    In this chapter, we give an overview of the main concepts underlying the stochastic local search (SLS) framework and outline some of the most relevant SLS techniques. We also discuss some major recent research directions in the area of stochastic local search. The remainder of this chapter is structured as follows. In Sect. 54.1, we situate the notion of SLS within the broader context of fundamental search paradigms and briefly review the definition of an SLS algorithm. In Sect. 54.2, we summarize the main issues and trends in the design of greedy constructive and iterative improvement algorithms, while in Sects. 54.3–54.5, we provide a concise overview of some of the most widely used simple, hybrid, and population-based SLS methods. Finally, in Sect. 54.6, we discuss some recent topics of interest, such as the systematic design of SLS algorithms and methods for the automatic configuration of SLS.

  • grammar based generation of Stochastic Local Search heuristics through automatic algorithm configuration tools
    Computers & Operations Research, 2014
    Co-Authors: Franco Mascia, Jeremie Duboislacoste, Manuel Lopezibanez, Thomas Stutzle
    Abstract:

    Several grammar-based genetic programming algorithms have been proposed in the literature to automatically generate heuristics for hard optimization problems. These approaches specify, in a grammar, the algorithmic building blocks and the ways in which they can be combined; the best heuristic for the problem being tackled is found by an evolutionary algorithm that searches the algorithm design space defined by the grammar. In this work, we propose a novel representation of the grammar as a sequence of categorical, integer, and real-valued parameters. We then use a tool for automatic algorithm configuration to search for the best algorithm for the problem at hand. Our experimental evaluation on the one-dimensional bin packing problem and the permutation flowshop problem with weighted tardiness objective shows that the proposed approach produces better algorithms than grammatical evolution, a well-established variant of grammar-based genetic programming. The reasons behind this improvement lie both in the proposed representation and in the method used to search the algorithm design space.
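The grammar-to-parameters idea can be sketched in a few lines: each nonterminal with more than one production consumes one categorical parameter, so a flat parameter vector fully determines the derived algorithm. The toy grammar and component names below are illustrative assumptions, not the grammar used in the paper.

```python
# Hypothetical sketch of encoding an algorithm-design grammar as a flat
# parameter vector. Nonterminals map to lists of productions.
GRAMMAR = {
    "<sls>":         [["<perturb>", "<localsearch>", "<accept>"]],
    "<perturb>":     [["random-restart"], ["segment-swap"]],
    "<localsearch>": [["first-improvement"], ["best-improvement"]],
    "<accept>":      [["accept-if-better"], ["metropolis"]],
}

def decode(params):
    """Expand the start symbol left to right; each nonterminal with more
    than one production consumes one categorical parameter."""
    out, stack, i = [], ["<sls>"], 0
    while stack:
        sym = stack.pop(0)
        if sym in GRAMMAR:
            prods = GRAMMAR[sym]
            choice = 0
            if len(prods) > 1:
                choice = params[i]
                i += 1
            stack = list(prods[choice]) + stack
        else:
            out.append(sym)
    return out

# A parameter vector such as [1, 0, 1] now names one concrete algorithm,
# so an off-the-shelf configurator can tune it like any other parameters.
algorithm = decode([1, 0, 1])
```

With this flattening, the design space becomes a plain parameter space, which is what lets a configurator replace grammatical evolution as the search engine.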

  • algorithm comparison by automatically configurable Stochastic Local Search frameworks a case study using flow shop scheduling problems
    Lecture Notes in Computer Science, 2014
    Co-Authors: Franco Mascia, Jeremie Duboislacoste, Manuel Lopezibanez, Marieeleonore Marmion, Thomas Stutzle
    Abstract:

    The benefits of hybrid stochastic local search (SLS) methods, in comparison with more classical (non-hybrid) ones, are often difficult to quantify, since one has to take into account not only the final results obtained but also the effort spent on finding the best configuration of the hybrid and of the classical SLS method. In this paper, we study this trade-off by means of tools for automatic algorithm design; in particular, we study the generation of hybrid SLS algorithms versus the selection of one classical SLS method among several. In addition, we tune the parameters of the classical SLS method separately and compare the results with the ones obtained when selection and tuning are done at the same time. We carry out experiments on two variants of the permutation flowshop scheduling problem that consider the minimization of the weighted sum of completion times (PFSP-WCT) and the minimization of weighted tardiness (PFSP-WT). Our results indicate that the hybrid algorithms we instantiate are able to match and improve over the best classical SLS method.

Ole J. Mengshoel - One of the best experts on this subject based on the ideXlab platform.

  • markov chain analysis of noise and restart in Stochastic Local Search
    International Joint Conference on Artificial Intelligence, 2016
    Co-Authors: Ole J. Mengshoel, Youssef Ahres
    Abstract:

    Stochastic local search (SLS) algorithms have proven to be very competitive in solving hard computational problems. This paper investigates the foundations of SLS algorithms. We develop a simple SLS algorithm, MarkovSLS, with three search operators: greedy, noise, and restart. The search operators are controlled by probability parameters, leading to soft (probabilistic) rather than hard (deterministic) restarts. We consider two special cases of the MarkovSLS algorithm: SoftSLS and AdaptiveSLS. In SoftSLS, the probability parameters are fixed, enabling analysis using standard homogeneous Markov chains. We study the interaction between the restart and noise parameters in SoftSLS, and optimize them analytically in addition to the traditional empirical approach. Experimentally, we investigate the dependency of SoftSLS's performance on its noise and restart parameters, validating the analytical results. AdaptiveSLS dynamically adjusts its noise and restart parameters during search. Experimentally, on synthetic and feature selection problems, we compare AdaptiveSLS with other algorithms, including an analytically optimized version of SoftSLS, and find that it performs well while not requiring prior knowledge of the search space.
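As a rough illustration of the three operators, here is a minimal SoftSLS-style loop on a bitstring maximisation problem: fixed probabilities select a soft restart, a noise flip, or a greedy best-flip at each step. This is a sketch under our own assumptions, not the authors' MarkovSLS implementation.

```python
import random

def soft_sls(fitness, n_bits, p_noise, p_restart, steps, seed=0):
    """Minimal SoftSLS-style loop (illustrative): the restart and noise
    operators fire with fixed probabilities; otherwise a greedy best
    single-bit flip is taken. Returns the best bitstring seen."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    best = x[:]
    for _ in range(steps):
        r = rng.random()
        if r < p_restart:                      # soft (probabilistic) restart
            x = [rng.randint(0, 1) for _ in range(n_bits)]
        elif r < p_restart + p_noise:          # noise: flip one random bit
            i = rng.randrange(n_bits)
            x[i] ^= 1
        else:                                  # greedy: best single-bit flip
            flips = []
            for i in range(n_bits):
                y = x[:]
                y[i] ^= 1
                flips.append((fitness(y), y))
            x = max(flips)[1]
        if fitness(x) > fitness(best):
            best = x[:]
    return best

# Example run on one-max (fitness = number of ones), with some noise.
best = soft_sls(sum, 12, p_noise=0.2, p_restart=0.05, steps=200)
```

Because the operator choice at each step depends only on the current state and fixed probabilities, the process is a homogeneous Markov chain, which is what makes the analytical treatment in the paper possible.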

  • Portfolios in Stochastic Local Search: Efficiently Computing Most Probable Explanations in Bayesian Networks
    Journal of Automated Reasoning, 2011
    Co-Authors: Ole J. Mengshoel, Dan Roth, David C. Wilkins
    Abstract:

    Portfolio methods support the combination of different algorithms and heuristics, including stochastic local search (SLS) heuristics, and have been identified as a promising approach to solving computationally hard problems. While successful in experiments, theoretical foundations and analytical results for portfolio-based SLS heuristics are less developed. This article aims to improve the understanding of the role of portfolios of heuristics in SLS. We emphasize the problem of computing most probable explanations (MPEs) in Bayesian networks (BNs). Algorithmically, we discuss a portfolio-based SLS algorithm for MPE computation, Stochastic Greedy Search (SGS). SGS supports the integration of different initialization operators (or initialization heuristics) and different search operators (greedy and noisy heuristics), thereby enabling new analytical and experimental results. Analytically, we introduce a novel Markov chain model tailored to portfolio-based SLS algorithms including SGS, enabling us to derive expected hitting time results that explain empirical run time results. For a specific BN, we show the benefit of using a homogeneous initialization portfolio. To further illustrate the portfolio approach, we consider novel additive search heuristics for handling determinism in the form of zero entries in conditional probability tables in BNs. Our additive approach adds rather than multiplies probabilities when computing the utility of an explanation. We motivate the additive measure by studying the dramatic impact of zero entries in conditional probability tables on the number of zero-probability explanations, which in turn complicates the search process. We consider the relationship between MAXSAT and MPE, and show that additive utility (or gain) is a generalization, to the probabilistic setting, of the MAXSAT utility (or gain) used in the celebrated GSAT and WalkSAT algorithms and their descendants. Utilizing our Markov chain framework, we show that the expected hitting time is a rational function (i.e., a ratio of two polynomials) of the probability of applying an additive search operator. Experimentally, we report on synthetically generated BNs as well as BNs from applications, and compare SGS's performance to that of Hugin, which performs BN inference by compilation to and propagation in clique trees. On synthetic networks, SGS speeds up computation by approximately two orders of magnitude compared to Hugin. On application networks, our approach is highly competitive for Bayesian networks with a high degree of determinism. In addition to showing that stochastic local search can be competitive with clique tree clustering, our empirical results provide an improved understanding of the circumstances under which portfolio-based SLS outperforms clique tree clustering and vice versa.
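The motivation for the additive measure can be seen in a two-function comparison: a single zero CPT entry collapses the usual multiplicative utility to zero for every explanation that touches it, whereas an additive score still ranks such explanations. The numbers below are a hypothetical sketch, not the paper's actual SGS scoring code.

```python
def multiplicative_utility(probs):
    """Standard MPE utility: the product of the CPT entries selected
    by an explanation."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def additive_utility(probs):
    """Additive utility in the spirit of the article's additive search
    heuristics: summing the entries means one deterministic zero does
    not erase all gradient information (illustrative sketch)."""
    return sum(probs)

# Two explanations that both hit a deterministic zero CPT entry:
a = [0.9, 0.8, 0.0]   # otherwise strong explanation
b = [0.1, 0.1, 0.0]   # otherwise weak explanation
```

Under the multiplicative utility, `a` and `b` are indistinguishable (both score 0), so the search has no gradient to follow; the additive utility still prefers `a`, which is exactly the MAXSAT-style gain the article generalizes.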

  • Initialization and Restart in Stochastic Local Search: Computing a Most Probable Explanation in Bayesian Networks
    IEEE Transactions on Knowledge and Data Engineering, 2011
    Co-Authors: Ole J. Mengshoel, David C. Wilkins, Dan Roth
    Abstract:

    For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work, we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with a focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform-at-random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.

  • understanding the role of noise in Stochastic Local Search analysis and experiments
    Artificial Intelligence, 2008
    Co-Authors: Ole J. Mengshoel
    Abstract:

    Stochastic local search (SLS) algorithms have been proven to be among the best approaches to solving computationally hard problems. SLS algorithms typically have a number of parameters, optimized empirically, that characterize and determine their performance. In this article, we focus on the noise parameter. The theoretical foundation of SLS, including an understanding of how the optimal noise varies with problem difficulty, is lagging compared to the strong empirical results obtained using these algorithms. A purely empirical approach to understanding and optimizing SLS noise, as problem instances vary, can be very computationally intensive. To complement existing experimental results, we formulate and analyze several Markov chain models of SLS in this article. In particular, we compute expected hitting times and show that they are rational functions for individual problem instances as well as their mixtures. Expected hitting time curves are analytical counterparts to the noise response curves reported in the experimental literature. Hitting time analysis using polynomials and convex functions is also discussed. In addition, we present examples and experimental results illustrating the impact of varying noise probability on SLS run time. In experiments, where most probable explanations in Bayesian networks are computed, we use synthetic problem instances as well as problem instances from applications. We believe that our results provide an improved theoretical understanding of the role of noise in stochastic local search, thereby providing a foundation for further progress in this area.
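The claim that expected hitting time is a rational function of the noise probability can be reproduced on a toy chain by first-step analysis. The three-state model below is our own illustration, not one of the article's instances: from each non-goal state, noise probability p moves (or keeps) the walk away from the goal, while 1 - p moves it toward the goal.

```python
from fractions import Fraction

def hitting_time(p):
    """Expected steps to reach state 2 from state 0 in a toy chain:
    from state 1, move up with prob 1-p, down with prob p;
    from state 0, move up with prob 1-p, stay with prob p.
    First-step analysis gives h1 = 1 + p*h0 and
    h0 = 1 + p*h0 + (1-p)*h1, which solves to the rational function
    h0 = (2 - p) / (1 - p)**2."""
    p = Fraction(p)
    return (2 - p) / (1 - p) ** 2
```

Plotting `hitting_time` over p in [0, 1) gives exactly the kind of analytical noise response curve the article contrasts with empirical ones: a ratio of two polynomials in the noise probability, here increasing monotonically because in this toy chain noise only hurts.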

  • efficient bayesian network inference genetic algorithms Stochastic Local Search and abstraction
    Efficient Bayesian Network Inference: Genetic Algorithms Stochastic Local Search and Abstraction, 1999
    Co-Authors: David C. Wilkins, Ole J. Mengshoel
    Abstract:

    Bayesian networks are a core method for uncertainty representation and reasoning in artificial intelligence. This dissertation focuses on efficient approximate Bayesian network inference for computing the most probable explanation. New algorithms for efficient approximate inference are presented, and new techniques are described for creating hard synthetic Bayesian networks. Three major research results related to speeding up Bayesian network inference are presented in this dissertation. First, improvements are made in the use of genetic algorithms. Local optima are an important problem in Bayesian network inference. Classical genetic algorithms converge, like hill-climbing algorithms, to one local optimum, while niching genetic algorithms converge to multiple local optima. This dissertation introduces the Probabilistic Crowding niching genetic algorithm, and presents theoretical and empirical results showing that Probabilistic Crowding gives predictable convergence, which at equilibrium is proportional to the utility function, which for Bayesian networks is the probability of an explanation. Second, improvements are made to stochastic local search algorithms for efficient Bayesian network inference. This dissertation presents the Stochastic Greedy Search algorithm, which introduces noisy steps that allow local search to escape local optima. We also introduce different measures of gain (or gradient) and an operator-based approach, giving several ways to search locally. Comparisons to the state-of-the-art inference algorithm Hugin show that Stochastic Greedy Search performs significantly better for satisfiability Bayesian networks as well as for certain Bayesian networks from applications. In application networks, initialization algorithms, which compute the most probable explanation in bounding cases, prove to be very valuable, and we introduce two novel initialization algorithms, denoted forward and backward dynamic programming. Initialization starts the local search at points closer to local optima than when search starts from an explanation created uniformly at random. Lastly, improvements are made to the use and measurement of abstraction and aggregation to improve Bayesian network inference. A criterion is introduced that quantifies how different methods of abstraction impact the quality of inference. The criterion regards abstraction as noise and uses variance as a measure of quality. Results are presented that quantify how different methods and levels of abstraction affect accuracy. Two major research results are presented that relate to creating hard synthetic Bayesian networks for empirical research on inference algorithms. One method translates deceptive problems studied in genetic algorithms to a Bayesian network setting. We show that Bayesian networks can be deceptive, and this is important since genetic algorithm performance suffers on deceptive problems. The other result is based on translating satisfiability problems

Holger H Hoos - One of the best experts on this subject based on the ideXlab platform.

  • Stochastic Local Search algorithms an overview
    Handbook of Computational Intelligence, 2015
    Co-Authors: Holger H Hoos, Thomas Stutzle
    Abstract:

    In this chapter, we give an overview of the main concepts underlying the stochastic local search (SLS) framework and outline some of the most relevant SLS techniques. We also discuss some major recent research directions in the area of stochastic local search. The remainder of this chapter is structured as follows. In Sect. 54.1, we situate the notion of SLS within the broader context of fundamental search paradigms and briefly review the definition of an SLS algorithm. In Sect. 54.2, we summarize the main issues and trends in the design of greedy constructive and iterative improvement algorithms, while in Sects. 54.3–54.5, we provide a concise overview of some of the most widely used simple, hybrid, and population-based SLS methods. Finally, in Sect. 54.6, we discuss some recent topics of interest, such as the systematic design of SLS algorithms and methods for the automatic configuration of SLS.

  • generating domain specific planners through automatic parameter configuration in lpg
    International Conference on Automated Planning and Scheduling, 2011
    Co-Authors: Mauro Vallati, Holger H Hoos, Alfonso Gerevini, Chris Fawcett, Alessandro Saetti
    Abstract:

    The ParLPG planning system is based on the idea of using a generic algorithm configuration procedure – here, the well-known ParamILS framework – to optimise the performance of a highly parametric planner on a set of problem instances representative of a specific planning domain. This idea is applied to LPG, a versatile and efficient planner based on stochastic local search, with 62 parameters and over 6.5 × 10^17 possible configurations. A recent, large-scale empirical investigation showed that the approach behind ParLPG yields substantial performance improvements across a broad range of planning domains.

  • on the quality and quantity of random decisions in Stochastic Local Search for sat
    Lecture Notes in Computer Science, 2006
    Co-Authors: Dave A D Tompkins, Holger H Hoos
    Abstract:

    Stochastic local search (SLS) methods underlie some of the best-performing algorithms for certain types of SAT instances, both from an empirical and from a theoretical point of view. By definition and in practice, random decisions are an essential ingredient of SLS algorithms. In this paper we empirically analyse the role of randomness in these algorithms. We first study the effect of the quality of the underlying random number sequence on the behaviour of well-known algorithms such as Papadimitriou's algorithm and Adaptive Novelty+. Our results indicate that while extremely poor-quality random number sequences can have a detrimental effect on the behaviour of these algorithms, there is no evidence that the use of standard pseudo-random number generators is problematic. We also investigate the amount of randomness required to achieve the typical behaviour of these algorithms using derandomisation. Our experimental results indicate that the performance of SLS algorithms for SAT is surprisingly robust with respect to the number of random decisions made by an algorithm.

  • efficient Stochastic Local Search for mpe solving
    International Joint Conference on Artificial Intelligence, 2005
    Co-Authors: Frank Hutter, Holger H Hoos, Thomas Stutzle
    Abstract:

    Finding most probable explanations (MPEs) in graphical models, such as Bayesian belief networks, is a fundamental problem in reasoning under uncertainty, and much effort has been spent on developing effective algorithms for this NP-hard problem. Stochastic local search (SLS) approaches to MPE solving have previously been explored, but were found to be not competitive with state-of-the-art branch & bound methods. In this work, we identify the shortcomings of earlier SLS algorithms for the MPE problem and demonstrate how these can be overcome, leading to an SLS algorithm that substantially improves the state of the art in solving hard networks with many variables, large domain sizes, high degree, and, most importantly, networks with high induced width.

  • Search space features underlying the performance of Stochastic Local Search algorithms for max sat
    Parallel Problem Solving from Nature, 2004
    Co-Authors: Holger H Hoos, Kevin Smyth, Thomas Stutzle
    Abstract:

    MAX-SAT is a well-known optimisation problem that can be seen as a generalisation of the propositional satisfiability problem. In this study, we investigate how the performance of stochastic local search (SLS) algorithms – a large and prominent class of algorithms that includes, for example, tabu search, evolutionary algorithms, and simulated annealing – depends on features of the underlying search space. We show that two well-known measures of search space structure, the autocorrelation length of random walks and the so-called fitness-distance correlation, reflect complementary factors underlying instance hardness for high-performance SLS algorithms. While the autocorrelation measure is computationally cheap, the fitness-distance correlation serves mainly as an a posteriori measure for explaining performance. We also study the dependence of SLS performance on features of the distribution of clause weights for individual instances and show that, depending on the variance of the clause weight distribution, different search strategies seem best suited for dealing with the structure of the respective search spaces.
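Of the two measures, fitness-distance correlation is the simpler to state in code: it is the Pearson correlation between the fitness of sampled solutions and their distance to the nearest optimum. A minimal sketch (the sampling procedure and the distance function for a concrete problem are left out):

```python
import math

def fdc(fitness_values, distances):
    """Fitness-distance correlation: the Pearson correlation between
    solution fitness and distance to the nearest optimum, computed
    over a sample of solutions (standard definition)."""
    n = len(fitness_values)
    mf = sum(fitness_values) / n
    md = sum(distances) / n
    cov = sum((f - mf) * (d - md)
              for f, d in zip(fitness_values, distances)) / n
    sf = math.sqrt(sum((f - mf) ** 2 for f in fitness_values) / n)
    sd = math.sqrt(sum((d - md) ** 2 for d in distances) / n)
    return cov / (sf * sd)
```

For a minimisation cost, an FDC near +1 indicates a "big valley" landscape where worse solutions lie further from the optimum; values near 0 or negative indicate landscapes that mislead local search, which is why the measure explains hardness only a posteriori (it requires known optima).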

Manuel Lopezibanez - One of the best experts on this subject based on the ideXlab platform.

  • grammar based generation of Stochastic Local Search heuristics through automatic algorithm configuration tools
    Computers & Operations Research, 2014
    Co-Authors: Franco Mascia, Jeremie Duboislacoste, Manuel Lopezibanez, Thomas Stutzle
    Abstract:

    Several grammar-based genetic programming algorithms have been proposed in the literature to automatically generate heuristics for hard optimization problems. These approaches specify, in a grammar, the algorithmic building blocks and the ways in which they can be combined; the best heuristic for the problem being tackled is found by an evolutionary algorithm that searches the algorithm design space defined by the grammar. In this work, we propose a novel representation of the grammar as a sequence of categorical, integer, and real-valued parameters. We then use a tool for automatic algorithm configuration to search for the best algorithm for the problem at hand. Our experimental evaluation on the one-dimensional bin packing problem and the permutation flowshop problem with weighted tardiness objective shows that the proposed approach produces better algorithms than grammatical evolution, a well-established variant of grammar-based genetic programming. The reasons behind this improvement lie both in the proposed representation and in the method used to search the algorithm design space.

  • algorithm comparison by automatically configurable Stochastic Local Search frameworks a case study using flow shop scheduling problems
    Lecture Notes in Computer Science, 2014
    Co-Authors: Franco Mascia, Jeremie Duboislacoste, Manuel Lopezibanez, Marieeleonore Marmion, Thomas Stutzle
    Abstract:

    The benefits of hybrid stochastic local search (SLS) methods, in comparison with more classical (non-hybrid) ones, are often difficult to quantify, since one has to take into account not only the final results obtained but also the effort spent on finding the best configuration of the hybrid and of the classical SLS method. In this paper, we study this trade-off by means of tools for automatic algorithm design; in particular, we study the generation of hybrid SLS algorithms versus the selection of one classical SLS method among several. In addition, we tune the parameters of the classical SLS method separately and compare the results with the ones obtained when selection and tuning are done at the same time. We carry out experiments on two variants of the permutation flowshop scheduling problem that consider the minimization of the weighted sum of completion times (PFSP-WCT) and the minimization of weighted tardiness (PFSP-WT). Our results indicate that the hybrid algorithms we instantiate are able to match and improve over the best classical SLS method.

  • automatic design of hybrid Stochastic Local Search algorithms
    Lecture Notes in Computer Science, 2013
    Co-Authors: Marieeleonore Marmion, Manuel Lopezibanez, Franco Mascia, Thomas Stutzle
    Abstract:

    Many stochastic local search (SLS) methods rely on the manipulation of a single solution at each search step. Examples are iterative improvement, iterated local search, simulated annealing, variable neighborhood search, and iterated greedy. These SLS methods are the basis of many state-of-the-art algorithms for hard combinatorial optimization problems. Often, several of these SLS methods are combined with each other to improve performance. We propose here a practical, unified structure that encompasses several such SLS methods. The proposed structure is unified because it integrates these metaheuristics into a single structure from which we can not only instantiate each of them but also generate complex combinations and variants. Moreover, the structure is practical, since we propose a method to instantiate actual algorithms for practical problems in a semi-automatic fashion. The method presented in this work implements a general local search structure as a grammar; an instantiation of such a grammar is a program that can be compiled into executable form. We propose to find the appropriate grammar instantiation for a particular problem by means of automatic configuration. The result is a semi-automatic system that, with little human effort, is able to generate powerful hybrid SLS algorithms.
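A minimal version of such a unified single-solution structure can be written as one loop with pluggable components; choosing the components instantiates iterated local search, iterated greedy, and so on. The skeleton and the toy instantiation below are our own sketch of the idea, not the authors' grammar-based implementation.

```python
import random

def unified_sls(init, local_search, perturb, accept, cost, iters, seed=0):
    """Single-solution SLS skeleton: the component functions are plugged
    in, so the same loop yields ILS, iterated greedy, and variants
    (our own sketch of the unified structure, not the paper's grammar)."""
    rng = random.Random(seed)
    s = local_search(init(rng))
    best = s
    for _ in range(iters):
        t = local_search(perturb(s, rng))
        if accept(cost(t), cost(s)):
            s = t
        if cost(s) < cost(best):
            best = s
    return best

# Instantiation: iterated local search minimising (x - 7)^2 over integers.
cost = lambda x: (x - 7) ** 2

def descend(x):
    # first-improvement descent in the +/-1 neighbourhood
    while cost(x - 1) < cost(x) or cost(x + 1) < cost(x):
        x = x - 1 if cost(x - 1) < cost(x) else x + 1
    return x

best = unified_sls(init=lambda rng: rng.randint(-50, 50),
                   local_search=descend,
                   perturb=lambda x, rng: x + rng.randint(-5, 5),
                   accept=lambda new, cur: new <= cur,
                   cost=cost, iters=10)
```

Swapping `accept` for a Metropolis criterion gives a simulated-annealing-like variant, and replacing `perturb` with a destroy-and-reconstruct step gives iterated greedy, which is the sense in which one structure covers many metaheuristics.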

  • exploratory analysis of Stochastic Local Search algorithms in biobjective optimization
    Experimental Methods for the Analysis of Optimization Algorithms, 2010
    Co-Authors: Manuel Lopezibanez, Luis Paquete, Thomas Stutzle
    Abstract:

    This chapter introduces two Perl programs that implement graphical tools for exploring the performance of stochastic local search algorithms for biobjective optimization problems. These tools are based on the concept of the empirical attainment function (EAF), which describes the probabilistic distribution of the outcomes obtained by a stochastic algorithm in the objective space. In particular, we consider the visualization of attainment surfaces and of differences between the first-order EAFs of the outcomes of two algorithms. This visualization allows us to identify certain algorithmic behaviors in a graphical way. We explain the use of these visualization tools and illustrate them with examples arising from practice.
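The first-order EAF evaluates, at each point of the objective space, the fraction of independent runs that attained (weakly dominated) that point. The original tools are Perl programs; the small sketch below, for minimisation of two objectives, is our own illustration of the underlying definition.

```python
def attainment(runs, point):
    """First-order empirical attainment function at `point`: the fraction
    of runs that produced at least one outcome weakly dominating it
    (minimisation in both objectives; illustrative sketch)."""
    def dominates(a, b):
        return a[0] <= b[0] and a[1] <= b[1]
    hit = sum(1 for run in runs if any(dominates(o, point) for o in run))
    return hit / len(runs)

# Three runs of a hypothetical biobjective SLS, each a set of outcomes
# in objective space:
runs = [[(1, 4), (3, 2)],   # run 1
        [(2, 3)],           # run 2
        [(5, 5)]]           # run 3
```

Evaluating `attainment` over a grid of points yields the attainment surfaces the chapter visualizes, and subtracting the EAFs of two algorithms point by point gives the first-order EAF differences used to compare them.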

  • effective hybrid Stochastic Local Search algorithms for biobjective permutation flowshop scheduling
    Lecture Notes in Computer Science, 2009
    Co-Authors: Jeremie Duboislacoste, Manuel Lopezibanez, Thomas Stutzle
    Abstract:

    This paper presents the steps followed in the design of hybrid stochastic local search algorithms for biobjective permutation flow shop scheduling problems. In particular, this paper tackles the three pairwise combinations of the objectives (i) makespan, (ii) the sum of the completion times of the jobs, and (iii) the weighted total tardiness of all jobs. The proposed algorithms are combinations of two local search methods: two-phase local search and Pareto local search. The design of the algorithms is based on a careful experimental analysis of crucial algorithmic components of the two search methods. The final results show that the newly developed algorithms reach very high performance: the solutions obtained frequently improve upon the best nondominated solutions previously known, while requiring much shorter computation times.

Dalila Boughaci - One of the best experts on this subject based on the ideXlab platform.

  • graph based model for information retrieval using a Stochastic Local Search
    Pattern Recognition Letters, 2017
    Co-Authors: Sidali Hocine Farhi, Dalila Boughaci
    Abstract:

    Graphs have become increasingly important for modeling complicated structures and data, such as chemical compounds and social networks. Recent advances in machine learning research have produced a number of graph-based models, from which information retrieval also benefits, since many of these models have been validated on information retrieval tasks. In this paper, we investigate the issues of indexing graphs and define a novel solution by applying a stochastic local search (SLS) method to extract the subgraphs that will be used for the information retrieval process. To reduce the size of the index, we take into consideration the size of the query and the set of frequent subgraphs. In other words, the subgraphs used to create the index have a size equal to the size of the query, in order to reduce as much as possible the search space and the execution time. The proposed method is evaluated on the CACM collection, which contains the titles, authors, and abstracts (where available) of a set of scientific articles, together with a set of queries and relevance judgments, and is compared to the vector-based cosine model proposed in the literature. Our method is able to discover frequent subgraphs that serve to establish the index, and to return relevant documents as the information retrieval process's output. The numerical results show that our method provides competitive results and finds high-quality solutions (documents) compared to the relevant documents cited in the CACM collection.

  • hybrid harmony Search combined with Stochastic Local Search for feature selection
    Neural Processing Letters, 2016
    Co-Authors: Messaouda Nekkaa, Dalila Boughaci
    Abstract:

    Feature selection is a challenging task that has been the subject of a large amount of research, especially in relation to classification tasks. It eliminates redundant attributes and enhances classification accuracy by keeping only the relevant attributes. In this paper, we propose a hybrid search method based on both the harmony search algorithm (HSA) and stochastic local search (SLS) for feature selection in data classification. A novel probabilistic selection strategy is used in HSA-SLS to select the appropriate solutions to undergo stochastic local refinement, keeping a good compromise between exploration and exploitation. In addition, HSA-SLS is combined with a support vector machine (SVM) classifier with optimized parameters. The proposed HSA-SLS method tries to find a subset of features that maximizes the classification accuracy rate of the SVM. Experimental results show good performance in favor of our proposed method.

  • a Stochastic Local Search combined with support vector machine for web services classification
    Rapports de recherche internes, 2016
    Co-Authors: Abdelouahab Laachemi, Dalila Boughaci
    Abstract:

    In this paper, we are interested in Web service classification. We propose a classification method that first uses a stochastic local search (SLS) metaheuristic for feature selection and then calls a support vector machine (SVM) to perform the classification task. The proposed method, which combines SLS and SVM for Web service classification, is validated on the QWS dataset to measure its performance. We used a set of 364 Web services divided into four categories (Platinum, Gold, Silver, and Bronze), in which quality is measured by 9 attributes. The experiments and the comparison show the effectiveness of our method for the classification of Web services.

  • Local Search methods for the optimal winner determination problem in combinatorial auctions
    Journal of Mathematical Modelling and Algorithms, 2010
    Co-Authors: Dalila Boughaci, Belaid Benhamou, Habiba Drias
    Abstract:

    In this paper, both stochastic local search (SLS) and tabu search (TS) are studied for the optimal winner determination problem (WDP) in combinatorial auctions. The proposed methods are evaluated on various benchmark problems and compared with hybrid simulated annealing (SAGII), memetic algorithms (MA), and Casanova. The computational experiments show that SLS provides competitive results and finds solutions of higher quality than the TS and Casanova methods.
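To make the setting concrete, here is a toy SLS for the WDP: each bid is an item set with a price, and a move inserts one bid while evicting any current winners that share items with it; the inserted bid is chosen greedily, or at random with a noise probability. This is a hypothetical miniature of the problem, not the SLS evaluated in the paper.

```python
import random

def sls_wdp(bids, steps, p_noise=0.3, seed=0):
    """Toy stochastic local search for the winner determination problem.
    `bids` is a list of (item_set, price) pairs; a move inserts one bid
    and evicts conflicting winners (illustrative, not the paper's SLS)."""
    rng = random.Random(seed)

    def revenue(sel):
        return sum(bids[i][1] for i in sel)

    def insert(sel, i):
        items = bids[i][0]
        kept = {j for j in sel if not (bids[j][0] & items)}
        kept.add(i)
        return kept

    sel, best = set(), set()
    for _ in range(steps):
        if rng.random() < p_noise:      # noise: insert a random bid
            i = rng.randrange(len(bids))
        else:                           # greedy: best revenue after insertion
            i = max(range(len(bids)),
                    key=lambda j: revenue(insert(sel, j)))
        sel = insert(sel, i)
        if revenue(sel) > revenue(best):
            best = sel
    return best, revenue(best)

# Four bids over items {a, b, c}; the optimum takes bids 0 and 2 (value 8).
bids = [({"a", "b"}, 5), ({"b", "c"}, 4), ({"c"}, 3), ({"a"}, 2)]
winners, value = sls_wdp(bids, steps=50)
```

The eviction inside `insert` keeps every visited allocation feasible (no item sold twice), so the search only ever compares feasible revenues, while the noise step lets it escape allocations where no single insertion improves revenue.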

  • a memetic algorithm for the optimal winner determination problem
    Soft Computing, 2009
    Co-Authors: Dalila Boughaci, Belaid Benhamou, Habiba Drias
    Abstract:

    In this paper, we propose a memetic algorithm for the optimal winner determination problem in combinatorial auctions. First, we investigate a new selection strategy based on both fitness and diversity to choose the individuals that participate in the reproduction phase of the memetic algorithm. The resulting algorithm is enhanced by a stochastic local search (SLS) component combined with a specific crossover operator. This operator is used to identify promising search regions, while the stochastic local search performs an intensified search for solutions around these regions. Experiments on various realistic instances of the considered problem are performed to show and compare the effectiveness of our approach.