Optimization Algorithms

The Experts below were selected from a list of 360 Experts worldwide, as ranked by the ideXlab platform.

Nathan Wiebe - One of the best experts on this subject based on the ideXlab platform.

  • optimizing quantum Optimization Algorithms via faster quantum gradient computation
    Symposium on Discrete Algorithms, 2019
    Co-Authors: Andras Gilyen, Srinivasan Arunachalam, Nathan Wiebe
    Abstract:

    We consider a generic framework of Optimization Algorithms based on gradient descent. We develop a quantum algorithm that computes the gradient of a multi-variate real-valued function f : ℝ^d → ℝ by evaluating it at only a logarithmic number of points in superposition. Our algorithm is an improved version of Jordan's gradient computation algorithm [28], providing an approximation of the gradient ∇f with quadratically better dependence on the evaluation accuracy of f, for an important class of smooth functions. Furthermore, we show that objective functions arising from variational quantum circuits usually satisfy the necessary smoothness conditions, hence our algorithm provides a quadratic improvement in the complexity of computing their gradient. We also show that in a continuous phase-query model, our gradient computation algorithm has optimal query complexity up to poly-logarithmic factors, for a particular class of smooth functions. Moreover, we show that for low-degree multivariate polynomials our algorithm can provide exponential speedups compared to Jordan's algorithm in terms of the dimension d. One of the technical challenges in applying our gradient computation procedure for quantum Optimization problems is the need to convert between a probability oracle (which is common in quantum Optimization procedures) and a phase oracle (which is common in quantum Algorithms) of the objective function f. We provide efficient subroutines to perform this delicate interconversion between the two types of oracles incurring only a logarithmic overhead, which might be of independent interest. Finally, using these tools we improve the runtime of prior approaches for training quantum auto-encoders, variational quantum eigensolvers (VQE), and quantum approximate Optimization Algorithms (QAOA).
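
    To make the phase-readout idea concrete, the following is a minimal classical simulation of a Jordan-style gradient estimate in one dimension. It assumes the function is approximately linear over a small grid, and every name and parameter in it (jordan_gradient_1d, h, n, scale) is illustrative rather than taken from the paper.

        import numpy as np

        def jordan_gradient_1d(f, x0, h=1e-5, n=10, scale=1.0):
            """Classical simulation of the phase readout behind Jordan-style
            gradient estimation (d = 1). Assumes f is roughly linear on the
            grid and that f'(x0) / scale is close to an integer of magnitude
            below 2**(n-1). Illustrative sketch only."""
            N = 2 ** n
            k = np.arange(N)
            # "Phase oracle": one superposed query writes f into the phase of
            # every grid point; classically we simply tabulate the amplitudes.
            amps = np.exp(2j * np.pi * f(x0 + h * k / N) / (h * scale)) / np.sqrt(N)
            # The inverse quantum Fourier transform concentrates amplitude on
            # the frequency nearest f'(x0) / scale; the FFT plays that role here.
            freqs = np.fft.fft(amps)
            k_star = int(np.argmax(np.abs(freqs)))
            signed = k_star if k_star < N // 2 else k_star - N  # signed frequency
            return scale * signed

        # Example: f(x) = 3x + 1 has gradient 3 everywhere.
        print(jordan_gradient_1d(lambda x: 3.0 * x + 1.0, x0=0.0))  # ~ 3.0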

  • optimizing quantum Optimization Algorithms via faster quantum gradient computation
    arXiv: Quantum Physics, 2017
    Co-Authors: Andras Gilyen, Srinivasan Arunachalam, Nathan Wiebe
    Abstract:

    We consider a generic framework of Optimization Algorithms based on gradient descent. We develop a quantum algorithm that computes the gradient of a multi-variate real-valued function $f:\mathbb{R}^d\rightarrow \mathbb{R}$ by evaluating it at only a logarithmic number of points in superposition. Our algorithm is an improved version of Stephen Jordan's gradient computation algorithm, providing an approximation of the gradient $\nabla f$ with quadratically better dependence on the evaluation accuracy of $f$, for an important class of smooth functions. Furthermore, we show that most objective functions arising from quantum Optimization procedures satisfy the necessary smoothness conditions, hence our algorithm provides a quadratic improvement in the complexity of computing their gradient. We also show that in a continuous phase-query model, our gradient computation algorithm has optimal query complexity up to poly-logarithmic factors, for a particular class of smooth functions. Moreover, we show that for low-degree multivariate polynomials our algorithm can provide exponential speedups compared to Jordan's algorithm in terms of the dimension $d$. One of the technical challenges in applying our gradient computation procedure for quantum Optimization problems is the need to convert between a probability oracle (which is common in quantum Optimization procedures) and a phase oracle (which is common in quantum Algorithms) of the objective function $f$. We provide efficient subroutines to perform this delicate interconversion between the two types of oracles incurring only a logarithmic overhead, which might be of independent interest. Finally, using these tools we improve the runtime of prior approaches for training quantum auto-encoders, variational quantum eigensolvers (VQE), and quantum approximate Optimization Algorithms (QAOA).
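
    The "generic framework" referred to above is ordinary gradient descent, with the quantum algorithm only replacing the gradient oracle. A minimal sketch of that outer loop follows; the step size, iteration count, and stopping rule are illustrative assumptions.

        import numpy as np

        def gradient_descent(grad, x0, eta=0.1, steps=100, tol=1e-8):
            """Generic gradient-descent loop. The quantum speedup discussed in
            the paper targets the cost of each grad(x) call, not the loop."""
            x = np.asarray(x0, dtype=float)
            for _ in range(steps):
                g = grad(x)
                if np.linalg.norm(g) < tol:   # stop once the gradient is tiny
                    break
                x = x - eta * g
            return x

        # Example: minimize f(x) = ||x||^2, whose gradient is 2x.
        print(gradient_descent(lambda x: 2 * x, x0=[1.0, -2.0]))  # -> ~[0, 0]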

Majed M Alateeq - One of the best experts on this subject based on the ideXlab platform.

  • a comparative analysis of bio inspired Optimization Algorithms for automated test pattern generation in sequential circuits
    Applied Soft Computing, 2021
    Co-Authors: Majed M Alateeq, Witold Pedrycz
    Abstract:

    In recent years, bio-inspired Optimization Algorithms have demonstrated the ability to produce optimal solutions to numerous complex computational problems in science and engineering. In this work, a comparative analysis of bio-inspired Algorithms is presented to understand and quantify how well the Algorithms guide the search process towards better solutions over all feasible solutions. Three evolutionary Algorithms, namely the Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Differential Evolution (DE), are implemented to generate an optimized test sequence set for digital sequential circuits and to investigate how they realize exploration and exploitation in their search spaces. The merit of using a comparative method to analyze and optimize search spaces is that it leads to a reliable and quantifiable conclusion about the relative performance of each algorithm. An improvement in the quality of solutions was achieved, particularly in terms of testing time, number of test vectors, and fault coverage of the tested sequential circuits, in comparison with other algorithmic test generators presented in the literature. The study shows how to effectively reduce the search space without negatively affecting the results and how to guide the search process over a large space. In addition, the experiments highlight the limitations of each Optimization algorithm and offer some constructive methods of improvement. Moreover, several recommendations and guidelines regarding the use of Optimization Algorithms as test pattern generators, to improve their performance and increase their efficiency, are presented. Finally, we emphasize the relevance of bio-inspired Algorithms in solving complex computational problems, data manipulation, and Optimization objectives.
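
    For illustration, a minimal differential evolution (DE/rand/1/bin) minimizer over a continuous search space is sketched below; the population size, F, and CR values are conventional defaults, not the settings used in the paper.

        import numpy as np

        def differential_evolution(fitness, bounds, pop_size=20, F=0.8, CR=0.9,
                                   generations=200, seed=0):
            """Minimal DE/rand/1/bin minimizer over box-constrained reals."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(bounds, dtype=float).T
            d = lo.size
            pop = rng.uniform(lo, hi, size=(pop_size, d))
            fit = np.array([fitness(x) for x in pop])
            for _ in range(generations):
                for i in range(pop_size):
                    # Mutation: combine three distinct random individuals.
                    idx = rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)
                    a, b, c = pop[idx]
                    mutant = np.clip(a + F * (b - c), lo, hi)
                    # Binomial crossover: mix mutant and target coordinates.
                    cross = rng.random(d) < CR
                    cross[rng.integers(d)] = True    # guarantee one mutant gene
                    trial = np.where(cross, mutant, pop[i])
                    # Greedy selection: keep the trial only if it is no worse.
                    f_trial = fitness(trial)
                    if f_trial <= fit[i]:
                        pop[i], fit[i] = trial, f_trial
            return pop[np.argmin(fit)], fit.min()

        # Example: sphere function in 5 dimensions.
        best_x, best_f = differential_evolution(lambda x: float(np.sum(x * x)),
                                                bounds=[(-5, 5)] * 5)
        print(best_x, best_f)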

  • analysis of Optimization Algorithms in automated test pattern generation for sequential circuits
    Systems Man and Cybernetics, 2017
    Co-Authors: Majed M Alateeq, Witold Pedrycz
    Abstract:

    In automated test pattern generation (ATPG), test patterns are automatically generated and tested against all modeled faults. In this work, three Optimization Algorithms, namely the genetic algorithm (GA), particle swarm Optimization (PSO) and differential evolution (DE), were studied for the purpose of generating optimized test sequence sets. Furthermore, this paper investigated the broad use of evolutionary Algorithms and swarm intelligence in automated test pattern generation to expand the analysis of the subject. The experimental results demonstrated improvements in terms of testing time, number of test vectors, and fault coverage compared with previous Optimization-based test generators. In addition, the experiments highlight the weaknesses of each Optimization algorithm in test pattern generation (TPG) and offer some constructive methods of improvement. We present several recommendations and guidelines regarding the use of Optimization Algorithms as test pattern generators to improve their performance and increase their efficiency.
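
    A bare-bones GA-based test-pattern generator could be structured as in the sketch below, where a chromosome is a sequence of binary input vectors and fault_coverage stands in for a sequential fault simulator (a hypothetical callable; the paper's actual simulator and encoding are not shown here).

        import random

        def ga_tpg(fault_coverage, n_inputs, seq_len=10, pop_size=30,
                   generations=50, p_mut=0.02, seed=0):
            """Minimal GA for test-pattern generation: a chromosome is a test
            sequence, i.e. a list of binary input vectors applied in order.
            `fault_coverage(seq) -> float in [0, 1]` is assumed to be supplied
            by a sequential fault simulator (hypothetical here)."""
            rng = random.Random(seed)

            def rand_seq():
                return [[rng.randint(0, 1) for _ in range(n_inputs)]
                        for _ in range(seq_len)]

            pop = [rand_seq() for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fault_coverage, reverse=True)   # fittest first
                survivors = pop[:pop_size // 2]
                children = []
                while len(survivors) + len(children) < pop_size:
                    p1, p2 = rng.sample(survivors, 2)
                    cut = rng.randrange(1, seq_len)          # one-point crossover
                    child = [row[:] for row in p1[:cut] + p2[cut:]]
                    for row in child:                        # bit-flip mutation
                        for i in range(n_inputs):
                            if rng.random() < p_mut:
                                row[i] ^= 1
                    children.append(child)
                pop = survivors + children
            return max(pop, key=fault_coverage)

        # Toy stand-in for a fault simulator: rewards sequences with many 1s.
        toy_cov = lambda seq: sum(sum(v) for v in seq) / (len(seq) * len(seq[0]))
        print(ga_tpg(toy_cov, n_inputs=4)[:2])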

Peter Korosec - One of the best experts on this subject based on the ideXlab platform.

  • dsctool a web service based framework for statistical comparison of stochastic Optimization Algorithms
    Applied Soft Computing, 2020
    Co-Authors: Tome Eftimov, Gasper Petelin, Peter Korosec
    Abstract:

    DSCTool is a statistical tool for comparing the performance of stochastic Optimization Algorithms on a single benchmark function (i.e., single-problem analysis) or on a set of benchmark functions (i.e., multiple-problem analysis). DSCTool implements a recently proposed approach called Deep Statistical Comparison (DSC), together with its variants. DSC ranks Optimization Algorithms by comparing the distributions of the solutions obtained for a problem, instead of using a simple descriptive statistic such as the mean or the median. The rankings obtained for an individual problem give the relations between the performances of the applied Algorithms. To compare Optimization Algorithms in the multiple-problem scenario, an appropriate statistical test must be applied to the rankings obtained for a set of problems. The main advantage of DSCTool is its REST web services, which mean that all of its functionalities can be accessed from any programming language. In this paper, we present DSCTool in detail, with examples of its usage.
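
    As a rough sketch of the single-problem DSC idea (not of DSCTool's actual REST API), the snippet below ranks algorithms by their whole solution-value distributions, letting algorithms whose distributions a two-sample test cannot distinguish share an averaged rank. The use of the Kolmogorov-Smirnov test and the 0.05 threshold are simplifying assumptions.

        import numpy as np
        from scipy.stats import ks_2samp, rankdata

        def dsc_like_ranks(results, alpha=0.05):
            """Simplified DSC-style ranking on a single problem.
            results: dict mapping algorithm name -> 1-D array of best-found
            values over independent runs (lower is better). Algorithms whose
            value distributions the KS test cannot distinguish share an
            averaged rank; otherwise the median decides the order."""
            names = list(results)
            base = rankdata([np.median(results[n]) for n in names])
            ranks = base.copy()
            for i in range(len(names)):
                for j in range(i + 1, len(names)):
                    p = ks_2samp(results[names[i]], results[names[j]]).pvalue
                    if p >= alpha:              # statistically indistinguishable
                        ranks[i] = ranks[j] = (base[i] + base[j]) / 2
            return dict(zip(names, ranks))

        rng = np.random.default_rng(0)
        print(dsc_like_ranks({
            "DE":  rng.normal(0.00, 0.1, 30),
            "PSO": rng.normal(0.02, 0.1, 30),   # close to DE: likely tied
            "GA":  rng.normal(1.00, 0.1, 30),   # clearly worse: own rank
        }))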

  • understanding exploration and exploitation powers of meta heuristic stochastic Optimization Algorithms through statistical analysis
    Genetic and Evolutionary Computation Conference, 2019
    Co-Authors: Tome Eftimov, Peter Korosec
    Abstract:

    Understanding the exploration and exploitation powers of meta-heuristic stochastic Optimization Algorithms is very important for algorithm developers. For this reason, we have recently proposed an approach, also presented in this paper, for making a statistical comparison of meta-heuristic stochastic Optimization Algorithms according to the distribution of the solutions in the search space. Its main contribution is that it supports identifying the exploration and exploitation powers of the compared Algorithms. This is especially important when dealing with multimodal search spaces, which consist of many local optima with similar values, and with large-scale continuous Optimization problems, where it is hard to understand the reasons for differences in performance. Experimental results show that the recently proposed approach is very promising.
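
    One simple proxy for exploration versus exploitation (an illustration of the idea only, not the statistic used in the paper) is the spread of an algorithm's final solutions in the search space: a tight cluster suggests exploitation of a single basin, while a wide spread suggests exploration of many.

        import numpy as np

        def mean_pairwise_distance(X):
            """Average Euclidean distance between all pairs of solutions.
            X: (runs, dim) array of best-found positions. Small values suggest
            exploitation of a single region; large values suggest exploration."""
            diffs = X[:, None, :] - X[None, :, :]
            dists = np.sqrt((diffs ** 2).sum(-1))
            n = len(X)
            return dists.sum() / (n * (n - 1))   # exclude the zero diagonal

        rng = np.random.default_rng(3)
        exploiter = rng.normal(0.0, 0.05, size=(30, 10))    # one tight cluster
        explorer = np.where(rng.random((30, 10)) < 0.5, -1, 1) + \
                   rng.normal(0.0, 0.05, size=(30, 10))     # many distant basins
        print(mean_pairwise_distance(exploiter))    # small
        print(mean_pairwise_distance(explorer))     # large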

  • a novel statistical approach for comparing meta heuristic stochastic Optimization Algorithms according to the distribution of solutions in the search space
    Information Sciences, 2019
    Co-Authors: Tome Eftimov, Peter Korosec
    Abstract:

    In this paper a novel statistical approach, known as extended Deep Statistical Comparison, is introduced for comparing meta-heuristic stochastic Optimization Algorithms according to the distribution of the solutions in the search space. This approach extends the recently proposed Deep Statistical Comparison approach, which compares meta-heuristic stochastic Optimization Algorithms according to the obtained solution values. Its main contribution is that the Algorithms are compared not only according to the obtained solution values, but also according to the distribution of the obtained solutions in the search space. The information it provides can additionally help to identify the exploitation and exploration powers of the compared Algorithms. This is important when dealing with a multimodal search space, where there are many local optima with similar values. The benchmark results show that the proposed approach gives promising results and can be used for a statistical comparison of meta-heuristic stochastic Optimization Algorithms according to both solution values and their distribution in the search space.
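
    In the spirit of the extended approach, the sketch below tests whether two algorithms' solutions land in different regions of the search space. The per-dimension KS test with a Bonferroni correction is a simplifying assumption, not the paper's exact statistic.

        import numpy as np
        from scipy.stats import ks_2samp

        def positions_differ(X, Y, alpha=0.05):
            """Compare the distributions of final solutions of two algorithms
            in the search space. X, Y: (runs, dim) arrays of best-found
            positions. Tests each coordinate with a two-sample KS test and
            applies a Bonferroni correction; True if any dimension differs."""
            d = X.shape[1]
            pvals = [ks_2samp(X[:, k], Y[:, k]).pvalue for k in range(d)]
            return min(pvals) < alpha / d

        # Two algorithms with equally good values can sit in different basins:
        rng = np.random.default_rng(1)
        A = rng.normal(-1.0, 0.05, size=(30, 2))   # clusters around one optimum
        B = rng.normal(+1.0, 0.05, size=(30, 2))   # clusters around another
        print(positions_differ(A, B))              # True: different regions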

  • a novel approach to statistical comparison of meta heuristic stochastic Optimization Algorithms using deep statistics
    Information Sciences, 2017
    Co-Authors: Tome Eftimov, Peter Korosec, Barbara Korousic Seljak
    Abstract:

    In this paper a novel approach for making a statistical comparison of meta-heuristic stochastic Optimization Algorithms over multiple single-objective problems is introduced, in which a new ranking scheme is proposed to obtain data for multiple problems. The main contribution of this approach is that the ranking scheme is based on the whole distribution, instead of using only one statistic, such as the average or the median, to describe the distribution. Averages are sensitive to outliers (i.e., the poor runs of stochastic Optimization Algorithms), and consequently medians are sometimes used instead. However, using the common approach with either averages or medians, the results can be affected by the ranking scheme used by some standard statistical tests. This happens when the differences between the averages or medians lie within some ϵ-neighborhood and the Algorithms obtain different ranks even though, given how small the differences between them are, they should be ranked equally. The experimental results obtained on Black-Box Benchmarking 2015 show that our approach gives more robust results than the common approach in cases where the results are affected by outliers or by a misleading ranking scheme.
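
    The ϵ-neighborhood issue is easy to reproduce: two algorithms whose medians differ only by sampling noise receive strictly different ranks under median-based ranking, while a two-sample test (used here as a stand-in for the paper's deep statistic) sees no difference and would rank them equally.

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(2)
        a = rng.normal(0.0, 1.0, 50)    # algorithm A's values over 50 runs
        b = rng.normal(0.0, 1.0, 50)    # statistically identical algorithm B

        # Median-based ranking: a tiny sampling difference forces distinct ranks.
        print("medians:", np.median(a), np.median(b))
        print("median ranking:", "A < B" if np.median(a) < np.median(b) else "B < A")

        # Distribution-based view: the KS test (a stand-in for the deep
        # statistic) finds no significant difference, so the two algorithms
        # should share a rank.
        print("KS p-value:", ks_2samp(a, b).pvalue)   # large => ranked equally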
