Covariance Matrix

Nikolaus Hansen - One of the best experts on this subject based on the ideXlab platform.

  • Diagonal acceleration for Covariance Matrix adaptation evolution strategies
    2020
    Co-Authors: Youhei Akimoto, Nikolaus Hansen
    Abstract:

    We introduce an acceleration for Covariance Matrix adaptation evolution strategies (CMA-ES) by means of adaptive diagonal decoding (dd-CMA). This diagonal acceleration endows the default CMA-ES with ...
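
    As a rough illustration of the decoding idea (a sketch under assumptions, since the abstract above is truncated): in dd-CMA the search distribution can be thought of as sigma^2 * D * C * D, where D is a positive diagonal matrix that can be adapted at a higher rate than the dense matrix C. The function and variable names below are illustrative, not the paper's, and the full dd-CMA update rules for D and C are not reproduced.

```python
import numpy as np

# Minimal sketch of sampling under diagonal decoding: the covariance is
# represented as sigma^2 * D * C * D, with D a positive diagonal matrix
# adapted faster than C. Names here are assumptions for illustration.
def sample_dd(m, sigma, d_diag, sqrt_C, rng):
    z = rng.standard_normal(len(m))           # z ~ N(0, I)
    return m + sigma * d_diag * (sqrt_C @ z)  # decode coordinate-wise via D
```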

  • Efficient Covariance Matrix update for variable metric evolution strategies
    2009
    Co-Authors: Thorsten Suttorp, Nikolaus Hansen, Christian Igel
    Abstract:

    Randomized direct search algorithms for continuous domains, such as evolution strategies, are basic tools in machine learning. They are especially needed when the gradient of an objective function (e.g., loss, energy, or reward function) cannot be computed or estimated efficiently. Application areas include supervised and reinforcement learning as well as model selection. These randomized search strategies often rely on normally distributed additive variations of candidate solutions. In order to search efficiently in non-separable and ill-conditioned landscapes, the Covariance Matrix of the normal distribution must be adapted, amounting to a variable metric method. Consequently, Covariance Matrix adaptation (CMA) is considered state-of-the-art in evolution strategies. In order to sample the normal distribution, the adapted Covariance Matrix needs to be decomposed, requiring in general Θ(n³) operations, where n is the search space dimension. We propose a new update mechanism which can replace a rank-one Covariance Matrix update and the computationally expensive decomposition of the Covariance Matrix. The newly developed update rule reduces the computational complexity of the rank-one Covariance Matrix adaptation to Θ(n²) without resorting to outdated distributions. We derive new versions of the elitist Covariance Matrix adaptation evolution strategy (CMA-ES) and the multi-objective CMA-ES. These algorithms are equivalent to the original procedures except that the update step for the variable metric distribution scales better in the problem dimension. We also introduce a simplified variant of the non-elitist CMA-ES with the incremental Covariance Matrix update and investigate its performance. Apart from the reduced time complexity of the distribution update, the algebraic computations involved in all new algorithms are simpler compared to the original versions. The new update rule improves the performance of the CMA-ES for large-scale machine learning problems in which the objective function can be evaluated quickly.
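
    The core trick can be stated compactly: if C = A·Aᵀ and the update has the form C' = α·C + β·v·vᵀ with v = A·z, then a factor A' with C' = A'·A'ᵀ can be written down in closed form, so the distribution can always be sampled as m + σ·A·z without ever decomposing C. Below is a minimal sketch of that factored rank-one update; the function and parameter names are mine, not the paper's.

```python
import numpy as np

def rank_one_cholesky_update(A, z, alpha, beta):
    """Return A' with A' A'^T = alpha * A A^T + beta * (A z)(A z)^T.

    A     : current factor of the Covariance Matrix C (C = A A^T)
    z     : standard-normal vector that generated the step v = A z
    alpha : decay of the old Covariance Matrix, e.g. 1 - c_cov (assumed name)
    beta  : weight of the rank-one term, e.g. c_cov (assumed name)
    """
    v = A @ z                         # step in solution space, v = A z
    z2 = float(z @ z)                 # ||z||^2
    s = np.sqrt(1.0 + (beta / alpha) * z2)
    coef = (np.sqrt(alpha) / z2) * (s - 1.0)
    return np.sqrt(alpha) * A + coef * np.outer(v, z)

# sampling then needs no decomposition: x = m + sigma * (A @ np.random.randn(n))
```

    Only matrix-vector and outer products appear, so one update costs Θ(n²) rather than the Θ(n³) of a fresh decomposition.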

  • A computationally efficient Covariance Matrix update and a (1+1)-CMA for evolution strategies
    2006
    Co-Authors: Christian Igel, Thorsten Suttorp, Nikolaus Hansen
    Abstract:

    First, the Covariance Matrix adaptation (CMA) with rank-one update is introduced into the (1+1)-evolution strategy. An improved implementation of the 1/5-th success rule is proposed for step size adaptation, which replaces cumulative path length control. Second, an incremental Cholesky update for the Covariance Matrix is developed, replacing the computationally demanding and numerically involved decomposition of the Covariance Matrix. The Cholesky update can replace the decomposition only for the update without evolution path, and it reduces the computational effort from O(n³) to O(n²). The resulting (1+1)-Cholesky-CMA-ES is an elegant algorithm and perhaps the simplest evolution strategy with Covariance Matrix and step size adaptation. Simulations compare the introduced algorithms to previously published CMA versions.
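
    To make the step-size side concrete, here is a minimal sketch of a (1+1)-ES with a multiplicative 1/5-th success rule: isotropic mutations only, without the Cholesky-CMA part and without the paper's smoothed success probability. Names and the damping constant are illustrative assumptions.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=1.0, iters=2000):
    """Minimal (1+1)-ES with a 1/5-th success rule (illustrative names).

    Uses the instantaneous success indicator rather than the smoothed
    success probability of the paper; the damping d is a common heuristic.
    """
    x = np.asarray(x0, dtype=float)
    n = len(x)
    d = 1.0 + n / 2.0                        # step-size damping (assumed)
    fx = f(x)
    for _ in range(iters):
        y = x + sigma * np.random.randn(n)   # isotropic Gaussian mutation
        fy = f(y)
        success = fy <= fx
        if success:
            x, fx = y, fy
        # multiplicative rule with fixed point at success rate 1/5
        sigma *= np.exp(((1.0 if success else 0.0) - 0.2) / (0.8 * d))
    return x, fx

# usage: minimize the sphere function in 10 dimensions
# xbest, fbest = one_plus_one_es(lambda v: float(v @ v), np.ones(10))
```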

Bernhard Sendhoff - One of the best experts on this subject based on the ideXlab platform.

  • Simplify your Covariance Matrix adaptation evolution strategy
    2017
    Co-Authors: Hans-Georg Beyer, Bernhard Sendhoff
    Abstract:

    The standard Covariance Matrix adaptation evolution strategy (CMA-ES) comprises two evolution paths, one for the learning of the mutation strength and one for the rank-1 update of the Covariance Matrix. In this paper, it is shown that one can approximately transform this algorithm in such a manner that one of the evolution paths and the Covariance Matrix itself disappear. That is, the Covariance update and the Covariance Matrix square root operations are no longer needed in this novel so-called Matrix adaptation (MA) ES. The MA-ES performs nearly as well as the original CMA-ES. This is shown by empirical investigations considering the evolution dynamics and the empirical expected runtime on a set of standard test functions. Furthermore, it is shown that the MA-ES can be used as a search engine in a bi-population (BiPop) ES. The resulting BiPop-MA-ES is benchmarked using the BBOB COmparing Continuous Optimizers (COCO) framework and compared with the performance of the CMA-ES-v3.61 production code. It is shown that this new BiPop-MA-ES, while algorithmically simpler, performs nearly as well as the CMA-ES-v3.61 code.
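
    A compact sketch of the MA-ES loop as described above: only a transformation matrix M and one evolution path s are maintained, so no Covariance Matrix update and no matrix square root appear. The constants follow common CMA-ES-style defaults and the step-size damping is simplified; both are assumptions rather than the paper's exact tuning.

```python
import numpy as np

def ma_es(f, x0, sigma=1.0, iters=300):
    """Sketch of the MA-ES: adapts a transformation matrix M directly.

    Constants follow common CMA-ES-style defaults (an assumption)."""
    m = np.asarray(x0, dtype=float)
    n = len(m)
    lam = 4 + int(3 * np.log(n))              # population size
    mu = lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                              # positive recombination weights
    mu_w = 1.0 / (w @ w)                      # variance-effective selection mass
    c_s = (mu_w + 2.0) / (n + mu_w + 5.0)     # path learning rate
    c_1 = 2.0 / ((n + 1.3) ** 2 + mu_w)       # rank-one learning rate
    c_w = min(1.0 - c_1, 2.0 * (mu_w - 2.0 + 1.0 / mu_w) / ((n + 2.0) ** 2 + mu_w))
    d_s = 1.0 + c_s                           # simplified damping (assumed)
    chi_n = np.sqrt(n) * (1.0 - 1.0 / (4.0 * n) + 1.0 / (21.0 * n ** 2))
    M, s, I = np.eye(n), np.zeros(n), np.eye(n)
    for _ in range(iters):
        Z = np.random.randn(lam, n)           # z_i ~ N(0, I)
        D = Z @ M.T                           # d_i = M z_i
        X = m + sigma * D                     # offspring
        idx = np.argsort([f(x) for x in X])[:mu]
        zw = w @ Z[idx]                       # weighted mean in z-space
        m = m + sigma * (w @ D[idx])          # move the mean
        s = (1.0 - c_s) * s + np.sqrt(mu_w * c_s * (2.0 - c_s)) * zw
        zz = sum(wi * np.outer(zi, zi) for wi, zi in zip(w, Z[idx]))
        M = M @ (I + 0.5 * c_1 * (np.outer(s, s) - I) + 0.5 * c_w * (zz - I))
        sigma *= np.exp((c_s / d_s) * (np.linalg.norm(s) / chi_n - 1.0))
    return m
```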

  • Covariance Matrix adaptation revisited: the CMSA evolution strategy
    2008
    Co-Authors: Hans-Georg Beyer, Bernhard Sendhoff
    Abstract:

    The Covariance Matrix adaptation evolution strategy (CMA-ES) rates among the most successful evolutionary algorithms for continuous parameter optimization. Nevertheless, it suffers from some drawbacks, such as the complexity of the adaptation process and the reliance on a number of intricately constructed strategy-parameter formulae for which little or no theoretical substantiation is available. Furthermore, the CMA-ES does not work well for large population sizes. In this paper, we propose an alternative, simpler adaptation step for the Covariance Matrix, which is closer to "traditional" mutative self-adaptation. We compare the newly proposed algorithm, which we term the CMSA-ES, with the CMA-ES on a number of different test functions and demonstrate its superiority, in particular for large population sizes.
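
    A minimal sketch of the CMSA-ES generation loop: step sizes are mutated per offspring (mutative self-adaptation) and the Covariance Matrix is updated by a plain average of the selected rescaled steps, with no evolution paths. The constants are the paper's recommendations as I recall them, and the population sizes are illustrative; treat both as assumptions.

```python
import numpy as np

def cmsa_es(f, x0, sigma=1.0, iters=200):
    """Sketch of a CMSA-ES generation loop (names are illustrative)."""
    m = np.asarray(x0, dtype=float)
    n = len(m)
    lam, mu = 4 * n, n                          # large population (assumed)
    tau = 1.0 / np.sqrt(2.0 * n)                # sigma self-adaptation rate
    tau_c = 1.0 + n * (n + 1.0) / (2.0 * mu)    # Covariance Matrix time constant
    C = np.eye(n)
    for _ in range(iters):
        A = np.linalg.cholesky(C)               # one decomposition per generation
        sig = sigma * np.exp(tau * np.random.randn(lam))  # per-offspring sigmas
        Z = np.random.randn(lam, n)
        S = Z @ A.T                             # s_l = A z_l ~ N(0, C)
        X = m + sig[:, None] * S                # offspring
        idx = np.argsort([f(x) for x in X])[:mu]
        m = m + np.mean(sig[idx, None] * S[idx], axis=0)  # recombine steps
        sigma = float(np.mean(sig[idx]))        # recombine mutated step sizes
        C = (1.0 - 1.0 / tau_c) * C + (1.0 / tau_c) * (S[idx].T @ S[idx]) / mu
    return m
```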

Sophocles Mavroeidis - One of the best experts on this subject based on the ideXlab platform.

  • A test for Kronecker Product Structure Covariance Matrix
    2021
    Co-Authors: Patrik Guggenberger, Frank Kleibergen, Sophocles Mavroeidis
    Abstract:

    We propose a test for a Covariance Matrix to have Kronecker Product Structure (KPS). KPS implies a reduced-rank restriction on an invertible transformation of the Covariance Matrix, and the new procedure is an adaptation of the Kleibergen-Paap (2006) reduced rank test. The main extension concerns the singularity of the Covariance Matrix estimator involved in the rank test, which complicates the derivation of its limiting distribution. We show this limiting distribution to be χ² with degrees of freedom equal to the number of restrictions tested. Re-examining sixteen highly cited papers that conduct IV regressions, we find that KPS is not rejected in 24 of 30 specifications for moderate sample sizes at the 5% nominal size.
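
    The reduced-rank restriction can be made concrete with the Van Loan-Pitsianis rearrangement: if Σ = A ⊗ B, then stacking the vec of each q×q block of Σ as a row yields (essentially) vec(A)·vec(B)ᵀ, a rank-one matrix. The sketch below only illustrates this algebraic fact; the paper's actual statistic is an adapted Kleibergen-Paap rank test that also copes with the singular limiting Covariance Matrix, which this sketch does not implement.

```python
import numpy as np

def rearrange(Sigma, p, q):
    """Van Loan-Pitsianis rearrangement of a pq-by-pq matrix.

    If Sigma = kron(A, B) with A p-by-p and B q-by-q, each q-by-q block of
    Sigma equals A[i, j] * B, so the returned p^2-by-q^2 matrix has rank one.
    """
    R = np.empty((p * p, q * q))
    for i in range(p):
        for j in range(p):
            block = Sigma[i * q:(i + 1) * q, j * q:(j + 1) * q]
            R[i * p + j] = block.reshape(-1, order="F")  # vec of block (i, j)
    return R

# quick check with an exact Kronecker product
p, q = 3, 2
A = np.random.randn(p, p); A = A @ A.T
B = np.random.randn(q, q); B = B @ B.T
sv = np.linalg.svd(rearrange(np.kron(A, B), p, q), compute_uv=False)
print(sv[1] / sv[0])   # ~ 1e-16: rank one up to floating-point error
```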

  • A test for Kronecker Product Structure Covariance Matrix
    2020
    Co-Authors: Patrik Guggenberger, Frank Kleibergen, Sophocles Mavroeidis
    Abstract:

    We propose a test of whether a Covariance Matrix has Kronecker Product Structure (KPS). KPS implies a reduced-rank restriction on an invertible transformation of the Covariance Matrix, and the new procedure is an adaptation of the Kleibergen and Paap (2006) reduced rank test. KPS is a generalization of homoscedasticity and allows for more powerful subvector inference in linear Instrumental Variables (IV) regressions than can be achieved under general Covariance matrices. Re-examining sixteen highly cited papers that conduct IV regressions, we find that KPS is not rejected in 24 of 30 specifications for moderate sample sizes at the 5% nominal size.

Emmanuel Flachaire - One of the best experts on this subject based on the ideXlab platform.

  • Bootstrapping heteroskedasticity-consistent Covariance Matrix estimator
    2002
    Co-Authors: Emmanuel Flachaire
    Abstract:

    Recent results of Cribari-Neto and Zarkos (1999) show that bootstrap methods can be used successfully to compute heteroskedasticity-robust Covariance Matrix estimates. In this paper, we show that the wild bootstrap estimator can be calculated directly, without simulations, because it reduces to a more traditional estimator. Their experimental results seem to conflict with those of MacKinnon and White (1985); we reconcile the two sets of results.
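
    The "calculated directly, without simulations" point can be illustrated as follows: with Rademacher weights, the wild-bootstrap covariance of the OLS estimator has the closed form (XᵀX)⁻¹ Xᵀ diag(û²) X (XᵀX)⁻¹, an HC0-type sandwich estimator, so no resampling loop is needed. Below is a hedged sketch comparing the two computations; all function and variable names are mine.

```python
import numpy as np

def wild_bootstrap_cov(X, y, B=20000, rng=None):
    """Wild-bootstrap covariance of the OLS estimator, by simulation.

    Residuals are resampled as u_i * eps_i with Rademacher eps_i, so the
    bootstrap covariance should match the closed-form sandwich estimator
    below up to simulation noise."""
    rng = rng or np.random.default_rng(0)
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta                           # OLS residuals
    draws = np.empty((B, k))
    for b in range(B):
        eps = rng.choice([-1.0, 1.0], size=n)  # Rademacher weights
        y_star = X @ beta + u * eps
        draws[b] = XtX_inv @ X.T @ y_star
    return np.cov(draws, rowvar=False)

def hc0_cov(X, y):
    """Closed form: (X'X)^-1 X' diag(u^2) X (X'X)^-1 (HC0-type)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    u = y - X @ (XtX_inv @ X.T @ y)
    meat = X.T @ (X * (u ** 2)[:, None])
    return XtX_inv @ meat @ XtX_inv

# the two agree up to simulation noise:
# rng = np.random.default_rng(1)
# X = rng.normal(size=(200, 3)); y = rng.normal(size=200)
# print(np.max(np.abs(wild_bootstrap_cov(X, y) - hc0_cov(X, y))))
```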