Variance Optimization

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 213 Experts worldwide ranked by the ideXlab platform.

Fabio Caccioli - One of the best experts on this subject based on the ideXlab platform.

  • analytic solution to variance optimization with no short positions
    Journal of Statistical Mechanics: Theory and Experiment, 2017
    Co-Authors: Imre Kondor, Gabor Papp, Fabio Caccioli
    Abstract:

    We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric ℓ1 regularizer, setting some of the portfolio weights to zero and keeping the out-of-sample estimator for the variance bounded, avoiding the divergence present in the non-regularized case. However, the ban on short positions does not prevent the phase transition in the optimization problem; it only shifts the critical point from its non-regularized value of r = 1 to r = 2 and changes its character: at r = 2 the out-of-sample estimator for the portfolio variance stays finite and the estimated in-sample variance vanishes, while another critical parameter, related to the estimated portfolio weights and the condensate density, diverges at the critical value r = 2. We also calculate the distribution of the optimal weights over the random samples and show that the regularizer preferentially removes the assets with large variances, in accord with one's natural expectation.
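
The constrained problem the abstract describes can be caricatured in a small numerical sketch (pure Python, toy sizes; the volatilities, sample size, and step size below are invented for illustration and have nothing to do with the paper's replica calculation): minimum-variance weights over the simplex {w ≥ 0, Σw = 1}, found by projected gradient descent on a noisy sample covariance matrix.

```python
import random

def sample_cov(X):
    """Sample covariance matrix of a T x N list-of-lists of returns."""
    t, n = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / t for j in range(n)]
    C = [[0.0] * n for _ in range(n)]
    for row in X:
        d = [row[j] - mu[j] for j in range(n)]
        for i in range(n):
            for j in range(n):
                C[i][j] += d[i] * d[j] / t
    return C

def project_to_simplex(v):
    """Euclidean projection onto {w >= 0, sum(w) = 1} (sort-based algorithm)."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - 1.0) / i
        if ui - t > 0.0:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

def long_only_min_variance(C, steps=4000, lr=0.005):
    """Projected gradient descent for min w'Cw over the simplex."""
    n = len(C)
    var = lambda w: sum(w[i] * C[i][j] * w[j] for i in range(n) for j in range(n))
    w = [1.0 / n] * n                 # start from the equal-weight portfolio
    best_w, best_v = w[:], var(w)
    for _ in range(steps):
        grad = [2.0 * sum(C[i][j] * w[j] for j in range(n)) for i in range(n)]
        w = project_to_simplex([w[i] - lr * grad[i] for i in range(n)])
        v = var(w)
        if v < best_v:
            best_w, best_v = w[:], v
    return best_w, best_v

rng = random.Random(42)
sigmas = [0.5, 0.7, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5]   # heterogeneous volatilities
N, T = len(sigmas), 6                                # N/T > 1: very noisy estimates
X = [[rng.gauss(0.0, s) for s in sigmas] for _ in range(T)]
w, v = long_only_min_variance(sample_cov(X))
# The simplex projection typically pins several weights to exactly zero,
# which is the asymmetric-l1-like effect of the no-short constraint.
```

The simplex projection here is what makes the analogy with an ℓ1 regularizer visible: it produces exact zeros rather than merely small weights.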

  • analytic solution to variance optimization with no short selling
    Social Science Research Network, 2017
    Co-Authors: Imre Kondor, Gabor Papp, Fabio Caccioli
    Abstract:

    A large portfolio of independent, but not identically distributed, returns is optimized under the variance risk measure with a ban on short positions. To the best of our knowledge, this is the first time such a constrained optimization has been carried out analytically, which is made possible by the application of methods borrowed from the theory of disordered systems. The no-short-selling constraint acts as an asymmetric ℓ1 regularizer, setting some of the portfolio weights to zero and keeping the estimation error bounded, avoiding the divergence present in the non-regularized case. However, the susceptibility, i.e., the sensitivity of the optimal portfolio weights to changes in the returns, diverges at a critical value 2 of the ratio N/T, where N is the number of different assets in the portfolio and T is the sample size (the length of the available time series). This means that a ban on short positions does not prevent the phase transition in the optimization problem; it merely shifts the critical point from its non-regularized value of N/T = 1 to 2. It is shown that this critical value is universal, independent of the distribution of the returns. Beyond this critical value, the variance of the portfolio identically vanishes for any portfolio weight vector constructed as a linear combination of the eigenvectors from the null space of the covariance matrix, but these linear combinations are not legitimate solutions of the optimization problem, as they are infinitely sensitive to any change in the input parameters; in particular, they will wildly fluctuate from sample to sample. With some narrative license we may say that the regularizer takes care of the longitudinal fluctuations of the optimal weight vector, but does not eliminate the divergent transverse fluctuations.
We also calculate the distribution of the optimal weights over the random samples and show that the regularizer preferentially removes the assets with large variances, in accord with one's natural expectation. The analytic calculations are supported by numerical simulations. The analytic and numerical results are in perfect agreement for N/T < 2, yet standard numerical solvers keep returning apparently meaningful solutions even for N/T > 2, where we know from exact linear algebraic considerations that no meaningful solution exists. The resolution of this paradox is that regularizers built into these solvers stabilize the otherwise freely fluctuating, meaningless solutions. This should serve as a warning against the use of ready-made solver programs in empirical work without a good understanding of the theoretical structure of the problem at hand and the details of the tool used to solve it, where the fundamental instability of the numerics may be masked by stabilization due to the solver rather than the data.
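
The "exact linear algebraic consideration" invoked above can be seen in a tiny toy example (the return numbers are invented): with T = 2 observations of N = 3 assets, the sample covariance matrix is rank-deficient, and any weight vector orthogonal to the difference of the two return vectors lies in its null space, giving exactly zero in-sample variance.

```python
# T = 2 days of returns for N = 3 assets (illustrative numbers only)
r1 = [0.02, -0.01, 0.03]
r2 = [-0.01, 0.02, 0.01]

def sample_variance(p):
    m = sum(p) / len(p)
    return sum((x - m) ** 2 for x in p) / len(p)

# The 2-sample covariance matrix annihilates every vector orthogonal to
# d = r1 - r2; pick one such vector and rescale it so the weights sum to one.
d = [a - b for a, b in zip(r1, r2)]
w = [d[1], -d[0], 0.0]
s = sum(w)
w = [x / s for x in w]

portfolio = [sum(wi * ri for wi, ri in zip(w, day)) for day in (r1, r2)]
# Both daily portfolio returns coincide, so the in-sample variance is zero.
# Generically such null-space portfolios involve short positions and change
# wildly when the inputs are perturbed slightly; these particular numbers
# just happen to give nonnegative weights.
```

This is the degeneracy that built-in solver regularization silently papers over for N/T > 2: the zero-variance solution exists as linear algebra but is infinitely sensitive to the data.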

  • analytic solution to variance optimization with no short selling
    arXiv: Portfolio Management, 2016
    Co-Authors: Imre Kondor, Gabor Papp, Fabio Caccioli
    Abstract:

    A large portfolio of independent returns is optimized under the variance risk measure with a ban on short positions. The no-short-selling constraint acts as an asymmetric $\ell_1$ regularizer, setting some of the portfolio weights to zero and keeping the out-of-sample estimator for the variance bounded, avoiding the divergence present in the non-regularized case. However, the susceptibility, i.e., the sensitivity of the optimal portfolio weights to changes in the returns, diverges at a critical value $r=2$, where $r=N/T$ is the ratio of the number $N$ of assets to the length $T$ of the time series. This means that a ban on short positions does not prevent the phase transition in the optimization problem; it merely shifts the critical point from its non-regularized value of $r=1$ to $2$. At $r=2$ the out-of-sample estimator for the portfolio variance stays finite and the estimated in-sample variance vanishes. We have performed numerical simulations to support the analytic results and found perfect agreement for $N/T<2$. Numerical experiments on finite-size samples of symmetrically distributed returns show that above this critical point the probability of finding solutions with zero in-sample variance increases rapidly with increasing $N$, becoming one in the large-$N$ limit. However, these are not legitimate solutions of the optimization problem, as they are infinitely sensitive to any change in the input parameters; in particular, they will wildly fluctuate from sample to sample. We also calculate the distribution of the optimal weights over the random samples and show that the regularizer preferentially removes the assets with large variances, in accord with one's natural expectation.

Imre Kondor - One of the best experts on this subject based on the ideXlab platform.

  • analytic solution to variance optimization with no short positions
    Journal of Statistical Mechanics: Theory and Experiment, 2017
    Co-Authors: Imre Kondor, Gabor Papp, Fabio Caccioli
    Abstract:

    We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric ℓ1 regularizer, setting some of the portfolio weights to zero and keeping the out-of-sample estimator for the variance bounded, avoiding the divergence present in the non-regularized case. However, the ban on short positions does not prevent the phase transition in the optimization problem; it only shifts the critical point from its non-regularized value of r = 1 to r = 2 and changes its character: at r = 2 the out-of-sample estimator for the portfolio variance stays finite and the estimated in-sample variance vanishes, while another critical parameter, related to the estimated portfolio weights and the condensate density, diverges at the critical value r = 2. We also calculate the distribution of the optimal weights over the random samples and show that the regularizer preferentially removes the assets with large variances, in accord with one's natural expectation.

  • analytic solution to variance optimization with no short selling
    Social Science Research Network, 2017
    Co-Authors: Imre Kondor, Gabor Papp, Fabio Caccioli
    Abstract:

    A large portfolio of independent, but not identically distributed, returns is optimized under the variance risk measure with a ban on short positions. To the best of our knowledge, this is the first time such a constrained optimization has been carried out analytically, which is made possible by the application of methods borrowed from the theory of disordered systems. The no-short-selling constraint acts as an asymmetric ℓ1 regularizer, setting some of the portfolio weights to zero and keeping the estimation error bounded, avoiding the divergence present in the non-regularized case. However, the susceptibility, i.e., the sensitivity of the optimal portfolio weights to changes in the returns, diverges at a critical value 2 of the ratio N/T, where N is the number of different assets in the portfolio and T is the sample size (the length of the available time series). This means that a ban on short positions does not prevent the phase transition in the optimization problem; it merely shifts the critical point from its non-regularized value of N/T = 1 to 2. It is shown that this critical value is universal, independent of the distribution of the returns. Beyond this critical value, the variance of the portfolio identically vanishes for any portfolio weight vector constructed as a linear combination of the eigenvectors from the null space of the covariance matrix, but these linear combinations are not legitimate solutions of the optimization problem, as they are infinitely sensitive to any change in the input parameters; in particular, they will wildly fluctuate from sample to sample. With some narrative license we may say that the regularizer takes care of the longitudinal fluctuations of the optimal weight vector, but does not eliminate the divergent transverse fluctuations.
We also calculate the distribution of the optimal weights over the random samples and show that the regularizer preferentially removes the assets with large variances, in accord with one's natural expectation. The analytic calculations are supported by numerical simulations. The analytic and numerical results are in perfect agreement for N/T < 2, yet standard numerical solvers keep returning apparently meaningful solutions even for N/T > 2, where we know from exact linear algebraic considerations that no meaningful solution exists. The resolution of this paradox is that regularizers built into these solvers stabilize the otherwise freely fluctuating, meaningless solutions. This should serve as a warning against the use of ready-made solver programs in empirical work without a good understanding of the theoretical structure of the problem at hand and the details of the tool used to solve it, where the fundamental instability of the numerics may be masked by stabilization due to the solver rather than the data.

  • analytic solution to variance optimization with no short selling
    arXiv: Portfolio Management, 2016
    Co-Authors: Imre Kondor, Gabor Papp, Fabio Caccioli
    Abstract:

    A large portfolio of independent returns is optimized under the variance risk measure with a ban on short positions. The no-short-selling constraint acts as an asymmetric $\ell_1$ regularizer, setting some of the portfolio weights to zero and keeping the out-of-sample estimator for the variance bounded, avoiding the divergence present in the non-regularized case. However, the susceptibility, i.e., the sensitivity of the optimal portfolio weights to changes in the returns, diverges at a critical value $r=2$, where $r=N/T$ is the ratio of the number $N$ of assets to the length $T$ of the time series. This means that a ban on short positions does not prevent the phase transition in the optimization problem; it merely shifts the critical point from its non-regularized value of $r=1$ to $2$. At $r=2$ the out-of-sample estimator for the portfolio variance stays finite and the estimated in-sample variance vanishes. We have performed numerical simulations to support the analytic results and found perfect agreement for $N/T<2$. Numerical experiments on finite-size samples of symmetrically distributed returns show that above this critical point the probability of finding solutions with zero in-sample variance increases rapidly with increasing $N$, becoming one in the large-$N$ limit. However, these are not legitimate solutions of the optimization problem, as they are infinitely sensitive to any change in the input parameters; in particular, they will wildly fluctuate from sample to sample. We also calculate the distribution of the optimal weights over the random samples and show that the regularizer preferentially removes the assets with large variances, in accord with one's natural expectation.

Andrew Lim - One of the best experts on this subject based on the ideXlab platform.

  • robust empirical optimization is almost the same as mean variance optimization
    Operations Research Letters, 2018
    Co-Authors: Junya Gotoh, Michael Jong Kim, Andrew Lim
    Abstract:

    We formulate a distributionally robust optimization problem where the deviation of the alternative distribution is controlled by a ϕ-divergence penalty in the objective, and show that a large class of these problems is essentially equivalent to a mean–variance problem. We also show that while a “small amount of robustness” always reduces the in-sample expected reward, the reduction in the variance, which is a measure of sensitivity to model misspecification, is an order of magnitude larger.
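
The claimed equivalence can be sketched numerically for one member of the ϕ-divergence family, the KL divergence (the reward distribution, sample size, and penalty strength δ below are invented): the KL-penalized worst-case expected reward has the well-known dual form −(1/δ)·log E[exp(−δR)], and its small-δ expansion is mean(R) − (δ/2)·var(R), i.e., a mean–variance objective.

```python
import math
import random
import statistics

rng = random.Random(0)
rewards = [rng.gauss(1.0, 0.5) for _ in range(100_000)]  # toy reward samples

def robust_value(rewards, delta):
    """Dual of the KL-penalized worst-case expected reward."""
    return -(1.0 / delta) * math.log(
        sum(math.exp(-delta * r) for r in rewards) / len(rewards))

def mean_variance_value(rewards, delta):
    """The mean - (delta/2) * variance objective from the expansion."""
    return statistics.fmean(rewards) - 0.5 * delta * statistics.pvariance(rewards)

delta = 0.05
# For small delta the two objectives nearly coincide (for Gaussian rewards the
# log-moment-generating function is exactly quadratic, so any gap here is
# essentially sampling noise).
```

Other ϕ-divergences give different duals, but the paper's point is that the quadratic (variance) term dominates the correction for all of them.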

  • robust empirical optimization is almost the same as mean variance optimization
    Social Science Research Network, 2015
    Co-Authors: Junya Gotoh, Michael Jong Kim, Andrew Lim
    Abstract:

    We formulate a distributionally robust optimization problem where the empirical distribution plays the role of the nominal model, the decision maker optimizes against a worst-case alternative, and the deviation of the alternative distribution from the nominal is controlled by a φ-divergence penalty in the objective. Our main finding is that a large class of robust empirical optimization problems of this form is essentially equivalent to an in-sample mean–variance problem. Intuitively, controlling the variance reduces the sensitivity of a decision’s expected reward to perturbations in the tail of the in-sample reward distribution. This in turn reduces the sensitivity of its out-of-sample performance to perturbations in the nominal model, which is precisely the notion of robustness. We consider two applications, robust versions of the empirical newsvendor and empirical portfolio optimization problems, which we calibrate using resampling methods. Our numerical experiments show that the primary benefit of robust empirical optimization is its ability to produce solutions with low out-of-sample variability in the reward, which is consistent with our main theoretical finding. In the case of the portfolio choice problem, we draw on the insights from our main result to introduce a robust version of cross-validation that is useful in applications where distributions from resampling are sensitive to data variability and model misspecification.
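
A hedged toy version of the newsvendor application mentioned above (price, cost, demand model, and δ are all invented, grid search stands in for a proper solver, and the KL divergence stands in for the paper's general φ-divergence): plain empirical optimization maximizes the sample-mean reward, while the robust version maximizes the KL dual −(1/δ)·log E[exp(−δR)], which behaves like mean − (δ/2)·variance.

```python
import math
import random

price, cost = 5.0, 3.0                      # sell price and unit cost (toy values)
rng = random.Random(1)
demand = [max(0.0, rng.gauss(100.0, 30.0)) for _ in range(2000)]  # demand samples

def reward(q, d):
    """Newsvendor reward: revenue on min(q, d) units minus purchase cost."""
    return price * min(q, d) - cost * q

def empirical_obj(q):
    """Plain empirical optimization: sample-mean reward."""
    return sum(reward(q, d) for d in demand) / len(demand)

def robust_obj(q, delta=0.02):
    """KL-robust objective: always <= the empirical mean (Jensen)."""
    return -(1.0 / delta) * math.log(
        sum(math.exp(-delta * reward(q, d)) for d in demand) / len(demand))

grid = range(50, 151)                        # candidate order quantities
q_emp = max(grid, key=empirical_obj)
q_rob = max(grid, key=robust_obj)
# The robust order quantity trades a little in-sample expected reward for
# lower variability, consistent with the mean-variance interpretation.
```

Resampling-based calibration of δ, as the paper does, would wrap this inner optimization in a cross-validation loop.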

David I. August - One of the best experts on this subject based on the ideXlab platform.

  • compiler optimization space exploration
    Symposium on Code Generation and Optimization, 2003
    Co-Authors: Spyridon Triantafyllis, Manish Vachharajani, Neil Vachharajani, David I. August
    Abstract:

    To meet the demands of modern architectures, optimizing compilers must incorporate an ever larger number of increasingly complex transformation algorithms. Since code transformations may often degrade performance or interfere with subsequent transformations, compilers employ predictive heuristics to guide optimizations by predicting their effects a priori. Unfortunately, the unpredictability of optimization interaction and the irregularity of today's wide-issue machines severely limit the accuracy of these heuristics. As a result, compiler writers may temper high-variance optimizations with overly conservative heuristics or may exclude these optimizations entirely. While this process results in a compiler capable of generating good average code quality across the target benchmark set, it comes at the cost of missed optimization opportunities in individual code segments. To replace predictive heuristics, researchers have proposed compilers which explore many optimization options, selecting the best one a posteriori. Unfortunately, these existing iterative compilation techniques are not practical for reasons of compile time and applicability. In this paper, we present the Optimization-Space Exploration (OSE) compiler organization, the first practical iterative compilation strategy applicable to optimizations in general-purpose compilers. Instead of replacing predictive heuristics, OSE uses the compiler writer's knowledge encoded in the heuristics to select a small number of promising optimization alternatives for a given code segment. Compile time is limited by evaluating only these alternatives for hot code segments using a general compile-time performance estimator. An OSE-enhanced version of Intel's highly tuned, aggressively optimizing production compiler for IA-64 yields a significant performance improvement, more than 20% in some cases, on Itanium for SPEC codes.
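
The OSE organization described above can be caricatured in a few lines (the configurations, the pruning heuristic, and the cycle estimator below are all invented stand-ins, not Intel's): heuristics prune the configuration space to a small promising subset per hot code segment, and a compile-time performance estimator picks the winner a posteriori.

```python
# Each candidate configuration: (name, {optimization: setting})
CONFIGS = [
    ("baseline",      {"unroll": 1, "if_convert": False}),
    ("unroll4",       {"unroll": 4, "if_convert": False}),
    ("unroll4+ifcvt", {"unroll": 4, "if_convert": True}),
    ("ifcvt",         {"unroll": 1, "if_convert": True}),
]

def heuristic_prune(segment, configs, budget=2):
    """Use heuristic knowledge to keep only the most promising configs,
    bounding the compile time spent exploring per segment."""
    def promise(cfg):
        _, opts = cfg
        # invented heuristic: unrolling helps loop-heavy segments,
        # if-conversion helps branch-heavy ones
        score = segment["loop_weight"] * (opts["unroll"] - 1)
        score += segment["branch_weight"] * (1.0 if opts["if_convert"] else 0.0)
        return score
    return sorted(configs, key=promise, reverse=True)[:budget]

def estimate_cycles(segment, cfg):
    """Stand-in for the general compile-time performance estimator."""
    _, opts = cfg
    cycles = segment["base_cycles"]
    cycles /= 1.0 + 0.1 * segment["loop_weight"] * (opts["unroll"] - 1)
    if opts["if_convert"]:
        cycles /= 1.0 + 0.3 * segment["branch_weight"]
    return cycles

def ose_compile(segment):
    """Explore only the pruned candidates; select the best a posteriori."""
    candidates = heuristic_prune(segment, CONFIGS)
    return min(candidates, key=lambda cfg: estimate_cycles(segment, cfg))

hot_loop = {"base_cycles": 1000.0, "loop_weight": 0.9, "branch_weight": 0.1}
best = ose_compile(hot_loop)
```

The key design point survives even in this toy: the heuristics are demoted from oracles that pick one configuration to filters that bound the search, so a mispredicting heuristic costs a candidate slot rather than the final code quality.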

Gabor Papp - One of the best experts on this subject based on the ideXlab platform.

  • analytic solution to variance optimization with no short positions
    Journal of Statistical Mechanics: Theory and Experiment, 2017
    Co-Authors: Imre Kondor, Gabor Papp, Fabio Caccioli
    Abstract:

    We consider the variance portfolio optimization problem with a ban on short selling. We provide an analytical solution by means of the replica method for the case of a portfolio of independent, but not identically distributed, assets. We study the behavior of the solution as a function of the ratio r between the number N of assets and the length T of the time series of returns used to estimate risk. The no-short-selling constraint acts as an asymmetric ℓ1 regularizer, setting some of the portfolio weights to zero and keeping the out-of-sample estimator for the variance bounded, avoiding the divergence present in the non-regularized case. However, the ban on short positions does not prevent the phase transition in the optimization problem; it only shifts the critical point from its non-regularized value of r = 1 to r = 2 and changes its character: at r = 2 the out-of-sample estimator for the portfolio variance stays finite and the estimated in-sample variance vanishes, while another critical parameter, related to the estimated portfolio weights and the condensate density, diverges at the critical value r = 2. We also calculate the distribution of the optimal weights over the random samples and show that the regularizer preferentially removes the assets with large variances, in accord with one's natural expectation.

  • analytic solution to variance optimization with no short selling
    Social Science Research Network, 2017
    Co-Authors: Imre Kondor, Gabor Papp, Fabio Caccioli
    Abstract:

    A large portfolio of independent, but not identically distributed, returns is optimized under the variance risk measure with a ban on short positions. To the best of our knowledge, this is the first time such a constrained optimization has been carried out analytically, which is made possible by the application of methods borrowed from the theory of disordered systems. The no-short-selling constraint acts as an asymmetric ℓ1 regularizer, setting some of the portfolio weights to zero and keeping the estimation error bounded, avoiding the divergence present in the non-regularized case. However, the susceptibility, i.e., the sensitivity of the optimal portfolio weights to changes in the returns, diverges at a critical value 2 of the ratio N/T, where N is the number of different assets in the portfolio and T is the sample size (the length of the available time series). This means that a ban on short positions does not prevent the phase transition in the optimization problem; it merely shifts the critical point from its non-regularized value of N/T = 1 to 2. It is shown that this critical value is universal, independent of the distribution of the returns. Beyond this critical value, the variance of the portfolio identically vanishes for any portfolio weight vector constructed as a linear combination of the eigenvectors from the null space of the covariance matrix, but these linear combinations are not legitimate solutions of the optimization problem, as they are infinitely sensitive to any change in the input parameters; in particular, they will wildly fluctuate from sample to sample. With some narrative license we may say that the regularizer takes care of the longitudinal fluctuations of the optimal weight vector, but does not eliminate the divergent transverse fluctuations.
We also calculate the distribution of the optimal weights over the random samples and show that the regularizer preferentially removes the assets with large variances, in accord with one's natural expectation. The analytic calculations are supported by numerical simulations. The analytic and numerical results are in perfect agreement for N/T < 2, yet standard numerical solvers keep returning apparently meaningful solutions even for N/T > 2, where we know from exact linear algebraic considerations that no meaningful solution exists. The resolution of this paradox is that regularizers built into these solvers stabilize the otherwise freely fluctuating, meaningless solutions. This should serve as a warning against the use of ready-made solver programs in empirical work without a good understanding of the theoretical structure of the problem at hand and the details of the tool used to solve it, where the fundamental instability of the numerics may be masked by stabilization due to the solver rather than the data.

  • analytic solution to variance optimization with no short selling
    arXiv: Portfolio Management, 2016
    Co-Authors: Imre Kondor, Gabor Papp, Fabio Caccioli
    Abstract:

    A large portfolio of independent returns is optimized under the variance risk measure with a ban on short positions. The no-short-selling constraint acts as an asymmetric $\ell_1$ regularizer, setting some of the portfolio weights to zero and keeping the out-of-sample estimator for the variance bounded, avoiding the divergence present in the non-regularized case. However, the susceptibility, i.e., the sensitivity of the optimal portfolio weights to changes in the returns, diverges at a critical value $r=2$, where $r=N/T$ is the ratio of the number $N$ of assets to the length $T$ of the time series. This means that a ban on short positions does not prevent the phase transition in the optimization problem; it merely shifts the critical point from its non-regularized value of $r=1$ to $2$. At $r=2$ the out-of-sample estimator for the portfolio variance stays finite and the estimated in-sample variance vanishes. We have performed numerical simulations to support the analytic results and found perfect agreement for $N/T<2$. Numerical experiments on finite-size samples of symmetrically distributed returns show that above this critical point the probability of finding solutions with zero in-sample variance increases rapidly with increasing $N$, becoming one in the large-$N$ limit. However, these are not legitimate solutions of the optimization problem, as they are infinitely sensitive to any change in the input parameters; in particular, they will wildly fluctuate from sample to sample. We also calculate the distribution of the optimal weights over the random samples and show that the regularizer preferentially removes the assets with large variances, in accord with one's natural expectation.