Tuned Estimator


The Experts below are selected from a list of 1410 Experts worldwide ranked by ideXlab platform

Saharon Rosset - One of the best experts on this subject based on the ideXlab platform.

  • Excess Optimism: How Biased is the Apparent Error of an Estimator Tuned by SURE?
    Journal of the American Statistical Association, 2019
    Co-Authors: Ryan J. Tibshirani, Saharon Rosset
    Abstract:

    Nearly all Estimators in statistical prediction come with an associated tuning parameter, in one way or another. Common practice, given data, is to choose the tuning parameter value that minimizes a constructed estimate of the prediction error of the Estimator; we focus on Stein’s unbiased risk Estimator, or SURE, which forms an unbiased estimate of the prediction error by augmenting the observed training error with an estimate of the degrees of freedom of the Estimator. Parameter tuning via SURE minimization has been advocated by many authors, in a wide variety of problem settings, and in general, it is natural to ask: what is the prediction error of the SURE-Tuned Estimator? An obvious strategy would be to simply use the apparent error estimate as reported by SURE, that is, the value of the SURE criterion at its minimum, to estimate the prediction error of the SURE-Tuned Estimator. But this is no longer unbiased; in fact, we would expect that the minimum of the SURE criterion is systematically biased downwards for the true prediction error.

  • Excess Optimism: How Biased is the Apparent Error of an Estimator Tuned by SURE?
    2018
    Co-Authors: Ryan J. Tibshirani, Saharon Rosset
    Abstract:

    Nearly all Estimators in statistical prediction come with an associated tuning parameter, in one way or another. Common practice, given data, is to choose the tuning parameter value that minimizes a constructed estimate of the prediction error of the Estimator; we focus on Stein’s unbiased risk Estimator, or SURE, which forms an unbiased estimate of the prediction error by augmenting the observed training error with an estimate of the degrees of freedom of the Estimator. Parameter tuning via SURE minimization has been advocated by many authors, in a wide variety of problem settings, and in general, it is natural to ask: what is the prediction error of the SURE-Tuned Estimator? An obvious strategy would be to simply use the apparent error estimate as reported by SURE, that is, the value of the SURE criterion at its minimum, to estimate the prediction error of the SURE-Tuned Estimator. But this is no longer unbiased; in fact, we would expect that the minimum of the SURE criterion is systematically biased downwards for the true prediction error. In this work, we define the excess optimism of the SURE-Tuned Estimator to be the amount of this downward bias in the SURE minimum. We argue that the following two properties motivate the study of excess optimism: (i) an unbiased estimate of excess optimism, added to the SURE criterion at its minimum, gives an unbiased estimate of the prediction error of the SURE-Tuned Estimator; (ii) excess optimism serves as an upper bound on the excess risk, that is, the difference between the risk of the SURE-Tuned Estimator and the oracle risk (where the oracle uses the best fixed tuning parameter choice). We study excess optimism in two common settings: shrinkage Estimators and subset regression Estimators. Our main results include a James–Stein-like property of the SURE-Tuned shrinkage Estimator, which is shown to dominate the MLE; and both upper and lower bounds on excess optimism for SURE-Tuned subset regression. In the latter setting, when the collection of subsets is nested, our bounds are particularly tight, and reveal that in the case of no signal, the excess optimism is always between 0 and 10 degrees of freedom, regardless of how many models are being selected from. Supplementary materials for this article are available online.
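The SURE tuning procedure discussed in the abstract above can be sketched numerically. The following is a minimal illustration, not the authors' code: it uses a simple shrinkage family f_λ(y) = y/(1+λ), whose degrees of freedom are n/(1+λ), and a no-signal mean; the grid and all constants are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 100, 1.0
mu = np.zeros(n)                      # no-signal case, for illustration
y = mu + sigma * rng.normal(size=n)

# Shrinkage family f_lam(y) = y / (1 + lam), with df(lam) = n / (1 + lam).
# SURE(lam) = ||y - f_lam(y)||^2 + 2 * sigma^2 * df(lam) is, for each fixed
# lam, an unbiased estimate of the prediction error of f_lam.
lams = np.linspace(0.0, 10.0, 1001)
sure = np.array([
    np.sum((y - y / (1 + lam)) ** 2) + 2 * sigma ** 2 * n / (1 + lam)
    for lam in lams
])

lam_hat = lams[np.argmin(sure)]       # SURE-tuned parameter
apparent_error = sure.min()           # the (downward-biased) apparent error
```

The point of the paper is that `apparent_error`, the SURE criterion evaluated at its minimizer, is no longer an unbiased estimate of the tuned estimator's prediction error.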

  • Excess Optimism: How Biased is the Apparent Error of an Estimator Tuned by SURE?
    arXiv: Statistics Theory, 2016
    Co-Authors: Ryan J. Tibshirani, Saharon Rosset
    Abstract:

    Nearly all Estimators in statistical prediction come with an associated tuning parameter, in one way or another. Common practice, given data, is to choose the tuning parameter value that minimizes a constructed estimate of the prediction error of the Estimator; we focus on Stein's unbiased risk Estimator, or SURE (Stein, 1981; Efron, 1986), which forms an unbiased estimate of the prediction error by augmenting the observed training error with an estimate of the degrees of freedom of the Estimator. Parameter tuning via SURE minimization has been advocated by many authors, in a wide variety of problem settings, and in general, it is natural to ask: what is the prediction error of the SURE-Tuned Estimator? An obvious strategy would be to simply use the apparent error estimate as reported by SURE, i.e., the value of the SURE criterion at its minimum, to estimate the prediction error of the SURE-Tuned Estimator. But this is no longer unbiased; in fact, we would expect that the minimum of the SURE criterion is systematically biased downwards for the true prediction error. In this paper, we formally describe and study this bias.
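The downward bias described above can be checked by Monte Carlo in the nested subset regression setting the paper analyzes: with y ~ N(0, σ²I_n), model k fits the first k coordinates of y and has k degrees of freedom. This sketch (sample sizes and the no-signal mean are illustrative choices, not the paper's experiments) compares the average SURE minimum with the average prediction error of the SURE-tuned projection.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, reps = 50, 1.0, 1000
mu = np.zeros(n)                                    # no-signal setting

sure_mins, pred_errs = [], []
for _ in range(reps):
    y = mu + sigma * rng.normal(size=n)
    # Nested subsets: model k keeps the first k coordinates, zeros the rest.
    # Training error of model k is sum_{i>k} y_i^2; its df is k.
    S = np.concatenate(([0.0], np.cumsum(y ** 2)))  # S[k] = sum_{i<=k} y_i^2
    ks = np.arange(n + 1)
    sure = (S[-1] - S) + 2 * sigma ** 2 * ks        # SURE(k) = train err + 2*sigma^2*k
    k_hat = np.argmin(sure)
    sure_mins.append(sure[k_hat])
    # Prediction error of the tuned fit: n*sigma^2 + ||mu - fit_k||^2; with
    # mu = 0 the fitted vector is (y_1, ..., y_k, 0, ..., 0).
    pred_errs.append(n * sigma ** 2 + S[k_hat])

# Monte Carlo estimate of the excess optimism: average true prediction error
# of the SURE-tuned model minus the average apparent (minimum-SURE) error.
excess_optimism = np.mean(pred_errs) - np.mean(sure_mins)
```

In this no-signal simulation the estimated excess optimism comes out strictly positive, consistent with the systematic downward bias of the SURE minimum that the abstract describes.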

Ryan J. Tibshirani - One of the best experts on this subject based on the ideXlab platform.

  • Excess Optimism: How Biased is the Apparent Error of an Estimator Tuned by SURE?
    Journal of the American Statistical Association, 2019
    Co-Authors: Ryan J. Tibshirani, Saharon Rosset
    Abstract:

    Nearly all Estimators in statistical prediction come with an associated tuning parameter, in one way or another. Common practice, given data, is to choose the tuning parameter value that minimizes a constructed estimate of the prediction error of the Estimator; we focus on Stein’s unbiased risk Estimator, or SURE, which forms an unbiased estimate of the prediction error by augmenting the observed training error with an estimate of the degrees of freedom of the Estimator. Parameter tuning via SURE minimization has been advocated by many authors, in a wide variety of problem settings, and in general, it is natural to ask: what is the prediction error of the SURE-Tuned Estimator? An obvious strategy would be to simply use the apparent error estimate as reported by SURE, that is, the value of the SURE criterion at its minimum, to estimate the prediction error of the SURE-Tuned Estimator. But this is no longer unbiased; in fact, we would expect that the minimum of the SURE criterion is systematically biased downwards for the true prediction error.

  • Excess Optimism: How Biased is the Apparent Error of an Estimator Tuned by SURE?
    2018
    Co-Authors: Ryan J. Tibshirani, Saharon Rosset
    Abstract:

    Nearly all Estimators in statistical prediction come with an associated tuning parameter, in one way or another. Common practice, given data, is to choose the tuning parameter value that minimizes a constructed estimate of the prediction error of the Estimator; we focus on Stein’s unbiased risk Estimator, or SURE, which forms an unbiased estimate of the prediction error by augmenting the observed training error with an estimate of the degrees of freedom of the Estimator. Parameter tuning via SURE minimization has been advocated by many authors, in a wide variety of problem settings, and in general, it is natural to ask: what is the prediction error of the SURE-Tuned Estimator? An obvious strategy would be to simply use the apparent error estimate as reported by SURE, that is, the value of the SURE criterion at its minimum, to estimate the prediction error of the SURE-Tuned Estimator. But this is no longer unbiased; in fact, we would expect that the minimum of the SURE criterion is systematically biased downwards for the true prediction error. In this work, we define the excess optimism of the SURE-Tuned Estimator to be the amount of this downward bias in the SURE minimum. We argue that the following two properties motivate the study of excess optimism: (i) an unbiased estimate of excess optimism, added to the SURE criterion at its minimum, gives an unbiased estimate of the prediction error of the SURE-Tuned Estimator; (ii) excess optimism serves as an upper bound on the excess risk, that is, the difference between the risk of the SURE-Tuned Estimator and the oracle risk (where the oracle uses the best fixed tuning parameter choice). We study excess optimism in two common settings: shrinkage Estimators and subset regression Estimators. Our main results include a James–Stein-like property of the SURE-Tuned shrinkage Estimator, which is shown to dominate the MLE; and both upper and lower bounds on excess optimism for SURE-Tuned subset regression. In the latter setting, when the collection of subsets is nested, our bounds are particularly tight, and reveal that in the case of no signal, the excess optimism is always between 0 and 10 degrees of freedom, regardless of how many models are being selected from. Supplementary materials for this article are available online.

  • Excess Optimism: How Biased is the Apparent Error of an Estimator Tuned by SURE?
    arXiv: Statistics Theory, 2016
    Co-Authors: Ryan J. Tibshirani, Saharon Rosset
    Abstract:

    Nearly all Estimators in statistical prediction come with an associated tuning parameter, in one way or another. Common practice, given data, is to choose the tuning parameter value that minimizes a constructed estimate of the prediction error of the Estimator; we focus on Stein's unbiased risk Estimator, or SURE (Stein, 1981; Efron, 1986), which forms an unbiased estimate of the prediction error by augmenting the observed training error with an estimate of the degrees of freedom of the Estimator. Parameter tuning via SURE minimization has been advocated by many authors, in a wide variety of problem settings, and in general, it is natural to ask: what is the prediction error of the SURE-Tuned Estimator? An obvious strategy would be to simply use the apparent error estimate as reported by SURE, i.e., the value of the SURE criterion at its minimum, to estimate the prediction error of the SURE-Tuned Estimator. But this is no longer unbiased; in fact, we would expect that the minimum of the SURE criterion is systematically biased downwards for the true prediction error. In this paper, we formally describe and study this bias.

Angelina Roche - One of the best experts on this subject based on the ideXlab platform.

  • adaptive estimation in the functional nonparametric regression model
    Journal of Multivariate Analysis, 2016
    Co-Authors: Gaelle Chagny, Angelina Roche
    Abstract:

    In this paper, we consider nonparametric regression estimation when the predictor is a functional random variable (typically a curve) and the response is scalar. Starting from a classical collection of kernel estimates, the bias-variance decomposition of a pointwise risk is investigated to understand what can be expected at best from adaptive estimation. We propose a fully data-driven local bandwidth selection rule in the spirit of the Goldenshluger and Lepski method. The main result is a nonasymptotic risk bound which shows the optimality of our Tuned Estimator from the oracle point of view. Convergence rates are also derived for regression functions belonging to Hölder spaces and under various assumptions on the rate of decay of the small ball probability of the explanatory variable. A simulation study also illustrates the good practical performance of our Estimator.
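The local bandwidth selection idea in the abstract above can be sketched as follows. This is a simplified illustration in the spirit of the Goldenshluger–Lepski rule, not the authors' procedure: the exact comparison terms and penalty constants in the paper differ, and the simulated curves, the regression function, the kernel, and the grids below are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy functional regression: each predictor X_i is a curve on a grid,
# the response is scalar.  The regression function is a hypothetical
# choice made only so the simulation has a ground truth.
n, T = 200, 50
t = np.linspace(0.0, 1.0, T)
c = rng.normal(size=(n, 2))
X = c[:, [0]] * np.sin(np.pi * t) + c[:, [1]] * t    # random curves
m = (X ** 2).mean(axis=1)                            # true m(X): mean of X(t)^2
sigma = 0.2
Y = m + sigma * rng.normal(size=n)

def nw(x0, h):
    """Nadaraya-Watson estimate at curve x0: Gaussian kernel applied to a
    discrete L2 distance between curves; returns (weight sum, estimate)."""
    d = np.sqrt(((X - x0) ** 2).mean(axis=1))
    w = np.exp(-(d / h) ** 2)
    return w.sum(), (w * Y).sum() / w.sum()

# Simplified Goldenshluger-Lepski-style local bandwidth choice at x0:
# V(h) is a variance proxy and A(h) a bias proxy, built by comparing the
# estimate at bandwidth h with the estimates at all smaller bandwidths.
x0 = X[0]
hs = np.geomspace(0.2, 3.0, 12)                      # ascending bandwidth grid
wsum, est = map(np.array, zip(*[nw(x0, h) for h in hs]))
V = sigma ** 2 / wsum                                # variance proxy
A = np.array([max(((est[j] - est[:j + 1]) ** 2 - V[:j + 1]).max(), 0.0)
              for j in range(len(hs))])
h_hat = hs[np.argmin(A + V)]                         # selected local bandwidth
```

The selection is local: repeating it at a different curve x0 can return a different bandwidth, which is what makes the rule adaptive to the design around each point.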

Roche Angelina - One of the best experts on this subject based on the ideXlab platform.

  • Adaptive Estimation in the Functional Nonparametric Regression Model
    Elsevier BV, 2016
    Co-Authors: Gaëlle Chagny, Angelina Roche
    Abstract:

    In this paper, we consider nonparametric regression estimation when the predictor is a functional random variable (typically a curve) and the response is scalar. Starting from a classical collection of kernel estimates, the bias-variance decomposition of a pointwise risk is investigated to understand what can be expected at best from adaptive estimation. We propose a fully data-driven local bandwidth selection rule in the spirit of the Goldenshluger and Lepski method. The main result is a nonasymptotic risk bound which shows the optimality of our Tuned Estimator from the oracle point of view. Convergence rates are also derived for regression functions belonging to Hölder spaces and under various assumptions on the rate of decay of the small ball probability of the explanatory variable. A simulation study also illustrates the good practical performance of our Estimator.

Rosset Saharon - One of the best experts on this subject based on the ideXlab platform.

  • Excess Optimism: How Biased is the Apparent Error of an Estimator Tuned by SURE?
    2017
    Co-Authors: Ryan J. Tibshirani, Saharon Rosset
    Abstract:

    Nearly all Estimators in statistical prediction come with an associated tuning parameter, in one way or another. Common practice, given data, is to choose the tuning parameter value that minimizes a constructed estimate of the prediction error of the Estimator; we focus on Stein's unbiased risk Estimator, or SURE (Stein, 1981; Efron, 1986), which forms an unbiased estimate of the prediction error by augmenting the observed training error with an estimate of the degrees of freedom of the Estimator. Parameter tuning via SURE minimization has been advocated by many authors, in a wide variety of problem settings, and in general, it is natural to ask: what is the prediction error of the SURE-Tuned Estimator? An obvious strategy would be to simply use the apparent error estimate as reported by SURE, i.e., the value of the SURE criterion at its minimum, to estimate the prediction error of the SURE-Tuned Estimator. But this is no longer unbiased; in fact, we would expect that the minimum of the SURE criterion is systematically biased downwards for the true prediction error. In this paper, we formally describe and study this bias.
    Comment: 39 pages, 3 figures