Smoothing Parameter

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 19356 Experts worldwide ranked by ideXlab platform

Peter Hall - One of the best experts on this subject based on the ideXlab platform.

  • Reducing Variability of Cross-Validation for Smoothing-Parameter Choice
    Biometrika, 2009
    Co-Authors: Peter Hall, Andrew P Robinson
    Abstract:

    One of the attractions of cross-validation, as a tool for smoothing-parameter choice, is its applicability to a wide variety of estimator types and contexts. However, its detractors comment adversely on the relatively high variance of cross-validatory smoothing parameters, noting that this compromises the performance of the estimators in which those parameters are used. We show that the variability can be reduced simply, significantly and reliably by employing bootstrap aggregation, or bagging. We establish that in theory, when bagging is implemented using an adaptively chosen resample size, the variability of cross-validation can be reduced by an order of magnitude. However, it is arguably more attractive to use a simpler approach, based for example on half-sample bagging, which can reduce variability by approximately 50%.
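
The half-sample bagging idea can be sketched numerically: select a least-squares cross-validation bandwidth on many half-size resamples of the data, then average the selected bandwidths. This is a minimal illustration, not the authors' implementation; the Gaussian kernel, the grid search, and all function names are assumptions, and the resample-size rescaling analyzed in the paper is omitted.

```python
import numpy as np

def lscv_score(x, h):
    """Least-squares cross-validation score for a Gaussian-kernel KDE at bandwidth h."""
    n = len(x)
    d = (x[:, None] - x[None, :]) / h
    # integral of fhat^2 has a closed form via the N(0, 2) convolution kernel
    int_f2 = np.exp(-d ** 2 / 4).sum() / (n ** 2 * h * np.sqrt(4 * np.pi))
    # leave-one-out term: full kernel sum minus the diagonal contributions
    k = np.exp(-d ** 2 / 2) / np.sqrt(2 * np.pi)
    loo = 2 * (k.sum() - n / np.sqrt(2 * np.pi)) / (n * (n - 1) * h)
    return int_f2 - loo

def cv_bandwidth(x, grid):
    """Plain cross-validatory bandwidth: grid minimizer of the LSCV score."""
    return grid[np.argmin([lscv_score(x, h) for h in grid])]

def bagged_cv_bandwidth(x, grid, n_boot=30, rng=None):
    """Half-sample bagging: average the CV bandwidth over half-size resamples."""
    rng = np.random.default_rng(0) if rng is None else rng
    m = len(x) // 2
    hs = [cv_bandwidth(rng.choice(x, size=m, replace=False), grid)
          for _ in range(n_boot)]
    return float(np.mean(hs))

rng = np.random.default_rng(0)
data = rng.normal(size=200)
grid = np.linspace(0.1, 1.0, 19)
h_bagged = bagged_cv_bandwidth(data, grid)
```

Averaging over resamples smooths out the well-known spikiness of the LSCV criterion, which is the source of the variance reduction the paper quantifies.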

  • Using SIMEX for Smoothing Parameter Choice in Errors-in-Variables Problems
    Journal of the American Statistical Association, 2008
    Co-Authors: Aurore Delaigle, Peter Hall
    Abstract:

    SIMEX methods are attractive for solving curve estimation problems in errors-in-variables regression, using parametric or semiparametric techniques. However, nonparametric approaches are generally of quite a different type, being based on, for example, kernels, local-linear modeling, ridging, orthogonal series, or splines. All of these techniques involve the challenging (and not well studied) issue of empirical smoothing parameter choice. We show that SIMEX can be used effectively for selecting smoothing parameters when applying nonparametric methods to errors-in-variables regression. In particular, we suggest an approach based on multiple error-inflated (or remeasured) datasets and extrapolation.
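
The SIMEX extrapolation step can be illustrated as follows: a statistic (here a smoothing parameter) computed at several added-noise levels λ is fitted by a polynomial in λ and extrapolated back to λ = −1, the notional error-free level. A hypothetical sketch, with a synthetic quadratic trend standing in for bandwidths actually selected on remeasured datasets:

```python
import numpy as np

def simex_extrapolate(lams, stats, target=-1.0, degree=2):
    """Fit a polynomial in the added-noise level lambda and evaluate it at
    lambda = -1, the (notional) error-free level."""
    coef = np.polyfit(lams, stats, degree)
    return float(np.polyval(coef, target))

lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
# in practice h(lam) would be a bandwidth selected on data remeasured with
# extra noise of variance lam * sigma_u^2; here a synthetic quadratic trend
h_of_lam = 1.0 + 0.5 * lams + 0.1 * lams ** 2
h_hat = simex_extrapolate(lams, h_of_lam)   # extrapolate back to lam = -1
```

The quadratic extrapolant is the standard SIMEX default; the choice of degree and noise levels is itself a tuning decision.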

  • Loss and Risk in Smoothing Parameter Selection
    Journal of Nonparametric Statistics, 1994
    Co-Authors: Birgit Grund, Peter Hall, J S Marron
    Abstract:

    For several years there has been debate over the relative merits of loss and risk as measures of the performance of nonparametric density estimators. In the way that this debate has dealt with risk, it has largely ignored the fact that any practical bandwidth selection rule must produce a random bandwidth. Existing theory for the risk of density estimators is almost invariably concerned with nonrandom bandwidths. In the present paper we examine two different definitions of risk, both of them appropriate to circumstances where the bandwidth is random. Arguments in favor of, and motivations for, each approach are presented, including formulation of appropriate decision-theoretic frameworks. It is shown that the two approaches can give diametrically opposite answers to the question of which of two competing bandwidth selection rules is superior. Technical results include some surprising conclusions about the nonexistence of risks, and even of moments of some common data-driven bandwidths, under the usual assumptions…

  • A Fourier Approach to Nonparametric Deconvolution of a Density Estimate
    Journal of the royal statistical society series b-methodological, 1993
    Co-Authors: Peter J. Diggle, Peter Hall
    Abstract:

    We consider the problem of constructing a nonparametric estimate of a probability density function h from independent random samples of observations from densities a and f, when a represents the convolution of h and f. Our approach is based on truncated Fourier inversion, in which the truncation point plays the role of a smoothing parameter. We derive the asymptotic mean integrated squared error of the estimate and use this formula to suggest a simple practical method for choosing the truncation point from the data. Strikingly, when the smoothing parameter is chosen in this way, then in many circumstances the estimator behaves, to first order, as though the true f were known.
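
A minimal sketch of the truncated Fourier inversion, assuming Gaussian target and error densities and a fixed truncation point T (in practice T would be chosen from the data, as the paper proposes); all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(0.0, 1.0, n)          # draws from the target density h (unobserved in practice)
u = rng.normal(0.0, 0.5, n)          # draws from the error density f
a = x + u                            # observed sample from the convolution a = h * f

def ecf(sample, t):
    """Empirical characteristic function evaluated on a grid t."""
    return np.exp(1j * np.outer(t, sample)).mean(axis=1)

def deconvolve(a_sample, f_sample, xs, T, m=400):
    """Truncated Fourier inversion: T is the smoothing parameter."""
    t = np.linspace(-T, T, m)
    ratio = ecf(a_sample, t) / ecf(f_sample, t)   # estimate of the CF of h
    dt = t[1] - t[0]
    # Riemann-sum approximation to (1/2pi) * integral of exp(-itx) * ratio dt
    return (np.exp(-1j * np.outer(xs, t)) * ratio).real.sum(axis=1) * dt / (2 * np.pi)

h0 = deconvolve(a, u, np.array([0.0]), T=2.0)[0]   # estimate of h at x = 0
```

Enlarging T reduces truncation bias but amplifies the noise in the characteristic-function ratio, which is exactly the bias-variance trade-off the paper's MISE formula balances.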

  • Empirical Functionals and Efficient Smoothing Parameter Selection
    Journal of the royal statistical society series b-methodological, 1992
    Co-Authors: Peter Hall, Iain M Johnstone
    Abstract:

    A striking feature of curve estimation is that the smoothing parameter h₀, which minimizes the squared error of a kernel or smoothing spline estimator, is very difficult to estimate. This is manifest both in slow rates of convergence and in high variability of standard methods such as cross-validation. We quantify this difficulty by describing nonparametric information bounds and exhibit asymptotically efficient estimators of h₀ that attain the bounds. The efficient estimators are substantially less variable than cross-validation (and other current procedures), and simulations suggest that they may offer improvements at moderate sample sizes, at least in terms of minimizing the squared error.

Simon N. Wood - One of the best experts on this subject based on the ideXlab platform.

  • Smoothing Parameter and Model Selection for General Smooth Models
    2017
    Co-Authors: Simon N. Wood, Natalya Pya, Benjamin Safken
    Abstract:

    This article discusses a general framework for smoothing parameter estimation for models with regular likelihoods constructed in terms of unknown smooth functions of covariates. Gaussian random effects and parametric terms may also be present. By construction the method is numerically stable and convergent, and enables smoothing parameter uncertainty to be quantified. The latter enables us to fix a well-known problem with AIC for such models, thereby improving the range of model selection tools available. The smooth functions are represented by reduced-rank, spline-like smoothers, with associated quadratic penalties measuring function smoothness. Model estimation is by penalized likelihood maximization, where the smoothing parameters controlling the extent of penalization are estimated by Laplace approximate marginal likelihood. The methods cover, for example, generalized additive models for non-exponential-family responses (e.g., beta, ordered categorical, scaled t, negative binomial and Tweedie distributions), generalized additive models for location, scale and shape (e.g., two-stage zero-inflation models, and Gaussian location-scale models), Cox proportional hazards models and multivariate additive models. The framework reduces the implementation of new model classes to the coding of some standard derivatives of the log-likelihood. Supplementary materials for this article are available online.
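
The role of the smoothing parameter in penalized likelihood fitting can be illustrated with a much simpler selector than the paper's Laplace approximate marginal likelihood: a penalized spline fit whose penalty weight λ is chosen by generalized cross-validation. A hypothetical sketch under Gaussian errors, not the paper's method; the basis, penalty, and grid are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, n)

# truncated-power spline basis; the penalty acts only on the spline coefficients
knots = np.linspace(0.05, 0.95, 15)
X = np.column_stack([np.ones(n), x] + [np.clip(x - k, 0.0, None) for k in knots])
S = np.diag([0.0, 0.0] + [1.0] * len(knots))

def gcv(lam):
    """Generalized cross-validation score for penalty weight lam."""
    A = X.T @ X + lam * S
    beta = np.linalg.solve(A, X.T @ y)
    fit = X @ beta
    # effective degrees of freedom = trace of the influence matrix
    edf = np.trace(np.linalg.solve(A, X.T @ X))
    return n * np.sum((y - fit) ** 2) / (n - edf) ** 2

lams = 10.0 ** np.linspace(-6, 2, 30)
lam_hat = lams[np.argmin([gcv(l) for l in lams])]
beta = np.linalg.solve(X.T @ X + lam_hat * S, X.T @ y)
fit = X @ beta
```

As λ grows the effective degrees of freedom shrink toward the unpenalized (here, linear) part of the model; GCV trades that loss of flexibility against residual error.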

  • Smoothing Parameter and Model Selection for General Smooth Models
    Journal of the American Statistical Association, 2016
    Co-Authors: Simon N. Wood, Benjamin Safken
    Abstract:

    This article discusses a general framework for smoothing parameter estimation for models with regular likelihoods constructed in terms of unknown smooth functions of covariates. Gaussian random effects and parametric terms may also be present. By construction the method is numerically stable and convergent, and enables smoothing parameter uncertainty to be quantified. The latter enables us to fix a well-known problem with AIC for such models, thereby improving the range of model selection tools available. The smooth functions are represented by reduced-rank, spline-like smoothers, with associated quadratic penalties measuring function smoothness. Model estimation is by penalized likelihood maximization, where the smoothing parameters controlling the extent of penalization are estimated by Laplace approximate marginal likelihood. The methods cover, for example, generalized additive models for non-exponential-family responses (e.g., beta, ordered categorical, scaled t distribution, negative binomial a…

  • Smoothing Parameter and Model Selection for General Smooth Models
    arXiv: Methodology, 2015
    Co-Authors: Simon N. Wood, Natalya Pya, Benjamin Safken
    Abstract:

    This paper discusses a general framework for smoothing parameter estimation for models with regular likelihoods constructed in terms of unknown smooth functions of covariates. Gaussian random effects and parametric terms may also be present. By construction the method is numerically stable and convergent, and enables smoothing parameter uncertainty to be quantified. The latter enables us to fix a well-known problem with AIC for such models. The smooth functions are represented by reduced-rank, spline-like smoothers, with associated quadratic penalties measuring function smoothness. Model estimation is by penalized likelihood maximization, where the smoothing parameters controlling the extent of penalization are estimated by Laplace approximate marginal likelihood. The methods cover, for example, generalized additive models for non-exponential-family responses (for example beta, ordered categorical, scaled t distribution, negative binomial and Tweedie distributions), generalized additive models for location, scale and shape (for example two-stage zero-inflation models, and Gaussian location-scale models), Cox proportional hazards models and multivariate additive models. The framework reduces the implementation of new model classes to the coding of some standard derivatives of the log-likelihood.

  • mgcv: Mixed GAM Computation Vehicle with GCV/AIC/REML Smoothness Estimation
    2012
    Co-Authors: Simon N. Wood
    Abstract:

    R package for GAMs and other generalized ridge regression, with multiple smoothing parameter selection by GCV, REML or UBRE/AIC. Also fits GAMMs. Includes a gam() function. A recommended package supplied with the R statistical language and environment.

  • Stable and Efficient Multiple Smoothing Parameter Estimation for Generalized Additive Models
    Journal of the American Statistical Association, 2004
    Co-Authors: Simon N. Wood
    Abstract:

    Representation of generalized additive models (GAMs) using penalized regression splines allows GAMs to be employed in a straightforward manner using penalized regression methods. Not only is inference facilitated by this approach, but it is also possible to integrate model selection, in the form of smoothing parameter selection, into model fitting in a computationally efficient manner, using well-founded criteria such as generalized cross-validation. The current fitting and smoothing parameter selection methods for such models are usually effective, but do not provide the level of numerical stability to which users of linear regression packages, for example, are accustomed. In particular, the existing methods cannot deal adequately with numerical rank deficiency of the GAM fitting problem, and it is not straightforward to produce methods that can do so, given that the degree of rank deficiency can be smoothing-parameter dependent. In addition, models with the potential flexibility of GAMs can also present…

Chris Peikert - One of the best experts on this subject based on the ideXlab platform.

  • On the Lattice Smoothing Parameter Problem
    arXiv: Computational Complexity, 2014
    Co-Authors: Kaimin Chung, Daniel Dadush, Fenghao Liu, Chris Peikert
    Abstract:

    The smoothing parameter $\eta_{\epsilon}(\mathcal{L})$ of a Euclidean lattice $\mathcal{L}$, introduced by Micciancio and Regev (FOCS'04; SICOMP'07), is (informally) the smallest amount of Gaussian noise that "smooths out" the discrete structure of $\mathcal{L}$ (up to error $\epsilon$). It plays a central role in the best known worst-case/average-case reductions for lattice problems, a wealth of lattice-based cryptographic constructions, and (implicitly) the tightest known transference theorems for fundamental lattice quantities. In this work we initiate a study of the complexity of approximating the smoothing parameter to within a factor $\gamma$, denoted $\gamma$-${\rm GapSPP}$. We show that (for $\epsilon = 1/{\rm poly}(n)$): $(2+o(1))$-${\rm GapSPP} \in {\rm AM}$, via a Gaussian analogue of the classic Goldreich-Goldwasser protocol (STOC'98); $(1+o(1))$-${\rm GapSPP} \in {\rm coAM}$, via a careful application of the Goldwasser-Sipser (STOC'86) set size lower bound protocol to thin spherical shells; $(2+o(1))$-${\rm GapSPP} \in {\rm SZK} \subseteq {\rm AM} \cap {\rm coAM}$ (where ${\rm SZK}$ is the class of problems having statistical zero-knowledge proofs), by constructing a suitable instance-dependent commitment scheme (for a slightly worse $o(1)$-term); $(1+o(1))$-${\rm GapSPP}$ can be solved in deterministic $2^{O(n)} {\rm polylog}(1/\epsilon)$ time and $2^{O(n)}$ space. As an application, we demonstrate a tighter worst-case to average-case reduction for basing cryptography on the worst-case hardness of the ${\rm GapSPP}$ problem, with $\tilde{O}(\sqrt{n})$ smaller approximation factor than the ${\rm GapSVP}$ problem. Central to our results are two novel, and nearly tight, characterizations of the magnitude of discrete Gaussian sums.
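
For the self-dual integer lattice ℤ the smoothing parameter can be computed directly: η_ε(ℤ) is the smallest s for which the Gaussian sum Σ_{z≠0} exp(−π s² z²) over the nonzero (dual) lattice points is at most ε, and this sum is strictly decreasing in s, so bisection applies. A minimal illustrative sketch (function names assumed):

```python
import math

def gaussian_mass(s, bound=50):
    """Sum of exp(-pi * s^2 * z^2) over nonzero integers z (the dual of Z is Z);
    terms beyond |z| = bound are negligibly small."""
    return 2 * sum(math.exp(-math.pi * (s * z) ** 2) for z in range(1, bound))

def smoothing_parameter(eps, lo=0.1, hi=10.0, iters=60):
    """Bisection for the smallest s with gaussian_mass(s) <= eps."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if gaussian_mass(mid) <= eps:
            hi = mid
        else:
            lo = mid
    return hi

eta = smoothing_parameter(0.01)
```

In dimension n > 1 no such direct summation is feasible in general, which is precisely why the approximation problem GapSPP studied in the paper is nontrivial.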

  • On the Lattice Smoothing Parameter Problem
    Conference on Computational Complexity, 2013
    Co-Authors: Kaimin Chung, Daniel Dadush, Fenghao Liu, Chris Peikert
    Abstract:

    The smoothing parameter η_ε(L) of a Euclidean lattice L, introduced by Micciancio and Regev (FOCS'04; SICOMP'07), is (informally) the smallest amount of Gaussian noise that "smooths out" the discrete structure of L (up to error ε). It plays a central role in the best known worst-case/average-case reductions for lattice problems, a wealth of lattice-based cryptographic constructions, and (implicitly) the tightest known transference theorems for fundamental lattice quantities. In this work we initiate a study of the complexity of approximating the smoothing parameter to within a factor γ, denoted γ-GapSPP. We show that (for ε = 1/poly(n)): (2+o(1))-GapSPP ∈ AM, via a Gaussian analogue of the classic Goldreich-Goldwasser protocol (STOC'98); (1+o(1))-GapSPP ∈ coAM, via a careful application of the Goldwasser-Sipser (STOC'86) set size lower bound protocol to thin shells in ℝⁿ; (2+o(1))-GapSPP ∈ SZK ⊆ AM ∩ coAM (where SZK is the class of problems having statistical zero-knowledge proofs), by constructing a suitable instance-dependent commitment scheme (for a slightly worse o(1)-term); (1+o(1))-GapSPP can be solved in deterministic 2^O(n) polylog(1/ε) time and 2^O(n) space. As an application, we demonstrate a tighter worst-case to average-case reduction for basing cryptography on the worst-case hardness of the GapSPP problem, with an Õ(√n) smaller approximation factor than the GapSVP problem. Central to our results are two novel, and nearly tight, characterizations of the magnitude of discrete Gaussian sums over L: the first relates these directly to the Gaussian measure of the Voronoi cell of L, and the second to the fraction of overlap between Euclidean balls centered around points of L.

M C Jones - One of the best experts on this subject based on the ideXlab platform.

  • Local Linear Quantile Regression
    Journal of the American Statistical Association, 1998
    Co-Authors: M C Jones
    Abstract:

    In this article we study nonparametric regression quantile estimation by kernel-weighted local linear fitting. Two such estimators are considered. One is based on localizing the characterization of a regression quantile as the minimizer of E{ρ_p(Y − a) | X = x}, where ρ_p is the appropriate "check" function. The other follows by inverting a local linear conditional distribution estimator and involves two smoothing parameters, rather than one. Our aim is to present fully operational versions of both approaches and to show that each works quite well; although either might be used in practice, we have a particular preference for the second. Our automatic smoothing parameter selection method is novel; the main regression quantile smoothing parameters are chosen by rule-of-thumb adaptations of state-of-the-art methods for smoothing parameter selection for regression mean estimation. The techniques are illustrated by application to two datasets and compared in simulations.
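
The first estimator can be sketched as follows: minimize the kernel-weighted check-function loss Σᵢ K((Xᵢ−x)/h) ρ_p(Yᵢ − a − b(Xᵢ−x)) over (a, b), here by a coarse grid on the slope b with the intercept a solved exactly as a weighted quantile. A hypothetical illustration, not the authors' rule-of-thumb implementation; the Gaussian kernel and slope grid are assumptions:

```python
import numpy as np

def weighted_quantile(v, w, p):
    """p-quantile of v under nonnegative weights w."""
    idx = np.argsort(v)
    v, w = v[idx], w[idx]
    cw = np.cumsum(w) / w.sum()
    return v[np.searchsorted(cw, p)]

def local_linear_quantile(x0, X, Y, p, h):
    """Local linear p-quantile estimate at x0, bandwidth h."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)    # Gaussian kernel weights
    best_loss, best_a = np.inf, 0.0
    for b in np.linspace(-5.0, 5.0, 101):      # coarse grid on the local slope
        r = Y - b * (X - x0)
        a = weighted_quantile(r, w, p)         # exact minimizer over a, given b
        res = r - a
        # check-function loss: rho_p(u) = u * (p - 1{u < 0})
        loss = np.sum(w * np.where(res >= 0, p * res, (p - 1) * res))
        if loss < best_loss:
            best_loss, best_a = loss, a
    return best_a

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, 500)
Y = 2.0 * X + rng.normal(0.0, 0.1, 500)
q_hat = local_linear_quantile(0.5, X, Y, p=0.5, h=0.2)   # median of Y at x = 0.5
```

For fixed slope the check loss is minimized exactly by a weighted quantile, which is what makes this two-stage grid search a valid (if crude) minimizer.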