Jackknife

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 315 Experts worldwide ranked by ideXlab platform

Yves G Berger - One of the best experts on this subject based on the ideXlab platform.

  • a Jackknife variance estimator for self weighted two stage samples
    2013
    Co-Authors: Emilio L Escobar, Yves G Berger
    Abstract:

    Self-weighted two-stage sampling designs are popular in practice as they simplify fieldwork. It is common in practice to compute variance estimates only from the first sampling stage, neglecting the second stage. This omission may induce a bias in variance estimation, especially in situations where there is low variability between clusters or when sampling fractions are non-negligible. We propose a design-consistent Jackknife variance estimator that takes account of all stages via deletion of clusters and observations within clusters. The proposed Jackknife can be used for a wide class of point estimators. It does not need joint-inclusion probabilities and naturally includes finite population corrections. A simulation study shows that the proposed estimator can be more accurate than standard Jackknifes (Rao, Wu, and Yue 1992) for self-weighted two-stage sampling designs.
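
    To fix ideas, here is a minimal sketch of the delete-cluster idea the abstract builds on. This is the classical delete-one-cluster jackknife with the standard (n-1)/n scaling, not the exact Escobar-Berger estimator, which additionally deletes observations within clusters and applies design-based corrections; the function names are illustrative only.

```python
import numpy as np

def jackknife_cluster_variance(clusters, estimator):
    """Classical delete-one-cluster jackknife variance for a point estimator.

    clusters  : list of 1-D arrays, one per primary sampling unit
    estimator : function mapping a flat array of observations to a scalar
    """
    n = len(clusters)
    # Recompute the estimator with each cluster deleted in turn.
    theta_del = np.array([
        estimator(np.concatenate([c for j, c in enumerate(clusters) if j != i]))
        for i in range(n)
    ])
    theta_bar = theta_del.mean()
    # Standard jackknife scaling: (n - 1)/n times the sum of squared deviations.
    return (n - 1) / n * np.sum((theta_del - theta_bar) ** 2)
```

    With single-observation clusters and the sample mean, this reduces to the textbook result s²/n, which makes it easy to sanity-check.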

  • a note on the asymptotic equivalence of Jackknife and linearization variance estimation for the gini coefficient
    Journal of Official Statistics, 2008
    Co-Authors: Yves G Berger
    Abstract:

    The Gini coefficient (Gini 1914) has proved valuable as a measure of income inequality. In cross-sectional studies of the Gini coefficient, information about the accuracy of its estimates is crucial. We show how to use Jackknife and linearization to estimate the variance of the Gini coefficient, allowing for the effect of the sampling design. The aim is to show the asymptotic equivalence (or consistency) of the generalized Jackknife estimator (Campbell 1980) and the Taylor linearization estimator (Kovačević and Binder 1997) for the variance of the Gini coefficient. A brief simulation study supports our findings.
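
    As a small illustration of the quantities involved, the sketch below computes an unweighted plug-in Gini coefficient and a plain delete-one jackknife standard error. It omits the survey weights and design-based adjustments that the paper is actually about; the function names are illustrative.

```python
import numpy as np

def gini(y):
    """Plug-in Gini coefficient of a sample of non-negative incomes
    (simple unweighted form; design weights omitted)."""
    y = np.sort(np.asarray(y, dtype=float))
    n = y.size
    # Equivalent to the mean absolute difference divided by twice the mean.
    return 2.0 * np.sum(np.arange(1, n + 1) * y) / (n * y.sum()) - (n + 1) / n

def jackknife_se(y, stat):
    """Delete-one jackknife standard error of stat(y); the generalized
    jackknife of the paper adds design-based corrections not shown here."""
    y = np.asarray(y, dtype=float)
    n = y.size
    reps = np.array([stat(np.delete(y, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))
```

    For equal incomes the Gini coefficient is 0, and for a two-point sample with one zero income it is 0.5, which gives quick checks of the formula.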

  • a Jackknife variance estimator for unistage stratified samples with unequal probabilities
    Biometrika, 2007
    Co-Authors: Yves G Berger
    Abstract:

    Existing Jackknife variance estimators used with sample surveys can seriously overestimate the true variance under unistage stratified sampling without replacement with unequal probabilities. A novel Jackknife variance estimator is proposed which is as numerically simple as existing Jackknife variance estimators. Under certain regularity conditions, the proposed variance estimator is consistent under stratified sampling without replacement with unequal probabilities. The high entropy regularity condition necessary for consistency is shown to hold for the Rao-Sampford design. An empirical study of three unequal probability sampling designs supports our findings.

  • a Jackknife variance estimator for unequal probability sampling
    Journal of The Royal Statistical Society Series B-statistical Methodology, 2005
    Co-Authors: Yves G Berger, Chris J Skinner
    Abstract:

    The Jackknife method is often used for variance estimation in sample surveys but has only been developed for a limited class of sampling designs. We propose a Jackknife variance estimator which is defined for any without-replacement unequal probability sampling design. We demonstrate design consistency of this estimator for a broad class of point estimators. A Monte Carlo study shows how the proposed estimator may improve on existing estimators.
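
    The ingredients can be sketched as follows: a Hájek point estimator of the mean under unequal-probability sampling, and a delete-one jackknife in which each squared deviation carries a (1 - π_i) finite-population factor. This is loosely modeled on the Berger-Skinner form, not their exact estimator, and the function names are illustrative.

```python
import numpy as np

def hajek_mean(y, pi):
    """Hajek estimator of the population mean under unequal-probability
    sampling: weighted mean with weights 1/pi_i."""
    w = 1.0 / np.asarray(pi, dtype=float)
    return np.sum(w * np.asarray(y, dtype=float)) / np.sum(w)

def jackknife_var_hajek(y, pi):
    """Delete-one jackknife variance for the Hajek mean, with a per-unit
    (1 - pi_i) finite-population factor. A simplified sketch only."""
    y = np.asarray(y, dtype=float)
    pi = np.asarray(pi, dtype=float)
    n = y.size
    theta = hajek_mean(y, pi)
    # Pseudo-errors: change in the estimate when unit i is deleted.
    eps = np.array([
        hajek_mean(np.delete(y, i), np.delete(pi, i)) for i in range(n)
    ]) - theta
    # (1 - pi_i) acts as a finite-population correction for each unit.
    return np.sum((1.0 - pi) * (eps - eps.mean()) ** 2)
```

    With equal inclusion probabilities the Hájek mean reduces to the ordinary sample mean, which is a convenient check.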

Chris J Skinner - One of the best experts on this subject based on the ideXlab platform.

  • a Jackknife variance estimator for unequal probability sampling
    Journal of The Royal Statistical Society Series B-statistical Methodology, 2005
    Co-Authors: Yves G Berger, Chris J Skinner
    Abstract:

    The Jackknife method is often used for variance estimation in sample surveys but has only been developed for a limited class of sampling designs. We propose a Jackknife variance estimator which is defined for any without-replacement unequal probability sampling design. We demonstrate design consistency of this estimator for a broad class of point estimators. A Monte Carlo study shows how the proposed estimator may improve on existing estimators.

Shumei Wan - One of the best experts on this subject based on the ideXlab platform.

  • a unified Jackknife theory for empirical best prediction with m estimation
    Annals of Statistics, 2002
    Co-Authors: Jiming Jiang, Partha Lahiri, Shumei Wan
    Abstract:

    The paper presents a unified Jackknife theory for a fairly general class of mixed models which includes some of the widely used mixed linear models and generalized linear mixed models as special cases. The paper develops Jackknife theory for the important, but so far neglected, prediction problem for the general mixed model. For estimation of fixed parameters, a Jackknife method is considered for a general class of M-estimators which includes the maximum likelihood, residual maximum likelihood and ANOVA estimators for mixed linear models and the recently developed method of simulated moments estimators for generalized linear mixed models. For both the prediction and estimation problems, a Jackknife method is used to obtain estimators of the mean squared errors (MSE). Asymptotic unbiasedness of the MSE estimators is shown to hold essentially under certain moment conditions. Simulation studies undertaken support our theoretical results.
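
    For the fixed-parameter side, the classical delete-one recipe that the paper extends can be sketched on a simple one-parameter M-estimator. The exponential-rate MLE below is an assumed toy example, not the mixed-model setting of the paper, and the function names are illustrative.

```python
import numpy as np

def mle_rate(x):
    """M-estimator example: maximum-likelihood rate of an exponential
    sample, the root of the estimating equation sum(1/lam - x_i) = 0."""
    return 1.0 / np.mean(x)

def jackknife_bias_and_var(x, estimator):
    """Classical delete-one jackknife: bias-corrected estimate and
    variance estimate for a generic scalar estimator."""
    x = np.asarray(x, dtype=float)
    n = x.size
    reps = np.array([estimator(np.delete(x, i)) for i in range(n)])
    theta = estimator(x)
    bias = (n - 1) * (reps.mean() - theta)          # jackknife bias estimate
    var = (n - 1) / n * np.sum((reps - reps.mean()) ** 2)
    return theta - bias, var
```

    For a linear statistic such as the sample mean the bias estimate is exactly zero, so the correction leaves the estimate unchanged.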

Jiming Jiang - One of the best experts on this subject based on the ideXlab platform.

  • a unified Jackknife theory for empirical best prediction with m estimation
    Annals of Statistics, 2002
    Co-Authors: Jiming Jiang, Partha Lahiri, Shumei Wan
    Abstract:

    The paper presents a unified Jackknife theory for a fairly general class of mixed models which includes some of the widely used mixed linear models and generalized linear mixed models as special cases. The paper develops Jackknife theory for the important, but so far neglected, prediction problem for the general mixed model. For estimation of fixed parameters, a Jackknife method is considered for a general class of M-estimators which includes the maximum likelihood, residual maximum likelihood and ANOVA estimators for mixed linear models and the recently developed method of simulated moments estimators for generalized linear mixed models. For both the prediction and estimation problems, a Jackknife method is used to obtain estimators of the mean squared errors (MSE). Asymptotic unbiasedness of the MSE estimators is shown to hold essentially under certain moment conditions. Simulation studies undertaken support our theoretical results.

Yanjun Han - One of the best experts on this subject based on the ideXlab platform.

  • bias correction with Jackknife bootstrap and taylor series
    IEEE Transactions on Information Theory, 2020
    Co-Authors: Jiantao Jiao, Yanjun Han
    Abstract:

    We analyze bias correction methods using the Jackknife, the bootstrap, and Taylor series. We focus on the binomial model and consider the problem of bias correction for estimating $f(p)$, where $f \in C[0,1]$ is arbitrary. We characterize the supremum norm of the bias of general Jackknife and bootstrap estimators for any continuous function, and demonstrate that in the delete-$d$ Jackknife, different values of $d$ may lead to drastically different behaviors. We show that in the binomial model, iterating the bootstrap bias correction infinitely many times may lead to divergence of bias and variance, and demonstrate that the bias properties of the bootstrap bias-corrected estimator after $r-1$ rounds are of the same order as those of the $r$-Jackknife estimator if a bounded coefficients condition is satisfied.
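
    In the binomial model the delete-1 jackknife is especially concrete, since the leave-one-out replicates depend only on whether a success or a failure is removed. The sketch below (illustrative function names; only the standard delete-1 case, not the delete-$d$ variants the paper studies) computes the exact bias of the plug-in estimator and the delete-1 jackknife bias-corrected estimate.

```python
import numpy as np
from math import comb

def plugin_bias(f, n, p):
    """Exact bias E[f(X/n)] - f(p) of the plug-in estimator under
    X ~ Binomial(n, p), computed by summing over the binomial pmf."""
    pmf = np.array([comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)])
    vals = np.array([f(k / n) for k in range(n + 1)])
    return float(np.sum(pmf * vals)) - f(p)

def delete_one_jackknife(f, x, n):
    """Delete-1 jackknife bias-corrected estimate of f(p), given x
    successes in n trials. Removing a success gives (x-1)/(n-1);
    removing a failure gives x/(n-1); each case is weighted by its count.
    (When x = 0 the success term is multiplied by zero.)"""
    theta = f(x / n)
    theta_bar = (x * f((x - 1) / (n - 1)) + (n - x) * f(x / (n - 1))) / n
    return n * theta - (n - 1) * theta_bar
```

    For the quadratic $f(p) = p^2$, the plug-in bias is exactly $p(1-p)/n$ and the delete-1 jackknife simplifies to $x(x-1)/(n(n-1))$, which is exactly unbiased, illustrating why the jackknife removes the $O(1/n)$ bias term.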