Frequentist

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 18,084 Experts worldwide, ranked by the ideXlab platform

James O. Berger - One of the best experts on this subject based on the ideXlab platform.

  • Unified Conditional Frequentist and Bayesian Testing of Composite Hypotheses
    Scandinavian Journal of Statistics, 2003
    Co-Authors: Sarat C. Dass, James O. Berger
    Abstract:

    Testing of a composite null hypothesis versus a composite alternative is considered when both have a related invariance structure. The goal is to develop conditional Frequentist tests that allow the reporting of data-dependent error probabilities, error probabilities that have a strict Frequentist interpretation and that reflect the actual amount of evidence in the data. The resulting tests are also seen to be Bayesian tests, in the strong sense that the reported Frequentist error probabilities are also the posterior probabilities of the hypotheses under default choices of the prior distribution. The new procedures are illustrated in a variety of applications to model selection and multivariate hypothesis testing.

  • Simultaneous Bayesian-Frequentist sequential testing of nested hypotheses
    Biometrika, 1999
    Co-Authors: James O. Berger, B. Boukai, Yinping Wang
    Abstract:

    Conditional Frequentist tests of a precise hypothesis versus a composite alternative have recently been developed, and have been shown to be equivalent to conventional Bayes tests in the very strong sense that the reported Frequentist error probabilities equal the posterior probabilities of the hypotheses. These results are herein extended to sequential testing, and yield fully Frequentist sequential tests that are considerably easier to use than are conventional sequential tests. Among the interesting properties of these new tests is the lack of dependence of the reported error probabilities on the stopping rule, seeming to lend Frequentist support to the stopping rule principle.

  • unified Frequentist and bayesian testing of a precise hypothesis
    Statistical Science, 1997
    Co-Authors: James O. Berger, B. Boukai, Y Wang
    Abstract:

    In this paper, we show that the conditional Frequentist method of testing a precise hypothesis can be made virtually equivalent to Bayesian testing. The conditioning strategy proposed by Berger, Brown and Wolpert in 1994, for the simple versus simple case, is generalized to testing a precise null hypothesis versus a composite alternative hypothesis. Using this strategy, both the conditional Frequentist and the Bayesian will report the same error probabilities upon rejecting or accepting. This is of considerable interest because it is often perceived that Bayesian and Frequentist testing are incompatible in this situation. That they are compatible, when conditional Frequentist testing is allowed, is a strong indication that the "wrong" Frequentist tests are currently being used for postexperimental assessment of accuracy. The new unified testing procedure is discussed and illustrated in several common testing situations.

  • a unified conditional Frequentist and bayesian test for fixed and sequential simple hypothesis testing
    Annals of Statistics, 1994
    Co-Authors: James O. Berger, Lawrence D Brown, Robert L Wolpert
    Abstract:

    Preexperimental Frequentist error probabilities are arguably inadequate, as summaries of evidence from data, in many hypothesis-testing settings. The conditional Frequentist may respond to this by identifying certain subsets of the outcome space and reporting a conditional error probability, given the subset of the outcome space in which the observed data lie. Statistical methods consistent with the likelihood principle, including Bayesian methods, avoid the problem by a more extreme form of conditioning. In this paper we prove that the conditional Frequentist's method can be made exactly equivalent to the Bayesian's in simple versus simple hypothesis testing: specifically, we find a conditioning strategy for which the conditional Frequentist's reported conditional error probabilities are the same as the Bayesian's posterior probabilities of error. A conditional Frequentist who uses such a strategy can exploit other features of the Bayesian approach; for example, the validity of sequential hypothesis tests (including versions of the sequential probability ratio test, or SPRT) even if the stopping rule is incompletely specified.
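For the simple-versus-simple case the equivalence has a concrete form: with equal prior weights on the two hypotheses, the Bayesian's posterior probability of H0 is B/(1 + B), where B is the likelihood ratio f0(x)/f1(x), and under the paper's conditioning strategy this is exactly the conditional error probability the Frequentist reports. A minimal sketch (an illustrative normal-location example, not the paper's own setup):

```python
import math

def normal_pdf(x, mu, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def posterior_prob_h0(x, mu0=0.0, mu1=1.0, prior0=0.5):
    """Posterior probability of H0: mu = mu0 versus H1: mu = mu1 given one
    observation x; with prior0 = 0.5 this equals B / (1 + B), the quantity
    that doubles as the conditional Frequentist error report."""
    b = normal_pdf(x, mu0) / normal_pdf(x, mu1)  # likelihood ratio f0(x) / f1(x)
    return prior0 * b / (prior0 * b + (1.0 - prior0))

# Data favouring H1 (x near mu1) gives a small posterior probability of H0,
# i.e. a small conditional Type I error report when H0 is rejected.
print(posterior_prob_h0(1.5))
```

The names and the normal-location setting here are hypothetical; the paper's construction applies to general simple-versus-simple densities.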

Ray Kent - One of the best experts on this subject based on the ideXlab platform.

  • Rethinking Data Analysis - Part Two: Some Alternatives to Frequentist Approaches
    International Journal of Market Research, 2020
    Co-Authors: Ray Kent
    Abstract:

    In ‘Rethinking data analysis – part one: the limitations of Frequentist approaches’ (Kent 2009) it was argued that standard, Frequentist statistics were developed for purposes entirely other than for the analysis of survey data; when applied in this context, the assumptions being made and the limitations of the statistical procedures are commonly ignored. This paper examines ways of approaching the analysis of data sets that can be seen as viable alternatives. It reviews Bayesian statistics, configurational and fuzzy set analysis, association rules in data mining, neural network analysis, chaos theory and the theory of the tipping point. Each of these approaches has its own limitations and not one of them can or should be seen as a total replacement for Frequentist approaches. Rather, they are alternatives that should be considered when Frequentist approaches are not appropriate or when they do not seem to be adequate to the task of finding patterns in a data set.

  • Rethinking data analysis (2): Alternatives to Frequentist approaches
    International Journal of Market Research, 2020
    Co-Authors: Ray Kent
    Abstract:

    In ‘Rethinking data analysis (1): The limitations of Frequentist approaches’ (Kent 2008) it was argued that standard, Frequentist statistics were developed for purposes entirely other than for the analysis of survey data; when applied in this context, the assumptions being made and the limitations of the statistical procedures are commonly ignored. This article examines ways of approaching the analysis of datasets that can be seen as viable alternatives. It reviews Bayesian statistics, configurational and fuzzy set analysis, association rules in data mining, neural network analysis, chaos theory and the theory of the tipping point. Each of these approaches has its own limitations and not one of them can or should be seen as a total replacement for Frequentist approaches. Rather, they are alternatives that should be considered when Frequentist approaches are not appropriate or when they do not seem to be adequate to the task of finding patterns in a dataset.

Manuela A Joore - One of the best experts on this subject based on the ideXlab platform.

  • sample size estimation for non inferiority trials Frequentist approach versus decision theory approach
    PLOS ONE, 2015
    Co-Authors: A C Bouman, A J Ten Cate-Hoek, Bram L T Ramaekers, Manuela A Joore
    Abstract:

    Background: Non-inferiority trials are performed when the main therapeutic effect of the new therapy is expected to be not unacceptably worse than that of the standard therapy, and the new therapy is expected to have advantages over the standard therapy in costs or other (health) consequences. These advantages, however, are not included in the classic Frequentist approach of sample size calculation for non-inferiority trials. In contrast, the decision theory approach of sample size calculation does include these factors. The objective of this study is to compare the conceptual and practical aspects of the Frequentist approach and decision theory approach of sample size calculation for non-inferiority trials, thereby demonstrating that the decision theory approach is more appropriate for sample size calculation of non-inferiority trials.

    Methods: The Frequentist approach and decision theory approach of sample size calculation for non-inferiority trials are compared and applied to a case of a non-inferiority trial on individually tailored duration of elastic compression stocking therapy compared to two years of elastic compression stocking therapy for the prevention of post-thrombotic syndrome after deep vein thrombosis.

    Results: The two approaches differ substantially in conceptual background, analytical approach, and input requirements. The sample size calculated according to the Frequentist approach yielded 788 patients, using a power of 80% and a one-sided significance level of 5%. The decision theory approach indicated that the optimal sample size was 500 patients, with a net value of €92 million.

    Conclusions: This study demonstrates and explains the differences between the classic Frequentist approach and the decision theory approach of sample size calculation for non-inferiority trials. We argue that the decision theory approach of sample size estimation is most suitable for sample size calculation of non-inferiority trials.
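For context, the classic Frequentist calculation the abstract contrasts against is a normal-approximation formula driven only by the significance level, power, non-inferiority margin, and outcome variance; the costs and consequences the authors emphasize never enter it. A rough sketch for a binary endpoint (a textbook-style formula with hypothetical inputs, not the trial's actual calculation):

```python
import math
from statistics import NormalDist

def noninferiority_n_per_group(p_expected, margin, alpha=0.05, power=0.80):
    """Approximate sample size per group for a non-inferiority comparison of
    two proportions, assuming both true proportions equal p_expected and the
    difference is tested against the margin at a one-sided level alpha."""
    z_alpha = NormalDist().inv_cdf(1.0 - alpha)       # one-sided significance
    z_beta = NormalDist().inv_cdf(power)              # desired power
    variance = 2.0 * p_expected * (1.0 - p_expected)  # variance of the difference
    return math.ceil((z_alpha + z_beta) ** 2 * variance / margin ** 2)

# Hypothetical inputs: 30% event rate, 10 percentage-point margin.
print(noninferiority_n_per_group(0.30, 0.10))
```

Note that nothing in the formula reflects enrolment cost or downstream value, which is precisely the gap the decision-theoretic approach is argued to fill.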

Donald A Berry - One of the best experts on this subject based on the ideXlab platform.

  • bayesian decision theoretic group sequential clinical trial design based on a quadratic loss function a Frequentist evaluation
    Clinical Trials, 2007
    Co-Authors: Roger J Lewis, Ari M Lipsky, Donald A Berry
    Abstract:

    The decision to terminate a controlled clinical trial at the time of an interim analysis is perhaps best made by weighing the value of the likely additional information to be gained if further subjects are enrolled against the various costs of that further enrollment. The most commonly used statistical plans for interim analysis (e.g., O'Brien–Fleming), however, are based on a Frequentist approach that makes no such comparison. A two-armed Bayesian decision-theoretic clinical trial design is developed for a disease with two possible outcomes, incorporating a quadratic decision loss function and using backward induction to quantify the cost of future enrollment. Monte Carlo simulation is used to compare Frequentist error rates and mean required sample sizes for these Bayesian designs with the two-tailed Frequentist group-sequential designs of O'Brien–Fleming and Pocock. When the terminal decision loss function is chosen to yield typical Frequentist error rates, the mean sample sizes required by the Bayesian...
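The backward-induction step can be illustrated in a deliberately stripped-down form: a one-armed trial with a binary outcome, a Beta prior on the response rate, quadratic loss for the terminal estimate (so expected terminal loss equals the posterior variance), and a single look. Continuing is worthwhile only when the expected reduction in posterior variance outweighs the enrolment cost. This is a toy sketch of the principle, not the two-armed sequential design in the paper:

```python
import math

def beta_var(a, b):
    """Variance of a Beta(a, b) distribution = expected quadratic loss
    of the posterior-mean estimate."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

def log_beta(a, b):
    """log of the Beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def expected_posterior_var(a, b, m):
    """Preposterior expected posterior variance after observing m further
    Bernoulli outcomes, starting from a Beta(a, b) prior; the expectation
    is taken over the Beta-binomial predictive distribution of the data."""
    total = 0.0
    for x in range(m + 1):
        # Beta-binomial predictive probability of x successes in m trials
        p_x = math.comb(m, x) * math.exp(log_beta(a + x, b + m - x) - log_beta(a, b))
        total += p_x * beta_var(a + x, b + m - x)
    return total

def continue_trial(a, b, m, cost_per_patient):
    """One-step backward induction: enrol m more patients only if the expected
    reduction in quadratic loss exceeds the cost of that enrolment."""
    return beta_var(a, b) - expected_posterior_var(a, b, m) > cost_per_patient * m
```

All names and the one-armed, single-look simplification are assumptions of this sketch; the paper's design handles two arms and repeated looks via full backward induction.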

  • Relationship between bayesian and Frequentist sample size determination
    The American Statistician, 2005
    Co-Authors: Lurdes Y. T. Inoue, Donald A Berry, Giovanni Parmigiani
    Abstract:

    Sample size determination is among the most commonly encountered tasks in statistical practice. A broad range of Frequentist and Bayesian methods for sample size determination can be described as choosing the smallest sample that is sufficient to achieve some set of goals. An example for the Frequentist is seeking the smallest sample size that is sufficient to achieve a desired power at a specified significance level. An example for the Bayesian is seeking the smallest sample size necessary to obtain, in expectation, a desired rate of correct classification of the hypothesis as true or false. This article explores parallels between Bayesian and Frequentist methods for determining sample size. We provide a simple but general and pragmatic framework for investigating the relationship between the two approaches, based on identifying mappings to connect the Bayesian and Frequentist inputs necessary to obtain the same sample size. We illustrate this mapping with examples, highlighting a somewhat surprising “ap...

Rahul Mukerjee - One of the best experts on this subject based on the ideXlab platform.

  • On the approximate Frequentist validity of the posterior quantiles of a parametric function: results based on empirical and related likelihoods
    Test, 2011
    Co-Authors: In Hong Chang, Rahul Mukerjee
    Abstract:

    With reference to a wide class of empirical and related likelihoods, we study priors which ensure approximate Frequentist validity of the posterior quantiles of a general parametric function. It is seen that no data-free prior entails such Frequentist validity but, at least for the usual empirical likelihood, a data-dependent prior serves the purpose. Accounting for the nonlinearity of the parametric function of interest requires special attention in the derivation. A simulation study is seen to provide support, in finite samples, to our asymptotic results.

  • Asymptotic results on the Frequentist mean squared error of generalized Bayes point predictors
    Statistics & Probability Letters, 2004
    Co-Authors: In Hong Chang, Rahul Mukerjee
    Abstract:

    An asymptotic formula for the Frequentist mean squared error (MSE) of generalized Bayes point predictors is worked out. This formula yields an explicit second-order admissibility result when the underlying parameter is scalar valued. We note that probability-matching priors, including Jeffreys' prior, may not always behave well with respect to the MSE of generalized Bayes point predictors. On the other hand, it is seen that priors that match the posterior and Frequentist MSEs of such predictors can also keep the Frequentist MSE small.

  • bayesian prediction with approximate Frequentist validity
    Annals of Statistics, 2000
    Co-Authors: Gauri Sankar Datta, Rahul Mukerjee, Malay Ghosh, T J Sweeting
    Abstract:

    We characterize priors which asymptotically match the posterior coverage probability of a Bayesian prediction region with the corresponding Frequentist coverage probability. This is done considering both posterior quantiles and highest predictive density regions with reference to a future observation. The resulting priors are shown to be invariant under reparameterization. The role of Jeffreys' prior in this regard is also investigated. It is further shown that, for any given prior, it may be possible to choose an interval whose Bayesian predictive and Frequentist coverage probabilities are asymptotically matched.

  • Frequentist validity of posterior quantiles in the presence of a nuisance parameter higher order asymptotics
    Biometrika, 1993
    Co-Authors: Rahul Mukerjee
    Abstract:

    Given a random sample from a distribution with density function that depends on an unknown parameter θ = (θ₁, θ₂)′, we are concerned with Frequentist validity, up to o(n⁻¹), of posterior quantiles of θ₁, treating θ₂ as a nuisance parameter. We propose to make the best choice of the prior on θ by matching, as far as practicable, the posterior and Frequentist coverage probabilities up to o(n⁻¹).
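In generic form, the matching requirement asks that the posterior (1 − α) quantile of θ₁ also have Frequentist coverage 1 − α up to the stated order; in the special case where θ₁ is orthogonal to θ₂, the family of priors below is Tibshirani's well-known solution. These are standard results from the probability-matching literature, given here for orientation rather than as this paper's exact derivation:

```latex
% Frequentist validity of the posterior quantile, up to o(n^{-1}):
P_{\theta}\left\{ \theta_1 \le \theta_1^{(1-\alpha)}(\pi, X) \right\}
  = 1 - \alpha + o(n^{-1}) \quad \text{for all } \theta .

% When \theta_1 is orthogonal to the nuisance parameter \theta_2,
% i.e. the Fisher information matrix satisfies I_{12}(\theta) = 0,
% the matching priors take the form
\pi(\theta_1, \theta_2) \propto I_{11}(\theta)^{1/2} \, g(\theta_2),
% for an arbitrary smooth positive function g.
```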