Frequentist Approach


The Experts below are selected from a list of 360 Experts worldwide ranked by the ideXlab platform.

Dimitrios Psaltis - One of the best experts on this subject based on the ideXlab platform.

  • Statistics of Measuring Neutron Star Radii: Assessing a Frequentist and a Bayesian Approach
    The Astrophysical Journal, 2015
    Co-Authors: Feryal Ozel, Dimitrios Psaltis
    Abstract:

    Measuring neutron star radii with spectroscopic and timing techniques relies on the combination of multiple observables to break the degeneracies between the mass and radius introduced by general relativistic effects. Here, we explore a previously used frequentist and a newly proposed Bayesian framework to obtain the most likely value and the uncertainty in such a measurement. We find that for the expected range of masses and radii and for realistic measurement errors, the frequentist approach suffers from biases that are larger than the accuracy in the radius measurement required to distinguish between the different equations of state. In contrast, in the Bayesian framework, the inferred uncertainties are larger, but the most likely values do not suffer from such biases. We also investigate ways of quantifying the degree of consistency between different spectroscopic measurements from a single source. We show that a careful assessment of the systematic uncertainties in the measurements eliminates the need for introducing ad hoc biases, which lead to artificially large inferred radii.
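
The bias the abstract describes can be reproduced in a toy one-dimensional version of the problem (all numbers illustrative, not the paper's analysis): when a point estimate is obtained by inverting a nonlinear observable, Jensen's inequality biases it, while a Bayesian posterior mean over the parameter itself does not.

```python
# Toy sketch: estimate a "radius" R from noisy measurements of R^2.
# Frequentist-style: invert each measurement, then average (biased low
# by Jensen's inequality, since sqrt is concave).
# Bayesian: posterior mean of R under a flat prior on R.
import numpy as np

rng = np.random.default_rng(0)
R_TRUE, SIGMA = 10.0, 15.0                      # truth and noise on R^2
obs = R_TRUE**2 + SIGMA * rng.standard_normal(500)

freq_est = np.sqrt(np.clip(obs, 0.0, None)).mean()

grid = np.linspace(8.0, 12.0, 2001)             # flat prior on R
loglike = -0.5 * (((obs[:, None] - grid**2) / SIGMA) ** 2).sum(axis=0)
post = np.exp(loglike - loglike.max())          # subtract max to avoid underflow
post /= post.sum()
bayes_est = (grid * post).sum()

print(f"frequentist estimate:    {freq_est:.3f}")   # systematically below R_TRUE
print(f"Bayesian posterior mean: {bayes_est:.3f}")
```

The frequentist point estimate sits slightly but systematically below the true value; in the paper's context a bias of this kind is comparable to the radius accuracy needed to discriminate between equations of state.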

  • Statistics of Measuring Neutron Star Radii: The Bayesian vs. the Frequentist Approach
    arXiv: High Energy Astrophysical Phenomena, 2015
    Co-Authors: Feryal Ozel, Dimitrios Psaltis
    Abstract:

    Measuring neutron star radii with spectroscopic and timing techniques relies on the combination of multiple observables to break the degeneracies between the mass and radius introduced by general relativistic effects. Here, we explore a frequentist and a Bayesian framework to obtain the most likely value and the uncertainty in such a measurement. We find that, for the expected range of masses and radii and for realistic measurement errors, the frequentist approach suffers from biases that are larger than the accuracy in the radius measurement required to distinguish between the different equations of state. In contrast, in the Bayesian framework, the inferred uncertainties are larger, but the most likely values do not suffer from such biases. We also investigate ways of quantifying the degree of consistency between different spectroscopic measurements from a single source. We show that a careful assessment of the systematic uncertainties in the measurements eliminates the need for introducing ad hoc biases, which lead to artificially large inferred radii.

Judith D Goldberg - One of the best experts on this subject based on the ideXlab platform.

  • A hybrid Bayesian-frequentist approach to evaluate clinical trial designs for tests of superiority and non-inferiority
    Statistics in Medicine, 2008
    Co-Authors: Yongzhao Shao, Vandana Mukhi, Judith D Goldberg
    Abstract:

    Specification of the study objective of superiority or non-inferiority at the design stage of a phase III clinical trial can sometimes be very difficult due to the uncertainty that surrounds the efficacy level of the experimental treatment. This uncertainty makes it tempting for investigators to design a trial that would allow testing of both superiority and non-inferiority hypotheses. However, when a conventional single-stage design is used to test both hypotheses, the sample size is based on the chosen primary objective of either superiority or non-inferiority. In this situation, the power of the test for the secondary objective can be low, which may lead to a large loss of resources. Potentially low reproducibility is another major concern for the single-stage design in phase III trials, because significant findings of confirmatory trials are required to be reproducible. In this paper, we propose a hybrid Bayesian-frequentist approach to evaluate reproducibility and power in single-stage designs for phase III trials to test both superiority and non-inferiority. The essence of the proposed approach is to express the uncertainty that surrounds the efficacy of the experimental treatment as a probability distribution. Then one can use Bayes' formula with simple graphical techniques to evaluate reproducibility and power adequacy.
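
The core of the hybrid idea, placing a probability distribution on the treatment effect and propagating it through a frequentist power calculation, can be sketched as follows (effect-size prior, sample size, and test are all hypothetical, not the paper's specification):

```python
# Hybrid sketch: average the frequentist power of a one-sided two-sample
# z-test over a prior on the true effect ("assurance" / predicted power).
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

Z_ALPHA = 1.959964                  # one-sided 2.5% critical value

def power(delta, n, sigma=1.0):
    """Frequentist power with n patients per arm."""
    return phi(delta / (sigma * math.sqrt(2.0 / n)) - Z_ALPHA)

# Prior on the effect: optimistic mean, substantial uncertainty.
MU, TAU, N = 0.3, 0.2, 200
grid = [-0.5 + 1.5 * i / 2000 for i in range(2001)]
weights = [math.exp(-0.5 * ((d - MU) / TAU) ** 2) for d in grid]
assurance = sum(w * power(d, N) for d, w in zip(grid, weights)) / sum(weights)

print(f"power at the prior mean effect:    {power(MU, N):.3f}")
print(f"assurance (prior-averaged power):  {assurance:.3f}")
```

The prior-averaged power comes out well below the power evaluated at the point estimate, which is precisely the reproducibility concern the abstract raises for single-stage designs.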

John Gittins - One of the best experts on this subject based on the ideXlab platform.

  • A mixed Bayesian/frequentist approach in sample size determination problem for clinical trials
    Progress in Biological Sciences, 2016
    Co-Authors: Maryam Bideli, John Gittins, Hamid Pezeshk
    Abstract:

    In this paper we introduce a stochastic optimization method based on a mixed Bayesian/frequentist approach to a sample size determination problem in a clinical trial. The data are assumed to come from a normal distribution for which both the mean and the variance are unknown. In contrast to the usual Bayesian decision-theoretic methodology, which assumes a single decision maker, our method recognizes the existence of three decision makers, namely: the company conducting the trial, which decides on its size; the regulator, whose approval is necessary for the drug to be licensed for sale; and the public at large, who determine ultimate usage. Moreover, we model the subsequent usage by plausible assumptions for actual behaviour. A Markov chain Monte Carlo method is applied to find the maximum expected utility of conducting the trial. The sample size determination problem is an important task in the planning of trials, and may be formulated formally in statistical terms. The most frequently used methods are based on the required size and power of the trial for a specified treatment effect. Several authors have recognized the value of using prior distributions rather than point estimates in sample size calculations.
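
The "most frequently used methods" mentioned at the end — required size and power for a specified treatment effect — reduce to the familiar normal-approximation formula, sketched here with generic inputs:

```python
# Classical frequentist sample size per arm for a two-sample z-test:
#     n = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2
import math

def n_per_arm(delta, sigma, z_alpha=1.959964, z_beta=0.841621):
    """Defaults correspond to two-sided alpha = 0.05 and power = 0.80."""
    return math.ceil(2.0 * (sigma * (z_alpha + z_beta) / delta) ** 2)

print(n_per_arm(delta=0.5, sigma=1.0))   # standardized effect 0.5 -> 63 per arm
```

It is exactly this reliance on a single point estimate of the treatment effect that the mixed Bayesian/frequentist approach replaces with a prior distribution.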

  • The choice of sample size: a mixed Bayesian/frequentist approach
    Statistical Methods in Medical Research, 2009
    Co-Authors: Hamid Pezeshk, Nader Nematollahi, Vahed Maroufy, John Gittins
    Abstract:

    Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem, which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.
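
A stripped-down version of the expected-net-benefit criterion looks like the following (the gain on approval, per-patient cost, and effect size are hypothetical figures, not the paper's model):

```python
# Choose n to maximize: (benefit if the regulator's frequentist test
# succeeds) * power(n) - (cost of enrolling 2n patients).
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(n, delta=0.3, sigma=1.0, z_alpha=1.959964):
    """Regulator's one-sided z-test power with n patients per arm."""
    return phi(delta / (sigma * math.sqrt(2.0 / n)) - z_alpha)

GAIN, COST_PER_PATIENT = 1_000_000.0, 1_000.0   # hypothetical economics

def expected_net_benefit(n):
    return GAIN * power(n) - COST_PER_PATIENT * 2 * n

best_n = max(range(10, 1001), key=expected_net_benefit)
print(best_n, round(expected_net_benefit(best_n)))
```

Note that the optimum trades marginal power gains against marginal enrolment cost, so it generally differs from the sample size a pure power calculation would give.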

Felipe Rebello Lourenco - One of the best experts on this subject based on the ideXlab platform.

  • Frequentist approach for estimation of false decision risks in conformity assessment based on measurement uncertainty of liquid chromatography analytical procedures
    Journal of Pharmaceutical and Biomedical Analysis, 2020
    Co-Authors: Luciana Separovic, Felipe Rebello Lourenco
    Abstract:

    The measurement uncertainty (MU) related to analytical results can lead to false decisions in conformity assessment, such as incorrectly accepting or rejecting a medicine lot (consumer's and producer's risks, respectively). These risks can be global or specific. It is important to understand the different types of conformity decision risks, and the different approaches to estimating them, to ensure the reliability of analytical results. Thus, the aim of this work was to estimate the specific consumer's and producer's risks from the MU values of 64 liquid chromatography analytical procedures for antibiotic or antifungal assays, in order to evaluate their performance in conformity assessment. The specific risks of the analytical procedures were estimated by the frequentist approach assuming a normal distribution, using Microsoft Excel® software; in addition, a spreadsheet was created and made available as supplementary material for estimating specific risks by this approach. Moreover, the global risks of the analytical procedures were estimated using a Bayesian approach, assuming a uniform scenario for the production process. Finally, the estimates of specific risks by the Bayesian and frequentist approaches were compared. Only 39% of the evaluated analytical procedures had MU within the recommended limit. When the result is close to the specification limit, the risk can be significant; in such cases, one strategy is to adopt guard bands that contract or expand the specification limits, minimizing the risks. The spreadsheet shows the risk of a false decision for a given MU value, considering results both within and outside the specification limits, allowing the risk to be verified according to the analytical result obtained. The global risk values were practically equal to the expanded uncertainty values, since there is no tendency of the production process between lots within or outside the specification; but once the analytical result is known, the frequentist approach provides a more reliable risk estimate (the specific risk). The specific risks estimated by the Bayesian and frequentist approaches diverged owing to the influence of production process information on the former, which may overestimate or underestimate the consumer's and producer's risks relative to the frequentist approach. Failures in medicine conformity assessment can cause much damage; therefore, preventive actions such as developing, evaluating and/or optimizing analytical procedures are essential to guarantee measurement uncertainties at or below the target and to adopt routine strategies that minimize the risk of false decisions in conformity assessment.
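
The specific-risk calculation described here amounts to a normal tail area centred on the reported result (the result, uncertainty, and specification limits below are illustrative, not values from the paper):

```python
# Frequentist "specific risk": treat the true content as N(y, u^2), where
# y is the measured result and u its standard uncertainty, and compute the
# probability that the accept/reject decision implied by y is wrong.
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def specific_risk(y, u, lsl, usl):
    """Mass of N(y, u^2) outside [lsl, usl] if y is inside the limits
    (consumer-side risk), or inside the limits if y is outside
    (producer-side risk)."""
    p_out = phi((lsl - y) / u) + 1.0 - phi((usl - y) / u)
    return p_out if lsl <= y <= usl else 1.0 - p_out

# Assay result 98.5% of label claim, u = 1.2%, limits 95.0-105.0%.
print(f"risk at y = 98.5: {specific_risk(98.5, 1.2, 95.0, 105.0):.4f}")
# Close to the lower specification limit the risk grows sharply:
print(f"risk at y = 95.5: {specific_risk(95.5, 1.2, 95.0, 105.0):.4f}")
```

This is the computation a guard band acts on: shrinking the acceptance interval caps the consumer-side risk for results near the limits.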

  • Conformity decisions based on measurement uncertainty: a case study applied to agar diffusion microbiological assay
    Journal of Pharmaceutical Innovation, 2020
    Co-Authors: Luciana Separovic, Maria Luiza De Godoy Bertanha, Alessandro Morais Saviano, Felipe Rebello Lourenco
    Abstract:

    Antimicrobial activity of drug products containing antibiotics is often measured using microbiological assays. However, the high values of measurement uncertainty associated with the analytical results obtained from microbiological assays may be an issue for conformity decisions. The aim of this work was to estimate the risk of false decisions in conformity assessment due to measurement uncertainty for the potency of apramycin in pharmaceutical drug products. Monte Carlo method (MCM) simulations were performed in order to estimate global consumers' (Rc) and producers' (Rp) risks using a Bayesian approach, and specific consumers' (R′c) and producers' (R′p) risks using a frequentist approach. Despite the high value of measurement uncertainty, Rc and Rp were found to be 0.0% and 0.3%, respectively. However, R′c and R′p were found to be high when the analytical result is close to the specification limits. Risk estimation using the Bayesian approach is recommended for manufacturers, while the frequentist approach may be an alternative for regulatory and third-party laboratories.
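
A minimal Monte Carlo sketch of the global-risk estimation (the process distribution, measurement uncertainty, and limits are assumed values, not the apramycin figures):

```python
# Global risks by Monte Carlo: draw the true potency from a process
# distribution (the Bayesian prior), add a measurement error, and compare
# the accept/reject decision against the truth.
import random

random.seed(1)
LSL, USL = 95.0, 105.0          # specification limits, % of label claim
N = 200_000

rc = rp = 0
for _ in range(N):
    true = random.gauss(100.0, 2.0)       # process (prior) distribution
    meas = true + random.gauss(0.0, 3.0)  # large measurement uncertainty
    accepted = LSL <= meas <= USL
    conforming = LSL <= true <= USL
    if accepted and not conforming:
        rc += 1                 # global consumer's risk event
    if not accepted and conforming:
        rp += 1                 # global producer's risk event

print(f"Rc = {rc / N:.3%}, Rp = {rp / N:.3%}")
```

With a well-centred process, the global consumer's risk stays small even under large measurement uncertainty, mirroring the paper's finding that Rc and Rp can be low while the specific risks near the limits are high.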

Manuela A Joore - One of the best experts on this subject based on the ideXlab platform.

  • Sample size estimation for non-inferiority trials: frequentist approach versus decision theory approach
    PLOS ONE, 2015
    Co-Authors: A C Bouman, A J Ten Cate-Hoek, Bram L T Ramaekers, Manuela A Joore
    Abstract:

    Background: Non-inferiority trials are performed when the main therapeutic effect of the new therapy is expected to be not unacceptably worse than that of the standard therapy, and the new therapy is expected to have advantages over the standard therapy in costs or other (health) consequences. These advantages, however, are not included in the classic frequentist approach to sample size calculation for non-inferiority trials. In contrast, the decision theory approach to sample size calculation does include these factors. The objective of this study is to compare the conceptual and practical aspects of the frequentist approach and the decision theory approach to sample size calculation for non-inferiority trials, thereby demonstrating that the decision theory approach is more appropriate for sample size calculation in non-inferiority trials. Methods: The frequentist approach and the decision theory approach to sample size calculation for non-inferiority trials are compared and applied to a case of a non-inferiority trial on individually tailored duration of elastic compression stocking therapy compared with two years of elastic compression stocking therapy for the prevention of post-thrombotic syndrome after deep vein thrombosis. Results: The two approaches differ substantially in conceptual background, analytical approach, and input requirements. The sample size calculated according to the frequentist approach yielded 788 patients, using a power of 80% and a one-sided significance level of 5%. The decision theory approach indicated that the optimal sample size was 500 patients, with a net value of €92 million. Conclusions: This study demonstrates and explains the differences between the classic frequentist approach and the decision theory approach to sample size calculation for non-inferiority trials. We argue that the decision theory approach to sample size estimation is most suitable for sample size calculation in non-inferiority trials.
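
For reference, the classic frequentist non-inferiority calculation that the study contrasts with the decision theory approach looks like this (the inputs below are generic illustrations, not the stocking-trial parameters, so the result does not reproduce the 788 figure):

```python
# Patients per arm to show the new therapy is no worse than the standard
# by more than `margin` (proportions, normal approximation, true
# difference assumed zero).
import math

def n_noninferiority(p_std, margin, z_alpha=1.644854, z_beta=0.841621):
    """Defaults correspond to one-sided alpha = 0.05 and power = 0.80."""
    var = 2.0 * p_std * (1.0 - p_std)
    return math.ceil(var * ((z_alpha + z_beta) / margin) ** 2)

# Standard-therapy event rate 25%, non-inferiority margin 10 percentage points.
print(n_noninferiority(p_std=0.25, margin=0.10))   # -> 232 per arm
```

Nothing in this formula reflects cost savings or other advantages of the new therapy; that omission is exactly what the decision theory approach addresses.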