Bayesian

The experts below are selected from a list of 679,485 experts worldwide, ranked by the ideXlab platform.

Stephen G. Walker - One of the best experts on this subject based on the ideXlab platform.

  • a general framework for updating belief distributions
    2016
    Co-Authors: Pier Giovanni Bissiri, Christopher Holmes, Stephen G. Walker
    Abstract:

    We propose a framework for general Bayesian inference. We argue that a valid update of a prior belief distribution to a posterior can be made for parameters which are connected to observations through a loss function rather than the traditional likelihood function, which is recovered as a special case. Modern application areas make it increasingly challenging for Bayesians to attempt to model the true data-generating mechanism. For instance, when the object of interest is low dimensional, such as a mean or median, it is cumbersome to have to achieve this via a complete model for the whole data distribution. More importantly, there are settings where the parameter of interest does not directly index a family of density functions and thus the Bayesian approach to learning about such parameters is currently regarded as problematic. Our framework uses loss functions to connect information in the data to functionals of interest. The updating of beliefs then follows from a decision theoretic approach involving cumulative loss functions. Importantly, the procedure coincides with Bayesian updating when a true likelihood is known yet provides coherent subjective inference in much more general settings. Connections to other inference frameworks are highlighted.
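
The loss-based update described in this abstract (often called a Gibbs or general Bayesian posterior) can be illustrated numerically. The sketch below is not from the paper: it uses an absolute-error loss to target a median, a standard normal prior, and a loss weight fixed to 1, all of which are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_cauchy(50)   # heavy-tailed data; the target functional is the median

theta = np.linspace(-5, 5, 1001)     # parameter grid
prior = np.exp(-0.5 * theta**2)      # unnormalised N(0, 1) prior
prior /= prior.sum()

# Cumulative absolute-error loss targets the median directly,
# with no likelihood for the data ever written down.
loss = np.abs(x[:, None] - theta[None, :]).sum(axis=0)

w = 1.0  # loss weight ("learning rate"); fixed to 1 here for illustration
post = prior * np.exp(-w * (loss - loss.min()))   # subtract min for numerical stability
post /= post.sum()

point_estimate = theta[np.argmax(post)]
```

With absolute-error loss the cumulative-loss posterior concentrates near the sample median even though no model for the data distribution was specified; the weight w, which the general framework leaves to be calibrated, simply controls how peaked the resulting belief distribution is.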

  • a general framework for updating belief distributions
    2013
    Co-Authors: Pier Giovanni Bissiri, Christopher Holmes, Stephen G. Walker
    Abstract:

    We propose a framework for general Bayesian inference. We argue that a valid update of a prior belief distribution to a posterior can be made for parameters which are connected to observations through a loss function rather than the traditional likelihood function, which is recovered under the special case of using the self-information loss. Modern application areas make it increasingly challenging for Bayesians to attempt to model the true data-generating mechanism. Moreover, when the object of interest is low dimensional, such as a mean or median, it is cumbersome to have to achieve this via a complete model for the whole data distribution. More importantly, there are settings where the parameter of interest does not directly index a family of density functions and thus the Bayesian approach to learning about such parameters is currently regarded as problematic. Our proposed framework uses loss functions to connect information in the data to functionals of interest. The updating of beliefs then follows from a decision theoretic approach involving cumulative loss functions. Importantly, the procedure coincides with Bayesian updating when a true likelihood is known, yet provides coherent subjective inference in much more general settings. Connections to other inference frameworks are highlighted.

Rianne De Heide - One of the best experts on this subject based on the ideXlab platform.

  • why optional stopping can be a problem for Bayesians
    2021
    Co-Authors: Rianne De Heide, Peter Grunwald
    Abstract:

    Recently, optional stopping has been a subject of debate in the Bayesian psychology community. Rouder (2014) argues that optional stopping is no problem for Bayesians, and even recommends the use of optional stopping in practice, as do Wagenmakers et al. (2012). This article addresses the question of whether optional stopping is problematic for Bayesian methods, and specifies under which circumstances and in which sense it is and is not. By slightly varying and extending Rouder's (2014) experiments, we illustrate that, as soon as the parameters of interest are equipped with default or pragmatic priors - which means, in most practical applications of Bayes factor hypothesis testing - resilience to optional stopping can break down. We distinguish between three types of default priors, each having their own specific issues with optional stopping, ranging from no-problem-at-all (Type 0 priors) to quite severe (Type II priors).
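
To make the optional-stopping setting concrete, here is a toy sequential Bayes-factor simulation; it is my own illustration, not the authors' experiment. It tests a normal mean with a N(0,1) prior on the alternative, samples under a true null, and stops as soon as BF10 reaches 3.

```python
import numpy as np

rng = np.random.default_rng(1)

def bf10(s, n, sigma0=1.0):
    """Bayes factor for H1: theta ~ N(0, sigma0^2) against H0: theta = 0,
    after n observations x_i ~ N(theta, 1) with running sum s."""
    v = 1.0 + n * sigma0**2
    return np.exp(sigma0**2 * s**2 / (2.0 * v)) / np.sqrt(v)

def stops_early(threshold=3.0, n_max=200):
    """Sample under the null (theta = 0), stopping as soon as BF10 >= threshold."""
    s = 0.0
    for n in range(1, n_max + 1):
        s += rng.normal()
        if bf10(s, n) >= threshold:
            return True   # declared 'evidence' against a true null
    return False

runs = 2000
rate = sum(stops_early() for _ in range(runs)) / runs
```

For a proper, calibrated prior such as this one, the probability of the Bayes factor ever crossing a threshold k under the null is bounded by 1/k, yet the stopped rate is far higher than the corresponding fixed-sample-size rate, which illustrates one sense in which stopped Bayes factors cannot simply be read as frequentist error control.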

  • on the truth convergence of open minded Bayesianism
    2021
    Co-Authors: Tom F Sterkenburg, Rianne De Heide
    Abstract:

    Wenmackers and Romeijn (2016) formalize ideas going back to Shimony (1970) and Putnam (1963) into an open-minded Bayesian inductive logic that can dynamically incorporate statistical hypotheses proposed in the course of the learning process. In this paper, we show that Wenmackers and Romeijn's proposal does not preserve the classical Bayesian consistency guarantee of merger with the true hypothesis. We diagnose the problem, and offer a forward-looking open-minded Bayesianism that does preserve a version of this guarantee.
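
The notion of merger with the true hypothesis can be pictured with a toy open-minded update; this is a generic sketch, not Wenmackers and Romeijn's formal system nor the authors' proposal. A coin-bias learner starts with a hypothesis set that excludes the truth, then incorporates the true hypothesis mid-stream with reserved prior mass 1/2 (an arbitrary illustrative choice).

```python
import numpy as np

rng = np.random.default_rng(2)
true_p = 0.7
data = rng.random(500) < true_p    # Bernoulli(0.7) observations

def log_lik(p, xs):
    k = xs.sum()
    return k * np.log(p) + (len(xs) - k) * np.log(1.0 - p)

# Initial hypothesis set deliberately excludes the truth.
hyps = [0.3, 0.5]
log_post = np.log(np.full(len(hyps), 1.0 / len(hyps)))

# Closed-minded phase: update on the first 100 observations.
first, rest = data[:100], data[100:]
log_post = log_post + np.array([log_lik(p, first) for p in hyps])
log_post -= np.logaddexp.reduce(log_post)

# Open-minded step: add the true hypothesis p = 0.7 mid-stream,
# reserving prior mass 1/2 for it and shrinking the old posterior to 1/2.
hyps.append(0.7)
log_post = np.append(log_post + np.log(0.5), np.log(0.5))

# Continue updating on the remaining 400 observations.
log_post += np.array([log_lik(p, rest) for p in hyps])
log_post -= np.logaddexp.reduce(log_post)
post = np.exp(log_post)   # posterior mass on [0.3, 0.5, 0.7]
```

In this simple case the posterior mass on the newly added true hypothesis goes to 1; the paper's point is that convergence of this kind is not automatic for every scheme of reassigning prior mass during learning.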

  • Why optional stopping can be a problem for Bayesians
    2020
    Co-Authors: Rianne De Heide, Peter D. Grünwald
    Abstract:

    Recently, optional stopping has been a subject of debate in the Bayesian psychology community. Rouder (Psychonomic Bulletin & Review, 21(2), 301–308, 2014) argues that optional stopping is no problem for Bayesians, and even recommends the use of optional stopping in practice, as do Wagenmakers, Wetzels, Borsboom, van der Maas and Kievit (Perspectives on Psychological Science, 7, 627–633, 2012). This article addresses the question of whether optional stopping is problematic for Bayesian methods, and specifies under which circumstances and in which sense it is and is not. By slightly varying and extending Rouder's (2014) experiments, we illustrate that, as soon as the parameters of interest are equipped with default or pragmatic priors - which means, in most practical applications of Bayes factor hypothesis testing - resilience to optional stopping can break down. We distinguish between three types of default priors, each having their own specific issues with optional stopping, ranging from no-problem-at-all (type 0 priors) to quite severe (type II priors).

  • why optional stopping is a problem for Bayesians
    2017
    Co-Authors: Rianne De Heide, Peter Grunwald
    Abstract:

    Recently, optional stopping has been a subject of debate in the Bayesian psychology community. Rouder (2014) argues that optional stopping is no problem for Bayesians, and even recommends the use of optional stopping in practice, as do Wagenmakers et al. (2012). This article addresses the question of whether optional stopping is problematic for Bayesian methods, and specifies under which circumstances and in which sense it is and is not. By slightly varying and extending Rouder's (2014) experiment, we illustrate that, as soon as the parameters of interest are equipped with default or pragmatic priors - which means, in most practical applications of Bayes factor hypothesis testing - resilience to optional stopping can break down. We distinguish between four types of default priors, each having their own specific issues with optional stopping, ranging from no-problem-at-all (Type 0 priors) to quite severe (Type II and III priors).

Donald A Berry - One of the best experts on this subject based on the ideXlab platform.

  • a case for Bayesianism in clinical trials
    1993
    Co-Authors: Donald A Berry
    Abstract:

    This paper describes a Bayesian approach to the design and analysis of clinical trials, and compares it with the frequentist approach. Both approaches address learning under uncertainty, but they differ in a variety of ways. The Bayesian approach is more flexible. For example, accumulating data from a clinical trial can be used to update Bayesian measures, independent of the design of the trial. Frequentist measures are tied to the design, and interim analyses must be planned for frequentist measures to have meaning. Its flexibility makes the Bayesian approach ideal for analysing data from clinical trials. In carrying out a Bayesian analysis for inferring treatment effect, information from the clinical trial and other sources can be combined and used explicitly in drawing conclusions. Bayesians and frequentists address making decisions very differently. For example, when choosing or modifying the design of a clinical trial, Bayesians use all available information, including that which comes from the trial itself. The ability to calculate predictive probabilities for future observations is a distinct advantage of the Bayesian approach to designing clinical trials and to making other decisions. An important difference between Bayesian and frequentist thinking is the role of randomization.
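
The predictive probabilities mentioned in the abstract can be sketched with a standard Beta-Binomial calculation; the numbers below (12 responses in 20 patients at an interim look, with success meaning at least 26 responses out of 40) are hypothetical, not taken from the paper.

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def predictive_success_prob(x, n, n_total, x_needed, a=1.0, b=1.0):
    """Posterior predictive probability that a trial with x responses among
    n patients so far ends with at least x_needed responses out of n_total,
    under a Beta(a, b) prior on the response rate (Beta-Binomial prediction)."""
    m = n_total - n                       # patients still to be enrolled
    a_post, b_post = a + x, b + (n - x)   # posterior after the interim data
    prob = 0.0
    for k in range(max(0, x_needed - x), m + 1):   # future responses needed
        prob += comb(m, k) * exp(log_beta(a_post + k, b_post + m - k)
                                 - log_beta(a_post, b_post))
    return prob

p_success = predictive_success_prob(x=12, n=20, n_total=40, x_needed=26)
```

This is exactly the kind of quantity a frequentist interim analysis cannot produce without reference to the design: it conditions only on the data observed so far, regardless of why the interim look was taken.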

Mark Steyvers - One of the best experts on this subject based on the ideXlab platform.

  • Bayesian models of cognition revisited setting optimality aside and letting data drive psychological theory
    2017
    Co-Authors: Sean Tauber, Daniel J Navarro, Amy Perfors, Mark Steyvers
    Abstract:

    Recent debates in the psychological literature have raised questions about the assumptions that underpin Bayesian models of cognition and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are 2 qualitatively different ways in which a Bayesian model could be constructed. The most common approach uses a Bayesian model as a normative standard upon which to license a claim about optimality. In the alternative approach, a descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present 3 case studies in which these 2 perspectives lead to different computational models and license different conclusions about human cognition. We demonstrate how the descriptive Bayesian approach can be used to answer different sorts of questions than the optimal approach, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the 2 perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended.
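
One way to picture the descriptive move is to treat a prior parameter as a free psychological quantity estimated from responses, rather than fixing it on normative grounds. The sketch below is illustrative only: the response numbers are invented, and the model (posterior-mean probability judgments under a symmetric Beta prior) is a stand-in for the models used in the paper's case studies.

```python
import numpy as np

# Invented "response" data: each row is (heads seen, flips seen, mean judged P(heads)).
# The judgments are conservative, i.e. pulled toward 0.5.
obs = np.array([[1.0, 2.0, 0.55],
                [3.0, 4.0, 0.65],
                [7.0, 10.0, 0.64],
                [9.0, 10.0, 0.72]])

def model_pred(a, heads, flips):
    """Posterior-mean probability judgment under a symmetric Beta(a, a) prior."""
    return (heads + a) / (flips + 2.0 * a)

# Descriptive move: the prior strength a is a free psychological parameter,
# estimated from the responses rather than fixed at a normative value such as a = 1.
grid = np.linspace(0.1, 20.0, 400)
sse = np.array([np.sum((model_pred(a, obs[:, 0], obs[:, 1]) - obs[:, 2]) ** 2)
                for a in grid])
a_hat = grid[int(np.argmin(sse))]
```

Under the optimal reading, a would need external justification before the model is fit; under the descriptive reading, the fitted a is itself a substantive claim about the strength of people's prior beliefs, to be evaluated against competing models.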

  • Bayesian models of cognition revisited: Setting optimality aside and letting data drive psychological theory
    2016
    Co-Authors: Sean Tauber, Daniel J Navarro, Amy Perfors, Mark Steyvers
    Abstract:

    Recent debates in the psychological literature have raised questions about what assumptions underpin Bayesian models of cognition, and what inferences they license about human cognition. In this paper we revisit this topic, arguing that there are two qualitatively different ways in which a Bayesian model could be constructed. If a Bayesian model is intended to license a claim about optimality then the priors and likelihoods in the model must be constrained by reference to some external criterion. A descriptive Bayesian model need not correspond to any claim that the underlying cognition is optimal or rational, and is used solely as a tool for instantiating a substantive psychological theory. We present three case studies in which these two perspectives lead to different computational models and license different conclusions about human cognition. We argue that the descriptive Bayesian approach is more useful overall, especially when combined with principled tools for model evaluation and model selection. More generally we argue for the importance of making a clear distinction between the two perspectives. Considerable confusion results when descriptive models and optimal models are conflated, and if Bayesians are to avoid contributing to this confusion it is important to avoid making normative claims when none are intended.

Pier Giovanni Bissiri - One of the best experts on this subject based on the ideXlab platform.

  • a general framework for updating belief distributions
    2016
    Co-Authors: Pier Giovanni Bissiri, Christopher Holmes, Stephen G. Walker
    Abstract:

    We propose a framework for general Bayesian inference. We argue that a valid update of a prior belief distribution to a posterior can be made for parameters which are connected to observations through a loss function rather than the traditional likelihood function, which is recovered as a special case. Modern application areas make it increasingly challenging for Bayesians to attempt to model the true data-generating mechanism. For instance, when the object of interest is low dimensional, such as a mean or median, it is cumbersome to have to achieve this via a complete model for the whole data distribution. More importantly, there are settings where the parameter of interest does not directly index a family of density functions and thus the Bayesian approach to learning about such parameters is currently regarded as problematic. Our framework uses loss functions to connect information in the data to functionals of interest. The updating of beliefs then follows from a decision theoretic approach involving cumulative loss functions. Importantly, the procedure coincides with Bayesian updating when a true likelihood is known yet provides coherent subjective inference in much more general settings. Connections to other inference frameworks are highlighted.

  • a general framework for updating belief distributions
    2013
    Co-Authors: Pier Giovanni Bissiri, Christopher Holmes, Stephen G. Walker
    Abstract:

    We propose a framework for general Bayesian inference. We argue that a valid update of a prior belief distribution to a posterior can be made for parameters which are connected to observations through a loss function rather than the traditional likelihood function, which is recovered under the special case of using the self-information loss. Modern application areas make it increasingly challenging for Bayesians to attempt to model the true data-generating mechanism. Moreover, when the object of interest is low dimensional, such as a mean or median, it is cumbersome to have to achieve this via a complete model for the whole data distribution. More importantly, there are settings where the parameter of interest does not directly index a family of density functions and thus the Bayesian approach to learning about such parameters is currently regarded as problematic. Our proposed framework uses loss functions to connect information in the data to functionals of interest. The updating of beliefs then follows from a decision theoretic approach involving cumulative loss functions. Importantly, the procedure coincides with Bayesian updating when a true likelihood is known, yet provides coherent subjective inference in much more general settings. Connections to other inference frameworks are highlighted.