Assigning Probability

The experts below are selected from a list of 96 experts worldwide, ranked by the ideXlab platform.

Ronald Meester - One of the best experts on this subject based on the ideXlab platform.

  • Uniquely Determined Uniform Probability on the Natural Numbers
    Journal of Theoretical Probability, 2016
    Co-Authors: Timber Kerkvliet, Ronald Meester
    Abstract:

    In this paper, we address the problem of constructing a uniform probability measure on $\mathbb{N}$. Of course, this is not possible within the bounds of the Kolmogorov axioms, and we have to violate at least one axiom. We define a probability measure as a finitely additive measure assigning probability 1 to the whole space, on a domain which is closed under complements and finite disjoint unions. We introduce and motivate a notion of uniformity which we call weak thinnability, which is strictly stronger than extension of natural density. We construct a weakly thinnable probability measure, and we show that on its domain, which contains sets without natural density, probability is uniquely determined by weak thinnability. In this sense, we can assign uniform probabilities in a canonical way. We generalize this result to uniform probability measures on other metric spaces, including $\mathbb{R}^n$.

  • Canonical uniform probability measures
    arXiv: Probability, 2014
    Co-Authors: Timber Kerkvliet, Ronald Meester
    Abstract:

    In this paper, we address the problem of constructing a uniform probability measure on $\mathbb{N}$. Of course, this is not possible within the bounds of the Kolmogorov axioms, and we have to violate at least one axiom. We define a probability measure as a finitely additive measure assigning probability $1$ to the whole space, on a domain which is closed under complements and finite disjoint unions. We introduce and motivate a notion of uniformity which we call weak thinnability, which is strictly stronger than extension of natural density. We construct a weakly thinnable probability measure, and we show that on its domain, which contains sets without natural density, probability is uniquely determined by weak thinnability. In this sense, we can assign uniform probabilities in a canonical way. We generalize this result to uniform probability measures on other metric spaces, including $\mathbb{R}^n$.
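
The natural density used in these two abstracts is the limit $d(A) = \lim_{n\to\infty} |A \cap \{1,\dots,n\}|/n$, when it exists. The sketch below is a hedged numerical illustration of that definition (it is not code from the papers): the even numbers have density 1/2, while a set assembled from dyadic blocks has oscillating partial densities and hence no natural density.

```python
# Hedged illustration of natural density d(A) = lim |A ∩ {1,...,n}| / n.
# Not taken from the papers above; the sets and cut-offs are chosen for demonstration.

def partial_density(indicator, n):
    """Fraction of the integers 1..n that belong to the set."""
    return sum(1 for k in range(1, n + 1) if indicator(k)) / n

is_even = lambda k: k % 2 == 0                      # natural density 1/2

# Integers whose binary length is even, i.e. the union of blocks [2, 4), [8, 16), ...
# The partial densities oscillate (roughly between 1/3 and 2/3 along subsequences),
# so this set has no natural density.
even_bit_length = lambda k: k.bit_length() % 2 == 0

for n in (10**3, 10**4, 10**5, 10**6):
    print(n,
          round(partial_density(is_even, n), 3),
          round(partial_density(even_bit_length, n), 3))
```

A uniform measure in the authors' sense still has to assign the second set a probability; the papers show that weak thinnability pins that value down uniquely on the measure's domain.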

Roland Preuss - One of the best experts on this subject based on the ideXlab platform.

  • Maximum entropy and Bayesian data analysis: Entropic prior distributions.
    Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, 2004
    Co-Authors: Ariel Caticha, Roland Preuss
    Abstract:

    The problem of assigning probability distributions which reflect the prior information available about experiments is one of the major stumbling blocks in the use of Bayesian methods of data analysis. In this paper the method of maximum (relative) entropy (ME) is used to translate the information contained in the known form of the likelihood into a prior distribution for Bayesian inference. The argument is inspired and guided by intuition gained from the successful use of ME methods in statistical mechanics. For experiments that cannot be repeated, the resulting "entropic prior" is formally identical with the Einstein fluctuation formula. For repeatable experiments, however, the expected value of the entropy of the likelihood turns out to be relevant information that must be included in the analysis. The important case of a Gaussian likelihood is treated in detail.
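
The ME step referred to in this abstract is, in its generic form, the selection of the distribution that maximizes the relative entropy $-\sum_i p_i \log(p_i/q_i)$ with respect to a prior $q$ subject to whatever constraints encode the available information; with a single expectation constraint the solution is the exponential tilt $p_i \propto q_i e^{\lambda x_i}$. The sketch below illustrates only that generic step on a discrete grid (a hedged example with an invented constraint value; it is not the entropic-prior construction of the paper).

```python
# Generic maximum-(relative-)entropy update on a discrete grid.
# Background illustration only -- not the entropic prior derived in the paper.
import numpy as np
from scipy.optimize import brentq

x = np.linspace(-5.0, 5.0, 2001)           # support grid
q = np.ones_like(x) / x.size               # flat prior on the grid

def me_distribution(lam):
    """p_i proportional to q_i * exp(lam * x_i): the maximizer of -sum p log(p/q)
    under normalization and a constraint on the expectation of x."""
    w = q * np.exp(lam * x)
    return w / w.sum()

def mean_gap(lam, target):
    return me_distribution(lam) @ x - target

target_mean = 1.3                           # hypothetical constraint: <x> = 1.3
lam = brentq(mean_gap, -50.0, 50.0, args=(target_mean,))
p = me_distribution(lam)
print(lam, p @ x)                           # the recovered mean matches the constraint
```

In the article, the information fed into this machinery includes the expected entropy of the likelihood rather than a simple moment, which is what yields the entropic prior.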

Pedro Roth - One of the best experts on this subject based on the ideXlab platform.

  • Assigning probability density functions in a context of information shortage
    Metrologia, 2004
    Co-Authors: Raul R. Cordero, Pedro Roth
    Abstract:

    In the context of experimental information shortage, uncertainty evaluation of a directly measured quantity involves obtaining its standard uncertainty as the standard deviation of an assigned probability density function (pdf) that is assumed to apply. In this article, we present a criterion to select the appropriate pdf associated with the estimate of a quantity by seeking that pdf which is the most probable among those which agree with the available information. As examples, we apply this criterion to assign the proper pdf to a measurand assuming that we know just its estimate, or both its estimate and its standard uncertainty. Our results agree with those obtained by applying the principle of maximum entropy to both situations.
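
For the second situation mentioned in the abstract (both the estimate and the standard uncertainty are known), the maximum-entropy assignment is the Gaussian with that mean and standard deviation. The fragment below numerically checks that standard fact by comparing the differential entropy of a Gaussian with that of a uniform pdf sharing the same mean and standard uncertainty (a hedged sketch with invented numbers, not code from the article).

```python
# Hedged check: among pdfs with a fixed mean and standard deviation, the Gaussian
# has the largest differential entropy. The numbers below are hypothetical.
import numpy as np

mean, u = 10.0, 0.2                         # hypothetical estimate and standard uncertainty

def entropy(pdf, lo, hi, n=200001):
    """Differential entropy -∫ p log p dx, approximated by a Riemann sum."""
    x = np.linspace(lo, hi, n)
    dx = x[1] - x[0]
    p = np.maximum(pdf(x), 1e-300)          # avoid log(0) outside the support
    return float(-(p * np.log(p)).sum() * dx)

gauss = lambda x: np.exp(-(x - mean)**2 / (2 * u**2)) / (u * np.sqrt(2 * np.pi))
half_width = u * np.sqrt(3.0)               # uniform pdf with the same standard deviation
unif = lambda x: np.where(np.abs(x - mean) <= half_width, 1.0 / (2 * half_width), 0.0)

print(entropy(gauss, mean - 10 * u, mean + 10 * u))   # ~ ln(u * sqrt(2*pi*e)) ~ -0.191
print(entropy(unif,  mean - 10 * u, mean + 10 * u))   # ~ ln(2 * half_width)   ~ -0.367
```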

Doug Downey - One of the best experts on this subject based on the ideXlab platform.

  • Stolen Probability: A Structural Weakness of Neural Language Models
    arXiv: Learning, 2020
    Co-Authors: David Demeter, Gregory J. Kimmel, Doug Downey
    Abstract:

    Neural Network Language Models (NNLMs) generate probability distributions by applying a softmax function to a distance metric formed by taking the dot product of a prediction vector with all word vectors in a high-dimensional embedding space. The dot-product distance metric forms part of the inductive bias of NNLMs. Although NNLMs optimize well with this inductive bias, we show that this results in a sub-optimal ordering of the embedding space that structurally impoverishes some words at the expense of others when assigning probability. We present numerical, theoretical and empirical analyses showing that words on the interior of the convex hull in the embedding space have their probability bounded by the probabilities of the words on the hull.

  • ACL - Stolen Probability: A Structural Weakness of Neural Language Models
    Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020
    Co-Authors: David Demeter, Gregory J. Kimmel, Doug Downey
    Abstract:

    Neural Network Language Models (NNLMs) generate probability distributions by applying a softmax function to a distance metric formed by taking the dot product of a prediction vector with all word vectors in a high-dimensional embedding space. The dot-product distance metric forms part of the inductive bias of NNLMs. Although NNLMs optimize well with this inductive bias, we show that this results in a sub-optimal ordering of the embedding space that structurally impoverishes some words at the expense of others when assigning probability. We present numerical, theoretical and empirical analyses which show that words on the interior of the convex hull in the embedding space have their probability bounded by the probabilities of the words on the hull.
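
The convex-hull argument in these two entries is easy to reproduce: if a word's embedding is a convex combination of other words' embeddings, its dot-product logit with any prediction vector is the same convex combination of their logits, so it can never exceed the largest of them, and under softmax its probability is bounded by the probability of some word on the hull. The following is a minimal numpy illustration of that bound with made-up two-dimensional vectors and no bias terms; it is not the authors' experimental code.

```python
# Hedged toy demonstration of "stolen probability": a word vector strictly inside
# the convex hull of the others never receives the highest softmax probability.
# Vectors, contexts, and dimensions are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

hull = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # hull words
interior = hull.mean(axis=0, keepdims=True)        # a word inside the convex hull
W = np.vstack([hull, interior])                    # toy vocabulary embedding matrix

def softmax(z):
    z = z - z.max()                                # numerical stability
    e = np.exp(z)
    return e / e.sum()

for _ in range(10_000):
    h = rng.normal(size=2)                         # random prediction (context) vector
    p = softmax(W @ h)                             # dot-product logits -> probabilities
    assert p[-1] <= p[:-1].max() + 1e-12           # the interior word never wins
print("interior word was never the most probable word")
```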