Unique Probability

The Experts below are selected from a list of 300 Experts worldwide ranked by ideXlab platform

Giuseppe Toscani - One of the best experts on this subject based on the ideXlab platform.

  • Poincaré-type inequalities for stable densities
    Ricerche di Matematica, 2018
    Co-Authors: Giuseppe Toscani
    Abstract:

    In a recent paper we introduced the concept of fractional score, a generalization of the linear score function, well known in theoretical statistics. As the Gaussian density is closely related to the linear score, the fractional score function allows one to identify Lévy stable laws as the unique probability densities for which the score of a random variable X is proportional to $$-X$$. We use this analogy to extend the classical Poincaré inequality for Gaussian densities to stable laws. Application of this inequality yields bounds on moments of stable laws, and a sharp one-dimensional version of the Hardy–Poincaré inequality.

  • Score functions, generalized relative Fisher information and applications
    Ricerche di Matematica, 2017
    Co-Authors: Giuseppe Toscani
    Abstract:

    Generalizations of the linear score function, a well-known concept in theoretical statistics, are introduced. As the Gaussian density and the classical Fisher information are closely related to the linear score, nonlinear (respectively fractional) score functions allow one to identify generalized Gaussian densities (respectively Lévy stable laws) as the unique probability densities for which the score of a random variable X is proportional to $$-X$$. In all cases, it is shown that the variance of the score relative to the generalized Gaussian density (respectively the Lévy stable law) provides an upper bound for the $$L^1$$-distance from the generalized Gaussian density (respectively the Lévy stable law). Connections with nonlinear and fractional Fokker–Planck type equations are introduced and discussed.
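
    The defining property cited in the abstract, that the Gaussian is the density whose score is proportional to $$-X$$, can be checked numerically. The sketch below is illustrative only (the function names are my own, not the paper's); it approximates the linear score $$\rho'/\rho = (\log \rho)'$$ of a standard Gaussian by finite differences and confirms it equals $$-x$$.

```python
import numpy as np

def gaussian_density(x):
    # Standard Gaussian density rho(x) = exp(-x^2/2) / sqrt(2*pi).
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def numerical_score(density, x, h=1e-6):
    # Central-difference approximation of the linear score
    # (log density)'(x) = density'(x) / density(x).
    return (np.log(density(x + h)) - np.log(density(x - h))) / (2 * h)

xs = np.linspace(-3, 3, 7)
scores = numerical_score(gaussian_density, xs)
# For the Gaussian the score is exactly -x, the property the paper
# generalizes to stable and generalized Gaussian laws.
assert np.allclose(scores, -xs, atol=1e-4)
```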

  • Score functions, generalized relative Fisher information and applications
    Ricerche di Matematica, 2016
    Co-Authors: Giuseppe Toscani
    Abstract:

    Generalizations of the linear score function, a well-known concept in theoretical statistics, are introduced. As the Gaussian density and the classical Fisher information are closely related to the linear score, nonlinear (respectively fractional) score functions allow one to identify generalized Gaussian densities (respectively Lévy stable laws) as the unique probability densities for which the score of a random variable X is proportional to \(-X\). In all cases, it is shown that the variance of the score relative to the generalized Gaussian density (respectively the Lévy stable law) provides an upper bound for the \(L^1\)-distance from the generalized Gaussian density (respectively the Lévy stable law). Connections with nonlinear and fractional Fokker–Planck type equations are introduced and discussed.

Jonathan Touboul - One of the best experts on this subject based on the ideXlab platform.

  • Large deviations for randomly connected neural networks: II. State-dependent interactions
    Advances in Applied Probability, 2018
    Co-Authors: Tanguy Cabana, Jonathan Touboul
    Abstract:

    We continue the analysis of large deviations for randomly connected neural networks used as models of the brain. The originality of the model lies in the fact that the directed impact of one particle onto another depends on the state of both particles, and has a random Gaussian amplitude with mean and variance scaling as the inverse of the network size. Similarly to the spatially extended case (see Cabana and Touboul (2018)), we show that under sufficient regularity assumptions, the empirical measure satisfies a large deviations principle with a good rate function achieving its minimum at a unique probability measure, implying, in particular, its convergence in both the averaged and quenched cases, as well as a propagation of chaos property (in the averaged case only). The class of models we consider notably includes a stochastic version of the Kuramoto model with random connections.

  • Large deviations of particle systems in random interaction
    arXiv: Probability, 2016
    Co-Authors: Tanguy Cabana, Jonathan Touboul
    Abstract:

    We investigate the thermodynamic limit of a class of particle systems in random interaction that encompasses coupled oscillator systems and neuronal networks. In these systems, the interactions depend asymmetrically on the states of both particles, and their amplitude is scaled by a Gaussian random coefficient whose variance decays as the inverse of the network size. We show that the empirical measure satisfies a large-deviation principle with a good rate function achieving its minimum at a unique probability measure, implying convergence of the empirical measure and propagation of chaos. The limit is characterized through a complex non-Markovian implicit equation in which the network interaction term is replaced by a Gaussian field depending on the state of the particle.

  • Large deviations for randomly connected neural networks: II. State-dependent interactions
    arXiv: Probability, 2016
    Co-Authors: Tanguy Cabana, Jonathan Touboul
    Abstract:

    This work continues the analysis of large deviations for randomly connected neural network models of the brain. The originality of the model lies in the fact that the directed impact of one particle onto another depends on the state of both particles, and has a random Gaussian amplitude with mean and variance scaling as the inverse of the network size. Similarly to the spatially extended case, we show that under sufficient regularity assumptions, the empirical measure satisfies a large-deviation principle with a good rate function achieving its minimum at a unique probability measure, implying in particular its convergence in both the averaged and quenched cases, as well as a propagation of chaos property (in the averaged case only). The class of models we consider notably includes a stochastic version of the Kuramoto model with random connections.
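
    To make the class of systems concrete, here is a minimal sketch of a stochastic Kuramoto model with Gaussian random couplings whose mean and variance scale as the inverse of the network size, the scaling described in the abstract. All parameter values (N, noise intensity, time step) are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200          # number of oscillators (hypothetical)
T, dt = 5.0, 0.01
sigma = 0.2      # noise intensity (hypothetical)

# Gaussian random couplings: mean 1/N, standard deviation 1/sqrt(N),
# so the variance of each coupling scales as the inverse of N.
J = rng.normal(loc=1.0 / N, scale=1.0 / np.sqrt(N), size=(N, N))

omega = rng.normal(size=N)                 # intrinsic frequencies
theta = rng.uniform(0, 2 * np.pi, size=N)  # initial phases

for _ in range(int(T / dt)):
    # Kuramoto-type interaction: particle i receives J[i, j] * sin(theta_j - theta_i).
    interaction = (J * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    # Euler-Maruyama step with additive Gaussian noise.
    theta += (omega + interaction) * dt + sigma * np.sqrt(dt) * rng.normal(size=N)

# Order parameter r in [0, 1]: synchronization of the empirical measure.
r = np.abs(np.mean(np.exp(1j * theta)))
assert 0.0 <= r <= 1.0
```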

David Schmeidler - One of the best experts on this subject based on the ideXlab platform.

  • On the Uniqueness of Subjective Probabilities
    Economic Theory, 1993
    Co-Authors: Edi Karni, David Schmeidler
    Abstract:

    The purpose of this paper is twofold. First, within the framework of Savage (1954), we suggest axiomatic foundations for the representation of event-dependent preference relations over acts. This representation takes the form of an expectation of event-dependent utility with respect to non-unique subjective probabilities on the set of states. Second, we give an economic-theoretic motivation for selecting a unique probability distribution as an appropriate concept of “subjective probabilities.” However, unlike in Savage's theory, this notion of subjective probabilities does not necessarily represent the decision-maker's belief regarding the likelihood of events.

Bernd-Jürgen Falkowski - One of the best experts on this subject based on the ideXlab platform.

  • Interpreting the Output of Certain Neural Networks as Almost Unique Probability
    International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, 2004
    Co-Authors: Bernd-Jürgen Falkowski
    Abstract:

    In this paper sufficient conditions are derived that ensure that the output of certain neural networks may be interpreted as an almost unique probability distribution, meaning that any two probability distributions arising as outputs must be close in a sense to be defined. Such conditions are rather important in the context of so-called scoring systems arising in a banking environment, if one attempts to compute default probabilities. Preliminary experimental evidence is presented showing that these conditions might well apply in practical situations. It is also noted that these conditions may at times prevent good generalization by the system.
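
    As a loose illustration (not Falkowski's actual conditions), the softmax output of a network is always a probability distribution, and a small perturbation of the logits yields an output distribution that is close in L1 distance, the informal sense of "almost unique" sketched here. All names and numbers are my own assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: the output has non-negative
    # entries summing to 1, i.e. a probability distribution.
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
logits = rng.normal(size=5)          # hypothetical network outputs

p = softmax(logits)
q = softmax(logits + 1e-3 * rng.normal(size=5))  # slightly perturbed outputs

assert np.isclose(p.sum(), 1.0)
# Nearby outputs are close in L1 (total-variation) distance.
assert np.abs(p - q).sum() < 1e-2
```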

  • KES - Interpreting the Output of Certain Neural Networks as Almost Unique Probability
    Lecture Notes in Computer Science, 2004
    Co-Authors: Bernd-Jürgen Falkowski
    Abstract:

    In this paper sufficient conditions are derived that ensure that the output of certain neural networks may be interpreted as an almost unique probability distribution, meaning that any two probability distributions arising as outputs must be close in a sense to be defined. Such conditions are rather important in the context of so-called scoring systems arising in a banking environment, if one attempts to compute default probabilities. Preliminary experimental evidence is presented showing that these conditions might well apply in practical situations. It is also noted that these conditions may at times prevent good generalization by the system.

Norio Takeoka - One of the best experts on this subject based on the ideXlab platform.

  • A theory of subjective learning
    Journal of Economic Theory, 2014
    Co-Authors: David Dillenberger, Juan Sebastian Lleras, Philipp Sadowski, Norio Takeoka
    Abstract:

    We study an individual who faces a dynamic decision problem in which the process of information arrival is unobserved by the analyst. We derive two utility representations of preferences over menus of acts that capture the individual's uncertainty about his future beliefs. The most general representation identifies a unique probability distribution over the set of posteriors that the decision maker might face at the time of choosing from the menu. We use this representation to characterize a notion of “more preference for flexibility” via a subjective analogue of Blackwell's (1951, 1953) comparisons of experiments. A more specialized representation uniquely identifies information as a partition of the state space. This result allows us to compare individuals who expect to learn differently, even if they do not agree on their prior beliefs. We conclude by extending the basic model to accommodate an individual who expects to learn gradually over time by means of a subjective filtration.

  • A theory of subjective learning (second version)
    2013
    Co-Authors: David Dillenberger, Juan Sebastian Lleras, Philipp Sadowski, Norio Takeoka
    Abstract:

    We study an individual who faces a dynamic decision problem in which the process of information arrival is unobserved by the analyst. We elicit subjective information directly from choice behavior by deriving two utility representations of preferences over menus of acts. The most general representation identifies a unique probability distribution over the set of posteriors that the decision maker might face at the time of choosing from the menu. We use this representation to characterize a notion of “more preference for flexibility” via a subjective analogue of Blackwell's (1951, 1953) comparisons of experiments. A more specialized representation uniquely identifies information as a partition of the state space. This result allows us to compare individuals who expect to learn differently, even if they do not agree on their prior beliefs. On the extended domain of dated menus, we show how to accommodate an individual who expects to learn gradually over time by means of a subjective filtration.