Marginal Probability

The Experts below are selected from a list of 321 Experts worldwide ranked by ideXlab platform

Wei Wang - One of the best experts on this subject based on the ideXlab platform.

  • Statistics of spatial derivatives of Stokes parameters for isotropic random polarization field
    Journal of the Optical Society of America A, 2010
    Co-Authors: Shun Zhang, Mitsuo Takeda, Wei Wang
    Abstract:

    The statistical properties of the spatial derivatives of the Stokes parameters for a random polarization field are studied. Based on the Gaussian assumption for the electric fields, the six-dimensional joint probability density function for the derivatives of the Stokes parameters is obtained from the statistics of the derivatives of the random polarization field. Subsequently, three two-dimensional probability density functions of the derivatives of each Stokes parameter and the corresponding six marginal probability density functions are given. Finally, the joint and marginal density functions of the magnitude of the gradient of the Stokes parameters are also derived, for the first time to our knowledge.
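
    For reference, the Stokes parameters in question are built from the transverse field components $E_x$ and $E_y$ (up to sign and normalization conventions, which vary between texts):

        S_0 = \langle |E_x|^2 \rangle + \langle |E_y|^2 \rangle
        S_1 = \langle |E_x|^2 \rangle - \langle |E_y|^2 \rangle
        S_2 = 2\,\mathrm{Re}\,\langle E_x E_y^* \rangle
        S_3 = 2\,\mathrm{Im}\,\langle E_x^* E_y \rangle

    and the gradient magnitude whose density the paper derives is $|\nabla S_i| = \sqrt{(\partial S_i/\partial x)^2 + (\partial S_i/\partial y)^2}$ for each $i$.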

Ricky T. Q. Chen - One of the best experts on this subject based on the ideXlab platform.

  • SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models
    International Conference on Learning Representations (ICLR), 2020
    Co-Authors: Yucen Luo, Ryan P Adams, Alex Beatson, Mohammad Norouzi, Jun Zhu, David Duvenaud, Ricky T. Q. Chen
    Abstract:

    The standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest. We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models, based on randomized truncation of infinite series. If parameterized by an encoder-decoder architecture, the parameters of the encoder can be optimized to minimize the variance of this estimator. We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling-based approach at the same average computational cost. This estimator also allows the use of latent variable models for tasks where unbiased estimators, rather than marginal likelihood lower bounds, are preferred, such as minimizing reverse KL divergences and estimating score functions.
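
    The core idea, randomized truncation of an infinite series, can be illustrated independently of the authors' SUMO code. The sketch below (a toy Python example, not the paper's estimator) applies Russian-roulette reweighting to a geometric series: truncate at a random level K and divide each kept term by its survival probability P(K >= k), which makes the estimate unbiased for the infinite sum.

        import numpy as np

        rng = np.random.default_rng(0)

        def term(k):
            # Toy series with known sum: sum_{k>=0} 0.5^k = 2.
            return 0.5 ** k

        def randomized_truncation_estimate(p_stop=0.25):
            # Unbiased estimate of the infinite sum: draw a random truncation
            # level K and reweight term k by 1 / P(K >= k) = (1 - p_stop)^{-k}.
            k, estimate, survival = 0, 0.0, 1.0
            while True:
                estimate += term(k) / survival
                if rng.random() < p_stop:      # stop after including term k
                    return estimate
                survival *= 1.0 - p_stop       # P(K >= k + 1)
                k += 1

        samples = [randomized_truncation_estimate() for _ in range(100_000)]
        print(np.mean(samples))                # close to 2.0 in expectation

    The same reweighting makes the expectation exact regardless of where the series is cut, at the price of extra variance, which is what motivates optimizing the encoder to reduce it.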

Zengqi Sun - One of the best experts on this subject based on the ideXlab platform.

  • Marginal Probability distribution estimation in characteristic space of covariance matrix
    2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), 2008
    Co-Authors: Nan Ding, Shude Zhou, Hao Zhang, Zengqi Sun
    Abstract:

    Marginal probability distributions have been widely used as the probabilistic model in EDAs because of their simplicity and efficiency. However, the obvious shortcoming of this kind of EDA is its inability to take correlations between variables into account. This paper addresses the problem from the point of view of space transformation. By default, the probabilistic model is usually constructed directly from the selected samples in the space defined by the problem. In the algorithm CM-MEDA, instead, we first transform the sampled data from the initial coordinate space into the characteristic space of the covariance matrix, and the marginal probabilistic model is then constructed in the new space. We find that the marginal probabilistic model in the new space captures the variable linkages in the initial space quite well. The relationship of CM-MEDA to covariance-matrix estimation and principal component analysis is also analyzed. We implement CM-MEDA in the continuous domain based on both Gaussian and histogram models, and the experimental results verify the effectiveness of our idea.
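
    A minimal sketch of the space-transformation idea (not the authors' CM-MEDA implementation; all names below are illustrative): decorrelate the selected samples with the eigenvectors of their covariance matrix, fit independent one-dimensional Gaussian marginals in that characteristic space, sample, and map back.

        import numpy as np

        rng = np.random.default_rng(1)

        def sample_via_eigenspace_marginals(selected, n_samples):
            # The eigendecomposition of the sample covariance matrix gives the
            # "characteristic space" in which the coordinates are uncorrelated.
            mean = selected.mean(axis=0)
            cov = np.cov(selected, rowvar=False)
            _, eigvecs = np.linalg.eigh(cov)
            z = (selected - mean) @ eigvecs          # rotate into eigenspace
            mu, sigma = z.mean(axis=0), z.std(axis=0)
            # Independent (marginal) Gaussians suffice here: the linear
            # correlations now live in the rotation, not in the marginals.
            z_new = rng.normal(mu, sigma, size=(n_samples, selected.shape[1]))
            return z_new @ eigvecs.T + mean          # rotate back

        # Correlated toy data: new samples reproduce the covariance structure.
        data = rng.multivariate_normal([0, 0], [[2.0, 1.5], [1.5, 2.0]], size=500)
        print(np.cov(sample_via_eigenspace_marginals(data, 5000), rowvar=False))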

Demetris Koutsoyiannis - One of the best experts on this subject based on the ideXlab platform.

  • Global Investigation of Double Periodicity of Hourly Wind Speed for Stochastic Simulation; Application in Greece
    Energy Procedia, 2016
    Co-Authors: Ilias Deligiannis, Panayiotis Dimitriadis, Olympia Daskalou, Yiannis Dimakos, Demetris Koutsoyiannis
    Abstract:

    The wind process is an important hydrometeorological process and one of the basic resources of renewable energy. In this paper, we analyze the double periodicity of wind, i.e., daily and annual, for numerous wind stations with hourly data around the globe, and we develop a four-parameter model. Additionally, we apply this model to several stations in Greece and estimate their marginal characteristics and stochastic structure, best described by an extended-Pareto marginal probability function and a Hurst-Kolmogorov process, respectively.
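
    The abstract does not spell out the model's form; purely as an illustration of what a doubly periodic (daily and annual) deterministic component of hourly wind speed can look like (an assumption for exposition, not the authors' four-parameter model), one might write

        \mu(t) = \bar{\mu}\left[ 1 + b_{\mathrm{d}}\cos\!\left(\frac{2\pi t}{24} - \phi_{\mathrm{d}}\right) + b_{\mathrm{a}}\cos\!\left(\frac{2\pi t}{8766} - \phi_{\mathrm{a}}\right) \right],

    with $t$ in hours, one harmonic amplitude-phase pair $(b_{\mathrm{d}}, \phi_{\mathrm{d}})$ for the daily cycle and one $(b_{\mathrm{a}}, \phi_{\mathrm{a}})$ for the annual cycle.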

Joris M Mooij - One of the best experts on this subject based on the ideXlab platform.

  • libDAI: A Free and Open Source C++ Library for Discrete Approximate Inference in Graphical Models
    Journal of Machine Learning Research, 2010
    Co-Authors: Joris M Mooij
    Abstract:

    This paper describes the software package libDAI, a free and open source C++ library that provides implementations of various exact and approximate inference methods for graphical models with discrete-valued variables. libDAI supports directed graphical models (Bayesian networks) as well as undirected ones (Markov random fields and factor graphs). It offers various approximations of the partition sum, marginal probability distributions, and maximum-probability states. Parameter learning is also supported. A feature comparison with other open source software packages for approximate inference is given. libDAI is licensed under the GPL v2+ license and is available at http://www.libdai.org.
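
    The quantities the abstract lists are easy to pin down on a toy model. The sketch below computes them by brute force for a three-variable binary chain; it is a generic Python illustration of what such a library returns, not libDAI's C++ API.

        import itertools
        import numpy as np

        # Pairwise factor favouring agreement between neighbouring variables.
        def psi(a, b):
            return np.exp(0.8 if a == b else -0.8)

        # Chain factor graph x0 - x1 - x2: enumerate all joint states.
        states = list(itertools.product([0, 1], repeat=3))
        weight = {s: psi(s[0], s[1]) * psi(s[1], s[2]) for s in states}

        Z = sum(weight.values())                         # partition sum
        marginal_x0 = [sum(w for s, w in weight.items() if s[0] == v) / Z
                       for v in (0, 1)]                  # marginal of x0
        map_state = max(weight, key=weight.get)          # maximum-probability state
        print(Z, marginal_x0, map_state)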

  • Bounds on Marginal Probability Distributions
    Neural Information Processing Systems, 2008
    Co-Authors: Joris M Mooij, H J Kappen
    Abstract:

    We propose a novel bound on single-variable marginal probability distributions in factor graphs with discrete variables. The bound is obtained by propagating local bounds (convex sets of probability distributions) over a subtree of the factor graph, rooted in the variable of interest. By construction, the method not only bounds the exact marginal probability distribution of a variable, but also its approximate Belief Propagation marginal ("belief"). Thus, apart from providing a practical means to calculate bounds on marginals, our contribution also lies in providing a better understanding of the error made by Belief Propagation. We show that our bound outperforms the state of the art on some inference problems arising in medical diagnosis.
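
    For context, the Belief Propagation "belief" that the bound contains can be computed in a few lines on a tree, where BP is exact. A generic Python sketch (not the authors' bound-propagation code):

        import numpy as np

        # Sum-product messages on the chain x0 - x1 - x2 with binary states.
        phi = np.exp(0.8 * np.array([[1.0, -1.0], [-1.0, 1.0]]))  # pairwise factor
        unary0 = np.array([2.0, 1.0])        # evidence on x0

        m_0_to_1 = phi.T @ unary0            # message from x0 into x1
        m_2_to_1 = phi.T @ np.ones(2)        # message from x2 into x1
        belief1 = m_0_to_1 * m_2_to_1
        belief1 /= belief1.sum()             # exact marginal of x1 on a tree
        print(belief1)

    On graphs with loops this belief is only approximate, which is where bounding the error becomes useful.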

  • Novel Bounds on Marginal Probabilities
    arXiv: Probability, 2008
    Co-Authors: Joris M Mooij, H J Kappen
    Abstract:

    We derive two related novel bounds on single-variable marginal probability distributions in factor graphs with discrete variables. The first method propagates bounds over a subtree of the factor graph, rooted in the variable; the second propagates bounds over the self-avoiding-walk tree starting at the variable. By construction, both methods not only bound the exact marginal probability distribution of a variable, but also its approximate Belief Propagation marginal ("belief"). Thus, apart from providing a practical means to calculate bounds on marginals, our contribution also lies in an increased understanding of the error made by Belief Propagation. Empirically, we show that our bounds often outperform existing bounds in terms of accuracy and/or computation time. We also show that our bounds can yield nontrivial results for medical diagnosis inference problems.