Reparameterization

The Experts below are selected from a list of 6,708 Experts worldwide, ranked by the ideXlab platform.

Alan S. Willsky - One of the best experts on this subject based on the ideXlab platform.

  • Tree-based Reparameterization framework for analysis of sum-product and related algorithms
    IEEE Transactions on Information Theory, 2003
    Co-Authors: Martin J. Wainwright, Tommi S. Jaakkola, Alan S. Willsky
    Abstract:

    We present a tree-based Reparameterization (TRP) framework that provides a new conceptual view of a large class of algorithms for computing approximate marginals in graphs with cycles. This class includes the belief propagation (BP) or sum-product algorithm as well as variations and extensions of BP. Algorithms in this class can be formulated as a sequence of Reparameterization updates, each of which entails refactorizing a portion of the distribution corresponding to an acyclic subgraph (i.e., a tree, or more generally, a hypertree). The ultimate goal is to obtain an alternative but equivalent factorization using functions that represent (exact or approximate) marginal distributions on cliques of the graph. Our framework highlights an important property of the sum-product algorithm and the larger class of Reparameterization algorithms: the original distribution on the graph with cycles is not changed. The perspective of tree-based updates gives rise to a simple and intuitive characterization of the fixed points in terms of tree consistency. We develop interpretations of these results in terms of information geometry. The invariance of the distribution, in conjunction with the fixed-point characterization, enables us to derive an exact expression for the difference between the true marginals on an arbitrary graph with cycles, and the approximations provided by belief propagation. More broadly, our analysis applies to any algorithm that minimizes the Bethe free energy. We also develop bounds on the approximation error, which illuminate the conditions that govern their accuracy. Finally, we show how the Reparameterization perspective extends naturally to generalizations of BP (e.g., Kikuchi (1951) approximations and variants) via the notion of hypertree Reparameterization.
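
A concrete way to see the invariance property the abstract emphasizes: on a tree, sum-product marginals are exact, and the joint refactorizes exactly as a product of node marginals and pairwise couplings T_st/(T_s T_t), leaving the distribution unchanged. Below is a minimal numpy sketch of that refactorization on a 3-node chain; it is an illustration under our own naming, not the authors' code.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Random positive potentials on the 3-node chain 1 - 2 - 3 (binary variables).
psi = [rng.uniform(0.5, 2.0, size=2) for _ in range(3)]   # node potentials
psi12 = rng.uniform(0.5, 2.0, size=(2, 2))                # edge (1,2) potential
psi23 = rng.uniform(0.5, 2.0, size=(2, 2))                # edge (2,3) potential

# Exact joint distribution by enumeration.
p = np.zeros((2, 2, 2))
for x1, x2, x3 in product(range(2), repeat=3):
    p[x1, x2, x3] = (psi[0][x1] * psi[1][x2] * psi[2][x3]
                     * psi12[x1, x2] * psi23[x2, x3])
p /= p.sum()

# Exact marginals -- what sum-product computes on a tree.
T1, T2, T3 = p.sum((1, 2)), p.sum((0, 2)), p.sum((0, 1))
T12, T23 = p.sum(2), p.sum(0)

# Reparameterized factorization: node marginals times edge couplings.
q = np.zeros_like(p)
for x1, x2, x3 in product(range(2), repeat=3):
    q[x1, x2, x3] = (T1[x1] * T2[x2] * T3[x3]
                     * (T12[x1, x2] / (T1[x1] * T2[x2]))
                     * (T23[x2, x3] / (T2[x2] * T3[x3])))

print("max |p - q| =", np.abs(p - q).max())   # ~1e-16: the distribution is unchanged
```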

  • Tree-based Reparameterization for approximate inference on loopy graphs
    Neural Information Processing Systems (NIPS), 2001
    Co-Authors: Martin J. Wainwright, Tommi S. Jaakkola, Alan S. Willsky
    Abstract:

    We develop a tree-based Reparameterization framework that provides a new conceptual view of a large class of iterative algorithms for computing approximate marginals in graphs with cycles. It includes belief propagation (BP), which can be reformulated as a very local form of Reparameterization. More generally, we consider algorithms that perform exact computations over spanning trees of the full graph. On the practical side, we find that such tree Reparameterization (TRP) algorithms have convergence properties superior to BP. The Reparameterization perspective also provides a number of theoretical insights into approximate inference, including a new characterization of fixed points and an invariance intrinsic to TRP/BP. These two properties enable us to analyze and bound the error between the TRP/BP approximations and the actual marginals. While our results arise naturally from the TRP perspective, most of them apply in an algorithm-independent manner to any local minimum of the Bethe free energy. Our results also have natural extensions to more structured approximations [e.g., 1, 2].

Martin J. Wainwright - One of the best experts on this subject based on the ideXlab platform.

  • Tree-based Reparameterization framework for analysis of sum-product and related algorithms
    IEEE Transactions on Information Theory, 2003
    Co-Authors: Martin J. Wainwright, Tommi S. Jaakkola, Alan S. Willsky
    Abstract:

    We present a tree-based Reparameterization (TRP) framework that provides a new conceptual view of a large class of algorithms for computing approximate marginals in graphs with cycles. This class includes the belief propagation (BP) or sum-product algorithm as well as variations and extensions of BP. Algorithms in this class can be formulated as a sequence of Reparameterization updates, each of which entails refactorizing a portion of the distribution corresponding to an acyclic subgraph (i.e., a tree, or more generally, a hypertree). The ultimate goal is to obtain an alternative but equivalent factorization using functions that represent (exact or approximate) marginal distributions on cliques of the graph. Our framework highlights an important property of the sum-product algorithm and the larger class of Reparameterization algorithms: the original distribution on the graph with cycles is not changed. The perspective of tree-based updates gives rise to a simple and intuitive characterization of the fixed points in terms of tree consistency. We develop interpretations of these results in terms of information geometry. The invariance of the distribution, in conjunction with the fixed-point characterization, enables us to derive an exact expression for the difference between the true marginals on an arbitrary graph with cycles, and the approximations provided by belief propagation. More broadly, our analysis applies to any algorithm that minimizes the Bethe free energy. We also develop bounds on the approximation error, which illuminate the conditions that govern their accuracy. Finally, we show how the Reparameterization perspective extends naturally to generalizations of BP (e.g., Kikuchi (1951) approximations and variants) via the notion of hypertree Reparameterization.

  • Tree-based Reparameterization for approximate inference on loopy graphs
    Neural Information Processing Systems (NIPS), 2001
    Co-Authors: Martin J. Wainwright, Tommi S. Jaakkola, Alan S. Willsky
    Abstract:

    We develop a tree-based Reparameterization framework that provides a new conceptual view of a large class of iterative algorithms for computing approximate marginals in graphs with cycles. It includes belief propagation (BP), which can be reformulated as a very local form of Reparameterization. More generally, we consider algorithms that perform exact computations over spanning trees of the full graph. On the practical side, we find that such tree Reparameterization (TRP) algorithms have convergence properties superior to BP. The Reparameterization perspective also provides a number of theoretical insights into approximate inference, including a new characterization of fixed points and an invariance intrinsic to TRP/BP. These two properties enable us to analyze and bound the error between the TRP/BP approximations and the actual marginals. While our results arise naturally from the TRP perspective, most of them apply in an algorithm-independent manner to any local minimum of the Bethe free energy. Our results also have natural extensions to more structured approximations [e.g., 1, 2].

David M. Blei - One of the best experts on this subject based on the ideXlab platform.

  • Reparameterization Gradients through Acceptance-Rejection Sampling Algorithms
    International Conference on Artificial Intelligence and Statistics (AISTATS), 2017
    Co-Authors: Christian A. Naesseth, Francisco J. R. Ruiz, Scott W. Linderman, David M. Blei
    Abstract:

    Variational inference using the Reparameterization trick has enabled large-scale approximate Bayesian inference in complex probabilistic models, leveraging stochastic optimization to sidestep intractable expectations. The Reparameterization trick is applicable when we can simulate a random variable by applying a differentiable deterministic function on an auxiliary random variable whose distribution is fixed. For many distributions of interest (such as the gamma or Dirichlet), simulation of random variables relies on acceptance-rejection sampling. The discontinuity introduced by the accept-reject step means that standard Reparameterization tricks are not applicable. We propose a new method that lets us leverage Reparameterization gradients even when variables are outputs of an acceptance-rejection sampling algorithm. Our approach enables Reparameterization on a larger class of variational distributions. In several studies of real and synthetic data, we show that the variance of the estimator of the gradient is significantly lower than other state-of-the-art methods. This leads to faster convergence of stochastic gradient variational inference.
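
For context, the "Reparameterization trick" the abstract builds on: if z = g(θ, ε) is a differentiable deterministic transform of fixed-distribution noise ε, gradients of E[f(z)] pass through the sampler. A minimal numpy sketch for the Gaussian case follows (this is the standard trick, not the paper's accept-reject extension, which handles samplers where such a closed-form transform is unavailable; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: gradient of E_{z ~ N(mu, sigma^2)}[f(z)] w.r.t. (mu, sigma),
# with f(z) = z**2, so the exact answer is (2*mu, 2*sigma).
mu, sigma = 1.5, 0.8
f_grad = lambda z: 2.0 * z            # f'(z)

eps = rng.standard_normal(100_000)    # auxiliary noise with a fixed distribution
z = mu + sigma * eps                  # differentiable deterministic transform

# Chain rule through the transform: dz/dmu = 1, dz/dsigma = eps.
grad_mu = np.mean(f_grad(z) * 1.0)
grad_sigma = np.mean(f_grad(z) * eps)
print(grad_mu, 2 * mu)        # ~3.0
print(grad_sigma, 2 * sigma)  # ~1.6
```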

  • NIPS - The generalized Reparameterization gradient
    2016
    Co-Authors: Francisco J. R. Ruiz, Michalis K. Titsias, David M. Blei
    Abstract:

    The Reparameterization gradient has become a widely used method to obtain Monte Carlo gradients to optimize the variational objective. However, this technique does not easily apply to commonly used distributions such as beta or gamma without further approximations, and most practical applications of the Reparameterization gradient fit Gaussian distributions. In this paper, we introduce the generalized Reparameterization gradient, a method that extends the Reparameterization gradient to a wider class of variational distributions. Generalized Reparameterizations use invertible transformations of the latent variables which lead to transformed distributions that weakly depend on the variational parameters. This results in new Monte Carlo gradients that combine Reparameterization gradients and score function gradients. We demonstrate our approach on variational inference for two complex probabilistic models. The generalized Reparameterization is effective: even a single sample from the variational distribution is enough to obtain a low-variance gradient.
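
The "combination of Reparameterization gradients and score function gradients" the abstract mentions is easiest to see in a setting where both estimators apply. Below is a toy numpy comparison on a Gaussian, illustrating the two gradient types being combined; it is background only, not the paper's invertible transformations for gamma or beta distributions:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 0.5, 1.0, 50_000
f = lambda z: z**2                      # E[f] = mu^2 + sigma^2, so d/dmu = 2*mu

eps = rng.standard_normal(n)
z = mu + sigma * eps

# Score-function (REINFORCE) estimator: f(z) * d/dmu log N(z; mu, sigma^2).
score = f(z) * (z - mu) / sigma**2

# Reparameterization estimator: f'(mu + sigma*eps) * dz/dmu.
reparam = 2.0 * z

print("true grad:", 2 * mu)
print("score-function: mean %.3f  var %.2f" % (score.mean(), score.var()))
print("reparam:        mean %.3f  var %.2f" % (reparam.mean(), reparam.var()))
```

Both estimators are unbiased, but the score-function estimator's variance is typically far larger, which is why extending Reparameterization to more distributions is worthwhile.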

Ryan P. Adams - One of the best experts on this subject based on the ideXlab platform.

  • Reducing Reparameterization Gradient Variance
    arXiv: Machine Learning, 2017
    Co-Authors: Andrew Miller, Nicholas J. Foti, Alexander D'amour, Ryan P. Adams
    Abstract:

    Optimization with noisy gradients has become ubiquitous in statistics and machine learning. Reparameterization gradients, or gradient estimates computed via the "Reparameterization trick," represent a class of noisy gradients often used in Monte Carlo variational inference (MCVI). However, when these gradient estimators are too noisy, the optimization procedure can be slow or fail to converge. One way to reduce noise is to use more samples for the gradient estimate, but this can be computationally expensive. Instead, we view the noisy gradient as a random variable, and form an inexpensive approximation of the generating procedure for the gradient sample. This approximation has high correlation with the noisy gradient by construction, making it a useful control variate for variance reduction. We demonstrate our approach on non-conjugate multi-level hierarchical models and a Bayesian neural net where we observed gradient variance reductions of multiple orders of magnitude (20-2,000x).
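
A toy numpy sketch of the control-variate recipe the abstract describes: form a cheap approximation of the gradient-sample generating procedure with a known mean, and subtract its centered value. Here the approximation is a linearization of f' around the mean; this simplification (and the omission of an optimal scaling coefficient) is our own, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n = 2.0, 0.5, 100_000
eps = rng.standard_normal(n)
z = mu + sigma * eps

# Noisy Reparameterization gradient of E[f(z)] w.r.t. mu, with f(z) = z**3.
g = 3.0 * z**2                                   # f'(z) * dz/dmu

# Cheap approximation of the same generating procedure: linearize f' at mu.
# h has a known mean (3*mu**2), so h - E[h] is a valid control variate.
h = 3.0 * mu**2 + 6.0 * mu * sigma * eps
g_cv = g - (h - 3.0 * mu**2)

true = 3.0 * (mu**2 + sigma**2)                  # d/dmu E[z**3] = 3(mu^2 + sigma^2)
print("true grad:", true)
print("plain:   mean %.3f  var %.3f" % (g.mean(), g.var()))
print("with CV: mean %.3f  var %.3f" % (g_cv.mean(), g_cv.var()))
```

Because the approximation is highly correlated with the gradient sample by construction, the corrected estimator stays unbiased while its variance drops sharply.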

  • Reducing Reparameterization Gradient Variance
    Neural Information Processing Systems (NIPS), 2017
    Co-Authors: Andrew Miller, Nicholas J. Foti, Alexander D'amour, Ryan P. Adams
    Abstract:

    Optimization with noisy gradients has become ubiquitous in statistics and machine learning. Reparameterization gradients, or gradient estimates computed via the "Reparameterization trick," represent a class of noisy gradients often used in Monte Carlo variational inference (MCVI). However, when these gradient estimators are too noisy, the optimization procedure can be slow or fail to converge. One way to reduce noise is to generate more samples for the gradient estimate, but this can be computationally expensive. Instead, we view the noisy gradient as a random variable, and form an inexpensive approximation of the generating procedure for the gradient sample. This approximation has high correlation with the noisy gradient by construction, making it a useful control variate for variance reduction. We demonstrate our approach on a non-conjugate hierarchical model and a Bayesian neural net where our method attained orders of magnitude (20-2,000×) reduction in gradient variance resulting in faster and more stable optimization.

Hoon Hong - One of the best experts on this subject based on the ideXlab platform.

  • ImUp: A Maple Package for Uniformity-Improved Reparameterization of Plane Curves
    Computer Mathematics (ASCM), 2014
    Co-Authors: Jing Yang, Dongming Wang, Hoon Hong
    Abstract:

    We present a software package for computing piecewise rational Reparameterizations of parametric plane curves, which have improved uniformities of angular speed. The package ImUp is implemented in Maple on the basis of some recently developed algorithms of Reparameterization using piecewise Möbius transformations. We discuss some implementation issues and illustrate the capability and performance of the public functions of ImUp with examples and experiments. It is shown that the quality of plots of plane curves may be effectively improved by means of Reparameterization using ImUp.
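
A rough Python sketch of the idea behind such Reparameterization (not the ImUp package itself, which is implemented in Maple): measure how non-uniform the angular speed of a parameterization is, then search a one-parameter family of Möbius transformations of [0, 1] for the most uniform Reparameterization. The Möbius family, the quarter-ellipse example, and the coefficient-of-variation uniformity measure are all our own illustrative choices:

```python
import numpy as np

def angular_speed(x, y, t):
    # omega = (x' y'' - y' x'') / (x'^2 + y'^2): rotation rate of the tangent.
    xp, yp = np.gradient(x, t), np.gradient(y, t)
    xpp, ypp = np.gradient(xp, t), np.gradient(yp, t)
    return (xp * ypp - yp * xpp) / (xp**2 + yp**2)

def uniformity(omega):
    # Illustrative measure in (0, 1]: 1 means perfectly uniform angular speed.
    return 1.0 / (1.0 + np.std(omega) / abs(np.mean(omega)))

def mobius(t, w):
    # One-parameter Moebius transformation of [0, 1] onto itself (fixes 0 and 1).
    return t / (t + w * (1.0 - t))

# Quarter ellipse: angular speed is far from uniform under the standard parameterization.
ellipse = lambda s: (3.0 * np.cos(0.5 * np.pi * s), np.sin(0.5 * np.pi * s))
t = np.linspace(0.0, 1.0, 2001)

x, y = ellipse(t)
print("original uniformity:", uniformity(angular_speed(x, y, t)))

# Grid-search the Moebius parameter for the most uniform Reparameterization.
best = max(np.linspace(0.2, 5.0, 200),
           key=lambda w: uniformity(angular_speed(*ellipse(mobius(t, w)), t)))
x2, y2 = ellipse(mobius(t, best))
print("w =", best, "->", uniformity(angular_speed(x2, y2, t)))
```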

  • Improving Angular Speed Uniformity by C¹ Piecewise Reparameterization
    Automated Deduction in Geometry, 2013
    Co-Authors: Jing Yang, Dongming Wang, Hoon Hong
    Abstract:

    We show how to compute a C¹ piecewise-rational Reparameterization that closely approximates the arc-angle parameterization of any plane curve, using C¹ piecewise Möbius transformations. By making use of the information provided by the first derivative of the angular speed function, the unit interval is partitioned such that the obtained Reparameterization has high uniformity and continuous angular speed. An iterative process refines the interval partition. Experimental results are presented to show the performance of the proposed method and the geometric behavior of the computed Reparameterizations.

  • A Framework for Improving Uniformity of Parameterizations of Curves
    Science China Information Sciences, 2013
    Co-Authors: Hoon Hong, Dongming Wang, Jing Yang
    Abstract:

    We define quasi-speed as a generalization of linear speed and angular speed for parameterizations of curves and use the uniformity of quasi-speed to measure the quality of the parameterizations. With this conceptual setting, a general framework is developed for studying uniformity behaviors under Reparameterization via proper parameter transformation, and for computing Reparameterizations with improved uniformity of quasi-speed by means of optimal single-piece, C⁰ piecewise, and C¹ piecewise Möbius transformations. Algorithms are described for uniformity-improved Reparameterization using different Möbius transformations with different optimization techniques. Examples are presented to illustrate the concepts, the framework, and the algorithms. Experimental results are provided to validate the framework and to show the efficiency of the algorithms.

  • Improving angular speed uniformity by Reparameterization
    Computer Aided Geometric Design, 2013
    Co-Authors: Jing Yang, Dongming Wang, Hoon Hong
    Abstract:

    We introduce the notion of angular speed uniformity as a quality measure for parameterizations of plane curves and propose an algorithm to compute uniform Reparameterizations for quadratic and cubic curves. We prove that only straight lines have uniform rational parameterizations. For any plane curve other than lines, we show how to find a rational Reparameterization that has the maximum uniformity among all the rational parameterizations of the same degree. We also establish specific results for quadratic and certain cubic Bézier curves.
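
For reference, the angular speed underlying these uniformity measures has a simple closed form. One natural normalization (the paper's exact measure may differ) is bounded by 1 via Cauchy-Schwarz, with equality exactly at uniform angular speed:

```latex
% Angular speed of a plane curve c(t) = (x(t), y(t)): the rotation rate of the
% tangent direction \theta(t) = \arctan\!\bigl(y'(t)/x'(t)\bigr).
\[
  \omega(t) \;=\; \theta'(t)
  \;=\; \frac{x'(t)\,y''(t) - y'(t)\,x''(t)}{x'(t)^2 + y'(t)^2}.
\]
% One natural uniformity measure on [0,1] (an assumed normalization, for illustration):
\[
  u \;=\; \frac{\bigl(\int_0^1 \omega(t)\,dt\bigr)^2}{\int_0^1 \omega(t)^2\,dt}
  \;\in\; (0,\,1],
\]
% with u = 1 exactly when \omega is constant, by the Cauchy--Schwarz inequality.
```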