Bayesian Learning

The Experts below are selected from a list of 11220 Experts worldwide ranked by ideXlab platform

Vivekananda Roy - One of the best experts on this subject based on the ideXlab platform.

  • Posterior Impropriety of some Sparse Bayesian Learning Models
    arXiv: Statistics Theory, 2020
    Co-Authors: Anand Dixit, Vivekananda Roy
    Abstract:

    Sparse Bayesian Learning models are typically used for prediction in datasets with a significantly greater number of covariates than observations. Among the class of sparse Bayesian Learning models, the relevance vector machine (RVM) is particularly popular, as evidenced by the large number of citations of the original RVM paper of Tipping (2001) [JMLR, 1, 211-244]. In this article we show that RVM and some other sparse Bayesian Learning models, with hyperparameter values currently used in the literature, are based on improper posteriors. Further, we provide necessary and sufficient conditions for posterior propriety of RVM.
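
As a point of reference for the hierarchy discussed above, here is a minimal sketch of the RVM prior structure (illustrative Python, not from the paper; the function name and the Gamma hyperparameter values a, b, c, d are hypothetical stand-ins). Tipping's original choice a = b = c = d = 0 corresponds to flat improper hyperpriors, which is the setting whose posterior propriety the article examines; the paper's exact propriety conditions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rvm_prior(Phi, a=1e-2, b=1e-4, c=1e-2, d=1e-4):
    """Draw (w, beta, y) from a *proper* version of the RVM hierarchy.

    y | w, beta   ~ N(Phi w, beta^{-1} I)
    w_i | alpha_i ~ N(0, alpha_i^{-1})
    alpha_i ~ Gamma(a, rate=b),  beta ~ Gamma(c, rate=d)
    (a = b = c = d = 0 is the improper limit used in the original RVM.)
    """
    n, p = Phi.shape
    alpha = rng.gamma(a, 1.0 / b, size=p)       # per-weight precisions
    beta = rng.gamma(c, 1.0 / d)                # noise precision
    w = rng.normal(0.0, 1.0 / np.sqrt(alpha))   # weights given their precisions
    y = Phi @ w + rng.normal(0.0, 1.0 / np.sqrt(beta), size=n)
    return w, beta, y

# Typical sparse-regression shape: far more covariates than observations.
Phi = rng.normal(size=(20, 100))
w, beta, y = sample_rvm_prior(Phi)
```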

  • Posterior impropriety of some sparse Bayesian Learning models
    Statistics & Probability Letters
    Co-Authors: Anand Dixit, Vivekananda Roy
    Abstract:

    Sparse Bayesian Learning models are typically used for prediction in datasets with a significantly greater number of covariates than observations. Such models often take a reproducing kernel Hilbert space (RKHS) approach to prediction and can be implemented using either proper or improper priors. In this article we show that a few sparse Bayesian Learning models in the literature, when implemented using improper priors, lead to improper posteriors.
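
To make the RKHS-style setup concrete, the sketch below builds a kernel design matrix in the usual way (the Gaussian/RBF kernel, the function name, and the length scale are assumptions, not taken from the paper): each column evaluates a kernel centred on one training input, and the sparse prior is then meant to prune most of those columns.

```python
import numpy as np

def rbf_design_matrix(X, centers, length_scale=1.0):
    # Phi[i, j] = k(X[i], centers[j]) for a Gaussian (RBF) kernel
    sq_dists = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * sq_dists / length_scale ** 2)

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 2))     # 30 training inputs in R^2
Phi = rbf_design_matrix(X, X)    # 30 x 30 kernel design matrix (one column per training point)
```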

Jun Fang - One of the best experts on this subject based on the ideXlab platform.

  • Computationally efficient sparse Bayesian Learning via generalized approximate message passing
    2016 IEEE International Conference on Ubiquitous Wireless Broadband (ICUWB), 2016
    Co-Authors: Xianbing Zou, Jun Fang
    Abstract:

    The sparse Bayesian Learning algorithm (also referred to as Bayesian compressed sensing) is a popular approach for sparse signal recovery and has demonstrated superior performance in several experiments. Nevertheless, its computational complexity grows rapidly with the dimension of the signal, which hinders its application to many practical problems even with moderately large data sets. To address this issue, in this paper we propose a computationally efficient sparse Bayesian Learning method that integrates the generalized approximate message passing (GAMP) technique. Specifically, the algorithm is developed within an expectation-maximization (EM) framework, using GAMP to efficiently compute an approximation of the posterior distribution of the hidden variables. The hyperparameters associated with the hierarchical Gaussian prior are learned by iteratively maximizing the Q-function calculated from the posterior approximation obtained from GAMP. Numerical results are provided to illustrate the computational efficiency and effectiveness of the proposed algorithm.
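
For context, here is a sketch of the conventional EM iteration for sparse Bayesian Learning with an exact Gaussian E-step (illustrative code, not the authors' implementation; the function name and default values are hypothetical). The per-iteration matrix inversion in the E-step is exactly the cost the paper avoids by substituting a GAMP approximation of the posterior, which is not reproduced here.

```python
import numpy as np

def sbl_em(Phi, y, sigma2=1e-2, n_iter=50):
    """Conventional EM for sparse Bayesian Learning with an exact E-step (no GAMP)."""
    n, p = Phi.shape
    gamma = np.ones(p)                            # hierarchical Gaussian prior variances
    for _ in range(n_iter):
        # E-step: Gaussian posterior of the signal given the current hyperparameters
        Gamma = np.diag(gamma)
        Sigma_y = sigma2 * np.eye(n) + Phi @ Gamma @ Phi.T
        K = Gamma @ Phi.T @ np.linalg.inv(Sigma_y)
        mu = K @ y                                # posterior mean
        Sigma = Gamma - K @ Phi @ Gamma           # posterior covariance
        # M-step: maximize the Q-function over the hyperparameters
        gamma = mu ** 2 + np.diag(Sigma)
    return mu, gamma
```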

  • ICASSP - Pattern-coupled sparse Bayesian Learning for recovery of block-sparse signals
    2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014
    Co-Authors: Yanning Shen, Huiping Duan, Jun Fang
    Abstract:

    In this paper, we develop a new sparse Bayesian Learning method for recovery of block-sparse signals with unknown cluster patterns. A pattern-coupled hierarchical Gaussian prior model is introduced to characterize the statistical dependencies among coefficients, where a set of hyperparameters is employed to control the sparsity of the signal coefficients. Unlike the conventional sparse Bayesian Learning framework, in which each hyperparameter is associated independently with a single coefficient, here the prior for each coefficient involves not only its own hyperparameter but also the hyperparameters of its immediate neighbors. In this way, the sparsity patterns of neighboring coefficients are related to each other, and the hierarchical model has the potential to encourage structured-sparse solutions. The hyperparameters, along with the sparse signal, are learned by maximizing their posterior probability via an expectation-maximization (EM) algorithm.
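
The neighbour coupling described above can be written in a few lines. The sketch below (illustrative; the coupling parameter beta and the example values are hypothetical) forms the prior precision of each coefficient from its own hyperparameter plus those of its immediate neighbours, which is what encourages block-structured sparsity.

```python
import numpy as np

def coupled_precisions(alpha, beta=0.5):
    """Prior precision of x_i: alpha_i + beta * (alpha_{i-1} + alpha_{i+1}).

    beta = 0 recovers conventional SBL with independent hyperparameters;
    beta > 0 ties each coefficient's sparsity to its neighbours'.
    """
    left = np.concatenate(([0.0], alpha[:-1]))     # alpha_{i-1}, zero-padded at the edges
    right = np.concatenate((alpha[1:], [0.0]))     # alpha_{i+1}
    return alpha + beta * (left + right)

lone = np.array([1e4, 1e4, 1e-2, 1e4, 1e4])    # one "active" (small-alpha) coefficient on its own
block = np.array([1e4, 1e-2, 1e-2, 1e-2, 1e4]) # a contiguous block of active coefficients
print(coupled_precisions(lone)[2])             # large: isolated spikes are suppressed by their neighbours
print(coupled_precisions(block)[2])            # small: coefficients inside a block stay active
```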

Bhaskar D. Rao - One of the best experts on this subject based on the ideXlab platform.

  • Sparse Bayesian Learning for basis selection
    IEEE Transactions on Signal Processing, 2004
    Co-Authors: David Wipf, Bhaskar D. Rao
    Abstract:

    Sparse Bayesian Learning (SBL), and specifically relevance vector machines, has received much attention in the machine Learning literature as a means of achieving parsimonious representations in the context of regression and classification. The methodology relies on a parameterized prior that encourages models with few nonzero weights. In this paper, we adapt SBL to the signal processing problem of basis selection from overcomplete dictionaries, proving several results about the SBL cost function that elucidate its general behavior and provide solid theoretical justification for this application. Specifically, we show that SBL retains a desirable property of the ℓ0-norm diversity measure (i.e., the global minimum is achieved at the maximally sparse solution) while often possessing a more limited constellation of local minima. We also demonstrate that the local minima that do exist are achieved at sparse solutions. We then provide a novel interpretation of SBL that gives valuable insight into why it is successful in producing sparse representations. Finally, we include simulation studies comparing sparse Bayesian Learning with basis pursuit and the more recent FOCal Underdetermined System Solver (FOCUSS) class of basis selection algorithms. These results indicate that our theoretical insights translate directly into improved performance.
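
For reference, the SBL cost function whose minima these results characterize is the negative log marginal likelihood over the hyperparameters gamma; a short sketch follows (illustrative code, function name assumed).

```python
import numpy as np

def sbl_cost(gamma, Phi, y, sigma2):
    """Type-II / evidence-maximization SBL objective:
    L(gamma) = log|Sigma_y| + y^T Sigma_y^{-1} y,
    Sigma_y  = sigma2 * I + Phi diag(gamma) Phi^T.
    The results above concern the minima of L over gamma >= 0: the global
    minimum occurs at the maximally sparse representation of y, and local
    minima, where they exist, also occur at sparse solutions.
    """
    n = Phi.shape[0]
    Sigma_y = sigma2 * np.eye(n) + Phi @ np.diag(gamma) @ Phi.T
    _, logdet = np.linalg.slogdet(Sigma_y)
    return logdet + y @ np.linalg.solve(Sigma_y, y)
```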

  • ICASSP - Bayesian Learning for sparse signal reconstruction
    2003 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), 2003
    Co-Authors: D. P. Wipf, Bhaskar D. Rao
    Abstract:

    Sparse Bayesian Learning and specifically relevance vector machines have received much attention as a means of achieving parsimonious representations of signals in the context of regression and classification. We provide a simplified derivation of this paradigm from a Bayesian evidence perspective and apply it to the problem of basis selection from overcomplete dictionaries. Furthermore, we prove that the stable fixed points of the resulting algorithm are necessarily sparse, providing a solid theoretical justification for adapting the methodology to basis selection tasks. We then include simulation studies comparing sparse Bayesian Learning with basis pursuit and the more recent FOCUSS class of basis selection algorithms, empirically demonstrating superior performance in terms of average sparsity and success rate of recovering generative bases.

Matthew J Beal - One of the best experts on this subject based on the ideXlab platform.

  • propagation algorithms for variational Bayesian Learning
    Neural Information Processing Systems, 2000
    Co-Authors: Zoubin Ghahramani, Matthew J Beal
    Abstract:

    Variational approximations are becoming a widespread tool for Bayesian Learning of graphical models. We provide some theoretical results for the variational updates in a very general family of conjugate-exponential graphical models. We show how the belief propagation and the junction tree algorithms can be used in the inference step of variational Bayesian Learning. Applying these results to the Bayesian analysis of linear-Gaussian state-space models we obtain a Learning procedure that exploits the Kalman smoothing propagation, while integrating over all model parameters. We demonstrate how this can be used to infer the hidden state dimensionality of the state-space model in a variety of synthetic problems and one real high-dimensional data set.
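
As a small illustration of the style of update involved (a sketch under simplifying assumptions, not the paper's algorithm), the code below runs variational Bayesian Learning for the simplest conjugate-exponential model: a Gaussian with unknown mean and precision under a conjugate Normal-Gamma prior. The prior settings and the function name are hypothetical. The paper's point is that for structured models such as linear-Gaussian state-space models, the corresponding inference step can be performed with standard propagation algorithms (e.g. Kalman smoothing), which this sketch does not attempt.

```python
import numpy as np

def vb_gaussian(x, mu0=0.0, lam0=1.0, a0=1e-3, b0=1e-3, n_iter=50):
    """Coordinate-ascent variational Bayes for x_n ~ N(mu, 1/tau) with a
    Normal-Gamma prior, using the factorization q(mu) q(tau)."""
    N, xbar = len(x), np.mean(x)
    E_tau = 1.0                                   # initial guess for E_q[tau]
    for _ in range(n_iter):
        # update q(mu) = N(mu_N, 1 / lam_N)
        mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
        lam_N = (lam0 + N) * E_tau
        # update q(tau) = Gamma(a_N, rate=b_N), using E_q[mu] and E_q[mu^2]
        E_mu, E_mu2 = mu_N, mu_N ** 2 + 1.0 / lam_N
        a_N = a0 + 0.5 * (N + 1)
        b_N = b0 + 0.5 * (np.sum(x ** 2) - 2 * E_mu * np.sum(x) + N * E_mu2
                          + lam0 * (E_mu2 - 2 * mu0 * E_mu + mu0 ** 2))
        E_tau = a_N / b_N
    return mu_N, lam_N, a_N, b_N

x = np.random.default_rng(1).normal(2.0, 0.5, size=200)
print(vb_gaussian(x))   # posterior mean of mu is near 2; a_N / b_N is near 1 / 0.25
```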

Anand Dixit - One of the best experts on this subject based on the ideXlab platform.

  • Posterior Impropriety of some Sparse Bayesian Learning Models
    arXiv: Statistics Theory, 2020
    Co-Authors: Anand Dixit, Vivekananda Roy
    Abstract:

    Sparse Bayesian Learning models are typically used for prediction in datasets with a significantly greater number of covariates than observations. Among the class of sparse Bayesian Learning models, the relevance vector machine (RVM) is particularly popular, as evidenced by the large number of citations of the original RVM paper of Tipping (2001) [JMLR, 1, 211-244]. In this article we show that RVM and some other sparse Bayesian Learning models, with hyperparameter values currently used in the literature, are based on improper posteriors. Further, we provide necessary and sufficient conditions for posterior propriety of RVM.

  • Posterior impropriety of some sparse Bayesian Learning models
    Statistics & Probability Letters
    Co-Authors: Anand Dixit, Vivekananda Roy
    Abstract:

    Sparse Bayesian Learning models are typically used for prediction in datasets with a significantly greater number of covariates than observations. Such models often take a reproducing kernel Hilbert space (RKHS) approach to prediction and can be implemented using either proper or improper priors. In this article we show that a few sparse Bayesian Learning models in the literature, when implemented using improper priors, lead to improper posteriors.