Bayesian Perspective


The Experts below are selected from a list of 13563 Experts worldwide ranked by ideXlab platform

Yuan Yuan - One of the best experts on this subject based on the ideXlab platform.

  • Sparse Coding From a Bayesian Perspective
    IEEE Transactions on Neural Networks and Learning Systems, 2013
    Co-Authors: Yulong Wang, Yuan Yuan
    Abstract:

    Sparse coding is a promising theme in computer vision. Most existing sparse coding methods are based on either the l0 or the l1 penalty, which often leads to unstable solutions or biased estimates. This is because of the nonconvexity and discontinuity of the l0 penalty and the over-penalization of the truly large coefficients by the l1 penalty. In this paper, sparse coding is interpreted from a novel Bayesian Perspective, which yields a new objective function through maximum a posteriori (MAP) estimation. The solution of this objective function generates more stable results than the l0 penalty and smaller reconstruction errors than the l1 penalty. In addition, the convergence of the proposed sparse coding algorithm is established. Experiments on single-image super-resolution and visual tracking demonstrate that the proposed method is more effective than other state-of-the-art methods.
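    To make the MAP connection concrete: under a Gaussian noise model, a Laplacian prior on the coefficients turns MAP estimation into the familiar l1-penalized sparse coding problem. The sketch below solves that baseline problem with ISTA (iterative soft-thresholding); it illustrates the l1 case the abstract contrasts against, not the paper's own penalty or algorithm, and all names (`ista_lasso`, the toy dictionary) are illustrative.

    ```python
    import numpy as np

    def ista_lasso(D, y, lam, n_iter=200):
        """MAP sparse coding under a Laplacian prior: solves
        min_x 0.5*||y - D x||^2 + lam*||x||_1 via ISTA."""
        step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1/L, L = Lipschitz constant of the data-fit term
        x = np.zeros(D.shape[1])
        for _ in range(n_iter):
            z = x - step * (D.T @ (D @ x - y))   # gradient step on the smooth term
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold (l1 prox)
        return x

    # Toy demo: recover a 2-sparse code for a random dictionary
    rng = np.random.default_rng(0)
    D = rng.standard_normal((20, 10))
    x_true = np.zeros(10)
    x_true[[2, 7]] = [1.5, -2.0]
    y = D @ x_true
    x_hat = ista_lasso(D, y, lam=0.1)
    ```

    The soft-thresholding step shrinks every coefficient by `step * lam`, which is exactly the over-penalization of large true coefficients that the abstract criticizes about the l1 penalty.
    
    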

Bo Yang - One of the best experts on this subject based on the ideXlab platform.

  • Hierarchical sparse coding from a Bayesian Perspective
    Neurocomputing, 2018
    Co-Authors: Yupei Zhang, Ming Xiang, Bo Yang
    Abstract:

    We consider the problem of hierarchical sparse coding, where not only are a few groups of atoms active at a time, but each group also enjoys internal sparsity. Current approaches usually achieve between-group sparsity with the l1 penalty, so that many groups have small coefficients rather than being exactly zeroed out. These trivial groups make the model prone to overfitting noise and harm the interpretability of the sparse representation. To this end, we reformulate the hierarchical sparse model from a Bayesian Perspective using twofold priors: a spike-and-slab prior and a Laplacian prior. The former explicitly induces between-group sparsity, while the latter both induces within-group sparsity and keeps the reconstruction error small. We propose a nested prior that integrates the two priors to yield hierarchical sparsity. The resulting optimization problem is solved to convergence in a few iterations by the proposed nested algorithm, which corresponds to the nested prior. In experiments, we evaluate the performance of our method on signal recovery, image inpainting, and sparse-representation-based classification, using simulated signals and two publicly available image databases. The results show that, compared with popular sparse coding methods, the proposed method yields more concise representations and more reliable interpretations of the data.
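    The two-level sparsity pattern the abstract describes can be pictured with a simple shrinkage rule: zero out entire groups whose energy is negligible (the spike of a spike-and-slab prior), then soft-threshold the coefficients inside surviving groups (the MAP effect of a Laplacian prior). This is an illustrative sketch of the sparsity structure only, not the paper's nested algorithm; the function name and thresholds are invented for the example.

    ```python
    import numpy as np

    def hierarchical_shrink(x, groups, group_thresh, within_lam):
        """Two-level shrinkage: groups with l2 norm below group_thresh are
        zeroed entirely (between-group sparsity); surviving groups are
        soft-thresholded entrywise (within-group sparsity)."""
        out = np.zeros_like(x)
        for g in groups:
            block = x[g]
            if np.linalg.norm(block) >= group_thresh:  # group escapes the "spike"
                out[g] = np.sign(block) * np.maximum(np.abs(block) - within_lam, 0.0)
        return out

    x = np.array([0.05, -0.02, 0.01, 2.0, -0.3, 0.8])
    groups = [slice(0, 3), slice(3, 6)]
    z = hierarchical_shrink(x, groups, group_thresh=0.5, within_lam=0.2)
    # First group is zeroed out entirely; the second is soft-thresholded.
    ```

    Note how the first group is exactly zero rather than merely small, which is the interpretability advantage the abstract claims over a pure l1 approach.
    
    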

Yulong Wang - One of the best experts on this subject based on the ideXlab platform.

  • Sparse Coding From a Bayesian Perspective
    IEEE Transactions on Neural Networks and Learning Systems, 2013
    Co-Authors: Yulong Wang, Yuan Yuan

Andrew M. Stuart - One of the best experts on this subject based on the ideXlab platform.

  • Inverse Problems: A Bayesian Perspective
    Acta Numerica, 2010
    Co-Authors: Andrew M. Stuart
    Abstract:

    The subject of inverse problems in differential equations is of enormous practical importance, and has also generated substantial mathematical and computational innovation. Typically some form of regularization is required to ameliorate ill-posed behaviour. In this article we review the Bayesian approach to regularization, developing a function space viewpoint on the subject. This approach allows for a full characterization of all possible solutions, and their relative probabilities, whilst simultaneously forcing significant modelling issues to be addressed in a clear and precise fashion. Although expensive to implement, this approach is starting to lie within the range of the available computational resources in many application areas. It also allows for the quantification of uncertainty and risk, something which is increasingly demanded by these applications. Furthermore, the approach is conceptually important for the understanding of simpler, computationally expedient approaches to inverse problems. We demonstrate that, when formulated in a Bayesian fashion, a wide range of inverse problems share a common mathematical framework, and we highlight a theory of well-posedness which stems from this. The well-posedness theory provides the basis for a number of stability and approximation results which we describe. We also review a range of algorithmic approaches which are used when adopting the Bayesian approach to inverse problems. These include MCMC methods, filtering and the variational approach.
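    Among the algorithmic approaches the review surveys, MCMC is the simplest to sketch. Below is a random-walk Metropolis sampler applied to a toy finite-dimensional linear inverse problem y = A u + noise with a Gaussian prior on u; the review's setting is infinite-dimensional, so this is only a minimal caricature of the idea, with all names and parameter values chosen for the example.

    ```python
    import numpy as np

    def rw_metropolis(log_post, x0, n_steps, step, seed=0):
        """Random-walk Metropolis: propose a Gaussian perturbation, accept
        with probability min(1, posterior ratio)."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        lp = log_post(x)
        samples = []
        for _ in range(n_steps):
            prop = x + step * rng.standard_normal(x.shape)  # Gaussian proposal
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:         # accept/reject
                x, lp = prop, lp_prop
            samples.append(x.copy())
        return np.array(samples)

    # Toy linear inverse problem: infer u from y = A u, Gaussian prior on u.
    A = np.array([[1.0, 0.5], [0.2, 1.0]])
    u_true = np.array([1.0, -0.5])
    y = A @ u_true
    sigma, prior_var = 0.1, 10.0

    def log_post(u):
        misfit = y - A @ u
        return -0.5 * misfit @ misfit / sigma**2 - 0.5 * u @ u / prior_var

    chain = rw_metropolis(log_post, x0=np.zeros(2), n_steps=8000, step=0.1)
    u_mean = chain[2000:].mean(axis=0)  # posterior mean after burn-in
    ```

    Unlike a single regularized point estimate, the chain characterizes the whole posterior, which is what enables the quantification of uncertainty and risk that the abstract emphasizes.
    
    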

Yupei Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Hierarchical sparse coding from a Bayesian Perspective
    Neurocomputing, 2018
    Co-Authors: Yupei Zhang, Ming Xiang, Bo Yang