The experts below are selected from a list of 10,593 experts worldwide ranked by the ideXlab platform.
Arthur Gretton - One of the best experts on this subject based on the ideXlab platform.
-
Kernel Bayes' Rule: Bayesian Inference with Positive Definite Kernels
Journal of Machine Learning Research, 2013. Co-Authors: Kenji Fukumizu, Le Song, Arthur Gretton. Abstract: A kernel method for realizing Bayes' Rule is proposed, based on representations of probabilities in reproducing kernel Hilbert spaces. Probabilities are uniquely characterized by the mean of the canonical map to the RKHS. The prior and conditional probabilities are expressed in terms of RKHS functions of an empirical sample: no explicit parametric model is needed for these quantities. The posterior is likewise an RKHS mean of a weighted sample. The estimator for the expectation of a function of the posterior is derived, and rates of consistency are shown. Some representative applications of the kernel Bayes' Rule are presented, including Bayesian computation without likelihood and filtering with a nonparametric state-space model.
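The posterior-embedding computation described in the abstract can be sketched in a few lines. This is a minimal illustration, not the full estimator from the paper: it assumes a Gaussian kernel, synthetic one-dimensional data, and the simplified case where the prior coincides with the empirical marginal of X, which reduces kernel Bayes' rule to a regularized conditional mean embedding. The posterior is then represented, as the abstract says, as an RKHS mean of a weighted sample.

```python
import numpy as np

def gauss_kernel(a, b, sigma):
    # Gaussian (RBF) kernel matrix between 1-D sample arrays a and b
    d = a[:, None] - b[None, :]
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def posterior_weights(X, Y, y_obs, lam=1e-3, sigma=0.3):
    # Weights w such that the posterior mean embedding of X given Y = y_obs
    # is approximated by sum_i w_i k(., X_i).  This is the regularized
    # conditional mean embedding -- kernel Bayes' rule in the special case
    # where the prior equals the empirical marginal of X.
    n = len(Y)
    G = gauss_kernel(Y, Y, sigma)                      # Gram matrix on the Y-sample
    k = gauss_kernel(Y, np.array([y_obs]), sigma)[:, 0]
    return np.linalg.solve(G + n * lam * np.eye(n), k)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, 200)             # sample from the latent variable
Y = X + 0.1 * rng.normal(size=200)        # noisy observations of the latent
w = posterior_weights(X, Y, y_obs=1.0)
post_mean = float(w @ X) / float(w.sum())  # posterior mean of X via the weighted sample
```

Since the observation noise is small, the estimated posterior mean of X given Y = 1.0 lands near 1; the regularization parameter `lam` and bandwidth `sigma` are illustrative choices, not values from the paper.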
-
Kernel Bayes' Rule
Neural Information Processing Systems, 2011. Co-Authors: Kenji Fukumizu, Le Song, Arthur Gretton. Abstract: A nonparametric kernel-based method for realizing Bayes' Rule is proposed, based on kernel representations of probabilities in reproducing kernel Hilbert spaces. The prior and conditional probabilities are expressed as empirical kernel mean and covariance operators, respectively, and the kernel mean of the posterior distribution is computed in the form of a weighted sample. The kernel Bayes' Rule can be applied to a wide variety of Bayesian inference problems: we demonstrate Bayesian computation without likelihood, and filtering with a nonparametric state-space model. A consistency rate for the posterior estimate is established.
-
Kernel Bayes' Rule
arXiv: Machine Learning, 2010. Co-Authors: Kenji Fukumizu, Le Song, Arthur Gretton. Abstract: A nonparametric kernel-based method for realizing Bayes' Rule is proposed, based on representations of probabilities in reproducing kernel Hilbert spaces. Probabilities are uniquely characterized by the mean of the canonical map to the RKHS. The prior and conditional probabilities are expressed in terms of RKHS functions of an empirical sample: no explicit parametric model is needed for these quantities. The posterior is likewise an RKHS mean of a weighted sample. The estimator for the expectation of a function of the posterior is derived, and rates of consistency are shown. Some representative applications of the kernel Bayes' Rule are presented, including Bayesian computation without likelihood and filtering with a nonparametric state-space model.
Kenji Fukumizu - One of the best experts on this subject based on the ideXlab platform.
-
Kernel Bayes' Rule: Bayesian Inference with Positive Definite Kernels
Journal of Machine Learning Research, 2013. Co-Authors: Kenji Fukumizu, Le Song, Arthur Gretton. Abstract: A kernel method for realizing Bayes' Rule is proposed, based on representations of probabilities in reproducing kernel Hilbert spaces. Probabilities are uniquely characterized by the mean of the canonical map to the RKHS. The prior and conditional probabilities are expressed in terms of RKHS functions of an empirical sample: no explicit parametric model is needed for these quantities. The posterior is likewise an RKHS mean of a weighted sample. The estimator for the expectation of a function of the posterior is derived, and rates of consistency are shown. Some representative applications of the kernel Bayes' Rule are presented, including Bayesian computation without likelihood and filtering with a nonparametric state-space model.
-
Kernel Bayes' Rule
Neural Information Processing Systems, 2011. Co-Authors: Kenji Fukumizu, Le Song, Arthur Gretton. Abstract: A nonparametric kernel-based method for realizing Bayes' Rule is proposed, based on kernel representations of probabilities in reproducing kernel Hilbert spaces. The prior and conditional probabilities are expressed as empirical kernel mean and covariance operators, respectively, and the kernel mean of the posterior distribution is computed in the form of a weighted sample. The kernel Bayes' Rule can be applied to a wide variety of Bayesian inference problems: we demonstrate Bayesian computation without likelihood, and filtering with a nonparametric state-space model. A consistency rate for the posterior estimate is established.
-
Kernel Bayes' Rule
arXiv: Machine Learning, 2010. Co-Authors: Kenji Fukumizu, Le Song, Arthur Gretton. Abstract: A nonparametric kernel-based method for realizing Bayes' Rule is proposed, based on representations of probabilities in reproducing kernel Hilbert spaces. Probabilities are uniquely characterized by the mean of the canonical map to the RKHS. The prior and conditional probabilities are expressed in terms of RKHS functions of an empirical sample: no explicit parametric model is needed for these quantities. The posterior is likewise an RKHS mean of a weighted sample. The estimator for the expectation of a function of the posterior is derived, and rates of consistency are shown. Some representative applications of the kernel Bayes' Rule are presented, including Bayesian computation without likelihood and filtering with a nonparametric state-space model.
Le Song - One of the best experts on this subject based on the ideXlab platform.
-
Particle Flow Bayes' Rule
arXiv: Learning, 2019. Co-Authors: Xinshi Chen, Hanjun Dai, Le Song. Abstract: We present a particle flow realization of Bayes' Rule, where an ODE-based neural operator is used to transport particles from a prior to its posterior after a new observation. We prove that such an ODE operator exists. Its neural parameterization can be trained in a meta-learning framework, allowing this operator to reason about the effect of an individual observation on the posterior, and thus generalize across different priors, observations and to sequential Bayesian inference. We demonstrated the generalization ability of our particle flow Bayes operator in several canonical and high dimensional examples.
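The transport idea in this abstract, particles carried from prior to posterior by an ODE, can be illustrated without any learned operator in the conjugate Gaussian case, where the velocity field along a tempered-likelihood path is available in closed form. The sketch below is a hand-derived stand-in for the paper's neural ODE operator, not its method: all model parameters (prior N(0, 1), observation noise, step count) are illustrative assumptions.

```python
import numpy as np

def flow_particles(x0, m0, s0, y, r, steps=200):
    # Euler-integrate particles along a flow whose marginal at "time" lam
    # interpolates between the Gaussian prior N(m0, s0^2) at lam = 0 and the
    # exact Gaussian posterior (after observing y with noise std r) at lam = 1.
    # The path is the tempered posterior; its moment derivatives give an
    # affine velocity field in closed form.
    x = x0.copy()
    a, b = m0 / s0 ** 2, y / r ** 2
    dlam = 1.0 / steps
    for i in range(steps):
        lam = i * dlam
        var = 1.0 / (1.0 / s0 ** 2 + lam / r ** 2)   # tempered posterior variance
        m = var * (a + lam * b)                      # tempered posterior mean
        dvar = -(var ** 2) / r ** 2                  # d var / d lam
        dm = dvar * (a + lam * b) + var * b          # d mean / d lam
        v = dm + (dvar / (2.0 * var)) * (x - m)      # velocity field at (x, lam)
        x += v * dlam
    return x

rng = np.random.default_rng(1)
prior = rng.normal(0.0, 1.0, 5000)   # particles drawn from the prior N(0, 1)
post = flow_particles(prior, m0=0.0, s0=1.0, y=2.0, r=1.0)
# Exact posterior for this conjugate model is N(1.0, 0.5), so the flowed
# particle cloud should have mean near 1.0 and std near 0.707.
```

The paper replaces this analytic velocity field with a meta-learned neural parameterization, which is what lets the operator generalize across priors and observations.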
-
Kernel Bayes' Rule: Bayesian Inference with Positive Definite Kernels
Journal of Machine Learning Research, 2013. Co-Authors: Kenji Fukumizu, Le Song, Arthur Gretton. Abstract: A kernel method for realizing Bayes' Rule is proposed, based on representations of probabilities in reproducing kernel Hilbert spaces. Probabilities are uniquely characterized by the mean of the canonical map to the RKHS. The prior and conditional probabilities are expressed in terms of RKHS functions of an empirical sample: no explicit parametric model is needed for these quantities. The posterior is likewise an RKHS mean of a weighted sample. The estimator for the expectation of a function of the posterior is derived, and rates of consistency are shown. Some representative applications of the kernel Bayes' Rule are presented, including Bayesian computation without likelihood and filtering with a nonparametric state-space model.
-
Kernel Bayes' Rule
Neural Information Processing Systems, 2011. Co-Authors: Kenji Fukumizu, Le Song, Arthur Gretton. Abstract: A nonparametric kernel-based method for realizing Bayes' Rule is proposed, based on kernel representations of probabilities in reproducing kernel Hilbert spaces. The prior and conditional probabilities are expressed as empirical kernel mean and covariance operators, respectively, and the kernel mean of the posterior distribution is computed in the form of a weighted sample. The kernel Bayes' Rule can be applied to a wide variety of Bayesian inference problems: we demonstrate Bayesian computation without likelihood, and filtering with a nonparametric state-space model. A consistency rate for the posterior estimate is established.
-
Kernel Bayes' Rule
arXiv: Machine Learning, 2010. Co-Authors: Kenji Fukumizu, Le Song, Arthur Gretton. Abstract: A nonparametric kernel-based method for realizing Bayes' Rule is proposed, based on representations of probabilities in reproducing kernel Hilbert spaces. Probabilities are uniquely characterized by the mean of the canonical map to the RKHS. The prior and conditional probabilities are expressed in terms of RKHS functions of an empirical sample: no explicit parametric model is needed for these quantities. The posterior is likewise an RKHS mean of a weighted sample. The estimator for the expectation of a function of the posterior is derived, and rates of consistency are shown. Some representative applications of the kernel Bayes' Rule are presented, including Bayesian computation without likelihood and filtering with a nonparametric state-space model.
Robin T. Vollmer - One of the best experts on this subject based on the ideXlab platform.
-
Use of Bayes Rule and MIB-1 Proliferation Index to Discriminate Spitz Nevus From Malignant Melanoma
American Journal of Clinical Pathology, 2004. Co-Authors: Robin T. Vollmer. Abstract: Differentiating Spitz nevus from malignant melanoma is difficult and controversial. Despite helpful lists of differential diagnostic features, uncertainty about the diagnosis often provokes some to stain the tumor for MIB-1 antibody to Ki-67 and measure the proliferation index (PI) of the tumor. Of the many reports about MIB-1 PI in Spitz nevi and melanoma, none have consolidated the information to provide guidelines for the predictive probability that a lesion is a Spitz nevus, given that the MIB-1 PI falls into a certain interval. The present study used previously published data and exponential and γ probability density functions to model statistical distributions of PI, respectively, in Spitz nevi and melanomas and Bayes Rule to estimate the predictive probability that a lesion is a Spitz nevus, given an observed PI. Results indicate that PIs more than 10% favor a melanoma diagnosis and PIs less than 2%, Spitz nevus. PI values between 2% and 10% yield various predictive values for Spitz nevus, depending on the a priori probability that the lesion is a Spitz nevus. The algorithm tabulates guidelines for the predictive probabilities of Spitz nevus given an observed PI.
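The calculation described in this abstract, exponential and gamma densities for the PI in the two diagnoses, combined through Bayes' rule, can be sketched numerically. The density parameters below (exponential with mean 2% for Spitz nevi, gamma with shape 2 and scale 8% for melanomas) are illustrative assumptions chosen to reproduce the qualitative cutoffs in the abstract, not Vollmer's fitted values.

```python
import math

def expon_pdf(x, mean):
    # Exponential density with the given mean (rate = 1 / mean)
    rate = 1.0 / mean
    return rate * math.exp(-rate * x)

def gamma_pdf(x, shape, scale):
    # Gamma density with shape k and scale theta
    return (x ** (shape - 1) * math.exp(-x / scale)
            / (math.gamma(shape) * scale ** shape))

def p_spitz_given_pi(pi_value, prior_spitz=0.5):
    # Bayes' rule: P(Spitz | PI) = p * f_S(PI) / (p * f_S(PI) + (1 - p) * f_M(PI))
    # f_S, f_M are ILLUSTRATIVE densities, not the paper's fitted parameters.
    f_s = expon_pdf(pi_value, mean=2.0)           # PI density under Spitz nevus
    f_m = gamma_pdf(pi_value, shape=2.0, scale=8.0)  # PI density under melanoma
    num = prior_spitz * f_s
    return num / (num + (1.0 - prior_spitz) * f_m)
```

With these toy densities and a 50/50 prior, a PI of 1% gives a predictive probability of Spitz nevus above 0.9, while a PI of 15% drops it below 0.05, mirroring the abstract's "less than 2% favors Spitz nevus, more than 10% favors melanoma" guideline; intermediate PIs depend strongly on the prior, as the paper notes.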
-
Patient Age in Spitz Nevus and Malignant Melanoma: Implication of Bayes Rule for Differential Diagnosis
American Journal of Clinical Pathology, 2004. Co-Authors: Robin T. Vollmer. Abstract: In the differential diagnosis of Spitz nevus vs malignant melanoma, patient age provides a critical piece of clinical information, because Spitz nevi occur mostly in children and melanomas occur mostly in adults. Nevertheless, there is overlap in the age distributions of Spitz nevus and melanoma. The issue to consider is how these age distributions and their governing probability densities can impact the a priori probability that a lesion is a Spitz nevus vs a melanoma. Herein I introduce a quantitative approach that uses Bayes Rule together with previously published data on the age distributions in Spitz nevi and melanoma. The resulting algorithm yields plots and a table of predictive a priori probabilities of Spitz nevus, given patient age occurring within narrow intervals, and I believe these provide useful guidelines for using age in the differential diagnosis of Spitz nevus and malignant melanoma.
Sacha Sokoloski - One of the best experts on this subject based on the ideXlab platform.
-
Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics
Neural Computation, 2017. Co-Authors: Sacha Sokoloski. Abstract: In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli that caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' Rule to combine population responses with the predictions of an internal model. The internal model of the Bayes filter is based on the true stimulus dynamics, and in this note, we present a method for training a theoretical neural circuit to approximately implement a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of linear probabilistic population codes to compute Bayes' Rule and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood of the neural network parameters with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem and show how the hidden layer of the neural network develops tuning curves consistent with findings in experimental neuroscience.
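The Bayes filter this abstract builds on has a simple predict/update skeleton: push the current belief through the internal dynamics model, then reweight by the observation likelihood via Bayes' rule. A minimal finite-state sketch (the simplest of the three problem classes the paper tests on) follows; the dynamics, likelihood values, and two-state setup are illustrative assumptions, and the paper's contribution, learning the prediction step with a neural network when the dynamics are unknown, is not shown here.

```python
def bayes_filter_step(belief, transition, likelihood):
    # Predict: propagate the belief through the state-transition model,
    # then Update: reweight by the observation likelihood (Bayes' rule)
    # and renormalize.
    n = len(belief)
    predicted = [sum(transition[i][j] * belief[i] for i in range(n))
                 for j in range(n)]
    unnorm = [likelihood[j] * predicted[j] for j in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two hidden states with sticky dynamics, and an observation whose
# likelihood favors state 1.
T = [[0.9, 0.1], [0.1, 0.9]]
belief = [0.5, 0.5]                                   # uniform prior
belief = bayes_filter_step(belief, T, likelihood=[0.2, 0.8])
```

Starting from a uniform prior, the symmetric dynamics leave the predicted belief uniform, so the update step reduces to normalizing the likelihood, yielding a posterior of [0.2, 0.8]; repeated steps with fresh observations implement sequential Bayesian filtering.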
-
Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics
arXiv: Learning, 2015. Co-Authors: Sacha Sokoloski. Abstract: In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli which caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' Rule to combine population responses with the predictions of an internal model. In this paper we present a method for learning to approximate a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of probabilistic population codes to compute Bayes' Rule, and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem, and show how the hidden layer of the neural network develops tuning curves which are consistent with findings in experimental neuroscience.