Particle Filter

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 58,026 Experts worldwide ranked by the ideXlab platform

Sean P. Meyn - One of the best experts on this subject based on the ideXlab platform.

  • multivariable feedback Particle Filter
    Automatica, 2016
    Co-Authors: Tao Yang, Prashant G. Mehta, Richard S Laugesen, Sean P. Meyn
    Abstract:

    This paper presents the multivariable extension of the feedback Particle Filter (FPF) algorithm for the nonlinear Filtering problem in continuous-time. The FPF is a control-oriented approach to Particle Filtering. The approach does not require importance sampling or resampling and offers significant variance improvements; in particular, the algorithm can be applied to systems that are not stable. This paper describes new representations and algorithms for the FPF in the general multivariable nonlinear non-Gaussian setting. Theory surrounding the FPF is improved: Exactness of the FPF is established in the general setting, as well as well-posedness of the associated boundary value problem to obtain the Filter gain. A Galerkin finite-element algorithm is proposed for approximation of the gain. Its performance is illustrated in numerical experiments.
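    For reference, the particle evolution at the heart of the FPF can be stated compactly; the continuous-time form below follows the notation standard in the FPF papers (a is the drift, h the observation function, and the gain K is obtained from a Poisson equation):

    ```latex
    % Feedback Particle Filter, continuous-time form (standard FPF notation):
    % each particle X^i is a controlled copy of the signal model.
    \mathrm{d}X^i_t = a(X^i_t)\,\mathrm{d}t + \mathrm{d}B^i_t
      + \mathsf{K}(X^i_t,t)\circ\Bigl(\mathrm{d}Z_t
      - \tfrac{1}{2}\bigl(h(X^i_t) + \hat{h}_t\bigr)\,\mathrm{d}t\Bigr),
    \qquad \hat{h}_t = \mathbb{E}\bigl[h(X^i_t)\,\big|\,\mathcal{Z}_t\bigr]
    ```

    Here the Stratonovich form is used, and K is the gradient of the solution to a Poisson equation; that is the boundary value problem whose well-posedness the abstract establishes.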

  • feedback Particle Filter
    IEEE Transactions on Automatic Control, 2013
    Co-Authors: Tao Yang, Prashant G. Mehta, Sean P. Meyn
    Abstract:

    The feedback Particle Filter introduced in this paper is a new approach to approximate nonlinear Filtering, motivated by techniques from mean-field game theory. The Filter is defined by an ensemble of controlled stochastic systems (the Particles). Each Particle evolves under feedback control based on its own state and on features of the empirical distribution of the ensemble. The feedback control law is obtained as the solution to an optimal control problem, in which the optimization criterion is the Kullback-Leibler divergence between the actual posterior and the common posterior of any Particle. The following conclusions are obtained for diffusions with continuous observations: 1) The optimal control solution is exact: the two posteriors match exactly, provided they are initialized with identical priors. 2) The optimal Filter admits an innovation error-based gain feedback structure. 3) The optimal feedback gain is obtained via the solution of an Euler-Lagrange boundary value problem; the feedback gain equals the Kalman gain in the linear-Gaussian case. Numerical algorithms are introduced and implemented in two general examples, and in a neuroscience application involving coupled oscillators. In some cases the Filter is found to exhibit significantly lower variance than the bootstrap Particle Filter.
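    In the linear-Gaussian special case mentioned in conclusion 3), the FPF is easy to sketch. The following toy scalar implementation is an illustrative sketch, not the authors' code; the function and parameter names are assumptions. It propagates an ensemble with the gain set to the Kalman gain computed from the empirical particle variance:

    ```python
    import numpy as np

    def feedback_particle_filter(dzs, dt, a, h, sig_x, sig_w, n=500, seed=0):
        """Toy scalar linear-Gaussian feedback Particle Filter (sketch).

        Each particle follows a copy of the model dynamics plus a
        gain-times-innovation feedback term; here the gain is the Kalman
        gain K = P*h/sig_w**2 with P the empirical particle variance,
        matching the linear-Gaussian case described in the abstract.
        """
        rng = np.random.default_rng(seed)
        X = rng.normal(0.0, 2.0, size=n)                # prior ensemble
        means = []
        for dz in dzs:                                  # dz = h*x*dt + sig_w*dW
            P = X.var()
            K = P * h / sig_w**2                        # Kalman gain
            innov = dz - h * 0.5 * (X + X.mean()) * dt  # FPF mid-point innovation
            X = (X + a * X * dt                         # model drift
                 + sig_x * np.sqrt(dt) * rng.normal(size=n)  # process noise
                 + K * innov)                           # feedback correction
            means.append(X.mean())
        return np.array(means), X
    ```

    Note that no importance weights or resampling appear anywhere: every particle keeps weight 1/N, which is the point of the feedback construction.
    
    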

  • Feedback Particle Filter
    arXiv: Numerical Analysis, 2013
    Co-Authors: Tao Yang, Prashant G. Mehta, Sean P. Meyn
    Abstract:

    A new formulation of the Particle Filter for nonlinear Filtering is presented, based on concepts from optimal control and from mean-field game theory. The optimal control is chosen so that the posterior distribution of a Particle matches as closely as possible the posterior distribution of the true state given the observations. This is achieved by introducing a cost function, defined by the Kullback-Leibler (K-L) divergence between the actual posterior and the posterior of any Particle. The optimal control input is characterized by a certain Euler-Lagrange (E-L) equation, and is shown to admit an innovation error-based feedback structure. For diffusions with continuous observations, the optimal control solution is exact: the two posteriors match exactly, provided they are initialized with identical priors. The feedback Particle Filter is defined by a family of stochastic systems, each evolving under this optimal control law. A numerical algorithm is introduced and implemented in two general examples, and in a neuroscience application involving coupled oscillators. Some preliminary numerical comparisons between the feedback Particle Filter and the bootstrap Particle Filter are described.

  • multivariable feedback Particle Filter
    Conference on Decision and Control, 2012
    Co-Authors: Tao Yang, Prashant G. Mehta, Richard S Laugesen, Sean P. Meyn
    Abstract:

    In recent work it was shown that importance sampling can be avoided in the Particle Filter through an innovation structure inspired by traditional nonlinear Filtering combined with mean-field game formalisms [9], [19]. The resulting feedback Particle Filter (FPF) offers significant variance improvements; in particular, the algorithm can be applied to systems that are not stable. The Filter comes with an up-front computational cost to obtain the Filter gain. This paper describes new representations and algorithms to compute the gain in the general multivariable setting. The main contributions are: (i) Theory surrounding the FPF is improved: consistency is established in the multivariate setting, as well as well-posedness of the associated PDE to obtain the Filter gain. (ii) The gain can be expressed as the gradient of a function, which is precisely the solution to Poisson's equation for a related MCMC diffusion (the Smoluchowski equation). This provides a bridge to MCMC as well as to approximate optimal Filtering approaches such as TD-learning, which can in turn be used to approximate the gain. (iii) Motivated by a weak formulation of Poisson's equation, a Galerkin finite-element algorithm is proposed for approximation of the gain. Its performance is illustrated in numerical experiments.
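    The Galerkin approximation of the gain can be illustrated in one dimension. The sketch below is a hedged illustration, not the paper's algorithm: the basis choice and all names are assumptions. It solves the weak-form Poisson equation restricted to a finite basis, with expectations replaced by particle averages, and returns the approximate gain K(x) = sum_k c_k psi_k'(x):

    ```python
    import numpy as np

    def galerkin_gain(X, hvals, psis, dpsis):
        """Galerkin approximation of a 1-D gain function (sketch).

        Solves A c = b, where A_kl = E[psi_k' psi_l'] and
        b_k = E[(h - hbar) psi_k], with E[.] the particle average --
        a finite-basis version of the weak form of Poisson's equation.
        Returns the gain function K(x) = sum_k c_k psi_k'(x).
        """
        Phi = np.stack([p(X) for p in psis], axis=1)      # basis values, N x m
        dPhi = np.stack([dp(X) for dp in dpsis], axis=1)  # basis gradients, N x m
        A = dPhi.T @ dPhi / len(X)
        b = Phi.T @ (hvals - hvals.mean()) / len(X)
        c = np.linalg.solve(A, b)
        return lambda x: np.stack([dp(x) for dp in dpsis], axis=-1) @ c
    ```

    With the single basis function psi(x) = x this reduces to a constant gain equal to the empirical covariance of the state and the observation function.
    
    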

Tao Yang - One of the best experts on this subject based on the ideXlab platform.

  • multivariable feedback Particle Filter
    Automatica, 2016
    Co-Authors: Tao Yang, Prashant G. Mehta, Richard S Laugesen, Sean P. Meyn
    Abstract:

    This paper presents the multivariable extension of the feedback Particle Filter (FPF) algorithm for the nonlinear Filtering problem in continuous-time. The FPF is a control-oriented approach to Particle Filtering. The approach does not require importance sampling or resampling and offers significant variance improvements; in particular, the algorithm can be applied to systems that are not stable. This paper describes new representations and algorithms for the FPF in the general multivariable nonlinear non-Gaussian setting. Theory surrounding the FPF is improved: Exactness of the FPF is established in the general setting, as well as well-posedness of the associated boundary value problem to obtain the Filter gain. A Galerkin finite-element algorithm is proposed for approximation of the gain. Its performance is illustrated in numerical experiments.

  • feedback Particle Filter
    IEEE Transactions on Automatic Control, 2013
    Co-Authors: Tao Yang, Prashant G. Mehta, Sean P. Meyn
    Abstract:

    The feedback Particle Filter introduced in this paper is a new approach to approximate nonlinear Filtering, motivated by techniques from mean-field game theory. The Filter is defined by an ensemble of controlled stochastic systems (the Particles). Each Particle evolves under feedback control based on its own state and on features of the empirical distribution of the ensemble. The feedback control law is obtained as the solution to an optimal control problem, in which the optimization criterion is the Kullback-Leibler divergence between the actual posterior and the common posterior of any Particle. The following conclusions are obtained for diffusions with continuous observations: 1) The optimal control solution is exact: the two posteriors match exactly, provided they are initialized with identical priors. 2) The optimal Filter admits an innovation error-based gain feedback structure. 3) The optimal feedback gain is obtained via the solution of an Euler-Lagrange boundary value problem; the feedback gain equals the Kalman gain in the linear-Gaussian case. Numerical algorithms are introduced and implemented in two general examples, and in a neuroscience application involving coupled oscillators. In some cases the Filter is found to exhibit significantly lower variance than the bootstrap Particle Filter.

  • Feedback Particle Filter
    arXiv: Numerical Analysis, 2013
    Co-Authors: Tao Yang, Prashant G. Mehta, Sean P. Meyn
    Abstract:

    A new formulation of the Particle Filter for nonlinear Filtering is presented, based on concepts from optimal control and from mean-field game theory. The optimal control is chosen so that the posterior distribution of a Particle matches as closely as possible the posterior distribution of the true state given the observations. This is achieved by introducing a cost function, defined by the Kullback-Leibler (K-L) divergence between the actual posterior and the posterior of any Particle. The optimal control input is characterized by a certain Euler-Lagrange (E-L) equation, and is shown to admit an innovation error-based feedback structure. For diffusions with continuous observations, the optimal control solution is exact: the two posteriors match exactly, provided they are initialized with identical priors. The feedback Particle Filter is defined by a family of stochastic systems, each evolving under this optimal control law. A numerical algorithm is introduced and implemented in two general examples, and in a neuroscience application involving coupled oscillators. Some preliminary numerical comparisons between the feedback Particle Filter and the bootstrap Particle Filter are described.

  • multivariable feedback Particle Filter
    Conference on Decision and Control, 2012
    Co-Authors: Tao Yang, Prashant G. Mehta, Richard S Laugesen, Sean P. Meyn
    Abstract:

    In recent work it was shown that importance sampling can be avoided in the Particle Filter through an innovation structure inspired by traditional nonlinear Filtering combined with mean-field game formalisms [9], [19]. The resulting feedback Particle Filter (FPF) offers significant variance improvements; in particular, the algorithm can be applied to systems that are not stable. The Filter comes with an up-front computational cost to obtain the Filter gain. This paper describes new representations and algorithms to compute the gain in the general multivariable setting. The main contributions are: (i) Theory surrounding the FPF is improved: consistency is established in the multivariate setting, as well as well-posedness of the associated PDE to obtain the Filter gain. (ii) The gain can be expressed as the gradient of a function, which is precisely the solution to Poisson's equation for a related MCMC diffusion (the Smoluchowski equation). This provides a bridge to MCMC as well as to approximate optimal Filtering approaches such as TD-learning, which can in turn be used to approximate the gain. (iii) Motivated by a weak formulation of Poisson's equation, a Galerkin finite-element algorithm is proposed for approximation of the gain. Its performance is illustrated in numerical experiments.

Frank Dellaert - One of the best experts on this subject based on the ideXlab platform.

  • a rao-blackwellized Particle Filter for eigentracking
    Computer Vision and Pattern Recognition, 2004
    Co-Authors: Zia Khan, Tucker Balch, Frank Dellaert
    Abstract:

    Subspace representations have been a popular way to model appearance in computer vision. In Jepson and Black's influential paper on EigenTracking, subspace representations were successfully applied to tracking. For noisy targets, optimization-based algorithms (including EigenTracking) often fail catastrophically after losing track. Particle Filters have recently emerged as a robust method for tracking in the presence of multi-modal distributions. When subspace representations are used in a Particle Filter, however, the number of samples required grows exponentially because the state vector must include the subspace coefficients. We introduce an efficient method for using subspace representations in a Particle Filter by applying Rao-Blackwellization to integrate out the subspace coefficients in the state vector. Fewer samples are needed since part of the posterior over the state vector is calculated analytically. We use probabilistic principal component analysis to obtain analytically tractable integrals. We show experimental results in a scenario in which we track a target in clutter.
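    The marginalization idea can be sketched generically. In the toy measurement update below (an illustrative sketch, not the paper's implementation; `B_of`, the linear observation model, and all names are assumptions), each particle carries a sampled nonlinear state plus an analytic Gaussian over the linear coefficients, which is updated in closed form while the particle weight comes from the marginal likelihood:

    ```python
    import numpy as np

    def rb_update(particles, y, B_of, obs_var):
        """One Rao-Blackwellized measurement update (toy sketch).

        Each particle is a tuple (u, m, P): a sampled nonlinear state u,
        and mean/covariance of a Gaussian over the linear coefficients.
        The coefficients are updated by a Kalman step instead of being
        sampled, so only u needs particles.
        """
        new, weights = [], []
        for u, m, P in particles:
            B = B_of(u)                                   # observation matrix given u
            S = B @ P @ B.T + obs_var * np.eye(len(y))    # innovation covariance
            K = P @ B.T @ np.linalg.inv(S)                # Kalman gain for coefficients
            r = y - B @ m                                 # innovation
            m2 = m + K @ r                                # posterior mean
            P2 = P - K @ B @ P                            # posterior covariance
            # particle weight = marginal likelihood N(r; 0, S)
            w = np.exp(-0.5 * r @ np.linalg.solve(S, r)) / np.sqrt(
                (2 * np.pi) ** len(y) * np.linalg.det(S))
            new.append((u, m2, P2))
            weights.append(w)
        w = np.array(weights)
        return new, w / w.sum()
    ```

    Because the coefficient posterior is exact per particle, only the nonlinear state contributes Monte Carlo variance, which is why fewer samples suffice.
    
    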

  • an mcmc based Particle Filter for tracking multiple interacting targets
    European Conference on Computer Vision, 2004
    Co-Authors: Zia Khan, Tucker Balch, Frank Dellaert
    Abstract:

    We describe a Markov chain Monte Carlo (MCMC) based Particle Filter that effectively deals with interacting targets, i.e., targets that are influenced by the proximity and/or behavior of other targets. Such interactions cause problems for traditional approaches to the data association problem. In response, we developed a joint tracker that includes a more sophisticated motion model to maintain the identity of targets throughout an interaction, drastically reducing tracker failures. The paper presents two main contributions: (1) we show how a Markov random field (MRF) motion prior, built on the fly at each time step, can substantially improve tracking when targets interact, and (2) we show how this can be done efficiently using Markov chain Monte Carlo sampling. We prove that incorporating an MRF to model interactions is equivalent to adding an additional interaction factor to the importance weights in a joint Particle Filter. Since a joint Particle Filter suffers from exponential complexity in the number of tracked targets, we replace the traditional importance sampling step in the Particle Filter with an MCMC sampling step. The resulting Filter deals efficiently and effectively with complicated interactions when targets approach each other. We present both qualitative and quantitative results to substantiate the claims made in the paper, including a large-scale experiment on a video sequence over 10,000 frames in length.
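    The MCMC replacement for importance sampling can be conveyed with a toy 1-D joint tracker. This is a hedged sketch under simplified Gaussian motion and observation models; `interact` stands in for the MRF pairwise potential, and every name here is an assumption rather than the paper's code:

    ```python
    import numpy as np

    def mcmc_pf_step(prev_states, y, motion_std, obs_std, interact,
                     n_iters=200, seed=0):
        """One MCMC-based joint-filter update over all targets (toy sketch).

        Instead of importance-weighting joint samples, run a
        Metropolis-Hastings chain over the joint state: each iteration
        perturbs one target and accepts with the ratio of
        likelihood * motion prior * MRF interaction potential.
        """
        rng = np.random.default_rng(seed)
        x = prev_states + rng.normal(0, motion_std, size=prev_states.shape)

        def logp(s):
            like = -0.5 * np.sum((y - s) ** 2) / obs_std**2           # observation
            prior = -0.5 * np.sum((s - prev_states) ** 2) / motion_std**2
            mrf = sum(interact(s[i], s[j])                            # pairwise MRF
                      for i in range(len(s)) for j in range(i + 1, len(s)))
            return like + prior + mrf

        cur, samples = logp(x), []
        for _ in range(n_iters):
            i = rng.integers(len(x))
            prop = x.copy()
            prop[i] += rng.normal(0, motion_std)      # single-target proposal
            lp = logp(prop)
            if np.log(rng.uniform()) < lp - cur:      # MH accept/reject
                x, cur = prop, lp
            samples.append(x.copy())
        return np.array(samples)
    ```

    The cost per sweep grows linearly (plus pairwise terms) in the number of targets, rather than exponentially as in a weighted joint Particle Filter.
    
    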

Prashant G. Mehta - One of the best experts on this subject based on the ideXlab platform.

  • multivariable feedback Particle Filter
    Automatica, 2016
    Co-Authors: Tao Yang, Prashant G. Mehta, Richard S Laugesen, Sean P. Meyn
    Abstract:

    This paper presents the multivariable extension of the feedback Particle Filter (FPF) algorithm for the nonlinear Filtering problem in continuous-time. The FPF is a control-oriented approach to Particle Filtering. The approach does not require importance sampling or resampling and offers significant variance improvements; in particular, the algorithm can be applied to systems that are not stable. This paper describes new representations and algorithms for the FPF in the general multivariable nonlinear non-Gaussian setting. Theory surrounding the FPF is improved: Exactness of the FPF is established in the general setting, as well as well-posedness of the associated boundary value problem to obtain the Filter gain. A Galerkin finite-element algorithm is proposed for approximation of the gain. Its performance is illustrated in numerical experiments.

  • feedback Particle Filter
    IEEE Transactions on Automatic Control, 2013
    Co-Authors: Tao Yang, Prashant G. Mehta, Sean P. Meyn
    Abstract:

    The feedback Particle Filter introduced in this paper is a new approach to approximate nonlinear Filtering, motivated by techniques from mean-field game theory. The Filter is defined by an ensemble of controlled stochastic systems (the Particles). Each Particle evolves under feedback control based on its own state, and features of the empirical distribution of the ensemble. The feedback control law is obtained as the solution to an optimal control problem, in which the optimization criterion is the Kullback-Leibler divergence between the actual posterior, and the common posterior of any Particle. The following conclusions are obtained for diffusions with continuous observations: 1) The optimal control solution is exact: The two posteriors match exactly, provided they are initialized with identical priors. 2) The optimal Filter admits an innovation error-based gain feedback structure. 3) The optimal feedback gain is obtained via a solution of an Euler-Lagrange boundary value problem; the feedback gain equals the Kalman gain in the linear Gaussian case. Numerical algorithms are introduced and implemented in two general examples, and a neuroscience application involving coupled oscillators. In some cases it is found that the Filter exhibits significantly lower variance when compared to the bootstrap Particle Filter.

  • Feedback Particle Filter
    arXiv: Numerical Analysis, 2013
    Co-Authors: Tao Yang, Prashant G. Mehta, Sean P. Meyn
    Abstract:

    A new formulation of the Particle Filter for nonlinear Filtering is presented, based on concepts from optimal control, and from the mean-field game theory. The optimal control is chosen so that the posterior distribution of a Particle matches as closely as possible the posterior distribution of the true state given the observations. This is achieved by introducing a cost function, defined by the Kullback-Leibler (K-L) divergence between the actual posterior, and the posterior of any Particle. The optimal control input is characterized by a certain Euler-Lagrange (E-L) equation, and is shown to admit an innovation error-based feedback structure. For diffusions with continuous observations, the value of the optimal control solution is ideal. The two posteriors match exactly, provided they are initialized with identical priors. The feedback Particle Filter is defined by a family of stochastic systems, each evolving under this optimal control law. A numerical algorithm is introduced and implemented in two general examples, and a neuroscience application involving coupled oscillators. Some preliminary numerical comparisons between the feed- back Particle Filter and the bootstrap Particle Filter are described.

  • multivariable feedback Particle Filter
    Conference on Decision and Control, 2012
    Co-Authors: Tao Yang, Prashant G. Mehta, Richard S Laugesen, Sean P. Meyn
    Abstract:

    In recent work it is shown that importance sampling can be avoided in the Particle Filter through an innovation structure inspired by traditional nonlinear Filtering combined with Mean-Field Game formalisms [9], [19]. The resulting feedback Particle Filter (FPF) offers significant variance improvements; in particular, the algorithm can be applied to systems that are not stable. The Filter comes with an up-front computational cost to obtain the Filter gain. This paper describes new representations and algorithms to compute the gain in the general multivariable setting. The main contributions are, (i) Theory surrounding the FPF is improved: Consistency is established in the multivariate setting, as well as well-posedness of the associated PDE to obtain the Filter gain. (ii) The gain can be expressed as the gradient of a function, which is precisely the solution to Poisson's equation for a related MCMC diffusion (the Smoluchowski equation). This provides a bridge to MCMC as well as to approximate optimal Filtering approaches such as TD-learning, which can in turn be used to approximate the gain. (iii) Motivated by a weak formulation of Poisson's equation, a Galerkin finite-element algorithm is proposed for approximation of the gain. Its performance is illustrated in numerical experiments.

Babak Hassibi - One of the best experts on this subject based on the ideXlab platform.

  • the kalman-like Particle Filter: optimal estimation with quantized innovations/measurements
    IEEE Transactions on Signal Processing, 2013
    Co-Authors: Ravi Teja Sukhavasi, Babak Hassibi
    Abstract:

    We study the problem of optimal estimation and control of linear systems using quantized measurements. We show that the state conditioned on a causal quantization of the measurements can be expressed as the sum of a Gaussian random vector and a certain truncated Gaussian vector. This structure bears close resemblance to the full-information Kalman Filter and so allows us to effectively combine the Kalman structure with a Particle Filter to recursively compute the state estimate. We call the resulting Filter the Kalman-like Particle Filter (KLPF) and observe that it delivers close to optimal performance using far fewer Particles than a Particle Filter directly applied to the original problem.
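    The value of quantized observations can be illustrated with a deliberately simplified toy: a plain bootstrap-style Particle Filter driven by one-bit measurements. It is NOT the KLPF itself, which additionally exploits the Gaussian-plus-truncated-Gaussian structure analytically; all names and parameters below are assumptions:

    ```python
    import math
    import numpy as np

    def one_bit_particle_filter(qs, a, c, sig_x, sig_v, n=2000, seed=0):
        """Bootstrap Particle Filter driven by sign-quantized measurements.

        The sensor transmits only q_t = sign(y_t) for y_t = c*x_t + v_t.
        Each particle is weighted by the probability of the observed bit,
        P(q_t = +1 | x) = Phi(c*x / sig_v), then the ensemble is resampled.
        """
        ncdf = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
        rng = np.random.default_rng(seed)
        X = rng.normal(0.0, 1.0, size=n)
        ests = []
        for q in qs:
            X = a * X + sig_x * rng.normal(size=n)   # predict
            p_pos = ncdf(c * X / sig_v)              # P(bit = +1 | particle)
            w = np.where(q > 0, p_pos, 1.0 - p_pos)
            w = w / w.sum()
            ests.append(float(np.sum(w * X)))        # weighted state estimate
            X = rng.choice(X, size=n, p=w)           # resample
        return np.array(ests)
    ```

    The KLPF's advantage over such a direct filter is that the Gaussian part of the posterior is handled by a Kalman recursion, so particles are spent only on the truncated-Gaussian part.
    
    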

  • the kalman-like Particle Filter: optimal estimation with quantized innovations/measurements
    Conference on Decision and Control, 2009
    Co-Authors: Ravi Teja Sukhavasi, Babak Hassibi
    Abstract:

    We study the problem of optimal estimation using quantized innovations, with application to distributed estimation over sensor networks. We show that the state probability density conditioned on the quantized innovations can be expressed as the sum of a Gaussian random vector and a certain truncated Gaussian vector. This structure bears close resemblance to the full-information Kalman Filter and so allows us to effectively combine the Kalman structure with a Particle Filter to recursively compute the state estimate. We call the resulting Filter the Kalman-like Particle Filter (KLPF) and observe that it delivers close to optimal performance using far fewer Particles than a Particle Filter directly applied to the original problem. We also note that the conditional state density follows a so-called generalized closed skew-normal (GCSN) distribution.

  • the kalman-like Particle Filter: optimal estimation with quantized innovations/measurements
    arXiv: Information Theory, 2009
    Co-Authors: Ravi Teja Sukhavasi, Babak Hassibi
    Abstract:

    We study the problem of optimal estimation and control of linear systems using quantized measurements, with a focus on applications over sensor networks. We show that the state conditioned on a causal quantization of the measurements can be expressed as the sum of a Gaussian random vector and a certain truncated Gaussian vector. This structure bears close resemblance to the full-information Kalman Filter and so allows us to effectively combine the Kalman structure with a Particle Filter to recursively compute the state estimate. We call the resulting Filter the Kalman-like Particle Filter (KLPF) and observe that it delivers close to optimal performance using far fewer Particles than a Particle Filter directly applied to the original problem. We show that the conditional state density follows a so-called generalized closed skew-normal (GCSN) distribution. We further show that for such systems the classical separation property between control and estimation holds, and that the certainty-equivalent control law is LQG optimal.