Bayesian Filtering


The Experts below are selected from a list of 13,236 Experts worldwide, ranked by the ideXlab platform

Anthony Quinn - One of the best experts on this subject based on the ideXlab platform.

  • A data-driven forgetting factor for stabilized forgetting in approximate Bayesian Filtering
    2015 26th Irish Signals and Systems Conference (ISSC), 2015
    Co-Authors: S. Azizi, Anthony Quinn
    Abstract:

    The main focus of this paper is to extend Bayesian Filtering to allow for time-variant parameters in the transition kernels. Since a finite-dimensional exact solution is not available, we adopt stabilized forgetting in order to restore a recursive signal processing algorithm involving the processing of fixed, finite-dimensional statistics. This approximate solution is amenable to online sequential estimation, and is derived for a rich class of observation models. The data-driven forgetting factor is optimized sequentially using an iterative variational Bayes approach. A number of Bayesian Filtering problems involving parameter-variant Gaussian processes are addressed in this way. In simulations, we emphasize the performance enhancements achieved using the data-driven sequential assignment of the forgetting factor, when compared to the conventional approach, which adopts a fixed value.
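
    A minimal Python sketch of the stabilized-forgetting mechanics (an illustration, not the paper's algorithm: the forgetting factor is fixed here, whereas the paper optimizes it online by variational Bayes). For an exponential-family posterior, the forgetting operator is a geometric mixture of the current posterior and a stabilizing alternative, i.e. a convex combination of natural parameters:

    import numpy as np

    rng = np.random.default_rng(0)

    # True mean drifts slowly; observations have unit noise variance.
    T = 500
    true_mean = np.cumsum(0.02 * rng.standard_normal(T))
    y = true_mean + rng.standard_normal(T)

    # Conjugate Normal posterior over the mean, natural parameters (eta1, eta2):
    # eta1 = mu/var, eta2 = 1/var. Stabilizing alternative = the broad prior.
    eta1_alt, eta2_alt = 0.0, 1e-2     # N(0, 100) prior
    eta1, eta2 = eta1_alt, eta2_alt
    lam = 0.95                         # fixed factor (the paper assigns it from data)

    est = np.empty(T)
    for t in range(T):
        # Stabilized forgetting: convex combination in natural-parameter space.
        eta1 = lam * eta1 + (1 - lam) * eta1_alt
        eta2 = lam * eta2 + (1 - lam) * eta2_alt
        # Bayes update with a unit-variance Gaussian likelihood.
        eta1 += y[t]
        eta2 += 1.0
        est[t] = eta1 / eta2           # posterior mean

    print("RMSE:", np.sqrt(np.mean((est - true_mean) ** 2)))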

  • EUSIPCO - Approximate Bayesian Filtering using stabilized forgetting
    2015 23rd European Signal Processing Conference (EUSIPCO), 2015
    Co-Authors: S. Azizi, Anthony Quinn
    Abstract:

    In this paper, we relax the modeling assumptions under which Bayesian Filtering is tractable. In order to restore tractability, we adopt the stabilized forgetting (SF) operator, which replaces the explicit time evolution model of Bayesian Filtering. The principal contribution of the paper is to define a rich class of conditional observation models for which recursive, invariant, finite-dimensional statistics result from SF-based Bayesian Filtering. We specialize the result to the mixture Kalman filter, verifying that the exact solution is available in this case. This allows us to consider the quality of the SF-based approximate solution. Finally, we assess SF-based tracking of the time-varying rate parameter (state) in data modelled as a mixture of exponential components.
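
    The closing experiment tracks the time-varying rate of exponentially distributed data. A simplified single-component sketch (the paper treats a mixture of exponential components, and again the forgetting factor is fixed rather than optimized here): the Gamma distribution is conjugate to the exponential rate, so SF acts directly on its hyperparameters.

    import numpy as np

    rng = np.random.default_rng(1)

    # Time-varying rate of an exponential observation model.
    T = 400
    rate = 2.0 + 1.5 * np.sin(np.linspace(0, 4 * np.pi, T))
    x = rng.exponential(1.0 / rate)

    # Gamma(a, b) posterior over the rate; forget toward a flat alternative.
    a_alt, b_alt = 1.0, 1.0
    a, b = a_alt, b_alt
    lam = 0.9

    est = np.empty(T)
    for t in range(T):
        a = lam * a + (1 - lam) * a_alt   # stabilized forgetting on the
        b = lam * b + (1 - lam) * b_alt   # Gamma hyperparameters
        a, b = a + 1.0, b + x[t]          # conjugate update, one exponential sample
        est[t] = a / b                    # posterior mean of the rate

    print("mean abs error:", np.mean(np.abs(est - rate)))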

  • Variational Bayesian Filtering
    IEEE Transactions on Signal Processing, 2008
    Co-Authors: Vaclav Smidl, Anthony Quinn
    Abstract:

    The use of the variational Bayes (VB) approximation in Bayesian Filtering is studied, both as a means to accelerate marginalized particle Filtering and as a deterministic local (one-step) approximation. The VB method of approximation is reviewed, together with restrictions that allow various computational savings to be achieved. These variants provide a range of algorithms that can be used in a principled tradeoff between quality of approximation and computational cost. In combination with marginalized particle Filtering, they generalize previously published work on variational Filtering and extend currently available methods for speeding up stochastic approximations in Bayesian Filtering. In particular, the free-form nature of the VB approximation allows optimal selection of moments which summarize the particles. Other Bayesian Filtering schemes are developed by replacing the marginalization operator in Bayesian Filtering with VB-marginals. This leads to further computational savings at the cost of quality of approximation. The performance of the various VB Filtering schemes is illustrated in the context of a Gaussian model with a nonlinear substate, and a hidden Markov model.
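
    The free-form VB iteration underlying these schemes is easiest to see in the classical conjugate example of a Gaussian with unknown mean and precision (textbook VB, e.g. Bishop 2006, rather than the filtering algorithm itself): alternate updates of q(mu) and q(tau), each using the other's current moments, until a fixed point is reached.

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(3.0, 2.0, size=200)
    N, xbar = len(x), x.mean()

    # Priors: mu ~ N(mu0, (k0*tau)^-1), tau ~ Gamma(a0, b0).
    mu0, k0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3

    # Free-form factors q(mu) = N(m, s2), q(tau) = Gamma(a, b); coordinate ascent.
    E_tau = 1.0
    for _ in range(50):
        # q(mu) update given E[tau]
        kN = k0 + N
        m = (k0 * mu0 + N * xbar) / kN
        s2 = 1.0 / (kN * E_tau)
        # q(tau) update given the first two moments of q(mu)
        a = a0 + 0.5 * (N + 1)
        E_sq = np.sum((x - m) ** 2) + N * s2 + k0 * ((m - mu0) ** 2 + s2)
        b = b0 + 0.5 * E_sq
        E_tau = a / b

    print(f"E[mu] = {m:.3f}, E[tau] = {E_tau:.3f} (true values 3.0, 0.25)")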

  • The Restricted Variational Bayes Approximation in Bayesian Filtering
    2006 IEEE Nonlinear Statistical Signal Processing Workshop, 2006
    Co-Authors: Vaclav Smidl, Anthony Quinn
    Abstract:

    The Variational Bayes (VB) approach is used as a one-step approximation for Bayesian Filtering. It requires the availability of moments of the free-form distributional optimizers, which may have intractable functional forms. In this contribution, we replace these by appropriate fixed-form distributions yielding the required moments. We address two scenarios of this Restricted VB (RVB) approximation. For the first scenario, an application in identification of HMMs is given. The close relationship of the second scenario to Rao-Blackwellized particle Filtering is discussed, and the performance of both is illustrated on a simple nonlinear model.
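
    A toy illustration of the RVB idea (the free-form optimizer below is hypothetical, chosen only for illustration): a non-Gaussian free-form factor with no tractable moments is replaced by a fixed-form Gaussian carrying its first two moments, computed here by brute force on a grid.

    import numpy as np

    # Suppose the free-form optimizer is log q*(x) = -x^4/4 + 2x (no closed-form
    # moments); RVB substitutes a fixed-form Gaussian with matched moments.
    grid = np.linspace(-5, 5, 2001)
    log_q = -grid ** 4 / 4 + 2 * grid
    w = np.exp(log_q - log_q.max())
    w /= w.sum()                       # normalize on the uniform grid

    mean = (grid * w).sum()
    var = ((grid - mean) ** 2 * w).sum()
    print(f"moment-matched fixed-form factor: N({mean:.3f}, {var:.3f})")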

  • ICASSP (3) - The Variational Bayes Approximation In Bayesian Filtering
    2006 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2006
    Co-Authors: Vaclav Smidl, Anthony Quinn
    Abstract:

    The Variational Bayes (VB) approximation is applied in the context of Bayesian Filtering, yielding a tractable on-line scheme for a wide range of non-stationary parametric models. This VB-Filtering scheme is used to identify a Hidden Markov Model with an unknown non-stationary transition matrix. In a simulation study involving soft-bit data, reliable inference of the underlying binary sequence is achieved in tandem with estimation of the transition probabilities. The performance compares favourably with a proposed particle Filtering approach, and at lower computational cost.
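
    A crude caricature of such a scheme (an illustrative reconstruction, not the paper's exact updates): a binary HMM with soft-bit observations, where the forward filter uses the posterior-mean transition matrix implied by Dirichlet pseudo-counts, and the counts are incremented online with soft pairwise responsibilities.

    import numpy as np

    rng = np.random.default_rng(3)

    # Binary HMM, unknown transition matrix; soft-bit observations s = ±1 + noise.
    A_true = np.array([[0.95, 0.05], [0.10, 0.90]])
    T, z = 2000, 0
    obs = np.empty(T)
    states = np.empty(T, dtype=int)
    for t in range(T):
        z = rng.choice(2, p=A_true[z])
        states[t] = z
        obs[t] = (2 * z - 1) + 0.8 * rng.standard_normal()

    # Dirichlet pseudo-counts over each row of the transition matrix.
    C = np.ones((2, 2))
    alpha = np.array([0.5, 0.5])       # filtered state distribution
    correct = 0
    for t in range(T):
        A_hat = C / C.sum(axis=1, keepdims=True)  # posterior-mean transitions
        lik = np.exp(-0.5 * ((obs[t] - np.array([-1.0, 1.0])) / 0.8) ** 2)
        # Soft pairwise responsibility r[i, j] ∝ alpha[i] * A_hat[i, j] * lik[j]
        r = alpha[:, None] * A_hat * lik[None, :]
        r /= r.sum()
        C += r                          # soft count update
        alpha = r.sum(axis=0)           # new filtered marginal
        correct += int(alpha.argmax() == states[t])

    print("decoding accuracy:", correct / T)
    print("estimated A:\n", C / C.sum(axis=1, keepdims=True))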

Stephen J. Roberts - One of the best experts on this subject based on the ideXlab platform.

  • IJCNN - Recurrent Neural Filters: Learning Independent Bayesian Filtering Steps for Time Series Prediction
    2020 International Joint Conference on Neural Networks (IJCNN), 2020
    Co-Authors: Bryan Lim, Stefan Zohren, Stephen J. Roberts
    Abstract:

    Despite the recent popularity of deep generative state space models, few comparisons have been made between network architectures and the inference steps of the Bayesian Filtering framework – with most models simultaneously approximating both state transition and update steps with a single recurrent neural network (RNN). In this paper, we introduce the Recurrent Neural Filter (RNF), a novel recurrent autoencoder architecture that learns distinct representations for each Bayesian Filtering step, captured by a series of encoders and decoders. Testing this on three real-world time series datasets, we demonstrate that the decoupled representations learnt improve the accuracy of one-step-ahead forecasts while providing realistic uncertainty estimates, and also facilitate multistep prediction through the separation of encoder stages.

  • Recurrent Neural Filters: Learning Independent Bayesian Filtering Steps for Time Series Prediction
    arXiv: Machine Learning, 2019
    Co-Authors: Bryan Lim, Stefan Zohren, Stephen J. Roberts
    Abstract:

    Despite the recent popularity of deep generative state space models, few comparisons have been made between network architectures and the inference steps of the Bayesian Filtering framework -- with most models simultaneously approximating both state transition and update steps with a single recurrent neural network (RNN). In this paper, we introduce the Recurrent Neural Filter (RNF), a novel recurrent autoencoder architecture that learns distinct representations for each Bayesian Filtering step, captured by a series of encoders and decoders. Testing this on three real-world time series datasets, we demonstrate that the decoupled representations learnt not only improve the accuracy of one-step-ahead forecasts while providing realistic uncertainty estimates, but also facilitate multistep prediction through the separation of encoder stages.
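
    The two inference steps that the RNF decouples are the classical prediction (state transition) and update (Bayes correction) operators of Bayesian Filtering; for a linear-Gaussian model these are the Kalman recursions, written below as explicitly separate functions (a plain-numpy illustration of the decomposition; the RNF replaces each step with a learned encoder).

    import numpy as np

    def predict(m, P, F, Q):
        """State-transition step: propagate the filtering density through the dynamics."""
        return F @ m, F @ P @ F.T + Q

    def update(m, P, y, H, R):
        """Measurement-update step: condition the predicted density on observation y."""
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return m + K @ (y - H @ m), P - K @ S @ K.T

    # Scalar random walk observed in noise.
    F = Q = H = R = np.eye(1)
    m, P = np.zeros(1), np.eye(1)
    rng = np.random.default_rng(4)
    x = 0.0
    for _ in range(100):
        x += rng.standard_normal()
        y = np.array([x]) + rng.standard_normal(1)
        m, P = predict(m, P, F, Q)
        m, P = update(m, P, y, H, R)

    print("final estimate vs truth:", m[0], x)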

Lihua Xie - One of the best experts on this subject based on the ideXlab platform.

  • Bayesian Filtering with unknown sensor measurement losses
    IEEE Transactions on Control of Network Systems, 2019
    Co-Authors: Jiaqi Zhang, Keyou You, Lihua Xie
    Abstract:

    This paper studies the state estimation problem of a stochastic nonlinear system with unknown sensor measurement losses. If the estimator knows the sensor measurement losses of a linear Gaussian system, the minimum-variance estimate is easily computed by the celebrated intermittent Kalman filter (IKF). However, this is no longer the case when the measurement losses are unknown and/or the system is nonlinear or non-Gaussian. By exploiting the binary property of the measurement loss process and the IKF, we design three suboptimal filters for the state estimation, namely BKF-I, BKF-II, and a Rao-Blackwellized particle filter (RBPF). BKF-I is based on the maximum a posteriori (MAP) estimator of the measurement loss process, and BKF-II is derived by estimating the conditional loss probability. The RBPF is a particle filter-based algorithm that marginalizes out the loss process to increase the efficiency of particles. All of the proposed filters can be easily implemented in recursive forms. Finally, a linear system, a target tracking system, and a quadrotor's path control problem are included to illustrate their effectiveness and to show the tradeoff between computational complexity and estimation accuracy of the proposed filters.
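
    A toy sketch in the spirit of these filters (an illustrative reconstruction under simplified assumptions, not the paper's algorithm): a scalar linear-Gaussian model in which a lost measurement returns pure noise; the filter computes the posterior probability that the sample was genuinely received and collapses the resulting two-component posterior by moment matching.

    import numpy as np

    rng = np.random.default_rng(5)

    # x_{t+1} = 0.95 x_t + w,  y_t = gamma_t * x_t + v, with unknown loss
    # indicator gamma_t in {0, 1} (hypothetical loss model, for illustration).
    F, Q, R, p_recv = 0.95, 0.5, 0.4, 0.8
    m, P, x = 0.0, 1.0, 0.0
    err = []
    for t in range(500):
        x = F * x + np.sqrt(Q) * rng.standard_normal()
        gamma = rng.random() < p_recv
        y = gamma * x + np.sqrt(R) * rng.standard_normal()

        # Predict.
        m, P = F * m, F * F * P + Q

        # Posterior probability that the measurement was received (gamma = 1).
        l1 = np.exp(-0.5 * (y - m) ** 2 / (P + R)) / np.sqrt(P + R)
        l0 = np.exp(-0.5 * y ** 2 / R) / np.sqrt(R)
        w1 = p_recv * l1 / (p_recv * l1 + (1 - p_recv) * l0)

        # Kalman update conditional on gamma = 1; gamma = 0 keeps the prediction.
        K = P / (P + R)
        m1, P1 = m + K * (y - m), (1 - K) * P
        # Collapse the two-component mixture by moment matching.
        m_new = w1 * m1 + (1 - w1) * m
        P_new = w1 * (P1 + (m1 - m_new) ** 2) + (1 - w1) * (P + (m - m_new) ** 2)
        m, P = m_new, P_new
        err.append((m - x) ** 2)

    print("RMSE:", np.sqrt(np.mean(err)))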

Simo Särkkä - One of the best experts on this subject based on the ideXlab platform.

  • Probabilistic solutions to ordinary differential equations as nonlinear Bayesian Filtering: a new perspective
    Statistics and Computing, 2019
    Co-Authors: Filip Tronarp, Simo Särkkä, Hans Kersting, Philipp Hennig
    Abstract:

    We formulate probabilistic numerical approximations to solutions of ordinary differential equations (ODEs) as problems in Gaussian process (GP) regression with nonlinear measurement functions. This is achieved by defining the measurement sequence to consist of the observations of the difference between the derivative of the GP and the vector field evaluated at the GP—which are all identically zero at the solution of the ODE. When the GP has a state-space representation, the problem can be reduced to a nonlinear Bayesian Filtering problem and all widely used approximations to the Bayesian Filtering and smoothing problems become applicable. Furthermore, all previous GP-based ODE solvers that are formulated in terms of generating synthetic measurements of the gradient field come out as specific approximations. Based on the nonlinear Bayesian Filtering problem posed in this paper, we develop novel Gaussian solvers for which we establish favourable stability properties. Additionally, non-Gaussian approximations to the Filtering problem are derived by the particle filter approach. The resulting solvers are compared with other probabilistic solvers in illustrative experiments.
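
    The construction in miniature (a simplified sketch using a once-integrated Wiener process prior and an extended Kalman filter update, one of the Gaussian solvers the framework covers): at each step the state (x, x') is propagated through the prior and then conditioned on the pseudo-measurement 0 = x' - f(x), shown here for the logistic ODE.

    import numpy as np

    def f(x):  return x * (1 - x)     # logistic vector field
    def df(x): return 1 - 2 * x

    h = 0.05
    # Once-integrated Wiener process prior on X = (x, x').
    A = np.array([[1, h], [0, 1]])
    Qm = np.array([[h**3 / 3, h**2 / 2], [h**2 / 2, h]])

    m = np.array([0.1, f(0.1)])       # initial value and slope
    P = np.zeros((2, 2))
    ts, xs = [0.0], [m[0]]
    for k in range(100):
        # Predict through the prior dynamics.
        m, P = A @ m, A @ P @ A.T + Qm
        # Pseudo-measurement z = x' - f(x), observed to be 0; EKF linearization.
        H = np.array([[-df(m[0]), 1.0]])
        z = m[1] - f(m[0])
        S = (H @ P @ H.T).item()
        K = (P @ H.T / S).ravel()
        m = m - K * z
        P = P - np.outer(K, H @ P)
        ts.append(ts[-1] + h)
        xs.append(m[0])

    # Compare against the closed-form logistic solution.
    t = np.array(ts)
    exact = 0.1 * np.exp(t) / (1 - 0.1 + 0.1 * np.exp(t))
    print("max abs error:", np.max(np.abs(np.array(xs) - exact)))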

  • ICASSP - Updates in Bayesian Filtering by Continuous Projections on a Manifold of Densities
    ICASSP 2019 - 2019 IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), 2019
    Co-Authors: Filip Tronarp, Simo Särkkä
    Abstract:

    In this paper, we develop a novel method for approximate continuous-discrete Bayesian Filtering. The projection Filtering framework is exploited to develop accurate approximations of posterior distributions within parametric classes of probability distributions. This is done by formulating an ordinary differential equation for the posterior distribution that has the prior as initial value and hits the exact posterior after a unit of time. Particular emphasis is put on exponential families, especially the Gaussian family of densities. Experimental results demonstrate the efficacy and flexibility of the method.
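
    For exponential families the organizing idea reduces to a particularly simple form (a minimal Gaussian instance, not the paper's projection equations): the curve p_lambda proportional to prior times likelihood^lambda has natural parameters obeying d(eta)/d(lambda) = eta_lik, which starts at the prior and hits the exact posterior at lambda = 1. Euler integration of this constant vector field is exact here.

    import numpy as np

    # Gaussian prior N(0, 4); Gaussian likelihood y = x + v, v ~ N(0, 1), y = 2.5.
    # Natural parameters eta = (mu/var, 1/var); the likelihood adds (y/R, 1/R).
    eta = np.array([0.0, 0.25])        # prior at lambda = 0
    eta_lik = np.array([2.5, 1.0])

    n_steps = 100
    for _ in range(n_steps):           # Euler steps of d(eta)/d(lambda) = eta_lik
        eta = eta + eta_lik / n_steps

    mu, var = eta[0] / eta[1], 1.0 / eta[1]
    print(f"posterior N({mu:.4f}, {var:.4f})")   # exact posterior: N(2.0, 0.8)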

  • Probabilistic Solutions To Ordinary Differential Equations As Non-Linear Bayesian Filtering: A New Perspective
    arXiv: Methodology, 2018
    Co-Authors: Filip Tronarp, Simo Särkkä, Hans Kersting, Philipp Hennig
    Abstract:

    We formulate probabilistic numerical approximations to solutions of ordinary differential equations (ODEs) as problems in Gaussian process (GP) regression with non-linear measurement functions. This is achieved by defining the measurement sequence to consist of the observations of the difference between the derivative of the GP and the vector field evaluated at the GP---which are all identically zero at the solution of the ODE. When the GP has a state-space representation, the problem can be reduced to a non-linear Bayesian Filtering problem and all widely-used approximations to the Bayesian Filtering and smoothing problems become applicable. Furthermore, all previous GP-based ODE solvers that are formulated in terms of generating synthetic measurements of the gradient field come out as specific approximations. Based on the non-linear Bayesian Filtering problem posed in this paper, we develop novel Gaussian solvers for which we establish favourable stability properties. Additionally, non-Gaussian approximations to the Filtering problem are derived by the particle filter approach. The resulting solvers are compared with other probabilistic solvers in illustrative experiments.

  • Spatiotemporal learning via infinite-dimensional Bayesian Filtering and smoothing: A look at Gaussian process regression through Kalman Filtering
    IEEE Signal Processing Magazine, 2013
    Co-Authors: Simo Särkkä, Arno Solin, Jouni Hartikainen
    Abstract:

    Gaussian process-based machine learning is a powerful Bayesian paradigm for nonparametric nonlinear regression and classification. In this article, we discuss connections of Gaussian process regression with Kalman Filtering and present methods for converting spatiotemporal Gaussian process regression problems into infinite-dimensional state-space models. This formulation allows for use of computationally efficient infinite-dimensional Kalman Filtering and smoothing methods, or more general Bayesian Filtering and smoothing methods, which reduces the problematic cubic complexity of Gaussian process regression in the number of time steps into linear time complexity. The implication of this is that the use of machine-learning models in signal processing becomes computationally feasible, and it opens the possibility to combine machine-learning techniques with signal processing methods.
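
    A minimal instance of the conversion (assuming the exponential / Matern-1/2 kernel, whose state-space form is the scalar Ornstein-Uhlenbeck process): Kalman filtering over the sorted inputs reproduces GP regression in O(n) rather than O(n^3) time.

    import numpy as np

    rng = np.random.default_rng(6)

    # GP regression with kernel k(t, t') = s2 * exp(-|t - t'| / ell) is exactly
    # a one-state Kalman filter over the time-ordered inputs.
    s2, ell, noise = 1.0, 0.5, 0.1
    t = np.sort(rng.uniform(0, 5, 300))
    y = np.sin(2 * t) + np.sqrt(noise) * rng.standard_normal(t.size)

    m, P = 0.0, s2                     # stationary prior
    ms = []
    prev = t[0] - 1.0
    for tk, yk in zip(t, y):
        phi = np.exp(-(tk - prev) / ell)          # discrete-time OU transition
        m, P = phi * m, phi * phi * P + s2 * (1 - phi * phi)
        K = P / (P + noise)                       # Kalman gain
        m, P = m + K * (yk - m), (1 - K) * P
        ms.append(m)
        prev = tk

    print("posterior mean at last input:", ms[-1], "true:", np.sin(2 * t[-1]))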

  • Bayesian Filtering and Smoothing: What are Bayesian Filtering and smoothing?
    Bayesian Filtering and Smoothing, Cambridge University Press, 2013
    Co-Authors: Simo Särkkä
    Abstract:

    The term optimal Filtering traditionally refers to a class of methods that can be used for estimating the state of a time-varying system which is indirectly observed through noisy measurements. The term optimal in this context refers to statistical optimality. Bayesian Filtering refers to the Bayesian way of formulating optimal Filtering. In this book we use these terms interchangeably and always mean Bayesian Filtering. In optimal, Bayesian, and Bayesian optimal Filtering, the state of the system refers to the collection of dynamic variables such as position, velocity, orientation, and angular velocity, which fully describe the system. The noise in the measurements means that they are uncertain; even if we knew the true system state, the measurements would not be deterministic functions of the state, but would have a distribution of possible values. The time evolution of the state is modeled as a dynamic system which is perturbed by a certain process noise. This noise is used for modeling the uncertainties in the system dynamics. In most cases the system is not truly stochastic, but stochasticity is used for representing the model uncertainties. Bayesian smoothing (or optimal smoothing) is often considered to be a class of methods within the field of Bayesian Filtering. While Bayesian filters in their basic form only compute estimates of the current state of the system given the history of measurements, Bayesian smoothers can be used to reconstruct states that happened before the current time.
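
    In symbols, the Bayesian Filtering recursion alternates a Chapman-Kolmogorov prediction with a Bayes update, while a smoother instead targets the state given the full measurement record:

    % Prediction (Chapman-Kolmogorov):
    p(x_k \mid y_{1:k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid y_{1:k-1})\, \mathrm{d}x_{k-1}

    % Update (Bayes' rule):
    p(x_k \mid y_{1:k}) \propto p(y_k \mid x_k)\, p(x_k \mid y_{1:k-1})

    % Smoothing marginal, conditioning on all T measurements:
    p(x_k \mid y_{1:T}), \quad k < T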