Uniform Convergence

The Experts below are selected from a list of 360 Experts worldwide, ranked by the ideXlab platform.

Joaquin Miguez - One of the best experts on this subject based on the ideXlab platform.

  • Uniform Convergence over time of a nested particle filtering scheme for recursive parameter estimation in state-space Markov models
    Advances in Applied Probability, 2017
    Co-Authors: Dan Crisan, Joaquin Miguez
    Abstract:

    We analyse the performance of a recursive Monte Carlo method for the Bayesian estimation of the static parameters of a discrete-time state-space Markov model. The algorithm employs two layers of particle filters to approximate the posterior probability distribution of the model parameters. In particular, the first layer yields an empirical distribution of samples on the parameter space, while the filters in the second layer are auxiliary devices to approximate the (analytically intractable) likelihood of the parameters. This approach relates the novel algorithm to the recent sequential Monte Carlo square method, which provides a nonrecursive solution to the same problem. In this paper we investigate the approximation of integrals of real bounded functions with respect to the posterior distribution of the system parameters. Under assumptions related to the compactness of the parameter support and the stability and continuity of the sequence of posterior distributions for the state-space model, we prove that the $L_p$ norms of the approximation errors vanish asymptotically (as the number of Monte Carlo samples generated by the algorithm increases) and Uniformly over time. We also prove that, under the same assumptions, the proposed scheme can asymptotically identify the parameter values for a class of models. We conclude the paper with a numerical example that illustrates the Uniform Convergence results by exploring the accuracy and stability of the proposed algorithm operating with long sequences of observations.
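    To make the two-layer structure concrete, below is a minimal sketch of a nested particle filter in the spirit of the scheme described above. It is not the authors' algorithm: the linear-Gaussian toy model, the Gaussian jittering kernel, multinomial resampling at every step, and all names (A_TRUE, SIG_X, SIG_Y, nested_pf, ...) are assumptions made only for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy scalar state-space model (an assumption for this sketch):
    #   x_t = a * x_{t-1} + SIG_X * v_t,   y_t = x_t + SIG_Y * w_t
    SIG_X, SIG_Y = 0.5, 1.0
    A_TRUE = 0.8                             # static parameter to be estimated

    def simulate(T):
        x = np.zeros(T)
        for t in range(1, T):
            x[t] = A_TRUE * x[t - 1] + SIG_X * rng.standard_normal()
        return x + SIG_Y * rng.standard_normal(T)

    def nested_pf(y, N=100, M=100, jitter=0.02):
        """Outer layer: N samples of the parameter a.  Inner layer: one
        bootstrap filter with M state particles per parameter sample, used
        only to estimate the (intractable) incremental likelihood."""
        theta = rng.uniform(-1.0, 1.0, N)    # outer (parameter) particles
        x = np.zeros((N, M))                 # inner (state) particles
        estimates = []
        for t in range(1, len(y)):
            # jitter the parameter samples so the outer layer stays recursive
            theta = np.clip(theta + jitter * rng.standard_normal(N), -1.0, 1.0)
            # one prediction step of every inner filter
            x = theta[:, None] * x + SIG_X * rng.standard_normal((N, M))
            # incremental likelihood of each theta, up to a constant factor
            # that cancels when the outer weights are normalised
            logw = -0.5 * ((y[t] - x) / SIG_Y) ** 2
            inc_lik = np.exp(logw).mean(axis=1) + 1e-300
            # resample the inner particles of each filter
            for i in range(N):
                w = np.exp(logw[i] - logw[i].max())
                x[i] = x[i, rng.choice(M, M, p=w / w.sum())]
            # weight and resample the outer layer; inner filters travel along
            W = inc_lik / inc_lik.sum()
            idx = rng.choice(N, N, p=W)
            theta, x = theta[idx], x[idx]
            estimates.append(theta.mean())
        return np.array(estimates)

    y = simulate(200)
    print("estimate of a after 200 observations:", nested_pf(y)[-1])
    ```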

  • A proof of Uniform Convergence over time for a distributed particle filter
    Signal Processing, 2016
    Co-Authors: Joaquin Miguez, Manuel A Vazquez
    Abstract:

    Distributed signal processing algorithms have become a hot topic in recent years. One class of algorithms that has received special attention is particle filters (PFs). However, most distributed PFs involve various heuristic or simplifying approximations and, as a consequence, classical Convergence theorems for standard PFs do not hold for their distributed counterparts. In this paper, we analyze a distributed PF based on the non-proportional weight-allocation scheme of Bolić et al. (2005) and prove rigorously that, under certain stability assumptions, its asymptotic Convergence is guaranteed Uniformly over time, in such a way that approximation errors can be kept bounded with a fixed computational budget. To illustrate the theoretical findings, we carry out computer simulations for a target tracking problem. The numerical results show that the distributed PF has a negligible performance loss (compared to a centralized filter) for this problem and enable us to empirically validate the key assumptions of the analysis.

    Highlights:
    • Rigorous analysis of a distributed particle filter based on the parallel resampling scheme of Bolić et al. (2005).
    • Proof of Uniform Convergence over time.
    • Analysis of the Convergence rates.
    • A numerical study that complements the analytical results for a target tracking problem.
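    To illustrate the kind of scheme being analysed, here is a heavily simplified sketch of a distributed PF: particles are split across K processing elements (PEs) that propagate and resample locally, and only one aggregate weight per PE is exchanged at each step. This is not the non-proportional weight-allocation scheme of Bolić et al. (2005); in particular, it never redistributes particle counts or exchanges particles among PEs. The toy random-walk model and all names are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy random-walk state with Gaussian observations (assumed for this sketch)
    SIG_X, SIG_Y = 0.3, 0.5

    def distributed_pf(y, K=4, M=250):
        """K processing elements, each holding M particles.  Every PE runs
        propagation and resampling locally; PEs only share one aggregate
        weight each per time step (a simplification of parallel resampling)."""
        x = rng.standard_normal((K, M))          # particles, grouped by PE
        W_pe = np.full(K, 1.0 / K)               # aggregate weight of each PE
        means = []
        for obs in y:
            x += SIG_X * rng.standard_normal((K, M))              # propagate
            w = np.exp(-0.5 * ((obs - x) / SIG_Y) ** 2) + 1e-300  # local weights
            for k in range(K):                   # each PE resamples on its own
                p = w[k] / w[k].sum()
                x[k] = x[k, rng.choice(M, M, p=p)]
            # update and normalise the PE-level weights (the only communication)
            W_pe *= w.mean(axis=1)
            W_pe /= W_pe.sum()
            means.append(float((W_pe[:, None] * x).sum() / M))    # global estimate
        return np.array(means)

    # quick demo on synthetic data
    T = 100
    x_true = np.cumsum(SIG_X * rng.standard_normal(T))
    y = x_true + SIG_Y * rng.standard_normal(T)
    est = distributed_pf(y)
    print("RMSE of the simplified distributed PF:", np.sqrt(np.mean((est - x_true) ** 2)))
    ```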

  • Uniform Convergence over time of a nested particle filtering scheme for recursive parameter estimation in state-space Markov models
    arXiv: Computation, 2016
    Co-Authors: Dan Crisan, Joaquin Miguez
    Abstract:

    We analyse the performance of a recursive Monte Carlo method for the Bayesian estimation of the static parameters of a discrete-time state-space Markov model. The algorithm employs two layers of particle filters to approximate the posterior probability distribution of the model parameters. In particular, the first layer yields an empirical distribution of samples on the parameter space, while the filters in the second layer are auxiliary devices to approximate the (analytically intractable) likelihood of the parameters. This approach relates the proposed algorithm to the recent sequential Monte Carlo square (SMC$^2$) method, which provides a non-recursive solution to the same problem. In this paper, we investigate the approximation, via the proposed scheme, of integrals of real bounded functions with respect to the posterior distribution of the system parameters. Under assumptions related to the compactness of the parameter support and the stability and continuity of the sequence of posterior distributions for the state-space model, we prove that the $L_p$ norms of the approximation errors vanish asymptotically (as the number of Monte Carlo samples generated by the algorithm increases) and Uniformly over time. We also prove that, under the same assumptions, the proposed scheme can asymptotically identify the parameter values for a class of models. We conclude the paper with a numerical example that illustrates the Uniform Convergence results by exploring the accuracy and stability of the proposed algorithm operating with long sequences of observations.
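    The statement proved can be summarised schematically as follows, where $\mu_t$ denotes the posterior law of the static parameters given $y_{1:t}$ and $\mu_t^N$ its particle approximation; the constant, the rate $\varepsilon(N)$ and the roles of the two layers' sample sizes are left unspecified here and are made precise in the paper.

    ```latex
    % Schematic uniform-over-time L_p bound (rates and constants omitted):
    % for every bounded real function f on the parameter space and every p >= 1,
    \sup_{t \ge 0}\;
      \bigl\| (f, \mu_t^{N}) - (f, \mu_t) \bigr\|_p
      \;\le\; c\,\|f\|_{\infty}\,\varepsilon(N),
    \qquad \varepsilon(N) \xrightarrow[\,N \to \infty\,]{} 0,
    % with c independent of t, so the error bound does not grow with time.
    ```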

Dan Crisan - One of the best experts on this subject based on the ideXlab platform.

  • Uniform Convergence over time of a nested particle filtering scheme for recursive parameter estimation in state-space Markov models
    Advances in Applied Probability, 2017
    Co-Authors: Dan Crisan, Joaquin Miguez
    Abstract:

    We analyse the performance of a recursive Monte Carlo method for the Bayesian estimation of the static parameters of a discrete-time state-space Markov model. The algorithm employs two layers of particle filters to approximate the posterior probability distribution of the model parameters. In particular, the first layer yields an empirical distribution of samples on the parameter space, while the filters in the second layer are auxiliary devices to approximate the (analytically intractable) likelihood of the parameters. This approach relates the novel algorithm to the recent sequential Monte Carlo square method, which provides a nonrecursive solution to the same problem. In this paper we investigate the approximation of integrals of real bounded functions with respect to the posterior distribution of the system parameters. Under assumptions related to the compactness of the parameter support and the stability and continuity of the sequence of posterior distributions for the state-space model, we prove that the $L_p$ norms of the approximation errors vanish asymptotically (as the number of Monte Carlo samples generated by the algorithm increases) and Uniformly over time. We also prove that, under the same assumptions, the proposed scheme can asymptotically identify the parameter values for a class of models. We conclude the paper with a numerical example that illustrates the Uniform Convergence results by exploring the accuracy and stability of the proposed algorithm operating with long sequences of observations.

  • Uniform Convergence over time of a nested particle filtering scheme for recursive parameter estimation in state-space Markov models
    arXiv: Computation, 2016
    Co-Authors: Dan Crisan, Joaquin Miguez
    Abstract:

    We analyse the performance of a recursive Monte Carlo method for the Bayesian estimation of the static parameters of a discrete-time state-space Markov model. The algorithm employs two layers of particle filters to approximate the posterior probability distribution of the model parameters. In particular, the first layer yields an empirical distribution of samples on the parameter space, while the filters in the second layer are auxiliary devices to approximate the (analytically intractable) likelihood of the parameters. This approach relates the proposed algorithm to the recent sequential Monte Carlo square (SMC$^2$) method, which provides a non-recursive solution to the same problem. In this paper, we investigate the approximation, via the proposed scheme, of integrals of real bounded functions with respect to the posterior distribution of the system parameters. Under assumptions related to the compactness of the parameter support and the stability and continuity of the sequence of posterior distributions for the state-space model, we prove that the $L_p$ norms of the approximation errors vanish asymptotically (as the number of Monte Carlo samples generated by the algorithm increases) and Uniformly over time. We also prove that, under the same assumptions, the proposed scheme can asymptotically identify the parameter values for a class of models. We conclude the paper with a numerical example that illustrates the Uniform Convergence results by exploring the accuracy and stability of the proposed algorithm operating with long sequences of observations.

Yu N Kapustin - One of the best experts on this subject based on the ideXlab platform.

Nathan Srebro - One of the best experts on this subject based on the ideXlab platform.

  • Uniform Convergence of interpolators: Gaussian width, norm bounds, and benign overfitting
    arXiv: Machine Learning, 2021
    Co-Authors: Frederic Koehler, Danica J Sutherland, Lijia Zhou, Nathan Srebro
    Abstract:

    We consider interpolation learning in high-dimensional linear regression with Gaussian data, and prove a generic Uniform Convergence guarantee on the generalization error of interpolators in an arbitrary hypothesis class in terms of the class's Gaussian width. Applying the generic bound to Euclidean norm balls recovers the consistency result of Bartlett et al. (2020) for minimum-norm interpolators, and confirms a prediction of Zhou et al. (2020) for near-minimal-norm interpolators in the special case of Gaussian data. We demonstrate the generality of the bound by applying it to the simplex, obtaining a novel consistency result for minimum $\ell_1$-norm interpolators (basis pursuit). Our results show how norm-based generalization bounds can explain and be used to analyze benign overfitting, at least in some settings.
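    A small numerical illustration of the regime these bounds address: in overparameterized linear regression with Gaussian features, the minimum-$\ell_2$-norm interpolator fits noisy labels exactly yet still predicts well. The dimensions, the spiked covariance and the noise level below are assumptions chosen only to make the effect visible; this is not the construction analysed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    n, d = 100, 2000                 # n samples, d >> n features
    # a few strong feature directions plus a long weak tail (assumed setup)
    scales = np.concatenate([np.full(20, 3.0), np.full(d - 20, 0.1)])
    w_star = np.zeros(d)
    w_star[:20] = 1.0                # the signal lives in the strong directions

    def draw(m):
        X = rng.standard_normal((m, d)) * scales
        y = X @ w_star + 0.5 * rng.standard_normal(m)    # noisy labels
        return X, y

    X, y = draw(n)
    w_hat = np.linalg.pinv(X) @ y    # minimum-l2-norm interpolator of (X, y)

    Xte, yte = draw(2000)
    print("train MSE:", np.mean((X @ w_hat - y) ** 2))      # ~0: exact interpolation
    print("test  MSE:", np.mean((Xte @ w_hat - yte) ** 2))  # small despite fitting the noise
    print("null  MSE:", np.mean(yte ** 2))                  # error of predicting 0, for scale
    ```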

  • On Uniform Convergence and low-norm interpolation learning
    Neural Information Processing Systems, 2020
    Co-Authors: Lijia Zhou, Danica J Sutherland, Nathan Srebro
    Abstract:

    We consider an underdetermined noisy linear regression model where the minimum-norm interpolating predictor is known to be consistent, and ask: can Uniform Convergence in a norm ball, or at least (following Nagarajan and Kolter) the subset of a norm ball that the algorithm selects on a typical input set, explain this success? We show that Uniformly bounding the difference between empirical and population errors cannot show any learning in the norm ball, and cannot show consistency for any set, even one depending on the exact algorithm and distribution. But we argue we can explain the consistency of the minimal-norm interpolator with a slightly weaker, yet standard, notion: Uniform Convergence of zero-error predictors in a norm ball. We use this to bound the generalization error of low- (but not minimal-) norm interpolating predictors.
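    The distinction drawn above can be written schematically as follows; the notation ($L_S$ for empirical error, $L_D$ for population error, $B$ for the norm-ball radius) is assumed here and is not the paper's exact notation.

    ```latex
    % Standard uniform convergence over a norm ball
    % (shown in the paper to be unable to explain consistency):
    \sup_{\|h\| \le B} \bigl| L_D(h) - L_S(h) \bigr| \;\le\; \varepsilon .

    % Uniform convergence of zero-error (interpolating) predictors in the ball
    % (the weaker notion used to explain the consistency of the interpolator):
    \sup_{\|h\| \le B,\; L_S(h) = 0} L_D(h) \;\le\; \varepsilon .
    ```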

  • Learnability, stability and Uniform Convergence
    Journal of Machine Learning Research, 2010
    Co-Authors: Shai Shalevshwartz, Ohad Shamir, Nathan Srebro, Karthik Sridharan
    Abstract:

    The problem of characterizing learnability is the most basic question of statistical learning theory. A fundamental and long-standing answer, at least for the case of supervised classification and regression, is that learnability is equivalent to Uniform Convergence of the empirical risk to the population risk, and that if a problem is learnable, it is learnable via empirical risk minimization. In this paper, we consider the General Learning Setting (introduced by Vapnik), which includes most statistical learning problems as special cases. We show that in this setting, there are non-trivial learning problems where Uniform Convergence does not hold, empirical risk minimization fails, and yet they are learnable using alternative mechanisms. Instead of Uniform Convergence, we identify stability as the key necessary and sufficient condition for learnability. Moreover, we show that the conditions for learnability in the general setting are significantly more complex than in supervised classification and regression.
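    For reference, the Uniform Convergence property whose failure is exhibited can be written generically as follows; the hypothesis class $\mathcal{H}$, loss $\ell$ and i.i.d. sample $z_1,\dots,z_m \sim \mathcal{D}$ are generic symbols, not the paper's exact notation.

    ```latex
    % Uniform convergence of empirical risk to population risk over a class H:
    \sup_{h \in \mathcal{H}}
      \Bigl| \frac{1}{m} \sum_{i=1}^{m} \ell(h, z_i)
             \;-\; \mathbb{E}_{z \sim \mathcal{D}}\bigl[\ell(h, z)\bigr] \Bigr|
      \;\longrightarrow\; 0 \quad \text{in probability as } m \to \infty .
    ```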

Sergiusz Kęska - One of the best experts on this subject based on the ideXlab platform.

  • On the Uniform Convergence of Sine Series with Square Root
    Journal of Function Spaces, 2019
    Co-Authors: Sergiusz Kęska
    Abstract:

    Chaundy and Jolliffe proved that if $\{c_k\}_{k=1}^{\infty}$ is a nonincreasing real sequence with $\lim_{k \to \infty} c_k = 0$, then the series $\sum_{k=1}^{\infty} c_k \sin kx$ converges Uniformly if and only if $k c_k \to 0$. The purpose of this paper is to show that $k c_k \to 0$ is a necessary and sufficient condition for the Uniform Convergence of the series $\sum_{k=1}^{\infty} c_k \sin k\theta$ for $\theta \in [0,\pi]$. However, this is not true for the series $\sum_{k=1}^{\infty} c_k \sin k^2\theta$ on $[0,\pi]$.
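    A quick numerical check of the classical criterion restated above (it probes only finitely many terms and the two coefficient sequences are our own choice, so it is an illustration rather than a proof): if the series converges Uniformly, the block suprema $\sup_\theta |S_{2n}(\theta) - S_n(\theta)|$ must tend to 0. With $c_k = 1/k$ (so $k c_k$ does not vanish) the block suprema stay bounded away from zero, while with $c_k = 1/(k \log(k+2))$ (so $k c_k \to 0$) they decay, slowly, as the theorem predicts.

    ```python
    import numpy as np

    theta = np.linspace(1e-4, np.pi, 2000)

    def block_sup(c, n):
        """sup over theta of | sum_{k=n+1}^{2n} c(k) * sin(k * theta) |."""
        k = np.arange(n + 1, 2 * n + 1)
        block = (c(k)[:, None] * np.sin(k[:, None] * theta[None, :])).sum(axis=0)
        return np.abs(block).max()

    c_bad  = lambda k: 1.0 / k                      # k * c_k = 1, does not vanish
    c_good = lambda k: 1.0 / (k * np.log(k + 2.0))  # k * c_k -> 0

    for n in (100, 1000, 5000):
        print(f"n = {n:5d}   c_k = 1/k: {block_sup(c_bad, n):.3f}   "
              f"c_k = 1/(k log k): {block_sup(c_good, n):.4f}")
    ```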