Maximum Likelihood Estimator



The Experts below are selected from a list of 360 Experts worldwide ranked by ideXlab platform

Yihui Zhan - One of the best experts on this subject based on the ideXlab platform.

Randal Douc - One of the best experts on this subject based on the ideXlab platform.

  • general order observation driven models ergodicity and consistency of the Maximum Likelihood Estimator
    Electronic Journal of Statistics, 2021
    Co-Authors: Tepmony Sim, Randal Douc, Francois Roueff
    Abstract:

    The class of observation-driven models (ODMs) includes many non-linear time series models which, in a fashion similar to, yet different from, hidden Markov models (HMMs), involve hidden variables. Interestingly, in contrast to most HMMs, ODMs enjoy likelihoods that can be computed exactly with computational complexity of the same order as the number of observations, making Maximum Likelihood estimation the privileged approach to statistical inference for these models. A celebrated example of a general order ODM is the GARCH$(p,q)$ model, for which ergodicity and inference have been studied extensively. However, little is known about more general models, in particular integer-valued ones such as the log-linear Poisson GARCH or the NBIN-GARCH of order $(p,q)$, for which most existing results seem restricted to the case $p=q=1$. Here we fill this gap and derive ergodicity conditions for general ODMs. The consistency and asymptotic normality of the Maximum Likelihood Estimator (MLE) can then be derived using the method already developed for first-order ODMs.
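The abstract's point that ODM likelihoods can be computed exactly in time linear in the number of observations can be illustrated with the Gaussian GARCH(1,1) special case: the conditional variance is a deterministic function of past observations, so a single forward pass yields the exact log-likelihood. A minimal sketch, with function and parameter names that are our own rather than the paper's:

```python
import numpy as np

def garch11_loglik(params, y, sigma2_init=None):
    """Exact Gaussian GARCH(1,1) log-likelihood in one O(n) pass.

    The hidden volatility sigma2[t] = omega + alpha*y[t-1]**2 + beta*sigma2[t-1]
    is a deterministic function of past observations, so the likelihood
    factorizes and no integration over hidden states is needed.
    """
    omega, alpha, beta = params
    n = len(y)
    sigma2 = np.empty(n)
    sigma2[0] = sigma2_init if sigma2_init is not None else np.var(y)
    for t in range(1, n):
        sigma2[t] = omega + alpha * y[t - 1] ** 2 + beta * sigma2[t - 1]
    # sum of conditional Gaussian log-densities
    return -0.5 * np.sum(np.log(2 * np.pi * sigma2) + y ** 2 / sigma2)

rng = np.random.default_rng(0)
y = rng.standard_normal(500)
ll = garch11_loglik((0.1, 0.05, 0.9), y)  # illustrative parameter values
```

Maximizing this function over `(omega, alpha, beta)` with any numerical optimizer gives the MLE; the O(n) cost per evaluation is exactly the computational advantage the abstract highlights over general HMMs.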

  • handy sufficient conditions for the convergence of the Maximum Likelihood Estimator in observation driven models
    arXiv: Statistics Theory, 2015
    Co-Authors: Randal Douc, Francois Roueff
    Abstract:

    This paper generalizes asymptotic properties obtained for the observation-driven time series models considered by \cite{dou:kou:mou:2013}, in the sense that the conditional law of each observation is also permitted to depend on the parameter. The existence of ergodic solutions and the consistency of the Maximum Likelihood Estimator (MLE) are derived under easy-to-check conditions. The obtained conditions appear to apply to a wide class of models. We illustrate our results with specific observation-driven time series, including the recently introduced NBIN-GARCH and NM-GARCH models, demonstrating the consistency of the MLE for these two models.
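As a concrete illustration of Maximum Likelihood estimation in an observation-driven count model, the sketch below fits a Poisson INGARCH(1,1) — a simpler relative of the NBIN-GARCH discussed in the paper — by numerically maximizing its exact likelihood. The parameter names, starting values, and simulation settings are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def poisson_ingarch_negloglik(params, y):
    """Negative log-likelihood of a Poisson INGARCH(1,1):
    lambda[t] = omega + alpha*y[t-1] + beta*lambda[t-1]."""
    omega, alpha, beta = params
    lam = np.empty(len(y))
    lam[0] = np.mean(y)
    for t in range(1, len(y)):
        lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
    if np.any(lam <= 0):
        return np.inf  # invalid parameter region
    return -np.sum(y * np.log(lam) - lam - gammaln(y + 1))

# simulate a path with (illustrative) true parameters (1.0, 0.3, 0.4)
rng = np.random.default_rng(1)
n, lam_t = 400, 2.0
y = np.empty(n, dtype=int)
for t in range(n):
    y[t] = rng.poisson(lam_t)
    lam_t = 1.0 + 0.3 * y[t] + 0.4 * lam_t

res = minimize(poisson_ingarch_negloglik, x0=(0.5, 0.2, 0.2), args=(y,),
               method="Nelder-Mead")
```

The consistency results in the paper concern what `res.x` converges to as `n` grows; the negative-binomial variant (NBIN-GARCH) would replace the Poisson log-density with a negative binomial one.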

  • consistency of the Maximum Likelihood Estimator for general hidden markov models
    Annals of Statistics, 2011
    Co-Authors: Randal Douc, Eric Moulines, Jimmy Olsson, Ramon Van Handel
    Abstract:

    Consider a parametrized family of general hidden Markov models, where both the observed and unobserved components take values in a complete separable metric space. We prove that the Maximum Likelihood Estimator (MLE) of the parameter is strongly consistent under a rather minimal set of assumptions. As special cases of our main result, we obtain consistency in a large class of nonlinear state space models, as well as general results on linear Gaussian state space models and finite state models. A novel aspect of our approach is an information-theoretic technique for proving identifiability, which does not require an explicit representation for the relative entropy rate. Our method of proof could therefore form a foundation for the investigation of MLE consistency in more general dependent and non-Markovian time series. Also of independent interest is a general concentration inequality for V-uniformly ergodic Markov chains.
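For the finite-state special case covered by the abstract, the likelihood whose maximizer is studied can be evaluated with the scaled forward algorithm. A self-contained sketch, where the transition matrix `A` and emission matrix `B` are toy values of our own choosing:

```python
import numpy as np

def hmm_loglik(pi0, A, B, obs):
    """Log-likelihood of a finite-state HMM with discrete emissions,
    via the scaled forward algorithm.

    pi0[i]  = P(x_1 = i)            (initial distribution)
    A[i, j] = P(x_{t+1} = j | x_t = i)
    B[i, k] = P(y_t = k | x_t = i)
    """
    alpha = pi0 * B[:, obs[0]]
    c = alpha.sum()
    ll = np.log(c)
    alpha /= c
    for y in obs[1:]:
        alpha = (alpha @ A) * B[:, y]  # predict, then condition on y
        c = alpha.sum()
        ll += np.log(c)                # accumulate log-normalizers
        alpha /= c
    return ll

pi0 = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.3], [0.1, 0.9]])
obs = [0, 0, 1, 1, 0]
ll = hmm_loglik(pi0, A, B, obs)
```

Maximizing `ll` over the parametrized family of `(pi0, A, B)` gives the MLE; the paper's contribution is proving that this estimator is strongly consistent well beyond the finite-state setting.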

  • asymptotic properties of the Maximum Likelihood Estimator in autoregressive models with markov regime
    Annals of Statistics, 2004
    Co-Authors: Randal Douc, Eric Moulines, Tobias Ryden
    Abstract:

    An autoregressive process with Markov regime is an autoregressive process for which the regression function at each time point is given by a nonobservable Markov chain. In this paper we consider the asymptotic properties of the Maximum Likelihood Estimator in a possibly nonstationary process of this kind for which the hidden state space is compact but not necessarily finite. Consistency and asymptotic normality are shown to follow from uniform exponential forgetting of the initial distribution for the hidden Markov chain conditional on the observations.
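The likelihood that the MLE maximizes here can be computed by a forward filtering recursion over the hidden regime. Below is a two-regime AR(1) sketch with illustrative parameter values (the paper itself allows a compact, not necessarily finite, regime space):

```python
import numpy as np

def msar_loglik(A, phi, sigma, y):
    """Log-likelihood of a 2-regime Markov-switching AR(1):
    y[t] = phi[s_t] * y[t-1] + sigma[s_t] * eps_t,  eps_t ~ N(0, 1),
    where s_t is a hidden Markov chain with transition matrix A.
    Forward recursion over the filtered regime probabilities."""
    p = np.full(2, 0.5)  # filtered P(s_1 = i), flat start
    ll = 0.0
    for t in range(1, len(y)):
        pred = p @ A  # predictive regime probabilities
        dens = (np.exp(-0.5 * ((y[t] - phi * y[t - 1]) / sigma) ** 2)
                / (np.sqrt(2 * np.pi) * sigma))
        joint = pred * dens
        c = joint.sum()     # conditional density of y[t] given the past
        ll += np.log(c)
        p = joint / c       # updated filter
    return ll

A = np.array([[0.95, 0.05], [0.05, 0.95]])
phi = np.array([0.2, 0.8])
sigma = np.array([1.0, 2.0])
rng = np.random.default_rng(2)
y = rng.standard_normal(200)
ll = msar_loglik(A, phi, sigma, y)
```

The uniform exponential forgetting condition in the abstract says, roughly, that the filter `p` forgets its initialization geometrically fast; that is what makes the log-likelihood well behaved asymptotically.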

Jon A Wellner - One of the best experts on this subject based on the ideXlab platform.

Tobias Ryden - One of the best experts on this subject based on the ideXlab platform.

  • asymptotic properties of the Maximum Likelihood Estimator in autoregressive models with markov regime
    Annals of Statistics, 2004
    Co-Authors: Randal Douc, Eric Moulines, Tobias Ryden
    Abstract:

    An autoregressive process with Markov regime is an autoregressive process for which the regression function at each time point is given by a nonobservable Markov chain. In this paper we consider the asymptotic properties of the Maximum Likelihood Estimator in a possibly nonstationary process of this kind for which the hidden state space is compact but not necessarily finite. Consistency and asymptotic normality are shown to follow from uniform exponential forgetting of the initial distribution for the hidden Markov chain conditional on the observations.

  • asymptotic normality of the Maximum Likelihood Estimator for general hidden markov models
    Annals of Statistics, 1998
    Co-Authors: Peter J. Bickel, Yaacov Ritov, Tobias Ryden
    Abstract:

    Hidden Markov models (HMMs) have over the last decade become a widespread tool for modeling sequences of dependent random variables. Inference for such models is usually based on the Maximum Likelihood Estimator (MLE), and consistency of the MLE for general HMMs was recently proved by Leroux. In this paper we show that under mild conditions the MLE is also asymptotically normal, and we prove that the observed information matrix is a consistent Estimator of the Fisher information.
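The observed information is the negative second derivative of the log-likelihood at the MLE. The toy check below uses an i.i.d. Gaussian model rather than an HMM, chosen because the Fisher information is known exactly (it equals the sample size `n` when the variance is 1), so the numerical observed information can be verified against it:

```python
import numpy as np

def loglik(theta, y):
    # i.i.d. N(theta, 1) log-likelihood (toy stand-in for the HMM case)
    return -0.5 * np.sum((y - theta) ** 2) - 0.5 * len(y) * np.log(2 * np.pi)

def observed_information(theta, y, h=1e-4):
    # negative second derivative of the log-likelihood, central differences
    return -(loglik(theta + h, y) - 2 * loglik(theta, y)
             + loglik(theta - h, y)) / h ** 2

rng = np.random.default_rng(3)
y = rng.normal(1.0, 1.0, size=1000)
mle = y.mean()                          # the MLE of theta in this model
info = observed_information(mle, y)     # Fisher information here is n = 1000
```

In the HMM setting of the paper, the same quantity (the negative Hessian of the HMM log-likelihood at the MLE) plays the role of `info`, and the paper's result is that it consistently estimates the Fisher information.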

Olivier Wintenberger - One of the best experts on this subject based on the ideXlab platform.