The experts below are selected from a list of 360 experts worldwide, as ranked by the ideXlab platform.
Yihui Zhan - One of the best experts on this subject based on the ideXlab platform.
-
A Hybrid Algorithm for Computation of the Nonparametric Maximum Likelihood Estimator from Censored Data
Journal of the American Statistical Association, 1997
Co-Authors: Jon A. Wellner, Yihui Zhan
Abstract: We present a hybrid algorithm for nonparametric maximum likelihood estimation from censored data when the log-likelihood is concave. The hybrid algorithm uses a composite algorithmic mapping combining the expectation-maximization (EM) algorithm and the (modified) iterative convex minorant (ICM) algorithm. Global convergence of the hybrid algorithm is proven; the iterates generated by the hybrid algorithm are shown to converge unambiguously to the nonparametric maximum likelihood estimator (NPMLE). Numerical simulations demonstrate that, for doubly censored data, the hybrid algorithm converges more rapidly than either the EM algorithm or the naive ICM algorithm. The speed of the hybrid algorithm makes it possible to accompany the NPMLE with bootstrap confidence bands.
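The EM half of such a hybrid scheme is simple to sketch. Below is a minimal self-consistency (EM) iteration for the NPMLE from interval-censored data, in the spirit of Turnbull's algorithm; the tiny dataset, variable names, and convergence settings are invented for illustration, and the ICM half of the hybrid is not shown.

```python
import numpy as np

# Interval-censored observations: each row (l, r) means the event time
# lies in [l, r]; exact observations have l == r. (Toy data, invented.)
intervals = np.array([[1.0, 1.0], [0.5, 2.0], [1.5, 3.0], [2.0, 2.0]])

# Candidate support points for the probability mass (here: all endpoints).
support = np.unique(intervals)

# alpha[i, j] = 1 if support point j is compatible with observation i.
alpha = ((intervals[:, [0]] <= support) & (support <= intervals[:, [1]])).astype(float)

p = np.full(len(support), 1.0 / len(support))  # uniform starting weights
for _ in range(1000):                          # EM / self-consistency iterations
    denom = alpha @ p                          # likelihood of each observation
    # E-step + M-step combined: expected fraction of mass at each support point.
    p_new = (alpha * p).T @ (1.0 / denom) / len(intervals)
    if np.max(np.abs(p_new - p)) < 1e-10:
        p = p_new
        break
    p = p_new
print(p)  # approximate NPMLE weights on the support points; they sum to 1
```

Each iteration increases the log-likelihood, which is the property the hybrid algorithm combines with the faster local convergence of the ICM step.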
Randal Douc - One of the best experts on this subject based on the ideXlab platform.
-
General-Order Observation-Driven Models: Ergodicity and Consistency of the Maximum Likelihood Estimator
Electronic Journal of Statistics, 2021
Co-Authors: Tepmony Sim, Randal Douc, Francois Roueff
Abstract: The class of observation-driven models (ODMs) includes many models of non-linear time series which, in a fashion similar to, yet different from, hidden Markov models (HMMs), involve hidden variables. Interestingly, in contrast to most HMMs, ODMs enjoy likelihoods that can be computed exactly with computational complexity of the same order as the number of observations, making maximum likelihood estimation the privileged approach for statistical inference in these models. A celebrated example of a general-order ODM is the GARCH$(p,q)$ model, for which ergodicity and inference have been studied extensively. However, little is known about more general models, in particular integer-valued ones such as the log-linear Poisson GARCH or the NBIN-GARCH of order $(p,q)$, for which most of the existing results seem restricted to the case $p=q=1$. Here we fill this gap and derive ergodicity conditions for general ODMs. The consistency and asymptotic normality of the maximum likelihood estimator (MLE) can then be derived using the method already developed for first-order ODMs.
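The exact, linear-time likelihood computation is easy to see in the GARCH(1,1) special case: the conditional variance is available by recursion, so each observation contributes one closed-form Gaussian term and no latent-state integration is needed. A minimal sketch, where the function name, toy parameters, and data are invented and the initial variance `sigma2_0` is an arbitrary convention:

```python
import numpy as np

def garch11_loglik(y, omega, alpha, beta, sigma2_0=1.0):
    """Gaussian (quasi-)log-likelihood of a GARCH(1,1) model, computed by
    the volatility recursion in a single O(n) pass over the observations."""
    sigma2 = sigma2_0
    ll = 0.0
    for t in range(len(y)):
        # One closed-form likelihood contribution per observation.
        ll += -0.5 * (np.log(2 * np.pi * sigma2) + y[t] ** 2 / sigma2)
        # Recursion for the next conditional variance.
        sigma2 = omega + alpha * y[t] ** 2 + beta * sigma2
    return ll

rng = np.random.default_rng(0)
y = rng.standard_normal(200)  # toy data, not a GARCH simulation
print(garch11_loglik(y, omega=0.1, alpha=0.1, beta=0.8))
```

The MLE is then obtained by maximizing this function over (omega, alpha, beta) with a standard numerical optimizer; this contrasts with HMMs, where each likelihood evaluation requires filtering over the hidden state.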
-
Handy Sufficient Conditions for the Convergence of the Maximum Likelihood Estimator in Observation-Driven Models
arXiv: Statistics Theory, 2015
Co-Authors: Randal Douc, Francois Roueff
Abstract: This paper generalizes the asymptotic properties obtained for the observation-driven time series models considered by Douc et al. (2013), in the sense that the conditional law of each observation is also permitted to depend on the parameter. The existence of ergodic solutions and the consistency of the maximum likelihood estimator (MLE) are derived under easy-to-check conditions. The obtained conditions appear to apply to a wide class of models. We illustrate our results with specific observation-driven time series, including the recently introduced NBIN-GARCH and NM-GARCH models, demonstrating the consistency of the MLE for these two models.
-
Consistency of the Maximum Likelihood Estimator for General Hidden Markov Models
Annals of Statistics, 2011
Co-Authors: Randal Douc, Eric Moulines, Jimmy Olsson, Ramon Van Handel
Abstract: Consider a parametrized family of general hidden Markov models, where both the observed and unobserved components take values in a complete separable metric space. We prove that the maximum likelihood estimator (MLE) of the parameter is strongly consistent under a rather minimal set of assumptions. As special cases of our main result, we obtain consistency in a large class of nonlinear state space models, as well as general results on linear Gaussian state space models and finite state models. A novel aspect of our approach is an information-theoretic technique for proving identifiability, which does not require an explicit representation for the relative entropy rate. Our method of proof could therefore form a foundation for the investigation of MLE consistency in more general dependent and non-Markovian time series. Also of independent interest is a general concentration inequality for V-uniformly ergodic Markov chains.
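For a finite state space, the likelihood that the MLE maximizes can be computed with the scaled forward recursion; the rescaling step is also where the filter's forgetting of the initial distribution operates. A minimal discrete-emission sketch, with all names and numbers invented for illustration:

```python
import numpy as np

def hmm_loglik(obs, pi0, A, B):
    """Log-likelihood of a discrete-emission HMM via the scaled forward
    recursion: O(n * k^2) for n observations and k hidden states."""
    alpha = pi0 * B[:, obs[0]]          # unnormalized filter at time 0
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for y in obs[1:]:
        alpha = (alpha @ A) * B[:, y]   # predict, then update with emission
        c = alpha.sum()                 # normalizer = conditional likelihood
        ll += np.log(c)
        alpha /= c                      # rescale; the filter forgets pi0 here
    return ll

# Toy 2-state, 2-symbol example (numbers invented).
A  = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition matrix
B  = np.array([[0.7, 0.3], [0.1, 0.9]])   # emission probabilities
pi = np.array([0.5, 0.5])                 # initial distribution
print(hmm_loglik([0, 1, 1, 0], pi, A, B))
```

The MLE maximizes this quantity over the entries of `A`, `B`, and possibly `pi`; the consistency result above concerns that maximizer as the number of observations grows.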
-
Asymptotic Properties of the Maximum Likelihood Estimator in Autoregressive Models with Markov Regime
Annals of Statistics, 2004
Co-Authors: Randal Douc, Eric Moulines, Tobias Ryden
Abstract: An autoregressive process with Markov regime is an autoregressive process for which the regression function at each time point is given by a nonobservable Markov chain. In this paper we consider the asymptotic properties of the maximum likelihood estimator in a possibly nonstationary process of this kind, for which the hidden state space is compact but not necessarily finite. Consistency and asymptotic normality are shown to follow from uniform exponential forgetting of the initial distribution for the hidden Markov chain conditional on the observations.
Jon A Wellner - One of the best experts on this subject based on the ideXlab platform.
-
On the Rate of Convergence of the Maximum Likelihood Estimator of a k-Monotone Density
Science China Mathematics, 2009
Co-Authors: Jon A. Wellner
Abstract: Bounds for the bracketing entropy of the classes of bounded k-monotone functions on [0, A] are obtained under both the Hellinger distance and the $L_p(Q)$ distance, where $1 \le p < \infty$ and Q is a probability measure on [0, A]. The result is then applied to obtain the rate of convergence of the maximum likelihood estimator of a k-monotone density.
-
A Hybrid Algorithm for Computation of the Nonparametric Maximum Likelihood Estimator from Censored Data
Journal of the American Statistical Association, 1997
Co-Authors: Jon A. Wellner, Yihui Zhan
Abstract: We present a hybrid algorithm for nonparametric maximum likelihood estimation from censored data when the log-likelihood is concave. The hybrid algorithm uses a composite algorithmic mapping combining the expectation-maximization (EM) algorithm and the (modified) iterative convex minorant (ICM) algorithm. Global convergence of the hybrid algorithm is proven; the iterates generated by the hybrid algorithm are shown to converge unambiguously to the nonparametric maximum likelihood estimator (NPMLE). Numerical simulations demonstrate that, for doubly censored data, the hybrid algorithm converges more rapidly than either the EM algorithm or the naive ICM algorithm. The speed of the hybrid algorithm makes it possible to accompany the NPMLE with bootstrap confidence bands.
Tobias Ryden - One of the best experts on this subject based on the ideXlab platform.
-
Asymptotic Properties of the Maximum Likelihood Estimator in Autoregressive Models with Markov Regime
Annals of Statistics, 2004
Co-Authors: Randal Douc, Eric Moulines, Tobias Ryden
Abstract: An autoregressive process with Markov regime is an autoregressive process for which the regression function at each time point is given by a nonobservable Markov chain. In this paper we consider the asymptotic properties of the maximum likelihood estimator in a possibly nonstationary process of this kind, for which the hidden state space is compact but not necessarily finite. Consistency and asymptotic normality are shown to follow from uniform exponential forgetting of the initial distribution for the hidden Markov chain conditional on the observations.
-
Asymptotic Normality of the Maximum Likelihood Estimator for General Hidden Markov Models
Annals of Statistics, 1998
Co-Authors: Peter J. Bickel, Yaacov Ritov, Tobias Ryden
Abstract: Hidden Markov models (HMMs) have during the last decade become a widespread tool for modeling sequences of dependent random variables. Inference for such models is usually based on the maximum likelihood estimator (MLE), and consistency of the MLE for general HMMs was recently proved by Leroux. In this paper we show that under mild conditions the MLE is also asymptotically normal, and we prove that the observed information matrix is a consistent estimator of the Fisher information.
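The observed information is the negative Hessian of the log-likelihood at the MLE, so in practice it can be approximated by finite differences of the log-likelihood. A toy sketch using an iid N(mu, 1) stand-in rather than an HMM likelihood (all names and numbers invented; in the paper's setting, `loglik` would be the HMM log-likelihood, typically evaluated with the forward recursion):

```python
import numpy as np

def loglik(mu, y):
    # iid N(mu, 1) log-likelihood: a stand-in for an HMM likelihood,
    # chosen so the answer is known (observed information = n).
    return -0.5 * np.sum((y - mu) ** 2) - 0.5 * len(y) * np.log(2 * np.pi)

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, size=500)
mu_hat = y.mean()                      # the MLE for this toy model

# Central second difference of the log-likelihood at the MLE.
h = 1e-3
obs_info = -(loglik(mu_hat + h, y) - 2 * loglik(mu_hat, y) + loglik(mu_hat - h, y)) / h**2
print(obs_info / len(y))               # close to 1, the per-observation Fisher information
```

The result above says that, in the HMM setting, this same quantity (divided by n) consistently estimates the Fisher information, which is what justifies the usual standard errors and Wald confidence intervals.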
Olivier Wintenberger - One of the best experts on this subject based on the ideXlab platform.
-
Asymptotic Normality of the Quasi-Maximum Likelihood Estimator for Multidimensional Causal Processes
Annals of Statistics, 2009
Co-Authors: Jean-Marc Bardet, Olivier Wintenberger
Abstract: Strong consistency and asymptotic normality of the quasi-maximum likelihood estimator (QMLE) are given for a general class of multidimensional causal processes. For particular cases already studied in the literature (for instance, univariate or multivariate ARCH(∞) processes), the assumptions required for establishing these results are often weaker than existing conditions. The asymptotic behavior of the QMLE is also given for numerous new examples of univariate or multivariate processes (for instance, TARCH or NLARCH processes).
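In the simplest univariate case, ARCH(1), the QMLE minimizes the Gaussian quasi-log-likelihood criterion, which remains a valid estimating criterion even when the innovations are not Gaussian. A self-contained sketch with simulated toy data; a crude grid search stands in for a real numerical optimizer, and all names and tuning values are invented:

```python
import numpy as np

def arch1_qloss(params, y, sigma2_0=1.0):
    """Negative Gaussian quasi-log-likelihood (up to constants) of ARCH(1).
    Here sigma^2_t = omega + alpha * y_{t-1}^2, so the conditional variances
    can be formed in one vectorized pass."""
    omega, alpha = params
    sigma2 = np.empty_like(y)
    sigma2[0] = sigma2_0                      # arbitrary initialization
    sigma2[1:] = omega + alpha * y[:-1] ** 2
    return 0.5 * np.sum(np.log(sigma2) + y ** 2 / sigma2)

# Simulate ARCH(1) data (true omega = 0.5, alpha = 0.3; toy illustration).
rng = np.random.default_rng(2)
n, omega_true, alpha_true = 2000, 0.5, 0.3
y = np.empty(n)
s2 = omega_true / (1 - alpha_true)            # stationary variance as start
for t in range(n):
    y[t] = rng.standard_normal() * np.sqrt(s2)
    s2 = omega_true + alpha_true * y[t] ** 2

# Crude grid-search QMLE over (omega, alpha).
grid = np.linspace(0.05, 1.0, 30)
omega_hat, alpha_hat = min(
    ((w, a) for w in grid for a in grid),
    key=lambda p: arch1_qloss(p, y),
)
print(omega_hat, alpha_hat)                   # near the true (0.5, 0.3)
```

The paper's results say that such a QMLE is strongly consistent and asymptotically normal for a much broader class of (possibly multidimensional) causal processes than this ARCH(1) toy case.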
-
Asymptotic Normality of the Quasi-Maximum Likelihood Estimator for Multidimensional Causal Processes
arXiv: Statistics Theory, 2007
Co-Authors: Jean-Marc Bardet, Olivier Wintenberger
Abstract: Strong consistency and asymptotic normality of the quasi-maximum likelihood estimator (QMLE) are given for a general class of multidimensional causal processes. For particular cases already studied in the literature (for instance, univariate or multivariate GARCH, ARCH, or ARMA-GARCH processes), the assumptions required for establishing these results are often weaker than existing conditions. The asymptotic behavior of the QMLE is also given for numerous new examples of univariate or multivariate processes (for instance, TARCH or NLARCH processes).