Autoregression

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 306 Experts worldwide ranked by ideXlab platform

M. V. Boldin - One of the best experts on this subject based on the ideXlab platform.

  • On Symmetrized Pearson's Type Test in Autoregression with Outliers: Robust Testing of Normality
    arXiv: Statistics Theory, 2020
    Co-Authors: M. V. Boldin
    Abstract:

    We consider a stationary linear AR($p$) model with observations subject to gross errors (outliers). The Autoregression parameters are unknown, as are the distribution and moments of the innovations. The distribution of outliers $\Pi$ is unknown and arbitrary, their intensity is $\gamma n^{-1/2}$ with an unknown $\gamma$, $n$ is the sample size. The Autoregression parameters are estimated by any estimator which is $n^{1/2}$-consistent uniformly in $\gamma \leq \Gamma$ …

  • Robust Sign Test for the Unit Root Hypothesis of Autoregression
    Theory of Probability & Its Applications, 2019
    Co-Authors: M. V. Boldin
    Abstract:

    An ${AR}(1)$-model is considered in which the Autoregression observations contain gross errors (contaminations) with unknown arbitrary distribution. The unit root hypothesis for the Autoregression is tested. A special sign test is proposed as an alternative to the least-squares test (the latter is not applicable in this setting). The sign test is shown to be locally qualitatively robust in terms of the equicontinuity of the power.
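
The flavor of such a sign test can be sketched as follows (an illustrative statistic assuming symmetric innovations, not necessarily the exact statistic of the paper): under H0: ρ = 1 the differences Δy_t are the innovations themselves, so sign(Δy_t) is independent of sign(y_{t−1}), each summand below is a fair ±1 coin, and the normalized sum is asymptotically standard normal. Because only signs enter, gross errors of arbitrary magnitude have limited influence.

```python
import numpy as np

def sign_unit_root_stat(y):
    """Illustrative sign statistic for H0: rho = 1 in an AR(1).

    Under the unit root with symmetric innovations, sign(dy_t) is
    independent of sign(y_{t-1}), so each summand is a fair +/-1
    coin and the statistic is asymptotically N(0, 1).  Outliers
    change magnitudes but rarely all the signs, which is the source
    of the robustness.
    """
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    s = np.sign(dy) * np.sign(y[:-1])
    return s.sum() / np.sqrt(len(s))
```

A large absolute value of the statistic (e.g. beyond the usual normal critical values) speaks against the unit root.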

  • On the Empirical Distribution Function of Residuals in Autoregression with Outliers and Pearson’s Chi-Square Type Tests
    Mathematical Methods of Statistics, 2018
    Co-Authors: M. V. Boldin, M. N. Petriev
    Abstract:

    We consider a stationary linear AR(p) model with observations subject to gross errors (outliers). The distribution of outliers is unknown and arbitrary, their intensity is γn−1/2 with an unknown γ, n is the sample size. The Autoregression parameters are unknown, they are estimated by any estimator which is n1/2-consistent uniformly in γ ≤ Γ < ∞. Using the residuals from the estimated Autoregression, we construct a kind of empirical distribution function (e.d.f.), which is a counterpart of the (inaccessible) e.d.f. of the Autoregression innovations. We obtain a stochastic expansion of this e.d.f., which enables us to construct a test of Pearson’s chi-square type for testing hypotheses about the distribution of innovations. We establish qualitative robustness of this test in terms of uniform equicontinuity of the limiting level with respect to γ in a neighborhood of γ = 0.
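
The construction can be sketched in a few lines (a minimal illustration: OLS as one convenient n^{1/2}-consistent estimator, user-supplied cells, and none of the paper's corrections for estimated parameters or outliers):

```python
import numpy as np

def ar_residual_chisq(y, p, cell_edges, cell_probs):
    """Pearson-type chi-square statistic on AR(p) residuals (sketch).

    Fits the AR(p) model by OLS, computes residuals as a stand-in
    for the inaccessible innovations, bins them, and compares the
    observed counts with those expected under the hypothesized
    innovation distribution.  The paper's corrections for estimated
    parameters and outliers are omitted here.
    """
    y = np.asarray(y, dtype=float)
    n = len(y) - p
    # Row t of the lag matrix holds (y_{t-1}, ..., y_{t-p}).
    X = np.column_stack([y[p - 1 - j:len(y) - 1 - j] for j in range(p)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    resid = y[p:] - X @ coef
    observed, _ = np.histogram(resid, bins=cell_edges)
    expected = n * np.asarray(cell_probs, dtype=float)
    return np.sum((observed - expected) ** 2 / expected)
```

Large values relative to the appropriate chi-square reference distribution speak against the hypothesized innovation law; accounting correctly for the estimated parameters is what the paper's stochastic expansion is for.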

  • Robustness of Sign Tests for Testing Hypotheses about Order of Autoregression
    Theory of Probability & Its Applications, 2013
    Co-Authors: M. V. Boldin
    Abstract:

    Observations of Autoregression are contaminated by additive isolated outliers with an unknown random distribution. Intensity of the outliers $\gamma_n$ is $\min(1,n^{-1/2}\gamma)$, where $\gamma \ge 0$ is unknown, and $n$ is the data size. Robustness of sign tests for hypotheses about order of Autoregression is considered. The result is formulated in terms of equicontinuity of limiting power with respect to $\gamma$ at $\gamma=0$.
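
The contamination scheme shared by these papers is easy to simulate (a sketch; the `outlier_sampler` argument standing in for the arbitrary outlier distribution is this sketch's own device):

```python
import numpy as np

def simulate_contaminated_ar1(n, rho, gamma, outlier_sampler, rng):
    """Simulate the outlier scheme used in these papers (sketch).

    The clean series follows an AR(1); each observation receives an
    additive gross error with probability min(1, gamma * n**-0.5),
    the outliers being drawn from an arbitrary distribution
    represented here by `outlier_sampler`.
    """
    eps = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + eps[t]
    intensity = min(1.0, gamma / np.sqrt(n))
    hit = rng.random(n) < intensity        # which points are contaminated
    x = y + hit * outlier_sampler(n, rng)  # additive isolated outliers
    return x, hit

rng = np.random.default_rng(42)
x, hit = simulate_contaminated_ar1(
    1000, rho=0.5, gamma=5.0,
    outlier_sampler=lambda n, rng: 20.0 * rng.standard_normal(n),
    rng=rng,
)
```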

Peter C.b. Phillips - One of the best experts on this subject based on the ideXlab platform.

  • Uniform Asymptotic Normality in Stationary and Unit Root Autoregression
    Econometric Theory, 2011
    Co-Authors: Chirok Han, Peter C.b. Phillips, Donggyu Sul
    Abstract:

    While differencing transformations can eliminate nonstationarity, they typically reduce signal strength and correspondingly reduce rates of convergence in unit root Autoregressions. The present paper shows that aggregating moment conditions that are formulated in differences provides an orderly mechanism for preserving information and signal strength in Autoregressions with some very desirable properties. In first order Autoregression, a partially aggregated estimator based on moment conditions in differences is shown to have a limiting normal distribution which holds uniformly in the autoregressive coefficient ρ, including stationary and unit root cases. The rate of convergence is √n when |ρ| < 1 and the limit distribution is the same as that of the Gaussian maximum likelihood estimator (MLE), but when ρ = 1 the rate of convergence to the normal distribution is within a slowly varying factor of n. A fully aggregated estimator is shown to have the same limit behavior in the stationary case and to have nonstandard limit distributions in unit root and near integrated cases which reduce both the bias and the variance of the MLE. This result shows that it is possible to improve on the asymptotic behavior of the MLE without using an artificial shrinkage technique or otherwise accelerating convergence at unity at the cost of performance in the neighborhood of unity.

  • Uniform Asymptotic Normality in Stationary and Unit Root Autoregression
    SSRN Electronic Journal, 2010
    Co-Authors: Chirok Han, Peter C.b. Phillips, Donggyu Sul
    Abstract:

    While differencing transformations can eliminate nonstationarity, they typically reduce signal strength and correspondingly reduce rates of convergence in unit root Autoregressions. The present paper shows that aggregating moment conditions that are formulated in differences provides an orderly mechanism for preserving information and signal strength in Autoregressions with some very desirable properties. In first order Autoregression, a partially aggregated estimator based on moment conditions in differences is shown to have a limiting normal distribution which holds uniformly in the autoregressive coefficient ρ, including stationary and unit root cases. The rate of convergence is √n when |ρ| < 1 …
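
The idea of estimating from moment conditions in differences can be sketched for the AR(1) case (an illustrative estimator in the spirit of the paper's partially aggregated estimator, not necessarily its exact form). From Δy_t = ρΔy_{t−1} + ε_t − ε_{t−1} one can check that E[Δy_{t−1}(2Δy_t + Δy_{t−1}) − ρ(Δy_{t−1})²] = 0 for every |ρ| ≤ 1, stationary and unit root cases alike, which suggests the estimator below.

```python
import numpy as np

def diff_moment_ar1(y):
    """Difference-based AR(1) estimator (illustrative sketch).

    Solves the empirical analogue of the moment condition
        E[ dy_{t-1} * (2*dy_t + dy_{t-1}) - rho * dy_{t-1}**2 ] = 0,
    which holds for every |rho| <= 1, so the same formula is valid
    in stationary and unit root cases.
    """
    dy = np.diff(np.asarray(y, dtype=float))
    num = np.sum(dy[:-1] * (2.0 * dy[1:] + dy[:-1]))
    den = np.sum(dy[:-1] ** 2)
    return num / den
```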

  • Uniform Limit Theory for Stationary Autoregression
    Journal of Time Series Analysis, 2006
    Co-Authors: Liudas Giraitis, Peter C.b. Phillips
    Abstract:

    First order Autoregression is shown to satisfy a limit theory which is uniform over stationary values of the autoregressive coefficient ρ = ρ_n ∈ [0, 1) provided (1 − ρ_n)n → ∞. This extends existing Gaussian limit theory by allowing for values of stationary ρ that include neighbourhoods of unity provided they are wider than O(n^{−1}), even by a slowly varying factor. Rates of convergence depend on ρ and are at least √n but less than n. Only second moments are assumed, as in the case of stationary Autoregression with fixed ρ.

  • Uniform Limit Theory for Stationary Autoregression
    2004
    Co-Authors: Liudas Giraitis, Peter C.b. Phillips
    Abstract:

    First order Autoregression is shown to satisfy a limit theory which is uniform over stationary values of the autoregressive coefficient rho = rho_{n} in [0,1) provided (1 - rho_{n})n approaches infinity. This extends existing Gaussian limit theory by allowing for values of stationary rho that include neighbourhoods of unity provided they are wider than O(n^{-1}), even by a slowly varying factor. Rates of convergence depend on rho and are at least the square root of n but less than n. Only second moments are assumed, as in the case of stationary Autoregression with fixed rho.

  • Nonstationary Density Estimation and Kernel Autoregression
    1998
    Co-Authors: Peter C.b. Phillips, Joon Y. Park
    Abstract:

    An asymptotic theory is developed for the kernel density estimate of a random walk and the kernel regression estimator of a nonstationary first order Autoregression. The kernel density estimator provides a consistent estimate of the local time spent by the random walk in the spatial vicinity of a point that is determined in part by the argument of the density and in part by initial conditions. The kernel regression estimator is shown to be consistent and to have a mixed normal limit theory. The limit distribution has a mixing variate that is given by the reciprocal of the local time of a standard Brownian motion. The permissible range for the bandwidth parameter h_{n} includes rates which may increase as well as decrease with the sample size n, in contrast to the case of a stationary Autoregression. However, the convergence rate of the kernel regression estimator is at most n^{1/4}, and this is slower than that of a stationary kernel Autoregression, in contrast to the parametric case. In spite of these differences in the limit theory and the rates of convergence between the stationary and nonstationary cases, it is shown that the usual formulae for confidence intervals for the regression function still apply when h_{n} -> 0.
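
The kernel regression estimator in question is the standard Nadaraya–Watson smoother applied to the pairs (y_{t−1}, y_t); a minimal sketch:

```python
import numpy as np

def kernel_autoregression(y, x0, h):
    """Nadaraya-Watson estimate of E[y_t | y_{t-1} = x0] (sketch).

    For a random walk y the estimator remains consistent, but its
    convergence rate is at most n**(1/4), governed by the local time
    the path spends near x0, as discussed in the abstract above.
    """
    y = np.asarray(y, dtype=float)
    prev, curr = y[:-1], y[1:]
    w = np.exp(-0.5 * ((prev - x0) / h) ** 2)   # Gaussian kernel weights
    return np.sum(w * curr) / np.sum(w)
```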

A. K. Md. Ehsanes Saleh - One of the best experts on this subject based on the ideXlab platform.

  • Ridge Autoregression R-Estimation: Subspace Restriction
    Contemporary Developments in Statistical Theory, 2013
    Co-Authors: A. K. Md. Ehsanes Saleh
    Abstract:

    This paper considers the “ridge Autoregression R-estimation” of the AR(p)-model when the parameters of the AR(p)-model are suspected to belong to a linear subspace. Accordingly, we introduce ridge Autoregression (RARR) modifications to the usual five R-estimators of the parameters of the AR(p)-model. This class of RARR R-estimators not only alleviates the problem of multicollinearity in the estimated covariance matrix but also retains its asymptotic dominance properties under a quadratic loss function.
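
The paper's estimators are rank-based, but the ridge ingredient itself is easy to illustrate on the ordinary least-squares criterion (a sketch under that simplification, not the paper's R-estimator):

```python
import numpy as np

def ridge_ar(y, p, lam):
    """Ridge-penalized AR(p) fit (sketch, least-squares version).

    Adds lam * I to the Gram matrix of the lag regressors, which
    shrinks the coefficient vector and stabilizes the estimate when
    the lag regressors are nearly collinear -- the multicollinearity
    problem the RARR modifications address in the rank-based setting.
    """
    y = np.asarray(y, dtype=float)
    # Row t of the lag matrix holds (y_{t-1}, ..., y_{t-p}).
    X = np.column_stack([y[p - 1 - j:len(y) - 1 - j] for j in range(p)])
    A = X.T @ X + lam * np.eye(p)
    return np.linalg.solve(A, X.T @ y[p:])
```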

  • Autoregression Quantiles and Related Rank-Scores Processes
    The Annals of Statistics, 1995
    Co-Authors: Hira L. Koul, A. K. Md. Ehsanes Saleh
    Abstract:

    This paper develops extensions of the regression quantiles of Koenker and Bassett (1978) to Autoregression. It generalizes several results of Jureckova (1992a) and Gutenbrunner and Jureckova (1992) in linear regression to Autoregression models. In particular, it gives the asymptotic uniform linearity of linear rank-scores statistics based on residuals suitable in Autoregression. It also discusses the two types of $L$-statistics appropriate in Autoregression.
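
Autoregression quantiles extend the Koenker–Bassett check-function criterion to lagged regressors. The paper studies them through linear programming and rank-score duality; the sketch below instead uses a crude grid search over the slope for an AR(1), profiling out the intercept as a quantile of the residuals (a simplification belonging to this sketch, not the paper's method):

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check function rho_tau summed over residuals."""
    return np.sum(u * (tau - (u < 0)))

def ar1_quantile_fit(y, tau, b_grid):
    """Autoregression quantile for an AR(1) with intercept (sketch).

    For each candidate slope b, the optimal intercept is the
    tau-quantile of y_t - b * y_{t-1}; the pair minimizing the check
    loss over the grid is returned.
    """
    y = np.asarray(y, dtype=float)
    prev, curr = y[:-1], y[1:]
    best = None
    for b in b_grid:
        a = np.quantile(curr - b * prev, tau)
        loss = check_loss(curr - a - b * prev, tau)
        if best is None or loss < best[0]:
            best = (loss, a, b)
    return best[1], best[2]   # intercept, slope
```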

Pentti Saikkonen - One of the best experts on this subject based on the ideXlab platform.

  • Subgeometrically ergodic Autoregressions
    Econometric Theory, 2020
    Co-Authors: Mika Meitz, Pentti Saikkonen
    Abstract:

    In this paper we discuss how the notion of subgeometric ergodicity in Markov chain theory can be exploited to study stationarity and ergodicity of nonlinear time series models. Subgeometric ergodicity means that the transition probability measures converge to the stationary measure at a rate slower than geometric. Specifically, we consider suitably defined higher-order nonlinear Autoregressions that behave similarly to a unit root process for large values of the observed series but we place almost no restrictions on their dynamics for moderate values of the observed series. Results on the subgeometric ergodicity of nonlinear Autoregressions have previously appeared only in the first-order case. We provide an extension to the higher-order case and show that the Autoregressions we consider are, under appropriate conditions, subgeometrically ergodic. As useful implications we also obtain stationarity and $\beta$-mixing with subgeometrically decaying mixing coefficients.
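
A toy first-order model of the flavor described, unit-root-like for large values and strongly mean-reverting for moderate ones, can be simulated as follows (a hypothetical illustration of the weakening-drift mechanism, not an example taken from the paper):

```python
import numpy as np

def simulate_nonlinear_ar(n, rng, c=1.0):
    """Toy nonlinear AR(1) with drift that weakens at infinity.

    The conditional-mean factor (1 - c / (1 + |y|)) tends to 1 for
    large |y|, so the process behaves like a unit root process there,
    while for moderate values the pull toward zero is strong.  Such
    weakening drift is the typical source of subgeometric, rather
    than geometric, ergodicity.
    """
    y = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        factor = 1.0 - c / (1.0 + abs(y[t - 1]))
        y[t] = factor * y[t - 1] + eps[t]
    return y
```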

  • Gaussian Mixture Vector Autoregression
    Journal of Econometrics, 2016
    Co-Authors: Leena Kalliovirta, Mika Meitz, Pentti Saikkonen
    Abstract:

    This paper proposes a new nonlinear vector autoregressive (VAR) model referred to as the Gaussian mixture vector autoregressive (GMVAR) model. The GMVAR model belongs to the family of mixture vector autoregressive models and is designed for analyzing time series that exhibit regime-switching behavior. The main difference between the GMVAR model and previous mixture VAR models lies in the definition of the mixing weights that govern the regime probabilities. In the GMVAR model the mixing weights depend on past values of the series in a specific way that has very advantageous properties from both theoretical and practical points of view. A practical advantage is that there is a wide diversity of ways in which a researcher can associate different regimes with specific economically meaningful characteristics of the phenomenon modeled. A theoretical advantage is that stationarity and ergodicity of the underlying stochastic process are straightforward to establish and, contrary to most other nonlinear autoregressive models, explicit expressions of low-order stationary marginal distributions are known. These theoretical properties are used to develop an asymptotic theory of maximum likelihood estimation for the GMVAR model, whose practical usefulness is illustrated in a bivariate setting by examining the relationship between the EUR–USD exchange rate and related interest rate data.
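
The data-dependent mixing weights can be sketched in the univariate, first-order case (a simplified sketch: in the paper the weights involve the regime-specific stationary density of the last p observations of the vector series):

```python
import numpy as np

def gmar_mixing_weights(y_prev, alphas, means, sds):
    """Mixing weights of a Gaussian-mixture autoregression (sketch,
    univariate with one lag).

    Each regime m receives weight proportional to alpha_m times that
    regime's stationary Gaussian density evaluated at the previous
    observation, so regimes whose typical range contains y_prev
    dominate -- this is the data-dependent weighting described above.
    """
    alphas = np.asarray(alphas, dtype=float)
    means = np.asarray(means, dtype=float)
    sds = np.asarray(sds, dtype=float)
    dens = np.exp(-0.5 * ((y_prev - means) / sds) ** 2) / (sds * np.sqrt(2.0 * np.pi))
    w = alphas * dens
    return w / w.sum()
```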

  • Forecasting with a noncausal VAR model
    Computational Statistics & Data Analysis, 2014
    Co-Authors: Henri Nyberg, Pentti Saikkonen
    Abstract:

    Simulation-based forecasting methods for a non-Gaussian noncausal vector autoregressive (VAR) model are proposed. In noncausal Autoregressions the assumption of non-Gaussianity is needed for reasons of identifiability. Unlike in conventional causal Autoregressions the prediction problem in noncausal Autoregressions is generally nonlinear, implying that its analytical solution is unfeasible and, therefore, simulation or numerical methods are required in computing forecasts. It turns out that different special cases of the model call for different simulation procedures. Monte Carlo simulations demonstrate that gains in forecasting accuracy are achieved by using the correct noncausal VAR model instead of its conventional causal counterpart. In an empirical application, a noncausal VAR model comprised of U.S. inflation and marginal cost turns out superior to the best-fitting conventional causal VAR model in forecasting inflation.
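
The simulation-based forecasting idea can be illustrated on a causal AR(1), where the closed-form answer is known and serves as a check (in the noncausal models of the paper no closed form exists, which is exactly why simulation is needed; the specific procedures there differ from this sketch):

```python
import numpy as np

def mc_forecast_ar1(y_last, rho, horizon, n_paths, rng):
    """Monte Carlo point forecast of an AR(1) (sketch).

    Simulates many future paths from the last observation and averages
    the terminal values.  In the causal case shown here the answer is
    available in closed form (rho**horizon * y_last), so this only
    illustrates the simulation machinery.
    """
    paths = np.full(n_paths, y_last, dtype=float)
    for _ in range(horizon):
        paths = rho * paths + rng.standard_normal(n_paths)
    return paths.mean()
```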

  • Noncausal Vector Autoregression
    Econometric Theory, 2013
    Co-Authors: Markku Lanne, Pentti Saikkonen
    Abstract:

    In this paper, we propose a new noncausal vector autoregressive (VAR) model for non-Gaussian time series. The assumption of non-Gaussianity is needed for reasons of identifiability. Assuming that the error distribution belongs to a fairly general class of elliptical distributions, we develop an asymptotic theory of maximum likelihood estimation and statistical inference. We argue that allowing for noncausality is of particular importance in economic applications which currently use only conventional causal VAR models. Indeed, if noncausality is incorrectly ignored, the use of a causal VAR model may yield suboptimal forecasts and misleading economic interpretations. Therefore, we propose a procedure for discriminating between causality and noncausality. The methods are illustrated with an application to interest rate data.

  • Noncausal Vector Autoregression
    2009
    Co-Authors: Markku Lanne, Pentti Saikkonen
    Abstract:

    In this paper, we propose a new noncausal vector autoregressive (VAR) model for non-Gaussian time series. The assumption of non-Gaussianity is needed for reasons of identifiability. Assuming that the error distribution belongs to a fairly general class of elliptical distributions, we develop an asymptotic theory of maximum likelihood estimation and statistical inference. We argue that allowing for noncausality is of importance in empirical economic research, which currently uses only conventional causal VAR models. Indeed, if noncausality is incorrectly ignored, the use of a causal VAR model may yield suboptimal forecasts and misleading economic interpretations. This is emphasized in the paper by noting that noncausality is closely related to the notion of nonfundamentalness, under which structural economic shocks cannot be recovered from an estimated causal VAR model. As detecting nonfundamentalness is therefore of great importance, we propose a procedure for discriminating between causality and noncausality that can be seen as a test of nonfundamentalness. The methods are illustrated with applications to fiscal foresight and the term structure of interest rates.

Jonathan Penm - One of the best experts on this subject based on the ideXlab platform.