Autocovariances

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 306 Experts worldwide ranked by ideXlab platform

Jingjing Yang - One of the best experts on this subject based on the ideXlab platform.

  • Exactly/Nearly Unbiased Estimation of Autocovariances of a Univariate Time Series With Unknown Mean
    Journal of Time Series Analysis, 2016
    Co-Authors: Timothy J. Vogelsang, Jingjing Yang
    Abstract:

    This article proposes an exactly/nearly unbiased estimator of the autocovariance function of a univariate time series with unknown mean. The estimator is a linear function of the usual sample autocovariances computed from the observed demeaned data. The idea is to stack the usual sample autocovariances into a vector and show that the expectation of this vector is a linear combination of the population autocovariances. A matrix, which we label A, collects the weights in these linear combinations. When the population autocovariances at high lags are zero (small), exactly (nearly) unbiased estimators of the remaining autocovariances can be obtained using the inverse of upper blocks of the A matrix. The A-matrix estimators are shown to be asymptotically equivalent to the usual sample autocovariance estimators. The A-matrix estimators can be used to construct estimators of the autocorrelation function that have less bias than the usual estimators. Simulations show that the A-matrix estimators can substantially reduce bias while not necessarily increasing mean square error. More powerful tests of the null hypothesis of white noise are obtained using the A-matrix estimators.
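    The bias this estimator targets is easy to see in simulation: for demeaned data, the usual sample autocovariances are biased even under white noise, with an expected lag-h value of roughly -sigma^2/n rather than 0. A minimal Python/NumPy sketch of that motivating fact (the A-matrix correction itself is derived in the paper and not reproduced here):

    ```python
    import numpy as np

    def sample_autocov(x, h):
        """Usual sample autocovariance at lag h, computed from demeaned data."""
        n = len(x)
        xd = x - x.mean()
        return np.dot(xd[: n - h], xd[h:]) / n

    rng = np.random.default_rng(0)
    n, reps = 50, 20000

    # Monte Carlo expectation of the lag-1 sample autocovariance for Gaussian
    # white noise; the population autocovariance at lag 1 is exactly 0.
    est = np.array([sample_autocov(rng.standard_normal(n), 1) for _ in range(reps)])
    print(est.mean())  # clearly negative: demeaning biases the estimator downward
    ```

    The paper's A-matrix estimator removes this demeaning-induced bias by inverting the exact linear map from population to expected sample autocovariances.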

Timothy J. Vogelsang - One of the best experts on this subject based on the ideXlab platform.

  • Exactly/Nearly Unbiased Estimation of Autocovariances of a Univariate Time Series With Unknown Mean
    Journal of Time Series Analysis, 2016
    Co-Authors: Timothy J. Vogelsang, Jingjing Yang
    Abstract:

    This article proposes an exactly/nearly unbiased estimator of the autocovariance function of a univariate time series with unknown mean. The estimator is a linear function of the usual sample autocovariances computed from the observed demeaned data. The idea is to stack the usual sample autocovariances into a vector and show that the expectation of this vector is a linear combination of the population autocovariances. A matrix, which we label A, collects the weights in these linear combinations. When the population autocovariances at high lags are zero (small), exactly (nearly) unbiased estimators of the remaining autocovariances can be obtained using the inverse of upper blocks of the A matrix. The A-matrix estimators are shown to be asymptotically equivalent to the usual sample autocovariance estimators. The A-matrix estimators can be used to construct estimators of the autocorrelation function that have less bias than the usual estimators. Simulations show that the A-matrix estimators can substantially reduce bias while not necessarily increasing mean square error. More powerful tests of the null hypothesis of white noise are obtained using the A-matrix estimators.

Tucker Mcelroy - One of the best experts on this subject based on the ideXlab platform.

  • Computation of vector ARMA Autocovariances
    Statistics & Probability Letters, 2017
    Co-Authors: Tucker Mcelroy
    Abstract:

    This note describes an algorithm for computing the autocovariance sequence of a VARMA process without requiring the intermediary step of determining the Wold representation. Although the recursive formula for the autocovariances is well known, the initialization of this recursion in standard treatments (such as Brockwell and Davis (1991) or Lütkepohl (2007)) is slightly nuanced; we provide explicit formulas and algorithms for the initial autocovariances.
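    For the VAR(1) special case the initialization is explicit: Gamma(0) solves the discrete Lyapunov equation Gamma(0) = A Gamma(0) A' + Sigma, after which the recursion Gamma(h) = A Gamma(h-1) takes over. A hedged Python/NumPy sketch of this special case (the note's algorithm covers general VARMA, which this does not):

    ```python
    import numpy as np

    def var1_autocov(A, Sigma, max_lag):
        """Autocovariances Gamma(0..max_lag) of a stable VAR(1): x_t = A x_{t-1} + e_t.

        Gamma(0) solves Gamma(0) = A Gamma(0) A' + Sigma; in vectorized form,
        vec(Gamma(0)) = (I - kron(A, A))^{-1} vec(Sigma) using column-major vec.
        Then Gamma(h) = A Gamma(h-1) for h >= 1.
        """
        p = A.shape[0]
        vec_g0 = np.linalg.solve(np.eye(p * p) - np.kron(A, A),
                                 Sigma.flatten(order="F"))
        gammas = [vec_g0.reshape(p, p, order="F")]
        for _ in range(max_lag):
            gammas.append(A @ gammas[-1])
        return gammas

    A = np.array([[0.5, 0.1], [0.0, 0.3]])
    Sigma = np.eye(2)
    G = var1_autocov(A, Sigma, 3)
    # G[0] satisfies the initialization equation Gamma(0) = A Gamma(0) A' + Sigma
    ```

    Moving-average terms change only the first few equations of the recursion, which is exactly the initialization nuance the note addresses.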

  • computation of the Autocovariances for time series with multiple long range persistencies
    Computational Statistics & Data Analysis, 2016
    Co-Authors: Tucker Mcelroy, Scott H Holan
    Abstract:

    Gegenbauer processes allow for flexible and convenient modeling of time series data with multiple spectral peaks, where these peaks are described qualitatively through the concept of cyclical long-range dependence. The Gegenbauer class is extensive, including ARFIMA, seasonal ARFIMA, and GARMA processes as special cases. Model estimation is challenging for Gegenbauer processes when multiple zeros and poles occur in the spectral density, because the autocovariance function is laborious to compute. The method of splitting (essentially, computing autocovariances by convolving long-memory and short-memory dynamics) is only tractable when a single long-memory pole exists. An additive decomposition of the spectrum into a sum of spectra is proposed, where each summand has a single singularity, so that a computationally efficient splitting method can be applied to each term and the results aggregated. This approach differs from handling all the poles in the spectral density at once, via an analysis of truncation error. The proposed technique allows for fast estimation of time series with multiple long-range dependencies, which is illustrated numerically and through several case studies.
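    The single-pole building block that the splitting method relies on has a closed form: for ARFIMA(0, d, 0) with |d| < 1/2, gamma(0) = sigma^2 * Gamma(1 - 2d) / Gamma(1 - d)^2, and the autocorrelations satisfy the recursion rho(h) = rho(h-1) * (h - 1 + d) / (h - d). A sketch of just this long-memory factor in Python (the paper's contribution, aggregating such terms across multiple Gegenbauer poles, is not attempted here):

    ```python
    import math

    def arfima0d0_autocov(d, max_lag, sigma2=1.0):
        """Autocovariances gamma(0..max_lag) of ARFIMA(0, d, 0) with |d| < 0.5.

        gamma(0) = sigma2 * Gamma(1 - 2d) / Gamma(1 - d)^2, and the
        autocorrelations satisfy rho(h) = rho(h-1) * (h - 1 + d) / (h - d).
        """
        g0 = sigma2 * math.gamma(1.0 - 2.0 * d) / math.gamma(1.0 - d) ** 2
        gammas, rho = [g0], 1.0
        for h in range(1, max_lag + 1):
            rho *= (h - 1.0 + d) / (h - d)
            gammas.append(g0 * rho)
        return gammas

    print(arfima0d0_autocov(0.3, 5))  # slow hyperbolic decay for 0 < d < 0.5
    ```

    Setting d = 0 recovers white noise (all autocovariances beyond lag 0 vanish), a quick sanity check on the recursion.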

  • subsampling inference for the Autocovariances and autocorrelations of long memory heavy tailed linear time series
    Journal of Time Series Analysis, 2012
    Co-Authors: Tucker Mcelroy, Agnieszka Jach
    Abstract:

    We provide a self-normalization for the sample autocovariances and autocorrelations of a linear, long-memory time series with innovations that either have finite fourth moment or are heavy-tailed with tail index 2 < α < 4. In the asymptotic distribution of the sample autocovariance there are three rates of convergence, depending on the interplay between the memory parameter d and α, which consequently lead to three different limit distributions; for the sample autocorrelation, the limit distribution depends only on d. We introduce a self-normalized sample autocovariance statistic that is computable without knowledge of α or d (or their relationship) and converges to a non-degenerate distribution. We also treat self-normalization of the autocorrelations. Because the corresponding asymptotic distributions are still parameter-dependent, the sampling distributions are approximated non-parametrically by subsampling. The subsampling-based confidence intervals for the process autocovariances and autocorrelations are shown to have satisfactory empirical coverage rates in a simulation study. The impact of the subsampling block size on coverage is assessed. The methodology is further applied to the log-squared returns of Merck stock.
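    The mechanics of the subsampling step are generic: recompute the statistic on every overlapping block of length b and read limits off the empirical distribution of the block statistics. A minimal Python/NumPy sketch for the lag-1 sample autocovariance (plain subsampling with crude quantiles only; the paper's self-normalization and convergence-rate handling are omitted):

    ```python
    import numpy as np

    def lag1_autocov(x):
        """Sample autocovariance at lag 1 of the demeaned series."""
        xd = x - x.mean()
        return np.dot(xd[:-1], xd[1:]) / len(x)

    def subsample_distribution(x, b):
        """Statistic recomputed on all n - b + 1 overlapping blocks of length b."""
        n = len(x)
        return np.array([lag1_autocov(x[i : i + b]) for i in range(n - b + 1)])

    rng = np.random.default_rng(1)
    x = rng.standard_normal(500)          # white noise: true lag-1 autocovariance is 0
    stats = subsample_distribution(x, b=50)
    lo, hi = np.quantile(stats, [0.025, 0.975])  # crude equal-tailed interval
    ```

    The point of the paper's self-normalization is that this recipe remains valid without knowing the rate at which the block statistics concentrate, which under long memory and heavy tails depends on the unknown d and α.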

Ryo Okui - One of the best experts on this subject based on the ideXlab platform.

Benjamin Kedem - One of the best experts on this subject based on the ideXlab platform.