Longitudinal Data

The experts below are selected from a list of 395,121 experts worldwide, ranked by the ideXlab platform.

Hans-Georg Müller - One of the best experts on this subject based on the ideXlab platform.

  • Modeling Longitudinal Data on Riemannian Manifolds
    arXiv: Methodology, 2018
    Co-Authors: Xiongtao Dai, Zhenhua Lin, Hans-Georg Müller
    Abstract:

    When considering functional principal component analysis for sparsely observed longitudinal data that take values on a nonlinear manifold, a major challenge is how to handle the sparse and irregular observations that are commonly encountered in longitudinal studies. Addressing this challenge, we provide theory and implementations for a manifold version of the principal analysis by conditional expectation (PACE) procedure that produces representations intrinsic to the manifold, extending a well-established version of functional principal component analysis targeting sparsely sampled longitudinal data in linear spaces. Key steps are local linear smoothing methods for the estimation of a Fréchet mean curve, mapping the observed manifold-valued longitudinal data to tangent spaces around the estimated mean curve, and applying smoothing methods to obtain the covariance structure of the mapped data. Dimension reduction is achieved via representations based on the first few leading principal components. A finitely truncated representation of the original manifold-valued data is then obtained by mapping these tangent space representations to the manifold. We show that the proposed estimates of the mean curve and covariance structure achieve state-of-the-art convergence rates. For longitudinal emotional well-being data for unemployed workers, as an example of time-dynamic compositional data that are located on a sphere, we demonstrate that our methods lead to interpretable eigenfunctions and principal component scores. In a second example, we analyze the body shapes of wallabies by mapping the relative size of their body parts onto a spherical pre-shape space. Compared to standard functional principal component analysis, which is based on Euclidean geometry, the proposed approach leads to improved trajectory recovery for sparsely sampled data on nonlinear manifolds.
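
    To make the tangent-space construction concrete, the following is a minimal Python sketch for trajectories on the unit sphere, assuming a common dense time grid and a crude plug-in mean; the helper names (log_map, exp_map) and the synthetic data are illustrative only and do not reproduce the authors' estimators (in particular, the local linear Fréchet mean smoothing and the sparse-data conditioning step are omitted).

    ```python
    import numpy as np

    def log_map(p, q):
        """Riemannian log map on the unit sphere: tangent vector at p pointing toward q."""
        v = q - np.dot(p, q) * p                      # project q onto the tangent space at p
        nv = np.linalg.norm(v)
        theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
        return np.zeros_like(p) if nv < 1e-12 else theta * v / nv

    def exp_map(p, v):
        """Riemannian exp map on the unit sphere: follow tangent vector v from p."""
        nv = np.linalg.norm(v)
        return p if nv < 1e-12 else np.cos(nv) * p + np.sin(nv) * v / nv

    # toy data: n subjects observed on a common grid of T time points, values on S^2
    rng = np.random.default_rng(0)
    n, T = 50, 20
    X = rng.normal(size=(n, T, 3))
    X /= np.linalg.norm(X, axis=2, keepdims=True)

    # crude pointwise mean curve via projected Euclidean mean (illustration only)
    mu = X.mean(axis=0)
    mu /= np.linalg.norm(mu, axis=1, keepdims=True)

    # map observations to tangent spaces around the mean curve
    V = np.array([[log_map(mu[t], X[i, t]) for t in range(T)] for i in range(n)])

    # ordinary FPCA on the stacked tangent-space coordinates
    Vflat = V.reshape(n, T * 3)
    U, s, Wt = np.linalg.svd(Vflat - Vflat.mean(axis=0), full_matrices=False)
    scores = U[:, :2] * s[:2]                          # first two principal component scores
    ```

    Mapping the truncated tangent-space representation back through exp_map would yield the finitely truncated manifold-valued trajectories described in the abstract.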

  • A stickiness coefficient for Longitudinal Data
    Computational Statistics & Data Analysis, 2012
    Co-Authors: Andrea Gottlieb, Hans-Georg Müller
    Abstract:

    In this paper, we introduce the stickiness coefficient, a summary statistic for time-course and longitudinal data, which is designed to characterize the time dynamics of such data. The stickiness coefficient provides a simple, intuitive and informative measure that captures key information contained in time-course data. Under the assumption that the data are generated by the trajectories of a smooth underlying stochastic process, the stickiness coefficient illuminates the relationship between the value of the process at one time and the value it assumes at another time via a single numeric measure. In particular, the stickiness coefficient summarizes the extent to which deviations from the mean trajectory tend to co-vary over time. The estimation scheme we propose allows for estimation even in the case that the longitudinal data are sparsely observed at irregular times and may be corrupted by noise. We demonstrate an estimation procedure for the stickiness coefficient and establish asymptotic consistency as well as asymptotic convergence rates. We illustrate the resulting stickiness coefficient with some theoretical calculations as well as several economic and health-related data examples.
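
    The Python snippet below is an illustrative proxy only for the idea of deviations co-varying over time (the average off-diagonal correlation of densely observed, noise-free trajectories); it is not the stickiness coefficient estimator of the paper, which is defined through the covariance structure of the underlying process and accommodates sparse, noisy, irregular designs.

    ```python
    import numpy as np

    def stickiness_proxy(X):
        """Illustrative proxy: average correlation between deviations from the
        mean trajectory at distinct time points, from dense trajectories X (n x T).
        NOT the estimator of the paper, which handles sparse, noisy designs."""
        dev = X - X.mean(axis=0)               # deviations from the mean trajectory
        C = np.corrcoef(dev, rowvar=False)     # T x T correlation surface
        T = C.shape[0]
        return C[~np.eye(T, dtype=bool)].mean()

    # toy example: trajectories with strongly persistent ("sticky") subject effects
    rng = np.random.default_rng(1)
    n, T = 200, 30
    subject_effect = rng.normal(size=(n, 1))
    X = subject_effect + 0.3 * rng.normal(size=(n, T))
    print(stickiness_proxy(X))                 # close to 1 for highly persistent deviations
    ```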

  • Response-adaptive regression for Longitudinal Data.
    Biometrics, 2010
    Co-Authors: Hans-Georg Müller
    Abstract:

    We propose a response-adaptive model for functional linear regression, which is adapted to sparsely sampled longitudinal responses. Our method aims at predicting response trajectories and models the regression relationship by directly conditioning the sparse and irregular observations of the response on the predictor, which can be of scalar, vector, or functional type. This obviates the need to model the response trajectories, a task that is challenging for sparse longitudinal data and was previously required for functional regression implementations for longitudinal data. The proposed approach turns out to be superior to previous functional regression approaches in terms of prediction error. It encompasses a variety of regression settings that are relevant for the functional modeling of longitudinal data in the life sciences. The improved prediction of response trajectories with the proposed response-adaptive approach is illustrated for a longitudinal study of kiwi weight growth and by an analysis of the dynamic relationship between viral load and CD4 cell counts observed in AIDS clinical trials.
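
    For orientation, a schematic functional linear regression relationship of the kind referred to above, written here for the case of a functional predictor X and a response trajectory Y; the notation (mean functions and regression kernel) is assumed for illustration and is not taken from the paper.

    ```latex
    E\{Y(t)\mid X\} \;=\; \mu_Y(t) \;+\; \int \beta(s,t)\,\bigl(X(s)-\mu_X(s)\bigr)\,ds
    ```

    The response-adaptive approach targets such a conditional mean directly from the sparse, irregular response observations, rather than first reconstructing each response trajectory.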

  • Empirical dynamics for Longitudinal Data
    The Annals of Statistics, 2010
    Co-Authors: Hans-Georg Müller, Fang Yao
    Abstract:

    We demonstrate that the processes underlying on-line auction price bids and many other longitudinal data can be represented by an empirical first-order stochastic ordinary differential equation with time-varying coefficients and a smooth drift process. This equation may be empirically obtained from longitudinal observations for a sample of subjects and does not presuppose specific knowledge of the underlying processes. For the nonparametric estimation of the components of the differential equation, it suffices to have available sparsely observed longitudinal measurements which may be noisy and are generated by underlying smooth random trajectories for each subject or experimental unit in the sample. The drift process that drives the equation determines how closely individual process trajectories follow a deterministic approximation of the differential equation. We provide estimates for trajectories and especially the variance function of the drift process. At each fixed time point, the proposed empirical dynamic model implies a decomposition of the derivative of the process underlying the longitudinal data into a component explained by a varying-coefficient dynamic equation and an orthogonal complement that corresponds to the drift process. An enhanced perturbation result enables us to obtain improved asymptotic convergence rates for eigenfunction derivative estimation and consistency for the varying coefficient function and the components of the drift process. We illustrate the differential equation with an application to the dynamics of on-line auction data.
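
    As a schematic form of the dynamic equation described above (notation assumed here for illustration: X the underlying process, μ its mean function, β the varying coefficient function, Z the drift process):

    ```latex
    X'(t) - \mu'(t) \;=\; \beta(t)\,\bigl(X(t) - \mu(t)\bigr) \;+\; Z(t),
    \qquad E\{Z(t)\mid X(t)\} = 0
    ```

    Under this orthogonality, the variance of the centered derivative at each fixed t splits into the part β(t)² var{X(t) − μ(t)} explained by the linear term and the part var{Z(t)} contributed by the drift process, which is the decomposition referred to in the abstract.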

  • Functional Data Analysis for Sparse Longitudinal Data
    Journal of the American Statistical Association, 2005
    Co-Authors: Fang Yao, Hans-Georg Müller, Jane-Ling Wang
    Abstract:

    We propose a nonparametric method to perform functional principal components analysis for the case of sparse longitudinal data. The method aims at irregularly spaced longitudinal data, where the number of repeated measurements available per subject is small. In contrast, classical functional data analysis requires a large number of regularly spaced measurements per subject. We assume that the repeated measurements are located randomly with a random number of repetitions for each subject and are determined by an underlying smooth random (subject-specific) trajectory plus measurement errors. Basic elements of our approach are the parsimonious estimation of the covariance structure and mean function of the trajectories, and the estimation of the variance of the measurement errors. The eigenfunction basis is estimated from the data, and functional principal components score estimates are obtained by a conditioning step. This conditional estimation method is conceptually simple and straightforward to implement...
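
    The conditioning step mentioned above can be written as follows; this is the standard PACE prediction of the k-th functional principal component score for subject i from its sparse, noisy observation vector Y_i, under Gaussian assumptions, with the notation reconstructed here rather than copied from the paper.

    ```latex
    \hat{\xi}_{ik} \;=\; \hat{E}\{\xi_{ik}\mid Y_i\}
    \;=\; \hat{\lambda}_k\,\hat{\phi}_{ik}^{\top}\,\hat{\Sigma}_{Y_i}^{-1}\,(Y_i - \hat{\mu}_i)
    ```

    Here μ̂_i and φ̂_ik collect the estimated mean function and the k-th eigenfunction evaluated at subject i's observation times, λ̂_k is the k-th eigenvalue, and Σ̂_{Y_i} is the estimated covariance matrix of Y_i, with the measurement error variance added on its diagonal.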

Richard H. Jones - One of the best experts on this subject based on the ideXlab platform.

  • Smoothing splines for Longitudinal Data.
    Statistics in medicine, 1995
    Co-Authors: Stewart J. Anderson, Richard H. Jones
    Abstract:

    In a longitudinal data model with fixed and random effects, polynomials are used to model the fixed effects and smoothing polynomial splines are used to model the within-subject random effect curves. The splines are generated by modelling the data for each subject as observations of an integrated random walk with observational error. The initial conditions for each subject's deviation from the fixed effect curve are assumed to have zero mean and an arbitrary covariance matrix, which is estimated by maximum likelihood, producing an empirical Bayes estimate. This is in contrast to modelling a single curve using a diffuse prior. An example is presented using unbalanced longitudinal data from a pilot study in breast cancer patients.
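
    A schematic discrete-time state-space form of an integrated random walk observed with error, of the kind described above, for observation times t_1 < ... < t_n with gaps δ_j = t_{j+1} − t_j, state (level x_j, slope s_j), and variances σ_s², σ_e²; the notation is assumed here for illustration and the fixed-effect polynomial is only indicated.

    ```latex
    \begin{pmatrix} x_{j+1} \\ s_{j+1} \end{pmatrix}
    = \begin{pmatrix} 1 & \delta_j \\ 0 & 1 \end{pmatrix}
      \begin{pmatrix} x_j \\ s_j \end{pmatrix} + \eta_j ,
    \qquad
    \eta_j \sim N\!\left(0,\; \sigma_s^2
      \begin{pmatrix} \delta_j^3/3 & \delta_j^2/2 \\ \delta_j^2/2 & \delta_j \end{pmatrix}\right),
    \qquad
    y_j = \text{(fixed-effect polynomial at } t_j\text{)} + x_j + \varepsilon_j,
    \quad \varepsilon_j \sim N(0, \sigma_e^2)
    ```

    In such a formulation the variance components can be estimated by maximum likelihood, for example via the Kalman filter, which accommodates the unbalanced observation times.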

Claudia Czado - One of the best experts on this subject based on the ideXlab platform.

  • A Mixed Autoregressive Probit Model for Ordinal Longitudinal Data
    Biostatistics, 2010
    Co-Authors: Cristiano Varin, Claudia Czado
    Abstract:

    Longitudinal data with binary and ordinal outcomes routinely appear in medical applications. Existing methods are typically designed to deal with short measurement series. In contrast, modern longitudinal data can result in large numbers of subject-specific serial observations. In this framework, we consider multivariate probit models with random effects to capture heterogeneity and autoregressive terms for describing the serial dependence. Since likelihood inference for the proposed class of models is computationally burdensome because of high-dimensional intractable integrals, a pseudolikelihood approach is followed. The methodology is motivated by the analysis of a large longitudinal study on the determinants of migraine severity.
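
    A schematic latent-variable form consistent with the description above, for subject i at occasion t with ordered categories c = 1, ..., C; the thresholds τ_c, regression coefficients β, random effect b_i, and AR(1) error structure are notation assumed here rather than the authors' exact parameterization.

    ```latex
    Y_{it} = c \;\Longleftrightarrow\; \tau_{c-1} < Y^{*}_{it} \le \tau_c,
    \qquad
    Y^{*}_{it} = x_{it}^{\top}\beta + b_i + \varepsilon_{it},
    \qquad b_i \sim N(0, \sigma_b^2),
    \qquad \varepsilon_{it} = \rho\,\varepsilon_{i,t-1} + \eta_{it}
    ```

    The random effect b_i captures subject-level heterogeneity, while the autoregressive errors describe the serial dependence along each subject's long measurement series.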

Jianqing Fan - One of the best experts on this subject based on the ideXlab platform.

  • Analysis of Longitudinal Data with Semiparametric Estimation of Covariance Function.
    Journal of the American Statistical Association, 2007
    Co-Authors: Jianqing Fan, Tao Huang
    Abstract:

    Improving efficiency for regression coefficients and predicting trajectories of individuals are two important aspects in the analysis of longitudinal data. Both involve estimation of the covariance function. Yet challenges arise in estimating the covariance function of longitudinal data collected at irregular time points. A class of semiparametric models for the covariance function that imposes a parametric correlation structure while allowing a nonparametric variance function is proposed. A kernel estimator for estimating the nonparametric variance function is developed. Two methods for estimating parameters in the correlation structure, a quasi-likelihood approach and a minimum generalized variance method, are proposed. A semiparametric varying coefficient partially linear model for longitudinal data is introduced, and an estimation procedure for model coefficients using a profile weighted least squares approach is proposed. Sampling properties of the proposed estimation procedures are studied, and asy...
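
    The semiparametric covariance structure described above can be written as follows, where σ(·) is the nonparametric (square-root) variance function and ρ(·, ·; θ) a parametric correlation family; the exponential correlation shown is only an illustrative choice of family, not necessarily the one used in the paper.

    ```latex
    \operatorname{Cov}\{Y(s), Y(t)\} \;=\; \sigma(s)\,\sigma(t)\,\rho(s, t; \theta),
    \qquad \text{e.g. } \rho(s, t; \theta) = \exp(-\theta\,|s - t|)
    ```

    Splitting the covariance this way lets the variance be estimated by kernel smoothing while the correlation parameters θ are estimated by quasi-likelihood or minimum generalized variance, as described in the abstract.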

  • An Overview on Nonparametric and Semiparametric Techniques for Longitudinal Data
    Frontiers in Statistics, 2006
    Co-Authors: Jianqing Fan
    Abstract:

    Longitudinal data are often highly unbalanced because data were collected at irregular and possibly subject-specific time points. It is difficult to directly apply traditional multivariate regression techniques to analyze such highly unbalanced data. This has led biostatisticians and statisticians to develop various modeling procedures for longitudinal data. Parametric regression models have been extended to longitudinal data analysis (Diggle et al., 2002). They are very useful for analyzing longitudinal data and for providing a parsimonious description of the relationship between the response variable and its covariates. However, the parametric assumptions likely introduce modeling biases. To relax the assumptions on parametric forms, various nonparametric models have been proposed for longitudinal data analysis. Earlier work on nonparametric regression analysis for longitudinal data is summarized in Müller (1988). Kernel regression was applied to repeated measurements data with continuous re-

  • Two-Step Estimation of Functional Linear Models with Applications to Longitudinal Data
    Department of Statistics UCLA, 1999
    Co-Authors: Jianqing Fan, Jin-Ting Zhang
    Abstract:

    Functional linear models are useful in longitudinal data analysis. They include many classical and recently proposed statistical models for longitudinal data and other functional data. Recently, smoothing spline and kernel methods have been proposed for estimating their coefficient functions nonparametrically, but these methods are either intensive in computation or inefficient in performance. To overcome these drawbacks, in this paper a simple and powerful two-step alternative is proposed. In particular, the implementation of the proposed approach via local polynomial smoothing is discussed. Methods for estimating standard deviations of estimated coefficient functions are also proposed. Some asymptotic results for the local polynomial estimators are established. Two longitudinal data sets, one of which involves time-dependent covariates, are used to demonstrate the proposed approach. Simulation studies show that our two-step approach improves on the kernel method proposed in Hoover et al. (1998) in several aspects, such as accuracy, computation time and visual appeal of the estimators. Key words and phrases: functional linear models, functional ANOVA, local polynomial smoothing, longitudinal data analysis.
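
    A minimal Python sketch of the two-step idea for a varying-coefficient model, assuming a balanced design on a common time grid; the pointwise OLS step and the Gaussian-kernel local linear smoother are simplifications for illustration, not the paper's implementation.

    ```python
    import numpy as np

    def two_step_varying_coef(times, X, Y, bandwidth):
        """Step 1: pointwise OLS of Y(:, j) on X(:, j, :) at each grid point.
        Step 2: local linear smoothing of each raw coefficient curve over time.
        Assumes a common time grid (balanced design); illustration only."""
        n, T, p = X.shape
        raw = np.empty((T, p))
        for j in range(T):                                  # step 1: pointwise OLS
            raw[j], *_ = np.linalg.lstsq(X[:, j, :], Y[:, j], rcond=None)

        def local_linear(t0, z):
            w = np.exp(-0.5 * ((times - t0) / bandwidth) ** 2)   # Gaussian kernel weights
            D = np.column_stack([np.ones(T), times - t0])
            WD = D * w[:, None]
            coef = np.linalg.solve(D.T @ WD, WD.T @ z)
            return coef[0]                                   # local intercept = fitted value

        smooth = np.array([[local_linear(t0, raw[:, k]) for k in range(p)] for t0 in times])
        return raw, smooth

    # toy example: two covariates with time-varying effects
    rng = np.random.default_rng(2)
    n, T, p = 80, 40, 2
    times = np.linspace(0, 1, T)
    X = rng.normal(size=(n, T, p))
    beta = np.column_stack([np.sin(2 * np.pi * times), 1 + times])
    Y = np.einsum('itp,tp->it', X, beta) + 0.5 * rng.normal(size=(n, T))
    raw, smooth = two_step_varying_coef(times, X, Y, bandwidth=0.08)
    ```

    The appeal of the two-step scheme is that the expensive smoothing is applied only to the T raw coefficient estimates rather than embedded in a full functional fit.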

Mark R. Segal - One of the best experts on this subject based on the ideXlab platform.

  • Tree-Structured Methods for Longitudinal Data
    Journal of the American Statistical Association, 1992
    Co-Authors: Mark R. Segal
    Abstract:

    The thrust of tree techniques is the extraction of meaningful subgroups characterized by common covariate values and homogeneous outcome. For longitudinal data, this homogeneity can pertain to the mean and/or to the covariance structure. The regression tree methodology is extended to repeated measures and longitudinal data by modifying the split function so as to accommodate multiple responses. Several split functions are developed, based either on deviations around subgroup mean vectors or on two-sample statistics measuring subgroup separation. For the methods to be computationally feasible, it is necessary to devise updating algorithms for the split function. This has been done for some commonly used covariance specifications: independence, compound symmetry, and first-order autoregressive models. Data-analytic issues, such as handling missing values and time-varying covariates and determining appropriate tree size, are discussed. An illustrative example concerning immune function loss in a cohort of...
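
    A minimal Python sketch of a split function of the first kind mentioned above: the reduction in within-node sum of squared deviations around the subgroup mean vector, i.e. a working-independence covariance. The updating algorithms and the compound-symmetry and autoregressive versions discussed in the paper are not shown.

    ```python
    import numpy as np

    def split_improvement(Y, left_mask):
        """Improvement in within-node sum of squared deviations around the node
        mean vector when a parent node of longitudinal responses Y (n x T) is
        split into left/right children defined by left_mask. Illustration only."""
        def ss(Z):
            if Z.shape[0] == 0:
                return 0.0
            return np.sum((Z - Z.mean(axis=0)) ** 2)        # deviations around mean vector

        parent = ss(Y)
        left, right = Y[left_mask], Y[~left_mask]
        return parent - (ss(left) + ss(right))               # larger = better split

    # toy example: split on a binary covariate that shifts the mean response profile
    rng = np.random.default_rng(3)
    n, T = 100, 5
    group = rng.integers(0, 2, size=n).astype(bool)
    Y = rng.normal(size=(n, T)) + 2.0 * group[:, None]
    print(split_improvement(Y, group))                       # large positive improvement
    ```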