The experts below are selected from a list of 413,070 experts worldwide ranked by the ideXlab platform.

Peter C Austin - One of the best experts on this subject based on the ideXlab platform.

  • A review of the use of time-varying covariates in the Fine-Gray subdistribution hazard competing risk regression model
    Statistics in Medicine, 2020
    Co-Authors: Peter C Austin, Aurelien Latouche, Jason P Fine
    Abstract:

    In survival analysis, time-varying covariates are covariates whose value can change during follow-up. Outcomes in medical research are frequently subject to competing risks (events precluding the occurrence of the primary outcome). We review the types of time-varying covariates and highlight the effect of their inclusion in the subdistribution hazard model. External time-varying covariates are external to the subject: they can affect the failure process but are not otherwise involved in the failure mechanism. Internal time-varying covariates are measured on the subject: they can affect the failure process directly and may also be impacted by the failure mechanism. In the absence of competing risks, a consequence of including internal time-varying covariates in the Cox model is that one cannot estimate the survival function or the effect of covariates on the survival function. In the presence of competing risks, the inclusion of internal time-varying covariates in a subdistribution hazard model results in the loss of the ability to estimate the cumulative incidence function (CIF) or the effect of covariates on the CIF. Furthermore, the definition of the risk set for the subdistribution hazard function can make defining internal time-varying covariates difficult or impossible. We conducted a review of the use of time-varying covariates in subdistribution hazard models in articles published in the medical literature in 2015 and in the first 5 months of 2019. Seven percent of the articles published used a time-varying covariate. Several inappropriately described a time-varying covariate as having an association with the risk of the outcome.
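
    For readers less familiar with the CIF discussed above, the following is a minimal numpy sketch of the nonparametric Aalen-Johansen estimator of the CIF in a competing-risks setting without covariates. The simulated times, rates, and event codes are hypothetical, not taken from the paper.

        import numpy as np

        # Event codes: 0 = censored, 1 = event of interest, 2 = competing event.
        rng = np.random.default_rng(1)
        n = 500
        t1 = rng.exponential(2.0, n)            # latent time to the event of interest
        t2 = rng.exponential(3.0, n)            # latent time to the competing event
        c = rng.uniform(0.0, 4.0, n)            # censoring time
        time = np.minimum.reduce([t1, t2, c])
        cause = np.select([c <= np.minimum(t1, t2), t1 <= t2], [0, 1], default=2)

        order = np.argsort(time)
        time, cause = time[order], cause[order]
        n_at_risk = np.arange(n, 0, -1)

        # All-cause Kaplan-Meier survival just before each observed time, S(t-).
        any_event = (cause > 0).astype(float)
        km = np.cumprod(1.0 - any_event / n_at_risk)
        km_left = np.concatenate(([1.0], km[:-1]))

        # Aalen-Johansen CIF for cause 1: cumulative sum of S(t-) * dN_1(t) / Y(t).
        cif1 = np.cumsum(km_left * (cause == 1) / n_at_risk)
        print("CIF of cause 1 at the last observed time:", cif1[-1])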

  • Generating survival times to simulate Cox proportional hazards models with time-varying covariates
    Statistics in Medicine, 2012
    Co-Authors: Peter C Austin
    Abstract:

    Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data-generating process: one must be able to simulate data from a specified statistical model. We describe data-generating processes for the Cox proportional hazards model with time-varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time-varying covariates: first, a dichotomous time-varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time-varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time-varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed-form expressions that allow one to simulate survival times that are related to a vector of fixed or time-invariant covariates and to a single time-varying covariate. We illustrate the utility of our closed-form expressions for simulating event times by using Monte Carlo simulations to estimate the statistical power to detect as statistically significant the effect of different types of binary time-varying covariates. This is compared with the statistical power to detect as statistically significant a binary time-invariant covariate.
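
    For the first setting (a dichotomous covariate that switches at most once from untreated to treated), the inverse-cumulative-hazard construction is especially transparent with an exponential baseline. The sketch below is a minimal illustration under that assumption; the function name and parameter values are hypothetical, and the paper also covers Weibull and Gompertz baselines.

        import numpy as np

        def sim_switch_times(n, lam, beta, t0, rng):
            # Hazard h(t) = lam * exp(beta * z(t)) with z(t) = 0 before t0 and
            # z(t) = 1 afterwards (e.g., organ transplant at time t0).
            # The cumulative hazard is
            #     H(t) = lam * t                                for t <= t0,
            #     H(t) = lam * t0 + lam * exp(beta) * (t - t0)  for t >  t0,
            # so T = H^{-1}(E) for a unit-exponential draw E.
            e = rng.exponential(1.0, n)
            return np.where(e <= lam * t0,
                            e / lam,
                            t0 + (e - lam * t0) / (lam * np.exp(beta)))

        rng = np.random.default_rng(42)
        times = sim_switch_times(100_000, lam=0.2, beta=np.log(2), t0=1.0, rng=rng)
        print("mean simulated event time:", times.mean())

    Subject-specific switch times are handled by passing a vector t0 of length n, since np.where broadcasts elementwise.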

  • Goodness-of-fit diagnostics for the propensity score model when estimating treatment effects using covariate adjustment with the propensity score
    Pharmacoepidemiology and Drug Safety, 2008
    Co-Authors: Peter C Austin
    Abstract:

    The propensity score is defined to be a subject's probability of treatment selection, conditional on observed baseline covariates. Conditional on the propensity score, treated and untreated subjects have similar distributions of observed baseline covariates. In the medical literature, there are three commonly employed propensity-score methods: stratification (subclassification) on the propensity score, matching on the propensity score, and covariate adjustment using the propensity score. Methods have been developed to assess the adequacy of the propensity score model in the context of stratification on the propensity score and propensity-score matching. However, no comparable methods have been developed for covariate adjustment using the propensity score. Inferences about treatment effect made using propensity-score methods are only valid if, conditional on the propensity score, treated and untreated subjects have similar distributions of baseline covariates. We develop both quantitative and qualitative methods to assess the balance in baseline covariates between treated and untreated subjects. The quantitative method employs the weighted conditional standardized difference. This is the conditional difference in the mean of a covariate between treated and untreated subjects, in units of the pooled standard deviation, integrated over the distribution of the propensity score. The qualitative method employs quantile regression models to determine whether, conditional on the propensity score, treated and untreated subjects have similar distributions of continuous covariates. We illustrate our methods using a large dataset of patients discharged from hospital with a diagnosis of a heart attack (acute myocardial infarction). The exposure was receipt of a prescription for a beta-blocker at hospital discharge.
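
    A crude discretized stand-in for the weighted conditional standardized difference replaces integration over the propensity-score distribution with averaging over propensity-score strata. The sketch below uses simulated data and hypothetical names; it illustrates the idea, not the authors' exact estimator.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(7)
        n = 5000
        x = rng.normal(size=n)                          # baseline covariate
        treat = rng.binomial(1, 1.0 / (1.0 + np.exp(-x)))

        # Estimated propensity score from a logistic regression of treatment on x.
        model = LogisticRegression().fit(x.reshape(-1, 1), treat)
        ps = model.predict_proba(x.reshape(-1, 1))[:, 1]

        # Average the within-stratum standardized differences over
        # propensity-score quintiles, weighted by stratum size.
        strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
        wcsd = 0.0
        for s in range(5):
            in_s = strata == s
            x1, x0 = x[in_s & (treat == 1)], x[in_s & (treat == 0)]
            pooled_sd = np.sqrt((x1.var(ddof=1) + x0.var(ddof=1)) / 2.0)
            wcsd += in_s.mean() * abs(x1.mean() - x0.mean()) / pooled_sd
        print("weighted conditional standardized difference (approx.):", wcsd)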

James R. Carpenter - One of the best experts on this subject based on the ideXlab platform.

  • Multiple imputation of covariates by fully conditional specification: accommodating the substantive model
    Statistical Methods in Medical Research, 2015
    Co-Authors: Jonathan W Bartlett, Shaun R Seaman, Ian R White, James R. Carpenter
    Abstract:

    Missing covariate data commonly occur in epidemiological and clinical research, and are often dealt with using multiple imputation. Imputation of partially observed covariates is complicated if the substantive model is non-linear (e.g. Cox proportional hazards model), or contains non-linear (e.g. squared) or interaction terms, and standard software implementations of multiple imputation may impute covariates from models that are incompatible with such substantive models. We show how imputation by fully conditional specification, a popular approach for performing multiple imputation, can be modified so that covariates are imputed from models which are compatible with the substantive model. We investigate through simulation the performance of this proposal, and compare it with existing approaches. Simulation results suggest our proposal gives consistent estimates for a range of common substantive models, including models which contain non-linear covariate effects or interactions, provided data are missing at random and the assumed imputation models are correctly specified and mutually compatible. Stata software implementing the approach is freely available.
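
    The compatibility device at the heart of the approach can be sketched as rejection sampling: a missing covariate value is proposed from a covariate model and accepted with probability proportional to the substantive-model likelihood of the observed outcome, so that imputations are drawn from f(x | y) proportional to f(y | x) f(x). The toy sketch below holds the substantive-model parameters fixed at hypothetical values; a full implementation (for example, the authors' Stata software) redraws them at every iteration.

        import numpy as np

        def impute_compatible(y, b, sigma, draw_x, rng, max_tries=1000):
            # Substantive model: y = b0 + b1*x + b2*x**2 + N(0, sigma**2).
            # Accept a proposal x* with probability f(y | x*) / sup_x f(y | x),
            # which for Gaussian errors is exp(-(y - mu(x*))**2 / (2*sigma**2)).
            for _ in range(max_tries):
                x_star = draw_x(rng)
                mu = b[0] + b[1] * x_star + b[2] * x_star**2
                if rng.uniform() < np.exp(-(y - mu) ** 2 / (2.0 * sigma**2)):
                    return x_star
            return x_star  # fall back to the last proposal

        rng = np.random.default_rng(3)
        b_hat, sigma_hat = (0.0, 1.0, 0.5), 1.0   # hypothetical fitted values
        x_imp = impute_compatible(y=2.3, b=b_hat, sigma=sigma_hat,
                                  draw_x=lambda r: r.normal(0.0, 1.0), rng=rng)
        print("imputed covariate value:", x_imp)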

  • Multiple imputation of covariates by fully conditional specification: accommodating the substantive model
    arXiv: Methodology, 2012
    Co-Authors: Jonathan W Bartlett, Shaun R Seaman, Ian R White, James R. Carpenter
    Abstract:

    Missing covariate data commonly occur in epidemiological and clinical research, and are often dealt with using multiple imputation (MI). Imputation of partially observed covariates is complicated if the substantive model is non-linear (e.g. Cox proportional hazards model), or contains non-linear (e.g. squared) or interaction terms, and standard software implementations of MI may impute covariates from models that are incompatible with such substantive models. We show how imputation by fully conditional specification, a popular approach for performing MI, can be modified so that covariates are imputed from models which are compatible with the substantive model. We investigate through simulation the performance of this proposal, and compare it to existing approaches. Simulation results suggest our proposal gives consistent estimates for a range of common substantive models, including models which contain non-linear covariate effects or interactions, provided data are missing at random and the assumed imputation models are correctly specified and mutually compatible.

Chenchien Wang - One of the best experts on this subject based on the ideXlab platform.

  • Matrix variate logistic regression model with application to EEG data
    Biostatistics, 2013
    Co-Authors: Hung Hung, Chenchien Wang
    Abstract:

    Logistic regression has been widely applied in the field of biomedical research for a long time. In some applications, the covariates of interest have a natural structure, such as that of a matrix, at the time of collection. The rows and columns of the covariate matrix then have certain physical meanings, and they must contain useful information regarding the response. If we simply stack the covariate matrix as a vector and fit a conventional logistic regression model, relevant information can be lost and the problem of inefficiency will arise. Motivated by these considerations, we propose in this paper the matrix variate logistic (MV-logistic) regression model. The advantages of the MV-logistic regression model include the preservation of the inherent matrix structure of covariates and the parsimony of parameters needed. In the EEG Database Data Set, we successfully extract the structural effects of the covariate matrix, and a high classification accuracy is achieved.
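
    The model underlying the paper takes the bilinear form logit P(y = 1 | X) = gamma + u'Xv, so with one coefficient vector held fixed the other enters an ordinary logistic regression. The sketch below alternates those two fits; it is a simple stand-in for the paper's estimation procedure, and all names and simulated data are hypothetical.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def mv_logistic(X, y, n_iter=20, seed=0):
            # Rank-one bilinear logistic fit: logit P(y=1 | X_i) = gamma + u' X_i v,
            # where each X_i is p-by-q. With v fixed, X_i v is an ordinary length-p
            # covariate vector for estimating u, and symmetrically for v.
            n, p, q = X.shape
            v = np.random.default_rng(seed).normal(size=q)
            for _ in range(n_iter):
                v = v / np.linalg.norm(v)   # fix the scale: u carries the magnitude
                fit_u = LogisticRegression(C=1e6, max_iter=1000).fit(X @ v, y)
                u = fit_u.coef_.ravel()
                fit_v = LogisticRegression(C=1e6, max_iter=1000).fit(
                    np.einsum('ipq,p->iq', X, u), y)
                v = fit_v.coef_.ravel()
            v = v / np.linalg.norm(v)
            fit_u = LogisticRegression(C=1e6, max_iter=1000).fit(X @ v, y)
            return fit_u.intercept_[0], fit_u.coef_.ravel(), v

        # Hypothetical simulated data with a true rank-one structure.
        rng = np.random.default_rng(5)
        n, p, q = 800, 4, 6
        X = rng.normal(size=(n, p, q))
        u_true = np.array([1.0, -1.0, 0.5, 0.0])
        v_true = rng.normal(size=q)
        v_true = v_true / (2.0 * np.linalg.norm(v_true))
        eta = np.einsum('ipq,p,q->i', X, u_true, v_true)
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
        gamma, u_hat, v_hat = mv_logistic(X, y)
        print("estimated u (up to scale):", np.round(u_hat, 2))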

  • Matrix variate logistic regression model with application to EEG data
    arXiv: Applications, 2011
    Co-Authors: Hung Hung, Chenchien Wang
    Abstract:

    Logistic regression has been widely applied in the field of biomedical research for a long time. In some applications, covariates of interest have a natural structure, such as that of a matrix, at the time of collection. The rows and columns of the covariate matrix then have certain physical meanings, and they must contain useful information regarding the response. If we simply stack the covariate matrix as a vector and fit the conventional logistic regression model, relevant information can be lost and the problem of inefficiency will arise. Motivated by these considerations, we propose in this paper the matrix variate logistic (MV-logistic) regression model. Advantages of the MV-logistic regression model include the preservation of the inherent matrix structure of covariates and the parsimony of parameters needed. In the EEG Database Data Set, we successfully extract the structural effects of the covariate matrix, and a high classification accuracy is achieved.

D Y Lin - One of the best experts on this subject based on the ideXlab platform.

  • Additive hazards regression with covariate measurement error
    Journal of the American Statistical Association, 2000
    Co-Authors: Michal Kulich, D Y Lin
    Abstract:

    The additive hazards model specifies that the hazard function conditional on a set of covariates is the sum of an arbitrary baseline hazard function and a regression function of the covariates. This article deals with the analysis of this semiparametric regression model with censored failure time data when covariates are subject to measurement error. We assume that the true covariate is measured on a randomly chosen validation set, whereas a surrogate covariate (i.e., an error-prone version of the true covariate) is measured on all study subjects. The surrogate covariate is modeled as a linear function of the true covariate plus a random error. Only moment conditions are imposed on the measurement error distribution. We develop a class of estimating functions for the regression parameters that involve weighted combinations of the contributions from the validation and nonvalidation sets. The optimal weight can be selected by an adaptive procedure. The resulting estimators are consistent and asymptotically normal.
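
    For intuition about the error-free version of the model, the numpy sketch below computes the closed-form Lin-Ying-type additive hazards estimate for a single time-fixed binary covariate on simulated data; the article's weighted estimating functions for mismeasured covariates build on this but are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 4000
        z = rng.binomial(1, 0.5, n).astype(float)
        lam0, beta = 0.5, 0.3                           # h(t | z) = lam0 + beta * z
        t = rng.exponential(1.0 / (lam0 + beta * z))    # event times
        c = rng.exponential(2.0, n)                     # independent censoring
        time, event = np.minimum(t, c), t <= c

        # Closed form for one time-fixed covariate:
        #   beta_hat = b / A, with
        #   b = sum over event times of (z_i - zbar(t_i)),
        #   A = sum_i of the integral of Y_i(t) * (z_i - zbar(t))^2 dt,
        # where zbar(t) is the mean of z over the risk set at time t.
        order = np.argsort(time)
        time, event, z = time[order], event[order], z[order]
        n_at_risk = np.arange(n, 0, -1)
        z_risk_sum = np.cumsum(z[::-1])[::-1]           # sum of z over each risk set
        zbar = z_risk_sum / n_at_risk

        b = np.sum((z - zbar)[event])
        # The risk set is constant between consecutive ordered times, so the
        # integral is piecewise; for binary z, the within-risk-set sum of
        # (z_j - zbar)^2 equals z_risk_sum - n_at_risk * zbar**2.
        dt = np.diff(np.concatenate(([0.0], time)))
        A = np.sum(dt * (z_risk_sum - n_at_risk * zbar**2))
        print("estimated beta:", b / A, " true beta:", beta)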

  • Time-dependent covariates in the Cox proportional hazards regression model
    Annual Review of Public Health, 1999
    Co-Authors: Lloyd D Fisher, D Y Lin
    Abstract:

    The Cox proportional hazards regression model has achieved widespread use in the analysis of time-to-event data with censoring and covariates. The covariates may change their values over time. This article discusses the use of such time-dependent covariates, which offer additional opportunities but must be used with caution. The interrelationships between the outcome and the covariate over time can lead to bias unless they are well understood. The form of a time-dependent covariate is much more complex than that of a fixed (non-time-dependent) covariate in the Cox model: it involves constructing a function of time. Further, the model does not have some of the properties of the fixed-covariate model; it cannot usually be used to predict the survival (time-to-event) curve over time. The estimated probability of an event over time is not related to the hazard function in the usual fashion. An appendix summarizes the mathematics of time-dependent covariates.
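
    In practice, a time-dependent covariate is usually supplied to software in counting-process (start, stop] format, with one row per interval over which the covariate is constant. The sketch below uses the lifelines Python package on hypothetical toy data.

        import pandas as pd
        from lifelines import CoxTimeVaryingFitter

        # One row per interval on which the time-dependent covariate `treated`
        # is constant; `event` is 1 only on the row where follow-up ends in the event.
        rows = [
            # id, start, stop, treated, event
            (1, 0.0, 2.0, 0, 0),    # subject 1 starts treatment at t = 2 ...
            (1, 2.0, 5.0, 1, 1),    # ... and has the event at t = 5
            (2, 0.0, 4.0, 0, 0),    # never treated, censored at t = 4
            (3, 0.0, 1.0, 0, 0),
            (3, 1.0, 6.0, 1, 0),    # treated at t = 1, censored at t = 6
            (4, 0.0, 3.0, 0, 1),    # untreated, event at t = 3
            (5, 0.0, 4.5, 0, 1),
            (6, 0.0, 2.5, 0, 0),
            (6, 2.5, 7.0, 1, 1),
        ]
        df = pd.DataFrame(rows, columns=["id", "start", "stop", "treated", "event"])

        ctv = CoxTimeVaryingFitter()
        ctv.fit(df, id_col="id", start_col="start", stop_col="stop", event_col="event")
        ctv.print_summary()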

Xavier De Luna - One of the best experts on this subject based on the ideXlab platform.

  • Data-driven algorithms for dimension reduction in causal inference
    Computational Statistics & Data Analysis, 2017
    Co-Authors: Emma Persson, Jenny Häggström, Ingeborg Waernbaum, Xavier De Luna
    Abstract:

    In observational studies, the causal effect of a treatment may be confounded with variables that are related to both the treatment and the outcome of interest. In order to identify a causal effect, such studies often rely on the unconfoundedness assumption, i.e., that all confounding variables are observed. The choice of covariates to control for, which is primarily based on subject-matter knowledge, may result in a large covariate vector in the attempt to ensure that unconfoundedness holds. However, including redundant covariates can affect the bias and efficiency of nonparametric causal effect estimators, e.g., due to the curse of dimensionality. Data-driven algorithms for the selection of sufficient covariate subsets are investigated. Under the assumption of unconfoundedness, the algorithms search for minimal subsets of the covariate vector. Based on, for example, the framework of sufficient dimension reduction or on kernel smoothing, the algorithms perform a backward elimination procedure that assesses the significance of each covariate. Their performance is evaluated in simulations, and an application using data from the Swedish Childhood Diabetes Register is also presented.
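
    As a much-simplified, fully parametric stand-in for these model-free procedures, the sketch below runs backward elimination with ordinary least squares significance tests via statsmodels, at each step dropping the least significant covariate. The authors' algorithms instead use sufficient dimension reduction or kernel smoothing, and they target subsets sufficient for confounding control rather than for outcome prediction.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        def backward_eliminate(y, X, alpha=0.1):
            # Repeatedly drop the covariate with the largest p-value above
            # `alpha` in an OLS fit of y on the remaining covariates.
            cols = list(X.columns)
            while cols:
                fit = sm.OLS(y, sm.add_constant(X[cols])).fit()
                pvals = fit.pvalues.drop("const")
                worst = pvals.idxmax()
                if pvals[worst] <= alpha:
                    break
                cols.remove(worst)
            return cols

        # Hypothetical data: x1 and x2 matter for the outcome, x3 is noise.
        rng = np.random.default_rng(11)
        n = 1000
        X = pd.DataFrame(rng.normal(size=(n, 3)), columns=["x1", "x2", "x3"])
        y = 1.5 * X["x1"] - 0.8 * X["x2"] + rng.normal(size=n)
        print("selected covariates:", backward_eliminate(y, X))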

  • CovSel: An R Package for Covariate Selection When Estimating Average Causal Effects
    Journal of Statistical Software, 2015
    Co-Authors: Jenny Häggström, Emma Persson, Ingeborg Waernbaum, Xavier De Luna
    Abstract:

    We describe the R package CovSel, which reduces the dimension of the covariate vector for the purpose of estimating an average causal effect under the unconfoundedness assumption. Covariate selection algorithms developed in De Luna, Waernbaum, and Richardson (2011) are implemented using model-free backward elimination. We show how to use the package to select minimal sets of covariates. The package can be used with continuous and discrete covariates, and the user can choose between marginal coordinate hypothesis tests and kernel-based smoothing as model-free dimension reduction techniques.