Covariates
Peter C Austin – One of the best experts on this subject based on the ideXlab platform.

A review of the use of time-varying covariates in the Fine-Gray subdistribution hazard competing risk regression model
Statistics in Medicine, 2020
Co-authors: Peter C Austin, Aurelien Latouche, Jason P Fine
Abstract: In survival analysis, time-varying covariates are covariates whose value can change during follow-up. Outcomes in medical research are frequently subject to competing risks (events precluding the occurrence of the primary outcome). We review the types of time-varying covariates and highlight the effect of their inclusion in the subdistribution hazard model. External time-dependent covariates are external to the subject; they can affect the failure process but are not otherwise involved in the failure mechanism. Internal time-varying covariates are measured on the subject, can affect the failure process directly, and may also be impacted by the failure mechanism. In the absence of competing risks, a consequence of including internal time-dependent covariates in the Cox model is that one cannot estimate the survival function or the effect of covariates on the survival function. In the presence of competing risks, the inclusion of internal time-varying covariates in a subdistribution hazard model results in the loss of the ability to estimate the cumulative incidence function (CIF) or the effect of covariates on the CIF. Furthermore, the definition of the risk set for the subdistribution hazard function can make defining internal time-varying covariates difficult or impossible. We conducted a review of the use of time-varying covariates in subdistribution hazard models in articles published in the medical literature in 2015 and in the first 5 months of 2019. Seven percent of the articles included a time-varying covariate. Several inappropriately described a time-varying covariate as having an association with the risk of the outcome.
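
The risk-set subtlety the abstract points to can be made concrete with a small sketch (toy data and function names are ours, not the authors'): in the Fine-Gray subdistribution risk set, subjects who have already experienced the competing event remain "at risk" after their event time, which is exactly what makes internal time-varying covariates hard to define for them.

```python
# Sketch: cause-specific vs. subdistribution (Fine-Gray) risk sets on toy data.
def cause_specific_risk_set(times, events, t):
    # Subjects still event-free and uncensored just before time t.
    return [i for i, ti in enumerate(times) if ti >= t]

def subdistribution_risk_set(times, events, t):
    # Fine-Gray: subjects who already had the competing event (code 2)
    # remain in the risk set after their own event time.
    return [i for i, (ti, ei) in enumerate(zip(times, events))
            if ti >= t or (ti < t and ei == 2)]

times  = [2.0, 3.5, 5.0, 6.0, 8.0]
events = [2,   1,   0,   2,   1]    # 1 = primary, 2 = competing, 0 = censored

print(cause_specific_risk_set(times, events, 4.0))   # [2, 3, 4]
print(subdistribution_risk_set(times, events, 4.0))  # [0, 2, 3, 4]
```

Subject 0 had the competing event at t = 2 but is still counted in the subdistribution risk set at t = 4; asking what value an internal covariate (one measured on the subject) takes for such a subject is the difficulty the review describes.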

Generating survival times to simulate Cox proportional hazards models with time-varying covariates
Statistics in Medicine, 2012
Co-authors: Peter C Austin
Abstract: Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data-generating process: one must be able to simulate data from a specified statistical model. We describe data-generating processes for the Cox proportional hazards model with time-varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time-varying covariates: first, a dichotomous time-varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time-varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time-varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed-form expressions that allow one to simulate survival times so that survival times are related to a vector of fixed or time-invariant covariates and to a single time-varying covariate. We illustrate the utility of our closed-form expressions for simulating event times by using Monte Carlo simulations to estimate the statistical power to detect as statistically significant the effect of different types of binary time-varying covariates. This is compared with the statistical power to detect as statistically significant a binary time-invariant covariate. Copyright © 2012 John Wiley & Sons, Ltd.
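
The first setting above (exponential baseline, dichotomous covariate switching once from untreated to treated) admits a particularly simple closed-form inversion of the cumulative hazard. The sketch below is our illustration of that idea; the parameter names and this specific parameterization are ours, not code from the paper:

```python
import math
import random

def simulate_event_time(lam, beta, t_switch, rng):
    """Draw an event time from h(t) = lam * exp(beta * z(t)), where the
    binary covariate z(t) switches from 0 to 1 at t_switch (e.g., organ
    transplant). The cumulative hazard is piecewise linear, so it can be
    inverted in closed form."""
    target = -math.log(rng.random())        # Exp(1) draw on the cumulative-hazard scale
    if target < lam * t_switch:             # event occurs before the switch
        return target / lam
    # after the switch the hazard rate is lam * exp(beta)
    return t_switch + (target - lam * t_switch) / (lam * math.exp(beta))

rng = random.Random(1)
times = [simulate_event_time(0.1, math.log(2.0), 5.0, rng) for _ in range(20000)]
```

With lam = 0.1, a hazard ratio of 2 (beta = log 2), and a switch at t = 5, the implied mean event time is (1 - e^{-0.5})/0.1 + e^{-0.5}/0.2 ≈ 6.97, which the Monte Carlo average recovers.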

Goodness-of-fit diagnostics for the propensity score model when estimating treatment effects using covariate adjustment with the propensity score
Pharmacoepidemiology and Drug Safety, 2008
Co-authors: Peter C Austin
Abstract: The propensity score is defined to be a subject's probability of treatment selection, conditional on observed baseline covariates. Conditional on the propensity score, treated and untreated subjects have similar distributions of observed baseline covariates. In the medical literature, there are three commonly employed propensity-score methods: stratification (subclassification) on the propensity score, matching on the propensity score, and covariate adjustment using the propensity score. Methods have been developed to assess the adequacy of the propensity score model in the context of stratification on the propensity score and propensity-score matching. However, no comparable methods have been developed for covariate adjustment using the propensity score. Inferences about treatment effect made using propensity-score methods are only valid if, conditional on the propensity score, treated and untreated subjects have similar distributions of baseline covariates. We develop both quantitative and qualitative methods to assess the balance in baseline covariates between treated and untreated subjects. The quantitative method employs the weighted conditional standardized difference: the conditional difference in the mean of a covariate between treated and untreated subjects, in units of the pooled standard deviation, integrated over the distribution of the propensity score. The qualitative method employs quantile regression models to determine whether, conditional on the propensity score, treated and untreated subjects have similar distributions of continuous covariates. We illustrate our methods using a large dataset of patients discharged from hospital with a diagnosis of a heart attack (acute myocardial infarction). The exposure was receipt of a prescription for a beta-blocker at hospital discharge.
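
The weighted conditional standardized difference can be approximated by discretizing the propensity score into strata (quintiles here) and size-weighting the within-stratum standardized differences. This is our simplified stand-in for the integral described in the abstract; function and parameter names are ours:

```python
import numpy as np

def weighted_conditional_std_diff(x, treated, ps, n_strata=5):
    """Simplified sketch: stratify on the estimated propensity score ps,
    compute the standardized difference of covariate x between treated
    and untreated subjects within each stratum, and average the absolute
    differences weighted by stratum size."""
    edges = np.quantile(ps, np.linspace(0.0, 1.0, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, ps, side="right") - 1, 0, n_strata - 1)
    total, out = len(x), 0.0
    for s in range(n_strata):
        m = strata == s
        x1, x0 = x[m & (treated == 1)], x[m & (treated == 0)]
        if len(x1) < 2 or len(x0) < 2:
            continue                        # skip strata lacking both groups
        pooled_sd = np.sqrt((x1.var(ddof=1) + x0.var(ddof=1)) / 2.0)
        if pooled_sd > 0:
            out += (m.sum() / total) * abs(x1.mean() - x0.mean()) / pooled_sd
    return out
```

A covariate that is balanced conditional on the propensity score yields a value near zero; a confounded covariate yields a large value even when marginal treated/untreated means are compared within strata.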
James R. Carpenter

Multiple imputation of covariates by fully conditional specification accommodating the substantive model
Statistical Methods in Medical Research, 2015
Co-authors: Jonathan W Bartlett, James R. Carpenter, Shaun R Seaman, Ian R White
Abstract: Missing covariate data commonly occur in epidemiological and clinical research, and are often dealt with using multiple imputation. Imputation of partially observed covariates is complicated if the substantive model is nonlinear (e.g. Cox proportional hazards model), or contains nonlinear (e.g. squared) or interaction terms, and standard software implementations of multiple imputation may impute covariates from models that are incompatible with such substantive models. We show how imputation by fully conditional specification, a popular approach for performing multiple imputation, can be modified so that covariates are imputed from models which are compatible with the substantive model. We investigate through simulation the performance of this proposal, and compare it with existing approaches. Simulation results suggest our proposal gives consistent estimates for a range of common substantive models, including models which contain nonlinear covariate effects or interactions, provided data are missing at random and the assumed imputation models are correctly specified and mutually compatible. Stata software implementing the approach is freely available.
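
The compatibility idea can be sketched with rejection sampling: propose the missing covariate from a covariate model, then accept it with probability proportional to the substantive model's likelihood, so the imputation automatically respects, e.g., a quadratic term. This is an illustrative stand-in for the rejection step behind such approaches, not the authors' Stata implementation; all names and the substantive model below are ours:

```python
import numpy as np

rng = np.random.default_rng(42)

def impute_compatible(y, x_mean, x_sd, beta, sigma, rng, max_tries=1000):
    """One rejection-sampling draw of a missing covariate x, compatible
    with the (assumed, illustrative) substantive model
        y = b0 + b1*x + b2*x**2 + N(0, sigma**2)."""
    b0, b1, b2 = beta
    x = x_mean
    for _ in range(max_tries):
        x = rng.normal(x_mean, x_sd)            # proposal from the covariate model
        mu = b0 + b1 * x + b2 * x * x
        # accept with probability = substantive likelihood / its maximum
        # (the Gaussian density is maximal when mu == y)
        if rng.uniform() < np.exp(-0.5 * ((y - mu) / sigma) ** 2):
            return x
    return x                                    # fall back to last proposal
```

Accepted draws concentrate where both the covariate model and the outcome model place mass, which is what "compatible with the substantive model" requires.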

Multiple imputation of covariates by fully conditional specification accommodating the substantive model
arXiv: Methodology, 2012
Co-authors: Jonathan W Bartlett, James R. Carpenter, Shaun R Seaman, Ian R White
Abstract: Missing covariate data commonly occur in epidemiological and clinical research, and are often dealt with using multiple imputation (MI). Imputation of partially observed covariates is complicated if the substantive model is nonlinear (e.g. Cox proportional hazards model), or contains nonlinear (e.g. squared) or interaction terms, and standard software implementations of MI may impute covariates from models that are incompatible with such substantive models. We show how imputation by fully conditional specification, a popular approach for performing MI, can be modified so that covariates are imputed from models which are compatible with the substantive model. We investigate through simulation the performance of this proposal, and compare it to existing approaches. Simulation results suggest our proposal gives consistent estimates for a range of common substantive models, including models which contain nonlinear covariate effects or interactions, provided data are missing at random and the assumed imputation models are correctly specified and mutually compatible.
Chenchien Wang

Matrix variate logistic regression model with application to EEG data
Biostatistics, 2013
Co-authors: Hung Hung, Chenchien Wang
Abstract: Logistic regression has been widely applied in the field of biomedical research for a long time. In some applications, the covariates of interest have a natural structure, such as that of a matrix, at the time of collection. The rows and columns of the covariate matrix then have certain physical meanings, and they must contain useful information regarding the response. If we simply stack the covariate matrix as a vector and fit a conventional logistic regression model, relevant information can be lost, and the problem of inefficiency will arise. Motivated by these reasons, we propose in this paper the matrix variate logistic (MV-logistic) regression model. The advantages of the MV-logistic regression model include the preservation of the inherent matrix structure of covariates and the parsimony of parameters needed. In the EEG Database Data Set, we successfully extract the structural effects of the covariate matrix, and a high classification accuracy is achieved.
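
The parsimony comes from a bilinear linear predictor, logit P(y = 1 | X) = a^T X b, which uses p + q parameters for a p x q covariate matrix instead of the p*q parameters of a vectorized logistic regression. A minimal gradient-descent sketch of this model class (our illustration, not the authors' estimation procedure):

```python
import numpy as np

def fit_mv_logistic(X, y, n_steps=200, lr=0.1, seed=0):
    """Fit logit P(y=1 | X) = a^T X b by plain gradient descent on the
    logistic log-loss. X has shape (n, p, q); returns row effects a (p,)
    and column effects b (q,). A sketch: no line search, no convergence
    check, identified only up to a joint rescaling of a and b."""
    n, p, q = X.shape
    rng = np.random.default_rng(seed)
    a = rng.normal(scale=0.1, size=p)
    b = rng.normal(scale=0.1, size=q)
    for _ in range(n_steps):
        eta = np.einsum("p,npq,q->n", a, X, b)      # bilinear linear predictor
        resid = 1.0 / (1.0 + np.exp(-eta)) - y      # sigmoid(eta) - y
        grad_a = np.einsum("n,npq,q->p", resid, X, b) / n
        grad_b = np.einsum("n,npq,p->q", resid, X, a) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b
```

Because only the product a^T X b is identified, (a, b) and (c*a, b/c) give identical predictions; software for this model typically fixes a normalization.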

Matrix variate logistic regression model with application to EEG data
arXiv: Applications, 2011
Co-authors: Hung Hung, Chenchien Wang
Abstract: Logistic regression has been widely applied in the field of biomedical research for a long time. In some applications, covariates of interest have a natural structure, such as being a matrix, at the time of collection. The rows and columns of the covariate matrix then have certain physical meanings, and they must contain useful information regarding the response. If we simply stack the covariate matrix as a vector and fit the conventional logistic regression model, relevant information can be lost, and the problem of inefficiency will arise. Motivated by these reasons, we propose in this paper the matrix variate logistic (MV-logistic) regression model. Advantages of the MV-logistic regression model include the preservation of the inherent matrix structure of covariates and the parsimony of parameters needed. In the EEG Database Data Set, we successfully extract the structural effects of the covariate matrix, and a high classification accuracy is achieved.
D Y Lin

Additive hazards regression with covariate measurement error
Journal of the American Statistical Association, 2000
Co-authors: Michal Kulich, D Y Lin
Abstract: The additive hazards model specifies that the hazard function conditional on a set of covariates is the sum of an arbitrary baseline hazard function and a regression function of covariates. This article deals with the analysis of this semiparametric regression model with censored failure time data when covariates are subject to measurement error. We assume that the true covariate is measured on a randomly chosen validation set, whereas a surrogate covariate (i.e., an error-prone version of the true covariate) is measured on all study subjects. The surrogate covariate is modeled as a linear function of the true covariate plus a random error. Only moment conditions are imposed on the measurement error distribution. We develop a class of estimating functions for the regression parameters that involve weighted combinations of the contributions from the validation and non-validation sets. The optimal weight can be selected by an adaptive procedure. The resulting estimators are consistent and asymptotic…
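
The validation-set design can be illustrated with a simulation. The paper's weighted estimating functions are more general than this; below we use simple regression calibration only as a stand-in to show why a linear-in-truth surrogate is biased on its own and how a validation subset corrects it (all parameter values are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_val = 5000, 1000
z = rng.normal(2.0, 1.0, size=n)                 # true covariate
w = 0.5 + 1.5 * z + rng.normal(0.0, 0.8, n)      # surrogate: linear in z plus error

# Validation set: true z is observed only for the first n_val subjects.
zv, wv = z[:n_val], w[:n_val]

# Calibrate: regress the true covariate on the surrogate within the
# validation set, then predict z for the non-validation subjects.
b1, b0 = np.polyfit(wv, zv, 1)
z_hat = np.where(np.arange(n) < n_val, z, b0 + b1 * w)
```

Using w directly badly misestimates the covariate's level, while the calibrated values recover it; the article's estimators go further by combining validation and non-validation contributions with an adaptively chosen optimal weight.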

Time-dependent covariates in the Cox proportional hazards regression model
Annual Review of Public Health, 1999
Co-authors: Lloyd D Fisher, D Y Lin
Abstract: The Cox proportional-hazards regression model has achieved widespread use in the analysis of time-to-event data with censoring and covariates. The covariates may change their values over time. This article discusses the use of such time-dependent covariates, which offer additional opportunities but must be used with caution. The interrelationships between the outcome and the variable over time can lead to bias unless the relationships are well understood. The form of a time-dependent covariate is much more complex than in Cox models with fixed (non-time-dependent) covariates. It involves constructing a function of time. Further, the model does not have some of the properties of the fixed-covariate model; it cannot usually be used to predict the survival (time-to-event) curve over time. The estimated probability of an event over time is not related to the hazard function in the usual fashion. An appendix summarizes the mathematics of time-dependent covariates.
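
In practice, "constructing a function of time" usually means splitting each subject's follow-up into (start, stop] intervals at the times the covariate changes, the long/counting-process format that most Cox software expects. A sketch of that standard episode-splitting step (function and argument names are ours, not from the article):

```python
def to_counting_process(follow_up, event, change_times, values, baseline=0):
    """Split one subject's follow-up into (start, stop] rows at the times
    a time-dependent covariate changes value. Returns rows of
    (start, stop, covariate_value, event_indicator); the covariate is
    constant within each row, and the event can occur only in the last."""
    rows, start, z = [], 0.0, baseline
    for t, v in sorted(zip(change_times, values)):
        if t >= follow_up:
            break                           # changes after follow-up are irrelevant
        rows.append((start, t, z, 0))       # no event at an internal boundary
        start, z = t, v
    rows.append((start, float(follow_up), z, event))
    return rows

# Subject followed for 10 time units, has the event, starts treatment at t = 3:
print(to_counting_process(10, 1, [3], [1]))
# -> [(0.0, 3, 0, 0), (3, 10.0, 1, 1)]
```

This construction is purely mechanical; the article's caution concerns what goes into the covariate path, since a covariate affected by the disease process can bias the estimated treatment effect.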
Xavier De Luna

Data-driven algorithms for dimension reduction in causal inference
Computational Statistics & Data Analysis, 2017
Co-authors: Emma Persson, Jenny Häggström, Ingeborg Waernbaum, Xavier De Luna
Abstract: In observational studies, the causal effect of a treatment may be confounded with variables that are related to both the treatment and the outcome of interest. In order to identify a causal effect, such studies often rely on the unconfoundedness assumption, i.e., that all confounding variables are observed. The choice of covariates to control for, which is primarily based on subject-matter knowledge, may result in a large covariate vector in the attempt to ensure that unconfoundedness holds. However, including redundant covariates can affect the bias and efficiency of nonparametric causal effect estimators, e.g., due to the curse of dimensionality. Data-driven algorithms for the selection of sufficient covariate subsets are investigated. Under the assumption of unconfoundedness, the algorithms search for minimal subsets of the covariate vector. Based, e.g., on the framework of sufficient dimension reduction or kernel smoothing, the algorithms perform a backward elimination procedure assessing the significance of each covariate. Their performance is evaluated in simulations, and an application using data from the Swedish Childhood Diabetes Register is also presented.
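
The backward-elimination loop itself is simple; what varies between the algorithms is the significance assessment. The toy sketch below uses a marginal correlation threshold purely as a stand-in for those tests (it is not the paper's sufficient-dimension-reduction or kernel-smoothing machinery, and all names are ours):

```python
import numpy as np

def backward_eliminate(X, target, threshold=0.1):
    """Toy backward elimination: repeatedly drop the covariate whose
    marginal association with the target is weakest, as long as it falls
    below the threshold. Keeps at least one covariate. The real
    algorithms replace this correlation score with a formal test."""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        scores = [abs(np.corrcoef(X[:, j], target)[0, 1]) for j in keep]
        j_min = int(np.argmin(scores))
        if scores[j_min] >= threshold:      # weakest covariate is still relevant
            break
        keep.pop(j_min)                     # drop it and re-assess the rest
    return keep
```

Run against the treatment and the outcome in turn, such a procedure yields candidate minimal adjustment sets under unconfoundedness.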

CovSel: An R Package for Covariate Selection When Estimating Average Causal Effects
Journal of Statistical Software, 2015
Co-authors: Jenny Häggström, Emma Persson, Ingeborg Waernbaum, Xavier De Luna
Abstract: We describe the R package CovSel, which reduces the dimension of the covariate vector for the purpose of estimating an average causal effect under the unconfoundedness assumption. Covariate selection algorithms developed in De Luna, Waernbaum, and Richardson (2011) are implemented using model-free backward elimination. We show how to use the package to select minimal sets of covariates. The package can be used with continuous and discrete covariates, and the user can choose between marginal coordinate hypothesis tests and kernel-based smoothing as model-free dimension reduction techniques.