Inverse Probability

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 35,574 Experts worldwide, ranked by the ideXlab platform

Miguel A. Hernán - One of the best experts on this subject based on the ideXlab platform.

  • Ensemble Learning of Inverse Probability Weights for Marginal Structural Modeling in Large Observational Datasets
    Statistics in Medicine, 2015
    Co-Authors: Susan Gruber, Miguel A. Hernán, Roger Logan, Inmaculada Jarrin, Susana Monge
    Abstract:

    Inverse Probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of Inverse Probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. Copyright © 2014 John Wiley & Sons, Ltd.
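
The stabilized weights the paper describes divide the marginal treatment probability by the conditional one. A minimal numpy sketch of the construction (using simulated data and the true propensity in the denominator for self-containedness; in practice that term is fitted by logistic regression or an ensemble learner):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
L = rng.normal(size=n)                        # a measured baseline covariate
p_treat = 1 / (1 + np.exp(-(0.5 + L)))        # true P(A=1 | L), known here by construction
A = rng.binomial(1, p_treat)

# Stabilized weight: marginal treatment probability divided by the
# conditional one. The true propensity stands in for the fitted model.
p_marg = A.mean()
sw = np.where(A == 1, p_marg / p_treat, (1 - p_marg) / (1 - p_treat))

print(abs(sw.mean() - 1) < 0.05)  # stabilized weights average to roughly 1
```

A mean far from 1 is a standard diagnostic that the weight model is misspecified.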

  • Constructing Inverse Probability Weights for Marginal Structural Models
    American Journal of Epidemiology, 2008
    Co-Authors: Stephen R. Cole, Miguel A. Hernán
    Abstract:

    The method of Inverse Probability weighting (henceforth, weighting) can be used to adjust for measured confounding and selection bias under the four assumptions of consistency, exchangeability, positivity, and no misspecification of the model used to estimate weights. In recent years, several published estimates of the effect of time-varying exposures have been based on weighted estimation of the parameters of marginal structural models because, unlike standard statistical methods, weighting can appropriately adjust for measured time-varying confounders affected by prior exposure. As an example, the authors describe the last three assumptions using the change in viral load due to initiation of antiretroviral therapy among 918 human immunodeficiency virus-infected US men and women followed for a median of 5.8 years between 1996 and 2005. The authors describe possible tradeoffs that an epidemiologist may encounter when attempting to make inferences. For instance, a tradeoff between bias and precision is illustrated as a function of the extent to which confounding is controlled. Weight truncation is presented as an informal and easily implemented method to deal with these tradeoffs. Inverse Probability weighting provides a powerful methodological tool that may uncover causal effects of exposures that are otherwise obscured. However, as with all methods, diagnostics and sensitivity analyses are essential for proper use.
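
Weight truncation, the informal bias/variance device the authors present, can be sketched in a few lines (hypothetical lognormal weights; the 1st/99th percentile cutoffs are one common but arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical estimated weights with a heavy right tail, as happens when
# some subjects have small estimated treatment probabilities.
w = rng.lognormal(mean=0.0, sigma=1.5, size=5_000)

# Truncate (winsorize) at chosen percentiles: a little bias for much less variance.
lo, hi = np.percentile(w, [1, 99])
w_trunc = np.clip(w, lo, hi)

print(w_trunc.var() < w.var())  # truncation shrinks the weight variance
```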

  • Comparison of dynamic treatment regimes via Inverse Probability weighting.
    Basic Clinical Pharmacology Toxicology, 2006
    Co-Authors: Miguel A. Hernán, Emilie Lanoy, Dominique Costagliola, James M. Robins
    Abstract:

    Appropriate analysis of observational data is our best chance to obtain answers to many questions that involve dynamic treatment regimes. This paper describes a simple method to compare dynamic treatment regimes by artificially censoring subjects and then using Inverse Probability weighting (IPW) to adjust for any selection bias introduced by the artificial censoring. The basic strategy can be summarized in four steps: 1) define two regimes of interest, 2) artificially censor individuals when they stop following one of the regimes of interest, 3) estimate Inverse Probability weights to adjust for the potential selection bias introduced by censoring in the previous step, 4) compare the survival of the uncensored individuals under each regime of interest by fitting an Inverse Probability weighted Cox proportional hazards model with the dichotomous regime indicator and the baseline confounders as covariates. In the absence of model misspecification, the method is valid provided data are available on all time-varying and baseline joint predictors of survival and regime discontinuation. We present an application of the method to compare the AIDS-free survival under two dynamic treatment regimes in a large prospective study of HIV-infected patients. The paper concludes by discussing the relative advantages and disadvantages of censoring/IPW versus g-estimation of nested structural models to compare dynamic regimes.
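
The four steps above can be sketched with simulated longitudinal data (the per-visit continuation probability is known by construction here, whereas step 3 would normally estimate it from time-varying covariates):

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 1_000, 5
# Hypothetical data: A[i, t] = 1 if subject i follows the "always treat"
# regime at visit t, with a 0.8 continuation probability per visit.
A = rng.binomial(1, 0.8, size=(n, T))

# Steps 1-2: artificially censor a subject at the first regime deviation.
first_dev = np.where(A == 0, np.arange(T), T).min(axis=1)
uncensored = first_dev == T

# Step 3: inverse probability of remaining uncensored through all T visits.
w = 1 / 0.8 ** T

# Step 4 would fit an IP-weighted Cox model; here we only check that the
# weighted uncensored subset stands in for the full cohort.
print(abs(uncensored.sum() * w - n) / n < 0.15)
```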

  • Adjusted survival curves with Inverse Probability weights.
    Computer Methods and Programs in Biomedicine, 2004
    Co-Authors: Stephen R. Cole, Miguel A. Hernán
    Abstract:

    Kaplan-Meier survival curves and the associated nonparametric log rank test statistic are methods of choice for unadjusted survival analyses, while the semiparametric Cox proportional hazards regression model is used ubiquitously as a method for covariate adjustment. The Cox model extends naturally to include covariates, but there is no generally accepted method to graphically depict adjusted survival curves. The authors describe a method and provide a simple worked example using Inverse Probability weights (IPW) to create adjusted survival curves. When the weights are non-parametrically estimated, this method is equivalent to direct standardization of the survival curves to the combined study population.
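
A compact sketch of the weighted Kaplan-Meier estimator behind such adjusted curves (toy data; with unit weights it reduces to the ordinary product-limit estimator):

```python
import numpy as np

def weighted_km(time, event, w):
    """Kaplan-Meier curve with per-subject weights (IPW-adjusted survival)."""
    surv, curve = 1.0, {}
    for t in np.unique(time[event == 1]):
        at_risk = w[time >= t].sum()
        deaths = w[(time == t) & (event == 1)].sum()
        surv *= 1 - deaths / at_risk
        curve[float(t)] = surv
    return curve

# Toy data: events at t = 2, 3, 5; censoring at t = 3 and 8.
time = np.array([2.0, 3.0, 3.0, 5.0, 8.0])
event = np.array([1, 1, 0, 1, 0])
curve = weighted_km(time, event, np.ones(5))
print(curve)
```

Replacing `np.ones(5)` with estimated inverse probability weights yields the covariate-adjusted curve; with nonparametric weights this matches direct standardization, as the abstract notes.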

Kate L Lapane - One of the best experts on this subject based on the ideXlab platform.

  • The Choice of Analytical Strategies in Inverse-Probability-of-Treatment–Weighted Analysis: A Simulation Study
    American Journal of Epidemiology, 2015
    Co-Authors: Shibing Yang, Juan Lu, Charles B Eaton, Spencer E Harpe, Kate L Lapane
    Abstract:

    We sought to explore the impact of intention to treat and complex treatment use assumptions made during weight construction on the validity and precision of estimates derived from Inverse-Probability-of-treatment-weighted analysis. We simulated data assuming a nonexperimental design that attempted to quantify the effect of statin on lowering low-density lipoprotein cholesterol. We created 324 scenarios by varying parameter values (effect size, sample size, adherence level, Probability of treatment initiation, associations between low-density lipoprotein cholesterol and treatment initiation and continuation). Four analytical approaches were used: 1) assuming intention to treat; 2) assuming complex mechanisms of treatment use; 3) assuming a simple mechanism of treatment use; and 4) assuming invariant confounders. With a continuous outcome, estimates assuming intention to treat were biased toward the null when there was a nonnull treatment effect and nonadherence after treatment initiation. For each 1% decrease in the proportion of patients staying on treatment after initiation, the bias in estimated average treatment effect increased by 1%. Inverse-Probability-of-treatment-weighted analyses that took into account the complex mechanisms of treatment use generated approximately unbiased estimates. Studies estimating the actual effect of a time-varying treatment need to consider the complex mechanisms of treatment use during weight construction.
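
The attenuation the abstract quantifies (bias toward the null proportional to nonadherence) is easy to reproduce; a simulation sketch with invented parameter values, not the paper's 324 scenarios:

```python
import numpy as np

rng = np.random.default_rng(3)
n, true_effect, adherence = 200_000, -30.0, 0.7
assigned = rng.binomial(1, 0.5, n)           # hypothetical treatment initiation
stays_on = rng.binomial(1, adherence, n)     # stays on treatment afterwards
received = assigned * stays_on
ldl = 160 + true_effect * received + rng.normal(0, 10, n)

# The intention-to-treat contrast ignores discontinuation, so it is
# attenuated toward the null by roughly the nonadherence proportion.
itt = ldl[assigned == 1].mean() - ldl[assigned == 0].mean()
print(abs(itt / true_effect - adherence) < 0.02)
```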

Stephen R. Cole - One of the best experts on this subject based on the ideXlab platform.

  • Exploring the Subtleties of Inverse Probability Weighting and Marginal Structural Models.
    Epidemiology, 2018
    Co-Authors: Alexander Breskin, Stephen R. Cole, Daniel Westreich
    Abstract:

    Since being introduced to epidemiology in 2000, marginal structural models have become a commonly used method for causal inference in a wide range of epidemiologic settings. In this brief report, we aim to explore three subtleties of marginal structural models. First, we distinguish marginal structural models from the Inverse Probability weighting estimator, and we emphasize that marginal structural models are not only for longitudinal exposures. Second, we explore the meaning of the word "marginal" in "marginal structural model." Finally, we show that the specification of a marginal structural model can have important implications for the interpretation of its parameters. Each of these concepts has important implications for the use and understanding of marginal structural models, and thus providing detailed explanations of them may lead to better practices for the field of epidemiology.
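
To illustrate the distinction between the model and the estimator: below, a saturated point-exposure marginal structural model E[Y^a] = b0 + b1·a is fitted with the IPW estimator. The single binary confounder and known treatment probabilities are assumptions of this sketch, not of the paper:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 100_000
L = rng.binomial(1, 0.5, n)                          # binary confounder
pA = np.where(L == 1, 0.8, 0.2)                      # true P(A=1 | L)
A = rng.binomial(1, pA)
Y = 1.0 * A + 2.0 * L + rng.normal(size=n)           # true causal effect of A is 1

# IPW estimator of the MSM: weight each subject by 1 / P(A = observed a | L),
# then regress Y on A alone in the resulting pseudo-population.
w = 1 / np.where(A == 1, pA, 1 - pA)
X = np.column_stack([np.ones(n), A])
b = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * Y))
print(abs(b[1] - 1.0) < 0.05)  # b1 recovers the causal effect
```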

  • Inverse Probability weighted estimation for monotone and nonmonotone missing data
    American Journal of Epidemiology, 2018
    Co-Authors: Baoluo Sun, Stephen R. Cole, Neil J Perkins, Ofer Harel, Emily M Mitchell, Enrique F Schisterman, Eric J. Tchetgen Tchetgen
    Abstract:

    Missing data is a common occurrence in epidemiologic research. In this paper, 3 data sets with induced missing values from the Collaborative Perinatal Project, a multisite US study conducted from 1959 to 1974, are provided as examples of prototypical epidemiologic studies with missing data. Our goal was to estimate the association of maternal smoking behavior with spontaneous abortion while adjusting for numerous confounders. At the same time, we did not necessarily wish to evaluate the joint distribution among potentially unobserved covariates, which is seldom the subject of substantive scientific interest. The Inverse Probability weighting (IPW) approach preserves the semiparametric structure of the underlying model of substantive interest and clearly separates the model of substantive interest from the model used to account for the missing data. However, IPW often will not result in valid inference if the missing-data pattern is nonmonotone, even if the data are missing at random. We describe a recently proposed approach to modeling nonmonotone missing-data mechanisms under missingness at random to use in constructing the weights in IPW complete-case estimation, and we illustrate the approach using 3 data sets described in a companion article (Am J Epidemiol. 2018;187(3):568-575).
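
IPW complete-case estimation as described can be sketched for a single fully observed covariate and an outcome missing at random (illustrative probabilities; the true observation model is used in place of a fitted one):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20_000
x = rng.binomial(1, 0.5, n)                   # fully observed covariate
y = rng.binomial(1, np.where(x == 1, 0.3, 0.1))
# Outcome missing at random, with missingness depending on x.
p_obs = np.where(x == 1, 0.9, 0.5)
observed = rng.binomial(1, p_obs).astype(bool)

naive = y[observed].mean()                    # complete-case mean, biased
w = 1 / p_obs[observed]                       # inverse probability of observation
ipw = np.average(y[observed], weights=w)      # IPW complete-case estimate

truth = y.mean()
print(abs(ipw - truth) < abs(naive - truth))  # weighting removes the bias
```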

  • Generalizing Evidence from Randomized Trials using Inverse Probability of Sampling Weights
    Journal of the Royal Statistical Society: Series A (Statistics in Society), 2018
    Co-Authors: Ashley L. Buchanan, Stephen R. Cole, Michael G. Hudgens, Katie R. Mollan, Paul E. Sax, Eric S. Daar, Adaora A. Adimora, Joseph J. Eron, Michael J. Mugavero
    Abstract:

    Results obtained in randomized trials may not easily generalize to target populations. Whereas in randomized trials the treatment assignment mechanism is known, the sampling mechanism by which individuals are selected to participate in the trial is typically not known and assuming random sampling from the target population is often dubious. We consider an Inverse Probability of sampling weighted (IPSW) estimator for generalizing trial results to a target population. The IPSW estimator is shown to be consistent and asymptotically normal. A consistent sandwich‐type variance estimator is derived and simulation results are presented comparing the IPSW estimator with a previously proposed stratified estimator. The methods are then utilized to generalize results from two randomized trials of human immunodeficiency virus treatment to all people living with the disease in the USA.
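
The IPSW idea can be sketched with one binary effect modifier whose stratum the trial oversamples (invented sampling probabilities and stratum effects, with the sampling mechanism known rather than modeled):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100_000
x = rng.binomial(1, 0.5, N)                  # effect modifier in target population
p_sample = np.where(x == 1, 0.10, 0.02)      # trial oversamples the x=1 stratum
in_trial = rng.binomial(1, p_sample).astype(bool)

# Stratum-specific treatment effects; the target-population average is 2.5,
# but the trial-only average is tilted toward the oversampled stratum.
effect = np.where(x == 1, 4.0, 1.0)
trial_ate = effect[in_trial].mean()

# IPSW: weight trial participants by 1 / P(sampled | x).
w = 1 / p_sample[in_trial]
ipsw_ate = np.average(effect[in_trial], weights=w)

print(abs(ipsw_ate - 2.5) < 0.1)  # reweighting recovers the target-population effect
```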

  • Limitation of Inverse Probability-of-Censoring Weights in Estimating Survival in the Presence of Strong Selection Bias
    American Journal of Epidemiology, 2011
    Co-Authors: Chanelle J. Howe, Stephen R. Cole, Joan S. Chmiel, Alvaro Muñoz
    Abstract:

    In time-to-event analyses, artificial censoring with correction for induced selection bias using Inverse Probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse Probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, Inverse Probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984-2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed.
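
One way to see the failure mode under strong selection is through the weights themselves. Kish's effective sample size, a common weight diagnostic (not taken from this paper), collapses when some remaining-uncensored probabilities approach zero:

```python
import numpy as np

rng = np.random.default_rng(6)
# Probabilities of remaining uncensored under weak vs. strong selection.
p_weak = rng.uniform(0.5, 0.9, 1_000)
p_strong = rng.uniform(0.01, 0.9, 1_000)

def effective_sample_size(w):
    """Kish effective sample size; small values flag unstable weights."""
    return w.sum() ** 2 / (w ** 2).sum()

ess_weak = effective_sample_size(1 / p_weak)
ess_strong = effective_sample_size(1 / p_strong)
print(ess_strong < ess_weak)  # strong selection sharply reduces the ESS
```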

James M. Robins - One of the best experts on this subject based on the ideXlab platform.

  • Inverse Probability Weighting in Survival Analysis
    Wiley StatsRef: Statistics Reference Online, 2014
    Co-Authors: Andrea Rotnitzky, James M. Robins
    Abstract:

    Survival studies usually incorporate high-dimensional covariate data, but interest may focus on low-dimensional characteristics of the survival distribution. Augmented Inverse Probability weighted (AIPW) estimation provides a robust approach, largely insensitive to model misspecification. The method was originally introduced to deal with coarsened (incompletely observed) data, and is discussed here with and without the assumption of random coarsening. Keywords: coarsening; missing data; dimensionality; Cox proportional hazards model; marginal survival function; competing risks; robustness; model misspecification
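
The AIPW construction, shown here for the simpler problem of estimating a mean under missing data rather than a survival quantity (simulated data, true observation probabilities, and a deliberately misspecified outcome model to show the robustness to that misspecification):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
x = rng.normal(size=n)
y = 2 + x + rng.normal(size=n)               # full-data outcome, E[Y] = 2
p = 1 / (1 + np.exp(-(0.5 + x)))             # true P(observed | x)
r = rng.binomial(1, p)

# AIPW estimate of E[Y]: the IPW term augmented by an outcome model m(x).
# m is misspecified (a constant), yet with a correct observation model the
# estimator remains consistent.
m = np.full(n, 1.5)
aipw = np.mean(r * y / p - (r / p - 1) * m)
print(abs(aipw - 2.0) < 0.05)
```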

  • A Cautionary Note on Specification of the Correlation Structure in Inverse-Probability-Weighted Estimation for Repeated Measures
    2012
    Co-Authors: Eric J. Tchetgen Tchetgen, M. Maria Glymour, Jennifer Weuve, James M. Robins
    Abstract:

    In studies of repeated outcomes, it is customary to account for dependence in the outcomes of a given individual by incorporating a working correlation structure for the individual’s outcomes in generalized estimating equations. Inverse-Probability weighting is also a common approach used for causal inference and missing or censored data problems in epidemiology. In the absence of Inverse-Probability weights, it is well known that generalized estimating equations consistently estimate the parameters of a correctly specified regression model, irrespective of whether or not the working correlation structure is correct. In this commentary, we show that the situation is quite different when weights are present, and that regression estimates obtained from generalized estimating equations that are Inverse-Probability-weighted can be biased, even when the correlation structure is correct. Specifically, we show that weighted generalized estimating equations as implemented in Proc GENMOD in SAS can produce biased regression estimates even when modeling bias is absent. We discuss possible strategies to avoid this potential bias and illustrate this phenomenon in an epidemiologic application.
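
One strategy for avoiding the bias described above is to use an independence working correlation, under which the weighted estimating equations reduce to weighted least squares on the stacked observations. A minimal sketch with hypothetical weights (independent of the data, so unbiasedness holds by construction):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 20_000
x = rng.normal(size=n)
y = 1 + 2 * x + rng.normal(size=n)           # stacked repeated measures
w = rng.uniform(0.5, 2.0, n)                 # hypothetical IP weights

# Independence working correlation: the weighted estimating equations
# become weighted least squares, solved via the weighted normal equations.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
print(np.all(np.abs(beta - [1.0, 2.0]) < 0.05))
```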

  • Comment: Performance of Double-Robust Estimators When “Inverse Probability” Weights Are Highly Variable
    Statistical Science, 2007
    Co-Authors: James M. Robins, Mariela Sued, Quanhong Lei-gomez, Andrea Rotnitzky
    Abstract:

    Comment on "Performance of Double-Robust Estimators When 'Inverse Probability' Weights Are Highly Variable" [arXiv:0804.2958]

  • Inverse Probability Weighting in Survival Analysis
    Encyclopedia of Biostatistics, 2005
    Co-Authors: Andrea Rotnitzky, James M. Robins
    Abstract:

    Survival studies usually incorporate high-dimensional covariate data, but interest may focus on low-dimensional characteristics of the survival distribution. Augmented Inverse Probability weighted (AIPW) estimation provides a robust approach, largely insensitive to model misspecification. The method was originally introduced to deal with coarsened (incompletely observed) data, and is discussed here with and without the assumption of random coarsening. Keywords: coarsening; missing data; dimensionality; Cox proportional hazards model; marginal survival function; competing risks; robustness; model misspecification

Shibing Yang - One of the best experts on this subject based on the ideXlab platform.

  • The Choice of Analytical Strategies in Inverse-Probability-of-Treatment–Weighted Analysis: A Simulation Study
    American Journal of Epidemiology, 2015
    Co-Authors: Shibing Yang, Juan Lu, Charles B Eaton, Spencer E Harpe, Kate L Lapane
    Abstract:

    We sought to explore the impact of intention to treat and complex treatment use assumptions made during weight construction on the validity and precision of estimates derived from Inverse-Probability-of-treatment-weighted analysis. We simulated data assuming a nonexperimental design that attempted to quantify the effect of statin on lowering low-density lipoprotein cholesterol. We created 324 scenarios by varying parameter values (effect size, sample size, adherence level, Probability of treatment initiation, associations between low-density lipoprotein cholesterol and treatment initiation and continuation). Four analytical approaches were used: 1) assuming intention to treat; 2) assuming complex mechanisms of treatment use; 3) assuming a simple mechanism of treatment use; and 4) assuming invariant confounders. With a continuous outcome, estimates assuming intention to treat were biased toward the null when there was a nonnull treatment effect and nonadherence after treatment initiation. For each 1% decrease in the proportion of patients staying on treatment after initiation, the bias in estimated average treatment effect increased by 1%. Inverse-Probability-of-treatment-weighted analyses that took into account the complex mechanisms of treatment use generated approximately unbiased estimates. Studies estimating the actual effect of a time-varying treatment need to consider the complex mechanisms of treatment use during weight construction.
