Counterfactual Outcome

The Experts below are selected from a list of 4200 Experts worldwide ranked by the ideXlab platform

Jeffrey A Smith - One of the best experts on this subject based on the ideXlab platform.

  • The Pre-Programme Earnings Dip and the Determinants of Participation in a Social Programme: Implications for Simple Programme Evaluation Strategies
    The Economic Journal, 1999
    Co-Authors: James J Heckman, Jeffrey A Smith
    Abstract:

    The key to estimating the impact of a programme is constructing the Counterfactual Outcome representing what would have happened in its absence. This problem becomes more complicated when agents, such as individuals, firms, or local governments, self-select into the programme rather than being exogenously assigned to it. This paper uses data from a major social experiment to identify what would have happened to the earnings of self-selected participants in a job training programme had they not participated in it. We investigate the implications of these earnings patterns for the validity of widely used before-after and difference-in-differences estimators.
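
    A minimal sketch of the two simple estimators discussed above, on simulated pre- and post-programme earnings rather than the authors' data (illustrative numbers only):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical pre- and post-programme earnings for participants and for a
    # non-participant comparison group; purely illustrative numbers.
    pre_p, post_p = rng.normal(10_000, 2_000, 500), rng.normal(12_000, 2_000, 500)
    pre_c, post_c = rng.normal(11_000, 2_000, 500), rng.normal(12_500, 2_000, 500)

    # Before-after estimator: attributes the whole earnings change of participants
    # to the programme, and is biased if their earnings would have changed anyway
    # (for example, recovery from a pre-programme earnings dip).
    before_after = post_p.mean() - pre_p.mean()

    # Difference-in-differences: nets out the comparison group's change, assuming
    # both groups would have followed parallel trends without the programme.
    did = (post_p.mean() - pre_p.mean()) - (post_c.mean() - pre_c.mean())

    print(f"before-after: {before_after:.0f}  diff-in-diff: {did:.0f}")
    ```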

  • The Pre-Program Earnings Dip and the Determinants of Participation in a Social Program: Implications for Simple Program Evaluation Strategies
    National Bureau of Economic Research, 1999
    Co-Authors: James J Heckman, Jeffrey A Smith
    Abstract:

    The key to estimating the impact of a program is constructing the Counterfactual Outcome representing what would have happened in its absence. This problem becomes more complicated when agents self-select into the program rather than being exogenously assigned to it. This paper uses data from a major social experiment to identify what would have happened to the earnings of self-selected participants in a job training program had they not participated in it. We investigate the implications of these earnings patterns for the validity of widely used before-after and difference-in-differences estimators. Motivated by the failure of these estimators to produce credible estimates, we investigate the determinants of program participation. We find that labor force status dynamics, rather than earnings or employment dynamics, drive the participation process. Our evidence suggests that training programs often function as a form of job search. Methods that control only for earnings dynamics, like the conventional difference-in-differences estimator, do not adequately capture the underlying differences between participants and non-participants. We use the estimated probabilities of participation in both matching estimators and in a nonparametric, conditional version of the difference-in-differences estimator, producing large reductions in the selection bias in non-experimental estimates of the effect of training on earnings.
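
    As an illustration of the matching idea described above, the sketch below matches participants to non-participants on an estimated participation probability (a generic nearest-neighbour propensity-score match on simulated data, not the authors' implementation):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    n = 2_000

    # Simulated covariates, participation indicator, and post-program earnings.
    X = rng.normal(size=(n, 3))
    d = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1]))))
    y = 10_000 + 2_000 * X[:, 0] + 1_500 * d + rng.normal(0, 1_000, n)

    # 1. Estimate the probability of participation given covariates.
    pscore = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]

    # 2. Match each participant to the nearest non-participant on that score.
    treated, controls = np.where(d == 1)[0], np.where(d == 0)[0]
    nn = NearestNeighbors(n_neighbors=1).fit(pscore[controls].reshape(-1, 1))
    matched = controls[nn.kneighbors(pscore[treated].reshape(-1, 1))[1].ravel()]

    # 3. Average earnings difference between participants and matched non-participants.
    att = (y[treated] - y[matched]).mean()
    print(f"matching estimate of the effect on participants: {att:.0f}")
    ```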

James J Heckman - One of the best experts on this subject based on the ideXlab platform.

  • The Pre-Programme Earnings Dip and the Determinants of Participation in a Social Programme: Implications for Simple Programme Evaluation Strategies
    The Economic Journal, 1999
    Co-Authors: James J Heckman, Jeffrey A Smith
    Abstract:

    The key to estimating the impact of a programme is constructing the Counterfactual Outcome representing what would have happened in its absence. This problem becomes more complicated when agents, such as individuals, firms, or local governments, self-select into the programme rather than being exogenously assigned to it. This paper uses data from a major social experiment to identify what would have happened to the earnings of self-selected participants in a job training programme had they not participated in it. We investigate the implications of these earnings patterns for the validity of widely used before-after and difference-in-differences estimators.

  • The Pre-Program Earnings Dip and the Determinants of Participation in a Social Program: Implications for Simple Program Evaluation Strategies
    National Bureau of Economic Research, 1999
    Co-Authors: James J Heckman, Jeffrey A Smith
    Abstract:

    The key to estimating the impact of a program is constructing the Counterfactual Outcome representing what would have happened in its absence. This problem becomes more complicated when agents self-select into the program rather than being exogenously assigned to it. This paper uses data from a major social experiment to identify what would have happened to the earnings of self-selected participants in a job training program had they not participated in it. We investigate the implications of these earnings patterns for the validity of widely used before-after and difference-in-differences estimators. Motivated by the failure of these estimators to produce credible estimates, we investigate the determinants of program participation. We find that labor force status dynamics, rather than earnings or employment dynamics, drive the participation process. Our evidence suggests that training programs often function as a form of job search. Methods that control only for earnings dynamics, like the conventional difference-in-differences estimator, do not adequately capture the underlying differences between participants and non-participants. We use the estimated probabilities of participation in both matching estimators and in a nonparametric, conditional version of the difference-in-differences estimator, producing large reductions in the selection bias in non-experimental estimates of the effect of training on earnings.
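
    As an illustration of conditioning the difference-in-differences comparison on the estimated participation probability, the sketch below compares before-after earnings changes within propensity-score strata (a simplified stand-in, on simulated data, for the nonparametric conditional estimator described above):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 2_000

    # Simulated covariates, participation, and pre/post-program earnings.
    X = rng.normal(size=(n, 3))
    d = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
    y_pre = 10_000 + 2_000 * X[:, 0] + rng.normal(0, 1_000, n)
    y_post = y_pre + 1_000 + 1_500 * d + rng.normal(0, 1_000, n)
    delta = y_post - y_pre                      # before-after change per person

    # Estimate the participation probability and form score strata.
    pscore = LogisticRegression().fit(X, d).predict_proba(X)[:, 1]
    strata = np.digitize(pscore, np.quantile(pscore, [0.2, 0.4, 0.6, 0.8]))

    # Within each stratum, difference the earnings changes of participants and
    # non-participants; average the stratum estimates over participants.
    estimates, weights = [], []
    for s in np.unique(strata):
        in_s = strata == s
        if d[in_s].min() == d[in_s].max():      # need both groups in the stratum
            continue
        estimates.append(delta[in_s & (d == 1)].mean() - delta[in_s & (d == 0)].mean())
        weights.append(int((in_s & (d == 1)).sum()))

    cond_did = np.average(estimates, weights=weights)
    print(f"conditional diff-in-diff estimate: {cond_did:.0f}")
    ```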

Genevieve Lefebvre - One of the best experts on this subject based on the ideXlab platform.

  • Comparing Two Counterfactual Outcome Approaches in Causal Mediation Analysis of a Multicategorical Exposure: An Application for the Estimation of the Effect of Maternal Intake of Inhaled Corticosteroids Doses on Birthweight
    Statistical Methods in Medical Research, 2020
    Co-Authors: Mariia Samoilenko, Nadia Arrouf, Lucie Blais, Genevieve Lefebvre
    Abstract:

    Although medical research frequently involves an exposure variable with three or more discrete levels, detailed presentations of mediation techniques for dealing with multicategorical (multilevel) exposures are sparse. In this paper, we study two causal mediation approaches applicable to this type of exposure with a continuous mediator and Outcome: the closed-form regression-based approach of Valeri and VanderWeele, and the marginal structural model-based approach of Lange, Vansteelandt, and Bekaert. While multicategorical exposures are explicitly addressed in the literature for the latter approach, this is, to our knowledge, not yet the case for the former. We first illustrate the application of the two aforementioned approaches to assess the dose-response relationship between maternal intake of inhaled corticosteroids and birthweight, where this relationship is potentially mediated by gestational age. More specifically, we provide a precise roadmap for the application of the regression-based approach and of the marginal structural model-based approach on our cohort of pregnancies. Expressions for the natural direct and indirect effects associated with our categorical exposure are provided and, for the regression-based approach, analytic formulas for standard error calculation using the delta method are presented for these effects. Second, a simulation study that mimics our data is presented to add to current knowledge on these causal mediation techniques. Results from this study highlight the importance of assessing the robustness of mediation results obtained from multicategorical exposures, most notably for the least prevalent exposure categories.
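
    As a simplified illustration of the regression-based idea for a multicategorical exposure (continuous mediator and outcome, no exposure-mediator interaction, simulated data; the approach studied in the paper, including interactions and delta-method standard errors, is richer than this sketch):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 5_000

    # Simulated data: exposure a with three levels (0 = reference dose category),
    # a confounder c, a continuous mediator m (e.g. gestational age), and a
    # continuous outcome y (e.g. birthweight). All coefficients are made up.
    a = rng.integers(0, 3, n)
    c = rng.normal(size=n)
    m = 39.0 - 0.4 * (a == 1) - 0.8 * (a == 2) + 0.3 * c + rng.normal(0, 1, n)
    y = 3_400 + 150 * m - 20 * (a == 1) - 60 * (a == 2) + 50 * c + rng.normal(0, 300, n)

    ones = np.ones(n)
    dummies = np.column_stack([a == 1, a == 2]).astype(float)

    # Mediator model: m ~ exposure dummies + c
    beta = np.linalg.lstsq(np.column_stack([ones, dummies, c]), m, rcond=None)[0]
    # Outcome model: y ~ exposure dummies + m + c
    theta = np.linalg.lstsq(np.column_stack([ones, dummies, m, c]), y, rcond=None)[0]

    # With no exposure-mediator interaction, the natural direct effect of level k
    # versus the reference is the outcome-model exposure coefficient, and the
    # natural indirect effect is the mediator-model exposure coefficient times
    # the outcome-model mediator coefficient.
    for k, label in [(1, "level 1 vs 0"), (2, "level 2 vs 0")]:
        nde, nie = theta[k], beta[k] * theta[3]
        print(f"{label}: NDE={nde:.1f}  NIE={nie:.1f}  total={nde + nie:.1f}")
    ```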

Sun Liyang - One of the best experts on this subject based on the ideXlab platform.

  • De-biased Machine Learning in Instrumental Variable Models for Treatment Effects
    2020
    Co-Authors: Singh Rahul, Sun Liyang
    Abstract:

    Instrumental variable identification is a strategy in causal statistics for estimating the Counterfactual effect of treatment $D$ on output $Y$, controlling for covariates $\mathbf{X}$, using observational data. Even in the presence of an unmeasured confounder of $(Y,D)$, the treatment effect on the subpopulation of compliers can nonetheless be identified if an instrumental variable $Z$ is available. We introduce a de-biased machine learning (DML) approach to estimating complier parameters with high-dimensional data. Complier parameters include the local average treatment effect, average complier characteristics, and complier Counterfactual Outcome distributions. In our approach, the de-biasing is itself performed by machine learning, a variant called automatic de-biased machine learning (Auto-DML). We prove our estimator is consistent, asymptotically normal, and semi-parametrically efficient. In experiments, our estimator outperforms state-of-the-art alternatives, and it does not require ad hoc trimming or censoring of a learned propensity score. We use it to estimate the effect of 401(k) participation on the distribution of net financial assets.
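
    A generic doubly robust Wald-type sketch of the local average treatment effect on simulated data is shown below; the Auto-DML estimator described above additionally cross-fits the nuisance functions and learns the de-biasing weights by machine learning, which this sketch omits:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 5_000

    # Simulated data: covariates X, binary instrument z, endogenous treatment d,
    # outcome y. The unmeasured variable u confounds both d and y.
    X = rng.normal(size=(n, 5))
    z = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
    u = rng.normal(size=n)
    d = ((0.8 * z + 0.3 * X[:, 1] + u + rng.normal(size=n)) > 0.8).astype(int)
    y = 2.0 * d + X[:, 0] + u + rng.normal(size=n)

    # Nuisance functions: instrument propensity e(X) and E[.|Z=z, X] regressions.
    e = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]

    def fit_by_z(target):
        m1 = GradientBoostingRegressor().fit(X[z == 1], target[z == 1]).predict(X)
        m0 = GradientBoostingRegressor().fit(X[z == 0], target[z == 0]).predict(X)
        return m0, m1

    y0, y1 = fit_by_z(y)
    d0, d1 = fit_by_z(d)

    # Doubly robust (AIPW-style) numerator and denominator of the Wald ratio;
    # their ratio estimates the local average treatment effect for compliers.
    num = y1 - y0 + z * (y - y1) / e - (1 - z) * (y - y0) / (1 - e)
    den = d1 - d0 + z * (d - d1) / e - (1 - z) * (d - d0) / (1 - e)
    print(f"estimated LATE: {num.mean() / den.mean():.2f}")   # true effect is 2.0
    ```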

  • De-biased Machine Learning in Instrumental Variable Models for Treatment Effects
    2020
    Co-Authors: Singh Rahul, Sun Liyang
    Abstract:

    We introduce a de-biased machine learning (DML) approach to estimating complier parameters with high-dimensional data. Complier parameters include the local average treatment effect, average complier characteristics, and complier Counterfactual Outcome distributions. In our approach, the de-biasing is itself performed by machine learning, a variant called automatic de-biased machine learning (Auto-DML). Because it regularizes the balancing weights, it does not require ad hoc trimming or censoring. We prove our estimator is consistent, asymptotically normal, and semi-parametrically efficient. We use the new approach to estimate the effect of 401(k) participation on the distribution of net financial assets.

Sonja A Swanson - One of the best experts on this subject based on the ideXlab platform.

  • Effect heterogeneity and variable selection for standardizing causal effects to a target population
    European Journal of Epidemiology, 2019
    Co-Authors: Anders Huitfeldt, Sonja A Swanson, Mats J. Stensrud, Etsuji Suzuki
    Abstract:

    The participants in randomized trials and other studies used for causal inference are often not representative of the populations seen by clinical decision-makers. To account for differences between populations, researchers may consider standardizing results to a target population. We discuss several different types of homogeneity conditions that are relevant for standardization: homogeneity of effect measures, homogeneity of Counterfactual Outcome state transition parameters, and homogeneity of Counterfactual distributions. Each of these conditions can be used to show that a particular standardization procedure will result in an unbiased estimate of the effect in the target population, given assumptions about the relevant scientific context. We compare and contrast the homogeneity conditions, in particular their implications for selection of covariates for standardization and their implications for how to compute the standardized causal effect in the target population. While some of the recently developed Counterfactual approaches to generalizability rely upon homogeneity conditions that avoid many of the problems associated with traditional approaches, they often require adjustment for a large (and possibly infeasible) set of covariates.
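
    A minimal sketch of standardizing stratum-specific risks to a target population's covariate distribution (hypothetical numbers; which quantity may validly be transported depends on the homogeneity condition assumed, as discussed above):

    ```python
    # Hypothetical stratum-specific counterfactual risks estimated in a study
    # population, and the stratum distribution of the target population.
    study_risks = {
        # stratum: (risk if treated, risk if untreated)
        "low risk":  (0.05, 0.10),
        "high risk": (0.20, 0.40),
    }
    target_weights = {"low risk": 0.7, "high risk": 0.3}

    # Standardization: weight stratum-specific risks by the target distribution.
    risk_treated = sum(target_weights[s] * study_risks[s][0] for s in study_risks)
    risk_untreated = sum(target_weights[s] * study_risks[s][1] for s in study_risks)

    print(f"standardized risk difference: {risk_treated - risk_untreated:+.3f}")
    print(f"standardized risk ratio:      {risk_treated / risk_untreated:.2f}")
    ```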

  • The Choice of Effect Measure for Binary Outcomes: Introducing Counterfactual Outcome State Transition Parameters
    Epidemiologic Methods, 2018
    Co-Authors: Anders Huitfeldt, Andrew Goldstein, Sonja A Swanson
    Abstract:

    Standard measures of effect, including the risk ratio, the odds ratio, and the risk difference, are associated with a number of well-described shortcomings, and no consensus exists about the conditions under which investigators should choose one effect measure over another. In this paper, we introduce a new framework for reasoning about choice of effect measure by linking two separate versions of the risk ratio to a Counterfactual causal model. In our approach, effects are defined in terms of "Counterfactual Outcome state transition parameters": the proportion of those individuals who would not have been a case by the end of follow-up if untreated but who would have responded to treatment by becoming a case, and the proportion of those individuals who would have become a case by the end of follow-up if untreated but who would have responded to treatment by not becoming a case. Although Counterfactual Outcome state transition parameters are generally not identified from the data without strong monotonicity assumptions, we show that when they stay constant between populations, there are important implications for model specification, meta-analysis, and research generalization.
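
    As a small worked illustration of how the two Counterfactual Outcome state transition parameters link baseline risk to risk under treatment (hypothetical parameter values; as the abstract notes, the parameters themselves are generally not identified without strong monotonicity assumptions):

    ```python
    def risk_under_treatment(p0: float, g: float, h: float) -> float:
        """Risk if treated implied by the baseline (untreated) risk p0 and two
        COST parameters:
          g: proportion of people who would NOT be cases if untreated but who
             would become cases if treated;
          h: proportion of people who WOULD be cases if untreated but who would
             not become cases if treated.
        """
        return (1 - p0) * g + p0 * (1 - h)

    # If g and h are assumed constant across populations, the treated risk in a
    # new population follows from that population's own baseline risk.
    for p0 in (0.05, 0.20, 0.50):
        p1 = risk_under_treatment(p0, g=0.02, h=0.50)
        print(f"baseline risk {p0:.2f} -> treated risk {p1:.3f}, risk ratio {p1 / p0:.2f}")
    ```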