Selection Bias

The experts below are selected from a list of 360 experts worldwide, ranked by the ideXlab platform.

Hunt Allcott - One of the best experts on this subject based on the ideXlab platform.

  • Site selection bias in program evaluation
    Quarterly Journal of Economics, 2015
    Co-Authors: Hunt Allcott
    Abstract:

    "Site selection bias" can occur when the probability that a program is adopted or evaluated is correlated with its impacts. I test for site selection bias in the context of the Opower energy conservation programs, using 111 randomized control trials involving 8.6 million households across the United States. Predictions based on rich microdata from the first 10 replications substantially overstate efficacy in the next 101 sites. Several mechanisms caused this positive selection. For example, utilities in more environmentalist areas are more likely to adopt the program, and their customers are more responsive to the treatment. Also, because utilities initially target treatment at higher-usage consumer subpopulations, efficacy drops as the program is later expanded. The results illustrate how program evaluations can still give systematically biased out-of-sample predictions, even after many replications. JEL Codes: C93, D12, L94, O12, Q41.

  • Site selection bias in program evaluation
    Research Papers in Economics, 2012
    Co-Authors: Hunt Allcott
    Abstract: substantially the same as the 2015 Quarterly Journal of Economics version above.

  • External validity and partner selection bias
    2012
    Co-Authors: Hunt Allcott, Sendhil Mullainathan
    Abstract:

    Program evaluation often involves generalizing internally-valid site-specific estimates to a different population or environment. While there is substantial evidence on the internal validity of non-experimental relative to experimental estimates (e.g., LaLonde 1986), there is little quantitative evidence on the external validity of site-specific estimates, because identical treatments are rarely evaluated in multiple settings. This paper examines a remarkable series of 14 energy conservation field experiments run by a company called OPOWER, involving 550,000 households in different cities across the U.S. Despite the availability of potentially-promising individual-level controls, we show that the unexplained variation in treatment effects across sites is both statistically and economically significant. Furthermore, we show that the electric utilities that partner with OPOWER differ systematically on characteristics that are correlated with the treatment effect, providing evidence of a "partner selection bias" that is analogous to biases caused by individual-level selection into treatment. We augment this result in a different context by showing that partner microfinancial institutions (MFIs) that carry out randomized experiments appear to be selected on observable characteristics from the global pool of MFIs. Finally, we propose a statistical test for parameter heterogeneity at "sub-sites" within a site that provides suggestive evidence on whether site-specific estimates can be generalized.
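
    The abstract's sub-site heterogeneity test is not spelled out here, so as a rough stand-in the sketch below applies the standard Cochran Q statistic to site-specific estimates; the data are hypothetical.

    # Simplified heterogeneity check across site-specific estimates, in the
    # spirit of the paper's test (this is the standard Cochran Q statistic,
    # not the authors' exact procedure).
    import numpy as np
    from scipy import stats

    def cochran_q(estimates, std_errors):
        """Q is chi-squared with k-1 df under effect homogeneity."""
        est = np.asarray(estimates)
        w = 1.0 / np.asarray(std_errors) ** 2
        pooled = np.sum(w * est) / np.sum(w)
        q = np.sum(w * (est - pooled) ** 2)
        return q, stats.chi2.sf(q, df=len(est) - 1)

    # Hypothetical site-level treatment effects and standard errors.
    q, p = cochran_q([0.9, 1.4, 0.3, 2.0, 1.1], [0.2, 0.3, 0.25, 0.4, 0.3])
    print(f"Q = {q:.2f}, p = {p:.4f}")  # small p: reject homogeneity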

Hristos Doucouliagos - One of the best experts on this subject based on the ideXlab platform.

  • Detecting publication selection bias through excess statistical significance
    Research Synthesis Methods, 2021
    Co-Authors: T. D. Stanley, Hristos Doucouliagos, John P. A. Ioannidis, Evan C. Carter
    Abstract:

    We introduce and evaluate three tests for publication selection bias based on excess statistical significance. The proposed tests incorporate heterogeneity explicitly in the formulas for expected and excess statistical significance. We calculate the expected proportion of statistically significant findings in the absence of selective reporting or publication bias based on each study's standard error and meta-analysis estimates of the mean and variance of the true-effect distribution. Comparing the expected to the observed proportion of statistically significant results leads to a simple proportion of statistical significance test (PSST). Alternatively, we propose a direct test of excess statistical significance (TESS). We also combine these two tests of excess statistical significance (TESSPSST). Simulations show that these excess statistical significance tests often outperform the conventional Egger test for publication selection bias and the three-parameter selection model.
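
    A sketch of the expected-significance calculation as we read the abstract (not the authors' code): under a true-effect distribution with mean mu and SD tau, study i's estimate is approximately N(mu, tau^2 + se_i^2), which gives its probability of statistical significance; PSST compares the average of these probabilities with the observed share of significant results.

    import numpy as np
    from scipy import stats

    def expected_significant(mu, tau, se, z=1.96):
        """Expected share of studies with |t| >= z absent selection."""
        se = np.asarray(se)
        sd = np.sqrt(tau**2 + se**2)          # SD of each study's estimate
        p_hi = stats.norm.sf(z * se, loc=mu, scale=sd)    # P(est >  z*se)
        p_lo = stats.norm.cdf(-z * se, loc=mu, scale=sd)  # P(est < -z*se)
        return (p_hi + p_lo).mean()

    # Hypothetical meta-analysis: mu and tau from a random-effects fit.
    se = np.array([0.10, 0.15, 0.20, 0.25, 0.30])
    expected = expected_significant(mu=0.2, tau=0.1, se=se)
    observed = 4 / 5                          # share reporting p < .05
    print(f"expected: {expected:.2f}, observed: {observed:.2f}")
    # Observed far above expected suggests selective reporting; a simple
    # proportion test on this gap is the PSST idea.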

  • Meta-regression approximations to reduce publication selection bias
    Research Synthesis Methods, 2013
    Co-Authors: T. D. Stanley, Hristos Doucouliagos
    Abstract:

    Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, the precision-effect estimate with standard error (PEESE), is shown to have the smallest bias and mean squared error in most cases and to outperform conventional meta-analysis estimators, often by a great deal. Monte Carlo simulations also demonstrate how a new hybrid estimator that conditionally combines PEESE and the Egger regression intercept can provide a practical solution to publication selection bias. PEESE is easily expanded to accommodate systematic heterogeneity along with complex and differential publication selection bias that is related to moderator variables. By providing an intuitive reason for these approximations, we can also explain why the Egger regression works so well and when it does not. These meta-regression methods are applied to several policy-relevant areas of research, including antidepressant effectiveness, the value of a statistical life, the minimum wage, and nicotine replacement therapy.
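
    The quadratic approximation the abstract names can be sketched directly: regress effect sizes on squared standard errors with precision weights, and read the corrected effect off the intercept. The data below are made up.

    import numpy as np
    import statsmodels.api as sm

    effects = np.array([0.42, 0.35, 0.28, 0.15, 0.10, 0.05])
    se      = np.array([0.20, 0.18, 0.14, 0.10, 0.07, 0.05])

    X = sm.add_constant(se**2)                  # effect_i = b0 + b1*se_i^2
    fit = sm.WLS(effects, X, weights=1/se**2).fit()
    print(f"PEESE corrected effect: {fit.params[0]:.3f} "
          f"(naive mean: {effects.mean():.3f})")

    The hybrid estimator the abstract mentions then conditionally combines this PEESE intercept with the Egger regression intercept.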

  • Publication selection bias in minimum wage research: a meta-regression analysis
    British Journal of Industrial Relations, 2009
    Co-Authors: Hristos Doucouliagos, T. D. Stanley
    Abstract:

    Card and Krueger's meta-analysis of the employment effects of minimum wages challenged existing theory. Unfortunately, their meta-analysis confused publication selection with the absence of a genuine empirical effect. We apply recently developed meta-analysis methods to 64 US minimum-wage studies and corroborate that Card and Krueger's findings were nevertheless correct. The minimum-wage effects literature is contaminated by publication selection bias, which we estimate to be slightly larger than the average reported minimum-wage effect. Once this publication selection is corrected, little or no evidence of a negative association between minimum wages and employment remains.
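
    A compact sketch of the FAT-PET meta-regression that this style of analysis builds on (our reading, with invented numbers rather than the 64-study data): regressing each study's t-statistic on its precision separates a publication-selection term from a genuine-effect term.

    import numpy as np
    import statsmodels.api as sm

    effects = np.array([-0.30, -0.22, -0.15, -0.10, -0.05, -0.02])
    se      = np.array([0.15, 0.12, 0.09, 0.06, 0.04, 0.02])

    # t_i = b0 + b1*(1/se_i): b0 != 0 signals publication selection (FAT);
    # b1 is the selection-corrected effect (PET).
    t = effects / se
    fit = sm.OLS(t, sm.add_constant(1.0 / se)).fit()
    b0, b1 = fit.params
    print(f"FAT intercept (selection): {b0:.2f}, PET effect: {b1:.3f}")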

T. D. Stanley - One of the best experts on this subject based on the ideXlab platform.

  • Detecting publication selection bias through excess statistical significance
    Research Synthesis Methods, 2021
    Co-Authors: T. D. Stanley, Hristos Doucouliagos, John P. A. Ioannidis, Evan C. Carter
    Abstract: as given above under Hristos Doucouliagos.

  • Meta-regression approximations to reduce publication selection bias
    Research Synthesis Methods, 2013
    Co-Authors: T. D. Stanley, Hristos Doucouliagos
    Abstract: as given above under Hristos Doucouliagos.

  • Publication selection bias in minimum wage research: a meta-regression analysis
    British Journal of Industrial Relations, 2009
    Co-Authors: Hristos Doucouliagos, T. D. Stanley
    Abstract: as given above under Hristos Doucouliagos.

Chanelle J. Howe - One of the best experts on this subject based on the ideXlab platform.

  • Selection bias due to loss to follow-up in cohort studies
    Epidemiology, 2016
    Co-Authors: Chanelle J. Howe, Stephen R. Cole, Bryan Lau, Sonia Napravnik, Joseph J. Eron
    Abstract:

    Selection bias due to loss to follow-up represents a threat to the internal validity of estimates derived from cohort studies. Over the past 15 years, stratification-based techniques as well as methods such as inverse probability-of-censoring weighted estimation have been more prominently discussed and offered as a means to correct for selection bias. However, unlike correcting for confounding bias using inverse weighting, uptake of inverse probability-of-censoring weighted estimation and competing methods has been limited in the applied epidemiologic literature. To motivate greater use of these methods, we use causal diagrams to describe the sources of selection bias in cohort studies employing a time-to-event framework when the quantity of interest is an absolute measure (e.g., absolute risk, survival function) or a relative effect measure (e.g., risk difference, risk ratio). We highlight that whether a given estimate obtained from standard methods is potentially subject to selection bias depends on the causal diagram and the measure. We first broadly describe inverse probability-of-censoring weighted estimation, then give a simple example to demonstrate in detail how it mitigates selection bias, and describe challenges to estimation. We then modify complex, real-world data from the University of North Carolina Center for AIDS Research HIV clinical cohort study and estimate the absolute and relative change in the occurrence of death with and without inverse probability-of-censoring weighted correction using the modified data. We provide SAS code to aid with implementation of inverse probability-of-censoring weighted techniques.
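
    The paper supplies SAS code; the sketch below is a rough Python analogue of one step of inverse probability-of-censoring weighting, with hypothetical variable names and data.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    df = pd.DataFrame({
        "age":      [34, 41, 29, 50, 45, 38, 27, 60],
        "cd4":      [350, 220, 500, 380, 300, 410, 200, 150],
        "censored": [0, 1, 0, 0, 0, 1, 0, 1],  # lost to follow-up
        "event":    [0, 0, 1, 0, 0, 0, 1, 0],  # death among the observed
    })

    # P(uncensored | covariates): denominator of the stabilized weight.
    X = sm.add_constant(df[["age", "cd4"]])
    denom = sm.Logit(1 - df["censored"], X).fit(disp=0).predict(X)
    num = (1 - df["censored"]).mean()          # marginal P(uncensored)
    df["w"] = np.where(df["censored"] == 0, num / denom, 0.0)

    # The weighted risk among the uncensored stands in for the risk the
    # full cohort would have shown without loss to follow-up.
    obs = df[df["censored"] == 0]
    print(f"IPCW risk: {np.average(obs['event'], weights=obs['w']):.3f}")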

  • Limitation of Inverse Probability-of-Censoring Weights in Estimating Survival in the Presence of Strong Selection Bias
    American Journal of Epidemiology, 2011
    Co-Authors: Chanelle J. Howe, Stephen R. Cole, Joan S. Chmiel, Alvaro Muñoz
    Abstract:

    In time-to-event analyses, artificial censoring with correction for induced selection bias using inverse probability-of-censoring weights can be used to 1) examine the natural history of a disease after effective interventions are widely available, 2) correct bias due to noncompliance with fixed or dynamic treatment regimens, and 3) estimate survival in the presence of competing risks. Artificial censoring entails censoring participants when they meet a predefined study criterion, such as exposure to an intervention, failure to comply, or the occurrence of a competing outcome. Inverse probability-of-censoring weights use measured common predictors of the artificial censoring mechanism and the outcome of interest to determine what the survival experience of the artificially censored participants would be had they never been exposed to the intervention, complied with their treatment regimen, or not developed the competing outcome. Even if all common predictors are appropriately measured and taken into account, in the context of small sample size and strong selection bias, inverse probability-of-censoring weights could fail because of violations in assumptions necessary to correct selection bias. The authors used an example from the Multicenter AIDS Cohort Study, 1984-2008, regarding estimation of long-term acquired immunodeficiency syndrome-free survival to demonstrate the impact of violations in necessary assumptions. Approaches to improve correction methods are discussed.
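
    When selection is strong, failure often shows up in the weights themselves. A routine diagnostic (our illustration; the thresholds are arbitrary, not from the paper) checks that stabilized weights average near 1 and that no single weight dominates.

    import numpy as np

    def check_weights(w, ratio=10.0):
        """Flag weight patterns that suggest positivity violations or
        a misspecified censoring model."""
        w = np.asarray(w)
        if abs(w.mean() - 1.0) > 0.1:
            print(f"warning: mean stabilized weight {w.mean():.2f} far from 1")
        if w.max() > ratio * np.median(w):
            print(f"warning: max weight {w.max():.1f} dominates "
                  f"(median {np.median(w):.2f})")
        return w.mean(), w.max()

    check_weights([0.9, 1.0, 1.1, 0.8, 15.0, 1.2])  # both warnings fire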

Petra E. Todd - One of the best experts on this subject based on the ideXlab platform.

  • Characterizing selection bias using experimental data
    Research Papers in Economics, 1998
    Co-Authors: James J. Heckman, Hidehiko Ichimura, Jeffrey A. Smith, Petra E. Todd
    Abstract:

    This paper develops and applies semiparametric econometric methods to estimate the form of selection bias that arises from using nonexperimental comparison groups to evaluate social programs, and to test the identifying assumptions that justify three widely-used classes of estimators and our extensions of them: (a) the method of matching; (b) the classical econometric selection model, which represents the bias solely as a function of the probability of participation; and (c) the method of difference-in-differences. Using data from an experiment on a prototypical social program combined with unusually rich data from a nonexperimental comparison group, we reject the assumptions justifying matching and our extensions of that method, but find evidence in support of the index-sufficient selection bias model and the assumptions that justify application of a conditional semiparametric version of the method of difference-in-differences. Failure to compare comparable people and to appropriately weight participants and nonparticipants are major sources of selection bias as conventionally measured. We present a rigorous definition of selection bias and find that in our data it is a small component of conventionally measured bias, though it is still substantial when compared with experimentally-estimated program impacts. Matching participants to comparison group members in the same labor market, giving them the same questionnaire, and making sure they have comparable characteristics substantially improves the performance of any econometric program evaluation estimator. We show how to use our analysis to estimate the impact of treatment on the treated using ordinary observational data.
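
    The "conventional measure" the paper dissects can be written down in a few lines. With an experiment, the randomized-out controls identify E[Y0 | D=1], so subtracting the nonexperimental comparison group's mean outcome estimates the bias. The numbers below are invented, not the paper's program data.

    import numpy as np

    # Monthly earnings: randomized-out experimental controls vs. a
    # nonexperimental comparison group drawn from survey data.
    exp_controls = np.array([210., 180., 250., 190., 230.])
    comparison   = np.array([300., 340., 280., 310., 330.])

    # Conventional selection bias: E[Y0 | D=1] - E[Y0 | D=0].
    bias = exp_controls.mean() - comparison.mean()
    print(f"conventional selection-bias estimate: {bias:.1f}")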

  • Characterizing selection bias using experimental data
    Econometrica, 1998
    Co-Authors: James J. Heckman, Hidehiko Ichimura, Jeffrey A. Smith, Petra E. Todd
    Abstract:

    Semiparametric methods are developed to estimate the bias that arises from using nonexperimental comparison groups to evaluate social programs and to test the identifying assumptions that justify matching, selection models, and the method of difference-in-differences. Using data from an experiment on a prototypical social program and data from nonexperimental comparison groups, we reject the assumptions justifying matching and our extensions of it. The evidence supports the selection bias model and the assumptions that justify a semiparametric version of the method of difference-in-differences. We extend our analysis to consider applications of the methods to ordinary observational data.
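
    A bare-bones difference-in-differences sketch (ours; the paper's estimator is a conditional semiparametric version, but the differencing logic is the same, and the earnings numbers are made up):

    import numpy as np

    pre_t, post_t = np.array([100., 120., 90.]),  np.array([160., 170., 150.])
    pre_c, post_c = np.array([110., 130., 100.]), np.array([140., 150., 135.])

    # Differencing removes time-invariant selection bias; what remains is
    # the assumption that the bias is the same in both periods.
    did = (post_t.mean() - pre_t.mean()) - (post_c.mean() - pre_c.mean())
    print(f"difference-in-differences estimate: {did:.1f}")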

  • Sources of selection bias in evaluating social programs: an interpretation of conventional measures and evidence on the effectiveness of matching as a program evaluation method
    Proceedings of the National Academy of Sciences of the United States of America, 1996
    Co-Authors: James J. Heckman, Hidehiko Ichimura, Jeffrey A. Smith, Petra E. Todd
    Abstract:

    This paper decomposes the conventional measure of selection bias in observational studies into three components. The first two components are due to differences in the distributions of characteristics between participant and nonparticipant (comparison) group members: the first arises from differences in the supports, and the second from differences in densities over the region of common support. The third component arises from selection bias precisely defined. Using data from a recent social experiment, we find that the component due to selection bias, precisely defined, is smaller than the first two components. However, selection bias still represents a substantial fraction of the experimental impact estimate. The empirical performance of matching methods of program evaluation is also examined. We find that matching based on the propensity score eliminates some but not all of the measured selection bias, with the remaining bias still a substantial fraction of the estimated impact. We find that the support of the distribution of propensity scores for the comparison group is typically only a small portion of the support for the participant group. For values outside the common support, it is impossible to reliably estimate the effect of program participation using matching methods. If the impact of participation depends on the propensity score, as we find in our data, the failure of the common support condition severely limits matching compared with random assignment as an evaluation estimator.
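
    The common-support problem the abstract describes can be checked mechanically: estimate propensity scores, then ask what share of each group falls inside the overlap of the two score distributions. The sketch uses simulated data, not the paper's experiment.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    x = np.r_[rng.normal(1.0, 1.0, 200), rng.normal(-1.0, 1.0, 200)]
    d = np.r_[np.ones(200), np.zeros(200)]      # 1 = program participant

    X = sm.add_constant(x)
    ps = sm.Logit(d, X).fit(disp=0).predict(X)  # propensity scores

    lo = max(ps[d == 1].min(), ps[d == 0].min())  # overlap region bounds
    hi = min(ps[d == 1].max(), ps[d == 0].max())
    inside = (ps >= lo) & (ps <= hi)
    print(f"comparison units in common support: {inside[d == 0].mean():.0%}")
    print(f"participants in common support:     {inside[d == 1].mean():.0%}")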