Target Population

The Experts below are selected from a list of 307,758 Experts worldwide, ranked by the ideXlab platform.

Elizabeth A Stuart - One of the best experts on this subject based on the ideXlab platform.

  • An outcome model approach to transporting a randomized controlled trial results to a Target Population
    Journal of the American Medical Informatics Association, 2019
    Co-Authors: R R Holman, Benjamin A Goldstein, Matthew Phelan, Neha J Pagidipati, Michael J Pencina, Elizabeth A Stuart
    Abstract:

    OBJECTIVE Participants enrolled into randomized controlled trials (RCTs) often do not reflect real-world Populations. Previous research on how best to transport RCT results to Target Populations has focused on weighting RCT data to look like the Target data. Simulation work, however, has suggested that an outcome model approach may be preferable. Here, we describe such an approach using source data from the 2 × 2 factorial NAVIGATOR (Nateglinide And Valsartan in Impaired Glucose Tolerance Outcomes Research) trial, which evaluated the impact of valsartan and nateglinide on cardiovascular outcomes and new-onset diabetes in a prediabetic Population. MATERIALS AND METHODS Our Target data consisted of people with prediabetes served by the Duke University Health System. We used random survival forests to develop separate outcome models for each of the 4 treatments, estimating the 5-year risk difference for progression to diabetes. We then estimated the treatment effect in our local patient Population, as well as in subpopulations, and compared the results with the traditional weighting approach. RESULTS Our models suggested that the treatment effect for valsartan in our patient Population was the same as in the trial, whereas for nateglinide the treatment effect was stronger than observed in the original trial. Our effect estimates were more efficient than those from the weighting approach, and we effectively estimated subgroup differences. CONCLUSIONS The described method represents a straightforward approach to efficiently transporting an RCT result to any Target Population.
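
    A minimal sketch of the outcome-model approach the abstract describes, with a binary outcome and scikit-learn's random forest standing in for the random survival forests used in the paper; the DataFrames and column names below are hypothetical.

```python
# Sketch of the outcome-model ("g-computation") transport idea: fit an outcome
# model per arm on trial data, then predict and average over the target
# population's covariates. `trial` (with `treatment` and `outcome` columns)
# and `target` (covariates only) are hypothetical pandas DataFrames.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

COVARIATES = ["age", "sex", "bmi"]  # hypothetical covariate names

def transported_risk_difference(trial: pd.DataFrame, target: pd.DataFrame) -> float:
    # Fit one outcome model per treatment arm, using trial data only.
    models = {}
    for arm in (0, 1):
        sub = trial[trial["treatment"] == arm]
        models[arm] = RandomForestClassifier(n_estimators=500, random_state=0)
        models[arm].fit(sub[COVARIATES], sub["outcome"])

    # Predict each arm's risk for every member of the target population,
    # average, and take the difference: the transported treatment effect.
    risk_treated = models[1].predict_proba(target[COVARIATES])[:, 1].mean()
    risk_control = models[0].predict_proba(target[COVARIATES])[:, 1].mean()
    return risk_treated - risk_control
```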

  • Assessing methods for generalizing experimental impact estimates to Target Populations
    Journal of Research on Educational Effectiveness, 2016
    Co-Authors: Holger L Kern, Elizabeth A Stuart, Jennifer Hill, Donald P Green
    Abstract:

    Randomized experiments are considered the gold standard for causal inference, as they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other Target Populations. In education research, increasing attention is being paid to the potential lack of generalizability of randomized experiments, as the experimental participants may be unrepresentative of the Target Population of interest. This paper examines whether generalization may be assisted by statistical methods that adjust for observed differences between the experimental participants and members of a Target Population. The methods examined include approaches that reweight the experimental data so that participants more closely resemble the Target Population and methods that utilize models of the outcome. Two simulation studies and one empirical analysis investigate and compare the methods' performance. One simulation uses purely simulated data while the other utilizes data from an evaluation of a school-based dropout prevention program. Our simulations suggest that machine learning methods outperform regression-based methods when the required structural (ignorability) assumptions are satisfied. When these assumptions are violated, all of the methods examined perform poorly. Our empirical analysis uses data from a multi-site experiment to assess how well results from a given site predict impacts in other sites. Using a variety of extrapolation methods, predicted effects for each site are compared to actual benchmarks. Flexible modeling approaches perform best, although linear regression is not far behind. Taken together, these results suggest that flexible modeling techniques can aid generalization while underscoring the fact that even state-of-the-art statistical techniques still rely on strong assumptions.
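
    For contrast with the outcome-model sketch above, here is a minimal version of the reweighting approach the abstract examines: model membership in the experiment, then weight participants by the inverse odds of participation. The DataFrames and column names are hypothetical.

```python
# Sketch of the reweighting approach: experimental participants are weighted by
# the inverse odds of being in the experiment so that, collectively, they
# resemble the target population. `experiment` and `population` are hypothetical
# DataFrames sharing the covariate columns below; `experiment` also carries
# `treatment` and `outcome` columns.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

COVARIATES = ["age", "female", "prior_score"]  # hypothetical

def reweighted_impact(experiment: pd.DataFrame, population: pd.DataFrame) -> float:
    # Model the probability of being an experimental participant.
    X = pd.concat([experiment[COVARIATES], population[COVARIATES]])
    s = np.r_[np.ones(len(experiment)), np.zeros(len(population))]
    model = LogisticRegression(max_iter=1000).fit(X, s)
    p = model.predict_proba(experiment[COVARIATES])[:, 1]

    # Inverse-odds weights up-weight participants who look like the population.
    w = (1 - p) / p
    treated = experiment["treatment"].to_numpy() == 1
    y = experiment["outcome"].to_numpy()
    return np.average(y[treated], weights=w[treated]) - np.average(y[~treated], weights=w[~treated])
```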

  • The use of propensity scores to assess the generalizability of results from randomized trials
    Journal of The Royal Statistical Society Series A-statistics in Society, 2010
    Co-Authors: Elizabeth A Stuart, Catherine P. Bradshaw, Stephen R Cole, Philip J. Leaf
    Abstract:

    Randomized trials remain the most accepted design for estimating the effects of interventions, but they do not necessarily answer a question of primary interest: Will the program be effective in a Target Population in which it may be implemented? In other words, are the results generalizable? There has been very little statistical research on how to assess the generalizability, or “external validity,” of randomized trials. We propose the use of propensity-score-based metrics to quantify the similarity of the participants in a randomized trial and a Target Population. In this setting the propensity score model predicts participation in the randomized trial, given a set of covariates. The resulting propensity scores are used first to quantify the difference between the trial participants and the Target Population, and then to match, subclassify, or weight the control group outcomes to the Population, assessing how well the propensity score-adjusted outcomes track the outcomes actually observed in the Population. These metrics can serve as a first step in assessing the generalizability of results from randomized trials to Target Populations. This paper lays out these ideas, discusses the assumptions underlying the approach, and illustrates the metrics using data on the evaluation of a schoolwide prevention program called Positive Behavioral Interventions and Supports.
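
    A minimal sketch of the first step described above: fit a model for trial participation and summarize how far apart the trial sample and the Target Population are on the resulting (logit) propensity scores. The data and column names are hypothetical, and this is only one of several metrics one could report.

```python
# Sketch: a propensity-score-based similarity metric. The participation model
# predicts membership in the randomized trial; the standardized difference in
# mean logit propensity scores summarizes trial-vs-population similarity.
# `trial` and `population` are hypothetical DataFrames.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def participation_score_gap(trial: pd.DataFrame, population: pd.DataFrame,
                            covariates: list) -> float:
    X = pd.concat([trial[covariates], population[covariates]])
    s = np.r_[np.ones(len(trial)), np.zeros(len(population))]
    model = LogisticRegression(max_iter=1000).fit(X, s)

    logit = model.decision_function(X)  # linear predictor = logit of the score
    trial_logit, pop_logit = logit[:len(trial)], logit[len(trial):]
    pooled_sd = np.sqrt((trial_logit.var(ddof=1) + pop_logit.var(ddof=1)) / 2)
    return (trial_logit.mean() - pop_logit.mean()) / pooled_sd
```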

  • Generalizing evidence from randomized clinical trials to Target Populations: the ACTG 320 trial
    American Journal of Epidemiology, 2010
    Co-Authors: Stephen R Cole, Elizabeth A Stuart
    Abstract:

    Properly planned and conducted randomized clinical trials remain susceptible to a lack of external validity. The authors illustrate a model-based method to standardize observed trial results to a specified Target Population using a seminal human immunodeficiency virus (HIV) treatment trial, and they provide Monte Carlo simulation evidence supporting the method. The example trial enrolled 1,156 HIV-infected adult men and women in the United States in 1996, randomly assigned 577 to a highly active antiretroviral therapy and 579 to a largely ineffective combination therapy, and followed participants for 52 weeks. The Target Population was US people infected with HIV in 2006, as estimated by the Centers for Disease Control and Prevention. Results from the trial apply, albeit muted by 12%, to the Target Population, under the assumption that the authors have measured and correctly modeled the determinants of selection that reflect heterogeneity in the treatment effect. In simulations with a heterogeneous treatment effect, a conventional intent-to-treat estimate was biased with poor confidence limit coverage, but the proposed estimate was largely unbiased with appropriate confidence limit coverage. The proposed method standardizes observed trial results to a specified Target Population and thereby provides information regarding the generalizability of trial results.
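
    One compact way to write the kind of standardization the abstract describes is the generic inverse-odds-of-sampling estimator below; it is offered as a sketch of the idea, not as a transcription of the paper's exact model-based implementation.

```latex
% S = 1 indicates trial participation, A the randomized treatment, Y the
% binary outcome, and X the measured determinants of selection.
\[
  \widehat{\Pr}(Y^{a}=1)
  = \frac{\sum_{i:\,S_i=1,\,A_i=a} W_i\, Y_i}{\sum_{i:\,S_i=1,\,A_i=a} W_i},
  \qquad
  W_i = \frac{\widehat{\Pr}(S_i=0 \mid X_i)}{\widehat{\Pr}(S_i=1 \mid X_i)},
\]
% with the target-population treatment effect estimated as the difference of
% the two standardized risks, \(\widehat{\Pr}(Y^{1}=1) - \widehat{\Pr}(Y^{0}=1)\).
```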

Stephen R Cole - One of the best experts on this subject based on the ideXlab platform.

  • Using bounds to compare the strength of exchangeability assumptions for internal and external validity
    American Journal of Epidemiology, 2019
    Co-Authors: Alexander Breskin, Daniel Westreich, Stephen R Cole, Jessie K Edwards
    Abstract:

    In the absence of strong assumptions (e.g., exchangeability), only bounds for causal effects can be identified. Here we describe bounds for the risk difference for an effect of a binary exposure on a binary outcome in 4 common study settings: observational studies and randomized studies, each with and without simple random selection from the Target Population. Through these scenarios, we introduce randomizations for selection and treatment, and the widths of the bounds are narrowed from 2 (the width of the range of the risk difference) to 0 (point identification). We then assess the strength of the assumptions of exchangeability for internal and external validity by comparing their contributions to the widths of the bounds in the setting of an observational study without random selection from the Target Population. We find that when less than two-thirds of the Target Population is selected into the study, the assumption of exchangeability for external validity of the risk difference is stronger than that for internal validity. The relative strength of these assumptions should be considered when designing, analyzing, and interpreting observational studies and will aid in determining the best methods for estimating the causal effects of interest.
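
    To make the flavor of these bounds concrete, here is the textbook partial-identification argument for one of the four settings (randomized treatment, but no random selection from the Target Population, with the selected fraction assumed known); it follows the same logic as the paper but is not copied from its expressions.

```latex
% S = 1 indicates selection into the randomized study; RD_{S=1} is the
% point-identified risk difference among those selected.
\[
  \mathrm{RD}
  = \mathrm{RD}_{S=1}\,\Pr(S=1)
  + \underbrace{\bigl[\Pr(Y^{1}=1 \mid S=0) - \Pr(Y^{0}=1 \mid S=0)\bigr]}_{\in\,[-1,\,1]}
    \Pr(S=0),
\]
% so, with no assumptions about those not selected,
\[
  \mathrm{RD} \in
  \bigl[\,\mathrm{RD}_{S=1}\Pr(S=1) - \Pr(S=0),\;
          \mathrm{RD}_{S=1}\Pr(S=1) + \Pr(S=0)\,\bigr],
\]
% a width of 2\Pr(S=0): it shrinks to 0 as selection approaches a census and
% grows toward 2 as the selected fraction vanishes.
```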

  • The use of propensity scores to assess the generalizability of results from randomized trials
    Journal of The Royal Statistical Society Series A-statistics in Society, 2010
    Co-Authors: Elizabeth A Stuart, Catherine P. Bradshaw, Stephen R Cole, Philip J. Leaf
    Abstract:

    Randomized trials remain the most accepted design for estimating the effects of interventions, but they do not necessarily answer a question of primary interest: Will the program be effective in a Target Population in which it may be implemented? In other words, are the results generalizable? There has been very little statistical research on how to assess the generalizability, or “external validity,” of randomized trials. We propose the use of propensity-score-based metrics to quantify the similarity of the participants in a randomized trial and a Target Population. In this setting the propensity score model predicts participation in the randomized trial, given a set of covariates. The resulting propensity scores are used first to quantify the difference between the trial participants and the Target Population, and then to match, subclassify, or weight the control group outcomes to the Population, assessing how well the propensity score-adjusted outcomes track the outcomes actually observed in the Population. These metrics can serve as a first step in assessing the generalizability of results from randomized trials to Target Populations. This paper lays out these ideas, discusses the assumptions underlying the approach, and illustrates the metrics using data on the evaluation of a schoolwide prevention program called Positive Behavioral Interventions and Supports.

  • Generalizing evidence from randomized clinical trials to Target Populations: the ACTG 320 trial
    American Journal of Epidemiology, 2010
    Co-Authors: Stephen R Cole, Elizabeth A Stuart
    Abstract:

    Properly planned and conducted randomized clinical trials remain susceptible to a lack of external validity. The authors illustrate a model-based method to standardize observed trial results to a specified Target Population using a seminal human immunodeficiency virus (HIV) treatment trial, and they provide Monte Carlo simulation evidence supporting the method. The example trial enrolled 1,156 HIV-infected adult men and women in the United States in 1996, randomly assigned 577 to a highly active antiretroviral therapy and 579 to a largely ineffective combination therapy, and followed participants for 52 weeks. The Target Population was US people infected with HIV in 2006, as estimated by the Centers for Disease Control and Prevention. Results from the trial apply, albeit muted by 12%, to the Target Population, under the assumption that the authors have measured and correctly modeled the determinants of selection that reflect heterogeneity in the treatment effect. In simulations with a heterogeneous treatment effect, a conventional intent-to-treat estimate was biased with poor confidence limit coverage, but the proposed estimate was largely unbiased with appropriate confidence limit coverage. The proposed method standardizes observed trial results to a specified Target Population and thereby provides information regarding the generalizability of trial results.

Philip J. Leaf - One of the best experts on this subject based on the ideXlab platform.

  • Assessing the Generalizability of Randomized Trial Results to Target Populations
    Prevention Science, 2014
    Co-Authors: Catherine P. Bradshaw, Philip J. Leaf
    Abstract:

    Recent years have seen increasing interest in and attention to evidence-based practices, where the “evidence” generally comes from well-conducted randomized trials. However, while those trials yield accurate estimates of the effect of the intervention for the participants in the trial (known as “internal validity”), they do not always yield relevant information about the effects in a particular Target Population (known as “external validity”). This may be due to a lack of specification of a Target Population when designing the trial, difficulties recruiting a sample that is representative of a prespecified Target Population, or to interest in considering a Target Population somewhat different from the Population directly Targeted by the trial. This paper first provides an overview of existing design and analysis methods for assessing and enhancing the ability of a randomized trial to estimate treatment effects in a Target Population. It then provides a case study using one particular method, which weights the subjects in a randomized trial to match the Population on a set of observed characteristics. The case study uses data from a randomized trial of school-wide positive behavioral interventions and supports (PBIS); our interest is in generalizing the results to the state of Maryland. In the case of PBIS, after weighting, estimated effects in the Target Population were similar to those observed in the randomized trial. The paper illustrates that statistical methods can be used to assess and enhance the external validity of randomized trials, making the results more applicable to policy and clinical questions. However, there are also many open research questions; future research should focus on questions of treatment effect heterogeneity and further developing these methods for enhancing external validity. Researchers should think carefully about the external validity of randomized trials and be cautious about extrapolating results to specific Populations unless they are confident of the similarity between the trial sample and that Target Population.
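
    A minimal sketch of one simple way to build the kind of weights the case study describes, using post-stratification on a few categorical characteristics; the DataFrames and column names are hypothetical, and the paper's own weighting procedure may differ.

```python
# Sketch: post-stratification weights that make a trial sample match a target
# population (e.g., the state of Maryland) on a few observed characteristics.
# `trial` and `state_pop` are hypothetical DataFrames sharing the stratum columns.
import pandas as pd

STRATA = ["school_level", "urbanicity"]  # hypothetical characteristics

def poststratification_weights(trial: pd.DataFrame, state_pop: pd.DataFrame) -> pd.Series:
    trial_share = trial.groupby(STRATA).size() / len(trial)
    pop_share = state_pop.groupby(STRATA).size() / len(state_pop)

    # Each trial subject is weighted by how under- or over-represented its
    # stratum is in the trial relative to the target population.
    ratio = (pop_share / trial_share).rename("weight").reset_index()
    return trial.merge(ratio, on=STRATA, how="left")["weight"]
```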

  • The use of propensity scores to assess the generalizability of results from randomized trials
    Journal of The Royal Statistical Society Series A-statistics in Society, 2010
    Co-Authors: Elizabeth A Stuart, Catherine P. Bradshaw, Stephen R Cole, Philip J. Leaf
    Abstract:

    Randomized trials remain the most accepted design for estimating the effects of interventions, but they do not necessarily answer a question of primary interest: Will the program be effective in a Target Population in which it may be implemented? In other words, are the results generalizable? There has been very little statistical research on how to assess the generalizability, or “external validity,” of randomized trials. We propose the use of propensity-score-based metrics to quantify the similarity of the participants in a randomized trial and a Target Population. In this setting the propensity score model predicts participation in the randomized trial, given a set of covariates. The resulting propensity scores are used first to quantify the difference between the trial participants and the Target Population, and then to match, subclassify, or weight the control group outcomes to the Population, assessing how well the propensity score-adjusted outcomes track the outcomes actually observed in the Population. These metrics can serve as a first step in assessing the generalizability of results from randomized trials to Target Populations. This paper lays out these ideas, discusses the assumptions underlying the approach, and illustrates the metrics using data on the evaluation of a schoolwide prevention program called Positive Behavioral Interventions and Supports.

Alice S Whittemore - One of the best experts on this subject based on the ideXlab platform.

  • Evaluating disease prediction models using a cohort whose covariate distribution differs from that of the Target Population
    Statistical Methods in Medical Research, 2019
    Co-Authors: Scott Powers, Valerie Mcguire, Leslie Bernstein, Alison J Canchola, Alice S Whittemore
    Abstract:

    Personal predictive models for disease development play important roles in chronic disease prevention. The performance of these models is evaluated by applying them to the baseline covariates of participants in external cohort studies, with model predictions compared to subjects' subsequent disease incidence. However, the covariate distribution among participants in a validation cohort may differ from that of the Population for which the model will be used. Since estimates of predictive model performance depend on the distribution of covariates among the subjects to which the model is applied, such differences can cause misleading estimates of model performance in the Target Population. We propose a method for addressing this problem by weighting the cohort subjects to make their covariate distribution better match that of the Target Population. Simulations show that the method provides accurate estimates of model performance in the Target Population, while un-weighted estimates may not. We illustrate the method by applying it to evaluate an ovarian cancer prediction model Targeted to US women, using cohort data from participants in the California Teachers Study. The methods can be implemented using the open-source R package RMAP (Risk Model Assessment Package), available at http://stanford.edu/~ggong/rmap/.
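
    A minimal sketch of the weighted-evaluation idea, assuming the cohort-to-target weights have already been constructed (for example with a membership model like the ones sketched earlier); this is generic illustrative code, not the RMAP interface.

```python
# Sketch: evaluate a risk model on a validation cohort re-weighted to the
# target population. `risk` holds the model's predicted probabilities,
# `event` the observed binary outcomes, and `weight` the cohort-to-target
# weights (all hypothetical inputs).
import numpy as np

def weighted_performance(risk: np.ndarray, event: np.ndarray, weight: np.ndarray) -> dict:
    expected = np.average(risk, weights=weight)    # weighted mean predicted risk
    observed = np.average(event, weights=weight)   # weighted observed incidence
    brier = np.average((risk - event) ** 2, weights=weight)
    return {"expected_to_observed": expected / observed, "brier": brier}
```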

David M Dror - One of the best experts on this subject based on the ideXlab platform.

  • Building awareness to health insurance among the Target Population of community-based health insurance schemes in rural India
    Social Science Research Network, 2015
    Co-Authors: Pradeep Panda, Arpita Chakraborty, David M Dror
    Abstract:

    Objective: To evaluate an insurance awareness campaign carried out before the launch of three community-based health insurance (CBHI) schemes in rural India, answering the questions: Has the awareness campaign been successful in enhancing participants’ understanding of health insurance? What awareness tools were most useful from the participants’ point of view? Has enhanced awareness resulted in higher enrolment? Methods: Data for this analysis originate from a baseline survey (2010) and a follow-up survey (2011) of more than 800 households in the pre- and post-campaign periods. We used the difference-in-differences method to evaluate the impact of awareness activities on insurance understanding. Assessment of the usefulness of the various tools was carried out based on respondents’ replies regarding the tool(s) they enjoyed and found most useful. An ordinary least squares regression analysis was conducted to understand whether insurance knowledge and CBHI understanding are associated with enrolment in CBHI. Results: The intervention cohort demonstrated substantially higher understanding of insurance concepts than the control group, and CBHI understanding was a positive determinant of enrolment. Respondents considered the ‘Treasure-Pot’ tool (an interactive game) the most useful in enhancing awareness of the effects of insurance. Conclusions: We conclude that awareness-raising is an important prerequisite for voluntary uptake of CBHI schemes and that interactive, contextualised awareness tools are useful in enhancing insurance understanding.
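
    A minimal sketch of the difference-in-differences comparison the abstract describes, written as an OLS regression with a survey-wave by intervention-area interaction; the DataFrame `df` and its column names are hypothetical stand-ins for the panel survey data.

```python
# Sketch: difference-in-differences as OLS with an interaction term.
# Hypothetical columns: insurance_score (measured understanding),
# post (1 = 2011 follow-up wave), treated (1 = awareness-campaign area),
# household_id (cluster identifier).
import statsmodels.formula.api as smf

def did_estimate(df):
    model = smf.ols("insurance_score ~ post * treated", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["household_id"]}
    )
    # The coefficient on the interaction is the difference-in-differences
    # estimate of the campaign's effect on insurance understanding.
    return model.params["post:treated"]
```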