Impact Evaluation

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 360 Experts worldwide, ranked by the ideXlab platform

Eugene T Richardson - One of the best experts on this subject based on the ideXlab platform.

  • The symbolic violence of 'outbreak': A mixed-methods, quasi-experimental impact evaluation of social protection on Ebola survivor wellbeing
    Social Science & Medicine, 2017
    Co-Authors: Eugene T Richardson, Daniel J Kelly, Osman Sesay, Michael Drasher, Ishaan K Desai, Raphael Frankfurter, Paul Farmer, Mohamed Bailor Barrie
    Abstract:

    Despite over 28,000 reported cases of Ebola virus disease (EVD) in the 2013–16 outbreak in West Africa, we are only beginning to trace the complex biosocial processes that have promoted its spread. Important questions remain, including the effects on survivors of clinical sequelae, loss of family and livelihood, and other psychological and social trauma. Another poorly understood question is what effect social protection and job creation programs have had on survivors' wellbeing. Several clinical and social protection programs have been developed to respond to the needs of EVD survivors; however, little in the way of impact evaluation has taken place. We enrolled 200 randomly selected EVD survivors from Port Loko, Kenema, and Kailahun districts in Sierra Leone and stratified them based on the amount of instrumental social protection received post-discharge from an Ebola treatment unit. We then conducted a survey and in-depth interviews to assess participants' wellbeing and food security. Social protection categories II–IV (moderate to extensive) were each significantly associated with approximately 15–22% higher wellbeing scores compared with minimal social protection. Qualitative themes included having a sense of purpose during the crisis (work and fellowship helped survivors cope); using cash transfers to invest in business; the value of literacy and life-skills classes; loss of breadwinners (survivors with jobs were able to take over that role); and combating the consequences of stigma. We conclude that, for EVD survivors, short-term social protection during the vulnerable period post-discharge can pay dividends two years later. Based on the empirical evidence presented, we discuss how terms such as "outbreak" and "epidemic" do symbolic violence by creating the illusion that social suffering ends when transmission of a pathogen ceases.
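The stratified comparison described above can be sketched in a few lines: group wellbeing scores by social-protection stratum and compute each stratum's percent difference in mean score against the minimal-protection reference group. The strata labels mirror the study's four-category design, but all scores below are invented for illustration, not the study's data.

```python
# Hypothetical wellbeing scores by social-protection stratum.
# Category I (minimal) is the reference group.
from statistics import mean

scores = {
    "I (minimal)":    [52, 48, 50, 55, 45],
    "II (moderate)":  [60, 57, 62, 58, 63],
    "III":            [59, 57, 61, 58, 60],
    "IV (extensive)": [63, 60, 62, 59, 61],
}

reference = mean(scores["I (minimal)"])

def pct_diff_vs_minimal(stratum: str) -> float:
    """Percent difference in mean wellbeing vs. the minimal-protection group."""
    return 100 * (mean(scores[stratum]) - reference) / reference

for stratum in ("II (moderate)", "III", "IV (extensive)"):
    print(f"{stratum}: {pct_diff_vs_minimal(stratum):+.1f}%")
```

With these invented numbers the three higher strata land roughly 18–22% above the reference mean, illustrating the shape of the reported association; the actual study estimated effects with survey data and significance testing, not a raw mean comparison.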

Elizabeth A Stuart - One of the best experts on this subject based on the ideXlab platform.

  • COVID-19 policy impact evaluation: A guide to common design issues
    American Journal of Epidemiology, 2021
    Co-Authors: Noah Haber, Emma Clarke-Deelder, Avi Feller, Joshua A Salomon, Elizabeth A Stuart
    Abstract:

    Policy responses to COVID-19, particularly those related to non-pharmaceutical interventions, are unprecedented in scale and scope. However, policy impact evaluations require a complex combination of circumstance, study design, data, statistics, and analysis. Beyond the issues that are faced for any policy, evaluation of COVID-19 policies is complicated by additional challenges related to infectious disease dynamics and a multiplicity of interventions. The methods needed for policy-level impact evaluation are not often used or taught in epidemiology, and differ in important ways that may not be obvious. Methodological complications of policy evaluations can make it difficult for decision-makers and researchers to synthesize and evaluate strength of evidence in COVID-19 health policy papers. We (1) introduce the basic suite of policy impact evaluation designs for observational data, including cross-sectional analyses, pre/post, interrupted time-series, and difference-in-differences analysis, (2) demonstrate key ways in which the requirements and assumptions underlying these designs are often violated in the context of COVID-19, and (3) provide decision-makers and reviewers a conceptual and graphical guide to identifying these key violations. The overall goal of this paper is to help epidemiologists, policy-makers, journal editors, journalists, researchers, and other research consumers understand and weigh the strengths and limitations of evidence.
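Of the designs the guide names, difference-in-differences is the easiest to show in miniature: compare the pre-to-post change in a region that adopted a policy against the change in a comparison region over the same window. The regions and case counts below are invented for the sketch.

```python
# Minimal difference-in-differences (DiD) sketch with invented numbers:
# mean weekly case counts before and after a hypothetical policy date.
pre_policy,  post_policy  = 100.0, 80.0   # policy region
pre_control, post_control = 90.0,  95.0   # comparison region

# DiD: change in the policy region minus change in the comparison region.
# Under the parallel-trends assumption, the comparison region's change
# stands in for what the policy region would have done absent the policy.
did = (post_policy - pre_policy) - (post_control - pre_control)
print(did)  # -25.0
```

The paper's point is that the parallel-trends assumption is exactly what COVID-19 tends to break: epidemic dynamics, behavioral change, and overlapping interventions can move the two regions apart for reasons unrelated to the policy under study.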

  • COVID-19 policy impact evaluation: A guide to common design issues
    arXiv: Methodology, 2020
    Co-Authors: Noah Haber, Emma Clarke-Deelder, Avi Feller, Joshua A Salomon, Elizabeth A Stuart
    Abstract:

    Policy responses to COVID-19, particularly those related to non-pharmaceutical interventions, are unprecedented in scale and scope. Epidemiologists are more involved in policy decisions and evidence generation than ever before. However, policy impact evaluations always require a complex combination of circumstance, study design, data, statistics, and analysis. Beyond the issues that are faced for any policy, evaluation of COVID-19 policies is complicated by additional challenges related to infectious disease dynamics and lags, lack of direct observation of key outcomes, and a multiplicity of interventions occurring on an accelerated time scale. The methods needed for policy-level impact evaluation are not often used or taught in epidemiology, and differ in important ways that may not be obvious. The volume, speed, and methodological complications of policy evaluations can make it difficult for decision-makers and researchers to synthesize and evaluate strength of evidence in COVID-19 health policy papers. In this paper, we (1) introduce the basic suite of policy impact evaluation designs for observational data, including cross-sectional analyses, pre/post, interrupted time-series, and difference-in-differences analysis, (2) demonstrate key ways in which the requirements and assumptions underlying these designs are often violated in the context of COVID-19, and (3) provide decision-makers and reviewers a conceptual and graphical guide to identifying these key violations. The overall goal of this paper is to help epidemiologists, policy-makers, journal editors, journalists, researchers, and other research consumers understand and weigh the strengths and limitations of evidence that is essential to decision-making.
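Another design in the suite, interrupted time series, can be sketched just as compactly: fit a trend to the pre-policy period, project it forward as the counterfactual, and measure how far observed post-policy values deviate from that projection. The weekly counts and policy date below are invented for illustration.

```python
# Illustrative interrupted time-series (ITS) sketch with invented data.
import numpy as np

weeks = np.arange(10)  # weeks 0-9; a hypothetical policy starts at week 6
cases = np.array([10, 12, 14, 16, 18, 20, 19, 17, 15, 13], dtype=float)
policy_week = 6

# Fit a linear trend on the pre-policy segment only.
slope, intercept = np.polyfit(weeks[:policy_week], cases[:policy_week], 1)

# Project that trend forward as the no-policy counterfactual.
counterfactual = intercept + slope * weeks[policy_week:]

# Average deviation of observed post-policy cases from the projection.
effect = float(np.mean(cases[policy_week:] - counterfactual))
print(round(effect, 1))  # about -9.0: cases fall below the projected trend
```

The fragility the paper highlights is visible in the design itself: the estimate is only as good as the assumption that the pre-policy trend would have continued, which epidemic curves (exponential growth, saturation, reporting lags) routinely violate.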

Noah Haber - One of the best experts on this subject based on the ideXlab platform.

  • COVID-19 policy impact evaluation: A guide to common design issues
    American Journal of Epidemiology, 2021
    Co-Authors: Noah Haber, Emma Clarke-Deelder, Avi Feller, Joshua A Salomon, Elizabeth A Stuart
    Abstract:

    [Abstract identical to the American Journal of Epidemiology entry listed above under Elizabeth A Stuart.]

  • Problems with evidence assessment in COVID-19 health policy impact evaluation (PEACHPIE): A systematic strength of methods review
    medRxiv, 2021
    Co-Authors: Noah Haber, Emma Clarke-Deelder, Avi Feller, Emily R Smith, Joshua A Salomon, B MacCormack-Gelles, E M Stone, C Bolster-Foucault, Laura A Hatfield, C E Fry
    Abstract:

    Introduction: The impact of policies on COVID-19 outcomes is one of the most important questions of our time. Unfortunately, there are substantial concerns about the strength and quality of the literature examining policy impacts. This study systematically assessed the currently published COVID-19 policy impact literature for a checklist of study design elements and methodological issues. Methods: We included studies that were primarily designed to estimate the quantitative impact of one or more implemented COVID-19 policies on direct SARS-CoV-2 and COVID-19 outcomes. After searching PubMed for peer-reviewed articles published on November 26 or earlier and screening, all studies were reviewed by three reviewers independently and in consensus. The review tool was based on review guidance for assessing COVID-19 health policy impact evaluation analyses, including first identifying the assumptions behind the methods used, followed by assessing graphical display of outcomes data, functional form for the outcomes, timing between policy and impact, concurrent changes to the outcomes, and an overall rating. Results: After 102 articles were identified as potentially meeting inclusion criteria, we identified 36 published articles that evaluated the quantitative impact of COVID-19 policies on direct COVID-19 outcomes. The majority (n=23/36) of studies in our sample examined the impact of stay-at-home requirements. Nine studies were set aside because the study design was considered inappropriate for COVID-19 policy impact evaluation (n=8 pre/post; n=1 cross-sectional), and 27 articles were given a full consensus assessment. 20/27 met criteria for graphical display of data, 5/27 for functional form, 19/27 for timing between policy implementation and impact, and only 3/27 for concurrent changes to the outcomes. Only 1/27 studies passed all of the above checks, and 4/27 were rated as overall appropriate. Including the 9 studies set aside, we found that only four (or by a stricter standard, only one) of the 36 identified published and peer-reviewed health policy impact evaluation studies passed a set of key design checks for identifying the causal impact of policies on COVID-19 outcomes. Discussion: The current literature directly evaluating the impact of COVID-19 policies largely fails to meet key design criteria for useful inference. This may be partially due to the circumstances for evaluation being particularly difficult, as well as a context combining pressure for rapid publication, the importance of the topic, and weak peer review processes. Importantly, weak evidence is non-informative and does not indicate how effective these policies were in changing COVID-19 outcomes.
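The screening logic in the Results section reduces to a small computation: give each study a boolean verdict on each design check, then count per-check passes and studies that pass everything. The study names and verdicts below are invented for illustration; only the four check names come from the abstract.

```python
# Toy version of the strength-of-methods screen described above.
CHECKS = ("graphical display", "functional form",
          "timing of impact", "concurrent changes")

# Per-study verdicts on the four checks (invented for the sketch).
studies = {
    "study A": (True,  True,  True,  True),   # passes all four
    "study B": (True,  False, True,  False),
    "study C": (False, False, True,  False),
    "study D": (True,  False, False, False),
}

# Studies that survive every design check.
passed_all = [name for name, verdicts in studies.items() if all(verdicts)]
print(passed_all)

# How many studies pass each individual check.
per_check = {check: sum(v[i] for v in studies.values())
             for i, check in enumerate(CHECKS)}
print(per_check)
```

In the actual review, the analogous tallies (20/27, 5/27, 19/27, 3/27, with 1/27 passing all checks) came from independent assessments by three reviewers resolved in consensus, not a mechanical boolean screen.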

  • COVID-19 policy impact evaluation: A guide to common design issues
    arXiv: Methodology, 2020
    Co-Authors: Noah Haber, Emma Clarke-Deelder, Avi Feller, Joshua A Salomon, Elizabeth A Stuart
    Abstract:

    [Abstract identical to the arXiv entry listed above under Elizabeth A Stuart.]

Emma Clarke-Deelder - One of the best experts on this subject based on the ideXlab platform.

  • COVID-19 policy impact evaluation: A guide to common design issues
    American Journal of Epidemiology, 2021
    Co-Authors: Noah Haber, Emma Clarke-Deelder, Avi Feller, Joshua A Salomon, Elizabeth A Stuart
    Abstract:

    [Abstract identical to the American Journal of Epidemiology entry listed above under Elizabeth A Stuart.]

  • Problems with evidence assessment in COVID-19 health policy impact evaluation (PEACHPIE): A systematic strength of methods review
    medRxiv, 2021
    Co-Authors: Noah Haber, Emma Clarke-Deelder, Avi Feller, Emily R Smith, Joshua A Salomon, B MacCormack-Gelles, E M Stone, C Bolster-Foucault, Laura A Hatfield, C E Fry
    Abstract:

    [Abstract identical to the medRxiv entry listed above under Noah Haber.]

  • COVID-19 policy impact evaluation: A guide to common design issues
    arXiv: Methodology, 2020
    Co-Authors: Noah Haber, Emma Clarke-Deelder, Avi Feller, Joshua A Salomon, Elizabeth A Stuart
    Abstract:

    [Abstract identical to the arXiv entry listed above under Elizabeth A Stuart.]

Avi Feller - One of the best experts on this subject based on the ideXlab platform.

  • COVID-19 policy impact evaluation: A guide to common design issues
    American Journal of Epidemiology, 2021
    Co-Authors: Noah Haber, Emma Clarke-Deelder, Avi Feller, Joshua A Salomon, Elizabeth A Stuart
    Abstract:

    [Abstract identical to the American Journal of Epidemiology entry listed above under Elizabeth A Stuart.]

  • Problems with evidence assessment in COVID-19 health policy impact evaluation (PEACHPIE): A systematic strength of methods review
    medRxiv, 2021
    Co-Authors: Noah Haber, Emma Clarke-Deelder, Avi Feller, Emily R Smith, Joshua A Salomon, B MacCormack-Gelles, E M Stone, C Bolster-Foucault, Laura A Hatfield, C E Fry
    Abstract:

    [Abstract identical to the medRxiv entry listed above under Noah Haber.]

  • COVID-19 policy impact evaluation: A guide to common design issues
    arXiv: Methodology, 2020
    Co-Authors: Noah Haber, Emma Clarke-Deelder, Avi Feller, Joshua A Salomon, Elizabeth A Stuart
    Abstract:

    [Abstract identical to the arXiv entry listed above under Elizabeth A Stuart.]