Evaluation Practice

The experts below are selected from a list of 225 experts worldwide, ranked by the ideXlab platform.

David J Winchester - One of the best experts on this subject based on the ideXlab platform.

Karl Y Bilimoria - One of the best experts on this subject based on the ideXlab platform.

Benjamin Smith - One of the best experts on this subject based on the ideXlab platform.

  • The Funding, Administrative, and Policy Influences on the Evaluation of Primary Prevention Programs in Australia
    Prevention Science, 2019
    Co-Authors: Joanna Schwarzman, Adrian Bauman, Belinda J Gabbe, Chris Rissel, Trevor Shilton, Benjamin Smith
    Abstract:

    Evaluation of primary prevention and health promotion programs contributes necessary information to the evidence base for prevention programs. There is increasing demand for high-quality evaluation of program impact and effectiveness for use in public health decision making. Despite the demand for evidence and the known benefits, evaluation of prevention programs can be challenging, and organizations face barriers to conducting rigorous evaluation. Evaluation capacity building efforts are gaining attention in the prevention field; however, there is limited knowledge about how components of the health promotion and primary prevention system (e.g., funding, administrative arrangements, and the policy environment) may facilitate or hinder this work. We sought to identify the important influences on evaluation practice within the Australian primary prevention and health promotion system. We conducted in-depth semi-structured interviews with experienced practitioners and managers (n = 40) from government and non-government organizations, and used thematic analysis to identify the main factors that affect prevention program evaluation. First, accountability and reporting requirements shaped evaluation, especially when expectations were poorly aligned between the funding body and the prevention organization. Second, the funding and political context directly and indirectly affected the resources available and the evaluation approach. Finally, participants used various strategies to modify the prevention system to create more favorable conditions for evaluation. We highlight opportunities to address barriers to evaluation in the prevention system, and argue that targeted investment has the potential to deliver widespread gains through improved evaluation capacity.

Chris L S Coryn - One of the best experts on this subject based on the ideXlab platform.

  • A Systematic Review of Theory-Driven Evaluation Practice From 1990 to 2009
    American Journal of Evaluation, 2011
    Co-Authors: Chris L S Coryn, Lindsay A Noakes, Carl D Westine, Daniela C Schroter
    Abstract:

    Although the general conceptual basis appeared far earlier, theory-driven evaluation came to prominence only a few decades ago with the appearance of Chen’s 1990 book Theory-Driven Evaluations. Since that time, the approach has attracted many supporters as well as detractors. In this paper, 45 cases of theory-driven evaluation, published over a twenty-year period, are systematically examined to ascertain how closely theory-driven evaluation practice comports with the key tenets of the approach as described and prescribed by prominent theoretical writers. Evidence derived from this review to repudiate or substantiate many of the claims put forth both by critics of and advocates for theory-driven forms of evaluation is presented, and an agenda for future research on the approach is recommended.
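
    As a purely illustrative sketch of the comportment check the review describes, the snippet below tallies how many cases exhibit each prescribed tenet. The tenet labels and the data are invented for this sketch; they are not the authors' actual coding frame or findings.

    ```python
    # Hypothetical coding of reviewed cases against prescribed tenets.
    # True means the case exhibits the tenet; all values here are invented.
    cases = [
        {"explicit program theory": True,  "theory-guided design": True,  "causal mechanisms tested": False},
        {"explicit program theory": True,  "theory-guided design": False, "causal mechanisms tested": False},
        {"explicit program theory": False, "theory-guided design": True,  "causal mechanisms tested": True},
    ]

    # For each tenet, report the share of cases that comport with it.
    for tenet in cases[0]:
        n = sum(case[tenet] for case in cases)
        print(f"{tenet}: {n}/{len(cases)} cases ({100 * n / len(cases):.0f}%)")
    ```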

  • Models and Mechanisms for Evaluating Government-Funded Research: An International Comparison
    American Journal of Evaluation, 2007
    Co-Authors: Chris L S Coryn, John Hattie, Michael Scriven, David J Hartmann
    Abstract:

    This research describes, classifies, and comparatively evaluates the national models and mechanisms used to evaluate research and allocate research funding in 16 countries. Although these models and mechanisms vary widely in how research is evaluated and financed, nearly all share the common characteristic of relating funding to some measure of past performance. Each of the 16 national models and mechanisms was rated by independent, blinded panels of professional researchers and evaluators in two countries on more than 25 quality indicators. The national models were then ranked, using the panels' ratings, on their validity, credibility, utility, cost-effectiveness, and ethicality. The highest ratings were received by nations using large-scale research assessment exercises. Bulk funding and indicator-driven models received substantially lower ratings. Implications for research evaluation practice and policy are considered and discussed.
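
    The aggregation and ranking step can be pictured with a short sketch. The snippet below is an assumption-laden illustration, not the study's actual instrument: the model names, the 1-5 rating scale, the grouping of indicators by dimension, and the unweighted averaging are all invented for the example.

    ```python
    from statistics import mean

    # Hypothetical panel ratings (1-5) grouped by quality dimension.
    # The real study used blinded panels scoring 25+ indicators; these
    # values and model labels are invented for illustration.
    ratings = {
        "Model A (assessment exercise)": {
            "validity": [4, 5, 4], "credibility": [5, 4, 5], "utility": [4, 4, 5],
            "cost-effectiveness": [3, 4, 3], "ethicality": [4, 4, 4],
        },
        "Model B (bulk funding)": {
            "validity": [2, 3, 2], "credibility": [3, 3, 2], "utility": [3, 2, 3],
            "cost-effectiveness": [4, 4, 5], "ethicality": [3, 3, 4],
        },
        "Model C (indicator-driven)": {
            "validity": [3, 2, 3], "credibility": [2, 3, 3], "utility": [3, 3, 2],
            "cost-effectiveness": [3, 3, 4], "ethicality": [3, 2, 3],
        },
    }

    def overall_score(dimensions):
        # Average within each dimension, then across dimensions (unweighted;
        # any weighting used in the actual study is not reproduced here).
        return mean(mean(scores) for scores in dimensions.values())

    # Rank models from highest to lowest overall score.
    ranked = sorted(ratings, key=lambda m: overall_score(ratings[m]), reverse=True)
    for rank, model in enumerate(ranked, start=1):
        print(f"{rank}. {model}: {overall_score(ratings[model]):.2f}")
    ```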

David J Bentrem - One of the best experts on this subject based on the ideXlab platform.