Peer Evaluation


The Experts below are selected from a list of 360 Experts worldwide, ranked by the ideXlab platform

Hans-Dieter Daniel - One of the best experts on this subject based on the ideXlab platform.

  • Funding decision-making systems: An empirical comparison of continuous and dichotomous approaches based on psychometric theory
    Research Evaluation, 2016
    Co-Authors: Rüdiger Mutz, Hans-Dieter Daniel, Lutz Bornmann
    Abstract:

    Funding decisions are mostly based on Peer review rating systems that have rather low inter-rater reliability, and psychometrics questions the use of dichotomous decisions based on such unreliable ratings. For these reasons, De Los Reyes and Wang (2012) favour a continuous funding decision system, in which the funded percentage of a requested grant sum is coupled to the ratings that a proposal receives in the ex ante Peer Evaluation. In contrast to the 'winner takes all' philosophy of a dichotomous funding decision system, a continuous system takes the low reliability of Peer review ratings into account. This article uses psychometrics to simulate the two funding decision systems, to compare them with the funding decision system implemented by a real funding organization, and thereby to investigate for the first time the effects of measurement errors on funding decisions. We used Peer review data from the Austrian Science Fund (FWF) (N = 8,496 proposals), which de facto implements a hybrid funding decision system. The approval rate at the FWF is 44.5%; our findings show that the approval rate would be 32.1% under a purely dichotomous system and 58.4% under a continuous funding decision system. As the funded percentage of a proposal's requested grant sum increases with the mean ex ante Peer Evaluation of the proposal (r = 0.23), the FWF also shows elements of a continuous funding decision system. Relative to a continuous system, a dichotomous system reduces the overall approval probability of a proposal, even for high-quality proposals (approval probability ∼0.70).
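
    As a rough illustration of the comparison the authors simulate, the Python sketch below generates error-laden ratings for proposals and contrasts a dichotomous cutoff with a continuous funded share. The rater count, reliability value, and scoring rules are assumptions chosen for illustration, not the authors' simulation design.

      import numpy as np

      # Illustrative assumptions: 3 raters, inter-rater reliability 0.5,
      # linear funded share; none of this is taken from the paper.
      rng = np.random.default_rng(42)
      n, raters, rel = 8496, 3, 0.5

      quality = rng.normal(size=n)                      # latent proposal quality
      err_sd = np.sqrt((1 - rel) / rel)                 # single-rater error SD
      mean_rating = quality + rng.normal(0, err_sd / np.sqrt(raters), n)

      # Dichotomous system: fund the top 44.5% (the FWF's overall approval rate)
      funded = mean_rating >= np.quantile(mean_rating, 1 - 0.445)

      # Continuous system: funded share of the requested sum rises with the rating
      span = mean_rating.max() - mean_rating.min()
      share = (mean_rating - mean_rating.min()) / span

      top = quality > np.quantile(quality, 0.9)         # genuinely high-quality proposals
      print(f"dichotomous approval probability (top decile): {funded[top].mean():.2f}")
      print(f"mean continuous funded share (top decile):     {share[top].mean():.2f}")

    Under the cutoff, rating error rejects some genuinely strong proposals outright; under the continuous rule, the same error only perturbs their funded share.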

  • Testing for the fairness and predictive validity of research funding decisions: A multilevel multiple imputation for missing data approach using ex-ante and ex-post Peer Evaluation data from the Austrian Science Fund
    Journal of the Association for Information Science and Technology, 2014
    Co-Authors: Rüdiger Mutz, Lutz Bornmann, Hans-Dieter Daniel
    Abstract:

    It is essential for research funding organizations to ensure both the validity and fairness of the grant approval procedure. The ex-ante Peer Evaluation (EXANTE) of N = 8,496 grant applications submitted to the Austrian Science Fund from 1999 to 2009 was statistically analyzed. For 1,689 funded research projects an ex-post Peer Evaluation (EXPOST) was also available; for the remaining grant applications, a multilevel missing data imputation approach was used to account for verification bias, for the first time in Peer-review research. Without imputation, the predictive validity of EXANTE was low (r = .26) but underestimated due to verification bias; with imputation it was r = .49. That is, the decision-making procedure is capable of selecting the best research proposals for funding. In the EXANTE there were several potential biases (e.g., gender). With respect to the EXPOST there was only one real bias (discipline-specific and year-specific differential prediction). The novelty of this contribution is, first, the combining of theoretical concepts of validity and fairness with a missing data imputation approach to correct for verification bias and, second, multilevel modeling to test Peer review-based funding decisions for both validity and fairness in terms of potential and real biases.
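
    The attenuation the authors correct for can be reproduced in a few lines. The sketch below is a deliberately simplified, single-level stand-in for their multilevel multiple imputation: EXPOST scores are generated with a true correlation of .49 to EXANTE, observed only for the funded share of proposals, and imputed for the rest from the funded cases' regression.

      import numpy as np

      # Simplified single-level illustration of verification bias; the paper
      # uses multilevel multiple imputation with several draws, not one.
      rng = np.random.default_rng(1)
      n, true_r = 8496, 0.49

      exante = rng.normal(size=n)
      expost = true_r * exante + np.sqrt(1 - true_r**2) * rng.normal(size=n)

      funded = exante >= np.quantile(exante, 1 - 1689 / n)   # EXPOST seen only if funded
      naive_r = np.corrcoef(exante[funded], expost[funded])[0, 1]

      # One regression-based imputation draw for the unfunded proposals
      coefs = np.polyfit(exante[funded], expost[funded], 1)
      resid_sd = np.std(expost[funded] - np.polyval(coefs, exante[funded]))
      expost_imp = expost.copy()
      n_miss = (~funded).sum()
      expost_imp[~funded] = np.polyval(coefs, exante[~funded]) + rng.normal(0, resid_sd, n_miss)

      print(f"naive r (funded cases only): {naive_r:.2f}")   # attenuated (cf. the paper's .26)
      print(f"r after imputation:          {np.corrcoef(exante, expost_imp)[0, 1]:.2f}")

    Restricting the analysis to funded proposals truncates the EXANTE range, which is exactly why the uncorrected validity estimate is biased downward.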

Cheolil Lim - One of the best experts on this subject based on the ideXlab platform.

  • Peer Evaluation in blended team project-based learning: What do students find important?
    Educational Technology & Society, 2012
    Co-Authors: Hyejung Lee, Cheolil Lim
    Abstract:

    Team project-based learning is reputed to be an appropriate way to activate interactions among students and to encourage knowledge building through collaborative learning. Peer Evaluation is an effective way for each student to participate actively in a team project. This article investigates the issues that are important to students when evaluating their Peers in team project-based learning. A message analysis framework was inductively derived for the study, and data collected from the team-project learning process were categorized within this framework. Each message type was analyzed with respect to the students' Peer Evaluation results. The results showed that managerial, procedural, and social messages, rather than academic messages, significantly predicted Peer Evaluation results. These results indicate that when students evaluate Peers, they weight social and managerial contributions, such as organizing and coordinating the team's work, more heavily than cognitive contributions. Additional results and the significance of their implications are discussed.
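
    A minimal sketch of this kind of analysis, using simulated data (the four message categories come from the abstract; the counts and effect sizes are invented): count each student's messages by type and regress the Peer Evaluation score on those counts.

      import numpy as np

      # Hypothetical data: message counts per student in the four categories
      # named in the abstract; the coefficients are invented for illustration.
      rng = np.random.default_rng(7)
      n = 120
      X = rng.poisson(lam=[5, 4, 3, 6], size=(n, 4))   # academic, managerial, procedural, social
      true_beta = np.array([0.02, 0.30, 0.25, 0.35])   # non-academic types dominate, as reported
      score = 3.0 + X @ true_beta + rng.normal(0, 0.5, n)

      # Ordinary least squares via lstsq; coefficient sizes indicate which
      # message types predict the Peer Evaluation result
      X1 = np.column_stack([np.ones(n), X])
      beta, *_ = np.linalg.lstsq(X1, score, rcond=None)
      names = ["intercept", "academic", "managerial", "procedural", "social"]
      for name, b in zip(names, beta):
          print(f"{name:>10}: {b:+.2f}")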

Hyejung Lee - One of the best experts on this subject based on the ideXlab platform.

  • Peer Evaluation in blended team project-based learning: What do students find important?
    Educational Technology & Society, 2012
    Co-Authors: Hyejung Lee, Cheolil Lim
    Abstract:

    Team project-based learning is reputed to be an appropriate way to activate interactions among students and to encourage knowledge building through collaborative learning. Peer Evaluation is an effective way for each student to participate actively in a team project. This article investigates the issues that are important to students when evaluating their Peers in team project-based learning. A message analysis framework was inductively derived for the study, and data collected from the team-project learning process were categorized within this framework. Each message type was analyzed with respect to the students' Peer Evaluation results. The results showed that managerial, procedural, and social messages, rather than academic messages, significantly predicted Peer Evaluation results. These results indicate that when students evaluate Peers, they weight social and managerial contributions, such as organizing and coordinating the team's work, more heavily than cognitive contributions. Additional results and the significance of their implications are discussed.

Matthew W Ohland - One of the best experts on this subject based on the ideXlab platform.

  • The comprehensive assessment of team member effectiveness: Development of a behaviorally anchored rating scale for self- and Peer Evaluation
    Academy of Management Learning and Education, 2012
    Co-Authors: Matthew W Ohland, Richard A Layton, Misty L Loughry, Lisa G Bullard, Richard M Felder, Cynthia J Finelli, David J Woehr, Hal R Pomeranz, Douglas G Schmucker
    Abstract:

    Instructors often incorporate self- and Peer Evaluations when they use teamwork in their classes, which is common in management education. However, the process is often time consuming and frequently does not match well with guidance provided by the literature. This paper describes the development of a web-based instrument that efficiently collects and analyzes self- and Peer-Evaluation data. The instrument uses a behaviorally anchored rating scale (BARS) to measure team-member contributions in five areas that are based on the literature on team effectiveness. Three studies provide evidence for the validity of the new instrument. Implications for management education and areas for future research are discussed.
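
    To make the instrument's structure concrete, here is a minimal sketch of how a BARS can be represented and scored. The five area names are those reported for CATME; the anchor texts are abridged placeholders, not the instrument's actual wording, and the scoring function is a plain average rather than CATME's published procedure.

      # Five CATME areas; the anchor texts below are placeholders, not the real scale.
      AREAS = [
          "Contributing to the team's work",
          "Interacting with teammates",
          "Keeping the team on track",
          "Expecting quality",
          "Having relevant knowledge, skills, and abilities",
      ]

      # Each area carries behavioral anchors at ratings 1, 3, and 5; raters pick
      # the level whose described behavior best matches the teammate.
      ANCHORS = {area: {1: "rarely shows this behavior",
                        3: "sometimes shows this behavior",
                        5: "consistently shows this behavior"}
                 for area in AREAS}

      def member_score(ratings_by_peer):
          """Average one member's ratings over all peers and all five areas."""
          values = [r for peer in ratings_by_peer.values() for r in peer.values()]
          return sum(values) / len(values)

      print(member_score({"peer_a": {a: 4 for a in AREAS},
                          "peer_b": {a: 5 for a in AREAS}}))   # -> 4.5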

  • Design and validation of a web-based system for assigning members to teams using instructor-specified criteria
    Advances in Engineering Education, 2010
    Co-Authors: Richard A Layton, Matthew W Ohland, Misty L Loughry, George Dante Ricco
    Abstract:

    A significant body of research identifies a large number of team composition characteristics that affect the success of individuals and teams in cooperative learning and project-based team environments. Controlling these factors when assigning students to teams should result in improved learning experiences. However, it is very difficult for instructors to consider more than a few criteria when assigning teams, particularly in large classes. As a result, most instructors allow students to self-select teams, randomly assign teams, or, at best, balance teams on a very limited number of criteria. This paper describes the design of Team-Maker, a web-based software tool that surveys students about criteria that instructors want to use when creating teams and uses a max-min heuristic to determine team assignments based on distribution criteria specified by the instructor. The Team-Maker system was validated by comparing the team assignments generated by the Team-Maker software to assignments made by experienced faculty members using the same criteria. This validation experiment showed that Team-Maker consistently met the specified criteria more closely than the faculty members. We suggest that Team-Maker can be used in combination with the Comprehensive Assessment of Team-Member Effectiveness (CATME) Peer Evaluation instrument to form a powerful faculty support system for team-based and cooperative learning and for a variety of research purposes. Internet access to both the Team-Maker and CATME systems is freely available to college faculty in all disciplines by selecting the “request faculty account” button at https://www.catme.org.
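
    The max-min idea can be sketched as follows; the scoring criterion and the swap-based search are hypothetical stand-ins, not Team-Maker's actual implementation. The heuristic accepts a swap between two teams only if it does not lower the worse of their two scores, thereby pushing up the minimum team score overall.

      import random

      def team_score(team, students):
          """Hypothetical criterion: shared free time slots minus GPA spread."""
          gpas = [students[s]["gpa"] for s in team]
          overlap = len(set.intersection(*[students[s]["free_slots"] for s in team]))
          return overlap - (max(gpas) - min(gpas))

      def max_min_assign(students, team_size, iters=20_000, seed=0):
          rng = random.Random(seed)
          ids = list(students)
          rng.shuffle(ids)
          teams = [ids[i:i + team_size] for i in range(0, len(ids), team_size)]
          for _ in range(iters):
              a, b = rng.sample(range(len(teams)), 2)
              i, j = rng.randrange(team_size), rng.randrange(team_size)
              before = min(team_score(teams[a], students), team_score(teams[b], students))
              teams[a][i], teams[b][j] = teams[b][j], teams[a][i]
              after = min(team_score(teams[a], students), team_score(teams[b], students))
              if after < before:                      # undo swaps that hurt the worst team
                  teams[a][i], teams[b][j] = teams[b][j], teams[a][i]
          return teams

      students = {f"s{k}": {"gpa": 2.0 + (k % 5) * 0.5,
                            "free_slots": set(range(k % 4, 20, 3))}
                  for k in range(12)}
      print(max_min_assign(students, team_size=4))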

  • Effects of behavioral anchors on Peer Evaluation reliability
    Journal of Engineering Education, 2005
    Co-Authors: Matthew W Ohland, Richard A Layton, Misty L Loughry, Amy G Yuhasz
    Abstract:

    This paper presents comparisons of three Peer Evaluation instruments tested among students in undergraduate engineering classes: a single-item instrument without behavioral anchors, a ten-item instrument, and a single-item behaviorally anchored instrument. Studies using the instruments in undergraduate engineering classes over four years show that the use of behavioral anchors significantly improves the inter-rater reliability of the single-item instrument. The inter-rater reliability (based on four raters) of the behaviorally anchored instrument was 0.78, which was not significantly higher than that of the ten-item instrument (0.74), but it was substantially more parsimonious. The results of this study add to the body of knowledge on evaluating students' performance in teams. This is critical since the ability to function in multidisciplinary teams is a required student learning outcome of engineering programs.
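
    For reference, one standard estimator of the reliability of a mean of k raters is ICC(1,k) from a one-way ANOVA. The paper does not state which estimator it used, so treat the choice here as an assumption; the simulated data are illustrative.

      import numpy as np

      def icc_1k(ratings):
          """ICC(1,k) for an (n_targets, k_raters) matrix of ratings."""
          n, k = ratings.shape
          grand = ratings.mean()
          ms_between = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
          within = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum()
          ms_within = within / (n * (k - 1))
          return (ms_between - ms_within) / ms_between

      rng = np.random.default_rng(3)
      true_perf = rng.normal(size=(100, 1))            # 100 students, each rated by 4 peers
      ratings = true_perf + rng.normal(0, 1.0, size=(100, 4))
      print(f"ICC(1,4) = {icc_1k(ratings):.2f}")       # near 0.8 under these assumptions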

  • Developing a Peer Evaluation instrument that is simple, reliable, and valid
    2005 ASEE Annual Conference and Exposition: The Changing Landscape of Engineering and Technology Education in a Global World, 2005
    Co-Authors: Matthew W Ohland, Richard A Layton, Misty L Loughry, Rufus Carter, Lisa G Bullard, Richard M Felder, Cynthia J Finelli, Douglas G Schmucker
    Abstract:

    A multi-university research team is working to design a Peer Evaluation instrument for cooperative learning teams that is simple, reliable, and valid. In this work, an overview of the process of developing behaviorally anchored rating scales (BARS) will be presented, including the establishment of a theoretical basis for the instrument and a description of the extensive classroom testing of the draft instrument conducted during fall 2004. Introducing the draft instrument to the engineering education community through exposure in the NSF grantees' poster session is expected both to improve the validity of the scale itself through the feedback we receive and to accelerate the dissemination of the instrument.

Richard A Layton - One of the best experts on this subject based on the ideXlab platform.

  • The comprehensive assessment of team member effectiveness: Development of a behaviorally anchored rating scale for self- and Peer Evaluation
    Academy of Management Learning and Education, 2012
    Co-Authors: Matthew W Ohland, Richard A Layton, Misty L Loughry, Lisa G Bullard, Richard M Felder, Cynthia J Finelli, David J Woehr, Hal R Pomeranz, Douglas G Schmucker
    Abstract:

    Instructors often incorporate self- and Peer Evaluations when they use teamwork in their classes, which is common in management education. However, the process is often time consuming and frequently does not match well with guidance provided by the literature. This paper describes the development of a web-based instrument that efficiently collects and analyzes self- and Peer-Evaluation data. The instrument uses a behaviorally anchored rating scale (BARS) to measure team-member contributions in five areas that are based on the literature on team effectiveness. Three studies provide evidence for the validity of the new instrument. Implications for management education and areas for future research are discussed.

  • Design and validation of a web-based system for assigning members to teams using instructor-specified criteria
    Advances in Engineering Education, 2010
    Co-Authors: Richard A Layton, Matthew W Ohland, Misty L Loughry, George Dante Ricco
    Abstract:

    A significant body of research identifies a large number of team composition characteristics that affect the success of individuals and teams in cooperative learning and project-based team environments. Controlling these factors when assigning students to teams should result in improved learning experiences. However, it is very difficult for instructors to consider more than a few criteria when assigning teams, particularly in large classes. As a result, most instructors allow students to self-select teams, randomly assign teams, or, at best, balance teams on a very limited number of criteria. This paper describes the design of Team-Maker, a web-based software tool that surveys students about criteria that instructors want to use when creating teams and uses a max-min heuristic to determine team assignments based on distribution criteria specified by the instructor. The Team-Maker system was validated by comparing the team assignments generated by the Team-Maker software to assignments made by experienced faculty members using the same criteria. This validation experiment showed that Team-Maker consistently met the specified criteria more closely than the faculty members. We suggest that Team-Maker can be used in combination with the Comprehensive Assessment of Team-Member Effectiveness (CATME) Peer Evaluation instrument to form a powerful faculty support system for team-based and cooperative learning and for a variety of research purposes. Internet access to both the Team-Maker and CATME systems is freely available to college faculty in all disciplines by selecting the “request faculty account” button at https://www.catme.org.

  • Effects of behavioral anchors on Peer Evaluation reliability
    Journal of Engineering Education, 2005
    Co-Authors: Matthew W Ohland, Richard A Layton, Misty L Loughry, Amy G Yuhasz
    Abstract:

    This paper presents comparisons of three Peer Evaluation instruments tested among students in undergraduate engineering classes: a single-item instrument without behavioral anchors, a ten-item instrument, and a single-item behaviorally anchored instrument. Studies using the instruments in undergraduate engineering classes over four years show that the use of behavioral anchors significantly improves the inter-rater reliability of the single-item instrument. The inter-rater reliability (based on four raters) of the behaviorally anchored instrument was 0.78, which was not significantly higher than that of the ten-item instrument (0.74), but it was substantially more parsimonious. The results of this study add to the body of knowledge on evaluating students' performance in teams. This is critical since the ability to function in multidisciplinary teams is a required student learning outcome of engineering programs.

  • Developing a Peer Evaluation instrument that is simple, reliable, and valid
    2005 ASEE Annual Conference and Exposition: The Changing Landscape of Engineering and Technology Education in a Global World, 2005
    Co-Authors: Matthew W Ohland, Richard A Layton, Misty L Loughry, Rufus Carter, Lisa G Bullard, Richard M Felder, Cynthia J Finelli, Douglas G Schmucker
    Abstract:

    A multi-university research team is working to design a Peer Evaluation instrument for cooperative learning teams that is simple, reliable, and valid. In this work, an overview of the process of developing behaviorally anchored rating scales (BARS) will be presented, including the establishment of a theoretical basis for the instrument and a description of the extensive classroom testing of the draft instrument conducted during fall 2004. Introducing the draft instrument to the engineering education community through exposure in the NSF grantees' poster session is expected both to improve the validity of the scale itself through the feedback we receive and to accelerate the dissemination of the instrument.