Job Performance

The Experts below are selected from a list of 40,065 Experts worldwide, ranked by the ideXlab platform.

Frank L Schmidt - One of the best experts on this subject based on the ideXlab platform.

  • Comparative Analysis of the Reliability of Job Performance Ratings
    Journal of Applied Psychology, 1996
    Co-Authors: Chockalingam Viswesvaran, Deniz S Ones, Frank L Schmidt
    Abstract:

    This study used meta-analytic methods to compare the interrater and intrarater reliabilities of ratings of 10 dimensions of Job Performance used in the literature; ratings of overall Job Performance were also examined. There was mixed support for the notion that some dimensions are rated more reliably than others. Supervisory ratings appear to have higher interrater reliability than peer ratings. Consistent with H. R. Rothstein (1990), mean interrater reliability of supervisory ratings of overall Job Performance was found to be .52. In all cases, interrater reliability is lower than intrarater reliability, indicating that the inappropriate use of intrarater reliability estimates to correct for biases from measurement error leads to biased research results. These findings have important implications for both research and practice.
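    The correction for attenuation discussed in this abstract can be sketched numerically. The example below is a minimal illustration in Python: the .52 interrater reliability comes from the abstract, while the observed correlation and the intrarater estimate are hypothetical values chosen only to show why substituting the (higher) intrarater reliability under-corrects.

    ```python
    import math

    # Spearman's correction for attenuation: divide the observed correlation
    # by the square root of the criterion measure's reliability.
    def correct_for_attenuation(r_observed, criterion_reliability):
        return r_observed / math.sqrt(criterion_reliability)

    r_observed = 0.25   # hypothetical observed predictor-rating correlation
    interrater = 0.52   # mean interrater reliability of supervisory ratings (from the abstract)
    intrarater = 0.80   # hypothetical intrarater estimate, typically higher

    # Correcting with the appropriate (interrater) estimate yields a larger
    # corrected validity than correcting with the intrarater estimate, so
    # using intrarater reliability leaves residual measurement-error bias.
    corrected_proper = correct_for_attenuation(r_observed, interrater)
    corrected_biased = correct_for_attenuation(r_observed, intrarater)
    print(round(corrected_proper, 3), round(corrected_biased, 3))
    ```

    The gap between the two corrected values illustrates the abstract's point: because interrater reliability is always lower than intrarater reliability, corrections based on the latter systematically understate true relationships.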

Chockalingam Viswesvaran - One of the best experts on this subject based on the ideXlab platform.

  • Absenteeism and Measures of Job Performance: A Meta-Analysis
    International Journal of Selection and Assessment, 2002
    Co-Authors: Chockalingam Viswesvaran
    Abstract:

    The correlations reported in the extant literature between one form of counterproductive behaviors - absenteeism - and four different indices of Job Performance were meta-analytically cumulated. The Job Performance indices utilized were productivity, quality, interpersonal behaviors, and effort. The former two were measured using organizational records, while the latter two were measured using supervisory ratings. The results suggest that absenteeism measures are more highly correlated with organizational records of quality and with supervisory ratings of both effort and interpersonal behaviors. Lower correlations were found with organizational records of productivity. These results suggest the potential for common determinants of absenteeism and some aspects of Job Performance. The fairly independent literatures that have developed on absenteeism and Job Performance can inform one another. Implications for modeling and assessing Job Performance are noted.

  • Comparative Analysis of the Reliability of Job Performance Ratings
    Journal of Applied Psychology, 1996
    Co-Authors: Chockalingam Viswesvaran, Deniz S Ones, Frank L Schmidt
    Abstract:

    This study used meta-analytic methods to compare the interrater and intrarater reliabilities of ratings of 10 dimensions of Job Performance used in the literature; ratings of overall Job Performance were also examined. There was mixed support for the notion that some dimensions are rated more reliably than others. Supervisory ratings appear to have higher interrater reliability than peer ratings. Consistent with H. R. Rothstein (1990), mean interrater reliability of supervisory ratings of overall Job Performance was found to be .52. In all cases, interrater reliability is lower than intrarater reliability, indicating that the inappropriate use of intrarater reliability estimates to correct for biases from measurement error leads to biased research results. These findings have important implications for both research and practice.

Kevin R Murphy - One of the best experts on this subject based on the ideXlab platform.

  • Explaining the Weak Relationship Between Job Performance and Ratings of Job Performance
    Industrial and Organizational Psychology, 2008
    Co-Authors: Kevin R Murphy
    Abstract:

    Ratings of Job Performance are widely viewed as poor measures of Job Performance. Three models of the Performance-Performance rating relationship offer very different explanations and solutions for this seemingly weak relationship. One-factor models suggest that measurement error is the main difference between Performance and Performance ratings, and they offer a simple solution: the correction for attenuation. Multifactor models suggest that the effects of Job Performance on Performance ratings are often masked by a range of systematic non-Performance factors that also influence these ratings. These models suggest isolating and dampening the effects of these non-Performance factors. Mediated models suggest that intentional distortions are a key reason that ratings often fail to reflect ratee Performance. These models suggest that raters must be given both the tools and the incentive to perform well as measurement instruments, and that systematic efforts to remove the negative consequences of giving honest Performance ratings are needed if we hope to use Performance ratings as serious measures of Job Performance.
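    The multifactor view described in this abstract can be illustrated with a small simulation. The sketch below is not from the paper; it simply generates hypothetical data in which ratings reflect true Performance plus a systematic non-Performance factor (here labeled "leniency" for illustration) and random error, and shows that the resulting Performance-rating correlation is well below 1.

    ```python
    import math
    import random
    import statistics

    # Pearson correlation from scratch, to keep the sketch self-contained.
    def corr(xs, ys):
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    random.seed(0)
    n = 5000
    performance = [random.gauss(0, 1) for _ in range(n)]          # true Performance
    leniency    = [random.gauss(0, 1) for _ in range(n)]          # systematic non-Performance factor
    ratings = [p + 0.8 * l + random.gauss(0, 0.5)                 # rating = Performance + leniency + error
               for p, l in zip(performance, leniency)]

    # The observed correlation is substantially below 1.0 even though
    # ratings contain true Performance, because the non-Performance
    # factor and error mask it.
    print(round(corr(performance, ratings), 2))
    ```

    Dampening the non-Performance term (shrinking the 0.8 weight) raises the correlation toward its error-limited ceiling, which is the multifactor models' prescription in miniature.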

  • Perspectives on the Relationship Between Job Performance and Ratings of Job Performance
    Industrial and Organizational Psychology, 2008
    Co-Authors: Kevin R Murphy
    Abstract:

    The comments and suggestions prompted by K. R. Murphy’s (2008) description of alternate models of the relationship between Job Performance and ratings of Job Performance reflect 3 broad themes: (a) the relationship between Performance appraisal and Performance measurement, (b) the best psychometric models for understanding Performance ratings, and (c) the appropriateness of static measures for dynamic phenomena. This paper comments on these 3 themes and suggests directions for future research and practice in Performance appraisal that focus on rater goals, organizational interventions to improve the accuracy and value of ratings, and assessments of the value of Performance ratings as criteria.

Henk T Van Der Molen - One of the best experts on this subject based on the ideXlab platform.

  • Predicting Expatriate Job Performance for Selection Purposes: A Quantitative Review
    Journal of Cross-Cultural Psychology, 2005
    Co-Authors: Marise Ph Born, Madde E Willemsen, Henk T Van Der Molen
    Abstract:

    This article meta-analytically reviews empirical studies on the prediction of expatriate Job Performance. Using 30 primary studies (total N = 4,046), it was found that predictive validities of the Big Five were similar to Big Five validities reported for domestic employees. Extraversion, emotional stability, agreeableness, and conscientiousness were predictive of expatriate Job Performance; openness was not. Other predictors that were found to relate to expatriate Job Performance were cultural sensitivity and local language ability. Cultural flexibility, selection board ratings, tolerance for ambiguity, ego strength, peer nominations, task leadership, people leadership, social adaptability, and interpersonal interest emerged as predictors from exploratory investigations (K < 4). It is surprising that intelligence has seldom been investigated as a predictor of expatriate Job Performance.

Deniz S Ones - One of the best experts on this subject based on the ideXlab platform.

  • Comparative Analysis of the Reliability of Job Performance Ratings
    Journal of Applied Psychology, 1996
    Co-Authors: Chockalingam Viswesvaran, Deniz S Ones, Frank L Schmidt
    Abstract:

    This study used meta-analytic methods to compare the interrater and intrarater reliabilities of ratings of 10 dimensions of Job Performance used in the literature; ratings of overall Job Performance were also examined. There was mixed support for the notion that some dimensions are rated more reliably than others. Supervisory ratings appear to have higher interrater reliability than peer ratings. Consistent with H. R. Rothstein (1990), mean interrater reliability of supervisory ratings of overall Job Performance was found to be .52. In all cases, interrater reliability is lower than intrarater reliability, indicating that the inappropriate use of intrarater reliability estimates to correct for biases from measurement error leads to biased research results. These findings have important implications for both research and practice.