Cumulative Knowledge

The Experts below are selected from a list of 29,745 Experts worldwide ranked by the ideXlab platform.

Frank L. Schmidt - One of the best experts on this subject based on the ideXlab platform.

  • Measurement Error Obfuscates Scientific Knowledge: Path to Cumulative Knowledge Requires Corrections for Unreliability and Psychometric Meta-Analyses
    Industrial and Organizational Psychology, 2014
    Co-Authors: Chockalingam Viswesvaran, Deniz S. Ones, Frank L. Schmidt
    Abstract:

    All measurements must contend with unreliability. No measure is free of measurement error. More attention must be paid to measurement error in all psychological research. The problem of reliability is more severe when rating scales are involved. Many of the constructs in industrial–organizational (I–O) psychology and organizational behavior/human resource management research are assessed using ratings. Most notably, the organizationally central construct of job performance is often assessed using ratings (Austin & Villanova, 1992; Borman & Brush, 1993; Campbell, Gasser, & Oswald, 1996; Viswesvaran, Ones, & Schmidt, 1996; Viswesvaran, Schmidt, & Ones, 2005).

  • The Impact of Research Synthesis Methods on Industrial-Organizational Psychology: The Road from Pessimism to Optimism about Cumulative Knowledge
    Research Synthesis Methods, 2010
    Co-Authors: David S. DeGeest, Frank L. Schmidt
    Abstract:

    This paper presents an account of the impact that research synthesis methods, in the form of psychometric meta-analysis, have had on industrial/organizational (I/O) psychology. This paper outlines the central contributions of psychometric meta-analysis in providing a method for developing Cumulative Knowledge. First, this paper describes the concerns and the state of the field before the development of meta-analytic methods. Second, the paper explains how meta-analysis addressed these concerns. Third, the paper details the development of psychometric meta-analysis through validity generalization (VG) research and describes how the use of psychometric meta-analysis spread to other topic areas in the field. Finally, the paper presents illustrative example literatures, such as training and leadership, where meta-analysis had crucial impacts.

  • Fixed versus Random Effects Models in Meta-Analysis: Model Properties and an Empirical Comparison of Differences in Results
    British Journal of Mathematical and Statistical Psychology, 2009
    Co-Authors: Frank L. Schmidt, Theodore L. Hayes
    Abstract:

    Today most conclusions about Cumulative Knowledge in psychology are based on meta-analysis. We first present an examination of the important statistical differences between fixed-effects (FE) and random-effects (RE) models in meta-analysis, and between two different RE procedures, one due to Hedges and Vevea and the other to Hunter and Schmidt. The implications of these differences for the appropriate interpretation of published meta-analyses are explored by applying the two RE procedures to 68 meta-analyses from five large meta-analytic studies previously published in Psychological Bulletin. Under the assumption that the goal of research is generalizable Knowledge, results indicated that the published FE confidence intervals (CIs) around mean effect sizes were on average 52% narrower than their actual width, with similar results being produced by the two RE procedures. These nominal 95% FE CIs were found to have an actual confidence level of only 56% on average. Because most meta-analyses in the literature use FE models, these findings suggest that the precision of meta-analysis findings in the literature has often been substantially overstated, with important consequences for research and practice.
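
    The FE/RE contrast described here comes down to the weights: an FE model weights each study by the inverse of its sampling variance alone, while an RE model adds an estimate of the between-study variance (tau^2) to every denominator, which widens the confidence interval whenever true effects vary. The sketch below illustrates this with made-up effect sizes, using the DerSimonian-Laird tau^2 estimator as a simple stand-in for the Hedges-Vevea and Hunter-Schmidt procedures the paper actually compares.

```python
# Minimal FE vs. RE meta-analysis sketch. Effect sizes and sampling
# variances are hypothetical; the DerSimonian-Laird estimator is a
# stand-in, not the procedure used in the paper.
import math

effects   = [0.10, 0.25, 0.40, 0.55, 0.70]   # hypothetical study effect sizes
variances = [0.02, 0.03, 0.02, 0.04, 0.03]   # hypothetical sampling variances

def weighted_mean_ci(effects, variances, tau2=0.0):
    """Inverse-variance weighted mean and 95% CI; tau2=0 gives the FE model."""
    weights = [1.0 / (v + tau2) for v in variances]
    mean = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return mean, mean - 1.96 * se, mean + 1.96 * se

# Fixed-effects estimate: ignores between-study variance entirely.
fe_mean, fe_lo, fe_hi = weighted_mean_ci(effects, variances)

# DerSimonian-Laird estimate of the between-study variance tau^2.
w = [1.0 / v for v in variances]
q = sum(wi * (e - fe_mean) ** 2 for wi, e in zip(w, effects))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# Random-effects estimate: wider CI whenever tau^2 > 0.
re_mean, re_lo, re_hi = weighted_mean_ci(effects, variances, tau2)

print(f"FE: mean = {fe_mean:.3f}, 95% CI = [{fe_lo:.3f}, {fe_hi:.3f}]")
print(f"RE: mean = {re_mean:.3f}, 95% CI = [{re_lo:.3f}, {re_hi:.3f}]")
```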

  • Fixed Effects vs. Random Effects Meta-Analysis Models: Implications for Cumulative Research Knowledge
    International Journal of Selection and Assessment, 2000
    Co-Authors: John E. Hunter, Frank L. Schmidt
    Abstract:

    Research conclusions in the social sciences are increasingly based on meta-analysis, making questions of the accuracy of meta-analysis critical to the integrity of the base of Cumulative Knowledge. Both fixed effects (FE) and random effects (RE) meta-analysis models have been used widely in published meta-analyses. This article shows that FE models typically manifest a substantial Type I bias in significance tests for mean effect sizes and for moderator variables (interactions), while RE models do not. Likewise, FE models, but not RE models, yield confidence intervals for mean effect sizes that are narrower than their nominal width, thereby overstating the degree of precision in meta-analysis findings. This article demonstrates analytically that these biases in FE procedures are large enough to create serious distortions in conclusions about Cumulative Knowledge in the research literature. We therefore recommend that RE methods routinely be employed in meta-analysis in preference to FE methods.
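
    The Type I bias and CI under-coverage described above are easy to reproduce by simulation. The sketch below is illustrative only: every parameter (true mean effect, between-study SD, per-study sampling variance, number of studies) is an assumed value, not taken from the article, and the FE interval deliberately ignores between-study variance, as FE models do.

```python
# Hedged simulation of FE confidence-interval under-coverage when true
# effects vary across studies. All parameter values are illustrative.
import math
import random

random.seed(1)
MU, TAU, V, K, RUNS = 0.30, 0.20, 0.02, 20, 2000

def fe_ci(effects, v):
    """Fixed-effects 95% CI assuming equal, known sampling variance v."""
    mean = sum(effects) / len(effects)
    se = math.sqrt(v / len(effects))          # omits between-study variance
    return mean - 1.96 * se, mean + 1.96 * se

hits = 0
for _ in range(RUNS):
    # Random-effects world: each study's true effect is drawn around MU,
    # then observed with sampling error.
    effects = [random.gauss(MU, TAU) + random.gauss(0.0, math.sqrt(V))
               for _ in range(K)]
    lo, hi = fe_ci(effects, V)
    hits += lo <= MU <= hi

# Coverage comes out well below the nominal 95% whenever TAU > 0.
print(f"FE nominal 95% CI covered the true mean in {100 * hits / RUNS:.1f}% of runs")
```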

  • Statistical Significance Testing and Cumulative Knowledge in Psychology: Implications for Training of Researchers
    Psychological Methods, 1996
    Co-Authors: Frank L. Schmidt
    Abstract:

    Data analysis methods in psychology still emphasize statistical significance testing, despite numerous articles demonstrating its severe deficiencies. It is now possible to use meta-analysis to show that reliance on significance testing retards the development of Cumulative Knowledge. But reform of teaching and practice will also require that researchers learn that the benefits that they believe flow from use of significance testing are illusory. Teachers must revamp their courses to bring students to understand that (a) reliance on significance testing retards the growth of Cumulative research Knowledge; (b) benefits widely believed to flow from significance testing do not in fact exist; and (c) significance testing methods must be replaced with point estimates and confidence intervals in individual studies and with meta-analyses in the integration of multiple studies. This reform is essential to the future progress of Cumulative Knowledge in psychological research.
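
    The replacement the abstract calls for, point estimates with confidence intervals, is straightforward to report. The sketch below uses invented data and the standard large-sample standard error for Cohen's d; it illustrates the recommended reporting style for a single study, not code from the paper.

```python
# Report an effect size with a 95% CI instead of a significance test.
# The two samples are invented illustration data.
import math
import statistics

group_a = [4.1, 5.0, 4.7, 5.4, 4.9, 5.2, 4.4, 5.1]
group_b = [3.8, 4.2, 4.0, 4.6, 3.9, 4.4, 4.1, 4.3]

n1, n2 = len(group_a), len(group_b)
pooled_sd = math.sqrt(((n1 - 1) * statistics.variance(group_a) +
                       (n2 - 1) * statistics.variance(group_b)) / (n1 + n2 - 2))
d = (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Large-sample standard error of d (Hedges & Olkin approximation).
se_d = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))

# The interval conveys magnitude and uncertainty; a bare p-value does neither.
print(f"d = {d:.2f}, 95% CI = [{d - 1.96 * se_d:.2f}, {d + 1.96 * se_d:.2f}]")
```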

Roger Villanueva - One of the best experts on this subject based on the ideXlab platform.

  • Towards the Identification of the Ommastrephid Squid Paralarvae (Mollusca: Cephalopoda): Morphological Description of Three Species and a Key to the North-East Atlantic Species
    Zoological Journal of the Linnean Society, 2016
    Co-Authors: Fernando Ángel Fernández-Álvarez, Catarina P. P. Martins, Erica A. G. Vidal, Roger Villanueva
    Abstract:

    Oceanic squids of the family Ommastrephidae are an important fishing resource worldwide. Although Cumulative Knowledge exists on their subadult and adult forms, little is known about their young stages. Their hatchlings are among the smallest cephalopod paralarvae. They are characterized by the fusion of their tentacles into a proboscis and are very difficult to identify to species level, especially in areas where more than one species coexist. Seven species are found in the north-east (NE) Atlantic. In this study, mature oocytes of Illex coindetii, Todarodes sagittatus and Todaropsis eblanae were fertilized in vitro to obtain and describe hatchlings. Full descriptions based on morphometric characters, chromatophore patterns, skin sculpture and the structure of proboscis suckers are provided based on live specimens. This information was combined with previous descriptions of paralarvae (not necessarily based on DNA evidence or known parentage) from four other ommastrephid species distributed in the same area, and a dichotomous key was developed for the identification of paralarvae of the NE Atlantic. The most useful taxonomic characters were the relative size of the lateral and medial suckers of the proboscis, the presence or absence of photophores, and the arrangement of pegs on the proboscis suckers. This key was successfully used to identify wild-collected rhynchoteuthion paralarvae from the NE Atlantic. Reliable identification of wild paralarvae can foster a better understanding of the population dynamics and life cycles of ommastrephid squids.
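
    A dichotomous key is, computationally, a binary decision tree over observable characters. The sketch below shows one way such a key could be encoded and traversed; its couplets use the three characters the abstract highlights, but the groupings and outcomes are hypothetical placeholders, not the authors' published key.

```python
# Toy dichotomous key: nested (question, if_yes, if_no) tuples; leaves are
# labels. Couplet questions follow the characters named in the abstract,
# but every outcome is a placeholder, not the authors' actual key.
toy_key = (
    "lateral proboscis suckers larger than medial suckers?",
    ("photophores present?",
     "species group A (placeholder)",
     "species group B (placeholder)"),
    ("pegs on proboscis suckers in a crown-like arrangement?",
     "species group C (placeholder)",
     "species group D (placeholder)"),
)

def identify(node, observe):
    """Walk the key; observe(question) answers True/False for a specimen."""
    while isinstance(node, tuple):
        question, if_yes, if_no = node
        node = if_yes if observe(question) else if_no
    return node

# Example specimen: enlarged lateral suckers, no photophores.
answers = {
    "lateral proboscis suckers larger than medial suckers?": True,
    "photophores present?": False,
}
print(identify(toy_key, answers.get))  # -> species group B (placeholder)
```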

Chockalingam Viswesvaran - One of the best experts on this subject based on the ideXlab platform.

  • Measurement Error Obfuscates Scientific Knowledge: Path to Cumulative Knowledge Requires Corrections for Unreliability and Psychometric Meta-Analyses
    Industrial and Organizational Psychology, 2014
    Co-Authors: Chockalingam Viswesvaran, Deniz S. Ones, Huy Le, In-Sue Oh
    Abstract:

    All measurements must contend with unreliability. No measure is free of measurement error. More attention must be paid to measurement error in all psychological research. The problem of reliability is more severe when rating scales are involved. Many of the constructs in industrial-organizational (I-O) psychology and organizational behavior/human resource management research are assessed using ratings. Most notably, the organizationally central construct of job performance is often assessed using ratings (Austin & Villanova, 1992; Borman & Brush, 1993; Campbell, Gasser, & Oswald, 1996; Viswesvaran, Ones, & Schmidt, 1996; Viswesvaran, Schmidt, & Ones, 2005). The reliability of its assessment is a critical issue, with consequences for (a) validation and (b) decision making. For over a century now, it has been known that measurement error obfuscates relationships among variables that scientists assess. Again for over a century, it has been known that statistical corrections for unreliability can help reveal the true magnitudes of relationships being examined. However, until the mid-1970s, corrections for attenuation were hampered by the fact that the effect of sampling error is magnified in corrected correlations (Schmidt & Hunter, 1977). Only with the advent of psychometric meta-analysis was it possible to fully reap the benefits of corrections for attenuation, because the problem of sampling error was diminished by averaging across many samples and thereby increasing sample sizes. Since the advent of psychometric meta-analysis 38 years ago, scientific Knowledge in the field of I-O psychology has greatly increased. Hundreds of meta-analyses have established basic scientific principles and tested theories. Against this backdrop, LeBreton, Scherer, and James (2014) have written a focal article that distrusts corrections for unreliability in psychometric meta-analyses. They question the appropriateness of using interrater reliabilities of job performance ratings for corrections for attenuation in validity generalization studies. Because of length limitations on comments in this journal, we will address only the major errors, not all of the errors, in LeBreton et al.'s strident article. The focal article is unfortunately more emotional than rational in tone, and conceptually and statistically confused. In our comment, we address only the two latter problems. We have organized our comment in five major sections: (a) purpose of validation and logic of correction for attenuation, (b) reliability of overall job performance ratings, (c) validity estimation versus administrative decision use of criteria, (d) accurate validity estimates for predictors used in employee selection, and (e) correct modeling of job performance determinants.
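
    The correction for attenuation this comment defends is the classical Spearman formula: divide the observed correlation by the square root of the product of the two reliabilities. The sketch below uses assumed values throughout (the reliabilities and sample size are illustrative, not the authors' estimates) and also shows why the comment ties the correction to meta-analysis: the same factor that disattenuates the correlation inflates its sampling error, so averaging across many studies is needed to keep corrected estimates stable.

```python
# Classical correction for attenuation (Spearman). All values are assumed
# for illustration; they are not the authors' estimates.
import math

r_observed = 0.25   # hypothetical observed predictor-criterion correlation
rxx = 0.80          # hypothetical reliability of the predictor
ryy = 0.52          # hypothetical interrater reliability of performance ratings
n = 100             # hypothetical single-study sample size

attenuation = math.sqrt(rxx * ryy)
r_corrected = r_observed / attenuation
print(f"corrected r = {r_corrected:.3f}")        # ~0.39 vs. observed 0.25

# The correction divides the standard error by the same factor, magnifying
# sampling error; psychometric meta-analysis addresses this by averaging
# correlations across many studies before interpreting them.
se_observed = (1 - r_observed ** 2) / math.sqrt(n - 1)
se_corrected = se_observed / attenuation
print(f"SE observed = {se_observed:.3f}, SE corrected = {se_corrected:.3f}")
```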

Erica A. G. Vidal - One of the best experts on this subject based on the ideXlab platform.

  • Towards the Identification of the Ommastrephid Squid Paralarvae (Mollusca: Cephalopoda): Morphological Description of Three Species and a Key to the North-East Atlantic Species
    Zoological Journal of the Linnean Society, 2016
    Co-Authors: Fernando Ángel Fernández-Álvarez, Catarina P. P. Martins, Erica A. G. Vidal, Roger Villanueva
    Abstract:

    Oceanic squids of the family Ommastrephidae are an important fishing resource worldwide. Although Cumulative Knowledge exists on their subadult and adult forms, little is known about their young stages. Their hatchlings are among the smallest cephalopod paralarvae. They are characterized by the fusion of their tentacles into a proboscis and are very difficult to identify to species level, especially in areas where more than one species coexist. Seven species are found in the north-east (NE) Atlantic. In this study, mature oocytes of Illex coindetii, Todarodes sagittatus and Todaropsis eblanae were fertilized in vitro to obtain and describe hatchlings. Full descriptions based on morphometric characters, chromatophore patterns, skin sculpture and the structure of proboscis suckers are provided based on live specimens. This information was combined with previous descriptions of paralarvae (not necessarily based on DNA evidence or known parentage) from four other ommastrephid species distributed in the same area, and a dichotomous key was developed for the identification of paralarvae of the NE Atlantic. The most useful taxonomic characters were the relative size of the lateral and medial suckers of the proboscis, the presence or absence of photophores, and the arrangement of pegs on the proboscis suckers. This key was successfully used to identify wild-collected rhynchoteuthion paralarvae from the NE Atlantic. Reliable identification of wild paralarvae can foster a better understanding of the population dynamics and life cycles of ommastrephid squids.

In-Sue Oh - One of the best experts on this subject based on the ideXlab platform.

  • Measurement Error Obfuscates Scientific Knowledge: Path to Cumulative Knowledge Requires Corrections for Unreliability and Psychometric Meta-Analyses
    Industrial and Organizational Psychology, 2014
    Co-Authors: Chockalingam Viswesvaran, Deniz S. Ones, Huy Le, In-Sue Oh
    Abstract:

    All measurements must contend with unreliability. No measure is free of measurement error. More attention must be paid to measurement error in all psychological research. The problem of reliability is more severe when rating scales are involved. Many of the constructs in industrial-organizational (I-O) psychology and organizational behavior/human resource management research are assessed using ratings. Most notably, the organizationally central construct of job performance is often assessed using ratings (Austin & Villanova, 1992; Borman & Brush, 1993; Campbell, Gasser, & Oswald, 1996; Viswesvaran, Ones, & Schmidt, 1996; Viswesvaran, Schmidt, & Ones, 2005). The reliability of its assessment is a critical issue, with consequences for (a) validation and (b) decision making. For over a century now, it has been known that measurement error obfuscates relationships among variables that scientists assess. Again for over a century, it has been known that statistical corrections for unreliability can help reveal the true magnitudes of relationships being examined. However, until the mid-1970s, corrections for attenuation were hampered by the fact that the effect of sampling error is magnified in corrected correlations (Schmidt & Hunter, 1977). Only with the advent of psychometric meta-analysis was it possible to fully reap the benefits of corrections for attenuation, because the problem of sampling error was diminished by averaging across many samples and thereby increasing sample sizes. Since the advent of psychometric meta-analysis 38 years ago, scientific Knowledge in the field of I-O psychology has greatly increased. Hundreds of meta-analyses have established basic scientific principles and tested theories. Against this backdrop, LeBreton, Scherer, and James (2014) have written a focal article that distrusts corrections for unreliability in psychometric meta-analyses. They question the appropriateness of using interrater reliabilities of job performance ratings for corrections for attenuation in validity generalization studies. Because of length limitations on comments in this journal, we will address only the major errors, not all of the errors, in LeBreton et al.'s strident article. The focal article is unfortunately more emotional than rational in tone, and conceptually and statistically confused. In our comment, we address only the two latter problems. We have organized our comment in five major sections: (a) purpose of validation and logic of correction for attenuation, (b) reliability of overall job performance ratings, (c) validity estimation versus administrative decision use of criteria, (d) accurate validity estimates for predictors used in employee selection, and (e) correct modeling of job performance determinants.