Generalizability

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 324 Experts worldwide ranked by ideXlab platform

Robert L. Brennan - One of the best experts on this subject based on the ideXlab platform.

  • Generalizability Theory and Classical Test Theory
    Applied Measurement in Education, 2010
    Co-Authors: Robert L. Brennan
    Abstract:

    Broadly conceived, reliability involves quantifying the consistencies and inconsistencies in observed scores. Generalizability theory, or G theory, is particularly well suited to addressing such matters in that it enables an investigator to quantify and distinguish the sources of inconsistency in observed scores that arise, or could arise, over replications of a measurement procedure. Classical test theory is a historical predecessor of G theory and, as such, is sometimes called a parent of G theory. Important characteristics of both theories are considered in this article, but primary emphasis is placed on G theory. In addition, the two theories are briefly compared with item response theory.
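    As a concrete illustration (not part of the abstract), the variance-component bookkeeping that G theory formalizes can be sketched for the simplest one-facet persons x raters design. The estimator below uses the standard expected-mean-squares equations, and the generalizability coefficient it computes plays roughly the role that the reliability coefficient plays in classical test theory:

    ```python
    import numpy as np

    def g_study_p_x_r(scores):
        """One-facet persons x raters G study: estimate the universe-score
        (person), rater, and residual variance components from a 2-D score
        matrix via the expected-mean-squares equations."""
        n_p, n_r = scores.shape
        grand = scores.mean()
        p_means = scores.mean(axis=1)
        r_means = scores.mean(axis=0)

        ss_p = n_r * ((p_means - grand) ** 2).sum()
        ss_r = n_p * ((r_means - grand) ** 2).sum()
        ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_r

        ms_p = ss_p / (n_p - 1)
        ms_r = ss_r / (n_r - 1)
        ms_res = ss_res / ((n_p - 1) * (n_r - 1))

        var_res = ms_res                          # p x r interaction + error
        var_p = max((ms_p - ms_res) / n_r, 0.0)   # universe-score variance
        var_r = max((ms_r - ms_res) / n_p, 0.0)   # rater variance
        return var_p, var_r, var_res

    def g_coefficient(var_p, var_res, n_r):
        """Generalizability coefficient for relative decisions based on the
        mean of n_r raters (analogue of a classical reliability coefficient)."""
        return var_p / (var_p + var_res / n_r)
    ```

    Negative moment estimates are truncated at zero, a common convention; with perfectly additive ratings (no person-by-rater interaction) the residual component is zero and the coefficient is 1.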

  • Generalizability of Performance Assessments
    Educational Measurement: Issues and Practice, 2005
    Co-Authors: Robert L. Brennan, Eugene G. Johnson
    Abstract:

    How can the contributions of raters and tasks to error variance be estimated? Which source of error variance is usually greater? Are interrater coefficients adequate estimates of reliability? What other facets contribute to unreliability in performance assessments?
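    The first of these questions, how the separate contributions of raters and tasks to error variance can be estimated, is answered in G theory by a two-facet G study. As an illustrative sketch (not from the article), the fully crossed persons x tasks x raters random-effects decomposition can be computed via the expected-mean-squares equations:

    ```python
    import numpy as np

    def g_study_p_t_r(x):
        """Fully crossed persons x tasks x raters (p x t x r) G study:
        estimate the seven random-effects variance components from a 3-D
        score array of shape (n_p, n_t, n_r) via expected mean squares."""
        n_p, n_t, n_r = x.shape
        grand = x.mean()
        a_p = x.mean(axis=(1, 2))
        a_t = x.mean(axis=(0, 2))
        a_r = x.mean(axis=(0, 1))
        a_pt = x.mean(axis=2)
        a_pr = x.mean(axis=1)
        a_tr = x.mean(axis=0)

        # Sums of squares for main effects, two-way interactions, residual
        ss_p = n_t * n_r * ((a_p - grand) ** 2).sum()
        ss_t = n_p * n_r * ((a_t - grand) ** 2).sum()
        ss_r = n_p * n_t * ((a_r - grand) ** 2).sum()
        ss_pt = n_r * ((a_pt - a_p[:, None] - a_t[None, :] + grand) ** 2).sum()
        ss_pr = n_t * ((a_pr - a_p[:, None] - a_r[None, :] + grand) ** 2).sum()
        ss_tr = n_p * ((a_tr - a_t[:, None] - a_r[None, :] + grand) ** 2).sum()
        resid = (x - a_pt[:, :, None] - a_pr[:, None, :] - a_tr[None, :, :]
                 + a_p[:, None, None] + a_t[None, :, None]
                 + a_r[None, None, :] - grand)
        ss_ptr = (resid ** 2).sum()

        ms = {
            "p": ss_p / (n_p - 1), "t": ss_t / (n_t - 1), "r": ss_r / (n_r - 1),
            "pt": ss_pt / ((n_p - 1) * (n_t - 1)),
            "pr": ss_pr / ((n_p - 1) * (n_r - 1)),
            "tr": ss_tr / ((n_t - 1) * (n_r - 1)),
            "ptr": ss_ptr / ((n_p - 1) * (n_t - 1) * (n_r - 1)),
        }
        # Solve the expected-mean-squares equations (negatives set to 0)
        v = {"ptr": ms["ptr"]}
        v["pt"] = max((ms["pt"] - ms["ptr"]) / n_r, 0.0)
        v["pr"] = max((ms["pr"] - ms["ptr"]) / n_t, 0.0)
        v["tr"] = max((ms["tr"] - ms["ptr"]) / n_p, 0.0)
        v["p"] = max((ms["p"] - ms["pt"] - ms["pr"] + ms["ptr"]) / (n_t * n_r), 0.0)
        v["t"] = max((ms["t"] - ms["pt"] - ms["tr"] + ms["ptr"]) / (n_p * n_r), 0.0)
        v["r"] = max((ms["r"] - ms["pr"] - ms["tr"] + ms["ptr"]) / (n_p * n_t), 0.0)
        return v
    ```

    Comparing the task-related components (`t`, `pt`) against the rater-related ones (`r`, `pr`) addresses which source of error variance is larger; in performance assessments the task components are typically the larger, which is why interrater coefficients alone tend to overstate reliability.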

  • Variability of Estimated Variance Components and Related Statistics in a Performance Assessment
    Applied Measurement in Education, 2001
    Co-Authors: Robert L. Brennan
    Abstract:

    Generalizability theory provides a conceptual and statistical framework for estimating variance components and measurement precision, and it has been widely used in evaluating the technical qualities of performance assessments. However, estimates of variance components, measurement error variances, and generalizability coefficients are likely to vary from one sample to another. This study empirically investigates the sampling variability of estimated variance components using data collected over several years for a listening and writing performance assessment, and evaluates the stability of estimated measurement precision from year to year. The results indicated that the estimated variance components varied from one study to another, especially when sample sizes were small. The estimated measurement error variances and generalizability coefficients also changed from one year to another. Measurement precision projected by a generalizability study may not be fully realized in an actual decision study. …
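    The sampling variability the study documents can be made tangible with a small simulation, a sketch not drawn from the paper: generate repeated persons x raters datasets from known variance components (the numbers below are arbitrary), re-estimate the person component each time, and observe how the spread of estimates shrinks as the sample grows:

    ```python
    import numpy as np

    def estimate_person_variance(scores):
        """Moment estimate of the person (universe-score) variance component
        in a persons x raters design, via expected mean squares."""
        n_p, n_r = scores.shape
        grand = scores.mean()
        p_means = scores.mean(axis=1)
        r_means = scores.mean(axis=0)
        ss_p = n_r * ((p_means - grand) ** 2).sum()
        ss_r = n_p * ((r_means - grand) ** 2).sum()
        ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_r
        ms_p = ss_p / (n_p - 1)
        ms_res = ss_res / ((n_p - 1) * (n_r - 1))
        return max((ms_p - ms_res) / n_r, 0.0)

    def sampling_spread(true_var_p=1.0, true_var_r=0.2, true_var_res=0.5,
                        n_p=20, n_r=3, n_reps=500, seed=0):
        """Simulate n_reps datasets with known variance components and
        return the mean and standard deviation of the estimated person
        variance across replications."""
        rng = np.random.default_rng(seed)
        ests = []
        for _ in range(n_reps):
            scores = (rng.normal(0.0, true_var_p ** 0.5, (n_p, 1))
                      + rng.normal(0.0, true_var_r ** 0.5, (1, n_r))
                      + rng.normal(0.0, true_var_res ** 0.5, (n_p, n_r)))
            ests.append(estimate_person_variance(scores))
        ests = np.asarray(ests)
        return ests.mean(), ests.std()
    ```

    With 20 examinees the estimates scatter widely around the true value of 1.0, and the scatter grows as the person sample shrinks, mirroring the paper's finding that estimates are least stable when sample sizes are small.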

Darshana Sedera - One of the best experts on this subject based on the ideXlab platform.

  • ICIS - Software Artefacts as Equipment: A New Conception to Software Development using Reusable Software Artefacts
    2015
    Co-Authors: Subasinghage Maduka Nuwangi, Darshana Sedera
    Abstract:

    Through the lens of Heidegger's analysis of equipment, this study examines software reuse, a widespread practice in software development. It presents an alternative conceptual view of the software artefact as 'equipment', providing a theoretical underpinning for recognizing software artefacts as equipment. Employing the case study method, the study reports preliminary results from five software development projects investigating the development and reusability of software artefacts. Two types of generalizability were identified: (1) horizontal generalizability and (2) vertical generalizability. The results suggest that the reusability of software artefacts may depend on the type of generalizability, and that greater reusability may increase the maturity of software artefacts. Furthermore, the results indicated that software artefacts were updated rapidly in the initial stages of the software development lifecycle compared with the final stages.

  • Software Artefacts as Equipment: A New Conception to Software Development using Reusable Software Artefacts Research-in-Progress
    2015
    Co-Authors: Subasinghage Maduka Nuwangi, Darshana Sedera
    Abstract:

    Through the lens of Heidegger's analysis of equipment, this study examines software reuse, a widespread practice in software development. It presents an alternative conceptual view of the software artefact as 'equipment', providing a theoretical underpinning for recognizing software artefacts as equipment. Employing the case study method, the study reports preliminary results from five software development projects investigating the development and reusability of software artefacts. Two types of generalizability were identified: (1) horizontal generalizability and (2) vertical generalizability. The results suggest that the reusability of software artefacts may depend on the type of generalizability, and that greater reusability may increase the maturity of software artefacts. Furthermore, the results indicated that software artefacts were updated rapidly in the initial stages of the software development lifecycle compared with the final stages.

Richard L Baskerville - One of the best experts on this subject based on the ideXlab platform.

  • Generalizing Generalizability in Information Systems Research
    Information Systems Research, 2003
    Co-Authors: Allen S Lee, Richard L Baskerville
    Abstract:

    Generalizability is a major concern to those who do, and use, research. Statistical, sampling-based generalizability is well known, but methodologists have long been aware of conceptions of generalizability beyond the statistical. The purpose of this essay is to clarify the concept of generalizability by critically examining its nature, illustrating its use and misuse, and presenting a framework for classifying its different forms. The framework organizes the different forms into four types, which are defined by the distinction between empirical and theoretical kinds of statements. On the one hand, the framework affirms the bounds within which statistical, sampling-based generalizability is legitimate. On the other hand, the framework indicates ways in which researchers in information systems and other fields may properly lay claim to generalizability, and thereby broader relevance, even when their inquiry falls outside the bounds of sampling-based research.

James J. Cimino - One of the best experts on this subject based on the ideXlab platform.

  • Computer-aided assessment of the Generalizability of clinical trial results.
    International Journal of Medical Informatics, 2017
    Co-Authors: Amos Cahan, Sorel Cahan, James J. Cimino
    Abstract:

    Background: The effects of an intervention on patients from populations other than the one included in a trial may vary as a result of differences in population features, treatment administration, or general setting. Determining the generalizability of a trial to a target population is important in clinical decision making at both the individual-practitioner and policy-making levels. However, awareness of the challenges associated with assessing the generalizability of trials is low, and tools to facilitate such assessment are lacking. Methods: We review the main factors affecting the generalizability of clinical trial results beyond the trial population. We then propose a framework for a standardized evaluation of parameters relevant to determining the external validity of clinical trials, producing a "generalizability score". We apply this framework to populations of patients with heart failure included in trials, cohorts, and registries to demonstrate the use of the generalizability score and its graphic representation along three dimensions: participants' demographics, their clinical profile, and the intervention setting. We use the generalizability score to compare a single trial to multiple "target" clinical scenarios, and additionally present the generalizability scores of several studies with regard to a single "target" population. Results: Similarity indices vary considerably between trials and target populations, but inconsistent reporting of participant characteristics limits head-to-head comparisons. Conclusion: We discuss the challenges involved in performing automatic assessment of trial generalizability at scale and propose the adoption of a standard format for reporting the characteristics of trial participants to enable better interpretation of their results.
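    The kind of dimension-wise similarity index the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's actual scoring procedure: the feature names are invented, every feature is assumed pre-scaled to [0, 1] (e.g. proportions, or age normalized to a plausible range), and both the per-dimension and overall aggregations are simple means:

    ```python
    import math

    def dimension_similarity(trial, target):
        """Similarity on one dimension: 1 minus the mean absolute
        difference across shared features, each feature pre-scaled
        to [0, 1]. Returns NaN if no features are shared."""
        shared = sorted(set(trial) & set(target))
        if not shared:
            return math.nan
        diffs = [min(abs(trial[k] - target[k]), 1.0) for k in shared]
        return 1.0 - sum(diffs) / len(diffs)

    def generalizability_score(trial, target):
        """Average the per-dimension similarities (demographics, clinical
        profile, intervention setting) into one score in [0, 1]."""
        sims = [dimension_similarity(trial[d], target[d])
                for d in trial if d in target]
        sims = [s for s in sims if not math.isnan(s)]
        return sum(sims) / len(sims)
    ```

    A trial compared against itself scores 1.0, and an all-inpatient trial applied to a mostly outpatient target population is penalized on the setting dimension; the abstract's point about inconsistent reporting shows up here as missing features, which silently shrink the set a dimension is scored on.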