External Validity


The experts below are selected from a list of 60,771 experts worldwide, ranked by the ideXlab platform.

James F. Wilson - One of the best experts on this subject based on the ideXlab platform.

  • VII—Internal and External Validity in Thought Experiments
    Proceedings of the Aristotelian Society, 116 (2), pp. 127–152, 2016
    Co-Authors: James F. Wilson
    Abstract:

    This paper develops an account of rigour in the use of thought experiments in ethics. I argue that there are two separate challenges to be faced. The first is internal Validity: is the thought experiment designed in a way that allows its readers to make judgements that are confident and free of bias about the hypothesis or point of principle that it aims to test? The second is External Validity: to what extent do ethical judgements that are correct of the world of the thought experiment generalise to a wide variety of other contexts, including ethical decision-making in the actual world? Ensuring External Validity is the harder and more important problem of rigour, yet it is one that few philosophers have even noticed, let alone begun to solve.

Sven Apel - One of the best experts on this subject based on the ideXlab platform.

  • Views on Internal and External Validity in Empirical Software Engineering
    37th IEEE/ACM International Conference on Software Engineering (ICSE), 2015
    Co-Authors: Janet Siegmund, Norbert Siegmund, Sven Apel
    Abstract:

    Empirical methods have grown common in software engineering, but there is no consensus on how to apply them properly. Is practical relevance key? Do internally valid studies have any value? Should we replicate more to address the tradeoff between internal and External Validity? We asked the community how empirical research should take place in software engineering, with a focus on the tradeoff between internal and External Validity and replication, complemented with a literature review about the status of empirical research in software engineering. We found that the opinions differ considerably, and that there is no consensus in the community on when to focus on internal or External Validity and how to conduct and review replications.
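
    As an illustration only, the kind of community disagreement the survey reports could be tabulated in a few lines of code. The sketch below is hypothetical: the answer categories and responses are invented here and are not data from the study.

    # Illustrative only: tally hypothetical answers to "what should an empirical
    # study prioritize?" -- the categories and responses are invented, not survey data.
    from collections import Counter

    responses = [
        "internal validity", "external validity", "depends on the research question",
        "internal validity", "depends on the research question", "replicate more",
        "external validity", "depends on the research question",
    ]

    tally = Counter(responses)
    total = len(responses)
    for answer, count in tally.most_common():
        # Print each answer with its count and share of all responses.
        print(f"{answer:>32}: {count} ({100 * count / total:.0f}%)")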

Janet Siegmund - One of the best experts on this subject based on the ideXlab platform.

  • Views on Internal and External Validity in Empirical Software Engineering
    37th IEEE/ACM International Conference on Software Engineering (ICSE), 2015
    Co-Authors: Janet Siegmund, Norbert Siegmund, Sven Apel
    Abstract:

    Empirical methods have grown common in software engineering, but there is no consensus on how to apply them properly. Is practical relevance key? Do internally valid studies have any value? Should we replicate more to address the tradeoff between internal and External Validity? We asked the community how empirical research should take place in software engineering, with a focus on the tradeoff between internal and External Validity and replication, complemented with a literature review about the status of empirical research in software engineering. We found that the opinions differ considerably, and that there is no consensus in the community on when to focus on internal or External Validity and how to conduct and review replications.

Steven H. Woolf - One of the best experts on this subject based on the ideXlab platform.

  • External Validity Reporting in Prevention Research
    American Journal of Preventive Medicine, 2008
    Co-Authors: Kevin Patrick, F. Douglas Scutchfield, Steven H. Woolf
    Abstract:

    Research is performed for two primary purposes: to expand the sum total of what we know and to find ways of applying what we know to solve problems or answer specific questions. These are sometimes described as basic and applied research, and prevention researchers engage in both. Epidemiologists discover the complicated and often intertwined causes of a disease and provide this information to clinical or public health researchers who then incorporate it into new approaches to reduce the incidence of that disease or develop strategies for treatments that improve health outcomes and quality of life. While it is sometimes difficult to place a value on the results of basic research, the value of applied research is more readily determined through an assessment of its utility to practitioners. Is the study generalizable? Does it have implications for how I care for my patients, or how I allocate the budget in my local public health department? The current buzzword for this is “translation”—as in “how well can the research be translated?” This is of increasing importance at a time when what we know about the causes of premature death and disability seems to be far outstripping our ability to intervene on those causes, whether they are genetic, behavioral, social, ecologic, or, more likely, some combination of these.

    Relevant to this topic, this issue of the American Journal of Preventive Medicine includes an article by Klesges, Dzewaltowski, and Glasgow on the extent to which publications of the results of controlled trials of interventions to prevent childhood obesity include sufficient details to allow readers to determine if the results are generalizable. Applying criteria based in part on the RE-AIM framework (www.re-aim.org) as well as a more recently proposed standardized approach to quality assessment of reports of health promotion in prevention research, the authors found uneven attention to the criteria to be the rule among the 19 reviewed studies. Of the 24 dimensions of External Validity expected in an optimal paper, only four (descriptions of the target audience and target setting, inclusion/exclusion criteria, and attrition rate) were reported more than 90% of the time. At the other end of the spectrum, six dimensions were addressed less than 10% of the time: representativeness of the participants and of the settings, participation rate, whether implementation differed by staff, whether effects were moderated by staff or setting, and program sustainability. The authors conclude that the areas of reporting on External Validity in need of the most attention can be summarized as the “3-Rs”: better reports of the representativeness of participants, settings, and intervention staff; of the robustness of the intervention across different populations and staff-delivery approaches; and of the replicability of study results in other places.

    So what are we to do with these findings? If they are generalizable to the wider body of published intervention research, they suggest several potential causes of the disconnections between the generation of knowledge and its ultimate use. Apropos the emphasis noted above on the need for the translation of knowledge into practice, that something is “lost in translation” is increasingly used to characterize this circumstance. Klesges and colleagues note that the lack of details on standardized criteria for External Validity creates a situation in which it is difficult if not impossible to synthesize knowledge across many studies. Moreover, the lack of attention to External Validity in reports of research, in reality, may reflect a lack of attention to these issues in the conduct of the research itself. Caught up in the details of choosing a research design, appropriate measures, or a clever strategy for analysis, a researcher sometimes may overlook important participant-, setting-, or staff-level issues that are critical to the generalizability of the work. An additional problem is the inability to extract meaning from a study—or groups of studies—and put it in terms that policymakers and program planners can understand and use.

    Other stakeholders are likely to play a role in how rapidly this issue is addressed and in what ways. Klesges and her colleagues identify one group—journal editors—and highlight the 2006 Editors’ meeting convened by the CDC, the Robert Wood Johnson Foundation, and the National Institutes of Health (NIH) Office of Behavioral and Social Science Research. The meeting (with AJPM represented by KP) involved some very thoughtful discussion and, in the end, the editors endorsed the notion that it was important to explore ways to improve reporting of External Validity in scientific communication. But securing complete agreement on a unified strategy to accomplish this proved to be elusive, and at least one of us left the meeting sensitized […]
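
    To make the review's tabulation concrete (for each External Validity dimension, what share of the 19 reviewed studies report it?), here is a minimal sketch. The dimension names and per-study reporting data below are placeholders, not the published results of Klesges, Dzewaltowski, and Glasgow.

    # Sketch: share of reviewed studies reporting each External Validity dimension.
    # Dimension names and reporting data are placeholders, not the review's data.
    N_STUDIES = 19

    # For each dimension, the indices (0..18) of the studies that reported it.
    reported_by = {
        "target audience described": set(range(19)),      # placeholder
        "inclusion/exclusion criteria": set(range(18)),    # placeholder
        "representativeness of settings": {3},             # placeholder
        "program sustainability": set(),                   # placeholder
    }

    for dimension, studies in sorted(reported_by.items()):
        rate = len(studies) / N_STUDIES
        # Flag dimensions reported in more than 90% or fewer than 10% of studies.
        note = "well reported" if rate > 0.9 else ("rarely reported" if rate < 0.1 else "")
        print(f"{dimension:32s} {rate:5.0%}  {note}")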

Jeffrey W. Lucas - One of the best experts on this subject based on the ideXlab platform.

  • Theory-Testing, Generalization, and the Problem of External Validity
    Sociological Theory, 2003
    Co-Authors: Jeffrey W. Lucas
    Abstract:

    External Validity refers to the generalization of research findings, either from a sample to a larger population or to settings and populations other than those studied. While definitions vary, discussions generally agree that experiments are lower in External Validity than other methodological approaches. Further, External Validity is widely treated as an issue to be addressed through methodological procedures. When testing theories, all measures are indirect indicators of theoretical constructs, and no methodological procedures taken alone can produce External Validity. External Validity can be assessed through determining (1) the extent to which empirical measures accurately reflect theoretical constructs, (2) whether the research setting conforms to the scope of the theory under test, (3) our confidence that findings will repeat under identical conditions, (4) whether findings support the theory being tested, and (5) the confirmatory status of the theory under test. In these ways, External Validity is foremost a theoretical issue and can only be addressed by an examination of the interplay between theory and methods.
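
    The five assessment questions read naturally as a checklist applied to a given study. The sketch below is an illustrative encoding of that checklist, not a procedure proposed in the paper; the example study and its answers are hypothetical.

    # Illustrative checklist for the five External Validity questions above.
    # The example assessment is hypothetical.
    from dataclasses import dataclass, fields

    @dataclass
    class ExternalValidityAssessment:
        measures_reflect_constructs: bool   # (1) empirical measures reflect theoretical constructs
        setting_within_theory_scope: bool   # (2) research setting conforms to the theory's scope
        findings_expected_to_repeat: bool   # (3) confidence that findings repeat under identical conditions
        findings_support_theory: bool       # (4) findings support the theory being tested
        theory_well_confirmed: bool         # (5) confirmatory status of the theory under test

        def open_questions(self):
            # Return the names of the criteria that are not (yet) satisfied.
            return [f.name for f in fields(self) if not getattr(self, f.name)]

    # A hypothetical laboratory experiment assessed against the five questions.
    assessment = ExternalValidityAssessment(True, True, True, False, False)
    print("Unresolved for generalization:", assessment.open_questions() or "none")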
