Diagnostic Accuracy

The Experts below are selected from a list of 185,838 Experts worldwide, ranked by the ideXlab platform

Patrick M.m. Bossuyt - One of the best experts on this subject based on the ideXlab platform.

  • Adaptive trial designs in Diagnostic Accuracy research.
    Statistics in medicine, 2019
    Co-Authors: Antonia Zapf, Patrick M.m. Bossuyt, Johannes B Reitsma, Maria Stark, Oke Gerke, Christoph Ehret, Norbert Benda, Jon Deeks, Todd A. Alonzo, Tim Friede
    Abstract:

    The aim of Diagnostic Accuracy studies is to evaluate how accurately a Diagnostic test can distinguish diseased from nondiseased individuals. Depending on the research question, different study designs and Accuracy measures are appropriate. As prior knowledge in the planning phase is often very limited, modifying design aspects such as the sample size during the ongoing trial could increase the efficiency of Diagnostic trials. In intervention studies, group sequential and adaptive designs are well established. Such designs are characterized by preplanned interim analyses, giving the opportunity to stop early for efficacy or futility, or to modify elements of the study design. In contrast, such flexible designs are less common in Diagnostic Accuracy studies, even though they are just as important there as in intervention studies. Diagnostic Accuracy studies do, however, have specific features that may require adaptations of the statistical methods or may lead to specific advantages or limitations of sequential and adaptive designs. In this article, we summarize the current status of methodological research on, and applications of, flexible designs in Diagnostic Accuracy research. Furthermore, we outline and advocate the future development of adaptive design methodology and its use in Diagnostic Accuracy trials from an interdisciplinary viewpoint, by which we mean collaboration between experts in academic and nonacademic research.
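
    To make the preplanned interim analysis concrete, here is a minimal sketch (an editor's illustration, not code from the paper) of a single futility look in a single-arm study of sensitivity; the target sensitivity, confidence level, and interim counts are assumed values.

```python
# A minimal sketch of one preplanned interim futility look in a study of
# sensitivity. All thresholds and counts are illustrative assumptions.
from scipy.stats import binomtest

TARGET_SENSITIVITY = 0.90  # minimally acceptable sensitivity (assumed)
CONFIDENCE_LEVEL = 0.95

def stop_for_futility(true_positives: int, diseased_verified: int) -> bool:
    """True if the interim data already rule out the target sensitivity."""
    ci = binomtest(true_positives, diseased_verified).proportion_ci(
        confidence_level=CONFIDENCE_LEVEL, method="exact"
    )
    return ci.high < TARGET_SENSITIVITY

# Interim look: 30 of 40 verified diseased subjects tested positive.
print(stop_for_futility(30, 40))  # True: even the upper bound is below 0.90
```

    A full group sequential design would also prespecify stopping boundaries that preserve the overall type I error across looks; the sketch shows only the futility side.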

  • STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.
    Clinical chemistry, 2015
    Co-Authors: Patrick M.m. Bossuyt, Constantine A Gatsonis, Paul P Glasziou, Les M Irwig, Jeroen G Lijmer, Drummond Rennie, Johannes B Reitsma, David E Bruns, David Moher, Henrica C. W. De Vet
    Abstract:

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of Diagnostic Accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a Diagnostic Accuracy study. This update incorporates recent evidence about sources of bias and variability in Diagnostic Accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of Diagnostic Accuracy studies.

  • STARD 2015: an updated list of essential items for reporting Diagnostic Accuracy studies
    BMJ (Clinical research ed.), 2015
    Co-Authors: Patrick M.m. Bossuyt, Constantine A Gatsonis, Paul P Glasziou, Les M Irwig, Jeroen G Lijmer, Drummond Rennie, Johannes B Reitsma, David E Bruns, David Moher, Henrica C. W. De Vet
    Abstract:

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of Diagnostic Accuracy studies, the Standards for Reporting Diagnostic Accuracy (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a Diagnostic Accuracy study. This update incorporates recent evidence about sources of bias and variability in Diagnostic Accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of Diagnostic Accuracy studies.

  • Beyond Diagnostic Accuracy: The Clinical Utility of Diagnostic Tests
    Clinical Chemistry, 2012
    Co-Authors: Patrick M.m. Bossuyt, Johannes B Reitsma, Kristian Linnet, Karel G.m. Moons
    Abstract:

    Like any other medical technology or intervention, Diagnostic tests should be thoroughly evaluated before their introduction into daily practice. Increasingly, decision makers, physicians, and other users of Diagnostic tests request more than simple measures of a test's analytical or technical performance and Diagnostic Accuracy; they would also like to see testing lead to health benefits. In this last article of our series, we introduce the notion of clinical utility, which expresses, preferably in a quantitative form, to what extent Diagnostic testing improves health outcomes relative to the current best alternative, which could be some other form of testing or no testing at all. In most cases, Diagnostic tests improve patient outcomes by providing information that can be used to identify patients who will benefit from helpful downstream management actions, such as effective treatment in individuals with positive test results and no treatment for those with negative results. We describe how comparative randomized clinical trials can be used to estimate clinical utility. We contrast the definition of clinical utility with that of the personal utility of tests and markers. We show how Diagnostic Accuracy can be linked to clinical utility through an appropriate definition of the target condition in Diagnostic Accuracy studies.
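
    One common quantitative expression of clinical utility, though not necessarily the one the authors intend, is the net benefit measure from decision curve analysis, which weighs true positives against false positives at a chosen treatment threshold. A sketch with hypothetical counts:

```python
# Net benefit of a test-and-treat policy: NB = TP/N - (FP/N) * pt/(1 - pt),
# where pt is the probability threshold at which treatment is worthwhile.
# All counts below are hypothetical.

def net_benefit(tp: int, fp: int, n: int, p_threshold: float) -> float:
    odds = p_threshold / (1.0 - p_threshold)  # harm-to-benefit weighting
    return tp / n - (fp / n) * odds

# Hypothetical cohort of 1000 patients, 10% prevalence, threshold 0.10.
nb_test = net_benefit(tp=80, fp=150, n=1000, p_threshold=0.10)
nb_treat_all = net_benefit(tp=100, fp=900, n=1000, p_threshold=0.10)
print(f"test-and-treat: {nb_test:.3f}, treat-all: {nb_treat_all:.3f}")
# test-and-treat: 0.063, treat-all: 0.000, so testing adds utility here
```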

  • Verification problems in Diagnostic Accuracy studies: consequences and solutions
    BMJ (Clinical research ed.), 2011
    Co-Authors: Joris A. H. De Groot, Patrick M.m. Bossuyt, Johannes B Reitsma, Anne W.s. Rutjes, Nandini Dendukuri, Kristel J.m. Janssen, Karel G.m. Moons
    Abstract:

    The Accuracy of a Diagnostic test or combination of tests (such as in a Diagnostic model) is the ability to correctly identify patients with or without the target disease. In studies of Diagnostic Accuracy, the results of the test or model under study are verified by comparing them with the results of a reference standard, applied to the same patients, to establish disease status (see first panel in the figure).1 Measures such as predictive values, post-test probabilities, ROC (receiver operating characteristic) curves, sensitivity, specificity, likelihood ratios, and odds ratios express how well the results of an index test agree with the outcome of the reference standard.2 Biased and exaggerated estimates of Diagnostic Accuracy can lead to inefficiencies in Diagnostic testing in practice, unnecessary costs, and incorrect treatment decisions by physicians. [Figure: Diagnostic Accuracy studies with (a) complete verification by the same reference standard, (b) partial verification, or (c) differential verification.] The reference standard ideally provides error-free classification of the presence or absence of the disease outcome. In some cases, it is not possible to verify the definitive presence or absence of disease in all patients with the (single) reference standard, which may result in bias. In this paper, we describe the most important types of disease verification problems using examples from published Diagnostic Accuracy studies. We also propose solutions to alleviate the associated biases. Often not all study subjects who undergo the index test receive the reference standard, leading to missing data on disease outcome (see middle panel in the figure). The bias associated with such situations of partial verification is known as partial verification bias, work-up bias, or referral bias.3 4 5 Various mechanisms can lead to partial verification (see examples in Table 1: Examples of Diagnostic Accuracy studies with problems in disease verification). When the condition of interest …
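
    For the partial verification problem described above, one established remedy is the correction of Begg and Greenes, which reweights the verified subjects back to the full cohort under the assumption that verification depends only on the index test result. A sketch with hypothetical counts:

```python
# Sketch of the Begg-Greenes correction for partial verification bias.
# Assumption: verification depends only on the index test result.

def begg_greenes(n_pos, n_neg, v_pos, v_neg, d_pos, d_neg):
    """Corrected sensitivity and specificity under partial verification.

    n_pos, n_neg: all subjects with positive/negative index test results
    v_pos, v_neg: how many in each group received the reference standard
    d_pos, d_neg: verified subjects found diseased in each group
    """
    p_d_pos = d_pos / v_pos  # P(disease | test positive), verified subset
    p_d_neg = d_neg / v_neg  # P(disease | test negative)
    tp = n_pos * p_d_pos     # projected counts for the full cohort
    fn = n_neg * p_d_neg
    fp = n_pos * (1 - p_d_pos)
    tn = n_neg * (1 - p_d_neg)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cohort: 300 test positives (90% verified) and 700 test
# negatives (only 20% verified), the classic work-up pattern.
sens, spec = begg_greenes(n_pos=300, n_neg=700, v_pos=270, v_neg=140,
                          d_pos=90, d_neg=7)
naive_sens = 90 / (90 + 7)  # complete-case estimate, biased upward
print(f"corrected sensitivity {sens:.2f} vs naive {naive_sens:.2f}")  # 0.74 vs 0.93
```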

Henrica C. W. De Vet - One of the best experts on this subject based on the ideXlab platform.

  • STARD 2015 guidelines for reporting Diagnostic Accuracy studies: Explanation and elaboration
    BMJ Open, 2016
    Co-Authors: Jérémie F. Cohen, Constantine A Gatsonis, Les M Irwig, Daniël A. Korevaar, Lotty Hooft, Johannes B Reitsma, David E Bruns, Douglas G Altman, Deborah Levine, Henrica C. W. De Vet
    Abstract:

    Diagnostic Accuracy studies are, like other clinical studies, at risk of bias due to shortcomings in design and conduct, and the results of a Diagnostic Accuracy study may not apply to other patient groups and settings. Readers of study reports need to be informed about study design and conduct, in sufficient detail to judge the trustworthiness and applicability of the study findings. The STARD statement (Standards for Reporting of Diagnostic Accuracy Studies) was developed to improve the completeness and transparency of reports of Diagnostic Accuracy studies. STARD contains a list of essential items that can be used as a checklist, by authors, reviewers and other readers, to ensure that a report of a Diagnostic Accuracy study contains the necessary information. STARD was recently updated. All updated STARD materials, including the checklist, are available at http://www.equator-network.org/reporting-guidelines/stard. Here, we present the STARD 2015 explanation and elaboration document. Through commented examples of appropriate reporting, we clarify the rationale for each of the 30 items on the STARD 2015 checklist, and describe what is expected from authors in developing sufficiently informative study reports.

  • STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.
    Clinical chemistry, 2015
    Co-Authors: Patrick M.m. Bossuyt, Constantine A Gatsonis, Paul P Glasziou, Les M Irwig, Jeroen G Lijmer, Drummond Rennie, Johannes B Reitsma, David E Bruns, David Moher, Henrica C. W. De Vet
    Abstract:

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of Diagnostic Accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a Diagnostic Accuracy study. This update incorporates recent evidence about sources of bias and variability in Diagnostic Accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of Diagnostic Accuracy studies.

  • STARD 2015: an updated list of essential items for reporting Diagnostic Accuracy studies
    BMJ (Clinical research ed.), 2015
    Co-Authors: Patrick M.m. Bossuyt, Constantine A Gatsonis, Paul P Glasziou, Les M Irwig, Jeroen G Lijmer, Drummond Rennie, Johannes B Reitsma, David E Bruns, David Moher, Henrica C. W. De Vet
    Abstract:

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of Diagnostic Accuracy studies, the Standards for Reporting Diagnostic Accuracy (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a Diagnostic Accuracy study. This update incorporates recent evidence about sources of bias and variability in Diagnostic Accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of Diagnostic Accuracy studies.

  • Quality of reporting of Diagnostic Accuracy studies
    Radiology, 2005
    Co-Authors: Nynke Smidt, Patrick M.m. Bossuyt, Johannes B Reitsma, Anne W.s. Rutjes, Daniëlle A.w.m. Van Der Windt, Raymond W. J. G. Ostelo, Lex M. Bouter, Henrica C. W. De Vet
    Abstract:

    PURPOSE: To evaluate quality of reporting in Diagnostic Accuracy articles published in 2000 in journals with an impact factor of at least 4, by using items of the Standards for Reporting of Diagnostic Accuracy (STARD) statement published later, in 2003. MATERIALS AND METHODS: English-language articles on primary Diagnostic Accuracy studies in 2000 were identified with a validated search strategy in MEDLINE. Articles published in journals with an impact factor of 4 or higher that regularly publish articles on Diagnostic Accuracy were selected. Two independent reviewers evaluated quality of reporting by using the STARD statement, which consists of 25 items and encourages use of a flow diagram. The total STARD score for each article was calculated by summing the number of reported items. Subgroup analyses were performed for study design (case-control or cohort study) by using Student t tests for continuous outcomes and χ² tests for dichotomous outcomes. RESULTS: Included were 124 articles published in 2000 in 12 journals: 33 case-con...
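
    The scoring and subgroup comparison described in this abstract can be outlined in a few lines; the sketch below uses simulated 0/1 item data, not the study's data, to show how the total STARD score and the two significance tests fit together.

```python
# Simulated illustration: sum the 25 STARD items per article, compare
# designs with a t test (continuous) and a chi-square test (dichotomous).
# Group sizes and all data here are simulated, not taken from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
case_control = rng.integers(0, 2, size=(33, 25))  # articles x items (0/1)
cohort = rng.integers(0, 2, size=(91, 25))

scores_cc = case_control.sum(axis=1)  # total STARD score per article
scores_co = cohort.sum(axis=1)
t_stat, p_val = stats.ttest_ind(scores_cc, scores_co)
print(f"means {scores_cc.mean():.1f} vs {scores_co.mean():.1f}, p = {p_val:.2f}")

# Dichotomous outcome, e.g. whether a flow diagram was reported (item 0 here)
table = [[g[:, 0].sum(), len(g) - g[:, 0].sum()] for g in (case_control, cohort)]
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"chi-square p = {p_chi:.2f}")
```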

  • Towards Complete and Accurate Reporting of Studies of Diagnostic Accuracy: The STARD Initiative
    Annals of internal medicine, 2003
    Co-Authors: Patrick M.m. Bossuyt, Constantine A Gatsonis, Paul P Glasziou, Les M Irwig, Jeroen G Lijmer, Drummond Rennie, Johannes B Reitsma, David E Bruns, David Moher, Henrica C. W. De Vet
    Abstract:

    The STARD (Standards for Reporting of Diagnostic Accuracy) initiative provides carefully developed consensus-based guidelines for reporting of studies of Diagnostic Accuracy, enabling readers to be...

Johannes B Reitsma - One of the best experts on this subject based on the ideXlab platform.

  • Adaptive trial designs in Diagnostic Accuracy research.
    Statistics in medicine, 2019
    Co-Authors: Antonia Zapf, Patrick M.m. Bossuyt, Johannes B Reitsma, Maria Stark, Oke Gerke, Christoph Ehret, Norbert Benda, Jon Deeks, Todd A. Alonzo, Tim Friede
    Abstract:

    The aim of Diagnostic Accuracy studies is to evaluate how accurately a Diagnostic test can distinguish diseased from nondiseased individuals. Depending on the research question, different study designs and Accuracy measures are appropriate. As prior knowledge in the planning phase is often very limited, modifying design aspects such as the sample size during the ongoing trial could increase the efficiency of Diagnostic trials. In intervention studies, group sequential and adaptive designs are well established. Such designs are characterized by preplanned interim analyses, giving the opportunity to stop early for efficacy or futility, or to modify elements of the study design. In contrast, such flexible designs are less common in Diagnostic Accuracy studies, even though they are just as important there as in intervention studies. Diagnostic Accuracy studies do, however, have specific features that may require adaptations of the statistical methods or may lead to specific advantages or limitations of sequential and adaptive designs. In this article, we summarize the current status of methodological research on, and applications of, flexible designs in Diagnostic Accuracy research. Furthermore, we outline and advocate the future development of adaptive design methodology and its use in Diagnostic Accuracy trials from an interdisciplinary viewpoint, by which we mean collaboration between experts in academic and nonacademic research.

  • STARD 2015 guidelines for reporting Diagnostic Accuracy studies: Explanation and elaboration
    BMJ Open, 2016
    Co-Authors: Jérémie F. Cohen, Constantine A Gatsonis, Les M Irwig, Daniël A. Korevaar, Lotty Hooft, Johannes B Reitsma, David E Bruns, Douglas G Altman, Deborah Levine, Henrica C. W. De Vet
    Abstract:

    Diagnostic Accuracy studies are, like other clinical studies, at risk of bias due to shortcomings in design and conduct, and the results of a Diagnostic Accuracy study may not apply to other patient groups and settings. Readers of study reports need to be informed about study design and conduct, in sufficient detail to judge the trustworthiness and applicability of the study findings. The STARD statement (Standards for Reporting of Diagnostic Accuracy Studies) was developed to improve the completeness and transparency of reports of Diagnostic Accuracy studies. STARD contains a list of essential items that can be used as a checklist, by authors, reviewers and other readers, to ensure that a report of a Diagnostic Accuracy study contains the necessary information. STARD was recently updated. All updated STARD materials, including the checklist, are available at http://www.equator-network.org/reporting-guidelines/stard. Here, we present the STARD 2015 explanation and elaboration document. Through commented examples of appropriate reporting, we clarify the rationale for each of the 30 items on the STARD 2015 checklist, and describe what is expected from authors in developing sufficiently informative study reports.

  • STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.
    Clinical chemistry, 2015
    Co-Authors: Patrick M.m. Bossuyt, Constantine A Gatsonis, Paul P Glasziou, Les M Irwig, Jeroen G Lijmer, Drummond Rennie, Johannes B Reitsma, David E Bruns, David Moher, Henrica C. W. De Vet
    Abstract:

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of Diagnostic Accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a Diagnostic Accuracy study. This update incorporates recent evidence about sources of bias and variability in Diagnostic Accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of Diagnostic Accuracy studies.

  • STARD 2015: an updated list of essential items for reporting Diagnostic Accuracy studies
    BMJ (Clinical research ed.), 2015
    Co-Authors: Patrick M.m. Bossuyt, Constantine A Gatsonis, Paul P Glasziou, Les M Irwig, Jeroen G Lijmer, Drummond Rennie, Johannes B Reitsma, David E Bruns, David Moher, Henrica C. W. De Vet
    Abstract:

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of Diagnostic Accuracy studies, the Standards for Reporting Diagnostic Accuracy (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a Diagnostic Accuracy study. This update incorporates recent evidence about sources of bias and variability in Diagnostic Accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of Diagnostic Accuracy studies.

  • Beyond Diagnostic Accuracy: The Clinical Utility of Diagnostic Tests
    Clinical Chemistry, 2012
    Co-Authors: Patrick M.m. Bossuyt, Johannes B Reitsma, Kristian Linnet, Karel G.m. Moons
    Abstract:

    Like any other medical technology or intervention, Diagnostic tests should be thoroughly evaluated before their introduction into daily practice. Increasingly, decision makers, physicians, and other users of Diagnostic tests request more than simple measures of a test's analytical or technical performance and Diagnostic Accuracy; they would also like to see testing lead to health benefits. In this last article of our series, we introduce the notion of clinical utility, which expresses, preferably in a quantitative form, to what extent Diagnostic testing improves health outcomes relative to the current best alternative, which could be some other form of testing or no testing at all. In most cases, Diagnostic tests improve patient outcomes by providing information that can be used to identify patients who will benefit from helpful downstream management actions, such as effective treatment in individuals with positive test results and no treatment for those with negative results. We describe how comparative randomized clinical trials can be used to estimate clinical utility. We contrast the definition of clinical utility with that of the personal utility of tests and markers. We show how Diagnostic Accuracy can be linked to clinical utility through an appropriate definition of the target condition in Diagnostic Accuracy studies.

David E Bruns - One of the best experts on this subject based on the ideXlab platform.

  • STARD 2015 guidelines for reporting Diagnostic Accuracy studies: Explanation and elaboration
    BMJ Open, 2016
    Co-Authors: Jérémie F. Cohen, Constantine A Gatsonis, Les M Irwig, Daniël A. Korevaar, Lotty Hooft, Johannes B Reitsma, David E Bruns, Douglas G Altman, Deborah Levine, Henrica C. W. De Vet
    Abstract:

    Diagnostic Accuracy studies are, like other clinical studies, at risk of bias due to shortcomings in design and conduct, and the results of a Diagnostic Accuracy study may not apply to other patient groups and settings. Readers of study reports need to be informed about study design and conduct, in sufficient detail to judge the trustworthiness and applicability of the study findings. The STARD statement (Standards for Reporting of Diagnostic Accuracy Studies) was developed to improve the completeness and transparency of reports of Diagnostic Accuracy studies. STARD contains a list of essential items that can be used as a checklist, by authors, reviewers and other readers, to ensure that a report of a Diagnostic Accuracy study contains the necessary information. STARD was recently updated. All updated STARD materials, including the checklist, are available at http://www.equator-network.org/reporting-guidelines/stard. Here, we present the STARD 2015 explanation and elaboration document. Through commented examples of appropriate reporting, we clarify the rationale for each of the 30 items on the STARD 2015 checklist, and describe what is expected from authors in developing sufficiently informative study reports.

  • STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.
    Clinical chemistry, 2015
    Co-Authors: Patrick M.m. Bossuyt, Constantine A Gatsonis, Paul P Glasziou, Les M Irwig, Jeroen G Lijmer, Drummond Rennie, Johannes B Reitsma, David E Bruns, David Moher, Henrica C. W. De Vet
    Abstract:

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of Diagnostic Accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a Diagnostic Accuracy study. This update incorporates recent evidence about sources of bias and variability in Diagnostic Accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of Diagnostic Accuracy studies.

  • STARD 2015: an updated list of essential items for reporting Diagnostic Accuracy studies
    BMJ (Clinical research ed.), 2015
    Co-Authors: Patrick M.m. Bossuyt, Constantine A Gatsonis, Paul P Glasziou, Les M Irwig, Jeroen G Lijmer, Drummond Rennie, Johannes B Reitsma, David E Bruns, David Moher, Henrica C. W. De Vet
    Abstract:

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of Diagnostic Accuracy studies, the Standards for Reporting Diagnostic Accuracy (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a Diagnostic Accuracy study. This update incorporates recent evidence about sources of bias and variability in Diagnostic Accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of Diagnostic Accuracy studies.

  • Towards Complete and Accurate Reporting of Studies of Diagnostic Accuracy: The STARD Initiative
    Veterinary Clinical Pathology, 2007
    Co-Authors: Patrick M.m. Bossuyt, Constantine A Gatsonis, Paul P Glasziou, Les M Irwig, Jeroen G Lijmer, Drummond Rennie, Johannes B Reitsma, David E Bruns, David Moher
    Abstract:

    Background: To comprehend the results of Diagnostic Accuracy studies, readers must understand the design, conduct, analysis, and results of such studies. That goal can be achieved only through complete transparency from authors. Objective: To improve the Accuracy and completeness of reporting of studies of Diagnostic Accuracy, to allow readers to assess the potential for bias in the study and to evaluate its generalisability. Methods: The Standards for Reporting of Diagnostic Accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of Diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organisations shortened this list during a 2-day consensus meeting with the goal of developing a checklist and a generic flow diagram for studies of Diagnostic Accuracy. Results: The search for published guidelines on Diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. The consensus meeting shortened the list to 25 items, using evidence on bias whenever available. A prototypical flow diagram provides information about the method of patient recruitment, the order of test execution, and the numbers of patients undergoing the test under evaluation, the reference standard, or both. Conclusions: Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of Diagnostic Accuracy should improve, to the advantage of clinicians, researchers, reviewers, journals, and the public.

  • Reporting studies of Diagnostic Accuracy according to a standard method; the Standards for Reporting of Diagnostic Accuracy (STARD)
    Nederlands tijdschrift voor geneeskunde, 2003
    Co-Authors: Patrick M.m. Bossuyt, Constantine A Gatsonis, Paul P Glasziou, Les M Irwig, Jeroen G Lijmer, Drummond Rennie, Johannes B Reitsma, David E Bruns, David Moher, Henrica C. W. De Vet
    Abstract:

    The objective of the 'Standards for Reporting of Diagnostic Accuracy' (STARD) initiative is to improve the reporting of studies of Diagnostic Accuracy, so as to allow readers to assess the potential for bias in a study and to evaluate the generalisability of its results. The group searched the literature to identify publications on the appropriate conduct and reporting of Diagnostic studies. This was used to draw up a list of potential items. During a consensus meeting, a group of researchers, medical journal editors, and members of professional organisations reduced this list to a usable checklist. Wherever possible, evidence from the literature was used to justify the decisions made. The search for published guidelines about Diagnostic research yielded 33 previously published checklists, from which a list of 75 potential items was extracted. At the consensus meeting, participants shortened the list to a 25-item checklist. A generic flow diagram was drawn up to provide guidance on the method for including patients, the order in which tests were to be conducted, and the number of patients to undergo the test being evaluated, the reference standard, or both. A scientific publication can only be assessed when the reporting is both correct and complete. Use of the checklist and flow diagram will improve the quality of reports produced, to the advantage of clinicians, researchers, reviewers, journal editors and other interested parties.

Drummond Rennie - One of the best experts on this subject based on the ideXlab platform.

  • STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.
    Clinical chemistry, 2015
    Co-Authors: Patrick M.m. Bossuyt, Constantine A Gatsonis, Paul P Glasziou, Les M Irwig, Jeroen G Lijmer, Drummond Rennie, Johannes B Reitsma, David E Bruns, David Moher, Henrica C. W. De Vet
    Abstract:

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of Diagnostic Accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a Diagnostic Accuracy study. This update incorporates recent evidence about sources of bias and variability in Diagnostic Accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of Diagnostic Accuracy studies.

  • STARD 2015: an updated list of essential items for reporting Diagnostic Accuracy studies
    BMJ (Clinical research ed.), 2015
    Co-Authors: Patrick M.m. Bossuyt, Constantine A Gatsonis, Paul P Glasziou, Les M Irwig, Jeroen G Lijmer, Drummond Rennie, Johannes B Reitsma, David E Bruns, David Moher, Henrica C. W. De Vet
    Abstract:

    Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of Diagnostic Accuracy studies, the Standards for Reporting Diagnostic Accuracy (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a Diagnostic Accuracy study. This update incorporates recent evidence about sources of bias and variability in Diagnostic Accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of Diagnostic Accuracy studies.

  • The STARD Statement for Reporting Diagnostic Accuracy Studies: Application to the History and Physical Examination
    Journal of general internal medicine, 2008
    Co-Authors: David L. Simel, Drummond Rennie, Patrick M.m. Bossuyt
    Abstract:

    Objective: The Standards for Reporting of Diagnostic Accuracy (STARD) statement provided guidelines for investigators conducting Diagnostic Accuracy studies. We reviewed each item in the statement for its applicability to clinical examination Diagnostic Accuracy research, viewing each discrete aspect of the history and physical examination as a Diagnostic test.
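
    As an illustration of that viewpoint, any discrete history or examination finding yields a 2x2 table against the reference diagnosis, from which the usual Accuracy measures follow. A sketch with hypothetical counts:

```python
# Treating a single physical-examination finding as a Diagnostic test:
# sensitivity, specificity, and likelihood ratios from a 2x2 table.
# The counts are hypothetical, for illustration only.

def likelihood_ratios(tp: int, fp: int, fn: int, tn: int):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_positive = sensitivity / (1 - specificity)   # LR+
    lr_negative = (1 - sensitivity) / specificity   # LR-
    return lr_positive, lr_negative

lr_pos, lr_neg = likelihood_ratios(tp=45, fp=30, fn=15, tn=110)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")  # LR+ = 3.5, LR- = 0.32
```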

  • Towards Complete and Accurate Reporting of Studies of Diagnostic Accuracy: The STARD Initiative
    Veterinary Clinical Pathology, 2007
    Co-Authors: Patrick M.m. Bossuyt, Constantine A Gatsonis, Paul P Glasziou, Les M Irwig, Jeroen G Lijmer, Drummond Rennie, Johannes B Reitsma, David E Bruns, David Moher
    Abstract:

    Background: To comprehend the results of Diagnostic Accuracy studies, readers must understand the design, conduct, analysis, and results of such studies. That goal can be achieved only through complete transparency from authors. Objective: To improve the Accuracy and completeness of reporting of studies of Diagnostic Accuracy, to allow readers to assess the potential for bias in the study and to evaluate its generalisability. Methods: The Standards for Reporting of Diagnostic Accuracy (STARD) steering committee searched the literature to identify publications on the appropriate conduct and reporting of Diagnostic studies and extracted potential items into an extensive list. Researchers, editors, and members of professional organisations shortened this list during a 2-day consensus meeting with the goal of developing a checklist and a generic flow diagram for studies of Diagnostic Accuracy. Results: The search for published guidelines on Diagnostic research yielded 33 previously published checklists, from which we extracted a list of 75 potential items. The consensus meeting shortened the list to 25 items, using evidence on bias whenever available. A prototypical flow diagram provides information about the method of patient recruitment, the order of test execution, and the numbers of patients undergoing the test under evaluation, the reference standard, or both. Conclusions: Evaluation of research depends on complete and accurate reporting. If medical journals adopt the checklist and the flow diagram, the quality of reporting of studies of Diagnostic Accuracy should improve, to the advantage of clinicians, researchers, reviewers, journals, and the public.

  • Reporting studies of Diagnostic Accuracy according to a standard method; the Standards for Reporting of Diagnostic Accuracy (STARD)
    Nederlands tijdschrift voor geneeskunde, 2003
    Co-Authors: Patrick M.m. Bossuyt, Constantine A Gatsonis, Paul P Glasziou, Les M Irwig, Jeroen G Lijmer, Drummond Rennie, Johannes B Reitsma, David E Bruns, David Moher, Henrica C. W. De Vet
    Abstract:

    The objective of the 'Standards for Reporting of Diagnostic Accuracy' (STARD) initiative is to improve the reporting of studies of Diagnostic Accuracy, so as to allow readers to assess the potential for bias in a study and to evaluate the generalisability of its results. The group searched the literature to identify publications on the appropriate conduct and reporting of Diagnostic studies. This was used to draw up a list of potential items. During a consensus meeting, a group of researchers, medical journal editors, and members of professional organisations reduced this list to a usable checklist. Wherever possible, evidence from the literature was used to justify the decisions made. The search for published guidelines about Diagnostic research yielded 33 previously published checklists, from which a list of 75 potential items was extracted. At the consensus meeting, participants shortened the list to a 25-item checklist. A generic flow diagram was drawn up to provide guidance on the method for including patients, the order in which tests were to be conducted, and the number of patients to undergo the test being evaluated, the reference standard, or both. A scientific publication can only be assessed when the reporting is both correct and complete. Use of the checklist and flow diagram will improve the quality of reports produced, to the advantage of clinicians, researchers, reviewers, journal editors and other interested parties.