Publishing Model

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 62,604 experts worldwide, ranked by the ideXlab platform

Eleanorrose Papas - One of the best experts on this subject based on the ideXlab platform.

  • Does the use of open, non-anonymous peer review in scholarly publishing introduce bias? Evidence from the F1000Research post-publication open peer review publishing model
    Journal of Information Science, 2020
    Co-Authors: Mike Thelwall, Liz Allen, Eleanorrose Papas, Zena Nyakoojo, Verena Weigert
    Abstract:

    As part of moves towards open knowledge practices, making peer review open is cited as a way to enable fuller scrutiny and transparency of assessments around research. There are now many flavours o...

  • Does the use of open, non-anonymous peer review in scholarly publishing introduce bias? Evidence from the F1000 post-publication open peer review publishing model
    arXiv: Digital Libraries, 2019
    Co-Authors: Mike Thelwall, Liz Allen, Zena Nyakoojo, Verena Weigert, Eleanorrose Papas
    Abstract:

    This study examines whether there is any evidence of bias in two areas of common critique of open, non-anonymous peer review, as used in the post-publication peer review system operated by the open-access scholarly publishing platform F1000Research. First, is there evidence of bias when a reviewer based in a specific country assesses the work of an author based in the same country? Second, are reviewers influenced by being able to see the comments, and know the origins, of previous reviewers? Methods: Scrutinising the open peer review comments published on F1000Research, we assess the extent of two frequently cited potential influences on reviewers that may result from the transparency offered by a fully attributable, open peer review publishing model: the national affiliations of authors and reviewers, and the ability of reviewers to view previously published reviewer reports before submitting their own. The effects of these potential influences were investigated for all first versions of articles published on F1000Research by 8 July 2019. In 16 of the 20 countries with the most articles, reviewers based in the same country as the author tended to give a more positive review; the difference was statistically significant in one country. Only three countries showed the reverse tendency. Second, there is no evidence of a conformity bias. When reviewers mentioned a previous review in their peer review report, they were not more likely to give the same overall judgement. Although reviewers who had longer to read previously published reviewer reports were slightly less likely to agree with previous reviewer judgements, this could be due to these articles being difficult to judge rather than to deliberate non-conformity.
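The same-country comparison described in this abstract can be sketched as a simple tally of review judgements by reviewer/author country match. The records and the numeric judgement scale below are hypothetical illustrations, not data or code from the study:

```python
from statistics import mean

# Hypothetical review records: (reviewer_country, author_country, score),
# with the overall judgement encoded numerically (here: 2 = Approved,
# 1 = Approved with Reservations, 0 = Not Approved).
reviews = [
    ("GB", "GB", 2), ("GB", "US", 1), ("US", "US", 2),
    ("US", "GB", 1), ("DE", "DE", 2), ("DE", "US", 2),
]

same_country = [s for rev, auth, s in reviews if rev == auth]
cross_country = [s for rev, auth, s in reviews if rev != auth]

# Compare average judgement when reviewer and author share a country
# versus when they do not (here 2.0 vs ~1.33).
print(mean(same_country), mean(cross_country))
```

A real analysis would additionally test whether any such difference is statistically significant, as the study does per country.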

James Q Del Rosso - One of the best experts on this subject based on the ideXlab platform.

  • Rethinking the journal impact factor and publishing in the digital age
    The Journal of clinical and aesthetic dermatology, 2020
    Co-Authors: Mark S Nestor, Daniel Fischer, David Arnold, Brian Berman, James Q Del Rosso
    Abstract:

    Searching the clinical and experimental literature has changed significantly over the past few decades, and with it, the way in which we value information. Today, our need for immediate access to relevant and specific literature, regardless of specialty, has led to a growing demand for open access to publications. The Journal Impact Factor (JIF) has long been the standard for representing the quality or "prestige" of a journal, but it appears to be losing its relevance. Here, we define the JIF and deconstruct its validity as a modern measure of a journal's quality, discuss the current models of academic publication, including their advantages and shortcomings, and discuss the benefits and shortcomings of a variety of open-access models, including costs to the author. We quantified a nonsubscribed physician's access to full articles associated with dermatologic disease and aesthetics cited on PubMed. For some of the most common dermatologic conditions, 23.1 percent of citations (ranging from 17.2% for melasma to 31.9% for malignant melanoma) were available as free full articles, and for aesthetic procedures, 18.9 percent of citations (ranging from 11.9% for laser hair removal to 27.9% for botulinum toxin) were available as free full articles. Finally, we discuss existing alternative metrics for measuring journal impact and propose the adoption of a superior publishing model, one that satisfies modern standards of scholarly knowledge pursuit and dissemination for dermatology and all of medical science.
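For reference, the JIF this abstract critiques is the standard two-year ratio: citations received in year Y to a journal's items from years Y-1 and Y-2, divided by the citable items published in those two years. A minimal sketch with made-up counts:

```python
def journal_impact_factor(citations_to_prev_two_years: int,
                          citable_items_prev_two_years: int) -> float:
    """Two-year JIF: citations received in year Y to items published in
    Y-1 and Y-2, divided by the citable items published in Y-1 and Y-2."""
    if citable_items_prev_two_years == 0:
        raise ValueError("journal published no citable items")
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in 2013 to its 2011-2012 articles,
# of which there were 400 citable items -> JIF of 3.0.
print(journal_impact_factor(1200, 400))
```

The single-number result illustrates one of the common critiques: the metric collapses a skewed citation distribution into one average, saying nothing about individual articles.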

Robin Champieux - One of the best experts on this subject based on the ideXlab platform.

  • Biomedical Journal Data Sharing Policies
    2018
    Co-Authors: Nicole Vasilevsky, Jessica Minnier, Melissa Haendel, Robin Champieux
    Abstract:

    Raw data on the data sharing policies of over 300 journals, supporting the article currently under review: "Reproducible and reusable research: Are journal data sharing policies meeting the mark?". Raw data and analysis cover the data sharing policies of 318 biomedical journals. The study authors manually reviewed the author instructions and editorial policies to analyze each journal's data sharing requirements and characteristics. The data sharing policies were ranked using a rubric to determine whether data sharing was required, recommended, or not addressed at all. The data sharing method and licensing recommendations were examined, as well as any mention of reproducibility or similar concepts. The data were analyzed for patterns relating to publishing volume, Journal Impact Factor, and the publishing model (open access or subscription) of each journal. We evaluated journals included in Thomson Reuters' InCites 2013 Journal Citation Reports (JCR) classified within the following Web of Science schema categories: Biochemistry and Molecular Biology, Biology, Cell Biology, Crystallography, Developmental Biology, Biomedical Engineering, Immunology, Medical Informatics, Microbiology, Microscopy, Multidisciplinary Sciences, and Neurosciences. These categories were selected to capture the journals publishing the majority of peer-reviewed biomedical research. The original data pull included 1,166 journals, collectively publishing 213,449 articles. We filtered this list to the journals in the top quartiles by impact factor (IF) or number of articles published in 2013. Additionally, the list was manually reviewed to exclude short-report and review journals, and titles determined to be outside the fields of basic medical science or clinical research. The final study set included 318 journals, which published 130,330 articles in 2013. The study set represented 27% of the original Journal Citation Reports list and 61% of the original citable articles.
Prior to our analysis, the 2014 Journal Citation Reports was released. After our initial analyses and first preprint submission, the 2015 Journal Citation Reports was released. While we did not use the 2014 or 2015 data to amend the journals in the study set, we did employ data from all three reports in our analyses. In our data pull from JCR, we included the journal title, International Standard Serial Number (ISSN), the total citable items for 2013, 2014, and 2015, the total citations to the journal for each of those years, the impact factors for each year, and the publisher.

  • Reproducible and reusable research: are journal data sharing policies meeting the mark?
    PeerJ, 2017
    Co-Authors: Nicole Vasilevsky, Jessica Minnier, Melissa Haendel, Robin Champieux
    Abstract:

    BACKGROUND: There is wide agreement in the biomedical research community that research data sharing is a primary ingredient for ensuring that science is more transparent and reproducible. Publishers could play an important role in facilitating and enforcing data sharing; however, many journals have not yet implemented data sharing policies, and the requirements vary widely across journals. This study set out to analyze the pervasiveness and quality of data sharing policies in the biomedical literature. METHODS: The online author instructions and editorial policies for 318 biomedical journals were manually reviewed to analyze each journal's data sharing requirements and characteristics. The data sharing policies were ranked using a rubric to determine if data sharing was required, recommended, required only for omics data, or not addressed at all. The data sharing method and licensing recommendations were examined, as well as any mention of reproducibility or similar concepts. The data were analyzed for patterns relating to publishing volume, Journal Impact Factor, and the publishing model (open access or subscription) of each journal. RESULTS: A total of 11.9% of the journals analyzed explicitly stated that data sharing was required as a condition of publication. A total of 9.1% of journals required data sharing but did not state that it would affect publication decisions. A total of 23.3% of journals had a statement encouraging authors to share their data but did not require it. A total of 9.1% of journals mentioned data sharing indirectly, and only 14.8% addressed protein, proteomic, and/or genomic data sharing. There was no mention of data sharing in 31.8% of journals. Impact factors were significantly higher for journals with the strongest data sharing policies than for all other data sharing categories. Open access journals were no more likely to require data sharing than subscription journals.
    DISCUSSION: Our study confirmed earlier investigations, which observed that only a minority of biomedical journals require data sharing, and found a significant association between higher impact factors and a data sharing requirement. Moreover, while 65.7% of the journals in our study that required data sharing addressed the concept of reproducibility, as with earlier investigations we found that most data sharing policies did not provide specific guidance on the practices that ensure data are maximally available and reusable.
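The percentage breakdown reported in the results can be reproduced from per-category journal counts. The counts below are back-derived from the abstract's percentages of 318 journals (e.g. 38/318 = 11.9%) and are illustrative only, not the study's raw data:

```python
from collections import Counter

# Journal counts per rubric category, back-derived from the reported
# percentages of the 318-journal study set; illustrative only.
policies = (
    ["required"] * 38 + ["required_no_effect"] * 29 +
    ["encouraged"] * 74 + ["indirect"] * 29 +
    ["omics_only"] * 47 + ["not_addressed"] * 101
)

counts = Counter(policies)
shares = {category: round(100 * n / len(policies), 1)
          for category, n in counts.items()}
print(shares)  # 'required' -> 11.9, 'not_addressed' -> 31.8, etc.
```

Summing the counts gives 318 journals, matching the study set size, and the rounded shares match the abstract's figures.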

Verena Weigert - One of the best experts on this subject based on the ideXlab platform.

  • Does the use of open, non-anonymous peer review in scholarly publishing introduce bias? Evidence from the F1000Research post-publication open peer review publishing model
    Journal of Information Science, 2020
    Co-Authors: Mike Thelwall, Liz Allen, Eleanorrose Papas, Zena Nyakoojo, Verena Weigert
    Abstract:

    As part of moves towards open knowledge practices, making peer review open is cited as a way to enable fuller scrutiny and transparency of assessments around research. There are now many flavours o...

  • Does the use of open, non-anonymous peer review in scholarly publishing introduce bias? Evidence from the F1000 post-publication open peer review publishing model
    arXiv: Digital Libraries, 2019
    Co-Authors: Mike Thelwall, Liz Allen, Zena Nyakoojo, Verena Weigert, Eleanorrose Papas
    Abstract:

    This study examines whether there is any evidence of bias in two areas of common critique of open, non-anonymous peer review, as used in the post-publication peer review system operated by the open-access scholarly publishing platform F1000Research. First, is there evidence of bias when a reviewer based in a specific country assesses the work of an author based in the same country? Second, are reviewers influenced by being able to see the comments, and know the origins, of previous reviewers? Methods: Scrutinising the open peer review comments published on F1000Research, we assess the extent of two frequently cited potential influences on reviewers that may result from the transparency offered by a fully attributable, open peer review publishing model: the national affiliations of authors and reviewers, and the ability of reviewers to view previously published reviewer reports before submitting their own. The effects of these potential influences were investigated for all first versions of articles published on F1000Research by 8 July 2019. In 16 of the 20 countries with the most articles, reviewers based in the same country as the author tended to give a more positive review; the difference was statistically significant in one country. Only three countries showed the reverse tendency. Second, there is no evidence of a conformity bias. When reviewers mentioned a previous review in their peer review report, they were not more likely to give the same overall judgement. Although reviewers who had longer to read previously published reviewer reports were slightly less likely to agree with previous reviewer judgements, this could be due to these articles being difficult to judge rather than to deliberate non-conformity.

Mike Thelwall - One of the best experts on this subject based on the ideXlab platform.

  • Does the use of open, non-anonymous peer review in scholarly publishing introduce bias? Evidence from the F1000Research post-publication open peer review publishing model
    Journal of Information Science, 2020
    Co-Authors: Mike Thelwall, Liz Allen, Eleanorrose Papas, Zena Nyakoojo, Verena Weigert
    Abstract:

    As part of moves towards open knowledge practices, making peer review open is cited as a way to enable fuller scrutiny and transparency of assessments around research. There are now many flavours o...

  • Does the use of open, non-anonymous peer review in scholarly publishing introduce bias? Evidence from the F1000 post-publication open peer review publishing model
    arXiv: Digital Libraries, 2019
    Co-Authors: Mike Thelwall, Liz Allen, Zena Nyakoojo, Verena Weigert, Eleanorrose Papas
    Abstract:

    This study examines whether there is any evidence of bias in two areas of common critique of open, non-anonymous peer review, as used in the post-publication peer review system operated by the open-access scholarly publishing platform F1000Research. First, is there evidence of bias when a reviewer based in a specific country assesses the work of an author based in the same country? Second, are reviewers influenced by being able to see the comments, and know the origins, of previous reviewers? Methods: Scrutinising the open peer review comments published on F1000Research, we assess the extent of two frequently cited potential influences on reviewers that may result from the transparency offered by a fully attributable, open peer review publishing model: the national affiliations of authors and reviewers, and the ability of reviewers to view previously published reviewer reports before submitting their own. The effects of these potential influences were investigated for all first versions of articles published on F1000Research by 8 July 2019. In 16 of the 20 countries with the most articles, reviewers based in the same country as the author tended to give a more positive review; the difference was statistically significant in one country. Only three countries showed the reverse tendency. Second, there is no evidence of a conformity bias. When reviewers mentioned a previous review in their peer review report, they were not more likely to give the same overall judgement. Although reviewers who had longer to read previously published reviewer reports were slightly less likely to agree with previous reviewer judgements, this could be due to these articles being difficult to judge rather than to deliberate non-conformity.