Journal Impact Factor


The experts below are selected from a list of 6,801 experts worldwide, ranked by the ideXlab platform.

Lutz Bornmann - One of the best experts on this subject based on the ideXlab platform.

  • The integrated impact indicator revisited (I3*): a non-parametric alternative to the journal impact factor
    Scientometrics, 2019
    Co-Authors: Loet Leydesdorff, Lutz Bornmann, Jonathan Adams
    Abstract:

    We propose the I3* indicator as a non-parametric alternative to the Journal Impact Factor (JIF) and h-index. We apply I3* to more than 10,000 journals. The results can be compared with other journal metrics. I3* is a promising variant within the general scheme of non-parametric I3 indicators introduced previously: I3* provides a single metric which correlates with both impact in terms of citations (c) and output in terms of publications (p). We argue for weighting using four percentile classes: the top-1% and top-10% as excellence indicators; the top-50% and bottom-50% as output indicators. Like the h-index, which also incorporates both c and p, I3* values are size-dependent; however, division of I3* by the number of publications (I3*/N) provides a size-independent indicator which correlates strongly with the 2- and 5-year Journal Impact Factors (JIF2 and JIF5). Unlike the h-index, I3* correlates significantly with both the total number of citations and publications. The values of I3* and I3*/N can be statistically tested against the expectation or against one another using chi-squared tests or effect sizes. A template (in Excel) is provided online for relevant tests.
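As a rough sketch of the percentile-class weighting described above (the percentile computation, the cumulative treatment of the classes, and the unit class weights used here are illustrative assumptions, not the paper's exact scheme), an I3*-style score can be computed as a weighted sum of a journal's paper counts per class:

```python
from bisect import bisect_left

def percentile_rank(citations, reference_set):
    """Percentile (0-100) of a paper's citation count within its
    reference set (e.g., all papers of the same field and year);
    higher means more highly cited."""
    ranked = sorted(reference_set)
    below = bisect_left(ranked, citations)
    return 100.0 * below / len(ranked)

def i3_star(journal_citations, reference_set, weights=None):
    """Illustrative I3*-style score: weighted sum of the journal's
    paper counts in four percentile classes. The unit weights and the
    cumulative class membership are assumptions for this sketch."""
    weights = weights or {"top1": 1, "top10": 1, "top50": 1, "bottom50": 1}
    score = 0.0
    for c in journal_citations:
        p = percentile_rank(c, reference_set)
        # A top-1% paper also counts in the top-10% and top-50% classes.
        if p >= 99: score += weights["top1"]
        if p >= 90: score += weights["top10"]
        if p >= 50: score += weights["top50"]
        else:       score += weights["bottom50"]
    return score
```

Dividing the result by the number of the journal's papers gives the size-independent I3*/N variant mentioned in the abstract.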

  • The Integrated Impact Indicator (I3) revisited: a non-parametric alternative to the Journal Impact Factor
    arXiv: Digital Libraries, 2018
    Co-Authors: Loet Leydesdorff, Lutz Bornmann, Jonathan Adams
    Abstract:

    We propose the I3* indicator as a non-parametric alternative to the Journal Impact Factor (JIF) and h-index. We apply I3* to more than 10,000 journals. The results can be compared with other journal metrics. I3* is a promising variant within the general scheme of non-parametric I3 indicators introduced previously: it provides a single metric which correlates with both impact in terms of citations (c) and output in terms of publications (p). We argue for weighting using four percentile classes: the top-1% and top-10% as excellence indicators; the top-50% and bottom-50% as output indicators. Like the h-index, which also incorporates both c and p, I3* values are size-dependent; however, division of I3* by the number of publications (I3*/N) provides a size-independent indicator which correlates strongly with the two- and five-year Journal Impact Factors (JIF2 and JIF5). Unlike the h-index, I3* correlates significantly with both the total number of citations and publications. The values of I3* and I3*/N can be statistically tested against the expectation or against one another using chi-squared tests or effect sizes. A template (in Excel) is provided online for relevant tests.

  • Can the journal impact factor be used as a criterion for the selection of junior researchers? A large-scale empirical study based on ResearcherID data
    Journal of Informetrics, 2017
    Co-Authors: Lutz Bornmann, Richard Williams
    Abstract:

    Early in researchers' careers, it is difficult to assess how good their work is or how important or influential the scholars will eventually be. Hence, funding agencies, academic departments, and others often use the Journal Impact Factor (JIF) of where the authors have published to assess their work and provide resources and rewards for future work. The use of JIFs in this way has been heavily criticized, however. Using a large data set with many thousands of publication profiles of individual researchers, this study tests the ability of the JIF (in its normalized variant) to identify, at the beginning of their careers, those candidates who will be successful in the long run. Instead of bare JIFs and citation counts, the metrics used here are standardized according to Web of Science subject categories and publication years. The results of the study indicate that the JIF (in its normalized variant) is able to discriminate between researchers who later published papers with a citation impact above or below average in a field and publication year, not only in the short term but also in the long term. However, the low to medium effect sizes of the results also indicate that the JIF (in its normalized variant) should not be used as the sole criterion for identifying later success: other criteria, such as the novelty and significance of the specific research, academic distinctions, and the reputation of previous institutions, should also be considered.
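A minimal sketch of the standardization described above, dividing each paper's citations by the mean of its field-and-year group (the data layout and field labels are hypothetical; the study uses Web of Science subject categories and publication years):

```python
from collections import defaultdict
from statistics import mean

def normalized_scores(papers):
    """Field- and year-normalize citation counts: each paper's citations
    divided by the mean citations of its (field, year) group. A score
    above 1.0 means above-average impact for that field and year."""
    groups = defaultdict(list)
    for p in papers:
        groups[(p["field"], p["year"])].append(p["citations"])
    baselines = {key: mean(vals) for key, vals in groups.items()}
    return [p["citations"] / baselines[(p["field"], p["year"])] for p in papers]
```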

  • The journal impact factor should not be discarded
    arXiv: Digital Libraries, 2016
    Co-Authors: Lutz Bornmann, Alexander I. Pudovkin
    Abstract:

    The Journal Impact Factor (JIF) has been heavily criticized for decades. This opinion piece argues that the JIF should not be demonized: it can still be employed for research evaluation purposes, provided the context and academic environment are carefully considered.

  • Journal impact factor: the poor man's citation analysis and alternative approaches
    2013
    Co-Authors: Werner Marx, Lutz Bornmann
    Abstract:

    The Journal Impact Factor has a number of drawbacks preventing its use for the assessment of individual journal articles and individual researchers. With that in mind, most experts would endorse the San Francisco Declaration on Research Assessment (DORA), which highlights the appropriate use of bibliometric indicators for quantitative research assessments. To curb the problem of skewed citation distributions, an alternative, normalised metric is proposed: percentiles, or the percentile rank classes method, are particularly useful for this normalisation. It is also advised to use specific percentile rank classes and to assess individual scientists with the Ptop 10% or PPtop 10% indicators. Keywords: bibliometrics; research evaluation; alternative metrics.
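A minimal sketch of the Ptop 10% and PPtop 10% indicators mentioned above (assuming the field- and year-specific top-10% citation thresholds are already known; computing them requires the full reference distributions):

```python
def p_top10(papers):
    """papers: list of (citations, top10_threshold) pairs, where the
    threshold is the top-10% citation cutoff for the paper's field and
    publication year. Returns (Ptop10%, PPtop10%): the number and the
    proportion of the scientist's papers reaching the top 10%."""
    count = sum(1 for citations, threshold in papers if citations >= threshold)
    return count, count / len(papers)
```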

Giovanni Abramo - One of the best experts on this subject based on the ideXlab platform.

  • Citations versus journal impact factor as proxy of quality: could the latter ever be preferable?
    Scientometrics, 2010
    Co-Authors: Ciriaco Andrea D'Angelo, Giovanni Abramo, Flavia Di Costa
    Abstract:

    In recent years bibliometricians have paid increasing attention to methodological problems in research evaluation, among these the choice of the most appropriate indicators for evaluating the quality of scientific publications, and thus for evaluating the work of single scientists, research groups and entire organizations. Much literature has been devoted to analyzing the robustness of various indicators, and many works warn against the risks of using easily available and relatively simple proxies, such as the journal impact factor. The present work continues this line of research, examining whether the use of the impact factor should always be avoided in favour of citations, or whether its use could be acceptable, or even preferable, in certain circumstances. The evaluation was conducted by observing all scientific publications in the hard sciences by Italian universities for the period 2004–2007. Performance sensitivity analyses were conducted with changing indicators of quality and years of observation.


Flavia Di Costa - One of the best experts on this subject based on the ideXlab platform.

  • Citations versus journal impact factor as proxy of quality: could the latter ever be preferable?
    Scientometrics, 2010
    Co-Authors: Ciriaco Andrea D'Angelo, Giovanni Abramo, Flavia Di Costa
    Abstract:

    In recent years bibliometricians have paid increasing attention to methodological problems in research evaluation, among these the choice of the most appropriate indicators for evaluating the quality of scientific publications, and thus for evaluating the work of single scientists, research groups and entire organizations. Much literature has been devoted to analyzing the robustness of various indicators, and many works warn against the risks of using easily available and relatively simple proxies, such as the journal impact factor. The present work continues this line of research, examining whether the use of the impact factor should always be avoided in favour of citations, or whether its use could be acceptable, or even preferable, in certain circumstances. The evaluation was conducted by observing all scientific publications in the hard sciences by Italian universities for the period 2004–2007. Performance sensitivity analyses were conducted with changing indicators of quality and years of observation.


Jonathan Adams - One of the best experts on this subject based on the ideXlab platform.

  • The integrated impact indicator revisited (I3*): a non-parametric alternative to the journal impact factor
    Scientometrics, 2019
    Co-Authors: Loet Leydesdorff, Lutz Bornmann, Jonathan Adams
    Abstract:

    We propose the I3* indicator as a non-parametric alternative to the Journal Impact Factor (JIF) and h-index. We apply I3* to more than 10,000 journals. The results can be compared with other journal metrics. I3* is a promising variant within the general scheme of non-parametric I3 indicators introduced previously: I3* provides a single metric which correlates with both impact in terms of citations (c) and output in terms of publications (p). We argue for weighting using four percentile classes: the top-1% and top-10% as excellence indicators; the top-50% and bottom-50% as output indicators. Like the h-index, which also incorporates both c and p, I3* values are size-dependent; however, division of I3* by the number of publications (I3*/N) provides a size-independent indicator which correlates strongly with the 2- and 5-year Journal Impact Factors (JIF2 and JIF5). Unlike the h-index, I3* correlates significantly with both the total number of citations and publications. The values of I3* and I3*/N can be statistically tested against the expectation or against one another using chi-squared tests or effect sizes. A template (in Excel) is provided online for relevant tests.

  • The Integrated Impact Indicator (I3) revisited: a non-parametric alternative to the Journal Impact Factor
    arXiv: Digital Libraries, 2018
    Co-Authors: Loet Leydesdorff, Lutz Bornmann, Jonathan Adams
    Abstract:

    We propose the I3* indicator as a non-parametric alternative to the Journal Impact Factor (JIF) and h-index. We apply I3* to more than 10,000 journals. The results can be compared with other journal metrics. I3* is a promising variant within the general scheme of non-parametric I3 indicators introduced previously: it provides a single metric which correlates with both impact in terms of citations (c) and output in terms of publications (p). We argue for weighting using four percentile classes: the top-1% and top-10% as excellence indicators; the top-50% and bottom-50% as output indicators. Like the h-index, which also incorporates both c and p, I3* values are size-dependent; however, division of I3* by the number of publications (I3*/N) provides a size-independent indicator which correlates strongly with the two- and five-year Journal Impact Factors (JIF2 and JIF5). Unlike the h-index, I3* correlates significantly with both the total number of citations and publications. The values of I3* and I3*/N can be statistically tested against the expectation or against one another using chi-squared tests or effect sizes. A template (in Excel) is provided online for relevant tests.

  • Comments on a critique of the Thomson Reuters journal impact factor
    Scientometrics, 2012
    Co-Authors: David Pendlebury, Jonathan Adams
    Abstract:

    We discuss research evaluation, the nature of impact, and the use of the Thomson Reuters Journal Impact Factor and other indicators in scientometrics in the light of recent commentary.

Hans-Dieter Daniel - One of the best experts on this subject based on the ideXlab platform.

  • Skewed citation distributions and bias factors: solutions to two core problems with the journal impact factor
    Journal of Informetrics, 2012
    Co-Authors: Rüdiger Mutz, Hans-Dieter Daniel
    Abstract:

    The Journal Impact Factor (JIF), proposed by Garfield in 1955, is one of the most prominent and common measures of the prestige, position, and importance of a scientific journal. The JIF profits from its comprehensibility, robustness, methodological reproducibility, simplicity, and rapid availability, but this comes at the expense of serious technical and methodological flaws. The paper discusses two core problems with the JIF: first, citations of documents are generally not normally distributed, and the distribution is affected by outliers, which has serious consequences for the use of the mean value in the JIF calculation. Second, the JIF is affected by bias factors that have nothing to do with the prestige or quality of a journal (e.g., document type). To solve these two problems, we suggest using McCall's area transformation and the Rubin Causal Model. Citation data for documents of all journals in the ISI subject category "Psychology, Mathematical" (Journal Citation Reports) are used to illustrate the proposal.
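The first problem above (skewed, outlier-affected distributions making the mean unreliable) is what McCall's area transformation addresses: percentile ranks are mapped through the inverse normal distribution onto a normal scale, after which averaging is defensible. A minimal sketch, assuming the conventional T-score scaling of mean 50 and standard deviation 10:

```python
from statistics import NormalDist

def mccall_t(percentile):
    """McCall's area transformation: map a citation percentile (strictly
    between 0 and 100) onto a normal scale with mean 50 and standard
    deviation 10, so skewed citation distributions can be summarized
    with means instead of raw, outlier-sensitive citation counts."""
    z = NormalDist().inv_cdf(percentile / 100.0)
    return 50.0 + 10.0 * z
```

For example, the median paper (50th percentile) maps to 50, and a paper near the 98th percentile maps to roughly 70, two standard deviations above the mean.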

  • The effect of a two-stage publication process on the journal impact factor: a case study on the interactive open access journal Atmospheric Chemistry and Physics
    Scientometrics, 2011
    Co-Authors: Lutz Bornmann, Christoph Neuhaus, Hans-Dieter Daniel
    Abstract:

    Taking the interactive open access journal Atmospheric Chemistry and Physics as an example, this study examines whether Thomson Reuters, for the Journal Citation Reports, correctly calculates the Journal Impact Factor (JIF) of a journal that publishes several versions of a manuscript within a two-stage publication process. The results of this study show that the JIF of the journal is not overestimated through the two-stage publication process.
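For reference, the two-year JIF that such a calculation must get right is the citations received in the census year to items published in the two preceding years, divided by the citable items published in those years. A minimal sketch (the dict-based data layout is an assumption of this example):

```python
def jif2(cites_received, n_citable, census_year):
    """Two-year Journal Impact Factor for `census_year`.
    cites_received[y]: citations received in census_year by items the
    journal published in year y; n_citable[y]: citable items the
    journal published in year y."""
    years = (census_year - 1, census_year - 2)
    cites = sum(cites_received[y] for y in years)
    items = sum(n_citable[y] for y in years)
    return cites / items
```

The question the study raises is which manuscript versions count in the denominator when one paper exists as both a discussion version and a final version.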

  • The publication and citation impact profiles of Angewandte Chemie and the Journal of the American Chemical Society based on the sections of Chemical Abstracts: a case study on the limitations of the journal impact factor
    Journal of the American Society for Information Science and Technology, 2009
    Co-Authors: Christoph Neuhaus, Werner Marx, Hans-Dieter Daniel
    Abstract:

    The Journal Impact Factor (JIF) published by Thomson Reuters is often used to evaluate the significance and performance of scientific journals. Besides methodological problems with the JIF, the critical issue is whether a single measure is sufficient for characterizing the impact of journals, particularly the impact of multidisciplinary and wide-scope journals that publish articles in a broad range of research fields. Taking Angewandte Chemie International Edition and the Journal of the American Chemical Society as examples, we examined the two journals' publication and impact profiles across the sections of Chemical Abstracts and compared the results with the JIF. The analysis was based primarily on Communications published in Angewandte Chemie International Edition and the Journal of the American Chemical Society from 2001 to 2005. The findings show that the information available in the Science Citation Index is a rather unreliable indication of the document type and is therefore inappropriate for comparative analysis. The findings further suggest that the composition of the journal in terms of contribution types, the length of the citation window, and the thematic focus of the journal in terms of the sections of Chemical Abstracts have a significant influence on the overall journal citation impact. Therefore, a single measure of journal citation impact such as the JIF is insufficient for characterizing the significance and performance of wide-scope journals. For the comparison of journals, more sophisticated methods, such as publication and impact profiles across the subject headings of bibliographic databases (e.g., the sections of Chemical Abstracts), are valuable.
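The per-section profiling the authors advocate can be sketched as a simple group-by over a journal's papers (the section labels and data are hypothetical; the paper groups by the sections of Chemical Abstracts):

```python
from collections import defaultdict
from statistics import mean

def impact_profile(papers):
    """Per-section publication and impact profile: for each subject
    section, the number of the journal's papers and their mean citation
    rate. A single journal-level mean (JIF-style) hides this variation
    across a wide-scope journal's subfields."""
    by_section = defaultdict(list)
    for section, citations in papers:
        by_section[section].append(citations)
    return {s: (len(c), mean(c)) for s, c in by_section.items()}
```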