Bibliometrics


The Experts below are selected from a list of 56,628 Experts worldwide ranked by the ideXlab platform

Lutz Bornmann - One of the best experts on this subject based on the ideXlab platform.

  • Studying Bibliometrics-based heuristics (BBHs): a new research program on the use of bibliometrics in research evaluation
    Scholarly Assessment Reports, 2020
    Co-Authors: Lutz Bornmann
    Abstract:

    How do decision makers in science use bibliometric indicators, and how far do they rely on them? Could bibliometric indicators replace the decision makers’ judgments (partly or completely)? Bornmann and Marewski (2019) suggest that these and similar questions can be answered empirically by studying the evaluative use of bibliometrics within the heuristics research program conceptualized by Gigerenzer, Todd, and the ABC Research Group (1999). This program can serve as a framework within which the evaluative use of bibliometrics can be conceptually understood, empirically studied, and effectively taught. This short communication gives a brief overview of the main lines of argument suggested by Bornmann and Marewski (2019).

  • Bibliometrics-based heuristics: what is their definition and how can they be studied? Research note
    Profesional De La Informacion, 2020
    Co-Authors: Lutz Bornmann
    Abstract:

    When scientists study the phenomena they are interested in, they apply sound methods and base their work on theoretical considerations. In contrast, when the fruits of their research are being evaluated, basic scientific standards do not seem to matter. Instead, simplistic bibliometric indicators (i.e., publication and citation counts) are, paradoxically, both widely used and criticized without any methodological and theoretical framework that would serve to ground both use and critique. Recently, however, Bornmann and Marewski (2019) proposed such a framework. They developed bibliometrics-based heuristics (BBHs) based on the fast-and-frugal heuristics approach to decision making (Gigerenzer, Todd, and the ABC Research Group, 1999), in order to conceptually understand and empirically investigate the quantitative evaluation of research, as well as to effectively train end-users of bibliometrics (e.g., science managers, scientists). Heuristics are decision strategies that use part of the available information and ignore the rest. By exploiting the statistical structure of task environments, they can help decision makers reach accurate, fast, effortless, and cost-efficient decisions without incurring trade-offs. Because of their simplicity, heuristics are easy to understand and communicate, enhancing the transparency of decision processes. In this commentary, we explain several BBHs and discuss how such heuristics can be employed in practice (using the evaluation of applicants for funding programs as one example). Furthermore, we outline why heuristics can perform well, and how they and their fit to task environments can be studied. In pointing to the potential of research on BBHs and to the risks that come with an under-researched, mindless usage of bibliometrics, this commentary contributes to making research evaluation more scientific.

  • Spatial bibliometrics on the city level
    Journal of Information Science, 2019
    Co-Authors: Lutz Bornmann, Félix de Moya-Anegón
    Abstract:

    It is very popular in bibliometrics to present results on institutions not only as tabular lists but also on maps (see, for example, the Leiden Ranking). However, the problem with these visualisations is that institutions are frequently spatially clustered in larger cities, so that their markers are positioned one above the other. In this Brief Communication, we propose visualising bibliometric data on the city rather than the institution level to avoid this problem.
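The city-level aggregation described above amounts to summing institution-level counts per city before plotting one marker per city. A minimal sketch; the records below are hypothetical, not data from the paper:

```python
from collections import defaultdict

# Hypothetical (institution, city, paper_count) records. On a map, one
# marker per city with the summed count replaces overlapping
# institution markers clustered in the same city.
institutions = [
    ("Institution A", "Zurich", 208),
    ("Institution B", "Zurich", 150),
    ("Institution C", "Leiden", 120),
]

city_totals = defaultdict(int)
for _, city, papers in institutions:
    city_totals[city] += papers

print(dict(city_totals))  # {'Zurich': 358, 'Leiden': 120}
```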

  • Bibliometrics-based heuristics: what is their definition and how can they be studied?
    arXiv: Digital Libraries, 2018
    Co-Authors: Lutz Bornmann, Julian N Marewski
    Abstract:

    Paradoxically, bibliometric indicators (i.e., publication and citation counts) are both widely used and widely criticized in research evaluation. At the same time, a common methodological and theoretical framework for conceptually understanding, empirically investigating, and effectively training end-users of bibliometrics (e.g., science managers, scientists) is lacking. In this paper, we outline such a framework - the fast-and-frugal heuristics research framework developed by Gigerenzer et al. [1] - and discuss its application to evaluative bibliometrics. Heuristics are decision strategies that use part of the available information and ignore the rest. In so doing, they can help decision makers reach accurate, fast, effortless, and cost-efficient decisions without incurring trade-offs (e.g., effort versus accuracy). Because of their simple structure, heuristics are easy to understand and communicate and can enhance the transparency of decision-making processes. We introduce three bibliometrics-based heuristics and discuss how they can be employed in evaluative practice (using the evaluation of applicants for funding programs as an example).

  • Professional and citizen bibliometrics: complementarities and ambivalences in the development and use of indicators. A state-of-the-art report
    Scientometrics, 2016
    Co-Authors: Loet Leydesdorff, Paul Wouters, Lutz Bornmann
    Abstract:

    Bibliometric indicators such as journal impact factors, h-indices, and total citation counts are algorithmic artifacts that can be used in research evaluation and management. These artifacts have no meaning by themselves, but receive their meaning from attributions in institutional practices. We distinguish four main stakeholders in these practices: (1) producers of bibliometric data and indicators; (2) bibliometricians who develop and test indicators; (3) research managers who apply the indicators; and (4) the scientists being evaluated with potentially competing career interests. These different positions may lead to different and sometimes conflicting perspectives on the meaning and value of the indicators. The indicators can thus be considered as boundary objects which are socially constructed in translations among these perspectives. This paper proposes an analytical clarification by listing an informed set of (sometimes unsolved) problems in bibliometrics which can also shed light on the tension between simple but invalid indicators that are widely used (e.g., the h-index) and more sophisticated indicators that are not used or cannot be used in evaluation practices because they are not transparent for users, cannot be calculated, or are difficult to interpret.
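As an illustration of how simple the widely used indicators mentioned above are, here is a minimal sketch of the h-index computation; the function name and input format are illustrative, not from the paper:

```python
def h_index(citations):
    # h is the largest number such that h of the author's papers
    # have at least h citations each.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```

The simplicity is the point of the critique: the whole publication record is collapsed into one integer, discarding field, age, and co-authorship information.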

Gianluca Setti - One of the best experts on this subject based on the ideXlab platform.

  • Bibliometric Indicators: Why Do We Need More Than One?
    IEEE Access, 2013
    Co-Authors: Gianluca Setti
    Abstract:

    This paper provides an overview of the main features of several bibliometric indicators proposed in the last few decades. Their pros and cons are highlighted and compared with the features of the well-known impact factor (IF) to show how alternative metrics are specifically designed to address the flaws that the IF has been shown to have, especially in recent years. We also report the results of recent studies in the bibliometric literature showing that the scientific impact of journals, as evaluated by bibliometrics, is a very complicated matter and that it is completely unrealistic to try to capture it with any single indicator, the IF or any other. We therefore conclude that adopting several metrics with complementary features to assess journal quality would be very beneficial: it would offer a more comprehensive and balanced view of each journal in the space of scholarly publications, and it would eliminate the pressure on individuals, and their incentive, to manipulate a single metric - an unintended result of the current (mis)use of the IF as the gold standard for publication quality.

Lucheng Huang - One of the best experts on this subject based on the ideXlab platform.

  • Integrating bibliometrics and roadmapping methods: a case of dye-sensitized solar cell technology-based industry in China
    Technological Forecasting and Social Change, 2015
    Co-Authors: Xin Li, Yuan Zhou, Lucheng Huang
    Abstract:

    Emerging industries are attracting increasing attention as they engage in innovation activities that transgress the boundaries of science and technology. Policy makers and industrial communities use roadmapping methods to predict future industrial growth, but the existing bibliometric/workshop methods have limitations when analyzing full-lifecycle industrial emergence, including the transitions between science, technology, application, and the mass market. This paper therefore proposes a framework that integrates bibliometrics and a technology roadmapping (TRM) workshop approach to strategize and plan the future development of a new, technology-based industry. The dye-sensitized solar cell technology-based industry in China is selected as a case study. In this case, the bibliometric method is applied to analyze the existing position of science and technology, and TRM workshops are used to strategize future development from technology to application and marketing. Key events and their impact on the development of the new, technology-based industry are identified. This paper contributes to roadmapping and foresight methodology and will be of interest to solar photovoltaic industry researchers.

Renduo Liu - One of the best experts on this subject based on the ideXlab platform.

  • Rehabilitation using virtual reality technology: a bibliometric analysis, 1996–2015
    Scientometrics, 2016
    Co-Authors: Y Huang, S. Ali, X. Zhai, X Bi, Q Huang, Renduo Liu
    Abstract:

    The aim of this study is to conduct a retrospective bibliometric analysis of articles about rehabilitation medicine using virtual reality technology. Bibliometrics is a subfield of scientometrics and an effective tool for evaluating research trends in different scientific fields. A systematic bibliometric search was performed in three academic databases (PubMed, Scopus and Web of Science) for the period between January 1, 1996, and December 31, 2015. Research outputs, countries, institutions, authors, major journals, cited articles, subject areas and hot research topics were analyzed based on bibliometric methodologies. The retrieved results were analyzed and described in the form of texts, tables, and graphics. A total of 15,191 articles were identified from the three academic databases, 48.32% of which were published as original articles. The articles originated from 101 countries and territories. The United States ranked first with 4522 articles, and the United Kingdom second with 1369 articles. 96.75% of the articles were published in English. 527 articles were published in Lecture Notes in Computer Science (including its subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Among research institutions, Eidgenössische Technische Hochschule Zürich ranked first with 208 published articles. In the past 20 years, the research output on rehabilitation using virtual reality technology has increased substantially. This study provides a valuable reference for researchers to understand the overview and present situation of this field.
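The country and language rankings described above amount to simple frequency counts over the retrieved records. A minimal sketch with made-up records, not data from the study:

```python
from collections import Counter

# Hypothetical records: (country, language) per retrieved article.
records = [
    ("United States", "English"),
    ("United Kingdom", "English"),
    ("United States", "English"),
    ("China", "Chinese"),
]

# Rank countries by output, and compute the share of English articles.
by_country = Counter(country for country, _ in records)
english_share = 100 * sum(lang == "English" for _, lang in records) / len(records)
print(by_country.most_common(1), english_share)  # [('United States', 2)] 75.0
```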

Philipp Mayr - One of the best experts on this subject based on the ideXlab platform.

  • Mining Scientific Papers: NLP-enhanced Bibliometrics
    Frontiers in Research Metrics and Analytics, 2019
    Co-Authors: Iana Atanassova, Marc Bertin, Philipp Mayr
    Abstract:

    During the last decade, the availability of scientific papers in full text and in machine-readable formats has become more and more widespread, thanks to the growing number of publications on online platforms such as ArXiv, CiteSeer, or PLoS. At the same time, research in natural language processing and computational linguistics has provided a number of open-source tools for versatile text processing (e.g. NLTK, Mallet, OpenNLP, CoreNLP, GATE, CiteSpace). The rise of Open Access publishing, the standardized formats for the representation of scientific papers (such as NLM-JATS, TEI, DocBook), and the availability of full-text datasets for research experiments and information retrieval corpora (e.g. PubMed, JSTOR, iSearch) have made it possible to perform bibliometric studies that consider not only the metadata of papers but also their full-text content. Scientific papers are highly structured texts that display specific properties related to their references as well as to their argumentative and rhetorical structure. Recent research in this field has concentrated on the construction of ontologies for the citations in scientific papers (e.g. CiTO, Linked Science) and on studies of the distribution of references. Up to now, however, full-text mining efforts have rarely been used to provide data for bibliometric analyses. While bibliometrics traditionally relies on the analysis of the metadata of scientific papers, we explore the ways in which full-text processing and linguistic analyses of scientific papers can contribute to bibliometric studies. This Research Topic aims to discuss novel approaches and provide insights into scientific writing that can bring new perspectives to understanding both the nature of citations and the nature of scientific papers. The possibility of enriching metadata through the full-text processing of papers opens new fields of application for bibliometric studies.
    Full text offers a new field of investigation, where the major problems arise around the organization and structure of text, the extraction of information, and its representation at the level of metadata. Furthermore, the study of the contexts around in-text citations offers new perspectives on the semantic dimension of citations. The analysis of citation contexts and the semantic categorization of publications will allow us to rethink co-citation networks, bibliographic coupling, and other bibliometric techniques. This Research Topic aims to promote interdisciplinary research in bibliometrics, natural language processing, and computational linguistics in order to study the ways in which bibliometrics can benefit from large-scale text analytics and sense mining of scientific papers. We encourage contributions on theoretical findings, practical methods, and technologies for the processing of scientific corpora involving full-text processing, semantic analysis, text mining, citation classification, and related topics. We also encourage surveys and evaluations of state-of-the-art methods, as well as more exploratory papers that identify novel challenges and pave the way to future theoretical frameworks.

  • Editorial for the Bibliometric-enhanced Information Retrieval Workshop at ECIR 2014
    arXiv: Information Retrieval, 2014
    Co-Authors: Philipp Mayr, Andrea Scharnhorst, Philipp Schaer, Peter Mutschke
    Abstract:

    This first "Bibliometric-enhanced Information Retrieval" (BIR 2014) workshop aims to engage the IR community with possible links to bibliometrics and scholarly communication. Bibliometric techniques are not yet widely used to enhance retrieval processes in digital libraries, although they offer value-added effects for users. In this workshop we will explore how statistical modelling of scholarship, such as Bradfordizing or network analysis of co-authorship networks, can improve retrieval services for specific communities as well as for large, cross-domain collections. The workshop aims to raise awareness of the missing link between information retrieval (IR) and bibliometrics/scientometrics and to create common ground for the incorporation of bibliometric-enhanced services into retrieval at the digital library interface. Our interests include information retrieval, information seeking, science modelling, network analysis, and digital libraries. The goal is to apply insights from bibliometrics, scientometrics, and informetrics to concrete practical problems of information retrieval and browsing.
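Bradfordizing, mentioned above, re-ranks a result set so that documents from the most productive journals for the query (those contributing the most hits) come first. A minimal sketch, assuming each result carries a journal name; the record structure and journal names are hypothetical:

```python
from collections import Counter

def bradfordize(results):
    # Count how many hits each journal contributes to this result set,
    # then re-rank so documents from the most productive journals come
    # first. Python's stable sort preserves the original relevance order
    # among documents from equally productive journals.
    journal_counts = Counter(journal for _, journal in results)
    return sorted(results, key=lambda r: -journal_counts[r[1]])

hits = [("d1", "J.Doc"), ("d2", "JASIST"), ("d3", "JASIST"),
        ("d4", "Scientometrics"), ("d5", "JASIST")]
print(bradfordize(hits))  # the three JASIST hits move to the top
```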

  • ECIR - Bibliometric-Enhanced Information Retrieval
    Lecture Notes in Computer Science, 2014
    Co-Authors: Philipp Mayr, Andrea Scharnhorst, Birger Larsen, Philipp Schaer, Peter Mutschke
    Abstract:

    Bibliometric techniques are not yet widely used to enhance retrieval processes in digital libraries, although they offer value-added effects for users. In this workshop we will explore how statistical modelling of scholarship, such as Bradfordizing or network analysis of co-authorship networks, can improve retrieval services for specific communities as well as for large, cross-domain collections. The workshop aims to raise awareness of the missing link between information retrieval (IR) and bibliometrics/scientometrics and to create common ground for the incorporation of bibliometric-enhanced services into retrieval at the digital library interface.

  • Bibliometric-Enhanced Information Retrieval. Editorial for the workshop.
    Lecture Notes in Computer Science, 2014
    Co-Authors: Philipp Mayr, Peter Mutschke, Andrea Scharnhorst, Philipp Schaer, Tom Kenter, Arjen P. De Vries, Franciska De Jong, Maarten De Rijke, Chengxiang Zhai, Kira Radinsky
    Abstract:

    This first "Bibliometric-enhanced Information Retrieval" (BIR 2014) workshop aims to engage the IR community with possible links to bibliometrics and scholarly communication. Bibliometric techniques are not yet widely used to enhance retrieval processes in digital libraries, although they offer value-added effects for users. In this workshop we will explore how statistical modelling of scholarship, such as Bradfordizing or network analysis of co-authorship networks, can improve retrieval services for specific communities as well as for large, cross-domain collections. The workshop aims to raise awareness of the missing link between information retrieval (IR) and bibliometrics/scientometrics and to create common ground for the incorporation of bibliometric-enhanced services into retrieval at the digital library interface. Our interests include information retrieval, information seeking, science modelling, network analysis, and digital libraries. The goal is to apply insights from bibliometrics, scientometrics, and informetrics to concrete practical problems of information retrieval and browsing.

  • BigData Conference - Bibliometric-enhanced retrieval models for big scholarly information systems
    2013 IEEE International Conference on Big Data, 2013
    Co-Authors: Philipp Mayr, Peter Mutschke
    Abstract:

    Bibliometric techniques are not yet widely used to enhance retrieval processes in digital libraries, although they offer value-added effects for users. In this paper we will explore how statistical modelling of scholarship, such as Bradfordizing or network analysis of co-authorship networks, can improve retrieval services for specific communities as well as for large, cross-domain collections. This paper aims to raise awareness of the missing link between information retrieval (IR) and bibliometrics/scientometrics and to create common ground for the incorporation of bibliometric-enhanced services into retrieval at the digital library interface.