Academic Search - Explore the Science & Experts | ideXlab

Academic Search

The experts below are selected from a list of 72,465 experts worldwide, ranked by the ideXlab platform.

Neal R. Haddaway – One of the best experts on this subject based on the ideXlab platform.

  • Which Academic Search Systems are Suitable for Systematic Reviews or Meta‐Analyses? Evaluating Retrieval Qualities of Google Scholar, PubMed and 26 other Resources
    Research Synthesis Methods, 2020
    Co-Authors: Michael Gusenbauer, Neal R. Haddaway
    Abstract:

    Rigorous evidence identification is essential for systematic reviews and meta-analyses (evidence syntheses) because the sample selection of relevant studies determines a review’s outcome, validity, and explanatory power. Yet the search systems allowing access to this evidence provide varying levels of precision, recall, and reproducibility, and also demand different levels of effort. To date, it remains unclear which search systems are most appropriate for evidence synthesis and why. Advice on which search engines and bibliographic databases to choose for systematic searches is limited and lacks systematic, empirical performance assessments. This study investigates and compares the systematic search qualities of 28 widely used academic search systems, including Google Scholar, PubMed, and Web of Science. A novel, query-based method tests how well users are able to interact with and retrieve records from each system. The study is the first to show the extent to which search systems can effectively and efficiently perform (Boolean) searches with regard to precision, recall, and reproducibility. We found substantial differences in the performance of search systems, meaning that their usability in systematic searches varies. Indeed, only half of the search systems analyzed, and only a few open access databases, can be recommended for evidence syntheses without substantial caveats. In particular, our findings demonstrate why Google Scholar is inappropriate as a principal search system. We call on database owners to recognize the requirements of evidence synthesis and on academic journals to reassess quality requirements for systematic reviews. Our findings aim to support researchers in conducting better searches for better evidence synthesis.
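
    The abstract above evaluates search systems in terms of precision, recall, and reproducibility of (Boolean) searches. The study's own protocol is not reproduced on this page; the following is a minimal Python sketch of how those three measures could be scored for one system, assuming hypothetical sets of retrieved and known-relevant record identifiers.

      # Minimal sketch: scoring one search system against a known-relevant "gold" set.
      # The record identifiers and result sets below are hypothetical placeholders,
      # not data from the study.

      def precision(retrieved: set, relevant: set) -> float:
          """Fraction of retrieved records that are relevant."""
          return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

      def recall(retrieved: set, relevant: set) -> float:
          """Fraction of relevant records that were retrieved."""
          return len(retrieved & relevant) / len(relevant) if relevant else 0.0

      def reproducibility(run_a: set, run_b: set) -> float:
          """Jaccard overlap of two runs of the same query (1.0 = identical results)."""
          union = run_a | run_b
          return len(run_a & run_b) / len(union) if union else 1.0

      # Hypothetical example: the same Boolean query issued twice to one system.
      gold = {"rec1", "rec2", "rec3", "rec4"}   # known-relevant records
      run_1 = {"rec1", "rec2", "rec5"}          # first retrieval
      run_2 = {"rec1", "rec2", "rec6"}          # repeated retrieval

      print(f"precision={precision(run_1, gold):.2f}, "
            f"recall={recall(run_1, gold):.2f}, "
            f"reproducibility={reproducibility(run_1, run_2):.2f}")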

Michael Gusenbauer – One of the best experts on this subject based on the ideXlab platform.

  • Which Academic Search Systems are Suitable for Systematic Reviews or Meta‐Analyses? Evaluating Retrieval Qualities of Google Scholar, PubMed and 26 other Resources
    Research Synthesis Methods, 2020
    Co-Authors: Michael Gusenbauer, Neal R. Haddaway
    Abstract:

    Rigorous evidence identification is essential for systematic reviews and meta-analyses (evidence syntheses) because the sample selection of relevant studies determines a review’s outcome, validity, and explanatory power. Yet the search systems allowing access to this evidence provide varying levels of precision, recall, and reproducibility, and also demand different levels of effort. To date, it remains unclear which search systems are most appropriate for evidence synthesis and why. Advice on which search engines and bibliographic databases to choose for systematic searches is limited and lacks systematic, empirical performance assessments. This study investigates and compares the systematic search qualities of 28 widely used academic search systems, including Google Scholar, PubMed, and Web of Science. A novel, query-based method tests how well users are able to interact with and retrieve records from each system. The study is the first to show the extent to which search systems can effectively and efficiently perform (Boolean) searches with regard to precision, recall, and reproducibility. We found substantial differences in the performance of search systems, meaning that their usability in systematic searches varies. Indeed, only half of the search systems analyzed, and only a few open access databases, can be recommended for evidence syntheses without substantial caveats. In particular, our findings demonstrate why Google Scholar is inappropriate as a principal search system. We call on database owners to recognize the requirements of evidence synthesis and on academic journals to reassess quality requirements for systematic reviews. Our findings aim to support researchers in conducting better searches for better evidence synthesis.

  • Google Scholar to Overshadow Them All? Comparing the Sizes of 12 Academic Search Engines and Bibliographic Databases
    Scientometrics, 2019
    Co-Authors: Michael Gusenbauer
    Abstract:

    Information on the size of academic search engines and bibliographic databases (ASEBDs) is often outdated or entirely unavailable. Hence, it is difficult to assess the scope of specific databases, such as Google Scholar. While scientometric studies have estimated ASEBD sizes before, the methods employed were able to compare only a few databases. Consequently, there is no up-to-date comparative information on the sizes of popular ASEBDs. This study aims to fill this blind spot by providing a comparative picture of 12 of the most commonly used ASEBDs. In doing so, we build on and refine previous scientometric research by counting query hit data as an indicator of the number of accessible records. Iterative query optimization makes it possible to identify a maximum number of hits for most ASEBDs. The results were validated in terms of their capacity to assess database size by comparing them with official information on database sizes or previous scientometric studies. The queries used here are replicable, so size information can be updated quickly. The findings provide first-time size estimates of ProQuest and EbscoHost and indicate that Google Scholar’s size might have been underestimated so far by more than 50%. By our estimation, Google Scholar, with 389 million records, is currently the most comprehensive academic search engine.
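
    The size estimates described above rest on query hit counts: broad queries are iteratively extended until the reported hit count stops growing, and the maximum is taken as a lower bound on the number of accessible records. The sketch below illustrates that idea in Python; get_hit_count is a hypothetical stand-in for whatever interface a given database exposes, and the seed terms are illustrative only.

      # Minimal sketch of iterative query optimization for size estimation.
      # get_hit_count() is a hypothetical placeholder for a database-specific call
      # (an API request or a scraped result count); it is not a real library function.

      def get_hit_count(query: str) -> int:
          raise NotImplementedError("replace with a call to the database being measured")

      def estimate_size(seed_terms: list[str], max_terms: int = 10) -> int:
          """Greedily OR together broad terms and keep the largest hit count seen."""
          best_count, query_terms = 0, []
          for term in seed_terms[:max_terms]:
              candidate = query_terms + [term]
              count = get_hit_count(" OR ".join(candidate))
              if count > best_count:            # the term broadened coverage: keep it
                  best_count, query_terms = count, candidate
          return best_count                     # lower bound on accessible records

      # Illustrative seed terms; a real run would use terms tuned to the database.
      seeds = ["the", "a", "of", "and", "in", "1", "2", "study", "analysis", "review"]
      # estimate_size(seeds)  # would query the target system once per candidate term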

Maarten De Rijke – One of the best experts on this subject based on the ideXlab platform.

  • Characterizing and predicting downloads in Academic Search
    Information Processing & Management, 2019
    Co-Authors: Maarten De Rijke
    Abstract:

    Numerous studies have been conducted on the information interaction behavior of search engine users. Few studies have considered information interactions in the domain of academic search. We focus on conversion behavior in this domain. Conversions have been widely studied in the e-commerce domain, e.g., for online shopping and hotel booking, but little is known about conversions in academic search. We start with a description of a unique dataset of a particular type of conversion in academic search, viz. users’ downloads of scientific papers. Then we move to an observational analysis of users’ download actions. We first characterize user actions and show their statistics in sessions. Then we focus on behavioral and topical aspects of downloads, revealing behavioral correlations across download sessions. We discover unique properties that differ from other conversion settings such as online shopping. Using insights gained from these observations, we consider the task of predicting the next download. In particular, we focus on predicting the time until the next download session, and on predicting the number of downloads. We cast these as time series prediction problems and model them using LSTMs. We develop a specialized model built on user segmentations that achieves significant improvements over the state of the art.
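
    The abstract above casts the time until the next download session as a time series prediction problem modelled with LSTMs. The authors’ specialized, user-segmented model is not shown here; the following is a generic PyTorch sketch of that framing, with made-up feature dimensions and random data standing in for real session logs.

      # Generic sketch: regress the time until the next download session from a
      # sequence of per-session feature vectors using an LSTM. Dimensions and data
      # are illustrative placeholders, not the model or features from the paper.
      import torch
      import torch.nn as nn

      class NextDownloadLSTM(nn.Module):
          def __init__(self, n_features: int = 8, hidden: int = 32):
              super().__init__()
              self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)   # predicted time gap (e.g. hours)

          def forward(self, x):                  # x: (batch, sessions, n_features)
              _, (h_n, _) = self.lstm(x)         # h_n: (1, batch, hidden)
              return self.head(h_n[-1]).squeeze(-1)

      # Toy training loop on random data, just to show the shapes involved.
      model = NextDownloadLSTM()
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()

      x = torch.randn(16, 10, 8)                 # 16 users, 10 past sessions each
      y = torch.rand(16) * 48                    # hours until next download session

      for _ in range(5):
          optimizer.zero_grad()
          loss = loss_fn(model(x), y)
          loss.backward()
          optimizer.step()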

  • Do Topic Shift and Query Reformulation Patterns Correlate in Academic Search?
    European Conference on Information Retrieval, 2017
    Co-Authors: Maarten De Rijke
    Abstract:

    While it is known that academic searchers differ from typical web searchers, little is known about the search behavior of academic searchers over longer periods of time. In this study we take a look at academic searchers through a large-scale log analysis on a major academic search engine. We focus on two aspects: query reformulation patterns and topic shifts in queries. We first analyze how each of these aspects evolves over time. We identify important query reformulation patterns: revisiting and issuing new queries tend to happen more often over time. We also find that there are two distinct types of users: one type becomes increasingly focused on the topics they search for as time goes by, while the other increasingly diversifies. After analyzing these two aspects separately, we investigate whether, and to what degree, there is a correlation between topic shifts and query reformulations. Surprisingly, users’ preferences for query reformulations correlate little with their topic shift tendency. However, certain reformulations may help predict the magnitude of the topic shift that happens in the immediately following timespan. Our results shed light on academic searchers’ information seeking behavior and may benefit search personalization.
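
    The study above correlates users’ query reformulation patterns with topic shifts in their queries. The log data and exact measures are not available on this page; below is a small Python sketch of one plausible operationalization, where topic shift between consecutive queries is one minus the Jaccard similarity of their term sets, a reformulation is flagged by shared terms, and the two per-user rates are correlated with Pearson’s r. All queries shown are made up.

      # Sketch: per-user topic-shift vs. reformulation-rate correlation.
      # The query logs and the specific measures here are illustrative assumptions,
      # not the definitions used in the paper.
      from math import sqrt

      def jaccard(a: set, b: set) -> float:
          return len(a & b) / len(a | b) if a | b else 1.0

      def user_stats(queries: list[str]) -> tuple[float, float]:
          """Return (mean topic shift, reformulation rate) over consecutive queries."""
          shifts, reforms = [], []
          for prev, curr in zip(queries, queries[1:]):
              t_prev, t_curr = set(prev.lower().split()), set(curr.lower().split())
              shifts.append(1.0 - jaccard(t_prev, t_curr))     # 0 = same topic
              reforms.append(1.0 if t_prev & t_curr else 0.0)  # shared term = reformulation
          return sum(shifts) / len(shifts), sum(reforms) / len(reforms)

      def pearson(xs: list[float], ys: list[float]) -> float:
          n = len(xs)
          mx, my = sum(xs) / n, sum(ys) / n
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sx = sqrt(sum((x - mx) ** 2 for x in xs))
          sy = sqrt(sum((y - my) ** 2 for y in ys))
          return cov / (sx * sy) if sx and sy else 0.0

      # Made-up query sequences for three users.
      logs = [
          ["deep learning", "deep learning survey", "graph neural networks"],
          ["protein folding", "alphafold accuracy", "climate change policy"],
          ["query reformulation", "query reformulation patterns", "topic shift search"],
      ]
      stats = [user_stats(q) for q in logs]
      shift_means, reform_rates = zip(*stats)
      print("Pearson r:", round(pearson(list(shift_means), list(reform_rates)), 2))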
