Research Paper

The experts below are selected from a list of 3,877,440 experts worldwide, ranked by the ideXlab platform.

Joeran Beel - One of the best experts on this subject based on the ideXlab platform.

  • Research-Paper recommender systems: a literature survey
    International Journal on Digital Libraries, 2016
    Co-Authors: Joeran Beel, Stefan Langer, Bela Gipp, Corinna Breitinger
    Abstract:

    In the last 16 years, more than 200 research articles were published about research-paper recommender systems. We reviewed these articles and present some descriptive statistics in this paper, as well as a discussion of the major advancements and shortcomings and an overview of the most common recommendation concepts and approaches. We found that more than half of the recommendation approaches applied content-based filtering (55%). Collaborative filtering was applied by only 18% of the reviewed approaches, and graph-based recommendations by 16%. Other recommendation concepts included stereotyping, item-centric recommendations, and hybrid recommendations. The content-based filtering approaches mainly utilized papers that the users had authored, tagged, browsed, or downloaded. TF-IDF was the most frequently applied weighting scheme. In addition to simple terms, n-grams, topics, and citations were utilized to model users' information needs. Our review revealed some shortcomings of the current research. First, it remains unclear which recommendation concepts and approaches are the most promising. For instance, researchers reported different results on the performance of content-based and collaborative filtering; sometimes content-based filtering performed better than collaborative filtering and sometimes it performed worse. We identified three potential reasons for the ambiguity of the results. (A) Several evaluations had limitations: they were based on strongly pruned datasets, had few participants in user studies, or did not use appropriate baselines. (B) Some authors provided little information about their algorithms, which makes it difficult to re-implement the approaches; consequently, researchers use different implementations of the same recommendation approaches, which might lead to variations in the results. (C) We speculated that minor variations in datasets, algorithms, or user populations inevitably lead to strong variations in the performance of the approaches. Hence, finding the most promising approaches is a challenge. As a second limitation, we noted that many authors neglected to take into account factors other than accuracy, for example overall user satisfaction. In addition, most approaches (81%) neglected the user-modeling process and did not infer information automatically but let users provide keywords, text snippets, or a single paper as input. Information on runtime was provided for 10% of the approaches. Finally, few research papers had an impact on research-paper recommender systems in practice. We also identified a lack of authority and long-term research interest in the field: 73% of the authors published no more than one paper on research-paper recommender systems, and there was little cooperation among different co-author groups. We concluded that several actions could improve the research landscape: developing a common evaluation framework, agreement on the information to include in research papers, a stronger focus on non-accuracy aspects and user modeling, a platform for researchers to exchange information, and an open-source framework that bundles the available recommendation approaches.
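
The survey above identifies TF-IDF-weighted content-based filtering as the most common recommendation approach. As a minimal, hedged sketch of that general technique (not the implementation of any surveyed system), the Python snippet below builds a term-based user model from papers the user has interacted with and ranks candidate papers by cosine similarity; every title in it is hypothetical.

```python
# Minimal sketch of TF-IDF content-based filtering for research-paper
# recommendation. Illustrative only; not the implementation of any system
# covered by the survey. All paper texts below are invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Papers the user has authored, tagged, browsed, or downloaded (user profile).
user_papers = [
    "content based filtering for research paper recommender systems",
    "tf idf term weighting and citation based user modeling",
]

# Candidate papers from which recommendations are drawn (hypothetical corpus).
candidate_papers = [
    "collaborative filtering with implicit feedback",
    "a survey of research paper recommender systems",
    "graph based recommendation using citation networks",
]

# Fit one TF-IDF vocabulary over profile and candidates so vectors are comparable.
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
matrix = vectorizer.fit_transform(user_papers + candidate_papers)

profile = np.asarray(matrix[: len(user_papers)].mean(axis=0))  # centroid user model
candidates = matrix[len(user_papers):]

scores = cosine_similarity(profile, candidates)[0]

# Rank candidates by similarity to the user model (highest first).
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.3f}  {candidate_papers[idx]}")
```

The survey notes that n-grams and citations were also used to model information needs; in this sketch n-grams are enabled via the vectorizer's ngram_range parameter, while citation features would require a separate representation.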

  • A Comparison of Offline Evaluations, Online Evaluations, and User Studies in the Context of Research-Paper Recommender Systems
    International Conference on Theory and Practice of Digital Libraries, 2015
    Co-Authors: Joeran Beel, Stefan Langer
    Abstract:

    The evaluation of recommender systems is key to their successful application in practice. However, recommender-systems evaluation has received too little attention in the recommender-system community, in particular in the community of research-paper recommender systems. In this paper, we examine and discuss the appropriateness of different evaluation methods, i.e., offline evaluations, online evaluations, and user studies, in the context of research-paper recommender systems. We implemented different content-based filtering approaches in the research-paper recommender system of Docear. The approaches differed in the features they utilized (terms or citations), in user-model size, in whether stop words were removed, and in several other factors. The evaluations show that results from offline evaluations sometimes contradict results from online evaluations and user studies. We discuss potential reasons for the non-predictive power of offline evaluations, and we discuss whether the results of offline evaluations might have some inherent value. If they did, results of offline evaluations would be worth publishing even when they contradict the results of user studies and online evaluations. However, although offline evaluations theoretically might have some inherent value, we conclude that in practice offline evaluations are probably not suitable for evaluating recommender systems, particularly in the domain of research-paper recommendations. We further analyze and discuss the appropriateness of several online evaluation metrics such as click-through rate, link-through rate, and cite-through rate.
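
The abstract above names click-through rate, link-through rate, and cite-through rate as online evaluation metrics. The sketch below computes them from a toy event log under assumed definitions (each metric is the count of the corresponding action divided by the number of displayed recommendations); the precise definitions used by Beel and Langer are not stated in this listing, and the log itself is invented.

```python
# Hedged sketch of the online metrics named above, computed from a simple
# event log. Each entry records the most advanced action the user took on one
# displayed recommendation. The definitions are assumptions for illustration.
from collections import Counter

events = ["shown", "clicked", "shown", "linked", "cited", "shown", "clicked", "shown"]

counts = Counter(events)
shown = len(events)                                   # every logged recommendation was displayed
clicked = counts["clicked"] + counts["linked"] + counts["cited"]   # any click at all
linked = counts["linked"] + counts["cited"]           # user actually reached the linked paper
cited = counts["cited"]                               # recommended paper was later cited

print(f"click-through rate: {clicked / shown:.2%}")   # clicks / displayed
print(f"link-through rate:  {linked / shown:.2%}")    # opened links / displayed
print(f"cite-through rate:  {cited / shown:.2%}")     # later cited / displayed
```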

  • Research Paper Recommender System Evaluation: A Quantitative Literature Survey
    Conference on Recommender Systems, 2013
    Co-Authors: Joeran Beel, Stefan Langer, Corinna Breitinger, Bela Gipp, Marcel Genzmehr, Andreas Nurnberger
    Abstract:

    Over 80 approaches for academic literature recommendation exist today. These approaches were introduced and evaluated in more than 170 research articles, as well as in patents, presentations, and blogs. We reviewed these approaches and found most evaluations to contain major shortcomings. Of the approaches proposed, 21% were not evaluated. Among the evaluated approaches, 19% were not evaluated against a baseline. Of the user studies performed, 60% had 15 or fewer participants or did not report the number of participants. Information on runtime and coverage was rarely provided. Due to these and several other shortcomings described in this paper, we conclude that it is currently not possible to determine which recommendation approaches for academic literature are the most promising. However, there is little value in the existence of more than 80 approaches if the best-performing approaches are unknown.

  • A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation
    Conference on Recommender Systems, 2013
    Co-Authors: Joeran Beel, Stefan Langer, Marcel Genzmehr, Andreas Nurnberger, Bela Gipp
    Abstract:

    Offline evaluations are the most common evaluation method for research-paper recommender systems. However, no thorough discussion of the appropriateness of offline evaluations has taken place, despite some voiced criticism. We conducted a study in which we evaluated various recommendation approaches with both offline and online evaluations. We found that the results of offline and online evaluations often contradict each other. We discuss this finding in detail and conclude that, in many settings, offline evaluations may be inappropriate for evaluating research-paper recommender systems.
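
One way to make the reported contradiction between offline and online results concrete is to rank the same set of approaches by an offline metric and by an online metric and measure how well the two rankings agree. The rank-correlation sketch below is this note's illustration under invented numbers, not an analysis taken from the paper.

```python
# Sketch: quantifying (dis)agreement between offline and online evaluations by
# rank-correlating the two orderings of the same approaches. All numbers are
# invented, and rank correlation is this note's choice of illustration.
from scipy.stats import kendalltau

# Hypothetical results for five recommendation approaches.
offline_precision = {"A": 0.31, "B": 0.28, "C": 0.22, "D": 0.19, "E": 0.12}
online_ctr        = {"A": 0.04, "B": 0.07, "C": 0.03, "D": 0.06, "E": 0.05}

approaches = sorted(offline_precision)
tau, p_value = kendalltau(
    [offline_precision[a] for a in approaches],
    [online_ctr[a] for a in approaches],
)
# tau near +1: the evaluations agree; near 0 or negative: they contradict each other.
print(f"Kendall tau = {tau:.2f} (p = {p_value:.2f})")
```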

  • Sponsored vs. Organic Research Paper Recommendations and the Impact of Labeling
    International Conference on Theory and Practice of Digital Libraries, 2013
    Co-Authors: Joeran Beel, Stefan Langer, Marcel Genzmehr
    Abstract:

    In this paper we show that organic recommendations are preferred over commercial recommendations, even when they point to the same freely downloadable research papers. The mere fact that users perceived recommendations as commercial decreased their willingness to accept them. We further show that the exact labeling of recommendations matters. For instance, recommendations labeled as ‘advertisement’ performed worse than those labeled as ‘sponsored’. Similarly, recommendations labeled as ‘Free Research Papers’ performed better than those labeled as ‘Research Papers’. However, whatever the differences between the labels, the best-performing recommendations were those with no label at all.
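
The label comparison described above boils down to grouping recommendation impressions by their label and comparing click-through rates per group. The sketch below shows that bookkeeping with invented counts; only the label wordings come from the abstract, and the made-up numbers merely echo the qualitative ordering it reports.

```python
# Hedged sketch of a per-label click-through-rate comparison. Counts are
# invented for illustration; only the label texts come from the abstract above.
impressions = {            # label -> (times shown, times clicked), hypothetical
    "advertisement":        (1000, 28),
    "sponsored":            (1000, 41),
    "Research Papers":      (1000, 55),
    "Free Research Papers": (1000, 72),
    "(no label)":           (1000, 90),
}

# Report labels from best to worst click-through rate.
for label, (shown, clicked) in sorted(impressions.items(),
                                      key=lambda kv: kv[1][1] / kv[1][0],
                                      reverse=True):
    print(f"{label:<22} CTR = {clicked / shown:.1%}")
```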

Andreas Nurnberger - One of the best experts on this subject based on the ideXlab platform.

  • A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation
    Conference on Recommender Systems, 2013
    Co-Authors: Joeran Beel, Stefan Langer, Marcel Genzmehr, Andreas Nurnberger, Bela Gipp
    Abstract:

    Offline evaluations are the most common evaluation method for research-paper recommender systems. However, no thorough discussion of the appropriateness of offline evaluations has taken place, despite some voiced criticism. We conducted a study in which we evaluated various recommendation approaches with both offline and online evaluations. We found that the results of offline and online evaluations often contradict each other. We discuss this finding in detail and conclude that, in many settings, offline evaluations may be inappropriate for evaluating research-paper recommender systems.

  • Research Paper Recommender System Evaluation: A Quantitative Literature Survey
    Conference on Recommender Systems, 2013
    Co-Authors: Joeran Beel, Stefan Langer, Corinna Breitinger, Bela Gipp, Marcel Genzmehr, Andreas Nurnberger
    Abstract:

    Over 80 approaches for academic literature recommendation exist today. These approaches were introduced and evaluated in more than 170 research articles, as well as in patents, presentations, and blogs. We reviewed these approaches and found most evaluations to contain major shortcomings. Of the approaches proposed, 21% were not evaluated. Among the evaluated approaches, 19% were not evaluated against a baseline. Of the user studies performed, 60% had 15 or fewer participants or did not report the number of participants. Information on runtime and coverage was rarely provided. Due to these and several other shortcomings described in this paper, we conclude that it is currently not possible to determine which recommendation approaches for academic literature are the most promising. However, there is little value in the existence of more than 80 approaches if the best-performing approaches are unknown.

  • Introducing Docear's Research Paper Recommender System
    ACM/IEEE Joint Conference on Digital Libraries, 2013
    Co-Authors: Joeran Beel, Stefan Langer, Marcel Genzmehr, Andreas Nurnberger
    Abstract:

    In this demo paper we present Docear's research paper recommender system. Docear is an academic literature suite for searching, organizing, and creating research articles. The users' data (papers, references, annotations, etc.) is managed in mind maps, and these mind maps are utilized for the recommendations. Using content-based filtering methods, Docear's recommender achieves click-through rates of around 6%, and in some scenarios even over 10%.
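
Docear's recommendations are described above as being derived from the user's mind maps. As a hedged sketch of that idea only (not Docear's actual code), the snippet below flattens the node texts of a small invented mind map and takes the most frequent terms as a content-based user model, which a recommender could then match against candidate papers (for example with TF-IDF, as in the earlier sketch).

```python
# Hedged sketch of mind-map-based user modeling in the spirit of the abstract
# above. Illustration only, not Docear's actual code; the mind map is invented.
import re
from collections import Counter

# A mind map as nested nodes: (node text, [child nodes]).
mind_map = ("literature review", [
    ("research paper recommender systems", [
        ("content-based filtering", []),
        ("citation analysis", []),
    ]),
    ("evaluation of recommender systems", [
        ("click-through rate", []),
    ]),
])

STOP_WORDS = {"of", "the", "and", "a", "for"}   # tiny illustrative stop list

def terms(node):
    """Yield lower-cased terms from every node text in the mind map."""
    text, children = node
    yield from (t for t in re.findall(r"[a-z]+", text.lower()) if t not in STOP_WORDS)
    for child in children:
        yield from terms(child)

# The most frequent mind-map terms serve as the content-based user model.
user_model = Counter(terms(mind_map)).most_common(5)
print(user_model)
```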

Corinna Breitinger - One of the best experts on this subject based on the ideXlab platform.

  • Research-Paper recommender systems: a literature survey
    International Journal on Digital Libraries, 2016
    Co-Authors: Joeran Beel, Stefan Langer, Bela Gipp, Corinna Breitinger
    Abstract:

    In the last 16 years, more than 200 research articles were published about research-paper recommender systems. We reviewed these articles and present some descriptive statistics in this paper, as well as a discussion of the major advancements and shortcomings and an overview of the most common recommendation concepts and approaches. We found that more than half of the recommendation approaches applied content-based filtering (55%). Collaborative filtering was applied by only 18% of the reviewed approaches, and graph-based recommendations by 16%. Other recommendation concepts included stereotyping, item-centric recommendations, and hybrid recommendations. The content-based filtering approaches mainly utilized papers that the users had authored, tagged, browsed, or downloaded. TF-IDF was the most frequently applied weighting scheme. In addition to simple terms, n-grams, topics, and citations were utilized to model users' information needs. Our review revealed some shortcomings of the current research. First, it remains unclear which recommendation concepts and approaches are the most promising. For instance, researchers reported different results on the performance of content-based and collaborative filtering; sometimes content-based filtering performed better than collaborative filtering and sometimes it performed worse. We identified three potential reasons for the ambiguity of the results. (A) Several evaluations had limitations: they were based on strongly pruned datasets, had few participants in user studies, or did not use appropriate baselines. (B) Some authors provided little information about their algorithms, which makes it difficult to re-implement the approaches; consequently, researchers use different implementations of the same recommendation approaches, which might lead to variations in the results. (C) We speculated that minor variations in datasets, algorithms, or user populations inevitably lead to strong variations in the performance of the approaches. Hence, finding the most promising approaches is a challenge. As a second limitation, we noted that many authors neglected to take into account factors other than accuracy, for example overall user satisfaction. In addition, most approaches (81%) neglected the user-modeling process and did not infer information automatically but let users provide keywords, text snippets, or a single paper as input. Information on runtime was provided for 10% of the approaches. Finally, few research papers had an impact on research-paper recommender systems in practice. We also identified a lack of authority and long-term research interest in the field: 73% of the authors published no more than one paper on research-paper recommender systems, and there was little cooperation among different co-author groups. We concluded that several actions could improve the research landscape: developing a common evaluation framework, agreement on the information to include in research papers, a stronger focus on non-accuracy aspects and user modeling, a platform for researchers to exchange information, and an open-source framework that bundles the available recommendation approaches.

  • Research Paper Recommender System Evaluation: A Quantitative Literature Survey
    Conference on Recommender Systems, 2013
    Co-Authors: Joeran Beel, Stefan Langer, Corinna Breitinger, Bela Gipp, Marcel Genzmehr, Andreas Nurnberger
    Abstract:

    Over 80 approaches for academic literature recommendation exist today. These approaches were introduced and evaluated in more than 170 research articles, as well as in patents, presentations, and blogs. We reviewed these approaches and found most evaluations to contain major shortcomings. Of the approaches proposed, 21% were not evaluated. Among the evaluated approaches, 19% were not evaluated against a baseline. Of the user studies performed, 60% had 15 or fewer participants or did not report the number of participants. Information on runtime and coverage was rarely provided. Due to these and several other shortcomings described in this paper, we conclude that it is currently not possible to determine which recommendation approaches for academic literature are the most promising. However, there is little value in the existence of more than 80 approaches if the best-performing approaches are unknown.

Stefan Langer - One of the best experts on this subject based on the ideXlab platform.

  • Research-Paper recommender systems: a literature survey
    International Journal on Digital Libraries, 2016
    Co-Authors: Joeran Beel, Stefan Langer, Bela Gipp, Corinna Breitinger
    Abstract:

    In the last 16 years, more than 200 research articles were published about research-paper recommender systems. We reviewed these articles and present some descriptive statistics in this paper, as well as a discussion of the major advancements and shortcomings and an overview of the most common recommendation concepts and approaches. We found that more than half of the recommendation approaches applied content-based filtering (55%). Collaborative filtering was applied by only 18% of the reviewed approaches, and graph-based recommendations by 16%. Other recommendation concepts included stereotyping, item-centric recommendations, and hybrid recommendations. The content-based filtering approaches mainly utilized papers that the users had authored, tagged, browsed, or downloaded. TF-IDF was the most frequently applied weighting scheme. In addition to simple terms, n-grams, topics, and citations were utilized to model users' information needs. Our review revealed some shortcomings of the current research. First, it remains unclear which recommendation concepts and approaches are the most promising. For instance, researchers reported different results on the performance of content-based and collaborative filtering; sometimes content-based filtering performed better than collaborative filtering and sometimes it performed worse. We identified three potential reasons for the ambiguity of the results. (A) Several evaluations had limitations: they were based on strongly pruned datasets, had few participants in user studies, or did not use appropriate baselines. (B) Some authors provided little information about their algorithms, which makes it difficult to re-implement the approaches; consequently, researchers use different implementations of the same recommendation approaches, which might lead to variations in the results. (C) We speculated that minor variations in datasets, algorithms, or user populations inevitably lead to strong variations in the performance of the approaches. Hence, finding the most promising approaches is a challenge. As a second limitation, we noted that many authors neglected to take into account factors other than accuracy, for example overall user satisfaction. In addition, most approaches (81%) neglected the user-modeling process and did not infer information automatically but let users provide keywords, text snippets, or a single paper as input. Information on runtime was provided for 10% of the approaches. Finally, few research papers had an impact on research-paper recommender systems in practice. We also identified a lack of authority and long-term research interest in the field: 73% of the authors published no more than one paper on research-paper recommender systems, and there was little cooperation among different co-author groups. We concluded that several actions could improve the research landscape: developing a common evaluation framework, agreement on the information to include in research papers, a stronger focus on non-accuracy aspects and user modeling, a platform for researchers to exchange information, and an open-source framework that bundles the available recommendation approaches.

  • A Comparison of Offline Evaluations, Online Evaluations, and User Studies in the Context of Research-Paper Recommender Systems
    International Conference on Theory and Practice of Digital Libraries, 2015
    Co-Authors: Joeran Beel, Stefan Langer
    Abstract:

    The evaluation of recommender systems is key to their successful application in practice. However, recommender-systems evaluation has received too little attention in the recommender-system community, in particular in the community of research-paper recommender systems. In this paper, we examine and discuss the appropriateness of different evaluation methods, i.e., offline evaluations, online evaluations, and user studies, in the context of research-paper recommender systems. We implemented different content-based filtering approaches in the research-paper recommender system of Docear. The approaches differed in the features they utilized (terms or citations), in user-model size, in whether stop words were removed, and in several other factors. The evaluations show that results from offline evaluations sometimes contradict results from online evaluations and user studies. We discuss potential reasons for the non-predictive power of offline evaluations, and we discuss whether the results of offline evaluations might have some inherent value. If they did, results of offline evaluations would be worth publishing even when they contradict the results of user studies and online evaluations. However, although offline evaluations theoretically might have some inherent value, we conclude that in practice offline evaluations are probably not suitable for evaluating recommender systems, particularly in the domain of research-paper recommendations. We further analyze and discuss the appropriateness of several online evaluation metrics such as click-through rate, link-through rate, and cite-through rate.

  • Research Paper Recommender System Evaluation: A Quantitative Literature Survey
    Conference on Recommender Systems, 2013
    Co-Authors: Joeran Beel, Stefan Langer, Corinna Breitinger, Bela Gipp, Marcel Genzmehr, Andreas Nurnberger
    Abstract:

    Over 80 approaches for academic literature recommendation exist today. These approaches were introduced and evaluated in more than 170 research articles, as well as in patents, presentations, and blogs. We reviewed these approaches and found most evaluations to contain major shortcomings. Of the approaches proposed, 21% were not evaluated. Among the evaluated approaches, 19% were not evaluated against a baseline. Of the user studies performed, 60% had 15 or fewer participants or did not report the number of participants. Information on runtime and coverage was rarely provided. Due to these and several other shortcomings described in this paper, we conclude that it is currently not possible to determine which recommendation approaches for academic literature are the most promising. However, there is little value in the existence of more than 80 approaches if the best-performing approaches are unknown.

  • A Comparative Analysis of Offline and Online Evaluations and Discussion of Research Paper Recommender System Evaluation
    Conference on Recommender Systems, 2013
    Co-Authors: Joeran Beel, Stefan Langer, Marcel Genzmehr, Andreas Nurnberger, Bela Gipp
    Abstract:

    Offline evaluations are the most common evaluation method for research-paper recommender systems. However, no thorough discussion of the appropriateness of offline evaluations has taken place, despite some voiced criticism. We conducted a study in which we evaluated various recommendation approaches with both offline and online evaluations. We found that the results of offline and online evaluations often contradict each other. We discuss this finding in detail and conclude that, in many settings, offline evaluations may be inappropriate for evaluating research-paper recommender systems.

  • Sponsored vs. Organic Research Paper Recommendations and the Impact of Labeling
    International Conference on Theory and Practice of Digital Libraries, 2013
    Co-Authors: Joeran Beel, Stefan Langer, Marcel Genzmehr
    Abstract:

    In this paper we show that organic recommendations are preferred over commercial recommendations, even when they point to the same freely downloadable research papers. The mere fact that users perceived recommendations as commercial decreased their willingness to accept them. We further show that the exact labeling of recommendations matters. For instance, recommendations labeled as ‘advertisement’ performed worse than those labeled as ‘sponsored’. Similarly, recommendations labeled as ‘Free Research Papers’ performed better than those labeled as ‘Research Papers’. However, whatever the differences between the labels, the best-performing recommendations were those with no label at all.

Khalid Haruna - One of the best experts on this subject based on the ideXlab platform.

  • A Collaborative Approach for Research Paper Recommender System
    PLOS ONE, 2017
    Co-Authors: Khalid Haruna, Maizatul Akmar Ismail, Damiasih Damiasih, Joko Sutopo, Tutut Herawan
    Abstract:

    Research paper recommenders emerged over the last decade to ease finding publications related to researchers' areas of interest. The challenge was not just to provide researchers with very rich publications at any time, in any place, and in any form, but also to offer the right publication to the right researcher in the right way. Several approaches to building paper recommender systems exist. However, these approaches assume that the whole contents of the recommended papers are freely accessible, which is not always true due to factors such as copyright restrictions. This paper presents a collaborative approach for a research paper recommender system. By leveraging the advantages of collaborative filtering, we utilize publicly available contextual metadata to infer the hidden associations that exist between research papers in order to personalize recommendations. The novelty of our proposed approach is that it provides personalized recommendations regardless of the research field and regardless of the user's expertise. Using a publicly available dataset, our approach records a significant improvement over baseline methods, both in overall performance and in the ability to return relevant and useful publications at the top of the recommendation list.
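
The abstract above describes inferring hidden associations between papers from publicly available contextual metadata rather than from full texts. Which metadata and which similarity measure the authors use is not stated here, so the sketch below is only one plausible reading: papers that co-occur in the same invented citation contexts are linked, and a cosine-style co-occurrence score ranks papers related to a seed paper.

```python
# Hedged sketch of an item-to-item collaborative approach in the spirit of the
# abstract above. The citation contexts and the co-occurrence similarity are
# assumptions chosen for illustration, not the authors' stated method.
from collections import defaultdict
from math import sqrt

# Hypothetical metadata: each context (e.g. a citing paper) lists the paper IDs
# that appear together in it.
contexts = [
    {"P1", "P2", "P3"},
    {"P1", "P3"},
    {"P2", "P4"},
    {"P3", "P4", "P5"},
]

# Count how often each pair of papers co-occurs, and how often each paper occurs.
pair_counts = defaultdict(int)
paper_counts = defaultdict(int)
for ctx in contexts:
    for p in ctx:
        paper_counts[p] += 1
    for a in ctx:
        for b in ctx:
            if a < b:
                pair_counts[(a, b)] += 1

def similarity(a, b):
    """Cosine of the two papers' binary occurrence vectors over contexts."""
    co = pair_counts.get((min(a, b), max(a, b)), 0)
    return co / sqrt(paper_counts[a] * paper_counts[b])

# Recommend papers associated with one the user is reading (here: "P1").
seed = "P1"
ranked = sorted(
    ((similarity(seed, p), p) for p in paper_counts if p != seed),
    reverse=True,
)
for score, paper in ranked:
    print(f"{score:.2f}  {paper}")
```

Because only metadata is needed, such an approach remains applicable when full texts are locked behind copyright restrictions, which is the motivation given in the abstract.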