Subject Domain

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 91,959 Experts worldwide, ranked by the ideXlab platform

Harris Cooper - One of the best experts on this subject based on the ideXlab platform.

  • a meta-analysis of the effectiveness of intelligent tutoring systems on college students' academic learning
    Journal of Educational Psychology, 2014
    Co-Authors: Saiying Steenbergenhu, Harris Cooper
    Abstract:

    This meta-analysis synthesizes research on the effectiveness of intelligent tutoring systems (ITS) for college students. Thirty-five reports were found containing 39 studies assessing the effectiveness of 22 types of ITS in higher education settings. Most frequently studied were AutoTutor, Assessment and Learning in Knowledge Spaces, eXtended Tutor-Expert System, and Web Interface for Statistics Education. Major findings include (a) Overall, ITS had a moderate positive effect on college students’ academic learning (g = .32 to g = .37); (b) ITS were less effective than human tutoring, but they outperformed all other instruction methods and learning activities, including traditional classroom instruction, reading printed text or computerized materials, computer-assisted instruction, laboratory or homework assignments, and no-treatment control; (c) ITS’s effectiveness did not significantly differ by different ITS, Subject Domain, or the manner or degree of their involvement in instruction and learning; and (d) effectiveness in earlier studies appeared to be significantly greater than that in more recent studies. In addition, there is some evidence suggesting the importance of teachers and pedagogy in ITS-assisted learning.

Saiying Steenbergenhu - One of the best experts on this subject based on the ideXlab platform.

  • a meta-analysis of the effectiveness of intelligent tutoring systems on college students' academic learning
    Journal of Educational Psychology, 2014
    Co-Authors: Saiying Steenbergenhu, Harris Cooper
    Abstract:

    This meta-analysis synthesizes research on the effectiveness of intelligent tutoring systems (ITS) for college students. Thirty-five reports were found containing 39 studies assessing the effectiveness of 22 types of ITS in higher education settings. Most frequently studied were AutoTutor, Assessment and Learning in Knowledge Spaces, eXtended Tutor-Expert System, and Web Interface for Statistics Education. Major findings include (a) Overall, ITS had a moderate positive effect on college students’ academic learning (g = .32 to g = .37); (b) ITS were less effective than human tutoring, but they outperformed all other instruction methods and learning activities, including traditional classroom instruction, reading printed text or computerized materials, computer-assisted instruction, laboratory or homework assignments, and no-treatment control; (c) ITS’s effectiveness did not significantly differ by different ITS, Subject Domain, or the manner or degree of their involvement in instruction and learning; and (d) effectiveness in earlier studies appeared to be significantly greater than that in more recent studies. In addition, there is some evidence suggesting the importance of teachers and pedagogy in ITS-assisted learning.

Bill Kules - One of the best experts on this subject based on the ideXlab platform.

  • retrieval effectiveness of table of contents and Subject headings
    ACM IEEE Joint Conference on Digital Libraries, 2007
    Co-Authors: Youngok Choi, Ingrid Hsiehyee, Bill Kules
    Abstract:

    The effectiveness of two modes of Subject representation - table of contents (TOC) and Subject headings - in Subject searching in an online public access catalog (OPAC) system was investigated. The retrieval difference between TOC and the Library of Congress Subject headings (LCSH) was statistically significant; the effect of Subject Domain was not statistically significant; users had better success matching their keywords to TOC than to LCSH; but their keywords often failed to retrieve items similar to the target items. These findings underscore the need to bridge user keywords to both TOC and LCSH.

Jochen Musch - One of the best experts on this subject based on the ideXlab platform.

  • REPRINT, pre-version: A Brief History of Web Experimenting
    2020
    Co-Authors: Ulf-Dietrich Reips, Jochen Musch
    Abstract:

    In recent years, a small but growing number of researchers has begun to use the World Wide Web as a medium for experimental research. To learn more about the circumstances and results of the first web experiments, we conducted a WWW-based online survey directed at researchers currently engaged in web experimenting. We hoped to get an impression of the experiences of the pioneering generation of web researchers. We summarize the results of this survey, which showed that an increasing number of web experiments with promising results is now being conducted, and give a brief overview of the short history of web experiments.

    The History of Web Experiments

    Computerized experimenting was introduced in the 1970s. Clearly, the computerized administration of experiments and questionnaires offered possibilities unavailable in traditional paper-and-pencil research. With hindsight, it is therefore hardly surprising that the computer revolution in experimental psychology in the 1970s was an overwhelming success. Twenty years later, most human experimental research in psychology is aided by computer automation. Extending computerized experimenting beyond single PCs, local computer networks have been used for the collection of data, with software packages such as PsyScope, MEL (Micro Experiment Laboratory), and ERTS (Experimental Run Time System). Although computerized experiments have become the method of choice in conducting psychological research, there are many signs that another revolution is now beginning. It is associated with the recent exponential growth of the Internet.

    The Internet's early purpose in the 1960s was to link a U.S. Defense Department network called the Advanced Research Projects Agency Network (ARPAnet) with a variety of other radio and satellite networks. Common uses of HTML forms are surveys, on-line order forms, or really any web page in which input is required from the user in order to accomplish a given task or provide a service to the user. Of course, for a psychologist, sending participants' experimental or questionnaire data back to the experimenter is the most interesting application of forms. Drawing on forms, the WWW first offered the possibility to conduct psychological surveys and experiments independent of any geographical constraints.

    HTML was soon supplemented by JavaScript, a compact, cross-platform, object-based scripting language that was first supported in version 2.0 of the Netscape Navigator and was also adopted by Microsoft in version 3.0 of its Internet Explorer. JavaScript statements can respond to user events such as mouse-clicks, form input, and page navigation. For example, JavaScript functions can be used to verify that users enter valid information into a form requesting a fixed number format. Without any network transmission, the HTML page with embedded JavaScript can check the entered data and alert the user with a dialog box if the input is invalid.

    Another important technology became available in 1995, when James Gosling and a team of programmers at Sun Microsystems released an Internet programming language called Java. It again radically altered the way applications and information can be retrieved, displayed, and used over the Internet. Client-side Java applets are small programs that are transmitted over the web and run on the user's machine, offering a wide variety of possibilities for sophisticated experiments. Java was first built into version 3.0 of the Navigator and version 3.0 of the Explorer.
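    The passage above describes client-side checking of form input with embedded JavaScript. As a minimal illustrative sketch of that technique (the form name, field name, and accepted range are hypothetical, not taken from the paper), an HTML page could validate a numeric entry before anything is transmitted:

        <form name="experimentForm" action="/submit" method="post" onsubmit="return checkAge();">
          Age: <input type="text" name="age">
          <input type="submit" value="Send">
        </form>
        <script>
          // Runs on the participant's machine; nothing is sent over the network
          // unless the entered data pass the check.
          function checkAge() {
            var age = parseInt(document.forms["experimentForm"]["age"].value, 10);
            if (isNaN(age) || age < 18 || age > 99) {
              // Alert the participant with a dialog box if the input is invalid.
              alert("Please enter your age as a number between 18 and 99.");
              return false; // cancel form submission
            }
            return true; // allow submission to the server
          }
        </script>

    If the check fails, the dialog box appears and the form is not submitted; only valid data reach the experimenter's server.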
    Owing to these technological developments and its exponential growth during the past few years, the World Wide Web presents researchers with an unprecedented opportunity to conduct experiments with participants from all over the world rather than with the usual student sample from their local universities. It thus has the potential to serve as an alternative or supplemental source of Subjects and research environment for traditional psychological investigations. However, the use of the WWW as a medium for experimental research also poses a unique set of challenges. As in writing a general history of the Internet (e.g., Musch, 1997), the frequency and ease with which WWW documents are changed, combined with the lack of an effort to comprehensively collect those documents during the first years of the WWW, make it difficult to determine what really happened when. This difficulty is even reflected in recommendations for references to online documents (e.g., Ott, Krüger & Funke, 1997), which advise adding the lookup date to the reference. On the other hand, the WWW is still very young, the number of web experiments is rather small, and so people's memory (including our own) should still be fresh.

    The experiment by Krantz, Ballard, and Scher (1997) might well have been the first true web experiment that went online, and it appears to be the first psychology web experiment that was published in a scientific journal. While Krantz, Ballard, and Scher used a within-Subjects design, the first web experiment with a between-Subjects design appears to be the web experiment on cognitive consistency of causal mechanisms. The Experimental Psychology Lab at Tübingen (Reips, 1995a), which is now at Zurich, is still a place for methodological discussions on web experimenting and actively invites experiments from other researchers, which can be hosted by the lab. Since 1995, a number of further sites have gone online. The most comprehensive list of web experiments on the WWW can be found on the Psychological Research on the Net page (American Psychological Society, 1995), which was created and is maintained by John Krantz. John Krantz and many (probably more than half) of the other currently active web experimenters generously agreed to participate in our survey on the experiences of the first generation of web researchers.

    Method

    All respondents were recruited via the Internet. To promote its existence, we announced the web experimenter survey to the following mailing lists:

    • PSYCGRAD (Psychology Graduate Student Internet Project)
    • RESEARCH (Psychology of the Internet: Research and Theory)
    • GIR-L (German Internet Research List)
    • SCiP (Society for Computers in Psychology)

    Additional invitations to participate were posted to the following Usenet newsgroups:

    • sci.psychology.research
    • sci.psychology.announce
    • sci.psychology.misc
    • alt.usenet.surveys
    • bit.listserv.psycgrad
    • de.alt.umfragen
    • de.sci.psychologie
    • de.sci.misc
    • z-netz.wissenschaft.psychologie

    Personal invitations were sent via e-mail to all researchers who had announced a web experiment. In this first wave of the survey, we told the respondents that if they conducted more than one web experiment, they should answer all questions with regard to their first web experiment. A second wave was online from April 16 to April 28, 1999.
    In this second wave, we asked participants who had already taken part in the first wave to answer some questions with respect to the last web experiment they had conducted. First-time participants were asked to describe the first experiment they had conducted. The number of questions was reduced for the second wave of the survey, which was announced to the same mailing lists and newsgroups as the first wave. In addition, it was announced to SJDM, the mailing list of the Society for Judgment and Decision Making. There were 14 submissions from researchers who conducted a web experiment in the second wave of the survey. Additional submissions from researchers who had conducted a survey rather than an experiment were not included in the analysis. Thus, the final sample consisted of 35 submissions from 29 different researchers currently engaged in web experimenting. The regional distribution of the 29 researchers was as follows: Germany (8), United States (…).

    Procedure

    The survey consisted of three WWW pages and was written in HTML, the markup language most often used to display pages on the WWW. On the first page, survey participants were greeted and informed about the rationale for conducting the survey. Participants were also told that only online data collections meeting the following definition were considered web experiments: "any undertaking in which some variable is manipulated; thus, in contrast with a survey, at least two conditions must be involved in an experiment". Then, participants were asked to indicate the number of web experiments they had conducted, and to provide us with their e-mail address for feedback and possibly additional questions. Most researchers had conducted one (N = 15) or two (N = 6) web experiments at the time of the survey. Eight experimenters had already conducted a higher number of studies (ranging from 3 to 20 experiments). Submission of the first page of the survey sent the form data (i.e., the data that were filled in by the respondent) to a server-side plugin (Mailagent 1.1, Netdrams Software, 1998), which wrote the data to a tab-delimited file and triggered the display of the second survey page.

    Results

    When did you start the experiment? When did you end the experiment?

    John Krantz and colleagues from Hanover College started their first web experiment in April 1995. We are not aware of any psychology web experiment with at least two conditions (i.e., two levels of an independent variable) that appeared on the WWW before this date. The number of web experiments has been rising constantly since, with the majority of web studies starting during 1997 and 1998.

    In the first wave of our survey, we told the respondents that if they had conducted more than one web experiment, they should answer the following questions for the first web experiment for which they had reported the starting date. Respondents also rated the importance of various reasons for conducting their experiment on the web, such as replicating a lab experiment with more power and the chance to better reach a special subpopulation on the web (e.g., handicapped, rape victims, chess players). The factor that experimenters rated as most important was reaching a large number of participants. The high statistical power associated with a large sample size, the high speed with which web experimenting is possible, and the chance to reach participants from other countries were also considered important by most of the respondents.
    How problematic do you think were the following potential problems in your web experiment? (1 = not problematic at all, 7 = very problematic)

    The biggest concern of the web experimenters participating in the first wave of our survey was the lack of control over participants' behavior during participation. However, a numeric value of 3.6 translates to no more than an assessment of this lack of control as "somewhat problematic". Ethical problems were not considered a problem by most web experimenters. Obviously, it is important to note that all these ratings came from researchers who themselves are conducting web experiments, and the results may well have been different if another sample of researchers had been asked.

    In which media did you announce your experiment?

    Naturally, the WWW was used most often to promote web experiments (22 out of 35 experiments were promoted on the web). Many researchers also relied on newsgroups (18), e-mails (15), and search engines (14) to advertise their experiment. A few experiments were also announced in print media (2) and radio programs (1).

    How many design factors did you include in your experiment? For each of these factors, please specify the number of levels on which it was varied.

    The mean number of factors that participants of the first wave of the survey indicated they had manipulated was 2.1 (with a median and a mode of 2.0). There were six experiments in which as many as three or more factors were varied. The following designs were reported in the first wave of the survey:

    Levels per factor    N
    5 x 5 x 2 x n        1
    5 x 5 x 4            1
    5 x 3 x 2            1
    3 x 3 x n            2
    3 x 2 x 2            1
    5 x 2                2
    3 x 3                1
    3 x 2                1
    2 x 2                5
    n x 2                1
    3                    1
    2                    4
    not specified        1

    60% of the designs involved between-Subjects factor manipulations, another 20% involved within-Subjects factor manipulations, and 20% involved both kinds of experimental manipulation.

    In what area of research did you conduct the experiment?

    To aid the respondents' decision, we offered a list of 54 Subject areas used for classifying poster presentations. The following selective list gives an impression of the theories and hypotheses that were tested in web experiments, and the independent variables that were manipulated. Hypotheses that are difficult to understand without sufficient knowledge of the Subject Domain, or that were not explained by the survey respondents, are not included in the list.

    • Judges violate stochastic dominance in coalesced gambles, but satisfy stochastic dominance when gambles are presented in split form (high versus low variance of gambles; .01, .50, .99 probability to win the higher prize; value of prizes; combined versus split consequences of gamble; stochastic dominance between gambles); a lower degree of violation among people with more training and education in judgment and decision-making
    • Pronominal case-ambiguous objects elicit a preference towards accusative case assignment, whereas non-pronominal, case-ambiguous objects elicit no preference (object noun phrase type: ambiguous non-pronominal, ambiguous pronominal, unambiguous pronominal; object case: accusative, dative)
    • Answers to questions in online surveys are potentially Subject to biases, depending on the number of questions per page (one, two), scale type (pop-up, radio buttons), reading directionality (from left/top, from right/bottom), cursor entry position (top, bottom), question order (donation question first, expense question first), and numerical labeling (-5 to +5, 0 to 10)
    • Experts use their specific and their general knowledge for data evaluation (expertise: high versus low; data are in accordance with versus contradict expert knowledge)
    • Subjects perceive feminine male and female faces as more attractive (masculinity of male face: more feminine to more masculine)
    • Background color influences response to emotionally laden statements (different shades of background color; print color white versus red)
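    Most of the designs above manipulate one or more factors between Subjects. As a generic, hypothetical sketch (not taken from any of the studies listed) of how such a manipulation can be implemented in a web experiment, client-side JavaScript can randomly assign each participant to one cell of a 2 x 2 design when the page loads:

        // Hypothetical 2 x 2 between-subjects design; factor names are loosely
        // modeled on the manipulations listed above, not copied from any study.
        var factors = {
          printColor: ["white", "red"],
          questionOrder: ["donation-first", "expense-first"]
        };

        // Pick one level of each factor at random for the current participant.
        function assignCondition(factors) {
          var condition = {};
          for (var name in factors) {
            var levels = factors[name];
            condition[name] = levels[Math.floor(Math.random() * levels.length)];
          }
          return condition;
        }

        var condition = assignCondition(factors);
        // The assigned levels would then drive how the page is rendered and be
        // recorded together with the participant's responses, e.g.:
        // { printColor: "red", questionOrder: "donation-first" }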

Hongjhe Chen - One of the best experts on this subject based on the ideXlab platform.

  • mining e-learning Domain concept map from academic articles
    International Conference on Advanced Learning Technologies, 2006
    Co-Authors: Nianshing Chen, P Kinshuk, Chunwang Wei, Hongjhe Chen
    Abstract:

    Recent research has demonstrated the importance of ontology and its applications. For example, while designing adaptive learning materials, designers need to refer to the ontology of a Subject Domain. Moreover, ontology can show the whole picture and core knowledge of a Subject Domain. Research from the literature also suggests that a graphical representation of ontology can reduce the problems of information overload and learning disorientation for learners. However, ontology construction has traditionally relied on Domain experts; it is a time-consuming and costly task. Ontology creation for emerging new Domains like e-Learning is even more challenging. The aim of this paper is to construct e-Learning Domain concept maps, an alternative form of ontology, from academic articles. We adopt relevant journal articles and conference papers in the e-Learning Domain as data sources, and apply text-mining techniques to automatically construct concept maps for the e-Learning Domain. The constructed concept maps can provide a useful reference for researchers who are new to the e-Learning field to study related issues, for teachers to design adaptive courses, and for learners to understand the whole picture of e-Learning Domain knowledge.
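    The abstract does not spell out the text-mining step, but one common, simple approach to building a concept map from articles is to link terms that frequently co-occur. The following is a minimal, hypothetical sketch of that idea (the seed terms, sentence-level co-occurrence window, and threshold are illustrative assumptions, not the authors' method):

        // Build a concept map as a weighted term co-occurrence graph.
        // Seed terms, window, and threshold below are illustrative only.
        const seedTerms = ["ontology", "concept map", "e-learning", "adaptive learning", "text mining"];

        function mineConceptMap(articles, terms, minWeight) {
          const edges = new Map(); // "termA|termB" -> co-occurrence count
          for (const article of articles) {
            // Treat each sentence as the co-occurrence window.
            for (const sentence of article.toLowerCase().split(/[.!?]+/)) {
              const present = terms.filter(t => sentence.includes(t.toLowerCase()));
              // Count every pair of terms appearing in the same sentence.
              for (let i = 0; i < present.length; i++) {
                for (let j = i + 1; j < present.length; j++) {
                  const key = [present[i], present[j]].sort().join("|");
                  edges.set(key, (edges.get(key) || 0) + 1);
                }
              }
            }
          }
          // Keep only links whose weight reaches the threshold; each link is an
          // edge of the concept map connecting two concepts.
          return Array.from(edges.entries())
            .filter(([, weight]) => weight >= minWeight)
            .map(([key, weight]) => ({ concepts: key.split("|"), weight }));
        }

        // Usage: pass an array of article texts; the result lists weighted concept links.
        console.log(mineConceptMap(
          ["Ontology and concept map techniques support e-learning design. Text mining can build the concept map automatically."],
          seedTerms, 1));

    A real system would add term extraction, stemming, and relation labeling, but the weighted co-occurrence graph above already captures the basic structure of a concept map.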