Online Learning

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 330,492 Experts worldwide, ranked by the ideXlab platform

Jonna M. Kulikowich - One of the best experts on this subject based on the ideXlab platform.

  • Online Learning Self-Efficacy in Students With and Without Online Learning Experience
    American Journal of Distance Education, 2016
    Co-Authors: Whitney Alicia Zimmerman, Jonna M. Kulikowich
    Abstract:

    A need was identified for an instrument to measure online learning self-efficacy that encompasses the wide variety of tasks required of successful online students. The Online Learning Self-Efficacy Scale (OLSES) was designed to include tasks required of students enrolled in paced online courses at one university. In the present study, the twenty-two-item scale was completed by 338 postsecondary students with and without online learning experience. Separate principal components analyses were performed using data collected from participants who had and had not completed an online course. The results were similar for the two groups, and a three-subscale structure was selected for use with all individuals. The three subscales comprise items concerning (1) learning in the online environment, (2) time management, and (3) technology use. The reliability and validity of scores on the OLSES were explored through group comparisons and correlations. Suggestions for the use of the instrument with other populati...
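The analysis pipeline the abstract describes, principal components analysis on item responses followed by grouping items into subscales, can be sketched on synthetic data. This is an illustration only: the item split, factor structure, and the Kaiser eigenvalue-greater-than-one rule below are my assumptions, not details taken from the OLSES study.

```python
import numpy as np

# Synthetic Likert-style responses: 338 respondents x 22 items, generated
# from three latent factors to mimic a three-subscale structure.
# The 8/7/7 item split is hypothetical.
rng = np.random.default_rng(42)
n, items_per_factor = 338, [8, 7, 7]
factors = rng.normal(size=(n, 3))
cols = []
for f, k in enumerate(items_per_factor):
    for _ in range(k):
        cols.append(factors[:, f] + 0.5 * rng.normal(size=n))
X = np.column_stack(cols)

# Principal components via eigendecomposition of the correlation matrix.
R = np.corrcoef(X, rowvar=False)
eigvals, _ = np.linalg.eigh(R)          # eigh returns ascending order
eigvals = eigvals[::-1]                 # sort descending
# Kaiser rule (one common retention heuristic): keep components with
# eigenvalue > 1.  On this synthetic data it recovers the three factors.
n_components = int((eigvals > 1).sum())
```

With three strong latent factors and mild item noise, the first three eigenvalues dominate and the eigenvalue-greater-than-one count lands at three, mirroring the three-subscale solution the abstract reports.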

Rose M. Marra - One of the best experts on this subject based on the ideXlab platform.

Whitney Alicia Zimmerman - One of the best experts on this subject based on the ideXlab platform.

  • Online Learning Self-Efficacy in Students With and Without Online Learning Experience
    American Journal of Distance Education, 2016
    Co-Authors: Whitney Alicia Zimmerman, Jonna M. Kulikowich
    Abstract:

    A need was identified for an instrument to measure online learning self-efficacy that encompasses the wide variety of tasks required of successful online students. The Online Learning Self-Efficacy Scale (OLSES) was designed to include tasks required of students enrolled in paced online courses at one university. In the present study, the twenty-two-item scale was completed by 338 postsecondary students with and without online learning experience. Separate principal components analyses were performed using data collected from participants who had and had not completed an online course. The results were similar for the two groups, and a three-subscale structure was selected for use with all individuals. The three subscales comprise items concerning (1) learning in the online environment, (2) time management, and (3) technology use. The reliability and validity of scores on the OLSES were explored through group comparisons and correlations. Suggestions for the use of the instrument with other populati...

Demei Shen - One of the best experts on this subject based on the ideXlab platform.

Peilin Zhao - One of the best experts on this subject based on the ideXlab platform.

  • Online Learning: A Comprehensive Survey.
    arXiv: Learning, 2018
    Co-Authors: Steven C. H. Hoi, Doyen Sahoo, Peilin Zhao
    Abstract:

    Online learning represents an important family of machine learning algorithms, in which a learner attempts to solve an online prediction (or any type of decision-making) task by learning a model/hypothesis from a sequence of data instances one at a time. The goal of online learning is to ensure that the learner makes a sequence of accurate predictions (or correct decisions) given knowledge of the correct answers to previous prediction or learning tasks and possibly additional information. This is in contrast to many traditional batch learning or offline machine learning algorithms, which are often designed to train a model in batch from a given collection of training data instances. This survey aims to provide a comprehensive review of the online machine learning literature through a systematic treatment of basic ideas and key principles and a proper categorization of different algorithms and techniques. Generally speaking, according to the learning type and the form of feedback information, existing online learning work can be classified into three major categories: (i) supervised online learning, where full feedback information is always available; (ii) online learning with limited feedback; and (iii) unsupervised online learning, where no feedback is available. Due to space limitations, the survey focuses mainly on the first category but also briefly covers some basics of the other two. Finally, we discuss some open issues and attempt to shed light on potential future research directions in this field.
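The predict-then-update protocol the abstract describes (supervised online learning with full feedback) can be sketched with a minimal online perceptron. The function name and toy data are mine, not from the survey; the mistake count is the quantity online learning analyses typically bound.

```python
import numpy as np

def online_perceptron(stream):
    """Process (x, y) pairs one at a time; labels y are in {-1, +1}.

    Returns the learned weight vector and the cumulative mistake count.
    """
    w, mistakes = None, 0
    for x, y in stream:
        if w is None:
            w = np.zeros_like(x, dtype=float)
        y_hat = 1.0 if w @ x >= 0 else -1.0  # predict before seeing the label
        if y_hat != y:                       # full feedback: true label revealed
            mistakes += 1
            w += y * x                       # mistake-driven update
    return w, mistakes

# Linearly separable toy stream: label is the sign of the first coordinate.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
stream = [(x, 1.0 if x[0] > 0 else -1.0) for x in X]
w, m = online_perceptron(stream)
```

Each round follows the cycle the survey formalizes: the learner commits to a prediction, receives the correct answer, and updates its hypothesis before the next instance arrives.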

  • LIBOL: A Library for Online Learning Algorithms
    Journal of Machine Learning Research, 2014
    Co-Authors: Steven Chu Hong Hoi, Jialei Wang, Peilin Zhao
    Abstract:

    LIBOL is an open-source library for large-scale online learning, which consists of a large family of efficient and scalable state-of-the-art online learning algorithms for large-scale online classification tasks. We offer easy-to-use command-line tools and examples for users and developers, and have made comprehensive documentation available for both beginners and advanced users. LIBOL is not only a machine learning toolbox but also a comprehensive experimental platform for conducting online learning research.
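The kind of experiment such a toolbox automates, running several online learners over the same data stream and comparing cumulative mistakes, can be sketched in a few lines. This is illustrative Python under my own names, not LIBOL's actual interface; the passive-aggressive update shown is the standard PA-I rule with an assumed aggressiveness parameter C = 1.

```python
import numpy as np

def run_online(update, stream, dim):
    """Generic online loop: predict, receive label, update; count mistakes."""
    w, mistakes = np.zeros(dim), 0
    for x, y in stream:
        if np.sign(w @ x) != y:      # sign(0) = 0 counts as a mistake
            mistakes += 1
        w = update(w, x, y)
    return mistakes

def perceptron(w, x, y):
    # Update only on a margin violation at zero.
    return w + y * x if y * (w @ x) <= 0 else w

def passive_aggressive(w, x, y):
    # PA-I: step size is hinge loss over squared norm, capped at C = 1.
    loss = max(0.0, 1.0 - y * (w @ x))
    if loss > 0:
        tau = min(1.0, loss / (x @ x))
        w = w + tau * y * x
    return w

# Same linearly separable stream fed to both learners.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
w_true = np.ones(5)
stream = [(x, 1.0 if x @ w_true > 0 else -1.0) for x in X]
results = {name: run_online(fn, stream, 5)
           for name, fn in [("perceptron", perceptron),
                            ("PA-I", passive_aggressive)]}
```

A benchmarking library essentially wraps this loop with data loading, parameter search, timing, and reporting across many algorithms.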

  • Double Updating Online Learning
    Journal of Machine Learning Research, 2011
    Co-Authors: Peilin Zhao, Steven C. H. Hoi, Rong Jin
    Abstract:

    In most kernel-based online learning algorithms, when an incoming instance is misclassified, it is added to the pool of support vectors and assigned a weight, which often remains unchanged during the rest of the learning process. This is clearly insufficient, since when a new support vector is added, we generally expect the weights of the other existing support vectors to be updated in order to reflect the influence of the added support vector. In this paper, we propose a new online learning method, termed Double Updating Online Learning, or DUOL for short, that explicitly addresses this problem. Instead of only assigning a fixed weight to the misclassified example received at the current trial, the proposed online learning algorithm also tries to update the weight of one of the existing support vectors. We show that the mistake bound can be improved by the proposed method. We conduct an extensive set of empirical evaluations for both binary and multi-class online learning tasks. The experimental results show that the proposed technique is considerably more effective than the state-of-the-art online learning algorithms. The source code is available to the public at http://www.cais.ntu.edu.sg/~chhoi/DUOL/.
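The double-updating idea, adding the misclassified example as a support vector and simultaneously revising the weight of an existing, conflicting support vector, can be caricatured with a kernel perceptron. This is a loose illustration of the idea only: the heuristic second update below (nudge the most opposing old support vector by a fixed step) is my simplification, not the authors' DUOL update, which is derived from a dual optimization with a mistake-bound guarantee.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """Gaussian (RBF) kernel between two vectors."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def double_update_kernel_perceptron(stream, eta=0.5):
    """Kernel perceptron with a crude second update on each mistake."""
    svs, alphas, mistakes = [], [], 0
    for x, y in stream:
        score = sum(a * rbf(s, x) for s, a in zip(svs, alphas))
        if y * score <= 0:
            mistakes += 1
            # First update: the new example joins the support set
            # with weight y (the standard single update).
            svs.append(x)
            alphas.append(float(y))
            if len(svs) > 1:
                # Second update: find the old support vector whose
                # contribution most opposes the correct label at x and
                # nudge its weight toward agreement.
                contrib = [a * rbf(s, x)
                           for s, a in zip(svs[:-1], alphas[:-1])]
                j = int(np.argmin([y * c for c in contrib]))
                if y * contrib[j] < 0:
                    alphas[j] += eta * y
    return mistakes

# Nonlinear toy concept: label +1 outside a circle of squared radius 2.
rng = np.random.default_rng(7)
X = rng.normal(size=(300, 2))
stream = [(x, 1.0 if np.sum(x ** 2) > 2.0 else -1.0) for x in X]
m = double_update_kernel_perceptron(stream)
```

The contrast with the single-update baseline is the second branch: a plain kernel perceptron would stop after appending the new support vector, leaving the old, conflicting weight frozen.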