Learning Algorithm

The Experts below are selected from a list of 370,770 Experts worldwide, ranked by the ideXlab platform.

Andrzej Rusiecki - One of the best experts on this subject based on the ideXlab platform.

  • Robust LTS backpropagation Learning Algorithm
    International Work-Conference on Artificial and Natural Neural Networks, 2007
    Co-Authors: Andrzej Rusiecki
    Abstract:

    Training data sets containing outliers are often a problem for supervised neural network Learning Algorithms: they may not achieve acceptable performance and may build very inaccurate models. In this paper a new Learning Algorithm, robust to outliers and based on the Least Trimmed Squares (LTS) estimator, is proposed. The LTS Learning Algorithm is also the first robust Learning Algorithm that takes into account not only gross errors but also leverage data points. Results of simulations of networks trained with the new Algorithm are presented and its robustness against outliers is demonstrated.
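
    A minimal sketch of the trimmed error criterion in NumPy; the function name, trimming fraction, and toy data below are illustrative assumptions, not taken from the paper. During backpropagation the gradient flows only through the kept residuals, so gross outliers stop driving the weight updates.

```python
import numpy as np

def lts_loss(y_pred, y_true, trim_fraction=0.75):
    """Least Trimmed Squares style criterion: average only the h smallest squared residuals."""
    residuals_sq = (np.asarray(y_pred) - np.asarray(y_true)) ** 2
    h = max(1, int(trim_fraction * residuals_sq.size))
    kept = np.sort(residuals_sq.ravel())[:h]   # keep the h smallest squared residuals
    return kept.mean()

# Toy check: the gross outlier in the last target is trimmed away.
y_true = np.array([1.0, 2.0, 3.0, 100.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.0])
print(lts_loss(y_pred, y_true))
```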

  • ICAISC - Robust MCD-Based Backpropagation Learning Algorithm
    Artificial Intelligence and Soft Computing – ICAISC 2008, 2006
    Co-Authors: Andrzej Rusiecki
    Abstract:

    Training data containing outliers are often a problem for supervised neural network Learning methods, which may not always achieve acceptable performance. In this paper a new Learning Algorithm, robust to outliers and employing initial data analysis with the MCD (minimum covariance determinant) estimator, is proposed. Results of implementing and simulating networks trained with the new Algorithm, the traditional backpropagation (BP) Algorithm, and the robust LMLS Algorithm are presented and compared. The better performance and robustness against outliers of the new method are demonstrated.
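
    One plausible reading of the initial-data-analysis step, sketched with scikit-learn's MinCovDet; the chi-squared cutoff and the choice to simply drop flagged samples before ordinary BP training are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def mcd_filter(X, y, quantile=0.975):
    """Drop suspected outliers before training, using robust Mahalanobis distances from MCD."""
    Z = np.column_stack([X, y])                 # screen inputs and targets jointly
    mcd = MinCovDet(random_state=0).fit(Z)      # robust location/covariance estimate
    d2 = mcd.mahalanobis(Z)                     # squared robust Mahalanobis distances
    cutoff = chi2.ppf(quantile, df=Z.shape[1])  # chi-squared cutoff for outlyingness
    keep = d2 <= cutoff
    return X[keep], y[keep]

# The cleaned (X, y) would then be passed to an ordinary backpropagation trainer.
```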

Qiuye Sun - One of the best experts on this subject based on the ideXlab platform.

  • Nonlinear neuro-optimal tracking control via stable iterative Q-Learning Algorithm
    Neurocomputing, 2015
    Co-Authors: Qinglai Wei, Ruizhuo Song, Qiuye Sun
    Abstract:

    This paper discusses a new policy iteration Q-Learning Algorithm to solve the infinite-horizon optimal tracking problem for a class of discrete-time nonlinear systems. The idea is to use an iterative adaptive dynamic programming (ADP) technique to construct the iterative tracking control law which makes the system state track the desired state trajectory while simultaneously minimizing the iterative Q function. Via system transformation, the optimal tracking problem is transformed into an optimal regulation problem. The policy iteration Q-Learning Algorithm is then developed to obtain the optimal control law for the regulation system. With the iteration initialized by an arbitrary admissible control law, the convergence property is analyzed: it is shown that the iterative Q function is monotonically non-increasing and converges to the optimal Q function. It is proven that any of the iterative control laws can stabilize the transformed nonlinear system. Two neural networks are used to approximate the iterative Q function and to compute the iterative control law, respectively, facilitating the implementation of the policy iteration Q-Learning Algorithm. Finally, two simulation examples are presented to illustrate the performance of the developed Algorithm.
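
    A rough, grid-based stand-in for the iterative Q-function recursion described above; the paper works with neural-network approximators and a proper policy-iteration scheme, so the toy dynamics, cost, discretization, and greedy-sweep update below are simplifying assumptions rather than the authors' Algorithm.

```python
import numpy as np

# Hypothetical 1-D regulation system on a coarse grid (not the paper's plant).
states = np.linspace(-1.0, 1.0, 21)
actions = np.linspace(-0.5, 0.5, 11)

def f(x, u):                         # toy dynamics x_{k+1} = f(x_k, u_k)
    return np.clip(0.9 * x + u, -1.0, 1.0)

def U(x, u):                         # utility function: quadratic state/control cost
    return x ** 2 + u ** 2

def nearest(grid, v):                # index of the closest grid point
    return int(np.abs(grid - v).argmin())

Q = np.zeros((states.size, actions.size))    # initial Q function
for _ in range(50):                          # iterative sweeps
    policy = Q.argmin(axis=1)                # v_i(x) = argmin_u Q_i(x, u)
    Q_next = np.empty_like(Q)
    for si, x in enumerate(states):
        for ai, u in enumerate(actions):
            xn = nearest(states, f(x, u))
            # Q_{i+1}(x, u) = U(x, u) + Q_i(x', v_i(x'))
            Q_next[si, ai] = U(x, u) + Q[xn, policy[xn]]
    Q = Q_next

control_law = actions[Q.argmin(axis=1)]      # resulting regulation control law per state
```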

Y. Lacouture - One of the best experts on this subject based on the ideXlab platform.

  • Mean-variance backpropagation: a connectionist Learning Algorithm with a selective attention mechanism
    IJCNN-91-Seattle International Joint Conference on Neural Networks, 1991
    Co-Authors: Y. Lacouture
    Abstract:

    A modified version of the backpropagation Learning Algorithm called mean-variance backpropagation (MV-BP) is presented. It uses gradient descent to minimize a weighted mixture of the overall mean and variance of the squared errors computed across the stimulus set. Applied to a network with enough resources, the MV-BP Learning Algorithm yields Learning curves similar to those observed with standard backpropagation, but with faster Learning. When the new Learning Algorithm is used on a network with limited resources, Learning is still faster, but performance asymptotes at a higher level of mean-square error. The proposed MV-BP Learning Algorithm might not find the best solution, but it is probably better suited to modeling human cognitive Learning, since it allocates resources such that performance tends to be similar across all stimuli.
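
    A compact NumPy sketch of the objective as described; the mixing weight lam is an assumed free parameter rather than a value from the paper. The variance term penalizes unevenness of the per-stimulus errors, which is what pushes a limited-resource network toward similar performance on all stimuli.

```python
import numpy as np

def mv_bp_loss(y_pred, y_true, lam=0.5):
    """Weighted mixture of the mean and the variance of per-stimulus squared errors."""
    sq_err = np.sum((y_pred - y_true) ** 2, axis=-1)   # one squared error per stimulus
    return lam * sq_err.mean() + (1.0 - lam) * sq_err.var()

# Toy usage: two output units, three stimuli.
y_true = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y_pred = np.array([[0.1, 0.8], [0.7, 0.1], [0.4, 0.9]])
print(mv_bp_loss(y_pred, y_true))
```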

Kate A Smith - One of the best experts on this subject based on the ideXlab platform.

  • On Learning Algorithm selection for classification
    Applied Soft Computing, 2006
    Co-Authors: Shawkat Ali, Kate A Smith
    Abstract:

    This paper introduces a new method for Learning Algorithm evaluation and selection, with empirical results based on classification. The empirical study covers 8 Algorithms/classifiers applied to 100 different classification problems. We evaluate the Algorithms' performance in terms of a variety of accuracy and complexity measures. Consistent with the No Free Lunch theorem, we do not expect to identify the single Algorithm that performs best on all datasets. Rather, we aim to determine the characteristics of datasets that lend themselves to superior modelling by certain Learning Algorithms. Our empirical results are used to generate rules, using the rule-based Learning Algorithm C5.0, that describe which types of Algorithms are suited to which types of classification problems. Most of the rules are generated with a high confidence rating.
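
    A toy sketch of the meta-learning step, assuming scikit-learn with a DecisionTreeClassifier standing in for C5.0; the meta-features, example values, and candidate Algorithm labels are hypothetical. Each row of dataset characteristics is labelled with the Algorithm that performed best on that dataset, and the induced tree paths read as selection rules.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical meta-dataset: one row of dataset characteristics per problem,
# labelled with the Learning Algorithm that performed best on that problem.
meta_features = np.array([
    # [n_samples, n_features, class_entropy]
    [150.0,    4.0, 1.58],
    [20000.0, 16.0, 0.99],
    [300.0,   60.0, 1.00],
    [5000.0,   8.0, 2.30],
])
best_algorithm = np.array(["kNN", "C4.5", "SVM", "NaiveBayes"])

# Stand-in for C5.0: a decision tree whose paths read as selection rules.
meta_learner = DecisionTreeClassifier(max_depth=3, random_state=0)
meta_learner.fit(meta_features, best_algorithm)
print(export_text(meta_learner,
                  feature_names=["n_samples", "n_features", "class_entropy"]))
```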

B Walczak - One of the best experts on this subject based on the ideXlab platform.