Backpropagation Learning - Explore the Science & Experts | ideXlab


Backpropagation Learning

The Experts below are selected from a list of 3,558 Experts worldwide, ranked by the ideXlab platform.

Andrzej Rusiecki – One of the best experts on this subject based on the ideXlab platform.

  • Robust LTS Backpropagation Learning Algorithm
    International Work-Conference on Artificial and Natural Neural Networks, 2007
    Co-Authors: Andrzej Rusiecki

    Abstract:

    Training data sets containing outliers are often a problem for supervised neural network Learning algorithms, which may not always achieve acceptable performance and can build very inaccurate models. In this paper a new Learning algorithm, robust to outliers and based on the Least Trimmed Squares (LTS) estimator, is proposed. The LTS Learning algorithm is also the first robust Learning algorithm that takes into account not only gross errors but also leverage points in the data. Results of simulations of networks trained with the new algorithm are presented, and its robustness against outliers is demonstrated.
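
The paper's exact formulation is not reproduced here, but the trimming idea behind an LTS-style loss can be sketched as follows. The trimming fraction, the function name, and the toy data are illustrative assumptions, not the authors' settings: only the h smallest squared residuals contribute to the loss (and hence to the backpropagated gradient), so gross outliers are simply cut out of each training step.

```python
import numpy as np

def lts_loss_and_mask(y_pred, y_true, trim_frac=0.25):
    """Return the trimmed sum of squares and a mask of retained samples."""
    residuals = (y_pred - y_true) ** 2        # per-sample squared errors
    n = len(residuals)
    h = n - int(trim_frac * n)                # number of samples to keep
    keep = np.argsort(residuals)[:h]          # indices of the h smallest residuals
    mask = np.zeros(n, dtype=bool)
    mask[keep] = True
    return residuals[mask].sum(), mask

y_true = np.array([0.0, 0.1, -0.1, 5.0])      # last target is a gross outlier
y_pred = np.array([0.05, 0.0, -0.05, 0.1])
loss, mask = lts_loss_and_mask(y_pred, y_true)
# the outlier is excluded from the loss, so it cannot dominate the gradient
```

In a training loop, the mask would then gate which samples' errors are backpropagated in that step.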

  • ICAISC – Robust MCD-Based Backpropagation Learning Algorithm
    Artificial Intelligence and Soft Computing – ICAISC 2008, 2006
    Co-Authors: Andrzej Rusiecki

    Abstract:

    Training data containing outliers are often a problem for supervised neural network Learning methods, which may not always achieve acceptable performance. In this paper a new Learning algorithm, robust to outliers, is proposed; it employs initial data analysis with the MCD (minimum covariance determinant) estimator. Results of implementing and simulating networks trained with the new algorithm, the traditional Backpropagation (BP) algorithm, and the robust LMLS algorithm are presented and compared. The better performance and robustness against outliers of the new method are demonstrated.

K. Nakayama – One of the best experts on this subject based on the ideXlab platform.

  • An Adaptive Penalty-Based Learning Extension for the Backpropagation Family
    IEICE Transactions on Fundamentals of Electronics Communications and Computer Sciences, 2006
    Co-Authors: B. Jansen, K. Nakayama

    Abstract:

    Over the years, many improvements and refinements to the Backpropagation Learning algorithm have been reported. In this paper, a new adaptive penalty-based Learning extension for the Backpropagation Learning algorithm and its variants is proposed. The new method initially puts pressure on artificial neural networks to get all outputs for all training patterns into the correct half of the output range, instead of focusing mainly on minimizing the difference between the target and actual output values. The upper bound of the penalty values is also controlled. The technique is easy to implement and computationally inexpensive. In this study, the new approach is applied to the Backpropagation Learning algorithm as well as the RPROP Learning algorithm. The superiority of the proposed method is demonstrated through many simulations: by applying the extension, the percentage of successful runs can be greatly increased and the average number of epochs to convergence can be substantially reduced on various problem instances. The behavior of the penalty values during training is also analyzed, and their active role within the Learning process is confirmed.
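
The authors' exact penalty schedule is not given in the abstract; the sketch below is an assumed minimal version of the stated idea for an output range of [0, 1], with illustrative penalty and cap values: an output on the wrong side of the midpoint incurs an extra, upper-bounded penalty on top of its squared error, so the network is first pushed toward the correct half of the range.

```python
import numpy as np

def penalized_error(y_pred, y_true, penalty=2.0, penalty_cap=5.0):
    """Squared error plus a bounded penalty for outputs in the wrong half of [0, 1]."""
    sq = (y_pred - y_true) ** 2
    # wrong half: target >= 0.5 but output < 0.5, or vice versa
    wrong_half = (y_true >= 0.5) != (y_pred >= 0.5)
    p = min(penalty, penalty_cap)        # the penalty value is bounded above
    return np.where(wrong_half, sq + p, sq).sum()

y_true = np.array([1.0, 0.0, 1.0])
ok     = np.array([0.8, 0.2, 0.6])       # all outputs in the correct half
bad    = np.array([0.8, 0.2, 0.4])       # last output in the wrong half
# the wrong-half output incurs the extra penalty on top of its squared error
```

Once every output reaches the correct half, the penalty term vanishes and training reduces to ordinary error minimization.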

  • An Adaptive Penalty-Based Learning Extension for Backpropagation and its Variants
    The 2006 IEEE International Joint Conference on Neural Network Proceedings, 2006
    Co-Authors: B. Jansen, K. Nakayama

    Abstract:

    Over the years, many improvements and refinements of the Backpropagation Learning algorithm have been reported. In this paper, a new adaptive penalty-based Learning extension for the Backpropagation Learning algorithm and its variants is proposed. The new method initially puts pressure on artificial neural networks to get all outputs for all training patterns into the correct half of the output range, instead of focusing mainly on minimizing the difference between the target and actual output values. The technique is easy to implement and computationally inexpensive. In this study, the new approach has been applied to the Backpropagation Learning algorithm as well as the RPROP Learning algorithm, and simulations have been performed. The superiority of the proposed method is demonstrated: by applying the extension, the number of successful runs can be greatly increased and the average number of epochs to convergence can be substantially reduced on various problem instances. Furthermore, the change of the penalty values during training has been studied, showing the active role the penalties play within the Learning process.

B Walczak – One of the best experts on this subject based on the ideXlab platform.

  • Neural Networks with Robust Backpropagation Learning Algorithm
    Analytica Chimica Acta, 1996
    Co-Authors: B Walczak

    Abstract:

    A robust error suppressor function, independent of the underlying probability density function and convenient to implement in computer programs, is proposed to robustify the Backpropagation Learning algorithm for multilayer feedforward networks. Its performance is studied on simulated data sets.
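
The paper's specific suppressor function is not reproduced here; as an illustration of the same idea, a Huber-style loss grows quadratically for small residuals but only linearly for large ones, so an outlier's contribution to the backpropagated gradient is bounded ("suppressed") rather than growing without limit as under squared error.

```python
import numpy as np

def huber_loss(residual, delta=1.0):
    """Quadratic for |r| <= delta, linear beyond: outliers get bounded influence."""
    r = np.abs(residual)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))

def huber_grad(residual, delta=1.0):
    # the gradient is clipped at +/- delta, unlike the unbounded squared-error gradient
    return np.clip(residual, -delta, delta)

r = np.array([0.1, -0.5, 8.0])   # last residual is a gross outlier
g = huber_grad(r)
# a squared-error gradient would be 8.0 for the outlier; here it is capped at 1.0
```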