The Experts below are selected from a list of 3558 Experts worldwide, ranked by the ideXlab platform

Andrzej Rusiecki - One of the best experts on this subject based on the ideXlab platform.

  • Robust LTS Backpropagation Learning Algorithm
    International Work-Conference on Artificial and Natural Neural Networks, 2007
    Co-Authors: Andrzej Rusiecki
    Abstract:

    Training data sets containing outliers are often a problem for supervised neural network Learning algorithms: such algorithms may not reach acceptable performance and may build very inaccurate models. In this paper, a new Learning algorithm robust to outliers, based on the Least Trimmed Squares (LTS) estimator, is proposed. The LTS Learning algorithm is also the first robust Learning algorithm that takes into account not only gross errors but also leverage data points. Results of simulations of networks trained with the new algorithm are presented, and robustness against outliers is demonstrated.
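
    The core of the LTS idea is easy to illustrate outside any particular network: sort the squared residuals and average only the smallest ones, so gross errors never enter the training criterion. Below is a minimal NumPy sketch of such a trimmed loss; the name `lts_loss` and the trimming fraction are illustrative choices, not details from the paper.

    ```python
    import numpy as np

    def lts_loss(residuals, trim_fraction=0.25):
        """Least Trimmed Squares-style loss: average only the h smallest
        squared residuals, so gross outlier errors are simply ignored.
        `trim_fraction` is an illustrative choice, not the paper's value."""
        r2 = np.sort(residuals ** 2)                   # ascending squared errors
        h = int(np.ceil((1.0 - trim_fraction) * r2.size))
        return r2[:h].mean()

    # Toy usage: well-behaved residuals plus two gross outliers.
    rng = np.random.default_rng(0)
    r = rng.normal(0.0, 0.1, size=100)
    r[:2] = [10.0, -8.0]                               # gross errors
    print("MSE:", np.mean(r ** 2))                     # dominated by the outliers
    print("LTS:", lts_loss(r))                         # barely affected
    ```

    In a training loop, gradients would flow only through the retained residuals, which is what makes the criterion robust.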

  • Robust MCD-Based Backpropagation Learning Algorithm
    Artificial Intelligence and Soft Computing – ICAISC, 2008
    Co-Authors: Andrzej Rusiecki
    Abstract:

    Training data containing outliers are often a problem for supervised neural network Learning methods, which may not always reach acceptable performance. In this paper, a new Learning algorithm robust to outliers is proposed, employing initial data analysis by the MCD (minimum covariance determinant) estimator. Results of implementing and simulating networks trained with the new algorithm, the traditional Backpropagation (BP) algorithm, and the robust LMLS algorithm are presented and compared. The better performance and robustness against outliers of the new method are demonstrated.
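
    The MCD step can be read as a pre-screening pass: estimate a robust location and scatter for the training data, then flag points with large robust Mahalanobis distances before ordinary Backpropagation training. The sketch below uses scikit-learn's `MinCovDet`; the chi-square cutoff and the hard removal of flagged points are assumptions made for illustration, not necessarily the paper's exact procedure.

    ```python
    import numpy as np
    from scipy.stats import chi2
    from sklearn.covariance import MinCovDet

    # Toy data: 2-D training inputs with a few planted outliers.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    X[:5] += 8.0                                   # planted gross outliers

    # Robust location/scatter via the MCD estimator, then flag points
    # whose squared robust Mahalanobis distance exceeds a chi2 cutoff.
    mcd = MinCovDet(random_state=0).fit(X)
    d2 = mcd.mahalanobis(X)                        # squared robust distances
    cutoff = chi2.ppf(0.975, df=X.shape[1])        # common 97.5% threshold
    X_clean = X[d2 <= cutoff]                      # data handed to ordinary BP
    print(f"kept {X_clean.shape[0]} of {X.shape[0]} samples")
    ```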

K. Nakayama - One of the best experts on this subject based on the ideXlab platform.

  • An Adaptive Penalty-Based Learning Extension for the Backpropagation Family
    IEICE Transactions on Fundamentals of Electronics Communications and Computer Sciences, 2006
    Co-Authors: B. Jansen, K. Nakayama
    Abstract:

    Over the years, many improvements and refinements to the Backpropagation Learning algorithm have been reported. In this paper, a new adaptive penalty-based Learning extension for the Backpropagation Learning algorithm and its variants is proposed. Instead of focusing mainly on minimizing the difference between the target and actual output values, the new method initially puts pressure on artificial neural networks to get all outputs for all training patterns into the correct half of the output range. The upper bound of the penalty values is also controlled. The technique is easy to implement and computationally inexpensive. In this study, the new approach is applied to the Backpropagation Learning algorithm as well as the RPROP Learning algorithm. The superiority of the proposed method is demonstrated through many simulations: by applying the extension, the percentage of successful runs can be greatly increased and the average number of epochs to convergence can be substantially reduced on various problem instances. The behavior of the penalty values during training is also analyzed, and their active role within the Learning process is confirmed.
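
    The wrong-half idea is simple to state for targets in [0, 1]: any output on the wrong side of 0.5 incurs an extra, bounded penalty on top of the usual squared error. The sketch below is one minimal reading of that idea; the fixed `penalty` weight and `penalty_cap` are illustrative stand-ins for the paper's adaptive, upper-bounded penalty schedule.

    ```python
    import numpy as np

    def penalty_extended_loss(y_pred, y_target, penalty=2.0, penalty_cap=5.0):
        """Squared error plus a capped extra penalty on every output that
        sits in the wrong half of the [0, 1] output range. The constants
        are illustrative; the paper adapts the penalties during training."""
        se = (y_pred - y_target) ** 2
        wrong_half = (y_target >= 0.5) != (y_pred >= 0.5)
        w = min(penalty, penalty_cap)              # enforce the upper bound
        return np.mean(se + w * wrong_half * se)

    # Toy usage: the third output is on the wrong side of 0.5.
    y_t = np.array([1.0, 0.0, 1.0])
    y_p = np.array([0.8, 0.1, 0.3])
    print(penalty_extended_loss(y_p, y_t))
    ```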

  • An Adaptive Penalty-Based Learning Extension for Backpropagation and its Variants
    The 2006 IEEE International Joint Conference on Neural Network Proceedings, 2006
    Co-Authors: B. Jansen, K. Nakayama
    Abstract:

    Over the years, many improvements and refinements of the Backpropagation Learning algorithm have been reported. In this paper, a new adaptive penalty-based Learning extension for the Backpropagation Learning algorithm and its variants is proposed. Instead of focusing mainly on minimizing the difference between the target and actual output values, the new method initially puts pressure on artificial neural networks to get all outputs for all training patterns into the correct half of the output range. The technique is easy to implement and computationally inexpensive. In this study, the new approach has been applied to the Backpropagation Learning algorithm as well as the RPROP Learning algorithm, and simulations have been performed that demonstrate the superiority of the proposed method. By applying the extension, the number of successful runs can be greatly increased and the average number of epochs to convergence can be substantially reduced on various problem instances. Furthermore, the change of the penalty values during training has been studied, and the observations show the active role the penalties play within the Learning process.
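
    Since both studies apply the extension on top of RPROP, a plain RPROP weight update is sketched below for context. It follows the widely used iRprop− variant with the standard constants η+ = 1.2 and η− = 0.5; these are general RPROP conventions, not values from the paper, and the penalty extension itself is not shown.

    ```python
    import numpy as np

    def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                   step_min=1e-6, step_max=50.0):
        """One iRprop- update: per-weight step sizes grow while the gradient
        sign is stable and shrink when it flips; only the sign of the
        gradient drives the weight change."""
        s = grad * prev_grad                       # sign agreement per weight
        step = np.where(s > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(s < 0, np.maximum(step * eta_minus, step_min), step)
        grad = np.where(s < 0, 0.0, grad)          # iRprop-: pause after a flip
        return w - np.sign(grad) * step, grad, step

    # Typical initialization: step = np.full_like(w, 0.1), prev_grad = zeros.
    ```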

B Walczak - One of the best experts on this subject based on the ideXlab platform.

Kazuyuki Aihara - One of the best experts on this subject based on the ideXlab platform.

  • Backpropagation Learning Algorithm for Multilayer Phasor Neural Networks
    International Conference on Neural Information Processing, 2009
    Co-Authors: Gouhei Tanaka, Kazuyuki Aihara
    Abstract:

    We present a Backpropagation Learning algorithm for multilayer feedforward phasor neural networks using a gradient descent method. The state of a phasor neuron is a complex-valued state on the unit circle in the complex domain; it can therefore be identified by its phase component alone, because the amplitude component is fixed. Due to the circularity of the phase variable, phasor neural networks are useful for dealing with periodic and multivalued variables. Under the assumption that the weight coefficients are complex numbers and the activation function is a continuous and differentiable function of a phase variable, we derive an iterative Learning algorithm to minimize the output error. In each step of the algorithm, the weight coefficients are updated in the gradient descent direction of the error function landscape. The proposed algorithm is numerically tested on a function approximation task. The numerical results suggest that the proposed method has better generalization ability than another Backpropagation algorithm based on a linear correction rule.
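
    A phasor layer's forward pass can be sketched directly from this description: unit-circle states identified by phase alone, complex weights, and an activation mapping the weighted sum back onto the unit circle. Using `np.angle` as that activation below is an assumption; the paper only requires a continuous, differentiable function of the phase.

    ```python
    import numpy as np

    def phasor_forward(phases_in, W):
        """One phasor layer: inputs/outputs live on the unit circle and are
        identified by phase; weights are complex. The weighted sum is mapped
        back to the circle by keeping only its argument (an illustrative
        choice of activation)."""
        x = np.exp(1j * phases_in)                 # unit-circle input states
        s = W @ x                                  # complex weighted sums
        return np.angle(s)                         # output phases in (-pi, pi]

    # Toy usage: 3 phase inputs -> 2 phasor neurons with random complex weights.
    rng = np.random.default_rng(2)
    W = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))
    print(phasor_forward(np.array([0.1, 1.5, -2.0]), W))
    ```

    A circular error measure such as 1 − cos(output − target) then stays well defined despite the 2π wrap-around of the phase variable.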

Satoshi Matsuda - One of the best experts on this subject based on the ideXlab platform.