Backpropagation

The Experts below are selected from a list of 35,031 Experts worldwide, ranked by the ideXlab platform.

Alexander Nikov - One of the best experts on this subject based on the ideXlab platform.

  • quick fuzzy Backpropagation algorithm
    Neural Networks, 2001
    Co-Authors: Alexander Nikov, Stanka Stoeva
    Abstract:

    A modification of the fuzzy Backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm has exponential time complexity, while the QuickFBP algorithm has polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithms are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. These conditions support automation of the weight-training process (quasi-unsupervised learning) by establishing the target value(s) from the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large neural network illustrates the significantly greater training speed of the QuickFBP algorithm compared to the FBP algorithm. The adaptation of an interactive web system to its users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm supports quasi-unsupervised learning, it is broadly applicable in areas such as adaptive and adaptable interactive systems and data mining.

  • a fuzzy Backpropagation algorithm
    Fuzzy Sets and Systems, 2000
    Co-Authors: Stanka Stoeva, Alexander Nikov
    Abstract:

    This paper presents an extension of the standard Backpropagation (SBP) algorithm. The proposed learning algorithm is based on the Sugeno fuzzy integral and is therefore called the fuzzy Backpropagation (FBP) algorithm. Necessary and sufficient conditions for the convergence of the FBP algorithm for single-output networks, in the cases of single and multiple training patterns, are proved. A computer simulation illustrates and confirms the theoretical results. The FBP algorithm shows a considerably higher convergence rate than the SBP algorithm. Further advantages of the FBP algorithm are that it approaches the target value without oscillation and requires no assumptions about the probability distribution or independence of the input data. The convergence conditions enable automation of the weight-tuning process (quasi-unsupervised learning) by indicating the interval to which the target value belongs. This supports the acquisition of implicit knowledge and enables wide application, e.g. the creation of adaptable user interfaces, assessment of products, and intelligent data analysis.
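
    The abstract names the Sugeno fuzzy integral as the basis of the FBP net function but does not reproduce the formulas. As orientation only, below is a minimal sketch of the discrete Sugeno integral with respect to a λ-fuzzy measure, the standard construction the algorithm builds on; the function names, toy densities, and root brackets are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import brentq

def sugeno_lambda(densities):
    """Solve 1 + lam = prod(1 + lam * g_i) for the unique root lam > -1, lam != 0.
    Assumes each density g_i lies strictly in (0, 1)."""
    f = lambda lam: np.prod(1.0 + lam * densities) - (1.0 + lam)
    s = densities.sum()
    if s > 1.0:
        return brentq(f, -1.0 + 1e-9, -1e-9)   # subadditive case: root in (-1, 0)
    if s < 1.0:
        return brentq(f, 1e-9, 1e9)            # superadditive case: root in (0, inf)
    return 0.0                                 # densities sum to 1: additive measure

def sugeno_integral(h, densities):
    """Discrete Sugeno integral of h with respect to the lambda-fuzzy measure."""
    lam = sugeno_lambda(densities)
    order = np.argsort(h)[::-1]                # visit sources by decreasing value h
    g_A, best = 0.0, 0.0
    for i in order:
        g_A = densities[i] + g_A + lam * densities[i] * g_A  # measure of coalition
        best = max(best, min(h[i], g_A))
    return best

# toy example: three inputs with partial evidence h and importance densities g
h = np.array([0.9, 0.4, 0.7])
g = np.array([0.3, 0.4, 0.2])
print(sugeno_integral(h, g))
```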

Stanka Stoeva - One of the best experts on this subject based on the ideXlab platform.

  • quick fuzzy Backpropagation algorithm
    Neural Networks, 2001
    Co-Authors: Alexander Nikov, Stanka Stoeva
    Abstract: identical to the entry of the same title under Alexander Nikov above.

  • a fuzzy Backpropagation algorithm
    Fuzzy Sets and Systems, 2000
    Co-Authors: Stanka Stoeva, Alexander Nikov
    Abstract: identical to the entry of the same title under Alexander Nikov above.

Dennis M Goodman - One of the best experts on this subject based on the ideXlab platform.

  • Backpropagation learning for multilayer feed-forward neural networks using the conjugate gradient method
    International Journal of Neural Systems, 1991
    Co-Authors: Erik M Johansson, Farid Dowla, Dennis M Goodman
    Abstract:

    In many applications, the number of interconnects or weights in a neural network is so large that the learning time for the conventional Backpropagation algorithm can become excessively long. Numerical optimization theory offers a rich and robust set of techniques that can be applied to neural networks to improve learning rates. In particular, the conjugate gradient method is easily adapted to the Backpropagation learning problem. This paper describes the conjugate gradient method and its application to the Backpropagation learning problem, and presents the results of numerical tests comparing conventional Backpropagation, steepest descent, and the conjugate gradient method. For the parity problem, we find that the conjugate gradient method is an order of magnitude faster than conventional Backpropagation with momentum.
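
    As a rough illustration of the approach (not the authors' implementation), the sketch below trains a small one-hidden-layer network on the 3-bit parity problem with a Polak-Ribière conjugate gradient and a simple backtracking line search; the architecture, line-search constants, and stopping rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# 3-bit parity: 8 patterns, target is the XOR of the bits
X = np.array([[i >> 2 & 1, i >> 1 & 1, i & 1] for i in range(8)], float)
y = (X.sum(axis=1) % 2).reshape(-1, 1)

shapes = [(3, 4), (1, 4), (4, 1), (1, 1)]          # W1, b1, W2, b2
sizes = [int(np.prod(s)) for s in shapes]

def unpack(w):
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(w[i:i + n].reshape(s))
        i += n
    return parts

def loss_grad(w):
    """Mean squared error and its gradient, computed by Backpropagation."""
    W1, b1, W2, b2 = unpack(w)
    a1 = np.tanh(X @ W1 + b1)
    a2 = 1.0 / (1.0 + np.exp(-(a1 @ W2 + b2)))     # sigmoid output
    loss = 0.5 * np.mean((a2 - y) ** 2)
    d2 = (a2 - y) * a2 * (1.0 - a2) / len(X)
    d1 = (d2 @ W2.T) * (1.0 - a1 ** 2)
    grad = np.concatenate([(X.T @ d1).ravel(), d1.sum(0),
                           (a1.T @ d2).ravel(), d2.sum(0)])
    return loss, grad

w = rng.normal(0.0, 0.5, sum(sizes))
loss, g = loss_grad(w)
d = -g                                             # initial search direction
for it in range(2000):
    t = 1.0                                        # backtracking line search along d
    while True:
        new_loss, new_g = loss_grad(w + t * d)
        if new_loss < loss or t < 1e-10:
            break
        t *= 0.5
    w = w + t * d
    beta = max(0.0, new_g @ (new_g - g) / (g @ g)) # Polak-Ribiere+ with restart at 0
    d = -new_g + beta * d
    loss, g = new_loss, new_g
    if loss < 1e-4:
        break
print(f"stopped at iteration {it}, loss {loss:.6f}")
```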

Zainab Namh Alsultani - One of the best experts on this subject based on the ideXlab platform.

  • hybrid system of learning vector quantization and enhanced resilient Backpropagation artificial neural network for intrusion classification
    2013
    Co-Authors: Reyadh Shaker Naoum, Zainab Namh Alsultani
    Abstract:

    Network-based computer systems play increasingly vital roles in modern society and have become targets of intrusion by criminals. An intrusion detection system attempts to detect computer attacks by examining various data records observed in processes on the network. This paper presents a hybrid intrusion detection system model using Learning Vector Quantization and an enhanced resilient Backpropagation artificial neural network. The proposed system is divided into five phases: the environment phase, the dataset features and preprocessing phase, the Learning Vector Quantization phase, the enhanced resilient Backpropagation neural network phase, and the hybrid system testing phase. A supervised Learning Vector Quantization (LVQ) network, the first stage of classification, was trained to detect intrusions; it consists of two layers with two different transfer functions, competitive and linear. A multilayer perceptron, the second stage of classification, was trained using an enhanced resilient Backpropagation training algorithm. The best numbers of hidden layers and hidden neurons were determined for training: one hidden layer with 32 hidden neurons was used. An optimal learning factor was derived to speed up the convergence of the resilient Backpropagation neural network. The evaluation was performed on the NSL-KDD99 network anomaly intrusion detection dataset. The experimental results demonstrate that the proposed system (LVQ_ERBP) achieves a detection rate of about 97.06% with a false negative rate of 2%.
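
    The abstract does not spell out the "enhanced" modifications or the derived learning factor. For orientation, here is a minimal sketch of a standard resilient Backpropagation per-weight update (the iRprop- variant), on which such enhancements build; the constants are the commonly published defaults, and everything else is an illustrative assumption.

```python
import numpy as np

class Rprop:
    """Minimal iRprop- update: only gradient signs are used, with a per-weight
    step size that grows while the sign is stable and shrinks when it flips."""
    def __init__(self, shape, eta_plus=1.2, eta_minus=0.5,
                 delta0=0.1, delta_min=1e-6, delta_max=50.0):
        self.delta = np.full(shape, delta0)
        self.prev_grad = np.zeros(shape)
        self.eta_plus, self.eta_minus = eta_plus, eta_minus
        self.delta_min, self.delta_max = delta_min, delta_max

    def step(self, w, grad):
        sign_change = np.sign(grad) * np.sign(self.prev_grad)
        self.delta = np.where(
            sign_change > 0,
            np.minimum(self.delta * self.eta_plus, self.delta_max),
            np.where(sign_change < 0,
                     np.maximum(self.delta * self.eta_minus, self.delta_min),
                     self.delta))
        # where the sign flipped, skip this update and forget the gradient
        grad = np.where(sign_change < 0, 0.0, grad)
        w -= np.sign(grad) * self.delta
        self.prev_grad = grad
        return w

# usage on a toy quadratic: minimize 0.5 * ||w - target||^2
target = np.array([3.0, -2.0, 0.5])
w = np.zeros(3)
opt = Rprop(w.shape)
for _ in range(60):
    w = opt.step(w, w - target)   # gradient of the quadratic is w - target
print(w)                          # close to target
```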

Hyongsuk Kim - One of the best experts on this subject based on the ideXlab platform.

  • hybrid no-propagation learning for multilayer neural networks
    Neurocomputing, 2018
    Co-Authors: Shyam Prasad Adhikari, Changju Yang, Krzysztof Slot, Michal Strzelecki, Hyongsuk Kim
    Abstract:

    A hybrid learning algorithm suitable for hardware implementation of multilayer neural networks is proposed. Though Backpropagation is a powerful learning method for multilayer neural networks, its hardware implementation is difficult due to the complexity of the neural synapses and of the operations involved in error Backpropagation. We propose a learning algorithm that performs comparably to Backpropagation but is easier to implement in hardware for on-chip learning of multilayer neural networks. In the proposed algorithm, a multilayer neural network is trained with a hybrid of the gradient-based delta rule and a stochastic algorithm called Random Weight Change: the output-layer parameters are learned using the delta rule, whereas the inner-layer parameters are learned using Random Weight Change, so the overall network is trained without the need for error Backpropagation. Experimental results are presented showing that the proposed hybrid learning rule performs better than either of its constituent learning algorithms and comparably to Backpropagation on the benchmark MNIST dataset. A hardware architecture illustrating the ease of implementing the proposed learning rule in analog hardware, vis-à-vis the Backpropagation algorithm, is also presented.
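
    As a toy sketch of the division of labor described above (output layer trained by the delta rule, hidden layer by Random Weight Change), here is a minimal NumPy example on XOR; the perturbation amplitude, acceptance rule, and architecture are illustrative assumptions, not the authors' hardware design.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # hidden layer: Random Weight Change
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # output layer: delta rule

def forward():
    a1 = np.tanh(X @ W1 + b1)
    a2 = 1.0 / (1.0 + np.exp(-(a1 @ W2 + b2)))
    return a1, a2

def mse():
    return np.mean((forward()[1] - y) ** 2)

amp, lr = 0.02, 0.5
dW1 = amp * rng.choice([-1.0, 1.0], W1.shape)     # current random change
db1 = amp * rng.choice([-1.0, 1.0], b1.shape)
err = mse()
for step in range(20000):
    # Random Weight Change on the hidden layer: apply the change; if the
    # error went down, keep the same change, otherwise draw a new one.
    W1 += dW1; b1 += db1
    # delta rule on the output layer (gradient of MSE w.r.t. W2, b2 only)
    a1, a2 = forward()
    d2 = (a2 - y) * a2 * (1.0 - a2)
    W2 -= lr * a1.T @ d2 / len(X)
    b2 -= lr * d2.mean(0)
    new_err = mse()
    if new_err >= err:
        dW1 = amp * rng.choice([-1.0, 1.0], W1.shape)
        db1 = amp * rng.choice([-1.0, 1.0], b1.shape)
    err = new_err
print(err, forward()[1].ravel())   # predictions should approach [0, 1, 1, 0]
```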