Backpropagation Algorithm - Explore the Science & Experts | ideXlab

Backpropagation Algorithm

The experts below are selected from a list of 10,659 experts worldwide, ranked by the ideXlab platform.

P L Feintuch – 1st expert on this subject based on the ideXlab platform

  • Statistical analysis of the single-layer backpropagation algorithm. I. Mean weight behavior
    IEEE Transactions on Signal Processing, 1993
    Co-Authors: N J Bershad, J J Shynk, P L Feintuch

    Abstract:

    The single-layer backpropagation algorithm is a gradient-descent method that adjusts the connection weights of a single-layer perceptron to minimize the mean-square error at the output. It is similar to the standard least-mean-square algorithm, except that the output of the linear combiner contains a differentiable nonlinearity. In this paper, we present a statistical analysis of the mean weight behavior of the single-layer backpropagation algorithm for Gaussian input signals. It is based on a nonlinear system identification model of the desired response which is capable of generating an arbitrary hyperplane decision boundary. It is demonstrated that, although the weights grow unbounded, the algorithm, on average, quickly learns the correct hyperplane associated with the system identification model.
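
    The update described in this abstract is an LMS-style stochastic gradient step whose error is scaled by the derivative of the nonlinearity: with output y = g(wᵀx) and error e = d − y, the weights move as w ← w + μ e g′(wᵀx) x. The following is a minimal sketch of one such run, assuming a tanh nonlinearity and a sign-based desired response; the step size, dimension, and variable names are illustrative choices, not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        n = 4              # input dimension (illustrative)
        mu = 0.05          # step size (illustrative)
        w = np.zeros(n)    # adaptive weight vector
        w_true = rng.standard_normal(n)  # hyperplane normal of the identification model

        for _ in range(20000):
            x = rng.standard_normal(n)   # Gaussian input signal
            d = np.sign(w_true @ x)      # desired response: side of the model hyperplane
            y = np.tanh(w @ x)           # linear combiner followed by the nonlinearity
            e = d - y                    # output error
            w += mu * e * (1.0 - y**2) * x  # gradient step; 1 - y^2 is tanh'(w @ x)

        # On average the weight direction aligns with w_true even as ||w|| keeps growing,
        # consistent with the unbounded-weight behavior described above.
        cos_sim = (w @ w_true) / (np.linalg.norm(w) * np.linalg.norm(w_true))
        print(f"cosine similarity with the model hyperplane normal: {cos_sim:.3f}")
        print(f"weight norm after training: {np.linalg.norm(w):.2f}")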

  • Statistical analysis of the single-layer backpropagation algorithm
    ICASSP 91: 1991 International Conference on Acoustics, Speech, and Signal Processing, 1991
    Co-Authors: N J Bershad, J J Shynk, P L Feintuch

    Abstract:

    The authors present a statistical analysis of the steady-state and transient properties of the single-layer backpropagation algorithm for Gaussian input signals. It is based on a nonlinear system identification model of the desired response which is capable of generating an arbitrary hyperplane decision boundary. It is demonstrated that, although the weights grow unbounded, the mean-square error decreases towards zero. These results indicate that the algorithm, on average, quickly learns the correct hyperplane associated with the system identification model. However, the nature of the mean-square error and the corresponding performance surface are such that the perceptron is prevented from correctly classifying with probability one until the weights converge at infinity.
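
    Why correct classification with probability one requires infinite weights can be seen from a standard limit (an illustration, not a formula from the abstract). Scaling a fixed weight direction by c > 0 and letting c grow turns the smooth nonlinearity into a hard limiter; in LaTeX notation,

        \lim_{c \to \infty} \tanh\!\left(c\, w^{\mathsf T} x\right) = \operatorname{sgn}\!\left(w^{\mathsf T} x\right), \qquad w^{\mathsf T} x \neq 0,

    so the output can match a ±1 desired response exactly, and the mean-square error can reach zero, only as the weight magnitude grows without bound.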

N J Bershad – 2nd expert on this subject based on the ideXlab platform

  • Statistical analysis of the single-layer backpropagation algorithm. I. Mean weight behavior
    IEEE Transactions on Signal Processing, 1993
    Co-Authors: N J Bershad, J J Shynk, P L Feintuch
    Abstract: see the entry under P L Feintuch above.

  • Statistical analysis of the single-layer backpropagation algorithm
    ICASSP 91: 1991 International Conference on Acoustics, Speech, and Signal Processing, 1991
    Co-Authors: N J Bershad, J J Shynk, P L Feintuch
    Abstract: see the entry under P L Feintuch above.

J J Shynk – 3rd expert on this subject based on the ideXlab platform

  • Statistical analysis of the single-layer backpropagation algorithm. I. Mean weight behavior
    IEEE Transactions on Signal Processing, 1993
    Co-Authors: N J Bershad, J J Shynk, P L Feintuch
    Abstract: see the entry under P L Feintuch above.

  • Statistical analysis of the single-layer backpropagation algorithm
    ICASSP 91: 1991 International Conference on Acoustics, Speech, and Signal Processing, 1991
    Co-Authors: N J Bershad, J J Shynk, P L Feintuch
    Abstract: see the entry under P L Feintuch above.