Adaptive Learning Rate

The Experts below are selected from a list of 3,186 Experts worldwide, ranked by the ideXlab platform.

Maurizio Valle - One of the best experts on this subject based on the ideXlab platform.

  • ICANN - Stochastic Supervised Learning Algorithms with Local and Adaptive Learning Rate for Recognising Hand-Written Characters
    Artificial Neural Networks — ICANN 2002, 2002
    Co-Authors: Matteo Giudici, Filippo Queirolo, Maurizio Valle
    Abstract:

    Supervised learning algorithms (i.e. back-propagation algorithms, BP) are reliable and widely adopted for real-world applications. Among supervised algorithms, stochastic ones (e.g. weight perturbation algorithms, WP) exhibit analog-VLSI-hardware-friendly features; however, they have not been validated on meaningful applications. This paper presents the results of a thorough experimental validation of the parallel WP learning algorithm on the recognition of handwritten characters. We adopted a local and adaptive learning rate management scheme to increase efficiency. Our results demonstrate that the performance of the WP algorithm is comparable to that of BP, while the network complexity (i.e. the number of hidden neurons) is considerably lower. The average number of iterations to reach convergence is higher than in the BP case, but this is not a serious drawback in view of the analog parallel on-chip implementation of the learning algorithm.

  • Stochastic Supervised Learning Algorithms with Local and Adaptive Learning Rate for Recognising Hand-Written Characters
    Lecture Notes in Computer Science, 2002
    Co-Authors: Matteo Giudici, Filippo Queirolo, Maurizio Valle
    Abstract:

    Supervised learning algorithms (i.e. back-propagation algorithms, BP) are reliable and widely adopted for real-world applications. Among supervised algorithms, stochastic ones (e.g. weight perturbation algorithms, WP) exhibit analog-VLSI-hardware-friendly features; however, they have not been validated on meaningful applications. This paper presents the results of a thorough experimental validation of the parallel WP learning algorithm on the recognition of handwritten characters. We adopted a local and adaptive learning rate management scheme to increase efficiency. Our results demonstrate that the performance of the WP algorithm is comparable to that of BP, while the network complexity (i.e. the number of hidden neurons) is considerably lower. The average number of iterations to reach convergence is higher than in the BP case, but this is not a serious drawback in view of the analog parallel on-chip implementation of the learning algorithm.

  • Evaluation of Gradient Descent Learning Algorithms with Adaptive and Local Learning Rate for Recognising Hand-Written Numerals
    The European Symposium on Artificial Neural Networks, 2002
    Co-Authors: Matteo Giudici, Filippo Queirolo, Maurizio Valle
    Abstract:

    Gradient descent learning algorithms, namely back-propagation (BP), can significantly increase the classification performance of multi-layer perceptrons by adopting a local and adaptive learning rate management approach. In this paper, we compare the performance of two BP algorithms, one with a fixed and one with an adaptive learning rate, on hand-written character classification. The results show that both the validation error and the average number of learning iterations are lower for the adaptive learning rate BP algorithm.
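
The papers above pair a weight-perturbation (WP) gradient estimate with a local, adaptive learning rate, i.e. a separate rate per weight that is adjusted during training. The abstracts do not give the exact update rule, so the Python sketch below is only a minimal illustration under assumed settings: the gradient is estimated by perturbing one weight at a time, and each per-weight rate is increased when successive gradient estimates agree in sign and decreased when they disagree (a delta-bar-delta-style heuristic). All constants and the toy loss are illustrative, not taken from the papers.

    import numpy as np

    def wp_gradient(loss_fn, w, eps=1e-4):
        """Estimate dL/dw by perturbing one weight at a time (weight perturbation)."""
        grad = np.zeros_like(w)
        base = loss_fn(w)
        for i in range(w.size):
            w_pert = w.copy()
            w_pert.flat[i] += eps
            grad.flat[i] = (loss_fn(w_pert) - base) / eps
        return grad

    def train_local_adaptive(loss_fn, w, steps=200, lr0=0.05, up=1.1, down=0.5):
        """Descent with a separate, adaptive learning rate per weight (illustrative)."""
        lr = np.full_like(w, lr0)            # one local learning rate per weight
        prev_grad = np.zeros_like(w)
        for _ in range(steps):
            grad = wp_gradient(loss_fn, w)
            agree = grad * prev_grad >= 0.0  # did the gradient keep its sign?
            lr = np.clip(np.where(agree, lr * up, lr * down), 1e-5, 0.5)
            w = w - lr * grad
            prev_grad = grad
        return w

    # Toy usage: fit a quadratic bowl; the result should approach `target`.
    target = np.array([1.0, -2.0, 0.5])

    def loss(v):
        return float(np.sum((v - target) ** 2))

    print(train_local_adaptive(loss, np.zeros(3)))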

Lei Ying - One of the best experts on this subject based on the ideXlab platform.

  • Finite-Time Performance Bounds and Adaptive Learning Rate Selection for Two Time-Scale Reinforcement Learning
    Neural Information Processing Systems, 2019
    Co-Authors: Harsh Gupta, R Srikant, Lei Ying
    Abstract:

    We study two time-scale linear stochastic approximation algorithms, which can be used to model well-known reinforcement learning algorithms such as GTD, GTD2, and TDC. We present finite-time performance bounds for the case where the learning rate is fixed. The key idea in obtaining these bounds is to use a Lyapunov function motivated by singular perturbation theory for linear differential equations. We use the bound to design an adaptive learning rate scheme which significantly improves the convergence rate over the known optimal polynomial decay rule in our experiments, and can be used to potentially improve the performance of any other schedule where the learning rate is changed at pre-determined time instants.

  • NeurIPS - Finite-Time Performance Bounds and Adaptive Learning Rate Selection for Two Time-Scale Reinforcement Learning
    2019
    Co-Authors: Harsh Gupta, R Srikant, Lei Ying
    Abstract:

    We study two time-scale linear stochastic approximation algorithms, which can be used to model well-known reinforcement learning algorithms such as GTD, GTD2, and TDC. We present finite-time performance bounds for the case where the learning rate is fixed. The key idea in obtaining these bounds is to use a Lyapunov function motivated by singular perturbation theory for linear differential equations. We use the bound to design an adaptive learning rate scheme which significantly improves the convergence rate over the known optimal polynomial decay rule in our experiments, and can be used to potentially improve the performance of any other schedule where the learning rate is changed at pre-determined time instants.
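
The paper above analyses two time-scale linear stochastic approximation, in which a fast iterate and a slow iterate are updated with step sizes of different orders, and proposes selecting the learning rate adaptively rather than on a pre-determined schedule. The exact selection rule is not reproduced in the abstract, so the Python sketch below is only an assumed illustration: a generic two time-scale linear iteration (a stand-in for GTD/TDC-style updates) whose two step sizes are halved whenever a smoothed measure of the update magnitude stops improving. The matrices, noise level, and plateau criterion are illustrative assumptions, not the authors' scheme.

    import numpy as np

    rng = np.random.default_rng(0)

    def two_timescale_sa(steps=3000, alpha=0.1, beta=0.01, noise=0.05, patience=100):
        """Two time-scale linear stochastic approximation with a heuristic
        adaptive learning rate (illustrative assumptions throughout)."""
        w = np.zeros(2)        # fast iterate, step size alpha
        theta = np.zeros(2)    # slow iterate, step size beta << alpha
        b_w, b_t = np.array([1.0, -1.0]), np.array([0.5, 0.5])
        best, stall, errs = np.inf, 0, []
        for _ in range(steps):
            # Noisy linear drift terms (stand-ins for GTD/TDC-style updates).
            g_w = -w + 0.5 * theta + b_w + noise * rng.standard_normal(2)
            g_t = -theta + 0.2 * w + b_t + noise * rng.standard_normal(2)
            w = w + alpha * g_w
            theta = theta + beta * g_t
            # Heuristic adaptation: halve both rates when a smoothed measure
            # of the update magnitude has not improved for `patience` steps.
            errs.append(np.linalg.norm(g_w) + np.linalg.norm(g_t))
            avg = float(np.mean(errs[-patience:]))
            if avg < best - 1e-3:
                best, stall = avg, 0
            else:
                stall += 1
                if stall >= patience:
                    alpha, beta, stall = alpha / 2.0, beta / 2.0, 0
        return theta, w, alpha, beta

    theta, w, alpha, beta = two_timescale_sa()
    print("theta:", theta, "w:", w, "final step sizes:", alpha, beta)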

Matteo Giudici - One of the best experts on this subject based on the ideXlab platform.

  • ICANN - Stochastic Supervised Learning Algorithms with Local and Adaptive Learning Rate for Recognising Hand-Written Characters
    Artificial Neural Networks — ICANN 2002, 2002
    Co-Authors: Matteo Giudici, Filippo Queirolo, Maurizio Valle
    Abstract:

    Supervised learning algorithms (i.e. back-propagation algorithms, BP) are reliable and widely adopted for real-world applications. Among supervised algorithms, stochastic ones (e.g. weight perturbation algorithms, WP) exhibit analog-VLSI-hardware-friendly features; however, they have not been validated on meaningful applications. This paper presents the results of a thorough experimental validation of the parallel WP learning algorithm on the recognition of handwritten characters. We adopted a local and adaptive learning rate management scheme to increase efficiency. Our results demonstrate that the performance of the WP algorithm is comparable to that of BP, while the network complexity (i.e. the number of hidden neurons) is considerably lower. The average number of iterations to reach convergence is higher than in the BP case, but this is not a serious drawback in view of the analog parallel on-chip implementation of the learning algorithm.

  • Stochastic Supervised Learning Algorithms with Local and Adaptive Learning Rate for Recognising Hand-Written Characters
    Lecture Notes in Computer Science, 2002
    Co-Authors: Matteo Giudici, Filippo Queirolo, Maurizio Valle
    Abstract:

    Supervised learning algorithms (i.e. back-propagation algorithms, BP) are reliable and widely adopted for real-world applications. Among supervised algorithms, stochastic ones (e.g. weight perturbation algorithms, WP) exhibit analog-VLSI-hardware-friendly features; however, they have not been validated on meaningful applications. This paper presents the results of a thorough experimental validation of the parallel WP learning algorithm on the recognition of handwritten characters. We adopted a local and adaptive learning rate management scheme to increase efficiency. Our results demonstrate that the performance of the WP algorithm is comparable to that of BP, while the network complexity (i.e. the number of hidden neurons) is considerably lower. The average number of iterations to reach convergence is higher than in the BP case, but this is not a serious drawback in view of the analog parallel on-chip implementation of the learning algorithm.

  • Evaluation of Gradient Descent Learning Algorithms with Adaptive and Local Learning Rate for Recognising Hand-Written Numerals
    The European Symposium on Artificial Neural Networks, 2002
    Co-Authors: Matteo Giudici, Filippo Queirolo, Maurizio Valle
    Abstract:

    Gradient descent learning algorithms, namely back-propagation (BP), can significantly increase the classification performance of multi-layer perceptrons by adopting a local and adaptive learning rate management approach. In this paper, we compare the performance of two BP algorithms, one with a fixed and one with an adaptive learning rate, on hand-written character classification. The results show that both the validation error and the average number of learning iterations are lower for the adaptive learning rate BP algorithm.

Sotetsu Iwamura - One of the best experts on this subject based on the ideXlab platform.

Harsh Gupta - One of the best experts on this subject based on the ideXlab platform.

  • Finite-Time Performance Bounds and Adaptive Learning Rate Selection for Two Time-Scale Reinforcement Learning
    Neural Information Processing Systems, 2019
    Co-Authors: Harsh Gupta, R Srikant, Lei Ying
    Abstract:

    We study two time-scale linear stochastic approximation algorithms, which can be used to model well-known reinforcement learning algorithms such as GTD, GTD2, and TDC. We present finite-time performance bounds for the case where the learning rate is fixed. The key idea in obtaining these bounds is to use a Lyapunov function motivated by singular perturbation theory for linear differential equations. We use the bound to design an adaptive learning rate scheme which significantly improves the convergence rate over the known optimal polynomial decay rule in our experiments, and can be used to potentially improve the performance of any other schedule where the learning rate is changed at pre-determined time instants.

  • NeurIPS - Finite-Time Performance Bounds and Adaptive Learning Rate Selection for Two Time-Scale Reinforcement Learning
    2019
    Co-Authors: Harsh Gupta, R Srikant, Lei Ying
    Abstract:

    We study two time-scale linear stochastic approximation algorithms, which can be used to model well-known reinforcement learning algorithms such as GTD, GTD2, and TDC. We present finite-time performance bounds for the case where the learning rate is fixed. The key idea in obtaining these bounds is to use a Lyapunov function motivated by singular perturbation theory for linear differential equations. We use the bound to design an adaptive learning rate scheme which significantly improves the convergence rate over the known optimal polynomial decay rule in our experiments, and can be used to potentially improve the performance of any other schedule where the learning rate is changed at pre-determined time instants.