Gradient Descent Method

The experts below are selected from a list of 22,329 experts worldwide, ranked by the ideXlab platform.

Tong Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Accelerated mini-batch stochastic dual coordinate ascent
    2013
    Co-Authors: Shai Shalev-Shwartz, Tong Zhang
    Abstract:

    Stochastic dual coordinate ascent (SDCA) is an effective technique for solving regularized loss minimization problems in machine learning. This paper considers an extension of SDCA under the mini-batch setting that is often used in practice. Our main contribution is to introduce an accelerated mini-batch version of SDCA and prove a fast convergence rate for this method. We discuss an implementation of our method over a parallel computing system, and compare the results to both the vanilla stochastic dual coordinate ascent and to the accelerated deterministic gradient descent method of Nesterov [2007].

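To make the kind of update SDCA performs concrete, here is a minimal sketch of a plain (non-accelerated) mini-batch SDCA loop for the special case of ridge regression with squared loss. The function name, the conservative 1/b scaling of the batch step, and the synthetic data are illustrative assumptions; this is not the accelerated algorithm analyzed in the paper.

```python
import numpy as np

def minibatch_sdca_ridge(X, y, lam=0.1, batch_size=8, epochs=50, seed=0):
    """Plain mini-batch SDCA sketch for ridge regression (squared loss)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)                    # one dual variable per example
    w = np.zeros(d)                        # primal point, w = X^T alpha / (lam * n)
    row_norms = np.einsum("ij,ij->i", X, X)

    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch_size)):
            # Closed-form dual coordinate step for squared loss at the current w.
            residual = y[idx] - X[idx] @ w - alpha[idx]
            delta = residual / (1.0 + row_norms[idx] / (lam * n))
            delta /= len(idx)              # conservative scaling for the mini-batch
            alpha[idx] += delta
            w += X[idx].T @ delta / (lam * n)
    return w

# Tiny usage example on synthetic data.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    w_true = rng.normal(size=5)
    y = X @ w_true + 0.01 * rng.normal(size=200)
    print(np.round(minibatch_sdca_ridge(X, y, lam=0.01), 2), np.round(w_true, 2))
```

The point of the example is only to show the structure of the dual coordinate step and the primal bookkeeping, not to reproduce the paper's parallel implementation or convergence rates.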

Masaharu Mizumoto - One of the best experts on this subject based on the ideXlab platform.

  • A new approach of neuro-fuzzy learning algorithm for tuning fuzzy rules
    2000
    Co-Authors: Masaharu Mizumoto
    Abstract:

    In this paper, we develop a new neuro-fuzzy learning algorithm for tuning fuzzy rules using training input–output data, based on the gradient descent method. A major advantage of this approach is that fuzzy rules or membership functions can be learned without changing the form of the fuzzy rule table used in usual fuzzy applications, so that cases of non-firing or weak firing are avoided, unlike with conventional neuro-fuzzy learning algorithms. Moreover, some properties of the developed approach are discussed. Finally, the efficiency of the developed approach is illustrated by identifying non-linear functions. (A simplified sketch of this gradient-based tuning scheme appears after this list.)

  • Some considerations on conventional neuro-fuzzy learning algorithms by the gradient descent method
    2000
    Co-Authors: Yan Shi, Masaharu Mizumoto
    Abstract:

    In this paper, we analyze several conventional neuro-fuzzy learning algorithms that are widely used in recent fuzzy applications for tuning fuzzy rules, and give a detailed summary of their properties. Some of these properties show that the conventional neuro-fuzzy learning algorithms can be difficult or inconvenient to use for constructing an optimal fuzzy system model in practical fuzzy applications.

  • A learning algorithm for tuning fuzzy rules based on the gradient descent method
    1996
    Co-Authors: Yan Shi, Masaharu Mizumoto, Naoyoshi Yubazaki, Masayuki Otani
    Abstract:

    In this paper, we suggest a practical learning algorithm for tuning fuzzy rules using input-output training data, based on the gradient descent method. The major advantage of this method is that the fuzzy rules or membership functions can be learned without changing the form of the fuzzy rule table used in usual fuzzy controls, so that the case of weak firing can be avoided, which differs from the conventional learning algorithm. Furthermore, we illustrate the efficiency of the suggested learning algorithm by means of several numerical examples.
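
Both tuning algorithms above rest on the same idea: differentiate the squared output error of a simplified (weighted-average) fuzzy inference model with respect to the tunable rule parameters and update those parameters by gradient descent, while the layout of the rule table never changes. The sketch below illustrates this under assumed Gaussian membership functions and singleton (real-valued) consequents; the model, parameter names, and learning rate are illustrative choices, not taken from the papers.

```python
import numpy as np

def tune_rules(xs, ys, n_rules=7, lr=0.3, epochs=300):
    """Gradient-descent tuning of a simplified fuzzy model (illustrative).

    Model: Gaussian antecedents h_i(x) = exp(-(x - c_i)^2 / (2 s_i^2)) on a
    fixed grid of rules and weighted-average output y* = sum(h_i w_i) / sum(h_i).
    Only the consequents w_i and membership parameters c_i, s_i are adjusted,
    to reduce the squared error E = (y* - y)^2 / 2; the rule table keeps its
    form, so every input keeps firing its nearby rules.
    """
    c = np.linspace(xs.min(), xs.max(), n_rules)             # rule centers
    s = np.full(n_rules, (xs.max() - xs.min()) / n_rules)    # rule widths
    w = np.zeros(n_rules)                                     # rule consequents
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h = np.exp(-((x - c) ** 2) / (2 * s ** 2))        # firing strengths
            H = h.sum()
            y_star = (h * w).sum() / H
            err = y_star - y
            dE_dh = err * (w - y_star) / H                    # dE/dh_i
            w -= lr * err * h / H                             # dE/dw_i
            c -= lr * dE_dh * h * (x - c) / s ** 2            # chain rule through h_i
            s -= lr * dE_dh * h * (x - c) ** 2 / s ** 3
    return c, s, w

# Toy identification of a non-linear function on [0, 1].
if __name__ == "__main__":
    xs = np.linspace(0.0, 1.0, 50)
    ys = 0.5 + 0.5 * np.sin(2 * np.pi * xs)
    c, s, w = tune_rules(xs, ys)
    h = np.exp(-((xs[:, None] - c) ** 2) / (2 * s ** 2))
    preds = (h * w).sum(axis=1) / h.sum(axis=1)
    print("max abs error:", float(np.abs(preds - ys).max()))
```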

Shai Shalev-Shwartz - One of the best experts on this subject based on the ideXlab platform.

  • Accelerated mini-batch stochastic dual coordinate ascent
    2013
    Co-Authors: Shai Shalev-Shwartz, Tong Zhang
    Abstract:

    Stochastic dual coordinate ascent (SDCA) is an effective technique for solving regularized loss minimization problems in machine learning. This paper considers an extension of SDCA under the mini-batch setting that is often used in practice. Our main contribution is to introduce an accelerated mini-batch version of SDCA and prove a fast convergence rate for this method. We discuss an implementation of our method over a parallel computing system, and compare the results to both the vanilla stochastic dual coordinate ascent and to the accelerated deterministic gradient descent method of Nesterov [2007].

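The comparison baseline named in the abstract, Nesterov's accelerated deterministic gradient descent, can itself be sketched in a few lines. This is a generic textbook variant with the standard momentum sequence, not the specific scheme of Nesterov [2007]; the function names and the least-squares test problem are illustrative.

```python
import numpy as np

def nesterov_agd(grad, x0, L, iters=200):
    """Minimal sketch of accelerated gradient descent (Nesterov-style momentum).

    grad:  gradient oracle of a smooth convex objective
    L:     Lipschitz constant of the gradient (gives the 1/L step size)
    Uses the momentum sequence t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2.
    """
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    t = 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                              # gradient step at the look-ahead point
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)      # momentum extrapolation
        x, t = x_next, t_next
    return x

# Toy usage: minimize 0.5 * ||A x - b||^2.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(50, 10))
    b = rng.normal(size=50)
    L = np.linalg.norm(A, 2) ** 2                             # Lipschitz constant of the gradient
    x = nesterov_agd(lambda v: A.T @ (A @ v - b), np.zeros(10), L)
    print("gradient norm at solution:", float(np.linalg.norm(A.T @ (A @ x - b))))
```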

Yan Shi - One of the best experts on this subject based on the ideXlab platform.

Yunong Zhang - One of the best experts on this subject based on the ideXlab platform.