The experts below are selected from a list of 22,329 experts worldwide, ranked by the ideXlab platform.
Tong Zhang - One of the best experts on this subject based on the ideXlab platform.
- Accelerated Mini-Batch Stochastic Dual Coordinate Ascent (2013)
  Co-Authors: Shai Shalev-Shwartz, Tong Zhang
  Abstract: Stochastic dual coordinate ascent (SDCA) is an effective technique for solving regularized loss minimization problems in machine learning. This paper considers an extension of SDCA to the mini-batch setting that is often used in practice. Our main contribution is to introduce an accelerated mini-batch version of SDCA and prove a fast convergence rate for this method. We discuss an implementation of our method on a parallel computing system, and compare the results to both vanilla stochastic dual coordinate ascent and the accelerated deterministic gradient descent method of Nesterov [2007].
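For orientation, here is a minimal sketch of the vanilla mini-batch SDCA baseline that the paper improves on, specialized to ridge regression, where each dual coordinate step has a closed form. The accelerated variant adds Nesterov-style momentum on top, which is not shown here; the function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def minibatch_sdca_ridge(X, y, lam=0.1, batch_size=8, epochs=50, seed=0):
    """Vanilla mini-batch SDCA for ridge regression:
        min_w  (1/n) * sum_i 0.5*(x_i^T w - y_i)^2 + (lam/2)*||w||^2
    A sketch of the baseline, not the paper's accelerated method."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)      # dual variables, one per example
    w = np.zeros(d)          # primal iterate: w = X^T alpha / (lam*n)
    sq_norms = (X ** 2).sum(axis=1)
    for _ in range(epochs):
        perm = rng.permutation(n)
        for start in range(0, n, batch_size):
            batch = perm[start:start + batch_size]
            # Closed-form ascent step for each coordinate in the batch,
            # all computed from the same w (the "parallel" update).
            residual = y[batch] - X[batch] @ w - alpha[batch]
            delta = residual / (1.0 + sq_norms[batch] / (lam * n))
            # Safe 1/b scaling: by concavity of the dual objective,
            # averaging b individual ascent steps is still an ascent step.
            delta /= len(batch)
            alpha[batch] += delta
            w += X[batch].T @ delta / (lam * n)
    return w
```

Calling `minibatch_sdca_ridge(X, y)` on an (n, d) feature matrix and length-n target vector returns an approximate ridge solution; the conservative 1/b scaling trades speed for guaranteed dual ascent, which is precisely the slack that the paper's accelerated scheme tightens.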
Masaharu Mizumoto - One of the best experts on this subject based on the ideXlab platform.
- A New Approach of Neuro-Fuzzy Learning Algorithm for Tuning Fuzzy Rules (2000)
  Co-Authors: Masaharu Mizumoto
  Abstract: In this paper, we develop a new neuro-fuzzy learning algorithm for tuning fuzzy rules from training input-output data, based on the gradient descent method. A major advantage of this approach is that fuzzy rules and membership functions can be learned without changing the form of the fuzzy rule table used in typical fuzzy applications, so that non-firing and weak-firing cases are avoided, unlike with conventional neuro-fuzzy learning algorithms. Some properties of the approach are also discussed, and its efficiency is illustrated by identifying non-linear functions.
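As a rough illustration of the rule-table-preserving idea, the sketch below tunes only the consequent entries of a fixed two-input fuzzy rule table by gradient descent, with Gaussian membership functions so that every input fires at least weakly. It is a simplified stand-in for the paper's formulation; the class and parameter names are illustrative.

```python
import numpy as np

class FuzzyRuleTable:
    """Simplified fuzzy inference over a fixed grid rule table:
        IF x1 is A_i AND x2 is B_j THEN y = w[i, j].
    Gaussian membership functions never fire at exactly zero, and only
    the consequents w are tuned, so the rule-table form never changes
    (illustrative sketch, not the paper's exact equations)."""

    def __init__(self, centers1, centers2, sigma=0.3):
        self.c1, self.c2, self.sigma = centers1, centers2, sigma
        self.w = np.zeros((len(centers1), len(centers2)))  # rule table

    def _firing(self, x1, x2):
        mu1 = np.exp(-((x1 - self.c1) ** 2) / (2 * self.sigma ** 2))
        mu2 = np.exp(-((x2 - self.c2) ** 2) / (2 * self.sigma ** 2))
        return np.outer(mu1, mu2)            # rule firing strengths

    def predict(self, x1, x2):
        h = self._firing(x1, x2)
        return (h * self.w).sum() / h.sum()  # weighted-average defuzzification

    def fit(self, data, targets, lr=0.5, epochs=200):
        for _ in range(epochs):
            for (x1, x2), t in zip(data, targets):
                h = self._firing(x1, x2)
                hbar = h / h.sum()
                y = (hbar * self.w).sum()
                # gradient of E = 0.5*(y - t)^2 w.r.t. each consequent
                self.w -= lr * (y - t) * hbar
```

A table built as `FuzzyRuleTable(np.linspace(0, 1, 5), np.linspace(0, 1, 5))` can then be fit to samples of a non-linear target function, mirroring the identification experiments mentioned in the abstract.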
- Some Considerations on Conventional Neuro-Fuzzy Learning Algorithms by the Gradient Descent Method (2000)
  Co-Authors: Yan Shi, Masaharu Mizumoto
  Abstract: In this paper, we analyze several conventional neuro-fuzzy learning algorithms that are widely used in recent fuzzy applications for tuning fuzzy rules, and give a detailed summary of their properties. Some of these properties show that the conventional algorithms can be difficult or inconvenient to use when constructing an optimal fuzzy system model in practical fuzzy applications.
- A Learning Algorithm for Tuning Fuzzy Rules Based on the Gradient Descent Method (1996)
  Co-Authors: Yan Shi, Masaharu Mizumoto, Naoyoshi Yubazaki, Masayuki Otani
  Abstract: In this paper, we propose a practical learning algorithm for tuning fuzzy rules from input-output training data, based on the gradient descent method. The major advantage of this method is that fuzzy rules and membership functions can be learned without changing the form of the fuzzy rule table used in typical fuzzy controls, so that weak-firing cases are avoided, unlike with the conventional learning algorithm. We illustrate the efficiency of the proposed algorithm with several numerical examples.
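The following sketch extends the same idea to the membership functions themselves: centers, widths, and consequents of a single-input simplified fuzzy model are all updated by gradient descent while the number and layout of rules stay fixed. It follows the standard chain-rule derivation for Gaussian membership functions rather than the paper's exact equations, and all names are illustrative.

```python
import numpy as np

def tune_fuzzy_mfs(xs, ts, n_rules=5, lr=0.1, epochs=300, seed=0):
    """Gradient-descent tuning of a single-input simplified fuzzy model
        y(x) = sum_i mu_i(x) * w_i / sum_i mu_i(x),
    with Gaussian membership functions mu_i. Consequents w, centers c,
    and widths s are all updated, but the rule layout stays fixed."""
    rng = np.random.default_rng(seed)
    c = np.linspace(xs.min(), xs.max(), n_rules)            # MF centers
    s = np.full(n_rules, (xs.max() - xs.min()) / n_rules)   # MF widths
    w = rng.normal(scale=0.1, size=n_rules)                 # consequents
    for _ in range(epochs):
        for x, t in zip(xs, ts):
            mu = np.exp(-((x - c) ** 2) / (2 * s ** 2))
            S = mu.sum()
            y = (mu * w).sum() / S
            e = y - t                       # error for E = 0.5*e^2
            dy_dmu = (w - y) / S            # dy/dmu_i for the weighted average
            w -= lr * e * mu / S
            c -= lr * e * dy_dmu * mu * (x - c) / s ** 2
            s -= lr * e * dy_dmu * mu * (x - c) ** 2 / s ** 3
    return c, s, w
```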
Shai Shalev-Shwartz - One of the best experts on this subject based on the ideXlab platform.
- Accelerated Mini-Batch Stochastic Dual Coordinate Ascent (2013)
  Co-Authors: Shai Shalev-Shwartz, Tong Zhang
  Abstract: Stochastic dual coordinate ascent (SDCA) is an effective technique for solving regularized loss minimization problems in machine learning. This paper considers an extension of SDCA to the mini-batch setting that is often used in practice. Our main contribution is to introduce an accelerated mini-batch version of SDCA and prove a fast convergence rate for this method. We discuss an implementation of our method on a parallel computing system, and compare the results to both vanilla stochastic dual coordinate ascent and the accelerated deterministic gradient descent method of Nesterov [2007].
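The deterministic baseline mentioned at the end, Nesterov's accelerated gradient descent, has a standard form that can be sketched as follows; this is the textbook momentum scheme for an L-smooth convex objective, not code from the paper.

```python
import numpy as np

def nesterov_agd(grad, x0, L, steps=500):
    """Nesterov's accelerated gradient descent for an L-smooth convex
    objective; `grad` returns the full gradient at a point."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(steps):
        x_next = y - grad(y) / L                       # step at lookahead point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2      # momentum schedule
        y = x_next + (t - 1) / t_next * (x_next - x)   # extrapolation
        x, t = x_next, t_next
    return x
```

For the ridge problem above one would pass, e.g., `grad = lambda w: X.T @ (X @ w - y) / n + lam * w` with `L` an upper bound on the Hessian spectral norm, such as `np.linalg.norm(X, 2) ** 2 / n + lam`.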
Yan Shi - One of the best experts on this subject based on the ideXlab platform.
- Some Considerations on Conventional Neuro-Fuzzy Learning Algorithms by the Gradient Descent Method (2000)
  Co-Authors: Yan Shi, Masaharu Mizumoto
  Abstract: In this paper, we analyze several conventional neuro-fuzzy learning algorithms that are widely used in recent fuzzy applications for tuning fuzzy rules, and give a detailed summary of their properties. Some of these properties show that the conventional algorithms can be difficult or inconvenient to use when constructing an optimal fuzzy system model in practical fuzzy applications.
- A Learning Algorithm for Tuning Fuzzy Rules Based on the Gradient Descent Method (1996)
  Co-Authors: Yan Shi, Masaharu Mizumoto, Naoyoshi Yubazaki, Masayuki Otani
  Abstract: In this paper, we propose a practical learning algorithm for tuning fuzzy rules from input-output training data, based on the gradient descent method. The major advantage of this method is that fuzzy rules and membership functions can be learned without changing the form of the fuzzy rule table used in typical fuzzy controls, so that weak-firing cases are avoided, unlike with the conventional learning algorithm. We illustrate the efficiency of the proposed algorithm with several numerical examples.
Yunong Zhang - One of the best experts on this subject based on the ideXlab platform.
- Different-Level Redundancy Resolution and Its Equivalent-Relationship Analysis for Robot Manipulators Using Gradient-Descent and Zhang's Neural-Dynamic Methods (2012)
  Co-Authors: Yunong Zhang
  Abstract: To solve the inverse kinematic problem of redundant robot manipulators, two redundancy-resolution schemes are investigated: one resolved at the joint-velocity level, and the other at the joint-acceleration level. Both schemes are reformulated as quadratic programming (QP) problems. Two recurrent neural networks (RNNs) are then developed for the online solution of the resultant QP problems. The first RNN solver is based on the gradient descent method and is termed a gradient neural network (GNN). The other is based on Zhang's neural-dynamic method and is termed a Zhang neural network (ZNN). Computer simulations performed on a three-link planar robot arm and the PUMA560 manipulator demonstrate the efficacy of the two redundancy-resolution schemes and the two RNN QP solvers, as well as the superiority of the ZNN QP solver over the GNN one. More importantly, the simulation results show that the solutions of the two schemes fit well with each other, i.e., the two different-level redundancy-resolution schemes can be equivalent in some sense. Theoretical analysis based on the gradient descent method and Zhang's neural-dynamic method further substantiates this finding about different-level redundancy-resolution equivalence.
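To make the two RNN designs concrete, the toy sketch below solves the KKT system of a minimum-norm velocity-level QP with both a gradient neural network (gradient descent on a squared-error energy) and a simplified Zhang neural network (imposing exponential decay of the error), integrated by Euler steps. The Jacobian, gains, and step sizes are illustrative; for this static problem the ZNN reduces to a preconditioned iteration, and its real advantage, a feedforward term for time-varying targets, is omitted here.

```python
import numpy as np

# Minimum-norm velocity-level scheme as an equality-constrained QP:
#   min 0.5*||qd||^2   s.t.   J qd = r
# KKT system: M z = p, with z = [qd; lambda].
J = np.array([[0.5, -1.0, 0.8],
              [1.2,  0.3, -0.4]])      # toy 2x3 Jacobian (redundant arm)
r = np.array([0.6, -0.2])              # desired end-effector velocity
n, m = J.shape[1], J.shape[0]
M = np.block([[np.eye(n), J.T],
              [J, np.zeros((m, m))]])
p = np.concatenate([np.zeros(n), r])

gamma, dt = 50.0, 1e-3                 # design gain and Euler step
z_gnn = np.zeros(n + m)
z_znn = np.zeros(n + m)
for _ in range(5000):
    # GNN: gradient descent on the energy 0.5*||M z - p||^2
    z_gnn += dt * (-gamma * M.T @ (M @ z_gnn - p))
    # ZNN: impose exponential decay of the error e = M z - p
    # (for constant M and p this gives zdot = -gamma * M^{-1} e)
    z_znn += dt * np.linalg.solve(M, -gamma * (M @ z_znn - p))

qd_ref = np.linalg.pinv(J) @ r         # closed-form minimum-norm solution
print(np.allclose(z_gnn[:n], qd_ref, atol=1e-3),
      np.allclose(z_znn[:n], qd_ref, atol=1e-3))
```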
- Equivalence of Velocity-Level and Acceleration-Level Redundancy Resolution of Manipulators (2009)
  Co-Authors: Binghuang Cai, Yunong Zhang
  Abstract: The equivalence of velocity-level and acceleration-level redundancy resolution of robot manipulators is investigated in this letter. Theoretical analysis based on the gradient descent method and computer simulations based on the PUMA560 robot manipulator both demonstrate the equivalence of redundancy-resolution schemes at the two levels.
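A minimal sketch of the two formulations, with the pseudoinverse standing in for the RNN solvers: both levels are QPs with the same coefficient structure, differing only in the constraint right-hand side, which is the structural hook for the letter's gradient-descent-based equivalence analysis. The three-link planar arm and all numeric values are illustrative stand-ins for the PUMA560 simulations.

```python
import numpy as np

def jacobian(q, L=(1.0, 0.8, 0.6)):
    """Task Jacobian of a 3-link planar arm (2D end-effector position),
    a common redundant test arm."""
    a1, a2, a3 = np.cumsum(q)           # absolute link angles
    return np.array([
        [-L[0]*np.sin(a1) - L[1]*np.sin(a2) - L[2]*np.sin(a3),
         -L[1]*np.sin(a2) - L[2]*np.sin(a3), -L[2]*np.sin(a3)],
        [ L[0]*np.cos(a1) + L[1]*np.cos(a2) + L[2]*np.cos(a3),
          L[1]*np.cos(a2) + L[2]*np.cos(a3),  L[2]*np.cos(a3)]])

q  = np.array([0.3, -0.5, 0.9])        # current joint angles
qd = np.array([0.2,  0.1, -0.3])       # current joint velocities
xd_des  = np.array([0.4, -0.1])        # desired task velocity
xdd_des = np.array([0.05, 0.02])       # desired task acceleration

J = jacobian(q)
eps = 1e-6                             # numeric time-derivative of J
Jdot = (jacobian(q + qd * eps) - J) / eps

# Velocity level:     min 0.5*||qd||^2    s.t.  J qd  = xd_des
# Acceleration level: min 0.5*||qdd||^2   s.t.  J qdd = xdd_des - Jdot qd
# Same coefficient structure, different right-hand side.
qd_sol  = np.linalg.pinv(J) @ xd_des
qdd_sol = np.linalg.pinv(J) @ (xdd_des - Jdot @ qd)
print(qd_sol, qdd_sol)
```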