Reinforcement Signal

The Experts below are selected from a list of 9,105 Experts worldwide, ranked by the ideXlab platform.

Chinteng Lin - One of the best experts on this subject based on the ideXlab platform.

  • Reinforcement structure/parameter learning for neural-network-based fuzzy logic control systems
    IEEE Transactions on Fuzzy Systems, 1994
    Co-Authors: Chinteng Lin, C S G Lee
    Abstract:

    This paper proposes a Reinforcement neural-network-based fuzzy logic control system (RNN-FLCS) for solving various Reinforcement learning problems. The proposed RNN-FLCS is constructed by integrating two neural-network-based fuzzy logic controllers (NN-FLCs), each of which is a connectionist model with a feedforward multilayered network developed for the realization of a fuzzy logic controller. One NN-FLC acts as a fuzzy predictor, and the other as a fuzzy controller. Using the temporal difference prediction method, the fuzzy predictor can predict the external Reinforcement Signal and provide a more informative internal Reinforcement Signal to the fuzzy controller. The fuzzy controller uses a stochastic exploratory algorithm to adapt itself according to the internal Reinforcement Signal. During the learning process, structure learning and parameter learning are performed simultaneously in the two NN-FLCs using a fuzzy similarity measure. The proposed RNN-FLCS can construct a fuzzy logic control and decision-making system automatically and dynamically through a reward/penalty Signal or through very simple fuzzy information feedback such as "high," "too high," "low," and "too low." The proposed RNN-FLCS is best applied to learning environments where obtaining exact training data is expensive. It also preserves the advantages of the original NN-FLC, such as the ability to find a proper network structure and parameters simultaneously and dynamically and to avoid the rule-matching time of an inference engine. Computer simulations were conducted to illustrate its performance and applicability.
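
A minimal sketch of the predictor's role as the abstract describes it: a critic trained with temporal-difference (TD) learning turns the sparse external Reinforcement Signal into a denser internal Reinforcement Signal (the TD error). This is a generic linear TD(0) critic, not the paper's NN-FLC predictor; all names and constants are illustrative.

```python
import numpy as np

class TDPredictor:
    """Generic TD(0) critic (illustrative stand-in for the fuzzy predictor):
    learns to predict external reinforcement and emits the TD error as the
    internal reinforcement signal for the controller."""

    def __init__(self, n_features, lr=0.1, gamma=0.95):
        self.w = np.zeros(n_features)  # linear value-function weights
        self.lr = lr                   # learning rate (assumed value)
        self.gamma = gamma             # discount factor (assumed value)

    def value(self, phi):
        return float(self.w @ phi)

    def internal_signal(self, phi, r_ext, phi_next, done):
        # TD error: how much better or worse the outcome was than predicted.
        target = r_ext + (0.0 if done else self.gamma * self.value(phi_next))
        td_error = target - self.value(phi)
        self.w += self.lr * td_error * phi  # TD(0) weight update
        return td_error  # denser feedback than the raw external signal
```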

  • Reinforcement structure/parameter learning for neural-network-based fuzzy logic control systems
    IEEE International Conference on Fuzzy Systems, 1993
    Co-Authors: Chinteng Lin, Changjin Lee
    Abstract:

    The authors propose a Reinforcement neural-network-based fuzzy logic control system (RNN-FLCS) for solving various Reinforcement learning problems. RNN-FLCS is best applied to learning environments where obtaining exact training data is expensive. It is constructed by integrating two neural-network-based fuzzy logic controllers (NN-FLCs), each of which is a connectionist model with a feedforward multilayered network developed for the realization of a fuzzy logic controller. One NN-FLC functions as a fuzzy predictor and the other as a fuzzy controller. Using the temporal difference prediction method, the fuzzy predictor can predict the external Reinforcement Signal and provide a more informative internal Reinforcement Signal to the fuzzy controller. The fuzzy controller implements a stochastic exploratory algorithm to adapt itself according to the internal Reinforcement Signal. During the learning process, the RNN-FLCs can construct a fuzzy logic control system automatically and dynamically through a reward-penalty Signal or through very simple fuzzy information feedback. Structure learning and parameter learning are performed simultaneously in the two NN-FLCs. Simulation results are presented.
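
The "stochastic exploratory algorithm" mentioned above can be sketched as a Gaussian-perturbation actor that reinforces perturbations the critic judged useful. This is a generic REINFORCE-style exploration rule with assumed parameters, not the paper's controller:

```python
import numpy as np

class StochasticExplorer:
    """Generic exploratory actor: perturb the output with Gaussian noise,
    then shift the mean toward perturbations that earned positive internal
    reinforcement (a REINFORCE-style sketch, not the paper's algorithm)."""

    def __init__(self, n_features, lr=0.05, sigma=0.2):
        self.theta = np.zeros(n_features)  # linear policy weights
        self.lr, self.sigma = lr, sigma    # assumed step size and noise level

    def act(self, phi):
        mean = float(self.theta @ phi)
        noise = np.random.normal(0.0, self.sigma)  # exploratory perturbation
        return mean + noise, noise

    def learn(self, phi, noise, internal_r):
        # Correlate the perturbation with the critic's verdict: positive
        # internal reinforcement pulls the mean toward the explored action.
        self.theta += self.lr * internal_r * (noise / self.sigma**2) * phi
```

Paired with a TD predictor like the one sketched under the journal version of this paper, the critic's TD error would supply `internal_r` on each trial.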

Changjin Lee - One of the best experts on this subject based on the ideXlab platform.

  • Reinforcement structure/parameter learning for neural-network-based fuzzy logic control systems
    IEEE International Conference on Fuzzy Systems, 1993
    Co-Authors: Chinteng Lin, Changjin Lee
    Abstract:

    The authors propose a Reinforcement neural-network-based fuzzy logic control system (RNN-FLCS) for solving various Reinforcement learning problems. RNN-FLCS is best applied to learning environments where obtaining exact training data is expensive. It is constructed by integrating two neural-network-based fuzzy logic controllers (NN-FLCs), each of which is a connectionist model with a feedforward multilayered network developed for the realization of a fuzzy logic controller. One NN-FLC functions as a fuzzy predictor and the other as a fuzzy controller. Using the temporal difference prediction method, the fuzzy predictor can predict the external Reinforcement Signal and provide a more informative internal Reinforcement Signal to the fuzzy controller. The fuzzy controller implements a stochastic exploratory algorithm to adapt itself according to the internal Reinforcement Signal. During the learning process, the RNN-FLCs can construct a fuzzy logic control system automatically and dynamically through a reward-penalty Signal or through very simple fuzzy information feedback. Structure learning and parameter learning are performed simultaneously in the two NN-FLCs. Simulation results are presented.
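
The "very simple fuzzy information feedback" could, for illustration, be encoded as a small lookup from linguistic labels to a scalar external Reinforcement Signal. The labels "high," "too high," "low," and "too low" come from the journal version of this abstract; the numeric values and the "ok" label here are pure assumptions:

```python
# Hypothetical encoding of linguistic feedback as a scalar reinforcement;
# the graded values below are assumptions, not from the paper.
FUZZY_FEEDBACK = {
    "too low":  -1.0,   # strong penalty
    "low":      -0.5,   # mild penalty
    "ok":       +1.0,   # assumed success label (not in the paper)
    "high":     -0.5,   # mild penalty
    "too high": -1.0,   # strong penalty
}

def external_reinforcement(feedback: str) -> float:
    """Map coarse linguistic feedback to an external reinforcement value."""
    return FUZZY_FEEDBACK[feedback]
```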

C S G Lee - One of the best experts on this subject based on the ideXlab platform.

  • Reinforcement structure/parameter learning for neural-network-based fuzzy logic control systems
    IEEE Transactions on Fuzzy Systems, 1994
    Co-Authors: Chinteng Lin, C S G Lee
    Abstract:

    This paper proposes a Reinforcement neural-network-based fuzzy logic control system (RNN-FLCS) for solving various Reinforcement learning problems. The proposed RNN-FLCS is constructed by integrating two neural-network-based fuzzy logic controllers (NN-FLCs), each of which is a connectionist model with a feedforward multilayered network developed for the realization of a fuzzy logic controller. One NN-FLC acts as a fuzzy predictor, and the other as a fuzzy controller. Using the temporal difference prediction method, the fuzzy predictor can predict the external Reinforcement Signal and provide a more informative internal Reinforcement Signal to the fuzzy controller. The fuzzy controller uses a stochastic exploratory algorithm to adapt itself according to the internal Reinforcement Signal. During the learning process, structure learning and parameter learning are performed simultaneously in the two NN-FLCs using a fuzzy similarity measure. The proposed RNN-FLCS can construct a fuzzy logic control and decision-making system automatically and dynamically through a reward/penalty Signal or through very simple fuzzy information feedback such as "high," "too high," "low," and "too low." The proposed RNN-FLCS is best applied to learning environments where obtaining exact training data is expensive. It also preserves the advantages of the original NN-FLC, such as the ability to find a proper network structure and parameters simultaneously and dynamically and to avoid the rule-matching time of an inference engine. Computer simulations were conducted to illustrate its performance and applicability.
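
The fuzzy similarity measure that drives structure learning can be illustrated with a standard overlap measure, |A∩B| / |A∪B|, between two Gaussian membership functions, evaluated on a discretized universe. The paper's exact formula and the 0.8 threshold below are assumptions:

```python
import numpy as np

def gaussian_mf(x, center, width):
    """Gaussian membership function evaluated on the universe x."""
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def fuzzy_similarity(c1, s1, c2, s2, lo=-5.0, hi=5.0, n=1001):
    """Overlap similarity |A intersect B| / |A union B| between two Gaussian
    fuzzy sets, computed numerically on a uniform grid (a standard measure;
    the paper's exact definition may differ)."""
    x = np.linspace(lo, hi, n)
    a, b = gaussian_mf(x, c1, s1), gaussian_mf(x, c2, s2)
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

# Structure-learning rule of thumb (threshold assumed): if a candidate
# membership function is too similar to an existing one, reuse the existing
# fuzzy set instead of growing the network.
reuse_existing = fuzzy_similarity(0.0, 1.0, 0.2, 1.0) > 0.8
```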

Aditya Murthy - One of the best experts on this subject based on the ideXlab platform.

  • Basal ganglia contributions during the learning of a visuomotor rotation: effect of dopamine, deep brain stimulation and Reinforcement
    European Journal of Neuroscience, 2019
    Co-Authors: Puneet Singh, Abhishek Lenka, Albert Stezin, Ketan Jhunjhunwala, Ashitava Ghosal, Aditya Murthy
    Abstract:

    It is commonly thought that visuomotor adaptation is mediated by the cerebellum while Reinforcement learning is mediated by the basal ganglia. In contrast to this strict dichotomy, we demonstrate a role for the basal ganglia in visuomotor adaptation (error-based motor learning) in patients with Parkinson's disease (PD) by comparing the degree of motor learning in the presence and absence of dopamine medication. We further show similar modulation of learning rates in the presence and absence of subthalamic deep brain stimulation. We also report that Reinforcement is an essential component of visuomotor adaptation by demonstrating the lack of motor learning in patients with PD during the ON-dopamine state relative to the OFF-dopamine state in the absence of a Reinforcement Signal. Taken together, these results raise the possibility that the basal ganglia modulate the gain of visuomotor adaptation based on the Reinforcement received at the end of the trial.
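
One way to read the paper's conclusion is as a reinforcement-dependent gain on an otherwise standard error-based (state-space) adaptation model. The sketch below illustrates that hypothesis with assumed parameters; it is not the authors' fitted model:

```python
def simulate_adaptation(n_trials=80, rotation=30.0, retention=0.98,
                        base_gain=0.15, unrewarded_gain=0.05,
                        reward_window=10.0):
    """Toy visuomotor-rotation adaptation: the learning rate (gain) on the
    trial error is boosted when end-of-trial reinforcement is received
    (all numbers are assumptions for illustration)."""
    estimate = 0.0           # internal estimate of the imposed rotation (deg)
    errors = []
    for _ in range(n_trials):
        error = rotation - estimate              # visual error on this trial
        reinforced = abs(error) < reward_window  # assumed end-of-trial reward
        gain = base_gain if reinforced else unrewarded_gain
        estimate = retention * estimate + gain * error
        errors.append(error)
    return errors  # errors shrink across trials as adaptation proceeds
```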

  • Basal ganglia contributions during the learning of a visuomotor rotation: effect of dopamine, deep brain stimulation and Reinforcement
    bioRxiv, 2019
    Co-Authors: Puneet Singh, Abhishek Lenka, Albert Stezin, Ketan Jhunjhunwala, Ashitava Ghosal, Aditya Murthy
    Abstract:

    It is commonly thought that visuomotor adaptation is mediated by the cerebellum while Reinforcement learning is mediated by the basal ganglia. In contrast to this strict dichotomy, we demonstrate a role for the basal ganglia in visuomotor adaptation (error-based motor learning) in patients with Parkinson’s disease (PD) by comparing the degree of motor learning in the presence and absence of dopamine medication. We further show similar modulation of learning rates in the presence and absence of subthalamic deep brain stimulation. We also report that Reinforcement is an essential component of visuomotor adaptation by demonstrating the lack of motor learning in patients with PD during the ON-dopamine state relative to the OFF-dopamine state in the absence of a Reinforcement Signal. Taken together, these results suggest that the basal ganglia modulate the gain of visuomotor adaptation based on the Reinforcement received at the end of the trial.

Chuankai Lin - One of the best experts on this subject based on the ideXlab platform.

  • Radial basis function neural network-based adaptive critic control of induction motors
    Applied Soft Computing, 2011
    Co-Authors: Chuankai Lin
    Abstract:

    This paper presents a new adaptive critic controller that achieves precise position-tracking performance for induction motors using a radial basis function neural network (RBFNN). The adaptive controller consists of an associative search network (ASN), an adaptive critic network (ACN), a feedback controller, and a robust controller. Because of mechanical parameter drift, unmodelled dynamics, actuator saturation, and external disturbances, an exact model of an induction motor is difficult to obtain. The ASN, which can approximate nonlinear functions, is employed to develop an RBFNN-based feedback control law that deals with the unknown dynamics. The ACN receives a reward from a credit-assignment unit and generates an internal Reinforcement Signal to tune the ASN. To reject the effects of the inevitable approximation errors and uncertainties, a robust control technique is developed. Moreover, the weight-updating laws with a projection algorithm can tune all parameters of the RBFNN and ensure localized learning capability. By Lyapunov theory, the stability of the closed-loop system is guaranteed. In addition, the effectiveness of the proposed RBFNN-based induction motor controller is verified by experimental results.
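
A minimal sketch of the RBFNN building block the ASN relies on: Gaussian hidden units with a linear output layer, whose output weights are nudged by the critic's internal Reinforcement Signal. The projection algorithm and robust term from the paper are omitted; centers, widths, and step sizes are assumptions:

```python
import numpy as np

class RBFNN:
    """Minimal radial basis function network: Gaussian hidden units and a
    linear output layer (an illustrative stand-in for the paper's ASN)."""

    def __init__(self, centers, width, lr=0.05):
        self.centers = np.asarray(centers, dtype=float)  # (m, d) basis centers
        self.width = width                               # shared basis width
        self.w = np.zeros(len(self.centers))             # output weights
        self.lr = lr                                     # assumed step size

    def phi(self, x):
        # Gaussian activation of each hidden unit for an input x of shape (d,)
        d2 = np.sum((np.asarray(x, dtype=float) - self.centers) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def predict(self, x):
        return float(self.w @ self.phi(x))

    def update(self, x, internal_r):
        # Tune output weights in the direction indicated by the internal
        # reinforcement signal (gradient-style; projection step omitted).
        self.w += self.lr * internal_r * self.phi(x)
```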

  • Adaptive critic autopilot design of bank-to-turn missiles using fuzzy basis function networks
    IEEE Transactions on Systems, Man, and Cybernetics, Part B, 2005
    Co-Authors: Chuankai Lin
    Abstract:

    A new adaptive critic autopilot design for bank-to-turn missiles is presented. The adaptive critic learning scheme contains a fuzzy-basis-function-network-based associative search element (ASE), employed to approximate the nonlinear and complex functions of bank-to-turn missiles, and an adaptive critic element (ACE) that generates the Reinforcement Signal used to tune the ASE. In the adaptive critic autopilot, the control law combines Signals from a fixed-gain controller, the ASE, and an adaptive robust element that eliminates approximation errors and disturbances. Traditional adaptive critic Reinforcement learning concerns an agent that must learn behavior through trial-and-error interactions with a dynamic environment; the proposed tuning algorithm, however, can significantly shorten the learning time by tuning online all parameters of the fuzzy basis functions and the weights of the ASE and ACE. Moreover, the weight-updating law, derived from Lyapunov stability theory, guarantees both tracking performance and stability. Computer simulation results confirm the effectiveness of the proposed adaptive critic autopilot.
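
The fuzzy basis function network shared by the ASE and ACE can be sketched as a zero-order Takagi-Sugeno system: each rule's firing strength is a product of Gaussian memberships, and the output is the normalized weighted sum. A textbook FBFN forward pass with illustrative parameters:

```python
import numpy as np

def fbfn_output(x, centers, widths, weights):
    """Zero-order fuzzy basis function network: normalized Gaussian rule
    activations weighted by consequent parameters (textbook form; all
    numbers in the example call below are assumptions)."""
    x = np.asarray(x, dtype=float)
    # Firing strength of each rule: product of per-dimension Gaussians.
    activation = np.exp(-np.sum(((x - centers) / widths) ** 2, axis=1))
    basis = activation / activation.sum()  # normalized fuzzy basis functions
    return float(weights @ basis)

# Example: two rules over a 2-D input.
y = fbfn_output(x=[0.1, -0.2],
                centers=np.array([[0.0, 0.0], [1.0, -1.0]]),
                widths=np.array([[0.5, 0.5], [0.5, 0.5]]),
                weights=np.array([0.3, -0.7]))
```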