Augmented System

The experts below are selected from a list of 81,051 experts worldwide, as ranked by the ideXlab platform.

Jie Huang - One of the best experts on this subject based on the ideXlab platform.

  • Cooperative Robust Output Regulation for Second-Order Nonlinear Multiagent Systems With an Unknown Exosystem
    IEEE Transactions on Automatic Control, 2018
    Co-Authors: Yi Dong, Jie Chen, Jie Huang
    Abstract:

    The cooperative robust output regulation problem for a class of second-order nonlinear multiagent systems with an exactly known exosystem has been studied recently. This paper further studies the same problem for the same class of systems subject to an unknown exosystem. We first show that the problem can be converted into an adaptive stabilization problem for an augmented system composed of the given multiagent system and the distributed internal model. Because of the unknown parameters in the exosystem, the augmented system is a nonlinear multi-input system with both linearly parameterized uncertainties and norm-bounded nonlinear uncertainty. By combining adaptive and robust control techniques, we solve the problem with a distributed adaptive robust state feedback controller.
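The internal-model construction above can be illustrated in a minimal single-agent form. The sketch below (plain Python, all gains and numbers hypothetical) uses the simplest internal model, an integrator, to reject an unknown constant exosignal; the paper's sinusoidal exosystem case replaces the integrator with an oscillator copy and adds distributed adaptation.

```python
# Sketch: internal-model-based regulation in its simplest form.
# Plant: x' = -x + u + d, with d an unknown constant disturbance.
# Augmented system = plant + internal model (an integrator of the
# tracking error, which is the internal model of constant exosignals).
dt, T = 0.01, 20.0
r = 1.0          # constant reference
d = 0.5          # unknown constant disturbance (exosystem state)
kp, ki = 2.0, 1.0

x, z = 0.0, 0.0  # plant state and internal-model (integrator) state
for _ in range(int(T / dt)):
    e = r - x                # tracking error
    u = kp * e + ki * z      # stabilizing feedback on the augmented state
    x += dt * (-x + u + d)   # the controller does not know d
    z += dt * e              # internal model driven by the error

print(abs(r - x))            # steady-state error vanishes despite unknown d
```

Stabilizing the augmented state (x, z) forces the error to zero for any constant d, which is the same mechanism the paper applies with a richer internal model.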

  • RCAR - Event-Triggered Robust Practical Output Regulation for Output Feedback Nonlinear Systems
    2018 IEEE International Conference on Real-time Computing and Robotics (RCAR), 2018
    Co-Authors: Wei Liu, Jie Huang
    Abstract:

    This paper studies the event-triggered global robust practical output regulation problem (ETGRPORP) for output feedback nonlinear systems with any relative degree. Based on the internal model principle and the high-gain observer method, we first convert the problem into the event-triggered global robust practical stabilization problem (ETGRPSP) of a well-defined extended augmented system. We then design an output-based event-triggered control law (OBETCL) and an output-based event-triggered mechanism (OBETM) to stabilize this extended augmented system, and show that the Zeno phenomenon does not occur, which establishes the solvability of the ETGRPORP.
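The event-triggered idea can be sketched on a scalar example (hypothetical gains and thresholds, not the paper's OBETM). The control is updated only when the sampling error exceeds a threshold with a constant offset eps > 0; the offset enforces a positive minimum inter-event time, ruling out Zeno behavior at the price of practical (rather than asymptotic) stabilization:

```python
# Sketch: event-triggered practical stabilization of x' = x + u.
# The controller holds the last sampled state x_hat and resamples
# only when |x - x_hat| exceeds sigma*|x| + eps (eps > 0 excludes Zeno).
dt, T = 0.001, 5.0
sigma, eps = 0.25, 0.01
x, x_hat = 1.0, 1.0
events, steps = 0, int(T / dt)

for _ in range(steps):
    if abs(x - x_hat) > sigma * abs(x) + eps:
        x_hat = x            # event: transmit/sample the state
        events += 1
    u = -2.0 * x_hat         # control uses only the sampled state
    x += dt * (x + u)

# x converges to a small neighborhood of 0 using far fewer
# control updates than simulation steps.
```

Setting eps = 0 would recover asymptotic stabilization but lose the guaranteed lower bound on inter-event times, which is exactly the trade-off behind "practical" regulation.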

  • Cooperative Robust Output Regulation for Second-Order Nonlinear Multi-Agent Systems with an Unknown Exosystem
    Conference on Decision and Control, 2017
    Co-Authors: Yi Dong, Jie Chen, Jie Huang
    Abstract:

    The cooperative robust output regulation problem for a class of second-order nonlinear multi-agent systems with an exactly known exosystem has been studied recently. This paper further studies the same problem for a class of second-order nonlinear multi-agent systems subject to an unknown exosystem. We first show that the problem can be converted into an adaptive stabilization problem for a modified augmented system composed of the given multi-agent system and the distributed internal model. Because of the uncertain parameter in the exosystem, the augmented system is a nonlinear multi-input system with both dynamic and static uncertainties. By combining adaptive and robust control techniques, we solve the problem with a distributed adaptive robust state feedback controller.

  • Cooperative Output Regulation for a Class of Linear Multi-Agent Systems with an Unknown Exosystem
    International Conference on Control and Automation, 2014
    Co-Authors: Jie Huang
    Abstract:

    In this paper, we study the cooperative output regulation problem for a class of second-order linear uncertain multi-agent systems with an unknown exosystem. We first present a distributed canonical internal model that converts the cooperative output regulation problem for the original plant into a stabilization problem for its augmented system. We then solve the problem via a distributed state feedback controller by combining the distributed internal model approach with the adaptive control technique. The design is illustrated by an example.
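The distributed flavor of these designs can be seen in a stripped-down sketch (hypothetical chain graph, constant leader signal; the paper treats second-order uncertain agents with a sinusoidal exosystem via a distributed internal model). Only one agent measures the leader, yet all agents converge through neighbor coupling:

```python
import numpy as np

# Sketch: leader-following consensus on a chain graph 0-1-2.
# Only agent 0 sees the leader value r; the others rely on neighbors.
dt, T = 0.01, 60.0
r = 1.0
L = np.array([[ 1., -1.,  0.],     # graph Laplacian of the chain
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
D = np.diag([1., 0., 0.])          # pinning: agent 0 measures the leader

x = np.zeros(3)                    # single-integrator agents x_i' = u_i
for _ in range(int(T / dt)):
    u = -L @ x - D @ (x - r)       # distributed error feedback
    x += dt * u

print(np.max(np.abs(x - r)))       # all agents reach the leader value
```

Each agent's input uses only relative measurements to its neighbors (plus the leader, for the pinned agent), which is the defining constraint of the cooperative setting.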

Yunjian Peng - One of the best experts on this subject based on the ideXlab platform.

  • Adaptive optimal output feedback tracking control for unknown discrete-time linear Systems using a combined reinforcement Q-learning and internal model method
    IET Control Theory & Applications, 2019
    Co-Authors: Weijie Sun, Guangyue Zhao, Yunjian Peng
    Abstract:

    This study develops novel output-feedback reinforcement Q-learning algorithms for the optimal linear quadratic tracking problem of unknown discrete-time systems. An augmented system composed of the original controlled system and the reference trajectory dynamics is first constructed. Then, both on-policy and off-policy learning algorithms are developed to solve the optimal tracking control problem with unknown augmented system dynamics. For both optimal tracking control policies, a two-stage framework with two controllers is proposed: the internal model controller collects data for the subsequent stage, and the output feedback Q-learning scheme then learns the optimal tracking controller online using past input, output, and reference trajectory data of the augmented system. Finally, simulation results verify the effectiveness of the proposed scheme.
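The augmented-system construction for LQT can be sketched model-based: stack the plant state with the reference-generator state and solve a discounted Riccati equation on the stacked system. The papers learn this solution from data; the sketch below (all numbers hypothetical) computes it directly as the baseline the learning schemes approximate.

```python
import numpy as np

# Sketch: LQT via an augmented system X = [x; r].
# Plant x+ = 0.9x + u, reference r+ = r (constant command generator).
A = np.array([[0.9, 0.0],
              [0.0, 1.0]])        # block-diagonal: plant + generator
B = np.array([[1.0], [0.0]])      # input enters the plant block only
Q = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])      # penalizes (x - r)^2
R, gamma = 0.1, 0.9               # discount factor handles the
                                  # marginally stable reference dynamics

P = np.zeros((2, 2))
for _ in range(500):              # value iteration on the Riccati map
    BPB = B.T @ P @ B
    BPA = B.T @ P @ A
    P = (Q + gamma * A.T @ P @ A
         - gamma**2 * A.T @ P @ B @ np.linalg.inv(R + gamma * BPB) @ BPA)

K = gamma * np.linalg.inv(R + gamma * B.T @ P @ B) @ B.T @ P @ A

X = np.array([0.0, 1.0])          # start at x = 0, reference r = 1
for _ in range(100):
    X = (A - B @ K) @ X           # closed-loop augmented dynamics

print(abs(X[0] - X[1]))           # small discounted-LQT tracking error
```

Because the reference generator is only marginally stable, an undiscounted Riccati iteration would not converge on the augmented system; discounting is the standard remedy, at the cost of a small residual tracking error.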

  • ICARCV - Output Feedback Reinforcement Q-learning for Optimal Quadratic Tracking Control of Unknown Discrete-Time Linear Systems and Its Application
    2018 15th International Conference on Control Automation Robotics and Vision (ICARCV), 2018
    Co-Authors: Guangyue Zhao, Weijie Sun, He Cai, Yunjian Peng
    Abstract:

    In this paper, a novel output feedback solution based on the Q-learning algorithm using measured data is proposed for the linear quadratic tracking (LQT) problem of unknown discrete-time systems. An augmented system composed of the original controlled system and the linear command generator is first constructed. Then, using past input, output, and reference trajectory data of the augmented system, the output feedback Q-learning scheme learns the optimal tracking controller online without requiring any knowledge of the augmented system dynamics. Both policy iteration (PI) and value iteration (VI) algorithms are developed and shown to converge to the optimal solution. Finally, simulation results verify the effectiveness of the proposed scheme.
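The VI variant can be sketched in its simplest model-free form: fit a quadratic Q-function to transition data by least squares and iterate a value-iteration backup. This scalar state-feedback sketch (hypothetical numbers) illustrates the idea; the paper's output-feedback scheme replaces the state with past input/output data.

```python
import numpy as np

# Sketch: model-free value-iteration Q-learning for scalar LQR.
# Unknown plant x+ = a*x + b*u; the learner only sees (x, u, x') data.
a, b, Qc, Rc = 0.5, 1.0, 1.0, 1.0
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)       # exploratory states
u = rng.uniform(-1, 1, 200)       # exploratory inputs
xn = a * x + b * u                # next-state data (model unused below)

Phi = np.stack([x**2, 2*x*u, u**2], axis=1)   # quadratic Q basis
V = np.zeros_like(x)              # V_0 = 0
for _ in range(50):               # value iteration on the Q-function
    target = Qc*x**2 + Rc*u**2 + V            # Bellman backup targets
    hxx, hxu, huu = np.linalg.lstsq(Phi, target, rcond=None)[0]
    P = hxx - hxu**2 / huu        # induced value function V(x) = P*x^2
    V = P * xn**2                 # evaluate at the next states

K = hxu / huu                     # learned feedback gain, u = -K*x
print(K)                          # approaches the LQR gain a*b*P/(Rc + b*b*P)
```

The least-squares fit plays the role of the Q-function update in the paper's VI algorithm; PI differs only in evaluating the current policy to convergence before each improvement step.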

Zhaowu Ping - One of the best experts on this subject based on the ideXlab platform.

  • Speed tracking control of surface-mounted permanent-magnet synchronous motor with unknown exosystem
    International Journal of Robust and Nonlinear Control, 2014
    Co-Authors: Zhaowu Ping, Jie Huang
    Abstract:

    The surface-mounted permanent-magnet synchronous motor is a two-input, two-output nonlinear system, and its multi-input, multi-output nature has posed specific challenges to various control methods. Recently, the robust output regulation problem of the system subject to a known neutrally stable exosystem was studied; that problem came down to a global robust stabilization problem of an augmented system composed of the original plant and an internal model. In this paper, we further study the robust output regulation problem of the system subject to an unknown neutrally stable exosystem. As in the known-exosystem case, the current problem can be solved by globally stabilizing an augmented system. Unlike that case, however, the augmented system takes a much more complicated form because of the uncertainty in the exosystem. In particular, the dynamic uncertainty in the current augmented system contains linearly parameterized uncertainty and hence is not input-to-state stable. By utilizing a dynamic coordinate transformation technique and combining robust and adaptive control techniques, we solve the problem via a recursive approach.
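The core difficulty above, the unknown exosystem parameter, can be illustrated with a simplified identification sketch (offline least squares on samples of a sinusoid; the paper instead embeds the adaptation in the regulator itself). Any sampled sinusoid r_k = sin(w*k*dt) satisfies r_{k+1} + r_{k-1} = 2*cos(w*dt)*r_k, so w can be recovered from output data:

```python
import numpy as np

# Sketch: estimating an unknown exosystem frequency from output data.
# A sinusoid satisfies r[k+1] + r[k-1] = 2*cos(w*dt)*r[k] exactly,
# so cos(w*dt) follows from one least-squares fit on the samples.
w_true, dt, N = 2.0, 0.01, 1000
t = dt * np.arange(N)
r = np.sin(w_true * t)                       # measured exosignal

mid, nbr = r[1:-1], r[2:] + r[:-2]
c = (mid @ nbr) / (2.0 * (mid @ mid))        # least-squares cos(w*dt)
w_hat = np.arccos(c) / dt

print(w_hat)                                 # recovers w_true in the noise-free case
```

The estimated frequency then parameterizes the internal model; the paper's adaptive scheme performs this estimation online and jointly with stabilization, which is what makes the augmented system analysis delicate.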

  • A Control Problem of Surface-Mounted PM Synchronous Motor with an Unknown Exosystem
    American Control Conference, 2013
    Co-Authors: Zhaowu Ping, Jie Huang
    Abstract:

    The surface-mounted PM synchronous motor is a two-input, two-output nonlinear system. Recently, the robust output regulation problem of this motor system subject to a known neutrally stable exosystem was studied; that problem came down to a global robust stabilization problem of an augmented system composed of the original plant and an internal model. In this paper, we further study the robust output regulation problem of the motor system subject to an unknown neutrally stable exosystem. As in the known-exosystem case, the current problem can be solved by globally stabilizing an augmented system. Unlike that case, however, the augmented system takes a much more complicated form because of the uncertainty in the exosystem. In particular, the dynamic uncertainty in the current augmented system is not input-to-state stable. By utilizing a dynamic coordinate transformation technique and combining robust and adaptive control techniques, we solve the problem via a recursive approach.

Guangyue Zhao - One of the best experts on this subject based on the ideXlab platform.

  • Adaptive optimal output feedback tracking control for unknown discrete-time linear Systems using a combined reinforcement Q-learning and internal model method
    IET Control Theory & Applications, 2019
    Co-Authors: Weijie Sun, Guangyue Zhao, Yunjian Peng
    Abstract:

    This study develops novel output-feedback reinforcement Q-learning algorithms for the optimal linear quadratic tracking problem of unknown discrete-time systems. An augmented system composed of the original controlled system and the reference trajectory dynamics is first constructed. Then, both on-policy and off-policy learning algorithms are developed to solve the optimal tracking control problem with unknown augmented system dynamics. For both optimal tracking control policies, a two-stage framework with two controllers is proposed: the internal model controller collects data for the subsequent stage, and the output feedback Q-learning scheme then learns the optimal tracking controller online using past input, output, and reference trajectory data of the augmented system. Finally, simulation results verify the effectiveness of the proposed scheme.

  • ICARCV - Output Feedback Reinforcement Q-learning for Optimal Quadratic Tracking Control of Unknown Discrete-Time Linear Systems and Its Application
    2018 15th International Conference on Control Automation Robotics and Vision (ICARCV), 2018
    Co-Authors: Guangyue Zhao, Weijie Sun, He Cai, Yunjian Peng
    Abstract:

    In this paper, a novel output feedback solution based on the Q-learning algorithm using measured data is proposed for the linear quadratic tracking (LQT) problem of unknown discrete-time systems. An augmented system composed of the original controlled system and the linear command generator is first constructed. Then, using past input, output, and reference trajectory data of the augmented system, the output feedback Q-learning scheme learns the optimal tracking controller online without requiring any knowledge of the augmented system dynamics. Both policy iteration (PI) and value iteration (VI) algorithms are developed and shown to converge to the optimal solution. Finally, simulation results verify the effectiveness of the proposed scheme.

Weijie Sun - One of the best experts on this subject based on the ideXlab platform.

  • Adaptive optimal output feedback tracking control for unknown discrete-time linear Systems using a combined reinforcement Q-learning and internal model method
    IET Control Theory & Applications, 2019
    Co-Authors: Weijie Sun, Guangyue Zhao, Yunjian Peng
    Abstract:

    This study develops novel output-feedback reinforcement Q-learning algorithms for the optimal linear quadratic tracking problem of unknown discrete-time systems. An augmented system composed of the original controlled system and the reference trajectory dynamics is first constructed. Then, both on-policy and off-policy learning algorithms are developed to solve the optimal tracking control problem with unknown augmented system dynamics. For both optimal tracking control policies, a two-stage framework with two controllers is proposed: the internal model controller collects data for the subsequent stage, and the output feedback Q-learning scheme then learns the optimal tracking controller online using past input, output, and reference trajectory data of the augmented system. Finally, simulation results verify the effectiveness of the proposed scheme.

  • ICARCV - Output Feedback Reinforcement Q-learning for Optimal Quadratic Tracking Control of Unknown Discrete-Time Linear Systems and Its Application
    2018 15th International Conference on Control Automation Robotics and Vision (ICARCV), 2018
    Co-Authors: Guangyue Zhao, Weijie Sun, He Cai, Yunjian Peng
    Abstract:

    In this paper, a novel output feedback solution based on the Q-learning algorithm using measured data is proposed for the linear quadratic tracking (LQT) problem of unknown discrete-time systems. An augmented system composed of the original controlled system and the linear command generator is first constructed. Then, using past input, output, and reference trajectory data of the augmented system, the output feedback Q-learning scheme learns the optimal tracking controller online without requiring any knowledge of the augmented system dynamics. Both policy iteration (PI) and value iteration (VI) algorithms are developed and shown to converge to the optimal solution. Finally, simulation results verify the effectiveness of the proposed scheme.