Decentralized Control System

14,000,000 Leading Edge Experts on the ideXlab platform

The experts below are selected from a list of 23,820 experts worldwide, ranked by the ideXlab platform.

F J Von Zuben - One of the best experts on this subject based on the ideXlab platform.

  • decentralized control system for autonomous navigation based on an evolved artificial immune network
    Congress on Evolutionary Computation, 2002
    Co-Authors: R Michelan, F J Von Zuben
    Abstract:

    This paper investigates an autonomous control system for a mobile robot based on immune network theory. The immune network steers the robot through a multiobjective task, garbage collection: the robot must find and collect garbage, follow a trajectory that avoids obstacles, and return to its base before running out of energy. Each network node corresponds to a specific antibody and describes a particular control action for the robot. The antigens are the current state of the robot, read from a set of internal and external sensors. The network dynamics correspond to the variation of antibody concentration levels, which change according to both the mutual interactions among antibody nodes and the interactions between antibodies and antigens. An evolutionary mechanism is proposed to determine the network configuration, that is, the parameters that define these interactions. Simulation results suggest that the proposed approach is very promising.
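
Concentration dynamics of this kind are often modeled with Farmer-style immune network equations, where each antibody's concentration grows with stimulation and antigen affinity and shrinks with suppression and natural death. A minimal sketch under that assumption; the interaction matrix, affinity vector, and gains below are purely illustrative, not taken from the paper:

```python
import numpy as np

def update_concentrations(a, M, affinity, k=0.1, dt=0.05):
    """One Euler step of a Farmer-style immune network:
    da_i/dt = a_i * (stimulation - suppression + antigen affinity - death).
    a: antibody concentrations, M: antibody-antibody interaction matrix,
    affinity: antibody-antigen match scores (all values are illustrative)."""
    stim = M @ a            # stimulation by other antibodies
    supp = M.T @ a          # suppression by other antibodies
    da = a * (stim - supp + affinity - k)
    return np.clip(a + dt * da, 0.0, None)  # concentrations stay non-negative

# toy run: 3 antibodies (control actions), the antigen favors antibody 0
a = np.full(3, 0.5)
M = np.array([[0.0, 0.2, 0.1],
              [0.1, 0.0, 0.2],
              [0.2, 0.1, 0.0]])
affinity = np.array([1.0, 0.2, 0.1])
for _ in range(50):
    a = update_concentrations(a, M, affinity)
action = int(np.argmax(a))  # winner-take-all selection of the control action
```

The winner-take-all readout at the end mirrors the paper's setup, where the antibody with the highest concentration determines the robot's next control action.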

R Michelan - One of the best experts on this subject based on the ideXlab platform.

  • decentralized control system for autonomous navigation based on an evolved artificial immune network
    Congress on Evolutionary Computation, 2002
    Co-Authors: R Michelan, F J Von Zuben
    Abstract:

    This paper investigates an autonomous control system for a mobile robot based on immune network theory. The immune network steers the robot through a multiobjective task, garbage collection: the robot must find and collect garbage, follow a trajectory that avoids obstacles, and return to its base before running out of energy. Each network node corresponds to a specific antibody and describes a particular control action for the robot. The antigens are the current state of the robot, read from a set of internal and external sensors. The network dynamics correspond to the variation of antibody concentration levels, which change according to both the mutual interactions among antibody nodes and the interactions between antibodies and antigens. An evolutionary mechanism is proposed to determine the network configuration, that is, the parameters that define these interactions. Simulation results suggest that the proposed approach is very promising.

Qiang Xiong - One of the best experts on this subject based on the ideXlab platform.

  • effective transfer function method for decentralized control system design of multi-input multi-output processes
    Journal of Process Control, 2006
    Co-Authors: Qiang Xiong
    Abstract:

    In terms of relative gain and relative frequency, this paper provides the effective transfer function for independent controller design for multi-input multi-output processes. Unlike existing equivalent transfer functions, the proposed effective transfer function provides both gain and phase information for decentralized controller design in a simple and straightforward manner. The interaction effects on a particular loop from all other closed loops are directly incorporated into the effective transfer functions in four ways. Consequently, the decentralized controllers can be designed independently by employing single-loop tuning techniques. The design method is simple, straightforward, and easy for field engineers to understand and implement. Several multivariable industrial processes with different interaction modes are employed to demonstrate the effectiveness and simplicity of the method.

  • decentralized control system design for multivariable processes: a novel method based on effective relative gain array
    Industrial & Engineering Chemistry Research, 2006
    Co-Authors: Qiang Xiong, Wenjian Cai
    Abstract:

    In this paper, a novel method for the design of a decentralized control system for multivariable processes is proposed. On the basis of a new interaction measure, the effective relative gain array (ERGA), defined in terms of energy transmission ratios, loop interactions are quantified by two elements: relative gain and relative critical frequency. The interaction effects on a particular loop from all other closed loops are analyzed through both steady-state gain and critical frequency variations. Consequently, appropriate detuning factors for decentralized controllers under different interaction conditions can be derived from the effective relative gain, relative gain, and relative critical frequency. The design method can be used effectively for normal processes as well as for process-loop transfer functions containing unstable zeros resulting from other closed loops. The method is simple, straightforward, and effective and can be easily understood and implemented by field engineers. Several multivaria...
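
The ERGA extends the classical relative gain array by weighting each steady-state gain with a measure of loop bandwidth. A sketch of the idea, assuming the energy transmission ratio is formed as gain times critical frequency; the 2x2 gain and frequency matrices are illustrative, not taken from the paper:

```python
import numpy as np

def erga(K, Wc):
    """Effective relative gain array (a sketch of the ERGA idea):
    combine each channel's steady-state gain K[i, j] with its critical
    frequency Wc[i, j] into an energy transmission ratio E = K * Wc,
    then form Phi = E * inv(E).T (Hadamard product), analogous to the
    ordinary RGA, which is recovered when all Wc entries are equal."""
    E = K * Wc                      # effective energy transmission ratios
    return E * np.linalg.inv(E).T   # element-wise product with inverse-transpose

# illustrative 2x2 numbers (not taken from the paper)
K  = np.array([[2.0, -0.5],
               [0.8,  1.5]])
Wc = np.array([[1.0,  0.4],
               [0.6,  1.2]])
Phi = erga(K, Wc)
# like the RGA, each row and column of Phi sums to 1
```

Pairings are then chosen so that diagonal entries of Phi are positive and close to 1, just as with the ordinary RGA.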

Masashi Miura - One of the best experts on this subject based on the ideXlab platform.

  • communication-based decentralized demand response for smart microgrids
    IEEE Transactions on Industrial Electronics, 2017
    Co-Authors: Kazunori Sakurama, Masashi Miura
    Abstract:

    Demand response (DR) is one of the most promising solutions for the efficient control of smart grids with renewable energy resources. Usually, DR programs are implemented by means of centralized control by power supply companies or independent system operators. In contrast, the recent focus has been on decentralized control to enhance the efficient use of distributed energy resources, especially in microgrids. This paper proposes a decentralized control system for DR. The key to the proposed method is a new decentralized algorithm for determining appropriate control signals (corresponding to prices and/or incentives) using the communication networks provided by smart meters. The effectiveness of the proposed method is illustrated by a numerical example.
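
One standard way to realize price-signal coordination of this kind is dual decomposition: each consumer best-responds locally to a posted price, and a decentralized subgradient update raises the price whenever aggregate demand exceeds supply. This is a hypothetical sketch of that general pattern, not the paper's algorithm; all names and numbers are made up:

```python
# Price-based decentralized DR sketch (dual decomposition with a
# subgradient price update); illustrative only, not the paper's algorithm.
def best_response(price, slope, cap):
    """Each consumer maximizes slope*x - price*x over 0 <= x <= cap.
    With linear utility the local best response is bang-bang."""
    return cap if slope > price else 0.0

def run_dr(slopes, caps, supply, steps=100, lr=0.0625):
    price = 0.0
    for _ in range(steps):
        demand = sum(best_response(price, s, c) for s, c in zip(slopes, caps))
        price = max(0.0, price + lr * (demand - supply))  # raise price on excess demand
    return price, demand

price, demand = run_dr(slopes=[3.0, 2.0, 1.0], caps=[4.0, 4.0, 4.0], supply=8.0)
# the price settles where aggregate demand matches supply
```

Note that only the scalar price is broadcast; each consumer's utility stays private, which is what makes the scheme decentralized.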

Aditya Mahajan - One of the best experts on this subject based on the ideXlab platform.

  • team optimal solution of finite number of mean field coupled LQG subsystems
    arXiv: Optimization and Control, 2020
    Co-Authors: Jalal Arabneydi, Aditya Mahajan
    Abstract:

    A decentralized control system with linear dynamics, quadratic cost, and Gaussian disturbances is considered. The system consists of a finite number of subsystems whose dynamics and per-step cost function are coupled through their mean field (empirical average). The system has a mean-field sharing information structure, i.e., each controller observes the state of its local subsystem (either perfectly or with noise) and the mean field. It is shown that the optimal control law is unique, linear, and identical across all subsystems. Moreover, the optimal gains are computed by solving two decoupled Riccati equations in the full observation model and by solving an additional filter Riccati equation in the noisy observation model. These Riccati equations do not depend on the number of subsystems. It is also shown that the optimal decentralized performance is the same as the optimal centralized performance. An example, motivated by smart grids, is presented to illustrate the result.
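
The decoupled Riccati equations mentioned here are standard discrete-time algebraic Riccati equations, which can be solved by fixed-point iteration. A minimal sketch with an illustrative scalar subsystem; the numbers are made up, not from the paper:

```python
import numpy as np

def solve_dare(A, B, Q, R, iters=500):
    """Fixed-point iteration for the discrete algebraic Riccati equation:
    P = Q + A'PA - A'PB (R + B'PB)^{-1} B'PA."""
    P = Q.copy()
    for _ in range(iters):
        BtPA = B.T @ P @ A
        P = Q + A.T @ P @ A - BtPA.T @ np.linalg.solve(R + B.T @ P @ B, BtPA)
    return P

def lqr_gain(A, B, Q, R):
    """Optimal feedback gain for the control law u = -K x."""
    P = solve_dare(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# illustrative scalar subsystem; the matrices are made up
A = np.array([[0.9]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]])
K = lqr_gain(A, B, Q, R)
# the closed-loop matrix A - B K is stable
```

Because the Riccati equations in the paper do not depend on the number of subsystems, a routine like this runs once and the resulting gains are shared by every controller.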

  • reinforcement learning in decentralized stochastic control systems with partial history sharing
    arXiv: Optimization and Control, 2020
    Co-Authors: Jalal Arabneydi, Aditya Mahajan
    Abstract:

    In this paper, we are interested in systems with multiple agents that wish to collaborate to accomplish a common task while (a) the agents have different information (decentralized information) and (b) the agents do not know the model of the system completely, i.e., they may know it partially or not at all. The agents must learn optimal strategies by interacting with their environment, i.e., by decentralized reinforcement learning (RL). The presence of multiple agents with different information makes decentralized reinforcement learning conceptually more difficult than centralized reinforcement learning. In this paper, we develop a decentralized reinforcement learning algorithm that learns an $\epsilon$-team-optimal solution for the partial history sharing information structure, which encompasses a large class of decentralized control systems, including delayed sharing, control sharing, and mean-field sharing. Our approach consists of two main steps. In the first step, we convert the decentralized control system to an equivalent centralized POMDP (partially observable Markov decision process) using an existing approach called the common information approach. However, the resulting POMDP requires complete knowledge of the system model. To circumvent this requirement, in the second step, we introduce a new concept called "Incrementally Expanding Representation", with which we construct a finite-state RL algorithm whose approximation error converges to zero exponentially fast. We illustrate the proposed approach and verify it numerically by obtaining a decentralized Q-learning algorithm for the two-user Multi Access Broadcast Channel (MABC), a benchmark example for decentralized control systems.
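
Once the common information approach reduces the decentralized problem to a single (PO)MDP, a standard tabular learner can be applied to the reduced model. This sketch shows that centralized primitive on a toy 2-state chain; it illustrates the Q-learning building block only, not the paper's incrementally expanding representation, and the toy environment is made up:

```python
import random

def q_learning(episodes=2000, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy 2-state chain: action 1 moves the
    agent to state 1, which is the only state that pays reward."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    s = 0
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.choice((0, 1))
        else:
            a = max((0, 1), key=lambda x: Q[(s, x)])
        s2 = 1 if a == 1 else 0          # deterministic toy transition
        r = 1.0 if s2 == 1 else 0.0      # reward only for reaching state 1
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2
    return Q

Q = q_learning()
# the learned greedy policy should pick action 1 in both states
```

In the paper's setting the state of such a learner is the common-information belief rather than a raw environment state, which is exactly what the incrementally expanding representation keeps finite.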

  • reinforcement learning in decentralized stochastic control systems with partial history sharing
    Advances in Computing and Communications, 2015
    Co-Authors: Jalal Arabneydi, Aditya Mahajan
    Abstract:

    In this paper, we are interested in systems with multiple agents that wish to collaborate to accomplish a common task while (a) the agents have different information (decentralized information) and (b) the agents do not know the model of the system completely, i.e., they may know it partially or not at all. The agents must learn optimal strategies by interacting with their environment, i.e., by decentralized reinforcement learning (RL). The presence of multiple agents with different information makes decentralized reinforcement learning conceptually more difficult than centralized reinforcement learning. In this paper, we develop a decentralized reinforcement learning algorithm that learns an ϵ-team-optimal solution for the partial history sharing information structure, which encompasses a large class of decentralized control systems, including delayed sharing, control sharing, and mean-field sharing. Our approach consists of two main steps. In the first step, we convert the decentralized control system to an equivalent centralized POMDP (partially observable Markov decision process) using an existing approach called the common information approach. However, the resulting POMDP requires complete knowledge of the system model. To circumvent this requirement, in the second step, we introduce a new concept called "Incrementally Expanding Representation", with which we construct a finite-state RL algorithm whose approximation error converges to zero exponentially fast. We illustrate the proposed approach and verify it numerically by obtaining a decentralized Q-learning algorithm for the two-user Multi Access Broadcast Channel (MABC), a benchmark example for decentralized control systems.