The experts below are selected from a list of 3,891 experts worldwide ranked by the ideXlab platform.
Bozenna Pasik-Duncan - One of the best experts on this subject based on the ideXlab platform.
-
Stochastic adaptive control for continuous-time linear systems with quadratic Cost
Applied Mathematics & Optimization, 1996
Co-Authors: Han-Fu Chen, Tyrone E. Duncan, Bozenna Pasik-Duncan
Abstract: An adaptive control problem is formulated and solved for a completely observed, continuous-time, linear stochastic system with an ergodic quadratic cost criterion. The linear transformations A of the state, B of the control, and C of the noise are assumed to be unknown. Assuming only that A is stable and that the pair (A, C) is controllable, and using a diminishing excitation control that is asymptotically negligible for the ergodic quadratic cost criterion, it is shown that a family of least-squares estimates is strongly consistent. Furthermore, an adaptive control using switchings is given that is self-optimizing for the ergodic quadratic cost criterion.
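The scheme described above — certainty-equivalence LQ control with least-squares identification plus a diminishing excitation signal — can be sketched on a scalar system. Everything below (the dynamics, noise level, excitation decay, and the clipping that crudely stands in for the paper's switching mechanism) is an illustrative assumption, not the paper's construction:

```python
import numpy as np

# Hedged sketch on a scalar system dx = a*x dt + b*u dt + c dW with (a, b)
# unknown: recursive least squares on Euler-discretized data, plus a
# diminishing excitation signal added to the certainty-equivalence LQ control.
rng = np.random.default_rng(0)
a_true, b_true, c_true = -1.0, 0.5, 0.2   # a_true < 0: stable, as the paper assumes
dt, n_steps = 0.01, 20000

theta = np.zeros(2)            # RLS estimate of (a, b)
P = 100.0 * np.eye(2)          # RLS covariance
x = 0.0
for k in range(1, n_steps + 1):
    a_hat, b_hat = theta
    # Certainty-equivalence LQ gain for cost x^2 + u^2 (scalar Riccati solution).
    b_eff = b_hat if abs(b_hat) > 1e-3 else 1e-3
    s = (a_hat + np.sqrt(a_hat**2 + b_eff**2)) / b_eff**2
    u_ce = np.clip(-b_eff * s * x, -10.0, 10.0)   # crude stand-in for switching
    # Diminishing excitation: variance decays, so it is cost-negligible yet
    # keeps the regressor persistently exciting.
    u = u_ce + rng.normal(0.0, k ** -0.25)
    x_new = x + (a_true * x + b_true * u) * dt + c_true * np.sqrt(dt) * rng.normal()
    # RLS update on the increment model (x_new - x) ~ (a*x + b*u) dt
    phi = np.array([x, u]) * dt
    y = x_new - x
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = P - np.outer(K, phi @ P)
    x = x_new

print(theta)   # the estimates should approach (a_true, b_true)
```

The excitation standard deviation decays like k^(-1/4), slowly enough for strong consistency in this toy setting but fast enough that the excitation's contribution to the quadratic cost vanishes in the time average.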
-
Limit Theorems of Probability Theory and Optimality in Linear Stochastic Evolution Systems
Operations Research ’91, 1992
Co-Authors: Bozenna Pasik-Duncan
Abstract: A survey of some methods and results for the adaptive control of infinite-dimensional stochastic systems is given. Adaptive control includes the identification of unknown parameters and the construction of an adaptive control law that is optimal for an ergodic cost criterion. Some approaches to identification are given that yield a family of strongly consistent estimates. The ergodic cost criterion is a quadratic functional of the state and the control. The behavior of such a quadratic functional is investigated using various measures of convergence to the optimal cost.
-
Continuous time adaptive LQG control
Proceedings of the 31st IEEE Conference on Decision and Control, 1992
Co-Authors: Han-Fu Chen, Tyrone E. Duncan, Bozenna Pasik-Duncan
Abstract: An adaptive control problem is described, and its solution for a completely observed, continuous-time, linear stochastic system with an ergodic quadratic cost criterion is given. The linear transformations A of the state and B of the control are assumed to be unknown. Assuming only that A is stable and that the pair formed by A and the linear transformation of the noise is controllable, and using a diminishing excitation control, it is shown that a family of least-squares estimates is strongly consistent. An adaptive control using switchings that is self-optimizing for the ergodic quadratic cost criterion is presented.
Ke Liu - One of the best experts on this subject based on the ideXlab platform.
-
A note on optimality conditions for continuous-time Markov decision processes with average cost criterion
IEEE Transactions on Automatic Control, 2001
Co-Authors: Xianping Guo, Ke Liu
Abstract: This note deals with continuous-time Markov decision processes with a denumerable state space and the average cost criterion. The transition rates are allowed to be unbounded, and the action set is a Borel space. We give a new set of conditions under which the existence of optimal stationary policies is ensured by using the optimality inequality. Our results are illustrated with a controlled queueing model. Moreover, we use an example to show that our conditions do not imply the existence of a solution to the optimality equations used in the previous literature.
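A standard way to compute an optimal stationary policy for an average-cost CTMDP like the controlled queue mentioned above is uniformization followed by relative value iteration. The sketch below is illustrative only — the queue, rates, and cost weights are made up, the state space is truncated for computation, and this is not the paper's (unbounded-rate) setting:

```python
import numpy as np

# A single queue with controllable service rate, treated as a CTMDP under the
# average cost criterion. Uniformization converts it to a discrete-time MDP;
# relative value iteration then yields the optimal average cost.
lam = 1.0                      # arrival rate
actions = [1.5, 3.0]           # selectable service rates
hold, serve_cost = 1.0, 0.5    # holding cost per customer, cost per unit service rate
N = 50                         # truncation level (finite for computation)
U = lam + max(actions)         # uniformization constant

v = np.zeros(N + 1)            # relative value function, pinned so v[0] = 0
g_disc = 0.0
for _ in range(50000):
    Tv = np.empty_like(v)
    for x in range(N + 1):
        best = np.inf
        for mu in actions:
            mu_eff = mu if x > 0 else 0.0          # no service in the empty state
            c = hold * x + serve_cost * mu          # cost rate under action mu
            stay = 1.0 - (lam + mu_eff) / U
            q = (c / U
                 + (lam / U) * v[min(x + 1, N)]
                 + (mu_eff / U) * v[max(x - 1, 0)]
                 + stay * v[x])
            best = min(best, q)
        Tv[x] = best
    g_disc = Tv[0]             # per-step average cost of the uniformized chain
    v_new = Tv - g_disc
    if np.max(np.abs(v_new - v)) < 1e-9:
        v = v_new
        break
    v = v_new

g = U * g_disc                 # average cost per unit time for the CTMDP
print(g)
```

The optimality inequality discussed in the paper replaces the equality `Tv(x) = g/U + v(x)` that this finite truncation satisfies; the truncation is exactly the kind of restriction the paper's unbounded-rate conditions avoid.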
W. M. McEneaney - One of the best experts on this subject based on the ideXlab platform.
-
Infinite time-horizon risk-sensitive systems with quadratic growth
Proceedings of the 36th IEEE Conference on Decision and Control
Co-Authors: W. M. McEneaney, K. Ito
Abstract: Previous work on nonlinear infinite time-horizon risk-sensitive systems has been restricted mainly to the case where the cost criterion is bounded or grows at most linearly. One would like the nonlinear theory to subsume the LEQG case. We consider nonlinear risk-sensitive systems where the cost criterion may grow quadratically. This leads to difficulties that are surprisingly formidable; for instance, the HJB equation typically has multiple classical solutions.
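One way to see why quadratic cost growth is delicate in the risk-sensitive (exponential-of-cost) setting: for a Gaussian state X ~ N(0, σ²), the quantity E[exp(θX²)] is finite only when θ < 1/(2σ²), where it equals (1 − 2θσ²)^(−1/2). The check below is a generic illustration with made-up numbers, not an example from the paper:

```python
import numpy as np

# Monte Carlo vs. closed form for E[exp(theta * X^2)], X ~ N(0, sigma^2).
# The expectation is finite only for theta < 1/(2*sigma^2) = 0.5 here; past
# that threshold the exponential-of-quadratic criterion simply blows up.
rng = np.random.default_rng(1)
sigma = 1.0
x = rng.normal(0.0, sigma, size=2_000_000)

results = {}
for theta in (0.1, 0.2):       # both safely below the blow-up threshold 0.5
    mc = float(np.mean(np.exp(theta * x**2)))
    exact = (1.0 - 2.0 * theta * sigma**2) ** -0.5
    results[theta] = (mc, exact)
    print(theta, mc, exact)
# For theta >= 0.5 the integral diverges, so Monte Carlo averages never settle.
```

This threshold effect is why a risk-sensitivity parameter must be compatible with the growth of the cost, and it foreshadows the non-uniqueness issues for the HJB equation mentioned in the abstract.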
-
Risk-sensitive control with ergodic cost criteria
Proceedings of the 31st IEEE Conference on Decision and Control, 1992
Co-Authors: Wendell H. Fleming, W. M. McEneaney
Abstract: Stochastic control problems on an infinite time horizon with exponential cost criteria are considered. The Donsker-Varadhan large deviation rate (1975, 1976) is used as the criterion to be optimized. The optimum rate is characterized as the value of an associated stochastic differential game with an ergodic (expected average cost per unit time) cost criterion. Taking a small-noise limit yields a deterministic differential game with an average-cost-per-unit-time criterion. This differential game is related to robust control of nonlinear systems.
Tamer Basar - One of the best experts on this subject based on the ideXlab platform.
-
Backstepping controller design for nonlinear stochastic systems under a risk-sensitive cost criterion
SIAM Journal on Control and Optimization, 1999
Co-Authors: Tamer Basar
Abstract: This paper develops a methodology for the recursive construction of optimal and near-optimal controllers for strict-feedback stochastic nonlinear systems under a risk-sensitive cost criterion. The design procedure follows the integrator backstepping methodology, and the controllers obtained guarantee any desired achievable level of long-term average cost for a given risk-sensitivity parameter $\theta$. Furthermore, they lead to closed-loop system trajectories that are bounded in probability and, in some cases, asymptotically stable in the large. These results also generalize to nonlinear systems with strongly stabilizable zero dynamics. A numerical example included in the paper illustrates the analytical results.
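The recursive construction underlying integrator backstepping can be illustrated on a deterministic, noise-free strict-feedback system (the paper treats the stochastic, risk-sensitive version; the system, gains, and initial condition below are arbitrary choices for illustration):

```python
import numpy as np

# Integrator backstepping for the strict-feedback system
#   x1' = x1**2 + x2,   x2' = u
# Step 1 designs a virtual control alpha1 for x2; step 2 designs u to drive
# x2 to alpha1 while cancelling the cross term.  In the error coordinates
# z1 = x1, z2 = x2 - alpha1 the closed loop is linear and stable.
k1, k2 = 2.0, 2.0

def backstepping_u(x1, x2):
    f1 = x1**2
    alpha1 = -f1 - k1 * x1                    # virtual control for x2
    z1, z2 = x1, x2 - alpha1
    dalpha1 = (-2.0 * x1 - k1) * (f1 + x2)    # d(alpha1)/dt along trajectories
    return dalpha1 - z1 - k2 * z2             # cancels cross term, stabilizes z2

x = np.array([1.0, -0.5])
dt = 1e-3
for _ in range(20000):                        # 20 time units of Euler simulation
    u = backstepping_u(*x)
    x = x + dt * np.array([x[0]**2 + x[1], u])

print(x)   # both states are driven to the origin
```

With V = (z1² + z2²)/2 one checks V' = −k1·z1² − k2·z2², which is the deterministic analogue of the stochastic Lyapunov argument behind the boundedness-in-probability claims in the abstract.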
-
Backstepping controller design for nonlinear stochastic systems under a risk-sensitive cost criterion
American Control Conference, 1997
Co-Authors: Tamer Basar
Abstract: This paper develops a methodology for the recursive construction of optimal and near-optimal controllers for strict-feedback stochastic nonlinear systems under a risk-sensitive cost criterion. The design procedure follows the integrator backstepping methodology, and the controllers obtained guarantee any desired level of long-term average cost for a given risk-sensitivity parameter $\theta$. Furthermore, they lead to closed-loop system trajectories that are bounded in probability and, in some cases, asymptotically stable in the large. These results also generalize to nonlinear systems with strongly stabilizable zero dynamics.
Xianping Guo - One of the best experts on this subject based on the ideXlab platform.
-
Optimality Conditions for CTMDP with Average Cost Criterion
Markov Processes and Controlled Markov Chains, 2002
Co-Authors: Xianping Guo, Weiping Zhu
Abstract: In this paper, we consider continuous-time Markov decision processes with (possibly unbounded) transition and cost rates under the average cost criterion. We present a set of conditions weaker than those in [5, 11, 12, 14] and prove the existence of optimal stationary policies using the optimality inequality. Moreover, the theory is illustrated by two examples.
-
A note on optimality conditions for continuous-time Markov decision processes with average cost criterion
IEEE Transactions on Automatic Control, 2001
Co-Authors: Xianping Guo, Ke Liu
Abstract: This note deals with continuous-time Markov decision processes with a denumerable state space and the average cost criterion. The transition rates are allowed to be unbounded, and the action set is a Borel space. We give a new set of conditions under which the existence of optimal stationary policies is ensured by using the optimality inequality. Our results are illustrated with a controlled queueing model. Moreover, we use an example to show that our conditions do not imply the existence of a solution to the optimality equations used in the previous literature.