Irreducible Markov Chain

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 2319 Experts worldwide, ranked by the ideXlab platform

Yiguang Hong - One of the best experts on this subject based on the ideXlab platform.

  • Brief paper: Target containment control of multi-agent systems with random switching interconnection topologies
    Automatica, 2012
    Co-Authors: Yiguang Hong
    Abstract:

    In this paper, distributed containment control is considered for a second-order multi-agent system guided by multiple leaders under random switching topologies. The multi-leader control problem is investigated via a combination of convex analysis and stochastic-process techniques. The interaction topology between agents is described by a continuous-time Irreducible Markov Chain. A necessary and sufficient condition is obtained under which all the mobile agents almost surely asymptotically converge to the static convex leader set. Moreover, tracking-error estimates are provided for the convex target set determined by multiple moving leaders.
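The random switching topology in the abstract above is driven by a continuous-time Irreducible Markov Chain. As a minimal sketch, the switching signal can be sampled from a generator (rate) matrix; the 3-state matrix Q below is a toy assumption, not taken from the paper:

```python
import numpy as np

# Hypothetical 3-state generator matrix Q for the switching topology;
# off-diagonal entries are transition rates, each row sums to zero.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

def simulate_ctmc(Q, x0, t_end, rng):
    """Sample one path of a continuous-time Markov chain up to time t_end."""
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        rate = -Q[x, x]                      # total exit rate from state x
        t += rng.exponential(1.0 / rate)     # holding time ~ Exp(rate)
        if t >= t_end:
            break
        p = Q[x].copy()
        p[x] = 0.0
        x = int(rng.choice(len(p), p=p / p.sum()))  # jump to the next state
        path.append((t, x))
    return path

rng = np.random.default_rng(0)
path = simulate_ctmc(Q, x0=0, t_end=10.0, rng=rng)
```

Each entry of `path` is a (jump time, topology index) pair; in the containment-control setting, the index would select the active interconnection graph.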

  • CDC - Multi-leader set coordination of multi-agent systems with random switching topologies
    49th IEEE Conference on Decision and Control (CDC), 2010
    Co-Authors: Youcheng Lou, Yiguang Hong
    Abstract:

    In this paper, we consider the multi-leader following problem of a second-order multi-agent system with random switching topologies. This problem can also be viewed as tracking a target set specified by multiple leaders. The interaction topology between agents is described by an Irreducible Markov Chain. A necessary and sufficient condition is obtained under which all the mobile agents almost surely asymptotically converge to the static convex target set determined by the multiple leaders. Moreover, results are also given for a moving target set under independent and identically distributed (i.i.d.) random switching.

Carl D. Meyer - One of the best experts on this subject based on the ideXlab platform.

  • Updating Markov Chains with an Eye on Google's PageRank
    SIAM Journal on Matrix Analysis and Applications, 2006
    Co-Authors: Amy N. Langville, Carl D. Meyer
    Abstract:

    An iterative algorithm based on aggregation/disaggregation principles is presented for updating the stationary distribution of a finite homogeneous Irreducible Markov Chain. The focus is on large-scale problems of the kind that are characterized by Google's PageRank application, but the algorithm is shown to work well in general contexts. The algorithm is flexible in that it allows for changes to the transition probabilities as well as for the creation or deletion of states. In addition to establishing the rate of convergence, it is proven that the algorithm is globally convergent. Results of numerical experiments are presented.
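As context for the updating problem above, a minimal baseline (not the paper's aggregation/disaggregation algorithm) computes the stationary distribution by power iteration on πᵀP = πᵀ; warm-starting from a previous stationary vector is what makes re-solving cheap after small changes to P. The matrix below is a toy example:

```python
import numpy as np

# Toy transition matrix of a small irreducible, aperiodic chain
# (a PageRank-like stand-in, not the paper's test problem).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.3, 0.3, 0.4]])

def stationary(P, pi0=None, tol=1e-12, max_iter=10_000):
    """Power iteration for the stationary row vector pi with pi @ P = pi.

    Passing the old stationary vector as pi0 warm-starts the iteration,
    which is the cheap-update idea the paper's algorithm refines."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0]) if pi0 is None else pi0
    for _ in range(max_iter):
        new = pi @ P
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new
    return pi

pi = stationary(P)
```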

  • Markov Chain sensitivity measured by mean first passage times
    Linear Algebra and its Applications, 2000
    Co-Authors: Grace E. Cho, Carl D. Meyer
    Abstract:

    The purpose of this article is to present results concerning the sensitivity of the stationary probabilities for an n-state, time-homogeneous, Irreducible Markov Chain in terms of the mean first passage times in the Chain.
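The mean first passage times referred to above can be computed from the fundamental matrix Z of Kemeny and Snell via m_ij = (z_jj − z_ij)/π_j for i ≠ j, with mean return time m_jj = 1/π_j. The chain below is an assumed toy example, and the construction is the standard one, not the article's own analysis:

```python
import numpy as np

# Toy birth-death chain with stationary distribution (1/4, 1/2, 1/4).
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

def mean_first_passage(P):
    """Mean first passage times via the Kemeny-Snell fundamental matrix."""
    n = P.shape[0]
    # Stationary pi from pi (I - P) = 0 together with sum(pi) = 1.
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    M = (np.diag(Z)[None, :] - Z) / pi[None, :]   # (z_jj - z_ij) / pi_j
    np.fill_diagonal(M, 1.0 / pi)                 # mean return times
    return pi, M

pi, M = mean_first_passage(P)
```

The off-diagonal entries satisfy the first-passage recurrence m_ij = 1 + Σ_{k≠j} p_ik m_kj, which is a quick sanity check on the computation.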

  • Uniform Stability of Markov Chains
    SIAM Journal on Matrix Analysis and Applications, 1994
    Co-Authors: Ilse C. F. Ipsen, Carl D. Meyer
    Abstract:

    By deriving a new set of tight perturbation bounds, it is shown that all stationary probabilities of a finite Irreducible Markov Chain react essentially in the same way to perturbations in the transition probabilities. In particular, if at least one stationary probability is insensitive in a relative sense, then all stationary probabilities must be insensitive in an absolute sense. New measures of sensitivity are related to more traditional ones, and it is shown that all relevant condition numbers for the Markov Chain problem are small multiples of each other. Finally, the implications of these findings to the computation of stationary probabilities by direct methods are discussed, and the results are applied to stability issues in nearly transient Chains.
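The uniform-sensitivity phenomenon above is easy to probe numerically: perturb P by a small matrix E with zero row sums (so P + E stays stochastic) and compare the relative changes of the stationary probabilities. Both matrices below are toy choices, not the paper's examples:

```python
import numpy as np

def stationary(P):
    """Stationary row vector of an irreducible chain via least squares."""
    n = P.shape[0]
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.4, 0.4]])

# Zero-row-sum perturbation keeps P + E a valid transition matrix.
E = 1e-4 * np.array([[-1.0,  1.0,  0.0],
                     [ 0.0, -1.0,  1.0],
                     [ 1.0,  0.0, -1.0]])

pi = stationary(P)
pi_tilde = stationary(P + E)
rel_change = np.abs(pi_tilde - pi) / pi   # relative sensitivity of each pi_i
```

For a well-conditioned chain like this one, all entries of `rel_change` are of the same small order, illustrating the "all stationary probabilities react essentially in the same way" conclusion.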

  • The Character of a Finite Markov Chain
    Linear Algebra Markov Chains and Queueing Models, 1993
    Co-Authors: Carl D. Meyer
    Abstract:

    The purpose of this article is to present the concept of the character of a finite Irreducible Markov Chain. It is demonstrated how the sensitivity of the stationary probabilities to perturbations in the transition probabilities can be gauged by the use of the character.

    1. Introduction. It is well known that if a finite, Irreducible, homogeneous Markov Chain has a subdominant eigenvalue which is close to 1, then the Chain is ill-conditioned in the sense that the stationary probabilities can be sensitive to small perturbations in the transition probabilities. However, the converse of this statement has been an open question. The purpose of this article is to help resolve this issue in terms of a spectral measure referred to as the character of the Chain. Before defining the character, it is instructive to review the situation concerning the sensitivity of the stationary distribution. If P (n×n) is the transition probability matrix for such a Chain, and if πᵀ = (π₁, π₂, ..., πₙ) is the stationary distribution vector satisfying πᵀP = πᵀ and Σᵢ₌₁ⁿ πᵢ = 1, the goal is to describe the effect on πᵀ when P is perturbed by a matrix E such that P̃ = P + E is the transition probability matrix of another Irreducible Markov Chain. The problem can be considered as a perturbed eigenvector problem, or it can be analyzed as a perturbed linear system πᵀA = 0, πᵀe = 1, where A = I − P and e is a column of 1's. If σ(P) = {1, λ₂, λ₃, ..., λₙ} denotes the spectrum of P, then traditional perturbation theory for eigenvectors says that if a subdominant eigenvalue λᵢ is close to 1, then we expect πᵀ to be sensitive. But for general eigenvector problems, the converse is not true, i.e., well-separated eigenvalues do not guarantee a well-conditioned eigenvector. For example, consider
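The role of the subdominant eigenvalue discussed above is easy to inspect numerically. The chain below is an illustrative toy example (not the example truncated at the end of the abstract): its spectrum is {1, 0.9, 0.8}, so |λ₂| = 0.9 flags potential, though not guaranteed, ill-conditioning:

```python
import numpy as np

# Toy irreducible chain; its eigenvalues work out to 1.0, 0.9, and 0.8.
P = np.array([[0.90, 0.10, 0.00],
              [0.05, 0.90, 0.05],
              [0.00, 0.10, 0.90]])

# Sort eigenvalue magnitudes; eigvals[0] is the Perron eigenvalue 1.
eigvals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
lambda2 = eigvals[1]   # |lambda_2| near 1 signals possible sensitivity
```

The article's point is precisely that λ₂ alone does not settle the question in either direction, which is what the character is introduced to address.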

Youcheng Lou - One of the best experts on this subject based on the ideXlab platform.

  • CDC - Multi-leader set coordination of multi-agent systems with random switching topologies
    49th IEEE Conference on Decision and Control (CDC), 2010
    Co-Authors: Youcheng Lou, Yiguang Hong
    Abstract:

    In this paper, we consider the multi-leader following problem of a second-order multi-agent system with random switching topologies. This problem can also be viewed as tracking a target set specified by multiple leaders. The interaction topology between agents is described by an Irreducible Markov Chain. A necessary and sufficient condition is obtained under which all the mobile agents almost surely asymptotically converge to the static convex target set determined by the multiple leaders. Moreover, results are also given for a moving target set under independent and identically distributed (i.i.d.) random switching.

Alexander M Andronov - One of the best experts on this subject based on the ideXlab platform.

  • On a reward rate estimation for the finite Irreducible continuous-time Markov Chain
    Journal of statistical theory and practice, 2017
    Co-Authors: Alexander M Andronov
    Abstract:

    A continuous-time homogeneous Irreducible Markov Chain {X(t)}, t ∈ [0, ∞), taking values on N = {1, ..., k}, k < ∞, is considered. The matrix Λ = (λ_ij) of transition intensities λ_ij from state i to state j is known. Each unit of sojourn time in state i yields reward β_i, so the total reward during time t is Y(t) = ∫₀ᵗ β_{X(s)} ds. The reward rates {β_i} are unknown and must be estimated. For that purpose, the following statistical data on r observations are at our disposal: (1) t, the observation time; (2) i, the initial state X(0); (3) j, the final state X(t); and (4) y, the acquired reward Y(t). Two methods are used for the estimation: the weighted least-squares method and the saddle-point method for the Laplace transform of the reward. A simulation study illustrates the suggested approaches.
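The least-squares idea can be sketched under a deliberate simplification: assume the per-state occupation times are observable, whereas the paper works only with (t, i, j, y). All numerical values below (the generator Q and the true rates β) are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy intensity matrix and true reward rates (assumed, not from the paper).
Q = np.array([[-1.0,  0.6,  0.4],
              [ 0.5, -1.5,  1.0],
              [ 0.3,  0.7, -1.0]])
beta_true = np.array([1.0, 3.0, 5.0])

def sample_occupation(Q, x0, t_end):
    """Simulate one CTMC path; return the time spent in each state."""
    occ = np.zeros(Q.shape[0])
    t, x = 0.0, x0
    while t < t_end:
        hold = min(rng.exponential(1.0 / -Q[x, x]), t_end - t)
        occ[x] += hold
        t += hold
        if t >= t_end:
            break
        p = Q[x].copy(); p[x] = 0.0
        x = int(rng.choice(len(p), p=p / p.sum()))
    return occ

# r = 200 observations: occupation-time rows T and rewards y = T @ beta.
T = np.array([sample_occupation(Q, int(rng.integers(3)), 10.0)
              for _ in range(200)])
y = T @ beta_true
beta_hat, *_ = np.linalg.lstsq(T, y, rcond=None)
```

With noiseless rewards and full-rank occupation data, ordinary least squares recovers β exactly; the paper's contribution is doing the estimation without observing the occupation times.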
