
Stochastic matrix

From Wikipedia, the free encyclopedia
Matrix used to describe the transitions of a Markov chain

For a matrix whose elements are stochastic, see Random matrix.

In mathematics, a stochastic matrix is a square matrix used to describe the transitions of a Markov chain. Each of its entries is a nonnegative real number representing a probability.[1][2]: 10 It is also called a probability matrix, transition matrix, substitution matrix, or Markov matrix. The stochastic matrix was first developed by Andrey Markov at the beginning of the 20th century, and has found use throughout a wide variety of scientific fields, including probability theory, statistics, mathematical finance and linear algebra, as well as computer science and population genetics. There are several different definitions and types of stochastic matrices:

  • A right stochastic matrix is a square matrix of nonnegative real numbers, with each row summing to 1 (so it is also called a row stochastic matrix).
  • A left stochastic matrix is a square matrix of nonnegative real numbers, with each column summing to 1 (so it is also called a column stochastic matrix).
  • A doubly stochastic matrix is a square matrix of nonnegative real numbers with each row and column summing to 1.
  • A substochastic matrix is a real square matrix whose row sums are all ≤ 1.

In the same vein, one may define a probability vector as a vector whose elements are nonnegative real numbers which sum to 1. Thus, each row of a right stochastic matrix (or column of a left stochastic matrix) is a probability vector. Right stochastic matrices act upon row vectors of probabilities by multiplication from the right (hence their name), and the matrix entry in the i-th row and j-th column is the probability of transition from state i to state j. Left stochastic matrices act upon column vectors of probabilities by multiplication from the left (hence their name), and the matrix entry in the i-th row and j-th column is the probability of transition from state j to state i.
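A minimal numerical sketch of these conventions, assuming NumPy and an illustrative 2-state matrix (not taken from this article):

```python
import numpy as np

# Illustrative right (row) stochastic matrix: each row sums to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
assert np.allclose(P.sum(axis=1), 1.0)

# A probability row vector: nonnegative entries summing to 1.
x = np.array([0.2, 0.8])

# A right stochastic matrix acts on a row vector by multiplication
# from the right; the result is again a probability vector.
x_next = x @ P
print(x_next)                        # [0.58 0.42]
assert np.isclose(x_next.sum(), 1.0)
```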

This article uses the right/row stochastic matrix convention.

History

Andrey Markov in 1886

The stochastic matrix was developed alongside the Markov chain by Andrey Markov, a Russian mathematician and professor at St. Petersburg University who first published on the topic in 1906.[3] His initial intended uses were for linguistic analysis and other mathematical subjects like card shuffling, but both Markov chains and matrices rapidly found use in other fields.[3][4]

Stochastic matrices were further developed by scholars such as Andrey Kolmogorov, who expanded their possibilities by allowing for continuous-time Markov processes.[5] By the 1950s, articles using stochastic matrices had appeared in the fields of econometrics[6] and circuit theory.[7] In the 1960s, stochastic matrices appeared in an even wider variety of scientific works, from behavioral science[8] to geology[9][10] to residential planning.[11] In addition, much mathematical work was done through these decades to improve the range of uses and functionality of the stochastic matrix and Markovian processes more generally.

From the 1970s to the present, stochastic matrices have found use in almost every field that requires formal analysis, from structural science[12] to medical diagnosis[13] to personnel management.[14] In addition, stochastic matrices have found wide use in land change modeling, usually under the term Markov matrix.[15]

Definition and properties


A stochastic matrix describes a Markov chain X_t over a finite state space S with cardinality α.

If the probability of moving from i to j in one time step is Pr(j | i) = P_{i,j}, the stochastic matrix P is given by using P_{i,j} as the i-th row and j-th column element, e.g.,

P = \begin{bmatrix} P_{1,1} & P_{1,2} & \dots & P_{1,j} & \dots & P_{1,\alpha} \\ P_{2,1} & P_{2,2} & \dots & P_{2,j} & \dots & P_{2,\alpha} \\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ P_{i,1} & P_{i,2} & \dots & P_{i,j} & \dots & P_{i,\alpha} \\ \vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\ P_{\alpha,1} & P_{\alpha,2} & \dots & P_{\alpha,j} & \dots & P_{\alpha,\alpha} \end{bmatrix}.

Since the total probability of transitioning from a state i to all other states must be 1,

\forall i \in \{1, \ldots, \alpha\}, \quad \sum_{j=1}^{\alpha} P_{i,j} = 1;

thus this matrix is a right stochastic matrix.

The above elementwise sum across each row i of P may be more concisely written as P1 = 1, where 1 is the α-dimensional column vector of all ones. Using this, it can be seen that the product of two right stochastic matrices P′ and P′′ is also right stochastic: P′P′′1 = P′(P′′1) = P′1 = 1. In general, the k-th power P^k of a right stochastic matrix P is also right stochastic. The probability of transitioning from i to j in two steps is then given by the (i, j)-th element of the square of P:

\left(P^{2}\right)_{i,j}.

In general, the probability of transitioning from any state to any other state in k steps, in a finite Markov chain given by the matrix P, is given by P^k.
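A short sketch of multi-step transitions, reusing the illustrative 2-state matrix from above:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Powers of a right stochastic matrix are again right stochastic.
P2 = np.linalg.matrix_power(P, 2)
assert np.allclose(P2.sum(axis=1), 1.0)

# (P^2)[i, j] is the probability of moving from state i to state j
# in exactly two steps (0-based indices here).
print(P2[0, 1])                      # 0.9*0.1 + 0.1*0.5 = 0.14
```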

An initial probability distribution of states, specifying where the system might be initially and with what probabilities, is given as a row vector.

A stationary probability vector π is defined as a distribution, written as a row vector, that does not change under application of the transition matrix; that is, it is defined as a probability distribution on the set {1, …, n} which is also a left eigenvector of the probability matrix, associated with eigenvalue 1:

\boldsymbol{\pi} P = \boldsymbol{\pi}.
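A sketch of computing a stationary vector numerically, under the same illustrative matrix; a left eigenvector of P for eigenvalue 1 is found as a right eigenvector of the transpose:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Left eigenvectors of P are right eigenvectors of P.T.
eigvals, eigvecs = np.linalg.eig(P.T)
v = eigvecs[:, np.argmin(np.abs(eigvals - 1.0))].real

pi = v / v.sum()                   # normalize to a probability vector
print(pi)                          # [0.8333... 0.1666...]
assert np.allclose(pi @ P, pi)     # stationary: pi P = pi
```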

Karpelevič regions for n = 3 and n = 4.

It can be shown that the spectral radius of any stochastic matrix is one. By the Gershgorin circle theorem, all of the eigenvalues of a stochastic matrix have absolute values less than or equal to one. More precisely, the eigenvalues of n-by-n stochastic matrices are restricted to lie within a subset of the complex unit disk, known as Karpelevič regions.[16] This result was originally obtained by Fridrikh Karpelevich,[17] following a question posed by Kolmogorov[18] and partially addressed by Nikolay Dmitriyev and Eugene Dynkin.[19]
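A quick numerical illustration of this spectral bound, assuming a randomly generated stochastic matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random 4-by-4 right stochastic matrix.
A = rng.random((4, 4))
P = A / A.sum(axis=1, keepdims=True)

eigvals = np.linalg.eigvals(P)
# Every eigenvalue lies in the closed unit disk, and 1 is always an
# eigenvalue (with the all-ones column vector as its eigenvector).
assert np.all(np.abs(eigvals) <= 1 + 1e-12)
assert np.isclose(np.max(np.abs(eigvals)), 1.0)
```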

Additionally, every right stochastic matrix has an "obvious" column eigenvector associated to the eigenvalue 1: the vector 1 used above, whose coordinates are all equal to 1. As left and right eigenvalues of a square matrix are the same, every stochastic matrix has, at least, a left eigenvector associated to the eigenvalue 1, and the largest absolute value of all its eigenvalues is also 1. Finally, the Brouwer fixed point theorem (applied to the compact convex set of all probability distributions of the finite set {1, ..., n}) implies that there is some left eigenvector which is also a stationary probability vector.

On the other hand, the Perron–Frobenius theorem also ensures that every irreducible stochastic matrix has such a stationary vector, and that the largest absolute value of an eigenvalue is always 1. However, this theorem cannot be applied directly to such matrices because they need not be irreducible. In general, there may be several such vectors. However, for a matrix with strictly positive entries (or, more generally, for an irreducible aperiodic stochastic matrix), this vector is unique and can be computed by observing that for any i we have the following limit,

\lim_{k \to \infty} \left(P^{k}\right)_{i,j} = \boldsymbol{\pi}_{j},

where π_j is the j-th element of the row vector π. Among other things, this says that the long-term probability of being in a state j is independent of the initial state i. That both of these computations give the same stationary vector is a form of an ergodic theorem, which is generally true in a wide variety of dissipative dynamical systems: the system evolves, over time, to a stationary state.
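A sketch of this limit for the illustrative 2-state matrix used above, whose stationary vector is (5/6, 1/6):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# For an irreducible aperiodic chain, every row of P^k converges to
# the stationary vector pi, regardless of the starting state i.
Pk = np.linalg.matrix_power(P, 50)
print(Pk)   # both rows are approximately [0.8333 0.1667]
```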


Intuitively, a stochastic matrix represents a Markov chain; the application of the stochastic matrix to a probability distribution redistributes the probability mass of the original distribution while preserving its total mass. If this process is applied repeatedly, the distribution converges to a stationary distribution for the Markov chain.[2]: 14–17 [20]: 116 

Stochastic matrices and their product form a category, which is a subcategory both of the category of matrices and of the category of Markov kernels.

Example: Cat and mouse


Suppose there is a timer and a row of five adjacent boxes. At time zero, a cat is in the first box, and a mouse is in the fifth box. The cat and the mouse both jump to a random adjacent box when the timer advances. For example, if the cat is in the second box and the mouse is in the fourth, the probability that the cat will be in the first box and the mouse in the fifth after the timer advances is one fourth. If the cat is in the first box and the mouse is in the fifth, the probability that the cat will be in box two and the mouse will be in box four after the timer advances is one. The cat eats the mouse if both end up in the same box, at which time the game ends. Let the random variable K be the time the mouse stays in the game.

The Markov chain that represents this game contains the following five states, specified by the combination of positions (cat, mouse). Note that while a naive enumeration of states would list 25 states, many are impossible either because the mouse can never have a lower index than the cat (as that would mean the mouse occupied the cat's box and survived to move past it), or because the sum of the two indices will always have even parity. In addition, the 3 possible states that lead to the mouse's death are combined into one:

  • State 1: (1,3)
  • State 2: (1,5)
  • State 3: (2,4)
  • State 4: (3,5)
  • State 5: game over: (2,2), (3,3) & (4,4).

We use a stochastic matrix, P (below), to represent the transition probabilities of this system (rows and columns in this matrix are indexed by the possible states listed above, with the pre-transition state as the row and the post-transition state as the column). For instance, starting from state 1 – 1st row – it is impossible for the system to stay in this state, so P_{11} = 0; the system also cannot transition to state 2 – because the cat would have stayed in the same box – so P_{12} = 0, and by a similar argument for the mouse, P_{14} = 0. Transitions to states 3 or 5 are allowed, and thus P_{13}, P_{15} ≠ 0.

P = \begin{bmatrix} 0 & 0 & 1/2 & 0 & 1/2 \\ 0 & 0 & 1 & 0 & 0 \\ 1/4 & 1/4 & 0 & 1/4 & 1/4 \\ 0 & 0 & 1/2 & 0 & 1/2 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}.
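A sketch encoding this transition matrix and propagating the initial state (state 2, i.e. cat in box 1 and mouse in box 5) forward two steps:

```python
import numpy as np

# States: 1=(1,3), 2=(1,5), 3=(2,4), 4=(3,5), 5=game over.
P = np.array([[0,   0,   1/2, 0,   1/2],
              [0,   0,   1,   0,   0  ],
              [1/4, 1/4, 0,   1/4, 1/4],
              [0,   0,   1/2, 0,   1/2],
              [0,   0,   0,   0,   1  ]])
assert np.allclose(P.sum(axis=1), 1.0)   # right stochastic

x = np.array([0, 1, 0, 0, 0])            # start in state 2 = (1,5)
print(x @ np.linalg.matrix_power(P, 2))  # [0.25 0.25 0. 0.25 0.25]
```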

Long-term averages


No matter what the initial state, the cat will eventually catch the mouse (with probability 1), and a stationary state π = (0, 0, 0, 0, 1) is approached as a limit. To compute the long-term average or expected value of a stochastic variable Y, for each state S_j and time t_k there is a contribution of Y_{j,k} · P(S = S_j, t = t_k). Survival can be treated as a binary variable, with Y = 1 for a surviving state and Y = 0 for the terminated state. The states with Y = 0 do not contribute to the long-term average.

Phase-type representation

The survival function of the mouse. The mouse will survive at least the first time step.

As state 5 is an absorbing state, the distribution of time to absorption is discrete phase-type distributed. Suppose the system starts in state 2, represented by the vector [0, 1, 0, 0, 0]. The states where the mouse has perished do not contribute to the survival average, so state 5 can be ignored. The initial state and transition matrix can be reduced to,

\boldsymbol{\tau} = [0, 1, 0, 0], \qquad T = \begin{bmatrix} 0 & 0 & \frac{1}{2} & 0 \\ 0 & 0 & 1 & 0 \\ \frac{1}{4} & \frac{1}{4} & 0 & \frac{1}{4} \\ 0 & 0 & \frac{1}{2} & 0 \end{bmatrix},

and

(I - T)^{-1} \mathbf{1} = \begin{bmatrix} 2.75 \\ 4.5 \\ 3.5 \\ 2.75 \end{bmatrix},

where I is the identity matrix, and 1 represents a column matrix of all ones that acts as a sum over states.

Since each state is occupied for one step of time, the expected time of the mouse's survival is just the sum of the probability of occupation over all surviving states and steps in time,

E[K] = \boldsymbol{\tau} \left(I + T + T^{2} + \cdots \right) \mathbf{1} = \boldsymbol{\tau} (I - T)^{-1} \mathbf{1} = 4.5.
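A sketch of this computation with NumPy, solving (I − T) t = 1 rather than forming the inverse explicitly:

```python
import numpy as np

tau = np.array([0, 1, 0, 0])             # start in state 2 = (1,5)
T = np.array([[0,   0,   1/2, 0  ],
              [0,   0,   1,   0  ],
              [1/4, 1/4, 0,   1/4],
              [0,   0,   1/2, 0  ]])

# Expected number of steps to absorption from each transient state.
t = np.linalg.solve(np.eye(4) - T, np.ones(4))
print(t)                                 # [2.75 4.5  3.5  2.75]

# Expected survival time of the mouse, starting from state 2.
print(tau @ t)                           # 4.5
```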

Higher order moments are given by

E[K(K-1) \dots (K-n+1)] = n! \, \boldsymbol{\tau} (I - T)^{-n} T^{n-1} \mathbf{1}.
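For instance, the formula with n = 2 gives the second factorial moment, from which the variance of K follows; a sketch under the same τ and T as above:

```python
import numpy as np
from math import factorial

tau = np.array([0, 1, 0, 0])
T = np.array([[0,   0,   1/2, 0  ],
              [0,   0,   1,   0  ],
              [1/4, 1/4, 0,   1/4],
              [0,   0,   1/2, 0  ]])
M = np.linalg.inv(np.eye(4) - T)         # fundamental matrix (I - T)^{-1}
ones = np.ones(4)

def factorial_moment(n):
    # E[K(K-1)...(K-n+1)] = n! * tau (I-T)^{-n} T^{n-1} 1
    return (factorial(n) * tau
            @ np.linalg.matrix_power(M, n)
            @ np.linalg.matrix_power(T, n - 1)
            @ ones)

EK = factorial_moment(1)                 # 4.5, as before
varK = factorial_moment(2) + EK - EK**2  # Var[K] = E[K^2] - E[K]^2 = 8.25
print(EK, varK)
```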


References

  1. Asmussen, S. R. (2003). "Markov Chains". Applied Probability and Queues. Stochastic Modelling and Applied Probability. Vol. 51. pp. 3–8. doi:10.1007/0-387-21525-5_1. ISBN 978-0-387-00211-8.
  2. Lawler, Gregory F. (2006). Introduction to Stochastic Processes (2nd ed.). CRC Press. ISBN 1-58488-651-X.
  3. Hayes, Brian (2013). "First links in the Markov chain". American Scientist. 101 (2): 92–96. doi:10.1511/2013.101.92.
  4. Grinstead, Charles Miller; Snell, James Laurie (1997). Introduction to Probability. American Mathematical Soc. pp. 464–466. ISBN 978-0-8218-0749-1.
  5. Kendall, D. G.; Batchelor, G. K.; Bingham, N. H.; Hayman, W. K.; Hyland, J. M. E.; Lorentz, G. G.; Moffatt, H. K.; Parry, W.; Razborov, A. A.; Robinson, C. A.; Whittle, P. (1990). "Andrei Nikolaevich Kolmogorov (1903–1987)". Bulletin of the London Mathematical Society. 22 (1): 33. doi:10.1112/blms/22.1.31.
  6. Solow, Robert (1 January 1952). "On the Structure of Linear Models". Econometrica. 20 (1): 29–46. doi:10.2307/1907805. JSTOR 1907805.
  7. Sittler, R. (1 December 1956). "Systems Analysis of Discrete Markov Processes". IRE Transactions on Circuit Theory. 3 (4): 257–266. doi:10.1109/TCT.1956.1086324. ISSN 0096-2007.
  8. Evans, Selby (1 July 1967). "Vargus 7: Computed patterns from markov processes". Behavioral Science. 12 (4): 323–328. doi:10.1002/bs.3830120407. ISSN 1099-1743.
  9. Gingerich, P. D. (1 January 1969). "Markov analysis of cyclic alluvial sediments". Journal of Sedimentary Research. 39 (1): 330–332. Bibcode:1969JSedR..39..330G. doi:10.1306/74d71c4e-2b21-11d7-8648000102c1865d. ISSN 1527-1404.
  10. Krumbein, W. C.; Dacey, Michael F. (1 March 1969). "Markov chains and embedded Markov chains in geology". Journal of the International Association for Mathematical Geology. 1 (1): 79–96. Bibcode:1969MatG....1...79K. doi:10.1007/BF02047072. ISSN 0020-5958.
  11. Wolfe, Harry B. (1 May 1967). "Models for Conditioning Aging of Residential Structures". Journal of the American Institute of Planners. 33 (3): 192–196. doi:10.1080/01944366708977915. ISSN 0002-8991.
  12. Krenk, S. (November 1989). "A Markov matrix for fatigue load simulation and rainflow range evaluation". Structural Safety. 6 (2–4): 247–258. doi:10.1016/0167-4730(89)90025-8.
  13. Beck, J. Robert; Pauker, Stephen G. (1 December 1983). "The Markov Process in Medical Prognosis". Medical Decision Making. 3 (4): 419–458. doi:10.1177/0272989X8300300403. ISSN 0272-989X. PMID 6668990.
  14. Gotz, Glenn A.; McCall, John J. (1 March 1983). "Sequential Analysis of the Stay/Leave Decision: U.S. Air Force Officers". Management Science. 29 (3): 335–351. doi:10.1287/mnsc.29.3.335. ISSN 0025-1909.
  15. Kamusoko, Courage; Aniya, Masamu; Adi, Bongo; Manjoro, Munyaradzi (1 July 2009). "Rural sustainability under threat in Zimbabwe – Simulation of future land use/cover changes in the Bindura district based on the Markov-cellular automata model". Applied Geography. 29 (3): 435–447. Bibcode:2009AppGe..29..435K. doi:10.1016/j.apgeog.2008.10.002.
  16. Munger, Devon; Nickerson, Andrew; Paparella, Pietro (2024). "Demystifying the Karpelevič theorem". Linear Algebra and Its Applications. 702: 46–62. arXiv:2309.03849. doi:10.1016/j.laa.2024.08.006.
  17. Karpelevič, Fridrikh (1951). "On the characteristic roots of matrices with nonnegative elements". Izv. Math. 15 (4).
  18. Kolmogorov, Andrei (1937). "Markov chains with a countable number of possible states". Bull. Mosk. Gos. Univ. Math. Mekh. 1 (3): 1–15.
  19. Dmitriev, Nikolai; Dynkin, Eugene (1946). "On characteristic roots of stochastic matrices". Izvestiya Rossiiskoi Akademii Nauk. Seriya Matematicheskaya. 10 (2): 167–184.
  20. Kardar, Mehran (2007). Statistical Physics of Fields. Cambridge University Press. ISBN 978-0-521-87341-3. OCLC 920137477.