
In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process, which means that its future evolution is independent of its history. It is named after the Russian mathematician Andrey Markov. The term strong Markov property is similar to the Markov property, except that the meaning of "present" is defined in terms of a random variable known as a stopping time.
The term Markov assumption is used to describe a model where the Markov property is assumed to hold, such as a hidden Markov model.
A Markov random field extends this property to two or more dimensions or to random variables defined for an interconnected network of items.[1] An example of a model for such a field is the Ising model.
A discrete-time stochastic process satisfying the Markov property is known as a Markov chain.
A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present values) depends only upon the present state; that is, given the present, the future does not depend on the past. A process with this property is said to be Markov or Markovian and known as a Markov process. Two famous classes of Markov process are the Markov chain and Brownian motion.
There is a subtle and important point that is often missed in the plain-English statement of the definition: the state space of the process is constant through time, so the conditional description involves a fixed "bandwidth". For example, without this restriction we could augment any process into one that includes the complete history from a given initial condition, and it would trivially be made Markovian; but the state space would then have increasing dimensionality over time, which does not meet the definition.
Let $(\Omega, \mathcal{F}, P)$ be a probability space with a filtration $(\mathcal{F}_s,\ s \in I)$, for some (totally ordered) index set $I$; and let $(S, \mathcal{S})$ be a measurable space. An $S$-valued stochastic process $X = (X_t,\ t \in I)$ adapted to the filtration is said to possess the Markov property if, for each $A \in \mathcal{S}$ and each $s, t \in I$ with $s < t$,

$$P(X_t \in A \mid \mathcal{F}_s) = P(X_t \in A \mid X_s).$$
In the case where $S$ is a discrete set with the discrete sigma algebra and $I = \mathbb{N}$, this can be reformulated as follows:

$$P(X_n = x_n \mid X_{n-1} = x_{n-1}, \dots, X_0 = x_0) = P(X_n = x_n \mid X_{n-1} = x_{n-1}).$$

In other words, the distribution of $X$ at time $n$ depends solely on the state of the process at time $n-1$ and is independent of its state at any time previous to $n-1$, which corresponds precisely to the intuition described in the introduction.
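As an illustration (a minimal sketch, not part of the original article), the following Python snippet samples such a discrete-time chain; the three states and the transition matrix `P` are arbitrary choices:

```python
import random

# Hypothetical 3-state chain; each row of P is the distribution of the
# next state given the current state -- nothing else about the path matters.
STATES = [0, 1, 2]
P = [
    [0.5, 0.3, 0.2],  # P(X_n = . | X_{n-1} = 0)
    [0.1, 0.6, 0.3],  # P(X_n = . | X_{n-1} = 1)
    [0.4, 0.4, 0.2],  # P(X_n = . | X_{n-1} = 2)
]

def step(current_state: int) -> int:
    """Sample X_n given only X_{n-1}; the Markov property is built in,
    since this function never sees the earlier history."""
    return random.choices(STATES, weights=P[current_state])[0]

def sample_path(x0: int, n: int) -> list[int]:
    path = [x0]
    for _ in range(n):
        path.append(step(path[-1]))
    return path

print(sample_path(0, 10))
```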
If $I = [0, \infty)$, then $X$ is called time-homogeneous if for all $s \le t$, $B \in \mathcal{S}$ and $x \in S$ the weak Markov property holds:[3]

$$P(X_t \in B \mid X_s = x) = P(X_{t-s} \in B \mid X_0 = x).$$
The newly introduced probability measure $p_t(x, B) := P(X_t \in B \mid X_0 = x)$ has the following intuition: it gives the probability that the process lies in the set $B$ at time $t$ when it was started in $x$ at time zero. The function $p_t$ is also called the transition function of $X$, and the collection $(p_t)_{t \ge 0}$ its transition semigroup.
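On a finite state space the transition function and its semigroup can be made concrete: $p_t$ is a stochastic matrix, and the semigroup (Chapman-Kolmogorov) property $p_{s+t} = p_s \, p_t$ becomes matrix multiplication. A minimal sketch, assuming an arbitrary 3-state chain:

```python
import numpy as np

# One-step transition matrix of a hypothetical 3-state chain (rows sum to 1).
p1 = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])

def transition_function(t: int) -> np.ndarray:
    """p_t(x, {y}) as a matrix: entry (x, y) is the probability of being
    in state y at time t after starting in state x at time zero."""
    return np.linalg.matrix_power(p1, t)

# Semigroup (Chapman-Kolmogorov) property: p_{s+t} = p_s p_t.
s, t = 2, 3
assert np.allclose(transition_function(s + t),
                   transition_function(s) @ transition_function(t))
print(transition_function(5))
```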
There exist multiple alternative formulations of the elementary Markov property described above. For an adapted process $X$ and all $s, t \in I$ with $s \le t$, the following are all equivalent:[4][5]

- $P(X_t \in A \mid \mathcal{F}_s) = P(X_t \in A \mid X_s)$ almost surely, for all $A \in \mathcal{S}$.
- $E[f(X_t) \mid \mathcal{F}_s] = E[f(X_t) \mid X_s]$ almost surely, for all bounded, $\mathcal{S}$-measurable $f : S \to \mathbb{R}$.
- $E[Y \mid \mathcal{F}_s] = E[Y \mid X_s]$ almost surely, for all bounded, $\sigma(X_u,\ u \ge s)$-measurable random variables $Y$.
- The $\sigma$-algebras $\mathcal{F}_s$ (the past) and $\sigma(X_u,\ u \ge s)$ (the future) are conditionally independent given $X_s$, i.e., $P(A \cap B \mid X_s) = P(A \mid X_s)\,P(B \mid X_s)$ almost surely for all $A \in \mathcal{F}_s$ and $B \in \sigma(X_u,\ u \ge s)$.
If there exists a so-called shift semigroup, i.e., a family of functions $\theta_t : \Omega \to \Omega$, $t \in I$, such that

$$X_s \circ \theta_t = X_{s+t} \quad \text{for all } s, t \in I,$$

then the Markov property is equivalent to either of the following:[4]

- $P(\theta_t^{-1}(A) \mid \mathcal{F}_t) = P(\theta_t^{-1}(A) \mid X_t)$ almost surely, for all $t \in I$ and all $A \in \sigma(X_u,\ u \in I)$.
- $E[Y \circ \theta_t \mid \mathcal{F}_t] = E[Y \circ \theta_t \mid X_t]$ almost surely, for all $t \in I$ and all bounded, $\sigma(X_u,\ u \in I)$-measurable $Y : \Omega \to \mathbb{R}$.
Depending on the situation, some formulations might be easier to verify or to use than others.
Suppose that $X = (X_t,\ t \ge 0)$ is a stochastic process on a probability space $(\Omega, \mathcal{F}, P)$ with natural filtration $\{\mathcal{F}_t\}_{t \ge 0}$. Then for any stopping time $\tau$ on $\Omega$, we can define the $\sigma$-algebra

$$\mathcal{F}_\tau = \{A \in \mathcal{F} : \{\tau \le t\} \cap A \in \mathcal{F}_t \text{ for all } t \ge 0\}.$$

Then $X$ is said to have the strong Markov property if, for each stopping time $\tau$, conditional on the event $\{\tau < \infty\}$, we have that for each $t \ge 0$, $X_{\tau+t}$ is independent of $\mathcal{F}_\tau$ given $X_\tau$. This is equivalent to

$$P(X_{\tau+t} \in A \mid \mathcal{F}_\tau)\,\mathbf{1}_{\{\tau<\infty\}} = P(X_{\tau+t} \in A \mid X_\tau)\,\mathbf{1}_{\{\tau<\infty\}} \quad \text{almost surely, for all } A \in \mathcal{S},\ t \ge 0,$$

where $\mathbf{1}_{\{\tau<\infty\}}$ denotes the indicator function of the set $\{\tau < \infty\}$.
The strong Markov property implies the ordinary Markov property: taking the deterministic time $\tau = t$ as the stopping time, the ordinary Markov property can be deduced.[6] The converse is in general not true.
The strong Markov property leads to non-trivial results (i.e., results which do not hold with merely the elementary Markov property) only in continuous time, since in the discrete-time case the strong and the elementary Markov property are equivalent.[7]
Although the strong Markov property is in general stronger than the elementary Markov property, it is fulfilled by Markov processes with sufficiently "nice" regularity properties.
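As an informal numerical illustration (a Monte Carlo sketch, not a proof), the strong Markov property of the simple random walk can be checked at the hitting time of a level: the increments after the hitting time should be distributed like a fresh walk started from that level:

```python
import random

def simple_walk_after_hitting(level=3, horizon=20, max_steps=10_000):
    """Run a simple random walk from 0 until it first hits `level`
    (the stopping time tau), then keep walking `horizon` more steps
    and return X_{tau + horizon} - X_tau (None if tau was not reached)."""
    x, steps = 0, 0
    while x != level:
        if steps >= max_steps:
            return None             # treat as {tau = infinity}; discard
        x += random.choice([-1, 1])
        steps += 1
    start = x                       # = level = X_tau
    for _ in range(horizon):
        x += random.choice([-1, 1])
    return x - start

# The strong Markov property says the law of X_{tau+h} - X_tau equals the
# law of a fresh h-step walk from 0. Compare sample means as a sanity check.
post = [v for v in (simple_walk_after_hitting() for _ in range(2000)) if v is not None]
fresh = [sum(random.choice([-1, 1]) for _ in range(20)) for _ in range(2000)]
print(sum(post) / len(post), sum(fresh) / len(fresh))  # both approximately 0
```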
A continuous-time Markov process $X$ is said to have the Feller property if its transition semigroup $(p_t)_{t \ge 0}$ (see above), acting on functions via $T_t f(x) := \int_S f(y)\, p_t(x, \mathrm{d}y)$, fulfills[4]

$$T_t\bigl(C_0(S)\bigr) \subseteq C_0(S) \quad \text{and} \quad \lim_{t \downarrow 0} \|T_t f - f\|_\infty = 0 \text{ for all } f \in C_0(S),$$

where $C_0(S)$ denotes the set of continuous functions vanishing at infinity and $\|\cdot\|_\infty$ the sup norm. Then one can show that (if the filtration is augmented) such a process has a version with right-continuous (even càdlàg) paths, which in turn fulfills the strong Markov property.
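As a standard worked example (not taken from the cited sources), Brownian motion on $\mathbb{R}$ has the Feller property; a sketch of the verification:

```latex
% Sketch: the Brownian transition semigroup is Feller.
% T_t f(x) = E[f(x + W_t)] with W_t ~ N(0, t).
\begin{align*}
  T_t f(x) &= \int_{\mathbb{R}} f(y)\,\frac{1}{\sqrt{2\pi t}}\,
              e^{-(y-x)^2/(2t)}\,\mathrm{d}y
            = \mathbb{E}\bigl[f(x + \sqrt{t}\,Z)\bigr],
              \quad Z \sim \mathcal{N}(0,1). \\
  % Continuity and vanishing at infinity follow by dominated
  % convergence, since |f(x + \sqrt{t} Z)| \le \|f\|_\infty:
  x_n \to x &\implies T_t f(x_n) \to T_t f(x), \qquad
  |x| \to \infty \implies T_t f(x) \to 0. \\
  % Strong continuity at t = 0: f \in C_0 is uniformly continuous, so
  \|T_t f - f\|_\infty
    &\le \sup_x \mathbb{E}\bigl[\,\lvert f(x + \sqrt{t}\,Z) - f(x)\rvert\,\bigr]
     \xrightarrow[t \downarrow 0]{} 0.
\end{align*}
```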
Assume that an urn contains two red balls and one green ball. One ball was drawn yesterday, one ball was drawn today, and the final ball will be drawn tomorrow. All of the draws are "without replacement".
Suppose you know that today's ball was red, but you have no information about yesterday's ball. The chance that tomorrow's ball will be red is 1/2, because the only two remaining outcomes for this random experiment are:
| Day | Outcome 1 | Outcome 2 |
|---|---|---|
| Yesterday | Red | Green |
| Today | Red | Red |
| Tomorrow | Green | Red |
On the other hand, if you know that both today and yesterday's balls were red, then you are guaranteed to get a green ball tomorrow.
This discrepancy shows that the probability distribution for tomorrow's color depends not only on the present value, but is also affected by information about the past. This stochastic process of observed colors does not have the Markov property. Using the same experiment above, if sampling "without replacement" is changed to sampling "with replacement," the process of observed colors will have the Markov property.[8]
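A small Monte Carlo check of this example (a sketch; the exact probabilities follow from the enumeration above and, in the with-replacement case, from independence of the draws):

```python
import random

def draw_three(with_replacement: bool):
    """Draw yesterday's, today's and tomorrow's ball from an urn
    containing two red balls and one green ball."""
    urn = ["red", "red", "green"]
    if with_replacement:
        return [random.choice(urn) for _ in range(3)]
    random.shuffle(urn)
    return urn  # drawing without replacement is a random ordering

def p_tomorrow_red(with_replacement, know_yesterday_red):
    hits = total = 0
    while total < 100_000:
        yesterday, today, tomorrow = draw_three(with_replacement)
        if today != "red":
            continue                 # condition on today's ball being red
        if know_yesterday_red and yesterday != "red":
            continue                 # optionally condition on the past
        total += 1
        hits += (tomorrow == "red")
    return hits / total

# Without replacement the past matters: ~0.5 versus exactly 0.
print(p_tomorrow_red(False, False), p_tomorrow_red(False, True))
# With replacement the past is irrelevant: both ~2/3.
print(p_tomorrow_red(True, False), p_tomorrow_red(True, True))
```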
Many prominent stochastic processes are Markov processes: Brownian motion, the Brownian bridge, the stochastic exponential, the Ornstein-Uhlenbeck process and the Poisson process all have the Markov property.
More generally, any semimartingale $X$ with values in $\mathbb{R}^d$ that is given by the stochastic differential equation

$$\mathrm{d}X_t = \mu(X_t)\,\mathrm{d}t + \sigma(X_t)\,\mathrm{d}W_t,$$

where $W$ is an $n$-dimensional Brownian motion and $\mu : \mathbb{R}^d \to \mathbb{R}^d$, $\sigma : \mathbb{R}^d \to \mathbb{R}^{d \times n}$ are autonomous (i.e., they do not depend on time) Lipschitz functions, is time-homogeneous and has the strong Markov property. If $\mu$ and $\sigma$ are not autonomous, then $X$ still has the elementary Markov property.[3]
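As a hedged sketch (with illustrative parameters, not taken from the source), an Euler-Maruyama discretisation of an autonomous SDE, here an Ornstein-Uhlenbeck process, makes the Markovian structure explicit: each step uses only the current value, and time-homogeneity corresponds to the coefficients ignoring $t$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Autonomous (time-independent) Lipschitz coefficients -- illustrative choices.
def mu(x):     # drift of an Ornstein-Uhlenbeck process
    return -1.5 * x

def sigma(x):  # constant diffusion coefficient
    return 0.8

def euler_maruyama(x0, n_steps, dt=1e-3):
    """Simulate dX_t = mu(X_t) dt + sigma(X_t) dW_t from X_0 = x0.
    Each step uses only the current state, so the scheme is Markovian
    by construction; mu and sigma ignoring t gives time-homogeneity."""
    x = x0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x = x + mu(x) * dt + sigma(x) * dw
    return x

print(euler_maruyama(x0=1.0, n_steps=5_000))
```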
In the fields of predictive modelling and probabilistic forecasting, the Markov property is considered desirable, since it can make reasoning about and solving a problem tractable where it would otherwise not be. Such a model is known as a Markov model.
An application of the Markov property in a generalized form is in Markov chain Monte Carlo computations in the context of Bayesian statistics.
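For instance, a minimal random-walk Metropolis sampler (a sketch with an arbitrary standard-normal target, not a production implementation): each proposal and accept/reject decision uses only the current state, so the output sequence is a Markov chain whose stationary distribution is the target:

```python
import math
import random

def log_target(x):
    """Unnormalised log-density of the example target (standard normal)."""
    return -0.5 * x * x

def metropolis(n_samples, step=1.0, x0=0.0):
    """Random-walk Metropolis: the next sample depends only on the
    current one (the Markov property), never on the older history."""
    x, out = x0, []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        log_alpha = min(0.0, log_target(proposal) - log_target(x))
        if random.random() < math.exp(log_alpha):
            x = proposal              # accept the proposed move
        out.append(x)                 # a rejection repeats the current state
    return out

samples = metropolis(50_000)
print(sum(samples) / len(samples))    # should be near 0 for this target
```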