4.1.1 Mathematics: Markov Chains
The underlying mathematical construction for Monte Carlo techniques is that of the
Markov chain. Given a probability space (Ω, A, P), a Markov chain is a sequence of
random variables (X_k)_{k≥0} distributed in some way over a state-space S with the property
that:
P(X_k = x_k | X_{k-1} = x_{k-1}, X_{k-2} = x_{k-2}, ..., X_0 = x_0) = P(X_k = x_k | X_{k-1} = x_{k-1}) = T_{x_{k-1}, x_k}   (4.1)
for some transition mapping T. Notice that the random variable X_k depends on X_{k-1} only,
and not the earlier random variables in the sequence. This is sometimes referred to as the
Markov chain's lack of memory. A distribution p_k : S -> [0, 1] for an element X_k of the
Markov chain is defined to be the probability that X_k takes the value x, or:
p_k(x) = P(X_k = x).   (4.2)
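As a minimal sketch (pure Python, with a made-up three-state transition matrix), such a chain can be sampled step by step; note that each draw uses only the current state, illustrating the lack of memory:

```python
import random

# Illustrative 3-state chain (made-up numbers): T[i][j] = P(X_k = j | X_{k-1} = i),
# so every row of T sums to 1; T plays the role of the mapping in (4.1).
T = [[0.9, 0.1, 0.0],
     [0.2, 0.6, 0.2],
     [0.0, 0.3, 0.7]]

def simulate(x0, n, seed=0):
    """Sample a trajectory X_0, ..., X_n.

    Each step draws X_k from row T[X_{k-1}] alone -- the chain's lack
    of memory: earlier states never enter the computation.
    """
    rng = random.Random(seed)
    xs = [x0]
    for _ in range(n):
        xs.append(rng.choices(range(len(T)), weights=T[xs[-1]])[0])
    return xs

path = simulate(0, 10)
```

The choice of states {0, 1, 2} and the entries of T are purely illustrative; any row-stochastic matrix over a finite state-space works the same way.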
Clearly Σ_{x∈S} p_k(x) = 1 for any distribution (hence the name). It is useful to think of the
Markov index k in (X_k) as time and investigate the evolution of the distribution with
time:
p_{t+1}(x) = Σ_{x'∈S} T_{x', x} p_t(x').   (4.3)
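For a finite chain, the evolution (4.3) can be sketched directly (again with an illustrative, made-up matrix T): iterating it from any starting distribution drives p_t toward a fixed point p = pT, the equilibrium distribution discussed below.

```python
# Evolution of a distribution under (4.3) for an illustrative, made-up
# 3-state chain; T[i][j] = P(next = j | current = i), each row sums to 1.
T = [[0.9, 0.1, 0.0],
     [0.2, 0.6, 0.2],
     [0.0, 0.3, 0.7]]

def evolve(p, steps=1):
    """Apply p_{t+1}(x) = sum_{x'} T[x'][x] * p_t(x') for `steps` steps."""
    n = len(T)
    for _ in range(steps):
        p = [sum(T[xp][x] * p[xp] for xp in range(n)) for x in range(n)]
    return p

p0 = [1.0, 0.0, 0.0]    # chain starts surely in state 0
p = evolve(p0, 200)     # one further step now barely changes p, i.e. p ≈ pT
```

Each application of evolve is one row-vector/matrix multiplication; probability mass is conserved at every step because the rows of T sum to 1.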
If S is finite (which it is in any computer simulation), then T is a matrix whose rows sum
to 1. An equilibrium distribution† π is defined by π = πT. A very important result of
the mathematical theory of Markov chains is that an aperiodic, irreducible and positive
A probability space is a triplet (Ω, A, P) where Ω represents the set of possible
outcomes, A is the set of events represented as a collection of subsets of Ω, and P : A ->
[0, 1] is the probability function.
† Equilibrium distributions are sometimes called invariant or stationary.