recurrent transition matrix $T$ has a unique equilibrium distribution $\pi$ and will converge
to it irrespective of its starting distribution; more precisely:
$$\sum_i (T^N)_{ji}\, p_i \xrightarrow{N \to \infty} \pi_j. \qquad (4\text{-}4)$$
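The convergence in (4-4) can be illustrated numerically. The following sketch uses a hypothetical two-state chain (the matrix $T$ below is an invented example, not taken from the text) whose equilibrium distribution works out to $\pi = (2/3, 1/3)$; repeatedly applying $T$ to two very different starting distributions drives both to the same $\pi$:

```python
# Hypothetical 2-state row-stochastic transition matrix (illustration only).
# Solving pi T = pi gives the equilibrium distribution pi = (2/3, 1/3).
T = [[0.9, 0.1],
     [0.2, 0.8]]

def step(p, T):
    """One application of the transition matrix: p'_j = sum_i p_i T_ij."""
    n = len(p)
    return [sum(p[i] * T[i][j] for i in range(n)) for j in range(n)]

def evolve(p, T, n_steps):
    """Apply the transition matrix n_steps times to the distribution p."""
    for _ in range(n_steps):
        p = step(p, T)
    return p

# Two very different starting distributions...
p_a = evolve([1.0, 0.0], T, 200)
p_b = evolve([0.0, 1.0], T, 200)
# ...end up at the same equilibrium, pi = (2/3, 1/3).
```

The rate of convergence is governed by the second-largest eigenvalue of $T$ (here $0.7$), so a few hundred steps are far more than enough for this toy chain.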
Moreover, such a transition matrix satisfies the Ergodic Theorem, which states that for any
bounded function $f : S \to \mathbb{R}$ we have:
$$\frac{1}{N} \sum_{k=0}^{N-1} f(x_k) \xrightarrow{N \to \infty} \bar{f} \qquad (4\text{-}5)$$
with probability one,† where $\bar{f} = \sum_{x \in S} \pi(x) f(x)$. This result is a statement that one
may find intuitively sensible: that the proportion of time which the Markov chain spends
in a state $x$ approaches $\pi(x)$, the value of the equilibrium distribution in that state. It
is a common strategy to approximate $\bar{f}$ with the finite sum $\frac{1}{N} \sum_{k=0}^{N-1} f(x_k)$ using the Ergodic
Theorem. This strategy is called importance sampling because it can be thought of as
summing over the important elements of $S$: those that sample $\pi$.
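A minimal sketch of this strategy, again using an invented two-state chain (the matrix $T$ and the observable $f(x) = x$ below are hypothetical examples, not from the text): the chain is simulated for $N$ steps and the time average $\frac{1}{N}\sum_k f(x_k)$ is compared to the exact $\bar{f} = \sum_x \pi(x) f(x)$.

```python
import random

# Hypothetical 2-state chain with equilibrium pi = (2/3, 1/3) (example only).
T = [[0.9, 0.1],
     [0.2, 0.8]]

def simulate(T, x0, n_steps, rng):
    """Generate a trajectory x_0, x_1, ..., x_{N-1} of the Markov chain."""
    x = x0
    for _ in range(n_steps):
        yield x
        # Jump to the next state with probabilities from row x of T.
        x = rng.choices(range(len(T)), weights=T[x])[0]

# Estimate f_bar for the observable f(x) = x via the time average
# (1/N) sum_k f(x_k); the exact value is 0 * 2/3 + 1 * 1/3 = 1/3.
rng = random.Random(0)
N = 200_000
estimate = sum(x for x in simulate(T, 0, N, rng)) / N
```

Note that successive states of the chain are correlated, so the statistical error of the estimate decays more slowly than for independent samples; for this toy chain the autocorrelation is short and $N = 2 \times 10^5$ steps give a result close to $1/3$.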
By construction, the Markov chain can simulate many physical processes, simply
because the defining "lack of memory" property is seen so widely in nature. Markov
chains in general have many applications in the field of physics outside of stochastic
numerical simulations. But Monte Carlo simulations are the topic of interest here, and
we are now in a position to clarify what is meant by Monte Carlo simulations in the
first place. Let us consider the case where the state space S is very large and where each
element of it is a complicated object. Let us further assume that we want to calculate the
Irreducibility is sometimes called ergodicity, and essentially means that the transition
matrix must be able to go everywhere in the state space. We will see this better later. The
conditions are not of great concern; positive recurrence is automatic in a finite state space.
† Rigorously, this means that the set of trajectories $\{x_k\}$, $k = 1, 2, \ldots, N$, for which (4-5) is
true has probability one, or equivalently that (4-5) fails only on a subset of $\Omega$ with measure
zero.