APPENDIX A
DISCRETE TIME FINITE STATE MARKOV CHAINS
A.1 Introduction
The following paragraphs establish some definitions and theorems on discrete time
finite state Markov chains and related stochastic matrix concepts. These results fall into
three main categories: (1) elementary definitions, (2) definitions and theorems concerning
the state space and asymptotic behavior of time-homogeneous (stationary) Markov chains,
and (3) some more advanced ergodicity definitions and theorems necessary for the
analysis of the asymptotic behavior of inhomogeneous Markov chains. These results are
presented without proof or elaboration, but the foundation required for the more elementary
of them can be obtained from [Cinl75, IsMa76] or many other references on Markov chains.
The ergodicity-related results can be found in [IsMa76, Sene8].
Although some of the results discussed here apply to continuous time and/or
denumerably infinite state space Markov chains as well, the intention is to restrict consid-
eration to the discrete time finite state case. All references herein to Markov chains are
understood to mean discrete time finite state Markov chains.
In the following, let K = {0, 1, 2, ...} be the set of nonnegative integers, let
X = {X_k : k ∈ K} be a discrete time (i.e., discrete sequence index) stochastic process with
finite cardinality state space E, and let i, j ∈ E.
A.2 Elementary Definitions
Definition A.1: If for all i, j ∈ E and every k ∈ K,

Pr{X_{k+1} = j | X_0 = i_0, X_1 = i_1, ..., X_{k-1} = i_{k-1}, X_k = i} = Pr{X_{k+1} = j | X_k = i},

then X is a Markov chain.
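The Markov property in Definition A.1 says that the distribution of the next state depends only on the current state, not on the earlier history of the process. A minimal sketch of this idea (not part of the source; the matrix P and the state labels are hypothetical illustrations) simulates a discrete time finite state chain by drawing each X_{k+1} from the row of a transition matrix indexed by X_k alone:

```python
import random

# Hypothetical 3-state chain on E = {0, 1, 2}. P is row-stochastic:
# P[i][j] = Pr{X_{k+1} = j | X_k = i}, and each row sums to 1.
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
]

def step(i):
    """Draw X_{k+1} given X_k = i. Note the draw uses only i,
    never the earlier history -- this is the Markov property."""
    u, acc = random.random(), 0.0
    for j, p in enumerate(P[i]):
        acc += p
        if u < acc:
            return j
    return len(P[i]) - 1  # guard against floating-point rounding

def sample_path(x0, n):
    """Simulate X_0, X_1, ..., X_n starting from X_0 = x0."""
    path = [x0]
    for _ in range(n):
        path.append(step(path[-1]))
    return path
```

Because `step` receives only the current state, any two histories ending in the same state induce the same conditional distribution for the next state, which is exactly the equality in Definition A.1.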