This is the form employed by Kirkpatrick et al. in the original work [KiGe83], and most
others published since are variations of it. Also, the usual form of the next-state
generation mechanism is
$$
G_{ij}(T) \;=\; G_{ij} \;=\;
\begin{cases}
\dfrac{1}{N_i} & j \in S_i \\[4pt]
0 & \text{otherwise}
\end{cases}
\qquad \text{Eq. 2.5}
$$
where S_i ⊂ S is the set of states accessible from state i in one transition (by definition,
i ∉ S_i), and where N_i = card(S_i). Note that G_ij defined by Eq. 2.5 is symmetric and
independent of T.
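The uniform generation mechanism of Eq. 2.5 can be sketched as follows. This is an illustrative fragment, not from the source; the `neighbors` mapping (each state to its set S_i) and the function name are assumptions.

```python
import random

def generate_next_state(i, neighbors):
    """Sample a candidate next state j from S_i with the uniform
    generation probability G_ij = 1/N_i of Eq. 2.5.
    `neighbors` (hypothetical) maps each state i to its set S_i."""
    S_i = neighbors[i]
    # random.choice picks each j in S_i with probability 1/card(S_i);
    # G is symmetric provided j in S_i exactly when i in S_j.
    return random.choice(sorted(S_i))

# Example: four states on a cycle, each with two neighbors.
neighbors = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```

Because every state in S_i is equally likely and i ∉ S_i, the sampled transition probabilities match G_ij and do not depend on the control parameter T.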
2.4.2 Asymptotic Convergence Behavior
The subject of interest in the remainder of this section is a set of sufficient condi-
tions on PT and T to ensure that an optimal solution is achieved. These conditions will
prove to guarantee asymptotic convergence only (i.e. T must be an infinite sequence,
which of course violates the termination requirement of the algorithm). Two cases will be
examined. The first only involves time-homogeneous (stationary) Markov chains (Defini-
tion A5) and is presented due to its relative ease of analysis. Its purpose is to provide a
foundation for the essential ideas involved in the second case, which requires an appeal to
ergodicity theorems for inhomogeneous (nonstationary) Markov chains. The usable
convergence-behavior results that are the goal of this effort derive from analysis of the
second case.
The first (simple) algorithm is represented as a sequence of solutions evolving as a
sequence of distinct Markov chains. Each Markov chain in the sequence executes at a
fixed control parameter value (and hence is time-homogeneous) and each succeeding
Markov chain executes at a lower (but strictly positive) parameter value. Thus, in the
sequence T, each distinct parameter value, T_k, is associated with a distinct time-
homogeneous Markov chain, and T_k occurs at some large number of consecutive locations,
K_k, in T. This case is hereafter referred to as the homogeneous (or stationary) algorithm.
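The structure of the homogeneous algorithm can be sketched as a nested loop: an outer loop over the strictly positive, decreasing values T_k, and an inner loop running one finite time-homogeneous Markov chain of length K_k at each fixed T_k. This is a minimal sketch under stated assumptions (uniform generation per Eq. 2.5 and Metropolis acceptance); the names `cost`, `temps`, and `chain_lengths` are illustrative, not from the source.

```python
import math
import random

def homogeneous_anneal(cost, neighbors, x0, temps, chain_lengths, seed=0):
    """Sketch of the homogeneous (stationary) algorithm: a sequence of
    distinct time-homogeneous Markov chains, the k-th run at fixed
    control value T_k for K_k consecutive steps, with each succeeding
    T_k lower but strictly positive. All argument names are hypothetical."""
    rng = random.Random(seed)
    x = x0
    for T_k, K_k in zip(temps, chain_lengths):
        for _ in range(K_k):                      # one homogeneous chain
            j = rng.choice(sorted(neighbors[x]))  # generate per Eq. 2.5
            dE = cost(j) - cost(x)
            # Metropolis acceptance at the fixed value T_k
            if dE <= 0 or rng.random() < math.exp(-dE / T_k):
                x = j
    return x

# Example: four states on a cycle; cost is the state label itself.
nbrs = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
final = homogeneous_anneal(lambda s: s, nbrs, x0=2,
                           temps=[1.0, 0.1, 0.01],
                           chain_lengths=[50, 50, 50])
```

Because each chain runs at a fixed T_k, its transition matrix is constant over those K_k steps, which is exactly the time-homogeneity this first case exploits for its simpler analysis.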