The analysis of the convergence behavior of the homogeneous algorithm includes the
hypothesis that each Markov chain in the sequence achieves its stationary distribution.
This hypothesis is equivalent to K_l \to \infty for all l (Definition A10, Theorem A3, Theorem
A4).
In the second case, the algorithm is represented as a sequence of solutions evolving
as a single inhomogeneous (nonstationary) Markov chain. This formulation is hereafter
referred to as the inhomogeneous (or nonstationary) algorithm. In the inhomogeneous
algorithm, the control parameter value is allowed to decrease (though not necessarily
required to) after each state transition. The dependence of G_{ij}(T) and A_{ij}(T) on T results in
the inhomogeneous behavior.
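The structural difference between the two formulations can be sketched in code. The following is an illustrative sketch, not taken from the source: the Metropolis-style acceptance criterion and the helper names cost, neighbor, and schedule are assumptions introduced here for illustration.

```python
# Sketch (assumptions, not from the source): the two formulations differ in
# when the control parameter T may decrease.  In the homogeneous algorithm a
# whole Markov chain of many transitions executes at each fixed T; in the
# inhomogeneous algorithm T may decrease after every single state transition.
import math
import random

def metropolis_step(state, cost, neighbor, T, rng):
    """One Metropolis-style transition; cost() and neighbor() are hypothetical."""
    cand = neighbor(state, rng)
    delta = cost(cand) - cost(state)
    # Accept improving moves always; accept worsening moves with prob exp(-delta/T).
    if delta <= 0 or rng.random() < math.exp(-delta / T):
        return cand
    return state

def homogeneous(state, cost, neighbor, temps, chain_len, rng):
    # T is held fixed while each chain of chain_len transitions executes.
    for T in temps:
        for _ in range(chain_len):
            state = metropolis_step(state, cost, neighbor, T, rng)
    return state

def inhomogeneous(state, cost, neighbor, schedule, n_steps, rng):
    # T may decrease after every transition; schedule(k) is a hypothetical
    # cooling function of the transition index k.
    for k in range(n_steps):
        state = metropolis_step(state, cost, neighbor, schedule(k), rng)
    return state
```

For example, minimizing a hypothetical cost C(x) = x^2 over the integers with unit-step neighbors exercises both variants; the homogeneous version would be called with an explicit list of temperatures and a chain length, the inhomogeneous version with a per-step cooling schedule such as T_k = T_0 / (1 + k).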
2.4.2.1 The Homogeneous Algorithm
In the homogeneous algorithm, the requirements for asymptotically optimal
convergence are established in two steps: first, sufficient conditions for the existence of
the stationary distribution of each Markov chain; second, sufficient conditions ensuring
that the stationary distribution converges to the uniform distribution over the set of
optimal solutions as the control parameter value approaches zero. That is,

    \lim_{T \to 0} q_T(i) = \begin{cases} 1/N_{opt}, & i \in S_{opt} \\ 0, & \text{otherwise} \end{cases}    Eq. 2.6

where q_T is the stationary distribution of the Markov chain executing at control parameter
value T, S_{opt} \subset S is the set of solutions \{ i \in S : C(i) = C_{opt} \}, and N_{opt} = card(S_{opt}).
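Eq. 2.6 can be illustrated numerically. The following is a minimal sketch, not from the source: it assumes symmetric generation probabilities, under which the stationary distribution of each homogeneous chain is the Boltzmann distribution q_T(i) = exp(-C(i)/T) / Z(T); the cost values used are hypothetical.

```python
# Sketch (assumption: symmetric generation probabilities, so the stationary
# distribution at temperature T is the Boltzmann distribution).  This
# illustrates Eq. 2.6: as T -> 0, q_T approaches the uniform distribution
# over the set of optimal solutions.
import math

def boltzmann(costs, T):
    """Stationary distribution over a finite solution space at temperature T."""
    # Subtracting the minimum cost for numerical stability leaves q_T unchanged.
    c_min = min(costs)
    weights = [math.exp(-(c - c_min) / T) for c in costs]
    Z = sum(weights)
    return [w / Z for w in weights]

# Hypothetical cost function over a 5-solution space; two optima with cost 1,
# so S_opt = {0, 2} and N_opt = 2.
costs = [1.0, 3.0, 1.0, 2.0, 5.0]

for T in (10.0, 1.0, 0.1, 0.01):
    q = boltzmann(costs, T)
    print(f"T={T:5.2f}  q_T = {[round(p, 4) for p in q]}")
# As T decreases, the probability mass concentrates on the two optimal
# solutions, each approaching 1/N_opt = 0.5; all other entries approach 0.
```

At high T the distribution is nearly uniform over all of S; at low T it is nearly uniform over S_opt alone, which is exactly the limit asserted in Eq. 2.6.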
Theorems A1-A3 can be employed to deduce sufficient conditions on P_T(i,j) (or
alternatively on G_{ij}(T) and A_{ij}(T)) to ensure the existence of the stationary distribution of
each Markov chain in the sequence representing the homogeneous algorithm. Since only
combinatorial (finite solution space) optimization problems are under consideration and
since by definition the homogeneous algorithm only employs time-homogeneous Markov
chains, the finite state space and time-homogeneity requirements of Theorem A3 are