In practice we first run MC simulations for a given lattice to obtain estimates for E, and then run a much longer simulation to obtain the data points (x_k, y_k) using the previous estimate for the error matrix. We start the discussion with point 1) above.
In obtaining an estimate for the |j − j'| dependence of Corr(s, s') we calculate only ⟨s_{ij} s_{i'j'}⟩, since we are not interested in the additive constant given by the second term of the correlation. This corresponds to keeping (i', j') fixed and using only the data for which s_{i'j'} = +1. In practice this is achieved by simply not updating the spin at (i', j') and calculating only ⟨s_{ij}⟩ on this lattice. The data set mentioned above then becomes y_k = ⟨s_{ij}⟩ and x_k = |j − j'|.
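The fixed-spin measurement described above can be sketched in a short simulation. The following is a minimal illustration, not the authors' actual code: it assumes a square-lattice Ising model with Metropolis updates, and the lattice size L, inverse temperature BETA, sweep count, and pinned site PIN are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 16                   # hypothetical lattice size
BETA = 0.5               # hypothetical inverse temperature
PIN = (L // 2, L // 2)   # site (i', j') whose spin is held at +1

def sweep(s):
    """One Metropolis sweep; the spin at PIN is simply never updated."""
    for i in range(L):
        for j in range(L):
            if (i, j) == PIN:
                continue
            # Sum of the four nearest neighbours (periodic boundaries).
            nn = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2.0 * s[i, j] * nn
            if dE <= 0 or rng.random() < np.exp(-BETA * dE):
                s[i, j] = -s[i, j]

s = np.ones((L, L), dtype=int)   # start with the pinned value everywhere
means = np.zeros((L, L))
N_SWEEPS = 200                   # hypothetical run length
for _ in range(N_SWEEPS):
    sweep(s)
    means += s
means /= N_SWEEPS                # estimate of <s_ij> with s at PIN fixed to +1
```

Reading off `means` along a row through PIN then gives the pairs y_k = ⟨s_{ij}⟩ versus x_k = |j − j'| mentioned in the text.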
As was explained in section 4.1.1, we may need to throw out some of the configurations in the sequence (s(n)) because successive configurations generated by the Markov chain are too strongly correlated. This is done by calculating the "lattice-mean" autocorrelation:

Aut(m) = (1/M) Σ_{i,j} s_{ij}(n + m) s_{ij}(n),

where M is the number of lattice sites.
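The lattice-mean autocorrelation above is straightforward to compute from a stored sequence of configurations. A minimal sketch, assuming the sequence is held as a NumPy array of shape (N, L, L) and averaging the product over all available starting sweeps n:

```python
import numpy as np

def lattice_autocorrelation(configs, m):
    """Aut(m) = (1/M) sum_{i,j} s_ij(n+m) s_ij(n), averaged over n.

    configs: array of shape (N, L, L) holding the sequence s(n).
    """
    N = configs.shape[0]
    M = configs.shape[1] * configs.shape[2]   # number of lattice sites
    # Pair every configuration with the one m sweeps later.
    prods = configs[m:] * configs[:N - m]
    return prods.sum(axis=(1, 2)).mean() / M
```

For ±1 spins Aut(0) = 1 by construction, and fitting log Aut(m) against m yields the exponent whose reciprocal is the autocorrelation length discussed next.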
Typically Aut(m) is a falling exponential as a function of m. The reciprocal of the exponent is called the autocorrelation length, denoted by L_AUT, and in our case we have L_AUT ≈ 10–20. This means that by using only every L_AUT-th sweep, i.e., by generating a reduced data set s(k) for k = n·L_AUT + k_0 with n = 0, 1, 2, ..., we can be certain that the spin configurations are not sequentially correlated. For calculating the data points y_k it is correct to use all of the data‖, but when estimating the variance and covariance of the statistical errors we must use the reduced data set. These are calculated using
‖ This is simply because in the reduced data set the average over (s(n), s(n + 1), ..., s(n + L_AUT)) can be used instead of the value s(n) alone, and this is equivalent to just averaging over all the spin configurations.
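The prescription above, the mean from every sweep but the error from only the thinned set, can be sketched as follows. This is an illustration under stated assumptions, not the authors' procedure: L_AUT is set to a value in the quoted 10–20 range, the offset k_0 is taken as zero, and the thinned points are treated as independent when forming the variance of the mean.

```python
import numpy as np

L_AUT = 15   # assumed autocorrelation length, within the quoted 10-20 range

def estimate(measurements, l_aut=L_AUT, k0=0):
    """Return (mean, variance of the mean) for one observable.

    The mean uses every sweep; the error uses only every l_aut-th
    measurement, k = n*l_aut + k0, treated as independent samples.
    """
    measurements = np.asarray(measurements, dtype=float)
    y = measurements.mean()                  # y_k: all data enters here
    reduced = measurements[k0::l_aut]        # thinned, decorrelated subset
    n = len(reduced)
    var_of_mean = reduced.var(ddof=1) / n    # sample variance of the mean
    return y, var_of_mean
```

The same thinned subset would be used for the covariances between different data points y_k when building the full error matrix.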