standard data analysis formulas [21]. Given the data sets $(A_1, A_2, \ldots, A_N)$ and $(B_1, B_2, \ldots, B_N)$,
$$\bar{A} = \frac{1}{N}\sum_{i=1}^{N} A_i, \qquad \bar{B} = \frac{1}{N}\sum_{i=1}^{N} B_i,$$
$$\sigma_{AB} \equiv \mathrm{cov}(A,B) = \frac{1}{N-1}\sum_{i=1}^{N} (A_i - \bar{A})(B_i - \bar{B}).$$
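As a sketch, the sample mean and the unbiased sample covariance estimators can be computed directly (the data values here are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical data sets (A_1, ..., A_N) and (B_1, ..., B_N).
A = np.array([1.0, 2.0, 4.0, 3.0])
B = np.array([2.0, 1.0, 5.0, 4.0])
N = len(A)

# Sample means A_bar and B_bar.
A_bar = A.sum() / N
B_bar = B.sum() / N

# Unbiased sample covariance: sum_i (A_i - A_bar)(B_i - B_bar) / (N - 1).
cov_AB = np.sum((A - A_bar) * (B - B_bar)) / (N - 1)

# Cross-check against NumPy's built-in estimator (ddof=1 by default for np.cov).
assert np.isclose(cov_AB, np.cov(A, B)[0, 1])
```

The `1/(N-1)` normalization makes the estimator unbiased for data drawn from independent samples.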
Turning next to point 2) above, the interpretation of the expression for F is
the following: we consider the values $w_k = (y_k - f_A(x_k))$, for $k = 1, \ldots, K$, to be
normally distributed random variables with given variance and covariance. Recall that the
underlying n (sweep label) dependence of $x_k$ and $y_k$ has now been discarded and replaced
by an "allowable" distribution in the "errors" $w_k$. If $v_k$, for $k = 1, \ldots, K$, were normally
distributed random variables with mean zero and variance one, uncorrelated among
themselves, we could relate the v's and w's by $Cv = w$, with the matrix C given by
the Cholesky factorization of the error matrix E and $v = (v_1, \ldots, v_K)^T$. But since v is so
simple, we can calculate the norm as follows:
$$\|v\|^2 = v^T v = (C^{-1} w)^T (C^{-1} w) = w^T E^{-1} w \equiv F \qquad (4\text{-}14)$$
so that F is just the squared distance between the data and the fitting function $f_A$.
Minimizing F(A) as a function of A is just the well-known least-squares fitting, with
the extension that we use statistical information about the fitted data. We minimize F by
use of the Levenberg-Marquardt algorithm, which is an optimized and robust version of the
Newton-Raphson class of methods. These optimization methods "search" for the minimum
by starting at an initial guess for the parameters A and travelling in A-space, with certain
rules determining the step $\Delta A$, evaluating F at each step until convergence is obtained.
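The evaluation of F via the Cholesky factor, rather than an explicit inverse of E, can be sketched as follows (E and w here are small hypothetical values, not data from the text):

```python
import numpy as np

# Hypothetical symmetric positive-definite error matrix E and residual vector
# w, with w_k = y_k - f_A(x_k).
E = np.array([[2.0, 0.5],
              [0.5, 1.0]])
w = np.array([1.0, -2.0])

# Cholesky factorization E = C C^T (C lower triangular).
C = np.linalg.cholesky(E)

# Solve C v = w for v; then F = v^T v = w^T E^{-1} w, as in Eq. (4-14).
v = np.linalg.solve(C, w)
F_chol = v @ v

# Direct evaluation of w^T E^{-1} w for comparison.
F_direct = w @ np.linalg.solve(E, w)
assert np.isclose(F_chol, F_direct)
```

Solving the triangular system is cheaper and numerically better conditioned than forming $E^{-1}$ explicitly.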
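A minimal sketch of such a minimization, using the Levenberg-Marquardt implementation in SciPy on a hypothetical model $f_A(x) = a\,e^{-bx}$ (the model, data, and starting guess are illustrative assumptions, not taken from the text):

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical fitting function f_A(x) with parameters A = (a, b).
def f_A(params, x):
    a, b = params
    return a * np.exp(-b * x)

# Synthetic, noise-free data generated from known parameters (a, b) = (2.0, 0.5).
x = np.linspace(0.0, 4.0, 20)
y = f_A([2.0, 0.5], x)

# Residuals passed to the optimizer; for correlated errors one would instead
# return C^{-1}(y - f_A(x)), with C from the Cholesky factorization of E,
# so that the summed squares equal F of Eq. (4-14).
def residuals(params):
    return y - f_A(params, x)

# method='lm' selects the Levenberg-Marquardt algorithm. Starting from an
# initial guess, it steps through A-space until convergence.
result = least_squares(residuals, x0=[1.0, 1.0], method='lm')
```

With noise-free data the fit recovers the generating parameters; `result.cost` is half the sum of squared residuals at the minimum.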