* Approach III: In cases where the return maps of the different experts overlap,
simulations show that with the hard competition of approach II an expert often starts
to win, gets adapted, and begins fitting the other regime as well. In fact, since the regimes
differ only in the part of the breathing cycle after the end of inspiration, few
samples allow the criterion to reveal a change of regime, which creates a problem
associated with the "build-up" property of the criterion. So instead
of monitoring only the MSE estimate, we also monitor its slope to detect
unusual spikes indicating a possible change of regime, in which
case the winning expert is not adapted, but the second-best one is (the threshold on the
slope is set visually, based on prior simulations on the same data).
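The slope test above can be sketched as follows. This is a minimal illustration, not the author's implementation: the one-step slope, the function names, and the ranking of experts by instantaneous error are all assumptions; the threshold itself is, as stated, chosen visually from prior simulations.

```python
def detect_regime_change(mse_history, slope_threshold):
    """Return True if the latest jump (slope) of the running MSE
    estimate exceeds the visually chosen threshold."""
    if len(mse_history) < 2:
        return False
    slope = mse_history[-1] - mse_history[-2]  # one-step slope of the MSE estimate
    return slope > slope_threshold

def choose_expert_to_adapt(errors, mse_history, slope_threshold):
    """On a detected spike, adapt the second-best expert instead of the
    current winner (indices ranked by current prediction error)."""
    ranked = sorted(range(len(errors)), key=lambda i: errors[i])
    winner, runner_up = ranked[0], ranked[1]
    if detect_regime_change(mse_history, slope_threshold):
        return runner_up
    return winner
```

With a flat MSE history the winner is adapted; when the slope spikes past the threshold, adaptation switches to the runner-up.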
* Approach IV: This last approach borrows from the annealed competition of experts
(ACE) [3] and the self-annealing competitive prediction (SACP) [15] algorithms.
It is identical to ACE in that the gating function is formed
under the assumption that all experts have equal prediction-error variance, but
without assuming a Gaussian distribution of the error: the gating function is defined
according to the SACP as in (2.11) and normalized in (2.12).
$$\varphi_i(n) = [e_i(n)]^2 \qquad (2.10)$$
$$\tilde{g}_i(n) = [\varphi_i(n)]^{-M} \qquad (2.11)$$
$$g_i(n) = \frac{\tilde{g}_i(n)}{\sum_{k=1}^{K} \tilde{g}_k(n)} \qquad (2.12)$$
M being the parameter that determines the degree of competition: the higher M, the
harder the competition. Since the real data is not even piecewise stationary and
may exhibit slow changes, M is not annealed to a very high value (we
cannot assume that after several breaths in the same regime we have more
information about it: this holds for the test set, but not for real data). Also, the
experts are adapted at every time step with a learning rate proportional to the gate
output, as in the online ACE.
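The SACP-style gating and the gate-proportional adaptation can be sketched as below. This is a hedged illustration under the equal error-variance assumption of the text; the function names and the `base_lr` parameter are assumptions, not from the source.

```python
def gating(squared_errors, M):
    """Gate outputs: each expert's squared error raised to -M,
    then normalized over the K experts (SACP-style competition)."""
    phi = [se ** (-M) for se in squared_errors]   # unnormalized gates
    total = sum(phi)
    return [p / total for p in phi]               # normalization over all experts

def adapt_rates(squared_errors, M, base_lr):
    """Per-expert learning rates proportional to the gate output,
    as in the online ACE."""
    return [base_lr * g for g in gating(squared_errors, M)]
```

Raising M sharpens the competition: the lowest-error expert's gate (and hence its learning rate) approaches 1 while the others approach 0, which is why M is kept moderate for non-stationary real data.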
2.2.4 Implementation Concerns
Most of the difficulties in using these methods lie in the number of parameters that
need to be set and in their effect on the performance of the system. For example, the
size and initialization of the experts and their training parameters are addressed in the
next chapter, but the competition and memory parameters also play a large role in
achieving a good segmentation. While the segmentation methods are designed to be