These approaches were first tested on a synthetic data set, to understand the effect of the system and model parameters; approach IV was then applied to real data, yielding good results, especially with the RBFs. Overall, the results presented in chapter 5 are very encouraging and show that these segmentation methods can work well even when the data is not piecewise stationary, though at the cost of many manual parameter settings and design choices. The main challenge resides in the interdependence of all these parameters.

As previously explained, optimizing several parameters could improve the results. One of these is the embedding: since the data is oversampled over some segments, a simple unit delay is probably not the optimal lag, and simulations with a lag of two or three should be performed.

Future work should also include considering more experts (at least three: one for each type of inspiration and one for expiration), so that there is more opportunity for specialization and differentiation between the experts, and allowing those experts to be structurally different: since they model processes that may not require the same state-space dimension or the same memory depth on the criterion, they might not even be accurately modeled by the same kind of expert. The self-organizing map of competing predictors [28] might perform better than the algorithms tested, but it seemed awkward to create a Kohonen map for only two predictors.

Improvements on the models could derive from using a linear fit of the data complemented by RBFs explaining the residual variations, and from examining different estimates for the criterion (as discussed, a boxcar average would give too much weight to past values of the instantaneous error for the purpose of change detection, but a ramp or triangular window would have the advantage of being
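The effect of the lag on the embedding can be illustrated with a minimal delay-embedding sketch; the function name and signature here are illustrative, not taken from the thesis code. With `lag=1` (the simple delay used so far) consecutive coordinates of an oversampled signal are nearly identical, whereas `lag=2` or `lag=3` spreads the delay vector over a longer stretch of the signal:

```python
import numpy as np

def delay_embed(x, dim, lag):
    """Build delay vectors [x[t], x[t+lag], ..., x[t+(dim-1)*lag]]
    from a scalar time series x. Returns an (n, dim) array, one
    delay vector per row."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * lag  # number of complete delay vectors
    return np.stack([x[i * lag : i * lag + n] for i in range(dim)], axis=1)

# On an oversampled series, lag=1 gives almost-redundant coordinates;
# lag=2 covers twice the time span with the same embedding dimension.
x = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
emb1 = delay_embed(x, dim=3, lag=1)  # shape (98, 3)
emb2 = delay_embed(x, dim=3, lag=2)  # shape (96, 3)
```

Running the same simulations with `lag=2` and `lag=3` then only requires changing this one parameter of the embedding.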
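The difference between a boxcar and a ramp-weighted estimate of the criterion can be sketched as follows; this is a minimal illustration under the assumption that the criterion is a windowed average of squared instantaneous errors, and the function names are hypothetical. Because the ramp weights recent errors more heavily, the criterion reacts faster when the instantaneous error jumps at a regime change:

```python
import numpy as np

def boxcar_criterion(errors, window):
    """Uniformly weighted average of the last `window` squared errors."""
    e = np.asarray(errors[-window:], dtype=float) ** 2
    return float(e.mean())

def ramp_criterion(errors, window):
    """Ramp-weighted average: weights grow linearly toward the most
    recent error, so older errors contribute less to the criterion."""
    e = np.asarray(errors[-window:], dtype=float) ** 2
    w = np.arange(1, len(e) + 1, dtype=float)  # weights 1, 2, ..., window
    return float(np.sum(w * e) / np.sum(w))

# Instantaneous error jumps from 1 to 4 over the last two samples,
# as it would when the active regime changes:
errors = [1.0] * 8 + [4.0] * 2
boxcar = boxcar_criterion(errors, window=10)  # 4.0
ramp = ramp_criterion(errors, window=10)      # ~6.18, reacts faster
```

A triangular window behaves similarly, with the weights rising and then falling so the peak emphasis sits slightly behind the most recent sample.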