Figure 4-7 Results: (a) Flow: desired in blue, predicted in red; (b) MSE criterion, solid line for expert 1, dotted line for expert 2; (c) Winner; (d) Pressure.

4.2.2 Other Approaches: Online Adaptation

For the two types of RBFN, the linear layer is updated by gradient descent with the same learning rate l = 0.01, determined experimentally (except for approach IV, where l = 0.002, because each expert is adapted even if it is not the winner).

K-means clustering. For approach II, Figure 4-8 presents the results for k-means clustering. Adapting the second-layer weights clearly helps in detecting the previously undetected mandatory breaths, but the "buildup" still appears, so detection occurs only within the last few samples. The memory depth could not be made shorter, however, since there is already an unwarranted switch on the set of three mandatory breaths (the first switch is detected accurately only because it occurs within the first 100 data samples, while the criterion, initialized at zero, has not yet reached its full memory depth and has therefore not built up).
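To make the update rule concrete, the following is a minimal sketch (not the code used in these experiments) of winner-take-all online adaptation of RBF experts, assuming fixed Gaussian centers (e.g. obtained by k-means), an LMS-style gradient-descent update of the linear output layer, and a sliding-window MSE criterion whose window length plays the role of the memory depth discussed above. The names RBFExpert, run_online, lr_winner, lr_other, and memory_depth are illustrative; setting lr_other to a nonzero value such as 0.002 corresponds to adapting the non-winning experts as well, as in approach IV.

```python
import numpy as np

class RBFExpert:
    """RBF network with fixed Gaussian centers and an adaptable linear layer."""
    def __init__(self, centers, width, n_outputs=1):
        self.centers = np.asarray(centers)           # (n_centers, n_inputs)
        self.width = width                           # shared Gaussian width
        self.w = np.zeros((self.centers.shape[0] + 1, n_outputs))  # +1 for bias

    def _phi(self, x):
        # Gaussian activations of the first (fixed) layer, plus a bias term
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        phi = np.exp(-d2 / (2.0 * self.width ** 2))
        return np.append(phi, 1.0)

    def predict(self, x):
        return self._phi(x) @ self.w

    def adapt(self, x, error, lr):
        # Gradient-descent (LMS) step on the linear output layer only
        self.w += lr * np.outer(self._phi(x), error)

def run_online(experts, inputs, targets, lr_winner=0.01, lr_other=0.0,
               memory_depth=100):
    """Winner-take-all online adaptation with a sliding-window MSE criterion."""
    n_exp = len(experts)
    err_hist = [[] for _ in range(n_exp)]            # recent squared errors
    winners = []
    for x, y in zip(inputs, targets):
        errors = [y - e.predict(x) for e in experts]
        # Sliding-window MSE per expert; the window length is the memory depth
        for k in range(n_exp):
            err_hist[k].append(float(np.sum(errors[k] ** 2)))
            if len(err_hist[k]) > memory_depth:
                err_hist[k].pop(0)
        mse = [np.mean(h) for h in err_hist]
        winner = int(np.argmin(mse))
        winners.append(winner)
        # The winner is adapted at the full rate; the others optionally
        # at a smaller rate (approach-IV-style adaptation)
        for k in range(n_exp):
            lr = lr_winner if k == winner else lr_other
            if lr > 0.0:
                experts[k].adapt(x, errors[k], lr)
    return winners
```

Because each expert's criterion is averaged over the last memory_depth samples, a change of regime is reflected in the winner only after enough new samples have entered the window, which is the "buildup" effect described above; early in the run, while the window is still filling, switches can appear sooner.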