Other important parameters exist that are specific to each segmentation method; these are explored in the following sections.
2.2.1 Classical Sequential Supervised Approach
The first step toward understanding segmentation is to understand supervised change
detection in a signal that switches among a finite number K of known processes.
In that case, predictive models are available (or are developed for each process from
measured time series) and are represented by their PDFs pk(X(n)), where X(n) is the history
of the measurements x(n) over n time steps.
Then the time series is monitored online by computing, at every step and for every pair of
processes, the log-likelihood ratio of the processes' PDF values at that time step, as
shown in (2.6).
L_{i,j}(n) = log [ p_i(X(n)) / p_j(X(n)) ],  with i, j = 1, ..., K    (2.6)
The log-likelihood ratio is a very important tool in sequential change point
detection. Indeed, it serves as the decision function: once a starting regime i has
been identified, the ratios L_{j,i}(n) are monitored for all j ≠ i, and whenever one L_{j,i}(n) rises above a
set threshold, a change of regime to state j is declared at that point. This method therefore
requires offline training of the experts, after which the segmentation is performed online.
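The monitoring scheme above can be sketched in code. The example below is a minimal illustration, not the text's implementation: it assumes two regimes modeled as Gaussians with made-up parameters, i.i.d. samples within a regime (so the log-likelihood of the history X(n) is a running sum of per-sample log-ratios), and adds a CUSUM-style reset to zero, a common practical variant that keeps the statistic from drifting while the current regime holds.

```python
import math

# Illustrative assumption: each known regime k has a Gaussian PDF p_k,
# here regime 0 ~ N(0, 1) and regime 1 ~ N(3, 1).
REGIMES = {0: (0.0, 1.0), 1: (3.0, 1.0)}  # regime -> (mean, std)

def log_pdf(x, mean, std):
    # Log of the Gaussian PDF; logs are summed instead of multiplying
    # raw PDF values, for numerical stability.
    return -0.5 * math.log(2 * math.pi * std * std) \
           - (x - mean) ** 2 / (2 * std * std)

def detect_change(samples, current=0, threshold=5.0):
    """Monitor the cumulative log-likelihood ratios L_{j,i}(n) against the
    current regime i; declare a change when one exceeds the threshold.
    Returns (step index, new regime) or None if no change is detected."""
    mi, si = REGIMES[current]
    llr = {j: 0.0 for j in REGIMES if j != current}
    for n, x in enumerate(samples):
        for j in llr:
            mj, sj = REGIMES[j]
            llr[j] += log_pdf(x, mj, sj) - log_pdf(x, mi, si)
            llr[j] = max(llr[j], 0.0)  # CUSUM-style reset (practical variant)
            if llr[j] > threshold:
                return n, j
    return None

# Noise-free toy signal: 30 steps in regime 0, then 30 steps in regime 1.
data = [0.0] * 30 + [3.0] * 30
print(detect_change(data))  # -> (31, 1): change flagged shortly after step 30
```

Real signals are of course noisy; the threshold then trades off detection delay against the false-alarm rate, much like the sensitivity/robustness trade-off of any change detector.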
2.2.2 Classical Sequential Unsupervised Approach
When the number of states (or subprocesses) is very large or completely
unknown, and the change points between the regions are likewise unknown, the classical approach to
segmentation relies on sequential detection of change points in the data: a
local model of the data is developed and its dynamics monitored for change, as for the supervised