The embedding theorem. Takens' embedding theorem [18] allows states to be
reconstructed from observations. Let a vector time series be defined by (3.3), where x(n) is
the sample of the time series at time n. The time series described by z(n) traces a path of
observations in a Euclidean space of dimension k.
z(n) = {x(n), x(n - r), ..., x(n - (k - 1)r)} (3.3)
The embedding theorem states that for a certain value K of k sufficiently large (and
in the absence of noise), the trajectories of the data never intersect in that state space,
and k > K is a sufficient condition for the existence of a diffeomorphism that maps the
current state z(n) to the next state z(n + 1). Assuming the underlying dynamics of the system
are smooth enough to be modeled from the observable, this one-to-one mapping is the
description of the system's dynamics that the network will try to model; K is called the
embedding dimension, and r is called the lag.
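As a minimal sketch of the construction in (3.3), the following Python function (the name delay_embed and the example signal are illustrative choices, not from the source) stacks the delayed samples into state vectors z(n):

```python
import numpy as np

def delay_embed(x, k, r):
    """Time-delay embedding: row n is z(n) = [x(n), x(n-r), ..., x(n-(k-1)r)]."""
    x = np.asarray(x)
    n0 = (k - 1) * r          # first index with a complete delayed history
    rows = [[x[n - j * r] for j in range(k)] for n in range(n0, len(x))]
    return np.array(rows)

# Example: embed a sampled sine wave with k = 3 delays and lag r = 2
t = np.arange(20)
x = np.sin(0.3 * t)
Z = delay_embed(x, k=3, r=2)
print(Z.shape)  # (16, 3): one k-dimensional state per admissible time index
```

Each row of Z is one point of the reconstructed trajectory in the k-dimensional space; a model of the dynamics is then trained to map row n to row n + 1.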
Note that the standard embedding uses a simple time-delay embedding, but
quantities other than delayed observations can be useful in some applications, such as: the
time since the last local maximum or minimum of the time series; the result of a change
of base in the reconstruction space obtained by multiplying each state vector z(n) by a
weighting matrix of full rank K (guaranteeing the existence of the mapping); or the
trajectory of the data in a memory space created by the implementation of the embedding
with a gamma delay line.
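The last alternative can be sketched as follows. In a gamma delay line, each tap is a first-order leaky filter of the previous tap, g_j(n) = (1 - mu) g_j(n - 1) + mu g_{j-1}(n - 1), with g_0(n) = x(n); the memory-depth parameter mu and the function name below are illustrative assumptions, not taken from the source:

```python
import numpy as np

def gamma_memory(x, k, mu):
    """Gamma delay line (sketch): replaces pure delays with leaky taps.
    g_0(n) = x(n); g_j(n) = (1-mu)*g_j(n-1) + mu*g_{j-1}(n-1) for j >= 1."""
    x = np.asarray(x, dtype=float)
    g = np.zeros(k)                    # current tap values g_0 .. g_{k-1}
    states = np.zeros((len(x), k))     # trajectory in the k-dim memory space
    for n in range(len(x)):
        prev = g.copy()                # taps at time n-1
        g[0] = x[n]
        for j in range(1, k):
            g[j] = (1 - mu) * prev[j] + mu * prev[j - 1]
        states[n] = g
    return states
```

With mu = 1 the recursion reduces to g_j(n) = g_{j-1}(n - 1), i.e. the ordinary tapped delay line of (3.3) with r = 1; values of mu below 1 trade temporal resolution for a longer, smoothed memory.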
Theoretically, if the embedding is performed perfectly, any nonlinear adaptive
system can identify the mapping in the reconstruction space, so the theorem by itself would
not be helpful in determining the best modeling strategy [19]. All these considerations of course
assume stationarity of the data, which does not hold in our case, and therefore we can still expect