The first part of this tutorial introduces the rationale of simplex projection, one of the building blocks of Empirical Dynamic Modelling (EDM). The second part shows how to run a simplex projection using the rEDM R package (Ye et al. 2017) to forecast (predict) values in a time series. You will also learn how the forecast skill of the simplex projection reveals important properties of the dynamics behind a time series. You will need the most recent version of R and the rEDM package installed on your computer.

After running this tutorial, you can review the concepts of forecasting and learn details about the rEDM library in the package's introductory documentation (vignette), which you can find here or in your local R installation with the command:
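Assuming the package is installed, the bundled vignette can be opened with the command below (the vignette name `rEDM-tutorial` is an assumption based on the names shipped with the package; check `vignette(package = "rEDM")` if it differs in your version):

```r
# List and open the introductory vignette of the rEDM package
vignette("rEDM-tutorial", package = "rEDM")
```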

Simplex projection: a visual walkthrough 1

The shadow manifold

We will start by simulating a deterministic discrete-time dynamics with chaotic behavior. To create a 150-step time series of this dynamics run the commands below in the R console:
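The original code chunk is not reproduced here; a minimal sketch that generates a comparable chaotic series is a two-species coupled logistic map. All parameter values and variable names below are illustrative assumptions, not necessarily the ones used to produce the figures in this tutorial:

```r
set.seed(42)              # for reproducibility (an assumption)
n <- 150                  # length of the time series
X <- Y <- numeric(n)
X[1] <- 0.4; Y[1] <- 0.2  # arbitrary initial conditions
for (t in 1:(n - 1)) {
  # Coupled logistic maps with parameter values in the chaotic regime
  X[t + 1] <- X[t] * (3.8 - 3.8 * X[t] - 0.02 * Y[t])
  Y[t + 1] <- Y[t] * (3.5 - 3.5 * Y[t] - 0.10 * X[t])
}
```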

If you look at this system’s dynamics through a single time series (time series of variable X, for example) the dynamics look quite messy:

Surprisingly, it is possible to recover the rules that govern a dynamical system from the time series (i.e., the sequence of observations) of a single variable of the system. Takens' theorem tells us that there is a simple way to recover the properties of a dynamical system by building a lagged-coordinate plot. Below you can see such a plot for the variable X, where each axis is the lagged X time series itself, i.e., \(X(t), X(t+1), X(t+2)\). Each point in this 3-D plot depicts the state of the system at a given time t and at the next two time steps (t+1 and t+2). If you play with the plot, turning it around, you can see that the points form a clear pattern. This pattern means that the possible states of the system are constrained to a certain subset of the plotting space, with a well-defined shape.
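A static version of this lagged-coordinate plot can be sketched with the scatterplot3d package (an assumption for illustration; the interactive figure is likely made with a different plotting library, and `X` is the simulated series from above):

```r
library(scatterplot3d)  # assumed helper package for a static 3-D plot
n <- length(X)
scatterplot3d(X[1:(n - 2)], X[2:(n - 1)], X[3:n],
              xlab = "X(t)", ylab = "X(t+1)", zlab = "X(t+2)")
```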

We call this plot space the state-space reconstruction (SSR), and the shape it reveals is called a shadow manifold, which is an embedding of the true attractor of the dynamical system. For now, let’s assume that the plot depicted below is a valid embedding 2, and thus that nearby points in the SSR correspond to similar system states.

Suppose we want to predict the last point of the time series (t = 150). The blue point in the 3-D lagged plot below has coordinates \((X(t-3), X(t-2), X(t-1))\) for t = 150, which is the state of the system immediately before t = 150. The first, second, third, and fourth nearest neighbors of the blue point are depicted in red, green, orange, and magenta, respectively. If you zoom into the points, you will see segments linking each neighbor to the focal point we want to predict. The lengths of these segments are Euclidean distances. You can also pan and rotate the cloud of points to get a better idea of their spatial pattern.

It is important to notice that the neighboring points in the manifold (above) correspond to the states of the system most similar to the focal state we want to predict. Below you can see each of these states in the time series, represented by the same code color used in the manifold above.

Simplex forecasting

And here is the trick: because the neighboring points in the shadow manifold are trajectories that match the focal one, they provide a good guess of what will happen next in the time series. To show this in the plot below, we moved each highlighted trajectory that corresponds to a neighboring point in the shadow manifold one time step forward. The arrows project the resulting values of X to t = 150, and the black triangle is the average of these projected points, which is the forecasted value of X for this time step. The actual value of X(t = 150) is the last point in the series. Not bad!

Simplex projection is therefore a forecast for a given state based on the average behavior of similar past states of the system. This average is weighted by the distance of each neighbor to the focal point in the shadow manifold - that is, closer points have more weight because they match the focal trajectory better.

The plot below shows the focal and neighboring points at their original positions and at their projected positions, that is, their new positions when we move one time step forward. The black point linked to the projected neighbors is the forecasted state. The blue point close to the black one is the actual state. You can zoom in, pan, and rotate the figure to check that the distances of the projected neighbors to the forecast are scaled to the distances of the original neighbors to the focal point.

There are some additional technicalities in calculating simplex projections. For instance, the weights of the neighbors scale exponentially with their distances to the focal point. All these details are in a pseudocode available here.
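The core of the weighted average can be sketched in a few lines of R (a simplified illustration of the exponential weighting scheme, not the package implementation; function and argument names are made up for this sketch):

```r
# Illustrative simplex forecast from the E + 1 nearest neighbors:
# d       - distances of the neighbors to the focal point
# x_next  - value of the series one step ahead of each neighbor
simplex_forecast <- function(d, x_next) {
  # Weights decay exponentially with distance, scaled by the
  # distance to the nearest neighbor
  w <- exp(-d / min(d))
  # The forecast is the weighted average of the projected neighbors
  sum(w * x_next) / sum(w)
}
```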

Simplex projection in practice

Before proceeding please read the subsections “Nearest Neighbor Forecasting using Simplex Projection” and “Prediction Decay” in the rEDM tutorial.

In this section we will use the rEDM package to forecast many points in the time series we created, and also to get some important information about the dynamics that generated this series. To start, open R and load the package by typing in the R console:
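The command is simply:

```r
library(rEDM)  # load the EDM toolkit (Ye et al. 2017), available on CRAN
```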

The function simplex in the rEDM package runs simplex projections for a time series. The argument E sets the embedding dimension (the number of time lags in the lagged plot), and the argument stats_only sets whether only the statistics of the forecasts should be stored, or the forecasted values too.

The command below forecasts one step forward each point in the series for an embedding dimension of three (as we used in the previous section), and stores the forecasted values:
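A sketch of this call, assuming the simulated series is stored in a vector `X` and the interface of older rEDM versions, where `simplex()` takes the series, `E`, and `stats_only` directly (the object name `smp` is illustrative):

```r
# One-step-ahead simplex forecasts with a 3-dimensional embedding,
# keeping the forecasted values (stats_only = FALSE)
smp <- simplex(X, E = 3, stats_only = FALSE)
names(smp)  # list the elements of the returned object
```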

The function returns a list with many elements, most of which are statistics about the forecast:

## [1] "stats"        "model_output"

The last element of the list, called model_output, is a list of data frames with the time index, the observed and predicted values, and the variance of the predicted values:
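Assuming the forecast was stored in an object called `smp` (an illustrative name), the first rows can be inspected with:

```r
# First data frame of forecasted values, one row per predicted point
head(smp$model_output[[1]])
```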

##   Index Observations Predictions Pred_Variance Const_Predictions
## 1     3    0.5392810         NaN           NaN               NaN
## 2     4    0.5030702   0.3636169  8.193049e-03         0.5392810
## 3     5    0.2915131   0.3533368  2.930746e-03         0.5030702
## 4     6    0.4366629   0.4492918  1.441837e-05         0.2915131
## 5     7    0.4564548   0.4841492  2.783205e-04         0.4366629
## 6     8    0.5577898   0.5034417  5.362669e-03         0.4564548

The one-step forward forecast was very good for most of the points:

The Pearson linear correlation between observed and predicted values is a measure of forecast skill. This correlation is one of the statistics available in the object returned by the simplex function:
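The correlation can also be computed directly from the observed and predicted values (again assuming the object `smp`, and the column names shown in the output above; the first prediction is NaN, hence `use = "complete.obs"`):

```r
# Forecast skill: Pearson correlation between observed and predicted
lapply(smp$model_output,
       function(df) cor(df$Observations, df$Predictions,
                        use = "complete.obs"))
```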

## [[1]]
## [1] 0.9234572

The correlation is close to one, which indicates an excellent forecast skill.

Optimal embedding dimension

We can run simplex forecasts for different embedding dimensions simply by providing multiple values to the argument E. The command below runs simplex projection forecasts using two, three, and ten embedding dimensions. The observed and predicted values for each projection are then used to plot predicted x observed values:
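A sketch of these commands, under the same assumptions as before (series in `X`, older rEDM interface, illustrative object names):

```r
# Simplex forecasts for three embedding dimensions
smp_E <- simplex(X, E = c(2, 3, 10), stats_only = FALSE)
# One predicted-vs-observed panel per embedding dimension
par(mfrow = c(1, 3))
for (i in 1:3) {
  df <- smp_E$model_output[[i]]
  plot(df$Observations, df$Predictions,
       xlab = "Observed", ylab = "Predicted",
       main = paste("E =", c(2, 3, 10)[i]))
  abline(0, 1, lty = 2)  # 1:1 line
}
```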

Embeddings with too few or too many dimensions do not properly unravel the original manifold into a shadow manifold and thus worsen the forecast. In our example we can see this for \(E=2\) and \(E=10\). To find the optimal embedding dimension, we can plot the forecast skill (the correlation between predicted and observed values) against the embedding dimension. The optimal number of dimensions for our current case is three (as we guessed in the previous section):
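A sketch of this plot (depending on the rEDM version, the statistics may be returned directly as a data frame or inside a `stats` element; the code below assumes a data frame with columns `E` and `rho`):

```r
# Forecast skill across embedding dimensions 1 to 10
smp_all <- simplex(X, E = 1:10)
plot(rho ~ E, data = smp_all, type = "b",
     xlab = "Embedding dimension (E)",
     ylab = "Forecast skill (rho)")
```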

Prediction decay

So far we have used simplex projection to forecast one time step forward, but how does simplex projection perform as we increase the forecasting horizon? To check this, run the commands below, which produce forecasts for one to ten time steps ahead (argument tp = 1:10 in the function simplex) and plot the forecast skill as a function of these times to prediction:
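A sketch of these commands, with the embedding dimension fixed at the optimal value found above (same assumptions about the interface and return value as in the previous sketches):

```r
# Forecast skill for one to ten steps ahead, at E = 3
smp_tp <- simplex(X, E = 3, tp = 1:10)
plot(rho ~ tp, data = smp_tp, type = "b",
     xlab = "Time to prediction (tp)",
     ylab = "Forecast skill (rho)")
```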

There is a sharp decline in forecast skill as we increase the time to prediction. This is a characteristic feature of chaotic dynamics. The next section shows how we can use this property to distinguish deterministic chaos from random noise in a time series.

Distinguishing error from chaos

Now we will apply the prediction-decay diagnostics to non-chaotic time series, starting with a purely deterministic simulation. The code below simulates 150 time steps of the logistic map on the verge of chaos. The result is a time series with a complex periodicity, but not quite a chaotic one.
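A sketch of this simulation; the growth-rate value is an illustrative assumption, chosen close to the onset of chaos of the logistic map (around r = 3.57):

```r
# Logistic map on the verge of chaos (r = 3.57 is an assumed value)
n <- 150
Z <- numeric(n)
Z[1] <- 0.5   # arbitrary initial condition
r <- 3.57
for (t in 1:(n - 1)) Z[t + 1] <- r * Z[t] * (1 - Z[t])
plot(Z, type = "l", xlab = "t", ylab = "Z")
```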

The forecast skill is very high (as we would expect for a non-chaotic deterministic time series) and varies very little with the embedding dimension.

We will proceed with an embedding dimension of two to check the forecast skill x prediction time plot. The plot shows that there is no decay in forecast skill with the time to prediction, which again is not surprising for a periodic deterministic time series:

Things get more interesting if we add some noise to the time series. The commands below simulate a time series with independent measurement errors of constant variance. To do so, we add to each observation a value drawn from a Gaussian distribution with zero mean and standard deviation equal to that of the time series itself.
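A sketch of these commands, assuming the logistic-map series simulated above is stored in a vector `Z` (an illustrative name):

```r
set.seed(1)  # reproducibility (an assumption)
# Independent Gaussian observation error with sd equal to sd of the series
Z_noisy <- Z + rnorm(length(Z), mean = 0, sd = sd(Z))
plot(Z_noisy, type = "l", xlab = "t", ylab = "Z + error")
```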

Now the series looks chaotic. Or should we say ‘only noisy’? The prediction decay plot provides the answer:

The prediction decay plot shows an overall decrease of forecast skill caused by the addition of measurement error, but there is no sign of prediction decay with forecast time:

Exercise

During the 1950s the entomologist Alexander Nicholson carried out a series of detailed experiments with caged populations of the sheep blowfly in Australia. The results suggested a very complex, non-linear system and inspired many advances in the mathematical modelling of population dynamics. For a detailed historical and mathematical account see Brillinger, J. Time Series Analysis, 2012.

In this exercise, you should use the tools provided in this tutorial to investigate how complex the dynamics behind Nicholson’s time series are. To download one of the few time series that were preserved, use the commands below:
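The original download commands are not reproduced here; the sketch below uses a placeholder address that you must replace with the actual location of the data file (the column layout is also an assumption, so inspect the file with `head()` first):

```r
# Placeholder, NOT a real link - replace with the address of the data file
nicholson_url <- "<URL-of-the-Nicholson-time-series>"
blowfly <- read.csv(nicholson_url)
head(blowfly)  # inspect the column names and layout before analysing
```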

Hints

1. The time series has some periodicity, which in some cases can be caused by seasonal factors. This makes EDM analyses much more complicated3. Nevertheless, in some cases you can keep things simple by analysing the time series of differenced values, which in our case are the variations in population size from one time step to the next. For the purpose of this exercise this trick solves the problem of periodicity in the data. You get the differenced time series with the command:
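A sketch with base R's `diff()` function (the vector name `pop` is an assumption; replace it with the population column of the downloaded data):

```r
# First differences: population change from one time step to the next
dpop <- diff(pop)
```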

2. S-mapping (Sugihara 1994) is another forecasting method, which can be used to identify another aspect of complex dynamics, called state-dependent behavior. This method is available in rEDM (see the package tutorial). Does this method provide additional insights into the dynamics behind Nicholson’s results?

Learn more

  • Sugihara G. and R. M. May. 1990. Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series. Nature 344:734–741.
  • Sugihara G. 1994. Nonlinear forecasting for the classification of natural time series. Philosophical Transactions: Physical Sciences and Engineering 348:477–495.
  • Anderson C. & Sugihara G. Simplex projection - Order out of chaos. http://deepeco.ucsd.edu/simplex/
  • “Simplex projection walkthrough”, a tutorial by Owen Petchey.

Glossary

  • Attractor: the set of points to which the trajectories of a dynamical system converge over time.

  • Lagged variables plot: a plot of a time series as a function of the same time series lagged by multiples of a length of time \(\tau\). For instance, the axes of a three-dimensional lagged variable plot are the original time series \(X(t)\), the lagged time series \(X(t + \tau)\), and \(X(t + 2\tau)\).

  • State space reconstruction: the process of making a representation of a state space of interest that preserves some important properties. In the context of EDM, the reconstructed state space corresponds to the lagged variables plot of a time series.

  • Shadow Manifold: the projection of the original manifold (which is an attractor in our case) into a Euclidean space. This space has a dimension (the embedding dimension) that ensures a one-to-one map to the original manifold.

  • Embedding dimension: for our purposes, the dimension of the Euclidean space used to build the shadow manifold. In the case of a state space reconstruction, the embedding dimension is the dimension of this space, that is, the number of axes of the lagged variables plot.


  1. This section is adapted from the tutorial “Simplex projection walkthrough”, by Owen Petchey.

  2. The next section shows some compelling evidence for this.

  3. For a full appraisal see: Deyle, E.R., Maher, M.C., Hernandez, R.D., Basu, S. and Sugihara, G., 2016. Global environmental drivers of influenza. Proceedings of the National Academy of Sciences, 113(46), pp.13081-13086.