I have been looking for a package to do time series modelling in R with neural networks for quite some time, with limited success. The only implementation I am aware of that takes care of autoregressive lags in a user-friendly way is the nnetar function in the forecast package, written by Rob Hyndman. In my view there is space for a more flexible implementation, so I decided to write a few functions for that purpose. For now these are included in the TStools package that is available on GitHub, but when I am happy with their performance and flexibility I will put them in a package of their own.
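To follow the examples you will need to install the package from GitHub. A minimal sketch of the installation, assuming the usual devtools workflow (the repository path here is my assumption):

# Install TStools from GitHub (repository path is an assumption)
if (!requireNamespace("devtools", quietly = TRUE)) install.packages("devtools")
devtools::install_github("trnnick/TStools")
library(TStools)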
Here I will provide a quick overview of what is available right now. I plan to write a more detailed post about these functions when I get the time.
For this example I will model the AirPassengers time series available in R. I have kept the last 24 observations as a test set and will use the rest to fit the neural networks. Currently there are two types of neural network available, both feed-forward: (i) multilayer perceptrons (use function mlp); and (ii) extreme learning machines (use function elm).
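The snippets below assume a setup along these lines; the variable names y.in, y.out and tst.n are simply the ones used in the code that follows:

# AirPassengers is monthly, 1949-1960; hold out the last 24 observations
tst.n <- 24
y.in  <- window(AirPassengers, end = c(1958, 12))    # training set
y.out <- window(AirPassengers, start = c(1959, 1))   # test set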
# Fit MLP
mlp.fit <- mlp(y.in)
plot(mlp.fit)
print(mlp.fit)
This is the basic command to fit an MLP network to a time series. It will attempt to automatically specify the autoregressive inputs and any necessary pre-processing of the time series. With the pre-specified arguments it trains 20 networks, each with a single hidden layer of 5 nodes, and uses them to produce an ensemble forecast. You can override any of these settings. The output of print is:
MLP fit with 5 hidden nodes and 20 repetitions.
Series modelled in differences: D1.
Univariate lags: (1,3,4,6,7,8,9,10,12)
Deterministic seasonal dummies included.
Forecast combined using the median operator.
MSE: 6.2011.
As you can see, the function determined that level differences are needed to capture the trend. It also selected some autoregressive lags and decided to use dummy variables for the seasonality. Using plot displays the architecture of the network (Fig. 1).
The light red inputs represent the binary dummies used to code seasonality, while the grey ones are autoregressive lags. To produce forecasts you can type:
mlp.frc <- forecast(mlp.fit, h=tst.n)
plot(mlp.frc)
Fig. 2 shows the ensemble forecast, together with the forecasts of the individual neural networks. You can control the way that forecasts are combined (I recommend using the median or mode operators), as well as the size of the ensemble.
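For example, a sketch of how this might look, assuming reps controls the ensemble size and comb the combination operator (check the documentation to confirm the argument names):

# Assumed arguments: reps (ensemble size) and comb (combination operator)
mlp.fit2 <- mlp(y.in, reps = 30, comb = "mode")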
You can also let it choose the number of hidden nodes. There are various options for that, but all are computationally expensive (I plan to move the base code to CUDA at some point, so that computational cost stops being an issue).
# Fit MLP with automatic hidden layer specification
mlp2.fit <- mlp(y.in, hd.auto.type="valid", hd.max=10)
print(round(mlp2.fit$MSEH, 4))
This will evaluate networks with 1 up to 10 hidden nodes and pick the size with the lowest validation-set MSE. You can also use cross-validation (if you have patience…). You can ask it to output the errors for each size:
        MSE
H.1  0.0083
H.2  0.0066
H.3  0.0065
H.4  0.0066
H.5  0.0071
H.6  0.0074
H.7  0.0061
H.8  0.0076
H.9  0.0083
H.10 0.0076
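The cross-validated alternative mentioned above would presumably look like this, assuming "cv" is the corresponding hd.auto.type value:

# Assumed option: select the hidden layer size by cross-validation
mlp3.fit <- mlp(y.in, hd.auto.type="cv", hd.max=10)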
There are a few experimental options for specifying various aspects of the neural networks; these are not fully documented, so it is probably best to stay away from them for now!
ELMs work in pretty much the same way, although for these I have made the automatic specification of the hidden layer the default.
# Fit ELM
elm.fit <- elm(y.in)
print(elm.fit)
plot(elm.fit)
This gives the following network summary:
ELM fit with 100 hidden nodes and 20 repetitions.
Series modelled in differences: D1.
Univariate lags: (1,3,4,6,7,8,9,10,12)
Deterministic seasonal dummies included.
Forecast combined using the median operator.
Output weight estimation using: lasso.
MSE: 83.0044.
I appreciate that using 100 hidden nodes on such a short time series can make some people uneasy, but I am using a shrinkage estimator instead of conventional least squares to estimate the weights, which in fact eliminates most of the connections. This is apparent in the network architecture in Fig. 3. Only the nodes connected with the black lines to the output layer contribute to the forecasts. The remaining connection weights have been shrunk to zero.
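Forecasting from the fitted ELM should work just as for the MLP; a quick sketch:

# Forecast with the fitted ELM (same forecast generic as for the MLP)
elm.frc <- forecast(elm.fit, h=tst.n)
plot(elm.frc)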
The networks can also be used with Temporal Hierarchies Forecasting (THieF), for which the mlp.thief function is provided as a wrapper:

# Use THieF
library(thief)
mlp.frc.thief <- thief(y.in, h=tst.n, forecastfunction=mlp.thief)
There is a similar function for using ELM networks (the code below assumes it is called elm.thief, mirroring mlp.thief):
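# Assumed name: elm.thief, the ELM counterpart of mlp.thief
elm.frc.thief <- thief(y.in, h=tst.n, forecastfunction=elm.thief)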
Since I kept a test set for this simple example, I benchmark the forecasts against exponential smoothing:
MLP (5 nodes)    62.471
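For reference, a minimal sketch of how such a comparison could be computed, assuming the ets function from the forecast package and the train/test split defined earlier:

# Benchmark against exponential smoothing (ets from the forecast package)
library(forecast)
ets.frc <- forecast(ets(y.in), h=tst.n)
# Out-of-sample mean squared errors
mean((y.out - mlp.frc$mean)^2)
mean((y.out - ets.frc$mean)^2)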
Temporal hierarchies, like MAPA, are great for making your forecasts more robust and often more accurate. However, with neural networks the additional computational cost is evident!
These functions are still in development, so the default values may change and there are a few experimental options that may give you good results or not!