ISF2018 Presentation: Beyond summary performance metrics for forecast selection and combination

Nikolaos Kourentzes, Ivan Svetunkov and Stephan Kolassa, ISF2018, 20th June 2018.

In forecast selection or combination we typically rely on some performance metric, for example the Akaike Information Criterion (AIC) or a cross-validated accuracy measure. From these we can either pick the top performer or construct combination weights. There is ample empirical evidence demonstrating the appropriateness of such metrics, both in terms of the resulting forecast accuracy and the automation of the forecasting process. Yet these performance metrics are summary statistics that do not reflect the higher moments of the metric's distribution. This poses issues similar to assessing the risk associated with a prediction from point forecasts alone, instead of also looking at prediction intervals. Summary statistics do not reflect the uncertainty in the ranking of alternative forecasts, and therefore the uncertainty in the selection and combination of forecasts.

We propose a modification in the use of the AIC, and an associated procedure for selecting a single forecast or constructing combination weights, that aims to go beyond summary statistics in characterising each forecast. We demonstrate that our approach does not require an arbitrary dichotomy between forecast selection, combination and pooling, and switches appropriately depending on the time series at hand and the pool of forecasts considered. The performance of the approach is evaluated empirically on a large number of real time series from various sources.
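For context, the conventional baseline the abstract refers to is the standard Akaike-weights construction, where each candidate model's AIC is turned into a combination weight. The sketch below illustrates this baseline only, not the modification proposed in the talk; the AIC values are hypothetical.

```python
import numpy as np

def akaike_weights(aics):
    """Standard Akaike weights: each model's weight is proportional to
    exp(-0.5 * delta_i), where delta_i is the model's AIC minus the
    minimum AIC in the pool; weights are normalised to sum to one."""
    aics = np.asarray(aics, dtype=float)
    delta = aics - aics.min()          # AIC differences from the best model
    w = np.exp(-0.5 * delta)           # relative likelihoods
    return w / w.sum()                 # combination weights

# Hypothetical AIC values for three candidate forecasting models
weights = akaike_weights([102.3, 104.1, 110.8])
# The model with the lowest AIC receives the largest weight;
# picking argmax(weights) instead corresponds to forecast selection.
```

Note that these weights are themselves a function of a single summary statistic per model, which is exactly the limitation the presentation addresses.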

Download slides.
