OR59 Keynote: Uncertainty in predictive modelling

September 19, 2017

I recently presented at the OR59 conference my views and current work (with colleagues) on uncertainty in predictive modelling. I think this is a topic that deserves quite a bit of research attention, as it has substantial implications for estimation, model selection and eventually decision making.

The talk has three parts:

  • Argue (as others before me!) that model-based uncertainty (the sigma we get from our models) is not the full story, and estimation/model uncertainty should be accounted for in prediction intervals and decision making. Key point: most model outputs assume that the model itself is true, which is… not true!
  • Provide initial results from an approach that directly accounts for model selection uncertainty. This leads to improvements in model selection, but also leads naturally to model combination, answering what and when to combine.
  • Demonstrate that Multiple Temporal Aggregation is an effective way to address modelling uncertainty, summarising research up to this point.

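The first point can be illustrated with a small simulation. A minimal sketch (my own illustration, not from the talk): we fit the simplest possible "model", a Gaussian with estimated mean and standard deviation, to a short sample, and build the textbook 95% interval as if those estimates were the true parameters. Because estimation uncertainty is ignored, the interval's empirical coverage of a new observation falls short of the nominal 95%.

```python
import random
import statistics

random.seed(42)

n = 20        # small estimation sample, so estimation uncertainty matters
reps = 20000  # Monte Carlo replications
z = 1.96      # nominal 95% normal quantile

hits = 0
for _ in range(reps):
    # Fit the "model": sample mean and standard deviation of n iid N(0, 1) draws.
    sample = [random.gauss(0, 1) for _ in range(n)]
    m = statistics.mean(sample)
    s = statistics.stdev(sample)
    # Naive interval treats (m, s) as the true parameters.
    new = random.gauss(0, 1)
    if m - z * s <= new <= m + z * s:
        hits += 1

coverage = hits / reps
print(f"nominal 95% interval, empirical coverage: {coverage:.3f}")
```

The empirical coverage comes out noticeably below 0.95 for n = 20, and the gap widens as n shrinks; with a full time series model the same effect compounds with model selection and specification error.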
You can download the talk here.


Forecasts are central to decision making. Over the last decades there have been substantial innovations in business forecasting, resulting in increased accuracy of forecasts. Models and modelling principles have matured to address company problems in a realistic sense, i.e. they are aware of the requirements and limitations of practice, and they are tested empirically to demonstrate their effectiveness. Furthermore, there has been a shift in recognising the importance of having models instead of methods, to facilitate parameterisation, model selection and the generation of prediction intervals. The latter has been instrumental in refocusing from point forecasts to prediction intervals, which reflect the relevant risk for the decisions supported by the forecasts.

At the same time the quality and quantity of potential model inputs has increased exponentially, permitting models to use more information sources and support higher frequency of decision making, such as daily and weekly planning cycles. All these have facilitated and made necessary an increase in automation of the forecasting process, bringing to the forefront a new dimension of uncertainty: model selection and specification uncertainty. The uncertainty captured in prediction intervals assumes that the selected model is 'true'. This is hardly the case in practice and we should account for that additional uncertainty.

First, we discuss the uncertainties implied in model selection and specification. Then we proceed to develop a way to measure this uncertainty and derive a new way to perform model selection. We demonstrate that this not only leads to superior selection, but also provides a natural link to model combination and to specifying the relevant pool of models.
Last, we demonstrate that once we recognise the uncertainty in model specification, we can extract more information from our data by using the multiple temporal aggregation frameworks, and empirically show the achieved increase in forecast accuracy and reliability.
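The multiple temporal aggregation idea can be sketched in a few lines. This is a hedged illustration of the general framework rather than the exact method in the talk: the series is aggregated non-overlappingly at several levels (the function names, the chosen levels and the mean forecaster below are my assumptions for illustration), a per-period forecast is produced at each level, and the forecasts are combined. Because each aggregation level emphasises different components (high-frequency detail versus level/trend), averaging across levels hedges against choosing the wrong single modelling frequency.

```python
def aggregate(series, k):
    """Non-overlapping temporal aggregation: average consecutive blocks of k.

    Oldest observations that do not fill a complete block are dropped.
    """
    m = len(series) // k
    trimmed = series[len(series) - m * k:]
    return [sum(trimmed[i * k:(i + 1) * k]) / k for i in range(m)]

def mta_forecast(series, levels=(1, 3, 12)):
    """Combine per-period forecasts made at several aggregation levels.

    At each level a deliberately simple forecaster (the in-sample mean of the
    aggregated series) stands in for whatever model would be fitted there; the
    per-period forecasts are then averaged with equal weights.
    """
    per_period = []
    for k in levels:
        agg = aggregate(series, k)
        per_period.append(sum(agg) / len(agg))  # stand-in forecaster
    return sum(per_period) / len(per_period)

# Example: monthly data aggregated to monthly (k=1), quarterly (k=3)
# and annual (k=12) levels before combining.
monthly = [100 + (i % 12) for i in range(36)]
print(mta_forecast(monthly))
```

In practice each aggregation level would get its own fitted model (e.g. exponential smoothing with level-appropriate components) rather than a plain mean, but the aggregate-forecast-combine structure is the same.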

2 thoughts on “OR59 Keynote: Uncertainty in predictive modelling”

  1. Stephan Kolassa

    You make good points. (Incidentally, I just gave an internal training on time series forecasting yesterday, and I harped on much the same points. That’s how I know they are good.)

    May I suggest that you add a slide with references at the end of your presentation, or embed links to the DOI pages? It’s kind of hard to find the actual paper you refer to as “Xia et al. (2011)”…

    1. Nikos Post author

      Good point Stephan, I should have done that already! Let me find some time between the various meetings/reviews (I am now PhD director, which involves the fun aspect of getting on top of all PhD topics in the department!) and I will upload a supplementary file.

