This is joint work with Fotios Petropoulos and Konstantinos Nikolopoulos. It examines the performance of experts selecting forecasting models against automatic statistical model selection, and provides guidelines on how to maximise the benefits. This is very exciting research, demonstrating both some limitations of statistical model selection (and avenues for new research) and the strengths and weaknesses of human experts performing this task.
In this paper we explore how judgment can be used to improve model selection for forecasting. We benchmark the performance of judgmental model selection against statistical selection based on information criteria. Apart from the simple model choice approach, we also examine the efficacy of a judgmental model-build approach, where experts are asked to decide on the existence of the structural components (trend and seasonality) of the time series. The sample consists of almost 700 participants who contributed to a custom-designed laboratory experiment. The results suggest that humans perform model selection differently from statistical algorithms. When forecasting performance is assessed, individual judgmental model selection performs at least as well as, if not better than, statistical model selection. A simple combination of the statistical and judgmental selections, as well as judgmental aggregation, significantly outperforms both statistical and judgmental selection alone.