Although Croston’s method and its variants are popular for intermittent demand time series, there have been limited advances in identifying how to select appropriate smoothing parameters and initial values. On the one hand this complicates forecasting for organisations, and on the other it prevents automation. Recent research investigated various cost functions for optimising these methods and found that two newly proposed ones, the Mean Squared Rate (MSR) and Mean Absolute Rate (MAR), perform better than conventional squared or absolute error based cost functions. The argument for the new cost functions, in a nutshell, is that these methods produce a **demand rate forecast** rather than **a demand size forecast**, and therefore using error based cost functions is inappropriate.
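As a sketch of how this choice appears in practice, assuming the `crost` function from the tsintermittent package for R, the cost function is selected through its `cost` argument, so switching between a conventional and a rate-based cost is a one-argument change (the example series is made up for illustration):

```r
# Sketch, assuming the tsintermittent package is installed:
# install.packages("tsintermittent")
library(tsintermittent)

# A short intermittent demand series: many zeroes, occasional demands
y <- c(0, 0, 3, 0, 0, 0, 2, 0, 4, 0, 0, 1, 0, 0, 3, 0, 0, 2, 0, 0)

# Croston's method optimised on the conventional Mean Squared Error...
fit.mse <- crost(y, h = 6, type = "croston", cost = "mse")

# ...and on the Mean Squared Rate, which evaluates the demand rate forecast
fit.msr <- crost(y, h = 6, type = "croston", cost = "msr")

# Compare the optimised smoothing parameters under the two costs
fit.mse$weights
fit.msr$weights
```

The same `cost` argument also accepts `"mae"` and `"mar"` for the absolute-error counterparts.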

The resulting parameters were found to be close to the ones suggested in the literature. The same paper examined whether constraining the parameters helped and found minimal differences. Optimising the demand size and interval smoothing parameters separately was found to be beneficial, as was optimising the initial values of demand size and interval (or demand probability in the case of TSB). When all these findings were used in practice there were substantial improvements in terms of inventory performance, as discussed in detail in the paper.
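These options can be sketched with the tsintermittent functions (argument names as in that package): `nop = 2` optimises the demand size and interval smoothing parameters separately, and `init.opt = TRUE` additionally optimises the initial values; `tsb` accepts the same cost and initialisation arguments:

```r
library(tsintermittent)

# Illustrative intermittent series
y <- c(0, 0, 3, 0, 0, 0, 2, 0, 4, 0, 0, 1, 0, 0, 3, 0, 0, 2, 0, 0)

# Two smoothing parameters (demand size and interval) optimised separately,
# together with the initial values, using the MAR cost
fit <- crost(y, h = 6, nop = 2, cost = "mar", init.opt = TRUE)
fit$weights   # optimised smoothing parameters
fit$initial   # optimised initial values

# TSB smooths demand probability instead of intervals, so its initial
# values are demand size and demand probability
fit.tsb <- tsb(y, h = 6, cost = "mar", init.opt = TRUE)
```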

I have put together a simulator to illustrate how the various methods and cost functions work. All functions are from the tsintermittent package for R.
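For readers who prefer to experiment in R directly, a minimal sketch using the package's own intermittent demand generator (`simID`, with its `obs`, `idi` and `cv2` arguments) might look like this:

```r
library(tsintermittent)

# Simulate one intermittent series of 60 observations, with average
# inter-demand interval 2 and squared coefficient of variation 0.5
set.seed(1)
y <- simID(n = 1, obs = 60, idi = 2, cv2 = 0.5)[, 1]

# Forecast with Croston, SBA and TSB and compare the demand rates
f.crost <- crost(y, h = 12, type = "croston")$frc.out
f.sba   <- crost(y, h = 12, type = "sba")$frc.out
f.tsb   <- tsb(y, h = 12)$frc.out
```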

Update: If minimum and maximum aggregation levels are equal, an ADIDA forecast with automatic model selection (using iMAPA) is now produced correctly.
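Assuming the `imapa` function from tsintermittent, setting the minimum and maximum aggregation levels (`minimumAL`, `maximumAL`) to the same value collapses iMAPA to an ADIDA forecast at that single aggregation level, with the forecasting model at that level still selected automatically:

```r
library(tsintermittent)

y <- c(0, 0, 3, 0, 0, 0, 2, 0, 4, 0, 0, 1, 0, 0, 3, 0, 0, 2, 0, 0)

# Full iMAPA: combine forecasts across aggregation levels 1 to 4
f.imapa <- imapa(y, h = 6, minimumAL = 1, maximumAL = 4)

# ADIDA: a single aggregation level (here 3), with automatic model
# selection at that level
f.adida <- imapa(y, h = 6, minimumAL = 3, maximumAL = 3)
```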