{"id":965,"date":"2016-04-10T15:36:28","date_gmt":"2016-04-10T15:36:28","guid":{"rendered":"http:\/\/kourentzes.com\/forecasting\/?p=965"},"modified":"2017-03-30T11:23:44","modified_gmt":"2017-03-30T11:23:44","slug":"a-fundamental-idea-in-extrapolative-forecasting","status":"publish","type":"post","link":"https:\/\/kourentzes.com\/forecasting\/2016\/04\/10\/a-fundamental-idea-in-extrapolative-forecasting\/","title":{"rendered":"A fundamental idea in extrapolative forecasting"},"content":{"rendered":"<p style=\"text-align: justify;\">Extrapolative forecasting, using models such as exponential smoothing, is arguably not very complicated from a mathematical point of view, but it requires a shift in logic in terms of what is a good forecast. For this discussion I will use a simple form of exponential smoothing to demontrate my point.<\/p>\n<h4>1. The forecasting model: single exponential smoothing<\/h4>\n<p style=\"text-align: justify;\">The forecast is calculated as:<\/p>\n<p><img src='https:\/\/s0.wp.com\/latex.php?latex=F_%7Bt%2B1%7D+%3D+%5Calpha+A_t+%2B+%281-%5Calpha%29+F_t&#038;bg=ffffff&#038;fg=000000&#038;s=1' alt='F_{t+1} = \\alpha A_t + (1-\\alpha) F_t' title='F_{t+1} = \\alpha A_t + (1-\\alpha) F_t' class='latex' \/>,<\/p>\n<p style=\"text-align: justify;\">where <img src='https:\/\/s0.wp.com\/latex.php?latex=F_t&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='F_t' title='F_t' class='latex' \/> is the forecast and <img src='https:\/\/s0.wp.com\/latex.php?latex=A_t&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='A_t' title='A_t' class='latex' \/> is the actual historical value for period <img src='https:\/\/s0.wp.com\/latex.php?latex=t&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='t' title='t' class='latex' \/>. The smoothing parameter <img src='https:\/\/s0.wp.com\/latex.php?latex=%5Calpha&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='\\alpha' title='\\alpha' class='latex' \/> is a value between 0 and 1. 
In its simplest interpretation this can be seen as a weighted moving average, where the distribution of weight is controlled by the smoothing parameter. Without going into too much detail, we can say the following:<\/p>\n<ul>\n<li style=\"text-align: justify;\">A low value for <img src='https:\/\/s0.wp.com\/latex.php?latex=%5Calpha+&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='\\alpha ' title='\\alpha ' class='latex' \/> results in a long weighted moving average and in turn in a very smooth model fit;<\/li>\n<li style=\"text-align: justify;\">A high value results in a short weighted moving average that updates the forecast very quickly according to the most recent actual values of the time series.<\/li>\n<\/ul>\n<p style=\"text-align: justify;\">For example, consider the case when <img src='https:\/\/s0.wp.com\/latex.php?latex=%5Calpha+%3D+1&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='\\alpha = 1' title='\\alpha = 1' class='latex' \/>: the forecast equation becomes <img src='https:\/\/s0.wp.com\/latex.php?latex=F_%7Bt%2B1%7D+%3D+A_t+&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='F_{t+1} = A_t ' title='F_{t+1} = A_t ' class='latex' \/>, which is the same as the Naive method and in practice makes the forecast equal to the last observed value. No older observations are considered and no smoothing occurs. If we want to be proper with our model, neither 0 nor 1 is allowed as a value for the smoothing parameter, but the above example is quite illustrative and therefore useful.<\/p>\n<h4 style=\"text-align: justify;\">2. Forecasting sales<\/h4>\n<p style=\"text-align: justify;\">Now that the basics of the model are explained, let us look at the following example. 
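The recursion above is easy to sketch in code. The following is a minimal single exponential smoothing implementation in Python; the sales figures and the initialisation with the first observation are illustrative assumptions, not the series used in the post:

```python
def ses(actuals, alpha):
    """Single exponential smoothing: F[t+1] = alpha * A[t] + (1 - alpha) * F[t].
    Returns the one-step-ahead forecasts, ending with the forecast for t+1."""
    forecast = actuals[0]  # one common convention: initialise with the first actual
    forecasts = []
    for a in actuals:
        forecast = alpha * a + (1 - alpha) * forecast
        forecasts.append(forecast)
    return forecasts

sales = [112, 118, 132, 129, 121, 135, 148, 136, 119, 104]  # hypothetical series

print(ses(sales, 1.0)[-1])  # alpha = 1 reproduces the Naive method: the last actual, 104
print(ses(sales, 0.1)[-1])  # a low alpha acts as a long weighted average: a smooth fit
```

Playing with the value of alpha here makes the two behaviours in the bullet points above visible directly.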
Let us assume we have to forecast sales of a product with two years of history and we have two alternative model fits, one with smoothing parameter equal to 0.1 and one equal to 0.9.<\/p>\n<div id=\"attachment_974\" style=\"width: 675px\" class=\"wp-caption aligncenter\"><a href=\"http:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit1.png\"><img aria-describedby=\"caption-attachment-974\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-974 size-large\" src=\"http:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit1-1024x380.png\" alt=\"fit1\" width=\"665\" height=\"247\" srcset=\"https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit1-1024x380.png 1024w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit1-150x56.png 150w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit1-300x111.png 300w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit1-768x285.png 768w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit1-660x245.png 660w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit1.png 1049w\" sizes=\"(max-width: 665px) 100vw, 665px\" \/><\/a><p id=\"caption-attachment-974\" class=\"wp-caption-text\">Fig. 1. Model fit with parameter 0.1 and 0.9.<\/p><\/div>\n<p style=\"text-align: justify;\">The key question is: <strong>which of the two alternatives is the best for forecasting future sales?<\/strong> This is a question I am asked quite often, in various forms, by practitioners and students.<\/p>\n<p style=\"text-align: justify;\">The typical answer I get from people who are not trained in forecasting\/statistics is that the option with parameter 0.9 is best. It indeed seems to follow the shape of past sales quite closely and arguably, if we could somehow shift it to the left by one period, the fit would be fantastic. 
The fit based on parameter 0.1, on the other hand, is a flat line that does not follow the observed historic sales.<\/p>\n<p style=\"text-align: justify;\">This is a reasonable argument, but unfortunately it is wrong. In fact, I have misled you so far, because in the equation for single exponential smoothing I did not include the <em>error<\/em> term, and therefore we focused on comparing the actual sales and the point forecast. The point forecast is simply the most probable value in the future, but it is not the only possible one! Every forecast assumes some error, as there is always unaccounted information (what we typically call noise). We should really think of every forecasted value as a distribution of values, with the most probable being the point forecast. Fig. 2 illustrates this by providing both the point forecast as well as the 80% and 90% Prediction Intervals (PI), i.e. the areas within which the future actual is expected to be with 80% and 90% confidence.<\/p>\n<div id=\"attachment_984\" style=\"width: 310px\" class=\"wp-caption aligncenter\"><a href=\"http:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/frc1.png\"><img aria-describedby=\"caption-attachment-984\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-984 size-medium\" src=\"http:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/frc1-300x219.png\" alt=\"frc1\" width=\"300\" height=\"219\" srcset=\"https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/frc1-300x219.png 300w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/frc1-150x109.png 150w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/frc1.png 618w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/a><p id=\"caption-attachment-984\" class=\"wp-caption-text\">Fig. 2. A period ahead forecast with prediction intervals.<\/p><\/div>\n<p style=\"text-align: justify;\">Observe that as we look for higher confidence (from 80% to 90%) the PIs become wider. 
That already tells you something about how much confidence we have with regard to the accuracy of the point forecast!<\/p>\n<h4 style=\"text-align: justify;\">3. Prediction Intervals<\/h4>\n<p style=\"text-align: justify;\">The PIs are connected to the standard deviation of the forecast errors, which for unbiased forecasts is estimated by the Root Mean Squared Error (RMSE):<\/p>\n<img src='https:\/\/s0.wp.com\/latex.php?latex=+%5Ctext%7BRMSE%7D+%3D+%5Csqrt%7B%5Cfrac%7B1%7D%7Bn%7D%7B%5Csum_%7Bi%3D1%7D%5E%7Bn%7D%7B%28A_i-F_i%29%5E2%7D%7D%7D&#038;bg=ffffff&#038;fg=000000&#038;s=1' alt=' \\text{RMSE} = \\sqrt{\\frac{1}{n}{\\sum_{i=1}^{n}{(A_i-F_i)^2}}}' title=' \\text{RMSE} = \\sqrt{\\frac{1}{n}{\\sum_{i=1}^{n}{(A_i-F_i)^2}}}' class='latex' \/>\n<p style=\"text-align: justify;\">where <img src='https:\/\/s0.wp.com\/latex.php?latex=n&#038;bg=ffffff&#038;fg=000000&#038;s=0' alt='n' title='n' class='latex' \/> is the number of errors between historical and fitted values. This is also related to why we typically optimise forecasting models on squared errors: we are trying to minimise the error variance. So the smaller the RMSE, the tighter the PIs, which, if our statistics are correct, implies more confidence in our forecast. Obviously this is connected with any decisions that we may take using these forecasts, such as inventory decisions. The PIs and the safety stock are connected. 
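The RMSE is straightforward to compute; here is a quick sketch in plain Python, over a pair of hypothetical actual and fitted series (the numbers are made up, only to show the mechanics):

```python
import math

def rmse(actuals, fitted):
    """Root Mean Squared Error between actuals and one-step-ahead fitted values."""
    n = len(actuals)
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actuals, fitted)) / n)

# Hypothetical numbers for illustration:
actuals = [120, 135, 128, 140]
smooth  = [128, 129, 130, 130]   # a smooth fit (low alpha)
nervous = [110, 122, 137, 127]   # a reactive fit (high alpha), lagging one period
print(rmse(actuals, smooth), rmse(actuals, nervous))
```

Notice that in this toy example the reactive fit, which lags the actuals by one period, ends up with the larger RMSE, echoing the 35.9 versus 40.4 comparison for the two parameter values of the example.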
For a more in-depth and relevant discussion on the connection to safety stock, as well as the problem of biased forecasts, read <a href=\"http:\/\/kourentzes.com\/forecasting\/2016\/03\/30\/distributions-of-forecasting-errors-of-forecast-combinations-implications-for-inventory-management\/\">this<\/a>.<\/p>\n<p style=\"text-align: justify;\">For our example forecasts we have the following RMSE:<\/p>\n<ul style=\"text-align: justify;\">\n<li>Parameter 0.1: RMSE = 35.9<\/li>\n<li>Parameter 0.9: RMSE = 40.4<\/li>\n<\/ul>\n<p style=\"text-align: justify;\">which already informs us that there is less certainty in the predictions based on the smoothing parameter 0.9, which tries to follow the pattern of sales &#8220;better&#8221;. Fig. 3 illustrates this for the in-sample fit for both cases.<\/p>\n<div id=\"attachment_987\" style=\"width: 675px\" class=\"wp-caption aligncenter\"><a href=\"http:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit2.png\"><img aria-describedby=\"caption-attachment-987\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-987 size-large\" src=\"http:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit2-1024x380.png\" alt=\"fit2\" width=\"665\" height=\"247\" srcset=\"https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit2-1024x380.png 1024w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit2-150x56.png 150w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit2-300x111.png 300w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit2-768x285.png 768w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit2-660x245.png 660w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/fit2.png 1049w\" sizes=\"(max-width: 665px) 100vw, 665px\" \/><\/a><p id=\"caption-attachment-987\" class=\"wp-caption-text\">Fig. 3. 
Fit to historical sales with 80% and 90% one-step ahead prediction intervals.<\/p><\/div>\n<p style=\"text-align: justify;\">There are a few things we can say about Fig. 3. First consider the plot for parameter 0.1. You can see that most historical sales are within the 90% prediction interval, as the name suggests. The 80% prediction interval does a decent job as well. On the other hand, the fitted value (the point forecast) does not follow the sales pattern, and if we considered this the only indication of a good forecast, we would reject it. Compare this with the plot for parameter 0.9. Now things are much more erratic. The prediction intervals are riskier, in the sense that more points are outside or just marginally inside, even though the intervals themselves are wider. The fitted values still fare no better at staying close to the historical sales (month by month!).<\/p>\n<p style=\"text-align: justify;\">Consider another aspect of this: suppose that you need to take some decision based on the forecasts. The &#8220;smooth&#8221; one based on the low parameter provides a very stable forecast and PIs. So, for example, running an inventory at a 90% service level more or less implies meeting a demand &amp; safety stock looking similar to the 90% PI. Now consider taking the same decision using the predictions based on parameter 0.9. You will need to revise the plan all the time, as the predictions and respective PIs are very volatile.<\/p>\n<p style=\"text-align: justify;\">The true forecasts are even more striking, as shown in Fig. 4. For parameter 0.1 the prediction intervals are tighter, implying more confidence in our predictions, but also less costly decisions, such as lower safety stock. On the other hand, the second forecast requires much wider prediction intervals, implying more uncertainty, and the forecast does not look any more reasonable! 
In both cases we get a flat line of forecasts, as a flat line is all that single exponential smoothing is capable of producing as a forecast. The impression that it was doing something more was misleading. Observe that the PIs typically become wider as we calculate them for multiple steps ahead. Again, consider the cost implications.<\/p>\n<div id=\"attachment_993\" style=\"width: 675px\" class=\"wp-caption aligncenter\"><a href=\"http:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/frc2.png\"><img aria-describedby=\"caption-attachment-993\" decoding=\"async\" loading=\"lazy\" class=\"wp-image-993 size-large\" src=\"http:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/frc2-1024x380.png\" alt=\"frc2\" width=\"665\" height=\"247\" srcset=\"https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/frc2-1024x380.png 1024w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/frc2-150x56.png 150w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/frc2-300x111.png 300w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/frc2-768x285.png 768w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/frc2-660x245.png 660w, https:\/\/kourentzes.com\/forecasting\/wp-content\/uploads\/2016\/04\/frc2.png 1049w\" sizes=\"(max-width: 665px) 100vw, 665px\" \/><\/a><p id=\"caption-attachment-993\" class=\"wp-caption-text\">Fig. 4. Forecasts with 80% and 90% prediction intervals.<\/p><\/div>\n<h4 style=\"text-align: justify;\">4. A fundamental idea<\/h4>\n<p style=\"text-align: justify;\">Real time series contain noise, which cannot be forecasted. Therefore our objective should be to capture only the underlying overall structure and not the specific patterns that are most likely due to noise. Think about what your forecasting model (or method) is capable of capturing and try to do only that. 
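The widening of the PIs with the horizon, and its cost, can be made concrete. For single exponential smoothing viewed as the ETS(A,N,N) model, the h-step-ahead error variance is sigma^2 (1 + (h-1) alpha^2), a standard textbook result that the post does not state explicitly; the sketch below assumes it, using the in-sample RMSEs above as rough estimates of sigma:

```python
import math

def pi_halfwidth(sigma, alpha, h, z=1.645):
    """Approximate 90% prediction-interval half-width for SES, h steps ahead,
    assuming the ETS(A,N,N) variance: sigma^2 * (1 + (h - 1) * alpha**2)."""
    return z * sigma * math.sqrt(1 + (h - 1) * alpha ** 2)

# In-sample RMSEs from the example as rough sigma estimates:
for h in (1, 6, 12):
    print(h, round(pi_halfwidth(35.9, 0.1, h), 1), round(pi_halfwidth(40.4, 0.9, h), 1))
# The alpha = 0.9 intervals widen far faster with the horizon: the apparent
# extra "fit" buys much more uncertainty, and hence more safety stock.
```

Even with its slightly smaller one-step interval, the alpha = 0.1 forecast barely widens over a year of monthly steps, while the alpha = 0.9 intervals roughly triple.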
Single exponential smoothing is only capable of capturing the &#8220;level&#8221; of the time series. It is incapable of capturing trend, seasonality or special events, and we should not abuse the model by asking it to. Doing so will only result in very uncertain predictions, with substantial cost implications, that look &#8220;reasonable&#8221; only if we consider the point forecasts alone. Once the PIs are calculated it becomes clear that we are making life much harder for ourselves.<\/p>\n<p style=\"text-align: justify;\">Long story short:<\/p>\n<ul style=\"text-align: justify;\">\n<li><strong>The point forecast will (always) be wrong, so instead one should look at prediction intervals.<\/strong><\/li>\n<li><strong>Consider your model and try to fit only the structure it is able to capture; do not be tempted to &#8220;explain&#8221; everything with your model. The latter will make the forecasts chase noise, resulting in poor PIs and expensive decisions. <\/strong><\/li>\n<\/ul>\n<p style=\"text-align: justify;\">Obviously more complex models are able to capture more patterns and details from a time series, but that in itself may just lead to over-fitting, a topic I will not go into in this post!<\/p>\n<p style=\"text-align: justify;\">I hope this illustration helps explain why we should not try to make our extrapolative forecasts match the historical patterns fully, and why we should switch from thinking about point forecasts to distributions, as conveyed by the prediction intervals.<\/p>\n<p style=\"text-align: justify;\">A final note: my intention was not to be exact with my statistics, but rather to illustrate a point! 
There is much more to be said about PIs, parameter and model selection, and so on.<\/p>\n<p>You may find it helpful to experiment with different exponential smoothing models, parameters and PIs using this interactive <a href=\"http:\/\/kourentzes.com\/forecasting\/2014\/10\/30\/exponential-smoothing-demo\/\">demo<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Extrapolative forecasting, using models such as exponential smoothing, is arguably not very complicated from a mathematical point of view, but it requires a shift in logic in terms of what is a good forecast. 
For this discussion I will use a simple form of exponential smoothing to demonstrate my point. 1. The forecasting model: single\u2026 <span class=\"read-more\"><a href=\"https:\/\/kourentzes.com\/forecasting\/2016\/04\/10\/a-fundamental-idea-in-extrapolative-forecasting\/\">Read More &raquo;<\/a><\/span><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[41],"tags":[32,76,27,23],"_links":{"self":[{"href":"https:\/\/kourentzes.com\/forecasting\/wp-json\/wp\/v2\/posts\/965"}],"collection":[{"href":"https:\/\/kourentzes.com\/forecasting\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/kourentzes.com\/forecasting\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/kourentzes.com\/forecasting\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/kourentzes.com\/forecasting\/wp-json\/wp\/v2\/comments?post=965"}],"version-history":[{"count":1,"href":"https:\/\/kourentzes.com\/forecasting\/wp-json\/wp\/v2\/posts\/965\/revisions"}],"predecessor-version":[{"id":1297,"href":"https:\/\/kourentzes.com\/forecasting\/wp-json\/wp\/v2\/posts\/965\/revisions\/1297"}],"wp:attachment":[{"href":"https:\/\/kourentzes.com\/forecasting\/wp-json\/wp\/v2\/media?parent=965"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/kourentzes.com\/forecasting\/wp-json\/wp\/v2\/categories?post=965"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/kourentzes.com\/forecasting\/wp-json\/wp\/v2\/tags?post=965"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}