As I mentioned in Part 1 of my two-part discussion of Demand Forecasting Artificial Intelligence (AI), demand forecasters until relatively recently relied on statistical models. However, these models had well-known limitations. First and foremost, they did not include contextual internal and external data such as promotions, seasonality, weather forecasts, stockouts, or other demand factors. Second, they did not account for cross-item effects such as cannibalization or halo effects between items.
Today, however, leading retail technology vendors including my company have created modeling algorithms that leverage highly refined machine learning, AI, and deep learning to replace the statistical approach. These sophisticated models consume current contextual data to generate far more accurate forecasts. Moreover, the machine learning aspect of this approach means that as you provide the models with data over time about promotions, stockouts, seasonal events, etc., the model continues to learn from the current and historical data and is able to make increasingly accurate predictions.
Clearly the benefits of switching from model-centric to data-centric forecasting are compelling. Still, some retailers have concerns about embracing a significantly different approach: they feel unprepared to identify relevant data sources and unsure how to optimally provide the correct data to the new models, rather than tweaking a model manually until the statistical outcome seems to be a good fit. Our team here at SymphonyAI helps customers identify and leverage relevant, available data – not only from within the retailer but also from external sources. Our expert models in turn apply advanced machine learning algorithms, enabling the models to continually learn, as I’ve mentioned, and also to identify and ultimately overcome forecasting errors that point to the need for additional data or more accurate data sources.
How did we get to this inflection point from statistical models to AI-based models? A multi-decade series of competitions organized by academics, known as the M Competitions, has helped scientists learn how to improve forecasting accuracy. In 2018, the M4 Competition saw the first use of machine learning (ML) and deep learning to supplement the traditional pure statistical approaches to forecasting. Even then, ML was used in a very limited way – for example, a summary of the competition notes that of the 17 most accurate methods, 12 were combinations of statistical approaches.[i]
Beginning with the M5 Competition, which ran from March through June 2020, a significant shift was clearly underway. This iteration of the M Competitions featured a purely retail dataset rich in contextual data. For the first time, ML clearly and decisively outperformed traditional statistical methods on the data. Without question, the transition to a data-centric approach is the right way to go for retailers – and SymphonyAI is a leader in applying this type of ML to significantly outperform the traditional statistical benchmarks. The more effort applied to rigorous data quality and to enriching the model with valuable contextual data, the more accurate the forecast. Moreover, we have tailored Demand Forecasting AI to be very retail-focused by leveraging our own deep retail experience as well as input from our worldwide customer base of innovative retailers. The result is an approach that applies cutting-edge science yet is optimized for real-world retail environments – and with a proven track record.
The final blog in the series, from my colleague Troy Prothero, will discuss some of that real-world impact through the experiences of our retailers who have leveraged our Demand Forecasting AI in their own environments.
Want to learn more? Connect with a solution consultant.
[i] The M4 Competition: Results, findings, conclusion and way forward, by Spyros Makridakis, Evangelos Spiliotis and Vassilios Assimakopoulos. International Journal of Forecasting, Volume 37, Issue 3, July-September 2021, pp. 1308-1309.