Let’s say you want to know what a burger will taste like. You look at the ingredients, and to some extent you’ll be able to “forecast” what it’ll taste like. Part of the difficulty is that your input data isn’t going to be super descriptive. For example, the menu isn’t going to say the exact percentage of fat, how long it’ll be cooked etc. However, the description is likely to be sufficiently good to get a rough idea of what it tastes like, although this is always going to be somewhat qualitative. When it comes to evaluating the utility of economic forecasts (unlike burger forecasts) there are many approaches! Indeed, assessing a forecast is something we think about a lot at Turnleaf Analytics, where we use machine learning and alternative data to generate economic forecasts.
Qualitative and quantitative ideas to evaluate economic forecasts
Below we give a few qualitative ideas for evaluating economic forecasts:
- Does the forecast force investors to think differently? Does it stress test their prior views?
- Is new information being provided by the forecasts, compared to the other sources they follow?
- Can a trader use the forecasts to help formulate trading ideas?
Obviously in practice this list of qualitative metrics can be much longer! We can also think of many quantitative metrics, which may well intersect with the qualitative ones. We again give a few examples below.
- What are the historical errors in the forecast (e.g. mean absolute error or root mean square error)?
- How often has the forecast historically been better than a benchmark?
- Can we create historically profitable trading rules using the forecasts?
- If we combine the forecasts with other sources, do we improve our quantitative metrics?
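As a minimal sketch of the first two metrics above, we can compute mean absolute error, root mean square error, and a hit rate against a benchmark. The function names and the CPI-style numbers below are purely illustrative, not real data:

```python
import math

def mean_absolute_error(actuals, forecasts):
    """Average absolute forecast error across releases."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def root_mean_square_error(actuals, forecasts):
    """Square root of the average squared error; penalises large misses more."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actuals, forecasts)) / len(actuals))

def hit_rate_vs_benchmark(actuals, forecasts, benchmark):
    """Fraction of releases where our forecast landed closer to the actual
    print than the benchmark forecast did."""
    wins = sum(abs(a - f) < abs(a - b) for a, f, b in zip(actuals, forecasts, benchmark))
    return wins / len(actuals)

# Illustrative YoY inflation prints and two sets of forecasts (made-up numbers)
actual    = [3.1, 3.0, 2.8, 2.9, 2.6]
ours      = [3.0, 3.1, 2.7, 2.8, 2.7]
consensus = [3.3, 3.0, 3.0, 2.7, 2.4]
```

Looking at the two error measures side by side is useful in practice: RMSE flags a forecast that is usually close but occasionally badly wrong, which MAE can hide.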
Comparing benchmarks to our forecast
We have mentioned using a benchmark, but a benchmark can be interpreted in many different ways. It could come from official sources such as central banks, from surveys conducted either by official sources or by data firms, or in some cases it can be derived from the market. We need to be careful when using a benchmark, in particular to understand when the benchmark forecast was constructed. If we are looking at a short term benchmark released, say, one day before an economic data release, we can't compare it against a forecast made a month before the release. Clearly, something published much closer to the event is likely to be more accurate, given the plethora of additional data available closer to the time. The same is true if we are comparing, say, a forecast made a month before an economic release with one made a year before. There is also the point that your benchmark may well be different from another user's.
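One way to keep that comparison honest is to only pair up forecasts whose lead times to the release are similar. A minimal sketch, assuming we know the publication date of each forecast (the function name and the five-day tolerance are arbitrary illustrations):

```python
from datetime import date

def comparable_lead_times(forecast_date, benchmark_date, release_date, tolerance_days=5):
    """Return True when two forecasts were made at similar lead times to the
    same data release, so comparing their errors is fair. The 5-day tolerance
    is an arbitrary illustration, not a recommendation."""
    lead_ours = (release_date - forecast_date).days
    lead_bench = (release_date - benchmark_date).days
    return abs(lead_ours - lead_bench) <= tolerance_days

# A month-ahead forecast vs a day-ahead benchmark fails this check
comparable_lead_times(date(2024, 1, 1), date(2024, 1, 31), date(2024, 2, 1))
```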
Why should we look at multiple metrics for forecasts?
The rationale for looking at multiple quantitative (and indeed qualitative) metrics is that looking at one in isolation may not give a rounded picture. Furthermore, not every user of a forecast is likely to have the same success criteria. Ultimately, from a trader's view, the most important point about a forecast is whether it helps to improve their trading P&L. From that perspective, being able to create a historically profitable systematic trading rule using a forecast can help to validate that point.
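To illustrate what such a rule could look like, here is a deliberately toy sketch: take a long position into a release when our forecast is above consensus, a short when below, and realise the instrument's return over the event. All names and numbers are hypothetical, and it ignores transaction costs, sizing and timing:

```python
def forecast_signal_pnl(forecasts, consensus, returns):
    """Toy event-trading rule: +1 position when our forecast is above
    consensus, -1 when below, 0 on a tie; realise the instrument's return
    over each release. Illustrative only: no costs, sizing or timing."""
    pnl = 0.0
    for f, c, r in zip(forecasts, consensus, returns):
        position = 1 if f > c else (-1 if f < c else 0)
        pnl += position * r
    return pnl

# Two hypothetical releases: above consensus then below consensus
forecast_signal_pnl([3.2, 2.8], [3.0, 3.0], [0.01, -0.02])
```

Even a crude rule like this, run over history, gives a P&L-denominated answer to "does this forecast help?", which is often the metric a trader cares about most.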
The instruments we choose can affect how direct the link is between an economic forecast and a trader's P&L. If we take inflation as an example, the link between an inflation forecast and monetising it is more direct for an inflation swap than for, say, fixed income or FX trades. Of course, fixed income and FX are still very much impacted by inflation, but for an inflation swap the payoff is expressed directly in terms of the CPI index.
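To make that CPI link concrete, a minimal sketch of the maturity payoff of a zero-coupon inflation swap to the inflation receiver, in its standard textbook form (realised CPI growth versus a compounded fixed rate); the function name and numbers are illustrative:

```python
def zc_inflation_swap_payoff(notional, cpi_start, cpi_end, fixed_rate, years):
    """Payoff at maturity to the inflation receiver of a zero-coupon
    inflation swap: realised CPI growth minus the compounded fixed rate,
    scaled by notional. Illustrative, ignores indexation lags."""
    realised_leg = cpi_end / cpi_start          # CPI index growth over the life
    fixed_leg = (1 + fixed_rate) ** years       # compounded fixed inflation rate
    return notional * (realised_leg - fixed_leg)

# Hypothetical 2y swap: CPI 100 -> 106.09 (3% p.a.) vs a 2% fixed rate
zc_inflation_swap_payoff(1_000_000, 100.0, 106.09, 0.02, 2)
```

Because the payoff is a function of the CPI index itself, an accurate CPI forecast maps almost one-for-one into this P&L, which is the directness the paragraph above refers to.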
Alternatively, we can view a forecast from a more qualitative angle: does it help a trader to (hopefully!) come up with more profitable trading views? An economist using a forecast may be more interested in other quantitative metrics, such as the forecast error, or how these forecast errors correlate with those of their own models. If the errors have a low correlation, the additional forecast can provide another reference point for them, and an input into their own models.
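That error-correlation check can be sketched with a plain Pearson correlation over two error series; the implementation below is a standard formula, and the error series are made-up:

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical forecast errors from an external source vs an in-house model
external_errors = [0.2, -0.1, 0.3, -0.2, 0.1]
in_house_errors = [-0.1, 0.2, -0.3, 0.1, 0.0]
pearson(external_errors, in_house_errors)
```

A correlation near zero (or negative) suggests the two forecasts miss in different ways, which is exactly when a second source adds information rather than repeating what the in-house model already knows.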
We've gone through a few different ideas for evaluating economic forecasts, mainly because there isn't a single way to evaluate a forecast. How you approach a forecast will depend on your use case. Some users will be more inclined to evaluate forecasts from a qualitative point of view, particularly if they are more discretionary in their approach to markets. Conversely, quants will be skewed more towards quantitative approaches. For both qualitative and quantitative evaluation there are multiple metrics, and in practice it is likely that several will be used. As we might expect, traders will likely be more focused on their P&L as a way to evaluate forecasts.