Can machine learning help predict recessions? Not really

Artificial intelligence models stumble on noisy data and lack of interpretability

What are the odds of a recession hitting the United States in 2024?

Goldman Sachs places its bet at 15%. The New York Federal Reserve suggests a gloomier likelihood of 69% using a model based on the yield curve, one of the best known indicators of recessions. A group of economists project a 50% chance of a downturn, in a survey by Wolters Kluwer. Polls of CEOs and consumers show 84% and 69%, respectively, braced for recessionary winds in the next 12–18 months.

Quants trying to make sense of these contradictory perspectives might wonder whether artificial intelligence models could be more accurate. So far, however, the use of machine learning to forecast recessions has proven less fruitful than initially anticipated.

“Recession modeling relative to other areas hasn’t evolved as much,” says Max Gokhman, head of MosaiQ investment strategy at Franklin Templeton.

The reason for the stunted progress is twofold: a lack of historical data for training models, and the difficulty of isolating useful economic signals from the background noise.

Raul Leote de Carvalho, head of the quant research group at BNP Paribas Asset Management, uses the analogy of an AI algorithm designed to recognize images. A successful algo is trained on millions of images. “There are not that many photos of the markets that we can take from the past to define these things, so there’s a lot of error in forecasting,” Carvalho says.

Quants have hit roadblocks before in their efforts to use AI to predict downturns, but some believe potential breakthroughs are on the horizon in the form of explainable AI and quantum computing.

The lifeblood of AI is data. Yet, the rarity of economic downturns means recessionary data is scarce. Since the 1990s, for example, the US has seen only four recessions as defined by GDP-based indicators. A dearth of data means AI models have little to learn from, which “limits the type and complexity of models you can use”, says Aric Whitewood, co-chief executive of AI macro hedge fund XAI Asset Management.

“Less than 10 instances of recessions means it’s very hard to create a good quality model. Ideally you would want tens or hundreds of instances or more in order to train a model,” he says.

Quants divide recession forecasting into two main strands: identifying variables that signal recessions, and finding new statistical techniques and fresh data sources to sharpen their forecasts.

In the first category, BNPPAM has constructed models based on macro variables designed to detect regime changes. The objective was not to model recession risk specifically but to provide an early-warning indicator of when the economy might shift from one macro regime to another, says Carvalho. However, the signals were not strong enough for use in live investing, and the asset manager has shelved the project indefinitely.

Carvalho says the exercise suffered due to a lack of sufficient data and because financial markets are rife with so much noise that even machine learning’s ability to handle it proved wanting.

Doomsayer

The inversion of the Treasury yield curve is historically a reliable harbinger of recession. An inversion happens when short-term yields rise above long-term ones. In each of the four most recent US recessions, the curve inverted in the months before the downturn. The yield curve has been inverted for at least the past year, putting investors on high alert for an imminent US recession.
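As a rough illustration, a minimal sketch of how such an inversion flag might be computed from Treasury yield data. The figures below are invented; in practice the series would come from a data provider such as FRED.

```python
import pandas as pd

# Hypothetical 10-year and 3-month Treasury yields, in percent.
# Real series might be FRED's DGS10 and DGS3MO.
yields = pd.DataFrame(
    {"y10": [4.2, 4.1, 4.0, 3.9], "y3m": [4.5, 4.5, 4.4, 4.3]},
    index=pd.date_range("2023-10-02", periods=4, freq="B"),
)

# The curve is inverted when the short-term yield exceeds the long-term one,
# i.e. the 10y-3m spread is negative.
spread = yields["y10"] - yields["y3m"]
print(spread)
print("Curve currently inverted:", bool(spread.iloc[-1] < 0))
```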

But GDP data—the usual gauge of recessions—is a lagging measure. This means an economy can be in the grip of a recession before the data officially confirms it. To combat this lag, economic forecasters often use more immediate data as a proxy for GDP numbers. Examples include jobs data such as nonfarm payrolls, the Purchasing Managers’ Index, which tracks trends in manufacturing and the services sector, or composites of these indicators.

Conventional tools to forecast recessions include Markov switching models, which try to attach a probability to whether an economy is in a recession or an expansion, as well as the probability of transitioning from an expansion into a recession and vice versa.
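A minimal sketch of such a model, fitted with statsmodels to simulated quarterly growth data. The data and the "recession" episode are illustrative assumptions, not any firm's actual inputs.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated quarterly GDP growth: long expansions interrupted by a short
# contraction episode standing in for a recession.
growth = np.concatenate([
    rng.normal(0.7, 0.5, 80),   # expansion
    rng.normal(-1.5, 0.7, 6),   # contraction
    rng.normal(0.7, 0.5, 40),   # expansion
])

# Two-regime Markov switching model: each regime has its own mean and
# variance, and the fit estimates the probability of being in each regime
# as well as the probabilities of switching between them.
model = sm.tsa.MarkovRegression(growth, k_regimes=2, switching_variance=True)
res = model.fit()

# Smoothed probability of each regime at every observation; the regime with
# the negative estimated constant plays the role of "recession".
probs = res.smoothed_marginal_probabilities
print(res.summary())
print(probs[75:90].round(2))
```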

Quants and economists also use dynamic factor models, which assume that the co-movement in a large number of time series can be explained by a small number of unobserved common factors. One of these latent factors is the business cycle.
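A comparable sketch of a one-factor dynamic factor model, again on simulated indicators. The variable names are hypothetical stand-ins for the kind of series forecasters use.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120

# Simulate a common AR(1) "business cycle" factor.
factor = np.zeros(n)
for t in range(1, n):
    factor[t] = 0.8 * factor[t - 1] + rng.normal()

# Three observed monthly indicators: each loads on the common factor plus
# idiosyncratic noise (stand-ins for payrolls, production and sales).
data = pd.DataFrame({
    "payrolls": 0.8 * factor + rng.normal(0, 0.5, n),
    "production": 1.0 * factor + rng.normal(0, 0.5, n),
    "sales": 0.6 * factor + rng.normal(0, 0.5, n),
})
data = (data - data.mean()) / data.std()  # standardise before fitting

# One latent factor with AR(1) dynamics explains the co-movement.
model = sm.tsa.DynamicFactor(data, k_factors=1, factor_order=1)
res = model.fit(disp=False)
estimated_cycle = res.factors.filtered[0]  # the estimated common factor
print(estimated_cycle[-5:].round(2))
```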

Using machine learning for forecasting recessions, though, has been found lacking in different ways. In some approaches, the predictive power of the AI’s forecasts wasn’t strong enough. In others, models with the potential to illuminate the intricate relationships between markets and recession indicators have been too hard to interpret.

BNPPAM employed a clustering algorithm designed for handling large datasets, known as Birch, or balanced iterative reducing and clustering using hierarchies.

The quant group opted for an unsupervised approach, allowing the algorithm to autonomously identify regimes without specifying the types to look for. They fed the algorithm typical macro time series related to recessions, such as employment, interest rates, real estate data, inflation, and equity and bond valuations. The goal was to identify patterns across these variables during different regimes from the 1970s to the present.

“We let the machine learning [model] decide whether there’s a regime or not. Then, what we wanted was for the machine learning to find out in a totally unsupervised way whether asset prices have specific behaviour in these regimes,” Carvalho says.

The machine successfully identified regimes, recognizing common periods where certain time series exhibited similar trends, such as inflation rising and equity valuations being cheap. But disappointingly, there was no correlation between the regimes identified by the machine and asset performance. Furthermore, Carvalho notes, the dates of the regimes identified by the algorithm failed to coincide with actual recession dates.
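A minimal sketch of this kind of unsupervised regime detection, using scikit-learn's Birch implementation on simulated macro series. The inputs, cluster count and threshold are illustrative assumptions, not BNPPAM's actual setup.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import Birch
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 600  # monthly observations, roughly the 1970s to the present

# Simulated macro time series of the kind described above: employment,
# interest rates, real-estate prices, inflation and valuations.
macro = pd.DataFrame({
    "employment": rng.normal(0.1, 0.3, n).cumsum(),
    "rates": 5 + rng.normal(0, 0.2, n).cumsum(),
    "real_estate": rng.normal(0.2, 0.5, n).cumsum(),
    "inflation": 3 + rng.normal(0, 0.3, n).cumsum(),
    "valuations": 15 + rng.normal(0, 0.4, n).cumsum(),
})

# Standardise, then let Birch group the observations into regimes without
# telling it what a "recession" looks like.
X = StandardScaler().fit_transform(macro)
regimes = Birch(n_clusters=4, threshold=0.5).fit_predict(X)

# Each month is now tagged with a regime label; the open question, as in the
# exercise described above, is whether those labels line up with recession
# dates or with distinct asset-price behaviour.
print(pd.Series(regimes).value_counts())
```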

Deep thinking

Franklin Templeton, meanwhile, has looked at deep learning to better analyze the web of factors that contribute to recessions. Deep learning is the branch of machine learning that aims to mimic the workings of the human brain. However, Gokhman says the black-box nature of current AI algorithms limits their use.

The firm starts with the assumption that there are certain indicators to predict a recession, but their relative importance for each business cycle varies. A relatively simple machine learning model adjusts the weight of each indicator based on what the model perceives as most relevant for this particular point in history.

In this kind of approach, the machine learning process is looking for patterns relative to historical data. It works by changing the coefficients of formulas defined by quants when building the model, as their relationship to the predictive outcome evolves over time.
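One way to picture this kind of reweighting, as a rough sketch rather than Franklin Templeton's actual model, is a logistic regression refitted on a rolling window, so that the coefficient on each indicator drifts as its relationship with recessions changes over time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, k = 400, 3  # monthly observations, three hypothetical indicators

# Simulated indicators (stand-ins for, say, the yield-curve spread, credit
# spreads and PMI) and a recession flag whose link to each indicator shifts
# halfway through the sample.
X = rng.normal(size=(n, k))
true_w = np.where(np.arange(n)[:, None] < n // 2, [2.0, 0.5, 0.0], [0.5, 0.0, 2.0])
y = (rng.random(n) < 1 / (1 + np.exp(-(X * true_w).sum(axis=1) + 2))).astype(int)

# Refit on a rolling window: the fitted coefficients play the role of the
# indicator weights, adjusted to what matters most at that point in history.
window = 120
for end in range(window, n, 60):
    clf = LogisticRegression().fit(X[end - window:end], y[end - window:end])
    print(f"obs {end}: weights {np.round(clf.coef_[0], 2)}")
```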

More complex models might do this with 50 or more such indicators. It is possible either to preprogram the model with formulas or to let it create its own. Either way, during training, the neural network can end up completely rewriting the formulas, leaving the model's creators unable to understand why or how those revisions were made.

“When we talk about deep learning and using a neural network approach to creating any kind of model, we are giving up our ability to look in. That’s where deep learning as a black box does have some gaps for being applied to regime modeling,” Gokhman says.

XAI Asset Management has abandoned the idea of deep learning for predictive models. Instead, it is focusing on feature engineering, the process of extracting elements of raw data and shaping them into useful inputs for training new machine learning models. For example, Whitewood says researchers can embed knowledge of pre-1970s recessions into a machine learning model trained on post-1970s recessions.

“We would tend to use a simple, low-variance, interpretable model such as decision trees or k-nearest neighbor,” he explains.
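A toy sketch of that workflow: engineered features such as the term spread and payroll growth feed a shallow decision tree whose splits can be read off directly. The features, labels and thresholds here are all hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)
n = 300

# Engineered features built from raw data: term spread, six-month payroll
# growth and the change in credit spreads. These are stand-ins, not XAI's
# actual features.
term_spread = rng.normal(1.0, 1.2, n)
payroll_growth = rng.normal(1.5, 1.0, n)
credit_change = rng.normal(0.0, 0.5, n)
X = np.column_stack([term_spread, payroll_growth, credit_change])

# Hypothetical recession label: more likely when the curve is inverted and
# payroll growth is weak.
y = ((term_spread < 0) & (payroll_growth < 0.5)).astype(int)

# A shallow tree keeps the model low-variance and interpretable.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["term_spread", "payroll_growth", "credit_change"]))
```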

Due to the low signal-to-noise ratio of financial markets and the low frequency of recessions, some firms have opted for a different approach: using natural language processing (NLP).

PGIM has developed a recession sentiment indicator based on NLP, incorporating the sentiment of 7,000 news articles a day containing the term “recession”. The model also attempts to triangulate the state of the economy using NLP to assess corporate sentiment and evaluate central bank perspectives through Fed statements and speeches on economic growth and inflation.
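A highly simplified sketch of a news-based recession indicator, counting articles that mention "recession" and scoring their tone with a small word list. The lexicon and articles are invented, and PGIM's actual model is far richer than this.

```python
from collections import Counter

# Toy daily news feed; a production system would ingest thousands of
# articles and use a trained sentiment model rather than a word list.
articles = [
    "Economists warn a recession may hit as layoffs accelerate",
    "Retail sales rebound, easing recession fears for now",
    "Fed officials see growth slowing but no recession this year",
]

NEGATIVE = {"warn", "layoffs", "slowing", "fears", "downturn"}
POSITIVE = {"rebound", "easing", "growth", "no"}

def score(text: str) -> int:
    """Crude polarity: count of negative words minus positive words."""
    words = Counter(text.lower().split())
    return sum(words[w] for w in NEGATIVE) - sum(words[w] for w in POSITIVE)

recession_articles = [a for a in articles if "recession" in a.lower()]
daily_indicator = sum(score(a) for a in recession_articles) / len(recession_articles)
print(f"{len(recession_articles)} recession articles, sentiment {daily_indicator:+.2f}")
```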

One possible downside of such an approach is that sentiment can often lag market prices. This may work against a model whose job is to predict recessions to help investors trade ahead of anticipated market moves.

Beyond machine learning

Last month, research firm Moody’s Analytics, in collaboration with Rigetti Computing and Imperial College London, announced it was exploring the use of quantum computing to enhance machine learning algorithms used by its analysts for classifying and assessing recession risks.

Carmen Recio Valcarce, a mathematician at Moody’s Analytics, says the impetus behind the project is the shortcomings of existing models.

“Predicting an economic recession is a challenging task because they are rare events, and there are many variables that can influence a recession. In specific countries, there is almost no data. This is bad news because statistical models rely heavily on the examples that they have seen in the past to be able to predict the future,” she says.

The work focuses on quantum-enhanced signature kernels, a combination of classical machine learning signature methods with quantum data transformation, which would speed up processing by allowing multiple calculations to be made simultaneously.

Using a simulated quantum computer, researchers assessed the power of this approach in predicting US economic recessions over the next 12 months and found the quantum-enhanced signature kernel outperformed classical models.

Elsewhere, Franklin Templeton’s Gokhman believes that advances in the explainability of AI could broaden the application of complex neural networks for assessing recession risk. One approach involves implementing an additional neural network on top of the initial one, serving as a translator. Another entails instructing the AI to articulate its reasoning. Both are examples of explainable AI (XAI) models.

“Often current models tell you what they think you want to hear, even if it’s not representative of how they arrived at the answer or even the sources they used,” Gokhman says.
