Academic macroeconomists almost never try to make forecasts about the macroeconomy. That probably sounds unbelievable, but it’s true — look at any macro paper in a mainstream economics journal, and there’s only a small chance that it will be about forecasting. A few economists are deeply interested in why this is, and there is a small trickle of promising papers. And some economists do pay attention to their models’ forecasting performance. But the vast majority of research is about predicting the effects of policies, rather than about predicting what will actually happen.
This might seem like a dereliction of duty. Many, such as the Queen of England, expect that economists should be able to see recessions coming — or at least to try. After all, weather forecasters try to predict storms and heat waves, right?
Academic economists will give varying explanations for why they don’t pay much attention to forecasting, but the core reason is that it’s very, very hard to do. Unlike weather, where Doppler radar and other technology gathers fine-grained details on air currents, humidity and temperature, macroeconomics is traditionally limited to a few noisy variables like gross domestic product, investment, consumption and consumer price indexes, collected only at low frequencies and whose very definitions rely on a number of questionable assumptions. And unlike weather, where models are underpinned by laws of physics good enough to land astronauts on the moon, macroeconomics has only a patchy, poor understanding of individual human behavior. Even the most modern macro models, supposedly based on the actions of individual actors, typically forecast the economy no better than ultra-simple models with only one equation.
With inferior data and inferior theory, perhaps it’s not surprising that macroeconomists would throw up their hands. There’s also a cynical interpretation — policy recommendations are a lot harder to falsify, while with forecasting everyone knows when you get it wrong.
Whatever the reason, the field of macroeconomic forecasting is now almost exclusively the domain of central bankers, government workers and private-sector economists and consultants. But academics should try to get back in the game, because a powerful new tool is available that might be a game-changer. That tool is machine learning.
Loosely speaking, machine learning refers to a collection of algorithmic methods that focus on predicting things as accurately as possible instead of modeling them precisely. Unlike traditional statistics, where statisticians build models of how the world works, machine learning generally gives algorithms the freedom to figure out the model on their own. The downside is that it’s often hard to tell exactly why the algorithms make the predictions they make, but the upside is that they are generally more accurate than traditional methods. In recent years, a type of algorithm known as deep learning has made incredible strides in a number of fields. Deep-learning algorithms, often marketed as artificial intelligence, can now beat humans at complex board games and operate self-driving cars.
Economists love new tools, and machine learning is no exception. Pioneering researchers like Susan Athey, Guido Imbens and Sendhil Mullainathan have figured out ways to use algorithms to help improve the statistical methods used to identify cause and effect in economics. That required some ingenuity, since identifying causality is all about determining the structure of the economy, whereas machine learning usually focuses on predicting things without knowing as much about the structure.
Forecasting recessions, in fact, seems like a much more natural task for machine learning. Critics of mainstream macroeconomics sometimes claim that because the economy is a chaotic system, predicting it mathematically is a fool’s errand. But machine-learning techniques have been known to make surprising headway at predicting chaotic systems, often doing much better than traditional mathematical approaches.
But so far, the explosion in machine learning techniques has failed to spark a revival of interest in macroeconomic forecasting in the halls of academia. There are some exceptions. In 2016, U.K.-based economists Rickard Nyman and Paul Ormerod applied a machine-learning technique known as random forests to predicting recessions, and found favorable results. Thomas Cook and Aaron Smalter Hall of the Federal Reserve Bank of Kansas City tried deep learning, and reported that their algorithms displayed greater predictive accuracy than traditional models.
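To give a flavor of the random-forest approach, here is a minimal sketch — not the Nyman-Ormerod or Kansas City Fed models, whose data and specifications differ — of training a forest to flag recession quarters from a few standard indicators. The feature set, the synthetic data and the recession rule below are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical quarterly features: GDP growth, yield-curve slope,
# change in unemployment (all synthetic, standardized units).
n = 200
X = rng.normal(size=(n, 3))

# Made-up labeling rule: recessions tend to follow weak growth and a
# flat or inverted yield curve, plus some noise.
y = ((X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=n)) < -1).astype(int)

# Fit the forest on the first 150 "quarters"...
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X[:150], y[:150])

# ...and check accuracy on the held-out later quarters.
accuracy = model.score(X[150:], y[150:])
print(f"held-out accuracy: {accuracy:.2f}")
```

On real data the hard part is exactly what this toy skips: recessions are rare, the usable sample is short, and accuracy must be judged out of sample, which is why the papers above emphasize evaluation design as much as the algorithm.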
These are highly encouraging efforts, but they are isolated and haven’t received as much attention as they deserve. The stars of the macro field have yet to embrace the idea that new techniques might be game-changing for forecasting. But if machine learning really could allow economists to predict recessions before they happen, it would be like finding the Holy Grail — the field, and perhaps the economics profession as a whole, would win back the prestige it lost in the Great Recession, and more.
It also might help significantly improve monetary policy. Central bankers, instead of going on a combination of abstruse models and gut instinct, could potentially target the forecasts of machine-learning algorithms. If the best available algorithms predicted inflation that was off from the central bank’s target, it could adjust interest rates until the forecast lined up with the desired rate.
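The forecast-targeting idea can be sketched in a few lines. Everything here is a toy assumption — `forecast_inflation` is a made-up stand-in for a machine-learning forecaster, with a simple monotone link between the policy rate and predicted inflation — but it shows the mechanical loop: adjust the rate until the forecast lines up with the target.

```python
def forecast_inflation(policy_rate: float) -> float:
    # Hypothetical stand-in for an ML forecaster: higher rates
    # cool predicted inflation (both in percent).
    return 4.0 - 0.8 * policy_rate

def target_rate(target: float = 2.0, lo: float = 0.0, hi: float = 10.0) -> float:
    # Bisection search: nudge the policy rate until the forecast
    # matches the inflation target.
    for _ in range(60):
        mid = (lo + hi) / 2
        if forecast_inflation(mid) > target:
            lo = mid  # forecast runs hot -> raise rates
        else:
            hi = mid  # forecast runs cold -> lower rates
    return (lo + hi) / 2

rate = target_rate()
print(f"policy rate: {rate:.2f}%, forecast: {forecast_inflation(rate):.2f}%")
```

A real central bank would face a forecaster whose output shifts as policy itself changes expectations, so the actual control problem is far messier than this one-equation loop suggests.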
As tools improve, science changes. Academia has largely given up on predicting recessions, but the boom in machine learning is a reason to give it another try.
Noah Smith is a Bloomberg Opinion columnist. He was an assistant professor of finance at Stony Brook University, and he blogs at Noahpinion.