The global economic crisis of 2008-09 has triggered much soul searching in the economics profession. Why did so few economists see it coming? Was it because the profession was blind to the very possibility of catastrophic failures in a market economy, as Nobel laureate Paul Krugman put it?
If so, the remedy is a more relevant theory. Or was it largely a predictive failure? The response to that diagnosis has been to work towards improving the measurement of the build-up of systemic risk so that decision makers can fashion a timely policy response.
In this regard, a comparison can be drawn with the Great Depression of the 1930s, which very few economists saw coming and one response to which was the development of national income accounting.
When the Depression struck, policy makers had only limited data (stock indices, freight car loadings and incomplete indices of industrial production) with which to respond to the catastrophe.
With the subsequent development of national income accounting, however, they could better understand the ups and downs of the macroeconomy and work towards lessening their frequency and severity over time.
The 2008-09 crisis marks another inflexion point of sorts. Policy makers appeared clueless about how and why events like the US sub-prime mortgage defaults in early 2007 led to a deep, globally synchronised economic downturn.
Relevant information about the financial sector and its linkages to the real economy was conspicuous by its absence, leaving policy makers unable to act in time.
Researchers like Professor Arvind Krishnamurthy of Northwestern University and the National Bureau of Economic Research (NBER) have attempted to better understand the financial amplification mechanisms at work in liquidity crises.
A more ambitious effort at better measurement is his collaboration with Markus Brunnermeier of Princeton University and NBER and Gary Gorton of Yale University and NBER. Their recent paper on "risk topography" outlines a data acquisition and dissemination process that better informs policy makers about systemic risk.
Existing measures are woefully inadequate for this task: leverage, for instance, has little meaning in a world of derivatives and off-balance sheet vehicles. Similarly, liquidity has not been clearly defined, let alone measured.
What is systemic risk? According to these researchers, it is "the risk that shocks affect the financial sector and trigger an endogenous adverse feedback significantly amplifying these shocks, causing further deterioration in the financial sector, and leading to significant output losses."
Such risk typically builds up in a low-volatility period and materialises only when it becomes apparent to a sufficient number of players that the accumulating imbalances are not sustainable.
The subsequent fallout involves amplification mechanisms with spillover effects across the financial sector and the real economy.
The three authors accordingly outline a proposal for improved measurement of risk and liquidity in the financial sector.
Their basic idea is to elicit from market participants, like financial firms, their sensitivity to a number of specified factors and scenarios on a regular basis: "Essentially, we ask firms to report their 'deltas' with respect to the specified factors, that is the dollar gain or loss that occurs when the specified factor changes by a specified amount, as well as increase or decrease in liquidity as defined by a liquidity index, the liquidity mismatch index."
For example, they ask firms what their capital gain or loss would be if house prices fell by 5, 10, 15 and 20 per cent, and what it would be if they rose by similar increments. In addition, financial institutions would also have to report how their liquidity position changes under each scenario.
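As an illustration only, a firm's delta report for the house-price factor might be generated as below. The valuation function and all figures are invented for the sketch; a real firm would revalue its actual positions, and the paper specifies the reporting scheme, not any particular model.

```python
# Hypothetical sketch of a "delta" report: the dollar gain or loss in a
# firm's portfolio value when a specified risk factor (here, house prices)
# moves by a specified amount. The valuation function is invented.

def portfolio_value(house_price_index):
    """Toy mark-to-market value (in $bn) of a mortgage-heavy portfolio.

    Losses accelerate as prices fall below the baseline index of 100;
    the upside is capped, as is typical of mortgage exposure.
    """
    return 50.0 - 0.4 * max(0.0, 100.0 - house_price_index) ** 1.5

def delta_report(baseline=100.0, shocks=(-20, -15, -10, -5, 5, 10, 15, 20)):
    """Dollar gain/loss relative to baseline for each specified shock (%)."""
    base = portfolio_value(baseline)
    return {pct: round(portfolio_value(baseline * (1 + pct / 100.0)) - base, 2)
            for pct in shocks}

print(delta_report())
# Falling prices produce accelerating losses; rising prices leave the
# toy portfolio's value unchanged (capped upside).
```

The point of standardised scenarios (5, 10, 15, 20 per cent in both directions) is that every firm reports against the same grid, so regulators can compare and aggregate the numbers.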
The focus on these two dimensions reflects the fact that capital and liquidity are considered the most significant factors shaping the behaviour of financial firms during crises.
The upshot is a better basis to gauge the risk as well as liquidity sensitivities of market participants with respect to major risk factors and liquidity scenarios.
With such measures, policy makers and regulators would be in a better position to understand the vulnerability of the economy to systemic risk and to incorporate the financial sector more realistically into macroeconomic models.
The general equilibrium responses and economy-wide system effects of financial shocks can be better understood and calibrated with such data.
Thus, if real estate prices crash by 20 per cent, it is possible to go beyond the losses to the commercial banking sector and compute the general equilibrium response of the real economy to such a financial shock.
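Under stated assumptions, aggregating the reported deltas for one scenario might look like the sketch below. All figures are invented, and the fixed amplification multiplier merely stands in for the general equilibrium response that a proper macroeconomic model would compute.

```python
# Hypothetical aggregation of firms' reported deltas for one scenario:
# a 20 per cent fall in real estate prices. The figures are invented;
# the point is that regulators can sum direct losses across the whole
# financial sector, not just commercial banks, and then feed the total
# into a macro model to gauge the economy-wide response.

reported_deltas = {            # $bn loss reported for the -20% scenario
    "commercial_banks": -120.0,
    "broker_dealers": -45.0,
    "money_market_funds": -15.0,
    "insurers": -30.0,
}

direct_loss = sum(reported_deltas.values())

# Stand-in for the macro model: amplification through fire sales and
# credit contraction scales the direct loss (multiplier is invented).
AMPLIFICATION = 2.0
system_wide_impact = AMPLIFICATION * direct_loss

print(f"Direct financial-sector loss: {direct_loss:.1f} $bn")
print(f"Estimated economy-wide impact: {system_wide_impact:.1f} $bn")
```

Note that the commercial-banking loss is only part of the total: the non-bank entries in the hypothetical table are what existing reporting tends to miss.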
The global economic crisis of 2008-09 has clearly shown that existing measurement systems in the financial sector are outmoded and need to be overhauled if policy makers are to have any chance of tracking the build-up of systemic risk that usually takes place in the background.
With better data on finance, the macroeconomic models can do a better job of tracking and perhaps predicting the onset of crises in the economy triggered by financial shocks. The big question is: will better data enable economists to see the next big crisis coming?
From the Ivory Tower makes research from the academic world accessible to all our readers.