At the onset of the Depression, governments confronted a collapse in output without the benefit of knowing what that output was. The US national accounts had yet to be invented, so the Hoover and Roosevelt administrations based policy on shards of evidence: the behaviour of stock prices, freight-car loadings, production data from particular companies. The dearth of reliable economic measures provoked a minor revolution. Simon Kuznets led the economics profession in creating statistics for gross domestic product and much more.
Now, in the wake of the financial crisis, the question is whether a new Kuznets revolution is possible today. On one side stand thinkers who call for the creation of economy-wide financial risk statistics, and who pushed successfully for last year's launch of the US Office of Financial Research. On the other side is Andy Haldane, executive director for financial stability at the Bank of England. In a speech at the Fed's Jackson Hole conference last week, Mr Haldane subversively argued that the neo-Kuznets vision could be a trap.
It is easy to see why this vision is attractive. The crisis has shown that economic performance ought to be judged in terms of risks as well as quantities. In pure quantity terms, gross domestic product growth in the US, Britain or Spain was robust in the years up to the crisis. But adjusted for risk, these countries' records were considerably worse. Just as sophisticated fund managers have long measured their performance by some version of the Sharpe ratio – returns divided by the risks taken to generate them – so policy makers must learn to risk-adjust macroeconomic performance.
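To make the analogy concrete, a Sharpe-style adjustment is simple arithmetic: mean excess return divided by the volatility of returns. A minimal sketch, using invented growth figures rather than real data:

```python
# Illustrative only: a Sharpe-style risk adjustment applied to two
# hypothetical growth records (the figures are made up, not real data).
from statistics import mean, stdev

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return divided by the volatility of those returns."""
    excess = [r - risk_free for r in returns]
    return mean(excess) / stdev(excess)

steady = [0.02, 0.025, 0.02, 0.03, 0.025]   # modest but stable growth
boomy  = [0.05, 0.06, -0.01, 0.07, -0.02]   # faster but far more volatile

# The steadier economy scores higher once growth is adjusted for risk,
# even though the volatile one grew faster on average.
print(sharpe_ratio(steady))
print(sharpe_ratio(boomy))
```

The point of the ratio is visible in the example: the volatile economy has the higher average growth rate but the lower risk-adjusted score.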
If risk is important, the case for a data revolution seems clear. Standard metrics of risk capture leverage, or risk-weighted leverage, or exposure to the possibility that short-term credit might dry up. They do not capture the risks in currency swaps, interest rate swaps or other derivative contracts. If policy makers in the 1930s could glimpse only half the GDP elephant, their descendants arguably see an even smaller proportion of the risks in finance.
The trouble is that harvesting good data is no small challenge. In the US, the OFR has worked with the Securities and Exchange Commission to start collecting information from hedge funds. The exercise is proving costly; some hedge funds have spent 3,000 hours responding to SEC questions. Worse, the SEC's questions are so vaguely worded that many answers will be worthless. And they may have perverse consequences. Ordered to disclose all internal risk measures, funds may lighten their burden by discarding some of them. Finance may end up dicier than it was.
In a thoughtful paper, three proponents of a neo-Kuznets revolution – Markus Brunnermeier, Gary Gorton and Arvind Krishnamurthy – suggest a different way of tackling the data gap. Rather than asking dozens of unfocused questions, regulators should ask financial companies to estimate how they would fare under various stress scenarios: a change in one-year loan rates, a panic in the repo market, or some combination of such shocks. But this approach presumes that companies can calculate the answers. In practice, a shock in one corner of the system can generate aftershocks in unexpected places. How is a bank or hedge fund supposed to pinpoint its expected losses in the face of such feedbacks?
Enter Mr Haldane of the Bank of England, whose starting point is that finance features not just risk, which is quantifiable, but uncertainty, which is not. Human beings are incapable of listing all possible futures and assigning probabilities to each one. Even in a controlled game such as chess, grand masters can grasp the consequences of all possible sequences only five moves out. Finance is vastly more complex, not least because the "rules" – for instance, the relationship between long-term and short-term interest rates – are constantly in flux.
In an uncertain world, grand risk-mapping ambitions can be taken only so far. The OFR can usefully press companies to improve the quality of their data, which are often scattered among incompatible IT platforms. But regulators cannot be expected to measure all the risk in an economy; nor should they spend unlimited resources on an effort that will only disappoint. In the US and the UK, the growth of financial regulation has far outpaced that of the financial industry, as armies of supervisors seek to discover risks and neuter them. The trend is not sustainable.
For banks as for hedge funds, costly attempts to gather complex data may be counterproductive. Mr Haldane constructs a sample of about 100 global banks in 2006 and asks which of two measures better predicts the odds of failure in the crisis: a simple leverage ratio, measuring assets over equity, or a more complex, risk-weighted one. The answer is that the simple metric performs better. The vast expansion of the Basel rules over a quarter of a century may have achieved nothing.
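The two measures in Mr Haldane's horse race can be sketched in a few lines. The numbers below are hypothetical, chosen only to show how a bank stuffed with nominally "safe" low-risk-weight assets can look conservative on the risk-weighted measure while being highly levered on the simple one:

```python
# Hypothetical sketch of the two leverage measures; all figures invented.

def simple_leverage(assets, equity):
    """The crude measure: total assets divided by equity."""
    return assets / equity

def risk_weighted_leverage(exposures, equity):
    """Risk-weighted assets over equity, given (amount, weight) pairs."""
    rwa = sum(amount * weight for amount, weight in exposures)
    return rwa / equity

# 900 of 'safe' assets at a 10% risk weight, 100 of riskier loans at 100%.
exposures = [(900, 0.1), (100, 1.0)]
equity = 30

print(simple_leverage(1000, equity))              # roughly 33x
print(risk_weighted_leverage(exposures, equity))  # roughly 6x
```

On the risk-weighted measure this bank looks comfortably capitalised; on the simple measure it is levered more than 30 to one, which is closer to the truth if the risk weights turn out to be wrong.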
None of this means that tough regulation is wrong. If complex risks cannot be finely monitored, they must be crudely capped. Leverage should be limited without wasting time on over-complex risk weighting. Derivatives should be moved on to exchanges, without shedding tears about the suffocation of complex over-the-counter products. And banks that are too complex should be broken into pieces. In financial regulation, simple is good.
The writer is a senior fellow at the Council on Foreign Relations and an FT contributing editor
This article appears in full on CFR.org by permission of its original publisher.