Experts discuss methods of economic measurement.
This symposium is presented by the Maurice R. Greenberg Center for Geoeconomic Studies and is made possible through the generous support of Stephen C. Freidheim.
MALLABY: Great, so welcome to the third session of the Stephen C. Freidheim Symposium on Global Economics. This session is about measuring the economy in a digital age. The premise here is that we all know that new forms of data, big data, algorithms that crunch it, are driving innovation in the private sector.
But in this session we want to talk a bit about how that translates into public goods. So how can big data generate better measures of what’s going on in the economy, therefore better understanding of how policy tools might affect the economy? What are the public benefits? How can we migrate that innovation that we know is happening in driverless cars or algorithmic trading or whatever it is into a public policy realm?
So we’ve got three great panelists to speak about this. Right next to me is Diana Farrell, chief executive officer and president of the JPMorgan Institute and the creator of said institute, which is in the vanguard of sort of delving into a big organization, JPMorgan, and surfacing data that could be of use to economic policy and to public debates.
Matthew Shapiro, in the middle, is Lawrence Klein collegiate professor of economics at the University of Michigan, and he’s also—what it doesn’t say here is an adviser to the federal government on economic statistics. And when it comes to these data debates, he’s pretty much at the core of them.
And then Hal Varian, who, as chief economist of Google, is, of course, right smack bang in the center of everything data and how to extract meaning from it.
So I think maybe to start off with I’m going to go to Matt to talk about sort of old data, right. So we’ve got a debate we’re going to have about the new data we can generate from new sources. But tell us a bit about the status quo in economics and the data that economists rely on. Why couldn’t we just continue as we are?
SHAPIRO: Well, thank you very much; glad to be here.
So I try to straddle both worlds—the world of survey research and the world of big data, or what I prefer to call naturally occurring data. The problem with the old-school way of doing it is that we largely rely on surveys. And we all know that response rates are going down. Which of us answers the telephone when it looks like a telemarketer, or even someone from the University of Michigan Survey Research Center trying to elicit our opinion on something? So response rates are plummeting. And this is also true of official surveys.
So many of the surveys that we rely on are not mandatory. And one of the most—one that is frankly in trouble is the retail sales survey, which is the monthly survey that gets us consumption data. That’s 70 percent of the economy. Stores or outlets are just not responding. So we need to turn to new and exciting forms of data to get a better picture of what’s going on in the economy.
MALLABY: I mean, take even the current population survey, which I think is the one that generates the unemployment rate. How many people get actually—how many households is that really based on?
SHAPIRO: It sounds like a big number. There are 68,000 households that the Bureau of Labor Statistics and the Census Bureau approach monthly to get us the unemployment and employment statistics on the household side.
MALLABY: Diana, you worked in the White House. You had to make economic policy. Did you ever feel—before you even imagined that you would be creating new data at JPMorgan—did you feel the limits of the old data?
FARRELL: Very much so, and I think in two regards primarily. One was the timeliness of the data, because, as you all know, many of the data that are reported, the public administration data, are very lagged, often by a quarter or a month. They get revised frequently. So we don’t really know what happened till many months and sometimes years after the event. And they’re not granular enough. We don’t have a good enough understanding of what’s happening at a zip code level, which, when you’re really trying to address a national crisis, would actually be very, very helpful.
And, you know, I’ll give you one example. Right at the beginning of the crisis, as the administration came in and we were drafting the recovery act and a bunch of the other sort of short-term interventions, we did not know how bad the recession was. The numbers that came out for the first year were revised by a very significant factor after the fact. And I think by the time we passed the $800 billion recovery act, it was impossible to make the case for more. We probably should have had more, but we just didn’t have that data till a year later—too late.
On the granularity, remember the biggest aspect of the crisis was housing. And housing, despite the fact that it was a national debacle, had very, very different regional features. And so we were flying blind on a lot of the programs that were trying to address the most hard-hit areas, et cetera.
So I think this is a really big issue on timeliness, granularity, high frequency, and an understanding of the fact that the economy is really quite volatile. And a lot of the surveys done every year, every three years, et cetera, exude a sense of more smoothness and stability than is actually real at any time.
MALLABY: So Hal, sticking on this issue of what’s wrong with the old data, I mean, Diana is emphasizing the lack of timeliness, the lack of granularity. Matt has talked about falling—first of all, the small survey size in the traditional surveys, and probably a falling response rate.
You’ve also argued, I think, that a lot of the time the measurements are just looking at the wrong thing. Take the idea of chips. How do you think about computer chips’ usefulness? People may actually just be looking, when they use the traditional measures, at entirely the wrong measure.
VARIAN: Right. So the issue there is that the goals of the chip makers have changed. And as they’ve changed, we should be using different forms of hedonic adjustment, and it always takes time for the data collectors to recognize these—the changes.
But a bigger issue, I think, is a point that Diana alluded to in passing; namely, there’s a lot of very interesting private-sector data that’s gathered at high frequency that potentially fills some of the gap with these problems of the public data.
I was actually in this employment survey, and it took a year and a half. You know, they interview you in person and then you’ve got three follow-up phone calls. So it was very, very thorough. But I can bet their participation rates are going down, down, down, because it’s such a pain for them to collect that data.
FARRELL: You know, Hal, you mentioned this, both the private sector and the difficulty of getting this data. When we started the institute, we focused initially on the household sector and really tried to get a sense of the health of the household sector. Why? Because, from my time in government and from a lot of other smart people, we knew that that was one of the sectors that is least well-understood.
So in the current set of public administration data, the best snapshot we have of the household sector comes from the Survey of Consumer Finances. It’s 4,500 households on a good year. It’s done every three years. And it’s not longitudinal in nature, so it’s not the same households.
But even more interesting, to the point you’re raising, Hal: when it was first started after the war, it was a four-hour survey where the surveyor would show up at your kitchen table and go through a 60-page questionnaire. And, you know, the husband and wife would pull out their stacks and they would write it all out. And over the years, because of response rates, because of people’s unwillingness to participate, it’s been reduced down to a 90-second—sorry—90-minute online survey in which many of the questions just get skipped. People just don’t even answer them.
So I think that is the dearth you’re talking about. And what we’ve brought to the picture, as an example of private-sector data created for very different purposes, is Chase’s platform, which touches 50 percent of U.S. households through demand deposit accounts, credit card accounts, savings accounts, student loans, mortgages, you name it. You can create an integrated picture of what’s really happening at the household level between income and consumption. How do these move together?
And you get these snapshots monthly for balances and daily for transactions. It’s just a completely different game. We started with 50 million households. And then you can do all kinds of screens to ensure you’re looking at the right thing, et cetera. You still end up with volumes that make the Survey of Consumer Finances not even remotely comparable.
MALLABY: So, Hal, Diana is talking about one example here where JPMorgan Chase has access to the finances of 50 million customers. And, appropriately anonymized, you can surface that stuff. That’s one type of new data. There’s a whole other range of sources—from scanners, from Google searches. Talk a bit about the range of sources for new insight that you see out there.
VARIAN: So what’s happened over the last decade is many, many companies have put in data warehouses of one form or another. They’re tracking transactions. And these transactions are very helpful to us in looking at what the state of the economy is like. So this is passive data-gathering as opposed to the active data-gathering we talked about before.
And some of these companies have made indices and data available to researchers and other parties. Google has Google Trends, which is an index of search activity by both actual query and by category of query. Matt’s looked at Twitter data, so I’ll let him talk about that. Intuit has a small business employment index. It’s built around their QuickBooks product—that’s a cloud computing model, so they can aggregate employment data at the establishment level.
ADP, as you know, has data on large enterprises, and they’ve got a lot of stuff behind the scenes they’re trying to figure out what to do with. The Zillow real estate site has a real estate index. Auction.com has a commercial real estate index. MasterCard has SpendingPulse, which is daily spend by category and by region.
So there’s lots and lots of new sources of data out there that are tracking very interesting things. And the challenge that we face as a profession is: how do we integrate these data with the traditional sources? Sometimes there will be some substitution, but lots of times there will be complementarities between the data that’s being collected now and the traditional data.
MALLABY: Matt, how does a research economist use Twitter?
MALLABY: You just tweet your papers, right? That’s all. (Laughter.)
SHAPIRO: No, I’m just a reader on Twitter. I don’t contribute. We have—at the University of Michigan we have a project trying to follow labor-market dynamics using tweets. So the idea is looking for tweet—well, we have two ways of doing it. One is the computer science way, which basically tries to use machine learning techniques to learn—discover patterns. Another is using economic expert knowledge, which is basically looking for terms a priori, like I lost my job, I’m looking for my job, in the tweet stream. So we have a small data set of about 6 billion tweets that we’re tracking, and—
MALLABY: Six billion sounds like more than the household survey.
SHAPIRO: Yes. But it’s tricky, and these data are hard to use. This is what happens in the social media space: it worked extremely well for two years, we put out a paper, and then it broke. But there’s a lesson in that, which we’re studying now.
The reason it broke is not so much that tweets changed or who was tweeting changed. What happened is that the relationship between what people said and what was going on in the economy changed. As the economy was healing, the relationship between job loss and claiming unemployment insurance changed—something you actually see in the official data. Losing a job three or four years ago, when this economy was really bad, was terrible news, because durations of unemployment were quite long. Now we’re shifting to a phase where it looks like job loss might be an opportunity, because people are quickly finding jobs.
Using these kind of data is a challenge, but it’s sort of—it gets at what the Fed is presumably thinking about right now, which is not so much what’s the exact state of the economy—that’s part of it—but really has the economy healed. And you can actually pick up in the tweet stream a sort of shift from job loss that sounds like a serious setback to job loss that might be an opportunity and a healing of the labor market. So you can use the data in these ways, but it’s tricky. We have very short sample periods. We’re dealing—
MALLABY: Because any data have noise. And until you’ve worked with those data for a while, you’re not sure what the noise is. That’s what you’re saying.
SHAPIRO: It’s noise. And we have—with official statistics we have decades and decades of business cycles to learn about it. With something like tweets, we only have a couple of years. So—
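The "economic expert knowledge" side of the Michigan tweet project—searching the stream for a priori phrases like "I lost my job"—can be sketched in a few lines. The phrase list and sample tweets below are hypothetical illustrations, not the project's actual lexicon or its 6-billion-tweet corpus:

```python
# Sketch of an a priori keyword search over a tweet stream.
# Phrases and tweets are invented for illustration only.

JOB_LOSS_PHRASES = ["lost my job", "looking for a job", "got laid off"]

def is_job_signal(tweet):
    """True if the tweet contains any of the a priori job-loss phrases."""
    text = tweet.lower()
    return any(phrase in text for phrase in JOB_LOSS_PHRASES)

stream = [
    "Just lost my job, ugh",
    "Great game last night!",
    "Six months looking for a job and counting",
]

signals = [t for t in stream if is_job_signal(t)]
print(f"{len(signals)} of {len(stream)} tweets flagged")
```

The machine-learning alternative Shapiro mentions would instead learn such patterns from labeled examples rather than fixing the phrase list up front.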
MALLABY: So, Hal, did you want to—
VARIAN: Yeah. I was going to say we did something similar with Google queries. So if you’ve become unemployed, what’s the first thing you’ll do? You’ll go to your computer and you’ll say: Where’s the unemployment office? What are the hours of the unemployment office? How do I apply for unemployment? What are the forms I need to fill out? You know, on and on and on.
And indeed, you see queries of that sort are very, very highly correlated with the actual filing for unemployment benefits. And the nice thing is this data, the Google data, is available in real time. We’ve got 11 years’ worth of data available on a daily level. And for the last six months it’s available on an hourly level. So you can actually look at hourly query volume on a systematic basis.
MALLABY: Is that really useful?
MALLABY: That sounds like high-frequency sort of trading kind of—
VARIAN: So there’s been—there have been two very interesting applications of it that I’ve seen. One is on the Greek referendum, where the attitudes were changing over this very short time period. And it was quite an interesting study. You were able to call the outcome almost exactly. And the other was on the Irish gay marriage referendum, where you got the same thing. As people were asking about these political issues and arranging rallies and demonstrations and so on, you can see this activity happening in real time and you can utilize that in order to understand what the underlying dynamics look like.
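The query-claims relationship Varian describes amounts to a simple correlation between a search-volume index and unemployment-insurance filings. A minimal sketch with synthetic weekly series—not real Google Trends or Department of Labor data:

```python
# Sketch: correlating a search-query index with unemployment-insurance
# claims. Both weekly series below are synthetic stand-ins.

import statistics

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical weekly index of "how do I file for unemployment"-style
# queries, and initial UI claims (thousands) for the same weeks.
query_index = [42, 55, 61, 70, 66, 58, 50]
ui_claims = [210, 260, 295, 340, 320, 275, 240]

r = pearson(query_index, ui_claims)
print(f"correlation: {r:.2f}")  # strongly positive for this toy series
```

In practice the real-time query series would be used to nowcast the claims number before the official release, not merely to confirm it afterward.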
MALLABY: Diana, we’ve been talking so far, do you want to just—
FARRELL: I’ll answer your question. I was just going to point out one thing that hasn’t come out yet about the shift that is occurring as we think about economic data—and we do have a lot of figuring out to do—which is that, for better and for worse, these surveys and a lot of administrative data were established for the purpose of monitoring the economy. But they had embedded in them a theory of how the economy worked.
So, you know, my favorite example is any macroeconomic model that I’ve ever seen, and probably has ever been developed, has an assumption that the adjustment mechanisms will eventually bring the current account and capital accounts back into balance over some reasonable period of time.
Well, we’ve been in an economy, in a global economy, now for two decades where that just patently has not happened except for very short periods of time. And you can fault the models, but you also have to just acknowledge that the models have to have a presumption of how the economy works in order to be populated and to be thought through.
I think what we’re doing with these naturally occurring data is taking a very different approach to economics—a behavioral approach to the science. We’re going to observe the behavior and then we’re going to estimate it. We’re not going to presume a model of behavior up front. And it’s a very different approach to the science, which I think is probably really challenging a lot of, you know, the traditional economic models.
MALLABY: That’s an interesting point. Do you want to react to that—the idea that there’s been this debate in economics between is the individual fully rational, is the individual subject to behavioral distortions? Maybe we don’t even need to have that discussion if we can actually observe the behavior and say, well, this is how they behave.
VARIAN: Let me actually make a slightly different point that follows up on this, and that is, another—one of the things we’ve discovered from machine learning techniques—and actually we knew in economics decades ago and forgot—namely, the average of several models almost always outperforms a single model, OK. So this is known as ensemble learning these days. But we knew this in the ’60s when we looked at big macro models. And people noticed, gee, the average does better than any single one.
Now, in science we’re always trying to say, well, I have my theory. My theory is right. Here’s the data. And you say, no, no, I have my theory; here’s the data. But actually the average of our two models is probably going to perform better than either one. So one of the capabilities that we have with big data now is to do this kind of model averaging in a way that was more difficult before.
So when we do our own forecasting at Google for query growth and things like that, we actually average 35 different models. And in the famous NetFlix challenge, where they tried to predict viewing habits and movie ratings, they averaged 800 models. So this idea that you don’t have a single model of how the world works, you have multiple models which you can then average, is an extremely powerful idea. And as we have more and more big data available, we’re going to see more and more of this going on.
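Varian's model-averaging point can be made concrete with a toy sketch. The models and series below are invented for illustration (Google's actual setup averages 35 real forecasting models): an equal-weight average of several differently-biased forecasts beats each individual model on mean squared error.

```python
# Sketch of equal-weight ensemble averaging: the mean of several
# imperfect forecasts often beats any single one. Toy data only.

def avg_forecast(forecasts):
    """Equal-weight ensemble: average the predictions of several models."""
    return [sum(preds) / len(preds) for preds in zip(*forecasts)]

def mse(pred, truth):
    """Mean squared error of a forecast against the realized series."""
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)

actual = [100, 104, 109, 115, 118]

# Three hypothetical models, each biased in a different direction.
model_a = [97, 100, 106, 111, 113]   # tends to undershoot
model_b = [104, 109, 113, 120, 124]  # tends to overshoot
model_c = [101, 103, 110, 114, 119]  # roughly unbiased but noisy

ensemble = avg_forecast([model_a, model_b, model_c])
for name, m in [("a", model_a), ("b", model_b),
                ("c", model_c), ("ensemble", ensemble)]:
    print(f"model {name}: MSE {mse(m, actual):.2f}")
```

Here the opposing biases of models a and b partly cancel in the average, so the ensemble's error is lower than even the best single model's.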
SHAPIRO: I’d like to add to that. So my view is a little between Diana’s and classical economics. So we—I’m also working with a large-scale data set of daily balance and transactions data, following a million accountholders who have a smartphone app that gets their balances and transactions. And we’ve used it to study what happened in the government shutdown in 2013; very interesting experiment where government workers lost 40 percent of their paycheck. But by the time they lost it, it was pretty clear that they would get the reimbursement two weeks later.
So classical economics says just not to worry—it’s a temporary drop; we should be buffered. An income theory says that’s not right. The man on the street would say, oh, this is a disaster. We actually learned what the man on the street actually does. And we found a number of remarkable and surprising things that you wouldn’t have known if you didn’t have these kinds of data.
So about 25 percent of workers—and these are government workers, whom you’d expect to have relatively stable, relatively middle-class circumstances—show up the day before their paycheck with essentially nothing in the bank. So losing 40 percent of your paycheck should be a disaster, and you’d expect spending to drop 40 percent. It dropped some, but not nearly as much, because consumers are creative in how they adjust.
There are other ways to buffer shocks to spending than simply borrowing on the credit card or dipping into liquid assets these folks don’t have. They basically do the time-honored thing of delaying payments, delaying mortgage payments and credit cards. This is something that maybe shouldn’t have come as a surprise to me if I’d read my Dickens but actually is—it’s not what we have in our economic models. We can learn things about how consumers actually behave.
It’s not necessarily behavioral, in quotes, which might mean irrational. It just means—behavior means a little different than the textbook models that we learned and teach.
VARIAN: By the way, Matt, I have to tell a story about mobile phone use in developing countries. There was a very nice report from a social scientist about demand for mobile phones in Haiti. It turns out the biggest single source of demand, the reason people wanted the mobile phone, was so they could call up their friends and borrow money for short time periods. (Laughter.) And it was exactly—they wanted the social network where, when they ran into some distress—some payment was delayed; something didn’t happen—they were able to pull their social network together and get the money.
This was, again, even more creative than what we see in the developed countries. You see this going on all the time.
SHAPIRO: Actually, I was on a panel when just getting—getting going on the work on the Twitter. And one of the panelists from American University was using mobile phones to study social networks among the homeless. And this came as a big surprise, because normally one thinks of the homeless as not being able to afford a mobile phone. And that’s probably true of long-term hard-core homeless, but actually this study showed that the homeless who had previously been middle class, who might have been right on the margin, were actually using mobile phones as a way to maintain connection with society, a way to, if there were a job interview, to be available. If you don’t have a street address, having a mobile phone number is an important social element. This was surprising and sort of very—
MALLABY: We’ll bring the discussion back around to this in a minute. But I like Diana’s sort of, you know, ethereal point—the nature of economics, beyond rational man, beyond behavioral economics. Think a bit about how this stuff changes economics in the big picture. So in response to the Depression of the 1930s, the U.S. government found it didn’t have the data it needed to understand what was happening in the economy. So that’s the origins of the GDP data, the national income accounts.
Now, following a big financial crisis, there’s been interest in a new set of data. People have written about the need for sort of single measures of aggregate risk in the financial system, which seems rather too ambitious. But what you’re all grappling with and talking about is essentially the second revolution in data, which maybe, because of what Diana is saying, gives you more insight into how things really work than behavioral economics, which was discussed so much after the crisis.
This data revolution could be more consequential than the—(inaudible)—stuff in the ’70s that people now refer to all the time. I mean, did you have a reaction to that, Hal? Do you think—I mean, George Soros, in setting up INET, to think about the new economics, is really talking about behavioral. Maybe that’s the old economics now.
VARIAN: Well, I’m a big fan of behavioral economics. I think it’s quite important to understand it. But I’m even a bigger fan of looking at the actual data. And whether it’s a rational model, a behavioral model, or whether there are these kind of things that we think of as second-order effects but actually are first-order effects for the people involved, that’s the most interesting aspect, in my view, because we all sort of generalize from introspection our own experience, people we know, and things like that. But looking at a broader segment of the population, you see lots of behaviors that are very different. And I think that’s one of the great things about your data, if you can explore that.
FARRELL: I think that—
FARRELL: Yeah, that’s—yeah. In fact, I would say, back to the bigger-picture story, is the presumption of a certain rational behavior is belied by the extraordinary variation in behavior that we actually observe against big data sets. So, you know, one of the more recent pieces that we did was on gas prices. The gas price has declined 45 percent. This means a lot of money in the pockets of households. What have households done? Well, Gallup goes out and interviews a bunch of people. Seventy percent say I’ve been putting it aside. I’ve been saving it. I’ve been paying down debt.
The aggregate data, because they have, I would argue, more of a view that behavior is going to be more generally true across individuals, get it, I think, pretty muddled. They’re really not able to ask the question very well. And they get closer to the Gallup view—that 45 percent is being spent but 65 percent is being saved.
Well, the first observation we had, we actually were able to observe gas price—gas spend versus non-gas spend and say, first of all, did we see a decline in gas spend? Yes, we saw a very significant decline.
MALLABY: Because you can see people’s accounts.
FARRELL: Absolutely—non-personally identified and all. We can see the sort of aggregate impact of the gas price decline. The first observation is that even though gas prices declined by 45 percent, gas spend declined only by 25 percent. Why? Because people do something kind of crazy: when gas prices go down, they shift to premium gas.
FARRELL: So that’s the first thing we—
MALLABY: I wanted to spend more on my gas.
FARRELL: (Laughs.) I really did. But then we can also observe what categories saw increased consumption. And we isolate those. But it’s really important to say, look, we’ve got the variation of gas spend. The average gas spend in the U.S. is $101 a month—$101 a month. But for those—the bottom quintile spenders, it’s $2 a month. And the top quintile spenders are spending $360 a month. So already, with that kind of variation, when you ask a question of what do people do with the gas spend, you’re going to get a radically different answer depending on what population you’re talking about.
And you can play that out, as we did, to create a real control group between high gas spenders and low gas spenders in order to isolate the real impact of gas prices. And we conclude that people have—despite what they think they’re doing, they’ve spent most of it, at least 80 percent of it. They’re spending it on restaurants and groceries and entertainment and department stores.
But I think that’s a great example of a bunch of things we’re talking about—too much variation to try and impose a behavioral model, because you’re going to react very differently to a gas-price decline if you’re spending two bucks a month than if you’re spending $360 in a typical month. And until we can map out all of that, we will just get very obfuscated aggregate data, which I think is really what we have on these kinds of questions.
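The high-spender/low-spender comparison Farrell describes is essentially a difference-in-differences design: households with a big gas bill got a big windfall from the price drop, households with almost no gas bill got almost none, and the gap in their spending changes isolates the effect. A minimal sketch with entirely made-up numbers (the Institute's actual estimates come from millions of accounts):

```python
# Sketch of a difference-in-differences comparison between high and
# low gas spenders. All dollar figures are invented for illustration.

def did(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences: treated change minus control change."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Average monthly non-gas spending (dollars), before/after the decline.
high_gas_pre, high_gas_post = 2400, 2490   # ~$360/month gas spenders
low_gas_pre, low_gas_post = 2400, 2405     # ~$2/month gas spenders

extra_spending = did(high_gas_pre, high_gas_post,
                     low_gas_pre, low_gas_post)
windfall = 90  # hypothetical monthly windfall for the high-spend group
print(f"extra spending: ${extra_spending}, "
      f"share spent: {extra_spending / windfall:.0%}")
```

The low-gas-spend group serves as the control: any economy-wide spending trend shows up in both groups and is differenced away, leaving only the effect of the windfall itself.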
MALLABY: So I think there are two kinds of measurement improvements that we’re talking about. One is just understanding how much is being produced, or measuring what the real unemployment rate is—getting beyond a survey of limited sample size and poor response rate, and having better-quality data.
But another one is less static. It’s sort of understanding a likely reaction to some kind of stimulus, whether it’s a natural stimulus like a change in energy prices or a policy stimulus like a cut in the federal funds rate or a budget stimulus.
Maybe, Matt—I think, you know, you talked a bit about your look at the federal shutdown and what that did to people’s budgets. But can you think of other studies that have given a better understanding of what the right amount of stimulus would be, the right mix of fiscal and monetary—maybe drawing from other countries which have different sets of data available? I think that in Scandinavia there’s extraordinarily transparent data on labor-market behavior, so you can really see how workers are responding and moving cities, and—
SHAPIRO: So almost at every economic downturn we have some kind of tax rebate or payroll tax cut or something that is meant to put dollars in the hands of households. And there are a range of views from theory about what this might be. Basically the government’s cutting a check and moving money from its checking account to the private checking account, and then the question is, will it be spent?
And I think where Diana was going in her previous comment is there’s just tremendous heterogeneity in what people do. And there are multiple sources of heterogeneity. One is heterogeneity of economic circumstance. So will low-income individuals versus high-income individuals behave differently? My research on that using surveys, which are limited, is actually counterintuitive: low-income folks are just as likely to save their tax rebate as high-income folks. And that’s because an extra thousand dollars might be a huge benefit applied to, say, paying off a debt rather than spending.
But we can use data of the granularity such as I’m working with at the JPMorgan Institute to really understand this. And it’s extremely important for the design of fiscal stimulus programs, since we typically do this at every turn. But the second type of heterogeneity is more behavioral: at a given income level, people behave differently. Some people seem to have the spending gene; some people seem to have the saving gene. In research we’ve done on the recent expiration of the payroll tax cut, it really looked like some folks were targeting debt repayment, which is very hard to explain—we basically don’t have an economic model in which that’s what you’re supposed to target.
If there’s an adjustment in income, you may or may not adjust spending depending on the circumstance. But there is no model in which you should be targeting debt repayment. Yet that seems to be what people are doing. So we need to understand this. And while most of the research has focused on the impacts of—(inaudible)—policy, it’s actually really important for monetary policy, because this is one of the channels through which monetary policy is supposed to operate.
The conventional channel is that lower interest rates lead to more spending on interest-sensitive components of spending. We haven’t seen that working as powerfully as it might have, given the very low interest rates we’ve had. That’s probably due to the fact that a lot of the natural spenders—those who might want to borrow—even at this late stage in the expansion, feel like they have more debt than they want rather than less, so they’re not going to respond to lower interest rates the way our models would predict. And these kinds of data can give insight into that.
VARIAN: Yeah, I think this heterogeneity aspect is very important, because the theory by its nature has simplifications. And there’s a representative consumer in lots of theories or there’s two classes, capitalist and workers, or maybe three classes. But, of course, in reality there are a huge variety of different motivations and different understandings, different circumstances. And so by looking at this kind of data we can get a much better idea of how these forces interact. So—
MALLABY: I’d like to ask you a question about Uber, which Jason Furman referred to in the past session, because this strikes me as an example of something where you have a new technology that comes along and there’s quite a lot of disagreement in the public debate as to whether this is, A, good for drivers because they can work flexibly, and more, and whatever, or B, bad for drivers because they’re, you know, being exploited, don’t get benefits, et cetera. And this is presumably another thing which—I mean, you have technological change going on. There’s fractious debate about whether it’s good or bad. And we may be able to understand better what the truth is. Is that right?
VARIAN: Well, I think in the Uber example it’s a very interesting one, because these are, after all, voluntary transactions among informed people. And when you look at both anecdotal evidence—namely, you ask your Uber driver how do you like your job? Invariably they say I love the flexibility. I can do it between classes. I can do it when I drop my kids off for day care. I do this. I do that. And there has been work by Krueger and Hall on surveys of Uber drivers. And indeed, the flexibility is cited as the number one advantage of Uber.
Now, that wouldn’t have been something that we might have guessed ex ante. It comes back to this heterogeneity. But there are lots of people that have kind of gaps in their day that allow them to go out and earn income in this kind of flex work. That’s an important insight. I mean, that’s an important aspect of the economy that I don’t think either the left or the right really focused on in terms of their thinking about worker behavior.
So I’m all for it. I think we let the market sort these kinds of things out and utilize that flexibility that otherwise would have gone to waste.
FARRELL: I think a big part of the answer to your question, or to Jason’s question, is: what is the use of Uber? Uber’s talking points may well be correct. They will say this is a good thing because the vast majority of our drivers are only driving 10 or 15 hours a week, so what it’s really enabling is a much-needed smoothing of income and of consumption needs. And, by the way, we document that extensively in our work at the institute: individuals experience very high levels of income volatility, and even higher levels of consumption volatility, much more so than any of the aggregate or traditional data sources show.
So if you think that is true, that for most people this is just additive to a base level of work, then you say this is wonderful, because people really do need to smooth out the income and consumption volatility they are exposed to. To the extent that you think more and more people are relying on this as their only source of stable income, then some of the concerns kick in. Certainly the data that I’ve seen suggest that, for now, most of it is additive work, not primary work. But that’s changing too, and so I think that question will remain open for a while.
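The volatility Farrell describes is often summarized as a coefficient of variation of monthly income. Below is a minimal sketch, assuming invented monthly figures and a hypothetical flexible side gig that damps the swings; this is not the institute’s methodology.

```python
# Sketch: month-to-month income volatility, with a flexible side gig
# (e.g., ride-share driving) smoothing a volatile base income.
# All numbers are invented for illustration.
from statistics import mean, pstdev

base_income = [2600, 1900, 3100, 2200, 2900, 2000]  # volatile monthly pay
gig_income  = [200, 700, 0, 500, 100, 600]          # flexible top-up

def coef_var(series):
    """Coefficient of variation: standard deviation relative to the mean."""
    return pstdev(series) / mean(series)

combined = [b + g for b, g in zip(base_income, gig_income)]
print(round(coef_var(base_income), 3), round(coef_var(combined), 3))
# prints 0.184 0.068
```

The side gig roughly cuts the volatility measure by two thirds in this toy example, which is the "smoothing" story in quantitative form.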
MALLABY: So I think we’ve established that, you know, there’s a lot of exciting potential here for economics and for public policy.
Let’s talk about how you realize the vision. So, you know, we know that enormous amounts of data are being captured by scanners, by Google, by all kinds of private entities that do it for business purposes. Now the question is how you translate that into the public space.
So let’s start with Diana, since that’s what you’re actually doing. In fact, you all do it in some way, so I’ll come to all of you. But you set up the institute at JPMorgan with the idea of surfacing and publishing anonymized data. How hard was that?
FARRELL: It’s a great question, because I think that big data and these analytic platforms, which are very real, have been oversold. This stuff is really hard to do. All of these data were created for very different purposes. They are really about informing commercial lines of business, ensuring custody, and all these important roles. And frankly, the investment in creating these data probably would not have occurred unless they had those commercial purposes and that return on investment.
And what we have done is gone through extensive legal, regulatory, and ethical compliance to ensure that we’re treating these data with extraordinary care, and then brought them into repositories that translate those data into economic concepts.
So think about it just for a minute, to get into the mechanics of it. If you’re watching an anonymized inflow and outflow of accounts, you will see certain things that you can easily understand as economic concepts. You’ll see payroll come in. You say, well, I know payroll is income. Great. Ah, but payroll is after-tax, right, because taxes get deducted from payroll. So now we’re talking about a certain kind of disposable income, which may or may not be what you want. You’ve got to worry about those sorts of things. And there’s a lot of coding, of artificial-intelligence-type application, that goes into translating all these data that were created for a different purpose into economic concepts.
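The mapping from raw account flows to economic concepts can be sketched as a simple rule-based classifier. The rules, labels, and sample transactions below are illustrative assumptions, not the institute’s actual pipeline, which she notes relies on far more elaborate machine-learning techniques.

```python
# Illustrative sketch: tagging anonymized account flows with economic
# concepts. Rules and labels here are hypothetical, for exposition only.

def classify(txn):
    """Map a raw transaction record to an economic concept."""
    desc = txn["description"].lower()
    amount = txn["amount"]
    if amount > 0 and ("payroll" in desc or "direct dep" in desc):
        # Payroll arrives net of withholding, so it proxies
        # after-tax (disposable) income, not gross income.
        return "disposable_income"
    if amount < 0:
        return "consumption"
    return "other_inflow"

flows = [
    {"description": "ACME CORP PAYROLL", "amount": 2500.00},
    {"description": "GROCERY STORE",     "amount": -120.50},
]
labels = [classify(t) for t in flows]
print(labels)  # ['disposable_income', 'consumption']
```

A production pipeline would use trained classifiers rather than keyword rules; the point is only the translation step from transaction strings to economic concepts.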
So that has been a big part of the first six, eight months of even starting the work on the household sector. And I would say it is very hard. It probably was not possible as recently as seven years ago because the techniques that have been developed both in machine learning and artificial intelligence are only now coming to be properly applicable. And yet the promise is huge, so it’s worth doing. It’s expensive to do. It’s hard to do.
I think the bigger challenge—you alluded to this, Hal, and we’ve been in many conversations about this with other folks who care about the public-good aspect of private-sector data—is that it’s hard enough within JPMorgan Chase to get the savings data properly paralleled with the demand-deposit data and the credit card data, et cetera. And that’s all within one institution that, for various reasons, wants to do this.
If we were trying to bring even multiple banks’ data into public-good repositories, or our data together with utilities data, or something else, what are the standards? What are the protocols? What is the underlying infrastructure that’s going to enable these data sets to work together?
One of the first people to really pioneer the use of public administrative data is Raj Chetty, who some of you may know did work with IRS data and managed to link that long-term income data to education, and concluded some pretty profound things for education policy. These are administrative data sets; what he did to make the IRS data connect with administrative data on the education side was very, very hard. Again, that’s within one government and within one sector.
I guess my call would be: this is a challenge, but we need to start now as a society, just as we did after the world war, when we established these surveys and this whole mechanism for looking at the world, to establish the infrastructure, the standards, the protocols, so that, step by step, we can begin putting these data sets together, because that’s really where we’re going to get the right perspective on the world.
I mean, we have to worry, with all of our data sets, about questions of representativeness: how representative is the Chase customer base as a measure of the economy? And completeness: how complete is the picture you see with whatever lens we’re bringing? We can do all kinds of intelligent statistical applications to improve that. But the holy grail here is getting the infrastructure in place to enable more data sets to talk to each other.
MALLABY: So your vision is multiple banks or multiple companies are feeding their own data so that you can cross-check. And you have a sort of—
FARRELL: Right, a more complete and more—
MALLABY: Three hundred sixty degree—yeah.
Hal, I mean, you know, Google, I guess, is known for, you know, beginning the search, making sense out of masses of online data and navigating it and figuring out what it means. But have you, do you think, as a company begun to translate that into some vision of collaborative data sharing beyond Google?
VARIAN: Yeah. So the general policy at Google is we give the data to everybody or to nobody, because it’s too hard to figure out the gray cases. And so things like Google Trends, Google Correlate, and Google Consumer Surveys are available to everybody, the first two for free and the third for a nominal cost.
That’s the easiest route to take, although I will say a lot of this data would be very valuable to have in greater detail to experts, like people here. But we haven’t figured out a good way to do that, and I don’t think we’re going to figure out a good way to do that in the immediate future.
Other companies have different models. If you look at the MasterCard SpendingPulse data, which is very useful, that’s available by subscription at a modest price. The ADP data, some of it’s released for free and some of it’s still proprietary within the company. At this stage there are lots of different models, so we kind of work our way through.
I agree with Diana’s point that we should start trying to think more systematically about this. And how many panels have we been on? (Laughs.)
FARRELL: (Laughs.) Too many. And we’re both called to action together definitely.
VARIAN: Where people are—you know, are beginning to do that. But it’s still in the early stage.
By the way, I did want to mention one other point, to follow up on what you said. Another issue with private-sector data, as opposed to public data, is: what do you do when you improve your methodology? The government people who collect the data are very dedicated, and generally they do a very good job; it’s just that their job is getting more and more difficult as time goes on. When they redefined R&D as investment in GDP, they dutifully went back and filled in 40 years of data and said, well, this is how GDP would be adjusted, not just going forward but going backwards.
In the private sector we don’t do that. When we improved our system for assigning IP addresses to geography, we didn’t go back and backfill. We said, we made a change here; noted. And the same thing with categories. You have categories of searches, categories of goods, categories of this and that, continually evolving. When the government went from SIC codes to NAICS codes, they provided a mapping. The private sector won’t do this. We’re thinking about what’s going on in the future, and we don’t care as much as maybe we should about the past. But the government data has to preserve the integrity of the whole data set. So that’s also an issue.
MALLABY: I want to bring the members in, but just one word from Matt. How do you see it as a sort of user of these data?
SHAPIRO: Well, I’m glad Hal put in a pitch for our friends in the statistical system, whom we all rely on. And Diana pointed out that there was this huge miss in consumer spending in the fourth quarter of ’08, when the economy was falling off a cliff. These kinds of data can help the Bureau of Economic Analysis do the GDP, which is an estimate, but essentially a projection, because many of the benchmark data are available only with multiyear lags, some of that inherently so to the extent that it’s collected from administrative or tax data. Those do come in with a lag.
So it’s really important that the private-sector data custodians think about the needs of the statistical system, because those are the folks who put together these public goods that we all take for granted. And whether it’s a deterioration in quality, or a difficulty in tracking changes in the economy, or an expectation in the political process that because data are free, national statistics should be too, the system needs help with it.
So I think, in addition to helping researchers answer questions of public-policy importance, it’s important to figure out ways to feed the data in ways that respect the need for the public and policymakers to have timely data, and also—and as Hal points out, these are institutions which do care about consistency over time and will, to the best means possible, try to backfill or try to create statistic—consistent statistics.
So it’s important to have these institutions and to figure out ways they can partner in an environment where data are costly, where what everyone thinks of as free actually comes at a big cost, where data are highly proprietary and important parts of companies’ business models, and where there are very real privacy and, as we heard on the last panel, security concerns.
FARRELL: Could I add just one thing? I know you meant this, Matt, but I want to make sure that, as we think about this new world of the private sector coming to the rescue, it is not in lieu of what the public sector is doing. I think there are many ways in which the private sector will complement it in critical ways, but I do think we’re missing a constituency for the importance of public data.
You know, if you talk to the folks at the Census, their budgets are getting cut, whether or not the response-rate issue is being addressed. So I think the call is broadly for the private sector to play a more active role and contribute these data, but not in order to make the public version obsolete. It’s about both of them working together. I think that’s really the vision.
MALLABY: So who has a question? The big—potential of big data. Roger.
Q: Thanks. Roger Kubarych, Craig Drill Capital, but a lifelong forecaster and former colleague of many of you.
I’ve often been asked by clients and senators and people like that, what’s the single best piece of data that you use in forecasting? And the answer is easy: initial claims, every week. It backs up your point about frequency and granularity. It’s a terrific number. It’s in the index of leading indicators. And it really tells a tremendous story because of frequency and granularity, but it also doesn’t get revised. Why not? Because the people who produce it at the state and local government level have to write checks, so it has to be right. So I’m big on that.
Rather than ask a question, I’ll give you a challenge, a project. Maybe you can get Wall Street to pay for it. I think you will be able to do something with this. There’s a rule. It’s called RMD, required minimum distributions. Many of you know it; many of you will. It says that at 70 ½ you have to start disinvesting from your retirement accounts. And that’s going to happen to a lot of people over the next few years. Millions of people are going to be faced with RMDs. And you’ve got data, or you can get data. You can watch what people who have just turned 70 ½ are doing, or have done over the last couple of years, because this is going to be billions and billions in sales of securities and mutual funds that people probably wouldn’t make except for that rule. And how they react to that, and how the stock market and the bond market react to it, is a big issue.
MALLABY: Anyone want to have a crack at that?
FARRELL: I would just say that one of the things your question raises—I’ll go back and think about it some more—is that many of the data we have on the household sector lack an age dimension. And that is one of the things we know we can mine well in the data sets that we have, because we do have that.
Recently the CFPB, the Consumer Financial Protection Bureau, launched an effort to address predatory practices toward the elderly, in financial products and otherwise. And they came to us and said, we don’t even have the data to know what’s happening with older people; can you help? And we said, of course we can. So that was my only reaction. But that’s a good question.
Q: I resist your intimating that I’m old. (Laughter.)
FARRELL: Well, no, no, no. You said watch the 70 ½ year olds. That’s where I—(laughter)—
MALLABY: Raises the question. Who’s got another question? Let’s go over there. The man in the blue tie. Quick question? The microphone’s just coming up.
Q: I wanted to ask about your internal policies.
MALLABY: Could you please identify yourself?
Q: David Gruppo.
Your internal policies relative to the use of the data. You described a very new world where now, all of a sudden, private-sector companies have access to much better granular data, perhaps far more predictive than the public data, which could be quite valuable for proprietary traders or your treasury department. Do you have current policies, or do we have to think about whether any kind of rule should or shouldn’t be in place relative to the use of that data for a firm’s own financial gain or others’? It’s obviously important for product development, right? But where is the line? And is that something we have to begin to think about as we now have these kinds of sources of data?
MALLABY: Well, can you think of an example where it would be perverse or negative?
Q: Well, not necessarily perverse, but just as an example: if you’re far better at predicting what inflation or the jobless rate is going to be than the government statistics, and the bond market is thinking one thing while you’re pretty sure it’s going to be something else, you can make a lot of money on that. Is that good or bad? Whether you should or shouldn’t share that data is a question.
FARRELL: Most highly regulated industries have all kinds of rules and regulations around data usage of all sorts. And this is true for the financial industry, as it is for the health care industry, as you know, with HIPAA and all the usage requirements that have to be met.
Having said that, I will say, you know, almost every private-sector company in the country, certainly, but maybe in the world, worth their salt is mining their own data for all kinds of purposes, product development and otherwise. So I think that there has been quite a bit of thought given to this as it relates to health care data and financial data. And so there are some pretty strict protocols. But even those probably need to be updated, and probably a lot more thinking has to go into this. And, you know, we’re just early days for everyone in both—
MALLABY: You’re thinking about privacy concerns as opposed to the fact that somebody can make too much money, quote-unquote.
FARRELL: Oh, so I’m thinking primarily privacy. I think, in respect to, you know, proprietary data that helps you see the world better, I—you know, I think that’s just been the name of the game forever and a day. So I’m not sure. I don’t know. But that may be subject to some revision too, but I think that’s been true—
MALLABY: You get more appropriate pricing in public markets so people have better information and an incentive to create it.
Q: The question is where the money is going too. If it’s all going in one place, it may create an issue.
MALLABY: Arturo. There’s a mic coming, I think.
Q: Thank you. Arturo Porzecanski from American University.
It must be very tempting to make use of all the data one can get. But I was wondering, since there is more and more technology, and more and more change, and more and more access to technology, whether it’s such a moving target that this data is good for anything more than a couple of years. I mean, in the beginning there are taxi cabs, then there are taxi cabs plus Uber, then taxi cabs plus Uber plus other competitors. Or people might move from using their parents’ credit card to using their own debit card, then eventually they get credit so they have a credit card, et cetera; use of Facebook and other things.
You can see how it changes from year to year, and also by generation. So is it reliable?
MALLABY: Let’s first go to Matt, because you’re the one who raised the issue of, you know, it’s difficult to know how to read a data set until you’ve had some years of experience understanding where the distortions are. Arturo’s saying the world changes so fast that you’re not going to have these years of experience with data sets because the thing you are measuring is changing so much.
SHAPIRO: Well, it depends what you mean by changing. Take the taxi cab example. We used to have horse and buggy. Then we had cars. Then the medallion cab gave way to Uber, and next it’s driverless cars. We still have the basic activity. So I guess I’m a bit more optimistic. You have to follow these things. You have to link together different data sets.
I mean, just in the survey business there’s been dramatic change over the last 10 or 15 years, from in-person to phone to Internet. And then we thought we were way ahead, but a lot of our Internet surveys are designed assuming someone has a desktop or laptop. Now everything’s being done on smartphones, and just as we catch up to smartphones we have watches and autonomous collection devices.
So you have to keep ahead of these things. But there’s enough coexistence of different modes at a point in time. If you’re attentive and collect data, you can try to do some comparisons. We do the best we can. But I think just to throw up one’s hands and say it’s just too complicated doesn’t work either.
VARIAN: I would say it comes back to heterogeneity again. All unhappy economies are unhappy in different ways. (Laughter.) And if we go back and look at recessions, we certainly see this. There are oil-price-driven recessions, Fed-driven recessions, consumer-sentiment recessions, housing recessions; on and on. And they don’t unfold the same way, and the same factors matter with relatively different magnitudes.
But what’s nice is, using some of these tools, we can dive more deeply and understand what the root cause, or combination of root causes, looks like. Just as an example, one question that always arises is: is consumer sentiment predictive of recessionary activity? And the answer is, not over the long run, but in some recessions it’s hugely important. So being able to look at these different factors in different contexts can be very useful.
SHAPIRO: So could I draw another example—
MALLABY: Yeah, sure.
SHAPIRO: —from my own research? So I’ve done work on measuring price indices in health care. And the scale of change there is dramatic. I studied the example of cataract surgery, where 50 years ago cataract surgery involved a bundle of services: a week in the hospital, anesthetist services, ophthalmologist services, and so on. And now it’s a half-hour procedure implanting an intraocular lens.
If you study the industry of cataract surgery from the point of view of what the inputs are, you get a very distorted view. But if instead you say, I’ll watch the service, getting good vision when there’s a clouded lens, you’re actually going to get it right and capture these technological changes. It’s not easy to measure, but at least in principle you get it correct. So we have to think about the measurement and about what’s actually happening with the technology simultaneously and somewhat creatively. But if one thinks somewhat creatively, I think there is some hope.
MALLABY: Let’s go in the back there; the gentleman in the white shirt.
Q: Scott Helfstein, BNY.
Professor, I think your point with the cataract surgery tees this up perfectly. I’m curious, to the panel, with the new and integrated data methods, if there’s anything to shed light on the productivity puzzle, right, or the drought? Where is it? And that’s a great example. You know, we’ve changed how we’re doing this process. And so are we less productive? Are we more productive? Can we see it? Do we know where it’s going?
MALLABY: Hal can give his riff on productivity.
VARIAN: Yes, I’ll give you—there’s a lot of anecdotes and not too much data at this point, but I’ll toss in another—(laughter)—I’ll toss in another anecdote at least.
Look at your smartphone. What does your smartphone do? Well, it’s a camera, and it’s a GPS system, and it’s a media player, and it’s a game player, and it’s an e-book reader, and it substitutes for a landline, and on and on. So how does this show up in the national accounts? Well, they value the smartphone as just a mobile phone. The expenditure on it is your $50 or so monthly service fee. That’s it. So from a GDP point of view, what’s happened is we’ve seen reduced consumption of digital cameras, reduced consumption of landlines, reduced consumption of GPS devices, and so on, and we haven’t seen any change in the phone itself in terms of price, just in terms of its spread through the population.
Now, they know that’s wrong—(laughs)—obviously. They want to fix it. But with this huge substitution away from individual devices towards a single device, we’re dramatically underestimating the productivity gain, or dramatically underestimating the actual price decline, in this set of services. So that is a productivity issue. And it’s not the only one. You can go through and identify lots of other cases where the technology is changing very rapidly, and that rapid change has not been translated into the numbers yet.
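Varian’s smartphone point can be put in rough arithmetic. With invented annual dollar figures, the measured price relative depends entirely on whether the phone plan is compared to the old mobile phone alone or to the whole bundle of devices it replaced:

```python
# Sketch of the substitution effect Varian describes.
# All dollar figures are invented for illustration.

# Annualized spending on separate devices/services, pre-smartphone:
separate = {
    "mobile_phone": 300,
    "camera": 60,
    "gps_unit": 40,
    "media_player": 36,
    "landline": 240,
}
smartphone_plan = 50 * 12  # $50/month service fee = $600/year

# If the accounts value the smartphone as "just a phone", the measured
# price relative compares the plan only to the old phone spending:
naive = smartphone_plan / separate["mobile_phone"]

# Treating it as delivering the whole bundle, the quality-adjusted
# relative shows a price decline instead:
adjusted = smartphone_plan / sum(separate.values())

print(round(naive, 2), round(adjusted, 2))  # prints 2.0 0.89
```

On the naive measure the price of phone service doubled; on the bundle measure it fell by roughly a tenth, which is the direction of the mismeasurement he describes.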
So my belief—I’m from Silicon Valley, so of course I’m a techno-optimist—is that as we gain more experience and as agencies catch up with some of this change, we are going to see some improvements in productivity when it’s measured correctly.
MALLABY: As you know, there is the counter-argument that there’s a bunch of things where the price goes up but the quality didn’t, and so we may be over-measuring the utility delivered to the consumer. When the shirt is redesigned and costs $10 more because it’s the new season, did it really deliver more utility?
VARIAN: This debate will go on for some time. (Laughs.)
FARRELL: But I think it does underscore a key aspect of this, which is that when we think of productivity as the value added that accrues, in a traditional microeconomic context, to a company that profits from it, we kind of get it. When we think about productivity as the value added that accrues to a consumer, we’ve always had a very hard time measuring that, because of the price effects.
If I can sell this for $20 and you can sell it for $10, I will seem more productive simply because somebody was willing to pay more. But I think we all know there’s something a little funky about that. Your example, Hal, is another one. So really getting behind the utility, the aspect of productivity that is true consumer surplus, that is not valued in dollar terms, is impossible. You know, your YouTube experience: there’s no dollar value associated with that, whereas 10 years ago you probably would have paid three or four dollars for every one of those videos. That was the only way you could have had it.
VARIAN: Especially if you like cats. (Laughs.)
FARRELL: (Laughs.) If you like cats. You know, and I think that is the real challenge.
MALLABY: We’ll debate the utility of—
FARRELL: I don’t know how we’re going to get at it. But I think we really do have a crisis of understanding of that aspect of productivity.
VARIAN: By the way, on that one we do have YouTube Red, which is a subscription service, no commercials. So you can either watch the ads and YouTube or you can pay and get an ad-free experience.
MALLABY: How many people are actually paying to avoid the ads?
VARIAN: It was just released a month ago, so we don’t know yet.
MALLABY: OK. All right. Question with the white shirt back there.
Q: I’m Gerald Pollack.
The Bureau of Economic Analysis, which is responsible for the national income accounts, the quarterly GDP releases, has to struggle with all of these problems that you have been discussing here. How well have they done? What can they do that they aren’t doing?
FARRELL: So we have been meeting with them pretty regularly since we started the institute, because we hold them in very high regard; to my earlier point, they’re playing an extremely important function and need to continue playing it. I think they’ve done extremely well, given the limitations of what they have, especially the fact that congressional budgets have been really, really tight; there’s no constituency in Congress or elsewhere for sustaining data sets. So they’re cutting all sorts of data sets and struggling with more and more expensive surveys, because response rates are going down.
And I think they are doing a good job of reaching out to the private sector and getting more input to help do an even better job than they’re doing. But I guess they won’t do as good a job if they’re standing alone on this. I think that they do need help.
SHAPIRO: So it’s a—we have a complex and somewhat Balkanized statistical system. We have the Census Bureau, which collects many of the revenue and quantity data. Then we have the Bureau of Labor Statistics, which does the price indices, where many of these quality issues come up. And then we have the Bureau of Economic Analysis that puts it all together. And they all are struggling valiantly.
And what we really need is a reengineering of the statistical system that takes into account all of these innovations in information technology. So instead of having one survey do retail sales, and then a separate survey asking consumers where they purchase, and then getting the prices, and then doing division, and then having the BEA put it all together, those should really be integrated using the information technology from scanners and so on—although scanners are not a panacea; I edited a volume on scanners a decade ago, and it has probably gotten much better since. The problem is the statistical agencies are also obligated to produce ongoing statistics under very tight budgets, as we see throughout the federal government, and to improve what they’re doing incrementally. In the health sphere there have been huge improvements where much of the technological change had been ignored. Similarly, in computers there have been big improvements. But for every victory in price measurement, you can point to an egregious example, such as the ones Hal gives.
But the agencies are dealing with barely adequate budgets just to maintain continuity in their programs. Even if there were a reengineering, even if there were a commitment to big investment in doing that, that takes some dollars too. It might eventually pay off in cost savings and improved quality. But the agencies don’t have any room in their budgets to do that, on top of the necessary work of producing the continuing statistics and providing a period of overlap where old techniques and new techniques can be compared.
VARIAN: I completely concur with that. They face the big challenge of having to pursue both paths at once. I’ve been talking to them as well; I guess all of us have.
FARRELL: To their credit.
VARIAN: Yes, and in particular on this challenge of measuring productivity increases in free goods. So you have free Google search. You have free e-mail. You have free docs. You have all sorts of services available to you for free. Well, what’s the right price index? (Laughs.) Right? Zero? Zero-point-zero every year, or negative? I don’t know. It’s a challenge, because it’s clear that these things have contributed to productivity. But how exactly they fit into the accounts—
SHAPIRO: By the way, these are not new challenges. They seem more apparent because they’re sitting in our pockets, but you have the same thing in banking services going way back. How do you value free checking? It really isn’t free; you pay for it by accepting a somewhat lower interest rate than you would get without it. And that’s how the agencies deal with it. My co-author David Wilcox and I, when we were working on price measurement in the ’90s, called this the house-to-house combat of price measurement.
Really, there’s almost no substitute for taking this good by good and figuring out what the good really is, in the broadest sense of the word. Is it listening to music? Is it being able to call mom? Et cetera. How do people do that? And how is that changing over time? That’s what we’re asking these agencies to do. And there are tremendous data resources available that it would be great if they could utilize even better.
MALLABY: Another question. Let’s go here.
Q: Karen Harris from Bain & Company.
On that same line, GDP, the ultimate aggregator of data, just how poor a measure of U.S. economic performance is it? And what are accessible alternatives today?
VARIAN: I’ll start because I probably know the least about it. (Laughter.) So we can get better as we go down the row here.
There are a lot of issues with GDP, and I’ll just mention two. One is that in theory GDP is supposed to equal gross domestic income. But if you look over the last 15 years and add up the so-called statistical discrepancy, it’s off by a trillion dollars. So it’s a pretty big difference. Maybe we have better technology for measuring income than we do for measuring output, and maybe one way to think about economic policy is to focus more on the income side of the national accounts than on the product side.
The second thing is that GDP is only reported quarterly, and it’s revised three times. Just as you said, the problems in the Obama White House were not a single example. There are lots of examples of this, where the metric is so difficult to measure and so complicated that it takes months and months to get it right.
Plus I think there are a lot of conceptual difficulties in terms of how we measure the flow of services from durables. We do it for housing but for nothing else; not for cars, not for other things. So there are a lot of issues of that sort that make you wonder whether so much attention should be paid to GDP, or whether we should try to come up with either more income-based accounts or current-activity indicator accounts.
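Varian’s point about durables can be illustrated with a standard user-cost sketch: the annual service flow from a durable is roughly its price times the sum of the interest rate and the depreciation rate. The numbers below are invented for illustration, not from the panel:

```python
# User-cost sketch for a car (all numbers illustrative):
price = 30_000.00          # purchase price of the durable
interest_rate = 0.05       # opportunity cost of the funds tied up
depreciation_rate = 0.15   # annual loss of value from use and aging

# Imputed annual flow of services, analogous to the rent GDP
# imputes for owner-occupied housing
annual_service_flow = price * (interest_rate + depreciation_rate)
```

GDP performs exactly this kind of imputation for owner-occupied housing; extending it to cars and other durables would be conceptually similar but is not done in the headline accounts.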
MALLABY: But Hal, if I understand what you were saying earlier, it’s not merely that income isn’t equal to output, which it should be. It’s also that you’re saying a consumer surplus valued at zero ought to be measured, because it actually makes—
MALLABY: —people’s lives better. Is that—
VARIAN: Yeah, I’m not quite—
MALLABY: You seem to be offering a rather small-bore response there, when your real point is—
VARIAN: No, no. I think the point is GDP is not really a measure of consumer surplus. It’s a measure of the level of economic activity, because it’s measuring just these market-based transactions. It’s not looking at all sorts of other unpriced activities which contribute to consumer surplus.
So just if we focus on that issue of measuring the level of economic activity, I think there are better ways to do that. If you want to expand the question and say let’s try to get better measures of consumer welfare, well, then there’s a huge number of things that you could throw in there. So I’m kind of looking at the low-hanging fruit—
FARRELL: But I think even if we stick with the GDP issue, I think there is a link to what you were saying, Sebastian, which is, you know, 70 percent of GDP is personal consumption. So if you don’t think we have a good measure on personal consumption, then by definition you’re challenged before you turn to investment or government or net exports, each of which I could have a point of view on.
I think, on consumption, you referenced this before, Matt, and it’s particularly challenging. Whether it’s retail sales or the monthly retail trade survey or even the consumer expenditure survey, all the different ways in which we try to get a handle on personal consumption, I think what we’re learning from looking at debit and credit card spend alone (and of course there’s a lot of spending that isn’t on debit and credit cards) is that our conception of the economy, or of commerce, is quite different today.
To your point, it’s quite different today than it was in the past. And there are lots of things that, if you take a merchant view of the world and you want to have roughly the same merchants in the pool over five years so that you have consistency, you’re going to miss. You’re going to miss food trucks. You’re going to miss almost any transaction that goes on a Square swipe, which, to all intents and purposes, is commerce. You’re going to miss a lot of the services, which I think the retail surveys and the consumer expenditure survey miss.
So I do think that its biggest virtue is that it is longitudinal. It goes back a long, long time, to your point that with anything else we try to develop now, you’d have a bear of a time trying to understand how it compares to the past and what use we make of any new understanding of it. But there’s no question that anyone, even the people who work with it, would tell you it’s quite flawed. The question is, how do we keep a longitudinal understanding of ourselves as we try to revise these measures? And should we even try to do that?
We are going to be putting out, starting next month, a local commerce index, which is our view of the way we should think about commerce in cities. But, you know, it’s just one other way. And it’s as much a function of the data that we have as it is a reflection of the reality out there.
SHAPIRO: So I don’t want to deny the premise of the question outright; let me modify it a little. I actually think we have a remarkably good statistical system, certainly the best in the world. The unemployment and employment report came out last Friday, only a few days after the end of the reference month. The GDP statistics, which are an estimate of all the transactions in the U.S. across lots of categories (not as fine as we would like, but a lot of detail, both real and nominal), come out within 30 days of the end of the quarter. They do get revised, and that’s because source data come in.
So I think we should think about improving the framework. And we need to distinguish between problems that are purely measurement problems, where we just don’t get accurate enough or timely enough data on the thing we want to measure and which can be addressed by naturally occurring data, and tougher conceptual problems, where maybe we’re not measuring exactly what we would like to measure.
I mean, there’s been a lot of focus on non-market accounts, or non-market activities like watching YouTube. But consider child care: we count it as part of GDP if it’s paid for, but not if it’s family-provided. And that’s part of the DNA of the GDP accounts; we might want to have that be different. There are satellite accounts that do take non-market activity into account. These things are being done by our statistical system using limited data. So I think we should really focus on improving the data.
I’m certainly an advocate of using the high-frequency naturally occurring data. But it’s really important to understand that it really isn’t going to be that useful if we don’t have the benchmarks that the official statistics provide.
One possible solution is to move at least some official statistics away from their high-frequency role: use them more for benchmark purposes, for monitoring long-term productivity trends, for the potential growth we were hearing about in Jason Furman’s presentation this morning, and then use higher-frequency data that we can get from commercial sources for current activity.
So you can easily imagine a system that combines the two. But we’re fooling ourselves if we think it would be really easy to substitute these kinds of data for the official statistics. And if that happened, then we would really have a very limited basis for knowing what we’re measuring at all.
MALLABY: People sometimes forget that in the late 19th century, a time when the backdrop in the economy was one of extraordinary productivity and technological growth, the macro picture was unbelievably unstable because the policy environment was not very good at smoothing out shocks. In fact, it often made them worse, continuing into the 1930s, when things got even worse.
So I think the benefits we’ve gained from a statistical system created after the Second World War, in the 1940s, to enable better macroeconomic measurement have paid dividends, and those dividends are probably insufficiently appreciated. And so I’m excited to include in this symposium on the U.S. economy the promise that perhaps a new generation of revolutionary data will further improve our macro management and make things better in the future. That’s why I wanted to include this panel in this Freidheim symposium.
Thank you all for coming. Thank you to the panel. (Applause.)
FARRELL: Thank you.
MALLABY: And talking about consumption, on the consumption question there is a lunch upstairs. (Laughter.)
This is an uncorrected transcript.