Symposium

Robert B. Menschel Economics Symposium

Wednesday, February 7, 2018

The Robert B. Menschel Economics Symposium was held on February 7, 2018. This event examined the insights that big data has provided into economic and political behavior, as well as the impact that technology will have on the future of work. The symposium, presented by the Maurice R. Greenberg Center for Geoeconomic Studies, is made possible through the generous support of Robert B. Menschel.

Session I: Keynote With Hal Varian

The Robert B. Menschel Economics Symposium generates critical thinking about the consequences of herd mentality behavior in global economics. The 2018 symposium will examine the insights that big data has provided into economic—and political—behavior.

ORSZAG: OK. Good afternoon, everyone. My name is Peter Orszag. And I’d like to welcome you to this symposium, which is presented by the Maurice R. Greenberg Center for Geoeconomic Studies and made possible through the generous support of Bob Menschel. I should note that today’s symposium is on the record and is being livestreamed. So just please keep that in mind.

I am delighted that to kick off the Bob Menschel Economics Symposium we have Hal Varian with us. He is the chief economist at Google and has had many academic positions before that. I still remember when I was in graduate school the “in Varian” saying, which is if you had a question the answer was “in Varian,” and that meant it was in his textbook. (Laughter.) And he’s gone on to do many terrific things since then.

So the way this will work is Hal is going to kick off with some opening remarks. I then have a series of questions for him. And then we’ll open it up to all of you. So looking forward to a lively discussion.

Hal.

VARIAN: OK. Well, I thought what I’d start with is talking a bit about this role of automation. Actually, I have a whole involved presentation that I started out calling Automation and Procreation. (Laughter.) But I finally shortened it to Bots and Tots—(laughter)—because as economists, when we look at a market, particularly the labor market, there are two forces at work. Certainly automation, computerization, robots, and so on shift the demand for human labor. But we sometimes forget about the supply side; that is, the demographic determinants of the supply of labor and what’s going on on that side of the market.

So first I’ll say a word or two about bots, and then I’ll turn to the tot side. So on the bots, Jim Bessen, who’s an economist at BU, looked at the 1950 Census, looked at occupations listed in the 1950 Census. There were, I think, 270 of them. And he traced what happened as the Census occupational classification evolved. There was only one occupation for that 1950 Census that was displaced by automation, and that is elevator operators. OK, I know there are some still here in New York—(laughter)—but not in Silicon Valley. I don’t think we have tall enough buildings.

Anyway, so the elevator operators were replaced. But if you think about it, the tasks that elevator operators did were not really replaced. They just moved to different people in the organization. So now when you walk into a building, there are a couple of people sitting there at desks. Maybe there’s a receptionist or security people. The elevator operator used to do safety inspections. They used to answer questions that people had. They gave people guidance. They delivered packages. They told people about sales and promotions in department stores and so on.

Those things are still done. Those tasks are still around. They’re just done by a different set of occupational titles. So when you start looking at jobs from the task point of view, you get a very different picture of what that demand displacement looks like. For example, if you think about the assembly line, we spent 100 years optimizing the assembly line, right? You’re trying to have a situation where there’s one person in one position who does the same task over and over again, and it’s not too surprising, since you’ve optimized to that extent and made the work very routine on a station-by-station basis, you could replace that with computers, with robots.

And now, if you look, 50 percent of all robots are in automobile manufacturing plants, OK. So it’s very focused towards that environment—a routine environment that’s replicable and kind of stationary in some sense.

So if you look at environments that don’t have those characteristics, then the possibilities of automation become much more problematic. So let’s take a job like gardener. We don’t think of gardener as being a really high-skilled, super high-skilled job. But it’s a very varied job. There’s lots of different things that gardeners do. They trim the trees. They cut the grass. They plant flowers. They do this, they do that; all sorts of activities. And automating any one of those tasks you could do with several million dollars and several years of research. But automating all of them would be extremely difficult.

And the same thing is true of a task like a maid in a hotel. All the rooms are different. They all have different layouts. They’re very heterogeneous. It would be very hard to build a robot which did all of those tasks that a maid does. Now, assisting the maid by providing better ways to clean things and better ways to pick things up and better layouts and so forth, that can certainly be done. That kind of augmentation of labor is certainly possible. But I think there are some jobs which are quite difficult to automate all of the tasks associated with that job. So I think we’ll see automation in those categories move more slowly.

Now, one example that people like to bring up is, well, what about autonomous vehicles? We’ve got all of this investment going on in driverless cars, and it’s very successful. In fact, I will tell you, we would have driverless cars now if it weren’t for those pesky humans. They’re the problem, because it’s those other cars that are driven by humans and the pedestrians that are going in every direction. They add the heterogeneity and the noise to the environment that make it very difficult to get around. Going down an empty freeway, or even a freeway that has a number of cars on it, is much easier to do than navigating around New York City, as an example—maybe one of the worst-case examples, except for Boston, which is—(laughter)—so, which is even worse. So I think you will see cars. And there are going to be autonomous vehicles. They’re going to be available first in the most homogeneous environments and then expand to a wider range of environments. And it has taken really billions of dollars of investment over decades to get to the point we’re at now in terms of making these a reality.

OK, that’s my short sermon on bots. Let me turn to tots. There’s only one social science that can predict a decade or two in the future, and that’s demography. So we don’t really know where technology will be in 10 years or 20 years, but we have a good idea of how many 25- to 55-year-old people there’ll be in 10 years or 25 years. And what happens there, when you look at it, is that the two big shocks of the 20th century—namely the Baby Boomers and women entering the labor force—those are both no longer—(laughs)—they’re not going to be sources of growth in the labor force anymore, because Baby Boomers are retiring. Remember, the Baby Boom era was 1946 to 1964. So those people are retiring. And they will continue retiring.

And yet, they still expect to consume. So the labor force is what’s producing goods and services for the nonworkers to consume. And what’s happening on that side is pretty grim: the labor force is now growing at half the rate of population growth. And if you look at the next decade, the 2020s, you’re seeing the lowest growth in the labor force ever, OK? And the U.S. is in good shape compared to other developed countries. Look at Japan, Korea, China, Germany, Italy, Spain. They all have a very serious demographic crisis. And what that means is there will be fewer workers to support a larger group of nonworking people. And if we don’t increase productivity—which really means automation—then we’re going to be in big trouble in those developed countries.

All right? So the automation, in my view, is coming along just in time—just in time to address this coming period of labor shortages. And all our intuitions were developed around a world where there was plenty of labor, in part because of the Baby Boomers and the entry of women into the labor force, which happened in the ’60s and ’70s. But the future is going to look quite different. And so the intuition we have now, that firms can always buy the labor they need, I think is going to change rather dramatically in the 2020s and into the 2030s, and things don’t actually get back aligned together until about 2060. So the next several decades are going to be periods with tight labor markets and a strong desire for having more workers who can produce the goods that the entire population can enjoy. So that’s my quick spiel on bots and tots.
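A quick way to see the force of that supply-side arithmetic is to compound it out. This is a minimal sketch; the starting values and growth rates below are round numbers assumed purely for illustration, and the only claim taken from the talk is that the labor force grows at half the rate of the population:

```python
# Illustrative sketch of the "tots" arithmetic: if the labor force grows
# at half the rate of the population, each worker supports more consumers
# over time. All inputs are assumed round numbers, not Varian's figures.

def project(initial: float, annual_rate: float, years: int) -> float:
    """Compound a quantity forward at a fixed annual growth rate."""
    return initial * (1 + annual_rate) ** years

population_0 = 330e6      # rough U.S.-scale population
labor_force_0 = 160e6     # rough U.S.-scale labor force
pop_growth = 0.007        # assumed annual population growth
lf_growth = 0.0035        # labor force growing at half that rate

for year in (0, 10, 20):
    people_per_worker = project(population_0, pop_growth, year) / project(
        labor_force_0, lf_growth, year
    )
    print(f"year {year:2d}: {people_per_worker:.3f} people per worker")
```

Even with these small growth-rate gaps, the number of people each worker supports rises steadily over two decades, which is the direction of the squeeze being described.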

ORSZAG: All right. Well, lots to talk about. So, actually, let’s just—before we turn to other topics just stay on this for a second. So let’s take the example of the maids in, you know, a different configuration of hotel rooms. We already have robots that can figure out a room. The military has drones that can enter a house—actually, unlike military squads that have to enter from one door so they’re not shooting each other from both sides—scope out all the parameters of the house and navigate. So why are you skeptical that the full array of maid services will be automated at some point over the next, you know, couple decades?

VARIAN: So how much did it cost the Army to develop that capability?

ORSZAG: A lot, yeah, yeah.

VARIAN: A lot. A lot. And so—

ORSZAG: But it exists, is my point.

VARIAN: No, so I think you’re right. Eventually we will likely get to more automation in these very heterogeneous environments. But that’s not something that’s going to happen overnight and displace millions of people immediately. It’s something where, I think, you’re going to see time to adjust. So I don’t want to be too pessimistic on this. We’ve had a lot of tremendous accomplishments in artificial intelligence and machine learning just in the last five or six years. And there will be further breakthroughs, most likely. But it’s not something that’s going to be—in my view, it’s not something that’s going to be as disruptive as you would guess from reading the newspaper headlines.

ORSZAG: So let’s take that, and then turn to—I mean, you touched briefly on productivity. And one of the great puzzles is we seem to have advances in ML, AI, and lots of other dimensions, but productivity growth has plummeted, including the most recent reading from the end of last year. So there’s this ongoing debate about whether we’re just mis-measuring and, even on a GDP concept, things are actually better than the measured statistics and so that might be part of the explanation. But where do you come down on—are we mis-measuring both GDP and other concepts of economic well-being, and has that mis-measurement gotten worse by enough to explain the measured productivity deceleration?

VARIAN: So I would say there are two aspects to that question. One is, take the traditional concept of GDP and productivity measurements. Are we doing it correctly, and has it changed, has it gotten worse lately? And then the second question, is GDP the right thing to look at to begin with?

So let me—let me say a word or two about the first part. It has gotten much more difficult to measure GDP because of the rise of services and intangibles of all sorts: 80 percent of the labor force is in services. It’s quite hard to measure quality improvement in those areas, and it’s quite hard to measure them directly.

GDP came out of World War II. It was easy to measure tanks rolling off the assembly line, or automobiles, or physical goods being produced. It’s much harder to measure those intangibles. And a lot of the economy is in intangibles now, so for example, think of things like design and software, both of which are intangibles. So when Apple draws up the plans for the iPhone and builds the software operating system for the iPhone, sends it to China, iPhone is assembled in China, comes back into the U.S. So the import of that physical product is relatively easy to measure. What’s hard to measure are the email attachments, and the files, and the software updates, and the design plans, which go the other direction because those don’t go through a market—you know, they’re going through an internal process of the firm.

And if you look at something like Google doing its Pixel phone, we developed this open source operating system, Android. It’s got a price of zero, by definition, because anybody can download and use the operating system for their mobile phone, and that, too, is very, very hard to measure—the value of that service. But if you think about a mobile phone, it costs $150 for the parts and labor in a low-end iPhone—iPhone SE—but at least half of that has got to be the value of the software and design. So mobile phones—that’s a $400 billion industry. If half of it is due to the software and design—and I think that’s a very conservative estimate—then there’s a big chunk of GDP that we may not be measuring properly, OK?
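That back-of-the-envelope estimate is easy to reproduce. The sketch below uses only the figures cited in the example, with the 50 percent software-and-design share being the stated conservative assumption:

```python
# Reproducing the phone arithmetic: a ~$400 billion industry where at
# least half the value is software and design implies a large chunk of
# value that conventional GDP accounting may miss.
industry_revenue = 400e9        # global mobile phone industry, dollars
software_design_share = 0.5     # "at least half", a conservative estimate

unmeasured = industry_revenue * software_design_share
print(f"potentially mismeasured value: ${unmeasured / 1e9:.0f} billion")
# prints: potentially mismeasured value: $200 billion
```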

There’s a lot of debate about this, and it’s something—it isn’t the same problem we faced a few decades ago when you had the design, the engineering, the assembly of a product all done in the same country. Now all of those things can be done in different countries.

So it’s a challenge—it’s a challenge to the whole idea of GDP measurement. People are working on it. I think we’re going to improve in this dimension, but it’s—but it’s part of the story. I don’t want to claim it’s the whole story, by any means, but it’s a part of the story about the productivity growth.

And then this question is, is that what we want to measure? Well, GDP is gross domestic product. Now that means it only relates to things that are produced domestically. But welfare, from an economist’s point of view, is about what is consumed domestically, and those are not necessarily the same things. Not only do we have the imports as a difference, but it’s this whole question of the pricing of what’s produced because, to be in GDP, you have to be sold in a market somewhere, generally.

So let me give you my favorite example: photos. Back in 2000 there were 80 billion photos taken in the world, and I can give you that very precise number because there were only three companies that made film, so you could kind of count up the film. And the film and developing cost about 50 cents a photo, roughly speaking. So now there are 1.6 trillion photos taken in the world—in 2015, according to my estimates, 20 times as many—and the price has gone from 50 cents apiece to zero essentially, OK? So any normal human would say, wow, what an incredible increase in productivity, but not an economist. (Laughter.) No, we wouldn’t say that—because that’s about what’s consumed. All of those photos, for the most part, are given away or shared or you go over them in your—on your phone and you say, oh gee, it’s so nice to see Jimmy’s birthday party or whatever it was. Another example like that is GPS machines. So GPS was originally a very expensive technology. Only truckers and people in logistics could use it—thousands of dollars per unit.

Price came down, down, down, and as that price came down, real GDP went up, OK, until it hit zero. Then it’s not in GDP anymore. So you’ve got this tremendous improvement in the product as the price is going down, down, down. Hit zero. It’s not in GDP and, by definition, there’s no quality improvement anymore for those zero-price goods or, for that matter, for imports, right.
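The photo example can be put in a few lines. The quantities and prices are the ones quoted above; the point is how measured output behaves once the market price hits zero:

```python
# Photos as measured by GDP-style accounting: quantity times market price.
photos_2000, price_2000 = 80e9, 0.50     # 80 billion photos at ~50 cents
photos_2015, price_2015 = 1.6e12, 0.0    # 1.6 trillion photos, price zero

measured_2000 = photos_2000 * price_2000
measured_2015 = photos_2015 * price_2015

print(f"2000 measured output: ${measured_2000 / 1e9:.0f} billion")  # $40 billion
print(f"2015 measured output: ${measured_2015 / 1e9:.0f} billion")  # $0 billion
print(f"photos taken: {photos_2015 / photos_2000:.0f}x more")       # 20x more
```

Quantity consumed rises twentyfold while the measured value of the activity falls to zero, which is exactly the wedge between GDP and welfare being described.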

So look at the smart phone, the defining device of our time. What’s happened there is it’s replaced the camera, the GPS system, the music player, the alarm clock, the flashlight. I mean, we could go on and on and on in all of these things that are bundled into that one product, and all those reductions in the sales of cameras, GPS machines, alarm clocks, music players and so on, those probably had a negative effect on GDP because they were things that were not produced.

And the phone itself, unfortunately, does not have any quality adjustment in the—in the figures—in the figures produced by the BLS. They would love to do that. They want to—they want to put quality adjustment in for these kinds of products but they’re very constrained in budget and they’re very constrained in the capability in doing this in any rapid way.

So, again, things will get better as we improve our measurements in these areas. But right now, there’s some really important stuff left out of the consumption side of measurement.

ORSZAG: OK. So two quick questions on this topic. So the first one is, just to pin you down, let’s say productivity growth has fallen by 1 percentage point—a hundred basis points. How much of that is actually an illusion because mismeasurement has gotten worse?

VARIAN: Twenty-five percent.

ORSZAG: OK. Boom.

VARIAN: That's the number I use. A fourth. A fourth.

ORSZAG: There we go. Good, OK.

VARIAN: But—and so let me—let me—

ORSZAG: I didn’t expect that precise an answer. That’s great.

VARIAN: No, because I’ve been—(laughter)—I’ve been asked the question so many times—

ORSZAG: Very good. OK.

VARIAN: —I better come up with an answer. (Laughs.) But let me—let me say one other thing. What about the other 75 percent, because that’s still a mystery, I would say.

ORSZAG: Right.

VARIAN: And part of it is this issue of leaders and laggards. Firms that are adopting these technologies more quickly are becoming more productive. Firms that are sticking with traditional technology are not. And so the gap between the best performers and the worst performers in terms of productivity, I believe, has widened.

Now, there’s some evidence for that. I won’t say it’s conclusive but it looks like that may be part of the story as well.

ORSZAG: Yeah. That was going to be another topic, which is that widening dispersion at the firm level. But let me just also ask you, so one of the things you mentioned was constrained funding at the statistical agencies and so difficulty in kind of doing a quality adjustment on a phone.

It opens up the question—the obvious question of, well, we have this exploding data in the private sector. Alan Krueger and others had, you know, been interested in trying to see whether the BEA and BLS and other official agencies could incorporate more private sector data. We have the Billion Prices Project. What is your perspective on how much we’re going to see melding of the official statistics and these, you know, additional sources of information?

VARIAN: So I think that will happen. But, again, it will likely happen fairly slowly. So just as one example, let’s take something about mobile phones—if you wanted to do quality adjustment how would you do it? You’d have to start with a database of lots of mobile phones and what features they had and what their prices were and how that changed over the years, and that’s a big task, except when you stop and think about it, well, Amazon, Google—we’ve already done that. We’ve compiled those lists. When you go online and use Google shopping or you use Amazon or you use Walmart or any of those other places, the private sector has compiled the data in the way that would be useful to the BLS to be able to draw upon. So that’s kind of a natural case where I think you would see this happen.

Now, if you went to more exotic sources of information, like the Google queries we’ll hear about this afternoon from Seth, or look at tweets or any of these other things, those need further study. They may be useful in understanding what’s going on in the economy, but I’m not ready to import them into official statistics yet.

ORSZAG: All right, I’m going to ask two more quick questions and then we’re going to open it up. So please be thinking of your questions.

The first one is we just last weekend had the natural catastrophe of the Philadelphia Eagles beating the New England Patriots—(laughter)—at the Super Bowl. And you have provided, along with Seth and some other co-authors, some insight into whether those $5 million ads that run on TV are actually worth it from the perspective of the sponsor, so in particular movies.

Can you explain what you did, what you found, and more broadly the nature of kind of concluding things about causality in a big-data world?

VARIAN: Yeah. So a couple of years ago, at the National Academy of Sciences, we had a meeting on causality meets big data, because big data, like lots of other statistical analysis, often uses what we call observational data, where you’re just looking at what people did, and you can’t necessarily infer causal relationships from observational data.

But there are some tricks that you can use, some methods that you can use, to approach those questions. And I would say economists have been at the forefront in utilizing these techniques. So Seth Stephens-Davidowitz, who’s here in the front row, is going to talk about this a little more later this afternoon. But I will say a word or two about our Super Bowl study, because I think it’s kind of cute.

So there are two things, two facts about the Super Bowl that everybody knows. One is that the ads in the Super Bowl are often sold out substantially ahead of time, sometimes as early as October, November. The Super Bowl doesn’t happen till February. So that’s point one.

Point two is the home cities of the teams that are competing in the Super Bowl see elevated viewership, on the order of 10 to 15 percent more. And this obviously is true of other kinds of athletic events as well.

So if you put these two together, what you’re seeing is in October, November, advertisers are laying out the money, buying their ads and locking their ad purchases in place. And then, a few months later, two random cities—right, random from the viewpoint of the ad purchase back there in the fall—end up getting 10 to 15 percent more viewership.

So you can look at ads that play during those Super Bowls and you will see, gee, those ads are seen by 10 to 15 percent more people in these two cities than they would have been otherwise. And so that’s almost as good as a controlled experiment, like picking two treatment groups and using everybody else as a control. So you could do that kind of interpretation of this as an experiment with a control group, and you can actually infer a causal connection—how much those ads affected purchase behavior.

And what we looked at was movies, where there were, what, seven, eight movies advertised in the Super Bowl. And, sure enough, in those cities that were the home cities of the teams that were playing, there was an elevated audience for those movies. The opening weekend was larger than in any other city. So it was a kind of cute example.

And I think it extends to all sorts of other athletic events. You look at the World Series and the playoffs in basketball and hockey. In all those cases, that final game for the championship is going to be much more viewed by the home cities of the teams that are playing. They see more ads. And so we can get an idea of how well those ads are working.
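The identification strategy described here can be mocked up on synthetic data. Everything below is invented for illustration (the city counts, sales scale, and size of the ad effect are not from the actual study); the point is that because ads are bought before the competing teams are known, the two home cities’ extra exposure is as good as randomly assigned:

```python
import random

random.seed(0)  # make the synthetic draw reproducible

AD_EFFECT = 8.0  # true (synthetic) effect of the extra ad exposure

def opening_weekend_sales(extra_exposure: float) -> float:
    """Synthetic city-level movie sales: baseline noise plus the ad effect."""
    baseline = random.gauss(100.0, 5.0)
    return baseline + AD_EFFECT * extra_exposure

# 200 "control" cities with normal exposure, and the 2 home cities of the
# competing teams, which see elevated viewership of the same ads.
control = [opening_weekend_sales(0.0) for _ in range(200)]
treated = [opening_weekend_sales(1.0) for _ in range(2)]

# Comparing group means estimates the causal effect, as in a randomized trial.
estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(f"estimated ad effect: {estimate:.1f} (true effect: {AD_EFFECT})")
```

With only two treated cities any single estimate is noisy, which is presumably why the actual study pools many movies and games; averaging over many such natural experiments tightens the estimate.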

ORSZAG: By the way, just so you know, the opening weekend revenue was something like $7 million more for a $3 million average ad. So it was a pretty big, high return.

VARIAN: Although what’s happening now is the price of the ads is—(laughs)—

ORSZAG: Of course. Well, that’s what you expect.

VARIAN: They all read our paper and said, hey, whatever (you’re paying ?). (Laughter.)

ORSZAG: That’s what you’d expect.

All right, final question before we open up to members, just to deal with one of the elephants in the room. So there’s been a growing drumbeat, both from the left and in Europe, about the nature of competition in networks and platform technologies. You’ve been—your firm has been kind of in the middle of all that.

How should we think about the application of antitrust to a platform technology, A? And B, there have also been questions about whether the free-content, paid-advertising model is the right model or whether we should have some different economic model for these platform technologies.

VARIAN: OK. So the Europeans have this term they use called GAFA or GAFAM, which is Google, Apple, Facebook—what’s the A? Oh, Amazon, I think I’ve got them all, and Microsoft sometimes, OK? A term that they use commonly. And they argue—they claim that these platforms have undue monopoly power. But I think when you look at those platforms in an objective way, you will see a very remarkable fact. Namely, they are competing against each other intensely. So Apple, Microsoft, Google all have an operating system. Apple and Google make mobile phones. So does Microsoft, I guess. And in fact, Amazon has tiptoed a little into that market as well. Look at productivity software, like Google Docs or Microsoft Word, competing intensely there.

So each of those companies has a core competency or historical legacy. In Google’s case it’s search, in Apple’s case it’s devices, Microsoft operating systems, Facebook social networks, and Amazon retail. But they are constantly competing with the other firms across completely different industries. Who would have guessed a few years ago that Amazon would create cloud computing, OK, that had nothing directly to do with their core business, and that would create a hugely competitive industry with Microsoft, Google, and Amazon all struggling to improve the quality and to lower the price of their offerings? Who would have guessed that a few years ago?

So I look at this industry. I say, gosh, it’s that competition that’s, in fact, creating the innovation and the low prices and the expansion of these firms because they are one of the most competitive industries around. Far from the least competitive. So that’s my answer to the competition point of view—the competition issue.

And the other part was?

ORSZAG: Paid advertising versus other models.

VARIAN: Oh, yeah. So the interesting thing there is you look at TV. We had this evolution of ad-supported TV. And then along comes services like Netflix, and YouTube Red, and Amazon and so on that started offering subscription-based TV on a much broader scale. Of course, there was HBO, and Starz, and all these other networks as well. Well, those are now some of the most popular channels out there. The creative energy that’s going into producing this non-advertising-based, subscription-based TV is huge. I think I saw Netflix is going to have 80 new productions next year.

And so the whole industry has shifted away from the ad-supported model toward the subscription-based model. And there’s some suggestions, there’s some thoughts, that the textual world is going the same way. The newspapers, The Wall Street Journal and The New York Times, all the papers are pushing very hard on subscription-based models, probably with some advertising as well—kind of like the printed version of the newspapers. But subscriptions seem to be a very active area of interest these days.

ORSZAG: With ad-free or reduced—

VARIAN: With ad-free or reduced—

ORSZAG: So the future is that we could pay Google more and get rid of the ads? Is that what you’re saying? (Laughter.)

VARIAN: Well, if you—if you use—first of all, Google ads are really very informative. Hopefully you understand. (Laughter.)

ORSZAG: I understand that. (Laughter.)

VARIAN: If you go to YouTube, you can watch YouTube with the ads, or you can pay a subscription fee of $9.99 a month and get YouTube Red, which is ad-free. So consumers have choice. They can pick what they want. And that’s a good thing, from the viewpoint of the economy.

ORSZAG: OK. With that, we’re going to open it up to your questions. Typical schtick of please identify yourself and please ask a question.

VARIAN: And there are runners with microphones.

ORSZAG: Yeah. And don’t forget this is—(audio break)—

Q: (In progress following audio break)—graphic analysis that leads you to the conclusion that we’re going to have a labor shortage because, as you know now, the participation rate in the labor market is about 62.7 percent, down where it was in the ’70s. And the older cohort is actually doing better than the younger cohort. So I’m wondering how to bridge to your demographic analysis. Thank you.

VARIAN: Right. So I didn’t actually say shortage. I said a tight labor market, because the wage will adjust, and characteristics of the job will adjust to clear the market. I’m an economist, so I’ve got to say that. But I do think you will see a tighter labor market than we’ve been used to in the last—in the last 50 years. It’s also true that people’s retirement decisions are changing. They’re retiring later. And when they do retire, they often don’t go to zero labor hours because of part-time work. And that’s a good thing from the viewpoint of the economy. When you look across developed countries at the length of the workweek, the shortest workweek among developed economies is in the Netherlands. It’s 29 hours a week—29 hours. That’s a whole day less than we work, which is about 37.5, I think. And—

ORSZAG: What do you mean by we? (Laughter.)

VARIAN: Not you and me—

ORSZAG: OK. I’m just—

VARIAN: —nor anybody in this audience. But I mean we as a country. So what happens, there’s nothing sacred about a five-day week. And there’s a lot of demand for flexible work. And the reason the Netherlands has such a short week is, in fact, because they have much more flexible part-time work, partly because of subsidized day care, partly because of the tax treatment of those earnings by nontraditional workers, and so on.

So I think we’ll see the U.S. move in that direction. That tightness of the labor market is going to encourage people to work more, particularly after they retire and are still in good health. And, by the way, the one other message I should add about the demographics is that not only are there going to be more retirees and relatively fewer people in the labor market, but, of course, as the retirees age, they become more expensive. So it’s even more important to be able to produce goods that people can utilize in the health and medical industry, for example.

Q: But what about in the context of the displacements that are anticipated? For example, as you said, in autonomous driving there’s 2 million jobs. How do you think about the demographics and the future of employment in the context of what we know is going to be—

ORSZAG: Interaction between the tots and the bots.

VARIAN: So, of course, the demand curve is shifting and the supply curve is shifting. So which shifts the most determines the answer. We’re going to see a reduction in labor, that’s for sure, because both curves have shifted to the left. But the question is what happens to the wage. And already you’re seeing this push for higher wages because of the difficulty of hiring people at the low end of the labor market. So in Europe and in many places in the U.S., you go into McDonald’s and there’s a kiosk where you order—not through a person, but you check off what you want at the kiosk and you go over and pick it up.

So you can see those things being automated to a large degree in the future. If you want to see what the future looks like in an extreme case, go to Japan, where there are vending machines that are providing so many things because of the shortage of labor or the tightness of the labor markets there.

So as I said, I can’t predict what technology will be like in 10 years, how many autonomous vehicles will be on the road. But I can predict how many 25- to 55-year-old people will be around in that period, and it’s going to dampen and maybe overcome this reduction in the demand for labor from the automation side. So it’s not so much I’m saying absolutely one’s going to displace the other. I’m saying there are countervailing forces at work.

ORSZAG: OK, let’s go over here.

Q: Nise Agwha (ph) of Pace University.

In your futuristic predictions in terms of how the world will evolve in the face of automation, I was wondering how you would incorporate the following basic facts of economics that I was taught. Economics is the science of scarcity. So, at the end of the day, time is scarce. Energy is scarce. Attention span is scarce. So how do you weave that reality into your predictions?

VARIAN: Well, I would say what happens is what’s scarce at one period of time—let’s say getting authoritative answers to questions—(laughter)—could be widely available at a later date. So some of these things that are scarce now are going to be more bountiful in the future. But I think the things you alluded to—for example, there’s only 24 hours in a day and the scarcity of attention—those are going to continue to be scarcities even into the future.

So, again, I don’t want to come up—I don’t want to be pinned down, let us say, to a definitive, specific version of the future, whether there’ll be lots of automation or whether there’ll be a crisis from scarcity of labor. But I do want to say that these forces are important to consider together. And if you read the typical article in the news, you’ll see it’s all about the bot side of things and very little mention of the tots side of things. And, by the way, it has very important global implications as well, because look at the countries that are getting younger—which is basically India, as a prime example, and many countries in Africa, like Nigeria as a specific case. So with China getting older, is India going to be the new China? Is Africa going to be the new China? How are global resources divided among these forces that are driven—these countries and regions are driven very heavily by demographic change. Well, we’ll wait and see.

ORSZAG: Up here.

Q: John Biggs, former CEO of TIAA-CREF.

John Maynard Keynes wrote a famous letter to his grandchildren back in the ’30s that the average workweek by the time they had matured—and I think we are the age of his grandchildren. I’m a little senior for that. But the average workweek would be a day and a half, because you could produce everything you needed in a day and a half. Now I—

ORSZAG: The Netherlands has gotten there.

VARIAN: Yeah, exactly. (Laughter.) That’s what I was going to say.

Q: I have always argued that he was wrong on that because people didn’t retire back then. And the retirement need increased enormously the time of consumption. And so he’s wrong. But I don’t see how I can explain this to my grandchildren, what kind of—I think a workweek of a day and a half is absurd. But conceivably, when you look at the total economy, we’re going to have that kind of relationship. And where are the jobs going to be?

VARIAN: Well, everybody loves three-day weekends. So to me, it’s not at all implausible that our grandchildren would see four-day workweeks, for example. After all, it’s there in the Netherlands, right? And the question of the division of your life between work and leisure is interesting. Well, you take it when you’re young or you take it when you’re old, the point you made about retirement, because we have seen, of course, people retiring earlier and living longer. And so you have seen a shift towards more leisure. Back in the 1700s and 1800s, the workweek was 70 hours per week. And now, as I said earlier, it’s about half that. And if you look at chopping another day off of it, it seems to me to be perfectly feasible.

And there’s also this blurring distinction between what’s work and what’s leisure, because we know at Google that a lot of surfing the web goes on during work hours. (Laughter.)

ORSZAG: Nowhere else, just at Google.

VARIAN: No, no, no, not at—no, no, not at Google. I’m talking about Google users. We look at the—(inaudible). (Laughter.) So we know from looking at the query mix that there’s a lot of, you know, shopping, planning, purchases, all sorts of things that go on at work, and vice versa there’s a lot of work that goes on at home. In part, because of those mobile phones and computers and the internet that make everybody accessible over the weekends and other times. So these too are countervailing forces. It may be that work will be spread out in a different way than we’re seeing it with this conventional 40-hour workweek.

ORSZAG: Could I just ask—

VARIAN: I’m not confident that will happen.

ORSZAG: We’ll come over here in a second, but just—a lot of the comments have been about the average workweek and, you know, aggregate. But there’s also this important both socioeconomic and educational dimension to it. So one of the questions that always arises is to date, basically, technology, I think, has largely been complementary to high-skilled workers, and so has been one of the forces driving wage inequality upward. It’s plausible to imagine that that might reverse, that, you know, legal skills and bankers, lord forbid, and others might be displaced by automation. Just give us a little bit more on the kind of gradient, instead of just the average, by education as you see this unfolding, including, you know, the race between technology and demographics.

VARIAN: Yeah. So there’s this labor displacement versus labor augmentation. And I think that we’re already seeing it happen. So let me give you a few examples. It used to be, to be a cashier you had to know how to make change. No longer necessary. In fact, cashier is the number-two occupation in the U.S. by number of people employed. It used to be, to be a taxi driver you had to know how to drive around town. No longer necessary. (Laughter.) Of course!

It used to be that to work in a kennel, you had to recognize dog breeds. Well, now if you use Google Lens, you can photograph the dog, and it will say, oh, that’s a collie, or that’s a German Shepherd, or whatever—you can identify those dog breeds.

It used to be that to be a gardener, you had to know different plants and what kind of characteristics those plants—well, that’s gone away, too.

Now those are low-level jobs—all the examples I gave you—and they’ve been augmented dramatically by information technology, OK? So that will continue, I think, and of course our jobs have been augmented by information technology—consulting online resources, and producing documents, and all of those things—but I see it really as much more on the augmentation side than the replacement side because, when you look at the lists of tasks that a gardener does, some of them require this kind of cognitive assistance, but some of them don’t. And the nice thing is that, of course, if you work in the kennel, or in the garden, or any of these other things, you will pick up these terms anyway, so it’s not just a way of totally displacing people; it’s really educating people and training people to be able to do their job more effectively.

In fact, every day there are 500 million views on YouTube of how-to videos, OK. That’s how to solve a quadratic equation, or how to compute the area of a trapezoid, or all that cognitive stuff, but there’s all these other things: how to bake a soufflé, or how to play the piano, or how to weld, or how to fix a screen door, or how to remove a stripped bolt. I will bet almost everyone in this audience has used some instructional video on YouTube, OK? And what’s interesting about it is we never think of that as being part of the educational system, but it’s a fantastic way to deliver information on an as-needed, as-necessary basis—both the high-level cognitive stuff and the manual stuff.

And so we never had a technology before that could educate such a broad group of people any time on an as-needed basis for free. So it really is a remarkable time to have that capability, and I think that’s going to have an impact on the labor market.

ORSZAG: All right, let’s go over here, and then we’ll come up here.

Q: Stephen Blank.

I think your very sunny statement needs to be pressed a little bit harder.

VARIAN: OK.

Q: If we’re talking about aggregate jobs, what we see a lot of now is increasing jobs—an increasing number of people who work very hard, very long hours, and make a good deal of money. We see a lot of people who don’t work at all—not because they don’t—aren’t willing to get off their asses and do something, to coin a phrase, but rather because—for health reasons, for access to education, home, whatever—they cannot access these matters.

We’ve now learned how generational this becomes. Once people get caught in this, it’s very hard for their kids to get out.

It seems to me that the scenario you’ve suggested is very optimistic and very unlikely—that we will see more people, as we move forward, who work much harder, and—

VARIAN: Unlikely we’ll see more people who work harder—

Q: No, no. Well, your scenario is unlikely.

VARIAN: Oh.

Q: What we will see is people who work—a large number of people who work harder, and an increasing large number of people who don’t work at all, who don’t have access to these matters, and a widening division between the two.

Is that a reason—is that less likely in your view?

ORSZAG: And maybe I can just add on to it for a second. I mean, one of the kind of hysteresis or sort of permanent effects could come from—I mean, the classic example is going on disability benefits and never coming off, or the new phenomenon of the opioid epidemic, where you basically have a really hard time reentering the workforce. So how does that interact with the rest of your analysis?

VARIAN: Yeah, well, analysis—this may be too abstract a word—

ORSZAG: OK. Vision. Vision.

VARIAN: —my speculation—

ORSZAG: Speculation.

VARIAN: —my vision, my—(inaudible).

ORSZAG: Yes, OK.

VARIAN: Yeah, so that’s a—that’s a possibility. I’m not going to deny that. Things could get better; things could get worse.

My argument really is that a lot of these factors that you described are due to having this loose labor market, this fact that people are readily available, and you could always go out and hire the people that you need. That’s going to be less so in the future. It’s going to be a tighter labor market, to an extent. And if it’s a tighter labor market and you’ve got this way to deliver training on an as-needed basis, I think some of those effects are going to be reversed.

Now you could be right, I could be right, you know. I’m not saying I have a magic ball, but I do think that we’re going to see a different set of dynamics in the future than we’ve seen in the last 50 years. And I hope we can solve the problems that you described.

ORSZAG: Let’s go up here.

Q: Thank you. Nili Gilbert from Matarin Capital.

There are those who have argued that the impacts of technology and automation will keep prices very low for a long time—that they will continue to put downward pressure on inflation and, therefore, interest rates and other yields, maybe even for generations.

But when I listen to what you’re saying about demographic trends, particularly through the lens of wage growth, it makes me think that there could also be countervailing inflationary trends, and in that sense, automation, technology, and human labor are not perfect substitutes. So how do you see the equilibrium between those two forces evolving as they relate to inflation and other forces that may make them imperfect substitutes?

VARIAN: Yeah. Well, come back to my first statement about demand and supply. If you shift the demand curve to the left and you shift the supply curve to the left, then we can be pretty sure the amount of hours worked goes down. But we don’t know what happens to the wage. The wage could go up or it could go down. Depends on the magnitude of those shifts.
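Varian’s supply-and-demand point can be sketched with a toy linear model. All numbers here are hypothetical, chosen only to illustrate the two cases: when both curves shift left, equilibrium hours worked must fall, while the wage can go either way depending on which shift dominates.

```python
# Toy linear labor market: demand w = a_d - b_d*L, supply w = a_s + b_s*L.
# Shifting a curve "left" means less quantity at every wage
# (demand intercept a_d falls; supply intercept a_s rises).
def equilibrium(a_d, b_d, a_s, b_s):
    # Solve a_d - b_d*L = a_s + b_s*L for labor L, then recover the wage w.
    L = (a_d - a_s) / (b_d + b_s)
    w = a_d - b_d * L
    return L, w

L0, w0 = equilibrium(a_d=100, b_d=1.0, a_s=20, b_s=1.0)   # baseline

# Automation shifts demand left; aging shifts supply left.
L1, w1 = equilibrium(a_d=90, b_d=1.0, a_s=24, b_s=1.0)    # demand shift dominates
L2, w2 = equilibrium(a_d=96, b_d=1.0, a_s=32, b_s=1.0)    # supply shift dominates

assert L1 < L0 and L2 < L0   # hours worked fall in both cases
print(w0, w1, w2)            # wage falls in case 1, rises in case 2
```

Case 1 corresponds to automation dominating (the wage falls); case 2 to the demographic squeeze dominating (the wage rises), which is exactly the ambiguity Varian points to.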

Well, that’s just a metaphor. That’s something from Econ 1 and we know the real-world labor markets are much more complicated than that. But it does give us one place to start looking. And I will say that this will change over time. As you have, let’s say, fast food becomes more automated—because you can go from the kiosk to voice recognition or punching things in to controlling the appliances, et cetera—we may see that having a big impact on that sector of the—of the market and it may be a rather different impact when we look, let’s say, at the impact of autonomous vehicles or, particularly, autonomous trucking.

Right now, if you look at the near-term future, well, there’s a shortage of 50,000 truck drivers in the U.S. You’ve got—the trucking industry just put in orders for—I forget the number—a very large number of new trucks because the demand for logistics and delivery and moving things around has increased along with recovery of the—of the economy.

So my guess would be look over the next decade. I don’t think we’re going to see huge displacement in those areas but in the longer-term future, of course, it’s certainly going to be there. So, you know, the line in Silicon Valley is we always overestimate the amount of change that can occur in a year and we underestimate what can occur in a decade. So I think that’s a very good principle to keep in mind. Things aren’t going to dramatically change overnight. But a decade from now, things will look different, particularly with respect to the labor/automation mix.

ORSZAG: I think we have—let’s go way in the back there, sir. Yeah.

Q: Hi. Jeffrey Young, DeepMacro.

I just wanted to pick up on a thread that’s been going through a lot of the questioning about what we used to call forecasting. Now, I think we call it predictive analytics. This may be a little bit technical—

VARIAN: (Laughs.) Right.

Q: —but it really relates to, you know, data and which Google and others have a lot of. Do you feel that the methods, the standards, the techniques that—you know, that a lot of predictive analytics is based on are at the same level of quality—better, worse—than more traditional methods? And I ask that just because it, obviously, is a buzz word. There’s a lot of hype around it. What do you really think about does it work?

VARIAN: Yeah.

Q: And are people cutting corners when they make the claims?

VARIAN: So that’s an excellent question because, obviously, there is hype around it. But is it excessive? I mean, there’s—some hype is deserved and—

ORSZAG: It’s not bragging if it’s true. (Laughter.)

VARIAN: Exactly. So—I’ll have to remember that line.

ORSZAG: It wasn’t mine.

VARIAN: Yeah. (Laughter.) That’s OK.

ORSZAG: Yeah.

VARIAN: All right. So a really good place to look is a company called Kaggle. That’s at K-A-G-G-L-E dot com. And I was actually an angel investor in Kaggle, and it was acquired by Google last year. No connection between those two things. But what they do is they sponsor machine learning contests. So you’ll say, here’s a data set of people who were discharged from hospital and were readmitted within the next six months, and here’s a whole bunch of characteristics of those people and the procedures, and so on; build the best predictive model you can of who will be readmitted, and then we can intervene and not discharge people before they’re really ready to be discharged. And that was a million-dollar prize to whoever could come up with the best predictive model.

And now there’s another one from Zillow. Zillow, which does real estate valuation as part of what it does, is offering a million-dollar prize to anybody who can improve its prediction algorithm for housing prices as a function of characteristics of the house and the market.

And Google sponsored one. YouTube videos—we took 4.5 million YouTube videos that were labeled according to what people were doing, OK? Were these people dancing? Were they fighting? Were they exercising? Were they walking down the sidewalks of New York, you know? How could you tell? Well, they labeled all those things with what people were doing, and the group that came up with the best predictions got, I think in their case, several hundred thousand dollars.

So these are real data, problems people really care about and are willing to pay for, and they’re very carefully evaluated. There’s a holdout set, a training set, a validation set, and so on. So the contests are conducted under very careful conditions. And, yes, you can really get improved performance out of these new algorithms and new compute capabilities. But remember, it’s a contest. So—(laughs)—the individual’s training, experience, knowledge, and intuition still play a very big role in using these technologies, and I think that’s going to continue to be the case.
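The evaluation protocol Varian alludes to (fit on a training set, tune on a validation set, score exactly once on a sealed holdout set) can be sketched in a few lines of plain Python. The data below is random and purely illustrative, and the "model" is just a majority-class baseline:

```python
import random

random.seed(0)

# Hypothetical dataset of (features, label) pairs. In a real contest these
# would be, e.g., patient characteristics and a readmission flag.
data = [([random.random() for _ in range(3)], random.randint(0, 1))
        for _ in range(1000)]

# Shuffle once, then carve out three disjoint sets: models are fit on the
# training set, tuned on the validation set, and scored once on the holdout
# set, whose labels contestants never see.
random.shuffle(data)
train, validation, holdout = data[:600], data[600:800], data[800:]

def accuracy(model, split):
    # Fraction of examples the model labels correctly.
    return sum(model(x) == y for x, y in split) / len(split)

# A trivial baseline "model": always predict the training-set majority class.
majority = round(sum(y for _, y in train) / len(train))
baseline = lambda x: majority

print("holdout accuracy:", accuracy(baseline, holdout))
```

The point of the sealed holdout set is that leaderboard scores can’t be inflated by repeatedly tuning against the test data, which is why Varian calls the conditions "very careful."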

ORSZAG: All right. We will sneak in one last question. Right here.

Q: Thank you. Thanks. Juan Ocampo with Trajectory Asset Management.

Hal, thank you for joining us. Your insights are great.

Question about not the level of the wages in the future, as you look forward, but the dispersion, the fungibility of labor. And one thing that we’ve observed is that education has made a big difference in terms of kind of segmenting the labor force. That, and the degree to which the other factors of production, if you want to use that word, you know, are relatively rigid. Some of the things that you have mentioned might correct that, if I can use that word. For example, people who didn’t get the same kind of—they didn’t go to MIT—might be able to learn on YouTube a number of skills that make them more productive, so forth and so on. How do you see the balance between the different forces affecting that dispersion, if you will, and the fungibility of labor that underlies a better dispersion, a tighter one?

VARIAN: Well, those are very deep questions, I think, and I’m not sure I’m capable of coming up with an overall answer. But I’ll make a few points relevant to what you—what you said.

If you look at the Khan Academy—people are nodding their heads; they know what the Khan Academy is—it’s a fantastic resource, especially for people who don’t necessarily have mom and dad at home who can answer their Algebra questions, which could be many, many people—(laughter)—by the way.

And, secondly, if you look at mathematics, that’s one of the real stumbling blocks. Algebra is really a stumbling block for kids in high school because some get it right away and some don’t. And if you are out sick for a week, or you miss something, or you don’t understand some point, it can derail you, derail your whole education and chances of going to college. And now, again, we have great services like the Khan Academy that help people survive those problems. So that’s a really, really big plus. They may not have local expertise. They may miss something. They may need a patient tutor to go over it. Well, that’s there now. So that helps to some degree. It’s not a total answer by any means, but it is potentially helpful.

And this gentleman over here who mentioned the problem of if you don’t have educated parents at home, it may be very hard for children to acquire the educational skills that they need to succeed in school. And so you get this development that’s very unhealthy from the viewpoint of the people, obviously, but also from the viewpoint of the economy, because you want people to be able to contribute productively to economic growth.

So I don’t have a good answer to your question of whether we will be able to solve those problems, but there are some helpful technologies that may be useful in addressing those issues. That’s all I can say on that.

ORSZAG: All right. Well, with that, I hope everyone will join me in thanking Hal. Thank you very much. (Applause.)

There is a second session of this symposium that will delve into many of these topics in much more detail that will commence at 2:15, in 15 minutes. In the meanwhile, there’s coffee and refreshments. So please join us back here at 2:15. Thank you again.

(END)

Session II: Behavioral Economics in the Age of Big Data

The Robert B. Menschel Economics Symposium generates critical thinking about the consequences of herd mentality behavior in global economics. The 2018 symposium will examine the insights that big data has provided into economic—and political—behavior.

This symposium, presented by the Maurice R. Greenberg Center for Geoeconomic Studies, is made possible through the generous support of Robert B. Menschel.

GALLAGHER: Good afternoon, everyone. Welcome to the second session of the Council on Foreign Relations Robert B. Menschel Economics Symposium, Behavioral Economics in the Age of Big Data. I’m Leigh Gallagher. I also want to, by the way, thank Robert Menschel for his generous support of this program. We’re all indebted to him for this. I’m Leigh Gallagher and I’ll be presiding over today’s discussion. And in the name of bipartisanship among your moderators, I am an Eagles fan. (Laughter.)

We are—at this time—whoa, sorry. We’ve got a great conversation today. Behavioral economics seems to be the hot new thing. I guess we could say that for many years now. But big data is sort of the latest wrinkle, and it has tremendous implications. So we have three fantastic panelists with us today. We have Hersh Shefrin, who is coming to us by satellite in California. Hersh, are you there?

SHEFRIN: I am. Hello, Leigh.

GALLAGHER: Hi. Thanks for being with us. This is not even really a delight; this is very impressive. Thanks for being with us. Hersh is a pioneer in the field of behavioral economics. And our other two panelists are experts in data and what we can learn about behavior from it. Seth Stephens-Davidowitz worked at Google. Hal Varian mentioned him a few times. It should be noted that both Hal and Peter Orszag were at one point Seth’s boss. So when does that happen? (Laughter.) Anyway, Seth worked at Google as a data scientist and uses data from the internet to gain insights into the human psyche. So he can tell us a lot about ourselves and our true nature. So that might be a little scary. (Laughs.) Dan Wagner was the chief analytics officer for Obama for America—the Obama for America campaign in 2012, and is now the founder and CEO of Civis Analytics, which provides data analytics for companies. So welcome to all of you.

Hersh, let me start with you. You are our behavioral economist on the panel. Can you just tell us—I guess define behavioral economics for us, maybe. But also, it seems that we are in this sort of era of behavioral economics. It wasn’t always so popular. Now it is. So give us some context for that. And also when big data started to enter the field, from your perspective.

SHEFRIN: Thanks, Leigh. I’d say as a definition, the way to think about behavioral economics is that it’s the study of how psychology impacts economic behavior through our motivations and our judgments, the way that we interpret data and then use data as a basis of making decisions. You know, I just want to add something quickly. A lot of people think that behavioral economics is about people being irrational. I’d say it’s more that people are imperfectly rational. And if you think about the glass as half-full, it will sort of help to think about behavioral economics as a way of pointing to how the glass can be a little bit more full as we move forward.

In terms of—in terms of behavioral economics getting hot, well, you know, it actually was hot in the days of Adam Smith. And one of your—one of your questioners mentioned John Maynard Keynes. He was a behavioral economist without doubt. If you read the General Theory from 1936, the word “psychology” appears many, many times in it. It’s just that economists got into the idea of thinking, after World War II, that psychology wasn’t important. And so it sort of wound up getting forgotten. But back in the ’70s, Dick Thaler and I started to think about ways of bringing psychology back into economics.

And I’d say that the recognition that big data was going to be important probably started to happen in the late 1990s, when people began to understand that if you wanted to do experiments you didn’t just have to rely on undergraduates as subjects, but could start to use the internet itself. And I think there was an inkling about that, but I don’t think we really saw how big a role it was going to play until a little bit of time had elapsed.

So I hope that sort of answers the questions that you had in mind. I can—you know, I can go on a bit if you want and talk about the connections I see between big data and behavioral econ. But let me just sort of turn it back to you to see how you’d like to proceed.

GALLAGHER: Sure. Sure, sure. That’s a great overview and we’ll definitely have ample opportunity to get back to you. I’d like to talk about some of those connections.

Let me jump over to Dan next. Dan, can you just give us a little bit of—tell us about your background, your work on the Obama campaign. And can you tell us also how behavioral economics plays into your work today?

WAGNER: Sure. So I disagree with Hersh a little bit, in that irrationality isn’t totally part of it. I think the most recent election is an example of behavioral economics as a demonstration of irrationality. (Laughter.) But that’s my point of view.

GALLAGHER: Hersh? (Laughter.)

WAGNER: OK, good.

GALLAGHER: OK. (Laughter.) He’s on video conference.

WAGNER: So beginning in the early 2000s, there was a movement within political science to try and understand some of the psychological attributes that were driving decision making around participating in politics and how people thought about politics in terms of candidates and parties. And obviously that has evolved considerably over time. And the way that we began to think about the role of behavioral economics was kind of dividing them between these two classes of roles. Number one, behavioral economics was essentially the study of decision making, taking into account psychological attributes such as social position, normative behaviors, et cetera. And data science was more or less fancy statistics using lots of data to figure that out.

Behavioral economics became the underlying theory that we were thinking about, consuming a lot of theories from academic journals, especially research around social pressure and crowd behavior around elections. And data science became our measurement vehicle to try and understand these types of psychological attributes and actually define behavior and opinion. One of the formative experiments that we did before the election is we began to try and understand what was motivating turnout. And we tried two pieces of content. And our question was, again, what motivates people to turn out. And so we began doing these large-scale experiments, both offline and online.

And the most notable experiment that we did—again, assessing this question about what motivates people to turn out—is we had these two pieces of content. One was a persuasive message—a kind of glossy piece of mail and digital—that had a picture of the candidate, his family, and gave a set of logical reasons for why you should vote for this candidate instead of the opposing candidate, both in terms of the national context and the local context, and providing you some personalization around him. The second was a letter from the state senator—or, rather, the federal senator in their state. It was kind of a boring construction, but it referenced their previous participation in the last election, notified them that turnout in this election was going to be pretty high, and encouraged them to vote in this election along with their neighbors.

So we basically provided these two pieces of content. One was an exciting piece of persuasion and two was kind of a boring validation from their senator encouraging them to vote, and kind of referencing their previous history of voting. So I’ll do a little poll in terms of what people think, and I may have given away the result. Again, one is the persuasion piece and two is the senator’s letter. Who thinks one? Yeah, so this is a loaded question—(laughter)—so I’ll just go to the results. At the time, this was kind of a big result in political theory, because the predominant thinking in the Democratic Party, more so than in the Republican Party, was that communicating to logic and reason was the effective way to motivate people to turn out. Turned out to be a complete lie.

What motivated people to turn out was essentially the social pressure around participation, kind of driving, more or less reinforcing, this kind of herd behavior and attitude, especially toward people that were less habituated to voting. For people that were already habituated to voting, it more or less became the normative religion. But people who weren’t habituated to voting were driven by a different thing, described by a different theory.

So, in terms of what this did and the relationship between these two things: behavioral economics set the standard of theory, and data science became the vehicle through which we measured it. And we’re measuring it in two ways. One is the prediction, which is the experimental basis on which we measure these different interventions and the economics behind it. And two is the attribution, which is, ex post, how you evaluate the effects of those different experiments. The internet obviously being the biggest natural experiment in human history.

And so those two standards of data science have helped us, both in the context of elections but also in companies and governments, assess the value of the different interventions that we create proactively, or evaluate the causal effect of those things ex post.
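In its simplest form, the prediction-then-attribution workflow Wagner describes is a randomized experiment followed by a difference in turnout rates between the two arms. The counts below are hypothetical, not the campaign’s actual numbers:

```python
import math

# Hypothetical counts: voters randomly assigned to the "social pressure"
# senator letter vs. the glossy persuasion mailer, and how many turned out.
n_social, voted_social = 10000, 4100
n_persuade, voted_persuade = 10000, 3800

p1 = voted_social / n_social          # turnout rate, social-pressure arm
p2 = voted_persuade / n_persuade      # turnout rate, persuasion arm
effect = p1 - p2                      # estimated treatment effect (attribution)

# Two-proportion z-test: is the effect distinguishable from zero?
p_pool = (voted_social + voted_persuade) / (n_social + n_persuade)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_social + 1 / n_persuade))
z = effect / se

print(f"effect = {effect:.3f}, z = {z:.1f}")  # ~3 points, z well above 2
```

Randomization is what lets the difference in rates be read as a causal effect of the mailer rather than a difference in who received it.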

GALLAGHER: So it can really help you figure out what buttons to push. And in the example of the previous tactics, we were pushing the wrong buttons to get the result.

WAGNER: Yeah, yeah. The idea of buttons, what buttons work, and then what buttons you should change over time. There you go.

GALLAGHER: OK. Well-put, OK. (Laughter.) Seth, tell us a little bit about your work both at Google—and you talk a lot about experimentation and what you have discovered in that realm. So tell us a little bit about that.

STEPHENS-DAVIDOWITZ: Yeah, so I’ve been doing research over the last six or seven years on what we can learn from people’s internet behavior, particularly their Google searches but also other sources. And I think there are really two advantages to Google searches relative to other traditional data sources. One is people tend to be really honest on Google. So they tell things to Google that they might not tell anybody else. (Laughter.) And this is different from surveys, the traditional look into people’s psyche, where people sometimes lie and say what they think the surveyor wants to hear, not what is the truth.

Actually, the thing about turning out to vote, it’s really hard to predict before an election who’s going to actually turn out to vote in an election because more than 50 percent of people who don’t vote tell a survey they’re going to vote. Sure, sure, sure. I’m going to exercise my civic duty. Either they are lying to themselves or deliberately deceiving the pollster. We can actually see in Google searches weeks before the election—even by mid-October—there’s high predictive power in searches for vote and voting. So people who are searching how to vote or where to vote weeks before an election are much more likely to actually turn out to vote.

One of the things you saw in the previous election was that African-American turnout was going to be much lower than in previous elections—it’s anonymous and aggregated data, but you see in cities that are 90, 95 percent black there was just a huge drop in searches for vote and how to vote and where to vote. And this is one of the reasons that Hillary Clinton did worse than the polls suggested: because you can’t trust what people say, a lot of the models just assumed the electorate would look the same as it did four years earlier. And the Google searches, where people are more honest, where they’re actually looking for information they need, were saying that the electorate was going to look a lot different—that the black share of the electorate was going to be much smaller.

So there are many examples. I’ve studied racism and sexual preferences, where the data from surveys are very different from the data you can get online. And I think more and more economists these days—I think in some sense we’re all behavioral economists. You know, 30 years ago, 40 years ago, when you went to get a Ph.D. in economics, so much of it was these theoretical models of human behavior, assuming that people are going to act in their rational self-interest. And I think more and more, economics and all the social sciences are about empirics and data.

So—and I think that’s because of behavioral economics, because we realized you can’t just put a model that assumes people are going to be totally rational. You have to actually see how they behave. And with all the data, we can do this. So I think—I don’t even think to myself: Is it behavioral, is it not behavioral? I just think, what are people doing.

GALLAGHER: Mmm hmm. And what are they not saying?

STEPHENS-DAVIDOWITZ: Yeah, what are they not saying, yeah.

GALLAGHER: So just—so when we all here type in something in Google and the predictive—it fills in for you. You know, if I type in, I don’t know—well, here’s an example. I wrote a book called The End of the Suburbs a few years ago. And I just—during my research I typed in the “the suburbs are.” And what came up was: horrible, boring—you know, it totally supported my theory, which is—(laughter)—no, I’m not an anti-suburb person. But, anyway, can we trust that? Is that really what the world is thinking?

STEPHENS-DAVIDOWITZ: I think we have to be a little cautious. I was probably a little under-cautious. I was so excited when I first saw this data—you do see all these windows into things people don’t normally say—that I was like, oh my God, this is accurate and everything else is garbage. And I think that’s not totally true. There is a pressure on Google too. Every data set is biased towards what’s socially acceptable. So surveys are biased towards what’s socially acceptable. Facebook and social media are really biased towards what’s socially acceptable—people, you know, exaggerate how good their lives are there. (Laughter.)

And I think Google, because people are alone and by themselves and looking for information, they tend to tell it the things they don’t tell anyone else. So if all day you’re saying how much you like your suburban life, then you kind of go to Google and just say the suburbs are horrible. (Laughter.) So I don’t think it’s that there aren’t people who like the suburbs. It definitely shows a different view of people—which is kind of psychological, that people have multiple sides. My favorite example is comparing how people describe their husbands on social media and on Google. (Laughter.)

So on social media, if you look at the top ways people complete the phrase “my husband is,” it’s: my husband is the best, my best friend, so amazing, a total joy, awesome. And on Google, it’s: my husband is a jerk, annoying, obnoxious. So it’s a very different view of people. And I don’t think one of them is right. I don’t think marriage is all “the husband is a jerk, the husband’s annoying.” I think the truth is somewhere in between, and it’s interesting that the two sources show such a different picture of marriage.

GALLAGHER: So, Hersh, we can rely on data to, you know, to do all this—all of this work for us, but as long as there are humans involved, I mean, there is going to be some susceptibility to groupthink in economics. Is that right? I mean, talk to us a little bit about that, the overreliance on data.

SHEFRIN: I think that—well, big data is going to give us big advances. And I think that’s really going to grab our attention and make us very excited about the whole concept.

When Dan talked about what motivates people to vote and there being collective pressure, I think that collective pressure is at the heart of what constitutes groupthink and the way that groupthink and big data will come together to make decisions.

So I have this concern that big data, while it will bring big advances, will also bring big dangers and big mistakes because of the tendency to either exploit big data or to ignore big data. So the kinds of issues that concern me as I think back—and in fact, it doesn’t have to be that long ago—have to do with big disasters like the nuclear meltdown at Fukushima Daiichi or the Deepwater Horizon explosion in the Gulf of Mexico. Or, thinking about New York, how it is that the SEC missed Bernie Madoff after five investigations—not just once, twice, or three times, but five. In all of those instances, groupthink played a really important role, because collective pressure led people to make judgments that were different from what they might have made had they been, I’ll say, perfectly rational.

So I think the interaction is going to be that, because big data is so powerful, when groupthink is a driver it can really magnify our vulnerability. Because in groups we want to please strong leaders—and I’ll say that I don’t totally agree with Dan’s initial comment about the nature of the most recent election, because I think we are now seeing groupthink fostered and encouraged as an active policy. But I think we just have to be very vigilant and aware as to exactly what can happen.

If you are overconfident about the strength of your organization to deal with problems—for example, data breaches—then we’re going to see more problems like Equifax, where we had not just IT vulnerability but also organizational vulnerability, and groupthink operated by making that whole organization feel that the risks were much lower than they actually were. It only took one person making a mistake—not passing along a critical IT update—for 143 million accounts to be breached. One person. Where else did one person make a big problem? Ask anybody in Hawaii what it felt like—(laughter)—to think that you might have just 30 minutes left to live before a nuclear bomb was dropped on you.

So those kinds of issues are significant and important, and they’re part of the human psyche. So it shouldn’t surprise a behavioral economist who’s familiar with just how strong groupthink is to think about what would be more effective in terms of encouraging people to vote.

And, you know, I’ll just say I wrote a Huff Post piece a week before the last federal election, because one of the things that worried me was that for psychological reasons the turnout for Hillary would be lower than people thought because they just weren’t excited by her. And what excites you, what grabs your attention, that really is critical. It’s what—it’s what speaks to your nucleus accumbens, what lights up in your brain, what generates dopamine. I mean, all of those things are absolutely critical. And it’s not what you think about rationally and coolly, but what mechanisms get activated neurologically that wind up being so important.

GALLAGHER: Dan, let me go back to you on that election point. Do you think that with the proper use of data we should have been able to properly predict where the results were headed?

WAGNER: Yeah. I mean, the question of behavioral economics is not so much what is the behavior, it’s to what do you attribute the behavior. And the big failure in the 2016 election was a measurement failure where turnout was a little piece of it, but when you actually look at the kind of distribution of votes that were lost between 2012 and 2016, the majority of that was not because people didn’t turn out, it was because many people changed their votes from candidate A, Barack Obama, to candidate B, Donald Trump. So that was a kind of, like, causal reason within these states that you saw such a drop in vote share in Michigan, Pennsylvania, and Wisconsin.

So when you’re trying to understand that, the primary piece that’s interesting from a measurement point of view is: why was the measurement so wrong in those states? And there are a few root causes, but the primary one is that a lot of the people who supported Trump decided not to participate in the surveys because of a loss of trust in institutions broadly, feelings of deep resentment, feelings of mistrust. And so a lot of these folks, we saw, were still participating in behavioral and social environments, but reduced their participation in a lot of surveys. And as a result, that was the primary miss.

And if you look at where those measurements were most off—you look at Virginia, it’s fine. You look at Colorado, it’s fine. You look at some other states, it’s fine. So why was it so off in Pennsylvania, Wisconsin, and Michigan? The reason is that those states have the highest density of marginalized populations—not economically marginalized, but culturally marginalized populations with the highest levels of racial resentment, feelings of isolation, deterioration of communities, et cetera.

And as a result of their institutional feelings of isolation, the result of that is they didn’t participate in the primary measurement vehicle that we use to understand these folks. And that was why we missed. And that should have been known much earlier.

I think there’s been a catch-up in terms of how to evaluate that measurement bias over time. But we—we would have known had the measurement been correct, but the measurement vehicle that we used to understand the underlying psychology of why people were voting a certain way or why they weren’t was wrong.

Now, if you’re in a company, you should be freaking out, because the same kind of measurement vehicle that’s traditionally used to understand behaviors in elections is the same type of measurement that’s used to understand brands, policy, et cetera. And if that measurement vehicle is broken for a huge percentage of your population, then you’re going to suffer the same kind of measurement issue with your products, your policies, et cetera, when you have this kind of heterogeneous difference in how people trust institutions and the measurement vehicles behind those institutions.

GALLAGHER: Can you give us an example of a product that sort of misfired in the commercial marketplace because of that?

WAGNER: Well, I can’t—I probably can’t give a specific example, but here’s a—here’s a recent one that we kind of looked at a while ago. So you had that Audi commercial that came out in the Super Bowl a couple of years ago. Now, if you’re somebody in this room, which is kind of, like, you live in a city, you’re broadly educated, you have kind of a shared set of normative values around the participation of women in the economy—

GALLAGHER: What was the commercial again?

WAGNER: It was a little girl driving a soapbox car down a hill, and her father was kind of congratulating her. She won and she beat a bunch of boys, right? The message of that was female empowerment in an area that had been broadly owned by men, which is racing cars—of any sport, the one with the highest disproportion between male and female participation. It’s just, like, driving yourself and nearly killing yourself in a gas-laden metal vehicle, right? (Laughter.) Why people do it I have no idea. (Laughter.)

And so, if you ask somebody in this room did you like that commercial, many of you maybe had been in tears, right? It was so wonderful, blah, blah, blah, blah, blah, right? (Laughter.) But if you do that experiment to a lot of people who exist in these marginalized populations and you conduct that experiment, afterwards they say, hell, no, I’m not buying an Audi.

Now, why is that? Why do they react to this message of empowerment so much differently than somebody else, right? Because a lot of it is, I think, they’re self-rationalizing their own tribal existence, right? Imagine a woman who’s in an abusive relationship with her husband, or imagine somebody—even the more common case—who has been left by somebody and is existing on her own resources, in her own right. What psychological context does she have when she looks at that commercial? She probably looks at it as, like, this is bullshit, this is complete and utter bullshit. This is a bunch of you urban fancy pants getting down and kind of placing your values on top of me. This is not what life is like. She’s not saying that logically, but this kind of secondary component of her brain is internalizing that, and that’s reflected in the measurement in turn—

GALLAGHER: Right. But Audi is probably trying to target the urban, upwardly mobile—

WAGNER: Yeah, for Audi—from a consumer, like, from a consumer point of view—

GALLAGHER: —Super Bowl—

WAGNER: —it’s probably—

GALLAGHER: —is broadcasting to everybody.

WAGNER: Yeah, that was just a case of what we were looking at, right?

GALLAGHER: Yeah, right.

WAGNER: A lot of it is, like, the more common cases—the Budweiser commercial last year, the immigration example. They made it before the immigration fight was all-consuming, but it was about this German immigrant coming to St. Louis. And a lot of people like us, again, said, oh, my God, this is beautiful. But if you looked at the primary consumers of that, a lot of them did react negatively to this commercial. It may have been effective in terms of increasing the brand among the people they were trying to reach—totally—but some people probably did react to it more negatively because of how they essentially internalized their feelings about immigration.

GALLAGHER: Seth, any comments about that? I also want to ask you about something else.

STEPHENS-DAVIDOWITZ: Yeah. So I did a study that reminds me so much of what Dan’s talking about, how things can backfire, where I was studying Islamophobia, also using Google searches. And this isn’t even Islamophobia—it’s not a fear of Muslims, it’s more rage towards Muslims. People make some really, really nasty searches about Muslims on Google. This is disturbing—a small number of people search things like “I hate Muslims” or “kill Muslims” or “no Syrian refugees,” really, really nasty stuff.

And even though these searches are a little strange, they actually do predict week by week how many hate crimes there are going to be. So when more people are making these searches, there are more hate crimes against Muslim Americans.

So in December 2015, after the San Bernardino attack, where two Muslim Americans had shot and killed 14 people—coworkers of one of the shooters—Islamophobia, anti-Muslim sentiment, was at an all-time high. It was out of control. You saw this in Google searches, you saw it on Twitter, you saw it everywhere. And a few days later, Barack Obama decided he was going to give a speech to calm people down and try to calm this almost mob mentality that had overtaken the United States. And he gave what I thought was a beautiful speech, kind of classic Obama. He talked about how it’s the responsibility of Americans not to give in to fear, appealed to freedom, how it’s our responsibility not to judge people of a different religion, how it’s our responsibility to let other people into our country no matter their religion. And it got rave reviews: The New York Times said it was amazing, Newsweek said it was amazing, the Boston Globe said it was an amazing speech.

So I was studying this Google search data—anonymous and aggregated, and they have minute-by-minute data—so you can look at what happened during and after Obama’s speech to all these horrible, nasty searches by angry people about Muslims. And you saw not only did they not drop, they didn’t stay the same—they went way up and stayed up afterwards. So it seemed like everything Obama did, even though all the traditional sources were patting him on the back and saying great job and thought it was really moving, actually backfired.
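The minute-by-minute comparison described here amounts to a simple before/after level-shift estimate around an event. The series below is fabricated; only the direction of the shift (searches rising after the speech rather than falling) reflects what the speaker reports.

```python
# Toy before/after comparison on a minute-level series around an event.
# The counts are invented for illustration.

def level_shift(series, event_minute):
    """Difference between the mean rate after an event and the mean before it."""
    before = series[:event_minute]
    after = series[event_minute:]
    return sum(after) / len(after) - sum(before) / len(before)

# Fabricated searches-per-minute; the speech starts at minute 5.
searches = [12, 11, 13, 12, 12, 18, 21, 19, 22, 20]
print(level_shift(searches, event_minute=5))  # prints 8.0: the rate went up
```

A real analysis would also check for pre-existing trends and day-of-week effects, but the core comparison is this simple.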

But that’s a little pessimistic, and I want to leave an optimistic note. At the end of the speech there was something he said which may have been more effective, where he said that Muslim Americans are our friends and our neighbors, our sports heroes, and the men and women who will die for our country. And then you see, literally seconds later, a huge explosion of searches for Muslim athletes, followed by Muslim soldiers. And in the days that followed, for the first time in many years, the top descriptor of Muslims in Google searches was not Muslim extremists or Muslim terrorists—it was Muslim athletes, followed by Muslim soldiers, and they kept the top two spots for many weeks afterward.

And you see all around the internet these young men saying, Muhammad Ali was a Muslim? Shaquille O’Neal is a Muslim? Like, I didn’t know this. You know, their image of a Muslim American up until that point had been Osama bin Laden and these two people who had shot up San Bernardino. So I think you kind of saw in that study—and I’m not going to say from one little study that I’ve solved hatred in the world—(laughter)—but I do think the power of some of these data sources, if you kind of think what doesn’t work, what seemed to not work in this particular study, it was lecturing people, right? What you should do, here’s your responsibility, things they’ve been told a thousand times. What is more effective is maybe subtly provoking their curiosity, giving them new information.

And I think what this kind of says—and this is classic behavioral economics, in my opinion—is that people are not necessarily rational, and telling them arguments for why they should change their behavior is not necessarily the most effective strategy. I think it does show the power of some of these new internet data sources to turn a lot of questions that have been very, very difficult into real sciences.

GALLAGHER: That’s a nice note to throw it out to the members now. I’d love to welcome anyone who would like to ask a question of our panelists.

I just want to remind everybody that this meeting is on the record.

So just raise your hand. Please wait for the mic and speak directly into it and state your name and affiliation.

I think we have a question over there.

Q: I’m Ron Tiersky from Amherst College.

My question has to do with the fact that I think you are a little too critical of yourself and it doesn’t—it doesn’t quite work. You seem to be saying that you should have predicted, you should have been able to predict the outcome in 2016 election. I don’t know why people don’t say more about the fact that Hillary won 3 million more votes than Donald Trump. And someone will say, oh, our big-data predictions and all this. Why were we so wrong? As I understand it, it was a matter of a total of 70,000 or 80,000 votes in three states, which could be seen as a rounding error. So I—I’m wondering if the attempt at prediction is not—is not overdone. It could be that 70(,000) or 80,000 voters, I don’t know, argued with their spouse or something, something that was very short term rather than some long-term or medium-term factor that you could have predicted.

GALLAGHER: I guess, Dan, you may want to take that.

Without getting into a too-wonky discussion of the Electoral College, which is really also—

WAGNER: Yeah, well, I think the important concept here is you have the difference in the mean and the difference in the distribution. And the difference in the distribution is often or usually way more important. And so you have a—you have a heavy distribution in terms of the population of educated voters across the country.

You look at a state like Virginia or Maryland or Connecticut: a disproportionate number of those people went to college, they’re very high-income states, immensely educated, and as a result they have a different set of normative values in their communities. In almost every one of those states, the prediction was below what actually happened, right? Almost all of them, California especially. But if you look at Michigan, Wisconsin, Pennsylvania, the prediction was way above what happened, right? So you have this weird case where the distribution of bias varied depending on the concentration of educated or uneducated voters. The mean was the same, right, which makes sense, but the distribution of bias was very different.

And because of some historical accident and a conversation that happened between Alexander Hamilton and George Washington and all these other people—like, probably drunk—about, among other things, where to put Washington, D.C., we now have this Electoral College. And just by some accident of history with, like, autoworkers and whatever, you have a higher concentration of those voters in these states. And because the measurement was off in those states, the resource allocation was off; thus, in the Electoral College game, which is the only game that matters in an election, the placement was wrong. And as a result, we lost it.

But the important conclusion for measurement is that it’s not the difference in the mean, it’s the difference in the mean by state—the distribution of bias. Again, that’s important—and Hal Varian talked a lot about this—in terms of: what is the distribution of this stuff over time, and how heterogeneous are these effects by population? Looking at that is more important than looking at just the means.
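The distinction drawn here between the mean of the polling error and its distribution across states can be illustrated with a toy calculation. The margins below are loosely patterned on 2016 but are invented, not actual polling data.

```python
# Toy illustration: polling error can average out nationally while being
# badly skewed state by state. All margins are illustrative.
from statistics import mean, pstdev

# state: (poll margin, actual margin), positive = Democratic lead, in points
polls = {
    "VA": (5, 5), "MD": (25, 26), "CT": (13, 14), "CA": (22, 30),
    "MI": (4, -0.3), "WI": (6, -0.8), "PA": (2, -0.7),
}

# per-state bias: how much the poll overstated the Democratic margin
bias = {s: poll - actual for s, (poll, actual) in polls.items()}

print("mean bias:", round(mean(bias.values()), 2))            # near zero
print("spread of bias:", round(pstdev(list(bias.values())), 2))  # large
for s, b in sorted(bias.items(), key=lambda kv: kv[1]):
    print(f"  {s}: {b:+.1f}")
```

The average error is small, but the polls undershoot in safe blue states and overshoot in the three decisive ones—exactly the pattern that matters in the Electoral College.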

GALLAGHER: Right here in the front row.

WAGNER: Maybe that was too wonky, I don’t know.

GALLAGHER: No, that was good, that was good, very good.

Q: Hi, Susan McCaw, COM Investments.

This question is for Seth. I loved your book, and I loved the way you used data to analyze what people are really thinking. But my question is, with the increasing use of artificial intelligence, what is the likelihood that some of this data gets polluted—meaning algorithms that are set up that might not allow you to analyze the data the way you have? For example, in elections, let’s say there’s an algorithm written to generate searches for, you know, how to vote in urban areas, when in actuality that isn’t what people are searching.

STEPHENS-DAVIDOWITZ: Yeah, I think it’s definitely a difficult situation. I kind of agree with one of Hal’s points. You know, initially, when I first saw this data, I was like, oh, my God, all the old economic measurements are out the window, let’s just use Google searches and tweets and we’ll measure the whole economy. And I kind of agree with Hal’s point that it’s going to take many years of learning how to make sense of this data. And you do see changes in behavior over time that aren’t always clearly driven by a change of behavior or a change of attitude that is easily explained.

I don’t know. I think bots in general aren’t really included in the Google search data, because Google has, from what I understand, a many-billion-dollar incentive to figure out which search queries are bots and not include them in the dataset—though Hal could correct me if I’m wrong. So I think that particular element of it isn’t necessarily an issue.

I think there is an issue with autocomplete, in that it kind of leads people in a certain direction: something that has a small lead as a search may get a bigger lead as everyone sees that autocomplete and gets ideas they didn’t realize they had previously.
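The feedback loop described here—small leads compounding because more people see and click the top suggestion—can be sketched as a simple reinforcement map. The exponent and the starting split are assumptions for illustration, not anything Google actually does.

```python
# Deterministic toy model of an autocomplete feedback loop: if users click
# the leading suggestion disproportionately often (exponent a > 1), a small
# initial lead compounds over repeated rounds. Parameters are invented.

def amplify(share, a=1.5, steps=10):
    """Iterate the share of clicks going to the leading completion."""
    history = [share]
    for _ in range(steps):
        top, rest = share ** a, (1 - share) ** a
        share = top / (top + rest)  # next round's visibility follows clicks
        history.append(share)
    return history

path = amplify(0.55)  # one completion starts with a modest 55/45 edge
print([round(s, 2) for s in path])  # the edge snowballs toward dominance
```

With a = 1 (clicks exactly proportional to visibility) the share stays put; any super-linear preference for the top slot produces the lock-in effect described.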

GALLAGHER: Hersh, can you weigh in on that also? Can artificial intelligence—can big data in the field of behavioral economics be hacked or be erroneously skewed?

SHEFRIN: That’s one of my biggest fears—that we are vulnerable. And I think that, you know, these things come together. So the issues that Dan mentioned about not having the right metrics are one issue. The potential for not just the data itself but the underlying algorithms to be hacked is a second. And our tendency to trust technology until something happens to convince us otherwise is a third. All of those things come together in a natural way.

So I think it’s worth remembering what Andy Grove told us and the way he ran Intel: Only the paranoid survive. (Laughter.) And I think that is great advice.

GALLAGHER: Who has a question?

All the way in the back there.

Q: Andres Small from Partners Group.

I guess this is a question for Dan, to dig a little bit further on your point, your examples, about social pressure being more effective than perhaps logic. When I look at voter participation in the U.S., it hovers around 50 percent. You could easily flip a coin and say just as many people are not voting as voting, right? So how do you isolate that? And what gave you the confidence that it’s a statistically significant factor—that the R-squared is high, all that?

WAGNER: So it’s 50 percent in midterm elections and 65 percent in presidential elections, just to—whatever. So how does that happen? Voting, like many civic duties, is more of a habitual thing and a religious thing than it is a logical thing. If you look at just the marginal impact of your vote, you know, it’s approaching epsilon—especially when there are about 185 million registered voters and between 130 (million) and 140 million people will participate in a presidential election. Let’s say your state is off by a million votes. The marginal consequence of your vote makes no logical sense in terms of, like, fundamental mathematics.
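The "approaching epsilon" point can be made concrete with a standard back-of-envelope calculation: even under the most favorable textbook assumption—every other voter flips a fair coin—the chance of the exact tie that would make your vote decisive among n voters is roughly sqrt(2/(pi·n)), and any realistic expected margin makes it far smaller still.

```python
# Back-of-envelope pivotal-vote probability under the fair-coin assumption.
# P(exact 50/50 tie among n voters) ~ sqrt(2 / (pi * n)) by the normal
# approximation to the binomial; this is an upper-end textbook estimate.
import math

def tie_probability(n_voters):
    """Normal approximation to the chance of an exact tie among n fair-coin voters."""
    return math.sqrt(2 / (math.pi * n_voters))

for n in (10_000, 1_000_000, 130_000_000):
    print(f"{n:>11,} voters: decisive about 1 in {1 / tie_probability(n):,.0f}")
```

Even this generous model puts the chance in a 130-million-vote electorate at well under one in ten thousand; with a predictable margin of even a few points, it collapses toward zero, which is the mathematical core of the point about habit and civic religion rather than calculation.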

So why do you participate? In terms of your civic duty, you do it as a religion and that religion becomes embedded in your habit over time. That religion is reinforced by the normative values in your community, right?

The participation rates, for example, in the 1950s were very high after the Second World War when civic participation in terms of a normative value was extremely high. That diminished in the 1970s and the 1980s and then it kind of skyrocketed in the 2008 election of Barack Obama when that kind of religion was accelerated. And so you have that habit in terms of, you know, how that really works.

One interesting piece is that, if you look over time at which groups of people participate at the highest rates, number one is the Roosevelt generation—people who came of age around the New Deal and remained very loyal to those principles. So there’s kind of an age set there.

Another age set within—if you look at millennials, you’re, like, oh, these people are kind of wankers, they don’t participate. (Laughter.) But there’s one group of people that does participate and that is the group of men and women who are about our age, who basically turned 18—

GALLAGHER: About our age? (Laughter.) I don’t know if you want to say me.

WAGNER: Thirty-four to 36.

GALLAGHER: OK. (Laughter.) Thank you.

WAGNER: I may look like I’m 12, but I am old.

So if you look at that group, they came of voting age as the Iraq War was at its height, right? And that habit has essentially extended up to our age now. And if you look at this kind of discontinuity, the people before and after actually vote at much lower rates than that specific group in that age range, just because those people internalized those values into what they do.

And so while, mathematically, voting is kind of absurd, the reason there is variability in this is the point at which you start voting and how that gets ingrained in habit. And that varies a lot by when you start, by the context of the election, and by the community in which you participate.

A lot of these things are generational. I think a person in the previous session said a lot of these habits become generational, and that’s certainly the case with civic participation. And, quite frankly, there’s a lot of fear that a lot of these communities are lowering their participation in elections generationally as opposed to raising it. So you have this increasing proportion of people who are participating as a result of generational habits and a lot of people who aren’t, and that’s very disturbing.

GALLAGHER: I’d love to get to one more question, but I also want to see if Hersh or Seth wants to chime in about voting patterns, voting behavior.

No? OK.

SHEFRIN: Well, I’ll just make a quick comment. Value expressiveness really is a critical issue, along with social preferences. From a pure calculus perspective, it’s not in your self-interest to vote. But people do vote.

I think this point that Dan made about habits—that’s part of what brought me into behavioral economics, to understand the role that habits play and what it is that leads people to begin particular habits, because they need to have some reward. But the fact is that once you get going on a habit, it’s like you’re in a rut: you’ll tend to do it automatically, and it isn’t difficult. So this is, I think, a really fundamental insight, and I just want to reinforce it.

GALLAGHER: OK, great.

Who else has a question?

Right here.

Q: I’m Gerald Pollack.

Some countries, for example Argentina, have requirements that every citizen vote or there’s penalty for not voting. How would you judge the outcomes of such systems compared with our own?

WAGNER: I guess I’m up again. Sorry. So you have unintended consequences in some cases of that system. In some elections where you have compulsory voting—Peru, et cetera—you have a lot of people who vote for third-party candidates, they write in Mickey Mouse, et cetera, because you don’t have an even distribution of knowledge. And frankly, it kind of disengages a lot of people as opposed to engaging them.

The fair compromise that I think would be best—and Oregon in the United States has really kind of set the terms of this—is automatic voter registration, which says that voting isn’t compulsory, but registration is. So there’s no intermediate process where you have to validate your permission as a citizen to vote; rather, everybody is registered and everybody gets a ballot. And as a result, you’ve seen skyrocketing voter participation relative to all other states, and you’ve seen more interest in voting by the people in those states.

And so I’d say, in terms of a policy, you know, obviously, there’s kind of country biases and all that stuff. But if you look at just kind of, like, the experiments that people have done or matched cases, it seems like the case of Oregon is a better example of what policymakers should be thinking about, which you’re all registered, you all get a ballot, you have a choice in whether you want to participate, versus compulsory voting which traditionally, especially in Latin America, has had some weird unintended consequences, especially in elections where the participation is lower.

GALLAGHER: Anyone, another question?

Yes, in the back.

Q: Hi, Lauren Tannenbaum, Joe Smith Brand Strategy.

Dan, I think your point about if this—it’s actually not about politics.

WAGNER: OK. (Laughter.) OK.

Q: It comes off of your point of, if measurement failed so badly for us in politics, what’s it doing to corporations and brands? I work in brand strategy.

WAGNER: Yeah.

Q: I’m curious, from any of you, kind of, what do you see as a solution to that? If survey mechanisms are inherently biased and challenging and measurement is a problem, where do we go?

WAGNER: I’m going to make one small comment and I’ll turn it over to Seth because I think he’s smarter than me.

So the role of data science in this: you have two sides, one is prediction, two is attribution, right? Now, ideally, those two things will converge upon the outcome. Prediction was really wrong in this election. But ideally, you can make statistical adjustments based on validation such that you can improve the correlation between outcome and prediction. And attribution is really the exciting thing that’s happening today: you can build better attribution mechanisms between the outcome and some of the incredible bodies of information that are being generated.

Unfortunately, with prediction, you only do the ex-post validation. With behavioral kind of data sets, you have incredible opportunities. So I would say, if you’re running a brand, those are the two things that you want to consider: How do I get my prediction as close as possible to the outcome, validating that over time in the same way to politics? And how do I build, especially in brands that are intermediated by retailers and stuff like that, how do I build a database of proper attribution that can assess my interventions upon that with some type of causal interpretation?
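Wagner’s two sides can be sketched in a few lines of Python: ex-post validation measures the gap between prediction and outcome, and a randomized comparison gives attribution a causal reading. A minimal sketch; all the numbers below are hypothetical placeholders, not data from the discussion:

```python
# Ex-post validation of predictions: measure the systematic error against
# observed outcomes, then correct future predictions by that bias.
predicted = [0.52, 0.48, 0.55, 0.47]   # hypothetical predicted vote shares
actual = [0.49, 0.45, 0.53, 0.44]      # hypothetical observed outcomes

bias = sum(p - a for p, a in zip(predicted, actual)) / len(predicted)
adjusted = [p - bias for p in predicted]
print(f"systematic bias: {bias:+.4f}")

# Attribution with a causal interpretation: compare mean outcomes between
# randomly treated and untreated groups (e.g. shown a campaign or not).
treated = [1, 0, 1, 1, 0, 1]   # 1 = purchased after seeing the intervention
control = [0, 0, 1, 0, 0, 1]
lift = sum(treated) / len(treated) - sum(control) / len(control)
print(f"estimated lift: {lift:.3f}")
```

Subtracting the measured bias is the crudest possible statistical adjustment; real polling corrections weight by demographics and turnout models, but the validate-then-adjust loop is the same.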

GALLAGHER: OK, we’ll have Seth.

And then, Hersh, we’ll go to you on this, too.

STEPHENS-DAVIDOWITZ: Yes, I think the big future of brands, of business and big data is rapid experimentation, following what people do, not what they say they’re going to do. So that was kind of the big shock when I first worked under Hal at Google, when I went from an academic background where just about all the research that my classmates and the professors were doing was small experiments asking people what they would do under certain situations. And when you go to Google, they just do rapid experiments all the time, following what people actually do, what they actually click on. They’re called A/B tests. And it’s really, really cheap and easy to do.

GALLAGHER: Are people familiar with the concept of A/B testing?

STEPHENS-DAVIDOWITZ: So basically, like, Facebook can do—Facebook can show two versions of the website. So to 500 users, a control group, they’ll show the same website and to 500 other users, the treatment group, they’ll maybe make one of the buttons red or they’ll change the font. And then they can measure how these two groups behave. Namely, how often do they use Facebook? What do they click on? Are they doing more of the things that we want as a company?
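The control/treatment setup described here reduces to a standard two-proportion comparison; a minimal sketch, with the group sizes and click rates as hypothetical placeholders:

```python
import math
import random

random.seed(0)

def simulate_group(n, click_rate):
    """Simulate n users, each clicking with the given probability."""
    return sum(random.random() < click_rate for _ in range(n))

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """z-statistic for the difference between two click-through rates."""
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (clicks_b / n_b - clicks_a / n_a) / se

# Control group sees the current page; treatment group sees the red button.
clicks_control = simulate_group(500, 0.10)
clicks_treatment = simulate_group(500, 0.14)
z = two_proportion_z(clicks_control, 500, clicks_treatment, 500)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at the 5% level
```

In practice a platform runs thousands of these comparisons concurrently, which is what makes the "one line of code" framing below plausible.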

GALLAGHER: In journalism, I’ll just say, it’s done a lot now in the era of internet journalism. And it’s why we end up with some headlines like “10 things that go better with ketchup.” It’s, like, what’s clickable, you know, where are people clicking?

STEPHENS-DAVIDOWITZ: Yes.

GALLAGHER: So it’s really revolutionized, also in more substantial ways, but go ahead.

STEPHENS-DAVIDOWITZ: And the key is just how cheap and easy it is. It’s one line of code. And you can even just set it up so you can measure, you know, 10 different versions of a headline on a journalism article. You can run 10 slightly different headlines and see which ones get more people to click on the article.

So I think that combined with following what people actually do, what they click on, what they buy, is really what’s going to be kind of the big use of big data in the corporate world. And it already is for the biggest corporations, but it hasn’t necessarily filtered to every other corporation yet.

GALLAGHER: Hersh, any thoughts on that, surveying?

SHEFRIN: I think there are always going to be type-one and type-two errors, so we just kind of have to get used to it. There may be opportunities for improvement.

One of the things I just wanted to do is link back, you know, to Danny Kahneman, who, as a psychologist, not even an economist, won the first economics Nobel for behavioral economics. But, you know, in his book Thinking, Fast and Slow—I’m not there, but if I was there I’d ask people to raise their hands if they know this book.

GALLAGHER: Let’s do it.

SHEFRIN: Oh, do it.

GALLAGHER: Who knows the book or read the book? Yeah, it’s a huge book. Yeah.

SHEFRIN: OK. So he tells us that inside our heads are two systems: It’s the fast-thinking system that is our intuition and the slow-thinking system that’s deliberate and conscious. And the thing is that our slow-thinking system, it really doesn’t have a whole lot in terms of resources relative to our fast-thinking system, which means that most of the time we really don’t understand what it is that we’re doing. (Laughter.)

So I think that what the internet is going to do for us and these new big-data, artificial intelligence routines will do for us is they’re going to help our slow-thinking, deliberate system understand that other part of us a whole lot better. You know, in effect, the internet is going to become our new shrink.

GALLAGHER: That’s a good way of putting it. (Laughter.)

Yes, right here in the front.

Q: Hi. I’d like to—Juan Ocampo.

I’d like to ask a question about herding, not necessarily in elections, but in general. What interesting, less obvious things, with all the big data and the underlying academic theories that you’ve been able to use so much, would you point out as being, you know, interesting and unusual about herding? For example, are there formulas that you can use to try and get a group of consumers to herd around, like, Beanie Babies, for example? How quickly can you detect whether in fact you’ve got a wave going, you know, this is herding versus that’s not? And finally, are there ways to extinguish a herd that’s going on and is kind of in the upswell? Can you cut it off?

GALLAGHER: Great question. Herd behavior is a huge part of this whole field. How do you create it, identify it, stop it? Who wants to take that?

Hersh, why don’t we start with you?

SHEFRIN: Sometimes you want herding. Sometimes you sort of need herding. It helps people to coalesce around a common set of ideas so that they don’t engage in excessive competition. So I don’t want to knock herding as a general concept. It helps build communities. It helps people find ways to come together. But like any good thing, too much of a good thing is not such a good thing. So I think that we can get excessive herding. And that’s where you sort of want to start to pay attention to interventions that take place.

I think that we’re really just at the beginning of trying to understand what it takes to slow down unnecessary momentum. It’s the sort of, you know, momentum that happens when people get over-focused and crowd behavior takes over and people sort of stop thinking, but let their emotions take over.

So the kinds of interventions that are going to be important are going to require ways of defusing the emotion of the moment to try and bring up that slow-thinking part of us rather than letting our fast-acting systems get carried away.

GALLAGHER: Seth, what about you?

STEPHENS-DAVIDOWITZ: Yeah, so one of my favorite studies was by a former colleague of mine, Qing Wu, who studied fashion herding using big data. Basically, using Google searches, he wanted to see, of all the big fashion trends, where did they start, and is there a model to predict how these trends that eventually everybody is wearing get going. I know nothing about fashion, whatever fashion things are. He’s a big fashion guy, so he could study these. He knew more about the subject.

And what I think I would have expected would be that there are these clear patterns, that fashion trends start in Brooklyn among a certain select group of people and then maybe, you know, as you kind of said, there’s some sort of clear distribution that says, oh, now everyone is going to herd onto it.

And what Qing actually found is that it’s almost entirely unpredictable: sometimes they start in Brooklyn and sometimes they start in Salt Lake City and sometimes they start in Denver and sometimes they start in Kansas. They’re just all over the place. And sometimes they start and then they stop, and sometimes they start and keep going. So it was pretty shocking to me. Sometimes the data tells you that things are just totally chaotic and there’s no science to it, which is interesting, although there are fewer obvious things to do with that.

GALLAGHER: Yeah.

WAGNER: If we had time, I’d ask everybody who bought an $800 Canada Goose jacket, but I’ll—(laughter)—

GALLAGHER: We can ask.

WAGNER: —leave that.

GALLAGHER: Who has a Canada Goose jacket? I actually don’t. OK, only one.

WAGNER: This might not be the right population. (Laughter.)

GALLAGHER: OK.

WAGNER: So, you know, we’ve done a little bit of work on this, basically looking mostly at how hate gets distributed, unfortunately. But in terms of, like, if you take a social node, you look at what kind of distributes itself from that node out to the population. In this election, the most important example is Russia: you know, how did they interfere in our election? They kind of insert something into the network and that distributes itself through the network.

So what types of things? And I think this is going to be something that these folks are working on that is super interesting, especially about the interference of foreign powers in our election, but also for brands, et cetera, is there’s a lot of underlying psychological drivers that make somebody share stuff. Right? And I think that’s something that’s really understudied right now.

Three things that we know of: feeling cool, feeling funny, and feeling accepted. Right? We know those things are drivers. We don’t know the relative magnitude of those different things. But I think Twitter, in terms of identifying a network of where these things originate and from where they are shared out to people, is the best laboratory that we have. Sharing is the behavior, the root cause of that is the content itself, and the question is, what is the underlying psychology of that content? Is it funny? Is it acceptance? Is it looking cool, right? So how do you kind of rank order yourself in a social organization relative to your network?

And then I think what we need to think about is, what are some more of those drivers and how do they in terms of rank order and sharing, which is kind of the primary way that you kind of voice yourself within a community, whether it’s a Canada Goose jacket or bitcoin or whatever? Bitcoin is different. There’s some technical terms.

GALLAGHER: Bitcoin.

WAGNER: But Canada Goose, yes.

GALLAGHER: I think we have room for one more question—for one more question.

Right here in the middle-to-back, right there.

If you could just wait for the microphone. Yeah. There we go.

Q: Hi, thanks. My name is Georgia Keohane at Pershing Square Foundation.

I wanted to get back to another bad-data and, sort of, decision-making, behavioral-economics question. To the extent that we hope that new and better data will allow us to make better decisions and sort of remove irrationality, my question is about cases where we don’t necessarily have good data, or any data. So you have a bad credit score, not because you’re actually a bad credit risk, but because you don’t have a credit history. And/or your bail is set, you know, based on your previous arrest records, which tell you much more about, sort of, what you look like than about whether you’re a flight risk. How do we think about some of the new data sources that you described actually supplying us the right data, or good data, or small data, to allow us to make better decisions rather than worse ones?

GALLAGHER: Different new sources of data.

STEPHENS-DAVIDOWITZ: Yes. I think what’s going to happen is that there are going to be fewer cases where you don’t have information about people. So there’s going to be something about everybody that’s predictive of how good a credit risk they are, how good an employee they are.

I think there are dangers to this, too. There was a recent study where they said they could predict people’s psychology and IQ based on what they like on Facebook. And there are really, really subtle patterns. So some of them are not surprising, like, you’re more likely to be neurotic if you like Nirvana or something, like, OK, that makes sense. (Laughter.) And I think you’re more likely to be intelligent if you like Mozart, which I guess, OK, I get it. But then they had one where you’re more likely to be intelligent if you like curly fries, and it’s significant: one of the biggest correlations with intelligence is liking curly fries on Facebook. (Laughter.) And then it’s just, like, well, OK.

And that’s just the way the world works: people who like curly fries and people who don’t like curly fries are not identical. They’re not identical populations. There are things that are different about them. And we don’t know exactly the reason, but for whatever reason, liking curly fries correlates, it doesn’t necessarily cause, but it correlates with—(laughter)—with intelligence.

So I guess the idea would be that—it gets to big ethical questions, I think which also your question touched on. Which is, sometimes we don’t—there are—I’m sure there are many intelligent people who don’t like curly fries and we don’t want to necessarily penalize them, you know, if some company is putting on advertisements and they say, oh, wow, let’s target curly fries people, they would get a slightly higher intelligence in their employee group, but it would probably unfairly punish people.

So I think it helps in that there’s so much more information about people, but it also raises big ethical questions of what information we should use and what information we shouldn’t use.

GALLAGHER: OK. We’re just about out of time, so, Hersh and Dan, I’ll have you quickly just chime in on any new data sources that you see.

SHEFRIN: I just want to say that with new neural network algorithms, it will be possible to tease out patterns that I think will be informative and may make things better. But just don’t expect perfection. There will always be type-one and type-two errors. And if you knew more, you would have been able to make a better decision. You know, the bail decision that your questioner asked about, for example. It’s just the way that, you know, that life is. We’re just going to have to live with some limitation, but I think that big data has its advantages and we will see advances on that.

GALLAGHER: Thanks.

And, Dan, final word to you.

WAGNER: Very much the kind of network thing, I think, is interesting. I think if you’re a business, the most important data is kind of boring: it’s going to be consumer data that provides a means of reinforcement. So with AI and machine learning, a lot of the most important work for a business is essentially reinforcement learning, which says: I predict something, I enable some action, which is some placement of resources, I do it, I measure what worked, and I adjust my processes based on what I learned. Right?

And I think Google does this, Facebook does it, Amazon does it. The businesses that are going to improve are those who are going to essentially, number one, implement that process, number two that can learn from that process, and, number three, are the ones that can embed small experiments in that process to improve in this kind of sequential tournament of resource placement over time.

And a lot of businesses for the very first time, especially those that are direct to consumer, are creating the underlying data that has the potential for reinforcement learning, right? A lot of people don’t have the staff to do this, or a lot of the primary components in place. But if you’re a business, I would think this would be the primary data source, and maybe it is just kind of captured in your servers right now. I think this is the way that you become competitive, or you’re going to be in a rough spot.
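The predict, act, measure, adjust loop Wagner outlines is, in its simplest form, a multi-armed bandit. A minimal epsilon-greedy sketch, with the ad names and click rates as purely hypothetical placeholders:

```python
import random

random.seed(1)

# Hypothetical setup: two ad creatives with unknown true click rates.
TRUE_RATES = {"ad_a": 0.05, "ad_b": 0.08}
EPSILON = 0.1  # fraction of traffic reserved for exploration

counts = {ad: 0 for ad in TRUE_RATES}
clicks = {ad: 0 for ad in TRUE_RATES}

def choose_ad():
    """Try each ad once, then mostly exploit the best observed rate."""
    for ad in TRUE_RATES:
        if counts[ad] == 0:
            return ad
    if random.random() < EPSILON:
        return random.choice(list(TRUE_RATES))               # explore
    return max(TRUE_RATES, key=lambda a: clicks[a] / counts[a])  # exploit

for _ in range(10_000):
    ad = choose_ad()                            # predict and act
    clicked = random.random() < TRUE_RATES[ad]  # measure what worked
    counts[ad] += 1                             # adjust the running estimates
    clicks[ad] += clicked

for ad in TRUE_RATES:
    print(ad, counts[ad], round(clicks[ad] / counts[ad], 3))
```

EPSILON trades off learning against exploiting what you already know; the small embedded experiments Wagner mentions play the same role as the exploration branch here.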

GALLAGHER: Great, excellent.

Well, from curly fries to epsilon, please join me in thanking our wonderful panelists.

And thanks to all of you for being here. (Applause.)

(END)
