Daniel Kahneman discusses insights from behavioral economics.
MURRAY: Can we get started? Good. Thank you all for coming. Welcome to the Council on Foreign Relations Robert B. Menschel Economics Symposium.
My name is Alan Murray. I’m the chief content officer for Time Inc. And I’ve really been looking forward to the conversation today with Daniel Kahneman, who, as I think most of you know, needs no introduction. He is a psychologist and economist, winner of the Nobel Prize in economics, and author of what I would argue is one of the most important and influential books of the last decade—apologies to those in the audience who have written their own books—(laughter)—I’m sure they have all been very important and influential as well, but—“Thinking, Fast and Slow,” which is a fascinating exploration of how the human mind works and how intuitive thinking, which you call “System 1,” interacts with deliberative thinking, “System 2.” And my conclusion after reading it is that System 1 is the 600-pound gorilla here—
MURRAY: —more than we realize.
MURRAY: Can you talk about that a little bit?
KAHNEMAN: Well, I mean, the claim, you know, in the book is that what we are conscious of, we are conscious of our conscious thoughts. I mean, we are conscious of our deliberations. Most of what happens in our mind happens silently. And the most important things that happen in our mind happen silently. We’re just aware of the result. We’re not aware of the process. And so the processes that we’re aware of tend to be deliberate and sequential, but the associative network that lies behind them and that brings ideas forward into consciousness, we’re really not aware of it.
So we live with System 2. We’re mostly aware of it, you know, in those terms. And System 1, which does most of the work, is not recognized.
MURRAY: But because we’re aware of the deliberative thinking, we tend to think our decisions are more deliberative than they really are.
KAHNEMAN: Absolutely. I mean, what really happens is—what I find very remarkable is you ask people why they believe certain things or why they have chosen to do certain things, and people are not at a loss for words. They tell you. (Laughter.) So we always have—we always have reasons, or we think we have reasons. But actually, when you look more closely at it, very often the reasons have very little to do with—they’re not causes, they’re explanations.
MURRAY: After the fact.
KAHNEMAN: Yeah, after the fact.
MURRAY: So, you know, Malcolm Gladwell wrote a book probably around the same time as your book called “Blink” that made the argument that the System 1, that intuitive decisions are often better than deliberative decisions. Do you buy that?
KAHNEMAN: Well, in some cases that is certainly true. So there are many cases where there are expert intuitions. So a master chess player just sees only strong moves. I mean, all the moves that come to the mind of a master are strong moves, and then there’s some deliberation in choosing among them.
A lot of research has been done on firefighters, and experienced firefighters do things without knowing why they do them. So when you talk about that, it sounds like a miracle. But all of us are intuitive drivers. So we’re experts, and we do things without thinking. And if we stop to think about everything we do when we drive, we couldn’t do it. All of us are experts in social contact with other people. So we intuitively understand social situations without knowing why we do. So there are many domains in which intuitive expertise is really quite reliable and much better.
MURRAY: So who needs deliberative thinking? (Laughter.)
KAHNEMAN: Well, you know, in many domains we don’t have expertise. And in the domains where we don’t have expertise, it would be fine if we didn’t feel that we have expertise when we don’t. So many intuitions are right, but we also have intuitions—and subjectively they feel just the same—which are not based on expertise; they are just based on other processes.
MURRAY: So I don’t want this to get too political, but the current president of the United States—(laughter)—has said on a couple of occasions that one of his great strengths is that he can, without expertise—he’s not a big book reader; he probably hasn’t read your book. Everyone here has, I promise you, but he may—but he says without a lot of—without a lot of expertise, he can make better decisions than other people by applying common sense. How does that fit into your framework? (Laughter.)
KAHNEMAN: Well, let’s put it that way. I think I can understand why he’s so happy with it, I mean, with the way the mind works. And when you are not aware of doubts, you know, when you are really not very clearly aware of what you said before, and when you are—it’s a happy state to be in. (Laughter.) And not having doubt is clearly part of his makeup. He is not faking it. I mean, this I’m sure, you know, is quite true. He really thinks that he’s great.
MURRAY: Well, you have a whole section of your book on overconfidence—
MURRAY: —which you seem to feel is one of the great problems of the way we—
KAHNEMAN: Yeah, I didn’t have—I didn’t have President Trump in mind when—
MURRAY: No, no, I’m—I wasn’t—I wasn’t suggesting that. But talk a little bit about that. I mean, that’s—you highlight that as one of the great dangers of this way of thinking.
KAHNEMAN: I mean, one of the main things that happen is that we live in a world—subjectively, we live in a world that is much simpler than the world out there really is. And that is true for all of us; that is, we all live in a simplified world. And one of the things that happen, that we can recognize, is that whenever an event happens, its explanation comes with it immediately. That is, we understand the world even when we couldn’t predict it. We understand almost everything after the fact. We have a story.
MURRAY: Especially people in my business, yeah. (Laughs.)
KAHNEMAN: But not only in your business. I mean, you know, you cater to something that happens in everybody’s mind. And this ability to tell stories after the fact and to believe them—because they are—because they come without alternatives—this feeds a sense that the world makes sense, the world is understandable. And that sense that the world is understandable is very much exaggerated. I mean, so we’re not aware of complexities because we somehow rule them out, and that’s part of our makeup.
And that leads to overconfidence. Overconfidence basically is because it’s quite often extremely difficult to imagine alternatives to the way that you think. And one of the things that—I mean, I’m going to say something you haven’t asked me about, but when—
MURRAY: People do that to me all the time. Feel free.
KAHNEMAN: I know. (Laughter.)
But, you know, when I see this glass full of water, I know it’s there. I mean, it’s reality. And I’m sure you see it too, because I assume that you see the same reality that I do. Now, about this glass that’s probably true, but the same kind of false consensus, as we call it, operates in many business situations, many situations of decision-making, where there are large differences among people that they’re not aware of and can’t imagine. This is very abstract.
MURRAY: Do you—do you have an example?
KAHNEMAN: I’ll give you an example. So here is an example. It’s a large insurance company, and they have many underwriters, who from the point of view of the company are interchangeable; that is, you know, they get assigned essentially at random to different cases. Now you take a set of cases—I’m describing an experiment we actually ran—you take a set of cases and you present them to, say, 50 underwriters, and they put a dollar number on them. And now I ask the executives in that company something that you can ask yourselves. Suppose you take a random pair of underwriters and you look at the two numbers they produced, and you take the average of those two numbers, and then you take the difference between those two numbers. In proportion to the average, how large is the difference? And you probably have an intuition, because most people do, about what to expect in a well-run business with experienced underwriters. What the executives expected was something like 5 to 10 percent variability. That sounds about right. If you’re a pessimist you’ll say 15 (percent). It is actually 50 (percent), five-zero.
KAHNEMAN: And that is experienced underwriters. Experience doesn’t reduce that variability.
MURRAY: And this was their own underwriters?
KAHNEMAN: Their own underwriters.
MURRAY: So they had been working with them for years, they knew what—
KAHNEMAN: Their own underwriters. It’s the same with claims adjusters. You know, in the business—in the underwriting—in the insurance business, it’s extremely important when a claim comes in to set—to determine what it might be worth, what it might cost. And about 50 percent variability.
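The variability measure Kahneman describes, the difference between a random pair of judgments taken as a fraction of their average, can be sketched in a few lines. This is a hypothetical illustration: the function name and the dollar figures are invented, and the uniform spread of quotes is only an assumption chosen to show noise on the order he reports.

```python
import random

def pairwise_noise(quotes):
    """Average relative difference |a - b| / mean(a, b) over random pairs."""
    trials = 10_000
    total = 0.0
    for _ in range(trials):
        a, b = random.sample(quotes, 2)  # a random pair of underwriters
        total += abs(a - b) / ((a + b) / 2)
    return total / trials

# Hypothetical premiums (in dollars) from 50 underwriters on the same case.
random.seed(0)
quotes = [random.uniform(7_000, 13_000) for _ in range(50)]
print(f"average relative difference: {pairwise_noise(quotes):.0%}")
```

With identical quotes the measure is zero; the wider the spread of judgments on the same case, the larger it grows, which is what made the 50 percent figure so startling to the executives.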
MURRAY: This is another big theme of the book: We as human beings tend to be extremely bad at making statistical judgments.
MURRAY: We get it wrong.
KAHNEMAN: In this case, though, what I want to emphasize before I get to statistics is that what I found most strange in my interaction with that insurance company was the problem was completely new to them. It had never occurred to anybody. They had what I call now a noise problem, and they had a huge noise problem. Obviously, you know, if there is 50 percent variability, something needs fixing. And they realized that because they believed the numbers. They had generated the experiment themselves. But how could people not be aware that there is a noise problem?
And there is just an assumption: When I see a case, I think I see it correctly. I respect you. You are my colleague. I assume you would see the same. And so there is that assumption that people agree when, in fact, they would not agree.
MURRAY: Fascinating, yeah.
Now, on—more broadly, the—I mean, part of the reason I ask this is because I studied economics in graduate school and I learned this whole elaborate discipline, which was built on the notion that people are reasonably rational and do fairly well with the statistical problems that face them in daily life—until I read your book, which brought that all down. It’s a pretty dramatic change in economics that’s occurred over your career and lifetime.
KAHNEMAN: Well, I mean, I don’t think there’s been such a deep change in economics—
KAHNEMAN: —as a result of our work. But people in economics are more tolerant than they were 30 years ago of the idea that people make mistakes.
I don’t like the word “irrational.” I mean, I think, you know, we are—
KAHNEMAN: Oh, because, you know, we have been—Amos Tversky, my colleague, and I, you know, were sort of called the prophets of irrationality. We really never wanted to be thought of that way. I think of irrationality as sort of impulsivity and, you know, sort of a crazy thing. What is really happening is that the standard of what economists call rational is totally infeasible for a finite mind. And so there is something deeply unreasonable about the theory of rationality.
KAHNEMAN: And so people, you know, they muddle through, but they’re not rational by that standard. You just cannot meet those standards.
MURRAY: Did you get some resistance from your colleagues in the economics profession arguing that there was something deeply unreasonable about an assumption on which the profession was built?
KAHNEMAN: Well, yeah, I mean, we got—(laughter)—we got a fair amount. I mean, mostly what happened—mostly what happened is that economists ignored us for a very long time. And then a few economists—
MURRAY: Did that bother you?
MURRAY: You didn’t mind being ignored?
KAHNEMAN: Absolutely not. I mean, you know, we would not—we did not do our work to reform economics.
Behavioral economics actually started out in a bar, I would say. (Laughter.) That’s the origin story. There was a conference of, I think, cognitive science. And the person who at the time was the vice president of the Sloan Foundation, and who eventually became the president of the Russell Sage Foundation, approached Amos Tversky and me at that bar, and he said that he wanted to bring economics and psychology together, and did we have any idea about how to do that. And I actually remember what I told him, because I told him that this was not a project on which he could expect to spend a lot of money honestly, because there just isn’t that much you can do.
MURRAY: (Laughs.) Not going to happen.
KAHNEMAN: And mainly, don’t give any money to psychologists who want to reform economics. (Laughter.) But look for economists who might be interested in psychology and support them.
And the first grant that was made at the Russell Sage Foundation was to an economist, a young economist called Richard Thaler—
KAHNEMAN: —to spend a year with me in Vancouver. And I think that that year and that grant was, you know, foundational, and the rest he did. I mean, it’s not that we reformed economics; you know, there was a movement among economists, among young economists who were interested in that field and that developed it.
KAHNEMAN: It’s still a minority field, but it’s now accepted.
MURRAY: Accepted, yes.
So you have people in this room who make a lot of important decisions and consequential decisions every day. So tell them how to improve their own decision-making. We’re going to do a little self-help here.
KAHNEMAN: No. (Laughs.)
MURRAY: How do they improve their decision-making processes?
KAHNEMAN: Well, I would say that when you talk to an individual, I generally simply refuse to answer that question because, you know, I know how little studying this problem has done for the quality of my decisions, so I’m not—(laughter)—I’m not going to—
MURRAY: You don’t—you don’t feel like you make better decisions after the last 30 years?
KAHNEMAN: Seriously, no. (Laughter.) But organizations are different. I am really much more optimistic about organizations than about individuals because organizations have procedures and processes, and you can design procedures and processes to avoid some problems.
MURRAY: Yeah. Give me some examples. We were talking earlier about the premortem.
KAHNEMAN: The premortem is an idea—not mine, but it’s one of my favorite—and this is that when in an organization a decision is about to be made—it’s not finalized yet—you get a group of the people who are involved in making the decision and you get together a special meeting where everybody gets a sheet of paper. And the meeting is announced as follows. Suppose we just made the decision that we are contemplating. Now a year has passed. It was a disaster. Now write the history of that disaster. That’s called a premortem, and it’s a splendid idea, I think. And it’s very good because in an organization typically as you approach a decision, and especially if the leader of the organization is clearly in favor of it, it becomes disloyal to express doubts. Pessimists are unwelcome. Doubters are unwelcome. And what the premortem does, it legitimizes doubt, and it gives people an opportunity to actually be clever in raising doubts that they never would raise in the normal run of—
MURRAY: Does it very often change the decision?
KAHNEMAN: I don’t think so. But I think it—(laughter)—but I think it often improves the decision. That is, it’s difficult for me to imagine that if a group is set on doing something they will just abandon it because somebody thought of something. But they will take precautions they might not have taken. They will consider scenarios they might not have considered. So this is a way to improve decisions, not to modify them radically.
MURRAY: Let’s take a few minutes. I’m going to open it up to questions in just a few minutes, so prepare them. They need to be brilliant.
But let’s just take a minute to talk about artificial intelligence. And I really—two things I want to ask you. The first is, this process that you describe of how the human brain works and the interaction between the intuitive and the deliberative, do you think that can ever be replicated by algorithm?
KAHNEMAN: Well, you know, the answer is almost certainly yes. You know, this is a computing device that we have. It won’t be replicated exactly. You know, it shouldn’t be replicated exactly. But that you can train an artificial intelligence to develop powerful intuitions in a complex domain, this we know.
I mean, last year a program devised by DeepMind in London beat the world champion at Go. And I met them a few months later, and they said that their program is currently so much better than it was when they beat the world champion.
KAHNEMAN: That it’s not even close. And the reason is that between the time they beat the world champion and the time I talked to them, that program had played with itself 30 million times. (Laughter.) And so the kinds of progress that you can make—and I pick Go because it’s an intuitive game. I mean, it’s a game where typically people cannot really explain why they feel a move is strong or not. And the observers—there’s going to be a film—I think the premiere is at the end of this week—called “AlphaGo,” because they made a film of that story. And evidently there were moves that the program made that people who are experts at Go recognized immediately as extremely strong moves and completely novel ones, and Go has—
MURRAY: Wow. And will artificial intelligence improve the quality of decisions?
KAHNEMAN: Well, you know, it depends in what domain. So there are certain domains where we can see it coming. I mean, there are professions that are rapidly becoming obsolete. I mean, dermatology—diagnostic dermatology is obsolete. Diagnostic radiology, it’s just a matter of time.
Now, when will it be the case that you will have a module for evaluation of business propositions? Not immediately, but do I see any reason to believe that there won’t be one within 10 or 15 years? I guess—I think there will be one.
MURRAY: And will it operate on the principles of System 1, intuition? Or will it operate on the principles of System 2?
KAHNEMAN: I think, in a way, neither. I mean, it will look more like—you know, it will be very fast, and in that sense it will look like System 1. But you might also—and that will happen, too—you will—there will be programs that will reach a conclusion very quickly through a process of learning, like learning go, with big data.
KAHNEMAN: So you learn from very big data.
MURRAY: Do you think it’s possible that we will then integrate the intuitive and deliberative better than the—than the human brain does?
KAHNEMAN: Well, one of the things that, you know, having developed that kind of software, we’ll also develop programs to explain it in System 2 terms. So it’s going to be separate because what generates a solution is the deep learning. You know, it automatically looks at the big data and develops—
MURRAY: Can’t really be turned into a story that we can grasp.
KAHNEMAN: No, it’s not—it’s not. But I’m quite sure that that is a development that is coming, that we’ll develop programs to tell stories about those decisions so that we can understand them in terms of reasoned argument.
MURRAY: So you have a fascinating personal background. You were born in Tel Aviv. Your parents were from Lithuania, moved to Paris, were there when the Nazis came over. Your father was held for a period of time. How has that affected your unusual career choice?
KAHNEMAN: I’m not sure it has.
KAHNEMAN: I’m not sure it had any effect at all. I mean, I—
MURRAY: There weren’t a lot of people running around putting economics and psychology together at the time you did.
KAHNEMAN: No, but you know, that’s an accident. I mean, you know, that is the kind of thing that happened accidentally. Why I became a psychologist, I—when I was introspecting about this, I thought it’s because my mother was a very intelligent gossip. (Laughter.) And I—gossip was—you know, intelligent gossip was really an important part of my life as a child, just listening to it. And people were endlessly complicated. There was nothing simple, and there was a lot of irony. And that, I think, is something that I grew up with and that maybe turned me into a psychologist.
MURRAY: And then how about the economics part?
KAHNEMAN: Oh, the economics was completely accidental. I had a colleague, Amos Tversky, who was more of a mathematical psychologist. He was not an economist either. But we did work on decision theory, which we published in an economics journal—Econometrica. The reason we published it there was that it was the most prestigious journal, you know, to which people would send that kind of theory. If the same article had been published in a psychological journal, no economist would have looked at it. But because it appeared in their journal, quite a few economists took it seriously. It was an indication that we were worthy of a certain kind of respect. And so we were adopted by economics. It’s not that we ever had an ambition to change economics.
MURRAY: So I have—I could go on for a long, long time, but I want to open it up to the members because they already got about five hands up in the air. Just a couple of things. A reminder that this meeting is on the record, so feel free to use it. If you have a question, wait for the microphone, speak directly into it, state your name and affiliation, and limit yourself to one question; no multiple questions. If you ask multiple questions, he’s not going to answer it, so.
Start right there in the back and then over there on the other side.
KAHNEMAN: And I’m hard of hearing, so don’t be insulted if I ask for the question to be repeated.
Q: I’m Lew Alexander from Nomura.
The question I’d like to ask is really about historical analysis, and it’s essentially a question about how optimistic or pessimistic I should be. On the one hand, history has the problem that you look back for a reason and you tend to find what you’re looking for. But at the same time, there are procedures that good historical analysis uses to deal with that: adherence to primary sources and so on. And I guess my question to you is, do you feel like good history is possible? Or are we sort of doomed to the confirmation-bias kind of problem in historical analysis?
KAHNEMAN: Well, you know, it’s hard for me as an outsider to define what “good history” would be like. History, by its nature, is going to be a story that we tell, with the strengths and limitations of stories. And making it as factual as possible helps, but it’s not going to be a science. It’s not going to be general because it deals with individual cases and with individual stories. So I don’t really know what an answer to your question would look like, because I have no idea what good history would look like.
MURRAY: You said something interesting earlier; why is it so hard to come up with alternative stories?
KAHNEMAN: Well, this is really a characteristic of the perceptual system, that when we perceive things we make a choice. And frequently, when stimuli are ambiguous, we can see it this way or that way. And the remarkable—everybody here, I’m sure, has seen the Necker cube. It’s the sort of cube that’s flat on the page, but it appears three-dimensional, and it flips. If you stare at it long enough, there are two three-dimensional solutions that you see, and they flip automatically. There’s nothing voluntary about it. And it flips all at once, and you only see one interpretation—
MURRAY: You can’t see them both at the same time.
KAHNEMAN: You don’t see them both. You know that there are two, but you see only one. And what happens is a process where once a solution gets adopted, it suppresses others. And this mechanism of a single solution coming up and suppressing alternatives, that occurs in perception and it occurs in cognition. So when we have a story, it suppresses alternatives.
MURRAY: Fascinating. So, for a historian—even though it’s not your chosen field—I mean, are there tricks you could use to test your thesis or make sure you’re not suppressing alternative versions?
KAHNEMAN: Well, I mean, very likely you are suppressing alternative versions, and that may not be a bad thing. I mean, it’s not—you probably want to check with other people because this is not something that you yourself are likely to see. I mean, we—because of that process of inhibiting alternatives, we tend to go with one option.
MURRAY: There was a question in the back. Yes, right there.
Q: Robert Klitzman from Columbia University. Thank you so much.
A lot of voters in the last election, I think, when faced with the complexities of the modern world, relied on System 1 thinking and looked for simple solutions: let’s blame these people, those people, et cetera, et cetera, and many voted for Trump. And I’m wondering how we might address that. In other words, is there better messaging—especially with social media, which focuses, I think, on short answers that may not be correct but that appeal intuitively? Are there ways that we might address that better than we’re doing? Thank you.
KAHNEMAN: No, I’m not sure that this was special to this election. I think that what was different in this election were the candidates. But the process of making decisions on an emotional and simplified basis, that I think is true for every election. So I’m not even sure that this election was very different from others.
And what can be done about it? You know, I don’t know who would be doing the doing, you know. (Laughter.) Who would we want to be doing something about it? I can’t—it’s very difficult to imagine an alternative.
MURRAY: Is there—
KAHNEMAN: I would have liked, personally, a system in which voting is compulsory, which it is in some countries, so the default is that you vote. And once the default is that you vote, it changes the character of it. I think compulsory voting would encourage deliberate voting. This is—you know, this is a hunch that I have.
MURRAY: Why? Why is that? Why would compulsory voting encourage deliberative voting?
KAHNEMAN: Because you have to make a choice.
MURRAY: It’s not a passionate—
KAHNEMAN: At the moment, you know, people are either very involved and they know what they want, and so they participate in the system, and the others don’t participate. But if you really have to do it, I think that would be a very good thing, in part because the population of nonvoters is really very poorly represented in the system. And you would probably have a very different political system if everybody voted.
MURRAY: Do you think there’s a role for education here to help people be more conscious of this battle between System 1 and System 2 and how to balance it?
KAHNEMAN: Well, I’m not very optimistic that—you know, it—look, it’s more in the culture, I think, than in the educational system. I mean, does the culture encourage or approve of deliberation, or does it actually approve of gut decision-making? And my sense is that actually people want to be led by a leader who’s intuitive; maybe not quite as intuitive as—(laughter)—
Q: Hi, Doctor. Andy Russell. I’m a student of psychology and business and have spent the past 18 years building companies in digital media, social media. And actually your firm, TGG Group, is an investor in one of my companies. And I’ve recently built software to kind of use your biases and everything I’ve studied about you to help people collaborate around the world to do better things.
What you know, having been the author of this entire science, and what we all know about big data and what’s available through social media and how easy it is to either influence or manipulate people’s decision-making, I believe we’re living in a dangerous time. And I’d like to hear your thoughts on that.
KAHNEMAN: I think we can all agree that we’re living in dangerous times. I’m not—I don’t know enough about social media and, you know, how this has really changed life. One has the sense that it has made a very big difference, but—
MURRAY: You don’t have a Twitter account?
KAHNEMAN: I think—(laughter)—I do—
MURRAY: You’re not an active user—
KAHNEMAN: No, I’m obviously not.
MURRAY: —of your Twitter account. (Laughter.)
Yes, right here, and then over here.
Q: Hi. Jove Oliver with Oliver Global.
When I was in grad school, the buzzword was sort of interdisciplinary. And I recently heard the director of the Santa Fe Institute talk about anti-disciplinary. So I was wondering, as someone who’s got something to say about that, do you think that college campuses are a bit too siloed these days?
KAHNEMAN: That’s not my impression, really. I mean, you know, I think there is a lot of—at least there was at Princeton. My sense was not that people were trained to be very narrow. Graduate schools tend to be highly specialized, and that is largely because of the way the job market has evolved. There is a lot of competition and people have to publish a lot while they’re graduate students.
I would say that in the better universities, undergraduate training is not overly siloed. At least that’s my impression.
MURRAY: Right here.
Q: (Off mic)—JPMorgan.
Among many areas that your work with Amos impacted, public health was one that surprised me the most, and especially developing evidence-based medicine. Were you surprised to see how far it reached? And how do you foresee medicine, health care, and public health in general, incorporating more and more of your ideas?
KAHNEMAN: Well, we were very surprised. You know, our work had a lot more impact than we ever thought it was going to have. Clearly, you know, when you look at a development like that, clearly there was readiness on the part of—there was an audience waiting for something like that. And we happened to arrive at the right time and with a message that was easy to assimilate.
And it’s very clear that many developments in terms of evidence-based medicine, evidence-based everything, are very compatible with the message of, you know, behavioral economics and with the message of the kind of psychology that Tversky and I were doing.
Clearly—and this is happening, and it’s encountering resistance because of the appeal of intuition that we were talking about earlier. So evidence-based medicine is not having a very smooth run. I mean, there is a lot of opposition to it, and it’s quite interesting to see where it comes from.
I believe that eventually, you know, truth will out, and eventually evidence-based medicine will be accepted and we will know. It will be accompanied by knowing when we can trust intuitions. I mean, this is really the key issue: we don’t want to give up intuition. We don’t want only evidence-based.
And one of the real dangers of evidence-based, algorithms and so on, is that experts will be discouraged and wither. Expertise will wither. And how to find the balance is—that is going to be a serious challenge, I think, because eventually I see evidence-based everything taking over. And yet there will be a time at which this will have to be resisted, because we are going to be losing something.
MURRAY: We had an interesting conversation in the back room about public-policy applications—the increasing use of nudges, of public-policy mechanisms that don’t take away your ability to choose but push you in a certain direction, understanding the psychology of the decision, an example being pension opt-outs instead of opt-ins for savings, for instance.
MURRAY: Can you talk a little bit about the power of that?
KAHNEMAN: Well, yes. I mean, this idea—I should make clear, this whole line of thinking comes from Richard Thaler himself. It’s not something in which I had a part; I’m a disciple of this, and certainly very enthusiastic about it. But the best example, and it’s an early example, is something that Dick Thaler and a student of his, Shlomo Benartzi, called Save More Tomorrow. That’s the plan. And the plan is offered by an organization to its employees.
And the idea of the plan is that you don’t increase your saving right away. Your saving will increase automatically, and by a fairly large number—by 3 percent of your salary, not of the increase—the next time you get an increase in pay. And it will go on increasing every time you get a raise until you stop it.
That is a very, very clever use of psychology. And it was done entirely by economists. But what is clever about it is that it avoids losing anything. There’s no loss. There’s no sacrifice. It’s a foregone gain. Furthermore, it’s not an immediate foregone gain. It will happen later. And it will happen at a happy moment, and you’ll barely be aware of it.
And then procrastination, which normally is a terrible force against saving, in this case encourages saving more and more, until you find yourself saving too much and you stop it.
KAHNEMAN: So this was really a brilliancy, I thought, and extremely effective. I mean, in the original application, I think it raised the saving rate in an organization from 3 percent to 11 percent. So those are not small effects. Those are enormous effects.
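The escalation schedule Kahneman describes can be sketched in a few lines. The parameters below (a 3-point increase at each raise, stopping at 11 percent) are illustrative numbers drawn from his description, not the actual plan terms:

```python
def save_more_tomorrow(start_pct=3, step_pct=3, stop_pct=11, raises=5):
    """Contribution rate (percent of salary) after each pay raise.

    The rate starts at start_pct and climbs by step_pct at every raise
    until the employee opts out, modeled here as a ceiling at stop_pct.
    All parameters are illustrative assumptions.
    """
    rate = start_pct
    schedule = [rate]
    for _ in range(raises):
        rate = min(rate + step_pct, stop_pct)
        schedule.append(rate)
    return schedule

print(save_more_tomorrow())  # [3, 6, 9, 11, 11, 11]
```

Because each increase is a foregone slice of a future raise rather than a cut in current take-home pay, the saver never experiences an out-of-pocket loss—which is the loss-aversion insight the design exploits.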
MURRAY: Yes, right there.
Q: Eva Giappagin (ph), World Bank.
Most of the decisions that we make are based upon a reference point or a belief or the subconscious mind, right? I’m interested in people in resource-poor settings who don’t have education, who don’t have skills, who don’t think they are capable. How do we change their subconscious mind?
MURRAY: That’s a small question for you.
KAHNEMAN: I mean, the answer, how do we change their subconscious mind, is easy. We don’t. I mean, you know, we can’t. The only thing that we can do is change the context in which people make their decisions. And if we change the context, which Richard Thaler and Cass Sunstein have called choice architecture—that is, the set-up in which people make their decisions—with the same minds, conscious and unconscious, people can be led to make better decisions, decisions that are more in their own interest and, you know, that make more sense.
It won’t happen by influencing the people themselves. That’s not the way to do it, really. The way to do it is to change the environment in which people live and the environment in which decisions are made. And that can be done.
MURRAY: Right here, Consuelo, and then right there.
Q: Alan, thanks, number one, for a lovely interview. It’s really great to see you.
MURRAY: Thank you.
Q: And Dr. Kahneman, thank you so much for being here as well. My name is Consuelo Mack. I work at “WealthTrack,” my show on public television. And I’m very intrigued by the premortem concept and by your optimism that corporations with systems and processes can make better decisions.
There is a current working assumption that the more voices you bring to a corporate table, the better the business outcome will be, and specifically the more women you bring to decision-making and the more minorities you bring to decision-making. What’s your view—what is your thought about that? Does diversity make for better decisions?
KAHNEMAN: Well, I really have no expertise in that. And I’m relying not even on primary sources but on secondary sources. So what I understand is the case is that there is an optimum level of diversity. When you have too little diversity, it’s not very good. And when you have too much, it’s paralyzing. And so there is an optimum to be sought.
As for men and women, I really don’t know the research. You know, I have hunches, like everyone else, and they’re not worth more than anyone else’s hunches. My guess is that this is very salutary, because I really believe there are differences in orientation and that this kind of diversity is going to be useful. But it’s just an opinion. There’s really no expertise behind it.
MURRAY: Well, although, I mean, you’ve done a lot of fascinating experiments over your career. You must have looked at gender differences in the course of those experiments. What did you—what did you discover about—
MURRAY: —gender differences that you’re willing to share with us? (Laughter.)
KAHNEMAN: Well, I—I mean, my main confession is I’ve never been very interested. And my sense is that within the kind of thing that we did—
MURRAY: Not that big.
KAHNEMAN: —if it had been very large, we would have known about it.
KAHNEMAN: We never—we never—
MURRAY: It never jumped out at you.
KAHNEMAN: We never—no, it never jumped out. There has been one very substantial case that nobody can explain of a difference favoring men. I don’t know how many of you are familiar with the one-item test that turns out to be very good—the bat-and-ball puzzle. How many people know the bat-and-ball puzzle? Oh, OK. It’s better known in very young audiences.
A bat and a ball together cost $1.10. The bat costs a dollar more than the ball. How much does the ball cost? Now, what’s interesting about this puzzle is that it’s about System 1 and System 2. It actually was devised in that context by a post-doc of mine, Shane Frederick. And everybody has an idea—it comes to everybody’s mind that it’s 10 cents. But 10 cents is wrong, because then the bat would cost $1.10, and 10 cents and $1.10 make $1.20. So the correct answer is five cents.
What’s interesting is that the majority of Harvard students fail that test, and MIT students too—I mean, very, very smart people fail it. Now, why on earth did I—
MURRAY: We were talking gender differences.
KAHNEMAN: Yeah. On this problem, there is a gender difference.
KAHNEMAN: Yeah. Men do better than women. And it’s not a small effect either; it’s fairly large. Now, my wife, who, you know, had a National Medal of Science—she claims that this is the kind of puzzle that only men would be interested in. (Laughter.) So, you know, that was—
MURRAY: She is a wise woman. (Laughs.)
KAHNEMAN: By and large, no, we haven’t found much.
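The bat-and-ball arithmetic in the exchange above can be checked mechanically. A minimal sketch, offered as an illustration rather than part of the transcript:

```python
# Bat-and-ball puzzle: bat + ball = $1.10, and the bat costs $1.00 more
# than the ball. Work in cents to avoid floating-point issues.
def check(ball_cents):
    bat_cents = ball_cents + 100           # the bat costs a dollar more
    return ball_cents + bat_cents == 110   # together they cost $1.10

print(check(10))  # False: the intuitive answer gives 10 + 110 = 120 cents
print(check(5))   # True:  the correct answer gives 5 + 105 = 110 cents
```

The System 1 answer (10 cents) fails the very constraint the puzzle states, which is exactly why the item works as a one-question test of whether System 2 checks in.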
MURRAY: Go ahead, right here. And I’m going to go over here and then come back to you.
Q: Hi. I’m Jonathan Tepperman. I’m the managing editor of Foreign Affairs Magazine.
Two things. First, if you’d ever like to write for us, we would love to have you, please.
MURRAY: (Clears throat.)
Q: I can give you my card afterward.
MURRAY: It’s just one question and no—must be pithy. (Laughs.)
Q: That wasn’t a question. It was a proposal.
The question is as follows. What’s the big-picture implication of your work for high-level foreign-policy decision-making, or policy decision-making in general? You know, if you look over the last four presidential administrations here, we’ve seen a range from an extremely organized, deliberative style under President Obama to a more disorganized, deliberative style under President Clinton to an organized, impulsive style under President Bush, to a disorganized, impulsive style under President Trump.
Is there a happy medium somewhere in there that you see? Thank you.
KAHNEMAN: Well, you know, there’s probably more than one happy medium. And it really depends on what you’re looking for. In terms of what the voting public is looking for, I think Obama was too deliberative. And I think it cost him politically. You know, that’s just my impression—it’s very odd, but people want a leader who knows immediately, who has intuitions. And deliberation is costly in terms of the public’s confidence in the quality of the decisions. So there is an optimum there.
Then, if you look for an optimum in terms of the quality of the decisions that are ultimately made, then, you know, my bias would be to say that there isn’t too much deliberation when you’re dealing with issues of peace and war. But the politics may lead to different places.
MURRAY: Right here on the aisle, and then here.
Q: (Daniel has ?) a lot of implications for investment committees and investment decision-making, like the first person to speak—(audio break)—across cultures. You know, do these dynamics work differently in an Asian culture versus a Western culture, that sort of thing?
KAHNEMAN: I have no expertise. I don’t know—(audio break)—is we don’t know more. I mean, I—again, this is the kind of thing where I suppose that if there had been very clear answers, somebody would have told me. But—(laughter)—so I’m inclined to guess that there are no very clear answers.
I know that some features of—some basic features of decision-making, like loss aversion, are very general. I think there are large differences in other features of decision-making, like optimism. There are more optimistic cultures, very clearly, and others where pessimism is considered more intelligent. So—
MURRAY: And you see—and that’s a deep difference.
KAHNEMAN: That is a deep difference, I think. I think that’s a deep difference.
KAHNEMAN: And it has a lot to do with—my guess is that being unrealistically optimistic is very good for entrepreneurship and so on. I mean, the dynamics of capitalism require a lack of realism, I think.
MURRAY: Right here.
Q: Hi. I’m Jack Rosenthal, retired from The New York Times.
I wonder if you’d be willing to talk a bit about the undoing idea and whether it’s relevant in the extreme to things like climate denial.
KAHNEMAN: Well, I mean, the undoing idea—the Undoing Project—that’s the name of a book that Michael Lewis wrote about Amos Tversky and me. But it originally was a project that I engaged in primarily, trying to think about how people construct alternatives to reality.
And my interest in this was prompted by tragedy in my family. A nephew in the Israeli air force was killed. And I was very struck by the fact that people kept saying “if only.” And that “if only” has rules to it. We don’t just complete “if only” every which way; there are certain things that you use. So I was interested in counterfactuals. And this is the Undoing Project.
Climate denial, I think, is not necessarily related to the Undoing Project. It’s very powerful, clearly. You know, the anchors of the psychology of climate denial are elementary. It’s very basic. And it’s going to be extremely difficult to overcome.
MURRAY: When you say it’s elementary, can you elaborate a little bit?
KAHNEMAN: Well, whether people believe or do not believe is one issue. And people believe or don’t believe in climate change not because of the scientific evidence. And we really ought to get rid of the idea that scientific evidence has much to do with people’s beliefs. I mean, there is—
MURRAY: Is that a general comment, or in the case of climate?
KAHNEMAN: Yeah, it’s a general comment.
KAHNEMAN: I think it’s a general comment. I mean, there is—the correlation between attitude to gay marriage and belief in climate change is just too high to be explained by, you know—
KAHNEMAN: —by science. And clearly people’s beliefs about climate change and about other things are primarily determined by socialization. We believe in things that people we trust and love believe in. And that, by the way, is certainly true of my belief in climate change. I believe in climate change because, you know, the National Academy says there’s climate change, but—
MURRAY: They’re your people.
KAHNEMAN: They’re my people.
KAHNEMAN: But other people—you know, they’re not everybody’s people. And so that, I think, is a very basic part of it: Where do beliefs come from? And the other part of it is that climate change is really the kind of threat that we as humans have not evolved to cope with. It’s too distant. It’s too remote. It just is not the kind of urgent, mobilizing thing. If there were a meteor, you know, coming to earth, even in 50 years, it would be completely different. People could imagine that. It would be concrete. It would be specific. You could mobilize humanity against the meteor. Climate change is different. And it’s much, much harder, I think.
MURRAY: Yes, sir, right here.
Q: Nise Aghwa (ph) of Pace University.
Even if you believe in evidence-based science, frequently—whether it’s in medicine, finance, or economics—the power of the tests is so weak that you have to rely on System 1, on your intuition, to make a decision. How do you bridge that gap?
KAHNEMAN: Well, you know, if a decision must be made, you’re going to make it in the best way possible. And under time pressure, there’s no time for deliberation. You just must do what you can do. That happens a lot.
If there is time to reflect, then in many situations, even when the evidence is incomplete, reflection might pay off. But this is very specific. As I was saying earlier, there are domains where we can trust our intuitions, and there are domains where we really shouldn’t. And one of the problems is that we don’t know subjectively which is which. I mean, this is where some science and some knowledge has to come in from the outside.
MURRAY: But it did sound like you were saying earlier that the intuition works better in areas where you have a great deal of expertise—
MURRAY: —and expertise.
KAHNEMAN: But we have powerful intuitions in other areas as well. And that’s the problem. The real problem—and we mentioned overconfidence earlier—is that our subjective confidence is not a very good indication of accuracy. I mean, that’s just empirical: when you look at the correlation between subjective confidence and accuracy, it is not sufficiently high. And that creates a problem.
MURRAY: Yes, sir, right here.
Q: Thank you, Alan.
Professor Kahneman, very nice to see you again. Daniel Arbess.
I’m going to try to tie this question back to Alan’s original question about algorithms and behavioral economics. Say you have a kid who’s at business school and he wants to be involved in the investment business. He’s trying to figure out: should he follow systematic, data-driven investing—algorithmic investing—or should he, as his father is telling him, learn about the world, learn about human behavior? (Laughter.) He’s studying behavioral economics but he’s being drawn to data analytics.
So I want to go back to Alan’s question, because I’m not sure I completely caught your answer as to how we’re going to reconcile increasing dependence on data analytics with the fact that the decisions that come out of data analytics are only as good as the data. If the data set is incomplete and we’re faced with a different scenario while relying on these systematic strategies, what will that produce?
KAHNEMAN: Well, the question is what the alternative is. And, you know, it’s easy to come up with scenarios in which big data would lead you astray because something is happening that is not covered in the data. The question is whether intuition or common sense is, in those cases, a very good alternative. I don’t think there is a general answer to this question.
MURRAY: But you did say earlier that you thought big data, artificial intelligence, will improve the quality of decision-making.
KAHNEMAN: I think it’s coming. You know, we can see it happening in every domain. It’s going to take a while. But there is no obvious reason to think that it will stop at any particular point.
MURRAY: So, in general, better decisions will be made 30 years from now than are made today.
KAHNEMAN: Well, yes. I think the world will have changed so much because of artificial intelligence, and it’s—but in many domains, yes, better decisions will be made.
MURRAY: I have one last question. You’re going to get it, but that puts a big burden on you. This has to be a gripping question.
Q: I’ll try to get a grip.
Would you be willing to speculate on the trajectory of artificial intelligence and machine learning and its social impact? Are we going to end up with Ray Kurzweil’s singularity, or are we going to end up, as Arthur Koestler once speculated, lucky if they keep us as pets? (Laughter.)
MURRAY: And before he answers that, would you identify yourself and your affiliation?
Q: Yes. Phil Hike (ph), Insight LSE (ph), which is a fuel-cell technology development company.
MURRAY: Thank you.
KAHNEMAN: Well, you know, you can only guess and speculate about that. The movement—what is very striking, at least to an outsider, about AI is the speed at which things are happening. And I mentioned the Go championship earlier. And perhaps the most striking thing about the Go championship was that, six months before it happened, people thought it was going to take 10 years. And things like that are happening all over the place, I mean, especially through deep learning.
So what it’s going to do is change the world before the singularity. The singularity is when there will be an artificial intelligence that can design an artificial intelligence even more intelligent than itself. And then, you know, the idea is that things could explode out of control very quickly. But long before any singularity, our world will change because of artificial intelligence.
Now, you know, some economists say that we’ve been there before—technological change does not really cause unemployment. Pessimists—and I am one—tend to think that this is different. And I think many people think that this is different, in part because of speed, and because of the kind of social changes that are going to occur when you create what Yuval Harari calls superfluous people, for whom there is really nothing to do.
This could be happening within the next few decades, and it’s going to change the world to, you know, an extent that we can’t imagine. And you don’t need singularity for that. You need a set of advances that are localized. And we can see those happening. So, you know, obviously self-driving cars, you know, that’s just one example. But, you know, it’s going to happen in medicine, in law, in business decisions. It’s hard to see where it stops.
MURRAY: Dr. Kahneman, I’m sure everybody in this room would agree that this was an hour very well spent; a round of applause. (Applause.)
It’s a coffee break, coffee break, and then back in here at 2:15.