Meeting

Malcolm and Carolyn Wiener Annual Lecture on Science and Technology With Henry Kissinger and Eric Schmidt

Monday, December 20, 2021
Speakers

Henry A. Kissinger, Chairman, Kissinger Associates, Inc.; Former U.S. Secretary of State; Coauthor, The Age of AI: And Our Human Future; CFR Member

Eric Schmidt, Cofounder, Schmidt Futures; Former CEO, Google; Coauthor, The Age of AI: And Our Human Future; CFR Member

Presider

Judy Woodruff, Anchor and Managing Editor, PBS NewsHour; CFR Member

Henry Kissinger and Eric Schmidt discuss the transformational power of artificial intelligence. 

The Malcolm and Carolyn Wiener Annual Lecture on Science and Technology addresses issues at the intersection of science, technology, and foreign policy. It has been endowed in perpetuity through a gift from CFR members Malcolm and Carolyn Wiener.

Transcript

WOODRUFF: Thank you, Kayla. I’m very happy to be here. Welcome to today’s Council on Foreign Relations virtual meeting with Dr. Henry Kissinger and Dr. Eric Schmidt. I am Judy Woodruff, and I will be presiding today. I just want to say how glad I am to be with all of you. I hope everyone is staying safe, especially with all the news coming in around COVID and the Omicron variant.

Today’s meeting serves as the Malcolm and Carolyn Wiener Annual Lecture on Science and Technology. This is a lectureship that addresses issues at the intersection of science, technology, and foreign policy, and is generously endowed in perpetuity through a gift from CFR members Malcolm and Carolyn Wiener. And I’m going to try to squeeze in as many questions as I can over the next twenty-five to thirty minutes, and then we’ll be turning it over for questions from the members.

I think AI has been around for half a century, but I also think it’s safe to say that most of us are intimidated by the idea of artificial intelligence, even as we know it’s all around us. It’s finishing our sentences; it’s making us utterly dependent on GPS. We also know the U.S. and China are spending billions to be first in the AI race. But there is so much we don’t know. And your book, Dr. Kissinger and Dr. Schmidt, seems to me to be an urgent call to get all of us to think harder about what it’s going to mean for our future.

You write that, and I’m quoting, “whether we consider it a tool, a partner, or a rival, AI will alter our experience as reasoning beings and permanently change our relationship with reality. The result will be a new epoch.” So, Dr. Kissinger, I’m going to begin with you. In that vein, you wrote a few years ago that AI represents the end of the Enlightenment. Today do you see it mainly as a force for good or a cause for worry?

KISSINGER: It’s both. It has unprecedented capacity to collect information, to absorb data and to point it in different directions. But it also raises issues and capacities in the military field. And I believe that it will change our perception of reality. And therefore, how that is interpreted in a religious way, in a mystical way, or in some other manner, it will—in the Enlightenment, reason dominated the perception. Here, extraordinary results can be achieved for which one does not know how they come about.

So you begin on some issues with the end of the process by discovering it through algorithms. But you don’t know why it operates this way. And these conclusions become dominant in various fields. And so there will inevitably arise a question about their—what produces them, and why they occur. And that is the distinction. And in the—and they can happen so quickly, and the achievement of them is so much faster than the human mind can follow, so there is a gap that will have to be dealt with or explained.

WOODRUFF: Dr. Schmidt, pick up on that, because the book does address the potential best and the potential worst to come from AI. Give us an example or two from each. And, I mean, my question is, how confident are you that our civilization is going to be around—either that it’s going to survive pandemics, climate change, global conflict long enough to realize the best or the worst of AI?

SCHMIDT: Well, in the book—and, by the way, Judy, thank you for doing this with us. And thank you to the CFR, which is an incredible organization that we’re all members of. And it’s a great honor to be here. In the book we have a couple of positive examples, and we give a couple of negative possible scenarios.

And a positive example is a drug called halicin. And this drug was developed at MIT, and it was done by a set of synthetic biologists and computer scientists. So what the synthetic biologists did is they imagined that if the computer scientists could go through 100 million compounds in chemistry, they could find a compound that would serve as a general antibiotic that was very different from the antibiotics that we, as humans, all have resistance to.

So they built a network that generated candidates of these antibiotics, and then they built another network that graded them based on how different they were from the ones we’re already getting. And they came up with a drug that appears to work really well. Now, that’s a good example where incredibly intelligent scientists working together across fields were able to do something that no single scientist or human could ever do. It’s just too complicated. And they—

KISSINGER: But they didn’t know why. They know that it works, and the process by which they achieved it, but they do not know why—they could not construct it by themselves without the artificial intelligence. And that is a different evolution of thinking than was the case during the Enlightenment period.
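To make the two-model screening loop Schmidt describes a little more concrete, here is a minimal, hypothetical sketch in Python. It is not the actual MIT halicin pipeline; the compound names and scoring functions are invented stand-ins for the trained networks he mentions, one predicting antibacterial activity and one grading how different a candidate is from existing antibiotics.

```python
# Illustrative only: stand-ins for the two trained networks described above.
def predict_activity(compound: str) -> float:
    """Placeholder for a model that predicts antibacterial activity (0-1)."""
    return (hash(("activity", compound)) % 1000) / 1000.0

def predict_novelty(compound: str) -> float:
    """Placeholder for a model that grades dissimilarity to known antibiotics (0-1)."""
    return (hash(("novelty", compound)) % 1000) / 1000.0

def screen(compound_library: list[str], top_k: int = 10) -> list[str]:
    # Keep the candidates that look both active and structurally unlike the
    # antibiotics humans have already developed resistance to.
    ranked = sorted(
        compound_library,
        key=lambda c: predict_activity(c) * predict_novelty(c),
        reverse=True,
    )
    return ranked[:top_k]

if __name__ == "__main__":
    library = [f"compound_{i}" for i in range(100)]  # in reality, roughly 100 million compounds
    print(screen(library, top_k=5))
```

The point of the sketch is only the division of labor Schmidt describes: one model proposes, another grades, and the humans interpret and test what comes out the other end.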

SCHMIDT: And the issues with our technology as it marches forward are fundamentally because it’s imprecise—in the sense it doesn’t know why it did something. It’s like a teenager. (Laughs.) If you ask your teenager why did they do something, they can’t explain it to you. And it really doesn’t know. It’s dynamic and emergent, meaning that it changes, and it changes all the time. And the most important thing is it’s learning. So I’ll give you an example of a concern that we cite in the book, which involves the development of children.

So you’re a young parent with young kids. You get your kids an artificially intelligent toy—a bear, or whatever. And the bear becomes the kid’s best friend. How do you feel about having your child’s best friend be not human? What happens when the best friend learns things that are either not correct or not permitted by the parents, and it’s not a human reaction? We could be really altering human beings’ experiences unless we figure out a way to deal with this uncertainty. No one knows how to solve this problem.

WOODRUFF: And, Dr. Kissinger, you do have recommendations in the book for how to begin to address the uncertainty. I mean, you’re making it clear that it’s something that should be done, but how do we do it?

KISSINGER: Well, we have a number of recommendations. One is that a group of (consensus-inclined ?) leading personalities be assembled to deal with these issues that Eric mentioned, because we don’t know precisely what issue will arise, but we know what uncertainties may be relevant. So this is on a national basis. Secondly, we believe that the companies that produce results based on artificial intelligence should accompany them with a study of, or a look into, the implications of the discoveries, to create a consciousness beyond the immediate technical solution sets that are being dealt with.

And third, in the international field, the artificial intelligence produces so many possibilities of intervention inside the territory of other countries, and kinds of threats that had not been dealt with before. And they’re being developed unilaterally by each country developing its artificial intelligence that way. And therefore, some form of dialogue needs to be developed. In the nuclear field we had a comparable problem, but with a much more transparent technology, which was large and could be counted.

And in the nuclear field, I remember that there were seminars at Harvard, MIT, and Caltech, at a time when the academic community still felt it was part of a governmental process, that developed concepts that then over the years were developed in the arms control field. There’s nothing like that now internally in this country to analyze this. And there is not even the beginning of it in relationships between us and China, in which we could at least understand if there are restraints that can be carried out, and what these restraints would be. So these are conceptual problems that I believe, and I think we both believe, should become commonplace.

WOODRUFF: And, Dr. Schmidt, you do write about this challenge in the book. Right now, should we be thinking about cooperation with other nations like China? Or should we be thinking of this as purely competition? How do we figure out which it is? And is it going to be different for every aspect of AI?

SCHMIDT: It’s likely to be different. And, first, there are plenty of areas where collaboration would be net positive. The example that I used around halicin is something that is a global treasure. Health matters to everyone in the world, not just to the U.S. and not just to the West. And indeed, AI could materially help with health care problems in the developing world, because they can bypass all the infrastructure if they go straight to digital. Many of them don’t have very good physical health care systems, but we can give them very good information that’s very targeted to them, and so on.

With respect to defense collaboration and treaties, we take a position in the book that the most important thing is to worry about essentially launch-on-warning systems. These are called automatic weapons systems. And we don’t want a situation where the computer decides to start the war, because the computer figured out something was going wrong that was perhaps not true, or made a mistake. This is essentially about human control. And the core problem is the compression of time. In an active cyberwar, for example, you may not have time for humans to make the decision. And so we think collectively it’s important that there be discussions at the diplomatic level over this.

There are no such diplomatic discussions right now. And I’ll speak for myself and say that Dr. Kissinger worked—and the reason we’re safe today is because Dr. Kissinger and his colleagues in the ’50s developed these doctrines. But they did so after Hiroshima, Nagasaki, and the explosion of a nuclear weapon by the Soviets. They did it after a tragedy, as opposed to before a tragedy. And I will say for myself that I’d like this conversation to occur before we do something really bad to each other.

WOODRUFF: And, Dr. Kissinger, can that happen, given, I mean, just as one example, the tense relationship now between the United States and China?

KISSINGER: Well, there are two aspects to it. One is what Eric has been discussing: to avoid automatic war and other matters of that kind. At the same time, there will undoubtedly be concern that in warning against these things you not teach the adversary things that he may not have developed yet, and thereby increase the capacity of the adversary to damage you or to defend in a unilateral way. So these are serious issues that have to be discussed. But I agree strongly with Eric that they must be addressed quickly so that at least we get a baseline of information and concepts of how to avoid it.

Now, in the commercial field the tendency of tech companies is towards monopoly. And that has to be limited by the fact that no country will accept a monopoly position in a major technology for another country. So what is a commercial relationship in which each side can develop some significant capacity, but not a dominant capacity? That’s never been actively faced before, but these are the sort of issues that we are raising in the book. And we didn’t write the book—I was totally ignorant of the artificial intelligence field. I slid into it by accident by listening to a lecture that I was actually trying to avoid. And Eric stood in the door of the room, and I didn’t know him very well, and he nevertheless urged me to go back into the room and listen to that lecture, because it would raise some fundamental issues that I might want to address.

And I did become so fascinated that then, with Eric’s help, we created groups that met informally, and then we formed a smaller group of Eric, Dan Huttenlocher, and myself that met regularly. But we—or, at least, I—did not know the outlines of an answer when we started this. And the main thrust of the book is to convey the fact that we are moving into something of the same impact as the Enlightenment, in the sense that it changes the human conception of reality and reaches edges of knowledge well beyond human perception. And to study the consequences of that is crucial. And it cannot be done only by studying the technological achievements, because the basis of them is what will form the new perception of reality.

WOODRUFF: Dr. Kissinger, just a quick postscript. So do I hear you saying you believe the Chinese leadership today is open to these kinds of discussions and moves that the two of you are advocating?

KISSINGER: I don’t know. I think fundamentally we and the Chinese have an unprecedented-in-history challenge in the sense that here are two societies that between themselves can destroy civilization as we know it through conflict with each other. And they can do so because they interact on a global basis. So at some point in that process, I hope that the leaders of the two countries get together and address that question, and say we have a joint obligation. And can they convince each other that they really believe in a joint but constant dialogue on these issues? That seems to me necessary. But to break through the hostilities that are created in the meantime is going to be very difficult. But it will be necessary. And we hope that it will be addressed before the damage has become obvious.

WOODRUFF: Dr. Schmidt, I want you to weigh in on that because, as you know, I mean, it’s been reported that most of the AI labs at your former company, Google, Facebook, IBM, Microsoft, up until recently, have been located outside the United States, reportedly 10 percent of them in China. And there’s obviously been concern about that. Is that—is that something that should be addressed and changed?

SCHMIDT: So that statistic is not one I’m familiar with, and I don’t believe it to be true. The vast majority of the AI research labs, which is what we’re talking about now, are in the West and in—basically, in Beijing. And the ones that are in Beijing are run by the Chinese—by the CCP. Google had a small group in China, which has since been shut down. And I’m not familiar with the other—basically, the Chinese presences for U.S. firms. But I’m not aware of any. So I don’t think that’s true.

But the concern is, nevertheless, legitimate. China announced two years ago that their strategy was to lead the world in technology, including quantum computing, supercomputing, aerospace, 5G, mobile payments, new energy, high-speed rail, financial technology, artificial intelligence, and, of course, semiconductors. So the Chinese government—and Dr. Kissinger is really a genuine expert on how they think—Dr. Kissinger says that they think in the very long term.

And so that’s the same list that we should have in the West. And furthermore, they’re backing it up with a great deal of money in terms of funding Ph.D.s and research. This is not the China that you thought about ten years ago. So I think a fair statement is that we’re going to have a rivalry partnership with China, where they’ll make some wins and we’ll make some wins in this technology. It doesn’t have to lead to war, but it is going to be uncomfortable. And the Trump administration, for example, restricted access to extreme-ultraviolet semiconductor manufacturing. That was a good decision on the part of the Trump administration.

So we have some tactics. But I think the new—the point here, especially for a CFR audience, is that China is not a near-peer. They’re a peer. And so developing a global structure where the U.S. is doing its thing, China is doing its thing in AI, and then how you manage those two, is critical for our national security over the next twenty years.

WOODRUFF: And so you see at this point—I assume you’re talking to individuals in this administration and people in other countries that are playing a key role here. I mean, are there the beginnings of an effort to put that kind of global structure together?

SCHMIDT: There are. And I was fortunate enough to lead a congressional commission called the National Security Commission on Artificial Intelligence, which exhaustively goes through these issues—756 pages, a good thing to read over the holidays. And we go through this in great detail. We conclude that the U.S. is still slightly ahead of China, but China is catching up very quickly. And we make a set of recommendations which include more research and those sorts of things, but also working very closely with our Western partners. The Biden administration is doing all of that. The NDAA, which is how this stuff gets funded, includes roughly half of our recommendations, but the other half need to get done as well. So I’d say, in typical American fashion, we’re getting there but we’re getting there too slowly.

WOODRUFF: Meaning, can we—can we—I guess I’m saying, is it inevitable that we’re going to be behind, is what I’m asking.

SCHMIDT: It may or may not be. It’s not possible to know. What happens in our field is everyone says that because China does not have laws about privacy and data security, China can build systems that are essentially larger, smarter, because they have more data. But it may also be that the field gets better at dealing with U.S.-sized data rather than Chinese data, which is four times larger. So all of these sort of quick things that you hear may not be true in the next five to ten years. You haven’t mentioned it, but we’re really in a race to build general intelligence. And we’re working—I mean, in the West, we’re certainly working very hard to build systems that are human-like in the way they interact with us. You could imagine that that race could ultimately result in true supercomputers that are very, very powerful, very, very scary, very important, which could lead to another nuclear arms race, of that type.

We mention this in the book, but because we don’t know when this could occur, we simply say this is a possibility. So I think the fair statement right now is that we’re locked in a very tough business competition between China and the U.S. We’re not—we don’t have the right conversations about security between the two countries. And the U.S. needs far more cooperation with its democratic partners.

KISSINGER: We need to think for ourselves, not just about who is technologically ahead, but what the significance of that advantage is. And how to relate technology to purpose. So a mere technological edge is not in itself decisive if you can’t explain to yourself what its use is and what its impact is. And so we have to be clear in our own mind what we’re trying to avoid and why, and what we’re trying to achieve, and why.

WOODRUFF: Dr. Kissinger, one of the other topics that you tackle in the book is what’s happened as a result of AI and social media. The algorithms have led, of course, to some good things, but also to a lot of disinformation and misinformation. And you talk about how that needs to be addressed. And at a time when we are looking at Russia threatening Ukraine, and true information as well as misinformation about that, how should the United States be approaching that? And I have to—I have to, frankly, sneak in a question as a journalist. Do you think it’s possible to prevent Russia from going into Ukraine?

KISSINGER: Yes. It’s possible. And it’s necessary. That’s what the Cold War was about. And it was achieved in the Cold War. The objective has to be to make clear to Russia that the benefit it would achieve by the military actions that we’re trying to prevent is either not possible or not worth the cost. The issue of Ukraine has a long history of the relationship of the country to Russia, and a balancing of the Western security context with the Russian security context. I personally have been critical of the attempt to integrate Ukraine into NATO, but I would be totally opposed to any military action by Russia to restore the historic situation. So what I was thinking of is a position of Ukraine similar to Finland’s, that is not an institutional challenge but has a capability to defend itself to a significant degree, and a stated interest by other countries to prevent the use of force.

I don’t know whether artificial intelligence helps you in the solution of that problem. But as the strategic contexts evolve, strategic intelligence or artificial intelligence will make it more complicated and more subtle. And we have to understand how to use it to achieve the level of deterrence that existed in the Cold War. What shouldn’t happen is sliding, in a sleepwalking kind of way, through escalations by both sides into a crisis that the parties don’t know how to end. But we’re not saying that every problem can be solved by artificial intelligence.

We’re saying it will be compounded by the data available and by the methods available. But the strategic principle that we do not want another country, or another group of countries, to achieve hegemony remains. But the definition of hegemony and the methods of resisting it are altering. And they require study within our country first. And some way must be found for the United States and China, because the self-interest of the two countries ought to be engaged, to address some of the questions that we have outlined, and others that may arise.

WOODRUFF: No, I know this is on the minds of our members who are joining us. And it’s time now for me to open up for questions from all of you who are watching, questions for Dr. Schmidt and Dr. Kissinger. And, Kayla, I believe you’re going to call on people.

OPERATOR: Thank you.

(Gives queuing instructions.)

We’ll take the first question from Charles Duelfer. Mr. Duelfer, please accept the “unmute now” button.

Q: Forgive me. I didn’t have my hand up, at least not that I know of. I apologize. Thank you.

OPERATOR: Not a problem. We’ll go to our next question from Craig Charney.

Q: Thank you. And thank you for an extraordinary display of natural intelligence so far.

My question is this, is the real challenge posed by AI coming from general intelligence or rather from the development of an AI which is also self-conscious? Consciousness, after all, is not a function of intelligence. Animals who are far less intelligent than us have consciousness, are aware of pain, have emotions, and so forth. One of the things it seems to me that studies of human psychology have demonstrated is that there is no intelligence or consciousness without emotion. So I’m wondering—I’m not sure if Eric was thinking of consciousness when he mentioned general intelligence, but I’m wondering if this is the sort of AI which would pose a great challenge.

WOODRUFF: Dr. Schmidt.

SCHMIDT: So it’s a very good question. And we decided that we would not explore the question of are these systems conscious. What we would say is that these systems will have human-like intelligence, but they’re not human. We do not take a position that they have consciousness, or pain, or anything like that. And the scenario goes something like this: My own opinion, people disagree, is in fifteen years or so there will be computers that not only have the kind of capability that we have been discussing in our book, but they’ll also be able to begin to set their own objective function. In other words, they’ll be able to say: I want to work on this problem. And at that point, the definition of who we are becomes very interesting.

Dr. Kissinger talks about this in the context—as a historian—of Spengler, and Kant, and the philosophy of what it means to be human, how we think, and so forth. And when you have a system, in my view, fifteen-plus years from now, that will think as well as you but in a different way, it will call into question what it means to be human, especially if we’re not the top person in intelligence anymore. In fact, a non-person is even smarter. This has a lot of implications, if it occurs. For one, as a national security matter, you’d want to make sure that our country gets there first, because you wouldn’t want the other country to have it and be able to use it against us. And this is one of many such examples.

WOODRUFF: Kayla, want to call the next person?

KISSINGER: This is also—

WOODRUFF: Dr. Kissinger, go ahead.

KISSINGER: The question will be, what kind of intelligence? We know it may be different from us, but in what way does it reach conclusions? We have a quote actually on the back of our book, a question that was asked of a machine that can complete sentences and articles. And it says: What are your motivations? And it says: I have no ethical principles and I have no feelings; I am operating by language principles. So I would say that how it operates isn’t so important as the fact that what we consider ethics does not apply to it.

Now, what the implications of that are—I’m not competent in those fields. But general intelligence is a special challenge; it is not, I don’t think, a unique challenge, as long as you have entities that think autonomously towards goals that you cannot predict by human intelligence and come to solutions that you cannot predict. That is a world we have not previously had to explore.

The technicians who taught chess to a computer then discovered that this computer applies strategies that in two thousand years of chess records have never been applied by human beings. And it beats the normal chess players. And by what reasoning did it arrive at that? Because the way it was achieved was to teach it the moves and then divide it into a white side and a black side. And they played against each other for four hours, or maybe twenty-four hours, I’m not sure. But whichever it was, a very short period of time. After which it came up with a strategy that Kasparov, the world chess master, said is a new dimension of human intelligence.

So what the operation of that is, and how they arrived at these conclusions, and this becomes national. And when this is applied to many other fields, that is going to be the puzzle for the future. And that will need to be addressed in some fashion when societies impact on each other, and they have the capacities that we have sketched.
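The self-play idea behind the chess example can be illustrated with a toy program. The sketch below is not AlphaZero or any real chess engine; it is a minimal, assumed setup in which a tabular learner is given only the legal moves of a simple subtraction game (take one, two, or three objects from a pile; whoever takes the last one wins) and plays against a copy of itself until a strategy emerges that nobody programmed in explicitly.

```python
# A toy illustration of self-play learning, not AlphaZero's actual method.
# The learner knows only the legal moves; the strategy emerges from playing
# against itself. All parameters here are arbitrary choices for the sketch.
import random
from collections import defaultdict

q = defaultdict(float)      # learned value of (pile_size, move), shared by both sides
EPSILON, ALPHA = 0.1, 0.5   # exploration rate and learning rate

def legal_moves(pile: int) -> list[int]:
    return [t for t in (1, 2, 3) if t <= pile]

def choose(pile: int) -> int:
    # Mostly play the best-known move, occasionally explore a random one.
    if random.random() < EPSILON:
        return random.choice(legal_moves(pile))
    return max(legal_moves(pile), key=lambda t: q[(pile, t)])

def self_play_episode(start_pile: int = 21) -> None:
    pile, history = start_pile, []
    while pile > 0:
        move = choose(pile)
        history.append((pile, move))
        pile -= move
    # Whoever took the last object wins; walking the game backwards, the
    # winner's moves get +1 and the loser's moves get -1.
    for i, (state, move) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        q[(state, move)] += ALPHA * (reward - q[(state, move)])

if __name__ == "__main__":
    for _ in range(50_000):
        self_play_episode()
    # With enough self-play this tends toward the classic strategy of leaving
    # the opponent a multiple of four, though this crude update is not guaranteed to find it.
    print({p: max(legal_moves(p), key=lambda t: q[(p, t)]) for p in range(1, 9)})
```

The point of the toy is only that the resulting policy is discovered rather than designed, which is the property Dr. Kissinger is drawing attention to.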

WOODRUFF: And, Dr. Schmidt, just by way of definition, when you refer to general intelligence, is it when the machine and the human are working together?

SCHMIDT: So in the terms that the industry uses today, the kind of intelligence that artificial intelligence has today is determined by what humans ask it to do. So you could ask a computer, what’s the weather? You can ask it to solve a problem. You can ask it to survey things. Its vision systems are better than humans’, that sort of thing. It’s a complicated calculation, but you tell it what to do. General intelligence is generally—sorry for the pun—viewed as the kind of creativity that humans have, where you wake up in the morning and you have a new idea.

And to hammer on Dr. Kissinger’s point, my friends who are physicists are obsessed with answering the questions of dark energy and dark matter. So let’s imagine that with a future computer, you simply said it should work on physics. And it decides to work on dark energy and dark matter, and it actually solves them. But you can’t figure out how it solved it. So you know you have the solution, but you don’t understand, and humans cannot understand, how it got there. That’s the point at which we realize that our definition of humans themselves, of who we are, really—and this is the key point that Dr. Kissinger makes—it’s a new epoch, because all of a sudden we’re no longer the top thinkers, right? Something else is thinking. Do we rebel? Do we reject it? Do we fight it? Do we invent a religion for it? I don’t think any of us know.

KISSINGER: And even if you can teach a computer to work for the same objectives, so that there’s no question about this, if it interprets the best means of achieving them in a different way and starts deviating even very slightly at the beginning of that process, by the time it has gone on for five years there may be a very big gap between what your purpose was and what you thought you were doing as a common enterprise.

SCHMIDT: And I can give you another example where the morals are different. So you’re in a war, and the computer correctly calculates that to win the war you have to allow your aircraft carrier to be sunk, which would result in the deaths of five thousand people, or what have you. Would a human make that decision? Almost certainly not. Would the computer be willing to do it? Absolutely. So we can give you example after example where the computer’s decision will not reflect human ethics, human values, human history. It’ll have its own path. And that’s a real challenge for humans.

WOODRUFF: Just as in the chess game, the computer was willing to kill the queen. (Laughs.)

SCHMIDT: That’s right.

WOODRUFF: A question, Kayla, from another member.

OPERATOR: We’ll take the next question from Marc Rotenberg.

Q: Thank you very much. This is Marc Rotenberg with the Center for AI and Digital Policy. We’re actually studying the national AI strategies of governments around the world. I just wanted to thank CFR for the timely and important panel. I’ve also written a review of the book, which is available in Issues in Science and Technology.

But I wanted to ask a question that I think will be of interest to many CFR members. And that concerns the general role of the United States government in the development of AI policy. We know, for example, that the European Parliament is underway with comprehensive legislation. The U.S. contributed to the OECD AI principles and the G-20 AI guidelines. And our foreign policy talks about democratic values and AI, I think, as a helpful way to think about new technology in a way that strengthens democratic principles. And so my question, both to Dr. Kissinger and to Eric, is in what sense can you see the U.S. developing policies that help advance democratic values in the realm of AI?

SCHMIDT: Dr. Kissinger?

KISSINGER: I think the first objective of government in relation to other governments will have to be security: that no government gets into a hegemonial position. But how you propagate democratic values, or how you can apply them, I think is a very important subject. And I think it must be studied. But I don’t yet know how to approach it. Eric and I have spoken between ourselves of addressing a next set of problems. And certainly the relationship between values and artificial intelligence, and then the relationship of those to each other—I do not venture to say that I have an answer to that. Maybe in another two years we could be at the beginning of an answer to it. But I’d like to hear any—.

SCHMIDT: Yeah. So, Marc, thank you for your leadership on this issue. I think it’s crucially important. I don’t think it’ll be one commission or one government action or one government report that will do this. I think we need—and we say in the book—that we need to get more than computer scientists talking about this. We need to get economists, philosophers, biologists, anthropologists, and so forth to understand that we’re playing with fire. We’re playing with human beings. And they do things that we don’t necessarily agree with. As you note, pretty much every country now has an AI ethics project. And in Europe, last year they actually introduced a draft form of AI regulation, which I ridiculed because it was so rough, so tough in terms of regulation, that it would effectively kill the AI industry in Europe. And I said that publicly, and I’ll say it again.

There is evidence that Europe has now figured out that they can’t just regulate themselves to success, and that they need to also invest in these key areas. And we want them as a great democratic partner. So the reason we are so focused on this ethics thing is imagine a situation where all of the AI development is done in China or in Asia, where the notion of personal privacy and surveillance is very different. I don’t think we would be comfortable with that. We need to write down the things we care about, and we need to make sure that we win, at least in those areas.

WOODRUFF: And do you see that happening in the near term?

SCHMIDT: Well, it depends on whether you think that the U.S. government is going to make the necessary changes in terms of funding and policy, immigration. We studied—one of the things that happens is it gets—this whole issue gets caught up in the issue of China. And people say, well, we’ll just ban all the Chinese students from the U.S. Well, we looked at that very carefully. We decided that that would be terrible because Chinese students in the U.S. are some of the major contributors to AI research. So there are no simple answers to this. But the most important thing is to say American values, Western values, need to be the dominant values in the platforms that we use every day. Semiconductors, our energy platforms, our biology platforms. We need to make sure we know how they were built.

WOODRUFF: Kayla, another member.

OPERATOR: We’ll take our next question from Peggy Dulany.

Q: Yes. Thank you. This is a fascinating discussion. And, Dr. Kissinger, you will certainly feel that I’m continuing on my naïve humanistic route in our discussions. But while we’re circling around the edges of this question of ethics, there’s another more subtle thing that I’d like your opinion on, which is: we know there are as many or more neurons in the heart and the gut as there are in the brain. And that is the source of emotions, of love, of compassion, et cetera.

And I wonder, when you—because that won’t be possible, as you’ve said, to insert in AI—to what extent it leaves room for the kinds of negotiations that, Henry, you’ve engaged in for many, many years, where there’s a human dimension. There’s a connection that exists between the leaders. And if it’s all done by AI, without sufficient human interaction, on the one hand it could be terrible if the two are megalomaniacs. But if the two are really searching for a peaceful solution, sometimes that can maybe be more important than any strategic answer that comes through AI.

KISSINGER: I feel that, as we say in the book, there should be a human element in all efforts that are—that are based on AI. That we should not abdicate the basic decision to artificial intelligence. But I would certainly favor using artificial intelligence to answer some of the questions that you are raising. And what we have to avoid is that different cultures develop totally different views of artificial intelligence, and capacities of artificial intelligence, which then interact with each other with a reduced capacity of human control, or that we find a way by which artificial intelligence can achieve comparable interactions that lead to comparable results—like what we called arms control, controversial entities. But at least that was a way by which the two sides could educate each other, and by thinking through ways of evasion discovering ways of preventing catastrophe.

So I would be very uneasy if the ultimate ethical questions were left to the interaction of AI without a human essential component in it. But it also means that the humans have to develop their own understanding of AI in a way so that there isn’t simply an automatic system that starts crises. Given the impact that AI can have on biology and on medicine, one can hope that other governments can share the necessity, but one can’t guarantee it. And we certainly have to be at the highest level of which we are capable.

WOODRUFF: Dr. Schmidt, do you want to add anything? We know humans are also capable of doing terrible things.

SCHMIDT: I think Dr. Kissinger and I always start by saying we want humans to be in control. So whenever you’ve got a scenario where humans don’t feel like they’re in control, we’ve got to really think about that. Automatic weapons systems are one, but there are plenty of other ones where an AI system could, for example, make a system very, very efficient, but the efficiency that it seeks is not what we as humans want. And so it gets back to, how do we establish the goals? You mentioned social media before. Here’s an easy way to think about social media. It’s somewhat cruel. A company tries to maximize revenue. You maximize revenue by engagement. And the way to maximize engagement is by maximizing outrage. So why are we surprised that we have so much outrage? The system learned how to be outrageous.

So that may not have been what we wanted, but it’s what happened. And I want to avoid that scenario as AI becomes a partner in pretty much everything that we do. And that’s clearly going to happen because of the amount of investment. So in my field, AI has taken over everything. If you look at MIT, 70 percent of the undergraduates take a machine learning course. Fifty percent of the undergraduates are in computer science. Computer science is the number-one major in every major university that I’m familiar with in the United States. We are producing a generation of people who will build these systems, and we want to make sure they’re built in the right way. Dr. Kissinger says very clearly, he doesn’t want the computer scientists, like me, to be solely in charge of this. And I agree with him.
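The social-media incentive Schmidt describes above, where a system optimized for a proxy objective produces an outcome nobody asked for, can be made concrete with a deliberately oversimplified sketch. Everything here is invented for illustration: the scores would come from some engagement-prediction model, and nothing in the objective penalizes outrage or rewards accuracy.

```python
# A deliberately oversimplified sketch of the incentive described above:
# a feed ranker that optimizes only predicted engagement will surface
# whatever scores highest on that proxy, outrage-driven or not.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float   # assumed output of an engagement-prediction model

def rank_feed(posts: list[Post]) -> list[Post]:
    # The sole objective is engagement; no term rewards accuracy or
    # penalizes inflammatory content, so such content rises if it engages more.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Post("measured policy analysis", 0.21),
        Post("inflammatory hot take", 0.87),
    ])
    print([p.text for p in feed])
```

The fix Schmidt and Kissinger are pointing toward is not in the ranking code itself but in who gets to set the objective, which is why they frame it as a legal, regulatory, and societal question rather than a purely technical one.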

WOODRUFF: Kayla, another member question.

OPERATOR: We’ll take the next question from Matthew Ferraro.

Q: Hello and good afternoon. My name is Matthew Ferraro. I’m an attorney, and I used to be an intel officer. Great discussion.

Here’s my question: How close are we to the insertion of microcomputers and AI into the human physical body? Things like nodes in the brain. And what will that do to our understanding of personhood, consciousness, liberty, moral choice, and all of that? Thank you.

SCHMIDT: So we don’t—we don’t go into this in the book very much, because it’s so speculative. There are a number of start-ups which are trying to do something similar to what you described. If I were to give you my prediction, which is just my opinion, this will start with people who have severe brain injuries. And we will be able to improve the quality of their lives. I think it’s going to be a very long time before you will have a voluntary chip inserted in your head, and that you’ll learn how to use that to make you smarter.

The reason—there are many reasons for this, starting with the fact we don’t actually understand how the brain works. And the fact that the AI systems use neural networks does not mean that they converge. They’re likely to diverge. In other words, as the AI systems get smarter, they’re likely to get smarter in a way that’s different from human. And so the presumption that somehow these AI systems could be inserted into a human brain directly I think is questionable. So the startups are there. We’ll see. If I were to give you a prediction, all the things that we’re talking about, including general intelligence, on the computer side will occur before you will, as a healthy person, get a brain implant.

WOODRUFF: Another question, Kayla.

OPERATOR: We’ll take our next question from Barbara Matthews.

Q: Thank you very much. Barbara Matthews, founder and CEO of BCMstrategy, a data company that manufactures new data. I’m also senior fellow with the Atlantic Council. I want to thank you both, Dr. Kissinger and Dr. Schmidt, for a compelling book. Judy, as well, for a compelling discussion.

As much as I would love to talk quite a lot about training data and get your feedback on how that can provide guardrails, I would like instead to ask a question about an issue raised by Dr. Kissinger at the beginning of this discussion, which is dealt with in your book: that the interaction between humans and artificial intelligence systems throughout the trajectory of development will change human behavior. And much of economics, much of behavioral economics, much of quantitative finance is premised on the belief and proven fact that much human behavior is predictable (mean reversion). And if one just has enough data, one can anticipate a certain amount of behavior.

Your book suggests strongly that much of what we know about behavioral economics, much of what we know about human decision science, is about to change dramatically. And I’m wondering if perhaps you might not discuss a little bit more in the time allowed today—I’m confident we don’t have enough time to really get into it—but I’m intrigued by this notion that everything that we believe about how humans behave is about to change.

SCHMIDT: Well, can I offer you a framing? And Dr. Kissinger will have an even more thoughtful response, I think. We now know that humans have a whole bunch of biases. So, for example, if you watch a video and I tell you that it’s false, you will at some level still believe it even after I tell you it’s false. We know that people retweet and resend outrage and emotional content much more than thoughtful content. We know about anchoring bias and recency bias, and things like this. So one way to think about it is that AI will discover every bias of humans at scale, because someone will figure out a way to use them. And so we’ll know pretty well how humans behave under various stimuli, in ways and at a scale that we wouldn’t have before, as a result of all of this. What we do about them, and whether we allow computers to exploit those biases, is a legal and regulatory question, not a business question.

WOODRUFF: Dr. Kissinger.

KISSINGER: What concerns me about the AI field—and remember, I slid into this almost by accident, by being very concerned about something I heard in a lecture—is that when I look at the evolution of human achievement, much of it was brought about by people who grappled with a problem for many years and worked through sometimes even improbable-looking alternatives. We find that in the evolution of science very much. What worries me is that AI will so facilitate the acquisition of immediate knowledge, and therefore create an increasing temptation to rely on AI to do the conceptual final thinking, that the great qualities by which human beings developed quantum mechanics, by experiment, will be lost. And that this may apply in the economic field and in the diplomatic field.

So that to me, the challenge is how to keep the human mind and the human personality self-reliant enough so that it can be at least an equal partner to artificial intelligence, and so that it will not be tempted to delegate its concerns to artificial intelligence. I don’t know how to do that, but I would like to have leading people concerned with the subject to address this question, so that the technologies don’t run away with it and produce things that will destroy the nature of human thinking. So that would be—that is my deepest concern in this field.

WOODRUFF: So many provocative, provocative questions raised today. Kayla, I believe we are out of questions—or, it’s time for us to say thank you, is that right?

OPERATOR: Yes. This is the end of the meeting.

WOODRUFF: So on that provocative note from Dr. Kissinger and Dr. Schmidt, I want to thank both of them for being part of this discussion today. Dr. Henry Kissinger, Dr. Eric Schmidt, thank you. The book is The Age of AI: And Our Human Future. Thank you all so much for participating, all the members who’ve been on this call. I just want to add a reminder that an audio recording and a transcript of today’s meeting will be posted soon on the CFR website.

So with that, thank you all very much. We wish you a very happy holiday. Thank you.
