The Future of Artificial Intelligence and its Impact on Society

Friday, November 3, 2017
Speaker
Ray Kurzweil

Inventor, Author, and Futurist

Presider
Nicholas Thompson

Editor in Chief, Wired

THOMPSON: All right. Hello, everybody. Welcome to the closing session of the Council on Foreign Relations 22nd Annual Term Member Conference with Ray Kurzweil.

I’m Nicholas Thompson. I will be presiding over today’s session.

I’d also like to thank Andrew Gundlach and the Anna-Maria and Stephen Kellen Foundation for their generous support of the CFR Term Member Program. I was a term member a couple years ago. I love this program. What a great event. I’m so glad to be here. (Laughter.)

I’m also glad to be here with Ray. All right, I’m going to read Ray’s biography, and then I’m going to dig into some questions about how the world is changing, and he will blow your mind. (Laughter.)

So Ray Kurzweil is one of the world’s leading inventors, thinkers, and futurists, with a 30-year track record of accurate predictions. It’s true. If you look at his early books, they’re 96 percent, 98 percent accurate. Called “the restless genius” by The Wall Street Journal, “the ultimate thinking machine” by Forbes magazine, he was selected as one of the top entrepreneurs by Inc. magazine, which described him as the rightful heir to Thomas Edison. PBS selected him as one of the 16 revolutionaries who made America. Reading his bio, I was upset that he quotes all of my competitors, so I’m going to add a quote from Wired magazine, which is “His mission is bolder than any voyage to space”—Wired magazine. (Laughter.)

Ray was the principal inventor of the first CCD flat-bed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition.

Among Ray’s many honors, he received a Grammy Award for outstanding achievements in music technology, he is the recipient of the National Medal of Technology, was inducted into the National Inventors Hall of Fame, holds 21 honorary doctorates, and has received honors from three U.S. presidents. Amazing.

Ray has written five national bestselling books, which you should all buy immediately if you’re on your phones, including New York Times bestsellers “The Singularity Is Near” and “How To Create A Mind.” He is co-founder and chancellor of Singularity University; and director of engineering at Google, heading up a team developing machine intelligence and natural language understanding.

He also, as I learned walking here, is the father of one of my sister’s friends from grade school, who referred to him as the cool dad with the electric pianos, so. (Laughter.) Welcome, Ray.

KURZWEIL: Great to be here.

THOMPSON: All right. So some of you are probably familiar with his work. Some of you may not be. But let’s begin, Ray, by talking about the law of accelerating returns, what that means for technology. Lay out a framework for what’s about to happen, and then we’ll dig into how foreign policy is going to be turned on its head.

KURZWEIL: Sure. Well, that’s the basis of my futurism. In 1981 I realized that the key to being successful as an inventor was timing. The inventors whose names you recognize, like Thomas Edison or my new boss—my first boss, Larry Page—were in the right place with the right idea at the right time. And timing turns out to be important for everything from writing magazine articles to making investments to romance. You’ve got to be at the right place at the right time.

So I started with the common wisdom that you cannot predict the future, and I made a very surprising discovery. There’s actually one thing about the future that’s remarkably predictable, and that is that the price, performance, and capacity not of everything, not of every technology, but of every information technology follows a very predictable path. And that path is exponential, not linear. So that’s the law of accelerating returns. But it bears a little explanation.

So I had the price/performance of computing—calculations per second per constant dollar—going back to the 1890 Census, through 1980, on a logarithmic scale, where a straight line is exponential growth. There was a gradual second level of exponential, the rate of exponential growth itself slowly increasing, but it was a very smooth curve. And you could not see World War I or World War II or the Great Depression or the Cold War on that curve. So I projected it out to 2050. We’re now 36 years later. It’s exactly where it should be. So this is not just looking backward and overfitting to past data; this has been a forward-looking projection that started in 1981. And it’s true of many different measures of information technology.
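A minimal sketch of that curve in Python: a quantity that doubles on a fixed cadence traces a straight line on a logarithmic axis, which is why a smooth fit could be projected decades forward. The base value and two-year doubling period here are illustrative assumptions, not Kurzweil’s actual dataset.

```python
# Minimal sketch (not Kurzweil's dataset): exponential price/performance
# (calculations per second per constant dollar) is a straight line in log space.
# base_value and doubling_years are illustrative assumptions.
import math

def price_performance(year, base_year=1890, base_value=1e-5, doubling_years=2.0):
    """Hypothetical calc/sec per constant dollar, doubling every doubling_years."""
    return base_value * 2 ** ((year - base_year) / doubling_years)

for year in (1890, 1950, 1981, 2017, 2050):
    v = price_performance(year)
    # log10(v) grows linearly in year, so the curve plots straight on a log axis
    print(f"{year}: {v:9.3e} calc/s/$   log10 = {math.log10(v):+7.2f}")
```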

And the progression is not linear. It’s exponential. And our brains are linear. If you wonder why we have a brain, it’s to predict the future. But the kind of challenges we had, you know, 50,000 years ago when our brains were evolving were linear ones. We’d look up and say: OK, that animal’s going that way, I’m coming up the path this way, we’re going to meet at that rock. That’s not a good idea. I’m going to take a different path. That was good for survival. That became hardwired in our brains. We didn’t expect that animal to speed up as it went along. We made a linear projection.

The primary difference between myself and my critics, and many of them are coming around, is we look at the same world. They apply their linear intuition. For example, halfway through the Genome Project, 1 percent of the genome had been collected after seven years. So mainstream critics said: I told you this wasn’t going to work. Here you are, seven years, 1 percent. It’s going to take 700 years, just like we said. My reaction at the time was, well, we finished 1 percent, we’re almost done, because 1 percent is only seven doublings from 100 percent, and it had been doubling every year. Indeed, that continued. The project was finished seven years later. And that’s continued since the end of the Genome Project. That first genome cost a billion dollars. We’re now down to $1,000. And every other aspect of what we call biotechnology—understanding this data, modeling it, simulating it, and, most importantly, reprogramming it—is progressing exponentially.
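The doubling arithmetic is easy to verify. A short sketch, assuming only the 1 percent starting point and the annual doubling described above (2^7 = 128, so seven doublings more than cover the remaining 99 percent):

```python
# Check of the claim that 1 percent is only seven doublings from 100 percent.
fraction_done = 0.01   # 1 percent of the genome collected after seven years
years = 0
while fraction_done < 1.0:
    fraction_done *= 2  # genomic data collected kept doubling every year
    years += 1
print(years)  # -> 7: the project finishes seven years later, as it did
```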

And I’ll mention just one implication of the law of accelerating returns, because it has many ripple effects and it’s really behind this remarkable digital revolution we see, which is the 50 percent deflation rate in information technologies. So I can get the same computation, communication, genetic sequencing, or brain data as I could a year ago for half the price today. That’s why you can buy an iPhone or an Android phone that’s twice as good as the one from two years ago for half the price. You put some of the improved price performance into price and some of it into performance. So you asked me just a few minutes ago a question that I was also asked by Christine Lagarde, head of the IMF, at her annual meeting recently: How come we don’t see this in the productivity statistics? And that’s because we factor it out. We put it in both the numerator and the denominator.
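A rough sketch of how that factoring-out works; the particular split between lower price and higher performance below is an illustrative assumption, not a measured figure:

```python
# 50 percent deflation: the same computation costs half as much each year.
# Vendors split the gain between a better product and a lower price; GDP
# records only the dollars paid, so the capability gain drops out.
cost_of_fixed_computation = 1.0
performance = 1.0   # capability of the product actually sold
price = 1.0         # sticker price of that product
for year in range(2):
    cost_of_fixed_computation /= 2   # law of accelerating returns
    performance *= 1.6               # assumed: most of the gain goes here
    price *= 0.8                     # assumed: the rest lowers the price
    # note 1.6 / 0.8 == 2.0, matching the annual doubling of price/performance
print(cost_of_fixed_computation, performance, price)
# -> 0.25 2.56 0.64: after two years the phone is ~2.5x better and ~36%
#    cheaper, yet measured "economic activity" only shrinks.
```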

So when a girl in Africa buys a smartphone for $75, it counts as $75 of economic activity, despite the fact that it’s literally a trillion dollars of computation circa 1960, a billion dollars circa 1980. It’s got millions of dollars of free information apps, just one of which is an encyclopedia far better than the one I saved up for years as a teenager to buy. All of that counts for zero in economic activity because it’s free. So we really don’t count the value of these products. And the people who compile these statistics say, well, we take into consideration quality improvement in products, but they’re really using models built on the old linear assumption.

So then Christine said, yes, it’s true, the digital world’s amazing. We can do all these remarkable things. But you can’t wear information technology. You can’t eat it. You can’t live in it. And my next point is that all of that’s going to change. We’ll be able to print out clothing using 3-D printers. Not today. We’re kind of in the hype phase of 3-D printing, but in the early 2020s we’ll be able to print out clothing. There’ll be lots of cool open-source designs you can download for free. We’ll still have a fashion industry, just like we still have music and movie and book industries: a coexistence of free, open-source products, which are a great leveler, and proprietary products. We’ll be able to create food very inexpensively using vertical agriculture: hydroponic plants for fruits and vegetables, in-vitro cloning of muscle tissue for meat. The first hamburger to be produced this way has already been consumed. It was expensive. It was a few hundred thousand dollars, but—(laughter)—but it was very good. (Laughter.)

THOMPSON: A free side—that’s, like, what it costs.

KURZWEIL: But that’s research costs. So it’s a long discussion, but all of these different resources are going to become information technologies. A building was put together recently, as a demo, using little modules snapped together Lego-style, printed on a 3-D printer in Asia. They put up a three-story office building in a few days. That’ll be the nature of construction in the 2020s. 3-D printers will print out the physical things we need. She said, OK, but we’re getting very crowded. Land is not expanding. That’s not an information technology. And I said, well, actually, there’s lots of land. We’ve just decided to crowd ourselves together so we can work and play together. Cities were an early invention. We’re already spreading out with even the crude virtual and augmented reality we have today. Try taking a train trip anywhere in the world, and you’ll see that 97 percent of the land is unused.

THOMPSON: All right. So lots coming. (Laughter.) Let’s talk about intelligence. So, like, the phone in my pocket: it’s better than I am at math. It’s better than I am at Go. It’s better than I am at a lot of things. When will it be better than I am at holding a conversation? When will the phone interview you instead of me?

KURZWEIL: We do have technologies that can have conversations. I mean, my team at Google created Smart Reply, you know.

THOMPSON: And when you—Ray actually answers all of your emails, is another way to describe his job at Google. Which is amazing.

KURZWEIL: Yeah. So we’re writing millions of emails. And it has to understand the meaning of the email it’s responding to, even though the proposed suggestions are brief. But your question really is a Turing-equivalent question. It’s equivalent to the Turing test. And I’m a believer that the Turing test is a valid test of the full range of human intelligence. You need the full flexibility of human intelligence to pass a valid Turing test. There are no simple natural-language-processing tricks you can do to achieve that. The test is whether a human judge can tell the difference between talking over what Turing called teletype lines, basically instant messaging, with an AI versus a human. And if the human judge can’t tell the difference, then we consider the AI to be at human levels, which is really what you’re asking.

I’ve been—that’s been a key prediction of mine. I’ve been consistent in saying 2029. In 1989, in “The Age of Intelligent Machines,” I bounded that between the early 2020s and the late 2030s. And in “The Age of Spiritual Machines,” in ’99, I said 2029. The Stanford AI department found that daunting, so they held a conference. And the consensus of the AI experts at that time was hundreds of years. Twenty-five percent thought it would never happen. My view and the consensus view, or the median view, of AI experts have been getting closer together, but not because I’ve been changing my view. (Laughter.) And in 2006 there was a Dartmouth conference called AI@50, celebrating the 50th anniversary of the 1956 Dartmouth conference where artificial intelligence got its name, from John McCarthy and Marvin Minsky, who was my mentor for over 50 years.

And the consensus then was 50 years. So at that time I was saying 23 years. We just had, actually, an AI ethics conference at Asilomar, patterned on the successful biotech Asilomar conference of 40 years earlier. And the consensus view there was about 20 to 30 years. I was saying, at that time, 13. So the median view now is—I’m still more optimistic, but not by that much. And there’s a growing group of people who think I’m too conservative. A key issue I didn’t mention with the law of accelerating returns is that not only does the hardware progress exponentially, but so does the software. And that’s a long discussion. But we can see now the daunting pace of progress, with one milestone after another falling to AI. And we can talk about some of the techniques. So I’m feeling more and more confident, and I think the AI community is gaining confidence, that we’re not far off from that milestone.

THOMPSON: All right. So I’ve got 12 more years I can do this kind of thing. So anybody in the CFR booking department, please keep inviting me because I do enjoy doing these and time is limited. (Laughter.)

Let’s talk a little bit about regulation. There are a lot of people in this—

KURZWEIL: Well, let me comment on that. This is not an alien invasion of intelligent machines from Mars. We create tools to extend our reach. We couldn’t reach that fruit at that higher branch a thousand years ago, so we invented a tool that extended our reach. Who here could build this building with their bare hands? But we created machines that extend and leverage our muscles. We can now access all of human knowledge with a few keystrokes. And we’re going to literally merge with this technology, with AI, to make us smarter. It already does. I mean, these devices are brain extenders, and people really think of them that way. And that’s a new thing. People didn’t think of their smartphones that way just a few years ago. They’ll literally go inside our bodies and brains. But I think that’s an arbitrary distinction. Even though they’re outside our bodies and brains, they’re already brain extenders. And they will make us smarter and funnier. (Laughter.)

THOMPSON: Well, I could use some of that. That’s probably five years away.

So let’s talk about how to make this future go forward in the best way. So we have an opportunity here. You have 300 people who are going to be making some of the most important policy decisions. I see people from tech companies here, making big decisions at tech companies, people in Congress, people in the White House, people probably in foreign ministries. Explain the sort of—the framework for policymakers and how they should think about this accelerating technology, what they should do, and what they should not do.

KURZWEIL: Well, I mean, there has been a lot of focus on AI ethics now, on how to keep the technology safe. And it’s kind of a polarized discussion, like a lot of discussions nowadays. I’ve actually talked about both promise and peril for quite a long time. That started in the 1990s, and I did so extensively in ’99. You might remember Bill Joy’s cover story in Wired.

THOMPSON: I wasn’t there at the time.

KURZWEIL: Right. But that was a reflection on my book, “The Age of Spiritual Machines,” which had just come out, talking about the great promise and peril. And he focused on the peril, and it created quite a firestorm then. In 2006, chapter eight of “The Singularity Is Near” was about the deeply intertwined promise versus peril of GNR: genetics, nanotechnology, and robotics. Technology has always been a double-edged sword. Fire kept us warm, cooked our food, and burned down our houses. These technologies are much more powerful. It’s also a long discussion, but I think we should go through three phases, at least I did, in contemplating this. First is delight at the opportunity to overcome age-old afflictions: poverty, disease, and so on. Then alarm that these technologies can be destructive and cause even existential risks.

And finally, I think where we need to come out is an appreciation that we have a moral imperative to continue progress in these technologies, because despite the progress we’ve made—and that’s a whole ’nother issue. People think things are getting worse, but they’re actually getting better and we can talk about that. But there’s still a lot of human suffering to be overcome. It’s only continued progress, particularly in AI, that’s going to enable us to continue overcoming poverty and disease and environmental degradation, while we attend to the peril. And there’s a good framework for doing that.

Forty years ago—I alluded to this a moment ago—there were visionaries who saw both the promise and the peril of biotechnology: basically, reprogramming biology away from disease and aging, neither of which was feasible 40 years ago. So they held a conference, the Asilomar Conference, at the conference center in Asilomar. They came up with ethical guidelines and strategies for how to keep these technologies safe. So now it’s 40 years later. We are getting clinical impact from biotechnology. It’s a trickle today; it’ll be a flood over the next decade. The number of people who have been harmed either accidentally or intentionally by abuse of biotechnology so far has been zero. That doesn’t mean we can cross it off our list—OK, we took care of that one. We have to keep revising these guidelines because the technology keeps getting more sophisticated. But it’s a good model for how to proceed. And we just had our first Asilomar conference on AI ethics.

And a lot of these ethical guidelines—particularly in the case of, say, biotechnology—have been fashioned into law. So I think that’s the goal. That’s the first thing to understand. The extremes are: oh, let’s ban the technology, let’s slow it down. That’s really not the right approach. Let’s guide it in a constructive manner. There are strategies to do that. That’s another complicated discussion.

THOMPSON: Well, let’s delve into that complicated discussion, at least for a minute or two. You can imagine some rules: say, Congress requiring that anybody working on a certain kind of technology has to make their data open, or has to be willing to share their data sets, at least to allow competitive markets over these incredibly powerful tools. You can imagine the government saying, actually, there’ll be a big government-funded option, and we’re going to have something like OpenAI, but run by the government. You can imagine a huge national infrastructure movement to build out this technology, so that at least people with the public interest at heart have control over some of it. What would you recommend?

KURZWEIL: Well, I think open-source data and algorithms in general are a good idea. Google put all of its AI algorithms in the public domain with TensorFlow, which is open-source. A lot of data is open source. I think that’s a good direction to go in. I think it’s really the combination of open source and the ongoing law of accelerating returns that will bring us closer and closer to the ideals.

I remember in fourth grade my social studies teacher put on the board, “From each according to his ability, to each according to his need.” And he asked the class: Well, what’s wrong with that? And we said, oh, that’s great. And he said, well, that’s the premise of communism. And we were like, oh my God, is he a secret commie? (Laughter.) But that is a good goal. Forced collectivism was a bad strategy. (Laughter.) But I think we can get to that goal through a combination of open source and the ongoing exponential progress in both hardware and software.

So I think it’s a good direction to go in. There are lots of issues, such as privacy, that are critical to maintain. And I think people in this field are genuinely concerned about these issues. There was a lot of representation by Google and DeepMind at this Asilomar conference. It’s not clear what the right answers are. I think we want to continue the progress. But when you have so much power, even with good intentions, there can be abuses.

THOMPSON: What worries you? I mean, your view of the future is very optimistic. That’s very encouraging to hear. But what worries you?

KURZWEIL: Well, I’ve been accused of being an optimist. And you have to be an optimist to be an entrepreneur, because if you knew all the problems you were going to encounter, you’d probably never start any project. (Laughter.) But I have, as I say, been concerned about and written about the downsides, which are existential. We had the first existential risk to humanity when I was in grade school. We would have these civil defense drills where we would get under our desks and put our hands behind our heads to protect us from thermonuclear war. And it worked. (Laughter.)

THOMPSON: Mmm hmm. It’s true.

KURZWEIL: So we have new existential risks. These technologies are very powerful, and so I do worry about that, even though I’m an optimist. And I’m optimistic that we’ll make it through. I’m not as optimistic that there won’t be difficult episodes. In World War II, 50 million people died, and that was certainly exacerbated by the power of technology at that time. I think it’s important, though, for people to recognize we are making progress. There was a poll taken recently of 24,000 people in 26 countries, which asked: Has poverty worldwide gotten better or worse? Ninety percent said, incorrectly, that it’s gotten worse. Only 1 percent said, correctly, that it has fallen by 50 percent or more. There’s a general pessimism that things are getting worse, that the world’s getting more violent.

So I said, oh, this is the most peaceful time in human history. And people say, don’t you pay attention to the news? Because there was an incident a few days ago, and a few days before that. Well, our information about what’s wrong with the world is getting exponentially better. And certainly it’s far from a perfect world, but Steven Pinker documents this in his book “The Better Angels of Our Nature”: there’s been an exponential decline in the likelihood of your being killed by interpersonal or state-sponsored violence. So that’s another long discussion, but—

THOMPSON: Well, assuming people are not going to be killed, we’re going to have a Q&A in about five minutes. So I’m going to ask you one more question, which is about their futures: What should they do? They’re about to enter a world where their career choices map onto a world with completely different technology. So, in your view, what advice do you give to people in this room?

KURZWEIL: Well, it really is an old piece of advice, which is to follow your passion, because there’s really no area that’s not going to be affected or that isn’t a part of this story. My father knew he had a passion for music from a very young age. And music is relevant to this story. The neocortex is a hierarchy of modules. And we got an additional amount of neocortex 2 million years ago when we got these big foreheads. And what we did with it is put it at the top of the neocortical hierarchy. And that was the enabling factor for us to invent music. Every human culture ever discovered has music, and no other animal has music. Humor came with that additional neocortex, and language, art, science, technology, magazines. We’re going to do it again. We’re going to merge with simulated neocortex in the cloud. So, again, we’ll be smarter, and we’ll create new forms of communication that are as profound as music is. Try explaining music to an animal that doesn’t understand that concept.

So everything is really relevant. And my view is not that AI is going to displace us. It’s going to enhance us. It does already. I mean, who can do their work without these brain extenders we have today? And that’s going to continue to be the case. And people say, oh, only the wealthy are going to have these tools. And I say, yeah, like smartphones, of which there are 3 billion. I was saying 2 billion, but I just read the new statistic: it’s now 3 billion. It’ll be 6 billion in a couple of years. That’s because of this fantastic price-performance explosion. So with these technologies, only the wealthy will be able to have them at a point in time when they don’t work. By the time they work well, they’re virtually free. (Laughter.)

So, you know, find where you have a passion. Some people have complex passions that are not easily categorized. So find a way of contributing to the world where you think you can make a difference. Use the tools that are available. The reason I came up with the law of accelerating returns was literally to time my own technology projects, so I could start them a few years before they were feasible. So try to anticipate where technology is going. People forget where we have come from. I mean, just a few years ago we had little devices that looked like a smartphone, but they didn’t work very well. That revolution, and mobile apps, for example, hardly existed five years ago. The world will be comparably different in five years. So try to time your projects to meet the train at the station.

THOMPSON: All right, if Council on Foreign Relations term members do not have questions for Ray Kurzweil I will eat my shoes. And already we have half the front two rows. (Laughter.) So right here. Please say—state your affiliation, ask the question. A reminder, this meeting is on the record. So your words can and will be used against you.

Q: So Adedayo Banwo, Deutsche Bank.

With all due respect, sir, you’re so smart it’s kind of scary. (Laughter.) And I think the same holds true for the technology. So I was wondering, what sort of strategies do you envision for people in your space to explain this and make it less scary for normal people?

KURZWEIL: Well, the unknown is scary. So part of my mission is to try to explain where we’re going, and to show how it has been beneficial. I mean, human life used to be extremely difficult. If you want to read something scary, read about life 200 years ago. Thomas Hobbes described it as short, brutish, disaster-prone, disease- and poverty-filled. And there was no understanding of the germ theory of disease. People got diseases all the time. There were no antibiotics. A routine infection would throw a family into economic desperation. There were no social safety nets. Life expectancy was 37 in 1800. And even the kings and queens didn’t have amenities a century or two ago that the poor have today, like flush toilets, and refrigerators, and radio, TV, computers.

So the future really will provide an opportunity to continue to overcome human affliction and add to our creativity. There’s a lot of fear about jobs, but the movement has also been in the right direction. If I were a prescient futurist in 1900, I’d say, well, 38 percent of you work on farms and 25 percent of you work in factories. That’s almost two-thirds of the population. But I predict that in 100 years that’ll be 2 percent on farms and 9 percent in factories. And everybody would go, oh my God, we’re going to be out of work. And I’d say, don’t worry, for every job we destroy at the bottom of the skill ladder, we’re going to create more than one job at the top of the skill ladder. And people would go, oh, great. What new jobs? And I’d say, well, I don’t know, they haven’t been invented yet. (Laughter.)

Which is part of the problem. We can see the jobs going away even more clearly today, because we have better communication and knowledge. If you’re driving a truck or a car, you’ve probably heard about self-driving vehicles. And we can’t see what’s coming. If I were really prescient in 1900, I’d say, well, you’ll be creating websites and doing data analytics and apps for mobile devices. And nobody would have any idea what I was talking about. So some economists say, yes, that’s all true, but it’s slowed down in recent years. That’s not true. The whole mobile app economy, which is many different ways to make money, didn’t exist five years ago.

And, most importantly, not everybody’s thrilled with their job, but an increasing fraction of people actually get some of their self-identification and gratification and self-actualization from their employment, which wasn’t the case 100 years ago. So I try to articulate the benefits and what we can see that’s positive about the future.

THOMPSON: In the very front right here. And I’d like to say I’m glad we don’t all drop dead at 37. Thank you, Ray. (Laughter.)

Q: Hi.

KURZWEIL: Well, I gave a talk to junior high school kids—13- and 14-year-old science winners from around the country. And I said to them: If it hadn’t been for the scientific progress we’ve made, you all would be senior citizens, because life expectancy was 19 a thousand years ago.

Q: Hi. Alana Ackerson.

And I just did a doctorate looking at how the development of newer and better technology is a way that we, as humans, express faith in a better future for humanity. And so from that lens, I’d love to hear your thinking on—as intelligence is now transcending us as humans in some ways—what are your thoughts about how we are going to redefine our understanding of who we are as humans? And taking your phrase, an age of spiritual machines, who are we from a spiritual perspective? So how is this going to fundamentally change our lens on these things?

KURZWEIL: Right. Well, great question. We are the species that changes itself and changes who we are. That came with that additional neocortex 2 million years ago, when we could then conceptualize different futures. We had one other evolutionary change, which is this modest appendage: opposable thumbs. So I could imagine that I could take the branch and strip off the leaves and create a tool, and then I had the dexterity to actually carry it out. So we created a whole new evolutionary process: the evolution of technology.

You mentioned spirituality. I think this is all a spiritual process. Our theory is basically one of evolution. So what do we see in evolution? Both biological evolution and technological evolution, which is really a continuation of biological evolution—I have a chart showing the acceleration of change, which goes very smoothly from biological to technological evolution. It took hundreds of millions of years for mammals to evolve, for example, and now we have paradigm shifts in just a few years’ time.

But what happens to entities in evolution? They get more complicated. They get more knowledgeable. They get more creative. They get more beautiful. So look at how God has been described in different religious traditions. I grew up in a Unitarian church where we studied all the world’s religions. And there was a common theme. Although they all have different metaphors and different stories, God was described as an ideal, as being unlimited in those very same qualities: all-knowing, infinitely intelligent, infinitely creative, infinitely loving, and so on.

So we actually improve those very qualities at an exponential rate through an evolutionary process. We never achieve an infinite level, but we grow explosively in that direction. So we never become God, but we become more God-like in these various qualities. You could say that evolution is a spiritual process, bringing us closer to God.

And the other implication—really where that title, “The Age of Spiritual Machines,” comes from—is: what is spiritual? It’s really a word for consciousness. Our whole moral system, our sense of values, rests on the idea that consciousness is the precious thing. Conscious entities are what’s important. Non-conscious entities are only important insofar as they affect the conscious experience of conscious entities. So who and what are conscious is a key question. That’s the underlying debate in animal rights, and that’ll be the question when it comes to AI.

So, as I alluded to earlier, we are going to merge with this technology. I’d say we already have done that to some extent. Medical nanorobots will go inside our brains and connect our neocortex to the cloud. Your smartphone, even though it is itself a billion times more powerful per dollar than the computer I used when I was an undergraduate at MIT, multiplies itself again a millionfold by connecting to millions of computers in the cloud. We can’t do that directly from our neocortex. We do it indirectly through these devices. We’ll do it directly in the 2030s, and not just to do things like search and translation directly from our brains, although we’ll do that, but to actually connect to more neocortex.

So it’ll be just like what happened 2 million years ago when we got these big foreheads, and we got this additional neocortex and put it at the top of the hierarchy. That was the enabling factor for humor, and language, music, and so on. We’ll do it again. Only, unlike 2 million years ago, it won’t be a one-shot deal. We couldn’t keep growing this enclosure without making birth impossible. The cloud is pure information technology. It’s not limited by a fixed enclosure. So we will become more and more non-biological.

So people say, oh, we’re going to lose our humanity. Well, if you define human as necessarily being purely biological, I think we’re already not purely human anymore, because we’re not purely biological anymore. And we’re going to become increasingly nonbiological. But that’s who we are. I mean, that is the definition of a human: the species that changes itself, creates tools, and goes beyond its limitations.

THOMPSON: Are there any conscious entities in the back of the room? How about way back there in red? Dark hair, yeah. Maybe you’re not in red, but.

Q: Thank you. It’s more of a pink, I think. (Laughter.)

THOMPSON: My glasses aren’t enhanced enough. (Laughter.)

Q: Remo (ph) with Senator Dick Durbin’s office.

Thank you. This was the most elegant description of the robots coming that I have ever heard. And I am just curious. So much of the emphasis has been on the really lovely side of human nature, on science and exploration. And I’m curious, as we move more towards our robot partners, what about the dark side? What about war and war machines and violence? Already in some early AI experiments there has been this issue that when you take the internet in aggregate, there is a hostility, a negativity in the verbiage, that gets put into some of these machines. What is your thinking on that? What is the reaction, or what are the controls for that? And I ask this as an optimist. But at least from sitting in Washington, one of the worries about this is, how do you filter for that? And we haven’t even been able to answer that question for the internet yet.

KURZWEIL: Yeah. Well, let me come back to what I think can be done about that. But just to make an overriding point: the increase in communications, I think, has fostered democracy and peace. That started with books in the Middle Ages. And then we had other forms of communication. You could count the number of democracies a century ago on the fingers of one hand. You could count the number of democracies two centuries ago on the fingers of one finger. The world is not a perfect democracy today, but democracy has been accepted as the ideal, and people at least argue as to how they are a democracy. That’s become the model. That wasn’t the case a century ago. There wasn’t that consensus.

And the world has become more peaceful, as I just mentioned. And there’s a lot of documentation on that. It doesn’t appear to be the case, because our information about what’s wrong with the world is getting exponentially better. And there’s actually an evolutionary preference for bad news, because it was in your interest, tens of thousands of years ago, to pay attention to bad news. A little rustling in the leaves might be a predator; that was important to pay attention to. The fact that your crops were 1 percent better than last year was not critical for your survival to be aware of.

That being said, we’re learning a lot about how these platforms can be used to amplify all kinds of human inclinations, and be manipulated. And a lot of this is fairly recent information that we’re learning. But we can address it. AI learns from examples. There’s a motto in the field that life begins at a billion examples. And the best way to get examples is to learn from people. So AI very often learns from people. Not always; AlphaGo Zero just learned from itself, by playing Go games against itself. But that’s not always feasible, particularly when you’re trying to deal with more complex, real-world issues.

There’s a major effort in the field—it’s going on in all the major companies and in open-source research as well—to debias AI, because it’s going to pick up biases from people if it’s learning from people. And people have biases. So overcoming gender bias and racial bias and other types of bias can actually be a goal. As humans, we pick up biases from all of the things we’ve seen, a lot of it subconscious. We then learn, as educated humans, to recognize bias and try to overcome it. And we can have conflicts within our own minds. The metaphor that my mentor Marvin Minsky used in the title of his book, “The Society of Mind,” I think is a good one, because I very often hear the society of my own mind bickering about some issue. And there are different factions for different issues.

But there is a whole area of research to debias AI and to overcome the biases it may pick up from people. So that’s one type of research that can overcome problems with machine intelligence. And in these ways, machine intelligence can actually be less biased than the humans it learned from. Overall, though, I think that despite the promise and peril intertwined in social media, it’s been a very beneficial thing. I walk through airports, and every child over the age of two is on their devices. And it’s become a world community. And I think the generation now growing up, more so than any other generation, feels that they are citizens of the world, because they’re really in touch with all the cultures of the world.

THOMPSON: Right here in the front, Adam.

Q: Adam Ghetti with Ionic Security.

Ray, really appreciate all the work you keep doing on the singularity. But I do want to leap forward to the 2030 timeframe. We’ve got the nanobots. We’re connected. I’ve got a collective set of thought and almost infinite capacity compared to where we are today.

KURZWEIL: That won’t be till 2035.

Q: All right, 2035. We’re there. That happens. And kind of on the same progression: more information has created more peace, a better understanding of a global community versus these isolated communities. When we have that kind of much higher-bandwidth access to this extraordinary processing and intelligence infrastructure, and that deeper appreciation of everybody else and every other culture, because we have the bandwidth to absorb it and process it, what are the implications you see for nation-state structures and global governing structures when the world is truly connected that way?

KURZWEIL: I mean, we are already part of the way there. You know, there’s some issue with the retirement funds in Italy, and the whole world is not only aware of it, but it affects everyone. So we are not isolated islands, and haven’t been for a long time. And there’s an appreciation of how issues of economics or other affairs anywhere in the world have an immediate effect on us. It’s not just that we care about these other people.

And, you know, old institutions continue. There are still horses and buggies not far from here. There are still mechanical typewriters. Religion is still an institution, but it has much less effect than it did, say, a century ago. It wasn’t that long ago in human history that it really governed and had deep influence on every facet of everyone’s life. It now has a role that varies depending on the person, but it’s a much less pervasive force. The same thing’s true of nation-states. I mean, people really are increasingly identifying as citizens of the world. And I think over time nation-states will become less influential. I mean, I think we’re on that path.

THOMPSON: Wait, can I just—in the last year in this country, we have not grown closer to the rest of the world. I know a lot of people would say our democracy has not gotten better. Is this a blip in the ongoing progress of mankind coming together? Or are many people misinterpreting that?

KURZWEIL: Well, I mean, the polarization in politics in the United States and in some other places in the world is unfortunate. I don’t think it is an issue for the kinds of things that we’ve been talking about today. We’ve had major blips in the world. World War II was a pretty big blip, and actually it didn’t affect these trends at all. I mean, one of the remarkable things I noted in ’81, when I first noticed this, is: where is World War I or World War II on these graphs? So it kind of has a mind of its own. And there may be things that we don’t like in, say, certain government officials or the government. But there’s very robust discussion. We’re not in a totalitarian era where you can’t voice your views. I’d be more concerned if we moved in that direction, but I don’t see that happening.

So not to diminish the importance of government and who’s in power and so forth, but it’s at a different level. The kinds of issues we’re talking about are not really affected by these issues. There are existential risks that I worry about, that I mentioned, because technology is a double-edged sword.

THOMPSON: On the far left.

Q: Hi. Aubrey Hruby, Africa expert.

And my question is about inequality. From a human-history perspective, maybe we started out very equal as hunter-gatherers, but for most of human history there were long phases where inequality was quite high. And I’m wondering whether you think that the 20th century was an anomaly in that sense, and how the diffusion of technology is going to impact inequality if you look at Gini coefficients worldwide.

KURZWEIL: Well, I think there’s been a worldwide movement towards greater equality. Ancient societies were not equal. First of all, gender roles were very rigid and there was no concept of gender equality. My own family’s been involved in this. My daughter’s right here, but my great-grandmother started the first school in Europe that provided higher education for girls, in 1868. It was taken over by her daughter, who became the first woman to get a Ph.D. in chemistry, and who wrote about this. The title of her book is “One Life Is Not Enough,” which kind of presaged my interest in life extension. But if you talk about gender equality, we’ve come a long way.

But there are all kinds of issues. I mean, slavery was considered the norm, certainly here in the United States but also around the world, not that long ago in human history. And there have been, you know, recent issues with gay rights and gay marriage. And there’s a continual progression, I think, of human rights. In this Unitarian church I grew up in, one motto was “many paths to the truth,” so we studied all the world’s religious traditions, but they were also focused on the here and now and civil liberties. And we went to civil rights marches in the South in the 1950s. So I think we are making progress. I mean, there wasn’t even discussion of these issues not that long ago. I mean, when I was—

Q: (Off mic)—economic inequality from, like, a global perspective.

KURZWEIL: Economic equality is getting better. Poverty in Asia has fallen by over 90 percent according to the World Bank. In the last 20 years, they’ve gone from primitive agrarian economies to thriving information economies. Africa and South America have growth rates that are substantially higher than the developed world. So any snapshot you take, there’s inequality, but it’s dramatically moving in the right direction. Poverty worldwide has fallen 50 percent in the last 20 years, and there’s many other measures of that.

People talk about the digital divide, but, you know, the internet and smartphones are very strong in Africa. That’s a change just in the last few years. So we’re moving in the right direction. At any one point in time there’s grave inequality and there are people who are suffering, but the numbers are moving in the right direction.

THOMPSON: In the far, very far back, last man against the wall.

Q: Hi. My name is Lester Malpin (ph).

I guess my question is a generic one that you probably get asked every day. And I’m the father of four kids, so I think about it all the time. And that is really: we have people like yourself, but why, in our country, do we score so poorly in STEM? Why do you think that happens? And what do you think is the answer to that? Because I’m trying to get my kids prepared for 2035.

KURZWEIL: Well, it’s an interesting dichotomy. We do, on various measures, fall behind on STEM. There’s still a gender issue with STEM. The girls are right up there with the boys up through junior high school, but then they tend to drop out and we don’t fully understand the reasons for that. There seem to be powerful cultural issues at work and people are working on overcoming that, but we don’t fully understand it.

But despite all that, the United States leads in STEM. I mean, the leading high-tech companies are American. Certainly, other countries like China are moving rapidly. But we do have a kind of entrepreneurial leadership. There’s a frontier mentality that comes from the United States being a place people went to overcome oppression and to seek a better life, and that is, I think, ingrained in the American character.

And also, we are the land of all the world’s peoples, so we understand all the different markets around the world because all those people are here and that’s what makes up America. That’s not true of most other countries. There’s some diversity in some countries, but nothing like the United States.

I do think STEM literacy, meaning scientific, technological, and digital literacy, should be an important part of education. It’s shocking to me that many high schools don’t teach computer science. They should be teaching programming in elementary school. It’s a good way to think, it’s a good way to develop intellectual skills, and it’s a good way to view the world.

So I think this is a good thing for us to focus on. As a nation we’re doing well in STEM. I think just to be a world citizen, it’s important to actually have literacy in these areas.

THOMPSON: Right there in the middle.

Q: Adam Pearlman with the U.S. courts.

Taking your hypothesis of, you know, a connected, collective consciousness maybe—maybe collective, maybe not—what does that do to individuality, comparative advantage on an individual level and the concept of leadership if we’re all connected through implants or anything else?

KURZWEIL: Well, yeah, another way of asking that is: aren’t we all going to become the same if we all have access to all this knowledge and thinking capacity? I think we’ll actually become more different. Unenhanced, we’re actually pretty much the same: we all have a very similar architecture of 300 million neocortical modules, a fixed capacity. We’ll be able to delve much more deeply into specific issues or skills or areas of insight as we enhance our thinking capacity. And we can do that already, to the extent that we have these brain extenders.

It was very hard to write a book 30 years ago. My most important information technology was a roll of quarters to feed the copier in the public library. So we do have the ability to think more deeply, and we already see a tremendous diversity, for example, of musical styles. I mean, when I was a kid there was pop music and there was jazz and classical music, and that was it. Now there are hundreds of genres. So that will continue. There will be a tremendous diversity of ways you can approach subjects. And I think people will become more different when they have more capacity to think more deeply.

THOMPSON: Right back here in the front row. Last question because time always moves exponentially at the end of these, and so we are out of it.

Q: Jaime Yassif from the Open Philanthropy Project. Thank you for coming to speak with us today.

I hear from your remarks that you’re making a prediction that artificial general intelligence is 12 years out. And you’ve mentioned a couple of times that, notwithstanding your optimism, you are concerned somewhat about existential risks, so I was wondering if you could elaborate a little bit more on what you mean by that. And what is the most important thing that you think technologists should be doing to reduce those risks?

KURZWEIL: Well, I mean, existential risks are risks that threaten the survival of our civilization. So the first existential risk that humanity ever faced was nuclear proliferation. We have had the ability to destroy all of humanity some number of times over; there’s debate about what that number is, but it doesn’t really matter.

And with these new technologies, it’s not hard to come up with scenarios where they could be highly destructive or destroy all of humanity. Take biotechnology, for example: we have the ability to reprogram biology away from disease. Immunotherapy, which is a very exciting breakthrough in cancer, is just getting started, and I think it’s going to be quite revolutionary. That’s reprogramming the immune system to go after cancer, which it normally doesn’t do.

But a bioterrorist could reprogram a virus to be more deadly, more communicable, and more stealthy, and create a superweapon. And that was the specter that spawned the first Asilomar conference 40 years ago. And there have been recurring conferences to make these ethical guidelines and safety protocols and strategies more sophisticated. And so far it’s worked. But we keep making the technology more sophisticated, so we have to reinvent them over and over again.

So we just had our first Asilomar conference on AI ethics. We came up with a set of ethics which we all signed off on. A lot of them are somewhat vague. I think if you look back at the Asilomar guidelines for biotech 40 years ago it was the same thing. They’ve gotten more teeth and more specificity and more significance over time.

I think it’s an important issue to give a high priority to. We’re finding we have to build ethical values into software. A classic example is the self-driving car. The whole motive for self-driving cars is that they’ll eliminate 99 percent of the 2 million deaths a year from human drivers. But a car will get into a situation where it has to make an ethical decision: should it drive towards the baby carriage, or towards the elderly couple, or towards the wall and perhaps kill its passenger? Does it have an ethical obligation to the passenger who owns it? It can’t send an email to the software designers in that circumstance asking, gee, what should I do? It’s got to be built into the software. So those are practical issues, and there’s a whole area of AI ethics growing around this.

But how do we deal with the more existential risks, like the weaponization of AI? That is not something in the future; defense departments all around the world have been applying AI. There was a document going around asking people to agree to ban autonomous weapons, which sounds like a good idea. And the example that’s used is, well, we banned chemical weapons, so why not autonomous AI weapons? It’s a little more complicated, because we can get by without anthrax and without smallpox; it’s OK to just ban them. But an autonomous weapon is a dual-use technology. The Amazon drone that’s delivering your frozen waffles, or medicine to a hospital in Africa, could be delivering a weapon. It’s the same technology, and the horse is already out of the barn, which is just to say that how to deal with that is a more complicated issue. But the goal is to reap the promise and control the peril.

There’s no simple algorithm, no little subroutine we can put in our AIs that says, OK, put this subroutine in and it will keep your AIs benign. There are technical strategies when it comes to biotechnology and nanotechnology, but intelligence is inherently uncontrollable. You know, if there’s some entity that’s more powerful than you and me and it’s out for our destruction, the best strategy is not to get in that situation in the first place. And failing that, the next best strategy would be to get some other AI that’s even more intelligent than the one that’s out for you on your side. (Laughter.) Intelligence is inherently uncontrollable.

So my strategy, which is not foolproof, is to practice the kind of ethics and morality and values we’d like to see in the world in our own human society. Because that future society is not some invasion from Mars of intelligent machines. It is emerging from our civilization today. It’s going to be an enhancement of who we are. And so if we’re practicing the kind of values we cherish in our world today, that’s the best strategy to have a world in the future that embodies those values.

THOMPSON: All right, that is the perfect note to end on. We will all go now to an autonomous-weapon-free cocktail reception.

Thank you very much, Ray Kurzweil. (Applause.)

(END)
