Chairman and Chief Executive Officer, Sinovation Ventures; President, Sinovation Ventures Artificial Intelligence Institute; Author, AI Superpowers: China, Silicon Valley, and the New World Order
Managing Director, Insight Venture Partners
Kai-Fu Lee discusses the advances in artificial intelligence technology, the effects on the future of work, and the technology race between the United States and China.
PAREKH: Welcome, everybody, to today’s Council on Foreign Relations Malcolm and Carolyn Wiener Lecture on Science and Technology on “The Artificial Intelligence Race and the New World Order” with Dr. Kai-Fu Lee. Dr. Kai-Fu Lee is chairman and chief executive officer of Sinovation Ventures and president of Sinovation Ventures’ Artificial Intelligence Institute, and author of the new book—new bestselling book, I would add—AI Superpowers: China, Silicon Valley, and the New World Order.
A special thanks to Malcolm and Carolyn Wiener for making the lectureship possible, and we’re happy to have Malcolm in the audience today.
I am Deven Parekh, managing director at Insight Venture Partners, and I will be presiding over today’s discussion.
Thank you, Dr. Lee, and I really enjoyed the book, as a fellow venture capitalist.
LEE: Thank you, Deven. Thanks.
PAREKH: I thought we would set the context with maybe you just starting with a little bit of your background and how you got here.
LEE: Sure. I started working on artificial intelligence, or AI, thirty-eight years ago, in my sophomore year in this city, at Columbia. And I've been an AI researcher. I've developed AI products at Apple, Microsoft, Google. And Sinovation Ventures makes most of our investments in AI. So I've seen AI from all around, and that's pretty much my experience. We're a venture capital firm investing almost exclusively in China, and AI is our biggest area of investment.
PAREKH: So you were educated in the U.S., but you ended up working for, first, Microsoft in China, and then Google in China.
LEE: Right. Yes.
PAREKH: What was it like running the China sub of a U.S. company?
LEE: Well, at Microsoft Research—I started at Microsoft Research, so that was a lot of fun. Bill let us do whatever we wanted: just hire smart people, do research, and ignore all the other things. So that was different.
Google China was probably both the most exhilarating and also the most frustrating job that I've had. The exhilarating part was that it was a time of ascension for Google. That brand attracted the smartest people everywhere in the world. People were saying if you didn't get invited for an interview with Google, it meant your IQ was not high enough. So much so that in China—not Silicon Valley—I had, I think, the smartest team I've ever had, and that was the great part. And we did a lot of things. People seem to just remember the exit, but during my tenure we took Google's share from nine percent to twenty-four percent, possibly as high as it could conceivably go. And then Google pulled out.
The frustrating part was, I think, the clash between the values that the Chinese government wanted to ensure—its laws and regulations—and Google's own values, in particular the three words "don't be evil," which is a terrible—I mean, Google is a brilliant company. I love the company, but those are three terrible words to have for a corporate value, because they allow everyone to interpret them however they want. And it ended up being basically a clash of values, a lot of trust lost, and Google decided to pull out. Now, I, of course, have no idea about the final incidents, but I kind of saw the downward spiral, so I left in September '09 and Google pulled out in December '09.
PAREKH: It was a common experience for many companies besides Google to be unable to successfully penetrate China—Amazon, Yahoo, many others. Yahoo, only through its investment in Alibaba, ended up creating value there. Were there any common characteristics you saw as to why these U.S. companies had trouble succeeding in China?
LEE: OK, so first I want to talk about the past—about, you know, ten years ago—and then I want to talk about now.
LEE: Back ten years ago, I think the issue was that there was just too much presumption that the U.S., in particular Silicon Valley, was the center of the world. Silicon Valley companies easily took over Europe, Japan, South America—wherever they went—and basically viewed China as just another market. But China was just too different, not just in the regulations, but in the users, language, culture, patterns, expectations.
And also, the other very big issue was that multinationals generally send salespeople to run these organizations. And the salespeople have a quota. They have a KPI. They’re very smart about lowballing expectations and beating it, and then getting their promotion to be the corporate senior VP at headquarters.
PAREKH: Back in Silicon Valley. Often back in Silicon Valley, right.
LEE: Exactly. Their goal was to do the China thing for two years, get promoted to senior VP, and go back to the U.S., so they would just lowball the numbers and beat them. But they were up against these gladiators in China who own, you know, sixty percent of the company, who don't want to just win—they want everyone else to lose—(laughter)—and they fight for winner-take-all, no holds barred. So they were just no match for the locals. So that was ten years ago.
PAREKH: Yeah, and our experience at the time was that anytime we would call into China, it would be: this is the Amazon of China—JD; this is the Alibaba; and Baidu is the Google.
PAREKH: And every time we thought about investment, we just said, OK, is it a U.S. version that succeeded? Then maybe we should look at it. Today, obviously, that’s changed.
LEE: Yes. Very different, yeah.
PAREKH: So talk a little bit about how that market evolved, no longer a copycat market.
LEE: Right. Yeah. So, basically, the conclusion I would reach is that today is a very different story. Now, even if American companies woke up and said, uh-oh, I'm going to find someone just as gladiatorial as my Chinese competitor, there would probably be no chance. And why is that? Because China has evolved in the last ten years from copycat, to inspired-by-America-but-building-a-better-product—usually—to Chinese innovation.
I think Silicon Valley thinks that it’s—a company is only valid if it has an original brilliant idea and builds the product and shocks the world, and obviously we are all huge fans when that happens. But that is not the only way to build a product. China started as a copycat because China had a 0.2 percent internet penetration back in ’96. So it had nowhere to go but to learn from the U.S. But it is not true that once a copycat, always a copycat. The Chinese original copycats learned what they had to learn. And there are some who remain copycats for life and they are nowhere—they made some money, but they really got nowhere—but there are some people who learn how to build a product, who learn how to satisfy users, who learn how to beat competitors, who learn new business models that were not Silicon Valley-originated. And then they went on to become serial entrepreneurs who invented, I would say, a completely different way of entrepreneurship in China, and now they’re innovating using that model.
So today, you know, companies like Ant Financial, Toutiao, TikTok, Kuaishou, VIPKID, Mobike—and Pinduoduo—these are companies you've probably never heard of unless they happen to have gone public recently here, and you probably don't understand them. Any one phrase used to describe them is probably not accurate, and I wouldn't be able to explain any of them to you in less than five minutes, because these are Chinese innovations powered by a Chinese method of entrepreneurship, and these companies are worth anywhere between two billion and a hundred billion. So I think the wave of Chinese innovation is coming. This is with or without AI, which is another topic.
PAREKH: We’re going to come to that.
LEE: Yeah. With or without AI.
And I think America needs to study this method of entrepreneurship. It isn't better or worse; it's different. To Silicon Valley it would look ugly, but it works. It creates value. And I think Harvard Business School, you know, will use these for case studies. I think business schools are the ones who will be open-minded about studying these. I think Silicon Valley will look down on them and dismiss them as once a copycat, always a copycat. So I'm glad to be in New York talking to people who might listen, because, for America—think about this—a young Chinese entrepreneur is learning from the U.S. and China simultaneously. A U.S. entrepreneur who only learns from Silicon Valley is missing half the lessons. How are you going to compete in a world where you take in half as many lessons?
PAREKH: Well, let’s talk about AI, which is obviously the big topic of your book. Your Ph.D. thesis, I think it was at Carnegie Mellon, was over thirty years ago. And yet, it was probably twenty or twenty-five years later before anything really happened in AI.
LEE: Yes, yes.
PAREKH: Talk about why.
LEE: Two main reasons.
One is a big breakthrough ten years ago called deep learning. Stated simply, deep learning is a way, within one single domain, to use a huge amount of data to train a system to do things much better than people can. But keep in mind this is single domain—like playing Go, recognizing faces, serving ads—not broad intelligence. One single domain, with objective, quantitative methods. And that breakthrough was one that none of us AI researchers really expected. We always thought we'd find ways to understand the human brain. No progress on that, but this pattern recognition became amazingly good—so good that it can take off in many, many areas.
But it has some big constraints. One constraint is the single domain—I mentioned that. The other is the huge amount of data: it only works when you have huge amounts of data. So while some similar ideas had been proposed back when I did my Ph.D., there just wasn't enough data to make them work. The internet created enough data and enough tagging. Every time you use Amazon or Google and you click, that's teaching it something about you, and that data was gathered to make internet companies the first wave of great AI companies. So: deep learning and huge amounts of data.
PAREKH: You talk in your book about the four phases of AI.
PAREKH: If you could just walk through them at a high level, and kind of where are we on the evolution of each of them?
LEE: OK. Internet AI is pretty mature. All internet companies that have lots of users, lots of data—they use it. They use it to target ads, to get you better results, and to monetize off of you.
And, by the way, the AI is also trained with an objective function. What that means is, when you get all the data, you basically say: go make the default rate low for the loans, or go target ads so we get maximum ad revenue, or go show the newsfeed so that people stay the longest minutes on Facebook, OK? So the wonderful thing is you get this wonderful knob on this tool that maximizes what you want, and the more data you feed it, the better it gets.
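The "knob" Lee describes can be made concrete with a minimal sketch. Everything below is invented for illustration (synthetic borrowers, made-up coefficients), not from the talk: pick an objective—here, the likelihood of correctly predicting loan repayment—and let training adjust the parameters to optimize it.

```python
import numpy as np

# Toy objective-function "knob": a logistic model trained by gradient
# ascent on the log-likelihood of repayment. All data is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))          # three made-up borrower features
true_w = np.array([1.5, -2.0, 0.5])     # hypothetical ground-truth effects
y = (X @ true_w + rng.normal(size=1000) > 0).astype(float)  # 1 = repaid

w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))  # predicted repayment probability
    w += 0.1 * X.T @ (y - p) / len(y)   # step that improves the objective

accuracy = np.mean(((X @ w) > 0) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is the shape of the loop, not the model: swap the objective (default rate, ad revenue, minutes on a newsfeed) and the same machinery turns the knob toward it, improving as more rows of data arrive.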
So back to the waves. The second wave is business AI—so banks, insurance, hospitals, government branches. Take a bank as an example. You can use all the transactional data of all of your customers to decide credit card fraud, loan default and approval, asset allocation, and of course target each customer with the right kind of investment products that he or she may buy. So that's the business wave. All the companies that have lots of data previously thought of it as just a cost center; now it can be monetized.
The third wave is when AI can see and hear. So speech and vision: cameras in airports that protect us, that do face recognition of terrorists; Amazon Echo, that we talk to; Amazon Go as the ultimate autonomous store. And extrapolate from that to autonomous systems: hospitals that can watch patients so they don't fall, that can report when patients do fall or have an issue; autonomous fast food, with cooks and chefs and cashiers all replaced automatically, but not by robots that walk around. All you need are cameras and sensors, and then customers will go in, take things, and put them in their pockets, and you can directly charge them, because those actions were recognized and understood. So that's the third wave, with eyes and ears.
The fourth wave would be autonomous AI, with arms and legs. Not necessarily with arms that look like this, but basically the ability to move and manipulate. So that could be an autonomous factory floor assembly line, so no more humans; an autonomous dishwasher; an autonomous fruit picker. And I'm not making these up. We invested in companies that did each one of these. If you want that dishwasher, it's $300,000, so—(laughter)—I'll take a check if you want one. But for restaurants it makes sense, because if you have five dishwashers, in a year and a half you've paid it back, right? And then going on, eventually, to consumer robotics.
But, of course, the big autonomous app is autonomous vehicles. That will completely disrupt our transportation and logistics: probably dramatically increasing our GDP; increasing our savings, because we won't have to buy cars anymore; improving our efficiency, because there won't be much congestion; reducing pollution, because they'll be electric vehicles; also displacing a lot of jobs; and, oh, also improving safety, because, again, AI gets better with data. So the first day the car launches, maybe it's one percent safer than humans. But give it five years, it's probably eighty percent better than humans. Give it ten years, it's probably ninety percent better than humans. And then give it fifteen years, humans probably won't be allowed to drive, because—(laughter)—we'll be the greatest hazard to ourselves.
PAREKH: There are those who have a very dystopian view of AI as you get from phase four to phase five and phase six that we haven’t even imagined yet. I think Elon Musk even called it more dangerous than nuclear weapons.
PAREKH: What’s your perspective on it?
LEE: Well, I think a lot of the dystopians actually are not AI experts. AI experts generally see this as a super pattern recognizer.
What's misleading is that if you just read the newspaper you will see one breakthrough after another, right? First chess, and then Go; and then diagnosing lung cancer, skin cancer; and then you see these autonomous vehicles, and you see AI that's, you know, figuring out energy problems. So every day you open the paper you see more breakthroughs. And this gives certain people who write books room to exaggerate the story—to say that when these applications grow exponentially we'll be facing what's called the singularity, and then suddenly one day we wake up and they control us, or they're smarter than us.
But if you really look at what’s happening, it’s one algorithm that’s being applied maybe nearly exponentially to many applications. So it’s like when you first have electricity, when you first have the internet, when you first have the operating system. These things enable many applications to be built, but it doesn’t mean there is an electricity 2.0, 3.0, 4.0, then it takes over the Earth; or operating system; or internet. So that, I think, is just being very creative/imaginative—people who want to sell books, people who want to be famous. There is no engineering evidence that that’s coming.
Now, we can’t say never because, you know, a thousand years ago, if you asked a responsible scientist and say, will people be able to fly, they will probably say, well, we see no engineering means of doing that. But ultimately, we did. So I’m not going to say never, but I promise you that when I see—the day I see robotic overlords coming—(laughter)—I’ll write—I’ll write another book, OK? (Laughter.)
PAREKH: Or they might write the book first. (Laughter.)
LEE: They—yeah. I might not have the credibility. (Laughter.)
PAREKH: So let's go back to the U.S. versus China context for a second. You talked about this machine learning innovation ten or fifteen years ago. And at the time—and even today—the vast majority of super AI algorithmic experts are in the U.S. I think it's over sixty-five or seventy percent.
LEE: Yes. Yep.
PAREKH: But that was for the age of expertise. And what you've talked about—one of the reasons why you're not worried about the singularity—is that those algorithms are not changing as much now. We're now in the age of implementation: taking those algorithms and, domain by domain, trying to figure out how to apply them. And so you talk about data being the new oil and China being the Saudi Arabia of data—I'm not sure what that makes the U.S. Talk a little bit about this.
LEE: Right. So, first, the premise is that there's no, you know, deep learning 2.0 coming, and that may very well be wrong. I'm pretty confident we won't see AI reaching human intelligence in twenty years. But there may very well be another disruptive technology that makes some kind of learning dramatically better. And to the extent that happens, the U.S. is likely to be in a leadership position.
But those breakthroughs are hard to project. If you look at the last sixty-two years of artificial intelligence, there's really been only one breakthrough, and that was ten years ago. We haven't seen another one since, so to project another one in the next ten years—possible, but I don't know how likely that is. But for now we can basically monetize, make applications, and implement based on all that we already know. So while in super experts the U.S. outnumbers China ten to one—super experts, maybe a thousand of them—if you look at, let's call them the regular experts (laughs), maybe a hundred thousand, China will actually have more of them. And if you look at a million of the mini-experts, China has a lot more than the U.S. So China is fully equipped with engineers to build AI products, but not to disrupt AI.
And also, China has these gladiatorial entrepreneurs I just told you about, details in the book.
And then, of course, the most important thing is that AI gets better with data. The best way to make your AI better is not to go hire three brilliant scientists; it's just to get more data if you can. And China has more data because it has more users. That's the breadth level. But it also has more depth, because if you've been to China recently you'll know that Chinese people do everything on their phone. Their bicycle rides, their payments, their retail stores, their restaurants, ordering takeout—everything's done through the phone. Paying taxes—everything's on the phone.
So that record—now, that record, of course, belongs to the application in which it's generated. But there are many applications, and each application, compared to its U.S. counterpart, has more users and more depth. So if we figure three to four times more users and three to four times more depth, there's a 10X difference in data. So an equivalent Chinese company, with an equivalent skillset and funding, is probably going to collect ten times more data and therefore build better AI. And, again, AI you can monetize by tweaking the knob, and Chinese companies are much more willing to tweak the knob to make money. And that will result in more profitable companies.
So as a result, today’s most valuable speech recognition, machine translation, computer vision, face recognition, drone companies, are all Chinese. And there are about fifteen Chinese unicorns in the pre-IPO stage. I don’t know how many there are in the U.S., but probably not a larger number. So arguably, China’s already forged ahead monetizing and implementing—out-monetizing and out-implementing the U.S., although the technology side is still substantially behind.
PAREKH: One of my favorite stories in your book is about the two thieves who go to Hangzhou hoping to break into a couple of stores and steal some money. When they got arrested they said, where is all the cash? Because the market had gone almost all mobile. And so there’s literally—you go to the store, there is no cash. Everyone pays with Alipay. Even beggars use—
LEE: Beggars! Beggars hold up a sign, you know: I’m hungry. Scan me. (Laughter.) I’m totally serious. If you go to China you’ll see this.
PAREKH: Do you think that there is a difference in societal norms as it relates to comfort with that collection of data between the U.S. and China?
LEE: Clearly U.S.—I mean, the United States was founded based on, you know, individual liberty, rights. And therefore the demand for privacy and the consumer advocacy is at the very forefront. China hasn’t had that development. Now, Chinese consumers are beginning to learn about privacy. They’re asking for it. There are some laws starting to govern—the sale of data is actually considered criminal. So it’s emerging. So I wouldn’t say that Chinese consumers just don’t care about privacy. But in general, I think there’s much more concern about that in the U.S.
I would just say that on the issue of privacy, we should all be very cautious that it's not just "give me the privacy." Privacy is actually connected to convenience, to safety, and to social good. So it's a very careful knob to tweak. You know, I think the GDPR is too simple a knob. Think about it—as a cancer patient—in remission, but a cancer patient nevertheless—I'd be happy to donate all of my medical records without worrying about privacy, because there's a social good related to that. But I don't want, you know, photos of my family to be shared. So I think privacy is a complex issue. In some countries there are a lot of cameras, and people say, wow, that's a terrible invasion of privacy. But possibly for that country or that city there's a lot of crime, so that loss of privacy is compensated with greater safety. So I'm not stating what's right or wrong. I'm just saying it's a complex issue.
PAREKH: So before we open it up to the members for their questions, let’s talk about the implications of all this. You made some predictions about the number of jobs that could be displaced by these technologies. You talked specifically about this concept of one-to-one replacement versus disruption. Can you talk a little bit about what that means, and then what your predictions are as it relates to potential job loss?
LEE: Sure. A lot of economists and think tanks have come out with various numbers, between nine percent and about forty-six percent of jobs being displaced by AI, automation, and software over the next ten or twelve years. The numbers vary greatly. I tend to go with the higher numbers, and the reason is I think a lot of the think tanks didn't consider a couple of factors. One is that it's a U.S.-and-China dual engine moving this forward. Secondly, most think tanks think about one-to-one displacement—like, one robot for one cashier, one robot for one cook. But actually, there are industry-level disruptions.
You know, we funded a company that has an app. You download the app, you type in a few things, you allow it to suck up some data—similar to how Facebook takes data from your phone. And it uses all that data, plus what you typed in, to decide whether or not to give you a loan, in basically two seconds. And they have a default rate much lower than banks'. That's going to disrupt the way banks do loans in the future. And to the extent this app becomes widespread, the banking loan officer jobs will be gone, right?
We're also funding autonomous fast food. And to the extent these take off, there will be fewer jobs at McDonald's and Kentucky Fried Chicken. So I think the disruption is more complex than just job for job, country for country. It's two engines moving forward; it's both one-to-one displacement and disruption-driven displacement.
PAREKH: You talk about the characteristics of jobs that are least likely to be displaced by AI. Can you describe them?
LEE: Sure. Well, I described what AI can do, so it's pretty clear what it cannot do. It cannot do multidomain, right? It's single domain. So strategic work—you know, at CFR you think a lot about diplomacy, negotiating the trade war. (Laughs.) Those things are too complex, too multidomain. It cannot do creativity, because the humans provide the objective functions. It doesn't create. It won't invent a new drug. It can't invent a new style of painting. It can only replicate, emulate. So those are kind of the advanced cognitive jobs.
But another type of job it cannot do is the human interaction jobs—jobs that require human connection, warmth, and trust; jobs that we wouldn't want a robot to do, and that a robot currently can't do very well anyway. Those jobs vary over the whole spectrum, from nanny, elderly caretaker, and nurse, to teacher, doctor, social worker, psychiatrist, and so on. So I think those service jobs will potentially be numerous enough to absorb the routine workers who are displaced, in a transition to this type of human-to-human service job.
PAREKH: The last question before I open it up, you mentioned your cancer diagnosis earlier. You talk a lot in the book about how that impacted your thought process on what solutions might be for the job loss that we talked about. Maybe you can touch on that.
LEE: Sure. I've been a workaholic my whole life—much less so now, after my cancer remission. But before then, I worked one hundred hours a week. I would wake up twice in the middle of the night to answer email. And I had a lot of really bad habits—lack of sleep, no exercise, basically not much time for the family. But facing death made me realize that "working equals the meaning of life" is something we've been brainwashed with in our society—certainly in Chinese society, and to some extent in other societies, like the U.S., as well. Once I faced death, I realized working really was not the most important thing. If I had one hundred days to live, I wanted to be with my loved ones. I wanted to do the things I love. And I regret not having done that properly prior to my cancer diagnosis.
So for myself, I've reformed and changed my work style, putting priority on my family and friends. And it was that experience that, when I look at the AI job displacement issue, makes me think that maybe the compassionate jobs—the jobs that require love—are the ones that will not only help transition routine workers into a new beginning, but also inject more warmth and positive energy into society.
PAREKH: So at this time I’d like to open it up to our members to join the conversation. Just a reminder that the meeting is on the record. Wait for the microphone to come to you. Please stand, state your name and affiliation. And please limit yourself to one concise question—and please make it a question.
Q: Hi. Danah Boyd, Microsoft Research.
One of the biggest things we saw as early-stage technologies matured was security, because any technology that has power can be corrupted. You're right that AI depends so heavily on different kinds of data, but we're also watching the new ways in which data and models are being built face different kinds of attacks. I'm curious: where do you see the maturation of the conversation about what security might look like in an AI world?
LEE: Thank you. That's a great question. If I were given a chance to name the second-biggest concern for AI, that would be it. It actually wouldn't be privacy, even though that's important. I think security's a big issue. You know, we've seen security for the PC and the phone—viruses and things like that. But for AI it's much more dangerous, because if AI controls all the autonomous vehicles, a hacker going in can turn them into autonomous weapons that kill people rather than avoid hitting people. You can also imagine that hacking into AI is maybe even harder to detect, because usually when you hack into a PC or phone, you're injecting some code. So there is a trace, evidence of a computer program running on your PC or phone, so at least you can recognize it. But with deep learning, it's just a giant array of numbers. If someone went in and tweaked some, I don't know how you would detect it. And also, the models are always tweaking themselves.
See, when I said AI gets better with more data, that means: you gave a million loans, you got a three percent default rate; now you give three million more, and with every loan your default rate goes down a tiny little bit. So your models are self-modifying all the time. If some hacker came in and modified a model to let them give some person a bunch of loans and steal your money, or if they hacked in so that a terrorist's face will not be recognized, or if they hacked the computer vision system so that their car becomes invisible—these are all forms of security breach that we have never seen before. So I think that's an area that computer scientists really have to focus on, because without that the dangers are substantial.
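Lee's "giant array of numbers" point can be illustrated with a deliberately tiny sketch. Everything here is a made-up toy (a random weight vector, an invented feature vector), not a real recognition model: the attacker flips the model's decision on one chosen input by editing only the weights, leaving no injected code for a scanner to find.

```python
import numpy as np

# Toy model: a linear "recognizer" whose entire state is a weight array.
rng = np.random.default_rng(1)
weights = rng.normal(size=4)                    # stand-in for model parameters
target = np.array([2.0, -1.0, 0.5, 3.0])        # hypothetical input to hide

def flagged(w, x):
    return bool(w @ x > 0)                      # toy "recognized" decision

before = flagged(weights, target)

# The attack edits numbers only: reflect the weights so the score on the
# chosen input changes sign, flipping the decision for that input.
tampered = weights - 2 * (weights @ target) * target / (target @ target)
after = flagged(tampered, target)
# 'before' and 'after' now disagree, yet no new code exists to detect.
```

The reflection guarantees the score's sign flips on the chosen input, which is the crux of Lee's worry: the "evidence" of the hack is just slightly different numbers inside an array that legitimate training is also constantly rewriting.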
PAREKH: I think we had one more over here.
Q: Robert Klitzman from Columbia University. Thank you very much.
You suggested that you didn’t see AI going into fields like psychiatry or areas where there’s human trust. And yet, there are a lot of apps being developed now to do counseling through AI, et cetera. I’m just wondering how much of a firewall you saw there in terms of where AI may be going.
LEE: Right. I think there are always special needs. I think people who use apps for psychiatry probably have a special reason. Maybe they just have something they don't want to tell a human, or maybe they are very introverted and just don't want to face a human and talk about something that's very private. So I can see those having uses, but I'm just saying that the process of curing someone, or helping someone improve, is quite unlikely with automatic methods. First, I think the process of human interaction and emotional intelligence are not objective things you can train on—unlike a default rate, a targeting rate, or a greater investment return. Those are very objective yes/no, right/wrong, large/small questions. You know, making someone who's paranoid feel more at ease is not so objectively identifiable and trainable.
Secondly, we also see that when AI and human interaction fail, they may fail catastrophically—such as Microsoft Tay, which, as deployed on Twitter, started cursing at people after a day. (Laughter.) So those catastrophic failures are another issue. And then finally, I think even if I'm wrong and AI did an OK job of counseling, many people still won't accept it. So I think for at least a couple of decades those human jobs are a good stopgap solution, if not a permanent solution.
PAREKH: Back there.
Q: Michelle Caruso-Cabrera, CNBC contributor.
There are reports that Google wants to go back into China. Should they?
LEE: Well, I talked about a parallel universe. The challenges of traversing into a parallel universe will be very big, because of user habits and the software stack that's already there. If I showed you the list of apps on my phone, you wouldn't recognize any of them. So any company that goes in will have to try to fit in. That's difficult. I can understand why Google might feel its Android isn't getting fully monetized. Maybe there's some way it can find, by reentry, to monetize that. Or perhaps it can find a partner, such as Tencent, to work with. I mean, I'm purely speculating. If I actually knew anything, I couldn't tell you. (Laughs.) So those are the possible speculations.
Q: (Off mic)—run a company today, if you were CEO of Google, do you—
LEE: I think if I got a really strong local partner, I would. And—or, if I think I can start to monetize Android, which I don’t, then I would. But currently, I don’t think either one is very easy. So most likely, I wouldn’t.
Q: (Off mic.)
LEE: Oh, but if I were already there, that’s different. See, had they not pulled out—when I was there we had twenty-four percent share. And if we hung in there, had twenty-four percent share, that’s a different story. Then we would be intertwined in that parallel universe. Right, those are—two different answers.
Q: (Inaudible)—from Bessemer Trust.
You mentioned deep learning as one of the two breakthroughs in your talk earlier. So I remember studying back propagation in neural networks back in college, which was many years ago. So can you please elaborate as to what’s the specific transformation that you’ve seen in that? Thank you.
LEE: Sure. It’s really just a—the mathematics that it takes for thousands of layers. You know, when I studied—I studied back prop also. And it was one or two layers. You’re maybe a little younger, so maybe a few more layers. But now they do thousands of layers. And when you do thousands of layers, you no longer have to give human-extracted features. So you don’t have to go in and say, oh, I think for loans income’s really important. And I think you should, you know, not focus on the hair color or the battery level. But you can throw it all in when you have that many layers and the system will decide if a person’s hair color affects the likelihood of you paying a loan, or if a person’s battery level affects the likelihood of paying back a loan.
PAREKH: That’s an actual example from the book, the battery level.
LEE: It’s actually an example, that your battery level is slightly correlated with your likelihood of paying back the loan. (Laughter.)
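Dr. Lee’s point—that with enough layers you can throw raw features in and let training decide what matters—can be sketched in a few lines. This is a hedged illustration, not anything from the talk: the synthetic “loan” data, the slight battery-level correlation, and the tiny one-hidden-layer network are all invented for demonstration (real deep systems stack far more layers), but it shows back propagation learning on its own that income is predictive and hair color is not, without a human pre-selecting features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic applicants: one strong feature, one weak one, one pure noise.
n = 1000
income = rng.normal(0.0, 1.0, n)    # genuinely predictive
battery = rng.normal(0.0, 1.0, n)   # slightly predictive (per the anecdote)
hair = rng.normal(0.0, 1.0, n)      # irrelevant
# "Repaid the loan" label driven mostly by income, slightly by battery.
y = (2.0 * income + 0.3 * battery + rng.normal(0.0, 0.5, n) > 0).astype(float)
X = np.stack([income, battery, hair], axis=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units; "deep" systems stack many more.
W1 = rng.normal(0, 0.1, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1)); b2 = np.zeros(1)

lr = 0.5
for step in range(2000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    p = sigmoid(h @ W2 + b2).ravel()
    # Back propagation: cross-entropy gradient pushed through both layers.
    dlogit = (p - y)[:, None] / n
    dW2 = h.T @ dlogit;            db2 = dlogit.sum(0)
    dh = (dlogit @ W2.T) * (1 - h ** 2)      # tanh derivative
    dW1 = X.T @ dh;                db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

acc = ((p > 0.5) == (y > 0.5)).mean()
print(f"train accuracy: {acc:.2f}")

# Nobody told the network income matters and hair color doesn't; the
# learned first-layer weights encode that on their own.
income_w = np.abs(W1[0]).sum()
hair_w = np.abs(W1[2]).sum()
print("income outweighs hair color:", income_w > hair_w)
```

With hand-engineered features a human would have had to decide up front that income belongs in the model; here the training signal alone assigns income large weights and leaves hair color near zero, which is the shift Dr. Lee describes.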
PAREKH: Just to go around the room, I’ll go in the back there.
Q: Martin Indyk from the Council on Foreign Relations.
Can you talk about the application of AI and machine learning to warfare?
LEE: I can, but it would be purely speculation because we, as an investment company, stay away from anything that relates to weapons or war. I think, clearly, face recognition and targeting can be done. Clearly, autonomous weapons are the worst case of all AI, because you could simultaneously launch almost an eradication of another country. And I think many countries, for those reasons, are trying to agree not to go into that area, just like the nuclear treaties. I also think AI doesn’t just—it may cause war to not necessarily be fought. Well, one is you can see through Boston Dynamics that the future of soldiers might be that way. But you could also see that maybe wars don’t have to be fought face-to-face, whether it’s human-to-human or robot-to-robot. Cyberwar appears to be another path. And AI could be plugged into that.
I think my biggest personal concern, being a novice in this space, is that even if the countries find ways to agree to keep this under control, AI is a pretty open technology. And terrorists would not be subject to the same constraints that countries agree to.
PAREKH: Here in the front.
Q: Joel Cohen, Rockefeller University. Thank you for your talk.
Would you address the future of education for the future of AI? And why is it that the United States has significantly fewer AI experts? And what could we do to increase the depth of capacity in the United States? And what could China learn from the U.S. to increase its depth at the very highest levels?
LEE: Wow. Like, four really good questions. OK, I’ll try to answer them all. (Laughter.) I think—first, I think U.S. and China should learn from each other, but probably not adopt fully the other’s strengths. Because I think the U.S. strength comes from the brilliance of universities, such as yours, that have great researchers, great reputation. And they become a funnel for the world’s smartest people to come here and study, some of whom stay.
That is the American magic, I think, is that normally a country’s ability to do research is—should be roughly proportional to its population because super smart, creative people, while each—assuming every country is roughly, you know, equal—assuming they’re equal economically, then it’s how many people you have. That’s how many geniuses you’ll have. And that normally would put China at an advantage. But the difference here is that the whole world loves to study in the U.S. The U.S. is a melting pot. And great universities. Chinese, Indians, Iranians—everybody wants to study here. And China can’t emulate that, because—well, you know, Chinese universities generally attract Chinese students. It’s a good thing China has a large population, but it’s not yet a melting pot. So I would like to learn, but it may be very, very difficult.
Why isn’t the U.S. training more AI engineers? I think U.S. education lets people do what they love. So a certain percentage will love AI, but not everybody will love AI. Chinese education tends to rank jobs based—
PAREKH: Be more prescriptive. Be a little bit more prescriptive. (Laughs.)
LEE: Rank jobs on how much they pay. AI engineering pays a lot, so everybody rushes in. And China has a large population. So that’s why there are so many engineers. But with respect to, you know, other types of education, I think teachers will be a very different and very special job that will not be displaced by AI. I think too large a percentage of our teachers’ jobs is taken up by routine tasks like giving exams, giving homework, homework drills, fixing pronunciation, math drills. All that could be done by AI. And when AI takes away that sixty percent of a teacher’s job, the teacher has much more time to become a mentor, to help the students, to make this more humanistic. So that would be my hope.
And this Sunday, if you watch 60 Minutes, I’ll be on it to talk about this topic.
PAREKH: Kai-Fu, is it also true that those same Chinese students who were coming to study here and then staying here are increasingly coming back to China?
LEE: They are, but still a large percentage stay. I don’t have the exact number. I think it used to be eighty percent stayed. Now it’s like forty percent stay. Something in that order. So it’s still a net positive for the U.S.
PAREKH: Right here in the front.
Q: So, Esther Dyson, Wellville.
I’m just curious, as you said most of the excitement around AI is basically pattern recognition and sort of fitting things to data and statistics. But the real advances, it seems to me, are going to end up being in something that can design models, that can reason by analogy, that understands relationships. And where are we on that? Because I’m sure the more interesting people are working on that, not on better statistics.
LEE: Actually, it’s kind of going through a transition. In academia, there were certainly people sort of tweaking deep learning and all the related technologies—you know, transfer learning, reinforcement learning, and basically machine learning approaches—and tweaking and combining them to solve problems that show accuracy rates and improving them year to year. A bunch of these people were poached by Google, Microsoft, Facebook. And they’re gone. So academia shrank all of a sudden because of this poaching with quadrupled salaries.
What remains in academia I think is probably more than ever committed to what you described, going after the new frontiers. You know, the next deep learning, but different, you know, combined with human representation, self-replicating code, moving towards artificial general intelligence, common-sense reasoning, full natural language understanding. I think that happens for a couple of reasons. One is the near-term optimizers have been poached away, and that’s what they love to do. Secondly, companies will beat universities at those tasks anyway. So there’s not much point to keep tweaking those numbers anymore.
Face recognition, speech recognition, academia will never be—lead industry again. And people like Geoff Hinton, who invented deep learning, are coming out and saying: Don’t do deep learning anymore. We need to find the next big thing. So I do think the American advantage is to increase the funding in these more fundamental areas. And then when the next big thing comes out, can once again lead the world.
Q: Hi. I’m Joan Kaufman from the Schwartzman Scholars Program.
And my question is really about global governance mechanisms. As we sort of move forward with this brave new world and all these technologies, there are a lot of cultural differences around privacy in different, you know, comfort levels. And what are the global governance institutions that are thinking about these things right now? Are there any that are dealing with how to negotiate and regulate or set standards—ethical standards for the use of AI? Or, you know, should there be? And who—you know, who could play that role? Do you think that’s practical as we move forward over the next several decades?
LEE: Yeah, thank you. That’s a great question. I think it’s really hard, given the increasing nationalism in a lot of countries nowadays. People want to own their own destiny. And also, the diversity and difference in cultures and governments’ laws and regulations. On the other hand, I think we have an interesting crowdsourcing problem. We’re crowdsourcing to each government to do its own thing, and then hopefully governments are watching each other, learning from best practices. In that sense, Europe put a foot forward and said: GDPR, that’s going to be how we give privacy back to the citizens. It’s obviously the wrong answer, what they’ve done.
On the other hand—(laughs)—it’s very good they’ve tried, because then their failure will cause other countries to say, OK, we need to do something, but let’s not do that. So—and I think that’s probably how it will come about. I wish there were global governing bodies. The United Nations is unlikely to have an ability to do that. And among the private organizations, similar to this one, I kind of like the Partnership on AI. I think they have good people—the top AI scientists with a conscience are joining. And I think it’s going in a good direction.
On the other hand, I’m also a little skeptical—because it’s multinational and lots of smart people have different ideas—about whether there is a true lowest common denominator. But I think if anyone’s going to try to find one, they’ll be the ones.
PAREKH: All the way in the back.
Q: Thank you. Ariana King. I’m a reporter with the Nikkei Asian Review.
Moving down from global governance into just governments, what can you say about the government policies in the U.S. versus in China—Made in China 2025—for example, that might be giving China an edge over the U.S.?
PAREKH: Maybe you could talk also—you talk in the book about the mass innovation and entrepreneurship initiative in China, and how that really unleashed AI. Maybe add that on.
LEE: Sure. Sure. That actually is a policy I did not study or read. But I’ll give you two others. Hopefully you can draw your analogies. I think that with Chinese policies, we should take them more as setting the tone, not as a huge funding source of a trillion dollars, OK? So since I don’t know that one—I mean, I know of the one you mentioned, but I’ve studied two others more because they relate to me. One is the Mass Entrepreneurship and Innovation Plan. That was about four or five years ago. And then there is the AI plan from the State Council, which was last year—July of last year.
So in the Mass Entrepreneurship Plan—in both plans, what happens is the central government sets a tone, and then the local governments are sort of like—it’s the crowdsourcing again. They’re each trying different things and see how it goes. And in the case of Mass Entrepreneurship plan, the local governments build up all these incubators, accelerators, angel funds, guiding funds. I think the conclusion is some of them work, some don’t. I think guiding funds were quite successful. They’re basically government acting as LPs, but with some of the profit given back to the GPs. So to accelerate that. Most of the accelerators I don’t think were all that successful.
However, it did something really amazing, which was to change the mindset of the Chinese people about risk averseness. Because, as you may recall, if you go back to anything anyone read ten years ago: oh, China, entrepreneurship? I don’t know. It’s too risk averse. This is a thousand-year-old cultural issue. But this plan really told the people, hey, being an entrepreneur is OK. Working in a startup is OK. Failing is OK, just try again. That, I think, was a massive shift that I wouldn’t have thought would be possible.
PAREKH: And quickly. It happened so quickly.
LEE: Five years. I’ll tell you, when I started Sinovation Ventures, I had to wine and dine fiancées, fiancées’ parents—(laughter)—parents, grandparents, basically convincing them to let their children or fiancés come work for a startup, and explain to them that I will personally mentor them and help them; and if they don’t succeed, then, well, there will be another chance, et cetera. But I don’t have to do that anymore. I think the traditional risk averseness has been un-brainwashed in five years. That’s the power of the central policy, not the dollars that went in. Each government voluntarily put its dollars forward.
In the case of the AI plan, I think, again, the government said AI is a big priority; there is no funding as far as I know affiliated—associated with it from the State Council. But each bank, once they read the plan, they became more open to buying AI software. So the companies we fund sold a lot more after the plan came out, but it wasn’t government money.
PAREKH: But talk about the autonomous city, just to give people a sense of the scope of the infrastructure.
LEE: Sure. Right. So another big thing is the infrastructure that comes about, and much of that happens also at the local level. So each city will make its own decision.
For example, Nanjing says, oh, we’ve got lots of great universities; we’re going to put—we’re going to put a lot of our next year’s budget on building these research parks to leverage our academic expertise.
And then (Shaoguan ?) became a new city the size of Chicago designed for autonomous vehicle(s). They have a two-layered road—the top layer for pedestrians and bicycles and pets, bottom layer for cars—so as to minimize the worst case, which is a car hits a pedestrian.
Suzhou created a two-level, ten-square-kilometer space. The top level is for human-driven cars; the bottom level is for autonomous cars. That again reduces the likelihood of accidents, allows the autonomous cars to talk to each other, and also fixes the lighting problem—because you can have fixed lighting underground. The lighting problem is what caused the Tesla accident.
And Zhejiang’s province built a new highway with sensors that tell the cars—talk to the cars to make them safer.
So I think the way to interpret these Chinese policies is setting the tone, and then letting private companies do what they do, and build what private companies can’t do, which is infrastructure—not too dissimilar from what President Eisenhower did with the interstate highways.
PAREKH: Here in the front.
Q: Ralph Buultjens, New York University.
Could you please speculate on the impact of all of this on the global political power structure of the future? Is it going to lead to a reduction in the power of the state? Is it going—those societies which have an advantage in this will rule the world? Will it lead to a new kind of AI imperialism or colonialism? How will this affect, in your view, the future power structure of the world?
LEE: I see. I’m not an expert in that. I haven’t thought deeply about that.
I will tell you what I have thought about. I think it will give more power to the U.S. and China, the two countries that co-lead and are far ahead of the other countries. Whether the two countries collaborate or compete, they will lead. They will also form parallel-universe ecosystems that don’t have a lot of mutual dependence. I know today the U.S. and China have a lot of mutual dependence, but in the fields of mobile, internet, and AI, the two universes are actually quite separate, which in some sense is a good thing, because if a Chinese VC funds a Chinese company selling to Chinese customers, the winning of one company isn’t at the loss or the expense of the other, so that could be a good thing.
I also think, on the jobs issue that we discussed, it’s possible that stronger execution-oriented states, such as China, may have an advantage in dealing with crises that may come up—like a jobs crisis, shifting these jobs to those jobs, suddenly increasing taxes—those would be harder to do in the political system here. I’m not advocating any form of government; I’m just saying a strong government may have an advantage if there are AI-triggered crises.
Q: Increase the power of the state?
LEE: Power of the state. I don’t know. I don’t know about that. Obviously, if the state is good at using technology and is permitted to have more power—is permitted by its governance to have more power, that would be something it could do. But I can’t really see crystal ball, yeah.
PAREKH: Right here in the fourth row.
Q: Thank you so much. Liz Economy from the Council on Foreign Relations. Thanks for a really terrific set of remarks.
LEE: Thank you.
Q: And I think the picture that you paint of the future role of AI is exciting, even thrilling in terms of what it could mean transformationally for the economies of various countries. But I’m wondering, in terms of the political elements, and whether you have any concerns about the role of AI as a tool for greater government repression, for example, in China. And given your iconic status, you know, in the AI community and more broadly in the tech community in China, whether you have discussions with government officials about what may be too far when it comes to the use of AI in terms of repression.
LEE: I see. Actually, no government consults me on anything. (Laughter.) Not the government of mainland China, not the government of Taiwan, not the government of the United States, and I am happy to be left alone—(laughter)—to just do—
PAREKH: To do your thing.
LEE: —my investing and technology. And that is absolutely the truth.
PAREKH: All the way in the back, right.
Q: (Off mic)—from Paulson and Company.
There was an article in The New York Times earlier this week entitled “Private Business Built Modern China; Now the Government Is Pushing Back.”
If you think about businesses in China like Alibaba or DiDi that are two-sided—if you have a lot of sellers then you get a lot of buyers. On Alibaba that creates a monopoly. With DiDi, which is like Uber, if you have a lot of vehicles, then you have a lot of passengers, that creates a monopoly. DiDi has about ninety percent market share.
I’d be interested to get your thoughts. It seems like AI could add another layer of monopoly because, as you pointed out, the more data you have, the better AI you can have. So I’d be interested to get your thoughts on that.
LEE: Yeah, absolutely. I think AI is the new, additional factor that’s for monopoly maintenance, and reinforcement, and growth, and that is a concern to American consumers, Chinese consumers, all the consumers.
On the other hand, AI also permeates way beyond any industry, and AI also is a force for disruption. So DiDi may not be easily disrupted by another identical competitor, but we’ve seen Mobike start to go into that area because, as you know, bicycles are connected to cars.
We’ve seen Meituan starting to move into there. So in China, actually, the dynamism of everyone wanting to move into everyone else’s space is a special form of checking against monopoly. You would think Alipay had a monopoly, right? There was no other viable payment mechanism on the internet. But WeChat found a way by, you know, double subsidy to DiDi rentals, by giving red envelopes, giving away free money. And it actually forced its way into a forty percent market share.
So, you know, China antitrust laws haven’t been enforced all that much, but the incredible competitiveness in the market, and the unrelenting desire to move into somebody else’s space is kind of a very unusual way to check on monopolies.
PAREKH: So the biggest problem with today is only having an hour because there is so much to talk about, but I’d like to thank Dr. Lee for joining us today. (Applause.)
LEE: Thank you.
PAREKH: And there are books for sale on the way out, if you are interested. Thank you again.