Transcript
Hello. I’m Richard Haass, the president of the Council on Foreign Relations, and this is Nine Questions for the World, a special limited edition podcast series.
In each episode, you’ll be hearing me in conversation with some of the best thinkers of our time, as we ask fundamental questions about the century to come.
For those of you who don't know, the Council on Foreign Relations, or CFR, is an independent, non-partisan membership organization. We are dedicated to informing the public about the foreign policy choices facing the United States and other countries. We're also a think tank, a publisher, and an educational institution.
Today’s episode features a conversation that took place on November 1st, 2021. I spoke with Fei-Fei Li, a professor of computer science at Stanford and Co-Director of the Institute of Human-Centered Artificial Intelligence.
We spoke about Artificial Intelligence, or AI if you prefer, quantum computing, deep learning, and more. I did my best to make sure I understood these terms and these technologies, figuring that if I came away with a better grasp of what was going on and why it mattered, many of you would as well. We also discussed how these technologies are likely to unfold, and how they would affect our societies and our world more broadly. And we also got to talk about education and how to increase the odds that scientists would better understand the world they are working in, and the rest of us would better understand the science that would so change our lives.
Here’s that conversation.
Richard HAASS: Dr. Li, I want to thank you for all you do, and I want to thank you in particular for joining us for the next hour.
Fei-Fei LI: Thank you, Dr. Haass. I'm so excited and really honored to join this.
HAASS: So I want to begin with one line in your bio. It says the following: Li's current research interests include cognitively inspired artificial intelligence, machine learning, deep learning, computer vision, and artificial intelligence plus, I suppose it means, health care, especially ambient intelligence systems for health care delivery. And the only thing it doesn't mention, probably, is quantum computing or some type of robotics, but we're going to throw those in too. So what I'd love to do, before I get into the policy questions, Fei-Fei, is this: my background in college was Middle Eastern studies. There was a science requirement at Oberlin, and I took geology, and I learned a little bit about rocks and a little bit about tectonic plates, and actually the tectonic plates and continental drift were useful images for social science, but I don't have a whole lot of science background. I'll hazard a guess that I'm not unique in this. We probably have a fairly wide range of knowledge and skill sets here. So just to even out the playing field a little bit, let's just go through that. Cognitively inspired artificial intelligence, what is that?
LI: First of all, these are great questions. Let's just define artificial intelligence first before we describe cognitively inspired.
HAASS: I was going to go there, so thank you.
LI: So AI is a big word these days, but it is really rooted in the science of computing, combined with mathematics. It is a computing science that takes data of some form, like you said in geology, data of earth images, or data of texts, data of x-rays, and then intelligently computes on it so that we can discern patterns and help make decisions. And if you put that capability into an object like a car or a robot, the decisions can even involve actions, like, do I turn right, or do I stop? That is the result of intelligent computing. So in one sentence, if I can make it a long sentence, it's a computing science that takes in data, learns to discern patterns and reason with the data, so that it can help make inferences, make decisions, and even make actions happen.
HAASS: Okay. Thank you. At my next cocktail party, I will be much more successful than at the previous ones. Machine learning. I often hear these terms used somewhat interchangeably, but I assume they're not synonymous. So how do I understand the phrase machine learning?
LI: You’re not so wrong. Actually, to help move this conversation forward: there is a bunch of words, machine learning, deep learning, including cognitively inspired, which we haven't touched on, and all of them are just ways to emphasize different kinds of tools. I actually started as a physics major. Think about the world of physics. From Newton to Maxwell to Einstein to Feynman, these physicists used different tools, from calculus to partial differential equations to statistics to quantum mechanics. In a way, AI is similar. It's a much younger field, but our tools change. We started as a field using tools that are logic-based or rule-based, and then we started to use tools that are what we call machine learning-based, which use probabilities and statistics. Then, recently, the public became aware of AI mostly because the latest wave of tools became so effective; they're called deep learning tools. For those of you in the audience with a slightly more technical background, deep learning is another word for neural networks. It's yet again just a different set of tools. So whether it's rule-based or machine learning-based or deep learning-based or other tools, they all try to help the field of AI take on the questions and tasks that we aspire to.
HAASS: Okay, so let's move from the definitional to the consequential. So there's deep learning and machine learning and AI. At some point we'll get to quantum computers and the rest. As a first effort, what's the answer to the "so what" question? To what extent are these incremental increases over what we could do before? To what extent are they fundamental, if you will, orders of magnitude? What will they conceivably allow us to do that we can't do now? Where might we be heading with this set of tools?
LI: That really, Richard, is a great question. The "so what" question, the answer to that is a big deal. If we take a slightly more epic look at human civilization, we as a species, since our activities were first documented, from, I don't know, cave drawings, we've never stopped innovating. From the discovery of fire to using sharpened stone to cut animal bones, all the way to today, innovating tools to improve our lives is in the DNA of humans as a species. But once in a while ... An economist friend said to me that once every 200 years or so, we invent tools that take a huge leap forward in our abilities. I used the example of fire, but obviously steam and steam engines, obviously electricity, obviously the invention of cars, and then the PC and biotech. You can see there are points in human innovation where the tools we created fundamentally change the way economics and society work, and change productivity. They change people's lives. So the "so what" of AI, in my opinion and in the opinion of many of our colleagues, is that it is as big as that level of fundamental change to human society. Why? Because AI as a tool creates very, very powerful computational machines that can understand the world, data, whether it's medical or transportation or financial, whatever it is, in such a powerful way that even the human brain sometimes cannot compete. Once you have that capability to discern patterns, make inferences, and help make decisions, you change the way people work, and you change the way people live. We can go through a lot of examples, if you want me to give some, but that's why the "so what" is big. It is a transformative force for our economy and for the way people work and live.
HAASS: Just so I understand, there's those things where we do them, and something comes along, an invention, and we can do those things differently or better than we could do them, more efficiently, faster. There's those inventions that come along, and we can actually do different things. It's qualitatively different. Where does AI fit in then? Is it both? Or how do I understand it?
LI: Great question. I think it's both. Let me use health care as an example, starting with things we do now but could do better with AI. Take a radiologist. A radiologist on call today takes, say, urgent care or ER data and tries to help the doctors in the ER triage patients. If one patient has a life-threatening condition and another has early signs of pneumonia, which is not as life-threatening, the radiologist has to read those x-rays and decide how to triage and rank these patients in terms of priority. That's already happening. But a radiologist, even the most seasoned, takes seconds or minutes to do this. On top of that, humans make mistakes, and humans have moments of fatigue. Now imagine that process is helped by machines that have, for the practical purposes we're talking about, infinite computing capacity, that don't fatigue, that don't need dinner or lunch, and that can help the radiologist triage or make some inferences faster. Suddenly, the existing work of triaging patients based on radiology readings is much improved. Mistakes are reduced. Efficiency is increased. That is one way of helping existing work. Let me take you to the other extreme, which is work that humans cannot do today and can barely imagine. Here's one example. In our ICUs, patients are fighting for life and death. Our nurses and doctors are working extremely hard, and our ICU nurses are extremely fatigued, yet we still require 24/7, continuous monitoring of the patient because their condition can go sideways so fast. Even one possible situation, a patient becoming delirious because of drugs and falling out of the bed, can be a fatal injury to a patient. So what do we do? Well, frankly, not much, because our nurses are overworked, and these things just happen. Imagine there is an extra pair of, I wouldn't say eyes, but sensors that continuously help our nurses monitor our patient's physical mobility. And as soon as there is a sign of a dangerous move, or a predicted early sign of one, the nurses are alerted. This is something that doesn't happen in our ICUs today. This is part of my research. I talk to nurses. They're constantly worried about this, but they don't have a way of really staying on top of it. If that happens, that is a new technology that can help our health care workers take care of our patients better. So that is what we haven't been able to do today but can imagine.
HAASS: That's an obvious example where this emerging technology is a positive. It can save lives, whether by helping us read MRIs or, in this case, sensing some disturbance in a patient's situation that could be life-threatening. What are the potential applications that are, shall we say, going in the other direction, the ones that keep you up at night because, whether at the individual, societal, or even international level, you worry they could have really negative or destructive consequences?
LI: Yeah, Richard, actually a lot of it, because if I weren't worried, I wouldn't have co-founded this Human-Centered AI Institute at Stanford. Honestly, as a scientist, even since my days as a student of physics, I have learned that technology is a double-edged sword. It is invented by humans and used by humans, and depending on the value system and all that, it can be used badly, right? For example, even in medicine, take a piece of AI technology that helps our dermatologists predict, let's say, a skin cancer condition. That sounds so benevolent, and that's what we wish for. But if we don't train this algorithm in a fair way, we suddenly could be in biased territory, where we think this technology is helping everybody, except it's trained with biased data, and people with certain skin tones were not well represented in that data. Then we use this technology downstream. Suddenly, we have created a very unfair and actually life-threatening application for some members of society. So bias is something that keeps me awake, whether it's intentional or unintentional. Of course, there is also privacy. Again, even in our example of patient sensing, what if that is hacked? What if that capability comes into the hands of adversarial players who use the information in ways that violate privacy, among other things? Those can be individual adversaries as well as organized ones. So that's another area of concern. Labor-market change and the macroeconomics are also something we need to double down on studying, because history has told us that whenever a transformative technology is introduced to our society, it really upsets the traditional composition of labor. In this process, people might lose jobs. Jobs might shift. How do we deal with that macroscopic issue, as well as with individuals' livelihoods? And of course, there is the whole military aspect of the technology, and, again, history has seen that. As students of physics, we learned about that early in our studies, and AI is another example. So there are many ways this technology can be used, whether intended or not, in adversarial ways.
HAASS: I agree there. I think almost all technologies have the potential to be used in benign or malign ways, domestically and internationally. Given the nature of the technology, the speed at which it is changing, the number of places where research is going on, does government stand a chance, or will the technology inevitably outpace any attempts to regulate either areas of research or areas of application?
LI: Yeah, great question. At Stanford HAI, we actually discuss this a lot. One thing I want to say is that anything I say is a result of learning from so many multidisciplinary experts; in the past few years this is a topic we have talked about a lot. I think it's actually both. It's not about government standing a chance or not. Government is part of the ecosystem, and it plays an important role in our society. There are two aspects to this. You talk about regulation. As we have seen, think, for example, about cars with seat belts, or about how clinical studies are regulated through the FDA. Government has always participated in the proper regulation of technology, putting up guardrails to protect people. And I think AI is a technology where government needs to participate in that regulatory aspect. In the meantime, government also plays an important role in invigorating the ecosystem of innovation, and this is especially true now. As a proud American scientist, I have seen, over the past decades, how the US government has played a positive role in invigorating our country's innovation, and that's why we're the most innovative country in the world, whether in the biosciences or computer science or the physical sciences. And I think in the age of AI, we, meaning those of us in the public sector and in academia, are eager to see that happen. So much resource is siloed in a small number of big tech companies, and our talent is flowing in a disproportionate way into these companies. It's important that government participate in invigorating this ecosystem. I actually serve on a task force by the White House OSTP and NSF on establishing a national AI research resource, and these are the efforts that, I believe strongly, government should participate in.
HAASS: Can I just press you on that a little bit? If Goldilocks were joining our conversation, what would be too much or too little government participation in the ecosystem? How does one right-size the government role? Or to put it another way, what's the optimal balance between the universities, private companies, and government? Because we obviously have a much more, what I would call, bottom-up ecosystem than other countries, China and others, which have more of a top-down ecosystem. What do you see as the right mix in this ecosystem?
LI: Wow, Richard, I almost wish I knew the answer. But I hear you, and I know why you ask this question. It's an important one. I don't think I can tell you the exact right mix, but I think there is a methodology that's really important and special to American success, which is the multi-stakeholder methodology: we want to bring civil society, higher education, the public sector, private industry, and government to the table and invigorate this together. I think that is what's unique in America. In a way, having less regulatory or top-down force than many other parts of the world is part of the fundamental reason we are so much more innovative, because we have a lot more freedom as a society to innovate. But in the meantime, we have seen that government has played important roles in innovation. I'll tell you, all the important early studies of AI technology, things you might not have heard of, such as back-propagation and neural network methodology, or some of the data-driven work, came out of academia and were largely supported by government grants. So even in the early, foundational days of AI, we needed the government’s support, and we're still just at the beginning. We continue to need the multi-stakeholder approach.
HAASS: I'm going to ask you to take out your crystal ball for a minute, and let's talk a little bit about trends and futures. There's a lot of us who think the greatest challenge to the United States today is democracy. It's our own political system, its ability to function and so forth. When you look at these emerging technologies, do you see them as contributing to, if you will, the solution or contributing to the problem?
LI: Richard, I see them as both, and I see them as an important influencing factor in both directions. The right use of this technology can strengthen our democracy and can strengthen the way government policymaking works. We've got colleagues at Stanford HAI who work with local and state governments to make policymaking much more efficient and to understand data so that government can make much better decisions. We've got a lot of colleagues who work on different aspects of policy recommendations, whether in national security or economics, and this tool set, AI as a tool, really is very, very useful. In the meantime, if we don't use it in the right way, or if we don't understand its adversarial uses, it might exacerbate the problem. Look at today's social media, the recommendation systems and deepfake technology, which is deeply disturbing and might undermine the democratic process. So in my opinion it's both.
HAASS: I don't know how much you're focused on military applications. But when you look at the future of conflict and the future of warfare, I guess the question is, something like AI, which you've thought about more than anybody or as much as anybody, to what extent do you see it as revolutionizing warfare? Can we already tell, in ways that would potentially have real consequences either for the individual soldier or for platforms like ships, airplanes, tanks, what have you? What is the generic or directional impact of AI, if I had to describe it that way, as best as you can see it?
LI: So I think the military and intelligence use of AI, intelligence here meaning the national intelligence community, is inevitable. For example, you mentioned robotics earlier, and in defense scenarios, whether we're talking about things running on the ground or in the water or flying in the air, all of that can technically be related to robotics and robotic capability. And AI is the technology that is basically the brain of a robot. So whether you're talking about a civilian self-driving car or a militarized vehicle or airplane or ship, that technology will be deeply, profoundly impactful. My colleague at the Hoover Institution at Stanford, Dr. Amy Zegart, is also working on how AI and algorithms impact intelligence. I cannot speak for her, and I don't have the expertise, but I know that some of her research goes very deep in analyzing the impact on national security and the intelligence community as well.
HAASS: Directionally, though, what are the implications? Would you think that, given the level of detail and the quantity of information AI can contend with, the speed at which it can work, and the reduction of errors, it inevitably moves us toward a reduction in the human role in a lot of these enterprises, whether your enterprise is warfare or something else, so that, all things being equal, the labor model, if you will, becomes less human-centric?
LI: That is a great question, Richard. That question applies to both military and civilian uses of this technology. What is the human role here? In fact, at HAI, when we established this institute, we designed three pillars that are fundamental to human-centered AI, and one of them is what we call human enhancement, which is really a way to think about responsible use of AI that is a true reflection of the important values of human-centeredness and human rights. So when you say human role, technology might be changing the physical labor role, but it should not change human values, human rights, and human-centeredness. You know, a friend of mine just had a surgery in which the surgeon didn't even touch her. The entire surgery was done by a surgical robot, but the surgeon was in the room. The whole process was human-centered and human-serving. As an AI technologist, I was still a little terrified hearing that, because I care about my friend. I said, "Are you okay allowing this robot to work on you?" She gave me the most compelling answer. She said, "Well, the best surgeon's hand has a resolution of, say, five millimeters, but the robot can have a resolution of one millimeter or even less." That part of the physical labor the robot can do better, but the entire design of the system and the process is human-designed and human-centered. So I think the human role, the human values, and the human rights in the application of AI systems must be there, must be preserved and enhanced.
HAASS: Somewhere in there, there's a joke that robots have better bedside manner than some surgeons, but I won't go there, or maybe the robots will make house calls. One other technology question, then I want to end with some education questions, and that is about quantum computing. I read a lot about that also, but we haven't really talked about it. Again, how much of a game changer is that, beyond, if you will, “traditional computing”? How do we understand that?
LI: So this is getting outside of my realm of expertise, despite my Princeton undergraduate degree in physics, but I'm very excited from a technology point of view. Quantum computing, when it works, can fundamentally increase the computing capability of our machines by orders of magnitude. And this is once again the same trajectory of human innovation, as we innovate tools that outpace ourselves. At some point, when wheels were invented, humans were outrun. Airplanes outfly humans. Computers outcalculate humans. AI out-computes humans, and quantum will add to that. So that is an inevitable trend. It will be a game changer because of the orders-of-magnitude change in compute. Imagine climate, right? Everybody's worried about climate. Climate computing is extremely, prohibitively large, because we're talking about atmospheric and water and earth data that come in petabytes. Even today's biggest computers still crunch these numbers with great difficulty, not to mention the energy they consume. Quantum computing can be a game changer when that amount of computing can happen much more efficiently.
HAASS: Interesting. Fei-Fei, I want to end with two educational questions. The Council on Foreign Relations in recent years, really over the last decade, has become much more of an educational institution. We don't have students in the sense that Stanford does, but we are in the business of trying to educate and of being a resource. So I want to ask the same question from two different directions. One is, you know, Stanford is turning out all these wonderful young graduates in computer science and engineering and the like. What is it, though, that they also need to know, do you believe, that goes beyond engineering and computer science? What do you believe every Stanford graduate also needs? Because they're dealing with these technologies. We've been speaking for over half an hour, and these technologies obviously have all sorts of impacts on our societies, on our lives. So one would ideally want them to be somewhat informed, to think through almost philosophically or ethically or morally the potential uses, or economically what the implications for labor would be, and so forth. What is your sense, even for someone who's concentrating, or majoring as we used to say, in computer science or what have you? What else do they need?
LI: Richard, this question really just touches my heart. It really is the entire foundation of what I believe in, especially after my sabbatical at a major technology company in Silicon Valley, at Google. I met so many engineers, young engineers who had just come into the workforce, who came to me and, in a way, cried for help, some literally, because they were seeing the deficiency of their education. They were struggling with the seismic social impact of the technology they create, and they didn't even know how to contextualize what they were doing in a social, moral, philosophical, ethical framework. So whether it's CFR educating leaders, or Stanford, or our community colleges educating the workforce and tomorrow's generation, I really, really believe in what we call a new bilingual education. This is not between Spanish and English. This is really between humanity and humanism on the one hand and technology on the other, because we cannot pretend this technology is just a bunch of numbers and equations. It impacts our society. Even when you're writing that line of code for x-ray reading for a radiologist, it's important as a technologist that you understand how the multi-stakeholder methodology works. You understand the implications of your technology for radiologists and for patients. You understand the bias, the human bias, that comes into your data, and its downstream implications. That takes a bilingual, human-centered education. So I agree with you. At Stanford, we are already starting what we call embedded ethics in the CS program, where not only do we have CS courses on the ethics of technology, but even the hardcore technology courses, like the deep learning for computer vision course I teach, will have an ethics unit. And our research lab engages with legal scholars and ethicists and bioethicists, because we do a lot of health care work, and that guides the design of our projects. So it's already happening, but, in my opinion, not fast enough.
HAASS: I think it's great that it's happening. I would also hope there would be some embedded study of international relations and some embedded study of citizenship and American democracy, because engineers are also going to be full citizens in this society. They're going to be participants in a 21st century world. What about the other direction of bilingualism, for people like me, who studied social science, not science science, international relations in my case? One of the best reasons I know to have children is that they can help with the gadgets around the house. Very quickly I'm out of my depth. Given what we're talking about, I don't need to write computer code, but what do I need to understand? What is, if you will, the basic level of literacy in science that non-scientists need to have, given the importance of the issues we're discussing?
LI: Richard, I cannot agree more. I believe the science of computing is the new foundational knowledge of the 21st century and beyond. Just as there is a basic requirement of math and natural science for any undergraduate degree or high school diploma in this country, I think some basic understanding of computing should be required. I remember when I was an undergrad at Princeton, we actually had a course called “physics for poets,” and I think we...
HAASS: I had “rocks for jocks”; that's what geology was called in my day. But physics for poets was also offered.
LI: So we need to have a “computer science for humanists,” and it's happening. But we hear a lot of people talking about this: you hear congressional hearings where Silicon Valley business leaders testify and our policymakers ask questions that reflect a lack of basic understanding of how internet businesses work or how computer and AI-based products and services work, and I think that's more and more of a problem. We need policymakers, artists, teachers, many parts of our civil society to understand the fundamental science of computing, because it's just going to matter more and more.
HAASS: Fei-Fei Li, I like your idea so much of bilingualism between science and the humanities, in both directions. I'm going to do you the ultimate honor. I'm going to steal it.
LI: Awesome.
HAASS: Thank you for all you do, day in, day out, week in, week out, year in, year out. I'm not bilingual yet, but I feel I can now do the equivalent of what we call restaurant French. I can now do restaurant AI, and I can fake it a little bit. So thank you for getting me to this point, and, again, thank you for all you do.
LI: Thank you, Dr. Haass. As a technologist, I'm learning every day, and I call on all technologists to learn the human and societal aspects of our technology as well.
Thank you for joining us. I hope you enjoyed the conversation.
If you’d like to learn more please visit CFR.org/9questions where you can find a transcript as well as additional resources on this topic. Have a question or some feedback? Send us an email at [email protected].
Subscribe to the show on Apple Podcasts, Spotify, Stitcher, or wherever you get your audio.
And with that I ask that you stay informed and stay safe.
Show Notes
About This Episode
Artificial intelligence, quantum computing, robotics, and other emerging technologies are radically altering society, prompting experts to question whether they will be detrimental or advantageous to the world. In this episode of Nine Questions for the World, Richard Haass and Fei-Fei Li, a professor at Stanford University, consider ethical approaches to technological growth and the future of tech regulation.
This podcast series was originally presented as “The 21st Century World: Big Challenges and Big Ideas,” an event series in celebration of CFR’s centennial. This episode is based on a live event that took place on November 1, 2021.
Dig Deeper
From Fei-Fei Li
Tess Posner and Fei-Fei Li, “AI will change the world, so it’s time to change AI,” Nature
From CFR
Chris Rohlf, “AI Code Generation and Cybersecurity”
Lauren A. Kahn, “U.S. Leadership in Artificial Intelligence Is Still Possible”
Robert Morgus and Justin Sherman, “What Policymakers Need to Know About Quantum Computing”
Christopher Zheng, “The Cybersecurity Vulnerabilities to Artificial Intelligence”
Read More
Joy Buolamwini, “Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It,” Time
Ashley Stahl, “How AI Will Impact The Future Of Work And Life,” Forbes
Spandana Singh, “The Building Blocks of Meaningful AI Regulation,” New America
Watch and Listen
“The State of Ethical AI Frameworks 2021,” AI Today
“Predicting the unintended consequences of AI, with Niya Stoimenova - TU Delft,” The Human-Centered AI Podcast