The Global Artificial Intelligence Race
Panelists discuss the global race for leadership in artificial intelligence, and provide an analysis of major AI legislation and initiatives in China, the European Union, and the United States.
AYYAR: Well, good afternoon, everybody, and welcome to the Council on Foreign Relations. It’s a pleasure to be here with you today for this discussion on the global artificial intelligence race. Characteristically, the Council on Foreign Relations has assembled a really distinguished panel that I’m overjoyed to be with today to discuss this very interesting and all-encompassing topic that I think many of you, like me, have been reading about.
With us, from my right, your left, is Peter Fatelnig, who is the minister counselor for digital economic policy at the European Union delegation here in the United States. He’s a leader in this area with a great background, and his biography is available to you.
To his left, Jason Matheny, the former director of the Intelligence Advanced Research Projects Activity (IARPA) under the DNI, an actual economist turned technologist who understands the U.S. perspective on these investments and what’s unfolding for us in the artificial intelligence race.
And to my immediate right, Samm Sacks, who is a senior technology fellow at CSIS and whose article on collaboration and cooperation between China and the United States in innovation, and on preserving the innovation advantage, was published today in Foreign Affairs, in case you had questions on that front.
So welcome to all three of you, and thank you for being with us today. I’d like to begin by reminding everybody that we’re on the record today. I know many of you are experts in this area and look forward to questions and answers with our panelists; I’ll ask you to please be brief, as we have a full house today. I’ll try to be a little bit provocative with a few questions for our panelists to prime the discussion, and then at 1:00 we’ll turn promptly to your members’ questions and continue the conversation until 1:30.
I’d like to begin with a brief introduction from each of our panelists. And so, Peter, I think we’ll start with you and move towards Samm. If you could begin with a few remarks for our audience.
FATELNIG: Well, I work for the European Union, and I’ve been posted here in Washington, D.C., since spring of this year. My career has actually been more in industrial policy. I’ve been working in Brussels on innovation and startups; until recently, I managed a portfolio of a thousand forty-eight startups. So, looking back, I have quite some experience in innovation, and I hope that, together with some previous work on artificial intelligence, earned me the invitation to this meeting.
AYYAR: Wonderful. Thank you, Peter.
Jason.
MATHENY: So I was previously in the intelligence community working at IARPA, which is an organization within the intelligence community that funds research at over five hundred universities, colleges, and companies, many of them involving AI. So in that role, we looked not only at our own investments in AI but also in foreign investments in order to understand how that technology was evolving.
SACKS: I focus on Chinese science and technology policy at CSIS and I’ve worked on Chinese tech issues for over a decade, both in the public and the private sector, and frequently go to China and work with—meet with Chinese tech companies to get their perspectives on these issues.
AYYAR: Wonderful. Well, with that very brief introduction, let me share with you that CFR has assembled both a U.S.-allied perspective and a potentially adversarial one for us to examine as we talk about the artificial intelligence race. I want to start by asking a framing question, and I’d ask each of you to give me a few thoughts on it; then I’d like to dive a little deeper into what I’d suggest are the values and the nature of this race, if it exists.
First is this idea that there is a global AI race and, if you believe there is, the nature of that race and what the stakes are—certainly for the European Union, China, and Russia, but also for the United States. So I thought we’d start out with a little bit of framing, and then I have a couple of specific questions for each of you to help us understand, more broadly, what your perspective is.
So, Samm, why don’t we begin with you? Is there a global AI race and, if so—certainly colored by your expertise in China—what are the stakes?
SACKS: Sure. I think that our relationship with China in the AI domain is truly defined by collaboration and competition, and it’s clear that China has advantages when we talk about the entrepreneurship culture, when we talk about supportive government policy and funding and data, and we can get into what all that means.
But I want to posit for the discussion today that the interdependence between China and the U.S. in AI is much more deeply ingrained than I think some people give it credit for, and I would even argue that the risks of severing it are more destructive to U.S. innovation and competitiveness than people understand. So, to put in another plug for my article today—
AYYAR: Article, right?
SACKS: — in Foreign Affairs as we talk about this. Why is decoupling a risk? If we look at the number of leaders in the U.S. tech space that are now investing in and acquiring Chinese AI companies, if we look at flows of talent and funding across the Pacific, this goes both ways now. Not to mention the fact that if we are not engaging with China on things like AI safety and ethics, what does a world look like in which the Chinese leadership is looking to lead not just in the tech but also in the rules and the governance around it? I think there is every argument to be made for working with them in that domain. So we can get more into that, but just to throw it out there: what that adversarial relationship looks like is more complicated.
AYYAR: Samm is going to make it difficult for me to cast China in the role of adversary here. But I—
SACKS: Well, no. I can do that, too, and that’s the thing. I don’t want to be naïve about that. So we can do both. Yeah.
AYYAR: That’s right. Well, listen, you’ll have—you’ll have some questions in various areas, which I’m very excited to hear your thoughts on.
But, Jason, what about the United States’ perspective on this, if you could share a little about that?
MATHENY: Yes. So I think, first, there’s a risk of overhyping current capabilities, which, right now, are fairly primitive and brittle. In general, the popular press has probably exaggerated the capabilities that currently exist or are even on near-term horizons. I think there’s also often an exaggeration of China’s capabilities, and I’ll be interested in the discussion about assessments of what China has: the disadvantages it has in some aspects of its research infrastructure; the comparative advantages of the U.S. and Europe in some respects, particularly their extraordinary university systems; the use of competition to decide which technologies are better, as opposed to picking winners in advance, as China has done; as well as some advantages we have because of our investments in computing and semiconductor manufacturing, where we have a significant lead on China.
So I think I’m not quite as alarmist as many about what the role of competition could be. I share Samm’s view, though, that there are so many areas for collaboration and cooperation related to AI security and safety, and we shouldn’t squander those.
AYYAR: Very good, Jason. But I want to get you on the record now. Are we—does the United States perceive that there is a global AI race? Can you tell me yes or no? What do you think?
MATHENY: So I would say that there’s not a distinct race compared to other technology areas in which there is some form of competition globally—
AYYAR: Good. OK.
MATHENY: —for example, semiconductors, pharmaceuticals, manufacturing, energy. I mean, if you look at AI as a percentage of the economy, it’s small compared to many other sectors where there’s global competition. I think AI is given more significance as an emerging technology just because its impact on the economy is likely to grow as a percentage of GDP, in contrast to some of the other sectors I mentioned.
AYYAR: Wonderful. Thank you, Jason.
Peter, is there competition? And if so, what are the stakes?
FATELNIG: Well, you know, competition in the race—for a race you need a goal, and I’m not entirely sure we actually have an idea where the goal is.
AYYAR: OK.
FATELNIG: And very often these conversations are rather like all of us being in an escape room, trying to find the key to get out. (Laughter.) So I don’t think, from our perspective, that it’s necessarily a race. We’re not yet there, right? Maybe there will be more competition and a competitive race, but that is not yet, at least, how we perceive it.
What supports that argument is that if you look at what countries are doing, what they put in place are very broad strategies. They’re not focusing—because why, or how, could you focus? Should you focus on deployment?
Well, no. Of course, we have to go into research, because there’s so much to be done there. Do we know anything about the ethical conversation? I’m sure we’re going to come to that point in this conversation. But we are at the beginning of all of that. So the race hasn’t even started, I would say.
AYYAR: I see. But let me come back to each of you with a more specific set of questions—if we could talk a little bit about values and ethics. President Macron of France, in particular, has spoken meaningfully about the need for a value framework, and certainly with GDPR the European Union has been leading the discussion of privacy and the use of data. Do you sense that this discussion of values and ethics is going to play the proper role, given potentially adversarial relationships with Russia and China, in the use of and investment in AI?
FATELNIG: Mm hmm. I think you have used two interesting words here—data and values—and let me start with values. The first question we have to ask ourselves about AI is whether we as human beings are ready for this technology. We feel comfortable with a lot of technologies: we know how to drive a truck; we can build spaceships and all that kind of stuff. We feel comfortable that we can manage and master them.
Now there is a new technology coming around. Are we sure we can master it? And do we want to master it? Are we ready as a human society to confront the bias so prevalent in the society we have today? If AI is going to change our society by taking away bias, how do we deal with that situation? I think there’s a lot we all have to go back into ourselves about.
Are we entirely sure that’s the world we want to live in—that’s the economy we would like to construct—one purely based on data without any values? Of course, there have always been values, and you can run the test yourself: if you pull out your phone and ask Siri, or whatever your internet assistant is, some really tricky, clever question, you will see there is bias built in. There are values built into these systems, for good reason, because I think right now we would not be ready to hear all the answers the assistant might have ready for us.
So I think we have to differentiate a bit between the data and the values, and values are very, very important, because values are an expression of the outcome we want. What do I want from an AI? Take a chess game: the AI is not programmed to play the game; it’s programmed to win the game. Winning is a different outcome than playing. That’s a choice somebody has made—that the system should win the game and not just play the game. So the choices in terms of outcomes are human. They are ours, and that’s the conversation we need: what outcomes do we want from AI?
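To make Fatelnig’s play-versus-win distinction concrete, here is a minimal, hypothetical sketch in Python. The toy numbers game, the agent, and both reward functions are illustrative assumptions, not anything discussed on the panel; the point is only that the objective a human writes down determines what the system actually optimizes.

```python
import random

random.seed(0)
TARGET = 21  # the "win" condition of a toy game: reach exactly 21

def reward_participate(total):
    # "Play the game": any legal move earns the same reward.
    return 1.0

def reward_win(total):
    # "Win the game": only reaching the target matters.
    return 1.0 if total == TARGET else 0.0

def greedy_agent(reward_fn, trials=1000):
    """How often a one-step greedy agent actually reaches the target."""
    hits = 0
    for _ in range(trials):
        total = random.randint(0, 20)
        # The agent picks whichever move its own reward function prefers.
        best_move = max(range(1, 11), key=lambda m: reward_fn(total + m))
        hits += int(total + best_move == TARGET)
    return hits / trials

print("rewarded for playing:", greedy_agent(reward_participate))
print("rewarded for winning:", greedy_agent(reward_win))
```

Under the participation reward, every move looks equally good, so the agent rarely reaches the target; under the win reward, the same agent seeks the target out whenever it can. The human choice of reward function, not the learning machinery, produces the different behavior.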
AYYAR: Very good. That’s a great posit.
I want to turn to Jason now. Jason, President Putin in Russia has been pretty explicit: by 2030, he wants thirty percent of his weapons systems to be autonomous. You said something very important in your introductory remarks that may be helpful and that we can get into during the question-and-answer period. But from your perspective as a technologist who led one of the most advanced research organizations in the intelligence community—an organization, by the way, that the United States government has looked to for leadership in this area—do you sense that what President Putin said is true, and does it frame our approach in the United States, or in Western nations, toward artificial intelligence? Any thoughts on whether he’s framed the race in a military or national-security way?
MATHENY: Yeah. I think that authoritarian regimes have a very strong interest in AI because AI can have a multiplier effect on the resilience of authoritarian governments, and I think seeing the early adoption and, some would say, a sort of irrational exuberance around AI in both Russia and China is partly a measure of that.
If you have very little faith in your own human capital to carry out your political will—either because they have different goals, say, than the political leadership or the party leadership—then you’re very interested in autonomy. And if you also believe that your political system will not succeed without a high degree of ubiquitous surveillance, then you have a high degree of interest in AI.
So some of the things that concern me about the exuberance in China and Russia towards AI are its potential applications to surveillance and its potential applications to information campaigns—everything from manufactured information like deepfakes to the industrialization of propaganda.
So I think that’s partly why expressions like Putin’s are unsurprising—seeing authoritarians drift towards this. In the United States, there’s a much more ambivalent attitude towards the role of autonomy. The DOD directive that requires, for example, that there be meaningful human control over targeting is, I think, in part a measure of our belief in the importance of having human beings involved. Even if they’re low-ranking human beings, we want them engaged in those kinds of decisions. That’s the sort of faith that Putin doesn’t have in his own personnel.
AYYAR: Yeah. Very insightful that the United States trains its military in the profession of arms to make these value-based decisions.
But, you know, Samm, he was taking a bit of a shot at China there with this notion of the surveillance state. I know you’re fresh back from China, and I wonder if you could share a few thoughts on what you think the Chinese perspective is. What we read of the Chinese perspective suggests that they’re very serious.
By 2030, they intend to be the global power in artificial intelligence, and they seem to be investing at both the education level and the policy level, along with their industries. I can mention a couple of the companies that I think Jason may be alluding to, like SenseTime and Face++, that have taken enormous investment from the Chinese government. My question to you is: to what end, and how do you see that unfolding?
SACKS: So, back to the point about surveillance. There is no question that Beijing is using technologies like AI to try to build one of the most sophisticated, comprehensive surveillance states in the world. I think that’s very clear. There are certainly limitations to that. I heard an interesting figure recently: of the two hundred million cameras in China, only about fifteen percent of them have the resolution to actually pull that off, and I think two percent of them have the AI chips necessary for that kind of recognition.
So, you know, there has been a lot of work done looking at the deterrent effect, even if that technology has not met the aspirations, right. But I also want to sort of just throw in there that when we look at how AI is being used right now in China, a lot of the applications are really trying to solve real-world problems in Chinese society.
Just to give you some interesting examples from a conversation with a Chinese VC firm—they talked about some of the interesting startups they’re looking at. One is the use of AI to do maintenance on wind farms, where going up to perform maintenance with a person hanging from a basket is a very dangerous, very difficult thing to do, and AI has enabled them to do it another way. Or mushroom picking on an assembly line—a high-margin job that young people don’t want to do, and now you can use computer vision and robotics.
So I think economic drivers are playing a huge role in some of the business cases around AI, and I want to put that into the conversation alongside the big, bad, scary things.
AYYAR: Samm, you’re really humanizing China in a disturbing way for us. (Laughter.) It’s not making it easy for me at all, but I accept that there are many applications. I want to ask Peter and perhaps Jason to comment on the notion—and Samm was alluding to it, I think, in the way she described the natural use of these capabilities—that there is a mythology out there about the workforce’s opportunity to transition. I wonder if both of you, from a European and an American perspective, could share what the reality is behind the concerns we are reading about in terms of displacement.
I guess in America the excitement was around autonomous vehicles, and Jason mentioned to me earlier that there’s, of course, a frenzy about what that could potentially mean for that profession. But I’m interested in your thoughts on whether that’s a likely outcome or whether, as Samm alludes to, there may be a very natural transition as we incorporate AI into the markets for our workforces, both in Europe and in America.
FATELNIG: First, I want to make really clear that in the European Union this spring we launched a number of initiatives on the positive side of AI. We think this is a very promising technology, whether in a narrow way—picking mushrooms—or in broader ways—driving cars. So I think there’s a lot to benefit from, as a society and as an economy.
Now, having said that—we are going full steam ahead in investing in research, deployment, and whatever else you can imagine—there’s also this conversation about what it means. We are very seriously and carefully looking into the issues around the future of work in a world of AI. We have colleagues posted here in Washington, D.C., who are looking at that, trying to gauge the environment and the temperature here.
Now, the question here is how fast it will come and what the actual impact will be, and here I think we should be a bit more careful. If we look back at the example of when computers came into our lives, from the end of the ’80s or early ’90s up until 2010, there was tremendous progress. But it didn’t really show up as a major displacement of the workforce, and it didn’t show up in the macroeconomic statistics either. The growth from this change was barely measurable—the OECD puts it at around one percent. Let’s agree on that figure for a moment.
So I think we are looking at a slow transition that created new jobs and completely changed the way we work, live, and communicate today, but it really didn’t have that negative impact. We should be prepared for the worst, but we shouldn’t anticipate it either, and we should be able to use the technology in a positive fashion as we go along.
AYYAR: Let me assuage some of our members who are concerned about being replaced at their work by the potential of AI.
Jason, do you agree? If I could summarize, he’s saying: hey, stay positive—we can manage the technology and the transition, and disruption might be a little aggressive a word for the short-term impacts. Is that the way you see the world unfolding in this race?
MATHENY: Yeah, I think so. I think it’s a little unsettling that there’s so much disagreement among economists on even the direction of the change—whether it means net displacement or a net addition of new jobs. I think the most pessimistic scenario, the wholesale removal of some jobs or professions, is unlikely to unfold. What’s more likely is that percentages of a person’s job will be displaced.
To give one example, intelligence analysts who look at satellite imagery spend most of their time looking for an object in an image, and then they spend the remainder of their time—let’s say twenty-five percent—trying to figure out why that missile is in the image, what it means, and what that missile will be doing tomorrow. It would be good to have machines automate the first part—finding the missiles in the image—so that the analysts can spend time doing what they have a comparative advantage in, which is the sense-making. I think we’ll see that in a range of professions, including the ones that are the lowest-hanging fruit for this sort of displacement, such as service professions that depend on pattern recognition, like radiology or dermatology diagnostics. Ultimately, those professions won’t disappear, but the nature of those jobs will shift more toward the things that humans do best.
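A minimal sketch of the human-machine division of labor Matheny describes, with a random stand-in playing the role of the detector; the threshold, scene names, and scores are illustrative assumptions. The machine does the exhaustive search, and only the flagged candidates reach the analyst for sense-making.

```python
import random

random.seed(0)

def mock_detector(image_id):
    """Stand-in for an object-detection model; returns a confidence score."""
    return random.random()

def triage(image_ids, threshold=0.8):
    """Machine pass: keep only the images worth an analyst's time."""
    return [i for i in image_ids if mock_detector(i) >= threshold]

images = [f"scene_{n:04d}" for n in range(10_000)]
queue = triage(images)
print(f"analysts review {len(queue)} of {len(images)} images "
      f"({100 * len(queue) / len(images):.1f}%)")
```

With a threshold like this, roughly four-fifths of the search work never reaches a human; the analyst’s remaining time goes to interpreting the flagged images rather than finding them.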
AYYAR: OK. So two of our three panelists have, I think, suggested that our economies can adapt to these capabilities and put them to use in a way that elevates and accelerates the human judgment that’s required in complex environments.
Samm, where are you on this? By the way, do you also agree that the outlook is generally positive? I wonder if you could start us going back in the other direction and share what you think. If there is a global AI race, then what are the stakes for America and the West, potentially against an adversarial China and Russia in at least some areas? Maybe that’s more in the military and national-security realm, as you’ve described it. But are there stakes for our economies in how we view data, access to data, and the ability of these kinds of algorithms and capabilities to unfold in market economies? Your thoughts.
SACKS: So I think that there is a major competition going on right now, but it’s not just over the technology. It’s over the governance and the rules for these technologies and, you know, the outcome is going to have major implications for U.S. competitiveness, political power, our ability to be technological leaders, and that’s why I think it’s so important that we handle this challenge of cooperation and competition correctly.
And, you know, you said before that I’m making China look so human—(laughter)—and I do want to have our eyes open about the fact that this is a competition, right. We need to be aware that this is a leadership that has stated its aspiration to be a global superpower in this realm, and I think we need to be realistic about what that looks like.
We’re seeing Chinese companies looking to have a seat at the table in international standards-setting, and in AI they’re also looking to set the standards for safety and ethics, and there are major implications to that. A group of analysts and I translated and analyzed a white paper China issued earlier this year about AI standards around ethics and privacy. We need to pay attention to that as it goes forward.
AYYAR: Well, Jason, I wonder if you could pick that up a little, on safety and security especially. It sounds like you believe there’s deep water that connects us across potential adversaries—collaboration and competition. This may be an area where collaboration is prevalent.
MATHENY: Yeah. I think so. So even in periods of competition, say, between the U.S. and the Soviet Union, we engaged in forms of technological cooperation, for example, in the development of the permissive action links that protect nuclear weapons. And then after the Soviet Union we had the cooperative threat reduction programs, which ensured that nuclear stockpiles were secure and protected so that they didn’t fall into hands outside of states.
I think those kinds of examples of cooperation are ones that we need in AI security research. We would all benefit, even while we’re being competitive, from having systems that are more reliable and more robust to what are, right now, fairly primitive attacks. Most off-the-shelf AI systems can be easily fooled with about freshman-level computer science effort. A famous class of vulnerabilities—one that, unfortunately, is shared by most commercially available, off-the-shelf image classifiers—is that you can present an image to a system that looks for all the world like a school bus but gets misclassified as a tank or some other weapons system.
So the possibility of these systems being fooled or reaching erroneous conclusions, including in high-stakes situations involving weapons systems, is a failure mode that the U.S., China, and Europe all have a shared interest in avoiding. Cooperative research on security—including standards-setting, as Samm mentioned, which I think is hugely important, and investments in things like national test beds for reliability and safety testing at NIST and elsewhere—those are key investments for the U.S. government.
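A minimal sketch of the class of vulnerability Matheny describes, in the spirit of the fast gradient sign method, using a toy linear classifier rather than a real vision system; the dimensions, labels, and perturbation size are illustrative assumptions. A per-pixel change far smaller than the image’s natural variation is enough to flip the predicted label.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32 * 32                       # a flattened toy "image"
w = rng.normal(size=d)            # pretend these are trained weights
b = 0.0

def predict(x):
    """Toy linear classifier: 1 = 'tank', 0 = 'school bus' (labels illustrative)."""
    return int(x @ w + b > 0)

x = rng.normal(size=d)
label = predict(x)

# FGSM-style step: for a linear model the gradient of the score is just w,
# so move every pixel slightly against the current decision, scaled to be
# just far enough to cross the decision boundary.
margin = abs(x @ w + b)
epsilon = 1.01 * margin / np.abs(w).sum()
direction = np.sign(w) if label == 0 else -np.sign(w)
x_adv = x + epsilon * direction

print("original:", label, "-> adversarial:", predict(x_adv))
print("max per-pixel change:", float(np.abs(x_adv - x).max()))
```

Real attacks on deep classifiers use the same idea with the network’s actual gradient, which is why standardized robustness testing of the kind Matheny suggests is not a solved problem.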
AYYAR: Wonderful.
OK, Peter. Before we go to the lightning round, I have one last question for you. Are free markets enough in the Western nations? You’re here for the European Union in the United States, but Germany, France, and the U.K. all have independent AI strategies, and, of course, the European Union released an AI roadmap in April of 2018. Your thoughts on whether free markets are enough, or whether governments will actually have to invest proactively in these technologies in order to drive the growth and get the productivity that we think AI may provide.
FATELNIG: Mm hmm. That’s a difficult question.
AYYAR: OK. Take what part of it you’d like. (Laughter.)
FATELNIG: Following a bit of what has been said here, I think a perspective we have is that machines, like all employees, need to be managed. On the rules for that management, I see a lot of common ground between Europe and the United States, because those rules are value-based, and I live under the assumption that we share a lot of common values about how this society and economy should be managed.
So there we would really love to see a lot more cooperation, and to try to see if we can build a transatlantic common market for AI systems to be deployed here or there or wherever. I don’t think that will be easily had with other regions of the world. So that would certainly be something.
And if we were to do that, then we would be in a race, because the race is about different value systems. I don’t think there should be a race between Europe and the U.S. We can talk about that forever, and maybe there is one in some respects, but I don’t think that is going to produce the value we are actually both after. The value we’re both after is preserving and extending the economic and societal systems we have.
AYYAR: Is that at risk from Russia’s and China’s competing visions for the application of these technologies? I guess the quick answer I need from the three of you, before I turn it over to our members, is: is the global AI race more about competition or collaboration?
Peter.
FATELNIG: The risk is there, yes, because the risk is not about where the impact will be achieved. The risk is where it will be (sat ?) and pushed in a direction today.
AYYAR: I see.
Jason.
MATHENY: I think the default is cooperation, just because the research community, including in the U.S. and China, likes to publish just about everything it does.
AYYAR: Right.
Samm.
SACKS: I’ll agree with that.
AYYAR: OK.
SACKS: But I think we have to be aware of where the fault lines are and manage those very carefully.
AYYAR: OK, members. You’ve got a very positive outlook here on the global AI race. A quick reminder of the rules for on-the-record conversations at CFR: if you have a question for our panel, please state your name and your affiliation, and please—brevity is the better part of valor here. Be brief with your question so we can get as many members’ questions on the record as possible. All of the panelists will be here for a few minutes afterward if you are expertly knowledgeable in these areas and want to engage in a larger debate with them.
So I’m going to begin with the gentleman here five rows back. Thank you.
Q: Hi. I’m Mike Brown. I’m on the faculty at the Elliott School at GW.
I’m really struck by the complacency of the remarks we’ve heard in the first half hour, and I’d like to hear more about what you see as the risks and dangers—maybe not the worst-case scenario, but what some of the bad-case scenarios look like over the next decade or two.
AYYAR: Thank you, Mike. I appreciate that. There is a lot of reading out there about worst-case scenarios, and it may be helpful for one or two of you to directly address why we think the kind of future that Elon Musk—to use a Silicon Valley example—may be talking about isn’t something we ought to be too concerned about in the next decade or so.
Jason, would you like to start?
MATHENY: Yeah. If you take surveys of AI researchers—and there have been a few in the last several years—looking at when we would achieve artificial general intelligence, the sort of thing that Elon Musk is worried about, that’s decades in the future. So it’s highly speculative. Some people don’t think we’ll achieve it this century. Others think maybe in fifty years.
The things we know to worry about are things like autonomous cyber weapons, algorithmic traders run amok in our financial systems, and high-stakes accidents involving critical networks. Those are the things I would worry about, as opposed to Skynet or Terminator. So worry more about digital flubber than Skynet.
AYYAR: And let me add, to assuage concerns—it’s rare for the presiding person to share insight from their own experience, but I have a group of the world’s leading computer vision and machine learning Ph.D.s in Silicon Valley working in national security and intelligence, and everything Jason just told you is exactly right, from my perspective, technically.
There is a larger, over-the-horizon concern about the values and ethics that guide the application of these technologies. But even the simplest thing is challenging in this environment with huge amounts of data. So we can talk offline, but I want you to take to heart what Jason shared with you. We are years away. For the next dozen years or so, it’s a human-plus-machine team that will yield the greatest outcomes for humanity, not just for an individual nation, and I do believe there’s much more in these technologies that brings us together, in the challenges we face as a nation and a world, than there is that pulls us apart.
The young lady there in the fifth row. Thank you.
Q: Hi. Kate Fernandez from the Cohen Group.
So you just alluded to this a little bit in your comments, but I’m wondering if you all could speak to the relationship between Silicon Valley and Washington on some of these issues and how that then relates to this conversation about whether or not we’re in competition with countries like China in the future.
AYYAR: Well, thank you for that question. I’m from Silicon Valley, so I can start, and then I welcome your insight. I do think we’re at the beginning of a conversation in America about the implications of these technologies, certainly in the public square. I think what you may be alluding to is Google’s recent withdrawal of the Project Maven capabilities—the—(inaudible)—capabilities they were providing the Department of Defense.
I was personally saddened by it. I think Google is an amazing company with extraordinary technologists, and I think that discussion has not reached its full potential here in America. My sense is that the European Union—and I was going to ask Peter to comment on this—has been leading the discussion about how we view the personal data we all generate when we trade our convenience for that knowledge—data that companies in Silicon Valley have largely been using, in a quiet, under-the-covers way, to precisely target advertising and generate revenue for their firms.
I think America is now waking up to the idea that data, and personal data in particular, are very relevant to outcomes in people’s lives, and that discussion the European Union has really led, from my perspective, with the advent of GDPR. That’s another CFR meeting, so we probably can’t go too deeply into data privacy. But Silicon Valley is one part of the American voice here; there’s a much richer discussion coming, I think, led by Congress, about how we feel about data and how well it should be protected. So the question for the panel is: are Western nations that wrestle with data privacy somehow inhibited in the competition, if data is fuel—as I think Samm was joking about earlier, with Kai-Fu Lee talking about China—given the inherent advantages some of these more authoritarian nations have? From your perspective, panel, do they have a big advantage over us because of their access to data and our concerns about privacy?
SACKS: Well, that’s how Mark Zuckerberg made it out to be. When he testified before Congress, he used China—
AYYAR: He did.
SACKS: —as a justification—
AYYAR: He did.
SACKS: —not to regulate.
AYYAR: He sure did. Right.
SACKS: He said, if you regulate us, then we’re going to fall behind China, because they don’t have limits on what they can do. The problem is that that’s changing in China. And let’s be realistic: what the government and companies can do with data there is very different from what they can do here. But the reality is that you have companies in China right now that don’t want to share their data with the government and are pushing back on that. You have China building out a whole new system around data protection—and I’ve been in conversations with the lead drafter of the standard in China—modeled after GDPR.
Now, I think there are really two tracks to data privacy in China: there are new rules that specifically limit what companies can do with user data, but the government has much more free rein. The reality is that you have a European model of data protection and now a China model of data protection, and the rest of the world is looking at these as they grapple with their own issues, while the U.S. is sort of in reactive mode.
AYYAR: We do have competing models emerging.
Peter, do you want to comment on that?
FATELNIG: Yeah. Because I think what we have in Europe are rules for the free flow of data. This is the way I would present it, because the conversation about data is not as simple as some have made it. Data is not just data: there’s personal data, which is a different category from scientific data, or open government data, or business data. These are all different categories.
There are different rules of the road for those sorts of data, and what the EU has done is clarify the rules of the road for the different categories—and then, please use it, and please share it. So I would say there is a free flow of personal data. Companies need to know the rules and need to have the security to operate, and in that sense it’s a contribution to AI, clarifying how companies like Facebook, or any other company, can operate in that space.
AYYAR: Thank you. We’ll close that question and—but there are some implications we couldn’t get to with national security and defense and I hope the other questions will bring that out.
Can we have the young lady right behind the previous question? Thank you.
Q: Thank you. Hi. Really fascinating. Jenna Ben-Yehuda from the Women’s Foreign Policy Network.
Jason, I wanted to come back to comments that you and Samm made about codifying bias in AI—we saw Amazon recently scrap its recruiting tool that didn’t like women. What is the role of government, and specifically an organization like IARPA, in funding efforts to ensure that taxpayers are not effectively paying to codify racial, gender, and other bias? And how can we help on the oversight side, thinking back to Zuckerberg’s time before the Hill? There’s a lot of education that needs to happen amongst our lawmakers. What steps can we start taking now to help them get smart so they can be useful in their oversight efforts? Thanks.
MATHENY: Yeah. Good question. At IARPA, when we discovered a few years back that the data sets we were using to build facial recognition systems were biased—because they had too few representatives of ethnic minorities in the data set—we started addressing that by making sure the data sets were more representative.
In our case, those systems are deployed against foreign populations. So ensuring that data sets are representative of the kinds of uses to which they’re going to be put, and also represent the sort of values you would ultimately like the system to exhibit, is really important.
This is also an active area of research—trying to figure out how to create machine learning systems that are less biased by design, or that reveal their own biases through testing. I think one important role for government is not only to be aware of that problem when training systems but also to develop standards and testing protocols to discover cases where a system is biased.
And then, lastly, this is not just a U.S. problem; it’s a problem for everybody. One disadvantage China has with its own internal data is that, because China’s internet companies don’t have a lot of international users—it’s mostly just Chinese users—their bias problems are likely to be even more severe than those of, say, the U.S.
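A minimal sketch of the kind of bias-testing protocol Matheny alludes to: measure accuracy separately for each demographic group rather than in aggregate, so a disparity surfaces in testing. The synthetic data, group names, and stand-in model are illustrative assumptions.

```python
import random

random.seed(1)

def mock_matcher(example):
    """Stand-in model that happens to be less accurate on the minority group."""
    accuracy = 0.95 if example["group"] == "majority" else 0.80
    correct = random.random() < accuracy
    return example["label"] if correct else 1 - example["label"]

# A synthetic, imbalanced evaluation set: 900 majority, 100 minority examples.
dataset = [{"group": g, "label": random.randint(0, 1)}
           for g in ["majority"] * 900 + ["minority"] * 100]

stats = {}
for ex in dataset:
    hit = int(mock_matcher(ex) == ex["label"])
    hits, total = stats.get(ex["group"], (0, 0))
    stats[ex["group"]] = (hits + hit, total + 1)

for group, (hits, total) in stats.items():
    print(f"{group}: {hits / total:.1%} accuracy over {total} examples")
```

An aggregate score over this set would look respectable while hiding the gap; reporting per-group accuracy is the simplest version of the testing protocols a standards body could require.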
AYYAR: Peter.
SACKS: I would also use that as a counterargument to those who say that China is the Saudi Arabia of data, right—there are limits when you have homogeneous data sets. The applications the Chinese internet companies come up with may be really good at serving Chinese teenagers exactly the kind of material they want to read on the internet, but how that translates outside of China’s closed ecosystem is another question.
AYYAR: It’s an open question about whether that data (as fuel ?) extends beyond.
Peter, I wanted to ask you to comment. There is some concern about whether algorithmic transparency or algorithmic accountability needs to be something that governments take responsibility for. In free markets in the West, there may be some inclination to let the market hold companies accountable, but there’s real concern, if I understand the sense of the question, that governments need to be involved. Would you say the European Union has a right to enforce algorithmic accountability when the outcomes and impacts are significant for their populations?
FATELNIG: Algorithmic accountability—I wish everybody good luck in getting there. I think it’s like prying open a black box—and then what do you find inside?
AYYAR: (Laughs.) I see.
FATELNIG: So I think we should perhaps move away from that way of looking at accountability. Accountability is needed, but at the level of outcomes: do we get the right outcomes from the AI systems? And there we have to hold the individuals and organizations who are dealing with AI accountable. And there may be a role for government there, yes.
AYYAR: OK. Thank you. Sir.
Q: Hi. Francisco Martin-Rayo from Boston Consulting Group.
Samm, a lot of the rumors we’re hearing out of China are that there is more of an emphasis on state-owned enterprises over private-sector enterprises, right—private enterprises are being asked to sell part of their equity stakes to state-owned enterprises. Last year, I think, of all the companies that went bankrupt, not a single one was a state-owned enterprise. The quantitative easing that’s going on is all going through state-owned enterprises. Is that what you’re seeing today, and what do you think it means for the innovation cycle there?
SACKS: Absolutely. I actually just got back from China, and a lot of the entrepreneurs, the private tech companies, are very worried right now. It’s a really tough time to be a private-sector company in China. A number of articles went viral on Chinese social media talking about the discrimination and disadvantages private companies face when it comes to their own government.
We tend to see Chinese companies as monolithic, and I think there’s a lack of appreciation here for how much a lot of tech companies in China are undermined by their own government—I hear this from both large and small companies. It’s something that Trump administration policy needs to take into account as it tries to figure out the right way to hold Beijing accountable for some of the issues at hand.
AYYAR: Very good. Sir.
Q: Hello. My name is Dan Byers (sp). I’m an independent consultant on technology and international development, formerly with USAID and the NSC.
So my question is this: given that about half the world is currently not online but is expected to come online in the next ten years, and that—whether you’re talking about Asia, Africa, or even Latin America—much of the technology and many of the platforms necessary to do that are being provided by Chinese companies, what are the implications going forward for competition for influence—economic, political, or even security—in those parts of the world?
AYYAR: Let me add to that before you answer. Samm, maybe you can take the lead, and then we’ll ask Peter and Jason. It does seem as if Chinese investment is very strong—I think in 2017, forty-eight percent of the funding for AI companies was in China and thirty-eight percent was in America, though I think the market here was greater. Are there long-term implications of what China is doing globally, with this emerging population coming onto the World Wide Web, for its access to those markets?
SACKS: So I think the percentage of internet penetration in China is still very low—I forget the exact number, but compared to the U.S. it’s much lower. Companies like Alibaba have done a really good job of connecting rural communities to national e-commerce networks, but there’s still a lot of growth there.
China also presents a compelling model to other emerging economies that says: here’s how you can use the digital economy as a driver of growth—oh, and you can also have a vast surveillance and government-monitoring system at the same time as this new engine of growth. I think that’s what we need to be aware of when we think about the competition element again. This is a different, competing model of a digital economy.
AYYAR: Jason, anything you want to add to that?
MATHENY: I have nothing to add. I did see this person had her hand up for a while and—yeah.
AYYAR: Well, there you go. Let’s go. Thank you.
Q: Thanks. I think your line of sight was directed that way. Laura Rosenberger from the Alliance for Securing Democracy.
I just wanted to follow up on a couple of the earlier questions. It strikes me that this conversation feels very divorced from geopolitical realities. We are pretty clearly headed for a time of deep strategic competition with China, and Moscow is trying to undermine the United States and Europe at pretty much every turn. So while there may be great collaboration among research scientists, at the geopolitical level that’s likely not to be the dynamic. Couple that with some of the warnings that Jason and Samm issued about the way that AI enables authoritarianism in new ways.
We are seeing a global rise of authoritarianism. We are seeing democracies under siege. And I just wonder how you come out so positive on some of these things against the backdrop of those geopolitics, because I think it’s a bit shortsighted to focus only on the technical dimensions when, as we’ve already seen, while China may not have perfected general AI in the ways that Elon Musk lays out, it is doing a pretty good job of using surveillance capabilities and facial recognition technology to throw upwards of a million people into camps in Xinjiang. So I would love to have you unpack that just a little bit more.
AYYAR: I appreciate that. Let me speak to it directly. Of course, it’s really exciting to talk about the edge of these technologies and how they’re used to harm or to help, and the mainstream of what’s going on in collaboration and work—between nations in some cases, but certainly inside nations and within certain disciplines—is pretty spectacular. In fact, of all the technologies I’ve seen and grown up with, the one with the most collaboration around it historically is AI.
There’s an OpenAI foundation. If you come to CVPR, where the latest capabilities and papers are being published, it’s a multinational environment, and they are all sharing their work openly—Chinese scientists, Israeli scientists, American scientists. From that open work, governments that want to use it to control, or to prevent liberalization, are operationalizing capabilities against their own challenges, and we will see, with technology broadly, any emerging technology used to reinforce the political realities of its environment.
And to your point about whether it is intrinsic to AI to be used that way: I like to say, jokingly, in computer vision, where we operate, that our systems are smart but not that smart. If you ask my systems to report back on every place where two men and a knife are, they would show you a surgeon about to cut into someone and a thief about to rob somebody, and they don’t know the difference.
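A toy illustration of that point: a detector reports labels, not intent, so two very different scenes can be indistinguishable to it. The scenes and the stand-in detector are, of course, hypothetical.

```python
def mock_detector(scene_objects):
    """Stand-in detector: returns object labels only, with no sense of intent."""
    return sorted(scene_objects)

operating_room = ["person", "person", "knife"]   # a surgeon at work
alleyway = ["person", "person", "knife"]         # a robbery in progress

# Identical detections for very different situations.
print(mock_detector(operating_room) == mock_detector(alleyway))  # True
```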
In the same way, these technologies can be used for good or for evil—for the purpose of preventing political liberalization or, obviously, for the more expansive and liberal values that we hold dear. I think the source of the question, though, is what everybody is concerned about when they hear these leaders talking about the dangers of the use of AI: great-power rivalry is reemerging, and does AI change its character? Does the race we’re talking about really impact the outcomes here?
In other words: do Russia and China, by gaining advantage in AI, really bring their systems of governance, their economies, and their nations to a faster rise in an adversarial way toward America?
So Peter, your thoughts on that.
FATELNIG: Thank you very much for that question, because I think you are fully right, and I do not want to portray the European Union as being naïve on that topic. We ask ourselves very, very seriously whether the deployment of these technologies is already actively undermining our democracies—and probably many people would answer that question with yes—and we have to step in and take action so that we do not allow other people to undermine the democracies we have.
That’s maybe a slightly different question from what you asked, because it’s not necessarily the question of how we deal with Russia. But it is related, and the European Union, as much as the United States, has to take proactive action to strengthen our democracies and not allow these technologies to undermine them.
At the same time, the second part is going on the offensive: we run quite large (debunk ?) networks and STRATCOM initiatives vis-à-vis Russia and other countries, in order to combat that on a more outgoing, more assertive basis. Now, that has limits. I was using the words assertive and combat—it’s maybe not exactly like that, but there is a desire to be more active in going out.
AYYAR: Jason—
SACKS: I don’t think that anything we’ve said on the panel contradicts the points that you make. I completely agree with you, and I think it’s time for a very candid conversation. The things that are going on in Xinjiang are horrifying. We also know that these technologies are being used in other ways in China’s economy that have had very important benefits for people. For anyone who is an analyst and a long-time watcher of China, contradiction is just the name of the game. This isn’t black and white.
But I think we need to have a very frank conversation. Look, for any U.S. tech company that is in the China market right now—when we’re talking about communications, when we’re talking about the cloud—what are the implications of being in a market where you have a Xinjiang scenario? What does that mean? I don’t see that conversation happening in the candid, constructive way that it needs to. We can say, OK, let’s pull out entirely—well, that’s not realistic, right? So what does that look like? And here I’m going to be very provocative, OK.
AYYAR: Please.
SACKS: For example, there was a lot of controversy around Google bringing a censored search engine to China. But when you ask Chinese users, do you want a censored Google, they will tell you that Baidu is illegal, fraudulent, unethical. The net benefit to Chinese users of having even a censored Google might be positive. There are significant ethical implications that come along with that, but the point is that we need to have a candid conversation that doesn’t just look at this in a black-and-white way.
AYYAR: Jason, there’s real fear out there. People are upset and concerned about what they’re reading: is China going to get an advantage over us by using AI to control, or to really propagate its vision of the world—social credit—and so on? I’d like you to help address why, from an American and certainly from a technologist’s standpoint, you’re less concerned about those dramatic outcomes—because you know there’s some time before the real power of AI can get to autonomy.
But what about the thought that AI could be used in the course of our normal great-power rivalry in ways we’re not expecting, and what should we be doing about that?
MATHENY: Yeah. So we certainly shouldn’t be complacent. The United States should be making lots of investments in fundamental research, in our competitive markets, and in our defense and intelligence applications of AI. But there isn’t necessarily an offensive asymmetry here. For example, propaganda can be industrialized, but you can also industrialize the detection of propaganda using AI.
There’s a very good report called The Malicious Use of Artificial Intelligence, which is a survey of the various ways in which AI can be misused by states and non-states. It’s really unclear, I think, who wins—those seeking to use AI as a way of amplifying authoritarianism, or those using AI to counteract authoritarian regimes. There’s also a dimension of this in which it may not be the great powers that benefit the most from AI, but small, poor states and non-state actors that, at least in percentage terms, are able to increase their power projection much more.
So, for example, China, Russia, and the United States all have sophisticated air forces. Most poor states don’t have a sophisticated air force, but they could buy one, maybe in twenty years, by taking commercial drones and weaponizing them. If you have a drone swarm of ten thousand carrying shaped charges, you might be able to stand up to a world-leading air force.
In terms of kinetic weapons, I’m actually much more concerned about the kinds of disruptions this will create among non-great powers.
AYYAR: Thank you for that. Ma’am.
Q: Paula Stern, the Stern Group.
My question is about standards-setting bodies. The word standards, I think, was mentioned once, but I have heard now about the roles of sovereign governments, the private sector, and the universities. Could you address the role of standards-setting bodies vis-à-vis sovereign governments, and the degree to which they autonomously create standards, which may or may not be preferable from the point of view of values, and democratic values in particular? I’d just like to hear your observations, generally, on standards-setting bodies, particularly with regard to AI.
AYYAR: Peter, would you mind leading that?
FATELNIG: Yeah, I understand. I think it’s an interesting question; for a full answer I would probably have to think a bit longer. But AI systems are complex systems, and complex systems are not standardized at that level. If you talk about technical standardization, there are elements and components you work with, and those we are all very familiar with.
Now, what I think we could look at here in AI standardization is the role of international standards organizations: can they help us spread values? That would maybe be in a softer way, because these may not be technical standards that are obligatory to follow. But there may be ways, and roles, they could play. I like your question without even knowing the answer to it. (Laughter.) It’s something to be pursued further, I think.
SACKS: Well, one thing we have to keep in mind—take 5G standards, for example. The working groups involved with that are technical bodies, and a lot of the time there’s a discussion about political influence and what it means. But these are very technical, consensus-based working groups, so the process, by its nature, is not designed to allow political influence in that realm. That’s just one slice, though. I don’t know if you wanted to—
MATHENY: I think they’re hugely important. Their importance is underestimated nationally, and it’s unglamorous, often really tedious work. But thinking very carefully about how those bodies are structured, and about the decision-making process for setting the standards, matters.
In general, when I’ve seen U.S. companies participate, I’ve thought they’ve represented something broader than just their own self-interest in the kinds of standards they promulgate. But that’s really worth checking carefully, and likewise we should make sure that when government agencies have representatives on those bodies, they are representing more than just the self-interest of their single agency.
AYYAR: I think the other part that’s challenging is that AI is trying to automate and increase productivity, and in a particular discipline that may mean different things. So it’s not as if a single set of standards for AI’s use across multiple verticals could be agreed to easily.
So this is very complex, and it gets to the values Peter was talking about earlier: the kind of transparency we need when we automate—about what it is we’re automating and what we’re considering when we automate—so that when we look at these outcomes, we can trace back the provenance of the data and understand what happened to the data in drawing the conclusion the system drew.
But it’s a great question and it’s complicated because we’re literally increasing productivity across multiple fields with the potential that these technologies offer.
Ladies and gentlemen, we’ve just got time for one more question. So let me go back to the far back and right here—this gentleman.
Q: Hi. Jason Hansberger from U.S. Air Force.
A pretty basic question—I’ll ask you to speculate some. How do you view AI as an input to society and the economy? Is it electrification and the invention of the internal combustion engine, or is it fiber optics and satellites?
AYYAR: Samm, you first.
SACKS: I mentioned before that in the Chinese context there’s an emphasis on solving real-world problems and on efficiencies, and this is already happening. If you look at the big internet companies in China and the way they’re using AI to process hundreds of thousands of transactions a second, or to make their logistics infrastructure lightning fast, this is already happening in ways that are probably not as sexy and scary as some of the ways people want to look at it.
AYYAR: Well, we’ll go to the other speakers, but as a closing thought I would add that, at least for the foreseeable future, we’re talking about specific problems that you can address with AI. The kind of more generalized AI that people are fearful of is some ways off, as best we understand it. So, for now, it’s very much about what particular problem we are attempting to solve when we talk about using algorithms and data to help us understand and contextualize. In that regard, it is really crawl-walk-run—although there are some breakthroughs in radiology and in health that we think are very exciting, because we never had access to the data before. But, largely, it’s like very many other transformations we’ve had in our growing economies, from my perspective.
Jason.
MATHENY: I think we’re at the early stages of electrification right now, and in the long run it’s something potentially more transformative than electricity. The area in which I think we’re likely to see the greatest gains, say, in the next few decades is the application of AI to the process of scientific discovery and engineering. Since those are the drivers of innovation and economic growth in general, if you can amplify those rates by automating some of the process of scientific discovery and technology development, you would be scaling up the economy’s growth as a whole. That’s a very hard problem, but I think it’s the one that could have the greatest impact.
AYYAR: Peter, you get the closing comment.
FATELNIG: Yeah. I think you could say the best scenario is the one where nothing dramatic seems to happen: AI becomes like electricity—part of almost every activity we have—amplifying our society in a positive fashion and growing our GDP. That would be fantastic. I’m not sure we will achieve that in the short term in any meaningful way, but those are the two outcomes I would be looking for—a better society and a growing economy. That is what brings us further, and that is what we should point AI at. That’s the job it should have.
AYYAR: Well, thank you all very much. Our panelists will be here for a few minutes if you have continuing questions. But I look forward to seeing you again at the next CFR meeting. (Applause.)
(END)