Gender Bias Inside the Digital Revolution: Human Rights and Women's Rights Online

Friday, November 9, 2018
Presider

Catherine Powell

Adjunct Senior Fellow, Women and Foreign Policy Program, Council on Foreign Relations

Speaker
Safiya Noble

Assistant Professor, University of Southern California Annenberg School for Communication and Journalism

As the world moves deeper into the digital age, international organizations, businesses, and governments grapple with questions about how technology challenges and reinforces biases. Dr. Safiya Noble, assistant professor at the University of Southern California Annenberg School for Communication and Journalism and the author of Algorithms of Oppression: How Search Engines Reinforce Racism, discussed her work to define digital technology as a human rights issue in the context of the United Nations’ investigative report on internet access, as well as her work on algorithmic bias.

 

POWELL: So let’s go ahead and get started. Folks are welcome to bring your coffee and food in here. I want to say welcome. And—if folks can get settled. Welcome and thanks to both old and new friends. I know there are a number of advisory committee members around the table and supporters in various ways. So it’s always a thrill to see all of you.

It is with great pleasure that I have an opportunity to introduce Safiya Noble who, as you can see from the bio, is a professor at both the University of Southern California Annenberg School of Communication and UCLA. And she told me she just got tenure yesterday at UCLA, so—(cheers, applause)—we have to—if we had champagne—I wish we had champagne to toast her. But also equally exciting is that she’s also just accepted a job at Oxford University, where she’s going to be joining their faculty this summer. So this is a woman on the move.

NOBLE: It’s an embarrassment of riches. That’s all I can say. (Laughter.)

POWELL: Really incredible. And she’s just—she was at Harvard yesterday, speaking with them. So we’re so lucky to have captured her for this brief moment. I’ve become really fascinated with women in technology, and I’ve produced a CFR report on women in tech. And everyone, when I started down this path, kept telling me about this book, Algorithms of Oppression. And so I knew right away that this is a very important voice and that we had to have her at this table. When I worked in the Obama administration, Syria and Egypt shut down the internet when folks were out in Tahrir Square demanding democracy in those countries. At the same time, protesters were using social media as tools for demanding democracy and human rights. And so technology is a double-edged sword. As we move into this digital age, it can be both freedom-enhancing and freedom-limiting. And so it’s that quandary that really brings me to this table and to wanting to learn more.

So with that, I’m going to turn it over to Professor Noble. And she’ll speak for about ten minutes to lay out some opening remarks. And then we’ll open it up, because we have a lot of experts and smart people around the table, so that we can have a more informal discussion.

NOBLE: Thank you very much. I really have never had the opportunity to speak to such an impressive and slightly intimidating crowd of people. So thank you for the opportunity. Thank you, Professor Powell, for the invitation.

I thought maybe I would talk a little bit about the book, because I’m sure not everyone—I mean, most of you probably just heard of it a couple of minutes ago. And so I could kind of give you some of the highlights and what led me to this research. But let me just say that in the kind of spirit of the type of work that you all do, what I’m most interested in in this work is trying to surface not just the affordances of the web and digital technologies, but also some of the consequences for people who are most vulnerable, people who may not be able to use the internet in some type of liberatory fashion, but who in fact may find themselves the victims of the kinds of, you know, dangerous and misinformed types of processes that happen on the web. So I bring this in the spirit of sharing and of thinking about what we might all be able to do in terms of policy or other types of interventions.

So let me just kind of set the stage. The book really grew out of a project I was doing in graduate school. When I came to graduate school to get a Ph.D., just a few years ago, I had spent fifteen years in marketing and advertising. At the time, I was leaving work with very large brands here in the United States, brands who were interested in, quite frankly, gaming search engines and getting onto the first page, and who were spending a lot of money with the ad agencies that I worked with. So I was surprised to enter graduate school at the University of Illinois and hear so many people talking about Google like it was the new public library, or some type of new trusted information frontier, at a moment when I had just left industry and knew that we were spending a significant amount of money to use it more like a public relations engine, let’s say.

So that was—that piqued my interest, that the academy was in one place, so to speak, around what was happening. And of course, this wasn’t just with Google. This was Yahoo and others. I mean, some of us have been on the web—probably most of us in this room—far before the search engine came around. And so I was—I was thinking about kind of this dynamic between what was happening in industry and what was happening in the academy, but then also what the public was doing with search engines. Pew was releasing great research about search engine use. And what they were finding was that more than seventy percent of the public was relying upon search engines as a trusted, verified, credible resource. And they have a very important study. They have two studies on search engine use that you can find as part of the Pew Internet & American Life series of work that they do.

So this led me to start to coordinate and develop a study on a variety of different identities. I was thinking about how do people in communities get represented in something like a search engine? And of course, you know, we—at that time, I was living in the Midwest in not an incredibly diverse town and thinking about the ways that people who might have very limited contact with people who are different from them might in fact be highly reliant upon something that they trusted to provide them credible, vetted information about other people. That first study really looked at a variety of kind of racialized and gendered identities, starting with black girls. My niece is here. She’s a graduate student at NYU. And she was one of the people that I was thinking about when doing this search on black girls. At the time, she was, you know, young, a tween. And I was stunned to find that the first page in all the major search engines brought back pornography as the primary representation of black girls.

And so, you know, we started to expand. And, you know, when I saw that—and I won’t. This isn’t the place. Normally I would put a slide up and you would see the sites. But I just feel embarrassed, quite frankly, to say the names of these porn sites. But they’re real pornified, if you will. (Laughter.) And they’re really sugary and other kinds of adjectives that start with P. So this led me to looking at Latina girls, Asian girls, a lot of different kinds of, again, ethnic minority identities, and seeing that overwhelmingly the representations of these girls were pornography. It opened up a bigger series of questions, which is: What does it mean when we kind of have a larger, broader public discourse, particularly coming out of Silicon Valley, that the internet simply reflects back what the majority of people are doing?

However, when you looked at these kinds of searches, you didn’t have to add the word “porn.” You didn’t have to add the word “sex.” These girls of color in the United States were synonymous with pornography. And of course, they weren’t even girls. These were women who were represented on these sites. So at a kind of fundamental, you know, Sexism 101 level, women were coded as girls. And I started writing and speaking about that. And you know, I will say that I’ve been in a—what I feel like is a quiet, unspoken relationship with Google in particular, who I talk about quite a bit, because, you know, over the years they have adjusted some of the results.

And I’ve seen their response to critics. And I think that there are some positive steps happening in that direction. But more systematically, when you look at misrepresentative information, this is a very dynamic and difficult set of issues to get our arms around. And I think it’s the kind of thing that we need to be thinking about, particularly when people are in a numerical minority in the community, in the country. They really would never have the capital to SEO their way out of it, by kind of aligning different content with the keywords, given the incredible amount of money that it takes in these advertising-driven platforms. But they also would never have the majority in terms of the kinds of people who are searching on these identities.

So that became a bit of the foundation for the book. But it was really the opening into thinking about what does it mean for us to have this kind of tension between private corporations being positioned as public goods, as public information resources? And this is a really big tension. Of course, I come out of the field of library and information science. And so I’m—you know, it’s kind of thinking about the role that librarians and other kinds of information professionals have played—teachers, professors, and so forth—in helping us make sense of knowledge and knowledge management in the world. And yet, I see this kind of—this paradigm shift now to where—and of course, any of us, all of us in the room who teach students, we know that students now say they could never write a research paper, for example, without a Google search or some other search engine. And it doesn’t even occur to them to use the library, except as a study space. (Laughter.) But that’s a different talk for another time. (Laughter.)

So I started thinking about kind of, again, what are these tensions between advertising platforms and other kinds of public interest information resources, and how might we shift the conversation? And this is really what the book is about. It’s not really about trying to do away with the current, you know, set of companies. Although, I will say that I think the more diversity we have in information platforms, the better that is for democracy. And I’m not the first person to say that. There are many people who know that the more journalism we have, the more media outlets we have, the more schools and universities and other types of public institutions we have, the more we bolster democracy. And I think that this is important when we think about kind of one monopoly leader being trusted and, in many ways, displacing other kinds of public institutions, like public libraries or public interest media.

So you know, this leads us, of course, to the inevitable kinds of conversations that I think start to come to the fore, which is: What is this kind of space of information, and who are the people that are involved in it? And, you know, I feel very fortunate to have my colleague Sarah Roberts here, who I was lucky enough to convince to come to New York with me from Harvard yesterday. You know, she is a world expert on commercial content moderators. And these are people—now we’re talking about, you know, more than one hundred thousand people around the world at global sites—who are deciding, for example: Is the napalm girl and these photos of, for example, war—is that child exploitation material, or is that documentation or evidence of war and human rights abuse? And these are the kinds of tensions that we now start to see, and that Sarah has so well documented in her forthcoming book.

So I think that, you know, what we know is that the dominant discourse has been that large digital media platforms are simply conduits. They’re free-speech zones, where anything goes. Except that those of us who study these platforms know there’s actually a global labor force that’s involved in deciding whether things come down or whether things stay up. We know that there are a host of programmers and other managers and executives who make policy and decisions about content. And those are values-based. But there’s no transparency about what those values are, in many cases. And these are the kinds of conversations that I think scholars like me are trying to reveal with our work, more so to shift the conversation and say: Well, if there are values at play that we know are present, why aren’t they transparent, and what’s at stake when they’re not transparent?

And this is where I find we start to move into the realm of concerns about civil and human rights, and what the role is of industry in relationship to either fomenting something as, you know, basic as should pornography be the dominant representative information of children of color, or girls of color, or should platforms be mindful or even regulated around the—you know, their participation or their—the way that they’re used in ethnic cleansing projects, like the Rohingya in Myanmar, for example. Or, you know, the disinformation campaigns, for example, maybe around new civil rights movements, whether it be the Black Lives Matter movement, which—you know, we may look back in twenty or thirty years at the campaigns of propaganda and disinformation that have moved through these spaces to suppress civil rights movements in the way that we look back on the civil rights movement, quite frankly.

You know, at the time that the civil rights movement was underway, it was a vilified movement. Everybody marched with King, apparently, in my family, but we know that’s actually not true. (Laughter.) So we—you know, but we have a retrospective about, you know, our deep commitments. But at the time, we had very similar kinds of campaigns of propaganda and disinformation against, you know, empowering and giving rights to African-Americans, as well as to indigenous people, Latinos, and even women. So I think this kind of moves us into the space of thinking about what’s happening at a global scale. We know, for example, that the EU has been much more aggressive in terms of policy and thinking about what their conceptions of values are around speech and, we might call it, content. But that’s a very generic word for a lot of very different kinds of information that might be moving through a platform.

And, you know, I think that some of our conceptions in the U.S. are also often exported to other parts of the world, as if they’re kind of acceptable. And, you know, I’ve often found it interesting to see social media and other types of digital media platforms argue that they’re not responsible for the content that moves through their platforms, when Sarah’s revealing work shows that there are hundreds of thousands of people, working at subcontractors in many cases or as direct employees of these companies, who are curating and pulling down content. Or to hear that hate speech can’t be managed or acknowledged even in the United States because it’s free speech, and that platforms might argue there’s nothing they can do about it, except in Germany and France, where it’s illegal to traffic in antisemitism, for example, and that content does come down. So this is why it’s important for us to kind of understand the larger international scope of concerns.

And, you know, I also think that in talking with people in industry, still to this day, the incredible amount of financial risk and legal risk that these companies are facing in trying to think through how content moves through their platforms, and where the line of demarcation is in terms of their responsibility or culpability, is on their minds. And so we might be able to also kind of in the public interest play a role in conversation and dialogue with companies about how to solve some of these problems. And I certainly have been available for that.

So those are kind of, like, the broad strokes of things that I think maybe will get us at least to a provocation and, you know, a set of maybe some questions and conversations. You know, I’ve been talking about the harms that I think—and of course, you know, I’m not just an information science scholar. I’m also a communications scholar. And I understand, for example, the incredible harm that comes to the public when we, you know, ingest and have contact with stereotypical racist and sexist kinds of content that circulates in our societies. So, you know, it’s not of little consequence to have this kind of content and disinformation moving around. And, you know, when I was talking about this years ago, when I first started this work in 2010, people felt very sympathetic and sad for the black girls, and for the girls of color. But then when the same kind of mechanisms threw a presidential campaign in 2016, everybody started paying attention.

So, you know, I feel that it cost us quite a bit, quite frankly, in the political arena to have to see some of these mechanisms unfold. But I think it’s a really important moment now. And I don’t think we can leave this—these conversations to hoping that other people are having them. So I’m really grateful that you all are open to exploring these ideas, at least at this kind of—at this roundtable. And maybe I’ll leave it there.

POWELL: Thank you so much.

So I’m going to open it up and just ask you to put your cards sideways. I have questions of my own, but I want to maybe first go to folks around the table, and then I’ll jump in if there’s time. Let’s start with Mark Shulman.

Q: Oh, thanks, Catherine.

What a fascinating talk. And I was unfamiliar with much of what you were saying. And I’m really grateful for you revealing all this to me. In particular, you talked about the lack of transparency in the decisions made by people running big—the search engines, and that those—those non-transparent decisions reflect a lot of values, and that people are not taking responsibility for how their values play out to silence, or characterize, or mischaracterize people. But I don’t know, assuming they ought to be more transparent, what sort of responsibility ought there to be for deciding how to characterize, prioritize content? I couldn’t tell if you want the German model, where there’s criminal sanctions. Of course, there’s some First Amendment issues with that in this country. Do you want civil liability to attach to slander or libel? Or what sort of locus is appropriate for deciding on the algorithms and the responsibility for the aligning of content? And then who decides? Because I look at the FCC today, or the federal government today, and I think that if we charge them with the responsibility for deciding what’s hateful speech, you might find a lot of transgender kids being shut out of the internet.

NOBLE: Yeah. It’s a great question. And I think—I don’t think it has a simple answer. So let me just say that first. I mean, there are a few dimensions in terms of the consequences here, and data profiling in particular is another large dimension of what happens in the search engine. There are all the past things that we’ve done, the way in which our information and our participation in digital systems is brokered and sold across a whole host of other companies. I mean, it was rather amazing to me when the Cambridge Analytica story broke and everyone was shocked. And I was like, that’s the business model of the internet. (Laughs.) I mean, it’s buying and trading on our data, and the intense datafication of our lives.

So I know for sure we are already seeing evidence of things like technological redlining, which is something that I talk about in the book, which is this idea that certain opportunities may be foreclosed, and higher prices might be attached to certain kinds of people, right? Maybe not-so-good insurance rates, or higher premiums on a mortgage, or maybe not even being able to get a mortgage because of, again, new experiments that are happening around social credit and so forth. And these are being experimented with in other parts of the world, but also I’ve seen a lot of reports of people talking about kind of what is their data profile, and how might they ever be able to intervene upon that? And I think this is where search and social media companies play a huge role, because they are very large aggregators of online participation and what people are doing.

So I think we don’t have legislation that allows people in the United States, at least, to think about controlling their data profiles. We don’t have legislation like the right to be forgotten, for example, in the U.S. And I think that to the degree that people can come to know how their data profile is created and also to be able to intervene upon it is important. I once heard the director of the CIA at a meeting say that as far as the state was concerned, people are their data profile. And I would argue, a lot of people don’t know what their data profile is. So I think that—of course, that has all kinds of other implications around, you know, the freedom to read anything one might want to read in the world, right? The freedom to kind of explore and know things. I mean, if I’m trying to read up on al-Qaida, does that mean I’m part of al-Qaida, right? So these are complex issues that I don’t think we have an appropriate framework yet to do sense-making.

Certainly I think that—you know, the imperative for industry is profit-making. And so I guess the question is, you know, profits up to what point. Certainly every industry is interested in those imperatives and, quite frankly, is held to account for them. We have a paradigm in some industries about consumer harm that we don’t necessarily have in the tech industry. So, for example, we would never allow pharmaceutical companies to make drugs and just pass them out on the street and see what happens. And so, like, I hope it works out, right? (Laughter.) I mean, I can just—OK. We just—it’s inconceivable on a certain level. I know, now I’m—yeah. Don’t put that on the internet. I don’t know. Automotive industries. You know, other industries where we can see, well, what is the potential impact of consumer harm.

And of course, there are great scholars doing work around the harm. Virginia Eubanks has a new book out called Automating Inequality, where she looks at things like people losing their children in the foster care system because of erroneous databases, and the removal of those kinds of decisions from human beings, like social workers, and putting those into automated systems. And there’s just multiple examples. So I think we really don’t have the conversations in place that need to be had about what is—who is liable. You know, and as the state and government start to become deeply invested in these systems too, it will be harder and harder to extract or to do the sense-making.

You know, one of the things I often repeat, you know, Cathy O’Neil wrote this great book called Weapons of Math Destruction. And one of the things she said in it is, you know, you can’t take an algorithm to court. And this is part of, I think, the tension that we’re in right now around the—many of these media platforms—tech media platforms really argue that they are simply kind of the pipes and they are not responsible for the content that moves through them. And so I think this is—again, this could be argued. Some of us argue about where the culpability lies. And I don’t think that’s all been entirely sorted out.

So I guess, you know, those would be kind of the broad strokes. I mean, certainly I think we need—I can’t even say much more aggressive—we just need to be thinking in a more serious way about policy. I mean, I watched the congressional hearings when Mark Zuckerberg was brought in, you know, to account for what was happening in Facebook, and, you know, the answer is AI will fix it eventually. I mean, you know, people like us, we just—it’s like, we know AI is still trying to figure out is this table a table. (Laughter.) Are you kidding? So I mean, that is—is a cat a cat? That’s what’s going on in the state of the art of AI right now. So despite what anyone will mislead you to believe.

So I don’t think these problems will be solved by AI. I think they also get solved, again, in the realm of these content moderators and people who make the policy. You know, Sarah wrote a chapter in a book that I published a couple of years ago. And one thing one of her informants said—I should let you speak to this—but, you know, he said, you know, it was interesting to me. I worked for a large, you know, mega-tech company. And I didn’t understand how things like drug-related murders in Juarez, Mexico, had to come down from the platform, but beheadings in Baghdad got to stay up. It seemed weirdly like there was a relationship with U.S. foreign policy, but I couldn’t really say, because I’m too low-level a worker to know. But it just seemed like violence in some parts of the world was a go and in other parts was a no-go.

And then he would say, you know, I would ask my managers why it was we would take down animal mutilation videos, but blackface was in? Who decided? What were these kinds of, you know, decisions about—how did blackface, you know, and kind of this, like, gross, stereotypical kind of performance—upon whose values was it OK for that to be in and other things to be out? So these decisions are being made all the time. And when you think about the kind of—I mean, YouTube alone, you know, they were on “Good Morning America.” The vice president for YouTube was on it a few months ago. And he said—and this was kind of a recent statistic—he said four hundred hours per minute is being uploaded just to YouTube’s platform—four hundred hours of content per minute, twenty-four by seven.

So I think that, you know, again, when you think about the type of people that are having to flag that, or having to do sense-making of that, what we know is that that’s a volume that can’t actually even be tended to. So it stays up until the public takes it down, or others do. And I think, again, we have a lot of work to do in sorting out these lines of demarcation around responsibility.
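To make concrete why that volume "can't actually even be tended to," here is a back-of-the-envelope sketch using the four-hundred-hours-per-minute figure cited above. The review-speed and shift-length assumptions are purely illustrative, not figures from the discussion.

```python
# Back-of-the-envelope sketch of the moderation volume implied by the
# "four hundred hours of video uploaded per minute" figure cited above.
# The review-speed and shift-length assumptions below are illustrative
# guesses, not figures from the discussion.

UPLOAD_HOURS_PER_MINUTE = 400        # statistic cited for YouTube alone
MINUTES_PER_DAY = 24 * 60

hours_uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * MINUTES_PER_DAY
# 400 * 1,440 = 576,000 hours of new video every single day

REVIEW_SPEED = 1.0                   # assume a moderator watches at normal speed
SHIFT_HOURS = 8                      # assume one eight-hour shift per moderator

shifts_needed_per_day = hours_uploaded_per_day / (REVIEW_SPEED * SHIFT_HOURS)
# 576,000 / 8 = 72,000 moderator-shifts per day just to watch one platform's uploads

print(f"{hours_uploaded_per_day:,} hours uploaded per day")
print(f"~{shifts_needed_per_day:,.0f} eight-hour review shifts needed per day")
```

Even under these generous assumptions, comprehensive human review of a single platform's uploads would require tens of thousands of full-time reviewers every day, which is why moderation relies so heavily on flagging after the fact.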

POWELL: So I have five people on the queue. And I’m glad I’ve deferred my question. I have, like, a million questions at this point. But let me go to my neighbor here.

Q: Hi. Nan Keohane, Princeton University.

It’s a fascinating presentation. I realize that a lot of other people want to ask questions. So I’ll ask a question and ask you to give a brief answer to help us think about it further. My question is about path dependency and how these things get begun. So you said, you know, when people found pornography when they looked for black girls, it’s because that’s what they wanted to find? You sort of implied that? Does that mean that in an early period when only a small number of people were using this search engine, a few people somehow indicated that they wanted to find pornography, and then that became the norm? I know algorithms are part of the answer, but can you tell me conceptually how does that happen?

NOBLE: Yes. It’s capital. The porn industry has more money than everybody.

Q: So it’s done through advertising, or?

NOBLE: Well, Google search is an advertising platform. That’s what it is. So it’s a—you know, you have to remember that—not just Google. I mean, Facebook is optimizing content also in relationship to advertising.

Q: So it’s not as though someone—that it was the result of mass preferences that led to that. It’s that that’s what was provided by the capital-supported documents?

NOBLE: I think the way I see Google characterize what happens in its search when it reports out is that there are over two hundred factors that go into the decision-making metrics that they use about what to prioritize and what to sort out. You know, they don’t even index the entire web. They index about—last I saw, it was about forty-five percent of the web; it might be more than that. So those who can pay to optimize and pay more to have their content connected to the keywords—“black girls,” “Latina girls,” “Filipina girls,” and so forth—they can outspend me or anyone else that’s interested in those keywords. And so, you know, the way AdWords works for Google, it’s a twenty-four-by-seven live auction. And people just pay big to optimize their content.

So industries that have a lot of money always end up on the first page. I mean, this is why if you’re looking, you know, for—and, of course, geolocation is important. You know, you’re not going to necessarily find content from Bangladesh. You’re going to get U.S.-based content. So, you know, geography matters. Also what other people have been searching, because Google’s trying to help shortcut and optimize in those ways. So there’s a kind of a confluence of factors.
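As a purely illustrative sketch of the dynamic described above, the toy scoring function below shows how an auction-driven ranking can let advertising spend outweigh topical relevance when content competes for the same keywords. It is not a description of Google's actual system, which reportedly weighs hundreds of factors; the fields, weights, and example pages are invented for illustration.

```python
# A deliberately simplified, purely illustrative model of an advertising-driven
# ranking. This is NOT Google's actual algorithm; every field and weight here
# is invented for illustration only.

from dataclasses import dataclass

@dataclass
class Page:
    url: str
    relevance: float   # 0..1: how well the page matches the query organically
    ad_spend: float    # 0..1: normalized spending to optimize for these keywords
    local: bool        # whether the page matches the searcher's geography

def score(page: Page, spend_weight: float = 0.7) -> float:
    """Blend organic relevance with paid optimization and a geolocation boost."""
    geo_boost = 1.2 if page.local else 1.0
    # With a high spend_weight, deep pockets outrank better-matched content.
    return geo_boost * ((1 - spend_weight) * page.relevance + spend_weight * page.ad_spend)

pages = [
    Page("community-nonprofit.org", relevance=0.9, ad_spend=0.05, local=True),
    Page("high-spend-advertiser.com", relevance=0.4, ad_spend=0.95, local=True),
]

for page in sorted(pages, key=score, reverse=True):
    print(f"{score(page):.2f}  {page.url}")
# The heavily optimized, well-funded page lands on top despite lower relevance:
# roughly 0.94 for high-spend-advertiser.com vs. 0.37 for community-nonprofit.org.
```

The point of the sketch is simply that whoever controls the spending term can dominate the ranking whenever the weighting favors paid optimization, which is the "confluence of factors" dynamic described in the answer above.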

Q: Thank you.

POWELL: And, by the way, I meant to mention—you mentioned Cathy O’Neil, the author of Weapons of Math Destruction. She came to speak here this past spring, which was fantastic.

NOBLE: Great. Great.

POWELL: Let’s go to Michael Walsh. Do you still—you took your card down.

Q: Oh, no. A lot of it was answered. But I—having started living in a conformed culture at the age of twenty, I’m just curious. Is what you’re going through now, or what people are going through in terms of black girls, or Latina girls, or whatever—does this also happen in other cultures? And if so, what is being done to try to balance out your research, or how do we find new ways of helping to open that up so that it becomes a global issue, and not just your own very specific research and technology? Because this is all brand new. And, you know, given what we saw from the past election, I’m just curious: How do we educate the public to participate in this research that you say has to be done?

NOBLE: Yeah. I mean, I—most of my work is based in the U.S., for sure. So that’s the site that I’m most familiar with. But I will tell you, for example, Elad Segev wrote a really important book about Google and the digital divide years ago as his dissertation. And he certainly had tracked kind of the misrepresentative kinds of information and the way in which groups who were in a minority in other countries were also held hostage in some ways, or had a harder time breaking through, let me put it that way, with their—with their content. And of course, these things are certainly important. I mean, the, you know, the cooptation of certain kinds of keywords, for example, often still happens in the U.S. And, I mean, many people have written, for example, about how Jewish people around the world have been maligned or their representations have been connected to Holocaust denial information or, you know, really white nationalist kinds of messages. And so we see that phenomenon happen outside of the United States, for sure. But, again, the interventions are different in different parts of the world because they’re—the policy requirements are different.

Q: But how do we—how do we—

NOBLE: I don’t know how we’re going to do it. But, you know, I feel like this is a really important step, is that we’re having larger conversations. And so—

POWELL: So, and in fact, just before this session we were talking about—I’m teaching the Rwandan genocide in my human rights class now. And I just showed the first part of the Frontline documentary on the genocide, which features the use of hate radio. And I’m just thinking now, if in 1994 these kinds of platforms were where they are today, how much, you know, even more amplified that message of hate would be.

NOBLE: Let me just add, though: something that I think is a little bit different from radio is that, you know, radio is also about kind of, like, the use of spectrum. But when we start talking about platforms, those are something different than radio in terms of the decision-making of what can be seen and what cannot be seen. And so I would just make that distinction a little bit in terms of, again, a different kind of curatorial process that’s happening. Certainly I would—we can’t overlook that in places like Rwanda part of what’s also fueling genocide there are the mineral wars and, you know, the selling and the trade of arms for minerals. And of course, many of the ways we don’t talk about technology in the West is we don’t think about, for example, the incredible extractive industries that are happening in the Congo, for example, and, again, that—you know, where, you know, the United Nations has said that the Congo is, you know, the greatest site of sexual violence in the world. So the concerns of my work kind of extend to thinking about the kind of material dimensions of this. Like, you know, the component parts, where are they sourced from, and what are the politics of that?

And certainly, you know, if we had to design this and say: No one dies, now go design. That would be really different than designing it with the cheapest possible, you know, minerals and commodities. And I think that’s something to put into play, especially when we talk about Rwanda. And certainly, you know, I spent time in Ghana, in Accra, studying e-waste and what happens with our devices when we’re done with them, and how they get loaded up into barges. And people think they just go to the recycling center. But they actually go on barges, and they’re shipped to China, the west coast of Africa, and other places, where, you know, huge new toxic e-waste cities are emerging, and people are being poisoned, and their lives are being cut short. And if we think that that’s not tied to refugee crises and the displacement of people when their lands are poisoned as we kind of rush in our thirst for the digital, I think we—again, we’re just being incredibly short-sighted.

POWELL: Let’s go to Sylvia Ann Hewlett.

Q: Thank you.

POWELL: It’s already on.

Q: It’s on? OK. (Laughs.)

Incredibly urgent work. Thank you.

NOBLE: Thank you.

Q: A small comment and then a question. We know that AI—sexism or racism are being baked into that as we speak. You know, Joy Buolamwini’s work, you know, she finds her face not recognized because the folks that designed the—or coded the algorithms didn’t have a wide enough range of skin tones to somehow pick her up. Now, companies like Microsoft, where I was recently, are working on the premise that if you get more diversity around decision-making tables, and around design teams, right, you finally will be more responsive to a bigger range of end-users—and perhaps to the content and the degree of empathy for new marketplaces—because it is true, you know, the biggest growth market in the world is not China, it’s women. You know, it’s a huge growth market.

So the potential of these diverse groups to actually exercise market power, if only they were represented around decision-making tables in the tech industry, is that some kind of piece of hope going forward? Because right now we know that design is unduly, you know, kind of dominated by young white guys, right? Which is making the problem of how it’s all determining the future that much more poisonous.

NOBLE: Yeah.

POWELL: Before you jump in, can we take a second question? Because, like, now there’s so many cards up, I want to get as many people in.

NOBLE: Yeah. OK. Yeah.

POWELL: Linda, you took down your card.

Q: I was just going to raise the question of how do you get people to even care? It seems that everybody that I know doesn’t—you know, I’m not on Facebook or anything because I don’t want people knowing this data. And I’m considered an absolute elephant in the room. So when people—

NOBLE: Or the smartest person.

Q: So when people are not—are not alarmed by this, how do we begin to get any traction?

NOBLE: Sure. OK. These are both great questions. And I think they go together well.

So one of the things that often is argued is that if we had more diversity in Silicon Valley or Silicon corridors—

Q: At the top.

NOBLE: Exactly. In management, but also among programmers, right? The kind of whole ecosystem. That we, you know, we could solve these problems. I guess where I part—you know, I think Joy Buolamwini’s work at MIT is trying to raise these concerns. You know, there’s also the—and this is because she doesn’t want false-positives, right, where people are recognized, but it’s not them. And of course, you might remember this study that the ACLU did, pointing kind of the market AI facial recognition technologies at the Congressional Black Caucus and half of them being—or, a third of them being flagged as felons and criminals because the AI is so unsophisticated. And that is what I mean when I say it’s still trying to figure out is a table a table.

So this idea that somehow if we diversify the data sets, diversify the training data, diversify the design work and the management, that that will solve it. And I think I just push back on this a little bit to say: Certainly we need to do—there’s nothing wrong with doing that. However, what we don’t have are people who are critical thinkers around these issues that are at the table designing. We don’t have people with Ph.D.’s, for example, in black studies, or ethnic studies, or gender studies, right, with the same level of legitimacy as a, you know, person with a bachelor’s degree in computer science who’s making these technologies and deploying them on the public, all right? So this is one dimension of kind of lack of expertise.

We also don’t find that these technologies are being invented and pointed at maybe—well, the question then is who are these technologies designed for? Who are they pointed at? So one of the things that’s very, you know, prominent now—L.A. has been an epicenter for predictive policing technologies, also using facial recognition and other kinds of technologies. And, you know, those policing, for example, technologies, or Microsoft, or Amazon’s AI around facial recognition, it’s pointed at the borders. It’s pointed—who is it pointed at? It’s pointed at low-income communities. It’s pointed at vulnerable people. These are people I consider kind of the data-disposable, who are experimented upon to perfect these technologies.

You know, no shade, but they don’t get pointed at Wall Street, let’s say, to say who might be the next person in here who’s going to defraud the economy, right? Like, these are, like—you know, I mean, I’m being a little cheeky, but I—you know, the questions have to be asked around, like, what are these technologies? Who are they working in service of? And who are they being exercised upon? And this, of course, goes to, you know, what we often find, is that the most powerful and influential are not engaging with these technologies at all, and their children are not being allowed to be raised on these technologies, right? It’s actually poor and working-class people, middle-class people.

So I think, you know, we have to think about kind of, again, like, what are the increasing mechanisms of control that are also built into the design, again, of, like, why we—why do we want these to exist? What is their purpose? And I think those are important questions too.

POWELL: Let’s go to Katharine and then Kenneth. We can group the two of them.

Q: OK. I got to do my microphone here. Can everybody hear me? Oh. Really fascinating. I was actually the first news editor of the Huffington Post, and did all the moderation, and got calls from Dick Cheney’s office, and Elizabeth Edwards, even, to take comments down. So I have stories.

NOBLE: I have somebody who might want to interview you.

Q: Yeah, yeah, yeah. Lots of stories. (Laughter.) I also was the head of digital at The Washington Post. So lots of stories.

So I actually do—I have a platform now that does all the large-scale gender diversity recruiting for corporations. Microsoft’s a big client. And to your—to what you’ve just said, it was super interesting. I have two—this is a question. We are getting—from a lot of these corporations they’re saying: We need more African-American women. We don’t need more white women. So we’re saying OK. But then our problem is, we need women to self-identify their race. And so as you’re pointing out, the more data that people put out, the more it can be used the wrong way. So I’m personally in this quandary right now, that, you know, we’re literally losing sales deals right now because we can’t get enough women to say, you know, how they identify. So that’s the opposite of this. And the GDPR stuff has also been very hampering on that side. So, you know, how do you sort of get the good players—and I’ve learned from years of being in digital, and some bad things. So there’s that.

And then to that point too, I’m starting to see a lot with venture capitalists coming in and investing in verticalized communities because they’re sick of LinkedIn. They’re sick of Facebook. They’re sick of Glassdoor. They want these communities that are created just for people who self-identify. So there’s a lot of money that starts—that seems to start—that’s going into just women-only communities, where you don’t have the rules of Facebook. And a lot of money I’m seeing, even like with construction worker communities. And the question I have for you is it’s making me nervous, because I just saw that Alex what’s-his-name, the Infowars—

NOBLE: Jones.

Q: Jones, yeah. That devil guy. He—(laughs)—he just created his own—he’s off Facebook. But I guess he’s coming back on there. But the rise of these very scary political verticalized communities, even outside the bounds of these companies that seem to have some responsibility, what do we do about them?

POWELL: And, sorry, if we can put Kenneth with Katherine.

NOBLE: You bet.

POWELL: Perfect.

Q: You don’t seem to want to suggest specific enforcement mechanisms to bring about the transparency of values that you suggest. What are some of the specific suggestions by people within the field as to how to bring about or to enforce the values that you’re suggesting?

NOBLE: OK. Let me start with that one, and then we’ll come back, OK?

Q: (Laughs.) I gave you a lot.

NOBLE: It’s not that I don’t want to suggest enforcement mechanisms. I mean, we have ideas and there are people who are already talking about the kinds of convenings that need to happen in more democratic ways, I think, to generate policy. Certainly we don’t have, for example, you know, at the level of, let’s say, the United Nations or some other appropriate international body enough of a framework—a regulatory framework, for example, for people to be protected from these technologies, right? How does one opt out of being facially recognized, for example, in the world? That doesn’t exist as a—you know, as a rule of practice, or of law. So I think that we certainly have a number of areas where we’re concerned about everything from I think, you know, worker and labor protections that should be afforded to people who do a lot of the kind of dangerous work that is part of this ecosystem that I’m talking about, to being protected, to having personal privacy rights that are, again, fairly non-existent in the U.S. context.

So I think there are actually a lot of axes where we can intervene. I mean, one of the things that I also argue for in my own work is that we can’t simultaneously on one hand regulate the tech sector and then gut every possible public interest institution that would stand up as an alternative too. So we can’t gut public media, public libraries, public education, public universities, and so forth, which really stand up in a democracy to be kind of places where knowledge can proliferate too, and not—again, not allow for these kinds of, you know, less evidence-oriented, if you will, or disinformation-oriented spaces to dominate as kind of the public good around education and information. So we have lots of things and places where we can invest, and we can intervene.

I will just say, you know, there—this idea of, you know, vertical markets is not new. I mean, some of us remember iVillage, BlackPlanet, I mean, MiGente—

Q: Gay.com.

NOBLE: Gay.com, PlanetOut. I mean, these are—we’re old. Been on the internet for a long time. So we remember those. And those were also extractive communities. There’s great research that’s been done on those. Those were really about targeted marketing and being able to have a target market. So of course, some types of information flourish in them, but also it’s about aligning, you know, consumer purchasing power with certain types of ideas and identities. So I think we can look to some of the past work and say, like, you know, what’s the staying power of them?

The issue isn’t so much, you know, that ethnic-based identity communities, whether it’s, like, women or other kinds of marginalized communities, have a space to be and generate community. I think the issue is, you know—and this is the work of people like Jessie Daniels, who’s at Hunter College here in New York. She wrote a great book called Cyber Racism. And in that book, she talks about how racist speech gets more visibility than anyone else’s. So it’s the distribution of visibility, quite frankly, that I think is also part of what we’re talking about. And we see that there are, you know, not just individual bad actors, but there is a—you know, the clickbait of racism and sexism, and in some cases even things going as far as genocide in certain parts of the world, is incredibly profitable. It moves through these platforms. And every time it moves, the companies make money, irrespective of their own opinion about it, right?

And so those are things. At what point can profit—I mean, you know, we can look back to history. You know, people have written—what’s the name of that book about IBM, the origins of IBM? You know, IBM perfected its punch card technology on the Holocaust. But no one remembers these histories and stories. We’ll have a retrospective about this moment. And I think we need to use our knowledge of the history of these systems to think through it.

POWELL: OK. So since you mentioned public media, we of course have to go to Fabiola, and then also Ryan.

Q: Thanks. I actually—my background is I was part of the founding team at Yahoo. So—and I actually was the head of the team that built and led Yahoo Europe. And so we led—Sarah will be very familiar with the very famous French case around taking down Nazi items. And I share this—and I share this—I haven’t read your book, so I—

Q: It’s not out. It’s not your fault. It’s coming. It’s coming. (Laughter.)

Q: Oh, it’s not out yet. OK. (Laughs.) So what I found very interesting at the time—because it was the early days of the web, in the late ’90s—was what we set as the standard in Europe. I had a team across Europe. And we spent a lot of time talking about it. The way we managed search at that point in time was we actually had surfers, individuals who actually received the sites and then would categorize them. And so amongst my team at the time we decided—and I remember at the time thinking, oh my God, we’re playing God—we decided collectively—I had a very diverse team—what we were going to accept and what we were not going to accept. And the challenge became, when the volume became so large, it became very difficult to do that. So we did our best. And we kept on racing, right, to take things down, like antisemitic items, right, to try to take down child pornography. We were basically applying our own sets of values at the time.

What struck me, and the lesson that I learned, was I was able to control that in my domain, because this was the team that I had built and led. But I was incapable of convincing my counterparts—with whom I had also built, right, I was part of the original thirty-two in Silicon Valley—of the importance of some of these issues. And the real challenge was that we had a lot of traffic going to dot-com versus going to our local European and individual—you know, eight country sites at the time. And so to make a long story short, what it taught me was it helps to have the regulation, because the French case, right, brought things to the fore, right, and allowed them to start focusing and realizing, oh. But they still were able to say, oh, that’s something that just happens with those pesky Europeans, right? In those pesky European countries.

POWELL: Who don’t like free speech.

Q: Right. Who don’t like free speech. (Laughs.) But it didn’t change things here in the U.S. Now, what I’ve seen subsequently in the past eighteen years is that I see Silicon Valley has become more and more dominated by mass-scaling developers, I guess is the best way to put it, right? And there has been a shift. It’s been well written about and you can see it in some of the stats. And what concerns me is you have this issue here in the U.S., but similarly to the gentleman’s question before, even though you have regulation in Europe—the right to be forgotten is a very good example—I have friends who’ve recently tried to be forgotten, had initially been told they would be forgotten, and then Google has so much money behind them and so much lobbying money behind them, that they’re able to make them not forgotten. (Laughs.) Right?

So I guess, you know, you answered the question by saying that it’s quite a complex set of things that need to be done in order to address these issues. And I’d like to hear what you think of upstarts like Inrupt and Tim Berners-Lee trying to decentralize the web. Because I think this is not an issue that you’re going to solve in one way, right? This is an issue where you’re going to have to build awareness, right? There’s going to have to be, at some level, some regulation. But then, because these organizations have become so powerful on a global level, the next piece that you’re also going to have to do is figure out a way to take the web that you describe, which is so highly concentrated in the hands of a very few, right, and unpack that. I’d like to hear what you think.

POWELL: OK. I think Ryan and then you can close with your grand thoughts.

NOBLE: OK, great.

Do you want to have a comment? I mean, because it is a little bit directly related to content moderation.

Q: Well, I just wanted to say—and I think it ties in with some of these other provocative questions—that, you know, really what we’re trying to ascertain in so many ways is how do we fix the problem. And we have a number of solutions. We’ve talked about legal interventions. We’ve talked about policy and regulation at the nation-state level. We’ve talked about diversifying the workforce. We’ve talked about greater transparency. So I think the answer among those is yes to all, right? But some of them we must beware of. What does it mean to have localized nation-state regulation in an autocratic state, OK? What does it mean to diversify the workforce pipeline when we’re not attending to gross social inequity in the local community and in our material world? So there’s a disconnect.

And I often think that part of the problem that we’re dealing with is that Silicon Valley in particular—which as I think you astutely point out and know probably better than anyone in the room is dominated by particular ideologies of liberation through technology. It believes that social problems can be solved with technology without really attending to the dimensions and the extent to which those problems are fomented by the very technology that we’re talking about. So putting some black women as coders on the team, I’m here for that. Please do. And let’s do it. But let’s also think about this pipeline issue when we’re talking about extraction, or when we’re talking about shipping labor to the Philippines, as we ship trash to the Philippines.

And the one other thing I wanted to say is that, you know, the other bit of rhetoric about these technologies simply mirroring or reflecting human nature, human expression, yes. Yes, and they do things in an unprecedented way. They do them at scope, scale, volume, speed, and at a level of connectivity that we’ve not really attended to or seen before. And so that is actually not status quo. (Laughs.) That actually does something with all of these—if we started at a baseline of a lack of equity and a lack of justice and of strife and of, you know, communities being marginalized and oppressed, and we pipe that through systems that don’t attend to those fundamental things but circulate them at speed, we have a problem on the output side. But we have to start at the front end. And that’s all I’ll say. Thank you.

POWELL: I think we can give thirty seconds to Ryan and then thirty seconds to you to respond, yes.

Q: Thank you very much. Ryan Kaminski with the U.N. Foundation. It’s been a terrific discussion.

On this issue of education and awareness, you were talking about the hearings on Capitol Hill where, you know, WhatsApp was being compared with emailing. You know, it reminds me that the first resolution at the U.N. Human Rights Council on internet freedom was passed after the U.S. ambassador there brought diplomats from Geneva to Silicon Valley to talk about the internet and have, like, you know, a candid discussion. So my question is, you know, what platform—if that’s a helpful model—would be useful for that? Is it the companies? Is it scholars? Is it the U.N.? Is it another platform? And what group of people would be best to have that kind of leapfrog seminar to learn about these issues and gain a better understanding? Thank you very, very much.

NOBLE: Thank you so much. I don’t think that the foxes can guard the henhouse. So I think that there are a number of scholars—quite frankly, they have been mostly women—who right now have been at the forefront, in my opinion, of having the complex conversations about discrimination, ethics, and technology, and their impact on social inequality, and so forth. And I would be happy to provide a list to you at any time of people that I think are important and we could gather. I know that the U.N. has special rapporteurs that are looking at things like race and human rights. And we also have rapporteurs looking at technology and society. But we don’t necessarily have them meeting. And so that’s something that I think we’re interested in trying to bring about. Certainly we could add labor to that conversation.

So I think the U.N. plays a really important role. And I don’t think that policymakers at the moment are kind of scaffolded up yet on the kind of granular levels of the research and the evidence. And so that’s something that we can at least provide from our lane. And then, again, convenings and making—I mean, making this work legible to policymakers is really important. And I’m not sure that, you know, that should be led by, you know, the current industry players.

POWELL: Thank you so much. So much there. (Laughs.) Hopefully we can bring you back from Oxford at some point for a follow-up discussion.

NOBLE: I’d love to come to New York anytime. Listen, I just want to say thank you so much for this opportunity and, of course, for your leadership. And the invitation is really meaningful to me. So thank you for the opportunity to share some ideas today. (Applause.)

(END)

