Academic Webinar: Big Tech and Global Order

Wednesday, February 15, 2023

Margaret O’Mara

Scott and Dorothy Bullitt Chair of American History and Professor, University of Washington

Maria Casa

Director, National Program and Outreach Administration, Council on Foreign Relations

Academic and Higher Education Webinars

Margaret O’Mara, Scott and Dorothy Bullitt Chair of American History and professor at the University of Washington, leads the conversation on big tech and global order.


CASA: Welcome to today’s session of the Winter/Spring 2023 CFR Academic Webinar Series. I’m Maria Casa, director of the National Program and Outreach at CFR. Thank you all for joining us.

Today’s discussion is on the record, and the video and transcript will be available on our website, CFR.org/Academic, if you would like to share it with your colleagues or classmates. As always, CFR takes no institutional positions on matters of policy.

We are delighted to have Margaret O’Mara with us to discuss big tech and global order. Dr. O’Mara is the Scott and Dorothy Bullitt Chair of American History and professor at the University of Washington. She writes and teaches about the growth of the high-tech economy, the history of American politics, and the connections between the two.

Dr. O’Mara is an Organization of American Historians distinguished lecturer and has received the University of Washington Distinguished Teaching Award for Innovation with Technology. Previously, she served as a fellow with the Center for Advanced Study in the Behavioral Sciences, the American Council of Learned Societies, and the National Forum on the Future of Liberal Education.

From 1993 to 1997, Dr. O’Mara served in the Clinton administration as an economic and social policy aide in the White House and in the U.S. Department of Health and Human Services. She is the author of several books and an editor of the Politics and Society in Modern America series at Princeton University Press.

Welcome, Margaret. Thank you very much for speaking with us today.

O’MARA: Thank you so much, Maria, and thank you all for being here today. I’m setting the timer on this supercomputer on my wrist to time my talk, which is very apropos, and it’s really great to be here.

I have a few slides I wanted to share as I talk. Since we had some really interesting, meaty, present-tense readings from Foreign Affairs as background for this conversation, as well as the review essay that I wrote last year, I thought I would set the scene with a little more history: how we got to now, and how to think in broad terms about the way the technology industry, this very distinctive set of very powerful companies, relates to geopolitics and the global order.

So I will share accordingly, and, Maria, I hope that this is showing up on your screen as it should.

So I knew that today I needed, of course, to open with something in the news: the ongoing questions around what was in the sky and what is being shot down, in addition to a Chinese spy balloon. This really gets at a question that’s at the center of all of my work.

I write at the intersection of economic history and political history, and I do that because I’m interested in questions of power. Who has power? What do they value? This is the operative question of the U.S.-China rivalry and of the concern about China and Chinese technology companies, particularly consumer-facing ones: what are their values?

And this is also an operative question about the extraordinary concentration of wealth and power in a few large platform companies that are based on the West Coast of the United States—(laughs)—a couple in my town of Seattle where I am right now talking to you, and others in Silicon Valley.

It’s very interesting that when one does a Google image search to find a publicly available image and puts in Silicon Valley, the images that come up are either the title cards of the HBO television comedy, which I was tempted to add, or, really, the iconic shot of the valley as a place: the Apple headquarters, the Spaceship, as it’s called, in Cupertino, which opened a few years ago in the middle of suburbia.

And, you know, this question of concentrated power was noted in the Q&amp;A among the background readings by several of the experts consulted: what is the threat of big tech geopolitically, and is concentrated power good or bad, an advantage geopolitically or not? It was something that many of those folks brought up, as did the other readings as well.

And this question of power, who has it and how it is taken, has been an animating question of the modern technology industry, and there’s an irony in it: the ideological granddaddy of Apple itself is the Whole Earth Catalog. I quote from it in the opening of my review essay that was part of the background readings, and I thought I would pop this up in full for us to think about.

This is Stewart Brand. This is the first issue of the Whole Earth Catalog; the full issue is digitized at the Internet Archive, as are so many other wonderful artifacts and primary source materials about this world. You open the cover and right here is the purpose: “We are as gods and might as well get used to it. So far, remotely done power and glory—as via government, big business, formal education, church—has succeeded to the point where gross defects obscure actual gains. In response to this dilemma and to these gains a realm of intimate, personal power is developing—power of the individual to conduct his own education, find his own inspiration, shape his own environment, and share his adventure with whoever is interested. Tools that aid this process are sought and promoted by the Whole Earth Catalog.”

The audience of the Whole Earth Catalog was not a bunch of techies, per se. It was back-to-the-landers, people who were going off and founding communes. The catalog, which was more a piece of art than an actual shopping guide, had all sorts of things, from books by Buckminster Fuller to camp stoves to the occasional Hewlett-Packard scientific calculator, making the statement that these tools could actually be used for the empowerment of the individual, because, of course, the world of 1968 is one in which computers and AI are in the hands of the establishment.

We see this playing out at multiple scales, including in Hollywood films like Kubrick’s 2001: A Space Odyssey, which, of course, follows Dr. Strangelove from four years earlier, also a satiric commentary on the concentrated power of the military-industrial complex. And computers were, indeed, things used by large government agencies, by the Pentagon, by Fortune 50 companies.

And so the countercultural computer, or personal computer, movement is very much about individual power, about taking these tools away from the global order, so to speak, and using them as a way to connect people at the individual level: put a computer on every desk, connect everyone via computer networks to one another, and that is how the future will be changed.

That is how the inequities of the world would be remedied. The notion of ultimate connectivity as a positive good was not something that originated with Facebook; it has much, much deeper origins, and that’s worth thinking about as we consider where we are in 2023 and where things are going from here.

It’s also worth thinking about the way in which the global order, and particularly national security and government spending, has played an instrumental role in the growth of the technology industry as it is.

Take, for example, the original venture-backed startup, Fairchild Semiconductor, which is legendary as really starting the silicon semiconductor industry in the valley; it put the silicon in the valley. Its eight co-founders are known as the Traitorous Eight because they all quit en masse their previous jobs at Shockley Semiconductor, working for William Shockley, the co-inventor of the transistor, and went off and did something that one did not do very often in 1957: start their own company.

This was something that you did if you were weird and you couldn’t work for people; that’s what one old-timer told me, reflecting back on this moment. But they did, indeed, start their own company and find outside financing, and this group contains Robert Noyce and Gordon Moore, the two co-founders of Intel, as well as Gene Kleiner, co-founder of the venture capital firm Kleiner Perkins.

This is really where it all began. And yes, this is a story of free-market entrepreneurialism, but it is also a story of the national security state. Fairchild is founded at a moment when most of the business in the Santa Clara Valley of California, later known as Silicon Valley, was defense related. This is where the jobs were. This is the business they were doing, by and large. There was not a significant commercial market for their products.

Fairchild incorporates itself in September 1957. A month later, in October 1957, Sputnik goes into orbit. The consequent wave of space spending is the literal rocket ship that gets Silicon Valley’s chip business going. The integrated circuits made by Fairchild and other chip makers in the valley go into the Apollo guidance system. NASA is buying these chips at a time when there is not a commercial market for them, and that enables these companies to scale up production to create a commodity that can be delivered to the enterprise.

And so by the time you get to the 1970s you are not talking about defense contractors in any way. These are companies putting their chips into cars, and all sorts of one-time mechanical equipment is becoming transistorized. And Intel is Intel, still one of the most important and globally consequential tech companies around, at the center of the action in last year’s CHIPS Act, not to mention others.

But there is this longer history, this intertwining with the military-industrial complex and with broader geopolitics, because, of course, the space program and the Apollo program were a Cold War effort. It was about beating the Soviets to the moon, not just doing it because we could.

That history really dissipates and fades from collective memory, in the Valley and beyond, with the rise of entrepreneurs like Steve Jobs, Steve Wozniak, and Bill Gates: young, new-style CEOs presenting a very, very different face of business, consciously apolitical, presenting themselves as something far apart from Washington, D.C.

And this notion of tech, big or little, as something separate from government and governance is perpetuated by leaders of both parties, not just Ronald Reagan but also Democrats of a younger generation. In the early 1980s there was a brief moment in which lawmakers like Tim Wirth and Gary Hart were referred to as Atari Democrats because they were so bullish on high-tech industries as the United States’ economic future.

And the way in which politicians and lawmakers from the 1980s forward talked about tech was very much in the same key as that of people like Steve Jobs: this is revolutionary, the tools have been taken from the establishment, and this is something apart from politics, something that transcends the old global order and makes a new one. In fact, in a speech in Moscow in May 1988, at the end of his presidency, Ronald Reagan frames the post-Cold War future as one in which the microchip is the revolutionary instrument of freedom. “Standing here before a mural of your revolution,” he says, beside a very large bust of Lenin, “I talk about a very different revolution that is taking place right now. Its effects are peaceful, but they will fundamentally alter our world.” The tiny silicon chip, no bigger than a fingerprint, is the agent of that revolution.

This is really remarkable if we sit back, take a deep breath, and think about it, particularly about what happens after that: decades in which leaders of both parties in the United States, and world leaders elsewhere, frame and understand the internet as a tool for freedom and liberation, a tool that will advance democracy.

Bill Clinton, towards the end of his presidency, famously said, effectively, I’m not worried about China, because the internet is going to make it very hard to have anything but democracy. And this notion of a post-Cold War end of history, with tech, and big tech, central to it, in fact aided the rise of big tech. It was a rationale for a light regulatory hand in the United States, allowing these companies to grow and flourish, and big, indeed, they have become.

But I want to end on a note about why this history is important, why this connective tissue between past and present actually does matter. It isn’t just nice to know. It is useful.

Lawrence Preston Gise was the first deputy administrator of ARPA, the agency created in 1958 in the wake of the post-Sputnik panic and now known as DARPA.

He later ran the entire Western Division of the Atomic Energy Commission (Los Alamos, Livermore, et cetera), a longtime government public servant. He retired to his farm in west Texas, and his young grandson came and lived with him every summer. And that grandson has talked throughout his life about what a profound influence his grandfather was on him, showing him how to be a self-sufficient rancher, how to wrangle cattle and build a barbed-wire fence.

What the grandson didn’t mention that much, because it wasn’t really relevant to his personal experience, was who his grandfather was and what he had done. But a few years ago, when Google employees were writing their open letter to CEO Sundar Pichai, saying, we are not in the defense business, we don’t like the fact that you are doing work with the Pentagon, and successfully pressuring Google and other companies to get out of doing work with the Pentagon, that grandson, Jeff Bezos, reflected: no, I think it is our patriotic duty to do this kind of work.

And as I listened to him say that on a stage in an interview, I thought, ah, that’s his grandfather talking, because this little boy, of course, was Jeff Bezos, the grandson of Lawrence Preston Gise. And that connective tissue, familial as well as corporate and political, is, I think, very relevant to what we have before us today.

So I’ll leave it there. Thanks.

CASA: Thank you, Margaret, for that very interesting introduction.

Let’s open up to questions.

(Gives queuing instructions.)

While our participants are gathering their thoughts would you start us off by providing a few examples of emerging technologies that are affecting higher education?

O’MARA: Yeah. Well, we’ve had a very interesting last three years, in which the debate over online learning versus in-person learning was not necessarily resolved. We did this mass real-time experiment, and I think it put into sharp relief the way in which different technologies are shaping the way that higher education institutions work, and the question of who controls the platforms and how we mediate the learning we do.

Even though I now teach in person again, almost everything that I do in terms of assignments and communication is through electronic learning management systems; the one we use at UW is Canvas. But, of course, there are these broader ethical and substantive questions about how AI-enabled technologies, including, notably, the star of the moment, ChatGPT, are going to change things; mostly the conversation has been about how students are going to cheat more effectively.

But I think it also raises bigger questions about how you learn and where the human is necessary. My take on it, aside from feeling pretty confident that the prompts for my midterm essay questions and research projects are so arcane that ChatGPT would have a very hard time doing a good job with them, although I’m looking forward to many a form letter being filled in by that technology in the future, is that this has a history, too. The concern about robot overlords is a very deep one. It predates the digital age, this anxiety about whether computers are becoming too powerful.

Of course, there is this question of artificial intelligence versus augmented intelligence: the computer augmenting what a human can do rather than replacing what a human can do, or pretending to have the nuance and the complexity that a human might be able to convey.

There are these bigger questions, and I imagine there are going to be some other questions about AI. Really, I think this is a very good learning moment, quite frankly. One of the things I teach about a lot is the information that is on the internet: who created it, how it is architected, how it is findable, and how those platforms have been developed over time.

And what ChatGPT and other AIs like it are doing is scraping this extraordinary, bounteous ocean of information, and the result is only as good as its source. Whatever you’re able to do with it, your source materials are going to determine it.

So if there is bias in the sources, if there is inaccuracy in the sources, that will be replicated. I think what it is, is a really good rough draft, a first draft, for someone with tacit knowledge and understanding to then come to. And I like to think of digital tools as ones that reveal the things that only people can do, that cannot be replicated. Even though a machine is informed by things that humans do, and now does this at remarkable speed and scale, we are still able to identify where humanity makes a difference.

And then my one last caution: the one thing you can’t do with any of these new technologies is do them well really fast, and the rush to them is a little anxiety-inducing.

CASA: Thank you.

Our first question is from Michael Leong, a graduate student at the University of Arizona. Michael, would you like to unmute and ask your question?

Q: Yeah. Hi, Dr. O’Mara. Hi, Ms. Casa. Sorry for any background noise.

I just had a general question about your thoughts on the role big tech plays in geopolitics. Specifically, we’ve seen with SpaceX and Starlink, especially with what’s going on in Ukraine, how much support has been provided to the Ukrainian Armed Forces, and potentially holding that over—(inaudible)—forces. So, basically, do we expect to see private companies having more leverage over geopolitical events? And how can we go forward with that?

O’MARA: Yeah. That’s a really great question. You know, there have always been public-private partnerships in American state building and American geopolitics, and that’s worth noting.

From the very beginning, the United States has used private entities as instruments of policy, as parastatal entities, whether through land grants and transcontinental railroad building in the nineteenth century all the way through to Starlink and Ukraine, where, of course, the Pentagon is involved, too; SpaceX is a significant government contractor, as were companies before it.

I think where there’s a really interesting departure from the norm is what we’ve seen, particularly in the post-Cold War moment of the last forty years, and especially in the last ten to fifteen years: a real push by the Pentagon, and, more broadly, by national security agencies, toward commercial enterprises for technology, and a different model of contracting. This includes the push, when Ash Carter was in charge of DOD, to really go to Silicon Valley and say, you guys have the best technology, a lot of it is commercial, and we need to update our systems and our software.

The SpaceX partnership is one piece of that. But as the government has, perhaps, not gotten smaller but done less than it used to do, and as there has been more privatization, a vacuum has been left that private companies have stepped into. And I think Ian Bremmer’s piece made some really important points in this regard: there are things that these platform companies are doing that the state, or states, used to do, and that does give them an inordinate amount of power.

And, structurally, a lot of the control over these companies is in the hands of very, very few, including an inordinate, unusual amount of founder power. And although there’s plenty of political opinionating coming out of Silicon Valley now, that is really a departure from the norm; the kind of partisan statements and declarations of recent years are something you didn’t see very much before.

These are folks whose expertise lies in other domains. So that’s where some of my concern lies: you have these parastatal actors that are becoming, effectively, states, and heads of state, and yet they are not sovereign powers in the same way. They are speaking for themselves, from their own knowledge base, rather than for the public. That’s not their job.

CASA: Our next question is from Michael Raisinghani from Texas Woman’s University. Michael, if you could unmute.

Q: Thank you, Ms. Casa and Dr. O’Mara. A very insightful discussion. Thank you for that.

I just thought maybe you could offer some clarity around generative AI, whether it’s ChatGPT or Wordtune or any of these, in terms of the future. If you look, let’s say, five or ten years ahead, if that’s not too long, what would your thoughts be on this OpenAI playground?

O’MARA: Mmm hmm. Well, with the caveat that the first rule of history is that you can’t predict the future—(laughs)—and (it’s true ?); we are historians, we like to look backwards rather than forwards. But I will then wade into the waters of prediction, or at least what I think the implications are.

I mean, one thing about ChatGPT as a product, for example: kudos for a fabulous rollout and marketing, all of a sudden jumping into our public consciousness. OpenAI was able to release what it did in part because it wasn’t a research arm of a very large company, where things are kept closer because they might be used for that company’s purposes.

Google, for example, has in very short order followed on with the reveal of what it has, but it was beaten to the punch by OpenAI, because OpenAI was a different sort of company, a different sort of enterprise.

You know, a lot of this is already out there in the world. If we’ve made an airline reservation and had a back-and-forth with a chatbot, that’s an example of some of this that’s already out in the world.

If you’re working on a Google Doc and Google is doing what absolutely drives me bonkers, completing my sentences for me, that’s predictive text. Many things that consumers are already interacting with and that enterprises are using are components of this, and this is just bringing them together.

I think that we should be very cautious about the potential, the accuracy, and the revolutionary nature of ChatGPT or any of these, whether it be Bard or Ernie or, you know, name your prospective chatbot. It is what it is. It’s got the source material it’s working with, and this is not human intelligence. This is compilation, done very rapidly and remarkably, and in a way that presents with literacy.

So it does very cool stuff. But where does the future go? Clearly, look, the big platform companies have a lot of money and a great deal of motivation and need to be there for the next big thing. If we dial back eighteen months, there were many in tech who were saying crypto and Web3 were the next big thing, and that has not played out as some might have hoped. But there is a real desire not to be left behind.

Again, this is where my worry is for the next five years. If this is driven by market pressures, to have the best search, to embed this technology in your products at scale, that is going to come with a lot of hazards. It is going to replicate algorithmic bias and the extant problems with the internet.

I worry when I see Google saying publicly, we are going to move quickly on this and it may not be perfect, when Google itself has been grappling with, and called out for, looking the other way on some of the real ethical dilemmas and the exclusions and biases that are inherent in some of the incredibly powerful LLMs, the large language models, that it is creating.

So that’s my concern about letting this genie out of the bottle and letting it become a mass consumer product. OpenAI, to its credit, has a lot of disclaimers on ChatGPT’s website saying, effectively, this is not the full story. And in Microsoft’s rollout of the technology embedded in Bing last week, Microsoft leaders, as well as Sam Altman of OpenAI, were very careful in their talking points to say this is not everything.

But it is very alluring, and I think we’re going to see it in a lot more places. Is it going to change everything? I think everyone’s waiting for, like, another internet to change everything, and I don’t know. The jury’s out. I don’t know.

CASA: Thank you.

Our next question is a written one. It comes from Denis Fred Simon, clinical professor of global business and technology at the University of North Carolina at Chapel Hill. He asks: technology developments have brought to the surface the evolving tension between the drive for security and the desire for privacy. The U.S. represents one model while China represents another. How do societies resolve this tension, and is there some preferred equilibrium point?

O’MARA: That’s the billion-dollar question, and I think it’s a relevant one that goes way back. (Laughs.) There have been many moments in the evolution of all of these technologies that raised the question of who should know what and what’s allowable.

If we go back to 1994 and the controversy over the Clipper chip, that was the NSA wanting to build a backdoor into commercially available software, and the industry squashed it because, among other things, it would have made it very difficult for a company like Microsoft to sell its products in China or other places if you knew that U.S. national security agencies were going to have a window into them. And, of course, that all comes roaring back in 2013 with Snowden’s revelations that, indeed, the NSA was using social media platforms and other consumer-facing commercial platforms to gather data on individuals.

You know, what is the perfect balance? I wish I had a nice answer. (Laughs.) I would probably have a really nice second career consulting and advising. But what is clear is that part of what has enabled the American technology industry to do what it has done is generating companies that have produced transformative products, whether you think the transformations are on balance good or bad. Everything we’re using to facilitate this conversation right now is coming from that font.

And democratic capitalism was really critical to that: having a mostly free flow of information and not having large-scale censorship. The postscript to the Clipper chip controversy is, two years later, the Telecom Act of 1996, which, on the one hand, was designed to ensure the economic growth of what were then very small industries in the internet sector and to prevent the telecoms from ruling it all. But it also made a call about speech on the internet: we are going to let the companies regulate that, and not penalize them when they decide they want to take someone down. That is really what Section 230 is. It’s not about free speech in a constitutional sense. It’s about the right of a company to censor or to moderate content, which is often the opposite of the way it’s understood or interpreted or spun.

But it is clear that the institutions that encourage free movement of people and capital have been pretty critical in fueling innovation writ large, or the development, deployment, and scaling of new technologies, particularly digital technologies. And I think you can see that playing out in other things, too.

So that has been, I think, a real tension, and there’s a market dimension to this, not just an ethical or political dimension: there does need to be some kind of unfettered ability for people to build companies and to grow them in certain ways.

But it’s a fine balance. When do you need to have the state come in, in what dimension, and which state? And this goes back to that core question: the powerful entities, what are their values? What are they fighting for? Who are they fighting for?

I don’t know. I’m not giving you a terribly good answer, because I think it’s a really central question that many have grappled with for a very long time.

CASA: Thank you. Our next question comes from Ahmuan Williams, a graduate student at the University of Oklahoma. Ahmuan?

Q: Thank you. Hi.

I’m wondering about ChatGPT, about the regulation side of that. It seems like it’s Microsoft that has invested itself in ChatGPT. Microsoft had gotten the Pentagon contract just a few years back.

So it’s kind of a two-part question. First of all, what does that say about government’s interest in artificial intelligence, and what can be done? I know the Council on Foreign Relations also reported that the Council of Europe is actually planning an AI convention, to figure out how a framework, some type of AI convention in terms of treaties, will work out.

But what should we be worried about when it comes to government and the use of AI in political advertisements and campaigns, about, basically, them flooding opinions with, you know, one candidate’s ideas and, therefore, them being able to win because they’re manipulating our opinions?

So what would you say would be the kind of regulation scheme that might come out of these new, flourishing AI technologies?

O’MARA: Mmm hmm. Mmm hmm. That’s a good question. I think there’s sort of different layers to it.

I mean, I see the Pentagon contract, the JEDI contract, being awarded to Microsoft, much to Amazon’s distress—(laughs)—and litigious distress, as a kind of separate stream from its decision to invest 10 billion (dollars) in OpenAI. I think that’s a commercial decision. I think that’s a recognition that Microsoft Research was not producing something comparable, that Microsoft didn’t have something in house.

Microsoft saw an opportunity to at last knock Google off of its dominant pedestal in search and make Bing no longer a punch line but actually a product that people would actively seek out and not just use because it was preinstalled on their Microsoft devices. So I see that as a market decision, kind of separate from the rest.

The bigger AI question, the question of AI frameworks, yes. This, again, has a longer history, and I kind of liken AI to the Pacific Ocean: it’s an enormous category that contains multitudes. Oftentimes when we talk about AI, the AI that we see and experience is machine learning. And part of why we have such extraordinary advances in machine learning in the last decade is the harvesting of individual data on the platforms that we as individuals use, whether it be Google or Meta or others. That has put so much out there that these companies can now create something new, and the state of the art has accelerated vastly.

Government often is playing catch-up, not just in tech but in business regulation generally. Another example of this in the United States is the late nineteenth and early twentieth century, with what were then new tech-driven industries, railroads and oil and steel, that grew to enormous size. Government regulators played catch-up and created the institutions that to this day are the regulators, like the FTC, created in 1914. That’s of that vintage.

So I think it depends. The question about electoral politics is, I think, less about government entities; it’s about people and organizations that want to be in charge of governments. New technologies of all kinds incorporate ever more sophisticated forms of, essentially, disinformation: information that presents as real and is not.

The increased volume, scale, sophistication, and undetectability of that does create a real challenge to free and fair elections, and also, in the American context, to preventing foreign intervention in and manipulation of elections, though it’s true in every context. That is, it challenges getting good information before voters and it lets bad actors exploit existing prejudices or misassumptions. That is an existing problem that probably will be accelerated by these tools.

I think there’s a strong case to be made, at least in the U.S. context, for much stronger regulation of campaign advertising that extends to the internet in a much stricter form. In that domain, I think we have pretty good evidence that not having that back end has made the existing restrictions on other types of campaign speech in other media kind of moot, because you can just go on a social platform and do other things.

I think the other thing that complicates this is the rapidly changing nature of the technology and the global reach of these digital technologies, which exceeds that of any other kind of product. It just is borderless, in a kind of overwhelming way.

That doesn’t mean government should give up. But I think there’s a sort of supranational level of frameworks, and then there are all sorts of subnational, domain-specific frameworks that could act as a countervailing force, or at least slow the roll of developers and companies in moving these products forward.

CASA: Thank you. Our next question is a written one. It comes from Prashant Hosur, assistant professor of humanities and social sciences at Clarkson University.

He asks, how do you—or she. I’m sorry. I’m not sure. How do you think big tech is likely to affect conventional wisdom around issues of great power rivalry and power transitions?

O’MARA: Hmm. Well, these definitions are always being redefined; who the great powers are and what gives them power is always being reshuffled. But, of course, markets and economic resources and wealth have been implicated in this for millennia.

I think that American tech companies and the tech platforms do have this kind of power, though I should preface this by saying, you know, none of the companies we’re talking about now are going to rule forever. Maybe that goes without saying, but it’s worth noting: we will have the rise and fall. Every firm will be a dinosaur.

Detroit was the most innovative city in the world a hundred and ten years ago. There’s still a lot of innovation and great stuff coming out of Detroit, but if you—if I queried anyone here and said, what’s the capital of innovation I don’t know if you would say Detroit. But back in the heyday of the American auto industry it was, and I think it’s a good reminder. We aren’t always going to be talking about this place in northern California and north Seattle in this way.

But what we have right now are companies whose products, unlike the products of Henry Ford or General Motors, go across borders seamlessly and effortlessly, unlike an automobile, where to sell in a certain country you have to meet that country’s fuel standards and safety standards, et cetera. You have a different model for a different market.

Instead, here, Facebook goes where it goes, Google goes where it goes, YouTube goes where it goes, and that has been kind of extraordinary in terms of internationalizing politics and political trends. The role of the internet in what we’ve seen globally has been extraordinary, both for good and for ill, in the last fifteen years.

And then there’s the great deal of power that they have in many different domains. Again, Ian Bremmer has also observed all the different things they do, and that is something that is different from twenty-five years ago. You now have companies based on the West Coast of the United States, with products designed by a small group of people from a kind of narrow, homogenous band of experience, who are doing things like transforming taxis and hotels, you name it, going everywhere. In the day of the first Macintosh, which was this cool thing on your desk, yes, it was a transformative product. It was a big deal, and Silicon Valley became a household phrase in the 1980s, and in the dot-com era, too, when everyone was getting online with the AOL discs they got in the mail.

But what’s happened in the twenty-first century is at a global scale and carries an influence across many different domains, including politics, this very deliberate stance of we are a platform for politics, that has reshaped the global order in ways that are quite profound.

This is not to say that big tech is at the root of everything. But let’s put it in context, and let’s also recognize that these are not companies that were designed to do this stuff. They’ve been wildly successful at what they set out to do, and they have a high-growth, tech-driven model that is designed to move fast and, yes, indeed, it breaks things. They are driven by quarterly earnings. They are driven by other things, as they should be. They are for-profit companies, many of them publicly traded.

But because, I think, they have in part been presenting themselves as, you know, we’re changing the world, we’re not evil, we’re something different, we’re a kinder, gentler capitalism, there has been so much hope hung on them as the answer for a lot of things. That has kind of treated states and state power as something of the past, when instead states need to step up.

CASA: Our next question is from Alex Grigor. He’s a PhD candidate from University of Cambridge. Alex?

Q: Hello. Yes. Thank you. Can you hear me?

O’MARA: Yes.

CASA: Yes.

Q: Yeah. Hi. Thank you, Ms. O’Mara. Very insightful and, in fact, a lot of these questions are very good as well.

So they’ve touched upon a lot of what I was going to ask and so I’ll narrow it down slightly. My research is looking at cyber warfare and sort of international conflict particularly between the U.S. and China but beyond, and I was wondering—you started with the sort of military industrial complex and industry sort of breaking away from that.

Do you see attempts, perhaps because of China, where the technology industry and the military are so closely entwined, by the U.S. and, indeed, other countries? You see increases in defense spending in Japan and Germany, but it seems to be specifically focused, according to my research, on the technologies coming out of that, looking to reengage that sort of relationship. They might get at that a little bit by regulation. Perhaps the current downsizing of technology companies is an opportunity for governments to finally be able to recruit some good computer scientists that they haven’t been able to—(laughs)—(inaudible). Perhaps it’s ASML and semiconductor sort of things.

Do you see, as part of that tension, a conscious attempt at moving towards reintegrating a lot of these technologies back into government?

O’MARA: Yeah. I think we’re at a really interesting moment. I mean, one thing that’s—you know, that’s important to note about the U.S. defense industry is it never went away from the tech sector. It just kind of went underground.

Lockheed, the major defense contractor, now Lockheed Martin, was the biggest employer, numerically, in the valley through the end of the Cold War, through the end of the 1980s, so well into the commercial PC era. But most of what was going on there was top secret, so no one was on the cover of Forbes magazine trumpeting what they’d done. And there has been a real renewed push to get software made in Silicon Valley, made in the commercial sector, deployed for military and national security use. Of course, this is completely bound up in the questions of cyber warfare: these existing commercial networks, platforms, and products are being used and deployed by state actors and nonstate actors as tools for cyber terrorism and cyber warfare.

So, yes, I think it’s just going to get tighter and closer. The stark reality of American politics, particularly in the twentieth and into the twenty-first centuries, is that the one place the U.S. is willing to spend lots of money in the discretionary budget is on defense. National security writ large creates a rationale for largely unfettered spending, a willingness to spend a lot of money on things that don’t have an immediately measurable or commercializable outcome.

That’s why the U.S. spent so much money on the space program and created this incredible opportunity for young companies making chips: they were the only ones making the things that the space program needed. And there was this willingness to fail, and a willingness to waste money, quite frankly.

And so now we’re entering into this interesting geopolitical competition between the U.S. and China, which has two dimensions in a way. My kind of blunt way of thinking about it is that it’s like the Soviet Union and Japan all wrapped up in one, Japan meaning the competition in the 1980s with Japan, which stimulated a great deal of energy, led by Silicon Valley chip makers, for the U.S. to do something to help them compete. One of those outcomes was SEMATECH, the consortium to develop advanced semiconductor technology. Its funding was important, but it was a fraction of the wave of money just authorized through last year’s legislation, the CHIPS Act as well as the Inflation Reduction Act and others.

So I’m seeing this kind of turn to hardware, and military hardware, and a lot of government-subsidized or incentivized commercial development of green technology and advanced semiconductor technology, particularly military but other semiconductor technology too, and bringing semiconductor manufacturing home to the United States. Even the dimensions that are nonmilitary, that are civilian, are kind of like the Apollo program: that was a civilian program, but it was done for broader geopolitical goals, to advance the economic strength and, hence, the broader geopolitical strength of the United States against a competitor that was seen as quite dangerous.

So that’s my way of saying you’re right, that this is where this is all going, and that’s why having a healthy sense of this long-term relationship matters. It’s healthy for the private sector to recognize the government has always been there, so it isn’t as though you had some innovative secret that the government is going to take away by being involved. And also think about the broader goals: who is benefiting from them, and what is the purpose, and recognize that many of the advanced technologies we have in the United States are thanks to U.S. military funding for R&D back in the day.

CASA: Our next question is written. It’s from Damian Odunze, who is an assistant professor at Delta State University.

Regarding cybersecurity, do you think tech companies should take greater responsibility since they develop the hardware and software packages? Can the government mandate them, for instance, to have inbuilt security systems?

O’MARA: Hmm. Yeah. Look, with great power comes great responsibility is a useful reminder for the people at the top of these companies, which are so remarkably powerful at the moment, in part because their platforms are so ubiquitous.

You see, for example, that Microsoft has really partnered with the White House and its occupants, kind of acting as an NSA first-alert system of sorts and being open about that. I think that’s been good for them from a public relations perspective, but I think it also reflects an acknowledgement of that responsibility, and of the fact that it is bad for their business if these systems are exploited.

Yeah, I think that, again, regulation is something that, well, it’s like saying Voldemort in Silicon Valley. Some people are, like, oh, regulation, you know. But there can be a really generative and important role that regulation plays, and the current industry has grown up in such a lightly regulated fashion that you just get used to having all that freedom. When it comes to cybersecurity and to these issues of national security importance, of global importance, and of importance to the users of the products and the companies that make them, there is, I think, a mutual interest in having some sort of rules of the road. And I think any company operating at a certain scale understands that it’s in its market interest not to be a renegade, but to be working with government.

But having a knowledge and an understanding and a respect for your government partners, your state partners, whether they be U.S. or non-U.S. or supranational, is really critically important. Sometimes tech folks are a little too, like, oh, politics, they don’t know what they’re doing, we know better. And I think there needs to be a little more mutual exchange of information, and, yes, some more technical people being successfully recruited into government would probably be a help, too, so that on both sides of the table you have technically savvy people who really understand the inner workings of how this stuff is made and don’t have simplistic answers like, oh, we’ll just take all the China-made technology out of it. Well, it’s kind of deep in the system. So having technologists in the conversation at all points is important.

CASA: Thank you. I think we have time for one more question. We’ll take that from Louis Esparza, assistant professor at California State University in Los Angeles.

Q: Hi. Thank you for your very interesting talk.

So I’m coming at this from the social movements literature and I’m coming into this conversation because I’m interested in the censorship and influence of big tech that you seem to be, you know, more literate in.

So my question is: do you think the recent trends of big tech collaboration with federal agencies are a rupture with the origin story of the 1960s that you talked about in your talk, or do you think they’re a continuity of it?

O’MARA: Yeah. That’s a great way to put it. The answer is: it’s both, though it’s something of a rupture. I mean, look, you have an industry that grows up with the military industrial complex all around it. For those who are writing and reading the Whole Earth Catalog in 1968, the military industrial complex is paying for their education, effectively, or paying for the facilities where they’re going to college, at Berkeley or Stanford or name your research university, University of Washington.

It is the available jobs. It is paying for the computers that they learn to code on and that they’re doing their work on. It is everywhere. And when you are rebelling against that establishment, when you see that establishment waging war in Vietnam, not as a power for good but a power for evil, a malevolent government you don’t trust, whose motivations you don’t trust, then you want to really push back against it. And that is very much what the personal computer movement, which then becomes an industry, is.

That’s why all those people who were sitting around in the 1970s at Xerox Palo Alto Research Center, Xerox PARC, just spitballing ideas, did not want to have anything to do with military technology. So that’s still there. And that ethos also suffused other actors in American government and culture from the 1980s forward, the sort of anti-government sentiment, and the concerns about concentrated power continue to animate all of this.

And the great irony is that this has enabled the growth of private companies to the power of states. (Laughs.) So both of those things are happening. And I think, in some ways, completely revolutionizing the whole system was not quite possible to do, although it is extraordinary how much has been done.

CASA: Margaret, thank you very much for this fascinating discussion and to all of you for your questions and comments.

I hope you will follow Margaret on Twitter at @margaretomara. Our next Academic Webinar will take place on Wednesday, March 1, at 1:00 p.m. Eastern Time. Chris Li, director of research of the Asia Pacific Initiative and fellow at the Belfer Center for Science and International Affairs at Harvard University, will lead a conversation on U.S. strategy in East Asia.

In the meantime, I encourage you to learn about CFR’s paid internships for students and fellowships for professors at CFR.org/Careers. Follow at @CFR_Academic on Twitter and visit CFR.org, ForeignAffairs.com, and ThinkGlobalHealth.org for research and analysis on global issues.

Thank you again for joining us today. We look forward to you tuning in for our webinar on March 1. Bye.


