The AI Bubble + The Productivity Paradox + India’s AI Summit
Is there an AI bubble, or just an OpenAI bubble? Markets remain focused on whether valuations can be justified by sufficiently fast revenue growth, while the real economy braces for AI’s impact on productivity, jobs, and other disruptions. With global leaders and tech CEOs convening in India to debate AI governance, the stakes are rising fast. Credit markets, hiring data, and business sentiment could signal whether this year will bring a continued jobless expansion or something more concerning.
Hosts
- Sebastian Mallaby, Paul A. Volcker Senior Fellow for International Economics
- Rebecca Patterson, Senior Fellow
Producer
- Molly McAnany, Producer, Podcasts
Supervising Producers
- Gabrielle Sierra, Director, Podcasting
- Jeremy Sherlick, Director of Video
Audio Producer
- Markus Zakaria, Audio Producer & Sound Designer
Researcher
- Liza Jacob, Research Associate, Finance, Business, and Technology
Transcript
PATTERSON:
I’m Rebecca Patterson.
MALLABY:
And I’m Sebastian Mallaby.
PATTERSON:
Welcome to The Spillover. Each week we examine the ripple effects of global events, providing insight on the most important topics across economics, financial markets, technology, and geopolitics. Normally, we’re going to be releasing a new episode every week, but last week something unusual happened. The Supreme Court ruled against the White House’s IEEPA, or International Emergency Economic Powers Act, reciprocal tariffs. And Sebastian unfortunately was traveling, but I was lucky enough to have as my guest Mike Froman, president of the Council on Foreign Relations, who also happens to be a former U.S. trade representative. So it was great to be able to react to really important news like that. But Sebastian, I’m very happy to have you back.
MALLABY:
Well, it was the first emergency spillover, Rebecca. Congratulations.
PATTERSON:
Oh, well, thank you for that. And I’m guessing, given the world we’re living in now, Sebastian, that you and I will probably have a few more emergency spillovers in the coming weeks, months and years. But for now, I think we have a really great topic for this week, which we planned before all the recent market gyrations, which was artificial intelligence. Honestly, I can’t think of many issues that have more Spillovers than this. I also think we could spend many, many hours on this topic, and I’m sure in future podcast episodes we’re going to come back to it. But Sebastian, you’ve been writing a book on AI for the last few years. Your head is much deeper in this topic than mine is, although I’m trying. So why don’t I let you tell us today what parts of the AI narrative we should talk about?
MALLABY:
Well, I guess if I had to choose amongst many options, right now, I would say first of all, the debate about is there an AI bubble is front and center for financial markets. Then for the real economy, it’s like what is going to be the effect on productivity and jobs and growth rates? So I think we should get into that as a second thing. And then because there was this summit just now in India with heads of state and tech leaders and so forth talking about AI, I think we should touch on AI governance because this does spill over into government policy very much.
PATTERSON:
A hundred percent. I mean, we’re seeing that just this week with the negotiation, if you will, between Anthropic and the Pentagon over what their technology can or can’t do for the U.S. government. And that India Summit, I was amazed, 86 countries represented. So this is clearly top of mind for every country in the world, as it should be. But I think your list sounds like more than enough to cover this week, and you can’t tell from my backdrop, but I happen to be sitting today in Menlo Park, which is pretty much ground zero for Silicon Valley and all things AI.
MALLABY:
We need 30 seconds on the vibe here. What are you feeling out there?
PATTERSON:
So, the vibe. Companies out here still have five-year strategic plans, three-year strategic plans, but they also have one-month plans and even one-week plans, because the speed of change is so fast that they have to constantly iterate. And so if we feel like it’s hard for us to keep up with all the changes going on in AI, it’s hard for them too out here, the people actually doing it. That was striking to me. One thing I didn’t hear, though, Sebastian, and maybe that’s because they don’t want to go there, is the word SaaS-pocalypse. I love Wall Street because we always come up with new bizarre words and terms to explain things, and one that’s definitely come up over the last few weeks is SaaS-pocalypse, the apocalypse for certain companies in software as a service. It’s shocking no one’s brought that one up yet.
MALLABY:
But that is one reason why we should be getting into this financial bubble debate right at the top, because of the SaaS-pocalypse, this apocalypse for software as a service. You take a company like ServiceNow, which is one of these enterprise software companies. It is down 25% in the past month while the Nasdaq is down just 3%, because the perception is that Claude and other AI models are very good at writing code, so any company that writes code for a business is going to be wiped out or completely disrupted, right? And in fact, it’s not limited to coding companies. You’ve seen bank stocks sell off. JP Morgan, I think, is down 4.5% in the last few days because of this idea that you’ll have agents making payments, maybe using stablecoins, who knows? And that’ll fundamentally disrupt banking. I’m a bit skeptical about that one, but it does show the extent to which AI is rattling financial markets.
PATTERSON:
These companies need to justify forward-looking expectations, and, especially in the U.S., they are trying to win the AI race. So their incentive is to go as fast as they can to push out new tools and new models, and that’s great in some ways. We have the ability to do things with some of these AI tools that didn’t even exist a few years ago, and that’s cool. At the same time, one person’s gain and benefit is another person’s disruption. So to me, it’s like AI has become the ultimate frenemy. It’s your friend and it’s your enemy depending on where you’re sitting, and sometimes at the same time.
So when I think about whether we are in an AI bubble or not, last summer, last fall, before we started to get some of these fears coming up, I think it was the right question to be asking. You had very concentrated positions in the market. You had a handful of stocks leading total gains for the market. You had very high valuations, not only current but expected going forward, which are possible but historically very unusual to actually achieve. So it was all fair to ask the question. And then of course we started seeing these companies, which had been funding their growth through free cash flow, starting to lean more and more on debt to fund their growth. And that, I think, has also fueled some of the bubble concerns.
Honestly, is it a bubble about to burst like 2008, 2009? I don’t know. My gut is no, and I’ll tell you why. I think we’re already seeing the bubble deflate somewhat. Some of these disruptions, some of these fears have led investors to diversify a bit out of some of these Mag Seven stocks. I actually read one guy this week switching from the Magnificent Seven, they can do no wrong, to the Hated Eight. He added, I think, Oracle to the Magnificent Seven. But it just shows you how the sentiment can turn. In a way, though, it’s good for structural longer-term trends. If you have some of the heat, some of the froth being taken out, that can actually support the longer-term bullish trends. So I actually think what we’re seeing right now is probably a good thing for the longer-term market. That said, you can’t rule out that the jitters we’re seeing right now, the deflation we’re seeing right now, turn into something worse, that we do get some momentum here that bleeds into the real economy.
So I don’t have high conviction which way that goes. Where I do feel strongly is just the importance of the question, because we know from the last 12, 24 months how important these companies are for the economy, both the capital expenditures, the CapEx, that have supported growth and the wealth creation that in turn is supporting consumption. So the answer to the question is going to be important for your economic view.
MALLABY:
Right. That’s very well put. For my 2 cents on whether there is a bubble or not: at a high level, for AI as a whole, it’s not a bubble. The 2008 comparison is entirely inappropriate, first because that bubble was built on a housing stock which hadn’t changed a whole heck of a lot. Just the valuation of it and the financialization of it had changed. It wasn’t an underlying technology improvement. Here we do clearly have an A+ technology. The idea that this was just going to be hallucination and next-token prediction and it’s a stochastic parrot and all that stuff is just nonsense, and we know it’s nonsense now. We’re more than three years beyond when ChatGPT was released, and we’ve had iteration after iteration of improvement. We’ve got much longer-term memory in these systems. They have much longer context windows. They can be multimodal and deal with video as well.
Then there was the year of agents last year when the big discussion was agentic AI. This year it’s sort of world models and the ability to transition the AI into robotics and have AI that operates in the real world. So there’s just this constant stream of improvements. It’s clearly an A+ technology. But I don’t think, on the other hand, that it means that all of these AI companies are going to make it. I think some of them are fragile, and so there can be a bubble in specific companies. Top of my list would be OpenAI.
PATTERSON:
Okay. So we might have not a broad bubble, but a company-specific bubble. But that could be a canary in the coal mine, right? I mean, OpenAI is a huge, very high-profile company. If it has a major problem, could that feed through to the rest of the market? So why do you think OpenAI potentially is a bubble, and what do you think is going to happen?
MALLABY:
Well, I think the way to think about this technology, as I say, is that it’s an A+ technology, but it is an F- business model for now. These companies are building these impressive systems, but it’s difficult to charge money for them. And you see that in OpenAI’s data. They get more users steadily over time, I think they’re on 900 million or so active users, but the subscription numbers are not going up, because actually you can switch between at least half a dozen pretty good models. Therefore, none of them can raise prices on consumers very successfully. They could try running ads next to the output, and in fact OpenAI recently announced that they would do that. But if you irritate users with too many ads, again, they’ll just shift. So it’s not sticky yet, right?
PATTERSON:
Right, right.
MALLABY:
But my contention is that that will change over time as the models know the user better and they start getting the user’s credit card, doing shopping, paying the utility bills. And then it becomes harder to switch. At that point, you could raise prices, and if OpenAI has 900 million users, there is a hypothetical future in which it actually becomes the portal for all of the internet. And the internet is way more exciting and useful than it used to be because of AI. And so 900 million consumers, that’s a heck of a business if you can charge for it.
PATTERSON:
Right now, it’s fairly easy to switch. There are very few switching costs. I use different models. I like trying out different models, but I tend to lean more on Claude, and it wouldn’t take much for me to put in a prompt saying, “Take everything about me and switch it to this model over here.” But what you’re suggesting is, in the future, the more I use a model, the more it knows about me. So it becomes something like changing a long-serving family doctor. Or let’s say I have an account at a bank and it’s not just my checking and savings, but it’s my mortgage and it’s my insurance and my kids’ stuff; then those switching costs become really meaningful and I’m just not going to want to bother. Plus you’re saying they’re going to give me new things I can do with them. Is that right?
MALLABY:
Yeah, right. I mean, it’ll pay your utility bills and all that. And so then, at the point where it’s sticky, prices could be raised. But the question is, and this is why we’re running an experiment not merely with an unprecedented technology, AI, but with the nature of global capital markets: how much can they finance this gap between a great technology today, which is losing money, and this vision of the future where it might make money? We’re used to thinking that capital markets, particularly American capital markets, are the kind of eighth wonder of the world, that they’re totally bottomless, that of course they can finance something which will pay off later. But the sums of money required to get OpenAI from here to 2030 are huge. I mean, it’s not billions. It’s hundreds of billions. They project, and these are leaked projections that you can read in the press, that they’re going to lose, I think it’s 660 billion between now and 2030. We haven’t witnessed this ever in capitalism.
PATTERSON:
But you’ve got a situation today where bankers all over the place in the United States think it’s the year of M&A, the year of IPOs. They’re talking about a record year for IPOs, but it’s a little harder to have an initial public offering if you’re bleeding that much money. Will people have the faith that you’re going to turn it around quickly and fund you? So I guess that’s the public market side you’re describing. And then on private markets, if this had happened five years ago, it might be a different answer. But today you’re not seeing private equity, for example, giving the distributions that a lot of their longtime clients expect and want. Therefore, they’re having a harder time raising new capital, and now they’re getting a little anxious about all the exposure they have to AI writ large, both the AI builders and the potentially disrupted companies. So to your point, Sebastian, where does the money come from this time to avoid a financing cliff, right?
MALLABY:
I mean, I think you highlighted just then doubts about the supply of private equity because private equity industry has this moment when it’s a little bit tougher to raise money. But then there’s also the question about the demand for that capital because when OpenAI in 2025 raised $41 billion in private transactions, that was by far and away the biggest ever private fundraising operation in history. Now there’s a rumor that they’re going to raise a hundred billion in the next round, so two and a half times more. Which is, in other words, it’s like they’re breaking their own record, more than doubling it. It’s an incredible amount of money, a hundred billion dollars, but it’s a small amount of money relative to what’s needed, which is the 660 I mentioned before. So how many times can they beat their own record, up it by 2.5X? Can they keep the hype machine going?
The latest thing is, “Oh, we’re going to supplant the iPhone. There’s going to be a new device. Forget Apple, forget Android.” I don’t think so. I mean, even if you’ve… It’s a debate, but you can’t say that it’s likely that you’re going to disrupt the most useful consumer product of the last 20 years. So I think that OpenAI has a 50% chance of going bust because it just can’t raise the capital to get to the future where it might be profitable.
PATTERSON:
Okay. That line right there, your sentence right there, Sebastian, that’s clearly the takeaway from this podcast episode. I don’t know what’s priced into the markets. It’s harder to tell with OpenAI, but I doubt it’s a 50% chance that that company goes bust. And I have to think, if we woke up tomorrow and that was the top of the Wall Street Journal or the New York Times or CNBC, the markets wouldn’t like that very much. That would be a big shock, and we would probably get a pretty negative, violent reaction that could easily turn into something that feeds into the real economy. Am I overreading this, or… That’s a big statement.
MALLABY:
So I say yes and no, maybe. I mean, I agree with you in the sense that of course it would be a massive psychological shock if this iconic company, OpenAI, which is sort of the poster child for the whole AI revolution, were to fail. That would be a big deal. And there would be connected companies that are building compute clouds, data centers, for OpenAI, and whether they would still have a customer would become uncertain. That’s why Oracle is down by more than 50% in the last few months. That’s one of the Hated Eight that you mentioned earlier. Because they’re building data centers for OpenAI, and if something happened to OpenAI, that would be bad for them. So there would be a knock-on effect.
Now to your point about you can’t see OpenAI’s risk in the private valuation, you’re right. I mean, there’s something fishy going on when OpenAI’s business partner, Oracle, is down more than 50%, but OpenAI is talking about raising more money at a higher valuation than last year. In fact, a lot higher. It was worth 500 billion coming into 2026. Now there’s discussion of a new fundraising round. We’ll see what the valuation comes out at, but the rumors are kind of 800 billion. So that’s a 60% markup when the business partner, Oracle, is 50% down. Something doesn’t compute here, right?
PATTERSON:
Well, it could be that investors are using some of these public companies as proxies to hedge the risk that you’re describing. Whether OpenAI goes bankrupt or not, if OpenAI runs into challenges, you’re thinking about the spillovers, which companies would be most affected. And depending on what kind of investor you are, you reduce those in your portfolio, you go short, or you look for things that would hedge your bets if that negative outcome for Oracle occurred.
MALLABY:
Right, totally. Okay. So that’s the negative story that OpenAI might fail, there might be spillover effects for Oracle and so forth. The positive story, which I think we should also mention here, and this is a classic kind of frenemy. It’s both good and bad. It’s evil and friendly. The one certainty here is uncertainty. But let’s just play out the other side of the scenario if OpenAI were to fail. Look, it’s still an amazing team of scientists. It’s still a product which has 900 million users. So somebody would buy it. I mean, Microsoft is the obvious one because they’ve already put a lot of money in and they don’t have a very good internal AI operation and they want one.
So let’s say Microsoft or some similar company buys OpenAI. Then they retain the scientific team, they carry on building models, the demand for data centers continues, and you simply have a change of ownership label on that particular AI lab. And I think that’s quite plausible. There would be a discount from the current valuation in what the acquirer would pay, so the equity investors in OpenAI would take a bath. Fine, these are sovereign wealth funds, what have you. They can afford to take a hit. So that would be the scenario, and I think that’s the most likely one. And what it would mean is that OpenAI would turn out to have been a bubble valuation, but AI as a sector is kind of okay and moves forward. And so I think that’s my base case.
PATTERSON:
So that gets back to your A+ technology. The business model right now, not great, but the technology underlying it is potentially truly great. And we’ve seen this movie before. I mean, I remember, oh gosh, I think it was early 2022. I’m in Davos, walking down the main strip. There were some interesting things on that strip, and one of them was Metaverse stuff. You could put on the VR glasses and see things, and maybe I’m just not cool enough, but I thought, “Okay, these VR headsets, they’re heavy. They’re so heavy, they hurt my head.” And, not that I know more than anyone else, but it didn’t shock me later in the year when the Metaverse hype kind of fizzled. And I think, coupled with the Fed raising interest rates because inflation was so high back then, you had those stocks taking a huge bath.
But then we got a reset, to your point. The stocks came down and then, bingo, late November 2022, ChatGPT. So the combination of a new catalyst and lower valuations allowed everything to climb back up. So maybe what you’re describing may end up being a little bit like that: you get a painful reset in valuations, but then we go back to the longer-term trend.
MALLABY:
And I think the Metaverse thing is interesting and good to think about, but it’s just different from the invention of a new form of cognition-
PATTERSON:
Hundred percent.
MALLABY:
… which is a far more profound change.
PATTERSON:
A hundred percent. Yeah. I mean, just from the things I saw yesterday and a few of the demos I was at: kinks are constantly being worked out. Technology is iterative. Nothing’s perfect the first time around, but the progress these companies are making so quickly, I mean, it’s incredible. It really is just incredible. It blows you away.
MALLABY:
Okay, so maybe we’ve talked enough now about this bubble question, but let’s move to the real economy. You follow that very closely. You’ve sometimes described yourself as the numbers girl. So tell us how you see all of this impacting the real economy.
PATTERSON:
Yeah, well, let’s talk about the good news first. If you can take a step back from the market volatility we’ve had over the last few days, there’s an underlying dynamic in the U.S. economy today which is quite strong. Earnings growth has been robust so far. You have huge tax refunds that are going to flow through and turn into consumption starting now, basically, for the next several months at least. The Fed has more room to cut rates if it needs to. And the AI CapEx story, this is money that is coming. It’s hard for these companies to pull it back much because there are contracts signed, there are data centers being built. And if you believe the estimates for this year, what the top five companies say they’re going to spend, it’s something around $700 billion. I mean, just wrapping your head around that number, that’s one year’s CapEx. It’s the size of Sweden’s entire economy. So this is massive stimulus.
So it’s important to know, even when we’re thinking about the AI disruption, that it’s happening against a backdrop where the broader economy, and there’s lots of things we could parse about it, looks good. The risk to me from the OpenAI scenario is that if it happens, if OpenAI runs into real trouble, it’s happening at the same time as these disruption risks. And the combination of those could lead companies, not the AI companies necessarily but companies more broadly, to reduce investment and reduce hiring. And I think that can easily snowball. Maybe we don’t see a recession, but we could easily see a couple of quarters of significantly slower growth. Because if more people are losing their jobs, they don’t have income. If they don’t have income, they’re not spending. If they’re not spending, companies don’t have revenues, and therefore the earnings growth expectations get revised down. So there’s a negative feedback loop. Eventually the Fed would respond with rate cuts, but that might take a little while. So I could absolutely see that happening this year.
MALLABY:
Yeah, I feel like there are maybe two questions we have to disentangle in the impact on the real economy. One is, let’s say it’s an A+ technology. Let’s say therefore it enables companies to produce more stuff with fewer workers. That means higher productivity, more profits. The question is, which types of company capture that benefit, that upside from AI productivity? And the SaaS-pocalypse story is that the makers of the AI foundation models, Anthropic and OpenAI and Google and so forth, are going to disrupt the incumbents.
But there’s another story, which is that if you take banking, for example, you’ve got JP Morgan. They have a great brand, they have very entrenched relationships with big customers all over the world. It’s a very regulated industry. So there are big barriers to entry. The prospect that some AI-native startup disrupts them, it happens a bit on the margin already. You see it in a few things, like Stripe’s success in payments, and maybe in wealth management, where AI-native, software-based platforms like Affirm have done quite well. So I’m not saying there’s zero threat to JP Morgan, but I feel like JP Morgan is going to integrate its own AI into its business faster than the AI-native people can muscle in and try to do bond underwriting or something. So it’s by no means clear whether the incumbents implement AI faster than the AI companies manage to bust into the incumbents’ entrenched position. I think that’s the first question.
PATTERSON:
Yeah. And I’d add to that. To use JP Morgan as an example, companies like that, which have the resources and the strength of talent to build their own AI, to disrupt themselves, are actually in a position to be net beneficiaries, I think, or at least to slow down how they might be disrupted. The challenge, when we’re looking at the U.S. economy, is that most companies in the United States are small and medium-sized enterprises, and they don’t have the money. They don’t necessarily have the resources to make the adjustments. I think it comes down in some ways to a couple of things. One is friction: can AI reduce that friction completely, and therefore you’re less needed?
But there are also companies that you’re going to need regardless of what happens with AI, and companies that are more asset-heavy, that are going to be around in 10 or 20 years regardless. We’re going to want nurses and doctors in 20 years. We’re not going to have AI robotics doing all of the operations in the world. We’re going to still want farms. We’re going to want fresh vegetables. Now maybe the tractors will be run by AI, but there are still going to be humans in the loop. So I think there are some jobs that are going to be there and maybe even grow, and there are going to be other sectors that are more at risk. And I think smaller companies in general, and companies whose work doesn’t require that human touch, are more at risk.
MALLABY:
So one dimension of this debate, as I was saying, is which types of company capture the upside from AI. But the second thing is how quickly that upside materializes. Because it’s one thing to invent great cutting-edge technology, which is happening super fast, and another thing to diffuse that into business processes. What matters, if you’re trying to understand the impact on the real economy, it seems to me, is not so much how many new models Anthropic released. It’s how Anthropic’s customers integrate that technology into their workflows. Can they really do with fewer workers? Is it more than just at the margin? And typically with new technologies, integrating them is the labor of years, maybe even decades, not just a few months.
PATTERSON:
Yeah, no, I agree with that. So I’d say two things on that. One, my base case is that we continue to be in a jobless expansion, that the economy can have positive growth but we just don’t see a lot of demand for workers as this stuff gets integrated. And I was hearing that last year, CEOs and CFOs saying, “We have to invest in AI. Either we want to or we feel forced to, and that costs a lot of money. Therefore we have to offset that cost somewhere else to keep our budgets stable, so we’re going to take that money out of personnel.”
And so it wasn’t that they were laying off a lot of people, they were laying off at the margin, but they just weren’t replacing and they weren’t adding. And I think that’s continuing. I thought it was interesting last fall, the CEO of Walmart was on the record saying that he thought AI could allow him to keep headcount flat for the next few years globally. And so if we have enough of those companies, then I think it takes a while to integrate the AI. And in the meantime, companies are going to be very cautious about adding heads because they don’t want to have too much personnel and realize they can frankly replace some of those jobs with AI.
When I think about how I watch this as an investor, how I look for those spillovers, just for the audience, there are a couple of markers I’m personally watching to see whether we stay in this benign jobless-expansion scenario or it gets worse. On the labor market itself, there are hiring intention surveys that come out every month. I would be looking at weekly jobless claims, especially continuing claims, people who have lost their jobs and can’t get new ones. If either of those weekly jobless claims series starts to rise more quickly, that tends to be something that can snowball in a negative way. I would also be watching credit markets pretty carefully. We haven’t seen credit markets show stress yet outside a handful of companies. But if you started to see the stock market anxiety bleed into credit, that would be another one.
So: sentiment, hiring intentions, labor market jobless claims, especially the weekly data because it’s so high frequency, and then credit markets. I’d say if you see those start to worsen quickly, or more than they have been, that could be a signal that this more benign outcome is getting worse.
MALLABY:
Yeah. And sort of fizzing away in the background, there are some other, maybe longer-term questions, like what does this do to human capital formation? It’s very tough for younger people to get onto the first rung of the labor market. That’s not just depriving them of income; it’s depriving them of experience and an opportunity to learn how to be a professional and all that stuff. And that could have long-term impacts on both societies and economies. And so it’s not playing out quickly. I think the Fed has been pretty clear in saying they don’t see direct AI disruption to the labor market yet. Sometimes people announce AI-related job cuts, and it turns out that really the AI was the excuse for a restructuring they had to do anyway. So I don’t think we see it in labor markets and productivity very pronounced yet, but we assume that it’s going to play out into the future.
And the other one I’ll just keep an eye on is what this does for interest rates, because as you and I have discussed on previous shows, this question of where long-term interest rates are has an enormous impact on government budgets and whether the big, big debt burdens are supportable or not. And it’s ambiguous, right?
PATTERSON:
Right. Yeah, it has been really interesting on that front. So, the idea that we have a productivity boom: first of all, we don’t know yet. I go a little nuts when I hear people saying, “We’re in the middle of the biggest productivity boom in years.” We don’t know. It takes quarters to understand if productivity growth is meaningfully changing, because at the end of the day, productivity is a residual. When you figure out what drives GDP, you put all the pieces together, and the piece you can’t explain is productivity, and it’s very noisy. So you don’t know yet, and we probably won’t know until sometime next year whether productivity is meaningfully changing because of AI. And I think the Yale Budget Lab, among other good research groups, is doing some really nice work trying to explain what’s happening with productivity and what’s not. So I’d refer people to that.
But in terms of, to your point, Sebastian, what it means for interest rates, there is one school of thought that if we have a productivity boom, you can get more output with fewer workers, it can be disinflationary and therefore you can have lower interest rates. And that’s been an argument the White House has put forward, that we could actually be cutting rates now because we’re so sure productivity is coming and it’ll be disinflationary. Again, I would argue we don’t know yet, and I wouldn’t advocate a rate change based on a hope. I think you want a little more data given the importance of rate setting.
But the other camp, to your point, we heard this articulated by Governor Barr at the Fed just in the last couple days. He suggested that it’s possible AI productivity is disinflationary, but it’s also possible that AI productivity leads to more demand for capital, that could actually push up interest rates. And again, to your point, while having higher growth is good, having higher interest rates also increases the payments on our government debt. And so which one of those grows faster is going to be really important for debt sustainability.
MALLABY:
I want to point out to everyone the implicit, or maybe explicit, criticism of the President’s nominee to be chairman of the Fed, Kevin Warsh, right? You’re disagreeing with him basically because he’s been fairly bullish on the idea that productivity gains resulting from AI might leave space for cutting interest rates. And it’s not so clear, actually.
PATTERSON:
No, I am not criticizing Warsh per se. I’m criticizing the argument, though. I appreciate that the Fed doesn’t want to rely only on backward-looking information, because some of the data we get is revised a quarter or two later. But to put too much weight on something that may or may not be happening is, to me, equally misplaced. And so honestly, I think one of the Fed’s strengths is the qualitative data it gets by going out to its various communities and talking to companies. The San Francisco Fed is talking all the time to the companies that I’m talking to this week. So they’re getting a better sense of what’s happening. I would trust what they say on productivity more than what one quarterly GDP report suggests.
MALLABY:
Right. But as I think you said at the top, even the people in these companies are not sure how fast they’re going to make progress, how that’s going to affect the real economy. They don’t know either.
PATTERSON:
Right. Again, it’s a reason to be careful making policy that’s going to affect the entire world, because Fed policy affects interest rates not just in the U.S.; it has spillovers to the entire world. We talked about that in our inaugural episode. You want to be really careful about the premise that’s making you change rates or policy in general.
MALLABY:
So talking about the entire world, should we discuss this India AI summit?
PATTERSON:
Yes, absolutely. Again, I think all these countries coming together is a good reminder of the geopolitics of AI: who’s winning, who’s losing, the U.S. versus China, but also all the other countries that are trying to understand how they can benefit from it, how they avoid getting left behind, maybe how they leapfrog. And then there’s the governance question. And I know, Sebastian, you’ve been focused a lot on that, and it’s hard to get that right. If the U.S. wants to win geopolitically, plenty of people here in Silicon Valley would say, “We can’t have any regulation. We need to go fast to win, to protect America from a defense perspective, to make sure we’re the leading economy in the world.” And yet when we go fast, it potentially has unintended consequences. If you believe the scenario that came out a few days ago (I think the firm was Citrini Research), a blog post caused a big market reaction because it talked about the unemployment rate shooting up to over 10 percent in a year or two. That’s the unintended consequence of go fast and break things.
So how do you get the governance right? It’s going to matter, not just for the US, but also globally and affect the geopolitics.
MALLABY:
Right. Well, with one of our CFR colleagues, Sebastian Elbaum, I co-wrote a piece in Foreign Affairs, which just came out recently, that grappled with this governance question. And part of our starting point was just what you said, Rebecca, which is that this is such a big change. The notion that the right response from governments is nothing is crazy. We had the Industrial Revolution, which I think was probably smaller than this cognitive revolution that we are on the front end of now. And after the Industrial Revolution, there were a bunch of actual political revolutions in Europe and then a couple of world wars. So the disruption was very, very profound, and I think it’s going to be very, very profound this time, and there needs to be some sort of government response.
And that was indeed the first kind of fallout, the first reaction to ChatGPT coming out at the end of 2022. In 2023 and 2024, you saw a bunch of moves, with governments setting up AI safety institutes. This was done in the U.S., in Britain, in the EU, and in Japan. And you had these global summits to discuss what should be done. The first one was in Britain at Bletchley Park, where they did the work on breaking the German military codes in the Second World War. So it goes back to Turing and the origins of computer science. And then there were follow-on summits in Seoul, in South Korea, and then in Paris. And so that momentum to figure out what should be done was quite palpable. And it has completely fizzled in 2025 and 2026.
PATTERSON:
And I’m frankly sad about that. Now, as for the AI safety institutes, I honestly don’t know if they were slowing down innovation. It certainly didn’t feel that way. But I do think there’s now a narrative from some actors, including some of the tech companies, that we can’t have things like that because they slow us down. But honestly, to me, it was so smart. You had communication between the private and public sectors, so they were talking to each other, collaborating, and you had collaboration across countries. And with a technology this powerful, I think we need global coordination. We need a global shared understanding and, hopefully, rules of the road, so we don’t get some of those really scary unintended consequences. So I’m frankly very sad that all those efforts have fizzled out.
MALLABY:
And I think they fizzled out partly because the Trump administration is a bit impatient, or maybe more than a bit impatient, with both regulation and global coordination. Those two attitudes contributed to the global governance agenda being set back. But it’s also because AI CapEx, as we were discussing earlier, is creating another Sweden every year. It’s astonishing. AI is now so important to growth that people don’t want to stop it. And so the impetus to slow it down, to put sand in the gears, has diminished. And then, I think most important of all, is this race dynamic you’re alluding to: if we slow down U.S. labs or Western labs and China doesn’t do the same and races ahead, then we’re going to lose strategically, geopolitically, militarily. They’re going to integrate AI into their systems faster than we do. And so I think that race dynamic, particularly with China, is the most difficult and serious obstacle to sensible governance responses.
PATTERSON:
Yeah, but even the race dynamic doesn’t always add up completely. We can’t have any regulation of U.S. companies because they need to win and we need to stay ahead of China. And at the same time, we need to support our U.S. companies, so we’re going to sell our most advanced chips to China, which helps them in the race. So it’s hard for me to square that circle completely.
MALLABY:
Yeah, I mean, I think of it actually as a triangle, not a circle. In this essay I did with our colleague Sebastian Elbaum, we talk about the AI trilemma. And the idea is that if you think about what you would want to get out of governance of AI, you want basically three things. You want economic security, where you integrate AI fast enough that the companies in your country don’t just get outcompeted by everybody else. So that’s the economic one. You want military security, national security: you want to put AI into your surveillance, your intelligence systems, your weapons systems at least as fast as your adversary. So that’s objective number two. And then objective number three is sort of a bucket, societal security, where you don’t want rogue actors to use the AI for nefarious means. You don’t want deepfakes and so forth to disrupt elections. And you don’t want the tail-risk scenario of these systems somehow attacking humans because they develop their own objectives.
You’ve got a bunch of these societal risks, and you have to choose and be quite clear-minded when you are designing your policy. Are you going for economic security, national security, or societal security?
PATTERSON:
Well, it feels right now that societal security is getting left in the dust, at least here in the US. Would you agree with that?
MALLABY:
Yeah, I think that’s right. I think the Trump administration’s position is that we’re going to go as fast as possible. We’re going to maximize economic security, because U.S. companies will be ahead, and we’re also going to maximize national security, because we’re going to build it as fast as we can and adopt it in military systems. There’s a bit of a setback going on right now with the fight between the Department of War and Anthropic, but basically you’re right. The priority is full steam ahead. And if your priority is speed, by definition you can’t take the time to stress-test models before they’re released to the public, to do the red-teaming you ought to be doing, and to think about the safety side of the research agenda: how do you get systems that are interpretable to users, that explain what they’re doing, that avoid toxic outcomes? These things can be engineered in, but you do need to take the time to do it.
PATTERSON:
So what is the workable solution? Maybe you were just getting into that with what you said, but if you were at the White House, if you were advising Secretary Bessent, David Sacks, and the team there, what would you suggest they do on this front?
MALLABY:
I think there are two important policy adjustments that could be made. The first would be a tax regime designed not to raise money but to incentivize the private sector to do more safety research. The way you do that is: when a lab spends, say, a billion dollars on training an AI model, that spending would attract a tax, let’s say 5 percent or something, but the lab would get a credit if it diverted some of the billion to safety research. And that might include, by the way, funding academic safety research, which would be a way of reviving the computer science departments on U.S. campuses, which right now have been denuded because a lot of the talent has been sucked out and hired into the private sector. And so the pipeline of future computer scientists and AI scientists is weakened by the hot demand to put those people to work in the private sector right now. So I think you could address both of those problems with this safety tax that simply drives more private-sector spending into research on making the models safer.
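The mechanics of the proposed safety tax can be sketched with hypothetical numbers. The 5 percent rate comes from the conversation; the dollar-for-dollar credit and the spending split below are purely illustrative assumptions, not part of the proposal as stated:

```latex
% Illustrative safety-tax arithmetic (hypothetical credit design):
% tax owed = rate x training spend - credit rate x safety spend
T = \tau \cdot S_{\text{train}} - c \cdot S_{\text{safety}}
% With tau = 5%, a dollar-for-dollar credit (c = 1), and a
% $1B training run: the gross liability is $50M, so diverting
% $50M into safety research zeroes out the tax bill entirely.
```

Under a design like this, a lab that responds as intended pays nothing, which matches Mallaby’s point that the scheme is meant to redirect private spending rather than raise revenue.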
PATTERSON:
Okay. And there’s a second piece of your policy?
MALLABY:
That’s right. The second thing is that the U.S. set up an AI Safety Institute under the previous administration. It still exists, although it’s been given a new name, but its powers and funding are inadequate. It needs more revenue. And you could think about models a bit like the Fed, or the FDA for pharmaceuticals: they have sources of revenue that sit outside the budget, because they charge the people they regulate. So in the AI space, there are lots of government data sets that could be provided to the private sector on an anonymized basis for a fee, and maybe that fee could be used to support the work of the AI Safety Institute. So that’s one part of it.
But then the institute needs the authority to veto a model before it’s released if it looks dangerous. Because it’s crazy that the Fed supervises banks and can tell them, “You’ve got to change your capital ratios,” when a banking meltdown is a serious thing but probably less serious than some sort of AI apocalypse. And equally with pharmaceuticals, we have the FDA, which can veto the release of a drug. Why can’t we have that for cutting-edge AI models? I think we should, and at the very least we should position ourselves so that we can do it if we decide it’s necessary. And so we need to build up this AI Safety Institute. That’s proposal number two.
PATTERSON:
I think what you’re saying is sensible, but just to play devil’s advocate for a second, if we’re slowing down models or vetoing models and China’s going full steam ahead and doesn’t care, does that get back to another part of the trilemma, which is the geopolitical heft, the national security?
MALLABY:
Yeah. Look, this is a real problem and a real tension. But what I would say is: think about the Cuban Missile Crisis in 1962, and then think about the signing of the Nuclear Nonproliferation Treaty in 1968. There was only a six-year gap between near-Armageddon with nuclear weapons and something very constructive. Cold wars and geopolitical rivalries go through periods of extreme worry, tension, and danger, and then you get to détente at some point. So I don’t see why we wouldn’t assume the same with the current competition between China and the U.S. Right now, we’re in a very bad period. It’s very hard to talk to the Chinese about collaboration on AI, not least because we went through a phase of trying to deprive them of AI by banning exports of semiconductors and chip-making equipment. And that didn’t really work, I don’t think.
But if we create the institutions within the U.S., and we figure out how they work, we have a kind of operating model. Then, when the geopolitical window opens in the future and we can do diplomacy with China, we’re positioned to internationalize the safety policy, I think. Sometimes, when there’s a blockage on taking a risk seriously, you need to experience the danger viscerally, and the Cuban Missile Crisis was that: we came very close to total disaster. With civilian nuclear power, you had Three Mile Island and so on. So when things go wrong, they might go right afterwards, but we’d rather not have to go through the first step to get to the second.
PATTERSON:
Yeah. And maybe that takes me back to where we started this entire conversation, which is on disruption. That would be the ultimate disruption, which hopefully obviously we want to avoid. But either way, it feels like a takeaway I have from this conversation is that we’re now in a period of disruptions. And this last week shows us disruptions we hadn’t counted on, and we’re going to have more of those, whether they’re geopolitical or corporate or economic.
I hate to stop this conversation, Sebastian, but I know we’re going to come back to AI in future episodes. So maybe for now, let me see if I can summarize our key points from today. And then, as always, we want our listeners to stick around for something fun, interesting, or quirky that we want to share this week.
But in terms of what we’re trying to tell you today: I don’t think either of us comes down confidently on the side that this is a broad AI bubble. I think some things are a little bubbly, but we’re not in a big bubble; we might, however, be in a bubble for one company. And OpenAI is the canary in the coal mine that I think you especially, Sebastian, are watching. Overall, this is an incredible, transformative technology, an A+ technology. But in the case of OpenAI, given its financing needs and spending right now, it’s, as you put it, maybe an F business model. And the question is whether capital can be pulled together to get it from one side to the other. That’s a big question. So there’s maybe a 50 percent chance that OpenAI hits a financing cliff this year and needs to be rescued. And there are markers to watch to see if OpenAI’s troubles bleed into the real economy: weekly jobless claims, hiring intentions, sentiment.
I think our second takeaway is that even if we don’t have a bubble bursting this year, there are some real economic impacts we need to keep watching: jobs, productivity, and interest rates. But it’s early days, right? We don’t know. No one knows how much productivity we’ll get, or how much of a hit jobs will take, but as we get data, and as we get things like blog posts, you’re going to see market sentiment whipsaw. Our worry is that, longer term, this is going to have a serious impact on younger workers starting their careers as they try to figure out how to position themselves in this new world.
And then I guess last but not least, AI governance. As we think about the geopolitics of AI, governance is going to be a key piece of that. And the window for governance is narrowing fast. It’s not a choice between regulating and not regulating. It’s between having smart rules now or scrambling to write them after we look into an abyss.
MALLABY:
That was a great summary. Completely agree.
So, on the tip of the week: I’m not sure mine is so amusing, but I do think it’s worth mentioning. It’s about Blue Owl Capital. This is a private credit firm which, like many in the private financing space, has raised so much money from traditional backers (sovereign wealth funds, pension funds, endowments, and so forth) that it has gone off and raised money from retail backers as well. And about a week ago, there was this moment when the retail backers of Blue Owl got wind that they’d invested in a lot of these SaaS companies. And because of the SaaS-pocalypse, there was a sudden move to withdraw funds; it was kind of a run on the bank. And Blue Owl had to do some financial engineering and basically offload more than a billion dollars of assets to its internal insurance wing to insulate itself from that run risk.
But given how private equity and private credit have grown and become central to the way our financial system functions, and given how the industry is moving toward retail customers as a source of capital, I think we need to keep an eye on that. It might even be an episode for a future Spillover.
PATTERSON:
A hundred percent, I agree with you. I’m going to end us on a slightly more cheerful note and give you another fun Wall Street term you can throw around at cocktail parties this week. The term is HALO, and I’m not talking about angelic AI tools. I’m talking about heavy-asset, low-obsolescence companies: HALO companies. What we’ve seen in the last couple of weeks is a movement within financial markets out of both tech and the disrupted companies and into companies that we know will be around in ten or twenty years and that have lots of assets. Things like mining, things like nurses, firefighters. I’m making it up right now, but things we’re going to have regardless of what happens with AI. So you can talk about how we all need to have more HALOs out there these days. But maybe, Sebastian, with that, let’s call it a wrap. Thank you to everyone listening to The Spillover, and we’ll see you next week.
MALLABY:
Thank you, Rebecca. That was fun.
PATTERSON:
For resources used in this episode and more information, please visit CFR.org/podcast/Spillover and take a look at our show notes. If you have an idea or just want to chat with us, email [email protected]. Be sure to include The Spillover in the subject line.
This episode was produced by Molly McAnany, Gabrielle Sierra, and Jeremy Sherlick. Our video editor is Claire Seton, our sound designer and audio producer is Markus Zakaria. And research for this episode was provided by Liza Jacob and Daniel Hadi. Thank you so much.
The Hook: AI inspires both promise and fear. It brings the promise of transformational technology that could boost productivity, alongside fears of industry disruption and fast-moving job losses.
The Spillovers: These debates are already moving markets. The risk is not a broad AI bubble, but an OpenAI bubble specifically, raising the question of whether U.S. capital markets can keep financing heavy investment before revenues fully materialize. Even if valuations hold, the real economic effects on jobs, productivity, and global competitiveness are just beginning to unfold, with young workers and slower-moving economies the most exposed. The window for smart AI governance is narrowing, and policy decisions made now could either stabilize expectations or amplify future market shocks.
The Spillover is a production of the Council on Foreign Relations. The opinions expressed on the show are solely those of the hosts and guests, not of the Council, which takes no institutional positions on matters of policy.
Mentioned on the Episode:
Sebastian Mallaby and Sebastian Elbaum, “The AI Trilemma,” Foreign Affairs
Sebastian Mallaby, The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence
Alap Shah, “The 2028 Global Intelligence Crisis,” Citrini Research
Matt Shumer, “Something Big Is Happening in AI — and Most People Will Be Blindsided,” Forbes
Martha Gimbel, “An AI Productivity Boom? Don’t Count Your (Productivity Data) Chickens,” Yale Budget Lab