The World Faces a Sharp Rise in Extreme Weather. Can AI Help?
from Energy Security and Climate Change Program

Two people embrace as they evacuate the devastating wildfires in the Los Angeles area, on January 8, 2025. David Swanson/Reuters

Artificial intelligence has emerged as a powerful tool to improve the accuracy and timeliness of forecasting, with 2024 proving to be a banner year for swift progress.

January 24, 2025 1:59 pm (EST)


Alice Hill is the David M. Rubenstein senior fellow for energy and the environment at the Council on Foreign Relations. Colin McCormick is the chief innovation officer at Carbon Direct, a commissioner with the Washington DC Commission on Climate Change and Resiliency, and an adjunct professor at Georgetown University.


The Los Angeles wildfires this January reached a size and ferocity that rank them among the most destructive fires California has seen. Entire communities have been wiped out, and more than 160,000 people were under evacuation warnings. At least twenty-eight people have died. Emergency warning systems failed to fully meet the moment: a false alarm sent to millions of residents sparked panic, some warnings came twelve hours late, and a lack of time stamps made it impossible to tell whether alerts were current or outdated.


Organizations tracking climate change say such extreme weather events are likely to become more common. But the ability to adapt may gain a crucial new tool: emerging artificial intelligence (AI) programs that power early warning systems.

Forecasts in a Hotter World

In the past twenty years, human-caused climate change has resulted in more than half a million deaths and massive economic damage. Rising temperatures have led to bigger storms, heavy precipitation known as “rain bombs,” larger wildfires, longer periods of extreme heat, and extended drought. According to the UN Office for Disaster Risk Reduction, from 1980 to 1999, the world suffered about 4,200 disasters that caused 1.19 million deaths and $1.63 trillion in economic losses. From 2000 to 2019, the number of major disasters soared to 7,348, with 1.23 million people killed and losses nearly doubling to $3 trillion. While some of the rising losses may be attributable to better reporting, inflation, and more people living in riskier areas, climate change also shares some of the blame.

Indeed, today’s intensifying disasters reflect what climate scientists have long predicted. A study published in Nature estimated that climate change will cause $38 trillion in annual global economic damage by 2050, based on climate impacts in 1,600 subnational regions worldwide over the past forty years.


In the face of accelerating climate change, effective early warning systems are needed more than ever to reduce mortality and economic harm. Such systems provide time to evacuate, lessen damage to homes, and take precautions like moving vehicles and livestock to higher ground to reduce business disruption. Providing just twenty-four hours of advance notice could reduce damage by 30 percent, says the UN Environment Program, which estimates that investing $800 million in early warning for developing countries could prevent losses of $3–16 billion.
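
Taken at face value, those figures imply a striking return: $3 billion to $16 billion in avoided losses on an $800 million investment works out to roughly $4 to $20 saved for every dollar spent on early warning.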

Yet only about half of the world’s countries have the resources to provide early warning about imminent weather conditions. To close the gap, the United Nations launched an ambitious “Early Warning for All” initiative in 2022, aiming to ensure that every person on the planet is protected by these systems by the end of 2027. 


But there are significant barriers. Effective systems require accurate weather forecasting, timely issuance, and appropriate infrastructure for disseminating warnings. For many countries, progress is stymied by a lack of technical expertise to develop, operate, and maintain tracking systems; unclear decision-making protocols for reaching local communities; and gaps in meteorological monitoring, historical data, and forecasting ability. Africa has the least developed land-based weather observation network, with many existing systems outdated and poorly maintained. According to the World Meteorological Organization, Africa has fewer than forty radar stations, unevenly distributed, with only about half able to provide accurate short-term forecasts. Up to 60 percent of Africans are not protected by early warning systems.

The Potential of AI

In both developing and developed nations, AI can play a meaningful role in improving forecasting. This technology can rapidly analyze and synthesize vast amounts of historical weather data, learning to recognize subtle, recurring patterns and using them to make highly accurate predictions about future weather. However, the introduction of AI also faces challenges, including limited observational weather data in many parts of the world, security issues, excessive power demands, and potential bias in outcomes.
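
As a minimal sketch of that idea, and not a description of any real system, the hypothetical Python example below fits a simple statistical model to a synthetic "historical record" and then steps it forward to produce a forecast. Systems such as GraphCast instead train deep neural networks on decades of global reanalysis data, but the underlying pattern is the same: learn the mapping from one atmospheric state to the next, then apply it cheaply.

```python
# Minimal, hypothetical sketch: learn a weather "pattern" from historical
# data, then roll it forward to make a forecast. The data and model here
# are synthetic stand-ins, not any real forecasting system.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "historical record": 5,000 time steps of a 100-variable state
# (a stand-in for gridded temperature, pressure, and wind observations).
T, D = 5000, 100
true_dynamics = 0.9 * np.eye(D) + 0.005 * rng.standard_normal((D, D))
states = np.zeros((T, D))
states[0] = rng.standard_normal(D)
for t in range(1, T):
    states[t] = states[t - 1] @ true_dynamics + 0.05 * rng.standard_normal(D)

# "Training": fit a linear map from each state to the next by least squares.
# This is the pattern-recognition step, vastly simplified; operational AI
# models use deep neural networks rather than a single linear map.
X, Y = states[:-1], states[1:]
learned_dynamics, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Forecasting": step the learned map forward from the latest observation.
forecast = states[-1]
for _ in range(10):  # ten autoregressive steps into the future
    forecast = forecast @ learned_dynamics

print("Forecast for the first variable, ten steps ahead:", round(float(forecast[0]), 3))
```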

Conventional, state-of-the-art weather forecasting uses detailed scientific calculations of the physics and chemistry of the atmosphere, land, and oceans. Major national weather services such as the U.S. National Oceanic and Atmospheric Administration (NOAA), and intergovernmental bodies such as the European Centre for Medium-Range Weather Forecasts (ECMWF), operate expensive and energy-intensive supercomputers to run these calculations, which can take hours to complete. Typically, these elite institutions produce and publish ten-day-ahead forecasts several times a day, using up-to-the-minute weather data from satellites and other systems for calibration.
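
As a rough illustration, and not the full set of equations any operational model solves, one of the simplest physical laws these systems integrate is the transport of a quantity q, such as moisture, by the wind field u:

\[ \frac{\partial q}{\partial t} + \mathbf{u} \cdot \nabla q = S_q \]

where S_q stands for sources and sinks such as evaporation and rainfall. Solving millions of coupled equations like this on a fine global grid, over and over, is what demands supercomputing time.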

Very few countries have access to supercomputers and highly trained weather scientists; the rest often rely on less accurate methods that can run on normal computers. This problem has inspired many research groups to try using AI to make accurate, timely forecasts. While training AI models is a time-consuming, energy-intensive process, using them to make forecasts is often hundreds of times faster and less energy-consuming than conventional calculations. This research has paid off dramatically over the past year; several emerging AI weather-forecasting systems are highly accurate and can run on a simple laptop or a tiny amount of online cloud computing.
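
As a rough, hypothetical illustration of that cost difference, the short sketch below times a ten-day autoregressive roll-out of a stand-in model on an ordinary CPU. The weights are random placeholders rather than a trained weather model, so only the runtime, not the output, is meaningful.

```python
# Hypothetical sketch of why running a trained AI forecast is cheap:
# one forecast is just a short sequence of matrix operations.
# The weights below are random placeholders, not a real trained model.
import time
import numpy as np

rng = np.random.default_rng(1)
D = 2000                                    # size of the model state (illustrative)
weights = rng.standard_normal((D, D)) / np.sqrt(D)  # stand-in for trained weights
state = rng.standard_normal(D)              # stand-in for current observations

start = time.perf_counter()
forecast = state
for _ in range(40):                         # forty 6-hour steps, roughly ten days
    forecast = np.tanh(forecast @ weights)
elapsed = time.perf_counter() - start

print(f"Ten-day roll-out completed in {elapsed * 1000:.1f} ms on an ordinary CPU")
```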

One important example is GraphCast, an AI weather model released in late 2023 by Google DeepMind. During the 2024 Atlantic hurricane season, GraphCast proved remarkably accurate at predicting the path of hurricanes, identifying the landfall location of Hurricane Beryl in July and Hurricane Milton in October well before conventional weather models. (Google has recently released an even more capable AI model called GenCast.) Other large technology companies such as Huawei and Nvidia, and a growing number of smaller tech startups, have also developed advanced AI weather models with impressive forecasting accuracy. 

Government weather services are working hard to figure out how to incorporate these new models into their forecasting services. The European forecasting center has adopted GraphCast on an experimental basis, offering AI-based weather forecasts on its website. NOAA has held workshops with leading academics and technology companies on AI weather forecasting, and NASA is collaborating with tech company IBM to develop foundational AI weather models. 

Meanwhile, innovative new partnerships are emerging to deliver AI-based weather forecasting to regions outside the Organization for Economic Cooperation and Development (OECD), including:

  • The UN World Food Program and scientists at Oxford University collaborating with the Climate Prediction and Applications Center of the Intergovernmental Authority on Development, an east African trade bloc, to bring AI extreme weather forecasting and early warning systems to countries in east Africa, including Kenya and Ethiopia;
  • The Philippine Atmospheric, Geophysical and Astronomical Services Administration’s plans to adopt AI weather forecasting; and
  • A coalition of international development banks and national governments announcing the $1 billion Agriculture Innovation Mechanism for Scale initiative at the COP29 climate conference in November 2024 to help national weather services across the globe adopt AI-supported weather forecasting with a focus on providing advanced forecasts to farmers.

The Limits of AI

While these AI models offer enormous promise for bringing accurate, specific weather forecasts to more countries and regions, they are not a panacea. AI-based forecasting continues to face funding, energy, and implementation challenges.

Funding for data collection. Measurements from local weather stations are needed to calibrate AI model forecasts, and these are not available in many parts of the Global South. Also, while AI models have shown very good ability to predict storm paths, they are not as accurate in predicting some other weather features, such as maximum storm wind speeds. AI-based forecasting depends on increased international funding to support the development of local weather data collection infrastructure. Just as governments need to invest in local meteorological expertise, they will need assistance to gain access to the AI capacity required to generate forecasts.

Dirty energy in AI. Global electricity use is surging with the rise of AI, much of which is powered by fossil fuels. In the near term, AI's energy drain will also require vigilance from governments to ensure that rising power demand does not undermine emissions reduction goals.

Private-public cooperation. It is also important to remember that even though private-sector technology companies like Google and Nvidia have led the development of AI weather models, those models depend heavily on historical and current observational weather data collected by government weather services and research programs. This dependence highlights the need to resolve important policy questions about the best way to structure ongoing public-private cooperation. Widespread adoption of AI-assisted forecasting will require greater collaboration between forecasters and local organizations to tailor global forecasts to regional needs.

Accessibility of forecasting. Another challenge is making sure that forecasts improved by AI are readily accessible and available to everyone, and that they are acted upon. In December 2024, residents of Mayotte, a densely populated archipelago between the African continent and Madagascar, reportedly did not heed warnings issued by Météo-France, the French national meteorological service, fifty hours before Cyclone Chido made landfall, potentially increasing the loss of life.

Accurate weather forecasting is, and should remain, a freely available public good. If people, communities, or companies want to pay for added services, they can, but core life-saving services should be free of charge. AI has the potential to greatly improve weather forecasting. With better forecasting, early warning systems have the potential to save more people’s lives, livelihoods, and homes. 

This work represents the views and opinions solely of the authors. The Council on Foreign Relations is an independent, nonpartisan membership organization, think tank, and publisher, and takes no institutional positions on matters of policy.

This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.
