from The Internationalist and International Institutions and Global Governance Program

Averting Global Catastrophe: A New IIGG Blog Series

Opération Licorne (“Operation Unicorn”) nuclear test, May 22, 1970. A 914 kiloton thermonuclear air burst over Fangataufa, French Polynesia. Galerie Bilderwelt/Getty Images

Nature and technology pose a worrying array of threats to twenty-first century civilization. These global menaces and the catastrophic risks associated with them are the subject of a new International Institutions and Global Governance program blog series. 

January 10, 2019

Blog posts represent the views of CFR fellows and staff and not those of CFR, which takes no institutional positions.

Coauthored with Kyle L. Evanoff, research associate for International Institutions and Global Governance at the Council on Foreign Relations.

A preoccupation with doom is a hallmark of our age. A coterie of Cassandras, including leading scientists like Martin Rees and the late Stephen Hawking, philosophers like Nick Bostrom, and tech entrepreneurs such as Elon Musk, has proclaimed the twenty-first century to be among the most dangerous in human history. A large swath of the public shares their pessimism: In a recent poll of citizens in the United States, Australia, Canada, and the United Kingdom, more than half of those surveyed believed that there is a fifty percent or greater chance that humanity will go extinct in the next 100 years.

Indeed, experts warn darkly of emerging threats that add to the list of natural and anthropogenic menaces capable of hobbling or even destroying civilization. These range from well-known dangers like global nuclear war to more speculative hazards, such as those posed by unsafe advanced artificial intelligence (AI).

How seriously should we take these concerns? Over the next few months, the Internationalist blog will take a closer look at global catastrophic risk. We will analyze specific dangers, including a large-scale nuclear exchange, a devastating global pandemic, a planetary collision with a large near-Earth object (NEO), a collapse of the biosphere, and the specter of the “rise of the machines.” We will draw upon existing scientific literature to assess the likelihood of such disasters and their potential impact, and we will consider feasible policy responses to mitigate or prevent these dangers, especially through the creation of more effective frameworks of international cooperation.

In this initial installment, we focus on two questions: Why has catastrophic risk suddenly ascended to prominence on the global agenda? And how should we think about this category of threat?

Why Has Global Catastrophic Risk Ascended to Prominence?

To the first question, a professional historian might well respond: plus ça change, plus c’est la même chose (the more things change, the more they stay the same). Predicting the end of the world has been a recurrent theme in Western culture. More generally, a capacity to consider what might go badly wrong seems to be hard-wired in the human brain, an evolutionary adaptation to an uncertain and often deadly social and natural environment. Catastrophism got an extra boost in the mid-twentieth century, of course, when the Manhattan Project bequeathed to us the nuclear age. Witnessing the first atomic detonation, physicist J. Robert Oppenheimer famously invoked the Bhagavad Gita (“Now I am become death, the destroyer of worlds”). The subsequent bombing of Hiroshima and Nagasaki led to the creation of the Bulletin of the Atomic Scientists, and the prospect of a nuclear arms race with the Soviet Union prompted the placement in 1947 of a Doomsday Clock on its cover, the clock’s minute hand perilously close to midnight ever since.    

Still, the current preoccupation with catastrophic risk is distinctive. For one thing, it is not limited to longstanding concerns like global pandemics or nuclear war, or even the now-familiar one of climate change. It also includes the adverse consequences of technological innovations, as well as relatively exotic dangers, such as those posed by collisions with large NEOs or accidents in high-energy physics experiments. For another, the intellectual preoccupation with risk is emerging not from religious quarters, as has so often been the case in history, but from a diverse group of secular scholars at the frontiers of knowledge. These contemporary philosophers, mathematicians, cosmologists, biologists, and computer scientists share a conviction that technology has been a double-edged sword, catalyzing unparalleled progress but also exposing humanity to unprecedented dangers. What has really changed, in this view, is humanity’s growing ability—whether out of malevolence or carelessness—to cause destruction on a scale previously unimaginable. It is the culmination and synthesis of an expanding technological frontier and a shrinking planet.

In recent decades, writers as varied as Isaac Asimov, Richard Posner, and Jared Diamond have authored tomes on societal or civilizational collapse. Today, writers spill more ink than ever on these topics, as big money flows into these previously neglected intellectual precincts. Silicon Valley elites such as the aforementioned Musk, Skype founding engineer Jaan Tallinn, and Facebook cofounder Dustin Moskovitz are using their wealth to publicize and counter what they consider catastrophic, even existential, risks—including those created or exacerbated by technological innovations themselves.

The result has been a new chapter for techno-philanthropy and a new ecosystem of foundations and research institutions focused on catastrophic risk. Representative organizations include the Open Philanthropy Project (Moskovitz’s charitable arm), the Future of Life Institute (recipient of a $10 million grant from Musk), the Centre for the Study of Existential Risk at Cambridge University (cofounded by Tallinn), the Future of Humanity Institute at Oxford University (funded by the Open Philanthropy Project, among others), and the Global Challenges Foundation (which held an online, international prize competition to better address these risks). While each institution has its own emphasis, they share a focus on disruptive technology, a long time horizon, a globalist and transgenerational outlook, an interdisciplinary approach, an openness to radical ideas, and an affinity for digitally enabled, networked collaboration. The Effective Altruism movement, which seeks to maximize the good done per philanthropic dollar spent, provides a shared intellectual and financial backbone for many of these undertakings, working from the premise that a severe enough global catastrophe could preclude the flourishing of all future generations, an unacceptable moral loss in its adherents’ eyes.

How Should We Think About Global Catastrophic Risk?

The first challenge of thinking about global catastrophic risk is defining what the term actually means. On this matter, experts are hazy. Nick Bostrom and Milan Ćirković suggest that a global event that caused 10 million fatalities or 10 trillion dollars of economic losses would surely count as a global catastrophe, whereas another that caused 10 thousand deaths or 10 million dollars in losses would not. “As for disasters falling between these points, the definition is vague,” they demur. In 2016, the Global Challenges Foundation suggested an alternative, higher threshold: any event that could kill at least 10 percent of the world’s population. Other authorities focus less on precise empirical benchmarks than on qualitative impacts. According to the Open Philanthropy Project, global catastrophic risks are those “bad enough to change the very long-term trajectory of humanity in a less favorable direction (e.g., ranging from a dramatic slowdown in the improvement of global standards of living to the end of industrial civilization or human extinction).”

As an initial cut, it makes sense to conceive of potential risks along three different axes, much in the vein of author Phil Torres. The first is severity: Is the envisioned contingency essentially a nuisance, at one end of the continuum, or is it hellish? The second is scope: Is its impact limited to a locality or a nation, or is it truly global? The third is duration, in at least two senses: How long does the catastrophe take to unfold, and how long are its effects likely to last? (A comparison of the risks posed by runaway climate change and extensive nuclear war is instructive: both would be catastrophic, but their different time frames have ramifications for both preventive and responsive measures.)

To be classified as a global catastrophic risk, an event would need to have an awful, worldwide, and enduring (rather than temporary) impact. At the extreme, such a catastrophe could even be existential, threatening the very survival or permanently curtailing the development of the human species (and conceivably other forms of life).

One implication of this approach is that some events may be decidedly tragic but might not rise to the level of global catastrophic risk. Examples would include the Great Recession that began around 2008; the Cambodian genocide, in which some two million may have perished; or perhaps even the AIDS epidemic, which has killed an estimated 35 million people worldwide to date.

Based on these criteria, we can identify a number of global catastrophic risks that merit consideration:

  • Nuclear war (large-scale)
  • Biosphere collapse (due to climate change and related phenomena)
  • Global pandemic (on the order of the Black Death)
  • Threats from outer space (especially NEOs)
  • Wayward machines (such as unsafe advanced AI)
  • Geoengineering mishap
  • Supervolcano eruption
  • Nanotechnology gone wrong
  • Science experiment gone wrong

As this non-exhaustive list makes clear, global catastrophic threats can arise from natural forces (as in the eruption of a supervolcano), from human actions (as in nuclear war), or some combination of the two (as in biosphere collapse).

When it comes to prioritizing among catastrophic and more mundane risks, policymakers need to weigh many factors. Two of the most important are impact and probability. Some contingencies might be highly damaging but exceedingly rare (a nearby supernova, for instance); others more likely but far less damaging. A third consideration is time horizon: Is the threat in question imminent, emerging, on the still-distant horizon, or merely speculative? Finally, there is the question of mitigation, or at least preparation. Is there anything that can be done to avert, forestall, or lessen the catastrophe, or (as in the case of a supernova, for the foreseeable future at least) is it truly out of human hands, unamenable to policy responses? These questions rarely admit easy answers.
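To make the impact-probability tradeoff concrete, it can be framed as a simple expected-loss calculation. The sketch below is purely illustrative: all probabilities and impact figures are invented placeholders, not estimates drawn from the risk literature, and real prioritization would also have to account for time horizon and tractability.

```python
# Illustrative sketch: ranking hypothetical risks by annual expected loss.
# All probability and impact figures are invented placeholders for
# demonstration only, not estimates from the risk literature.

risks = {
    # name: (assumed annual probability, assumed impact in deaths)
    "large-scale nuclear war": (1e-3, 1e9),
    "severe global pandemic": (2e-2, 1e8),
    "large NEO impact": (1e-6, 5e9),
}

# Expected annual loss = probability x impact.
expected_loss = {name: p * impact for name, (p, impact) in risks.items()}

# Rank from highest to lowest expected loss.
for name, loss in sorted(expected_loss.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {loss:,.0f} expected deaths per year")
```

Even this toy model shows why low-probability, high-impact threats can dominate a ranking, and why small changes in assumed probabilities can reorder priorities entirely.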

Each subsequent installment in this series will be devoted to a particular catastrophic risk, including those noted in the list above. Each blog post will describe the potential threat, analyzing its nature and origins, as well as its likelihood and impact. It will then assess existing (or feasible) policy responses to mitigate this risk, with special focus on developing more effective international standards, norms, rules, and institutions and mobilizing needed resources.

For some of these risks, reforms to existing arrangements may suffice. For others, entirely new cooperative ecosystems may be necessary. And for yet others, the political, economic, or scientific calculus may prove disheartening altogether. Overall, however, one thing seems certain: international cooperation is essential if we are to avoid the worst outcomes and continue to survive—and thrive—on a shrinking planet.   

Creative Commons: Some rights reserved.
This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) License.
