Recently, I was talking to a colleague I admire about my work on nuclear winter theory when they asked, quite innocently, “Wasn’t that, like, dispelled in the 1980s?” It was a vexing question, but not a surprising one.
Under President Reagan, the US Defense Department did its best to undermine the claims of Carl Sagan and others who amplified scientific findings about the possible climatic consequences of nuclear war. In a 1985 report, Defense Secretary Caspar Weinberger argued that climate models did not yet provide sufficient certainty to warrant acting on the nuclear winter hypothesis, despite the expert consensus that had begun to take shape. Yet, thanks to advances in climate science since the 1980s, our ability to simulate the effects of nuclear detonations has improved significantly. Although there is disagreement about the extent and exact conditions under which a nuclear exchange could lower Earth’s surface temperature for long periods—threatening large-scale famine, among other dire consequences—the fundamental premise of nuclear winter theory is now widely accepted by reputable scientists.
Encounters like the one I had with my colleague illustrate that, in many ways, the nuclear field continues to suffer from a tendency toward insularity that fails to take other relevant areas of research into account. Nuclear issues are part of a web of converging catastrophic and existential risks to humanity. Although such risks do not occur in a vacuum, the study and practice of mitigating them too often does.
I began my career at Sandia National Laboratories, where I was recruited to work on safeguarding nuclear materials against malicious actors to prevent the further spread of nuclear weapons. A few years ago, my frustration with the siloing of the field led me to transition into an interdisciplinary research environment. Now, I am convinced, as water policy expert Sandra Postel has argued, that tackling just one crisis “bit by bit and step by step” will lead to failure in addressing major global risks.
To be effective, those who hope to reduce the probability and impact of existential risks must make themselves literate in multiple crises, discerning how their work affects, and is affected by, other domains. They must also understand how their work impacts the communities most vulnerable to catastrophic risks.
A new epoch, a new type of practitioner. While human beings have always been vulnerable to naturally occurring hazards—including volcanoes, asteroids, and infectious diseases—the period following World War II introduced numerous human-caused (or anthropogenic) risks to society. Post-1945, the world experienced a Great Acceleration in globalized economic activity, resulting in an intensification of environmental harms, including emissions of heat-trapping gases such as carbon dioxide. In the decades that followed, anthropogenic risks from nuclear weapons, increasingly sophisticated biological weapons, and artificial intelligence (AI)—to name a few—also ballooned.
As a result, humanity now has a greater ability to inflict widespread harm on the planet than ever before. Along with the power to damage the natural environment on a massive scale, humans also have an unprecedented capacity to undermine the long-term survival of their own species. Indeed, some scientists have suggested that the first detonation of a nuclear weapon at the Trinity Site in New Mexico should be considered the official start of the Anthropocene Epoch, as it marked the introduction of the most destructive technology in human history.
In the context of these evolving global human impacts, the task of limiting their most catastrophic consequences is an enormous one. Plans to guard against global disaster must account for the sheer magnitude and complexity of such risks, as well as the ways in which they might interact unexpectedly, augmenting one another or creating unintended cascading effects. Take nuclear winter, for instance: Beyond killing hundreds of thousands or millions of people immediately, even a “small” regional nuclear exchange could wreak havoc on the environment, dealing a shattering blow to ecosystems already struggling with rapid biodiversity loss caused by human activities. Under such circumstances, modelling suggests that global agriculture could also falter, with billions dying of starvation.
Anthropogenic climate change is itself a source of risk that could cascade into or intersect with others. The more the planet warms and natural habitats are destroyed, the more wild animals will be displaced. Their movements can allow for pathogens to spread more easily, creating a greater risk of disease transmission from animals to humans, which can lead to global pandemics.
These looming possibilities demand actionable research to address converging risks within a shrinking window of time. Accordingly, in the last two decades, a growing discipline has coalesced around the study of catastrophic and existential risks to humanity.
Learning from the study of catastrophic and existential risks. The benefits of the cross-disciplinary approach to reducing catastrophic and existential risks to humanity are numerous. First, such an approach improves research by making optimal use of limited resources and sharing insights, failures, and lessons across areas of risk. Having previously worked mostly with specialists in the nuclear field, at Cambridge’s Centre for the Study of Existential Risk (CSER) I now interact daily with colleagues whose expertise spans volcano risk, AI safety and ethics, planetary defence against asteroids, food systems, and climate advocacy movements, among other topics.
The consolidation of such disparate academic, technical, and policy expertise in one organization creates a venue for constructive criticism from previously unexplored angles. In my own experience researching the intersection of risks from AI and nuclear weapons, for example, a marrying of technical and policy expertise from both the nuclear and AI fields is absolutely vital.
Second, an interdisciplinary approach to catastrophic risks acknowledges that individual risks are part of a web in which each strand has implications for all others. Resilience against one risk may also aid readiness against another. The COVID-19 pandemic has demonstrated the world’s lack of preparedness to cope with events that disrupt global supply chains and infrastructure, with shocks in one part of the world causing ripple effects throughout. The lessons learned should inform not just how we mitigate likely risks, but also how we build resilience against worst-case scenarios or unexpected outcomes. An awareness of all identified sources of risk can help governments formulate long-term plans for each, without allowing a single issue to consume disproportionate energy and resources.
There is also a considerable opportunity to innovate policies that alleviate multiple issues at once. For instance, the positive effects of forest conservation extend to pandemic prevention, carbon sequestration, and biodiversity retention. And general restraint about the development of potentially dangerous AI systems leaves less room for AI to be deployed prematurely in high-stakes military settings, such as the command and control of nuclear weapons systems.
Third, the cognitive and emotional weight of daunting challenges is smaller when shared. Humans are not naturally skilled at long-term planning and can become numb to high-risk events, as has been seen throughout the COVID-19 pandemic. Often, colleagues from separate areas of research can provide new tools for understanding one’s own research focus. For example, researchers studying the framing of climate change as a threat to security might learn volumes from the benefits and harms of securitization in other areas, such as HIV/AIDS.
Viewing individual catastrophic risks within the context of all catastrophic risk brings the long-term impacts of our work into focus, which in turn should engender a commitment to intergenerational justice and ethics. While working in the nuclear field, I often felt dismayed by short-term thinking, which fixated on incremental steps in arms control without questioning the long-term consequences of particular policies. When the possibility of catastrophic events, or even human extinction, is treated as a starting point, issues like nuclear war cannot be easily sanitized. The study of catastrophic risks inherently asks how actions will affect both the present moment and generations far into the future.
My argument for a cross-disciplinary approach to existential risks is not meant to diminish the ongoing importance of specialized research. To be sure, the risks to humanity that we know about so far are each distinct and require tailored methods of mitigation.
But work on distinct areas of risk is stronger when it is underpinned by shared goals and legible to audiences outside one’s own discipline. In their ground-breaking research on the recent history of scientific obfuscation, Naomi Oreskes and Erik Conway demonstrated that political obstacles to risk mitigation are often eerily similar across different subject areas; the same actors aiming to sow the seeds of climate denial were also responsible for creating doubt about the validity of nuclear winter theories or the need for decisive action to address the ozone hole. Confronting catastrophic risk requires good stewardship of information; if complex issues are not communicated accurately and ethically to the public and shielded from corrupt special interests, risk mitigation can—and will—stall.
Centering frontline communities. It might seem self-evident that plans and strategies to mitigate risk should avoid creating new harms themselves. But in the same way that mainstream climate activism often shirks environmental justice, research on catastrophic risks frequently neglects and excludes the communities most vulnerable to those risks.
Being from New Mexico, I often ponder this disconnect. My own family’s history makes my relationship to the US nuclear weapons program personal: When the very first nuclear test was conducted in 1945, my grandfather Antonio and his siblings felt the earth shake violently. Confused and frightened, they huddled together in their small home near the Sandia mountains. And due to the legacy of nuclear testing in New Mexico, family friends have received cancer diagnoses following their exposure to contaminated water. (Similar struggles are documented at length by Myrriah Gómez in her recent book, Nuclear Nuevo México: Colonialism and the Effects of the Nuclear Industrial Complex on Nuevomexicanos.)
From uranium mining on the Navajo Nation to nuclear testing in the Marshall Islands, the frontline communities of nuclear weapon programs are not unlike those experiencing the worst of the climate crisis. If other catastrophic events unfold, these same historically marginalized communities will suffer again. Not surprisingly for a field with origins in the elite academic environments of Western Europe and the United States, the study of catastrophic and existential risks often neglects these perspectives—in both the substance of the research and the demographics of the field’s membership.
Frontline communities have faced threats to their survival in a way that most researchers and policymakers have not. They possess valuable knowledge about how to cope and adapt. The question of what constitutes a catastrophic or existential risk, as well as how these risks relate to one another, is intricately tied to whom you ask. In the face of converging risks, the effort to ameliorate them needs diversity of thought and experience. Just as researchers cannot thoroughly grasp their own disciplines without knowing how they interact with others, they will lack a clear picture of the risks the world faces until they learn from the communities that bear the brunt of inaction.
Source: thebulletin.org