
Futures-studies research can focus on specific technologies or speculate about existential risks. Credit: FG Trade Latin/Getty

The science-fiction author Ray Bradbury once wrote: “I don’t try to describe the future. I try to prevent it.” From asteroid strikes and nuclear winters to runaway artificial intelligence (AI), there are plenty of scenarios that humanity would rather avoid. So much so that a burgeoning research discipline is dedicated to that task.

Consider the Centre for the Study of Existential Risk at the University of Cambridge, UK, which focuses on the study and mitigation of threats that could lead to human extinction or civilizational collapse.

“There are cascade effects between different kinds of hazards,” says historian Matthew Connelly, the centre’s director. “And, really, if you want to understand the future of life on this planet, you have to look at it in terms of systems.”

Some future shocks are hard to avoid — as researchers at the 19-year-old Future of Humanity Institute (FHI) at the University of Oxford, UK, discovered last week, when the institute was shut down in the wake of “increasing administrative headwinds”, according to a statement on its website.

“We had a good run,” says philosopher Nick Bostrom, who had led the FHI since its creation in 2005. “I think the death by bureaucracy was regrettable, but there are now so many more places where this [research] can be done.”

Forward-thinking approach

A loose confederation of academic interests, the field of futures studies embraces everything from philosophical musings to more constrained and rigorous exercises that map out how specific technologies are likely to progress.

Kerstin Cuhls, a scientific project manager at the Fraunhofer Institute for Systems and Innovation Research in Karlsruhe, Germany, last year completed one such foresight exercise for the German government, investigating chronobiology and circadian rhythms.

“We invited a lot of people who know about chronobiology, but also people from associations and companies, and brought them together to discuss what could be the future of the field and applications,” she says. Projecting over 20 years, the exercise covered probable advances in the science, such as a better understanding of the molecular and genetic mechanisms linked to sleep; the impact of increased screen use; and the potential knock-on effects of widespread disruption to circadian rhythms, including mental-health problems and obesity.

“We try to promote action,” Cuhls says — most notably by using the study outcomes to support a (so far unsuccessful) attempt to abolish Germany’s annual shift to ‘summer time’, when clocks are set forward by one hour, and to argue that teenagers would benefit from starting school later in the day.

Exercises such as this one are different from conventional forecasts, says George Wright, a psychologist and futures researcher at the University of Strathclyde in Glasgow, UK. Rather than using statistics to extrapolate a single data series on the basis of current trends, futures studies try to account for variation and uncertainties across many influences and variables.

“In the future, there will be political change, economic change, social change, technological change, legal change, maybe regulatory change,” he says. To account for this, futures researchers often produce and describe several alternative scenarios.

Risks to humanity

Patrick van der Duin, a foresight consultant based in The Hague, the Netherlands, and co-editor-in-chief of the journal Futures, says that focused foresight exercises, such as the chronobiology one, are different from newer, more speculative research on existential risk. The speculative approach often focuses on low-probability events that have a very large potential impact, he adds, rather than projecting from the present on the basis of plausible and predictable steps.

Bostrom points to raising the profile of the existential risks posed by AI as the FHI’s biggest achievement over the past decade. “We were the first in academia to develop the fields of AI safety and AI governance,” he says. “At the time, many viewed this stuff as outlandish, but these concerns are now embraced by many AI leaders and echoed by many political leaders across the globe.”

Some futures researchers go further than exploring possible scenarios, and actively seek ways to promote what they see as the most desirable vision of the future.

“We’ve done a lot of work on lethal autonomous weapons,” says Emilia Javorsky, a physician-scientist who directs the Futures Program at the Future of Life Institute in Campbell, California. “These are very much challenges of today that are just going to be amplified tomorrow if we don’t do something about them.”

In 2017, the institute produced a viral video called Slaughterbots as part of a campaign against AI-enabled weapons. The video is widely credited with helping to build opposition to the technology. “We are seeing tangible outputs of this work,” Javorsky says.

Future funding

Several national governments are establishing groups to examine existential risk, including from AI. Although there is growing public and political interest in futures studies, much of the research that feeds into these discussions is still funded by philanthropic organizations, rather than government grants. That hurts the field’s reputation, Connelly says, and is something he is trying to change.

Winning competitive grants is “what we need to do to establish this field for the long run and to earn the respect of others in academia”, he says. This is a common issue, he adds, for emerging areas of research that are multidisciplinary or don’t fit into existing fields.

“If you want to stick around, then you have to begin to demonstrate the work does meet the standard people would expect of any kind of academic work,” he says.