I.

In 1942, one of Robert Oppenheimer’s colleagues came to him with a disturbing suggestion: in the event their work on the Manhattan Project succeeded and they built the world’s first atomic bomb, it was quite possible the explosion would set the skies on fire. Shaken, Oppenheimer privately told one of the project’s most senior figures, Arthur Compton, who responded with horror, according to a biography of Oppenheimer:
Was there really any chance that an atomic bomb would trigger the explosion of the nitrogen in the atmosphere or of the hydrogen in the ocean? This would be the ultimate catastrophe. Better to accept the slavery of the Nazis than to run a chance of drawing the final curtain on mankind!

Compton told Oppenheimer that “unless they came up with a firm and reliable conclusion that our atomic bombs could not explode the air or the sea, these bombs must never be made.” The team ran a series of calculations and decided the math supported their case that the “gadget,” as the bomb was known, was safe. Work continued. Still, at the site of the Trinity test in New Mexico on July 16, 1945, one of the scientists offered the others a bet on “whether or not the bomb would ignite the atmosphere, and if so, whether it would merely destroy New Mexico or destroy the world.” Luckily for us, it did neither.



Or rather, not luckily, because Oppenheimer, a brilliant theoretical physicist, and his team had done the work to be sure of what they were doing. With their risk assessment, Oppenheimer – who famously remarked that the Trinity detonation brought to mind the words, “Now I am become Death, the destroyer of worlds,” from the Bhagavad Gita – had inadvertently created a new field of study: existential risk.

Seventy years after the Trinity test, such risk assessments are still being carried out – and, as before, most of us know nothing about them. Based at several academic institutions in the UK and US, the Manhattan Project’s spiritual heirs toil away in relative obscurity. There are not many of them. They are no longer building bombs. They come from all sorts of disciplines – philosophy now, as well as the sciences. Their goal is nothing less than saving the human race from destroying itself.

II.

“We attract weird people,” Andrew Snyder-Beattie said. “I get crazy emails in my inbox all the time.” What kinds of people? “People who have their own theories of physics.”


The FHI’s Andrew Snyder-Beattie. (Quartz/Kabir Chibber)


Snyder-Beattie is the project manager at the Future of Humanity Institute. Headed up by Nick Bostrom, the Swedish philosopher famous for popularising the risks of artificial intelligence, the FHI is part of the Oxford Martin School, created when a computer billionaire gave the largest donation in Oxford University’s 900-year history to set up a place to solve some of the world’s biggest problems. One of Bostrom’s research papers (pdf, p. 26) noted that more academic research has been done on dung beetles and Star Trek than on human extinction. The FHI is trying to change that.

The institute sits on the first floor – next to the Centre for Effective Altruism – of a practical, nondescript office building. In the main lobby, if you can call it that, there’s a huge multi-sided whiteboard, scribbled with notes, graphs, charts, and a small memorial to James Martin, the billionaire donor, who died in 2013. When I visited recently, the board was dominated by the ultimate office sweepstakes: a timeline that showed the likelihood, according to each FHI researcher, that the human race would go extinct in the next 100 years. They asked me not to publish it. (Most said the chances were quite low, but one person put it at 40%.)

“On an intellectual level, we have this core set of goals: try to figure out what really, really matters to the future of the largest part of humanity, and then what can we investigate about that?” says one of the researchers, a genial Swede named Anders Sandberg, who wears a peculiar steel medallion hanging over his shirt. “Real success would be coming up with an idea to make the world better. Or even figure out what direction ‘better’ is.”


One scale for evaluating risks to mankind. (Nick Bostrom)


III.

Thirty-six centuries before the Manhattan Project, mankind was already angering the heavens. The earliest known story of the end of the world features a man who builds an ark to survive a great flood, long before Noah. The neo-Assyrian tale, found on a 17th century BC stone tablet from ancient Mesopotamia, blames us for the flood: we made so much noise with our tools, keeping the gods up at night, that one of them sent a deluge to wipe us out.

The myth of Atrahasis, the sole survivor, whose name translates as The Exceedingly Wise, belongs to the field of eschatology, the study of end times. We live in a world of existential risk, where we have become our own gods. Today, mankind’s noise – technology – continues to be the source of our gravest perils.

The UN is trying to prevent us from killing each other with autonomous robots in the here and now. In an echo of the Manhattan Project, the creation of the recently restarted Large Hadron Collider – the man-made atom smasher beneath France and Switzerland – prompted fears it would create a black hole here on Earth. (Like Oppenheimer’s boys, the LHC team has done several risk assessments on the likelihood of ending the world. We should be OK.)

Other potential threats are harder to spot. In 2012, scientists working on making mutant flu strains more lethal were forced to redact some of their findings. Last year, the US government stopped funding further research in this area. That was due in no small part to campaigning by existential-risk scientists like Martin Rees, the founder of the Centre for the Study of Existential Risk, Cambridge University’s counterpart to the FHI. “It is hard to make a clandestine H-bomb,” Rees says. “In contrast, biotech involves small-scale dual-use equipment. Millions will one day have the capability to misuse it.” His greatest fear? “An eco-fanatic, empowered by the bio-hacking expertise that may soon be routine, who believes that ‘Gaia’ is being threatened by the presence of too many humans.”

By contrast, some of the things most people think of as existential risks may not be. Bill Gates, Elon Musk, and Stephen Hawking, as well as Bostrom himself, have all warned of the dangers of creating a computer with superintelligence; in popular culture, visions of the machines that enslave human beings in films like The Matrix and Terminator abound. But this is something the Oxford boys shrug off as not an imminent threat. “What engineers are doing now with machine learning, and almost anything, is really not scary,” says another FHI researcher, Daniel Dewey, an American ex-Googler specializing in machine superintelligence. “Something that gets attributed to us a lot is that we think you should be afraid. And we don’t spend any of our time thinking you should be afraid.”

And when what you’re worried about is human extinction, the bar for what counts as a catastrophe is high – brutally high. Take, for example, global warming. “Climate change could constitute an existential risk if it’s worse than we expect and there’s a feedback loop that causes temperatures to rise by 20°C,” Snyder-Beattie says. (We’re currently heading for a rise of between 2°C and 4°C.)


The FHI’s Anders Sandberg and his medallion. (Quartz/Kabir Chibber)


“Yes,” Sandberg jumps in, “but you could live in this standard climate-change world. You would have a lot of world cities with water in the streets, and you might have a lot of people who would be displaced, and our standard of living would be much lower, but our existence might be a fairly good one.” He thinks, and adds: “Our future is more than just our next century. Maybe the next few centuries are really lousy and then the glory starts!”

The medallion that Sandberg, a computational neuroscientist by training, wears around his neck contains instructions on how to freeze him and have him sent to a cryonics lab in Arizona in the event of his death. “Statistically, I’m likely to die of something predictable so I have time to get a ticket to America,” he says with a wry smile. And planning for your own demise comes with an interesting side effect. “You become very aware of existential risk,” Sandberg says.

“If you have a short life expectancy, the future doesn’t matter much,” he adds. “If you’re lying there frozen in nitrogen, you become more exposed along the way.”

IV.

Existential risk researchers often find themselves frustrated with the rigidity of their fields. Seth Baum comes from a family of engineers, but found computer scientists unsuited to his own pondering and probing. “This conversation is about asking which problems we should be working on,” he told me. “From the engineer’s perspective, that’s somebody else’s job.”

So instead, Baum – who was previously a geography postgraduate student – opened the independent Global Catastrophic Risk Institute in New York. There have always been academics concerned with human survival, Baum says. “What has been relatively recent is the emergence of a community that is not specifically focused on one risk, or one set of risks, but on a broad category of them,” he adds.

The GCRI, the CSER and the FHI form the trifecta of this community. There’s little rivalry; the CSER’s most recent lecture was titled “Will we cause our own extinction? Natural versus anthropogenic extinction risks” and given by the FHI’s Toby Ord. “But it’s still a very small community, given the scale and range of the issues that need addressing,” says Rees, a 72-year-old astrophysicist and Britain’s Astronomer Royal, who runs the CSER when he’s not busy sitting in the House of Lords. “The stakes are so high that even if we reduce the probability of a catastrophe by one in the sixth decimal place, we’ll have earned our keep.”


But the researchers want us to pull back, rather than focus on any one threat.


British astrophysicist Martin Rees, the founder of the CSER. (AP Photo/Lefteris Pitarakis)


“If we said that the priority right now is synthetic biology,” Dewey says, “it would be easy to forget what we’re here doing, and the priority is, a hundred years from now, we should be able to look back and say, man, it is a good thing that we got our act together.”

Thinking of the future then becomes not a theoretical conversation but an ethical one. “Some of the major triumphs we have achieved as a civilization have to do with changing our moral standards,” Dewey adds. “We no longer think it’s acceptable to go to war with a country and then wipe their population off the face of the earth. We no longer think slavery is acceptable and we think that women should have equal rights to men.”

In other words, we save the human race not by worrying about AI or pandemics, but by understanding why we are worth saving in the first place. The existential risk researchers are really trying to make us care about our descendants as much as we care about ourselves.

Snyder-Beattie describes this as raising awareness of the “ethical value of future generations.” “We see that not only do people not think about other people across space, but they also don’t end up thinking of other people across time,” he says. Rees likes to quote the ecologist EO Wilson, who once said: “Causing mass extinctions is the action that future generations would least forgive us for.”

V.

In one of the FHI’s conference rooms in Oxford, there is a portrait of a strong-nosed military man with a thick moustache and a mournful expression. That man is Stanislav Petrov, who was the officer on duty in a bunker near Moscow one night in September 1983 when his computer told him that the US had just launched four nuclear missiles at the Soviet Union. The protocol was for Petrov to call his superiors and prepare the counter-strike; in other words, World War III. Instead, Petrov ignored the alarm.

“If I had sent my report up the chain of command, nobody would have said a word against it,” he told the BBC 30 years later. In the most banal of circumstances, Petrov ensured that humanity is still here today. There is talk of naming another conference room at the FHI after Vasili Arkhipov, another Russian, who was aboard a nuclear-armed Soviet submarine that appeared to be under attack at the height of the Cuban missile crisis in 1962. All three commanders needed to agree to launch a nuke; Arkhipov was the only one who refused.

It is telling that mavericks like Petrov and Arkhipov are heroes to these researchers, not Oppenheimer – who in 1960 visited Japan and, according to a Pulitzer Prize-winning biography, coldly remarked, “I do not regret that I had something to do with the technical success of the atomic bomb” – nor any of his men who inadvertently created the field of existential risk.

Looking at Petrov’s picture, Sandberg says: “There’s something symbolic about individuals who seriously improved the prospects for the human race. It also serves to remind us that maybe – it’s unlikely, but maybe – one day you might find yourself in a similar situation where the fate of the world is hanging on what you do. Maybe you should think very carefully.”

This article was originally published on qz.com.