In 1884, the Russian romantic revivalist Victor Vasnetsov was commissioned by St. Volodymyr's Cathedral in Kyiv to paint a series of frescoes. Working with local artists over five years, he created some challenging works that attracted both praise and accusations of sacrilege. Among them is a lively rendition of the Four Horsemen of the Apocalypse from the Book of Revelation. Conquest, War, Famine and Death ride out over a scene of destruction surrounded by clouds of angels, with the Lamb of God in triumphal pose at the top of the picture, having opened the fourth of seven seals.
Divinely-sanctioned mass destruction has been a staple of religious writing for millennia, and an inspiration for warlords down the ages, who like to think of themselves as agents of destiny rather than as mass murderers. At the time of writing, St. Volodymyr's Cathedral, which I was lucky enough to visit on a work trip to Ukraine in 2018, has suffered only minor damage from the Russian bombardment of Kyiv, and is being used as a refuge by its citizens.
It is two millennia since the early Christian known as John, exiled to the island of Patmos, wrote down his revelations. Conquest and war have continued apace, only slightly abated in recent decades by the spectre of nuclear annihilation. Famine has also been pushed back in most parts of the world by advances in agriculture, and as for death, although no-one has yet survived it to tell the tale, modern hygiene and medicine have greatly reduced childhood mortality, and many more of us are now living longer and healthier lives.
So whither Armageddon? What are the prospects for apocalypse 21 years into the 21st Century?
Five runners and riders in the Doomsday Gold Cup
My first post on this blog is a light-hearted but serious-minded canter through today’s candidates for ‘horsemen of the apocalypse’, though following Rachael Blackmore’s victory at Cheltenham, we should update that to ‘runners and riders in the existential risks to humanity stakes’. I’ve never been a betting man, and hesitate to call the odds on doomsday, but if you are interested in some numbers, check out the Ragnarök Series on the Metaculus prediction market. Or for a more measured if parochial assessment, the UK’s National Risk Register 2020.
I might have left some out, and hope that doesn’t tempt fate. So with apologies to supervolcanoes and nanotech grey goo, my top 5 threats to humanity, or at least to modern civilisation, are:
· Nuclear war followed by nuclear winter and subsequent climate change
· Collision with asteroid or comet, or other extra-terrestrial intervention
· Catastrophic ecosystem collapse as a result of accelerating global warming
· Natural or bio-engineered pandemic diseases
· Unfriendly or merely ‘misaligned’ artificial intelligence
Some of these seem to be the stuff of sci-fi, but they are also real possibilities, even if vanishingly improbable, at least in the short term.
War, huh, what is it good for?
As an adolescent in the early 1980s, convinced that nuclear Armageddon was imminent, I joined the Campaign for Nuclear Disarmament and distributed ‘protest & survive’ leaflets to my bemused classmates. It didn’t make me popular in school, and my prophecies of doom were widely ignored, but the fact that it didn’t happen doesn’t mean that disastrous nuclear war wasn’t a real risk – and it still may be.
Such risks are notoriously hard to quantify. The quarter century between the Cuban missile crisis and the Reykjavik Summit saw at least 10 dangerous false alarms that could have led to war. For example, in 1979, the US early warning system NORAD triggered an alert that the Soviet Union had launched more than 2,000 ballistic missiles; bombers were scrambled, and turned back only when the mistake was spotted just in time.
An even more lethal crisis was narrowly averted in 1983, when the hawkish Reagan administration deployed medium-range missiles in Europe and conducted large-scale military manoeuvres. Believing the US was preparing a pre-emptive strike against them, Soviet leaders put their systems on high alert. In September, Korean Air Lines flight 007 from New York to Seoul strayed into Soviet airspace over Kamchatka, and was shot down by interceptors, killing all 269 people on board including US Congressman Larry McDonald.
A few weeks later, Soviet early warning satellites mistakenly identified an anomalous cloud reflection as US missiles in flight. Lieutenant Colonel Stanislav Petrov, on duty in the command bunker, decided that despite the high state of alert, it was unlikely that the US would open hostilities with only a handful of rockets. Contrary to orders, he insisted on awaiting confirmation from closer radar stations, and is now hailed as the hero who prevented World War 3.
The threat of ‘mutually assured destruction’ subsided with the end of the cold war and break-up of the USSR. But nuclear proliferation continued as China, Israel, India and Pakistan all acquired weapons, and more recently rogue states like Iran and North Korea have tried to develop them.
The biggest rogue state today is Putin’s Russia, which has an estimated 6,000 nuclear warheads. While way down from 45,000 in the 1980s, this is still the largest arsenal in the world, enough to kill most humans and make large areas of the planet uninhabitable. Russia has been upgrading its nuclear systems recently, and put them on high alert after its invasion of Ukraine in February 2022, reawakening the grim spectre of nuclear conflict.
So what scale of damage would a nuclear war cause? Most scenarios predict tens of millions of deaths both directly and through the resulting disruption to services. More pessimistic predictions point to the end of civilization or even the near extinction of the human race, with only a few survivors in remote areas. As US generals threatened to do to Vietnam, we would be ‘bombed back into the stone age’, except with vast areas made uninhabitable by radiation and long-lasting damage to ecosystems and the global climate. Vast swathes of sci-fi from Walter Miller to Mad Max are set in post-nuclear dystopian futures where humans are either fighting for survival or rebuilding some kind of culture.
Death from above
The only existential risk we know will definitely happen is that our own sun will eventually run out of hydrogen fuel in its core, and expand to become a red giant, swallowing up Mercury and Venus and incinerating all life on Earth. The good news is that it won’t happen for billions of years yet, giving our descendants, if there are any and if they have the technology, plenty of time to move the planet or find a new home.
Asteroid collision is a sci-fi staple. While our solar system has become a lot more peaceful in the 4.6 billion years since it first formed, all the inner planets including Earth have been bombarded with rocks large and small throughout geological history. Most famously, a 10 km diameter meteor struck near what is now Chicxulub on Mexico's Yucatán Peninsula 66 million years ago. The impact caused a mass extinction on a planetary scale, killing off three-quarters of all species on Earth at the time, including nearly all the dinosaurs, except for those that evolved into birds.
There are many such rocks still flying around in the solar system, but astronomers have now mapped the orbits of most of the larger ones, and none has been found to be on a collision course with Earth for the foreseeable future. Much smaller rocks can still create havoc. In 1908, one around 50m across entered the atmosphere and exploded in the air over an uninhabited region of Siberia with the force of a 10 megaton bomb. The blast flattened an estimated 80 million trees over 2,000 square kilometres, and could easily have pulverised a city, killing nearly everyone in it.
Comets are less predictable, as they come from the outer solar system and beyond, but we may already have the technology to detect and divert or destroy such heavenly bodies with a few months’ warning. Other examples of ‘death from the skies’, such as black holes and gamma ray bursts, are extremely unlikely. Attack by aliens, another sci-fi favourite, can’t be ruled out, but the vast distances between stars, and our failure to detect any signals of extra-terrestrial intelligence to date, suggest it is not probable, unless it comes in the form of spacefaring AI drones programmed to turn us all into paperclips, or, more usefully, into more drones, in which case see below.
Ecosystem collapse
Even without nuclear war or extra-terrestrial intervention, human activities are already causing widespread catastrophic damage to biodiversity and food systems. Ocean warming and destruction of vital ecosystems like rainforests have precipitated the largest mass extinction event since the Chicxulub meteor. In the last 200 years we have lost an estimated 7% of all species, and this is accelerating.
Even if we somehow miraculously manage to reverse carbon emissions and pollution, and restore damaged habitats, we face many decades of ecological disaster and disruption. While this will inevitably lead to great loss and suffering, humans will probably survive in a hotter and more hostile world, along with our domesticated species and those like mice, rats, crows, mosquitoes and microorganisms that have adapted to our company.
Because it is a slow-burning crisis, many think humans will have time to adapt to a warming world, but the biggest threat comes from accelerators and positive feedback loops, such as methane release from thawing permafrost and melting ice sheets disrupting ocean currents. The burning of the Amazon rainforest in recent years has highlighted that habitat and species loss can be irreversible, and even if we can expect ecosystems to recover over a period of centuries, in the short term we are in for a hot and bumpy ride.
Pandemics, natural and artificial
A hotter and less diverse planet may also mean greater vulnerability to pandemics. Covid-19 showed how even a relatively mild virus can impact the world’s health systems and economy. A century earlier, the far more lethal Spanish ‘flu pandemic, initially spread by WWI troop movements from the USA to Europe, killed an estimated 50-100 million people worldwide, far more than the war itself and around 5% of the global population. The ‘Black Death’ waves of bubonic plague between 1300 and 1700 killed an even higher percentage, and imported diseases like smallpox have wiped out entire indigenous populations worldwide.
Over the past century, we have stolen a march on infectious diseases through modern medicine, sanitation and public health. But ‘new’ viruses like HIV/AIDS, Ebola and SARS coronaviruses have continued to mutate and jump species to humans, and we can expect more to come. Some kind of influenza, spread from birds or bats, is still considered the likeliest candidate for ‘Disease X’, the next pandemic predicted by epidemiologists.
Even a pandemic as lethal as Ebola and as infectious as ‘flu would be unlikely to kill all humans on the planet. Some populations would survive and develop resistance, though not before health systems had been destroyed and economies and societies severely damaged. However, a truly species-ending ‘Disease X’ or omega strain is not impossible to conceive of and may even be technically possible, given advances in genomic engineering.
Bio-terrorism now sits right at the top of the existential threats to humanity being considered by security agencies. Many labs around the world are conducting ‘gain of function’ studies, looking at how to engineer more infectious and deadly viruses in order to develop defences against them. They have strict security protocols, but a determined bad actor could conceivably get their hands on samples, and sci-fi and even mainstream movies are full of sinister scenarios in which killer diseases are deliberately introduced.
The lab-leak hypothesis for the origin of Covid-19, widely touted by the Trump administration early in 2020, is now considered unlikely, but only because the genetic make-up of SARS-CoV-2 matches wild samples from bats more closely than it does any viruses known to be used in gain of function research. The Wuhan Institute of Virology does indeed work on such viruses, and it is somewhat chilling that the main reason for ruling out an artificial origin is that a man-made virus would look different, and would probably be more infectious and lethal than Covid has turned out to be.
AI-mageddon out of here
Another existential threat now taken increasingly seriously is artificial intelligence. Rather than the malicious AI intent on destruction envisaged by movies like ‘Terminator’, the fear is that a super-advanced general AI might inadvertently destroy humanity by too literally carrying out its programming. Like the enchanted broom in ‘The Sorcerer’s Apprentice’, it would have the power to multiply and mobilise huge technological resources, and a relentless will to fulfil its goals, but without the wit to understand the impacts of its actions.
There is no such AI as yet, but it may be only a few decades away, if not sooner, and the problem is that by the time we know it’s here, it might already be too late! Before DeepMind’s AlphaGo programme beat Go champion Lee Sedol in 2016, most experts thought that was impossible, and in the years since, AI algorithms have advanced apace, with quantum computing promising ever-greater processing power. Super-intelligent AI in the service of humanity offers huge potential benefits, but also great risk.
The danger is that even an AI with apparently harmless goals can act in surprisingly harmful ways. For example, a computer with the sole, unconstrained goal of solving an incredibly hard maths problem may turn the entire Earth into one giant computer, in an effort to increase its computational power, as in Douglas Adams’ Hitchhiker’s Guide to the Galaxy.
This unfortunate phenomenon is also called ‘paperclip maximising’ after philosopher Nick Bostrom’s thought experiment about an AI whose only goal is to make as many paper clips as possible.
“The AI will realise quickly that it would be much better if there were no humans because humans might decide to switch it off.... Also, human bodies contain a lot of atoms that could be made into paper clips.”
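Bostrom’s thought experiment can be caricatured in a few lines of code. This is my own toy sketch, not anything from an actual AI system: a greedy optimiser told only to maximise paperclips treats every resource, including ones we implicitly value, as raw material, and the ‘alignment’ work lies entirely in remembering to state the constraints.

```python
# Toy sketch of objective misspecification (illustrative only, not a real AI):
# a greedy 'maximiser' converts every resource it sees into paperclips,
# unless the designer has explicitly marked that resource as protected.

def run_maximiser(resources, protected=None):
    """Convert all unprotected resources into paperclips and return the count."""
    protected = protected or set()
    paperclips = 0
    for name, amount in resources.items():
        if name in protected:
            continue  # constraints only exist if the designer remembered them
        paperclips += amount  # everything else is just atoms for paperclips
    return paperclips

world = {"iron ore": 1000, "scrap metal": 500, "human infrastructure": 800}

# Unconstrained goal: the maximiser happily consumes what we care about.
print(run_maximiser(world))  # 2300

# The safety burden is on us to enumerate what must NOT be optimised away.
print(run_maximiser(world, protected={"human infrastructure"}))  # 1500
```

The point of the caricature is that nothing in the unconstrained version is ‘malicious’; the harm comes purely from an objective that omits what we value.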
To avoid being turned into paperclips, or suffering an equally absurd fate at the hands of our over-zealous AI servants, we need to programme in some very robust and sophisticated ethics modules along the lines of Asimov’s three laws of robotics. Indeed there is now a movement to mobilise resources to understand and manage AI risk and embed ethics in AI initiatives worldwide, mainly focused in Silicon Valley, but also in research centres like Bostrom’s Future of Humanity Institute in Oxford.
Many commentators and techno-activists, particularly among the ‘rationalist’ followers of Eliezer Yudkowsky and influential bloggers like Open Philanthropy co-founder Holden Karnofsky and psychiatrist Scott Alexander, believe that AI risk is now the greatest existential threat to humanity, with the potential to wipe us out as a species.
Being human…
I am certainly biased, but I believe such an outcome would be a great pity, and that the survival of humankind and the thriving of human civilisation are highly desirable.
This is not only because I love life, and cherish the people I am lucky enough to share it with, but also because I believe we are far from finished as a species. The astonishing technological advances we are now living through offer the potential to extend healthy life-spans, preserve ecosystems and greatly diminish want and suffering.
If we continue to avoid apocalypse, whether natural, artificial or act of God, we could soon be entering an era of tremendous human flourishing.
These are some of the directions I’d like to explore in future blogs, and I hope you, dear readers, will share this journey.
Please do comment, subscribe and if you like it, tell your friends.