
Is the end nigh?

Oliver Morton is morbidly fascinated by Our Final Century, Martin Rees's exploration of humankind's chances of surviving the next hundred years

Our Final Century: Will the Human Race Survive the Twenty-First Century?
by Martin Rees
228pp, Heinemann, £17.99

Sir Martin Rees, Britain's most distinguished theoretical astrophysicist and one of its best writers on matters cosmological, is no stranger to catastrophe; he has a professional interest in supernovae, gamma-ray bursts, cannibal galaxies and many of the universe's other savageries. In Our Final Century, though, his concern is not just destruction, but self-destruction. The 20th century, he points out, was the first in which humanity's chance of self-destruction shot up above the eschatological background noise. At a cumulative risk that Rees (long active in disarmament campaigns) sets retrospectively at about one in six, a nuclear holocaust knocked other end-of-the-world scenarios - asteroid impacts, supervolcano eruptions, and, for the fanciful, alien invasion - into a cocked hat. Now he thinks things are getting worse. Although the dangers are no longer dominated by nuclear weapons, Rees puts the chances of civilisation coming to an end in the 21st century as high as 50:50.

Towards the end of the book, Rees looks at a number of probabilistic arguments for thinking that the end of civilisation is not far away, arguments he finds hard to fault but wisely, it seems to me, avoids taking too seriously. He wonders whether humanity's demise might have cosmic significance - whether the universe will grow old empty and unfulfilled without us. He also looks at the slim chances of purely accidental destruction that could spread far beyond the Earth as the unintended consequence of a scientific experiment changing the nature of matter or space-time, a subject that is fascinating and worth taking more seriously than you might expect.

But the heart of his argument is the risk of intentional destruction. Like Bill Joy, chief scientist of Sun Microsystems, who wrote an influential article on the matter in Wired a few years ago, Rees fears that biotechnology and nanotechnology will provide greater potential for destruction, and permit ever smaller groups - or indeed individuals - to make use of them. This worry is so real to Rees that he has offered a $1,000 wager on a death toll of a million or more due to a single act of terror (or error) using these technologies within the next 20 years; if you feel like taking him up on it - and he would love to lose the bet - the details are at www.longbets.org.

Rees's discussion of these risks is informed, judicious, and occasionally delightfully donnish: "Unless you hold your life very cheap indeed [a $50 game of Russian roulette] would be an imprudent - indeed blazingly foolish - gamble to have taken." Oddly, though, he shies away from some of the technical detail that the reader wants from such a sharp and well-exercised scientific mind. Take the nanotechnological threat that a "grey goo" of self-replicating machines might eat the entire biosphere. Rees says that such goo is not in breach of any known physical laws, but that it may be as far beyond our current technology as a starship capable of flitting from planet to planet at 90% of the speed of light. The appetite is immediately whetted for an incisive discussion of how current ideas about self-replicating nanomachinery stack up in the practicality stakes against starship manufacture, which is certainly not a feasible proposition in this century. But the topic is then dropped. The fact that it worries the Prince of Wales does not necessarily make nanotechnology a good thing, or a safe one. But if we're to take the risk seriously we need something more to gnaw on than the fact that it breaks no laws of physics. Neither do invisible rabbits.

The danger of biotechnological terrorism, on the other hand, is as plain to see as a rabbit glowing in the dark: biological weapons, unlike nuclear ones, do not require hard-to-isolate and very pricey materials; knowledge about how to make vicious pathogens is freely available, though the craft of how to use them as weapons has managed to stay a little more arcane; and the rate of progress in biological science is such that anything a huge lab can do now, a small and meagre one could take on in 10 years. This will make biological weaponry available to small groups, including some gripped by folies-à-plusieurs that make them resistant both to the threat of deterrence and to the logic of diminishing returns that frequently, if not always, imposes a limit on political terror.

The death of a million people is a horrific prospect, but it is a long way from what is necessary for a return to the stone age. In some situations, such as the war in Congo, it is possible to kill a million people without most of civilisation - the urge for inverted commas here is strong - even noticing, let alone ending. The average war-related mortality in the 20th century was close to two million a year. A century that suffered terrorist mega-atrocities every decade but avoided major world or even regional wars would see far fewer conflict casualties than the century just finished: 10 such atrocities would claim some 10 million lives, against the 200 million or so of the past hundred years.

Rees's argument, though, implies that a million-death event would be just the beginning; eventually a bio- or nano-plague of some sort would up the ante a thousand-fold. But here, it seems to me, Rees falls into an error made by his adversaries in debates over the star wars plans of the 1980s: the fallacy of the last move. Even if it were possible to design a system capable of dealing with the currently deployed threat, Rees and other critics of Reagan's programme argued, the threat that would actually be faced would be a threat that had moved on. Mutatis mutandis, defences against bioterror will evolve in step with the threat. It is entirely conceivable that there will be no last move, and no final "game over".

How this might unfold is not easily imagined: as Rees says, "it is easier to conceive of effective threats than effective antidotes". But if imagination is against us, numbers and resources will be on our side. The defences we may erect, or others may erect in our name, will certainly not stop all attacks, and their implementation could have nasty costs. As Rees notes, there is some merit in the provocative case for radically reduced expectations of privacy made by David Brin in his book The Transparent Society, but there is no denying that, at best, it would be quite a dislocation. However, neither social dislocation, nor a profoundly unwelcome period of near-totalitarianism, would seem to fit Rees's definition of "the end of civilisation".

Rees is not convincing in his 50:50 estimate that this will be humanity's last century. The risks seem high, however, that it might be humanity's worst - one in which the numbers of those living in misery and dying from avoidable causes exceed all precedents. Devastating famines, mega-droughts, wars and plagues, both natural and not, are all possible, even likely. But they are not inevitable; enlightened statecraft, open societies, international solidarity, overhauled medicine, respect for human rights and the wise and accountable use of technology could help us not only avoid the worst of our future but also build on its best prospects.

In left-leaning science-fiction circles you increasingly come across Alasdair Gray's exhortation to "work as if you lived in the early days of a better nation", advice which remains applicable however bad the outlook. In so far as Rees's timely warnings underline the need for the sort of participatory optimism Gray describes, they are useful. They would have been even more welcome, though, if he had offered a little more by way of strategies for survival.

· Oliver Morton's Mapping Mars was short-listed for the Guardian first book award.
