Existential risks are disasters that could wipe out humanity or permanently cripple it, whether natural or man-made. Dr. Nick Bostrom first articulated the concept and uses a risk table to explain it. Our minds and institutions are ill-equipped for existential-risk thinking because we have never experienced such a disaster. Bostrom lists a dozen or so existential risks and ranks them by severity and recoverability. Countermeasures include observation and warning systems and the regulation of dangerous technologies.
An existential risk is a disaster so great that it either wipes out all of humanity or permanently cripples us. These can be natural disasters or man-made disasters, whether intentional or accidental. Some existential risks have been with us for millennia, some have emerged only in recent decades, and some still lie in our future. Examples include large asteroid strikes, nuclear warfare, and rogue artificial intelligence.
The concept of existential risk was first articulated in its current form by Dr. Nick Bostrom, an Oxford philosopher. He uses a risk table similar to the following to explain existential risks:
| Risk scope \ Risk intensity | Negligible | Manageable | Terminal |
|---|---|---|---|
| Global | El Niño | deforestation | existential risk |
| Local | temporary economic crisis | hurricane | — |
| Personal | paper cut | sprained ankle | being shot |
Existential risks are global in scope and terminal, or nearly terminal, in intensity. An extremely contagious virus with a 99.9% fatality rate, to which no one is immune, is an example: out of roughly eight billion people, it would leave only about eight million survivors.
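To make the two-axis classification concrete, here is a minimal sketch that encodes the table's logic. The names and structure are my own illustration, not Bostrom's notation:

```python
from enum import Enum

class Scope(Enum):
    PERSONAL = 1
    LOCAL = 2
    GLOBAL = 3

class Intensity(Enum):
    NEGLIGIBLE = 1
    MANAGEABLE = 2
    TERMINAL = 3

def is_existential(scope: Scope, intensity: Intensity) -> bool:
    """A risk is existential only if it is both global in scope
    and terminal (or nearly terminal) in intensity."""
    return scope is Scope.GLOBAL and intensity is Intensity.TERMINAL

# Examples from the table above:
print(is_existential(Scope.GLOBAL, Intensity.TERMINAL))    # True  (the 99.9% virus)
print(is_existential(Scope.GLOBAL, Intensity.MANAGEABLE))  # False (deforestation)
print(is_existential(Scope.PERSONAL, Intensity.TERMINAL))  # False (being shot)
```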
Bostrom points out that our minds and institutions are ill-equipped for existential-risk thinking because we have never experienced such a disaster: if we had, we would not be here to think about it. Like a child who does not know a stove is hot until he touches it, we have little experience with disasters on this scale. The bubonic plague of medieval Europe and the Spanish flu of World War I offer a glimpse of what an existential disaster would be like: each killed tens of millions, often striking down healthy adults within days of infection.
In his canonical article on the subject, Bostrom lists a dozen existential risks and ranks them according to their severity and recoverability. Some of the more plausible ones are listed here:
genetically modified viruses
nanotechnology arms race
catastrophic nuclear war
runaway self-replicating robots
superintelligent AI indifferent to humans
a physics disaster in a particle accelerator
a supervolcano eruption that blocks out the sun
Due to the extreme gravity and irreversibility of existential risks, it is worth devising and implementing countermeasures. While the chance of any given existential threat becoming a reality is small, the immense stakes justify a serious avoidance agenda. For man-made threats, countermeasures include sophisticated observation and warning systems and the regulation of certain technologies to ensure they are not used for mass destruction. Countries suspected of possessing weapons of mass destruction are sometimes invaded by others concerned about the long-term consequences, as the Iraq war vividly demonstrated.
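The "small chance, immense stakes" point can be seen with simple expected-value arithmetic. The probabilities and loss figures in this sketch are illustrative assumptions, not estimates from Bostrom or anyone else:

```python
# Back-of-envelope expected-loss comparison, illustrating why tiny
# probabilities of existential catastrophe can still dominate policy.

world_population = 8_000_000_000

# A "manageable" global disaster: fairly likely, kills 0.1% of humanity.
p_manageable, deaths_manageable = 0.10, 0.001 * world_population

# An existential disaster: very unlikely, kills everyone.
p_existential, deaths_existential = 0.0001, world_population

print(p_manageable * deaths_manageable)    # 800,000 expected deaths
print(p_existential * deaths_existential)  # 800,000 expected deaths

# Even at 1-in-10,000 odds, the existential risk's expected toll matches
# the far likelier disaster's, and this ignores all future generations,
# whose loss makes the existential case incomparably worse.
```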