# Nick Bostrom

*Philosopher Nick Bostrom at the Oxford Museum of Natural History in 2013*

Nick Bostrom (born Niklas Boström on 10 March 1973[1]) is a Swedish philosopher at St. Cross College, University of Oxford, known for his work on existential risk and the anthropic principle. He holds a PhD from the London School of Economics (2000). He is currently the director of both the Future of Humanity Institute and the Oxford Martin Programme on the Impacts of Future Technology, part of the Oxford Martin School at Oxford University.[2]

He is the author of more than 100 publications,[3] including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He has been awarded the Eugene R. Gannon Award and has been included in Foreign Policy's Top 100 Global Thinkers list.

In addition to writing for the academic and popular press, Bostrom makes frequent media appearances in which he discusses transhumanism-related topics such as artificial intelligence, superintelligence, mind uploading, cryonics, nanotechnology, human enhancement, and the simulation argument.

## Philosophy

### Ethics of human enhancement

Bostrom is favourable towards "human enhancement", or "self-improvement and human perfectibility through the ethical application of science",[4][5] and is a critic of bio-conservative views.[6] He has proposed the reversal test as a way of reducing status quo bias in bioethical discussions of human enhancement.[7]

In 1998, Bostrom co-founded (with David Pearce) the World Transhumanist Association[4] (which has since changed its name to Humanity+). In 2004, he co-founded (with James Hughes) the Institute for Ethics and Emerging Technologies. In 2005 he was appointed Director of the newly created Future of Humanity Institute in Oxford. Bostrom is the 2009 recipient of the Eugene R. Gannon Award for the Continued Pursuit of Human Advancement[8][9] and was named in Foreign Policy's 2009 list of top global thinkers "for accepting no limits on human potential."[10]

### Existential risk

Bostrom has addressed the philosophical question of humanity's long-term survival.[11][12] He defines an existential risk as one in which an "adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential." In the 2008 volume *Global Catastrophic Risks*, editors Bostrom and Ćirković offer a detailed taxonomy of existential risk, and various papers in the volume link existential risk to observer selection effects[13] and the Fermi paradox.[14]

### Simulation argument

Bostrom argues that at least one of the following statements is very likely to be true:

1. Human civilization is unlikely to reach a level of technological maturity capable of producing simulated realities, or such simulations are physically impossible.
2. A civilization that does reach such technological maturity is unlikely to produce a significant number of simulated realities, for any of a number of reasons, such as the diversion of computational power to other tasks or ethical objections to holding entities captive in simulated realities.
3. Any entities with our general set of experiences are almost certainly living in a simulation.

To quantify that tripartite disjunction, he offers the following equation:[15]

$f_\textrm{sim} = \frac{f_\textrm{p}NH} {(f_\textrm{p}NH)+H}$

where:

- $f_\textrm{p}$ is the fraction of all human civilizations that will reach a technological capability to program reality simulators.
- $N$ is the average number of ancestor-simulations run by the civilizations counted by $f_\textrm{p}$.
- $H$ is the average number of individuals who have lived in a civilization before it was able to perform reality simulation.
- $f_\textrm{sim}$ is the fraction of all humans who live in virtual realities.
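
As a reading aid (an interpretive gloss, not Bostrom's own wording): per average civilization there are $H$ non-simulated individuals and, in expectation, $f_\textrm{p}NH$ simulated ones, so the ratio simply counts simulated individuals over all individuals:

$f_\textrm{sim} = \frac{\textrm{simulated individuals}}{\textrm{simulated individuals} + \textrm{non-simulated individuals}} = \frac{f_\textrm{p}NH}{(f_\textrm{p}NH)+H}$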

$N$ can be calculated by multiplying the fraction of technologically capable civilizations interested in running such simulations ($f_\textrm{1}$) by the average number of simulations run by each such civilization ($N_\textrm{1}$):

$N = f_\textrm{1}N_\textrm{1}$
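
Substituting this into the first equation gives

$f_\textrm{sim} = \frac{f_\textrm{p}f_\textrm{1}N_\textrm{1}H} {(f_\textrm{p}f_\textrm{1}N_\textrm{1}H)+H}$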

Cancelling the common factor $H$, the formula becomes:

$f_\textrm{sim} = \frac{f_\textrm{p}f_\textrm{1}N_\textrm{1}} {(f_\textrm{p}f_\textrm{1}N_\textrm{1})+1}$

Because post-human computing power would make $N_\textrm{1}$ extremely large, at least one of the following three approximations will be true:

- $f_\textrm{p} \approx 0$
- $f_\textrm{1} \approx 0$
- $f_\textrm{sim} \approx 1$
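
As a minimal numerical sketch of this trichotomy (all parameter values below are hypothetical, chosen only to illustrate the limiting behaviour):

```python
# Bostrom's simulation fraction: f_sim = x / (x + 1),
# where x = f_p * f_1 * N_1. All values below are hypothetical.

def simulated_fraction(f_p: float, f_1: float, n_1: float) -> float:
    """Fraction of all human-type experiences that are simulated."""
    x = f_p * f_1 * n_1
    return x / (x + 1)

# Even with modest fractions, an astronomically large N_1 pushes
# f_sim toward 1:
print(simulated_fraction(f_p=0.01, f_1=0.01, n_1=1e12))  # ~0.99999999

# f_sim stays small only if f_p (or f_1) is effectively zero:
print(simulated_fraction(f_p=1e-15, f_1=0.01, n_1=1e12))  # ~1e-5
```

The two calls show why the disjunction holds: unless $f_\textrm{p}$ or $f_\textrm{1}$ is effectively zero, the product $f_\textrm{p}f_\textrm{1}N_\textrm{1}$ dominates the denominator and $f_\textrm{sim}$ approaches 1.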

## References

1. nickbostrom.com
2. http://www.oxfordmartin.ox.ac.uk/people/22