Self-Indication Assumption Doomsday argument rebuttal

From Wikipedia, the free encyclopedia

The Self-Indication Assumption Doomsday argument rebuttal is an objection to the Doomsday argument (that there is only a 5% chance of more than twenty times the historic number of humans ever being born) by arguing that the chance of being born is not one, but is an increasing function of the number of people who will be born.

History

This objection to the Doomsday Argument (DA), originally raised by Dennis Dieks (1992), developed by Bartha & Hitchcock (1999), and expanded by Ken Olum (2001), is that the probability of your existing at all depends on how many humans will ever exist (N). If N is large, then the chance of your existing is higher than if only a few humans will ever exist. Since you do indeed exist, this is evidence that N is high. The argument is sometimes expressed in an alternative way, by deriving the posterior marginal distribution of n from N without explicitly invoking a non-zero chance of existing; the Bayesian mathematics is identical.

The current name for this attack within the (very active) DA community is the "Self-Indication Assumption" (SIA), proposed by one of its opponents, the DA-advocate Nick Bostrom. His (2000) definition reads:

SIA: Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.

A development of Dieks's original paper by Kopf, Krtous and Page (1994) showed that the SIA precisely cancels out the effect of the Doomsday Argument, so that one's birth position (n) gives no information about the total number of humans that will ever exist (N). Modern DA proponents do not dispute this conclusion; they instead question the validity of the assumption itself, not the conclusion that would follow if the SIA were true.
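The cancellation can be illustrated with a toy two-hypothesis model (an illustrative sketch, not taken from the cited papers; the population sizes and priors are arbitrary):

```python
# Toy model of the SIA/DA cancellation: two hypotheses for the total
# number of humans N, with equal priors.  The DA likelihood of observing
# birth rank n is 1/N; the SIA weights each hypothesis by N.  The two
# factors cancel exactly, restoring the prior.
hypotheses = {10**3: 0.5, 10**6: 0.5}   # N -> prior P(N)
n = 100                                  # an observed birth rank, n <= min(N)

def posterior(use_sia):
    # unnormalized posterior: prior * (SIA weight) * (DA likelihood 1/N)
    weights = {N: p * (N if use_sia else 1) * (1.0 / N)
               for N, p in hypotheses.items()}
    total = sum(weights.values())
    return {N: w / total for N, w in weights.items()}

print(posterior(use_sia=False))  # DA alone: belief shifts heavily to N = 1000
print(posterior(use_sia=True))   # with SIA: the 50/50 prior is restored
```

The point of the sketch is only that the factor N (SIA) and the factor 1/N (DA likelihood) cancel term by term, whatever the prior.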

The Bayesian inference of N from n under the SIA

The SIA mathematics treats the chance of being the nth human as the joint probability of two separate events, both of which must hold:

  1. Being born at all: with marginal probability P(b).
  2. Being nth in line: with probability 1/N, by the Principle of indifference.

This means that the probability distribution for n is concentrated at P(n = 0) = 1 - P(b) (the case of never being born), and that for n > 0 the marginal distribution can be calculated from the conditional:

P(n \mid N) = \frac{P(b \mid N)}{N}, where n > 0

J. Richard Gott's DA can be formulated in the same way up to this point, with P(b \mid N) = P(b) = 1, producing Gott's conditional distribution of n given N. Dennis Dieks, however, argues that P(b) < 1, and that P(b \mid N) rises in proportion to N (which is the SIA). Expressed mathematically:

P(b \mid N) = \frac{N}{c}, where c is a constant

The SIA’s effect was expressed by Page et al. as Assumption 2 for the prior probability distribution, P(N):

"The probability for the observer to exist somewhere in a history of length N is proportional to the probability for that history and to the number of people in that history." (1994, emphasis added)

They note that similar assumptions had been dismissed by Leslie on the grounds that: "it seems wrong to treat ourselves as if we were once immaterial souls harbouring hopes of becoming embodied, hopes that would have been greater, the greater the number of bodies to be created." (1992)

One argument given for P(b | N) rising in N that does not create Leslie’s “immaterial souls” is the possibility of being born into any of a large number of universes within a multiverse. You can only be born into one, so the indifference principle within this (humans-across-universes) reference class would mean that the chance of being born into a particular universe is proportional to its weight in humans, N. (Echoing the weak anthropic principle.)

In this framework, the chance of 'not being born' is zero, but the chance of 'not being born into this universe' is non-zero.

Whatever the reasoning, the essential idea of the Self-Indication Assumption is that the prior probability of birth into this universe rises in N, and it is generally taken to be proportional to N. (The following discussion assumes proportionality, so that P(b \mid 2N) = 2 P(b \mid N); other functions increasing in N produce similar results.) Therefore:

P(n \mid N) = \frac{1}{c}, where n > 0

Effect of the “unborn” on the Bayesian inference

To clarify the exposition, Gott's vague prior distribution for N is ‘capped’ at some “universal carrying capacity”, \Omega. (This prevents N's distribution from being an improper prior.)

\Omega is the largest possible value for N if all living space in the universe were consumed. No specific upper bound for \Omega is assumed (from the number of habitable planets in the Galaxy, say); the cap simply makes N's posterior distribution tractable:

  P(N) = \begin{cases}
\frac{1}{N \ln(\Omega)}, & N \le \Omega \\
0, & N > \Omega \end{cases}
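As a quick numeric sanity check (an illustrative Python sketch; the values of \Omega and the step count are arbitrary), the capped prior integrates to one:

```python
import math

# Check that P(N) = 1/(N ln(Omega)) for 1 <= N <= Omega integrates to 1.
# Midpoint rule on a log-spaced grid (substitute u = ln N, so dN = N du).
Omega = 1e9
steps = 200_000
lo, hi = 0.0, math.log(Omega)
h = (hi - lo) / steps
total = 0.0
for i in range(steps):
    N = math.exp(lo + (i + 0.5) * h)          # midpoint in log space
    total += (1.0 / (N * math.log(Omega))) * N * h
print(round(total, 6))   # 1.0
```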

The \ln(\Omega) factor normalizes N's probability distribution, allowing calculation of the marginal P(n > 0) by integrating P(n \mid N) P(N) over the possible values of N:

P(n) = \int_{n}^{\Omega} P(n \mid N) P(N) \,dN =
\int_{n}^{\Omega} \frac{1}{c} \cdot \frac{1}{N \ln(\Omega)} \,dN

This range starts at n rather than 1 because n cannot exceed N. Using the Principle of indifference for n's distribution given N, this implies:

P(n) = \frac{\ln(\Omega / n)}{c \ln(\Omega)}

Substituting these marginals into Bayes' theorem (assuming N \le \Omega) gives:

P(N \mid n) = \frac{P(n \mid N) P(N)}{P(n)} = \frac{1}{c} \cdot \frac{1}{N \ln(\Omega)} \cdot \frac{c \ln(\Omega)}{\ln(\Omega / n)} = \frac{1}{N \ln(\Omega / n)}
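The closed form can be spot-checked against Bayes' rule assembled from the pieces above (a sketch: c is set to 1 since it cancels, and the sample values of \Omega, n and N are arbitrary):

```python
import math

# Spot-check P(N | n) = 1/(N ln(Omega/n)) against Bayes' rule built from
# the prior, likelihood, and marginal derived above (with c = 1).
Omega, n = 1e6, 10.0

def prior(N):          # P(N) = 1/(N ln Omega)
    return 1.0 / (N * math.log(Omega))

def likelihood(N):     # P(n | N) = 1/c, with c = 1
    return 1.0

def marginal():        # P(n) = ln(Omega/n) / (c ln Omega), with c = 1
    return math.log(Omega / n) / math.log(Omega)

def closed_form(N):    # the posterior quoted in the text
    return 1.0 / (N * math.log(Omega / n))

for N in (20.0, 1e3, 1e5):
    bayes = likelihood(N) * prior(N) / marginal()
    assert abs(bayes - closed_form(N)) < 1e-12
print("closed form matches Bayes' rule")
```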

The probabilistic bounds on N with the SIA

The chance of doomsday arriving before an arbitrary factor x of the current population n is born can be inferred by integrating the posterior over all N above xn. (Typically x = 20.)

P(N > xn \mid n) = \int_{xn}^{\Omega} \frac{1}{N \ln(\Omega / n)} \,dN = \frac{\ln(\Omega) - \ln(xn)}{\ln(\Omega) - \ln(n)}

Therefore, given the posterior information that we have been born and are nth in line, for any factor x \ll \Omega / n of the current population:

\lim_{\Omega\to\infty} P(N \le xn \mid n) = 0

Conclusion: n provides no information about N in an unbounded vague-prior SIA universe.
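The limiting behaviour can be seen numerically (a sketch; the population figures are illustrative and follow the 60-billion / 20x convention used elsewhere in the article):

```python
import math

# P(N > x*n | n) = ln(Omega/(x*n)) / ln(Omega/n), the survival probability
# derived above.  As Omega grows it approaches 1, i.e. P(N <= x*n | n) -> 0.
def p_survive(n, x, omega):
    return math.log(omega / (x * n)) / math.log(omega / n)

n, x = 60e9, 20            # sixty billion born so far, doom at 20x
for omega in (1e15, 1e24, 1e50, 1e100):
    doom = 1 - p_survive(n, x, omega)
    print(f"Omega = 1e{int(math.log10(omega))}: P(doom before {x}x) = {doom:.3f}")
```

Each row shows the DA-style doom probability shrinking toward zero as the cap \Omega is raised.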

Significance of Omega

Figure A: The probability of reaching one trillion people from the current (60 billion) count under the SIA, plotted against different theoretical maxima for N. (The x-axis is the finite maximum number of people that can ever be born into the universe, the upper limit of N, on a log scale.) Note that the carrying capacity is the maximum number of people in any potential universe, not just this one. Even on Earth, the maximum number of people that sunlight can simultaneously support has been estimated (e.g. by Isaac Asimov) at over a trillion; extending this for a million generations gives an upper limit over a quintillion on this planet alone, and a septillion if humanity colonizes around 0.001% of the galaxy's stars with similar efficiency. Under the SIA, as the potential size of N increases, the significance of the current number of births (n) decreases, to the point that this posterior information does not constrain the actual value of N at all.

The finite \Omega is essential to this solution in order to produce finite integrals. In a bounded universe \Omega must indeed be finite, although this is not usually an argument used by those proposing the SIA rebuttal. However, some proponents of indefinite survival of human (and posthuman) intelligence have postulated a finite endpoint as the (extremely high) “Omega”.

Specifying any finite upper limit \Omega was not part of Dieks's argument, and critics of the SIA have argued that an infinite upper bound on N creates an improper integral (or summation) in the Bayesian inference on N, which is a challenge to the logic of the rebuttal. (For example Eastmond, and Bostrom, who argues that if the SIA cannot rule out an infinite number of potential humans, it is fatally flawed.)

The unbounded vague prior is scale invariant, and has no finite mean. Consequently no finite value can be selected that has more than a 50% chance of exceeding N under its marginal distribution. Olum's rebuttal depends on such a limit existing; without it, his critique is technically not applicable. It must therefore be cautioned that the simplification used here (bounding N's distribution at \Omega) omits a significant hurdle to the credibility of the Self-Indication Assumption Doomsday argument rebuttal.

Remarks

Many people (such as Bostrom) believe the leading candidate for a refutation of the Doomsday argument is some form of the Self-Indication Assumption. It is popular partly because it is a purely Bayesian argument which accepts some of the DA's premises (such as the Indifference and Copernican principles). Other observations:

  • The joint prior distribution of n and N can be manipulated to produce a wide range of links between n and N by defining various birth probabilities given N. Since this distribution must be assumed prior to any evidence, any particular choice of P(b \mid N) is faith-based. Many writers feel that a joint distribution in which n carries no information about N is more natural than the strong link produced by the vague prior, making the DA "irrelevant" (Page et al.). Others, such as Gott, feel the opposite and are more comfortable using the pure vague prior as the joint prior, with P(b \mid N) = 1 for all N.
  • The SIA rebuttal is a very special form of the "a priori" rebuttal of the DA, and differs from that approach in being purely statistical.
  • If the SIA is true, then the mere fact of existence lends credence to any theory that postulates a high number of conscious beings in the universe, and controversially implies that a theory which does not is unlikely to be true. (For instance, the SIA implies that N is likely to be very high, so the probability of an upcoming Armageddon is correspondingly low, which would make the Doomsday clock's warning of relatively imminent destruction a mistake.)

Under the Self-Indication Assumption, the 'reference class' of which we are part includes a potentially vast number of the unborn (at least, unborn into this universe). To overturn the conventional DA calculation so completely, the reservoir of souls (potential births) in the reference class must be astoundingly large. For instance, the certain-birth DA estimates the chance of reaching the trillionth (10^{12}th) birth at around 5%; to shift this probability above 90% the SIA requires a potential number of humans (\Omega) on the order of 10^{24} (a septillion births). This might be feasible physically, and is also possible within the conventional DA model (though staggeringly unlikely). However, the SIA differs from the normal DA in having the reference class include all septillion unborn potential humans at this point in history, when only sixty billion have been born. Including unborn people in the reference class we sample from means including things for which we can never have any evidence. This puts the SIA at odds with philosophical approaches requiring strictly falsifiable constructs, such as Logical positivism.
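The order-of-magnitude claim can be reproduced by solving the bound equation above for \Omega (a sketch; the 90% target, 60 billion births, and x = 20 follow the figures in the text):

```python
import math

# Solve (ln Omega - ln(x*n)) / (ln Omega - ln n) = t for Omega:
# ln Omega = (ln(x*n) - t*ln n) / (1 - t).
def log10_omega_for(n, x, t):
    ln_omega = (math.log(x * n) - t * math.log(n)) / (1 - t)
    return ln_omega / math.log(10)   # log10 of the required Omega

print(round(log10_omega_for(60e9, 20, 0.9), 1))   # ~23.8, i.e. Omega ~ 1e24
```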

SIA Intuition: the lost-property metaphor

It can be hard to visualize how the Self-Indication Assumption changes the distribution because everyday cases where a null result can be returned don't change the statistics significantly. The following two examples of estimating the size of a darkened space show how the probability shift can occur:

  • Cloak-room case: Imagine looking for your coat in a dark room and finding it one foot from the door; the Bayesian inference from a vague prior is that the room is less than 20 feet long (with 95% confidence).
  • Lost-property case: Your coat has been filed somewhere in a huge lost-property warehouse, and as you search through its many aisles you see that they are all filled to capacity with belongings, and are various lengths. The aisle lengths are distributed according to the vague prior, except that none are more than 100 feet long. Finally, you find your coat one foot into a dark aisle, and wonder whether that aisle is more than twenty feet long.

The Bayesian inference shifts from the cloak-room case to the lost-property case because a longer aisle has a greater chance of being the one that holds your coat. Using the SIA Bayesian inference equation with \Omega = 100, n = 1, x = 20 gives the chance that the aisle is more than 20 feet long in the lost-property case:

  • Cloak-room case: The confidence that the room is shorter than 20 feet, given the position of the coat = 95%
  • Lost-property case: The confidence that the aisle is shorter than 20 feet, given exactly the same information about the coat's position in it = 65%

The confidence that the unseen space is longer than 20 feet is directly analogous to the confidence that the human race will become more than 20 times as numerous as it has been. Using an \Omega of one hundred times the current count only increases the subjective chance sevenfold (from 5% to 35%), but this is a very small cap, chosen for ease of exposition.
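The quoted figures can be reproduced directly from the bound formula (a sketch using the values stated above):

```python
import math

# Lost-property numbers: Omega = 100 feet, coat found n = 1 foot in,
# threshold factor x = 20, using P(length > x*n) = ln(Omega/(x*n))/ln(Omega/n).
def p_longer(n, x, omega):
    return math.log(omega / (x * n)) / math.log(omega / n)

sia = p_longer(1, 20, 100)     # chance the aisle is longer than 20 ft
print(round(1 - sia, 2))       # 0.65: confidence the aisle is under 20 ft
print(round(sia / 0.05, 1))    # 7.0: sevenfold the cloak-room's 5% chance
```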

Problems with the SIA

The SIA is not an assumption or axiom of Dieks's system; in fact, as stated, the negation of the SIA is a theorem of Dieks's system. A proposition similar to the SIA can be derived from Dieks's system, but only by revising the SIA to restrict it to situations where you do not know the date or your birth-order number. Even this related proposition is not an axiom of Dieks's system: it is a theorem, derived from other, more fundamental assumptions. In Dieks's system, you might never have been born, and the end of the human race is independent of your birth-order number. A proposition related to the SIA, but not the SIA itself, can be derived from these assumptions. Hence no one actually assumes the SIA; it might better be called the Self-Indication Corollary.

SIA's own doomsday argument

Katja Grace argues that while SIA overcomes the standard doomsday argument, when combined with an assumption of a Great Filter, SIA leads to another kind of doomsday prediction. The reasoning is as follows. In some worlds, the filter may be early—some time before the advent of a technological civilization like ours. In other worlds, the filter may be late—between the advent of technological civilization and galactic colonization. Collectively, the worlds with mostly late filters have many more instances of life at the human level of development, so SIA, together with the knowledge that we are at the human-level stage, implies we're probably in one of the worlds with a late filter. In other words, the risk of extinction is higher than we would have naively supposed.[1][2][3]

Notes

  1. ^ Grace, Katja (October 2010). Anthropic Reasoning in the Great Filter.
  2. ^ Grace, Katja (23 March 2010). "SIA doomsday: The filter is ahead". Meteuphoric. Retrieved 13 June 2014.
  3. ^ Hanson, Robin (22 March 2010). "Very Bad News". Overcoming Bias. Retrieved 13 June 2014.
