# Measure problem (cosmology)

The measure problem in cosmology concerns how to compute fractions of universes of different types within a multiverse. It typically arises in the context of eternal inflation. The problem arises because different approaches to calculating these fractions yield different results, and it's not clear which approach (if any) is correct.[1]

Measures can be evaluated by whether they predict observed physical constants, as well as whether they avoid counterintuitive implications, such as the youngness paradox or Boltzmann brains.[2] While dozens of measures have been proposed,[3]:2 few physicists consider the problem to be solved.[4]

## The problem

Infinite multiverse theories are becoming increasingly popular, but because they involve infinitely many instances of different types of universes, it's unclear how to compute the fractions of each type of universe.[4] Alan Guth put it this way:[4]

> In a single universe, cows born with two heads are rarer than cows born with one head. [But in an infinitely branching multiverse] there are an infinite number of one-headed cows and an infinite number of two-headed cows. What happens to the ratio?

Sean M. Carroll offered another informal example:[1]

> Say there are an infinite number of universes in which George W. Bush became President in 2000, and also an infinite number in which Al Gore became President in 2000. To calculate the fraction N(Bush)/N(Gore), we need to have a measure — a way of taming those infinities. Usually this is done by “regularization.” We start with a small piece of universe where all the numbers are finite, calculate the fraction, and then let our piece get bigger, and calculate the limit that our fraction approaches.

However, different procedures for computing the limit of this fraction yield wildly different answers.[1]

One way to illustrate how different regularization methods produce different answers is to calculate the limiting fraction of positive integers that are even. Suppose the integers are ordered the usual way,

1, 2, 3, 4, 5, 6, 7, 8, ...

At a cutoff of "the first five elements of the list", the fraction is 2/5; at a cutoff of "the first six elements" the fraction is 1/2; the limit of the fraction, as the subset grows, converges to 1/2. However, if the integers are ordered in a different way,

1, 2, 4, 3, 6, 8, 5, 10, 12, 7, 14, 16, ...

the limit of the fraction of integers that are even converges to 2/3 rather than 1/2.[5]
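The two orderings can be checked numerically. The following is an illustrative sketch (the function and generator names are our own), computing the running fraction of even terms under each ordering:

```python
from itertools import count, islice

def running_fraction_even(seq, n):
    """Fraction of the first n terms of the iterable seq that are even."""
    first = list(islice(seq, n))
    return sum(1 for k in first if k % 2 == 0) / n

# Natural ordering: 1, 2, 3, 4, 5, ...
# running_fraction_even(count(1), n) approaches 1/2 as n grows.

def two_evens_per_odd():
    """Alternative ordering 1, 2, 4, 3, 6, 8, 5, 10, 12, ...:
    each odd number is followed by the next two even numbers."""
    odd, even = 1, 2
    while True:
        yield odd
        yield even
        yield even + 2
        odd += 2
        even += 4

# running_fraction_even(two_evens_per_odd(), n) approaches 2/3 instead.
```

Both limits exist, but they disagree: the answer depends entirely on the ordering chosen before taking the cutoff to infinity.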

A popular way to decide what ordering to use in regularization is to pick the simplest or most natural-seeming ordering. Most would agree that the first sequence, ordered by increasing size of the integers, seems more natural. Similarly, many physicists agree that the "proper-time cutoff measure" (below) is the simplest and most natural method of regularization. Unfortunately, the proper-time cutoff measure appears to produce incorrect results.[3]:2[5]

The measure problem is important in cosmology because in order to compare cosmological theories in an infinite multiverse, we need to know which types of universes theories predict to be more common than others.[4]

## Proposed measures

Figure: In this toy multiverse, the left-hand region exits inflation (red line) later than the right-hand region does. With the proper-time cutoff shown by the black dotted lines, the immediately post-inflation portion of the left-hand universe dominates the measure, flooding the measure with five "Boltzmann babies" (red) that are freakishly young. Extending the proper-time cutoff to later times does not help, as other regions (not pictured) that exit inflation even later would then dominate. With the scale-factor cutoff shown by the gray dotted lines, only observers who exist before the region has expanded by the scale factor are counted, giving normal observers (blue) time to dominate the measure; in this example, the left-hand universe hits the scale cutoff even before it exits inflation.[3]

### Proper-time cutoff

The proper-time cutoff measure considers the probability ${\displaystyle P(\phi ,t)}$ of finding a given scalar field ${\displaystyle \phi }$ at a given proper time ${\displaystyle t}$.[3]:1–2 During inflation, the region around a point grows like ${\displaystyle e^{3H\Delta t}}$ in a small proper-time interval ${\displaystyle \Delta t}$.[3]:1

This measure has the advantage of being stationary, in the sense that probabilities approach constant values in the limit of large ${\displaystyle t}$.[3]:1 However, it suffers from the youngness paradox: it makes it exponentially more probable that we would find ourselves in regions of high temperature, in conflict with what we observe, because regions that exited inflation later than ours spent more time undergoing runaway inflationary exponential growth.[3]:2 For example, observers in a universe 13.7 billion years old (our observed age) are outnumbered by observers in a 13.0-billion-year-old universe by a factor of ${\displaystyle 10^{10^{60}}}$. This lopsidedness continues until the most numerous observers resembling us are "Boltzmann babies" formed by improbable fluctuations in the hot, very early universe. Physicists therefore reject the simple proper-time cutoff as a failed hypothesis.[6]
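The size of this effect can be sketched with illustrative numbers. The Hubble rate `H` below is an assumed inflation-scale value, not a measured one; the point is only that volume ratios of order ${\displaystyle 10^{10^{60}}}$ arise from modest time differences when volumes grow like ${\displaystyle e^{3H\Delta t}}$:

```python
import math

# Assumed, illustrative numbers -- not a precise cosmological calculation.
# Volumes grow like exp(3*H*dt), so two regions whose inflation exit times
# differ by delta_t have a volume ratio of exp(3*H*delta_t). The ratio
# itself overflows any float, so we work with its base-10 logarithm.
H = 3.5e43                   # assumed inflationary Hubble rate, in 1/s
delta_t = 0.7e9 * 3.156e7    # 0.7 billion years, converted to seconds

log10_ratio = 3 * H * delta_t / math.log(10)
# log10_ratio is of order 1e60, i.e. the volume ratio is roughly 10^(10^60)
```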

### Scale-factor cutoff

Time can be parameterized in different ways than proper time.[3]:1 One choice is to parameterize by the scale factor of space ${\displaystyle a}$, or more commonly by ${\displaystyle \eta \sim \log a}$.[3]:1 Then a given region of space expands as ${\displaystyle e^{3\Delta \eta }}$, independent of ${\displaystyle H}$.[3]:1

This approach can be generalized to a family of measures in which a small region grows as ${\displaystyle e^{3H^{\beta }\Delta t_{\beta }}}$ for some ${\displaystyle \beta }$ and time-slicing ${\displaystyle t_{\beta }}$.[3]:1–2 Any choice of ${\displaystyle \beta }$ yields a measure that is stationary at large times.

The scale-factor cutoff measure takes ${\displaystyle \beta =0}$, which avoids the youngness paradox by not giving greater weight to regions that retain high energy density for long periods.[3]:2

The choice of ${\displaystyle \beta }$ is delicate: any ${\displaystyle \beta >0}$ yields the youngness paradox, while any ${\displaystyle \beta <0}$ yields an "oldness paradox" in which most life is predicted to exist in cold, empty space as Boltzmann brains rather than as the evolved creatures with orderly experiences that we seem to be.[3]:2

De Simone et al. (2010) consider the scale-factor cutoff measure to be a very promising solution to the measure problem.[7] This measure has also been shown to produce good agreement with observational values of the cosmological constant.[8]

### Stationary

The stationary measure proceeds from the observation that different processes achieve stationarity of ${\displaystyle P(\phi ,t)}$ at different times.[3]:2 Thus, rather than comparing processes at a given time since the beginning, the stationary measure compares them in terms of the time since each process individually became stationary.[3]:2 For instance, different regions of the universe can be compared based on the time since star formation began.[3]:3

Andrei Linde and coauthors have suggested that the stationary measure avoids both the youngness paradox and Boltzmann brains.[2] However, the stationary measure predicts extreme (either very large or very small) values of the primordial density contrast ${\displaystyle Q}$ and the gravitational constant ${\displaystyle G}$, inconsistent with observations.[7]:2

### Causal diamond

Reheating marks the end of inflation. The causal diamond is the finite four-volume formed by intersecting the future light cone of an observer crossing the reheating hypersurface with the past light cone of the point where the observer has exited a given vacuum.[3]:2 Put another way, the causal diamond is[4]

> the largest swath accessible to a single observer traveling from the beginning of time to the end of time. The finite boundaries of a causal diamond are formed by the intersection of two cones of light, like the dispersing rays from a pair of flashlights pointed toward each other in the dark. One cone points outward from the moment matter was created after a Big Bang — the earliest conceivable birth of an observer — and the other aims backward from the farthest reach of our future horizon, the moment when the causal diamond becomes an empty, timeless void and the observer can no longer access information linking cause to effect.

The causal diamond measure multiplies the following quantities:[9]:1,4

• the prior probability that a world line enters a given vacuum
• the probability that observers emerge in that vacuum, approximated as the difference in entropy between exiting and entering the diamond. ("[T]he more free energy, the more likely it is that observers will emerge.")

Different prior probabilities of vacuum types yield different results.[3]:2 Entropy production can be approximated as the number of galaxies in the diamond.[3]:2
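As a toy illustration of how the two factors combine (the vacuum names and all numbers below are hypothetical, not drawn from any actual calculation):

```python
# Toy sketch: the causal-diamond weight of a vacuum is the product of
# its prior probability and an entropy-production proxy, here
# approximated by a galaxy count, as described in the text.
vacua = {
    # name: (prior probability that a world line enters, galaxies in diamond)
    "vacuum_A": (0.7, 1e2),
    "vacuum_B": (0.3, 1e5),
}

weights = {name: prior * galaxies for name, (prior, galaxies) in vacua.items()}
total = sum(weights.values())
probabilities = {name: w / total for name, w in weights.items()}

# Despite its smaller prior, vacuum_B dominates the measure here because
# it produces far more entropy (galaxies) inside its diamond.
```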

An attraction of this approach is that it avoids comparing infinities, which is the original source of the measure problem.[4]

### Watcher

The watcher measure imagines the world line of an eternal "watcher" that passes through an infinite number of big crunch singularities.[10]