life expectancy


Recently I stumbled upon this math problem:
The positive integer N has a finite value but is unknown to us.
We are looking for a function f(X) which minimizes the sum
E = |f(0) - N| + |f(1) - N| + |f(2) - N| + ... + |f(N-1) - N|
for almost all N.

Notice that we do not know the value of N, so f(X) cannot depend on it, which eliminates the trivial solution f(X) = N among others.
In order to illustrate what I am looking for, consider as first example
1] f(X) = 0, which results in E = N^2.
However, there is a better solution
2] f(X) = X, which results in E = (N+1)*N/2; this is smaller for almost all N (for N=1 the two values are equal).

Unfortunately, I do not know the best solution f(X) and this is where you are invited to leave a comment to help me out.
But I do have strong evidence (i.e. a numerical test up to large values of N) that
3] f(X) = X + sqrt(X) is an even better solution.
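
To make such comparisons concrete, here is a minimal numerical sketch in Python (my own; the function total_error and the candidate list are just illustrative and not part of the original problem) which sums the errors for each candidate f over X = 0..N-1:

    from math import sqrt

    def total_error(f, N):
        # E = |f(0) - N| + |f(1) - N| + ... + |f(N-1) - N|
        return sum(abs(f(X) - N) for X in range(N))

    candidates = {
        "1] f(X) = 0":           lambda X: 0,
        "2] f(X) = X":           lambda X: X,
        "3] f(X) = X + sqrt(X)": lambda X: X + sqrt(X),
    }

    # print E for each candidate at a few values of N, so the ranking can be compared
    for N in (10, 100, 1000, 10000):
        print(N, {name: round(total_error(f, N)) for name, f in candidates.items()})

Adding further entries to candidates is all it takes to test other guesses for f(X).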

So what is the motivation for this problem and why the title for this blog post?
Consider a process or phenomenon that has already existed for X years, and we try to estimate its total lifetime N without any further information. So our estimate can only depend on X, and we try to find a function which minimizes the total estimation error E as described above; every year we make an estimate f(X) which is wrong by the amount |f(X) - N|, and we try to minimize the sum of those errors. In some sense this is a variation of the infamous 'doomsday argument'.

It is now obvious why 1] is a bad solution and 2] is much better. Btw the function f(X) = 2*X would give nearly the same total error E (smaller by roughly N/2, which is negligible compared to the leading N^2/2 term), so whether we assume that the process ends immediately or estimate that it will last twice as long (as it already did) makes no significant difference.

Btw the solution 3] creates a paradox: the best estimate for the life expectancy seems to depend on what units one uses, because the sqrt(X) term does not scale linearly with X; we get a different result if we calculate with days rather than years.


added later: If I assume f(X) = c*X, perhaps motivated by that paradox, then I can show that c=sqrt(2) minimizes E for large enough N. However, the function
4] f(X) = sqrt(2)*X + sqrt(X) seems to be an even better candidate, but I do not know how to determine the coefficients a, b that minimize E for f(X) = a*X + b*sqrt(X), let alone the general solution for f(X).
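
For the record, here is a sketch of the continuum approximation that presumably underlies the sqrt(2) claim (my own reconstruction, replacing the sum by an integral for large N and assuming c > 1):

    E(c) \approx \int_0^N |c\,x - N|\,dx
         = \int_0^{N/c} (N - c\,x)\,dx + \int_{N/c}^{N} (c\,x - N)\,dx
         = N^2 \left( \frac{1}{c} + \frac{c}{2} - 1 \right),

    \frac{dE}{dc} = N^2 \left( \frac{1}{2} - \frac{1}{c^2} \right) = 0
    \quad\Rightarrow\quad c = \sqrt{2},
    \qquad E(\sqrt{2}) \approx (\sqrt{2} - 1)\,N^2 \approx 0.414\,N^2 < \tfrac{1}{2}\,N^2 = E(1).

At this order a term b*sqrt(X) only contributes at O(N^(3/2)), so the approximation fixes a = sqrt(2) but says nothing about b.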


added later: There are better ways to illustrate the problem; e.g. a box contains an unknown number N of candies. You take them out one after another, and at each step X you have to guess N. At the end there is a penalty proportional to the sum of your guessing errors.
Perhaps a "deeper" example considers a Turing machine which has already performed X steps, and we try to guess after how many steps N it will halt.
The reason these examples are "better" is that X and N are pure counts, so there is no change of physical units (e.g. from years to days) that could alter the problem.

no black holes?


Laura Mersini-Houghton and Harald Pfeiffer published a paper with numerical results suggesting that black holes may not really exist (see also this earlier result). As one would expect, several popular-science websites have already picked up the story.

The paper is of course not a general proof, but describes a particular model with certain assumptions: it considers the spherically symmetric collapse of pressure-less dust and makes simplifying assumptions about the Hawking radiation. The stress-energy tensor of the Hawking radiation is taken from earlier calculations for (static) black holes, proportional to 1/R^2, and I don't think this is justified if one wants to prove that black holes do not exist. Further, it is assumed that most of the radiation is generated by the collapsing body itself (*), and finally assumptions are made about the heat transfer function C, which I cannot follow (yet).

The resulting differential equations are numerically integrated until a shell-crossing singularity appears, in other words a naked singularity (presumably an artifact of the model assumptions, i.e. perfect spherical symmetry, so it is only slightly embarrassing in a paper which wants to remove black hole singularities).
The behavior of the dust suggests a rebound near the horizon, but unfortunately the full evolution is not known, which leaves interesting questions open:
What happens to the pressure-less dust in the long run? Will it collapse again after the rebound, perhaps infinitely often?
What does the final state (including Hawking radiation and the "influx" of negative energy) actually look like?

I am sure this paper will generate several responses and eventually more realistic calculations will follow.
Until then I remain skeptical that this result will actually hold in general.


(*) I admit that I do not understand this passage in the earlier paper: "Hawking radiation is produced by the changing gravitational field of the collapsing star, i.e. prior to the black hole formation [..]. Otherwise the surface gravity of the black hole κ, and the temperature of Hawking radiation would increase with time..."
I thought the standard picture is that the "influx" is at the event horizon (not the collapsing body) and the temperature does indeed increase with time...


added later: Supposedly William Unruh was more direct and he thinks that the paper is nonsense.

the many worlds interpretation does not work


I posted a comment on the Shtetl blog, rejecting (once again) the many worlds interpretation (mwi). It is supposed to solve the "measurement problem" of quantum theory, so let us first consider a simple experiment with 2 possible outcomes.
The main mwi assumption is that after the measurement both outcomes are realized and subsequently two macroscopically different configurations M1 and M2 exist in some (decohered) superposition.

However, we can make the differences between M1 and M2 arbitrarily large and therefore gravitation cannot be ignored. M1 and M2 will in general be associated with two different space-time geometries and so far we do not have a consistent framework to deal with such a superposition.
In a few cases attempts have been made to describe such an evolution, but the conclusions are not in favor of mwi.
And how would the branching of space-time(s) work if the measurement is spread out over spacelike events, e.g. in an EPR-type experiment?

This gets worse if one considers a realistic experiment with a continuum of possible outcomes, e.g. the radioactive decay of a Pu atom, which can happen at any point of the continuous time parameter t. Assuming that this decay gets amplified with a Geiger counter to different macroscopic configurations, how would one describe the superposition of the associated continuum of space-time geometries?

The Copenhagen interpretation does not have this problem, because it only deals with one outcome and in general one can "reduce" the wave function before a superposition of spacetime geometries needs to be considered.

A mwi proponent may argue that this issue can be postponed until we have a consistent theory of quantum gravity and simply consider a Newtonian fixed background (or a flat Minkowski background). In this case one still has to face the well known problem of the Born probabilities, the preferred basis problem, the question of what it takes to be a world and others, discussed previously on this blog. And if one (implicitly) allows the existence of Newtonian clocks, then why not the classical observer of Copenhagen?

In other words, the mwi so far creates more problems than it solves.