life expectancy


Recently I stumbled upon this math problem:
The positive integer N is finite but unknown to us.
We are looking for a function f(X) which minimizes the sum
E = |f(0) - N| + |f(1) - N| + |f(2) - N| + ... + |f(N-1) - N|
for almost all N.

Notice that we do not know the value of N, so f(X) cannot depend on it, which eliminates the trivial solution f(X) = N among others.
In order to illustrate what I am looking for, consider as first example
1] f(X) = 0, which results in E = N^2.
However, there is a better solution
2] f(X) = X, which results in E = (N+1)*N/2, which is less for almost all N (except N=1).

Unfortunately, I do not know the best solution f(X) and this is where you are invited to leave a comment to help me out.
But I do have strong evidence (i.e. a numerical test up to large values of N) that
3] f(X) = X + sqrt(X) is an even better solution.
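
For anyone who wants to reproduce that numerical test, here is a minimal sketch (the candidate functions and the values of N are just the ones discussed in this post, nothing about it is optimized):

from math import sqrt

def total_error(f, N):
    # E = |f(0) - N| + |f(1) - N| + ... + |f(N-1) - N|
    return sum(abs(f(X) - N) for X in range(N))

candidates = {
    "f(X) = 0": lambda X: 0,
    "f(X) = X": lambda X: X,
    "f(X) = X + sqrt(X)": lambda X: X + sqrt(X),
}

for N in (10, 100, 1000, 10000):
    print(N, {name: round(total_error(f, N)) for name, f in candidates.items()})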

So what is the motivation for this problem and why the title for this blog post?
Consider a process or phenomenon which has already existed for X years, and we try to estimate its total lifetime N without any further information. So our estimate can only depend on X, and we look for a function which minimizes the total estimation error E as described above: every year we make an estimate f(X) which is wrong by the amount |f(X) - N|, and we try to minimize the sum of those errors. In some sense this is a variation of the infamous 'doomsday argument'.

It is now obvious why 1] is a bad solution and 2] is much better. Btw the function f(X) = 2*X would give almost the same total error E as 2], smaller only by a term of order N/2 which is negligible compared to the leading N^2/2 behavior, so whether we assume that the process ends immediately (f(X) = X) or estimate that it will last twice as long as it already did (f(X) = 2*X) makes no significant difference.

Btw the solution 3] creates a paradox: the best estimate for the life expectancy seems to depend on which units one uses; we get a different result if we calculate with days rather than years.


added later: If I assume f(X) = c*X, perhaps motivated by that paradox, then I can show that c = sqrt(2) minimizes E for large enough N (in the continuum approximation E is roughly N^2*(c/2 + 1/c - 1), which is minimized at c = sqrt(2)). However, the function
4] f(X) = sqrt(2)*X + sqrt(X) seems to be an even better candidate, but I do not know how to determine a, b so as to minimize E for f(X) = a*X + b*sqrt(X), let alone determine the general solution for f(X).
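
Lacking an analytical answer, one can at least scan for good coefficients numerically; a crude sketch (the grids for a and b and the fixed value of N are arbitrary choices, and minimizing for a single large N is only a proxy for 'almost all N'):

from math import sqrt

def total_error(a, b, N):
    # E for the candidate f(X) = a*X + b*sqrt(X)
    return sum(abs(a*X + b*sqrt(X) - N) for X in range(N))

N = 2000
grid = ((a/100.0, b/10.0) for a in range(100, 181) for b in range(0, 31))
best_a, best_b = min(grid, key=lambda ab: total_error(ab[0], ab[1], N))
print("best (a, b) for N =", N, ":", best_a, best_b)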


added later: There are better ways to illustrate the problem; e.g. a box contains an unknown number N of candies. You take them out one after another and at each step X you have to guess N. At the end there is a penalty proportional to the sum of the errors of your guesses.
Perhaps a "deeper" example considers a Turing machine which has already performed X steps, and we try to guess after how many steps N it will halt.
The reason these examples are "better" is that there is no change of physical units (e.g. from years to days) that would affect N.

no black holes?


Laura Mersini-Houghton and Harald Pfeiffer published a paper with numerical results suggesting that black holes may not really exist (see also this earlier result). As one would expect, several pop. sci. webpages have already picked this story up.

The paper is of course not a general proof, but describes a particular model using certain assumptions: it considers the spherically symmetric collapse of pressure-less dust and it makes simplifying assumptions about the Hawking radiation. The energy tensor for the Hawking radiation is taken from earlier calculations for (static) black holes, proportional to 1/R^2, and I don't think this is justified if one wants to prove that black holes do not exist. Further, it is assumed that most of the radiation is generated by the collapsing body itself (*), and finally assumptions are made about the heat transfer function C which I cannot follow (yet).

The resulting differential equations are numerically integrated until a shell-crossing singularity appears, in other words a naked singularity (presumably an artifact of the model assumptions, i.e. perfect spherical symmetry, so it is only slightly embarrassing in a paper which wants to remove black hole singularities).
The behavior of the dust suggests a rebound near the horizon, but it is too bad that the full evolution is not known, because it raises interesting questions:
What happens to the pressure-less dust in the long run? Will it collapse again after the rebound, perhaps infinitely often?
What does the final state (including Hawking radiation and the "influx" of negative energy) actually look like?

I am sure this paper will generate several responses and eventually more realistic calculations will follow.
Until then I remain skeptical that this result will actually hold in general.


(*) I admit that I do not understand this passage in the earlier paper: "Hawking radiation is produced by the changing gravitational field of the collapsing star, i.e. prior to the black hole formation [..]. Otherwise the surface gravity of the black hole κ, and the temperature of Hawking radiation would increase with time..."
I thought the standard picture is that the "influx" is at the event horizon (not the collapsing body) and the temperature does indeed increase with time...


added later: Supposedly William Unruh was more direct and he thinks that the paper is nonsense.

any good answers to this one?


At the Strings 2014 conference, Piotr Bizon talked about the gravitational turbulent instability of AdS5.
I became aware of this issue more than three years ago and I have to admit that I still do not really understand what it means. As I see it, turbulence is one of the big unsolved problems in physics, mostly due to the fact that it prevents us from neatly separating energy scales; the opposite of the clean separation which enables renormalization a la Wilson.
So what does it mean that this turbulent instability shows up on one side of the famous AdS/CFT correspondence?

my derivation of the Born rule


I just read (parts of) Sean Carroll's derivation of the Born rule, but I do not find it very convincing, because there is a much simpler, straightforward derivation available to resolve this problem of "self-locating uncertainty".

1) We shall use a "hardcore" many worlds interpretation, assuming that the world splits at all times into a quasi-infinite number of branches, realizing all possible outcomes of quantum theory. We assume that those branches are all equally real, and a simple counting argument shows that the Born rule does not hold in almost all of those branches (see the little counting sketch below). It follows that we do not live in one of those generic branches, which solves the first part of our self-location problem.

2) It is reasonable to assume that some of those infinitely many branches contain at least one quantum computer capable of simulating human life. Those computers will have to simulate quantum theory, but we can further assume that they will only keep one branch at a time in order to save resources. It is straightforward to assume that they are programmed to use the Born rule to select this branch randomly.

3) We observe the Born rule to great precision and it follows that we are the human beings simulated in one of those quantum computers. This finally resolves the self-location problem.
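
Regarding the counting argument in 1), here is a minimal illustration (I assume a single qubit measured n times with Born probability 0.9 for one of the outcomes, and naive branch counting, i.e. one equally real branch per outcome sequence):

from math import comb

n = 100          # number of measurements of the qubit
total = 2**n     # number of branches if every outcome sequence is one branch

def fraction_of_branches(lo, hi):
    # fraction of branches whose relative frequency of outcome '1' lies in [lo, hi]
    return sum(comb(n, k) for k in range(round(lo*n), round(hi*n) + 1)) / total

print("branches with frequency near the Born value 0.9:", fraction_of_branches(0.85, 0.95))
print("branches with frequency near 0.5:", fraction_of_branches(0.45, 0.55))

Almost all branches show a frequency near 0.5, no matter what the Born probability is, which is the point of the counting argument.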

I would add that (some of) the simulated human beings will use the Copenhagen interpretation to explain what they experience; i.e. an interpretation which emphasizes the importance of the observer and her 'conscious experience'. Obviously, the simulated human beings are unaware that their 'conscious experience' is indeed a side effect of the procedure which selects the simulated branch randomly.

effective altruism


I mentioned Jess Riedel in the previous blog post. Here I want to highlight his list of organizations related to effective altruism.
While we contemplate how many worlds there are, we can try to improve the one we know - beyond posting hashtags on twitter.

preferred basis


I did write some comments on Scott's blog which might be interesting to those bothered by the 'preferred basis problem'. It begins here and references are made to this answer at physics.stackexchange by Jess Riedel and this paper by Dowker and Kent.

While I'm at it, I should also link to this paper about 'entanglement relativity' and a (claimed) inconsistency of the Everett interpretation. I am not sure if the argument is correct (decoherence might appear different for two different decompositions, but does this really prove anything?) and would appreciate any input.

added later: The back and forth in the comment thread ended (for now) with a homework exercise for mwi proponents.

added later: Btw another interesting paper from an Austrian team about decoherence due to classical, weak gravitation (i.e. on Earth).

----

Btw this unrelated comment Scott made about the "arrow of time" was a bit shallow imho. My own view of the problem begins with this thought experiment.

a probability puzzle


No paradox and nothing profound here, just a little puzzle to pass the time until Monday.
I have two reasons for posting it: i) It is similar to some problems I have to deal with at work (*) and ii) it gives me an opportunity to link to the blog where I got it from (after the solution is revealed).

So Alice and Bob like to play a certain (card) game (if they are not busy with encryption problems and black hole entanglement). Everybody knows that Alice is slightly more skilled at this game and wins with probability 55%; however, she really likes to win, and so Alice and Bob always play as many games as it takes for Alice to be ahead in winnings (x). So sometimes they play just one game (if Alice wins immediately) and sometimes many, but what is the expected number N of games the two will play (after a fresh start)?

(*) A similar problem at work would be e.g. of the form "if I have an order sitting at the bid, how long will it take on average to get filled".

(x) Added later, just to clarify: Alice and Bob play N games until Alice wins one game more than Bob. E.g. Alice wins the 1st game; Or Bob wins the 1st and Alice wins the next 2 games; Or ...



------------------


This puzzle is equivalent to a biased random walk of the difference D in winnings between Bob and Alice. It begins at D=0, and D>0 means that Bob is ahead; the random walk ends at D=-1, i.e. when Alice is ahead by one. So what is the expectation value E = E[N] of the length N of this random walk?

There are two ways to solve it. One can (try to) sum up all terms in the series of all possible events as described here. I assume this is how John von Neumann would have solved this puzzle.

Fortunately, there is a much easier solution for the rest of us and you can find it in the comments.
It gives us E = 1/(2p - 1), and with p = 0.55 for Alice winning a single game we get E = 10.

Notice that E diverges for p=1/2 and I find this somewhat counterintuitive, knowing that an unbiased random walk will visit every point D with probability 1.
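
A quick Monte Carlo sanity check of E = 1/(2p - 1); the number of trials is an arbitrary choice, and the simulation obviously gets slow as p approaches 1/2:

import random

def games_until_alice_ahead(p, rng):
    # play games until Alice has won exactly one game more than Bob
    lead, games = 0, 0
    while lead < 1:
        games += 1
        lead += 1 if rng.random() < p else -1
    return games

rng = random.Random(1)
trials = 200000
for p in (0.55, 0.51):
    avg = sum(games_until_alice_ahead(p, rng) for _ in range(trials)) / trials
    print("p =", p, " simulated E =", round(avg, 1), " 1/(2p-1) =", round(1/(2*p - 1), 1))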

the strange result(s) of Frank Tipler


I met Prof. Tipler in 1992 during a seminar in Vienna about relativity and cosmology; he was a visiting professor for a year, and I remember very 'normal' discussions, e.g. of the Reissner-Kerr solution.
Two years later he wrote The Physics of Immortality and I thought that his book was quite interesting, although I disagreed with his conclusion(s); I remember that I felt uneasy about the certainty with which he expressed his unconventional views.
He jumped the shark with his next book, The Physics of Christianity, and I am not sure in which Lalaland he found himself after this jump ...

But he continues to write papers about quantum physics as a proponent of a 'hardcore' many worlds interpretation (m.w.i.) and this post is about one of his conclusions:
His interpretation is actually based on the Bohm interpretation, assuming a deterministic Hamilton-Jacobi evolution of a distribution of hidden variables. While the original Bohm interpretation considers particle positions, in the Tipler interpretation the different possible universes are the hidden variables. He understands the Born probabilities as Bayesian probabilities obtained by the many real observers in the multiverse of all those universes. I think at this point his views are still compatible with m.w.i. a la Everett, and he argues that the Heisenberg uncertainty principle follows from his proposal, arising "from the interference of the other universes of the multiverse, not from some intrinsic indeterminism in nature". So far so good ...

But then he claims to have a test of his interpretation by measuring pattern convergence rates: Frequencies of events measured in real experiments with sample size N will converge as 1/N to the Born frequencies. And I think this has to be wrong.
He even notices that in classical statistics the frequencies of events following e.g. a Gaussian distribution converge more slowly, i.e. as 1/sqrt(N), and I wonder why this does not bother him. After all, it is not difficult to set up a simple quantum physics experiment which reproduces the classical convergence. Consider a (weakly) radioactive source which triggers a Geiger counter with 50% probability in a certain time interval. Now we let the Geiger counter tick along and we can be quite sure that the sequence 100101011111000010101000... that we obtain obeys the well-known laws of conventional statistics.
What am I missing?
Can one use Tipler's result as (another) example that m.w.i. does not reproduce the properties of Born probabilities correctly?
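
For what it's worth, a small simulation of the 50% Geiger counter above shows the classical 1/sqrt(N) convergence (the sample sizes and number of runs are arbitrary):

import random

rng = random.Random(0)
runs = 200
for N in (100, 1000, 10000):
    # root-mean-square deviation of the observed frequency from 0.5, over many runs
    sq_devs = []
    for _ in range(runs):
        clicks = sum(rng.random() < 0.5 for _ in range(N))
        sq_devs.append((clicks / N - 0.5) ** 2)
    rms = (sum(sq_devs) / runs) ** 0.5
    print(N, " rms deviation:", round(rms, 4), " rms * sqrt(N):", round(rms * N**0.5, 2))

The last column stays roughly constant (around 0.5), i.e. the deviation shrinks like 1/sqrt(N) and not like 1/N.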


Btw if you wonder why I wrote this post now ... I saw Tipler's name on this diagram and remembered that I always wanted to write something about his strange result.