### a probability puzzle

No paradox and nothing profound here, just a little puzzle to pass the time until Monday.

I have two reasons for posting it: i) It is similar to some problems I have to deal with at work (*) and ii) it gives me an opportunity to link to the blog where I got it from (after the solution is revealed).

So Alice and Bob like to play a certain (card) game (if they are not busy with encryption problems and black hole entanglement). Everybody knows that Alice is slightly more skilled at this game and wins with probability 55%; however, she really likes to win, and so Alice and Bob always play as many games as it takes for Alice to be ahead in winnings (x). So sometimes they play just one game (if Alice wins immediately) and sometimes many. But what is the expected number N of games the two will play (after a fresh start)?

(*) A similar problem I would be dealing with could be e.g. of the form "if I have an order sitting at the bid, how long will it take on average to get filled".

(x) Added later, just to clarify: Alice and Bob play N games until Alice wins one game more than Bob. E.g. Alice wins the 1st game; Or Bob wins the 1st and Alice wins the next 2 games; Or ...

------------------

This puzzle is equivalent to a biased random walk of the difference D in winnings between Bob and Alice. The walk begins at D=0, and D>0 means that Bob is ahead; it ends at D=-1, i.e. when Alice is ahead by one. So what is the expectation value E = E[N] of the length N of this random walk?

There are two ways to solve it. One can (try to) sum up all terms in the series of all possible events as described here. I assume this is how John von Neumann would have solved this puzzle.

Fortunately, there is a much easier solution for the rest of us and you can find it in the comments.

It gives us E = 1/(2p - 1), and with p = 0.55 as Alice's probability to win a single game we get E = 10.
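For completeness, the result can be reconstructed with a standard first-step (conditioning) argument; this is my own sketch, not necessarily the solution given in the comments:

```latex
% Condition on the first game: with probability p Alice wins it and the
% session ends after 1 game. With probability 1-p Bob leads by one, and
% Alice must now gain two games; each gain is an independent copy of the
% original walk, contributing an expected 2E further games.
E = p \cdot 1 + (1-p)\,(1 + 2E) = 1 + 2(1-p)\,E
\quad\Longrightarrow\quad
E = \frac{1}{1 - 2(1-p)} = \frac{1}{2p-1}
```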

Notice that E diverges for p=1/2, and I find this somewhat counterintuitive, knowing that an unbiased random walk will visit every point D with probability 1 (it just takes, on average, infinitely long to do so).
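Both the value E = 10 and the blow-up near p = 1/2 are easy to check numerically; here is a minimal Monte Carlo sketch (function names are mine, not from the original puzzle):

```python
import random

def session_length(p, rng):
    """Play games until Alice is ahead by one; return the number of games.

    d tracks Alice's wins minus Bob's wins, so the session ends at d = +1
    (equivalently D = -1 in the convention used in the post)."""
    d, n = 0, 0
    while d < 1:
        n += 1
        d += 1 if rng.random() < p else -1
    return n

def mean_length(p, trials=100_000, seed=1):
    """Monte Carlo estimate of the expected session length E[N]."""
    rng = random.Random(seed)
    return sum(session_length(p, rng) for _ in range(trials)) / trials

print(mean_length(0.55))   # close to 1/(2*0.55 - 1) = 10
print(mean_length(0.51))   # much larger, near 1/(2*0.51 - 1) = 50
```

The variance of the session length grows quickly as p approaches 1/2, so the second estimate fluctuates noticeably more than the first.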

### the strange result(s) of Frank Tipler

I met Prof. Tipler in 1992 during a seminar in Vienna about relativity and cosmology; he was a visiting professor for a year, and I remember very 'normal' discussions, e.g. of the Reissner-Kerr solution.

Two years later he wrote about The Physics of Immortality, and I thought that his book was quite interesting, although I disagreed with his conclusion(s); I remember that I felt uneasy about the certainty with which he expressed his unconventional views.

He jumped the shark with his next book about The Physics of Christianity and I am not sure in which Lalaland he found himself after this jump ...

But he continues to write papers about quantum physics as a proponent of a 'hardcore' many worlds interpretation (m.w.i.), and this post is about one of his conclusions:

His interpretation is actually based on the Bohm interpretation, assuming a deterministic Hamilton-Jacobi evolution of a distribution of hidden variables. While the original Bohm interpretation considers particle positions, in the Tipler interpretation the different possible universes are the hidden variables. He understands the Born probabilities as Bayesian probabilities obtained by the many real observers in the multiverse of all those universes. I think at this point his views are still compatible with m.w.i. a la Everett, and he argues that the Heisenberg uncertainty principle follows from his proposal, arising "from the interference of the other universes of the multiverse, not from some intrinsic indeterminism in nature". So far so good ...

But then he claims to have a test of his interpretation, by measuring pattern convergence rates: the frequencies of events measured in real experiments with sample size N will converge as 1/N to the Born frequencies. And I think this has to be wrong.

He even notices that in classical statistics the frequencies of events following e.g. a Gauss distribution converge more slowly, i.e. as 1/sqrt(N), and I wonder why this does not bother him. After all, it is not difficult to set up a simple quantum physics experiment which reproduces classical convergence: consider a (weakly) radioactive source which triggers a Geiger counter with 50% probability in a certain time interval. Now we let the Geiger counter tick along, and we can be quite sure that the sequence 100101011111000010101000... that we obtain obeys the well-known laws of conventional statistics.
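The classical scaling is easy to see in a simulation, with Bernoulli(1/2) draws standing in for the Geiger counter clicks (a sketch under that assumption; the function name is mine): the RMS deviation of the observed frequency from 1/2 should roughly halve, not quarter, when the sample size is quadrupled.

```python
import random

def rms_deviation(n, runs=1500, seed=2):
    """RMS deviation of the empirical click frequency from 1/2,
    estimated over many simulated runs of n Bernoulli(1/2) 'clicks'."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        clicks = sum(rng.random() < 0.5 for _ in range(n))
        total += (clicks / n - 0.5) ** 2
    return (total / runs) ** 0.5

d_small, d_large = rms_deviation(500), rms_deviation(2000)
# 1/sqrt(N) scaling: quadrupling N should roughly halve the deviation;
# a 1/N convergence rate would require the ratio to be near 4 instead
print(d_small / d_large)   # close to 2
```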

What am I missing?

Can one use Tipler's result as (another) example that m.w.i. does not reproduce the properties of Born probabilities correctly?

Btw if you wonder why I wrote this post now ... I saw Tipler's name on this diagram and remembered that I always wanted to write something about his strange result.

### the phase structure of CDT

In my previous post I criticized the description of the CDT phase diagram on a popular physics blog.

In this post I want to actually talk about the numerical CDT results.

The phase diagram depends on two coupling constants K and D (in the text they use kappa and delta). While K corresponds to the gravitational coupling, D measures the ratio of 'timelike' to 'spacelike' edges; I use quotes ' ' because the simulation is actually done in the Euclidean sector, but the edges fall into different categories, depending on what kind of distances they would correspond to after Wick rotation. There is a third coupling parameter, which corresponds to a cosmological constant, but it is fixed for technical reasons.

As I already explained, one looks for a critical line in the (D, K) plane corresponding to a 2nd order phase transition; the reason is that long-range fluctuations are associated with such a transition, so that the details of the discretization no longer matter.

So this is what I find weird: The parameter D describes a detail of the discrete model and the hope is to fine tune D, as a function of K, in order to find a critical line where the details of the discretization no longer matter...

The authors notice that D has "no immediate interpretation in the Einstein-Hilbert action" and thus the critical value D(K) does not correspond to any feature of the continuum limit - unless the continuum limit is not Einstein-Hilbert but Horava-Lifshitz gravity. This is what the authors propose and discuss in section 4 of their paper: HL gravity breaks diffeomorphism invariance of EH gravity, just like CDT does, and the parameter D would have a 'physical' meaning in this case.

It seems that the authors hope that EH gravity will be restored somewhere along the critical line D(K); however, it is unlikely imho that there is such a path from HL gravity to real gravitation.

### backreaction

It seems that an internet tradition is emerging, whereby a blog remains dormant for a while, until something so outrageously wrong appears on the interwebs that one has no choice but to respond to it.

In my case, Sabine Hossenfelder wrote about the phase diagram of CDT on her popular physics blog and I just have to set a few things straight:

1) We read that "... most recently the simulations found that ... space-time has various different phases, much like water has different phases".

But, of course, the phase structure of various lattice gravity models has been studied (implicitly and explicitly) since the early days of lattice gravity simulations, i.e. the 1980s. If one wants to find a reasonable continuum limit for such a model, then one has to examine the phase structure of the lattice model; in general, if the model has one or more coupling parameters then it will (most likely) exhibit different phases, just like water.

2) The holy grail for a physically interesting continuum limit is the existence of a non-trivial fixpoint, which appears in the phase diagram as a 2nd order phase transition. IF such a transition exists for CDT, it will be located on (one of) the critical lines and perhaps at the tri-critical point. The continuum limit will **not** appear in the area C of the diagram; there you certainly cannot "share images of your lunch with people you don’t know on facebook".

As far as I know, the existence of such a 2nd order transition has not been demonstrated yet, although intriguing hints have appeared in other lattice models previously. Of course, even IF such a 2nd order transition could be demonstrated, one would still not know if the continuum limit has anything to do with gravitation as we know it.

3) This 2nd order phase transition is a prerequisite for a consistent continuum model, and all 4d geometries would be generated with the same critical parameter values. It is therefore misguided to imagine that this phase transition happened at or near the big bang.

Indeed, the coupling parameters depicted in the phase diagram are bare, i.e. un-renormalized, coupling parameters, and while the diagram may indicate the existence and location of a non-trivial fixed point, almost all of the phase diagram is actually non-physical.

Therefore one cannot expect that this phase transition may be an alternative and/or replacement for inflation (as Sabine discussed in the comments).
