In my previous post I criticized the description of the CDT phase diagram on a popular physics blog.
In this post I want to actually talk about the numerical CDT results.
The phase diagram depends on two coupling constants K and D (in the text they use kappa and delta). While K corresponds to the gravitational coupling, D measures the ratio of 'timelike' to 'spacelike' edges; I use quotes because the simulation is actually done in the Euclidean sector, but the edges fall into different categories, depending on what kind of distances they would correspond to after Wick rotation. There is a third coupling parameter, corresponding to a cosmological constant, but it is fixed for technical reasons.
As I already explained, one looks for a critical line in the (D, K) plane corresponding to a 2nd order phase transition; the reason is that long-range fluctuations are associated with such a transition, so that the details of the discretization no longer matter.
So this is what I find weird: The parameter D describes a detail of the discrete model and the hope is to fine tune D, as a function of K, in order to find a critical line where the details of the discretization no longer matter...
The authors notice that D has "no immediate interpretation in the Einstein-Hilbert action" and thus the critical value D(K) does not correspond to any feature of the continuum limit - unless the continuum limit is not Einstein-Hilbert but Horava-Lifshitz gravity. This is what the authors propose and discuss in section 4 of their paper: HL gravity breaks diffeomorphism invariance of EH gravity, just like CDT does, and the parameter D would have a 'physical' meaning in this case.
It seems that the authors hope that EH gravity will be restored somewhere along the critical D(K) line; however, imho it is unlikely that there is such a path from HL gravity to real gravitation.
It seems that an internet tradition is emerging, whereby a blog remains dormant for a while, until something so outrageously wrong appears on the interwebs that one has no choice but to respond to it.
In my case, Sabine Hossenfelder wrote about the phase diagram of CDT on her popular physics blog and I just have to set a few things straight:
1) We read that "... most recently the simulations found that ... space-time has various different phases, much like water has different phases".
But, of course, the phase structure of various lattice gravity models has been studied (implicitly and explicitly) since the early days of lattice gravity simulations, i.e. the 1980s. If one wants to find a reasonable continuum limit for such a model, then one has to examine the phase structure of the lattice model. In general, if the model has one or more coupling parameters then it will (most likely) exhibit different phases, just like water.
2) The holy grail of a physically interesting continuum limit is the existence of a non-trivial fixed point, which appears in the phase diagram as a 2nd order phase transition. IF such a transition exists for CDT, it will be located on (one of) the critical lines, perhaps at the tri-critical point. The continuum limit will not appear in the area C of the diagram; there you certainly cannot "share images of your lunch with people you don’t know on facebook".
As far as I know, the existence of such a 2nd order transition has not been demonstrated yet, although intriguing hints have appeared in other lattice models previously. Of course, even IF such a 2nd order transition could be demonstrated, one would still not know if the continuum limit has anything to do with gravitation as we know it.
3) This 2nd order phase transition is a prerequisite for a consistent continuum model, and all 4d geometries would be generated with the same critical parameter values. It is therefore misguided to imagine that this phase transition happened at or near the big bang.
Indeed, the coupling parameters depicted in the phase diagram are bare, i.e. un-renormalized, coupling parameters and while the diagram may indicate the existence and location of a non-trivial fixed point, almost all of the phase diagram is actually non-physical.
Therefore one cannot expect that this phase transition may be an alternative and/or replacement for inflation (as Sabine discussed in the comments).
I was thinking about a good example of Knightian uncertainty and this is my proposal: We cannot know what theorems a particular mathematician or a group of mathematicians will be able to prove.
As a concrete example, consider the collaborative effort to lower the prime gap bound H from Zhang's 70 million; currently, the best confirmed value for H is 60,726. But will H drop below 1000 by the end of this year?
Obviously we cannot know for sure (otherwise we would have a proof for H < 1000 already) and I think that any attempt to come up with a 'Bayesian probability' in this case would be inappropriate.
But this means that the state of our world by the end of this year is unknown. (If you wonder how to reconcile such a statement with physics, I recommend these links: 1, 2, 3, 4).
Btw it is possible that some Martian mathematicians, with far more advanced math capabilities, would already know the answer to the above question, but in this case I have to assume that they have conjectures in their advanced Martian mathematics which they cannot prove yet.
So the strong form of my example is this: In our universe there is at least one mathematician, who is unable to predict if she will be able to prove the theorem she is currently working on - and nobody else is able to predict it either, because she is the smartest/most advanced mathematician.
Therefore the future state of our universe is unknowable.
We randomly implement a Turing machine Tm with N states, using a Geiger counter plus radioactive material as a 'quantum coin'. We proceed as follows: First we use the 'quantum coin' to determine (with probability 1/2) whether the Tm has N=2 (head) or N>2 (tail) states. If we got tail, we use the 'quantum coin' again to see if N=3 or N>3, and so on and so forth.
Once we have determined N in that way, we then construct the transition table(s) of the Tm, using the 'quantum coin' again and again, so that all (8N)^N possible transition tables could be realized.
Once this construction is finished, we start the Tm on a tape with a few 1s sprinkled on it and then watch what happens.
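Since the construction is purely mechanical, it can be sketched in a few lines of Python. A pseudorandom generator stands in for the 'quantum coin' here, all the names (flip, sample_n, sample_table, run) are mine rather than from any library, and randrange(n + 1) is a shortcut for the repeated coin flips one would use to pick the next state uniformly:

```python
import random

def flip():
    # stand-in for the 'quantum coin'; a real run would use the Geiger counter
    return random.random() < 0.5

def sample_n():
    # N = 2 with probability 1/2, N = 3 with probability 1/4, ...
    n = 2
    while not flip():   # 'head' stops, 'tail' adds another state
        n += 1
    return n

def sample_table(n):
    # one entry per (state, read bit); each entry fixes the written bit,
    # the head movement and the next state, where state n acts as HALT
    table = {}
    for state in range(n):
        for read in (0, 1):
            write = int(flip())
            move = 1 if flip() else -1
            nxt = random.randrange(n + 1)   # shortcut for repeated coin flips
            table[(state, read)] = (write, move, nxt)
    return table

def run(table, n, max_steps=10_000):
    # start on a tape with a few 1s sprinkled on it and watch what happens;
    # returns the number of steps until HALT, or None if the budget runs out
    tape, pos, state = {0: 1, 3: 1}, 0, 0
    for t in range(max_steps):
        write, move, nxt = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == n:
            return t + 1
        state = nxt
    return None
```

A single experiment is then n = sample_n(); table = sample_table(n); run(table, n) - with the obvious caveat that run can only ever report 'no halt yet', never 'runs forever'.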
This experiment is easy to understand if we follow the Copenhagen interpretation. The Tm we will build is most likely quite simple, because the probability for a complex N-state Tm decreases rapidly as 1/2^(N-1). Once the Tm is put together, its evolution is not even a quantum mechanics problem any more. If we want, the transition table(s) of this particular Tm can be inspected before it runs, to determine if it will halt.
But this experiment is much more difficult to understand within a many worlds interpretation: Every use of the 'quantum coin' splits the world and thus we are dealing with a wave function of the universe which contains every possible Tm in one of its branches. The amplitudes assigned to worlds with N-state machines are quite small if N is large, but all the worlds are equally real.
Unfortunately, due to the Halteproblem (the halting problem), the evolution of this wave function is uncomputable. In other words, the wave function of the universe does not have a mathematical description at all.
The best part of this experiment is that I do not even have to do it myself. Somewhere in the universal wavefunction there is already a branch where somebody, somewhere in the universe, has decided to do this experiment (*). This means that the amplitude of my branch is already uncomputable (x).
(*) This also takes care of the counter argument that on Earth resources are finite and thus the experiment has to terminate at a certain (large) N0. Since we have to consider the wave function of the multiverse (of infinite size), this argument is not convincing, because we cannot know N0.
(x) Notice that the overlap between different branches of the wave function is very small, due to decoherence, but in general non-zero.
Recently I read the biography of Erwin Schrödinger by John Gribbin, who points out that E.S. proposed a many worlds interpretation of quantum theory several years before Everett. This got me thinking how to make sense of m.w.i. after all.
As I have pointed out several times on this blog [1, 2, 3], a major problem of the many worlds interpretation is the derivation of the Born rule.
But I think there is a way out: if qubits are the fundamental building blocks of our world, then every event could eventually be reduced to a series of yes-no alternatives of equal probability ('ur-alternatives') - and in this case the m.w.i. gives the correct probability.
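A toy computation illustrates why equal alternatives are the one case where naive branch counting reproduces the Born rule; the example below (an event realized as 3 of 4 equally weighted branches) and its function name are my own:

```python
from fractions import Fraction
from itertools import product

def branch_count_probability(event, n):
    # split the world by n fair yes-no alternatives (2**n equal branches)
    # and simply count the branches in which the event occurs
    branches = list(product((0, 1), repeat=n))
    hits = sum(1 for b in branches if event(b))
    return Fraction(hits, len(branches))

# an event that occurs unless both ur-alternatives come out 1:
# 3 of the 4 equal branches contain it, so counting branches gives 3/4
p = branch_count_probability(lambda b: b != (1, 1), 2)   # Fraction(3, 4)
```

For unequal amplitudes this simple counting fails, which is just the usual Born rule problem; the hope expressed above is that unequal amplitudes never appear at the fundamental level.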
I think this would also take care of the 'preferred basis' problem, because if the world is fundamentally discrete, the 'preferred basis' would assign two unit vectors in Hilbert space to each qubit.
C.F.v. Weizsäcker proposed his ur-theory many years before the term 'qubit' was invented, and if one takes m.w.i. seriously, this would be a strong reason to consider ur-theory or something similar (*).
Much later the idea that our world is a large quantum computer has been investigated e.g. by Seth Lloyd (but I don't know if it would work with urs).
In this case the task of deriving the Born rule would be equivalent to deriving QFT as we know it, together with general relativity, from ur-theory and/or from the behavior of large quantum computers.
(*) C.F.v. Weizsäcker himself was a believer in the Copenhagen interpretation and rejected m.w.i. explicitly in his book.
I just came across this blog post which proposes some sort of Bayesian mathematics: "[..] Suppose you conjecture something as part of your research program. [..] you could use Bayes' theorem to give two estimates on the plausibility of your conjecture being true. One is giving the most generous probabilities given the evidence, and the other is giving the least generous. You’ll get some sort of Bayesian confidence interval of the probability of the conjecture being true."
Obviously, I was thinking about counter-examples to this proposal. As we all know from Kurt Gödel and the pop. sci. literature, mathematics cannot be automated, due to the Halteproblem; so a natural starting point would be a Turing machine with n possible internal states. We assume that n is large enough and the machine complicated enough that our Bayesian mathematician cannot figure out right away if it will halt or run forever.
So she assigns a probability p(H) to the conjecture that the machine will at some point halt and begins with an uninformed prior p(H) = 1/2. Then the machine starts, reads and writes on its infinite tape, and our Bayesian mathematician updates p(H). For a long while all she sees is that the machine continues and continues, so p(H) decreases, but suddenly after L steps the machine hits the stop state. Did p(H) help the Bayesian mathematician to anticipate this event?
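Her predicament can be made concrete with a toy model. For illustration I assume (my own modeling choice, nothing in the setup forces it) that, conditional on halting, the machine stops at a geometrically distributed step with parameter q:

```python
def update(p_halt, q, steps):
    # posterior P(halt) after watching `steps` steps without a stop,
    # assuming P(no stop within t steps | halts) = (1 - q)**t and, of
    # course, P(no stop within t steps | never halts) = 1
    survive = (1.0 - q) ** steps
    return p_halt * survive / (p_halt * survive + (1.0 - p_halt))

# starting from the uninformed prior p(H) = 1/2, the posterior drifts
# monotonically towards 0 as long as the machine keeps running
posteriors = [update(0.5, 0.01, t) for t in (0, 10, 100, 1000)]
```

The posterior can only decrease while the machine runs, so nothing in the update anticipates the sudden stop at step L; with a badly chosen q, p(H) is close to 0 at the very moment the machine halts.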
So an interesting question would e.g. be this: if the Bayesian mathematician is equivalent to a Turing machine with N internal states, can she use Bayesian updating so that p(H) increases and indeed reaches 1 before the machine stops after L steps, and at the same time be sure that p(H) will decrease and reach 0 if the machine never stops? I guess she could try to detect loops and other patterns, as long as they are not too complicated to be handled by an N-state Turing machine...
Well, I am pretty sure if N is fixed and n is allowed to be arbitrarily large this would violate the spirit as well as the letter of the Halteproblem. But what about n < N? Do we know anything about that?
added later: After thinking about this some more I would like to clarify a few things: The detection of 'endless loops' or other patterns, which indicate that the machine can never halt, has nothing to do with Bayesian updating itself. The task to find such algorithms and pattern detectors is equivalent to proving a conjecture the old fashioned way.
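One pattern detector of this kind, sound but far from complete, checks for repeated configurations: a deterministic machine that revisits the exact same (state, head position, tape contents) configuration is provably caught in an endless loop. A sketch, with an encoding of my own (state n plays the role of HALT):

```python
def classify(table, n, max_steps=10_000):
    # table maps (state, read bit) -> (write bit, move, next state)
    tape, pos, state = {}, 0, 0
    seen = set()
    for _ in range(max_steps):
        config = (state, pos, frozenset(tape.items()))
        if config in seen:
            return "loops"      # deterministic, so it repeats forever
        seen.add(config)
        write, move, nxt = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == n:
            return "halts"
        state = nxt
    return "unknown"            # step budget exhausted, no verdict

# a 2-state machine that bounces between two cells forever:
ping_pong = {(0, 0): (0, 1, 1), (0, 1): (1, 1, 1),
             (1, 0): (0, -1, 0), (1, 1): (1, -1, 0)}
verdict = classify(ping_pong, 2)   # "loops"
```

The detector is only complete for machines whose reachable configurations are finite; a machine that simply marches off along the tape never repeats a configuration, and exactly such cases are where the Halteproblem bites.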
Also, if one examines all Turing machines with n smaller than some given number, starting on a finite sample of tapes and running for less than L steps, one may find certain frequencies of halting and non-halting behavior, and perhaps one can extrapolate to some extent to the case of much larger L and derive some bounds for p(H) that way. But again, this would have nothing to do with Bayesian updating; it would perhaps be a generalization of Chaitin's reasoning.
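Such a survey could look as follows; I sample random machines instead of enumerating all of them, and every parameter here (binary alphabet, encoding, sample size) is an arbitrary choice of mine:

```python
import random

def random_table(n):
    # uniform over all transition tables; state n plays the role of HALT
    return {(s, r): (random.randint(0, 1),
                     random.choice((-1, 1)),
                     random.randrange(n + 1))
            for s in range(n) for r in (0, 1)}

def halts_within(table, n, L):
    # run on an initially blank tape for at most L steps
    tape, pos, state = {}, 0, 0
    for _ in range(L):
        write, move, nxt = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == n:
            return True
        state = nxt
    return False

def halting_frequency(n, L, samples=1000):
    # empirical fraction of random n-state machines that stop within
    # L steps: a lower bound on their true halting probability
    return sum(halts_within(random_table(n), n, L)
               for _ in range(samples)) / samples
```

By construction this only ever yields lower bounds on p(H) - increasing L can move a given machine from the 'not halted yet' pile to the 'halted' pile, but never the other way round - which is indeed closer to Chaitin's style of reasoning than to Bayesian updating.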
The more I think about it, it seems that Bayesian updating itself does not help at all with this problem...
added even later: Meanwhile I found some papers related to this question: 1, 2, 3. I noticed that "The major drawback of the AIXI model is that it is uncomputable".
added much later: Scott A. uses a probability argument about complexity theory in a comment on his blog (see also the following comments). But I don't find his argument very convincing.
Cosma reminds us that a certain sloth is already ten years old; but as a true statistician he also gives us an error bar with this date ...
However, his list of the twenty best pieces misses the one I consider the best. After all, the title of the paper it references inspired the name of this blog...
Well, congratulations and I am looking forward to the next ten years of slothing ...