was Wolfram right after all?



Recently, Gerard 't Hooft published his own version of superstring theory.

"Ideas presented in two earlier papers are applied to string theory. ... We now also show that a cellular automaton in 1+1 dimensions that processes only ones and zeros, can be mapped onto a fermionic quantum field theory in a similar way. The natural system to apply all of this to is superstring theory ..." (*)



The earlier papers he refers to describe a duality between a deterministic cellular automaton and a bosonic quantum field theory in 1+1 dimensions and argue that Born's rule strongly points towards determinism underlying quantum mechanics (x).
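Just to have a concrete picture of what "a cellular automaton in 1+1 dimensions that processes only ones and zeros" means, here is a minimal Python sketch of such a deterministic automaton. The update rule (Wolfram's rule 110) and all parameters are arbitrary illustrative choices of mine, not 't Hooft's specific model or his mapping.

# a minimal deterministic 1+1 dimensional cellular automaton on ones and zeros;
# the update rule (Wolfram's rule 110) is an arbitrary illustrative choice,
# not 't Hooft's specific model
import numpy as np

def step(state, rule=110):
    # each cell's next value depends deterministically on (left, self, right)
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    idx = 4 * left + 2 * state + right      # neighborhood pattern 0..7
    table = (rule >> np.arange(8)) & 1      # lookup table from the rule number
    return table[idx]

state = np.zeros(64, dtype=int)
state[32] = 1                               # a single 'one' as initial data
for t in range(20):
    print("".join("#" if c else "." for c in state))
    state = step(state)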



All this is far from the mainstream, but 't Hooft is a physicist, not a crackpot, and so he points out the problems of his proposal(s) in his papers; e.g. he notes that some of his models have an unbounded Hamiltonian, and he does discuss the apparent contradiction with Bell's inequality.




(*) It has been known for a long time that the Ising model is equivalent to a fermionic field in 2 dimensions (see e.g. this paper for references).



(x) Quantum theory without the Copenhagen 'collapse' is a deterministic theory, so it is not too surprising if one finds such a duality. But the claim that Born's rule 'strongly points towards' determinism is unusual.



many simulated worlds



I think it makes sense to combine Nick Bostrom's simulation argument with the many worlds interpretation.

While Copenhagen tells us that some outcomes are very unlikely, the m.w.i. assures us that every possible world actually exists. So we can be sure that there are worlds which contain the necessary equipment to simulate your conscious experience - and since there are (infinitely) many different ways to simulate the same experience, we can follow Bostrom's argument to finally conclude that it is almost certain that you are experiencing The Matrix right now (*).



If you believe the m.w.i. then you must believe that your experience is just a fake (x).




(*) There are N ways in which your experience can be simulated and only one in which it is real; since you cannot distinguish them, they all have equal probability, and for (infinitely) large N the conclusion follows.
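Spelled out as (trivial) arithmetic, with N standing for the assumed number of indistinguishable simulations:

# toy version of the counting argument: N simulated instances plus 1 real one,
# all indistinguishable and hence assigned equal probability
for N in (10, 1000, 10**6):
    p_real = 1.0 / (N + 1)
    print(N, p_real)    # p_real -> 0 as N grows, hence 'almost certainly' simulated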





(x) Of course, my argument will probably not convince you, which is just what The Matrix does to you ...



Columbo and his memories




This was recently posted on that other blog, but on second thought it really belongs here, following a proud tradition of confusing thoughts about the arrow of time.




Inspector Columbo enters an empty apartment (*) and finds a dead body on the floor. He measures the body temperature and determines it to be 33C, while the room temperature is 21C. He could now use physics (i.e. forensic science) to predict quite well what will happen going forward. The dead body will continue to cool and a few hours later will reach equilibrium with the room temperature. He could even predict how, over the following days and weeks, this dead body will slowly but surely turn into a decaying corpse.

But he is not really interested in that. He wants to postdict what happened earlier. Again, he can use thermodynamics to determine how many minutes earlier the body temperature was 34C, 35C, ... But when his postdiction reaches 37C he must stop. All he can postdict is the time of death, but he cannot go beyond that point. Once his calculation reaches 37C he is dealing with a living human being, and from the data he has he cannot postdict what that person was doing. It would not make any sense to continue his calculation - and he would have to stop at 100C anyway.
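For concreteness, both the prediction and the postdiction are just Newton's law of cooling run forward and backward in time; here is a minimal Python sketch (the cooling constant k is a made-up number for illustration, not calibrated forensic science):

import numpy as np

T_room, T_now, T_living = 21.0, 33.0, 37.0   # degrees C, as in the story
k = 0.1                                      # made-up cooling constant per hour

def temperature(t_hours):
    # Newton's law of cooling: T(t) = T_room + (T_now - T_room) * exp(-k*t)
    return T_room + (T_now - T_room) * np.exp(-k * t_hours)

# prediction: going forward the body simply approaches room temperature
print([round(temperature(t), 1) for t in (0, 5, 10, 24)])

# postdiction: run the same formula backwards until 37C is reached;
# temperature(-t) = T_living  gives  t = ln((T_living - T_room)/(T_now - T_room)) / k
hours_before = np.log((T_living - T_room) / (T_now - T_room)) / k
print(round(hours_before, 1), "hours before the measurement the body was at 37C")
# beyond that point the extrapolation is meaningless: the formula describes a cooling
# corpse, not a living person (and run further back it would eventually pass 100C)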



In some sense 37C is like a 'singularity' for his postdiction, which he cannot cross - quite remarkable, because we are used to thinking that the past is certain and the future is not; but here it seems to be the other way around. (Actually this case is not that special; in general, physicists are pretty good at making predictions if they have the necessary initial data, but they are not good at all at making postdictions from the same data, which is why they normally don't do it.)



Later on, the clever inspector will find clues to what happened, fingerprints and other evidence - in other words he will find documents about the past. The truly amazing thing about those documents is that they fit together and tell a coherent story (e.g. the fingerprint on the door knob is the same as the fingerprint on the knife). Even more amazing is that in the end the killer will confess and tell the inspector what happened and his memory will match the story reconstructed from those documents.



Which brings us to the final question. Why do we have memories of the past but not the future? In other words, why does the inspector consider the dead body at the beginning of our story as evidence of a murder (which happened in the past) - but not as a document and memory of the rotten corpse it will be in the future?

Is it because he knows the future better than the past?






(*) There never was such an episode, but we can assume that he solved murder cases not shown on tv. Also, I am aware that he is actually a Lieutenant.



pictures of Columbo



I thought I should illustrate the Columbo story with two pictures.







Columbo (C) collects documents (*) about the murder case (the black vertical lines). He can easily predict the future of those documents, because they are stable (otherwise they would not be good documents). However, he can postdict the past of those documents only up to a certain point, when they have been created.

Different documents tell a coherent story; therefore he can assume that they were created by the same event E. But notice that he cannot postdict the state of those documents before E. The future of those documents is known, but their past is uncertain beyond a particular point, and this is what makes them memories of the past event E.



This picture of the 'arrow of time' is somewhat different from the usual image of entropy being low in the past and uncertainty increasing in the future.









(*) 'document' is used in a general sense - a hot cup of coffee in an empty apartment 'documents' that a person was in that apartment not too long ago. We know this because we cannot postdict the temperature of that coffee beyond 100C - so we know somebody had to be there to make it.


still asking the same question(s)



I write this post mostly to show that this blog is still alive ... and still asking the same question(s).



Recently, I read this paper about a numerical study in lattice gravity, trying to distinguish 1st and 2nd order phase transitions. They use and refer to the methods I am familiar with, but I do wonder if this is really the best one can do nowadays.



If one fits, e.g., the location of the critical coupling as a function of lattice size, one has to deal with two big problems: First, for a given lattice size the location of the critical coupling is not so well defined (e.g. due to the metastable states associated with 1st order transitions), and there are limits on computation time and resources (*).

Second, how can one be sure that the lattice is big enough to be in the 'scaling region', i.e. big enough that finite-size corrections can be neglected? (If the typical size of the 'bubbles' which come with a 1st order transition is n and the lattice size N is smaller than n, one has a problem.)
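To make the first problem concrete, the standard procedure is to locate a pseudo-critical coupling for each lattice size and extrapolate with some finite-size-scaling ansatz; a minimal sketch of such a fit, where the data, error bars and the ansatz are all made up for illustration:

import numpy as np
from scipy.optimize import curve_fit

# made-up pseudo-critical couplings beta_c(N) for a few lattice sizes N,
# with (equally made-up) error bars from locating the peak of a susceptibility
N      = np.array([4.0, 6.0, 8.0, 12.0, 16.0])
beta_c = np.array([2.31, 2.355, 2.378, 2.395, 2.403])
err    = np.array([0.010, 0.008, 0.006, 0.005, 0.004])

def ansatz(N, beta_inf, a, b):
    # simple finite-size-scaling ansatz: beta_c(N) = beta_inf - a / N**b
    return beta_inf - a / N**b

popt, pcov = curve_fit(ansatz, N, beta_c, sigma=err, absolute_sigma=True, p0=[2.41, 1.0, 1.0])
beta_inf, a, b = popt
print("extrapolated critical coupling:", beta_inf, "+/-", np.sqrt(pcov[0, 0]))
print("effective exponent b:", b)
# the two problems from the text show up directly: the input beta_c(N) values are
# only as good as the (metastability-plagued) peak locations, and the fit silently
# assumes all N are already in the scaling region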



So what is the current state of the art, and where are the professional statisticians and their Bayesian stochastic network thingy-ma-jiggies when we need them (x)? Please let me know if you know something.






(*) A related question for the practitioner: Is it better to spend the available computation power on a small number of iterations on a large lattice or is it better to do many iterations on a small lattice?



(x) Speaking of Bayesian thingy-ma-jiggies ...



follow up on quantum gravity



A year ago I mentioned a numerical study which indicates that AdS is unstable against small perturbations.

Meanwhile, Horowitz et al. "find strong support for this idea". They also mention that "any field theory with a gravity dual must exhibit the same turbulent instability, and transfer energy from large to small scales", but it is unclear (to me) what this means for the AdS/CFT correspondence. But notice that they study AdS4, although the assumption seems to be that AdS5 contains the same instability.



Two years ago I wrote about higher order gravity models. Recently, Leonardo Modesto considered "higher derivative gravity involving an infinite number of derivative terms".
This new model "is instead ghost-free" and "finite from two loops upwards: the theory is then super-renormalizable".



Last, but not least, Daniel Coumbe and Jack Laiho have published version 2 of their paper "exploring the phase diagram of lattice quantum gravity"; a while ago I mentioned the talk about it at the Lattice 2011 conference.





added later: In a new paper Ashoke Sen calculates logarithmic corrections to the entropy of black holes which "disagree with the existing result in loop quantum gravity".


too many worlds



This is the story told in many books: Initially Erwin and his cat are in some initial state |I> and then they develop into a superposition |F> = |H> + |N> with H indicating a happy cat and N a not so happy one (*). Due to decoherence the overlap <H|N> is very small - but not exactly zero. The interpretation is that |H> and |N> are associated with two different worlds, there is no 'collapse' (real or subjective) of the wave function which would eliminate one of the two.



But there is one issue I have with this story, not often told in those books: If there is no 'collapse', then how did we get the initial state |I> ? We have to assume that before his experiment with the cat Erwin made a decision to use his cat and not a dog, so really we have something like |I> + |d>. But before that he made a decision whether to do the experiment at all, and before that a committee made a decision whether he gets funding, and before that ...

So really the initial state was something like |I> + |x1> + |x2> + |x3> + ...

and therefore the following state looked something like |F> = |H> + |N> + |x1> + |x2> + |x3> + ... .



If there is never a 'collapse' then quantum theory is like a programming language without garbage collection, and we have to assume that the number of branches of the wave function is infinite - at least from our branch we cannot determine how many other branches there are. (Also we do not know the amplitude of our branch relative to all the others; we could exist due to a freak event in the past for all we know.)



But this is a real problem, because the overlap between different branches is very small but not zero.

So if we calculate <H|F> we get <H|H> + <H|N> + <H|x1> + <H|x2> + <H|x3> + ... and although every <H|x> is very small due to decoherence, there is a priori no reason for the infinite sum over all branches to converge (**); I have never seen a good argument why this (infinite) sum should remain small compared to the terms we are interested in.
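A toy numerical illustration of the worry (nothing more than that; the 'overlaps' below are just random small complex numbers, not derived from any dynamics):

import numpy as np

rng = np.random.default_rng(0)
eps = 1e-8   # stand-in for the typical size of a single decoherence-suppressed overlap <H|x>

for n in (10**2, 10**4, 10**6):
    # crude model: n overlaps of magnitude eps with random, uncorrelated phases
    terms = eps * np.exp(2j * np.pi * rng.random(n))
    print(n, "branches: |sum| ~", abs(terms.sum()), "  worst case (all in phase):", eps * n)

# with random phases the sum grows like eps*sqrt(n), in the worst case like eps*n;
# either way nothing bounds it once the number of branches is unbounded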



Of course one can try to save the appearances by making the assumption that somehow the sum only has a finite (but certainly very large) number of terms, e.g. the universe is finite and time is discrete. However, it would mean that we try to save quantum theory by making assumptions about cosmology and I don't think this is very convincing.





(*) Notice that I omit normalization factors (like 1/sqrt(2)) in this text to keep the ascii math readable.

(**) Notice that the point of this post is that it is actually a problem to determine those normalization factors (like 1/sqrt(2)) so that the total sums to 1.


the wave function of a photon



Every now and then a discussion of quantum theory and its interpretation involves photons, perhaps propagating in some interferometer; e.g. when I wrote about the interpretation problem I used the picture of a Mach-Zehnder interferometer.
And then a discussion of the many worlds or the 'collapse' of the wave function often follows.



But notice that the wave function of a photon is actually a problematic concept, e.g. "...it is possible to define position operators and localized states for massive particles and for massless particles of spin 0, but not for massless particles with spin."

Certainly, the electromagnetic field is not the wave function of a photon, as sometimes implied by discussions of interference in (quantum) optics experiments.



Why do I mention this? Well, it seems to me that some people consider wave functions as something very real, while others (including me) think of them as descriptions of reality; it might be helpful to consider single photons before discussing the many worlds.


a classical anthropic Everett model



"..the only kinds of entities we know for sure to be
real are our mental feelings and perceptions (including dreams). The material
world in which we have the impression of living is essentially just a theoretical
construct to account for our perceptions."



This was not written by Ernst Mach but is from a remarkable paper by Brandon Carter, dealing with the classical limit of the Everett many worlds interpretation.



"Assuming ... that mental processes have an essentially
classical rather than quantum nature, this essay has the relatively modest
purpose of attempting to sketch the outlines of a simpler, more easily accessible, classical unification ... as an approximation to a more fundamental quantum unification that remains elusive."



Like every other many worlds interpretation, it struggles with the issue of probabilities and proposes a solution "based on the use of an appropriate anthropic principle in conjunction with the Everett approach...".

I really like the way the problem is posed, but I don't find the answer very convincing [x]. If one acknowledges our perceptions as being necessarily classical realities, does Bohr's interpretation of quantum theory (or something similar) not make much more sense?



You, as a sentient being with q=1 (or as just another one of my theoretical constructions), be the judge of that...





[x] One problem I have with it (p.6ff) is this: Once we assume that the outside world is only my theoretical construct, how can the number of sentient beings (which are only theoretical constructs as well) make a difference to the calculated probabilities? Also, would the anthropic argument not indicate that it is always more likely to survive a certain situation than to be killed?


really the best of all possible worlds?



Given two equally good roads, the majority of drivers will pick the more crowded road...

... and of course I have fewer friends than my friends have on average.



So the following is not too surprising: We assume that we can assign a "quality of life" value Q to each world in the multiverse of all possible worlds, with Q=0 meaning "boring as hell" and Q->infinity meaning a world "approaching heaven". But then either we most likely live in a world of lower quality than the average world - or the total Q grows (less than) linearly with the sample size, which means the multiverse as a whole is of low quality...
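The 'fewer friends than my friends' line is the friendship paradox, which is exactly this kind of sampling bias made quantitative. A small simulation sketch (the preferential-attachment graph and all parameters are arbitrary choices of mine, purely for illustration):

import numpy as np

rng = np.random.default_rng(42)

# grow a preferential-attachment graph (Barabasi-Albert style): m edges per new node
n, m = 3000, 3
neighbors = {i: set() for i in range(n)}
for i in range(m + 1):                      # small complete core to start from
    for j in range(i):
        neighbors[i].add(j); neighbors[j].add(i)
repeated = [i for i in range(m + 1) for _ in range(m)]   # nodes listed once per edge end

for new in range(m + 1, n):
    targets = set()
    while len(targets) < m:                 # pick m distinct targets, degree-proportional
        targets.add(repeated[rng.integers(len(repeated))])
    for t in targets:
        neighbors[new].add(t); neighbors[t].add(new)
        repeated.extend([new, t])

deg = np.array([len(neighbors[i]) for i in range(n)])
friends_avg = np.array([np.mean([deg[j] for j in neighbors[i]]) for i in range(n)])
print("average degree:", round(deg.mean(), 1))
print("fraction of nodes with fewer friends than their friends have on average:",
      round(np.mean(deg < friends_avg), 2))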



inspired by Marginal Revolution


under my umbrella



"I would like to know whether it is likely to rain tomorrow. However, when I ask an Everett follower, all she will tell me in direct response to my question is that `it will rain and it won't rain.'
Nevertheless, she is willing to tell me whether or not it is rational for me to take an umbrella. I suppose that if she tells me that it would be highly irrational for me to take an umbrella, I can take the hint and deduce that it is very unlikely to rain.

However, there does seem something wrong with the fact that she is not allowed to say this directly."



Robert Wald reviewed Many Worlds?: Everett, Quantum Theory, and Reality


efficiency



Cosma writes about power laws and psycho killers;
the usual story - somebody sees a power law when e.g. a log-normal distribution would be a much better fit.

This suggests that three-toed sloths and the internet are not fully efficient distributors of information.
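For completeness, the standard exercise behind such claims (in the spirit of the papers Cosma discusses, boiled down to a sketch with synthetic data): fit both candidate distributions by maximum likelihood and compare, instead of eyeballing a straight line on a log-log plot.

import numpy as np

rng = np.random.default_rng(0)

# synthetic data that is actually log-normal, but looks vaguely straight on a log-log plot
x = rng.lognormal(mean=1.0, sigma=1.5, size=5000)
xmin = 1.0
x = x[x >= xmin]
n = len(x)
logx = np.log(x / xmin)

# maximum-likelihood power law (Pareto) above xmin: p(x) ~ x**(-alpha)
alpha = 1.0 + n / logx.sum()
ll_power = n * np.log((alpha - 1.0) / xmin) - alpha * logx.sum()

# maximum-likelihood log-normal on the same data
mu, sigma = np.log(x).mean(), np.log(x).std()
ll_lognorm = np.sum(-np.log(x * sigma * np.sqrt(2 * np.pi))
                    - (np.log(x) - mu) ** 2 / (2 * sigma ** 2))

print("power law:  alpha =", round(alpha, 2), " log-likelihood =", round(ll_power, 1))
print("log-normal: log-likelihood =", round(ll_lognorm, 1))
# the log-normal wins by a wide margin; a real analysis would also use a proper
# (truncated) comparison and a significance test, as in the papers Cosma discusses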



Meanwhile, on intrade I saw (Jan 24, 8am ET) some interesting odds for the Dow Jones Industrial index, suggesting that markets are not always fully efficient either:

Dow Jones to close ON or ABOVE 11750 on 31 Jan 2012: probability = 91.5%

Dow Jones to close ON or ABOVE 12000 on 31 Jan 2012: probability = 93.0%

Dow Jones to close ON or ABOVE 12250 on 31 Jan 2012: probability = 97.0%
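Spelled out: closing on or above 12250 implies closing on or above 12000 and 11750, so the quoted probabilities should be non-increasing in the threshold - here they increase instead. A trivial consistency check, with the quoted percentages hard-coded:

# the quoted intrade probabilities, keyed by threshold
quoted = {11750: 0.915, 12000: 0.930, 12250: 0.970}

# P(close >= higher threshold) can never exceed P(close >= lower threshold)
thresholds = sorted(quoted)
consistent = all(quoted[a] >= quoted[b] for a, b in zip(thresholds, thresholds[1:]))
print("internally consistent:", consistent)    # False for the numbers above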



So I wonder about the odds in a cliché betting market...