the blog formerly known as The Statistical Mechanic, by Wolfgang <br>
<br>
2015-05-07 - the many worlds interpretation does not work <br>
I posted <a href="http://www.scottaaronson.com/blog/?p=1951#comment-115845">a comment</a> on the Shtetl blog, rejecting (once again) the many worlds interpretation (mwi). It is supposed to solve the "measurement problem" of quantum theory,
so let us first consider a simple experiment with two possible outcomes. <br>
The main mwi assumption is that after
the measurement both outcomes are realized and subsequently two macroscopically different configurations
M1 and M2 exist in some (decohered) superposition. <br>
<br>
However, we can make the differences between M1 and M2 arbitrarily large and therefore gravitation cannot be ignored. M1 and M2 will in general be associated with two different space-time geometries and so far we do not
have a consistent framework to deal with <a href="http://blog.jessriedel.com/2013/05/30/superpostions-of-the-metric/">such a superposition</a> (*); should we e.g. use 2 different time parameters t<sub>1</sub>, t<sub>2</sub> - <a href="http://www.scottaaronson.com/blog/?p=1781#comment-104973">one for each observer</a> in each space-time? <br>
In a few cases such an evolution has been described, but <a href="http://arxiv.org/abs/0705.2357">the conclusions are not in favor of mwi</a>. <br>
And how would the branching of space-time(s) work if the measurement is spread out over spacelike events, e.g. in an EPR-type experiment? <br>
<br>
This gets worse if one considers a realistic experiment with a continuum of possible outcomes,
e.g. the radioactive decay of a Pu atom, which can happen at any point of the continuous time parameter t.
Assuming that this decay gets amplified with a Geiger counter to different macroscopic configurations, how
would one describe the superposition of the associated continuum of space-time geometries? <br>
<br>
The Copenhagen interpretation does not have this problem, because it only deals with one outcome
and in general one can "reduce" the wave function before a superposition of spacetime geometries needs to be considered. <br>
<br>
A mwi proponent may argue that this issue can be postponed until we have a consistent theory of quantum gravity and simply assume a Newtonian fixed background (or a flat Minkowski background). But if one (implicitly) allows the existence of Newtonian clocks, then why not the classical observer of Copenhagen? <br>
<br>
In addition one has to face the well known problem of <a href="http://tsm2.blogspot.com/2014/06/my-derivation-of-born-rule.html">the Born probabilities</a> (x), the <a href="http://tsm2.blogspot.com/2014/04/preferred-basis.html">preferred basis</a> problem, the question of <a href="http://216.224.167.220/tsm/Nobs.html">what it takes to be a world</a>, the puzzling fact that <a href="http://arxiv.org/abs/1210.8447">nothing ever happens</a> and other problems, discussed previously <a href="http://tsm2.blogspot.com/search/label/many%20worlds">on this blog</a>. <br>
<br>
In other words, the mwi so far creates more problems than it solves. <br>
<br>
<br>
<small>(*) In technical terms: The semi-classical approximation of quantum field theories plus gravitation is ultimately inconsistent and we do not yet have a fully consistent quantum theory of gravitation to describe such a measurement situation. <br>
<br>
(x) See also <a href="http://infoproc.blogspot.com/2015/11/the-measure-problem-in-many-worlds.html">this opinion</a>, which is a variant of the argument I made previously <a href="http://tsm2.blogspot.com/2010/05/one-espresso-many-worlds.html">here</a> and <a href="http://tsm2.blogspot.com/2010/06/one-pipe-many-worlds.html">here</a>. </small> <br>
<br>
2015-05-03 - entanglement <br>
About five years ago I sent this email to Erik Verlinde: <br>
<br>
----------------------------- <br>
1/24/10 <br>
<br>
Dear Professor Verlinde,<br>
<br>
I read your preprint about gravity as 'entropic force' and have the following question/suggestion: <br>
It seems that there are large classes of systems following an area law for entanglement entropy,
see e.g. arxiv.org/abs/0808.3773 <br>
Should they all show 'entropic gravity' somehow? <br>
If yes, the obvious next step would be to pick a simple model, e.g. arxiv.org/abs/quant-ph/0605112
and check if/how entropic gravity manifests itself. <br>
<br>
Thank you, <br>
Wolfgang <br>
------------------------------ <br>
<br>
As it turns out <a href="http://arxiv.org/abs/1405.2933">this idea may not have been so stupid after all</a>. <br>
<br>
Of course, if <a href="https://www.quantamagazine.org/20150428-how-quantum-pairs-stitch-space-time/">entanglement stitches space-time together</a> then the question remains: entanglement of what? <br>
I still think the entangled qubits of <a href="http://arxiv.org/abs/quant-ph/0501135">a quantum computer</a> are the most natural candidates; if we live inside one it would also <a href="http://tsm2.blogspot.co.at/2014/06/my-derivation-of-born-rule.html">solve the measurement problem</a>. <br>
But what would it be computing? We can <a href="http://tsm2.blogspot.co.at/2010/07/7-x-6-41-you-little-sh.html">only speculate</a>. <br>
<br>
2015-03-15 - the sleeping Brad DeLong problem <br>
Brad DeLong was sound asleep for several months, but after he woke up he found an old blog post and quickly calculated the probability that <a href="http://www.bradford-delong.com/2015/03/sleeping-beauty-again-thirders-correct-double-halfers-confused-halfers-wrong-festival-of-fools-blogging.html">Lubos is an April fool</a>; he wrote a blog post about it, because he had no memory of all the earlier debates about this. <br>
Of course, <a href="http://motls.blogspot.com/2015/03/sleeping-beauty-and-beast-named-brad.html?m=1">Lubos</a> disagreed with this calculation; according to his argument the probabilities are very different. So who is right?<br>
<br>
I wrote about the Sleeping B. problem almost <a href="http://216.224.167.220/tsm/sb.html">ten years ago</a>, when it was mostly discussed in academic papers. Meanwhile it has become an internet standard and turned into a political debate: Lubos and the reactionaries vs. Sean and the liberals. Unfortunately, as happens in tribal conflicts, the nuances of the problem have been lost, such as the existence of <a href="http://wbmh.blogspot.com/2014/07/sleeping-beauty.html">solutions which are neither 1/2 nor 1/3</a>. <br>
<br>
In one of the many comments somebody wrote something like "if I were really interested in this problem, I would code a simulation and see what it does". Unfortunately, if she actually sat down to implement a simulation, she would quickly find that it does not really answer anything; this is one of those problems which cannot be settled with an experiment or simulation, because it is about what we actually mean by "probability". Neither physicists nor economists are well trained at thinking through exactly what it is they are talking about, so I predict that this debate will not go away anytime soon. <br>
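As an aside, one can see the point directly in code. A minimal simulation (my own sketch, using the standard setup where Head means one awakening and Tail means two) happily produces both 1/2 and 1/3, depending on whether one counts experiments or awakenings; the simulation cannot tell us which count is "the" probability:

```python
import random

rng = random.Random(1)
runs = 100_000
heads_runs = 0        # experiments in which the coin came up Head
awakenings = 0        # total number of awakenings over all experiments
heads_awakenings = 0  # awakenings that belong to a Head experiment

for _ in range(runs):
    heads = rng.random() < 0.5
    n_wake = 1 if heads else 2   # Head: one awakening; Tail: two
    awakenings += n_wake
    if heads:
        heads_runs += 1
        heads_awakenings += 1

print(heads_runs / runs)              # per-experiment frequency of Head, ~1/2
print(heads_awakenings / awakenings)  # per-awakening frequency of Head, ~1/3
```

Both numbers come out of the same run; the disagreement is entirely about which denominator to use.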
<br>
So what is really the issue with this? Well, if the "a priori probability" is 1/2 then S.B. has to bet "as if" it was 1/3 due to the setup of the problem. So if you think probabilities are defined as betting odds (as many Bayesians do) then you will prefer 1/3. If you think probabilities are objective properties of physical systems then you will probably (!) prefer 1/2. (Btw, notice that S.B. has to (implicitly) use the 1/2 "a priori" to calculate the 1/3.)<br>
<br>
I actually prefer the 3/8 solution, because it is not so clear how one should understand it. But I realized a long time ago that I am a minority of one ... <br>
<br>
<br>
------- <br>
<br>
Perhaps it is useful to repeat the 3/8 solution here: <br>
If the outcome was Head, she will only wake up on Tuesday; but if it was Tail,
she will either wake up Tuesday or Wednesday with "a priori probability" 1/2 for each, since she cannot distinguish the two days. <br>
Therefore, the probability for the just awoken S.B. that today is Tuesday is <br>
p(Tue) = (1/2) + (1/2)(1/2) = 3/4 <br>
where the 1st term corresponds to Head and the 2nd to Tail. <br>
<br>
Now we can calculate the probability for Head as
p(Head) = (1/2)*p(Tue) + 0*p(Wed) = (1/2)*(3/4) = 3/8, <br>
knowing that the "a priori probability" for Head on Tuesday is 1/2. <br>
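Just to check the arithmetic of the two lines above with exact fractions (nothing new here, only a re-statement of the calculation):

```python
from fractions import Fraction as F

p_head_prior = F(1, 2)
# Head -> certainly Tuesday; Tail -> Tuesday with "a priori probability" 1/2
p_tue = p_head_prior * 1 + (1 - p_head_prior) * F(1, 2)
# Head is only possible on Tuesday, where its a priori weight is 1/2;
# the Wednesday term contributes 0
p_head = F(1, 2) * p_tue

print(p_tue)   # 3/4
print(p_head)  # 3/8
```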
<br>
2015-03-09 - the chaos computer club <br>
A <a href="http://arxiv.org/abs/1503.01409">recent paper</a> suggests a fundamental limit on the chaos in physical systems. <br>
Lubos wrote an easy to read <a href="http://motls.blogspot.com/2015/03/taming-butterfly-effect.html?m=1">introduction</a> to the main idea. <br>
<br>
I think this might be interesting for <a href="http://www.scottaaronson.com/blog/">Scott</a> and everybody else interested in (quantum) computing.
If one considers a Turing machine sensitive to initial conditions (i.e. the input string), or a (quantum) computer simulating chaotic systems, the conjecture seems to imply a limit on computability. <br>
<br>
Or think about a device which measures the position x of the butterfly wings, sends the result to a computer, which calculates a function f(x) to determine its output. The conjecture seems to suggest a limit on the functions the computer can calculate in a finite amount of time. <br>
<br>
Is it correct to read the result as "the number N of different internal states any computer can reach after a time T is bounded by e<sup>wT</sup> where w is a fundamental constant"? <br>
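For intuition, here is a purely classical toy example of such an exponential rate (my own illustration, not the quantum bound of the paper): the logistic map at r=4 has Lyapunov exponent ln 2, so nearby trajectories separate like e<sup>wT</sup> with w = ln 2, and the number of trajectories distinguishable at fixed resolution grows at that same rate.

```python
import math

# Toy classical example: the logistic map x -> 4x(1-x) has Lyapunov
# exponent ln 2, i.e. nearby trajectories separate like e^(T ln 2)
def lyapunov(x0, steps=100_000):
    x, s = x0, 0.0
    for _ in range(steps):
        s += math.log(abs(4 - 8 * x))  # log of |d/dx 4x(1-x)|
        x = 4 * x * (1 - x)
    return s / steps

w = lyapunov(0.2)
print(w)  # close to ln 2 = 0.693...
```

The starting point 0.2 and the number of steps are arbitrary choices; any typical orbit gives the same exponent.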
<br>
2014-11-15 - life expectancy <br>
Recently I stumbled upon this math problem: <br>
The positive integer N has a finite value but is unknown to us. <br>
We are looking for a function f(X) which minimizes the sum <br>
E = |f(0) - N| + |f(1) - N| + |f(2) - N| + ... + |f(N-1) - N| <br>
for almost all N. <br>
<br>
Notice that we do not know the value of N, so f(X) cannot depend on it,
which eliminates the trivial solution f(X) = N among others. <br>
In order to illustrate what I am looking for, consider as first example <br>
1] f(X) = 0, which results in E = N<sup>2</sup>. <br>
However, there is a better solution <br>
2] f(X) = X, which results in E = (N+1)*N/2, which is less for almost all N (for N=1 the two are equal). <br>
<br>
Unfortunately, I do not know the best solution f(X) and this is where you are invited to leave a comment to help me out. <br>
But I do have strong evidence (i.e. a numerical test up to large values of N) that <br>
3] f(X) = X + sqrt(X) is an even better solution. <br>
<br>
So what is the motivation for this problem and why the title for this blog post? <br>
Consider a process or phenomenon which has already existed for X years; we try to estimate its total lifetime N without any further information. Our estimate can only depend on X, and we look for a function which minimizes the total estimation error E as described above: every year we make an estimate f(X) which is wrong by the amount |f(X) - N|, and we try to minimize the sum of those errors. In some sense this is a variation of the infamous 'doomsday argument'.<br>
<br>
It is now obvious why 1] is a bad solution and 2] is much better. Btw the function f(X) = 2*X would give the same total error E minus a small constant, so whether we assume that the process ends immediately or estimate that it will last twice as long (as it already did) makes no significant difference. <br>
<br>
Btw the solution 3] creates a paradox: the best estimate for the life expectancy seems to depend on the units one uses; we get a different result if we calculate in days rather than years. <br>
<br>
<br>
added later: If I assume f(X) = c*X, perhaps motivated by that paradox, then I can show that c=sqrt(2) minimizes E for large enough N. However, the function <br>
4] f(X) = sqrt(2)*X + sqrt(X) seems to be an even better candidate, but I do not know how to
determine a,b to minimize E for f(X) = a*X + b*sqrt(X) or determine the general solution for f(X).<br>
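For what it is worth, the kind of numerical test mentioned above can be sketched as follows (my own reconstruction, not the original test; the candidates are the functions 1]-4] from this post):

```python
import math

def total_error(f, N):
    # E = |f(0)-N| + |f(1)-N| + ... + |f(N-1)-N|
    return sum(abs(f(X) - N) for X in range(N))

candidates = {
    "1] f(X) = 0":                  lambda X: 0,
    "2] f(X) = X":                  lambda X: X,
    "3] f(X) = X + sqrt(X)":        lambda X: X + math.sqrt(X),
    "4] f(X) = sqrt(2)X + sqrt(X)": lambda X: math.sqrt(2) * X + math.sqrt(X),
}

for N in (10, 100, 1000):
    errors = {name: total_error(f, N) for name, f in candidates.items()}
    best = min(errors, key=errors.get)
    print(N, best, round(errors[best]))
```

For large N this reproduces the ordering discussed in the post: 2] beats 1], 3] beats 2], and 4] beats 3].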
<br>
<br>
added later: There are better ways to illustrate the problem; e.g. a box contains an unknown number N of candies. You take out one after another and at each step X you have to guess N.
At the end there is a penalty proportional to the sum of your wrong guesses. <br>
Perhaps a "deeper" example considers a Turing machine, which performed already X steps and we try to guess after how many steps N it halts. <br>
The reason these examples are "better" is that there is no change of physical units (e.g. from years to days) that would affect N. <br>
<br>
2014-09-24 - no black holes? <br>
Laura Mersini-Houghton and Harald Pfeiffer published <a href="http://arxiv.org/abs/1409.1837">a paper with numerical results</a> suggesting that black holes may not really exist (see also <a href="http://arxiv.org/abs/1406.1525">this earlier result</a>). As one would expect, several pop. sci. webpages have already <a href="http://phys.org/news/2014-09-black-holes.html">picked this story up</a>. <br>
<br>
The paper is of course not a general proof, but describes <i>a particular model</i> using <i>certain assumptions</i>; it considers the spherically symmetric collapse of pressure-less dust and it makes simplifying assumptions about the Hawking radiation: The stress-energy tensor for the Hawking radiation is taken from earlier calculations for (static) black holes, proportional to 1/R^2, and I don't think this is justified if one wants to prove that black holes do not exist. Further, it is assumed that most of the radiation is generated by the collapsing body itself (*) and finally assumptions are made about the heat transfer function C which I cannot follow (yet). <br>
<br>
The resulting differential equations are numerically integrated until a shell-crossing singularity appears, in other words a naked singularity (presumably an artifact of the model assumptions, i.e. perfect spherical symmetry, so it is only slightly embarrassing in a paper which wants to remove black hole singularities). <br>
The behavior of the dust suggests a rebound near the horizon, but the full evolution is unknown, which leaves interesting questions open:<br>
What happens to the pressure-less dust in the long run? Will it collapse again after the rebound, perhaps infinitely often? <br>
What does the final state (including Hawking radiation and the "influx" of negative energy) actually look like? <br>
<br>
I am sure this paper will generate several responses and eventually more realistic calculations will follow.<br>
Until then I remain skeptical that this result will actually hold in general. <br>
<br>
<br>
<small>(*) I admit that I do not understand this passage in the earlier paper: "Hawking
radiation is produced by the changing gravitational field
of the collapsing star, i.e. prior to the black hole formation [..]. Otherwise the surface gravity of the black hole κ, and the temperature of Hawking radiation would
increase with time..." <br>
I thought the standard picture is that the "influx" is at the event horizon (not the collapsing body) and the temperature does indeed increase with time... <br>
<br>
</small>
<br>
added later: Supposedly William Unruh was more direct and he thinks that <a href="http://www.examiner.com/article/do-black-holes-exist-not-everyone-agrees-with-new-claims">the paper is nonsense</a>. <br>
<br>
2014-06-27 - any good answers to this one? <br>
At the Strings 2014 conference, Piotr Bizon talked about <a href="http://physics.princeton.edu/strings2014/slides/Bizon.pdf">the gravitational turbulent instability</a> of AdS<sub>5</sub>. <br>
I became aware of this issue more than <a href="http://tsm2.blogspot.com/2011/04/is-ads-unstable.html">three years ago</a> and I have to admit that I still do not really understand what it means. As I see it, turbulence is one of the big unsolved problems in physics, mostly because it prevents us from neatly separating energy scales; the opposite of the clean separation which enables renormalization a la Wilson. <br>
So what does it mean that this turbulence instability shows up on one side of the famous AdS/CFT correspondence? <br>
<br>
2014-06-22 - my derivation of the Born rule <br>
I just read (parts of) Sean Carroll's <a href="http://arxiv.org/abs/1405.7577">derivation of the Born rule</a>, but I do not find it very convincing, because there is a much simpler, straightforward derivation available to resolve this problem of "self-locating uncertainty". <br>
<br>
1) We shall use a "hardcore" many worlds interpretation, assuming that the world splits into a quasi-infinite number of branches at any time, which realizes all possible outcomes of quantum theory. We assume that those branches are all equally real and a simple counting argument shows that the Born rule does not hold for almost all of those branches. It follows that we do not live in one of those generic branches, which solves the first part of our self-location problem. <br>
<br>
2) It is reasonable to assume that some of those infinitely many branches contain at least one quantum computer capable of simulating human life. Those computers will have to simulate quantum theory, but we can further assume that they will only keep one branch at a time in order to save resources. It is straightforward to assume that <a href="https://en.wikipedia.org/wiki/Gleason%27s_theorem">they are programmed to use the Born rule</a> to select this branch randomly. <br>
<br>
3) We observe the Born rule to great precision and it follows that we are the human beings simulated in one of those quantum computers. This finally resolves the self-location problem. <br>
<br>
I would add that (some of) the simulated human beings will use the Copenhagen interpretation to explain what they experience; i.e. an interpretation which emphasizes the importance of the observer and her 'conscious experience'. Obviously, the simulated human beings are unaware that their 'conscious experience' is indeed a side effect of the procedure which selects the simulated branch randomly. <br>
<br>
2014-05-11 - effective altruism <br>
I mentioned <a href="http://blog.jessriedel.com/">Jess Riedel</a> in the previous blog post. Here I want to highlight <a href="http://blog.jessriedel.com/2014/04/27/state-of-ea-organizations/">his list of organizations related to effective altruism</a>. <br>
While we contemplate how many worlds there are, we can <a href="http://www.givingwhatwecan.org/top-charities">try to improve</a> the one we know - beyond posting hashtags on twitter. <br>
<br>
2014-04-18 - preferred basis <br>
I did write some comments on Scott's blog which might be interesting to those bothered by the 'preferred basis problem'. It begins <a href="http://www.scottaaronson.com/blog/?p=1781#comment-104916">here</a> and references are made to <a href="http://physics.stackexchange.com/questions/65177/is-the-preferred-basis-problem-solved">this answer</a> at <i>physics.stackexchange</i> by <a href="http://blog.jessriedel.com/">Jess Riedel</a> and <a href="http://arxiv.org/abs/gr-qc/9412067">this paper</a> by Dowker and Kent. <br>
<br>
While I'm at it, I should also link to <a href="http://arxiv.org/abs/1109.6424">this paper</a> about 'entanglement relativity' and a (claimed) inconsistency of the Everett interpretation. I am not sure if the argument is correct (decoherence might appear different for two different decompositions, but does this really prove anything?) and would appreciate any input. <br>
<br>
added later: The back and forth in the comment thread ended (for now) with <a href="http://www.scottaaronson.com/blog/?p=1781#comment-104973">a homework exercise</a> for mwi proponents. <br>
<br>
added later: Btw another interesting paper from an Austrian team about <a href="http://arxiv.org/abs/1311.1095">decoherence due to classical, weak gravitation</a> (i.e. on Earth). <br>
<br>
---- <br>
<br>
Btw <a href="">this unrelated comment</a> Scott made about the "arrow of time" was a bit shallow imho. My own view of the problem <a href="http://216.224.167.220/tsm/duel.html">begins with this thought experiment</a>. <br>
<br>
2014-03-15 - a probability puzzle <br>
No paradox and nothing profound here, just a little puzzle to pass the time <a href="http://resonaances.blogspot.com/2014/03/plot-for-weekend-flexing-biceps.html">until Monday</a>. <br>
I have two reasons for posting it: i) It is similar to some problems I have to deal with at work (*) and ii) it gives me an opportunity to link to <a href="http://possiblywrong.wordpress.com">the blog</a> where I got it from (after the solution is revealed). <br>
<br>
So Alice and Bob like to play a certain (card) game (if they are not busy with encryption problems and black hole entanglement). Everybody knows that Alice is slightly more skilled at this game and wins with probability 55%. However, she really likes to win, so Alice and Bob always play as many games as it takes for Alice to be ahead in winnings (x). So sometimes they play just one game (if Alice wins immediately) and sometimes many; but what is the expected number N of games the two will play (after a fresh start)? <br>
<br>
<small>(*) A similar problem I would be dealing with could be e.g. of the form "if I have an order sitting at the bid, how long will it take on average to get filled". <br>
<br>
(x) Added later, just to clarify: Alice and Bob play N games until Alice wins one game more than Bob. E.g. Alice wins the 1st game; Or Bob wins the 1st and Alice wins the next 2 games; Or ... </small><br>
<br>
<br>
------------------ <br>
<br>
<br>
This puzzle is equivalent to a biased random walk of the difference D in winnings
between Bob and Alice. It begins at D=0 and if D>0 it means that Bob is ahead; The random walk
ends at D=-1 i.e. when Alice is ahead by one. So what is the expectation value E = E[N] of the length N of this random walk? <br>
<br>
There are two ways to solve it. One can (try to) sum up all terms in the series of all possible events
as described <a href="http://possiblywrong.wordpress.com/2013/06/30/probability-puzzle-solution/">here</a>. I assume this is how <a href="http://mathworld.wolfram.com/TwoTrainsPuzzle.html">John von Neumann</a> would have solved this puzzle. <br>
<br>
Fortunately, there is a much easier solution for the rest of us and you can find it in the comments. <br>
It gives us E = 1/(2p - 1) and with p=0.55 for Alice to win a single game we get E=10. <br>
<br>
Notice that E diverges for p=1/2 and I find this somewhat counterintuitive, knowing that an unbiased
random walk will visit every point D with probability 1. (The resolution: the unbiased walk does reach D=-1 with probability 1, but the expected time to do so is infinite.) <br>
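A quick Monte Carlo check of E = 1/(2p-1) (my own sketch; the number of trials is an arbitrary choice):

```python
import random

def games_until_alice_ahead(p, rng):
    # d = Bob's wins minus Alice's wins; stop when Alice is ahead by one (d = -1)
    d, n = 0, 0
    while d != -1:
        d += -1 if rng.random() < p else 1
        n += 1
    return n

rng = random.Random(0)
p, trials = 0.55, 200_000
mean = sum(games_until_alice_ahead(p, rng) for _ in range(trials)) / trials
print(mean)  # close to 1/(2p - 1) = 10
```

Setting p closer to 1/2 makes the sample mean drift upward without settling, as the divergence suggests.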
<br>
2014-02-16 - the strange result(s) of Frank Tipler <br>
I met Prof. Tipler in 1992 during a seminar in Vienna about relativity and cosmology; he was a visiting professor for a year, and I remember very 'normal' discussions, e.g. of the Reissner-Kerr solution. <br>
Two years later he wrote about <a href="http://tsm42.blogspot.com/2009/01/the-physics-of-immortality-and.html">The Physics of Immortality</a> and I thought that his book was quite interesting, although I disagreed with his conclusion(s) and I remember that I felt uneasy about the certainty with which he expressed his unconventional views. <br>
He jumped the shark with his next book about <a href="http://www.csicop.org/si/show/the_strange_case_of_frank_jennings_tipler">The Physics of Christianity</a> and I am not sure in which Lalaland he found himself after this jump ...<br>
<br>
But he continues to write papers about quantum physics as a proponent of a 'hardcore' many worlds interpretation (m.w.i.) and this post is about one of his conclusions: <br>
<a href="http://arxiv.org/abs/quant-ph/0611245">His interpretation</a> is actually based on the Bohm interpretation, assuming a deterministic Hamilton-Jacobi evolution of a distribution of hidden variables. While the original Bohm interpretation considers particle positions, in the Tipler interpretation the different possible universes are the hidden variables. He understands the Bohr probabilities as Bayesian probabilities obtained by the many real observers in the multi-verse of all those universes. I think at this point his views are still compatible with m.w.i. a la Everett and he argues that the Heisenberg uncertainty principle <a href="http://arxiv.org/abs/1007.4566">follows from his proposal</a> arising "from the interference of the other universes of the multiverse, not from some intrinsic indeterminism in nature". So far so good ...<br>
<br>
But then he claims to have <a href="http://arxiv.org/abs/0809.4422">a test of his interpretation by measuring pattern convergence rates</a>: Frequencies of events measured in real experiments with sample size N will converge as 1/N to the Born frequencies. And I think this has to be wrong. <br>
He even notices that in classical statistics the frequencies of events following e.g. a Gaussian distribution converge more slowly, i.e. as 1/sqrt(N), and I wonder why this does not bother him. After all, it is not difficult to set up a simple quantum physics experiment which reproduces classical convergence. Consider a (weakly) radioactive source which triggers a Geiger counter with 50% probability in a certain time interval. Now we let the Geiger counter tick along and we can be quite sure that the sequence 100101011111000010101000... that we will obtain obeys the well known laws of conventional statistics. <br>
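The Geiger counter example is easy to check numerically (my own sketch; trial counts and checkpoints are arbitrary choices): the mean deviation of the observed frequency from 1/2 roughly halves when the sample size quadruples, i.e. it shrinks like 1/sqrt(N) and not like 1/N.

```python
import random

rng = random.Random(42)
trials, checkpoints = 2000, (100, 400, 1600)
dev = {N: 0.0 for N in checkpoints}

for _ in range(trials):
    count, n = 0, 0
    for N in checkpoints:
        while n < N:
            count += rng.random() < 0.5  # one tick of the 50% Geiger counter
            n += 1
        dev[N] += abs(count / N - 0.5)

for N in checkpoints:
    print(N, dev[N] / trials)  # mean deviation ~ 0.5 * sqrt(2 / (pi * N))
```

If Tipler's 1/N convergence held, the printed deviations would drop by a factor of 4 per line instead of a factor of about 2.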
What am I missing? <br>
Can one use Tipler's result as (another) example that m.w.i. does not reproduce the properties of Born probabilities correctly? <br>
<br>
<br>
<small>Btw if you wonder why I wrote this post now ... I saw Tipler's name on <a href="http://qbnets.wordpress.com/2014/02/13/particle-physics-and-quantum-computing-two-parallel-worlds/">this diagram</a> and remembered that I always wanted to write something about his strange result. </small><br>
<br>
2013-12-15 - the phase structure of CDT <br>
In <a href="http://tsm2.blogspot.com/2013/12/backreaction.html">my previous post</a> I criticized the description of the CDT phase diagram on a popular physics blog. <br>
In this post I want to actually talk about <a href="http://arxiv.org/abs/1302.2173">the numerical CDT results</a>. <br>
<br>
The phase diagram depends on two coupling constants K and D (in the text they use kappa and delta). While K corresponds to the gravitational coupling, D measures the ratio of 'timelike' and 'spacelike' edges; I use quotes ' ' because the simulation is actually done in the Euclidean sector, but edges fall in different categories, depending on what kind of distances they would correspond to after Wick rotation. There is a third coupling parameter, which corresponds to a cosmological constant, but it is fixed for technical reasons. <br>
<br>
As I already explained, one looks for a critical line in D,K corresponding to a 2nd order phase transition and the reason is that long-range fluctuations are associated with such a transition, so that the details of the discretization do not matter any more. <br>
So this is what I find weird: The parameter D describes a detail of the discrete model and the hope is to fine tune D, as a function of K, in order to find a critical line where the details of the discretization no longer matter... <br>
<br>
The authors notice that D has "no immediate interpretation in the Einstein-Hilbert action" and thus the critical value D(K) does not correspond to any feature of the continuum limit - unless the continuum limit is not Einstein-Hilbert but Horava-Lifshitz gravity. This is what the authors propose and discuss in section 4 of their paper: HL gravity breaks diffeomorphism invariance of EH gravity, just like CDT does, and the parameter D would have a 'physical' meaning in this case.<br>
<br>
It seems that the authors hope that EH gravity will be restored somewhere along the critical D(K) line, however, <a href="http://tsm42.blogspot.com/2010/04/triangles.html">it is unlikely</a> imho that there is such a path from HL gravity to real gravitation. <br>
<br>
2013-12-07 - backreaction <br>
It seems that an internet tradition is emerging, whereby <a href="http://golem.ph.utexas.edu/~distler/blog/archives/002652.html">a blog remains dormant for a while</a>, until something so outrageously wrong appears on the interwebs that one has no choice but to respond to it. <br>
<br>
In my case, Sabine Hossenfelder wrote about <a href="http://backreaction.blogspot.com/2013/12/the-three-phases-of-space-time.html">the phase diagram of CDT</a> on her popular physics blog and I just have to set a few things straight: <br>
<br>
1) We read that "... most recently the simulations found that ... space-time has various different phases, much like water has different phases". <br>
But, of course, the phase structure of various lattice gravity models has been studied (implicitly and explicitly) since the early days of lattice gravity simulations, i.e. the 1980s. If one wants to find a reasonable continuum limit for such a model, then one has to examine the phase structure of the lattice model; In general, if the model has one or more coupling parameters then it will (most likely) exhibit different phases, just like water. <br>
<br>
2) The holy grail on the way to a physically interesting continuum limit is the existence of a non-trivial fixed point, which appears in the phase diagram as a 2nd order phase transition. IF such a transition exists for CDT, it will be located on (one of) the critical lines and perhaps at the tri-critical point. The continuum limit will <b>not</b> appear in the area C of the diagram; there you certainly cannot "share images of your lunch with people you don’t know on facebook". <br>
As far as I know, the existence of such a 2nd order transition has not been demonstrated yet, although <a href="http://xxx.lanl.gov/abs/hep-lat/0309002">intriguing hints have appeared</a> in other lattice models previously. Of course, even IF such a 2nd order transition could be demonstrated, one would still not know if the continuum limit has anything to do with gravitation as we know it. <br>
<br>
3) This 2nd order phase transition is a prerequisite for a consistent continuum model, and all 4d geometries would be generated with the same critical parameter values. It is therefore misguided to imagine that this phase transition happened at or near the big bang. <br>
Indeed, the coupling parameters depicted in the phase diagram are bare, i.e. un-renormalized, coupling parameters, and while the diagram may indicate the existence and location of a non-trivial fixed point, almost all of the phase diagram is actually unphysical. <br>
Therefore one cannot expect this phase transition to serve as an alternative to and/or replacement for inflation (as Sabine discussed in the comments). <br>
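As an aside, the standard numerical diagnostic for such a 2nd order transition is e.g. the Binder cumulant of a suitable order parameter; here is a minimal sketch with synthetic data (the Gaussian and double-peaked samples are stand-ins of my own, not CDT measurements):

```python
import random
import statistics

def binder(samples):
    """Binder cumulant U = 1 - <m^4> / (3 <m^2>^2) of an order parameter m:
    U -> 0 for a Gaussian (disordered) distribution, U -> 2/3 deep in an
    ordered phase; its finite-size crossing locates a 2nd order transition."""
    m2 = statistics.fmean(m**2 for m in samples)
    m4 = statistics.fmean(m**4 for m in samples)
    return 1.0 - m4 / (3.0 * m2**2)

rng = random.Random(1)
# synthetic stand-ins for Monte Carlo measurements of an order parameter:
disordered = [rng.gauss(0.0, 1.0) for _ in range(20000)]
ordered = [rng.choice((-1, 1)) * rng.gauss(1.0, 0.05) for _ in range(20000)]
print(binder(disordered), binder(ordered))  # close to 0 and close to 2/3
```

Of course, the hard part in a real simulation is not this formula but generating equilibrated configurations near the critical line on lattices of increasing size.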
<br>
Wolfgangnoreply@blogger.com4tag:blogger.com,1999:blog-5418994588621362010.post-43346844344711117372013-06-19T14:04:00.000-07:002014-03-10T14:11:43.825-07:00Knightian uncertainty<br>
I was thinking about a good example of <a href="http://www.scottaaronson.com/blog/?p=1438">Knightian uncertainty</a> and this is my proposal:
We cannot know what theorems a particular mathematician or a group of mathematicians will be able to prove.<br>
As a concrete example, consider <a href="http://michaelnielsen.org/polymath1/index.php?title=Bounded_gaps_between_primes">the collaborative effort to lower H</a> from Zhang's 70 million; currently, the best confirmed value for H is 60,726. But will H drop below 1000 by the end of this year? <br>
<br>
Obviously we cannot know for sure (otherwise we would have a proof for H < 1000 already) and I think that any attempt to come up with a 'Bayesian probability' in this case would be inappropriate. <br>
But this means that the state of our world by the end of this year is unknown. (If you wonder how to reconcile such a statement with physics, I recommend these links:
<a href="http://yolanda3.dynalias.org/tsm/predict.html">1</a>,
<a href="http://tsm2.blogspot.com/2011/06/coleman-argument.html">2</a>,
<a href="http://tsm2.blogspot.com/2009/10/arthur-and-free-will.html">3</a>,
<a href="http://tsm2.blogspot.com/2013/05/many-turing-machines.html">4</a>). <br>
Btw it is possible that some Martian mathematicians, with far more advanced math capabilities, would already know the answer to the above question, but in this case I have to assume that there are theorems in their advanced Martian mathematics which they cannot prove yet. <br>
<br>
So the strong form of my example is this: In our universe there is at least one mathematician who is unable to predict whether she will be able to prove the theorem she is currently working on - and nobody else is able to predict it either, because she is the smartest/most advanced mathematician.<br>
Therefore the future state of our universe is unknowable. <br>
<br>Wolfgangnoreply@blogger.comtag:blogger.com,1999:blog-5418994588621362010.post-54320708561298064072013-05-25T14:03:00.000-07:002014-03-10T14:37:00.976-07:00many Turing machines<br>
We randomly implement a Turing machine Tm with N states, using a Geiger counter plus radioactive material as a 'quantum coin'. We proceed as follows: First we use the 'quantum coin' to determine (with probability 1/2) if the Tm has N=2 (head) or N>2 (tail) states. If we got tail, we use the 'quantum coin' again to see if N=3 or N>3, and so on and so forth. <br>
Once we have determined N in that way, we then construct the transition table(s) of the Tm, using the 'quantum coin' again and again, so that all (8N)<sup>N</sup> possible transition tables could be realized. <br>
Once this construction is finished, we start the Tm on a tape with a few 1s sprinkled on it and then watch what happens.<br>
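The construction can be sketched in a few lines of Python; the pseudo-random generator is of course only a stand-in for the 'quantum coin', and the table convention (entry = write symbol, move, next state, with one extra state index meaning 'halt') is my own choice - the exact count of possible tables depends on such conventions:

```python
import random

rng = random.Random(42)  # pseudo-random stand-in for the 'quantum coin'

def flip():
    """One coin toss: 0 = head, 1 = tail."""
    return rng.randrange(2)

def draw_num_states():
    """Head on the first toss gives N=2, tail-head gives N=3, and so on,
    i.e. P(N) = 1/2**(N-1)."""
    n = 2
    while flip():  # tail: keep tossing
        n += 1
    return n

def draw_table(n):
    """A random transition table for an n-state, 2-symbol machine; each
    entry is (write, move, next state), where next state == n means 'halt'."""
    return {(q, s): (flip(), rng.choice((-1, 1)), rng.randrange(n + 1))
            for q in range(n) for s in (0, 1)}

def run(table, n, ones_at, max_steps=10_000):
    """Start in state 0 on a tape with 1s at the given positions and watch."""
    tape = dict.fromkeys(ones_at, 1)
    pos, state = 0, 0
    for step in range(1, max_steps + 1):
        write, move, nxt = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == n:           # the halt state
            return 'halted', step
        state = nxt
    return 'running', max_steps

n = draw_num_states()
table = draw_table(n)
print(n, run(table, n, ones_at=(0, 3, 5)))
```

The step cap is only there because a classical simulation must give up at some point - which is, of course, exactly the <i>Halteproblem</i> issue the post is about.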
<br>
This experiment is easy to understand if we follow the Copenhagen interpretation. The Tm we will build is most likely quite simple, because the probability for a complex N-state Tm decreases rapidly as 1/2<sup>N-1</sup>. Once the Tm is put together, its evolution is not even a quantum mechanics problem any more. If we want, the transition table(s) of this particular Tm can be inspected before it runs, to determine if it will halt. <br>
<br>
But this experiment is much more difficult to understand within a many worlds interpretation: Every use of the 'quantum coin' splits the world and thus we are dealing with a wave function of the universe which contains every possible Tm in one of its branches.
The amplitudes assigned to worlds with N-state machines are quite small if N is large, but all the worlds are equally real.<br>
Unfortunately, due to the <i>Halteproblem</i>, the evolution of this wave function is uncomputable. In other words, the wave function of the universe does not have a mathematical description at all. <br>
<br>
The best part of this experiment is that I do not even have to do it myself. Somewhere in the universal wavefunction there is already a branch where somebody, somewhere in the universe, has decided to do this experiment (*). This means that the amplitude of my branch is already uncomputable (x).<br>
<br>
<br>
<small>(*) This also takes care of the counter argument that on Earth resources are finite and thus the experiment has to terminate at a certain (large) N<sub>0</sub>. Since we have to consider the wave function of the multiverse (of infinite size), this argument is not convincing, because we cannot know N<sub>0</sub>. <br>
<br>
(x) Notice that the overlap between different branches of the wave function is very small, due to decoherence, but in general non-zero. </small> <br>
<br>Wolfgangnoreply@blogger.comtag:blogger.com,1999:blog-5418994588621362010.post-90572971493618710472013-04-05T14:01:00.000-07:002014-03-10T14:37:16.706-07:00the many urs interpretation<br>
Recently I read the biography of Erwin Schrödinger by John Gribbin,
who points out that E.S. proposed a many worlds interpretation of quantum theory
several years before Everett. This got me thinking about how one might make sense
of the m.w.i. after all. <br>
<br>
As I have pointed out several times on this blog [<a href="http://tsm2.blogspot.com/2010/05/one-espresso-many-worlds.html">1</a>, <a href="http://tsm2.blogspot.com/2010/06/one-pipe-many-worlds.html">2</a>, <a href="http://wbmh.blogspot.com/2013/01/decoherence-and-incoherence.html">3</a>], a major problem of the
many worlds interpretation is the derivation of the Born rule.<br>
But I think there is a way out: If qubits are the fundamental building blocks of
our world, then every event could eventually be reduced to a series of yes-no alternatives
of equal probability (an <a href="http://en.wikipedia.org/wiki/Carl_Friedrich_von_Weizs%C3%A4cker#Theory_of_ur-alternatives">ur-alternative</a>) - and in this case the m.w.i. gives the correct probability.<br>
I think this would also take care of the 'preferred basis' problem, because if the
world is fundamentally discrete, the 'preferred basis' would assign
two unit vectors in Hilbert space to each qubit. <br>
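To illustrate the point with a toy example (my own, not Weizsäcker's formalism): an outcome with dyadic Born probability m/2<sup>k</sup> can be represented by m of the 2<sup>k</sup> equally weighted sequences of k yes-no ur-alternatives, so naive branch counting reproduces the Born weight:

```python
from itertools import product

def dyadic_branches(m, k):
    """All 2**k equally weighted sequences of k yes-no ur-alternatives;
    the first m of them are labeled with the outcome 'up'."""
    branches = list(product((0, 1), repeat=k))  # 2**k equal branches
    return [('up' if i < m else 'down', b) for i, b in enumerate(branches)]

# An outcome with Born probability 3/4 = 3/2**2 corresponds to 3 of the
# 4 equal branches, so counting branches reproduces the Born weight:
bs = dyadic_branches(3, 2)
ups = sum(1 for label, _ in bs if label == 'up')
print(ups, len(bs))  # prints: 3 4
```

Non-dyadic probabilities would have to be approximated by longer and longer ur-sequences, which is where the hand-waving in my toy picture starts.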
<br>
C.F.v. Weizsäcker proposed his <a href="http://arxiv.org/abs/quant-ph/0309183">ur-theory</a> many years before the term 'qubit'
was invented, and if one is serious about the m.w.i., this would be a strong reason
to consider ur-theory or something similar (*). <br>
Much later, the idea that <a href="http://arxiv.org/abs/quant-ph/0501135">our world is a large quantum computer</a> was investigated, e.g. by Seth Lloyd (but I don't know if it would work with urs).<br>
In this case the task of deriving the Born rule would be equivalent to deriving QFT as we know
it, together with general relativity, from ur-theory and/or from the behavior of large quantum computers.<br>
<br>
<br>
<small>(*) C.F.v. Weizsäcker himself was a believer in the Copenhagen interpretation
and rejected m.w.i. explicitly in his book.</small><br>
<br>Wolfgangnoreply@blogger.comtag:blogger.com,1999:blog-5418994588621362010.post-81927915393964677742013-03-27T14:00:00.000-07:002014-03-10T14:09:25.777-07:00the Bayesian mathematician<br>
I just came across <a href="http://hilbertthm90.wordpress.com/2013/03/26/bayesianism-in-the-philosophy-of-math/">this blog post</a> which proposes some sort of Bayesian mathematics: "[..] Suppose you conjecture something as part of your research program. [..] you could use Bayes' theorem to give two estimates on the plausibility of your conjecture being true. One is giving the most generous probabilities given the evidence, and the other is giving the least generous. You’ll get some sort of Bayesian confidence interval of the probability of the conjecture being true." <br>
<br>
Obviously, I was thinking about counter-examples to this proposal. As we all know from Kurt Gödel and the pop. sci. literature, mathematics cannot be automated due to the <i>Halteproblem</i>; so a natural starting point would be a Turing machine with n possible internal states. We assume that n is large enough and the machine complicated enough that our Bayesian mathematician cannot figure out right away whether it will halt or run forever. <br>
So she assigns a probability p(H) to the conjecture that the machine will at some point halt and begins with an uninformed prior p(H) = 1/2. Then the machine starts and writes and reads from its infinite tape and our Bayesian mathematician updates p(H). For a long while all she sees is that the machine continues and continues and so p(H) decreases, but suddenly after L steps the machine hits the <i>stop</i> state. Did p(H) help the Bayesian mathematician to anticipate this event? <br>
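Just to make the setup concrete, here is a toy version of her updating; the likelihood model - that a halting machine stops at a step drawn uniformly from {1, ..., M} - is an assumption of mine, not something the <i>Halteproblem</i> cares about:

```python
def posterior_halt(t, prior=0.5, M=1000):
    """p(H | machine still running after t steps), under the toy likelihood
    that a halting machine stops at a step drawn uniformly from {1, ..., M}."""
    like_halt = max(M - t, 0) / M   # P(no halt up to step t | it halts)
    like_never = 1.0                # P(no halt up to step t | it never halts)
    return prior * like_halt / (prior * like_halt + (1 - prior) * like_never)

# p(H) starts at the uninformed prior 1/2 and just drifts down while the
# machine keeps running - it never anticipates the sudden stop at step L:
for t in (0, 500, 900, 999):
    print(t, posterior_halt(t))
```

Whatever likelihood she picks, the posterior is a smooth function of "still running", so the halt at step L always arrives as a surprise.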
<br>
So an interesting question would be this: If the Bayesian mathematician is equivalent to a Turing machine with N internal states, can she use Bayesian updating so that p(H) increases and indeed reaches 1 before the machine stops after L steps, and at the same time be sure that p(H) will decrease and reach 0 if the machine never stops? I guess she could try to detect loops and other patterns, as long as they are not too complicated to be handled by an N-state Turing machine... <br>
Well, I am pretty sure that if N is fixed and n is allowed to be arbitrarily large, this would violate the spirit as well as the letter of the <i>Halteproblem</i>. But what about n < N? Do we know anything about that? <br>
<br>
<br>
added later: After thinking about this some more I would like to clarify a few things: The detection of 'endless loops' or other patterns, which indicate that the machine can never halt, has nothing to do with Bayesian updating itself. The task of finding such algorithms and pattern detectors is equivalent to proving a conjecture the old-fashioned way. <br>
Also, if one examines all Turing machines with n smaller than some given number, starting on a finite sample of tapes and running for fewer than L steps, one may find certain frequencies of halting and not-halting behavior, and perhaps one can extrapolate to some extent to the case of much larger L and derive some bounds for p(H) that way. But again this would not have anything to do with Bayesian updating, but it would perhaps be <a href="http://en.wikipedia.org/wiki/Halting_probability">a generalization of Chaitin's reasoning</a>. <br>
The more I think about it, the more it seems that Bayesian updating itself does not help at all with this problem... <br>
<br>
added even later: Meanwhile I found some papers related to this question: <a href="http://arxiv.org/abs/1202.6153">1</a>, <a href="http://arxiv.org/abs/1001.2813">2</a>, <a href="http://arxiv.org/abs/cs/0012011">3</a>. I noticed that "The major drawback of the AIXI model is that it is uncomputable".<br>
<br>
added much later: <a href="http://www.scottaaronson.com/blog/?p=1293#comment-68418">Scott A. uses a probability argument</a> about complexity theory in a comment on his blog (see also the following comments). But I don't find his argument very convincing. <br>
<br>Wolfgangnoreply@blogger.comtag:blogger.com,1999:blog-5418994588621362010.post-89074575769658152652013-03-18T13:57:00.000-07:002014-03-10T14:07:55.337-07:00ten years after<br>
Cosma reminds us that <a href="http://vserver1.cscs.lsa.umich.edu/~crshalizi/weblog/1015.html">a certain sloth is already ten years old</a>; but as a true statistician he also gives us an error bar with this date ... <br>
However, his list of the twenty best pieces misses <a href="http://vserver1.cscs.lsa.umich.edu/~crshalizi/weblog/270.html">the one I consider the best post</a>. After all, the title of <a href="http://arxiv.org/abs/cond-mat/0410063">the paper</a> it references inspired the name of this blog... <br>
Well, congratulations and I am looking forward to the next ten years of slothing ... <br>
<br>Wolfgangnoreply@blogger.comtag:blogger.com,1999:blog-5418994588621362010.post-42108356977725447532012-07-22T05:20:00.000-07:002014-03-10T11:10:54.465-07:00was Wolfram right after all?<br><br />Recently, Gerard 't Hooft published <a href="http://arxiv.org/abs/1207.3612">his own version of superstring theory</a>. <br><br />"Ideas presented in two earlier papers are applied to string theory. ... We now also show that a cellular automaton in 1+1 dimensions that processes only ones and zeros, can be mapped onto a fermionic quantum field theory in a similar way. The natural system to apply all of this to is superstring theory ..." (*)<br><br /><br><br />The earlier papers he refers to describe a <a href="http://arxiv.org/abs/1205.4107">duality between a deterministic cellular automaton and a bosonic quantum field theory in 1+1 dimensions</a> and argue that Born's rule strongly points towards <a href="http://arxiv.org/abs/1112.1811">determinism underlying quantum mechanics</a> (x). <br><br /><br><br />All this is far from the mainstream, but 't Hooft is a physicist, not a crackpot, and so he points out problems of his proposal(s) in his papers, e.g. he notes that some of his models have an unbounded Hamiltonian and he does discuss the apparent contradiction with Bell's inequality. <br><br /><br><br /><small><br />(*) It has been known for a long time that the Ising model is equivalent to a fermionic field in 2 dimensions (see e.g. <a href="http://arxiv.org/abs/math-ph/0411084">this paper</a> for references).<br><br /><br><br />(x) Quantum theory without the Copenhagen 'collapse' is a deterministic theory, so it is not too surprising if one finds such a duality. But the claim that Born's rule 'strongly points towards' determinism is unusual. 
<br><br /></small><br /><br>Wolfgangnoreply@blogger.com2tag:blogger.com,1999:blog-5418994588621362010.post-23580468844558917082012-06-30T00:47:00.000-07:002014-03-10T14:37:38.221-07:00many simulated worlds<br><br />I think it makes sense to combine <a href="http://www.simulation-argument.com/simulation.html">Nick Bostrom's simulation argument</a> with the many worlds interpretation.<br><br />While Copenhagen tells us that some outcomes are very unlikely, the m.w.i. assures us that every possible world actually exists. So we can be sure that there are worlds which contain the necessary equipment to simulate your conscious experience - and since there are (infinitely) many different ways to simulate the same experience, we can follow Bostrom's argument to finally conclude that it is almost certain that you are experiencing The Matrix right now (*). <br><br /><br><br />If you believe the m.w.i., then you must believe that your experience is just a fake (x). <br><br /><br><br /><small><br />(*) There are N ways your experience could be simulated and only 1 way it could be real; since you cannot distinguish them, they have equal probability, and for (infinitely) large N the conclusion follows. <br> <br /></small><br /><br><br /><small><br />(x) Of course, my argument will probably not convince you, which is just what The Matrix does to you ... <br><br /></small><br /><br>Wolfgangnoreply@blogger.com1tag:blogger.com,1999:blog-5418994588621362010.post-17704828270412022252012-06-22T07:58:00.000-07:002014-03-10T11:10:54.485-07:00Columbo and his memories<br><br /><small><br />This was recently posted on <a href="http://wbmh.blogspot.com">that other blog</a>, but on second thought it really belongs here, following <a href="http://tsm2.blogspot.com/2009/01/backwards-or-twice-as-fast.html">a proud tradition of confusing thoughts</a> about <a href="http://tsm2.blogspot.com/2010/11/future-has-ended.html">the arrow of time</a>. 
<br><br /></small><br /><br><br />Inspector Columbo enters an empty apartment (*) and finds a dead body on the floor. He measures the body temperature and determines it to be 33C, while the room temperature is 21C. He could now use <strike>physics</strike> forensic science to predict quite well what will happen going forward. The dead body will continue to cool and a few hours later will reach equilibrium with the room temperature. He could even predict how several days and weeks ahead this dead body will turn slowly but surely into a decaying corpse. <br><br />But he is not really interested in that. He wants to <strike>predict</strike> postdict what happened earlier. Again, he can use thermodynamics to determine how many minutes earlier the body temperature was 34C, 35C, ... But when his postdiction reaches 37C he must stop. All he can postdict is the time of death, but he can not go beyond that point. Once his calculation reaches 37C he deals with a living human being and from the data he has he cannot postdict what that person was doing. It would not make any sense to continue his calculation - and he would have to stop at 100C anyways. <br><br /><br><br />In some sense 37C is like a 'singularity' for his postdiction, which he cannot cross - quite remarkable, because we are used to thinking that the past is certain and the future is not; But here it seems to be the other way around. (Actually this case is not that special; In general, physicists are pretty good at making predictions, if they have the necessary initial data, but they are <a href="http://yolanda3.dynalias.org/tsm/duel.html">not good at all</a> at making postdictions from the same data, which is why they normally don't do it.) <br><br /><br><br />Later on, the clever inspector will find clues to what happened, fingerprints and other evidence - in other words he will find <i>documents</i> about the past. 
The truly amazing thing about those documents is that they fit together and tell a coherent story (e.g. the fingerprint on the door knob is the same as the fingerprint on the knife). Even more amazing is that in the end the killer will confess and tell the inspector what happened and his <i>memory</i> will match the story reconstructed from those documents. <br><br /><br><br />Which brings us to the final question. Why do we have memories of the past but not the future? In other words, why does the inspector consider the dead body at the beginning of our story as evidence of a murder (which happened in the past) - but not as a document and memory of the rotten corpse it will be in the future? <br><br />Is it because he knows the future better than the past? <br><br /><br> <br /><br><br /><small><br />(*) There never was such an episode, but we can assume that he solved murder cases not shown on tv. Also, I am aware that he is actually a Lieutenant. <br><br /></small><br /><br>Wolfgangnoreply@blogger.com0tag:blogger.com,1999:blog-5418994588621362010.post-23782749641850537042012-06-22T07:55:00.000-07:002014-03-10T11:10:54.493-07:00pictures of Columbo<br><br />I thought I should illustrate <a href="http://tsm2.blogspot.com/2012/06/columbo-and-memories-of-future.html">the following story</a> with two pictures. <br><br /><br><br /><img border="0" height="280" width="400" src="http://2.bp.blogspot.com/-_dRC2QpJ-uY/T97u2JVhw2I/AAAAAAAAAKw/lWPgKI8QTvQ/s200/columbo1.JPG" /> <br><br /><br><br />Columbo (C) collects documents (*) about the murder case (the black vertical lines). He can easily predict the future of those documents, because they are stable (otherwise they would not be good documents). However, he can postdict the past of those documents only up to a certain point, namely when they were created. <br><br />Different documents tell a coherent story; therefore he can assume that they were created by the same event E. 
But notice that he cannot postdict the state of those documents before E. The future of those documents is known, but their past is uncertain beyond a particular point and this is what makes them memories of the past event E. <br><br /><br><br />This picture of the 'arrow of time' is somewhat different from the usual image of entropy being low in the past and uncertainty increasing in the future. <br><br /><br><br /><img border="0" height="280" width="400" src="http://2.bp.blogspot.com/-L1_J7hoB_ak/T97wXNCpuzI/AAAAAAAAAK8/VU30rqthEPs/s200/columbo2.JPG" /> <br><br /><br><br /><br><br />(*) 'document' is used in a general sense - a hot cup of coffee in an empty apartment 'documents' that a person was in that apartment not too long ago. We know this because we cannot postdict the temperature of that coffee beyond 100C - so we know somebody had to be there to make it. <br><br /><br>Wolfgangnoreply@blogger.com0tag:blogger.com,1999:blog-5418994588621362010.post-16928006832176536662012-06-17T02:39:00.000-07:002014-03-10T11:10:54.504-07:00still asking the same question(s)<br><br />I write this post mostly to show that this blog is still alive ... and still <a href="http://tsm2.blogspot.co.at/2010/09/please-can-you-help-me.html">asking the same question(s)</a>. <br><br /><br><br />Recently, I read <a href="http://arxiv.org/abs/1205.1229">this paper</a> about a numerical study in lattice gravity, trying to distinguish 1st and 2nd order <a href="http://cscs.umich.edu/~crshalizi/notabene/phase-transitions.html">phase transitions</a>. They use and refer to <a href="http://arxiv.org/abs/hep-lat/9608098">the methods I am familiar with</a>, but I do wonder if this is really the best one can do nowadays. <br><br /><br><br />If one does e.g. fit the location of the critical coupling as a function of lattice size, one has to deal with two big problems: First, the location of the critical coupling is not so well defined (e.g. 
due to the metastable states associated with 1st order transitions) for a given lattice size; there are limits on computation time and resources (*). <br><br />Second, how can one be sure whether the lattice is big enough to be in the 'scaling region', i.e. big enough that small-size corrections can be neglected (if the typical size of the 'bubbles', which come with a 1st order transition, is n, and the lattice size N is smaller than n, one has a problem). <br><br /><br><br />So what is the current state-of-the-art and where are the professional statisticians and their Bayesian stochastic network <a href="http://dspace.mit.edu/handle/1721.1/41649">thingy-ma-jiggies</a> when we need them (x)? Please let me know if you know something. <br><br /><br><br /><br><br /><small><br />(*) A related question for the practitioner: Is it better to spend the available computation power on a small number of iterations on a large lattice or is it better to do many iterations on a small lattice? <br><br /><br><br />(x) Speaking of <a href="http://normaldeviate.wordpress.com/2012/06/14/freedmans-neglected-theorem/">Bayesian</a> thingy-ma-jiggies ... <br><br /></small><br /><br>Wolfgangnoreply@blogger.com0tag:blogger.com,1999:blog-5418994588621362010.post-34093966427869519282012-05-01T01:00:00.000-07:002014-03-10T11:10:54.511-07:00follow up on quantum gravity<br><br />A year ago <a href="http://tsm2.blogspot.com/2011/04/is-ads-stable.html">I mentioned a numerical study</a> which indicates that AdS is unstable against small perturbations. <br><br />Meanwhile, <a href="http://arxiv.org/abs/1109.1825">Horowitz et al.</a> "find strong support for this idea". They also mention that "any field theory with a gravity dual must exhibit the same turbulent instability, and transfer energy from large to small scales", but it is unclear (to me) what this means for the AdS/CFT correspondence. 
But notice that they study AdS<sub>4</sub>, although the assumption seems to be that AdS<sub>5</sub> contains the same instability. <br><br /><br><br />Two years ago I wrote about <a href="http://tsm2.blogspot.com/2009/04/living-with-ghosts.html">higher order gravity models</a>. Recently, <a href="http://arxiv.org/abs/1202.0008">Leonardo Modesto</a> considered "higher derivative gravity involving an infinite number of derivative terms". <br />This new model "is instead ghost-free" and "finite from two loops upwards: the theory is then super-renormalizable". <br><br /><br><br />Last, but not least, <a href="http://arxiv.org/abs/1201.2864">Daniel Coumbe and Jack Laiho</a> have published version 2 of their paper "exploring the phase diagram of lattice quantum gravity"; a while ago I mentioned <a href="http://tsm2.blogspot.com/2011/07/asymptotic-safety.html">the talk</a> about it at the Lattice 2011 conference. <br><br /><br><br /><br><br />added later: In a new paper <a href="http://arxiv.org/abs/1205.0971">Ashoke Sen</a> calculates logarithmic corrections to the entropy of black holes which "disagree with the existing result in loop quantum gravity". <br><br /><br>Wolfgangnoreply@blogger.com1