the many worlds interpretation does not work (yet)
I posted a comment on the Shtetl blog, rejecting (once again) the many worlds interpretation (mwi). The mwi is supposed to solve the "measurement problem" of quantum theory, so let us first consider a simple experiment with two possible outcomes.
The main mwi assumption is that after the measurement both outcomes are realized and subsequently two macroscopically different configurations M1 and M2 exist in some (decohered) superposition.
However, we can make the difference between M1 and M2 arbitrarily large, and therefore gravitation cannot be ignored. M1 and M2 will in general be associated with two different space-time geometries, and so far we do not have a consistent framework to deal with such a superposition (*); should we e.g. use two different time parameters t1 and t2, one for each observer in each space-time?
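Schematically (this is only my shorthand, not a well-defined expression), the post-measurement state would have to look something like
|Psi> = c1 |M1>|g1> + c2 |M2>|g2>
where g1 and g2 are the space-time geometries sourced by M1 and M2; it is then unclear which evolution, e.g. exp(-iHt1) or exp(-iHt2), is supposed to act on such a state.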
In a few cases attempts have been made to describe such an evolution, but the conclusions are not in favor of mwi.
And how would the branching of space-time(s) work if the measurement is spread out over spacelike separated events, e.g. in an EPR-type experiment?
This gets worse if one considers a realistic experiment with a continuum of possible outcomes, e.g. the radioactive decay of a Pu atom, which can happen at any value of the continuous time parameter t. Assuming that this decay is amplified by a Geiger counter into different macroscopic configurations, how would one describe the superposition of the associated continuum of space-time geometries?
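In the same shorthand (again, not a well-defined expression), one would now need something like
|Psi> = integral dt c(t) |M_t>|g_t>
i.e. a continuous superposition of macroscopic configurations M_t, each entangled with the space-time geometry g_t it sources.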
The Copenhagen interpretation does not have this problem, because it only deals with one outcome and in general one can "reduce" the wave function before a superposition of space-time geometries needs to be considered.
An mwi proponent may argue that this issue can be postponed until we have a consistent theory of quantum gravity, and simply assume a fixed Newtonian background (or a flat Minkowski background). But if one (implicitly) allows the existence of Newtonian clocks, then why not the classical observer of Copenhagen?
In addition one has to face the well-known problem of the Born probabilities (x), the preferred basis problem, the question of what it takes to be a world, the puzzling fact that nothing ever happens, and other problems discussed previously on this blog.
In other words, the mwi so far creates more problems than it solves.
(*) In technical terms: the semi-classical approximation of quantum field theory coupled to gravitation is ultimately inconsistent, and we do not yet have a fully consistent quantum theory of gravitation to describe such a measurement situation.
(x) See also this opinion, which is a variant of the argument I made previously here and here.
entanglement
About five years ago I sent this email to Erik Verlinde:
-----------------------------
1/24/10
Dear Professor Verlinde,
I read your preprint about gravity as 'entropic force' and have the following question/suggestion:
It seems that there are large classes of systems following an area law for entanglement entropy, see e.g. arxiv.org/abs/0808.3773
Should they all show 'entropic gravity' somehow?
If yes, the obvious next step would be to pick a simple model, e.g. arxiv.org/abs/quant-ph/0605112 and check if/how entropic gravity manifests itself.
Thank you,
Wolfgang
------------------------------
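For readers who have never seen an "area law" in action, here is a minimal numerical sketch (my own toy example, not taken from either of the papers linked in the email, and it says nothing about entropic gravity): in a gapped 1d spin chain the entanglement entropy of a block saturates instead of growing with block size, which is the 1d version of an area law.
```python
import numpy as np

# Toy illustration of a 1d "area law" (my own sketch, unrelated to either paper
# linked above): for the gapped transverse-field Ising chain, the entanglement
# entropy of a block of spins saturates instead of growing with block size.

N, J, h = 10, 1.0, 2.0          # chain length, Ising coupling, transverse field (gapped phase)

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def op(single, site):
    """Embed a single-site operator at `site` into the N-site Hilbert space."""
    out = np.array([[1.0]])
    for k in range(N):
        out = np.kron(out, single if k == site else I2)
    return out

# H = -J sum_i sz_i sz_{i+1} - h sum_i sx_i  (open boundary conditions)
H = sum(-J * op(sz, i) @ op(sz, i + 1) for i in range(N - 1))
H = H + sum(-h * op(sx, i) for i in range(N))

# ground state by dense diagonalization (fine for N = 10)
vals, vecs = np.linalg.eigh(H)
psi = vecs[:, 0]

# entanglement entropy of the left block of size l, for every cut
for l in range(1, N):
    M = psi.reshape(2**l, 2**(N - l))        # split: first l spins vs the rest
    s = np.linalg.svd(M, compute_uv=False)   # Schmidt coefficients
    p = s**2
    S = -np.sum(p * np.log(p + 1e-15))       # von Neumann entropy of the block
    print(f"block size {l}: S = {S:.4f}")    # saturates quickly -> 1d area law
```
At the critical point h = J the block entropy would instead grow logarithmically, so the saturation really is a property of the gapped phase.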
As it turns out, this idea may not have been so stupid after all.
Of course, if entanglement stitches space-time together then the question remains: entanglement of what?
I still think the entangled qubits of a quantum computer are the most natural candidates; if we live inside one it would also solve the measurement problem.
But what would it be computing? We can only speculate.
the sleeping Brad DeLong problem
Brad DeLong was sound asleep for several months, but after he woke up he found an old blog post and quickly calculated the probability that Lubos is an April fool; he wrote a blog post about it because he had no memory of all the earlier debates on the topic.
Of course, Lubos disagreed with this calculation; according to his argument the probabilities are very different. So who is right?
I wrote about the Sleeping B. problem almost ten years ago, when it was mostly discussed in academic papers. Meanwhile it has become an internet standard and turned into a political debate: Lubos and the reactionaries vs. Sean and the liberals. Unfortunately, as it happens in tribal conflicts, all the nuances of the problem have been lost, e.g. the existence of solutions which are neither 1/2 nor 1/3.
In one of the many comments somebody wrote something like "if I would be really interested in this problem, I would code a simulation and see what it does". Unfortunately, if she actually sat down to implement a simulation, she would quickly find that it does not really answer anything; this is one of those problems which cannot be settled with an experiment or simulation, because it is about the question of what we actually mean by "probability". Neither physicists nor economists are well trained at thinking through what exactly they are talking about, so I predict that this debate will not go away anytime soon.
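To make that concrete, here is roughly what such a simulation would look like (a minimal sketch, assuming the standard setup: Head means one awakening, on Tuesday; Tail means two awakenings, on Tuesday and Wednesday). It produces 1/2 or 1/3 from the very same runs, depending only on what one decides to count.
```python
import random

# Sketch of the Sleeping Beauty simulation mentioned above (standard setup:
# Head -> one awakening on Tuesday, Tail -> two awakenings, Tuesday and Wednesday).
# The same runs give 1/2 or 1/3, depending only on what one decides to count.

random.seed(0)
N = 100_000

heads_tosses = 0        # experiments in which the coin came up Head
awakenings = 0          # total number of awakenings
heads_awakenings = 0    # awakenings at which the coin shows Head

for _ in range(N):
    head = random.random() < 0.5
    if head:
        heads_tosses += 1
        awakenings += 1           # Head: she is woken once
        heads_awakenings += 1
    else:
        awakenings += 2           # Tail: she is woken twice

print("fraction of coin tosses that were Head :", heads_tosses / N)               # ~ 1/2
print("fraction of awakenings with Head       :", heads_awakenings / awakenings)  # ~ 1/3
```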
So what is really the issue with this? Well, if the "a priori probability" is 1/2 then S.B. has to bet "as if" it was 1/3 due to the setup of the problem. So if you think probabilities are defined as betting odds (as many Bayesians do) then you will prefer 1/3. If you think probabilities are objective properties of physical systems then you will probably (!) prefer 1/2. (Btw, notice that S.B. has to (implicitly) use the 1/2 "a priori" to calculate the 1/3.)
I actually prefer the 3/8 solution, because it is not so clear how one should understand it. But I realized that I am a tiny minority of one a long time ago ...
-------
Perhaps it is useful to repeat the 3/8 solution here:
If the outcome was Head, she will only wake up on Tuesday; but if it was Tail, she will wake up on both Tuesday and Wednesday, and since she cannot distinguish the two days she assigns "a priori probability" 1/2 to each of them being today.
Therefore, the probability for the just awoken S.B. that today is Tuesday is
p(Tue) = (1/2) + (1/2)(1/2) = 3/4
where the 1st term corresponds to Head and the 2nd to Tail.
Now we can calculate the probability for Head as p(Head) = (1/2)*p(Tue) + 0*p(Wed) = (1/2)*(3/4) = 3/8,
knowing that the "a priori probability" for Head on Tuesday is 1/2.
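For completeness, the same arithmetic as a few lines of code (nothing new, just a transcription of the numbers above):
```python
# Direct transcription of the 3/8 arithmetic above.
p_head_prior = 0.5                                        # fair coin
p_tue = p_head_prior * 1.0 + (1 - p_head_prior) * 0.5     # = 3/4
p_wed = 1.0 - p_tue                                       # = 1/4
p_head = 0.5 * p_tue + 0.0 * p_wed                        # = 3/8
print(p_tue, p_wed, p_head)                               # 0.75 0.25 0.375
```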
the chaos computer club
A recent paper suggests a fundamental limit on chaos in physical systems.
Lubos wrote an easy-to-read introduction to the main idea.
I think this might be interesting for Scott and everybody else interested in (quantum) computing. If one considers a Turing machine sensitive to initial conditions (i.e. the input string), or a (quantum) computer simulating chaotic systems, the conjecture seems to imply a limit on computability.
Or think about a device which measures the position x of a butterfly's wings and sends the result to a computer, which calculates a function f(x) to determine its output. The conjecture seems to suggest a limit on the functions the computer can calculate in a finite amount of time.
Is it correct to read the result as "the number N of different internal states any computer can reach after a time T is bounded by exp(wT), where w is a fundamental constant"?
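To illustrate how I read it (only my toy picture, nothing taken from the paper): if nearby trajectories separate like exp(w*t), then a device with fixed resolution can distinguish at most about exp(w*T) states after a time T. The sketch below uses the logistic map, whose Lyapunov exponent is ln(2), as a stand-in for the chaotic system:
```python
import numpy as np

# Toy picture only: count how many "internal states" (resolution cells) a bundle
# of almost identical initial conditions reaches under a chaotic map.

rng = np.random.default_rng(1)
x = 0.37 + 1e-4 * rng.random(100_000)    # bundle of nearly identical initial conditions
resolution = 1e-3                        # fixed resolution of the measuring device

for t in range(1, 21):
    x = 4.0 * x * (1.0 - x)              # logistic map, Lyapunov exponent ln(2)
    n_states = len(np.unique(np.floor(x / resolution)))
    print(f"t = {t:2d}   distinguishable states ~ {n_states}")

# n_states grows roughly exponentially, at a rate set by the Lyapunov exponent,
# until it saturates at about 1/resolution cells.
```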