nothing ever happens


I have previously explained my problems with the "many worlds interpretation", e.g. if one considers the superposition of macroscopically different branches, one would need to know how to handle the branching of space-time geometry.
But let us ignore gravitation for now, assume a flat background, and only consider non-relativistic, simple quantum theory.
We begin with an initial state |i> at t0, which evolves according to Schroedinger's equation; at some point the state |s(t)> contains a superposition of the observer(s) and all the rest: Schroedinger's cat is in a superposition of dead and alive, and so is Schroedinger, if we just wait long enough.

But somebody had to set up the experiment and get it started at t0. We cannot invoke a classical observer to do that; instead we have to treat the preparation of |i> as just another quantum process, and then the experiment could equally well have started at t0+d or t0-d. In fact, if |s(t)> is a solution of the Schroedinger equation, then so is |s(t+d)> for an arbitrary constant d, and the many worlds wave function is the superposition of all of them.
Notice that this is very different from standard Copenhagen quantum theory. In Copenhagen there is usually an initial condition (i.e. the beginning of the experiment) that selects one solution; but such an initial condition requires a classical observer who sets up the experiment.
Obviously, if the state of our system is such that |s(t)> = |s(t+d)>, for arbitrarily small d, it follows that d|s>/dt = 0, i.e. nothing ever happens in the "many worlds interpretation"; unless we consider a Hamiltonian that explicitly depends on the time parameter.
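To spell this out, here is a sketch, writing |S> for the full many worlds wave function and assuming the superposition over all shifts d can be given a meaning (e.g. with some regularization of the integral):

```latex
% the full many-worlds state is the superposition of all time-translated solutions
|S(t)\rangle \;=\; \int \mathrm{d}d \; |s(t+d)\rangle \;=\; \int \mathrm{d}t' \; |s(t')\rangle ,
% the last integral no longer depends on t, hence
i\hbar \, \frac{\mathrm{d}}{\mathrm{d}t} \, |S(t)\rangle \;=\; H \, |S(t)\rangle \;=\; 0 .
```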
So it seems that one needs to introduce classical time and classical clocks somehow externally, otherwise one deals with a system that forever remains in its ground state, H|s> = 0.

escape velocity


This is a copy of a blog post from my other blog, which got some comments there.

CIP asked a question about the entropy during star formation and I think we got the answer, at least qualitatively; but I would like to understand this better.
So let us begin with this calculation of John Baez, which gets the entropy wrong: it would decrease during star formation, i.e. during the gravitational collapse of the matter which makes up the star. What the formula leaves out is the entropy of the outgoing radiation, but I would like to stay within a simple Newtonian model with classical point particles only.
In this case the "missing entropy" must come from the particles with velocities above the escape velocity of the star, which leave the collapsing cluster of particles. (The positions and velocities of the particles are actually not bounded, violating an assumption of this calculation, as he noted at the end of his page.) In other words, the formula John uses can only be an approximation, there is actually no decreasing volume V which encloses all particles and if one defines V considering a sphere which encloses all particles which cannot escape, the number N he uses would not be constant. So how does one really calculate the entropy?
A simpler question would be: if the initial number of particles is N, contained in a volume V, what fraction will escape within a small time interval dt? The Maxwell distribution would tell us the number of particles with velocities above the escape velocity, and approximately half of them would escape if they are within a distance v*dt of the surface ...
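As a rough illustration of the first step, here is a sketch that computes the fraction of particles above the escape velocity; the temperature, particle mass and escape velocity are placeholder assumptions, not values for a real protostar:

```python
# Fraction of particles with speed above the escape velocity,
# assuming a Maxwell-Boltzmann speed distribution.
import numpy as np
from scipy.stats import maxwell
from scipy.constants import k, m_p

T = 1.0e4        # assumed gas temperature in K (placeholder)
m = m_p          # assumed particle mass: one proton (placeholder)
v_esc = 5.0e4    # assumed escape velocity in m/s (placeholder)

a = np.sqrt(k * T / m)                # scale parameter of the Maxwell speed distribution
f_above = maxwell.sf(v_esc, scale=a)  # fraction of particles with v > v_esc
print(f"fraction above escape velocity: {f_above:.3e}")
```

The geometric factor (roughly half of those within a distance v*dt of the surface) would then have to be folded in on top of this.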
But all this seems a bit unsatisfactory; does anybody have the reference to a full calculation of this problem or do I have to run a computer simulation?

added later: A simple simulation of N=1000 particles, initially contained within a sphere of radius 1 and with zero initial velocity, suggests that after a long enough time almost all particles escape to a location outside the initial sphere, due to the simulated gravitational interaction. Of course, my program (quickly cobbled together) could be wrong or inaccurate. The chart below shows the fraction of escaped particles on the y axis against the number of time steps on the x axis (I have no explanation for the kink after 500 time steps).
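For reference, a minimal sketch of what such a simulation could look like; the gravitational constant, softening length, time step and number of steps are arbitrary choices, not the values used for the chart below:

```python
# Minimal N-body sketch: N particles start at rest inside a unit sphere,
# interact via softened Newtonian gravity (unit masses), and we count
# how many end up outside the initial sphere as time goes on.
import numpy as np

rng = np.random.default_rng(0)
N, G, dt, eps, steps = 1000, 1.0 / 1000, 1e-3, 0.05, 1000  # arbitrary choices

# random positions uniformly inside the unit sphere, zero initial velocities
pos = rng.normal(size=(N, 3))
pos *= (rng.random(N) ** (1 / 3) / np.linalg.norm(pos, axis=1))[:, None]
vel = np.zeros((N, 3))

def accelerations(pos):
    # pairwise softened gravitational accelerations
    diff = pos[None, :, :] - pos[:, None, :]        # r_j - r_i
    dist2 = (diff ** 2).sum(-1) + eps ** 2
    np.fill_diagonal(dist2, np.inf)                  # no self-force
    return G * (diff / dist2[..., None] ** 1.5).sum(axis=1)

for step in range(1, steps + 1):
    # leapfrog (kick-drift-kick) integration step
    vel += 0.5 * dt * accelerations(pos)
    pos += dt * vel
    vel += 0.5 * dt * accelerations(pos)
    if step % 100 == 0:
        escaped = (np.linalg.norm(pos, axis=1) > 1.0).mean()
        print(step, f"fraction outside initial sphere: {escaped:.3f}")
```

A leapfrog integrator at least approximately conserves the total energy, which makes it easier to spot gross numerical errors in such a quick-and-dirty simulation.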




The distribution of particles (projected onto a 2d plane) after a hundred time steps ...



... one can see a "halo" of escaping particles surrounding the majority of particles in the collapsing star.

virtual information loss


Sabine Hossenfelder wrote a blog post about the information loss paradox, pretty much repeating standard arguments made in this debate. Among them the following:
"Physicists are using quantum field theory here on planet Earth to describe, for example, what happens in LHC collisions. [..] in principle black holes can be created and subsequently annihilated in any particle collision as virtual particles. This would mean then [..] we’d have no reason to even expect a unitary evolution."

As I said, this is a standard argument, but I have a problem with it:
In any experiment we can do, e.g. at the LHC, the energy of such a virtual black hole would be well below the Planck mass, i.e. far from the quasi-classical limit where the information loss problem is discussed.
In which sense can any particle fall into a microscopic b.h. with a radius much smaller than the Planck length, if its wavelength is much larger? So in which sense would a virtual b.h. pose an information loss problem?
We have to assume that the mass of an off-shell virtual b.h. could be arbitrarily large, but its contribution to any S-matrix element would be strongly suppressed (at least exponentially) for masses much larger than the collision energy, which at the LHC is well below the Planck mass. Therefore its contribution would for all practical purposes be unmeasurable.


added later: Without a full theory of quantum gravity (and even string theory does not know how to handle black holes yet, see e.g. fuzzballs and firewalls vs. ER=EPR) we can only make some basic estimates.
There are estimates of proton decay due to virtual black holes and the expected lifetime is about 10^45 years - a factor 10^11 higher than what we could currently detect.
But I think even those estimates are too low if information loss requires a black hole of mass > m_Planck (if the surface area is indeed quantized, black holes with a small mass m < m_Planck may not even exist). Wick rotation suggests that the contribution of a massive black hole with m > m_Planck to any Feynman diagram would be suppressed by a factor exp(-k^2) or exp(-(m/E)^2), where E is the energy of the collision.
Btw the same exponential factor shows up in a different estimate, suppressing the production of black holes even for large collision energies E.
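Just to get a feeling for the numbers, here is a quick back-of-the-envelope sketch; it takes the heuristic factor exp(-(m/E)^2) at face value, which is of course only an assumption in the absence of a full theory:

```python
# Order-of-magnitude estimate of the suppression exp(-(m/E)^2) for a
# virtual black hole of Planck mass in an LHC collision.
from math import log10

m_planck = 1.22e19   # Planck mass in GeV
e_coll = 1.3e4       # LHC collision energy, roughly 13 TeV, in GeV

exponent = (m_planck / e_coll) ** 2
print(f"(m/E)^2 is about 10^{log10(exponent):.0f}")  # roughly 10^30
# exp(-10^30) is far beyond anything measurable - for all practical purposes zero.
```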

the many worlds interpretation does not work (yet)


I posted a comment on the Shtetl blog, rejecting (once again) the many worlds interpretation (mwi); it is supposed to solve the "measurement problem" of quantum theory, so let us first consider a simple experiment with 2 possible outcomes.
The main mwi assumption is that after the measurement both outcomes are realized and subsequently two macroscopically different configurations M1 and M2 exist in some (decohered) superposition.

However, we can make the differences between M1 and M2 arbitrarily large and therefore gravitation cannot be ignored. M1 and M2 will in general be associated with two different space-time geometries and so far we do not have a consistent framework to deal with such a superposition (*); should we e.g. use 2 different time parameters t1, t2 - one for each observer in each space-time?
In a few cases attempts have been made to describe such an evolution, but the conclusions are not in favor of the mwi.
And how would the branching of space-time(s) work if the measurement is spread out over spacelike events, e.g. in an EPR-type experiment?

This gets worse if one considers a realistic experiment with a continuum of possible outcomes, e.g. the radioactive decay of a Pu atom, which can happen at any point of the continuous time parameter t. Assuming that this decay gets amplified with a Geiger counter to different macroscopic configurations, how would one describe the superposition of the associated continuum of space-time geometries?

The Copenhagen interpretation does not have this problem, because it only deals with one outcome and in general one can "reduce" the wave function before a superposition of spacetime geometries needs to be considered.

A mwi proponent may argue that this issue can be postponed until we have a consistent theory of quantum gravity and simply assume a Newtonian fixed background (or a flat Minkowski background). But if one (implicitly) allows the existence of Newtonian clocks, then why not the classical observer of Copenhagen?

In addition one has to face the well-known problem of the Born probabilities (x), the preferred basis problem, the question of what it takes to be a world, the puzzling fact that nothing ever happens, and other problems discussed previously on this blog.

In other words, the mwi so far creates more problems than it solves.


(*) In technical terms: The semi-classical approximation of quantum field theories plus gravitation is ultimately inconsistent and we do not yet have a fully consistent quantum theory of gravitation to describe such a measurement situation.

(x) See also this opinion, which is a variant of the argument I made previously here and here.