This is the story told in many books: Erwin and his cat start out in some initial state |I> and then evolve into a superposition |F> = |H> + |N>, with H indicating a happy cat and N a not so happy one (*). Due to decoherence the overlap <H|N> is very small, but not exactly zero. The interpretation is that |H> and |N> are associated with two different worlds; there is no 'collapse' (real or subjective) of the wave function that would eliminate one of the two.

But there is one issue I have with this story, not often told in those books: if there is no 'collapse', then how did we get the initial state |I> ? We have to assume that before his experiment with the cat Erwin decided to use his cat and not a dog, so really we have something like |I> + |d>. But before that he decided whether to do the experiment at all, and before that a committee decided whether he would get funding, and before that ...

So really the initial state was something like |I> + |x1> + |x2> + |x3> + ...

and therefore the following state looked something like |F> = |H> + |N> + |x1> + |x2> + |x3> + ... .

If there is never a 'collapse', then quantum theory is like a programming language without garbage collection, and we have to assume that the number of branches of the wave function is infinite - at least from our branch we cannot determine how many other branches there are. (Also, we do not know the amplitude of our branch relative to all the others; we could exist due to a freak event in the past for all we know.)
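The 'no garbage collection' picture can be made concrete with a toy model (my own illustration, not anything from the books): if every measurement-like decision splits each existing branch in two and nothing is ever pruned, the branch count grows exponentially. (The post omits normalization factors; the toy keeps them so the total norm stays 1.)

```python
import math

# toy model: each measurement-like event splits every existing branch in two
branches = [1.0]  # amplitudes of all branches; start with one branch of amplitude 1
for step in range(20):
    # every branch splits; each child inherits amplitude/sqrt(2) to preserve the norm
    branches = [a / math.sqrt(2) for a in branches for _ in (0, 1)]

print(len(branches))                          # 2**20 = 1048576 branches after 20 events
print(sum(a * a for a in branches))           # total norm is still (almost exactly) 1
```

After only 20 such events there are already over a million branches, each with a tiny amplitude, and nothing in the unitary evolution ever removes one.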

But this is a real problem, because the overlap between different branches is very small but not zero.

So if we calculate <H|F> we get <H|H> + <H|N> + <H|x1> + <H|x2> + <H|x3> + ... and although every <H|x> is very small due to decoherence, there is *a priori* no reason for the infinite sum over all branches to converge (**); I have never seen a good argument why this (infinite) sum should remain small compared to the terms we are interested in.
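To see why convergence is not automatic, here is a minimal numerical sketch. It assumes, purely for illustration, that each cross term <H|x_i> is a complex number of fixed tiny magnitude eps with a random phase. The partial sums then perform a random walk in the complex plane, so the typical size of the sum grows like eps*sqrt(n) - 'each term is tiny' does not keep the total tiny once n is large, and with correlated phases it would grow even faster, like eps*n.

```python
import cmath
import math
import random

random.seed(1)

eps = 1e-6  # magnitude of each cross term <H|x_i>, tiny due to decoherence

def cross_term_sum(n):
    """Sum n complex terms of magnitude eps with independent random phases."""
    return sum(eps * cmath.exp(2j * math.pi * random.random()) for _ in range(n))

for n in (10**2, 10**4, 10**6):
    s = cross_term_sum(n)
    # compare the actual sum with the random-walk scale eps*sqrt(n)
    print(f"n={n:>8}  |sum|={abs(s):.3e}  eps*sqrt(n)={eps * math.sqrt(n):.3e}")
```

With eps = 1e-6 and a million terms the sum is typically of order 1e-3 - a thousand times larger than any individual term, and there is no bound in sight as n keeps growing.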

Of course one can try to save appearances by assuming that the sum somehow has only a finite (but certainly very large) number of terms, e.g. because the universe is finite and time is discrete. However, this would mean trying to save quantum theory by making assumptions about cosmology, and I don't find that very convincing.

(*) Notice that I omit normalization factors (like 1/sqrt(2)) in this text to keep the ascii math readable.

(**) Notice that the point of this post is that it is actually a problem to determine those normalization factors (like 1/sqrt(2)) so that the total sums to 1.

## 3 comments:

If the universe began in a well-defined state then the wave function will remain normalizable, independently of the evolution into different branches. Where is the problem?

>> If the universe began in a well defined state

Susskind and Vilenkin are currently debating whether the universe is eternal or had a beginning. In any case, the normalization of the wave function of the universe is not a trivial matter...

But even if you assume that the wave function of the universe is normalizable, it is still obvious that only a tiny bit of this wave function is associated with our branch(es) and the vast majority is about other worlds. And we don't know much about this vast majority; we don't even know the amplitudes in it.

(The |I> state could come with a tiny weight for all we know!)

Therefore we cannot know whether the sum over those other branches converges, or how it compares to the few terms interesting to us, imho.

If we consider Erwin's cat and do a real calculation we need to ignore all the other worlds - i.e. we need to collapse the wave function of the universe.

Perhaps I should point out that

<H|F> = <H|H> + <H|N> + <H|x1> + <H|x2> + ...

is not some abstract quantity.

It determines the probability that we find a happy cat in the state |F>.

In other words, if the x terms overwhelm the sum, it means that m.w.i. does not get probabilities right (of course we kind of knew that already 8-)
