why this blog is unnecessary



I assume you have already noticed that this blog revolves around only a few issues: the arrow of time [1, 2, 3], the interpretation of quantum theory [1, 2, 3], the meaning of probability [1, 2, 3] and the role of consciousness [1, 2, 3].



But many years ago, H. D. Zeh published a remarkable paper which already condensed all those issues into one masterpiece, with some thought-provoking conclusions, even "assuming that every spacetime point carries consciousness". I really recommend it.



So it seems that all that might be left for me to write about are Ikea chairs and theology.



-------



added later: Yes, I am aware that locating consciousness in single spacetime points does not help us much with quantum gravity, and that the 'branching' of wave functions might be problematic if there is no classical time. But fortunately there is already a realist interpretation of quantum mechanics "suggesting that solving the problem of time in quantum gravity leads to a solution of the measurement problem in quantum mechanics".

And since 'quantum gravity' is more or less a solved problem, this blog really is unnecessary.


the future has ended ...



... and the past begins when you read this paper about the universal arrow of time. It claims that "if two subsystems have opposite arrow-directions initially, the interaction between them makes the configuration statistically unstable and causes a decay towards a system with a universal direction of the arrow of time."



I think this is just another example of 'initial condition chauvinism' and propose the following counter-example, a Newtonian toy model which contains two different types of particles. Initially we assume that there is no interaction between particles of the two different types, and we specify initial conditions for the particles of type 1 (red) at t=tI with an associated low entropy. As the configuration evolves for t > tI the entropy increases. Now we specify final conditions for the particles of type 2 (blue) at tF > tI and evolve the configuration of type 2 particles backwards to decreasing t < tF.
Obviously, the associated entropy of the type 2 particles is low at tF and increases as t decreases.



In a second step we turn on a (weak) interaction between the two particle types and can (e.g. iteratively) determine the resulting particle configurations. (We start from the configuration obtained without interaction and correct, in a first step, the particle trajectories for the weak interaction with the other type; we repeat these corrections as many times as desired.) I claim that this will not reverse the increasing/decreasing entropy of either particle type, even if we finally make the interaction stronger and stronger. Due to the symmetry of the toy model, if one could argue that e.g. the entropy has to change direction for particles of type 2, then one could make the same argument for type 1 in the other direction.



I think it might be interesting to see e.g. a computer simulation of such a toy model.
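To make this concrete, here is a minimal sketch (in Python) of one way such a simulation could be set up. The 1d geometry, the soft Gaussian pair force, the particle numbers and the number of correction sweeps are all arbitrary illustrative choices of mine, not part of the argument:

import numpy as np

rng = np.random.default_rng(1)

n_red, n_blue = 50, 50
t_i, t_f, n_steps = 0.0, 10.0, 2000
dt = (t_f - t_i) / n_steps
g = 0.01  # weak coupling between the two particle types

def force(xa, xb):
    # soft, short-ranged repulsion exerted on the particles xa by the particles xb
    d = xa[:, None] - xb[None, :]
    return g * np.sum(d * np.exp(-d**2), axis=1)

def forward(x0, v0, other):
    # evolve type 1 (red) from initial conditions at t_i towards t_f,
    # with the other type's trajectory from the previous sweep held fixed
    x, v = x0.copy(), v0.copy()
    traj = np.empty((n_steps + 1, x.size))
    traj[0] = x
    for s in range(n_steps):
        v += dt * force(x, other[s])
        x += dt * v
        traj[s + 1] = x
    return traj

def backward(xf, vf, other):
    # evolve type 2 (blue) from final conditions at t_f to decreasing t
    x, v = xf.copy(), vf.copy()
    traj = np.empty((n_steps + 1, x.size))
    traj[n_steps] = x
    for s in range(n_steps, 0, -1):
        v -= dt * force(x, other[s])
        x -= dt * v
        traj[s - 1] = x
    return traj

# low-entropy initial data for red at t_i, low-entropy final data for blue at t_f
xr0, vr0 = rng.normal(0.0, 0.1, n_red), rng.normal(0.0, 1.0, n_red)
xbf, vbf = rng.normal(0.0, 0.1, n_blue), rng.normal(0.0, 1.0, n_blue)

far = np.full((n_steps + 1, 1), 1e6)  # dummy partner: effectively no interaction
red, blue = forward(xr0, vr0, far), backward(xbf, vbf, far)

for sweep in range(5):  # iterative correction for the weak coupling
    red, blue = forward(xr0, vr0, blue), backward(xbf, vbf, red)

# a crude entropy proxy: the spatial spread of each type as a function of t
print(red.std(axis=1)[::500])   # grows with t
print(blue.std(axis=1)[::500])  # shrinks towards t_f

If the claim in the text is right, the red spread should keep growing with t and the blue spread should keep shrinking towards tF, no matter how many correction sweeps one applies (for sufficiently weak g).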







added later: There are two different ways to think about and simulate such a toy model:

In the first, one specifies 'normal' initial conditions for the red particles at tI and 'special' or 'correlated' initial conditions for the blue particles, also at tI, and then evolves the system forward to t > tI. The blue particles would be distributed over a wide region at tI, but their momenta would be carefully chosen so that they converge towards a narrow region at tF.

In this picture it is natural to assume that interaction between red and blue particles should force a common arrow of time, at least if there are more red particles than blue and if we wait long enough.

I called this assumption 'initial condition chauvinism'.



In the second, one specifies 'normal' initial conditions for the red particles at tI and final conditions for the blue particles at tF, just as I explained above. In this picture it is clear that whatever happens between the red and blue particles cannot change the fact that the blue particles converge into a narrow region at tF, even if there are more red particles and even if the interaction is strong. The only way entropy can reverse for the blue particles is if the red particles somehow forced them into an even narrower region at tI, and I just cannot see how this could happen.



The most interesting aspect of this is, of course, that the two different pictures should be equivalent!



PS: I am aware that in physics we usually specify initial conditions and not final conditions, but this is exactly the puzzle of the 'arrow of time' and cannot be used to derive it, imho.



added even later: It is of course true that "for most mixed initial-final conditions, an appropriate solution (of the Hamiltonian equations of motion) does not exist." However, I am pretty sure that the separation of initial and final conditions for red and blue particles, as described above, ensures that a solution to the equations of motion does indeed exist for this toy world. I would be very interested to see a convincing argument why the iterative procedure (for weak coupling) as described in the text does not converge.


three (actually five) links



There are better places than this blog to read about probabilities and all that; here are three examples ...



Terence Tao wrote a non-technical article about universality.

xkcd is not only your favorite web comic; it also posted an interesting probability puzzle a while ago [*].

And speaking of puzzling problems, if you have a question about statistics then maybe you should try StackExchange [x].





[*] If you have problems finding the answer in the many comments, it is here.



[x] added later: There is now also a StackExchange for physics.


a new proof for the truth of string theory



The proof presented in my previous blog post has meanwhile been examined by many commentators (two) and I now have enough confidence to use its structure in a slightly different context.



1) String theory is the possible 'theory of everything', underlying the physical reality of our world Wo.



2) We know that s.t. leads us to the concept of a multiverse M which contains our world, but many other possible worlds too: M = {Wo, W1, W2, W3 ...}.



3) It is possible that one of those worlds contains evidence for the truth of s.t. (e.g. the energy scales are such that it is easy for physicists to probe the Planck scale).

3b) Therefore M contains at least one world Wst where s.t. is evidently true.



4) But if s.t. is evidently true for one world Wst, then it must be true for all worlds in M.



5) Therefore s.t. is evidently true for our own world Wo.



6) You will notice that the above conclusions are independent of detailed assumptions about the (composition of) multiverse M.



I am aware that this is a physicist's proof and look forward to mathematicians formalizing it in the decades to come.



Also, I am sure that some string theorists already use this argument implicitly, but I still think there is some value in making it explicit.


a new proof for the existence of God ...

... from assumptions about many worlds.



I believe the following proof is a variation of Plantinga's ontological argument and continues my theological studies. But I think this new argument is sufficiently different from those previous attempts and should therefore be interesting to the reader.



1) We assume the existence of a multiverse M, which contains all possible worlds. [*]



2) M contains our world Wo.



3) It is possible that there is a world Wg created by the omnipotent, omniscient, omnipresent God.

3b) Therefore M contains the world Wg which was evidently created by God.



4) If M = { Wo, W1, W2, ... Wg ... } contains one world created by God then
we must assume God created all worlds in M (otherwise God would not be omnipresent).



5) Therefore, if we assume the existence of many worlds as above, it follows that our world Wo was created by God.



6) While the above conclusion is already sufficient, we can go one step further to clarify the
meaning of 5):

The existence and creation of our world is independent of other worlds (see first footnote),
therefore God created our world regardless of assumption 1). [**]





[*] M is not necessarily the multiverse of string cosmology or related to the many worlds of quantum theory; we only assume that M contains all possible worlds, each existing independently of the others, and independent of specific theories.

As you will notice, the infinite character of M does not really play a role
in the proof, so one need not worry about antinomies related to the set of all possible sets etc.



[**] Consider the sentence s = "If it rains in Australia, then my dog barks here in Vienna."
We know that the fact of a dog barking in Europe is independent of rain falling in Australia.
Therefore, if we know that s is indeed true, then we know that the dog barks (regardless of what happens in Australia).





added later: It seems that some have a problem with 3), which implies the *possibility* that God exists. I recommend re-reading this previous blog post, in particular paragraphs 2 and 3 and footnote [2], for an explanation of why such atheistic doubt is not rational.


shape up



"I figured you might be able to give me some pointers. I need to shape up."

Lester Burnham



Well, there is this paper which explains 'why decoherence has not solved the measurement problem'. (It is pretty much the argument I used here to state the 'interpretation problem'.)



Then there is this talk about the divergence of perturbation series in QFT.



And finally there is this paper about the strong coupling limit of the Wheeler-DeWitt equation.


please can you help me?



Recently, this problem came up in one of my pet projects:

Does anybody know the current state of the art for distinguishing a weak 1st order phase transition from a 2nd order transition with lattice simulations?

If you have an opinion please please let me know and leave a comment.





added later: It might help if I explain a little bit better what I am talking about.



In my pet project I am doing Metropolis simulations on a 4d lattice, and the lattice size is limited so that 32^4 is already 'very large'.

Of course, finite size scaling is an important tool, but I would like to know e.g. whether it is still state of the art to use the Binder cumulant, or if there are better ways to do this.
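For what it's worth, the cumulant itself is cheap to compute from the Metropolis time series; a minimal sketch in Python (the data layout in the usage comment is of course hypothetical):

import numpy as np

def binder_cumulant(m):
    # fourth-order Binder cumulant U = 1 - <m^4> / (3 <m^2>^2)
    # computed from a 1d array of order-parameter measurements
    m = np.asarray(m, dtype=float)
    m2, m4 = np.mean(m**2), np.mean(m**4)
    return 1.0 - m4 / (3.0 * m2**2)

# hypothetical usage: samples[L][beta] holds the measured order parameter
# for lattice size L at coupling beta
# for L, series in samples.items():
#     curve = {beta: binder_cumulant(m) for beta, m in series.items()}

Curves of U versus the coupling for different L should cross near a 2nd order point, while at a (weak) 1st order point U tends to develop a negative dip that deepens with the volume; whether this is still the preferred diagnostic is exactly my question.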



Also, one can try to directly identify meta-stable states, but what is the best technique to do so? I know that e.g. simple histograms were already substandard ten years ago.

I am also curious if people use partition function zeros in real problems and if something like this has become a standard tool in recent years.



I would appreciate any input e.g. pointers to articles or books that may be relevant. Please do not hesitate to post a comment (which you can do as anonymous).


Skolem's paradox



So far I have never mentioned how I understand the famous Löwenheim-Skolem result (*):

"no matter how fancy your axiomatic system, which seems to talk about real numbers, complex numbers, geometries, fields etc.,

in the end, all it really does is talk about the countable natural numbers, nothing more and nothing less."



In my opinion it is the most shocking result of the Grundlagenstreit.

But does this tell us something about the true nature of physical reality?



(*) and let me be very clear that I am a layman who read exactly one book about number theory (and I understood perhaps half of it).


down to earth



I assume you will be relieved that this blog post for once is not some crazy speculation about our universe and does not contain empty pseudo-philosophical thoughts. It does not even count espressos.

Instead, it is about the down-to-earth topic of quantum gravity. Actually, it is just a collection of links to pre-prints; in other words, I am cleaning out my to-do list.



B.F.L. Ward: ".. by using recently developed exact resummation techniques ... we get quantum field theoretic descriptions for the UV fixed-point behaviors of the dimensionless gravitational and cosmological constants postulated by Weinberg. Connecting our work to the attendant phenomenological asymptotic safety analysis of Planck scale cosmology by Bonanno and Reuter, we predict the value of the cosmological constant ..."



U. Gursoy: "We propose a general correspondence between gravity and spin models, inspired by the well-known IR equivalence between lattice gauge theories and the spin models. This suggests a connection between continuous type Hawking-phase transitions in gravity and the continuous order-disorder transitions in ferromagnets. ..."



N. J. Poplawski: "The Einstein-Cartan-Kibble-Sciama theory of gravity provides a simple scenario in early cosmology which is alternative to standard cosmic inflation and does not require scalar fields. The torsion of spacetime prevents the appearance of the cosmological singularity in the early Universe filled with Dirac particles averaged as a spin fluid. Instead, its expansion starts from a state at which the Universe has a minimum but finite radius. ..."



A. Strominger et al.: "The problem of gravitational fluctuations confined inside a finite cutoff at radius r=r_c outside the horizon in a general class of black hole geometries is considered. Consistent boundary conditions at both the cutoff surface and the horizon are found and the resulting modes analyzed. For general cutoff r_c the dispersion relation is shown at long wavelengths to be that of a linearized Navier-Stokes fluid living on the cutoff surface. ..."



If this were a better blog, each of these papers would get its own blog post with interesting explanations etc. - some value added.

But by now you should know that with this blog you will have to make up your own mind about all this ...


empty set



The empty set {} contains no element, nothing whatsoever.

Next we consider the set {{}}, which contains the empty set as its only element.

Then the set { {}, {{}} } which contains two elements and so on and so forth.

We assign the symbols 0, 1, 2, ... to these sets for convenience.



This is of course the standard definition of the natural numbers N as given by von Neumann; once we have N, then Z, Q, R, C etc. follow more or less in the usual manner.
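In Python, with frozensets standing in for sets, the whole construction takes a few lines (a toy sketch, of course):

zero = frozenset()            # 0 = {}

def succ(n):
    # von Neumann successor: n + 1 is n together with {n}
    return n | frozenset([n])

one = succ(zero)              # {{}}
two = succ(one)               # { {}, {{}} }
three = succ(two)

# each numeral contains exactly its predecessors, so its size is its value
assert len(three) == 3
assert zero in three and one in three and two in three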



I only mention it because some people believe that all physics is really just math.

But if "external physical reality is assumed to be purely mathematical" then all reality is based on the empty set.


7 x 6 = 41 you little sh**



Nowadays, Scott A. rarely writes blog posts, so I really enjoyed his latest entry and the discussion which followed. (By the way, Moshe refers to his own blog post which is here.)

It seems that Scott is worried about hyper-computation: "Yes, doing an infinite amount of computation in a finite time using exponentially-faster steps certainly does seem like a cheat to me!"

But I think he should be less worried about the discreteness of space-time and more about the question whether one can create artificial black holes and baby universes. Of course, what appears as a baby universe from one side looks like a normal universe from the inside, and we don't know if one could create a whole universe just to solve a math problem. (As long as we are not sure about quantum gravity, we are not sure about anything.)

Obviously, the Bekenstein bound would not limit the complexity of problems one could solve using baby universes (the volume of a universe is not bounded), and the only question is if/how one could get the answer out of the universe (perhaps using time travel?).



There are of course several indications that our own universe was indeed created as such a computing device:

1) Our universe seems to be fine tuned for the existence of math teachers.

2) We have reached some sophistication in our studies of math.

3) Life in this world seems to lack a deeper meaning and has a certain tendency towards the boring, uninteresting and annoying.

3b) The creator of this universe seems indifferent to the pain and suffering of its inhabitants.

4) One could resolve the Fermi paradox by assuming that the universe is fine tuned for its inhabitants to hang out at MathOverflow but prohibiting unnecessary inter-galactic travelling.



The only remaining question is this: If our universe was really created as some sort of computation device, is it at least part of a grand scientific project, some kind of ultimate mathematical inquiry?

Or did some ET kindergartener 'borrow' the baby-universe-computation-device of his older sister to solve the homework problem 7x6=?


Clarity about these matters ...



Philosophy, Bayesian inference, statistics and all that.



"Philosophy matters to practitioners because they use philosophy to guide their practice; even those who believe themselves quite exempt from any philosophical influences are usually the slaves of some defunct methodologist."



"We fear that a philosophy of Bayesian statistics as subjective, inductive inference can encourage a complacency about picking or averaging over existing models rather than trying to falsify and go further."



In other words, do not look for a Bayesian super-intelligence, but (try to) learn from your mistakes and fail in interesting ways.



update: The authors blog about it here and there.


arXiv or viXra?



Recently, there was some fun with arXiv vs. snarXiv, where arXiv is of course the place where serious physicists store their papers and snarXiv is a random paper generator.

But there is also viXra.org, a self-proclaimed 'alternative archive of 1072 e-prints in Science and Mathematics'. As an example, take a look at the paper Logic: a Misleading Concept, which kind of sets the tone for the rest.



In the following I present to you four abstracts and the challenge, should you accept it, to distinguish arXiv from viXra. The first reader who assigns them correctly (in the comments) wins a golden llama award, with a free subscription to this blog for a whole year. Here we go ...



1) Many people believe that mysterious phenomenon of consciousness may be connected with quantum features of our world. The present author proposed so-called Extended Everett's Concept (EEC) that allowed to explain consciousness and super-consciousness (intuitive knowledge). Brain, according to EEC, is an interface between consciousness and super-consciousness on the one part and body on the other part. Relations between all these components of the human cognitive system are analyzed in the framework of EEC. It is concluded that technical devices improving usage of super-consciousness (intuition) may exist.



2) Vafa's (11+1) F theory is extended by means of Bars' 2T holographic theory to yield a 14d Multiverse theory that permeates the brane of a 12d Universe in which both the Universe and the Multiverse have (3+1) spacetimes. Given the 2d toroidal compactification of F theory, we conjecture that the Multiverse has a 4d Cartesian compactification that is filled with 3D+T spacetime via the standard 6d elliptic Calabi-Yau compactification, as in both M and F theory. The result is exemplified using supermassive black hole cosmology.



3) The study of the so called brane world models has introduced completely new ways of looking up on standard problems in many areas of theoretical physics. Inspired in the recent developments of string theory, the Brane World picture involves the introduction of new extra dimensions beyond the four we see, which could either be compact or even open (infinite). The sole existence of those new dimensions may have non trivial observable effects in short distance gravity experiments, as well as in our understanding of the cosmology of the early Universe, among many other issues.



4) We propose a derivation of the empirical Weinberg relation for the mass of an elementary particle in an inflationary type of universe. Our derivation produces the standard well known Weinberg relation for the mass of an elementary particle, along with an extra term which depends on the inflationary potential, as well as Hubble's constant. The derivation is based on Zeldovich's result for the cosmological constant Λ, in the context of quantum field theory. The extra term can be understood as a small correction to the mass of the elementary particle due to inflation.



-------



The proud winner is cherez who can now read this fine blog for another year for free while looking at the golden llama, knowing that he can tell arXiv from viXra. Congratulations!









Links to the four papers are in the comments.


one pipe, many worlds



Some people claim that they have seen the image of a pipe on this blog. Obviously, there are many different ways in which such an image could have been observed in the many worlds we live in, but in this world there is only one way in which it could not have appeared.

So what does this tell us about probabilities?



PS: If you want to think even more about probabilities and many worlds, I recommend this paper and in particular the sections about some toy many-worlds models and
about 'the problem of inappropriate self-importance'.



-----



added later: In my previous example a quantum experiment has two possible macroscopic outcomes, e (espresso) and n (no espresso), and the probability for each is 50%, although e can be realized in N different (macroscopic) ways, while n can be realized in only one way.

Now, one could argue that in both cases the observer does not immediately experience a conflict with the 50% Born probability (the observer in one world e or n does not experience all the other worlds). Therefore we need to consider iterating this experiment.

If the experiment is repeated R times, then observers who see the outcome eeeeeeeee...eeeeee will indeed conclude that something is very wrong. Unfortunately, by simple world counting they are the overwhelming majority, inhabiting N^R of the (N+1)^R worlds (which for N >> R is almost all of them), and I would really like to understand how this can be reconciled with e.g. the Deutsch-Wallace interpretation of probability.
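A back-of-the-envelope count, in Python with an arbitrary N, shows how stark the mismatch is:

N, R = 1000, 20   # N ways to realize e, experiment repeated R times

born = 0.5 ** R             # Born rule: probability of seeing only espressos
naive = (N / (N + 1)) ** R  # fraction of the (N+1)^R worlds seeing only espressos

print(born)   # about 1e-06: all-espresso runs should be very rare
print(naive)  # about 0.98: by world counting, almost everybody sees just that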


one espresso, many worlds



I walk up to a fully automated coffee dispenser and push the button to get an espresso. What I do not know is that the button is connected to a device which consists of a radioactive source and a detector. The button activates the detector for 5 seconds; if the detector registers a particle from the radioactive material, the machine will make an espresso, and otherwise not (*). The probability to get my espresso, according to textbook quantum theory, is exactly 50%.



But this raises the following question. There are many ways I can get the espresso, but there is only one way it can fail. See, I push the button at 1pm, which here means I push it exactly at 1:00:00. Now the detector could register a particle at 1:00:01 or at 1:00:02 or ... at any time between 1:00:00 and 1:00:05. There is an infinite number of possibilities for when and how the detector could register a particle.

But there is only one way it cannot register a particle: the detector simply shuts down at 1:00:05 without having clicked.

Since we all believe in the many-worlds interpretation, this implies that there are many worlds in which an espresso has been prepared and only one world in which I fail to get it. So how can the probability be 50%?

Unless we assume that somehow the many different worlds are not equally real (x).



So perhaps the many worlds are not really real after all?





(*) Actually, many automated dispensers in the real world follow a similar design.



(x) added later: There are proposals on how to save the appearances. Let me know if you find them convincing.


weird comments



Something is weird with the comments to this blog. Now I receive spam (e.g. to the previous entry) which I cannot delete (*). And the comment counters are sometimes messed up too.

So I decided to disable comments (again) and recommend that you do not click on links in comments which consist mostly of strange characters.





(*) Why send spam to a blog which is mostly inactive?!





update: Meanwhile, it seems that the delete finally worked and I am turning comments back on.


triangles



In a recent preprint, Renate Loll et al. present numerical evidence that causal dynamical triangulation may actually be a discretization of Horava-Lifshitz gravity.

But is this really good news?

In an earlier paper by Christos Charmousis et al. an argument was given that "the original Horava model, and its 'phenomenologically viable' extensions do not have a perturbative General Relativity limit at any scale". Lubos wrote more about that at the time and there is also this paper with a more general argument.

I am neither an expert on CDT nor on Horava-Lifshitz gravity and would welcome comments about this. (Of course, I always welcome comments!)


friends



One reason I do not have a facebook account is that I would probably not have enough friends; most likely my friends would on average have more friends than I do.

This is just another result of this 'mutant form of math' and is explained e.g. here and here (pdf).
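If you do not believe it, a small random 'friendship' network makes the point numerically; a sketch in Python (the network model and its parameters are arbitrary choices of mine):

import numpy as np

rng = np.random.default_rng(42)

# 1000 people, each pair befriended independently with probability 0.01
n, p = 1000, 0.01
upper = np.triu(rng.random((n, n)) < p, 1)
adj = upper | upper.T        # symmetric adjacency matrix, no self-loops
deg = adj.sum(axis=1)        # everybody's number of friends

mean_deg = deg.mean()
# average, over all friendships, of the friend's number of friends:
# a person with d friends is counted d times, hence the weighting by deg
mean_friend_deg = (deg * deg).sum() / deg.sum()

print(mean_deg)         # about n*p = 10
print(mean_friend_deg)  # larger: mean plus variance/mean

Your friends have on average mean + variance/mean friends, which exceeds the plain mean whenever the number of friends fluctuates at all.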



-----



Adrian Kent thinks that many worlds interpretations are scientifically inadequate.

He is against many worlds.



In this world.

I wonder what he thinks in the other worlds...



And I wonder if I have on average more friends in the other worlds than in this.

By the way, can a whole world have 'friends'? (*).





(*) Friends have to have something in common. So we define that two different worlds or branches of the m.w.i. are 'friends' if they both contain at least one observer with (almost) the same conscious experience. At any point in time we are then dealing with a 'social network' of worlds, and the above friendship paradox applies: most likely the other worlds will have more 'friends', i.e. be more popular, than our world.
If I believe in the many-minds interpretation and assume that conscious experience constitutes reality, does this mean our world is most likely less real than others?


mean mean



warning: this blog, and in particular this blog post, is about a mutant form of math.





Recently, I browsed through Cosma's notebook about large deviations reading that

"The limit theorems of probability theory [..] basically say that averages
taken over large samples converge on expectation values."

and immediately my inner contrarian tried to come up with a simple stochastic process
where the sample mean does not converge.

Of course, this is not very difficult, since there are many examples of processes where
the sample mean is unbounded and does not converge to anything.



It would be much more interesting to find a process where the sample mean is bounded, but 'bouncing around' unpredictably and therefore not converging. In other words, a process where it seems that a 'mean of sample means' exists and yet it does not.



My initial idea was to use the sample mean S of a random variable y itself as a variable in the stochastic process and consider the following:

y(t+1) = -sgn[S(t)] / (|S(t)| + e) + noise

where e (epsilon) is a small number, S is the sample mean S(t) = ( y(1) + y(2) + ... y(t) )/t
and the noise term is a (uniformly distributed) random variable between -1 and +1.

The sgn[S] function is defined to be -1 for negative values of S but +1 otherwise, so that sgn(0) = +1.

Notice that we can write S(t) as ((t-1)/t)*S(t-1) + y(t)/t, formally making this a Markov process for the pair (y,S).



If the current sample mean S(t) is a small negative (positive) number, the process will generate y with large positive (negative) values, but if the current sample mean is large then the process will generate small y distributed around zero, forcing the sample mean to lower values. This should make for a nasty little process with a really mean mean.



Unfortunately, the sample mean of this process still converges (#). And it converges towards zero, as depicted in the following picture,
produced by a numerical simulation with e = 10^-6 (and S(0) = 1 instead of 0).







It is not too difficult to understand that the convergence rate is proportional to the value of e, and it is not difficult for a statistical mechanic to fix this problem with a little tinkering: using (e/t) instead of e does the trick.
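Here is, by the way, a minimal Python version of the simulation for both variants (the seed and run length are arbitrary):

import numpy as np

rng = np.random.default_rng(7)

def sample_means(T, eps, shrink=False):
    # y(t+1) = -sgn[S(t)] / (|S(t)| + e) + uniform noise on [-1, 1],
    # with S(t) the running sample mean of y, S(0) = 1 and sgn(0) = +1;
    # shrink=True replaces the constant e by e/t
    S, ysum = 1.0, 0.0
    means = np.empty(T)
    for t in range(1, T + 1):
        e = eps / t if shrink else eps
        sgn = -1.0 if S < 0.0 else 1.0
        y = -sgn / (abs(S) + e) + rng.uniform(-1.0, 1.0)
        ysum += y
        S = ysum / t
        means[t - 1] = S
    return means

print(sample_means(10**5, 1e-6)[-5:])               # 1st process: converges (slowly) to zero
print(sample_means(10**5, 1e-6, shrink=True)[-5:])  # 2nd process: keeps bouncing around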







As the numerical simulation depicted in this picture suggests, the sample mean does not converge, but seems to remain bounded (x), bouncing unpredictably between positive and negative values (*).



The obvious question is what the expectation value E[y] is and in which sense S converges towards it.

Unfortunately, I have to leave it as an exercise for the reader to find an answer and actually prove the (non)convergence of this process.





(#) does it really?



(x) or diverging very slowly?



(*) By the way, the values of y(t) are finite for all finite t, but obviously are now unbounded; but this is also true for Gaussian noise and should not really bother us.



added later: I convinced myself of the following:

(#) Yes, for the 1st process (with fixed epsilon e) the sample mean S does indeed converge on E[y] = 0.

(x) No, for the 2nd process (with decreasing e/t) the sample mean S(t) is still bounded (by the inverse of e). Due to the symmetry of the system [the fact that I assume sgn(0) = +1 and not 0 is irrelevant by the way] this indicates that the mean of the S(t) values, i.e. 'the mean of the means', converges on zero, which would be compatible with E[y] = 0.

However, it is also the case that S(t) *cannot* converge on zero.


you have to click



If you read this, it is because you clicked. And I know you need some more links, because you have to click. You cannot stop. Here they are: pure, high-quality stuff. Take them, click them.



The Aeolist reviews Sean Carroll's book and concludes that it is "too sloppy to be of much help".

John Baez writes about Algorithmic Thermodynamics and describes a "design for a heat engine powered by programs".

Boris Kosyakov examines the classical physics of a particle at the top of a hill and concludes that it is indeterministic, re-discovering what John D. Norton found out about The Dome.

Last, but not least, an easy-to-read explanation (pdf) of Xia's proof of the existence of non-collision singularities in Newtonian gravity.


interpretations, part 4/4



I recommend that you read the first part of this series first.



"If the facts don't fit the theory, change the facts." Albert Einstein



If you assume that nothing is wrong with the wave function W and you dislike
the interpretations of the previous part 3, then you have to conclude that
something is wrong with (your perception of) the reality R.

So, one way out of the interpretation problem, as posed in the first part of this series, is to simply assume that actually both detectors clicked; you are just somehow confused about it.



The many-worlds interpretation and its variants (consistent histories, many minds, etc.) are increasingly popular and solve the interpretation problem by pointing out that the wave function W = |1> + |2> continues to evolve into |1>|D1> + |2>|D2> and finally |1>|D1>|Y1> + |2>|D2>|Y2>, where Y1 indicates you, puzzled why click 1 has been observed but not click 2.
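One can check this little formula numerically; a two-qubit sketch in Python, with |1>, |2> and |D1>, |D2> as two-component basis vectors (my toy encoding, not anything from the papers discussed here):

import numpy as np

one, two = np.array([1.0, 0.0]), np.array([0.0, 1.0])
d1, d2 = one.copy(), two.copy()

# W after the measurement interaction: |1>|D1> + |2>|D2> (normalized)
w = (np.kron(one, d1) + np.kron(two, d2)) / np.sqrt(2)

# two nonzero singular values = photon and detector are entangled,
# i.e. W is no longer a product state and each 'branch' carries weight 1/2
print(np.linalg.svd(w.reshape(2, 2), compute_uv=False))  # [0.707 0.707]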



But, as I have argued previously, the full meaning of a many-worlds interpretation can only
be appreciated if one goes 'backwards in time', trying to find the origin of W (which
cannot evolve from a 'collapsed' wave function). In the words of Matthew J. Donald:



"Each time we pass back (through the appearance of a collapse) we get a better approximation to W.
Eventually, we arrive back at the big bang. ...



The quantum state of the universe coming out of the big bang looks - at least in its non-gravitational
aspects - very like a thermal equilibrium state. In the Hamiltonian time propagation of that state,
the stars and planets which we see now do not exist as definite objects, and certainly neither does any particular
measuring device now being used by us on one of those planets. W seems to be a complete mess.
However, it does have a great deal of hidden structure, and it is the job of a no collapse interpretation to explain
how that hidden structure comes to be seen." (*)



You may think that we have only replaced the original interpretation problem with a much more complicated one.
But the beauty of it is that this allows us to continue to think about the meaning of life, the universe and everything... (x)



-----



(*) While I agree with the overall conclusion, I disagree with the details.

i) Nowadays the multiverse landscape seems very popular (especially among string theorists) and it is not clear at all that we arrive back at one big bang.

ii) In general, I doubt that one could really reconstruct W as suggested or otherwise determine the wave function of the universe, unless there is a new general principle which limits the possibilities (notice that we only know about one branch out of an infinity of possible branches).

iii) If one assumes that W may contain an infinite number of branches, then, since those branches (assuming decoherence) are only almost orthogonal to each other, this poses a real problem if one wants to use W to calculate anything.



(x) As far as I know, Frank Tipler was the first who realized that the many-worlds interpretation opens the door to theological interpretations of quantum theory. If one wants to follow this path (and don't we all want to believe?), then I would use a previous result to conclude that the wave function of the universe is not only invisible but necessarily pink.


interpretations, part 3/4



I recommend that you read the first part of this series first.



The ensemble interpretation goes back to Albert Einstein, who assumed that W does not describe an individual system, but instead an ensemble of equivalent systems or experiments. (Notice that Chad Orzel emphasizes that we need to use many photons to see an interference effect in the Mach-Zehnder interferometer.)

This resolves the mismatch between W and R, without assuming that anything is wrong with either W or R (*).

It seems to me that this minimalist interpretation is actually quite popular with many physicists and especially experimentalists.

Further, I think it is the philosophical basis of the "shut up and calculate" approach, and I suspect it is used by some physicists who are otherwise convinced that Einstein never understood quantum theory.



In stark contrast, the Copenhagen interpretation assumes that W very well describes individual quantum systems, but emphasizes that we necessarily have to use classical concepts to describe the outcome of experiments. Therefore one needs to change the 'description' during a measurement, and the wave function 'collapses' at some point; nowadays one could refer to decoherence to better determine that point.



In this interview Werner Heisenberg emphasized that W does not describe (fundamental) reality itself...




"That is just the point; I do not know what the words fundamental reality mean.
They are taken from our daily life situation where they have a good meaning,
but when we use such terms we are usually extrapolating from our daily lives
into an area very remote from it, where we cannot expect the words to have a meaning.
This is perhaps one of the fundamental difficulties of philosophy: that our thinking hangs in the language."




... instead he assumed that W does describe what he called 'potentiality'.




'What does a wave function actually describe?' In old physics, the mathematical scheme
described a system as it was, there in space and time.
One could call this an objective description of the system.
But in quantum theory the wave function cannot be called a description of an objective system,
but rather a description of observational situations.




The explanations of Niels Bohr were usually even more profound, with complementarity playing a prominent role in his philosophy; but it seems that they were indeed so profound that nobody actually read them (x).



Let me finally mention some results which emphasize that the problem of understanding the measurement process stems from the impossibility of self-measurement. The observer of an experiment can therefore not use W to describe herself and her own experience. In my opinion one could use these results either as a further argument in support of the 'collapse' of the Copenhagen interpretation or as the starting point for a new 'relational interpretation'.



-----



(*) Notice that in many cases a Wick rotation translates between quantum and statistical mechanics.



(x) "In a widely used compendium of papers on quantum theory [...], the pages of Bohr's reprinted article are out of order.
This paper (Bohr's response to the famous 1935 Einstein-Podolsky-Rosen critique of the standard Copenhagen interpretation)
is widely cited in contemporary literature by physicists and philosophers of science.
Yet I have never heard anybody complain that something is wrong with Bohr's text in this volume."

Mara Beller about the philosophical pronouncements of Bohr, Born, Heisenberg and Pauli.



continue to part 4.


la statistique bayésienne



Andrew Gelman and Cosma Shalizi wrote something about the real philosophical foundations of Bayesian data analysis. Very interesting.

Unfortunately, it is all French to me...


interpretations, part 2/4



I recommend that you read the first part of this series first.



The idea that the wave function W is only an incomplete description of the reality R is as old as quantum theory itself. Already at the Solvay conference of 1927, Louis de Broglie suggested adding 'hidden variables', with W being only a 'pilot wave'.

During the 1930s Albert Einstein published several thought experiments
(the most famous being the EPR paper) to demonstrate that W was obviously incomplete.



Consider the following 1-dimensional thought experiment.

A particle enters a detector of (great) length L at time tI and we determine its momentum p = mv with high precision, knowing that this will lead to a large uncertainty dx in its position x. At a later time tF we turn the detector on, which will now determine the position x of the particle with high precision and leave dp large. But although we assume that Heisenberg's uncertainty relation holds for each measurement, we can now reconstruct the path of the particle between tI and tF, namely R(t) = x(tF) - v*(tF - t), knowing v(tI) and x(tF) with high precision and assuming conservation of momentum (just as we can reconstruct the path which the photon must have taken in the Mach-Zehnder interferometer, once we know which detector clicked).

But this reconstructed path R(t) for tF > t > tI is not described at all by the wave function W(t); this suggests that W provides only an incomplete description of reality.



A consistent theory of hidden variables for non-relativistic quantum theory was later formulated by David Bohm, and attempts have been made to generalize it to relativistic field theories (1, 2).

By the way, notice that the 'hidden variables' are actually the particle and pointer positions
we observe in a measurement (while we never experience the
wave function directly).
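To make this concrete, here is a tiny sketch (my toy example, not Bohm's own presentation) of a Bohmian trajectory for a free Gaussian wave packet with hbar = m = 1 and initial width 1; for this packet the guidance equation v = Im(psi'/psi) works out to v(x,t) = x t / (4 + t^2), with analytic trajectories x(t) = x0 sqrt(1 + t^2/4):

import numpy as np

def velocity(x, t):
    # de Broglie-Bohm guidance velocity for the free spreading Gaussian packet
    return x * t / (4.0 + t * t)

x, t, dt = 1.0, 0.0, 1e-4   # particle started at x0 = 1
while t < 10.0:
    x += velocity(x, t) * dt
    t += dt

print(x)                             # integrated trajectory at t = 10
print(np.sqrt(1.0 + 10.0**2 / 4.0))  # analytic x0*sqrt(1 + t^2/4), about 5.1

The particle simply rides along with the spreading packet; its position is always definite, while psi itself never collapses.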



---



An even more elegant way to introduce hidden variables is to refer to (human) consciousness as a selection principle. This works especially well because, on the one hand, one cannot deny the reality of our conscious experience, while on the other hand it never shows up directly in physics.

John v. Neumann was the first to mention it, Wigner and his friend made the idea popular, and Henry Stapp worked out some concrete proposals. Actual experiments, using EEGs to test the influence of consciousness on quantum experiments, have been proposed, as described in this paper.



Of course, in a section about quantum theory and consciousness I have to mention Roger Penrose. He not only thinks that W is incomplete, but assumes that current quantum theory is actually wrong, because it ignores the effects of (quantum) gravitation. He has also proposed an actual experiment to test his idea.



There are many other proposals to modify quantum theory; I only mention the Ghirardi-Rimini-Weber theory, which assumes a real, sporadic collapse of the wave function.



continue to part 3.


interpretations, part 1/4



It is time to write about something truly exciting; in other words, it is time to write about the interpretation problem of quantum theory. This is the 1st of 4 posts about it, the introduction if you will.

Some time ago, Chad Orzel wrote a blog post explaining decoherence and it shall be the starting point for us (he later wrote a more detailed explanation as response to a comment).

Consider a single photon in a Mach-Zehnder interferometer, which will end up at one of the two detectors D1 and D2.





If the interferometer is properly set up, the wave function W of the photon will be something like |1> + |2> (normalization coefficients are absorbed in the state vectors). Notice that W is symmetric in the two possible outcomes 1 and 2.

However, we know that when we do the experiment, only one of the detectors will click, either D1 or D2.

And so we have already all the ingredients together for the interpretation problem.



W: The wave function |1> + |2> is symmetric and prefers neither 1 nor 2.



R: In reality, only one detector will click, either D1 or D2, and obviously one is preferred over the other.



As Chad explains, decoherence will to some extent eliminate the quantum interference between |1> and |2>, and in this sense 'classical behavior' emerges; but this does not really solve our problem.

Even if the (inner) product between |1> and |2> or |D1> and |D2> is nearly zero, this does not eliminate the fact that W is symmetric and R is not. (Notice also that decoherence will in general bring the product between |D1> and |D2> close to zero but not exactly zero, and there is no sharp cut-off between quantum interference and classical behavior. But this is really not that important to our problem.)
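One can see this 'close to zero but not exactly zero' explicitly: if the detector states differ in many environmental degrees of freedom, their overlap is a product of single-particle overlaps and decays exponentially, but it never vanishes. A small numerical sketch, with made-up environment states:

import numpy as np

rng = np.random.default_rng(3)

# two slightly different states of a single environment particle
e1 = rng.normal(size=8); e1 /= np.linalg.norm(e1)
e2 = e1 + 0.1 * rng.normal(size=8); e2 /= np.linalg.norm(e2)

# |D1>, |D2> differ in n such particles, so <D1|D2> is the n-th power
single = abs(np.dot(e1, e2))
for n in (1, 10, 100, 10000):
    print(n, single**n)   # decays fast with n, yet is never exactly zero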



By the way, please notice that the interpretation problem is a real problem (W does not match R); we are not talking about an 'interpretation problem' in the sense an art critic or a philosopher might use that phrase.



It is evident that every attempt to solve the interpretation problem must fall into one of three categories.



i) The problem is with W. The wave function is not a complete description of R. We will encounter this approach in part 2 of this series as hidden variables, etc.



ii) While W is complete, the problem is with what we mean by 'W describes R'. The Copenhagen interpretation, the ensemble interpretation and others belong here. I will discuss them in part 3.



iii) The most radical proposal is to assume that W is fine and our problem is with R; reality is just not what we think it is. The many-worlds interpretation is the most important example and I will discuss it in the final part 4 of this series.



One more remark about the upcoming parts: while similar reviews often point to a favored interpretation and describe all others in a negative light, I will try something new and describe all interpretations as convincingly and favorably as I can. I hope that this will increase the entertainment value.



continue to part 2.


entropic gravity



Recently, Erik Verlinde proposed that gravity can be described as entropic force. I am not sure yet what to think about this, but Lubos explains why he is certain this can never work.

Meanwhile, Lee Smolin published a preprint using Verlinde's idea to derive Newton's law from loop quantum gravity. I am not convinced by his argument.

Verlinde considers the change in entropy dS for displacements dx assuming a holographic principle, and in his calculation he implicitly assumes a smooth and indeed flat geometry.

There is of course nothing wrong with that, but if Lee Smolin wants to use this argument, then he first has to show that there is a reasonable limit of loop quantum gravity which reproduces this smooth and (almost) flat spacetime, and I don't see that.



added later:



More discussion Lubos vs. Erik (scroll down through the comments).



Lee Smolin responds to my comment here.



Robert Helling comments on entropic everything.


previously, somewhere else



Just some links to that other blog, which you may or (more likely) may not find interesting..

.. about many worlds (again),

.. lattice gravity models,

.. the impossibility of self-measurements,

.. and a lottery ticket of zero value but with probability to win greater than zero.