here and now



"How is it that there are the two times, past and future,
when even the past is now no longer and the future is now not yet?"

Augustinus, Confessions, Book 11, chap. 14



Meanwhile, we know that in addition to events which happened in our past and events which will happen in our future, there are also events which are space-like separated from us (neither in our past nor in our future); And we know that the present is really a single point in space-time.

Of course, there is nothing mysterious about relativity; the math is easy enough that nowadays it is taught in high school, with Minkowski diagrams and the Lorentz group as homework exercises.



But still - I wonder about the present and how the activities of my brain fit into a single space-time point when I experience the "here and now" ...



Perhaps I should think more about world lines.


about entropy



"I thought of calling it 'information', but the word was overly used, so I decided to call it 'uncertainty'. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.'"

Claude Shannon



In order to contemplate the definition of entropy, and perhaps gain an advantage in debates about it, I recommend this article about classical thermodynamics. In section 3 'a one-dimensional classical analogue of Clausius' thermodynamic entropy' is constructed, a construction which dates back to Helmholtz. (The properties of a one-dimensional classical gas were previously discussed on this blog here.)
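For reference, the 'uncertainty function' Shannon mentions in the quote above is the familiar H = -sum p_i log p_i; a minimal sketch (my own toy code, in bits if the log is base 2):

```python
import math

def shannon_entropy(probs, base=2):
    """H = -sum p_i log(p_i): the 'uncertainty' of a discrete distribution."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))    # 1 bit: a fair coin
print(shannon_entropy([0.9, 0.1]))    # ~0.47 bits: a biased coin is less uncertain
print(shannon_entropy([0.25] * 4))    # 2 bits: four equally likely outcomes
```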



And then there is a pdf file available for the textbook
'Entropy, Order Parameters and Complexity'.



I found both the article and the book on Cosma's list of interesting links.


world lines



I write this blog post about something I have not really thought about and do not know much about.

The Aeolist discusses the Dowe/Salmon causal-process account of causation and raises what are, in my opinion, important objections (see also here).

But I think there is a more fundamental objection: In order to define world-lines one needs a concept of space and time, but how could one determine the properties of space and time and measure distances between world-lines?

One would have to introduce a metric, in other words a field, and quickly end up with (something like) general relativity and its complications.
As the Aeolist already noticed, the introduction of fields "brings along a whole other suite of problems like how to individuate objects and processes on that conception".



Does it really make sense that philosophers retrace the history of physics (and the inconsistencies of its various concepts) from Newtonian particles to quantum field theory just to define terms like 'process' and 'causation'?


deep or trivial



I would summarize the previous blog post as follows:

In general, a closed system, which contains e.g. a physicist and/or computer, cannot predict its
own future, even if we assume deterministic laws of physics (*).

This is quite easy to understand once you think about it;
But is this some deep insight or just trivial stuff?



An equivalent statement would be that, in general, a closed system, which contains e.g. a physicist and/or computer, cannot
determine or know its own microstate.

Although equally trivial, it has a bit more 'statistical mechanics flavor' to it and might be interesting if one considers foundational
questions about entropy or even more profound quantities.

It also has a certain Zen-like quality...





(*) See also this and that.


Arthur and free will



In recent days and weeks I read an interesting but rather heavy book about
Arthur Schopenhauer, his philosophy and his times. And I think this was the main reason I did not
find the will in me to write something on this blog. I still don't have too much
to offer, except the following silly story...



-----



This story is about a very smart physicist and her simpleton friend. Among other things
the smart physicist entertained herself by predicting his behavior. This was possible because
they lived in a Dennett-Newtonian universe, where all thoughts and all behavior were
a function of the configuration of molecules constituting a brain, and the movement of these
molecules was deterministic, following Newtonian laws. The bouncing of the molecules was not so difficult to predict, since all forces were well known.



The physicist had several machines in her laboratory to determine the configuration
of huge numbers of molecules to arbitrary precision and a supercomputer (also made of Newtonian particles of course)
to calculate the future configuration of molecules ahead of time.
Since her friend was made of a large but finite number of molecules, all she had to do was, for example,
use her machines to measure the configuration of molecules at 9am (he did not even notice it), feed the result
into her supercomputer and read out the calculation, which predicted his behavior at 10am. And when she determined that
he would say "I am bored, let's go for a walk" this was exactly what happened, like clockwork.
Easy as pie and quite funny.



Unfortunately, she made a mistake. She wanted to show him how smart she was and wrote down her prediction
so he could see it, and for some reason it suddenly failed to work.

Of course, on the one hand it was immediately clear what had happened. As soon as he read that at 10am he would "go to the window
and open it", he decided to open it earlier, and at 10am he had already closed it again.
There was nothing mysterious about it; in fact it was a completely deterministic process, with Newtonian
photons carrying the prediction to his Dennett-Newtonian brain, which was not very complex, but smart enough to simply do
the opposite of what she wrote on the paper. He did it just to prove a point. And of course it was quite irritating.



On the other hand, she did not understand this at all. Her machines could measure the configuration of all
molecules in the room (including herself) and the supercomputer calculated this forward to arbitrary precision. So the calculation 'knew' that a prediction was
written on a piece of paper, knew about the Newtonian photons carrying the message, about his simpleton brain receiving it and doing the opposite of what was written, etc.



So how could this prediction go wrong? Everything was deterministic! And still, no matter how many times she tried,
her simpleton friend with his simpleton stubbornness did the opposite of what she wrote down. Every time. Was Newton wrong after all? Or Dennett?



She found a solution, of course; it was easy enough: get another friend. But still...



-----



If you have a good explanation of what caused this 'failure' of determinism then please post a comment. The first to solve this silly puzzle will win a Golden Llama award for major contributions to the blogosphere of physics, which includes a free subscription to this blog.

Of course, if you just want to debate the whole thing feel free to post a comment too, or if you want to let me know just how silly this silly story really is.



added: And the winner of the Golden Llama Award is Chris, who pointed out that the problem lies with the supercomputer (trying to) predict its own prediction. (See the comments for more details.)

Congratulations!
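For anyone who likes to see the self-reference spelled out, here is a toy sketch (my own construction, not part of the original puzzle): a predictor that must announce its forecast to a perfectly deterministic contrarian has no consistent prediction to announce.

```python
def contrarian(announced_prediction: str) -> str:
    """A deterministic agent that simply does the opposite of what it is told."""
    return "stay" if announced_prediction == "walk" else "walk"

# The supercomputer must output a prediction P such that, once P is shown to
# the agent, the agent still does P, i.e. P must be a fixed point of `contrarian`:
for candidate in ("walk", "stay"):
    actual = contrarian(candidate)
    print(f"announce {candidate!r} -> agent does {actual!r} -> prediction "
          f"{'holds' if actual == candidate else 'fails'}")

# Neither candidate works: the map has no fixed point, so a machine that must
# publish its own prediction inside the system it predicts cannot always be right.
```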






heretic stuff



Some evildoer at ScienceBlogs asks "Is String Theory an Unphysical Pile of Garbage?" and references
this paper (in particular p.54 to 57) and other heretic stuff.

"... in string theory, we don't know both the variables and the equations. In fact, unless another theory (...) comes along that encompasses and expands upon string theory, string theory isn't a fundamental theory at all, due to instabilities."



I think this is yet another job for SuperLumo, defender of the one true string theory!



While we wait for SuperLumo, let me add a few remarks about this (but keep in mind that I am not a string theorist).

The blog post and the paper it references basically complain about the lack of a fundamental, non-perturbative formulation of string theory. But it seems to me that the Maldacena conjecture provides for such a non-perturbative description, currently at least for AdS.

A while ago string field theory was proposed as a more fundamental formulation and the referenced paper suggests that it suffers from instabilities. I cannot judge the argument in detail, but it seems to me that string field theory is indeed no longer such a hot topic.
Most of the effort today seems to be focused on generalizing and understanding the AdS/CFT correspondence.

However, I also think it is a mistake to underestimate 'string theory as perturbation theory'.
After all, it provides the only known way to deal with quantum gravity while at the same time keeping local Lorentz invariance (i.e. a smooth spacetime) and quantum theory as we know it. And it gets surprisingly far, including consistent calculations of black hole entropy, making use of the amazing string dualities.

Many questions remain unanswered, and perhaps early hopes that string theory would answer all questions of particle physics have turned into consternation about the multiverse, but I think the answer to the "incendiary title" of the blog post obviously has to be "No!".


no quantum resolution ...



This paper is a really interesting comment on the one proposing a "Quantum resolution to the arrow of time dilemma", discussed here previously.

"In this note we show that the argument is incomplete and furthermore, by providing a counter-example, argue that it is incorrect. Instead of quantum mechanics providing a resolution in the manner suggested, it allows enhanced classical memory records of entropy-decreasing events."


Bayes vs Kelly



A Bayesian statistician understands probabilities as a numerical description of 'subjective uncertainty'. In order to make this a little more concrete, Bruno de Finetti considered betting odds and order books (see e.g. this paper [pdf!] for a more detailed discussion).

Nowadays this approach is usually introduced in the following shortened form (*):

"we say one has assigned a probability p(A) to an event A if,
before knowing the value of A, one is willing to either buy or sell a lottery ticket of the form 'Worth $1 if A' for an amount $p(A).

The personalist Bayesian position adds only that this is the full meaning of
probability; it is nothing more and nothing less than this definition."



But there is a problem with that.



As every professional gambler knows, if one places a series of bets one needs to follow the Kelly criterion to avoid certain ruin in the long run. But the Kelly fraction and thus the amount one should be willing to bet is strictly zero for the case of a fair bet (when one would be willing to take both the buy and sell side).

In other words, it is wrong to pay the price suggested in the above definition, if one wants to survive in the long run.
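A minimal sketch of that point (standard Kelly formula; the toy code is my own): for a bet won with probability p that pays net odds b, the Kelly fraction is f* = (bp - q)/b with q = 1 - p, and for the lottery ticket of the definition bought at its assigned price the optimal stake is exactly zero.

```python
def kelly_fraction(p, b):
    """Optimal fraction of the bankroll to stake on a bet won with
    probability p that pays net odds b (win b per unit staked)."""
    q = 1.0 - p
    return (b * p - q) / b

# An even-money bet believed to be 50/50 (a 'fair bet'):
print(kelly_fraction(0.5, 1.0))                  # 0.0 -> stake nothing

# The ticket 'worth $1 if A' bought at its assigned price p(A):
# stake p, win 1 - p, i.e. net odds b = (1 - p)/p -> Kelly stake is 0 for any p.
for p in (0.1, 0.5, 0.9):
    print(p, kelly_fraction(p, (1.0 - p) / p))   # all 0.0
```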



Now, it would seem that this is one of those 'technicalities' that one can easily sweep under a rug or into a footnote. But I think it would be difficult to somehow incorporate the Kelly criterion in the above definition of probability, because in order to derive the Kelly fraction one needs to know already quite a bit about probabilities in the first place.

It may make more sense to emphasize that, of course, an order book is really a process and Bayesian probabilities are found in the limit when such order books approach the fair price and bet sizes tend towards zero. But unfortunately, in general we don't know much about the convergence of this process and real world examples of order books do not exhibit such a limit [x].



There is one additional problem, because the Kelly criterion is actually the limiting case of a more general rule. Indeed, if the estimated probabilities contain just a small amount of noise, bankruptcy still looms even if the Kelly criterion is used. Therefore professional gamblers know that one must bet 'less than Kelly'; In general a rational agent will bet 'less than Kelly' due to risk aversion and this means that the direct relationship between bet sizes and probability, as proposed in the above definition, is lost completely [x].



Therefore, I suggest that Bayesians simply refrain from using order books etc. (free markets are becoming quite unpopular anyway) to define probability and instead state that 'subjective uncertainty' is a self-evident concept. After all, we know that self-evidence is a very reliable kind of rug.



(*) I do not want to pick on this particular paper, since similar 'definitions' are
nowadays used in many cases. But since the paper is about the foundations of quantum theory I would like to point out that I have it on good authority that God does not play dice.



[x] If the risk aversion and the probability estimates of the participating agents are correlated, this would already be one reason why the order book would not converge towards the 'fair price'.


managed world views



We already have managed accounts and managed healthcare; now we also have managed world views, thanks to Scott. A very interesting application, and I recommend that you try it and check your views on quantum mechanics, strong AI and other topics.

Of course, the devil is in the details: several commenters have already found ambiguities in the statements proposed by the world view manager, and the detected tensions are often accompanied by comments disputing their validity.

But where Wittgenstein ran into a brick wall, Scott may yet succeed. His manager could indeed be the first application of Web 3.0 a.k.a. 'the semantic interwebs'.

In the future, bloggers and other opinionators will check the consistency of their views and rationally resolve tensions and perhaps we will have, in addition to virus and spam filters, consistency scans of web pages and blog posts.



e.g. "wvm just detected an unresolved tension between this and that blog post and recommends to shut down this blog until the tension is resolved."



I am sure that Web 3.0 will be a better place, with about 95% of the current blogs removed from the interwebs due to inconsistencies.


... and peace reigns once more



"Nothing could be simpler. It is merely a small wooden casket, the size and shape of a cigar box, with a single switch on one face. When you throw the switch, there is an angry, purposeful buzzing. The lid slowly rises, and from beneath it emerges a hand. The hand reaches down, turns the switch off and retreats into the box. With the finality of a closing coffin, the lid snaps shut, the buzzing ceases and peace reigns once more."

The 'ultimate machine' of Claude Shannon.


memories



Cosma links to this article about a 'quantum solution to the arrow-of-time dilemma' and suggests reading it carefully.
I think I already read it a year ago as this preprint and it has several interesting ideas; In particular I liked the passages about Borel's argument.



But I was also thinking about the hidden assumption of this paper.
As far as I understand it, the author (implicitly) assumes that memories are about the past, and in a paper about the 'arrow-of-time dilemma' this should be explained, not assumed.
In other words, the author suggests that we could live in a time-symmetric world and simply do not have memories of "phenomena where the entropy decreases"; This is all fine, but in my opinion one then needs to explain why we only have memories of the past but not of the future, without "an ad hoc assumption about low entropy initial states".

E.g. in the conclusions we read that "we could define the past as that of which we
have memories of, and the future as that of which we do not have any memories" (*), but the author does not ask or answer the question how there can even be such a thing as the future, of which we have no memories, in a time-symmetric world.



update: Sean also wrote about the paper, and the author responds in the comments.

I am glad that at least one comment agrees with my own reading; And it seems that Nick Huggett has some interesting papers I should read.



update2: The author responds to Huw Price (who raised the same objection I did).



update3: Last, but not least, Dieter Zeh comments on the paper as well. I wrote about his book about 'the direction of time' previously.



PS: The question "if the world would run 'backwards', would we even notice it?" was raised and discussed previously on this blog.



(*) As it stands this statement is of course contrary to the usual convention(s). From what we know about physics, most of those events of which we have no memory are space-like to us and not in our future. And of course there are even events in our past of which we have no memories. E.g. we know for sure that Philip Augustus of France spent a night with Ingeborg of Denmark, but we have no documents or memories about the mysterious events of that night.


Fermi event



The Fermi experiment detected a 31 GeV photon emitted by a short gamma-ray burst and "this photon sets limits on a possible linear energy dependence of the propagation speed of photons (Lorentz-invariance violation) requiring for the first time a quantum-gravity mass scale significantly above the Planck mass".

In other words, the observed event suggests that Lorentz invariance holds (at least against violations linear in the photon energy) up to and in fact above the Planck scale, and thus provides an empirical argument to rule out several proposals for quantum gravity. As far as I know, this is the first time direct empirical evidence about physics at the Planck scale has been obtained!

Via Lubos.



So what does this tell us about string theory?


quantum Hamlet



The arXiv blog reports about a new proposed effect called Quantum Hamlet Effect. According to the author "It represents a complete destruction of the quantum predictions on the decay probability of an unstable quantum system by frequent measurement".

But I think there is a problem with it. The Hamlet state is prepared by a series of subsequent measurements, happening at decreasing time intervals tau/sqrt(n), with n
going to infinity. This leads to a divergent sum in the probability, which then leads to the "complete destruction of predictability".

But, of course, in reality the limit n to infinity cannot be taken (e.g., as the author himself notices, he neglects the time-energy uncertainty!), so we have to assume the procedure stops at n = N, for some finite but perhaps large N. Unfortunately, the divergent Hamlet term is the harmonic series, which increases only with log(N),
i.e. it increases much more slowly than sqrt(N).
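A quick numerical illustration of that last point (my own arithmetic, not taken from the paper): the truncated harmonic series grows like ln N, which is dwarfed by sqrt(N) already for modest N.

```python
import math

def harmonic(N):
    """Partial sum 1 + 1/2 + ... + 1/N."""
    return sum(1.0 / k for k in range(1, N + 1))

for N in (10**2, 10**4, 10**6):
    print(f"N={N:>8}  sum 1/k = {harmonic(N):7.3f}"
          f"   ln N + gamma = {math.log(N) + 0.5772:7.3f}"
          f"   sqrt(N) = {math.sqrt(N):9.1f}")
```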

Therefore I doubt that the Hamlet effect will "turn out to be more useful and famous than [the Zeno effect]" as suggested on the arXiv blog.


truth



The statement "p is an unknown truth" cannot be both known and true at the same time.
Therefore, if all truths are knowable, the set of all truths must not include any of the form
"p is an unknown truth"; Thus there must be no unknown truths, and thus all truths must be known.

At least according to Frederic Fitch.



Later, after a heated debate about the question whether all true logical statements are indeed tautological, one
of the philosophers finally screamed "enough is enough" and stormed out of the room.


summer and hot and beach



It is summer, it is hot and you should be at the beach.

All I have are some links to stuff you have probably seen already:

A paper about improved mean field approximations.

It turns out that 1-d classical gas is known as Jepsen gas in the literature [1, 2].

A link between birds on a wire and random matrix theory.

A paper about non-renormalizability of quantum gravity (via Lubos).

A talk by Steve Carlip about quantum gravity at short distances [pdf].

Last but not least, an important contribution to an old problem of philosophy.



Really, you should be at the beach...


1-dimensional gas



In this episode we study the amazing properties of 1-dimensional classical gas.



There are N 'molecules', each with the same mass m, in a narrow cylinder of length L,
moving back and forth, colliding with each other and we assume that the collisions are
perfectly elastic. The cylinder is closed on both ends, but at the right end the rightmost
molecule can escape if its energy exceeds the threshold Eo, providing a 'window' for a
physicist to observe what is happening inside.

How many 'molecules' do we expect to
leak out from the 'window' and what can we learn from observing them (like peeking into a 1-dimensional oven)?



It turns out that the statistical mechanics of this system is remarkably simple,
once we consider what happens when two molecules collide.



Assume that 'molecule' A moves with velocity v1 and collides with B moving with velocity v2.
Using momentum and energy conservation (and the fact that both 'molecules' have exactly the same
mass) we find immediately that after the collision A moves with velocity v2 and B moves with velocity v1.
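For completeness, here is the standard elementary calculation written out (nothing beyond what the text already states):

```latex
m v_1 + m v_2 = m v_1' + m v_2' , \qquad
\tfrac{1}{2} m v_1^2 + \tfrac{1}{2} m v_2^2 = \tfrac{1}{2} m v_1'^2 + \tfrac{1}{2} m v_2'^2 ,
```

and since the sum and the sum of squares are both fixed, the only solutions are (v1', v2') = (v1, v2), the trivial 'pass through' solution, or (v1', v2') = (v2, v1): equal masses simply exchange their velocities.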



But this is equivalent to assuming that we are actually dealing with two (quasi)particles 1 and 2, which move with constant velocities v1 and v2 and do not interact with each other.



I guess a string theorist might call this a duality; We have two equivalent pictures of the same
situation. In one picture we are dealing with interacting molecules, bouncing off each other, and in
the other picture we are dealing with non-interacting molecules moving with constant velocities. As long as we cannot distinguish the molecules (they have the same mass) both pictures describe the same situation.



Of course, statistical mechanics is quite simple in the non-interaction picture and we can immediately answer the question from above: If initially there are n molecules with kinetic energy E > Eo,
we will observe n molecules leaking out and the time this takes will be less than 2*L/v, with v = sqrt(2*Eo/m). We only need to consider a freely moving (quasi)particle with E = Eo (+ epsilon), traveling
from the right end to the left, getting reflected and moving back to the right end where it leaves
the cylinder; All other (quasi)particles with E > Eo will leave even earlier.
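A minimal numerical check of this bound in the free (quasi)particle picture (the toy parameters and the broad velocity distribution are my own choice):

```python
import numpy as np

rng = np.random.default_rng(0)

m, L, E0 = 1.0, 10.0, 1.0
v = np.sqrt(2.0 * E0 / m)            # threshold speed for escaping at the right end

# random quasi-particles: positions in [0, L), velocities from a broad distribution
x = rng.uniform(0.0, L, 100_000)
u = rng.normal(0.0, 2.0, 100_000)

fast = np.abs(u) > v                 # these carry E > E0 and will leak out
xf, uf = x[fast], u[fast]

# time to reach the right end: straight there if moving right, or down to the
# closed left end, reflect, and back (total distance L + x) if moving left
t_exit = np.where(uf > 0.0, L - xf, L + xf) / np.abs(uf)

print("slowest escaper :", t_exit.max())
print("claimed bound 2L/v:", 2.0 * L / v)
assert t_exit.max() < 2.0 * L / v
```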



And what do we learn about the gas (remaining) in the cylinder from observing the escaping molecules? The answer is exactly nothing, because of the independence of the (quasi)particles. We won't even know the temperature of the remaining gas, but we would know for sure that the kinetic energy of each remaining molecule is less than Eo.

Notice that the molecules of the 1-dimensional gas will in general not follow a
Boltzmann distribution; instead the probability distribution for energy and momentum remains whatever it was
initially - the 1-d system does not 'thermalize' as one would expect from experience with a real 3-d gas.

It is left as an exercise for the interested reader to determine if (or in which sense) the 0th and 2nd law still hold.



I assume somebody must have studied the properties of 1-dimensional classical gas already, but I am just not aware of it. Please let me know if you have a reference.


decreasing entropy



I was thinking about the director of this previous post and how she
could use a mechanical device to generate the fire alarm.



[figure: sketch of the long, narrow cylinder, with particle A on the left and particle B at rest somewhere to the right]



A long and narrow cylinder has a slowly moving particle A on one side and a particle B at rest on the other (*).
She sets everything up on Sunday and simply waits until she hears the 'click' of the two particles colliding, which then triggers the fire alarm.

Here is my problem: It would seem that the entropy of this closed system decreases until we hear the 'click'; The entropy due to the unknown location of particle B is
proportional to ln V, but the accessible volume V (the region where B could still be) decreases with time as A moves
from left to right.

Notice that she can make the cylinder (and thus the initial V) as large as she wants, and she could use more than one particle at the B end, so that the initial entropy k N ln V can be arbitrarily large (**); And if the velocity of particle A is very small, this decrease can take a long time ...
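Just to make the bookkeeping explicit (the toy numbers are my own choosing, and the entropy is of course only defined up to an additive constant):

```python
import math

k_B = 1.380649e-23        # J/K
N   = 1e20                # number of B particles; she can pick this as large as she likes
L0  = 1000.0              # initial cylinder length in meters
v_A = 1.0 / 86400.0       # particle A crawls at 1 m per day

def S(t):
    """Entropy of ignorance about the B particles: k N ln V,
    where V = L0 - v_A * t is the region they could still occupy (no click yet)."""
    return k_B * N * math.log(L0 - v_A * t)

for day in (0, 100, 500, 900):
    t = day * 86400.0
    print(f"day {day:4d}: S = {S(t):.6e} J/K")
```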



added 1 day later: I think I (finally!) figured out where the problem is with this puzzle. See the comments if you want to know my solution (instead of finding out for yourself.)



(*) While the initial position and momentum of A are well known (the
particles are heavy enough that we don't have to worry about quantum effects), the position
of particle B is unknown (but we do know that it is at rest).



(**) Of course the effort to set up the device will increase entropy by an even larger quantity, but all this occurs already on Sunday.



added later: I am no longer sure about that. She might have simply picked a cylinder
of unknown length, but shorter than 5m. The (right) end of that cylinder and the particle B would be identical. Now she sets up particle A on the left side to move with a (constant) speed of 1m/day and when A hits the other end (=particle B) it triggers the alarm (at which point she then knows the length of the cylinder).

I don't see how the act of picking up a cylinder of unknown length increased the entropy on Sunday.



reading



I just came across the book Information, Physics and Computation, by Marc Mezard and Andrea Montanari, which was published just recently. The draft is still available as pdf files here. Now you know what I am currently reading.

And there is also this paper about
MAP estimation of Hidden Markov processes; I mention it as a follow-up to earlier posts.

"We reduce the MAP estimation to the energy minimization of an appropriately defined Ising spin model..." Sounds interesting.


time and uncertainty



I am sure you know this one already, but ...



The director announces that next week there will be a fire drill. In order to make it more realistic, the day of the drill will be a surprise.

Here is the problem: The drill cannot be on Friday (the last day of the work week), because on Friday morning everybody would know that if the drill has not happened yet it will have to be that day, so it would not be a surprise.

But for the same reason it cannot be on Thursday: everybody knows it cannot be on Friday, and on Thursday morning, knowing that it has not happened yet, one would have to conclude that it will be that day, so it would not be a surprise; etc.
Therefore the fire drill cannot be on any day.



But on Tuesday the alarm bell rings and of course nobody knew it would be that day...



C.F. v. Weizsäcker discussed the puzzle in his book 'Aufbau der Physik', assuming that it tells us something about the nature of time.

According to Wikipedia no consensus on its correct resolution has yet been established despite significant academic interest (*).



Maybe we should try to assign Bayesian probabilities. Obviously, we have p(Fri) = 0 but then it follows that ...



(*) Notice the citation of the famous remark made by Defense Secretary Donald Rumsfeld!


tbfkatbfka...tbfkaTSM



I assume that you have been paying attention and noticed that several weeks ago this blog changed its name to 'the blog formerly known as The Statistical Mechanic', which one may abbreviate as tbfkaTSM.

Yesterday it occurred to me that it is time to change the name once more, this time to 'the blog formerly known as the blog formerly known as The Statistical Mechanic' or short tbfkatbfkaTSM. Of course, thinking ahead, it was clear that a more future proof name would be tbfkatbfka...tbfkaTSM.

But then it dawned on me that tbfkatbfka...tbfkaTSM is actually equivalent to tbfkaTSM in a strange way. And so I had an opportunity to appreciate the axioms of logic, which allow one to compress unnecessarily long statements.

Like the axiom S5 of modal logic. I only mention it because Alvin Plantinga used it in his ontological proof, which is a variant of Anselm's proof.


brains



In yet another post Lubos Motl writes about Boltzmann brains and makes the following argument.



"The Boltzmann Brain hypotheses should already be expo-exponentially suppressed relatively to sane hypotheses. Since the people began to think about the world, they have made so many observations of the ordered real world that had to look like miracles from the Boltzmann Brain viewpoint that whatever the Boltzmann Brain prior were, they were already suppressed essentially to zero."



You have 10 sec to figure out what is wrong with this argument.


tossing biased cyber coins



I decided to test this comment to the previous blog post with a numerical simulation and the result was quite surprising (at least to me).

I wrote a simple C program, which generates n-state HMMs randomly (*) and then runs them N times (generating a sequence HTHH...), followed by another N cyber coin tosses.

The question was whether a 'simple strategy' can predict the bias of the second sequence from the first. In the following, I performed the experiment for n = 10, 20, ...
with N = 100.

The 'simple strategy' predicts a bias for Head if the number of Heads in the first sequence exceeds 60 = 0.6*N and I registered a success of the prediction if the
number of Heads was indeed larger than 50 = 0.5*N in the following sequence.
The experiment was repeated 10000 times for each n and the graph below shows the success rate as a function of n.







Notice that the success rate is well above 50% for n < N, but even for n > N it seems that the first sequence can predict the bias of the second to some extent.
This is quite amazing, considering that for n > N the first sequence cannot even have visited all the possible states of the HMM.

Obviously, as n increases (and N stays fixed) the success rate approaches 50%
and if one does not know n this leads us back to the questions raised in the previous post. But the success rate for n < N and even n > N is much higher than
I would have expected.

The next task for me is to double check the result (e.g. against the literature) and to do some more experiments.



------



(*) An HMM is really described by two tables. One stores the probabilities for H vs. T in each state s = 1, ..., n; the program initializes these probabilities with uniform random numbers between 0 and 1.

The other table stores the n x n transition probabilities p(i,j) to get from state i to state j; my program first assigns uniformly distributed random numbers and then rescales each row so that sum_j ( p(i,j) ) = 1. There is some ambiguity in this procedure
and I guess one could choose a different measure.
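For concreteness, here is a minimal re-implementation sketch of the experiment in Python (the original was a C program; the exact bookkeeping, e.g. what counts as a trial when the 'simple strategy' makes no prediction, and the reduced number of trials, are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_hmm(n):
    """The two tables described above: p(H|state) uniform in [0,1],
    and an n x n transition matrix with rows normalized to sum to 1."""
    p_head = rng.uniform(0.0, 1.0, n)
    trans = rng.uniform(0.0, 1.0, (n, n))
    trans /= trans.sum(axis=1, keepdims=True)
    return p_head, trans

def count_heads(p_head, trans, steps, state):
    heads = 0
    for _ in range(steps):
        heads += rng.random() < p_head[state]
        state = rng.choice(len(p_head), p=trans[state])
    return heads, state

def success_rate(n, N=100, trials=2000):
    wins = predictions = 0
    for _ in range(trials):
        p_head, trans = random_hmm(n)
        state = rng.integers(n)
        h1, state = count_heads(p_head, trans, N, state)
        if h1 > 0.6 * N:                      # 'simple strategy': predict a Head bias
            h2, state = count_heads(p_head, trans, N, state)
            predictions += 1
            wins += h2 > 0.5 * N              # prediction counted as a success
    return wins / predictions if predictions else float("nan")

for n in (10, 20, 50, 100, 200):
    print(n, round(success_rate(n), 3))
```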


homework



In order to illustrate some comments made to my previous post, I suggest
the following homework problem:



We are dealing with a Hidden Markov model with n internal states, which produces as output a sequence of Hs and Ts. We know that it is ergodic (*), we are told that it is biased such that either p(H) > 2p(T) or p(T) > 2p(H) (if it just runs long enough), and we observe a sequence HTHHT... of N outputs.



How big does N have to be, as a function of n, to determine with some confidence whether we are
dealing with one case or the other?

If we do not know n (**), how large would N have to be?

And what does an uninformed prior look like in this case? Will it have to be an improper prior as long as we do not know n, or should we somehow weight with the (inverse of the) complexity of the HMMs?



(*) Every internal state can eventually be reached and every internal state
will eventually be left, but the transition probabilities might be small.



(**) This makes it somewhat similar to the case of the previous blog post, but of course, in the previous post we do not know anything about the underlying model, other than that there is a strong bias.


biased



A simple coin toss, and all we know is that the coin is strongly biased.
What is the probability of getting Head on the first throw?

Since we have no further information to favor Head or Tail, we have to assume
p(Head) = p(Tail) and since p(Head) + p(Tail) = 1 we
conclude that p(Head) = 1/2. Easy as pie.

But kind of wrong, because the only information we
actually do have about the coin is that p(Head) is certainly not 1/2,
because the coin is biased.


esse est percipi, part 4



As I tried to show in the previous blog posts, the interpretation of quantum physics is to a large extent a debate
on how to understand "psychophysical parallelism" and how to assign (our) conscious experience to a wave function and/or its components.
This is one reason why most 'real physicists' usually stay away from this topic.

But if you want to read even more about it, I recommend the following as starting points:



H. D. Zeh: "Epistemological consequences of quantum nonlocality (entanglement) are discussed under the assumption
of a universally valid Schroedinger equation in the absence of hidden variables.
This leads inevitably to a many-minds interpretation."



plato about many minds: "... one might well conclude that a single-mind theory, where each observer has one mind that evolves randomly given the evolution of the standard quantum mechanical state, would be preferable."



Dowker & Kent (also here): "We examine critically Gell-Mann and Hartle's interpretation of the formalism,
and in particular their discussions of communication, prediction and retrodiction,
and conclude that their explanation of the apparent persistence of quasiclassicality
relies on assumptions about an as yet unknown theory of experience."



Bernard d'Espagnat: "The central claim, in this paper, is that the Schroedinger cat – or Wigner’s friend –
paradox cannot be really solved without going deeply into a most basic question,
namely: are we able to describe things as they really are or should we rest content
with describing our experience?"



Last but not least, the webpage of Peter Hankins is not a bad reference for the
traditional discussion of conscious entities and the mind-body problem.


esse est percipi, part 3



The previous blog post (I recommend that you read it first) ended with Sidney Coleman's argument in favor of the 'many worlds interpretation', which really is an argument that a 'collapse'
of the wave function is not necessary to explain our usual conscious experience.
As we shall see there is a problem with that argument.



Max Tegmark provides for a good (and easy to read) explanation of the 'many worlds interpretation' in this paper. In the last section he discusses the issue of 'quantum immortality', considering
a quantum suicide experiment and in my opinion this raises an important question.

In general we have to assume that the wave function
|Y> of You the Observer always contains components associated with a living
human being, even thousands of years in the future and even if classical
physics would describe you as long dead; The wave function never
'collapses' and preserves all components, even those which describe absurd freak events. It is an important question what conscious experience is associated with such states.



But in order to discuss this further I prefer to modify Tegmark's thought experiment
so that the experiment does not use bullets which may kill you, but pills (the "red pill
or the blue pill") which may or
may not contain drugs to knock you unconscious for a while.
We may want to call the thought experiment Schroedinger's Junkies and
instead of his cat we place You the observer in an experiment where
you have to swallow a pill which either contains harmless
water or a strong drug (e.g. LSD), depending on a measurement of the quantum state |s>.

If |s> = a|u> + b|d> we have to assume that after the experiment You are best
described by the wave function |Y> = a|U> + b|D> , where the component |U> means that you
are unharmed, while |D> means that you are heavily drugged.



Again we consider Coleman's operator C, but this time we have to assume that
C|D> = 0 (heavily drugged you will not have a normal conscious experience)
while C|U> = |U>. The problem is that now C|Y> = aC|U> + bC|D> = a|U> and
the state |Y> is no longer an eigenstate of C. In other words, Coleman's consciousness operator indicates that after the experiment You are not in a normal conscious state [*]; This contradicts the fact that for a = b, Schroedinger's Junkies will experience a
normal conscious state in approximately half of the cases (and for a >> b in almost all of them!).
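A toy linear-algebra version of this point (entirely my own illustration; here C is simply taken to be the projector onto |U>):

```python
import numpy as np

# two-dimensional toy Hilbert space: |U> = unharmed, |D> = drugged
U = np.array([1.0, 0.0])
D = np.array([0.0, 1.0])

C = np.outer(U, U)                 # C|U> = |U>,  C|D> = 0

def is_eigenstate(op, psi, tol=1e-12):
    """True if op|psi> is proportional to |psi>."""
    out = op @ psi
    lam = (psi @ out) / (psi @ psi)
    return np.linalg.norm(out - lam * psi) < tol

a = b = 1.0 / np.sqrt(2.0)
Y = a * U + b * D                  # state after the experiment

print(is_eigenstate(C, U))         # True  -> normal conscious experience
print(is_eigenstate(C, D))         # True  -> definite answer (eigenvalue 0)
print(is_eigenstate(C, Y))         # False -> no definite answer for a|U> + b|D>
print(C @ Y)                       # equals a|U>, which is not proportional to |Y>
```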



Does this counterexample to Coleman's argument indicate
that something like a
'collapse' (e.g. decoherence) from the superposition |Y> to either |U> or |D> is necessary after all?

I have to admit that trying to understand quantum physics feels like trying to find the solution to x² + 1 = 0 in the real numbers!





[*] One could argue that there is no problem if the total state is not an eigenstate of C, since the "psychophysical parallelism" of m.w.i. assigns consciousness to the components of the wave function only. However, we can split any component into subcomponents, and even if C|U> = |U>
we can split |U> e.g. into the two subcomponents |U> = ( |U> - |D> ) + ( |D> ), where the first subcomponent is not an eigenstate of C at all,
and the second is an eigenstate with eigenvalue 0, i.e. corresponds to no normal conscious experience.

Coleman's argument seemed to provide for consistency across different ways to split the wave function into components, but indeed it fails in general; In order to rescue "psychophysical parallelism" for m.w.i. one would have to find a preferred basis and it has been argued that decoherence might just do that. However, I have explained earlier why I am not convinced.


esse est percipi, part 2



The previous post referenced a rather crude attempt to use
our conscious experience as the foundation of (quantum) physics.
Usually, consciousness does not even make an appearance in physics
and some sort of "psychophysical parallelism" (different states of a
[human] brain
correlate with different conscious experience) is the only (hidden) assumption.



An interesting example is the notorious measurement problem in quantum physics.
(A slightly related classical example was provided earlier.)

Just to quickly recapitulate the main issue: Assume that a quantum system
|s> can be in two states |u> and/or |d>. An Observer,
initially in the state |I>, subsequently interacts with |s>
in such a way that |u>|I> evolves into |u>|U>, while |d>|I> evolves into
|d>|D>. With |U> we denote an observer who is sure to have observed
the system as |u>, while |D> is the observer in a state with conscious experience of |d>.

The measurement problem arises if we consider the interaction of
this observer with a state |s> in a superposition a|u> + b|d>, which then leads to the entangled state a|u>|U> + b|d>|D>, i.e. to an observer in the superposition a|U> + b|D>; Schroedinger's Cat, Wigner's friend and all that.

The argument can be made much more precise, see here and here, and one does not have to assume
that |U> or |D> are necessarily pure states (and the observer state will in general incorporate entanglement with the environment etc.).



We find this superposition of observer states absurd because
we assign a specific conscious experience to a specific observer state following the assumption of
"psychophysical parallelism"; But
while we know what sort of conscious experience one would assign to |I> , |D> and
|U> , we do not know what conscious state should be assigned to a state
a|U> + b|D>. We would assume it has to be some experience of confusion, a superposition of
consciousness, which we normally do not experience.

At this point physicists introduced various
assumptions about a 'collapse' of the wave function to eliminate such
superpositions of observers, threatened to pull a gun whenever Schroedinger's Cat
was mentioned and, even worse, began a long philosophical debate about various interpretations
of quantum physics.



But following an argument made by Sidney Coleman (in this video clip [at about 40min.]), this is really not necessary. Consider again the wave function |Y> of You the observer and
assume that there is an operator C which tells us if You the observer has
a normal classical conscious experience, so that C|Y> = |Y> if you are in a normal conscious state
and C|Y> = 0 if not [*].

Returning to our measurement problem, we have to assume that C|D> = |D> and also C|U> = |U>,
but then it follows from the linearity of quantum operators that even if
|Y> = a|U> + b|D> we have C|Y> = aC|U> + bC|D> = |Y>. In other words we have to conclude that You the observer has a normal classical conscious experience even in a superposition state after the quantum experiment.



This argument is at the basis of the 'many worlds interpretation' [x], which
assigns a normal classical conscious experience to the components
of the wave function |Y> and then shows that this does not lead to contradictions with
our everyday experience for superpositions of such components. A subtle shift in our
assumptions of "psychophysical parallelism" with drastic consequences.

A 'collapse' of wave functions seems no longer necessary (and the act of pulling a gun would only create yet another superposition of states 8-).





[*] Obviously we do not know what such an operator would look like, but if we believe in "psychophysical parallelism" we have to assume it can be constructed. Notice that if we did not believe in "psychophysical parallelism" then there would not be a 'measurement problem' either.



[x] It is obvious that 'many worlds interpretation' is a really bad name and should be replaced
e.g. with 'many experiences'.


esse est percipi



I would not have thought that Bishop Berkeley was perhaps one of the founding fathers of modern physics.
But we have to consider this:
"In the present work, quantum theory is founded on the framework of consciousness, in contrast to earlier suggestions that consciousness might be understood starting from quantum theory. [..]
Beginning from our postulated ontology that consciousness is primary and from the most elementary conscious contents, such as perception of periodic change and motion, quantum theory follows naturally as the description of the conscious experience."

Could the phenomenalism of e.g. Ernst Mach make a bit of a comeback after all?



I really like the phrase "in contrast to earlier suggestions", which sums up about 250 years of physics as "earlier suggestions". 8-)



In the appendix (section 10) a mathematical model of consciousness is presented, as a process which tries to find the solution to x² + 1 = 0 in the real numbers, which reminds me of
The Confusions of Young Törless.


living with ghosts



" We conclude that quantum gravity with fourth order corrections can make sense,
despite apparently having negative energy solutions and ghosts. In doing this,
we seem to go against the convictions of the last 25 years ..."

Hawking and Hertog, 2001



It is well known that the perturbation theory of quantum gravity is not renormalizable, but one can 'fix' this problem by introducing higher-order terms ( R² ) in the action.
Unfortunately, it is also well known that higher-derivative terms (appear to) come with
dangerous ghosts, threatening the S-matrix with states of negative probability.
However, in their (very clear and easy to read) paper Hawking and Hertog provide a convincing argument that one should not be afraid of such ghosts.



In a related paper Bender and Mannheim, 2007 showed that "contrary to common belief .. theories whose field equations are higher than second order in derivatives need not be stricken with ghosts. In particular, the prototypical fourth-order derivative Pais-Uhlenbeck oscillator model is shown to be free of states of negative energy or negative norm."



Last but not least, Benedetti, Machado and Saueressig, 2009 "study the non-perturbative renormalization group flow of higher-derivative gravity employing functional renormalization group techniques" and argue that "asymptotic safety also resolves the unitarity problem typically haunting higher-derivative gravity theories."



In other words, if (for whatever reason) you don't like string theory, you could try to get used to living with ghosts...


rational or real



Physics as we know it is based on real (or complex) numbers, but it is interesting to ask what (if anything) would change if we consistently replaced real-valued variables with rational numbers. After all, one could make the case that any measurement can only result in rational numbers, and probabilities derived from counting outcomes are rational numbers as well.

Obviously, it would not be an easy change, e.g. one would have to replace differential equations with difference equations (with arbitrary base). One would have to consider if and how it affects e.g. the Lorentz group and one would have to deal with the fact that the rational numbers do not constitute a Hilbert space. In other words, it is not immediately clear that one gains anything using Q instead of its natural completion R.

However, if one thinks it is obvious that using Q instead of R would only complicate things but not make any real difference, I suggest reading this paper: "We explicitly evaluate the free energy of the random cluster model at its critical point for 0 < q < 4 using an exact result due to Baxter, Temperley and Ashley." The authors find that the free energy of the system depends on whether a certain function of the continuous parameter q is "a rational number, and if it is a rational number whether the denominator is an odd integer".

This is one of the weirdest things I have ever seen in statistical mechanics.



update: RZ (see comments) points out that it is not immediately clear from the paper that the numerical value of the free energy is indeed different for rational numbers, just because the form of the free energy function is not the same.
However the sentence "This implies that the free energy
of the random cluster model, if solved, would also share this property" on p.3 would then be highly misleading.

In any case the quantum kicked rotator, also mentioned by RZ, might be a better example of what I had in mind.


theology and probability, part2



In the following we shall finally consider one of the more serious issues.
Similar questions have bothered the serious thinkers for several centuries and perhaps I can finally make an important contribution.
What is the probability for the existence of the invisible pink unicorn?



It may be necessary to clarify a few terms first. With "invisible" we mean here that one cannot test or detect the supreme being with currently available scientific methods. With "pink unicorn" we mean that a true believer may (or may not) experience the supreme being through direct revelation as pink and a unicorn. It is obvious that this belief system is consistent [see footnote 1] and furthermore it seems to be reasonable, as we shall see in the following.



It is also immediately obvious that an atheistic position (which assigns a probability p(i.p.u.) = 0 to the existence of the i.p.u.) is problematic and indeed inconsistent with the usual rules of reasoning.

In general one should not assign a zero prior to a consistent and possibly reasonable model; but furthermore,
if one is certain that the i.p.u. does not exist, then there can be no facts directly related to the i.p.u. and thus no facts on which to base a rational decision that p(i.p.u.) = 0. [2]



The agnostic position, which assigns a probability p(i.p.u.) = A with
0 < A < 1, seems reasonable at first (the A stands for either agnostic or arbitrary); however, it really is problematic as well.

The obvious question is what value A to use, which does not have a good answer. But it gets much worse once we consider the fact that an agnostic (and only an agnostic!) must assign the same probability to the "invisible yellow dragon", the "invisible green parrot", the "invisible red herring" etc.
In other words, the agnostic has to deal with a discrete but obviously infinite set of possibilities, which are a priori equally likely, and assigning any probability A to one of them would mean that the sum

p(i.p.u.) + p(i.y.d.) + p(i.g.p.) + p(i.r.h.) + ... = A + A + A + ...

necessarily diverges instead of adding up to one [4].



This seems to leave us with the true believer, who assigns p(i.p.u.) = 1, as the only one with a consistent and reasonable position [3]. It is also the only one with a chance to gather real evidence through direct revelation.





-----------



[1] In order to better understand that believing in the "invisible pink unicorn" is perfectly consistent, we consider a model which assumes that the world around us with all its possible experiences is just the result of an elaborate computer simulation. The supreme being would then e.g. be the administrator of this simulation and she would be "invisible" to all
the beings in the simulated world. However, she may (or may not) choose to reveal her existence to true believers every now and then by programming the direct experience of a "pink unicorn".
Of course, to have true faith in the "invisible pink unicorn" does not include belief in this particular model, and indeed a true believer would most likely regard it as heresy, since it limits the potential mode of existence of the supreme being.



[2] If one makes the scientific statement that "pink unicorns do not exist", then it really means e.g. that a careful and exhaustive scientific search has not detected "pink unicorns" or that a particular well established theory excludes "pink unicorns". In other words "pink unicorns do not exist" is really a statement e.g. about the scientific search or the well established theory.

However, in the case of the "invisible pink unicorn" such a scientific search is meaningless
and no relevant theory can exist.

Furthermore, the fact that one did not experience the supreme being as a "pink unicorn" through direct revelation is of course irrelevant, because such revelation requires true belief.



[3] True faith in the "invisible pink unicorn" sets the probability for all other possible supreme beings to zero, thus avoiding the problem of the agnostic.

Unlike the atheist, the true believer in the "invisible pink unicorn" can indeed set the probability for the existence of the "invisible green parrot" etc. to zero, p[i.g.p.] = p[i.y.d.] = ... = 0, because she can use the fact of her own strong faith as justification; The atheist has no such fact available.





[4] added later 4/6/09

Yes, I am aware that a true Bayesian may use an improper prior in this case. (I also admit that I learned about it only two weeks ago and it was actually one reason to write this post.)

But how would she update this prior to get to real probabilities?
Notice that reports of revelation will only become available if true believers are around, but
this means Bayesian updating would depend on the existence of people not using the Bayesian method. How can one rely on the testimony of such irrational people?

One could also (try to) argue for a non-uniform prior. E.g. all invisible supreme beings are described as "the invisible X1 X2 X3 ... Xn" using n words X1, X2... , Xn.
It could make sense to weight the probability with the inverse of the complexity of the supreme being, approximated by n, e.g. such that p("the invisible X1 X2 ...Xn") = exp( -f(n) ) and f(n) is chosen so
that the sum of all probabilities converges. But there are a few problems with this.

It is obviously a very crude method to approximate the complexity of a supreme being in such a
way, e.g. some of the descriptions might be of the form "the invisible being which is ... but is
very simple indeed", etc.

Also, notice that the "invisible blue dolphin which likes yellow fish and ..." is known as
"iok Aum" in the Zaliwali language, so it would have a much higher probability for a Zaliwali speaker than for an
English-speaking Bayesian. But of course, probability is about subjective uncertainty 8-)

I guess there are many clever ways a Bayesian could "fix" this problem, but I am afraid from
my point of view it would only get us deeper and deeper into nonsense land.


a quantum of solace?



I cannot deny that the quality of this blog is declining rapidly (actually it is more accurate to say it did not improve as quickly as I hoped for). Thus I changed the title to reflect this sad fact.

In this spirit I shall write a few lines now about quantum gravity and the links I collected recently.



In her last post Sabine asks if quantum gravity has been observed by GLAST/Fermi in the gamma ray burst GRB 080916C. Interestingly, Lubos wrote about the same result(s) but with exactly the opposite conclusion. (Did I mention that the two don't like each other?)

Previously, Peter wrote once again about string theory being useless for ever predicting anything. At about the same time Lubos mentioned a recent paper about 'The footprint of F-theory at the LHC' and concluded that
the validity of string theory is a settled fact and indeed string theory is highly predictive. (Did I mention that the two don't like each other?)

Lubos also discussed Boltzmann eggs, mostly to attack a straw man used to stand in for Sean Carroll and his upcoming book. (Did I mention that the two don't like each other?)

CIP picked up on it and in comments to his post Lubos made some confusing or confused statements, which you may or may not find interesting.



--------------



And even more links.

John Ellis et al. also discussed the GLAST/Fermi and related results.

Craig Hogan proposes that excess noise observed at the GEO600 interferometric gravitational-wave detector could be direct evidence of holographic quantum gravity.

But Igor Smolyaninov thinks that this is unlikely.

Renata Kallosh found an argument for why d=4 N=8 supergravity is finite at all loop orders.

Simon Catterall et al. put N=4 SYM on a 4d lattice (see also here).

Last, but not least, Aaron Bergman asks an important question about fretless guitars.


theology and probability



If you are a member of The Church of Bayes (*) I recommend you read this.

If you are a physicist considering to become a member I recommend you read that.



(*) Wikipedia: Thomas Bayes (1702-1761), British mathematician, statistician and religious leader



------------------------



I wanted to write about a nice illustrative example of Cosma's 1st exercise as the 2nd part to this post. But it has already been done before I could even begin with the typing (I was travelling) and I guess it is much better than what I would have achieved (but I would have left out the 'waterboarding', which is not that funny).

I recommend the comments to Brad DeLong's example if one is interested in seeing some members of the church argue (with each other). But then, perhaps you have something better to do...



The main lesson from all this is very simple. If the set of considered models does not contain the true model then Bayesian updating can go very wrong. But how does a Bayesian know that her process includes the true model without leaving the reference frame of her church?



Of course, we should not expect a true Bayesian to agree that there is a problem (e.g. in the comments to DeLong's post) - after all we know (now) that their procedure does not always converge on the truth...


Ikea chairs



In a review article about agent-based models Dietrich Stauffer once wrote

"Physicists not only know everything, they also know everything better."

In the spirit of "this indisputable dogma" Lee Smolin recently published his thoughts
about "time and symmetry in models of economic markets", advocating to formulate "economics in the language of a gauge theory".

If I would be in a generous mood, I would perhaps only point out that worse stuff has been published on the arxiv and in particular its econophysics section. But I am not and thus I will actually quote some of the deep insights of the author:

"..publishers have a simple motivation to cut costs by only printing the books that will sell,
but it seems very difficult to predict accurately which books will sell and which won’t.
One rule of thumb - to which there are exceptions - is that books that are not in book stores don't
sell."

" Combining space, time and uncertainty, there is then a vast explosion in the number
of goods. Rather than having a particular model of Ikea made chair, we have a vast set of
chairs..."

"Consider the adage, well known to sailors, ”The two happiest days in
the life of a boat owner are the day they buy their boat and the day they sell it.”
Imagine coding this in a utility function." (*)



If one is interested in similar deep insight then this paper offers plenty of it.

If one is actually interested in the application of gauge theory in finance I would recommend
the book of Ilinski and this review.

But if one wants to read a well written critique of econophysics, I would recommend this essay instead.



(*) Actually, I can confirm the adage from personal experience as an obvious truth.


anti-block



Bryan lists the possible philosophical positions on the passage of time.

Currently I think 'anti-block' is the most convincing, but probably I have just been reading too much Ernst Mach recently...


groundhog day



Warning: This blog post is neither entertaining nor informative. I recommend that you just skip it.



Once again we are floating in a Newtonian universe described by its microstate S(t), but of course we are just ordinary observers who do not necessarily know it.
At t0 we perform a little experiment, the equivalent of tossing a coin, with two possible outcomes H(ead) or T(ail); At t0+D we know the result.

The omnipotent demon watches the outcome too and in the case of T reverts the Newtonian universe to
its previous state S(t0-D), which explains the title of this blog post; In the case of H she does nothing.
However, the omnipotent demon is not infinitely patient and therefore the universe loops through the same state(s) only N times; then it continues.

We know about all this, because the omnipotent demon was nice enough to inform us about it in advance.



Now we try to calculate the probability for H and there are two different ways to do it:

i) We do not know the microstate, and given what we know about S(t0) both H and T are equally likely.
The fact that the demon will play some tricks at t0+D does not change anything, thus p(H) = 1/2.

ii) There are the following possible cases in this silly game: this is the 1st time we observe the experiment
and the outcome is H, which we denote as 1H. Then there is 1T and also 2T, 3T, ... NT.

Notice that there cannot be a 2H, 3H etc. because the universe is deterministic: if T was seen the first time,
it must be seen the 2nd time etc.

Since we cannot distinguish between these cases, we have the probability p(H) = 1/(N+1); a small counting sketch follows below.
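For what it is worth, here is the counting of ii) as a little simulation (my own sketch; the per-run prior of 1/2 from i) is used to generate the runs, and each T outcome is experienced N times in total, as in the list of cases above):

    import random

    def demon_game(N, runs=200_000, seed=1):
        # Fraction of observation instances that show H, if a T outcome is
        # experienced N times in total (cases 1T ... NT) while an H outcome
        # is experienced only once (case 1H).
        random.seed(seed)
        heads_instances = 0
        total_instances = 0
        for _ in range(runs):
            # our ignorance of the microstate: H and T equally likely per run
            if random.random() < 0.5:
                heads_instances += 1
                total_instances += 1
            else:
                total_instances += N
        return heads_instances / total_instances

    for N in (1, 2, 5, 10):
        print(N, round(demon_game(N), 3), "vs 1/(N+1) =", round(1 / (N + 1), 3))

Counting runs instead of observation instances gives 1/2, which is just i) again; the thought experiment is about which of the two countings (if any) deserves to be called 'the' probability.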



Obviously, this has some similarity with the sleeping beauty problem. But I am not asking which of the two
probabilities is 'correct' and I am not even interested if this little thought experiment tells us anything
about a relationship between time and probability. I am asking a different question.

How do you understand the limit D -> 0 ?



Told you so...


constraints



The field equations of general relativity can be separated into
hyperbolic evolution equations and elliptic constraints [ADM].
The evolution equations propagate an initial field configuration
'forward' in time, similar to other theories of classical fields.

However, due to the constraints one cannot choose the initial
configuration freely, and this is very different from other classical
fields. In some sense the constraints 'connect' spacelike points, and
thus one could call general relativity 'holistic' if that were not
such an abused word.



We don't really know what the quantum theory of gravitation is,
but one would assume that the classical theory reflects the properties
of the underlying quantum theory, and indeed the Wheeler-DeWitt equation
is nothing but the operator version of one of the constraints.
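For the record, and only schematically (vacuum case, in units where 16*pi*G = c = 1), the ADM momentum and Hamiltonian constraints and the corresponding Wheeler-DeWitt equation read

    \mathcal{H}^i = -2\, D_j \pi^{ij} = 0 ,
    \qquad
    \mathcal{H} = G_{ijkl}\, \pi^{ij} \pi^{kl} - \sqrt{h}\, {}^{(3)}R = 0 ,
    \qquad
    G_{ijkl} = \frac{1}{2\sqrt{h}} \left( h_{ik} h_{jl} + h_{il} h_{jk} - h_{ij} h_{kl} \right) ,

    \hat{\mathcal{H}}\, \Psi[h_{ij}] = 0 ,

where h_ij is the 3-metric on a spacelike slice, pi^ij its conjugate momentum and D_j the covariant derivative of the 3-geometry; the last line is just the operator version of the Hamiltonian constraint acting on the wave functional.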

I think one needs to keep this in mind when discussing thermodynamics
of general relativity, the information loss problem or the entropy of
black holes. E.g. if one specifies the metric near the horizon of a
(near spherically symmetric) black hole, the constraints already
determine the 3-geometry; Therefore I do not find it surprising that
counting microstates provides a holographic result which differs
substantially from the naive expectation.

I would also think that it is misguided to regard an approach to the information
loss problem which emphasizes locality as the 'conservative' one.


the physics of immortality and the direction of time



Recently, Sean wrote a blog post about Frank Tipler, who described in his book,
The Physics of Immortality, what he calls Omega point theory; Wikipedia has enough about it that I do not need to elaborate much further. The main idea in one sentence is that the
'big crunch' of a re-collapsing universe which contains intelligent life (necessarily) generates a
point of infinite complexity, capable of processing an infinite amount of information in a finite amount of time [x]. As I mentioned previously, the book contains a lot of interesting physics, but also large sections comparing the Omega point to the God of various religions, and as a whole the book is a bit odd.



In a section near the end of his book, Tipler discusses quantum gravity, the wave function of the universe and
in particular the boundary condition(s) for such a wave function. The best known example for such a condition
is the no-boundary proposal of Hawking, which corresponds to 'creation from nothing'; A different proposal was
examined by Vilenkin and others [e.g. here].

Tipler proposes as a new boundary condition the requirement of an Omega point. In other words, he replaces the usual initial condition with a final condition on the allowed physical states. In his own words:

"In my above description of the Omega Point Theory, I used past-to-future causation language, which is standard in everyday life, and in most physics papers. This may have given the reader the impression that it is life that is creating the Omega Point (God) rather than the reverse. Nothing could be further from the truth. It is more accurate to say that the Omega Point, acting backwards in time, via future-to-past causation, creates life, and His multiverse."



This is of course a main difference (if not the main difference) between science and religion.

Science assumes an initial condition (usually of high symmetry and low entropy), with everything following
afterwards according to the laws of physics, with no purpose, intention or meaning.

Religion on the other hand assumes that there is a point to the world and our experience, a desired goal and final explanation, which determines everything.

Once Tipler assumes the final Omega point condition, he leaves science as we know it and opens the
door to 'explanations' like this:

"I will say that an event is a "miracle" if it is very improbable according to standard past-to-future causation from the data in our multiverse neighborhood, but is seen to be inevitable from knowledge that the multiverse will evolve into the Omega Point."



While his book 'The Physics of Immortality' is vague enough to suggest that perhaps one may be able to have both, science and religion, the subsequent development of Tipler's thoughts makes it immediately clear where his proposal leads:

"I shall now argue that the Incarnation, the Virgin Birth, and the Resurrection were miracles in my sense. The key to understanding why these events HAD to occur is the recently observed acceleration of the universe."



I will only add that in my opinion Sean's word 'crackpot' is misplaced in this case; I think 'tragedy' would fit better.





[x] According to Tipler, the discovery of an accelerating expansion of the universe (dark energy) does not
necessarily affect his main assumption, as he explains in this interview.


the measurement problem



Shortly after Newton proposed his new mechanics, the "shut up and calculate" approach
of Newton, Halley and others produced the first astonishing results.
However, it did not take long until the foundational debate about the interpretation of the new physics began.
In particular, the true meaning of the position coordinates x(t) was heavily discussed.
The x(t) were of course projections onto a holonomic basis in the 3-dimensional
Euclidean vector space. But how exactly would they be determined in a measurement process?







It came down to measuring distances between point masses (1). But how does one actually measure
such a distance? Suppose we use a ruler in the simplest case (2). We have then only replaced one
distance measurement with two distance measurements, because instead of measuring the distance
between two mass points, we now need to measure the distance of each mass point to the markings on the ruler (3).

Now we could use another two rulers to measure those distances etc. - an infinite regress. (Notice the superposition of rulers at 3!)



There were soon two main groups of opinion. The first was known as the realists, who assumed that the x(t) represented
the real position of a mass point, and even if human beings had trouble comprehending the infinite regress
of the measurement process, the omniscient God would necessarily know it.

A small subgroup proposed that the infinite regress is the position, but could not really explain what this meant.

The other group insisted that the x(t) were only a subjective description of reality but not part of reality itself.
They emphasized the important role of the conscious observer who would terminate the otherwise infinite regress
of the measurement process; This introduced the issue of subjective uncertainty into the debate.



Careful analysis showed that x(t) was only known with finite uncertainty dx and in general this
uncertainty would increase with time. Astronomers noticed that the dx for some planets was larger than the whole Earth!
The realists assumed that there was still one true x(t), even if we did not know it,
while Sir Everett 1st proposed the stunning interpretation that *all* positions within dx were equally real, rejecting the
idea of random measurement errors. The world was really a multitude of infinitely many worlds and the infinite regress of the measurement problem reflected this multitude!

Subsequently, this type of analysis became known as the decoherence program: The position of a mass point can be determined only
if the mass point interacts with other mass points. But this means that in order to reduce the uncertainty dx, one necessarily
increases the uncertainty of the position of all mass points in the environment.

While it was not clear if decoherence really helped to solve the foundational problems, the complicated calculations were
certainly very interesting.



In a devilish thought experiment, a cat was put in a large box and then the lid closed. Obviously the cat would move around
inside the box (some would even suggest that the cat moved around randomly, since no law was known that could determine the
movement of the cat!), but one could not observe it.

The stunning question was, and still is, whether the cat had a position x(t) if one waited long enough.

The realists again insisted that the position of the cat was a real property of the cat, even if it was unknown to everybody.
But others insisted that it made no sense to assign a position, since the rays emitted by the eyes of the observer were not
able to reach the cat; Furthermore, the animal itself had no conscious soul and thus could not determine its own position.



While the "shut up and calculate" approach celebrated many more successes, the foundational issues of the new physics were never resolved.


spin echos



In my previous post about a hypothetical 'reversal of time', I should have mentioned the spin echo effect.
In real experiments, first performed by E. L. Hahn, a configuration of spins evolves from an ordered into a disordered state, but subsequently the initial ordered state is recovered by application of magnetic pulses.
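A minimal numerical caricature of the effect (my own sketch, not a description of Hahn's actual NMR experiment): every spin precesses with its own random but fixed frequency, the transverse magnetization decays as the spins run out of phase, an idealized 'pi pulse' at t = tau inverts all phases, and at t = 2*tau the original signal reappears.

    import numpy as np

    rng = np.random.default_rng(42)
    n_spins = 10_000
    omega = rng.normal(0.0, 1.0, n_spins)      # random, but fixed, precession frequencies
    dt, tau = 0.05, 5.0

    phi = np.zeros(n_spins)                    # all spins start in phase ('ordered')
    signal = []
    t = 0.0
    while t < 2 * tau:
        phi += omega * dt                      # free precession -> dephasing
        t += dt
        if abs(t - tau) < dt / 2:              # idealized pi pulse at t = tau ...
            phi = -phi                         # ... inverts every phase
        signal.append(abs(np.exp(1j * phi).mean()))

    # magnetization: starts near 1, decays towards 0, comes back near 1 (the echo)
    print(round(signal[0], 3), round(min(signal), 3), round(signal[-1], 3))

The 'disorder' in between is only apparent: the full information about the initial state is still sitting in the individual phases.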

The spin echo effect is described in this text (sect. 11) and further discussed here.



Obviously, this effect raises several interesting questions about the foundations of statistical mechanics, e.g. the definition of entropy via coarse-graining; But as noticed by
the Aeolist "many of the terms used in the debate, beginning with the all-important definition of entropy, and including terms like ‘preparation’ and ‘reversal’ (and its cognates), are still used in so many different ways that many of the participants are speaking at cross purposes".

By the way, I did not see a discussion of the 'backreaction' of the ensemble of random spins on the magnet (and its entropy) that is used to trigger the reversal; It may or may not be important in this debate.



A somewhat related model is the swarm of n free particles moving with random but fixed velocities on a ring, as discussed by H. D. Zeh in appendix A to his book about the direction of time.
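Here is my own toy rendering of that kind of model (not Zeh's exact construction): a bunch of free particles on a ring spreads out, so a coarse-grained entropy grows, although all the information is still sitting in the random but fixed velocities; reversing the velocities brings the bunch, and the low coarse-grained entropy, back.

    import numpy as np

    rng = np.random.default_rng(3)
    n, L, T = 2000, 1.0, 50.0
    x = rng.uniform(0.0, 0.01, n)          # start bunched together: low coarse-grained entropy
    v = rng.normal(0.0, 1.0, n)            # random but fixed velocities

    def coarse_entropy(x, bins=20):
        counts, _ = np.histogram(x % L, bins=bins, range=(0.0, L))
        p = counts / counts.sum()
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    x_T = x + v * T                        # free motion on the ring
    print(coarse_entropy(x), coarse_entropy(x_T))   # the coarse-grained entropy has grown ...

    x_back = x_T - v * T                   # ... but reversing the velocities
    print(coarse_entropy(x_back))          # brings the bunch (and the low entropy) back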


thermodynamics



It seems that there is some confusion about several issues in thermodynamics, so the following might be helpful.



1) If a system is not in thermodynamic equilibrium, certain macroscopic quantities may not be well defined, e.g. the temperature as a measure of the mean kinetic energy. However, entropy as a measure of our ignorance about the microstate is in general defined even far away from equilibrium. Otherwise we would not have a meaningful 2nd law of thermodynamics, because dS/dt ~ 0 if a system is in equilibrium.
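To spell out which definition I have in mind here, the usual Gibbs/Shannon expression

    S = -k_B \sum_i p_i \ln p_i

is defined for any probability distribution p_i over the microstates; it reduces to the familiar equilibrium entropy if the p_i are e.g. the canonical weights, but it makes perfect sense far from equilibrium too.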



2) The heat capacity of a gravitating system (Newtonian gravity) is in general negative. As an example consider a star radiating energy away, which will cause it to heat up due to gravitational contraction. This can be confusing, but there is nothing wrong with thermodynamics if one includes Newtonian gravity.

In general, the 0th law does not always hold and things can get funny, but this does not affect the 1st and 2nd laws.
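A standard back-of-the-envelope version of the star example, for a virialized self-gravitating system of N classical particles:

    2\langle K \rangle + \langle U \rangle = 0
    \;\Rightarrow\;
    E = \langle K \rangle + \langle U \rangle = -\langle K \rangle = -\tfrac{3}{2} N k_B T
    \;\Rightarrow\;
    C = \frac{dE}{dT} = -\tfrac{3}{2} N k_B < 0 ,

so radiating energy away (dE < 0) indeed makes the temperature go up.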



3) If we consider Newtonian mechanics carefully, we find that classical matter is not stable (e.g. the energy of point particles attracting each other via a 1/r potential is unbounded from below) and thus no purely classical system can be in thermodynamic equilibrium. This was historically one of the reasons for Bohr to propose his model of the atom, the first version of quantum theory.



4) In general, we do not know how to calculate the entropy of a particular spacetime. There is the proposal of Penrose to equate it with the Weyl curvature; However, there are problems with this proposal.

Things can get quite funny if one considers a spacetime which contains a naked singularity or closed timelike curves. Unfortunately, the current state of the art is still that one has to remove such geometries by hand, on the grounds that things get quite funny otherwise.



5) In quantum theory, if a system is in a pure state, the corresponding entropy is zero. If one assumes that the 'wave function of the universe' was initially in a pure state, it would remain in a pure state, assuming unitary evolution for quantum gravity (as suggested by the AdS-CFT correspondence). There is thus a problem for (some) many worlds interpretations in my opinion.
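The entropy meant here is the von Neumann entropy,

    S(\rho) = -\mathrm{Tr}\,(\rho \ln \rho) , \qquad S(|\psi\rangle\langle\psi|) = 0 ,

and unitary evolution \rho -> U \rho U^\dagger leaves S unchanged, so an initially pure state stays pure.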


backwards or twice as fast



Recently I came across an argument about 'reversal of time' and our conscious experience (I am sure
this type of argument must be at least a hundred years old) and I thought I should mix it with an old
idea of mine. I am curious what others think about it; So here goes:



Imagine that we can describe the world as a Newtonian universe of classical particles, so that
x_i(t), where x_i is the position (vector) of the i-th particle and t is the classical time
parameter, determines the configuration of our world at each moment of time. I am pretty sure that
the following argument can be generalized to a quantum mechanical description, but it is much easier to
stick to Newton for now.



We assume that the world evolves according to the laws of Newtonian physics up until the time t0.
At this moment an omnipotent demon reverses all velocities: v_i(t0) = x'_i(t0) -> -x'_i(t0),
where ' is the time derivative, and the Newtonian evolution continues afterwards.

Obviously, for t > t0 everything moves 'backwards'; If a glass fell on the floor and shattered into many pieces for t < t0,
it will now reassemble and bounce back up from the floor etc.; If the entropy S(t) increased with t for t < t0, it now decreases for
t > t0.

One can also check that x_i(t0+T) = x_i(t0-T) and x'_i(t0+T) = -x'_i(t0-T) for every T (as long
as we rule out non-conservative forces, i.e. as long as the forces depend only on the particle positions).
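One can watch this happen in a toy system. The following sketch is my own (the pairwise harmonic forces are made up, but they depend only on positions, as assumed above): it integrates a handful of particles forward with a time-reversible velocity-Verlet scheme, lets the demon flip the velocities, integrates for the same time again and checks that the initial positions come back.

    import numpy as np

    def accel(x):
        # pairwise harmonic forces: conservative and purely position-dependent
        return -(x[:, None, :] - x[None, :, :]).sum(axis=1)

    def step(x, v, dt):
        # velocity-Verlet; the scheme itself is time-reversible, so the retrace is exact
        a = accel(x)
        v_half = v + 0.5 * dt * a
        x_new = x + dt * v_half
        v_new = v_half + 0.5 * dt * accel(x_new)
        return x_new, v_new

    rng = np.random.default_rng(0)
    x, v = rng.normal(size=(5, 3)), rng.normal(size=(5, 3))
    x0 = x.copy()

    dt, n = 0.01, 1000
    for _ in range(n):             # evolve forward up to t0
        x, v = step(x, v, dt)

    v = -v                         # the demon reverses all velocities at t0

    for _ in range(n):             # the same laws now run the film backwards
        x, v = step(x, v, dt)

    print(np.max(np.abs(x - x0)))  # ~0 up to rounding: x_i(t0+T) = x_i(t0-T)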



The interesting question in this thought experiment is "what would an observer experience for t > t0 ?".

If we assume that the conscious experience E(t) of an observer is a function of x_b(t), where b enumerates
the particles which constitute her brain, then we would have to conclude that the observer does not recognize anything
strange for t > t0, since x_b(t0+T) = x_b(t0-T) and it follows immediately that E(t0+T) = E(t0-T). So if
all the experiences E(t0-T) contained only 'normal' impressions, then the same is true for E(t0+T). In other words, while the sequence of
experiences is 'backwards', no single experience contains the thought "everything is backwards" and nobody feels anything strange.

But this would mean that no observer is able to recognize 'backward evolution' with entropy decreasing and distinguish
it from normal evolution!



One way to avoid this strange conclusion is to assume that E(t) is a function of x_b(t) and v_b(t).
Of course, we do not have a physical description of conscious experiences and how they follow from the configurations of our brain (yet).
It is reasonable that our conscious experience depends not only on the positions of all molecules in our brain but also
on their velocities.

Unfortunately, this leads us into another problem. If we rescale the time parameter, i.e. run the same history 'faster' by a factor s, the configurations at corresponding moments
stay the same while all velocities are multiplied by s, so that E = E[x_b, v_b] -> E[x_b, s*v_b]; But if the function E is sensitive to the v_b then
it would be sensitive to the scale s too. I find this to be quite absurd: our experiences should not depend on an unphysical parameter.
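In formulas: if the sped-up history is defined as \tilde{x}_b(t) := x_b(s\,t), then by the chain rule

    \tilde{v}_b(t) = \frac{d}{dt}\, x_b(s\,t) = s\, v_b(s\,t) ,

so the same configurations are run through, only faster, while every velocity carries the factor s; an E that is sensitive to the v_b is therefore sensitive to s.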



The summary of my argument is the following:

i) If the world evolves 'twice as fast' we should not notice a difference (the molecules
in our brains would move twice as fast as well).

ii) However, if the world suddenly evolves 'backwards' we would like to be able to recognize this (otherwise how would we know whether the 2nd law is correct?).

iii) But it seems that one cannot have both i) and ii) if one assumes that our conscious experience is a 'natural' function of the material configuration
of our brain, e.g. if we follow Daniel Dennett and assume that consciousness simply is the material configuration of our brain: E(t) = [x_b(t)]
or E(t) = [x_b(t), v_b(t)] (*).



Perhaps one can solve this puzzle by assuming E depends on higher derivatives x'' and/or perhaps one can find some
clever non-linear function. But I think this would introduce other problems (at least for the few I tried) and I don't find this very convincing [x].

Of course one can challenge other assumptions too. I already mentioned quantum mechanics instead of Newton or perhaps
we have to assume that our conscious experience is not a function of the particle positions in our brain. But still, none of these
solutions are very convincing in my opinion.

What do you think?



(*) Dennett is never that explicit about his explanation of consciousness.

In general, one could imagine that E is some sort of vector in the 'space of all possible conscious experience' - whatever that means.



[x] E.g. E could depend on v_b/N with N = sqrt( sum_b v_b^2 ) instead of v_b. But where would the non-local quantity N come from, and there would also be a singularity at N = 0, i.e. when all velocities are zero. One would not expect a singularity of E for a dead brain (with all molecules at rest), but rather zero experience.