deep or trivial



I would summarize the previous blog post as follows:

In general, a closed system containing e.g. a physicist and/or a computer cannot predict its
own future, even if we assume deterministic laws of physics (*).

This is quite easy to understand once you think about it;
but is this some deep insight or just trivial stuff?



An equivalent statement would be that, in general, a closed system containing e.g. a physicist and/or a computer cannot
determine or know its own microstate.

Although equally trivial, it has a bit more 'statistical mechanics flavor' to it and might be interesting if one considers foundational
questions about entropy or even more profound quantities.

It also has a certain Zen-like quality...





(*) See also this and that.


Arthur and free will



In recent days and weeks I read an interesting but rather heavy book about
Arthur Schopenhauer, his philosophy and his times. And I think this was the main reason I did not
find the will in me to write something on this blog. I still don't have too much
to offer, except the following silly story...



-----



This story is about a very smart physicist and her simpleton friend. Among other things,
the smart physicist entertained herself by predicting his behavior. This was possible because
they lived in a Dennett-Newtonian universe, where all thoughts and all behavior were
a function of the configuration of molecules constituting a brain, and the movement of these
molecules was deterministic, following Newtonian laws. With all forces well known, the bouncing of molecules was not so difficult to predict.



The physicist had several machines in her laboratory to determine the configuration
of huge numbers of molecules to arbitrary precision, and a supercomputer (also made of Newtonian particles, of course)
to calculate the future configuration of molecules ahead of time.
Since her friend was made of a large but finite number of molecules, all she had to do was, for example,
use her machines to measure the configuration of his molecules at 9am (he did not even notice), feed the result
into her supercomputer and read out the calculation, which predicted his behavior at 10am. And when she determined that
he would say "I am bored, let's go for a walk", this was exactly what happened, like clockwork.
Easy as pie and quite funny.



Unfortunately, she made a mistake. She wanted to show him how smart she was and wrote down her prediction
so he could see it, and for some reason it suddenly failed to work.

Of course, on the one hand it was immediately clear what had happened. As soon as he read that at 10am he would "go to the window
and open it", he opened it earlier, and by 10am he had already closed it again.
There was nothing mysterious about it; actually it was a completely deterministic process, with Newtonian
photons carrying the prediction to his Dennett-Newtonian brain, which was not very complex, but smart enough to simply do
the opposite of what she wrote on the paper. He did it just to prove a point. And of course it was quite irritating.



On the other hand, she did not understand this at all. Her machines could measure the configuration of all
molecules in the room (including herself), and the supercomputer calculated this forward to arbitrary precision. So the calculation 'knew' about the prediction
written on a piece of paper, the Newtonian photons carrying the message, his simpleton brain receiving it and doing the opposite of what was written, etc.



So how could this prediction go wrong? Everything was deterministic! And still, no matter how many times she tried,
her simpleton friend with his simpleton stubbornness did the opposite of what she wrote down. Every time. Was Newton wrong after all? Or Dennett?
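The situation can be stated as a small fixed-point argument. Here is a toy sketch (my own illustration, not from the story; the action names are made up): the friend is a deterministic function of whatever prediction he reads, and for a published prediction to come true it would have to be a fixed point of that function. A contrarian function has no fixed point, so every published prediction is falsified, determinism notwithstanding.

```python
# Toy model of the story: the friend deterministically does the opposite
# of whatever prediction he reads. (Action names are hypothetical.)
ACTIONS = ["open the window", "close the window"]

def friend(published_prediction):
    """Deterministic contrarian: always does the opposite of what he reads."""
    return ACTIONS[1] if published_prediction == ACTIONS[0] else ACTIONS[0]

# A published prediction is correct only if it is a fixed point, i.e.
# friend(p) == p. For a contrarian, no such p exists.
fixed_points = [p for p in ACTIONS if friend(p) == p]
print(fixed_points)  # -> []  (no self-consistent published prediction)
```

Note that the prediction only fails once it is shown to him: as long as the result stays inside the laboratory, the friend's behavior is not a function of it and the clockwork works fine.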



She found a solution, of course, it was easy enough. Get another friend. But still...



-----



If you have a good explanation of what caused this 'failure' of determinism, then please post a comment. The first to solve this silly puzzle will win a Golden Llama award for major contributions to the blogosphere of physics, which includes a free subscription to this blog.

Of course, if you just want to debate the whole thing feel free to post a comment too, or if you want to let me know just how silly this silly story really is.



added: And the winner of the Golden Llama Award is Chris, who pointed out that the problem lies with the supercomputer (trying to) predict its own prediction. (See the comments for more details.)

Congratulations!