many simulated worlds



I think it makes sense to combine Nick Bostrom's simulation argument with the many worlds interpretation.

While Copenhagen tells us that some outcomes are very unlikely, the m.w.i. assures us that every possible world actually exists. So we can be sure that there are worlds which contain the necessary equipment to simulate your conscious experience - and since there are (infinitely) many different ways to simulate the same experience, we can follow Bostrom's argument to conclude that it is almost certain that you are experiencing The Matrix right now (*).



If you believe the m.w.i., then you must believe that your experience is just a fake (x).




(*) There are N ways in which your experience could be simulated and only one in which it would be real; since you cannot distinguish them, they all have equal probability, and for (infinitely) large N the conclusion follows.
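In symbols (a minimal sketch of this indifference argument, assuming all N + 1 indistinguishable cases get the same weight):

P(\text{real}) = \frac{1}{N+1}, \qquad P(\text{simulated}) = \frac{N}{N+1} \;\longrightarrow\; 1 \quad \text{as } N \to \infty .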





(x) Of course, my argument will probably not convince you, which is just what The Matrix does to you ...



Columbo and his memories




This was recently posted on that other blog, but on second thought it really belongs here, following a proud tradition of confusing thoughts about the arrow of time.




Inspector Columbo enters an empty apartment (*) and finds a dead body on the floor. He measures the body temperature and determines it to be 33C, while the room temperature is 21C. He could now use forensic science, i.e. physics, to predict quite well what will happen going forward. The dead body will continue to cool and a few hours later will reach equilibrium with the room temperature. He could even predict how, over the following days and weeks, this dead body will slowly but surely turn into a decaying corpse.

But he is not really interested in that; he does not want to predict but to postdict what happened earlier. Again, he can use thermodynamics to determine how many minutes earlier the body temperature was 34C, 35C, ... But when his postdiction reaches 37C he must stop. All he can postdict is the time of death; he cannot go beyond that point. Once his calculation reaches 37C he is dealing with a living human being, and from the data he has he cannot postdict what that person was doing. It would not make any sense to continue his calculation - and he would have to stop at 100C anyway.
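For concreteness, here is a minimal sketch of the kind of postdiction he could do, assuming Newton's law of cooling; the cooling constant and the numbers below are made up for illustration, they are not forensic data.

import math

T_room = 21.0   # room temperature in C
T_body = 33.0   # measured body temperature in C
T_live = 37.0   # temperature of a living body; the postdiction stops here
k      = 0.1    # assumed cooling constant per hour (purely illustrative)

def predict(t_hours):
    # Newton's law of cooling: body temperature t_hours into the future
    return T_room + (T_body - T_room) * math.exp(-k * t_hours)

def postdict(T_past):
    # how many hours ago the body had temperature T_past;
    # only meaningful up to 37C, because beyond that point the person
    # was alive and the cooling law no longer applies
    if not (T_body < T_past <= T_live):
        raise ValueError("postdiction only valid between 33C and 37C")
    return math.log((T_past - T_room) / (T_body - T_room)) / k

print(predict(5.0))     # about 28.3C five hours from now
print(postdict(35.0))   # about 1.5 hours ago the body was at 35C
print(postdict(37.0))   # about 2.9 hours ago: the estimated time of death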



In some sense 37C is like a 'singularity' for his postdiction, which he cannot cross - quite remarkable, because we are used to thinking that the past is certain and the future is not, but here it seems to be the other way around. (Actually this case is not that special; in general, physicists are pretty good at making predictions if they have the necessary initial data, but they are not good at all at making postdictions from the same data, which is why they normally don't do it.)



Later on, the clever inspector will find clues to what happened, fingerprints and other evidence - in other words, he will find documents about the past. The truly amazing thing about those documents is that they fit together and tell a coherent story (e.g. the fingerprint on the doorknob is the same as the fingerprint on the knife). Even more amazing is that in the end the killer will confess and tell the inspector what happened, and his memory will match the story reconstructed from those documents.



Which brings us to the final question: Why do we have memories of the past but not of the future? In other words, why does the inspector consider the dead body at the beginning of our story as evidence of a murder (which happened in the past) - but not as a document and memory of the rotten corpse it will be in the future?

Is it because he knows the future better than the past?






(*) There never was such an episode, but we can assume that he solved murder cases not shown on TV. Also, I am aware that he is actually a Lieutenant.



pictures of Columbo



I thought I should illustrate the Columbo story above with two pictures.







Columbo (C) collects documents (*) about the murder case (the black vertical lines). He can easily predict the future of those documents, because they are stable (otherwise they would not be good documents). However, he can postdict the past of those documents only up to a certain point: the moment they were created.

Different documents tell a coherent story; therefore he can assume that they were created by the same event E. But notice that he cannot postdict the state of those documents before E. The future of those documents is known, but their past is uncertain beyond a particular point, and this is what makes them memories of the past event E.



This picture of the 'arrow of time' is somewhat different from the usual image of entropy being low in the past and uncertainty increasing in the future.









(*) 'document' is used in a general sense - a hot cup of coffee in an empty apartment 'documents' that a person was in that apartment not too long ago. We know this because we cannot postdict the temperature of that coffee beyond 100C - so we know somebody had to be there to make it.


still asking the same question(s)



I write this post mostly to show that this blog is still alive ... and still asking the same question(s).



Recently, I read this paper about a numerical study in lattice gravity, trying to distinguish 1st and 2nd order phase transitions. They use and refer to the methods I am familiar with, but I do wonder if this is really the best one can do nowadays.



If one fits, for example, the location of the critical coupling as a function of lattice size, one has to deal with two big problems. First, for a given lattice size the location of the critical coupling is not so well defined (e.g. due to the metastable states associated with 1st order transitions), and there are limits on computation time and resources (*).

Second, how can one be sure that the lattice is big enough to be in the 'scaling region', i.e. big enough that finite-size corrections can be neglected? (If the typical size of the 'bubbles' which come with a 1st order transition is n and the lattice size N is smaller than n, one has a problem.)
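Just to make concrete the kind of fit I have in mind, here is a minimal sketch of a finite-size-scaling fit; the ansatz beta_c(L) = beta_c(inf) + a*L^(-1/nu) is the standard one, but the data points, error bars and starting values below are invented for illustration and have nothing to do with the paper.

import numpy as np
from scipy.optimize import curve_fit

# invented pseudo-critical couplings on lattices of linear size L,
# with rough error estimates (illustrative numbers only)
L      = np.array([4.0, 6.0, 8.0, 12.0, 16.0])
beta_c = np.array([1.92, 1.96, 1.98, 2.00, 2.01])
err    = np.array([0.020, 0.015, 0.010, 0.010, 0.008])

def fss(L, beta_inf, a, inv_nu):
    # finite-size-scaling ansatz: beta_c(L) = beta_c(inf) + a * L**(-1/nu)
    return beta_inf + a * L**(-inv_nu)

popt, pcov = curve_fit(fss, L, beta_c, sigma=err, absolute_sigma=True,
                       p0=[2.0, -0.5, 1.0])
beta_inf, a, inv_nu = popt
print("beta_c(infinity) =", beta_inf, "+/-", np.sqrt(pcov[0, 0]))
print("1/nu             =", inv_nu, "+/-", np.sqrt(pcov[2, 2]))

The fitted exponent is one way to tell the two cases apart: at a 1st order transition the pseudo-critical coupling is expected to shift with the volume, so 1/nu comes out close to the dimension d, while at a 2nd order transition 1/nu is the genuine critical exponent. But of course this inherits both problems mentioned above - the input values of beta_c(L) are only as good as their error bars, and the fit is only trustworthy if the lattices are already in the scaling region.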



So what is the current state of the art, and where are the professional statisticians and their Bayesian stochastic network thingy-ma-jiggies when we need them (x)? Please let me know if you know something.






(*) A related question for the practitioner: Is it better to spend the available computation power on a small number of iterations on a large lattice or is it better to do many iterations on a small lattice?



(x) Speaking of Bayesian thingy-ma-jiggies ...