friends



One reason I do not have a facebook account is that I would probably not have enough friends; most likely my friends would, on average, have more friends than I do.

This is the well-known friendship paradox, just another result of this 'mutant form of math'; it is explained e.g. here and here (pdf).
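To see the effect concretely, here is a minimal sketch in Python; the toy graph is made up for illustration, but any network in which friend counts differ shows the same effect.

```python
# friendship paradox on a made-up toy graph: on average,
# your friends have more friends than you do.
from statistics import mean

# undirected friendship network: person -> set of friends
friends = {
    "a": {"b", "c", "d"},
    "b": {"a"},
    "c": {"a", "d"},
    "d": {"a", "c"},
}

# average number of friends per person
avg_friends = mean(len(f) for f in friends.values())

# for each person, the average friend count among their friends,
# then averaged over all persons
avg_friends_of_friends = mean(
    mean(len(friends[g]) for g in f) for f in friends.values()
)

print(avg_friends)             # 2.0
print(avg_friends_of_friends)  # ~2.42 -- the friends are more popular
```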



-----



Adrian Kent thinks that many worlds interpretations are scientifically inadequate.

He is against many worlds.



In this world.

I wonder what he thinks in the other worlds...



And I wonder if I have on average more friends in the other worlds than in this.

By the way, can a whole world have 'friends'? (*)





(*) Friends have to have something in common. So we define two different worlds or branches of the m.w.i. to be 'friends' if they both contain at least one observer with (almost) the same conscious experience. At any point in time we are then dealing with a 'social network' of worlds, and the above friendship paradox applies: most likely the other worlds will have more 'friends', i.e. be more popular, than our world.
If I believe in the many minds interpretation and assume that conscious experience constitutes reality, does this mean our world is most likely less real than others?


mean mean



warning: this blog, and in particular this blog post, is about a mutant form of math.





Recently, I browsed through Cosma's notebook about large deviations and read that

"The limit theorems of probability theory [..] basically say that averages
taken over large samples converge on expectation values."

and immediately my inner contrarian tried to come up with a simple stochastic process
where the sample mean does not converge.

Of course, this is not very difficult, since there are many examples of processes where
the sample mean is unbounded and does not converge to anything.



It would be much more interesting to find a process where the sample mean is bounded, but 'bouncing around' unpredictably and therefore not converging. In other words, a process where it seems that a 'mean of sample means' exists and yet it does not.



My initial idea was to use the sample mean S of the random variable y itself as a variable of the stochastic process and consider the following:

y(t+1) = -sgn[S(t)] / (|S(t)| + e) + noise

where e (epsilon) is a small number, S is the sample mean S(t) = (y(1) + y(2) + ... + y(t))/t
and the noise term is a (uniformly distributed) random variable between -1 and +1.

The sgn[S] function is defined to be -1 for negative values of S but +1 otherwise, so that sgn(0) = +1.

Notice that we can write S(t) as ((t-1)/t)*S(t-1) + y(t)/t, formally making this a Markov process for (y, S).



If the current sample mean S(t) is a small negative (positive) number, the process will generate y with large positive (negative) values, but if the current sample mean is large then the process will generate small y distributed around zero, forcing the sample mean to lower values. This should make for a nasty little process with a really mean mean.
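Since the following refers to numerical simulations, here is a minimal sketch of the process in Python. This is my own reconstruction, not the original code: the function name simulate, the seed handling, and the shrink flag (which anticipates the e/t modification discussed further down) are all my choices.

```python
# a sketch of the process y(t+1) = -sgn[S(t)]/(|S(t)| + eps) + noise,
# where eps = e stays fixed, or eps = e/t for the modification below
import random

def sgn(x):
    # the convention from the text: sgn(0) = +1
    return -1.0 if x < 0 else 1.0

def simulate(steps, e=1e-6, S0=1.0, shrink=False, seed=0):
    """Return the sample-mean trajectory S(1), ..., S(steps)."""
    rng = random.Random(seed)
    S = S0  # S(0) = 1 instead of 0, as in the simulation described below
    path = []
    for t in range(1, steps + 1):
        eps = e / t if shrink else e
        y = -sgn(S) / (abs(S) + eps) + rng.uniform(-1.0, 1.0)
        # Markov recursion for the sample mean:
        # S(t) = ((t-1)/t) * S(t-1) + y(t)/t
        S = (t - 1) / t * S + y / t
        path.append(S)
    return path

path = simulate(100_000)  # fixed e: converges towards zero (slowly, rate ~ e)
print(path[-1])
```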



Unfortunately, the sample mean of this process still converges (#). And it converges towards zero, as depicted in the following picture, produced by a numerical simulation with e = 10^-6 (and S(0) = 1 instead of 0).

[figure: the simulated sample mean S(t) converging towards zero]

It is not too difficult to understand that the convergence rate is proportional to the value of e, and it is not difficult for a statistical mechanic to fix this problem with a little tinkering: using (e/t) instead of e does the trick.

[figure: the simulated sample mean S(t) of the modified process, bouncing between positive and negative values]

As the numerical simulation depicted in this picture suggests, the sample mean does not converge, but seems to remain bounded (x), bouncing unpredictably between positive and negative values (*).
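Reusing the sketch from above, this modified process corresponds to the shrink=True variant (again my own reconstruction, not the original simulation):

```python
path = simulate(1_000_000, shrink=True)  # e/t instead of e
print(min(path), max(path))  # bounded, roughly by 1/e, but not converging
```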



The obvious question is what the expectation value E[y] is and in which sense S
converges towards it.

Unfortunately, I have to leave it as an exercise for the reader to find an answer and actually prove the (non)convergence of this process.





(#) does it really?



(x) or diverging very slowly?



(*) By the way, the values of y(t) are finite for all finite t, but they are now obviously unbounded. However, this is also true for Gaussian noise and should not really bother us.



added later: I convinced myself of the following:

(#) Yes, for the 1st process (with fixed epsilon e) the sample mean S does indeed converge on E[y] = 0.

(x) No, for the 2nd process (with decreasing e/t) the sample mean S(t) is still bounded (by the inverse of e). Due to the symmetry of the system [the fact that I assume sgn(0) = +1 and not 0 is irrelevant by the way] this indicates that the mean of the S(t) values, i.e. 'the mean of the means', converges on zero, which would be compatible with E[y] = 0.

However, it is also the case that S(t) *cannot* converge on zero: whenever |S(t)| drops below e/t, the next value y(t+1) is of order t/e in magnitude, which kicks the sample mean back to values of order 1/e.


you have to click



If you read this, it is because you clicked. And I know you need some more links, because you have to click. You cannot stop. Here they are: pure, high-quality stuff. Take them, click them.



The Aeolist reviews Sean Carroll's book and concludes that it is "too sloppy to be of much help".

John Baez writes about Algorithmic Thermodynamics and describes a "design for a heat engine powered by programs".

Boris Kosyakov examines the classical physics of a particle at the top of a hill and concludes that it is indeterministic, re-discovering what John D. Norton found out about The Dome.

Last, but not least, an easy-to-read explanation (pdf) of Xia's proof of the existence of non-collision singularities in Newtonian gravity.