mean mean
warning: this blog, and in particular this blog post, is about a mutant form of math.
Recently, I browsed through Cosma's notebook about large deviations and read that
"The limit theorems of probability theory [...] basically say that averages
taken over large samples converge on expectation values."
and immediately my inner contrarian tried to come up with a simple stochastic process
where the sample mean does not converge.
Of course, this is not very difficult, since there are many examples of processes where
the sample mean is unbounded and does not converge to anything.
It would be much more interesting to find a process where the sample mean is bounded but 'bounces around' unpredictably and therefore does not converge. In other words, a process where it seems that a 'mean of sample means' exists and yet it does not.
My initial idea was to use the sample mean S of a random variable y itself as a variable in the stochastic process and consider the following:
y(t+1) = -sgn[S(t)]/( |S(t)| + e ) + noise
where e (epsilon) is a small number, S is the sample mean S(t) = ( y(1) + y(2) + ... + y(t) )/t
and the noise term is a (uniformly distributed) random variable between -1 and +1.
The sgn[S] function is defined to be -1 for negative values of S and +1 otherwise, so that sgn(0) = +1.
Notice that we can write S(t) as ((t-1)/t)*S(t-1) + y(t)/t, formally making this a Markov process for the pair (y, S).
If the current sample mean S(t) is a small negative (positive) number, the process will generate y with large positive (negative) values, but if the current sample mean is large then the process will generate small y distributed around zero, forcing the sample mean to lower values. This should make for a nasty little process with a really mean mean.
Unfortunately, the sample mean of this process still converges (#). And it converges towards zero, as depicted in the following picture,
produced by a numerical simulation with e = 10^-6 (and S(0) = 1 instead of 0).
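For what it's worth, here is a minimal sketch of such a simulation in Python/NumPy. This is my own reconstruction, not the code that produced the plot; the function name simulate and the decay flag (used further below for the second variant) are my inventions.

import numpy as np

def simulate(T, eps=1e-6, decay=False, S0=1.0, seed=0):
    """Run the process y(t+1) = -sgn(S(t)) / (|S(t)| + e) + noise and
    return the trajectory of sample means S(1), ..., S(T).
    With decay=True, e is replaced by e/t (the variant discussed below)."""
    rng = np.random.default_rng(seed)
    S = S0            # S(0) = 1 instead of 0, as in the plot
    total = 0.0       # running sum y(1) + ... + y(t)
    means = np.empty(T)
    for t in range(1, T + 1):
        e = eps / t if decay else eps                     # fixed e, or e/t
        sgn = -1.0 if S < 0 else 1.0                      # sgn(0) = +1 by definition
        y = -sgn / (abs(S) + e) + rng.uniform(-1.0, 1.0)  # uniform noise in [-1, +1]
        total += y
        S = total / t                                     # S(t) = (y(1)+...+y(t)) / t
        means[t - 1] = S
    return means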
It is not too difficult to understand that the convergence rate is proportional to the value of e, and it is not
difficult for a statistical mechanic to fix this problem with a little tinkering.
Using (e/t) instead of e does the trick.
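In the sketch above, this corresponds to switching on the decay flag (again, my naming):

means_fixed = simulate(200_000)               # fixed e: S(t) converges towards 0
means_decay = simulate(200_000, decay=True)   # e/t: S(t) keeps bouncing around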
As the numerical simulation depicted in this picture suggests, the sample mean does not converge, but seems to remain bounded (x), bouncing unpredictably between positive and negative values (*).
The obvious question is what the expectation value E[y] is and in which sense S
converges towards it.
Unfortunately, I have to leave it as an exercise for the reader to find an answer and actually prove the (non)convergence of this process.
(#) does it really?
(x) or diverging very slowly?
(*) By the way, the values of y(t) are finite for all finite t, but they are now obviously unbounded; this is also
true for Gaussian noise, though, and should not really bother us.
added later: I convinced myself of the following:
(#) Yes, for the 1st process (with fixed epsilon e) the sample mean S does indeed converge on E[y] = 0.
(x) No, for the 2nd process (with decreasing e/t) the sample mean S(t) is still bounded (by the inverse of e). Due to the symmetry of the system [the fact that I assume sgn(0) = +1 and not 0 is irrelevant by the way] this indicates that the mean of the S(t) values, i.e. 'the mean of the means', converges on zero, which would be compatible with E[y] = 0.
However, it is also the case that S(t) *cannot* converge on zero: if S(t) were close to zero, the next value y(t+1) would have magnitude of order t/e, which moves the sample mean to roughly -sgn(S(t))/e, kicking it away from zero again.
3 comments:
Hi, I believe you can achieve the same effect much more easily.
Use a process for (y, N) where N(t+1) = 10*N(t) and N(1) = 1, y(t) = N(t)*rand, and rand is either +1 or -1.
Write down the first three or four terms to see what I am talking about.
This gives you E[y] = 0 but the sample mean will not converge.
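[A quick numerical check of the commenter's process; this sketch is mine, using the same NumPy conventions as the code above:]

import numpy as np

rng = np.random.default_rng(1)
T = 20
signs = rng.choice([-1.0, 1.0], size=T)   # rand = +/-1
y = signs * 10.0 ** np.arange(T)          # y(t) = N(t)*rand with N(t) = 10**(t-1)
S = np.cumsum(y) / np.arange(1, T + 1)    # sample means S(t)
# The last term dominates the sum, so S(t) ~ rand * 10**(t-1) / t:
# the sign flips at random while the magnitude explodes.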
Very nice.
I think there are many such processes, but I wanted to find one where the sequence of sample means is bounded, while in this example it obviously bounces between positive and negative values without bound.
In your second example, E[y(t)] grows linearly in t, so I guess it is intuitive that the sample mean doesn't necessarily converge. BTW, the process Y(t) with Y(0) ~ N(0,1) and Y(t) = Y(0) for t >= 1 is a trivial example of a process whose sample mean differs from the expectation value.