tossing biased cyber coins



I decided to test this comment on the previous blog post with a numerical simulation, and the result was quite surprising (at least to me).

I wrote a simple C program which generates n-state HMMs at random (*) and then runs each of them for N steps (generating a sequence HTHH...), followed by another N cyber coin tosses.

The question was whether a 'simple strategy' can predict the bias of the second sequence from the first. In the following, I performed the experiment for n = 10, 20, ... with N = 100.

The 'simple strategy' predicts a bias towards Heads if the number of Heads in the first sequence exceeds 60 = 0.6*N, and I registered the prediction as a success if the number of Heads in the following sequence was indeed larger than 50 = 0.5*N.
The experiment was repeated 10000 times for each n and the graph below shows the success rate as a function of n.
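Here is a minimal sketch of such a simulation in C. It is a simplified reconstruction rather than the exact program: in particular, the starting state of each HMM and the convention that the success rate is counted only over those runs in which the strategy actually makes a prediction are assumptions, since they are left open above. The random initialization follows footnote (*) below.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N      100    /* length of each coin toss sequence */
#define TRIALS 10000  /* experiments per value of n        */
#define MAXN   200    /* largest number of hidden states   */

static double pH[MAXN];          /* probability of Heads in each state */
static double trans[MAXN][MAXN]; /* transition probabilities p(i,j)    */

static double uniform(void) { return rand() / (RAND_MAX + 1.0); }

/* initialize a random n-state HMM as described in footnote (*) */
static void init_hmm(int n)
{
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        pH[i] = uniform();
        for (int j = 0; j < n; j++) { trans[i][j] = uniform(); sum += trans[i][j]; }
        for (int j = 0; j < n; j++) trans[i][j] /= sum;   /* row i sums to 1 */
    }
}

/* run the HMM for len steps starting from *state; return the number of Heads */
static int run_hmm(int n, int len, int *state)
{
    int heads = 0;
    for (int t = 0; t < len; t++) {
        if (uniform() < pH[*state]) heads++;
        double r = uniform(), acc = 0.0;
        int next = n - 1;
        for (int j = 0; j < n; j++) { acc += trans[*state][j]; if (r < acc) { next = j; break; } }
        *state = next;
    }
    return heads;
}

int main(void)
{
    srand((unsigned)time(NULL));
    for (int n = 10; n <= MAXN; n += 10) {
        int predictions = 0, successes = 0;
        for (int t = 0; t < TRIALS; t++) {
            init_hmm(n);
            int state = rand() % n;             /* start in a random state (assumption) */
            int heads1 = run_hmm(n, N, &state); /* first sequence                       */
            int heads2 = run_hmm(n, N, &state); /* second sequence                      */
            if (heads1 > 0.6 * N) {             /* 'simple strategy' predicts a Head bias */
                predictions++;
                if (heads2 > 0.5 * N) successes++;
            }
        }
        if (predictions > 0)
            printf("n = %3d  success rate = %.3f\n", n, (double)successes / predictions);
    }
    return 0;
}

Compile e.g. with gcc -std=c99 -O2; with TRIALS = 10000 the larger values of n may take a while.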

[Graph: success rate of the prediction as a function of n, for N = 100]

Notice that the success rate is well above 50% for n < N, but even for n > N it seems that the first sequence can predict the bias of the second to some extent.
This is quite amazing, considering that for n > N the first sequence has not even encountered all the possible states of the HMM.

Obviously, as n increases (and N stays fixed) the success rate approaches 50%, and if one does not know n this leads us back to the questions raised in the previous post. But the success rate for n < N, and even for n > N, is much higher than I would have expected.

The next task for me is to double check the result (e.g. against the literature) and to do some more experiments.



------



(*) An HMM is really described by two tables. The first stores the probabilities for H vs. T in each state s = 1, ..., n. The program initializes these probabilities with uniform random numbers between 0 and 1.

The other table stores the n x n transition probabilities. My program assigns these by first drawing uniformly distributed random numbers and then normalizing each row, i.e. rescaling the probabilities p(i,j) to get from state i to state j so that sum_j ( p(i,j) ) = 1. There is some ambiguity in this procedure and I guess one could choose a different measure.
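For instance, one alternative would be to draw each row of the transition table uniformly from the probability simplex; here is a minimal sketch of that variant (a hypothetical alternative, not what the program above does), using the fact that normalizing Exponential(1) random numbers gives the flat Dirichlet distribution on the simplex:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Draw one row of transition probabilities uniformly from the probability simplex. */
void random_simplex_row(double *p, int n)
{
    double sum = 0.0;
    for (int j = 0; j < n; j++) {
        double u = (rand() + 1.0) / (RAND_MAX + 2.0); /* uniform in (0,1) */
        p[j] = -log(u);                               /* Exponential(1)   */
        sum += p[j];
    }
    for (int j = 0; j < n; j++) p[j] /= sum;          /* row sums to 1    */
}

int main(void)
{
    double row[5];
    srand((unsigned)time(NULL));
    random_simplex_row(row, 5);
    for (int j = 0; j < 5; j++) printf("%.3f ", row[j]);
    printf("\n");
    return 0;
}

(Link with -lm.) The resulting distribution of rows differs from the one obtained by normalizing plain uniform numbers.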


homework



In order to illustrate some comments made on my previous post, I suggest the following homework problem:



We are dealing with a Hidden Markov Model with n internal states, which produces as output a sequence of Hs and Ts. We know that it is ergodic (*), we are told that it is biased such that either p(H) > 2p(T) or p(T) > 2p(H) (if it just runs long enough), and we observe a sequence HTHHT... of N outputs.
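Note that since p(H) + p(T) = 1, the bias condition simply says that the long-run frequency of Heads is either above 2/3 or below 1/3:

p(H) > 2 p(T) = 2 ( 1 - p(H) )   is equivalent to   p(H) > 2/3,
and likewise p(T) > 2 p(H)   is equivalent to   p(H) < 1/3.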



How big does N have to be, as a function of n, to determine with some confidence whether we are dealing with one case or the other?

If we do not know n (**), how large would N have to be?

And what does an uninformative prior look like in this case? Will it have to be an improper prior as long as we do not know n, or should we somehow weight with the (inverse of the) complexity of the HMMs?



(*) Every internal state can eventually be reached and every internal state
will eventually be left, but the transition probabilities might be small.



(**) This makes it somewhat similar to the case of the previous blog post, but of course, in the previous post we do not know anything about the underlying model, other than that there is a strong bias.


biased



A simple coin toss, and all we know is that the coin is strongly biased.
What is the probability of getting Head on the first throw?

Since we have no further information to favor Head or Tail, we have to assume
p(Head) = p(Tail) and since p(Head) + p(Tail) = 1 we
conclude that p(Head) = 1/2. Easy as pie.

But this is kind of wrong, because the only information we actually do have about the coin is that p(Head) is certainly not 1/2: the coin is biased.


esse est percipi, part 4



As I tried to show in the previous blog posts, the interpretation of quantum physics is to a large extent a debate
on how to understand "psychophysical parallelism" and how to assign (our) conscious experience to a wave function and/or its components.
This is one reason why most 'real physicists' usually stay away from this topic.

But if you want to read even more about it, I recommend the following as starting points:



H. D. Zeh: "Epistemological consequences of quantum nonlocality (entanglement) are discussed under the assumption
of a universally valid Schroedinger equation in the absence of hidden variables.
This leads inevitably to a many-minds interpretation."



plato about many minds: "... one might well conclude that a single-mind theory, where each observer has one mind that evolves randomly given the evolution of the standard quantum mechanical state, would be preferable."



Dowker & Kent (also here): "We examine critically Gell-Mann and Hartle's interpretation of the formalism,
and in particular their discussions of communication, prediction and retrodiction,
and conclude that their explanation of the apparent persistence of quasiclassicality
relies on assumptions about an as yet unknown theory of experience."



Bernard d'Espagnat: "The central claim, in this paper, is that the Schroedinger cat – or Wigner’s friend –
paradox cannot be really solved without going deeply into a most basic question,
namely: are we able to describe things as they really are or should we rest content
with describing our experience?"



Last but not least, the webpage of Peter Hankins is not a bad reference for the
traditional discussion of conscious entities and the mind-body problem.


esse est percipi, part 3



The previous blog post (I recommend that you read it first) ended with Sidney Coleman's argument in favor of the 'many worlds interpretation', which is really an argument that a 'collapse' of the wave function is not necessary to explain our usual conscious experience. As we shall see, there is a problem with that argument.



Max Tegmark provides a good (and easy to read) explanation of the 'many worlds interpretation' in this paper. In the last section he discusses the issue of 'quantum immortality' by considering a quantum suicide experiment, and in my opinion this raises an important question.

In general we have to assume that the wave function |Y> of You the Observer always contains components associated with a living human being, even thousands of years in the future and even if classical physics would describe you as long dead. The wave function never 'collapses' and preserves all components, even those which describe absurd freak events. It is an important question what conscious experience is associated with such states.



But in order to discuss this further I prefer to modify Tegmark's thought experiment so that it does not use bullets which may kill you, but pills (the "red pill or the blue pill") which may or may not contain drugs to knock you unconscious for a while. We may want to call this thought experiment Schroedinger's Junkies: instead of his cat we place You the Observer in an experiment where you have to swallow a pill which contains either harmless water or a strong drug (e.g. LSD), depending on a measurement of the quantum state |s>.

If |s> = a|u> + b|d>, we have to assume that after the experiment You are best described by the wave function |Y> = a|U> + b|D>, where the component |U> means that you are unharmed, while |D> means that you are heavily drugged.



Again we consider Coleman's operator C, but this time we have to assume that C|D> = 0 (heavily drugged, you will not have a normal conscious experience) while C|U> = |U>. The problem is that now C|Y> = aC|U> + bC|D> = a|U> and the state |Y> is no longer an eigenstate of C. In other words, Coleman's consciousness operator indicates that after the experiment You are not in a normal conscious state [*]. This contradicts the fact that for a = b Schroedinger's Junkies will experience a normal conscious state in approximately half of the cases (and for a >> b almost all of them!).
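(In detail: an eigenstate would require C|Y> = c |Y>, i.e. a|U> = c a|U> + c b|D>; since |U> and |D> are independent states this means c = 1 and c b = 0, which is impossible if both a and b are nonzero.)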



Does this counterexample to Coleman's argument indicate that something like a 'collapse' (e.g. decoherence) from the superposition |Y> to either |U> or |D> is necessary after all?

I have to admit that trying to understand quantum physics feels like trying to find the solution to x² + 1 = 0 in the real numbers!





[*] One could argue that there is no problem if the total state is not an eigenstate of C, since the "psychophysical parallelism" of m.w.i. assigns consciousness to the components of the wave function only. However, we can split any component into subcomponents and even if C|U> = |U>
we can split |U> e.g. into two subcomponents |U> = ( |U> - |D> ) + ( |D> ) so that
none of the subcomponents (..) is an eigenstate of C.

Coleman's argument seemed to provide consistency across different ways to split the wave function into components, but indeed it fails in general. In order to rescue "psychophysical parallelism" for m.w.i. one would have to find a preferred basis, and it has been argued that decoherence might just do that. However, I have explained earlier why I am not convinced.


esse est percipi, part 2



The previous post referenced a rather crude attempt to use
our conscious experience as the foundation of (quantum) physics.
Usually, consciousness does not even make an appearance in physics
and some sort of "psychophysical parallelism" (different states of a [human] brain correlate with different conscious experiences) is the only (hidden) assumption.



An interesting example is the notorious measurement problem in quantum physics.
(A slightly related classical example was provided earlier.)

Just to quickly recapitulate the main issue: Assume that a quantum system
|s> can be in two states |u> and/or |d>. An Observer,
initially in the state |I>, subsequently interacts with |s>
in such a way that |u>|I> evolves into |u>|U>, while |d>|I> evolves into
|d>|D>. With |U> we denote an observer who is sure to have observed
the system as |u>, while |D> is the observer in a state with conscious experience of |d>.

The measurement problem arises if we consider the interaction of
this observer with a state |s> in a superposition a|u> + b|d>, which then leads to an observer being in the superposition a|U> + b|D>; Schroedinger's Cat, Wigner's friend and all that.

The argument can be made much more precise, see here and here, and one does not have to assume
that |U> or |D> are necessarily pure states (and the observer state will in general incorporate entanglement with the environment etc.).



We find this superposition of observer states absurd because we assign a specific conscious experience to a specific observer state, following the assumption of "psychophysical parallelism". But while we know what sort of conscious experience one would assign to |I>, |D> and |U>, we do not know what conscious state should be assigned to a state a|U> + b|D>. We would assume it has to be some experience of confusion, a superposition of consciousness, which we normally do not experience.

At this point physicists introduced various assumptions about a 'collapse' of the wave function to eliminate such superpositions of observers; they threatened to pull a gun whenever Schroedinger's Cat was mentioned and, even worse, began a long philosophical debate about various interpretations of quantum physics.



But following an argument made by Sidney Coleman (in this video clip [at about 40 min.]), this is really not necessary. Consider again the wave function |Y> of You the Observer and assume that there is an operator C which tells us whether You the Observer has a normal classical conscious experience, so that C|Y> = |Y> if you are in a normal conscious state and C|Y> = 0 if not [*].

Returning to our measurement problem, we have to assume that C|D> = |D> and also C|U> = |U>, but then it follows from the linearity of quantum operators that even if |Y> = a|U> + b|D> we have C|Y> = aC|U> + bC|D> = |Y>. In other words, we have to conclude that You the Observer has a normal classical conscious experience even in a superposition state after the quantum experiment.



This argument is at the basis of the 'many worlds interpretation' [x], which
assigns a normal classical conscious experience to the components
of the wave function |Y> and then shows that this does not lead to contradictions with
our everyday experience for superpositions of such components. A subtle shift in our
assumptions of "psychophysical parallelism" with drastic consequences.

A 'collapse' of wave functions seems no longer necessary (and the act of pulling a gun would only create yet another superposition of states 8-).





[*] Obviously we do not know what such an operator would look like, but if we believe in "psychophysical parallelism" we have to assume that it can be constructed. Notice that if we did not believe in "psychophysical parallelism" then there would be no 'measurement problem' either.



[x] It is obvious that 'many worlds interpretation' is a really bad name and should be replaced
e.g. with 'many experiences'.


esse est percipi



I would not have thought that Bishop Berkeley was perhaps one of the founding fathers of modern physics.
But we have to consider this:
"In the present work, quantum theory is founded on the framework of consciousness, in contrast to earlier suggestions that consciousness might be understood starting from quantum theory. [..]
Beginning from our postulated ontology that consciousness is primary and from the most elementary conscious contents, such as perception of periodic change and motion, quantum theory follows naturally as the description of the conscious experience."

Could the phenomenalism of e.g. Ernst Mach make a bit of a comeback after all?



I really like the phrase "in contrast to earlier suggestions", which sums up about 250 years of physics as "earlier suggestions". 8-)



In the appendix (section 10) a mathematical model of consciousness is presented as a process which tries to find the solution to x² + 1 = 0 in the real numbers, which reminds me of The Confusions of Young Törless.