the physics of immortality and the direction of time



Recently, Sean wrote a blog post about Frank Tipler, who described in his book,
The Physics of Immortality, what he calls Omega point theory; Wikipedia has enough about it that I do not need to elaborate much further. The main idea in one sentence is that the
'big crunch' of a re-collapsing universe which contains intelligent life (necessarily) generates a
point of infinite complexity, capable of processing an infinite amount of information in a finite amount of time [x]. As I mentioned previously, the book contains a lot of interesting physics, but also large sections comparing the Omega point to the God of various religions, and as a whole the book is a bit odd.



In a section near the end of his book, Tipler discusses quantum gravity, the wave function of the universe and
in particular the boundary condition(s) for such a wave function. The best-known example of such a condition
is the no-boundary proposal of Hawking, which corresponds to 'creation from nothing'; A different proposal was
examined by Vilenkin and others [e.g. here].

Tipler proposes as a new boundary condition the requirement of an Omega point. In other words, he replaces the usual initial condition with a final condition on the allowed physical states. In his own words:

"In my above description of the Omega Point Theory, I used past-to-future causation language, which is standard in everyday life, and in most physics papers. This may have given the reader the impression that it is life that is creating the Omega Point (God) rather than the reverse. Nothing could be further from the truth. It is more accurate to say that the Omega Point, acting backwards in time, via future-to-past causation, creates life, and His multiverse."



This is of course a main difference (if not the main difference) between science and religion.

Science assumes an initial condition (usually of high symmetry and low entropy), with everything following
afterwards according to the laws of physics, with no purpose, intention or meaning.

Religion on the other hand assumes that there is a point to the world and our experience, a desired goal and final explanation, which determines everything.

Once Tipler assumes the final Omega point condition, he leaves science as we know it and opens the
door to 'explanations' like this:

"I will say that an event is a "miracle" if it is very improbable according to standard past-to-future causation from the data in our multiverse neighborhood, but is seen to be inevitable from knowledge that the multiverse will evolve into the Omega Point."



While his book 'The Physics of Immortality' is vague enough to suggest that perhaps one can have both, science and religion, the subsequent development of Tipler's thoughts makes it immediately clear where his proposal leads:

"I shall now argue that the Incarnation, the Virgin Birth, and the Resurrection were miracles in my sense. The key to understanding why these events HAD to occur is the recently observed acceleration of the universe."



I will only add that in my opinion Sean's word 'crackpot' is misplaced in this case; I think 'tragedy' would fit better.





[x] According to Tipler, the discovery of an accelerating expansion of the universe (dark energy) does not
necessarily affect his main assumption, as he explains in this interview.


the measurement problem



Shortly after Newton proposed his new mechanics, the "shut up and calculate" approach
of Newton, Halley and others produced the first astonishing results.
However, it did not take long until the foundational debate about the interpretation of the new physics began.
In particular, the true meaning of the position coordinates x(t) was heavily discussed.
The x(t) were of course projections onto a holonomic basis in the 3-dimensional
Euclidean vector space. But how exactly would they be determined in a measurement process?







It came down to measuring distances between point masses (1). But how does one actually measure
such a distance? Suppose, in the simplest case, we use a ruler (2). We have then only replaced one
distance measurement with two distance measurements, because instead of measuring the distance
between two mass points we now need to measure the distance of each mass point to the markings on the ruler (3).

Now we could use another two rulers to measure those distances etc. - an infinite regress. (Notice the superposition of rulers at 3!)



There were soon two main groups of opinion. The first was known as the realists, who assumed that the x(t) represented
the real position of a mass point; even if human beings had trouble comprehending the infinite regress
of the measurement process, the omniscient God would necessarily know it.

A small subgroup proposed that the infinite regress is the position, but could not really explain what this means.

The other group insisted that the x(t) were only a subjective description of reality but not part of reality itself.
They emphasized the important role of the conscious observer who would terminate the otherwise infinite regress
of the measurement process; This introduced the issue of subjective uncertainty into the debate.



Careful analysis showed that x(t) was only known with finite uncertainty dx and in general this
uncertainty would increase with time. Astronomers noticed that the dx for some planets was larger than the whole Earth!
The realists assumed that there was still one true x(t), even if we did not know it,
while Sir Everett 1st proposed the stunning interpretation that *all* positions within dx were equally real, rejecting the
idea of random measurement errors. The world was really a multitude of infinitely many worlds, and the infinite regress of the measurement problem reflected this multitude!

Subsequently, this type of analysis became known as the decoherence program: The position of a mass point can be determined only
if the mass point interacts with other mass points. But this means that in order to reduce the uncertainty dx, one necessarily
increases the uncertainty of the positions of all mass points in the environment.

While it was not clear if decoherence really helped to solve the foundational problems, the complicated calculations were
certainly very interesting.
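
As a modern aside, the growth of the uncertainty dx mentioned above is easy to see numerically; here is a minimal toy sketch of my own (arbitrary numbers) for free particles whose initial position and velocity are known only with finite precision:

import numpy as np

# An ensemble of free particles whose initial position and velocity are only
# known with finite uncertainty; the position spread dx grows with time.
rng = np.random.default_rng(0)
n = 100000
x0 = rng.normal(0.0, 1e-3, n)   # initial position spread dx ~ 1e-3
v0 = rng.normal(1.0, 1e-3, n)   # initial velocity spread dv ~ 1e-3

for t in (0.0, 1.0, 10.0, 100.0):
    x_t = x0 + v0 * t           # free evolution x(t) = x0 + v0*t
    print(f"t = {t:6.1f}   dx = {x_t.std():.4f}")
# dx grows (asymptotically) linearly with t: late-time predictions become
# arbitrarily imprecise, no matter how small the initial uncertainty.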



In a devilish thought experiment, a cat was put in a large box and then the lid closed. Obviously the cat would move around
inside the box (some would even suggest that the cat moved around randomly, since no law was known that could determine the
movement of the cat!), but one could not observe it.

The stunning question was, and still is, whether the cat had a position x(t) if one waited long enough.

The realists again insisted that the position of the cat was a real property of the cat, even if it was unknown to everybody.
But others insisted that it made no sense to assign a position, since the rays emitted by the eyes of the observer were not
able to reach the cat; Furthermore, the animal itself had no conscious soul and thus could not determine its own position.



While the "shut up and calculate" approach celebrated many more successes, the foundational issues of the new physics were never resolved.


spin echoes



In my previous post about a hypothetical 'reversal of time', I should have mentioned the spin echo effect.
In real experiments, first performed by E. L. Hahn, a configuration of spins evolves from an ordered into a disordered state, but subsequently the initial ordered state is recovered by application of magnetic pulses.

The spin echo effect is described in this text (sect. 11) and further discussed here.
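
Since the effect is easy to mimic numerically, here is a minimal classical sketch of my own (arbitrary numbers, not Hahn's actual pulse sequence): spins precess with random frequencies, the transverse magnetization dephases, and a phase-reversing 'pi pulse' at t = tau makes the signal reappear at t = 2*tau.

import numpy as np

# Toy Hahn echo: N classical spins precess in the xy-plane with random
# frequencies, so the net magnetization dephases; a 'pi pulse' at t = tau
# flips the accumulated phases and the signal refocuses at t = 2*tau.
rng = np.random.default_rng(1)
N, tau = 10000, 5.0
omega = rng.normal(0.0, 1.0, N)        # random precession frequencies

def magnetization(t):
    # phase accumulated up to time t, with the phase reversal at t = tau
    phi = np.where(t < tau, omega * t, -omega * tau + omega * (t - tau))
    return np.abs(np.mean(np.exp(1j * phi)))

for t in (0.0, 2.0, 5.0, 7.0, 10.0):
    print(f"t = {t:5.1f}   |M| = {magnetization(t):.3f}")
# |M| starts at 1, decays towards 0, and returns to ~1 at t = 2*tau = 10.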



Obviously, this effect raises several interesting questions about the foundations of statistical mechanics, e.g. the definition of entropy via coarse-graining; But as noticed by
the Aeolist "many of the terms used in the debate, beginning with the all-important definition of entropy, and including terms like ‘preparation’ and ‘reversal’ (and its cognates), are still used in so many different ways that many of the participants are speaking at cross purposes".

By the way, I did not see a discussion of the 'backreaction' of the ensemble of random spins on the magnet (and its entropy) that is used to trigger the reversal; It may or may not be important in this debate.



A somewhat related model is the swarm of n free particles moving with random but fixed velocities on a ring, as discussed by H. D. Zeh in appendix A to his book about the direction of time.
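
Here is a minimal sketch in the spirit of that model (the parameters and the velocity reversal at t0 are my own choices, not Zeh's): the particles start bunched together, a coarse-grained entropy over position bins rises towards its maximum, and reversing all velocities makes it come back down.

import numpy as np

# n free particles on a ring, initially bunched together, with random but
# fixed velocities. A coarse-grained entropy over position bins rises towards
# its maximum; reversing all velocities at t0 brings it back down again.
rng = np.random.default_rng(2)
n, L, bins, t0 = 20000, 1.0, 20, 30.0
x0 = rng.uniform(0.0, 0.05, n)          # all particles start in a small arc
v = rng.normal(0.0, 0.01, n)            # random, fixed velocities

def coarse_entropy(x):
    p, _ = np.histogram(x % L, bins=bins, range=(0.0, L))
    p = p / n
    p = p[p > 0]
    return -np.sum(p * np.log(p))       # maximum value is log(bins) ~ 3.0

for t in (0.0, 10.0, 30.0, 50.0, 60.0):
    # free motion up to t0, velocity reversal at t0, free motion afterwards
    x = x0 + v * min(t, t0) - v * max(t - t0, 0.0)
    print(f"t = {t:5.1f}   S = {coarse_entropy(x):.2f}")
# S climbs towards log(20) ~ 3.0 and returns to its small initial value at t = 2*t0.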


thermodynamics



It seems that there is some confusion about several issues in thermodynamics, so the following might be helpful.



1) If a system is not in thermodynamic equilibrium, certain macroscopic quantities may not be well defined, e.g. temperature as mean kinetic energy. However, entropy as a measure of our ignorance about the microstate is in general defined even far away from equilibrium. Otherwise the 2nd law of thermodynamics would have little content, because dS/dt ~ 0 once a system is in equilibrium.
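
As a trivial numerical illustration (my own toy numbers): the ignorance entropy S = -sum_i p_i ln p_i can be evaluated for any probability distribution over microstates, equilibrium or not.

import numpy as np

# The ignorance entropy S = -sum_i p_i ln p_i is defined for any probability
# distribution over microstates, whether or not it is a thermal one.
def ignorance_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

equilibrium    = np.full(8, 1 / 8)                          # uniform over 8 microstates
far_from_equil = [0.70, 0.20, 0.05, 0.05, 0.0, 0.0, 0.0, 0.0]
print(ignorance_entropy(equilibrium))      # ln 8 ~ 2.08
print(ignorance_entropy(far_from_equil))   # ~ 0.87, perfectly well defined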



2) The heat capacity of a gravitating system (Newtonian gravity) is in general negative. As an example consider a star radiating energy away, which will cause it to heat up due to gravitational contraction. This can be confusing, but there is nothing wrong with thermodynamics if one includes Newtonian gravity.

In general, the 0th law does not always hold and things can get funny, but this does not affect the 1st and 2nd law.
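
For completeness, here is the standard textbook argument behind the negative heat capacity in 2) (it is not specific to any particular star model): for a bound, self-gravitating system in virial equilibrium one has 2K + U = 0, where K is the total kinetic energy and U is the (negative) gravitational potential energy. The total energy is therefore E = K + U = -K. Since the temperature measures the mean kinetic energy, T ~ K, one gets C = dE/dT ~ -dK/dT < 0: if the star radiates energy away (E decreases), K and hence T must increase, i.e. the star contracts and heats up.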



3) If we consider Newtonian mechanics carefully, we find that no classical system is stable and thus no purely classical system can be in thermodynamic equilibrium. This was historically the reason for Bohr to propose the first version of quantum mechanics.



4) In general, we do not know how to calculate the entropy of a particular spacetime. There is the proposal of Penrose to associate it with the Weyl curvature; However, there are problems with this proposal.

Things can get quite funny if one considers a spacetime which contains a naked singularity or closed timelike curves. Unfortunately, the current state of the art is still that one has to remove such geometries by hand, on the grounds that things get quite funny otherwise.



5) In quantum theory, if a system is in a pure state the corresponding entropy is zero. If one assumes that the 'wave function of the universe' was initially in a pure state, it would remain in a pure state, assuming unitary evolution for quantum gravity (as suggested by the AdS/CFT correspondence). There is thus a problem for (some) many-worlds interpretations, in my opinion.
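
A minimal numerical illustration of this point, using a toy example of my own (two qubits instead of a whole universe):

import numpy as np

# The von Neumann entropy S = -Tr(rho ln rho) of a pure state is zero and
# stays zero under unitary evolution, even though a subsystem of an entangled
# pure state can have nonzero entropy.
def vn_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

# two qubits in the entangled pure state (|00> + |11>)/sqrt(2)
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())
print(vn_entropy(rho))                    # 0: the global state is pure

# unitary evolution rho -> U rho U^dagger keeps it pure
rng = np.random.default_rng(3)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)                    # a random unitary
print(vn_entropy(U @ rho @ U.conj().T))   # still ~0

# ... but the reduced state of the first qubit alone is maximally mixed
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(vn_entropy(rho_A))                  # ln 2 ~ 0.693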


backwards or twice as fast



Recently I came across an argument about 'reversal of time' and our conscious experience (I am sure
this type of argument must be at least a hundred years old) and I thought I should mix it with an old
idea of mine. I am curious what others think about it; So here it goes:



Imagine that we can describe the world as a Newtonian universe of classical particles, so that
x_i(t), where x_i is the position (vector) of the i-th particle and t is the classical time
parameter, determines the configuration of our world for each moment of time. I am pretty sure that
the following argument can be generalized to a quantum mechanical description, but it is much easier to
stick to Newton for now.



We assume that the world evolves according to the laws of Newtonian physics up until the time t0.
At this moment an omnipotent demon reverses all velocities: v_i(t0) = x'_i(t0) -> -x'_i(t0),
where ' is the time derivative, and the Newtonian evolution continues afterwards.

Obviously, for t > t0 everything moves 'backwards'; If a glass fell on the floor and shattered into many pieces for t < t0,
it will now assemble and bounce back up from the floor etc.; If the entropy S(t) increased with t for t < t0, it now decreases for
t > t0.

One can also check that x_i(t0+T) = x_i(t0-T) and x'_i(t0+T) = -x'_i(t0-T) for every T (as long
as we rule out non-conservative forces).



The interesting question in this thought experiment is "what would an observer experience for t > t0 ?".

If we assume that the conscious experience E(t) of an observer is a function of x_b(t), where b enumerates
the particles which constitute her brain, then we would have to conclude that the observer does not recognize anything
strange for t > t0, since x_b(t0+T) = x_b(t0-T) and it follows immediately that E(t0+T) = E(t0-T). So if
all the experiences E(t0-T) contained only 'normal' impressions, then the same is true for E(t0+T). In other words, while the sequence of
experiences is 'backwards', no single experience contains the thought "everything is backwards" and nobody feels anything strange.

But this would mean that no observer is able to recognize 'backward evolution' with entropy decreasing and distinguish
it from normal evolution!



One way to avoid this strange conclusion is to assume that E(t) is a function of x_b(t) and v_b(t).
Of course, we do not have a physical description of conscious experiences and how they follow from the configurations of our brain (yet).
It is reasonable that our conscious experience depends not only on the positions of all molecules in our brain but also
on their velocities.

Unfortunately, this leads us into another problem. Suppose the whole history is run 's times faster', i.e. we replace x_b(t) by x_b(s*t):
the sequence of configurations is exactly the same, only the (unphysical) time parametrization has changed, but all velocities pick up a
factor s, since the time derivative of x_b(s*t) is s*v_b(s*t). But if the function E is sensitive to the v_b then
it is sensitive to the scale s too. I find this to be quite absurd; our experiences should not depend on an unphysical parameter.
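
To make this last step explicit, a toy check of my own (a single degree of freedom x(t) = sin(t)): running the same history s times faster leaves every configuration unchanged but multiplies every velocity by s, so a velocity-sensitive E changes while a configuration-only E does not.

import numpy as np

# Run the trajectory x(t) = sin(t) 's times faster' as x_fast(t) = x(s*t) and
# compare an 'experience' that depends on positions only with one that is
# also sensitive to velocities.
s = 2.0

def x(t):      return np.sin(t)           # original trajectory
def v(t):      return np.cos(t)           # its velocity
def x_fast(t): return x(s * t)            # same history, run s times faster
def v_fast(t): return s * v(s * t)        # chain rule: velocities gain a factor s

def E_pos(x_):        return x_ ** 2            # depends on the configuration only
def E_posvel(x_, v_): return x_ ** 2 + v_ ** 2  # also velocity-sensitive

t = 1.3   # compare the fast world at t with the original world at s*t
print(E_pos(x(s * t)), E_pos(x_fast(t)))                             # identical
print(E_posvel(x(s * t), v(s * t)), E_posvel(x_fast(t), v_fast(t)))  # differ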



The summary of my argument is the following:

i) If the world evolves 'twice as fast' we should not notice a difference (the molecules
in our brains would move twice as fast as well).

ii) However, if the world suddenly evolves 'backwards' we would like to be able to recognize this (otherwise how would we know whether the 2nd law is correct?).

iii) But it seems that one cannot have both i) and ii) if one assumes that our conscious experience is a 'natural' function of the material configuration
of our brain, e.g. if we follow Daniel Dennett and assume that consciousness simply is the material configuration of our brain: E(t) = [x_b(t)]
or E(t) = [x_b(t), v_b(t)] (*).



Perhaps one can solve this puzzle by assuming that E depends on higher derivatives x'' and/or perhaps one can find some
clever non-linear function. But I think this would introduce other problems (at least for the few candidates I tried) and I don't find this very convincing [x].

Of course one can challenge other assumptions too. I already mentioned quantum mechanics instead of Newton, or perhaps
we have to assume that our conscious experience is not a function of the particle positions in our brain. But still, none of these
solutions are very convincing in my opinion.

What do you think?



(*) Dennett is never that explicit about his explanation of consciousness.

In general, one could imagine that E is some sort of vector in the 'space of all possible conscious experience' - whatever that means.



[x] e.g. E could depend on v_b/N with N = sqrt(sum_b v_b^2) instead of v_b. But where would the non-local N come from, and there would also be a singularity at N = 0, i.e. when all velocities are zero. One would not expect a singularity of E for a dead brain (with all molecules at rest) but rather zero experience.