the isle of Elba
"Frequentists have long been in a kind of exile when it comes to statistical philosophy. ... This may now be changing."
Deborah Mayo has a blog.
on probability
"We take issue with Kent's arguments against many-world interpretations of quantum mechanics. We argue that his reasons for preferring single-world interpretations are logically flawed and that his proposed singleworld alternative to probability theory suffers from conceptual problems. We use a few thought-experiments which show that the problems he raises for probabilities in multiverses also apply in a single universe."
Guildenstern and Rosencrantz in Quantumland
It seems to me that the debate about many worlds has finally reached the point of asking 'what exactly do we mean by probability?' and I doubt this will be settled any time soon.
To be, or not to be, that really seems to be the question...
harmonic universe
Yet another attempt to reduce the physics of our universe to the harmonic oscillator; An interesting paper and a pleasure to read. But there is a problem with this approach near the end imho. I do not feel that the proposed boundary conditions psi(0) = 0 and psi(infty) = 0 are natural and honestly I have no intuition what would be natural in this context.
I think that this is a general problem whenever one tries to calculate the 'wavefunction of the universe'. While we have a good intuition for boundary conditions in conventional quantum theory, this intuition is lost for quantum cosmology. But of course those boundary conditions determine everything...
neutrinos as tachyons, the Scharnhorst effect
added much later: It turned out that the OPERA results were flawed.
I leave the text below and the comments as testimony of my confusion about all that when the surprising observation seemed somehow possible.
-----
I wonder if neutrinos should travel slightly faster than photons according to standard physics, i.e. quantum field theory as we know it (*).
The reason would be the Scharnhorst effect.
If c denotes the 'bare' speed of light then real photons should actually travel a tiny bit slower than c, due to the interaction with virtual particles in a 'real vacuum': c(physical) < c. The bulk of the effect comes from interactions with virtual electrons and positrons (see e.g. this paper for more details).
But neutrinos interact only weakly with those and therefore the velocity of neutrinos should be closer to c; In other words neutrinos would be slightly faster than photons.
I am not sure if the OPERA experiment detected anything real, but if the results are indeed confirmed one should take a second look at the Scharnhorst effect imho; This time considering the difference between neutrinos and photons instead of looking at photons between Casimir plates.
with credit to A.M., a friend of mine and yet another quant interested in physics, who reminded me of this effect, which was discussed already in the 1990s. But if this is all b.s. the embarrassment is of course fully mine.
added later: A very simplified calculation shows that the Scharnhorst effect could have the right order of magnitude!
According to the Scharnhorst paper, equ. 10, the increase of the speed of photons between Casimir plates is to first order approximately
1 + 0.01*alpha^2/(mL)^4, where m is the electron mass and L the distance between the plates.
As we move the plates closer and reduce L (in a thought experiment) we eliminate more of the interaction with the virtual particles and the speed of photons gets closer to the 'bare' speed of light c.
However, we cannot reduce L below 1/m (in this approximation) and thus the Scharnhorst formula
gives a maximum correction
c = c(bare) = c(physical)*( 1 + 0.01*alpha^2 ).
If we assume that neutrinos travel at a speed close to the 'bare' speed c, then they would travel faster than photons in the 'real vacuum' by a factor of approximately 1 + 10^-6.
This is pretty much what OPERA measured.
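For completeness, the trivial arithmetic behind this estimate (nothing more than plugging in alpha ≈ 1/137):

```python
# trivial check of the maximum Scharnhorst correction quoted above
alpha = 1 / 137.036               # fine-structure constant
max_correction = 0.01 * alpha**2  # correction factor for L ~ 1/m
print(max_correction)             # ~ 5e-7, i.e. a factor of roughly 1 + 10^-6
```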
(*) added even later: Heather Logan thinks this explanation cannot work and I changed the sentence to reflect that this is not 'standard' opinion; After all she is a physics professor and my last lesson in QED was about 20 years ago. On the other hand I don't see where the argument goes wrong. Obviously I appreciate any input.
added one day later: After some more thinking and reading I can now better formulate this idea as follows:
i) In QED the Ward identities ensure that after quantization of the electromagnetic field the longitudinal polarization of the photon vanishes.
ii) This implies (by comparison with classical fields?) that the photon mass remains zero and the velocity of photons is the light speed c.
iii) On the other hand, the Scharnhorst effect suggests that photons travel at a 'dressed' speed d and one can increase (in principle) d between Casimir plates (see all the above).
iv) The standard interpretation is that d = c to keep QED simple, but one has then to explain away the acausality from photons travelling at increased speed between Casimir plates.
v) In light of the OPERA experiment, the proposal is to assume d < c and (because the Ward identities hold) for all possible experiments the longitudinal polarization of the physical photon still vanishes, so that it would behave like a massless particle, although it travels slightly slower than the 'true' speed of light c. (By the way, I am not the first thinking along such lines.)
vi) The neutrino would travel very close to the 'true' speed of light c and thus slightly faster than photons. The estimate above suggests that the effect could be in the right ballpark; see the comments for more.
neutrinos as tachyons
It is not surprising that there is a debate about the detection of faster-than-light neutrinos before that discovery is even announced...
It is also not surprising that there are already several arguments in favor of tachyonic neutrinos, even if there are good reasons to doubt that idea (*).
What I do find surprising is that slashdot and other news outlets are so far silent about this threat to the well-being of grandfathers.
added later: They are no longer silent and OPERA's paper is here...
... as bonus material, let me mention an old joke from my home country, where it is known for a long time that bureaucrats are made from tachyons. How did we know that sometimes they travel faster than light? Because official working hours end at 4pm, but they are often at home already at 1pm.
(*) It seems to me that tachyonic neutrinos could easily resolve the information loss problem and would let us "see" the interior of black holes. They would also explain how information can be retrieved from baby universes and thus support a silly idea of mine. Obviously, this should count in favor of the tachyonic neutrinos. Simple question at this point: So what Bayesian prior should one use???
Eppley and Hannah
Every now and then somebody asks if it is really necessary to find a quantum theory of gravitation. After all, it is most likely not possible to detect single gravitons, following an argument of Freeman Dyson (because one would need a detector of planetary size for it).
Of course there are many good reasons why one would like to find a way to quantize gravity like all other fields [1, 2, 3]. But I was never worried about this sort of debate, because I knew that there was a thought experiment, published in the '70s or '80s, which settled this issue once and for all: Consistency requires that gravitation must be quantized. I remember that I read the argument and that I found it convincing at the time.
Recently, I was asked about this whole issue and I mentioned the thought experiment and that paper. Finally I promised that I would dig out the reference and I actually did.
K. Eppley and E. Hannah, Found. Phys. 7, 51 (1977)
The reason it was relatively easy to find the reference was that almost thirty years later somebody checked the argument and found that it was flawed. The problem is that the thought experiment asks for a detector so large and heavy that it cannot be built, somehow closing the circle back to Dyson's argument.
order!
Continuing a previous theme, let me ask this: So why don't they put their house in order? Seriously.
via Cosma
many verses
The main motivation of no-collapse interpretations is to emphasize that the unitary Schroedinger evolution is all there is.
In order to reconcile this with our experience a description considering 'many worlds' or 'many minds' and the 'splitting' of the universe into different 'branches' is often used:
The physicist does not kill the cat but really creates two worlds, and repeating this experiment doubles the number again and again.
It seems that the number N of 'branches' or different 'minds' etc. increases like N(t) ~ w^t, where w is some unknown constant. Now one can calculate something like an entropy S from it as S = log(N) and therefore S ~ t; time really is 'm.w.i. entropy'.
But how does one reconcile all this with the time-reversible Schroedinger evolution?
asymptotic safety
Georg v. Hippel blogs about the Lattice 2011 conference and mentions a talk by Jack Laiho on Asymptotic Safety and Quantum Gravity.
There is already a paper about that on the arXiv. The main idea is that three coupling parameters are needed in the lattice model and they consider dynamical triangulation in the Euclidean sector with an additional measure term, which comes with the third coupling parameter. The hope is to find a tri-critical point (with 2nd order phase transition) as a candidate for the continuum limit in the asymptotic safety scenario.
balls and urn
Consider an urn holding N balls which are either white or black. Whenever I take out a ball it is replaced randomly with either a white or black ball,
with fixed probability p for the replacement ball being white.
I have been playing this game for quite a while (so that the start configuration no longer matters) and now I take out n white balls in a row (n < N).
What is the probability that the next ball is also white?
If p is (near) 1/2 then one could make these two arguments:
1) If we take out a sequence of n white balls, it indicates that there are probably more white balls in the urn than black balls (due to a random fluctuation in the replacement process), so the next ball is most likely also white: P(white) > P(black).
2) If we take out n white balls, the ratio of white to black necessarily decreases, so it is more likely that the next is actually black: P(black) > P(white).
What do you think? And does it make a difference if we actually know p and N ?
added later: I have posted the solution now as a comment, but I have to warn you that this is fun only if you really try to find an answer yourself first.
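If, after trying it yourself, you want to check your answer empirically, a minimal Monte Carlo sketch (with arbitrary example values for N, p and n) could look like this:

```python
import random

def estimate(N=20, p=0.5, n=5, draws=2_000_000, burn_in=100_000):
    """Estimate P(next ball is white | the last n draws were all white)."""
    urn = [random.random() < p for _ in range(N)]   # True = white ball
    streak, trials, whites = 0, 0, 0
    for t in range(burn_in + draws):
        i = random.randrange(N)
        ball = urn[i]                    # draw a ball at random ...
        urn[i] = random.random() < p     # ... and replace it (white with prob. p)
        if t >= burn_in and streak >= n:
            trials += 1                  # the previous n draws were all white,
            whites += ball               # so this draw is one of our test cases
        streak = streak + 1 if ball else 0
    return whites / trials

print(estimate())
```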
decoherence
One can still quite often read or hear the argument that decoherence solves the measurement problem and therefore further discussion of interpretations is unnecessary.
Fortunately, I can keep this blog post short with a link to this paper:
Stephen Adler, Why Decoherence has not Solved the Measurement Problem.
-----
If one wants to read about the role of decoherence within different popular interpretations, I recommend this paper:
Maximilian Schlosshauer, Decoherence, the measurement problem, and interpretations of quantum mechanics
It notices that "Decoherence adherents have typically been inclined towards relative-state interpretations ... It may also seem natural to identify the decohering components of the wave function with different Everett branches." and it then proceeds to discuss two important open issues of that interpretation: the preferred-basis problem and the problem with probabilities in Everett interpretations.
If one wants to go down that route I recommend this paper for further reading.
the Coleman argument
In 1994 Sidney Coleman gave the lecture 'Quantum Mechanics in Your Face' which was recorded and the video is available here. Near the end he makes an argument, actually attributed to Dave Albert, which is nowadays often used in debates about the meaning of quantum theory (usually people just link to the video without discussing the argument(s) much further).
But, as we shall see, it does not really work the way people think it does.
Consider a quantum system (e.g. electron in a Stern-Gerlach apparatus) which evolves from an initial state |i> into a superposition |a> + |b> (*). An observer (named Sidney) makes a measurement and evolves from an initial state |I> into a superposition |A> + |B>. How can we reconcile this superposition of observer states with our everyday normal conscious experience?
Well, consider all the states |Sj> of 'Sidney with a normal conscious experience', where j supposedly labels all the different conscious experiences Sidney can have. All those states |Sj> are eigenstates of the 'consciousness operator' C so that
C|Sj> = |Sj>.
It is clear that |A>, which here means 'Sidney observed |a>', is an eigenstate of C and also |B> is an eigenstate of C,
C|A> = |A> and C|B> = |B>.
But it follows immediately that |A> + |B> is then also an eigenstate of C,
C( |A> + |B> ) = ( |A> + |B> ), from the linearity of the quantum operator C.
This would mean that the superposition of Sidney states does not really cause a problem; The superposition still corresponds to a 'normal conscious experience'.
So it seems that there is no 'measurement problem' as long as we stay fully within quantum theory. Therefore this argument is very popular with people who want to end the interpretation debate, while I think it may be a good starting point to begin a serious discussion.
In order to see that something is wrong with the Coleman argument, consider that there must be states |Uj> of Sidney where he does not have a 'normal conscious experience', e.g. because he is asleep, drunk or worse.
Obviously |Uj> cannot be an eigenstate of C with eigenvalue 1; taking C to be the projector onto the states of 'normal conscious experience', we have C|Uj> = 0.
The problem is that we have to assume that the initial state of Sidney |I> does not just evolve into the superposition |A> + |B> but it will also contain some components |Uj>, because even for a healthy person there is a small but non-zero possibility to fall into an unconscious state. But as one can check easily, the state
|A> + |B> + |Uj> is certainly not an eigenstate of C and due to the superposition of states Sidney will in general never be in such an eigenstate of C. This would mean that Sidney never has a 'normal conscious experience'.
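One can make this completely concrete with a toy model in which C is simply the projector onto the 'conscious' basis states; a minimal sketch (the labels and the size of the unconscious admixture are of course arbitrary):

```python
import numpy as np

# toy Hilbert space with basis |A>, |B>, |U>:
# 'Sidney saw a', 'Sidney saw b', 'Sidney unconscious'
A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
U = np.array([0.0, 0.0, 1.0])

# 'consciousness operator' C as the projector onto the conscious states |A>, |B>
C = np.outer(A, A) + np.outer(B, B)

superposition = (A + B) / np.sqrt(2)
print(np.allclose(C @ superposition, superposition))   # True: still an eigenstate

eps = 1e-3                                   # small unconscious admixture |U>
psi = A + B + eps * U
psi = psi / np.linalg.norm(psi)
print(np.allclose(C @ psi, psi))             # False: no longer an eigenstate of C
```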
Obviously, there must be a problem somewhere with this whole argument and I leave it as an exercise for the reader to find it and post it as a comment 8-)
(*) I leave out the necessary normalization factors of 1/sqrt(2) etc. in this text and the reader has to assume that these factors are absorbed in the state vectors. E.g. in the state |A> + |B> + |U> we assume that |U> comes with a very small weight/probability, but I do not make this small pre-factor explicit.
modified gravity?
Lubos suggests an explanation (or actually a replacement) for MOND, which sounds like entropic gravity to me (*). Did he not recently explain to us why such explanations have to be wrong?
The purpose of his proposal is to explain observations which suggest that gravity changes at low accelerations a < a0 = 1.2 x 10^-10 m/s^2 and it is based on the idea that a0 could be the inverse size of the visible universe (times c).
(*) I should make it clear that Lubos never mentions 'entropic gravity' in his post, but how would a sentence like "The existence of this center on the hologram may be needed for the usual Kepler scaling laws to emerge." make any sense otherwise? See e.g. this paper on how MOND was derived previously from 'entropic gravity' using "a holographic principle".
added later: When I asked explicitly in a comment, he insisted that his proposal has nothing to do with "the crackpottery called entropic gravity". Alright, I will admit then that I do not understand what he is talking about and leave it to others to sort it out. Feel free to leave a comment to enlighten me. I will leave this post up as it is, because the links could be useful to others.
Perhaps I should make it clear that I am (still) not convinced 'entropic gravity' and/or MOND make any sense.
added even later: While Lubos focuses directly on the quantity a0 = c/T, with T being the age of the (visible) universe, I think it would be more natural to consider the energy E = h/T. The associated temperature is E/k, which is on the order of 10^-28 kelvin and comparable to the critical temperature used to derive MOND in the paper linked above; In fact plugging h/(kT) into equ. (12) gives a0 = c/T!
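A quick numerical check of the scales involved (taking T to be roughly 13.8 billion years):

```python
# quick check of the scales mentioned above (T ~ age of the visible universe)
c = 2.998e8            # m/s
h = 6.626e-34          # J s
k = 1.381e-23          # J/K
T = 13.8e9 * 3.156e7   # ~ 4.4e17 seconds

print(c / T)           # ~ 7e-10 m/s^2, the acceleration scale c/T
print(h / (k * T))     # ~ 1.1e-28 K, the temperature E/k
```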
added several hours later: Obviously, there is a straightforward way to test this type of proposal. If one looks into the night sky one sees galaxies at different age T of the universe and the deviation from Newtonian dynamics should be stronger the further back in time one looks.
Perhaps there is already enough statistics of galaxy rotation curves to check this.
added much much later: As a counterpoint to all this speculation, a paper [pdf] about an experiment which "finds good agreement with Newton’s second law at accelerations as small as 5 x 10^-14 m s^-2".
statistical theology
This post is part of a series about important theological issues [1, 2, 3, 4] and this time I want to emphasize the importance of Benford's law in statistical theology. In the recent paper "Law of the leading digits and the ideological struggle for numbers", it was used in religious demography research:
"We investigate the country-wise adherent distribution of seven major world religions i.e. Christianity, Islam, Buddhism, Hinduism, Sikhism, Judaism and Bhah'ism to see if the proportion of the leading digits conform to the Benford's law. We found that the adherent data on all the religions, except Christianity, excellently conform to the Benford's law."
What does He want to tell us with this exception?
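For anyone who wants to check such claims themselves: conformity with Benford's law just means comparing the observed leading-digit frequencies with log10(1 + 1/d). A minimal sketch with made-up numbers (not the adherent data from the paper):

```python
import math
from collections import Counter

def leading_digit(x):
    return int(str(abs(x)).lstrip("0.")[0])

def benford_check(values):
    counts = Counter(leading_digit(v) for v in values)
    n = len(values)
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)        # Benford proportion for digit d
        print(d, round(counts[d] / n, 3), round(expected, 3))

# made-up example numbers, *not* the adherent data from the paper
benford_check([3120, 184, 97, 2450, 1032, 559, 18, 7, 860, 1430])
```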
lattice gravity
You may have noticed the link to quantum gravity on the left hand side of this blog, below the picture of the plastic bag; It leads to several blog posts about "lattice gravity" as well as posts about quantum gravity in general.
If you want to know more what "lattice gravity" is all about, you can browse some of the references provided here; Another good starting point would be Renate Loll's review (in particular sect. 3) and for even more motivation I recommend this paper.
One warning: It is very much possible, and actually quite likely, that these models have nothing to do with real (quantum) gravity. In fact there are several good arguments why a lattice approach can never work. In other words, it is very much possible that this will turn out to be a waste of time.
Just another reason it makes for a good topic on this blog...
is AdS unstable?
"We study the nonlinear evolution of a weakly perturbed anti-de Sitter (AdS) spacetime by solving numerically the four-dimensional spherically symmetric Einstein-massless-scalar field equations with negative cosmological constant. Our results suggest that AdS spacetime is unstable under arbitrarily small generic perturbations."
Piotr Bizoń, Andrzej Rostworowski
Notice that this study was done in 3+1 dimensional AdS4, but the authors claim (in the conclusion) that they observed "qualitatively the same behavior" for the 4+1 dimensional AdS5.
But with all the activity around AdS/CFT, it is hard to believe that nobody checked the stability of AdS in classical GR before.
the not-so-equivalent principle
I posted this puzzle a while ago on wbmh, but subsequently it got deleted and I think it is worth reposting it here. Please write a comment if you have an answer (or remember the discussion on wbmh) and win the famous golden llama award.
Perhaps you have seen the video clip of astronaut David Scott dropping a hammer and a feather on the surface of the moon, repeating Galileo's famous (thought)experiment.
But strictly speaking in a high precision experiment the two objects will in general not hit the moon at the exact same time.
Why is that?
---
Let me clarify a few things.
We assume that the shape of the objects makes no difference (we assume the spherical cow approximation is valid) and the surface of the moon is perfectly smooth. Further we assume that there is no trace of an atmosphere on the moon and no electrical charges (attached to the objects). We assume that the presence of the astronaut (and his gravitational field) can be neglected and we assume the sun, earth and the other planets are sufficiently far away. We ignore quantum theory and assume that the many-worlds interpretation and any other conspiracy theories can be neglected...
added later: Furthermore we assume that all objects and observers move slowly compared to the speed of light, so that we can use the standard St. Augustine definition of simultaneity.
update: Akshay Bhat is the proud winner of the famous golden llama award ...
... he will enjoy a free subscription to this blog for a whole year. Congratulations!
If you want to see my own solution to this puzzle click here.
hammer and feather
In this puzzle we consider a high precision re-enactment of the famous Galileo (thought)experiment and the claim is that hammer and feather dropped simultaneously from a height h will not hit the ground at exactly the same time.
In order to see this, we increase the height h and consider the full 3-body problem with feather (F), hammer (H) and moon (M) approximated as spheres (the famous spherical cow approximation).
Next we increase the distance between F and H and we increase the mass of the hammer H significantly. Therefore the moon will move towards H by a certain displacement a and thus the hammer has to travel the distance h - a until it collides with the moon M, while the feather F has to travel the increased distance sqrt( h^2 + a^2 ), which suggests that F will indeed collide with M slightly later than H.
But we have no full proof yet (notice that the feather is attracted by moon+hammer while the hammer is slightly less attracted by moon+feather and this could compensate for the different distances).
So in order to obtain full proof of our claim (without using too much math) we move the hammer H even further away from the feather F and we increase its mass until it exceeds the mass of the moon M significantly (perhaps it is easier to decrease the mass of the moon until it is more like a hammer).
But now we have transformed this thought experiment into a configuration where F and M are dropped on H, but from very different heights h and 2h. In other words, comparing with the original configuration, the assumption that the three bodies will collide at the same time is disproved by reductio ad absurdum.
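Here is a crude numerical illustration of the claim: a toy two-dimensional three-body integration in arbitrary units, with the hammer made unrealistically heavy so that the difference in arrival times is easy to resolve. Which body lands first depends on the chosen geometry; the point is only that the two times are not exactly equal.

```python
import numpy as np

G, R = 1.0, 1.0                          # gravitational constant, moon radius
m = np.array([1.0, 0.1, 1e-9])           # masses: moon, hammer, feather
h, d = 0.5, 0.8                          # drop height, horizontal separation
pos = np.array([[0.0, 0.0],              # moon at the origin
                [-d / 2, R + h],         # hammer
                [ d / 2, R + h]])        # feather
vel = np.zeros_like(pos)

def acceleration(pos):
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * m[j] * r / np.linalg.norm(r) ** 3
    return acc

dt, t = 1e-4, 0.0
hit = {"hammer": None, "feather": None}
acc = acceleration(pos)
while None in hit.values() and t < 10.0:
    pos = pos + vel * dt + 0.5 * acc * dt ** 2    # velocity Verlet step
    new_acc = acceleration(pos)
    vel = vel + 0.5 * (acc + new_acc) * dt
    acc, t = new_acc, t + dt
    for name, k in (("hammer", 1), ("feather", 2)):
        if hit[name] is None and np.linalg.norm(pos[k] - pos[0]) <= R:
            hit[name] = t                         # body k reached the moon's surface

print(hit)   # the two arrival times come out slightly different
```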
I would like to make three more remarks:
1) The equivalence principle is an idealization (notice that in the general theory of relativity we consider test bodies to have infinitesimal mass and distances are small compared to the radius of curvature).
2) If we make the spheres small enough they will in general not collide at all (I leave it as an exercise for the reader to run a 3-body simulation and check this claim), except for symmetric initial configurations like the one in the last picture.
3) The contemporary opponents of Galileo could have made this reductio ad absurdum to counter his argument and discredit his physics. It is interesting to contemplate how science would have progressed in this case ...
the greatest intellectual failure
We do know how to do the calculations, we can determine the probabilities for various experiments, real or imagined. In fact, an amazing machinery of methods and tools has been developed over decades for this purpose; A cornerstone of modern physics and science in general.
And yet, important foundational questions remain unanswered.
I am talking, of course, about statistics and our theories of probability.
Recently I found this:
"This book is about one of the greatest intellectual failures of the twentieth
century - several unsuccessful attempts to construct a scientific theory of
probability. Probability and statistics are based on very well developed
mathematical theories. Amazingly, these solid mathematical foundations
are not linked to applications via a scientific theory but via two mutually
contradictory and radical philosophies. One of these philosophical theories
(frequency) is an awkward attempt to provide scientific foundations for
probability. The other theory (subjective) is one of the most confused
theories in all of science and philosophy. A little scrutiny shows that in
practice, the two ideologies are almost entirely ignored, even by their own supporters."
Krzysztof Burdzy
But I guess before I buy the book I will browse the blog a bit more...
added later: ... and read some reviews: negative and positive. (I thank Jonathan for the links.)
in search of absolutely no information
It seems that most readers swallowed my arguments in the previous post, so I think it is time to turn the screw a few more times so to speak.
Again, we are confronted with a deck of 2 cards (we know that there are only black and red cards, no jokers), we take the top card and it is black. What is the probability that the remaining card is also black?
In the previous post we only considered two hypotheses (*):
H1 ... black cards only
H2 ... mixed deck
and we assumed p(b|H2) = 1/2.
This assumption bothers me now.
We don't know how the 2-card deck was put together and it could be that somebody made sure we would always find a black card on top, even for the mixed deck. So I think we need to split H2 into two different hypotheses.
H2a ... manipulated mixed deck: p(b|H2a) = 1
H2b ... random mixed deck: p(b|H2b) = 1/2
Again, relying on the principle of indifference we use an uninformed prior p(H1) = p(H2a) = p(H2b) = 1/3 and find p(H1|b) = 2/5 (I recommend that you actually plug in the numbers and do the calculation). In other words, the probability that the remaining card is also black (2/5) is now significantly less than the probability that it is red (3/5).
But it is clear that we have not yet achieved the goal of "absolutely no information" in selecting our prior; The problem is H2b.
We have to split it further to take into consideration cases in between outright manipulation and random selection. So we have to consider N additional hypotheses
H2k ... k = 1 ... N with p(b|H2k) = k/N
and let N go to infinity (o).
As you can easily check, the large number of hypotheses for mixed decks results in p(H1|b) -> 0 and therefore we have to conclude that, using an absolutely uninformed prior, the probability that the remaining card is black is (close to) zero.
This remarkable result is due to the fact that there is only one way in which the deck can consist of only black cards, but there are many ways in which a mixed deck could have been put together. Absolutely no information means certainty about the 2nd card in this case, thanks to the power of the Bayesian method (x).
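If you would rather let the computer plug in the numbers, here is a minimal sketch of both calculations (the three hypotheses above and the limit of many mixed-deck hypotheses):

```python
def posterior_black_only(likelihoods):
    """Posterior of the first hypothesis ('black cards only'), given a black
    card on top and a uniform prior over all listed hypotheses."""
    prior = 1.0 / len(likelihoods)
    evidence = sum(p * prior for p in likelihoods)
    return likelihoods[0] * prior / evidence

# H1, H2a, H2b as above, with p(b|H) = 1, 1, 1/2
print(posterior_black_only([1.0, 1.0, 0.5]))        # 0.4 = 2/5

# H1 plus N mixed-deck hypotheses H2k with p(b|H2k) = k/N
for N in (10, 100, 10000):
    print(N, posterior_black_only([1.0] + [k / N for k in range(1, N + 1)]))
# the posterior for 'black cards only' goes to zero as N grows
```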
(*) Nobody complained, but it was a bit sloppy to exclude H0 = 'red only' from the prior (which should be chosen before the black card was seen). But p(b|H0) = 0 and, as you can check, including H0 would have made no difference.
(o) The limit N = infinite would require the use of an improper prior and I think it is sufficient for our purpose to consider the case of large but finite N.
(x) I think this toy model will be very helpful e.g. in cosmology and multiverse research.
in search of no information
The main purpose of this blog post is to illustrate my ignorance of Bayesian statistics(*), discussing a very simple game with a 2-card deck. By the way, we only consider black and red cards, no jokers etc.
So we begin with the 2-card deck and draw one card - a black card. The question is 'what is the probability that the other card is also black?'.
Fortunately we only need to consider two different hypotheses:
H1 ... both cards are black.
H2 ... a mixed deck (one black, one red).
We update our probabilities using the famous formula:
p(Hi|b) = p(b|Hi) * p(Hi) / p(b)
where b indicates the 'black card event' and p(b) is short hand for the sum p(b|H1)*p(H1) + p(b|H2)*p(H2).
Since we have no further information we use an uninformed prior which does not prefer one hypothesis over the other,
in other words:
p(H1) = p(H2)
and using p(b|H1) = 1 and p(b|H2) = 1/2 we get
p(H1|b) = 2/3 and p(H2|b) = 1/3. (I recommend that you actually plug in the numbers and do the calculation.)
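For the lazy, the update amounts to a few lines:

```python
# Bayes update for the 2-card deck: H1 = 'both black', H2 = 'mixed deck'
p_H1, p_H2 = 0.5, 0.5            # the 'uninformed' prior
p_b_H1, p_b_H2 = 1.0, 0.5        # probability of a black card on top
p_b = p_b_H1 * p_H1 + p_b_H2 * p_H2
print(p_b_H1 * p_H1 / p_b)       # p(H1|b) = 2/3
print(p_b_H2 * p_H2 / p_b)       # p(H2|b) = 1/3
```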
I admit that there is something weird about this result: We have two cards, we pick one and after updating our probabilities we have to conclude that the other is more likely to be black than red, actually twice as likely.
The issue seems to be our prior, i.e. the choice of p(Hi).
Indeed, if we draw the 2-card deck randomly then the probabilities that it contains bb, br, rb and rr should be the same. We can throw out rr, which leaves us with a 2:1 majority of mixed decks and should therefore use p(H1) = 1/3 and p(H2) = 2/3. As you can check this resolves the weirdness, we get p(H1|b) = p(H2|b) and thus the probability that the 2nd card is also black would be the same as the probability for red.
But can a Bayesian accept such card counting?
And it gets worse: If the 2-card deck was drawn from an initial deck of 2N cards then the probabilities of bb and br are not exactly the same. The probability of the first b is indeed 1/2 but the probability of a 2nd b is lower, (N-1)/(2N-1), and therefore a mixed deck seems actually preferred, depending on the number N, which we don't know. Should we really conclude that red is slightly more likely after seeing black?
Do we have to sum over all possible N with equal weight to get a truly uninformed prior?
But I am sure a true Bayesian would reject all those card counting arguments which smell quite a bit of frequentist reasoning. A truly uninformed prior means that we have no information to prefer one hypothesis over the other. There is a difference between not knowing anything about the 2-card deck and knowing that it was randomly selected. Therefore the symmetric choice p(H1) = p(H2) is the true uninformed prior which properly reflects our indifference and we have to live with the asymmetry of probabilities for the 2nd card.
(*) A true Bayesian would calculate the probability that this post contains sarcasm taking into account the existence of this footnote.
how little do we know
Assume that you have two finite samples X1, X2, ..., Xn and Y1, Y2, ..., Ym independently drawn from two distributions. Don't worry, we know that both distributions are normal, no fat tails and no other complications. All we want to know is the probability p, given the Xs and Ys, that the two distributions have the same mean. We don't know and we don't care about the variances.
You would think that statisticians have a test ready for this, an algorithm which takes the Xs and Ys and spits out p, and you would think they have figured out the best possible algorithm for this simple question.
You would be wrong. Behrens-Fisher is one of the open problems in statistics.
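In practice one usually falls back on Welch's approximate t-test (which does not assume equal variances); note that it returns a frequentist p-value rather than the probability p asked for above, which is rather the point here. A minimal example with scipy and made-up data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(loc=0.0, scale=1.0, size=30)   # made-up sample X1..Xn
Y = rng.normal(loc=0.3, scale=2.0, size=50)   # made-up sample Y1..Ym, different variance

# Welch's t-test: the usual approximate treatment when variances are unknown
t_stat, p_value = stats.ttest_ind(X, Y, equal_var=False)
print(t_stat, p_value)
```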
micro and macro
Cosma wrote about the microfoundations of macroeconomics and received several interesting comments [1, 2].
In my judgment the idea that economics is the statistical mechanics of interacting agents has not worked out very well so far. But of course it has some value, if only some entertainment value in the worst case.
I am for various reasons increasingly convinced that the usual tools of statistics are of limited use when dealing with financial systems or the whole economy and recently I came across a book which expresses a similar sentiment.
The Blank Swan: The End of Probability begins with a discussion of 'regime switching', but with the added complication that the number and properties of those regimes change over time. It then turns quickly into a general discussion of probability.
I am not sure yet what to make of it and I don't know if I can recommend the book. I guess most readers would find it rather weird; But perhaps it is the first step in a new direction ...