
December 26, 2006

This Week’s Finds in Mathematical Physics (Week 243)

Posted by John Baez

In week243 of This Week’s Finds, hear about Claude Shannon, his sidekick Kelly, and how they used information theory to make money at casinos and the stock market. Hear about the new book Fearless Symmetry, which explains fancy number theory to ordinary mortals. Learn about the Dark Ages of our Universe, and how they were ended by the earliest stars. And finally, get a taste of Derek Wise’s work on Cartan geometry, gravity… and hamsters!

Yes, here’s what you’ve always wanted for Christmas: a hamster on a Riemann surface!

Posted at December 26, 2006 1:55 AM UTC


22 Comments & 0 Trackbacks

Re: This Week’s Finds in Mathematical Physics (Week 243)

”.. interesting reading.. ” well, hmm..
Merry Christmas!!

Posted by: ericv on December 26, 2006 3:20 PM | Permalink | Reply to this

Discussion Questions

I’m happiest when people talk about the stuff in This Week’s Finds. So, here are some questions to get a discussion started. I think I know the answers to some of these — but I’m sure I don’t know the answers to others.

  1. I wrote that the Universe became transparent to visible light 380,000 years after the Big Bang, more or less everywhere at once – but “since light takes time to travel, you’d see a transparent sphere around you, expanding outwards at the speed of light, with reddish walls”. But what would that actually look like? How can you even see something moving away from you at the speed of light? Isn’t that a contradiction?
  2. Nothing is perfect; how would things look different when the Universe became transparent given that the Universe was hotter in some patches than others? A detailed answer requires knowing how big the temperature inhomogeneities were. But since we can see them, we should be able to say in detail how things looked.
  3. What are the coolest things that people know or guess about Population III stars? What are the experts trying to figure out now?
  4. Why is John Kelly Jr.’s work so controversial among card players, sports bettors, investors, hedge fund managers, and economists? Hint: it’s not the formula I mentioned, but the Kelly criterion, that all the fuss is about. (The criterion is written out just after this list.)
  5. What are the most interesting recent papers on Klein geometry — the ones my friend Rafe must have been alluding to?
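For question 4, here is the criterion itself in its simplest form — a standard statement, written out here for convenience rather than quoted from week243: for a wager that pays $b$-to-1 and is won with probability $p = 1 - q$, Kelly says to stake the fraction

\[ f^* \;=\; \frac{b p - q}{b} \;=\; p - \frac{q}{b} \]

of your current bankroll, the choice that maximizes the expected logarithm of wealth, and hence the long-run growth rate of the bankroll.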
Posted by: John Baez on December 27, 2006 6:06 PM | Permalink | Reply to this

Re: Discussion Questions

I wrote that the Universe became transparent to visible light 380,000 years after the Big Bang, more or less everywhere at once – but “since light takes time to travel, you’d see a transparent sphere around you, expanding outwards at the speed of light, with reddish walls”. But what would that actually look like? How can you even see something moving away from you at the speed of light? Isn’t that a contradiction?

This seems so clear to me that I’m thinking you threw it in as a nucleation site. Any photon emitted as the universe became transparent is some actual distance away from the observer, and that emission event is not receding at any speed. What’s changing is the distance from the observer to the emission events whose photons arrive at the observer now, and that’s because the events themselves are changing.
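To put a formula on it (just a back-of-envelope statement, ignoring the expansion of the universe): the photons arriving at the observer at time $t$ were all emitted at the recombination time $t_{\mathrm{rec}}$, so they come from a distance

\[ r(t) \;=\; c\,(t - t_{\mathrm{rec}}), \qquad \frac{dr}{dt} \;=\; c . \]

The radius of the visible “transparent sphere” grows at exactly $c$, but nothing material is moving at that speed; what recedes is only the locus of emission events we happen to be seeing.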

I’d draw the analogy to the fact that when you look at a rainbow, moment to moment the light is getting scattered off of different water droplets. A drop enters the top of the bow and looks red. It “changes color” as it falls, and passes out of the violet end of the bow. The actual droplets that sunlight is being scattered from are constantly changing, but we see the bow as consistent.

As to the others… I don’t know enough to comment sensibly. For the second I was under the impression that there are big patches where the competing models disagree, so we really have no idea how big the inhomogeneities were.

On the other hand, if we put this together with the previous view of the background, shouldn’t we try to record it over and over and over again? Each record of the background becomes a spherical shell, and we stack the later ones around the earlier ones to make a spherical annulus that would actually map the temperature of space in that era.

Posted by: John Armstrong on December 27, 2006 7:38 PM | Permalink | Reply to this

How to Gamble If You Must; Re: Discussion Questions

I had spoken with Claude Shannon about his approach to portfolio theory. We contrasted this with “Inequalities for Stochastic Processes: How to Gamble If You Must” by Lester E. Dubins and Leonard J. Savage; see also this.

Suppose you have a $1,000.00 bankroll. Your wife will die unless a doctor performs a surgery that costs $10,000.00. How do you maximize the chance, in a casino, of growing your $1K to $10K?

In this case, there is no “long term.” The optimal strategy is to go to the game with the biggest return, namely Roulette. Place exactly enough on a specific number (probability 1/37 on a wheel with a single 0, or 1/38 if there’s a 0 and a 00; go to another casino if there’s a 000) so that if you win, your win plus the remainder of bankroll = $10K. Lather, rinse, repeat. Since there are house odds, you go to the biggest payoff game to minimize the number of bets. Your chances of growing $1K to $10K (hitting that absorbing barrier in the random walk) are very roughly 1/11.
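Here is a minimal Monte Carlo sketch of the strategy Jonathan describes; the function name and parameter values are mine, and it assumes an American wheel (38 pockets, 35-to-1 payoff on a single number) with fractional stakes allowed:

import random

def bold_play_roulette(bankroll=1_000.0, goal=10_000.0,
                       pockets=38, payout=35, max_rounds=10_000):
    """Simulate the strategy above: each round, stake just enough on one
    number so that a single win lifts the bankroll to the goal."""
    rounds = 0
    while 0.01 < bankroll < goal and rounds < max_rounds:
        stake = min(bankroll, (goal - bankroll) / payout)
        if random.randrange(pockets) == 0:     # our number comes up
            bankroll += payout * stake         # 35-to-1 win, stake returned
        else:
            bankroll -= stake                  # the stake is lost
        rounds += 1
    return bankroll >= goal

trials = 100_000
wins = sum(bold_play_roulette() for _ in range(trials))
print(f"estimated chance of turning $1K into $10K: {wins / trials:.3f}")

Setting pockets=36 (a fair 35-to-1 bet) makes the estimate come out at the classical value bankroll/goal = 1/10; the zeros pull it somewhat below that, in the ballpark of the rough 1/11 quoted above.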

Posted by: Jonathan Vos Post on December 28, 2006 6:06 PM | Permalink | Reply to this

Re: How to Gamble If You Must; Re: Discussion Questions

Since there are house odds, you go to the biggest payoff game to minimize the number of bets. Your chances of growing $1K to $10K (hitting that absorbing barrier in the random walk) are very roughly 1/11.

Surely this rough 1/11 is only accurate when the house cut (the unfairness of the bet) is very small. In an extreme case (say one chance in a million that they give you $10K for a bet of $1K, very unfair), your chances of making the money by this strategy are only one in a million.

I understand the idea of going for the biggest payoff if the unfairness is the same for all games. But if the unfairness differs, then you may want to pick a more fair game with lower payoffs. (I believe that the fairest game in the casino is blackjack, but its payoffs are low; is there a happy medium, and how do we calculate it?)

Posted by: Toby Bartels on December 28, 2006 10:42 PM | Permalink | Reply to this

Re: How to Gamble If You Must; Re: Discussion Questions

I’m not sure blackjack is the fairest of them all unless you’re counting well, which the casinos don’t like you doing.

Actually, finding the most fair game only tells you how to lose money the most slowly. What you should do is find the most unfair game that people seem to think is fair. I like craps because there’s always a lot of commotion around and that lowers most people’s critical thinking skills. Then you figure which side the house is on and nudge the guy next to you at the table, “hey, I got a sawbuck here says he doesn’t make this point.”

Posted by: John Armstrong on December 29, 2006 12:53 AM | Permalink | Reply to this

Re: How to Gamble If You Must; Re: Discussion Questions

I like craps because there’s always a lot of commotion around and that lowers most people’s critical thinking skills. Then you figure which side the house is on and nudge the guy next to you at the table, “hey, I got a sawbuck here says he doesn’t make this point.”

I’ve heard that too: that the best way to make money at the casinos is to know the odds well and do side bets. In the long run, of course, that’s the only way to make money, but we’re not asking about the long term.

In any case, the strategy for maximising the chance of turning $1K into $10K (not caring at all how much you lose if you fail) must be more complicated than simply picking the largest payoff.

Posted by: Toby Bartels on December 30, 2006 2:22 AM | Permalink | Reply to this

Finance in Week 243

David Murphy sent me the following email, which he has kindly allowed me to post:

Dear John

As an ex-category theorist I’ve always enjoyed your columns, and I’d like to take the opportunity to thank you for them.

As a current mathematical finance person, however, I’d like to raise a slight note of concern. It isn’t that what you say is wrong per se, it is just that you suggest that financial systems are somehow like the probabilistic ones we meet in, say, statistical mechanics. That isn’t quite so. The reason is that financial markets are created by people trading. People believe a wide diversity of things, including that this or that stock is going up or down, that the market is calm or in crisis, or, in particular, that this or that theory describes the market’s behaviour. Thus there is a feedback loop between theory and behaviour that makes finance profoundly different to most ‘scientific’ situations.

One well known example of this is the success of the Black–Scholes formula for option pricing: before B&S, warrant prices were all over the place. Now they fall neatly into the line ‘predicted’ by B&S since many option buyers trade using that formula or its descendants. But, and here is the cute part, Black, Scholes and their co-worker Merton lost money trading off their formula in the early days of its development.
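For reference, the formula in question — the standard Black–Scholes price of a European call, stated from the textbooks rather than from David’s email:

\[ C \;=\; S\,N(d_1) - K e^{-rT} N(d_2), \qquad d_{1,2} \;=\; \frac{\ln(S/K) + \bigl(r \pm \tfrac{1}{2}\sigma^2\bigr)T}{\sigma\sqrt{T}}, \]

where $S$ is the current stock price, $K$ the strike, $r$ the risk-free rate, $\sigma$ the volatility, $T$ the time to expiry, and $N$ the standard normal cumulative distribution function.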

Why? Because an arbitrage relationship only holds if it is actually being traded. Some skilled traders put a trade on then tell the world about it, so that other people’s actions will cause the desired market movement. The market isn’t an abstract object whose behaviour makes sense without consideration of the actions of market participants.

I hope this perspective is helpful.

Kind regards

David

Posted by: John Baez on December 28, 2006 6:57 PM | Permalink | Reply to this

Re: Finance in Week 243

In my reply to David Murphy I wrote:

Well, I actually tried to steer clear of any claims about real-world financial systems, by taking Poundstone’s remarks and translating them into a purely “gambling” context, where it’s a matter of making a single bet taking advantage of some inside information.

He replied:

Fair point — you are very wise to avoid the flamewars of mathematical finance…

The delicate philosophical point — I keep meaning to mention this to David Corfield — is the sense in which financial systems are stochastic given

(a) this interaction between theory and behaviour; and more generally

(b) the instability of a generating process.

Kolmogorov-style probability theory relies on being able, at least in principle, to make repeated observations of a system whose behaviour is determined by the same generating process. Often in finance we try to fix issues in the data by making more and more complex hypotheses about what that generating process might be (from Gaussian to Lévy to Fréchet to…). But what if the underlying process just isn’t stable enough that it makes sense to talk about one?

Posted by: John Baez on December 28, 2006 7:03 PM | Permalink | Reply to this

Re: Finance in Week 243

Let me try repeating this so I can see if I understand.

The problem with mathematical finance is essentially one of self-reference. We can model and predict fundamental physics more easily because we are not particles. We cannot model a market so easily because predictors and modellers are an essential part of the market itself — we have to predict our own future behavior.

Posted by: John Armstrong on December 28, 2006 9:07 PM | Permalink | Reply to this

Re: Finance in Week 243

John Armstrong wrote:

The problem with mathematical finance is essentially one of self-reference.

I think that’s the key idea.

We cannot model a market so easily because predictors and modellers are an essential part of the market itself — we have to predict our own future behavior.

Yes — where ‘we’ and ‘our’ are both definitely taken in the plural sense: we’re all making predictions about everyone else’s behavior, and whenever A gets information about B’s predictions, it affects A’s behavior.

Posted by: John Baez on December 28, 2006 9:24 PM | Permalink | Reply to this

Re: Finance in Week 243

John Baez wrote:

John Armstrong wrote:

The problem with mathematical finance is essentially one of self-reference.

I think that’s the key idea.

Basically, yes. It’s more an ensemble property than a local one, though, as there are so many market participants, only a relatively small number of whom have enough cash to move the market unless everyone moves together.

One good example is the ‘87 crash. There was a piece of research circulating a little while before this event commenting on the ‘29 crash. In ‘29 the market fell roughly 23%. Therefore many large players had it in mind that if there was a big crash, the market could fall 20-something percent. When it started to fall, no one wanted to buy until it hit bottom. Many people used a 20-something percent fall as their estimate of the bottom based on this well known research. This more or less guaranteed that the fall was, in fact, 20-something percent because there were no buyers in size before that.

Posted by: David Murphy on December 31, 2006 12:57 PM | Permalink | Reply to this

Re: Finance in Week 243

I’d be first in line to argue that the mathematisation of human behaviour can only work in a very limited way. Economic behaviour might have been thought the best bet, but even here we see the very human effects of believed story lines. And this is not to mention any disagreement one might have with economics about the scope of its subject matter, e.g., what is sacrificed to achieve a quantitative discipline.

The desperate quest for scientific respectability afflicts a host of ‘sciences’, including the medical psychology I had to wade through to write ‘Why do people get ill?’. Want to know how lonely you are? Then ‘measure’ it on the UCLA Loneliness scale.

The philosopher Ian Hacking has written interestingly about the looping effects of human kinds, which ought to be taken into account by any human science.

Posted by: David Corfield on January 2, 2007 12:00 PM | Permalink | Reply to this

Re: Finance in Week 243

Whilst there’s certainly a tendency for researchers to suffer physics-envy, another driving force for “casting into mathematical form” (even if just simple rules for large-scale simulation) is that it’s the approach researchers find most convincing for making predictions which are then used to make decisions. And unlike pure science, where you need only care if something is correct or incorrect (and can wait if things are inconclusive), in engineering/finance it’s often the case that actually doing something, even if based on flawed models/predictions, is better than doing nothing. For example, I work trying to analyse human behaviour in video imagery, and the models I’m using are clearly utterly, ridiculously naive, but they may work just well enough for very gross-level behaviour. There just doesn’t seem to be a non-mathematical way of approaching this automated understanding problem.

So to my mind the question is not “can you mathematically model people?” but “when can you sensibly mathematically model people, and how do you tell when your model is sensible?”

Posted by: dave tweed on January 2, 2007 3:26 PM | Permalink | Reply to this

Re: Finance in Week 243

Granted there are situations as you describe where there is little choice. And in some of these situations there will be checks on how sensible one’s models are in terms of being taken up by commercial enterprises. If you can learn a model of people’s preferences for films (movies), based on other ratings, which is significantly better than Netflix’s own model, then you can earn yourself a million dollar prize.

Good all the same to bear in mind that a mathematical apparatus may constrain one’s vision unnecessarily.

Posted by: David Corfield on January 2, 2007 8:37 PM | Permalink | Reply to this

Re: Finance in Week 243

David Corfield writes:

I’d be first in line to argue that the mathematisation of human behaviour can only work in a very limited way. Economic behaviour might have been thought the best bet, but even here we see the very human effects of believed story lines. And this is not to mention any disagreement one might have with economics about the scope of its subject matter, e.g., what is sacrificed to achieve a quantitative discipline.

I agree, but the finance community has created a need for answers to these questions.

Can I illustrate? Suppose I have two bonds, either of which might default. I sell you a kind of derivative (the ‘First to Default note’ or FTD note) which returns your initial investment just when neither of these two bonds actually defaults during some defined period. The value of the FTD note depends not just on the movements (changes in value) of the two underlying bonds but also on their comovement.

Does it even make sense to talk about the ‘comovement’ let alone the correlation of two things that can only happen at most once each? Yet, fascinating though this question is, it does not remove my need to actually come up with a value for the note (and of course the copula theory that is typically used for this engineering question tactfully ignores those difficulties).
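To make the ‘engineering question’ concrete, here is a toy sketch of the kind of calculation the copula approach licenses — not how a desk would actually price the note (no coupons, no recovery, flat discounting), and all names and parameter values here are illustrative:

import random
from statistics import NormalDist

def ftd_note_value(p1, p2, rho, discount=1.0, trials=200_000, seed=1):
    """Toy value per $1 of principal for the note described above: the
    principal comes back only if *neither* bond defaults in the period.
    Joint defaults are drawn from a one-factor Gaussian copula with
    marginal default probabilities p1, p2 and correlation rho."""
    inv = NormalDist().inv_cdf
    k1, k2 = inv(p1), inv(p2)        # default thresholds for the latent variables
    rng = random.Random(seed)
    survived = 0
    for _ in range(trials):
        m = rng.gauss(0.0, 1.0)      # common ("market") factor
        z1 = rho**0.5 * m + (1 - rho)**0.5 * rng.gauss(0.0, 1.0)
        z2 = rho**0.5 * m + (1 - rho)**0.5 * rng.gauss(0.0, 1.0)
        if z1 >= k1 and z2 >= k2:    # neither latent variable crosses its threshold
            survived += 1
    return discount * survived / trials

# 5% and 8% marginal default probabilities; the answer moves visibly with rho.
for rho in (0.0, 0.3, 0.7):
    print(rho, ftd_note_value(0.05, 0.08, rho))

The point of the exercise is David’s: the number you get depends on rho, the ‘comovement’ parameter, even though nothing in the history of two not-yet-defaulted bonds pins rho down.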

So perhaps there is a genuine epistemological issue: what would it mean for such an instrument to have a fair value? What is the status of theories of value of such things? And are you comfortable with the fact that despite your answers to the above, your pension fund contains such things?

Posted by: David Murphy on January 2, 2007 7:48 PM | Permalink | Reply to this

Re: Finance in Week 243

You present a fascinating set of challenges for the physicalist. Instead of taking it out on poor old mathematics (see, e.g., this discussion of Hartry Field), attempting to eliminate the Queen of the Sciences from physics, we should press physicalists to talk their way out of the need to invoke these financial terms.

Posted by: David Corfield on January 3, 2007 12:01 PM | Permalink | Reply to this

My Koszul connection puzzle

Hello,

This is probably too off-topic and semi-laypersonish… yet I couldn’t resist after I spotted Ehresmann connection and Cartan rolling here.

(I gave up on math and German EliteBildung a decade ago, yet this magnificent place (+TWF) threatens to get me hooked again sooner or later…)

One reason why I dropped out is that I don’t have enough patience to study unnecessarily messy stuff like the tensor “calculus” (be it in the “physicist’s notation” or the even more bunk “mathematician’s notation”). I made my own notation to juggle Laplacians etc. and then got tired of it all.

So, here’s my big puzzlement:

Why do the standard textbook and the standard paper start with the (Koszul) covariant derivative on vectors and not covectors?
1) Since the differential of a function is a covector, this should be the natural place to proceed with calculus.
2) Torsion, curvature, and the Levi-Civita-Koszul construction would exhibit no Lie bracket terms when done on covectors. (It is an artefact of the product rule when switching to vector fields.)

To illustrate what I’m talking about, here’s my TeX-free definition of torsion:

Notation: C the smooth functions, T’ the C-module of covector fields, d: C → T’ the differential, T’*T’ the tensor product, A: T’*T’ → T’^T’ the projector onto antisymmetric tensors.

Let D: T’ → T’*T’ be a covariant derivative, i.e. “Flori’s total covariant derivative starting at the right end”.
Then it is easy to check that the map ADd: C → T’^T’ is a derivation (ADd is “taking the antisymmetric part of the Hesse form”).
Hence, by the universal property of d, there exists a unique tensor morphism t: T’ → T’^T’ with ADd = td.

Definition: t is called the torsion tensor of D.

Exercise: Check that t is the dual of the torsion in the “mathematician’s notation”.
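For readers who prefer symbols, the same construction in one display — a direct transcription of the definitions above, with the tensor product taken over C:

\[ d \colon C \to T', \qquad D \colon T' \to T' \otimes_C T', \qquad A \colon T' \otimes_C T' \to T' \wedge T', \]
\[ A D d\,(fg) \;=\; f\, A D d\,(g) + g\, A D d\,(f) \quad (f, g \in C), \]
\[ t \colon T' \to T' \wedge T' \ \text{the unique $C$-linear map with } A \circ D \circ d = t \circ d . \]

The derivation property in the middle line holds because the Leibniz rule gives $D\,d(fg) = df \otimes dg + f\,D\,dg + dg \otimes df + g\,D\,df$, and $A$ kills the symmetric term $df \otimes dg + dg \otimes df$.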

Posted by: Florifulgurator on December 28, 2006 7:57 PM | Permalink | Reply to this

Re: My Koszul connection puzzle

Florifulgurator wrote:

This is probably too off-topic and semi-laypersonish…

It’s a bit digressive, but heck — it’s the holiday season, I’ll be nice.

Why do the standard textbook and the standard paper start with the (Koszul) covariant derivative on vectors and not covectors?

This used to bug me too.

There’s no very good reason — just tradition, I guess. People consider vectors easier to visualize, so they like to start by covariantly differentiating vectors instead of covectors.

Both approaches have their charms and defects.

Posted by: John Baez on December 28, 2006 9:50 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 243)

Chris Weed emailed me the following remarks, which he has kindly allowed me to post:

A physical perspective on Cartan geometry occurred to me, and I wonder if it motivates some of the interest in Cartan’s formulation. The idea is this: Consider the “model space” to represent a putative equilibrium configuration — the “target” of some kind of relaxation process. One might then ask, especially with quantum gravity in mind, what would underlie a given configuration. If one thinks in terms of a substructure, along the lines of LQG or Volovik’s condensed matter models, what sort of substructure would prefer to relax to a de Sitter spacetime in the classical limit?

In a later email, he added:

I haven’t been going to n-Category Café often enough. In particular, this recent post by David Corfield was mildly electrifying:

The “second strain of explanation” he discusses has been a pet notion of mine for about 15 years, and was directly inspired by ruminations on the writings of Edwin Jaynes combined with consideration of general relativity and field theory. I read Ariel Caticha’s paper with considerable excitement shortly after it appeared on the arXiv.

It is interesting that this theme of Corfield’s post is not unrelated to the remark in my previous message. Actually this isn’t so surprising, because the theme lurks in the background of nearly all my thinking about physics these days. In my opinion it has strong implicit connections to the Machian outlook as explicated by Julian Barbour. Consider the problem of inference he sets forth in the powerfully evocative introduction to Volume 1 of his Absolute or Relative Motion. One is confronted with a collection of snapshots of allegedly successive particle configurations, with no a priori coordinating labels to indicate a correspondence between them, and one is asked to place the snapshots in order and in “appropriate” relative orientations. It almost goes without saying that this problem has a deeply Bayesian flavor.

Mach’s philosophy also has strong logical and historical connections to the ongoing interest in inductive inference, which in some people’s minds has been substantially rehabilitated by the Bayesian approach. I have always had a heavily Popperian outlook, and my sense has been that Jaynes himself regarded Bayesian inference as the central tool in effective learning by trial and error, or conjecture and refutation. My understanding is that for inductivists, the notion that we learn by trial and error alone has always seemed untenable, because there is no evident reason why it should ever be effective unless the data can in some sense “speak to us”. As Popper himself once observed, if all one’s trials end in failure, after a while one wonders where to turn, i.e., whether one has learned anything at all that can guide future trials. All this seems quite close to David’s current concerns, as exemplified in:

Posted by: John Baez on December 29, 2006 1:46 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 243)

See the following, which cites Caticha’s paper:

Posted by: Chris Weed on December 29, 2006 11:58 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 243)

Let’s hope that Derek’s paper marks a watershed in the movement to have rodents properly represented in mathematics. The explication of the 3 conditions of a Cartan connection on pp. 16-17 is wonderfully clear thanks to that hamster.
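For anyone reading along without the paper to hand, the three conditions are the usual ones defining a Cartan connection modeled on a Klein geometry $G/H$ — this is the standard definition, paraphrased from memory rather than quoted from Derek’s pp. 16-17: a $\mathfrak{g}$-valued 1-form $A$ on a principal $H$-bundle $P \to M$ such that

1. $A_p \colon T_p P \to \mathfrak{g}$ is a linear isomorphism for every $p \in P$;
2. $(R_h)^* A = \mathrm{Ad}(h^{-1}) \circ A$ for every $h \in H$;
3. $A(\tilde{\xi}) = \xi$ for every $\xi \in \mathfrak{h}$, where $\tilde{\xi}$ is the fundamental vector field of $\xi$.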

If we ever get back to categorifying Klein geometry, will there then be scope to put 2-hamsters inside weak quotients of 2-groups, without harming them in any way?

As a warm-up we might look at how a 2-plane might 2-roll on a lumpy bumpy 2-surface.

Posted by: David Corfield on January 3, 2007 11:39 AM | Permalink | Reply to this
