Geometric Representation Theory (Lecture 12)
Posted by John Baez
In 1925, Werner Heisenberg came up with a radical new approach to physics in which processes were described using matrices of complex numbers. What makes this especially remarkable is that Heisenberg, like most physicists of his day, had not heard of matrices!
It’s hard to tell what Heisenberg was thinking, but in retrospect we might say his idea was that given a system with some set of states, say $X$, a process would be described by a bunch of complex numbers $T_{ji}$ specifying the ‘amplitude’ for any state $i \in X$ to turn into any state $j \in X$. He composed processes by summing over all possible intermediate states:
$$(T S)_{ki} = \sum_j T_{kj} S_{ji} .$$
Later he discussed his theory with his boss, Max Born, who informed him that he had reinvented matrix multiplication!
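Heisenberg’s composition rule is easy to see in action. Here is a minimal sketch (the encoding and all names are mine, not from the article): a process from one set of states to another is a table of complex amplitudes, and composing two processes sums over every possible intermediate state.

```python
# A sketch of Heisenberg's composition rule: composing processes by
# summing amplitudes over all intermediate states -- i.e. matrix multiplication.

def compose(T, S, X, Y, Z):
    """(T.S)[z, x] = sum over y of T[z, y] * S[y, x]."""
    return {(z, x): sum(T[(z, y)] * S[(y, x)] for y in Y)
            for z in Z for x in X}

# A two-state example: S is an equal superposition, T is a Hadamard-like
# process; composing them makes the amplitude for 0 -> 1 cancel out.
X = Y = Z = [0, 1]
s = 1 / 2 ** 0.5
S = {(y, x): s for y in Y for x in X}
T = {(z, y): -s if z == y == 1 else s for z in Z for y in Y}
TS = compose(T, S, X, Y, Z)  # TS[(1, 0)] vanishes: destructive interference
```

Note that the sum over intermediate states is exactly the inner sum of matrix multiplication, which is the point Born made to Heisenberg.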
In 1926, Max Born figured out what Heisenberg’s mysterious ‘amplitudes’ actually meant: the absolute value squared $|T_{ji}|^2$ gives the probability for the initial state $i$ to become the final state $j$ via the process $T$. This spelled the end of the deterministic worldview built into Newtonian mechanics.
More shockingly still, since amplitudes are complex, a sum of amplitudes can have a smaller absolute value than those of its terms. Thus, quantum mechanics exhibits destructive interference: allowing more ways for something to happen may reduce the chance that it does!
Heisenberg never liked the term ‘matrix mechanics’ for his work, because he thought it sounded too abstract. Similarly, when Feynman invented a way of doing physics using what we now call ‘Feynman diagrams’, he didn’t like it when Freeman Dyson called them by their standard mathematical name: ‘graphs’. He thought it sounded too fancy. Can you detect a pattern?
But no matter what we call matrix mechanics, its generalizations are the key to understanding how invariant relations between geometric figures give intertwining operators between group representations. And that’s what I talked about this time in the Geometric Representation Theory seminar.
- Lecture 12 (Nov. 6) - John Baez on matrix mechanics and its generalizations.
Heisenberg’s original matrix mechanics, where a quantum process from a set of states $X$ to a set of states $Y$ is described by a matrix of complex “amplitudes”:
$$T \colon X \times Y \to \mathbb{C} .$$
We can generalize this by replacing the complex numbers with any rig $R$, obtaining a category $\mathrm{Mat}(R)$ where the objects are finite sets, and the morphisms from $X$ to $Y$ are $R$-valued matrices
$$T \colon X \times Y \to R .$$
$\mathrm{Mat}(R)$ is equivalent to the category of finitely generated free $R$-modules. For example, $\mathrm{Mat}(\mathbb{C})$ is equivalent to the category of finite-dimensional complex vector spaces, $\mathrm{FinVect}$. If $\mathbb{B}$ is the rig of truth values with “or” as addition and “and” as multiplication, $\mathrm{Mat}(\mathbb{B})$ is equivalent to the category with finite sets as objects and relations as morphisms, $\mathrm{FinRel}$.
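The rig-matrix idea can be sketched in a few lines of code (function names are mine): a rig is given by its addition, multiplication, and additive unit, and matrices over it compose by the usual formula with $+$ and $\times$ replaced by the rig operations.

```python
# Matrix composition over an arbitrary rig (add, mul, zero).

def mat_mul(add, mul, zero, A, B):
    """(A.B)[i][j] = sum over k of A[i][k] * B[k][j], computed in the rig."""
    def dot(xs, ys):
        acc = zero
        for x, y in zip(xs, ys):
            acc = add(acc, mul(x, y))
        return acc
    return [[dot(row, [B[k][j] for k in range(len(B))])
             for j in range(len(B[0]))] for row in A]

# Over the rig of truth values, matrix multiplication is exactly
# composition of relations...
rel1 = [[True, False], [True, True]]
rel2 = [[False, True], [True, False]]
composed = mat_mul(lambda p, q: p or q, lambda p, q: p and q, False, rel1, rel2)

# ...while over the rig of natural numbers it is ordinary matrix product.
product = mat_mul(lambda p, q: p + q, lambda p, q: p * q, 0, [[1, 2]], [[3], [4]])
```

Swapping in a different rig changes what “matrix mechanics” means while leaving the composition law untouched, which is the point of the generalization.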
There’s an obvious map from $\mathbb{B}$ to $\mathbb{C}$ sending “false” to $0$ and “true” to $1$, which lets us reinterpret invariant relations as Hecke operators. But this map is not a rig homomorphism, so we don’t get a functor $\mathrm{Mat}(\mathbb{B}) \to \mathrm{Mat}(\mathbb{C})$. To fix this, we can consider $\mathrm{Mat}(\mathbb{N})$, where $\mathbb{N}$ is the rig of natural numbers. This is equivalent to $\mathrm{Span}(\mathrm{FinSet})$, the category where morphisms are isomorphism classes of spans between finite sets. This suggests viewing the theory of spans as categorified matrix mechanics.
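The span/matrix correspondence can be made concrete with a small sketch (the encoding is my own): a span $X \leftarrow S \to Y$ is stored as a list of $(x, y)$ pairs, one per element of $S$, and its $\mathbb{N}$-valued matrix counts the elements of $S$ sitting over each pair. Composing spans by pullback then matches multiplying the matrices.

```python
# Spans between finite sets as N-valued matrices.

def span_to_matrix(span, X, Y):
    """Rows indexed by Y, columns by X; entries count elements of the span."""
    return [[sum(1 for (x, y) in span if x == x0 and y == y0) for x0 in X]
            for y0 in Y]

def compose_spans(span_xy, span_yz):
    """Pullback over the shared middle set: keep pairs whose Y-legs agree."""
    return [(x, z) for (x, y1) in span_xy for (y2, z) in span_yz if y1 == y2]

# Two spans whose composite has two elements over ('x', 'z'),
# mirroring the matrix product [[1, 1]] . [[1], [1]] = [[2]].
X, Y, Z = ['x'], ['a', 'b'], ['z']
span1 = [('x', 'a'), ('x', 'b')]   # matrix [[1], [1]]
span2 = [('a', 'z'), ('b', 'z')]   # matrix [[1, 1]]
composite = compose_spans(span1, span2)
```

Unlike a relation, the composite span remembers *how many* ways each pair is related, which is why $\mathrm{Mat}(\mathbb{N})$ repairs the failure of the truth-value picture.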
- Streaming video in QuickTime format: http://mainstream.ucr.edu/baez_11_6_stream.mov
- Downloadable video
- Lecture notes by Alex Hoffnung
- Lecture notes by Apoorva Khare
Re: Geometric Representation Theory (Lecture 12)
Hmm. I’m a little confused by this. Why aren’t finitely generated free $\mathbb{B}$-modules the same as Boolean algebras which are power sets of finite sets? Then an $m \times n$ matrix of 0s and 1s acts on a column of 0s and 1s representing a subset of an $n$-element set to give a subset of an $m$-element set, just as an $m \times n$ matrix of complex numbers acts on an element of $\mathbb{C}^n$.
To get sets as freely generated modules, I thought we needed the generalized ring known as the ‘field with no elements’, i.e., the one associated with the identity functor.
The Kleisli category of the powerset monad is Rel. But then we’re not dealing here with Kleisli categories, are we? They have mappings from $X$ to $T(Y)$, for some monad $T$. I thought these matrices you’re after are mappings from $X \times Y$ to $R$, so that transposes take you in the opposite direction.
On the other hand, a map from $X \times Y$ to $\mathbb{B}$ does generate a map from $X$ to $P(Y)$. And as we are dealing with ‘linear’ mappings from $P(X)$ to $P(Y)$, a mapping from $X$ to $P(Y)$ can be recovered.
OK, so what are ‘linear’ mappings between ‘field with no elements’-modules, i.e., between sets? Oh, they’re just functions, aren’t they?
Good, I think my confusion is over.