
July 22, 2014

The Tenfold Way (Part 2)

Posted by John Baez

How can we discuss all the kinds of matter described by the ten-fold way in a single setup?

It’s a bit tough, because 8 of them are fundamentally ‘real’ while the other 2 are fundamentally ‘complex’. Yet they should fit into a single framework, because there are 10 super division algebras over the real numbers, and each kind of matter is described using a super vector space — or really a super Hilbert space — with one of these super division algebras as its ‘ground field’.

Combining physical systems is done by tensoring their Hilbert spaces… and there does seem to be a way to do this even with super Hilbert spaces over different super division algebras. But what sort of mathematical structure can formalize this?

Here’s my current attempt to solve this problem. I’ll start with a warmup case, the threefold way. In fact I’ll spend most of my time on that! Then I’ll sketch how the ideas should extend to the tenfold way.

Fans of lax monoidal functors, Deligne’s tensor product of abelian categories, and the collage of a profunctor will be rewarded for their patience if they read the whole article. But the basic idea is supposed to be simple: it’s about a multiplication table.

The $\mathbb{3}$-fold way

First of all, notice that the set

$$\mathbb{3} = \{1,0,-1\}$$

is a commutative monoid under ordinary multiplication:

$$\begin{array}{rrrr} \mathbf{\times} & \mathbf{1} & \mathbf{0} & \mathbf{-1} \\ \mathbf{1} & 1 & 0 & -1 \\ \mathbf{0} & 0 & 0 & 0 \\ \mathbf{-1} & -1 & 0 & 1 \end{array}$$
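This table is small enough to check mechanically. Here is a quick Python sketch verifying the monoid axioms:

```python
from itertools import product

# The monoid 3 = {1, 0, -1} under ordinary multiplication.
THREE = [1, 0, -1]

def mul(a, b):
    return a * b

# Closure, commutativity and associativity of the table:
assert all(mul(a, b) in THREE for a, b in product(THREE, repeat=2))
assert all(mul(a, b) == mul(b, a) for a, b in product(THREE, repeat=2))
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a, b, c in product(THREE, repeat=3))

# 1 is the identity, so this is a commutative monoid.
assert all(mul(1, a) == a for a in THREE)
```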

Next, note that there are three (associative) division algebras over the reals: $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{H}$. We can equip a real vector space with the structure of a module over any of these algebras. We’ll then call it a real, complex or quaternionic vector space.

For the real case, this is entirely dull. For the complex case, this amounts to giving our real vector space $V$ a complex structure: a linear operator $i: V \to V$ with $i^2 = -1$. For the quaternionic case, it amounts to giving $V$ a quaternionic structure: a pair of linear operators $i, j: V \to V$ with

$$i^2 = j^2 = -1, \qquad i j = -j i$$

We can then define $k = i j$.

The terminology ‘quaternionic vector space’ is a bit quirky, since the quaternions aren’t a field, but indulge me. $\mathbb{H}^n$ is a quaternionic vector space in an obvious way. $n \times n$ quaternionic matrices act by multiplication on the right as ‘quaternionic linear transformations’ — that is, left module homomorphisms — of $\mathbb{H}^n$. Moreover, every finite-dimensional quaternionic vector space is isomorphic to $\mathbb{H}^n$. So it’s really not so bad! You just need to pay some attention to left versus right.

Now: I claim that given two vector spaces of any of these kinds, we can tensor them over the real numbers and get a vector space of another kind. It goes like this:

$$\begin{array}{cccc} \mathbf{\otimes} & \mathbf{real} & \mathbf{complex} & \mathbf{quaternionic} \\ \mathbf{real} & real & complex & quaternionic \\ \mathbf{complex} & complex & complex & complex \\ \mathbf{quaternionic} & quaternionic & complex & real \end{array}$$

You’ll notice this has the same pattern as the multiplication table we saw before:

$$\begin{array}{rrrr} \mathbf{\times} & \mathbf{1} & \mathbf{0} & \mathbf{-1} \\ \mathbf{1} & 1 & 0 & -1 \\ \mathbf{0} & 0 & 0 & 0 \\ \mathbf{-1} & -1 & 0 & 1 \end{array}$$

So:

  • $\mathbb{R}$ acts like $1$.
  • $\mathbb{C}$ acts like $0$.
  • $\mathbb{H}$ acts like $-1$.
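This dictionary can be checked against the tensor table entry by entry. A small Python sketch, where the encoding of kinds as elements of $\mathbb{3}$ is just the labelling from the bullet points:

```python
# Encode kinds of vector spaces by elements of the monoid {1, 0, -1}:
# real <-> 1, complex <-> 0, quaternionic <-> -1.
label = {'real': 1, 'complex': 0, 'quaternionic': -1}
kind = {v: k for k, v in label.items()}

# The tensor table from the post: tensoring (over R) a vector space of
# one kind with a vector space of another kind yields a third kind.
tensor = {
    ('real', 'real'): 'real',
    ('real', 'complex'): 'complex',
    ('real', 'quaternionic'): 'quaternionic',
    ('complex', 'real'): 'complex',
    ('complex', 'complex'): 'complex',
    ('complex', 'quaternionic'): 'complex',
    ('quaternionic', 'real'): 'quaternionic',
    ('quaternionic', 'complex'): 'complex',
    ('quaternionic', 'quaternionic'): 'real',
}

# Check: the tensor table is exactly multiplication in {1, 0, -1}.
for (x, y), z in tensor.items():
    assert label[x] * label[y] == label[z]
```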

There are different ways to understand this, but a nice one is to notice that if we have algebras $A$ and $B$ over some field, and we tensor an $A$-module and a $B$-module (over that field), we get an $A \otimes B$-module. So, we should look at this ‘multiplication table’ of real division algebras:

$$\begin{array}{lrrr} \mathbf{\otimes} & \mathbf{\mathbb{R}} & \mathbf{\mathbb{C}} & \mathbf{\mathbb{H}} \\ \mathbf{\mathbb{R}} & \mathbb{R} & \mathbb{C} & \mathbb{H} \\ \mathbf{\mathbb{C}} & \mathbb{C} & \mathbb{C} \oplus \mathbb{C} & \mathbb{C}[2] \\ \mathbf{\mathbb{H}} & \mathbb{H} & \mathbb{C}[2] & \mathbb{R}[4] \end{array}$$

Here $\mathbb{C}[2]$ means the 2 × 2 complex matrices viewed as an algebra over $\mathbb{R}$, and $\mathbb{R}[4]$ means the 4 × 4 real matrices.

What’s going on here? Naively you might have hoped for a simpler table, which would have instantly explained my earlier claim:

$$\begin{array}{lrrr} \mathbf{\otimes} & \mathbf{\mathbb{R}} & \mathbf{\mathbb{C}} & \mathbf{\mathbb{H}} \\ \mathbf{\mathbb{R}} & \mathbb{R} & \mathbb{C} & \mathbb{H} \\ \mathbf{\mathbb{C}} & \mathbb{C} & \mathbb{C} & \mathbb{C} \\ \mathbf{\mathbb{H}} & \mathbb{H} & \mathbb{C} & \mathbb{R} \end{array}$$

This isn’t true, but it’s ‘close enough to true’. Why? Because we always have a god-given algebra homomorphism from the naive answer to the real answer! The interesting cases are these:

$$\mathbb{C} \to \mathbb{C} \oplus \mathbb{C}, \qquad \mathbb{C} \to \mathbb{C}[2], \qquad \mathbb{R} \to \mathbb{R}[4]$$

where the first is the diagonal map $a \mapsto (a,a)$, and the other two send numbers to the corresponding scalar multiples of the identity matrix.

So, for example, if $V$ and $W$ are $\mathbb{C}$-modules, then their tensor product (over the reals! — all tensor products here are over $\mathbb{R}$) is a module over $\mathbb{C} \otimes \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$, and we can then pull that back via the diagonal map $f: \mathbb{C} \to \mathbb{C} \oplus \mathbb{C}$ to get a $\mathbb{C}$-module.

What’s really going on here?

There’s a monoidal category $Alg_{\mathbb{R}}$ of algebras over the real numbers, where the tensor product is the usual tensor product of algebras. The monoid $\mathbb{3}$ can be seen as a monoidal category with 3 objects and only identity morphisms. And I claim this:

Claim. There is an oplax monoidal functor $F : \mathbb{3} \to Alg_{\mathbb{R}}$ with

$$\begin{array}{ccl} F(1) &=& \mathbb{R} \\ F(0) &=& \mathbb{C} \\ F(-1) &=& \mathbb{H} \end{array}$$

What does ‘oplax’ mean? Some readers of the $n$-Category Café eat oplax monoidal functors for breakfast and are chortling with joy at how I finally summarized everything I’d said so far in a single terse sentence! But others of you see ‘oplax’ and get a queasy feeling.

The key idea is that when we have two monoidal categories $C$ and $D$, a functor $F : C \to D$ is ‘oplax’ if it preserves the tensor product, not up to isomorphism, but up to a specified morphism. More precisely, given objects $x, y \in C$ we have a natural transformation

$$F_{x,y} : F(x \otimes y) \to F(x) \otimes F(y)$$

If you had a ‘lax’ functor this would point the other way, and they’re a bit more popular… so when it points the opposite way it’s called ‘oplax’.

(In the lax case, $F_{x,y}$ should probably be called the laxative, but we’re not doing that case, so I don’t get to make that joke.)

This morphism $F_{x,y}$ needs to obey some rules, but the most important one is that using it twice gives two ways to get from $F(x \otimes y \otimes z)$ to $F(x) \otimes F(y) \otimes F(z)$, and these must agree.

Let’s see how this works in our example… at least in one case. I’ll take the trickiest case. Consider

$$F_{0,0} : F(0 \cdot 0) \to F(0) \otimes F(0),$$

that is:

$$F_{0,0} : \mathbb{C} \to \mathbb{C} \otimes \mathbb{C}$$

There are, in principle, two ways to use this to get a homomorphism

$$F(0 \cdot 0 \cdot 0) \to F(0) \otimes F(0) \otimes F(0)$$

or in other words, a homomorphism

$$\mathbb{C} \to \mathbb{C} \otimes \mathbb{C} \otimes \mathbb{C}$$

where remember, all tensor products are taken over the reals. One is

$$\mathbb{C} \stackrel{F_{0,0}}{\longrightarrow} \mathbb{C} \otimes \mathbb{C} \stackrel{1 \otimes F_{0,0}}{\longrightarrow} \mathbb{C} \otimes (\mathbb{C} \otimes \mathbb{C})$$

and the other is

$$\mathbb{C} \stackrel{F_{0,0}}{\longrightarrow} \mathbb{C} \otimes \mathbb{C} \stackrel{F_{0,0} \otimes 1}{\longrightarrow} (\mathbb{C} \otimes \mathbb{C}) \otimes \mathbb{C}$$

I want to show they agree (after we rebracket the threefold tensor product using the associator).

Unfortunately, so far I have described $F_{0,0}$ in terms of an isomorphism

$$\mathbb{C} \otimes \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$$

Using this isomorphism, $F_{0,0}$ becomes the diagonal map $a \mapsto (a,a)$. But now we need to really understand $F_{0,0}$ a bit better, so I’d better say what isomorphism I have in mind! I’ll use the one that goes like this:

$$\begin{array}{ccl} \mathbb{C} \otimes \mathbb{C} &\to& \mathbb{C} \oplus \mathbb{C} \\ 1 \otimes 1 &\mapsto& (1,1) \\ i \otimes 1 &\mapsto& (i,i) \\ 1 \otimes i &\mapsto& (i,-i) \\ i \otimes i &\mapsto& (-1,1) \end{array}$$

This may make you nervous, but it truly is an isomorphism of real algebras, and it sends $a \otimes 1$ to $(a,a)$. So, unraveling the web of confusion, we have

$$\begin{array}{rccc} F_{0,0} : & \mathbb{C} &\to& \mathbb{C} \otimes \mathbb{C} \\ & a &\mapsto& a \otimes 1 \end{array}$$

Why didn’t I just say that in the first place? Well, I suffered over this a bit, so you should too! You see, there’s an unavoidable arbitrary choice here: I could just as well have used $a \mapsto 1 \otimes a$. $F_{0,0}$ looked perfectly god-given when we thought of it as a homomorphism from $\mathbb{C}$ to $\mathbb{C} \oplus \mathbb{C}$, but that was deceptive, because there’s a choice of isomorphism $\mathbb{C} \otimes \mathbb{C} \to \mathbb{C} \oplus \mathbb{C}$ lurking in this description.
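Concretely, one realization of such an isomorphism is $a \otimes b \mapsto (a b, a \bar{b})$, which sends $a \otimes 1$ to $(a,a)$. A quick numerical sanity check in Python (my own sketch; checking multiplicativity on pure tensors is enough, since the map is real-bilinear):

```python
import random

def phi(a, b):
    """Image of the pure tensor a (x) b under a (x) b -> (a*b, a*conj(b))."""
    return (a * b, a * b.conjugate())

i = 1j

# Images of the real basis vectors of C (x)_R C:
assert phi(1 + 0j, 1 + 0j) == (1, 1)
assert phi(i, 1 + 0j) == (i, i)
assert phi(1 + 0j, i) == (i, -i)
assert phi(i, i) == (-1, 1)

# Multiplicativity on pure tensors: (a(x)b)(c(x)d) = ac (x) bd.
random.seed(0)
for _ in range(100):
    vals = [complex(random.uniform(-1, 1), random.uniform(-1, 1))
            for _ in range(4)]
    a, b, c, d = vals
    lhs = phi(a * c, b * d)
    rhs = tuple(p * q for p, q in zip(phi(a, b), phi(c, d)))
    assert all(abs(x - y) < 1e-9 for x, y in zip(lhs, rhs))
```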

This makes me nervous, since category theory disdains arbitrary choices! But it seems to work. On the one hand we have

$$\begin{array}{ccccc} \mathbb{C} &\stackrel{F_{0,0}}{\longrightarrow}& \mathbb{C} \otimes \mathbb{C} &\stackrel{1 \otimes F_{0,0}}{\longrightarrow}& \mathbb{C} \otimes \mathbb{C} \otimes \mathbb{C} \\ a &\mapsto& a \otimes 1 &\mapsto& a \otimes (1 \otimes 1) \end{array}$$

On the other hand, we have

$$\begin{array}{ccccc} \mathbb{C} &\stackrel{F_{0,0}}{\longrightarrow}& \mathbb{C} \otimes \mathbb{C} &\stackrel{F_{0,0} \otimes 1}{\longrightarrow}& \mathbb{C} \otimes \mathbb{C} \otimes \mathbb{C} \\ a &\mapsto& a \otimes 1 &\mapsto& (a \otimes 1) \otimes 1 \end{array}$$

So they agree!

I need to carefully check all the other cases before I dare call my claim a theorem. Indeed, writing up this case has increased my nervousness… before, I’d thought it was obvious.

But let me march on, optimistically!

Consequences

In quantum physics, what matters is not so much the algebras $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{H}$ themselves as the categories of vector spaces — or indeed, Hilbert spaces — over these algebras. So, we should think about the map sending an algebra to its category of modules.

For any field $k$, there should be a contravariant pseudofunctor

$$Rep: Alg_k \to Rex_k$$

where $Rex_k$ is the 2-category of

  • $k$-linear finitely cocomplete categories,

  • $k$-linear functors preserving finite colimits,

  • and natural transformations.

The idea is that $Rep$ sends any algebra $A$ over $k$ to its category of modules, and any homomorphism $f : A \to B$ to the pullback functor $f^* : Rep(B) \to Rep(A)$.

(Functors preserving finite colimits are also called right exact; this is the reason for the funny notation $Rex$. It has nothing to do with the dinosaur of that name.)

Moreover, $Rep$ gets along with tensor products. It’s definitely true that given real algebras $A$ and $B$, we have

$$Rep(A \otimes B) \simeq Rep(A) \boxtimes Rep(B)$$

where $\boxtimes$ is the tensor product of finitely cocomplete $k$-linear categories. But we should be able to go further and prove $Rep$ is monoidal. I don’t know if anyone has bothered yet.

(In case you’re wondering, this $\boxtimes$ thing reduces to Deligne’s tensor product of abelian categories given some ‘niceness assumptions’, but it’s a bit more general. Read the talk by Ignacio López Franco if you care… but I could have used Deligne’s setup if I restricted myself to finite-dimensional algebras, which is probably just fine for what I’m about to do.)

So, if my earlier claim is true, we can take the oplax monoidal functor

$$F : \mathbb{3} \to Alg_{\mathbb{R}}$$

and compose it with the contravariant monoidal pseudofunctor

$$Rep : Alg_{\mathbb{R}} \to Rex_{\mathbb{R}}$$

giving a guy which I’ll call

$$Vect: \mathbb{3} \to Rex_{\mathbb{R}}$$

I guess this guy is a contravariant oplax monoidal pseudofunctor! That doesn’t make it sound very lovable… but I love it. The idea is that:

  • $Vect(1)$ is the category of real vector spaces

  • $Vect(0)$ is the category of complex vector spaces

  • $Vect(-1)$ is the category of quaternionic vector spaces

and the operation of multiplication in $\mathbb{3} = \{1,0,-1\}$ gets sent to the operation of tensoring any one of these three kinds of vector space with any other kind and getting another kind!

So, if this works, we’ll have combined linear algebra over the real numbers, complex numbers and quaternions into a unified thing, $Vect$. This thing deserves to be called a $\mathbb{3}$-graded category. This would be a nice way to understand Dyson’s threefold way.

What’s really going on?

What’s really going on with this monoid $\mathbb{3}$? It’s a kind of combination or ‘collage’ of two groups:

  • The Brauer group of $\mathbb{R}$, namely $\mathbb{Z}_2 \cong \{-1,1\}$. This consists of Morita equivalence classes of central simple algebras over $\mathbb{R}$. One class contains $\mathbb{R}$ and the other contains $\mathbb{H}$. The tensor product of algebras corresponds to multiplication in $\{-1,1\}$.

  • The Brauer group of $\mathbb{C}$, namely the trivial group $\{0\}$. This consists of Morita equivalence classes of central simple algebras over $\mathbb{C}$. But $\mathbb{C}$ is algebraically closed, so there’s just one class, containing $\mathbb{C}$ itself!

See, the problem is that while $\mathbb{C}$ is a division algebra over $\mathbb{R}$, it’s not ‘central simple’ over $\mathbb{R}$: its center is not just $\mathbb{R}$, it’s bigger. This turns out to be why $\mathbb{C} \otimes \mathbb{C}$ is so funny compared to the rest of the entries in our division algebra multiplication table.

So, we’ve really got two Brauer groups in play. But we also have a homomorphism from the first to the second, given by ‘tensoring with $\mathbb{C}$’: complexifying any real central simple algebra, we get a complex one.

And whenever we have a group homomorphism $\alpha: G \to H$, we can make their disjoint union $G \sqcup H$ into a monoid, which I’ll call $G \sqcup_\alpha H$.

It works like this. Given $g, g' \in G$, we multiply them the usual way. Given $h, h' \in H$, we multiply them the usual way. But given $g \in G$ and $h \in H$, we define

$$g h := \alpha(g) h$$

and

$$h g := h \alpha(g)$$

The multiplication on $G \sqcup_\alpha H$ is associative! For example:

$$(g g')h = \alpha(g g') h = \alpha(g) \alpha(g') h = \alpha(g) (g' h) = g (g' h)$$

Moreover, the element $1_G \in G$ acts as the identity of $G \sqcup_\alpha H$. For example:

$$1_G h = \alpha(1_G) h = 1_H h = h$$

But of course $G \sqcup_\alpha H$ isn’t a group, since “once you get inside $H$ you never get out”.

This construction could be called the collage of $G$ and $H$ via $\alpha$, since it’s reminiscent of a similar construction of that name in category theory.
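The construction is easy enough to implement. Here is a Python sketch of the collage of two finite commutative groups given by multiplication tables; applying it to the Brauer groups of $\mathbb{R}$ and $\mathbb{C}$, with the trivial complexification homomorphism, reproduces the 3-element monoid $\mathbb{3}$:

```python
from itertools import product

def collage(G, H, alpha, mulG, mulH):
    """Build the monoid G ⊔_α H: elements are tagged by which piece they
    live in; mixed products are pushed into H through alpha."""
    def mul(x, y):
        (sx, vx), (sy, vy) = x, y
        if sx == 'G' and sy == 'G':
            return ('G', mulG(vx, vy))
        if sx == 'H' and sy == 'H':
            return ('H', mulH(vx, vy))
        if sx == 'G':                       # g · h := α(g) h
            return ('H', mulH(alpha(vx), vy))
        return ('H', mulH(vx, alpha(vy)))   # h · g := h α(g)
    elems = [('G', g) for g in G] + [('H', h) for h in H]
    return elems, mul

# Brauer group of R: {1, -1}; Brauer group of C: trivial, written {0}.
# alpha sends everything to 0 ('complexification').
elems, mul = collage([1, -1], [0], lambda g: 0,
                     lambda a, b: a * b, lambda a, b: 0)

assert len(elems) == 3
# Associativity, commutativity, and identity ('G', 1):
for x, y, z in product(elems, repeat=3):
    assert mul(mul(x, y), z) == mul(x, mul(y, z))
    assert mul(x, y) == mul(y, x)
assert all(mul(('G', 1), x) == x for x in elems)
```

Under the identification $(\mathrm{G},1) \leftrightarrow 1$, $(\mathrm{G},-1) \leftrightarrow -1$, $(\mathrm{H},0) \leftrightarrow 0$, this is exactly the multiplication table of $\mathbb{3}$.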

Question. What do monoid theorists call this construction?

Question. Can we do a similar trick for any field? Can we always take the Brauer groups of all its finite-dimensional extensions and fit them together into a monoid by taking some sort of collage? If so, I’d call this the Brauer monoid of that field.

The $\mathbb{10}$-fold way

If you carefully read Part 1, maybe you can guess how I want to proceed. I want to make everything ‘super’.

I’ll replace division algebras over $\mathbb{R}$ by super division algebras over $\mathbb{R}$. Now instead of 3 = 2 + 1 there are 10 = 8 + 2:

  • 8 of them are central simple over $\mathbb{R}$, so they give elements of the super Brauer group of $\mathbb{R}$, which is $\mathbb{Z}_8$.

  • 2 of them are central simple over $\mathbb{C}$, so they give elements of the super Brauer group of $\mathbb{C}$, which is $\mathbb{Z}_2$.

Complexification gives a homomorphism

$$\alpha: \mathbb{Z}_8 \to \mathbb{Z}_2$$

namely the obvious nontrivial one. So, we can form the collage

$$\mathbb{10} = \mathbb{Z}_8 \sqcup_\alpha \mathbb{Z}_2$$

It’s a commutative monoid with 10 elements! Each of these is the equivalence class of one of the 10 real super division algebras.
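Concretely, writing $\mathbb{Z}_8$ and $\mathbb{Z}_2$ additively with $\alpha(k) = k \bmod 2$, the collage can be checked by brute force to be a commutative monoid with 10 elements. A Python sketch:

```python
from itertools import product

# Z8 and Z2 written additively; alpha: Z8 -> Z2 is reduction mod 2.
Z8 = [('Z8', k) for k in range(8)]
Z2 = [('Z2', k) for k in range(2)]
TEN = Z8 + Z2

def add(x, y):
    (sx, a), (sy, b) = x, y
    if sx == sy == 'Z8':
        return ('Z8', (a + b) % 8)
    # Any product touching Z2 lands in Z2, pushing Z8 elements through alpha:
    a = a % 2 if sx == 'Z8' else a
    b = b % 2 if sy == 'Z8' else b
    return ('Z2', (a + b) % 2)

assert len(TEN) == 10
# Commutative monoid axioms, with identity ('Z8', 0):
for x, y in product(TEN, repeat=2):
    assert add(x, y) == add(y, x)
for x, y, z in product(TEN, repeat=3):
    assert add(add(x, y), z) == add(x, add(y, z))
assert all(add(('Z8', 0), x) == x for x in TEN)
```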

I’ll then need to check that there’s an oplax monoidal functor

$$G : \mathbb{10} \to SuperAlg_{\mathbb{R}}$$

sending each element of $\mathbb{10}$ to the corresponding super division algebra.

If $G$ really exists, I can compose it with a thing

$$SuperRep : SuperAlg_{\mathbb{R}} \to Rex_{\mathbb{R}}$$

sending each super algebra to its category of ‘super representations’ on super vector spaces. This should again be a contravariant monoidal pseudofunctor.

We can call the composite of $G$ with $SuperRep$

$$SuperVect: \mathbb{10} \to Rex_{\mathbb{R}}$$

If it all works, this thing $SuperVect$ will deserve to be called a $\mathbb{10}$-graded category. It contains super vector spaces over the 10 kinds of super division algebras in a single framework, and says how to tensor them. And when we look at super Hilbert spaces, this setup will be able to talk about all ten kinds of matter I mentioned last time… and how to combine them.

So that’s the plan. If you see problems, or ways to simplify things, please let me know!

Posted at July 22, 2014 11:02 AM UTC

TrackBack URL for this Entry:   https://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/2757

41 Comments & 1 Trackback

Re: The Ten-Fold Way (Part 2)

In retrospect I’m making things too hard on myself, and even doing things wrong, in the ‘tricky’ case of tensoring two real vector spaces equipped with a complex structure! I should just treat them as complex vector spaces and tensor them over $\mathbb{C}$. That’s physically right (for combining quantum systems), and it also seems to correspond to what we should do by taking the Brauer group of the complex numbers seriously, as a separate ‘chunk’ of our collage.

I believe that with this fix, the functor $F : \mathbb{3} \to Alg_{\mathbb{R}}$ will still be lax monoidal, and devoid of funny arbitrary choices.

Posted by: John Baez on July 25, 2014 11:40 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

From this point of view, where’s “periodicity”? Given an object in the category 10, why is there a “next” object?

Posted by: Allen Knutson on July 25, 2014 3:59 PM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Well, super division algebras in the submonoid

$$\mathbb{Z}_8 \subset \mathbb{10}$$

happen to be Morita equivalent to the super algebras $Cl_0, \dots, Cl_7$ (real Clifford algebras), while those in the submonoid

$$\mathbb{Z}_2 \subset \mathbb{10}$$

happen to be Morita equivalent to the super algebras $\mathbb{C}\mathrm{l}_0, \mathbb{C}\mathrm{l}_1$ (complex Clifford algebras).

So in the former case $\mathbb{Z}_8$ has a distinguished generator, $Cl_1$ (actually a super division algebra itself), which gives $\mathbb{Z}_8$ a cyclic ordering! But I only know this generator is distinguished using Clifford algebra theory, not from staring at the super Brauer group $\mathbb{Z}_8$ in the abstract.

The second case is less interesting: $\mathbb{Z}_2$ has a distinguished generator simply because it has only one possible generator.

Posted by: John Baez on July 25, 2014 4:45 PM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Do you think (or share Greg Moore’s optimism) that this tenfold way is somehow the same as Dyson’s original tenfold way (discussed in his paper on the various threefold ways)?

The 10 = 8 + 2 split, as well as the group structure and canonical generators, seem to be important aspects of this particular tenfold way. Dyson’s seems to involve 10 = 3×3 + 1, and various other “tenfold ways” in the literature don’t obviously have this “next object” that Allen mentioned. For the connection to K-theory, this extra structure seems to be crucial. If all these “tenfold ways” are not actually the same in any non-trivial sense, then what they classify must be very different things, which would be quite worrying!

Posted by: Guo Chuan Thiang on July 25, 2014 10:00 PM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Chuan Thiang wrote:

Do you think (or share Greg Moore’s optimism) that this tenfold way is somehow the same as Dyson’s original tenfold way (discussed in his paper on the various threefold ways)?

I haven’t delved into this question enough… thanks for giving me a nudge; it would be fun to investigate this. I’m pretty sure the (8+2)-fold way I’m discussing here is related to a (3×3+1)-fold way based on the study of time reversal and charge conjugation symmetry… but I don’t know if that’s Dyson’s original 10-fold way.

Posted by: John Baez on July 26, 2014 2:51 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Is there some way to define the Brauer monoid directly, without performing the collage construction? For example, can it be identified as consisting of Morita equivalence classes of [adjectives] algebras over $\mathbb{R}$ under some natural tensor product?

Posted by: Tim Campion on July 25, 2014 4:44 PM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

I’ve tried things like that but haven’t been able to get it to work. The basic problem is that

$$\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$$

whereas the three-fold way and ten-fold way want the tensor product of two $\mathbb{C}$-modules to be another $\mathbb{C}$-module.

Central simple algebras over a field $k$ are closed under tensor product, and every one is Morita equivalent to a division algebra over $k$ whose center is $k$. This gives the usual Brauer group. The problem is that $\mathbb{C}$ is not central simple over $\mathbb{R}$. $\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$ is no longer even simple.

If $k$ has characteristic zero, semisimple algebras over $k$ are closed under tensor product (as well as direct sum). So, we can create a rig of Morita equivalence classes of semisimple algebras over $k$. This seems to be the most reasonable thing to do. But this rig consists of finite linear combinations of $\mathbb{R}$, $\mathbb{C}$, and $\mathbb{H}$ with the following multiplication table:

$$\begin{array}{cccc} \mathbf{\otimes_{\mathbb{R}}} & \mathbf{\mathbb{R}} & \mathbf{\mathbb{C}} & \mathbf{\mathbb{H}} \\ \mathbf{\mathbb{R}} & \mathbb{R} & \mathbb{C} & \mathbb{H} \\ \mathbf{\mathbb{C}} & \mathbb{C} & \mathbb{C} \oplus \mathbb{C} & \mathbb{C} \\ \mathbf{\mathbb{H}} & \mathbb{H} & \mathbb{C} & \mathbb{R} \end{array}$$

I don’t see a systematic way to chop this rig down to the monoid $\mathbb{3} = \{1, 0, -1\}$. The problem is

$$\mathbb{C} \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C} \oplus \mathbb{C}$$

again.

So, I think the field extensions of our base field kk should be treated with some extra respect; this leads to the idea of taking all the Brauer groups and glomming them together into a monoid.

Posted by: John Baez on July 26, 2014 2:34 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

What you call “oplax” monoidal functors, I prefer to call colax:

(Image: a Colax tablet label)

There, got that out of my system.

Posted by: Tom Leinster on July 25, 2014 9:21 PM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

I’m thinking my ‘collage’ construction might be a special case of a general classification result for commutative monoids. The way people usually state that result is:

Any commutative semigroup has a grading by a semilattice such that the homogeneous components are Archimedean semigroups.

I don’t understand this result very well yet, but the basic idea seems to be that a commutative semigroup can be broken up into pieces called ‘components’, and the set of components is partially ordered (in fact a semilattice). Adding elements in two components $X$ and $Y$ can only yield an element in a component that’s at least as far up as both $X$ and $Y$.

I believe that the commutative monoid $\mathbb{10}$ will have two ‘components’, which are the groups $\mathbb{Z}_8$ and $\mathbb{Z}_2$. The component $\mathbb{Z}_2$ is ‘further up’, since whenever we add something in $\mathbb{Z}_8$ to something in $\mathbb{Z}_2$ we get something in $\mathbb{Z}_2$, and whenever we add two things in $\mathbb{Z}_2$ we get another thing in $\mathbb{Z}_2$. We can go up, but we can never go back down.

I think Pierre W. Grillet’s book Commutative Semigroups is the place to really learn this stuff, though it was first introduced in

  • T. Tamura and N. Kimura, On decompositions of a commutative semigroup, Kodai Math. Sem. Rep. 1954 (1954), 109-112.
Posted by: John Baez on July 26, 2014 3:16 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Let me try to explain this structure theorem:

A commutative semigroup has a grading by a semilattice such that the homogeneous components are Archimedean semigroups.

First, a meet-semilattice is a poset $(L, \le)$ where any pair of elements $\alpha, \beta$ has a greatest lower bound, denoted $\alpha \wedge \beta$. Any such semilattice is automatically a commutative semigroup obeying an extra law, the idempotence law:

$$\alpha \wedge \alpha = \alpha$$

Suppose we have a commutative semigroup $(S, \cdot)$ and a homomorphism $f : S \to L$ where $L$ is a semilattice. Then we can partition $S$ into (possibly empty) subsets, one for each element $\alpha \in L$:

$$S_\alpha = f^{-1} \{\alpha\}$$

It’s easy to check that thanks to the idempotence law, each $S_\alpha$ is a semigroup!

So, a homomorphism $f : S \to L$ to a semilattice $L$ is a great way to chop up a commutative semigroup into smaller commutative semigroups.

Posted by: John Baez on July 26, 2014 8:28 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

So, given a commutative semigroup $(S, \cdot)$, how do we get a homomorphism to a semilattice?

We can define a preorder on $S$ by

$$a \le b \iff b \cdot c = a^n \ \text{for some}\ c \in S,\ n > 0$$

Roughly speaking, $a \le b$ if you can get from $b$ to some power of $a$ by multiplying $b$ with something. I think this definition is ‘upside down’ compared to how I would visualize things, but I will stick with Grillet’s notation for now to keep from getting completely confused!

This is just one of many preorders people put on semigroups, but this one is particularly nice for our purposes.

We can define an equivalence relation on $S$ by

$$a \sim b \iff a \le b \ \text{and}\ b \le a$$

Then $S/\sim$ becomes a semilattice with the operation $\wedge$ coming from the multiplication $\cdot$ in $S$ and a partial order coming from the preorder $\le$ in $S$. The quotient map

$$S \to S/\sim$$

is a homomorphism of semigroups, so we’ve got what we want.

Even better, $\sim$ is the finest equivalence relation such that $S/\sim$ becomes a semilattice!

This is part of Theorem 1.2 in Grillet’s book Commutative Semigroups. It looks quite easy to prove.
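These definitions can be computed by brute force for a finite example. Here is a Python sketch that runs the construction on the monoid $\mathbb{3} = \{1,0,-1\}$: the preorder, the equivalence relation, and the resulting components, which turn out to be $\{1,-1\}$ and $\{0\}$:

```python
S = [1, 0, -1]   # the commutative monoid 3, under multiplication

def leq(a, b):
    """a <= b  iff  b*c = a**n for some c in S, n > 0.
    Powers in this monoid cycle quickly, so a small range of n suffices."""
    return any(b * c == a ** n for c in S for n in range(1, len(S) + 2))

def equiv(a, b):
    """a ~ b  iff  a <= b and b <= a."""
    return leq(a, b) and leq(b, a)

# Components = equivalence classes of ~.
components = []
for a in S:
    for comp in components:
        if equiv(a, comp[0]):
            comp.append(a)
            break
    else:
        components.append([a])

print(components)   # → [[1, -1], [0]]
```

So $\{1,-1\}$ and $\{0\}$ come out as the two components, matching the two Brauer-group ‘chunks’ of the collage.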

Posted by: John Baez on July 26, 2014 8:47 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Let me just check that $S/\sim$ is a semilattice.

I’ll take it for granted that $\le$ as defined above is a preorder on our semigroup $S$. It’s easy to check that this preorder gives one on $S/\sim$.

Note that the preorder $\le$ on $S$ is not in general a partial order: we don’t have

$$a \le b \ \text{and}\ b \le a \implies a = b$$

since for a group we have $a \sim b$ for all $a, b$. However, to form $S/\sim$ we identify two elements whenever $a \le b$ and $b \le a$. So the preorder on $S/\sim$ is actually a partial order.

Why is it a semilattice? We need to show that given $[a], [b] \in S/\sim$ they have a greatest lower bound. The only possible choice is $[a \cdot b]$, so let’s check that it works.

I would visualize $a \cdot b$ as bigger than $a$ and $b$, but I think my visualization is upside down compared to Grillet’s. Indeed, we have

$$a \cdot b \le a$$

since $a$ times something equals some power of $a \cdot b$. Similarly

$$a \cdot b \le b$$

so $a \cdot b$ is a lower bound of $a$ and $b$ in $S$. Thus $[a \cdot b]$ is a lower bound of $[a]$ and $[b]$ in $S/\sim$.

Why is it the greatest lower bound?

Suppose $[c]$ is any lower bound of $[a]$ and $[b]$. Then $c \le a$ and $c \le b$. So,

$$c^m = a \cdot x$$

for some $m$ and $x$, and

$$c^n = b \cdot y$$

for some $n$ and $y$. Following my old nose, I get

$$c^{m + n} = a \cdot b \cdot x \cdot y$$

which implies that $c \le a \cdot b$. So, $[a \cdot b]$ is the greatest lower bound of $[a]$ and $[b]$.

Apart from all the inequalities pointing the opposite direction from how I would visualize them, this was very easy.

Posted by: John Baez on July 26, 2014 9:24 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

And now for the final piece of this structure theorem! We’ve already seen that starting from a commutative semigroup $S$, we get a semilattice $S/\sim$ and a homomorphism, the quotient map

$$f: S \to S/\sim$$

It follows that each component

$$S_\alpha = f^{-1} \{ \alpha \}$$

is a semigroup. Moreover, $S$ becomes graded by the semilattice $S/\sim$, meaning

$$S_\alpha \cdot S_\beta \subseteq S_{\alpha \wedge \beta}$$

But the final piece is this: each component $S_\alpha$ is an Archimedean semigroup. This is one for which any pair of elements $a, b$ obeys $a \le b$, where $\le$ is the preorder we’ve seen before:

$$a \le b \iff b \cdot c = a^n \ \text{for some}\ c \in S,\ n > 0$$

This last piece is obvious, since each component is defined to be an equivalence class of elements of $S$, where $a \sim b$ iff $a \le b$ and $b \le a$.

You can see how the Archimedean property is a generalization of the one we know in real analysis.

Posted by: John Baez on July 26, 2014 9:40 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Ah, Theorem 2.1 in Grillet’s Commutative Semigroups gives a possible strategy for building the ‘Brauer monoid’ of a field out of the Brauer groups of all its algebraic extensions!

Let me first state the theorem. We say a commutative semigroup $S$ is graded by a semilattice $L$ if we can write $S$ as a disjoint union of ‘components’ $S_\alpha$ for $\alpha \in L$, and

$$S_\alpha \cdot S_\beta \subseteq S_{\alpha \wedge \beta}$$

for all $\alpha, \beta \in L$. This implies that each component is a semigroup.

We’ve seen that every commutative semigroup has a canonical grading of this sort.

We say SS is a Clifford semigroup if it is equipped with a grading by a semilattice LL (not necessarily the canonical one) with the additional property that each component is a group. The components will obviously then be abelian groups.

The theorem says that from a Clifford semigroup we can build a functor from L (viewed as a category with a unique morphism from \alpha to \beta when \beta \le \alpha) to the category of abelian groups. Conversely, given a functor from a semilattice to the category of abelian groups, we can build a Clifford semigroup.

(I believe these two processes are ‘inverses’: that is, they form an equivalence between the category of L-graded semigroups whose components are groups, and the category of functors from L to AbGp. But Grillet doesn’t say this.)

The idea seems pretty simple. Given a Clifford semigroup, we define a functor from L to AbGp as follows. To each \alpha \in L we assign the group S_\alpha. And whenever \beta \le \alpha, we have a homomorphism S_\alpha \to S_\beta sending x \in S_\alpha to x 1_\beta, where 1_\beta is the identity of S_\beta. This indeed maps S_\alpha to S_\beta since

x \in S_\alpha, \; 1_\beta \in S_\beta \;\implies\; x 1_\beta \in S_{\alpha \wedge \beta} = S_\beta

And it’s clearly a homomorphism!

It’s also pretty easy to see that these homomorphisms fit together to define a functor from L to AbGp.

(Again Grillet seems to be doing things ‘upside down’ by treating L as a category with a unique morphism from \alpha to \beta when \beta \le \alpha. But this is just a matter of convention. I won’t change conventions while I’m still reading his book!)
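Here is a quick Python check of this recipe on a toy example of my own (not Grillet’s): the multiplicative monoid Z/6 is a Clifford semigroup, since each of its Archimedean components {1,5}, {2,4}, {3}, {0} is a group, and the maps x ↦ x·1_β really are homomorphisms.

```python
# Toy check: (Z/6, *) is a Clifford semigroup, and x |-> x * 1_beta
# defines the functor described above.

def mul(a, b):
    return (a * b) % 6

components = [frozenset({1, 5}), frozenset({2, 4}),
              frozenset({3}), frozenset({0})]

def identity_of(C):
    # each component is a group, so it contains a unique idempotent
    return next(e for e in C if mul(e, e) == e)

def comp_of(x):
    return next(C for C in components if x in C)

def below(B, A):
    # S_beta <= S_alpha in the grading iff S_alpha * S_beta lands in S_beta
    return all(mul(a, b) in B for a in A for b in B)

# whenever S_beta <= S_alpha, the map x |-> x * 1_beta lands in S_beta
# and is a homomorphism from S_alpha to S_beta
functorial = all(
    comp_of(mul(x, identity_of(B))) == B
    and mul(mul(x, identity_of(B)), mul(y, identity_of(B)))
        == mul(mul(x, y), identity_of(B))
    for A in components for B in components if below(B, A)
    for x in A for y in A
)
```

For instance the map from {1,5} to {2,4} sends 1 ↦ 4 and 5 ↦ 2, and 4 is indeed the identity of the two-element group {2,4}.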

Posted by: John Baez on July 26, 2014 4:22 PM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

This reminds me of two things: the Grothendieck construction of a functor, and the spectrum of a ring. Any relation to either one, do you think?

Posted by: Mike Shulman on July 27, 2014 4:29 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

I’ve been thinking of it as very close to the Grothendieck construction of a functor.

This is slightly obscured by the fact that people in semigroup theory don’t like identity morphisms.

But for my application this extra generality is pointless! I like identities. So I would use monoids instead of semigroups, and also work with meet-semilattices that have a top element. (I now see that some reputable people insist that their meet-semilattices have a top element. Good.)

Then we could think of it this way:

We’ve got a functor F : \Lambda \to AbGp where \Lambda is a specially nice symmetric monoidal category (namely a semilattice, viewed as a symmetric monoidal posetal category). And we use some version of the Grothendieck construction to turn this into a specially nice symmetric monoidal category over \Lambda (namely a \Lambda-graded commutative monoid).

Perhaps AbGp here is a watered-down version of SymMonCat.

So it seems to involve a symmetric monoidal version of the Grothendieck construction. Have you heard about that?

Posted by: John Baez on July 27, 2014 5:55 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

So it seems to involve a symmetric monoidal version of the Grothendieck construction. Have you heard about that?

Yep! (Theorem 12.7)

Posted by: Mike Shulman on July 28, 2014 5:50 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Although you know you should beef up the assignment k \mapsto Br(k) to a functor assigning to k the symmetric monoidal 2-category of Azumaya algebras over k, invertible bimodules and bimodule isos. Although, as Mike’s result stands, isomorphism classes of bimodules would work better.

Also, the domain could be the (opposite of the) topos of finite field extensions, or equivalently the category of continuous Gal(K/k)-sets…

Posted by: David Roberts on July 28, 2014 6:42 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Sorry, that was meant in response to John. Miscounted the quote levels… :-/

Posted by: David Roberts on July 28, 2014 6:44 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Great! For those too lazy to click and read, this theorem says (among other things) that there’s an equivalence

SMF_C \simeq [C^{op}, SymMonCat]

where C is a cartesian monoidal category, SMF_C is the 2-category of symmetric monoidal fibrations over C, some sort of commuting squares of these, and natural transformations, and I’m guessing [\cdot, \cdot] is the internal hom in Cat.

A meet-semilattice is indeed a cartesian monoidal category, right? So I think this theorem is indeed a kind of generalization of the result I mentioned. In that result C is replaced by a meet-semilattice, SymMonCat is replaced by AbGp, and SMF_C is replaced by the category of C-graded commutative monoids with the property that each grade is an abelian group.

Posted by: John Baez on July 28, 2014 6:25 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Right! Except that [\cdot, \cdot] denotes the category of pseudofunctors, not strict ones — but in your decategorified case there is no difference.

Posted by: Mike Shulman on July 28, 2014 7:08 PM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

I don’t really see how this construction is like the spectrum of a ring, except in my application where there are a bunch of different fields running around.

Posted by: John Baez on July 27, 2014 6:04 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Well, maybe the analogy to the spectrum of a ring is a bit of a stretch. I’m not thinking only of the ordinary prime spectrum of a commutative ring, but also of various other kinds of “spectra of rings” that I think I read about in Stone Spaces. In general the idea is that given a ring, one constructs from it a posetal sort of gadget (like the frame of opens of its prime ideal spectrum), and then decomposes the ring into a family of “simpler” rings “indexed over” that poset in some way (like the structure sheaf of the prime ideal spectrum). You seem to be doing something similar with a semigroup.

Posted by: Mike Shulman on July 28, 2014 5:55 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

I just got around to reading J.C. Cole’s preprint The bicategory of topoi, and spectra which was discussed on the categories list last week. He presents a general theory of “spectra” in terms of right adjoints to maps of slice 2-categories of Topos over classifying topoi, which includes the usual prime spectrum of a ring, where the classifying toposes are the Zariski topos (for local rings) and the classifying topos of rings. The right adjoint assigns to any ringed topos a locally ringed topos, and I guess when restricted to ordinary rings in Set it gives the usual Zariski spectrum. Its existence is equivalent to the (constructive) best factorization of a ring homomorphism with local codomain through a local map (the other factor being a localization). I wonder whether this decomposition of a commutative semigroup into an “Archimedean piece” and a “semilattice piece” can be viewed as one of these “spectra”.

Posted by: Mike Shulman on July 29, 2014 6:26 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

A whiff of fracture theorem there (blog post, nLab)?

Posted by: David Corfield on July 29, 2014 6:45 PM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Mike wrote:

I wonder whether this decomposition of a commutative semigroup into an “Archimedean piece” and a “semilattice piece” can be viewed as one of these “spectra”.

Just so I don’t forget: someone should try to boost up this decomposition (called Clifford’s theorem) to a decomposition for symmetric monoidal categories!

You can easily form a symmetric monoidal preorder P from a symmetric monoidal category C by saying that for a, b \in C we have a \le b if there exists a morphism from a to b. We get a forgetful functor

F : C \to P

which is symmetric monoidal, and thus C becomes ‘P-graded’. We can replace P by an equivalent poset if we like.

But Clifford’s clever idea — which he implemented only for commutative monoids — was to work a bit harder and replace P with a semilattice.

Hmm. Ten minutes trying to generalize this to full-fledged symmetric monoidal categories failed to yield a method that works. Maybe we need to categorify the concept of ‘semilattice’ or something.

Getting some sort of ‘structure theorem’ for symmetric monoidal categories would be very nice.

Posted by: John Baez on July 29, 2014 9:39 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Maybe we need to categorify the concept of ‘semilattice’ or something.

One categorification of ‘meet-semilattice’ is ‘cartesian monoidal category’.

Posted by: Mike Shulman on July 30, 2014 12:43 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

So here’s my latest idea for defining the ‘Brauer monoid’ of a field k. We let K be a finite extension of k, and let L be the lattice of subfields of K containing k.

For each field F \in L, let Br(F) be the Brauer group of F. If F is contained in a bigger field F' \in L, there’s a homomorphism

Br(F) \to Br(F')

sending each Morita equivalence class [A] of central simple algebras over F to the class [F' \otimes_F A] of central simple algebras over F'. (Apparently people call this homomorphism a ‘restriction map’.)

I believe this should give a functor from the lattice L to AbGp — and thus, by the theorem I mentioned in my last comment, an L-graded commutative semigroup!

(Now our conventions have flipped, so we’re thinking of L as a category with a unique morphism from F to F' when F \subseteq F'. But that’s okay.)

This commutative semigroup should be a commutative monoid: the Brauer monoid of the extension k \subseteq K.

This seems like a more spiffy version of my ‘collage’ idea.

Posted by: John Baez on July 26, 2014 4:39 PM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

It may seem odd that I described a Brauer monoid not for a field but for a field k and a finite extension K. This gives a bit of extra flexibility, but we may not want that flexibility. I restricted myself to finite extensions of the field k, that is, those where the larger field K is a finite-dimensional vector space over k, because I only want my Brauer monoid to know about finite-dimensional division algebras over k.

If the algebraic closure of k is a finite extension of k, we can use that as our choice of K and speak simply of the Brauer monoid of k.

This condition holds for k = \mathbb{R}, but not k = \mathbb{Q}.

Now I see that we can easily avoid this condition: let K be the algebraic closure of k but let L be the lattice of fields k \subseteq F \subseteq K that are finite extensions of k. Define a functor from L to AbGp sending each such field F to the Brauer group Br(F), and sending each inclusion F \hookrightarrow F' to the canonical map Br(F) \to Br(F'). Use Theorem 2.1 in Grillet’s book to create a commutative monoid from this functor. This is the Brauer monoid of k.

It agrees with the previously mentioned one when the algebraic closure of k is a finite extension of k.

Next I’d like to describe the Brauer monoid a bit more directly, without mentioning that theorem in Grillet’s book.

Posted by: John Baez on July 27, 2014 4:35 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

How do we explicitly describe the commutative semigroup coming from a functor

F : L \to AbGp

where L is a lattice? Grillet explains this in the proof of his Theorem 2.1, a theorem that goes back to here:

  • A. H. Clifford, Semigroups admitting relative inverses, Annals of Math. 42 (1941) 1037–1049.

To get our commutative semigroup, we start by taking the disjoint union

S = \coprod_{\alpha \in L} F(\alpha)

Then we put a multiplication on it. Say we are given a \in F(\alpha) and b \in F(\beta) and we want to multiply them. Since we have

\alpha, \beta \ge \alpha \wedge \beta

our functor F gives group homomorphisms from F(\alpha) and F(\beta) to F(\alpha \wedge \beta) (using Grillet’s upside-down conventions). We use these to map a and b into the semigroup F(\alpha \wedge \beta), and then we multiply them in there.

It’s pretty easy to see that this recipe makes S into a commutative semigroup: the associative law is the fun thing to check here.

And if our semilattice L has a top element \top, then the identity 1 \in F(\top) will serve as an identity for S, so S will be a commutative monoid.

(There should be some quick name for a meet-semilattice with a top element, just as there’s a quick name for a semigroup with an identity element.)
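Here is this recipe carried out in Python on a small example of my own devising: L = subsets of {0, 1} under intersection (a meet-semilattice with top), F(A) = the group of functions A → Z/2, and restriction maps as the homomorphisms. Associativity — the fun thing to check — is verified exhaustively.

```python
# The disjoint-union construction, implemented directly.
from itertools import product

L = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]

def F(A):
    # elements of F(A), encoded as dicts A -> Z/2
    return [dict(zip(sorted(A), bits))
            for bits in product((0, 1), repeat=len(A))]

# S = coprod_{A in L} F(A), elements tagged by their grade
S = [(A, tuple(sorted(f.items()))) for A in L for f in F(A)]

def mul(x, y):
    (A, f), (B, g) = x, y
    C = A & B                              # the meet of A and B in L
    f, g = dict(f), dict(g)
    h = {p: (f[p] + g[p]) % 2 for p in C}  # restrict both to F(C), then add
    return (C, tuple(sorted(h.items())))

associative = all(mul(mul(x, y), z) == mul(x, mul(y, z))
                  for x in S for y in S for z in S)
commutative = all(mul(x, y) == mul(y, x) for x in S for y in S)

# since L has a top element, the identity of F(top) is an identity for S
unit = (frozenset({0, 1}), ((0, 0), (1, 0)))
is_monoid = all(mul(unit, x) == x for x in S)
```

The total monoid here has 1 + 2 + 2 + 4 = 9 elements, one grade per subset.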

Posted by: John Baez on July 27, 2014 5:33 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

So, let me unravel all the constructions and describe my ‘Brauer monoid of a field’ more directly.

We start with a field k and pick an algebraic closure of it, say K. We let L be the collection of all intermediate fields k \subseteq F \subseteq K that are finite extensions of k. We form the disjoint union

\mathbf{BR}(k) = \coprod_{F \in L} Br(F)

where Br(F) is the usual Brauer group of F. An element of Br(F) is a Morita equivalence class [A] of central simple algebras over F.

Now we make \mathbf{BR}(k) into a commutative monoid. Suppose we have two elements of \mathbf{BR}(k):

[A] \in Br(F), \quad [A'] \in Br(F')

How do we multiply them? We take the smallest field E \in L that contains both F and F'. We extend our algebras A and A' to algebras over E, getting

\tilde{A} = E \otimes_F A, \qquad \tilde{A'} = E \otimes_{F'} A'

If I know what I’m doing, these are both central simple algebras over E. So, we can tensor them over E and get another central simple algebra \tilde{A} \otimes_E \tilde{A'}. This gives an element

[\tilde{A} \otimes_E \tilde{A'}] \in Br(E) \subseteq \mathbf{BR}(k)

And that’s how we multiply two guys in \mathbf{BR}(k).

It’s pretty simple when all is said and done. The detour through commutative semigroup theory could be avoided, but I needed it to reach this idea… and it was fun to learn that stuff: commutative semigroups don’t get the respect they deserve. (Or at least commutative monoids don’t: semigroups that aren’t monoids deserve to be spat on and kicked, or else mercifully provided with an identity element.)

Now I can just generalize this stuff to the ‘super’ case and define the super Brauer monoid of any field. When that field is \mathbb{R}, this should be the commutative monoid \mathbb{10}, and we’ll get a ‘\mathbb{10}-graded symmetric monoidal category’

SuperVect : \mathbb{10} \to AbCat_{\mathbb{R}}

describing all 10 kinds of matter and how you can tensor them.
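As a sanity check, here is the smallest case of this construction in Python: the Brauer monoid of \mathbb{R} itself. This is a hand-coded model of my own, using only the standard facts Br(\mathbb{R}) \cong \mathbb{Z}/2 (classes [\mathbb{R}], [\mathbb{H}]) and Br(\mathbb{C}) trivial, and its multiplication table reproduces the threefold-way monoid \{1, 0, -1\}.

```python
# The Brauer monoid of R: L = {R, C}, Br(R) = {[R], [H]}, Br(C) = {[C]},
# so BR(R) = {[R], [H], [C]} -- three elements.

def compositum(F1, F2):
    # the smallest field containing both: R unless a C is involved
    return 'R' if F1 == F2 == 'R' else 'C'

def extend(x, E):
    # extension of scalars Br(F) -> Br(E); Br(C) is trivial
    return x if E == 'R' else 'C'

def mul(x, y):
    field = {'R': 'R', 'H': 'R', 'C': 'C'}   # the field each class lives over
    E = compositum(field[x], field[y])
    x, y = extend(x, E), extend(y, E)
    if E == 'C':
        return 'C'
    return 'R' if x == y else 'H'            # Br(R) = Z/2: [H][H] = [R]

table = {(x, y): mul(x, y) for x in 'RCH' for y in 'RCH'}

# the table is exactly multiplication in {1, 0, -1},
# with [R] <-> 1, [C] <-> 0, [H] <-> -1
num = {'R': 1, 'C': 0, 'H': -1}
matches_threefold = all(num[mul(x, y)] == num[x] * num[y]
                        for x in 'RCH' for y in 'RCH')
```

The super version of the same game, with ten classes instead of three, should give the monoid \mathbb{10}.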

Posted by: John Baez on July 27, 2014 6:20 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

but don’t forget that INVERSE semigroups look like semigroups of partial automorphisms and have lots of identities rather than just one, and they are essentially the same as ordered groupoids, so spitting on the many object case seems unlike you, John, champion of the many object view of mathematics. ;-)

Posted by: Tim Porter on July 27, 2014 10:10 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Dear John:

You shouldn’t be so harsh to something with an identity crisis, like a semigroup without a “1”.

Although you can just add an identity element, this sometimes masks some important features of the structure. Well, I’ve spent about 40 years thinking about them, so the margins of this response don’t allow me to amplify this remark. Glad to discuss it if you like.

Best, Stuart Margolis

PS There’s also a fairly extensive literature on Commutative Monoids that I could point you to if you’d like.

Posted by: Stuart Margolis on July 27, 2014 12:55 PM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

I’d hope it’s clear I’m joking when I say a mathematical object deserves to be “spat on and kicked”. However, I’ve rarely needed to think about semigroups that weren’t monoids or conveniently sitting inside monoids. Ditto for ‘semicategories’. But when I need them I will embrace them.

Posted by: John Baez on July 28, 2014 6:34 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

David wrote:

you should beef up the assignment k \mapsto Br(k) to a functor assigning to k the symmetric monoidal 2-category of Azumaya algebras over k, invertible bimodules and bimodule isos.

At one point my aesthetic was always to categorify everything as much as possible — but now ‘everyone’ is doing it, and I’m getting old and tired, so I’ve switched to a more minimalist aesthetic. I really just want to understand the \mathbb{10} in the ten-fold way as a unified mathematical entity, instead of a listing of things.

However, if this works, it will be good for someone to milk it for all it’s worth!

Back in 2004 I was pushing people to study a monoidal bicategory Alg(R) associated to any commutative ring R. The idea was that Alg(R) has:

  • R-algebras as objects

  • bimodules as morphisms

  • bimodule homomorphisms as 2-morphisms

The ‘core’ of this, consisting of the weakly invertible stuff, would be the 3-group with:

  • Azumaya algebras as objects

  • Morita equivalences as morphisms

  • bimodule isomorphisms as 2-morphisms

and the \pi_1, \pi_2 and \pi_3 of this would be the Brauer group, Picard group and unit group of R.

At the time I was unable to rigorously verify that Alg(R) is a monoidal bicategory. In fact, it should be a symmetric monoidal bicategory. I guess Mike’s paper on constructing symmetric monoidal bicategories includes a proof that this is really true. (Talk is cheap; proofs less so.)

But now you’re telling me to go a bit further and think about how all these Alg(R) guys fit together as we vary R. I’d thought about that a little bit, before.

The funny thing is that we have homomorphisms between commutative rings and also bimodules as potential morphisms between different Rs. And something similar happens even if we fix one particular commutative ring R: we have homomorphisms as well as bimodules going between different R-algebras. Mike deals with that by saying Alg(R) is not merely a symmetric monoidal bicategory, but something better: a fibrant symmetric monoidal double category.

So, something like Mike’s idea should also hold at the level where we allow R to vary. But somehow this doesn’t seem to buy us much more, since all different commutative rings are already sitting there inside Alg(\mathbb{Z}).

We can talk about commutative algebras over commutative algebras over … a commutative ring as long as we want, but is there any reason to care?

Posted by: John Baez on July 29, 2014 7:56 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

I guess Mike’s paper on constructing symmetric monoidal bicategories includes a proof that this is really true.

Yep. You probably also know about Niles Johnson’s work using this bicategory (and related ones) for Morita theory.

So, something like Mike’s idea should also hold at the level where we allow R to vary.

One way to assemble this sort of data is what I called in the paper a “2x1-category”: a (pseudo) 2-category internal to 1-categories, just as a double category (or “1x1-category”) is a 1-category internal to 1-categories. You have a category of objects (commutative rings and ring homomorphisms), a category of 1-cells (two-sided algebras and algebra homomorphisms), and a category of 2-cells (bimodules and bimodule maps). There are also other ways of presenting these data, e.g. you could talk about a functor CRing \to DblCat. Neither of these includes bimodules between commutative rings explicitly, but you can see them by regarding a commutative ring as an algebra over itself in the tautological way. I guess you could maybe try to include them explicitly in a more triple-categorical sort of structure.

I can’t say based on my own experience whether or not this buys us anything, but I remember Peter May caring about structures of this sort at some point — I think towards the end of Parametrized homotopy theory they had to look at an analogous thing where for every space B there was a bicategory (actually a fibrant double category) of spaces-over-B and parametrized-spectra-over-spaces-over-B, varying as B varies. And IIRC Chris Douglas and his collaborators have used related structures to talk about “conformal nets”, whatever those are.

Posted by: Mike Shulman on July 29, 2014 7:15 PM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Mike wrote:

Yep. You probably also know about Niles Johnson’s work using this bicategory (and related ones) for Morita theory.

No, I didn’t! Thanks for pointing it out.

Posted by: John Baez on July 30, 2014 4:41 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

I knew that you were not trying to discover a huge intricate sunken cathedral, but being concrete (and real-world relevant); I’m just in a bimodule/algebra frame of mind at the moment.

One reason that people care about such things is that this iterated algebra construction is one way we can access n-vector spaces. Also, Urs is currently thinking about how to globalise the arithmetic part of algebraic topology, instead of working ‘prime-by-prime’, in his cohesive setup, and this feels very much like it. All these things pasted together should say something about twists of differential algebraic K-theory, but perhaps that’s a bit of wishful thinking.

Posted by: David Roberts on July 30, 2014 6:47 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

I am vaguely trying to get a glimpse of what you are talking about here and what it is motivated by. Warning: since I understand only a tiny bit of the above (and part 1), and probably less than 1% of that category theory jargon, the following question may appear strange. I also haven’t looked at your division algebra-quantum theory paper.

If I understood you correctly, then with the threefold way you have some way to “cook up” higher-dimensional Hilbert spaces, via some tensor product construction. I have no idea, though, how you came up with the results in the product table (is this in the division algebra paper?). That is, on those product Hilbert spaces one again has one of the two major symmetries acting. Within the “tenfold way” that could be either time reversal or charge conjugation. Since, as said, I do not know how this construction works, I have no idea whether it can be used to construct new Hilbert space operators, and in particular whether it would be partially extendable to the tenfold way, like by assuming some product behaviour for the “missing symmetry”.

Like, could one use your construction by assuming that no symmetry for the missing symmetry (i.e. 0) would still be no symmetry for that missing symmetry after taking the product?

Posted by: nad on July 29, 2014 12:25 PM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

Nad wrote:

If I understood you correctly then with the threefold way you have some way to “cook up” higher dimensional Hilbert spaces, via some tensor product construction.

In quantum mechanics, as you know, when we combine two systems we take the tensor product of their Hilbert spaces. This is fine if we’re working only with complex Hilbert spaces, and it’s also fine if we’re working only with real Hilbert spaces, but it doesn’t work with quaternionic Hilbert spaces!

The reason is that the quaternions are noncommutative. A quaternionic vector space is a left module over the quaternions, but you can’t tensor two left modules over a noncommutative algebra and get another left module. Stephen Adler wrote a book on quaternionic quantum mechanics where he got rather confused by this issue (in my humble opinion).

You might wonder why we should care about real or quaternionic Hilbert spaces. The answer is that even if you take complex Hilbert spaces as fundamental, the study of conjugate-linear symmetries, like time reversal and charge conjugation, gives rise to real and quaternionic Hilbert spaces. I explained this here:

and this paper probably provides some of the motivation — the ‘why should we do this?’ stuff — that you’re missing.

But very roughly:

  • quantum systems that don’t have time reversal symmetry are described by complex Hilbert spaces;

  • quantum systems that do have time reversal symmetry are described by complex Hilbert spaces with an operator T that is norm-preserving, conjugate-linear (i T = -T i) and obeys T^2 = \pm 1. When T^2 = 1 we can obtain from this data a real Hilbert space; when T^2 = -1 we can obtain from this data a quaternionic Hilbert space. Conversely, from a real or quaternionic Hilbert space we can get a complex Hilbert space with this extra data!

We can then think about combining these systems. So, we’re tensoring two complex Hilbert spaces equipped with extra data — or no extra data. But it turns out this problem is the same as the problem of tensoring real, complex and quaternionic Hilbert spaces!

For example, if we have two complex Hilbert spaces H and H' with time reversal operators T and T' obeying T^2 = -1, {T'}^2 = -1, the Hilbert space H \otimes H' gets an operator T \otimes T' with (T \otimes T')^2 = 1. This corresponds to how we can tensor two quaternionic Hilbert spaces and get a real one!
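This sign rule is easy to check numerically. Below is a sketch of my own (hand-rolled matrix helpers, no library assumed): a conjugate-linear operator T is modelled as v ↦ M·conj(v), so the matrix of T² is M·conj(M), and tensoring two quaternionic-type operators gives a real-type one.

```python
# Check that (T tensor T')^2 = +1 when T^2 = T'^2 = -1.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def conj(M):
    return [[z.conjugate() for z in row] for row in M]

def kron(A, B):
    # Kronecker product: the matrix of T tensor T'
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def square(M):
    # the matrix of T^2 when T acts by v |-> M conj(v)
    return matmul(M, conj(M))

J = [[0, -1], [1, 0]]       # a time reversal with T^2 = -1 (quaternionic type)
single_square = square(J)   # [[-1, 0], [0, -1]]

K = kron(J, J)              # T tensor T' on the tensor product space
tensor_square = square(K)   # comes out +1: quaternionic x quaternionic = real
```

The same helpers can be used to check the other cells of the table below.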

Further analysis (done in this paper) reveals the multiplication table here:

\begin{array}{cccc} \mathbf{\otimes} & \mathbf{real} & \mathbf{complex} & \mathbf{quaternionic} \\ \mathbf{real} & real & complex & quaternionic \\ \mathbf{complex} & complex & complex & complex \\ \mathbf{quaternionic} & quaternionic & complex & real \end{array}

This ultimately arises from the fact that if we tensor the algebras \mathbb{R}, \mathbb{C}, \mathbb{H}, thinking of them as algebras over the real numbers, we get other algebras as follows:

\begin{array}{lrrr} \mathbf{\otimes} & \mathbf{\mathbb{R}} & \mathbf{\mathbb{C}} & \mathbf{\mathbb{H}} \\ \mathbf{\mathbb{R}} & \mathbb{R} & \mathbb{C} & \mathbb{H} \\ \mathbf{\mathbb{C}} & \mathbb{C} & \mathbb{C} \oplus \mathbb{C} & \mathbb{C}[2] \\ \mathbf{\mathbb{H}} & \mathbb{H} & \mathbb{C}[2] & \mathbb{R}[4] \end{array}

But how, exactly, does it arise?

My post here, and especially my comments on it, work out exactly how, in a way that generalizes to the 10-fold way, which is the ‘super’ version of the same story… arising naturally when you consider charge conjugation as well as time reversal.
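One cheap consistency check of that second table: real dimensions must multiply under tensor product over \mathbb{R}. A few lines of Python (the string labels are my own shorthand for the entries):

```python
# dim R = 1, dim C = 2, dim H = 4, and matrix algebras scale by n^2
dim = {'R': 1, 'C': 2, 'H': 4, 'C+C': 4, 'C[2]': 8, 'R[4]': 16}

table = {
    ('R', 'R'): 'R',   ('R', 'C'): 'C',    ('R', 'H'): 'H',
    ('C', 'R'): 'C',   ('C', 'C'): 'C+C',  ('C', 'H'): 'C[2]',
    ('H', 'R'): 'H',   ('H', 'C'): 'C[2]', ('H', 'H'): 'R[4]',
}

# dim(A tensor_R B) = dim A * dim B in every cell
dims_consistent = all(dim[table[a, b]] == dim[a] * dim[b]
                      for (a, b) in table)
```

For instance the quaternionic corner: dim(\mathbb{H} \otimes \mathbb{H}) = 4 \times 4 = 16 = dim \mathbb{R}[4].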

Posted by: John Baez on July 30, 2014 5:20 AM | Permalink | Reply to this

Re: The Ten-Fold Way (Part 2)

thanks for the explanations.

When T^2 = -1 we can obtain from this data a quaternionic Hilbert space.

For example, if we have two complex Hilbert spaces H and H’ with time reversal operators T and T’ obeying…

I understand this, but when you construct this “oplax functor” (which is still quite ominous to me), then you intrinsically use some “reductions”, which you can “trivially” “blow up” to the full tensor product.

Furthermore you wrote:

So, we can form the collage…

What about the product table?

Do you assume that if you have two tensors A and A' (as time reversal and charge conjugation), which e.g. square to 1 and -1, and B and B' which square to 0 and -1, then your “new” time reversal would probably be (?) A \otimes B (squaring to 0) and the new charge conjugation A' \otimes B' (squaring to 1)?

I’ll then need to check that there’s an oplax monoidal functor…

So it seems you want to check whether you can “blow up” the corresponding product table spaces to a full tensor product?

The answer is that even if you take complex Hilbert spaces as fundamental, the study of conjugate-linear symmetries, like time reversal and charge conjugation, gives rise to real and quaternionic Hilbert spaces. I explained this here:

Unfortunately I can’t afford to read your division algebra-quantum theory paper; I just want to get a rough overview of what’s happening here. It may be mathematically interesting to construct quaternionic quantum spaces, but for me it is at the moment sufficient to talk about those operators T, with their given properties. In particular it is not clear to me what happens to this topological insulator classification if you perform those tensor products.

Posted by: nad on July 30, 2014 8:41 AM | Permalink | Reply to this
Read the post The Tenfold Way (Part 3)
Weblog: The n-Category Café
Excerpt: A detailed introduction to the Brauer--Wall monoid, which gives an algebraic explanation of the ten-fold way.
Tracked: January 8, 2023 12:27 AM
