Brauer’s Lemma
Posted by John Baez
The Wedderburn–Artin Theorem has always been one of my friends in ring theory, not because I understand it well, but because it reduces a potentially scary general situation to one that fits pretty nicely inside my mathematical physics brain. One proof of it uses a cute result called ‘Brauer’s Lemma’.
The Wedderburn–Artin theorem says that if you have a semisimple ring $R$, it must be a finite product of matrix algebras over division rings. When my mathematical physicist brain sees this, it mutters: “oh, good — like matrices with real, complex or quaternionic entries!” These are familiar from the foundations of quantum mechanics, the study of classical Lie groups, and so on.
But my pals $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{H}$ are much more than division rings: they are division algebras over a field, namely the field of real numbers! Thus they pertain to a special case of the Artin–Wedderburn Theorem, which would have been familiar to Wedderburn. The full-fledged theorem breaks free from these shackles, and works with algebras over the integers: that is, rings. And lately, as I struggle to learn the rudiments of algebraic geometry, I’ve been trying to work over the integers myself.
But what’s a ‘semisimple’ ring, and how do such rings pull you into the study of division rings? And why is the title of this article ‘Brauer’s Lemma’?
Any ring $R$ has a category of (left) modules, say ${}_R\mathrm{Mod}$. A module is simple if it has no nontrivial submodules — or equivalently, no nontrivial quotient modules. Simple modules are like primes: they’re building blocks of more interesting modules. The easiest way to build more complicated modules is to take direct sums, so we say a module is semisimple if it’s a finite direct sum of simple modules.
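To keep these definitions concrete, here is a small example over the integers, not needed for anything later. A $\mathbb{Z}$-module is just an abelian group, and the simple ones are the cyclic groups $\mathbb{Z}/p$ of prime order. So
$$ \mathbb{Z}/6 \;\cong\; \mathbb{Z}/2 \oplus \mathbb{Z}/3 $$
is a semisimple $\mathbb{Z}$-module, while $\mathbb{Z}$ itself is not: it has no simple submodules at all, since every nonzero subgroup $n\mathbb{Z}$ properly contains the smaller subgroup $2n\mathbb{Z}$.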
The idea of ‘semisimplicity’ is one of those foundational notions that spreads through many areas of algebra. It’s a good name, because when you get used to algebra, the ‘semisimple’ objects tend to be the ones you can more easily understand. They may not be simple to work with, but at least they’re… halfway simple.
So much for semisimple modules. What about rings?
A ring is semisimple if it’s semisimple as a left module over itself. And don’t worry: in case you’re thinking this should be called ‘left’ semisimple and there’s also a ‘right’ version, there’s a nice fact: a ring is semisimple as a left module over itself iff it’s semisimple as a right module over itself!
For example, think about $M_n(k)$, the ring of $n \times n$ matrices over some field $k$. This has an obvious module, namely $k^n$. This module is simple, and every module is a direct sum of (possibly infinitely many) copies of this one. In particular, as a left module over itself, $M_n(k)$ is a direct sum of $n$ copies of $k^n$. That’s because when you multiply two matrices $A$ and $B$, you’re really multiplying $A$ and a list of $n$ different column vectors, one for each column of $B$. So this ring is semisimple.
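Concretely, in the $2 \times 2$ case this column-by-column decomposition reads
$$ M_2(k) \;\cong\; k^2 \oplus k^2, \qquad \begin{pmatrix} a & b \\ c & d \end{pmatrix} \;\mapsto\; \left( \begin{pmatrix} a \\ c \end{pmatrix} , \; \begin{pmatrix} b \\ d \end{pmatrix} \right) , $$
and left multiplication by any matrix acts on each column separately, so this is an isomorphism of left $M_2(k)$-modules.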
Puzzle 1. Show that $M_n(D)$ is semisimple if $D$ is any division ring: that is, a ring where every nonzero element has a multiplicative inverse.
Puzzle 2. Show that $\mathbb{Z}/n$ is not semisimple if $n$ is not squarefree.
So matrix algebras over division rings are semisimple. It’s also easy to see that a direct sum of two semisimple rings is semisimple. The Wedderburn–Artin Theorem says that’s all.
Wedderburn–Artin Theorem. Every semisimple ring is isomorphic to a finite direct sum of matrix algebras over division rings.
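In symbols, this says that any semisimple ring $R$ looks like
$$ R \;\cong\; M_{n_1}(D_1) \oplus \cdots \oplus M_{n_k}(D_k) $$
for some natural numbers $n_1, \dots, n_k$ and division rings $D_1, \dots, D_k$.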
If you prefer (associative, unital) algebras over fields to rings, here’s another version:
Wedderburn–Artin Theorem for Algebras over Fields. Every semisimple algebra over a field $k$ is isomorphic to a finite direct sum of matrix algebras over division algebras over $k$.
I won’t bother to define ‘semisimple algebra’ and ‘division algebra’, except to say that they go just like the definitions of semisimple ring and division ring: just cross out ‘ring’ and write in ‘algebra over $k$’ everywhere.
Anyway, how do we prove Wedderburn–Artin? There are various proofs. I was naturally drawn to this one, due to the title:
- William K. Nicholson, A short proof of the Wedderburn-Artin theorem, New Zealand J. Math. 22 (1993), 83–86.
A key step here is something called ‘Brauer’s Lemma’. I like it because it shows you one reason division rings get into the act.
Brauer’s Lemma. Suppose $R$ is a ring and $K \subseteq R$ is a minimal left ideal with $K^2 \neq 0$. Then $K = Re$ for some $e \in K$ with $e^2 = e$, and $eRe$ is a division ring.
Here a minimal left ideal is a nonzero left ideal for which the only smaller left ideal is $\{0\}$.
My mathematical physicist brain processes this as follows. Suppose $R$ is the ring of $n \times n$ matrices over some division ring like $\mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$. This has a minimal left ideal $K$ consisting of matrices with just one nonzero column, say the $i$th column. You can see $K^2 \neq 0$. Then how do we write $K = Re$? We can take $e$ to be the matrix that’s zero everywhere except for a 1 in the $i$th row and $i$th column. Multiplying any matrix on the right by this kills off everything except the $i$th column!
Clearly $e^2 = e$. Lastly, $eRe$ consists of matrices that are zero except in the $i$th row and $i$th column, i.e. except for the single $(i,i)$ entry. So $eRe$ is isomorphic to the division ring we started with!
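For instance, here is the $2 \times 2$ case with $i = 1$ written out, using $\ast$ for entries that can be anything:
$$ K = \left\{ \begin{pmatrix} \ast & 0 \\ \ast & 0 \end{pmatrix} \right\} , \qquad e = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} , \qquad eRe = \left\{ \begin{pmatrix} \ast & 0 \\ 0 & 0 \end{pmatrix} \right\} , $$
and indeed
$$ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \;=\; \begin{pmatrix} a & 0 \\ c & 0 \end{pmatrix} , $$
so multiplying on the right by $e$ keeps only the first column.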
The reason I admire Brauer’s Lemma so much is that it works for any ring, and coughs up a division ring. Well, at least it works for any ring that has a minimal left ideal $K$ with $K^2 \neq 0$. There are certainly rings without minimal left ideals — but these are called nonartinian, a term of abuse. And there are plenty of rings with minimal left ideals $K$ that have $K^2 = 0$. For example, take $\mathbb{Z}/4$, and look at the ideal generated by $2$. But still, Brauer’s Lemma is ridiculously general.
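For the record, here is the check for that last example: the ideal generated by $2$ in $\mathbb{Z}/4$ is $K = \{0, 2\}$, which is minimal since its only proper subgroup is $\{0\}$, and
$$ K^2 = \{0\} \quad \textrm{because} \quad 2 \cdot 2 = 4 = 0 \;\; \textrm{in } \mathbb{Z}/4 . $$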
How do you prove it? Luckily, no deep techniques or preliminary lemmas are required! You just need to follow your nose in a fairly skillful way, using the minimality condition a lot. I found Nicholson’s proof cryptically terse, so here’s a version with more steps filled in:
Proof of Brauer’s Lemma. Since $K^2 \neq 0$, we must have $Ku \neq 0$ for some $u \in K$. Of course $Ku \subseteq K$. But $Ku$ is a left ideal contained in $K$, so by minimality
$$ Ku = K . $$
Thus $eu = u$ for some $e \in K$. Now, let
$$ L = \{ a \in K : \, au = 0 \} . $$
$L$ is a left ideal since for any $r \in R$ and $a \in L$ we have
$$ (ra)u = r(au) = 0 . $$
Note $L \subseteq K$ by definition, so by the minimality of $K$ we must have either $L = 0$ or $L = K$. But $e$ is in $K$ but not in $L$, since $eu = u \neq 0$, so we must have $L = 0$. In other words,
$$ a \in K \textrm{ and } au = 0 \quad \Longrightarrow \quad a = 0 . $$
What can we do with this? Well, $eu = u$, so $e(eu) = eu$, so $(e^2 - e)u = 0$, so let’s take the $a$ above to be $e^2 - e$, which lies in $K$. We conclude that $e^2 - e = 0$! So $e$ is what we call an idempotent:
$$ e^2 = e . $$
Now we claim that $K = Re$. The reason is that $e$ is in the left ideal $K$, so $Re \subseteq K$. Since $K$ is minimal this implies either $Re = 0$ or $Re = K$. But $0 \neq e = e^2 \in Re$, so $Re = K$.
Finally, why is $eRe$ a division ring? Its unit is $e$, of course. Suppose $ere \in eRe$ is nonzero. Why does it have an inverse? Well, $R(ere)$ is a nonzero left ideal contained in $K$, so by minimality $R(ere) = K$. So, we must have $b(ere) = e$ for some $b \in R$. But this implies $ebe$ is the left inverse of $ere$ in $eRe$:
$$ (ebe)(ere) = eb(e^2)re = eb(ere) = e \bigl( b(ere) \bigr) = ee = e . $$
Finally, in a ring where every nonzero element has a left inverse, every nonzero element has a two-sided inverse, so it’s a division ring! To see this, say $x \neq 0$. It has a left inverse, which I’ll call $y$ now. Why is $y$ also the right inverse of $x$? Well, $y$ is nonzero (since $yx = e \neq 0$), so it must also have its own left inverse, which I’ll call $z$. So we have $yx = e$ and $zy = e$, giving
$$ z = ze = z(yx) = (zy)x = ex = x . $$
Thus $x = z$ is the left inverse of $y$. So $y$ is the right inverse of $x$. ▮
Whew! That’s algebra at its rawest. I used to hate this stuff. But it’s actually impressive how you can scrabble around with equations, using just addition and multiplication, and get something nontrivial.
If you think my proof was too long to understand, with too many small steps cluttering the overall logic, you might prefer Nicholson’s. Here’s how he says it:
Proof of Brauer’s Lemma. Since $K^2 \neq 0$, certainly $Ku \neq 0$ for some $u \in K$. Hence $Ku = K$ by minimality, so $eu = u$ for some $e \in K$. If $L = \{ a \in K : \, au = 0 \}$, this implies $e^2 - e \in L$. Now $L$ is a left ideal, $L \subseteq K$, and $L \neq K$ because $Ku \neq 0$. So $L = 0$ and it follows that $e^2 = e$ and $K = Re$.
Now let $0 \neq ere \in eRe$. Then $0 \neq R(ere) \subseteq K$ so $R(ere) = K$ by minimality, say $b(ere) = e$ with $b \in R$. Hence $(ebe)(ere) = e$, so $ere$ has a left inverse in $eRe$. It follows that $eRe$ is a division ring. ▮
Re: Brauer’s Lemma
One of those lefts should be right?