### Brauer’s Lemma

#### Posted by John Baez

The Wedderburn–Artin Theorem has always been one of my friends in ring theory, not because I understand it well, but because it reduces a potentially scary general situation to one that fits pretty nicely inside my mathematical physics brain. One proof of it uses a cute result called ‘Brauer’s Lemma’.

The Wedderburn–Artin theorem says that if you have a semisimple ring $R$, it must be a finite product of matrix algebras over division rings. When my mathematical physicist brain sees this, it mutters: “oh, good — like $n \times n$ matrices with real, complex or quaternionic entries!” These are familiar from the foundations of quantum mechanics, the study of classical Lie groups, and so on.

But my pals $\mathbb{R}, \mathbb{C}$ and $\mathbb{H}$ are much more than division rings: they are division algebras over a field, namely the field of real numbers! Thus they pertain to a *special case* of the Wedderburn–Artin Theorem, which would have been familiar to Wedderburn. The full-fledged theorem breaks free from these shackles, and works with algebras over the integers: that is, rings. And lately, as I struggle to learn the rudiments of algebraic geometry, I’ve been trying to work over the integers myself.

But what’s a ‘semisimple’ ring, and how does it pull you into the study of division rings? And why is the title of this article ‘Brauer’s Lemma’?

Any ring $R$ has a category of (left) modules, say $R Mod$. A module is **simple** if it’s nonzero and its only submodules are $0$ and itself — or equivalently, its only quotient modules are $0$ and itself. Simple modules are like primes: they’re building blocks of more interesting modules. The easiest way to build more complicated modules is to take direct sums, so we say a module is **semisimple** if it’s a finite direct sum of simple modules.

The idea of ‘semisimplicity’ is one of those foundational notions that spreads through many areas of algebra. It’s a good name, because when you get used to algebra, the ‘semisimple’ objects tend to be the ones you can more easily understand. They may not be simple to work with, but at least they’re… *halfway* simple.

So much for semisimple modules. What about rings?

A *ring* is **semisimple** if it’s semisimple as a left module over itself. And don’t worry if you’re thinking this should be called ‘left’ semisimple, with a separate ‘right’ version: there’s a nice fact that a ring is semisimple as a left module over itself iff it’s semisimple as a right module over itself!

For example, think about $R = M_n(k)$, the ring of $n \times n$ matrices over some field $k$. This has an obvious module, namely $k^n$. This module is simple, and *every* module is a direct sum of (possibly infinitely many) copies of this one. In particular, as a left module over itself, $M_n(k)$ is a direct sum of $n$ copies of $k^n$. That’s because when you multiply two $n \times n$ matrices $A$ and $B$, you’re really multiplying $A$ and a list of $n$ different column vectors, one for each column of $B$. So this ring is semisimple.
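If you like seeing this concretely, here’s a quick numerical sanity check (in Python with NumPy, purely as an illustration — the specific matrices are random) that left multiplication in $M_n(k)$ acts on each column of $B$ separately:

```python
import numpy as np

# Left multiplication in M_n(k) acts on each column separately: the j-th
# column of A @ B is A applied to the j-th column of B.  This is why, as
# a left module over itself, M_n(k) splits as n copies of k^n.
n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

for j in range(n):
    # column j of the product depends only on column j of B
    assert np.allclose((A @ B)[:, j], A @ B[:, j])
```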

**Puzzle 1.** Show that $R = M_n(k)$ is semisimple if $k$ is any **division ring**: that is, a ring where every nonzero element has a multiplicative inverse.

**Puzzle 2.** Show that $R = M_n(k)$ is *not* semisimple if $k = \mathbb{Z}$.

So matrix algebras over division rings are semisimple. It’s also easy to see that a direct sum of two semisimple rings is semisimple. The Wedderburn–Artin Theorem says *that’s all*.

**Wedderburn–Artin Theorem**. Every semisimple ring is isomorphic to a finite direct sum of matrix algebras over division rings.

If you prefer (associative, unital) algebras over fields to rings, here’s another version:

**Wedderburn–Artin Theorem for Algebras over Fields**. Every semisimple algebra over a field $k$ is isomorphic to a finite direct sum of matrix algebras over division algebras over $k$.

I won’t bother to define ‘semisimple algebra’ and ‘division algebra’, except to say that they go just like the definitions of semisimple ring and division ring: just cross out ‘ring’ and write in ‘algebra over $k$’ everywhere.

Anyway, how do we *prove* Wedderburn–Artin? There are various proofs. I was naturally drawn to this one, due to the title:

- William K. Nicholson, A short proof of the Wedderburn–Artin theorem, *New Zealand J. Math.* **22** (1993), 83–86.

A key step here is something called ‘Brauer’s Lemma’. I like it because it shows you one reason division rings get into the act.

**Brauer’s Lemma.** Suppose $R$ is a ring and $K \subseteq R$ is a minimal left ideal with $K^2 \ne 0$. Then $K = R e$ for some $e \in K$ with $e^2 = e$, and $e R e$ is a division ring.

Here a **minimal left ideal** is a nonzero left ideal for which the only smaller left ideal is $\{0\}$.

My mathematical physicist brain processes this as follows. Suppose $R$ is the ring of $n \times n$ matrices over some division ring like $\mathbb{R}, \mathbb{C}$ or $\mathbb{H}$. This has a minimal left ideal $K$ consisting of matrices with just one nonzero column, say the $j$th column. You can see $K^2 \ne 0$. Then how do we write $K = R e$? We can take $e$ to be the matrix that’s zero everywhere except for a 1 in the $j$th row and $j$th column. Multiplying any matrix on the right by this kills off everything except its $j$th column!

Clearly $e^2 = e$. Lastly, $e R e$ consists of matrices that are zero except in the $j$th row and $j$th column. So $e R e$ is isomorphic to the division ring we started with!
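Here’s a small numerical sketch of this picture with NumPy, taking $n = 3$ and $j = 1$ (zero-indexed) and a random matrix $A$, just for illustration:

```python
import numpy as np

# Take n = 3 and j = 1 (zero-indexed), so e is the matrix unit E_{jj}.
n, j = 3, 1
e = np.zeros((n, n))
e[j, j] = 1.0

rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))

assert np.allclose(e @ e, e)          # e is idempotent

Ae = A @ e                            # a typical element of K = R e
mask = np.ones((n, n), dtype=bool)
mask[:, j] = False
assert np.allclose(Ae[mask], 0.0)     # only the j-th column survives

eAe = e @ A @ e                       # a typical element of e R e
assert np.allclose(eAe, A[j, j] * e)  # e R e = scalar multiples of e
```

So here $e R e$ is just the scalar multiples of $e$ — a copy of $\mathbb{R}$, the division ring we started with.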

The reason I admire Brauer’s Lemma so much is that it works for *any* ring, and coughs up a division ring. Well, at least it works for any ring that has a minimal left ideal $K$ with $K^2 \ne 0$. There are certainly rings without minimal left ideals — but these are called nonartinian, a term of abuse. And there are plenty of rings with minimal left ideals $K$ that have $K^2 = 0$. For example, take $\mathbb{R}[x]/\langle x^2 \rangle$, and look at the ideal generated by $x$. But still, Brauer’s Lemma is ridiculously general.
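To see the $K^2 = 0$ example concretely, here’s a tiny model of $\mathbb{R}[x]/\langle x^2 \rangle$ in Python, representing $a + b x$ as the pair $(a, b)$ — just a sketch of the multiplication rule, nothing more:

```python
# Model R = ℝ[x]/⟨x²⟩ by representing a + b·x as the pair (a, b):
# (a + b·x)(c + d·x) = ac + (ad + bc)·x, because x² = 0.
def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c, a * d + b * c)

x = (0.0, 1.0)                   # the class of x, generating the ideal K
assert mul(x, x) == (0.0, 0.0)   # x·x = 0

# K = ℝ·x, so the product of any two elements of K vanishes: K² = 0
assert mul((0.0, 3.0), (0.0, -2.0)) == (0.0, 0.0)
```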

How do you prove it? Luckily, no deep techniques or preliminary lemmas are required! You just need to follow your nose in a fairly skillful way, using the minimality condition a lot. I found Nicholson’s proof cryptically terse, so here’s a version with more steps filled in:

**Proof of Brauer’s Lemma.** Since $0 \ne K^2$, we must have $K u \ne 0$ for some $u \in K$. Of course $u \ne 0$. But $K u$ is a left ideal contained in $K$, so by minimality

$K u = K.$

Thus $u = e u$ for some $e \in K$. Now, let

$L = \{a \in K : a u = 0 \}.$

$L$ is a left ideal since for any $r \in R$ we have

$a \in L \; \implies \; a u = 0 \; \implies \; r a u = 0 \; \implies \; r a \in L.$

Note $L \subseteq K$ by definition, so by the minimality of $K$ we must have either $L = 0$ or $L = K$. But $e$ lies in $K$ and not in $L$, since $e u = u \ne 0$. So we must have $L = 0$. In other words,

$a \in K \ \text{and} \ a u = 0 \quad \implies \quad a = 0.$

What can we do with this? Well, $u = e u$ so $e u = e^2 u$ so $(e - e^2)u = 0$, so let’s take $a$ above to be $e - e^2$. We conclude that $e - e^2 = 0$! So $e$ is what we call an **idempotent**:

$e^2 = e.$

Now we claim that $K = R e$. The reason is that $e$ is in the left ideal $K$, so $R e \subseteq K$. Since $K$ is minimal this implies either $R e = 0$ or $R e = K$. But $e \ne 0$ (since $e u = u \ne 0$), so $R e \ne 0$, and thus $R e = K$.

Finally, why is $e R e$ a division ring? Its unit is $e$, of course. Suppose $b \in e R e$ is nonzero. Why does it have an inverse? Well, $R b$ is a nonzero left ideal (it contains $b$) contained in $R e = K$, since $b \in e R e \subseteq R e$. So by minimality $R b = R e$. So, we must have $e = r b$ for some $r \in R$. But this implies $e r e$ is a left inverse of $b$ in $e R e$:

$(e r e) b = e r (e b) = e r b = e^2 = e.$

Finally, in a ring where every nonzero element has a *left* inverse, every nonzero element has a two-sided inverse, so it’s a division ring! To see this, say $b \ne 0$. It has a left inverse, which I’ll call $x$ now. Why is $x$ also the right inverse of $b$? Well, $x$ must also have its own left inverse, which I’ll call $y$. So we have $x b = 1$ and $y x = 1$, giving

$y = y (x b) = (y x) b = b.$

Thus $b$ is the left inverse of $x$. So $x$ is the right inverse of $b$. ▮
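If you want to see the key formula $(e r e) b = e$ in action, here’s a sanity check in $R = M_2(\mathbb{R})$ with $e$ the matrix unit having a single 1 in the top-left corner; the particular matrix $A$ and the choice of $r$ below are just illustrative:

```python
import numpy as np

# R = M_2(ℝ), e the matrix unit with a 1 in the top-left corner.
e = np.array([[1.0, 0.0],
              [0.0, 0.0]])
A = np.array([[2.5, 1.0],
              [-1.0, 4.0]])

b = e @ A @ e                 # a nonzero element of e R e (here 2.5·e)
r = np.eye(2) / A[0, 0]       # one choice of r with r b = e
assert np.allclose(r @ b, e)

left_inv = e @ r @ e          # the proof's left inverse of b in e R e
assert np.allclose(left_inv @ b, e)  # (e r e) b = e, the unit of e R e
assert np.allclose(b @ left_inv, e)  # and it's two-sided, as the proof shows
```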

Whew! That’s algebra at its rawest. I used to hate this stuff. But it’s actually impressive how you can scrabble around with equations, using just addition and multiplication, and get something nontrivial.

If you think my proof was too long to understand, with too many small steps cluttering the overall logic, you might prefer Nicholson’s. Here’s how he says it:

**Proof of Brauer’s Lemma.** Since $0 \ne K^2$, certainly $K u \ne 0$ for some $u \in K$. Hence $K u = K$ by minimality, so $e u = u$ for some $e \in K$. If $r \in K$, this implies $r e - r \in L = \{a \in K \vert a u = 0\}$. Now $L$ is a left ideal, $L \subseteq K$, and $L \ne K$ because $e u \ne 0$. So $L = 0$ and it follows that $e^2 = e$ and $K = R e$.

Now let $0 \ne b \in e R e$. Then $0 \ne R b \subseteq R e$ so $R b = R e$ by minimality, say $e = r b$. Hence $(e r e)b = e r(e b) = e r b = e^2 = e$, so $b$ has a left inverse in $e R e$. It follows that $e R e$ is a division ring. ▮

## Re: Brauer’s Lemma

One of those lefts should be right?