

August 30, 2023

Grothendieck–Galois–Brauer Theory (Part 3)

Posted by John Baez

Last time we saw that Galois theory is secretly all about covering spaces. Among other things, I told you that a field $k$ gives a funny kind of space called $\mathrm{Spec}(k)$, and a separable commutative algebra over $k$ gives a covering space of $\mathrm{Spec}(k)$. I gave you a lot of definitions and stated a few big theorems. But I didn’t prove anything, and some big issues were left untouched.

One of these is: why is a separable commutative algebra over $k$ like a covering space of $\mathrm{Spec}(k)$? Today I’ll talk about that, and actually prove a few things.

I’ll start with some hand-wavy geometry. For any field $k$, the space $\mathrm{Spec}(k)$ is a lot like a point. So, its covering spaces should be zero-dimensional in some sense. That’s what the concept of ‘separable’ commutative algebra tries to formalize. Separability is a kind of zero-dimensionality.

That seems hard to understand at first, because the definition of a separable algebra doesn’t seem to be about geometry! An algebra $A$ over a field $k$ is separable if there exists a map $\Delta \colon A \to A \otimes A$ with

$$ m(\Delta(a)) = a \qquad \qquad (1) $$

and

$$ a \Delta(b) = \Delta(a b) = \Delta(a) b \qquad \qquad (2) $$

I motivated these laws in Part 1 using the relation between sets and vector spaces. But what do they have to do with zero-dimensionality — apart from the fact that sets are topologically zero-dimensional, which is ultimately the whole point?
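
Here’s a quick example to keep in mind (a standard one, not spelled out in the post). Take $A = k^n$ with its standard basis of idempotents $e_1, \dots, e_n$, so that $e_i e_j = \delta_{i j} e_i$ and $e_1 + \cdots + e_n = 1$, and set

$$ \Delta(a) = \sum_{i=1}^n a e_i \otimes e_i $$

Then $m(\Delta(a)) = \sum_i a e_i = a$, giving (1), and writing $b = \sum_i b_i e_i$ we get $a \Delta(b) = \Delta(a b) = \Delta(a) b = \sum_i a b_i e_i \otimes e_i$, giving (2). Geometrically, $k^n$ is the algebra of functions on an $n$-point set, which is about as zero-dimensional as a space gets.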

Today I want to show that separable commutative algebras can be described in a more geometrical way, using what geometers call 1-forms. On a manifold, a 1-form is something that eats a vector field and spits out a smooth function. If all the 1-forms on a manifold are zero, that means all the vector fields are zero. And that happens if and only if the manifold is zero-dimensional!

So, what we want is an algebraic way of talking about 1-forms for any commutative algebra $A$. These are called ‘Kähler differentials’. I’ll explain them, and prove that an algebra is separable iff all its Kähler differentials are zero.

By the way: while I’ll mainly talk about commutative algebras $A$ over a field $k$, the whole subject gets even more interesting when we let $k$ be a more general commutative ring. I don’t want to get into that too much yet. But many of the things I’ll be doing work just as easily for commutative rings. So I’ll work at that level of generality when it’s easy, but switch to demanding that $k$ be a field whenever that helps. Please think of $k$ as a field whenever it helps you.

Kähler differentials

For any commutative algebra $A$ over any commutative ring $k$ we define the Kähler differentials, $\Omega^1(A)$, to be the module over $A$ freely generated by symbols $d a$, one for each $a \in A$, modulo relations

$$ d(a + b) = d a + d b, \qquad d(\alpha a) = \alpha \, d a $$
$$ d(a b) = a \, d b + b \, d a, \qquad d 1 = 0 $$

where $a, b \in A$ and $\alpha \in k$. These relations are just the laws obeyed by derivatives. If you’ve studied 1-forms, you should have seen these relations when $A$ is the algebra of smooth functions on a manifold.
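
As a quick sanity check (a standard example, not needed for the argument below): for the polynomial algebra $A = k[x]$ the relations force $d(x^n) = n x^{n-1} \, d x$, so

$$ d p = p'(x) \, d x \qquad \text{for all } p \in k[x] $$

and in fact $\Omega^1(k[x])$ is the free $k[x]$-module on the single generator $d x$. So for this ‘one-dimensional’ algebra the Kähler differentials are far from zero, just as the geometry suggests.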

We can also describe Kähler differentials more abstractly. Say a derivation on $A$ is any map

$$ d \colon A \to M $$

obeying the four equations above, where now $M$ is any $A$-module. Then $d \colon A \to \Omega^1(A)$ is the universal derivation on $A$, meaning any derivation on $A$ factors uniquely through this one!

For example, suppose $k = \mathbb{R}$. If $A$ is the algebra of smooth real-valued functions on a manifold $X$, vector fields on $X$ give derivations $v \colon A \to A$, and the map sending vector fields to such derivations is bijective, so if we like we can say vector fields are such derivations. But the Kähler differentials $\Omega^1(A)$ are an algebraic variant of the 1-forms on $X$. By its universal property there’s a map sending Kähler differentials to 1-forms, and this map is surjective — but it’s not injective.

All these facts are more tricky than you’d expect: for example, the only known proof of the last one uses the axiom of choice, but nobody knows if this is truly necessary! But I digress. Today I just want to show this:

Theorem 14. Suppose $A$ is a finite-dimensional commutative algebra over a field $k$. Then $A$ is separable iff $\Omega^1(A) \cong 0$.

To prove this, we’ll need to relate the definition of separability to Kähler differentials.

First note that any algebra $A$ over a commutative ring has a multiplication map

$$ m \colon A \otimes A \to A $$

which is a map of $A,A$-bimodules. Call the kernel of this map $I$.

Note that $A \otimes A$ is a commutative algebra in its own right, and $I$ is an ideal in this algebra because $I$ is an $A,A$-bimodule:

$$ x \in I \implies (a \otimes b) x = a x b \in I $$

So, we can square the ideal $I$ and get another ideal $I^2$ in $A \otimes A$. And then we have:

Lemma 15. For any commutative algebra $A$ over a commutative ring $k$,

$$ \Omega^1(A) \cong I / I^2 $$

as $A$-modules.

I’ll show this fact later. But first let’s use it to prove our theorem! We’ll need an easy lemma:

Lemma 16. An algebra $A$ over a commutative ring is separable iff this short exact sequence of $A,A$-bimodules splits:

$$ 0 \longrightarrow I \stackrel{i}{\longrightarrow} A \otimes A \stackrel{m}{\longrightarrow} A \longrightarrow 0 $$

Proof. This is immediate from the definition. A splitting is a map $\Delta \colon A \to A \otimes A$ with

$$ m(\Delta(a)) = a \qquad \qquad (1) $$

and $\Delta$ is an $A,A$-bimodule map iff

$$ a \Delta(b) = \Delta(a b) = \Delta(a) b \qquad \qquad (2) $$

These are precisely the equations in the definition of a separable algebra.    █

Now let’s relate the separability of $A$ to the condition $\Omega^1(A) \cong 0$.

First, let’s assume $\Omega^1(A) \cong 0$. Then Lemma 15 implies $I^2 = I$. To go further we need $I$ to be finitely generated:

Lemma 17. If a finitely generated ideal $I$ in a commutative ring $R$ has $I^2 = I$ then $I = p R$ for some idempotent $p \in R$.

I’ll prove this later too. So, let’s assume $I$ is a finitely generated ideal in $A \otimes A$. Then multiplication by $p$ gives an $A,A$-bimodule map

$$ A \otimes A \stackrel{p}{\longrightarrow} I $$

which splits our exact sequence of bimodules

$$ 0 \longrightarrow I \stackrel{i}{\longrightarrow} A \otimes A \stackrel{m}{\longrightarrow} A \longrightarrow 0 $$

So, by Lemma 16, $A$ is separable! Here’s what we’ve shown so far:

Lemma 18. Suppose $A$ is a commutative algebra over a commutative ring $k$ and the kernel of multiplication $m \colon A \otimes A \to A$ is finitely generated as an ideal in $A \otimes A$. If $\Omega^1(A) \cong 0$, then $A$ is separable.

This is a bit clunky sounding, but when $k$ is a field and $A$ is finite-dimensional, $A \otimes A$ is also finite-dimensional, so every ideal of $A \otimes A$ is a finite-dimensional vector space and thus finitely generated. So in that case, $\Omega^1(A) \cong 0$ implies $A$ is separable.

Now let’s try showing the converse. Suppose $A$ is a separable algebra over a commutative ring $k$. Lemma 16 says our short exact sequence of $A,A$-bimodules

$$ 0 \longrightarrow I \stackrel{i}{\longrightarrow} A \otimes A \stackrel{m}{\longrightarrow} A \longrightarrow 0 $$

splits. So, there’s an $A,A$-bimodule map

$$ \pi \colon A \otimes A \to A \otimes A $$

projecting $A \otimes A$ onto $I$, with $\pi^2 = \pi$. Let

$$ p = \pi(1 \otimes 1) \in A \otimes A $$

Then a little calculation shows that $\pi$ is multiplication by $p$ in the algebra $A \otimes A$, and $p^2 = p$. This is easy, but best done in the privacy of your own home.
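
In case you want a hint for that calculation (my sketch, not spelled out in the post): since $\pi$ is an $A,A$-bimodule map and $A \otimes A$ is commutative,

$$ \pi(a \otimes b) = \pi\big( (a \otimes 1)(1 \otimes 1)(1 \otimes b) \big) = (a \otimes 1) \, \pi(1 \otimes 1) \, (1 \otimes b) = (a \otimes b) p $$

so $\pi$ is multiplication by $p$. Applying $\pi^2 = \pi$ to $1 \otimes 1$ then gives $p^2 = p$.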

So, our ideal $I$ is of the form $p(A \otimes A)$ for some idempotent $p$. This implies $I^2 = I$. Since $\Omega^1(A) \cong I/I^2$, we get:

Lemma 19. Suppose $A$ is a commutative algebra over a commutative ring $k$. If $A$ is separable then $\Omega^1(A) \cong 0$.

Putting Lemmas 18 and 19 together we get our main result, Theorem 14.

Conclusions

I’ve shown you something about the geometrical meaning of separability for commutative algebras. But there’s a lot left undone.

For example, while I said $\Omega^1(A) \cong I / I^2$, and I’ll prove it below, I haven’t shown you a picture illustrating why 1-forms are like elements of $I/I^2$. I also haven’t shown you a picture illustrating why $I/I^2 = 0$ means $A$ is like the algebra of functions on a zero-dimensional space! Geometry isn’t really geometry without pictures. But I’ve been too busy writing to draw these pictures.

Also, I left off last time with two questions:

  • Why are separable commutative algebras over a field $k$ precisely the ones that give covering spaces of $\mathrm{Spec}(k)$?

  • Why are separable commutative algebras over $k$ the same as finite products of finite separable extensions of $k$?

Using Kähler differentials I’ve made a stab at answering the first question — a tentative stab, which would only be completed if I explained Grothendieck’s whole theory of etale maps. But I haven’t touched the second question at all! I want to do that next time.

I’ve been talking to Tom Leinster about this stuff, and he showed me a nice proof that a separable field extension $K$ of $k$ has $\Omega^1(K) = 0$. Thanks to our work today, that shows finite separable field extensions are separable algebras! I want to show you that argument, and maybe some more.

Proofs of Lemmas

Lemma 15. For any commutative algebra $A$ over a commutative ring $k$, $\Omega^1(A) \cong I / I^2$ as $A$-modules.

Proof. We’ll use a cool fact: there’s an explicit description of the kernel $I$ of the multiplication map $m \colon A \otimes A \to A$. Namely, $I$ consists precisely of $k$-linear combinations of elements of the form

$$ a b \otimes 1 - a \otimes b $$

To check this, first note that $m(a b \otimes 1 - a \otimes b) = a b - a b = 0$, so these elements really are in $I$. On the other hand, if $\sum_i a_i \otimes b_i \in I$ then $\sum_i a_i b_i = 0$, so $\sum_i a_i \otimes b_i = -\sum_i \left( a_i b_i \otimes 1 - a_i \otimes b_i \right)$ is a linear combination of elements of the above form.

Using this cool fact, we can define

$$ d \colon A \to I/I^2 $$

by

$$ d a = [a \otimes 1 - 1 \otimes a] $$

Here the stuff in the brackets is clearly an element of $I$, while the brackets mean we take an equivalence class to get an element of the quotient $I/I^2$.

To check that $d$ is a derivation, we just check each of the four relations. I’ll do the hardest one and leave the rest for you: I’ll show $d(a b) = a \, d b + b \, d a$.

Now, this identity would be nicer if we didn’t switch the letters $a$ and $b$ in the second term at right: $d(a b) = a \, d b + (d a) b$. But for this to make sense we need to know how to multiply a guy in $I/I^2$ on the right by a guy in $A$. And this calls for a little thought. Of course we could just define multiplication on the right to be the same as multiplication on the left. But that wouldn’t achieve anything. In fact we have made $A \otimes A$ into an $A,A$-bimodule where the left and right actions of $A$ are different:

$$ a (b \otimes c) = a b \otimes c $$

while

$$ (b \otimes c) a = b \otimes c a $$

The ideal $I \subseteq A \otimes A$ is a sub-bimodule, and the left and right actions of $A$ are still different on $I$. But they agree up to an element of $I^2$!

Lemma 20. If $x \in I$, then $a x = x a \bmod I^2$ for all $a \in A$, where the left and right actions of $A$ on $I$ are defined as above.

I’ll prove this later. So, the left and right actions of $A$ on $I/I^2$ agree, and $d(a b) = a \, d b + (d a) b$ means the same thing as $d(a b) = a \, d b + b \, (d a)$. Yet the first one is easier to show:

$$ \begin{array}{ccl} a \, d b + (d a) b &=& a [b \otimes 1 - 1 \otimes b] + [a \otimes 1 - 1 \otimes a] b \\ &=& [a b \otimes 1 - a \otimes b + a \otimes b - 1 \otimes a b] \\ &=& [a b \otimes 1 - 1 \otimes a b] \\ &=& d(a b) \end{array} $$

So, if you check the other less interesting laws of calculus you’ll see $d \colon A \to I/I^2$ is a derivation. To complete our work it suffices to show $d$ is a universal derivation.

Suppose $v \colon A \to M$ is any derivation on $A$, where $M$ is any $A$-module. We want to show there exists a unique $A$-module homomorphism $f \colon I/I^2 \to M$ making the obvious triangle commute:

$$ v = f \circ d $$

For this to hold we need

$$ f(d b) = v(b) $$

for all $b \in A$. For $f$ to be an $A$-module homomorphism we must also have

$$ f(a \, d b) = a \, v(b) \qquad \qquad (\star) $$

By our “cool fact”, everything in $I/I^2$ is a $k$-linear combination of guys like $[a b \otimes 1 - a \otimes b]$. But by our definition of $d$, we know

$$ [a b \otimes 1 - a \otimes b] = a \, d b $$

So in fact, everything in $I/I^2$ is a linear combination of guys like $a \, d b$, so $(\star)$ determines $f$ uniquely. To finish the job, please check that $f$ is well-defined — that is, independent of the choice of representatives for the expressions in the brackets.    █
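
Here’s a hint for that last check (my sketch, not in the post): consider the $k$-linear map $F \colon A \otimes A \to M$ with $F(x \otimes y) = x \, v(y)$. If $\sum_i a_i \otimes b_i$ and $\sum_j c_j \otimes d_j$ both lie in $I$, so that $\sum_i a_i b_i = 0 = \sum_j c_j d_j$, then

$$ F\Big( \sum_{i,j} a_i c_j \otimes b_i d_j \Big) = \sum_{i,j} a_i c_j \big( b_i \, v(d_j) + d_j \, v(b_i) \big) = \sum_j \Big( \sum_i a_i b_i \Big) c_j \, v(d_j) + \sum_i \Big( \sum_j c_j d_j \Big) a_i \, v(b_i) = 0 $$

so $F$ kills $I^2$. Thus $-F$ restricted to $I$ descends to an $A$-module map $f \colon I/I^2 \to M$, and $f(a \, d b) = -F(a b \otimes 1 - a \otimes b) = a \, v(b)$ since $v(1) = 0$.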

Now we pay the price for our slick proof. But it’s always good to stick an annoying calculation into a lemma, even when you’re already proving a lemma.

Lemma 20. If $x \in I$, then $a x = x a \bmod I^2$ for all $a \in A$, where the left and right actions of $A$ on $I$ are defined as above.

Proof. It suffices to show this for $x = b c \otimes 1 - b \otimes c$, since our “cool fact” says every element of $I$ is a $k$-linear combination of elements like this.

$$ \begin{array}{ccl} a x &=& a(b c \otimes 1 - b \otimes c) = a b c \otimes 1 - a b \otimes c \\ x a &=& (b c \otimes 1 - b \otimes c) a = b c \otimes a - b \otimes a c \\ a x - x a &=& a b c \otimes 1 - a b \otimes c - b c \otimes a + b \otimes a c \\ &=& (a b \otimes 1 - b \otimes a)(c \otimes 1 - 1 \otimes c) \end{array} $$

The last expression is the product of two elements of $I$, so we’re done!    █

Believe it or not, all these calculations have a geometrical meaning. I hope to clarify that sometime. But for now I’ll conclude with a lemma of a very different sort.

Lemma 17. If a finitely generated ideal $I$ in a commutative ring $R$ has $I^2 = I$ then $I = p R$ for some idempotent $p \in R$.

Proof. We’ll use Nakayama’s Lemma. There are lots of ways to state this lemma, but the most memorable might be

$$ M = I M \implies m = i m $$

This means: if a finitely generated module $M$ over some commutative ring $R$ has $M = I M$ for some ideal $I \subseteq R$, then there exists $i \in I$ such that $i m = m$ for all $m \in M$.

We can apply this taking $M = I$ to be a finitely generated ideal with $I = I^2$. We conclude that there exists $i \in I$ such that $i m = m$ for all $m \in I$. Taking $m = i$ gives $i^2 = i$; and since every $m \in I$ equals $i m \in i R$, while $i R \subseteq I$ because $i \in I$, we get $I = i R$. Now just let $p = i$.    █

Posted at August 30, 2023 10:11 AM UTC


12 Comments & 2 Trackbacks

Re: Grothendieck–Galois–Brauer Theory (Part 3)

Side remark (apologies if you knew this and were just omitting it to keep the exposition shorter).

One can set up derivations from $A$ into $A$-bimodules, and then it turns out that $D \colon A \to I$, $a \mapsto a \otimes 1 - 1 \otimes a$ is the universal such derivation. To get the universal derivation into symmetric bimodules, one composes $D$ with the abelianization map $I \to I_{ab}$; then the calculations at the end of your post can be viewed as showing that $I_{ab}$ is naturally identifiable with $I/I^2$.

I think that the arguments in your post show, without any extra work, that for a not-necessarily finite-dimensional and not-necessarily-commutative $k$-algebra $A$, separability of $A$ is still equivalent to the ideal $I$ having an identity element, provided one views it as an ideal in the ring $A \otimes A^{op}$ — and this is equivalent to $D \colon A \to I$ being an inner derivation. This is the characterization of the separability of $A$ by the vanishing of its 1st Hochschild cohomology.

Posted by: Yemon Choi on August 30, 2023 8:18 PM | Permalink | Reply to this

Re: Grothendieck–Galois–Brauer Theory (Part 3)

Thanks! No, I didn’t know most of what you wrote. But I did notice that derivations from commutative rings to bimodules are not only more general than derivations to one-sided modules, but also better behaved, since

$$ d(a b) = a (d b) + (d a) b $$

works better for calculations than

$$ d(a b) = a (d b) + b (d a) $$

Ages ago I used to think about noncommutative geometry; that hasn’t been on my mind much these days, but I still know that you pay a price anytime you switch the order of two symbols.

I started out with the latter definition of derivation, but in doing the calculations to show

$$ \begin{array}{rccl} d \colon & A & \to & I/I^2 \\ & a & \mapsto & [1 \otimes a - a \otimes 1] \end{array} $$

is a derivation, I realized it was more natural to switch to the former definition and then separately check (in Lemma 20) that the natural left and right $A$-module structures on $I/I^2$ agree. This makes sense geometrically but I didn’t have the energy to explain why.

Your viewpoint is much more thought-out. I hadn’t quite noticed that I’d shown there’s a derivation $d \colon A \to I$ if one uses the bimodule approach. And I like how this approach generalizes to the noncommutative case, because I plan to bring noncommutative separable algebras into the story later, when I get to Brauer theory. Now I’m wondering how much of the geometrical theory of etale maps carries over, perhaps using ideas from noncommutative geometry.

Posted by: John Baez on August 30, 2023 10:44 PM | Permalink | Reply to this

Re: Grothendieck–Galois–Brauer Theory (Part 3)

Yemon wrote:

I think that the arguments in your post show, without any extra work, that for a not-necessarily finite-dimensional and not-necessarily-commutative $k$-algebra $A$, separability of $A$ is still equivalent to the ideal $I$ having an identity element, provided one views it as an ideal in the ring $A \otimes A^{op}$ — and this is equivalent to $D \colon A \to I$ being an inner derivation.

That’s incredibly cool. I haven’t seen how we get $I$ to have an identity element from $D \colon A \to I$ being inner, but I suppose we must use the element $p \in A$ that implements the ‘innerness’ of the derivation ($D a = p a - a p$).

By the way, every separable algebra over a field is automatically finite-dimensional. I want to present the proof of this sometime. But nonetheless it’s nice to have arguments that sidestep this. When we work over commutative rings $k$, I believe not all separable algebras are finitely generated projective $k$-modules. (If anyone out there knows the story here, please comment!)

Posted by: John Baez on September 1, 2023 12:34 PM | Permalink | Reply to this

Re: Grothendieck–Galois–Brauer Theory (Part 3)

I’m sure this is obvious to many regular readers, but in case it’s helpful to others, here’s a simple intuition that relates the algebra and geometry. We can understand morally why the quotient of $I$ by $I^2$ yields “differentials” by viewing the quotient as zeroing out higher order / non-linear terms, analogously to viewing differentials as small quantities with negligible higher powers.

For example, let $R = k[x]$ be a polynomial ring and consider the ideal $I = x R$ of polynomials with zero constant term. $I^2 = x^2 R$, the collection of polynomials with zero constant and linear terms. $I/I^2$ leaves us just the linear terms. Similarly for the plane $k[x,y]$, when $I = (x, y)$ then $I^2 = (x^2, x y, y^2)$ and $I/I^2$ just has the linear multiples of $x$ and $y$ left. And so on in higher dimensions.

This is a nice starting point as similar $I/I^2$ constructions appear in related contexts, such as the Zariski cotangent space for a local ring and cotangent spaces in differential geometry.

Posted by: Marc Harper on August 31, 2023 6:39 AM | Permalink | Reply to this

Re: Grothendieck–Galois–Brauer Theory (Part 3)

Thanks! I wanted to talk about this and draw some pictures, but I got worn out.

In case I never get the time: if $A$ is the algebra of functions on some space (affine scheme) $X$, then $A \otimes A$ is the algebra of functions on $X \times X$. $I$, the kernel of the multiplication map

$$ m \colon A \otimes A \to A $$

consists of functions that vanish on the diagonal

$$ D = \{(x,x) \;\vert\; x \in X\} \subseteq X \times X $$

$(A \otimes A)/I$ is the algebra of functions on the diagonal. The diagonal is isomorphic to $X$ itself. So,

$$ (A \otimes A)/I \; \cong \; A $$

$I/I^2$ consists of functions that vanish on the diagonal mod functions that vanish to second order. If you think about it, these are the same as 1-forms on $X$, since they say to first order how a function changes as you start at a point $(x,x)$ and then start moving to a nearby point $(x,y)$. More precisely,

$$ I/I^2 \cong \Omega^1(A) $$

which I proved more rigorously, but without help from geometry, in my post.

All the weird formulas I used in proving Lemma 15 and Lemma 20 make more sense if you think geometrically.

For example: algebraically, $I \subseteq A \otimes A$ is a module over $A \otimes A$, so it is an $A,A$-bimodule. There’s a left action of $A$ on $I$ and a right action, and they’re really different.

Geometrically, $A \otimes A$ is the functions on $X \times X$. If we visualize $X \times X$ as a plane, the left copy of $A$

$$ A \otimes k \subseteq A \otimes A $$

consists of functions that only depend on the ‘horizontal coordinates’, while the right copy

$$ k \otimes A \subseteq A \otimes A $$

consists of functions that only depend on the ‘vertical coordinates’.

So, $f \in A$ acts on $i \in I$ on the left to give this function:

$$ f(x) \, i(x,y) $$

but it acts on $i \in I$ on the right to give this function:

$$ i(x,y) \, f(y) $$

They’re different. But the difference is

$$ (f(x) - f(y)) \, i(x,y) $$

which is in $I^2$, since not only $i(x,y)$ but the function $f(x) - f(y)$ vanishes on the diagonal, where $x = y$.

Thus, the left and right actions of $A$ on $I$ are different, but they give the same action of $A$ on $I/I^2$. And that’s Lemma 20!

Posted by: John Baez on August 31, 2023 9:46 AM | Permalink | Reply to this

Re: Grothendieck–Galois–Brauer Theory (Part 3)

$I/I^2$ consists of functions on the diagonal mod functions that vanish to second order.

Um, isn’t this “functions that vanish on the diagonal mod functions that vanish to second order”? Since $I$ is functions that vanish on the diagonal.

Posted by: David Roberts on September 2, 2023 12:42 PM | Permalink | Reply to this

Re: Grothendieck–Galois–Brauer Theory (Part 3)

Right. $I$ is functions on $X \times X$ that vanish on the diagonal; $(A \otimes A)/I$ is functions on the diagonal.

Posted by: John Baez on September 2, 2023 3:18 PM | Permalink | Reply to this

Re: Grothendieck–Galois–Brauer Theory (Part 3)

It’s just that what you wrote doesn’t make sense, as a cartoon explanation for why we should think of elements of $I/I^2$ as 1-forms.

Perhaps you mean to say something like: $I$ is the space of functions that are zero on the diagonal, so the constant term in a “Taylor expansion about the diagonal” vanishes, and since we are identifying functions whose difference vanishes to second order, we are really just looking at first-order Taylor polynomials. But then the only information that remains is the information in the “first derivative”.

Is this right?

Posted by: David Roberts on September 7, 2023 7:24 AM | Permalink | Reply to this

Re: Grothendieck–Galois–Brauer Theory (Part 3)

Right. I accepted your correction of my typo. And from then on the argument goes just like you say. I was trying to sketch it rapidly. I’ll say it a bit more precisely in a later article, but if you want rigor, use the algebraic proof that $\Omega^1(A) \cong I/I^2$.

(I didn’t edit my comment because I decided to stop pretending to be perfect. Maybe that signalled some indifference to your correction? I’ll edit it now.)

Posted by: John Baez on September 7, 2023 12:42 PM | Permalink | Reply to this

Re: Grothendieck–Galois–Brauer Theory (Part 3)

Yes, sorry, it seemed that way, since I know you can edit posts. Apologies for the pedantry!

Posted by: David Roberts on September 11, 2023 1:23 AM | Permalink | Reply to this
Read the post Grothendieck--Galois--Brauer Theory (Part 4)
Weblog: The n-Category Café
Excerpt: The word 'separable' means two things, but today we'll use some geometry to show every finite separable extension of a field is a separable algebra over that field. And don't worry, I'll explain what all this stuff means!
Tracked: September 3, 2023 12:31 PM
Read the post Grothendieck--Galois--Brauer Theory (Part 6)
Weblog: The n-Category Café
Excerpt: Let's classify all separable algebras over fields!
Tracked: October 27, 2023 4:04 PM

Re: Grothendieck–Galois–Brauer Theory (Part 3)

At one point, you claim that separable algebras over $k$ are precisely the ones that give covering spaces of $\mathrm{Spec}(k)$… Does this mean a specific scheme-y sense of covering space?

The reason I ask is that if you choose some characteristic $p$ field $k$ which has a non-separable extension $K$, then this is an algebra over $k$ which thus yields a map $\mathrm{Spec}(K) \to \mathrm{Spec}(k)$, which topologically is a point to another point and thus a covering map, right? But this is supposedly not a separable algebra over $k$.

Posted by: Isky Mathews on February 12, 2024 3:51 PM | Permalink | Reply to this

Re: Grothendieck–Galois–Brauer Theory (Part 3)

Isky wrote:

Does this mean a specific scheme-y sense of covering space?

Yes, the technical name is an etale morphism of schemes. If you click on the link you’ll see nine equivalent definitions of that concept. I believe a field extension $k \to K$ gives a map of schemes $\mathrm{Spec}(K) \to \mathrm{Spec}(k)$ that’s an etale map of schemes iff $k \to K$ is a finite separable extension. See etale algebra for more on this.

You seem to be treating these spectra as mere topological spaces and defining “covering space” that way. That gives a different story—and a much less interesting one, as you note.

I said “a field $k$ gives a funny kind of space called $\mathrm{Spec}(k)$.” I said “funny” because I was talking about schemes, and they behave rather differently from topological spaces. Part of Grothendieck’s genius was understanding how schemes behave somewhat like topological spaces but also quite differently in many respects.

Posted by: John Baez on February 13, 2024 8:10 AM | Permalink | Reply to this
