

September 25, 2012

Where Do Linearly Compact Vector Spaces Come From?

Posted by Tom Leinster

Where do what come from?

Linearly compact vector spaces are what fill in this blank:

                       sets   are to   vector spaces
                                as
   compact Hausdorff spaces   are to   ???         

I’ll explain the exact sense in which that’s true, as part of the continuing story of codensity monads. Of course, I’ll also give you the actual definition.

Here’s the idea. Compactness of a topological space is something like finiteness of a set; indeed, the archetypal example of a compact space is a finite set with the discrete topology. Can we find a decent notion of “compactness” for topological vector spaces, such that the archetypal example of a “compact” vector space is a finite-dimensional vector space with the discrete topology?

Compactness itself won’t do: for example, \mathbb{R}^2 is finite-dimensional but not compact (with either the Euclidean or the discrete topology). But linear compactness will fulfil all our desires.

Before I go further, I want to thank Todd Trimble. I’d never heard of linearly compact vector spaces until Todd told me about them in a MathOverflow answer. Thanks, Todd! More on that answer later.

Here’s a quick summary of my last two posts:

  • Every functor G: \mathbf{B} \to \mathbf{A} has a so-called codensity monad, which is a monad on \mathbf{A}. (Well: not quite every functor, but subject only to the existence of certain limits.)
  • The codensity monad of the inclusion \mathbf{FinSet} \hookrightarrow \mathbf{Set} is the ultrafilter monad U. Here \mathbf{FinSet} is the category of finite sets, and the ultrafilter monad sends a set X to the set U(X) of ultrafilters on X.

This time, we’re going to think about algebras for codensity monads.

What, in particular, are the algebras for the ultrafilter monad? This question is answered by a well-known theorem of Manes:

Theorem  The algebras for the ultrafilter monad are the compact Hausdorff spaces.

Last time I said “ultrafilters are inevitable”, because they arise via a general categorical construction from the inclusion \mathbf{FinSet} \hookrightarrow \mathbf{Set}. In the same sense, compact Hausdorff spaces are inevitable.

Let me take a moment to explain roughly why Manes’s theorem is true. An ultrafilter on a topological space X can be viewed as something like a sequence in X. In particular, one can say what it means for an ultrafilter on X to “converge” to a point of X. (I won’t say exactly what this means — it’s not difficult, but I don’t want to digress too much.) Here are some appealing facts about ultrafilter convergence:

  • X is compact iff every ultrafilter on X converges to at least one point.
  • X is Hausdorff iff every ultrafilter on X converges to at most one point.
  • X is, therefore, compact Hausdorff iff every ultrafilter on X converges to exactly one point.

So if X is a compact Hausdorff space, there is a map of sets U(X) \to X assigning to each ultrafilter its unique limit. Moreover, if you know which ultrafilters converge to which points, you can recover the whole topology. So a compact Hausdorff space can be viewed as a set X together with a map U(X) \to X, satisfying some axioms. Those axioms turn out to be exactly the axioms for an algebra for a monad: hence Manes’s theorem.
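To make Manes’s theorem feel concrete, here is a brute-force check (my own illustration, not from the post) that on a finite set every ultrafilter is principal, so the structure map U(X) \to X of the algebra simply returns the generating point. All helper names here are invented:

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_ultrafilter(X, U):
    """Check the ultrafilter axioms for a family U of subsets of a finite set X:
    a proper filter containing, for each subset A, exactly one of A and X - A."""
    X, U = frozenset(X), {frozenset(a) for a in U}
    if frozenset() in U or X not in U:
        return False
    for a in U:
        # upward closed ...
        if any(a <= b and b not in U for b in powerset(X)):
            return False
        # ... and closed under binary intersection
        if any(a & b not in U for b in U):
            return False
    # primeness: exactly one of each complementary pair belongs to U
    return all((a in U) != (X - a in U) for a in powerset(X))

def principal(X, x):
    """The principal ultrafilter at x: every subset of X containing x."""
    return {a for a in powerset(X) if x in a}

X = {0, 1, 2}
# enumerate all 2^8 families of subsets of X and keep the ultrafilters
ultras = [set(U) for U in powerset(powerset(X)) if is_ultrafilter(X, U)]
# on a finite set every ultrafilter is principal, so the structure map
# U(X) -> X of the Manes algebra sends principal(X, x) back to x
limits = [x for U in ultras for x in X if U == principal(X, x)]
```

For infinite X the non-principal ultrafilters exist only by the axiom of choice, so no enumeration like this is possible there; the sketch only illustrates the finite discrete case, which is the archetypal compact Hausdorff space.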

 

So, general categorical machinery turns the inclusion \mathbf{FinSet} \hookrightarrow \mathbf{Set} into the category of compact Hausdorff spaces. The game now is to feed other, similar, functors into the same machine. For example: what happens if we feed in the inclusion functor

\mathbf{FDVect} \hookrightarrow \mathbf{Vect}?

Here \mathbf{Vect} is the category of vector spaces over some field k, and \mathbf{FDVect} is the category of finite-dimensional vector spaces. The questions we have to answer are:

  1. What is the codensity monad of \mathbf{FDVect} \hookrightarrow \mathbf{Vect}? (This is a monad on \mathbf{Vect}, analogous to the ultrafilter monad on \mathbf{Set}.)
  2. What are the algebras for this codensity monad? (Since it’s a monad on \mathbf{Vect}, these are vector spaces equipped with some extra structure. They are the linear analogue of compact Hausdorff spaces.)

Without further ado, the answer to Question 1 is:

Theorem  The codensity monad of \mathbf{FDVect} \hookrightarrow \mathbf{Vect} is the double dualization monad.

As you’d guess, the double dualization monad sends a vector space X to its double dual X^{**}. So, I’m inviting you to think of elements of a double dual space as the linear analogues of ultrafilters.

If you want the full proof, see Theorem 7.5 of my recent paper. I’ll just sketch it here.

It’s similar to the proof that the codensity monad of \mathbf{FinSet} \hookrightarrow \mathbf{Set} is the ultrafilter monad, which I sketched last time. The key idea there was integration against an ultrafilter. To recap: given a set X, an ultrafilter \Omega on X, and a finite set B, we have a canonical map

\int_X - \, d\Omega \colon \mathbf{Set}(X, B) \to B.

Analogously, in the linear situation, we can integrate against an element of the double dual. That is, given a vector space X, an element \Omega of X^{**}, and a finite-dimensional vector space B, we have a canonical map

\int_X - \, d\Omega \colon \mathbf{Vect}(X, B) \to B.

I won’t explain what this integration is. You can find it in Proposition 7.1 of my paper. Or maybe you can find a slicker construction yourself — there’s probably only one sensible way to assign to each element of X^{**} and finite-dimensional vector space B a map \mathbf{Vect}(X, B) \to B.
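Here is one concrete low-dimensional instance (an illustration under my own assumptions, not the construction of Proposition 7.1): when X is finite-dimensional, every \Omega \in X^{**} is evaluation at some x \in X, and applying \Omega to each coordinate functional of a linear map f \colon X \to B produces exactly f(x). A Python sketch over the rationals, with invented names:

```python
from fractions import Fraction

def ev(x):
    """The element of the double dual given by evaluation at x:
    it eats a functional phi : X -> k and returns phi(x)."""
    return lambda phi: phi(x)

def integrate(omega, f, m):
    """The canonical map Vect(X, B) -> B determined by omega in X**,
    for B = k^m: apply omega to each coordinate functional of f.
    (A sketch assuming X = k^n; names are mine, not the paper's.)"""
    return tuple(omega(lambda v, i=i: f(v)[i]) for i in range(m))

# X = Q^3, B = Q^2, f the linear map with matrix A
A = [[Fraction(1), Fraction(2), Fraction(0)],
     [Fraction(0), Fraction(1), Fraction(5)]]
f = lambda v: tuple(sum(row[j] * v[j] for j in range(3)) for row in A)

x = (Fraction(1), Fraction(1), Fraction(2))
# integrating f against the "Dirac measure" ev(x) recovers f(x)
result = integrate(ev(x), f, 2)
```

For finite-dimensional X this is the whole story, since X \cong X^{**}; the interesting content of the theorem is that the same formula makes sense for arbitrary X, where X^{**} is much bigger.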

The upshot is that for each XX we have a canonical map

X^{**} \to \int_{B \in \mathbf{FDVect}} [\mathbf{Vect}(X, B), B]

given by

\Omega \mapsto \int_X - \, d\Omega.

The second-to-last integral sign denotes an end. This end is actually T(X), where T is the codensity monad of \mathbf{FDVect} \hookrightarrow \mathbf{Vect}. So, we’ve just defined a natural transformation ( )^{**} \to T.

Just as in the case of sets and ultrafilters, elements of the double dual space X^{**} can be thought of as something like measures on X. The map X^{**} \to T(X) then assigns to each measure on X an integration operator. As before, this correspondence between measures and integrals turns out to be one-to-one; and that, in essence, proves the theorem.

So, the double dualization monad is the linear analogue of the ultrafilter monad. This answers Question 1. But Question 2 asks: what are its algebras?

My train is about to reach its destination, so I’ll wrap this up quickly. Here’s the answer to Question 2:

Theorem  The algebras for the codensity monad of \mathbf{FDVect} \hookrightarrow \mathbf{Vect} are the linearly compact vector spaces.

I learned this from Todd Trimble: see this MathOverflow answer or Theorem 7.8 of my paper. (There are also similar results in the paper by Kennison and Gildenhuys that I mentioned last time.)

I won’t explain the proof, but I will tell you what a linearly compact vector space is.

A linearly compact vector space is a topological vector space with certain properties. Our ground field k has no topology, so I have to be clear about the meaning of “topological vector space”: it’s either a k-vector space internal to \mathbf{Top}, or it’s a topological vector space with respect to the discrete topology on k. They amount to the same thing.

A linearly compact vector space over a field k is a topological vector space over k such that:

  • the topology is linear: the open affine subspaces form a basis for the topology
  • any family of closed affine subspaces with the finite intersection property has nonempty intersection
  • the topology is Hausdorff.

The first condition might strike you as really bizarre. For example, it’s not true of \mathbb{R}^n with the Euclidean topology, which is probably the first topological vector space that pops into your head. In fact, a finite-dimensional vector space can be made into a linearly compact vector space in one and only one way: by giving it the discrete topology. This is like the fact that a finite set can be made into a compact Hausdorff space in one and only one way — again, by giving it the discrete topology.

The second condition is the analogue of compactness. “Finite intersection property” means that the intersection of any finite number of members of the family is nonempty. It’s just like the fact that a topological space is compact iff every family of closed subsets with the finite intersection property has nonempty intersection.
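As a small illustration of the finite intersection property itself (mine, not from the post), here is a brute-force check on families of subsets of a finite set; since a finite discrete space is compact, any family with the FIP must have nonempty total intersection:

```python
from itertools import combinations
from functools import reduce

def has_fip(family):
    """True if every nonempty subfamily has nonempty intersection.
    (For a finite family, checking every subfamily covers all finite ones.)"""
    sets = [frozenset(s) for s in family]
    return all(
        reduce(frozenset.intersection, combo)
        for r in range(1, len(sets) + 1)
        for combo in combinations(sets, r)
    )

def total_intersection(family):
    """Intersection of the whole family."""
    return reduce(frozenset.intersection, [frozenset(s) for s in family])

# a family with the FIP: its total intersection is nonempty, as
# compactness of the (finite, discrete) ambient space predicts
with_fip = [{0, 1, 2}, {1, 2, 3}, {2, 4}]
# pairwise intersections are nonempty here, but the triple intersection
# is empty, so this family fails the FIP
without_fip = [{0, 1}, {1, 2}, {0, 2}]
```

In the linearly compact setting the sets in the family would be closed affine subspaces rather than arbitrary subsets, but the combinatorial shape of the condition is the same.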

As for the third condition, well: as so often happens, “Hausdorff” is taken for granted to such an extent that the terminology ignores it. They’re not called “linearly compact Hausdorff vector spaces”, though perhaps they should be.

In summary, we have the following table of analogues:

sets                        vector spaces
finite sets                 finite-dimensional vector spaces
ultrafilters                elements of the double dual
compact Hausdorff spaces    linearly compact vector spaces

Can this be extended to theories other than vector spaces? I don’t know. But I hope someone will find out!

In the final installment: are ultraproducts part of the codensity story?

Posted at September 25, 2012 11:17 PM UTC

TrackBack URL for this Entry:   https://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/2563

58 Comments & 0 Trackbacks

Re: Where Do Linearly Compact Vector Spaces Come From?

This stuff is wicked cool!

Posted by: Allen K. on September 26, 2012 1:40 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Thanks very much!

Posted by: Tom Leinster on September 26, 2012 3:02 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

This is pretty awesome. Can you say anything about why the analogy isn’t stronger? Why (intuitively speaking) in the case of sets do we get only the ultrafilters rather than the whole double powerset?

Posted by: Mike Shulman on September 26, 2012 2:36 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

You do! The set of ultrafilters on a set X is \mathbf{Hom}(2,\mathbf{Hom}(2,X)). The trick is that the inside 2 and \mathbf{Hom} are in the category of sets, and the outside 2 and \mathbf{Hom} are in the category of Boolean algebras. So of course you don’t really get the full double powerset, but you get something that looks like it’s trying to be.

It seems like this is related to the fact that the powerset functor P is an equivalence from \mathbf{FinSet}^{op} to \mathbf{FinBoolAlg}, so we require ultrafilters to be Boolean algebra homomorphisms P(X)\to 2, while ()^* is an equivalence from \mathbf{FDVect}^{op} to \mathbf{FDVect}, so the linear analogue of ultrafilter is only required to be a linear function X^*\to X. Could someone who knows what’s going on say whether I’m on the right track?

Posted by: Owen Biesel on September 26, 2012 3:27 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Of course, by \mathbf{Hom}(2,_) I meant \mathbf{Hom}(_,2) in both places it appeared. Sorry for the mixup!

Posted by: Owen Biesel on September 26, 2012 3:30 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

You’re on the right track, but there are a few typos. I think you meant Hom_{Bool}(Hom_{Set}(X, 2), 2), and a little later, you meant X^\ast \to k where k is the ground field.

Posted by: Todd Trimble on September 26, 2012 3:33 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Thanks, Todd. This is my first time using itex for my comments, so I celebrated the successful typesetting a little too soon.

What confuses me is that the categories \mathbf{Bool} (and to some extent, \mathbf{Vect}) seem to arise out of thin air in my (typo-ridden) comment above: \mathbf{Hom}_{Set}(X,2) naturally has the structure of a finite Boolean algebra, but why are (infinite) Boolean algebra homomorphisms the right thing to consider for \mathbf{Hom}_{Bool}(\mathbf{Hom}_{Set}(X,2),2)?

Posted by: Owen Biesel on September 26, 2012 4:04 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Well, I disagree with your claim that

You do!

because Hom_{Bool}(Hom_{Set}(X,2),2) is not the double powerset of X. (-: However, I do see the point you’re making. The question then becomes exactly what you asked: how does Bool arise out of thin air?

Posted by: Mike Shulman on September 26, 2012 2:15 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

An important clue is that the codensity monad applied to a set XX is, by definition, the set of natural transformations of the form

(-)^X \to (-)^1 : Fin \to Set

and that the category of Boolean algebras is equivalent to the category of product-preserving functors of the form Fin \to Set. Here the product-preserving functor (-)^X : Fin \to Set corresponds to the power set P(X) (and of course (-)^1 corresponds to the Boolean algebra 2).

Posted by: Todd Trimble on September 26, 2012 3:36 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Nuts. This is maybe what I should have said: the category of Boolean algebras is equivalent to the category of product-preserving functors Fin_+ \to Set from finite nonempty sets to Set. (An easy way to see this is to recognize Fin_+ as the Cauchy completion of the full subcategory consisting of finite sets of cardinality 2^n, which is equivalent to the Lawvere theory for Boolean algebras, and to recognize that if C has finite products and \bar{C} is its Cauchy completion, then the category of product-preserving functors C \to Set is equivalent to the category of product-preserving functors \bar{C} \to Set.)

However, the collection of Fin-natural transformations (-)^X \to (-)^1 is in natural bijection with the Fin_+-natural transformations of the form (-)^X \to (-)^1, so that we can apply the observation above about Boolean algebras.

Posted by: Todd Trimble on September 26, 2012 4:24 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Tom will probably have a more elegant and definitive way of saying it, but maybe I’ll try to say something. An all-too-brief answer might be: you can’t integrate along just any old element in the double power set; the things you can integrate along are the ultrafilters. Whereas in the linear context, you can integrate along any element in the double dual.

What I’m really loving now, and I only just noticed this although I’m sure Tom noticed it too and might have mentioned it already, is that this is the first time I’ve seen a really satisfying connection made between ends as “integrals”, and ordinary integrals. The object of study in each case (Set and Vect) is an end construction for the codensity monad, which takes an object X to

\int_F \hom(\hom(X, F), F)

where we “integrate” (take the end) over the “finite objects” in the category. The elements of this end deserve to be called “measures”, as Tom has been saying. In the Set case we can project down

\int_{F \in Fin} F^{F^X} \to 2^{2^X}

and interestingly, this turns out to be an injection but not a surjection. I guess it shouldn’t be too surprising that this is not surjective, since the end is concerned with families of functions

F^X \to F

natural in finite sets F, whereas not every map 2^X \to 2 will be natural even with regard to self-maps 2 \to 2. But in fact it won’t even map onto such “\hom(2, 2)-natural” maps 2^X \to 2. What is additionally curious is that the projection

\int_F F^{F^X} \to Set^{\hom(n, n)}(n^X, n)

to the “\hom(n, n)-natural” maps n^X \to n is both injective and surjective, as soon as n \geq 3. Tom talks about this too. In other words, such maps n^X \to n can be extended uniquely to a natural map F^X \to F. These correspond precisely to the ultrafilters on X, the things you can integrate along in the beautiful naturality interpretation of measure that Tom is offering.

On the other hand, for Vect, the projection

\int_{F \in Vect_{fd}} \hom(\hom(V, F), F) \to \hom(\hom(V, k), k)

is an isomorphism: any functional \hom(V, k) \to k extends uniquely to a natural family \hom(V, F) \to F. (Notice that any functional \hom(V, k) \to k is automatically \hom(k, k)-natural; this has to do with commutativity of k.) This seems to be a nice exercise, and it seems to me it is connected with the fact that coproducts of finite-dimensional spaces are biproducts.

Sorry I’m not doing better giving a simple explanation, but section 3 of Tom’s paper might be something to look at.

Posted by: Todd Trimble on September 26, 2012 2:56 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Thanks for trying. I think I may just be too tired to wrap my head around this right now. But one question for clarification: it seems to me that in your isomorphism \int_{F\in Vect_{fd}} hom(hom(V,F),F) \xrightarrow{\cong} hom(hom(V,k),k) the word hom has three different meanings. In hom(V,F) it means the set of linear maps between the vector spaces V and F. Then in hom(hom(V,F),F) the outer “hom” means the vector space of set maps from the set hom(V,F) to the vector space F. While on the right-hand side, both “hom”s mean the vector space of linear maps between two vector spaces. Is that right?

If we were to interpret the homs on the right-hand side in the same way that we did on the left-hand side, so that what we had was literally one of the projections out of the end, then we wouldn’t have an isomorphism; we’d have an injection whose image is the set of linear maps — just as the corresponding projection for Set is an injection whose image is, as Owen pointed out, the set of Boolean algebra maps. Which suggests that there is an analogy something like “Set : Bool :: Vect : Vect”. Am I making any sense?

Posted by: Mike Shulman on September 27, 2012 4:28 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Sorry to be confusing! I think you’re definitely right that I was playing a bit fast and loose, although I think some sense of what I was saying is salvageable in this situation. Anyway, let me back up a bit. (Or, we could just refer to Tom’s paper.)

If C is a complete category and i: A \hookrightarrow C is a small full subcategory, then we can extend i (uniquely, up to isomorphism) to a limit-preserving functor \hat{i}: (Set^A)^{op} \to C; the extension is of course along the opposite of the Yoneda embedding (which I’ll call the co-Yoneda embedding). This \hat{i} is right adjoint to a restricted co-Yoneda embedding which takes c \in Ob(C) to Hom(c, i-)^{op}. The composition of these two adjoint functors gives the codensity monad on C.

(By the way, I’m following here standard practice where Hom represents a set-valued functor and \hom denotes an internal hom. I’ll be damned if I know why it’s this way and not Hom for the internal hom, which would be my own preference.)

Following this recipe, the codensity monad takes c to an end

\int_{a \in Ob(A)} i(a)^{Hom(c, i(a))}

where that exponential is an ordinary cartesian power. It is sheer folly to think that in this or in most any other situation, we can expect some end projection

\int_a i(a)^{Hom(c, i(a))} \to i(b)^{Hom(c, i(b))}

will be an isomorphism.

But, in the case of Vect_{fd} \hookrightarrow Vect, it’s a fact that the codensity monad is isomorphic to the double dualization monad on Vect, which takes a vector space V to \hom(\hom(V, k), k) (internal homs here). So apparently what I did was slip in an isomorphism

\int_{F \in Vect_{fd}} F^{Hom(V, F)} \cong \int_{F \in Vect_{fd}} \hom(\hom(V, F), F)

(the end on the right as an ordinary end) and then project the right-hand side onto one of its components (namely, the component at k). This is the sense in which what I was saying is “salvageable”. (If you still don’t like it or don’t find it helpful, then forget it! Refer to Tom’s paper instead.)

If you grant me that there is at least a comparison map going from the right side to the left, then it suffices to check that this comparison induces an isomorphism at the set-theoretic level (since the underlying set functor U = Hom(k, -): Vect \to Set reflects isomorphisms). But this boils down to the observation that a family of functions

Hom(V, F) \to U(F)

natural in finite-dimensional F must preserve the linear structure on these sets. And I claim (although I won’t bother with details unless I’m pressed) that that should fall out from playing with naturality and biproducts. Does that seem believable to you?

Posted by: Todd Trimble on September 27, 2012 3:33 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

I haven’t been keeping up with Mike and Todd’s conversation, and I’m not sure whether the following point has been made, but I’ll make it in case it hasn’t.

Here’s my notation:

  • [-, -] denotes a power, so that if S is a set and A is an object of a category \mathbf{A} then [S, A] is the product of S copies of A in \mathbf{A}
  • \mathbf{Vect}(-, -) denotes the set of linear maps between two vector spaces
  • \mathbf{VECT}(-, -) denotes the vector space of linear maps between two vector spaces.
  • T is the codensity monad of the inclusion \mathbf{FDVect} \hookrightarrow \mathbf{Vect}.

For a vector space X, we have

T(X) = \int_{B \in \mathbf{FDVect}} [\mathbf{Vect}(X, B), B],

more or less by definition of codensity monad. But in fact, it’s also true that

T(X) = \int_{B \in \mathbf{FDVect}} \mathbf{VECT}\,\Bigl(\mathbf{VECT}(X, B), B\Bigr).

(This is Lemma 7.4.)

To put it another way, an element I of T(X) is a priori a family (I_B \colon \mathbf{Vect}(X, B) \to B)_{B \in \mathbf{FDVect}} of maps of sets, satisfying a naturality axiom. But it so happens that the maps I_B are automatically linear — that comes for free.

Posted by: Tom Leinster on September 27, 2012 5:40 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Thanks! I did try to make substantially those points here, and I’m glad to see that your lemma 7.4 wraps it up so neatly.

Posted by: Todd Trimble on September 27, 2012 6:35 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Ah yes, I see it. Sorry to have repeated what you said.

By the way, I hadn’t noticed any solidification of the idea of “ends as integrals”, which you mentioned here. On the contrary, I was finding it a nuisance to have to use the same notation for two different things.

Posted by: Tom Leinster on September 27, 2012 6:41 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

It’s possible that I was getting carried away (as I do on occasion). I really do think the idea of ultrafilter as tantamount to a natural family of integration operators is very pretty. I’m not quite sure how much should be made of the consequent ability to express ultrafilter monads = codensity monads as “integral” ends, e.g., whether ends should be seen as some sort of categorified integrals, or whether there’s some sort of macrocosm-microcosm principle at play here, but it might be worth pondering in an odd moment.

Posted by: Todd Trimble on September 30, 2012 4:18 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Thanks guys! This is starting to make some more sense now. It sounds like the isomorphism between the codensity monad of FDVect \hookrightarrow Vect and the double-dualization monad is a composite of two isomorphisms:

\int_{F} [Vect(X,F),F] \cong \int_{F} VECT(VECT(X,F),F) \cong VECT(VECT(X,k),k).

I’m using Tom’s notation for consistency (omitting the bold), although myself I’d probably tend to write \underline{Vect}(-,-) for the internal-hom; writing something in all caps makes me think that its objects are large. (Todd, I don’t think I’ve never encountered the “standard” notation you refer to of Hom for set-valued homs and hom for internal ones.)

The first isomorphism is Lemma 7.4. It looks to me as though the essential ingredients in the proof of that lemma are that (1) FDVect is closed under finite products, and (2) vector spaces are a finitary commutative theory. This part doesn’t seem to me to need the fact that the finite products in FDVect are also coproducts. But I could be missing something.

The second isomorphism, however, looks like the sort of thing that ought to follow from Vect-enriched profunctory arguments and the fact that FDVect is Morita equivalent to (in fact, is the Cauchy completion of) the unit Vect-category. Does that seem plausible?

Posted by: Mike Shulman on September 29, 2012 2:59 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

(Todd, I don’t think I’ve never encountered the “standard” notation you refer to of Hom for set-valued homs and hom for internal ones.)

Did you mean to say “I don’t think I’ve ever encountered…”?

If that’s what you meant, let me assure you that capital-H Hom for set-valued homs and lower-case-h hom for internal ones seems to be the rule at the nLab, among other places. See for example starting here and continuing down the page, and also on the page for internal hom of chain complexes.

My natural instinct would be to use the lower case h for the humble external hom, and the upper case for the Haughtier enriched internal hom.

Posted by: Todd Trimble on October 11, 2012 3:57 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Huh, interesting. I nearly always write C(x,y) for the external hom. Would any author of those pages like to step up and defend that choice of notation?

Posted by: Mike Shulman on October 12, 2012 12:25 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

If you discover that I am one of those authors, please know that I do it to fit in with perceived conventions. As I so often find myself doing at the nLab, here and in other circumstances, with gritted teeth!

I also use C(x, y) often. That’s a good convention, but not everyone knows it.

Posted by: Todd Trimble on October 12, 2012 1:37 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

All the more reason to expose them to it!

The section you linked to at the nLab page internal hom was actually in conflict with the rest of that page, which uses [X,Y] for the internal hom and C(X,Y) for the external one. So I felt no compunction about changing it to make it simultaneously more consistent and better.

I don’t see the lower-case hom notation at internal hom of chain complexes, is that the page you meant?

Posted by: Mike Shulman on October 12, 2012 3:48 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Also, all of this suggests to me that rather than something like algebras or monoids, the most natural next case to consider would be modules over a fixed commutative ring, or perhaps sets with an action by a fixed abelian monoid or group. (Recalling for instance that sets are F1-modules…)

Posted by: Mike Shulman on September 27, 2012 4:31 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Though there was a case for pointed sets being the F_1-modules, and sets being the F_{\empty}-modules, where F_{\empty} is the field with no elements.

Talking of this, why not look at modules for all of Durov’s generalized rings, such as the abstract convex sets of 0.5.18 of Durov?

Posted by: David Corfield on September 27, 2012 4:58 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Of course, another interesting case would be Bool.

Posted by: Mike Shulman on September 27, 2012 4:36 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Very interesting paper and series of posts, Tom.

I am heartened to see that things like the double dualization monad and relatives have been and are of interest to category theorists - not least because I hope things in these circles of ideas might cast light on things the Banach algebra community have been hacking around with “by hand”. In particular, there is a sense in which your line

So, I’m inviting you to think of elements of a
double dual space as the linear analogues of ultrafilters

makes precise something that Banach algebra people seem to have been implicitly using as a vague folklore principle. (Passing to the topological bidual is a standard technique for transporting one’s problem into a setting where compactness can be used.)

Regarding your penultimate sentence: presumably you’ve tried the theory of associative algebras with identity? Or monoids? Can you quickly explain why this is/isn’t likely to be a straightforward recapitulation of the proof for vector spaces?

Posted by: Yemon Choi on September 26, 2012 2:50 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Thanks, Yemon. This —

Passing to the topological bidual is a standard technique for transporting one’s problem into a setting where compactness can be used

— sounds like one of those useful tools-of-someone-else’s-trade, not always easily discovered by those outside the guild.

Regarding your penultimate sentence: presumably you’ve tried the theory of associative algebras with identity? Or monoids?

Actually, no. I haven’t tried any other examples at all — I was hoping someone else might pick it up. I had the impression that finding a common generalization of these two analogous scenarios would be a fairly substantial project, and at the time when I was thinking about this, I just wanted to finish writing up. But I do hope it will be appealing to somebody.

(Just to elaborate a little bit: the case of vector spaces mentions affine subspaces, so we’ll need some analogue of those. For an arbitrary algebraic theory, I’d guess the analogous things are the equivalence classes of congruences.)

Posted by: Tom Leinster on September 26, 2012 3:42 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Yes, Tom, this is a very attractive story you’re telling in terms of integration!

I’d never actually studied these linearly compact spaces, but maybe you can tell me if I have the right idea with regard to the equivalence Vect^{op} \simeq LCVect. The idea is to regard the ground field k as both a k-module and as a discrete topological k-module (I don’t say “topological vector space” for reasons I’ll mention later), in effect as a dualizing object (which goes by another name that you famously dislike). In one direction we have a functor

\hom(-, k): Vect^{op} \to TopMod_k

which sends a vector space V to V^\ast = \hom(V, k), topologized as an inverse limit of discrete finite-dimensional spaces

\hom(V, k) = \hom(colim_{F \subseteq V} F, k) \cong lim_F \hom(F, k)

where the colimit inside the hom is over the directed system of finite-dimensional subspaces of V and the inclusions between them. This inverse limit turns out to be linearly compact.
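A toy check of the finite-dimensional part of this story, in Python over the two-element field F_2, where everything is finite and can be enumerated outright (the names `vectors`, `functionals` and `evaluate` are mine, not notation from the thread):

```python
from itertools import product

def vectors(n):
    """All vectors of F_2^n, as 0/1 tuples."""
    return list(product((0, 1), repeat=n))

def functionals(n):
    """Linear functionals on F_2^n: each is determined by its values on the
    standard basis, i.e. by a coefficient row, itself a vector of F_2^n."""
    return vectors(n)

def evaluate(row, v):
    """Apply the functional with coefficient row `row` to the vector `v`."""
    return sum(r * x for r, x in zip(row, v)) % 2

# dim V* = dim V in finite dimensions: F_2^2 has exactly 2^2 = 4 functionals.
assert len(functionals(2)) == 4

# Each candidate really is additive: f(u + v) = f(u) + f(v) over F_2.
for row in functionals(2):
    for u in vectors(2):
        for v in vectors(2):
            w = tuple((a + b) % 2 for a, b in zip(u, v))
            assert evaluate(row, w) == (evaluate(row, u) + evaluate(row, v)) % 2
```

For an infinite-dimensional V the dual is no longer in bijection with V, and it is exactly the inverse-limit topology above that remembers how V^\ast is glued together from the duals of the finite-dimensional subspaces.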

In the other direction, given a topological k-module W, we have a functor

\hom(-, k): TopMod_k^{op} \to Vect

which takes W to the vector space of continuous linear maps W \to k, where again k has the discrete topology. As usual in these situations, we have a contravariant adjunction between Vect and TopMod_k, and this restricts to an equivalence between Vect^{op} and LCVect. In particular, we can get a vector space V back from V^\ast by considering continuous functionals on its LC topology.

The reason I steer away from “topological vector space” is that the functional analysts have commandeered this term to mean a topologized vector space, usually over \mathbb{R} or \mathbb{C} but sometimes more generally over a local field k, where the scalar multiplication k \times V \to V is required to be continuous. In that context, it means something stronger than asking that each individual scalar give a continuous map V \to V, which is what we want here. So, to avoid possible misunderstanding, I say “topological k-module”.

Posted by: Todd Trimble on September 26, 2012 2:51 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

When k is discrete, isn’t a continuous multiplication k \times V \to V the same thing as each scalar acting continuously?

I think of the duality between vector spaces and linearly compact spaces as an instance of the kind of dualities in Johnstone’s Stone Spaces book. Clearly Vect is the ind-completion of FDVect and LCVect is the pro-completion of FDVect. Since FDVect is self-dual, the pro-completion of FDVect is dual to the ind-completion.
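Using the standard fact that \mathrm{Pro}(\mathcal{C}) \simeq \mathrm{Ind}(\mathcal{C}^{op})^{op}, Ben’s self-duality argument can be displayed as one chain of equivalences (a sketch, filling in the step his comment leaves implicit):

```latex
\mathbf{LCVect}
  \;\simeq\; \mathrm{Pro}(\mathbf{FDVect})
  \;\simeq\; \mathrm{Pro}(\mathbf{FDVect}^{op})
  \;\simeq\; \mathrm{Ind}(\mathbf{FDVect})^{op}
  \;\simeq\; \mathbf{Vect}^{op}.
```

The middle equivalence is induced by the self-duality \mathbf{FDVect} \simeq \mathbf{FDVect}^{op} given by V \mapsto V^\ast.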

The same generalizes to the world of coalgebras and pseudocompact algebras. The ind-completion of f.d. coalgebras is all coalgebras, and the pro-completion of f.d. algebras is the pseudocompact algebras. Since f.d. coalgebras and f.d. algebras are obviously dual, you get the duality.

Posted by: Benjamin Steinberg on September 26, 2012 3:40 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

When k is discrete, isn’t a continuous multiplication k \times V \to V the same thing as each scalar acting continuously?

Yes, of course, but still functional analysts seem to reserve TVS for the case where k is a locally compact Hausdorff non-discrete field (a local field). Maybe that’s an unfortunate restriction, but anyway that’s why I’m avoiding the term TVS here.

I think of the duality between vector spaces and linearly compact spaces as an instance of the kind of dualities in Johnstone’s Stone Spaces book. Clearly Vect is the ind-completion of FDVect and LCVect is the pro-completion of FDVect. Since FDVect is self-dual, the pro-completion of FDVect is dual to the ind-completion.

The same generalizes to the world of coalgebras and pseudocompact algebras. The ind-completion of f.d. coalgebras is all coalgebras, and the pro-completion of f.d. algebras is the pseudocompact algebras. Since f.d. coalgebras and f.d. algebras are obviously dual, you get the duality.

Indeed! I recall that we were discussing this in the MO thread.

Posted by: Todd Trimble on September 26, 2012 4:00 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

The assumption that the field is discrete confused me too in the first few paragraphs of Tom’s post. When he said

… the archetypal example of a “compact” vector space is a finite-dimensional vector space with the discrete topology

and then followed that with

for example, \mathbb{R}^2 is finite-dimensional but not compact

I felt confused, because \mathbb{R}^2, in my mind, doesn’t have the discrete topology either. Only later did I realize he was assuming that all fields have the discrete topology.

Posted by: Mike Shulman on September 26, 2012 2:23 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Thanks. Actually, when I mentioned \mathbb{R}^2, I had the Euclidean topology in mind, but I see now that the flow of ideas isn’t very clear. Fortunately it doesn’t matter, since in neither case is the thing compact. I’ve made a little edit to try to smooth this over.

Posted by: Tom Leinster on September 26, 2012 3:24 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Ben wrote:

When k is discrete, isn’t a continuous multiplication k \times V \to V the same thing as each scalar acting continuously?

Indeed — it’s just an irritating issue of terminology, as Todd says. I want to consider models in \mathbf{Top} of the theory of k-vector spaces (for a fixed, untopologized field k). Category theorists would call this a “k-vector space in \mathbf{Top}”.

I want to use the snappier term “topological vector space”, but the trouble is that the dominant usage of TVS demands that the ground field k is topologized (and that k \times V \to V is continuous). We can get round this by saying that k carries the discrete topology: then “k-vector space in \mathbf{Top}” and “TVS over k” mean the same thing.

Clearly Vect is the Ind-completion of FDVect and LCVect is the pro-completion of FDVect. Since FDVect is self dual, you have that the pro-completion of FDVect is dual to the Ind-completion.

I very much want to understand the duality \mathbf{Vect}^{op} \simeq \mathbf{LCVect} in the way you suggest. That’s really nice. But why is it “clearly” the case that \mathbf{LCVect} is the pro-completion of \mathbf{FDVect}?

Posted by: Tom Leinster on September 26, 2012 3:21 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

The open subspaces are precisely those of finite codimension, and hence linearly compact vector spaces are precisely inverse limits of finite-dimensional spaces. Moreover, one can check that a continuous homomorphism to a finite-dimensional vector space has open kernel. One can use this to prove that the hom-sets are as in the pro-completion.
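For reference (this is the standard hom-set formula in a pro-completion, not spelled out in Ben’s comment): writing V = \lim_i V_i and W = \lim_j W_j as inverse limits of finite-dimensional discrete spaces, the claim amounts to

```latex
\mathbf{LCVect}(V, W)
  \;\cong\; \lim_j \, \mathrm{colim}_i \, \mathbf{FDVect}(V_i, W_j),
```

which is exactly \mathrm{Pro}(\mathbf{FDVect})(V, W). The open-kernel observation is what makes every continuous map from V to a finite-dimensional W_j factor through some finite-dimensional quotient V_i, giving the inner colimit.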

The proof is similar to the proof in the profinite situation.

Posted by: Benjamin Steinberg on September 27, 2012 2:00 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

A historical answer to the question would be Lefschetz.

I have not got a copy of his algebraic topology book with me, but I think he introduced them as suitable coefficients for Čech homology, as the inverse limit construction needed there does not destroy the exactness of the long exact sequences if the modules involved are linearly compact. (The condition on coefficients was studied further by Garavaglia (Fund. Math. 1978, 100, pp. 89–95). It corresponds to a related form of compactness known as equational compactness.)

Returning to linearly compact modules, there is a paper by Dieudonné:

* American Journal of Mathematics, Vol. 73, No. 1, Jan., 1951

which is interesting as it relates to the double dual.

If one thinks of linear subspaces as solutions to linear equations, then one can extend to solutions of polynomial equations, and one gets the idea of equational compactness. (This relates to various constructions in universal algebra and back into model theory. There was a Springer Lecture Notes volume on equational compactness for rings (no. 745) [David K. Haley, 1976].) I remember ultraproducts and ultrapowers being of use in that stuff.

Another direction worth flagging up is the theory of duals of Grothendieck categories:

Duality theory for Grothendieck categories, Bull. Amer. Math. Soc. Volume 75, Number 6 (1969), 1401-1407.

A form of compactness comes in, in more or less the same way. This is also in

N. Popescu, Abelian Categories with Applications to Rings and Modules, Academic Press, 1973.

Posted by: Tim Porter on September 26, 2012 7:15 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Thanks, Tim. Again I forgot to put the history into my post — there’s something about expository writing that can make me forget to include attributions. It doesn’t always help an explanation of a mathematical concept to describe its historical context, but not doing so carries the risk that readers might think you’re claiming to have originated the concept yourself.

On the other hand, my paper is as carefully referenced as I could make it. There, the notion of linearly compact vector space is firmly attributed to Lefschetz.

Posted by: Tom Leinster on September 26, 2012 3:13 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

I was not intending criticism of your post; it was just that I like this area, and there are hints in what goes on in the duality theory for Grothendieck categories of a wider view that could be followed up by someone. Also the link with equational compactness could be worth looking at: there, if I remember rightly, ultraproducts come in in a big way and link up with the model theory aspect.

This looks like nice stuff, and we should put more of it on the nLab if time permits, including all the side issues. :-)

Posted by: Tim Porter on September 27, 2012 6:34 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

This stuff is really beautiful, Tom. Since compact Hausdorff spaces arise “inevitably” from the inclusion of finite sets into sets, might the inclusion of finite preorders into preorders give rise to a good notion of “directed topological space”? There are a plethora of competing notions in this area, so it would be nice if one were “inevitable”.

Posted by: Jon Woolf on September 28, 2012 12:38 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Excellent question, and I don’t know the answer. My guess is that it’s going to be an internal preorder in the category of compact Hausdorff spaces, which I’m sure isn’t what directed topologists want. But I could very well be wrong.

Posted by: Tom Leinster on September 28, 2012 2:02 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Maybe this has been mentioned, but what about the codensity monad for the inclusion of finitely generated abelian groups into all abelian groups? Will that give us, in Pontryagin style, compact Hausdorff abelian groups as algebras for the monad?

What about finite dimensional convex sets in all convex sets?

Posted by: David Corfield on September 28, 2012 12:45 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

These are all things that I hope someone will figure out!

My proof of the vector space case very much used the fact that we’re over a field, so it’s not immediately obvious that it can be adapted to other categories of modules. I haven’t given it much thought, though.

What exactly do you mean by “convex set”?

Posted by: Tom Leinster on September 28, 2012 2:59 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Or maybe I should have said ‘convex space’. Assuming the nLab has things right, convex space is a more general notion than convex set. The former admits a description as modules for a generalized ring. Yet Durov, as mentioned above, termed these modules (abstract) convex sets. So that’s all very confusing.

Posted by: David Corfield on September 28, 2012 3:12 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

It seems (p. 7) that the codensity monad for the inclusion of finite groups in all groups is profinite completion.

Same page (did you have this already?), for full embeddings \mathcal{D} \to \mathcal{C}:

If \mathcal{D}-completion exists for all objects, then it is a pointed endofunctor. The existence of \mathcal{D}-completion is guaranteed for all objects if inverse limits exist in \mathcal{C} and the class \mathcal{D} is a set, but also in other cases, for instance when \mathcal{D} is reflective.

Localizations are a special kind of such completions.

Posted by: David Corfield on September 29, 2012 8:27 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Re the first para, maybe Ben Steinberg mentioned this earlier in the conversation? Not sure. But thanks.

Re the second para, I gather that \mathcal{D}-completion means the codensity monad of \mathcal{D} \to \mathcal{C}. It’s not only a pointed endofunctor but, well, a monad. Its existence when the domain is small and the codomain has small limits is guaranteed, yes: that goes back to Kan, and I mentioned it in the first of this series of posts. I think the statement that “localizations are a special kind of such completions” is a case of the statement that if a functor G has a left adjoint F, then the codensity monad of G is just G F.

Posted by: Tom Leinster on September 29, 2012 7:45 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

So they’re denoting by localization any idempotent completion. So we find that

Some examples of localizations in the category of groups are abelianization, hypoabelianization (i.e., dividing out the perfect radical), and localization at primes.

Are the last two adjoints to inclusions?

Posted by: David Corfield on September 30, 2012 4:27 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Scattered thoughts while nursing minor ailments:

Reading more carefully the section of Tom’s paper dealing with FDVect and Vect, I was a bit surprised to see the identification of a finite-dimensional vector space B with k^n at one point. Tom mentions above that there could well be a slicker construction of “integration against an element of X^{**}”, and I think the following might do the job.

(I haven’t been keeping up with the thread, so apologies if what follows has already been mentioned or suggested or refuted.)

I’m going to denote the internal hom in Vect by L(-, -). Given vector spaces X and B, we observe that “composition of linear maps” gives us an arrow

L(X,B) \otimes B^* = L(X,B) \otimes L(B, k) \to L(X,k) = X^*

Now hit everything with L(-, k), i.e. dualize (or take “adjoints” in the linear algebra sense), and use hom-tensor adjointness:

X^{**} \to L( L(X,B) \otimes B^*, k) \cong L( L(X,B), B^{**})

In the case where B is finite-dimensional, B^{**} is naturally isomorphic to B, and so we have obtained a natural-looking linear map X^{**} \to L( L(X,B), B). I have not checked that it does everything it should, though.
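This map can be written out concretely in a finite toy model (a sketch in my own notation, not Yemon’s: over F_2, with B = F_2^2, the names `ev` and `integrate` are mine). Given \phi \in X^{**} and f : X \to B, the map sends them to the vector whose i-th coordinate is \phi(\pi_i \circ f):

```python
from itertools import product

X = list(product((0, 1), repeat=2))  # the four vectors of X = F_2^2

def ev(x):
    """The evaluation element of X**: ev(x)(g) = g(x)."""
    return lambda g: g(x)

def integrate(phi, f):
    """Push phi in X** forward along f : X -> F_2^2, coordinate by coordinate:
    (phi(pi_1 . f), phi(pi_2 . f))."""
    return (phi(lambda x: f(x)[0]), phi(lambda x: f(x)[1]))

def f(x):
    """Some linear map X -> F_2^2, for testing."""
    return ((x[0] + x[1]) % 2, x[0])

# Sanity check: on evaluation functionals the construction collapses to
# evaluation itself, i.e. integrate(ev(x), f) == f(x) for every x -- the
# unit of the monad behaves like a family of Dirac measures.
for x in X:
    assert integrate(ev(x), f) == f(x)
```

The interesting elements of X^{**}, of course, are the ones that are not evaluations; those only exist when X is infinite-dimensional, so this toy can only exhibit the unit, not a genuinely new “integral”.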

The appearance of tensor here makes me wonder if the closed structures on Set and Vect are important to the parallels between the codensity monads for inclusions of FinSet and FDVect. I think someone on this thread suggested that other categories to try next would be modules over a fixed ring, or acts over a semigroup… Maybe bimodules over a ring with augmentation might give a closer parallel to the existing cases?

Obviously I have Ban on my mind here, although illness and a work backlog make it unlikely I’ll be able to give that case a serious go any time soon. The above suggests that one could look at the inclusion of reflexive Banach spaces into all Banach spaces, not least because Tom’s exposition suggests that maybe the closest you can get to making a Banach space into a reflexive one is to create double duals. (This may or may not be a correct guess but it doesn’t seem an unreasonable one to me.)

Posted by: Yemon Choi on October 8, 2012 6:20 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Excellent. That is indeed the map we need. Thanks.

It’s good to know about this slicker construction, which may point the way to useful generalizations. However, I don’t think it absolves us of having to prove something like Proposition 7.1 (which is where I explicitly use the fact that a finite-dimensional vector space is isomorphic to k^n for some n).

The reason is that in the end formula for the codensity monad,

T(X) = \int_{B \in \mathbf{FDVect}} [\mathbf{Vect}(X, B), B],

the square brackets denote a power (iterated product) in \mathbf{Vect}, so that an element of [\mathbf{Vect}(X, B), B] is a map of sets \mathbf{Vect}(X, B) \to B. So with things set up as they currently are, we sometimes have to step out of the linear world.

On the other hand, we could change the set-up. Jiri Velebil suggested by email that perhaps it would be better to work in an enriched setting. (Maybe someone said that here too; I can’t remember.) I wouldn’t be surprised if that was a fruitful thing to do.

Posted by: Tom Leinster on October 8, 2012 12:22 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

I suggested some enriched profunctory arguments up here.

Posted by: Mike Shulman on October 11, 2012 1:12 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

There’s often quite a song and dance made when nonprincipal ultrafilters are introduced to a class, especially about how even if they exist, they can’t be given explicitly. Somehow elements of the double dual of a vector space which are not evaluations at an element don’t seem so exciting. Do they even have a name?

Posted by: David Corfield on October 8, 2012 9:45 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Interesting point. I certainly haven’t heard of a name for them. (I’ve also never understood the name “free ultrafilter” to mean “nonprincipal ultrafilter”.)

I don’t have even one minute to think about this, but does one need any form of choice to construct a vector space X and an element of X^{**} \setminus X? I.e. in a choiceless world, is it true that all vector spaces are reflexive?

Posted by: Tom Leinster on October 8, 2012 4:48 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Tom, this sort of question seems to come up every now and then on MathOverflow. See for example here.

Posted by: Todd Trimble on October 8, 2012 5:00 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Thanks, Todd. Andreas Blass’s answer proves that “the canonical map V \to V^{**} can consistently, in the absence of the axiom of choice, be surjective”. Here V is the real vector space on a countably infinite basis. That’s more or less enough to convince me that, for a general vector space V, elements of V^{**} that are not evaluations should be thought of as no less mysterious than ultrafilters that are not principal.

(I don’t find non-principal ultrafilters particularly mysterious, for what it’s worth.)

Posted by: Tom Leinster on October 8, 2012 11:28 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

I agree with that sentiment.

This reminds me of something I find interesting. Start with a free \mathbb{Z}-module of countable rank, F. The algebraic dual of F (homming into \mathbb{Z}) is a countable product of copies of \mathbb{Z}. What’s the algebraic dual of that?

The answer is: you’re back where you started from! In other words, the canonical map from F to its double algebraic dual is an isomorphism. I believe this is a theorem in ZF (choice-free mathematics), but I’d have to check to be sure. I believe the result is due to Kurosh.

Posted by: Todd Trimble on October 9, 2012 12:20 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

This fact comes up on MO from time to time as well: here is Kevin Buzzard’s elegant proof. It wouldn’t be hard to make this choice-free.

Posted by: David Speyer on October 9, 2012 2:38 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

Thanks very much to everyone for their comments on this series of posts. I’ve now revised and re-arXived the paper in the light of the comments. I didn’t want to change it too much, so it’s mostly been a case of updating the references.

Posted by: Tom Leinster on October 11, 2012 10:06 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

As was pointed out in a comment above, the codensity monad of the inclusion
FiniteGroups -> Groups
is the profinite completion. Does anyone know what the algebras for this monad are? Profinite groups?

Posted by: Werner Thumann on December 13, 2012 3:08 PM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

I’m surprised that nobody has talked about this in the context of Paul Taylor’s amazing theory, Abstract Stone Duality (ASD).

In fact, the result is an easy corollary of Abstract Stone Duality!

Abstract Stone Duality is a way of talking about setups involving conditions like “proper”, “Hausdorff”, “separated”, “closed map”, “open map”, etc., using entirely categorical language. Models of ASD (and its variants) include schemes, locales, sober spaces, self-double-dual topological vector spaces, spectral spaces, the étale topology in algebraic geometry, and much more.

We start with two basic ingredients (I prefer to think of these ingredients as paradigms subject to some tweaking - for instance, taking a monoidal closed category instead of a cartesian closed category):

(1) We start with a monad T on a cartesian category C, whose Eilenberg–Moore category we call E(T), and whose canonical adjunction (F, G) between C and E(T) is a categorical equivalence. This is also called an “Isbell duality” by some authors, though others consider Isbell duality to be a larger paradigm. Note that we can view this as a contravariant right adjoint equivalence if we concentrate on the opposite category of E(T); call it D. So we have contravariant right adjoints F : C \rightarrow D and G : D \rightarrow C. We think of D as being lattices or algebras of some kind, and C as being spaces. Now, under broad circumstances, there is a “dualizing object”, since G(A) \cong [i, G(A)] \cong [A, F(i)] and F(X) \cong [*, F(X)] \cong [X, G(*)], where i is the initial object in D and * is the terminal object in C. There are general contexts in which F(*) and G(i) are isomorphic as objects of C, and so we call this setup an Isbell duality, since we go back and forth by homming with a single object, viewed as an object of C or of D.

(2) Let S be the dualizing object which we get from (1). Let \top, \bot : * \rightarrow S be two maps. We say that A \rightarrow X is closed if it results from a pullback of X \rightarrow S along \bot, and open if it results from a pullback of X \rightarrow S along \top. We call S the Sierpiński space. We require that the maps * \rightarrow S have the structure of a distributive lattice (recently I’ve been thinking about a number of variations on this).

This structure alone (which does not arrive at describing the full commonality behind the mentioned examples) is enough to define open maps, proper maps, compact spaces, and Hausdorff spaces. It also exposes a deep duality between “open” definitions and “closed” definitions. For instance, Hausdorff means that X \rightarrow X \times X is closed, and discrete means that X \rightarrow X \times X is open.

In particular, discrete k-vector spaces are the “cocompact discrete” objects and compact Hausdorff spaces are the “compact Hausdorff” objects. Cocompact is called overt, and sometimes doesn’t show up, since it is common for all objects to be overt (which is the case here). Locales are an example where not all objects are overt.

What I’ve given here is the version for spaces, and it follows almost immediately that compact Hausdorff spaces are algebras for the double dual monad on sets (which are the discrete overt objects), along with the duality theorem for spatial locales and sober spaces. You can tweak this to give the right setup for linear categories, giving the theorem discussed here.

You can read more about this amazing theory, due to Paul Taylor, at his Abstract Stone Duality site. Or, if you want some help along the way (since the current presentations are hard to sort through), then email me at edeany (at) nyu.edu, because I love to talk about these ideas!

P.S. The paradigm of ASD suggests another way of seeing that the double dual functor on discrete vector spaces is monadic: discrete k-vector spaces reside inside a larger category C of topological vector spaces which are their own double duals. The double dual functor extends to this category naturally, and we get that C \cong C^{op}. Algebras for the double dual on this category are precisely C^{op}, and it follows that algebras for the double dual on discrete k-vector spaces are the linearly compact ones.

Posted by: Dean Young on December 29, 2020 4:12 AM | Permalink | Reply to this

Re: Where Do Linearly Compact Vector Spaces Come From?

The paper here suggests a relationship between compact Hausdorff topological groups and linearly compact topological groups:

https://web.archive.org/web/20180727052524id_/https://www.cambridge.org/core/services/aop-cambridge-core/content/view/3911E7E3BA80F4CB19BA1567CD1A396A/S1446788700007369a.pdf/div-class-title-note-on-linearly-compact-abelian-groups-div.pdf

I’m interested in modifying the proofs here, switching “Set” to “Abelian Group” and “{0,1}” to “S¹”:

https://ncatlab.org/nlab/show/compactum

The article here is saying something similar to “algebras for the monad given by [[-,S¹],S¹] are compact Hausdorff topological abelian groups”.

In the above, the dual gets the discrete topology, and S¹ is regarded simply as an abelian group.

Perhaps compact Hausdorff topological abelian groups are algebras for the double dual [[A,S¹],S¹] → A?

Posted by: Dean Young on February 9, 2024 6:21 PM | Permalink | Reply to this
