Magnitude Homology
Posted by Tom Leinster
I’m excited that over on this thread, Mike Shulman has proposed a very plausible theory of magnitude homology. I think his creation could be really important! It’s general enough that it can be applied in lots of different contexts, meaning that lots of different kinds of mathematician will end up wanting to use it.
However, the story of magnitude homology has so far only been told in that comments thread, which is very long, intricately nested, and probably only being followed by a tiny handful of people. And because I think this story deserves a really wide readership, I’m going to start afresh here and explain it from the beginning.
Magnitude is a numerical invariant of enriched categories. Magnitude homology is an algebraic invariant of enriched categories. The Euler characteristic of magnitude homology is magnitude, and in that sense, magnitude homology is a categorification of magnitude. Let me explain!
I’ll explain twice: a short version, then a long version. After that, there’s a section going into some of the details that I wanted to keep tucked out of the way. Choose the level of detail you want!
So that I don’t have to keep saying it, almost everything here that’s new is due to Mike Shulman, who put these ideas together on the other thread. Some aspects were present in work that Aaron Greenspan did during his master’s year with me (2014–15); you can read his MSc thesis here. But Aaron and I didn’t get very far, and it was Mike who made the decisive contributions and to whom this theory should be attributed.
The short version
I won’t actually give the definition here — I’ll just sketch its shape.
Let $V$ be a semicartesian monoidal category. Semicartesian means that the unit object of $V$ is terminal. This isn’t as unnatural a condition as it might seem!
Let $X$ be a small $V$-category (a category enriched in $V$). Small means that the collection of objects of $X$ is small (a set).
Let $A \colon V \to \mathrm{Ab}$ be a small functor. In this context, small means that $A$ is the left Kan extension of its restriction to some small full subcategory of $V$. This condition holds automatically if the category $V$ is small, as it often will be for us.
From this data, we define a sequence $H_0(X; A), H_1(X; A), \ldots$ of abelian groups, called the (magnitude) homology of $X$ with coefficients in $A$. Dually, given instead a contravariant functor $A$ from $V$ to $\mathrm{Ab}$, there is a sequence of cohomology groups $H^n(X; A)$. But we’ll concentrate on homology.
As for any notion of homology, we can attempt to form the Euler characteristic

$$\chi(X; A) = \sum_{n \geq 0} (-1)^n \, \mathrm{rank}\, H_n(X; A).$$

Depending on $X$ and $A$, it may or may not be possible to make sense of this infinite sum.
Examples:
When $V = \mathrm{Set}$ and $A$ is chosen suitably, we recover the notion of homology and Euler characteristic of an ordinary category. What do “homology” and “Euler characteristic” mean for an ordinary category? There are several equivalent answers; one is that they’re just the homology and Euler characteristic of the topological space associated to the category, called its geometric realization or classifying space. The Euler characteristic of a category is also called its magnitude.
When $V$ is the poset $\mathbb{N} \cup \{\infty\}$, made monoidal by taking $\otimes$ to be addition, graphs can be understood as special $V$-categories. By choosing suitable values of $A$, we obtain Hepworth and Willerton’s magnitude homology of a graph. Its Euler characteristic is the magnitude of a graph.
When $V$ is the poset $[0, \infty]$, made monoidal by taking $\otimes$ to be addition, metric spaces can be understood as special $V$-categories. By choosing suitable values of $A$, we obtain a new notion of the magnitude homology of a metric space. Subject to convergence issues that haven’t been fully worked out yet, its Euler characteristic is the magnitude of a metric space.
The long version
Again, let’s start by fixing a semicartesian monoidal category $V$. I’ll use the letter $\ell$ for a typical object of $V$, because an important motivating case is where $V = [0, \infty]$, and in that case the objects of $V$ are thought of as lengths.
Aside Actually, you can be a bit more general and work with an arbitrary monoidal category $V$ equipped with an augmentation, as described here, or you can do something more general still. But I’ll stick with the simpler hypothesis of semicartesianness.
Step 1 Let $X$ be a small $V$-category. We define a kind of nerve $N(X)$. The nerve of an ordinary category is a single simplicial set, but for us $N(X)$ will be a functor on $V$ into the category of simplicial sets. For $\ell \in V$, the simplicial set $N(X)(\ell)$ is defined by

$$N(X)(\ell)_k = \coprod_{x_0, \ldots, x_k \in X} V\bigl(\ell, \; X(x_0, x_1) \otimes \cdots \otimes X(x_{k-1}, x_k)\bigr)$$

($k \geq 0$). The degeneracy maps are given by inserting identities. The inner face maps are given by composition. The outer face maps are defined using the unique maps from the first factor $X(x_0, x_1)$ and the last factor $X(x_{k-1}, x_k)$ to the unit object of $V$. (There are unique such maps because $V$ is semicartesian.)
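To make this concrete in the motivating case $V = [0, \infty]$, where $V(\ell, m)$ has exactly one element when $\ell \geq m$ and none otherwise: for a metric space $X$, the $k$-simplices of $N(X)(\ell)$ are just the tuples $(x_0, \ldots, x_k)$ of points whose consecutive distances sum to at most $\ell$. Here’s a minimal Python sketch enumerating them — the two-point space and its distances are made up for illustration:

```python
from itertools import product

def nerve_simplices(points, d, k, ell):
    """k-simplices of N(X)(ell) for a finite metric space X: tuples
    (x_0, ..., x_k) with d(x_0, x_1) + ... + d(x_{k-1}, x_k) <= ell."""
    simplices = []
    for tup in product(points, repeat=k + 1):
        if sum(d[tup[i]][tup[i + 1]] for i in range(k)) <= ell:
            simplices.append(tup)
    return simplices

# A made-up two-point space at distance 1.
points = ["a", "b"]
d = {"a": {"a": 0, "b": 1}, "b": {"a": 1, "b": 0}}
```

The degeneracies repeat a point (adding distance $0$), and the face maps either compose two consecutive steps or drop an endpoint; both can only decrease the total length, so they really do land back in $N(X)(\ell)$.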
Mike wrote $MS(X)$ instead of $N(X)$. I guess he intended the M to stand for magnitude and the S to stand for simplicial. I’m using $N$ because I want to emphasize that it’s a kind of nerve. Still, half of me regrets removing the notation MS from a construction described by Mike Shulman.
Steps 2 and 3 Let $C(X)$ be the composite functor

$$V \xrightarrow{\;N(X)\;} \mathrm{SSet} \xrightarrow{\;\mathbb{Z} \cdot -\;} \mathrm{SAb} \longrightarrow \mathrm{Ch}.$$

Here $\mathrm{SAb}$ is the category of simplicial abelian groups, $\mathrm{Ch}$ is the category of chain complexes of abelian groups, and the functor $\mathbb{Z} \cdot - \colon \mathrm{SSet} \to \mathrm{SAb}$ is induced by the free abelian group functor $\mathbb{Z} \cdot - \colon \mathrm{Set} \to \mathrm{Ab}$. The unlabelled functor $\mathrm{SAb} \to \mathrm{Ch}$ sends a simplicial abelian group to either its unnormalized chain complex or its normalized chain complex. It won’t matter which we use, for reasons I’ll explain in the details section below.
Notice that $C(X)$ isn’t a single chain complex; it’s a functor from $V$ into the category of chain complexes. There’s one chain complex $C(X)(\ell)$ for each object $\ell$ of $V$.
Step 4 Now we bring in the other piece of data: a small functor $A \colon V \to \mathrm{Ab}$, which I’ll call the functor of coefficients. Actually, everything that follows makes sense in the more general context of a functor $A \colon V \to \mathrm{Ch}$, where $\mathrm{Ab}$ is thought of as a subcategory of $\mathrm{Ch}$ by viewing an abelian group as a chain complex concentrated in degree zero. But we don’t seem to have found a purpose for that extra generality, so I’ll stick with $\mathrm{Ab}$.
We form the tensor product $C(X) \otimes_V A$ of $C(X)$ with $A$. By definition, this is the chain complex defined by the coend formula

$$C(X) \otimes_V A = \int^{\ell \in V} C(X)(\ell) \otimes A(\ell).$$

The tensor product on the right-hand side is the tensor product of chain complexes. Under our assumption that $A(\ell)$ is concentrated in degree zero, its $n$th component is simply $C_n(X)(\ell) \otimes A(\ell)$.
Explicitly, this coend is the coproduct over all $\ell \in V$ of the chain complexes $C(X)(\ell) \otimes A(\ell)$, quotiented out by one relation for each map $\ell \to m$ in $V$. Which relation? Well, given such a map, you can write down two maps from $C(X)(m) \otimes A(\ell)$ to the coproduct I just mentioned, and the relation states that they’re equal.
This coend exists because of the smallness assumption on $A$. Indeed, by definition of small functor, there exists some small full subcategory $W$ of $V$ such that $A$ is the left Kan extension of $A|_W$ along the inclusion $W \hookrightarrow V$. Then $\int^{\ell \in W} C(X)(\ell) \otimes A(\ell)$ exists because $W$ is small and $\mathrm{Ch}$ has small colimits, and you can show that it has the defining universal property of the coend above. So $C(X) \otimes_V A$ exists and is equal to $\int^{\ell \in W} C(X)(\ell) \otimes A(\ell)$.
We have now constructed from $X$ and $A$ a single chain complex $C(X) \otimes_V A$.
If you choose to use unnormalized chains, you can unwind the coend formula to get a simple explicit formula for $C(X) \otimes_V A$:

$$(C(X) \otimes_V A)_n = \bigoplus_{x_0, \ldots, x_n \in X} A\bigl(X(x_0, x_1) \otimes \cdots \otimes X(x_{n-1}, x_n)\bigr),$$

with the differential that you’d guess. (This formula does assume that $A$ is a functor from $V$ into $\mathrm{Ab}$ rather than $\mathrm{Ch}$. For $\mathrm{Ch}$-valued $A$, the formula becomes slightly more complicated.) I don’t think there’s such a simple formula for normalized chains, at least for general $V$.
Step 5 The (magnitude) homology of $X$ with coefficients in $A$, written as $H_\bullet(X; A)$, is the homology of the chain complex $C(X) \otimes_V A$. In other words, $H_n(X; A)$ is the $n$th homology group of $C(X) \otimes_V A$, for $n \geq 0$.
For the definition of cohomology, let instead $A$ be a small contravariant functor from $V$ to $\mathrm{Ab}$. Then we can form the chain complex

$$\mathrm{Hom}_V\bigl(C(X), A\bigr) = \int_{\ell \in V} \mathrm{Hom}\bigl(C(X)(\ell), A(\ell)\bigr).$$

The $\mathrm{Hom}$ on the right-hand side denotes the closed structure on the monoidal category of chain complexes. And $H^\bullet(X; A)$, the cohomology of $X$ with coefficients in $A$, is defined as the homology of the chain complex $\mathrm{Hom}_V(C(X), A)$.
Everything is functorial in the way it should be: homology is covariant in $X$, cohomology is contravariant in $X$, and both are covariant in the functor of coefficients $A$.
Example: ordinary categories
When $V = \mathrm{Set}$, a small $V$-category is just a small category $X$.
The functor $N(X)$ sends a set $S$ to the $S$th power of the ordinary nerve. So, we might suggestively write $N(X)(S)$ as $N(X)^S$ instead.
Now let’s think about the functor of coefficients, which is some small functor $A \colon \mathrm{Set} \to \mathrm{Ab}$. For $A$ to be small means exactly that there is some small full subcategory $W$ of $\mathrm{Set}$ such that $A$ is the left Kan extension of $A|_W$ along the inclusion $W \hookrightarrow \mathrm{Set}$. For instance, choose an abelian group $B$ and define $A(S)$ to be the coproduct of $S$ copies of $B$. Then $A$ is small, since if we take $W$ to be the full subcategory consisting of just the one-element set then $A$ is the left Kan extension of its restriction to $W$. Let’s write $A$ as $A_B$.
The general definition gives us homology groups $H_n(X; A_B)$ for every small category $X$ and abelian group $B$. These homology groups, more normally written as $H_n(X; B)$, are actually something familiar. In simplicial terms, they’re simply the homology of the ordinary nerve of $X$ (with coefficients in $B$). In terms of topological spaces, they’re just the homology of the geometric realization (classifying space) of $X$.
Example: graphs
Let $V = \mathbb{N} \cup \{\infty\}$, a poset seen as a category. The objects of $V$ are the natural numbers together with $\infty$, there’s exactly one map $\ell \to m$ when $\ell \geq m$, and there are no maps $\ell \to m$ when $\ell < m$. It’s a monoidal category under addition. Any graph $G$ can be seen as a $V$-category: the objects are the vertices, and $G(x, y)$ is the number of edges in a shortest path from $x$ to $y$ (understood to be $\infty$ if there is no such path at all).
So, we’re going to get a homology theory of graphs.
What about the coefficients? Well, the first point is that we don’t have to worry about the smallness condition. The category $V = \mathbb{N} \cup \{\infty\}$ is small, so it’s automatic that any functor on $V$ is small too.
The second, important, point is that every object $\ell$ of $V$ gives rise to a functor $A_\ell \colon V \to \mathrm{Ab}$, defined by

$$A_\ell(m) = \begin{cases} \mathbb{Z} & \text{if } m = \ell \\ 0 & \text{otherwise} \end{cases}$$

($m \in V$). We’re going to use $A_\ell$ as our functor of coefficients.
So, for any graph $G$ and natural number $\ell$, we get homology groups $H_n(G; A_\ell)$. It turns out that $H_n(G; A_\ell)$ is exactly what Richard Hepworth and Simon Willerton called the magnitude homology group $MH_{n,\ell}(G)$.
I’ll repeat Richard and Simon’s definition here, so that you can see concretely what Mike’s general theory actually produces in a specific situation. Let $G$ be a graph. For integers $n, \ell \geq 0$, let $MC_{n,\ell}(G)$ be the free abelian group on the set

$$\bigl\{ (x_0, \ldots, x_n) : x_0 \neq x_1 \neq \cdots \neq x_n, \; d(x_0, x_1) + \cdots + d(x_{n-1}, x_n) = \ell \bigr\}.$$

For $0 < i < n$, define $\partial_i \colon MC_{n,\ell}(G) \to MC_{n-1,\ell}(G)$ by

$$\partial_i(x_0, \ldots, x_n) = \begin{cases} (x_0, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n) & \text{if } d(x_{i-1}, x_{i+1}) = d(x_{i-1}, x_i) + d(x_i, x_{i+1}) \\ 0 & \text{otherwise.} \end{cases}$$

Then define $\partial \colon MC_{n,\ell}(G) \to MC_{n-1,\ell}(G)$ by $\partial = \sum_{0 < i < n} (-1)^i \partial_i$. This gives a chain complex for each natural number $\ell$. The Hepworth–Willerton magnitude homology group $MH_{n,\ell}(G)$ is defined to be its $n$th homology.
So, this two-case formula for the differential, involving the triangle inequality, somehow comes out of Mike’s general definition. I’ll explain how in the details section below.
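If you want to experiment, the definition is easy to compute for small graphs. Here’s a self-contained Python sketch (the function names are mine, and ranks are computed over $\mathbb{Q}$, so torsion is invisible): it enumerates the generators of $MC_{n,\ell}(G)$, builds the differential, and returns $\mathrm{rank}\, MH_{n,\ell}(G)$.

```python
from fractions import Fraction
from itertools import product
import collections

def distances(adj):
    """All-pairs shortest-path (edge-count) distances, by BFS.
    Vertices missing from dist[s] are at infinite distance from s."""
    dist = {}
    for s in adj:
        d = {s: 0}
        queue = collections.deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    queue.append(v)
        dist[s] = d
    return dist

def generators(verts, d, n, ell):
    """Basis of MC_{n,ell}(G): tuples (x_0, ..., x_n) with consecutive
    entries distinct and consecutive distances summing to exactly ell."""
    gens = []
    for tup in product(verts, repeat=n + 1):
        if any(tup[i] == tup[i + 1] or tup[i + 1] not in d[tup[i]]
               for i in range(n)):
            continue
        if sum(d[tup[i]][tup[i + 1]] for i in range(n)) == ell:
            gens.append(tup)
    return gens

def boundary(gens_n, gens_prev, d):
    """Matrix of the differential MC_{n,ell} -> MC_{n-1,ell}: delete x_i
    (0 < i < n) exactly when the triangle inequality is an equality."""
    index = {g: i for i, g in enumerate(gens_prev)}
    M = [[0] * len(gens_n) for _ in gens_prev]
    for j, g in enumerate(gens_n):
        for i in range(1, len(g) - 1):
            a, b, c = g[i - 1], g[i], g[i + 1]
            if c in d[a] and d[a][c] == d[a][b] + d[b][c]:
                M[index[g[:i] + g[i + 1:]]][j] += (-1) ** i
    return M

def rank(M):
    """Rank of an integer matrix, by Gaussian elimination over Q."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

def mh_rank(adj, n, ell):
    """rank MH_{n,ell}(G) = dim ker(d_n) - rank(d_{n+1}), over Q."""
    d = distances(adj)
    verts = sorted(adj)
    gens = {k: generators(verts, d, k, ell)
            for k in (n - 1, n, n + 1) if k >= 0}
    r_n = rank(boundary(gens[n], gens[n - 1], d)) if n >= 1 and gens[n] else 0
    r_n1 = rank(boundary(gens[n + 1], gens[n], d)) if gens[n] and gens[n + 1] else 0
    return len(gens[n]) - r_n - r_n1
```

For the two-vertex complete graph, every generator alternates between the two vertices and no inner face can satisfy the triangle equality, so the differential vanishes and the homology is concentrated on the diagonal $n = \ell$, with rank $2$ there.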
Incidentally, Richard and Simon proved a Künneth theorem, an excision theorem and a Mayer–Vietoris theorem for their magnitude homology of graphs. Can these be generalized to the magnitude homology of arbitrary enriched categories?
Example: metric spaces
Let $V$ be the poset $[0, \infty]$, made into a monoidal category in the same way that $\mathbb{N} \cup \{\infty\}$ was. As Lawvere pointed out long ago, any metric space can be seen as a $V$-category.
So, we get a homology theory of metric spaces. More exactly, we have a graded abelian group $H_\bullet(X; A)$ for each metric space $X$ and functor $A \colon [0, \infty] \to \mathrm{Ab}$. Exactly as for graphs, every element $\ell \in [0, \infty]$ gives rise to a functor $A_\ell$, taking value $\mathbb{Z}$ at $\ell$ and $0$ elsewhere. So we get a group $H_n(X; A_\ell)$ for each $n$ and $\ell$.
Explicitly, this group turns out to be the same as the group $MH_{n,\ell}(X)$ that you get from Hepworth and Willerton’s definition above by simply crossing out the word “graph” and replacing it by “metric space”, and letting $\ell$ range over $[0, \infty]$ rather than $\mathbb{N} \cup \{\infty\}$.
But here’s the thing. There are some metric spaces, including most finite ones, where the triangle inequality is never an equality (except in the obvious trivial situations). For such spaces, the Hepworth–Willerton differential is always $0$. Hence the homology groups are the same as the chain groups, which tend to be rather large. For instance, that’s almost always the case when $X$ is a random finite collection of points in Euclidean space. So homology fails to do its usual job of summarizing useful information about the space.
In that situation, we might prefer to use different coefficients. So, let’s think again about the construction of the functor $A_\ell$ from the object $\ell$. This construction makes sense for any partially ordered set $V$, and it also makes sense not only for single elements (objects) of $V$, but arbitrary intervals in $V$.
What I mean is the following. An interval in a poset $P$ is a subset $I \subseteq P$ with the property that if $a \leq b \leq c$ in $P$ with $a, c \in I$ then $b \in I$. For any interval $I$, there’s a functor $A_I \colon P \to \mathrm{Ab}$ defined on objects by

$$A_I(m) = \begin{cases} \mathbb{Z} & \text{if } m \in I \\ 0 & \text{otherwise.} \end{cases}$$

It’s defined on maps by sending everything to either a zero map or the identity on $\mathbb{Z}$. For instance, if $I = \{\ell\}$ is a trivial interval then $A_I$ is the functor $A_\ell$ that we met before.
I observed a few paragraphs back that when $X$ is a finite metric space, $H_\bullet(X; A_\ell)$ typically isn’t very interesting. However, it seems likely that $H_\bullet(X; A_I)$ is more interesting for nontrivial intervals $I$. The idea is that it introduces some blurring, to compensate for the fact that the triangle inequality is never exactly an equality. And here we get into territory that seems close to that of persistent homology… but this connection still needs to be explored!
Decategorification: from homology to magnitude
For any homology theory of any kind of object $X$, we can attempt to define the Euler characteristic of $X$ as the alternating sum of the ranks of the homology groups. We immediately have to ask whether that sum makes sense.
It may be that only finitely many of the homology groups are nontrivial, in which case there’s no problem. Or it may be that infinitely many of the groups are nontrivial, but the Euler characteristic can be made sense of using one or other technique for summing divergent series. Or, it may be that the sum is beyond salvation. Typically, if you want the Euler characteristic to make sense — or even just in order for the ranks to be finite — you’ll need to impose some sort of finiteness condition on the object that you’re taking the homology of.
The idea — perhaps the entire point of magnitude homology — is that its Euler characteristic should be equal to magnitude. For some enriching categories , we have a theorem saying exactly that. For others, we don’t… but we do have some formal calculations suggesting that there’s a theorem waiting to be found. We haven’t got to the bottom of this yet.
I’ll say something about the general situation, then I’ll explain the state of the art in the three examples above.
In general, for a semicartesian monoidal category $V$, a small $V$-category $X$, and a small functor $A \colon V \to \mathrm{Ab}$, we want to define the Euler characteristic of $X$ with coefficients in $A$ as

$$\chi(X; A) = \sum_{n \geq 0} (-1)^n \, \mathrm{rank}\, H_n(X; A).$$
Here’s how it looks in our three running examples: categories, graphs and metric spaces.
In the case $V = \mathrm{Set}$, we’re talking about the Euler characteristic of a category $X$. Take $A = A_{\mathbb{Z}}$, as defined above. Then the homology group $H_n(X; A_{\mathbb{Z}})$ is equal to $H_n(X; \mathbb{Z})$, the $n$th homology of the category $X$ with coefficients in $\mathbb{Z}$. That’s the same as the $n$th homology of the nerve (or its geometric realization).
To make sense of $\chi(X; A_{\mathbb{Z}})$, we impose a finiteness condition. Assume that the category $X$ is finite, skeletal, and contains no nontrivial endomorphisms. Then the nerve of $X$ has only finitely many nondegenerate simplices, from which it follows that only finitely many of the homology groups $H_n(X; \mathbb{Z})$ are nontrivial. So, the sum is finite and makes sense.
Under these finiteness hypotheses, what actually is $\chi(X; A_{\mathbb{Z}})$? Since $H_n(X; A_{\mathbb{Z}})$ is the $n$th homology of the nerve of $X$ with integer coefficients, $\chi(X; A_{\mathbb{Z}})$ is the ordinary (simplicial/topological) Euler characteristic of the nerve of $X$. And it’s a theorem that this is equal to the Euler characteristic of the category $X$, defined combinatorially and also called the “magnitude” of $X$.
So for a small category $X$, the Euler characteristic of the magnitude homology is indeed the magnitude of $X$. In other words: magnitude homology categorifies magnitude.
Take a graph $G$, seen as a category enriched in $V = \mathbb{N} \cup \{\infty\}$. For each natural number $\ell$, we can try to define the Euler characteristic

$$\chi_\ell(G) = \sum_{n \geq 0} (-1)^n \, \mathrm{rank}\, H_n(G; A_\ell).$$
I said earlier that these homology groups are the same as Hepworth and Willerton’s homology groups $MH_{n,\ell}(G)$, and I described them explicitly.
To make sure that the ranks are all finite, let’s assume that the graph $G$ is finite. That alone is enough to guarantee that the sum defining $\chi_\ell(G)$ is finite. Why? Well, from the definition of the chain groups $MC_{n,\ell}(G)$, it’s clear that $MC_{n,\ell}(G)$ is trivial when $n > \ell$. Hence the same is true of $MH_{n,\ell}(G)$, which means that the sum defining $\chi_\ell(G)$ might as well run only from $n = 0$ to $n = \ell$.
At the moment, our graph $G$ has not one Euler characteristic but an infinite sequence of them:

$$\chi_0(G), \; \chi_1(G), \; \chi_2(G), \; \ldots.$$

Let’s assemble them into a single formal power series over $\mathbb{Z}$:

$$\sum_{\ell = 0}^{\infty} \chi_\ell(G) \, q^\ell,$$

where $q$ is a formal variable. (You might wonder what’s happened to $\ell = \infty$. In principle, it should be present in the sum. However, if we adopt the convention that $q^\infty = 0$ then it might as well not be. It will become clear when we look at metric spaces that this is the right convention to adopt.)
On the other hand, viewing graphs as enriched categories leads to the notion of the magnitude of a graph. The magnitude of a finite graph $G$ is a formal expression in a variable $q$, and can be understood either as a rational function in $q$ or as a power series in $q$. Hepworth and Willerton showed that the power series above is precisely the magnitude of $G$, seen as a power series.
So in the case of graphs too, magnitude homology categorifies magnitude.
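For a finite graph, the magnitude itself is easy to compute directly: it’s the sum of the entries of $Z^{-1}$, where $Z_{x y} = q^{d(x, y)}$ (with the convention $q^\infty = 0$). Here’s a small Python sketch evaluating it exactly at a rational value of $q$ — the example graphs are mine, chosen because their magnitudes have well-known closed forms (the $n$-vertex complete graph has magnitude $n/(1 + (n - 1)q)$):

```python
from fractions import Fraction

def magnitude(D, q):
    """Magnitude of a finite graph/metric space at a rational scale q:
    the sum of all entries of Z^{-1}, where Z[i][j] = q ** D[i][j].
    Computed by solving Z w = (1, ..., 1) and summing w; an entry
    D[i][j] = None encodes an infinite distance (q ** inf = 0)."""
    n = len(D)
    M = [[(q ** D[i][j]) if D[i][j] is not None else Fraction(0)
          for j in range(n)] + [Fraction(1)] for i in range(n)]
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        for i in range(n):
            if i != c and M[i][c] != 0:
                f = M[i][c] / M[c][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[c])]
    return sum(M[i][n] / M[i][i] for i in range(n))

q = Fraction(1, 2)
K2 = [[0, 1], [1, 0]]                   # two-vertex complete graph
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # three-vertex complete graph
```

At $q = 1/2$ this gives $4/3$ for the two-vertex graph and $3/2$ for the triangle, matching the closed forms $2/(1 + q)$ and $3/(1 + 2q)$.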
Finally, consider a metric space $X$, viewed as a category enriched in $V = [0, \infty]$. For each $\ell \in [0, \infty)$, we want to define

$$\chi_\ell(X) = \sum_{n \geq 0} (-1)^n \, \mathrm{rank}\, H_n(X; A_\ell).$$
I have no idea what these homology groups look like when $X$ is a familiar geometric object such as a disk or line, so I don’t know how often these ranks are finite. But they’re certainly finite if $X$ has only finitely many points, so let’s assume that.
The sum on the right-hand side is, then, automatically finite. To see this, the argument is almost the same as for graphs. For graphs, we used the fact that the distance between two distinct vertices is always at least $1$, from which it followed that the homology groups can only be nonzero when $n \leq \ell$. Now in a finite metric space, distances can of course be less than $1$, but finiteness implies that there’s a minimal nonzero distance: $\delta$, say. Then $H_n(X; A_\ell)$ can only be nonzero when $n \leq \ell/\delta$. That’s why the sum is finite.
We’ve now assigned to our metric space $X$ not one Euler characteristic but a one-parameter family of them. That is, we’ve got an integer $\chi_\ell(X)$ for each $\ell \in [0, \infty)$. Actually, all but countably many of these integers are zero. Better still, for each real $L$ there are only finitely many $\ell \leq L$ such that $\chi_\ell(X) \neq 0$. (I’ll explain why in the details section.) So, it’s not too crazy to write down the formal expression

$$\sum_{\ell \in [0, \infty)} \chi_\ell(X) \, q^\ell.$$
There are a couple of ways to think about the expression on the right-hand side. You can treat $q$ as a formal variable and the expression as a Hahn series (like a power series, but with non-integer real powers allowed). Or you can (attempt to) evaluate at a particular value of $q$ in $\mathbb{R}$ or $\mathbb{C}$ or some other setting where the sum makes analytic sense.
So far no one knows how exactly we should proceed from here, but it looks as if the story goes something like this.
Remember, we’re trying to show that magnitude homology categorifies magnitude, which in this instance means that $\sum_\ell \chi_\ell(X) q^\ell$ should be equal to the magnitude of a metric space $X$. That’s a real number, and it’s defined in terms of negative exponentials $e^{-d(x, y)}$ of distances $d(x, y)$, so let’s put $q = e^{-1}$. (This explains why we can ignore $\ell = \infty$, since then $q^\ell = 0$.) I’m not claiming that anything converges! You can treat $q$ as a formal variable for the time being, although at some stage we’ll want to interpret it as an actual real number.
It’s a useful little lemma that when you have a bounded chain complex $C_\bullet$, the alternating sum of the ranks of the groups $C_n$ is equal to the alternating sum of the ranks of the homology groups $H_n(C_\bullet)$. So,

$$\chi_\ell(X) = \sum_n (-1)^n \, \mathrm{rank}\, MC_{n,\ell}(X),$$
where $MC_{n,\ell}(X)$ denotes the Hepworth–Willerton chain groups that I defined earlier. Substituting this into the definition of $\sum_\ell \chi_\ell(X) q^\ell$ gives

$$\sum_\ell \chi_\ell(X) \, q^\ell = \sum_\ell \sum_n (-1)^n \, \mathrm{rank}\, MC_{n,\ell}(X) \, q^\ell.$$
That’s potentially a doubly infinite sum. But we can do some formal calculations leading to the conclusion that it is indeed equal to the magnitude of the metric space $X$ (that is, the sum of all the entries of the inverse of the matrix $\bigl(q^{d(x, y)}\bigr)_{x, y \in X}$). Again, that’s deferred to the details section below. It’s not clear how to make rigorous sense of it, but I’m confident that it can somehow be done.
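One ingredient that is already rigorous is the little lemma above about bounded chain complexes. It’s easy to check mechanically: given the boundary matrices, $\mathrm{rank}\, H_n = \mathrm{rank}\, C_n - \mathrm{rank}\, \partial_n - \mathrm{rank}\, \partial_{n+1}$, and the two alternating sums come out equal. A Python sketch (the example matrices in the test are made up; any boundaries with $\partial \circ \partial = 0$ will do):

```python
from fractions import Fraction

def rank(M):
    """Rank over Q of an integer matrix (Gaussian elimination)."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def euler_characteristics(dims, boundaries):
    """For a bounded complex with dims[n] = rank C_n and boundaries[n-1]
    the matrix of d_n : C_n -> C_{n-1}, return the alternating sum of
    chain ranks and the alternating sum of homology ranks."""
    d_ranks = [rank(B) for B in boundaries] + [0]  # rank d_1, ..., then 0
    chi_chains = sum((-1) ** n * dims[n] for n in range(len(dims)))
    # rank H_n = dim C_n - rank d_n - rank d_{n+1}
    chi_hom = sum((-1) ** n * (dims[n]
                               - (d_ranks[n - 1] if n >= 1 else 0)
                               - d_ranks[n])
                  for n in range(len(dims)))
    return chi_chains, chi_hom
```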
So, magnitude homology categorifies magnitude in all three of our examples… well, definitely in the first two cases, and tentatively in the third. Of course, we’d like to make a general statement to the effect that homology categorifies magnitude over an arbitrary base category . The metric space case illustrates some of the difficulties that we might expect to encounter in making a general statement.
Details and proofs
The rest of this post mostly consists of supporting details that we figured out in the other thread. I’ve mostly only bothered to include the points that weren’t immediately obvious to us (or me, at least).
If you’ve read this far, bravo! You can think of what follows as an appendix.
From simplicial abelian groups to chain complexes
The relationship between simplicial abelian groups and chain complexes is a classical part of homological algebra, but there’s at least one aspect of it that some of us in the old thread hadn’t previously appreciated.
First, the definitions. Let $A$ be a simplicial abelian group. The unnormalized chain complex $C(A)$ is defined by $C(A)_n = A_n$, the differentials being the alternating sums of the face maps. The degenerate elements of $A_n$ generate a subgroup $D(A)_n$, and these assemble to give a subcomplex $D(A)$ of $C(A)$. The normalized chain complex is $N(A) = C(A)/D(A)$.
Now here are two facts. First, there’s an isomorphism of chain complexes $C(A) \cong N(A) \oplus D(A)$, natural in $A$. Second, the projection and inclusion maps between $C(A)$ and $N(A)$ are mutually inverse up to a chain homotopy that is natural in $A$ (in the obvious sense). That naturality will be crucial for us. We therefore say that $C(A)$ and $N(A)$ are naturally chain homotopy equivalent.
The functoriality of the nerve construction
Given a monoidal category $V$ and a small $V$-category $X$, we defined a functor $N(X)$ from $V$ to simplicial sets by

$$N(X)(\ell)_k = \coprod_{x_0, \ldots, x_k \in X} V\bigl(\ell, \; X(x_0, x_1) \otimes \cdots \otimes X(x_{k-1}, x_k)\bigr).$$

Obviously $N(X)$ is functorial in $X$: any $V$-functor $F \colon X \to Y$ induces a map $N(F)(\ell) \colon N(X)(\ell) \to N(Y)(\ell)$ of simplicial sets for each $\ell$, and this map is natural in $\ell$.
Less obvious is that $N$ is functorial in the following 2-dimensional sense. Take $V$-functors

$$F, G \colon X \to Y$$

and a $V$-natural transformation $\alpha \colon F \to G$. The claim is that for each $\ell$, there’s an induced simplicial homotopy from $N(F)(\ell)$ to $N(G)(\ell)$. Moreover, it is natural in $\ell$.
How does this work? I’m pretty much a klutz with things simplicial, so let me explain it in a concrete way and refer to this comment of Mike’s for a more abstract perspective.
Fix $\ell \in V$. We have our two maps

$$N(F)(\ell), \; N(G)(\ell) \colon N(X)(\ell) \to N(Y)(\ell).$$

By definition, a simplicial homotopy from $N(F)(\ell)$ to $N(G)(\ell)$ is a map of simplicial sets $N(X)(\ell) \times \Delta^1 \to N(Y)(\ell)$ that satisfies the appropriate boundary conditions. Here $\Delta^1$ means the representable simplicial set $\Delta(-, [1])$. There are two maps from the terminal simplicial set $\Delta^0$ to $\Delta^1$, corresponding to the two maps $[0] \to [1]$ in $\Delta$. The “boundary conditions” are that the two composites in the diagram

$$N(X)(\ell) \cong N(X)(\ell) \times \Delta^0 \rightrightarrows N(X)(\ell) \times \Delta^1 \to N(Y)(\ell)$$

are equal to $N(F)(\ell)$ and $N(G)(\ell)$.
Concretely, a simplicial homotopy from $N(F)(\ell)$ to $N(G)(\ell)$ consists of a map of sets

$$h_\phi \colon N(X)(\ell)_k \to N(Y)(\ell)_k$$

for each map $\phi \colon [k] \to [1]$ in $\Delta$. When $\phi$ is the map with constant value $0$, $h_\phi$ is required to be equal to $N(F)(\ell)_k$, and when $\phi$ has constant value $1$, $h_\phi$ is required to be equal to $N(G)(\ell)_k$. The maps $h_\phi$ also have to satisfy some other equations which I don’t need to mention.
There are $k + 2$ maps $[k] \to [1]$ in $\Delta$, so what this means really explicitly is that a simplicial homotopy from $N(F)(\ell)$ to $N(G)(\ell)$ consists of an ordered list of $k + 2$ functions $N(X)(\ell)_k \to N(Y)(\ell)_k$ for each $k$. The first has to be $N(F)(\ell)_k$, the last has to be $N(G)(\ell)_k$, and the whole lot have to hang together in some reasonable way. So roughly speaking, a simplicial homotopy is a kind of discrete path between two simplicial maps, as you’d probably expect.
We’re supposed to be building a simplicial homotopy from $N(F)(\ell)$ to $N(G)(\ell)$ out of a $V$-natural transformation $\alpha \colon F \to G$. So, let’s recall what a $V$-natural transformation actually is. More or less by definition, $\alpha$ consists of a map

$$X(x, y) \to Y(F x, G y)$$

in $V$ for each $x, y \in X$ (subject to some axioms). For instance, when $V = \mathrm{Set}$, this map sends a morphism $x \to y$ to the diagonal of the naturality square for $\alpha$.
Now let $k \geq 0$. For any objects $x_0, \ldots, x_k$ of $X$, we can build from $F$, $G$ and $\alpha$ a sequence of $k + 2$ maps in $V$, which for ease of typesetting I’ll show for $k = 2$ (and you’ll guess the general pattern):

$$X(x_0, x_1) \otimes X(x_1, x_2) \to Y(F x_0, F x_1) \otimes Y(F x_1, F x_2),$$
$$X(x_0, x_1) \otimes X(x_1, x_2) \to Y(F x_0, F x_1) \otimes Y(F x_1, G x_2),$$
$$X(x_0, x_1) \otimes X(x_1, x_2) \to Y(F x_0, G x_1) \otimes Y(G x_1, G x_2),$$
$$X(x_0, x_1) \otimes X(x_1, x_2) \to Y(G x_0, G x_1) \otimes Y(G x_1, G x_2).$$

These maps in $V$ induce, in the obvious way, maps of sets

$$\coprod_{x_0, \ldots, x_k} V\bigl(\ell, \; X(x_0, x_1) \otimes \cdots\bigr) \to \coprod_{y_0, \ldots, y_k} V\bigl(\ell, \; Y(y_0, y_1) \otimes \cdots\bigr)$$

for each $\ell$. The domain and codomain here are just $N(X)(\ell)_k$ and $N(Y)(\ell)_k$: so we have $k + 2$ maps $N(X)(\ell)_k \to N(Y)(\ell)_k$. The first of these maps is $N(F)(\ell)_k$ and the last is $N(G)(\ell)_k$. Some checking reveals that these maps, taken over all $k$, do indeed determine a simplicial homotopy from $N(F)(\ell)$ to $N(G)(\ell)$. Moreover, everything is obviously natural in $\ell$. So that’s our natural simplicial homotopy!
Functoriality of the tensor product
Let $A \colon V \to \mathrm{Ab}$ be a small functor. For any functor $\Theta$ from $V$ to $\mathrm{Ch}$ (of the same kind as $C(X)$), we can form the tensor product

$$\Theta \otimes_V A = \int^{\ell \in V} \Theta(\ell) \otimes A(\ell),$$

which is a chain complex. Obviously this determines a functor

$$- \otimes_V A$$

from such functors $\Theta$ to chain complexes.
A little less obviously, $- \otimes_V A$ transforms any natural chain homotopy into a chain homotopy.
In other words, take functors $\Theta, \Theta'$ and natural transformations $\eta, \zeta \colon \Theta \to \Theta'$. (So, $\eta$ and $\zeta$ consist of chain maps $\eta_\ell, \zeta_\ell \colon \Theta(\ell) \to \Theta'(\ell)$ for each $\ell$, natural in $\ell$.) Suppose we also have a chain homotopy from $\eta_\ell$ to $\zeta_\ell$ for each $\ell$, and that it is natural in $\ell$. The claim is that there’s an induced chain homotopy between the chain maps

$$\eta \otimes_V A, \; \zeta \otimes_V A \colon \Theta \otimes_V A \to \Theta' \otimes_V A.$$
To show this, the key point is that a chain homotopy between two chain maps $C \to D$ can be understood as a chain map $C \otimes I \to D$ satisfying appropriate boundary conditions. Here $I$ (for “interval”) is the chain complex

$$\cdots \to 0 \to \mathbb{Z} \to \mathbb{Z} \oplus \mathbb{Z} \to 0 \to \cdots$$

with the two copies of $\mathbb{Z}$ in degree $0$ and the single $\mathbb{Z}$ in degree $1$, the differential sending the degree-$1$ generator to the difference of the two degree-$0$ generators. Once you adopt this viewpoint, it’s straightforward to prove the claim, using only the associativity of $\otimes$ and the fact that $\otimes$ distributes over colimits.
An important consequence is that if two functors $\Theta$ and $\Theta'$ are naturally chain homotopy equivalent, then the complexes $\Theta \otimes_V A$ and $\Theta' \otimes_V A$ are chain homotopy equivalent.
It doesn’t matter whether you normalize your chains
Let $X$ be a small $V$-category. The functor $C(X)$ was defined by first building from $X$ a certain functor $N(X)$ into simplicial sets, then turning simplicial sets into chain complexes. I (or rather Mike) said that it doesn’t matter whether you do that last step with unnormalized or normalized chains. Why not?
Earlier in this “details” section, I recalled the fact that the two chain complexes coming from a simplicial abelian group $A$ are not only chain homotopy equivalent, but chain homotopy equivalent in a way that’s natural in $A$. We can apply this fact to the simplicial abelian group $\mathbb{Z} \cdot N(X)(\ell)$, for each $\ell \in V$. It implies that the two chain complexes coming from $\mathbb{Z} \cdot N(X)(\ell)$ are chain homotopy equivalent naturally in $\ell$. Or, said another way, the two versions of $C(X)$ that you get by choosing the “unnormalized” or “normalized” option are naturally chain homotopy equivalent.
But we just saw that when two functors are naturally chain homotopy equivalent, their tensor products with $A$ are chain homotopy equivalent. So, the two versions of $C(X)$ have the same tensor product with $A$, up to chain homotopy equivalence. In other words, the chain homotopy equivalence class of $C(X) \otimes_V A$ is unaffected by which version of $C(X)$ you choose to use. The homology of that chain complex is, therefore, also unaffected by this choice.
Invariance of magnitude homology under equivalence of categories
It’s a fact that the magnitude of an enriched category is invariant not only under equivalence, but even under the existence of an adjunction (at least, if both magnitudes are well-defined). Something similar is true for magnitude homology, as follows.
Let $F, G \colon X \to Y$ be $V$-functors between small $V$-categories. We’ll show that if there exists a $V$-natural transformation from $F$ to $G$ then the maps

$$H_\bullet(X; A) \to H_\bullet(Y; A)$$

induced by $F$ and $G$ are equal (for any coefficients $A$). It will follow that whenever you have $V$-categories that are equivalent, or even just connected by an adjunction, their homologies are isomorphic. (Even “adjunction” can be weakened further, but I’ll leave that as an exercise.)
The proof is mostly a matter of assembling previous observations. Take a $V$-natural transformation $\alpha \colon F \to G$. We have functors

$$C(X), \; C(Y),$$

natural transformations

$$C(F), \; C(G) \colon C(X) \to C(Y),$$

and (as we saw previously) a natural simplicial homotopy from $N(F)$ to $N(G)$ induced by $\alpha$. When we pass from simplicial sets to chain complexes, this natural simplicial homotopy turns into a natural chain homotopy (Lemma 8.3.13 of Weibel’s book). So, the natural transformations $C(F)$ and $C(G)$ between the functors $C(X)$ and $C(Y)$ are naturally chain homotopic. It follows from another of the previous observations that the chain maps

$$C(F) \otimes_V A, \; C(G) \otimes_V A \colon C(X) \otimes_V A \to C(Y) \otimes_V A$$

are chain homotopic. Hence they induce the same map on homology, as claimed.
Homology of graphs and of metric spaces
Earlier, I claimed that Mike’s general theory of homology of enriched categories reproduces Richard Hepworth and Simon Willerton’s theory of magnitude homology of graphs, by choosing the coefficients suitably. It’s trivial to extend Richard and Simon’s theory from graphs to metric spaces, as I did earlier; and I claimed that this too is captured by the general theory.
I’ll prove this now in the case of metric spaces. It will then be completely clear how it works for graphs.
Let $X$ be a metric space, seen as a category enriched in $V = [0, \infty]$. Let $\ell \in [0, \infty)$ be a real number, and recall the functor $A_\ell$ from earlier. The aim is to show that the groups $H_n(X; A_\ell)$ and $MH_{n,\ell}(X)$ are isomorphic, where the latter is defined à la Hepworth–Willerton.
The nerve functor $N(X)$ is given by

$$N(X)(\ell)_k = \bigl\{ (x_0, \ldots, x_k) : d(x_0, x_1) + \cdots + d(x_{k-1}, x_k) \leq \ell \bigr\}.$$

The unnormalized chain group $C_k(X)(\ell)$ is simply the free abelian group on this set, but in order to make the connection with Richard and Simon’s definition, we’re going to use the normalized version of $C(X)$. It’s not too hard to see that the normalized $C_n(X)(\ell)$ is the free abelian group on the set

$$\bigl\{ (x_0, \ldots, x_n) : x_0 \neq x_1 \neq \cdots \neq x_n, \; d(x_0, x_1) + \cdots + d(x_{n-1}, x_n) \leq \ell \bigr\},$$

with differentials $\partial = \sum_{i = 0}^{n} (-1)^i \partial_i$, where

$$\partial_i(x_0, \ldots, x_n) = \begin{cases} (x_0, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n) & \text{if } i \in \{0, n\} \text{ or } x_{i-1} \neq x_{i+1} \\ 0 & \text{otherwise.} \end{cases}$$
Now we have to compute $C(X) \otimes_V A_\ell$. I know two ways to do this. You can use the definition of coend directly, as Mike does here. Alternatively, note that for any functor of coefficients $A$,

$$C_n(X) \otimes_V A = \int^{m} C_n(X)(m) \otimes A(m) = \bigoplus_{x_0 \neq \cdots \neq x_n} A\bigl(d(x_0, x_1) + \cdots + d(x_{n-1}, x_n)\bigr),$$

where the last step is by the density formula. We’re interested in the case $A = A_\ell$, and then the expression in the last line is either $\mathbb{Z}$ if the distances sum to $\ell$, or $0$ if not. So $C_n(X) \otimes_V A_\ell$ is the free abelian group on the set

$$\bigl\{ (x_0, \ldots, x_n) : x_0 \neq \cdots \neq x_n, \; d(x_0, x_1) + \cdots + d(x_{n-1}, x_n) = \ell \bigr\}.$$
That’s exactly Richard and Simon’s chain group $MC_{n,\ell}(X)$. With a little more thought, you can see that the differentials agree too. Thus, the chain complexes $C(X) \otimes_V A_\ell$ and $MC_{\bullet,\ell}(X)$ are isomorphic. It follows that their homologies are isomorphic, as claimed.
Decategorification for metric spaces
The final stretch of this marathon post is devoted to finite metric spaces — specifically, how the magnitude of a finite metric space can be obtained as the Euler characteristic of its magnitude homology. Here’s where there are some gaps.
Let $X$ be a finite metric space. For each $\ell \in [0, \infty)$, we have the Euler characteristic

$$\chi_\ell(X) = \sum_n (-1)^n \, \mathrm{rank}\, H_n(X; A_\ell).$$
The ranks here are finite because the generating sets of the chain groups are manifestly finite. We saw earlier that the sum itself is finite, but let me repeat the argument slightly more carefully. First, these homology groups are the same as the Hepworth–Willerton homology groups $MH_{n,\ell}(X)$. Second, the Hepworth–Willerton chain groups $MC_{n,\ell}(X)$ are trivial when $n > \ell/\delta$, where $\delta$ is the minimum nonzero distance occurring in $X$. So, the same is true of the homology groups $MH_{n,\ell}(X)$.
Let $S$ be the set of (extended) real numbers occurring as finite sums of distances in $X$. Although this set is usually infinite, it’s always countable. Better still, $S \cap [0, L]$ is finite for all real $L$. It’s easy to prove this, again using the fact that there’s a minimum nonzero distance.
For a number $\ell \in [0, \infty)$ that’s not in $S$, the Hepworth–Willerton chain groups are trivial, so the homology groups are trivial too. Hence $\chi_\ell(X) = 0$. Or in other words: $\chi_\ell(X)$ only stands a chance of being nonzero if $\ell$ belongs to the countable set $S$.
So, in the definition

$$\sum_{\ell \in [0, \infty)} \chi_\ell(X) \, q^\ell,$$

that scary-looking sum over all $\ell$ might as well only be over the relatively tame range $\ell \in S$.
Now let’s do a formal calculation. Back in the main part of the post (just before the start of this “details” section), I observed that

$$\chi_\ell(X) = \sum_n (-1)^n \, \mathrm{rank}\, MC_{n,\ell}(X).$$
Now $MC_{n,\ell}(X)$ is the free abelian group on the set

$$\bigl\{ (x_0, \ldots, x_n) : x_0 \neq x_1 \neq \cdots \neq x_n, \; d(x_0, x_1) + \cdots + d(x_{n-1}, x_n) = \ell \bigr\},$$

so $\mathrm{rank}\, MC_{n,\ell}(X)$ is the cardinality of this set. Hence, working formally,

$$\sum_\ell \mathrm{rank}\, MC_{n,\ell}(X) \, q^\ell = \sum_{x_0 \neq x_1 \neq \cdots \neq x_n} q^{d(x_0, x_1) + \cdots + d(x_{n-1}, x_n)}.$$
Let $Z$ be the square matrix with rows and columns indexed by the points of $X$, and entries $Z_{x y} = q^{d(x, y)}$. Write $I$ for the identity matrix, and write $\mathrm{sum}(M)$ for the sum of all the entries of a matrix $M$. Then

$$\sum_{x_0 \neq \cdots \neq x_n} q^{d(x_0, x_1) + \cdots + d(x_{n-1}, x_n)} = \mathrm{sum}\bigl((Z - I)^n\bigr).$$
So our earlier formula

$$\chi_\ell(X) = \sum_n (-1)^n \, \mathrm{rank}\, MC_{n,\ell}(X)$$

now gives

$$\sum_\ell \chi_\ell(X) \, q^\ell = \mathrm{sum}\Bigl( \sum_n (-1)^n (Z - I)^n \Bigr).$$

Again formally speaking, the part inside the brackets is a geometric series whose sum is $\bigl(I + (Z - I)\bigr)^{-1} = Z^{-1}$. So, the conclusion is that

$$\sum_\ell \chi_\ell(X) \, q^\ell = \mathrm{sum}(Z^{-1}).$$
The right-hand side is by definition the magnitude of the metric space $X$ (at least, assuming that $Z$ is invertible).
So, using non-rigorous formal methods, we’ve achieved our goal. That is, we’ve shown that the magnitude of a finite metric space is the Euler characteristic of its magnitude homology.
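Though the manipulation with the geometric series is formal, it can at least be sanity-checked numerically: when $q$ is small enough that the entries of $Z - I$ are tiny, the partial sums of $\sum_n (-1)^n \, \mathrm{sum}((Z - I)^n)$ genuinely converge to $\mathrm{sum}(Z^{-1})$. Here’s a Python sketch with a made-up three-point metric space (the distances and the value of $q$ are illustrative):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def entry_sum(A):
    return sum(sum(row) for row in A)

# A made-up three-point metric space, at a scale q small enough
# that the alternating series converges.
D = [[0.0, 1.0, 1.5], [1.0, 0.0, 2.0], [1.5, 2.0, 0.0]]
q = 0.2
n = len(D)
Z = [[q ** D[i][j] for j in range(n)] for i in range(n)]
ZmI = [[Z[i][j] - (1.0 if i == j else 0.0) for j in range(n)]
       for i in range(n)]

# Partial sums of sum_k (-1)^k sum((Z - I)^k).
series = 0.0
P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
for k in range(200):
    series += (-1) ** k * entry_sum(P)
    P = matmul(P, ZmI)

# sum(Z^{-1}), computed by solving Z w = (1, ..., 1).
aug = [row[:] + [1.0] for row in Z]
for c in range(n):
    piv = max(range(c, n), key=lambda i: abs(aug[i][c]))
    aug[c], aug[piv] = aug[piv], aug[c]
    for i in range(n):
        if i != c:
            f = aug[i][c] / aug[c][c]
            aug[i] = [a - f * b for a, b in zip(aug[i], aug[c])]
mag = sum(aug[i][n] / aug[i][i] for i in range(n))
```

Of course, this only tests the convergent regime; the interesting question is what the identity means when the series diverges.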
We know how to make some of this rigorous. The basic idea is that to sum a possibly-divergent series $\sum_n a_n$, we “vary the value of $1$” by replacing it with a formal variable $t$. Thus, we define the formal power series $\sum_n a_n t^n$, hope that it is formally equal to a rational function, hope that the rational function doesn’t have a pole at $t = 1$, and if not, interpret $\sum_n a_n$ as the value of that rational function at $t = 1$.
That’s a time-honoured technique for summing divergent series. To apply it in this situation, here’s a little theorem about matrices that essentially appears in a paper by Clemens Berger and me:
Theorem Let $A$ be a square matrix of real numbers. Then:
The formal power series $\sum_{n \geq 0} (-1)^n \, \mathrm{sum}(A^n) \, t^n$ is rational.
If $I + A$ is invertible, the value of the rational function at $t = 1$ is (defined and) equal to $\mathrm{sum}\bigl((I + A)^{-1}\bigr)$.
This result provides a respectable way to interpret the last part of the unrigorous argument presented above — the bit about the geometric series. But the earlier parts remain to be made rigorous.
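As a sanity check, the theorem can be verified numerically on any matrix $A$ small enough that the series honestly converges at $t = 1$ (then the rational function’s value is the ordinary sum). A Python sketch with an illustrative $2 \times 2$ matrix of rationals chosen by me:

```python
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def entry_sum(A):
    return sum(sum(row) for row in A)

def inv_sum(M):
    """Sum of the entries of M^{-1}, by solving M w = (1, ..., 1) over Q."""
    n = len(M)
    aug = [[Fraction(x) for x in row] + [Fraction(1)] for row in M]
    for c in range(n):
        piv = next(i for i in range(c, n) if aug[i][c] != 0)
        aug[c], aug[piv] = aug[piv], aug[c]
        for i in range(n):
            if i != c and aug[i][c] != 0:
                f = aug[i][c] / aug[c][c]
                aug[i] = [a - f * b for a, b in zip(aug[i], aug[c])]
    return sum(aug[i][n] / aug[i][i] for i in range(n))

# An illustrative matrix with small entries, so that the alternating
# series converges at t = 1 and can be compared with sum((I + A)^{-1}).
A = [[Fraction(1, 4), Fraction(1, 8)],
     [Fraction(1, 8), Fraction(1, 4)]]
I2 = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
IpA = [[I2[i][j] + A[i][j] for j in range(2)] for i in range(2)]

series = Fraction(0)
P = I2
for k in range(120):
    series += (-1) ** k * entry_sum(P)
    P = matmul(P, A)
```

For this $A$, both rows sum to $3/8$, so $\mathrm{sum}(A^n) = 2 (3/8)^n$ and the series is $2/(1 + 3/8) = 16/11$, agreeing with $\mathrm{sum}((I + A)^{-1})$.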
Re: Magnitude Homology
Incidentally, I agonized over notation.
I called the base category $V$. Everyone agrees on that. (Well, if I was Latexing I’d use $\mathcal{V}$, but on the blog it’s easier to stick to a plain $V$.)
Usually in enriched category theory, the objects of the base category are called things like $v$ (or $w$). I’ve used $\ell$ instead, for two reasons. First, I didn’t use $x$ and $y$ because I wanted them for something else. Second, $\ell$ is what we used in earlier conversations, it’s what Richard and Simon used, and in the important examples of graphs and metric spaces, it stands for length.
Usually I’d call an enriched category something like $A$ or $B$, at the opposite end of the alphabet from $V$. But Mike used $A$ for the coefficients (reasonably enough), so I wanted to avoid that. He used $C$ for the category. However, he also used C to stand for chain, and just about everyone writing on homological algebra does the same, so I wanted to avoid that. I chose $X$, because it’s a normal kind of letter for a graph, a metric space, or generally something that you might take the homology of.
I used $N$, $C$ and $H$ for the nerve, chain complex and homology functors. Mike used $MS$, $MC$ and $MH$, with M standing for magnitude. As I said in the post, I think it’s good to use $N$ to signal that it’s a nerve construction, but I’m agnostic on whether the $C$ and the $H$ should have $M$s in front of them.
I don’t know whether Mike’s homology theory should be called “magnitude homology” or simply “homology”. Since magnitude homology is the categorification of homology in the same sense as Khovanov homology is the categorification of the Jones polynomial, calling it “magnitude homology” is like saying “Jones polynomial homology” (or more euphonically, “Jones homology”) instead of “Khovanov homology”. That would seem entirely reasonable. On the other hand, if there are no other theories of homology for enriched categories, maybe it should just be called “homology” without adornment.
But that comes with a risk. If Mike writes this up and just calls it “homology”, someone else will call it “Shulman homology” and the name will stick. Much as he’ll deserve that, I’m a firm believer that descriptive names are better than named-after-people names — e.g. “Kullback–Leibler divergence” vs. “relative entropy”. In particular, “magnitude homology” is better than “Shulman homology” (sorry, Mike!). To avert the possibility of the terminology heading that way, the correct tactic must be to call it magnitude homology from the start :-)
I’m writing all this here because I want everyone to use this comments thread to discuss notation and terminology rather than mathematical substance, of course.