
December 4, 2023

Magnitude 2023

Posted by Tom Leinster

I’m going to do something old school and live-blog a conference: Magnitude 2023, happening right now in Osaka. This is a successor to Magnitude 2019 in Edinburgh and covers all aspects of magnitude and magnitude homology, as well as hosting some talks on subjects that aren’t magnitude but feel intriguingly magnitude-adjacent.

Slides for the talks are being uploaded here.

What is magnitude? The magnitude of an enriched category is the canonical measure of its size. For instance, the magnitude of a set (as a discrete category) is its cardinality, and the magnitude of an ordinary category is the Euler characteristic of its nerve. For metric spaces, magnitude is something new, but there is a sense in which you can recover from it classical measures of size like volume, surface area and dimension.
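For a finite metric space, magnitude is easy to compute directly: form the similarity matrix $Z = (e^{-d(a,b)})$ and, when $Z$ is invertible, sum the entries of $Z^{-1}$. Here’s a minimal sketch (my own illustration, not from the talks; the function name is made up):

```python
import numpy as np

def magnitude(D):
    """Magnitude of a finite metric space with distance matrix D:
    the sum of the entries of the inverse of the similarity matrix
    Z[a, b] = exp(-D[a, b]), when that inverse exists."""
    Z = np.exp(-np.asarray(D, dtype=float))
    return np.linalg.inv(Z).sum()

# Sanity check: a two-point space at distance t has magnitude
# 2/(1 + e^{-t}) = 1 + tanh(t/2).
t = 1.0
print(magnitude([[0, t], [t, 0]]), 1 + np.tanh(t / 2))
```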

What is magnitude homology? It’s the canonical homology theory for enriched categories. The magnitude homology of an ordinary category is the homology of its classifying space. For metric spaces, it’s something new, and has a lot to say about the existence, uniqueness and multiplicity of geodesics.

Let’s go!

Park in Osaka

Posted at December 4, 2023 10:13 PM UTC


Colloquium: Introduction to magnitude

There was a kind of pre-event yesterday, a colloquium for the Osaka University mathematics department which I gave:

The many faces of magnitude

If you’ve seen me give a general talk on magnitude then many of the slides will be familiar to you. But I want to draw your attention to a new one (page 27), which is a summary of recent and current activity in magnitude homology and is meant to whet your appetite for some of the talks this week.

Here’s that slide in plain text, shorn of the photos of the people involved:

What’s happening in magnitude homology?

  • There is a relationship between magnitude homology and persistent homology — but they detect different information (Nina Otter; Simon Cho)

  • Applications of magnitude homology to the analysis of networks (Giuliamaria Menara)

  • A theory of magnitude cohomology (Richard Hepworth)

  • Connections between magnitude homology and path homology (Yasuhiko Asao)

  • A comprehensive spectral sequence approach that encompasses both magnitude homology and path homology (Yasuhiko Asao; Richard Hepworth and Emily Roff; also relevant is work of Kiyonori Gomi)

  • A concept of magnitude homotopy (Yu Tajima and Masahiko Yoshinaga)

  • New results on magnitude homology equivalence of subsets of $\mathbb{R}^n$, involving convex geometry (joint with Adrián Doña Mateo)

  • And lots more… find out this week!

Posted by: Tom Leinster on December 4, 2023 11:33 PM | Permalink | Reply to this

Re: Colloquium: Introduction to magnitude

After my colloquium, Eiko Kin (professor here at Osaka) told me some things that I found very interesting. And she asked me the following question.

Take a surface homeomorphism that has the property of being pseudo-Anosov. (I don’t know what this means, but for the broad gist of this question it doesn’t matter.) Then one can construct from it a graph, which has something to do with the directions in which the homeomorphism expands and contracts. This is a graph with multiple edges, in general. In turn, the graph has a transition matrix: the rows and columns are indexed by the vertices, and the entries are the numbers of edges between the corresponding vertices. And the transition matrix has a largest eigenvalue. Theorem (Thurston): the logarithm of the largest eigenvalue is equal to the entropy of the homeomorphism we started with.
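To make the quantity concrete: the largest eigenvalue is the Perron–Frobenius eigenvalue of a nonnegative matrix, so the entropy is trivial to compute numerically. A minimal sketch (my own illustration, with a made-up matrix):

```python
import numpy as np

def entropy_from_transition_matrix(A):
    """Log of the spectral radius (Perron-Frobenius eigenvalue) of a
    nonnegative transition matrix -- the entropy in Thurston's theorem."""
    return np.log(max(abs(np.linalg.eigvals(np.asarray(A, dtype=float)))))

# Hypothetical example: [[1, 1], [1, 0]] has spectral radius the golden
# ratio, giving entropy log((1 + sqrt(5))/2).
print(entropy_from_transition_matrix([[1, 1], [1, 0]]))
print(np.log((1 + np.sqrt(5)) / 2))
```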

The question was whether there could be some magnitude connection, given that (i) magnitude is related to entropy, and (ii) there is a notion of the magnitude of graphs.

Of course, I don’t know. It’s an intriguing possibility, and it would be fantastic if some connection was there.

More generally, this is a good opportunity to point out the following gap in our knowledge. We know various things about the relations between magnitude and entropy, but this is always entropy for a single, static probability distribution. We know nothing about relations with entropy in the dynamical sense, e.g. the entropy of a self-map of a topological space. This would be well worth looking into.

Posted by: Tom Leinster on December 6, 2023 12:55 PM | Permalink | Reply to this

Re: Colloquium: Introduction to magnitude

One of the issues here is that the Markov partitions and attendant (di)graphs (or more generally stochastic matrices if you want to get quantitative and have a Markov chain versus a shift) are not unique: presumably some extremum will need to be taken or there will need to be a result that shows things don’t depend on the choice here.

For example, Figure 10 of https://arxiv.org/abs/2104.00753 shows three different Markov partitions for the cat map, which is the archetypal Anosov map (vs flow). Some subsequent figures show how these stretch across, and section VI of https://arxiv.org/abs/1009.2127 details the adjacency matrices (they are called $B^{(0)}$).

In these papers I focused on a particular class of refinements of Markov partitions of maximum entropy at a physics (and not even mathematical physics!) level of rigor. This construction appears not to have been considered elsewhere but I would be interested to be wrong about that.

Again you’ll want to tie this to Ihara zeta functions in the graphical case (and maps to Ruelle zeta functions on the original dynamics) and probably a good place to start is https://doi.org/10.1016/S0024-3795(02)00341-5

Posted by: Steve Huntsman on December 6, 2023 5:52 PM | Permalink | Reply to this

Re: Colloquium: Introduction to magnitude

Going through some old notes of mine relevant to this: Parry and Williams (https://doi.org/10.1112/plms/s3-35.3.483) considered zeta functions of the form $1/\det(I - tP)$ for $P$ a stochastic matrix. I had the notion of trying to get such zeta functions to work as sizes some time ago but could not find an obvious way to make that work, though taking the Kronecker sum $\boxplus$ of stochastic matrices seems interesting because $\operatorname{spec} \boxplus_m P_m = \sum_m \operatorname{spec} P_m$ (for this see Horn and Johnson’s Matrix Analysis, 1st ed., Thm. 4.4.5).

Meanwhile the metric or Kolmogorov-Sinai entropy of a nice Markov chain with transition matrix $P$ is $-\sum_{j,k} \pi_j P_{jk} \log P_{jk}$, where $\pi = \pi P$ is the invariant distribution. It has the nice property that the entropy of a product transformation is the sum of the entropies of the component transformations, so it is something size-like.
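Here’s a quick numerical sketch of that formula and of the additivity property (my own illustration, assuming a chain with a unique stationary distribution; the product chain is the Kronecker product of the transition matrices):

```python
import numpy as np

def ks_entropy(P):
    """Kolmogorov-Sinai entropy -sum_{j,k} pi_j P_{jk} log P_{jk} of a
    Markov chain with row-stochastic matrix P and stationary pi = pi P."""
    P = np.asarray(P, dtype=float)
    w, v = np.linalg.eig(P.T)                   # stationary distribution:
    pi = np.real(v[:, np.argmin(abs(w - 1))])   # left eigenvector for 1
    pi /= pi.sum()
    # Convention 0 log 0 = 0, implemented by masking the zero entries.
    terms = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
    return -np.sum(pi * terms.sum(axis=1))

# Additivity: the product chain has entropy the sum of the components'.
P = np.array([[0.9, 0.1], [0.5, 0.5]])
Q = np.array([[0.2, 0.8], [0.6, 0.4]])
print(ks_entropy(np.kron(P, Q)), ks_entropy(P) + ks_entropy(Q))
```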

Posted by: Steve Huntsman on December 7, 2023 7:07 PM | Permalink | Reply to this

Re: Colloquium: Introduction to magnitude

By the way: has anyone tried to make genera of multiplicative sequences work as size functions?

Posted by: Steve Huntsman on January 16, 2024 2:02 PM | Permalink | Reply to this

Re: Colloquium: Introduction to magnitude

Interesting idea (link to definition). Not to my knowledge.

Posted by: Tom Leinster on January 16, 2024 4:34 PM | Permalink | Reply to this

Re: Colloquium: Introduction to magnitude

I guess this would only work for magnitude vs (co)homology because the unit (= empty set) can’t be terminal: any manifold with more than one connected component can have more than one morphism to the empty set. Think of the disjoint union of circles: we can either make two cups or a U-pipe. So Cob isn’t semicartesian (unless I’m mistaken, which can easily happen to me in these sorts of realms).

That said, enriching over Cob seems interesting from the point of view of TQFTs and their ilk. Rather Atiyah-ish on at least two counts.

Posted by: Steve Huntsman on January 16, 2024 9:46 PM | Permalink | Reply to this

Emily Roff: Iterated magnitude homology

The first talk is by Emily Roff, based on her paper Iterated magnitude homology.

Emily began by reviewing some of the foundations.

The setup for magnitude is this:

  • a monoidal category $V$
  • a ring $K$
  • a “size” homomorphism $|\cdot|\colon (\mathrm{ob}(V), \otimes) \to (K, \cdot)$

and with some linear algebra, we get a notion of the magnitude of a finite $V$-category.

Categorifying, the setup for magnitude homology is this:

  • a semicartesian monoidal category $V$
  • a monoidal abelian category $A$
  • a strong symmetric monoidal functor $\Sigma\colon V \to A$ (a “size functor”)

and with some homological algebra, we get a notion of the magnitude homology of a $V$-category (not necessarily finite).

The magnitude homology functor $VCat \to A^{\mathbb{N}}$ is the composite of three functors:

$$VCat \stackrel{MB^\Sigma}{\to} [\Delta^{op}, A] \stackrel{C}{\to} Ch(A) \stackrel{H_\bullet}{\to} A^{\mathbb{N}}$$

Here $C$ and $H_\bullet$ are standard, off-the-shelf functors: chain complexes and homology. The first functor, $MB^\Sigma$, gives the magnitude nerve construction, which takes a $V$-category $X$ as input and produces as output the simplicial object with $n$th object

$$\bigoplus_{x_0, \ldots, x_n \in X} \Sigma\bigl(X(x_0, x_1) \otimes \cdots \otimes X(x_{n-1}, x_n)\bigr).$$

Its simplicial object structure is given in the usual nervy way, but here’s where we need the restriction that the monoidal category $V$ is semicartesian: the unit object is terminal. That’s needed in order to define the first and last face maps.

Emily then told us about partially ordered groups (which include Coxeter groups with the Bruhat order) and norms on groups (e.g. any generating set on a group determines a word-length norm), which induce metrics in the usual kind of way.

For a symmetric monoidal category $V$, a $V$-group is a one-object $V$-category whose underlying ordinary category is a group. E.g. if $V$ is $Cat$, $Poset$ or $Met$ then a $V$-group is a monoid $G$ in $V$ together with a function $(-)^{-1}\colon U(G) \to U(G)$ making the usual diagrams commute. (Here $U$ means the underlying set of the $V$-object $G$, and $Met$ is metric spaces with the $\ell^1$ product.) This is something more general than an internal group.

  • Every partially ordered group is a group enriched in $Poset$. (The inversion map is basically never monotone.)

  • Every normed group with conjugation-invariant norm gives a group enriched in $Met$. (The inversion map is 1-Lipschitz if the metric is symmetric.)

  • A strict 2-group, i.e. a group object in $Cat$, is a special kind of group enriched in $Cat$. Strict 2-groups classify homotopy 2-types. For instance, any normal subgroup $N$ of a group $G$ gives a strict 2-group $G_N$ whose objects are the elements of $G$ and with arrows $(k, g)\colon g \to kg$ for $k \in N$ and $g \in G$. Mac Lane and Whitehead proved that $\pi_1(BG_N) = G/N$.

Now onto iterated magnitude homology.

In all the cases above, $G$ has a “second-order” enrichment, i.e. it’s enriched in $V\text{-}Cat$ where $V$ is one of $Set$, $(0 \to 1)$ and $[0, \infty]$. Iterated magnitude homology is a homology theory of categories enriched in $V\text{-}Cat$.

A short digression. Any 2-category has a classifying space. There are a couple of approaches to this: Duskin’s and Street’s on the one hand, and Segal’s on the other. Bullejos and Cegarra proved that the two classifying space constructions are equivalent.

Now let $V$ be a semicartesian monoidal category and $\Sigma\colon V \to A$ a strong symmetric monoidal functor.

Proposition: the magnitude nerve defines a strong symmetric monoidal functor $$MB^\Sigma\colon VCat \to [\Delta^{op}, A],$$

which means we can use it as a size functor!

The double magnitude nerve of a $(VCat)$-category $X$ is $$MB^{MB^\Sigma}(X) \in [\Delta^{op} \times \Delta^{op}, A].$$ The iterated magnitude nerve of $X$ is the diagonal part of this bisimplicial object, and its homology is called the iterated magnitude homology $IMH_\bullet(X)$. (The role of the diagonal here recalls the Segal construction mentioned above.)

Examples/theorems:

  • For a $Cat$-group $G$, we have $IMH_1(G) = \pi(G)_{ab}$.

  • For a normal subgroup $N$ of a group $G$, we have $IMH_1(G_N) = (G/N)_{ab}$.

  • For a preordered group $G$, we have $IMH_1(G) = (G/P_\sim)_{ab}$, where $P_\sim$ is the subgroup of elements $g$ such that $g \leq e \leq g$.

  • Take a $Met$-group $G = (G, d)$. An element $g$ of $G$ is primitive if there is nothing strictly between it and the identity $e$ (“between” in the sense of the triangle inequality). In length grading $0$, we have $IMH^0_\bullet(G) \cong H_\bullet(G)$ (the ordinary group homology), and in length gradings $\ell \gt 0$, the abelian group $IMH^\ell_2(G)$ is freely generated by the conjugacy classes of primitive elements of norm $\ell$.

Summary: iterated magnitude homology captures “the homology of a classifying space”.

And for a group with a conjugacy-invariant norm, the iterated magnitude homology is sensitive to both the topology of the classifying space and the geometry of the group under the norm.

Posted by: Tom Leinster on December 5, 2023 1:29 AM | Permalink | Reply to this

Re: Emily Roff: Iterated magnitude homology

Well, that turned out quite long. I suspect future ones will be shorter, which is probably a good thing.

Posted by: Tom Leinster on December 5, 2023 1:41 AM | Permalink | Reply to this

Re: Emily Roff: Iterated magnitude homology

Tiny correction: the inversion map in a metrically enriched group is 1-Lipschitz provided the metric is symmetric. However, the map that pairs an element with its inverse is not. (This is really just saying that the $\ell_1$ product is not cartesian, i.e. doesn’t have 1-Lipschitz diagonals.)

Posted by: Emily Roff on December 9, 2023 12:35 AM | Permalink | Reply to this

Re: Emily Roff: Iterated magnitude homology

Thanks. Fixed.

Posted by: Tom Leinster on December 10, 2023 9:47 AM | Permalink | Reply to this

Mark Meckes: Inequalities for magnitude and maximum diversity

From homological algebra to analysis, with Café regular Mark Meckes.

Mark began with some classical geometric inequalities…

  • The isoperimetric inequality: for nice $A \subseteq \mathbb{R}^n$, the ratio $vol_n(A)/(vol_{n-1}(\partial A))^{n/(n-1)}$ is maximized when $A$ is a Euclidean ball.

  • The Brunn-Minkowski inequality: for nice $A, B \subseteq \mathbb{R}^n$, $$vol_n(A + B)^{1/n} \geq vol_n(A)^{1/n} + vol_n(B)^{1/n},$$ or equivalently $$vol_n(\lambda A + (1 - \lambda) B) \geq vol_n(A)^\lambda \, vol_n(B)^{1 - \lambda}.$$

Notation: $A$ is a nonempty compact metric space, $M(A)$ is the set of signed Borel measures on $A$, and $P(A)$ is the set of Borel probability measures on $A$. For a signed measure $\mu$ and $x \in A$, write $$Z\mu(x) = \int_X e^{-d(x, y)} \, d\mu(y).$$

We’ll use two mutually consistent definitions of magnitude:

  • If there exists $\mu \in M(A)$ such that $Z\mu$ has constant value 1 (a weight measure) then $Mag(A) = \mu(A)$.

  • If $A$ is of “negative type” then $$Mag(A) = \sup_{\mu \in M(A)} \mu(A)^2 \Big/ \int Z\mu \, d\mu.$$ This is also equal to $\inf \|f\|^2_H$, where the inf is over functions $f$ that have constant value 1 on $A$, $H$ is a particular function space on $X \supseteq A$, and details are omitted.

The maximum diversity of $A$ is $$Div(A) = \sup_{\mu \in P(A)} \Biggl( \int_X (Z\mu)^{q-1} \, d\mu \Biggr)^{1/(1-q)}$$ where $q \in [0, \infty]$ and the right-hand side turns out to be independent of $q$ (a theorem of Emily Roff and myself; the finite case is due to Mark and myself).
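Since the right-hand side is independent of $q$, for a finite space one can compute at whatever $q$ is convenient. At $q = 2$ the formula specializes to $\sup_p 1/\sum_i p_i (Zp)_i$ over probability vectors $p$. A numerical sketch (my own addition, not from the talk):

```python
import numpy as np
from scipy.optimize import minimize

def max_diversity(D):
    """Maximum diversity of a finite metric space with distance matrix D,
    computed at q = 2, where Div = sup_p 1/(p^T Z p) over probability
    vectors p, with similarity matrix Z = exp(-D)."""
    Z = np.exp(-np.asarray(D, dtype=float))
    n = len(Z)
    res = minimize(
        lambda p: p @ Z @ p,
        np.full(n, 1.0 / n),                      # start at the uniform distribution
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
    )
    return 1.0 / res.fun

# Two points at distance 1: Div = Mag = 2/(1 + e^{-1}) ≈ 1.4621.
print(max_diversity([[0, 1], [1, 0]]))
```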

Basic, easy inequalities:

  • If $A \subseteq B$ then $1 \leq Div(A) \leq Div(B) \lt e^{diam(B)}$.

  • If $A \subseteq B$ and $B$ is of negative type then $1 \leq Mag(A) \leq Mag(B)$.

  • The fundamental inequality: if $A$ is of negative type then $Div(A) \leq Mag(A)$.

  • Magnitude is lower semicontinuous on spaces of negative type: if $A_k \to A$ in the Gromov-Hausdorff topology then $Mag(A) \leq \liminf Mag(A_k)$. (There are examples where this inequality is strict.)

Lower bounds for maximum diversity

  • It’s pretty straightforward to show that $$Div(A) \geq \frac{vol(A)}{n!\, vol(B)}$$ where $A$ is a subset of an $n$-dimensional normed space whose unit ball is called $B$.

  • The packing number $P(A, \varepsilon)$ is a notion of the “effective number of points” at a particular scale: the largest number of points at distance $\geq \varepsilon$ from each other. Then $$Div(A) \geq \frac{P(A, \varepsilon)}{1 + P(A, \varepsilon)e^{-\varepsilon}}.$$

Upper bounds for maximum diversity

  • Aishwarya, Li, Madiman and Meckes (in preparation): if $A = \bigcup_i A_i$ then $Div(A) \leq \sum_i Div(A_i)$.

  • The covering number $N(A, \varepsilon)$ is the smallest cardinality of an $\varepsilon$-net. The previous bullet point implies that $Div(A) \leq N(A, \varepsilon)e^{2\varepsilon}$.

  • Packing and covering numbers are basically the same: $P(A, 2\varepsilon) \leq N(A, \varepsilon) \leq P(A, \varepsilon)$. From this we get that the growth rate of $Div(tA)$ as $t \to \infty$ is the Minkowski dimension of $A$. We also get an expression in terms of maximum diversity for $$\sup_{\varepsilon \geq 0} \varepsilon\sqrt{\log N(A, \varepsilon)}.$$

Lower bounds for magnitude

Assume negative type now. Magnitude has been bounded below in two broad ways:

  • $Mag(A) \geq Div(A)$, combined with lower bounds for $Div(A)$.

  • Use monotonicity wrt inclusion, combined with exact computation via weight measures. E.g. in $\ell_1^n$, $$Mag\Biggl( \bigcup_{i=1}^n [0, a_i] e_i \Biggr) = 1 + 2\sum_i a_i,$$ so if $\sum_i a_i = \infty$ then in $\ell_1$, $$Mag\Biggl( \bigcup_{i=1}^\infty [0, a_i] e_i \Biggr) = \infty$$ (a simple example of a compact negative type space with infinite magnitude).

Upper bounds for magnitude

In principle, we have the bound $Mag(A) \leq \|f\|_H^2$ for any $f \in H$ with $f|_A \equiv 1$. But this is hard to implement in practice; the norm is a complicated Sobolev-type norm. Mostly, it has been used for asymptotic results such as those of Barceló and Carbery, of Leinster and Meckes, and (sharpest of all) of Gimperlein, Goffeng and Louca.

The major exception: for a subset $A$ of Euclidean space $\mathbb{R}^n$, $$Div(A) \leq Mag(A) \leq c_n Div(A)$$ where $c_n$ is a dimensional constant. This is a deep result in potential theory about the relationship between two different families of capacities. Meckes deduced that the growth rate of the magnitude of $tA$ as $t \to \infty$ is also the Minkowski dimension of $A$.

The most powerful way of bounding magnitude above has been to approximate $A$ by spaces $A_k$ with weight measures, then use lower semicontinuity. And one can iterate this idea, approximating the $A_k$ in the same way too.

To say more, we need the idea of intrinsic volumes on an $n$-dimensional normed space of negative type. There’s a Hadwiger-type theorem in this generality, giving well-defined intrinsic volumes $\mu_0, \ldots, \mu_n$ called the Holmes-Thompson intrinsic volumes. (There are some details here regarding choice of normalization that I’m not attempting to keep track of.)

  • Take the normed space $\ell_1^n$. For compact, $\ell_1$-convex sets $K$, $$Mag(K) \leq \sum_{i=0}^n 4^{-i} \mu_i(K) = vol\bigl([0, 1]^n + \tfrac{1}{2}K\bigr).$$

  • Next approximate an $n$-dimensional subspace of $L_1 = L_1[0, 1]$ by subspaces of $\ell_1^N$. For an $n$-dimensional normed space $E$ of negative type and compact convex $K \subseteq E$, $$Mag(K) \leq \sum_{i=0}^n (1/4)^i \mu_i(K) \leq \exp(\mu_1(K)/4)$$ and there’s a slightly sharper estimate in the special case of $K \subseteq \ell_2^n$, using the Alexandrov-Fenchel inequalities.

Applications:

Let $E$ be a finite-dimensional normed space of negative type.

  • If $A$ is a compact subset of $E$ then $Mag(tA)$ is always finite and $Mag(tA) \to 1$ as $t \to 0$ (the one-point property).

  • If $K$ is a compact convex subset of $E$ then $Mag(tK)$ is differentiable at $0$, and in fact $$\limsup_{t \to 0} \frac{Mag(tK) - 1}{t} \leq \tfrac{1}{4}\mu_1(K).$$ There are some known cases where the limit exists and equals $\tfrac{1}{4}\mu_1(K)$, in which case that’s the derivative at $t = 0$ of $Mag(tK)$.

  • There is another application to Sudakov minoration. Using the bound on $\varepsilon\sqrt{\log N(K, \varepsilon)}$ mentioned earlier, Meckes gets a new proof of Sudakov’s minoration inequality.

  • He also gets a short new proof of a special case of Mahler’s conjecture, making the assumption of negative type: $$vol(B)\, vol(B^\circ) \geq 4^n/n!$$ for the unit ball $B$ in an $n$-dimensional normed space of negative type. (Here $B^\circ$ is the polar set of $B$.)

There is also a Brunn-Minkowski-type inequality for maximum diversity, at least in the one-dimensional space $\mathbb{R}^1$. In higher dimensions, only partial results are known at present. E.g. for subsets $A$ and $B$ of Euclidean space and $0 \leq \lambda \leq 1$, $$Div((1 - \lambda)A + \lambda B) \geq Div(A/\sqrt{2})^{1-\lambda}\, Div(B/\sqrt{2})^{\lambda}.$$

Open questions

Mark finished with a bunch of open questions. There are some really basic unknowns here!

  • Consider spaces $A$ of negative type. If $t \gt 1$, is $Mag(tA) \geq Mag(A)$? Is $Mag(A) \leq \#A$? If $t \lt 1$, is $Mag(tA) \geq Mag(A)^t$ (cf. Brunn-Minkowski)?

  • In a normed space of negative type, is magnitude continuous on compact convex sets? We know it’s continuous in full dimension and at a point. But Mark suspects it’s false in general.

  • Does $Mag(tK)$ have derivative $\mu_1(K)/4$ at $t = 0$, for compact $K$ in a finite-dimensional linear subspace $E \subseteq L_1$?

  • What about a maximum diversity approach to the full Mahler conjecture?

  • What about Brunn-Minkowski in arbitrary normed spaces, for either magnitude or maximum diversity?

  • Can we get any inequalities from magnitude homology? This could be a powerful method!

Posted by: Tom Leinster on December 5, 2023 3:08 AM | Permalink | Reply to this

Re: Mark Meckes: Inequalities for magnitude and maximum diversity

What an intense blog-comment! I’ve seen people publish papers with less content than this.

Looks like a great conference.

Posted by: John Baez on December 5, 2023 11:13 AM | Permalink | Reply to this

Re: Mark Meckes: Inequalities for magnitude and maximum diversity

Ha, yes, I don’t think I’ve mastered the art of live-blogging. But Mark’s talk was great, and I didn’t want to miss anything!

It is a great conference, with a really positive and friendly atmosphere.

Posted by: Tom Leinster on December 5, 2023 1:03 PM | Permalink | Reply to this

Re: Mark Meckes: Inequalities for magnitude and maximum diversity

One tiny correction: the Brunn–Minkowski-type inequality with the $1/\sqrt{2}$ holds in Euclidean space of any dimension, not just the plane.

Posted by: Mark Meckes on December 6, 2023 9:10 PM | Permalink | Reply to this

Re: Mark Meckes: Inequalities for magnitude and maximum diversity

Thanks, Mark. Fixed now.

Posted by: Tom Leinster on December 6, 2023 11:28 PM | Permalink | Reply to this

Re: Mark Meckes: Inequalities for magnitude and maximum diversity

A recent update to one of the issues discussed in my talk: Emily Roff and Masahiko Yoshinaga have shown that generic finite metric spaces satisfy the one-point property.

Posted by: Mark Meckes on January 2, 2024 3:55 PM | Permalink | Reply to this

Asuka Takatsu: Geometry of sliced/disintegrated Monge-Kantorovich metrics

The first talk not on magnitude is by Asuka Takatsu, in an area where optimal transport meets metric geometry. It represents joint work with J. Kitagawa.

Notation: $P(X)$ is the set of probability measures on a complete, separable metric space $X$.

The Monge-Kantorovich metric is also known as the Wasserstein metric. Data scientists are interested in it, but it’s expensive to compute. To fix this, one possibility is to use “sliced” M-K metrics.

For $1 \leq p \lt \infty$ and $1 \leq q \leq \infty$, we’ll define the M-K distance $MK_{p,q}$ on the space $P_p(\mathbb{R}^n)$ of measures with finite $p$th moment (i.e. $\int |x|^p \, d\mu(x)$ is finite). With this metric, $P_p(\mathbb{R}^n)$ is complete and separable, and it’s geodesic iff either $n = 1$ or $p = 1$.

We’re also going to consider the space $P(S^{n-1} \times \mathbb{R})$, and the embedding of $\mathbb{R}^n$ into $S^{n-1} \times \mathbb{R}$. For a probability measure $\sigma$ on $S^{n-1} \times \mathbb{R}$, there’s an M-K metric $MK_{p,q}^\sigma$ on $P_{p,q}^\sigma(S^{n-1} \times \mathbb{R})$, which is complete, separable (if $q \lt \infty$), and always geodesic.

The classical M-K distance $MK_p$ first arose in optimal transport. For probability measures $\mu$ and $\nu$ on $X$, we have the set of couplings between them, $$\Pi(\mu, \nu) = \{\pi \in P(X \times X) : \pi(A \times X) = \mu(A) \text{ and } \pi(X \times A) = \nu(A) \text{ for all measurable } A\}.$$ For a cost function $c\colon X \times X \to \mathbb{R}$, the total cost of moving from $\mu$ to $\nu$ according to $\pi$ is $\int_{X \times X} c(x, y) \, d\pi(x, y)$, and the optimal transport problem is to minimize the total cost over $\pi \in \Pi(\mu, \nu)$. We put $$MK_p(\mu, \nu) = \inf_{\pi \in \Pi(\mu, \nu)} \biggl( \int_{X \times X} d(x, y)^p \, d\pi(x, y) \biggr)^{1/p}.$$ This is a metric on $P_p(X)$. The inf is achieved. Also, the metric space $(P_p(X), MK_p)$ is complete and separable, and it’s geodesic if $X$ is.

(Categorical point: the “gluing lemma” says there’s a canonical map $$\Pi(\mu, \nu) \times \Pi(\nu, \lambda) \to \Pi(\mu, \lambda).$$ It’s associative… and indeed, we have a category here. There may be some analytic/measure-theoretic requirements to get this to work. Mark Meckes tells me that Tobias Fritz calls this the “category of couplings”, and that Simon Willerton has pondered it too.)

Examples

  • If $\mu = \tfrac{1}{n} \sum_{i=1}^n \delta_{x_i}$ and $\nu = \tfrac{1}{n} \sum_i \delta_{y_i}$ then the minimizer is $\tfrac{1}{n} \sum_i \delta_{(x_i, y_{\sigma(i)})}$ for some permutation $\sigma$.

  • Let $\mu, \nu \in P(\mathbb{R})$. Then we transport $x$ to $y$ whenever $\mu(-\infty, x] = \nu(-\infty, y]$. (I guess this assumes $\mu$ and $\nu$ always give nonzero measures to nonempty open sets, otherwise that doesn’t make sense.)

This was just the warmup before Asuka got onto the main topic, sliced M-K metrics. For $\omega \in S^{n-1}$, define $$R^\omega = \langle -, \omega \rangle\colon \mathbb{R}^n \to \mathbb{R}.$$ Then for $\mu \in P(\mathbb{R}^n)$, we get the pushforward measure $R^\omega_\# \mu \in P(\mathbb{R})$. (Asuka drew a picture here justifying the word “sliced”.) And now put $$MK_{p,q}(\mu, \nu) = \bigl\| MK_p(R^\bullet_\# \mu, R^\bullet_\# \nu) \bigr\|_{L^q(\sigma)}$$ where $\sigma$ is the uniform probability measure on $S^{n-1}$. That is, unwinding the notation, $$MK_{p,q}(\mu, \nu) = \biggl( \int_{S^{n-1}} MK_p(R^\omega_\# \mu, R^\omega_\# \nu)^q \, d\sigma(\omega) \biggr)^{1/q}.$$ So if $q = 1$, then $MK_{p,q}(\mu, \nu)$ is the expected $MK_p$ distance between $\mu$ and $\nu$ after you project them down to a random line in $\mathbb{R}^n$. (This reminds me of mean width.)
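For empirical measures of two equal-size point clouds, the one-dimensional distances can be computed by sorting (the quantile coupling of the second example above), which makes a Monte Carlo version of the sliced metric very cheap. A sketch under those assumptions (my own illustration, with finite $q$):

```python
import numpy as np

def sliced_mk(X, Y, p=2, q=1, num_directions=500, rng=None):
    """Monte Carlo sketch of the sliced Monge-Kantorovich distance
    MK_{p,q} between the empirical measures of point clouds X, Y
    (arrays of shape (m, n), equal m): project onto random directions
    omega in S^{n-1}, compute the 1-dimensional MK_p distance via
    sorted samples, and take an L^q average over omega."""
    rng = np.random.default_rng(rng)
    m, n = X.shape
    dists = np.empty(num_directions)
    for i in range(num_directions):
        omega = rng.normal(size=n)
        omega /= np.linalg.norm(omega)       # uniform direction on S^{n-1}
        u, v = np.sort(X @ omega), np.sort(Y @ omega)
        # 1-d optimal transport between equal-weight empirical measures
        # matches order statistics (the quantile coupling).
        dists[i] = np.mean(np.abs(u - v) ** p) ** (1 / p)
    return np.mean(dists ** q) ** (1 / q)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y = rng.normal(size=(200, 3)) + 1.0
print(sliced_mk(X, Y))
```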

$MK_{p,q}$ is a complete, separable metric on $P_p(\mathbb{R}^n)$. It’s topologically equivalent to $P_p(\mathbb{R}^n)$ with the metric $MK_p$, but not usually metrically equivalent. Typically it’s not geodesic, but it is if $n = 1$ or $p = 1$.

At this point I stopped being able to take notes, as this is new stuff to me and it took all my concentration to follow.

Posted by: Tom Leinster on December 5, 2023 6:10 AM | Permalink | Reply to this

Re: Asuka Takatsu: Geometry of sliced/disintegrated Monge-Kantorovich metrics

Mark Meckes tells me that Tobias Fritz calls this the “category of couplings”

Specifically, this category is briefly discussed in Remark 12.10 (and appears again in a few other spots) in A synthetic approach to Markov kernels, conditional independence and theorems on sufficient statistics. In fact he discusses it in a more general setting of a Markov category which satisfies some additional axioms; those additional axioms correspond to the extra requirements needed to get the construction to work.

In the setting of Asuka’s talk, what you need is precisely the assumption of working on a complete, separable, metrizable space (aka a Polish space), which she included from the beginning. In Tobias’s paper, this means that the category of Polish spaces and measurable maps satisfies the necessary axioms.

Posted by: Mark Meckes on December 6, 2023 10:02 PM | Permalink | Reply to this

Emerson Gaw Escolar: On interval covers and resolutions of persistence modules

The second non-magnitude talk is on a topic that seems very closely linked to magnitude homology: persistent homology. The speaker is Emerson Gaw Escolar, and let me quote the beginning of his abstract:

In topological data analysis, one-parameter persistent homology can be used to describe the topological features of data in a multiscale way via the persistence barcode, which can be formalized as a decomposition of a persistence module into interval representations. Multi-parameter persistence modules, however, do not necessarily decompose into intervals and thus (together with other reasons) are complicated to describe.

(All the abstracts are on the programme page.)

Emerson’s talk will give general results on “interval approximation” and “interval global dimension” of finite posets, and he’ll also try to relate his observations to magnitude.

Setting: $P$ is a finite poset, $A = kP$ is its incidence algebra (i.e. the category algebra), and we consider the category $[P, vect_k]$ of finite-dimensional $k$-representations of $P$, also called persistence modules over $P$. The goal is to approximate them by simpler ones, the interval-decomposable modules.

For example, if we consider a point cloud and increasing radii $r_1 \lt r_2 \lt \cdots \lt r_n$ of balls around the points, then our poset $P$ is $\bullet \to \bullet \to \cdots \to \bullet$. In this 1-dimensional situation, we can decompose into interval modules. That’s the standard structure theorem in persistent homology (the “barcode” theorem, I think it’s sometimes called).

(Emerson’s talk has lots of nice diagrams that I can’t reproduce here. The conference website has copies of slides from speakers willing to share them; maybe Emerson’s will appear there.)

Now for multiple parameters. For instance, you might have spatial parameters (e.g. radius) and a time parameter (data changing through time). Or there might be clustering depending on multiple parameters that govern how you’re doing your clustering.

In that first example, the spatiotemporal one, you have a different persistence diagram for each time point. So you need to think about how the birth/death events at one time point connect up to those at other time points. There’s no analogue of the decomposition into interval modules; we have a “wild representation type”.

Often, the poset $P$ for multiparameter situations is the grid $[m] \times [n]$ for some natural numbers $m, n$. (Here $[n] = \{0 \lt \cdots \lt n\}$.) We have at our disposal the adjointness relation $$Fun([m] \times [n], vect) \cong Fun([m], Fun([n], vect)).$$ But we’ll think about all finite posets $P$.

A full subposet $I$ of $P$ is an interval if it is connected and convex, in the obvious senses. (But “interval” in the poset literature means something different!) For example, there are intervals in $[m] \times [n]$ that aren’t the product of two posets of the form $[k]$. For an interval $I$, the interval representation $V_I\colon P \to vect_k$ is defined by $V_I(p) = k$ for all $p \in I$ and $V_I(p) = 0$ when $p \notin I$; it’s defined on maps in the simplest possible way.
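For a finite poset given explicitly, the interval condition is directly checkable. A small sketch for the grid $[m] \times [n]$ (my own illustration; here “connected” is read as connectedness of the comparability graph restricted to $I$):

```python
from itertools import product

def leq(p, q):
    """Product order on the grid [m] x [n]."""
    return p[0] <= q[0] and p[1] <= q[1]

def is_interval(I, P):
    """Check that a subset I of the poset P is an interval: convex
    (p <= r <= q with p, q in I forces r in I) and connected in the
    comparability graph restricted to I."""
    I = set(I)
    convex = all(r in I for p in I for q in I for r in P
                 if leq(p, r) and leq(r, q))
    seen, frontier = set(), [next(iter(I))]
    while frontier:
        p = frontier.pop()
        if p not in seen:
            seen.add(p)
            frontier += [q for q in I if leq(p, q) or leq(q, p)]
    return convex and seen == I

P = list(product(range(3), range(3)))   # the grid [2] x [2]
I = [(0, 0), (1, 0), (0, 1)]            # an L-shape: not a product of chains
print(is_interval(I, P))                # True
```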

Now for approximation. Let $M$ be a persistence module. An interval-decomposable approximation of $M$ is a weakly terminal map from an interval-decomposable module to $M$. (So it’s something a bit like a projective cover.)

You can then consider “interval resolutions”: diagrams

$$\cdots \to X_2 \stackrel{f_2}{\to} X_1 \stackrel{f_1}{\to} X_0 \stackrel{f_0}{\to} M \to 0$$

such that $f_0$ is an interval approximation of $M$ and $f_i$ is an interval approximation of $\ker f_{i-1}$. And then you can talk about minimal interval resolutions, the projective dimension of $M$, and the global dimension of $kP$. I’ll skip some stuff here, but the path leads us deeper into representation theory and, in particular, the theory of dimensions of various types.

Relative homological algebra plays an increasing role in persistence, which Emerson mentioned briefly before skipping forward to interval global dimension of finite posets. He and his collaborators have computed the interval global dimension of [m]×[n][m] \times [n] for small mm and nn, and there are some interesting patterns, conjectures and theorems about monotonicity and stabilization.

Monotonicity theorem: if $Q$ is a full subposet of $P$ then $intgldim(Q) \leq intgldim(P)$. The proof uses the “intermediate extension” construction of Kuhn, Beilinson, Bernstein and Deligne, which is some analogue of convex hull and is a map from $vect^Q$ to $vect^P$ (not induction or coinduction).

Now, what about magnitude?

Is magnitude a measure of complexity, from the point of view of intervals?

Let $P$ be a finite poset. Form the full subcategory of interval representations of $P$. This is a finite category. What is its magnitude? Here we’re talking about the magnitude of the linear category $Intervals(P)$ of interval representations of $P$, i.e. the $Vect$-enriched category. What about the category of indecomposable projectives?

A paper by Joe Chuang, Alastair King and myself helps to answer the second question, with a formula for the magnitude of the linear category of indecomposable projectives over a suitably finite algebra. Also, for a finite poset $P$, $$Mag_\#(P) = Mag(IndecProj(kP))$$ where the LHS is the magnitude/Euler characteristic of $P$ as a category, and the RHS treats $IndecProj(kP)$ as a linear category. This answers the question.
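The left-hand side is itself computable by elementary linear algebra: the magnitude/Euler characteristic of a finite poset is the sum of the entries of the inverse of its zeta matrix (Möbius inversion). A quick sketch (my own addition):

```python
import numpy as np

def poset_magnitude(elements, leq):
    """Magnitude (Euler characteristic) of a finite poset: the sum of the
    entries of the inverse of the zeta matrix zeta[a, b] = [a <= b]."""
    zeta = np.array([[1.0 if leq(a, b) else 0.0 for b in elements]
                     for a in elements])
    return np.linalg.inv(zeta).sum()

# A chain 0 -> ... -> n has a bottom element, so magnitude 1;
# an n-element antichain (discrete poset) has magnitude n.
print(poset_magnitude(range(5), lambda a, b: a <= b))   # 1.0
print(poset_magnitude(range(5), lambda a, b: a == b))   # 5.0
```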

For the first question, I fell behind in taking notes, unfortunately.

It looks like the magnitude of $Intervals(0 \to \cdots \to n)$ is $n$, tested numerically for $n \leq 10$.

Posted by: Tom Leinster on December 5, 2023 7:25 AM | Permalink | Reply to this

Tom Leinster: Magnitude homology equivalence of Euclidean sets

The last talk of the day is by me, on joint work with Adrián Doña Mateo.

Since I can’t very well live-blog my own talk, I’ll just drop a link to my slides and copy my abstract:

I will present a theorem describing when two closed subsets $X$ and $Y$ of Euclidean space have the same magnitude homology in the following strong sense: there are distance-decreasing maps $X \to Y$ and $Y \to X$ inducing mutually inverse isomorphisms in positive-degree magnitude homology. The theorem states that this is equivalent to a certain concrete geometric condition. This condition involves the notion of the “inner boundary” of a metric space, which consists of those points adjacent to some other point. The proof of the theorem builds on Kaneta and Yoshinaga’s structure theorem for magnitude homology.

Posted by: Tom Leinster on December 5, 2023 7:29 AM | Permalink | Reply to this

Greybox fuzzing time-intensive programs

Wow, much food for thought.

One perhaps exotic-sounding application that I just put on arXiv last week (so too late to be noticed) is to help find inputs that reach particular parts of a program’s control flow graph in cases that typical “fuzzing” tools are poorly equipped to handle: https://arxiv.org/abs/2311.17200

Posted by: Steve Huntsman on December 5, 2023 1:34 PM | Permalink | Reply to this

Masahiko Yoshinaga: Magnitude homotopy type

The first talk on Wednesday is by our generous host Masahiko Yoshinaga, and it’s one I’ve been especially looking forward to: the notion of magnitude homotopy. It’s based on a paper of his with Yu Tajima, and the full title is “Magnitude homotopy type: a combinatorial-topological approach to magnitude homology”.

The idea is to study magnitude homology groups via combinatorial topology, and more particularly, discrete Morse theory.

Starting from a poset $P$, we get its order complex, the simplicial complex $\Delta = \{(p_0 \lt \cdots \lt p_n) : n \geq 0,\ p_i \in P\}$, hence a space $|\Delta|$.

Short review of discrete Morse theory. Take a set $V$ of vertices and a set $S \subseteq 2^V$. Then take $$M \subseteq \{(\sigma, \tau) \in S \times S : \sigma \subset \tau,\ |\tau \setminus \sigma| = 1\}.$$ We call $M$ an acyclic matching if:

  • each $\sigma \in S$ appears at most once in $M$
  • there is no cycle $\sigma_1 \vdash \tau_1 \succ \sigma_2 \vdash \tau_2 \succ \cdots \vdash \tau_n \succ \sigma_1$, where $\sigma_i \vdash \tau_i$ means $(\sigma_i, \tau_i) \in M$ and $\tau_i \succ \sigma_{i+1}$ means that $\tau_i \supset \sigma_{i+1}$ and $|\tau_i \setminus \sigma_{i+1}| = 1$.

We write $$C = \{\sigma \in S : \sigma \text{ does not appear in } M\}$$ and this is called the set of critical cells.

The fundamental theorem of discrete Morse theory says that $|S|$ is homotopy equivalent to a CW complex constructed from the critical cells.

Example Take an empty triangle with vertices called 1, 2, 3. Its face poset $P$ has elements $1, 2, 3, 12, 13, 23$ with the evident inequalities. Take $1 \vdash 12$ and $2 \vdash 23$. Then $3$ and $13$ are critical, and $P$ is homotopy equivalent to a circle with a point marked on it; the point is $3$ and the rest of the circle is $13$. (Masahiko drew some diagrams here, and this is my poor effort to put them into text.)

That’s the review of discrete Morse theory.

Now take a quasi-metric space $(X, d)$, i.e. a non-symmetric metric space. A directed graph is a typical example. We sometimes allow $\infty$ as a distance.

We’ll construct a poset suitable for the study of magnitude homology, the “causal order”. Masahiko wrote the words “potential on space-time” and then, as a motivating example, drew a very charming diagram.

There are some similar diagrams in his paper with Tajima; they’re space-time diagrams showing causal relationships and “cones of influence” or light-cones.

The causal order on $X \times \mathbb{R}$ is defined by $$(x_1, t_1) \lt (x_2, t_2) \iff t_1 \lt t_2 \ \text{and}\ d(x_1, x_2) \leq t_2 - t_1.$$ Magnitude homology can be defined purely in terms of this poset.

Now we define causal intervals. Let $a, b \in X$ and $\ell \geq 0$. Write $$Cau^\ell(X: a, b) = [(a, 0), (b, \ell)]$$ where the square brackets mean the interval in the causal order.

(Masahiko also made some remarks on the physics angle, e.g. time-like and light-like relations in Minkowski geometry. The notion of causal relation goes back to a 1967 paper of Kronheimer and Penrose. There’s also a relevant 2018 paper of Kunzinger and Sämann on causal spaces, I guess this one.)
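Here is a small sketch of the causal order and a causal interval for a finite quasi-metric space, assuming (as for graphs, where distances are integers) that only integer time coordinates are needed; the function names are my own:

```python
def causal_leq(p, q, d):
    """Reflexive causal order on X x R:
    (x1, t1) <= (x2, t2) iff equal, or t1 < t2 and d(x1, x2) <= t2 - t1."""
    (x1, t1), (x2, t2) = p, q
    return p == q or (t1 < t2 and d(x1, x2) <= t2 - t1)

def causal_interval(points, d, a, b, ell):
    """The causal interval Cau^ell(X: a, b) = [(a, 0), (b, ell)],
    with time coordinates restricted to the integers 0, ..., ell."""
    return [(x, t) for x in points for t in range(ell + 1)
            if causal_leq((a, 0), (x, t), d) and causal_leq((x, t), (b, ell), d)]

# Hypothetical example: the 4-cycle with its graph metric.
d = lambda x, y: min((x - y) % 4, (y - x) % 4)
print(causal_interval(range(4), d, 0, 0, 4))
```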

Next, magnitude homotopy type!

For $a, b \in X$, we have the order complex $\Delta Cau^\ell(a, b)$, i.e. the set of chains in $Cau^\ell(a, b)$. It has a subcomplex $\Delta'$ consisting of the chains $$(x_0, t_0) \lt \cdots \lt (x_n, t_n)$$ such that $d(x_0, \ldots, x_n) \lt \ell$. (Here $d(x_0, \ldots, x_n)$ means $\sum_i d(x_{i-1}, x_i)$.) The magnitude homotopy type is the quotient simplicial complex $$M^\ell(X: a, b) = \Delta Cau^\ell(X: a, b)/\Delta',$$ i.e. we collapse $\Delta'$ to a single point. Or we can consider everything as a space and take $$|\Delta Cau^\ell(X: a, b)|/|\Delta'|.$$ (Alternative point of view: consider the pair $(\Delta, \Delta')$.)

Theorem (Tajima and Yoshinaga)

  • $\tilde{H}_n(M^\ell(X: a, b)) \cong MH_{n,\ell}(X: a, b)$, where $\tilde{H}$ means reduced homology and the right-hand side is defined in the obvious way so that $MH_{n,\ell}(X) = \bigoplus_{a, b \in X} MH_{n,\ell}(X: a, b)$.
  • $M^\ell(X: a, b)$ is a double suspension of $\bigl|\Delta(Cau^\ell(a, b) \setminus \{(a, 0), (b, \ell)\})/\Delta' \cap \cdots\bigr|$ (an object originally considered for graphs by Asao and Izumihara).

Example Let $X = \mathbb{Z}$, with points labelled as $p_n$ ($n \in \mathbb{Z}$). What is $M^4(X: p_0, p_0)$? Here the explanation absolutely needs the diagrams that Masahiko drew:

I’m afraid my notes can’t convey the dynamic live explanation here. The conclusion is that $M^4(X: p_0, p_0)$ is homotopy equivalent to $S^4 \vee S^4$, and $MH_{4,4} \cong \mathbb{Z}^2$.

Now put $$M^\ell(X) = \bigvee_{a, b \in X} M^\ell(X: a, b)$$ where the wedge is a one-point sum. (The definition of $M^\ell(X: a, b)$ as a quotient $\Delta/\Delta'$ gives it a basepoint.) Then we have a functor $$M^\ell\colon Met \to Top_\ast.$$

Theorem If $X$ is finite and its similarity matrix $Z = (e^{-d(a,b)})$ is invertible then the family of reduced Euler characteristics $$\Bigl( \tilde{\chi}(M^\ell(X: a, b)) \Bigr)_{a, b \in X,\ \ell \geq 0}$$ determines the metric space $(X, d)$.

Sketch proof The inverse $Z^{-1}$ of the similarity matrix of $X$ has $(a, b)$-entry $$\sum_{\ell \geq 0} \tilde{\chi}(M^\ell(a, b))\, e^{-\ell}.$$

There’s also an inclusion-exclusion result: under conditions, $$M^\ell(X \cup Y) \vee M^\ell(X \cap Y) \simeq M^\ell(X) \vee M^\ell(Y).$$ And there’s a Künneth theorem too!

Posted by: Tom Leinster on December 6, 2023 1:44 AM | Permalink | Reply to this

Jun O’Hara: Residues of manifolds

Next we have another non-magnitude talk on a topic that magnitudey people should probably be interested in: Jun O’Hara (web page) from Chiba University on residues of manifolds.

Let $X = (X, d, \mu)$ be either a closed submanifold of $\mathbb{R}^n$ or a compact body in $\mathbb{R}^n$ (“body” meaning of dimension $n$). (I think $d$ and $\mu$ are the distance function and measure on $X$.) We’re interested in the function $$z \mapsto B_X(z) = \int_{X \times X} d(x, y)^z \, d\mu_x \, d\mu_y \in \mathbb{C}$$ on $\mathbb{C}$. (This reminded me of capacities in potential theory, but that’s just a single integral with $x$ or $y$ fixed, and typically $z$ is real there. Here we’re integrating the capacity over the whole space.)

The function $B_X$ is holomorphic on the $z$ with real part $\gt -\dim X$, and can be analytically continued to $\mathbb{C}$ with only simple poles at some negative integers. It’s called a Brylinski beta function or the meromorphic (Riesz) energy function of $X$, and has been studied especially closely for knots.

I’m going to be briefer in these notes because the subject is less familiar to me and the talk slides are on the speaker’s web page.

We’re also interested, for a knot $K$, in the integrals $$\int_{K \times K} \frac{dx \, dy}{|x - y|}, \qquad E(K) = \int_{K \times K} \frac{dx \, dy}{|x - y|^2}$$ or rather their regularizations. The motivating problem: find a functional $e$ on the space of knots so that each knot type has an “optimal” representative minimizing $e$ within the knot type.

The meromorphic function $B_X$ produces two important quantities:

  • Residues: $R_X(z_0)$, the residue at a pole $z_0$ of $B_X$. Examples: volume, total squared curvature, Willmore functional.

  • Energy: $E_X(z)$ is $B_X(z)$ if $z$ is not a pole of $B_X$, and $$\lim_{w \to z} \Bigl( B_X(w) - \frac{R_X(z)}{w - z} \Bigr)$$ if $z$ is a pole. Examples: knot energies, generalized Riesz energies.

For a compact manifold $(X, d)$ and $R \in \mathbb{C} \setminus \{0\}$, we have the magnitude operator $Z_X(R)$: $$u \mapsto \biggl( x \mapsto \frac{1}{R} \int_X e^{-R\, d(x, y)} u(y) \, dy \biggr).$$ Notice that the “scale factor” $R$ is a complex number! This idea was crucial to the work of Gimperlein, Goffeng and Louca on magnitude, in which (for example) they discovered that the magnitude function of a suitably nice space determines its surface area and Willmore energy.

There followed a summary of how to compute residues using the coarea formula, and an explanation of a relation with integral geometry (including work with Gil Solanes in Barcelona).

Theorem (Brylinski; Fuller and Vemuri) Take a closed $m$-manifold $M$ in $\mathbb{R}^n$, with $d(x, y) = |x - y|$. Then $B_M$ has poles at $-m, -m-2, \ldots$; the first residue, at $-m$, is essentially the volume of $M$, and the second residue is essentially the Willmore energy for $m = 2$ and $n = 3$ (more generally we get something to do with mean curvature).

If $n = 2, 3$ then the intrinsic volumes (including the Euler characteristic) of a compact body are given by a linear combination of residues and relative residues. For all $n$, the inclusion-exclusion property holds for residues of compact bodies if the boundary is regular enough.

Now consider geodesic distance on a closed submanifold of $\mathbb{R}^n$ (this generalizes to Riemannian manifolds). The first residue is the volume, the second residue is the total scalar curvature, and the third residue is something more complicated involving the Ricci tensor and total scalar curvature.

Gimperlein, Goffeng and Louca showed that the magnitude function of such a manifold has an expansion whose coefficients involve volume and total scalar curvature, among other things. There’s more about this in a guest post they wrote last year.

A point to watch out for: different manifolds $X$ can have the same beta function $B_X$.

There are pairs of spaces distinguished by their magnitude functions, but not by their beta functions. E.g. consider $\ell_1 \gt \cdots \gt \ell_6$ such that all triples satisfy the triangle inequalities needed to make them the edge-lengths of a tetrahedron. A numerical experiment: there are 30 tetrahedra with the same beta functions, but different magnitudes.

On the other hand, there’s a pair of graphs distinguished by their beta functions, but both with magnitude function $$\frac{4 - 2q}{1 + q}.$$ They’re the path $\bullet\!-\!\bullet\!-\!\bullet\!-\!\bullet$ and the 4-vertex graph that looks like $\mathsf{Y}$.

There’s a more general, interesting, question of comparing the residue method and the magnitude method to see which is more sensitive (in different situations) in detecting differences between spaces.

Posted by: Tom Leinster on December 6, 2023 2:57 AM | Permalink | Reply to this

Re: Jun O’Hara: Residues of manifolds

Thank you very much for the post! Following Kiyonori Gomi’s suggestion, I found that the (relative) position of three points can be determined by the magnitude function; to be precise, by its behavior as $R \to +\infty$.

Posted by: Jun O'Hara on December 6, 2023 3:15 PM | Permalink | Reply to this

Re: Jun O’Hara: Residues of manifolds

Ah, fantastic! Thanks for that.

Jun’s comment answers a question he raised during his talk: if two three-point metric spaces have the same magnitude function, are they isometric?

The answer to the analogous question is “no” for four or more points, as shown by the graph example at the end of his talk (given in my notes above).

Posted by: Tom Leinster on December 6, 2023 11:26 PM | Permalink | Reply to this

Re: Jun O’Hara: Residues of manifolds

Tom, Kiyonori informed me that the example of two graphs with the same magnitude, $\bullet\!-\!\bullet\!-\!\bullet\!-\!\bullet$ and $\mathsf{Y}$, appeared in your paper “The magnitude of metric spaces”, Example 2.3.5. I apologize for the wrong reference in my talk.

Posted by: Jun OHara on December 10, 2023 12:25 PM | Permalink | Reply to this

Re: Jun O’Hara: Residues of manifolds

Oh, no problem at all! I’d totally forgotten I’d included that example.

Posted by: Tom Leinster on December 10, 2023 9:16 PM | Permalink | Reply to this

Re: Jun O’Hara: Residues of manifolds

A very recent update to this discussion: Jun has now shown that generic finite metric spaces are determined by their magnitude functions.

Posted by: Mark Meckes on January 2, 2024 3:57 PM | Permalink | Reply to this

Masaki Tsukamoto: Introduction to mean dimension

This afternoon’s first talk is by Masaki Tsukamoto, who’s giving an introduction to Gromov’s notion of mean dimension. I’ll paste in the abstract:

Mean dimension is a topological invariant of dynamical systems introduced by Gromov in 1999. It is a dynamical version of topological dimension. It evaluates the number of parameters per unit time for describing a given dynamical system. Gromov introduced this notion for the purpose of exploring a new direction of geometric analysis. Independently of this original motivation, Elon Lindenstrauss and Benjamin Weiss found deep applications of mean dimension in topological dynamics. I plan to survey some highlights of the mean dimension theory.

Tsukamoto works in dynamical systems and ergodic theory, and explained to us the idea of “dynamicalization”: take concepts and theorems in geometry, and find analogues in dynamical systems.

For example:

  • number of points $\mapsto$ topological entropy

  • pigeonhole principle $\mapsto$ symbolic coding

  • New example by Gromov: topological dimension $\mapsto$ mean dimension.

Slogan:

Mean dimension is the number of parameters per unit time for describing a dynamical system.

Plan for the talk:

  • Topological entropy and symbolic dynamics
  • Mean dimension and topological dynamics
  • Mean dimension and geometric analysis

Topological entropy and symbolic dynamics

For now, a dynamical system is a compact metrizable space $X$ with a homeomorphism $T\colon X \to X$, or equivalently, a continuous $\mathbb{Z}$-action. So then we can consider generalizing from $\mathbb{Z}$ to other groups.

For $\varepsilon \gt 0$, define $\#(X, d, \varepsilon)$ as the minimum number of open sets of diameter $\lt \varepsilon$ needed to cover $X$. Now for $N \geq 1$, put $$d_N(x, y) = \max_{0 \leq n \lt N} d(T^n x, T^n y).$$ This is a metric. The topological entropy of $(X, T)$ is $$h_{top}(X, T) = \lim_{\varepsilon \to 0} \lim_{N \to \infty} \frac{\log \#(X, d_N, \varepsilon)}{N}.$$

Example Let $A$ be a finite set, and consider the full shift on the alphabet $A$, which is the dynamical system $(A^\mathbb{Z}, \sigma)$ defined by $$\sigma((x_n)_{n \in \mathbb{Z}}) = (x_{n+1})_{n \in \mathbb{Z}}.$$ Then we get $$h_{top}(\sigma) = \log |A|.$$
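For shifts, the covering numbers in the definition are governed by counting admissible $N$-blocks, so $h_{top}$ can be estimated as $\log(\#\{N\text{-blocks}\})/N$. A crude brute-force sketch (my own, so only feasible for small $N$):

```python
import numpy as np
from itertools import product

def block_entropy(alphabet, admissible, N=14):
    """Estimate the topological entropy of a subshift as
    log(number of admissible N-blocks) / N."""
    count = sum(1 for w in product(alphabet, repeat=N)
                if admissible("".join(w)))
    return np.log(count) / N

# Full shift on two symbols: exactly log 2.
print(block_entropy("01", lambda w: True))
# Golden-mean shift (no "11" allowed): approaches log((1+sqrt(5))/2) ≈ 0.481.
print(block_entropy("01", lambda w: "11" not in w))
```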

Generally, here’s a recipe for dynamicalization: given an invariant $Q(K)$ of spaces $K$, define a dynamical version $Q_{dyn}$ so that $$Q_{dyn}(K^\mathbb{Z}, \sigma) = Q(K).$$

We just did this for cardinality. What about the pigeonhole principle?

Some terminology. An embedding $(X, T) \to (Y, S)$ of dynamical systems is a topological embedding that commutes with $T$ and $S$.

What obstructions are there to the existence of dynamical embeddings? Write $P_n(X, T)$ for the set of periodic points of period dividing $n$. If $(X, T)$ embeds in $(Y, S)$ then:

  • $h_{top}(X, T) \leq h_{top}(Y, S)$
  • $|P_n(X, T)| \leq |P_n(Y, S)|$ for all $n \geq 1$.

A partial converse holds! A closed subset $X$ of $A^\mathbb{Z}$ is called a subshift if $\sigma(X) = X$. The Krieger embedding theorem says that if $A$ and $B$ are finite sets and $X \subseteq A^\mathbb{Z}$ satisfies:

  • $h_{top}(X, \sigma) \lt \log |B|$,
  • $|P_n(X, \sigma)| \leq |B|^n$ for all $n \geq 1$,

then $X$ embeds in $B^\mathbb{Z}$. Note that $A$ can be much larger than $B$!

This is the dynamical version of the pigeonhole principle, in the following version: a finite set $A$ embeds in a finite set $B$ iff $|A| \leq |B|$. We’ve just changed the appropriate notion of size.

Mean dimension and topological dynamics

A continuous map of metric spaces is called an $\varepsilon$-embedding if each fibre has diameter $\lt \varepsilon$. For compact $X$, define $Widim_\varepsilon(X, d)$ (width dimension) as the minimum integer $n$ for which $X$ can be $\varepsilon$-embedded into an $n$-dimensional simplicial complex.

The topological dimension of $X$ is $$\dim X = \lim_{\varepsilon \to 0} Widim_\varepsilon(X, d) \in \mathbb{N}.$$ This is a topological invariant!

Theorem (Menger and Nöbeling) If $\dim X \lt N/2$ then $X$ topologically embeds in $[0, 1]^N$.

Short digression: every compact metric space of topological dimension 1 can be topologically embedded into the Menger sponge (itself a subspace of $[0, 1]^3$).

The dynamical embedding problem Consider the shift map $\sigma$ on the Hilbert cube $[0, 1]^\mathbb{Z}$. When does a given dynamical system $(X, T)$ embed in $([0, 1]^\mathbb{Z}, \sigma)$?

Periodic points again provide an obstruction. If $(X, T)$ does embed in $([0, 1]^\mathbb{Z}, \sigma)$ then $P_n(X, T)$ embeds in $P_n([0, 1]^\mathbb{Z}, \sigma) \cong [0, 1]^n$.

Allan Jaworski proved in his 1974 PhD thesis that if $(X, T)$ is a finite-dimensional system (i.e. $\dim X \lt \infty$) and has no periodic point then $(X, T)$ embeds in $([0, 1]^\mathbb{Z}, \sigma)$. Then he left mathematics and became an artist.

What about infinite-dimensional systems? Joseph Auslander asked in 1988: can we embed every minimal dynamical system in the shift $([0, 1]^\mathbb{Z}, \sigma)$ on the Hilbert cube? Minimal means that every orbit is dense, e.g. an irrational rotation of the circle. (This is a topological analogue of ergodicity.) Minimality clearly precludes the existence of periodic points: it’s a strong version of there being no periodic points.

The answer came about a decade later. In 1999, Mikhail Gromov introduced mean dimension, defined by

$$mdim(X, T) = \lim_{\varepsilon \to 0} \lim_{N \to \infty} \frac{Widim_\varepsilon(X, d_N)}{N}$$

Again, this is a topological invariant!

If $K$ is a nice space then

$$mdim(K^\mathbb{Z}, \text{shift}) = \dim K.$$

In this sense, mean dimension is the dynamicalization of topological dimension.

Earlier we had the slogan:

Mean dimension is the number of parameters per unit time for describing a dynamical system.

Here’s an informal (or maybe I mean formal?) explanation of the “per unit time” bit:

$$mdim(K^\mathbb{Z}, \sigma) = \frac{\dim K^\mathbb{Z}}{|\mathbb{Z}|} = \frac{|\mathbb{Z}| \times \dim K}{|\mathbb{Z}|} = \dim K.$$

In particular,

$$mdim([0,1]^\mathbb{Z}, \sigma) = 1.$$

But if $(X, T)$ embeds in $(Y, S)$ then one easily shows that

mdim(X,T)mdim(Y,S)! mdim(X, T) \leq mdim(Y, S)!

So if mdim(X,T)>1mdim(X, T) \gt 1 then it doesn’t embed in the shift of the Hilbert cube! Lindenstrauss and Weiss found in 2000 that for any c0c \geq 0, there is some minimal dynamical system of mean dimension cc. In particular, taking c>1c \gt 1, there is some minimal system not embeddable in the shift of the Hilbert cube.

Lindenstrauss proved a further result: if a minimal dynamical system has mean dimension smaller than N/36N/36 then it embeds in the shift of ([0,1] N) ([0, 1]^N)^\mathbb{Z}. And, with the speaker Tsukamoto, he proved that this constant 36 can’t be improved to anything less than 2. Finally, Tsukamoto and Gutman proved that it can be improved to exactly 2. So that completely settles that question.

Mean dimension and geometric analysis

The talk then went into more advanced material on Gromov’s original motivation, which was from geometry rather than symbolic dynamics. Gromov was interested in groups acting on manifolds MM, with a PDE on M and its space XX of solutions. I decided to concentrate on listening to this part rather than taking notes.

 

This was a really wonderful talk, with beautiful conceptual structure. I now think of dynamicalization as a cousin of categorification and feel irresistibly drawn to try to find something new to dynamicalize.

Posted by: Tom Leinster on December 6, 2023 6:04 AM | Permalink | Reply to this

Re: Masaki Tsukamoto: Introduction to mean dimension

In question time, I asked what the dynamicalization of the concept of the Shannon entropy of a finite set would be. Metric entropy, perhaps?

The speaker answered by recalling the variational principle: for a dynamical system (X,T)(X, T), it’s a theorem that

h top(X,T)=sup μh μ(T), h_{top}(X, T) = \sup_\mu h_\mu(T),

where the sup is over all TT-invariant measures μ\mu on XX and h μh_\mu is the (Kolmogorov-Sinai) metric entropy with respect to μ\mu.

Posted by: Tom Leinster on December 6, 2023 6:24 AM | Permalink | Reply to this

Re: Masaki Tsukamoto: Introduction to mean dimension

In question time, I asked what the dynamicalization of the concept of the Shannon entropy of a finite set would be. Metric entropy, perhaps?

The direct answer to this is “yes”.

Metric entropy is the slightly odd name for what would better be called measure-theoretic entropy, because it has nothing to do with metric spaces. Given an endomorphism TT of a probability space (X,μ)(X, \mu), one can define its entropy h μ(T)h_\mu(T), as per the link above.

Take a finite probability space AA. There’s an induced product measure μ\mu on A A^\mathbb{Z}. Let σ:A A \sigma:A^\mathbb{Z} \to A^\mathbb{Z} be the shift function. Small theorem: h μ(σ)h_\mu(\sigma) is equal to the Shannon entropy of the probability space AA.

Hence “metric” entropy is indeed a dynamicalization of Shannon entropy, in the sense of this talk.
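
To make that small theorem tangible, here is a minimal numerical check in Python (my own sketch; the distribution p is made up). Under the product measure, the entropy of the distribution of n-blocks is exactly n times the Shannon entropy of A, so the per-symbol block entropy, whose limit over n is h_\mu(\sigma), is constant at the Shannon entropy of A:

    import itertools, math

    p = [0.5, 0.3, 0.2]   # a made-up probability distribution on a 3-element set A

    def shannon(dist):
        return -sum(q * math.log(q) for q in dist if q > 0)

    H = shannon(p)
    for n in range(1, 5):
        # probabilities of n-blocks under the product (Bernoulli) measure
        blocks = [math.prod(word) for word in itertools.product(p, repeat=n)]
        print(n, shannon(blocks) / n, H)   # per-symbol block entropy equals H(A)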

Posted by: Tom Leinster on December 6, 2023 11:37 PM | Permalink | Reply to this

Re: Masaki Tsukamoto: Introduction to mean dimension

I wrote:

This was a really wonderful talk, with beautiful conceptual structure. I now think of dynamicalization as a cousin of categorification and feel irresistibly drawn to try to find something new to dynamicalize.

Maybe we can combine dynamicalization with categorification. For example, dynamicalization does

  • cardinality \mapsto topological entropy

and categorification does

  • cardinality \mapsto magnitude.

If we do both at once, perhaps we get an invariant of an endofunctor of a topological category, or something like that?

A different way to combine dynamicalization with categorification would be to look for a homology theory of dynamical systems whose Euler characteristic is topological entropy. (Or perhaps better, its exponential. It’s the exponential of entropy, not entropy itself, that behaves like cardinality.)

Is there already a homology theory for topological dynamical systems? What is its Euler characteristic?

Small observation: since (the exponential of) topological entropy is not usually an integer, we’d need the Euler characteristic of our homology theory not to be an integer either, so it can’t be a simple alternating sum of ranks of homology groups. But such things have been known to happen!

Posted by: Tom Leinster on December 6, 2023 12:31 PM | Permalink | Reply to this

Re: Masaki Tsukamoto: Introduction to mean dimension

In the discrete setting, topological entropy can be a nice size function as outlined in https://arxiv.org/abs/2205.05178. So you can combine all this stuff. This should give a way to bootstrap into symbolic dynamics per se but also (e.g.) Anosov systems, which can be described in that vein. The Ruelle zeta function (which counts closed geodesics of a given length) should feature prominently.

Posted by: Steve Huntsman on December 6, 2023 2:18 PM | Permalink | Reply to this

Re: Masaki Tsukamoto: Introduction to mean dimension

There is a sense in which dynamicalization of invariants of size is dual to categorification of invariants of size.

To explain what I mean, I need to back up a bit and note that while forgetful functors usually don’t preserve invariants of size, free functors often do. For example, the forgetful functor

VectSet \mathbf{Vect} \to \mathbf{Set}

does not “preserve size”: the dimension of a vector space is not equal to the cardinality of its underlying set. But the free functor

SetVect \mathbf{Set} \to \mathbf{Vect}

does: the dimension of the free vector space on a set is the cardinality of that set. Similarly, the forgetful functor

TopSet \mathbf{Top} \to \mathbf{Set}

that takes the set of points does not transform Euler characteristic into cardinality. But its left adjoint, the discrete space functor

SetTop, \mathbf{Set} \to \mathbf{Top},

does “preserve size”: the Euler characteristic of a discrete space on a set is the cardinality of that set. One more example: the objects functor

CatSet \mathbf{Cat} \to \mathbf{Set}

doesn’t preserve size: the magnitude (Euler characteristic) of a category does not equal the cardinality of its set of objects. But its left adjoint is the discrete category functor

SetCat, \mathbf{Set} \to \mathbf{Cat},

and the magnitude of the discrete category on a set is the same as the cardinality of the set. The same goes for enriched categories, including metric spaces (e.g. the magnitude of a discrete metric space is just its cardinality).

Here I’m assuming that everything is finite enough that everything is defined.

Now take a group GG and consider the category [G,Set][G, \mathbf{Set}] of left GG-sets. The “size” of a GG-set XX is understood to be the magnitude (= groupoid cardinality) of the lax quotient (= action groupoid) X//GX//G, which works out to be |X|/|G||X|/|G|.

As you’d expect given the examples above, the forgetful functor

U:[G,Set]Set U: [G, \mathbf{Set}] \to \mathbf{Set}

does not preserve size, as |X|/|G||X||X|/|G| \neq |X|. And as you’d expect, the left adjoint LL to UU does. Explicitly, LL is given on a set SS by L(S)=G×SL(S) = G \times S, with the obvious GG-action, and

|G×S||G|=|G|×|S||G|=|S|. \frac{|G \times S|}{|G|} = \frac{|G| \times |S|}{|G|} = |S|.

No surprises yet.

But UU also has a right adjoint RR, given on a set AA by R(A)=A GR(A) = A^G, again with the obvious GG-action. It doesn’t preserve size, since

|A G||G|=|A| |G||G||A|. \frac{|A^G|}{|G|} = \frac{|A|^{|G|}}{|G|} \neq |A|.

Idea Look for some invariant \|\cdot\| of GG-sets such that A G\|A^G\| = |A||A| for sets AA.

Roughly, this is the idea of dynamicalization.

In Masaki’s talk, we stuck to the group G=G = \mathbb{Z}. In that case, [G,Set][G, \mathbf{Set}] is the category of sets with a \mathbb{Z}-action, or equivalently, sets equipped with an automorphism. The right adjoint

R:Set[,Set] R: \mathbf{Set} \to [\mathbb{Z}, \mathbf{Set}]

sends a set AA to the set A A^\mathbb{Z} with the shift automorphism σ A\sigma_A.

What I want to say now is that there’s an invariant \|\cdot\| on sets-with-automorphism which, when applied to (A ,σ A)(A^\mathbb{Z}, \sigma_A), gives just |A||A| for any finite set AA. That’s the idea, but I don’t know whether that’s quite true. It seems that we need to treat the set AA as a discrete topological space and bring the profinite topology on A A^\mathbb{Z} into play. So we’re looking at the functors

SetDTopR[,Top], \mathbf{Set} \stackrel{D}{\to} \mathbf{Top} \stackrel{R}{\to} [\mathbb{Z}, \mathbf{Top}],

where DD means discrete and RR is the right adjoint of the forgetful functor. For X[,Top]X \in [\mathbb{Z}, \mathbf{Top}] (i.e. for a topological space XX equipped with an automorphism), define X\|X\| to be the exponential of the topological entropy of XX. Then

RD(A)=|A| \|R D(A)\| = |A|

for all finite sets AA. This is equivalent to the assertion h top(A ,σ)=log|A|h_{top}(A^\mathbb{Z}, \sigma) = \log |A| in my notes on Masaki’s talk, which was given as the basic example of dynamicalization.
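
As a numerical sanity check on that assertion, here is a small Python sketch (mine, not from the talk). It relies on the standard fact that the topological entropy of a subshift of finite type with 0-1 transition matrix M is the log of the spectral radius of M; the full shift on A corresponds to the all-ones matrix, giving \log|A|:

    import numpy as np

    def sft_entropy(M):
        # topological entropy of the subshift of finite type with
        # 0-1 transition matrix M: log of the spectral radius
        return np.log(max(abs(np.linalg.eigvals(M))))

    A = 3
    full_shift = np.ones((A, A))               # every transition allowed
    print(sft_entropy(full_shift), np.log(A))  # both log 3

    golden_mean = np.array([[1, 1], [1, 0]])   # forbid the word 11
    print(sft_entropy(golden_mean), np.log((1 + np.sqrt(5)) / 2))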

So dynamicalization is something to do with right adjoints to forgetful functors preserving size.

Posted by: Tom Leinster on December 10, 2023 11:03 AM | Permalink | Reply to this

Luigi Caputi: On finite generation in magnitude cohomology of graphs, and its torsion

Back to magnitude, with Luigi Caputi from Torino/Turin.

The talks here have ranged from analysis to homological algebra to dynamical systems to geometry to category theory, but the theme for this one is representation theory.

Plan:

  • Recap, problems, questions
  • Representation theory of categories
  • Applications

Recap, problems, questions

GG denotes an undirected graph, usually connected, and not necessarily simple (i.e. it may have loops and multiple edges). But most things can be generalized to directed graphs.

Such a graph is a metric space under the path metric.

Maps: there are the ordinary morphisms of graphs that send vertices to vertices and edges to edges. But we’ll concentrate on minor morphisms ϕ:GG\phi: G' \to G. Such a thing is a function

V(G)E(G){*}V(G)E(G){*} V(G') \coprod E(G') \coprod \{\ast\} \to V(G) \coprod E(G) \coprod \{\ast\}

such that

  • vertices go to vertices and ϕ(*)=*\phi(\ast) = \ast

  • if eE(G)e \in E(G') is incident to v,wv, w then:

    • ϕ(e)=*\phi(e) = \ast, or
    • ϕ(e)=ϕ(v)=ϕ(w)\phi(e) = \phi(v) = \phi(w), or
    • ϕ(e)E(G)\phi(e) \in E(G) and it is incident to ϕ(v),ϕ(w)\phi(v), \phi(w)
  • ϕ 1(E(G))\phi^{-1}(E(G)) maps bijectively to E(G)E(G)

  • ϕ 1(v)\phi^{-1}(v) is a tree for all vv.

It is a contraction if ϕ 1{*}={*}\phi^{-1}\{\ast\} = \{\ast\}. Write GraphGraph for the category of graphs and minor morphisms, and CGraphCGraph for graphs and contractions.

Perspective on magnitude homology: MH k,(G)MH_{k, \ell}(G) is a subquotient of the free module Λ k(G)\Lambda_k(G) generated by all (k+1)(k + 1)-tuples of vertices. Write I k(G)I_k(G) for the submodule generated by (x 0,,x k)(x_0, \ldots, x_k) such that x i1=x ix_{i - 1} = x_i for some ii. Then

MC k,(G)R k(G)=Λ k(G)/I k(G). MC_{k, \ell}(G) \subseteq R_k(G) = \Lambda_k(G)/I_k(G).

Note that MH k,MH_{k, \ell} is a functor CGraphMod RCGraph \to Mod_R.

Luigi then recalled some computations for graph magnitude homology, including:

dimMH k,(C m)=a(k,)m+b(k,) dim MH_{k, \ell}(C_m) = a(k, \ell)m + b(k, \ell)

for some constants a(k,)a(k, \ell) and b(k,)b(k, \ell).
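
(A brute-force illustration, not from the talk: the following Python sketch computes these dimensions for small cycles straight from the Hepworth-Willerton definition of the magnitude chain complex, with ranks computed numerically. All helper names are mine.)

    import itertools
    import numpy as np

    def d(m, i, j):
        # shortest-path distance in the cycle graph C_m
        return min(abs(i - j), m - abs(i - j))

    def basis(m, k, l):
        # generators of MC_{k,l}(C_m): (k+1)-tuples of vertices with
        # consecutive entries distinct and total length l
        return [t for t in itertools.product(range(m), repeat=k + 1)
                if all(t[i] != t[i + 1] for i in range(k))
                and sum(d(m, t[i], t[i + 1]) for i in range(k)) == l]

    def boundary(m, k, l):
        # matrix of the differential MC_{k,l} -> MC_{k-1,l}
        src, tgt = basis(m, k, l), basis(m, k - 1, l)
        index = {t: i for i, t in enumerate(tgt)}
        D = np.zeros((len(tgt), len(src)))
        for col, t in enumerate(src):
            for i in range(1, k):   # only interior vertices can be removed
                a, b, c = t[i - 1], t[i], t[i + 1]
                if d(m, a, c) == d(m, a, b) + d(m, b, c):   # geodesic face
                    D[index[t[:i] + t[i + 1:]], col] += (-1) ** i
        return D

    def rank(D):
        return np.linalg.matrix_rank(D) if D.size else 0

    def dim_MH(m, k, l):
        return len(basis(m, k, l)) - rank(boundary(m, k, l)) - rank(boundary(m, k + 1, l))

    for m in (5, 6, 7, 8):
        print(m, dim_MH(m, 2, 2))   # linear in m, as the formula predicts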

Question Can we understand better the ranks of the magnitude homology groups?

Kaneta and Yoshinaga’s idea: starting from a simplicial complex KK, we get a poset by adding a top and bottom to the face poset of KK, and from that we get a graph. Kaneta and Yoshinaga proved that if MM is a manifold and KK is a triangulation of MM, then H˜ k2(M)\tilde{H}_{k - 2}(M) embeds into MH k,(G(K))MH_{k, \ell}(G(K)) for a certain \ell. This provides a source of examples of torsion in magnitude homology.

Theorem (Sazdanovic and Summers) For any finitely generated abelian group AA, there exist a graph GG and k,k, \ell such that MH k,(G)MH_{k, \ell}(G) contains AA as a subgroup.

Question Can we understand the torsion structurally?

Representation theory of categories

For a small category CC, a CC-module is a functor F:CMod RF: C \to Mod_R (where RR is a commutative ring with 1, Noetherian if need be).

For a subset SS of cCF(c)\bigoplus_{c \in C} F(c), we denote by span(S)span(S) the minimal submodule of FF containing SS. If SS is finite and F=span(S)F = span(S) then FF is finitely generated. In practice, we don’t use this definition but the result of a lemma: FF is f.g. iff there exist objects c 1,,c kCc_1, \ldots, c_k \in C and a surjection C(c i,)F \bigoplus \mathbb{Z}\cdot C(c_i, -) \to F. We write P c=C(c,)P_c = \mathbb{Z}\cdot C(c, -).

A module is Noetherian if all its submodules are finitely generated.

Write Rep(C)Rep(C) for the category of CC-modules and natural transformations. We say that Rep(C)Rep(C) is Noetherian if all f.g. CC-modules are Noetherian. But this is not equivalent to the category algebra of CC being Noetherian.

Example Let FIFI be the category of finite sets and injections. Then Rep(FI)Rep(FI) is Noetherian.

Idea Translate the algebraic problem into a combinatorial problem. (Follow the Hilbert basis theorem: if KK is Noetherian then so is K[x 1,,x n]K[x_1, \ldots, x_n], using Dickson’s lemma: the poset r\mathbb{N}^r is Noetherian.)

Next, Luigi explained some of the theory of quasi-Gröbner categories (which include the opposite of the category of graphs with genus bounded above by some constant). I’ve learned this week that my ability to live-blog is basically nil when the material gets into zones that are really unfamiliar to me.

Magnitude cohomology was introduced by Richard Hepworth in a fantastic paper. More people should study it! It deserves much more investigation and use.

Luigi and collaborator Carlo Collari proved that the functor

MH k,*:CGraph g opMod R MH^{k, *}: CGraph_{\leq g}^{op} \to Mod_R

is finitely generated. Here the domain is the opposite of the category of graphs with genus g\leq g and contractions between them.

Corollary Among graphs of genus g\leq g, torsion in the magnitude (co)homology is bounded, i.e. there’s a common annihilator.

(We should all call this the Caputi-Collari Corollary. It’s fun to say.)

A second corollary: assume our base ring is a field. Let kk \in \mathbb{N}. Then there’s a polynomial ff over \mathbb{Z} such that ff is of degree g+1\leq g + 1 and

dimMH k,*(G)f(|E(G)|) \dim MH^{k, *}(G) \leq f(|E(G)|)

for all GG of genus g\leq g. So, we’ve got numerical control over the ranks of the magnitude cohomology groups!

It’s also worth noting that in Remark 2.5 of Richard’s paper on magnitude cohomology, there’s a universal coefficient sequence that relates the dimensions of the magnitude homology and cohomology groups. Using this, we also get control over the dimensions of the homology groups.

Posted by: Tom Leinster on December 6, 2023 7:36 AM | Permalink | Reply to this

Juan Pablo Vigneaux: A combinatorial approach to Möbius inversion and pseudoinversion

In the final talk of the day, Juan Pablo Vigneaux will excavate the foundations of magnitude.

Plan:

  • Magnitude
  • Combinatorial linear algebra
  • Pseudoinversion
  • Comparison with previous results

Notation For a finite category AA, write ζ:obA×obA\zeta: ob A \times ob A \to \mathbb{Z} for (a,b)|Hom(a,b)|(a, b) \mapsto |Hom(a, b)|. A weighting k k^\bullet on AA is a function obAob A \to \mathbb{Q} such that

bζ(a,b)k b=1 \sum_b \zeta(a, b) k^b = 1

for all aa, and dually coweightings k k_\bullet. If there is both a weighting and a coweighting on AA then their sums are equal, and the common value is called the magnitude of AA. When ζ\zeta is invertible, there’s a unique weighting and a unique coweighting, given by the column- and row-sums of ζ 1\zeta^{-1}, and the magnitude is the sum of the elements of ζ 1\zeta^{-1}.

Back in my 2006 paper on the magnitude/Euler characteristic of a category, I showed that the matrix ζ\zeta has an inverse μ\mu if AA is finite, skeletal and the only idempotents are identities; it’s given by

μ(a,b)=(1) k|Hom(c 0,c 0)||Hom(c k,c k)|, \mu(a, b) = \sum \frac{(-1)^k}{|Hom(c_0, c_0)| \cdots |Hom(c_k, c_k)|},

where the sum is over all paths a=c 0c k=ba = c_0 \to \cdots \to c_k = b with c 0c kc_0 \neq \cdots \neq c_k. This generalizes an old formula attributed to Philip Hall for the Möbius function of a poset.

Akkaya and Ünlü on the one hand, and independently Chen and Vigneaux on the other, showed that the magnitude of a category can be extended to all categories by putting |A|= a,bζ +(a,b), |A| = \sum_{a, b} \zeta^+(a, b), where ζ +\zeta^+ is the Moore-Penrose inverse of ζ\zeta. (This is always defined, for every complex matrix.) Akkaya and Ünlü showed that this extended definition is invariant under equivalence of categories.
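
This extended definition is easy to experiment with. A minimal numpy sketch (mine, not from the talk), run on the category with two objects and two parallel arrows between them, whose magnitude is the Euler characteristic of its nerve, namely 0:

    import numpy as np

    # zeta matrix of the category with objects a, b and two parallel arrows a -> b:
    # zeta[i][j] = |Hom(i, j)|
    zeta = np.array([[1.0, 2.0],
                     [0.0, 1.0]])

    print(np.linalg.pinv(zeta).sum())   # magnitude = sum of entries of zeta^+

    # here zeta is invertible, so the Moore-Penrose inverse is the ordinary
    # inverse [[1, -2], [0, 1]], and the magnitude is 0, as expected: the
    # nerve is homotopy equivalent to a circle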

Moreover, if AA has a weighting, then it’s given by the row/column sums of ζ +\zeta^+, and dually. (I say “row/column” because I’m not fast enough to figure out which it is.) So the new definition of the magnitude of a category is compatible with the old weight/coweight one.

(All this is really about magnitude of matrices more generally.)

Juan Pablo gave a quite complicated formula for the magnitude of a finite metric space involving ratios of determinants, which I didn’t have time to type up. Further, in combinatorial linear algebra (“a completely ignored subject”), there’s a formula for det(ζ)\det(\zeta) that involves analysing cycles in AA. We can feed the latter formula into the former.

Then he gave a new formula for the Möbius function of a category AA (assuming it has Möbius inversion), again involving counting cycles in AA, as well as spanning graphs in AA.

(The basic idea here is that you can take the formula for the determinant as a signed sum over permutations, then use the disjoint cycle decomposition of each permutation.)

Much of the above works for enriched categories. So we can seek applications to metric spaces. For instance, applying the Möbius formula just mentioned gives a new proof that

lim t|tX|=#X \lim_{t \to \infty} |tX| = \# X

for any finite metric space XX. Challenge: use a similar approach to understand when it is that

lim t0|tX|=1. \lim_{t \to 0} |tX| = 1.

We know that this is true under certain mild conditions, but it’s not always true; e.g. Simon Willerton found a six-point space for which the limit is 6/56/5.

Juan Pablo proposed a general question. The formulas he presented earlier, involving sums over particular types of cycles, are complicated. But maybe they can be simplified using magnitude homology? For instance, the cancellations that appear in the new formula for the magnitude of a finite metric space are related to magnitude homology.

(A little point I’d never seen before: the (i,j)(i, j)-entry of the adjugate of a matrix A=(a ij)A = (a_{i j}) is the partial derivative detAa ij, \frac{\partial \det A}{\partial a_{i j}}, at least up to sign and the possibility that I’ve muddled up rows and columns.)

There’s a generalization of Cramer’s rule (i.e. A 1=adj(A)/det(A)A^{-1} = adj(A)/det(A)) to the Moore-Penrose inverse, called Berg’s formula. This gives us a combinatorial interpretation of the entries of ζ +\zeta^+, hence the magnitude of a category (and of a matrix more generally).

Juan Pablo finished by taking us back to some of the basic formulas for magnitude, including as the decategorification of magnitude homology. I found myself deep in thought and didn’t take notes on this bit.

Posted by: Tom Leinster on December 6, 2023 9:04 AM | Permalink | Reply to this

Giuliamaria Menara: Eulerian magnitude homology

First in today’s programme is Giuliamaria Menara on “Eulerian magnitude homology: introduction and application to random graphs” (joint work with Chad Giusti). Café readers may remember Giulia’s post here on Eulerian magnitude homology a little while ago. She also organized a minisymposium on Applications of magnitude and magnitude homology to network analysis at the big SIAM Conference on Applied Algebraic Geometry in Eindhoven this summer.

Question What kind of information about a graph GG is contained in the magnitude homology groups of GG?

The plan is to modify the definition of magnitude homology — that’s Eulerian MH — and apply this modified definition to Erdős-Rényi random graphs.

Giulia began by recalling the definition of the magnitude homology of a graph, which I won’t reproduce here. But a basic point to note is that the generators of the normalized chain groups are tuples (x 0,,x k)(x_0, \ldots, x_k) where x i1x ix_{i - 1} \neq x_i for all ii — i.e. neighbours are distinct, but we might, for instance, have x 0=x 2x_0 = x_2.

In Eulerian magnitude homology, we restrict further by asking that x ix jx_i \neq x_j for all iji \neq j. This gives us smaller chain groups EMC k,(G)EMC_{k, \ell}(G), and a different homology theory EMH *,*EMH_{\ast, \ast}.

Lemma The number of triangles in a graph is 1/61/6 times the number of basis elements of EMC 2,2EMC_{2, 2} whose differential is zero.
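
Here is a quick Python illustration of the lemma on a made-up example, a triangle with a pendant edge:

    import itertools

    # a made-up graph: triangle 0-1-2 plus a pendant vertex 3 attached to 2
    edges = {(0, 1), (1, 2), (0, 2), (2, 3)}
    adj = lambda u, v: (u, v) in edges or (v, u) in edges

    # basis elements of EMC_{2,2}: triples of pairwise-distinct vertices with
    # x0 ~ x1 ~ x2; the differential vanishes exactly when also x0 ~ x2
    zero_diff = [t for t in itertools.permutations(range(4), 3)
                 if adj(t[0], t[1]) and adj(t[1], t[2]) and adj(t[0], t[2])]
    print(len(zero_diff) / 6)   # 1.0: the graph has exactly one triangle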

Then there’s a similar upper bound on the number of cliques of a given size in a graph.

The Erdős-Rényi random graph G(n,p)G(n, p) is constructed by connecting nn labelled vertices randomly, where each edge is included with probability pp independently.

We’ll consider an ER graph G(n,n α)G(n, n^{-\alpha}) where 0α<0 \leq \alpha \lt \infty and its Eulerian magnitude homology. It’s a little difficult for me to take notes here as the diagrams of graphs are so integral; I guess the slides will appear on the conference web page sooner or later.

Theorem Let G(n,n α)G(n, n^{-\alpha}) be an ER graph. Then the EMH groups EMH k,k(G)EMH_{k, k}(G) vanish for α>k+12k1\alpha \gt \frac{k + 1}{2k - 1}.

When kk is large, k+12k112\frac{k + 1}{2k - 1} \approx \frac{1}{2}, so we’re roughly saying that the kkth diagonal EMH group vanishes when the probability of each edge being present is <1/n\lt 1/\sqrt{n}. For the sake of my own feeble mind, I’m going to note that k+12k1=12(1+22k1), \frac{k + 1}{2k - 1} = \frac{1}{2} \biggl( 1 + \frac{2}{2k - 1} \biggr), so it’s a decreasing function of kk. That is, when kk is small, the threshold on α\alpha is larger, so we require the edge probability to be smaller in order to conclude that EMH k,kEMH_{k, k} vanishes. Or more crudely put, EMH k,kEMH_{k, k} is more likely to vanish when kk is large, as we’d expect.

The next step starts with results of Matthew Kahle and Elizabeth Meckes on random clique complexes. Giusti and Menara were able to prove similarly flavoured results for the Betti numbers β k,k\beta_{k, k} of the diagonal Eulerian magnitude homology groups EMH k,kEMH_{k, k}, including a central limit type theorem:

If p=o(n 1/n)p = o(n^{-1/n}) then β k,k𝔼(β k,k)Var(β k,k) \frac{\beta_{k, k} - \mathbb{E}(\beta_{k, k})}{\sqrt{Var(\beta_{k, k})}} is normally distributed, N(0,1)N(0, 1).

Giulia stated further results on the random geometric graph, another kind of random graph. Again, there are results on the vanishing of the diagonal EMH groups and a central limit theorem for the Betti numbers, with the proofs obtained by modifying those of Kahle and Meckes.

Posted by: Tom Leinster on December 7, 2023 1:04 AM | Permalink | Reply to this

Tsubasa Kamiyama: Metric fibrations over one dimensional spaces are trivial

Next up is Tsubasa Kamiyama from Tohoku University.

The point of “metric fibrations” is that there’s a product formula for magnitude: the magnitude of the total space is the product of the magnitude of the base with the magnitude of the (any) fibre.

Question When are metric fibrations trivial?

Roughly speaking, “trivial” means that the total space is the 1\ell^1 product of the base space with the fibre.

This talk is about a new result:

Metric fibrations are trivial when the base space is one-dimensional.

Definition A map p:EXp: E \to X of metric spaces is a metric fibration if for all x,yXx, y \in X and ap 1(x)a \in p^{-1}(x), there exists αp 1(y)\alpha \in p^{-1}(y) such that

d E(a,b)=d X(x,y)+d E(α,b) d_E(a, b) = d_X(x, y) + d_E(\alpha, b)

for all bp 1(y)b \in p^{-1}(y):

diagram of metric fibration

The idea here is that a metric fibration is locally an 1\ell^1 product. And α\alpha is the closest point on p 1(y)p^{-1}(y) to aa.

In a metric fibration, all fibres are isometric. (I think we have to assume here that distances are finite.)

We’ll write γ y x(a)=α\gamma^x_y(a) = \alpha, so that γ y x\gamma^x_y transports from p 1(x)p^{-1}(x) to p 1(y)p^{-1}(y).

Lemma 1 We have

γ x yγ y x=id \gamma^y_x \circ \gamma^x_y = id

for all x,yXx, y \in X.

Definition A fibration p:EXp: E \to X is trivial if EE is isometric (via pp) to p 1(x)× 1Xp^{-1}(x) \times_1 X for all xx, where × 1\times_1 is the 1\ell^1 product.

Lemma 2 A fibration p:EXp: E \to X is trivial iff

γ z yγ y x=γ z x \gamma^y_z \circ \gamma^x_y = \gamma^x_z

for all x,y,zXx, y, z \in X.

Lemma 3 Let x,y,zXx, y, z \in X. If any of these three points is between the other two then

γ z yγ y x=γ z x. \gamma^y_z \circ \gamma^x_y = \gamma^x_z.

To get to the definition of “1-dimensional”, write

K n={(tcos(2πj/n),tsin(2πj/n)):0t<1,j=1,2,,n}. K_n = \{(t \cos (2\pi j/n), t\sin(2\pi j/n)): 0 \leq t \lt 1, j = 1, 2, \ldots, n\}.

This is a star shape, e.g. K 3K_3 looks like Y\mathsf{Y}. Then give K nK_n the geodesic metric.

A metric space XX is one-dimensional if for all xXx \in X, there exist some open neighbourhood UU of xx and n,r>0n \in \mathbb{N}, r \gt 0 such that UU is isometric to rK nr K_n with xx mapping to 0K n0 \in K_n. Write degxdeg x for this nn. (I guess nn must be uniquely determined.) Call xx a vertex if degx2deg x \neq 2.

(Note to self: K 1=[0,1)K_1 = [0, 1) and K 2=(1,1)K_2 = (-1, 1), so degrees 1 and 2 mean different things.)

Not all 1-dimensional manifolds are one-dimensional in this sense: e.g. a circle with its geodesic metric is, but a circle with the round (subspace) metric from 2\mathbb{R}^2 is not.

Main theorem Let p:EXp: E \to X be a metric fibration. If XX is one-dimensional then the fibration is trivial.

The proof is by induction on the number of vertices, and the speaker sketched it using diagrams which I won’t attempt to put into words.

I’m not sure what happens if XX is not compact, in which case the number of vertices could be infinite. I think the answer is that you approximate XX as a nested union of compact (or do I mean bounded?) subspaces.

Posted by: Tom Leinster on December 7, 2023 1:41 AM | Permalink | Reply to this

Sho Shimoyama: Exploring the uniqueness of p-minimizing movement in metric spaces: beyond p = 2

Next is a non-magnitude talk, on flow in metric spaces.

Problem For an initial point u 0u_0, find conditions such that:

  • (i) the pp-minimizing movement starting from u 0u_0 exists and is unique;

  • (ii) it is the pp-curve of maximizing slope.

Terminology to be defined!

Main theorem Under hypotheses, (i) holds and the pp-minimizing movement is the pp-curve of maximizing slope, i.e. (ii) holds too.

This was another talk with a lot of pictures, and also a lot of definitions in a field that I know nothing about, so this time I’m going to be minimal and just record the basic definition of “slope” of a function on a metric space.

Let XX be a complete metric space. Take a convex, lower-semicontinuous function f:X[0,]f: X \to [0, \infty]. Then the slope of ff at uXu \in X is defined to be

|f|(u)=limsup vumax{f(u)f(v),0}d(u,v) \left|-\partial f\right|(u) = limsup_{v \to u} \frac{\max\{f(u) - f(v), 0\}}{d(u, v)}

if f(u)f(u) \neq \infty, or as \infty if f(u)=f(u) = \infty. (I’m not sure what the minus sign on the left-hand side is doing and wonder whether I misread it.)

The idea is to mimic the gradient flow (in the Euclidean setting), moving down steepest slopes of the function. When you get into this, you have to choose a norm, and this is where the “pp” in the title arises: we’re using pp-norms. Mostly people have investigated the case p=2p = 2, but this talk is about going beyond that case — more specifically, to p>2p \gt 2.

Posted by: Tom Leinster on December 7, 2023 2:21 AM | Permalink | Reply to this

Yu Tajima: Magnitude homotopy type of graphs and Whitney twist via discrete Morse theory

The last talk of today is by Yu Tajima, who recently wrote a great paper on magnitude homotopy with her PhD supervisor Masahiko Yoshinaga. I’ll quote the abstract for her talk:

The invariance of magnitudes for graphs differ by a Whitney twist (with a certain condition) was proved by Leinster, and the problem that “Are their magnitude homology groups isomorphic” is still open. We approach the problem using discrete Morse theory on magnitude homotopy types. In the course of this approach, we had another proof of the invariance of magnitudes under a Whitney twist. This is joint work with Masahiko Yoshinaga (Osaka University).

Plan:

  • Introduction
  • Magnitude homotopy type
  • Discrete Morse theory
  • Whitney twist

Introduction

I have to include a picture of the wonderful first slide:

Write GG for a (finite, simple, connected) graph, with the usual shortest-path metric. A path is a sequence (x 0,,x k)(x_0, \ldots, x_k) of adjacent vertices, and we write

L(x 0,,x k)=d(x 0,x 1)++d(x k1,x k). L(x_0, \ldots, x_k) = d(x_0, x_1) + \cdots + d(x_{k - 1}, x_k).

I’ll skip the definitions of the magnitude chain groups and magnitude homology of a graph, which are in the Hepworth-Willerton paper.

Magnitude homotopy type

The definitions here are slightly different from those in Masahiko Yoshinaga’s talk.

For a,bGa, b \in G, define the simplicial complex

Δ (G;a,b)={{(x i 0,i 0),,(x i k,i k)}G×{0,,}:(x i 0,,x i k)some path (a=x 0,,x =b)}. \Delta_\ell(G; a, b) = \{ \{ (x_{i_0}, i_0), \ldots, (x_{i_k}, i_k)\} \subseteq G \times \{0, \ldots, \ell\} : (x_{i_0}, \ldots, x_{i_k}) \prec \text{some path } (a = x_0, \ldots, x_\ell = b)\}.

Here \prec means “subsequence”, I think. This has a subcomplex

Δ (G;a,b)={{x i 0,,x i k}Δ (G;a,b):L(x i 0,,x i k)<}. \Delta'_\ell(G; a, b) = \{\{x_{i_0}, \ldots, x_{i_k}\} \in \Delta_\ell(G; a, b): L(x_{i_0}, \ldots, x_{i_k}) \lt \ell \}.

Then we define the magnitude homotopy type (with or without basepoints) to be

M (G;a,b)=Δ (G;a,b)/Δ (G;a,b) M^\ell(G; a, b) = \Delta_\ell(G; a, b)/\Delta'_\ell(G; a, b)

and

M (G)= a,bGM (G;a,b). M^\ell(G) = \bigvee_{a, b \in G} M^\ell(G; a, b).

I’m hoping Yu will upload her slides, because I think this talk is going to be full of useful pictures (e.g. examples) that I can’t easily reproduce here.

Theorem MH k,(G)=H˜ k(M (G))MH_{k, \ell}(G) = \tilde{H}_k(M^\ell(G)).

Discrete Morse theory

More pictures! I’m not one of those ninjas who can live-TikZ diagrams (especially nice colourful ones), so I’m afraid I’ll have to skip this bit.

Whitney twist

I’ll refer back to an old Café post for the definition of Whitney twist.

Two graphs X,YX, Y that differ by a Whitney twist have the same magnitude, as long as the two joining vertices are adjacent. Emily Roff generalized this result to sycamore twists.

Open question Do XX and YY have the same magnitude homology?

Motivated by this, the speaker then presented a new proof of the invariance of magnitude under Whitney twists. (Again, this means Whitney twists where the gluing vertices are adjacent; it’s false if they’re not.)

And guess what? There are more crucial diagrams that I can’t reproduce here! This is a well-presented, well-explained talk and it pains me not to be able to do it justice here, so I really hope the slides will be put up.

In question time, Richard Hepworth asked whether the answer to the following is known: are there two graphs with the same magnitude homology but different magnitude homotopy types? The speaker says it’s unknown, but the answer is “yes” for metric spaces.

Posted by: Tom Leinster on December 7, 2023 3:12 AM | Permalink | Reply to this

Rayna Andreeva and Katharina Limbeck: Metric space magnitude for machine learning

It’s the last day of the meeting, and to wake us up after last night’s conference dinner, we have Rayna Andreeva. Well, we should have had Rayna, but there’s a sad story…

Originally the talk was meant to be a double act by Rayna and Katharina Limbeck, but unfortunately, Katharina both fell ill and had her flights cancelled, so she couldn’t make it to the conference. So Rayna had to undertake the heroic task of completely rewriting her talk to take twice the time, during the conference week. But then, even more unfortunately (and coincidentally), Rayna fell ill last night too. So right now, Emily Roff is doing the also-heroic job of presenting the talk, having only just looked at the slides a few minutes ago. Bravo.

Under these circumstances, I’m not going to attempt to take detailed notes. But the slides are on the conference slides page. The broad theme of Andreeva and Limbeck’s work is to use magnitude and magnitude dimension for deep learning (Andreeva) and representation learning (Limbeck), extending into unsupervised representation learning.

The first part of the talk is based on Andreeva, Limbeck, Rieck and Sarkar’s paper Metric space magnitude and generalisation in neural networks.

“Magnitude dimension” here is Simon Willerton’s notion — the instantaneous growth rate of the magnitude function:

dlog|tX|dlogt. \frac{d\log|t X|}{d\log t}.

So it’s scale-dependent. It’s sort of discussed at this blog post, although mostly in the different setting of spread rather than magnitude.
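
Numerically, this is just a log-log derivative of the magnitude function. A throwaway Python sketch (mine, not the authors’ code; the point cloud and grid of scales are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.random((300, 2))   # a random point cloud in the unit square
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)

    ts = np.exp(np.linspace(0.0, 4.0, 40))   # log-spaced scales t
    mags = [np.linalg.inv(np.exp(-t * D)).sum() for t in ts]
    mag_dim = np.gradient(np.log(mags), np.log(ts))   # d log|tX| / d log t
    print(mag_dim.max())   # typically peaks near 2 for a 2-dimensional cloud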

Slogan: the lower the magnitude dimension, the better the generalization.

Theorem Let XX be a compact subset of n\mathbb{R}^n such that either the magnitude dimension or the Minkowski dimension of XX exists. Then

dim MagX=dim PH 0X, \dim_{Mag} X = \dim_{PH}^0 X,

where the right-hand side is persistent homology dimension in degree 00.

Also: the lower the magnitude dimension, the higher the test accuracy, as borne out by some experimentation. And the higher the test accuracy, the lower the magnitude (effective number of models).

The second half is based on Limbeck, Andreeva, Sarkar and Rieck’s paper Metric space magnitude for evaluating unsupervised representation learning.

One thing that’s needed here is a good notion of the difference between magnitude functions.

Question Is there a good such notion?

They address this with a scale-invariant difference measure. There’s some principal component analysis involved here, some rescaling of the magnitude functions, and the notion of magnitude weights. And there are lots of experiments to test out their new methods, giving encouraging evidence that magnitude can detect the quality of representations.

There are also results on the stability of magnitude, in terms of continuity. There’s still much to explore, but the results are encouraging.

One bottleneck in the use of magnitude is the computational expense: inverting a matrix takes ages. They speed up computation of magnitude via Cholesky factorization (no inversion needed). For example, if I’ve read the graph correctly, they can compute the magnitude of a space with about 10000 points in about 10 seconds.
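
I don’t know the details of their implementation, but the basic trick is easy to sketch in Python: the similarity matrix of a positive definite space is symmetric positive definite, so one can get the weighting by solving Z w = 1 with a Cholesky factorization and then sum it, never forming Z^{-1}. (A sketch of mine, on an arbitrary random point cloud, using scipy:)

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    rng = np.random.default_rng(0)
    X = rng.random((1000, 3))   # a random point cloud in R^3
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    Z = np.exp(-D)              # similarity matrix (positive definite here)

    w = cho_solve(cho_factor(Z), np.ones(len(X)))   # weighting: solve Z w = 1
    print(w.sum())              # magnitude = sum of the weighting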

I obviously haven’t done justice to any of their work. See their papers and slides for more!

Both speakers have emphasized that they welcome questions by email.

Posted by: Tom Leinster on December 8, 2023 12:37 AM | Permalink | Reply to this

Re: Rayna Andreeva and Katharina Limbeck: Metric space magnitude for machine learning

For speeding up the calculation of weightings see appendix E of https://arxiv.org/abs/2201.11677.

Posted by: Steve Huntsman on December 8, 2023 2:45 AM | Permalink | Reply to this

Re: Rayna Andreeva and Katharina Limbeck: Metric space magnitude for machine learning

BTW appendix D of the same paper outlines a special case wherein coweightings are computed on matrices of size more than 160000x160000.

Posted by: Steve Huntsman on December 8, 2023 2:49 AM | Permalink | Reply to this

Jun Yoshida: On a logical foundation for parametrized homologies

From machine learning to semantics, with Jun Yoshida. Let’s begin with Jun’s abstract:

The magnitude homology has a variant called the blurred magnitude homology. It is constructed from a filtration by a distance function on a simplex spanned by the points on a metric space.

From this point of view, it is thought of as a sibling of the persistent homology. The goal of this talk is to introduce a common foundation for this type of parameterized homologies. More precisely, I will explain how they are understood as semantics of ordinary simplicial homology in the intuitionistic logic. As an application, one can make a provably correct implementation of these homologies.

The talk began with an overview of persistent homology. Then we had a summary of blurred magnitude homology, a notion due to Nina Otter.

Let XX be a metric space. For 0\ell \geq 0, the blurred magnitude complex MC *,(X)MC_{*, \leq\ell}(X) is the chain complex whose nnth group is the free module on the sequences (x 0,,x n)(x_0, \ldots, x_n) such that x 0x nx_0 \neq \cdots \neq x_n and

d(x i1,x i). \sum d(x_{i - 1}, x_i) \leq \ell.

In ordinary magnitude homology, this inequality would be an equality. And the differential is the same as in magnitude homology. You can recover the magnitude complex from the blurred magnitude complex as a quotient:

MC *,(X)MC *,(X)/ s<MC *,s(X). MC_{\ast, \ell}(X) \cong MC_{\ast, \leq\ell}(X)/\bigcup_{s \lt \ell} MC_{\ast, \leq s}(X).

Blurred magnitude homology is the homology of the blurred magnitude complex.

Question Can we generalize results about persistent homology to more general parameter spaces than subsets of \mathbb{R}?

This question was also addressed in Emerson Gaw Escolar’s talk. But Jun’s point of view is that multiparameter persistent homology seems not to cover all situations, and he’s going to explain the idea of persistent homology as homology sheaves.

The idea is to consider 𝕋=(0,]\mathbb{T} = (0, \infty] with the right-order topology, generated by half-open intervals (a,](a, \infty]. (That’s T for “time”.) Then persistent homology and blurred homology are defined “internally” in Sh(𝕋)Sh(\mathbb{T}). And Jun advocates using Sh(𝕋×)Sh(\mathbb{T} \times \mathbb{R}).

Goal Reformulate parametrized homology theory in the language of categorical logic.

Advantages:

  • The language and arguments will be independent of the parameter space (e.g. 𝕋\mathbb{T} or 𝕋×\mathbb{T} \times \mathbb{R}).

  • Should be applicable to non-spatial toposes, such as persistent homology with infinitesimal time dependency. (There’s a connection here with the vineyard algorithm.)

  • A rigorous connection between the theory and the implementation, leading to a formally verified implementation in Lean that the speaker is building.

Jun then reviewed the basics of logic on sheaves, starting with the idea that a truth value on a topological space is an open subset. For example, in 𝕋\mathbb{T}, we have truth values like “true after time aa”.

(I guess we’re emphasizing “after” over “before” because something already having happened is an observable condition, hence corresponds to an open set.)

The speaker emphasized that the Mitchell-Bénabou language and Kripke-Joyal semantics apply not just in sheaf toposes but in arbitrary elementary toposes. For example, they can be used in a topos suitable for infinitesimally time-dependent logic, where the subobject classifier consists of the open sets in 𝕋×\mathbb{T} \times \mathbb{R} quotiented out by agreement near 𝕋×{0}\mathbb{T} \times \{0\}.

He gave a couple of examples, both involving sheaves on 𝕋\mathbb{T}. The first was about continuously time-filtered sets as subobjects of constant sheaves on 𝕋\mathbb{T}, and the second described a correspondence between (not necessarily symmetric) ultrametrics on a set VV and preorders on the constant sheaf VV on 𝕋\mathbb{T}.

Using the tools of linear algebra in intuitionistic logic, Jun explained how to compute homology groups in a general topos-theoretic setting. The main theorem is a sheaf-theoretic generalization of the Zomorodian-Carlsson theorem on decomposition of persistent homology into interval modules.

There’s also a further theorem, for an arbitrary topos that isn’t necessarily a sheaf topos, where some new complications arise: you don’t necessarily have an interval module decomposition in the same simple sense. But it doesn’t look too much more complicated! Jun’s main application is to infinitesimal time.

Finally, Jun showed us some of his work setting up a formally verified implementation in Lean.

Posted by: Tom Leinster on December 8, 2023 2:11 AM | Permalink | Reply to this

Re: Jun Yoshida: On a logical foundation for parametrized homologies

A very uninformed comment, but I wonder if there’s a possibility to bring persistent cohomotopy into the story. It’s viewed here (slide 98) as an enhancement of persistent homology.

Posted by: David Corfield on December 8, 2023 8:41 AM | Permalink | Reply to this

Sergei O. Ivanov: On the path homology of Cayley digraphs and covering digraphs

Sergei Ivanov is here to tell us about path homology (also called GLMY theory). More and more people are working out the connections between path homology and magnitude homology.

Path homology is a homology theory of directed graphs (also called digraphs). The definition of path homology was too fast for me to type up, but there’s a nice summary of it in Section 6 of this paper by Asao, which also began the project of comparing it with magnitude homology.

Here are some properties of path homology, which Sergei explained:

  • There’s a Künneth theorem.

  • It’s homotopy invariant in an appropriate sense.

  • It respects suspension.

  • It generalizes homology of simplicial complexes. Given a simplicial complex KK, we can form the directed graph X(K)X(K) whose vertices are the simplices of KK and whose arrows are embeddings (“barycentric subdivision digraph”). Then the homology of KK is isomorphic to the path homology of X(K)X(K).

  • There’s an accompanying notion of fundamental group, with a Hurewicz isomorphism theorem.

  • It satisfies Eilenberg-Steenrod axioms.

There’s also a path cohomology, and there’s a cup product making it into a graded algebra.

Open questions

  • Is the path cohomology algebra graded commutative? Many computations haven’t settled the question. There’s no diagonal map… which reminds me of Hepworth’s results on the circumstances under which the cup product on magnitude cohomology is graded commutative. (We know that it often isn’t.) E.g. see Example 5.6 and Theorem 5.7 of Hepworth’s paper.

  • Is the path cohomology algebra isomorphic to the cohomology algebra of a space? If it’s not graded commutative, the answer is no. If the answer is yes, that would be an analogue of the magnitude homotopy type.

  • Is the path cohomology of a box product the same as that of the strong product? (Or maybe Sergei said “homology”, not “cohomology”.)

A sequence x 0,,x nx_0, \ldots, x_n of vertices in a directed graph XX is accessible if d(x i1,x i)d(x_{i - 1}, x_i) is finite for all ii. The accessible sequences form a simplicial set N(X)N(X), called the nerve of XX. So, it’s the nerve of the free preordered set on XX. The nerve of XX is also exhaustively filtered by length:

N 1(X)N 2(X)N(X). N^1(X) \subseteq N^2(X) \subseteq \cdots \subseteq N(X).

Idea: Interpret path homology theory in terms of this filtration.

In particular, the magnitude path spectral sequence of XX, considered by Asao and then Hepworth and Roff, can be described through this filtration.

The next part of the talk was about notions of fundamental group and covering digraphs. He didn’t give the definition of covering, but it’s related to covering for simplicial complexes, via nerves. These coverings can also be called 1-coverings, and there’s a more general notion of \ell-covering closely related to the notion of metric fibration that Tsubasa Kamiyama spoke about yesterday, that Yasuhiko Asao recently blogged about here, and that Yasuhiko will also speak about this afternoon.

Proposition There is an equivalence between the categories of \ell-coverings of XX and coverings of the simplicial set N (X)N^\ell(X).

I guess that could be taken as a definition of \ell-covering of a directed graph.

There’s a universal 1-covering of any directed graph, and the fundamental group is isomorphic to the group of deck transformations. (It’s cleanest if we do everything in an unbased way.) Going further, there’s a universal 2-covering of any directed graph, and an associated spectral sequence.

Now for Cayley digraphs. Take a group GG and a set SS of generators (not including 11), giving a Cayley digraph Cay(G,S)Cay(G, S). Homological and homotopical properties of Cay(G,S)Cay(G, S) give information about the relations satisfied by the generators. It turns out that the universal covering of a Cayley digraph is another Cayley digraph — of a different group and generating set.

Example Consider G=(,+)G = (\mathbb{Q}, +) with

S={1/n!:n2}. S = \{1/n! : n \geq 2\}.

Put X=Cay(G,S)X = Cay(G, S). Then the path homology is the exterior algebra of a free abelian group on countably many generators.

Posted by: Tom Leinster on December 8, 2023 2:59 AM | Permalink | Reply to this

Yasuhiko Asao: Classification of metric fibrations

Now for Yasuhiko Asao on the classification of metric fibrations, subject of a recent guest post here.

The definition of metric fibration was given in Tsubasa Kamiyama’s talk, so I won’t repeat it here. The basic property is that if p:EXp: E \to X is a metric fibration, with EE finite, then all the fibres FF are isometric and

Mag(E)=Mag(F)Mag(X). Mag(E) = Mag(F) Mag(X).

Motivation

The following isn’t strictly necessary, but is useful philosophical background.

Define an Fset-category, or seminormed category, to be a small category CC with a real number |f|0|f| \geq 0 for each map ff, such that |1 c|=0|1_c| = 0 for all cc and |gf||g|+|f||g\circ f| \leq |g| + |f|.

Examples

  • Any small category, with |f|=0|f| = 0 for all ff.

  • Any metric space XX, where X(a,b)X(a, b) is a one-element set {f a,b}\{f_{a, b}\} and |f a,b|=d(a,b)|f_{a, b}| = d(a, b), for all points a,ba, b.

So FsetFset-CatCat contains both categories and metric spaces. (You can view an Fset-category as a category enriched over the monoidal category FsetFset of filtered sets, as per Yasuhiko’s post here.)

(Question to self: are the inclusions CatFsetCatMet Cat \hookrightarrow FsetCat \hookleftarrow Met induced by monoidal inclusions SetFset[0,]? Set \hookrightarrow Fset \hookleftarrow [0, \infty]? I should go back to Yasuhiko’s post and see if this question is answered there.)

The notion of Grothendieck fibration can be generalized to Fset-categories, and its restriction to MetMet gives exactly the metric fibrations.

Generally, Grothendieck fibrations over a category CC correspond to lax functors CCatC \to Cat. This suggests a correspondence between metric fibrations over a space XX and “lax functors” XMetX \to Met, whatever they may be… and we’ll see next!

(There was a question here over whether the correct analogy was really with fibrations or prefibrations.)

The main story

A lax functor F:XMetF: X \to Met consists of:

  • a metric space F(x)F(x) for each xXx \in X

  • a map F xx:F(x)F(x)F_{x x'}: F(x) \to F(x') for each x,xx, x'

satisfying

  • F xx 1=F xxF_{x x'}^{-1} = F_{x' x}

  • for all x,x,xXx, x', x'' \in X, d (F xxF xx,F xx)d(x,x)+d(x,x)d(x,x) d_\infty(F_{x' x''} F_{x x'}, F_{x x''}) \leq d(x, x') + d(x', x'') - d(x, x'') where d d_\infty is the sup distance between functions.

The lax functors XMetX \to Met form a category Met XMet_X in the obvious way. On the other hand, the fibrations over XX form a category Fib XFib_X in the obvious way (with commutative triangles).

Theorem Fib XMet XFib_X \simeq Met_X for all metric spaces XX.

Examples

  • F xxF_{x x'} is functorial if and only if the corresponding metric fibration is trivial (i.e. the total space is globally a product of the base and the fibre, and the map is projection). Well: Yasuhiko was sure of “only if” but not 100 percent sure of “if”.

  • There’s an example involving maps K 2×K 3K 3K_2 \times K_3 \to K_3 and K 3,3K 3K_{3, 3} \to K_3, where we have the same base and same fibre for both maps, but the total spaces are different. (Here K nK_n is the complete graph on nn vertices and K m,nK_{m, n} is the complete bipartite graph.)
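
For the record, both total spaces in that second example really do have the same magnitude, as the product formula requires. A quick numerical check (a Python sketch of mine; the prism K_2 × K_3 carries its shortest-path metric, which is the ℓ¹ product metric):

    import numpy as np

    def magnitude(D, t):
        return np.linalg.inv(np.exp(-t * D)).sum()

    # prism K_2 x K_3 with shortest-path metric:
    # d((s,i),(u,j)) = |s - u| + (1 if i != j else 0)
    prism = np.array([[abs(s - u) + (i != j)
                       for u in (0, 1) for j in (0, 1, 2)]
                      for s in (0, 1) for i in (0, 1, 2)])

    # K_{3,3}: distance 1 across the bipartition, 2 within each side
    K33 = np.full((6, 6), 1)
    K33[:3, :3] = 2
    K33[3:, 3:] = 2
    np.fill_diagonal(K33, 0)

    for t in (0.5, 1.0, 2.0):
        print(magnitude(prism, t), magnitude(K33, t))   # equal at every t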

Actions and torsors

We mimic the topological case:

Fib/XPrinBun/X[X,BG]=Hom(π 1X,G), Fib/X \leftrightarrow PrinBun/X \leftrightarrow [X, B G] = Hom(\pi_1 X, G),

where that last equality assumes GG is discrete.

For us, the group GG should be a metric group, i.e. a group with a bi-invariant metric, or equivalently, a conjugation-invariant norm.

Example For a finite metric space YY, we have the group Aut(Y)Aut(Y) with sup-distance.

For a metric group GG, a GG-lax functor F:XMetF: X \to Met is a lax functor such that:

  • F(x)=GF(x) = G for all xXx \in X

  • F xx:GGF_{x x'}: G \to G is left multiplication by an element of GG, which we also denote by F xxF_{x x'}.

Then there’s a notion of morphism between GG-lax functors, so that they form a subcategory GMet XG Met_X of Met XMet_X.

A map π:EX\pi: E \to X is a GG-torsor if:

  • π\pi is a metric fibration with a fibre-preserving right action of GG on EE

  • the map E×GE× XEE \times G \to E\times_X E defined by (e,g)(e,eg)(e, g) \mapsto (e, e g) is an isomorphism over XX.

The GG-torsors over XX form a category GTor XG Tor_X.

Proposition The equivalence of categories Fib XMet XFib_X \simeq Met_X induces a further equivalence GTor XGMet XG Tor_X \simeq G Met_X.

Classification

Write Fib X YFib_X^Y for the subcategory of Fib XFib_X with fibres YY (a finite metric space?). It can be shown that

core(Fib X Y)(Aut(Y))Met X, core(Fib_X^Y) \simeq (Aut(Y))Met_X,

i.e. to GMet XG Met_X where G=Aut(Y)G = Aut(Y).

In the other direction,

GMet XHom(π 1 metX,G). G Met_X \simeq Hom(\pi_1^{met} X, G).

Here π 1 metX\pi_1^{met} X is the fundamental metric group. Choosing a basepoint xx, it consists of the set of tuples (x,x 1,,x n,x)(x, x_1, \ldots, x_n, x) quotiented out by a certain equivalence relation defined in Yasuhiko’s post, which also contains the definition of the metric on π 1 metX\pi_1^{met} X.

Examples

  • Take the cycle graph C 5C_5, with vertices labelled cyclically as 1, 2, 3, 4, 5. The loop 123451 is identified with 13451, and then with 1351. And then d((1351),(1))=d((1351),(131))=d(3,5)+d(5,1)d(3,1)=2+12=1. d((1351), (1)) = d((1351), (131)) = d(3, 5) + d(5, 1) - d(3, 1) = 2 + 1 - 2 = 1. So 123451 has norm 1, and the fundamental metric group is \mathbb{Z}, with |1|=1|1| = 1.

  • For similar reasons, every odd cycle graph has fundamental metric group \mathbb{Z}. Hence on any odd cycle graph, there are nontrivial metric fibrations.

  • On the other hand, over any even cycle graph, every metric fibration is trivial and the fundamental metric group of the graph is trivial.

  • For a geodesic circle and \mathbb{R}, the fundamental metric group is again trivial, so again, every metric fibration over these bases is trivial.

  • But for the circle with the subspace metric from 2\mathbb{R}^2, the fundamental metric group is nontrivial.

Conjecture There is a homotopy relation \sim such that π 1 met\pi_1^{met} is invariant under \sim and any 1-dimensional space in the sense of Tsubasa Kamiyama’s talk is equivalent to a wedge of circles.

Posted by: Tom Leinster on December 8, 2023 6:06 AM | Permalink | Reply to this

Re: Yasuhiko Asao: Classification of metric fibrations

It was clear from the talks that lots of people have read and appreciated Yasuhiko’s paper Magnitude homology and path homology (as have I). This establishes a relationship between magnitude homology of directed graphs and the path homology of Alexander Grigor’yan, Yong Lin, Yuri Muranov and Shing-Tung Yau. More specifically, there is a spectral sequence in which the first page is magnitude homology and the first row of the second page is path homology.

But what I hadn’t understood is that Yasuhiko’s paper is so popular that even manga characters read it!

Pages from manga book entitled 18=80

Detail from page of manga book entitled 18=80, showing a character reading Asao's paper

This is from volume 1 of a book series called 18 = 80 by Iwabuchi Ryuuko, about an 80-year-old woman who becomes 18 again.

Posted by: Tom Leinster on December 10, 2023 10:24 AM | Permalink | Reply to this

Re: Yasuhiko Asao: Classification of metric fibrations

Ryuuko Iwabuchi is my friend, and she asked me to supervise the mathematical settings of that manga. She left it to me to fill in the back of the book on that page, so, being a little playful, I used “magnitude homology and path homology”.

Posted by: Yasuhiko Asao on December 11, 2023 6:47 AM | Permalink | Reply to this

Re: Yasuhiko Asao: Classification of metric fibrations

:-) Yasuhiko very kindly gave me a copy of his friend’s book during the conference week, and explained how his paper title came to be in it. I was tickled pink.

Posted by: Tom Leinster on December 11, 2023 10:46 AM | Permalink | Reply to this

Kiyonori Gomi: A direct proof for the positive definiteness of four point metric spaces

Kiyonori Gomi has done lots of important work on magnitude homology, listed at the magnitude bibliography. I referred to his work in both of my talks.

Today he’s speaking on this new paper, in which he proves directly that metric spaces with four points are always positive definite.

A finite metric space XX is positive definite if its similarity matrix (e d(x,y)) x,yX(e^{-d(x, y)})_{x, y \in X} is positive definite.

  • It’s easy enough to prove directly that a metric space with 3\leq 3 points is positive definite, and that’s done in my paper “The magnitude of metric spaces”.

  • Mark Meckes, in Theorem 3.6 of his paper “Positive definite metric spaces”, gave an indirect proof that 4-point spaces are also positive definite. It relied on a result about embedding into the 2-dimensional normed space 2\ell_\infty^2.

  • The complete bipartite graph K 2,3K_{2,3}, rescaled so that edges each have length t<log2t \lt \log\sqrt{2}, is not positive definite (see the numerical sketch after this list): hence spaces with 5 points need not be positive definite.

  • From this, one can show that for every n5n \geq 5, there’s some nn-point space that is not positive definite.
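
Here is the numerical sketch promised above (Python, mine): build the distance matrix of K 2,3K_{2,3}, scale it by tt, and watch the smallest eigenvalue of the similarity matrix change sign at t=log2t = \log\sqrt{2}.

    import numpy as np

    # K_{2,3} with the shortest-path metric:
    # vertices 0, 1 on one side, 2, 3, 4 on the other
    D = np.array([[0, 2, 1, 1, 1],
                  [2, 0, 1, 1, 1],
                  [1, 1, 0, 2, 2],
                  [1, 1, 2, 0, 2],
                  [1, 1, 2, 2, 0]])

    for t in (0.2, 0.3, 0.4):   # log(sqrt(2)) is about 0.3466
        print(t, np.linalg.eigvalsh(np.exp(-t * D)).min())
    # the smallest eigenvalue is negative for t < log(sqrt(2)), positive beyond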

The only remaining challenge, then, is to find a direct proof for 4-points spaces. That’s what this talk is about. An unexpected byproduct of the new proof is new insight into inclusion-exclusion for magnitude of metric spaces.

The proof uses Sylvester’s criterion: a symmetric matrix is positive definite iff each of its leading principal submatrices has positive determinant. Since all spaces with 3\leq 3 points are positive definite, we only have to prove that the similarity matrix ZZ itself has positive determinant.

That’s not the main insight! It’s the starting point. I remember trying to prove the 4-point case long ago, and failing, and I knew Sylvester’s criterion could be applied.

What Gomi has done is much more ingenious than that. You should really read his paper if you want to see the argument. My limitations as a live blogger are very apparent to me here. I can’t compete with the paper. But during the talk, he revealed that he’d come up with the argument using Mathematica to explore the space of factorizations of the algebraic expressions involved, inspecting them to find out which ones would help with proving positivity.

The inclusion-exclusion principle

We know that magnitude of finite metric spaces satisfies the inclusion-exclusion law

|AB|=|A|+|B||AB| |A \cup B| = |A| + |B| - |A \cap B|

under some very restrictive conditions, which have a lot in common with the definition of metric fibration (see previous talks). But the proof suggests other conditions under which it holds. First of all, it’s about taking the union of two 3-point spaces along a 2-point space, to get a 4-point space.

But eventually, Gomi found a different sufficient condition, as follows. Let xx and yy be distinct points of a finite metric space XX. Then the inclusion-exclusion law

|X|=|X{x}|+|X{y}||X{x,y}| |X| = |X \setminus \{x\}| + |X \setminus \{y\}| - |X \setminus \{x, y\}|

holds as long as:

  • the similarity matrices of X{x}X\setminus\{x\}, X{y}X\setminus\{y\} and X{x,y}X \setminus\{x,y\} are all invertible

  • a quite complicated condition on d(x,y)d(x, y) that’s in the paper (“Z 12=b 0Z_{12} = b_0”, where b 0b_0 is defined in Definition 3.4), but which I haven’t understood at a conceptual level.

Better still (Theorem 3.10), he showed that if inclusion-exclusion holds then this mysterious condition must hold! An equality between similarities (or equivalently, distances) is something very strict, so this confirms the intuition that inclusion-exclusion rarely holds.
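
Consistently with that, a randomly chosen space should fail inclusion-exclusion, and numerically it does. A tiny Python check (mine; the four points are random):

    import numpy as np

    def magnitude(D):
        return np.linalg.inv(np.exp(-D)).sum()

    rng = np.random.default_rng(2)
    P = rng.random((4, 2))   # four random points in the plane
    D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)

    sub = lambda idx: D[np.ix_(idx, idx)]
    lhs = magnitude(D)                                   # |X|
    rhs = (magnitude(sub([0, 1, 2])) + magnitude(sub([0, 1, 3]))
           - magnitude(sub([0, 1])))                     # x = 3, y = 2
    print(lhs, rhs)   # generically different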

(An aside: work of Gimperlein, Goffeng and Louca shows that asymptotic inclusion-exclusion holds under many circumstances. “Asymptotic” means scaling all the spaces up by a scale factor tending to \infty. This is mentioned in my colloquium slides.)

Now Gomi is going back to the previously known conditions for inclusion-exclusion to hold. These go back to a couple of old papers of mine: originally I stated the result for metric spaces but didn’t get the best condition; then I stated it for graphs and did get it right, but said nothing about more general metric spaces. I’m happy that Gomi has now put in print the “right” condition for general metric spaces; that’s Theorem 3.2 in his paper.

And to wrap up, he’s summarizing the various implications between the known sufficient conditions for inclusion-exclusion to hold. He’s also talking about sufficient conditions for Mayer-Vietoris for magnitude homology, which is the categorification of inclusion-exclusion.

Posted by: Tom Leinster on December 8, 2023 7:19 AM | Permalink | Reply to this

Richard Hepworth: Bigraded path homology and reachability homology

The very last talk of this very wonderful week is by Richard Hepworth. I’ll quote the abstract:

Important recent work of Asao conclusively demonstrated that magnitude homology and path homology of graphs, both previously unrelated, are in fact just two aspects of a much larger object, the magnitude-path spectral sequence or MPSS. Magnitude homology is precisely the $E^1$ page of this sequence, while path homology is an axis of the $E^2$ page. In this talk I will present joint work in progress with Emily Roff. We show that the $E^2$ page of the MPSS satisfies Künneth, excision and Mayer-Vietoris theorems. These, together with the homotopy-invariance property proved by Asao, show that the entire $E^2$ term should be regarded as a homology theory, which we call bigraded path homology. Second, we show that bigraded path homology is a strictly stronger invariant than path homology, by demonstrating that it can completely distinguish the directed cycles, none of which can be told apart under the original path homology.

The first joint Hepworth-Roff paper is here, on reachability homology. But there’s more to come!

For a directed graph $G$, define the reachability chains $RC_\ast(G)$ over a ring $R$ as follows: $RC_k(G)$ is generated by tuples $(x_0, \ldots, x_k)$ with $x_0 \neq \cdots \neq x_k$ and $d(x_{i-1}, x_i) \lt \infty$ for each $i$. The differential is given in the usual kind of way — see notes from previous talks! So reachability homology is the homology of the preorder associated with $G$.

Filter this complex as follows: $F_\ell RC_k(G)$ is spanned by the $(x_0, \ldots, x_k)$ with $\sum d(x_{i-1}, x_i) \leq \ell$.
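
To make the definitions concrete, here’s a toy enumeration (again a sketch of mine, not the speaker’s code; it takes $d$ to be shortest directed path length, infinite when there is no directed path):

```python
from itertools import product

import networkx as nx

def reachability_generators(G, k):
    """Generators of RC_k(G): tuples (x_0, ..., x_k) of vertices with
    consecutive entries distinct and each x_i reachable from x_{i-1}."""
    d = dict(nx.all_pairs_shortest_path_length(G))  # directed graph metric
    gens = [tup for tup in product(G.nodes, repeat=k + 1)
            if all(tup[i] != tup[i + 1] and tup[i + 1] in d[tup[i]]
                   for i in range(k))]
    return gens, d

def filtration_degree(tup, d):
    """Total length sum_i d(x_{i-1}, x_i); F_l RC_k(G) is spanned by
    the tuples of degree at most l."""
    return sum(d[tup[i]][tup[i + 1]] for i in range(len(tup) - 1))

# Example: the directed 3-cycle.
C3 = nx.DiGraph([(0, 1), (1, 2), (2, 0)])
gens, d = reachability_generators(C3, 2)
for tup in gens:
    print(tup, filtration_degree(tup, d))
```

On the directed 3-cycle every vertex reaches every other, so $RC_2$ has $3 \cdot 2 \cdot 2 = 12$ generators, and the filtration degree sorts them by total path length; this is exactly the extra grading that magnitude chains see.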

The main character in this drama is the magnitude-path spectral sequence (MPSS) $(E^r_{\ast, \ast}(G), d^r)$. We have

$$E^0_{i, j}(G) = \frac{F_i RC_{i + j}(G)}{F_{i-1} RC_{i+j}(G)} = MC_{i + j, i}(G),$$

where $MC$ means magnitude chains. Then

$$E^1_{i, j}(G) = MH_{i + j, i}(G).$$

In other words:

The $E^1$ page of the magnitude-path spectral sequence is exactly magnitude homology.

I have a looming dread that Richard is about to start drawing spectral sequence diagrams that I won’t be able to reproduce here… yup, he’s doing it. Sorry!

The existence of the spectral sequence goes back to the original Hepworth-Willerton paper on magnitude homology, but it wasn’t investigated then.

Richard has now devoted two whole blackboards to a table that he’s apparently going to spend the rest of the talk building up. He just asked me whether I was keeping up… I’m going to have to resort to photographing it later.

The columns of the table are the pages of the spectral sequence (well, some of them), plus several other columns for other things… you’d really need a video to appreciate Richard’s performance.

One important point from the table:

The first row of the $E^2$ page of the magnitude-path spectral sequence is exactly path homology.

The terminology on-axis in the table means “in position $(\ast, 0)$”. That’s where you find path homology on the $E^2$ page, as Asao showed. But he also showed that the whole $E^2$ page is invariant under 1-homotopy, suggesting that the whole page is interesting and worthy of investigation. Bigraded path homology is defined to be exactly this: the whole $E^2$ page.

Some abbreviations in the table: “les” and “ses” mean long and short exact sequence.

All I can give you is photos! Here’s the overall scene:

Richard built up the table in chronological order — but what he called his “personal chronology”. Here’s what the boards looked like at the moment when he’d just reached the present. Click any of these photos for a larger, higher-resolution version:

(The boards were kept extremely clean and the chalk was very good, both of which are great for photo quality.)

Here’s what they looked like when he’d finished the table:

Notes

  • There’s a hidden “excision” row.

  • Mayer-Vietoris for $E^2$ combines:

    • excision for E 1E^1
    • a splitting built from no-entry/projection
    • a LES built from a SES by taking homology.
  • Excision and Mayer-Vietoris work for pushouts.

Rhetorical questions

  • What do we gain from all the extensions and expansions explained in the table?

  • What do we gain from moving off the axis?

Example (by way of an answer) Let $C_m$ be the directed cycle of length $m \geq 3$. Then $PH_\ast(C_m) \cong H_\ast(S^1)$, so $C_m$ is not 1-homotopy equivalent to a point. But it is $(m - 1)$-homotopy equivalent to a point. So $E^r_{\ast\ast}$ is trivial for $r \geq m$ and $C_m$ is long homotopy equivalent to a point.

The spectral sequence is nontrivial for as long as it can be, i.e. up to $E^{m - 1}$, and then it’s trivial from $E^m$ onwards. (Again, I’m not up to reproducing the full spectral sequence diagram.)

In any case, the bigraded path homology $PH_{\ast,\ast}(C_m)$ detects $m$, in contrast to the fact that ordinary path homology can’t tell any of the graphs $C_m$ apart.

Example Let $m \geq n \geq 1$ with $m \geq 3$. The graph $C_{m, n}$ consists of a left-to-right path of length $m$ and a left-to-right path of length $n$, with the two leftmost vertices identified and the two rightmost vertices identified. Then $E_{\ast,\ast}(C_{m,n})$ detects $m$ but not $n$ — i.e. the length of the long left-to-right path, but not the short one. Richard explained why.

What’s left to do?

  • Optimize and complete the table! There’s a big gap in the “$r = r$” column.

  • Take other results about path homology and extend them to bigraded path homology.

  • Look for analogues of features of other homology theories (e.g. Lefschetz fixed point theorems).

  • Investigate what happens on each page, on and off the axis.

  • This is all for graphs. But what about metric spaces? What does a spectral sequence look like when you’re filtering by the real numbers?

Posted by: Tom Leinster on December 8, 2023 9:03 AM | Permalink | Reply to this

Re: Magnitude 2023

Bravo, Tom! These posts are amazing, and will be much more useful for me to refer to than my own notes.

Posted by: Mark Meckes on December 8, 2023 9:09 AM | Permalink | Reply to this

Re: Magnitude 2023

Seconded. Thanks so much for doing this!!

Posted by: Emily Roff on December 9, 2023 12:36 AM | Permalink | Reply to this
