## October 20, 2021

### What is the Uniform Distribution?

#### Posted by Tom Leinster

Today I gave the Statistics and Data Science seminar at Queen Mary University of London, at the kind invitation of Nina Otter. There I explained an idea that arose in work with Emily Roff. It’s an answer to this question:

What is the “canonical” or “uniform” probability distribution on a metric space?

You can see my slides here, and I’ll give a lightning summary of the ideas now.

Let $X$ be a compact metric space.

• Step 1   The uniform probability distribution (or more formally, probability measure) on $X$ should be one that’s highly spread out. So, we need to be able to quantify the “spread” of a probability distribution on a metric space.

There are many such measures of spread — a whole one-parameter family of them, in fact. They’re the diversities $(D_q)_{q \in \mathbb{R}^+}$. Or if you prefer, you can work with the entropies $\log D_q$; it makes little difference.
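To make this concrete in the finite case: these are the similarity-sensitive diversities of Leinster and Cobbold, which for a probability distribution $p$ on a finite metric space with similarity matrix $Z_{ij} = e^{-d(i,j)}$ take the explicit form $D_q(p) = \bigl(\sum_{i : p_i \gt 0} p_i (Z p)_i^{q-1}\bigr)^{1/(1-q)}$ for $q \neq 1$. Here’s a sketch in numpy; the three-point space and the two distributions are my own arbitrary choices, not from the talk.

```python
import numpy as np

def diversity(p, D, q, t=1.0):
    """Similarity-sensitive diversity D_q of a distribution p on a finite
    metric space with distance matrix D, using similarities exp(-t d).
    (The scale parameter t is included for convenience; t = 1 gives D_q itself.)"""
    Zp = np.exp(-t * D) @ p
    s = p > 0
    if q == 1:  # limiting case: exponentiate the similarity-sensitive entropy
        return float(np.exp(-(p[s] * np.log(Zp[s])).sum()))
    return float((p[s] * Zp[s] ** (q - 1)).sum() ** (1.0 / (1.0 - q)))

# Hypothetical 3-point metric space: points at 0, 1, 3 on the real line.
D = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 2.0],
              [3.0, 2.0, 0.0]])

delta = np.array([1.0, 0.0, 0.0])   # everything concentrated at one point
spread = np.ones(3) / 3             # evenly spread out
for q in [0.0, 1.0, 2.0]:
    print(q, diversity(delta, D, q), diversity(spread, D, q))
# The spread-out distribution scores higher for every q.
```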

• Step 2   We now appear to have a problem. Different values of $q$ give different diversity measures $D_q$, so it seems way too much to hope that a single probability measure on $X$ maximizes $D_q$ for uncountably many values of $q$ at once.

But miraculously, there is! Call it the maximizing measure on $X$.

• Step 3   Statisticians are very familiar with the idea of a maximum entropy distribution as being somehow canonical or preferable. But it’s not what we should call the uniform measure, as it’s not scale-invariant. For example, converting our metric from centimetres to inches would change the maximizing measure, and that’s not good.

The idea now is to take the large-scale limit. In other words, for each scale factor $t \gt 0$, write $\mathbb{P}_t$ for the maximizing measure on the scaled space $t X$, and define the uniform measure on $X$ to be $\lim_{t \to \infty} \mathbb{P}_t$. This is scale-invariant.
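For a finite metric space, all of this is computable. When the similarity matrix $Z = (e^{-t d(i,j)})$ has a positive weighting $w$ (meaning $Z w = \mathbf{1}$ with $w$ entrywise positive), the normalized weighting is the maximizing measure. Here is a rough numerical sketch of Steps 2 and 3 in that case; the three-point space is an arbitrary choice of mine, and numpy is assumed.

```python
import numpy as np

# Hypothetical 3-point metric space: points at 0, 1, 3 on the real line.
D = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 2.0],
              [3.0, 2.0, 0.0]])

def maximizing_measure(D, t):
    """Maximizing measure on the scaled space tX, computed as the
    normalized weighting of Z = exp(-t d). Valid when the weighting
    is entrywise positive, which holds here at the scales used."""
    Z = np.exp(-t * D)
    w = np.linalg.solve(Z, np.ones(len(D)))
    assert (w > 0).all(), "weighting not positive at this scale"
    return w / w.sum()

for t in [1.0, 5.0, 50.0]:
    print(t, maximizing_measure(D, t))
# At small t the measure is visibly non-uniform; as t grows it approaches
# the uniform distribution (1/3, 1/3, 1/3) -- the large-scale limit that
# the definition takes.
```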

• Step 4   Let’s check this gives sensible results. We already know what “uniform distribution” should mean when $X$ is finite, or homogeneous (it should mean Haar measure), or a subset of Euclidean space (it should mean normalized Lebesgue measure). Does our general definition of uniform measure give the right thing in these cases? Yes, it does!

There’s also a connection between uniform measures and the Jeffreys prior, an “objective” or “noninformative” prior derived from Fisher information.

You can find all this and more in the slides.

Posted at October 20, 2021 4:06 PM UTC


### Re: What is the Uniform Distribution?

Very nice! I’d be very interested to hear if you get any good suggested answers to your questions on the last slide, especially

What properties would you want something with that name [uniform measure] to have?

Posted by: Mark Meckes on October 20, 2021 5:27 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Thanks! I didn’t get any answers to the question you quote. But for the question about possible examples to investigate, Nina made the broad suggestion of looking at the uniform measure on spaces coming from networks. I may be slightly mangling what she said, as I’m not at all familiar with the network world, but apparently there are interesting infinite spaces there. Hopefully I’ll be able to find out about that properly sometime — or maybe someone here can help out.

Posted by: Tom Leinster on October 20, 2021 9:11 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

One thing I definitely remember from my time in the network world was people looking at steady-state distributions for diffusion processes. It would be interesting to see how those compare with “uniform measures” in the sense defined here.

Posted by: Blake Stacey on October 20, 2021 11:53 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

OK, thanks. So, is there a compact metric space here? (Maybe it’s the space on which the steady-state distribution is defined.) If so, can you explain what it is? That’s the context for our construction: given a compact metric space, we define the uniform probability measure on it.

Posted by: Tom Leinster on October 21, 2021 11:30 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

The general idea (if I can dust off my brain cells and remember) is to put a particle on a vertex of the graph and have it execute a random walk. For example, the walker might pick an edge at random out of those connected to the current vertex and traverse it. If the choice of edge is made without bias, the probability of stepping from vertex $i$ to vertex $j$ is $A_{i j}/k_i$, where $A$ is the adjacency matrix and $k_i$ is the degree of vertex $i$. Thus, we have a discrete-time Markov process. If the graph is connected and not bipartite, this Markov process will have a steady-state distribution with $p_i \propto k_i$, and the inverse of the spectral gap of the transition matrix (the smallest nonzero eigenvalue of the walk’s Laplacian) will give the characteristic timescale for approaching that steady-state distribution.

There are, of course, variations on the theme: picking the target node with a bias, adding weights and/or directionality to the edges, etc. In general, though, the probability distribution under consideration will be defined on the set of vertices. As for regarding the graph as a metric space … probably one would use the shortest-path distance between vertices, as in calculating the magnitude of a graph.
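A minimal simulation of such a walk, with an arbitrarily chosen small graph and numpy assumed:

```python
import numpy as np

# Hypothetical undirected graph on 4 vertices, given by its adjacency matrix.
# It is connected and contains a triangle, so it is not bipartite.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

k = A.sum(axis=1)          # vertex degrees k_i
P = A / k[:, None]         # transition matrix: P[i, j] = A[i, j] / k_i

# Iterate the walk from an arbitrary starting vertex; the distribution
# converges to the steady state.
p = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(200):
    p = p @ P

print(p)              # steady-state distribution
print(k / k.sum())    # degree-proportional distribution p_i proportional to k_i
```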

Posted by: Blake Stacey on November 8, 2021 3:03 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Very nice ideas!

Here are two more properties that one may want from a “uniform” distribution on a metric space $X$: first a “uniformity” requirement:

• It assigns the same amount of measure to isometric (measurable or open) subsets of $X$.

Also, a “positivity” requirement, which admittedly seems pretty strong:

• It assigns nonzero measure to all nonempty open subsets, in particular, to all open balls.

Here is an example where the latter property is difficult to satisfy: imagine a space given by the union of a square and a line segment, connected at a point. Or more generally, two Riemannian manifolds of different dimensions glued together. (It would be interesting to see what your definition of uniform measure gives in this case, by the way!)

Posted by: Paolo Perrone on October 21, 2021 9:15 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Thanks! That first property you mention is interesting, and I don’t immediately see whether or not it holds for our definition of the uniform measure.

The second one definitely doesn’t, because of examples like the one you mention. The uniform measure on a compact subset $X$ of $\mathbb{R}^n$, of nonzero Lebesgue measure, is Lebesgue measure restricted to $X$ and normalized to a probability measure. So if $X$ is something like a lollipop shape in $\mathbb{R}^2$, the uniform measure gives the stick of the lollipop measure zero. Or more formally, as we write in Remark 9.10 of our paper:

the support of the uniform measure […] need not be $X$; that is, some nonempty open sets may have measure zero. Any nontrivial union of an $n$-dimensional set with a lower-dimensional set gives an example.

This fits with the intuition that if you pick a point of the lollipop uniformly at random, the probability it’s on the stick should be zero (assuming, of course, an infinitely thin stick).

I know there are contexts where it’s useful to have a “reference measure” that gives nonzero measure to nonempty open sets. On the other hand, it might feel quite non-uniform if, say, some one-dimensional piece of a space was assigned higher probability than a two-dimensional piece of the space, which is what would happen if the stick of the lollipop didn’t have measure zero.

Posted by: Tom Leinster on October 21, 2021 11:00 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

It’s certainly true that if $A$ and $A'$ are isometric measurable subsets of $X$ and there’s a self-isometry of $X$ mapping $A$ to $A'$ then the uniform measure gives them the same probability. But this is much weaker than the first property you mention.

Posted by: Tom Leinster on October 21, 2021 11:38 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

On further thought, Paolo’s first requirement —

• It assigns the same amount of measure to isometric (measurable or open) subsets of $X$

— is unfulfillable, by our or any other way of defining uniform probability measure. At least, it’s unfulfillable if we take the “measurable” option. For “open”, I don’t know.

Here’s why. If $X$ is countably infinite, there is no probability measure on $X$ that satisfies Paolo’s requirement. For any two singleton subsets are isometric, so all singletons have to be assigned the same probability; but then expressing $X$ as a countable union of singletons and using countable additivity of probabilities gives a contradiction.

So that explains why this requirement can’t possibly be satisfied. But maybe it’s enlightening to look at an example. In Emily’s and my paper (Section 10, question 3), we mention that

$\{1, 1/2, 1/3, \ldots, 0 \},$

metrized as a subspace of $\mathbb{R}$, has uniform measure $\delta_0$. Is it reasonable that $\{0\}$ and $\{1\}$, despite being isometric, are assigned different probabilities? It seems to me that it is, because although they themselves are isometric, their immediate neighbourhoods are not.

And this leads naturally to the alternative form of Paolo’s question, where we only ask about open subsets.

It’s a good question!

Posted by: Tom Leinster on October 21, 2021 7:14 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Posted by: Paolo Perrone on October 22, 2021 2:52 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

OK, now I have a proof that if $X$ is a compact metric space and $A$ and $B$ are isometric clopen subsets of $X$, then the uniform measure on $X$ (assuming it exists) gives the same probability to $A$ and $B$.

Clopen sets are obviously much more special than open sets, but baby steps!

The proof goes in two stages:

1. It’s easy enough to work out what the maximizing distribution on a coproduct of spaces is, in terms of the maximizing distribution on the individual components. Passing to the large-scale limit tells us about the uniform measure on a coproduct in terms of the uniform measures on the individual components. And from here, we get the case of the result where $A$ and $B$ are disjoint.

2. To extend to the general case, we can use the same two-copies-of-$X$ trick as here (though in fact, it was in the present context that I first thought of this argument).

Although the clopenness hypothesis is extremely restrictive, there is a useful corollary: in the uniform measure on a compact metric space, all isolated points are assigned the same probability. For example, in the space $\{1, 1/2, 1/3, \ldots, 0\}$ that I mentioned before, every point apart from $0$ is isolated, and there are infinitely many of them, so the uniform measure can only be $\delta_0$.

Posted by: Tom Leinster on October 23, 2021 1:39 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Re the statement in the first paragraph: oops. The proof I thought I had is wrong. I don’t know whether the statement is correct.

Posted by: Tom Leinster on October 31, 2021 11:46 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Here’s another property that seems like it would be nice and seems similar in spirit to Paolo’s:

If $\mu$ is the uniform measure on $X$, and $Y \subseteq X$ is open and satisfies $\mu(Y) \gt 0$, then the uniform measure on $Y$ is the normalization of $\mu$ restricted to $Y$.

Tom’s example in this comment above shows that this fails without the openness assumption (let $Y = \{0,1\}$), but with that assumption, the example satisfies this property.

Posted by: Mark Meckes on October 22, 2021 5:44 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

I like this. Moreover, your property implies mine in the case of disjoint subsets: let $Y$ and $Z$ be disjoint open subsets of $X$ of positive measure, and suppose they are isometric. Then $Y \cup Z$ is also an open subset of $X$, and so the measure $\mu$ restricts to the uniform measure on $Y\cup Z$ after normalization. Now the isometry $Y \to Z$, together with its inverse, induces a self-isometry of $Y\cup Z$, and so $\mu(Y)=\mu(Z)$.

(Does anyone see a way to make this work if $Y$ and $Z$ are not disjoint?)

Posted by: Paolo Perrone on October 22, 2021 6:08 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

I think we can drop disjointness (at least, if we ignore the problem mentioned in my comment just now).

For consider the coproduct $X + X$, which is two disjoint copies of $X$ at distance $\infty$ from each other. It’s not hard to show that $X + X$ has uniform measure $\tfrac{1}{2} \mu \oplus \tfrac{1}{2}\mu$, in what I hope is obvious notation.

Let $A_1$ denote the first copy of $A$ in $X + X$, and $B_2$ the second copy of $B$. Then $A_1$ and $B_2$ are disjoint open subsets of $X + X$ that are isometric. So by your argument, if Mark’s principle holds then $\tfrac{1}{2} \mu \oplus \tfrac{1}{2}\mu$ gives the same probability to $A_1$ and $B_2$. But this just says that $\mu(A)/2 = \mu(B)/2$. Hence $\mu(A) = \mu(B)$.
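In the finite case, the claim about $X + X$ can be checked numerically: the similarities $e^{-t \cdot \infty}$ between the two copies vanish, so the similarity matrix of $X + X$ is block diagonal, and its weighting is two copies of the weighting of $X$. A sketch, with a three-point space and a scale chosen arbitrarily, and assuming the weightings involved are positive (so that normalized weightings are the maximizing measures):

```python
import numpy as np

# Hypothetical 3-point metric space: points at 0, 1, 3 on the real line.
D = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 2.0],
              [3.0, 2.0, 0.0]])
t = 2.0

Z = np.exp(-t * D)
w = np.linalg.solve(Z, np.ones(3))   # weighting of X
mu = w / w.sum()                     # maximizing measure on X

# In X + X the two copies are at distance infinity, so the cross-block
# similarities e^{-t * inf} are 0 and Z is block diagonal.
Z2 = np.block([[Z, np.zeros((3, 3))],
               [np.zeros((3, 3)), Z]])
w2 = np.linalg.solve(Z2, np.ones(6))
mu2 = w2 / w2.sum()

print(mu2)   # two half-size copies of mu: (mu/2, mu/2)
```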

Posted by: Tom Leinster on October 23, 2021 1:26 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

I like Mark’s property too, but there’s a difficulty: it talks about uniform measure on a non-compact metric space $Y$, whereas we’ve only defined uniform measure for (some) compact metric spaces.

To briefly recap the definition: one first defines what it means for a probability measure on a compact metric space $A$ to be “maximizing” (i.e. diversity-maximizing).

To define the uniform measure on $A$, we assume that for all sufficiently large $t \gt 0$, the scaled space $t A$ has a unique maximizing measure $\mu_t$, and we also assume that $\mu_t$ has a limit (in the weak${}^\ast$ topology) as $t \to \infty$. That limit is the uniform measure.

I confess, I don’t have a clear idea of exactly where in all this compactness is needed. It was always just a background assumption for me. Maybe it’s possible to get away without it.

Mark’s property does hold when $Y$ is a clopen subset, for what it’s worth.

Posted by: Tom Leinster on October 23, 2021 1:21 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

I haven’t thought about this carefully at all (and was being knowingly sloppy in my comment above), but maybe this can be dealt with here by requiring $Y$ to be the closure of an open set (which, since $X$ itself is assumed to be compact, will be compact).

Posted by: Mark Meckes on October 23, 2021 1:56 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Yes, that sounds good. And it reminds me that being the closure of an open set (or equivalently, the closure of your interior) was also a hypothesis in Heiko and Magnus’s first paper on magnitude.

Alternatively, it occurs to me that if we were to try to extend the definition of uniform measure to some class of metric spaces including the open subspaces of compact spaces, a natural such class might be the totally bounded spaces. If I’m not mistaken, these are always locally compact.

Incidentally, I wonder about the continuity properties of uniform measure. If two compact metric spaces are nearby in the Gromov–Hausdorff metric, are their uniform measures similar? This question needs making precise, but I feel like your work on continuity of maximum diversity must be the starting point to an answer.

Posted by: Tom Leinster on October 23, 2021 10:15 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

One way someone could try to define a uniform measure on a compact metric space $X$ is by analogy with Haar measure: for any closed subset $K$, let $[\varepsilon : K]$ be the minimum number of $\varepsilon$-balls needed to cover $K$, define $\mu(K)$ as $\lim_{\varepsilon \to 0} [\varepsilon : K] / [\varepsilon : X]$, and extend $\mu$ to a probability measure.

This doesn’t work. If we let $C \subseteq \mathbb{R}$ be the Cantor set and take $X = C \cup (2+2C)$, then the ratio $[\varepsilon : C] / [\varepsilon : X]$ oscillates infinitely often between $1/2$ and $1/3$ as $\varepsilon \to 0$, so the limit doesn’t exist.
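The oscillation can be seen numerically. Here’s a rough sketch: approximate $C$ by its depth-$m$ intervals and count $\varepsilon$-balls by a greedy left-to-right sweep, which is optimal for covering subsets of $\mathbb{R}$ by intervals of a fixed length. The function names and the depth are my own choices.

```python
import numpy as np

def cantor_intervals(depth):
    """Depth-m approximation of the middle-thirds Cantor set C,
    as a sorted list of disjoint intervals."""
    ivs = [(0.0, 1.0)]
    for _ in range(depth):
        ivs = [iv for a, b in ivs
               for iv in [(a, a + (b - a) / 3), (b - (b - a) / 3, b)]]
    return ivs

def cover_count(intervals, eps):
    """Greedy count of eps-balls (intervals of length 2*eps) covering a
    union of sorted disjoint intervals; greedy is optimal in one dimension."""
    count, cover_end = 0, -np.inf
    for a, b in intervals:
        start = max(a, cover_end)
        while start < b:
            count += 1
            cover_end = start + 2 * eps
            start = cover_end
    return count

C = cantor_intervals(10)
X = C + [(2 + 2 * a, 2 + 2 * b) for a, b in C]   # X = C u (2 + 2C)

ratios = [cover_count(C, eps) / cover_count(X, eps)
          for eps in np.geomspace(1e-3, 1e-1, 60)]
print(min(ratios), max(ratios))   # swings between about 1/3 and 1/2
```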

When it does work, does this coincide with your notion of uniform measure, or can it produce something different?

Posted by: Jem Lord on October 23, 2021 3:15 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Interesting question. I don’t know.

What you describe sounds very much like Hausdorff measure, which Emily and I say a little bit about in Section 10 of our paper. I don’t know much about the relationship between uniform and Hausdorff measure, but I suspect it’s complicated, since uniform measure is closely related to Minkowski dimension rather than Hausdorff dimension.

Posted by: Tom Leinster on October 24, 2021 12:19 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Maybe we’re beginning to get somewhere on properties of the uniform measure!

Here’s a list of conjectures mentioned so far. To simplify things a bit, I’ll silently assume that every space involved has a uniform measure, and I’ll denote the uniform measure on $X$ by $\mu_X$.

• Continuity  Let $X$ be a compact metric space. Then the function $\{\text{nonempty closed subsets of }\ X\} \to \{\text{probability measures on }\ X\}$ defined by $A \mapsto \mu_A$ is continuous with respect to the Hausdorff metric on the domain and the weak${}^\ast$-topology on the codomain.

• Restriction  Let $X$ be a compact metric space and $A$ a nonempty subset of $X$ that is the closure of an open set. Then $\mu_X|_A = \mu_X(A) \cdot \mu_A,$ where the left-hand side is the restriction of $\mu_X$ to $A$.

(Incidentally, is there a good one-word name for a subset of a topological space that’s a closure of an open set, or equivalently, the closure of its interior? People sometimes call such a set a “domain”, but since that has at least two other meanings, I’d prefer another name.)

• Invariance for closures of open sets  Let $X$ be a compact metric space and let $A$ and $B$ be nonempty subsets of $X$ that are both closures of open sets. If $A$ and $B$ are isometric then $\mu_X(A) = \mu_X(B)$.

• Invariance for open sets  Similarly.

Now some commentary. First, a general point: restriction and isometry-invariance do hold for clopen subsets. This follows from some easy stuff about the uniform measure on a coproduct; see this comment.

• Continuity  Mark proved (about ten years ago now) that maximum diversity, as a function of positive definite compact metric spaces, is continuous with respect to the Gromov–Hausdorff metric. That gives grounds for optimism, but there’s still some distance between this and a proof of the continuity of uniform measure. Some factors: (i) we need to think about the maximizing measures themselves, not just the maximum diversity; (ii) we have to pass to the large-scale limit; (iii) ideally, it would be nice to drop the positive-definiteness condition.

• Restriction  Here’s a possible strategy for deducing restriction from continuity. Let $A$ be a closed subset of a compact metric space $X$. For small $\varepsilon \gt 0$, let $B_\varepsilon$ be the complement of the open $\varepsilon$-neighbourhood of $A$. Then $A \cup B_\varepsilon$ is Hausdorff-close to $X$, since it’s just $X$ with a thin shell around $A$ removed. So if continuity holds then $\mu_{A \cup B_\varepsilon} \approx \mu_X$. But $A$ is a clopen subset of $A \cup B_\varepsilon$, and restriction does hold for clopen sets. So we should get $\mu_X|_A \approx \mu_{A \cup B_\varepsilon}|_A = \mu_{A \cup B_\varepsilon}(A) \cdot \mu_A \approx \mu_X(A) \cdot \mu_A,$ hence $\mu_X|_A \approx \mu_X(A) \cdot \mu_A.$ And with luck, letting $\varepsilon \to 0$ should turn that $\approx$ into an $=$.

• Invariance for closures of open sets.  This should follow from restriction, by the argument that Paolo and I put together earlier.

• Invariance for open sets.  I don’t currently see how this would follow from invariance for closures of open sets. We do know that if two open sets are isometric then so are their closures (since closure is completion, which is functorial). But taking the closure of an open set may increase its measure (e.g. consider $\{1, 1/2, 1/3, \ldots, 0\}$ again). So I don’t see where to go with this.

It may well be that the hypotheses for these conjectures aren’t quite right. But I think there’s at least a sketch of a plan here.

Posted by: Tom Leinster on October 23, 2021 8:00 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

A thought: if the restriction conjecture is true, then it gives a way to extend the notion of uniform measure beyond the compact case. At least on a locally compact metric space $X$, demanding that $\mu_X|_A = \mu_X(A)\mu_A$ for all compact closure-of-open sets $A$ should uniquely determine $\mu_X$ up to scaling. For $X = \mathbb{R}^n$, Lebesgue measure is uniform in this sense by the result in the paper.

Posted by: lambda on October 23, 2021 9:41 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Hmm, thinking further about it, it’s not completely obvious that it really is unique up to scaling, because you have to rule out the scale factor itself being somehow non-uniform.

Posted by: lambda on October 24, 2021 4:53 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Here’s a slightly stronger continuity conjecture:

The function $\{ \text{compact metric spaces} \} \to \{ \text{metric measure spaces} \}$ defined by $(X, d) \mapsto (X, d, \mu_X)$ is continuous with respect to the Gromov–Hausdorff topology on the domain and the Gromov–Wasserstein topology on the codomain.

Posted by: Mark Meckes on October 23, 2021 11:30 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Incidentally, I’m pretty sure my proof of continuity of maximum diversity doesn’t actually need positive definiteness (though I haven’t thought about this stuff carefully in a while). That hypothesis was just thrown in everywhere since I was focused on magnitude for positive definite spaces, but all the positivity one needs is automatic when you’re only working with positive measures.

Posted by: Mark Meckes on October 23, 2021 11:36 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Ah, I’m beginning to think we might have had this conversation before. And indeed, Proposition 3.12 of our survey paper says exactly what you’re saying now:

The maximum diversity $|A|_+$ is continuous as a function of $A$, on the class of compact metric spaces equipped with the Gromov–Hausdorff topology.

No positive definiteness!

Posted by: Tom Leinster on October 24, 2021 12:30 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

A little progress on continuity: let $X$ be a compact metric space and let $(Y_n)$ be a sequence of closed subspaces converging to $X$ in the Hausdorff metric. Suppose that $X$ has a unique maximizing measure $\nu_X$, and similarly that each $Y_n$ has a unique maximizing measure $\nu_{Y_n}$. Then we can show that $\nu_{Y_n}$ converges to $\nu_X$ in the weak$\ast$ topology on probability measures on $X$.

The proof starts from Mark’s result that the maximum diversity is continuous in this sense. (In fact, it’s continuous in a stronger sense, but never mind that for now.) Then we use a little lemma: if you have a compact space $S$ and a continuous function $\phi: S \to \mathbb{R}$ that achieves its maximum at only one point $s_{max}$, then any sequence $(s_n)$ in $S$ satisfying

$\phi(s_n) \to \phi(s_{max})$

must in fact satisfy

$s_n \to s_{max}.$

In our case, $S$ is the space of probability measures on $X$.

That gives a proof of the result I stated, but something feels cheap and nasty. We have no control over the speed of convergence of $\nu_{Y_n}$ to $\nu_X$, and that’s going to be a problem when we attempt to pass to the large-scale limit (replacing $Y_n$ and $X$ by $t Y_n$ and $t X$ for $t \gg 0$). Maybe we have to think harder about what could cause a maximizing measure to be unique.

Posted by: Tom Leinster on October 25, 2021 7:51 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

I suspect you mean here to be positing that the spaces have unique uniform measures?

Of course a sufficient condition for the uniqueness of the uniform measure (assuming its existence) would be that $X$ has a unique maximizing measure at each scale $t \gt 0$.

A sufficient condition for the uniqueness of the maximizing measures is contained in the proof of Proposition 8.8 of your paper with Emily: If the bilinear form $\langle \mu, \nu \rangle = \int_X \int_X e^{-t d(x,y)} \ d\mu(x) \ d\nu(y)$ on the space $M(X)$ of signed measures on $X$ is strictly positive definite, then the maximizing measure at scale $t$ is unique.

As you know well, the latter condition holds in many interesting cases, and fails in many other interesting cases.
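In the finite case the condition is concrete: the bilinear form is strictly positive definite exactly when the matrix $(e^{-t d(x,y)})_{x,y}$ is positive definite. A quick check with numpy for a hypothetical 4-point space, namely the points $0, 1, 2, 3$ on the real line, where the Laplace kernel makes every such matrix positive definite:

```python
import numpy as np

# Hypothetical 4-point metric space: points 0, 1, 2, 3 on the real line.
D = np.abs(np.arange(4)[:, None] - np.arange(4)[None, :]).astype(float)

for t in [0.1, 1.0, 10.0]:
    Z = np.exp(-t * D)
    # Z positive definite <=> the bilinear form <mu, nu> is strictly
    # positive definite, so the maximizing measure at scale t is unique.
    print(t, np.linalg.eigvalsh(Z).min() > 0)
```

For a space not of negative type, the same eigenvalue check can fail at some scales, which is the “fails in many other interesting cases” above.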

Posted by: Mark Meckes on October 25, 2021 9:34 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Ah, no, I was misreading your comment, and you can disregard the first two paragraphs of my comment.

The rest is worth reading, though.

Posted by: Mark Meckes on October 25, 2021 9:36 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Thanks. I feel the constant pull of positive definiteness hypotheses… though I don’t know whether even this strong one will be strong enough.

Ultimately, the hypotheses in the definition of uniform measure seem a bit provisional. Or at least, I don’t think the tyres have been thoroughly kicked. For example, is assuming that $t X$ has a unique maximizing measure for all $t \gg 0$ really the right thing to do? I’m not sure.

Posted by: Tom Leinster on October 25, 2021 11:46 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

It’s conceivable that one could have a situation where there may be multiple maximizing measures for some $t$, but that any sequence of maximizing measures with $t \to \infty$ converges to the same limit. But that’s a pretty unsatisfying hypothesis.

Posted by: Mark Meckes on October 26, 2021 3:12 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Mark wrote:

It’s conceivable that one could have a situation where there may be multiple maximizing measures for some $t$, but that any sequence of maximizing measures with $t \to \infty$ converges to the same limit. But that’s a pretty unsatisfying hypothesis.

I feel the same way, but let me explain the picture in my head.

We have our compact metric space $X$. We also have the space $P(X)$ of probability measures on it, with the weak$\ast$ topology (metrized by the Wasserstein metric). This is also compact. For each $t \gt 0$, the maximizing measures on $t X$ form a nonempty closed set $M_t \subseteq P(X)$ (which is convex if $X$ is of negative type). One can think about how $M_t$ changes as $t \to \infty$.

The optimistic picture in my head is of $M_t$ moving and shrinking down to a single point as $t \to \infty$. (Imagine a water droplet sliding down a slope while simultaneously evaporating.) To say that it shrinks down to a point is equivalent to your hypothesis.

I wonder whether there are some general results about the behaviour of $M_t$ as $t$ grows. For example, does your hypothesis always hold? Does $diam(M_t) \to 0$ as $t \to \infty$? (Whatever diameter means.) If we write

$M_\infty = \bigcap_{T \gt 0} Cl\biggl(\bigcup_{t \geq T} M_t\biggr)$

for the set of limit points of sequences of maximizing measures as $t \to \infty$, does $M_\infty$ always have at least one element? At most one element? Ideally it would always have exactly one, which we would then call the uniform measure.

Posted by: Tom Leinster on October 26, 2021 10:11 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

I like this picture! First observation: In a compact metric space, like $P(X)$, the intersection of a decreasing sequence of closed nonempty subsets is nonempty. So we do indeed have $M_\infty = \bigcap_{n = 1}^\infty Cl \left(\bigcup_{t \ge n} M_t\right) \neq \emptyset.$

So if we generalize the definition of uniform measure a bit, we could say that every compact metric space possesses some uniform measures.

Another standard compactness argument shows that the diameter of the intersection of such a sequence of sets is at least the infimum of the diameters of the sets, hence the diameter of the intersection is 0 iff the diameters shrink to 0. So the uniqueness of the uniform measure is equivalent to $\lim_{n \to \infty} diam\left(\bigcup_{t \ge n} M_t\right) = 0,$ for any metric that metrizes the weak* topology on $P(X)$. Which is pretty close to your question about $diam(M_t)$, and very closely related to the question about continuity of maximizing measures.

Posted by: Mark Meckes on October 26, 2021 5:34 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Great, I’m already happier with that definition of a uniform measure. It makes sense on an arbitrary compact metric space, and there’s always at least one of them (silly me for not seeing that).

Posted by: Tom Leinster on October 26, 2021 9:03 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

A subset $A\subset X$ of a topological space $X$ whose interior is dense in $A$ is called regular. So a regular closed subset is one that is the closure of its interior.

Posted by: David Roberts on October 24, 2021 1:32 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Posted by: Tom Leinster on October 24, 2021 1:47 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Here’s what I think is a proof that if the restriction conjecture holds then so does the invariance-for-open-sets conjecture.

As ever, I’ll silently assume where necessary that every compact metric space has a uniform measure (necessarily unique).

Let $X$ be a compact metric space, and let $U$ and $W$ be isometric nonempty open subsets of $X$. Let $f: U \to W$ be an isometry. Then $f$ extends uniquely to an isometry $\overline{f}: \overline{U} \to \overline{W}$ between the closures. Under the isometry $\overline{f}$, the uniform measure $\mu_\overline{U}$ on $\overline{U}$ corresponds to the uniform measure $\mu_{\overline{W}}$ on $\overline{W}$, simply because it’s an isometry. In particular, since $f U = W$,

$\mu_{\overline{U}}(U) = \mu_{\overline{W}}(W).$

Now we use restriction. Applied to the closures-of-opens $\overline{U}$ and $\overline{W}$, it gives

$\mu_X|_{\overline{U}} = \mu_X(\overline{U}) \cdot \mu_{\overline{U}}, \qquad \mu_X|_{\overline{W}} = \mu_X(\overline{W}) \cdot \mu_{\overline{W}},$

and in particular, $\mu_X(U) = \mu_X(\overline{U}) \cdot \mu_{\overline{U}}(U), \qquad \mu_X(W) = \mu_X(\overline{W}) \cdot \mu_{\overline{W}}(W).$

Also, we’ve already seen that restriction implies invariance for closures of open sets, so

$\mu_X(\overline{U}) = \mu_X(\overline{W}).$

Putting this all together gives $\mu_X(U) = \mu_X(W)$, as required.

So where are we on these conjectures?

Modulo existence of uniform measures, it seems that the invariance conjectures both follow from the restriction conjecture. I sketched a proof that restriction follows from the continuity conjecture (I mean the weak one that I stated, not the nice strong Gromovy one that Mark stated). If that sketch proof can be made to go through then the remaining challenge is to prove continuity.

The deduction of restriction from continuity might not be as straightforward as my sketch made it look, though. The problem is that convergence in the weak${}^\ast$ topology on the space of probability measures doesn’t imply convergence of probabilities of a particular set. That is, $\nu \to \mu$ doesn’t imply $\nu(A) \to \mu(A)$ for all measurable $A$. This somehow has to be overcome.

Posted by: Tom Leinster on October 24, 2021 1:46 AM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

The problem is that convergence in the weak${}^\ast$ topology on the space of probability measures doesn’t imply convergence of probabilities of a particular set. That is, $\nu \to \mu$ doesn’t imply $\nu(A) \to \mu(A)$ for all measurable $A$. This somehow has to be overcome.

You may already know this, but the so-called Portmanteau theorem, which gives a number of equivalent conditions for convergence of probability measures, says that for probability measures on a metric space, $\nu \to \mu$ is equivalent to $\nu(A) \to \mu(A)$ for all Borel sets $A$ with $\mu(\partial A) = 0$.

(The convergence $\nu \to \mu$ here is the probabilists’ “weak convergence” or “convergence in distribution”, which in general is not quite the same as convergence in the weak* topology, but is the same on a compact metric space, which is the setting we care about here.)

I haven’t yet given any thought to whether this observation helps in this case.

Posted by: Mark Meckes on October 28, 2021 1:47 PM | Permalink | Reply to this

### Re: What is the Uniform Distribution?

Thanks. I’d understood that the difficulty had to do with the possibility of $\partial A$ having positive probability, but I didn’t know that theorem.

Posted by: Tom Leinster on October 28, 2021 3:13 PM | Permalink | Reply to this
