June 21, 2015

Posted by Mike Shulman

In my last post I promised to follow up by explaining something about the relationship between homotopy type theory (HoTT) and computer formalization. (I’m getting tired of writing “publicity”, so this will probably be my last post for a while in this vein — for which I expect some readers will be as grateful as I am.)

As a potential foundation for mathematics, HoTT/UF is a formal system existing at the same level as set theory (ZFC) and first-order logic: it’s a collection of rules for manipulating syntax, into which we can encode most or all of mathematics. No such formal system requires computer formalization, and conversely any such system can be used for computer formalization. For example, the HoTT Book was intentionally written to make the point that HoTT can be done without a computer, while the Mizar project has formalized huge amounts of mathematics in a ZFC-like system.

Why, then, does HoTT/UF seem so closely connected to computer formalization? Why do the overwhelming majority of publications in HoTT/UF come with computer formalizations, when formalization is still the exception rather than the rule in mathematics as a whole? And why are so many of the people working on HoTT/UF computer scientists or advocates of computer formalization?

To start with, note that the premise of the third question partially answers the first two. If we take it as a given that many homotopy type theorists care about computer formalization, then it’s only natural that they would be formalizing most of their papers, creating a close connection between the two subjects in people’s minds.

Of course, that forces us to ask why so many homotopy type theorists are into computer formalization. I don’t have a complete answer to that question, but here are a few partial ones.

1. HoTT/UF is built on type theory, and type theory is closely connected to computers, because it is the foundation of typed functional programming languages like Haskell, ML, and Scala (and, to a lesser extent, less-functional typed programming languages like Java, C++, and so on). Thus, computer proof assistants built on type theory are well-suited to formal proofs of the correctness of software, and thus have received a lot of work from the computer science end. Naturally, therefore, when a new kind of type theory like HoTT comes along, the existing type theorists will be interested in it, and will bring along their predilection for formalization.

2. HoTT/UF is by default constructive, meaning that we don’t need to assert the law of excluded middle or the axiom of choice unless we want to. Of course, most or all formal systems have a constructive version, but with type theories the constructive version is the “most natural one” due to the Curry-Howard correspondence. Moreover, one of the intriguing things about HoTT/UF is that it allows us to prove certain things constructively that in other systems require LEM or AC. Thus, it naturally attracts attention from constructive mathematicians, many of whom are interested in computable mathematics (i.e. when something exists, can we give an algorithm to find it?), which is only a short step away from computer formalization of proofs.
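The Curry–Howard correspondence mentioned above can be made concrete in any dependently typed proof assistant. Here is a minimal illustration (my own standard example, not from the post) in Lean 4 syntax: a proposition is a type, a proof is a program inhabiting that type, and a constructive existence proof literally contains its witness.

```lean
-- Curry–Howard: propositions are types, proofs are programs.
-- A proof of P ∧ Q → Q is just the second projection.
example (P Q : Prop) : P ∧ Q → Q :=
  fun h => h.right

-- A constructive existence proof carries its witness (here, 2):
example : ∃ n : Nat, n + n = 4 :=
  ⟨2, rfl⟩
```

This is why the constructive reading is the “most natural one” for type theory: asserting excluded middle would amount to postulating a term with no corresponding program.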

3. One could, however, try to make similar arguments from the other side. For instance, HoTT/UF is (at least conjecturally) an internal language for higher topos theory and homotopy theory. Thus, one might expect it to attract an equal influx of higher topos theorists and homotopy theorists, who don’t care about computer formalization. Why hasn’t this happened? My best guess is that at present the traditional 1-topos theorists seem to be largely disjoint from the higher topos theorists. The former care about internal languages, but not so much about higher categories, while for the latter it is reversed; thus, there aren’t many of us in the intersection who care about both and appreciate this aspect of HoTT. But I hope that over time this will change.

4. Another possible reason why the influx from type theory has been greater is that HoTT/UF is less strange-looking to type theorists (it’s just another type theory) than to the average mathematician. In the HoTT Book we tried to make it as accessible as possible, but there are still a lot of tricky things about type theory that one seemingly has to get used to before being able to appreciate the homotopical version.

5. Another sociological effect is that Vladimir Voevodsky, who introduced the univalence axiom and is a Fields medalist with “charisma”, is also a very vocal and visible advocate of computer formalization. Indeed, his personal programme that he calls “Univalent Foundations” is to formalize all of mathematics using a HoTT-like type theory.

6. Finally, many of us believe that HoTT is actually the best formal system extant for computer formalization of mathematics. It shares most of the advantages of type theory, such as the above-mentioned close connection to programming, the avoidance of complicated ZF-encodings for even basic concepts like natural numbers, and the production of small easily-verifiable “certificates” of proof correctness. (The advantages of some type theories that HoTT doesn’t yet share, like a computational interpretation, are work in progress.) But it also rectifies certain infelicitous features of previously existing type theories, by specifying what equality of types means (univalence), including extensionality for functions and truth values, providing well-behaved quotient types (HITs), and so on, making it more comfortable for ordinary mathematicians. (I believe that historically, this was what led Voevodsky to type theory and univalence in the first place.)

There are probably additional reasons why HoTT/UF attracts more people interested in computer formalization. (If you can think of others, please share them in the comments.) However, there is more to it than this, as one can guess from the fact that even people like me, coming from a background of homotopy theory and higher category theory, tend to formalize a lot of our work on HoTT. Of course there is a bit of a “peer pressure” effect: if all the other homotopy type theorists formalize their papers, then it starts to seem expected in the subject. But that’s far from the only reason; here are some “real” ones.

1. Computer formalization of synthetic homotopy theory (the “uniquely HoTT” part of HoTT/UF) is “easier”, in certain respects, than most computer formalization of mathematics. In particular, it requires less infrastructure and library support, because it is “closer to the metal” of the underlying formal system than is usual for actually “interesting” mathematics. Thus, formalizing it still feels more like “doing mathematics” than like programming, making it more attractive to a mathematician. You really can open up a proof assistant, load up no pre-written libraries at all, and in fairly short order be doing interesting HoTT. (Of course, this doesn’t mean that there is no value in having libraries and in thinking hard about how best to design those libraries, just that the barrier to entry is lower.)

2. Precisely because, as mentioned above, type theory is hard to grok for a mathematician, there is a significant benefit to using a proof assistant that will automatically tell you when you make a mistake. In fact, messing around with a proof assistant is one of the best ways to learn type theory! I posted about this almost exactly four years ago.

3. I think the previous point goes double for homotopy type theory, because it is an unfamiliar new world for almost everyone. The types of HoTT/UF behave kind of like spaces in homotopy theory, but they have their own idiosyncrasies that it takes time to develop an intuition for. Playing around with a proof assistant is a great way to develop that intuition. It’s how I did it.

4. Moreover, because that intuition is unique and recently developed for all of us, we may be less confident in the correctness of our informal arguments than we would be in classical mathematics. Thus, even an established “homotopy type theorist” may be more likely to want the comfort of a formalization.

5. Finally, there is an additional benefit to doing mathematics with a proof assistant (as opposed to formalizing mathematics that you’ve already done on paper), which I think is particularly pronounced for type theory and homotopy type theory. Namely, the computer always tells you what you need to do next: you don’t need to work it out for yourself. A central part of type theory is inductive types, and a central part of HoTT is higher inductive types, both of which are characterized by an induction principle (or “eliminator”) which says that in order to prove a statement of the form “for all $x:W$, $P(x)$”, it suffices to prove some number of other statements involving the predicate $P$. The most familiar example is induction on the natural numbers, which says that in order to prove “for all $n\in \mathbb{N}$, $P(n)$” it suffices to prove $P(0)$ and “for all $n\in \mathbb{N}$, if $P(n)$ then $P(n+1)$”. When using proof by induction, you need to isolate $P$ as a predicate on $n$, specialize to $n=0$ to check the base case, write down $P(n)$ as the inductive hypothesis, and then replace $n$ by $n+1$ to find what you have to prove in the induction step. The students in an intro-to-proofs class have trouble with all of these steps, but professional mathematicians have learned to do them automatically. However, for a general inductive or higher inductive type, there might instead be four, six, ten, or more separate statements to prove when applying the induction principle, many of which involve more complicated transformations of $P$, and it’s common to have to apply several such inductions in a nested way. Thus, when doing HoTT on paper, a substantial amount of time is sometimes spent simply figuring out what has to be proven.
But a proof assistant equipped with a unification algorithm can do that for you automatically: you simply say “apply induction for the type $W$” and it immediately decides what $P$ is and presents you with a list of the remaining goals that have to be proven.
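To make this division of labor concrete, here is a small sketch (my own illustration, not from the post) in Lean 4: the `induction` tactic does exactly the unification step described, inferring the predicate $P$ from the goal and presenting one remaining subgoal per constructor.

```lean
-- The tactic infers P n := (0 + n = n) from the goal and
-- leaves one subgoal per constructor of Nat.
example (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                         -- base case: P 0
  | succ k ih =>                        -- inductive step: P k → P (k + 1)
    show Nat.succ (0 + k) = Nat.succ k  -- 0 + (k+1) unfolds definitionally
    rw [ih]
```

For a higher inductive type the same command would instead generate a goal for each point constructor and each path constructor, which is precisely the bookkeeping that is tedious to do by hand.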

To summarize this second list, then, I think it’s fair to say that compared to formalizing traditional mathematics, formalizing HoTT tends to give more benefit at lower cost. However, that cost is still high, especially when you take into account the time spent learning to use a proof assistant, which is often not the most user-friendly of software. This is why I always emphasize that HoTT can perfectly well be done without a computer, and why we wrote the book the way we did.

Posted at June 21, 2015 6:47 AM UTC


Re: What’s so HoTT about Formalization?

I’ve been trying to keep abreast of the discussions on HoTT, and I haven’t seen one question answered. I’m the type of pure mathematician who’s quite sympathetic to the HoTT program. But I’m also on the luddite end of mathematicians (well, I’m better than some at LaTeX). Suppose I wanted to start playing around with a HoTT proof assistant and start “doing” HoTT (or even, gasp, formalizing my own research). What steps should I take? I mean: I need to download software, learn some interface, practice some tutorials, etc. What software? What tutorials? Or is this only a game for folks who took CS 101 in college?

Posted by: Theo Johnson-Freyd on June 21, 2015 8:22 PM | Permalink | Reply to this

Re: What’s so HoTT about Formalization?

Some information can be found in our Coq library. We have spent quite a bit of time on the documentation.

Posted by: Bas Spitters on June 21, 2015 10:11 PM | Permalink | Reply to this

Re: What’s so HoTT about Formalization?

Yes, in the HoTT Coq library we try to explain the features of Coq that we’re using in comments as we go, intending that the source code be readable on its own. However, I haven’t personally spoken to anyone who has actually tried to read it as their introduction to doing HoTT in Coq, so I don’t know how successful we were; if you do look at it, we’d appreciate feedback!

Unfortunately, though, we haven’t included a really basic introduction to Coq; you need a certain amount of orientation before you can start to read our code. There’s been some talk of writing such an introduction, perhaps to be read somewhat in parallel with the HoTT Book, but it hasn’t actually happened. So at present, I’d say you are best off starting with a non-HoTT introduction to Coq, such as Software Foundations or CPDT. You probably don’t need to read all of those books, just the first few chapters to get you to the point where you know how to define and prove things in Coq with terms and tactics.

For your preliminary work with one of those books, you can use a vanilla install of Coq. The HoTT Coq library, on the other hand, currently requires a custom-built version of Coq, but it includes scripts that are supposed to build that version for you; read the file INSTALL in the distribution. (We’re hoping that once Coq v8.5 is released, we’ll be able to distribute a version of the HoTT library that will work with a standard install thereof.)

Another option is not to look at the HoTT library yet, but once you have some basic familiarity with Coq, to read the HoTT Book and try to formalize some of it on your own using Coq. As I mentioned above, that can be a very educational approach.

There are also other proof assistants that people use to do HoTT, such as Agda and (more recently) Lean. Maybe someone who uses one of those will chime in with some suggestions for getting started with them.

Posted by: Mike Shulman on June 22, 2015 12:41 AM | Permalink | Reply to this

Re: What’s so HoTT about Formalization?

Well, one advantage of proof assistants for me is that I’m really really bad at calculation, and anything that I can do which will help me with it (and particularly make it more reliable) is a gain, and it keeps me more engaged with the calculation, so I spend less time staring into space or making cups of coffee which I don’t need. Plus, it’s fun (I do know quite a lot about functional programming, so I’m quite happy with the predominantly inductive form of proof which Coq tends to use).

One issue is that Coq is, after all, a proof assistant: it does proofs in a different way from humans. (Bernays said that it clearly wasn’t true that formalising mathematics would make it more reliable, because humans were so crap at actually doing the calculations involved in logic. This has a corollary, I think, that human mathematicians must be doing something other than merely performing a lot of inferences in formal logic.)

So the question probably is whether the dialogue between a human and Coq is worth having. I do tend to think so, especially with homotopy theory. If you look at algebraic geometry, then the use of proof assistants has allowed people to do some things which would be out of reach to all but the best calculators: see, for example, Cassels’ work on curves of genus 2.

Posted by: Graham White on June 24, 2015 4:15 PM | Permalink | Reply to this

Re: What’s so HoTT about Formalization?

My best guess is that at present the traditional 1-topos theorists seem to be largely disjoint from the higher topos theorists.

That’s an interesting part of the story, and no doubt a very complicated one too, which will feature the likes of Joyal as one of the few to bridge the gap.

Regarding the near-disjointness, I wonder if there is any justification to a suspicion I’ve had over the years of watching the category-theoretic movement that it has slightly underplayed the capacity of category theory to make sense of, and improve, results in contemporary mainstream research.

Something I admire in Urs Schreiber’s reformulation of mathematical physics is that it involves a genuine attempt to mop up and reinterpret the results of large numbers of papers scattered through the mathematical physics literature. For example, a typical Urs-ism from nLab: D’Auria-Fre formulation of supergravity

What D’Auria-Fré implicitly observe (not in this homotopy theoretic language though, that was developed in Sati-Schreiber-Stasheff 08, Fiorenza-Schreiber-Stasheff 10, Fiorenza-Sati-Schreiber 13) is that for higher supergravity with extended supersymmetry such as 11-dimensional supergravity with its M-theory super Lie algebra symmetry, the description of the fields is in the higher differential geometry version of Cartan geometry, namely higher Cartan geometry, where the super Poincare Lie algebra is replaced by one of its exceptional super Lie n-algebra extensions (those that also control the brane scan), such as notably the supergravity Lie 3-algebra and the supergravity Lie 6-algebra. This is the refinement of super-Cartan geometry to higher Cartan geometry.

Whether or not that’s an important insight, there’s an interesting blend of the abstract and the particular here.

The connecting thought is that the insufficiency of 1-topos theory to deal with the subtler forms of identity needed even in concrete cases like the one Lurie explains in Structured Spaces about Bezout’s Theorem is part of the drive to higher topos theory.

An extra intricacy in the case of toposes is Grothendieck’s role in pushing towards higher versions. There’s an interesting discussion of his correspondence with Breen (ending here) which includes an expression of his worries about $n$-toposes,

J’espère bien que ces animaux n’existent pas … (“I very much hope that these animals do not exist …”)

Posted by: David Corfield on June 27, 2015 3:01 PM | Permalink | Reply to this

Re: What’s so HoTT about Formalization?

I’ve heard two interesting arguments, the first of them from Voevodsky:

1. The combinatorial complexity of describing mathematical objects grows, roughly speaking, exponentially (when using set theory) with the level of abstraction, and HoTT allows one to avoid that exponential growth. This is an almost literal translation of his argument as I read it in 2012; I’m sorry if my poor English distorts the meaning.

2. Maybe the second argument, which I heard from other people, is closely related to the first one: the most boring part of working with proof assistants is transforming a term for which some proposition is obvious to us into the form in which the proposition is known to the proof assistant. Those people say that univalence somehow allows one to automate this transformation.

Are these arguments related to the sixth paragraph of your first list?

If so, then how can the second argument work for arbitrary mathematical terms? As a software engineer, I just don’t understand: I think it’s sometimes very difficult to show that two terms are forms of each other, even for a human.

Excuse me for my naive questions, but I need some arguments to learn HoTT.

Posted by: Osman on June 28, 2015 3:18 PM | Permalink | Reply to this

Re: What’s so HoTT about Formalization?

These arguments do seem to be particular cases of my #6 in the first list. However, my experience and knowledge (which of course is not perfect) suggest that both of those arguments, at least as you phrase them, are somewhat exaggerated.

Regarding the first, I don’t have extensive experience formalizing complicated mathematics, so I don’t think I can offer an opinion as to whether the complexity does in fact “grow exponentially”. I’m not sure I even know what exactly that means. However, I don’t see any reason to expect that HoTT will change the “asymptotic” behavior of this function (whether exponential, polynomial, or whatever). It might very well only affect the “constant factors”, by reducing the distance from certain sorts of “real mathematics” to the “metal” of the foundation, as I mentioned above. My inclination would be to expect the latter, but I don’t know how to decide the question other than experimentally, nor do I know what data Voevodsky may have been basing his claim on, so I might be wrong. Of course, constant factors can still be very significant.

Regarding the second, there is a hope that univalence will make it easier or automatic to transform some particular kinds of terms in “obvious” ways. Specifically, if two types are isomorphic/equivalent, then human mathematicians tend to identify them along that isomorphism without further comment, whereas a proof assistant needs to be told explicitly how to do so. Univalence, by making such an equivalence into an actual equality, makes that identification more automatic. To get the best benefit of this you would need a proof assistant that, unlike current ones, implements univalence computationally; such type theories and proof assistants are currently under research and development. However, I think there will remain many parts of “boring work” that univalence doesn’t affect.
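In the notation of the HoTT Book (my own summary, not part of the original comment), the mechanism just described can be written as a chain: univalence converts an equivalence into an identification of types, and transport along that identification carries any property or structure $P$ from one type to the other.

```latex
% An equivalence e : A ≃ B yields, via univalence, an equality of
% types, and transport along that equality transfers any P for free:
\[
  e : A \simeq B
  \;\rightsquigarrow\;
  \mathsf{ua}(e) : A =_{\mathcal{U}} B
  \;\rightsquigarrow\;
  \mathsf{transport}^{P}(\mathsf{ua}(e)) : P(A) \to P(B)
\]
```

The computational question is whether $\mathsf{transport}^{P}(\mathsf{ua}(e))$ actually *computes* to something built from $e$, rather than getting stuck on the univalence axiom; that is what the proof assistants under development aim to provide.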

In general, I see the innovations of HoTT/UF (such as univalence and HITs) as incremental improvements in the technology for computer formalization of mathematics. They are certainly significant, but I don’t see them as “game changers” in the sense of suddenly making widespread computer formalization feasible where it wasn’t before. Many other incremental improvements have been made in the past, and many more remain to be made in the future before proof assistants are likely to become as common among mathematicians as, say, TeX is today.

Posted by: Mike Shulman on June 28, 2015 10:25 PM | Permalink | Reply to this

Re: What’s so HoTT about Formalization?

Regarding the second argument: I mean, how is it possible computationally? Because you cannot just keep the orbit of each term under the groupoid action in memory… :)

I mean, I can see some ad hoc solutions that are good from a computational point of view, like describing normalization procedures for important terms explicitly, but it sounds like you have a theoretical solution…

Posted by: Osman on June 28, 2015 11:50 PM | Permalink | Reply to this

Re: What’s so HoTT about Formalization?

I don’t understand what you’re worried about, so probably we’re not communicating. What I’m talking about is sketched in section 2.14 of the HoTT Book, and more programming-theoretically in these slides from a talk by Dan Licata.

Posted by: Mike Shulman on June 29, 2015 4:55 PM | Permalink | Reply to this

Re: What’s so HoTT about Formalization?

In order to understand 2.14 I would have to learn much more about HoTT. But that headache with terms is known to anybody who has tried a proof assistant or similar software (and maybe doesn’t even know type theory). I think this is an opportunity to explain the advantages of HoTT to non-TT people.

Posted by: Osman on June 29, 2015 5:22 PM | Permalink | Reply to this

Re: What’s so HoTT about Formalization?

I agree that #2 is one of the reasons that people (specifically, constructivists and computer scientists) become interested in HoTT. However, the introduction to the HoTT book reminds readers that the nice properties of the univalence axiom and HITs might be taken as evidence that they are not constructive (page 11, PDF for screen reading). Until this question is settled and an adequate computational interpretation of univalence is provided, I’m hesitant to accept the claim that type theory + univalence is still constructive.

Of course, this blog post is two years old and there may have been further developments on the type theory side of things that I’m not aware of/aren’t mentioned in the book :)

Posted by: Langston Barrett on October 24, 2017 1:48 AM | Permalink | Reply to this

Re: What’s so HoTT about Formalization?

Cubical type theory is generally agreed to be a computational interpretation of univalence and at least some HITs.

Of course, it’s also important to recognize that there are many meanings of “constructive”, and even a non-computational theory can be “constructive” in many ways.

Posted by: Mike Shulman on October 24, 2017 3:06 AM | Permalink | Reply to this
