In Further Praise of Dependent Types
Posted by David Corfield
After an exciting foray into the rarefied atmosphere of current geometry, I want to return to something more prosaic – dependent types – the topic treated in Chapter 2 of my book. We have already had a paean on this subject a few years ago in Mike’s post – In Praise of Dependent Types, hence the title of this one.
I’ve just watched Kevin Buzzard’s talk – Is HoTT the way to do mathematics?. Kevin is a number theorist at Imperial College London who’s looking to train his undergraduates to produce computer-checked proofs of mainstream mathematics (e.g., theorems in algebraic geometry concerning rings and schemes) in the Lean theorem prover.
Why Lean? Well, at 12:14 in the talk Kevin claims of mathematicians that
They use dependent types, even though they don’t know they are using dependent types.
Let’s hope they receive this news with the delight of Molière’s Mr. Jourdain:
« Par ma foi ! il y a plus de quarante ans que je dis de la prose sans que j’en susse rien, et je vous suis le plus obligé du monde de m’avoir appris cela. »
“My faith! For more than forty years I have been speaking prose while knowing nothing of it, and I am the most obliged person in the world to you for telling me so.”
So Kevin has chosen Lean over set-based provers since it employs these dependent types. He then goes on in his talk to consider whether one should prefer a system with univalence built in, but that’s a topic for another day.
I have a simple explanation as to why mathematicians are using dependent types. They’re doing so because first and foremost they are speakers of natural language, and as such they are already using dependent types. This latter point is something we might have learned way back in 1994 from Aarne Ranta in his excellent Type-theoretic Grammar.
Recognising the commonality between natural and mathematical language, Ranta tells us in a very recent paper that mathematics is a good place to look to understand how language works:
The main characteristic of the language of mathematics is perhaps that different members of the community can completely agree about the meaning of each word and sentence. This means that one can take these meanings for granted and analyse the underlying mechanisms in isolation from uncertainty about what the correct analysis of data is. In this sense, the language of mathematics provides “laboratory conditions” for the study of semantics and pragmatics. (Ranta 2020, p. 122).
Ranta, A. 2020. ‘Some remarks on pragmatics in the language of mathematics’, Journal of Pragmatics 160, pp. 120-122.
So, a good place to try and understand underlying linguistic structure is informal mathematics, something he studied in his 1993 paper Type theory and the informal language of mathematics. This investigates both directions of the passage between formal and informal mathematics: sugaring and formalization.
Important topics for the dependent type-theoretic account of language are pronouns and quantifiers. Starting with the latter, any dependent type, $x : A \vdash B(x)$, gives rise to two non-dependent types:

$$\sum_{x:A} B(x) \quad \text{and} \quad \prod_{x:A} B(x).$$
These assignments are left and right adjoints, respectively, to the one which sends a type $C$ to the constant dependent type, $x : A \vdash C$.
(Exercise: Assuming these adjunctions, demonstrate the arithmetic identities for natural numbers, $m \cdot n = \sum_{i=1}^{m} n$ and $n^m = \prod_{i=1}^{m} n$.)
When $B(x)$ is a proposition, these quantified types are ‘The $A$s which are $B$’ (‘truncated’ to ‘Some $A$ is $B$’) and ‘Every $A$ is $B$’.
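For readers who like to see these constructions in a proof assistant, here is a minimal sketch in plain Lean 4 (fitting, given Kevin’s choice of prover); the names A, B, C and P are mine, chosen purely for illustration:

```lean
-- A dependent type B over A, and the two non-dependent types it yields.
universe u
variable (A : Type u) (B : A → Type u)

#check Σ x : A, B x    -- dependent sum: "the As which are B"
#check ∀ x : A, B x    -- dependent product: "every A is B"

-- With a constant family, Σ collapses to a binary product and Π to a
-- function type; on finite types these have m · n and n ^ m elements,
-- the identities in the exercise above.
example (C : Type u) : (Σ _ : A, C) → A × C := fun p => (p.1, p.2)
example (C : Type u) : (A × C) → Σ _ : A, C := fun p => ⟨p.1, p.2⟩

-- When the family is Prop-valued, the sum truncates to ∃ ("Some A is B")
-- and the product is ∀ ("Every A is B").
variable (P : A → Prop)
#check ∃ x : A, P x
#check ∀ x : A, P x
```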
The kind of ambiguity that Ranta says is missing in mathematics occurs frequently in natural language. Philosophers of language puzzle about how we determine the scope of a quantifier as when someone utters ‘Every bottle is green’. Certainly it doesn’t mean every bottle in the world. It probably doesn’t mean every bottle in the house or even every one in the room. Of course, the context matters. For instance, maybe the person who says this has just returned from a store with a selection of wine.
Dependent type theory understands there to be a host of implicit parameters. ‘Every’ is ‘$\prod_{x:A}$’, where $A$ is a type to be determined. Similarly for the demonstratives, ‘this’ and ‘that’. The need for parameters is felt by the kind of philosopher of language who wants to downplay the role of the context as an independent determinant of meaning, one who, like Jason Stanley, could maintain:
All truth-conditional effects of extra-linguistic context can be traced to logical form. (Language in Context, OUP, 2007, p. 30)
Stanley speaks of ‘context-dependency’, ‘domain indices’, ‘covert pronominal elements’, and ‘unpronounced syntactic structure’. My claim is that if this has any chance of working, it has to be through the logical form of dependent type theory. On hearing ‘every’ in ‘every bottle is green’, we search for the corresponding implicit type. ‘Bottle’ is a cue and, together with the context, may lead us to the intended type of ‘bottles that have just been brought home from the store’.
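In Lean 4 notation the resolved claim might look as follows; the type name is of course hypothetical, standing in for whatever the context supplies:

```lean
-- "Every bottle is green", with the quantifier's domain resolved to an
-- implicit, contextually supplied type; all names here are hypothetical.
variable (BottleFromStore : Type) (green : BottleFromStore → Prop)

#check ∀ b : BottleFromStore, green b
```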
Consider another example from Stanley:
Every time John lit a cigarette, it rained.
Despite appearances, this can’t merely be quantification over times. If John always lights a cigarette in New York and at the same moment it always rains in London, this doesn’t support the proposition. It must rain at the same location as John.
I would want to say that the verb ‘rain’ has implicit parameters for time and place. An achievement, such as ‘John lights a cigarette’, has an associated time and place. The proposition is claiming that every achievement which is John lighting a cigarette occurs at a time and place such that it rains then and there.
Hence, in a context

$$j : Person,\ Light(j) : Type,\ time : Light(j) \to Time,\ place : Light(j) \to Place,\ Rain : Time \to Place \to Prop,$$

we form the type

$$\prod_{x : Light(j)} Rain(time(x), place(x)).$$

This final type corresponds to the target sentence.
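Rendered in Lean 4, with every name here illustrative rather than standard:

```lean
-- A sketch of the analysis above: an achievement type of John's
-- cigarette-lightings, each carrying an implicit time and place.
variable (Time Place Person : Type) (j : Person)            -- j is John
variable (Light : Person → Type)                            -- lighting-a-cigarette achievements
variable (time : Light j → Time) (place : Light j → Place)  -- their implicit parameters
variable (Rain : Time → Place → Prop)

-- "Every time John lit a cigarette, it rained."
#check ∀ x : Light j, Rain (time x) (place x)
```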
Briefly on pronouns, one of the first pieces of natural language I saw rendered in dependent type theory was Sundholm’s version of the Donkey sentence:
Every farmer who owns a donkey beats it.
This may strike people as a little unnatural, but it’s easy to devise sentences of a similar form, especially if we choose ‘any’ rather than ‘every’:
Anyone who owns a gun should register it.
(A treatment of the subtleties of this choice and comparable ones can be found in Zeno Vendler’s Each and Every, Any and All.)
The point is that there’s a relation between two kinds of thing, here people and guns, where one person may have no, one, or many guns. We use the indefinite article ‘a’ for gun, and yet seem to want to refer to something particular by using ‘it’.
Every $A$ that $R$s a $B$, $S$s it,

rendered in type theory as

$$\prod_{z : \sum_{x:A} \sum_{y:B} R(x,y)} S(p(z), p(q(z))),$$

where $p$ and $q$ are the projections to first and second components. We see that ‘it’ corresponds to $p(q(z))$.
A first-order quantification, on the other hand, will need two universal quantifiers,

$$\forall x\, \forall y\, ((A(x) \wedge B(y) \wedge R(x,y)) \to S(x,y)),$$

which seems further from the original linguistic form.
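Both renderings can be sketched in Lean 4. All the names are illustrative, and Owns is taken proof-relevantly (as a Type) so that the Σ-type can carry the ownership witness:

```lean
variable (Farmer Donkey : Type)
variable (Owns : Farmer → Donkey → Type)   -- proof-relevant, so Σ applies
variable (Beats : Farmer → Donkey → Prop)

-- "Every farmer who owns a donkey beats it": quantify over triples of a
-- farmer, a donkey, and an ownership witness; "it" is p(q(z)), here z.2.1.
#check ∀ z : (Σ f : Farmer, Σ d : Donkey, Owns f d), Beats z.1 z.2.1

-- The first-order version, for contrast:
#check ∀ f : Farmer, ∀ d : Donkey, Owns f d → Beats f d
```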
Ranta gives an example from informal Euclidean geometry:
Every point that lies outside a line determines a parallel to it.
Perhaps a more natural-sounding mathematical example is
Any number which has a proper divisor is greater than it.
In more formal mathematics, we tend to deploy letters, so
Any number, $n$, which has a proper divisor, $m$, is such that $n > m$.
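The same shape can be written down in Lean 4, where ProperDivisor is my own illustrative definition and Σ′ (Lean’s PSigma) allows the witness component to be a proposition:

```lean
-- A proper divisor of n: some m with n = m * k for some k, and m ≠ n.
def ProperDivisor (m n : Nat) : Prop := (∃ k, n = m * k) ∧ m ≠ n

-- "Any number, n, which has a proper divisor, m, is such that n > m":
-- quantify over pairs-with-witnesses; "it" is again a projection, z.2.1.
#check ∀ z : (Σ' n : Nat, Σ' m : Nat, ProperDivisor m n), z.1 > z.2.1
```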
I think Ranta is right: we could learn a lot from informal mathematical language.
Re: In Further Praise of Dependent Types
I have heard computer scientists complaining that our (mathematical) notation is ambiguous. They don’t like `[0,1]` for the real unit interval, because it looks exactly like the list `[0,1]`, even though it is manifestly clear to any mathematician that the meaning of `[0,1]` is completely unambiguous in any context. The CS people like `]0,1]` and `(0,1]` even less, because these mess up their parsing. We’re stuck with something like `Ioc 0 1` right now (“interval, open, closed”). Come on. Lean 4 has an amazing parser designed by Sebastian Ullrich, and even though it won’t be perfect, it will be better than Lean 3’s. I believe that writing mathematics in a computer file which looks the same as the mathematics we write on blackboards is a really important step towards making software such as Lean usable for mathematicians. We are working on it. I think that some of the computer scientists who claim it’s sloppy or ambiguous just don’t understand it well enough.
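For the curious, here is a from-scratch sketch of what that naming scheme expresses; mathlib’s actual `Set.Ioc` differs in detail, so treat this as illustrative only:

```lean
-- "Ioc a b" = the interval that is open at a and closed at b,
-- here written as a predicate on any type with < and ≤.
def Ioc {α : Type} [LT α] [LE α] (a b : α) : α → Prop :=
  fun x => a < x ∧ x ≤ b

#check Ioc 0 1   -- the half-open interval (0, 1], here over Nat by default
```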