September 26, 2018

A Communal Proof of an Initiality Theorem

Posted by Mike Shulman

One of the main reasons I’m interested in type theory in general, and homotopy type theory (HoTT) in particular, is that it has categorical semantics. More precisely, there is a correspondence between (1) type theories and (2) classes of structured categories, such that any proof in a particular type theory can be interpreted into any category with the corresponding structure. I wrote a lot about type theory from this perspective in The Logic of Space. The basic idea is that we construct a particular structured category $\mathrm{Syn}$ out of the syntax of the type theory, and prove that it is the initial such category. Then we can interpret any syntactic object $A$ in a structured category $C$ by regarding $A$ as living in $\mathrm{Syn}$ and applying the unique structured functor $\mathrm{Syn} \to C$.
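
Schematically, initiality says that for every category $C$ with the corresponding structure there is a unique structure-preserving functor

$$[[-]]_C \;\colon\; \mathrm{Syn} \longrightarrow C,$$

and the interpretation of a syntactic object $A$ is then simply $[[A]]_C$; the uniqueness is what makes this interpretation canonical rather than dependent on arbitrary choices.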

Unfortunately, we don’t currently have any very general definitions of what “a type theory” is, what the “corresponding class of structured categories” is, or a very general proof of this “initiality theorem”. The idea of such proofs is easy — just induct over the construction of syntax — but its realization in practice can be long and tedious. Thus, people are understandably reluctant to take the time and space to write out such a proof explicitly, when “everyone knows” how the proof should go and probably hardly anyone would really read such a proof in detail anyway. This is especially true for dependent type theory, which is qualitatively more complicated in various ways than non-dependent type theories; to my knowledge only one person (Thomas Streicher) has ever written out anything approaching a complete proof of initiality for a dependent type theory.

There is currently some disagreement in the HoTT community over how much of a problem this is. On one side, the late Vladimir Voevodsky argued that it is completely unacceptable, delayed the publication of his seminal model of type theory in simplicial sets because of his dissatisfaction with the situation, and spent the last years of his life working on the problem. (Others, less dogmatic in philosophy, are nevertheless also working on the problem — specifically, attempting to give a general definition of “type theory” and prove a general initiality theorem for all such “type theories”.) On the other side, plenty of people reasonably point out that functorial semantics has been well-understood for decades, and why should we worry so much about a particular instance of it all of a sudden now? Unfortunately, the existence of this disagreement is not good for the perception of our discipline among other mathematicians.

In my experience, arguments about the importance of initiality often tend to devolve into disagreements about questions like: Is Streicher’s proof hard to understand? Does it generalize “easily” to other type constructors? How “easy” is “easily”? What kinds of type constructors? Is it hard to deal with variable binding? Is the categorical structure really “exactly” the same as the type-theoretic structure? Where is the “hard part” of an initiality proof? Is there even a “hard part” at all? Plenty of people have opinions about these questions, but for most of us these opinions are not based on actual experience of trying to prove such a theorem.

Last month at the nForum, Richard Williamson suggested the (in hindsight obvious) solution: let’s get a bunch of people together and work together to write out a complete proof of an initiality theorem, in modern language, for a basic uncomplicated dependent type theory. If we have enough contributors to divide up the work, the verbosity and tedium shouldn’t be overwhelming. We can use the nLab wikilink features to organize the proof in a “drill-down” manner so that a reader can get a high level idea and then delve into as many or as few details as desired. Hopefully, this will increase overall public awareness of how such proofs work, so that they seem less “magic”. Moreover, all the contributors will get some actual experience “in the trenches” with an initiality proof, thereby hopefully leading us to more informed opinions.

I don’t view such a project as a replacement for proving one general theorem, but as a complementary effort, whose goals are primarily understanding and exposition. However, if it’s successful, the result will be a complete initiality theorem for at least one dependent type theory; and we can add as many bells and whistles to this theory as we have time and energy for, hopefully in a relatively modular way.

We had some preliminary discussion about this project at the nForum here, at which enough people expressed interest in participating that I think the project can get off the ground. But the more the merrier! If you’d like to help out, even just a little bit, just add your name to the list of participants on this nLab page and join the conversation when it begins. (Some other people have informally told me they’re interested, but I didn’t keep a record of their names, so I didn’t add them to the list; if you fall in that category, please add yourself!)

I’m not sure yet how we will do most of our communication and coordination. We’ll probably have one or more nForum threads for discussion. I think it might be nice to have some scheduled videoconference meetings for those who can make it, especially during the early stages when we’ll have to make various decisions that will affect the overall course of the project; but I’m not wedded to that if others aren’t interested. Most of the work will probably be individual people writing out proofs of inductive cases on nLab pages.

Some of the decisions we’ll have to make at the beginning include:

  • What type theory should we aim for as a first target? We can always add more to it later, so it should be something fairly uncomplicated, but nontrivial enough to exhibit the interesting features. For instance, I think it should certainly have $\Pi$-types. What about universes?

  • Exactly how should we present the syntax? In particular, should we represent variable binding with named variables, de Bruijn indices, or some other method? Should all terms be fully annotated?

  • What categorical structure should we use as the substrate? Options include contextual categories (a.k.a. “C-systems”), categories with families, split comprehension categories, etc.

  • How should we structure the proof? The questions here are hard to describe concisely, but for instance one of them was mentioned by Peter Lumsdaine at the nForum thread: Streicher phrases the induction using an “existential quantifier” for interpretation of contexts, but it is arguably easier to use a “universal quantifier” in the same place.

Feel free to use the comments of this post to express opinions about any of these, or about the project overall. My current plan is to wait a couple weeks to give folks a chance to sign up, then “officially” start making plans as a group.

Posted at September 26, 2018 8:01 PM UTC

36 Comments & 0 Trackbacks

Re: A Communal Proof of an Initiality Theorem

I have heard that some folks at Stockholm University are working on something just along these lines. I attended a talk by Guillaume Brunerie some weeks ago, where he outlined a definition of a general type theory and a proof of initiality for its syntactic category. Peter LeFanu Lumsdaine has been working on the same things, collaborating with someone else (can’t remember who!) independently of Guillaume. So perhaps we are in luck if we just wait for their papers to appear! :)

Posted by: Oskar Berndal on September 26, 2018 10:11 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

I’m aware of the folks working on general initiality theorems; in fact I mentioned them specifically in the post:

Others… are… attempting to give a general definition of “type theory” and prove a general initiality theorem for all such “type theories”.

as well as the relationship between this project and theirs:

I don’t view such a project as a replacement for proving one general theorem, but as a complementary effort, whose goals are primarily understanding and exposition.

I’ve also already spoken to Peter Lumsdaine and Andrej Bauer about this project, and both agreed that it seemed complementary to their effort and worth pursuing at the same time.

And also, to be frank, I don’t want to wait any longer. I think it’s gotten to the point that even someone who doesn’t believe it’s intrinsically important to write down initiality theorems completely might still agree that the existence of the controversy, and its failure to get resolved in a timely manner, is damaging enough to the perception of our subject among other mathematicians to make it worth the effort.

Personally, I also have another motivation: I want to understand the proofs of initiality theorems, and the best way to understand a proof is to help create it yourself. And I hope that everyone else who joins in this project will derive the same benefit of increased understanding.

Posted by: Mike Shulman on September 26, 2018 11:48 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Oh, right. Sorry for the misunderstanding.

Posted by: Oskar Berndal on September 27, 2018 3:59 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

It strikes me that this could be an official Polymath Project. It would make a change from combinatorics/analytic number theory…

Posted by: David Roberts on September 26, 2018 10:44 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

I thought of something like that. But my impression is that Polymath projects have mainly been trying to solve important open problems by collecting lots of ideas from many contributors, and I wouldn’t put initiality theorems at that level. In particular, I wouldn’t want to give other mathematicians the impression that type theory and HoTT have nothing better to offer as a Polymath project than what is, essentially, a big bookkeeping problem.

Obviously this is closely related to my own opinion about the status and importance of initiality, which I tried to shy away from stating outright in the main announcement. The participants in this project don’t, after all, have to agree on the status and importance of initiality; all we need is the common goal of writing down a proof of it.

Posted by: Mike Shulman on September 27, 2018 12:00 AM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

I’d very much agree, this doesn’t seem suitable for a Polymath project. My own personal opinion is that it is in some sense a straightforward extension of Streicher’s proof, maybe with minor modifications needed, but no substantially new ideas — but it’s lengthy and fiddly enough that one can’t comfortably say up front that it’s clearly straightforward.

So I’m very much in support of this project; and like Mike said, I think this — writing up initiality for a specific type theory — nicely complements the more general approaches that several of us are working on.

Posted by: Peter LeFanu Lumsdaine on September 27, 2018 12:44 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Fair enough! :-)

Posted by: David Roberts on September 27, 2018 1:39 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Good idea, Mike – let’s put this to rest!
I’m in.

Posted by: Steve Awodey on September 27, 2018 12:12 AM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

One question in my mind is how much we want to stick to a standard presentation of a type theory versus using more “modern” technology – e.g. bidirectional type checking.

As far as I know, the received wisdom is that to define the partial interpretation of raw terms, it is necessary to label each piece of syntax with all of the metavariables occurring in its typing rule. E.g. the raw term for application is app_{A,x.B}(f,a). But I wonder if some of these annotations could be eliminated if we use bidirectional type checking, and/or pay close attention to the presuppositions of the judgements, and the corresponding quantifiers in the statement of the interpretation, as in Peter’s comment that you mentioned.
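
To make the contrast concrete, here is a quick sketch (with made-up constructor names, de Bruijn variables, and only $\Pi$ as a connective) of fully annotated versus unannotated raw syntax as datatypes:

    (* Hypothetical raw syntax; names are illustrative only. *)
    type raw =
      | Var of int
      | Pi of raw * raw                 (* Pi (A, B), with B under a binder       *)
      | Lam of raw * raw                (* fully annotated: Lam (A, body)         *)
      | App of raw * raw * raw * raw    (* fully annotated: App (A, B, f, a),
                                           i.e. the raw term app_{A, x.B}(f, a)   *)

    type raw_unannotated =
      | Var' of int
      | Pi' of raw_unannotated * raw_unannotated
      | Lam' of raw_unannotated                       (* no domain annotation     *)
      | App' of raw_unannotated * raw_unannotated     (* just app(f, a)           *)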

By presuppositions, I mean that, for example, you can think of the typing judgement as having subjects

{G | G ctx} |- a : {A | G |- A type}

where {G | G ctx} is the subset of the raw contexts G which are (merely) well-formed according to the context formation judgement. This is slightly different than a judgement

G |- a : A

where G and A are raw syntax and it is an admissible invariant of the rules that if the judgement holds, then there is a derivation of G ctx and of G |- A type, which is another way that people set things up.

My general impression is that what works best for minimizing the size of derivations and the repeated information is to set things up bidirectionally, presuppose that “inputs” are well-formed, and ensure that “outputs” are well-formed. So I wonder if this allows you to push through a categorical semantics with fewer annotations as well.
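
For concreteness (this is just the standard shape of such rules, not anything specific to my proposal), the bidirectional application and mode-switching rules look roughly like

$$\frac{\Gamma \vdash f \Rightarrow \Pi(x:A)B \quad \Gamma \vdash a \Leftarrow A}{\Gamma \vdash f\,a \Rightarrow B[a/x]} \qquad\qquad \frac{\Gamma \vdash e \Rightarrow A \quad \Gamma \vdash A \equiv B}{\Gamma \vdash e \Leftarrow B}$$

where $\Rightarrow$ is synthesis (the type is an output) and $\Leftarrow$ is checking (the type is an input).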

Posted by: Dan Licata on September 27, 2018 1:40 AM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Mike started an nLab page on bidirectional typechecking a few days ago.

Posted by: David Corfield on September 27, 2018 8:18 AM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

This is a really interesting question, and one that I’ve thought about a lot as well.

If we take the Streicher approach of first defining a partial interpretation function on raw judgments of the form $\Gamma \vdash M : A$, and then proving by induction on derivations that it is total on derivable judgments, then I don’t know how to get away from annotations even in a bidirectional theory. For instance, in an un-annotated application $App(F,X)$, where $F$ could be some arbitrary term, there is no way to produce from the raw syntax judgments of the form $\Gamma \vdash F : ?$ and $\Gamma \vdash X : ?$ that we could interpret inductively in order to apply the one to the other. These judgments only come from a bidirectional derivation of $\Gamma \vdash App(F,X) \Leftarrow C$ which first synthesizes a type $\Pi(x:A)\,B[x]$ for $F$ and then checks $X$ at type $A$. Unless there is some trick that I’m missing?

However, with a bidirectional system it becomes much more tempting to take a different approach than Streicher’s and define the interpretation function directly on derivations rather than bothering about a partial interpretation on raw judgments. Last year I proposed doing this with a bidirectional system of the “hereditary substitution” variety, in which only normal/canonical (and neutral/atomic) terms exist, in the hopes that it would have more direct higher-categorical semantics. At the time there didn’t seem to be much interest, but I still like that idea.

But even if we didn’t go all the way to hereditary substitution and used a bidirectional theory that does include non-normal terms (e.g. with type ascriptions to make redexes typable), we could still consider defining an interpretation by induction on derivations rather than raw judgments. As I understand it, the problem with doing this for a unidirectional theory (is that the opposite of “bidirectional”?) is that the conversion rule

$$\frac{\Gamma \vdash M:A \quad \Gamma\vdash A\equiv B\;\mathrm{type}}{\Gamma\vdash M:B}$$

means that one derivable judgment has many different derivations, whereas we eventually want an interpretation function that depends only on the judgment and not how it was derived. But a bidirectional theory eliminates (or at least reduces) this ambiguity by pushing all uses of the conversion rule into canonical places (the switch between checking and synthesizing mode), so that every derivable judgment has a canonical (or maybe even unique?) derivation.

However, right now my inclination is that it would be better for this project to start out using the “traditional” Streicherian approach for a unidirectional theory with unpleasant annotations, and regard a bidirectional theory as “syntactic sugar” that can be “compiled” into the fully annotated theory before being interpreted, as Andy and Peter suggested below. (I believe this “compilation” idea has already been made fully precise for some type theories in the literature, by enhancing the bidirectional typing judgments to include a fully annotated “output” term that gets produced as you go along.)
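
Schematically (this is only the general shape such an elaborating judgment takes, not a claim about any particular paper), the two modes become

$$\Gamma \vdash e \Rightarrow A \rightsquigarrow e' \qquad\qquad \Gamma \vdash e \Leftarrow A \rightsquigarrow e'$$

where $e$ is a lightly annotated surface term and $e'$ a fully annotated core term, so that e.g. the application rule can output $App^{x:A.B}(F',M')$ with its annotations read off from the synthesized $\Pi$-type.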

I don’t know whether I agree with Peter that this will be cleaner. In fact, I quite like the idea that with a bidirectional theory we might be able to do away with the partial interpretation function and induct directly on derivations (and potentially also induct directly into higher categories without needing to strictify them). But the only way to really find out which approach is cleaner is to actually do both of them, and it makes sense to start with the one that’s already better-understood.

Furthermore, bidirectional typechecking is an extra layer of complication in the presentation of a type theory, and since a primary goal of this project is expositional, we want the result to be as accessible as possible to as many readers as possible. For instance, it would be nice if someone whose main exposure to type theory has been the HoTT book (including the brief appendix about formal syntax) could dive right into reading this initiality proof, and for that purpose it would be best to stick as close as possible to easier-to-understand unidirectional theories.

I think my response to Paolo is the same. Going through a calculus with explicit substitutions is an interesting idea, but in order to tell whether it’s better we have to actually do both versions, and to start with I would rather keep the type theory as close as possible to the ones that we use in practice, with implicit substitutions.

This is all just my current opinion, though. I’m coordinating the project (because someone has to), but I’m not the dictator, and I can be convinced or overruled.

Posted by: Mike Shulman on September 27, 2018 12:50 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Mike: But a bidirectional theory eliminates (or at least reduces) this ambiguity by pushing all uses of the conversion rule into canonical places (the switch between checking and synthesizing mode), so that every derivable judgment has a canonical (or maybe even unique?) derivation.

Yes, unique. I think. It seems that with all algorithmic systems of rules, derivations are unique. Not just bidirectional ones. The conversion rule is famously non-algorithmic.

Posted by: Matt Oliveri on September 28, 2018 8:49 AM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Insofar as derivations are unique, it suggests that we could define an interpretation directly by induction on derivations rather than on raw syntax first. Alternatively, we could regard type synthesis and checking as not inductively defined judgments at all, but rather (partial) functions defined recursively on raw syntax, and then define interpretation as a partial function in parallel with them that can be proven total whenever they are total.
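
In code, the signatures of such a setup might look something like the following (all the names here are invented for illustration, and the semantic types stand for whatever the chosen categorical substrate provides):

    (* A sketch of synthesis/checking as partial functions on raw syntax,
       with the interpretation defined in parallel as another partial
       function into the semantics. *)
    module type ALGORITHMIC = sig
      type raw_ctx
      type raw_tm
      type raw_ty
      type sem_env   (* an object of the category together with a partial environment *)
      type sem_tm
      type sem_ty
      val synth  : raw_ctx -> raw_tm -> raw_ty option
      val check  : raw_ctx -> raw_tm -> raw_ty -> bool
      val interp : sem_env -> raw_tm -> (sem_tm * sem_ty) option
    end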

However, I think all of this breaks for “ill-behaved” type theories with rules such as equality reflection, where nothing is algorithmic any more. But initiality for such theories should still be true, so it would be good to have a framework for initiality that doesn’t depend on well-behavedness.

It may still be possible to use some ideas from bidirectionality, though, as I suggested below. For instance, while still treating all judgments as inductively defined, we could separate type checking from type synthesis while retaining equality as a unidirectional judgment. I don’t think this would lose any important generality (though I’m ready to be proven wrong), and it might enable a cleaner description of the interpretation function.

Posted by: Mike Shulman on October 5, 2018 2:56 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Mike: Insofar as derivations are unique, it suggests that we could define an interpretation directly by induction on derivations rather than on raw syntax first.

Is the induction principle strong enough to prove “on the fly” that the interpretation results are meaningful? For example, that we didn’t apply a function to an object outside its domain? You like strongly-typed math, right? You sure this type checks? Remember: the extrinsic typing rules aren’t inductive-inductive.

Alternatively…

Yes. Some way or another, you should be able to think of it as a partial function. I’m just not convinced that there’s actually an easier way to organize this than Streicher’s method, that doesn’t depend on induction-induction principles you haven’t actually proven. In some sense, the whole point is to prove an induction-induction principle for the syntax.

Posted by: Matt Oliveri on October 6, 2018 12:17 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Well, all of that is for another time anyway. Right now I think we’re going to use Streicher’s method (with tweaks).

Posted by: Mike Shulman on October 6, 2018 3:17 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

For the $1, \Sigma, \Pi$ fragment of type theory, using a bidirectional framework ought to be something you can make work. It even seems likely you could get as far as a canonical forms presentation which omits definitional equality. (This would let you easily show that derivations and typed terms are in 1-1 correspondence.)

However, once you add positive types, things get more complicated, and AFAICT universes make this approach incredibly hairy. Basically once you have type variables you have to mark neutral terms of variable type to eta-expand them on-demand. (I learned this from Aleks Nanevski and company’s 2006 paper “Hoare Type Theory, Polymorphism and Separation”. It’s a nice idea, but one I’ve never seen much follow up to.)

As far as the issue of explicit substitutions or not goes, I think it hinges on the choice of de Bruijn notation or not. If you use de Bruijn notation then you will need explicit weakening markers in your syntax, from which explicit substitutions are not a big step. If you have explicit variable names you can omit the weakenings in the term syntax (because weakening maps between contexts with names are unique).

I don’t know whether de-Bruijn+explicit-substitution or named+no-substitutions is easier, but those two seem like the two most reasonable choices to me.

Posted by: Neel Krishnaswami on September 27, 2018 3:45 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

once you have type variables you have to mark neutral terms of variable type to eta-expand them on-demand.

Can you explain that? I haven’t run across such a problem, and there’s too much unrelated stuff in that paper for me to extract this point. Are you talking specifically about a canonical-forms presentation only, or about any kind of bidirectional typechecking?

If you use de Bruijn notation then you will need explicit weakening markers in your syntax

Why? I always thought that de Bruijn weakening could be defined as an operation on syntax just like implicit substitution, recursing over a term to increment all the free variables.
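
For instance, here is a minimal sketch (ad hoc names, untyped terms only) of the operation I have in mind:

    (* Increment every free variable >= the cutoff c by d, recursing over the
       term; the cutoff goes up as we pass under a binder. *)
    type tm = Var of int | Lam of tm | App of tm * tm

    let rec shift d c = function
      | Var k -> if k >= c then Var (k + d) else Var k
      | Lam t -> Lam (shift d (c + 1) t)
      | App (f, a) -> App (shift d c f, shift d c a)

    (* Weakening by one fresh variable is then just: *)
    let weaken t = shift 1 0 t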

Posted by: Mike Shulman on September 27, 2018 5:00 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

  1. The issue with universes is basically the following. Suppose you have the following derivation:

    $a : U, x : \mathrm{El}(a) \vdash x : \mathrm{El}(a)$

    In this context, $x$ is in beta-normal, eta-long form.

    Now, suppose that you substitute $1$ (the code for the unit type) for $a$. Now, $x$ is no longer in beta-normal, eta-long form – it needs to be eta-expanded to the unit value $()$.

    However, you can’t look at just the term to figure this out, which makes defining hereditary substitution much harder. What Nanevski et al did was to annotate neutral terms at neutral types, so that you end up writing something like:

    $a : U, x : \mathrm{El}(a) \vdash \mathsf{eta}(x, \mathrm{El}(a)) : \mathrm{El}(a)$

    Now, substituting $1$ for $a$ would make $\mathrm{El}(a)$ reduce, leaving you with a unit type to tell you to eta-expand $x$ to $()$. So now you no longer have to carry around the context, and can define hereditary substitution purely on terms.

  2. As far as weakening goes, you’re right and I’m wrong. I have been thinking too much about type inference lately, and in that context weakening/renaming substitutions must be suspended when they touch a unification variable, pushing inwards only after the unification variable is solved. But obviously a core type theory doesn’t have any unification variables in it!

Posted by: Neel Krishnaswami on September 28, 2018 10:51 AM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Thanks! Do I understand correctly, though, that this issue with universes only arises for canonical-form-only type theories? In a bidirectional type theory that permits redexes and non-eta-long terms it seems like it would not be an issue.

Posted by: Mike Shulman on September 28, 2018 2:20 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Yes, it only applies to canonical-forms-only theories.

IME, bidirectional systems with non-canonical terms are a little more annoying to work with than fully-annotated terms. The issue is that you need a bit of extra machinery to handle terms with nested annotations like $((e : A) : B)$. Nothing goes wrong, or is even hard, but it’s all just a bit grungy.

Posted by: Neel Krishnaswami on October 1, 2018 1:11 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Such grunginess is mainly in the implementation, though, right? I would expect the user not to usually encounter such things.

Actually, I have a question about exactly this, which you or someone else here might be able to answer. In this post McBride describes a bidirectional system with two categories of untyped terms, those that will be checked and those that will be synthed, with an embedding $[-]$ making a synthed term checked and an ascription $(- : T)$ making a checked term synthed. He then has an “$\upsilon$-reduction” rule $[t:T] \rightsquigarrow t$. This seems related to your point about double ascriptions: $[t:A]:B$ reduces to $t:B$ by $\upsilon$-reduction, while we can’t write simply $(t:A):B$ since $t:A$ is a synthed term and only checked terms can be ascribed.
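
To fix the shapes in my head, I picture the two syntactic categories and the $\upsilon$-rule as something like the following sketch (the constructor names are mine, and this only shows the general shape of such a system, not McBride’s actual definitions):

    (* Checked and synthesizing terms, mutually defined. *)
    type chk =
      | Lam of chk                  (* introduction forms are checked           *)
      | Emb of syn                  (* [e] : a synthesizing term used where a
                                       checked one is expected                  *)
    and syn =
      | Var of int
      | App of syn * chk            (* elimination forms synthesize             *)
      | Asc of chk * chk            (* (t : T) : ascribing a type to a checked
                                       term makes it synthesize; T is itself a
                                       checked term, since types are terms      *)

    (* upsilon-reduction [t : T] ~> t : drop the ascription when it is
       immediately re-embedded into checking position. *)
    let upsilon : chk -> chk = function
      | Emb (Asc (t, _)) -> t
      | t -> t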

My question is, don’t you also need some kind of rule comparing $[e]:T$ with $e$? Maybe this should be some kind of $\eta$-like rule rather than a reduction, but it seems like it ought to be present somehow.

Posted by: Mike Shulman on October 1, 2018 3:23 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Yes, you do need rules amounting to that, though it might not be exactly that rule. (You just need that equation to be admissible.)

In fact, this is basically the problem! One of the things I like about bidirectional systems is that they reduce the number of choices that you have to make when writing down a type theory.

However, when you have these extra type annotations, there are several plausible ways to make them go away, and so you have to think carefully to ensure you’ve made a coherent set of choices. It’s always possible, but it doesn’t feel as natural as the rest of bidirectional typechecking.

For example, about 10 years ago I wrote a paper on pattern matching in functional languages, which was done in a bidirectional style. Because bidirectionality/polarization eliminated so many choices, today I could reconstruct basically every part of the paper from memory – except the rules for getting rid of useless type annotations!

Posted by: Neel Krishnaswami on October 5, 2018 11:34 AM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Actually, though, now that I think about it more, it seems that some amount of bidirectionality is already lurking in the ‘standard’ approach to initiality, and making it explicit might be helpful. For instance, in Peter’s sketch he wrote

For each (e : Expr cl γ), where cl is a syntactic class and γ a context-of-variables, we’ll give:

  • in case cl=ty: for each X:C and partial environment for γ on X, a partial element of Ty X
  • in case cl=tm: for each X:C and partial environment for γ on X, a partial element of Tm X

An alternative at this step: we could assume when interpreting terms that their type is already interpreted, i.e. define the goal type of the induction as

  • in case cl=tm: for each X:C, partial environment for γ on X, and A in Ty Г, a partial element of (Tm X A)

This seems perhaps conceptually nicer than the version above, but that version seemed to work out slightly more easily in practice.

This seems to me to be essentially a semantic version of the difference between regarding $e$ as a synthesizing judgment or as a checking judgment. Even in a fully annotated theory where all terms can synthesize a type, there’s still room to distinguish the two judgments, because a term will synthesize only one syntactic type (at least, if the synthesis is single-valued functional), but should check against any type that is (judgmentally) equal to the type it synthesizes. And I suspect that if this sort of “mode switching” is not made explicit, it will nevertheless be implicitly embedded in the inductive clauses.

For instance, when interpreting an application $App^{x:A.B}(F,M)$, and (in Peter’s notation) an object $X:C$ and partial environment $E$, if we take the first approach (“all judgments are synthesizing”) we have to do something like this (see also the code sketch after the list):

  • attempt to interpret $A$ over $(X,E)$, yielding $[[A]] : Ty(X)$
  • attempt to interpret $B$ over $(X.[[A]],\ E.[[A]])$, yielding $[[B]] : Ty(X.[[A]])$
  • attempt to interpret $F$ over $(X,E)$, yielding $[[F]] : Tm(X,P)$
  • check that $P = \Pi\,[[A]]\,[[B]]$
  • attempt to interpret $M$ over $(X,E)$, yielding $[[M]] : Tm(X,Q)$
  • check that $Q = [[A]]$
  • now the result is the semantic application of $[[F]]$ to $[[M]]$, in $Tm(X, [[M]]^*[[B]])$.
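
Written as code, that clause might look something like the following sketch; every name in it (interp_ty, sem_pi, the environment, and so on) is a placeholder for structure supplied by whatever categorical substrate we choose, so this only shows the shape of the partial interpretation.

    (* Option-monad bind, to chain the partial steps. *)
    let ( let* ) o f = match o with Some x -> f x | None -> None

    (* Schematic clause for App^{x:A.B}(F,M) over (X,E): interp_ty / interp_tm
       are the partial interpretations of types and terms, ext extends (X,E) by
       a semantic type, and sem_pi / ty_eq / sem_app come from the structured
       category. *)
    let interp_app ~interp_ty ~interp_tm ~ext ~sem_pi ~ty_eq ~sem_app a b f m xe =
      let* sA = interp_ty a xe in                  (* [[A]] : Ty X            *)
      let* sB = interp_ty b (ext xe sA) in         (* [[B]] : Ty of X.[[A]]   *)
      let* (sF, p) = interp_tm f xe in             (* [[F]] : Tm of X at P    *)
      let* () = if ty_eq p (sem_pi sA sB) then Some () else None in
      let* (sM, q) = interp_tm m xe in             (* [[M]] : Tm of X at Q    *)
      let* () = if ty_eq q sA then Some () else None in
      Some (sem_app sA sB sF sM)                   (* lands in Tm of X at the
                                                      substituted type        *)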

The paired steps of “interpret, yielding a (semantic) term in a (semantic) type, then check that the type equals something else” are exactly a semantic version of the mode-switching rule in a bidirectional theory:

$$\frac{\Gamma \vdash M \Rightarrow A \quad \Gamma \vdash A \equiv B}{\Gamma\vdash M \Leftarrow B}.$$

So even if we are not making any use of the ability of a bidirectional theory to omit annotations, it seems that it might still be helpful to distinguish synthesizing and checking judgments in the syntax, because they correspond to different modes of interpretation: when interpreting a synthesizing judgment we should produce a semantic term and its semantic type, while when interpreting a checking judgment we should take the semantic type as given and produce a semantic term belonging to it.

Additionally, if we set up the basic framework of the theorem using both judgments, then it will be more straightforward later to consider adding rules to the theory (or replacing existing ones) that make more use of bidirectionality.

Posted by: Mike Shulman on October 5, 2018 12:13 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Mike: This seems to me to be essentially a semantic version of the difference between regarding e as a synthesizing judgment or as a checking judgment.

[…]

And I suspect that if this sort of “mode switching” is not made explicit, it will nevertheless be implicitly embedded in the inductive clauses.

Yes! I think you’re right. I was thinking about bidirectional interpretation after I ran into coherence trouble with my formalized interpreter. (Actually when I told you recently that checking mode is like elements and synth mode is like pointed types, it was because I remembered this.) Although I didn’t actually try it, I’m pretty sure it works just as well/badly as the synth-only approach. The really interesting thing though, pertaining to coherence issues, is that it seems like all coherence issues would now be caused by exactly one rule: the synth-to-check direction switch. In interpreter form, this rule has semantic types for the same type expression coming from both directions, and it needs to coerce the interpreted element between them.

My difficulty with the function application rule does indeed seem to have been caused by the implicit direction switch. Basically, coherence problems whenever you needed to check type equality, syntactically.

So even if we are not making any use of the ability of a bidirectional theory to omit annotations…

This is not so easy, by the way. The clauses of the interpretation resemble the “sanity check” for the typing rules. For the usual sorts of bidirectional rules, the latter relies on inversion lemmas for type derivations. There’s normally nothing analogous in a model, for an interpreter to use.

My interpreter had a funny hack, where I first wrapped semantic types in a universe construction, and this let me prove analogues of the inversion lemmas. This was relatively unsuspicious, by sheer luck, for the simple language I interpreted. It seems less clear that you can get away with it once the object language has at least one universe closed under at least one type constructor. But it might work, by injecting semantic types into the construction at each universe level. Hopefully it’d turn out that whenever the typing rules mention a type constructor, the semantics can use a constructor of the universe, rather than the injection.

But yeah, if you tweak the usual bidirectional rules to add any premises you turn out to need, it should work fine without any universe construction.

My impression is that for type systems where the proof theory is already well understood, it doesn’t seem worthwhile to play clever tricks in the interpretation, since you can fix everything up on the purely syntactic side.

Posted by: Matt Oliveri on October 6, 2018 10:31 AM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Dan Licata’s comment is interesting. But I am in two (or more) minds about the status of the whole “annotations” problem (referred to in Mike Shulman’s second bullet point), by which I mean the problem that to assign meaning (in some structured category) to actual pieces of syntax, rather than assigning meanings to the derivations of their wellformedness, the syntax gets decorated with explicit annotations that no-one writes in practice.

Perhaps for this project one should begin with a straightforward “core” type theory with lots of annotation. Then one might separately study less annotated type theories and elaboration algorithms (and their correctness) for constructing expressions in the “core” for them. Is that different from organizing the less annotated type theory into a structured category and proving initiality? I think it is.

Posted by: Andy Pitts on September 27, 2018 8:48 AM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

I rather agree with Andy here: I’d expect it’ll be much cleaner to present initiality in terms of a fully annotated syntax, and leave the comparison with less-annotated syntax to be done entirely within the realm of syntax — much like how it is (I think) clearer to state initiality using a strict algebraic structure (CwA’s, contextual categories, …) and then give comparisons with weaker structures (type-theoretic fibration categories, …) purely in the categorical realm.

This disentangles the different things going on — proof-theoretic issues, initiality itself, and (higher-)category-theoretic issues.

Posted by: Peter LeFanu Lumsdaine on September 27, 2018 12:33 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

This is a great idea! I signed up on the nlab page.

One important design parameter is whether we want explicit substitution in the syntax. I think with explicit substitution the initiality theorem is not much more than unfolding a construction of the free CwF (with such and such structure…) given say by a proof of the adjoint functor theorem or a theorem about initial models of finite limit sketches. The structure you get from those theorems is very similar to a syntax with explicit substitution, and the work required to put it into the specific form that one wants to call “syntax” should be completely straightforward (and probably not very mathematically illuminating).
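
To give an idea of what I mean by syntax with explicit substitution, here is a sketch (with CwF-style operations as constructors; the names are only illustrative):

    (* Terms and substitutions as mutually defined syntax. *)
    type tm =
      | Q                            (* the last variable of the context;
                                        other variables are Q[p], Q[p . p], ... *)
      | Sub of tm * sub              (* t[sigma] : explicit substitution        *)
      | Pi of tm * tm
      | Lam of tm
      | App of tm * tm
    and sub =
      | Id                           (* identity substitution                   *)
      | P                            (* weakening / projection p                *)
      | Comp of sub * sub            (* composition sigma . tau                 *)
      | Ext of sub * tm              (* (sigma, t) : extension by a term        *)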

I guess, however, that a satisfactory initiality theorem should be about syntax with implicit substitution. In that case I wonder if one can use the explicit substitution version of the theorem as a stepping stone, and turn the problem into proving a certain “cut elimination” theorem in the appropriate sense. The hope would be that such a result could be proved via some Yoneda trick / NbE, similarly to how one can prove the coherence theorem for monoidal categories (which itself can be put in a form that looks like an initiality theorem) via embedding into a stricter larger category.

Posted by: Paolo Capriotti on September 27, 2018 10:54 AM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

I can’t figure out whether this experimental initiality theorem is going to be a formal proof. I want in only if it will be formalized in Coq. I don’t think I’d want to do the work that comes up otherwise.

Posted by: Matt Oliveri on September 28, 2018 1:10 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

I am not personally organizing a formalization aspect of the project at this time. However, it could be that other people would also be interested in a formalization, so if anyone else wanted to organize a parallel project to formalize the result of this one, you might get some takers.

Posted by: Mike Shulman on September 28, 2018 2:20 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

So does that mean I should add my name to the list, just in case? Or not add it, just in case?

In other words is it a “maybe I’ll help” list or a “I’ll probably help” list?

Posted by: Matt Oliveri on September 28, 2018 2:58 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

You could start a new “interested in formalizing the results of the project” list. If that list starts to look substantial, then you or someone else could step up to organize such an effort.

Posted by: Mike Shulman on September 28, 2018 3:08 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

This is getting kind of complicated. If we make a list of people who’d be willing to help formalize it without also finding out who wants it formalized, then we could end up wasting time on an undesired formalization.

I think I’ll just wait and see for now.

Posted by: Matt Oliveri on September 28, 2018 4:34 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

As an amateur of mathematical logic, I can propose a contribution to the 2nd point: How should we present the syntax?, based on an ongoing development of a logical framework designed for analytic presentations of dependent type theories. It is still partly a prospective work, but I think that I have reached a point where experts can quickly judge whether or not there is any interest in it…

The logical framework is a dependent type theory that is designed as a general “rule theory”. The underlying guiding principle (that I will not debate here) is that mathematics can be understood as the science of systems of formal rules; therefore, by formalising adequately the concept of ‘rule’ and the concept of ‘system of rules’, we can expect to obtain a logical framework allowing the formalisation of any mathematical theory.

It turns out that the resulting “rule theory” is a dependent type theory with some promising properties, which however remain to be verified and explored. The rules of the “rule theory” are almost fully specified, but only partially implemented & tested in OCaml (I am able to define N and compute the Ackermann function in the framework, but still need to improve some rules and the testing environment to verify that I am able to fully recover the induction rules for classical inductive types and HITs - I am progressing only slowly…).

To give you a taste of the “rule theory” framework:

  • It is essentially based on the syntactic definition of the normal forms; all terms are strongly normalizable using a few reduction rules. The most important of them is the rule that reduces the instance of the induction rule for an inductive type A applied to its constructors into the identity function ‘λ(x:A).x’ (the framework defines a generic induction rule that is then instantiated for the defined inductive types). The type of a term can always be computed from the term (and its context): therefore the syntax defines only what is a term in a context. Terms, types and universes have the same syntactic form; they are just ‘rules’ in the rule theory. One type can ‘match’ different other types, as specified by a few matching rules (if a inhabits A and A matches B, then a inhabits B). Universes are just the consequence of the matching rule that says that, for any types A, B and U, if A and B inhabit U, then A->B inhabits U. For variable naming, I am using an index à la de Bruijn; however, the variable index I am using is defined as the position of the variable definition in the context (that is to say the size of the context where the variable is defined), so that a variable has a fixed index in a syntax expression. This leads to a quite intuitive multi-variable recursive implementation of substitution/induction/evaluation.

  • There is no formal difference between terms, types and universes. Every term/type x is equipped with an identity function λ(z:x).z. The possibility to reduce the “do nothing” inductive rules into the identity function should allow the definition of the classical type equivalence without identity types (using the direct recognition of the identity function instead); this type equivalence can be used as a term equivalence, because there is no formal difference between terms and types. [this is not yet verified on computer].

  • The generic induction rule allows the definition of inductive universes, where all internal types/sub-universes are defined together inductively [this is defined & implemented]. This should allow the definition of generalized inductive types, and HITs with a computable induction rule (using equivalence as identity: identity types cannot be used for HITs in the rule theory) [This is not yet verified on computer]. The “rule theory” gives an intuitive justification for its generic induction rule, which also explains the need for the strict positivity condition. [the generic induction rule takes a different form than the classical induction rules; I still need to verify on computer that the classical form can be fully recovered].

  • [this is only speculative work] The logical framework allows the definition of “universal” properties, defined for all terms/types of a theory. It seems that the information available in the syntax of the framework for such universal properties is good enough to compute from the proof of a universal property of a term/type the proof of the same property for any equivalent term/type [this is today not much more than an intuition from a computer science point of view]. That would make mathematical theories in the framework univalent, if we defined type/term equality as indiscernibility by universal properties (it is still possible to define some non-universal “evil” properties - the HoTT identity types are one of them in the rule theory, where identity types match definitional equality).

If there is interest in my work, I should be able to deliver a short draft paper next weekend, containing a more complete explanation of the rule theory framework, with its rules and its syntax, in its current state of (partial) development.

Posted by: Louis on October 2, 2018 9:36 PM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Thanks for offering this! However, I think that for the current project we ought to stick to established ways of presenting type theories rather than experiment with new things: the point is to understand and exposit the state of the art.

Posted by: Mike Shulman on October 2, 2018 9:39 PM | Permalink | Reply to this

Exotic Logical Framework

Louis: The most important of them is the rule that reduces the instance of the induction rule for an inductive type A applied to its constructors into the identity function ‘λ(x:A).x’.

The possibility to reduce the “do nothing” inductive rules into the identity function should allow the definition of the classical type equivalence without identity types…

This sounds like eta reduction for inductive types. It’s known, but rarely used. I don’t understand how this could help express type equivalence.

For variable naming, I am using an index à la de Bruijn; however, the variable index I am using is defined as the position of the variable definition in the context (that is to say the size of the context where the variable is defined), so that a variable has a fixed index in a syntax expression.

In other words, you refer to a variable by its index from the left end of the context, rather than from the right end? So that when you bind a variable, and it appears on the right end, it doesn’t change the index of other variables? That’s de Bruijn “levels”. It’s distinguished from de Bruijn “indexes” because it has different properties.

(it is still possible to define some non-universal “evil” properties - the HoTT identity types are one of them in the rule theory, where identity types match definitional equality)

It seems like you’re doing unusual things to avoid identity types. You seem to believe they will be “evil” in your setup. But in HoTT, identity types aren’t evil, they’re (homotopy-)invariant, like all type families. What’s up with this?

I would like to talk about your project, but perhaps this comments section is not an appropriate place to do so, if others agree (with Mike) that this initiality proof should not be trying something new and clever.

How about you re-announce on HoTT Cafe?

Posted by: Matt Oliveri on October 3, 2018 2:29 AM | Permalink | Reply to this

Re: A Communal Proof of an Initiality Theorem

Okay, let’s get started! We have close to 40 participants, which is awesome but may require some effort to organize. I’ve started an nForum thread to discuss all the questions we need to answer at the beginning. I also wrote up a concrete proposal of ways to answer many of those questions, just to give us a place to start the discussion from.

Posted by: Mike Shulman on October 18, 2018 8:55 PM | Permalink | Reply to this
