
February 20, 2021

Native Type Theory (Part 1)

Posted by John Baez

guest post by Christian Williams

Native Type Theory is a new paper by myself and Mike Stay. We propose a unifying method of reasoning for programming languages: model a language as a theory, form the category of presheaves, and use the internal language of the topos.

\mathtt{language} \xrightarrow{\;\Lambda\;} \mathtt{category} \xrightarrow{\;\mathscr{P}\;} \mathtt{topos} \xrightarrow{\;\Phi\;} \mathtt{type\; system}

Though these steps are known, the original aspect is simply the composite and its application to software. If implemented properly, we believe that native types can be very useful to the virtual world. Here, I want to share some of what we’ve learned so far.

The Free Lunch

The biggest lesson of this paper was to have faith in what John Baez calls the dao of mathematics. For two years, Mike and I and Greg Meredith looked for ways to generate logics for programming languages; we tried many approaches, but ultimately the solution was the simplest.

Two facts of category theory provide an abundance of structure, for free:

1. Every category embeds into its presheaf topos.

2. Every topos has a rich internal language.

The embedding preserves limits and exponents, hence we can apply this to “higher-order theories”.

For now we explore the language of a topos. There are multiple views, and often its full power is not used. Toposes support geometric logic, predicate logic, and moreover dependent type theory. We emphasize the latter: dependent types are expressive and pervasive, yet underutilized in mathematics.

The Language of a Topos

My thinking has been shaped by the idea that even foundations are categorical. Virtually any language can be modelled as a structured category: the most comprehensive reference I’ve found is Categorical Logic and Type Theory by Bart Jacobs.

Probably the most studied categorical structure of logic is the topos. Toposes of sheaves, functors which coherently assign data to a space, were first applied to algebraic geometry. A continuous map f:X\to Y induces an inverse image f^\ast:Sh(Y)\to Sh(X), which is a left adjoint that preserves finite limits. This gives geometric morphisms of toposes, and geometric logic (\wedge and \exists) as the language of classifying toposes.

Though geometric logic is an important level of generality, the language of toposes is more powerful. In La Jolla, California 1965, the budding category theory community recognized Grothendieck’s categories of sheaves as being fundamentally logical structures, which generalize set theory. An elementary topos is a cartesian closed category with finite limits and a subobject classifier, an object which represents the correspondence of predicates and subobjects.

The language of an elementary topos T is encapsulated in a pair of structures.

The predicate fibration \Omega\text{T}\to \text{T} reasons about predicates, like subsets;

The codomain fibration \Delta\text{T}\to \text{T} reasons about dependent types, like indexed sets.

Predicate Logic

Throughout mathematics, we use the internal predicate logic of Set. It is the canonical example of a topos: a predicate such as \varphi(x) = (x+3\geq 5) is a function \varphi:X\to 2=\{t,f\}, which corresponds to its comprehension, the subobject of true terms \{x\in X \;|\; \varphi(x)=t\}\subseteq X.

Predicates on any set X form a Boolean algebra P(X)=[X,2], ordered by implication. Every function f:X\to Y gives an inverse image P(f):P(Y)\to P(X). This defines a functor P:Set^{op}\to Bool which is a first-order hyperdoctrine: each P(f) has adjoints \exists_f\dashv P(f)\dashv \forall_f representing quantification, which satisfy the Beck-Chevalley condition.
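In Set these adjoints have explicit descriptions (a standard computation, spelled out here for concreteness rather than taken from the post): for f:X\to Y and \varphi\in P(X),

\exists_f(\varphi)(y) = \exists x:X.\; f(x)=y \,\wedge\, \varphi(x) \qquad\qquad \forall_f(\varphi)(y) = \forall x:X.\; f(x)=y \,\Rightarrow\, \varphi(x)

so \exists_f and \forall_f quantify along the fibers of f.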

Altogether, this structure formalizes classical higher-order predicate logic. Most formulae, such as

\forall x,y:\mathbb{Q}.\; \exists z:\mathbb{Q}.\; x\lt z \wedge z\lt y

\forall f:Y^X.\; \forall y:Y.\; \exists x:X.\; f(x)=y \Rightarrow \exists g:X^Y.\; \forall y:Y.\; f(g(y))=y

can be modelled in this logical structure of Set.

This idea is fairly well-known; people often talk about the “Mitchell-Bénabou language” of a topos. However, this language is predicate logic over a simple type theory, meaning that the only type formation rules are products and functions. Many constructions in mathematics do not fit into this language, because types often depend on terms:

\text{Nat}(n) := \{m:\mathbb{N} \;|\; m\leq n\}

\mathbb{Z}_p := \mathbb{Z}/\langle x\sim y \equiv \exists z:\mathbb{Z}.\; (x-y)=z\cdot p\rangle

This is provided by extending predicate logic with dependent types, described in the next section.

So, we have briefly discussed how the structure of Set allows for the explicit construction of predicates used in everyday mathematics. The significance is that these can be constructed in any topos: we thereby generalize the historical conception of logic.

For example, in a presheaf topos [C^{op},\text{Set}], the law of excluded middle does not hold, and for good reason. Negation of a sieve is necessarily more subtle than negation of subsets, because we must exclude the possibility that a morphism is not in but “precomposes in” to a sieve. This will be explored more in the applications post.
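Concretely, as a standard computation making “precomposes in” precise (added here for illustration): for a sieve S on an object c, the Heyting negation is given by

(\neg S)(f:d\to c) \iff \text{for every } g:e\to d,\; f\circ g \notin S

so f is excluded from \neg S as soon as some precomposite of f lands in S.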

Dependent Type Theory

Dependency is pervasive throughout mathematics. What is a monoid? It’s a set M, and then operations m,e on M, and then conditions on m,e. Most objects are constructed in layers, each of which depends on the ones before. Type theory is often regarded as “fancy” and only necessary in complex situations, similar to misperceptions of category theory; yet dependent types are everywhere.

The basic idea is to represent dependency using preimage. A type which depends on terms of another type, x:A\vdash B(x):\text{Type}, can be understood as an indexed set \{B(x)\}_{x:A}, and this in turn is represented as a function \coprod x:A.\; B(x)\to A. Hence the “world of types which depend on A” is the slice category Set/A.
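For instance, the family \text{Nat}(n) defined earlier is the case A = \mathbb{N}: it corresponds to the projection

\coprod n:\mathbb{N}.\; \text{Nat}(n) \to \mathbb{N}, \qquad (n,m)\mapsto n,

whose preimage over n is the set \{0,1,\dots,n\}.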

The poset of “A-predicates” sits inside the category of “A-types”: a comprehension is an injection \{x\in A \;|\; \varphi(x)\}\to A. This is a degenerate kind of dependent type, where preimages are truth values rather than sets. So we are expanding into a larger environment, which shares all of the same structure. The slice category Set/A is a categorification of P(A): its morphisms are commuting triangles, understood as A-indexed functions.

Every function f:A\to B gives a functor f^\ast: Set/B\to Set/A by pullback; this generalizes preimage, and can be expressed as substitution: given p:S\to B, we can form the A-type

x:A\vdash f^\ast(S)(x) = S(f(x)):\text{Type}.

This functor has adjoints \Sigma_f\dashv f^\ast\dashv \Pi_f, called dependent sum and dependent product: these are the powerful tools for constructing dependent types. They generalize not only quantification, but also product and hom: the triple adjunction induces an adjoint co/monad on Set/B

\Sigma_f\circ f^\ast = -\times f \dashv [f,-] = \Pi_f\circ f^\ast.
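Concretely in Set, a standard fiberwise description (added here for illustration): for f:A\to B and an A-type q:T\to A with fibers T(a),

\Sigma_f(T)(b) = \coprod_{a\in f^{-1}(b)} T(a) \qquad\qquad \Pi_f(T)(b) = \prod_{a\in f^{-1}(b)} T(a)

so dependent sum and product really are “branching” coproducts and products over the fibers of f.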

These dependent generalizations of product and function types are extremely useful.

Indexed sum generalizes product type by allowing the type of the second coordinate to depend on the term in the first coordinate. For example: \Sigma n:\mathbb{N}.\;\text{List}(n) consists of dependent pairs \langle n, l:\text{List}(n)\rangle.

Indexed product generalizes function type by allowing the type of the codomain to depend on the term in the domain. For example: \Pi S:\text{Set}.\;\text{List}(S) consists of dependent functions \lambda S:\text{Set}.\;\varphi(S):\text{List}(S).

See how natural they are? We use them all the time, often without realizing. Simply by using preimage for indexing, we generalize product and function types to “branching” versions, allowing us to build up complex objects such as

\text{Monoid} := \Sigma M:\text{Set}.\; \Sigma m:M^2\to M.\; \Sigma e:1\to M...

...\Pi (a,b,c):M^3.\; m(m(a,b),c)=m(a,m(b,c)).\; \Pi a:M.\; m(e,a)=a=m(a,e).

This rich language is available in any topos. I think we’ve hardly begun to see its utility in mathematics, computation, and everyday life.

All Together

A topos has two systems, predicate logic and dependent type theory. Each is modelled by a fibration: a functor into the topos for which the preimage of A consists of the A-predicates/A-types. A morphism in the domain is a judgement of the form “in the context of variables of these (dependent) types, this term is of this (dependent) type”.

a:A,\; b:B(a),\; \dots,\; z:Z(y)\vdash t:T

The two fibrations are connected by comprehension, which converts a predicate to a type, and image factorization, which converts a type to a predicate. Together these exhibit the predicate fibration as a reflective subcategory of the type fibration.
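In Set, for concreteness (standard descriptions, not spelled out in the post): comprehension sends a predicate \varphi\in P(A) to the inclusion \{x\in A \;|\; \varphi(x)\}\to A, and image factorization sends an A-type p:S\to A to the predicate a:A\vdash \exists s:S.\; p(s)=a, i.e. “a is in the image of p”.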

Altogether, this forms the full higher-order dependent type theory of a topos. As far as I know, this is what deserves to be called “the” language of a topos, in its full glory. This type theory is used in proof assistants such as Coq and Agda; eventually, this expressive logic + dependent types will be utilized in many programming languages.

Conclusion

Native Type Theory provides a shared framework of reasoning for a broad class of languages. We believe that this could have a significant impact on software and system design, if integrated into existing systems.

In the next post, we’ll explore why this language is so useful for the topos of presheaves on theories. Please let me know any thoughts or questions about this post and especially the paper. Thank you.

Posted at February 20, 2021 1:59 AM UTC


16 Comments & 1 Trackback

Re: Native Type Theory

Hey pretty neat.

I initially looked into topos stuff too when trying to figure out a pointful syntax for a categorical programming language but the math was way too beyond me.

The main thing I found that was helpful was

http://www.kurims.kyoto-u.ac.jp/~hassei/papers/ctcs95.html

Decomposing typed lambda calculus into a couple of categorical programming languages

And using a tagless final style. The connection to reflexive objects is also neat!

In pseudo Coq

Class KappaZeta k `(Category k) :=
  lam   : (k 1 a -> k b c) -> k b (a -> c)
  app   : k a (b -> c) -> k 1 b -> k a c
  unit  : k a 1

  kappa : (k 1 a -> k b c) -> k (a * b) c
  lift  : k (a * b) c -> k 1 a -> k b c

I always knew there was some abstract nonsense about parametric HOAS, indexing and topos stuff that kind of related but I lacked the background to understand.

Posted by: Molly on February 20, 2021 2:03 PM | Permalink | Reply to this

Re: Native Type Theory

Hi Molly, thanks for your comment. That paper is interesting.

Decomposing simply-typed lambda into these two calculi, one for products and one for exponents - why is this more practical for implementation? Is it mainly to expand the kinds of structured categories we use for programming besides cartesian closed?

Though I don’t know much about the realm of implementation in the past few decades, I imagine that nowadays we can work in the internal languages of many kinds of categories, from monoidal up to locally cartesian closed, for example.

Anyway, I agree that there is not enough introductory material on categorical logic, especially dependent type theory. The original is Martin-Löf; the interpretation in LCCCs is Seely; and an overview of modern semantics is Ahrens/Lumsdaine/Voevodsky.

I’d like to try to make some relaxed introductions to dependent types in category theory. If someone has a good reference they recommend, let us know. Thanks.

Posted by: Christian Williams on February 22, 2021 3:01 AM | Permalink | Reply to this

Re: Native Type Theory

Huh, weird I thought I replied to this.

You may be interested in this comment I made on a GitHub issue going into more detail about the kappa calculus stuff.

https://github.com/jameshaydon/lawvere/issues/9#issuecomment-780744651

I think my own main motivation for delving into this stuff was just cause it was fun. But the thing is having things in a categorical flavor offers very straightforward construction of opposites. I was also trying to figure out what exactly labels/points meant. Coexponential types are very strange but can vaguely be thought of as continuations/exceptions. I thought they might make for a good effect system but they are inconsistent with exponentials unless you have complicated restrictions.

You would think that there would be an easy to understand way of going back and forth between categories and internal languages but all I get pointed to is the paper “compiling to categories” which just covers CCCs.

Also, this is far from in any good state but I tried to figure out a bit of my intuitions about slices and stuff in Coq. It’s messy cause it’s categorical and I don’t get that yet.

https://pastebin.com/1LiqT2aT

Posted by: Molly on February 27, 2021 3:05 PM | Permalink | Reply to this

Re: Native Type Theory

I’m quite interested in the question of (in the end) practical uses of this kind of thing. Suppose that I am writing some C for an embedded application. How could it help me to have a dependent type theory coming from a formal semantics for C (e.g. that of the K framework)?

Is it not a fundamental prerequisite for such a thing to be useful that the type-theoretic description of something be significantly different from the implementation? One way for something to be significantly different is for it to be significantly simpler. Thus the basic (non-dependent) typing (whether dynamic or static) of C and all its derivatives (i.e. essentially all programming languages in use today) is useful, because it is massively simpler, but still catches bugs in a useful way.

Another way for the type-theoretic description to be significantly different from the implementation is when both are in the same language. This is the case in mathematics: both the statements and the proofs use the same kind of language. In this case, the description acts as a summary: one can choose to ‘ignore the details’.

Now in C, say in my embedded example, I may well be actually working with bits and bytes. Maybe I get some bytes from a hardware sensor, do some processing, and send some bytes out. I have trouble seeing how writing a type-theoretic description of what I am doing will be essentially different from actually writing the program; it will be the same thing, just in a different language. And thus it is just as likely that I will introduce a bug in the type-theoretic description (i.e. the description will not express what I wish it to express) as in a normal implementation. And if that type-theoretic language is significantly more difficult to understand than the usual one used for implementation, why not just use the latter?

The same point can be made for higher-level languages, just that the balance is more subtle. Suppose I have a standard REST API type thing. I take some input, parse it, do a database call or two, do some processing, and send something back. So far so good, I might find it useful to have a type system expressive enough to cope with that top-level description natively. But the details of the parsing and the processing are domain-specific: if one expresses what they are, one is basically, exactly as in the C example, just writing an implementation in a different language.

One might think that a more promising practical use of this kind of thing would more likely be the following kind of process. Suppose I write my program in the normal way in C. I run it through some formal machine, and get the same program in Java or Python or any other language. The problem here, however, is that the programs in Java or Python will be under-specified. Where I use an array in C, I have numerous options in Java as to which type to use. Maybe I wish to always use immutable structures in my Java application; my C algorithm might not be appropriate for that. The whole point of using such languages at a higher level of abstraction is to go in the reverse direction: write some higher-level code more quickly, or more safely, and then compile back to C (or machine code, or whatever). And that compiling back process is fundamentally language-specific.

One thing I think mathematicians especially are guilty of is often completely misunderstanding the nature of real world programming. If you go to Math Overflow or Zulip or wherever, you won’t have to look long in a relevant discussion for somebody to come out with some assertion along the lines of ‘such and such a person or software should just use a decent language like Haskell’, and for this to be met with mutual nodding and appreciation. In real world programming, what type system a programming language has is only one aspect of a number of factors, and typically (and correctly) far from the most important or decisive one. This paragraph is not a dig at the blog article authors, by the way, just an observation that I think is relevant to understanding the cultural biases which people often have in this kind of discussion!

Posted by: Richard Williamson on February 20, 2021 11:59 PM | Permalink | Reply to this

Re: Native Type Theory

Hi Richard, thanks for your thoughts. As may have been apparent in the paper, our motivation is practical but we admit that the question of application is currently quite broad, and it will take time and community engagement to expound.

I certainly have limited real-world programming experience to draw upon when developing ideas for applications, though Mike definitely does (hopefully he’ll join the conversation). But yes, this is why I appreciate your comment.

So it seems your main question is, how can “native” types be actually useful? If the types are just made from the language itself, what do we really gain? This is reasonable. There are a few different directions to answer.

First, it helps to imagine that we already have adequate infrastructure that supports this for most popular languages. Of course when we start from scratch, native types seem funny because at first they’re just like the original language. But suppose that we’ve built up a library of complex types of C programs, which characterize classes of algorithms satisfying certain properties, constraints, pre/postconditions etc. One could then use these to query a codebase such as GitHub. One could compare existing implementations of an algorithm, choose one and adapt it into your system.

Does this make sense? I think it’s up to how we design and implement native type systems which will determine how useful they can be in practice. Sure, maybe there are aspects of languages where it’s not very useful, or some applications where it can’t help much because it’s too domain-specific. But the idea is so broad that it seems inevitable that we’ll find some good applications, with enough brainstorming.

Did you get a chance to look at the examples in the paper? We use a concurrent language because it lets us give types for distributed computing. Hopefully those help some with what we have in mind so far. You’ve brought up some interesting examples, especially regarding translations. The functoriality of the native types construction will be very useful once we develop libraries of formal translations between languages. Perhaps we could brainstorm more in that direction.

Thanks for your thoughts. Mathematicians often have limited understanding of real-world programming, and these conversations are helpful. I’ll tell Mike to read your comment and he’ll have more to say as well.

Posted by: Christian Williams on February 21, 2021 12:57 AM | Permalink | Reply to this

Re: Native Type Theory

Suppose that I am writing some C for an embedded application. How could it help me to have a dependent type theory coming from a formal semantics for C (e.g. that of the K framework)?

Various things come to mind. The first is the reason K Framework exists: Runtime Verification is Rosu’s company that sells software built on the formal semantics of C to detect undefined behavior in embedded systems. These undefined behaviors usually indicate a bug like a buffer overflow or a use-after-free or so on.

In an embedded scenario, you’re often concerned with handling interrupts. These are asynchronous events that interrupt the usual flow of the computer. In an 80x86, the CPU finishes the current instruction, pushes the code segment and instruction pointer to the stack, then looks up the new CS and IP of the handler in the interrupt table. Code continues from that point. When the handler is done, it pops the old CS and IP from the stack, continuing where it left off.

Consider a handler that increments a counter for keeping track of how many times the interrupt fired. If the handler reads the value, adds one, then writes the result back, it can dramatically undercount the number of interrupts, particularly if the interrupts fire at just the right interval: the same interrupt can fire between the time when it reads and the time when it writes. Many copies of the interrupt CS:IP get pushed to the stack, each having read the same value; then when they complete, all write the same value back. A lock doesn’t help, because the interrupt handlers aren’t threaded; they’re synchronous, so the system would immediately deadlock.

Expressing the property that the code preserves invariants even when interrupted is something the native type system can do.
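As a minimal illustration (a sketch in C assumed for this comment, not code from the paper or from any particular K semantics), the lost update looks like this:

volatile unsigned int interrupt_count = 0;

/* Illustrative handler: the increment is a non-atomic read/modify/write,
   so an interrupt that fires between the read and the write is lost.   */
void timer_interrupt_handler(void) {
    unsigned int n = interrupt_count;   /* read                        */
    n = n + 1;                          /* modify                      */
    interrupt_count = n;                /* write back, possibly stale  */
}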

Timing attacks have been exploited for fun and profit. Consider an embedded system that stores a password in memory, then does a strcmp to compare what you typed to what’s in memory. When you guess the first character incorrectly, it fails immediately, but if you guess it correctly, it runs a little longer while testing the second character. This reduces the complexity of finding the password from exponential to linear time. Similar timing attacks on RSA can leak bits of decrypted ciphertexts, which in turn can be leveraged to get bits of a private key.

Expressing the property that the code runs in constant time independent of the input is something the native type system can do.
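For illustration, here is a hypothetical early-exit check next to a constant-time variant (a standard technique; the function names are made up for this sketch). The property in question is that only the second has running time independent of the guess:

#include <stddef.h>

/* Leaks timing: returns as soon as a character differs, so the running
   time reveals how many leading characters of the guess are correct.   */
int check_secret_leaky(const char *guess, const char *secret, size_t len) {
    for (size_t i = 0; i < len; i++) {
        if (guess[i] != secret[i]) return 0;
    }
    return 1;
}

/* Constant time: always inspects every byte, accumulating differences. */
int check_secret_constant(const char *guess, const char *secret, size_t len) {
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++) {
        diff |= (unsigned char)(guess[i] ^ secret[i]);
    }
    return diff == 0;
}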

Another use could be expressing orange-book-style security properties of information, or checking that object capabilities don’t leak.

In this case, the description acts as a summary: one can choose to ‘ignore the details’.

Yes, it is certainly the case that we want two independent expressions of programmer intent for the compiler to check against each other, and having the type system be a simple summary is a reasonable way to do that.

With a native type system, you’re free to choose which parts to ignore, while with C’s built-in system, you must ignore all behavioral information and a lot of information about the values.

it is just as likely that I will introduce a bug in the type-theoretic description […] as in a normal implementation.

The native type system is as complex as you want it to be. You can write very simple types or very expressive types. The compiler will find the places where your types do not match your code.

As an example of a dependent type being used to catch errors in C++ code, see this integer overflow/underflow bug in Cap’n Proto. To fix the bug, Kenton Varda used template metaprogramming to write a class encoding a permitted range for a nonnegative integer, then updated his code to use the class everywhere. The compiler then statically verified the ranges. He writes:

In the process, I found:

  • Several overflows that could be triggered by the application calling methods with invalid parameters, but not by a remote attacker providing invalid message data. We will change the code to check these in the future, but they are not critical security problems.

  • The overflow that Ben had already reported (2015-03-02-0). I had intentionally left this unfixed during my analysis to verify that Guarded would catch it.

  • One otherwise-undiscovered integer underflow (2015-03-02-1).

Based on these results, I conclude that Guarded is in fact effective at finding overflow bugs, and that such bugs are thankfully not endemic in Cap’n Proto’s code.

Why don’t programming languages do this?

Anything that can be implemented in C++ templates can obviously be implemented by the compiler directly. So, why have so many languages settled for either modular arithmetic or slow arbitrary-precision integers?

Languages could even do something which my templates cannot: allow me to declare relations between variables. For example, I would like to be able to declare an integer whose value is less than the size of some array. Then I know that the integer is a safe index for the array, without any run-time check.

Obviously, I’m not the first to think of this. “Dependent types” have been researched for decades, but we have yet to see a practical language supporting them. Apparently, something about them is complicated, even though the rules look like they should be simple enough from where I’m standing.

[End of Kenton’s quote. Richard again:]

Suppose I have a standard REST API type thing.

Suppose you do, and part of your request is turned into an SQL query. By combining the operational semantics of the server on the backend with the semantics of the database, it’s possible to express the property that Little Bobby Tables’ name won’t cause trouble.

SQL injection attack on license plate reader

In real world programming, what type system a programming language has is only one aspect of a number of factors, and typically (and correctly) far from the most important or decisive one.

Absolutely!

Posted by: Mike Stay on February 21, 2021 2:36 AM | Permalink | Reply to this

Re: Native Type Theory

Apparently, something about [dependent types] is complicated

(-:

Posted by: Mike Shulman on February 21, 2021 5:27 AM | Permalink | Reply to this

Re: Native Type Theory

An exceptionally well written use-case - deserves to be published somewhere for easy bookmarking and reference.

Posted by: jay gray on February 21, 2021 11:30 AM | Permalink | Reply to this

Re: Native Type Theory

Thank you both for the replies! To keep the discussion focused, I’ll reply, for now at least, just to the question of preventing types of overflow.

Consider something like the following example.

#include <stdlib.h>  /* for atoi */

int some_array[100];

static void fill_with_ones(const int entries_to_fill) {
    for (int i=0; i < entries_to_fill; i++) {
        some_array[i] = 1;
    }
}

int main(int number_of_arguments, char** arguments) {
    fill_with_ones(atoi(arguments[1]));
}

In other words: take some integer x specified as a command line argument, and fill the array some_array with x ones. One will get an overflow here if x \gt 100.

I don’t see immediately how a compiler based upon formal semantics/native type theory will just be able to be run upon the above code as-is and detect the possibility of overflow. I can see that one might be able to require a guarantee that entries_to_fill is \leq 100, and that the compiler might be able to detect that such a guarantee has not been given. But what if we instead do:

int main(int number_of_arguments, char** arguments) {
    const int entries_to_fill = atoi(arguments[1]);
    if (entries_to_fill > 100) {
        exit(1);
    } 
    fill_with_ones(entries_to_fill);
}

Maybe you say that the compiler here will be able to understand that a guarantee that x \leq 100 has been given. But I find that hard to believe in general. What if instead of just using x directly, we ran some complicated function on it first? What if that complicated function involves hardware: say if a piece of arithmetic, or a fast Fourier transform, is carried out in a chip on my MCU, not in software? At minimum, you would need that hardware manipulation to be defined manually in type theory. That will be awful to write down by hand, I would imagine, in general; again, it comes back to the idea that there is no benefit in having a type theoretic description that is essentially the same as writing a program.

I think one can come up with other scenarios like this. What if for example I take entries_to_fill to be the current year minus 1921, where the current year is fetched from a syscall or whatever. My code will work this year, but not next year. How can a compiler possibly decide that? For this is not just a question of the syscall not being expressed in type theory: it would be completely unreasonable for example for the compiler to prevent my program from compiling if it would run for the next 100,000 years but not the next 100,001. The only way around this would be that the programmer would have to specify that 100,000 years is OK in some type theoretic way.

In other words, what I think seems inevitable is that one would not be able to simply run a compiler based on formal semantics on existing C code. One would need manual intervention of one kind or another. This is probably why one can make a business out of it; I would imagine that for a non-trivial program, it will still be a time-consuming (and thus costly) process to be able to run a formal compiler usefully on it. And one then comes back to the fundamental problem with proof checking as it exists today (Homotopy Type Theory has made no real progress on this, despite this being one of the original motivations of Voevodsky): it is not practical for the average programmer/mathematician/whoever, it is more efficient (in most cases, obviously probably not in something mission-critical) to just proceed as before and handle problems when they arise.

Posted by: Richard Williamson on March 1, 2021 1:09 AM | Permalink | Reply to this

Re: Native Type Theory

I don’t see immediately how a compiler based upon formal semantics/native type theory will just be able to be run upon the above code as-is and detect the possibility of overflow.

Try to unify the code with the type of undefined behavior. There’s a finite set of places in the C spec where behavior is undefined, so do a case analysis. One of the cases involves writing past the end of an array. That case appears here. The condition 0 ≤ i < 100 has to be satisfied, therefore 0 ≤ entries_to_fill < 101, therefore atoi has to return a number in that range, therefore arguments[1] has to be in the preimage of that range.

Maybe you say that the compiler here will be able to understand that a guarantee that 𝑥 ≤ 100 has been given. But I find that hard to believe in general. What if instead of just using 𝑥 directly, we ran some complicated function on it first?

We didn’t say that all these types would be efficient to compute, or even decidable. The more complicated the language semantics and the type you’re trying to express, the harder it will be for the compiler to prove that the code matches the type. Basically, the operational semantics will produce a bunch of definitions for Coq, the program will provide a bunch more, and then the types are theorems you’re trying to prove about those definitions. Sometimes it’ll be able to prove stuff automatically for you, and sometimes you’ll have to use it as a proof assistant to come up with the proof yourself. The experience will be a lot like programming in Agda.

What if that complicated function involves hardware: say if a piece of arithmetic, or a fast Fourier transform, is carried out in a chip on my MCU, not in software? At minimum, you would need that hardware manipulation to be defined manually in type theory.

The semantics of the hardware would have to be included in the Lawvere theory. Once it’s there, the resulting native type theory can reason about it.

I think one can come up with other scenarios like this. What if for example I take entries_to_fill to be the current year minus 1921, where the current year is fetched from a syscall or whatever. My code will work this year, but not next year. How can a compiler possibly decide that?

Propagate the constraint as before, but instead of propagating to atoi(), you propagate to time(). The time_t type encodes years from 1970 to 2038, therefore time() must return a year from 1970 to 2021. Since it might return one greater than that, the compiler flags the call.

As far as I can see, your questions boil down to “(1) How can it reason about stuff not in the semantics? (2) How can it decide complicated questions?”

The answer to (1) is that someone has to add it to the semantics, but it needn’t be completely accurate to be useful. For example, for checking the code above we didn’t need to know that time() always increases until 2038, we just needed its range of values. Adding that to the semantics is a lot easier than modeling a clock.

For (2), as I said above, we’re not claiming all this will be automatic. We’re saying that having tools that essentially import a K framework operational semantics into Coq for programmers to reason about would be cool and useful.

In other words, what I think seems inevitable is that one would not be able to simply run a compiler based on formal semantics on existing C code. One would need manual intervention of one kind or another. This is probably why one can make a business out of it; I would imagine that for a non-trivial program, it will still be a time-consuming (and thus costly) process to be able to run a formal compiler usefully on it. And one then comes back to the fundamental problem with proof checking as it exists today (Homotopy Type Theory has made no real progress on this, despite this being one of the original motivations of Voevodsky): it is not practical for the average programmer/mathematician/whoever, it is more efficient (in most cases, obviously probably not in something mission-critical) to just proceed as before and handle problems when they arise.

To a large extent, I agree. Existing programming languages have such complicated semantics that it would be hard to do anything automatic with them. Since formal verification is a big deal on the blockchain, though, lots of new languages are being proposed with really clear and simple formal semantics; I think this is where something like NTT will really shine.

On the other hand, I don’t think anyone has deployed theorem provers at the scale they’ve deployed, say, bitcoin miners. It’s conceivable to me that as smart contracts get more complicated that people will run theorem-proving-as-a-service and just brute-force proofs with massive farms of GPUs.

Posted by: Mike Stay on March 2, 2021 3:00 PM | Permalink | Reply to this

Re: Native Type Theory

Any applications to quantum programming languages?

Posted by: matt on February 21, 2021 12:17 AM | Permalink | Reply to this

Re: Native Type Theory

Sure. Everything works the same way; the size of the computer’s state is just exponentially large and the rewrites are unitary.

Posted by: Mike Stay on February 21, 2021 5:03 PM | Permalink | Reply to this

Re: Native Type Theory

That said, I’m working with Greg Meredith on generalizing the construction to V-enriched presheaf categories that aren’t toposes, but instead use some form of linear logic. It seems plausible that there would be some more quantum-like types with that approach.

Posted by: Mike Stay on February 21, 2021 5:17 PM | Permalink | Reply to this

Re: Native Type Theory

Over on the category theory Zulip chat, Morgan Rogers suggested that I might benefit from reading the dependent type theory section of this post (and presumably the paper as well, but I’ve only just finished reading this blog post after having the tab open for over a week…).  I am still a bit weak on the notation used in type theory, so I hope you don’t mind answering some very basic questions.

After explaining indexed sum and indexed product, you write the rhetorical question, “See how natural they are?”  Unfortunately, I cannot yet answer that with a yes.  What does List() do?  That dependent type isn’t defined prior to the examples that use it, and I’m failing to intuit it.  As such, the examples remain opaque.  On the other hand, I already know what a monoid is, so I can work from both ends to read your building of Monoid (after interpreting into jargon that is more familiar) as:

“Take a monoid to be a set M with a binary operation of multiplication and a nullary operation (i.e. a specific element) of unit… for any triple of elements of M, multiplication is associative; for any element a of M, multiplication by the unit element on either side returns that element a.”

Is there a similar translation you can provide for the “simple” examples you provided in the blog post?

On a tangential note, my translation of the Monoid type would seem to indicate the empty monoid (M=ø) is excluded, as the nullary operation requires at least one element of M.  Is this also the case for the Monoid type you wrote, before I translated it, or is my translation unfaithful in that respect?

Posted by: Jason Erbele on March 2, 2021 10:00 PM | Permalink | Reply to this

Re: Native Type Theory

Hi Jason, sorry for the late reply. My explanation of dependent sum and product here is very limited – I did a bit more in Part 2.

We use dependent sum any time we build objects by saying “first give me an X, and then give me something depending on X”.

So \Sigma n:\mathbb{N}.\;\text{List}(n) means “first give me a number n, and then give me a list of length n.” (for some way of making lists) So an element is a pair where the second coordinate depends on the first.

Similarly \Pi S:\mathrm{Set}.\;\text{List}(S), this means “first give me a set, and then give me a list of elements of S.” So an element is a function where the codomain depends on the argument.

Hope this helps.

Posted by: Christian Williams on March 24, 2021 12:55 AM | Permalink | Reply to this

Re: Native Type Theory

Similarly \Pi S:\mathrm{Set}.\;\text{List}(S), this means “first give me a set, and then give me a list of elements of S.”

Wouldn’t \Pi be more like “whenever I give you a set, give me a list of elements of S”?

Posted by: Mike Shulman on March 24, 2021 4:12 PM | Permalink | Reply to this
Read the post Native Type Theory (Part 3)
Weblog: The n-Category Café
Excerpt: The third episode of "Native Type Theory" by Christian Williams.
Tracked: March 29, 2021 5:15 AM
