

February 14, 2018

Gradual Typing

Posted by Mike Shulman

(Guest post by Max New)

Dan Licata and I have just put up a paper on the arXiv with a syntax and semantics for a gradually typed programming language, which is a kind of synthesis of statically typed and dynamically typed programming styles. The central insight of the paper is to show that the dynamic type checking used in gradual typing has the structure of a proarrow equipment. Using this we can show that some traditional definitions of dynamic type checks can be proven to be in fact unique solutions to the specifications provided by the structure of an equipment. It’s a classic application of category theory: finding a universal property to better understand what was previously an ad-hoc construction.

The paper is written for computer scientists, so I’ll try to provide a more category-theorist-accessible intro here.

Dynamic Typing

First, a very brief introduction to what dynamic typing is. To put it very simply, dynamically typed languages are languages where there is only one type, the “dynamic type” which I’ll call ? (more details here). They have the same relationship to typed languages that monoids do to categories or operads to multicategories. So for example, in a dynamically typed language the function application rule looks like

\frac{\Gamma \vdash t : ? \quad \Gamma \vdash u : ?}{\Gamma \vdash t(u) : ?}

That is, it always type-checks, whereas in a typed language the t above would need to be of a function type A \to B and u would have to be of type A. However, these languages are not like the pure untyped lambda calculus where everything is a function: they will also include other types of data like numbers, pairs, strings etc. So whereas in a statically typed language the syntax 3(3) would be rejected at type-checking time, we can think of the dynamic language as delaying this type error until run-time, when the program 3(3) will crash and say “3 is not a function”. This is implemented essentially by representing a value in the dynamic language as an element of a coproduct of all the primitive types in the language. Then 3 becomes (number, 3) and the implementation for the function application form will see that the value is not tagged as a function and crash. On the other hand if we have an application (function, \lambda x. t)((number, 3)) the implementation will see that the tag is a function and then run t[(number,3)/x], which may itself raise a dynamic type error later if t uses its input as a pair, or a function.
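To make the tagging story concrete, here is a minimal Python sketch (my illustration, not code from the paper or from any real implementation): a dynamic value is a tag/payload pair drawn from the coproduct of primitive types, and the application form checks the tag at run time.

from dataclasses import dataclass
from typing import Any

@dataclass
class DynValue:
    tag: str       # "number", "function", "pair", ...
    payload: Any

def apply_dyn(f: DynValue, arg: DynValue) -> DynValue:
    # the application form inspects the tag before calling
    if f.tag != "function":
        raise TypeError(f"{f.payload} is not a function")
    return f.payload(arg)

three = DynValue("number", 3)
ident = DynValue("function", lambda v: v)

apply_dyn(ident, three)    # fine: the tag check passes
# apply_dyn(three, three)  # raises TypeError: "3 is not a function"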

So with all that ugliness, why do we care about these languages? Well it is very easy to write down a program and just run it without fighting with the type checker. Anyone who’s had to restructure their Coq or Agda program to convince the type checker that it is terminating should be able to relate here. Furthermore, even if I do think these languages are ugly and awful it is a fact that these languages are extremely popular: JavaScript, Python, Perl, Ruby, PHP, LISP/Scheme are all used to write real working code and we certainly don’t want to chuck out all of those codebases and laboriously rewrite them in a typed language.

Gradual Typing

Gradually typed languages give us a better way: they embed the dynamic language in them, but we can also program in a statically typed style and have it interoperate with the dynamically typed code. This means new code can be written in a statically typed language while still being able to call the legacy dynamically typed code without having to sacrifice all of the benefits of static typing.

To a first approximation, a gradually typed language is a statically typed language with dynamic type errors, a distinguished dynamic type ?? and a distinguished way to coerce values from any type to and from the dynamic type (possibly causing type errors). Then a dynamically typed program can be compiled to the gradual language by translating the implicit dynamic type checking to explicit casts.

For a gradually typed language to deserve the name, it should be on the one hand typed, meaning the types have their intended categorical semantics as products, exponentials, etc., and on the other hand it should satisfy graduality. Graduality of a language means that the transition from a very loose, dynamic style to a precise, statically typed style should be as smooth as possible. More concretely, it means that changing the types used in the program to be less dynamic should lead to a refinement of the program’s behavior: if the term satisfies the new type, it should behave as before, but otherwise it should produce a dynamic type error.

We can formalize this idea by modeling our gradually typed language as a category internal to preorders: types and terms have related “dynamism” orderings (denoted by \sqsubseteq) and all type and term constructors are monotone with respect to these orderings. Then we can characterize the dynamic type as being the most dynamic type and the type error as a least dynamic term of each type. Making everything internal to preorders reproduces exactly the rules of type and term dynamism that programming language researchers have developed based on their operational intuitions.
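For example, monotonicity of application and leastness of the type error (written \mho, as in the comments below) come out as rules like the following (my paraphrase of the standard term dynamism rules):

\frac{t \sqsubseteq t' \quad u \sqsubseteq u'}{t(u) \sqsubseteq t'(u')} \qquad \frac{\Gamma \vdash t : A}{\mho \sqsubseteq t}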

We can then make a simple logic of type and term dynamism that we call “Preorder Type Theory”. Since we are doing cartesian type theory, it is the internal logic of virtually cartesian preorder categories, rather than plain preorder categories, due to the structure of contexts. In the paper I present VCP categories by a direct definition, but behind the scenes I used Cruttwell-Shulman’s notion of normalized T-monoid to figure out what the definition should be.

Casts and Equipments

While preorder type theory can express the basic ideas of a dynamic type and type errors, the key ingredient of gradual typing that we are missing is that we should be able to cast terms of one type to another so that we can still program in a dynamically typed style.

Gradually typed languages in the literature do this by having a cast form \langle B \Leftarrow A\rangle t in their language whose semantics is defined by induction on A, B. Our approach is to carve out 2 subsets of these casts which have universal properties with respect to the preorder category structure, which are the upcasts \langle B \leftarrowtail A\rangle t and downcasts \langle A \twoheadleftarrow B\rangle u which can only be formed when A \sqsubseteq B. Then one of those “oblique” casts can be defined using the dynamic type: \langle B \Leftarrow A\rangle t = \langle B \twoheadleftarrow ?\rangle \langle ? \leftarrowtail A\rangle t.
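Continuing the toy DynValue sketch from the introduction (again my own illustration, with string tags standing in for types), the upcast into ?, the downcast out of ?, and the factoring of the oblique cast look like:

def upcast(tag: str, v) -> DynValue:       # <? ↢ A> t : tag and inject
    return DynValue(tag, v)

def downcast(tag: str, d: DynValue):       # <B ↞ ?> u : check the tag
    if d.tag != tag:
        raise TypeError(f"expected {tag}, got {d.tag}")  # dynamic type error
    return d.payload

def oblique_cast(src: str, tgt: str, v):   # <B ⇐ A> t, factored through ?
    return downcast(tgt, upcast(src, v))

oblique_cast("number", "number", 3)      # returns 3
# oblique_cast("number", "function", 3)  # raises the dynamic type error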

What universal property do the upcasts and downcasts have? Well, first we notice that term dynamism gives a relational profunctor between terms of type A and type B (i.e. a relation that is monotone in B and antitone in A). Then we characterize the upcast and downcast as left- and right-representables for that profunctor, which consequently means the upcast is left adjoint to the downcast. More explicitly, this means for t : A, u : B we have \langle B \leftarrowtail A\rangle t \sqsubseteq u \iff t \sqsubseteq u \iff t \sqsubseteq \langle A \twoheadleftarrow B\rangle u.

By uniqueness of representables, this defines the upcasts and downcasts uniquely up to order-equivalence, giving us a specification for the casts. We show that this specification, combined with the monotonicity of all the term connectives and the \beta,\eta rules of the connectives, allows us to derive the standard implementations of these casts as the unique solutions to this specification.

If you’re familiar with equipments this structure should look familiar: it gives a functor from the thin category ordering the objects to the category of adjunctions in the term category. This is a special case of a proarrow equipment where we view our preorder category as a “vertically thin” double category. So we extend our syntax to be the internal logic of vertically thin cartesian preorder categories that are equipments and can model the type error, the dynamic type, functions, and products, and we call that system “Gradual Type Theory”. Then in that syntax we give some synthetic proofs of the uniqueness theorems for the casts and a few other theorems about equipments with certain universal properties.

Constructing Models

Finally, we give a construction for models of gradual type theory that extends Dana Scott’s classical models of dynamic typing to gradual typing. This connects our semantics to previous programming-language approaches and proves consistency for the system.

Briefly, we start with a locally thin 2-category C (such as the category of domains) and construct an equipment by defining the vertical arrows to be coreflections in C, which are adjunctions where the right adjoint is a retract of the left. Then we pick an object d in C and take a kind of vertical slice category C/d. So the objects are coreflections into d and the vertical arrows are commuting triangles of coreflections.
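Spelled out (just unfolding the definition), a coreflection (e, p) : A \triangleleft B is an adjunction e \dashv p whose unit is an identity, so in the local ordering of C:

p \circ e = \mathrm{id}_A, \qquad e \circ p \sqsubseteq \mathrm{id}_B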

We use coreflections rather than just adjunctions for two reasons. First, a sociological reason: the casts people have developed are coreflections. Second, a technical reason: every coreflection is a monomorphism, so when we take the slice category we get a thin category, and our double category becomes a preorder category. This allows us to interpret type and term dynamism as orderings rather than a more complex category structure.

We then show how Dana Scott’s domain theoretic models of dynamic typing extend to a model of gradual type theory (where the type error is a diverging program) and another model where the type error and diverging program are different.

Posted at February 14, 2018 11:56 PM UTC



Re: Gradual typing

A naive question. Is gradual typing in any way related to the casts we are used to in DTT, e.g. coercions, type classes, canonical structures, … ?

A more practical question: which of the gradually typed languages is closest to your type theory?

Posted by: Bas Spitters on February 18, 2018 9:16 AM

Re: Gradual typing

  1. I would say the interpretation I present here is very much analogous to the coercion interpretation of subtyping. The difference is instead of a subtyping judgment A \leq B denoting a function A \to B, here a dynamism judgment A \sqsubseteq B denotes a coreflection/embedding-projection pair from A to B. There are many similarities such as coherence which arise in both and the functorial constructions of the casts are essentially the same.

  2. Most gradually typed languages mentioned in that article do not do dynamic checks any differently from a dynamically typed language. These languages don’t satisfy “type soundness” in the sense that they do not validate the \eta law for the different types, so the types are like a sanity-check, but you can’t really rely on them to do type-based reasoning. These languages are sometimes called “optionally typed” by theoreticians to distinguish them from gradually typed languages which do validate type-based reasoning/optimizations. An example of a language that does perform these checks is Typed Racket, and academics are developing more of them, such as Reticulated Python. It is currently a major research topic how to implement these languages efficiently, so optional typing is much more popular for now. The type theory here is very similar to what’s called the “Gradually Typed Lambda Calculus”. An overview of that language and the origin of graduality is this paper.

Posted by: Max S. New on February 18, 2018 6:13 PM

Re: Gradual typing

Thanks.

The page coercion mostly mentions the work by Reynolds on categorical semantics of coercive subtyping. Whereas Luo’s work is more type theoretic. I expect there is more, but off hand, I wouldn’t know of better references. Anything the nlab is overlooking?

Regarding gradual typing, I was happy to learn about the connections with parametricity, and also step-indexing. The latter connects to our own work on guarded cubical type theory, although I am not sure how useful that connection is.

Posted by: Bas Spitters on February 19, 2018 12:08 PM

Re: Gradual typing

  1. I think Mellies and Zeilberger’s paper has a nice perspective on resolving the coercion and “subset” interpretations of subtyping by viewing each as taking place in a different category, and includes a nice reformulation of some of Reynolds’ work.

  2. I’m actually surprised to see you mention step-indexing, since I never invoked it explicitly. I’m doing some work right now on operational/logical relations models of gradual typing that uses step-indexing but I take it you are referring to something else? Do you mean that both are instances of enriched/internal categories?

Posted by: Max S. New on February 19, 2018 3:31 PM

Re: Gradual typing

Thanks.

About step-indexing, I took your blog post as an opportunity to read up a little on gradual typing. I was thinking about Theorems for Free for Free, a paper you discuss. Step-indexing can be internalized using the topos of trees, as you know.

Posted by: Bas Spitters on February 19, 2018 9:28 PM

Re: Gradual typing

Does the embedding/reflection point of view solve the problem of contravariance in subtyping, i.e. that A \le B doesn’t imply (A\to C) \le (B\to C) (which is what doing everything internal to preorders would give you) but rather (B\to C) \le (A\to C)?

Posted by: Mike Shulman on February 19, 2018 12:03 PM

Re: Gradual typing

Yes, this is something I will highlight more in a future paper. The type dynamism ordering I presented was originally called “naïve” subtyping because the function type was covariant in both positions in this paper.

With the ep pair semantics, you do get covariance on both sides. The reasoning being if I have ep pairs (e_A, p_A) : A \triangleleft A' and (e_B, p_B) : B \triangleleft B' then I can construct an ep pair (p_A \to e_B, e_A \to p_B) : (A \to B) \triangleleft (A' \to B'). And we see that we have not avoided the contravariance of the function type at all, but we instead see a kind of “polarity switch” when we have a contravariant position.
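Here is a hedged Python sketch of that construction (names mine; an ep pair is modeled as an (embed, project) pair of functions, with project after embed the identity):

from typing import Any, Callable, Tuple

EP = Tuple[Callable[[Any], Any], Callable[[Any], Any]]   # (embed, project)

def ep_arrow(ep_a: EP, ep_b: EP) -> EP:
    # given (e_A, p_A) : A ◁ A' and (e_B, p_B) : B ◁ B',
    # build (A -> B) ◁ (A' -> B')
    e_a, p_a = ep_a
    e_b, p_b = ep_b
    # embedding (p_A -> e_B): use a function on A at A' by *projecting*
    # the argument (which may error) and *embedding* the result
    def embed(f):
        return lambda x: e_b(f(p_a(x)))
    # projection (e_A -> p_B): restrict a function on A' to A by *embedding*
    # the argument and *projecting* the result
    def project(g):
        return lambda x: p_b(g(e_a(x)))
    return embed, project

The contravariant position is exactly where embed and project trade places; that is the polarity switch.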

There is a somewhat similar analysis in the naïve paper, but their analysis relies on the introduction of a notion of “runtime blame” for a type error that I think is more complicated and less general than the ep pair/coreflection interpretation.

Posted by: Max S. New on February 19, 2018 3:25 PM

Re: Gradual typing

There’s something interesting and puzzling to me about gradual typing, especially this phenomenon about function types: It seems suspiciously similar to refinement typing, although they are not the same. There is no analogue of a dynamism ordering on terms, but the dynamism ordering on types seems analogous to the “refinement”/”subquotient” order on different refinements of an intrinsic type. Meanwhile the (coercive) subtyping order in gradual typing seems analogous to the (subsumptive) subtyping order on refinements.

In particular, while function types are contravariant in their argument type according to subtyping, they are covariant according to refinement. The “subset” and “quotient” parts switch roles. This is something I hadn’t even noticed prior to this post. (So thanks for that!)

Is there some big idea connecting the details of the two techniques that I’m missing out on? I knew about the Mellies and Zeilberger paper about refinement typing. (I haven’t read it but) I thought it was describing refinement systems at too abstract a level to address the difference between subtyping and refinement, or whether there even is one in a given system.

Posted by: Matt Oliveri on February 24, 2018 5:32 AM

Re: Gradual typing

OK, I think I might have figured it out. Maybe this should have been obvious. If you start with a gradually-typed language, and then require a proof that the program doesn’t throw a type error, you’re effectively limited to the same programs as in a refinement type system for an untyped language. (A Curry-style/extrinsic type system.) So you can optimize out the coercions and dynamic checks and you get a more usual presentation of an extrinsic type system, without types appearing in the terms. The dynamism order on terms becomes trivial, and the dynamism order on types apparently becomes the refinement order.

Dynamism becoming refinement still seems mysterious. But plausible.

Posted by: Matt Oliveri on February 24, 2018 11:31 PM

Re: Gradual typing

I was mistaken; my reasoning was too hasty. For refinement, function types are not covariant in the argument position after all.

So never mind, it doesn’t look like the dynamism order and refinement order are very closely related.

Posted by: Matt Oliveri on February 26, 2018 1:10 AM

Re: Gradual typing

First, I highly suggest you read the Mellies-Zeilberger paper, because subtyping is handled in that paper in a nice “fibrational” way.

Second, as to the relationship between gradual typing and their approach to refinement types, I think they are closely related, but there is no direct analog for the type dynamism relation in their work. In their work, \sqsubseteq is a relation between refinement types and “base” types, whereas in gradual typing it is between gradual types. So for instance, in gradual typing the relation is reflexive and transitive, but that property doesn’t make sense for refinement types.

One way to realize at least the judgments A \sqsubseteq ? as refinement relations would be to modify the slice category construction we use in this paper to one where horizontal arrows have to be “over” the identity on ?. This is actually a more natural definition of comma category, and would furthermore give you a functor from your “gradual category” G to the “dynamic category” D. You could think of this as saying every gradually typed term is equipped with a specified “type erasure” that it is less dynamic than. This is well motivated from the study of the syntax of gradual typing, where this is true, but obscures the \eta laws, which were a focus of this paper.

Posted by: Max S. New on February 26, 2018 2:57 PM

Re: Gradual typing

First, I highly suggest you read the Mellies-Zeilberger paper, because subtyping is handled in that paper in a nice “fibrational” way.

OK, it’s on my list. For now though, I’m still in the middle of your paper with Dan Licata.

I’m not so good yet at understanding things from a categorical perspective, so it might take a while before I really understand the last part of your reply. I think it’s saying roughly that every gradually typed construction has a corresponding dynamically typed construction, which it can be considered a (term-level) refinement of. This does indeed seem like a nice property that should be true of gradually typed programming languages.

In their work, ⊑ is a relation between refinement types and “base” types, whereas in gradual typing it is between gradual types.

Yeah, this relation has spread to/from elsewhere, so I knew about it. It’s not the relation I’m talking about.

If you represent sets as base types, then being a subset of a set is asserted with a judgment like that, with different kinds of things on either side. But there is also the subset relation between subsets of the same set. The refinement relation I was talking about is more like the latter.

When you want refinements of a base type to be like subquotients of the type, rather than just subsets, you can use partial equivalence relations (PERs) on the intrinsic type. There is a “subtype” relation between PERs on the same type, which just says that one is a subrelation of the other. This turns out not to be the same as saying intuitively that one subquotient is a subquotient of the other. So the notion of “subset of a subset” splits in two. It’s “subquotient of a subquotient” that I’m calling the refinement relation, between two PERs on the same type.

Every base type with a well-behaved notion of equality has a least-refined subquotient, which is represented by the PER which is just equality (at that type). If you’re working with PER refinements of an untyped system, the least-refined subquotient is intuitively similar to ‘?’ from gradual typing, and the type dynamism relation is intuitively similar to the refinement relation between PERs.

When I first commented, I thought I saw hints of a rigorous connection, but it looks now like I was mistaken.

It seems intuitively true that subquotients are less dynamic than the originals. But I don’t see a good way to implement casts for quotients. Also, up/downcast traditionally refers to whether the cast goes with or against subtyping, not dynamism or refinement. So your terminology is confusing me in this case. Is casting from a type to a quotient of that type up or down?

Maybe gradual typing gives rise to a different notion of refinement than subquotienting.

Posted by: Matt Oliveri on February 27, 2018 4:34 AM

Re: Gradual typing

I’ve had some of these exact same ideas (dynamism as subquotient of a subquotient) and I think you can apply them to gradual typing.

However, like you imply you can’t just take an arbitrary subquotient Q of a base D and expect it to be possible to implement an ep pair between them Q \triangleleft D. Instead what you can do is just require that that ep pair be given as extra data, so you limit which subquotients are possible to model as gradual types. Also I think subquotient is not exactly the right word here because we have the term dynamism ordering, but it’s roughly correct.

One language design idea I had based on this was that you could have some types be “gradual” meaning they had an ep pair associated to them, but other types would be “logical” which would be arbitrary subquotients. Then the logical types could give you more precise guarantees but limit your ability to smoothly interoperate with dynamic code. As long as you still use gradual types at the interface you can still interoperate freely.

Posted by: Max S. New on February 27, 2018 1:12 PM

Re: Gradual typing

However, like you imply you can’t just take an arbitrary subquotient Q of a base D and expect it to be possible to implement an ep pair between them Q◃D.

Ah, I shouldn’t have said “implement”. Right, of course you can’t generally implement the checks for arbitrary refinements. The problems begin before you even get to quotients.

When I talked about starting with a gradual type system, and getting a refinement type system by requiring a proof that you don’t get type errors, I left it implicit (I shouldn’t have) that because the checks get optimized out, they don’t have to be decidable. Also, because of statically ruling out type errors, the term dynamism relation would presumably degenerate to typed program equivalence. (Or maybe the domain order. I haven’t read enough yet to know.)

Even so, I don’t see how to express (not “implement”) casts for quotients. Quotienting is intuitively placing restrictions on the uses of a type, not on its elements.

I peeked ahead at your Related Work section and saw the discussion of Findler and Blume’s “pairs of projections” approach. It seems plausible that an (undecidable) check on the continuation could easily express quotients.

Since gradual type systems seem to induce refinement type systems by requiring proof that type errors are avoided, I’m basically wondering how well “traditional” refinement systems fit into this picture.

If you’re interested in this connection, maybe you could find some awesome result that fits everything into one semantic framework.

One language design idea I had based on this was that you could have some types be “gradual” meaning they had an ep pair associated to them, but other types would be “logical” which would be arbitrary subquotients. Then the logical types could give you more precise guarantees but limit your ability to smoothly interoperate with dynamic code. As long as you still use gradual types at the interface you can still interoperate freely.

This kind of language would be really good for practical reasons. It’s much easier to be able to safely test your code to see if it’s “probably mostly correct” before you try to verify it. The more of your mistakes the language implementation can feasibly catch automatically, the better. So an extensible notion of gradual type is desirable.

But this is not the sort of connection I was thinking of. Types that you can actually implement as “gradual” will surely have to be a special case. But what can the “gradual way of thinking” tell us about the semantics of types in general?

Posted by: Matt Oliveri on February 28, 2018 4:40 AM

Re: Gradual typing

it is very easy to write down a program and just run it without fighting with the type checker. Anyone who’s had to restructure their Coq or Agda program to convince the type checker that it is terminating should be able to relate here.

Is that really a fair comparison? The type theories preferred by mathematicians do also ensure termination, but in general I thought that type-checking and termination-checking were rather different. In Agda, at least, I believe they are totally distinct: you can selectively turn off the termination-checker, but not the type-checker. And “real-world” statically typed languages like ML don’t generally enforce termination at all.

I’ve done a fair amount of programming in dynamically typed languages in the more distant past, but now that I have more experience using static typing, whenever I have to use a dynamically typed language I usually find myself “fighting with the dynamic types”, if that makes sense.

Posted by: Mike Shulman on February 19, 2018 5:36 PM

Re: Gradual typing

First, I mostly made the comparison to try to give people that are not personally familiar with dynamically typed languages some idea of why people reject the semantically much nicer statically typed languages. I’m not saying I agree with their rejection. To me the better argument for gradual typing is that even if every program from now on was written in ML or Agda, there is just too much working dynamic/unverified code in the world to rewrite and so we should seriously consider the problem of interoperating with and maintaining these legacy systems.

Also I think your termination vs type checking distinction is similar to the way many dynamically typed programmers think about static typing. Some people, including some computer science/programming language professors, will say that there should be a type checking vs syntax-checking distinction. Syntax-checking just makes sure the program parses and, if it is a decent language, also makes sure that all of the variables used are in scope. Then they say that even a language like ML has or should have a “dynamically typed semantics” that allows them to run the program at that point, because they understand type checking as just being layered on top of a dynamically typed language. In fact I believe GHC Haskell has a mode to do this (though you will get undefined behavior if there is a type error) and there are many typed languages that just erase the types when compiling, so in practice many languages can be viewed this way.

Posted by: Max S. New on February 19, 2018 7:38 PM

Re: Gradual typing

but now that I have more experience using static typing, whenever I have to use a dynamically typed language I usually find myself “fighting with the dynamic types”, if that makes sense.

I’d be very interested if you could elaborate a bit on this, Mike, or give an example.

even if I do think these languages are ugly and awful it is a fact that these languages are extremely popular: JavaScript, Python, Perl, Ruby, PHP, LISP/Scheme are all used to write real working code and we certainly don’t want to chuck out all of those codebases and laboriously rewrite them in a typed language.

that even if every program from now on was written in ML or Agda, there is just too much working dynamic/unverified code in the world to rewrite and so we should seriously consider the problem of interoperating with and maintaining these legacy systems.

There is perhaps a bit of an implicit assumption here that once one has understood the benefits of static typing, one would not wish to go back to a dynamically typed language. I just thought I’d add a bit of a counter-voice to that. I am working as a programmer (or ‘software engineer’, as people like to call it), but before I began working in industry, my serious programming experience was almost entirely in functional programming languages, both with and without dependent types, especially Haskell and Coq. I have worked on and thought a lot about type theory. Since working in industry, I have written production code in several languages. But my language of choice now for most things is Python, despite my background in functional languages (some of the places I would use something other than Python would be web servers, where I would typically use Go as a light layer on top of some Python APIs; or numerically intensive work, where I would typically use Fortran).

Now, knowing about type theory is definitely extremely helpful for writing good quality Python code. But since I am internally type-checking what I am doing, I find that I do not really need type signatures; they basically become boilerplate. I can proceed much more efficiently by not using them, but designing the code well and testing properly. Indeed, it is often very useful not to have the type constraints, because one becomes constricted into patterns that are not natural for what one is trying to do.

I have experienced numerous times that I wish to do something, and it has been implemented in the standard library in Python in a beautiful and simple way that makes me think: wow, that is just how I would have hoped it would be! I have never had that experience with, say, Java!

These languages don’t satisfy “type soundness” in the sense that they do not validate the \eta law for the different types, so the types are like a sanity-check, but you can’t really rely on them to do type-based reasoning.

Could you elaborate a bit upon this? To be concrete, if you are familiar with it, could you discuss this in the setting of mypy?

Posted by: Richard Williamson on February 20, 2018 8:42 AM

Re: Gradual typing

I meant to add that I feel that for formal proofs to become part of the working life of a mathematician, there needs to be a language that is more Python-like. We touched upon it in an earlier thread, but I don’t think that writing in Coq, Agda, and so on is efficient enough for practical use, and I don’t think that HoTT has helped much with this practical goal.

I find it very interesting to ponder what such a language could look like. As I mentioned on that other thread, it seems to me that it would need some kind of new, higher-level syntax/theory.

Posted by: Richard Williamson on February 20, 2018 8:51 AM

Re: Gradual typing

My point there was that even if you despise dynamically typed languages, gradual typing is still useful.

From the other side, I think one of the main benefits of gradual typing is that while you can use the dynamic language you like to develop code, later on when you revisit code to fix a bug or add a new feature, it is very nice to have verified typing information there to help you. It helps you understand the code and it also helps tools like automatic refactoring in Eclipse/VS to operate.

This isn’t just an argument from academia, huge software companies like Microsoft (TypeScript) and Facebook (Hack) have devoted a lot of resources into developing optionally typed languages and use them extensively for the software engineering benefits.

Re: type soundness

First, a caveat that I’m not intimately familiar with mypy, but based on skimming their website I think the following is correct.

The \eta law is different for every type but I’ll consider string as an example. The \eta law for strings would say essentially that there is nothing more to a string than a sequence of characters. More concretely this would say that if s : string in mypy then type(s) is ‘str’ and so for instance if I run a string operation like s += 'more string' then I know there will not be a runtime type error. Knowing that there’s no type error means I can move string operations around (even lifting them out of function definitions) without changing the behavior of the program. If the compiler is aware of the types, it can also optimize the code to remove the dynamic type check because it knows it will pass. This is a simple example of the kind of reasoning principles guaranteed by static type checking.

This will be true if all of your code is statically typed with mypy, but the point of gradual typing is for this to be true even if part of your code is dynamically typed. Which means that if I write a function

def f(s: str) -> str: ...

Then I can still rely on that s being a valid string, even if I allow this function to be called by dynamically typed code. The catch is of course that I now have to run the dynamic check somewhere else, but it means internally to this function, I can see that s : string and know that it “really is” a string.

Posted by: Max S. New on February 20, 2018 2:00 PM

Re: Gradual typing

This will be true if all of your code is statically typed with mypy, but the point of gradual typing is for this to be true even if part of your code is dynamically typed. Which means that if I write a function

    def f(s: str) -> str: ...

Then I can still rely on that s being a valid string, even if I allow this function to be called by dynamically typed code.

Ah, interesting. So let us take the following simple python program.

#!/usr/bin/python3

def hello(name: str) -> str:
    return "Hello " + name + "!"

def hello_how_are_you(name):
    return hello(name) + " How are you?"

print(hello_how_are_you("Bob"))

If one runs mypy on this, it does not complain. And when one runs the actual program, one obtains “Hello Bob! How are you?” on stdout.

Now, if I change “Bob” to 3, mypy still does not complain, but one will get a type error when running the program.

If I understand correctly, you are saying that a gradually typed variant of mypy would complain in the second example?

That would be interesting indeed. It seems like the compiler will have to do a lot of work, though. Also, if I replace

print(hello_how_are_you("Bob"))

by

def hello_number_how_are_you(n: int) -> str:
    return hello_how_are_you(n)

then mypy will complain, even though

hello_how_are_you

has no type signature. Is the difference between optional and gradual typing thus only in ‘end calls’ from a dynamically typed function?

This isn’t just an argument from academia, huge software companies like Microsoft (TypeScript) and Facebook (Hack) have devoted a lot of resources into developing optionally typed languages and use them extensively for the software engineering benefits.

Absolutely, any kind of easy-to-use type safety is likely to find widespread use, especially among larger or more conservative companies. Java is after all the industry standard. In a different direction, Go is an interesting case, because it is typed, but most of the time the types can be left implicit; some people feel it gives the best of both the static and dynamic worlds.

Posted by: Richard Williamson on February 20, 2018 3:46 PM

Re: Gradual typing

No, a gradually typed language would not prevent your second program (with 3 instead of "Bob") from type checking.

Instead, it would raise a type error at the function call ‘hello(3)’ with an error message that says “you called the function hello, which has type (name: str) -> str with an incompatible value 3”. This is a better error message than what you would get, especially if I wrote hello_how_are_you and someone else wrote hello. If hello were some big function written by someone else, possibly not even a co-worker, I would have to look deeply in their code to see if it was their fault or mine. This is similar to what Mike was saying about dynamic typing.
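To make the boundary check concrete, here is a toy Python decorator (my illustration only; real gradual checkers are not implemented this way) that enforces simple class annotations when a typed function is called from untyped code:

import functools
import inspect

def boundary_check(f):
    # check annotated argument types at call time, blaming the caller
    sig = inspect.signature(f)
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            ann = sig.parameters[name].annotation
            if ann is not inspect.Parameter.empty and not isinstance(value, ann):
                raise TypeError(
                    f"you called {f.__name__}, which expects {name}: "
                    f"{ann.__name__}, with an incompatible value {value!r}")
        return f(*args, **kwargs)
    return wrapper

@boundary_check
def hello(name: str) -> str:
    return "Hello " + name + "!"

# hello(3) now fails at hello's boundary, not somewhere inside its body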

But in addition to getting better error messages, the gradually typed program is more stable under simple program transformations you or your tools or the compiler might perform.

For example, if we have a function that takes a string and returns another function that takes a number and prints the string that many times:

def print_upper(s: str):
    up_s = s.upper()
    def printer(n: int):
        acc = ""
        for i in range(0, n):
            acc += up_s
        print(acc)
    return printer

If I call this function on a number and save f = print_upper(3) I will get an error in mypy and in gradually typed python, but if I move the s.upper() into the printer function:

def print_upper(s: str):
    def printer(n: int):
        up_s = s.upper()
        acc = ""
        for i in range(0, n):
            acc += up_s
        print(acc)
    return printer

and I write f = print_upper(3), then in gradually typed python I would still get the error, but in mypy there won’t be an error until I call f. These are the kinds of optimizations compilers do all the time based on heuristics to speed up your program, so the gradually typed version gives the compiler the option to do it whereas the dynamic version would take the compiler valuable CPU cycles to determine whether or not it was possible. I think it is also better from a programming perspective to have to worry less when I’m refactoring typed code whether or not I’m changing if or when a type error happens.

Posted by: Max S. New on February 20, 2018 7:08 PM

Re: Gradual typing

Whoops looks like my python code is not formatted correctly. Rather than repost it, here it is in a gist.

Posted by: Max S. New on February 20, 2018 7:11 PM

Re: Gradual typing

This is a better error message than what you would get

Well, here is the error I get.

Traceback (most recent call last):
  File "./test.py", line 14, in <module>
    print(hello_how_are_you(3))
  File "./test.py", line 7, in hello_how_are_you
    return hello(name) + " How are you?"
  File "./test.py", line 4, in hello
    return "Hello " + name + "!"
TypeError: must be str, not int

That seems to me to express all I need to know; in particular, it includes all the content of the error message you described (the nature of the type error, and where the problem occurs). It is true that if

hello

instead took two string variables, and I called

hello_how_are_you

with one string variable and one integer variable, then the TypeError would not tell me which variable was the problem, but that is hardly an issue in practise.

then in gradually typed python I would still get the error, but in mypy there won’t be an error until I call f.

I don’t quite follow here. If one does not call f, then one does not get an error when one runs the program. Surely gradual typing should only move runtime errors to compile time, not introduce compile time errors when the program runs?

so the gradually typed version gives the compiler the option to do it whereas the dynamic version would take the compiler valuable CPU cycles to determine whether or not it was possible.

A few comments to this:

1) In the case of Python, one can compare mypy and Cython. Cython is explicitly focused on increasing performance. Static typing is a part of that, but only a part (things like stripping off the object-oriented layer as much as possible in favour of efficient array manipulation are very significant).

2) In industry, the type safety argument is much more likely to be what people are interested in than the performance benefits, unless one is doing something very intensive like numerical computations.

3) I do not yet see how gradual typing provides a significant benefit over optional typing in this regard.

I think it is also better from a programming perspective to have to worry less when I’m refactoring typed code whether or not I’m changing if or when a type error happens.

The benefits of type safety for code maintainability are certainly well-known, but, again, what I do not yet follow is the distinction between optional and gradual typing in this regard.

(Of course the mathematics of your article may well be very interesting despite all this; I am just trying to understand the goal first :-)).

Posted by: Richard Williamson on February 21, 2018 11:13 AM

Re: Gradual typing

A few points

  1. You may be able to recover the provenance of the type error using stack-traces, but as the complexity of the programs and the types gets higher it becomes more difficult. For instance if I’m passed a function Class1 -> Class2 from dynamic code and I call a method on a Class2 object that function returns and that has the wrong type, it can be very hard to trace the provenance back to the original dynamically typed function. The idea of blame which is used in most gradually typed languages (not actually in my paper) is to capture this info automatically based on the type structure and type errors.

  2. Gradual Typing can introduce new type errors, the idea being that these are type errors happening in your code, i.e. some variable was declared to have a certain type and it was instantiated with an invalid value. Gradual Typing takes the long view on bug catching here: you said that that variable not having that type is a bug, so we found it for you. In dynamic typing, you won’t find that until later, but that’s because dynamic typing doesn’t check anything until the last possible moment. If you don’t want the type error you can always make the type of the variable more dynamic, like ?.

  3. I’ve been sticking to simple types because they are easier to make concrete in a short comment, but the difference between optional and gradual becomes enormous when we move to more “powerful” type systems. Specifically, if we talk about things like parametric polymorphism and security (non-interference) types. All of these have more powerful reasoning principles associated to them that are considerably weakened without enforcing them dynamically.

For instance, parametric polymorphism enforces data abstraction, meaning if I provide something of existential type \exists \alpha. A, where the \alpha is my underlying representation, then the code I link with will not be able to “forge” an \alpha. This means I can completely swap out the representation of \alpha as long as I change the methods in an equivalent way. The basis for this is called parametricity. You don’t get this benefit if you don’t “enforce” the type at runtime: you have to make sure that you don’t accept \alpha values that didn’t originate in your source program. Gradual typing and parametricity is described here.

Next, “security” types allow you to put tags on types to say, for instance, “this string is secret” if it is a password. If I want to link non-security-aware code with code with security types, then at runtime I need to make sure the untrusted code never branches on the password. If you don’t enforce that at runtime, the security type doesn’t mean anything. A paper on gradual security types is here.

Posted by: Max S. New on February 21, 2018 12:29 PM

Re: Gradual typing

Thank you, this was very interesting and useful. Point 1) still seems more like an issue with how the compiler presents its information than necessarily a defect of the type system, but I can take your word for it!

Would you be able to express the distinction between optional and gradual typing more formally? E.g. can one adjust (relax, I guess?) your syntax and semantics in some way to obtain something for optional typing?

Posted by: Richard Williamson on February 21, 2018 12:59 PM

Re: Gradual typing

I’m sure the static vs dynamic typing argument has been had to death all over the Internet, and I don’t think we should replay it all here. But since I made the comment and you asked, I’ll give an example from my own experience, and try to summarize some thoughts.

I don’t often have to use dynamically typed languages these days, so I don’t have lots of examples ready to hand. But the one that really convinced me that dynamic typing is an Abomination That Must Die is when (after a long and very frustrating debugging session) I discovered that when I passed a custom key function to a Python sorting routine, the floating point numbers returned by my key function were silently being converted to integers before being compared.

(I thought that the sorting routine in question was the one in the Python standard library, but I just tried again right now and can’t reproduce this behavior with it; so either this bug has been fixed in the past ~7 years or else I’m misremembering the details. But the point is that it certainly could happen, and I give you my word that it did happen to me. And yes, a sufficiently determined programmer could create a similar bug in a statically typed language, but it would be much harder. Ordinarily, the output type of the key function would either be something concrete like “int”, in which case my attempt to use floats would have been a compile-time error; or else something abstract like “anything supporting a comparison routine”, in which case the floats would have been compared as floats. To reproduce this bug, the output type would have to be something like “anything convertible to an int”, which, while it would compile with my key function returning floats, would have been a much more obvious clue to me that perhaps my floats were not being treated as floats, as well as being more obviously a bad idea to anyone looking at the library code and thus more likely to be fixed.)

Of course this was bad programming on the part of the library author. A good programmer, especially one who has experience in type theory and typed languages, can, as you say, “internally type-check” and write well-typed code in a dynamically typed language. But by Sturgeon’s Law, there will always be plenty of bad programmers — and let’s be honest, even good programmers make mistakes occasionally.

There’s a tradeoff of course: static typing means more time spent making a program compile, while dynamic typing means more time spent debugging run-time errors. Maybe some people prefer one activity or the other, and maybe one of them takes less time overall than the other. But what clearly tips the balance in favor of static typing, for me, is that run-time bugs (unlike compile-time type errors) don’t always get caught before code is released, and often have serious real-world consequences such as security holes. Anything that reduces the incidence of such things — statistically, overall, bearing in mind the prevalence of bad programmers — is a good thing, and perhaps even a moral imperative. (-:

I have enjoyed programming in Python (and even Perl — actually, I like Perl better than Python). But in most cases, the good things about those languages occur despite their dynamic typing rather than because of it. And on the other hand, it’s highly misleading to bash static typing by holding up a language like Java (or, heaven forbid, C++) whose type system is actually rather rudimentary. For static typing to be pleasant and practical one needs a more flexible type system such as in Haskell, ML, or Scala.

Posted by: Mike Shulman on February 20, 2018 4:48 PM

Re: Gradual typing

Here’s another example of what I mean by “fighting with the dynamic types”. Suppose I’m calling a function foo that takes an argument called data. What sort of thing do I have to pass as that argument? The documentation for foo doesn’t say. So I poke through the code and I see that data is being accessed like a dict, data['name'], data['age'] etc. So I think “ah-ha, data is a dict” and I call the function by passing in a dict. This works fine for a long time, until eventually my program gets some unusual user input that causes it to invoke a different function bar from the same library foo was defined in, revealing my mistake: data is not supposed to be a dict, but rather an object with a custom class that happens to subclass dict and hence can be accessed like a dict. The body of foo only accesses it like a dict, but bar also calls some other methods on it; thus the program suddenly crashes with a run-time type error occurring in a library function that I didn’t even know I was calling.
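A condensed sketch of that scenario (all names invented for illustration):

class UserData(dict):
    # subclasses dict, so code that only indexes it can't tell the difference
    def validate(self):
        assert 'name' in self and 'age' in self

def foo(data):
    return data['name']    # only indexes, so a plain dict works fine

def bar(data):
    data.validate()        # crashes on a plain dict
    return data['age']

foo({'name': 'x', 'age': 1})    # fine for a long time...
# bar({'name': 'x', 'age': 1})  # AttributeError: 'dict' object has no attribute 'validate'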

Posted by: Mike Shulman on February 20, 2018 5:28 PM

Re: Gradual typing

Thank you for the elaboration! As you suggest, to avoid derailing the thread, I’ll not take up most of the points you mention! But could you elaborate on, or be a bit more specific (maybe with a bit of code or pseudo-code) on the details of, your second example, the one with

foo

and

bar

? I could not quite follow it. I am genuinely interested to understand the examples/experiences you describe, because I do not myself experience this kind of thing!

Other than that, a couple of tiny remarks:

1) The type systems of Java and Haskell, say, are quite different in flavour, I would say. In Java (as in any object-oriented language), one has a lot of freedom to construct one’s own types, but there are fewer type-theoretic concepts built in and with syntactic sugar (but there is, naturally, syntactic sugar for other things that are harder to achieve in Haskell). E.g. one can define monads and work at a monadic level in Java if one so desires, but there is no concept in the standard library to express this (yet, at least). Also things like recursion are not optimised to the same degree, so other patterns are typically used. What I mean to say is that I wouldn’t say that it is really a paucity of the type system that makes Java development relatively slow, it is more the verbosity and amount of boilerplate needed to work within it.

It is a platitude, but there is of course no one programming language suitable for everything. I actually don’t mind Java at all, it is just that I am much more efficient in, say, Python or Go, and sometimes the syntax of Python is, I find, really elegant.

2) Moving of bugs to compile time and increased readability are certainly the two main reasons that static types are used in the hope of increasing maintainability. But, to appeal to the same kind of law as you described, it is still perfectly possible to use static types badly, and bugs still very often slip through in statically typed languages :-). Thorough and correct testing is certainly the most important factor in practise for good code quality. Again, there are no universal truths here I think; in some teams, one may get equal quality with a dynamic language as with a static language, and the development time will certainly be faster. In other teams, use of a dynamic language would be a disaster!

Posted by: Richard Williamson on February 21, 2018 11:44 AM

Re: Gradual typing

Of course it is possible to use static types badly, and statically typed programs can still contain bugs. (Unless you go all the way to fully verified code, where the well-typedness of a program ensures its correctness, at least in the sense of meeting its specification.) But just because something isn’t perfect doesn’t mean it isn’t better than the alternative.

I don’t think I want to take up any more space in this thread discussing the other points further, and I don’t have time for it right now anyway. Maybe another time, perhaps privately.

Posted by: Mike Shulman on February 21, 2018 1:19 PM

Re: Gradual Typing

How should I think of the dynamism relation on terms? I think I understand the dynamism relation on types: A \sqsubseteq B means that A has a more informative type than B, so that if I have a term of type A I can regard it as a term of type B (upcast), and if I have a term of type B I can ask whether it happens to be a term of type A and otherwise give a type error (downcast). For instance, we have (A \to B) \sqsubseteq (A \to ?), where the upcast forgets that a function defined on A takes values in B while the downcast asks whether a function defined on A takes values in B (or, I guess, asks whether each value individually lies in B?). And we also have (A \to B) \sqsubseteq (? \to B), where the upcast applies a function defined on A to an arbitrary value by trying to downcast it into A, and the downcast restricts an arbitrary function to only take A as input.

But if I have two terms t, u of a fixed type A, what does it mean to say t \sqsubseteq u? You mentioned that the type error is the “least dynamic” term of each type. I can see this a little bit from the adjunction counit \langle B \leftarrowtail A \rangle \langle A \twoheadleftarrow B \rangle u \sqsubseteq u, in which \langle B \leftarrowtail A \rangle \langle A \twoheadleftarrow B \rangle u has to be a type error if u : B isn’t actually an element of A, so it makes sense that a type error has to be very small since it has to be less than all the elements of B that are not elements of A. But what other dynamism relations are there on terms? Is t \sqsubseteq u something like the definedness ordering in domain theory, where the bottom element \bot diverges (in this case, gives a type error) and t \sqsubseteq u means that “t gives at least as many type errors as u”? Can you give an example of a nontrivial relation t \sqsubseteq u where neither t nor u is just “a type error”?

Posted by: Mike Shulman on February 19, 2018 8:39 PM

Re: Gradual Typing

It is basically the same as the domain ordering, but using the (finitary) type error \mho rather than the (infinitary) \bot as undefined. So for example if you take the pointed domain preorder model, in the domain of functions \mathbb{N} \to \mathbb{N}_{\mho,\bot}, f \sqsubseteq g if for every n \in \mathbb{N} either f(n) = \mho or f(n) = g(n).
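A quick executable rendering of that ordering (my sketch, modeling \mho as a distinguished Python value and checking only a finite part of \mathbb{N}):

MHO = object()   # stands in for the type error

def below(f, g, domain):
    # f ⊑ g: at every point, f errors or agrees with g
    return all(f(n) is MHO or f(n) == g(n) for n in domain)

f = lambda n: MHO if n % 2 else n + 1    # errors on odd inputs
g = lambda n: n + 1

assert below(f, g, range(10))        # f ⊑ g
assert not below(g, f, range(10))    # but not g ⊑ f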

So for example for your A \to B \sqsubseteq A \to ? the downcast will return a function that when applied downcasts the result to B.

Posted by: Max S. New on February 20, 2018 3:30 AM

Re: Gradual Typing

And just to make sure that I understand, the fact that you use coreflections instead of adjunctions corresponds to saying that t = \langle A \twoheadleftarrow B \rangle\langle B \leftarrowtail A \rangle t, i.e. that if you upcast from A to B and then downcast back from B to A, you get back what you started with? In other words, that making something more dynamic doesn’t lose information? That seems obviously something you would expect; is there any context in which it wouldn’t be true?

What property of a (thin) equipment corresponds to this coreflection property? Is it something like “all horizontal-identity 2-cells are cartesian”, i.e. all vertical arrows are “fully faithful” in the equipment-theoretic sense?

Posted by: Mike Shulman on February 19, 2018 8:54 PM

Re: Gradual Typing

Your characterizations of the coreflections are right. I don’t know any programming examples where they are not retracts, and I recall that some famous notes on domain theory commented on this fact that while much of the theory works for adjunctions, embedding-projection pairs are sufficient in practice.

Furthermore, I don’t know any interesting models of Preorder Type Theory that don’t satisfy the retract axiom because my model construction uses it to ensure the vertical category is thin.

I hadn’t thought about other ways to characterize the property in an equipment, but your characterization sounds right to me, if I’m interpreting it correctly.

Posted by: Max S. New on February 20, 2018 3:52 AM
