## May 26, 2015

### A 2-Categorical Approach to the Pi Calculus

#### Posted by John Baez

guest post by Mike Stay

Greg Meredith and I have a short paper that’s been accepted for Higher-Dimensional Rewriting and Applications (HDRA) 2015 on modeling the asynchronous polyadic pi calculus with 2-categories. We avoid domain theory entirely and model the operational semantics directly; full abstraction is almost trivial. As a nice side-effect, we get a new tool for reasoning about consumption of resources during a computation.

It’s a small piece of a much larger project, which I’d like to describe here in a series of posts. This post will talk about lambda calculus for a few reasons. First, lambda calculus is simpler than pi calculus, but complex enough to illustrate one of our fundamental insights. Lambda calculus is to serial computation what pi calculus is to concurrent computation: lambda calculus talks about a single machine doing a computation, while pi calculus talks about machines communicating over a network with potentially random delays. There is at most one possible outcome for a computation in the lambda calculus, while there are many possible outcomes in a computation in the pi calculus. Both the lazy lambda calculus and the pi calculus, however, have as an integral part of their semantics the notion of waiting for a sub-computation to complete before moving on to another one. Second, the denotational semantics of lambda calculus in Set is well understood, as is its generalization to cartesian closed categories; this semantics is far simpler than the denotational semantics of pi calculus and serves as a good introduction. The operational semantics of lambda calculus is also simpler than that of pi calculus, and there is previous work on modeling it using higher categories.

### History

Alonzo Church invented the lambda calculus as part of his attack on Hilbert’s third problem, also known as the Entscheidungsproblem, which asked for an algorithm to solve any mathematical problem. Church published his proof that no such algorithm exists in 1936. Turing invented his eponymous machines, also to solve the Entscheidungsproblem, and published his independent proof a few months after Church. When he discovered that Church had beaten him to it, Turing proved in 1937 that the two approaches were equivalent in power. Since Turing machines were much more “mechanical” than the lambda calculus, the development of computing machines relied far more on Turing’s approach, and it was only decades later that people started writing compilers for more friendly programming languages. I’ve heard it quipped that “the history of programming languages is the piecemeal rediscovery of the lambda calculus by computer scientists.”

The lambda calculus consists of a set of “terms” together with some relations on the terms that tell how to “run the program”. Terms are built up out of “term constructors”; in the lambda calculus there are three: one for variables, one for defining functions (Church denoted this operation with the Greek letter lambda, hence the name of the calculus), and one for applying those functions to inputs. I’ll talk about these constructors and the relations more below.

Church introduced the notion of “types” to avoid programs that never stop. Modern programming languages also use types to avoid programmer mistakes and encode properties about the program, like proving that secret data is inaccessible outside certain parts of the program. The “simply-typed” lambda calculus starts with a set of base types and takes the closure under the binary operation $\to$ to get a set of types. Each term is assigned a type; from this one can deduce the types of the variables used in the term. An assignment of types to variables is called a typing context.

The search for a semantics for variants of the lambda calculus has typically been concerned with finding sets or “domains” such that the interpretation of each lambda term is a function between domains. Scott worked out a domain $D$ such that the continuous functions from $D$ to itself are precisely the computable ones. Lambek and Scott generalized the category where we look for semantics from Set to arbitrary cartesian closed categories (CCCs).

Lambek and Scott constructed a CCC out of lambda terms; we call this category the syntactical category. Then a structure-preserving functor from the syntactical category to Set or some other CCC would provide the semantics. The syntactical category has types as objects and equivalence classes of certain terms as morphisms. A morphism in the syntactical category goes from a typing context to the type of the term.

John Baez has a set of lecture notes from Fall 2006 through Spring 2007 describing Lambek and Scott’s approach to the category theory of lambda calculus and generalizing it from cartesian closed categories to symmetric monoidal closed categories so it can apply to quantum computation as well: rather than taking a functor from the syntactical category into Set, we can take a functor into Hilb instead. He and I also have a “Rosetta stone” paper summarizing the ideas and connecting them with the corresponding generalization of the Curry-Howard isomorphism.

The Curry-Howard isomorphism says that types are to propositions as programs are to proofs. In practice, types are used in two different ways: one as propositions about data and the other as propositions about code. Programming languages like C, Java, Haskell, and even dynamically typed languages like JavaScript and Python use types to talk about propositions that data satisfies: is it a date or a name? In these languages, equivalence classes of programs constitute constructive proofs. Concurrent calculi are far more concerned about propositions that the code satisfies: can it reach a deadlocked state? In these languages, it is the rewrite rules taking one term to another that behave like proofs. Melliès and Zeilberger’s excellent paper “Functors are Type Refinement Systems” relates these two approaches to typing to each other.

Note that Lambek and Scott’s approach does not have the sets of terms or variables as objects! The algebra that defines the set of terms plays only a minor role in the category; there’s no morphism in the CCC, for instance, that takes a term $t$ and a variable $x$ to produce the term $\lambda x.t$. This failure to capture the structure of the term in the morphism wasn’t a big deal for lambda calculus because of “confluence” (see below), but it turns out to matter a lot more in calculi like Milner’s pi calculus that describe communicating over a network, where messages can be delayed and arrival times matter for the end result (consider, for instance, two people trying to buy online the last ticket to a concert).

The last few decades have seen domains becoming more and more complicated in order to try to “unerase” the information about the structure of terms that gets lost in the domain theory approach and recover the operational semantics. Fiore, Moggi, and Sangiorgi, Stark and Cattani, Stark, and Winskel all present domain models of the pi calculus that recursively involve the power set in order to talk about all the possible futures for a term. Industry has never cared much about denotational semantics: the Java Virtual Machine is an operational semantics for the Java language.

### What we did

Greg Meredith and I set out to model the operational semantics of the pi calculus directly in a higher category rather than using domain theory. An obvious first question is, “What about types?” I was particularly worried about how to relate this approach to the kind of thing Scott and Lambek did. Though it didn’t make it into the HDRA paper and the details won’t make it into this post, we found that we’re able to use the “type-refinement-as-a-functor” idea of Melliès and Zeilberger to show how the algebraic term-constructor functions relate to the morphisms in the syntactical category.

We’re hoping that this categorical approach to modeling process calculi will help with reasoning about practical situations where we want to compose calculi; for instance, we’d like to put a hundred pi calculus engines around the edges of a chip and some ambient calculus engines, which have nice features for managing the location of data, in the middle to distribute work among them.

### Lambda calculus

The lambda calculus consists of a set of “terms” together with some relations on the terms. The set $T$ of terms is defined recursively, parametric in a countably infinite set $V$ of variables. The base terms are the variables: if $x$ is an element of $V$, then $x$ is a term in $T$. Next, given any two terms $t, t' \in T$, we can apply one to the other to get $t(t')$. We say that $t$ is in the head position of the application and $t'$ in the tail position. (When the associativity of application is unclear, we’ll also use parentheses around subterms.) Finally, we can abstract out a variable from a term: given a variable $x$ and a term $t,$ we get a term $\lambda x.t$.

The term constructors define an algebra, a functor $LC$ from Set to Set that takes any set of variables $V$ to the set of terms $T = LC(V)$. The term constructors themselves become functions: $\begin{array}{rll} -\colon & V \to T &\mbox{variable}\\ -(-)\colon & T \times T \to T &\mbox{application}\\ \lambda\colon & V \times T \to T &\mbox{abstraction} \end{array}$
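
As a concrete illustration, the three constructors can be written as functions that build a term tree. This is only a sketch in Python; the tagged-tuple encoding and the names `var`, `app`, `lam` are my own choices, not anything from the paper.

```python
# Lambda terms as tagged tuples (an illustrative encoding).
# The three constructors, typed as in the algebra above:
def var(x):        # - : V -> T        (variable)
    return ('var', x)

def app(t, u):     # -(-) : T x T -> T (application; t is the head, u the tail)
    return ('app', t, u)

def lam(x, t):     # lambda : V x T -> T (abstraction: bind x in t)
    return ('lam', x, t)

# The term (lambda x. lambda y. x)(lambda x. x) used later in the post:
fst = lam('x', lam('y', var('x')))
idf = lam('x', var('x'))
term = app(fst, idf)
print(term)
# ('app', ('lam', 'x', ('lam', 'y', ('var', 'x'))), ('lam', 'x', ('var', 'x')))
```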

Church described three relations on terms. The first relation, alpha, relates any two lambda abstractions that differ only in the variable name. This is exactly the same as when we consider the function $f(x) = x^2$ to be identical to the function $f(y) = y^2$. The third relation, eta, says that there’s no difference between a function $f$ and a “middle-man” function that gets an input $x$ and applies the function $f$ to it: $\lambda x.f(x) = f$. Both alpha and eta are equivalences.

The really important relation is the second one, “beta reduction”. In order to define beta reduction, we have to define the free variables of a term: a variable occurring by itself is free; the set of free variables in an application is the union of the free variables in its subterms; and the free variables in a lambda abstraction are the free variables of the subterm except for the abstracted variable. $\begin{array}{rl} \mathrm{FV}(x) = & \{x\} \\ \mathrm{FV}(t(t')) = & \mathrm{FV}(t) \cup \mathrm{FV}(t') \\ \mathrm{FV}(\lambda x.t) = & \mathrm{FV}(t) \setminus \{x\} \\ \end{array}$
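
The three clauses translate directly into a recursive function. A minimal Python sketch, over the same tagged-tuple encoding of terms (the encoding is my own choice):

```python
def free_vars(t):
    """The three clauses above, over tagged-tuple terms."""
    tag = t[0]
    if tag == 'var':                        # FV(x) = {x}
        return {t[1]}
    if tag == 'app':                        # FV(t(t')) = FV(t) union FV(t')
        return free_vars(t[1]) | free_vars(t[2])
    return free_vars(t[2]) - {t[1]}         # FV(lambda x.t) = FV(t) minus {x}

# lambda x. x(y) has exactly one free variable, y:
print(free_vars(('lam', 'x', ('app', ('var', 'x'), ('var', 'y')))))  # {'y'}
```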

Beta reduction says that when we have a lambda abstraction $\lambda x.t$ applied to a term $t'$, then we replace every free occurrence of $x$ in $t$ by $t'$: $(\lambda x.t)(t') \downarrow_\beta t\{t' / x\},$ where we read the right hand side as “$t$ with $t'$ replacing $x$.” We see a similar replacement of $y$ in action when we compose the following functions: $\begin{array}{rl} f(x) = & x + 1 \\ g(y) = & y^2 \\ g(f(x)) = & (x + 1)^2 \\ \end{array}$
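
Substitution and a single root-level beta step can be sketched as follows (Python, tagged-tuple terms). Note this is a naive substitution that ignores variable capture; a full implementation would alpha-rename bound variables that clash with the free variables of the substituted term.

```python
def subst(t, x, u):
    """t{u/x}: replace free occurrences of x in t by u.
    Naive: assumes bound variables of t don't clash with free variables of u."""
    tag = t[0]
    if tag == 'var':
        return u if t[1] == x else t
    if tag == 'app':
        return ('app', subst(t[1], x, u), subst(t[2], x, u))
    # 'lam': if x is rebound here, there are no free occurrences of x below
    return t if t[1] == x else ('lam', t[1], subst(t[2], x, u))

def beta(t):
    """One beta step at the root: (lambda x.b)(u) -> b{u/x}; None if no redex."""
    if t[0] == 'app' and t[1][0] == 'lam':
        _, x, body = t[1]
        return subst(body, x, t[2])
    return None

# (lambda x.x)(y) reduces to y:
print(beta(('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))))  # ('var', 'y')
```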

We say a term has a normal form if there’s some sequence of beta reductions that leads to a term where no beta reduction is possible. When the beta rule applies in more than one place in a term, it doesn’t matter which one you choose to do first: any sequence of betas that leads to a normal form will lead to the same normal form. This property of beta reduction is called confluence. Confluence means that the order of performing various subcomputations doesn’t matter so long as they all finish: in the expression $(2 + 5) * (3 + 6)$ it doesn’t matter which addition you do first or whether you distribute the expressions over each other; the answer is the same.

“Running” a program in the lambda calculus is the process of computing the normal form by repeated application of beta reduction, and the normal form itself is the result of the computation. Confluence, however, does not mean that when there is more than one place we could apply beta reduction, we can choose any beta reduction and be guaranteed to reach a normal form. The following lambda term, customarily denoted $\omega$, takes an input and applies it to itself: $\omega = \lambda x.x(x)$ If we apply $\omega$ to itself, then beta reduction produces the same term, customarily called $\Omega$: $\Omega = \omega(\omega)$ $\Omega \downarrow_\beta \Omega.$ It’s an infinite loop! Now consider this lambda term that has $\Omega$ as a subterm: $(\lambda x.\lambda y.x)(\lambda x.x)(\Omega)$ It says, “Return the first element of the pair (identity function, $\Omega$)”. If it has an answer at all, the answer should be “the identity function”. The question of whether it has an answer becomes, “Do we try to calculate the elements of the pair before applying the projection to it?”
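
One can check mechanically that a single beta step on $\Omega$ reproduces $\Omega$ exactly. A Python sketch with naive substitution, which is safe here because $\omega$ is a closed term:

```python
omega = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))  # omega = lambda x. x(x)
Omega = ('app', omega, omega)                              # Omega = omega(omega)

def subst(t, x, u):   # naive t{u/x}; capture-safe here since omega is closed
    tag = t[0]
    if tag == 'var':
        return u if t[1] == x else t
    if tag == 'app':
        return ('app', subst(t[1], x, u), subst(t[2], x, u))
    return t if t[1] == x else ('lam', t[1], subst(t[2], x, u))

# One beta step: (lambda x. x(x))(omega) -> (x(x)){omega/x} = omega(omega)
reduced = subst(('app', ('var', 'x'), ('var', 'x')), 'x', omega)
print(reduced == Omega)  # True -- the infinite loop
```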

### Lazy lambda calculus

Many programming languages, like Java, C, JavaScript, Perl, Python, and Lisp are “eager”: they calculate the normal form of inputs to a function before calculating the result of the function on the inputs; the expression above, implemented in any of these languages, would be an infinite loop. Other languages, like Miranda, Lispkit, Lazy ML, and Haskell and its predecessor Orwell are “lazy” and only apply beta reduction to inputs when they are needed to complete the computation; in these languages, the result is the identity function. Abramsky wrote a 48-page paper about constructing a domain that captures the operational semantics of lazy lambda calculus.

The idea of representing operational semantics directly with higher categories originated with R. A. G. Seely, who suggested that beta reduction should be a 2-morphism; Barney Hilken and Tom Hirschowitz have also contributed to looking at lambda calculus from this perspective. In the “Rosetta stone” paper that John Baez and I wrote, we made an analogy between programs and Feynman diagrams. The analogy is precise as far as it goes, but it’s unsatisfactory in the sense that Feynman diagrams describe processes happening over time, while Lambek and Scott mod out by the process of computation that occurs over time. If we use 2-categories that explicitly model rewrites between terms, we get something that could potentially be interpreted with concepts from physics: types would become analogous to strings, terms would become analogous to space, and rewrites would happen over time. The idea from the “algebra of terms” perspective is that we have objects $V$ and $T$ for variables and terms, term constructors as 1-morphisms, and the nontrivial 2-morphisms generated by beta reduction. Seely showed that this approach works fine when you’re unconcerned with the context in which reduction can occur.

This approach, however, doesn’t work for lazy lambda calculus! Horizontal composition in a 2-category is a functor, so if a term $t$ reduces to a term $t'$, then by functoriality, $\lambda x.t$ must reduce to $\lambda x.t'$—but this is forbidden in the lazy lambda calculus! Functoriality of horizontal composition is a “relativity principle” in the sense that reductions in one context are the same as reductions in any other context. In lazy programming languages, on the other hand, the “head” context is privileged: reductions only happen here. It’s somewhat like believing that measuring differences in temperature is like measuring differences in space, that only the difference is meaningful—and then discovering absolute zero. When beta reduction can happen anywhere in a term, there are too many 2-morphisms to model lazy lambda calculus.

In order to model this special context, we reify it: we add a special unary term constructor $[-]\colon T \to T$ that marks contexts where reduction is allowed, then redefine beta reduction so that the term constructor $[-]$ behaves like a catalyst that enables the beta reduction to occur. This lets us cut down the set of 2-morphisms to exactly those that are allowed in the lazy lambda calculus; Greg and I did essentially the same thing in the pi calculus.

More concretely, we have two generating rewrite rules. The first propagates the reduction context to the head position of the term; the second is beta reduction restricted to a reduction context. $[t(t')]\, \downarrow_{ctx}\, [[t](t')]$ $[[\lambda x.t](t')]\, \downarrow_\beta\, [t]\, \{t'/x\}$ When we surround the example term from the previous section with a reduction context marker, we get the following sequence of reductions: $\begin{array}{rl} & [(\lambda x.\lambda y.x)(\lambda x.x)(\Omega)] \\ \downarrow_{ctx}& [[(\lambda x.\lambda y.x)(\lambda x.x)](\Omega)] \\ \downarrow_{ctx}& [[[\lambda x.\lambda y.x](\lambda x.x)](\Omega)] \\ \downarrow_{\beta}& [[\lambda y.(\lambda x.x)](\Omega)]\\ \downarrow_{\beta}& [\lambda x.x] \\ \end{array}$ At the start, none of the subterms were of the right shape for beta reduction to apply. The first two reductions propagated the reduction context down to the projection in head position. At that point, the only reduction that could occur was at the application of the projection to the first element of the pair, and after that to the second element. At no point was $\Omega$ ever in a reduction context.
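
The reduction sequence above can be reproduced mechanically. This is only a sketch: the two rules are implemented over tagged tuples with `('box', t)` standing for $[t]$, and the strategy of searching down the spine of boxes is my own illustrative choice, not the paper's formalism.

```python
def subst(t, x, u):   # naive t{u/x}; adequate here since the example terms are closed
    tag = t[0]
    if tag == 'var':
        return u if t[1] == x else t
    if tag == 'app':
        return ('app', subst(t[1], x, u), subst(t[2], x, u))
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, u))
    return ('box', subst(t[1], x, u))      # 'box'

def step(t):
    """Apply one rewrite inside the outermost box, or return None."""
    if t[0] != 'box':
        return None
    inner = t[1]
    if inner[0] != 'app':
        return None
    head, tail = inner[1], inner[2]
    if head[0] == 'box' and head[1][0] == 'lam':     # beta: [[lam x.b](u)] -> [b]{u/x}
        _, x, body = head[1]
        return subst(('box', body), x, tail)
    if head[0] == 'box':                             # keep rewriting inside the head box
        s = step(head)
        return ('box', ('app', s, tail)) if s is not None else None
    return ('box', ('app', ('box', head), tail))     # ctx: [t(u)] -> [[t](u)]

fst = ('lam', 'x', ('lam', 'y', ('var', 'x')))       # lambda x. lambda y. x
idf = ('lam', 'x', ('var', 'x'))                     # lambda x. x
omega = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
Omega = ('app', omega, omega)

t = ('box', ('app', ('app', fst, idf), Omega))       # [(fst idf)(Omega)]
while True:
    s = step(t)
    if s is None:
        break
    t = s
print(t)  # ('box', ('lam', 'x', ('var', 'x'))) -- i.e. [lambda x.x]; Omega was never boxed
```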

### Compute resources

In order to run a program that does anything practical, you need a processor, time, memory, and perhaps disk space or a network connection or a display. All of these resources have a cost, and it would be nice to keep track of them. One side-effect of reifying the context is that we can use it as a resource.

The rewrite rule $\downarrow_{ctx}$ increases the number of occurrences of $[-]$ in a term while $\downarrow_\beta$ decreases the number. If we replace $\downarrow_{ctx}$ by the rule $[t(t')]\, \downarrow_{ctx'}\, [t](t')$ then the number of occurrences of $[-]$ can never increase. By forming the term $[[\cdots[t]\cdots]]$, we can bound the number of beta reductions that can occur in the computation of $t$.
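
To make the bracket counting concrete, here is a sketch of the $\downarrow_{ctx'}$ variant in the same tagged-tuple encoding (the one-step search strategy is my own). Starting with three brackets around the seed term, each beta consumes one bracket and the count never increases:

```python
def subst(t, x, u):   # naive t{u/x}; safe here (all terms involved are closed)
    tag = t[0]
    if tag == 'var':
        return u if t[1] == x else t
    if tag == 'app':
        return ('app', subst(t[1], x, u), subst(t[2], x, u))
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, u))
    return ('box', subst(t[1], x, u))      # 'box'

def count_boxes(t):
    return (t[0] == 'box') + sum(count_boxes(s) for s in t[1:] if isinstance(s, tuple))

def step(t):
    """One rewrite: ctx' moves a box onto the head; beta consumes a box."""
    if t[0] == 'box':
        inner = t[1]
        if inner[0] == 'app':
            h = inner[1]
            if h[0] == 'box' and h[1][0] == 'lam':   # beta: [[lam x.b](u)] -> [b]{u/x}
                _, x, body = h[1]
                return subst(('box', body), x, inner[2])
            return ('app', ('box', h), inner[2])     # ctx': [t(u)] -> [t](u)
        s = step(inner)
        return ('box', s) if s is not None else None
    if t[0] == 'app':
        s = step(t[1])
        return ('app', s, t[2]) if s is not None else None
    return None

a = ('lam', 'x', ('var', 'x'))                       # lambda x.x
b = ('lam', 'y', ('var', 'y'))                       # lambda y.y
t = ('box', ('box', ('box', ('app', a, ('app', b, ('var', 'z'))))))  # [[[a(b(z))]]]
counts = [count_boxes(t)]
while True:
    s = step(t)
    if s is None:
        break
    t = s
    counts.append(count_boxes(t))
print(t, counts)  # ('box', ('var', 'z')) [3, 3, 2, 2, 1]
```

With three brackets, two beta reductions fire and the computation finishes with one bracket left over; the non-increasing count sequence illustrates the resource bound.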

If we have a nullary constructor $c\colon 1 \to T$, then we can define $[t] = c(t)$ and let the program dynamically decide whether to evaluate an expression eagerly or lazily.

In the pi calculus, we have the ability to run multiple processes at the same time; each $[-]$ in that situation represents a core in a processor or computer in a network.

These are just the first things that come to mind; we’re experimenting with variations.

### Conclusion

We figured out how to model the operational semantics of a term calculus directly in a 2-category by requiring a catalyst to carry out a rewrite, which gave us full abstraction without needing a domain based on representing all the possible futures of a term. As a side-effect, it also gave us a new tool for modeling resource consumption in the process of computation. Though I haven’t explained how yet, there’s a nice connection between the “algebra-of-terms” approach that uses $V$ and $T$ as objects and Lambek and Scott’s approach that uses types as objects, based on Melliès and Zeilberger’s ideas about type refinement. Next time, I’ll talk about the pi calculus and types.

Posted at May 26, 2015 9:28 PM UTC


### Re: A 2-Categorical Approach to the Pi Calculus

Very interesting! It might help the lay reader if you say a bit about what “denotational semantics” and “operational semantics” mean — I still have trouble myself remembering those big fancy words. (-:

The problem of only wanting to $\beta$-reduce in special contexts reminds me a little bit of the problem of comodalities in dependent type theory, where we instead have an operation on types that we only want to be able to apply in the empty context. But I can’t think of any precise relationship.

Can you say anything about what the extra term constructor $[-]$ corresponds to categorically?

Posted by: Mike Shulman on May 26, 2015 10:09 PM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

Denotational semantics is about interpreting terms of a calculus as morphisms in a category. Many terms can map to the same morphism, as is the case with Lambek & Scott’s approach to lambda calculus. The objects of the category are called “domains”. Often, domains are sets with structure, but game semantics is a form of denotational semantics where domains are games and terms map to strategies.

Operational semantics is about interpreting terms of a calculus as objects in a category, with rewrites as the morphisms. It’s more concerned with how a function is computed than what the function is.

Seely’s contention was that higher category theory could let us do both.

Posted by: Mike Stay on May 27, 2015 6:55 PM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

Thanks!

Posted by: Mike Shulman on May 27, 2015 8:43 PM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

Thinking of a term calculus operationally, term contexts with one hole form a monoidal category, which in turn suggests interpreting term contexts as resources. $[-]$ can be seen as a computational resource where the rewrite rules say how the resource is moved around, produced, and consumed.

Posted by: Mike Stay on May 27, 2015 8:12 PM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

The inadequacy of the first 2-categorical semantics is not exactly a feature of laziness, but of practical machine (vs. symbolic) evaluation. Eager languages tend not to reduce under lambdas, either, because this is effectively run time code generation. Perhaps one could view certain JIT compilers as performing this, but then it could be done for lazy languages as well (and it probably doesn’t correspond very well for the most part).

Even Agda, which does full normalization of symbolic terms during type checking, does not implement evaluation under lambdas for the run time semantics. So if one wants to talk about those, something like your privileged contexts are necessary. Eager vs. lazy is just the order in which things become privileged.

Posted by: Dan Doel on May 27, 2015 5:36 AM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

Oops! You’re right, thanks. I have conflated laziness with reduction under a lambda, which is the real culprit. JavaScript is an eager language but the expression

    function () { while (1){} }


doesn’t cause an infinite loop until it’s invoked.

In the pi calculus, it’s reduction under the “receive” prefix, rather than under a lambda, that we can’t have.

Posted by: Mike Stay on May 28, 2015 6:09 PM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

I was going to say that laziness might make the 2-categorical semantics more incorrect. But I think that’s not actually true, either. They’re just different in which 2-cells aren’t supposed to exist.

For instance, eagerly you’re not allowed to do:

$(\lambda x y. y) \Omega \downarrow \lambda y. y$

because you’re supposed to spin on $\Omega$ forever. And lazily, you’re not allowed to do:

$(\lambda x. e_1) ((\lambda y. y) e_2) \downarrow (\lambda x. e_1) e_2$

In some cases, it might not make much difference in the end result. But rigorously modeling what an actual evaluation strategy does seems a fair bit more complex than modeling the ‘equivalent’ lambda terms in either case.

Posted by: Dan Doel on May 28, 2015 6:41 PM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

For us slow learners in the audience, can you spell out exactly what reduction under lambdas has to do with 2-categories? You said

Horizontal composition in a 2-category is a functor, so if a term $t$ reduces to a term $t'$, then by functoriality, $\lambda x.t$ must reduce to $\lambda x.t'$

but I don’t immediately see what lambda-abstraction has to do with horizontal composition.

Posted by: Mike Shulman on May 28, 2015 8:34 PM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

I shouldn’t have said “horizontal”, since that’s terminology from double categories, not 2-categories. Double categories will probably show up next time. In the 2-category case I mean composition of 1-morphisms.

When modeling term constructors as 1-morphisms, lambda is a 1-morphism $\lambda\colon V \times T \to T$. So if I have some term $t$ interpreted as a 1-morphism $t\colon V^n \to T$, a variable $x$ interpreted as a 1-morphism $x\colon 1\to V,$ and beta reduction applying to $t$ to give $t'$ interpreted as a 2-morphism $\beta\colon t \Rightarrow t'$, then there’s necessarily a 2-morphism

$\lambda \circ (x \times \beta):\;\lambda \circ (x \times t) \;\Rightarrow \;\lambda \circ (x \times t')$

that means $\lambda x.t \downarrow_\beta \lambda x.t'.$

Posted by: Mike Stay on May 28, 2015 9:56 PM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

I guess I have no idea how you are modeling lambda calculus categorically. The usual way is that variables represent “generalized elements”, so that a term $(x:A) \vdash f(x):B$ would correspond to a morphism $A\to B$. In that case, $\lambda$-abstraction is an exponential transpose: if $(x:A),(y:B) \vdash f(x,y):C$ is a morphism $A\times B\to C$, then $(x:A)\vdash \lambda y. f(x,y) : B\to C$ is the corresponding adjunct $A\to C^B$. I thought you were saying that then a beta-reduction $f(x) \to_\beta g(x)$ would be a 2-cell relating the two 1-morphisms $f,g:A\to B$. But you’re saying instead that you have a “type of variables”? That seems to be internalizing the syntax in a completely different way.

Posted by: Mike Shulman on May 28, 2015 10:45 PM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

In this post, I’m only looking at modeling the operational semantics of lambda calculus when there’s no reduction under a lambda. I’m hinting that there’s a connection between this way and the usual denotational semantics you’re used to (Lambek & Scott’s approach), but I haven’t explained how that works.

Posted by: Mike Stay on May 29, 2015 12:45 AM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

Did you explain how your approach works? Are you just treating lambda-calculus as a sort of “algebraic theory” and internalizing it in a category with finite products?

Posted by: Mike Shulman on May 29, 2015 5:29 AM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

In this post, yes, except that it’s in a 2-category with finite products and we include 2-morphisms generated by beta. In the next post, I’ll talk about how to get the usual semantics back and what happens if we try to do the same thing to the “algebraic theory”-like construction for pi calculus.

Posted by: Mike Stay on May 29, 2015 3:21 PM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

Cool! A potential problem here is that it seems you don’t have a 1-1 correspondence between terms with free variables in $\{1,\ldots,n\}$ and morphisms $T^n \to T$. The reason is that there may be $[-]$’s lying around anywhere in the term. It won’t solve the issue completely, but have you tried using two objects, say $T$ and $A$ (for active), and duplicating for $A$ the part of your signature for $T$ which represents active contexts? That’s a bit ugly, but it should force the active part of morphisms to remain “to the right” (viewing terms as morphisms $T^n \to A$) of the inactive part.

Posted by: Tom Hirschowitz on May 27, 2015 8:54 AM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

Right, the algebraic approach I outlined above is all about the operational semantics, so terms are of the form $V^n \to T,$ not $T^n \to T.$ Next time I’ll try to show how denotational semantics gets into the game.

I’ll have to ponder your “active object” idea and get back to you.

Posted by: Mike Stay on May 28, 2015 6:24 PM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

Oh. Then there must be something I don’t get. It seems I had mistakenly interpreted your stuff as a cartesian closed 2-signature. But in fact you’re just constructing a particular 2-category by hand, right? Are your 1-morphisms considered equivalent up to renaming of bound variables? Is your 1-category the Lawvere theory on two objects $V$ and $T$ with generators for variable, application, and abstraction with equations for $\alpha$-equivalence? Is $\lambda x.x$ a morphism from $V$ to $T$? These are questions similar to Mike’s, I guess, which probably only reveals I’m completely lost :)

Posted by: Tom Hirschowitz on May 29, 2015 10:47 AM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

I’m beginning to think I should do this post over again before moving on to pi calculus.

The usual denotational semantics of lambda calculus treats types as objects and alpha-beta-eta-equivalence classes of terms as morphisms.

The usual operational semantics of lambda calculus ignores types entirely; it treats terms as objects and rewrite rules as morphisms.

Our operational semantics has $V$ and $T$ as objects, term constructors as 1-morphisms, and rewrites as 2-morphisms. It uses something very much like an algebraic theory, but it’s higher-dimensional.

I have hinted that we can also relate denotational semantics to this higher-dimensional operational semantics, but haven’t said how.

Posted by: Mike Stay on May 29, 2015 3:28 PM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

Are your 1-morphisms considered equivalent up to renaming of bound variables?

In our paper, we use a symmetric monoidal closed 2-category and a morphism “fresh” from $I$ to $N$ (our object of names) handles alpha equivalence. I think I could be using a tensor product and “fresh” in the same way in this theory of lambda calculus.

I haven’t actually worked out exactly how to handle alpha equivalence in the cartesian case. It’s possible that modding out by alpha equivalence will work, but I’d have to convince myself that I can still define the 2-morphisms. If that doesn’t work, though, we could move from alpha equivalence to alpha reduction and model renaming with 2-morphisms.

Is your 1-category the Lawvere theory on two objects V and T with generators for variable, application, and abstraction with equations for $\alpha$-equivalence?

The underlying 1-category is, yes. (Except maybe some stuff around alpha as noted above.)

Is $\lambda x.x$ a morphism from $V$ to $T$?

The variable $x$ is not free, so (in the symmetric monoidal case) $V$ is a cocommutative comonoid and the term is a morphism from $I$ to $T$:

$I \stackrel{fresh}{\to} V \stackrel{\Delta}{\to} V \otimes V \stackrel{V \otimes -}{\to} V \otimes T \stackrel{\lambda}{\to} T$
Posted by: Mike Stay on May 29, 2015 4:45 PM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

Why do you want to muck around with “freshness” and alpha-equivalence rather than using something like de Bruijn indices?

Posted by: Mike Shulman on May 29, 2015 5:36 PM | Permalink | Reply to this

### Re: A 2-Categorical Approach to the Pi Calculus

If I can offer a guess, I suppose that the motivation for using an explicit type of names is because it follows the same pattern as Mike and Greg’s paper about pi calculus (a language in which names are first-class). There are many different ways to realize variable binding for lambda calculus, with different advantages and disadvantages. Besides de Bruijn indices, another approach is to use higher-order abstract syntax, which in this setting (in the cartesian case) would correspond to replacing the pair of constructors $- : V \to T$ and $\lambda : V \times T \to T$ by a single constructor $\lambda : T^T \to T$ (as part of a cartesian closed 2-signature, as Tom H. mentioned). However, the point is that both approaches can “work”—depending on what you want to do with them—and I imagine that the right strategy is for us to be patient and wait for what comes next :-)

Posted by: Noam Zeilberger on May 29, 2015 7:27 PM

### Re: A 2-Categorical Approach to the Pi Calculus

If I can offer a guess, I suppose that the motivation for using an explicit type of names is because it follows the same pattern as Mike and Greg’s paper about pi calculus (a language in which names are first-class).

Yes, that’s exactly right.

Posted by: Mike Stay on May 29, 2015 7:43 PM

### Re: A 2-Categorical Approach to the Pi Calculus

$I \xrightarrow{\text{fresh}} V \xrightarrow{\Delta} V \otimes V \xrightarrow{\Delta \otimes V} V \otimes V \otimes V \xrightarrow{V \otimes V \otimes -} V \otimes V \otimes T \xrightarrow{V \otimes \lambda} V \otimes T \xrightarrow{\lambda} T$

Posted by: Tom Hirschowitz on May 30, 2015 10:23 AM

### Re: A 2-Categorical Approach to the Pi Calculus

Oh, then the model is even wilder than I thought :)

$I \xrightarrow{\text{fresh}} V \xrightarrow{\Delta} V \otimes V \xrightarrow{\Delta \otimes V} V \otimes V \otimes V \xrightarrow{V \otimes V \otimes -} V \otimes V \otimes T \xrightarrow{V \otimes \lambda} V \otimes T \xrightarrow{\lambda} T$

Posted by: Tom Hirschowitz on May 30, 2015 10:24 AM

### Re: A 2-Categorical Approach to the Pi Calculus

Yes! We only prove that the image of the embedding has the right semantics, not that every morphism in the model 2-category corresponds to a term.

Posted by: Mike Stay on June 1, 2015 7:24 PM

### Re: A 2-Categorical Approach to the Pi Calculus

Very nice. (A minor detail: your reference to “Hilbert’s third problem” is wrong; the Entscheidungsproblem is not the third problem on Hilbert’s famous 1900 list.)

Posted by: guest on May 27, 2015 5:20 PM

### Re: A 2-Categorical Approach to the Pi Calculus

Eek, you’re right: I misinterpreted this line in the Wikipedia article on the decision problem:

In continuation of his “program,” Hilbert posed three questions at an international conference in 1928, the third of which became known as “Hilbert’s Entscheidungsproblem.” As late as 1930, he believed that there would be no such thing as an unsolvable problem.

So it’s the third of “a” list of problems he laid out, but not “the” list of problems.

Posted by: Mike Stay on May 27, 2015 6:41 PM

### Re: A 2-Categorical Approach to the Pi Calculus

I like the way you present this. To echo what Dan Doel said, though, the issue of “evaluation under a lambda” is not actually specific to “lazy” (call-by-name) evaluation, and is also present for most “eager” (call-by-value) functional languages.

A classic paper on this is Reynolds’ “Definitional Interpreters for Higher-Order Programming Languages.”

Quoting Reynolds (p.2):

Purely applicative languages are often said to be based on a logical system called the lambda calculus [5, 6], or even to be “syntactically sugared” versions of the lambda calculus. In particular, Landin [7] has shown that such languages can be reduced to the lambda calculus by treating each type of expression as an abbreviation for some expression of the lambda calculus. Indeed, this kind of reducibility could be taken as a precise definition of the notion of “purely applicative”. However, as we will see, although an unsugared applicative language is syntactically equivalent to the lambda calculus, there is a subtle semantic difference. Essentially, the semantics of the “real” lambda calculus implies a different “order of application” (i.e., normal-order evaluation) than most applicative programming languages.

Have you thought about whether the semantics with an explicit context marker can be related to a traditional evaluation context-based operational semantics for call-by-name lambda calculus (e.g., as presented by Andrew Myers in these lecture notes)?

Posted by: Noam Zeilberger on May 27, 2015 10:17 PM

### Re: A 2-Categorical Approach to the Pi Calculus

You’re right, I conflated two different issues. The one we really care about is that we don’t have reduction under a lambda (or under a receive prefix in the pi calculus). A lazy language like Haskell can simulate eagerness using `seq`, while an eager language like JavaScript can simulate laziness using a thunk, but neither language reduces under a lambda.
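For instance, the thunk trick in an eager language looks like this (a Python sketch of my own, just for illustration): the argument is wrapped in a zero-argument function and only forced when it is actually needed.

```python
def expensive():
    print("evaluated!")
    return 42

def apply_lazily(thunk, use_it):
    # The "argument" is a zero-argument function; it is evaluated only
    # if we force it by calling it.
    return thunk() if use_it else 0

apply_lazily(lambda: expensive(), use_it=False)  # prints nothing, returns 0
apply_lazily(lambda: expensive(), use_it=True)   # prints "evaluated!", returns 42
```

The body of `lambda: expensive()` is never rewritten or simplified, only run when called, which is the sense in which neither eager nor lazy languages reduce under a lambda.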

I’d guess that representing evaluation contexts as 1-morphisms would allow us to translate operational semantics to this style of 2-categorical semantics pretty much anywhere.

Posted by: Mike Stay on May 28, 2015 6:34 PM

### Re: A 2-Categorical Approach to the Pi Calculus

Is there by chance any relationship between the issue of “evaluation under a lambda” and the issue of “substitution under a lambda”? The latter is a headache in constructing some models of dependent type theory because it means the exponential transpose of a morphism has to be stable under pullback; that’s obvious if the transpose comes from a true adjunction (since then it’s uniquely determined), but if we have only a “weak adjunction” then stability is not automatic.

Posted by: Mike Shulman on May 27, 2015 10:57 PM

### Re: A 2-Categorical Approach to the Pi Calculus

That’s a really interesting question. I don’t know the answer offhand. I know there are variants of lambda calculus that explicitly model substitution; looking over Wikipedia’s example now, there are five rewrite rules instead of just beta, and conditions on when two of them apply. Instead of the conditions being an evaluation context, they’re conditions on equality of variables.
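Concretely, the rules in that style (my paraphrase of a $\lambda x$-style presentation, writing $M\langle x := N\rangle$ for an explicit substitution) look like this:

$\begin{array}{rcll} (\lambda x.M)\,N &\to& M\langle x := N\rangle & \\ x\langle x := N\rangle &\to& N & \\ y\langle x := N\rangle &\to& y & (y \neq x)\\ (M_1\,M_2)\langle x := N\rangle &\to& M_1\langle x := N\rangle\;M_2\langle x := N\rangle & \\ (\lambda y.M)\langle x := N\rangle &\to& \lambda y.\,M\langle x := N\rangle & (y \neq x,\; y \notin FV(N)) \end{array}$

The side conditions ($y \neq x$, and additionally $y \notin FV(N)$ in the abstraction case) are the conditions on equality of variables I mean.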

In our paper, we’re working in a 2-category with an underlying symmetric monoidal closed category, not necessarily cartesian closed. We say that the object of names $N$ is a cocommutative comonoid and then make use of a morphism “fresh” from the monoidal unit to $N$; because of the no-cloning theorem, “fresh tensor fresh” is not the same as “duplicate fresh”, so we get the right semantics for freshness.

I think that means that we can define the relevant 2-morphisms to involve two copies of fresh, which should guarantee that the variables are distinct.

A different approach would be to change alpha from a structural congruence to a reduction rule and explicitly model variable renaming; then we’d add a 2-morphism for each non-equal pair of variable names.

I don’t know if there’s a better way to approach your question than this.

Posted by: Mike Stay on May 28, 2015 7:15 PM

### Re: A 2-Categorical Approach to the Pi Calculus

I generally consider a lambda term as “really” involving de Bruijn indices, so that that sort of question doesn’t come up. We only use alphabetic variables when writing them on paper to convince each other that we aren’t Cylons.
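To spell that out (a small Python sketch of my own): a de Bruijn index counts the binders between a variable occurrence and its binder, so alpha-equivalent terms translate to literally equal nameless terms.

```python
# Named terms: ("var", x) | ("lam", x, body) | ("app", f, a)
# Nameless:    ("var", n) | ("lam", body)    | ("app", f, a)

def to_debruijn(term, env=()):
    """Replace each bound variable by its distance to its binder."""
    kind = term[0]
    if kind == "var":
        return ("var", env.index(term[1]))   # innermost binder is index 0
    if kind == "lam":
        return ("lam", to_debruijn(term[2], (term[1],) + env))
    if kind == "app":
        return ("app", to_debruijn(term[1], env), to_debruijn(term[2], env))

# lambda x. lambda y. x  and  lambda a. lambda b. a  become the same term
k1 = ("lam", "x", ("lam", "y", ("var", "x")))
k2 = ("lam", "a", ("lam", "b", ("var", "a")))
```

Since `env.index` finds the innermost binding, a shadowed binder such as $\lambda x.\lambda x.x$ translates exactly as $\lambda x.\lambda y.y$ does, so alpha-equivalence is just equality.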

Posted by: Mike Shulman on May 28, 2015 8:21 PM

### Re: A 2-Categorical Approach to the Pi Calculus

Mike Stay wrote:

I’m beginning to think I should do this post over again before moving on to pi calculus.

You seem to have gotten a very interested audience of experts, so if you want to do a more detailed post on the “2-lambda calculus” I bet you’d get useful feedback.

Posted by: John Baez on May 30, 2015 1:12 AM

### Re: A 2-Categorical Approach to the Pi Calculus

Coming back to this, I wonder whether you really mean $\lambda : T^V \to T$ as the signature of lambda abstraction, rather than $\lambda : V \times T \to T$? The signature

$\begin{array}{rcl} - &:& V \to T \\ -(-) &:& T \times T \to T \\ \lambda &:& T^V \to T \end{array}$

is standard (see these slides by Marcelo Fiore) and is also related to so-called “parametric” higher-order abstract syntax. If I am not mistaken, this is actually the way you deal with name-binding operators in your paper on pi calculus.

Posted by: Noam Zeilberger on June 1, 2015 11:03 AM

### Re: A 2-Categorical Approach to the Pi Calculus

I think I got it right. Pi calculus is not higher-order: we can’t substitute terms for names. The 2-cell we’ve got for the comm rule merely replaces some names with other names:

$x!(z) \;|\; x?(y).P \quad \downarrow_{\mathrm{comm}} \quad P\{z/y\}.$

In this operational semantics for the lambda calculus, terms with free variables involve uses of the $-$ term constructor that turns variables into terms. Those uses have to go away when we do substitution:

[diagram omitted]

That’s not to say that we couldn’t define lambda as going from $T^V$ to $T;$ I just think that if we took that approach, we’d also need a new term constructor for substitution, something like an adjoint $\mathrm{subst}\colon T \to V$ that would “cancel out” the uses of the $-$ term constructor.

Posted by: Mike Stay on June 1, 2015 5:35 PM

### Re: A 2-Categorical Approach to the Pi Calculus

Right, of course you really do mean $\lambda : V \times T \to T$, since you want to define lambda terms using the “fresh” operation. As an aside, though, to the extent that there is a need for an adjoint $subst : T \to V$ in order to formulate beta reduction with the other signature ($\lambda : T^V \to T$), I don’t see why there is any less of a need with this signature. In particular, in the left-hand side of the diagram you have a $- : V \to T$ component leading into $t_0$, but in general a beta redex need not have that form.

One can construct a counterexample from the morphism Tom mentioned above, which to me actually looks like a perfectly valid lambda term: it represents the term $\lambda x.\lambda x.x$, which is well-formed and $\alpha$-convertible to $\lambda x.\lambda y.y$. So if you want to be able to verify for example that

$(\lambda x.\lambda x.x)(t') \to^\beta (\lambda x.x)[t'/x] \to^\alpha (\lambda y.y)[t'/x] = \lambda y.y,$

I think it makes sense to introduce the $subst : T \to V$ operation. Then you could hope to give a diagrammatic formalization of capture-avoiding substitution, which would be pretty cool.
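To be concrete about what such a formalization would have to capture, here is capture-avoiding substitution written out directly (a Python sketch of my own, with a made-up term representation): the interesting case is renaming a binder that would capture a free variable of the substituted term.

```python
import itertools

# subst(t, x, s): replace free occurrences of x in t by s, renaming
# bound variables when they would capture a free variable of s.

fresh_names = ("v%d" % i for i in itertools.count())

def free_vars(t):
    kind = t[0]
    if kind == "var":
        return {t[1]}
    if kind == "app":
        return free_vars(t[1]) | free_vars(t[2])
    if kind == "lam":
        return free_vars(t[2]) - {t[1]}

def subst(t, x, s):
    kind = t[0]
    if kind == "var":
        return s if t[1] == x else t
    if kind == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    if kind == "lam":
        y, body = t[1], t[2]
        if y == x:                    # x is shadowed: nothing to do
            return t
        if y in free_vars(s):         # would capture: alpha-rename first
            z = next(fresh_names)
            body = subst(body, y, ("var", z))
            y = z
        return ("lam", y, subst(body, x, s))

# (lambda y. x y)[x := y] must rename the binder: lambda v0. y v0
t = ("lam", "y", ("app", ("var", "x"), ("var", "y")))
result = subst(t, "x", ("var", "y"))
```

The alpha-renaming branch is exactly the step a diagrammatic $subst$ would need a 2-morphism for.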

Posted by: Noam Zeilberger on June 2, 2015 6:32 PM

### Re: A 2-Categorical Approach to the Pi Calculus

I think it makes sense to introduce the $subst : T \to V$ operation. Then you could hope to give a diagrammatic formalization of capture-avoiding substitution, which would be pretty cool.

I agree!

Posted by: Mike Stay on June 2, 2015 11:28 PM
