
August 28, 2006

Categorifying CCCs: Computation as a Process

Posted by John Baez

This entry is an excuse for discussing ways to categorify the notion of a cartesian closed category (CCC), so we can see computations in the λ-calculus as 2-morphisms.

To get what I’m talking about, start with the most fun introduction to the λ-calculus you can find:

  • Raymond Smullyan, To Mock a Mockingbird, Alfred A. Knopf, New York, 1985.

It begins like this:

A certain enchanted forest is inhabited by talking birds. Given any birds A and B, if you call out the name of B to A, then A will respond by calling out the name of some bird to you; this bird we designate AB. Thus AB is the bird named by A upon hearing the name of B.
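Smullyan’s birds are just combinators in disguise: calling out the name of B to A is function application. A minimal sketch in Python, using three of Smullyan’s birds (the Identity bird, the Kestrel K, and the Mockingbird M); the encoding is illustrative, not from the book:

```python
# Birds as Python functions: "calling the name of B to A" is A(B).
I = lambda x: x              # the Identity bird: I x = x
K = lambda x: lambda y: x    # the Kestrel: (K x) y = x
M = lambda x: x(x)           # the Mockingbird: M x = x x

# The Mockingbird hearing the Identity bird responds as I would respond to itself:
print(M(I) is I)             # → True, since M(I) = I(I) = I
print(K("caw")("chirp"))     # → 'caw'
```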

Next, take Smullyan’s stories and make movies of the process of computation in the λ-calculus, all in terms of birds.

Luckily, you don’t need to do this yourself - someone already has:

[animation: mockingbird hears idiot bird]

If you stare carefully at these movies, and think hard, you’ll see they depict 2-morphisms in a cartesian closed 2-category. So, Keenan is secretly categorifying the concept of CCC, to see computation as a kind of process! That’s something I want to understand better.

I’ll start the discussion with a kind of confession.

When I first heard about categories in computer science, people told me that the λ-calculus could be seen as a particularly elegant computer language, in which computations were really just proofs. They told me Lambek had shown the λ-calculus was in some sense equivalent to the formalism of cartesian closed categories (CCCs). They seemed to be saying that objects of the CCC correspond to “data types”.

So, I put two and two together and got… umm… five. I guessed that a morphism in a CCC

f: x → y

represents a program that takes data of type x as input and spits out data of type y. Or, a proof that takes some assumptions x and derives some conclusions y.

I then got upset when people told me morphisms in a CCC were really more like equivalence classes of proofs, where the equivalence relation seemed to wash out some really interesting stuff - stuff that looks like “2-morphisms between morphisms”.

I got even more upset when these “2-morphisms” seemed to be precisely the “computational work” that a computer implementing the λ-calculus would need to do to actually compute anything. I’m talking about things like β-reduction.
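To make that “computational work” concrete: here is a minimal sketch of a single β-reduction step, with λ-terms encoded as nested tuples. The encoding is my own for illustration, and the substitution is naïve (it ignores variable capture):

```python
# λ-terms as nested tuples: ('var', v), ('app', f, a), ('lam', v, body).
def subst(t, v, arg):
    """Substitute arg for the variable v in t (naïve: ignores capture)."""
    tag = t[0]
    if tag == 'var':
        return arg if t[1] == v else t
    if tag == 'app':
        return ('app', subst(t[1], v, arg), subst(t[2], v, arg))
    return t if t[1] == v else ('lam', t[1], subst(t[2], v, arg))

def beta(t):
    """One β-reduction step at the root: (λv.body) arg  ⇒  body[v := arg]."""
    if t[0] == 'app' and t[1][0] == 'lam':
        return subst(t[1][2], t[1][1], t[2])
    return t

# (λx. x x) y  reduces in one step to  y y
redex = ('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))), ('var', 'y'))
print(beta(redex))   # → ('app', ('var', 'y'), ('var', 'y'))
```

Each such step is exactly the sort of thing one wants to record as a 2-morphism rather than quotient away.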

A theory of computation that neglects the process of computation?

I documented a certain stage of my befuddlement in week227, where I wrote:

Anyway, the stuff Phil Scott told me about was mainly over on the syntax side. Here categories show up in another way. Oversimplifying as usual, the idea is to create a category where an object P is a sentence - or maybe a list of sentences - and a morphism

f: P → Q

is a proof of Q from P - or maybe an equivalence class of proofs.

We can compose proofs in a more or less obvious way, so with any luck this gives a category! And, different kinds of logical system give us different kinds of categories.

Quite famously, the multiplicative fragment of intuitionistic logic gives cartesian closed categories. (The "multiplicative fragment" is the portion that deals with "and" and "implies" but leaves out "or" and "not". I’m calling it this because "and" acts like multiplication, while "or" acts like addition.) Similarly, the multiplicative fragment of linear logic gives *-autonomous categories. Full-fledged intuitionistic logic gives cartesian closed categories with finite coproducts, and full-fledged linear logic gives us even fancier kinds of categories!

One thing that intrigues me is the equivalence relation we need to get a category whose morphisms are equivalence classes of proofs. In Gentzen’s "natural deduction" approach to logic, there are various deduction rules. Here’s one:

P |- Q    P |- Q'
------------------
  P |- Q & Q'

This says that if P implies Q and it also implies Q’, then it implies Q & Q’.

Here’s another:

P |- Q => R
------------
P & Q |- R

And here’s a very important one, called the "cut rule":

P |- Q    Q |- R
-----------------
     P |- R

If P implies Q and Q implies R, then P implies R!

There are a bunch more… and to get the game rolling we need to start with this:

P |- P
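Under the propositions-as-types reading being discussed here, each of these deduction rules acts on proofs the way an operation acts on functions. A hedged sketch in Python - the encoding of proofs as plain functions is purely illustrative:

```python
# Each deduction rule becomes an operation on "proofs", modelled as functions.

def identity(p):                 # the axiom  P |- P
    return p

def pairing(f, g):               # from P |- Q and P |- Q', get P |- Q & Q'
    return lambda p: (f(p), g(p))

def cut(f, g):                   # the cut rule: from P |- Q and Q |- R, get P |- R
    return lambda p: g(f(p))

def uncurry(f):                  # from P |- Q => R, get P & Q |- R
    return lambda pq: f(pq[0])(pq[1])

double = lambda n: 2 * n
negate = lambda n: -n
proof = pairing(double, cut(double, negate))
print(proof(5))   # → (10, -10)
```

Note that the cut rule is just composition - which is why, as explained next, it is the cut rule that threatens associativity.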

In this setup, a proof f: P → Q looks vaguely like this:

    f-crud
    f-crud
    f-crud
    f-crud
    f-crud
    f-crud
 -------------
    P |- Q 

The stuff I’m calling "f-crud" is a bunch of steps which use the deduction rules to get to P |- Q.

Suppose we also have a proof

g: Q → R

There’s a way to stick f and g together to get a proof

fg: P → R

This proof consists of setting the proofs f and g side by side and then using the cut rule to finish the job. So, fg looks like this:

     f-crud     g-crud
     f-crud     g-crud
     f-crud     g-crud
     f-crud     g-crud
    --------   --------
     P |- Q     Q |- R
   ---------------------
           P |- R 

Now let’s see if composition is associative. Suppose we also have a proof

h: R → S

We can form proofs

(fg)h: P → S

and

f(gh): P → S

Are they equal? No! The first one looks like this:

     f-crud     g-crud
     f-crud     g-crud
     f-crud     g-crud       h-crud
     f-crud     g-crud       h-crud
    --------   --------      h-crud
     P |- Q     Q |- R       h-crud
   ---------------------   ----------- 
           P |- R            R |- S 
         ----------------------------
                    P |- S

while the second one looks like this:

                g-crud       h-crud
                g-crud       h-crud
    f-crud      g-crud       h-crud
    f-crud      g-crud       h-crud
    f-crud     --------     --------      
    f-crud      Q |- R       R |- S      
   ---------   --------------------- 
    P |- Q            Q |- S 
   ----------------------------
            P |- S

So, they’re not quite equal! This is one reason we need an equivalence relation on proofs to get a category. Both proofs resemble trees, but the first looks more like this:

\  /  /
 \/  /
  \ /
   |

while the second looks more like this:

\  \  /
 \  \/
  \ /
   |

So, we need an equivalence relation that identifies these proofs if we want composition to be associative!
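The point about re-bracketing can be made concrete by modelling composite proofs as trees. In this small sketch (encoding mine), the two composites differ as trees, but agree once the equivalence relation forgets the bracketing:

```python
def cut(p, q):
    # stick two proof trees together with the cut rule
    return ('cut', p, q)

f, g, h = 'f-crud', 'g-crud', 'h-crud'
left  = cut(cut(f, g), h)   # (fg)h
right = cut(f, cut(g, h))   # f(gh)
print(left != right)        # → True: not equal as trees!

def leaves(t):
    # the equivalence relation: forget the bracketing, keep only the leaves
    if isinstance(t, tuple):
        return leaves(t[1]) + leaves(t[2])
    return [t]

print(leaves(left) == leaves(right))   # → True: equal after quotienting
```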

This sort of idea, including this "tree" business, is very familiar from homotopy theory, where we need a similar equivalence relation if we want composition of paths to be associative. But in homotopy theory, people have learned that it’s often better NOT to impose an equivalence relation on paths! Instead, it’s better to form a weak 2-category of paths, where there’s a 2-morphism going from this sort of composite:

\  /  /
 \/  /
  \ /
   |

to this one:

\  \  /
 \  \/
  \ /
   |

This is called the "associator". In our logic context, we can think of the associator as a way to transform one proof into another.

The associator should satisfy an equation called the "pentagon identity", which I explained back in "week144". However, it will only do this if we let 2-morphisms be equivalence classes of proof transformations.

So, there’s a kind of infinite regress here. To deal with this, it would be best to work with a "weak ω-category" with

sentences (or sequences of sentences) as objects,

proofs as morphisms,

proof transformations as 2-morphisms,

transformations of proof transformations as 3-morphisms,…

and so on. With this, we would never need any equivalence relations: we keep track of all transformations explicitly. This is almost beyond what mathematicians are capable of at present, but it’s clearly a good thing to strive toward.

So far, it seems Seely has gone the furthest in this direction. In his thesis, way back in 1977, he studied what one might call "weak cartesian closed 2-categories" arising from proof theory. You can read an account of this work here:

  • R.A.G. Seely, Weak adjointness in proof theory, in Proc. Durham Conf. on Applications of Sheaves, Springer Lecture Notes in Mathematics 753, Springer, Berlin, 1979, pp. 697-701. Also available at http://www.math.mcgill.ca/rags/WkAdj/adj.pdf

  • R.A.G. Seely, Modeling computations: a 2-categorical framework, in Proc. Symposium on Logic in Computer Science 1987, Computer Society of the IEEE, pp. 65-71. Also available at http://www.math.mcgill.ca/rags/WkAdj/LICS.pdf

Can we go all the way and cook up some sort of ω-category of proofs? Interestingly, while the logicians at Geocal06 were talking about n-categories and the geometry of proofs, the mathematician Vladimir Voevodsky was giving some talks at Stanford about something that sounds pretty similar.

Voevodsky has thought hard about n-categories, and he won the Fields medal for his applications of homotopy theory to algebraic geometry.

Anyway, in our conversation about categories and computation on this blog, the whole issue came up again: Mike Stay got upset when Robin Houston told him that morphisms in a CCC correspond to αβη equivalence classes of terms in the λ-calculus, and Robin pointed out Seely’s work involving 2-categories, which tackles this problem. Then Urs Schreiber pointed out Keenan’s bird calculus for drawing things like β-reduction as 2-morphisms - which indeed looks very much like the “movies” people use in the theory of braided monoidal 2-categories. So, I think the time is ripe to categorify CCCs and understand computation as a process!

Posted at August 28, 2006 5:12 AM UTC


37 Comments & 3 Trackbacks


Re: Categorifying CCCs: computation as a process

Urs Schreiber pointed out Keenan’s “blackbird calculus” for drawing things like β-reduction as 2-morphisms - which indeed looks very much like the “movies” people use in the theory of braided monoidal 2-categories.

I have tried to translate the diagrammatic notation used by Keenan into ordinary string diagrams (“tangle diagrams”) used in monoidal categories. I believe I succeeded.

I ended by expressing my hope that Keenan’s pictures should pretty much tell us what it means, when translated to CCC language, to go through a program step by step. I might be wrong about this. But it’s certainly an expectation well motivated from a naïve point of view.

I will say something hopelessly low-brow, which experts are invited to try to translate to sophisticated language, if possible.

In a way, the “magic” in Keenan’s calculus happens when he turns data into code.

In our language, that corresponds to applying the isomorphism

(1)  x \stackrel{\sim}{\to} \mathrm{hom}(x,x) \,.

At least as far as I understand.

In Keenan’s setup, this isomorphism is implicit whenever he slides one of his “boxes” (or “birds”) upwards and then vertically downwards into one of the filled black circles.

In other words, drawing such a box with internal wiring is like specifying an element of x by saying which morphism it corresponds to under the identification

(2)  x \stackrel{\sim}{\to} \mathrm{hom}(x,x) \,.

Even better, by writing every morphism (element of \mathrm{hom}(x,x)) in a sort of “standard form”, namely as a composition of just the three building blocks

(3)  x \otimes x \stackrel{\bullet}{\to} x \,,
(4)  x \stackrel{\wedge}{\to} x \otimes x

and

(5)  x \otimes x \stackrel{\times}{\to} x \otimes x

every element of x is written as a certain diagram.

Posted by: urs on August 28, 2006 5:06 PM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

Even better, by writing every morphism (element of \mathrm{hom}(x,x)) in a sort of “standard form”, namely as a composition of just the three building blocks

(1)  x \otimes x \stackrel{\bullet}{\to} x \,,

(2)  x \stackrel{\Delta}{\to} x \otimes x

and

(3)  x \otimes x \stackrel{\times}{\to} x \otimes x

every element of x is written as a certain diagram.

You’re writing \otimes here, but in a CCC we can be more specific and use \times, since CCCs have finite products. It’s only in the quantum case that we generalize to the tensor product.

Posted by: Mike on August 28, 2006 6:26 PM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

In Keenan’s setup, this isomorphism is implicit whenever he slides one of his “boxes” (or “birds”) upwards and then vertically downwards into one of the filled black circles.

Yes; the isomorphism is applied to the object coming in on the left, and then the resulting morphism acts on the data coming in from above.

Posted by: Mike on August 28, 2006 6:34 PM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

the isomorphism is applied to the object coming in on the left, and then the resulting morphism acts on the data coming in from above.

Not that it’s very important, but I do think it is the other way around.

For instance in \lambda a . \lambda b . a b it is the a that comes from above, is converted from data into code, i.e. into a morphism, and this morphism gets b fed in from the left.

Compare figure 12.

Posted by: urs on August 28, 2006 6:56 PM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

You’re absolutely right. Sorry.

Posted by: Mike on August 29, 2006 4:56 AM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

Urs wrote:

I have tried to translate the diagrammatic notation used by Keenan into ordinary string diagrams (“tangle diagrams”) used in monoidal categories. I believe I succeeded.

I spent the afternoon reading Keenan’s paper at the Shanghai branch of the n-Category Café, and I drew many pictures of my own. I understand this stuff a lot better now, and I think everything you said here is right.

I ended by expressing my hope that Keenan’s pictures should pretty much tell us what it means, when translated to CCC language, to go through a program step by step. I might be wrong about this. But it’s certainly an expectation well motivated from a naïve point of view.

Yes. Keenan’s style of drawing pictures is a bit idiosyncratic, since he’s not trying to fit it into the general “string diagram” framework for drawing morphisms in monoidal categories, or the general “movies of string diagrams” framework for drawing 2-morphisms in monoidal 2-categories.

However, by messing with his pictures a little, I was able to fit them into the general string diagram framework pretty nicely. So, I’m very happy! With any luck, all this should dovetail - pardon the pun - quite nicely with Seely’s work on cartesian closed 2-categories.

I will say something hopelessly low-brow, which experts are invited to try to translate to sophisticated language, if possible. In a way, the “magic” in Keenan’s calculus happens when he turns data into code. In our language, that corresponds to applying the isomorphism x \stackrel{\sim}{\to} \mathrm{hom}(x,x). At least as far as I understand. In Keenan’s setup, this isomorphism is implicit whenever he slides one of his “boxes” (or “birds”) upwards and then vertically downwards into one of the filled black circles. In other words, drawing such a box with internal wiring is like specifying an element of x by saying which morphism it corresponds to under the identification x \stackrel{\sim}{\to} \mathrm{hom}(x,x).

Yes! Personally I like a notation that more clearly distinguishes between two other things that Keenan identifies: namely, a morphism f: x \to y and its name, meaning the corresponding element \tilde{f}: 1 \to hom(x,y). Both these have nice string diagram pictures. The morphism is a little circle labelled f with one wire coming in at top and one wire going out at bottom. To draw its “name”, we take the input wire and fold it down to the left, getting a gadget with no input wires and two output wires - or more precisely, an output ribbon.

In the special case of the untyped λ-calculus, the isomorphism hom(x,x) \stackrel{\sim}{\to} x can be drawn as a gadget with a ribbon coming in and a line coming out.

Given a morphism f : x \to x, and composing its name \tilde{f}: 1 \to hom(x,x) with the above isomorphism, we obtain its nickname \bar{f}: 1 \to x. This looks vaguely like a lollipop - draw it yourself, please!

Or, if we follow Smullyan and Keenan’s goofy terminology and call f: x \to x a bird, then \bar{f}: 1 \to x is the bird’s song.

We can then feed the song of one bird into another bird, and get a new song! Given a bird f : x \to x and a bird g : x \to x, we take the song of f, \bar{f}: 1 \to x, and feed it into g to get g \bar{f}: 1 \to x, which is the song of some other bird.

There’s a bit more to this formalism, for example the evaluation map ev: x \times x \to x. So, just as Urs said, we get string diagrams for the untyped lambda calculus.

Then we need to categorify this! But, I’m getting a bit tired right now, so I won’t try to figure that out here.

Posted by: John Baez on August 29, 2006 4:31 PM | Permalink | Reply to this

a bird nicknamed hom sang a song

A bird nicknamed hom sang a song

he had uncurried on the ribbon it was from

he sang “ωK”

and “λb.a”

when he found his α-reduction was wrong.

Posted by: j on August 29, 2006 5:18 PM | Permalink | Reply to this

Re: a bird nicknamed hom sang a song

Not really sure what you mean by this esoteric comment, but it might be just what I was about to say.

Keenan’s notation is isomorphic to much more standard stuff like λ-graphs and linear logic proof nets. You may get a taste of λ-graphs here (in a rather different setting, just look at the pictures) and of proof nets there.

The main differences are that proof nets abstract over things like the orientation of edges and the root, while being much cleaner w.r.t. sharing and duplication.

Comparing with Keenan’s notation, the lack of mathematical precision seems to hide the issue of well-scopedness: both λ-graphs and proof nets come equipped with a correctness criterion ensuring that bound variables are confined below their binders. In Keenan’s terms, this would impose a condition on birds like “the input link of a bird never escapes the corresponding box”. I haven’t seen anything like that in the article, but maybe I just read too fast.
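The correctness criterion in question - bound variables confined below their binders - can be sketched as a simple scope check on λ-terms. The tuple encoding here is my own, purely for illustration:

```python
# λ-terms: ('var', v), ('app', f, a), ('lam', v, body)
def well_scoped(t, bound=frozenset()):
    """True iff every variable occurrence sits below a binder for it."""
    tag = t[0]
    if tag == 'var':
        return t[1] in bound
    if tag == 'app':
        return well_scoped(t[1], bound) and well_scoped(t[2], bound)
    return well_scoped(t[2], bound | {t[1]})   # 'lam': bind the variable

print(well_scoped(('lam', 'x', ('var', 'x'))))   # → True
print(well_scoped(('lam', 'x', ('var', 'y'))))   # → False: y escapes its (missing) binder
```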

Posted by: Tom Hirschowitz on August 30, 2006 8:59 AM | Permalink | Reply to this

Re: a bird nicknamed hom sang a song

Keenan’s notation is isomorphic to much more standard stuff

Good to know. To the naked eye, Keenan’s stuff looks more elegant but …

the lack of mathematical precision seems to hide […]

… maybe that’s just because it’s less precise, or less powerful. (I think it’s clear that Keenan’s text is supposed to be expository, not research-level.)

The question of this thread here is how to conceive step-by-step computation in λ-calculus/CCCs in terms of certain 2-morphisms.

To me it seemed that looking at Keenan’s pictures could help finding the answer. But maybe people working on λ\lambda-graphs and/or proof nets already have an answer?

Posted by: urs on August 30, 2006 7:35 PM | Permalink | Reply to this

Re: a bird nicknamed hom sang a song

Unsure about that.

And sorry, I wasn’t clear enough: I didn’t mean to discourage you from thinking in terms of Keenan’s notation. Only wanted to mention this implicit, graphical correctness condition, which might also be what Jon wanted to say.

Posted by: Tom Hirschowitz on August 31, 2006 6:57 AM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

All this stuff is great! I’d been developing a graphical notation for morphisms in CCCs over the last week or so, but I hadn’t gotten around to trying to categorify it.

Morphisms in monoidal categories are nicely drawn as 2-dimensional “string diagrams”. A braided monoidal category adds the 3rd dimension, letting the strings cross. A symmetric monoidal category adds the 4th dimension, meaning there’s no difference any more between a left-handed and a right-handed crossing. All this is standard stuff.

For our symmetric monoidal category to be cartesian, all we need is for every object to be equipped with a “duplicator”

\Delta_x : x \to x \otimes x

and a “deleter”:

e_x : x \to 1

satisfying a few axioms drawn on page 47 here. If we assume these, we’re officially allowed to stop writing \otimes and start writing \times, as Mike notes.

(For experts: note this makes every object into a cocommutative comonoid, with \Delta as comultiplication and e as counit!)

A cartesian category cannot have duals for objects unless every object is isomorphic to 1. So, the strings in our string diagrams are not allowed to double back as they can when objects have duals.

But, a cartesian category can still be closed. For starters, this means for any objects x and y we get an object hom(x,y). Tempting fate, I tried drawing this as a ribbon with y running down the right edge and x running up the left. In my conventions the arrow of time points down, so what we’ve got here is a “particle-antiparticle pair” with the particle y on the right and the antiparticle of x running backwards in time on the left.

I say “tempting fate” because this notation is borrowed from symmetric monoidal categories with duals for objects, where

hom(x,y) \cong x^* \otimes y

It’s a bit risky to extend these pictures to cartesian closed categories, but it’s safe if we’re careful to follow certain rules. These pictures do a good job of explaining the evaluation morphisms one has in any cartesian closed category:

ev: x \times hom(x,y) \to y

The particle x annihilates the antiparticle lurking in hom(x,y), leaving y!

They also serve to explain the coevaluation morphisms

coev: y \to hom(x, x \times y)

If we draw this, we see the creation of a particle-antiparticle pair.

To state that we really have a cartesian closed category, it suffices to impose the “triangle axioms” on the evaluation and coevaluation, which are really just the counit and unit for the adjunction between x \times - and hom(x,-). If we have a unit and counit satisfying the triangle axioms, we have an adjunction!
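One of these triangle identities can be checked on sample data if we model the hom-object as an actual function - an illustrative sketch of the adjunction, not a proof:

```python
def ev(x, f):
    """evaluation: x × hom(x, y) → y"""
    return f(x)

def coev(y):
    """coevaluation: y → hom(x, x × y)"""
    return lambda x: (x, y)

# one triangle identity: x × y → x × hom(x, x × y) → x × y is the identity
x, y = 3, 'data'
print(ev(x, coev(y)))   # → (3, 'data'): we get back exactly the pair we started with
```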

One can draw these triangle axioms as pictures; I won’t attempt that here. They look reasonable. However, since the notation is stolen from the case of symmetric monoidal categories with duals for objects, it’s a bit clunky - lots of things one is tempted to do are forbidden! Also, it’s hard to notate the difference between

hom(a, b \times c)

and

hom(a,b) \times c

since these would both be the same in a symmetric monoidal category with duals.

So, I need to compare what I was doing with what Keenan and Urs are doing, and see how to make everything as pretty as possible.

It’s great how far Keenan has gone in developing string diagrams for the untyped λ-calculus, where we also have that isomorphism

x \cong hom(x,x)

If I go through his stuff carefully, I may finally understand these I, K, and S combinators and other clever tricks λ-hackers use. I’ll try to coax Lisa to let me use her printer and print a copy of Keenan’s mockingbird paper, so I can read it at Starbucks today while she’s shopping downtown.

Posted by: John Baez on August 29, 2006 4:28 AM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

If I go through his stuff carefully, I may finally understand these I, K, and S combinators and other clever tricks λ-hackers use.

One of those clever tricks you should be aware of is lambda abstraction; this is at the root of my explanation earlier.

In LISP, to apply a function f to a list of inputs (a, b, c) you write

(f a b c)

If we make the restriction that every atom is curried, then each function takes exactly one input:

(((f a) b) c)

But because we know how many inputs a function takes, we can get rid of the right parentheses:

(((f a b c

Now: given a lambda expression written this way, there’s a mechanical process to translate it into SKI notation.

For each variable v, starting with the innermost:

  1. ( → ((S
  2. v → I
  3. term T not containing v → (K T

That’s it! In the case of the “arrow-order” applicator λxy.(y x,

  1. λxy.(y x
  2. λx.((S I (K x
  3. ((S ((S (K I ((S (K K I
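This mechanical translation can be implemented directly. Here is a sketch of the standard bracket-abstraction algorithm in Python; note it uses the usual K-optimization for subterms not containing the variable, so it produces a shorter but extensionally equal combinator than the purely mechanical rules above (the tuple encoding is mine):

```python
# Combinators as Python closures.
S = lambda f: lambda g: lambda x: f(x)(g(x))
K = lambda x: lambda y: x
I = lambda x: x

# λ-terms: ('var', name), ('app', t1, t2), ('lam', name, body)
def free_in(v, t):
    if t[0] == 'var': return t[1] == v
    if t[0] == 'app': return free_in(v, t[1]) or free_in(v, t[2])
    return t[1] != v and free_in(v, t[2])

def abstract(v, t):
    """Bracket abstraction: a combinator term for λv.t (t already λ-free)."""
    if t == ('var', v):
        return ('var', 'I')
    if not free_in(v, t):                      # K-optimization
        return ('app', ('var', 'K'), t)
    return ('app', ('app', ('var', 'S'), abstract(v, t[1])), abstract(v, t[2]))

def translate(t):
    """Eliminate every λ in favour of S, K, I."""
    if t[0] == 'var': return t
    if t[0] == 'app': return ('app', translate(t[1]), translate(t[2]))
    return abstract(t[1], translate(t[2]))

def evaluate(t, env={'S': S, 'K': K, 'I': I}):
    if t[0] == 'var': return env[t[1]]
    return evaluate(t[1], env)(evaluate(t[2], env))

# the "arrow-order" applicator λx.λy. y x
term = ('lam', 'x', ('lam', 'y', ('app', ('var', 'y'), ('var', 'x'))))
f = evaluate(translate(term))
print(f(41)(lambda n: n + 1))    # → 42, since f a b = b a
```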
Posted by: Mike on August 29, 2006 5:26 AM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

((S ((S (K I ((S (K K I

Just to check if I understand: is there possibly a “(KS(KS” missing?

Reading λx.((S I (K x from right to left:

x → I
K → (KK
( → ((S
I → (KI
S → (KS   (missing?)
( → ((S
( → ((S
Posted by: urs on August 29, 2006 10:19 AM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

Just to check if I understand: is there possibly a “(KS” missing?

Aargh, yes. The last line should be

  3. ((S ((S (K S (K I ((S (K K I

Posted by: Mike on August 29, 2006 11:53 PM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

John Baez wrote:

A cartesian category cannot have duals for objects unless every object is isomorphic to 1.

Sorry to be Mr Pedantic again, but I don’t think that’s quite right. There is certainly such a thing as a non-trivial boolean algebra! On the other hand, this essentially exhausts the possibilities: a cartesian category with duals collapses to a preorder, i.e. no homset can contain more than one arrow.

Posted by: Robin on August 29, 2006 7:36 PM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

John Baez wrote:

A cartesian category cannot have duals for objects unless every object is isomorphic to 1.

Robin wrote:

Sorry to be Mr Pedantic again, but I don’t think that’s quite right. There is certainly such a thing as a non-trivial boolean algebra! On the other hand, this essentially exhausts the possibilities: a cartesian category with duals collapses to a preorder, i.e. no homset can contain more than one arrow.

It’s good to be pedantic if one’s a logician - one should never ignore that nitpicky little distinction between true and false statements.

Of course you’re an expert on this stuff:

and the argument I had in mind actually said something completely different: I was half-remembering the fact that in a category with finite products where every object x has an inverse y, meaning x \times y \cong 1, every object is in fact isomorphic to the terminal object. In a category with finite products where every object has a dual, we have a unit and counit i_x : 1 \to x \times y and e_x : y \times x \to 1 for every object, but they needn’t be isomorphisms, at least prima facie.

However, I’m confused when you say that boolean algebras are cartesian categories with duals for objects. Maybe I should say what I mean by “cartesian categories with duals for objects”. I mean we take a category with finite products, treat it as a monoidal category (\otimes = \times), and then demand this monoidal category is compact closed.

Is a boolean algebra really like that?

Some stuff you surely know, which I’ll repeat just for people like Mike and Urs who might not have spent time in the coal mines of categorical logic:

If we have a poset, we can write x → y when there’s a (unique) morphism from x to y, and read this as “x implies y”. If it has finite products, we should write the binary product as “&” and the terminal object as “T”, since we then have things like

x → T

x & y → x

x & y → y

and indeed any p with p → x and p → y has p → x & y. Indeed, I’ve just restated the definition of “binary products” and “terminal object” in the special case of a poset!

Now, one can work out what it means for this tensor product & to have a corresponding internal hom, and the wonderful fact - discovered by Lawvere, maybe? - is that this internal hom deserves to be called “⇒”, since the adjunction between tensor and hom says

(x & y) → z if and only if x → (y ⇒ z)

This example of an adjunction is one of those things that blew people’s minds back in the 1960’s, since it showed how tightly connected category theory was to logic! There were other things that blew people’s minds back then, but this was one of the best. Another one is the way quantifiers show up from adjunctions.
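In the simplest poset example - the two-element boolean algebra - this adjunction can be verified exhaustively. A toy check, with my own encoding of the poset:

```python
from itertools import product

# The two-element boolean algebra as a poset: an arrow p → q means "p implies q".
def arrow(p, q):
    return (not p) or q

def meet(p, q):          # the binary product: "and"
    return p and q

def imp(q, r):           # the internal hom: "q ⇒ r"
    return (not q) or r

# The adjunction: (x & y) → z  if and only if  x → (y ⇒ z), checked on all cases.
for x, y, z in product([False, True], repeat=3):
    assert arrow(meet(x, y), z) == arrow(x, imp(y, z))
print("adjunction verified on all 8 cases")
```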

But I digress. For a cartesian closed poset to be compact, it seems we’d need

hom(x,y) = x* × y

or in other words

(x ⇒ y) = (x* & y)

and this doesn’t look good - I’d prefer an “or” in that last equation.

Maybe you weren’t demanding that the tensor product of the compact monoidal structure be the cartesian product?

Posted by: John Baez on August 30, 2006 5:39 AM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

However, I’m confused when you say that boolean algebras are cartesian categories with duals for objects. Maybe I should say what I mean by “cartesian categories with duals for objects”.

Sorry, I was misinterpreting your phrase “with duals for objects” to mean star-autonomous, rather than compact closed. That’s my default notion of duality, and I keep forgetting that it isn’t yours! As you say, a compact closed cartesian category is definitely going to be trivial.

Since star-autonomy is one of those funny notions that probably isn’t very well-known outside categorical logic, maybe I should hint at what it means. A symmetric star-autonomous category is a symmetric monoidal category C with a dualisation operation

    (-)^* : C \to C^{op}

such that the dualisation is an equivalence of categories, and an internal hom A \multimap B can be defined as (A \otimes B^*)^*.

Well, that’s one definition. There are lots of different ways to describe it, and the non-symmetric version even has two different dualisations.

Using the duality we can define another, generally different, tensor as (A^* \otimes B^*)^*. This is usually written with an inverted ampersand as A⅋B, following Girard’s notation for linear logic, but people have used all kinds of other symbols as well (often because they couldn’t work out how to make an inverted ampersand with TeX).

Every compact closed category is star-autonomous, of course. One question I don’t know the answer to is this: if we have a star-autonomous category whose dualisation functor is a monoidal functor, does that mean it’s necessarily compact closed? I can’t prove that’s true, but I can’t construct a counterexample either. (A famous category theorist told me it was true, which could be construed as evidence in its favour.)

A star-autonomous cartesian category can only be a boolean algebra, as I said in my last post. This is one of the reasons that it’s difficult to find an interesting categorical interpretation of classical logic: a star-autonomous cartesian category is the obvious notion of categorical model for classical logic, but it turns out that the only such categories are boring old boolean algebras, which identify all the proofs of each sequent.

Posted by: Robin on August 30, 2006 1:43 PM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

Maybe it’s a bit late for me to contribute to this thread, but I really wanted to respond to this:

A star-autonomous cartesian category is the obvious notion of categorical model for classical logic, but it turns out that the only such categories are boring old boolean algebras, which identify all the proofs of each sequent.

As a part-time constructivist, I regard this as a well-known fact: the source of the computational vacuity of classical logic.

More precisely: In the BHK interpretation of intuitionistic logic, a given statement may have a very large set of proofs, but a negation has only one proof (if any). Since this single proof cannot usefully be applied to various cases, negations are seen as computationally vacuous. In practical constructive mathematics, therefore, negations are avoided; see Bishop’s comments on this in the introduction to his famous analysis book.

Of course, in classical logic, every statement is (equivalent to) a negation! In other words, all proofs are identified, and indistinguishable. They have no computational content, and cannot be applied to anything.

Speaking more polemically, classical logic (in the strictest sense) is uninteresting. Its ramifications were fully understood by the early 20th century, with the general idea of Boolean algebra. All further research in pure logic, therefore, had to be about something else. This explains why logicians talk so much about non-classical logics (intuitionistic, quantum, etc), quite apart from (and even if they have no interest in) the practical questions that led people like Brouwer and von Neumann to use them.

Of course, the polemics in the previous paragraph are not absolutely true; nothing of that sort ever is. But I think that there is something to it. Classical logic is genuinely shallower.

Posted by: Toby Bartels on October 14, 2006 9:51 PM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

Long ago, Robin wrote:

A symmetric star-autonomous category is a symmetric monoidal category $\mathbf{C}$ with a dualisation operation

$(-)^* : \mathbf{C} \to \mathbf{C}^{op}$

such that the dualisation is an equivalence of categories, and an internal hom $\mathrm{hom}(A,B)$ can be defined as $(A \otimes B^*)^*$.

Could somebody please remind me why the condition is defined this way, as opposed to the naive

$\mathrm{hom}(A,B) \simeq A^* \otimes B$

??

I seem to recall there was some deep reason for that, which became clear when looking at the diagram John drew to illustrate all this. But I forget. I’ll try to look through the old notes again. But if somebody could quickly enlighten me, that would be very nice.

And, by the way, I’d like a star-autonomous structure on chain complexes $\mathrm{Ch}(\mathrm{Mod}(A))$ for $A$ some commutative algebra. All hints and pointers to the literature concerning this particular case will be very much appreciated.

Posted by: Urs Schreiber on November 17, 2007 11:50 AM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

Urs, I gave some kind of reply to this over here.

Posted by: Todd Trimble on November 17, 2007 2:25 PM | Permalink | Reply to this

Re: Categorifying CCCs: computation as a process

Urs, I don’t know whether the following is a satisfactory answer to your question, though it may be of some help.

What you call the naive condition

(1) $\mathrm{hom}(A,B) \cong A^* \otimes B$

is a special case of the star-autonomy condition, because the naive condition implies that $(A \otimes B)^* \cong A^* \otimes B^*$. This is easily proved using Yoneda, since there’s a sequence of isomorphisms

(2) $\mathbb{C}((A \otimes B)^*, C) \cong \mathbb{C}(I, A \otimes B \otimes C) \cong \mathbb{C}(A^* \otimes B^*, C)$.

So any category that satisfies your naive condition is star-autonomous, but star-autonomy is a strictly more general notion.

On the other hand, all the cases that I know of categories that satisfy the naive condition are actually compact closed – though I don’t think it necessarily follows that they must be – so in that case it’s probably more natural to talk about compact closed categories rather than star-autonomous ones.

Posted by: Robin on November 17, 2007 3:43 PM | Permalink | Reply to this

Monads

Hi All,

It is very interesting. Unfortunately, I am still a beginner and don’t understand everything.

Although you’re currently focusing on CCCs, I have seen in the literature some other approaches, which use Kleisli triples to define computations.

In short:

1 - they take a category C as a model for functions, and a unary operation T on the objects of C which maps an object A (viewed as the set of values of type A) to an object TA: the set of computations of type A;

2 - a program from A to B (taking a value of type A to a value of type B) can be identified with a morphism from A to TB;

3 - additional requirements are imposed on values and computations so that programs are morphisms in some suitable category.
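For what it’s worth, this three-step recipe can be sketched in a few lines of Python (the names and the particular choice of T are my own illustration, not anything from the literature), taking T A to be “a value of type A, or failure”:

```python
# A minimal sketch of the Kleisli-triple view of computation, with
# T A = "a value of type A, or failure" (the Maybe/exception monad).
# A program from A to B is a function A -> T B; `unit` includes values
# among computations, and `bind` extends f : A -> T B to T A -> T B.

FAIL = None  # the extra "failure" element that T adds to every type

def unit(a):
    return a                     # here T A simply reuses A, plus FAIL

def bind(ta, f):
    return FAIL if ta is FAIL else f(ta)

def kleisli(f, g):
    """Compose programs f : A -> T B and g : B -> T C."""
    return lambda a: bind(f(a), g)

# Two partial "programs":
def recip(x): return FAIL if x == 0 else unit(1.0 / x)
def root(x):  return FAIL if x < 0 else unit(x ** 0.5)

prog = kleisli(recip, root)      # a program from numbers to numbers
assert prog(4) == 0.5
assert prog(0) is FAIL           # failure propagates through bind
```

The point of the abstraction is that swapping in a different T (state, nondeterminism, continuations, …) changes the notion of computation while the composition machinery stays the same.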

I don’t remember more details unfortunately.

Posted by: alpheccar on August 29, 2006 6:21 PM | Permalink | Reply to this

Re: Monads

alpheccar: I guess you’re thinking of Moggi’s work using strong monads?

Posted by: Robin on August 29, 2006 7:41 PM | Permalink | Reply to this

Re: Monads

Robin,

Yes. It is this paper. Thank you. On page 5, I can read :

The reason why a functional type A -> B in a programming language (like ML) cannot be interpreted by the exponential $B^A$ (as done in a ccc) is fairly obvious;

So I think the paper may be relevant to this thread. But it is not so obvious to me so I am going to read it again.

Posted by: alpheccar on August 29, 2006 8:44 PM | Permalink | Reply to this

Re: star-autonomous categories

Robin Houston wrote:

Since star-autonomy is one of those funny notions that probably isn’t very well-known outside categorical logic, maybe I should hint at what it means.

Thanks! I’m gradually, gradually warming up to these beasts. The definition you gave seems perfectly natural. Could you give me some of your favorite examples?

I keep asking people this question, whenever I go to conferences on categories and computer science.

Some cite categories built using topological vector spaces, which is all fine and dandy - except I don’t feel this illuminates the connection with logic unless we’re talking about a funny generalization of quantum logic from Hilbert spaces to spaces without $H^{**} \cong H$, which I don’t think is why people got interested in star-autonomous categories!

Some cite Chu spaces. I still need to chew on that construction a bit more.

Maybe the problem is that most of the really interesting star-autonomous categories known so far are syntactical in nature, coming from linear logic, and my respondents are embarrassed to give me examples of that sort? Shy of admitting they have “a syntax in search of a semantics”?

I just want the examples that best illustrate why people care about star-autonomous categories.

Posted by: John Baez on August 31, 2006 8:52 AM | Permalink | Reply to this

Re: star-autonomous categories

My favourite star-autonomous category is the category of coherence spaces. It was Girard’s study of this category that gave him the idea for linear logic, so it’s historically important as well as being a jolly nice category. (The history of linear logic is actually the reverse of what you suggest: it started life as a semantics in search of a syntax!)

A coherence space is just an undirected graph, i.e. a set $A$ together with a reflexive (and symmetric) relation $\sim \subseteq A \times A$, called the coherence relation. The dual relation $\backsim$ is defined as: $x \backsim y$ iff $x \nsim y$ or $x = y$.

A morphism $R : A \to B$ is a relation such that whenever $a R b$ and $a' R b'$,

  • if $a \sim a'$ then $b \sim b'$
  • if $b \backsim b'$ then $a \backsim a'$

(I hope all these crazy symbols come out okay on your computer, or else this might be quite hard to follow!)

The dual $A^\bot$ of a coherence space $A$ has the same underlying set as $A$, with the dual coherence relation. The tensor $A \otimes B$ has underlying set $A \times B$, with a coherence relation defined as

    $(a,b) \sim (a',b')$ iff $a \sim a'$ and $b \sim b'$

Notice that $(A^\bot \otimes B^\bot)^\bot$ is generally different from $A \otimes B$. (There is a little degeneracy though: the unit for tensor is the one-element space $I$, which is equal to $I^\bot$. In linear-logical terms, that means the model validates the MIX rule.)
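These definitions are concrete enough to compute with. Here is a small Python sketch (the encoding is my own illustration: a coherence space as a pair of points and a coherence predicate), checking on two-point spaces that $(A^\bot \otimes B^\bot)^\bot$ really can differ from $A \otimes B$:

```python
# A coherence space as (points, coh), with coh the reflexive symmetric
# coherence relation given as a predicate.  (My encoding, for illustration.)

def dual(space):
    pts, coh = space
    # x ~dual y  iff  not (x ~ y) or x = y
    return (pts, lambda x, y: (not coh(x, y)) or x == y)

def tensor(a, b):
    (apts, acoh), (bpts, bcoh) = a, b
    pts = [(x, y) for x in apts for y in bpts]
    return (pts, lambda p, q: acoh(p[0], q[0]) and bcoh(p[1], q[1]))

def par(a, b):
    # A par B  :=  (A^bot (x) B^bot)^bot
    return dual(tensor(dual(a), dual(b)))

clique   = ([0, 1], lambda x, y: True)      # the two points are coherent
discrete = ([0, 1], lambda x, y: x == y)    # only reflexive coherence

_, tcoh = tensor(clique, discrete)
_, pcoh = par(clique, discrete)

# The pair (0,0), (1,1) is incoherent in the tensor but coherent in par,
# so the two tensors genuinely differ:
assert not tcoh((0, 0), (1, 1))
assert pcoh((0, 0), (1, 1))
```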

The category of coherence spaces also has coproducts, given by the disjoint union of the graphs, hence (by duality) it has products too.

There are lots of nice examples of star-autonomous categories, but they typically have a very different flavour from the categories of mathematical objects that we’re most familiar with.

Dualisation is a fertile source of star-autonomous categories. If C is a SMCC with finite products then $C \times C^{op}$ has a natural star-autonomous structure. Its tensor product is

$(A,X) \otimes (B,Y) = (A \otimes B, (A \multimap Y) \times (B \multimap X))$

with unit $(I, 1)$. The dual is of course defined as $(A,X)^\bot = (X,A)$.

Posted by: Robin on August 31, 2006 12:44 PM | Permalink | Reply to this

Re: Categorifying CCCs: Computation as a Process

Hi,
This is very interesting. I’m new to CT but know a bit about lambda calculus.
If I understand what you are trying to do, I think you should be considering some calculus of explicit substitutions rather than “plain” β-reduction, as the latter leaves substitution outside the calculus…
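For readers who haven’t met explicit substitution: the idea is to turn substitution into a term constructor with its own small-step rules, so the work that β-reduction usually performs at the meta-level becomes visible inside the calculus. A toy Python sketch (my own encoding; pushing a substitution under a λ needs capture-avoidance, which is omitted here):

```python
# Terms: ("var", x) | ("lam", x, t) | ("app", t, u) | ("sub", t, x, u),
# where ("sub", t, x, u) is the *explicit* substitution t[u/x].

def step(t):
    """One small step, leftmost-outermost; returns None if no rule applies."""
    tag = t[0]
    if tag == "app" and t[1][0] == "lam":          # beta just *creates* a sub
        _, (_, x, body), arg = t
        return ("sub", body, x, arg)
    if tag == "sub":
        _, body, x, u = t
        if body[0] == "var":                       # resolve at a variable
            return u if body[1] == x else body
        if body[0] == "app":                       # push sub into both halves
            return ("app", ("sub", body[1], x, u), ("sub", body[2], x, u))
        # (pushing under "lam" omitted: it needs capture-avoidance)
    if tag in ("app", "sub"):                      # otherwise reduce a subterm
        for i in range(1, len(t)):
            if isinstance(t[i], tuple):
                s = step(t[i])
                if s is not None:
                    return t[:i] + (s,) + t[i + 1:]
    return None

def normalize(t):
    trace = [t]
    while (s := step(t)) is not None:
        t = s
        trace.append(t)
    return trace

x, y = ("var", "x"), ("var", "y")
trace = normalize(("app", ("lam", "x", ("app", x, x)), y))
assert trace[-1] == ("app", y, y)
assert any(t[0] == "sub" for t in trace)   # the substitution steps are visible
```

Each intermediate term in `trace` is a frame of the “movie”: the substitution steps that plain β-reduction hides are now individual cells.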

Posted by: Enrique on September 16, 2006 2:39 PM | Permalink | Reply to this
Read the post Voevodsky on the Homotopy λ-Calculus
Weblog: The n-Category Café
Excerpt: Voevodsky has released a "very short note on the homotopy λ-calculus".
Tracked: October 2, 2006 7:00 PM

Re: Categorifying CCCs: Computation as a Process

Hi,

The subject of interpreting lambda-calculus into n-categories has been of major interest to me for some years. However, I do not see lambda-terms as 1-dimensional objects and computations as 2-dimensional ones, but I lift everything by one dimension, so that:
- types live in dimension 1,
- terms/formulas are in dimension 2,
- computations/proofs are in dimension 3,
- computations over computations in dimension 4 (e.g. normalization of proofs), etc.

Let me explain this classification. The kind of translation I seek would explicitly describe both:
- resources management issues (duplication and erasure);
- substitutions (as mentioned in the previous post).
(Here “explicit description” means giving elementary operations and elementary rules to compute them.)

Albert Burroni’s polygraphs allow a nice treatment of the first part, by describing the structure of cartesian categories with generating operations and computations. (Polygraphs are wonderful objects: they are n-categories freely generated in every dimension). Here, the generating cells of the polygraph are:
- elementary types are generating 1-cells,
- term constructors, elementary duplications and elementary erasers are generating 2-cells,
- rules for computation (e.g. for evaluation of duplication/erasure) are generating 3-cells.

Hence, in this framework, computations are 3-dimensional objects. This first part is due to Albert Burroni, Yves Lafont and myself and summarized in “Termination orders for 3-dimensional rewriting” that can be found on my home page. (Anyone interested in polygraphs should check Albert’s paper “Higher-dimensional word problem”, available on his page.)

The explicit description of substitutions would require the addition of extra cells, corresponding to a presentation of monoidal closed categories in polygraphs.

The easy way would consist in adjoining cells for computing duals, such as the ones in string/tangle diagrams. However this results in the collapse of our n-category, making the easy way a bit useless…

The path I am currently exploring consists in an n-categorical translation of the presheaf constructions made in the paper “Abstract syntax and variable binding” by Marcelo Fiore, Gordon Plotkin and Daniele Turi.

Let me know if you have comments or questions about this polygraphic point of view of computation.

Posted by: Yves Guiraud on October 10, 2006 9:57 AM | Permalink | Reply to this

Re: Categorifying CCCs: Computation as a Process

Here’s a question/comment. Why do you think of terms as 1-dimensional objects? Maybe it’s because you can take a product of terms, and you want to represent these products using diagrams, with the product of $\bullet \stackrel{x}{\to} \bullet$ and $\bullet \stackrel{y}{\to} \bullet$ looking like $\bullet \stackrel{x}{\to} \bullet \stackrel{y}{\to} \bullet$?

If so, then I think something like this is going on:

Above I’m talking about “cartesian closed categories” and “cartesian closed 2-categories”. But, a cartesian category is always monoidal, and a monoidal category is just a 2-category with one object. So, “cartesian closed categories” are already 2-categories. Similarly “cartesian closed 2-categories” are already 3-categories.

If one likes extra dimensions, one can go further: a braided monoidal category is just a (weak) 3-category with one morphism, and a symmetric monoidal category is just a (weak) 4-category with one 2-morphism. This is why we draw morphisms in braided monoidal categories using string diagrams in 3-space, and computations in symmetric monoidal categories using string diagrams in 4-space.

So, a cartesian closed category is already a special sort of 4-category, and a “cartesian closed 2-category” is a special sort of 5-category!

In fact, this viewpoint underlies the diagrams in my course. I began introducing these diagrams here and went further here. These ideas are probably quite familiar to you - I may be reinventing the wheel. So far I’ve focused on closed monoidal categories, so the diagrams are just 2-dimensional. But when the braiding and symmetry become important, they’ll become 4-dimensional - with the 4th dimension merely letting us neglect the difference between a right-handed crossing

     \ /
      /
     / \

and a left-handed crossing

     \ /
      \
     / \

when drawing our string diagrams.

And later, when I categorify everything, the pictures will become 5-dimensional!

This is not as scary as it sounds, since it’s easy to draw a “movie” each of whose frames is a picture of a string diagram in 4 dimensions, where the 4th dimension merely lets us ignore the difference between

     \ /
      \
     / \

and

     \ /
      /
     / \

Posted by: John Baez on October 30, 2006 12:46 AM | Permalink | Reply to this

Re: Categorifying CCCs: Computation as a Process

John Baez wrote:

Here’s a question/comment. Why do you think of terms as 1-dimensional objects? Maybe it’s because you can take a product of terms, and you want to represent these products using diagrams, with the product of •→x• and •→y• looking like •→x•→y•?

In fact, I see terms as 2-dimensional objects! In the polygraphic setting, one considers that objects belong to dimension n when they can be composed in n different ways. For example, propositional terms (i.e. without abstraction or quantifiers) are considered as 2-dimensional when equipped with the “sequential” and “parallel” compositions. In string-like diagrams, these two compositions are respectively the vertical one and the horizontal one.

I really like this point of view because terms and computations have the same structure, except for their dimension: for example, computations on propositional terms (i.e. propositional calculus) are three-dimensional objects, since they inherit the two compositions of terms and have their own extra composition (the usual sequential composition of computations). See, for example, this proof that the neutral element of a monoid is unique: vertical slices yield string-like diagrams representing computation steps and their results.

If so, then I think something like this is going on:

Above I’m talking about “cartesian closed categories” and “cartesian closed 2-categories”. But, a cartesian category is always monoidal, and a monoidal category is just a 2-category with one object. So, “cartesian closed categories” are already 2-categories. Similarly “cartesian closed 2-categories” are already 3-categories.

I totally agree with that. However, I think we must dig further in the structure. Indeed a cartesian closed category is a monoidal one and, thus, is already a 2-category. Yet, we still do not describe the internal hom.

I began introducing these diagrams here and went further here. These ideas are probably quite familiar to you - I may be reinventing the wheel.

Yes, I also use diagrams like these ones; I find them really useful even if I am not totally satisfied with them.

Their main quality is that they are string-like diagrams where one can draw operations such as the evaluation: the input of type hom(x,y) is pictured as a “thick” string going from x to y.

But there is some internal structure in these “thick” strings, namely that of the ambient (n-)category. Indeed, relations between two (n-)morphisms f and f’ induce relations between hom(x,f) and hom(x,f’), for example.

If one likes extra dimensions, one can go further: a braided monoidal category is just a (weak) 3-category with one morphism, and a symmetric monoidal category is just a (weak) 4-category with one 2-morphism. This is why we draw morphisms in braided monoidal categories using string diagrams in 3-space, and computations in symmetric monoidal categories using string diagrams in 4-space.

So, a cartesian closed category is already a special sort of 4-category, and a “cartesian closed 2-category” is a special sort of 5-category!

For the moment, I have not used weak structures, in order to limit the number of dimensions, so I cannot really comment on this. I do not know whether the extra structure I was talking about, needed to describe the internal hom, will induce another dimensional inflation: I do not think so, but this depends on the ideas I currently have to model the hom!

Posted by: Yves Guiraud on October 30, 2006 10:05 AM | Permalink | Reply to this

Re: Categorifying CCCs: Computation as a Process

I then got upset when people told me morphisms in a CCC were really more like equivalence classes of proofs, where the equivalence relation seemed to wash out some really interesting stuff - stuff that looks like “2-morphisms between morphisms”.

I got even more upset when these “2-morphisms” seemed to be precisely the “computational work” that a computer implementing the λ-calculus would need to do to actually compute anything. I’m talking about things like β-reduction.

A theory of computation that neglects the process of computation?

If I understand correctly the point John is raising, I think the answer is that historically it was much easier to describe the “syntactic” structure of lambda-calculus computations (see “Church-Rosser theorem”) than it was to figure out what they actually meant, as mathematical functions. So in a certain sense the intent of Christopher Strachey and Dana Scott’s work, and everything that followed, was to “mod out” as much as possible! The holy grail for a programming language semantics, what’s called “full abstraction”, is that two programs be interpreted by the same arrow exactly when they have indistinguishable behavior according to the syntactic rewriting rules of the programming language. (See Wikipedia on “denotational semantics”.)

Lately people are more interested in n-categories, myself in the context of concurrent computation, where there seems to be a rich structure of “pushing atomic computation steps (like beta-reductions) past each other” when they are causally independent (executing on different processors of a multiprocessor computer, say).
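This “pushing causally independent steps past each other” can be made concrete even for tiny combinator terms. A minimal Python sketch (my own toy encoding, with rules I x → x and K x y → x): two disjoint redexes can be contracted in either order, and the resulting square of reductions commutes:

```python
# Terms are atoms or 2-tuples (f, x) for application, with rules
# I x -> x and K x y -> x.  A "path" of 0/1 choices locates a redex.

I, K = "I", "K"

def step(t, path):
    """Contract the redex at `path` in term t."""
    if path:
        head, rest = path[0], path[1:]
        return tuple(step(c, rest) if i == head else c
                     for i, c in enumerate(t))
    if isinstance(t, tuple) and t[0] == I:                    # I x -> x
        return t[1]
    if isinstance(t, tuple) and isinstance(t[0], tuple) and t[0][0] == K:
        return t[0][1]                                        # K x y -> x
    raise ValueError("no redex here")

# A term with two disjoint redexes, (I a) and (I b):
t = ((K, (I, "a")), (I, "b"))

left_first  = step(step(t, (0, 1)), (1,))   # contract (I a), then (I b)
right_first = step(step(t, (1,)), (0, 1))   # contract (I b), then (I a)

# The two orders commute: this square of atomic steps is exactly the
# kind of 2-dimensional cell the thread is about.
assert left_first == right_first == ((K, "a"), "b")
```

Contracting the remaining K-redex at the root, `step(left_first, ())`, then finishes the computation with the result `"a"`.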

Posted by: Kevin Watkins on October 14, 2006 3:00 AM | Permalink | Reply to this

Re: Categorifying CCCs: Computation as a Process

This weekend I came up with a string diagram notation that’s much closer to Keenan’s movies than the one that ended up in our Rosetta Stone paper. The diagrams are too hard for me to include here, so I’ll refer interested parties to the blog post.

Whereas Keenan’s depiction is based on combinator calculus, mine explicitly has a $\lambda$ node. The most striking part for me was that lambda takes as input an antivariable and a term in which that variable is free, producing a term in which the variable is bound.

Has anyone seen this representation before?

Posted by: Mike Stay on January 26, 2009 7:31 AM | Permalink | Reply to this

Re: Categorifying CCCs: Computation as a Process

Peter Selinger was kind enough to send this reply:

Hi Mike,

your string diagrams are known as “sharing graphs” in the literature, and are extremely well-studied. See e.g.

[1] Stefano Guerrini, A general theory of sharing graphs (1997)

There is a reduction strategy on sharing graphs that is provably the most efficient possible reduction strategy for lambda calculus; this goes back to the work of Levy and Lamping (both cited in the above paper).

Guerrini also gives an algebraic semantics of these sharing graphs, see section 7. I don’t know whether it is related to your categorical view.

I also like the book by Peyton-Jones:

[2] Simon Peyton Jones, The Implementation of Functional Programming Languages (1987)

It shows how to use sharing graphs as the basis for a practical implementation of lazy programming languages. As far as I know, this is still state-of-the-art and is used in the implementation of Haskell. In keeping with the practical nature of the book, the sharing graphs are represented in slightly different form (with syntactic variables rather than backpointers), but this is of course equivalent.

As for the categorical semantics, what you have in mind is a kind of abstract syntax with variable binding. To put this into perspective, the semantics of ordinary abstract syntax (i.e., without variable binding) is given by an object A in a cartesian category, together with an interpretation $f : A^n \to A$ for each n-ary function symbol f. One can then inductively define the interpretation of terms, speak of the free such object, etc.

Things get slightly more complicated if one adds variable binding to this picture. This has also been studied, though perhaps not in the same form as you are proposing. Perhaps the closest to your approach is

[3] Martin Hofmann, Semantical analysis of higher-order abstract syntax (1999)

Here one has $var : V \to T$, $app : T \times T \to T$, and $lam : (V \Rightarrow T) \to T$ (see top of p.9, with the obvious changes in notation). You will note the use of a function space $(V \Rightarrow T)$ in place of your cartesian product $V^* \times T$.

Another very similar paper is

[4] M.P.Fiore, G.D.Plotkin and D.Turi. Abstract syntax and variable binding (1999) or here

Here, one has $var : V \to T$, $app : T \times T \to T$, and $lam : \delta T \to T$ (as contained e.g. in the commutative diagram on p.6). Again, $\delta T$ is something akin to the function space $V \Rightarrow T$, but is also isomorphic, in a suitable sense, to $T \Rightarrow T$, as far as I remember (this is important for substitution, see below).

A third, technically slightly different (but conceptually similar) approach to abstract syntax with variable binders is:

[5] Murdoch J. Gabbay, Andrew M. Pitts: A new approach to abstract syntax with variable binding (1999)

[6] Murdoch J. Gabbay, Andrew M. Pitts: A new approach to abstract syntax with variable binding (2002)

See especially the second-to-last line of [6, p.356], and you immediately see the similarity. Their $[A]T$ operation is akin to $V \Rightarrow T$ in [3] and $\delta T$ in [4].

It is fun to note that [3-5] all appeared (independently) in the same conference in 1999. Semantics of variable binders was clearly a pressing problem that year.

I can comment briefly on the main difference between these works and what you are proposing. One thing that is important in syntax is substitution (of a term for a variable). In fact, in the usual abstract syntax (without binders), a term with n variables is represented as a morphism $A^n \to A$. This is useful for substitution: namely, if $f : A \times A^n \to A$ represents the term $t(x, y_1, \ldots, y_n)$ and $g : A^n \to A$ represents $s(y_1, \ldots, y_n)$, then $f \circ \langle g, id \rangle$ represents $t[s/x]$.
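A concrete instance of this substitution-as-composition recipe, sketched in Python (my own toy example, interpreting the object A as the integers, with n = 1):

```python
# f interprets t(x, y) = x + y as a map A x A -> A,
# g interprets s(y) = 2 * y as a map A -> A.
# Then f . <g, id> interprets the substituted term t[s/x](y) = 2*y + y.

def f(x, y): return x + y          # t(x, y) = x + y
def g(y):    return 2 * y          # s(y) = 2 * y

def pair_g_id(y):                  # the pairing <g, id> : A -> A x A
    return (g(y), y)

def substituted(y):                # the composite f . <g, id>
    return f(*pair_g_id(y))

for y in range(5):
    assert substituted(y) == 2 * y + y   # agrees with t[s/x](y)
```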

In the presence of variables, a term with n variables is represented as $1 \to V^{*n} \times T$ (in your notation). However, for reasons of substitution, one would still like this hom-set to be in 1-1 correspondence with $T^n \to T$. Somehow this is what the papers [3-6] manage to do, each in their own way.

I hope these references will be useful. It would be great if you had a more abstract monoidal framework of which the particular constructions in [3-6] are concrete examples. I have always wondered about the precise connection between [3-6], and whether there is a bigger picture.

Good luck, and let me know how it goes! – Peter

Posted by: Mike Stay on January 27, 2009 7:43 PM | Permalink | Reply to this

Re: Categorifying CCCs: Computation as a Process

Yesterday I had created nLab: string diagram, mainly as a stub surrounding the link to Selinger’s new survey that came across the nCat mailing list yesterday.

Maybe you would enjoy expanding a bit, in particular maybe adding the stuff about sharing graphs.

The above list of useful information will be forgotten here eventually. On the nLab it will be found more easily.

Posted by: Urs Schreiber on January 27, 2009 8:29 PM | Permalink | Reply to this

Re: Categorifying CCCs: Computation as a Process

Posted by: Mike Stay on January 27, 2009 10:49 PM | Permalink | Reply to this

Re: Categorifying CCCs: Computation as a Process

Thanks!

Posted by: Urs Schreiber on January 28, 2009 7:39 AM | Permalink | Reply to this

Re: Categorifying CCCs: Computation as a Process

The first link on the page is broken. The correct version is http://www.cs.virginia.edu/~cs655/readings/mockingbird.html

Posted by: Stephen Paul King on May 20, 2013 7:01 AM | Permalink | Reply to this

Post a New Comment