

May 14, 2021

Structure vs. Observation

Posted by Emily Riehl

guest post by Stelios Tsampas and Amin Karamlou

Today we’ll be talking about the theory of universal algebra, and its less well-known counterpart of universal coalgebra. We’ll try to convince you that these two frameworks provide us with suitable tools for studying a fundamental duality that arises between structure and behaviour. Rather than jumping straight into the mathematical details we’ll start with a few motivating examples that arise in the setting of functional programming. We’ll talk more about the mathematics at play behind the scenes in the second half of this post.

Take a look at this Agda-style [1] definition of an algebraic datatype, a structure.

\begin{aligned} {\color{#CD6600} data} \:& {\color{#0000CD} Expr}\::\:{\color{#0000CD} Set}\:{\color{#CD6600}where}\\ & {\color{#008B00} lit} : \mathbb{N} \to {\color{#0000CD} Expr}\\ & {\color{#008B00} add} : {\color{#0000CD} Expr} \times {\color{#0000CD} Expr} \to {\color{#0000CD} Expr}\\ & {\color{#008B00} mul} : {\color{#0000CD} Expr} \times {\color{#0000CD} Expr} \to {\color{#0000CD} Expr} \end{aligned}

The intention of the programmer is to define a datatype {\color{#0000CD} Expr} that can be constructed in exactly three ways. One can either apply a natural number to the function {\color{#008B00} lit} (for “literal”), find a pair of {\color{#0000CD} Expr}s and apply them to the function {\color{#008B00} add}, or do the same thing to {\color{#008B00} mul}. In fact, {\color{#0000CD} Expr} is defined precisely in terms of these three functions, which we call constructors, and it is the least such set whose building blocks are {\color{#008B00} lit}, {\color{#008B00} mul} and {\color{#008B00} add}. In the end, {\color{#0000CD} Expr} consists of expressions such as “{\color{#008B00} add} ({\color{#008B00} lit} 5) ({\color{#008B00} mul} ({\color{#008B00} lit} 4) ({\color{#008B00} lit} 2))”, “{\color{#008B00} lit} 0” and so on.
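As a rough analogue outside Agda, the same three-constructor shape can be sketched in Python with one class per constructor. The names `Expr`, `Lit`, `Add`, `Mul` are ours, purely for illustration:

```python
from dataclasses import dataclass

class Expr:
    """An expression is built in exactly three ways: lit, add, mul."""

@dataclass
class Lit(Expr):
    n: int          # lit : N -> Expr

@dataclass
class Add(Expr):
    l: Expr         # add : Expr x Expr -> Expr
    r: Expr

@dataclass
class Mul(Expr):
    l: Expr         # mul : Expr x Expr -> Expr
    r: Expr

# the example expression from the text: add (lit 5) (mul (lit 4) (lit 2))
e = Add(Lit(5), Mul(Lit(4), Lit(2)))
```

Every value of `Expr` is a finite tree assembled from these three constructors, mirroring the “least set” reading above.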

Such datatypes are not the only ones we can define in functional languages such as Agda. The dual notion is that of a record, or a class. Well, here’s a classic.

\begin{aligned} {\color{#CD6600} record}\:& {\color{#0000CD} Stream}\::\:{\color{#0000CD} Set}\:{\color{#CD6600}where}\\ & {\color{#CD6600} field} \\ & {\color{#EE1289} head} : \mathbb{N} \\ & {\color{#EE1289} next} : {\color{#0000CD} Stream}\\ \end{aligned}

{\color{#0000CD} Stream} is a datatype, like {\color{#0000CD} Expr} before. In fact, {\color{#0000CD} Stream} is the set of all infinite sequences of natural numbers. Only, in this case, the functions {\color{#EE1289} head} and {\color{#EE1289} next} do not represent ways to construct a {\color{#0000CD} Stream} instance. Rather, they do the exact opposite: they are destructors for a {\color{#0000CD} Stream}! Perhaps a more accurate representation would be something akin to

\begin{aligned} {\color{#CD6600} record}\:& {\color{#0000CD} Stream}\::\:{\color{#0000CD} Set}\:{\color{#CD6600}where}\\ & {\color{#CD6600} field} \\ & {\color{#EE1289} head} : {\color{#0000CD} Stream} \to \mathbb{N} \\ & {\color{#EE1289} next} : {\color{#0000CD} Stream} \to {\color{#0000CD} Stream}\\ \end{aligned}

By saying “destructor”, the idea is that a {\color{#0000CD} Stream} can be applied to either {\color{#EE1289} head} or {\color{#EE1289} next} to get a natural number or another {\color{#0000CD} Stream} respectively. Hence, destructors differ from constructors in that they are not used to build an instance of a datatype out of a number of ingredients, but to pick it apart and present the observations that one can make of it. In the case of {\color{#0000CD} Stream}, i.e. infinite sequences of natural numbers, we can always observe the {\color{#EE1289} head}, the number at the current position in the stream, and move on to observe the rest of the {\color{#0000CD} Stream} (i.e. observe the {\color{#EE1289} next} {\color{#0000CD} Stream}), repeating the process.
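A destructor-only presentation can likewise be sketched in Python (names are ours): a `Stream` is anything that answers `head` and `next`, with the tail hidden behind a thunk so the infinite sequence is never built all at once.

```python
class Stream:
    """An infinite sequence of naturals, given only by its destructors."""
    def __init__(self, head, tail):
        self.head = head      # head : Stream -> N
        self._tail = tail     # a thunk, so the tail is built on demand

    def next(self):           # next : Stream -> Stream
        return self._tail()

def nats(n=0):
    """The stream n, n+1, n+2, ... defined purely by its observations."""
    return Stream(n, lambda: nats(n + 1))

def take(k, xs):
    """Observe the first k heads (for inspection only; the stream itself is infinite)."""
    out = []
    for _ in range(k):
        out.append(xs.head)
        xs = xs.next()
    return out
```

Note there is no constructor-style “list of all elements” anywhere: a stream exists only through what can be observed of it.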

We can witness the first traces of a duality by comparing the codomains of constructors with the domains of destructors. At one end, constructors all map to the same thing, {\color{#0000CD} Expr}, while all destructors map from the same thing, {\color{#0000CD} Stream}. But the antitheses do not end here as, interestingly, instances of {\color{#0000CD} Expr} and {\color{#0000CD} Stream} are meant to be used in radically different ways.

Let us define a simple function that “calculates” the value of an {\color{#0000CD} Expr} expression. In this case we want to define a function such that “{\color{#008B00} add} ({\color{#008B00} lit} 5) ({\color{#008B00} mul} ({\color{#008B00} lit} 4) ({\color{#008B00} lit} 2))” amounts to 13 or that “{\color{#008B00} lit} 0” amounts to 0. This is simple enough.

\begin{aligned} & {\color{#0000CD} calc} : {\color{#0000CD} Expr} \to \mathbb{Z} \\ & {\color{#008B00} lit}\:n \mapsto n \\ & {\color{#008B00} add}\:(l,r) \mapsto ({\color{#0000CD} calc}\:l) + ({\color{#0000CD} calc}\:r)\\ & {\color{#008B00} mul}\:(l,r) \mapsto ({\color{#0000CD} calc}\:l) * ({\color{#0000CD} calc}\:r) \end{aligned}
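Under a hypothetical Python encoding of {\color{#0000CD} Expr} (one dataclass per constructor, our names), calc becomes one recursive clause per constructor:

```python
from dataclasses import dataclass

@dataclass
class Lit:
    n: int

@dataclass
class Add:
    l: object
    r: object

@dataclass
class Mul:
    l: object
    r: object

def calc(e):
    """Inductive definition: compose the values of the sub-expressions."""
    if isinstance(e, Lit):
        return e.n                       # lit n      |-> n
    if isinstance(e, Add):
        return calc(e.l) + calc(e.r)     # add (l, r) |-> calc l + calc r
    return calc(e.l) * calc(e.r)         # mul (l, r) |-> calc l * calc r
```

Recursion bottoms out at the literals, exactly because every expression is a finite tree of constructors.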

{\color{#0000CD} calc} is an example of a simple inductive definition. The key is that it composes values of smaller sub-{\color{#0000CD} Expr}essions to specify the result of the whole {\color{#0000CD} Expr}ession. “Composition” is a rather fitting word here, as the definition of {\color{#0000CD} calc} is essentially a recipe on how to mix {\color{#0000CD} Expr}essions together. Now, how about a small dose of un-mixing?

\begin{aligned} & {\color{#0000CD} zip} : {\color{#0000CD} Stream} \times {\color{#0000CD} Stream} \to {\color{#0000CD} Stream} \\ & {\color{#EE1289} head}\:({\color{#0000CD} zip}\:(l,r)) = {\color{#EE1289} head}\:l \\ & {\color{#EE1289} next}\:({\color{#0000CD} zip}\:(l,r)) = {\color{#0000CD} zip}\:(r, {\color{#EE1289} next}\:l) \end{aligned}
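In a destructor-based Python sketch of streams (illustrative names, as before), zip is defined purely by saying what its head and next are:

```python
class Stream:
    """A stream presented by its destructors: a head and a delayed tail."""
    def __init__(self, head, tail):
        self.head = head
        self._tail = tail

    def next(self):
        return self._tail()

def zip_streams(l, r):
    # head (zip (l, r)) = head l
    # next (zip (l, r)) = zip (r, next l)
    return Stream(l.head, lambda: zip_streams(r, l.next()))

def const(n):
    """The constant stream n, n, n, ..."""
    return Stream(n, lambda: const(n))

def take(k, xs):
    """Observe the first k heads."""
    out = []
    for _ in range(k):
        out.append(xs.head)
        xs = xs.next()
    return out
```

Zipping the constant streams of 0s and 1s interleaves them, one element from each side in turn.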

Our new function {\color{#0000CD} zip} takes apart two {\color{#0000CD} Stream}s by specifying, at each given step, how to (i) extract a natural number and (ii) extract a new {\color{#0000CD} Stream} out of them. The end result is a coinductive definition. Notice how {\color{#0000CD} zip} maps onto {\color{#0000CD} Stream}s and compare it to {\color{#0000CD} calc}, which maps from {\color{#0000CD} Expr}s. Once again, the arrows are reversed. Strange.

Let’s forget about that mystery for a second and try out a couple of proofs. For starters, notice that {\color{#0000CD} calc} maps all elements of {\color{#0000CD} Expr} to nonnegative integers. This is a property that we can prove inductively.

\begin{aligned} & {\color{#0000CD} prop}\: : \forall\:(e : {\color{#0000CD} Expr}) \to {\color{#0000CD} calc}\:e \geq 0 \\ & {\color{#0000CD} prop}\:({\color{#008B00} lit}\:n) = {\color{#008B00} true}\:as\:n \geq 0 \\ & {\color{#0000CD} prop}\:({\color{#008B00} add}\:l\:r) = {\color{#008B00} true}\:as\:{\color{#0000CD} prop}\:l\:\wedge\:{\color{#0000CD} prop}\:r\\ & \:\:\:\:\:\:\:\:\:\:\:\:\wedge\:{\color{#0000CD} calc}\:({\color{#008B00} add}\:l\:r) = {\color{#0000CD} calc}\:l + {\color{#0000CD} calc}\:r\\ & {\color{#0000CD} prop}\:({\color{#008B00} mul}\:l\:r) = {\color{#008B00} true}\:as\:{\color{#0000CD} prop}\:l\:\wedge\:{\color{#0000CD} prop}\:r \\ & \:\:\:\:\:\:\:\:\:\:\:\:\wedge\:{\color{#0000CD} calc}\:({\color{#008B00} mul}\:l\:r) = {\color{#0000CD} calc\:l} \cdot {\color{#0000CD} calc\:r} \end{aligned}

The above is an example of a mechanized Agda proof written in a mathematician-friendly way. We encode the fact that {\color{#0000CD} calc} produces nonnegative values as the property/function {\color{#0000CD} prop}. A valid proof requires {\color{#0000CD} prop} to be true for each of the constructors of {\color{#0000CD} Expr}, which is trivial for {\color{#008B00} lit} and slightly more involved for {\color{#008B00} add} and {\color{#008B00} mul}. In the latter two cases we invoke the induction hypothesis on both operands, which translates to two nested calls to {\color{#0000CD} prop}.
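The shape of the induction can be animated in the same hypothetical Python encoding: a checker with exactly one case per constructor, invoking itself on both operands just as the proof invokes the induction hypothesis. (Running it only tests the property on a given expression; it is not itself a proof.)

```python
from dataclasses import dataclass

@dataclass
class Lit:
    n: int

@dataclass
class Add:
    l: object
    r: object

@dataclass
class Mul:
    l: object
    r: object

def calc(e):
    if isinstance(e, Lit):
        return e.n
    if isinstance(e, Add):
        return calc(e.l) + calc(e.r)
    return calc(e.l) * calc(e.r)

def prop(e):
    """Structural induction: one case per constructor of Expr."""
    if isinstance(e, Lit):
        return e.n >= 0                            # base case: n is a natural
    # "induction hypothesis" on both operands, plus the defining equation of calc
    return prop(e.l) and prop(e.r) and calc(e) >= 0
```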

The above proof is an example of the so-called Curry-Howard correspondence, which relates mathematical proofs to programs written in typed functional languages. Its main aspects are that propositions can be encoded as types (the first line of the proof) and proofs as definitions. Indeed, {\color{#0000CD} prop} is simply a function that we are giving a definition for.

Now let’s try something similar for {\color{#0000CD} Stream}. First, we define a function {\color{#0000CD} even} that only picks positions 0,2,4,\dots from a {\color{#0000CD} Stream}.

\begin{aligned} & {\color{#0000CD} even} : {\color{#0000CD} Stream} \to {\color{#0000CD} Stream} \\ & {\color{#EE1289} head}\:({\color{#0000CD} even}\:xs) = {\color{#EE1289} head}\:xs \\ & {\color{#EE1289} next}\:({\color{#0000CD} even}\:xs) = {\color{#0000CD} even}\:({\color{#EE1289} next}\:({\color{#EE1289} next}\:xs)) \end{aligned}

If we “double” a {\color{#0000CD} Stream} xs (as in {\color{#0000CD} zip}\:(xs , xs)) and apply the result to {\color{#0000CD} even}, we will obviously recover the original {\color{#0000CD} Stream} xs. We once again write down the proof, only this time it is coinductive.

\begin{aligned} & {\color{#0000CD} prop2} : \forall\:(xs : {\color{#0000CD} Stream}) \to {\color{#0000CD} even}\:({\color{#0000CD} zip}\:(xs , xs)) \equiv xs \\ & {\color{#EE1289} head}\:({\color{#0000CD} even}\:({\color{#0000CD} zip}\:(xs , xs))) \equiv {\color{#EE1289} head}\:xs\:&& (1)\\ & {\color{#EE1289} next}\:({\color{#0000CD} even}\:({\color{#0000CD} zip}\:(xs , xs))) \equiv {\color{#EE1289} next}\:xs\:as\:\\ & \:\:\:\:\:\:\:\:\:\:\:\:{\color{#EE1289} next}\:({\color{#0000CD} even}\:({\color{#0000CD} zip}\:(xs , xs))) \equiv {\color{#0000CD} even}\:({\color{#0000CD} zip}\:({\color{#EE1289} next}\:xs , {\color{#EE1289} next}\:xs)) && (2) \\ & \:\:\:\:\:\:\:\:\:\:\:\:{\color{#0000CD} even}\:({\color{#0000CD} zip}\:({\color{#EE1289} next}\:xs , {\color{#EE1289} next}\:xs)) \equiv {\color{#EE1289} next}\:xs && (3) \end{aligned}

where (1) and (2) hold by the definitions of {\color{#0000CD} even} and {\color{#0000CD} zip}, and (3) is obtained via coinduction on {\color{#EE1289} next}\:xs.
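prop2 cannot be established by running a program, but its finite observations can be tested: every prefix of even (zip (xs, xs)) agrees with the corresponding prefix of xs. A self-contained Python sketch (illustrative names):

```python
class Stream:
    def __init__(self, head, tail):
        self.head, self._tail = head, tail

    def next(self):
        return self._tail()

def zip_streams(l, r):
    return Stream(l.head, lambda: zip_streams(r, l.next()))

def even(xs):
    # head (even xs) = head xs ; next (even xs) = even (next (next xs))
    return Stream(xs.head, lambda: even(xs.next().next()))

def nats(n=0):
    return Stream(n, lambda: nats(n + 1))

def take(k, xs):
    out = []
    for _ in range(k):
        out.append(xs.head)
        xs = xs.next()
    return out
```

zip (xs, xs) doubles every element (x0, x0, x1, x1, …) and even picks every second position, giving back x0, x1, x2, … as the proof asserts.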

The proof essentially asserts that both the {\color{#EE1289} head} and the {\color{#EE1289} next} part of the two streams in question are identical. If we look closely, we can observe the subtle “structural” differences between {\color{#0000CD} prop} and {\color{#0000CD} prop2}: in the former case, we had to provide a valid proof for each {\color{#0000CD} Expr} constructor, while in the latter, we had to provide a valid proof of equivalence for each {\color{#0000CD} Stream} destructor! What’s going on here?

Constructors and destructors, induction and coinduction, composition and decomposition are all elements of a duality that transcends functional programming, going all the way up to the very foundations of thought: the duality of structure vs. observation. And it is a duality that universal (co)algebra gives a very satisfying account of.

The Category Theory Within

We’ll now give a brief overview of the main building blocks of universal (co)algebra. Along the way, we will revisit our Agda examples in order to see how this simple and rather abstract theory can be used in concrete situations.

Given endofunctors T, F: \mathbf{Set} \rightarrow \mathbf{Set} [2], a T-algebra (S, a_{S}) consists of a set S together with a function a_{S}: T(S) \rightarrow S. Dually, an F-coalgebra (X, \alpha_X) consists of a set X together with a function \alpha_X: X \rightarrow F(X). We’ve already encountered examples of both algebras and coalgebras:

  1. Consider the endofunctor Ex which maps a set U to Ex(U) = \mathbb{N} + (U \times U) + (U \times U), and a function f to Ex(f) = id_{\mathbb{N}} + (f \times f) + (f \times f). Then, ({\color{#0000CD} Expr}, {\color{#008B00} lit} + {\color{#008B00} add} + {\color{#008B00} mul}) is an Ex-algebra.
  2. Consider the endofunctor S which maps a set X to S(X) = \mathbb{N} \times X, and a function f to S(f) = id_{\mathbb{N}} \times f. Then, ({\color{#0000CD} Stream}, ({\color{#EE1289} head}, {\color{#EE1289} next})) is an S-coalgebra.
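Concretely, the two structure maps can be written out in a Python sketch, with Ex(U) encoded as tagged values ("lit", n), ("add", (u, v)), ("mul", (u, v)); all names are ours:

```python
from dataclasses import dataclass

@dataclass
class Lit:
    n: int

@dataclass
class Add:
    l: object
    r: object

@dataclass
class Mul:
    l: object
    r: object

# The structure map Ex(Expr) -> Expr, i.e. the copairing lit + add + mul.
def expr_algebra(t):
    tag, payload = t
    if tag == "lit":
        return Lit(payload)
    if tag == "add":
        return Add(*payload)
    return Mul(*payload)

class Stream:
    def __init__(self, head, tail):
        self.head, self._tail = head, tail

    def next(self):
        return self._tail()

# The structure map Stream -> S(Stream) = N x Stream, i.e. the pairing (head, next).
def stream_coalgebra(xs):
    return (xs.head, xs.next())

def fours():
    return Stream(4, fours)

sample = Stream(3, fours)   # the stream 3, 4, 4, 4, ...
```

The algebra map *assembles* an expression from one layer of constructor data; the coalgebra map *disassembles* a stream into one layer of observations.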

A T-algebra homomorphism between two T-algebras (U, a_U), (V, a_V) is any function f: U \rightarrow V where a_V \circ T(f) = f \circ a_U. Dually, an F-coalgebra homomorphism between two F-coalgebras (X, \alpha_X), (Y, \alpha_Y) is any function f: X \rightarrow Y where \alpha_Y \circ f = F(f) \circ \alpha_X. In terms of commuting diagrams, these conditions amount to:

\begin{array}{ccc} T(U) & \xrightarrow{\;T(f)\;} & T(V) \\ {\scriptstyle a_U}\big\downarrow & & \big\downarrow{\scriptstyle a_V} \\ U & \xrightarrow{\;f\;} & V \end{array} \qquad\qquad \begin{array}{ccc} X & \xrightarrow{\;f\;} & Y \\ {\scriptstyle \alpha_X}\big\downarrow & & \big\downarrow{\scriptstyle \alpha_Y} \\ F(X) & \xrightarrow{\;F(f)\;} & F(Y) \end{array}

Once again, we’ve already seen examples of both algebra and coalgebra homomorphisms:

  1. {\color{#0000CD} calc} is an Ex-algebra homomorphism.
  2. {\color{#0000CD} zip} is an S-coalgebra homomorphism.
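The commutativity condition for calc can be checked pointwise in a Python sketch: applying the Ex-algebra on the integers after Ex(calc) agrees with applying calc after the Ex-algebra on expressions (hypothetical tagged-value encoding, names ours):

```python
from dataclasses import dataclass

@dataclass
class Lit:
    n: int

@dataclass
class Add:
    l: object
    r: object

@dataclass
class Mul:
    l: object
    r: object

def calc(e):
    if isinstance(e, Lit):
        return e.n
    if isinstance(e, Add):
        return calc(e.l) + calc(e.r)
    return calc(e.l) * calc(e.r)

def ex_map(f, t):
    """The functor's action on maps: Ex(f) = id_N + (f x f) + (f x f)."""
    tag, p = t
    return (tag, p) if tag == "lit" else (tag, (f(p[0]), f(p[1])))

def expr_algebra(t):
    """Ex(Expr) -> Expr: the copairing lit + add + mul."""
    tag, p = t
    if tag == "lit":
        return Lit(p)
    return Add(*p) if tag == "add" else Mul(*p)

def int_algebra(t):
    """The Ex-algebra on the integers that calc maps into."""
    tag, p = t
    if tag == "lit":
        return p
    return p[0] + p[1] if tag == "add" else p[0] * p[1]

def square_commutes(t):
    """calc . expr_algebra == int_algebra . Ex(calc), checked at t."""
    return calc(expr_algebra(t)) == int_algebra(ex_map(calc, t))
```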

We leave it to the reader to check the commutativity conditions for these homomorphisms, which are summarised in the diagrams below:

\begin{array}{ccc} Ex({\color{#0000CD} Expr}) & \xrightarrow{\;Ex({\color{#0000CD} calc})\;} & Ex(\mathbb{Z}) \\ {\scriptstyle {\color{#008B00} lit}+{\color{#008B00} add}+{\color{#008B00} mul}}\big\downarrow & & \big\downarrow{\scriptstyle id+(+)+(\cdot)} \\ {\color{#0000CD} Expr} & \xrightarrow{\;{\color{#0000CD} calc}\;} & \mathbb{Z} \end{array} \qquad\qquad \begin{array}{ccc} {\color{#0000CD} Stream}\times{\color{#0000CD} Stream} & \xrightarrow{\;{\color{#0000CD} zip}\;} & {\color{#0000CD} Stream} \\ {\scriptstyle {\color{#0000CD} zipStep}}\big\downarrow & & \big\downarrow{\scriptstyle ({\color{#EE1289} head},\,{\color{#EE1289} next})} \\ \mathbb{N}\times({\color{#0000CD} Stream}\times{\color{#0000CD} Stream}) & \xrightarrow{\;id_{\mathbb{N}}\times{\color{#0000CD} zip}\;} & \mathbb{N}\times{\color{#0000CD} Stream} \end{array}

The function {\color{#0000CD} zipStep} is defined as {\color{#0000CD} zipStep}\:(l, r) = ({\color{#EE1289} head}\:l, (r, {\color{#EE1289} next}\:l)); it equips {\color{#0000CD} Stream} \times {\color{#0000CD} Stream} with an S-coalgebra structure. If we repeatedly use {\color{#0000CD} zipStep}, as in the (pseudo-)Agda code below, we can recover the behaviour of {\color{#0000CD} zip}.

\begin{aligned} & {\color{#0000CD} altZip} : {\color{#0000CD} Stream} \times {\color{#0000CD} Stream} \to {\color{#0000CD} Stream} \\ & {\color{#EE1289} head}\:({\color{#0000CD} altZip}\:(l,r)) = lHead \\ & {\color{#EE1289} next}\:({\color{#0000CD} altZip}\:(l,r)) = {\color{#0000CD} altZip}\:(r, lNext) \\ & \:\:{\color{#CD6600}where}\\ & \:\:\:\: (lHead\:,\:(\_\:,\:lNext)) = {\color{#0000CD} zipStep}\:(l,r) \end{aligned}

So {\color{#0000CD} zipStep} is in a sense equivalent to a single step of the {\color{#0000CD} zip} function. This is not a coincidence! We’ll return to this point shortly when we talk about coinduction.

T-algebras and algebra homomorphisms form a category. We refer to an initial object of this category as an initial algebra. Dually, F-coalgebras and coalgebra homomorphisms form a category, whose final objects we refer to as final coalgebras. Initiality and finality occupy a central role in universal (co)algebra. This is what we explore next.

Take an initial T-algebra (A, a_A) together with an arbitrary T-algebra (U, a_U). By initiality, there exists a unique T-algebra homomorphism f: A \rightarrow U. We say that such a function f is inductively defined. Intuitively, an inductive function f breaks complicated elements of A down into their constituent parts, applies some function to each constituent part, and mixes the results together to obtain an action on the whole element. If you’re an eagle-eyed reader you’ll know that we’ve already encountered a function that behaves this way. Remember how {\color{#0000CD} calc} breaks complicated {\color{#0000CD} Expr}s down into smaller sub-{\color{#0000CD} Expr}s and mixes their results together to get a result for the whole {\color{#0000CD} Expr}? Well, now we know that this behaviour is indicative of the fact that {\color{#0000CD} calc} is inductively defined (note that in this case {\color{#0000CD} Expr} is an initial Ex-algebra [3]).
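Initiality says every Ex-algebra induces a unique map out of {\color{#0000CD} Expr}. In a Python sketch (names ours) this is the familiar fold: calc is exactly the fold induced by the Ex-algebra on the integers, and a different target algebra gives a different inductive function for free.

```python
from dataclasses import dataclass

@dataclass
class Lit:
    n: int

@dataclass
class Add:
    l: object
    r: object

@dataclass
class Mul:
    l: object
    r: object

def fold(alg, e):
    """The unique homomorphism out of the initial algebra: break the
    expression into parts, recurse, then apply the target algebra."""
    if isinstance(e, Lit):
        return alg(("lit", e.n))
    tag = "add" if isinstance(e, Add) else "mul"
    return alg((tag, (fold(alg, e.l), fold(alg, e.r))))

def int_algebra(t):
    """The Ex-algebra on the integers."""
    tag, p = t
    if tag == "lit":
        return p
    return p[0] + p[1] if tag == "add" else p[0] * p[1]

def calc(e):
    return fold(int_algebra, e)       # calc is a fold

def count_lits(t):
    """Another Ex-algebra: its fold counts the literals in an expression."""
    tag, p = t
    return 1 if tag == "lit" else p[0] + p[1]
```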

Now consider the dual scenario, where we have a final F-coalgebra (Z, \alpha_Z) together with an arbitrary F-coalgebra (Y, \alpha_Y). By finality there must be a unique coalgebra homomorphism g: Y \rightarrow Z. We call such a function g coinductively defined. A nice intuition here is to think of (Y, \alpha_Y) as a dynamic system evolving according to the transition structure given by \alpha_Y; the function g is then the result of applying the transition \alpha_Y over and over again. Repeated application of a transition structure might sound awfully familiar to you, as in fact it should. We were only just talking about how {\color{#0000CD} zip} can be thought of as repeated applications of {\color{#0000CD} zipStep}. Again, this is indicative of the fact that {\color{#0000CD} zip} is a coinductive definition, and that {\color{#0000CD} Stream} is a final coalgebra.
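Dually, finality says every S-coalgebra induces a unique map into {\color{#0000CD} Stream}. In a Python sketch (names ours) this is an unfold, and running zipStep through it recovers zip:

```python
class Stream:
    def __init__(self, head, tail):
        self.head, self._tail = head, tail

    def next(self):
        return self._tail()

def unfold(coalg, state):
    """The unique homomorphism into the final coalgebra: apply the
    transition structure coalg : Y -> N x Y over and over again."""
    h, next_state = coalg(state)
    return Stream(h, lambda: unfold(coalg, next_state))

def zip_step(pair):
    """One step of zip: (l, r) |-> (head l, (r, next l))."""
    l, r = pair
    return (l.head, (r, l.next()))

def zip_streams(l, r):
    return unfold(zip_step, (l, r))   # zip is an unfold

def nats(n=0):
    return Stream(n, lambda: nats(n + 1))

def take(k, xs):
    out = []
    for _ in range(k):
        out.append(xs.head)
        xs = xs.next()
    return out
```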

With that, our whirlwind tour comes to a close. We’ve seen how universal algebra gives us tools for exploring the structure of things, while universal coalgebra allows us to explore their behaviour. Together they give us a way to rigorously analyse the duality between structure and behaviour. Earlier in the article we made the rather bold claim that this duality transcends the examples we’ve seen here and goes all the way up to the foundations of thought. We’ll end on a similarly dramatic note by giving you a philosophical question to ponder:

Is a “thing” best defined by its constituent parts (structure), or by its observable actions (behaviour)?

Further Reading

If you’re interested in learning more, there are many facets of the structure vs. behaviour duality that we didn’t have space to talk about: things like congruence vs. bisimulation, minimality vs. simplicity, and freeness vs. cofreeness. A good starting point for further reading is the paper which motivated this blog post, Jan Rutten’s Universal Coalgebra: a theory of systems. In this classic work the author introduces universal coalgebra as an abstract theory for capturing the behaviour of a wide class of dynamic systems [4]. Even though it is a fantastic introduction to universal coalgebra, its focus on duality is arguably limited. A paper that does extensively cover duality is An introduction to (co)algebra and (co)induction, by Bart Jacobs and Jan Rutten. This is also a great article for readers who may be newer to category theory, as the material is presented in set-theoretic language whenever possible. Much of the material from both these papers is now available as part of Jacobs’ introductory book on coalgebras, Introduction to Coalgebra. Finally, since our examples focused on functional programming, we mention that chapter 24 of Bartosz Milewski’s Category Theory for Programmers gives a nice account of how (co)algebras are used in the functional programming language Haskell.

[1] Agda is a functional programming language often used as a theorem prover.

[2] This definition can easily be generalised to endofunctors on an arbitrary category \mathcal{C}.

[3] Proving this fact directly is a rather fun exercise that we leave for the reader. Hint: first build a homomorphism f from {\color{#0000CD} Expr} into an arbitrary Ex-algebra by considering where f could send elements such as {\color{#008B00} lit} 0 or {\color{#008B00} add} ({\color{#008B00} lit} 5) ({\color{#008B00} mul} ({\color{#008B00} lit} 4) ({\color{#008B00} lit} 2)) in the target set. Then assume, for the sake of contradiction, that f is not unique, i.e. send some of the objects to a different place in the target set. You should be able to show that this new function breaks the algebra homomorphism commutativity conditions. Once you’ve proven this, have a go at showing that {\color{#0000CD} Stream} is a final coalgebra.

[4] These so-called dynamic systems are by no means limited to the functional programming examples in this blog post. In fact, the main appeal of universal coalgebra comes from the omnipresence of dynamic systems around us. Labelled transition systems, finite automata, Kripke models arising in modal logic, and even the evolution of quantum systems are just a few examples of such systems.

Posted at May 14, 2021 8:37 PM UTC


25 Comments & 0 Trackbacks

Re: Structure vs. Observation

I already knew about datatype constructors, but I love the idea of datatype destructors. I now imagine that it’s part of a dual theory to constructive mathematics, called destructive mathematics. To be able to proclaim “I am a destructive mathematician” possibly even beats “I am a persistent homologist”.

(Sorry for the non-serious comment… I hope I’m not setting a precedent. BTW, the link to Amin’s web page doesn’t work.)

Posted by: Tom Leinster on May 14, 2021 9:33 PM | Permalink | Reply to this

Re: Structure vs. Observation

There is a “destructive math”, or at least a “destructive set theory”: Aczel’s anti-foundation axiom! The short of it is that in a category of classes, the powerset functor P has an initial algebra: the Von Neumann hierarchy V (of well-founded sets). The set theory in V works by constructing it using powerset and propositions, starting from the empty set. But there is also a terminal coalgebra of P, the class of “anti-founded” sets. In this set theory, you destruct a set by giving an equation defining its elements, something like A = {B, {C}}, etc. But you can have A on both sides of the equation, like A = {A, {A}}, the set which contains itself and a set containing itself.

Posted by: David Jaz Myers on May 15, 2021 12:25 AM | Permalink | Reply to this

Re: Structure vs. Observation

Tom, you held a destructor in your hands back here:

This is David’s pred.

Posted by: David Corfield on May 16, 2021 5:17 PM | Permalink | Reply to this

Re: Structure vs. Observation

Yes. (And your recall of long-ago Café conversations never ceases to amaze me.) It was the name that amused me and was new to me, not the idea itself.

Posted by: Tom Leinster on May 16, 2021 8:55 PM | Permalink | Reply to this

Re: Structure vs. Observation

In the second and third code snippets, what does the line “field” do/mean?

Posted by: Tom Leinster on May 14, 2021 9:40 PM | Permalink | Reply to this

Re: Structure vs. Observation

“field” is a keyword which begins the section where you list all the fields of the record. The fields of a record are the cartesian factors of the endofunctor that it is the terminal coalgebra of.

The reason you have to say it is because you can also give constructors for records in Agda that go before the fields. The constructor is the map you get from the universal property of the cartesian product, telling you how to build an element of the record by mapping into each field (cartesian factor of the endofunctor).

Posted by: David Jaz Myers on May 15, 2021 12:14 AM | Permalink | Reply to this

Re: Structure vs. Observation

Got it. Thanks.

Posted by: Tom Leinster on May 16, 2021 11:27 AM | Permalink | Reply to this

Re: Structure vs. Observation

Thanks, I enjoyed reading that!

On a minor point, is it possible to put in the names of the authors of the papers and books in the Further Reading section?

Posted by: Simon Willerton on May 15, 2021 10:40 AM | Permalink | Reply to this

Re: Structure vs. Observation

Here’s a couple of places where there’s either a typo or else I’m confused!

  • “Consider the endofunctor E which maps a set U to Ex(U)”: are E and Ex the same?

  • “The function zipStep above”: I couldn’t find an earlier reference to zipStep.

  • “Take an initial T-algebra (A, a_a) together with an arbitrary T-algebra (U, a_u). By initiality, there exists a unique T-algebra homomorphism f: A \rightarrow S”: I’m guessing that S = U.

Posted by: Simon Willerton on May 15, 2021 10:41 AM | Permalink | Reply to this

Re: Structure vs. Observation

Related to the zipStep type - the diagram for the commutativity of the zip morphism has the wrong vertical labels, they match the previous diagram. I guess the left has [head, zipStep] and the right has [head, next]

Posted by: Marc G on May 15, 2021 1:56 PM | Permalink | Reply to this

Re: Structure vs. Observation

Thanks for pointing out the typos. I’ll get them fixed!

Posted by: Amin Karamlou on May 15, 2021 5:52 PM | Permalink | Reply to this

Re: Structure vs. Observation

What an interesting twist on ‘structure vs. behavior’ you make! If you had asked me before reading your post which of the two I associate with induction, I would have probably said behavior. Why? I would have contrasted induction with deduction (instead of co-induction). In that setting, induction would have been more about “one step after the next” and deduction would have had a stronger structural flavor, building up knowledge from axioms… So, I think you’ve changed my perspective on the inductive base case - from “starting point of the endless journey/behavior” to “basis of structure”.

Thanks for this food for thought! :D

Posted by: Anna Knörr on May 15, 2021 6:51 PM | Permalink | Reply to this

Re: Structure vs. Observation

So if I am reading this correctly, the definition of the real numbers, whether defined as infinite sequences of rational numbers or of decimals, is coalgebraic in nature?

Posted by: Madeleine Birchfield on May 16, 2021 3:44 PM | Permalink | Reply to this

Re: Structure vs. Observation

Section 4 of my Understanding the infinite II: Coalgebra gives some references of those treating the reals coalgebraically.

Posted by: David Corfield on May 16, 2021 5:26 PM | Permalink | Reply to this

Re: Structure vs. Observation

I’m sure there is a reason for the symmetric language, but isn’t it more common to refer to applying head or next to a Stream, rather than applying Stream to one of the destructors?

Posted by: L Spice on May 16, 2021 5:37 PM | Permalink | Reply to this

Re: Structure vs. Observation

A very nice post! I’ve been thinking about coinductive types a lot myself recently too. One conclusion that I’ve come to is that the common description of the elements of a coinductive type as “infinite data structures” is somewhat misleading, and perhaps partially to blame for their continuing second-class status. True, from the “static” viewpoint of mathematics, the elements of a coinductive type are certainly infinite structures. But from the “dynamic” viewpoint of programming, it seems better to think of them as “servers”, i.e. “programs” that continue interacting forever (rather than computing a value and terminating).

Something of this sort is, of course, implicit when talking about coalgebra as a model for “dynamic systems”, but it seems to me that a lot of references still don’t seem to take it as seriously as could be done. (However, my own reading has surely only scratched the surface of what’s out there about coinduction, and I expect what I’m about to say is well-known to many people.) I was first clued into this viewpoint by a remark in Conor McBride’s “Let’s see how things unfold”:

As a total programmer, I am often asked ‘how do I implement a server as a program in your terminating language?’, and I reply that I do not: a server is a coprogram in a language guaranteeing liveness.

I found this remark rather cryptic at first, but recently I’ve come to appreciate it much more.

Part of the problem is that everybody’s favorite simple example of coinduction, namely streams, is too simple to exhibit the interactive behavior characteristic of a server. Consider instead a more general M-type (non-dependent for simplicity) M = M_{x:A} B, with destructors receive : M \to A and send : M \to B \to M. In static mathematics, we can think of an element of M as an infinite rooted tree where each node is labeled by an element of A and has a family of children labeled by the elements of B. But from a dynamic perspective, it’s better to think of an element of M as a server with an “internal state” that supports two operations: we can receive from it an element of A, or we can send to it an element of B and thereby change its internal state.

The latter viewpoint corresponds more directly, not only to the destructors, but to the corecursor. Specifically, the way we define an element of M is to give a type S (whose elements should be thought of as valid internal states of the server) together with maps r: S \to A (saying what is received from a server in that state) and s: S \to B \to S (saying how the state is updated when we send an input to the server). The principle of coinduction now sounds much more nontrivial: it says that two servers that behave the same, giving the same outputs for all possible input sequences, are equal, even if they have totally different types of internal states.

To me this helps to explain why coinductive types tend to get short shrift from mathematicians. From a “static” viewpoint, it’s harder to justify the need for a whole dual version of mathematics just to talk about infinite structures, when we already have perfectly good ways to talk about infinite structures: a stream of elements of A is just a function \mathbb{N} \to A, and similarly an infinite tree as above is just a function List(B) \to A. But this would be a horrific way to program a server! Can you imagine if the only way a web server could respond to a single request is by being passed, every time, the entire sequence of all the requests it had received up until that point, and replaying from square one all the calculations and updates it had to do in response to them?

Posted by: Mike Shulman on May 17, 2021 5:31 AM | Permalink | Reply to this

Re: Structure vs. Observation

I agree, it’s an amazing quote. This perspective on computer programs is also partly explored in section 6.9 of Jacobs’ Introduction to Coalgebra book (note that a draft version is available online).

I can recall from my only undergrad TCS class, which touched a bit on automata and Turing Machines, that an “algorithm” always needs to terminate. I never quite understood why. As you point out, servers do not necessarily terminate, so why the big fuss with termination?

Of course, it is no coincidence that inductively defined functions necessarily terminate, that mathematicians are more familiar with induction and that all algorithms must be terminating. Like you said, and for whatever reason, coinductive types tend to be dismissed. For instance, there could be a common sentiment that the (coalgebraic) behavior of a machine is simply an implementation detail, or side-effects of, an inductively defined function.

Posted by: Stelios Tsampas on June 2, 2021 12:28 AM | Permalink | Reply to this

Re: Structure vs. Observation

The type signatures of your example coinductive type are very similar to the most basic lens type. Is this a coincidence? Or does it point to the likely proof principle to apply when reasoning about optics?

Posted by: Barnes on June 29, 2021 2:31 PM | Permalink | Reply to this


Interesting! I have a totally silly question: is it at all common to say “apply xx to ff” for the evaluation of a function ff at an input xx? I noticed you’ve used it a few times in this post. I would rather say “apply ff to xx”, and so would (I think?) say most mathematicians I know… Is this what thinking in Polish notation looks like? :-)

Posted by: Ivo on May 17, 2021 12:52 PM | Permalink | Reply to this


Re: Ivo

Oh, I see L Spice made the same comment.

Posted by: Ivo on May 17, 2021 12:54 PM | Permalink | Reply to this

Re: Structure vs. Observation

The vertical arrows on the “zip is a coalgebra homomorphism” diagram seem to be mislabelled.

Posted by: Nikita on May 25, 2021 5:55 PM | Permalink | Reply to this

Thanks for pointing it out. This should be fixed now.

Posted by: Amin on June 2, 2021 5:23 PM | Permalink | Reply to this

Re: Structure vs. Observation

Thanks for this thought-provoking post. I took a look at Introduction to Coalgebra and it seemed to me that there are other dualities present, which might of course be aspects of the one presented here.

Jacobs starts with the special case where the algebra is the action of a set of symbols on a state space and the coalgebra reflects the evolution of the state space and the labels of its internal states. In other words, this is apt for talking about the progress of a computation driven by inputs on the one hand and producing outputs on the other. In the special context of finite state machines or read-only Turing machines this is a self-duality: the languages recognised by such a machine are precisely the languages generated by such a machine. But this leads me to think about two further points. Firstly, the generation model is inherently non-deterministic: we consider all possible paths through the state space which end at an acceptance node: even if there is a single path per symbol out of a state, there is no determinism in selecting the path to be taken. Secondly, both the acceptance and the generation model have a non-deterministic analogue, in that there may be multiple paths out of a state with the same label, and again any path that leads to an acceptance node denotes validity of the sequence of labels. The beauty of the FSM theory is that these determine the same set of languages.
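The agreement between the acceptance and generation models can be seen in a toy Python sketch (the automaton and all names are chosen purely for illustration):

```python
# A tiny NFA over {'a', 'b'}: note the nondeterminism at state 0 on 'a'.
# The language it determines is "words ending in ab".
delta = {
    (0, 'a'): {0, 1},
    (0, 'b'): {0},
    (1, 'b'): {2},
}
start, accept = 0, {2}

def accepts(word):
    """Acceptance: track all reachable states; succeed if any is accepting."""
    states = {start}
    for sym in word:
        states = set().union(*(delta.get((s, sym), set()) for s in states))
    return bool(states & accept)

def generates(max_len):
    """Generation: collect the label sequences of all paths into an accept node."""
    out, frontier = set(), [(start, '')]
    while frontier:
        s, w = frontier.pop()
        if s in accept:
            out.add(w)
        if len(w) < max_len:
            for (p, sym), qs in delta.items():
                if p == s:
                    for q in qs:
                        frontier.append((q, w + sym))
    return out

# Both models determine the same set of words (checked here up to length 3).
words = [a + b + c for a in 'ab' for b in 'ab' for c in 'ab']
assert {w for w in words if accepts(w)} == {w for w in generates(3) if len(w) == 3}
```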

Thinking about non-determinism led me on to thinking about coalgebras more generally. I think it was Rota who first advocated for encoding assembly by multiplications and disassembly by comultiplications in the algebraic theory of combinatorics. From this point of view, the result of a comultiplication should be simply the sum (in a suitable sense) of all possible disassemblies of an object: in the computational setting, then, a comultiplication should yield the sum of all possible successor states of a given state.
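A concrete instance of that Rota-style picture is the deconcatenation coproduct on words, where the “sum of all possible disassemblies” is just the list of all splittings (a sketch, with illustrative names):

```python
# Deconcatenation "comultiplication": a word maps to the formal sum
# (here, the list) of all ways to disassemble it into two pieces.
def deconcat(word):
    return [(word[:i], word[i:]) for i in range(len(word) + 1)]

assert deconcat("ab") == [("", "ab"), ("a", "b"), ("ab", "")]
```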

Since a practical computation is likely to have both inputs and outputs, presumably there is some sort of consistency condition that is reflected algebraically in the existence of a bialgebra structure: is this the natural setting? To return to the FSM setting for a moment, there is an algebraic formulation of the Myhill–Nerode Theorem on languages accepted or generated by FSM in terms of the existence of a bialgebra structure: this is presented in Underwood’s Fundamentals of Hopf algebras. The title of that book reminds me that Rota would have looked not only for a bialgebra but a Hopf algebra structure in this setting. What would the antipode be? In the combinatorial setting I think of it as inclusion-exclusion or Moebius inversion, but it would be interesting to know what it means computationally.

Posted by: Richard Pinch on June 4, 2021 11:01 AM | Permalink | Reply to this

Re: Structure vs. Observation

I’ll follow up my own comment, if I may, with a further reflection on duality in general. The original post is about a duality between two things, namely “structure” and “observation”, and I start off by thinking that this means there is some sort of two-variable map from pairs of (structure, observation) to, er, something. The usual algebraic interpretation uses an exponential law to turn this into a pair of maps: from structures to maps from observations to something, and from observations to maps from structures to something, thus exhibiting each of the two things in terms of “the” dual of the other. Such a duality might then give rise to a Galois correspondence, in turn giving rise to some sort of closure operators. All of these seem as if they might give rise to useful ideas.

But even in the well-known algebraic setting, there’s difficulty with the concept of “the” dual. When there’s some sort of rigidity, such as for finite-dimensional vector spaces over a field or Hilbert spaces, the duality is symmetric, and this is expressed by saying that the original space is canonically isomorphic to its double dual. Otherwise, the dual seems to be, somehow, “bigger” than the original, and this works against the duality-between formulation.

We can see this in the specific case of algebras and coalgebras in the case of vector spaces over a field with additional structure. The linear dual of a coalgebra is an algebra, but to construct a coalgebra by taking the linear dual of an algebra requires an additional finiteness condition.
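To make the asymmetry concrete (a standard linear-algebra sketch, included here just as a reminder): dualizing a comultiplication $\Delta : C \to C \otimes C$ always yields a multiplication on $C^*$, because there is a canonical inclusion $C^* \otimes C^* \hookrightarrow (C \otimes C)^*$, so one can form
$$C^* \otimes C^* \hookrightarrow (C \otimes C)^* \xrightarrow{\;\Delta^*\;} C^*.$$
In the other direction, dualizing a multiplication $m : A \otimes A \to A$ only gives $m^* : A^* \to (A \otimes A)^*$, and this lands in $A^* \otimes A^*$ precisely when that inclusion is an isomorphism, for instance when $A$ is finite-dimensional.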

Is an analogous finiteness requirement reflected in the structure/observation duality?

Posted by: Richard Pinch on June 10, 2021 9:08 AM | Permalink | Reply to this
