Coalgebraically Thinking
Posted by David Corfield
Having come across Peter Freyd’s coalgebraic characterisation of the reals, I’ve been spending some time of late trying to understand coalgebras. Where an algebra for an endofunctor T: C → C is an object X of C together with an arrow T X → X, a coalgebra for T requires an object X with an arrow X → T X.
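In code the duality is plain to see. Here is a minimal Python sketch (the encoding of T and the function names are my own, purely for illustration):

```python
# The functor T(X) = 1 + A x X, encoded as: None (the "1" summand) or a pair (a, x).

def sum_algebra(t):
    """An algebra T X -> X on X = int: consume one layer of structure."""
    if t is None:
        return 0
    a, x = t
    return a + x

def countdown_coalgebra(n):
    """A coalgebra X -> T X on X = int: produce one layer of structure."""
    return None if n == 0 else (n, n - 1)

# The arrows literally point in opposite directions:
print(sum_algebra((5, 10)))      # 15      (T X -> X: consume)
print(countdown_coalgebra(3))    # (3, 2)  (X -> T X: produce)
```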
I’ve traced out what seemed to me an interesting path. First I stumbled upon Bart Jacobs’s book Introduction to Coalgebra: Towards Mathematics of States and Observations. This I’d thoroughly recommend. Let me give you some nuggets from it:
The duality with algebras forms a source of inspiration and of opposition: there is a “hate-love” relationship between algebra and coalgebra. (p. v)
As already mentioned, ultimately, stripped to its bare minimum, a programming language involves both a coalgebra and an algebra. A program is an element of the algebra that arises (as so-called initial algebra) from the programming language that is being used. Each language construct corresponds to certain dynamics, captured via a coalgebra. The program’s behaviour is thus described by a coalgebra acting on the state space of the computer. (p. v)
There are many similarities (or dualities) between algebras and coalgebras which are often useful as guiding principles. But one should keep in mind that there are also significant differences between algebra and coalgebra. For example, in a computer science setting, algebra is mainly of interest for dealing with finite data elements – such as finite lists or trees – using induction as main definition and proof principle. A key feature of coalgebra is that it deals with potentially infinite data elements, and with appropriate state-based notions and techniques for handling these objects. Thus, algebra is about construction, whereas coalgebra is about deconstruction – understood as observation and modification. (p. 47)
A rule of thumb is: data types are algebras, and state-based systems are coalgebras. But this does not always give a clear-cut distinction. For instance, is a stack a data type or does it have a state? In many cases however, this rule of thumb works: natural numbers are algebras (as we are about to see), and machines are coalgebras. Indeed, the latter have a state that can be observed and modified. (pp. 47-8)
Now, just as initial algebras are a vital part of mathematics, so a key property for coalgebras is finality.
Initial algebras are special, just like final coalgebras. Initial algebras (in Sets) can be built as so-called term models: they contain everything that can be built from the operations themselves, and nothing more. Similarly, we saw that final coalgebras consist of observations only. (p. 48)
If a functor has an initial algebra or final coalgebra, then its carrier is isomorphic to its image under the functor (this is Lambek’s lemma). Indeed, an initial algebra is like a minimal fixed point of the functor, whereas a final coalgebra is like a maximal fixed point.
Take the functor T(X) = 1 + A × X, for a fixed set A. Searching for the smallest fixed point, we put in as little as possible. So we build up from the empty set, taking any existing set of lists to a new set which includes the empty list and all lists generated by adding an element of A to the old lists. All finite lists of elements of A are generated this way and nothing else, so these lists form the elements of the initial algebra for T. But a larger set is fixed by T, namely, the set of A-streams, or finite and infinite lists of elements of A. This in fact is the final coalgebra.
Coalgebras are about observation. We can think of our T as observing about an entity whether it contains something A-detectable or not, and if so which element of A it detects. Having observed something, it modifies the entity. The final coalgebra has as elements all possible outcomes of the behaviour you might observe. Do you still have list elements? If ever no, we have a finite list. If always yes, we have an infinite list. And there’s no other behaviour that can be detected.
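To make the behavioural reading concrete, here is a hedged Python sketch (names are my own): running a coalgebra X → 1 + A × X from a seed yields its behaviour, a finite or infinite list of observations, which a generator captures lazily.

```python
from itertools import islice

def run(coalg, seed):
    """Unfold a coalgebra X -> 1 + A x X into its behaviour:
    a (possibly infinite) stream of observations, as a generator."""
    t = coalg(seed)
    while t is not None:
        a, seed = t
        yield a
        t = coalg(seed)

# A seed with finite behaviour: count down to the empty list.
finite = run(lambda n: None if n == 0 else (n, n - 1), 3)

# A seed with infinite behaviour: the constant stream of 1s.
infinite = run(lambda n: (1, n), 0)

print(list(finite))                           # [3, 2, 1]
print(list(islice(infinite, 5)))              # [1, 1, 1, 1, 1]
```

Both behaviours are elements of the final coalgebra for T(X) = 1 + A × X; the unfolding is the unique map into it.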
Now, this talk of maximality put me in mind of a chapter in a book edited by Donald Gillies, my PhD supervisor, called Revolutions in Mathematics. In ‘A Restoration that Failed’, Herbert Breger argues that where Hilbert presented his style of mathematics as the established norm, it was in fact a very novel way of proceeding. Apparently Finsler (of Finsler geometry fame) promoted an older style of mathematics, where one captures what is taken to be an already existing system by writing a list of properties satisfied by that system, such that the system is maximal with respect to the properties.
When it comes to sets then, rather than worry about the set-theoretic paradoxes by controlling their construction, building them bottom up from the empty set and various constructions, Finsler instead defines the system of sets to be that collection which is maximal with respect to certain properties. Breger tells us the story of how by 1928 Finsler’s approach was thoroughly misunderstood, allowing Reinhold Baer to dismiss Finsler’s set theory in a four-page paper as inconsistent. Baer showed that any system satisfying Finsler’s axioms can be extended, hence that there is no maximal such system. But in doing so he was using different rules of set formation from Finsler’s.
To Zermelo, Fraenkel, Baer, and others, the idea of definite property is inseparably connected with the notion of set, the underlying philosophy being that well-formed definitions create the object.
Finsler is left to face a lonely battle, and we hear about him still writing in 1969, the year before his death, on how the continuum hypothesis is still open in his set theory, in what he calls ‘classical mathematics’. The neglected Finsler doesn’t even have a Wikipedia entry, and the MacTutor history is as brief as possible.
But there’s no need to feel too sorry for Finsler as his ideas, or something similar, did later see the light of day. In his Non-well-founded Sets, Peter Aczel describes a range of ways of developing a set theory which does not obey well-foundedness, the condition that membership chains must be finite.
If this, as Finsler puts it, is “classical mathematics”, perhaps it is odd that Aczel was inspired by computer science:
The original stimulus for my own interest in the notion of a non-well-founded set came from a reading of the work of Robin Milner in connection with his development of a mathematical theory of concurrent processes. (p. xix)
On the other hand, perhaps we could say that computer scientists are dealing with already existing systems – the program-running computers before them.
Certainly we hear the sound of maximality in Aczel’s book:
Thus, in the case of the axiom system for the natural numbers, the extremal axiom is the principle of mathematical induction, which is a minimalisation axiom, as it expresses that no objects can be subtracted from the domain of natural numbers while keeping the truth of the other axioms. The axiom systems for Euclidean Geometry and the real numbers involve on the other hand completeness axioms. These are maximalisation axioms; i.e. they express that the domain of objects cannot be enlarged while preserving the truth of the other axioms. (p. 106)
Aczel proposes his own anti-foundation axiom, AFA, and discusses it in relation to other such axioms, including FAFA (for Finsler) and SAFA (for Dana Scott). He notes that
It is surprising that it has taken over 50 years for this “success” to come about, whereas Fraenkel only had to wait a handful of years. It is worth recording here that Finsler’s axiom system uses a notion of isomorphism of sets which is different to the one introduced by Mirimanoff. If he had used Mirimanoff’s notion the resulting anti-foundation axiom would have been what I have called Scott’s AFA. (p. 107)
But what has this to do with coalgebra? Well, there’s a project called algebraic set theory. In his introduction to the field, Steve Awodey writes:
The new insight taken as a starting point in AST is that models of set theory are in fact algebras for a suitably presented algebraic theory, and that many familiar set theoretic conditions (such as well-foundedness) are thereby related to familiar algebraic ones (such as freeness).
So what about the coalgebras? Well, consider the functor P_s from the category of classes to itself, which sends a class X to the class of those subclasses of X which are sets. This restriction must be made, otherwise there would be no fixed points. Now, this functor has as initial algebra the class of well-founded sets and as final coalgebra the class of non-well-founded sets. An excellent article here is Rutten and Turi’s ‘On the Foundations of Final Semantics: Non-Standard Sets, Metric Spaces, Partial Orders’.
So now I’m left wondering what would be possible were we to follow Finsler and opt for a more coalgebraic approach to mathematics. There’s plenty of coalgebraic thinking in mathematics, and you can do no better than to pay a visit to Tom Leinster’s coalgebraic topology for examples, including the category of strict ω-categories as a terminal coalgebra for an enrichment functor on categories with finite products.
Then Pavlovic and Escardó tell us about Calculus in Coinductive Form:
Coinduction is often seen as a way of implementing infinite objects. Since real numbers are typical infinite objects, it may not come as a surprise that calculus, when presented in a suitable way, is permeated by coinductive reasoning. What is surprising is that mathematical techniques, recently developed in the context of computer science, seem to be shedding a new light on some basic methods of calculus. We introduce a coinductive formalization of elementary calculus that can be used as a tool for symbolic computation, and geared towards computer algebra and theorem proving. So far, we have covered parts of ordinary differential and difference equations, Taylor series, Laplace transform and the basics of the operator calculus.
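In that coinductive spirit, here is a small Python sketch (my own encoding, not Pavlovic and Escardó’s formalism): represent an analytic function by the stream of its Taylor coefficients; differentiation shifts the stream, integration conses onto it, and exp can be defined coinductively as its own integral.

```python
from fractions import Fraction
from itertools import islice

def deriv(coeffs):
    """Stream derivative: [a0, a1, a2, ...] -> [1*a1, 2*a2, 3*a3, ...]."""
    it = iter(coeffs)
    next(it)  # drop a0
    for n, a in enumerate(it, start=1):
        yield n * a

def integral(c, coeffs):
    """Stream integral with constant c: [a0, a1, ...] -> [c, a0/1, a1/2, ...]."""
    yield c
    for n, a in enumerate(coeffs, start=1):
        yield Fraction(a, n)

def exp_series():
    """f = integral of f with f(0) = 1, i.e. exp, defined coinductively."""
    # Tie the knot lazily: each request for a further term re-enters exp_series().
    yield from integral(1, exp_series())

print(list(islice(exp_series(), 6)))
# [1, Fraction(1, 1), Fraction(1, 2), Fraction(1, 6), Fraction(1, 24), Fraction(1, 120)]
print(list(islice(deriv(exp_series()), 3)))
# [Fraction(1, 1), Fraction(1, 1), Fraction(1, 2)]
```

That the derivative stream agrees with the original stream is the coinductive statement that exp′ = exp.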
But do algebraic blinkers prevent us from doing more coalgebraic thinking? Jacobs tells us that
Coalgebra is still in its infancy. When it grows up it is expected to help unite the theory of differential equations with automata and process theory.
While we’re waiting we can enjoy playing the Game of Life coalgebraically.
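For instance, one way to see Life coalgebraically (a sketch under my own encoding, not necessarily the linked post’s): take the state space to be sets of live cells and the structure map a coalgebra for F(X) = Obs × X, observing, say, the population before stepping.

```python
from itertools import islice

def neighbours(cell):
    x, y = cell
    return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def life_coalgebra(alive):
    """State -> (observation, next state): a coalgebra for F(X) = Obs x X."""
    candidates = set(alive) | {n for c in alive for n in neighbours(c)}
    nxt = frozenset(c for c in candidates
                    if len(neighbours(c) & alive) == 3
                    or (c in alive and len(neighbours(c) & alive) == 2))
    return len(alive), nxt  # observe the live-cell count, then step

def behaviour(coalg, state):
    """The unique map into the final coalgebra: the stream of observations."""
    while True:
        obs, state = coalg(state)
        yield obs

blinker = frozenset({(0, 0), (1, 0), (2, 0)})
print(list(islice(behaviour(life_coalgebra, blinker), 4)))  # [3, 3, 3, 3]
```

Two starting grids are behaviourally equivalent exactly when they produce the same observation stream, which is the coalgebraic notion of bisimilarity.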
Re: Coalgebraically Thinking
Interesting. This is something I should try to better understand.
I definitely need to find a copy of
Peter J. Freyd, ‘Path Integrals, Bayesian Vision, and Is Gaussian Quadrature Really Good?’, Electronic Notes in Theoretical Computer Science 29 (1999)
In physics one notices that globally hyperbolic Lorentzian manifolds naturally carry the structure of a poset (where for points p and q, we have p ≤ q if and only if “q is in the future of p”) and notices that this underlying poset structure remembers a surprising amount of information: every map between globally hyperbolic Lorentzian manifolds which induces an isomorphism of the underlying posets is automatically a conformal isometry of Lorentzian spaces.
In other words: the poset structure on a Lorentzian manifold remembers everything about its pseudo-Riemannian geometry except the volume density.
In this sense, posets which have a top and a bottom element, i.e. which have one point that is in the future of all other points and one point which is in the past of all other ones, appear in particular as the causal subsets on which the co-presheaves of observable algebras of the quantum field theory on our Lorentzian spacetime are defined.
Observations like this have made some people play with the idea that all causal subsets of spacetime might have finite cardinality (third item in the definition of a causal set here).
But in any case Peter Freyd’s theorem tells us that the continuum is nicely encoded in the poset language itself:
the endofunctor on posets with distinct top and bottom element which takes such a poset and attaches a copy of it to itself (gluing the top of the first copy to the bottom of the second)
has as “maximal fixed point”, i.e. as final coalgebra, the interval [0, 1] with the continuum (2^ℵ₀) of points in it.
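A hedged Python sketch of why the interval qualifies: the coalgebra structure on [0, 1] reports which half a point lies in and rescales, so unfolding it reads off a binary expansion; the points of the final coalgebra are exactly these behaviours, with Freyd’s gluing supplying the usual identification of 0.0111… with 0.1000….

```python
from itertools import islice

def interval_coalgebra(x):
    """[0, 1] -> [0, 1] glued to [0, 1]: report which half x lies in, rescale."""
    if x < 0.5:
        return 0, 2 * x          # lower copy
    return 1, 2 * x - 1          # upper copy

def binary_expansion(x):
    """Unfold the coalgebra: the behaviour of a point is its binary expansion."""
    while True:
        digit, x = interval_coalgebra(x)
        yield digit

print(list(islice(binary_expansion(0.625), 4)))   # 0.101 in binary: [1, 0, 1, 0]
```

(Floats chosen here, like 0.625, are exactly representable in binary, so the expansion is exact.)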
That’s nice.