

September 1, 2010

Bimonoids from Biproducts

Posted by John Baez

Today Jacob Biamonte and I were talking about Boolean circuits and we came across a cute fact I hadn’t noticed before. I’ll explain it, and then ask you to help me find a really slick proof.

But let’s start at the beginning. Suppose V is a vector space. There’s a “diagonal map”

\Delta : V \to V \oplus V

which goes like this:

\Delta(v) = (v,v)

There’s also a “codiagonal map”

\nabla : V \oplus V \to V

which goes like this:

\nabla(v,w) = v + w

Yeah, it’s just addition.

Now say we draw the diagonal map as a green blob with one input and two outputs, and the codiagonal map as a red blob with two inputs and one output. Then this law holds:
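In symbols (my transcription of the diagram, writing \sigma for the braiding that swaps the two middle summands), the law reads:

```latex
(\nabla \oplus \nabla) \circ (1_V \oplus \sigma \oplus 1_V) \circ (\Delta \oplus \Delta)
  \;=\; \Delta \circ \nabla
  \;:\; V \oplus V \to V \oplus V
```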

Why?

Well, one reason is that it’s just true. Let’s look at it again:

On the left, we’re taking two vectors, duplicating each to get a total of four vectors, then swapping the middle two, then adding the first two and adding the last two:

(v,w) \mapsto (v,v,w,w) \mapsto (v,w,v,w) \mapsto (v+w, v+w)

On the right we’re taking two vectors, adding them, then duplicating the result:

(v,w) \mapsto v+w \mapsto (v+w, v+w)

So yeah, we’re getting the same answer: the law holds.
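These two computations are easy to machine-check; here is a sketch in Python, with vectors modelled as tuples of numbers under componentwise addition (the function names are mine, not from the post):

```python
# Sanity check of the law: "duplicate both, swap the middle two, add in pairs"
# equals "add, then duplicate the result".

def add(v, w):
    return tuple(a + b for a, b in zip(v, w))

def delta(v):           # diagonal: V -> V (+) V
    return (v, v)

def nabla(v, w):        # codiagonal: V (+) V -> V, i.e. addition
    return add(v, w)

def lhs(v, w):
    a, b = delta(v)     # (v, v)
    c, d = delta(w)     # (w, w)
    a, b, c, d = a, c, b, d   # swap the middle two wires
    return (nabla(a, b), nabla(c, d))

def rhs(v, w):
    return delta(nabla(v, w))

v, w = (1, 2), (10, 20)
assert lhs(v, w) == rhs(v, w) == ((11, 22), (11, 22))
```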

But that’s just the shallowest layer of understanding. We can get a bit deeper if we notice that the law we’re talking about is the compatibility condition for the multiplication and comultiplication in a bialgebra — or more generally, a bimonoid.

I suppose I should explain that a bit, though I seem to have said this a million times:

A monoid is an object with an associative multiplication and a unit. We can define a monoid in any monoidal category.

If M is a monoidal category, so is the opposite category M^{op}. A comonoid in M is defined to be a monoid in M^{op}. In other words: a comonoid is an object with a coassociative comultiplication and a counit.

A bimonoid is a gadget that’s both a monoid and a comonoid, where the monoid operations are comonoid homomorphisms — or equivalently, the comonoid operations are monoid homomorphisms! If you work out what this means, you’ll see that this law needs to hold:

where the green blob is the comultiplication, and the red blob is the multiplication. (Since we’re switching two wires, the concept of bimonoid only makes sense in a braided monoidal category.)

Now, can we use this abstract nonsense to understand what’s going on?

Yeah!

(You thought I was gonna say “No”? Then you were sorely wrong, my friend. That’s not how it works around here.)

Here are two famous facts:

  • Any object in a category with finite products becomes a comonoid in a unique way.
  • Any object in a category with finite coproducts becomes a monoid in a unique way.

These facts are dual to each other, so we just need to understand one and we get the other for free. A category with finite products is just one with a terminal object where every pair of objects has a product. Given this, every object V comes with a diagonal map \Delta : V \to V \times V and a map to the terminal object \epsilon : V \to 1, and these make that object into a comonoid. With some more work, you can check that this is the only way to make it into a comonoid.

Dually, a category with finite coproducts is just one with an initial object where every pair of objects has a coproduct. Given this, every object V comes with a codiagonal map \nabla : V + V \to V and a map from the initial object \iota : 0 \to V, and these make that object into a monoid… and that’s the only way to do that.
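In Vect these comonoid axioms can be checked as concrete matrix identities; here is a minimal numpy sketch (the encoding of \Delta and \epsilon as matrices, and the names, are my own):

```python
import numpy as np

# Comonoid axioms for (Delta, eps) in Vect, with V = R^3.
n = 3
I = np.eye(n)
Delta = np.vstack([I, I])   # diagonal map V -> V x V, a 6 x 3 matrix
eps = np.zeros((0, n))      # the unique map from V to the 0-dimensional space

def dsum(P, Q):
    """Direct sum of linear maps, as a block-diagonal matrix."""
    out = np.zeros((P.shape[0] + Q.shape[0], P.shape[1] + Q.shape[1]))
    out[:P.shape[0], :P.shape[1]] = P
    out[P.shape[0]:, P.shape[1]:] = Q
    return out

# coassociativity: (Delta x 1) Delta = (1 x Delta) Delta
assert np.array_equal(dsum(Delta, I) @ Delta, dsum(I, Delta) @ Delta)

# counit laws: (eps x 1) Delta = 1 = (1 x eps) Delta
assert np.array_equal(dsum(eps, I) @ Delta, I)
assert np.array_equal(dsum(I, eps) @ Delta, I)
```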

But the category of vector spaces is particularly symmetrical and nice. The initial object is also the terminal object, so we call it a zero object. And the product of two objects is also their coproduct — and even better, a compatibility condition holds which makes us call the result a biproduct!

(The biproduct is what normal mortals often call the ‘direct sum’. Remember, we started out talking about vector spaces and direct sums. We still are.)

Let’s call a category with a zero object and where every pair of objects has a biproduct a category with finite biproducts. Then here’s the cute fact I hadn’t known:

  • Any object in a category with finite biproducts becomes a bimonoid in a unique way.

It’s an easy calculation.

But here’s what I’m wondering now. First of all: people must already know this. James Dolan says he knew it. Does anyone know a good reference?

But secondly: is there an ultra-elegant way of deriving this cute fact from the two famous facts I listed before, using only abstract nonsense?

For example, I know that a bimonoid is a monoid in the category of comonoids, and vice versa. I also know that every category with finite biproducts is automatically enriched over the category of commutative monoids! Can you use ideas like this, together with the two famous facts listed above, to get the cute fact I learned today?

I should add that Jacob and I started out by thinking about vector spaces over the field with 2 elements. Then 0 is called “false”, 1 is called “true”, addition is called “exclusive or”, and this law:

is a fact about logic gates — or in other words, Boolean circuits. You can see this law drawn as a picture in figure 13 of Yves Lafont’s paper “Towards an algebraic theory of Boolean circuits”, which dates back to 2003. It was trying to understand this paper that led us down this train of thought.
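Concretely, taking V to be the field with 2 elements itself, duplication is fanout and addition is XOR, so the law becomes an identity of Boolean circuits. A sketch in Python (bits as 0/1 ints, names my own):

```python
# Over F_2 the bialgebra law says: fanout-then-XOR-in-pairs
# equals XOR-then-fanout.

def xor(x, y):
    return x ^ y

def lhs(x, y):
    # duplicate both bits, swap the middle two wires, XOR in pairs
    a, b, c, d = x, x, y, y
    a, b, c, d = a, c, b, d
    return (xor(a, b), xor(c, d))

def rhs(x, y):
    z = xor(x, y)
    return (z, z)

# check all four input combinations
assert all(lhs(x, y) == rhs(x, y) for x in (0, 1) for y in (0, 1))
```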

Posted at September 1, 2010 12:02 PM UTC

TrackBack URL for this Entry:   https://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/2269

28 Comments & 1 Trackback

Re: Bimonoids from Biproducts

As soon as I posted this and walked home through the sultry Singapore night, I realized why I’d gotten stuck in my attempts to find an ultra-elegant abstract nonsense proof of this fact:

  • Any object in a category with finite biproducts becomes a bimonoid in a unique way.

I had been looking for one such proof — but in fact there are two, related by duality. And they’re both a lot more trivial than I was expecting.

Perhaps this clue will make the puzzle more fun without giving away my answer.

Posted by: John Baez on September 1, 2010 1:42 PM | Permalink | Reply to this

Re: Bimonoids from Biproducts

So on every vector space there’s a unique bialgebra structure? Why does one mention them then? I mean you don’t talk about the unique comonoid structure on a set very much as it’s trivial.

Posted by: David Corfield on September 1, 2010 1:49 PM | Permalink | Reply to this

Re: Bimonoids from Biproducts

I think it’ll be more fun to let someone else tackle this question.

Posted by: John Baez on September 1, 2010 2:54 PM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Why does one talk about them? Do you mean why does John talk about them?

It’s important to bear in mind that John is using \oplus for his monoidal product on vector spaces, whereas the usual monoidal product is \otimes.

[I’m holding off on answering John’s question to let others have a think about it.]

Posted by: Simon Willerton on September 1, 2010 3:09 PM | Permalink | Reply to this

Re: Bimonoids from Biproducts

I meant why does anyone talk about them, but that, of course, includes John.

Posted by: David Corfield on September 1, 2010 4:33 PM | Permalink | Reply to this

Re: Bimonoids from Biproducts

I think that whenever He gives us an interesting algebraic structure, even if it arises ‘trivially’, there’s often going to be something useful you can do with it! At least it’s worth considering. Wouldn’t you agree?

For example, you say that the unique comonoid structure on a set is trivial. But together with a monoid on the same set, the structures interact to form a bimonoid. There’s a canonical monoidal faithful functor from Set to Hilb, and we can use this to transfer our bimonoid into Hilb. But bimonoids in Hilb have monoidal categories of modules – and so this tells us that our original monoid will have a monoidal category of representations.

Maybe you would say this is obvious anyway, but at least this gives a categorical way to see why it’s true.

Posted by: Jamie Vicary on September 1, 2010 5:14 PM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Jamie wrote:

I think that whenever He gives us an interesting algebraic structure

By ‘He’, do you mean John or God?

Posted by: Tom Leinster on September 1, 2010 7:13 PM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Me, of course.

Posted by: God on September 2, 2010 1:55 AM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Simon answered David’s question.

David says:

So on every vector space there’s a unique bialgebra structure?

Simon says:

It’s important to bear in mind that John is using \oplus for his monoidal product on vector spaces, whereas the usual monoidal product is \otimes.

So: a bialgebra is a vector space with a map V \otimes V \to V and a map V \to V \otimes V, fitting together nicely to make a bimonoid in the monoidal category (Vect, \otimes). There are usually lots of ways to make a vector space into a bialgebra.

But I’m talking about a vector space with a map V \oplus V \to V and a map V \to V \oplus V, fitting together nicely to make a bimonoid in the monoidal category (Vect, \oplus). There is always exactly one way to make a vector space into this sort of bimonoid.

And I’m trying to get you guys to cough up the two really elegant (indeed, trivial) proofs of this last fact. Jamie proved almost everything except the part I was really interested in, namely this law:

Posted by: John Baez on September 2, 2010 1:40 AM | Permalink | Reply to this

Re: Bimonoids from Biproducts

I’ve used diagrams like this. You can use them to represent algorithms that evaluate linear operations in terms of primitive linear operations. Turning one of these diagrams upside down gives you an algorithm to compute the adjoint operation, e.g. the FFT. Sometimes if you have a fast algorithm for something useful, then the upside-down algorithm is a fast non-obvious algorithm for something else useful.

Posted by: Dan P on September 1, 2010 3:44 PM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Hi Dan, that sounds interesting – can you give an example?

Posted by: Jamie Vicary on September 1, 2010 4:16 PM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Here’s a brief sketch of how the diagrams can be thought of:

Assume some fixed basis so adjoints are just transposes.

The green node is the diagonal map. This has matrix (1 1)^t. I.e. (x x)^t = (1 1)^t (x).

The red node adds two variables. This has matrix (1 1). I.e. (x+y) = (1 1)(x y)^t.

So turning a matrix upside down gives the transpose.

Also allow lines to be labelled by values from the base field. These are scalar multiplication. The transpose of one of these is the same.

We now have all the operations we need for a vector space: addition, scalar multiplication and duplication. (They don’t teach that last one at school - until you meet Hopf algebras.) So any linear operation made up of these operations can be turned upside down to get its transpose.
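Dan’s dictionary can be checked mechanically: build a diagram by composing the primitive matrices, and the upside-down diagram (each primitive transposed, composed in reverse order) is the transpose. A small numpy sketch, with hypothetical names of my own:

```python
import numpy as np

# Dan's primitives for 1-dimensional wires:
DUP = np.array([[1.0], [1.0]])   # green node: duplication, matrix (1 1)^t
ADD = np.array([[1.0, 1.0]])     # red node: addition, matrix (1 1)

def SCALE(c):                    # labelled wire: multiply by the scalar c
    return np.array([[float(c)]])

def dsum(P, Q):
    """Run two maps side by side: block-diagonal matrix."""
    return np.block([[P, np.zeros((P.shape[0], Q.shape[1]))],
                     [np.zeros((Q.shape[0], P.shape[1])), Q]])

# A diagram read bottom-to-top: duplicate x, then scale the copies by 2 and 3.
# The composite computes x |-> (2x, 3x).
forward = dsum(SCALE(2), SCALE(3)) @ DUP

# The upside-down diagram: each primitive transposed, composed in reverse
# order. It computes (x, y) |-> 2x + 3y, the transpose of `forward`.
upside_down = DUP.T @ dsum(SCALE(2).T, SCALE(3).T)

assert np.array_equal(DUP.T, ADD)   # transposing duplication gives addition
assert np.array_equal(upside_down, forward.T)
```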

An actual example of turning an algorithm upside down like this is here. (It’s written in a very practical C oriented style to make it accessible to graphics programmers.) I don’t draw the diagrams explicitly there but I hope you can see what I mean in your mind’s eye. I didn’t use diagrams in the paper because the actual diagram for the given algorithm is way too big to fit on a page. But the diagrams are useful when trying to understand how the primitive linear operations fit together. The code fragments in table 1 correspond to easily drawn diagrams. The algorithm I derived in the paper appears to be a novel twist on a familiar one.

Butterfly diagrams for the FFT are a similar kind of diagram. In this case, the adjoint of the FFT is basically just the FFT again, so turning the operation upside down doesn’t get you anything new.
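The FFT remark can be seen at the matrix level: the DFT matrix has entries \omega^{jk}, which is symmetric in j and k, so transposing (turning the butterfly diagram upside down) gives back the same map; the true adjoint differs only by conjugation. A numpy check:

```python
import numpy as np

# The (unnormalised) DFT matrix: W[j, k] = exp(-2*pi*i*j*k / N).
# Since j*k is symmetric in j and k, W equals its own transpose.
N = 8
J, K = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
W = np.exp(-2j * np.pi * J * K / N)

assert np.allclose(W, W.T)                          # transpose is the DFT again
assert np.allclose(W.conj().T @ W, N * np.eye(N))   # adjoint is N times the inverse
```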

Posted by: Dan Piponi on September 2, 2010 5:02 AM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Dan: it sounds like you were using these diagrams to depict quantum circuits (morphisms in the monoidal category of complex Hilbert spaces, with \otimes as its monoidal structure).

Right now I’m talking about using them to depict Boolean circuits (morphisms in the monoidal category of vector spaces over \mathbb{F}_2, with \oplus as its monoidal structure).

But Jacob Biamonte is working on both.

Posted by: John Baez on September 2, 2010 1:51 AM | Permalink | Reply to this

Re: Bimonoids from Biproducts

John wrote:

Dan: it sounds like you were using these diagrams to depict quantum circuits (morphisms in the monoidal category of complex Hilbert spaces, with \otimes as its monoidal structure).

I take it back! From what Dan’s saying now, it’s clear he’s using them to depict morphisms in a monoidal category of vector spaces, with \oplus as its monoidal structure. Just like me in my post.

Dan wrote:

Assume some fixed basis so adjoints are just transposes.

The green node is the diagonal map. This has matrix (1 1)^t. I.e. (x x)^t = (1 1)^t (x).

Yup, this is exactly what I was saying in a different language: the green node is the diagonal map

\Delta : V \to V \oplus V

defined by \Delta(x) = (x,x).

The red node adds two variables. This has matrix (1 1). I.e. (x+y) = (1 1)(x y)^t.

This again is exactly what I was saying in a different language: the red node is the codiagonal map

\nabla : V \oplus V \to V

defined by \nabla(x,y) = x+y. In this situation, ‘codiagonal’ is just pompous jargon for ‘addition’.

An actual example of turning an algorithm upside down like this is here.

Very cool! You are using the same math that Yves Lafont was using in his paper — except he mainly focused on vector spaces over the field with two elements, with just a little attention to real or complex vector spaces, while you seem to be focused on real or complex vector spaces.

Posted by: John Baez on September 2, 2010 5:54 AM | Permalink | Reply to this

Re: Bimonoids from Biproducts

All this holding off is rather gallant, but let me have a go. In a cartesian category, not only does every object come with a canonical comonoid structure, but every monoid becomes a bimonoid when paired with this comonoid. So, if you have a category equipped with a monoidal product which is both cartesian and cocartesian, then every object can be given a unique monoid structure and a unique comonoid structure, and these must fit together to form a bimonoid. We only need coproducts isomorphic to products for this to work, which is weaker than fully-fledged biproducts.

You can use the existence of this bimonoid structure on a vector space to show that Fock space, the state space for a non-interacting quantum field theory, has a bimonoid structure!

Posted by: Jamie Vicary on September 1, 2010 4:15 PM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Jamie wrote:

So, if you have a category equipped with a monoidal product which is both cartesian and cocartesian, then every object can be given a unique monoid structure and a unique comonoid structure, and these must fit together to form a bimonoid.

True, but my puzzle was just: why is the last part true? What’s the conceptual proof? I want to know where this law is coming from:

You have already made the crucial step, which is noting that you can weaken the hypotheses I originally gave. I claimed:

  • Any object in a monoidal category where the tensor product is a biproduct, and the unit is both initial and terminal, becomes a bimonoid in a unique way.

You said:

  • Any object in a monoidal category where the tensor product is both product and coproduct, and the unit is both initial and terminal, becomes a bimonoid in a unique way.

You dropped the compatibility condition between product and coproduct which is the key clause in the definition of biproduct!

I originally thought this compatibility condition would be the key to proving the compatibility between the multiplication \nabla : V \oplus V \to V and the comultiplication \Delta : V \to V \oplus V. I thought I was following the tao of mathematics, which says “as above, so below”, or more generally: “it takes an equation to make an equation”.

But no…

Posted by: John Baez on September 2, 2010 1:29 AM | Permalink | Reply to this

Re: Bimonoids from Biproducts

OK, I’ll give it a go:

A very-slightly-fancier way of stating your ‘famous facts’ is that if (\mathbf{C}, \times) is cartesian, then the forgetful functor (\mathbf{Comon}(\mathbf{C}, \times), +) \to (\mathbf{C}, +) is an isomorphism, and (trivially) a monoidal one. And, of course, the dual holds if (\mathbf{C}, +) is co-cartesian.

So if (\mathbf{C}, \oplus) is both cartesian and cocartesian, then \mathbf{Bimon}(\mathbf{C}, \oplus) \cong \mathbf{Mon}(\mathbf{Comon}(\mathbf{C}, \oplus), \oplus) \cong \mathbf{Mon}(\mathbf{C}, \oplus) \cong \mathbf{C}. And a moment’s reflection shows that the forward direction of this isomorphism is still just the forgetful functor it should be.

The dual proof starts with \mathbf{Bimon}(\mathbf{C}, \oplus) \cong \mathbf{Comon}(\mathbf{Mon}(\mathbf{C}, \oplus), \oplus).

Categories of (co)monoids have several cute and surprisingly-rarely-discussed universal properties! For instance, I’ve always liked the fact that for (\mathbf{C}, \otimes) symmetric monoidal, \mathbf{ComMon}(\mathbf{C}, \otimes) is the universal way of making \otimes into a coproduct. (Consider the classical case: \otimes is not a coproduct for modules, but it is for commutative algebras.)

Posted by: Peter LeFanu Lumsdaine on September 2, 2010 3:16 AM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Peter wrote:

I’ve always liked the fact that for (\mathbf{C}, \otimes) symmetric monoidal, \mathbf{ComMon}(\mathbf{C}, \otimes) is the universal way of making \otimes into a coproduct.

Yes, that’s a nice and perhaps underappreciated fact. I think it’s even underappreciated that the tensor product of commutative rings is their coproduct. It’s not an obscure fact, but nevertheless I’ve known it to take people by surprise. Perhaps it’s something to do with the use of language: tensor products are thought of as more like products (\times) than sums (+).

Posted by: Tom Leinster on September 2, 2010 7:48 AM | Permalink | Reply to this

Re: Bimonoids from Biproducts

I think it’s even underappreciated that the tensor product of commutative rings is their coproduct. It’s not an obscure fact, but nevertheless I’ve known it to take people by surprise.

True. Of course the right way to think of it is that Rings is really Aff^{op}, and the tensor product of two rings of functions on X and Y is the ring of functions on the product X \times Y.

This becomes even more “impressive” for more general Lawvere theories: for instance, the C^\infty-completed tensor product of two rings of smooth functions C^\infty(X) and C^\infty(Y) is just their coproduct as C^\infty-rings.

I have seen differential geometers who first learned about this quite impressed (myself once included, thanks to initiation by Todd). Of course when one thinks about it a little, it is a triviality. But that’s the point, right: making that which is trivial be trivially trivial. ;-)

Posted by: Urs Schreiber on September 2, 2010 5:11 PM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Great proof, Peter! This is the type of abstract nonsense I was really looking for. But as soon as I realized there were two dual proofs, I settled for something quick and dirty, namely this:

Suppose we have a monoidal category where the unit object is both terminal and initial, and the tensor product is both product and coproduct.

If the green blob here is the diagonal, then the equation holds no matter what morphism the red dot stands for:

Similarly, if the red blob is the codiagonal, then the equation holds no matter what morphism the green dot stands for.

Presumably if I thought about it hard enough I’d see that this is just your proof in disguise.

Posted by: John Baez on September 2, 2010 6:04 AM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Peter’s proof uses slightly more than:

  • Any object in a category with finite coproducts becomes a monoid in a unique way.

and its dual; he really uses that:

  • Any object in a category with finite products becomes a comonoid in a unique way, and every map between them is a map of comonoids

and its dual. In fact, you can do without this. For any monoidal category \mathcal{V}, the forgetful functor \mathbf{Comon}(\mathcal{V}, \otimes) \to \mathcal{V} creates colimits. In particular, for any \mathcal{E} with biproducts, the forgetful functor \mathbf{Comon}(\mathcal{E}, \times) \to \mathcal{E} creates coproducts. Now each X \in \mathcal{E} is uniquely a \times-comonoid, and the unique lifting of X to \mathbf{Comon}(\mathcal{E}, \times) is itself a +-monoid in a unique way; whence X is uniquely a bimonoid for \times = +. Of course, there is a dual version.

Note that, in fact, we can do without the assumption that \times = + here, so long as we are prepared to interpret the notion of bimonoid in a more general sense. Here is that sense. Suppose that \mathcal{V} is a category equipped with two monoidal structures \otimes_1 and \otimes_2, such that the second one is lax monoidal with respect to the first. This is sometimes called a two-fold monoidal category. This means in particular that there are (non-invertible) interchange maps

\sigma_{X,Y,W,Z} \colon (X \otimes_2 Y) \otimes_1 (W \otimes_2 Z) \to (X \otimes_1 W) \otimes_2 (Y \otimes_1 Z)

satisfying various axioms. Then we can define a \otimes_1-\otimes_2-bimonoid to be an object X \in \mathcal{V} which is a \otimes_2-comonoid and a \otimes_1-monoid, and which satisfies, amongst other things, the bialgebra law that

X \otimes_1 X \stackrel{\delta \otimes_1 \delta}{\to} (X \otimes_2 X) \otimes_1 (X \otimes_2 X) \stackrel{\sigma_{X,X,X,X}}{\to} (X \otimes_1 X) \otimes_2 (X \otimes_1 X) \stackrel{\mu \otimes_2 \mu}{\to} X \otimes_2 X

should equal

X \otimes_1 X \stackrel{\mu}{\to} X \stackrel{\delta}{\to} X \otimes_2 X.

More abstractly, we have that \otimes_2, being lax monoidal with respect to \otimes_1, lifts to the category of \otimes_1-monoids; so that we may consider \otimes_2-comonoids in there, and these are precisely the bimonoids defined above. Of course you could also first take \otimes_2-comonoids and then \otimes_1-monoids, but you’d get the same answer. To summarise, we have that:

\mathbf{Bimon}(\mathcal{V}, \otimes_1, \otimes_2) \cong \mathbf{Comon}(\mathbf{Mon}(\mathcal{V}, \otimes_1), \otimes_2) \cong \mathbf{Mon}(\mathbf{Comon}(\mathcal{V}, \otimes_2), \otimes_1)

Now for any category \mathcal{E} with both products and coproducts, the product structure is lax monoidal with respect to the coproduct structure (because any functor is lax monoidal with respect to coproducts), and so we have a two-fold monoidal category (\mathcal{E}, +, \times). The above argument now shows that any object of \mathcal{E} is a +-\times-bimonoid in a unique way.

Posted by: Richard Garner on September 2, 2010 8:05 AM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Hi, Richard! This is a nice example of the microcosm principle! Concepts tend to be most comfortable living in contexts of the same general shape, like a hand in a glove. So, a monoid likes to live in a monoidal category. A bimonoid can live in a monoidal category too… but as you note, it’s unnecessary for the multiplication

\mu : X \otimes_1 X \to X

to involve the same tensor product as the comultiplication

\delta : X \to X \otimes_2 X

So a bimonoid is really most comfy when living inside some sort of ‘bimonoidal category’! And, I guess you’re claiming that a ‘two-fold monoidal category’ is a way of making this intuition precise.

Are you sure it’s the best way? Aguiar and Mahajan’s magnum opus has a notion of 2-monoidal category which is a bit different. They write:

Let us argue in favor of our definition of 2-monoidal categories. Related notions appear in the literature but with important differences. The two-fold monoidal categories of Balteanu and Fiedorowicz [31, 204] involve two monoidal structures which are required to be strict and to share the unit object (I = J). While the former assumption may not be crucial, the latter fails in many of the examples we are interested in (see Section 6.4). Two-fold monoidal categories thus appear as a somewhat unnatural special case of 2-monoidal categories. Forcey, Siehler, and Sowers [132] improve on this notion by removing the strictness assumption and allowing the unit objects to be distinct, but the structure maps \Delta_I and \mu_J (6.2) are assumed to be isomorphisms. This again fails in most of our examples.

Another related notion appears in recent work of Vallette [362, Section 1.2], under the name of lax 2-monoidal category; this notion involves fewer structure morphisms and fewer axioms than our notion of 2-monoidal category. We do not know of any examples in which only these axioms are satisfied. In [362, Section 1.3], Vallette defines a colax 2-monoidal category and then combines the two to define a 2-monoidal category. This is different from what we do (compare his definition with the alternative definition we give in Proposition 6.4) and again leaves out most of our examples.

In addition to the examples of Section 6.4, support for Definition 6.1 is provided by various results of later sections, such as Proposition 6.4 and most notably Proposition 6.73, which shows that our notion of 2-monoidal category is an instance of a general notion in higher category theory (that of a pseudomonoid in a monoidal 2-category).

Definition 6.3. We say that a 2-monoidal category is strong if the structure morphisms (6.1) and (6.2) are isomorphisms.

The notion of a strong 2-monoidal category is not truly a new one: a result of Joyal and Street implies that this notion is equivalent to that of a braided monoidal category.

Posted by: John Baez on September 3, 2010 10:16 AM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Dear John,

Thanks for the comment. Aguiar and Mahajan’s definition is the same as mine, I think. I used the name “two-fold monoidal category” a bit lazily; I’d forgotten that in that definition the units are required to coincide. That is certainly not what I intended as it doesn’t hold here.

To recap, the structure we are talking about is the following: a 2-monoidal category is a category with two monoidal structures, such that the multiplication and unit functors for the second are lax monoidal with respect to the first, and such that the associativity and unit constraints for the second are lax monoidal transformations with respect to the first.

This is the nicest definition; and actually it’s one that Aguiar and Mahajan miss out.

Such categories (and bimonoids in them) also feature in the work of Francois Lamarche.

Posted by: Richard Garner on September 3, 2010 12:14 PM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Richard wrote:

To recap, the structure we are talking about is the following: a 2-monoidal category is a category with two monoidal structures, such that the multiplication and unit functors for the second are lax monoidal with respect to the first, and such that the associativity and unit constraints for the second are lax monoidal transformations with respect to the first.

This is the nicest definition; and actually it’s one that Aguiar and Mahajan miss out.

In email, Aguiar says:

We do have this, it’s given as Proposition 6.4, among the first remarks we make about 2-monoidal categories. We also discuss the corresponding facts for higher monoidal categories in Chapter 7.

Posted by: John Baez on September 6, 2010 8:57 AM | Permalink | Reply to this

Re: Bimonoids from Biproducts

I’m sorry, I stand corrected; though the definition I give is not the one of Proposition 6.4, now that I look again I see that it does appear later, as Proposition 6.73.

Posted by: Richard Garner on September 6, 2010 1:00 PM | Permalink | Reply to this

Re: Bimonoids from Biproducts

If the green blob is the diagonal, then the equation holds no matter what morphism the red dot stands for.

So you made us think that the problem has something to do with addition, when obviously the result is (A(x,y), A(x,y)) = (A(x,y), A(x,y)) for any linear map A \colon V \oplus V \to V. So it has nothing to do with addition at all, only duplication. How sneaky!

If the red blob is the codiagonal, then the equation holds no matter what morphism the green dot stands for.

So you made us think that the problem has something to do with duplication, when obviously the result is (A_1 x + A_1 y, A_2 x + A_2 y) = (A_1(x+y), A_2(x+y)) for any linear map A \colon V \to V \oplus V. So it has nothing to do with duplication at all, only addition. How sneaky!
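Both computations are easy to verify numerically: fix the green blob as the diagonal and let the red one be a random linear map, and dually. A numpy sketch (the matrix encodings and names are my own):

```python
import numpy as np

# V = R^3; the four-fold sum V + V + V + V is R^12.
rng = np.random.default_rng(0)
n = 3
I, Z = np.eye(n), np.zeros((n, n))

Delta = np.vstack([I, I])   # diagonal   V -> V + V
Nabla = np.hstack([I, I])   # codiagonal V + V -> V

def dsum(P, Q):
    """Direct sum (block diagonal) of two matrices."""
    return np.block([[P, np.zeros((P.shape[0], Q.shape[1]))],
                     [np.zeros((Q.shape[0], P.shape[1])), Q]])

# S swaps the middle two of the four summands of V
S = np.block([[I, Z, Z, Z],
              [Z, Z, I, Z],
              [Z, I, Z, Z],
              [Z, Z, Z, I]])

# green = diagonal, red = arbitrary A : V + V -> V
A = rng.standard_normal((n, 2 * n))
assert np.allclose(dsum(A, A) @ S @ dsum(Delta, Delta), Delta @ A)

# red = codiagonal, green = arbitrary B : V -> V + V
B = rng.standard_normal((2 * n, n))
assert np.allclose(dsum(Nabla, Nabla) @ S @ dsum(B, B), B @ Nabla)
```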

Posted by: Toby Bartels on September 5, 2010 11:58 PM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Toby wrote:

How sneaky!

Yeah — I’m so sneaky I not only fooled you twice, I even fooled myself twice!

At first I felt sure that some interaction between duplication and addition was involved in proving the desired equation. Only when I came to my senses on that sultry Singapore night did I realize that this equation had nothing to do with addition, just the universal property of duplication — and then, moments later, that dually it must have nothing to do with duplication, just the universal property of addition.

My mistake: a misapplication of Curie’s principle. (That’s Pierre, not Marie.) Curie’s principle says that when a problem has a symmetry, so must its answer. But this principle is only valid when there’s a unique answer. Here the problem is invariant under duality — and what threw me was that there are two answers, interchanged by duality.

Posted by: John Baez on September 6, 2010 3:44 AM | Permalink | Reply to this

Re: Bimonoids from Biproducts

Peter wrote:

Categories of (co)monoids have several cute and surprisingly-rarely-discussed universal properties!

Yes. I’ve said it before and I’ll say it again: monoids are underrated. I think it’s a sin that the typical introductory ‘ancient algebra’ textbook used by math grad students says so little about monoids (yet so much about groups). Any view of mathematics that doesn’t treat the natural numbers with respect is flawed. They’re not just a warmup for the integers.

Posted by: John Baez on September 2, 2010 7:24 AM | Permalink | Reply to this
Read the post A Hopf Algebra Structure on Hall Algebras
Weblog: The n-Category Café
Excerpt: Christopher Walker has a new paper, "A Hopf algebra structure on Hall algebras".
Tracked: October 4, 2010 8:04 AM
