### Compositional Thermostatics (Part 2)

#### Posted by John Baez

*guest post by Owen Lynch*

In Part 1, John talked about a paper that we wrote recently:

- John Baez, Owen Lynch and Joe Moeller, Compositional thermostatics.

and he gave an overview of what a ‘thermostatic system’ is.

In this post, I want to talk about how to compose thermostatic systems. We will not yet use category theory, saving that for another post; instead we will give a ‘nuts-and-bolts’ approach, based on examples.

Suppose that we have two thermostatic systems and we put them in thermal contact, so that they can exchange heat energy. Then we predict that their temperatures should equalize. What does this mean precisely, and how do we derive this result?

Recall that a **thermostatic system** is given by a convex space $X$ and a concave entropy function $S \colon X \to [-\infty,\infty].$ A ‘tank’ of constant heat capacity, whose state is solely determined by its energy, has state space $X = \mathbb{R}_{> 0}$ and entropy function $S(U) = C \log(U),$ where $C$ is the heat capacity.

Now suppose that we have two tanks of heat capacity $C_1$ and $C_2$ respectively. As thermostatic systems, the state of both tanks is described by two energy variables, $U_1$ and $U_2,$ and we have entropy functions

$S_1(U_1) = C_1 \log(U_1), \qquad S_2(U_2) = C_2 \log(U_2)$

By conservation of energy, the total energy of both tanks must remain constant, so

$U_1 + U_2 = U$

for some $U;$ equivalently

$U_2 = U - U_1$

The equilibrium state then has maximal total entropy subject to this constraint. That is, an equilibrium state $(U_1^{\mathrm{eq}},U_2^{\mathrm{eq}})$ must satisfy

$S_1(U_1^{\mathrm{eq}}) + S_2(U_2^{\mathrm{eq}}) = \max_{U_1+U_2=U} S_1(U_1) + S_2(U_2)$

We can now derive the condition of equal temperatures from this maximization principle. In thermodynamics, temperature is defined by

$\displaystyle{ \frac{1}{T} = \frac{\partial S}{\partial U} }$

The interested reader should calculate this for our entropy functions and, in doing so, see why we identify $C$ with the heat capacity. Now, manipulating the condition of equilibrium, we get
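For reference, that calculation goes as follows:

```latex
\frac{1}{T} = \frac{\partial S}{\partial U}
            = \frac{\partial}{\partial U}\, C \log(U)
            = \frac{C}{U}
\quad\Longrightarrow\quad
U = C T
\quad\Longrightarrow\quad
\frac{dU}{dT} = C,
```

so $C$ is the energy required to raise the temperature by one unit, which is exactly the heat capacity.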

$\max_{U_1+U_2=U} S_1(U_1) + S_2(U_2) = \max_{U_1} S_1(U_1) + S_2(U-U_1)$

At the maximizing value of $U_1,$ the function being maximized on the right hand side must have derivative equal to $0.$ Thus,

$\displaystyle{ \frac{\partial}{\partial U_1} (S_1(U_1) + S_2(U-U_1)) = 0 }$

Now, note that if $U_2 = U - U_1,$ then

$\displaystyle{ \frac{\partial}{\partial U_1} S_2(U-U_1) = -\frac{\partial}{\partial U_2} S_2(U_2) }$

Thus, the condition of equilibrium is

$\displaystyle{ \frac{\partial}{\partial U_1} S_1(U_1) = \frac{\partial}{\partial U_2} S_2(U_2) }$

Using the fact that

$\displaystyle{ \frac{1}{T_1} = \frac{\partial}{\partial U_1} S_1(U_1) , \qquad \frac{1}{T_2} = \frac{\partial}{\partial U_2} S_2(U_2) }$

the above equation reduces to

$\displaystyle{ \frac{1}{T_1} = \frac{1}{T_2} }$

so we have our expected condition of temperature equilibration!
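To see this concretely, here is a quick numerical check (the heat capacities and total energy are made-up values): we maximize $S_1(U_1) + S_2(U - U_1)$ by brute-force grid search and compare the resulting temperatures.

```python
import math

# Two tanks with made-up heat capacities, exchanging a fixed total energy U.
C1, C2 = 2.0, 5.0
U = 10.0

def total_entropy(U1):
    """S1(U1) + S2(U - U1)."""
    return C1 * math.log(U1) + C2 * math.log(U - U1)

# Brute-force grid search for the energy split maximizing total entropy.
steps = 100_000
best_U1 = max((U * i / steps for i in range(1, steps)), key=total_entropy)

# For S(U) = C*log(U), 1/T = dS/dU = C/U, so T = U/C.
T1 = best_U1 / C1
T2 = (U - best_U1) / C2
print(T1, T2)  # both approximately U/(C1 + C2)
```

The maximizer splits the energy in proportion to the heat capacities, $U_i = C_i U/(C_1 + C_2),$ which is precisely the equal-temperature condition.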

The result of composing several thermostatic systems should be a new thermostatic system. In the case above, the new thermostatic system is described by a single variable: the total energy of the system $U = U_1 + U_2.$ The entropy function of this new thermostatic system is given by the constrained supremum:

$S(U) = \max_{U = U_1 + U_2} S_1(U_1) + S_2(U_2)$

The reader should verify that this ends up being the same, up to an additive constant, as a system with heat capacity $C_1 + C_2,$ i.e. with entropy function given by

$S(U) = (C_1 + C_2) \log(U)$
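Here is a numerical check of this (with made-up heat capacities): the constrained maximum differs from $(C_1 + C_2)\log(U)$ only by an additive constant, an overall shift in the entropy that does not affect temperatures.

```python
import math

# Made-up heat capacities for the two tanks.
C1, C2 = 2.0, 5.0

def composed_entropy(U, steps=100_000):
    """S(U) = max over U1 + U2 = U of C1*log(U1) + C2*log(U2), by grid search."""
    return max(C1 * math.log(U * i / steps) + C2 * math.log(U * (steps - i) / steps)
               for i in range(1, steps))

# The difference S(U) - (C1 + C2)*log(U) should be the same constant for every U.
diffs = [composed_entropy(U) - (C1 + C2) * math.log(U) for U in (1.0, 5.0, 20.0)]
print(diffs)  # three nearly identical numbers
```

Working out the maximum analytically, the constant is $C_1 \log\frac{C_1}{C_1+C_2} + C_2 \log\frac{C_2}{C_1+C_2}.$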

A very similar argument goes through when one has two systems that can exchange both heat and *volume*; both temperature and pressure are equalized as a consequence of entropy maximization. We end up with a system that is parameterized by total energy and total volume, and has an entropy function that is a function of those quantities.
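That two-variable case can also be sketched numerically. The entropy functions $S_i(U, V) = C_i \log(U) + N_i \log(V)$ below are our own ideal-gas-like assumption, chosen purely for illustration, and all the numbers are made up.

```python
import math

# Hypothetical ideal-gas-like entropies: S_i(U, V) = C_i*log(U) + N_i*log(V).
C1, N1 = 2.0, 1.0
C2, N2 = 3.0, 4.0
U, V = 10.0, 6.0  # total energy and total volume, held fixed

def S(U1, V1):
    """Total entropy for a given split of energy and volume."""
    return (C1 * math.log(U1) + N1 * math.log(V1)
            + C2 * math.log(U - U1) + N2 * math.log(V - V1))

# Coarse 2D grid search over the constraint surface U1 + U2 = U, V1 + V2 = V.
steps = 400
U1, V1 = max(((U * i / steps, V * j / steps)
              for i in range(1, steps) for j in range(1, steps)),
             key=lambda p: S(*p))

# 1/T = dS/dU = C/U and p/T = dS/dV = N/V for each subsystem.
T1, T2 = U1 / C1, (U - U1) / C2
pT1, pT2 = N1 / V1, N2 / (V - V1)
print(T1, T2, pT1, pT2)  # T1 ~ T2 and p1/T1 ~ p2/T2
```

At the maximum both the temperatures and the pressures agree, as claimed.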

The general procedure is the following. Suppose that we have $n$ thermostatic systems $(X_1,S_1),\ldots,(X_n,S_n).$ Let $Y$ be a convex space that we think of as describing the quantities conserved when we compose the $n$ thermostatic systems (total energy, total volume, etc.). Each value of the conserved quantities $y \in Y$ corresponds to many different possible values of $x_1 \in X_1, \ldots, x_n \in X_n.$ We represent this with a relation

$R \subseteq X_1 \times \cdots \times X_n \times Y$

We then turn $Y$ into a thermostatic system by using the entropy function

$S(y) = \max_{R(x_1,\ldots,x_n,y)} S_1(x_1) + \cdots + S_n(x_n)$

It turns out that if we require $R$ to be a convex relation (that is, a convex subspace of $X_1 \times \cdots \times X_n \times Y$) then $S$ as defined above ends up being a concave function, so $(Y,S)$ is a true thermostatic system.
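Here is a minimal computational sketch of this general procedure. The function names and the discretization of the relation are ours, not from the paper; the relation is presented as a function that enumerates (a finite grid of) the tuples related to each $y.$

```python
import math

def compose(entropies, related_states):
    """S(y) = max over R(x1, ..., xn, y) of S1(x1) + ... + Sn(xn)."""
    def S(y):
        return max(sum(Si(xi) for Si, xi in zip(entropies, xs))
                   for xs in related_states(y))
    return S

# Example: two tanks composed along conservation of energy U = U1 + U2,
# with the convex relation R discretized to a finite grid of splits.
C1, C2 = 2.0, 5.0
entropies = [lambda U1: C1 * math.log(U1), lambda U2: C2 * math.log(U2)]

def related_states(U, steps=100_000):
    return ((U * i / steps, U * (steps - i) / steps) for i in range(1, steps))

S = compose(entropies, related_states)
print(S(10.0))  # approximately (C1 + C2)*log(10) plus a constant
```

This recovers the composed tank from before; other choices of relation (e.g. also splitting a total volume) slot into the same `compose` function.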

We will have to wait until a later post in the series to see exactly how we describe this procedure using category theory. For now, however, I want to talk about *why* this procedure makes sense.

In the statistical mechanical interpretation, entropy is related to the probability of observing a specific macrostate. As we scale the system, the theory of large deviations tells us that seeing any macrostate other than the most probable macrostate is highly unlikely. Thus, we can find the macrostate that we will observe in practice by finding the entropy maxima. For an exposition of this point of view, see this paper:

- Jeffrey Commons, Ying-Jen Yang and Hong Qian, Duality symmetry, two entropy functions, and an eigenvalue problem in Gibbs’ theory.

There is also a dynamical systems interpretation of entropy, where entropy serves as a Lyapunov function for a dynamical system. This is the viewpoint taken here:

- Wassim M. Haddad,
*A Dynamical Systems Theory of Thermodynamics*, Princeton U. Press.

In each of these viewpoints, however, the maximization of entropy is not global, but rather constrained. The dynamical system only maximizes entropy along its orbit, and the statistical mechanical system maximizes entropy with respect to constraints on the probability distribution.

We can think of thermostatics as a ‘common refinement’ of both of these points of view. We are agnostic as to the mechanism by which constrained maximization of entropy takes place; we are simply interested in investigating its consequences. We expect that a careful formalization of either viewpoint should, in a suitable limit, end up deriving something similar to our thermostatic theory.