

November 4, 2010

Transforms

Posted by David Corfield

I have a project afoot to gain some historical understanding of the rise of appreciation for mathematical duality. Something I need to know more about is the process whereby the duality involved in Fourier analysis came to be seen as arising through a pairing of a space with its dual, thereby allowing comparison with other such dualities.

This has got me thinking once again about transforms, something we’ve discussed many times at the Café before. In as general a setting as possible, we could take some pairing

A \times B \to C,

and then if we have a map g from C to a rig D, we can transform certain functions from A to D to ones from B to D. We would do this by forming the D-sum over a of D-products of g((a, b)) and f(a).
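Spelled out in the discrete case, the transform of f\colon A \to D would be something like

\hat{f}(b) = \sum_{a \in A} g((a, b)) \cdot f(a),

with the sum and product taken in D (an integral stands in for the sum when A carries a suitable measure).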

Usually we pick C to act as a dualizing object, and then B to be the dual of A with respect to C. So in the case of the Fourier transform for locally compact abelian groups, we choose C to be the circle group, and B as the group of characters of A.

In this case, we take D to be the complex numbers, and we map the circle group to the unit complex numbers. Then the transform of f, a complex function on A, is \hat f(\chi) = \int_A \chi(a) \cdot f(a) \, d\mu, integrating with respect to the Haar measure on A.

We might then realise that we could have chosen a larger C, and take it as all of \mathbb{C}. The maps from A to \mathbb{C} were called ‘generalized characters’ by George Mackey in his 1948 paper, The Laplace Transform for Locally Compact Abelian Groups. So now the dual of A is the product of the ordinary characters of A (the complex part) and the real linear functionals (once logarithms are taken). We find that in the case of the integers, the dual is now not just the unit circle as in the case of the Fourier dual, but all of the non-zero complex numbers.

Now in this case we have C and D identical, and can form the Laplace transform for any locally compact abelian group. Perhaps more usually the Laplace transform is taken to be the real version of the complex Fourier transform, whereas Mackey is treating them in combination.

This reminded me of something we discussed years ago, looking at the Legendre transform as a deformation of the Laplace transform to a different rig D. In the guise of the Legendre–Fenchel transformation, we have A a real vector space X, B its dual, with the obvious pairing to \mathbb{R}. We take the rig D to be \mathbb{R}_{max}, the reals extended by \{-\infty\}, with ‘multiplication’ as + and ‘addition’ as max. This is usually extended further to take maps into \mathbb{R}_{max} \cup \{+\infty\}. We can see that such a rig is at play because instead of the integral we now have sup, and we don’t multiply f(a) with the evaluation (a, b), but add (or rather subtract, there being sign conventions)

f^{\ast}(x^{\ast}) = \sup \{ \langle x^{\ast}, x \rangle - f(x) \mid x \in X \}.

It has always struck me as a bit odd that we’d use something valued in \mathbb{R} in the very different setting of the rig \mathbb{R}_{max}. It works, I guess, because we can map the reals under addition to the multiplicative structure of \mathbb{R}_{max}. But as all this transform business seems to be a souped-up form of matrices acting on vectors, I wonder whether there isn’t a way of taking A and its dual as semimodules, that is, modules for semirings or rigs. Couldn’t the pairing of the spaces take values in \mathbb{R}_{max}? I need to look at it closely, but something like this may be what is described in Duality and Separation Theorems in Idempotent Semimodules; see Example 7.
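Here is a minimal sketch, just to make the ‘matrices acting on vectors over a rig’ picture concrete: a single transform routine which, over the ordinary rig (\mathbb{C}, +, \times), computes a discrete Fourier-type transform, and over the max-plus rig computes a discrete Legendre–Fenchel transform. The names and setup in the code are merely illustrative, not taken from anywhere in particular.

```python
# A sketch of the "matrix acting on a vector over a rig" picture.

import cmath


def rig_transform(f, A, B, kernel, rig_add, rig_mul, zero):
    """Transform f : A -> D along kernel : A x B -> D.

    For each b, combine kernel(a, b) with f[a] using rig_mul, then fold
    the results together with rig_add, starting from the rig's zero.
    """
    result = {}
    for b in B:
        acc = zero
        for a in A:
            acc = rig_add(acc, rig_mul(kernel(a, b), f[a]))
        result[b] = acc
    return result


n = 4
A = range(n)
B = range(n)

# Ordinary rig (C, +, x): the kernel exp(-2*pi*i*a*b/n) gives a discrete
# Fourier transform of f.
f = {a: float(a) for a in A}
fourier = rig_transform(
    f, A, B,
    kernel=lambda a, b: cmath.exp(-2j * cmath.pi * a * b / n),
    rig_add=lambda x, y: x + y,
    rig_mul=lambda x, y: x * y,
    zero=0.0,
)

# Max-plus rig (R u {-inf}, max, +): with the pairing <a, b> = a*b and the
# sign convention discussed above ("multiplying" by g means subtracting g(a)),
# the same routine computes the discrete Legendre transform
# g*(b) = max_a (a*b - g(a)).
g = {a: a * a / 2 for a in A}          # a convex function on the grid
legendre = rig_transform(
    g, A, B,
    kernel=lambda a, b: a * b,
    rig_add=max,
    rig_mul=lambda x, y: x - y,
    zero=float("-inf"),
)

print(fourier)
print(legendre)
```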

We see transforms everywhere, such as the Radon transform for the incidence pairing for homogeneous spaces G/H \times G/K \to \{0, 1\}, transforming complex maps on G/H to ones on G/K, allowing us, say, to reconstruct the value of a function at a point from the integrals of the function on lines passing through the point (see here). Here again our C differs from our D. I’d love to understand the rationale behind such choices.

What makes for a transform that you can write whole books about? The membership pairing of a set A with its powerset P(A) leads to the transform of a function f from A to the rig of truth values (i.e. a subset B) being the subset of P(A) formed of those subsets of A which intersect B. Not enough there to fill a tome, it appears.
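In the rig of truth values, with \vee as ‘addition’ and \wedge as ‘multiplication’, the same recipe reads

\hat{f}(S) = \bigvee_{a \in A} \left( [a \in S] \wedge f(a) \right),

which is true exactly when S meets B.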

Up a categorical level, we have composition of profunctors F\colon A ⇸ B with G\colon 1 ⇸ A or H\colon B ⇸ 1. Do any of these get recognised as worthy transforms? What if a category like Hilb were the target rather than Set?

Posted at November 4, 2010 11:41 AM UTC


16 Comments & 0 Trackbacks

Re: Transforms

Oooh. I was about to post something about this stuff. Watch this space.

Posted by: Simon Willerton on November 4, 2010 1:25 PM | Permalink | Reply to this

Re: Transforms

Great! In the meanwhile I remembered I ought to get reacquainted with the discussion you began here.

Posted by: David Corfield on November 4, 2010 1:36 PM | Permalink | Reply to this

Re: Transforms

If you’re also interested in very recent history, in the last few years Shiri Artstein-Avidan and Vitali Milman have been doing some really interesting work on the canonical nature of many classical dualities in analysis and geometry. Search this page for “duality”. In particular, with Semyon Alesker they’ve shown how the Fourier transform is characterized by very little.

Posted by: Mark Meckes on November 4, 2010 1:35 PM | Permalink | Reply to this

Re: Transforms

And the Legendre transform is characterized by very little. So does the very little characterizing the Fourier transform mean very little characterizes the Laplace transform? And if so, does the very little of the latter get deformed into the very little characterizing the Legendre transform as we let the ‘temperature’ of the real rig tend to zero?

A characterization of the Fourier transform and related topics might explain:

For reasons connected with the topic of convex analysis, we were interested in the characterization of a very basic concept in convexity: duality and the Legendre transform. In the paper [1] it was shown that the Legendre transform can be characterized as follows: up to linear terms, it is the only involution on the class of convex lower semi-continuous functions on \mathbb{R}^n which reverses the (partial) order of functions. Since the Legendre transform has another special property, namely that it exchanges summation of functions with their inf-convolution (for definitions and details see [2]), this in fact implied that an involution on lower semi-continuous convex functions which reverses order must have this special property. It turns out that also the opposite is true, namely any involutive transform (on this class) which exchanges summation with inf-convolution, must reverse order, and, in fact, be up to linear terms the Legendre transform (see [2] for proofs and a discussion). Thus, already at this stage we observed that very minimal basic properties essentially uniquely define some classical transform which traditionally is defined in a concrete, and quite involved form.

It looks very intriguing to determine how far this point of view can be extended. It turns out that also the classical Fourier transform may be defined essentially uniquely by very minimal and basic conditions, namely by the condition of exchanging convolution with product (together with, for example, the form of the square of the transform). This is what we announced in this paper. The methods of proof are different (for Legendre transform convexity is used very strongly), but the ideology is similar, although in the case of Fourier transform the proofs seem to be, for the moment, more involved.
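(For reference, the inf-convolution mentioned in the quoted passage is (f \Box g)(x) = \inf_y \{ f(y) + g(x - y) \}, and the exchange property is (f + g)^{\ast} = f^{\ast} \Box g^{\ast}.)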

Posted by: David Corfield on November 5, 2010 5:51 PM | Permalink | Reply to this

Re: Transforms

Being more of a discrete guy, I wonder about the Discrete Fourier Transform. The continuous statement becomes “cyclic convolution exchanges with pointwise multiplication” for the DFT. But in this case this is equivalent to the statement that the unitary DFT matrix of a given size diagonalises all circulant matrices of that size. I wonder if there are any other discrete transforms that are characterised by their actions on certain classes of matrix.
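A quick numerical check of that equivalence, as a sketch (the particular circulant matrix below is arbitrary):

```python
# Check that the unitary DFT matrix diagonalises a circulant matrix.
import numpy as np

n = 5
rows, cols = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * rows * cols / n) / np.sqrt(n)   # unitary DFT matrix

c = np.array([3.0, 1.0, 4.0, 1.0, 5.0])                  # arbitrary first column
C = np.array([[c[(i - m) % n] for m in range(n)]
              for i in range(n)])                         # circulant matrix from c

D = F @ C @ F.conj().T                                    # conjugate by the DFT
print(np.allclose(D, np.diag(np.diag(D))))                # True: D is diagonal
```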

Posted by: Dave Tweed on November 6, 2010 2:05 AM | Permalink | Reply to this

Re: Transforms

This is a very interesting topic, I think, and I’m looking forward to more on it from you and from Simon.

There’s one part of what you wrote that made me raise my eyebrows. Regarding the Legendre–Fenchel transform:

we don’t multiply f(a) with the evaluation (a, b), but add (or rather subtract, there being sign conventions) f^{\ast}(x^{\ast}) = \sup \{ \langle x^{\ast}, x \rangle - f(x) \mid x \in X \}.

I’m not so sure it’s a matter of convention. One way to express this is that, for any given x^{\ast}, the real number f^{\ast}(x^{\ast}) is universal (minimal) such that f^{\ast}(x^{\ast}) + f(x) \geq \langle x^{\ast}, x \rangle for all x. This way of expressing it eliminates subtraction.

(I’ve been studying similar duality-type things myself in a fairly general setting, but I don’t have the energy to explain that right now, and I think it’s not quite what you’re doing.)

Posted by: Tom Leinster on November 5, 2010 6:53 PM | Permalink | Reply to this

Re: Transforms

Mmm, I wasn’t too sure about that when I wrote that. Something that confuses me is that in the Fourier/Laplace case, sometimes we see

\int \overline{\chi(a)} \cdot f(a) \, d\mu,

as here and sometimes without the conjugation, as in that Mackey paper. So I guess in the Fourier case we’re wondering whether to multiply by e^{i k t} or e^{-i k t}. Is there a reason we pick one for the transform and the other for the inverse transform?

Likewise with the Laplace transform, why e^{-k t} one way and e^{k t} back? It’s the temperature = 0 version of the former which gives the minus in the Legendre transform.

Posted by: David Corfield on November 5, 2010 7:35 PM | Permalink | Reply to this

Re: Transforms

David wrote:

Is there a reason we pick one for the transform and the other for the inverse transform?

Not a strong reason; it’s just a matter of notation, or convention. But it’s still interesting. In what follows, I’ll use the variable x where you used t, thinking of my waves as functions on space rather than functions of time, because people often use somewhat different sign conventions for Fourier transforms in the space and time variables — in part for good reasons (special relativity), and in part just for tradition.

Typically the variable k is used as the Fourier transform partner of the space variable x, while \omega is used as the partner of the time variable t. The name for k is wavenumber (or sometimes (spatial) frequency), while \omega is called (temporal) frequency. There are a bunch of further subtleties which I won’t bother mentioning now. They may seem purely terminological, and to some extent that’s true — but I claim that they’re all important, even if you’re mainly interested in the math!

A function of the form

f(x) = e^{i k x}

is said to be a plane wave of wavenumber k. Suppose we’d like our Fourier transform to map any function of this sort to a delta function supported at k. This will let us take a function, take its Fourier transform, and easily see the frequencies of the plane waves from which it’s built as a superposition. To do this, we need to use this Fourier transform:

\widehat{f}(k) = \frac{1}{2 \pi} \int f(x) e^{-i k x} \, d x

It’s a bit easier to see that the corresponding inverse transform must be

f(x) = \int \widehat{f}(k) e^{i k x} \, d k

since this clearly sends a delta function supported at some value of k back to the corresponding plane wave e^{i k x}.
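Explicitly, writing k_0 for the wavenumber of the plane wave (to keep it distinct from the transform variable) and using \int e^{i q x} \, d x = 2 \pi \, \delta(q):

\widehat{f}(k) = \frac{1}{2 \pi} \int e^{i k_0 x} e^{-i k x} \, d x = \delta(k - k_0), \qquad \int \delta(k - k_0) e^{i k x} \, d k = e^{i k_0 x}.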

The whole point of this overly long comment is this:

The minus sign and the factor of 2 \pi need to be put in the right places to make my above statement true: if we randomly muck with them, the Fourier transform of e^{i k x} won’t be a delta function supported at k.

Anyone who wants to fully understand the Fourier transform needs to understand this, and also understand the various other conventions people use, and what’s good about those.

For example: the Fourier transform I’ve described is not a unitary operator. To make it unitary, we can adjust it as follows:

\widehat{f}(k) = \frac{1}{\sqrt{2 \pi}} \int f(x) e^{-i k x} \, d x

and then its inverse is

f(x) = \frac{1}{\sqrt{2 \pi}} \int \widehat{f}(k) e^{i k x} \, d k

This is a good convention for quantum mechanics, and it’s somehow pleasing that the necessary factor of 1/2\pi has been split evenly between the Fourier transform and its inverse.
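Concretely, unitarity here amounts to Plancherel’s theorem holding on the nose, \int |f(x)|^2 \, d x = \int |\widehat{f}(k)|^2 \, d k, whereas with the asymmetric convention above the two sides differ by a factor of 2 \pi.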

On the other hand, there are those (especially highbrow mathematicians) who prefer the kernel e^{2 \pi i x} to the kernel e^{i x}, based on the philosophy that 2 \pi i is the fundamentally important number, while 2 \pi is less so and \pi is downright dumb. And this too has its merits.

Indeed one could write a very nice essay on the various conventions for Fourier transforms, the endless arguments that people get into about them, and what this means about the sociology and philosophy of mathematics, physics and engineering. I feel one coming on, but I’ll stop here.

Posted by: John Baez on November 6, 2010 4:10 AM | Permalink | Reply to this

Re: Transforms

John said

On the other hand, there are those (especially highbrow mathematicians) who prefer the kernel e^{2\pi i x} to the kernel e^{i x}, based on the philosophy that 2\pi i is the fundamentally important number, while 2\pi is less so and \pi is downright dumb. And this too has its merits.

One aspect of this is how you think of (or parametrize) the circle. Do you parametrize by arc-length, in which case you think of it as \mathbb{R}/2\pi\mathbb{Z}? Then the kernel is e^{-i n x}. Do you set the length of your circle to be 1, which is like setting the period of periodic functions to be 1, so you think of the circle as \mathbb{R}/\mathbb{Z}? Then the kernel is e^{-2\pi i n x}. Or do you think of the circle as the unit complex numbers? Then the kernel is x^{-n}.

Posted by: Simon Willerton on November 8, 2010 10:47 AM | Permalink | Reply to this

picking isomorphisms

On the other hand, there are those (especially highbrow mathematicians) who prefer […] based on the philosophy that 2\pi i is the fundamentally important number, while 2 \pi is less so and \pi is downright dumb.

The highbrow mathematician will hopefully know that if there is anything “downright dumb” here then it is decreeing any fixed choice at all.

Add this to the list of examples in reply to Minhyong Kim’s request for why it is good to remember isomorphisms and not impose equality.

Posted by: Urs Schreiber on November 8, 2010 11:09 AM | Permalink | Reply to this

Re: Transforms

Urs wrote:

The highbrow mathematician will hopefully know that if there is anything “downright dumb” here then it is decreeing any fixed choice at all.

As a corollary to that, it is also downright dumb for authors to assume their readers know all the authors’ conventions without having them spelled out.

Posted by: Mark Meckes on November 8, 2010 3:50 PM | Permalink | Reply to this

Re: Transforms

David wrote:

Likewise with the Laplace transform, why e^{-k t} one way and e^{k t} back?

This is again mainly convention, but this choice matters more, psychologically at least, than the choice between e^{i k t} versus e^{- i k t}.

The choice between e^{i k t} versus e^{- i k t} is the choice between a counterclockwise spiral and a clockwise one. There are three easy ways to switch between these choices:

  • replacing i with -i (Galois group symmetry),
  • replacing t with -t (time reversal symmetry), or
  • replacing k with -k (changing conventions about the definition of ‘frequency’).

The choice between e^{k t} and e^{-k t} is the difference between exponential growth and decay. This seems like a bigger deal. Now we only have two easy ways to switch between these choices: switching the sign of t and switching the sign of k.

The Fourier transform is best for systems that oscillate endlessly. The Laplace transform is best for systems that show exponential growth or decay. In electrical circuits containing resistors, or the heat equation, or radioactive decay, or many other dissipative linear systems, we see a lot of exponential decay and no exponential growth. These are the natural markets for the Laplace transform.

For thermodynamic reasons that can be endlessly discussed but seem intuitively obvious, we have chosen our arrow of time to point forwards in the ‘decay’ direction. So, the only remaining freedom concerns our choice of the sign for k: do we describe exponential decay by e^{-k t} with k positive, or e^{k t} with k negative?
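For concreteness, the most common one-sided convention (a standard choice, not something peculiar to this thread) builds the decay into the kernel,

F(s) = \int_0^{\infty} f(t) e^{-s t} \, d t,

so that f(t) = e^{-a t} with a > 0 has F(s) = 1/(s + a), convergent for \mathrm{Re}(s) > -a.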

Posted by: John Baez on November 9, 2010 12:53 AM | Permalink | Reply to this

Re: Transforms

Thanks. That’s helpful. I wonder why we don’t see more of Mackey’s full Laplace transform, i.e., the one arising from his generalized characters. Do we not have a need to treat decaying oscillations?

See how I try to lure you back to ‘temperature lives on the Riemann sphere’.

Posted by: David Corfield on November 9, 2010 10:58 AM | Permalink | Reply to this

Re: Transforms

David wrote:

Do we not have a need to treat decaying oscillations?

Sure — indeed, most linear physical systems display decaying oscillations when you smack them. These decaying oscillations are commonly described by taking their Laplace transforms, but these Laplace transforms extend to analytic functions on half of the complex plane, so they have a Fourier aspect as well. On the other half of the complex plane they have poles, and the locations of these convey important information about vibrational modes: their frequencies and decay rates.
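A standard example, just to make the pole picture concrete: the decaying oscillation f(t) = e^{-a t} \cos(\omega_0 t) for t \geq 0 has Laplace transform

F(s) = \frac{s + a}{(s + a)^2 + \omega_0^2},

analytic for \mathrm{Re}(s) > -a, with poles at s = -a \pm i \omega_0; the real part of the pole gives the decay rate and the imaginary part the frequency.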

You’ll see this mathematical technology widely used in everything from electrical engineering to particle physics. In particle physics, the poles I mentioned are called ‘resonances’. Some other good buzzwords include s plane, frequency domain, and transfer function. For the discrete-time analogue of the Laplace transform, see the nice Wikipedia article on the z transform. That article has more pictures than the article on the Laplace transform, for no really good reason. Any decent book like LePage’s Complex Variables and the Laplace Transform for Engineers will make up for that deficiency. This stuff is lots of fun.

And yes, all this is deeply related to the way inverse temperature acts like imaginary time: \exp(-\beta H) versus \exp(-i t H), and all that.

(When you get some time, check out the latest discoveries in temperature-dependent information geometry over on Azimuth. I’ve decided that the symbol \beta needs a better name than inverse temperature, because it’s arguably more fundamental than temperature. I’m calling it coolness.)

Posted by: John Baez on November 9, 2010 12:58 PM | Permalink | Reply to this

Re: Transforms

Since you are interested in historical aspects, there is another very old basic duality from the 19th century: that between lines and hyperplanes in a projective space.

Posted by: Arnold Neumaier on November 11, 2010 4:20 PM | Permalink | Reply to this

Re: Transforms

Absolutely. That’s a critical one.

So how does it come about that people begin to see the interrelation of dualities? I take it that Poincaré sees a connection between his topological duality and the one pairing up the Platonic solids. He speaks of a dual complex. But was there the thought that projective duality is linked in that it swaps entities of complementary dimension? From the current perspective, the latter relates to an outer automorphism of the Lie group of projective transformations.

And when do people relate any of these geometric/topological dualities to the logical duality of Jevons, Boole, etc? There must be important realisations via lattice theory in the hands of Dedekind and later Birkhoff.

Aside from the historical details of this case, the thought behind the project is something like:

Any adequate historical representation of mathematics must include descriptions of the emergence and elaboration of high-level themes such as duality.

Posted by: David Corfield on November 11, 2010 5:25 PM | Permalink | Reply to this
