Carleson’s Theorem
Posted by Tom Leinster
I’ve just started teaching an advanced undergraduate course on Fourier analysis — my first lecturing duty in my new job at Edinburgh.
What I hadn’t realized until I started preparing was the extraordinary history of false beliefs about the pointwise convergence of Fourier series. This started with Fourier himself about 1800, and was only fully resolved by Carleson in 1964.
The endlessly diverting index of Tom Körner’s book Fourier Analysis alludes to this:
Here’s the basic set-up. Let $\mathbb{T} = \mathbb{R}/\mathbb{Z}$ be the circle, and let $f \colon \mathbb{T} \to \mathbb{C}$ be an integrable function. The Fourier coefficients of $f$ are
$$\hat{f}(k) = \int_{\mathbb{T}} f(x)\, e^{-2\pi i k x}\, dx$$
($k \in \mathbb{Z}$), and for $n \geq 0$, the $n$th Fourier partial sum of $f$ is the function $S_n f \colon \mathbb{T} \to \mathbb{C}$ given by
$$(S_n f)(x) = \sum_{k = -n}^{n} \hat{f}(k)\, e^{2\pi i k x}.$$
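For concreteness, here’s the standard sawtooth example, written in the conventions above: take $f(x) = x - \tfrac{1}{2}$ for $x \in [0, 1)$. A routine integration by parts gives
$$\hat{f}(k) = \int_0^1 \Bigl(x - \tfrac{1}{2}\Bigr) e^{-2\pi i k x}\, dx =
\begin{cases} 0 & \text{if } k = 0, \\[4pt] \dfrac{i}{2\pi k} & \text{if } k \neq 0, \end{cases}
\qquad
(S_n f)(x) = -\sum_{k=1}^{n} \frac{\sin(2\pi k x)}{\pi k}.$$
Here $(S_n f)(x) \to f(x)$ for every $x \neq 0$, while at the jump $x = 0$ every partial sum is $0$, so the limit is the midpoint of the jump rather than $f(0) = -\tfrac{1}{2}$. This is exactly the kind of pointwise behaviour at stake below.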
The question of pointwise convergence is:
For ‘nice’ functions $f$, does $(S_n f)(x)$ converge to $f(x)$ as $n \to \infty$, for all $x \in \mathbb{T}$?
And if the answer is no, does it at least work for most $x$? Or if not for most $x$, at least for some $x$?
Fourier apparently thought that this was always true, for all functions $f$, although what a man like Fourier would have thought a ‘function’ was isn’t so clear.
Cauchy claimed a proof of pointwise convergence for continuous functions. It was wrong. Dirichlet didn’t claim to have proved it, but he said he would. He didn’t. However, he did show:
Theorem (Dirichlet, 1829)   Let $f \colon \mathbb{T} \to \mathbb{C}$ be a continuously differentiable function. Then $(S_n f)(x) \to f(x)$ as $n \to \infty$, for all $x \in \mathbb{T}$.
In other words, pointwise convergence holds for continuously differentiable functions.
It was surely just a matter of time until someone managed to extend the proof to all continuous functions. Riemann believed this could be done, Weierstrass believed it, Dedekind believed it, Poisson believed it. So, in Körner’s words, it ‘came as a considerable surprise’ when du Bois–Reymond proved:
Theorem (du Bois–Reymond, 1876)   There is a continuous function $f \colon \mathbb{T} \to \mathbb{C}$ such that for some $x \in \mathbb{T}$, the sequence $\bigl((S_n f)(x)\bigr)_{n \geq 0}$ fails to converge.
Even worse (though I actually don’t know whether this was proved at the time):
Theorem   Let $E$ be a countable subset of $\mathbb{T}$. Then there is a continuous function $f \colon \mathbb{T} \to \mathbb{C}$ such that for all $x \in E$, the sequence $\bigl((S_n f)(x)\bigr)_{n \geq 0}$ fails to converge.
The pendulum began to swing. Maybe there’s some continuous $f$ such that $\bigl((S_n f)(x)\bigr)_{n \geq 0}$ doesn’t converge for any $x$. This, apparently, became the general belief, solidified by a discovery of Kolmogorov:
Theorem (Kolmogorov, 1926)   There is a Lebesgue-integrable function $f \colon \mathbb{T} \to \mathbb{C}$ such that for all $x \in \mathbb{T}$, the sequence $\bigl((S_n f)(x)\bigr)_{n \geq 0}$ fails to converge.
It was surely just a matter of time until someone managed to adapt the counterexample to give a continuous $f$ whose Fourier series converged nowhere.
At best, the situation was unclear, and this persisted until relatively recently. I have on my shelf a 1957 undergraduate textbook called Mathematical Analysis by Tom Apostol. In the part on Fourier series, he states that it’s still unknown whether the Fourier series of a continuous function has to converge at even one point. This isn’t ancient history; Apostol’s book was even on my own undergraduate recommended reading list (though I can’t say I ever read it).
The turning point was Carleson’s theorem of 1964. His result implies:
If $f$ is continuous then $(S_n f)(x) \to f(x)$ for at least one $x$.
In fact, it implies something stronger:
If $f$ is continuous then $(S_n f)(x) \to f(x)$ for almost all $x$.
In fact, it implies something stronger still:
If $f$ is Riemann integrable then $(S_n f)(x) \to f(x)$ for almost all $x$.
(A Riemann integrable function on $\mathbb{T}$ is bounded and measurable, hence lies in $L^2(\mathbb{T})$, so this really does follow from the full statement below.)
The full statement is:
Theorem (Carleson, 1964)   If $f \in L^2(\mathbb{T})$ then $(S_n f)(x) \to f(x)$ for almost all $x \in \mathbb{T}$.
This was soon strengthened even further by Hunt (in a way that apparently Carleson had anticipated). ‘Recall’ that the spaces $L^p(\mathbb{T})$ get bigger as $p$ gets smaller; that is, if $1 \leq p \leq q$ then $L^q(\mathbb{T}) \subseteq L^p(\mathbb{T})$. So, if we could change the ‘2’ in Carleson’s theorem to something smaller, we’d have strengthened it. We can’t take it all the way down to 1, because of Kolmogorov’s counterexample. But Hunt showed that we can take it arbitrarily close to 1:
Theorem (Hunt, 1968)   If $f \in L^p(\mathbb{T})$ for some $p > 1$, then $(S_n f)(x) \to f(x)$ for almost all $x \in \mathbb{T}$.
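For what it’s worth, here is the one-line justification of that ‘recall’. Since $\mathbb{T}$ has total measure $1$, Hölder’s inequality with exponents $q/p$ and $(1 - p/q)^{-1}$ gives, for $1 \leq p \leq q$ and $f \in L^q(\mathbb{T})$,
$$\int_{\mathbb{T}} |f|^p \cdot 1 \, dx \;\leq\; \Bigl( \int_{\mathbb{T}} |f|^q \, dx \Bigr)^{p/q} \Bigl( \int_{\mathbb{T}} 1 \, dx \Bigr)^{1 - p/q} = \|f\|_q^p,$$
so $\|f\|_p \leq \|f\|_q$ and $L^q(\mathbb{T}) \subseteq L^p(\mathbb{T})$. (This uses the finiteness of the measure; on $\mathbb{R}$ there are no such inclusions.)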
There’s an obvious sense in which Carleson’s and Hunt’s theorems can’t be improved: we can’t change ‘almost all’ to ‘all’, simply because changing a function on a set of measure zero doesn’t change its Fourier coefficients.
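Concretely: if $f$ is, say, continuous and we define $g$ to agree with $f$ everywhere except at one point $x_0$, where $g(x_0) = f(x_0) + 1$, then $\hat{g} = \hat{f}$ and so $S_n g = S_n f$ for all $n$; hence $(S_n g)(x_0) \to f(x_0) \neq g(x_0)$ whenever $(S_n f)(x_0) \to f(x_0)$. ‘Almost all’ is therefore the best one can hope for in this formulation.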
But there’s another sense in which they’re optimal: given any set of measure zero, there’s some function whose Fourier series fails to converge there. Indeed, there’s a continuous such $f$:
Theorem (Kahane and Katznelson, 196?)   Let $E$ be a measure zero subset of $\mathbb{T}$. Then there is a continuous function $f \colon \mathbb{T} \to \mathbb{C}$ such that for all $x \in E$, the sequence $\bigl((S_n f)(x)\bigr)_{n \geq 0}$ fails to converge.
I’ll finish with a question for experts. Despite Carleson’s own proof having been subsequently simplified, the Fourier analysis books I’ve seen say that all proofs are far too hard for an undergraduate course. But what about the corollary that if $f$ is continuous then $(S_n f)(x)$ must converge to $f(x)$ for at least one $x$? Is there now a proof of this that might be simple enough for a final-year undergraduate course?