Uncertainty
Update (10/18/2012) — Mea Culpa:
Sonia pointed out to me that my (mis)interpretation of Ozawa was too charitable. We ended up (largely due to Steve Weinberg’s encouragement) writing a paper. So… where does one publish simple-minded (but, apparently, hitherto unappreciated) remarks about elementary Quantum Mechanics?

Sonia was chatting with me about this PRL (arXiv version), which seems to have made a splash in the news media and in the blogosphere. She couldn’t make heads or tails of it and (as you will see) I didn’t do much better. But I thought that I would take the opportunity to lay out a few relevant remarks.
Since we’re going to be talking about the Uncertainty Principle, and measurements, it behoves us to formulate our discussion in terms of density matrices.
A quantum system is described in terms of a density matrix, $\rho$, which is a self-adjoint, positive-semidefinite, trace-class operator, satisfying $\operatorname{Tr}\rho = 1$. In the Schrödinger picture (which we will use), it evolves unitarily in time,

$$\rho(t) = U(t)\,\rho(0)\,U(t)^{-1} \qquad (1)$$

except when a measurement is made.
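If you want to play with this concretely, here is a quick numerical sketch (Python, with a randomly-generated $4\times 4$ state and Hamiltonian, and $\hbar = 1$; all of the numbers are, of course, arbitrary) verifying that unitary evolution preserves the defining properties of a density matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random density matrix: rho = M M^dagger / Tr(M M^dagger) is
# automatically self-adjoint, positive-semidefinite, and of unit trace.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T
rho /= np.trace(rho).real

# A random Hermitian "Hamiltonian", and U(t) = exp(-i H t), built by
# diagonalizing H (scipy.linalg.expm would do just as well).
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (H + H.conj().T) / 2
t = 1.7
evals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

rho_t = U @ rho @ U.conj().T  # Schroedinger-picture evolution (1)

# Unitary evolution preserves trace, self-adjointness and positivity.
assert np.isclose(np.trace(rho_t).real, 1.0)
assert np.allclose(rho_t, rho_t.conj().T)
assert np.all(np.linalg.eigvalsh(rho_t) > -1e-12)
```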
Consider a self-adjoint operator, $A$ (an “observable”). We will assume that $A$ has a pure point spectrum, and let $P_i$ be the projection onto the $i^{\text{th}}$ eigenspace of $A$, with eigenvalue $\lambda_i$.
When we measure $A$, quantum mechanics computes for us

- A classical probability distribution for the values on the readout panel of the measuring apparatus. The moments of this probability distribution are computed by taking traces. The $n^{\text{th}}$ moment is
  $$\langle a^n \rangle = \operatorname{Tr}(A^n \rho) = \sum_i \lambda_i^n \operatorname{Tr}(P_i \rho)$$
  In particular, the variance is $(\Delta a)^2 = \operatorname{Tr}(A^2\rho) - \left(\operatorname{Tr}(A\rho)\right)^2$.
- A change (which, under the assumptions stated, can be approximated as occurring instantaneously) in the density matrix,
  $$\rho \to \rho' = \sum_i P_i\, \rho\, P_i \qquad (2)$$
Thereafter, the system, described by the new density matrix, $\rho'$, again evolves unitarily, according to (1).
The new density matrix, $\rho'$, after the measurement1, can be completely characterized by two properties:
- All of the moments of $A$ are the same as before: $\operatorname{Tr}(A^n\rho') = \operatorname{Tr}(A^n\rho)$. (In particular, $\Delta a$ is unchanged.) Moreover, for any observable, $B$, which commutes with $A$ ($[A,B] = 0$), $\operatorname{Tr}(B\rho') = \operatorname{Tr}(B\rho)$.
- However, the measurement has destroyed all interference between the different eigenspaces of $A$:
  $$P_i\, \rho'\, P_j = 0, \qquad i \neq j$$
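As a sanity check, here is a small Python sketch (a randomly-chosen $5\times 5$ state, and an $A$ with a degenerate spectrum; the particular eigenvalues are just for illustration) verifying both properties of $\rho'$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Observable A with a degenerate pure point spectrum: eigenvalues
# (2, 2, -1, -1, 5) in a random orthonormal eigenbasis.
lams = np.array([2.0, 2.0, -1.0, -1.0, 5.0])
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5)))
A = Q @ np.diag(lams) @ Q.conj().T

# Projectors P_i onto the three distinct eigenspaces.
projs = []
for lam in np.unique(lams):
    cols = Q[:, lams == lam]
    projs.append(cols @ cols.conj().T)

# Random density matrix.
M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
rho = M @ M.conj().T
rho /= np.trace(rho).real

# The measurement map (2): rho -> rho' = sum_i P_i rho P_i.
rho_p = sum(P @ rho @ P for P in projs)

# Property 1: all moments of A are unchanged.
for n in range(1, 4):
    An = np.linalg.matrix_power(A, n)
    assert np.isclose(np.trace(An @ rho), np.trace(An @ rho_p))

# Property 2: no interference between different eigenspaces.
for i, Pi in enumerate(projs):
    for j, Pj in enumerate(projs):
        if i != j:
            assert np.allclose(Pi @ rho_p @ Pj, 0)
```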
Note that it is really important that I have assumed a pure point spectrum. If $A$ has a continuous spectrum, then you have to deal with complications, both physical and mathematical. Mathematically, you need to deal with the intricacies of the Spectral Theorem; physically, you have to put in finite detector resolutions, in order to make proper sense of what a “measurement” does. I’ll explain, later, how to deal with those complications.
Now consider two such observables, $A$ and $B$. The Uncertainty Principle gives a lower bound on the product of their uncertainties,

$$\Delta a\, \Delta b \geq \tfrac{1}{2} \left| \operatorname{Tr}\left(\rho\, [A,B]\right) \right| \qquad (3)$$

in any state, $\rho$. (Exercise: Generalize the usual proof, presented for “pure states,” to the case of density matrices.)
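For the impatient, here is a numerical spot-check of that exercise (random Hermitian $A$, $B$ and random density matrices; the dimension and the number of trials are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

def rand_herm(n):
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (X + X.conj().T) / 2

def rand_rho(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = M @ M.conj().T
    return rho / np.trace(rho).real

def variance(O, rho):
    mean = np.trace(O @ rho).real
    return np.trace(O @ O @ rho).real - mean**2

# Check  Delta_a Delta_b >= (1/2)|Tr(rho [A,B])|  on random data.
for _ in range(100):
    A, B, rho = rand_herm(n), rand_herm(n), rand_rho(n)
    lhs = np.sqrt(variance(A, rho) * variance(B, rho))
    rhs = 0.5 * abs(np.trace(rho @ (A @ B - B @ A)))
    assert lhs >= rhs - 1e-10
```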
As stated, (3) is not a statement about the uncertainties in any actual sequence of measurements. After all, once you measure $A$, in the state $\rho$, the density matrix changes, according to (2), to $\rho' = \sum_i P_i\, \rho\, P_i$, so a subsequent measurement of $B$ is made in a different state from the initial one.
The obvious next thing to try is to note that, since the uncertainty of $A$ in the state $\rho'$ is the same as in the state $\rho$, and since we are measuring $B$ in the state $\rho'$, we can apply the Uncertainty Relation, (3), in the state $\rho'$, instead of in the state $\rho$. Unfortunately,

$$\operatorname{Tr}\left(\rho'\, [A,B]\right) = 0 \qquad (4)$$

so this leads to an uninteresting lower bound on the product of the uncertainties,

$$\Delta a\, \Delta' b \geq 0 \qquad (5)$$

for a measurement of $A$ immediately followed by a measurement of $B$.
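The vanishing of the commutator expectation in the post-measurement state is easy to see ($P_i [A,B] P_i = \lambda_i P_i B P_i - \lambda_i P_i B P_i = 0$), and easy to confirm numerically (again with randomly-chosen, purely illustrative data):

```python
import numpy as np

rng = np.random.default_rng(3)

# A with pure point spectrum, its eigenspace projectors, and random rho, B.
lams = np.array([1.0, 1.0, 3.0, -2.0])
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
A = Q @ np.diag(lams) @ Q.conj().T
projs = [Q[:, lams == lam] @ Q[:, lams == lam].conj().T
         for lam in np.unique(lams)]

M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T
rho /= np.trace(rho).real
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
B = (B + B.conj().T) / 2

rho_p = sum(P @ rho @ P for P in projs)

# The post-measurement state assigns zero expectation to [A,B].
comm = A @ B - B @ A
assert abs(np.trace(rho_p @ comm)) < 1e-10
```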
It is, apparently, possible to derive a better lower bound on the product of the uncertainties of successive measurements (which is still, of course, weaker than the “naïve” $\tfrac{1}{2}\left|\operatorname{Tr}(\rho\,[A,B])\right|$, which is what you might have guessed for the lower bound, had you not thought about what (3) means). But I don’t know how to even state that result at the level of generality of the above discussion.
Instead, I’d like to discuss how one treats measurements when $A$ doesn’t have a pure point spectrum. When it’s discussed at all, this is treated very poorly in the textbooks.
Measuring Unbounded Operators
Let’s go straight to the worst case, of an unbounded operator, $A$, with a purely continuous spectrum. Such an operator has no (normalizable) eigenvectors at all. What happens when we measure such an observable? Clearly, the two conditions which characterized the change in the density matrix, in the case of a pure point spectrum,

1. $\operatorname{Tr}(A^n\rho') = \operatorname{Tr}(A^n\rho)$
2. $P_i\, \rho'\, P_j = 0$ for $i \neq j$

are going to have to be modified. The second condition clearly can’t hold, as stated, in the unbounded case, where there are no eigenspace projections, $P_i$, to speak of (think of $A = \hat{x}$ and $B = \hat{p}$). As to the first condition, we might hope that the moments of the classical probability distribution for the observed measurements of $A$ would be calculated by taking traces with the density matrix, $\rho$. But that probability distribution depends on the resolution of the detector, something which the density matrix, $\rho$, knows nothing about.
To keep things simple, let’s specialize to quantum mechanics on the real line, and take $A = \hat{x}$. Let’s imagine a detector which can measure the particle’s position with a resolution, $\delta$. Let’s define a projection operator

$$P_x = |\phi_x\rangle\langle\phi_x|, \qquad \phi_x(y) = (\pi\delta^2)^{-1/4}\, e^{-(y-x)^2/(2\delta^2)}$$

which reflects the notion that our detector has measured the position to be $x$, to within an accuracy $\delta$. Here, I’ve chosen a Gaussian; but really any acceptance function, peaked at $y = x$, and dying away sufficiently fast far from the peak, will do (and may more accurately reflect the properties of your actual detector).
But I only know how to do Gaussian integrals, so this one is a convenient choice.
This is, indeed, a projection operator: $P_x^2 = P_x$. But integrating over $x$ doesn’t quite give the completeness relation one would want,

$$\int dx\, P_x \neq 1$$

Instead we find

$$\left\langle y \left|\, \int dx\, P_x \,\right| z \right\rangle = \int dx\, \phi_x(y)\, \phi_x(z) = e^{-(y-z)^2/(4\delta^2)}$$

Rather than getting the $\delta$-function kernel of the identity back, we get a Gaussian-smeared version of it.
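With the Gaussian acceptance function above (width $\delta$, here `d`), the $x$-integral can be done in closed form, and checked numerically; the following Python sketch (grid and test points chosen arbitrarily) confirms that the kernel comes out Gaussian rather than a $\delta$-function:

```python
import numpy as np

# Acceptance function phi_x(y) = (pi d^2)^(-1/4) exp(-(y-x)^2/(2 d^2)).
# Integrating phi_x(y) phi_x(z) over x should give exp(-(y-z)^2/(4 d^2)),
# not delta(y - z).
d = 0.7
xs = np.linspace(-30, 30, 20001)  # integration grid for x
dx0 = xs[1] - xs[0]

def phi(x, y):
    return (np.pi * d**2) ** -0.25 * np.exp(-((y - x) ** 2) / (2 * d**2))

for y, z in [(0.0, 0.3), (-1.2, 0.5), (2.0, 2.0)]:
    kernel = np.sum(phi(xs, y) * phi(xs, z)) * dx0  # Riemann sum over x
    assert np.isclose(kernel, np.exp(-((y - z) ** 2) / (4 * d**2)),
                      atol=1e-6)
```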
To fix this, we need to consider a more general class of projection operators (here, again, the Gaussian acceptance function proves very convenient):

$$P_{x,p} = |\phi_{x,p}\rangle\langle\phi_{x,p}|, \qquad \phi_{x,p}(y) = (\pi\delta^2)^{-1/4}\, e^{i p y/\hbar}\, e^{-(y-x)^2/(2\delta^2)} \qquad (6)$$

These are still projection operators, $P_{x,p}^2 = P_{x,p}$. But now they obey the completeness relation

$$\int \frac{dx\, dp}{2\pi\hbar}\, P_{x,p} = 1 \qquad (7)$$

so we can now assert that the density matrix after measuring $x$ is

$$\rho' = \int \frac{dx\, dp}{2\pi\hbar}\, P_{x,p}\, \rho\, P_{x,p} \qquad (8)$$
If $\rho$ is represented by the integral kernel, $K(x,y)$:

$$(\rho\,\psi)(x) = \int dy\, K(x,y)\, \psi(y)$$

then the new density matrix, $\rho'$, is represented by the integral kernel

$$K'(x,y) = \frac{e^{-(x-y)^2/(2\delta^2)}}{\sqrt{2\pi\delta^2}} \int dz\, e^{-\left(z - \frac{x+y}{2}\right)^2/(2\delta^2)}\, K\!\left(z + \tfrac{x-y}{2},\; z - \tfrac{x-y}{2}\right)$$

Here we see clearly that it has the desired properties:

- The off-diagonal terms are suppressed: $K'(x,y) \to 0$ for $|x - y| \gg \delta$.
- The near-diagonal terms are smeared by a Gaussian, representing the finite resolution of the detector.
Moreover, the moments of the probability distribution for the measured value of $x$ are given by taking traces with $\rho'$:

$$\langle x^n \rangle = \operatorname{Tr}\left(\hat{x}^n \rho'\right)$$

One easily computes $\operatorname{Tr}(\hat{x}\rho') = \operatorname{Tr}(\hat{x}\rho)$ and $\operatorname{Tr}(\hat{x}^2\rho') = \operatorname{Tr}(\hat{x}^2\rho) + \delta^2$. So the intrinsic quantum-mechanical uncertainty of the position, $(\Delta x)^2$, in the state, $\rho$, adds in quadrature with the systematic uncertainty of the measuring apparatus to produce the measured uncertainty,

$$(\tilde{\Delta} x)^2 = (\Delta x)^2 + \delta^2$$

exactly as we expect.
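With the Gaussian conventions used here, the diagonal of the post-measurement kernel is just the position distribution convolved with a Gaussian of variance $\delta^2$. The following Python sketch (a pure Gaussian state of width `s`; all widths and grids are arbitrary illustrative choices) checks the in-quadrature addition numerically:

```python
import numpy as np

# For a pure Gaussian state psi of width s, Pr(x) = K'(x,x) is |psi|^2
# smeared with a Gaussian of variance d^2, so the measured variance
# should be the intrinsic variance (= s^2/2) plus d^2.
d, s = 0.8, 1.3
grid = np.linspace(-15, 15, 1501)
dx = grid[1] - grid[0]

psi2 = (np.pi * s**2) ** -0.5 * np.exp(-grid**2 / s**2)  # |psi(z)|^2
intrinsic_var = np.sum(grid**2 * psi2) * dx              # = s^2 / 2

# Pr(x) = (2 pi d^2)^(-1/2) * int dz exp(-(x-z)^2/(2 d^2)) |psi(z)|^2
G = (2 * np.pi * d**2) ** -0.5 * np.exp(
    -((grid[:, None] - grid[None, :]) ** 2) / (2 * d**2))
Pr = G @ psi2 * dx

assert np.isclose(np.sum(Pr) * dx, 1.0, atol=1e-6)  # trace preserved
measured_var = np.sum(grid**2 * Pr) * dx
assert np.isclose(measured_var, intrinsic_var + d**2, atol=1e-4)
```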
There’s one feature of this Gaussian measuring apparatus which is a little special. Of course, we expect that measuring $x$ should change the distribution for values of $p$. Here, the effect (at least on the first few moments) is quite simple:

$$\operatorname{Tr}(\hat{p}\,\rho') = \operatorname{Tr}(\hat{p}\,\rho), \qquad (\Delta' p)^2 = (\Delta p)^2 + \frac{\hbar^2}{\delta^2}$$

If we wanted to compute the effect of measuring $p$, using a Gaussian detector with systematic uncertainty $\hbar/\delta$, we would use the same projectors (6) and obtain the same density matrix (8) after the measurement. This leads to very simple formulæ for the uncertainties resulting from successive measurements. Say we start with an initial state, $\rho$, measure $x$ with a Gaussian detector with systematic uncertainty $\delta_1$, and then measure $p$ with another Gaussian detector with systematic uncertainty $\hbar/\delta_2$. The measured uncertainties are

$$(\tilde{\Delta} x)^2 = (\Delta x)^2 + \delta_1^2, \qquad (\tilde{\Delta} p)^2 = (\Delta p)^2 + \frac{\hbar^2}{\delta_1^2} + \frac{\hbar^2}{\delta_2^2} \qquad (9)$$
You can play around with other, non-Gaussian, acceptance functions to replace (6). You’re limited only by your ability to find a complete set of projectors, satisfying the analogue of (7) and, of course, by your ability to do the requisite integrals.
What you’ll discover is that the Gaussian acceptance function provides the best tradeoff (when, say, you measure ) between the systematic uncertainty in and the contribution to the quantum-mechanical uncertainty in , resulting from the measurement.
Update (9/20/2012):
I looked some more at the Ozawa paper whose “Universally valid reformulation” of the uncertainty principle this PRL proposes to test. Unfortunately, it doesn’t seem nearly as interesting as it did at first glance.
- For observables, $A$ and $B$, with pure point spectra, we can assume “ideal” measuring apparati (whose measured uncertainty equals the inherent quantum-mechanical uncertainty of the observable in the quantum state in which the measurement is made). In that case, his uncertainty relation (see (17) of his paper) reduces to the “uninteresting” (5). Of course that’s trivially satisfied. I believe that a stronger bound can be derived, in this case. But doing so requires more sophisticated techniques than Ozawa uses.
- For unbounded observables, like $\hat{x}$ and $\hat{p}$, we can see from what I’ve said above that the actual lower bound is stronger than the one Ozawa derives. Consider a measurement of $x$, immediately followed by a measurement of $p$. From (9), the product of the measured uncertainties2 satisfies

$$\tilde{\Delta} x\, \tilde{\Delta} p \geq \left(\tfrac{1}{2} + \sqrt{1 + \delta_1^2/\delta_2^2}\right)\hbar \geq \tfrac{3}{2}\hbar \qquad (10)$$

where the first inequality is saturated by an initial state, $\rho$, which is a pure state consisting of a Gaussian wave packet with carefully-chosen width,

$$(\Delta x)^2 = \frac{\delta_1^2\, \delta_2}{2\sqrt{\delta_1^2 + \delta_2^2}}$$
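Here is a quick numerical check ($\hbar = 1$) of that minimization: scanning over the width $a = (\Delta x)^2$ of a minimal-uncertainty Gaussian packet, with the measured uncertainties as in (9), the minimum of the squared product matches $\left(\tfrac{1}{2} + \sqrt{1 + \delta_1^2/\delta_2^2}\right)^2$ (the detector resolutions `d1`, `d2` are arbitrary):

```python
import numpy as np

# Product of measured variances, for a minimal-uncertainty Gaussian
# packet with (Delta x)^2 = a, (Delta p)^2 = 1/(4a), hbar = 1:
#   (a + d1^2) * (1/(4a) + 1/d1^2 + 1/d2^2)
d1, d2 = 0.9, 1.7
a = np.linspace(1e-3, 10, 2000001)
prod2 = (a + d1**2) * (1 / (4 * a) + 1 / d1**2 + 1 / d2**2)

bound = (0.5 + np.sqrt(1 + d1**2 / d2**2)) ** 2
assert np.isclose(prod2.min(), bound, rtol=1e-6)
assert prod2.min() >= 1.5**2  # and hence >= (3 hbar / 2)^2
```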
Update (9/28/2012):
Here is, at least, one lower bound (stronger than Ozawa’s stupid bound) for the product of measured uncertainties when $A$ and $B$ have pure point spectra. Let

$$B' = \sum_i P_i\, B\, P_i$$

and $B'' = B - B'$. We easily compute

$$\operatorname{Tr}(\rho' B) = \operatorname{Tr}(\rho B'), \qquad \operatorname{Tr}(\rho' B^2) = \operatorname{Tr}(\rho B'^2) + \operatorname{Tr}(\rho' B''^2)$$

and hence

$$(\Delta a)^2 (\Delta' b)^2 = (\Delta a)^2 (\Delta b')^2 + (\Delta a)^2\, \operatorname{Tr}(\rho' B''^2) \qquad (11)$$

where $(\Delta b')^2$ is the variance of $B'$ in the state $\rho$.

Since $[A, B'] = 0$, the first term is bounded below3 by

$$(\Delta a)^2 (\Delta b')^2 \geq \tfrac{1}{4} \left| \operatorname{Tr}\left(\rho\, \left\{A - \langle a\rangle,\; B' - \langle b'\rangle\right\}\right) \right|^2$$

The second term is also positive-semidefinite.
For the classic case of a 2-state system, with $A = \sigma_3$ and $B = \sigma_1$ (the system considered by the aforementioned PRL), we see that $B' = 0$, and the product of uncertainties is entirely given by the second term of (11).
The most general density matrix for the 2-state system is parametrized by the unit 3-ball,

$$\rho = \tfrac{1}{2}\left(1 + \vec{a}\cdot\vec{\sigma}\right), \qquad |\vec{a}| \leq 1$$

The points on the boundary, $|\vec{a}| = 1$, correspond to pure states.
Upon measuring $A = \sigma_3$, the density matrix after the measurement is

$$\rho' = \tfrac{1}{2}\left(1 + a_3 \sigma_3\right)$$

and, for a subsequent measurement of $B = \sigma_1$,

$$\Delta a\, \Delta' b = \sqrt{1 - a_3^2}$$

as “predicted” by (11).
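The 2-state example above is easy to verify directly (the particular Bloch vector below is an arbitrary choice):

```python
import numpy as np

# Measure A = sigma_3, then B = sigma_1, starting from
# rho = (1 + a.sigma)/2.  The product of uncertainties should be
# sqrt(1 - a3^2), independent of a1 and a2.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

a = np.array([0.3, -0.4, 0.5])  # |a| <= 1
rho = 0.5 * (I2 + a[0] * s1 + a[1] * s2 + a[2] * s3)

# Collapse onto the sigma_3 eigenspaces.
P_up = np.array([[1, 0], [0, 0]], dtype=complex)
P_dn = np.array([[0, 0], [0, 1]], dtype=complex)
rho_p = P_up @ rho @ P_up + P_dn @ rho @ P_dn

assert np.allclose(rho_p, 0.5 * (I2 + a[2] * s3))  # rho' = (1 + a3 s3)/2

def variance(O, r):
    mean = np.trace(O @ r).real
    return np.trace(O @ O @ r).real - mean**2

product = np.sqrt(variance(s3, rho) * variance(s1, rho_p))
assert np.isclose(product, np.sqrt(1 - a[2] ** 2))
```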
1 Frequently, one wants to ask questions about conditional probabilities: “Given that a measurement of $A$ yields the value $\lambda_i$, what is the probability distribution for a subsequent measurement of $B$ …?” To answer such questions, one typically works with a new (“projected”) density matrix,

$$\rho_i = \frac{P_i\, \rho\, P_i}{\operatorname{Tr}(P_i\, \rho)}$$

where the normalization factor is required to make $\operatorname{Tr}\rho_i = 1$. The formalism in the main text of this post is geared, instead, to computing joint probability distributions.
2 Ozawa’s inequality isn’t for the product of the measured uncertainties, $\tilde{\Delta} x\, \tilde{\Delta} p$, but rather for the product, $\tilde{\Delta} x\, \Delta' p$, of the measured uncertainty in $x$ with the quantum-mechanical uncertainty in $p$ in the state which results from the $x$-measurement. To obtain this, just mentally set $\delta_2 = \infty$ in the above formulæ.
3 Let $\hat{A} = A - \operatorname{Tr}(\rho A)$ and $\hat{B} = B - \operatorname{Tr}(\rho B)$, and consider, for $t \in \mathbb{R}$,

$$0 \leq \operatorname{Tr}\left(\rho\, \left(\hat{A} + t e^{i\theta}\hat{B}\right)^\dagger \left(\hat{A} + t e^{i\theta}\hat{B}\right)\right) = (\Delta a)^2 + t\, \operatorname{Tr}\left(\rho\left(e^{i\theta}\hat{A}\hat{B} + e^{-i\theta}\hat{B}\hat{A}\right)\right) + t^2 (\Delta b)^2$$

This is a quadratic expression in $t$, which is positive-semidefinite for all real $t$. Thus, the discriminant must be negative-semidefinite. For $\theta = \pi/2$, this yields the conventional uncertainty relation,

$$\Delta a\, \Delta b \geq \tfrac{1}{2}\left|\operatorname{Tr}\left(\rho\, [A,B]\right)\right|$$

For $\theta = 0$, it yields

$$\Delta a\, \Delta b \geq \tfrac{1}{2}\left|\operatorname{Tr}\left(\rho\, \{\hat{A},\hat{B}\}\right)\right|$$

which is an expression you sometimes see, in the higher-quality textbooks.
Re: Uncertainty
What do you think about this analysis?
http://motls.blogspot.com/2012/09/pseudoscience-hiding-behind-weak.html?m=1