In this article, we'll assume the de Broglie relation for all particles -- i.e. that their momentum is given by $p=hf$, where $f$ is the spatial frequency of the wave. This is actually quite an incredible assumption, even if not a surprising one -- we've accepted that a particle is a wave in the sense of probability (the wave describes the probability amplitude density of finding it at some point), but why should the spatial frequency of the probability wave relate at all to its momentum?
Well, it's natural to find this assumption unsatisfactory. We were also quite liberal in assuming the de Broglie relation earlier, when motivating quantum theory -- we'll later produce some motivation for the de Broglie relation for photons, and discuss derivations from quantum mechanics, axiomatising our theory clearly to eliminate circularities. But for now, let's not.
The key point of $p=hf$ is that for a sinusoidal wave $e^{i \cdot 2\pi f \cdot x}$ (so the probability density is uniform, and the standard deviation in the observation of the particle's position is infinite), the momentum takes a specific definite value, $hf$, with zero standard deviation.
Well, what if the wavefunction isn't a simple sinusoid, but some other distribution $\Psi(x)$? If you did all the assigned exercises in the first article, you should know the answer (if not, work it out before reading on). Classically, if you could write that wavefunction as a sum of sinusoids (i.e. use a Fourier transform), then each sinusoid would have its own momentum and there would be some chunk of your matter in each of those momenta, forming a momentum distribution. In quantum mechanics, you can't have chunks of a single quantum, so this distribution is a probability distribution (a probability amplitude distribution, really, because we want superposition). We'll use the notation $\Psi(p)$ to represent this "momentum-space wavefunction", and we'll see why soon.
So it's not too hard to see that the frequency distribution is simply the Fourier transform of $\Psi(x)$, while the momentum-space wavefunction is given by:
$$\Psi(p)=\frac1h \mathcal{F}_x^{p/h}(\Psi(x))$$
Where $\mathcal{F}_x^{p/h}(\Psi(x))$ is the Fourier transform of $\Psi(x)$ (which is a function of $f$) written with the variable substitution $f=p/h$. Note that we're considering the non-normalised Fourier transform, in terms of ordinary frequencies.
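To make this concrete, here's a small numerical sketch (assuming numpy; the Gaussian example and grid are my own illustrative choices, not from the article). The self-dual Gaussian $e^{-\pi x^2}$ has ordinary-frequency Fourier transform $e^{-\pi f^2}$, so its momentum-space wavefunction is $\Psi(p)=\frac1h e^{-\pi (p/h)^2}$:

```python
import numpy as np

# Illustrative check: the non-normalised, ordinary-frequency Fourier
# transform of psi(x) = exp(-pi x^2) is exp(-pi f^2), so
# Psi(p) = (1/h) exp(-pi (p/h)^2).
x = np.linspace(-10, 10, 4001)
psi = np.exp(-np.pi * x**2)

def ft(f):
    """Non-normalised Fourier transform in ordinary frequencies."""
    return np.trapz(psi * np.exp(-2j * np.pi * f * x), x)

for f in (0.0, 0.5, 1.0):
    assert abs(ft(f) - np.exp(-np.pi * f**2)) < 1e-8
```

The quadrature converges very quickly here because the Gaussian decays so fast; any similarly smooth, decaying wavefunction would work.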
Well, $\Psi(x)\, dx$ and $\Psi(p)\, dp$ are just the representations of the state vector in the position and momentum bases respectively. So the Fourier transform acts as a change-of-basis matrix from the position basis to the momentum basis. I.e.
$$|\psi\rangle_P=F|\psi\rangle_X$$
The inverse change-of-basis matrix $F^{-1}$ has columns that are precisely the eigenstates of the momentum operator written in the position basis, and the corresponding eigenvalues are the actual values of the momenta. So we have eigenstates $\frac1h e^{ix \cdot 2\pi p / h} dp$ with corresponding eigenvalues $p$.
Before going any further, let's make sure we know exactly what this means: our change-of-basis matrix $F^{-1}$ is an uncountably infinite-dimensional "matrix" whose "indices" are denoted as $(x,p)$ in the rows-by-columns format. Its general entry is $\frac1h e^{ix \cdot 2\pi p / h} dp$, and -- here's the important bit -- each column holds $p$ constant and varies $x$; i.e. each column, each eigenstate of $P$, is a function of $x$.
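This "matrix" picture can be mimicked numerically on a finite grid (a sketch assuming numpy; the grids and the Gaussian test state are my own illustrative choices). Here the discretised $F$ has rows indexed by $f$ and columns by $x$, with entry $e^{-2\pi i f x}\,dx$:

```python
import numpy as np

# Discretised change-of-basis "matrix": entry (f, x) is e^{-2 pi i f x} dx.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
f = np.linspace(-2, 2, 81)
F = np.exp(-2j * np.pi * np.outer(f, x)) * dx  # rows: f, columns: x

psi_x = np.exp(-np.pi * x**2)  # self-dual Gaussian in the position basis
psi_f = F @ psi_x              # discretised |psi>_P = F |psi>_X
assert np.allclose(psi_f, np.exp(-np.pi * f**2), atol=1e-6)
```

Each row of $F^{-1}$ (equivalently, each column once transposed back) is a sampled plane wave: hold $p$ fixed and vary $x$, exactly as described above.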
Anyway, so we're looking for a linear operator $P$ solving the eigenvalue problem (and we're just ignoring the scalar multiples):
$$P e^{ix \cdot 2\pi p / h} = pe^{ix \cdot 2\pi p / h}$$
It should be quite clear that the operator we're looking for is:
$$\begin{align}P &= \frac{h}{2\pi i}\frac{\partial}{\partial x} \\
&= -i\hbar \frac{\partial}{\partial x} \end{align}$$
We need to be clear that this is the representation of the momentum operator in the position basis -- in the momentum basis, its representation is simply "$p$" (i.e. its action on each eigenstate $|p\rangle$ is to multiply it by $p$). Similarly, it should be easy to show that in the momentum basis,
$$X=i\hbar\frac{\partial}{\partial p}$$
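Both eigenvalue equations are easy to verify symbolically (a quick sketch using sympy, purely for illustration):

```python
import sympy as sp

x, p = sp.symbols('x p', real=True)
h = sp.symbols('h', positive=True)
hbar = h / (2 * sp.pi)

# P = -i hbar d/dx in the position basis: the plane wave e^{i x 2 pi p / h}
# is an eigenstate with eigenvalue p.
psi_p = sp.exp(sp.I * x * 2 * sp.pi * p / h)
assert sp.simplify(-sp.I * hbar * sp.diff(psi_p, x) - p * psi_p) == 0

# X = i hbar d/dp in the momentum basis: e^{-i x 2 pi p / h} is an
# eigenstate with eigenvalue x.
phi_x = sp.exp(-sp.I * x * 2 * sp.pi * p / h)
assert sp.simplify(sp.I * hbar * sp.diff(phi_x, p) - x * phi_x) == 0
```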
Exercise: make sure you clearly know and understand what the eigenvectors and eigenvalues of $X$ and $P$ are, in both the position and momentum bases. Hint: something about the Dirac delta function.
Derivation of the Heisenberg and Robertson-Schrödinger uncertainty principles
We can derive a variety of "uncertainty principles" -- inequalities showing a trade-off between the certainties of two observables -- with some basic algebraic manipulation. It is important to note that none of these individual uncertainty principles is really much more fundamental than any of the others (or at least I don't see in what way they can be) -- one can always make stronger bounds for the uncertainty, and many stronger bounds exist than the ones we're showing here -- but the concept of an uncertainty principle is crucial, in that it demonstrates the rigorous difference between quantum mechanics and statistical physics. In general, the noncommutativity of observables (having no shared eigenbasis) is something that has no analog in classical physics.
OK. So we'll show two statements about the product of uncertainties of two observables, $(\langle A^2\rangle - \langle A\rangle^2)^{1/2}(\langle B^2 \rangle - \langle B \rangle^2)^{1/2}$. Once again, there is nothing special about the specific relations we will show -- we could consider combinations other than products, like $\Delta a^2 + \Delta b^2$, and indeed, uncertainty relations exist for such terms.
Defining $A'=A-\langle A\rangle$ and $B'=B-\langle B\rangle $ for Hermitian (this is important!) $A$ and $B$, we see that:
$$\begin{align}
\langle A'^2\rangle \langle B'^2 \rangle &= \langle \psi | A'^2 | \psi \rangle \langle \psi | B'^2 | \psi \rangle \\
&= \langle A' \psi | A' \psi \rangle \langle B' \psi | B' \psi \rangle \\
&\ge |\langle \psi | A' B' | \psi \rangle| ^ 2 \\
&= \left|\frac12 \langle\psi|A'B'+B'A'|\psi\rangle + \frac12\langle\psi|A'B'-B'A'|\psi\rangle\right|^2 \\
&= \frac14 |\langle\psi|A'B'+B'A'|\psi\rangle|^2 + \frac14|\langle\psi|A'B'-B'A'|\psi\rangle|^2 \\
&= \frac14 |\langle \{A-\langle A\rangle, B-\langle B\rangle\} \rangle| ^2 + \frac14 |\langle [A,B]\rangle|^2\\
&= \frac14 |\langle\{A,B\} \rangle - 2\langle A\rangle \langle B\rangle |^2 + \frac14|\langle[A,B]\rangle|^2\\
\Rightarrow \Delta a\,\Delta b &\ge \frac12 \sqrt{|\langle\{A,B\} \rangle - 2\langle A\rangle \langle B\rangle |^2 + |\langle [A,B]\rangle|^2}
\end{align}$$
This is the Robertson-Schrödinger relation.
(Guide in case you get stuck somewhere -- line 3 is the Cauchy-Schwarz inequality; line 4 splits $A'B'$ into Hermitian and anti-Hermitian parts; line 5 uses the fact that the first expectation is real and the second purely imaginary, so the squared magnitude splits. I'm not sure I can give any better motivation for specifically considering the product of the standard deviations -- like I said, these specific relations are not really that fundamental. I guess we just want to illustrate the point of "the" uncertainty principle, regardless of the specific ways in which it is treated, and would like to get a simple form for it, regardless of how weak or strong it may be.)
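As a sanity check on the algebra, the bound can be verified numerically for random Hermitian operators (a sketch assuming numpy; the dimension, seed, and state are arbitrary choices of mine, not from the article):

```python
import numpy as np

rng = np.random.default_rng(42)

def rand_hermitian(n):
    """Random Hermitian matrix (illustrative)."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

n = 4
A, B = rand_hermitian(n), rand_hermitian(n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

ev = lambda M: psi.conj() @ M @ psi             # <psi|M|psi>
da = np.sqrt(ev(A @ A).real - ev(A).real ** 2)  # Delta a
db = np.sqrt(ev(B @ B).real - ev(B).real ** 2)  # Delta b
anti = ev(A @ B + B @ A) - 2 * ev(A) * ev(B)    # <{A,B}> - 2<A><B> (real)
comm = ev(A @ B - B @ A)                        # <[A,B]> (purely imaginary)
bound = 0.5 * np.sqrt(abs(anti) ** 2 + abs(comm) ** 2)
assert da * db >= bound - 1e-12
```

Dropping the anticommutator term gives the weaker Heisenberg bound, which of course also holds for the same operators and state.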
One may weaken the inequality further (this is equivalent to dropping the real part in line 4, since the magnitude of a complex number is at least that of its imaginary part), writing:
$$\Delta a\,\Delta b \ge \frac12 |\langle [A,B]\rangle|$$
This is the Heisenberg uncertainty relation. In particular, in the last article, we showed that for the position and momentum operators, $[X,P]=i\hbar$. So in this case, we get the celebrated inequality:
$$\Delta x\, \Delta p \ge \frac{\hbar}{2}$$
for canonically conjugate $X$ and $P$.
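For a Gaussian wavefunction the bound is saturated: $\Delta x\, \Delta p = \hbar/2$ exactly. A numerical sketch (assuming numpy, in illustrative units where $h=1$, using the self-dual Gaussian and its Fourier transform):

```python
import numpy as np

# Units where h = 1, so hbar = 1/(2 pi). The Gaussian is an illustrative choice.
h = 1.0
hbar = h / (2 * np.pi)

x = np.linspace(-10, 10, 4001)
psi_x = np.exp(-np.pi * x**2)             # position-space wavefunction
prob_x = np.abs(psi_x)**2
prob_x /= np.trapz(prob_x, x)
dx = np.sqrt(np.trapz(x**2 * prob_x, x))  # <x> = 0 by symmetry

p = np.linspace(-10, 10, 4001)
psi_p = (1 / h) * np.exp(-np.pi * (p / h)**2)  # its momentum-space wavefunction
prob_p = np.abs(psi_p)**2
prob_p /= np.trapz(prob_p, p)
dp = np.sqrt(np.trapz(p**2 * prob_p, p))

# Minimum-uncertainty state: Delta x * Delta p = hbar / 2.
assert abs(dx * dp - hbar / 2) < 1e-9
```

Any non-Gaussian wavefunction would give a strictly larger product, which is one way to see that Gaussians are the minimum-uncertainty states.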
As mentioned before, other stronger uncertainty relations exist for general observables. Some examples can be found on the Wikipedia page Stronger uncertainty relations (permalink).