Written by: Christoph Adami

Primary Source: Spherical Harmonics

So this is the final installment of the “On quantum measurement” series. You may have arrived here by reading all previous parts in one sitting (I’ve heard of such feats in the comments). This is the apotheosis: what all these posts have been gearing up to. If, for some reason that only the Internets know, you have arrived here without the benefit of the first six installments, I’ll provide you with the link to the very first installment, but I won’t summarize all the posts, out of deference to all the readers who got here the conventional way.

The Copenhagen Interpretation of quantum mechanics, as I’m sure all of you who have arrived at Part 7 are aware, is a view of the meaning of quantum mechanics promulgated mostly by the Danish physicist Niels Bohr, and codified in the 1920s, that is, the “heyday” of quantum physics. Quantum mechanics can be baffling to be sure, and there are multiple attempts to square what we observe experimentally with our common sense. The Copenhagen Interpretation is an extreme view (in my opinion) of how to make sense of the reflection of the quantum world in our classical measurement devices. So, at its very core, the Copenhagen Interpretation muses about the relationship of the classical to the quantum world.

As a young student of quantum mechanics in the early eighties, I was a bit baffled by this right away. If the true underlying physics is quantum (I mused), so that the classical world is just an approximation of the quantum one, how can we have “theorems” that codify the relationship between quantum and classical systems?

I won’t write a treatise here about the Copenhagen Interpretation. I’ve already linked the Wikipedia article about it, which should get those of you who are not yet groaning up to speed. I’ll just list the two central “things” that are taught just about everywhere quantum mechanics is taught, and that can be squarely traced back to Bohr’s school.

1. Physical systems do not have definite properties prior to being measured, but instead should be described by a set of probabilities

2. The act of measurement changes the quantum system, so that it takes on only one of the previous possibilities (wave function collapse, or reduction)

Yes, the general understanding of the Copenhagen Interpretation is more multi-faceted, but for the purpose of this post I will focus on the collapse of the wave function. When I first fully understood what that meant, it was immediately clear to me that this was just a load of crap. I knew of no law of physics that could engender such a collapse, and it violated everything I believed in (such as conservation of probabilities). You who read this blog so ardently already know this: it makes no sense from the point of view of information theory.

Now, quantum information theory did not exist around the time of Bohr (and Heisenberg, who must carry some of the blame for the Copenhagen Interpretation). And maybe the two should get a pass for this simple reason, except for the fact that John von Neumann, as I have pointed out in another post, had the foundations of quantum information theory already worked out in 1932, two years after the first “definitive” treatise on the “Copenhagen spirit” was published by Heisenberg.

So you, faithful reader, come to this post well prepared. You already know that Hans Bethe told me and my colleague Nicolas Cerf that we showed that wave functions don’t collapse, you know that John von Neumann almost discovered quantum information theory in the 30s, and that quantum measurement is very different from its classical counterpart because copying is not allowed in the quantum world. You know where Born’s rule comes from, and you pondered the utility of quantum Venn diagrams. You were promised a discussion of Schrödinger’s cat, but that never materialized. Instead, you were given a discussion of the quantum eraser. Arguably, that is a more interesting system, but I understand if you are miffed. But to make it up to you, now we get to the quantum grand-daddy of them all. I will show you that the Copenhagen Interpretation is not only toast theoretically, but that it is possible to design experiments that will show this. Or they will show that I’m full of the aforementioned crap. Either way, it is going to be exciting.

In this post, I will reveal to you the mathematical beauty and elegance of consecutive measurements performed on the same quantum system. I will also show you how looking at three measurements in a row (but not two), will reveal to you that the Copenhagen Interpretation is now history, ripe for the trash heap of ill-conceived concepts in theoretical physics. All of what I’m going to tell you is an extension of the picture that Nicolas Cerf and I wrote about in 1996, and which Bethe understood immediately after we showed him our results, while it took us six months to understand what he told us. But it is an extension that took some time to clarify, so that the indictment of Bohr (and implicitly Heisenberg) and the collapse picture of measurement is unambiguous, and most importantly, experimentally verifiable.

Let’s get right into the thick of things. But getting started may really be the hardest thing here. Say you want to measure a quantum system. But you know absolutely nothing about it. How do you write such a quantum system?

In general, people write arbitrary quantum states like this: \(|Q\rangle=\sum_i\alpha_i |i\rangle\), with complex coefficients \(\alpha_i\) that satisfy \(\sum_i|\alpha_i|^2=1\). But you may ask, “Who told you what basis to write this quantum state in? The basis states \(|i\rangle\), I mean”. After all, the amplitudes \(\alpha_i\) only make sense with respect to a particular basis system: if you transform this basis to another (as we will do a lot in this post), the coefficients change. “So haven’t you already assumed a lot by writing the quantum state like that?” (You may remember questions like that from a blog post on classical information, and this is no accident.)

If you think about this problem for a little while, you realize that indeed the coefficients and the basis you choose are crucial. Just as in classical information theory where I told you that the entropy of a system was undefined, and determined only by the measurement device that you were about to use to learn about it, the state of an arbitrary quantum system only makes sense relative to the quantum states of the detector that you are about to use to measure it. This is, essentially, what is at the heart of the “relative state” formalism of quantum mechanics, due to Everett, of course. That fellow Hugh Everett does not get as much recognition as he deserves, so I’ll let you gaze at him for a little while.

H. Everett III (1930-1982). Source: Wikimedia

He cooked up his theory as a graduate student, but as nobody believed his theory at the time, he left quantum physics and became a defense analyst.

You may expect me to launch into a description and discussion of the “many-worlds” interpretation of quantum mechanics, which became a fad in the 1970s, but I won’t. It is silly to call the relative-state picture a “many-worlds” interpretation, because it does not propose at all that at every quantum measurement event the universe splits into so many worlds as there are orthogonal states. This is beyond silly in fact (it was also not at all advocated by Everett), and the people who did coin these terms should be ashamed of themselves (but I won’t name them here). My re-statement of Everett’s theory in the modern language of quantum information theory can be read here, and in any case Zeh (in 1973) and Deutsch (in 1985) before me had understood much about Everett’s theory without imagining some many-worlds voodoo.

So let us indeed talk about a quantum state by writing it in terms of the basis states of the measurement device we are about to examine it with. Because that is all we can do, ever. Just as we have learned in the first six installments of this series, we will measure the quantum state using an ancilla A, with orthogonal basis states \(|i\rangle_A\). I wrote the ‘A’ as a subscript to distinguish them from the quantum states, but later I will drop the subscript once you are used to the notation.

Now look what happens if I measure \(|Q\rangle=\sum_i\alpha_i |a_i\rangle\) with A (to distinguish the quantum states, written in terms of A’s basis from the A Hilbert space, we simply write them as \(|a_i\rangle\)). The probability to observe the quantum state in state *i* is (you remember of course Part 4)

\(p_i=|\langle a_i|Q\rangle|^2=|\alpha_i|^2.\)

Now get this: You’re supposed to measure a random state, but the probability distribution you obtain is not random at all, but given by the probability distribution \(p_i=|\alpha_i|^2\), which is not uniform. This makes no sense at all. If \(|Q\rangle\) was truly arbitrary, then on average you should see \(p_i=1/d\) (the uniform distribution), where d is the dimension of the Hilbert space. So an arbitrary unknown quantum state, written in terms of the basis states of the apparatus that we are going to measure it in, should be (and must be) written as

\(|Q\rangle=\frac1{\sqrt d}\sum_{i=1}^d |a_i\rangle.\)

Now, each outcome i is equally likely, as it should be if you are measuring a state that *nobody prepared beforehand*. A random state. With maximum entropy.
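As a quick sanity check, the Born probabilities for this maximum-entropy state can be verified in a few lines of numpy. This is my own illustrative sketch (not code from the post), and the dimension d=4 is an arbitrary choice:

```python
import numpy as np

d = 4                                  # Hilbert-space dimension (arbitrary choice)
Q = np.ones(d) / np.sqrt(d)            # the maximum-entropy state |Q> above
basis = np.eye(d)                      # detector basis states |a_i> as rows
p = np.abs(basis @ Q) ** 2             # Born rule: p_i = |<a_i|Q>|^2
assert np.allclose(p, 1 / d)           # every outcome is equally likely
```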

So now we got this out of the way: We know how to write the to-be-measured state. Except that we assumed that the system *Q* had never interacted with anything (or was measured by anything) before. This also is a nonsense assumption. All quantum states are entangled: there is no such thing as a “pristine” quantum system. Fortunately, we know exactly how to describe that: we can write the quantum wavefunction so that it is entangled with an arbitrary “reference” state R:

\(|QR\rangle=\frac1{\sqrt{d}}\sum_i|a_i\rangle_Q|r_i\rangle_R\)

You can think of R as all the measurement devices that Q has interacted with in the past: who are we to say that A is really the first? Now we don’t know really what all these R states are, so we just trace them out, so that the Q density matrix is the familiar

\(\rho_Q=\frac1d\sum_i |a_i\rangle\langle a_i|.\)
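This density matrix is easy to verify numerically. In the following minimal sketch (mine, assuming qubits, d=2), the joint state \(|QR\rangle\) is stored as a coefficient matrix whose rows index Q and whose columns index R; tracing out R then amounts to a single matrix product:

```python
import numpy as np

d = 2
# |QR> = (1/sqrt d) sum_i |a_i>_Q |r_i>_R, stored as coefficients C[i, j]
C = np.eye(d) / np.sqrt(d)             # rows: Q index, columns: R index
rho_Q = C @ C.conj().T                 # partial trace over R: rho_Q = C C^dagger
assert np.allclose(rho_Q, np.eye(d) / d)   # maximally mixed, as claimed
```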

After we measured the state with A, the joint state QRA is now (the previous posts tell you how to do this)

\(|QRA\rangle=\frac1{\sqrt d}\sum_i |a_i\rangle|r_i\rangle_R|i\rangle_A.\)(1)

Don’t worry about the R system too much: the Q density matrix is still the same as above, and I have to skip the reason for that here. You can read about it in the paper. Oh yes, there is a paper. Read on.

This is, after all, the post about consecutive measurements, so we will measure Q again, but this time with ancilla B, which is not in the same basis as A. (If it were, the result would be trivial: you’d just get the same outcome over and over again, like all the pieces of the measurement device A agreeing on the result.)

So we will say that the B eigenstates are *at an angle* with the A eigenstates:

\(\langle b_j|a_i\rangle=U_{ij}\)

This just means that what is a zero or one in one of the measurement devices (if we are measuring qubits) is going to be a superposition in the other’s basis. U is a unitary matrix. For qubits, a typical U will look like this:

\(U=\begin{pmatrix} \cos(\theta) & -\sin(\theta)\\ \sin(\theta)& \cos(\theta)\\ \end{pmatrix}\)

where θ is the angle between the bases. (Yes, it is a special case, but it will suffice.)
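For concreteness, here is that rotation as a short numpy sketch (an illustration of mine, not code from the post). At θ = π/4, every \(|U_{ij}|^2\) equals 1/2, which is exactly the 45-degree case used later in this post:

```python
import numpy as np

def rotation(theta):
    """Basis change U_ij = <b_j|a_i> between detector bases A and B."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

U = rotation(np.pi / 4)                    # B at 45 degrees to A
assert np.allclose(U @ U.T, np.eye(2))     # unitary (here: real orthogonal)
assert np.allclose(np.abs(U) ** 2, 0.5)    # 50/50 outcome probabilities
```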

To measure Q with B (after we measured it with A, of course) we have to write Q in terms of B’s eigenstates, and then measure. What you get is a wave function that has Q entangled not only with its past (R), but both A and B as well:

\(|QRAB\rangle=\frac1{\sqrt d}\sum_{ij}U_{ij}|b_j\rangle|i\rangle_R|i\rangle_A|j\rangle_B.\)(2)

You might think that this looks crazy complicated, but the result is really quite simple. And it agrees with everything that has been written about consecutive measurements so far, whether they advocated a collapse picture or a unitary “relative state” picture. For example, the joint density matrix of just the two detectors, \(\rho_{AB}\), is just

\(\rho_{AB}=\frac1d\sum_i|i\rangle\langle i|\otimes\sum_j|U_{ij}|^2|j\rangle\langle j |.\)

That this is the “standard” result will dawn on you when you notice that \(|U_{ij}|^2\) is the conditional probability to measure outcome *j* with B given that the previous measurement (with A) gave you outcome *i* (with probability \(1/d\), of course).
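You can check this structure numerically. In this sketch of mine (qubits, with the 45-degree angle assumed), \(\rho_{AB}\) comes out fully diagonal, with entries \(\frac1d|U_{ij}|^2\):

```python
import numpy as np

d = 2
theta = np.pi / 4                          # assumed angle between A and B
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

rho_AB = np.zeros((d * d, d * d))
for i in range(d):
    P_i = np.zeros((d, d)); P_i[i, i] = 1.0      # |i><i| on A
    rho_B = np.diag(np.abs(U[i]) ** 2)           # sum_j |U_ij|^2 |j><j| on B
    rho_AB += np.kron(P_i, rho_B) / d            # weight 1/d per outcome i

assert np.allclose(rho_AB, np.diag(np.diag(rho_AB)))  # fully diagonal
assert np.allclose(np.diag(rho_AB), 1 / (d * d))      # all entries 1/4 at 45 deg
```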

Fair warning: if you have not understood this result, you should probably not go on reading. Go on if you must, but remember to go back to this result.

Also, keep in mind that I will from now on use the index *i* for the system A, the index *j* for system B, and later on I will use *k* for system C. And I won’t continually indicate the state with a bothersome subscript like \(|i\rangle_A\). Because that is how I roll.

So here is what we have achieved. We have written the physics of consecutive quantum measurements performed on the same system in a manifestly unitary formalism, where wavefunctions do not collapse, and the joint wavefunction of the quantum system, entangled with *all* the measurements that have preceded our measurements, along with our recent attempts with A and B, exists in a superposition, with all the possibilities (realized or not) still present. And the resulting density matrix along with all the probabilities agree precisely with what has been known since Bohr, give or take.

And the whispers of “Chris, what other ways do you know to waste your time, besides I mean, blogging?” are getting louder.

But wait. There is the measurement with C that I advertised. You might think (along with anybody who has ever contemplated this calculation) “Why would things change?” But they will. The third measurement will show a dramatic difference, and once we’re done you’ll know why.

First, we do the boring math. You could do this yourself, given that you followed along well enough to derive Eqs. (1) and (2). You just use a unitary U′ to encode the angle between the measurement system C and the system B (just like U described the rotation between systems A and B), and the result (after tracing out the quantum system Q and the reference system R, since no one is looking at those) looks innocuous enough:

\(\rho_{ABC}=\frac1d\sum_i|i\rangle\langle i|\otimes\sum_{jj'}U_{ij}U^{*}_{ij'}|j\rangle\langle j'|\otimes \sum_k U'_{jk} U'^{*}_{j'k}|k\rangle\langle k|.\)(3)

Except after looking this formula over a couple of times, you squint. And then you go “Hold on, hold on”.

“The B measurement!” you exhale. After measuring with B, the device was diagonal in the measurement basis (this means that the density matrix was like \(|j\rangle \langle j|\)). But now you measured Q again, and B is not diagonal anymore (now it is like \(|j\rangle \langle j'|\)). How is that possible?

Well, it is the law, is all I can tell you. Quantum mechanics requires it. Density matrices, after all, only tell us part of the story (since you are tracing out the entire history of measurements). That story could be full of lies, and here it turns out it actually is.

It is the *last* measurement that gives a density matrix that is diagonal in the measurements basis, always. Oh, and the first one, if you measure an arbitrary unknown state. That’s two. To see that things can be different, you need a third. The one in between.

To see that Eq. (3) is nothing like what you are used to, let’s see what a collapse picture would give you. A detailed calculation using the conventional formalism leads to (the superscript “coll” is to remind you that this is NOT the result of a unitary calculation)

\(\rho_{ABC}^{{\rm coll}}=\frac1d\sum_i|i\rangle\langle i|\otimes \sum_j|U_{ij}|^2|j\rangle\langle j|\otimes \sum_k|U'_{jk}|^2|k\rangle\langle k|.\)(4)

The difference between (3) and (4) should be immediately obvious to you. You get (4) from (3) if you set j=j′, that is, if you remove the off-diagonal terms that exist in (3). But, you see, there is no law of physics that allows you to just grab some off-diagonal terms and yank them out of the matrix. That means that (3) is a consequence of quantum mechanics, and (4) is not derived from anything. It is really just wishful thinking.
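To make the contrast concrete, here is a small numpy sketch (mine, not from the paper) that builds both Eq. (3) and Eq. (4) for qubits, with both measurement angles set to 45 degrees. The diagonals agree, so the count statistics are identical; the matrices themselves are not:

```python
import numpy as np

d = 2
theta = np.pi / 4
U = Up = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])   # A->B and B->C rotations

def idx(i, j, k):                       # flatten |i>|j>|k> into one index
    return (i * d + j) * d + k

rho = np.zeros((d**3, d**3))            # Eq. (3): unitary picture
rho_coll = np.zeros((d**3, d**3))       # Eq. (4): collapse picture
for i in range(d):
    for j in range(d):
        for jp in range(d):
            for k in range(d):
                w = U[i, j] * U[i, jp] * Up[j, k] * Up[jp, k] / d
                rho[idx(i, j, k), idx(i, jp, k)] += w
                if j == jp:             # collapse keeps only the j = j' terms
                    rho_coll[idx(i, j, k), idx(i, jp, k)] += w

assert np.allclose(np.diag(rho), np.diag(rho_coll))  # same count statistics
assert not np.allclose(rho, rho_coll)                # but different matrices
```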

“So”, I can hear you mutter from a distance, “can you make a measurement that supports one or the other of the approaches? Can experiments tell the difference between the two ways to understand quantum measurement?”

That, Detective, is the right question.

How do we tell the difference between two density matrices? Let us focus on qubits here (d=2). And, just to make things more tangible, let’s fix the angles between the consecutive measurements.

Measurement A is the first measurement, so there is no angle. In fact, A sets the stage, and all subsequent measurements will be relative to it. We will take B at 45 degrees to A. This means that B will have a 50/50 chance to record 0 or 1, no matter whether A registered 0 or 1. Note that A also will record 0 or 1 half the time, as it should if the initial state is random and unknown.

We will take C to measure at an angle of 45 degrees to B also, so that C’s entropy will be one bit as well. Thus, all three detectors’ entropies should be one bit. This will be true, by the way, both in the unitary and in the collapse picture. The relative states between the three detectors are, however, quite different between the two descriptions. Below you can see the quantum Venn diagram for the unitary picture on the left, and the collapse picture on the right.

We kinda knew that it had to be like that, on account of the π/4 angles and all. Yes, the two diagrams look very different. For example, look at detector B. If I give you A and C, the state of B is perfectly known (S(B|AC)=0). That’s not true in the collapse picture: giving A or C does nothing for B.

That in itself looks like a death knell for the unitary picture: How could it be that a past and a future experiment can fully determine the quantum state in the present? It turns out that such questions have been asked before! Aharonov, Bergmann, and Lebowitz (ABL) showed in 1964 that it is possible to set up a measurement so that knowing the results from A and C will allow you to predict with certainty what B would have recorded [1]. As you can tell from the title of their paper, ABL were concerned about the apparent asymmetry in quantum measurement.

*Of course there is an asymmetry! A measurement can tell you about the past, but it cannot tell you about the future! What an asymmetry!*

Slow down, there. That’s not a fair comparison. Causality is, after all, ruling over us all: what hasn’t happened is different from that which has happened. The real question is whether, after all things are said and done, there is an asymmetry between what was, and what could have been. In the language of quantum measurement, we should instead ask the question: If the past measurements influence what I can record in the future, do the future measurements constrain what once was, in an equal manner? Or put another way, can the measurements today tell me as much about the state on which they were performed, as knowing the state today tells me about future measurements?

To some extent, ABL answered this question in the affirmative. For a fairly contrived measurement scenario, they showed that if you give me the measurement record of the past, as well as what was measured in the future, I can tell you what it is you *must *have measured in the present. In other words, they said that the past and future, taken together, will predict the present perfectly.

I don’t think everybody who read that paper in 1964 was aware of the ramifications of this discovery. I don’t think people are now. What we show in our paper is that what ABL demonstrated for a fairly contrived situation in fact holds true universally, all the time.

*“Which paper?”* you ask. *“Come clean already!”*

Can’t you wait just a little longer? I promise it will be at the end of the blog. You can scroll ahead if you must.

In fact, we show that the ABL result is just a special case that holds quite generally. For any sequence of measurements of the same quantum system, Jennifer Glick and I prove that only the very first and the very last measurements are uncertain. All those measurements in between are perfectly predictable. (This holds for the case of measuring unprepared quantum states only.) This makes sense from the point of view I just advocated: you cannot fully know the last measurement because the future did not yet happen. And you cannot know the first measurement because there is nothing in its past. Everything else is perfectly knowable.

Now, “knowable” does not mean “known”, because in general you cannot use the results of the individual measurements to make the predictions about the intermediate detectors: you need some of the off-diagonal terms of the density matrix, which means that you have to perform more complex, joint measurements. But you only need the measurement devices, nothing else.

We show a number of other fairly uncommon things for sequences of quantum measurements in the paper aptly entitled “Quantum mechanics of consecutive measurements”, which you can read on arXiv here. For example, we show that the sequence of measurements does *not* form a Markov chain, as is expected for a collapse picture. We also show that the density matrix of *any* pair of detectors in that sequential chain is “classical”, which we here identify with “diagonal in the detector product basis”. There are several more general results in there: be sure to read the Supplementary Material, where all the proofs reside.

*“So your math says that wavefunctions don’t collapse. Can you prove it experimentally?”*

That too is an excellent question. Math, after all, is just a surrogate that helps us understand the laws of nature. What we are saying is that the laws of nature are not as you thought they were. And if you make a statement like that, then it should be falsifiable. If your theory truly goes beyond the accepted canon, then there must be an experiment that will support the new theory (it cannot prove it, mind you) by sending the old theory to where… old theories go to die.

What is that experiment? It turns out it is not an easy experiment. Or, at least, for this particular scenario (three consecutive measurements of the same quantum system) the experiment is not easy. The statistics of counts of the three measurement devices are predicted by the diagonal of the joint density matrix \(\rho_{ABC}\), and this is the same in the unitary relative-state picture and the collapse picture. The difference is in the off-diagonal elements of the density matrix. Now, there are methods that allow you to measure off-diagonal elements of a quantum state, using so-called “quantum state tomography” methods. Because the density matrix in question is large (an 8×8 matrix for qubit measurements), this is a very involved measurement. Fortunately, there are shortcuts. It turns out that for the case at hand, every single *moment* of the density matrix is different. The nth moment of a density matrix is defined as \({\rm Tr}\, \rho^n\), and it turns out that already the second moment, that is, \({\rm Tr}\, \rho^2\), is different. Measuring the second moment of the density matrix is far simpler than measuring the entire matrix via quantum state tomography, but given that it is a three-qubit system, it is still not a simple endeavor. But it is one that, I hope, someone will be convinced is worth undertaking. Because it will be the experiment that sends the Copenhagen Interpretation packing, for all time.
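For the 45-degree qubit scenario discussed above, the two second moments are simple to compute. In this sketch of mine, \(\rho_{ABC}\) is built element by element as in Eq. (3), alongside its collapsed counterpart, Eq. (4); the unitary picture gives \({\rm Tr}\,\rho^2=1/4\), the collapse picture \(1/8\):

```python
import numpy as np

d = 2
c = 1 / np.sqrt(2)
U = Up = np.array([[c, -c], [c, c]])    # both measurement angles at 45 degrees

def idx(i, j, k):                       # flatten |i>|j>|k> into one index
    return (i * d + j) * d + k

rho = np.zeros((d**3, d**3))            # unitary picture, Eq. (3)
rho_coll = np.zeros((d**3, d**3))       # collapse picture, Eq. (4)
for i in range(d):
    for j in range(d):
        for jp in range(d):
            for k in range(d):
                w = U[i, j] * U[i, jp] * Up[j, k] * Up[jp, k] / d
                rho[idx(i, j, k), idx(i, jp, k)] += w
                if j == jp:
                    rho_coll[idx(i, j, k), idx(i, jp, k)] += w

purity = np.trace(rho @ rho)                 # second moment, unitary: 0.25
purity_coll = np.trace(rho_coll @ rho_coll)  # second moment, collapse: 0.125
assert abs(purity - 0.25) < 1e-12 and abs(purity_coll - 0.125) < 1e-12
```

The two values differ by a factor of two, so an experiment measuring only \({\rm Tr}\,\rho^2\) (rather than the full 8×8 matrix) already discriminates between the pictures.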

So I asked myself, “How do I close such a long series about quantum measurement, and this interminable last post?” I hope to have brought quantum measurement a little bit out of the obscure corner to which it is sometimes relegated. Much about quantum measurement can be readily understood, and what mysteries there still are can, I am confident, be resolved as well. Collapse never made any physical sense to begin with, but neither did a branching of the universe. We know that quantum mechanics is unitary, and we now know that the chain of measurements is too. What remains to be solved, really, is just the randomness that we experience in the last measurement, when the future is still uncertain.

Where does this randomness come from? What do these probabilities mean? I have some ideas about that, but this will have to wait for another blog post. Or series.

[1] Y. Aharonov, P. G. Bergmann and J. L. Lebowitz, “Time symmetry in the quantum process of measurement,” Phys. Rev. 134, B1410–B1416 (1964).

[2] J. Glick and C. Adami, “Quantum mechanics of consecutive measurements”.
