Saturday, February 23, 2008

The Foundations of Quantum Computing

What follows is not essential for applying softwarephysics to your current IT job, but it might prove useful in the future. The purpose of this posting is two-fold. First, for those of you who find the idea of depicting the variables in a line of code as a set of interacting organic molecules, with the individual characters in each variable acting as a collection of atoms bound together into a molecule, a bit of a stretch, even for a simulation, we shall further explore the very strange phenomena of the quantum world we live in. It is sometimes difficult to think of software as a virtual substance in the Software Universe, simulating real substances in the physical Universe, while you are still under the delusion that real tangible solid substances actually exist. A little more exposure to the quantum world will cure that problem. Secondly, this posting will provide some important background information for the emerging field of quantum computing.

Before proceeding, I would like to recommend two excellent books that cover much of the material in this posting in much greater detail. First, there is Richard Feynman’s QED – The Strange Theory of Light and Matter (1985), and secondly, Bruce Schumm’s Deep Down Things – The Breathtaking Beauty of Particle Physics (2004). I reread both of these books on a rigorous yearly schedule, and part of the delay in preparing this posting was due to rereading both of them once again. And I must apologize in advance to both authors for lifting a considerable amount of their material, especially to Richard Feynman, in those universes where his instance is still running.

Some of you may be young enough to one day find yourselves programming quantum computers, so please pay heed. I was born in 1951, the same year that the first U.S. commercial computer, the UNIVAC I, went into production at the United States Census Bureau, so most of the things that I do in my current IT job still seem like pure science fiction to me. Consequently, the idea of quantum computing does not seem so far-fetched to me. I vividly remember my first encounter with a computer on Monday, Nov. 19, 1956, watching the Art Linkletter TV show People Are Funny. Art was showcasing the 21st UNIVAC I to be built, and Art's UNIVAC I "electronic brain" was sorting through the questionnaires from 4,000 hopeful singles, looking for the ideal match. The machine paired up John Caran, 28, and Barbara Smith, 23, who later became engaged. And this was more than 40 years before eHarmony.com! To a five-year-old boy, a machine that could “think” was truly amazing.

Quantum Computers
It may not have occurred to you yet, but you have already begun to work with quantum computers. A computer is just a large collection of coordinated switches. The earliest computers, like Konrad Zuse’s Z3 computer, used electromechanical relays for switches with a switching speed of about 10⁻¹ seconds. In the late 1940s, the relays were replaced with vacuum tubes with switching speeds of about 10⁻⁶ seconds. In the early 1960s, the vacuum tubes were replaced by discrete transistors, which were much smaller than vacuum tubes, consumed much less electricity, and had a switching speed of about 10⁻⁷ seconds. In the 1970s, integrated circuits appeared which, over time, evolved into the modern microprocessor chips of today, containing tens of millions of transistors with switching speeds of about 10⁻¹⁰ seconds. The incredible increase in computing power that we have seen over the past 60+ years has resulted because the switching speed and the size of the switches in a computer have decreased by a factor of a billion since Konrad Zuse’s Z3 in 1941. All this vast technological progress goes back to the invention of the transistor at Bell Labs in 1947, a tangible product of quantum mechanics developed in 1926. And the current high-density disk drives that you use rely upon giant magnetoresistance (GMR), a quantum mechanical effect, which results from the spin of electrons. GMR is a practical example of spintronics, an extension of electronics, which relies upon the quantum mechanical spin of electrons rather than their electrical charge. In fact, the 2007 Nobel Prize in physics was recently awarded to Albert Fert and Peter Grünberg for their discovery of GMR.

But when people speak of quantum computers, they are really referring to the research being performed by people like David Deutsch at Oxford and Seth Lloyd at MIT. A classical computer stores information in bits which can be in one of two states - either a “0” or a “1”. But a full-fledged quantum computer would use qubits, which can be in a state of “0”, “1”, or a simultaneous mixture of “0” and “1” at the same time! It turns out that a quantum computer using 500 qubits could simultaneously operate on 2⁵⁰⁰ machine states with each clock cycle, and would be the equivalent of a classical computer with 10¹⁵⁰ processors. But how can a qubit be simultaneously in a state of “0” and “1” at the same time? The answer is that everything in the physical Universe is simultaneously in multiple states all the time! This may fly in the face of your common sense, but once again, please remember that common sense is just another effective theory of reality, with all of the limitations of any effective theory. In physics, it is important not to confuse our models of reality with reality itself. I believe the world would be a much more peaceful place if this same principle were carried over to political science as well. And the same goes for softwarephysics, where we take a very strong positivist point of view by trying to put together a model for software behavior without worrying about what software “really” is. Please keep that in mind as we examine the incredibly bizarre quantum Universe that we live in.
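
To make the idea of a qubit a little more concrete, here is a minimal Python sketch (all of the names and numbers in it are mine, purely for illustration) that stores the state of an n-qubit register the way a classical simulation must - as a vector of 2ⁿ complex amplitudes. The exploding size of that vector is exactly why 500 qubits are far beyond any classical computer:

import numpy as np

# A minimal sketch of an n-qubit register, assuming the standard
# state-vector representation: 2**n complex amplitudes whose squared
# magnitudes sum to 1. All names here are hypothetical, for illustration.
def uniform_superposition(n_qubits):
    """Return a register that is simultaneously in all 2**n basis states."""
    n_states = 2 ** n_qubits
    return np.ones(n_states, dtype=complex) / np.sqrt(n_states)

register = uniform_superposition(3)        # 8 amplitudes for 3 qubits
probabilities = np.abs(register) ** 2      # Born rule: all states equally likely
print(len(register), probabilities.sum())  # 8 states, probabilities sum to ~1

# For 500 qubits the vector would need 2**500 complex numbers - far more
# numbers than there are atoms in the observable Universe:
print(f"2**500 = {2**500:.3e}")            # about 3e150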

The Favorite Models of Physicists
Recall that physicists have three favorite models and that all three are just idealizations:

1. Particles – Like pebbles with a dimension of zero and no physical extent, but with physical properties such as mass, charge, and angular momentum.

2. Waves – Like the ripples on a pond when a pebble is tossed in.

3. Fields – Like the patterns that you see when you sprinkle iron filings over a piece of paper on top of a magnet.

The Particle Model
The chief characteristic of the particle model is that it has a well-defined location in space, and is limited to a very small region of space as well. Because of these properties of the particle model, it is possible to follow the path of a particle very closely as it moves through space and time. One of the problems in physics is that people usually refer to objects such as electrons, quarks, and neutrinos as particles, and consequently confuse these material objects with the particle model itself. It would be much better if physicists used the IT concept of an “object” for things like electrons, quarks, and neutrinos. These objects have certain attributes like mass, electric charge, weak isospin charge, and color charge. They also have behaviors or “methods” such as linear motion, spin, deflection in a magnetic field, or diffraction off a crystal lattice. However, since the term particle is the normal terminology used for things like electrons, quarks, and neutrinos, I will continue to follow this confusing convention, but keep in mind that real electrons are real electrons and are not idealized particles. As we have already seen, the particle model is not limited to just material objects either. In 1704, Newton published Opticks, in which he proposed his theory of light as a stream of particles, similar to Einstein’s 1905 concept of light as a stream of photons with quantized energies. In fact, the particle theory of light dominated 18th-century thinking because of Newton’s overwhelming preeminence in physics at the time.

The Wave Model
The chief characteristic of the wave model, on the other hand, is that waves tend to be everywhere and nowhere at the same time and will simultaneously explore all possible paths. To see a wave in action, drop a small pebble into a still pond of water containing many obstacles and watch the resulting waves spread out and reflect off the obstacles and interfere with each other before eventually reaching a particular destination near the edge of the pond. As we have already seen, things like electrons and photons, which are normally thought of as particles, can also behave like idealized waves too.

In 1801, Thomas Young conducted a series of experiments with waves. First, using water waves in a shallow ripple tank, he demonstrated the concept of interference. When a water wave encounters a barrier with two slits, the ripples passing through the slits interfere with each other on the other side of the barrier (Figure 1). Where two crests intersect, the wave amplitude doubles in height, and where a crest meets a trough, the two waves cancel each other out entirely. Next, Young used a distant light source with two closely spaced slits in an opaque barrier. On the other side of the barrier, he placed a white projection screen. When light from the distant light source passed through the double slit barrier, Young observed an interference pattern of alternating bright and dark fringes projected onto the screen which demonstrated the wavelike behavior of light.

Figure 1 – The interference pattern from two slits (click to enlarge)

You can easily repeat Young’s experiment with a piece of thin cloth. At night, hold up a single ply of a pillowcase in front of a distant light source, such as a far-off street light or the filament in your neighbor’s decorative front door light that uses a clear light bulb. Instead of a single diffuse spot of light shining through the pillowcase, you will see a pronounced checkerboard interference pattern of spots, because the weave of your pillowcase has both vertical and horizontal slits between the threads.
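
If you would like to see the arithmetic behind those bright and dark fringes, here is a short Python sketch of the textbook two-slit intensity formula, I(θ) = cos²(π d sin θ / λ); the slit spacing and wavelength are illustrative values that I made up:

import numpy as np

# Young's two-slit intensity pattern, assuming the standard narrow-slit
# result I(theta) ~ cos^2(pi * d * sin(theta) / wavelength).
# The slit spacing and wavelength below are illustrative values only.
wavelength = 500e-9        # green light, 500 nanometers
d = 50e-6                  # 50 micron slit spacing

fringe_spacing = wavelength / d               # about 0.01 radian between bright fringes
for fraction in (0.0, 0.25, 0.5, 0.75, 1.0):  # walk from one bright fringe to the next
    theta = fraction * fringe_spacing
    intensity = np.cos(np.pi * d * np.sin(theta) / wavelength) ** 2
    print(f"theta = {theta:.4f} rad   relative intensity = {intensity:.2f}")

# Prints 1.00 (bright), 0.50, 0.00 (dark), 0.50, 1.00 (bright again).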

The Field Model
In the 1830s, Michael Faraday began conducting a series of electrical and magnetic experiments and came up with the idea of fields. Take a bar magnet and cover it with a piece of paper. Then sprinkle some iron filings over it. The “lines of force” that you see are a field. Faraday called it a field because it reminded him of a freshly plowed farmer’s field. At each point on the paper, the magnetic force from the underlying magnet has a certain strength and a certain direction which define the magnetic field of the magnet. Now Faraday thought that the electric and magnetic fields that he observed were “real”, but the French thought that his fields were just a mathematical contrivance. The French thought that to calculate the effects from a collection of charged objects and current-carrying wires, you should pick a point in space and then use the formulas developed by the French physicists Ampère and Coulomb to calculate the strength and direction of the resulting magnetic and electrical forces. The French were quite happy with the concept of electric and magnetic forces as being an “action at a distance”, the same concept used by Newton for the gravitational force in his 1687 Principia. In the Principia, Newton proposed that the gravitational force between two objects with masses m₁ and m₂ was given by the formula:

F = G m₁ m₂ / R²

where

G = the gravitational constant
R = the distance between m₁ and m₂
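
As a quick sanity check of the formula, here is a short Python sketch computing the Sun's gravitational pull on the Earth, using the standard textbook values for the masses and the distance:

# Newton's formula with standard textbook values.
G = 6.674e-11        # gravitational constant, m^3 / (kg s^2)
m1 = 1.989e30        # mass of the Sun, kg
m2 = 5.972e24        # mass of the Earth, kg
R = 1.496e11         # Earth-Sun distance, m

F = G * m1 * m2 / R**2
print(f"F = {F:.2e} newtons")   # about 3.5e22 N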

Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivist position on the matter with the famous phrase "I feign no hypotheses". Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was. There was another problem though; this gravitational force had to travel with an infinite velocity. Imagine the Earth as it orbits the Sun at 66,660 miles/hour. If the gravitational force traveled with a finite speed, then the gravitational force from the Sun would have to lead the Earth, like a hunter firing on a flock of ducks flushed from the reeds. How would the Sun know where to shoot the gravitational force in advance to hit the Earth squarely in mid-flight, if the gravitational force traveled at less than an infinite speed?

Faraday, on the other hand, felt that electrical charges and current-carrying wires created real fields in space, and that charged objects then interacted with these surrounding fields. This idea could also be extended to the gravitational force as well, and eliminate the mysterious “action at a distance” problem. The Sun creates a strong gravitational field that extends out to the Earth, and the Earth interacts with the gravitational field as it orbits the Sun. The idea of electric and magnetic fields being “real” was further bolstered in 1864 when James Clerk Maxwell, who later proposed the paradox of Maxwell’s Demon, published A Dynamical Theory of the Electromagnetic Field, in which he unified the electric and magnetic forces into a single combined electromagnetic force. Maxwell demonstrated that a changing magnetic field could create an electric field and that, similarly, a changing electric field could create a magnetic field. This meant that electric and magnetic fields could break free of charged objects and currents in wires and propagate through space as a self-propagating wave. If you wiggle a charged object back and forth, electromagnetic waves peel off. This is how the oscillating electrons in a radio antenna send out radio waves and the electrons jumping around within atoms send out light. Recall the wave equation for a stretched string:

∂²y/∂x²  =  (μ/T) ∂²y/∂t²

Maxwell found a similar wave equation for the electric field E and the magnetic field B. For the electric field the wave equation became:

∂²Ey/∂x²  =  με ∂²Ey/∂t²

and for the magnetic field B it became:

∂²Bz/∂x²  =  με ∂²Bz/∂t²

The velocity of the electromagnetic wave came out to be:
v  =  1/√(με)

v  =  3 × 10⁸ m/sec, the speed of light!

This was truly a remarkable result. The constant μ is measured by observing the strength of the magnetic field surrounding a current carrying wire, and the constant ε is measured by observing the voltage across a charged capacitor. Both μ and ε seem to have nothing to do with light, yet the speed of light easily falls out from a simple relationship between the two, derived from a wave equation featuring both constants. This added credence to Faraday’s idea that electromagnetic fields were, indeed, real tangible things, and Maxwell’s prediction of electromagnetic waves further strengthened the reality of electromagnetic fields in 1886, when Heinrich Hertz was able to generate and detect electromagnetic radio waves.
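
You can verify Maxwell's remarkable result for yourself in a few lines of Python, using the measured vacuum values of μ and ε:

import math

# Maxwell's result, checked numerically with the measured vacuum constants.
mu = 4 * math.pi * 1e-7    # magnetic constant, from current-carrying wires
epsilon = 8.854e-12        # electric constant, from charged capacitors

v = 1 / math.sqrt(mu * epsilon)
print(f"v = {v:.3e} m/sec")    # about 2.998e8 m/sec - the speed of light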

The Heisenberg Uncertainty Principle
Now that we have reviewed the three favorite models of physicists, let us return to the quantum world and see how these models can be put to good use. In March 1927, Werner Heisenberg formulated the principle of uncertainty, which placed a limitation on how much we could know about material objects and consequently put a limitation on our old friend information. The Heisenberg Uncertainty Principle can be expressed several ways; two of the most important are:

∆x ∆p ≥ ħ/2

where p is the momentum of the object in question, which can be defined classically for objects moving at less than 10% of the speed of light as:

p = mv

and:

∆x = Uncertainty of an object’s position
∆p = Uncertainty of an object’s momentum

It can also be expressed as:

∆E ∆t ≥ ħ/2

where:

∆E = Uncertainty of an object’s energy
∆t = Uncertainty of the duration of the object’s energy measurement

The first expression tells us that it is physically impossible to know the exact position of an object and its exact momentum at the same time. As the uncertainty of an object’s position ∆x gets smaller, the uncertainty in its momentum ∆p must increase, so that the product of the two uncertainties is never less than ħ/2. Similarly, the second expression tells us that it is physically impossible to know both the exact energy of an object and the exact duration of an energy measurement. As the uncertainty of an object’s energy ∆E gets smaller, the uncertainty in the duration of the energy measurement ∆t must increase, so that the product of the two uncertainties is never less than ħ/2. Neither of these expressions seems to make much sense. If a tangible object really does exist, we certainly should be able to measure its position and momentum at the same time, with ever-increasing accuracy as our equipment improves, and the same should hold for an object’s energy and the duration of an energy measurement. But the Heisenberg Uncertainty Principle denies both claims.

The Heisenberg Uncertainty Principle really results from trying to force-fit objects, like electrons, into the model of a particle. The particle model only partially explains the behavior of electrons and other small objects like protons and neutrons. The first expression of the Heisenberg Uncertainty Principle can be understood if we go back to de Broglie’s original conjecture that particles also behave as waves with a wavelength λ that is inversely proportional to the momentum p of the particle:

λ = h/p

Imagine an electron as a series of waves crashing onto a beach. If you were able to observe a very long wave train of many wave crests coming ashore, you would be able to measure the wavelength of the wave train very accurately, and using de Broglie’s formula, you would be able to determine the momentum of the electron very accurately as well. However, you would be very uncertain about the exact location of the electron, because the position of the electron would be smeared out over the whole wave train. Similarly, if an isolated large swell crashed upon the beach, you would be in a better position to identify the location of the electron, but you would not be able to accurately gauge its wavelength, and consequently its momentum. We shall use the ∆E ∆t ≥ ħ/2 relationship extensively in the coming sections.
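
To get a feel for the numbers, here is a Python sketch applying ∆x ∆p ≥ ħ/2 and de Broglie's λ = h/p to an electron confined to an atom-sized region; the 10⁻¹⁰ meter confinement is simply an illustrative choice on my part:

# What the uncertainty principle demands of an electron confined to an atom.
hbar = 1.055e-34          # reduced Planck constant, J s
h = 6.626e-34             # Planck constant, J s
m_electron = 9.109e-31    # kg

delta_x = 1e-10                     # about the size of an atom, in meters
delta_p = hbar / (2 * delta_x)      # minimum momentum uncertainty
delta_v = delta_p / m_electron      # corresponding velocity uncertainty

print(f"delta_p >= {delta_p:.2e} kg m/s")
print(f"delta_v >= {delta_v:.2e} m/s")    # ~6e5 m/s - atoms cannot sit still

# The de Broglie wavelength of an electron with that momentum:
print(f"lambda = {h / delta_p:.2e} m")    # ~1e-9 m, the same order as the atom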

QED – The First Quantum Field Theory
In 1928, Paul Dirac combined quantum mechanics (1926) with the special theory of relativity (1905) and came up with a relativistic reformulation of the Schrödinger equation. This was the beginning of the first quantum field theory (QFT) known as quantum electrodynamics - QED. The original quantum mechanics developed by Heisenberg and Schrödinger dealt with the quantization of the properties of particles, particularly the quantization of the physical properties of electrons, such as their energy, momentum, and angular momentum. QED was an attempt to include the quantization of the electromagnetic field as well. QED is an effective theory which explains all of the possible interactions between electrons and photons, and consequently, all of the things that you encounter in your daily life that deal with the physical and chemical properties of matter, such as the texture of objects, their colors, hardness, ductility, tensile strengths and chemical activity. A great deal of work went into QED during the 1930s and 1940s. In general, quantum field theories (QFTs) combine all three of the physicist’s favorite models – particles, waves and fields. All physical objects, like electrons and photons, are thought of as fields that extend over the entire Universe with decreasing probability amplitudes as you get further away from where the particles are thought to exist. These fields are also quantized so that when you measure the field, you observe the field expressed as a particle with wavelike properties.

Heisenberg, Pauli, and Oppenheimer tried to extend Dirac's ideas, but initially had great difficulties with QED because the early theoretical work yielded infinite values for many things like the force between charged particles. A breakthrough came in 1948, when Richard Feynman, who had done his Ph.D. under John Wheeler, developed the concept of “renormalization”, in which a series of terms that yielded infinite values were rearranged so that the infinities canceled out, yielding the values that were actually measured in the laboratory. A similar technique is frequently seen today in the corporate and political worlds and goes by the acronym of MICOR (Make It Come Out Right), where you start with the answer that you desire and work the analysis backwards to MICOR.

In QED, the electromagnetic force between two charged particles is depicted as the exchange of virtual photons between the charged particles. This is described in what are known as Feynman diagrams, which are spacetime plots of an interaction, and are very similar to the spacetime plots that we saw previously when discussing the special theory of relativity. Again, the vertical axis of the diagrams represents the time dimension T and the horizontal axis represents one dimension of space X, so the diagrams describe the motion of particles moving back and forth along one dimension of space. Imagine two electrons approaching each other. At some point, one of the electrons emits a virtual photon which is subsequently absorbed by the other electron. The virtual photon carries the message “stay away” to the second electron, with the end result that both electrons repel each other, as shown in Figure 2.

Figure 2 – Two electrons collide by exchanging a virtual photon

Think of it as two college roommates on an ice skating pond. One of the undergrads throws a basketball to his roommate and recoils back in the process, while the other roommate catches the basketball and slides away from his roommate as well. The end result is that the two roommates have repelled each other by exchanging a basketball. The attractive force between electrons and protons is also accomplished through the exchange of virtual photons, but our basketball analogy is a little less useful for that situation. Instead, imagine that one of the roommates throws a boomerang, with the resulting recoil pushing him towards his roommate. When the receiving roommate catches the boomerang, he also slides closer to his roommate. The end result is that the roommates are attracted to each other by exchanging a boomerang. Since photons have an energy E = hf, where did the extra energy for the virtual photons come from? That is why they are called virtual photons; the energy was borrowed from the vacuum through the Heisenberg Uncertainty Principle ∆E ∆t ≥ ħ/2. It is possible to borrow the energy ∆E necessary for a virtual photon so long as you return it to the vacuum in a small amount of time ∆t, just as you can write a bad check with impunity, so long as you make a deposit before the check is cashed. In QED, the electrons in atoms are constantly exchanging virtual photons with the protons in the nucleus, with the net effect that the electrons are bound to the nucleus.
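
The bad-check analogy can even be made quantitative. Here is a Python sketch, using an illustrative borrowed energy of 1 MeV, of how long the vacuum's loan can last and how far a virtual photon could carry its message in that time:

# The vacuum's bad-check accounting, with an illustrative borrowed energy.
hbar = 6.582e-22          # reduced Planck constant in MeV s
c = 3.0e8                 # speed of light, m/s

delta_E = 1.0                     # borrow 1 MeV from the vacuum (illustrative)
delta_t = hbar / (2 * delta_E)    # the loan must be repaid within this time
max_range = c * delta_t           # farthest the message can travel in that time

print(f"delta_t <= {delta_t:.1e} seconds")   # about 3.3e-22 s
print(f"range   <= {max_range:.1e} meters")  # about 1e-13 m, the nuclear scale

# The bigger the loan, the shorter the term - which is why forces carried
# by massive particles, like the weak interaction, have such short reach.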

Now you cannot see these virtual photons, but you sure can feel them. Just push your hand down against a table. As your hand approaches the table, the electrons in the table will begin to exchange virtual photons with the electrons in your hand and will send the “stay away” message to the electrons in your hand via the virtual photons. The resulting pushback that you feel from the table will give you the false impression that the table is solid. In reality, you are just feeling virtual photons conveying the electromagnetic force. You are just feeling information! Going back to our fictional 200-pound man consisting of 100 pounds of protons, 100 pounds of neutrons, and 0.87 ounces of electrons, we should now realize that all we really see of the man is scattered photons off of nearly empty space. Recall that if the electron probability clouds surrounding the nuclei of his atoms were the size of football stadiums, the protons and neutrons of the nuclei would be about the size of shelled peanuts on the 50-yard line. The image of the man that we see results from photons in the room being scattered off these 0.87 ounces of electrons. And the reason that the 200-pound man does not instantly crash through the floor to the center of the Earth is that the electrons in his feet are exchanging virtual photons with the electrons in the floor, resulting in an upward force that resists gravity.

In Feynman’s conception of QED, he developed his own approach to quantum mechanics called the "path integral" or "sum over histories" approach, the word "history" in this context meaning that photons, just like waves, explore all possible paths when going from point A to point B in spacetime. Recall that the wavefunction solutions to Schrödinger’s equation are, in general, complex functions consisting of both real and imaginary parts. That means that to plot a wavefunction for a particle that is restricted to motion along the x-axis, you have to plot a graph of the real part of the wavefunction along the x-axis and another plot of the imaginary part along the x-axis. We normally display individual complex numbers with a two-dimensional plot, using the x-axis to plot the real part of the complex number, and the y-axis to plot the imaginary part of the complex number. For example, Figure 3 shows the plot of the complex number 1 + 2i (remember that i is the mythical number where i² = -1).

Figure 3 – A plot of the complex number 1 + 2i

This is a little cumbersome - an equivalent way to portray the same information is to plot the amplitude and phase of the wavefunction at each point x along the x-axis instead. Imagine that at each point along the x-axis we have a little analog clock that only has an hour hand and each hour hand points to a single complex number like the arrow in Figure 3. The length of the arrow is called the amplitude of the complex number and the angle between the arrow and the x-axis is called the phase angle of the complex number. Again, it takes two values to display a complex number, but this time we are using an amplitude value and a phase angle value, instead of an x and y value, to depict the complex number 1 + 2i. The amplitude of the wavefunction as you move along the x-axis is key. Recall that Max Born conjectured that wavefunctions were probability waves and that the square of the amplitude of the wavefunction at each point x along the x-axis was equal to the probability of finding the particle described by the wavefunction at that point x. So another way to portray a complex wavefunction would be to create a plot of little clocks all lined up along the x-axis. The length of the hour hands would give the amplitude of the wavefunction at each point x, and the angle of the hour hand would give the phase. That might be a little hard to follow, so we could also plot the amplitude, or length of the hour hands, like we normally plot a wiggly line on an x-y plot, and plot the phase angle as little clocks along the x-axis, all having the same length hour hands, but with differing clock positions depicting the differing phase angles at each point x.
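
Python's standard cmath module can do this amplitude-and-phase bookkeeping for us. Here is a sketch using the same complex number 1 + 2i from Figure 3:

import cmath

# The two equivalent ways to display the complex number 1 + 2i of Figure 3.
z = 1 + 2j

# As x and y coordinates:
print(z.real, z.imag)        # 1.0  2.0

# As an amplitude (the length of the hour hand) and a phase angle:
amplitude = abs(z)           # sqrt(1**2 + 2**2) = 2.236...
phase = cmath.phase(z)       # 1.107 radians, about 63.4 degrees
print(amplitude, phase)

# Born's rule: the probability of finding the particle goes as the square
# of the amplitude, and the absolute phase drops out entirely.
print(amplitude ** 2)        # ~5.0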

In Feynman’s "sum over histories" approach to quantum mechanics, the probability amplitude of an electron or photon is the same in all directions, like when you drop a pebble in a still pond, but the phase angles will differ depending upon the path that is taken. So to figure out the probability of finding an electron or photon at a particular point, you have to add up the amplitudes and phases of all the possible paths that the electron or photon could have taken to reach the destination point. Although there are an infinite number of possible paths, the key insight is that most of the paths will be out of phase with each other and will cancel out like the destructive interference shown in Figure 1. This produces some rather strange experimental observations. Imagine a very dim source of photons or electrons that can fire one photon or electron at a time. If we fired the particles at a screen with two slits, as in Young’s experiment, we would expect to see a pattern similar to Figure 4 build up over time, based upon the particle model for electrons and photons.

Figure 4 – What common sense and the particle model would predict for a source that fires electrons or photons one at a time

However, what is actually observed is an interference pattern similar to Figure 5, even though the electrons or photons pass through the slits one at a time. According to QED, the individual electrons or photons interfere with themselves as they go through both slits at the same time! This means that if your neighbor could turn down the light by his front door to a very low level, so that it only emitted one photon at a time, and your eye could record a long exposure image, you would still see a checkerboard pattern of light spots through your pillowcase, even though the photons went through the fabric mesh one at a time.

Figure 5 – We actually observe an interference pattern as each particle interferes with itself

Now here comes the really strange part. If we put detectors just in front of the slits so that we can record which slit the electron or photon actually passed through, and keep firing one particle at a time, the interference pattern will disappear, and we will see the pattern in Figure 4 instead. If we turn the detectors off, the interference pattern returns, and we see the pattern in Figure 5. For some reason, Nature will not allow us to observe electrons or photons behaving like particles and waves at the same time. It’s some kind of information thing again. But it gets worse. If we put the detectors at some distance behind the slits and turn them on, the interference pattern again disappears, but if we turn the detectors off, the interference pattern returns. Now, this is after the electrons or photons have already passed through the slits! How do they know whether to behave like a wave or a particle in advance, before they know if the detectors are on or off? In fact, experiments have been performed where the decision to turn the detectors on or off is not made until after the individual electrons or photons have already passed through the slits, but even so, if the detectors are turned on, the interference pattern disappears, and if the detectors are turned off, the interference pattern returns! This means that the present can change the past! This is the famous delayed choice experiment proposed by John Wheeler in 1978 and actually performed by Alain Aspect and his colleagues in 1982. In another experiment, the detectors are placed beyond the observation screen to detect cloned photons that are created in a splitting process. By observing the cloned photons, it is possible to determine which slit an individual twin photon passed through after its twin has already hit the observation screen. When these distant detectors are turned on, the interference pattern once again disappears, and if the detectors are turned off, the interference pattern returns. Again, the decision to turn the detectors on or off can be made after the photons have already hit the observation screen. This means that the future can change the present!
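
The difference between detectors-on and detectors-off boils down to one line of arithmetic. With the detectors off, you add the complex amplitudes of the two paths and then square; with the detectors on, you square each path's amplitude first and then add. Here is a Python sketch with two illustrative path amplitudes:

import numpy as np

# Two paths to the same point on the screen, with equal amplitudes but a
# phase difference that depends on the point (values here are illustrative).
phase_difference = np.pi                 # try 0.0 for a bright fringe
path1 = (1 / np.sqrt(2)) * np.exp(0j)
path2 = (1 / np.sqrt(2)) * np.exp(1j * phase_difference)

# Detectors OFF: no way to tell which slit - add amplitudes, then square.
p_off = abs(path1 + path2) ** 2
print(f"detectors off: relative probability = {p_off:.2f}")  # 0.00, a dark fringe

# Detectors ON: which-path information exists - square first, then add.
p_on = abs(path1) ** 2 + abs(path2) ** 2
print(f"detectors on:  relative probability = {p_on:.2f}")   # 1.00, no fringes at all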

A similar example of QED messing with our concepts of time occurred in 1947 when Richard Feynman came up with an alternate interpretation for Dirac’s positrons with negative energy. Feynman proposed that positrons were actually normal electrons moving backwards in time! Recall that the full-blown wave function of an object with constant energy can be expressed as a time-independent wavefunction ψ(x) multiplied by a time-varying term:

Ψ(x, t)  =  e^(-iEt/ħ) ψ(x)

Now the solutions to Dirac’s equation predicted both the existence of electrons with positive energy and also positrons, the antimatter equivalent of electrons, with negative energy. For a particle with negative energy, the above equation looks like:

Ψ(x, t)  =  e^(-i(-E)t/ħ) ψ(x)

but since:

-i(-E)t/ħ = -iE(-t)/ħ

Feynman realized that an equivalent equation could be written by simply regrouping the terms, yielding:

Ψ(x, t)  =  e^(-iE(-t)/ħ) ψ(x)

So a positron with negative energy –E could mathematically be thought of as a regular old electron with positive energy E moving backwards in time! Indeed, today that is the preferred interpretation. All antimatter is simply regular matter moving backwards in time.

In QED, in order for an electron or photon to get from point A to point B, we have to consider all possible paths, and not just the simple ones. Figure 6 shows a more complicated Feynman diagram for two electrons scattering off each other, similar to Figure 2, but in this case, they emit two photons, one of which decays into an electron-positron pair, which later annihilates back into a photon again, before finally being absorbed by the second electron. Note that the e⁺ with the backward arrow is the positron moving backwards in time, while the e⁻ with the forward arrow is the electron moving forwards in time.

Figure 6 – A more complicated Feynman diagram for two electrons scattering off each other by emitting two photons, one of which decays into an electron-positron pair, which later annihilates back into a photon again

But that is just the beginning. To truly calculate the scattering of one electron by another, you have to add up all the possible Feynman diagrams. Figure 7 shows a small number of additional possibilities. Now you can see why renormalization was such an important technique for QED. The more complicated Feynman diagrams contain more vertices, so they are much less probable than the simplest interaction which uses a single virtual photon, but since there are an infinite number of possibilities, things can rapidly get out of hand. At some point, one simply has to stop adding ever more complicated Feynman diagrams and MICOR the result with renormalization.

Figure 7 – Some additional Feynman diagrams for two electrons scattering off each other (click to enlarge)

Now depicting the electromagnetic force as the exchange of virtual photons might seem a bit farfetched to you since you have no first-hand experience with quantum effects, so let us repeat an experiment that Michael Faraday might have performed 170 years ago to help us out of this jam. Grab two small styrofoam packing peanuts from your last online purchase. Using a needle and thread, attach each packing peanut to the end of a sturdy piece of thread, and then rub the two packing peanuts in your hair to have them pick up some extra electrons. Now hold the thread in its middle, so that the packing peanuts are free to dangle straight down. You will observe a very interesting thing: instead of dangling straight down, the packing peanuts will repel each other, and the thread will form an inverted V shape. Now stare at that for a few minutes. This will, no doubt, not seem so strange to you because you do have some experience with similar electrical effects, but think about it for a few minutes anyway. Something very strange, indeed, is going on. The whole mass of the Earth is trying to pull those two packing peanuts straight down, but some mysterious thing, which apparently is much stronger, is keeping them apart. What could it be? Coulomb would borrow Newton’s idea of a “spooky action at a distance” to explain that there is an electrical force between the charged packing peanuts and that the electrical force is keeping them apart. In fact, given the amount of charge on the peanuts, the mass of each peanut, and the total length of the thread, Coulomb would be able to exactly predict the angle of the inverted V formed by the thread and the dangling packing peanuts. So Coulomb’s model is quite useful in making predictions despite using a “spooky action at a distance”. Michael Faraday would go one step further. In Faraday’s model, each charged packing peanut creates an electric field about itself, and the other charged packing peanut then interacts with the electric field by moving away from it. Faraday’s model not only predicts the angle of the packing peanuts, it can also be used to derive the speed of light as we saw above, so Faraday’s model is even more useful than Coulomb’s. Finally, we can use QED to model the behavior of the packing peanuts as the result of the exchange of a huge number of virtual photons. QED makes all of the predictions that Coulomb’s and Faraday’s models make, and in addition, by using 72 Feynman diagrams, can predict the gyromagnetic ratio of the electron to 11 decimal places. So which is it? Are the packing peanuts being held apart by Coulomb’s “spooky action at a distance”, Faraday’s electric field, or QED’s exchange of virtual photons? What is “really” going on? Well, nobody really knows. The three models we just discussed are all effective theories that make predictions about the phenomenon of repelling packing peanuts with varying degrees of accuracy – that is all we really know. Each of these three models may seem a bit strange, but that does not matter. What matters is that they all make some accurate predictions of what we observe and offer various degrees of insight into what is really going on. This is a key point pertinent to softwarephysics, because in softwarephysics we also only try to model the behavior of software, without caring what software really is.

I contend that modeling software as a collection of quantum particles, forming simulated organic molecules that interact with each other, and which are subject to the second law of thermodynamics, is no stranger than modeling the scattering of electrons by virtual photons. If a model makes good predictions, then it is a good model. Later in this posting, we will even question the reality of reality, as we delve deeper into the quantum world.
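
In fact, Coulomb's model of the dangling packing peanuts is concrete enough to code up. Here is a Python sketch that solves for the angle of the inverted V; the charge and mass are illustrative guesses on my part, not measurements:

import math

# Coulomb's model of the dangling packing peanuts: each peanut of mass m
# hangs from a thread of length L, and at equilibrium the half-angle theta
# of the inverted V satisfies tan(theta) = F_electric / (m * g), with the
# peanuts separated by 2 * L * sin(theta). All values are illustrative.
k = 8.99e9     # Coulomb's constant, N m^2 / C^2
g = 9.81       # m / s^2
m = 1e-4       # a 0.1 gram peanut
L = 0.2        # 20 cm of thread on each side
q = 2e-8       # 20 nanocoulombs rubbed off in your hair

def imbalance(theta):
    separation = 2 * L * math.sin(theta)
    force = k * q * q / separation ** 2
    return math.tan(theta) - force / (m * g)   # zero at equilibrium

lo, hi = 1e-6, math.pi / 2 - 1e-6   # bracket the equilibrium angle
for _ in range(60):                 # bisection
    mid = (lo + hi) / 2
    if imbalance(mid) < 0:
        lo = mid                    # electric force still winning at this angle
    else:
        hi = mid

print(f"half-angle of the inverted V: {math.degrees(lo):.1f} degrees")  # about 16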

Before leaving QED, one additional important concept must be kept in mind. QED predicts that the vacuum is not empty; it must be a seething froth of virtual particle pairs of electrons and positrons constantly popping into and out of existence. And this must also be true for all the other fundamental particles of the Universe too, the quarks, muons, and taus, and their associated neutrinos. So pairs of quarks and antiquarks and neutrinos and antineutrinos are constantly popping into and out of existence too.

The Standard Model and Quantum Chromodynamics – QCD
In the 1930s, 40s, and 50s, particle physicists built atom smashers or particle accelerators of ever-increasing power in order to figure out the internal structure of the nuclei of atoms and also, hopefully, the internal structure of the protons and neutrons within. It was thought that if you could accelerate protons or electrons to high enough energies, and then smash them into stationary targets of atomic nuclei, you would be able to figure out what was inside the protons and neutrons of the nuclei. But that is not what happened. Instead, all sorts of new particles popped out in the debris, and many of these new particles were heavier than the protons and neutrons themselves! Somehow these new particles must have been popping out of the vacuum of empty space itself. Because the particles of particle physics are so light, physicists find it convenient to use Einstein’s equation E = mc² to express the mass of particles in terms of energy in units of MeV (million electron volts) instead of kilograms. For example, protons have a mass of 938.3 MeV, and the much lighter electrons have a mass of 0.511 MeV. So it was quite surprising when particles like the Λc⁺ with a mass of 2,281 MeV appeared out of nowhere in a collision event between a high-speed proton and a stationary atomic nucleus. As we have already seen, the vacuum is not empty space, but is a seething froth of particle and antiparticle pairs, popping into and out of existence. What was happening in these collisions that created new particles was that the collisions released lots of energy, which allowed a few virtual particles in the vacuum to latch onto some real energy and become real for a short period of time, before decaying into more “normal” types of matter. When beams of protons and antiprotons or beams of electrons and positrons were collided in head-on collisions, this effect was magnified, because all of the kinetic energy of the colliding particles went into creating new particles and none was wasted on conserving momentum, just as a head-on collision between cars is more devastating than a car colliding with a parked car. This technique of using head-on collisions of particle beams is the approach of choice today at both Fermilab and CERN.

In the 1960s things got a little depressing, because particle physicists had discovered more than 400 new “fundamental” particles beyond the electron, proton, and neutron familiar to the physicists of the 1930s. The Universe just could not be that complicated! In the 19th century, chemists had a similar problem with the 92 naturally occurring elements. By looking for patterns in the 92 naturally occurring elements, Dmitri Mendeleev was able to derive the periodic table of the elements and later chemists were able to deduce much of the internal structure of atoms from the patterns of the periodic table. Similarly, particle physicists like Murray Gell-Mann in the 1960s and 1970s were able to look for patterns in the 400+ particles discovered in the debris of collisions, and develop the Standard Model of particle physics.

Figure 8 – The Standard Model (click to enlarge)

The Standard Model is a quantum field theory (QFT) that combines the QED we have already studied with quantum chromodynamics (QCD), the QFT of protons, neutrons, and the other strange new particles discovered in particle accelerators. The Standard Model also describes the forces between these particles in terms of force-carrying particles called bosons, similar to the virtual photons we saw in QED. In high school, you were taught that a force is a “push or a pull”, and that is an adequate description for Newtonian mechanics, but in particle physics, a force is portrayed as much more, as an interaction that changes something. Certainly, a “push or pull” can change the momentum of an object, but an interaction can do even more. For example, the weak force or interaction can change an up quark into a down quark, and as we shall see, make the Sun shine.

To understand the Standard Model, it is best to break it down by the columns and rows of Figure 8 and to concentrate on familiar objects like protons, neutrons, and electrons. The first major division is between the matter particles called fermions, comprising the first three columns of the Standard Model, and the force-carrying particles called bosons in the last column. The fermion matter particles all have a spin of ½, while the force-carrying bosons all have a spin of 1. The fermion particles are further subdivided; the two top rows contain the heavy quarks, while the two lower rows hold the much lighter leptons (Greek for small). Three quarks combine together to form heavy particles called baryons (Greek for heavy) like protons and neutrons, and two quarks, actually a quark and an antiquark, combine into slightly lighter particles called mesons (Greek for middle-one). In QED, we already explored the interactions of a lepton, called the electron, and a boson called the photon, carrying the electromagnetic force. The electron can be found in the lower left corner of Figure 8, while the photon is found in the upper right corner. Protons are not fundamental particles; they consist of two up quarks, each with a +2/3 electrical charge, and one down quark with a -1/3 charge to yield a proton of charge +2/3 +2/3 -1/3 = +1. Similarly, neutrons consist of two down quarks and one up quark to yield a charge of -1/3 -1/3 +2/3 = 0. So all of the normal matter, consisting of protons, neutrons, and electrons, that we come into contact with on a daily basis, comes from column I of the Standard Model.
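
The quark charge bookkeeping is easy to check exactly with Python's built-in fractions:

from fractions import Fraction

# Checking the quark charge arithmetic of the Standard Model exactly.
charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def baryon_charge(quarks):
    """Total electric charge of a three-quark baryon like 'uud'."""
    return sum(charge[q] for q in quarks)

print(baryon_charge("uud"))   # 1 - the proton
print(baryon_charge("udd"))   # 0 - the neutron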

Again, remember that the Standard Model is a QFT. All the particles in the Standard Model are fields that extend over the entire Universe, with a decreasing probability amplitude far from where the particle “is”, and each field is quantized into the particles of Figure 8 with wavelike properties. Thus there are up quark fields, down quark fields, and electron fields, extending over the entire Universe, that are quantized as individual up and down quarks and electrons with wavelike properties.

Now if all the matter we use on a daily basis comes from column I of the Standard Model, what are the particles in columns II and III used for? That is a very good question. The particles in columns II and III are heavier clones of the particles found in column I, and only briefly come into existence when cosmic rays or particles in accelerators at Fermilab or CERN collide with particles from column I. For example, the muon is exactly like an electron with an electrical charge of -1. The only difference is that the muon has a mass that is 200 times greater than that of the electron, so a muon decays in about 2.2 × 10⁻⁶ seconds into an electron and a couple of neutrinos. So why bother having muons around? In fact, when the muon was first discovered, the theorist Isidor I. Rabi exclaimed, “Who ordered that?”. Similarly, the tau is also like a much heavier electron, with an electrical charge of -1, but with a mass that is 3,478 times greater than the mass of an electron, and which decays into an electron and some other particles in about 2.90 × 10⁻¹³ seconds. The same goes for the charm (c) and top (t) quarks. They are just heavier versions of the up (u) quark with an electrical charge of +2/3 that quickly decay into the stable particles of column I. For example, the Λc⁺ consists of an up, down, and charm quark (udc) and decays in about 2 × 10⁻¹³ seconds. Similarly, the strange (s) and bottom (b) quarks are also heavier versions of the down (d) quark with an electrical charge of -1/3 that quickly decay into the stable particles of column I. The particles in column I have the least mass, and therefore, cannot decay into lighter particles, so they have been stable for billions of years.

We cannot go into the mathematics here, but the reason that we need at least 3 generations of basically the same particles is that there have to be at least 3 generations in order for the weak interaction to have a matter-antimatter asymmetry. At the time of the Big Bang, a great deal of energy was released in this Universe, which initially was in the form of very high energy photons. As we have already seen, photons can decay into pairs of matter and antimatter particles like electrons and positrons or quarks and antiquarks, which can later recombine back into photons. Now if this were a purely symmetrical process, there would be no matter left in the Universe, because all of the matter would have found its matching antimatter counterpart, and the two would have annihilated each other. It turns out that, one time in a billion, more matter was created than antimatter during the Big Bang. Recall that in QED, antimatter is simply regular matter moving backwards in time, from the future into the past. So our Universe must have a slight asymmetry, a slight preference for matter to move forward in time from the past into the future, rather than moving backwards in time from the future into the past. Otherwise, we would not be here contemplating this Universe. This is just one example of the Anthropic Principle, which claims that intelligent beings will only find themselves in universes capable of sustaining intelligent beings. We will further explore the Anthropic Principle in a posting devoted to cybercosmology.

The Force-Carrying Particles of the Standard Model
The fourth column of the Standard Model contains the force-carrying boson particles, which carry out the interactions between the fermion matter particles (Figure 9). In QED, we already discussed the electromagnetic interaction between electrons and protons, carried out by the virtual photon boson. The electromagnetic interaction is responsible for most of what you encounter in your daily life. It provides you with the illusion that things are solid and have physical properties like color, texture, and physical strength. All of the quarks and the charged leptons (electrons, muons, and taus) also feel the electromagnetic interaction.

The other force of nature that you are very familiar with is gravity, but gravity is not part of the Standard Model. Gravity is covered by two other effective theories – Newton’s theory of gravity and Einstein’s improved theory of gravity, the general theory of relativity. QED has predicted the gyromagnetic ratio of electrons accurate to 11 decimal places, and the general theory of relativity has predicted the orbital decay rate of two neutron stars accurate to 14 decimal places, but unfortunately, both theories are totally incompatible with each other. Again, remember that both QED and the general theory of relativity are effective theories that are only approximations of reality and only work over a certain range of conditions. Combining the general theory of relativity with quantum mechanics is the current great struggle in physics, which is being actively pursued by string theorists and other theorists searching for a theory of quantum gravity. Fortunately, the gravitational interaction is so weak, relative to the electromagnetic, strong, and weak interactions, that it can be safely ignored at the level of individual particles. For example, the strong interaction is about 1,000 times more powerful than the electromagnetic interaction, while the weak interaction is about 10,000 times less powerful than the electromagnetic interaction. But the gravitational interaction is about 10⁻⁴⁰ times as powerful as the electromagnetic interaction, meaning that the gravitational force between two electrons is only 10⁻⁴⁰ as strong as the electromagnetic force between the electrons.

Figure 9 – The Electromagnetic, Weak, and Strong Interactions of the Standard Model (click to enlarge)

The strong interaction is covered by the Standard Model. The strong interaction is the force that holds quarks together into the three-quark baryons, like protons and neutrons, and also the two-quark mesons. Thanks to Ben Franklin, we say that electrons are negatively charged and that protons are positively charged. But these are just meaningless human naming conventions. Ben could have just as easily said that the charge on electrons was “white” and the charge on protons was “black”. In fact, before Franklin, people used to refer to electrical charge as being either “vitreous” or “resinous”. It turns out that in addition to an electrical charge, quarks have another kind of charge we call color, and quarks come in three kinds of color – red, green, and blue. Again, this is just a human naming convention to make clear that there are three kinds of color charge. Just as the electromagnetic interaction is carried out by virtual photons, the strong interaction is carried out by virtual gluons. Both photons and gluons are classified as bosons with a spin of 1 and a mass of zero. Consider a proton consisting of two up quarks, each with an electrical charge of +2/3 and a down quark with an electrical charge of -1/3, all squeezed together in a small apartment 10⁻¹⁶ meter in width. There is a tremendous electromagnetic force between the up quarks that must be overcome by the gluons. Figure 9 shows how this is accomplished by the gluons changing the colors of the quarks. In Figure 9 we see a green up quark approaching a blue up quark. The green up quark emits a virtual green-antiblue gluon (there are 8 gluons of differing colors) and turns into a blue up quark. The blue up quark absorbs the virtual gluon and becomes a green up quark. The net result is that the two up quarks decide to stick together, even though they both carry a repulsive +2/3 electrical charge. This strong interaction also sort of slops over to keep protons together in the nucleus of an atom. This is depicted as the exchange of virtual π mesons between protons or protons and neutrons (Figure 9). Only the quarks feel the strong interaction.

The up and down quarks form the protons and neutrons of normal matter that you are familiar with. A proton consisting of two up quarks and one down quark (uud) has a mass of 938.27 MeV. Similarly, a neutron consisting of one up quark and two down quarks (udd) is slightly more massive with a mass of 939.56 MeV. But the up and down quarks themselves are surprisingly quite light. The up quark is now thought to have a mass with an upper limit of only 4 MeV, and the down quark is thought to have a mass with an upper limit of only 8 MeV. So a proton should have a mass of about 16 MeV instead of 938.27 MeV, and the neutron should have a mass of about 20 MeV instead of 939.56 MeV. Where does all this extra mass come from? It comes from the kinetic and binding energy of the virtual gluons holding the protons and neutrons together! Remember that energy can add to the mass of an object via E = mc². So going back to our fictional 200-pound man consisting of 100 pounds of protons, 100 pounds of neutrons, and 0.87 ounces of electrons, we can now say that the man really consists of 0.87 ounces of electrons, 1.28 pounds of up quarks, 2.56 pounds of down quarks, and 196.10 pounds of energy! Like the Cheshire Cat, our 200-pound man is fading fast indeed.
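
The 200-pound man's quark budget is easy to verify. Here is a Python sketch using the upper-limit quark masses quoted above:

# Verifying the 200-pound man's quark budget with the upper-limit
# quark masses quoted above (4 MeV up, 8 MeV down).
m_up, m_down = 4.0, 8.0               # MeV
m_proton, m_neutron = 938.27, 939.56  # MeV
protons_lb, neutrons_lb = 100.0, 100.0
electrons_lb = 0.87 / 16.0            # 0.87 ounces of electrons, in pounds

# Protons are uud and neutrons are udd, so weigh the quarks by mass fraction:
up_lb = protons_lb * 2 * m_up / m_proton + neutrons_lb * m_up / m_neutron
down_lb = protons_lb * m_down / m_proton + neutrons_lb * 2 * m_down / m_neutron
energy_lb = 200.0 - up_lb - down_lb - electrons_lb

print(f"up quarks:   {up_lb:.2f} pounds")      # 1.28
print(f"down quarks: {down_lb:.2f} pounds")    # 2.56
print(f"pure energy: {energy_lb:.2f} pounds")  # about 196.1, as claimed above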

The weak interaction is the strangest of all. The weak interaction can change one kind of quark into another. About 75% of the Sun’s mass is made up of protons (hydrogen nuclei). The core of the Sun is at a temperature of about 27 million degrees Fahrenheit with a density of about 150 g/cm³ (13 times denser than lead), so the protons at the core are squeezed together very tightly and are traveling at a high speed. About once every billion years, a given proton will get close enough to another proton that one of its up quarks will emit a virtual W⁺ boson and turn into a down quark. The proton now has one up quark and two down quarks, so it has turned into a neutron, which then sticks to the proton it just collided with due to the strong interaction between the two. The W⁺ boson carries away the +1 electrical charge that the proton needs to get rid of in order to turn into a neutron, and then the W⁺ quickly decays into a positively charged positron and an electron neutrino in about 3 × 10⁻²⁵ seconds. The proton-neutron combination is called deuterium. The deuterium almost immediately picks up another proton to form helium-3. In about a million years or so, the helium-3 nucleus slams into another helium-3 nucleus to form a single helium-4 nucleus and releases two protons in the process. This all yields about 26.7 MeV of energy for each generated helium-4 nucleus. Now normal chemical reactions yield a few electron volts of energy per reacting atom, so nuclear reactions deliver more than 10 million times as much energy per atom as do chemical reactions, and that is why nuclear explosives are more than 10 million times as powerful as chemical explosives. As Bruce Schumm has pointed out, it is probably a good thing that we only know how to work with the slop-over strong interaction between protons and neutrons and not the much more powerful strong interaction between the quarks themselves. All the fermionic matter particles in columns I, II, and III of the Standard Model feel the weak interaction. In fact, the weak interaction is the only interaction that the neutrinos feel. That is why, on average, a neutrino can pass through 22 light years of lead before interacting with another particle.
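
The 26.7 MeV figure comes straight from E = mc². Here is a Python sketch comparing the mass of the ingredients with the mass of the final helium-4 nucleus, using the standard particle masses in MeV; the few-eV chemical yield used for comparison is an illustrative round number:

# Where the Sun's 26.7 MeV per helium-4 nucleus comes from, via E = mc^2.
m_proton = 938.272      # MeV
m_electron = 0.511      # MeV
m_helium4 = 3727.379    # MeV, the bare helium-4 nucleus

# Overall pp chain: 4 protons -> helium-4 + 2 positrons + 2 neutrinos
q_fusion = 4 * m_proton - m_helium4 - 2 * m_electron

# Each positron then annihilates with an electron, adding 2 x 1.022 MeV:
q_total = q_fusion + 2 * (2 * m_electron)

print(f"fusion steps:       {q_fusion:.1f} MeV")  # 24.7
print(f"with annihilation:  {q_total:.1f} MeV")   # 26.7 - as advertised

# Compare with a chemical reaction at a few eV per atom (2.6 eV here):
print(f"ratio: about {q_total * 1e6 / 2.6:.0e}")  # ~1e7 - ten million times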

The weak interaction appears to be so weak because, unlike the massless photons and gluons, the three bosons of the weak interaction all have huge masses. The W⁺ and W⁻ have a mass of 80,400 MeV and the neutral Z⁰ has a mass of 91,200 MeV – nearly equal to the mass of 100 protons. Borrowing such a huge amount of energy from the vacuum with Heisenberg’s Uncertainty Principle ∆E ∆t ≥ ħ/2 means that ∆t must be very small - like 3 × 10⁻²⁵ seconds. The weak interaction, like a scam artist writing a huge bad check, cannot get very far before getting caught. The explanation from the Standard Model for the large masses of the weak interaction bosons is that they couple with a quantum field called the Higgs field. The Higgs field is a field with a nonzero magnitude that permeates the entire Universe and gives the illusion that the truly massless W⁺, W⁻, and Z⁰ have masses. To understand this, imagine photons moving in water. If you stick a pencil in a glass of water, the pencil will look bent because the photons scattered off the pencil are refracted when they leave the water and enter the air. The refraction results because the photons travel slower in water than in air, as they constantly interact with the electric fields of the water molecules. The electric field that permeates all of the water slows down the photons and makes them move slower than the normal speed of light - 3 × 10⁸ meters/sec. The electric field essentially makes the massless photons appear sluggish, with an effective mass. Now imagine that the entire Universe was filled with water. How would we know that photons are really massless and usually travel at 3 × 10⁸ meters/sec? The only photons we could observe would always travel at less than 3 × 10⁸ meters/sec and would seem to have some mass. That’s exactly what the Standard Model predicts; all of the particles of the Standard Model are really massless, and their apparent masses are just an illusion caused by the differing strengths with which each type of particle interacts with the Higgs field that permeates the entire Universe. Photons and gluons do not interact with the Higgs field, and consequently, are massless. The electron interacts a little more strongly and has an effective mass of 0.511 MeV. The top quark interacts very strongly with the Higgs field and has a whopping mass of 170,900 MeV. Since the Higgs field is a quantum field, it must be expressed as a particle. Physicists at Fermilab, and soon at the CERN LHC (Large Hadron Collider), are actively searching for the Higgs boson at this very moment. So our 200-pound man has now been reduced to pure energy and a large number of massless quantized field particles interacting with the Higgs field!

The Standard Model has been quite successful in describing nearly all of the phenomena of high energy particle physics, but again, it is just another very useful effective theory. For example, the Standard Model requires 20+ constants that must be determined in the laboratory, such as the masses of the fundamental fermionic matter particles and the strengths of the electromagnetic, weak, and strong interactions. These constants have to be plugged into the Standard Model; they are not predicted by the Standard Model itself. The surprising thing is that if you change any of these 20+ constants by a few percent or less, you end up with a universe incapable of supporting intelligent beings. For example, if the strong interaction were 2% stronger, nuclei consisting of 2 protons, known as diprotons, would be stable, and all hydrogen would have fused into diprotons and then into helium during the Big Bang, so there would be no hydrogen left to combine with oxygen to form water in such a universe. A 2% stronger strong interaction would also drastically increase the rate of nuclear fusion in stars, reducing stellar lifetimes from billions of years to millions of years, leaving not enough time for evolution to produce intelligent beings. On the other hand, if the strong interaction were just 5% weaker, protons would not stick to neutrons in the first step of the hydrogen fusion process that makes the Sun shine, and such a universe would be dark and lifeless. The fine-tuning of the 20+ constants of the Standard Model is again an example of the Anthropic Principle at work.

Gauge Theories
Going back to Schrödinger’s time-independent equation:

-(ħ²/2m) d²ψ(x)/dx² + V(x)ψ(x) = Eψ(x)

The first term corresponds to the kinetic energy of the particle:

-(ħ²/2m) d²ψ(x)/dx²

The second term corresponds to the potential energy of the particle:

V(x) ψ(x)

And the last term corresponds to the total energy of the particle:

E ψ(x)

So the Schrödinger equation really recapitulates the classical idea that the total energy of a particle is equal to the sum of its kinetic energy and potential energy. If we focus on a particle moving freely in space, far from any forces, with a V(x) that is zero everywhere, we may rewrite the Schrödinger equation as:

-(ħ²/2m) d²ψ(x)/dx² = Eψ(x)

Now the kinetic energy of the particle is equal to the total energy of the particle. To solve this differential equation, all we have to do is find a complex wavefunction such that the curvature of the wavefunction at each point along the x-axis, times some constants like the mass m of the particle and ħ, is equal to the quantized energy E of the particle times the value of the complex wavefunction at each point. Recall that a way to portray such a complex wavefunction is to create a plot of little clocks all lined up along the x-axis. The length of the hour hands gives the amplitude of the wavefunction at each point x, and the angle of the hour hand gives the phase. We can also plot the amplitude, or length of the hour hands, like we normally plot a wiggly line on an x-y plot, and then plot the phase angle as little clocks along the x-axis, all having the same length hour hands, but with differing clock positions depicting the differing phase angles at each point x. Now the important thing to realize about solving Schrödinger’s equation is that the only thing that matters is the amplitude plot. The absolute phase angle of each little clock along the x-axis does not affect your solution. The phase angle differences between neighboring points do matter, but not the absolute phase angle of all the clocks. Here is another way to think of it. Twice a year we all advance or retard the hour hands on our clocks by 30° when we convert back and forth between Standard and Daylight Saving Time. Now, so long as we all move our hour hands by 30° at the same time, the world continues on unharmed. This is true even though there are several hundred million people with hour hands that are only accurate to ±10 minutes. So the phase angle of each person’s clock varies a little from person to person, just as the phase angles of the little clocks of a wavefunction vary from point to point. As long as we all rotate our hour hands by 30° at the same time, all is OK. We only encounter a problem when we meet up with somebody who forgot to rotate their clock. Similarly, imagine that we were on a spaceship 100 light years from Earth and discovered that all of our clocks had stopped for some unknown period of time. What should we do? Since it would take 200 years to find out what time it was back on Earth, we could just as well set our clocks to any arbitrary time at all, so long as we all set our hour hands to the same phase angle. The same goes for the wavefunction solutions to the Schrödinger equation. The absolute phase angles of the little clocks along the x-axis are irrelevant; the only thing that matters is the relative phase angles between adjoining clocks. So it is OK to do a global phase angle change to all the clocks simultaneously along the x-axis of a wavefunction, like a change to Daylight Saving Time, and the Schrödinger equation will still work for the wavefunction.
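
Here is a minimal numerical sketch of this idea in Python, using numpy and an assumed toy wavefunction, a free-particle plane wave, to show that rotating every clock hand by the same 30° leaves the amplitude plot untouched:

import numpy as np

# A global phase change leaves the probability density |ψ(x)|² untouched.
x = np.linspace(0.0, 10.0, 1000)
k = 2.0                                # wave number of the plane wave
psi = np.exp(1j * k * x)               # little clocks: unit hands, phase angle kx

theta = np.pi / 6.0                    # rotate every clock hand by 30°
psi_shifted = np.exp(1j * theta) * psi

# The hour-hand lengths (amplitudes) are identical everywhere along the x-axis.
assert np.allclose(np.abs(psi)**2, np.abs(psi_shifted)**2)
print("Global 30-degree phase shift: probability density unchanged")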

In 1954, Chen Ning Yang and Robert Mills worried about such global phase changes to wavefunctions. Since the wavefunctions extend over the entire Universe, like a huge infinite spaceship, Yang and Mills wondered how you could possibly do a global phase shift to all the little clocks simultaneously. You have to worry about that pesky special theory of relativity limiting the speed with which you could transmit the command to shift all the phase clocks by 30° simultaneously. Yang and Mills proposed that the Schrödinger equation should still work even for local phase shifts to wavefunctions, where each little clock along the x-axis could have its own phase shift. Yang and Mills found that in order for the Schrödinger equation to still work for a given wavefunction subject to local phase changes, they had to add a correction term qA(x)ψ(x) to the equation to fix the damage done by the local phase change.

-(ħ²/2m) d²ψ(x)/dx² + qA(x)ψ(x) = Eψ(x)

Think of it this way: we actually are on a very large spherical spaceship that we call Earth. My current IT job is in my hometown of Chicago in the Central Time Zone. Unfortunately, my company has facilities all over the Earth and has standardized on using Eastern Time instead of Central Time. But that is OK. I know that when looking at log files on Unix servers around the world running on Eastern Time, all I have to do is apply a local phase correction, by shifting the hour hand on my local clock forward by 30°, when comparing my time with the log file times. Now the surprising thing is that the Yang and Mills correction function A(x), needed to fix the Schrödinger equation, is identical to the potential function of the electric field for the problem at hand, and q turns out to be the electrical charge of the particle! So when we fix the Schrödinger equation so that it still works for local phase changes (making it invariant under a local phase change), the electromagnetic force pops out. Performing a local phase change to our little clocks along the x-axis is just rotating them through different angles, so fixing the Schrödinger equation so that it is invariant under local phase changes is just fixing it so that it is invariant under phase rotations in the complex plane. Physicists call this “invariance under U(1)” or “U(1) invariance” because we are only rotating each little clock hand through a single angle or single dimension.
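
A small numerical sketch shows why a fix is needed at all. Here I apply a made-up local phase shift θ(x) to the same toy plane wave used above and check whether it still solves the free Schrödinger equation; for a true solution, -ψ''/ψ should be the same constant (proportional to E) at every point x:

import numpy as np

# A *local* phase shift θ(x) breaks the free Schrödinger equation,
# which is why the compensating term qA(x)ψ(x) must be added.
x = np.linspace(0.0, 10.0, 4001)
k = 2.0
psi = np.exp(1j * k * x)                     # free-particle solution: -ψ''/ψ = k²
psi_global = np.exp(1j * np.pi / 6) * psi    # the same 30° shift everywhere
theta = 0.5 * np.sin(x)                      # a different clock shift at each point x
psi_local = np.exp(1j * theta) * psi

def second_derivative(f, x):
    return np.gradient(np.gradient(f, x), x)

ratio_global = -second_derivative(psi_global, x) / psi_global
ratio_local = -second_derivative(psi_local, x) / psi_local

# Interior points only, to avoid numerical edge effects.
print("spread, global phase case:", np.ptp(ratio_global[10:-10].real))  # ~0
print("spread, local phase case: ", np.ptp(ratio_local[10:-10].real))   # large

The globally shifted wavefunction still has one constant energy everywhere, but the locally shifted one does not, and that damage is exactly what the added qA(x)ψ(x) term repairs.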

In an earlier posting, I referred to Noether’s Theorem, published in 1918, in which Emmy Noether proposed that there is a one-to-one correspondence between the conservation laws of physics and the symmetries of nature. For example, if you collide two billiard balls on a pool table and then rotate the pool table by 180° and repeat the experiment, you will obtain the same result for both collisions. The symmetry under rotation implies the conservation of angular momentum – why a skater speeds up when she pulls in her arms in a spin. The U(1) invariance or symmetry of the Schrödinger equation implies another conservation law – the conservation of electric charge. Whenever a negatively charged electron appears or disappears, a corresponding positively charged positron must appear or disappear too. The electromagnetic force is thus a result of a U(1) symmetry. This is quite strange. For some reason, symmetry under rotation in a made-up space we call the complex plane, which has an unreal dimension defined by the imaginary number i, yields the electromagnetic force we observe in “real” space. But again, it gets worse. It turns out that you can also rotate wavefunctions in two complex directions, SU(2), instead of just one, U(1). When you apply more fixes to the Schrödinger equation to make it invariant under SU(2) rotations, you get the three particles of the weak interaction: the W+, W-, and Z0. And as you might have guessed, when you apply your fixes to make the Schrödinger equation invariant under rotations of wavefunctions in three complex directions, SU(3), you get the 8 colored gluons. This is all explained much better in Deep Down Things, and I strongly recommend reading that book if you would like to explore these amazing things further. This cursory synopsis hardly does justice to the topic.

So why do rotations in the made-up U(1), SU(2), and SU(3) internal symmetry spaces yield the forces of nature that we observe in the “real” physical Universe? Nobody knows. One thing that many people have noted is the remarkable correspondence between mathematics and what is observed in the physical Universe. There are two major schools of thought on this topic. The Platonic school envisions mathematics as pure thought that exists even if there is nobody to think it. Draw a figure in flat space, such that all the points of the figure are an equal distance R from a central point, yielding a circle. Measure the distance around the circle and call it the circumference C. Now divide the circumference C by twice the radius R and you get π. Now is π real or something we just made up? You find π in just about every formula of physics, which might lead you to think that something must be going on beyond our own thoughts. In mathematics, we constantly find surprising relationships and symmetries like Euler's identity:

e^(iπ) + 1 = 0

which relates the constants e (used to calculate your mortgage payment), the imaginary number i, the π from circles, and the source of the counting numbers 0 and 1. Others take a more empirical tack and contend that mathematics is just a figment of our overactive imaginations, and so are the physics formulas of our very successful effective theories.
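
If you doubt this amazing relationship, you can check Euler's identity numerically with a couple of lines of Python:

import cmath

# Numerical check of Euler's identity: e^(iπ) + 1 = 0.
value = cmath.exp(1j * cmath.pi) + 1
print(value)                 # ~1.2e-16j, zero to floating-point precision
print(abs(value) < 1e-12)    # True

This leads us nicely into our next topic.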

The Interpretations of the Quantum World
So as we have seen, the observations we make in the quantum world in which we live do a lot of damage to our common sense approach to the physical Universe. But remember, quantum mechanics and the quantum field theories of QED and QCD, upon which the Standard Model is based, are all effective theories. They are only approximations or analogies of reality. The mathematics of the quantum world does a wonderful job in predicting what we observe, but the nagging question remains, what does it all mean? It’s like the equation:

Glass = 0.5

Is the glass half empty or half full, and how did it get that way? Because quantum mechanics is so strange, it is one of the few effective theories of physics that is open to much philosophical interpretation by physicists and philosophers alike. The other example of this dilemma goes back to our old friend the second law of thermodynamics. Nobody disagrees with the highly effective predictions made by the mathematics of quantum mechanics; they just have different interpretations of what is really going on. There are several predominant interpretations of quantum mechanics that warrant discussion. Again, these interpretations are models, and since quantum mechanics itself is a model, these interpretations are models of models.

The Copenhagen Interpretation
In 1927, Niels Bohr and Werner Heisenberg proposed a very positivistic interpretation of quantum mechanics now known as the Copenhagen interpretation; Bohr was working at the University of Copenhagen Institute of Theoretical Physics at the time. The Copenhagen interpretation contends that absolute reality does not really exist. Instead, there are an infinite number of potential realities. This satisfies Max Born’s contention that wavefunctions are probability waves. When you make an observation of a two-slit experiment using electrons fired one at a time at the slits, the wavefunctions of the electrons “collapse” when they hit the projection screen and are observed, but until then, the electrons themselves do not really exist. The only thing that is “real” is the observation of the electrons, and QED makes very accurate predictions of the observed interference pattern. When you introduce detectors at the slits, you change the experiment, so naturally what you observe also changes, and the interference patterns disappear. What could be simpler?

Einstein had a hard time with the Copenhagen interpretation of quantum mechanics because he thought that it verged on solipsism. Solipsism is a philosophical idea from Ancient Greece. In solipsism, you are all that exists, and the physical Universe is just a figment of your imagination. So I would like to thank you very much for thinking of me and bringing me into existence. Einstein’s opinion of the Copenhagen interpretation of quantum mechanics can best be summed up by his question “Is it enough that a mouse observes that the Moon exists?” Einstein’s rejection of the Copenhagen interpretation is rather interesting. Recall that Einstein’s original interpretation of the special theory of relativity (1905) was very positivistic, since he relied solely upon what could be observed with meter sticks and clocks, and totally rejected Newton’s concepts of absolute space and time because they could not be physically observed. Only later, when formulating the general theory of relativity, did he rely heavily upon Minkowski’s concept of an absolute spacetime, even though it could not be directly observed. In his elder years, Einstein held many profound philosophical debates with Bohr on the topic of quantum mechanics, since Einstein could not accept the extreme positivism of the Copenhagen interpretation, which held that only the observations of things really existed and not the things themselves. In the Copenhagen interpretation of quantum mechanics, the wavefunctions or probability clouds of electrons surrounding an atomic nucleus are just that, potential electrons waiting to be measured. About 60% of physicists still subscribe to the Copenhagen interpretation of quantum mechanics.

However, there are several problems with the Copenhagen interpretation. For example, Eugene Wigner pointed out that the devices we use to measure quantum events are made out of atoms, which are quantum objects themselves, so when a Geiger counter is used to observe a single atom of uranium to see if it has gone through a radioactive decay, the atomic quantum particles of the Geiger counter become entangled in a quantum superposition of states with the uranium atom. If the uranium has decayed, then the uranium atom and the Geiger counter are in one quantum state, and if the atom has not decayed, then the uranium atom and the Geiger counter are in a different quantum state. If the Geiger counter is fed into an amplifier, then we have to add the amplifier into our quantum superposition of states too. If a physicist is patiently listening to the Geiger counter, we have to add him into the chain as well, so that he can write and publish a paper which is read by other physicists and is picked up by Newsweek for a popular presentation to the public. So when does the “measurement” actually take place? We seem to have an infinite regress. Wigner’s contention is that the measurement takes place when a conscious being first becomes aware of the observation. This is exactly what Einstein objected to because it requires a conscious being to bring the Universe into existence. In Einstein’s view, measurements simply reveal to us the condition of an already existing reality that does not need us around making measurements in order to exist. But in the Copenhagen interpretation, the absolute reality of Einstein does not really exist.

This is an example of what is called the Strong Anthropic Principle. We have already discussed the Weak Anthropic Principle, which holds that intelligent beings will only find themselves in a universe capable of supporting intelligent beings. This form of the Anthropic Principle has been criticized for being pointedly obvious. However, in The Anthropic Cosmological Principle (1988), John Barrow and Frank Tipler point out that even the Weak Anthropic Principle can be used to make many predictions about our Universe. For example, the 20+ constants of the Standard Model have to be just so, or we would not be here contemplating them. The Strong Anthropic Principle goes even further: it states that the Universe has to contain intelligent beings for it to exist, and Wigner’s version of the Copenhagen interpretation is along these lines. The Universe did not really exist until we or some other intelligent beings came about to observe it.

The Many-Worlds Interpretation
In 1957, Hugh Everett, working on his Ph.D. under John Wheeler, proposed the Many-Worlds Interpretation of quantum mechanics. The Many-Worlds Interpretation admits an absolute reality but claims that there are an infinite number of absolute realities spread across an infinite number of parallel universes. In the Many-Worlds Interpretation, when electrons or photons encounter a two-slit experiment, they go through one slit or the other, and when they hit the projection screen they interfere with electrons or photons from other universes that went through the other slit! In Everett’s original version of the Many-Worlds Interpretation, the entire Universe splits into two distinct universes whenever a particle is faced with a choice of quantum states, and so all of these universes are constantly branching into an ever-growing number of additional universes. In the Many-Worlds Interpretation of quantum mechanics, the wavefunctions or probability clouds of electrons surrounding an atomic nucleus are the result of overlaying the images of many “real” electrons in many parallel universes.

David Deutsch is a strong proponent of the Many-Worlds Interpretation. In Deutsch’s version of the Many-Worlds Interpretation, there have always been an infinite number of parallel universes, with no need of continuous branching. When electrons or photons encounter a two-slit experiment without detectors, two very closely related universes merge into a single universe as the electrons or photons interfere with each other. If the electrons or photons encounter a two-slit experiment with detectors, the parallel universes remain distinct and no interference is observed. According to this version of the Many-Worlds Interpretation, when you hold up your pillowcase and observe your neighbor’s front door light and see a checkerboard interference pattern of spots, there are an infinite number of copies of you doing the same thing in an infinite number of closely related parallel universes. The interference pattern you observe is the result of the interference of the photons from all these parallel universes. The chief advantage of the Many-Worlds Interpretation is that you do not have to be there to observe the interference pattern. It happens whether you are there or not, and absolute reality does not depend upon conscious beings observing it. Einstein died in 1955, two years before the Many-Worlds Interpretation of quantum mechanics appeared, but I imagine that he would have gladly traded an infinite number of universes for the Copenhagen interpretation, in which absolute reality did not even exist in a single one! About 30% of physicists adhere to a variation of the Many-Worlds Interpretation.

The Decoherence Interpretation
Quantum decoherence is another popular interpretation of quantum mechanics, and one that pulls in our old friend the second law of thermodynamics. In quantum decoherence, there are again a large number of parallel possibilities for the wavefunctions of things like electrons, but many closely related wavefunctions interact and become quantum mechanically “entangled” with each other in a quantum superposition of states, forming a Universe of their own. This interpretation of quantum mechanics seems to be very popular amongst the quantum mechanical engineers working on quantum computers.

EPR Experiments
Because Einstein detested the Copenhagen interpretation of quantum mechanics, he published a paper in 1935 with Boris Podolsky and Nathan Rosen which outlined what is now known as the EPR Paradox. The EPR Paradox goes like this. Suppose we prepare two quantum mechanically “entangled” electrons that conserve angular momentum, one with spin-up and one with spin-down. Now let the two electrons fly apart and let two observers measure the spins. If observer A measures his electron with spin-up, then observer B must measure his electron with spin-down with a probability of 100% in order to conserve angular momentum. Now there is nothing special about the directions in which observers A and B make their spin measurements. Suppose observer A rotates his magnets by 90° to measure the spin of the electron and observer B does not. Then observer B will have only a 50% chance of finding his electron with spin-down. How does the electron at observer B know that observer A has rotated his magnets when the electron arrives at observer B? Einstein thought that the EPR paper was the final nail in the coffin of quantum mechanics. There had to be some “hidden variables” that allowed electrons to know if they “really” had a spin-up or spin-down. For Einstein, absolute reality really existed. The apparent probabilistic nature of quantum mechanics was an illusion, like the random() function found in most computer languages. The random() function just runs a deterministic algorithm that produces a sequence of apparently random numbers, numbers that are totally predictable if you know the algorithm and its starting point in advance. You normally initialize the random() function with a “seed” from the system clock of the computer you are running on to simulate randomness.
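
Every programmer can replay Einstein's intuition on a classical computer; here is a minimal Python sketch of deterministic pseudorandomness:

import random

# Pseudorandom numbers only look random: the same seed always reproduces
# the same sequence - the software analog of Einstein's "hidden variables".
random.seed(42)
first_run = [random.random() for _ in range(3)]

random.seed(42)
second_run = [random.random() for _ in range(3)]

print(first_run == second_run)    # True - totally predictable given the seed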

However, in 1964 John S. Bell published a paper in which he proposed an experiment that could actually test the EPR proposition. In the 1980s and 1990s, a series of experiments were performed that showed that Einstein was actually wrong. Using photons and polarimeters, instead of the spin of electrons, these experiments showed that photons really do not know their quantum states in advance of being measured and that determining the polarization of a photon by observer A can immediately change the polarization of another photon 60 miles away. These experiments demonstrated that the physical Universe is non-local, meaning that the “spooky action at a distance” that Einstein derided is built into our Universe, at least for entangled quantum particles. This might sound like a violation of the special theory of relativity because it seems like we are sending an instantaneous message faster than the speed of light, but that is really not the case. Both observer A and observer B will measure photons with varying polarizations at their observing stations separated by 60 miles. Only when observer A and observer B come together to compare results will they realize that their observations were correlated, so it is impossible to send a message with real information using this experimental scheme. Clearly, our common sense ideas about space and time are still lacking, and so are our current effective theories.
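
For the curious, here is a minimal sketch of the arithmetic behind such Bell tests. It uses the standard quantum prediction for polarization-entangled photons, E(a, b) = cos 2(a - b), and the textbook CHSH polarimeter angles (assumed here; these are not necessarily the exact settings of the experiments described above). Any local hidden-variable theory of the kind Einstein hoped for must keep |S| ≤ 2:

import math

# CHSH test using the quantum correlation for polarization-entangled photons.
def E(a, b):
    return math.cos(2.0 * (a - b))       # quantum prediction for settings a and b

a1, a2 = 0.0, math.pi / 4                # observer A's two polarimeter angles
b1, b2 = math.pi / 8, 3.0 * math.pi / 8  # observer B's two polarimeter angles

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"S = {S:.3f}")     # 2.828 = 2√2, beating the classical limit of 2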

Quantum Gravity
As I mentioned, the Standard Model of particle physics does not address the gravitational force or interaction. It has been proposed that there may be a spin-2 boson called the graviton, but nobody has ever observed it. The Standard Model is a big improvement over the 400+ unrelated “fundamental” particles discovered in the 1960s, but when you add up all of the various colored particles and the antimatter twins of the Standard Model, you end up with about 63 particles. Many physicists think that the Universe cannot be so complicated and that there has to be a simpler model. One promising model is called supersymmetric string theory, or string theory for short. String theory contends that all of the particles of the Standard Model are actually strings or membranes vibrating in an 11-dimensional Universe. The strings or membranes are made of pure energy, or perhaps, pure mathematics, and vibrate with different frequencies. The different vibrational frequencies account for the differing physical properties of the particles, just as the different frequencies of a vibrating guitar string account for the differing notes emanating from a guitar. According to string theory, at the time of the Big Bang, 3 of the 10 spatial dimensions suddenly began to expand, along with the dimension of time. The remaining 7 spatial dimensions stayed microscopically small, beyond our ability to observe them. The beauty of string theory is that the graviton naturally falls out of the theory without a struggle. String theory research has dominated physics for the past 20 years, but as with softwarephysics, string theory is running on pure mathematics without the benefit of the third step in the Scientific Method - experimental verification of the theory using inductive empiricism. Unfortunately, the predicted vibrating strings and membranes are so small that they are beyond the reach of our current accelerators by many orders of magnitude.

The initial hope of string theory was that there would be one and only one self-consistent formulation of the theory and that the Standard Model and its 20+ constants would fall out from it. But that is not what happened. Over time, it became evident that one could form a nearly infinite number of universes by slightly changing the geometry of the dimensions in which the strings and membranes vibrate. Leonard Susskind calls this collection of possibilities the Cosmic Landscape in his book of the same title (2006). As do many cosmologists, Susskind proposes that there are an infinite number of other universes forming a multiverse, with each universe having its own physics determined by the number and geometry of its dimensions. The concept of a multiverse also provides an explanation for the Weak Anthropic Principle. If there is a multiverse comprised of an infinite number of universes, most universes will be very inhospitable for living things and quite sterile, but there will still be a vast number of universes capable of supporting intelligent beings, and intelligent beings will always find themselves in such universes. Some physicists have also proposed that the concept of a multiverse could provide additional support for the Many-Worlds Interpretation of quantum mechanics.

One of my favorite physicists, Lee Smolin of the Perimeter Institute, has countered in The Trouble with Physics (2006) that string theory may be a blind alley. In this book, he proposes that physics may have taken on some of the characteristics of a religion, in that some string theorists seem to value the ability of students to believe in string theory without supporting empirical evidence. Remember, in science, we are allowed to have a level of confidence in effective theories, but we are not allowed to believe in them. Smolin has another book, Three Roads to Quantum Gravity (2001), in which he suggests that there may be other solutions to unifying the general theory of relativity and quantum mechanics. In this book, he proposes that we go back to Einstein’s original positivist approach to relativity and dispose of the “background” or stage of Minkowski’s spacetime that Einstein adopted for the general theory of relativity. In this worldview, there is no stage; there is only a dialogue between players in the form of a spin foam of interactions, like the froth of computer interactions in cyberspacetime.

How to Build a Quantum Computer
If I knew how to build one, I would not be sitting here writing this posting! The first thing you have to do is get a bunch of quantum particles like photons or electrons entangled in a quantum superposition. These are your qubits. Then you process the qubits simultaneously in parallel universes with quantum logic gates. The differing results from the various runs in all of the parallel universes interfere with each other, like the electrons in parallel universes interfered with each other in the two-slit experiment discussed above. The problem is decoherence; quantum computers are very delicate machines. If you try to measure the output before the run completes, the whole thing will abend. The problem is that the results of a computation are not in our universe alone; we can only obtain a solution to a computation by looking at the interference of the various results in all the parallel universes together. If we try to take a peek before all these parallel runs complete, the whole system of qubits will decohere because we have taken a measurement of the system, and our quantum calculation will crash. Quantum computing is still in its infancy, but many university and government research centers are actively pursuing it, so who knows? If you had told me what I would end up doing for a living back in 1956, I would have thought you were crazy.
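
While nobody can hand you a working quantum computer today, the bookkeeping of a single qubit is easy to simulate on a classical machine. Here is a toy Python sketch (a classical simulation only, not a real quantum computation) that puts a qubit into an equal superposition with a Hadamard gate and then “measures” it, collapsing the superposition:

import numpy as np

# Toy single-qubit simulation: Hadamard gate, then measurement.
ket0 = np.array([1.0, 0.0], dtype=complex)          # the |0> state
H = np.array([[1.0,  1.0],
              [1.0, -1.0]], dtype=complex) / np.sqrt(2.0)

psi = H @ ket0                   # (|0> + |1>)/√2 - an equal superposition
probabilities = np.abs(psi)**2   # Born rule: [0.5, 0.5]

rng = np.random.default_rng()
outcome = rng.choice([0, 1], p=probabilities)       # the "measurement"
print(f"P(0), P(1) = {probabilities}, measured |{outcome}>")

A real quantum computer would exploit many entangled qubits evolving together, which is exactly what a classical simulation cannot do efficiently.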

Next time we will explore the idea that, in addition to using the particle, wave, and field models, there might be some advantage in modeling the physical Universe as a quantum computer.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston