Monday, January 10, 2011

Model-Dependent Realism - A Positivistic Approach to Realism

In the Introduction to Softwarephysics, we saw how softwarephysics adopts a very positivistic view of software in that we do not care about what software “really” is; we only care about how software is observed to behave, and we only attempt to model this behavior with a set of effective theories. Recall that positivism is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. Effective theories are an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics works very well for objects that are moving in weak gravitational fields at less than 10% of the speed of light and that are larger than a very small mote of dust. For things moving at high velocities or in strong gravitational fields, we must use relativity theory, and for very small things like atoms, we must use quantum mechanics. All of the current theories of physics, such as Newtonian mechanics, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and quantum field theories like QED and QCD, are just effective theories that are based upon models of reality, and all these models are approximations - all these models are fundamentally "wrong". But at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply, and that is all positivism hopes to achieve. The opposite view is called realism, in which an external physical reality actually exists all on its own, independent of any observer. It really goes back to the age-old philosophical question - if a tree falls in the forest and nobody hears it, does it make a noise? For a realist it certainly does. For a positivist, it may not, unless some observable evidence is left behind in the Universe that it did.

A good comparison of these two worldviews can be found in Dreams of a Final Theory (1992) by Steven Weinberg and The Grand Design (2010) by Stephen Hawking and Leonard Mlodinow, in which the latter two authors present their concept of model-dependent realism. There is also a very nice synopsis of model-dependent realism in the October 2010 issue of Scientific American in an article entitled The (Elusive) Theory of Everything by the same authors. Steven Weinberg is a brilliant theoretician and a winner of the 1979 Nobel Prize in Physics for his theory of the electroweak interaction, which unified the electromagnetic and weak interactions of the Standard Model of particle physics. In Dreams of a Final Theory, Weinberg makes a strong case for realism, since the discovery of a single all-encompassing final theory of everything would necessarily imply the existence of a single absolute reality independent of observation. Stephen Hawking is famous to the general public as a brilliant theoretician, and within the physics community he is most famous for his theory of Hawking radiation emitted by black holes, which was the very first theory in physics to apply quantum effects to the general theory of relativity. Leonard Mlodinow is a physicist at Caltech, the author of many popular books on physics, and a writer for Star Trek: The Next Generation as well.

In The Grand Design, Stephen Hawking and Leonard Mlodinow present an alternate worldview to that of Weinberg’s realism. For them, a single all-encompassing final theory of everything may not be possible because a single absolute reality may not exist. Instead, they take a more positivistic view with their concept of model-dependent realism. Model-dependent realism maintains that there is no absolute reality after all. We can only hope for a collection of effective theories that present a series of models of reality that are confirmed by empirical observation, and each of these models essentially creates a reality of its own.

The Need For a Return to Natural Philosophy
Both books have few kind words for philosophers, but both then go into some pretty heavy philosophical discussions of both positivism and realism, despite their apparent lack of confidence in philosophy. However, I think we shall see in this posting that both books actually seem to demonstrate that, based upon the findings of the 20th century, physics must once again return to its roots in natural philosophy in order to make progress in the 21st century. After all, the question of the existence of an absolute reality goes way back in philosophy, but since I have little background in philosophy, I will use the history of physics as a guide instead, as did the authors of both of these books.

The Grand Design begins with:

… How can we understand the world in which we find ourselves? How does the universe behave? What is the nature of reality? Where did all this come from? Did the universe need a creator? … Traditionally these are questions for philosophy, but philosophy is dead. Philosophy has not kept up with the modern developments in science, particularly physics. Scientists have become the bearers of the torch of discovery in our quest for knowledge.

In Dreams of a Final Theory in the chapter Against Philosophy, Weinberg goes on with:

The value today of philosophy to physics seems to me to be something like the value of early nation-states to their peoples. It is only a small exaggeration to say that, until the introduction of the post office, the chief service of nation-states was to protect their peoples from other nation-states. The insights of philosophers have occasionally benefited physicists, but generally in a negative fashion – by protecting them from the preconceptions of other philosophers.

But I do not aim here to play the role of a philosopher, but rather that of a specimen, an unregenerate working scientist who finds no help in professional philosophy. I am not alone in this. I know of no one who has participated actively in the advance of physics in the postwar period whose research has been significantly helped by the work of philosophers. I raised in the previous chapter the problem of what Wigner calls the “unreasonable effectiveness” of mathematics; here I want to take up another equally puzzling phenomenon, the unreasonable ineffectiveness of philosophy.

Physicists do of course carry around with them a working philosophy. For most of us, it is a rough-and-ready realism, a belief in the objective reality of the ingredients of our scientific theories. But this has been learned through the experience of scientific research and rarely from the teachings of philosophers.

With that said, let us now explore the nature of reality from an historical perspective within physics.

The Historical Clash of Positivism and Realism in Physics
Before delving further into these two books, let us review the historical clash of positivism and realism in physics. The debate over positivism and realism has been going on within the physics community from the very start, and in recent years it has escalated with the unfolding of the quantum nature of reality and also with the quest for what is called a unified Theory of Everything or a Final Theory that would explain it all and replace our current collection of effective theories with a single unified theory of true reality. It is generally thought that the modern Scientific Revolution of the 16th century began in 1543 when Nicolaus Copernicus published On the Revolutions of the Heavenly Spheres, in which he proposed his heliocentric theory, which held that the Earth was not the center of the Universe, but that the Sun held that position and that the Earth and the other planets revolved about the Sun. A few years ago I read On the Revolutions of the Heavenly Spheres and found that it began with a very strange foreword that essentially said that the book was not claiming that the Earth actually revolved about the Sun; rather, the foreword proposed that astronomers may adopt many different models that explain the observed motions of the Sun, Moon, and planets in the sky, and so long as these models make reliable predictions, they don’t have to exactly match up with the absolute truth. Since the foreword did not anticipate space travel, it also implied that since nobody would ever be able to see from above what was really going on, there was no need to get too bent out of shape over the idea of the Earth moving. I found this foreword so puzzling and so disturbing that I almost put On the Revolutions of the Heavenly Spheres down. But a little further research revealed the true story. Before we get to that, however, here is the foreword to On the Revolutions of the Heavenly Spheres in its entirety. It is well worth reading because it perfectly encapsulates the ongoing philosophical clash between positivism and realism in the history of physics.

To the Reader
Concerning the Hypotheses of this Work

There have already been widespread reports about the novel hypotheses of this work, which declares that the earth moves whereas the sun is at rest in the center of the universe. Hence certain scholars, I have no doubt, are deeply offended and believe that the liberal arts, which were established long ago on a sound basis, should not be thrown into confusion. But if these men are willing to examine the matter closely, they will find that the author of this work has done nothing blameworthy. For it is the duty of an astronomer to compose the history of the celestial motions through careful and expert study. Then he must conceive and devise the causes of these motions or hypotheses about them. Since he cannot in any way attain to the true causes, he will adopt whatever suppositions enable the motions to be computed correctly from the principles of geometry for the future as well as for the past. The present author has performed both these duties excellently. For these hypotheses need not be true nor even probable. On the contrary, if they provide a calculus consistent with the observations, that alone is enough. Perhaps there is someone who is so ignorant of geometry and optics that he regards the epicycle of Venus as probable, or thinks that it is the reason why Venus sometimes precedes and sometimes follows the sun by forty degrees and even more. Is there anyone who is not aware that from this assumption it necessarily follows that the diameter of the planet at perigee should appear more than four times, and the body of the planet more than sixteen times, as great as at apogee? Yet this variation is refuted by the experience of every age. In this science there are some other no less important absurdities, which need not be set forth at the moment. For this art, it is quite clear, is completely and absolutely ignorant of the causes of the apparent nonuniform motions. And if any causes are devised by the imagination, as indeed very many are, they are not put forward to convince anyone that they are true, but merely to provide a reliable basis for computation. However, since different hypotheses are sometimes offered for one and the same motion (for example, eccentricity and an epicycle for the sun’s motion), the astronomer will take as his first choice that hypothesis which is the easiest to grasp. The philosopher will perhaps rather seek the semblance of the truth. But neither of them will understand or state anything certain, unless it has been divinely revealed to him.
Therefore alongside the ancient hypotheses, which are no more probable, let us permit these new hypotheses also to become known, especially since they are admirable as well as simple and bring with them a huge treasure of very skillful observations. So far as hypotheses are concerned, let no one expect anything certain from astronomy, which cannot furnish it, lest he accept as the truth ideas conceived for another purpose, and depart from this study a greater fool than when he entered it. Farewell.


Now here is the real behind-the-scenes story. Back in 1539 Georg Rheticus, a young mathematician, came to study with Copernicus as an apprentice. It was actually Rheticus who convinced the aging Copernicus to finally publish On the Revolutions of the Heavenly Spheres shortly before his death. But when Copernicus turned over his manuscript for publication to Rheticus, he did not know that Rheticus would subcontract out the overseeing of the printing and publication of the book to a Lutheran theologian by the name of Andreas Osiander, and it was Osiander who anonymously wrote and inserted the infamous foreword. My guess is that Copernicus was a realist at heart who really did think that the Earth revolved about the Sun, while his publisher, who worried more about the public reaction to the book, took a more cautious positivistic position. I think that all scientific authors can surely relate to this story.

Another early example of the clash between positivism and realism can be found in Newton’s Principia (1687), in which he outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.

However, in the very first few pages of the Principia, Newton also proposed that there really was an absolute and fixed space filling the entire Universe that all objects existed in and moved through. This absolute fixed space was like a stage or background upon which the motions of all the objects in the Universe were played out. Newton admitted that, as Galileo had proposed earlier, you could not measure this fixed and absolute space directly, but just the same, it still existed. Newton also proposed that there was a fixed and absolute universal time that flowed at a constant rate that all observers agreed upon, but which could not be measured directly either. Clocks really do not measure the rate at which time flows. Clocks can only measure amounts of time, just as rulers can measure amounts of space. To measure the rate at which time flows, you would need something like a speedometer, and we do not have such a device. The ideas of a fixed and absolute space and time are such common sense, self-evident concepts that Newton almost dismissed dealing with them outright in the first few pages of the Principia because they seemed so obvious to him, but this ultimately turned out to lead to his undoing. It would take more than 200 years and the work of Albert Einstein to reveal the flaws in his reasoning. Thus for his concept of an absolute space and time, Newton took the viewpoint of a realist. Absolute space and absolute time “really” did exist even though they could not be directly observed.

Galileo, on the other hand, proposed that all motion is relative, meaning that you can only define motion as an observable change in the distance between objects. In 1632 Galileo published the Dialogue Concerning the Two Chief World Systems, in which he compared Ptolemy’s astronomical model, which positioned the Earth at the center of the Universe, with the heliocentric model of Copernicus. One of the chief objections against Copernicus’s model was that if the Earth really does move, how come we do not feel it moving? To counter this argument, Galileo noted that when down in the hold of a ship on a quiet sea, it was impossible to tell if the ship was moving under sail or anchored at rest by simply performing physical experiments, like throwing balls or watching the drips from a dripping bottle fall into a basin. For Galileo, there was no such thing as absolute motion relative to some absolute and fixed space. Galileo’s concept of relative motion was carried forward by Gottfried Leibniz, a contemporary and strident rival of Newton, who fervently claimed that there was no such thing as an absolute space that could not be observed; there was only relative motion between objects, and absolute space was a fantasy of our common sense. So Galileo and Leibniz took a very positivistic view of space, while Newton took the position of a realist.

The reason you do not feel the motion of the Earth as it orbits the Sun and rotates upon its axis is that, for the most part, you are moving in a straight line at a constant speed. For example, the Earth takes about 365 days to orbit the Sun and complete a full 360° revolution about it, so that comes to about 1 degree/day. The width of your index finger at arm’s length also subtends an angle of about 1°. Now imagine being able to drive a car all day long in a straight line at 66,660 miles/hour, and finding that at the end of the day, you have only deviated from your desired straight-line path by the width of your index finger at arm’s length, when you look back at your starting point! Most of us would likely congratulate ourselves on being able to drive in such a straight line. Because the circumference of the Earth’s orbit is over 584 million miles and it takes 365 days to cover that distance, the Earth essentially moves in a straight line over the course of a single day to a very good approximation. And the same can be said of your motion as the Earth turns upon its axis. This motion has a tighter radius of curvature, but at a much lower tangential velocity, so again your acceleration is quite small and you do not detect it.
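Here is a quick back-of-the-envelope sketch of this arithmetic in Python, using the same round figures quoted above:

import math

# Round figures for the Earth's orbit used in the discussion above
orbit_circumference = 584_000_000      # miles
days_per_orbit = 365

# Turning rate: 360 degrees of orbit spread over 365 days is about 1 degree/day
degrees_per_day = 360 / days_per_orbit
print(f"Turning rate: {degrees_per_day:.2f} degrees/day")

# Orbital speed - the same ballpark as the 66,660 miles/hour figure quoted above
speed_mph = orbit_circumference / (days_per_orbit * 24)
print(f"Orbital speed: {speed_mph:,.0f} miles/hour")

# Sideways deviation from a perfectly straight path after one day of driving,
# computed as the sagitta of a one-day arc of the orbital circle
radius = orbit_circumference / (2 * math.pi)        # about 93 million miles
theta = math.radians(degrees_per_day)
deviation = radius * (1 - math.cos(theta))
miles_per_day = orbit_circumference / days_per_orbit
print(f"Distance covered in one day: {miles_per_day:,.0f} miles")
print(f"Deviation from a straight line: {deviation:,.0f} miles "
      f"({100 * deviation / miles_per_day:.2f}% of the day's travel)")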

In 1905 Einstein extended Galileo’s idea that you cannot tell, using experimental devices, whether you are moving in a straight line at a constant speed or standing still, by applying this idea of relative motion to electromagnetic experiments as well. With that, Einstein was able to derive the special theory of relativity using simple high school mathematics. In Is Information Real? we noted Einstein’s strong adherence to a positivistic approach to a relative space and time versus Newton’s concept of an absolute space and time. In Einstein’s original conceptualization of relativity, he only dealt with observable phenomena like the ticking of light clocks, the paths and timings of light beams, and the lengths of objects measured directly with yardsticks. Einstein did not make any reference to an absolute space or time that we presume exists, but which we cannot directly measure, as Newton did in his Principia.

In the 1830s, Michael Faraday began conducting a series of electrical and magnetic experiments and came up with the idea of fields. Take a bar magnet and cover it with a piece of paper. Then sprinkle some iron filings over it. The “lines of force” that you see are a field. Faraday called it a field because it reminded him of a freshly plowed farmer’s field. At each point on the paper, the magnetic force from the underlying magnet has a certain strength and a certain direction, and together these define the magnetic field of the magnet. Now Faraday thought that the electric and magnetic fields that he observed were “real”, but the French thought that his fields were just a mathematical contrivance. In their view, to calculate the effects from a collection of charged objects and current-carrying wires, you should pick a point in space and then use the formulas developed by the French physicists Ampere and Coulomb to calculate the strength and direction of the resulting magnetic and electrical forces. The French were quite happy with the positivistic concept of electric and magnetic forces as being an “action at a distance”, the same concept used by Newton for the gravitational force in his Principia. There was another problem though; these spooky “action at a distance” forces had to travel with an infinite velocity. Imagine the Earth as it orbits the Sun at 66,660 miles/hour. Since Newton’s gravitational force depends upon the exact distance between the Sun and the Earth, which is constantly changing for an elliptical orbit, if the gravitational force traveled with a finite speed, then the gravitational force from the Sun would have to lead the Earth, like a hunter firing on a flock of ducks flushed from the reeds. How would the Sun know where to shoot the gravitational force in advance, and with the correct strength to hit the Earth squarely in mid-flight, if the gravitational force traveled at less than an infinite speed?

Faraday, on the other hand, felt that electrical charges and current carrying wires created real fields in space, and that charged objects then interacted with these surrounding fields. This idea could also be extended to the gravitational force as well, and eliminate the mysterious “action at a distance” problem. The Sun creates a strong gravitational field that extends out to the Earth, and the Earth interacts with the gravitational field as it orbits the Sun. The idea of electric and magnetic fields being “real” was further bolstered in 1864 when James Clerk Maxwell published A Dynamical Theory of the Electromagnetic Field, in which he unified the electric and magnetic forces into a single combined electromagnetic force. Maxwell demonstrated that a changing magnetic field could create an electric field and that, similarly, a changing electric field could create a magnetic field. This meant that electric and magnetic fields could break free of charged objects and currents in wires and propagate through space as a self-propagating wave. If you wiggle a charged object back and forth, electromagnetic waves peel off. This is how the oscillating electrons in a radio antenna send out radio waves and the electrons jumping around within atoms send out light. The velocity of the electromagnetic wave came out to be:
v = 1/√(με)

v = 3 x 10^8 m/sec - the speed of light!

This was truly a remarkable result. The constant μ is measured by observing the strength of the magnetic field surrounding a current-carrying wire, and the constant ε is measured by observing the voltage across a charged capacitor. Both μ and ε seem to have nothing to do with light, yet the speed of light easily falls out from a simple relationship between the two, derived from a wave equation featuring both constants. This added credence to Faraday’s idea that electromagnetic fields were, indeed, real tangible things, and Maxwell’s prediction of electromagnetic waves further strengthened the reality of electromagnetic fields in 1886, when Heinrich Hertz was able to generate and detect electromagnetic radio waves. But even with the findings of Hertz, all that we know is that when we jiggle electrons on one side of a room, we make electrons on the other side of the room jiggle as well. Does that mean that there is a real electromagnetic wave involved? Fortunately, we can refine our experiment with the aid of a microwave oven. Open your microwave oven and remove the rotating platter within. Now get a small espresso cup and commit the ultimate Starbucks sin: reheat a small cup of cold coffee in the microwave at various positions within the oven. What you will find is that at some locations in the oven, the coffee gets quite hot, and at others, it does not. So what is happening? In the classical electrodynamics of Maxwell, there is a microwave standing wave within the oven. If you are fortunate enough to place your espresso cup at a point in the microwave oven where the standing wave is intense, the coffee will heat up quite nicely. If you place the espresso cup at a node point where the standing microwave is at a minimum, the coffee will not heat up very well. That is why they put the rotating platter in the microwave oven. By rotating objects in the oven, the objects pass through the hot and cold spots of the standing electromagnetic microwave and are evenly heated. So this is pretty convincing evidence that electromagnetic waves really do exist, even for a positivist.
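Maxwell’s little miracle is easy to recompute for yourself. Below is a minimal sketch in Python using the modern SI values of μ and ε for the vacuum; the 2.45 GHz operating frequency assumed for a typical home microwave oven is my own illustrative figure:

import math

mu_0 = 4 * math.pi * 1e-7   # H/m, measured from the magnetic field around a current-carrying wire
eps_0 = 8.854e-12           # F/m, measured from the voltage across a charged capacitor

# Maxwell's result: a wave speed computed from two constants that seem to have
# nothing to do with light
v = 1 / math.sqrt(mu_0 * eps_0)
print(f"v = {v:.3e} m/sec")                    # about 3 x 10^8 m/sec - the speed of light!

# Assuming a home microwave oven runs at about 2.45 GHz, the hot spots of the
# standing wave should be spaced half a wavelength apart
f = 2.45e9                                     # Hz (illustrative assumption)
wavelength = v / f
print(f"Hot spots about {100 * wavelength / 2:.1f} cm apart")   # about 6.1 cm

If you measure the spacing of the hot spots in your own oven, you can even run this logic backwards and estimate the speed of light from a cup of coffee.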

But now let us look at this same experiment from the point of view of quantum mechanics and QED. If you have been paying attention, you might have noticed that our microwave oven is simply a physical implementation of the famous “particle in a box” we discussed previously in Quantum Software. The only difference is that we are using microwave photons in our box instead of electrons. Now according to quantum mechanics and QED, the reason that the coffee in the Espresso cup got hotter in some spots and less hot in others is that the probability of finding microwave photons at certain spots in the oven is greater than finding them at other spots based upon the square of the amplitude of the wavefunctions Ψ of the photons. But remember, in the Copenhagen interpretation of quantum mechanics, the wavefunctions Ψ of particles and photons are not “real” waves, they are only probability waves – just convenient mathematical constructs that don’t “really” exist, similar to the electromagnetic waves of the mid-19th century that did not “really” exist either.

Another example of the clash between positivism and realism comes from the very beginnings of quantum mechanics. According to classical electrodynamics, the walls of the room in which you are currently located should be at a temperature of absolute zero, having converted all of the energy of their free electrons into ultraviolet light and x-rays. This was known as the “Ultraviolet Catastrophe” at the end of the 19th century and is another example of an effective theory bumping up against the limitations of its effective range of reliable prediction. In 1900, Max Planck was able to resolve this dilemma by proposing that the energy of the oscillating electrons in the walls of your room was quantized into a set of discrete integer multiples of an elementary unit of energy E = hf.
Thus:

E = nhf

where
n = 1, 2, 3, …
h = Planck’s constant = 4.136 x 10^-15 eV sec
f = frequency of the electron oscillation

Max Planck regarded his inspiration/revelation of the quantization of the oscillation energy of the free electrons and their radiated energy as a mathematical trick to overcome the Ultraviolet Catastrophe. But in 1905, the same year that he published the special theory of relativity, Einstein proposed that Planck’s discovery was not a mathematical trick at all. Einstein proposed that sometimes light, an electromagnetic wave in classical electrodynamics, could also behave like a stream of “real” quantized particles, which we now call photons, with energy:

E = hf

Although Einstein took a very positivistic position in his development of the special theory of relativity, he was a true realist at heart who could never quite accept the very positivistic Copenhagen interpretation of quantum mechanics. In 1927, Niels Bohr and Werner Heisenberg proposed this interpretation, now known as the Copenhagen interpretation because Bohr was working at the University of Copenhagen Institute of Theoretical Physics at the time. The Copenhagen interpretation contends that absolute reality does not really exist. Instead, there are an infinite number of potential realities. This satisfies Max Born’s contention that wavefunctions are just probability waves. Einstein had a hard time with the Copenhagen interpretation because he thought that it verged upon solipsism. Solipsism is a philosophical idea going back to Ancient Greece, which holds that you are the whole thing, and the physical Universe is just a figment of your imagination. So I would like to thank you very much for thinking of me and bringing me into existence. Einstein’s opinion of the Copenhagen interpretation can best be summed up by his question, “Is it enough that a mouse observes that the Moon exists?”. Einstein’s rejection of the Copenhagen interpretation is rather interesting. Recall that in Is Information Real? we saw that Einstein’s original interpretation of the special theory of relativity (1905) was very positivistic, since he relied solely upon what could be observed with meter sticks and clocks, and totally rejected Newton’s concepts of absolute space and time because they could not be physically observed. Despite this, in his elder years Einstein held many profound philosophical debates with Bohr on the topic of quantum mechanics, since he could not accept the extreme positivism of the Copenhagen interpretation, which held that only the observations of things really existed and not the things themselves. In the Copenhagen interpretation, the wavefunctions or probability clouds of electrons surrounding an atomic nucleus are just that, potential electrons waiting to be measured.

Because Einstein detested the Copenhagen interpretation of quantum mechanics so much, he published a paper in 1935 with Boris Podolsky and Nathan Rosen which outlined what is now known as the EPR Paradox. The EPR Paradox goes like this. Suppose we prepare two quantum mechanically “entangled” electrons that conserve angular momentum: one with spin-up and one with spin-down. Now let the two electrons fly apart and let two observers measure their spins. If observer A measures his electron with spin-up, then observer B must measure his electron with spin-down with a probability of 100% in order to conserve angular momentum. Now there is nothing special about the directions in which observers A and B make their spin measurements. Suppose observer A rotates his magnets by 90° to measure the spin of his electron and observer B does not. Then observer B will only have a 50% chance of measuring his electron with spin-down. How does the electron at observer B know that observer A has rotated his magnets when the electron arrives at observer B? Einstein thought that the EPR paper was the final nail in the coffin of quantum mechanics. There had to be some “hidden variables” that allowed electrons to know if they “really” had a spin-up or spin-down. You see, for Einstein, absolute reality really existed. For Einstein, the apparent probabilistic nature of quantum mechanics was an illusion, like the random() function found in most computer languages. The random() function just points to a table of apparently random numbers that are totally predictable if you look at the table in advance. You normally initialize the random() function with a “seed” from the system clock of the computer you are running on to simulate randomness by starting at different points in the table.
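The random() analogy is easy to demonstrate for yourself. Here is a minimal sketch in Python showing that a “random” sequence is completely predictable once you know the seed it started from:

import random

# Seed the generator with a fixed value instead of the system clock
random.seed(42)
first_run = [random.random() for _ in range(3)]

# Reset the generator to the very same starting point in the "table"
random.seed(42)
second_run = [random.random() for _ in range(3)]

# The two "random" sequences are identical - the randomness was an illusion
print(first_run == second_run)    # True

In this analogy, Einstein’s hidden variables would have played the role of the seed.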

However, in 1964 John S. Bell published a paper in which he proposed an experiment that could actually test the EPR proposition. In the 1980s and 1990s, a series of experiments were indeed performed that showed that Einstein was actually wrong. Using photons and polarimeters, instead of the spin of electrons, these experiments showed that photons really do not know their quantum states in advance of being measured and that determining the polarization of a photon by observer A can immediately change the polarization of another photon 60 miles away. These experiments demonstrated that the physical Universe is non-local, meaning that Newton’s spooky “action at a distance” is built into our Universe, at least for entangled quantum particles. This might sound like a violation of the special theory of relativity because it seems like we are sending an instantaneous message faster than the speed of light, but that is really not the case. Both observer A and observer B will measure photons with varying polarizations at their observing stations separated by 60 miles. Only when observer A and observer B come together to compare results will they realize that their observations were correlated, so it is impossible to send a message with real information using this experimental scheme. Clearly, our common sense ideas about space and time are still lacking, and so are our current effective theories.
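We can even sketch the logic of these photon experiments in code. The little Python simulation below uses the textbook quantum prediction for polarization-entangled photon pairs, namely that the two photons give the same pass/fail result with probability cos²(Δθ), where Δθ is the angle between the two polarimeters; the angles and trial counts are arbitrary illustrative choices. Notice that observer B’s local results remain a 50/50 coin flip no matter what observer A does with his polarimeter, which is why no message can be sent:

import math
import random

def measure_pair(angle_a, angle_b):
    """Sample one entangled photon pair measured by polarimeters at the given
    angles (radians). Returns (a_passes, b_passes) using the textbook quantum
    prediction: the two outcomes agree with probability cos^2(angle_a - angle_b)."""
    a_passes = random.random() < 0.5          # A's result alone is pure chance
    agree = random.random() < math.cos(angle_a - angle_b) ** 2
    return a_passes, a_passes == agree

# Observer B's local statistics look like noise whatever A's setting is...
for angle_a in (0.0, math.pi / 4, math.pi / 2):
    b_count = sum(measure_pair(angle_a, 0.0)[1] for _ in range(100_000))
    print(f"A at {math.degrees(angle_a):4.1f} deg -> B passes {b_count / 1000:.1f}% of the time")

# ...the correlations only appear later, when A and B compare their records.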

From all of the above we can see that the ongoing philosophical clash between positivism and realism in physics has a lot at stake, so let us return to the positions presented in Dreams of a Final Theory and The Grand Design to see how they deal with it in the search for a Final Theory.

The Case For Realism
In Dreams of a Final Theory in the chapter Against Philosophy, Weinberg goes on with a discussion of the negative impact of positivism on physics. Although positivism did help Einstein to break free of an absolute space and time that could not be directly observed, and also helped Heisenberg formulate his version of quantum mechanics that only dealt with observable quantities, and which ultimately led to the Heisenberg Uncertainty Principle, Weinberg finds that positivism in total has done far more harm to physics than good. He points to the extreme positivism of Ernst Mach at the turn of the 20th century, which suppressed the idea that atoms and molecules were real things and retarded the acceptance of Boltzmann’s statistical mechanics (see The Demon of Software for more on that).

To further this point, Weinberg describes the misfortunes of Walter Kaufmann. In 1897 both J. J. Thomson and Walter Kaufmann were experimenting with cathode rays in evacuated glass tubes. A cathode ray is really a stream of energetic electrons, but the idea of an electron was an unknown concept back in 1897. Using an electric field between two parallel charged plates and a pair of magnets, J. J. Thomson was able to deflect the path of the electrons on their way to hitting a fluorescent screen. With these measurements he was able to calculate the charge to mass ratio of the particles in the cathode rays and found that the likely mass of these particles was much less than that of atoms, so J. J. Thomson took a leap and concluded that the particles in cathode rays must be some kind of constituent part of atoms. J. J. Thomson is thus credited with the discovery of the electron, the first fundamental particle discovered by mankind. However, Walter Kaufmann in Berlin performed a very similar set of experiments months earlier than J. J. Thomson and even came up with a more accurate value for the charge to mass ratio of electrons than did J. J. Thomson. But Kaufmann was a positivist and could not bring himself to believe that he had discovered a fundamental particle that was a constituent part of atoms since he did not even believe in atoms in the first place, because they could not be directly observed. Weinberg justly comments:

What after all does it mean to observe anything? In a narrow sense, Kaufmann did not even observe the deflection of cathode rays in a given magnetic field; he measured the position of a luminous spot on the downstream side of a vacuum tube when wires were wound a certain number of times around a piece of iron near the tube and connected to a certain electric battery and used accepted theory to interpret this in terms of ray trajectories and magnetic fields. Very strictly speaking, he did not even do that: he experienced certain visual and tactile sensations that he interpreted in terms of luminous spots and wires and batteries.

Figure 1 – J.J. Thomson’s Experimental Cathode Ray Tube

In the chapter Quantum Mechanics and its Discontents, Weinberg portrays a very interesting hypothetical dialogue between Charles Dickens’ Scrooge and Tiny Tim. In this imagined exchange, the realist Scrooge debates the reality of the quantum wavefunction with the positivist Tiny Tim. Scrooge maintains that quantum wavefunctions are just as real as anything else in this strange quantum world of ours, while Tiny Tim adheres to the positivistic Copenhagen interpretation of quantum mechanics, which holds that wavefunctions are just mathematical devices that only produce probabilistic predictions of what is observed, so that wavefunctions themselves are not really real (see Quantum Software and The Foundations of Quantum Computing for more details).

Weinberg finds merit in both positions, but in the end, sides with the realist Scrooge.

Scrooge: …It is true enough that the electron does not have a definite position and momentum at the same time, but this just means that these are not appropriate quantities to use in describing the electron. What an electron or any collection of particles does have at any time is a wave function. If there is a human observing the particles, then the state of the whole system including the human is described by a wave function. The evolution of the wave function is just as deterministic as the orbits of particles in Newtonian mechanics.

Tiny Tim: …The wave function has no objective reality, because it cannot be measured … All that we ever measure are quantities like positions or momenta or spins, and about these we can predict probabilities. And until some human intervenes to measure these quantities, we cannot say that the particle has any definite state at all.

Scrooge: … Wave functions are real for the same reason that quarks and symmetries are – because it is useful to include them in our theories. Any system is in a definite state whether any humans are observing it or not; the state is not described by a position or a momentum but by a wave function.

Tiny Tim: …Let me just remind you of a serious problem you get into when you imagine the wave function to be real. This problem was mentioned in an attack on quantum mechanics by Einstein at the 1933 Solvay Conference in Brussels and then in 1935 written up by him in a famous paper with Boris Podolsky and Nathan Rosen. Suppose that we have a system consisting of two electrons, prepared in such a way that the electrons at some moment have a known large separation and a known total momentum….


Tiny Tim then goes on to outline the EPR Paradox described above. But Scrooge does not have a problem with the EPR Paradox:

Scrooge: I can accept it…..(While you were at it, you might have mentioned that John Bell has come up with even weirder consequences of quantum mechanics involving atomic spins, and experimental physicists have demonstrated that the spins in atomic systems really do behave in the way expected from quantum mechanics, but that is just the way the world is.) It seems to me that none of this forces us to stop thinking of the wave function as real; it just behaves in ways that we are not used to, including instantaneous changes affecting the wave function of the whole universe.

In the chapter Tales of Theory and Experiment Weinberg points out that one of the dangers of positivism is its overreliance upon observation and experimentation. In How To Think Like A Scientist, I highlighted how the old miasma theory of disease, the theory that diseases are caused by foul-smelling air, had a lot of supporting empirical evidence. People who lived near foul-smelling 19th-century rivers in England were more prone to dying of cholera than people who lived further from the rivers, and we had death certificate data to prove that empirical fact. Weinberg uses the example of Einstein’s general theory of relativity to drive home his point (see Cyberspacetime). In Newtonian mechanics, a single planet, all by itself with no sister planets, should orbit the Sun in an ellipse that never changes its orientation. The major axis of the ellipse, the length of its oblong dimension, should always point in the same direction like a compass needle. However, because the other planets of the solar system tug on the elliptical orbit of Mercury, it is found that the major axis of Mercury’s orbit actually precesses or slowly rotates by 575 seconds of arc every century. That means over a period of 225,000 years the orbit of Mercury makes one full rotation like a slowly spinning egg. The problem was that the Newtonian theory of gravity predicted that the tugs on Mercury’s orbit from the other planets should only add up to 532 seconds of arc every century, leaving 43 seconds per century unexplained. When Einstein applied his new general theory of relativity to the problem in 1915, he immediately saw that it predicted the additional precession of 43 seconds per century from the Sun’s gravitation alone. Newton’s theory of gravitation is based upon a linear equation, where a small change to the mass of the Sun simply makes a proportionally small change to its gravitational field, while Einstein’s general theory of relativity is framed in terms of nonlinear differential equations. The gravitational field of the Sun has an energy, and since in Einstein’s theory spacetime is deformed by both mass and energy, there is a nonlinear positive feedback loop between the energy in the gravitational field and the gravitational field itself – they both feed off of each other, leading to planets precessing. Weinberg calls Einstein’s finding a retrodiction, meaning that his new theory very accurately produced a result that matched a baffling astronomical observation already in existence. Surprisingly, this retrodiction did not completely convince a skeptical physics community of the validity of the general theory of relativity. That did not happen until 1919 when a group of astronomers used a total eclipse of the Sun to search for a prediction that the general theory of relativity made. The general theory of relativity predicts that light from a distant star passing close to the Sun during a total eclipse will be deflected by the Sun because the light will pass through the distorted spacetime surrounding the Sun. So all you have to do is make a photographic plate of a group of stars 6 months before the total eclipse and then compare the positions of the same stars in a plate taken during the total eclipse. The stars closest to the blocked-out Sun should shift in position due to the bending of their light as it passes close to the Sun. But these shifts are quite small. A grazing star should only shift by 1.75 seconds of arc during a total eclipse. 
The width of your index finger at arm’s length subtends about 1°, which is 3,600 seconds of arc, so 1.75 seconds is quite small indeed. However, the 1919 expeditions did indeed report observing such small shifts to within a 10% accuracy of Einstein’s prediction, and this made Einstein and his general theory of relativity an overnight sensation. However, the observations from several subsequent total eclipses of the Sun in the ensuing years did not find such stunning results. Some even found deflections that appeared to disagree with the general theory of relativity. After all, it is not easy to make such delicate observations. You have to compare photographic plates that were taken at different times, and perhaps with different equipment. So as in all the experimental and observational sciences, corrections must be applied to account for the limitations of observations performed with real-life error-prone physical devices. The focus of the telescope may not have been exactly the same for both plates, and plates can shrink or expand with temperature differences that might have occurred when the two observations were made. Experimentalists always try to do a good job of applying these corrections, but Weinberg suspects that in the heat of the moment, experimentalists sometimes subconsciously fall prey to MICOR – Make It Come Out Right. They subconsciously keep applying corrections to their data until it fits the findings of a new theory. Weinberg argues that it is much harder to bend the mathematics of a new theory to match already existing unbiased experimental observations than it is to “correct” experimental data until it matches a new theory. Consequently, the retrodictions of a new theory are a much better way to validate a new theory than its ability to make predictions of observations never before made.
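The angular arithmetic quoted above is easy to verify. Here is a quick sketch in Python using the figures from the text:

FULL_CIRCLE_ARCSEC = 360 * 3600        # 1,296,000 seconds of arc in a full circle

# Mercury's perihelion precession, in seconds of arc per century
observed = 575
newtonian = 532                        # the tugs from the other planets
print(f"Unexplained precession: {observed - newtonian} arcsec/century")   # the famous 43

# Time for Mercury's elliptical orbit to make one full rotation
centuries = FULL_CIRCLE_ARCSEC / observed
print(f"One full rotation every {100 * centuries:,.0f} years")            # about 225,000 years

# The 1919 eclipse shift of a grazing star versus a finger width at arm's length
finger_width_arcsec = 3600             # about 1 degree
grazing_shift_arcsec = 1.75
print(f"Finger width / star shift: about {finger_width_arcsec / grazing_shift_arcsec:,.0f} to 1")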

The end result is that one must always keep in mind the limitations imposed by observations in the real world, especially now that some of the most promising theories of physics, like supersymmetric string theory, seem to have overtaken our technological capabilities by many orders of magnitude, and most likely we will never be able to obtain the necessary energies to validate them.

The Case For Model-Dependent Realism
As we saw above, model-dependent realism maintains that there is no absolute reality after all; we can only hope for a collection of effective theories that present a series of models of reality that are confirmed by empirical observation, with each model essentially creating a reality of its own. In The Grand Design the authors explain it this way:

In physics, an effective theory is a framework created to model certain observed phenomena without describing in detail all of the underlying processes… Similarly, we cannot solve the equations governing the behavior of complex atoms and molecules, but we have developed an effective theory called chemistry that provides an adequate explanation of how atoms and molecules behave in chemical reactions without accounting for every detail of the interactions.

Because we only have a collection of effective theories, we cannot be certain that an absolute reality even exists. To illustrate this point they turn to the analogy of a goldfish living in a curved goldfish bowl. The goldfish could certainly make observations of moving bodies outside of the goldfish bowl, but due to the distortion caused by light refracting into the curved goldfish bowl, the effective theories that the goldfish would come up with would be very complicated indeed. For the goldfish, a freely moving body would not travel in a straight line, but with enough mathematics, the goldfish could certainly come up with a modified version of Newtonian mechanics that could predict the path of freely moving objects and of objects subjected to a driving force as seen from within the goldfish bowl.

If a goldfish formulated such a theory, we would have to admit the goldfish’s view as a valid picture of reality.

In a further analogy, one that comes quite close to what we see with software running in the Software Universe, they go on to explain that if our observations of the physical Universe were really only the observations of an alien computer simulation that we were all caught up in, as in The Matrix, how could we possibly distinguish true reality from a simulated reality? Their conclusion is:

These examples bring us to a conclusion that will be important in this book: There is no picture- or theory-independent concept of reality. Instead, we will adopt a view that we will call model-dependent realism: the idea that a physical theory or world picture is a model (generally of a mathematical nature) and a set of rules that connect the elements of the model to observations. This provides a framework with which to interpret modern science.

According to model-dependent realism, it is pointless to ask if a model is real, only whether it agrees with observation. If there are two models that agree with observation, like the goldfish’s picture and ours, then one cannot say that one is more real than another. One can use whichever model is more convenient in the situation under consideration. For example, if one were inside the bowl, the goldfish’s picture would be useful, but for those outside, it would be very awkward to describe events from a distant galaxy in the frame of a bowl on earth, especially because the bowl would be moving as the earth orbits the sun and spins on its axis.

According to the idea of model-dependent realism introduced in Chapter 3, our brains interpret the input from our sensory organs by making a model of the outside world. We form mental concepts of our home, trees, other people, the electricity that flows from wall sockets, atoms, molecules, and other universes. These mental concepts are the only reality we can know. There is no model-independent test of reality. It follows that a well-constructed model creates a reality of its own.

Steven Weinberg is a master of quantum field theory and is famous for combining the electromagnetic and weak interactions into a single electroweak interaction, and that is why he is a credible advocate for pursuing a Final Theory for an absolute reality. However, let us try to extend model-dependent realism to quantum field theory as well.

In The Foundations of Quantum Computing, we discussed quantum field theories. In quantum field theories, all the particles and forces we observe in the Universe are modeled as fields that extend over the entire universe with varying amplitudes that define the probability of observing the fields as a quantized particle. Thus there are matter fields like electron fields, quark fields, and neutrino fields, along with force fields like the electromagnetic field, the weak field, and the strong field. When you observe one of the matter fields, a quantized particle pops out, like an electron, a neutrino, or a clump of quarks in a meson or in a baryon like a proton or neutron. Similarly, when you observe a force-carrying field, you observe the photon of the electromagnetic force, the W+, W- or Z0 bosons of the weak force, or the gluons of the strong force. In quantum field theories, the forces or interactions between matter particles (fermions) are modeled as exchanges of force-carrying particles (bosons). Thus the repulsive electromagnetic interaction or force between two electrons scattering off each other is depicted as an exchange of virtual photons between the two.

The very first quantum field theory, quantum electrodynamics – QED, matured in 1948 when all the pesky infinities were removed with a mathematical process called renormalization. With QED it became possible, at least theoretically, to explain all of the possible interactions between electrons, protons, and photons, and consequently, all of the things that you encounter in your daily life that deal with the physical and chemical properties of matter, such as the texture of objects, their colors, hardness, ductility, tensile strengths and chemical activity. One of the things that QED could not explain was why the Sun shines. To explain why the Sun shines, we need an interaction that can turn protons into neutrons, as the Sun fuses hydrogen nuclei composed of a single proton into helium nuclei composed of two protons and two neutrons. Since a proton is composed of two up quarks and one down quark, while a neutron is composed of one up quark and two down quarks, we need an interaction that can turn up quarks into down quarks and that is exactly what the weak interaction or force can do. In 1967 Steven Weinberg proposed a model that combined the electromagnetic and weak interactions, which predicted the Higgs boson and the Z0 boson, and is now known as the electroweak interaction. Thus, the electroweak interaction can explain all of QED, and why the Sun shines, all at the same time.

Now depicting the electromagnetic force as the exchange of virtual photons might seem a bit farfetched to you, since you have no first-hand experience with quantum effects, so let us repeat an experiment that Michael Faraday might have performed 170 years ago to help us out of this jam. Grab two small styrofoam packing peanuts from your last online purchase. Using a needle and thread, attach a packing peanut to each end of the thread, and then rub the two packing peanuts in your hair to have them pick up some extra electrons. Now hold the thread in its middle, so that the packing peanuts are free to dangle straight down. You will observe a very interesting thing: instead of dangling straight down, the packing peanuts will repel each other, and the thread will form a Λ shape. Now stare at that for a few minutes. This will, no doubt, not seem so strange to you because you do have some experience with similar electrical effects, but think about it for a few minutes anyway. Something very strange, indeed, is going on. The whole mass of the Earth is trying to pull those two packing peanuts straight down, but some mysterious thing, which apparently is much stronger, is keeping them apart. What could it be? Coulomb would borrow Newton’s idea of a “spooky action at a distance” to explain that there is an electrical force between the charged packing peanuts, and that the electrical force is keeping them apart. In fact, given the amount of charge on the peanuts, the mass of each peanut, and the total length of the thread, Coulomb would be able to exactly predict the angle of the Λ shape formed by the thread and the dangling packing peanuts. So Coulomb’s model is quite useful in making predictions despite using a “spooky action at a distance”. Michael Faraday would go one step further. In Faraday’s model, each charged packing peanut creates an electric field about itself, and the other charged packing peanut then interacts with the electric field by moving away from it. Faraday’s model not only predicts the angle of the packing peanuts, it can also be used to derive the speed of light as we saw above, so Faraday’s model is even more useful than Coulomb’s. With QED we can model the behavior of the packing peanuts as the result of the exchange of a huge number of virtual photons. QED makes all of the predictions that Coulomb’s and Faraday’s models make and, in addition, as we saw in The Foundations of Quantum Computing, can use 72 Feynman diagrams to predict the gyromagnetic ratio of the electron to 11 decimal places. With QED we can also formulate all chemical reactions as approximations of the underlying electrodynamics. Finally, with the unified electroweak quantum field theory of Steven Weinberg we can even explain why the Sun shines, in addition to all of the above.
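To make Coulomb’s prediction concrete, here is a sketch in Python that numerically solves the standard static force balance for the half-angle θ of the Λ shape, tan θ = F_electric/(mg), with the two peanuts separated by r = 2ℓ sin θ; the charge, mass, and thread length below are made-up illustrative values:

import math

k = 8.99e9      # N m^2/C^2, Coulomb's constant
g = 9.81        # m/s^2, acceleration of gravity

# Made-up illustrative values for the experiment
q = 15e-9       # C, static charge rubbed onto each peanut (a few nanocoulombs)
m = 1e-4        # kg, mass of each packing peanut (about 0.1 gram)
length = 0.20   # m, length of each half of the thread

def force_imbalance(theta):
    """Electric repulsion minus the horizontal restoring component of gravity
    at angle theta from vertical; zero at the equilibrium angle."""
    r = 2 * length * math.sin(theta)            # separation of the two peanuts
    return k * q * q / r**2 - m * g * math.tan(theta)

# Bisection: electricity wins near straight down, gravity wins near horizontal
lo, hi = 1e-9, math.pi / 2 - 1e-9
for _ in range(60):
    mid = (lo + hi) / 2
    if force_imbalance(mid) > 0:
        lo = mid        # repulsion still winning: peanuts swing further apart
    else:
        hi = mid

print(f"Equilibrium half-angle of the lambda: {math.degrees(mid):.1f} degrees")

For these particular made-up numbers the thread settles at a half-angle of roughly 13 degrees. All of the models discussed here predict the same angle; they just tell very different stories about why.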

So which is it? Are the packing peanuts being held apart by Coulomb’s “spooky action at a distance”, Faraday’s electric field, QED’s exchange of virtual photons, or Steven Weinberg’s electroweak quantum field theory? What is “really” going on? Well, nobody really knows. The four models we just discussed are all effective theories that make predictions about the phenomenon of repelling packing peanuts with varying degrees of accuracy – that is all we really know. Each of these four models may seem a bit strange, but that does not matter. What matters is that they all make some accurate predictions of what we observe and offer various degrees of insight into what is really going on. Model-dependent realism would say that each model simply creates its own reality.

Hopes For a Final Theory
So we see that the ongoing philosophical debate over positivism and realism does bear upon the future of physics because it does frame the fundamental question of whether an absolute reality exists or not. If there is no absolute reality, then there can be no Final Theory of everything, but if absolute reality does exist, then a Final Theory is at least a possibility. Stephen Hawking and Leonard Mlodinow do offer some hope along these lines in the form of M-theory. M-theory is a collection of the 5 versions of supersymmetric string theory. In the network of string theories that comprise M-theory, the Universe contains 11 dimensions – the four-dimensional macroscopic spacetime that we are all familiar with and 7 additional compacted dimensions that we are not aware of. Unfortunately, the compacted dimensions are so small that they cannot be observed, but one-dimensional strings and two-dimensional membranes of energy can vibrate within them, yielding all of the particles and forces of nature that we do observe. In fact, p-brane objects of dimension p = 0 to 9 can vibrate within the 11 dimensions. The 11 dimensions can be curled up in about 10^500 different ways, and each way defines a different universe with different physical laws. This collection of approximately 10^500 different universes forms a multiverse, or what Leonard Susskind calls The Cosmic Landscape (2006).

A cosmic landscape of 10^500 possible universes can also help with explaining the Weak Anthropic Principle, the idea that intelligent beings will only find themselves in universes capable of supporting intelligent beings, by providing a mechanism for the formation of a multiverse composed of an infinite number of bubble universes. In 1986, Andrei Linde published the Eternally Existing Self-Reproducing Chaotic Inflationary Universe, in which he described what has become known as the Eternal Chaotic Inflation theory. In this model, our Universe is part of a much larger multiverse that has not yet decayed to its ground state. Quantum fluctuations in a scalar field within this multiverse create rapidly expanding “bubble” universes, and our Universe is just one of an infinite number of such “bubble” universes. A scalar field is just a field that has only one quantity associated with each point in space, like a weather map that lists the temperatures observed at various towns and cities across the country. Similarly, a vector field is like a weather map that shows both the wind speed and direction at various points on the map. In the Eternal Chaotic Inflation model, there is a scalar field within an infinite multiverse which is subject to random quantum fluctuations, like the quantum fluctuations described by the quantum field theories we saw in The Foundations of Quantum Computing. One explanation of the Weak Anthropic Principle is that these quantum fluctuations result in universes with different sets of fundamental laws. Most bubble universes that form in the multiverse do not have a set of physical laws compatible with intelligent living beings and are quite sterile, but a very small fraction do have physical laws that allow for beings with intelligent consciousness. Remember, a small fraction of an infinite number is still an infinite number, so there will always be plenty of bubble universes within this multiverse capable of supporting intelligent beings.

I have a West Bend Stir Crazy popcorn popper that helps to illustrate this model. My Stir Crazy popcorn popper has a clear dome that rests upon a nearly flat metal base with a central stirring rod that constantly rotates, keeping the popcorn kernels well oiled and constantly tumbling over each other as the heating element beneath heats the cooking oil and popcorn kernels to a critical popping temperature. As the popcorn kernels heat up, the water in each kernel begins to boil, creating a great deal of internal steam pressure. You can think of this hot mix of oil and kernels as a scalar field not in its ground state. All of a sudden, and in a seemingly random manner, quantum fluctuations form in this scalar field, and individual “bubble” universes of popped corn explode into reality. Soon my Stir Crazy multiverse is noisily filling with a huge number of rapidly expanding bubble universes, and the aroma of popped corn is just delightful. Now each popped kernel has its own distinctive size and geometry. If you were a string theorist, you might say that for each popped kernel the number of dimensions and their intrinsic geometries determine the fundamental particles and interactions found within that bubble popcorn universe. Now just imagine a Stir Crazy popcorn popper of infinite size and age, constantly popping out an infinite number of bubble universes, and you have a pretty good image of a multiverse based upon the Eternal Chaotic Inflation model.

Stephen Hawking and Leonard Mlodinow go on to apply model-dependent realism to M-theory and find hope for a Final Theory of sorts:

M-theory is not a theory in the usual sense. It is a whole family of different theories, each of which is a good description of observations only in some range of physical situations. It is a bit like a map. As is well known, one cannot show the whole of the earth’s surface on a single map. The usual Mercator projection used for maps of the world makes areas appear larger and larger in the far north and south and doesn’t cover the North and South Poles. To faithfully map the entire earth, one has to use a collection of maps, each of which covers a limited region. The maps overlap each other, and where they do, they show the same landscape. M-theory is similar. The different theories in the M-theory family may look very different, but they can all be regarded as aspects of the same underlying theory. They are versions of the theory that are applicable only in limited ranges – for example, when certain quantities such as energy are small. Like the overlapping maps in a Mercator projection, where the ranges of different versions overlap, they predict the same phenomena. But just as there is no flat map that is a good representation of the earth’s entire surface, there is no single theory that is a good representation of observations in all situations.

…. Each theory can describe and explain certain properties, neither theory can be said to be better or more real than the other. Regarding the laws that govern the universe, what we can say is this: There seems to be no single mathematical model or theory that can describe every aspect of the universe. Instead, as mentioned in the opening chapter, there seems to be a network of theories called M-theory. Each theory in the M-theory network is good at describing phenomena within a certain range. Whenever their ranges overlap, the various theories in the network agree, so they can all be said to be parts of the same theory. But no single theory within the network can describe every aspect of the universe – all the forces of nature, the particles that feel those forces, and the framework of space and time in which it all plays out. Though this situation does not fulfill the traditional physicist’s dream of a single unified theory, it is acceptable within the framework of model-dependent realism.

Model-Dependent Realism in Softwarephysics
When I transitioned into IT in 1979, after being a young exploration geophysicist for several years, I fully expected that, since the IT community had already been working with software for several decades, there would already be an established set of effective theories of software behavior at hand that I could use in this new and intimidating career, just as I had had as a physicist. I figured that since IT people already knew what software “really” was, this should not have been a difficult thing to achieve. Instead, I learned that not much progress had been made along these lines since I had started programming back in 1972. The standard model of the day held that source code was compiled into machine code, which was then loaded into the memory of a computer to run under a CPU process that ultimately played quantum mechanical tricks with iron atoms on disk drives and silicon atoms in chips. But I found that model of software behavior to be quite lacking and not very useful in making the day-to-day decisions required of me by my new IT career. I realized that what was needed were some pragmatic effective theories of software behavior, like the ones we had in physics for “real” tangible things like photons and electrons. It seemed to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse to better understand the behavior of commercial software by comparing software to how things behaved in the physical Universe. I figured that if you could apply physics to geology, why not apply physics to software? So I decided to establish a simulated science called softwarephysics to do just that, and to pattern softwarephysics after geophysics as a set of high-level hybrid effective theories combining the effective theories of physics, chemistry, biology, and geology into one integrated package. In physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software.

Since I knew that software was not really “real”, I was not faced with the positivism versus realism dilemma, and consequently, I simply took a very positivistic approach to software behavior from the very start. However, being fully aware of the pitfalls of positivism, I always tried to stress that softwarephysics only provided models of software behavior that were just approximations; they were not what software “really” was. Remember, a vice is simply a virtue carried to an extreme, and when carried to an extreme, positivism certainly becomes a damaging thing that can stifle the imagination and impede progress. But used wisely, positivism can also be quite beneficial. I believe that is what the concept of model-dependent realism entails - a useful application of positivism that can assist the progress of physics in the 21st century. I have been using this approach in softwarephysics for over 30 years and have found it quite useful in modeling the Software Universe with a suite of effective theories. I never tried to search for a Final Theory of softwarephysics because I knew that I was working in a simulated universe that did not have one. Perhaps Stephen Hawking and Leonard Mlodinow are right, and this approach can be applied equally well to our physical Universe too.
In truth, as Stephen Hawking and Leonard Mlodinow showed in The Grand Design, physicists have really been using model-dependent realism all along, ever since the time of Copernicus; they just did not know it at the time.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston
