Friday, January 28, 2011

How to Use Your IT Skills to Save the World

I just finished The Flooded Earth (2010) by Peter Ward, one of my favorite paleontologists. Peter Ward is an expert on mass extinctions, particularly the Permian-Triassic mass extinction at the Paleozoic-Mesozoic boundary 251 million years ago, which he described in Under a Green Sky (2007). Several other books by Peter Ward provide further interesting insights into the nature of mass extinctions - The Life and Death of Planet Earth (2003), Gorgon – Paleontology, Obsession, and the Greatest Catastrophe in Earth’s History (2004), and Out of Thin Air (2006) - all of which are very good reads. The Flooded Earth picks up where Under a Green Sky left off, and both books posit that mankind might be initiating a human-induced greenhouse gas mass extinction by burning up all the fossil fuels that have been laid down in the Earth’s crust over hundreds of millions of years. What differentiates Peter Ward from many climatologists is that, as a geologist and paleontologist, he looks at climate change from the perspective of deep time, rather than focusing on just the past 100,000 years or so. Following the well-tested geological concept of Hutton’s and Lyell’s uniformitarianism, in which the “present is key to the past”, Peter Ward inverts the logic to point out that the past may be key to the future as well. By looking back at past greenhouse gas mass extinctions, like the Permian-Triassic mass extinction, we can get a sense of what mankind might have in store for itself if carbon dioxide levels keep rising.

The Earth’s Long-Term Climate Cycle in Deep Time
As we saw in Software Chaos, weather systems are examples of complex nonlinear systems that are very sensitive to small changes in initial conditions. The same goes for the Earth’s climate; it is a highly complex nonlinear system that we have been experimenting with for more than 200 years by pumping large amounts of carbon dioxide into the atmosphere. The carbon dioxide level of the atmosphere has risen to 390 ppm, up from a level of about 280 ppm prior to the Industrial Revolution. If this trend continues, computer models of the nonlinear differential equations that define the Earth’s climate indicate that we are going to melt the polar ice caps and also the ice stored in the permafrost of the tundra. If that should happen, sea level will rise about 300 feet, and my descendants in Chicago will be able to easily drive to the new seacoast in southern Illinois for a day at the beach. Worse yet, Peter Ward points to recent research indicating that a carbon dioxide level of as little as 1,000 ppm might trigger a greenhouse gas mass extinction that could wipe out about 95% of the species on Earth and make the Earth a truly miserable planet to live upon.
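To see just how touchy a nonlinear system can be, consider the logistic map, the classic textbook example of chaos. The little Python sketch below is my own illustration, not anything from an actual climate model: it runs two copies of the map that differ in their starting point by only one part in a billion, and after 50 iterations the two runs bear no resemblance to each other.

```python
# Sensitive dependence on initial conditions in a nonlinear system,
# demonstrated with the logistic map x -> r*x*(1 - x) at r = 4.0,
# the fully chaotic regime.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map 'steps' times starting from x0."""
    x = x0
    history = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        history.append(x)
    return history

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)   # perturbed by one part in 10^9

# Early on the two runs agree closely; by step 50 they are unrelated.
print(f"step 10 difference: {abs(a[10] - b[10]):.2e}")
print(f"step 50 difference: {abs(a[50] - b[50]):.2e}")
```

This is exactly why long-range weather forecasts break down after a week or two: no measurement of today’s atmosphere is accurate to one part in a billion.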

This is not a fantasy. The Earth’s climate does change with time, and these changes have greatly affected life in the past and will continue to do so on into the future. By looking into deep time, we can see that there have been periods in the Earth’s history when the Earth has been very inhospitable to life and nothing like the Earth of today. Over the past 600 million years, during the Phanerozoic Eon when complex life first arose, we have seen five major mass extinctions, and it appears that four of them were greenhouse gas mass extinctions. The fifth, the Cretaceous-Tertiary mass extinction that wiped out the dinosaurs 65 million years ago, ended the Mesozoic Era and kicked off the Cenozoic Era, and was caused by the impact of a comet or asteroid.

We are living in a very strange time in the history of the Earth. The Earth has been cooling for the past 40 million years, as carbon dioxide levels have significantly dropped. This has happened for a number of reasons. Due to the motions of continental plates caused by plate tectonics, the continents of the Earth move around like bumper cars at an amusement park. With time, all the bumper car continents tend to smash up in the middle to form a huge supercontinent, like the supercontinent Pangea that formed about 275 million years ago. When supercontinents form, the amount of rainfall on the Earth tends to decline because much of the landmass of the Earth is then far removed from the coastlines of the supercontinent and is cut off from the moist air that rises above the oceans. Consequently, little rainwater with dissolved carbon dioxide manages to fall upon the continental rock. Carbon dioxide levels in the Earth’s atmosphere tend to increase at these times because not much carbon dioxide is pulled out of the atmosphere by the chemical weathering of rock to be washed back into the sea by rivers as carbonate ions. However, because the silicate-rich continental rock of supercontinents, which is lighter and thicker than the heavy iron-rich basaltic rock of the ocean basins, floats high above the ocean basins like a blanket, the supercontinents tend to trap the Earth’s heat. Eventually, so much heat is trapped beneath a supercontinent that convection currents form in the taffy-like asthenosphere below the rigid lithospheric plate of the supercontinent. The supercontinent then begins to break apart, as plate tectonic spreading zones appear, like the web of cracks that form in a car windshield that takes a hit from several stray rocks, while following too closely behind a dump truck on the freeway. This continental cracking and splitting apart happened to Pangea about 150 million years ago. 
As the continental fragments disperse, subduction zones appear on their flanks, forcing up huge mountain chains along their boundaries, like the mountain chains that run along the west coast of the entire Western Hemisphere, from Alaska down to the southern tip of Argentina. Some of the fragments also collide to form additional mountain chains along their contact zones, like the east-west trending mountain chains of the Eastern Hemisphere that run from the Alps all the way to the Himalayas. Because there are now many smaller continental fragments with land much closer to the moist oceanic air, rainfall on land increases, and because of the newly formed mountain chains, chemical weathering and erosion of rock increase dramatically. The newly formed mountain chains on all the continental fragments essentially suck carbon dioxide out of the air and wash it down to the sea as dissolved carbonate ions.

The breakup of Pangea and the subsequent drop in carbon dioxide levels have caused a 40-million-year cooling trend on Earth, and about 2.5 million years ago, carbon dioxide levels dropped so low that the Milankovitch cycles were able to initiate a series of a dozen or so ice ages. The Milankovitch cycles are caused by minor changes in the Earth’s orbit and inclination that lead to periodic coolings and warmings. In general, the Earth’s temperature drops by about 15° Fahrenheit over about 100,000 years and then increases by about 15° Fahrenheit over about 10,000 years. During the cooling period we have an ice age, because the snow in the far north does not fully melt during the summer and builds up into huge ice sheets that push down to the lower latitudes. Carbon dioxide levels also drop to about 180 ppm during an ice age, which further keeps the planet in a deep freeze. During the 10,000 year warming period, we have an interglacial period, like the Holocene interglacial that we now find ourselves in, and carbon dioxide levels rise to about 280 ppm.

Thus the Earth usually does not have polar ice caps; we just happen to have arrived on the scene at a time when the Earth is unusually cold and has polar ice caps. From my home in the suburbs of Chicago, I can easily walk to an abandoned quarry of a Devonian limestone reef, clear evidence that my home was once under the gentle waves of a shallow inland sea several hundred million years ago, when there were no ice caps and the Earth was much warmer. Resting on top of the Devonian limestone is a thick layer of rocky glacial till left behind by the ice sheets of the Wisconsin glacial period that ended 10,000 years ago, as vast ice sheets withdrew and left Lake Michigan behind. The glacial till near my home is part of a terminal glacial moraine. This is a hilly section of very rocky soil that was left behind as a glacier acted like a giant conveyor belt, delivering large quantities of rocky soil and cobbles to be dumped at the end of the icy conveyor belt to form a terminal moraine. It is like all that dirt and gravel you find on your garage floor in the spring. The dirt and gravel were transported into your garage by the snow and ice clinging to the undercarriage of your car, and when the snow and ice melted, they dropped a mess on your garage floor. This section of land was so hilly and rocky that the farmers of the area left it alone and did not cut down the trees, so now it is a forest preserve. My great grandfather used to hunt in this glacial moraine, and my ancestors also used the cobbles to build the foundations and chimneys of their farmhouses and barns. There is a big gorge in one section of the forest preserve where you can still see the leftover effects of this home-grown mining operation for cobbles.

The Effect of Climate Cycles Upon Life
The long-term climatic cycles brought on by these plate tectonic bumper car rides have also greatly affected the evolution of life on Earth. Two of the major environmental factors affecting the evolution of living things on Earth have been the amount of solar energy arriving from the Sun and the atmospheric gases surrounding the Earth that held it in. For example, billions of years ago the Sun was actually less bright than it is today. Our Sun is a star on the main sequence that is using the proton-proton reaction and the carbon-nitrogen-oxygen cycle in its core to turn hydrogen into helium-4, and consequently, turn matter into energy that is later radiated away from its surface. As a main-sequence star ages, its energy-producing core begins to contract as the amount of helium-4 waste rises, and it shifts from using the proton-proton reaction to relying more heavily upon the carbon-nitrogen-oxygen cycle, which runs at a higher temperature. Thus, as a main-sequence star ages, its core contracts and heats up, and it begins to radiate more energy at its surface. For example, the Sun currently radiates about 30% more energy than it did about 4.5 billion years ago, when it first formed and entered the main sequence, and about 1.0 billion years ago, the Sun radiated about 10% less energy than today. Fortunately, the Earth’s atmosphere had plenty of greenhouse gasses, like carbon dioxide, in the deep past to augment the low energy output of our youthful, but somewhat anemic, Sun. Using some simple physics, you can quickly calculate that if the Earth did not have an atmosphere containing greenhouse gases, like carbon dioxide, the surface of the Earth would be on average 59° Fahrenheit cooler than it is today and would be totally covered by ice. So in the deep past greenhouse gases, like carbon dioxide, played a crucial role in keeping the Earth’s climate warm enough to sustain life.
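That “simple physics” is just an energy balance: the sunlight the Earth absorbs must equal the blackbody radiation it sends back to space, per the Stefan-Boltzmann law. Here is a quick back-of-the-envelope version in Python; the solar constant, albedo, and 288 K mean surface temperature are standard textbook figures that I have supplied myself, not numbers taken from The Flooded Earth.

```python
# Back-of-the-envelope check: with no greenhouse gases, the Earth's surface
# would sit at its blackbody equilibrium temperature.

SOLAR_CONSTANT = 1361.0   # W/m^2, sunlight at the top of the atmosphere (textbook value)
ALBEDO = 0.30             # fraction of sunlight reflected straight back to space
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

# Absorbed sunlight, averaged over the whole spherical surface (the factor
# of 4), must balance the blackbody radiation sigma * T^4:
absorbed = SOLAR_CONSTANT * (1.0 - ALBEDO) / 4.0
t_no_greenhouse = (absorbed / SIGMA) ** 0.25          # Kelvin

t_actual = 288.0                                      # observed mean surface temperature, K
deficit_f = (t_actual - t_no_greenhouse) * 9.0 / 5.0  # Kelvin difference -> Fahrenheit degrees

print(f"Airless-Earth temperature: {t_no_greenhouse:.0f} K")
print(f"Greenhouse warming: about {deficit_f:.0f} Fahrenheit degrees")
```

The airless Earth comes out near 255 K, roughly 33 K colder than the observed 288 K, which works out to about 59 - 60 Fahrenheit degrees of greenhouse warming that we currently enjoy.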
People tend to forget just how narrow a knife edge the Earth is on, between being completely frozen over on the one hand, and boiling away its oceans on the other. For example, in my Chicago suburb the average daily high is 24° Fahrenheit on January 31st and 89° Fahrenheit on August 10th. That’s a whopping 65° Fahrenheit spread, just due to the Sun being 47° higher in the sky on June 21st than on December 21st. But the fact that the Sun has been slowly increasing in brightness over geological time presents a problem. Without some counteracting measure, the Earth would heat up and the Earth’s oceans would vaporize, giving the Earth a climate more like that of Venus, which has a surface temperature hot enough to melt lead. Thankfully, there has been such a counteracting measure in the form of a long-term decrease in the amount of carbon dioxide in the Earth’s atmosphere, principally caused by living things extracting carbon dioxide from the air to make carbon-based organic molecules, which later get deposited into sedimentary rocks, oil, gas, and coal. These carbon-laced sedimentary rocks and fossil fuels then plunge back deep into the Earth at the many subduction zones around the world that result from plate tectonic activities. Fortunately, over geological time the competing factors of a brightening Sun and an atmosphere with decreasing carbon dioxide levels have kept the Earth in a state capable of supporting complex life.

Greenhouse Gas Mass Extinctions
But as I always say, a vice is just a virtue carried to an extreme. Geological history now shows that an overabundance of carbon dioxide in the atmosphere can bring on a greenhouse gas mass extinction. The classic case is the Permian-Triassic (P-T) mass extinction at the Paleozoic-Mesozoic boundary 251 million years ago. Greenhouse gas extinctions are thought to be caused by periodic flood basalts, like the Siberian Traps flood basalt of the late Permian. A flood basalt begins as a huge plume of magma several hundred miles below the surface of the Earth. The plume slowly rises and eventually breaks through to the surface of the Earth, forming a huge flood basalt that spills basaltic lava over an area of millions of square miles to a depth of several miles. Huge quantities of carbon dioxide bubble out of the magma over a period of several hundred thousand years, which greatly increases the ability of the Earth’s atmosphere to trap heat from the Sun. For example, during the Permian-Triassic mass extinction, carbon dioxide levels may have reached a level as high as 3,000 ppm, much higher than the 390 ppm of today. Most of the Earth warms to tropical levels, with little temperature difference between the equator and the poles and with daily highs that can reach 140° Fahrenheit. This shuts down the thermohaline conveyor that drives the deep ocean currents. Currently, the thermohaline conveyor begins in the North Atlantic, where high winds and cold polar air reduce the temperature of ocean water through evaporation and concentrate its salinity, making the water very dense. The dense North Atlantic water, with lots of dissolved oxygen, then descends to the ocean depths and slowly winds its way around the entire Earth, until it ends up back on the surface in the North Atlantic several thousand years later.
Before submerging, the cold salty water of the North Atlantic picks up large quantities of dissolved oxygen, since cold water can hold much more dissolved oxygen than warm water, and this supply of oxygen from the surface keeps the water at the bottom of the oceans oxygenated. When this thermohaline conveyor stops for an extended period of time, the water at the bottom of the oceans is no longer supplied with oxygen, and only bacteria that can survive on sulfur compounds manage to live on in the anoxic conditions. These sulfur-loving bacteria metabolize sulfur compounds to produce large quantities of the highly toxic gas hydrogen sulfide, the distinctive component of the highly repulsive odor of rotten eggs, which has a severely damaging effect upon both marine and terrestrial life. As the hydrogen sulfide gas bubbles up from below, the sky turns a dingy green from all the hydrogen sulfide gas in the air, and the oceans, choked with sulfur-loving bacteria, eventually turn a greasy purple and go totally anoxic because of all the dissolved hydrogen sulfide gas. Nearly all fish species and other complex marine life die off in the oceans, as even the surface waters eventually lose the oxygen needed to support complex life. The hydrogen sulfide gas also erodes the ozone layer, because the ozone of the upper atmosphere is consumed as it oxidizes the hydrogen sulfide gas. This allows damaging ultraviolet light to reach the Earth’s surface, which destroys the DNA of plant and animal life alike. Oxidation of hydrogen sulfide gas in the atmosphere, combined with the dramatic drop in oxygen-producing life forms on both the land and in the sea, also causes the oxygen level of the atmosphere to drop to a suffocating low of 12%, compared to the current level of 21%.

The combination of severe climate change, changes to atmospheric and oceanic oxygen levels and temperatures, the toxic effects of hydrogen sulfide gas, and the loss of the ozone layer causes a rapid extinction of about 95% of all species over a period of about a hundred thousand years. And unlike an impact mass extinction from an incoming comet or asteroid, like the one that wiped out the dinosaurs, a greenhouse gas mass extinction does not quickly reverse itself, but persists for millions of years until the high levels of carbon dioxide are flushed from the atmosphere and oxygen levels once again rise. In the stratigraphic section, this is seen as a thick section of rock with decreasing numbers of fossils and decreasing fossil diversity leading up to the mass extinction, and a thick layer of rock above the mass extinction level with very few fossils at all, representing the long recovery period of millions of years required to return the Earth’s environment to a more normal state.

A Human-Induced Greenhouse Gas Mass Extinction
The good news is that even if we do pump up the carbon dioxide level of the Earth’s atmosphere to 1,000 ppm over the next 100 – 150 years, it might take a couple of thousand years for a greenhouse gas mass extinction to begin to unfold, so we do have some time to think this over before proceeding along this path of self-destruction. The bad news is that human beings tend to live for the moment and are not very much concerned with anything more than a few months out. The person who chopped down the first tree on Easter Island was probably thinking about the same things as the person who chopped down the last tree on Easter Island. Throughout history, human beings have had a tendency to squander resources until none were left and then to move on. Unfortunately, this strategy seems to fall short when applied to an entire planet. But why should we worry about people living 2,000 years from now? Please remember that people are probably no smarter today than they were 200,000 years ago when Homo sapiens first appeared. All this carbon-based energy technology stems from the Scientific Revolution of the 17th century, which led to the first commercially successful steam engine invented by Thomas Newcomen in 1712 (see A Lesson From Steam Engines for details). But don’t forget that we are enjoying the technological benefits of the second Scientific Revolution, not the first. The first Scientific Revolution was initiated by Thales of Miletus in about 600 B.C., when he began to explain natural phenomena in terms that did not rely upon mythology. Luckily for us, the Greek Scientific Revolution failed. Otherwise there could have been a technological explosion by 400 B.C. that dramatically increased the population of the Earth to 9 billion people. 
There could have been billions of people back then running around in SUVs burning up all the oil and coal thousands of years ago, leaving us with none and a carbon dioxide level of 1,000 ppm that initiated a greenhouse gas mass extinction. You could be sitting here today, with a green sky and purple ocean, hungry, sweltering, and miserable on an Earth barely capable of sustaining life!

Perhaps we can work our way out of this mess with some additional geoengineering. After all, we seem to have geoengineered our way into this mess in the first place, so maybe we can geoengineer our way out too. People have suggested that we could inject sulfate particles, or aerosols, into the upper atmosphere, like volcanoes do when they blow their tops, to reflect incoming sunlight. Or we could use robotic ships to increase the cloud cover over the oceans by injecting water mists into the air. I have read about a proposal to build 10 million computer-controlled carbon dioxide scrubbers to extract all of our current emissions at a cost of several hundred billion dollars per year. The only problem with the proposal is what do you do with 28 billion tons of carbon dioxide gas each year? That’s a lot of gas to inject into sandstone reservoirs! Geoengineering on such scales is going to be expensive. Imagine the reverse situation. Suppose the Earth was found to be dramatically cooling for lack of carbon dioxide, and we had to artificially pump up the carbon dioxide level of the atmosphere to stave off the ice sheets of an impending Ice Age. Imagine if we had to pay billions of people to explore for and produce all the oil and coal we use today and then burn it in cars and plants that did nothing useful beyond generating carbon dioxide. Suppose you were required to spend several hours per day running a machine that simply spewed out carbon dioxide and cost you several hundred dollars per week to operate. Such efforts would have a major impact on the world economy. Granted, there are a whole host of ingenious geoengineering solutions being offered, but they all have side effects, and I am afraid that many of the side effects are not even known and would not become apparent until the geoengineering solutions were put into effect.
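Just to get a feel for how much gas 28 billion tons of carbon dioxide really is, here is a little ideal-gas arithmetic. This is my own back-of-the-envelope sketch; the 28-billion-ton figure comes from the proposal above, and the rest are standard physical constants.

```python
# Rough scale of the storage problem: what volume does 28 billion metric
# tons of CO2 occupy as a gas at surface conditions? (Sequestration would
# compress it, but this gives a feel for the raw annual quantity.)

MASS_TONS = 28e9                    # annual CO2 emissions, metric tons (figure from the text)
KG_PER_TON = 1000.0
MOLAR_MASS_CO2 = 0.044              # kg per mole of CO2
MOLAR_VOLUME_STP = 0.0224           # m^3 per mole of ideal gas at 0 C, 1 atm

moles = MASS_TONS * KG_PER_TON / MOLAR_MASS_CO2
volume_m3 = moles * MOLAR_VOLUME_STP
volume_km3 = volume_m3 / 1e9        # 1 km^3 = 10^9 m^3
cube_edge_km = volume_km3 ** (1.0 / 3.0)

print(f"Volume at STP: about {volume_km3:,.0f} cubic kilometers")
print(f"A cube roughly {cube_edge_km:.0f} km on a side, every year")
```

That works out to roughly 14,000 cubic kilometers of gas at surface conditions, a cube about 24 km on a side, every single year. Compressing it for injection shrinks it enormously, but finding secure geological storage for that much material, year after year, is still a monumental undertaking.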
Clearly, the safest approach is to simply stop emitting carbon dioxide by switching to solar, wind, geothermal, nuclear, and wave energy sources. A viable fusion reactor would certainly do the job too, but the joke is that fusion power is the power of the future and always will be. Unless we get serious about these alternative energy sources very quickly, the future does not look very bright.

The problem is that we have known about this problem for over 50 years and have done absolutely nothing about it! When we first began measuring the carbon dioxide level of the Earth’s atmosphere at the Mauna Loa observatory in Hawaii in 1958, the carbon dioxide level was rising at a rate of 0.95 ppm per year. Now the carbon dioxide level is rising at a rate of 2.39 ppm per year. In the coming years, it will be rising at a rate of 3 or 4 ppm per year, as the demand for energy explodes in the emerging economies of the world. Even now, the carbon dioxide level of the Earth’s atmosphere is rising about 1,000 times faster than it did during the period that led up to the Permian-Triassic mass extinction 251 million years ago. So we don’t have that much time to waste. Economically recoverable oil will be gone in about 50 years, so over the next few decades we will need to switch to coal on an ever-increasing basis to satisfy the world’s growing appetite for energy. Coal is pretty cheap, and my suspicion is that coal will remain a little cheaper than the alternative sources of energy for quite some time. It never seems to be the right time to begin switching to the alternative forms of energy. Either the world economy is humming along splendidly and we don’t want to damage it with a burdensome conversion cost, or the world economy has tanked and we don’t want to make things worse. So it seems that there never will be a good time to make the conversion to renewable sources of energy. And as global warming in the 21st century begins to produce a rising sea level, increasingly extreme weather conditions, and shifting rain patterns that create droughts and regional floods, as outlined in The Flooded Earth, there will be even less economic incentive to make the transition, because so much money will need to be diverted to dealing with the consequences of global warming.
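If you take the two Mauna Loa growth rates quoted above at face value and assume that the growth rate keeps climbing at the same steady pace, you can crank out a crude projection of when we would hit the 1,000 ppm danger level. This is a simple trend extrapolation, not a carbon-cycle model, and the 1958 starting level of about 315 ppm is a figure I have supplied myself.

```python
# Crude extrapolation of the Mauna Loa trend: start from the 1958 reading
# and let the annual growth rate itself keep climbing at the pace observed
# between 1958 (0.95 ppm/yr) and 2011 (2.39 ppm/yr).

START_YEAR, START_PPM, START_RATE = 1958, 315.0, 0.95   # 315 ppm is an assumed starting level
RATE_GROWTH = (2.39 - 0.95) / (2011 - 1958)             # extra ppm/yr gained per year, ~0.027

def year_reaching(target_ppm):
    """Step one year at a time until the CO2 level hits target_ppm."""
    year, ppm, rate = START_YEAR, START_PPM, START_RATE
    while ppm < target_ppm:
        ppm += rate
        rate += RATE_GROWTH
        year += 1
    return year

print("1,000 ppm reached around:", year_reaching(1000.0))
```

Under these assumptions the 1,000 ppm mark arrives in the middle of the 22nd century, only a couple of lifetimes away, which is precisely why it is so hard to get anyone to care about it today.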
Plus, we will always be faced with Garrett Hardin's Tragedy of the Commons (1968). If the United States and Europe do convert to renewable energy resources, but China, India and the other emerging economies do not, then the problem will still continue. Global warming is not an environmental problem, like others that the world has successfully addressed. Most environmental problems can usually be corrected in a few decades or so by halting the production of one offending organic molecule and substituting a more benign molecule in its place, such as we did with the phasing out of Freon. Global warming, on the other hand, is a world-wide geophysical problem that might take hundreds of thousands of years to correct. But there is a way out.

What You Can Do as an IT Professional
So how do we get 6 billion people to cooperate in reducing carbon dioxide emissions if it requires the self-sacrifice of the individual for the greater good of all? Clearly, given the limitations of human nature, if that is our only recourse, we are all doomed. However, in the past decade or so we have been able to get nearly 6 billion people to cooperate through the spread of capitalism and the globalization of the world economy brought on by high-speed fiber optic telecommunication networks and the spread of IT technology. This was not accomplished through the self-sacrifice of the individual. Instead, it was accomplished through the amazing “Invisible Hand” of Adam Smith, which allows the self-interest of the individual to benefit society as a whole. The only way we can possibly solve this problem is by making it easier and cheaper not to emit carbon dioxide than to continue on with business as usual, and IT technology is the only thing that can do that. IT is a truly transformative force in the world today. Just look to the recent revolutions in Tunisia and Egypt, and the spreading revolt throughout the entire Middle East, all the way to Iran. In the past decade, the United States spent over a trillion dollars toppling Saddam Hussein in Iraq, and now it appears that all you need is the Internet, Facebook, Twitter and some blogs like this one.

This is where you come in as an IT professional. IT technology has tremendous potential for reducing our consumption of energy and reducing emissions. For example, I work for a really great company that instituted a “hoteling” initiative several years ago. Now I only go into the office about one day per month for special “in-person” group meetings. These “in-person” meetings are also broadcast as webconferences, so basically I just go into the office once a month for old times’ sake. All you need is broadband connectivity to a good VPN, some email and instant messaging software, some voice over IP telephony software, and some software for webconferencing for meetings and group collaboration efforts, and you have a virtual distributed office network! Your company probably has a lot of this already in place. Our set-up is so good that it passes the equivalent of a Turing Test: I cannot tell if my co-workers are in the office or out of the office when I work with them.

Let me describe my typical workday. Middleware Operations supports all the infrastructure software installed upon 200 production Unix servers, including Apache, Websphere, JBoss, Tomcat, ColdFusion, CTG, MQ, OFX, and many third-party custom software products. We install application software into these products and keep the whole thing up and running on a 24x7 basis. As a member of Middleware Operations, I work very strange hours, including nights, weekends, and holidays. We are on pager duty one week per month, being Day Primary one month and Night Primary the next. We only work 40 hours per week, but we don’t get to pick which 40 hours! For example, when on Night Primary for Middleware Operations, I might work a few hours from 1:00 – 3:00 AM troubleshooting a problem before my first Change Management teleconference meeting at 9:00 AM. This meeting is a teleconference where Change Management, MidOps, UnixOps, NetOps, and Application Development go over all the upcoming scheduled change tickets to look for scheduling conflicts. I call into the meeting from my laptop using voice over IP software and view the change calendar report on an internal website. During the Change Management meeting, only about 5% of the change tickets really apply to Middleware Operations, so I check on all the pending change tickets in our queue that I need to approve for Middleware Operations. While I am listening in on the meeting and approving tickets, I will also get instant message pop-ups from project managers and Application Development programmers with questions about the status of pending change tickets, or answers to questions that I have emailed them concerning their implementation plans, or I might get an instant message pop-up from a staff member in Bangalore, India with an update on some of the installs that happened the previous night. I also work my email at the same time during the meeting.
Later in the day, I might join a webconference for a walkthrough of an install plan for an upcoming change ticket with Application Development. During that meeting, I might get an instant message pop-up from a team member who lives in Indiana, or I might initiate an instant messaging session with him to discuss a technical problem on one of our websites. When on Day Primary, I might also get paged into a conference call for a website outage via my BlackBerry. On these voice over IP conference calls we might have 5 – 10 people using a group instant messaging session to troubleshoot the problem. I might have 20 or so windows open while troubleshooting, looking at monitoring tools and logged into a large number of Unix servers to look at logs and to restart software components as needed. Later in the day, I might join a webconference for a demo of a new software technology that is being introduced. Sometimes, I need to train a new team member. To do that, I set up a webconference so that the trainee can see what I am doing as I explain how a certain piece of software works. I can also promote the trainee to be the moderator of the webconference and watch the trainee go through what I have just demonstrated. Many times I will get paged by a frantic developer to approve a change ticket late in the day, when I would normally be driving home in a car and useless to all, but I just log back into our VPN and spend about 20 minutes going over the install plan before approving the ticket. I might even get paged to approve a last-minute ticket at 9:00 PM, but I just quickly log in again to review and approve it. To keep things from getting out of hand, I simply keep a running tab of the time I actually work for my employer. When I do some work, I just record my start and end times, and then credit the hours worked to my employer’s running tab.
When I begin work each day, I simply debit 8 hours from my employer’s running tab and only work the hours my employer actually needs me to perform duties, so some days I might only work 5 or 6 hours before logging off the VPN. In this way, I can work a 40-hour workweek, which is quite rare in IT, but from the perspective of my employer, I am always available when needed, like a time-sharing operating system, even though much of the time I am “idle” and doing other things, like living my personal life with my family. When my laptop acts up, I open a ticket, and somebody from desktop support uses software to take over my laptop remotely to fix it. Even when I do come into the office for a special “in-person” meeting, none of the above work processes really change! I do not physically walk to meeting rooms, because that is a waste of time; I simply join the teleconference as usual. When I have to work with a team member sitting a couple of cubes away, I don’t walk over to her cube either, because my old eyes cannot see her screen from more than 2 feet out. Instead, I just open a webconference, like I would do if I were working from my home office. So whether I am in the office or out of the office, I use software to work exactly the same way, regardless of where I am located physically in space. Some of my teammates even go out to events and run errands while on pager support and use a high-speed wireless card to connect to our VPN when necessary, but I like working from my home office at a nice desk, so I stick around the house while on pager duty. During my whole workday, I do not touch or generate a single piece of paper. With all this IT technology, it seems to me that it is time to turn our obsolete 20th century office buildings into condos with high population densities that limit the amount of traveling people need to do in their daily lives.

I strongly believe that a distributed office platform greatly benefits both me and my employer because it allows me to be much more productive and flexible in delivering my services, and I believe a distributed office platform could work for most office work in general. Imagine how much energy could be saved and carbon dioxide emissions reduced if nobody commuted to work! The reason people still physically travel to a central location for office work stems from a change in work habits brought about by the Industrial Revolution. Prior to the Industrial Revolution, craftsmen worked out of their homes. It was the steam engines of the Industrial Revolution that brought us together, but the steam engines are gone now, so why are people still driving to the office? I think it is just a holdover of 20th century thinking. Imagine that everybody in the world had been working from home offices for 20 years, using a distributed office platform, and somebody in a meeting came up with this really great idea: we should all spend an hour each morning and an hour each evening driving a car around in circles, spewing out carbon dioxide, and useless to all. This would improve productivity because it would help to reinforce the idea that work is a hardship to be suffered by all. After all, you really are not “working” unless you are suffering, and driving around in circles is surely a way to suffer. I have been in the workforce for 36 years, and I have run across a lot of people who confuse “being at work” with “doing work”. They may spend the whole day in the office “working”, but they do not do anything constructive the whole day, beyond bothering people who really are trying to get some work done!

We have the technology to solve this problem. What really is needed is a change in thinking. As I pointed out previously, as IT professionals we are all warriors in the Information Revolution. So if your company has not instituted a similar program, gather up a little courage and email your CEO a proposal explaining how moving to a virtual distributed office system based upon telecommuting can save your company and fellow employees a great deal of time and money, while improving productivity at the same time. Just ask him how much it costs each year to operate the company’s office buildings and pay the property taxes. You can play a significant role in fixing this problem through the implementation of IT technology, so put your IT skills and abilities to work!

For Further Study Please See Professor David Archer's Excellent Course on Climate Change
For a more detailed exploration of climate change, please see Professor David Archer's excellent course entitled Global Warming I: The Science and Modeling of Climate Change on Coursera. Just search the Coursera course catalog for "global warming" to find it. You can also listen to the Coursera lectures directly from a University of Chicago website. This is an excellent course that goes into much greater detail than this brief posting.

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston

Monday, January 10, 2011

Model-Dependent Realism - A Positivistic Approach to Realism

In the Introduction to Softwarephysics we saw how softwarephysics adopts a very positivistic view of software in that we do not care about what software “really” is; we only care about how software is observed to behave, and we only attempt to model this behavior with a set of effective theories. Recall that positivism is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. Effective theories are an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics works very well for objects moving in weak gravitational fields at less than 10% of the speed of light and which are larger than a very small mote of dust. For things moving at high velocities or in strong gravitational fields we must use relativity theory, and for very small things like atoms we must use quantum mechanics. All of the current theories of physics, such as Newtonian mechanics, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and quantum field theories like QED and QCD, are just effective theories that are based upon models of reality, and all these models are approximations - all these models are fundamentally "wrong". But at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply, and that is all positivism hopes to achieve. The opposite view is called realism, in which an external physical reality actually exists all on its own, independent of observation.
It really goes back to the age-old philosophical question – if a tree falls in the forest and nobody hears it - does it make a noise? For a realist it certainly does. For a positivist it may not unless some observable evidence is left behind in the Universe that it did.

A good comparison of these two worldviews can be found in Dreams of a Final Theory (1992) by Steven Weinberg and The Grand Design (2010) by Stephen Hawking and Leonard Mlodinow, in which they present their concept of a model-dependent realism. There is also a very nice synopsis of model-dependent realism in the October 2010 issue of Scientific American entitled The (Elusive) Theory of Everything by the same authors. Steven Weinberg is a brilliant theoretician and a winner of the 1979 Nobel Prize in Physics for his theory of the electroweak interaction, which unified the electromagnetic and weak interactions of the Standard Model of particle physics. In Dreams of a Final Theory, Weinberg makes a strong case for realism, since the discovery of a single all-encompassing final theory of everything would necessarily imply the existence of a single absolute reality independent of observation. Stephen Hawking is famous to all as a brilliant theoretician and, within the physics community, is best known for his theory of Hawking radiation emitted by black holes, which was the very first theory in physics to apply quantum effects to the general theory of relativity. Leonard Mlodinow is a physicist at Caltech and the author of many popular books on physics, and wrote for Star Trek: The Next Generation as well.

In The Grand Design, Stephen Hawking and Leonard Mlodinow present an alternate worldview to that of Weinberg’s realism. For them a single all-encompassing final theory of everything may not be possible because a single absolute reality may not exist. Instead, they take a more positivistic view with their concept of a model-dependent realism. Model-dependent realism maintains that there is no absolute reality after all. We can only hope for a collection of effective theories that present a series of models of reality that are confirmed by empirical observation, and each of these models essentially creates a reality of its own.

The Need For a Return to Natural Philosophy
Both books have few kind words for philosophers, but then both go into some pretty heavy philosophical discussions of positivism and realism, despite their apparent lack of confidence in philosophy. However, I think we shall see in this posting that both books actually seem to demonstrate that, based upon the findings of the 20th century, physics must once again return to its roots in natural philosophy in order to make progress in the 21st century. After all, the question of the existence of an absolute reality goes way back in philosophy, but since I have little background in philosophy, I will use the history of physics as a guide instead, as did the authors of both of these books.

The Grand Design begins with:

… How can we understand the world in which we find ourselves? How does the universe behave? What is the nature of reality? Where did all this come from? Did the universe need a creator? … Traditionally these are questions for philosophy, but philosophy is dead. Philosophy has not kept up with modern developments in science, particularly physics. Scientists have become the bearers of the torch of discovery in our quest for knowledge.

In Dreams of a Final Theory in the chapter Against Philosophy, Weinberg goes on with:

The value today of philosophy to physics seems to me to be something like the value of early nation-states to their peoples. It is only a small exaggeration to say that, until the introduction of the post office, the chief service of nation-states was to protect their peoples from other nation-states. The insights of philosophers have occasionally benefited physicists, but generally in a negative fashion – by protecting them from the preconceptions of other philosophers.

But I do not aim here to play the role of a philosopher, but rather that of a specimen, an unregenerate working scientist who finds no help in professional philosophy. I am not alone in this. I know of no one who has participated actively in the advance of physics in the postwar period whose research has been significantly helped by the work of philosophers. I raised in the previous chapter the problem of what Wigner calls the “unreasonable effectiveness” of mathematics; here I want to take up another equally puzzling phenomenon, the unreasonable ineffectiveness of philosophy.

Physicists do of course carry around with them a working philosophy. For most of us, it is a rough-and-ready realism, a belief in the objective reality of the ingredients of our scientific theories. But this has been learned through the experience of scientific research and rarely from the teachings of philosophers.

With that said, let us now explore the nature of reality from an historical perspective within physics.

The Historical Clash of Positivism and Realism in Physics
Before delving further into these two books, let us review the historical clash of positivism and realism in physics. The debate over positivism and realism has been going on within the physics community from the very start, and in recent years has escalated with the unfolding of the quantum nature of reality and also with the quest for what is called a unified Theory of Everything or a Final Theory that would explain it all and replace our current collection of effective theories with a single unified theory of true reality. It is generally thought that the modern Scientific Revolution of the 16th century began in 1543 when Nicolaus Copernicus published On the Revolutions of the Heavenly Spheres, in which he proposed his Copernican heliocentric theory that held that the Earth was not the center of the Universe, but that the Sun held that position and that the Earth and the other planets revolved about the Sun. A few years ago I read On the Revolutions of the Heavenly Spheres and found that it began with a very strange foreword that essentially said that the book was not claiming that the Earth actually revolved about the Sun; rather, the foreword proposed that astronomers may adopt many different models that explain the observed motions of the Sun, Moon, and planets in the sky, and so long as these models make reliable predictions, they do not have to exactly match up with the absolute truth. Since the foreword did not anticipate space travel, it also implied that, because nobody would ever be able to see from above what was really going on, nobody would ever really know for sure, and so there was no need to get too bent out of shape over the idea of the Earth moving. I found this foreword rather puzzling and so disturbing that I almost put On the Revolutions of the Heavenly Spheres down. But a little further research revealed the true story.
However, before we get to that, below is the foreword to On the Revolutions of the Heavenly Spheres in its entirety. It is well worth reading because it perfectly encapsulates the ongoing philosophical clash between positivism and realism in the history of physics.

To the Reader
Concerning the Hypotheses of this Work

There have already been widespread reports about the novel hypotheses of this work, which declares that the earth moves whereas the sun is at rest in the center of the universe. Hence certain scholars, I have no doubt, are deeply offended and believe that the liberal arts, which were established long ago on a sound basis, should not be thrown into confusion. But if these men are willing to examine the matter closely, they will find that the author of this work has done nothing blameworthy. For it is the duty of an astronomer to compose the history of the celestial motions through careful and expert study. Then he must conceive and devise the causes of these motions or hypotheses about them. Since he cannot in any way attain to the true causes, he will adopt whatever suppositions enable the motions to be computed correctly from the principles of geometry for the future as well as for the past. The present author has performed both these duties excellently. For these hypotheses need not be true nor even probable. On the contrary, if they provide a calculus consistent with the observations, that alone is enough. Perhaps there is someone who is so ignorant of geometry and optics that he regards the epicycle of Venus as probable, or thinks that it is the reason why Venus sometimes precedes and sometimes follows the sun by forty degrees and even more. Is there anyone who is not aware that from this assumption it necessarily follows that the diameter of the planet at perigee should appear more than four times, and the body of the planet more than sixteen times, as great as at apogee? Yet this variation is refuted by the experience of every age. In this science there are some other no less important absurdities, which need not be set forth at the moment. For this art, it is quite clear, is completely and absolutely ignorant of the causes of the apparent nonuniform motions. 
And if any causes are devised by the imagination, as indeed very many are, they are not put forward to convince anyone that they are true, but merely to provide a reliable basis for computation. However, since different hypotheses are sometimes offered for one and the same motion (for example, eccentricity and an epicycle for the sun’s motion), the astronomer will take as his first choice that hypothesis which is the easiest to grasp. The philosopher will perhaps rather seek the semblance of the truth. But neither of them will understand or state anything certain, unless it has been divinely revealed to him.
Therefore alongside the ancient hypotheses, which are no more probable, let us permit these new hypotheses also to become known, especially since they are admirable as well as simple and bring with them a huge treasure of very skillful observations. So far as hypotheses are concerned, let no one expect anything certain from astronomy, which cannot furnish it, lest he accept as the truth ideas conceived for another purpose, and depart from this study a greater fool than when he entered it. Farewell.

Now here is the real behind-the-scenes story. Back in 1539 Georg Rheticus, a young mathematician, came to study with Copernicus as an apprentice. It was actually Rheticus who convinced the aging Copernicus to finally publish On the Revolutions of the Heavenly Spheres shortly before his death. When Copernicus finally turned over his manuscript for publication to Rheticus, he did not know that Rheticus subcontracted out the overseeing of the printing and publication of the book to a Lutheran theologian by the name of Andreas Osiander, and it was Osiander who anonymously wrote and inserted the infamous foreword. My guess is that Copernicus was a realist at heart who really did think that the Earth revolved about the Sun, while his publisher, who worried more about the public reaction to the book, took a more cautious positivistic position. I think that all scientific authors can surely relate to this story.

Another early example of the clash between positivism and realism can be found in Newton’s Principia (1687), in which he outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.

However, in the very first few pages of Newton’s Principia, Newton also proposed that there really was an absolute and fixed space filling the entire Universe that all objects existed in and moved through. This absolute fixed space was like a stage or background upon which the motions of all the objects in the Universe were played out. Newton admitted that, as Galileo had proposed earlier, you could not measure this fixed and absolute space directly, but just the same, it still existed. Newton also proposed that there was a fixed and absolute universal time that flowed at a constant rate that all observers agreed upon, but which could not be measured directly either. Clocks really do not measure the rate at which time flows. Clocks can only measure amounts of time, just as rulers can measure amounts of space. To measure the rate at which time flows, you would need something like a speedometer, and we do not have such a device. The ideas of a fixed and absolute space and time are such common sense self-evident concepts that Newton almost dismissed dealing with them outright in the first few pages of the Principia because they seemed so obvious to him, but this turned out to ultimately lead to his undoing. It would take more than 200 years and the work of Albert Einstein to reveal the flaws in his reasoning. Thus for his concept of an absolute space and time, Newton took the viewpoint of a realist. Absolute space and absolute time “really” did exist even though they could not be directly observed.

Galileo, on the other hand, proposed that all motion is relative, meaning that you can only define motion as an observable change in the distance between objects. In 1632 Galileo published the Dialogue Concerning the Two Chief World Systems, in which he compared Ptolemy’s astronomical model, that positioned the Earth at the center of the Universe, with the heliocentric model of Copernicus. One of the chief objections against Copernicus’s model was that if the Earth really does move, how come we do not feel it moving? To counter this argument, Galileo noted that when down in the hold of a ship on a quiet sea, that it was impossible to tell if the ship was moving under sail or anchored at rest by simply performing physical experiments, like throwing balls or watching the drips from a dripping bottle fall into a basin. For Galileo, there was no such thing as absolute motion relative to some absolute and fixed space. Galileo’s concept of relative motion was carried forward further by Gottfried Leibniz, a contemporary and strident rival of Newton, who fervently claimed that there was no such thing as an absolute space that could not be observed; there was only relative motion between objects, and absolute space was a fantasy of our common sense. So Galileo and Leibniz took a very positivistic view of space, while Newton took the position of a realist.

The reason you do not feel the motion of the Earth as it orbits the Sun and rotates upon its axis is that, for the most part, you are moving in a straight line at a constant speed. For example, the Earth takes about 365 days to orbit the Sun and complete a full 360° revolution about it. So that comes to about 1 degree/day. The width of your index finger at arm’s length also subtends an angle of about 1°. Now imagine being able to drive a car all day long in a straight line at 66,660 miles/hour, and find that at the end of the day, you have only deviated from your desired straight line path by the width of your index finger at arm’s length, when you look back at your starting point! Most of us would likely congratulate ourselves on being able to drive in such a straight line. Because the circumference of the Earth’s orbit is over 584 million miles and it takes 365 days to cover that distance, the Earth essentially moves in a straight line over the course of a single day to a very good approximation. And the same can be said of your motion as the Earth turns upon its axis. This motion has a tighter radius of curvature, but at a much lower tangential velocity, so again your acceleration is quite small and you do not detect it.
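The arithmetic above is easy to check. Here is a quick sketch, using the same round figures as the text (a mean Earth-Sun distance of about 93 million miles and a 365-day year):

```python
import math

orbit_radius_miles = 93.0e6                        # mean Earth-Sun distance
circumference = 2 * math.pi * orbit_radius_miles   # comes out near 584 million miles
hours_per_year = 365 * 24

speed_mph = circumference / hours_per_year         # roughly 66,700 miles/hour
degrees_per_day = 360.0 / 365.0                    # just under 1 degree/day

print(round(circumference / 1e6))   # orbital circumference in millions of miles
print(round(speed_mph))             # orbital speed in miles/hour
print(round(degrees_per_day, 2))    # daily angular deviation in degrees
```

The daily bend in the Earth's path comes out to about 0.99°, which is indeed about the width of an index finger held at arm's length.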

In 1905 Einstein extended Galileo’s idea that you cannot tell, using experimental devices, whether you are moving in a straight line at a constant speed or standing still, by applying this idea of relative motion to electromagnetic experiments as well. With that, Einstein was able to derive the special theory of relativity using simple high school mathematics. In Is Information Real? we noted Einstein’s strong adherence to a positivistic approach to a relative space and time versus Newton’s concept of an absolute space and time. In Einstein’s original conceptualization of relativity, he only dealt with observable phenomena like the ticking of light clocks, the paths and timings of light beams, and the lengths of objects measured directly with yard sticks. Einstein did not make any reference to an absolute space or time that we presume exists, but which we cannot directly measure, as did Newton in his Principia.

In the 1830s, Michael Faraday began conducting a series of electrical and magnetic experiments, and came up with the idea of fields. Take a bar magnet and cover it with a piece of paper. Then sprinkle some iron filings over it. The “lines of force” that you see are a field. Faraday called it a field because it reminded him of a freshly plowed farmer’s field. At each point on the paper, the magnetic force from the underlying magnet has a certain strength and a certain direction which define the magnetic field of the magnet. Now Faraday thought that the electric and magnetic fields that he observed were “real”, but the French thought that his fields were just a mathematical contrivance. The French thought that to calculate the effects from a collection of charged objects and current carrying wires, you should pick a point in space and then use the formulas developed by the French physicists Ampere and Coulomb to calculate the strength and direction of the resulting magnetic and electrical forces. The French were quite happy with the positivistic concept of electric and magnetic forces as being an “action at a distance”, the same concept used by Newton for the gravitational force in his Principia. There was another problem though; these spooky “action at a distance” forces had to travel with an infinite velocity. Imagine the Earth as it orbits the Sun at 66,660 miles/hour. Since Newton’s gravitational force depends upon the exact distance between the Sun and the Earth, which is constantly changing for an elliptical orbit, if the gravitational force traveled with a finite speed, then the gravitational force from the Sun would have to lead the Earth, like a hunter firing on a flock of ducks flushed from the reeds. How would the Sun know where to shoot the gravitational force in advance and with the correct strength to hit the Earth squarely in mid-flight if the gravitational force traveled at less than an infinite speed?

Faraday, on the other hand, felt that electrical charges and current carrying wires created real fields in space, and that charged objects then interacted with these surrounding fields. This idea could also be extended to the gravitational force as well, and eliminate the mysterious “action at a distance” problem. The Sun creates a strong gravitational field that extends out to the Earth, and the Earth interacts with the gravitational field as it orbits the Sun. The idea of electric and magnetic fields being “real” was further bolstered in 1864 when James Clerk Maxwell published A Dynamical Theory of the Electromagnetic Field, in which he unified the electric and magnetic forces into a single combined electromagnetic force. Maxwell demonstrated that a changing magnetic field could create an electric field and that, similarly, a changing electric field could create a magnetic field. This meant that electric and magnetic fields could break free of charged objects and currents in wires and propagate through space as a self-propagating wave. If you wiggle a charged object back and forth, electromagnetic waves peel off. This is how the oscillating electrons in a radio antenna send out radio waves and the electrons jumping around within atoms send out light. The velocity of the electromagnetic wave came out to be:
v = 1/√(με)

v = 3 x 10^8 m/sec, the speed of light!

This was truly a remarkable result. The constant μ is measured by observing the strength of the magnetic field surrounding a current carrying wire, and the constant ε is measured by observing the voltage across a charged capacitor. Both μ and ε seem to have nothing to do with light, yet the speed of light easily falls out from a simple relationship between the two, derived from a wave equation featuring both constants. This added credence to Faraday’s idea that electromagnetic fields were, indeed, real tangible things, and Maxwell’s prediction of electromagnetic waves further strengthened the reality of electromagnetic fields in 1886, when Heinrich Hertz was able to generate and detect electromagnetic radio waves. But even with the findings of Hertz, all that we know is that when we jiggle electrons on one side of a room, we make electrons on the other side of the room jiggle as well. Does that mean that there is a real electromagnetic wave involved? Fortunately, we can refine our experiment with the aid of a microwave oven. Open your microwave oven and remove the rotating platter within. Now get a small Espresso coffee cup and commit the ultimate Starbucks sin, heat a small cup of cold coffee in the microwave at various positions within the oven. What you will find is that at some locations in the oven, the coffee gets quite hot, and at others it does not. So what is happening? In the classical electrodynamics of Maxwell, there is a microwave standing wave within the oven. If you are fortunate enough to place your Espresso cup at a point in the microwave oven where the standing wave is intense, the coffee will heat up quite nicely. If you place the Espresso cup at a node point where the standing microwave is at a minimum, the coffee will not heat up as well. That is why they put the rotating platter in the microwave oven. 
By rotating objects in the oven, the objects pass through the hot and cold spots of the standing electromagnetic microwave and are evenly heated. So this is pretty convincing evidence that electromagnetic waves really do exist, even for a positivist.
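You can verify Maxwell's remarkable result yourself by plugging the measured values of the two constants into the formula. Below is a quick sketch using the modern SI free-space values of the constants (the subscripted names μ₀ and ε₀ are my notation for the free-space values of the document's μ and ε):

```python
import math

mu_0 = 4 * math.pi * 1e-7   # magnetic constant, measured from current-carrying wires (T·m/A)
eps_0 = 8.854e-12           # electric constant, measured from charged capacitors (F/m)

# Maxwell's result: the speed of the electromagnetic wave
v = 1 / math.sqrt(mu_0 * eps_0)
print(v)   # ≈ 2.998e8 m/sec, the measured speed of light
```

Neither constant has anything obvious to do with light, yet their combination reproduces the speed of light to four significant figures.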

But now let us look at this same experiment from the point of view of quantum mechanics and QED. If you have been paying attention, you might have noticed that our microwave oven is simply a physical implementation of the famous “particle in a box” we discussed previously in Quantum Software. The only difference is that we are using microwave photons in our box instead of electrons. Now according to quantum mechanics and QED, the reason that the coffee in the Espresso cup got hotter in some spots and less hot in others is that the probability of finding microwave photons at certain spots in the oven is greater than finding them at other spots based upon the square of the amplitude of the wavefunctions Ψ of the photons. But remember, in the Copenhagen interpretation of quantum mechanics, the wavefunctions Ψ of particles and photons are not “real” waves, they are only probability waves – just convenient mathematical constructs that don’t “really” exist, similar to the electromagnetic waves of the mid-19th century that did not “really” exist either.
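The hot and cold spots can be sketched with the particle-in-a-box wavefunctions discussed in Quantum Software: the probability of finding a microwave photon at a position x goes as the square of the standing wave's amplitude. Here is a toy one-dimensional sketch; the box length and mode number are arbitrary values I chose for illustration:

```python
import math

L = 0.30   # a 30 cm one-dimensional "oven", an arbitrary illustrative size
n = 4      # mode number of the standing wave, also arbitrary

def probability_density(x):
    # |Ψ|² for a particle in a 1-D box of length L:
    # hot spots at the antinodes, cold spots at the nodes
    return (2.0 / L) * math.sin(n * math.pi * x / L) ** 2

# Sample a few positions: the espresso cup heats well at antinodes, poorly at nodes
for x in [0.0375, 0.075, 0.1125, 0.15]:
    print(round(probability_density(x), 3))
```

The sampled positions alternate between an antinode (maximum heating) and a node (no heating), which is exactly the pattern the rotating platter is designed to average out.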

Another example of the clash between positivism and realism comes from the very beginnings of quantum mechanics. According to classical electrodynamics, the walls of the room in which you are currently located should be at a temperature of absolute zero, having converted all of the energy of the free electrons in the walls of the room into ultraviolet light and x-rays. This was known as the “Ultraviolet Catastrophe” at the end of the 19th century, and is another example of an effective theory bumping up against the limitations of its effective range of reliable prediction. In 1900, Max Planck was able to resolve this dilemma by proposing that the energy of the oscillating electrons in the walls of your room was quantized into a set of discrete integer multiples of an elementary unit of energy E = hf.

E = nhf

n = 1, 2, 3, …
h = Planck’s constant = 4.136 x 10^-15 eV·sec
f = frequency of the electron oscillation

Max Planck regarded his inspiration/revelation of the quantization of the oscillation energy of the free electrons and their radiated energy as a mathematical trick to overcome the Ultraviolet Catastrophe. But in 1905, the same year that he published the special theory of relativity, Einstein proposed that Planck’s discovery was not a mathematical trick at all. Einstein proposed that sometimes light, an electromagnetic wave in classical electrodynamics, could also behave like a stream of “real” quantized particles, which we now call photons, with energy:

E = hf
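Using the value of Planck's constant given above, here is a quick sketch of E = hf at work; the two frequencies are approximate textbook values I supplied for illustration:

```python
h = 4.136e-15   # Planck's constant in eV·sec, as given above

def photon_energy_eV(f):
    # E = hf: the energy of a single photon of frequency f
    return h * f

green_light = 5.6e14   # Hz, roughly the frequency of green visible light
microwave = 2.45e9     # Hz, a typical microwave-oven frequency

print(round(photon_energy_eV(green_light), 2))  # a few eV per photon
print(photon_energy_eV(microwave))              # a far gentler quantum, ~1e-5 eV
```

A visible photon carries a couple of electron-volts, while a microwave photon carries about a hundred-thousandth of that, which is why your microwave oven heats coffee without ionizing it.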

Although Einstein took a very positivistic position in his development of the special theory of relativity, he was a true realist at heart who could never quite accept the very positivistic Copenhagen interpretation of quantum mechanics. In 1927, Niels Bohr and Werner Heisenberg proposed a very positivistic interpretation of quantum mechanics now known as the Copenhagen interpretation; Bohr was working at the University of Copenhagen Institute of Theoretical Physics at the time. The Copenhagen interpretation contends that absolute reality does not really exist. Instead, there are an infinite number of potential realities. This satisfies Max Born’s contention that wavefunctions are just probability waves. Einstein had a hard time with the Copenhagen interpretation of quantum mechanics because he thought that it verged upon solipsism. Solipsism is a philosophical idea from Ancient Greece. In solipsism, you are the whole thing, and the physical Universe is just a figment of your imagination. So I would like to thank you very much for thinking of me and bringing me into existence. Einstein’s opinion of the Copenhagen interpretation of quantum mechanics can best be summed up by his question, “Is it enough that a mouse observes that the Moon exists?” Einstein’s rejection of the Copenhagen interpretation is rather interesting. Recall that in Is Information Real? we saw that Einstein’s original interpretation of the special theory of relativity (1905) was very positivistic, since he relied solely upon what could be observed with meter sticks and clocks, and totally rejected Newton’s concepts of absolute space and time because they could not be physically observed. Despite this positivistic start, Einstein remained a realist at heart.
In his elder years, Einstein held many profound philosophical debates with Bohr on the topic of quantum mechanics, since Einstein could not accept the extreme positivism of the Copenhagen interpretation of quantum mechanics, which held that only the observations of things really existed and not the things themselves. In the Copenhagen interpretation of quantum mechanics, the wavefunctions or probability clouds of electrons surrounding an atomic nucleus are just that, potential electrons waiting to be measured.

Because Einstein detested the Copenhagen interpretation of quantum mechanics so much, he published a paper in 1935 with Boris Podolsky and Nathan Rosen which outlined what is now known as the EPR Paradox. The EPR Paradox goes like this. Suppose we prepare two quantum mechanically “entangled” electrons that conserve angular momentum; one with spin up and one with spin down. Now let the two electrons fly apart and let two observers measure their spins. If observer A measures his electron with spin up, then observer B must measure his electron with spin down with a probability of 100% in order to conserve angular momentum. Now there is nothing special about the directions in which observers A and B make their spin measurements. Suppose observer A rotates his magnets by 90° to measure the spin of his electron and observer B does not. Then observer B will find only a 50% chance that his electron has spin down. How does the electron at observer B know that observer A has rotated his magnets by the time the electron arrives at observer B? Einstein thought that the EPR paper was the final nail in the coffin of quantum mechanics. There had to be some “hidden variables” that allowed electrons to know if they “really” had a spin up or spin down. You see, for Einstein, absolute reality really existed. For Einstein, the apparent probabilistic nature of quantum mechanics was an illusion, like the random() function found in most computer languages. The random() function behaves as though it were reading from a table of apparently random numbers, a table that is totally predictable if you know the algorithm and its starting point in advance. You normally initiate the random() function with a “seed” from the system clock of the computer you are running on, so that each run starts at a different point in the sequence and thus simulates randomness.
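The point about seeded pseudorandom numbers is easy to demonstrate for yourself. In Python (which actually computes its “random” values with the Mersenne Twister algorithm rather than reading a stored table, but the determinism is exactly the same):

```python
import random

# Two runs started with the same seed produce identical "random" streams.
random.seed(42)
first_run = [random.random() for _ in range(5)]

random.seed(42)
second_run = [random.random() for _ in range(5)]

print(first_run == second_run)   # True - the randomness is entirely predictable
```

This is Einstein’s picture of quantum mechanics in miniature: what looks random is, underneath, completely determined by a hidden state you simply have not seen.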

However, in 1964 John S. Bell published a paper in which he proposed an experiment that could actually test the EPR proposition. In the 1980s and 1990s a series of experiments were indeed performed that showed that Einstein was actually wrong. Using photons and polarimeters, instead of the spins of electrons, these experiments showed that photons really do not know their quantum states in advance of being measured and that determining the polarization of a photon by observer A can immediately change the polarization of another photon 60 miles away. These experiments demonstrated that the physical Universe is non-local, meaning that “spooky action at a distance”, as Einstein derisively called it, is built into our Universe, at least for entangled quantum particles. This might sound like a violation of the special theory of relativity because it seems like we are sending an instantaneous message faster than the speed of light, but that is really not the case. Both observer A and observer B will measure photons with varying polarizations at their observing stations separated by 60 miles. Only when observer A and observer B come together to compare results will they realize that their observations were correlated, so it is impossible to send a message carrying real information using this experimental scheme. Clearly, our common sense ideas about space and time are still lacking, and so are our current effective theories.
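Bell’s insight can be sketched numerically (my own illustration, not from either book). For a pair of spin-entangled particles, quantum mechanics predicts a correlation of -cos(θ) between measurements made at detector angles differing by θ, while Bell-type reasoning (the CHSH form of his inequality) shows that any local hidden-variable scheme must keep a particular combination of four such correlations at or below 2:

```python
import math

def E(a, b):
    """Quantum correlation for the spin-singlet state at detector angles a, b."""
    return -math.cos(a - b)

# Standard CHSH detector angles (in radians) that maximize the violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print(S)       # ~2.828, i.e. 2*sqrt(2)
print(S > 2)   # True - beyond the reach of any local hidden-variable theory
```

The experiments of the 1980s and 1990s measured exactly this kind of combination and found the quantum value, not the hidden-variable bound, which is why Einstein turned out to be wrong.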

From all of the above we can see that the ongoing philosophical clash between positivism and realism in physics has a lot at stake, so let us return to the positions presented in Dreams of a Final Theory and The Grand Design to see how they deal with it in the search for a Final Theory.

The Case For Realism
In the chapter Against Philosophy of Dreams of a Final Theory, Weinberg discusses the negative impact of positivism on physics. Although positivism did help Einstein to break free of an absolute space and time that could not be directly observed, and also helped Heisenberg formulate his version of quantum mechanics that only dealt with observable quantities and ultimately led to the Heisenberg Uncertainty Principle, Weinberg finds that, on balance, positivism has done far more harm to physics than good. He points to the extreme positivism of Ernst Mach at the turn of the 20th century, which suppressed the idea that atoms and molecules were real things and retarded the acceptance of Boltzmann’s statistical mechanics (see The Demon of Software for more on that).

To further this point, Weinberg describes the misfortunes of Walter Kaufmann. In 1897 both J. J. Thomson and Walter Kaufmann were experimenting with cathode rays in evacuated glass tubes. A cathode ray is really a stream of energetic electrons, but the idea of an electron was an unknown concept back in 1897. Using an electric field between two parallel charged plates and a pair of magnets, J. J. Thomson was able to deflect the path of the electrons on their way to hitting a fluorescent screen. With these measurements he was able to calculate the charge to mass ratio of the particles in the cathode rays and found that the likely mass of these particles was much less than that of atoms, so J. J. Thomson took a leap and concluded that the particles in cathode rays must be some kind of constituent part of atoms. J. J. Thomson is thus credited with the discovery of the electron, the first fundamental particle discovered by mankind. However, Walter Kaufmann in Berlin performed a very similar set of experiments months earlier than J. J. Thomson and even came up with a more accurate value for the charge to mass ratio of electrons than did J. J. Thomson. But Kaufmann was a positivist and could not bring himself to believe that he had discovered a fundamental particle that was a constituent part of atoms, since he did not even believe in atoms in the first place, because they could not be directly observed. Weinberg justly comments:

What after all does it mean to observe anything? In a narrow sense, Kaufmann did not even observe the deflection of cathode rays in a given magnetic field; he measured the position of a luminous spot on the downstream side of a vacuum tube when wires were wound a certain number of times around a piece of iron near the tube and connected to a certain electric battery and used accepted theory to interpret this in terms of ray trajectories and magnetic fields. Very strictly speaking, he did not even do that: he experienced certain visual and tactile sensations that he interpreted in terms of luminous spots and wires and batteries.

Figure 1 – J.J. Thomson’s Experimental Cathode Ray Tube

In the chapter Quantum Mechanics and its Discontents, Weinberg portrays a very interesting hypothetical dialogue between Charles Dickens’ Scrooge and Tiny Tim. In this hypothetical discussion, the realist Scrooge debates the reality of the quantum wavefunction with the positivist Tiny Tim. Scrooge maintains that quantum wavefunctions are just as real as anything else in this strange quantum world of ours, while Tiny Tim adheres to the positivistic Copenhagen interpretation of quantum mechanics that holds that wavefunctions are just mathematical devices that only produce probabilistic predictions of what is observed, so that wavefunctions themselves are not really real (see Quantum Software and The Foundations of Quantum Computing for more details).

Weinberg finds merit in both positions, but in the end, sides with the realist Scrooge.

Scrooge: …It is true enough that the electron does not have a definite position and momentum at the same time, but this just means that these are not appropriate quantities to use in describing the electron. What an electron or any collection of particles does have at any time is a wave function. If there is a human observing the particles, then the state of the whole system including the human is described by a wave function. The evolution of the wave function is just as deterministic as the orbits of particles in Newtonian mechanics.

Tiny Tim: …The wave function has no objective reality, because it cannot be measured … All that we ever measure are quantities like positions or momenta or spins, and about these we can predict probabilities. And until some human intervenes to measure these quantities, we cannot say that the particle has any definite state at all.

Scrooge: … Wave functions are real for the same reason that quarks and symmetries are – because it is useful to include them in our theories. Any system is in a definite state whether any humans are observing it or not; the state is not described by a position or a momentum but by a wave function.

Tiny Tim: …Let me just remind you of a serious problem you get into when you imagine the wave function to be real. This problem was mentioned in an attack on quantum mechanics by Einstein at the 1933 Solvay Conference in Brussels and then in 1935 written up by him in a famous paper with Boris Podolsky and Nathan Rosen. Suppose that we have a system consisting of two electrons, prepared in such a way that the electrons at some moment have a known large separation and a known total momentum….

Tiny Tim then goes on to outline the EPR Paradox described above. But Scrooge does not have a problem with the EPR Paradox:

Scrooge: I can accept it…..(While you were at it, you might have mentioned that John Bell has come up with even weirder consequences of quantum mechanics involving atomic spins, and experimental physicists have demonstrated that the spins in atomic systems really do behave in the way expected from quantum mechanics, but that is just the way the world is.) It seems to me that none of this forces us to stop thinking of the wave function as real; it just behaves in ways that we are not used to, including instantaneous changes affecting the wave function of the whole universe.

In the chapter Tales of Theory and Experiment Weinberg points out that one of the dangers of positivism is its overreliance upon observation and experimentation. In How To Think Like A Scientist, I highlighted how the old miasma theory of disease, the theory that diseases are caused by foul smelling air, had a lot of supporting empirical evidence. People who lived near foul smelling 19th century rivers in England were more prone to dying of cholera than people who lived further from the rivers, and we had death certificate data to prove that empirical fact. Weinberg uses the example of Einstein’s general theory of relativity to drive home his point (see Cyberspacetime). In Newtonian mechanics a single planet, all by itself with no sister planets, should orbit the Sun in an ellipse that never changes its orientation. The major axis of the ellipse, the length of its oblong dimension, should always point in the same direction like a compass needle. However, because the other planets of the solar system tug on the elliptical orbit of Mercury, it is found that the major axis of Mercury’s orbit actually precesses or slowly rotates by 575 seconds of arc every century. That means over a period of 225,000 years the orbit of Mercury makes one full rotation like a slowly spinning egg. The problem was that the Newtonian theory of gravity predicted that the tugs on Mercury’s orbit from the other planets should only add up to 532 seconds of arc every century, leaving 43 seconds per century unexplained. When Einstein applied his new general theory of relativity to the problem in 1915, he immediately saw that it predicted the additional precession of 43 seconds per century from the Sun’s gravitation alone. 
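The arithmetic behind Mercury’s orbit is worth checking for yourself. Here is a quick sanity check (a toy calculation of my own, using the figures quoted above):

```python
# Mercury's perihelion precession, using the figures quoted above.
observed  = 575.0   # arc-seconds per century, from astronomical observation
newtonian = 532.0   # arc-seconds per century from planetary perturbations

residual = observed - newtonian
print(residual)     # 43.0 arc-seconds per century - Einstein's retrodiction

# Time for the orbit's major axis to sweep out a full rotation:
full_circle = 360 * 3600            # arc-seconds in 360 degrees
centuries = full_circle / observed
print(round(centuries * 100))       # ~225,000 years for one full rotation
```

The 43 leftover arc-seconds per century seem tiny, but they baffled astronomers for half a century until the general theory of relativity produced them from the Sun’s gravitation alone.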
Newton’s theory of gravitation is based upon a linear equation, where a small change to the mass of the Sun simply makes a proportionally small change to its gravitational field, while Einstein’s general theory of relativity is framed in terms of nonlinear differential equations. The gravitational field of the Sun has an energy, and since in Einstein’s theory spacetime is deformed by both mass and energy, there is a nonlinear positive feedback loop between the energy in the gravitational field and the gravitational field itself – they both feed off of each other, leading to the precession of planetary orbits. Weinberg calls Einstein’s finding a retrodiction, meaning that his new theory very accurately produced a result that matched a baffling astronomical observation already in existence. Surprisingly, this retrodiction did not completely convince a skeptical physics community of the validity of the general theory of relativity. That did not happen until 1919, when a group of astronomers used a total eclipse of the Sun to test a prediction that the general theory of relativity made. The general theory of relativity predicts that light from a distant star passing close to the Sun during a total eclipse will be deflected, because the light will pass through the distorted spacetime surrounding the Sun. So all you have to do is make a photographic plate of a group of stars 6 months before the total eclipse and then compare the positions of the same stars in a plate taken during the total eclipse. The stars closest to the blocked-out Sun should shift in position due to the bending of their light as it passes close to the Sun. But these shifts are quite small. A grazing star should only shift by 1.75 seconds of arc during a total eclipse. The width of your index finger at arm’s length subtends about 1°, which is 3,600 seconds of arc, so 1.75 seconds is quite small indeed.
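General relativity gives the deflection of a grazing light ray as 4GM/(c²R), twice the naive Newtonian value; the formula itself is not quoted above, but plugging in the Sun’s standard mass and radius is a nice back-of-the-envelope check that recovers the 1.75 arc-seconds:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # mass of the Sun, kg
c = 2.998e8     # speed of light, m/s
R = 6.957e8     # radius of the Sun, m

# Deflection of a light ray grazing the Sun's limb, per general relativity.
deflection_rad = 4 * G * M / (c**2 * R)
deflection_arcsec = math.degrees(deflection_rad) * 3600

print(round(deflection_arcsec, 2))   # ~1.75 arc-seconds
```

An angle of about 8 millionths of a radian is what the 1919 expeditions had to tease out of their photographic plates, which makes the later squabbles over plate corrections easy to understand.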
However, the 1919 expeditions did indeed report observing such small shifts to within 10% of Einstein’s prediction, and this made Einstein and his general theory of relativity an overnight sensation. But the observations from several subsequent total eclipses of the Sun in the ensuing years did not find such stunning results. Some even found deflections that appeared to disagree with the general theory of relativity. After all, it is not easy to make such delicate observations. You have to compare photographic plates that were taken at different times, and perhaps with different equipment. So as in all the experimental and observational sciences, corrections must be applied to account for the limitations of observations performed with real-life error-prone physical devices. The focus of the telescope may not have been exactly the same for both plates, and plates can shrink or expand with temperature differences between the two observations. Experimentalists always try to do a good job of applying these corrections, but Weinberg suspects that in the heat of the moment, experimentalists sometimes subconsciously fall prey to MICOR – Make It Come Out Right. They subconsciously keep applying corrections to their data until it fits the findings of a new theory. Weinberg argues that it is much harder to bend the mathematics of a new theory to match already existing unbiased experimental observations than it is to “correct” experimental data until it matches a new theory. Consequently, the retrodictions of a new theory are a much better way to validate it than its ability to predict observations never before made.

The end result is that one must always keep in mind the limitations imposed by observations in the real-world, especially now that some of the most promising theories of physics, like supersymmetric string theory, seem to have overtaken our technological capabilities by many orders of magnitude, and most likely we will never be able to obtain the necessary energies to validate them.

The Case For Model-Dependent Realism
Model-dependent realism maintains that there is no absolute reality after all. We can only hope for a collection of effective theories that present a series of models of reality that are confirmed by empirical observation, and each of these models essentially creates a reality of its own. In The Grand Design the authors explain it this way:

In physics, an effective theory is a framework created to model certain observed phenomena without describing in detail all of the underlying processes… Similarly, we cannot solve the equations governing the behavior of complex atoms and molecules, but we have developed an effective theory called chemistry that provides an adequate explanation of how atoms and molecules behave in chemical reactions without accounting for every detail of the interactions.

Because we only have a collection of effective theories, we cannot be certain that an absolute reality even exists. To illustrate this point they turn to the analogy of a goldfish living in a curved goldfish bowl. The goldfish could certainly make observations of moving bodies outside of the goldfish bowl, but due to the distortion caused by light refracting into the curved goldfish bowl, the effective theories that the goldfish would come up with would be very complicated indeed. For the goldfish, a freely moving body would not travel in a straight line, but with enough mathematics, the goldfish could certainly come up with a modified version of Newtonian mechanics that could predict the path of freely moving objects and of objects subjected to a driving force as seen from within the goldfish bowl.

If a goldfish formulated such a theory, we would have to admit the goldfish’s view as a valid picture of reality.
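The distortion the goldfish has to contend with is just refraction. As a toy illustration (my own, not from The Grand Design), Snell’s law shows how much a light ray bends as it crosses from air into the water of the bowl, which is why the goldfish’s laws of motion for outside objects come out so complicated:

```python
import math

n_air, n_water = 1.00, 1.33   # indices of refraction

# A ray striking the water at 45 degrees from the normal...
theta_air = math.radians(45)

# ...bends toward the normal inside the water, per Snell's law:
# n_air * sin(theta_air) = n_water * sin(theta_water)
theta_water = math.asin(n_air * math.sin(theta_air) / n_water)

print(round(math.degrees(theta_water), 1))   # ~32.1 degrees
```

A 13-degree bend at every point of a curved surface is enough to turn straight-line motion outside the bowl into curved apparent paths inside it, yet with enough mathematics the goldfish could still build a perfectly predictive effective theory around it.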

In a further analogy, that comes quite close to what we see with software running in the Software Universe, they go on to explain that if our observations of the physical Universe were really only the observations of an alien computer simulation that we were all caught up in, like in The Matrix, how could we possibly distinguish true reality from a simulated reality? Their conclusion is:

These examples bring us to a conclusion that will be important in this book: There is no picture- or theory-independent concept of reality. Instead we will adopt a view that we will call model-dependent realism: the idea that a physical theory or world picture is a model (generally of a mathematical nature) and a set of rules that connect the elements of the model to observations. This provides a framework with which to interpret modern science.

According to model-dependent realism, it is pointless to ask if a model is real, only whether it agrees with observation. If there are two models that agree with observation, like the goldfish’s picture and ours, then one cannot say that one is more real than another. One can use whichever model is more convenient in the situation under consideration. For example, if one were inside the bowl, the goldfish’s picture would be useful, but for those outside, it would be very awkward to describe events from a distant galaxy in the frame of a bowl on earth, especially because the bowl would be moving as the earth orbits the sun and spins on its axis.

According to the idea of model-dependent realism introduced in Chapter 3, our brains interpret the input from our sensory organs by making a model of the outside world. We form mental concepts of our home, trees, other people, the electricity that flows from wall sockets, atoms, molecules, and other universes. These mental concepts are the only reality we can know. There is no model-independent test of reality. It follows that a well-constructed model creates a reality of its own.

Steven Weinberg is a master of quantum field theory and is famous for combining the electromagnetic and weak interactions into a single electroweak interaction, and that is why he is a credible advocate for pursuing a Final Theory for an absolute reality. However, let us try to extend model-dependent realism to quantum field theory as well.

In The Foundations of Quantum Computing we discussed quantum field theories. In quantum field theories all the particles and forces we observe in the Universe are modeled as fields that extend over the entire Universe, with varying amplitudes that define the probability of observing each field as a quantized particle. Thus there are matter fields like electron fields, quark fields, and neutrino fields, along with force fields like the electromagnetic field, the weak field and the strong field. When you observe one of the matter fields, a quantized particle pops out, like an electron, a neutrino, or a clump of quarks in a meson or in a baryon like a proton or neutron. Similarly, when you observe a force-carrying field you observe the photon of the electromagnetic force, the W+, W- or Z0 bosons of the weak force, or the gluons of the strong force. In quantum field theories, the forces or interactions between matter particles (fermions) are modeled as exchanges of force-carrying particles (bosons). Thus the repulsive electromagnetic interaction or force between two electrons scattering off each other is depicted as an exchange of virtual photons between the two.

The very first quantum field theory, quantum electrodynamics – QED, matured in 1948 when all the pesky infinities were removed with a mathematical process called renormalization. With QED it became possible, at least theoretically, to explain all of the possible interactions between electrons, protons, and photons, and consequently, all of the things that you encounter in your daily life that deal with the physical and chemical properties of matter, such as the texture of objects, their colors, hardness, ductility, tensile strengths and chemical activity. One of the things that QED could not explain was why the Sun shines. To explain why the Sun shines, we need an interaction that can turn protons into neutrons, as the Sun fuses hydrogen nuclei composed of a single proton into helium nuclei composed of two protons and two neutrons. Since a proton is composed of two up quarks and one down quark, while a neutron is composed of one up quark and two down quarks, we need an interaction that can turn up quarks into down quarks and that is exactly what the weak interaction or force can do. In 1967 Steven Weinberg proposed a model that combined the electromagnetic and weak interactions, which predicted the Higgs boson and the Z0 boson, and is now known as the electroweak interaction. Thus, the electroweak interaction can explain all of QED, and why the Sun shines, all at the same time.
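The energy bookkeeping behind “why the Sun shines” makes a short calculation. Fusing four hydrogen nuclei into one helium nucleus destroys a little under 1% of the mass (a back-of-the-envelope sketch of my own, using standard atomic masses rather than figures from the book):

```python
# Mass defect of hydrogen fusion, in atomic mass units (u).
m_hydrogen = 1.007825   # mass of one hydrogen atom, u
m_helium   = 4.002602   # mass of one helium-4 atom, u

mass_in  = 4 * m_hydrogen
mass_out = m_helium
defect   = mass_in - mass_out

# Convert the missing mass to energy: 1 u of mass = 931.494 MeV.
energy_MeV = defect * 931.494

print(round(energy_MeV, 1))             # ~26.7 MeV released per helium nucleus
print(round(defect / mass_in * 100, 2)) # ~0.71 percent of the mass converted
```

Multiplied over the roughly 600 million tons of hydrogen the Sun fuses every second, that 0.71% mass defect is what has kept the Sun shining for billions of years, and it is the weak interaction’s ability to turn up quarks into down quarks that makes the whole chain possible.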

Now depicting the electromagnetic force as the exchange of virtual photons might seem a bit farfetched to you, since you have no first-hand experience with quantum effects, so let us repeat an experiment that Michael Faraday might have performed 170 years ago to help us out of this jam. Grab two small styrofoam packing peanuts from your last online purchase. Using a needle and thread, attach a packing peanut to each end of the thread, and then rub the two packing peanuts in your hair to have them pick up some extra electrons. Now hold the thread in its middle, so that the packing peanuts are free to dangle straight down. You will observe a very interesting thing: instead of dangling straight down, the packing peanuts will repel each other, and the thread will form a Λ shape. Now stare at that for a few minutes. This will, no doubt, not seem so strange to you because you do have some experience with similar electrical effects, but think about it for a few minutes anyway. Something very strange, indeed, is going on. The whole mass of the Earth is trying to pull those two packing peanuts straight down, but some mysterious thing, which apparently is much stronger, is keeping them apart. What could it be? Coulomb would borrow Newton’s idea of an action at a distance to explain that there is an electrical force between the charged packing peanuts, and that the electrical force is keeping them apart. In fact, given the amount of charge on the peanuts, the mass of each peanut, and the total length of the thread, Coulomb would be able to predict exactly the angle of the Λ shape formed by the thread and the dangling packing peanuts. So Coulomb’s model is quite useful in making predictions despite relying upon an action at a distance. Michael Faraday would go one step further. In Faraday’s model, each charged packing peanut creates an electric field about itself, and the other charged packing peanut then interacts with that electric field by moving away from it.
Faraday’s model not only predicts the angle of the packing peanuts, it can also be used to derive the speed of light, as we saw above, so Faraday’s model is even more useful than Coulomb’s. With QED we can model the behavior of the packing peanuts as the result of the exchange of a huge number of virtual photons. QED makes all of the predictions that Coulomb’s and Faraday’s models make and, in addition, as we saw in The Foundations of Quantum Computing, can use 72 Feynman diagrams to predict the gyromagnetic ratio of the electron to 11 decimal places. With QED we can also formulate the basis of all chemical reactions as approximations of QED. Finally, with the unified electroweak quantum field theory of Steven Weinberg we can even explain why the Sun shines, in addition to all of the above.
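Coulomb’s claim is easy to test numerically. Balancing the electrostatic repulsion against gravity for two charged spheres hanging from a common point gives the equilibrium condition tan(θ)·sin²(θ) = kq²/(4mgL²), which can be solved by bisection. The charge and mass below are made-up but plausible values for a packing peanut, not measured ones:

```python
import math

k = 8.988e9   # Coulomb's constant, N*m^2/C^2
q = 20e-9     # charge on each peanut, C (assumed)
m = 0.1e-3    # mass of each peanut, kg (assumed)
g = 9.81      # gravitational acceleration, m/s^2
L = 0.5       # length of thread on each side of the pinch point, m (assumed)

# Equilibrium condition: tan(theta) * sin(theta)^2 = k*q^2 / (4*m*g*L^2)
C = k * q**2 / (4 * m * g * L**2)

def f(theta):
    return math.tan(theta) * math.sin(theta)**2 - C

# Bisection between nearly 0 and nearly 90 degrees; f is monotonic here.
lo, hi = 1e-6, math.pi / 2 - 1e-6
for _ in range(100):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

theta = (lo + hi) / 2
print(round(math.degrees(theta), 1))   # each thread hangs ~8.8 degrees off vertical
```

Whether you picture the repulsion as an action at a distance, a field, or a blizzard of virtual photons, this is the angle the thread must form, which is exactly the point about competing effective theories.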

So which is it? Are the packing peanuts being held apart by Coulomb’s “spooky action at a distance”, Faraday’s electric field, QED’s exchange of virtual photons, or Steven Weinberg’s electroweak quantum field theory? What is “really” going on? Well, nobody really knows. The four models we just discussed are all effective theories that make predictions about the phenomenon of repelling packing peanuts with varying degrees of accuracy – that is all we really know. Each of these four models may seem a bit strange, but that does not matter. What matters is that they all make some accurate predictions of what we observe and offer various degrees of insight into what is really going on. Model-dependent realism would say that each model simply creates its own reality.

Hopes For a Final Theory
So we see that the ongoing philosophical debate over positivism and realism does bear upon the future of physics because it frames the fundamental question of whether an absolute reality exists or not. If there is no absolute reality, then there can be no Final Theory of everything, but if absolute reality does exist, then a Final Theory is at least a possibility. Stephen Hawking and Leonard Mlodinow do offer some hope along these lines in the form of M-theory. M-theory is a collection of the 5 versions of supersymmetric string theory. In the network of string theories that comprise M-theory, the Universe contains 11 dimensions – the four-dimensional macroscopic spacetime that we are all familiar with and 7 additional compacted dimensions that we are not aware of. Unfortunately, the compacted dimensions are so small that they cannot be observed, but one-dimensional strings and two-dimensional membranes of energy can vibrate within them, yielding all of the particles and forces of nature that we do observe. In fact, p-brane objects of dimension p = 0 to 9 can vibrate within the 11 dimensions. The 11 dimensions can be curled up in about 10^500 different ways, and each way defines a different universe with different physical laws. This collection of approximately 10^500 different universes forms a multiverse, or what Leonard Susskind calls The Cosmic Landscape (2006).

A cosmic landscape of 10^500 possible universes can also help to explain the weak Anthropic Principle, the idea that intelligent beings will only find themselves in universes capable of supporting intelligent beings, by providing a mechanism for the formation of a multiverse composed of an infinite number of bubble universes. In 1986, Andrei Linde published the Eternally Existing Self-Reproducing Chaotic Inflationary Universe, in which he described what has become known as the Eternal Chaotic Inflation theory. In this model, our Universe is part of a much larger multiverse that has not yet decayed to its ground state. Quantum fluctuations in a scalar field within this multiverse create rapidly expanding “bubble” universes, and our Universe is just one of an infinite number of such “bubble” universes. A scalar field is just a field that has only one quantity associated with each point in space, like a weather map that lists the temperatures observed at various towns and cities across the country. A vector field, by contrast, is like a weather map that shows both the wind speed and direction at various points on the map. In the Eternal Chaotic Inflation model, there is a scalar field within an infinite multiverse which is subject to random quantum fluctuations, like the quantum fluctuations described by the quantum field theories we saw in The Foundations of Quantum Computing. One explanation of the weak Anthropic Principle is that these quantum fluctuations result in universes with different sets of fundamental laws. Most bubble universes that form in the multiverse do not have a set of physical laws compatible with intelligent living beings and are quite sterile, but a very small fraction do have physical laws that allow for beings with intelligent consciousness. Remember, a small fraction of an infinite number is still an infinite number, so there will always be plenty of bubble universes within this multiverse capable of supporting intelligent beings.

I have a West Bend Stir Crazy popcorn popper which helps to illustrate this model. My Stir Crazy popcorn popper has a clear dome which rests upon a nearly flat metal base that has a central stirring rod that constantly rotates and keeps the popcorn kernels well oiled and constantly tumbling over each other, as the heating element beneath heats the cooking oil and popcorn kernels together to a critical popping temperature. As the popcorn kernels heat up, the water in each kernel begins to boil within, creating a great deal of internal steam pressure within the kernels. You can think of this hot mix of oil and kernels as a scalar field not in its ground state. All of a sudden, and in a seemingly random manner, quantum fluctuations form in this scalar field and individual “bubble” universes of popped corn explode into reality. Soon my Stir Crazy multiverse is noisily filling with a huge number of rapidly expanding bubble universes, and the aroma of popped corn is just delightful. Now each popped kernel has its own distinctive size and geometry. If you were a string theorist, you might say that for each popped kernel the number of dimensions and their intrinsic geometries determine the fundamental particles and interactions found within each bubble popcorn universe. Now just imagine a Stir Crazy popcorn popper of infinite size and age, constantly popping out an infinite number of bubble universes, and you have a pretty good image of a multiverse based upon the Eternal Chaotic Inflation model.

Stephen Hawking and Leonard Mlodinow go on to apply model-dependent realism to M-theory and find hope for a Final Theory of sorts:

M-theory is not a theory in the usual sense. It is a whole family of different theories, each of which is a good description of observations only in some range of physical situations. It is a bit like a map. As is well known, one cannot show the whole of the earth’s surface on a single map. The usual Mercator projection used for maps of the world makes areas appear larger and larger in the far north and south and doesn’t cover the North and South Poles. To faithfully map the entire earth, one has to use a collection of maps, each of which covers a limited region. The maps overlap each other, and where they do, they show the same landscape. M-theory is similar. The different theories in the M-theory family may look very different, but they can all be regarded as aspects of the same underlying theory. They are versions of the theory that are applicable only in limited ranges – for example, when certain quantities such as energy are small. Like the overlapping maps in a Mercator projection, where the ranges of different versions overlap, they predict the same phenomena. But just as there is no flat map that is a good representation of the earth’s entire surface, there is no single theory that is a good representation of observations in all situations.

…. Each theory can describe and explain certain properties, neither theory can be said to be better or more real than the other. Regarding the laws that govern the universe, what we can say is this: There seems to be no single mathematical model or theory that can describe every aspect of the universe. Instead, as mentioned in the opening chapter, there seems to be a network of theories called M-theory. Each theory in the M-theory network is good at describing phenomena within a certain range. Whenever their ranges overlap, the various theories in the network agree, so they can all be said to be parts of the same theory. But no single theory within the network can describe every aspect of the universe – all the forces of nature, the particles that feel those forces, and the framework of space and time in which it all plays out. Though this situation does not fulfill the traditional physicist’s dream of a single unified theory, it is acceptable within the framework of model-dependent realism.

Model-Dependent Realism in Softwarephysics
When I transitioned into IT in 1979, after several years as a young exploration geophysicist, I fully expected that, since the IT community had already been working with software for several decades, there would already be an established set of effective theories of software behavior at hand that I could use in this new and intimidating career, just as I had as a physicist. I figured that since IT people already knew what software “really was”, this should not have been a difficult thing to achieve. Instead, I found that not much progress had been made along these lines since I started programming back in 1972. Yes, source code was compiled into machine code that was loaded into the memory of a computer to run under a CPU process that ultimately played quantum mechanical tricks with iron atoms on disk drives and silicon atoms in chips, but I found that model of software behavior to be quite lacking and not very useful in making the day-to-day decisions required of me by my new IT career. I realized that what was needed were some pragmatic effective theories of software behavior, like the ones we had in physics for “real” tangible things like photons and electrons. It seemed to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse to better understand the behavior of commercial software by comparing software to how things behaved in the physical Universe. I figured that if you could apply physics to geology, why not apply physics to software? So I decided to establish a simulated science called softwarephysics to do just that, patterned after geophysics as a set of high-level hybrid effective theories combining the effective theories of physics, chemistry, biology, and geology into one integrated package.
So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Since I knew that software was not really “real”, I was not faced with the positivism versus realism dilemma, and consequently, I simply took a very positivistic approach to software behavior from the very start. However, being fully aware of the pitfalls of positivism, I always tried to stress that softwarephysics only provided models of software behavior that were approximations; they were not what software “really” was. Remember, a vice is simply a virtue carried to an extreme, and when carried to an extreme, positivism is certainly a damaging thing that can stifle the imagination and impede progress. But used wisely, positivism can also be quite beneficial. I believe that is what the concept of model-dependent realism entails – a useful application of positivism that can assist the progress of physics in the 21st century. I have been using this approach in softwarephysics for over 30 years and have found it quite useful in modeling the Software Universe with a suite of effective theories. I never tried to search for a Final Theory of softwarephysics because I knew that I was working in a simulated universe that did not have one. Perhaps Stephen Hawking and Leonard Mlodinow are right, and this approach can be applied equally well to our physical Universe too. In truth, as they showed in The Grand Design, physicists have really been using model-dependent realism all along, since the very beginning – from the time of Copernicus – they just did not know it at the time.

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston