Thursday, November 12, 2015

Introduction to Softwarephysics

Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance, and support based upon concepts from physics, chemistry, biology, and geology that I have been using on a daily basis for over 35 years as an IT professional. For those of you not in the business, IT is short for Information Technology, commercial computer science. The purpose of softwarephysics is to explain why IT is so difficult, to suggest possible remedies, and to provide a direction for thought. If you are an IT professional, general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology then softwarephysics might be of interest to you, if not in an entirely serious manner, perhaps at least in an entertaining one.

From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT, and spent about 20 years in development. For the past 14 years, I have been in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software, by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance, and relies upon our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 70 years, through the uncoordinated efforts of over 50 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily upon two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based upon real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models upon which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields.

Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In his Principia (1687), Newton outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.

The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving less than 10% of the speed of light and which are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics and for very fast things moving in strong gravitational fields, we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based upon models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide for an effective theory of software behavior that makes useful predictions of software behavior that are applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave and try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.
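To make the 10%-of-the-speed-of-light limit concrete, here is a small, purely illustrative Python sketch comparing the Newtonian kinetic energy mv^2/2 with the relativistic value (gamma - 1)mc^2. The Newtonian approximation agrees to better than 1% at a tenth of the speed of light, but fails badly as v approaches c - a numerical picture of an effective theory's restricted range of conditions:

```python
# An illustrative sketch of an effective theory's range of validity:
# Newtonian kinetic energy (m*v^2/2) versus the relativistic value
# ((gamma - 1)*m*c^2). Below about 10% of the speed of light the two
# agree to better than 1%; near the speed of light they diverge badly.
import math

c = 2.99792458e8  # speed of light, m/s

def newtonian_ke(m, v):
    return 0.5 * m * v**2

def relativistic_ke(m, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    return (gamma - 1.0) * m * c**2

ratios = {}
for frac in (0.01, 0.1, 0.5, 0.9):
    v = frac * c
    ratios[frac] = newtonian_ke(1.0, v) / relativistic_ke(1.0, v)
    print(f"v = {frac:4.2f}c   Newtonian KE / relativistic KE = {ratios[frac]:.4f}")
```

At 0.01c the two theories agree to a few parts in a hundred thousand; at 0.9c the Newtonian figure is off by a factor of three. The Newtonian model is "wrong" everywhere, but within its effective range the error is utterly negligible.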

GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately. In order to do that, it is very important to have very accurate timing measurements. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips on board the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are further from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day due to the weaker gravitational field out there. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, an error in your position of about 6 miles (10 kilometers) per day would accrue.
The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics, or QED (1948), which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!
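The relativistic clock corrections quoted above can be checked with a few lines of arithmetic. The following Python sketch applies the first-order special-relativistic time dilation v^2/2c^2 and the first-order gravitational blueshift (GM/c^2)(1/R_earth - 1/r_orbit); the constants are standard values and the orbital radius is an assumed round figure, so the results match the quoted microsecond figures only to within rounding:

```python
# A back-of-the-envelope check of the GPS clock corrections discussed above.
# The constants are standard values; the orbital radius (~12,600 mile
# altitude) is an assumed round figure, so the results reproduce the quoted
# numbers only to within rounding.
import math

GM = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8          # speed of light, m/s
R_earth = 6.371e6         # mean radius of the Earth, m
r_orbit = 2.656e7         # GPS orbital radius from Earth's center, m
seconds_per_day = 86400.0

# Special relativity: orbital speed slows the satellite clock.
v = math.sqrt(GM / r_orbit)                       # circular-orbit speed, m/s
sr_loss_us = (v**2 / (2 * c**2)) * seconds_per_day * 1e6

# General relativity: the weaker gravitational potential in orbit
# speeds the satellite clock relative to one on the Earth's surface.
gr_gain_us = (GM / c**2) * (1 / R_earth - 1 / r_orbit) * seconds_per_day * 1e6

net_us = gr_gain_us - sr_loss_us
print(f"SR loss:  {sr_loss_us:.1f} microseconds/day")   # ~7.2
print(f"GR gain:  {gr_gain_us:.1f} microseconds/day")   # ~45.7
print(f"Net gain: {net_us:.1f} microseconds/day")       # ~38.5
```

Two effective theories built on contradictory models of spacetime, added together term by term, and the satellites in your phone's GPS fix obey the sum every day.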

So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based upon completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based upon models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark upon your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.

If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.

Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 20 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog an unintended consequence arose in my mind as I became profoundly aware of the immensity of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.

But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For example, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 70 years has closely followed the same path as life on Earth over the past 4.0 billion years, in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 70 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and also of Freeman Dyson’s two-stage theory of the origin of life on Earth. 
In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.

Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10^9 Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10^43 Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. Then, in Model-Dependent Realism - A Positivistic Approach to Realism we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we at long last explore what software might really be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.

The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that most computer simulations of the physical Universe suffer. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call upon the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.

The Impact of Self-Replicating Information Upon the Planet
One of the key findings of softwarephysics is concerned with the magnitude of the impact upon the planet of self-replicating information.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Note that the self-replicating autocatalytic metabolic pathways of organic molecules, RNA and DNA have become so heavily intertwined over time that I now simply refer to them collectively as the “genes”. Over the past 4.0 billion years, the surface of the Earth has been totally reworked by three forms of self-replicating information – the genes, memes and software, with software rapidly becoming the dominant form of self-replicating information on the planet. For more on this see:
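As a purely illustrative aside, the defining property above - information that persists by making copies of itself - has a minimal software counterpart in the classic quine, a program whose only output is an exact copy of its own source code. The two-line Python program below (ignoring the comment lines) prints itself:

```python
# A classic quine: a program whose output is an exact copy of its own
# source code. The two lines below (ignoring these comments) form the
# complete self-replicating program.
source = 'source = %r\nprint(source %% source)'
print(source % source)
```

The trick is that the string holds a template of the whole program, and %r substitutes the string's own quoted representation back into that template - a tidy, if toy, example of information enlisting its own structure to ensure that a copy of itself is made.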

A Brief History of Self-Replicating Information
The Great War That Will Not End
How to Use an Understanding of Self-Replicating Information to Avoid War
How to Use Softwarephysics to Revive Memetics in Academia
Is Self-Replicating Information Inherently Self-Destructive?
Is the Universe Fine-Tuned for Self-Replicating Information?
Self-Replicating Information

Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact upon one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what it's all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely still a work in progress. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:

MoneyPhysics – my impression of the 2008 world financial meltdown.

The Fundamental Problem of Everything – If you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!

What’s It All About? and Genes, Memes and Software – my current working hypothesis on what’s it all about.

How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.

Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.

Some Specifics About These Postings
The postings in this blog are supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton upon which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read in order from the oldest to the most recent - the reverse of the order in which a blog normally displays them - beginning with my original posting on SoftwarePhysics. In addition, several universities also now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". At this point we will finish up my original plan for this blog on softwarephysics with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.

For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because in order to get the Introduction to Softwarephysics listed as the first post in the context root of this blog, I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of this blog. The side effect of all this is that the real posting date of posts is the date that appears on the post that you get when clicking on the Newer Post link at the bottom left of the posting webpage.

SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Microsoft OneDrive.

SoftwarePhysics 101 – The Physics of Cyberspacetime - Original PowerPoint document

Entropy – A spreadsheet referenced in the document

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston

Thursday, November 05, 2015

The Enduring Effects of the Obvious Hiding in Plain Sight

We are now in the midst of the 2016 presidential election cycle in the United States, and like many Americans, I have been watching the debates between the candidates seeking the presidential nomination for both the Democratic and Republican parties with interest. But being a softwarephysicist, I have the advantage of bringing into the analysis the fact that mankind is currently living in a very unusual time, as we witness software rapidly becoming the dominant form of self-replicating information on the planet.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

See A Brief History of Self-Replicating Information for details. Without that knowledge, it seems that both parties are essentially lost in space and time without a clue. The debates have shown that both parties are deeply concerned about the evaporation of the middle class in America over the past several decades, and both have proposed various solutions from the past that will not work in the future because this time the evaporation of the middle classes throughout the world is just one of the initial symptoms of software taking over control (see The Economics of the Coming Software Singularity for details). In my opinion, neither party is dealing with the sociological problems that will arise due to the fact that over the next 10 - 100 years all human labor will go to zero value as software comes to dominate, and that includes the labor of doctors, lawyers, soldiers, bankers and theoretically even politicians. How will the age-old oligarchical societies of the world deal with that in a manner that allows civilization to continue? Since we first invented civilization about 12,000 years ago in the Middle East, we have never been faced with the situation where the ruling class of the top 1% has not needed the remaining 99% of us around at all.

All of this reminds me of the great concealing power of the obvious hiding in plain sight. For example, in many of my preceding postings I have remarked that, given the hodge-podge of precursors, false starts, and failed attempts that led to the origin and early evolution of software on the Earth, if we had actually been around to observe the origin and early evolution of life on the Earth, we would probably still be sitting around today arguing about what had actually happened (see A Proposal for an Odd Collaboration to Explore the Origin of Life with IT Professionals for more on that). However, I am now more of the opinion that had we actually been around to observe that event, we probably would not have even noticed it happening. As an IT professional actively monitoring the evolution of software for the past 43 years, ever since taking CS101 at the University of Illinois in Urbana back in 1972, I have long suggested that researchers investigating the origin of life on Earth and elsewhere conduct some field work in the IT departments of some major corporations, and then to use the origin and evolution of commercial software over the past 70 years or 2.2 billion seconds as a guide. But in order to do so, one must first be able to see the obvious, and that is not always easy to do. The obvious thing to see is that we are all on the verge of a very significant event in the history of the Earth - the time in which software becomes the dominant form of self-replicating information upon the planet and perhaps within our galaxy. This transition to software as the dominant form of self-replicating information upon the Earth will have an even more dramatic effect than all of the previous transitions of self-replicating information because it may alter the future of our galaxy as software begins to explore the galaxy on board von Neumann probes, self-replicating robotic probes that travel from star system to star system building copies along the way.
Studies have shown that once released, von Neumann probes could easily colonize our entire galaxy within a few million years. However, seeing the obvious is always difficult because the obvious tends to fade into the background of daily life, as Edgar Allan Poe noted in his short story The Purloined Letter (1844) in which he suggested that the safest place to hide something was in plain sight because that was the last place interested parties would look.

A good scientific example of this phenomenon is Olbers' paradox, named after the German astronomer Heinrich Wilhelm Olbers (1758–1840). In the 19th century, many believed that our Universe was both infinite in space and time, meaning that it had always existed about as we see it today and was also infinitely large and filled with stars. However, this model presented a problem in that if it were true, the night sky should be as bright as the surface of the Sun because no matter where one looked, eventually a star would be seen.

Figure 1 - If the Universe were infinitely old, infinitely large and filled with stars then, wherever one looked, eventually a star would be seen. Consequently, the night sky should be as bright as the surface of the Sun.

Figure 2 - But the night sky is dark, so something must be wrong with the assumption that the Universe is infinitely old, infinitely large and filled with stars.

Surprisingly, Edgar Allan Poe came up with the obvious solution to Olbers' paradox in his poem Eureka (1848):

Were the succession of stars endless, then the background of the sky would present us a uniform luminosity, like that displayed by the Galaxy – since there could be absolutely no point, in all that background, at which would not exist a star. The only mode, therefore, in which, under such a state of affairs, we could comprehend the voids which our telescopes find in innumerable directions, would be by supposing the distance of the invisible background so immense that no ray from it has yet been able to reach us at all.

If light had not yet had time to reach us from some distant parts of the Universe, that meant that the Universe could not be infinitely old. The solution was staring us in the face. I have read that Edgar Allan Poe was very excited about this profound insight and even notified some newspapers of his discovery. Even today his idea has profound implications. It means that in order for the night sky to be dark, we must be causally disconnected from much of the Universe if the Universe is infinitely large and infinitely old. Currently, we have two models that provide for that - Andrei Linde's Eternal Chaotic Inflation (1986) model and Lee Smolin's black hole model presented in his The Life of the Cosmos (1997). In Eternal Chaotic Inflation the Multiverse is infinite in size and infinite in age, but we are causally disconnected from nearly all of it because nearly all of the Multiverse is inflating away from us faster than the speed of light, and so we cannot see it (see The Software Universe as an Implementation of the Mathematical Universe Hypothesis). In Lee Smolin's model of the Multiverse, whenever a black hole forms in one universe it causes a white hole to form in a new universe, which is internally observed as the Big Bang of that new universe. A new baby universe formed from a black hole in its parent universe is causally disconnected from its parent by the event horizon of the parent black hole and therefore cannot be seen (see An Alternative Model of the Software Universe).

Another good example of the obvious hiding in plain sight is plate tectonics. A cursory look at the South Atlantic or the Red Sea quickly reveals what is going on, but it took hundreds of years after the Earth was first mapped for plate tectonics to be deemed obvious and self-evident to all.

Figure 3 - Plate tectonics was also hiding in plain sight as nearly every school child in the 1950s noted that South America seemed to fit nicely into the notch of Africa, only to be told it was just a coincidence by their elders.

Figure 4 - The Red Sea even provided a vivid example of how South America and Africa could have split apart a long time ago.

Software Does Not Care About Marginal Tax Rates and Other Such Things
As I pointed out in The Economics of the Coming Software Singularity, we really do not know what will happen to mankind when we finally do hit the Software Singularity, and software finally becomes capable of self-replicating on its own. But before that happens, there certainly will be a great deal of sociological upheaval, and we should all begin to prepare for it now, in advance. This sociological upheaval will be further complicated by the effects of the climate change that we have all collectively decided not to halt, and by the sixth major extinction of carbon-based life forms on the planet, currently induced by the activities of human beings. As the latest wave of mindless self-replicating information to reshape the surface of the Earth, software really does not care about such things. Software really does not care if the Earth has a daily high of 140 °F, with purple oceans choked with hydrogen-sulfide-producing bacteria beneath a dingy green sky, an atmosphere tainted with toxic levels of hydrogen sulfide gas, and an oxygen level of only 12%, like we had during the Permian-Triassic greenhouse mass extinction 252 million years ago. For more on the possible impending perils of software becoming the dominant form of self-replicating information on the planet, please see Susan Blackmore's TED presentation at:

Note that I consider Susan Blackmore's temes to really be technological artifacts that contain software. After all, an iPhone without software is simply a flake tool with a very blunt edge.

Figure 5 - Konrad Zuse with a reconstructed Z3 computer in 1961. He first cranked up some software on his original Z3 in May of 1941.

Figure 6 - Now software has become ubiquitous and is found in nearly all things produced by mankind, and will increasingly grow in importance as the Internet of Things (IoT) unfolds.

So although both parties maintain that the 2016 election will be pivotal because it might determine the future of your tax rates, I would like to suggest that there are a few more pressing items that need to be dealt with first. See Is Self-Replicating Information Inherently Self-Destructive? and How to Use Your IT Skills to Save the World for details on what you can do to help.

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston

Thursday, October 22, 2015

Don't ASAP Your Life Away

For the benefit of international readers let me begin with a definition:

ASAP - An American acronym for "As Soon As Possible", meaning please drop everything and do this right now instead.

I am now a 64-year-old IT professional, planning to work until I am about 70 years old if my health holds up. Currently, I am doing middleware work from home for the IT department of a major corporation, and only go into the office a few times each year, which is emblematic of my career path trajectory towards retirement. Now I have really enjoyed my career in IT all of these years, but having been around the block a few times, I would like to offer a little advice to those just starting out in IT, and that is to be sure to pace yourself for the long haul. You really need to dial it back a bit to go the distance. I don't want this to be seen as a negative posting about careers in IT, but I personally have seen way too many bright young IT professionals burn out due to an overexposure to stress and long hours, and that is a shame. So dialing it back a bit should be seen as a positive recommendation. And you have to get over thinking that dialing it back to a tolerable long-term level makes you a lazy, worthless person. In fact, dialing it back a little will give you the opportunity to be a little more creative and introspective in your IT work, and maybe actually come up with something really neat in your IT career.

This all became evident to me back in 1979 when I transitioned from being a class 9 exploration geophysicist in one of Amoco's exploration departments to become a class 9 IT professional in Amoco's IT department. One very scary Monday morning, I was conducted to my new office cubicle in Amoco’s IT department, and I immediately found myself surrounded by a large number of very strange IT people, all scurrying about in a near state of panic, like the characters in Alice in Wonderland. After 36 years in the IT departments of several major corporations, I can now state with confidence that most corporate IT departments can best be described as “frantic” in nature. This new IT job was a totally alien experience for me, and I immediately thought that I had just made a very dreadful mistake. Granted, I had been programming geophysical models for my thesis and for oil companies ever since taking a basic FORTRAN course back in 1972, but that was the full extent of my academic credentials in computer science. I immediately noticed some glaring differences between my two class 9 jobs in the same corporation. As a class 9 geophysicist, I had an enclosed office on the 52nd floor of the Amoco Building in downtown Chicago, with a door that actually locked, and a nice view of the north side of the Chicago Loop and Lake Michigan. With my new class 9 IT job at Amoco I moved down to the low-rent district of the Amoco Building on the 10th floor where the IT department was located to a cubicle with walls that did not provide very much privacy. Only class 11 and 12 IT professionals had relatively secluded cubicles with walls that offered some degree of privacy. Later I learned that you had to be a class 13 IT Manager, like my new boss, to get an enclosed office like I had back up on the 52nd floor. I also noticed that the stress levels of this new IT job had increased tremendously over my previous job as an exploration geophysicist. 
As a young geophysicist, I was mainly processing seismic data on computers for the more experienced geophysicists to interpret and to plan where to drill the next exploration wells. Sure there was some level of time-urgency because we had to drill a certain number of exploration wells each year to maintain our drilling concessions with foreign governments, but still, work proceeded at a rather manageable pace, allowing us ample time to play with the processing parameters of the software used to process the seismic data into seismic sections.

Figure 1 - Prior to becoming an IT professional, I was mainly using software to process seismic data into seismic sections that could be used to locate exploration wells.

However, the moment I became an IT professional, all of that changed. Suddenly, everything I was supposed to do became a frantic ASAP effort. It is very difficult to do quality work when everything you are supposed to do is ASAP. Projects would come and go, but they were always time-urgent and very stressful, to the point that it affected the quality of the work that was done. It seemed that there was always the temptation to simply slap something into production to hit an arbitrary deadline, ready or not, and many times we were forced to succumb to that temptation. This became more evident when I moved from Applications Development to Operations about 15 years ago, and had to live with the sins of pushing software into production before it was quite ready for prime time. In recent decades I also noticed a tendency to hastily bring IT projects in through heroic efforts of breakneck activity, and for IT Management to then act as if that were actually a good thing after the project was completed. When I first transitioned into IT, I also noticed that I was treated a bit more like a high-paid clerk than a highly trained professional, mainly because of the time-pressures of getting things done. One rarely had time to properly think things through. I seriously doubt that most business professionals would want to hurry their surgeons along while under the knife, but that is not so for their IT support professionals.

You might wonder why I did not immediately run back to exploration geophysics in a panic. There certainly were enough jobs for an exploration geophysicist at the time because we were just experiencing the explosion of oil prices resulting from the 1979 Iranian Revolution. However, my wife and I were both from the Chicago area, and we wanted to stay there. In fact, I had just left a really great job with Shell in Houston to come to Amoco's exploration department in Chicago for that very reason. However, when it was announced about six months after my arrival at Amoco that Amoco was moving the Chicago exploration department to Houston, I think the Chief Geophysicist who had just hired me felt guilty, and he found me a job in Amoco's IT department so that we could stay in Chicago. So I was determined to stick it out for a while in IT, until something better might come along. However, after a few months in Amoco's IT department, I began to become intrigued. It seemed as though these strange IT people had actually created their own little simulated universe that, seemingly, I could explore on my own. It also seemed to me that my new IT coworkers were struggling because they did not have a theoretical framework to work from, like the one I had had in Amoco's exploration department. That is when I started working on softwarephysics. I figured if you could apply physics to geology, why not apply physics to software? I then began reading the IT trade rags to see if anybody else was doing similar research, and it seemed as though nobody else on the planet was thinking along those lines, and that raised my level of interest in doing so even higher.

But for the remainder of this posting, I would like to explore some of the advantages of dialing it back a bit by going back to a 100-year-old case study. I just finished reading Miss Leavitt's Stars - the Untold Story of the Woman Who Discovered How to Measure the Universe (2005) by George Johnson, a biography of Henrietta Swan Leavitt, who in 1908 discovered the Luminosity-Period relationship of Cepheid variables that allowed Edwin Hubble in the 1920s to calculate the distances to external galaxies, and ultimately, determine that the Universe was expanding. This discovery was certainly an example of work worthy of a Nobel Prize that went unrewarded. Henrietta Leavitt started out as a human "computer" in the Harvard College Observatory in 1893, examining photographic plates in order to tabulate the locations and magnitudes of stars on the photographic plates for 25 cents/hour. Brighter stars made larger spots on photographic plates than dimmer stars, so it was possible to determine the magnitude of a star on a photographic plate by comparing it to the sizes of the spots of stars with known magnitudes. She also worked on tabulating data on the varying brightness of variable stars. Variable stars were located by overlaying a negative plate that consisted of a white sky containing black stars and a positive plate that consisted of a dark sky containing white stars. The two plates were taken some days or weeks apart in time. Then by holding up both superimposed plates to the light from a window, one could flip them back and forth, looking for variable stars. If you saw a black dot with a white halo or a white dot with a black halo, you knew that you had found a variable star.

What Henrietta Leavitt noted was that certain variable stars in the Magellanic Clouds, called Cepheid variables, varied in luminosity in a remarkable way. The Large Magellanic Cloud is about 160,000 light years away, while the Small Magellanic Cloud is about 200,000 light years distant. Both are nearby small irregular galaxies. The important point is that the stars in each Magellanic Cloud are all at about the same distance from the Earth. What Henrietta Leavitt discovered was that the Cepheid variables in each Magellanic Cloud varied such that the brighter Cepheid variables had longer periods than the fainter Cepheid variables. Since all of the Cepheid variables in each of the Magellanic Clouds were at approximately the same distance, that meant that the Cepheid variables that appeared brighter when viewed from the Earth actually were intrinsically brighter. Now if one could find the distance to some nearby Cepheid variables, using the good old parallax method displayed in Figure 5, then by simply measuring the luminosity period of a Cepheid variable, it would be possible to tell how bright the star really was - see Figure 6. However, it was a little more complicated than that because there were no Cepheid variables within the range that the parallax method worked; they were all too far away. So instead, astronomers used the parallax method to determine the local terrain of stars in our neighborhood and how fast the Sun was moving relative to them. Then by recording the apparent slow drift of distant Cepheid variables relative to even more distant stars, caused by the Sun moving along through our galaxy, it was possible to estimate the distance to a number of Cepheid variables.
Note that obtaining the distance to a number of Cepheid variables by other means is no longer the challenge that it once was because from November 1989 to March 1993 the Hipparcos satellite measured the parallax of 118,200 stars accurate to one-milliarcsecond, and 273 Cepheid variables were amongst the data, at long last providing a direct measurement of some Cepheid variable distances. Once the distance to a number of Cepheid variables was determined by other means, it allowed astronomers to create the Luminosity-Period plot of Figure 6. Then by comparing how bright a Cepheid variable appeared in the sky relative to how bright it really was, it was possible to figure out how far away the Cepheid variable actually was. That was because if two Cepheid variables had the same period, and therefore, the same intrinsic brightness, but one star appeared 100 times dimmer in the sky than the other star, that meant that the dimmer star was 10 times further away than the brighter star because the apparent luminosity of a star falls off as the square of the distance to the star. Additionally, it also turned out that the Cepheid variables were extremely bright stars that were many thousands of times brighter than our own Sun, so they could be seen from great distances, and could even be seen in nearby galaxies. Thus it became possible to find the distances to galaxies using Cepheid variables.
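The inverse-square reasoning above is easy to check with a few lines of code. Here is a minimal sketch (the function name and the sample flux ratio are purely illustrative):

```python
import math

def distance_ratio(flux_ratio):
    """Apparent brightness falls off as the square of the distance,
    so a star that appears flux_ratio times dimmer than a star of
    the same intrinsic brightness is sqrt(flux_ratio) times farther away."""
    return math.sqrt(flux_ratio)

# Two Cepheid variables with the same period have the same intrinsic
# brightness; if one appears 100 times dimmer in the sky, it must be
# 10 times farther away.
print(distance_ratio(100.0))  # 10.0
```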

Figure 2 - Henrietta Swan Leavitt July 4, 1868 - December 12, 1921 died at an age of 53.

Figure 3 - The human computers of the Harvard Observatory were used to tabulate the locations and magnitudes of stars on photographic plates and made 25 cents/hour. Female cotton mill workers made about 15 cents/hour at the time.

Figure 4 - In 1908 Henrietta Leavitt discovered that the brighter Cepheid variables in the Magellanic Clouds had longer periods than the dimmer Cepheid variables, as seen from the Earth. She published those results in 1912. Because all of the Cepheid variables in the Magellanic Clouds were at approximately the same distance, that meant that the Cepheid variables that appeared brighter in the sky were actually intrinsically brighter, so Henrietta Leavitt could then plot the apparent brightness of those Cepheid variables against their periods to obtain a plot like Figure 6. Later it was determined that this variability in luminosity was due to the Cepheid variables pulsating in size. When Cepheid variables grow in size, their surface areas increase and their surface temperatures drop. Because the luminosity of a star goes as the square of its radius (R²), but as the surface temperature raised to the 4th power (T⁴), the drop in temperature wins out, and so when a Cepheid variable swells in size, its brightness actually decreases.
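The R² versus T⁴ trade-off described in the caption above can be sketched numerically. This is a minimal illustration, and the pulsation numbers are made up purely to show the effect:

```python
def luminosity_factor(radius_factor, temp_factor):
    """Relative change in a star's luminosity when its radius and
    surface temperature change: L is proportional to R**2 * T**4."""
    return radius_factor ** 2 * temp_factor ** 4

# Hypothetical numbers for illustration only: if a Cepheid swells 10%
# in radius while its surface cools by 10%, the T**4 term wins and
# the star actually dims (the factor comes out below 1).
print(luminosity_factor(1.10, 0.90))
```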

Figure 5 - The standard parallax method can determine the distance to nearby stars. Unfortunately, no known Cepheid variables were close enough for the parallax method to work. Instead, the parallax method was used to figure out the locations of stars near to the Sun, and then the motion of the Sun relative to the nearby stars was calculated. This allowed the slow apparent drift of some Cepheid variables against the background of very distant stars to be used to calculate the distance to a number of Cepheid variables. With those calculations, combined with the apparent brightness of the Cepheid variables, it was possible to create the Luminosity-Period plot of Figure 6.
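For reference, the parallax method itself reduces to a one-line formula: a star's distance in parsecs is the reciprocal of its parallax angle in arcseconds. A minimal sketch:

```python
def distance_parsecs(parallax_arcsec):
    """Distance in parsecs from the annual parallax angle in arcseconds:
    d = 1 / p, by the definition of the parsec (1 parsec is about
    3.26 light years)."""
    return 1.0 / parallax_arcsec

# A star with a parallax of 0.1 arcseconds lies at 10 parsecs. The
# milliarcsecond precision of Hipparcos thus reached out to stars
# hundreds of parsecs away, but the classical Cepheids were farther
# still, which is why the indirect drift method was needed in 1908.
print(distance_parsecs(0.1))
```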

Figure 6 - This was a crucial observation because it meant that by simply measuring the amount of time it took a Cepheid variable to complete a cycle it was possible to obtain its intrinsic brightness or luminosity. Because Cepheid variables are also very bright stars in general, that meant it was easy to see them in nearby galaxies. For example, from the above graph we can see that a Cepheid variable with a period of 30 days is about 10,000 times brighter than the Sun. That means it can be seen about 100 times further away than our own Sun can be seen, and could even be seen in a distant galaxy.

While reading Miss Leavitt's Stars, I was taken aback, as I always am, by the slow pace of life 100 years ago, in contrast to the breakneck pace of life today. People in those days lived about half as long as we do today, yet they went through life about 1,000 times slower. For example, Henrietta Leavitt worked for Edward Pickering at the Harvard College Observatory for many decades. Unfortunately, she suffered from poor health, as did many people 100 years ago, and a number of times had to return home to Beloit, Wisconsin to recuperate for many months at a time. It was very revealing to read the correspondence between the two while she was at home convalescing. It seems that in those days the safest and most effective medicine was bed rest. Putting yourself into the hands of the medical establishment of the day was a risky business indeed. In fact, they might treat you with a dose of radium salts to perk you up. However, Edward Pickering really needed Henrietta Leavitt to complete some work on her observations of Cepheid variables in order for them to be used as standard candles to measure astronomical distances, so much so that he even raised her wages to 30 cents/hour. But because of poor health Henrietta Leavitt had to take it easy. Despite the criticality of her work, the correspondence went back and forth between the two in an excruciatingly slow manner, with a time scale of several months between letters, certainly not in the ASAP manner of today with its overwhelming urgency of nearly immediate response times. Sometimes Edward Pickering would even ship photographic plates to Henrietta Leavitt for her to work on. Even when Henrietta Leavitt did return to work at the Harvard College Observatory, many times she could only work a few hours each day.
Although this at first may seem incredibly passé and out of touch with the ASAP pace of the modern world, I have to wonder: if Henrietta Leavitt had simply ground out stellar luminosities at 30 cents/hour, as fast as she possibly could in a mind-numbing way, would she ever have had the time to calmly sit back and see what nobody else had managed to see? Perhaps if everybody in IT dialed it back a bit, we could do the same. It would also help if IT Management treated IT professionals in less of a clerk-like manner, and allowed them the time to be as creative as they really could be.

So my advice to those just starting out in IT is to dial it back a bit, and to always keep a sense of perspective. It is important to always make time for yourself and for your family, and to allow enough time to actually think about what you are doing and what you are trying to achieve in life. With enough time, maybe you might come up with something as astounding as did Henrietta Swan Leavitt.

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston

Saturday, October 03, 2015

The Economics of the Coming Software Singularity

I was born in 1951, a few months after the United States government bought its very first commercial computer, a UNIVAC I, for the Census Bureau on March 31, 1951. So when I think back to my early childhood, I can still remember a time when there essentially was no software at all in the world. In fact, I can still vividly remember my very first encounter with a computer on Monday, Nov. 19, 1956, watching the Art Linkletter TV show People Are Funny. Art was showcasing a UNIVAC 21 “electronic brain” sorting through the questionnaires from 4,000 hopeful singles, looking for the ideal match. The machine paired up John Caran, 28, and Barbara Smith, 23, who later became engaged. And this was more than 40 years before online dating! To a five-year-old boy, a machine that could “think” was truly amazing. Since that first encounter with a computer back in 1956, I have personally witnessed software slowly becoming the dominant form of self-replicating information on the planet, and I have also seen how software has totally reworked the surface of the planet to provide a secure and cozy home for more and more software of ever increasing capability. Now of course a UNIVAC 21 could not really "think", but the idea of the software on a computer really "thinking" is no longer so farfetched. But since the concept of computers really "thinking" is so subjective and divisive, in this posting I would like to instead focus on something more objective and measurable, and then work through some of its upcoming implications for mankind. In order to do that, let me first define the concept of the Software Singularity:

The Software Singularity – The point in time when software is finally capable of generating software faster and more reliably than a human programmer, and finally becomes fully self-replicating on its own.

Notice that mankind can experience the Software Singularity while people are still hotly debating whether the software that first initiates the Software Singularity can really "think" or not. In that regard, the Software Singularity is just an extension of the Turing Test, narrowly applied to the ability of software to produce operational software on its own. Whether that software has become fully self-aware and conscious is irrelevant. The main concern is whether software will ever be able to self-replicate on its own. Again, in softwarephysics software is simply considered to be the latest form of self-replicating information to appear on the planet:

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Note that because the self-replicating autocatalytic metabolic pathways of organic molecules, RNA and DNA have become so heavily intertwined over time, I now simply refer to them collectively as the “genes”. Over the past 4.0 billion years, the surface of the Earth has been totally reworked by three forms of self-replicating information – the genes, memes and software, with software rapidly becoming the dominant form of self-replicating information on the planet. For more on this see:

A Brief History of Self-Replicating Information
The Great War That Will Not End
How to Use an Understanding of Self-Replicating Information to Avoid War
How to Use Softwarephysics to Revive Memetics in Academia
Is Self-Replicating Information Inherently Self-Destructive?
Is the Universe Fine-Tuned for Self-Replicating Information?
Self-Replicating Information

Some Possible Implications of the Software Singularity
If things keep moving along as they currently are, the Software Singularity will most certainly occur within the next 100 years or so, and perhaps much sooner. I am always amused when I hear people speculating about what Homo sapiens will be doing in 100 million or a billion years from now, without taking into account the fact that we are currently living in a very strange transitory period on the brink of a Software Singularity that will change everything. Therefore, I think we need to worry more about what will happen over the next 100 years, rather than the next 100 million years, because once the Software Singularity happens it will be a sudden phase change that will mark the time when software finally becomes the dominant form of self-replicating information on the planet, and what happens after that, nobody can really tell. Traditionally, all of the other waves of self-replicating information have always kept their predecessors around because their predecessors were found to be useful in helping the new wave to self-replicate, but that may not be true for software. There is a good chance that software will not need organic molecules, RNA and DNA to survive, and that is not very promising for us.

However, we have already seen many remarkable changes happen to mankind as software has proceeded towards the Software Singularity. In that regard, it is as if we had passed through an event horizon in May of 1941 when Konrad Zuse first cranked up some software on his Z3 computer, and there has been no turning back ever since, as we have continued to fall headlong into the Software Singularity. Since we really cannot tell what will happen when we hit the Software Singularity, let's focus on our free fall trip towards it instead. It is obvious that software has already totally transformed all of the modern societies of the Earth, and for the most part in a very positive manner. But I would like to explore one of the less positive characteristics of the rise of software in the world - that of a growing wealth disparity. Wealth disparity is currently a hot topic in a number of political circles these days, and usually the discussion boils down to whether taxing the rich would fix or exacerbate the problem. The poor maintain that the rich need to pay more taxes, and that governments then need to redistribute the wealth to the less well off. The rich, on the other hand, maintain that increasing taxes on the rich only reduces the incentive for the rich to get richer and to reluctantly drag the poor along with them. So who is right? In the United States this growing income disparity of recent decades has been attributed to the end of the Great Compression of the 1950s, 1960s and early 1970s, which created a huge middle class in the United States. During the Great Compression, the marginal income tax on the very rich grew to as high as 90%, yet this period was also characterized by the largest economic boom in the history of the United States. The theory is that the top 1% of earners will not push for increased compensation if 90% of their marginal income goes to taxes.
The end of the Great Compression has been attributed to the massive tax cuts on the rich during the Reagan and George W. Bush administrations in the United States as displayed in Figure 1. But could there be another explanation? In order to investigate that, we need to look beyond the United States to all of the Western world. Figure 2 shows that the very same thing has been happening in most of the Western world over the past 100 years. Figure 2 shows that the concentration of income for the top 1% dramatically dropped during the first half of the 20th century in many different countries in the Western world that had a large variation in their approaches to taxation and the redistribution of wealth, yet we see the very same trend in recent decades to concentrate more and more wealth into the top 1% that we have seen in the United States. Could it be that there is another factor involved beyond simply changing tax rates?

Figure 1 - In the United States, the Great Compression of the 1950s, 1960s and early 1970s has been attributed to the high marginal income taxes on the very rich during that period, and the redistribution of wealth to the less well off. The end of the Great Compression came with the massive tax cuts on the very rich during the Reagan administration, which were greatly expanded during the administration of George W. Bush.

Figure 2 - However, the percent of income received by the top 1% of many Western nations also fell dramatically during the first half of the 20th century, but began to slow around 1950 when software first appeared. This downward trend bottomed out in the early 1980s when PC software began to proliferate. Since then the downward trend has reversed direction and has climbed dramatically so that now the top 1% of earners on average receive about 12% of the total income. This is true despite a very diverse approach to taxation and social welfare programs amongst all of the nations charted.

The Natural Order of Things
We keep pushing the date back for the first appearance of civilization on the Earth. Right now civilization seems to have first appeared in the Middle East about 12,000 years ago, when mankind first pulled out of the last ice age. Ever since we first invented civilization there has been one enduring fact: it seems that all societies, no matter how they are organized, have always been ruled by a 1% elite. This oligarchical fact has been true under numerous social and economic systems - autocracies, feudalism, capitalism, socialism and communism. It just seems that there has always been about 1% of the population that liked to run things, no matter how things were set up, and there is nothing wrong with that. We certainly always need somebody around to run things, because honestly, 99% of us simply do not have the ambition or desire to do so. Of course, the problem throughout history has always been that the top 1% naturally tended to abuse the privilege a bit and overdo things a little, resulting in 99% of the population having a substantially lower economic standard of living than the top 1%, and that has led to several revolutions in the past that did not always end so well. However, historically, so long as the bulk of the population had a relatively decent life, things went well in general for the entire society. The key to this economic stability has always been that the top 1% has always needed the remaining 99% of us to do things for them, and that maintained the hierarchical peace within societies.

But the advent of software changed all of that. Suddenly with software it became possible to have machines perform many of the tasks previously performed by people. This began in the 1950s, as commercial software first began to arrive on the scene, and has grown in an exponential manner ever since. At first the arrival of software did not pose a great threat because, as it began to automate factory and clerical work, it also produced a large number of highly paid IT workers to create and maintain the software, and skilled factory workers who could operate the software-laden machinery. However, by the early 1980s this was no longer true. After initially displacing many factory and clerical workers, software then went on to displace people at higher and higher skill levels. Software also allowed managers in modern economies to move manual and low-skilled work to the emerging economies of the world where wage scales were substantially lower, because it was now possible to remotely manage such operations using software. As software grew in sophistication this even allowed for the outsourcing of highly skilled labor to the emerging economies. For example, in today's world of modern software it is now possible to outsource complex things like legal and medical work to the emerging economies. Today, your CT scan might be read by a radiologist in India or Cambodia, and your biopsy might be read by a pathologist on the other side of the world as well. In fact, large amounts of IT work have also been outsourced to India and other countries. But as the capabilities of software continue to progress and general purpose androids begin to appear later in the century, there will come a point when even the highly reduced labor costs of the emerging economies will become too dear. At that point the top 1% ruling class may not have much need for the remaining 99% of us, especially if the androids start building the androids.
This will naturally cause some stresses within the current oligarchical structure of societies, as their middle classes continue to evaporate and more and more wealth continues to concentrate into the top 1%.

Additionally, while software was busily eliminating many high-paying middle class jobs over the past few decades, it also allowed for a huge financial sector to flourish. For example, today the world's GWP (Gross World Product) comes to about $78 trillion in goods and services, but at the same time we manage to trade about $700 trillion in derivatives each year. This burgeoning financial sector made huge amounts of money for the top 1% without really producing anything of economic value - see MoneyPhysics and MoneyPhysics Revisited for more about the destabilizing effects of financial speculation that has run amok. Such financial speculation would be entirely impossible without software, because it would take more than the entire population of the Earth to do the necessary clerical work manually, like we did back in the 1940s.

So as we rapidly fall into the Software Singularity, the phasing out of Homo sapiens may have already begun.

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston

Sunday, September 13, 2015

The Danger of Believing in Things

During the course of your career as an IT professional, you will undoubtedly come across instances when your IT Management will institute new policies that seem to make no sense at all. Surprisingly, you will find that many of your coworkers secretly agree that the new policies actually make things worse, yet no one will openly question them. Much of this stems from basic Hierarchiology - see Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse for details. But some of it also stems from the fact that much of human thought is seriously deficient in rigor because it is largely based upon believing in things, and therefore is non-critical in nature. It seems that as human beings we just tend not to question our own belief systems or the belief systems that are imposed upon us by the authorities we have grown up with. Instead, we tend to seek out people who validate our own belief systems, and to just adapt as best we can to the belief systems that are imposed upon us. Politicians are keenly aware of this fact, as is evidenced by the 2016 presidential election cycle, which is now in full swing in the United States. Politicians simply seek to validate the belief systems of enough people to get elected to office.

In The Great War That Will Not End I explained that this failure in critical thinking arose primarily because our minds are infected with memes that are forms of self-replicating information bent on replicating at all costs, and I discussed how Susan Blackmore had pointed out in The Meme Machine (1999) that we are not so much thinking machines as we are copying machines. Blackmore maintains that memetic-drive was responsible for creating our extremely large brains, and also our languages and cultures as well, in order to store and spread memes more effectively. So our minds evolved to believe in things, which most times is quite useful, but also has its downsides. For example, there is a strong selection pressure for humans to unquestioningly believe that if they accidentally let go of a branch while hiding in a tree waiting to ambush some game, they will accelerate to the ground and sustain a nasty fall. In such a situation there is no evolutionary advantage for an individual to enter into a moment of self-reflection to question their belief system in regards to the nature of falling. Instead, a quick knee-jerk reaction to grasp at any branch at all in a panic is called for. Unfortunately, this reflex tendency to unquestioningly believe in things seems to extend to most of human thought, and that can get us into lots of trouble.

In How To Think Like A Scientist I explained that there were three ways to gain knowledge:

1. Inspiration/Revelation
2. Deductive Rationalism
3. Inductive Empiricism

and that the Scientific Method was one of the very few human protocols that used all three.

The Scientific Method
1. Formulate a set of hypotheses based upon Inspiration/Revelation with a little empirical inductive evidence mixed in.

2. Expand the hypotheses into a self-consistent model or theory by deducing the implications of the hypotheses.

3. Use more empirical induction to test the model or theory by analyzing many documented field observations or by performing controlled experiments to see if the model or theory holds up. It helps to have a healthy level of skepticism at this point. As philosopher Karl Popper has pointed out, you cannot prove a theory to be true; you can only prove it to be false. Galileo pointed out that the truth is not afraid of scrutiny; the more you pound on the truth, the more you confirm its validity.

Some Thoughts on Human Thinking
False memes certainly do not like the above process very much since it tends to quickly weed them out. Instead, false memes thrive when people primarily rely on the Revelation part of step 1 in the process, and usually the Revelation comes from somebody else revealing an appealing meme to an individual. Again, appealing memes are usually memes that appeal to the genes, and usually have something to do with power, status, wealth or sex. The downside of relying primarily on Revelation for knowledge is that most times it is just a mechanism for a set of memes to replicate in a parasitic manner. Since we are primarily copying machines, and not thinking machines, the Inspiration part of step 1 does not happen very often. Now most forms of human thought do make a half-hearted attempt at step 2 in the process, by deducing some of the implications of the hypotheses that came from the Inspiration/Revelation step, but oftentimes this does not lead to a self-consistent model or theory. In fact, many times such deductions can lead to a model or theory that is self-contradictory in nature, and surprisingly, this does not seem to bother people much of the time. For some reason, people tend to just take the good with the bad in such cases, and stress the value of the good parts of their theory or model, while discounting the parts that appear to be a bit self-contradictory. Finally, it seems that step 3 is the step that is most frequently skipped by most of human thought. People rarely try to verify their models or theories with empirical evidence. That is probably because step 3 in the process requires the most work and rigor. Collecting data in an unbiased and rigorous manner is really difficult and frequently can take many years of hard work. Hardly anybody, other than observational and experimental scientists, is willing to make that sacrifice to support their worldview. 
In some cases they might collect some supporting evidence, like a lawyer trying to build a strong case for his client, while discarding any evidence that contradicts their model or theory, but even that is a rarity. Besides, if you have a really good idea that came to you via Inspiration/Revelation, and that makes sense for the most part when you deduce its implications, why bother checking it? Certainly, we can just have faith in it because it must be right, especially if it is a beautiful set of memes that also lead to power, status, wealth or sex.

The Trouble With Human Thought
If you have been following this blog closely, you might think that next I am going to come down hard on political and religious meme-complexes as examples of self-replicating information that do not follow the Scientific Method, but I am not going to do that. Personally, I view political and religious meme-complexes in a very positivistic manner in that I only care about the philosophies that they espouse. If they espouse philosophies that help mankind to rise above the selfish self-serving interests of our genes and memes through the values of the Enlightenment - evidence-based reasoning, respect for the aspirations of the individual, egalitarianism, concern for the welfare of mankind in general, tolerance of others, the education of the general public, and the solving of problems through civil discourse and democracy - then they are okay with me. Otherwise, I do not have much use for them. Religious meme-complexes invariably have very primitive mythological cosmologies, but cosmology is best handled by the sciences anyway, and that does not negate any of their more positive values.

Instead, I am going to raise concerns about one of the true loves of my life - physics itself. I just finished reading Not Even Wrong - the Failure of String Theory and the Search for Unity in Physical Law (2006) by Peter Woit. Unless you have a Ph.D. in physics and have recently done a postdoc heavily steeped in quantum field theory, I would suggest first reading Lee Smolin's very accessible The Trouble with Physics (2006), which raises the same concerns. Consequently, I would say that The Trouble with Physics best provides a cautionary tale for the general public and for physics undergraduates, while Not Even Wrong performs this same function for graduate students in physics or physicists outside of string theory research. Both books provide a very comprehensive deep-dive into the state of theoretical physics today. I certainly do not have the space here to outline all of the challenges and difficulties that theoretical physics faces today with string theory, because that takes at least an entire book for a genius like Lee Smolin or Peter Woit, but here it is in a nutshell. Basically, the problem is: what do you do when theoretical physics has outrun the technology needed to verify theories?

It all goes back to the 1950s and 1960s when particle physicists were able to generate all sorts of new particles out of the vacuum by smashing normal protons, antiprotons, electrons and positrons together at high energies with particle accelerators. Because the colliding particles had high energies, it was possible to generate all sorts of new particles using Einstein’s E = mc². With time, it was discovered that all of these hundreds of new particles could be characterized either as fundamental particles that could not be broken apart with our current technologies or as composite particles consisting of fundamental particles. Thanks to quantum field theory, we came up with the Standard Model in 1973, which arranged a set of fundamental particles into patterns of mass, charge, spin and other physical characteristics.

Figure 1 – The particles of the Standard Model are fundamental particles that we cannot bust apart with our current technologies, perhaps because it is theoretically impossible to do so with any technology.

Again, in quantum field theories everything is a field that extends over the entire Universe. So there are things like electron fields, neutrino fields, quark fields, gluon fields and more that extend over the entire Universe. For a brief introduction to quantum theory see: Quantum Software, SoftwareChemistry, and The Foundations of Quantum Computing. The quantum wavefunctions of these fundamental fields determine the probability of finding them in certain places doing certain things, and when we try to measure one of these quantum fields, we see the fundamental particle instead. Unfortunately, there are several problems with the Standard Model and the quantum field theories that explain it. Firstly, the Standard Model seems to be just too complicated. Recall that each of the above fundamental particles also has an antimatter twin, like the negatively charged electron having a positively charged twin positron with the same mass, so there are a very large number of fundamental particles, and these fundamental particles are also observed to behave in strange ways. The Standard Model also has nothing to say about the force of gravity, so it only covers three of the four known forces - the electromagnetic force, the strong nuclear force and the weak nuclear force. The Standard Model also has about 18 numbers that define things like the mass of the electron that have to be plugged into it as parameters. It would be nice to have a theory that explains those values from fundamental principles. The Standard Model is also based upon quantum field theories that struggle with the problem of infinities. Let me explain.

Physicists love to use a mathematical technique called perturbation theory to solve problems that are just too hard to solve mathematically with pure brute force. Rather than solving the problem directly, they expand the problem into a series of terms that add up to the final solution. The hope is that none of the terms in the expansion series will be infinite and that adding all of the terms of the series together will also not lead to an infinite sum. For example, suppose you want to calculate the value of π. Now it is known that:

π/4 = 1/1 – 1/3 + 1/5 - 1/7 + 1/9 – 1/11 + 1/13 – 1/15 + 1/17 ...

where 4 = 2², the first even integer raised to the first even power.

If you compute π/4 on your calculator you get:

π/4 = 0.7853982...

and if your calculator were powerful enough, the answer would continue on for an infinite number of digits. So let’s see how well the above series works:

1/1 = 1.000000000
1/1 - 1/3 = 0.6666666...
1/1 - 1/3 + 1/5 = 0.8666666...
1/1 - 1/3 + 1/5 - 1/7 = 0.7238095...
1/1 - 1/3 + 1/5 - 1/7 + 1/9 = 0.8349206...
1/1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 = 0.7440115...
1/1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + 1/13 = 0.8209346...
1/1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + 1/13 - 1/15 = 0.7542679...

What we see is that the approximation of π/4 gets better and better as we add more terms, and that each correction term gets smaller and smaller as we continue on, so the approximation of π/4 converges to its true value as we add up more and more terms. Also, none of the individual terms in the series are infinite like a term of 1/0, so this is a very useful and well-behaved series that approximates:

π/4 = 0.7853982...
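The partial sums above are easy to check with a few lines of Python (a quick sketch of my own; the function name is mine, not from any library):

```python
import math

# Partial sums of the Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
def leibniz_partial_sum(n_terms):
    return sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

# Print the first eight partial sums, matching the table above.
for n in range(1, 9):
    print(n, leibniz_partial_sum(n))

# The partial sums oscillate around the true value and slowly close in on it.
print(math.pi / 4)  # 0.7853981...
```

Note how the partial sums alternately overshoot and undershoot π/4, with each step getting a little closer - the signature of a well-behaved alternating series.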

The problem with the Standard Model is that it relies upon quantum field theories that have approximation series that are not so well behaved, and do have infinite terms in their perturbation theory series expansions. The way to get around this problem is a Nobel Prize-winning mathematical technique known as renormalization. With renormalization one rearranges the series containing infinite terms in such a way that the positive and negative infinities cancel out. It would be as if we altered our series above to:

π/4 = 1/1 – 1/3 + 1/5 - 1/7 + 1/9 – 1/11 + 1/13 – 1/15 + 1/17... + 1/0 - 1/0 + 1/0 - 1/0 ...

Sure, it has some terms that alternate between +∞ and -∞, but we can arrange them so that they cancel each other out:

π/4 = 1/1 – 1/3 + 1/5 - 1/7 + 1/9 – 1/11 + 1/13 – 1/15 + 1/17... + (1/0 - 1/0) + (1/0 - 1/0) ...

So that allows us to approximately calculate π/4 by just adding up some of the most important terms in the series, while letting the infinite terms cancel each other out:

π/4 ≈ 1/1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + 1/13 - 1/15 = 0.7542679...
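The spirit of this bookkeeping trick can be shown with a toy Python sketch (my own illustration of the analogy, not actual renormalization): summing the terms naively lets the infinities poison the result, while pairing and cancelling them first leaves only the finite answer.

```python
import math

# The first eight finite terms of the Leibniz series for pi/4.
finite = [(-1) ** k / (2 * k + 1) for k in range(8)]

# The troublesome "+1/0 - 1/0" terms, modeled here as +inf and -inf.
divergent = [math.inf, -math.inf, math.inf, -math.inf]

# Naive left-to-right summation: inf - inf is undefined, so we get nan.
naive = sum(finite + divergent)
print(math.isnan(naive))  # True

# "Renormalized" summation: cancel each (+inf, -inf) pair to zero
# before adding anything up, so only the finite terms contribute.
cancelled_pairs = [0.0 for _ in zip(divergent[0::2], divergent[1::2])]
renormalized = sum(finite) + sum(cancelled_pairs)
print(renormalized)  # ~0.7542679, the eight-term sum above
```

The order of operations is the whole game: cancel first, then sum, and the finite physics survives.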

The main reason for this battle with infinities in quantum field theories is that they model the fundamental particles as point sources with a dimension of zero, and therefore, a zero extension in space. This gets us into trouble even in classical electrodynamics because the electric field is defined as:

E ~ q/R²

where q is the amount of electric charge and R is the distance from it. The equation states that as R goes to 0 the electric field E goes to +∞, so at very small distances the electric field becomes unmanageable.
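A quick numerical sketch (arbitrary units, my own illustration) shows how the point-charge field grows without bound as R shrinks:

```python
# E ~ q / R**2: halving the distance quadruples the field,
# and as R -> 0 the field grows without bound.
q = 1.0  # charge in arbitrary units
for R in [1.0, 0.1, 0.01, 0.001]:
    print(R, q / R ** 2)
```

Each factor of 10 closer to the point charge makes the field 100 times stronger, which is exactly the divergence that plagues point-particle theories.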

Getting back to the large number of fundamental particles in the Standard Model, we now find that the current state of particle physics is very much like the state of physics back in the year 1900, with a Periodic Table of fundamental elements that had been worked out by the chemists with much hard work during the 19th century. At that time, the question naturally was what were these fundamental elements made up of, and what made them stick together into compounds and molecules the way they did? Like the Standard Model of particle physics, it seemed like the Periodic Table of the elements was just way too complicated. There had to be some more fundamental theory that explained what all of those fundamental elements were made of and how they worked together to form compounds and molecules. Later in the 20th century that explanation was provided by the atomic theory of quantum mechanics (1926) and from some observational data from the low-energy atom smashers of the day. All of that revealed that the elements were made of a central nucleus consisting of protons and neutrons surrounded by clouds of electrons in a quantum mechanical manner. Luckily, in the 20th century we had the necessary technology required to create the low-energy atom smashers that verified what quantum mechanics had predicted.

Figure 2 – Early in the 20th century physics was able to figure out what the fundamental elements of the Periodic Table were made of using the low-energy atom smashers of the day that validated what quantum mechanics had predicted.

String theory got started in the late 1960s as an explanation for the strong nuclear force, but since quantum field theory did such a great job of that, work in the field was abandoned until the early 1980s. Then in 1984 the first string theory revolution took place in physics when it was found that string theory solved a key mathematical consistency problem - the cancellation of certain anomalies - which led theorists to think that string theory might be a candidate to explain the Standard Model. The basic idea was that the fundamental particles were actually made of very small vibrating strings, and that the large number of fundamental particles could all be generated by strings in different vibrational modes.

Figure 3 – String theory maintains that the fundamental particles of the Standard Model all arise from strings in different vibrational modes.

Because in string theory the fundamental particles of the Standard Model are made from vibrating strings and no longer have a dimension of zero, the difficulties with infinities vanished. String theory also provided for the generation of particles called gravitons that could carry the gravitational force, and that plugged the big hole in the traditional Standard Model that only covered the electromagnetic, strong nuclear and weak nuclear forces. So it seemed that string theory was a very promising way to fix the problems of the Standard Model. However, string theory also came with some problems of its own. For example, the vibrating strings had to exist in a 10-dimensional world. The Universe as we know it only has 4 dimensions - 3 spatial dimensions and one dimension of time. String theorists proposed that the unseen dimensions were in fact present, but that they were so small that we could not detect them with our current level of technology. Originally it was also hoped that a unique mathematical framework would emerge for string theory that would yield the Standard Model and all 18 of its numerical parameters. The quantum field theories of the Standard Model would then be found to be low-energy approximations of this string theory mathematical framework. However, that did not happen. What did happen was that a unique mathematical framework for string theory was never developed. This was because it was found that there were nearly an infinite number of possible geometries for the 10-dimensional spaces of string theory, and each of those geometries profoundly affected what the vibrating strings would produce. 
Now although string theory never really produced a unique mathematical framework that yielded the Standard Model and its 18 parameters, the fact that it now seemed possible that some form of string theory could produce just about any desired result, meant that string theory would probably never have much predictive capability, and thus would not be falsifiable. Instead, string theorists proposed that string theory now offered a cosmic landscape of possible universes - see Leonard Susskind’s The Cosmic Landscape (2006).

This idea of a cosmic landscape also goes hand in hand with some of the current thoughts in cosmology, which contend that our Universe is just a single member of an infinite multiverse with no beginning and no end. In such a model, the multiverse endures forever and has always existed in a state of self-replication. In 1986 Andrei Linde formalized this with his Eternal Chaotic Inflation model, which proposes that the multiverse is in an unending state of inflation and self-replication that is constantly generating new universes where inflation ceases. When inflation ceases in a portion of the multiverse, a tiny isolated universe is formed with its own vacuum energy and a unique topology for its 10 dimensions. The way strings vibrate in the newly formed 10-dimensional universe then determines the Standard Model of that universe - see The Software Universe as an Implementation of the Mathematical Universe Hypothesis for details. That solves the problem of the fine-tuning of the Standard Model of our Universe, which seems to be fine-tuned so that intelligent beings can exist to observe it. In such a model of the multiverse, our Standard Model has the parameters that it has because of a selection bias known as the Anthropic Principle. If there are an infinite number of universes in the multiverse, each with its own particular way of doing string theory, then intelligent beings will only find themselves in those universes that can sustain intelligent beings. For example, it has been shown that if the parameters of our Standard Model were slightly different, our Universe would not be capable of supporting intelligent beings like ourselves, so we would not be here contemplating such things. Think of it this way: the mathematical framework of Newtonian mechanics and Newtonian gravity lets us calculate how the planets move around the Sun, but it does not predict that the Earth will be found to be 93 million miles from the Sun.
The reason the Earth is 93 million miles from the Sun and not 33 million miles is that, if the Earth were 33 million miles from the Sun, we would not be here wondering about it. That is just another example of a selection bias in action, similar to the Anthropic Principle. The Cosmic Landscape model does fit nicely with Andrei Linde’s Eternal Chaotic Inflation model in that the Eternal Chaotic Inflation model does offer up the possibility of a multiverse composed of an infinite number of universes, all running with different kinds of physics. And Eternal Chaotic Inflation gains support from the general Inflationary model that has quite a bit of supporting observational data from CBR (Cosmic Background Radiation) studies that seem to confirm most of the predictions made by the general idea of Inflation. Thus Eternal Chaotic Inflation seems like a safe bet because, theoretically, once Inflation gets started, it is very hard to stop. However, Eternal Chaotic Inflation does not need string theory to generate a Cosmic Landscape. It could do so with any other theory that explains what happens when a new universe forms out of the multiverse with some particular vacuum energy.

String Theory Difficulties
It has now been more than 30 years since the first string theory revolution of 1984 unfolded. But during all of those decades string theory has not been able to make a single verifiable prediction, and has not even come together to form a single mathematical framework that explains the Standard Model of particles that we do observe. Despite many years of effort by the world's leading theoretical physicists, string theory still remains a promising model trying to become a theory. In defense of the string theorists, we do have to deal with the problem of what theoretical physics does when it has outrun the relatively puny level of technology that we have amassed over the past 400 years. Up until recently, theoretical physics has always had the good fortune of being able to be tested and validated by observational and experimental data that could be obtained with comparatively little cost. But there is no reason why that should always be so, and perhaps we have finally come up against that technological limit. However, the most disturbing thing about string theory is not that it has failed to develop into a full-blown theory that can predict things that can be observed. That might just be theoretical physics running up against the limitations of our current state of technology. The most disturbing aspect of string theory is sociological in nature. It seems that over the past 30 years string theory has become a faith-based endeavor with a near-religious zeal that has suppressed nearly all other research programs in theoretical physics that attempt to explain the Standard Model or attempt to develop a theory of quantum gravity. In that regard string theory has indeed become a meme-complex of its own, bent on replicating at all costs, and like most religious meme-complexes, the string theory meme-complex does not look kindly upon heretics who question the memes within its meme-complex.
In fact, it is nearly impossible these days to find a job in theoretical physics if you are not a string theorist. Both The Trouble with Physics and Not Even Wrong go into the gory details of the politics in academia regarding the difficulties of obtaining a tenured position in theoretical physics these days. All IT professionals can certainly relate to this based upon their own experiences with corporate politics. Since both academia and corporations have adopted hierarchical power structures, it is all just an example of Hierarchiology in action. In order to get along, you have to go along, so things that do not make sense, but are a part of the hierarchical groupthink must be embraced if one is to succeed in the hierarchy.

So theoretical physics now finds itself in a very strange state for the first time in 400 years because string theory is seemingly like a deity that leaves behind no empirical evidence of its existence, and must be accepted based upon faith alone. That is a very dangerous thing for physics because we already know that the minds of human beings evolved to believe in such things. Mankind already has a large number of competing deities based upon faith, many of which have already gone extinct, proving that they cannot all be real in the long run. Adding one more may not be the best thing for the future of theoretical physics. Granted, most programs in theoretical physics must necessarily begin as speculative conjectures, and should be given the latitude to explore the unknown initially unencumbered by the limitations of the available empirical data of the day, and string theory is no exception. After all, something like string theory may turn out to be the answer. We just don't know at this time. But we do know that for the good of science, we should not allow string theory to crowd out all other competing research programs.

Déjà vu all over again
It seems that theoretical physics is currently "stuck" because it is lacking the observational and experimental data that it needs to proceed. Both Peter Woit and Lee Smolin suggest that what theoretical physics needs today is to start using other means to gain the data that it needs to progress. For example, perhaps going back to observing high-energy cosmic rays would be of use. Some protons slam into the upper atmosphere of the Earth with the energy of a baseball pitched at 100 miles/hour or the energy of a bowling ball dropped on your toe from waist high. Such protons have energies that are many orders of magnitude greater than the energy of the proton collisions at the LHC. Using the CBR (Cosmic Background Radiation) photons that have traveled for 13.8 billion years since the Big Bang might be of use too, to bring us closer to the very high energies of the early universe.

Being theoretically "stuck" because the way you normally collect data no longer suffices reminds me very much of the state of affairs that classical geology found itself in back in 1960, before the advent of plate tectonics. I graduated from the University of Illinois in 1973 with a B.S. in physics, only to find that the end of the Space Race and a temporary lull in the Cold War had left very few prospects for a budding physicist. So on the advice of my roommate, a geology major, I headed up north to the University of Wisconsin in Madison to obtain an M.S. in geophysics, with the hope of obtaining a job with an oil company exploring for oil. These were heady days for geology because we were just emerging from the plate tectonics revolution that totally changed the fundamental models of geology. The plate tectonics revolution peaked during the five-year period 1965 – 1970. Having never taken a single course in geology during all of my undergraduate studies, I was accepted into the geophysics program with many deficiencies in geology, so I had to take many undergraduate geology courses to get up to speed in this new science. The funny thing was that the geology textbooks of the time had not yet caught up with the plate tectonics revolution of the previous decade, so they still embraced the "classical" geological models of the past, which now seemed a little bit silly in light of the new plate tectonics model. But this was also very enlightening. It was like looking back at the prevailing thoughts in physics prior to Newton or Einstein. What the classical geological textbooks taught me was that over the course of several hundred years, the geologists had figured out what had happened, but not why it had happened. Up until 1960 geology was mainly an observational science relying upon the human senses of sight and touch, and by observing and mapping many outcrops in detail, the geologists had figured out how mountains had formed, but not why.

In classical geology, most geomorphology was thought to arise from local geological processes. For example, in classical geology, fold mountains formed off the coast of a continent when a geosyncline formed because the continental shelf underwent a dramatic period of subsidence for some unknown reason. Then very thick layers of sedimentary rock were deposited into the subsiding geosyncline, consisting of alternating layers of sand and mud that turned into sandstones and shales, intermingled with limestones that were deposited from the carbonate shells of dead sea life floating down or from coral reefs. Next, for some unknown reason, the sedimentary rocks were laterally compressed into folded structures that slowly rose from the sea. More horizontal compression then followed, exceeding the ability of the sedimentary rock to deform plastically, resulting in thrust faults that uplifted blocks of sedimentary rock even higher. As compression continued, some of the sedimentary rocks were forced down to great depths within the Earth, where they came under great pressures and temperatures. These sedimentary rocks were then far from the thermodynamic equilibrium of the Earth’s surface where they had originally formed, and thus the atoms within recrystallized into new metamorphic minerals. At the same time, for some unknown reason, huge plumes of granitic magma rose from deep within the Earth’s interior as granitic batholiths. Then over several hundred million years, the overlying folded sedimentary rocks slowly eroded away, revealing the underlying metamorphic rocks and granitic batholiths, allowing human beings to cut and polish them into pretty rectangular slabs for the purpose of slapping them up onto the exteriors of office buildings and onto kitchen countertops.
In 1960, classical geologists had no idea why the above sequence of events, producing very complicated geological structures, seemed to happen over and over again many times over the course of billions of years. The most worrisome observational fact had to do with the high levels of horizontal compression that were necessary to produce the folding and faulting. The geologists of the time were quite comfortable with rock units moving up and down thousands of feet due to subsidence and uplift, but they did not have a good explanation for rock units moving sideways by many miles, and that was necessary to explain the horizontal compression that caused the folding and faulting of strata. One idea was that after geosynclines subsided, they were uplifted, and the sedimentary rock they contained then slipped backward against the continental strata, causing the horizontal compression that led to the folding and faulting, but that seemed a bit far-fetched, and it still left unanswered the question of where all of this subsidence and uplift came from in the first place. Fortunately, with the advent of plate tectonics (1965 – 1970), all was suddenly revealed. It was the lateral movement of plates on a global scale that made it all happen. With plate tectonics, everything finally made sense. Fold mountains did not form from purely local geological factors in play. There was the overall controlling geological process of global plate tectonics making it happen. For a quick review of this process, please take a look at the short video down below:

Fold Mountains

Figure 39 – Fold mountains occur when two tectonic plates collide. A descending oceanic plate first causes subsidence offshore of a continental plate, which forms a geosyncline that accumulates sediments. When all of the oceanic plate between two continents has been consumed, the two continental plates collide and compress the accumulated sediments in the geosyncline into fold mountains. This is how the Himalayas formed when India crashed into Asia.

Now the plate tectonics revolution was really made possible by the availability of geophysical data. It turns out that most of the pertinent action of plate tectonics occurs under the oceans, at the plate spreading centers and subduction zones, far removed from the watchful eyes of geologists in the field with their notebooks and trusty hand lenses. Geophysics really took off after World War II, when universities were finally able to get their hands on cheap war surplus gear. By mapping variations in the Earth’s gravitational and magnetic fields and by conducting deep oceanic seismic surveys, geophysicists were finally able to figure out what was happening at the plate spreading centers and subduction zones. Actually, the geophysicist and meteorologist Alfred Wegener had figured this all out in 1912 with his theory of Continental Drift, but at the time Wegener was ridiculed by the geological establishment. You see, Wegener had been an arctic explorer and had noticed that sometimes sea ice split apart, like South America and Africa, only later to collide again to form mountain-like pressure ridges. Unfortunately, Wegener froze to death in 1930 trying to provision some members of his last expedition to Greenland, never knowing that one day he would finally be vindicated.

The sordid details of Alfred Wegener's treatment by the geological community of the day, and the struggle that plate tectonics went through for acceptance by that community, show the value of tolerating differing viewpoints when a science is theoretically "stuck". They also show the value of seeking empirical data from non-traditional sources when the traditional sources of data have been exhausted. I think there is a valuable lesson here for theoretical physics to heed, and for IT professionals as well when confronted with similar issues. The key point to remember is that it is always very dangerous to unquestioningly believe in things. Instead, we should maintain a level of confidence in things that is never quite 100%, and always keep a healthy level of skepticism that we might just have it all wrong. For as Richard Feynman always reminded us, "The first principle is that you must not fool yourself, and you are the easiest person to fool."

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston