Tuesday, February 10, 2015

Introduction to Softwarephysics

Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance, and support based upon concepts from physics, chemistry, biology, and geology that I have been using on a daily basis for over 35 years as an IT professional. For those of you not in the business, IT is short for Information Technology, commercial computer science. The purpose of softwarephysics is to explain why IT is so difficult, to suggest possible remedies, and to provide a direction for thought. If you are an IT professional, general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology then softwarephysics might be of interest to you, if not in an entirely serious manner, perhaps at least in an entertaining one.

From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT, and spent about 20 years in development. For the past 14 years, I have been in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software by comparing software to how things behave in the physical Universe. Softwarephysics depicts software as a virtual substance and relies upon our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 70 years, through the uncoordinated efforts of over 50 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily upon two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based upon real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models upon which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields.

Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In Newton’s Principia (1687) he outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flack for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.
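Stated in modern notation (using the gravitational constant G, a constant Newton himself never wrote down), his law of gravitation reads:

```latex
F = \frac{G \, m_1 \, m_2}{r^2}
```

The formula makes quantitative predictions about how objects are observed to move, while remaining completely silent about what gravity "really" is - positivism in action.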

The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving less than 10% of the speed of light and which are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics, and for very fast things, or for things in strong gravitational fields, we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based upon models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide for an effective theory of software behavior that makes useful predictions of software behavior that are applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave and try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.

GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately. Doing that requires extremely accurate timing measurements. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips onboard the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are further from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day due to the weaker gravitational field out there. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, position errors of roughly 6 miles (10 kilometers) per day would accrue.
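The relativistic clock corrections quoted above can be checked with a short back-of-the-envelope calculation. The sketch below assumes standard values for Earth's gravitational parameter and a GPS orbital radius of roughly 26,600 km from Earth's center, and uses the usual first-order approximations for the two rate shifts:

```python
import math

# Physical constants and assumed orbital parameters
C = 2.998e8              # speed of light, m/s
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6        # mean Earth radius, m
R_GPS = 2.6571e7         # GPS orbital radius from Earth's center, m
SECONDS_PER_DAY = 86400

# Orbital speed of a circular orbit: v = sqrt(GM/r)
v = math.sqrt(GM / R_GPS)

# Special relativity: a moving clock runs slow by about v^2/(2c^2)
sr_loss_us = (v**2 / (2 * C**2)) * SECONDS_PER_DAY * 1e6

# General relativity: a clock higher in the gravitational well runs fast
# by about (GM/c^2) * (1/R_earth - 1/r_orbit)
gr_gain_us = (GM / C**2) * (1 / R_EARTH - 1 / R_GPS) * SECONDS_PER_DAY * 1e6

net_us = gr_gain_us - sr_loss_us
print(f"SR loss:  {sr_loss_us:.1f} microseconds/day")
print(f"GR gain:  {gr_gain_us:.1f} microseconds/day")
print(f"Net gain: {net_us:.1f} microseconds/day")
```

Running this reproduces the figures quoted above to within rounding: roughly 7 microseconds per day lost to velocity, 46 gained to the weaker gravitational field, and a net gain near 39 microseconds per day.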
The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics, or QED (1948), which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!

So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based upon completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based upon models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark upon your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.

If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.

Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 20 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog an unintended consequence arose in my mind as I became profoundly aware of the enormity of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.

But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For example, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 70 years has closely followed the same path as life on Earth over the past 4.0 billion years, in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 70 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and also of Freeman Dyson’s two-stage theory of the origin of life on Earth. 
In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.

Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10^9 Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed-down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10^43 Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. Finally, in Model-Dependent Realism - A Positivistic Approach to Realism we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory.

The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that most computer simulations of the physical Universe suffer. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. A second advantage is that the Software Universe is a huge simulation, far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call upon the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.

The Impact of Self-Replicating Information Upon the Planet
One of the key findings of softwarephysics is concerned with the magnitude of the impact upon the planet of self-replicating information.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Note that the self-replicating autocatalytic metabolic pathways of organic molecules, RNA and DNA have become so heavily intertwined over time that I now simply refer to them collectively as the “genes”. Over the past 4.0 billion years, the surface of the Earth has been totally reworked by three forms of self-replicating information – the genes, memes and software, with software rapidly becoming the dominant form of self-replicating information on the planet. For more on this see:

A Brief History of Self-Replicating Information
How to Use an Understanding of Self-Replicating Information to Avoid War
How to Use Softwarephysics to Revive Memetics in Academia
Is Self-Replicating Information Inherently Self-Destructive?
Is the Universe Fine-Tuned for Self-Replicating Information?
Self-Replicating Information

Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact upon one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what’s it all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely a working hypothesis still in the works. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:

MoneyPhysics – my impression of the 2008 world financial meltdown.

The Fundamental Problem of Everything – If you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!

What’s It All About? and Genes, Memes and Software – my current working hypothesis on what’s it all about.

How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.

Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.

Some Specifics About These Postings
The postings in this blog are a supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton upon which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read in reverse order from the oldest to the most recent, beginning with my original posting on SoftwarePhysics. In addition, several universities also now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". At this point we will finish up with my original plan for this blog on softwarephysics with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.

For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because in order to get the Introduction to Softwarephysics listed as the first post in the context root of http://softwarephysics.blogspot.com/ I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of http://softwarephysics.blogspot.com/. The side effect of all this is that the real posting date of posts is the date that appears on the post that you get when clicking on the Newer Post link at the bottom left of the posting webpage.

SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Google Drive. Please note that some of the formulas do not render properly, especially exponents which do not display as superscripts, so please use your imagination.

Part 1 - Part 1 of the original PowerPoint document.
Part 2 - Part 2 of the original PowerPoint document.
Entropy – A spreadsheet referenced in Part 1
– A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Steve Johnston

Saturday, July 05, 2014

Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse

Back in 1979, my original intent for softwarephysics was to help myself, and the IT community at large, to better cope with the daily mayhem of life in IT by applying concepts from physics, chemistry, biology, and geology to the development, maintenance, and support of commercial software. However, over the years I have found that this scope was far too limiting, and that softwarephysics could also be applied to many other knowledge domains, most significantly to biology and astrobiology, and similarly, that softwarephysics could also draw additional knowledge from other disciplines as well, such as memetics. In this posting, I would like to expand the range of softwarephysics further into the domain of management theory by exploring the science of Hierarchiology as it pertains to IT professionals, but again the concepts of Hierarchiology can certainly be applied to all human hierarchies wherever you might find them.

Hierarchiology is the scientific study of human hierarchies. Since nearly all human organizations are based upon hierarchies, it is quite surprising that the science of Hierarchiology was not developed until 1969. The late Professor Laurence Johnston Peter (September 16, 1919 - January 12, 1990) (sadly, not a known relation to myself) is credited as the founding father of the science of Hierarchiology. Just as Isaac Newton first introduced Newtonian mechanics to the world in his Principia (1687), Professor Peter first introduced the science of Hierarchiology with the publication of The Peter Principle: Why Things Always Go Wrong (1969). The Peter Principle can best be defined in Professor Peter’s own words as:

The Peter Principle: In a hierarchy every employee tends to rise to his level of incompetence ... in time every post tends to be occupied by an employee who is incompetent to carry out its duties ... Work is accomplished by those employees who have not yet reached their level of incompetence.

By this Professor Peter meant that in a hierarchical organizational structure, the potential of an employee for a promotion is normally based on their performance in their current job. Thus a competent engineer is likely to be promoted to become a manager of engineers, while an incompetent engineer will likely remain an engineer. Over time, this results in employees being promoted to their highest level of competence, and potentially to a level in which they are no longer competent, referred to as their "level of incompetence". The employee then has no further chance for promotion, and will have reached their final level within the hierarchical organization. Amazingly, employees who have reached their “level of incompetence” are retained because letting them go would:

"violate the first commandment of hierarchical life with incompetent leadership: [namely that] the hierarchy must be preserved".

Indeed the tenets of softwarephysics would maintain that since hierarchical organizations are simply another form of self-replicating information they are endowed with one paramount trait, the ability to survive.
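The promotion dynamics described above are easy to caricature with a toy Monte Carlo model. The sketch below assumes a five-level hierarchy and, in the spirit of the Peter Principle, assumes that competence at one level says nothing about competence at the next; the depth, competence probability, and population size are purely illustrative:

```python
import random

random.seed(42)

LEVELS = 5          # assumed depth of the hierarchy
P_COMPETENT = 0.5   # assumed chance an employee is competent at any given level
N = 100_000         # number of simulated careers

settled_incompetent = 0
for _ in range(N):
    level = 0
    # Promote while competent at the current level - Peter's rule that
    # promotion is based on performance in the *current* job.
    while level < LEVELS and random.random() < P_COMPETENT:
        level += 1
    # A career ends either at the top of the hierarchy (still competent)
    # or at the employee's first level of incompetence.
    if level < LEVELS:
        settled_incompetent += 1

print(f"{settled_incompetent / N:.1%} of careers end at a level of incompetence")
```

With these assumed parameters, roughly 1 - 0.5^5, or about 97%, of settled careers end at a level of incompetence, illustrating Professor Peter's claim that, in time, posts tend to be occupied by employees incompetent to carry out their duties, while the work is done by those still in transit upward.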

Now for the modern reader, the Peter Principle might seem to be rather quaint, and certainly not an accurate description of modern hierarchies. That is because the Peter Principle was originally developed to explain the American hierarchical organizations of the post-war 1950s and 1960s. In those days, the power and influence of a member of a hierarchical organization were solely based upon the number of people reporting to the member of the hierarchy. The number of competent subordinates that a manager might have was totally irrelevant, so long as there were a sufficient number of competent subordinates who had not yet reached their “level of incompetence” to guarantee that the subordinate organization could perform its prime objectives. Efficiency was not a concern in those days because in the 1950s and 1960s America had no foreign competition, since the rest of the industrial world had effectively destroyed itself during World War II.

This all changed in the 1980s with the rise of foreign competition, principally from the once-defeated Germany and Japan. In response, corporate America invented the concept of “downsizing”. Downsizing allowed hierarchies to clear out the dead wood of employees who had reached their “level of incompetence” through the compulsory dismissal of a significant percentage of each department. These downsizing activities were reluctantly carried out by the HR department of the organization, which cast HR as the villain rather than the management chain, whose hands were seemingly forced by external factors beyond their control. This reduced the hard feelings amongst the survivors in a hierarchy after a successful “downsizing”. And since all departments were required to reduce staff equally, by say 15%, no single manager lost status by a drop in head count, because each manager in the hierarchy equally lost 15% of their subordinates, so the hierarchy was preserved intact. The elimination of incompetent employees was further enhanced by globalization over the past several decades. With globalization it became possible to “offshore” whole departments of an organization at a time and dispatch employees en masse, both the competent and the incompetent, without threatening the hierarchy, because the number of subordinates might actually increase as work was moved to foreign countries with emerging economies, but significantly lower wage scales.

Now the nature of hierarchies may have substantially changed since Professor Peter’s time, but I believe that there are some enduring characteristics of hierarchies that do not change with time because they are based upon the fundamentals of memetics. All successful memes survive because they adapt themselves to two enduring characteristics of human beings:

1. People like to hear what they like to hear.

2. People do not like to hear what they do not like to hear.

Based upon the above observation, I would like to propose:

The Time Invariant Peter Principle: In a hierarchy, successful subordinates tell their superiors what their superiors want to hear, while unsuccessful subordinates try to tell their superiors what their superiors need to hear. Only successful subordinates are promoted within a hierarchy, and eventually, all levels of a hierarchy will only become occupied by successful subordinates who only tell their superiors what their superiors want to hear.

This means that, like the original Peter Principle, the Time Invariant Peter Principle causes organizations to become loaded down with “successful” employees who only tell their superiors what their superiors want to hear. This is not a problem so long as the organization is not faced with any dire problems. After all, if things are moving along smoothly, there is no need to burden superiors with trivial problems. However, covering up problems sometimes does lead to disaster. For example, on the evening of April 14, 1912, it would have been ill-advised for a subordinate to tell the captain of the Titanic that, given the reports of icebergs in the area coming in over the wireless, it might be wise to reduce the Titanic’s speed from its top speed of 21 knots. Such a suggestion would certainly not have advanced the career of the captain’s subordinate, especially if his warning had been heeded, and the Titanic had not hit an iceberg and sunk. The subordinate would simply have been dismissed as a worrisome annoyance with a bad attitude, and certainly not a team player worthy of promotion.
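The promotion dynamic of the Time Invariant Peter Principle can even be sketched as a toy simulation. The little model below is purely my own illustration (the level sizes, time span, and 50/50 hiring mix are all made-up assumptions): each employee either tells superiors what they want to hear ("successful") or what they need to hear ("unsuccessful"), and every vacancy is filled by a "successful" subordinate whenever one is available.

```python
import random

def simulate_hierarchy(levels=5, span=4, years=30, seed=42):
    """Toy model of the Time Invariant Peter Principle.

    Each employee is marked True if they tell superiors what they
    want to hear ("successful") or False if they tell superiors what
    they need to hear ("unsuccessful"). Each year one vacancy opens
    at every level above the bottom, and it is always filled by a
    "successful" subordinate from the level below, if one exists.
    Returns the fraction of "successful" employees occupying the top
    two levels when the simulation ends.
    """
    random.seed(seed)
    # Level 0 is the CEO; each level below is `span` times larger.
    # Start every level as a 50/50 mix of the two kinds of employees.
    hierarchy = [[random.random() < 0.5 for _ in range(span ** level)]
                 for level in range(levels)]
    for _ in range(years):
        for level in range(levels - 1):
            below = hierarchy[level + 1]
            # Promotions only go to those who say what superiors
            # want to hear, so a vacancy is always filled with True...
            if any(below):
                slot = random.randrange(len(hierarchy[level]))
                hierarchy[level][slot] = True
                # ...and the promoted subordinate is backfilled
                # with a new hire of random disposition.
                below[below.index(True)] = random.random() < 0.5
    top = hierarchy[0] + hierarchy[1]
    return sum(top) / len(top)
```

Run over a few simulated decades, the upper levels converge toward being occupied almost entirely by "successful" subordinates, which is the principle's claim in miniature.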

Some might argue that the Titanic is an overly dramatic example of the Time Invariant Peter Principle in action, and not representative of what actually happens when subordinates simply cast events in a positive light for their superiors. How can that lead to organizational collapse? For that we must turn to Complexity Theory. One of the key findings of Complexity Theory is that large numbers of simple agents, all following a set of very simple rules, can lead to very complex emergent organizational behaviors. This is seen in the flocking of birds, the schooling of fish, and the swarming of insects. For example, large numbers of ants following some very simple rules can produce the very complex emergent organizational behaviors of an entire ant colony. The downside of this is that large numbers of simple agents following simple rules can also lead to self-organized organizational collapse. The 2008 financial collapse offers a prime example, where huge numbers of agents in a large number of different hierarchies, all following the simple rule of the Time Invariant Peter Principle, led to disaster (see MoneyPhysics for more details). Similarly, I personally witnessed the demise of Amoco during the 1990s through a combination of very poor executive leadership coupled with the Time Invariant Peter Principle in action. My suspicion is that the collapse of the Soviet Union on December 26, 1991 was also largely due to the Time Invariant Peter Principle in action over many decades.
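A back-of-the-envelope calculation, again purely my own illustration with a made-up filtering probability, shows how quickly one simple rule compounds across a hierarchy: if each management layer independently passes an unwelcome report upward with probability p, the chance that the report ever reaches the top of an n-layer hierarchy is only p raised to the nth power.

```python
def report_survival(p_pass: float, layers: int) -> float:
    """Probability that an unwelcome report survives `layers`
    independent management layers, each of which passes bad news
    upward with probability `p_pass`."""
    return p_pass ** layers

# Even when each manager passes bad news along half the time,
# almost nothing unwelcome survives a deep hierarchy:
for layers in (1, 3, 5, 8):
    print(layers, report_survival(0.5, layers))
```

With p = 0.5, three layers already suppress seven out of every eight unwelcome reports, and eight layers suppress better than 99.6% of them, which is one way to see how the higher-ups can genuinely not know.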

For the interested, the Santa Fe Institute offers many very interesting online courses on Complexity Theory at:

Complexity Explorer

The study of large-scale organizational collapse is always challenging because of the breadth and scale of large organizations. What happens is that the Time Invariant Peter Principle in such failing organizations leads to a series of fiascos and disasters that slowly eat away at the organization until the organization ultimately collapses on its own. However, trying to add up all of the negative impacts from the large numbers of fiascos and disasters that occur over the span of a decade within a hierarchical organization in decline is nearly impossible. Consequently, it is much easier to focus upon individual disasters as case studies of the Time Invariant Peter Principle in action, and that will be our next topic.

The Challenger Disaster: A Case Study of the Time Invariant Peter Principle in Action
In preparation for this posting, I just finished rereading Richard Feynman’s “What Do You Care What Other People Think?” Further Adventures of a Curious Character (1988), probably for the fourth or fifth time. This book was a Christmas present from my wife, and I periodically reread it because it describes how Richard Feynman, one of the most productive physicists of the 20th century, was able to apply his scientific training to uncover and analyze the absurdities behind the Challenger Disaster. The book is a marvelous description of how the Time Invariant Peter Principle, interacting with the absurd real world of human affairs, can bring us to disaster.

Richard Feynman is most famous for his contributions to QED – Quantum Electrodynamics (1948) and his famous Feynman diagrams. This work ultimately led to the 1965 Nobel Prize in Physics being awarded to Richard Feynman and two other physicists for their work on QED (for more on this see The Foundations of Quantum Computing). Richard Feynman was also a unique and dearly loved teacher of physics. I encourage you to view some of his wonderful lectures on YouTube.

The first half of the book covers some interesting stories from his youth and also the continuing love story of his relationship with his first wife Arlene. Arlene tragically died from tuberculosis while the two of them were stationed at Los Alamos and Feynman was working on the Manhattan Project to develop the first atomic bomb. Richard Feynman married Arlene with the full knowledge that their time together as man and wife would be cut short, after Arlene’s diagnosis of tuberculosis became known. This did not bother the two of them as they proceeded to get married despite the warnings from friends and family to do otherwise - certainly a rare mark of loyalty that is not seen much today. In fact, the quote in the title of the book came from Arlene.

In the first half of the book Richard Feynman goes on to describe how his father had decided that Feynman would become a scientist from the day he was born. So in Feynman’s childhood his father carefully taught Richard to have no respect for authority whatsoever, but to only have respect for knowledge. So later in life Feynman did not really care who or what you were, he only cared about your story or hypothesis. If your story made sense to him and stood up to scrutiny, then Feynman would have respect for your story or hypothesis, but otherwise watch out! Feynman would quickly reduce your story or hypothesis to a pile of rubble no matter who or what you were, but all during the process, he would still show respect for you as a fellow human being.

It all began shortly after the explosion of the space shuttle Challenger on January 28, 1986. Feynman received a phone call from William Graham, a former Caltech student of Feynman’s and the recently appointed head of NASA. Graham had just been sworn in as the head of NASA on November 25, 1985, less than two months prior to the Challenger Disaster, and asked Feynman if he would agree to join the Presidential Commission on the Space Shuttle Challenger Accident that was to be headed by the former Secretary of State William Rogers. Originally, Feynman did not want to have anything to do with a Washington commission, but his wife Gweneth wisely persuaded him to join saying, “If you don’t do it, there will be twelve people, all in a group, going around from place to place together. But if you join the commission, there will be eleven people – all in a group, going around from place to place together – while the twelfth one runs around all over the place, checking all kinds of unusual things. There probably won’t be anything, but if there is, you’ll find it. There isn’t anyone else who can do that like you can.” Gweneth certainly got the part about Feynman running around all over the place on his own trying to figure out what had actually gone wrong, but she did not get the part about the other Commission members doing the same thing as a group. What Feynman did find when he got to Washington was that William Rogers wanted to conduct a “proper” Washington-style investigation like you see on CNN. This is where a bunch of commissioners or congressmen, all sitting together as a panel, swear in a number of managers and ask them probing questions that the managers then all try to evade. The managers responsible for the organization under investigation are all found to have no knowledge of any wrongdoing, and certainly would not condone any wrongdoing if they had had knowledge of it. We have all seen that many times before.
And as with most Washington-based investigations, all of the other members of the Commission were also faced with a conflict of interest, because they all had strong ties to NASA or to what NASA was charged with doing. When Feynman initially objected to this process, William Rogers confessed to him that the Commission would probably never really figure out what had gone wrong, but that they had to go through the process just the same for appearances’ sake. So true to his wife’s prediction, Feynman then embarked upon a one-man investigation into the root cause of the Challenger Disaster. Due to his innate lack of respect for authority, Feynman decided to forgo discussions with NASA Management, and instead to focus only on the first-line engineers and technicians who got their hands dirty in the daily activities of running the space shuttle business. What Feynman found was that, due to the Time Invariant Peter Principle, many of the engineers and technicians who actually touched the space shuttle had been bringing forth numerous safety problems with the shuttle design for nearly a decade, but these safety concerns routinely never rose up through the organization. Feynman also found that many of the engineers and technicians had initially been afraid to speak frankly with him. They were simply afraid to speak the truth.

This was a totally alien experience for Richard Feynman because he was used to the scientific hierarchies of academia. Unfortunately, scientific hierarchies are also composed of human beings, and therefore are also subject to the Time Invariant Peter Principle, but fortunately there is a difference. Most organizational hierarchies are based upon seeking favor. Each layer in the hierarchy is seeking the favor of the layer directly above it, and ultimately, the whole hierarchy is seeking the favor of something. For corporations, the CEO of the organization is seeking the favor of Wall Street analysts, large fund managers and of individual investors. The seeking of favor necessarily requires the manipulation of facts to cast them in a favorable light. Scientific hierarchies, on the other hand, are actually trying to seek out knowledge and determine the truth of the matter, as best as we can determine the truth of the matter. Recently, I reread David Deutsch’s The Fabric of Reality (1997), another book that I frequently reread to maintain sanity. In the book Deutsch explains the difference between scientific hierarchies that seek knowledge and normal hierarchies that seek favor by describing what happens at physics conferences. What happens at lunchtime during a typical physics conference is that, like in the 7th grade, all of the preeminent physicists of the day sit together at the “cool kids’ table” for lunch. But in the afternoon, one finds that when one of the most preeminent physicists in the world gives a presentation, one can frequently find the lowliest of graduate students asking the preeminent physicist to please explain why the approximation in his last equation is justifiable under the conditions of the problem at hand. Deutsch wisely comments that he cannot imagine an underling in a corporate hierarchy similarly challenging the latest business model of his CEO in a grand presentation.

Now it turns out that Feynman did have one ally on the Commission, Air Force General Donald Kutyna. General Kutyna had been in contact with an unnamed astronaut at NASA who put him on to the fact that the space shuttle had a potentially fatal flaw with the rubber O-rings used to seal three joints in the solid rocket boosters (SRBs) that were manufactured by Morton Thiokol. These joints were each sealed by two 37-foot-long rubber O-rings, each having a cross-sectional diameter of only 1/4 of an inch. The purpose of the O-rings was to seal the joints when the SRBs were fired. Because the joints were three times thicker than the steel walls of the SRBs, the walls of the SRBs tended to bulge out a little under the pressures generated by the burning fuel when the SRBs were lit. The bulging of the SRB walls caused the joints to bend slightly outward, and it was the job of the rubber O-rings to expand and fill the gap when the SRB walls bulged out, so that the joints maintained their seal and no hot gases from the burning solid fuel in the SRBs could escape (see Figure 1). At the time of the launch it was well known by NASA and Morton Thiokol Management that these O-rings suffered from burning and erosion by hot blow-by gases from the burning of the solid rocket fuel, particularly when the shuttle was launched at lower temperatures. This was known because the SRBs were jettisoned from the shuttle after their fuel had been expended and splashed down into the ocean, to be later recovered, inspected, and reused for later flights. On the morning of January 28, 1986, the Challenger was launched at a temperature of 28 to 29 °F, while the previous coldest shuttle launch had been at 53 °F.

Figure 1 – The two rubber O-rings of the SRB were meant to expand when the walls of the SRB bulged out so that hot burning gases could not escape from the SRB joints and cause problems.

Figure 2 - On the right we see that the two O-rings in an SRB joint are in contact with the clevis (the female part of the joint) when the SRB has not been lit. On the left we see that when an SRB is lit and the pressure in the SRB causes the walls to bulge out, a gap between the O-rings and the clevis can form if the O-rings are not resilient enough to fill the gap. Richard Feynman demonstrated that cold O-rings at 32 °F are not resilient.

Figure 3 - How the O-rings failed.

Figure 4 – A plume of burning SRB fuel escaping from the last field joint on the right SRB eventually burns through the supports holding the SRB onto the Challenger.

Figure 5 – When the right SRB breaks free of the Challenger, it slams into the large external tank holding the liquid oxygen and hydrogen used to power the Challenger’s main engines. This causes a disastrous explosion.

Figure 6 – Richard Feynman demonstrates that the O-ring rubber of the SRB joints is not resilient at 32 °F. The Challenger was launched at a temperature of about 28 – 29 °F. The previous lowest temperature for a shuttle launch had been 53 °F.

Figure 7 – After the Disaster, several changes were made to the field joints between segments of the SRBs, including the addition of a third O-ring. This finally fixed the decades-old problem that had been ignored by NASA Management all along.

For a nice montage of the events surrounding the Challenger Disaster see the following YouTube link that shows Richard Feynman questioning a NASA Manager and then demonstrating to him that what he was saying was total …..


For a nice synopsis of all of the events, please see this Wikipedia link at:


The Truth About the Time Invariant Peter Principle
Now that we have seen a case study of the Time Invariant Peter Principle in action, it is time for all of us to fess up. Everybody already knows about the Time Invariant Peter Principle because, as human beings, we all live within hierarchical organizations. We just do not talk about such things. In fact, the Time Invariant Peter Principle actually prevents work teams from talking about the Time Invariant Peter Principle. So what I am proposing here is nothing new. It is as old as civilization itself. Now as Richard Feynman used to remind us, “The most important thing is to not fool yourself, because you are the easiest one to fool.” So the important thing about the Time Invariant Peter Principle is not discovering it for yourself; the important thing is to articulate it in difficult times, and that takes some courage, as Richard Feynman demonstrated in “What Do You Care What Other People Think?”:

I invented a theory which I have discussed with a considerable number of people, and many people have explained to me why it’s wrong. But I don’t remember their explanations, so I cannot resist telling you what I think led to this lack of communication in NASA.

When NASA was trying to go to the moon, there was a great deal of enthusiasm: it was a goal everyone was anxious to achieve. They didn’t know if they could do it, but they were all working together.

I have this idea because I worked at Los Alamos, and I experienced the tension and the pressure of everybody working together to make the atomic bomb. When somebody’s having a problem — say, with the detonator — everybody knows that it’s a big problem, they’re thinking of ways to beat it, they’re making suggestions, and when they hear about the solution they’re excited, because that means their work is now useful: if the detonator didn’t work, the bomb wouldn’t work.

I figured the same thing had gone on at NASA in the early days: if the space suit didn’t work, they couldn’t go to the moon. So everybody’s interested in everybody else’s problems.

But then, when the moon project was over, NASA had all these people together: there’s a big organization in Houston and a big organization in Huntsville, not to mention at Kennedy, in Florida. You don’t want to fire people and send them out in the street when you’re done with a big project, so the problem is, what to do?

You have to convince Congress that there exists a project that only NASA can do. In order to do so, it is necessary — at least it was apparently necessary in this case — to exaggerate: to exaggerate how economical the shuttle would be, to exaggerate how often it could fly, to exaggerate how safe it would be, to exaggerate the big scientific facts that would be discovered. “The shuttle can make so-and-so many flights and it’ll cost such-and-such; we went to the moon, so we can do it!”

Meanwhile, I would guess, the engineers at the bottom are saying, “No, no! We can’t make that many flights. If we had to make that many flights, it would mean such-and-such!” And, “No, we can’t do it for that amount of money, because that would mean we’d have to do thus-and-so!”

Well, the guys who are trying to get Congress to okay their projects don’t want to hear such talk. It’s better if they don’t hear, so they can be more “honest” — they don’t want to be in the position of lying to Congress! So pretty soon the attitudes begin to change: information from the bottom which is disagreeable — “We’re having a problem with the seals; we should fix it before we fly again” — is suppressed by big cheeses and middle managers who say, “If you tell me about the seals problems, we’ll have to ground the shuttle and fix it.” Or, “No, no, keep on flying, because otherwise, it’ll look bad,” or “Don’t tell me; I don’t want to hear about it.”

Maybe they don’t say explicitly “Don’t tell me,” but they discourage communication, which amounts to the same thing. It’s not a question of what has been written down, or who should tell what to whom; it’s a question of whether, when you do tell somebody about some problem, they’re delighted to hear about it and they say “Tell me more” and “Have you tried such-and-such?” or they say “Well, see what you can do about it” — which is a completely different atmosphere. If you try once or twice to communicate and get pushed back, pretty soon you decide, “To hell with it.”

So that’s my theory: because of the exaggeration at the top being inconsistent with the reality at the bottom, communication got slowed up and ultimately jammed. That’s how it’s possible that the higher-ups didn’t know.

What Has Been Learned From the Challenger Disaster
Since the Challenger Disaster there have been a number of dramatic video renditions of the facts surrounding the case produced for various reasons. Most of these renditions have been produced to warn the members of an organizational hierarchy about the dangers of the Time Invariant Peter Principle, and are routinely shown to the managers in major corporations. The classic scene concerns the teleconference between NASA Management and the Management and engineers of Morton Thiokol the night before the launch of the Challenger. In the scene, the Morton Thiokol engineers are against the Challenger being launched at such a low temperature because they think the O-ring seals will fail and destroy the Challenger and all those onboard. The hero of the scene is Roger Boisjoly, a Morton Thiokol engineer who courageously stands up to both Morton Thiokol Management and the Management of NASA to declare that a launch of the Challenger at such a low temperature would be wrong. Roger Boisjoly is definitely one of those unsuccessful subordinates in the eyes of the Time Invariant Peter Principle. In the scene, Roger Boisjoly is overruled by Morton Thiokol Management, under pressure from NASA Management, and the Challenger is approved for launch on the very cold morning of January 28, 1986. Strangely, it seems that it was the very cold, in combination with the Time Invariant Peter Principle, that doomed both the Titanic and the Challenger.

My Own Experience with the Challenger Case Study
Now in the 1990s I was a Technical Consultant in Amoco’s IT department. Our CEO in the 1990s decided that it was time to “Renew” Amoco’s corporate culture in keeping with Mikhail Gorbachev’s Glasnost (increased openness and transparency) and Perestroika (organizational restructuring) of the late 1980s that were meant to prevent the Soviet Union from collapsing under its own weight. Indeed, Amoco’s command and control management style of the 1990s was very reminiscent of the heydays of the former Soviet Union. The purpose of “Corporate Renewal” was to uplift Amoco from its normal position as the #4 oil company in the Big Eight to being the “preeminent oil company” of the world. Amoco was originally known as Standard Oil of Indiana, one of the many surviving fragments of the Standard Oil Trust that was broken up in 1911 under the Sherman Antitrust Act of 1890. The Standard Oil Trust went all the way back to 1863, when John D. Rockefeller first formed the Standard Oil Company, and thus Amoco was just a little bit older than the Battle of Gettysburg. Now in order to renew the corporate culture, our CEO created the Amoco Management Learning Center (AMLC), and required that once each year everybody above a certain pay grade attend a week-long course at the AMLC. The AMLC was really a very nice hotel in the western suburbs of Chicago that Amoco used for the AMLC classes and attendees. We met in a large lecture hall as a group, and also in numerous breakout rooms reserved for each team to work on assignments and presentations of their own. Now back in the mid-1990s there were no cell phones, no laptops, no Internet, no pagers, and no remote access to the Home Office.
There was a bank of landline telephones that attendees could use to periodically check in with the Office, but because the AMLC classes and group exercises ran all day long and most of the night too, there really was little opportunity for attendees to become distracted by events back in the Office, so the attendees at the AMLC were nearly completely isolated from their native hierarchies for the entire week.

One year at the AMLC, the topic for the week was Management Courage, and as part of the curriculum, we studied the Challenger Disaster in detail as an example of a dramatic Management Failure that could have been prevented by a little bit of Management Courage. Now something very strange began to happen in my particular AMLC class. It was composed primarily of Amoco managers who had been pulled out of their normal hierarchies, and who did not normally work with each other or even know each other very well, because they all came from very different parts of the Amoco hierarchical structure. But all of these managers did have something in common. They had all suffered from the consequences of the many fiascos and disasters that our new CEO had embarked upon in recent years, and because of the Time Invariant Peter Principle, there was a great deal of suppressed and pent-up unspoken animosity amongst them all. As the class progressed, and the instructors kept giving us more and more case studies of disasters that resulted from a lack of Management Courage, the class members finally broke down and began to unload all sorts of management horror stories on us, like an AA meeting gone very badly wrong. I have never seen anything like it before or since. As the week progressed, with open rebellion growing within the ranks, there were even rumors that our rebellious AMLC class would be adjourned and everybody sent home before the week was out. Finally, to quell the uprising, the AMLC staff brought in one of our CEO’s direct reports on an emergency basis to once again reestablish the dominance of the hierarchy and get everybody back in line. After all, the hierarchy must always be preserved.

A few years later, after nearly a decade of debacle, Amoco was so weakened that we dropped to being #8 in the Big Eight, and there was even a rumor going around that we did not have enough cash on hand to cover our normal quarterly dividend, something that we had been paying out to stockholders for nearly a century without a break. Shortly after that, we came to work one day in August of 1998 to learn that our CEO had sold Amoco to BP for $100 million. Now naturally BP paid a lot more than $100 million for Amoco, but that is what we heard our CEO cleared on the deal. With the announcement of the sale of Amoco, the whole Amoco hierarchy slowly began to collapse like the former Soviet Union. Nobody in the hierarchy could imagine a world without Amoco. For example, my last boss at Amoco was a third-generation Amoco employee and had a great deal of difficulty dealing with the situation. Her grandfather had worked in the Standard Oil Whiting Refinery back in the 19th century.

When the British invasion of Amoco finally began, all corporate communications suddenly ceased. We were all told to simply go into standby mode and wait. I was in the IT Architecture department at the time, and all of our projects were promptly cancelled, leaving us with nothing to do. Similarly, all new AD development ceased as well. For six months we all just came into work each day and did nothing while we were in standby mode, waiting to see what would happen. But it’s hard to keep IT professionals idle, and we soon learned that a group of Amoco’s IT employees had essentially taken over the Yahoo Message Board for Amoco stockholders. The Yahoo Message Board suddenly became an underground means of communication for Amoco employees all over the world. People were adding postings warning us that HR hit teams, composed of contract HR people, were making the rounds of all of Amoco’s facilities and were laying off whole departments of people en masse. Those were still the early days of the corporate use of the Internet, and I don’t think we even had proxy servers to block traffic, because BP was never able to block access to the Yahoo Message Board for the idle Amoco workers in standby mode. So we spent whole days just reading and writing postings for the Yahoo Message Board about the British invasion of Amoco, sort of like a twentieth-century rendition of the Sons of Liberty. In the process, I think the Amoco IT department may have accidentally invented the modern concept of using social media to foment rebellion and revolution way back in 1998!

Then things began to get even stranger. It seems that the CEO of ARCO had learned about the $100 million deal that our CEO got for the sale of Amoco. So the CEO of ARCO made an unannounced appearance at the home office of BP in London, and offered to sell ARCO to BP for a similar deal. BP was apparently quite shocked by the unsolicited windfall, but eagerly took up the deal offered by ARCO’s CEO, and so the whole process began all over again for the employees of ARCO. Now we began to see postings from ARCO employees on the Yahoo Message Board trying to figure out what the heck was going on. The Amoco employees warned the ARCO employees about what was coming their way, and we all began to exchange similar stories on the Yahoo Message Board with each other, and many of us became good friends in cyberspacetime. Then one day I rode up in the elevator with the HR hit team for Amoco’s IT Architecture department. Later that morning, we all took our turns reporting to Room 101 for our severance packages. Several months later, I had to return to the Amoco building for some final paperwork, and I decided to drop by my old enclosed office for just one last time. I found that my former office had now been completely filled with old used printer cartridges stacked from floor to ceiling! Each used printer cartridge represented the remains of one former Amoco employee.

With the assets of Amoco and ARCO in hand, combined with the assets it had acquired when it took over full control of Standard Oil of Ohio back in 1987, BP heavily expanded its operations within North America. But BP’s incessant drive to maximize profits by cutting maintenance and safety costs led to the Texas City Refinery Disaster in 2005, which killed 15 workers and burned and injured more than 170 others. For details see:


As the above Wikipedia article notes:

BP was charged with criminal violations of federal environmental laws, and has been named in lawsuits from the victims' families. The Occupational Safety and Health Administration gave BP a record fine for hundreds of safety violations, and in 2009 imposed an even larger fine after claiming that BP had failed to implement safety improvements following the disaster. On February 4, 2008, U.S. District Judge Lee Rosenthal heard arguments regarding BP's offer to plead guilty to a federal environmental crime with a US $50 million fine. At the hearing, blast victims and their relatives objected to the plea, calling the proposed fine "trivial." So far, BP has said it has paid more than US $1.6 billion to compensate victims. The judge gave no timetable on when she would make a final ruling. On October 30, 2009, OSHA imposed an $87 million fine on the company for failing to correct safety hazards revealed in the 2005 explosion. In its report, OSHA also cited over 700 safety violations. The fine was the largest in OSHA's history, and BP announced that it would challenge the fine. On August 12, 2010, BP announced that it had agreed to pay $50.6 million of the October 30 fine, while continuing to contest the remaining $30.7 million; the fine had been reduced by $6.1 million between when it was levied and when BP paid the first part.

These same policies then led to the Deepwater Horizon Disaster in 2010 that killed 11 workers and injured 16 others, and polluted a good portion of the Gulf of Mexico, costing many businesses billions of dollars. For details see:


As the above Wikipedia article notes:

On 4 September 2014, U.S. District Judge Carl Barbier ruled BP was guilty of gross negligence and willful misconduct under the Clean Water Act (CWA). He described BP's actions as "reckless," while he said Transocean's and Halliburton's actions were "negligent." He apportioned 67% of the blame for the spill to BP, 30% to Transocean, and 3% to Halliburton.

So remember, even though the Time Independent Peter Principle may lead members of a hierarchical organization to cover up problems, there may come a day of reckoning, no matter how large the hierarchy may be, and the costs of that day of reckoning may be far greater than anyone can possibly imagine.

What Does This Mean for IT Professionals?
As an IT professional, you will almost certainly spend most of your career in private or governmental hierarchical organizations. Most of the time you will find these hierarchical organizations sailing along over smooth waters with a minimum of disruption. But every so often you may be confronted with a technical ethical dilemma, like the engineers and technicians working on the space shuttle. You may find that, in your opinion, the hierarchy is embarking upon a reckless and dangerous, or even unethical, course of action. Then you must decide for yourself either to remain silent or to speak up. I hope that this posting helps you with that decision.

I would like to close with the concluding paragraphs of Richard Feynman’s Appendix F - Personal Observations on the Reliability of the Shuttle, which was attached to the official Presidential Commission report on the Challenger disaster.

Let us make recommendations to ensure that NASA officials deal in a world of reality in understanding technological weaknesses and imperfections well enough to be actively trying to eliminate them. They must live in reality in comparing the costs and utility of the Shuttle to other methods of entering space. And they must be realistic in making contracts, in estimating costs, and the difficulty of the projects. Only realistic flight schedules should be proposed, schedules that have a reasonable chance of being met. If in this way the government would not support them, then so be it. NASA owes it to the citizens from whom it asks support to be frank, honest, and informative, so that these citizens can make the wisest decisions for the use of their limited resources.

For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:

Steve Johnston

Saturday, June 14, 2014

How to Use an Understanding of Self-Replicating Information to Avoid War

Periodically, events in the “real world” of human affairs seem to intervene in our lives, and so once again we must take a slight detour along our path to IT enlightenment, as we did with MoneyPhysics during the global financial meltdown in the fall of 2008, and with The Fundamental Problem of Everything as it relates to the origins of war. With the 100-year anniversary of the onset of World War I in August of 1914 close at hand (a war that produced some 40 million casualties, including about 20 million deaths, for apparently no particular reason at all), we once again see growing turmoil in the world, specifically in the Middle East, with a multitude of conflicts converging. World War I essentially shattered the entire 20th century: it led to the Bolshevik Revolution in Russia in 1917, to the rise of fascism in Europe in the 1930s that brought on World War II, and to the ensuing Cold War of the latter half of the 20th century. This turmoil has continued well into the 21st century in the Middle East because the end of World War I left behind a number of manufactured countries that were arbitrarily carved out of the remains of the Ottoman Empire, which had, unfortunately, aligned itself with the Central Powers and thus chose the losing side of World War I. With such rampant mass insanity once again afoot in the Middle East, one must naturally ask why the real world of human affairs is so absurd, and why it has always been so. I think I know why.

In the analysis that follows there will be no need to mention any current names in the news because, as in The Fundamental Problem of Everything, this is a human problem that is not restricted to any particular group or subgroup of people. It is a problem that stems from the human condition and applies to all sides of all conflicts for all times.

It’s The Fundamental Problem of Everything Again
In The Fundamental Problem of Everything, I left it to the readers to make the final determination for themselves, but for me, the fundamental problem of everything is ignorance. Let me explain.

About 15 years ago, it dawned upon me that I only had a finite amount of time left, and that it sure would be a shame to have lived my whole life without ever having figured out what it was all about or where I had been. So I started reading a popular science book each week, or working through a college science textbook over a span of several months, in an attempt to figure it all out as best I could. The conclusion I came to was that it is all about self-replicating information, and that there are currently three forms of self-replicating information on the Earth – the genes, memes and software – with software rapidly becoming the dominant form of self-replicating information on the planet. As human beings, it seems that our entire life, from the moment of conception to that last gasp, is completely shaped by the competitive actions of these three forms of self-replicating information. So as a sentient being, in a Universe that has become self-aware, if you want to take back control of your life, it is important to confront them now and know them well. Before proceeding, let us review what self-replicating information is and how it behaves.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics.

1. All self-replicating information evolves over time through the Darwinian processes of innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.

2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.

3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.

4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.

5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.

6. Most hosts are also forms of self-replicating information.

7. All self-replicating information has to be a little bit nasty in order to survive.

8. The defining characteristic of self-replicating information is the ability of self-replicating information to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic.

For a good synopsis of how self-replicating information has dominated the Earth for the past 4 billion years, and also your life, take a quick look at A Brief History of Self-Replicating Information. Basically, we have seen several waves of self-replicating information dominate the Earth:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Note that because the self-replicating autocatalytic metabolic pathways of organic molecules, RNA and DNA have become so heavily intertwined over time that I now simply call them the “genes”.
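The Darwinian dynamics running through the list above (replication, occasional mutation, and selection) can be illustrated with a tiny simulation. This is purely an illustrative sketch of the general idea, not anything from the softwarephysics posts themselves; the function name and all of the parameters are invented for the example:

```python
import random

def evolve(generations=100, pop_size=500, mutation_rate=0.05, seed=1):
    """Toy model of self-replicating information: replicators copy
    themselves, copies occasionally mutate, and selection lets fitter
    replicators leave more offspring.  Returns the mean fitness of the
    final population."""
    rng = random.Random(seed)
    population = [1.0] * pop_size  # all replicators start out identical

    for _ in range(generations):
        # Selection: parents are chosen in proportion to their fitness.
        parents = rng.choices(population, weights=population, k=pop_size)
        # Replication with occasional unbiased mutation (innovation).
        population = [
            max(0.01, f + rng.gauss(0, 0.1)) if rng.random() < mutation_rate else f
            for f in parents
        ]
    return sum(population) / pop_size

print(evolve())
```

Even though each mutation is as likely to hurt as to help, selection alone steadily raises the population’s mean fitness, which is the sense in which self-replicating information comes to be dominated by variants that are good at surviving.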

The Ongoing Battle Between the Genes, Memes and Software For World Domination
In school you were taught that your body consists of about 100 trillion cells, and that these cells use DNA to create the proteins needed to replicate and operate your cells. The problem, as always, is that this is an entirely anthropocentric point of view. As Richard Dawkins explains in The Selfish Gene (1976), this is totally backwards. We do not use genes to protect and replicate our bodies; genes use our bodies to protect and replicate genes, so in Dawkins’ view we are DNA survival machines, and so are all other living things. Darwin taught us that natural selection was driven by survival of the fittest. But survival of the fittest what? Is it survival of the fittest species, species variety, or possibly the fittest individuals within a species? Dawkins notes that none of these things actually replicate, not even individuals. All individuals are genetically unique, so it is impossible for individuals to truly replicate. What does replicate are genes, so for Dawkins, natural selection operates at the level of the gene. These genes have evolved over time to team up with other genes to form bodies, or DNA survival machines, that protect and replicate DNA, and that is why the higher forms of life are so “inefficient” when it comes to how genetic information is stored in DNA. For example, the human genome consists of about 23,000 genes stored on a few percent of the 6 feet of DNA found within each human cell, which is a rather inefficient way to store genetic information because it takes a lot of time and resources to replicate all that DNA when human cells divide. But that is the whole point: the DNA in higher forms of life is not trying to be an “efficient” genetic information storage system; rather, it is trying to protect and replicate as much DNA as possible, and then build a DNA survival machine to house it by allocating a small percentage of the DNA to encode for the genes that produce the proteins needed to build the DNA survival machine. From the perspective of the DNA, these genes are just a necessary evil, like the taxes that must be paid to build roads and bridges.

Prokaryotic bacteria are small DNA survival machines that cannot afford the luxury of taking on any “passenger” junk DNA. Only large multicellular cruise ships like ourselves can afford that extravagance. If you have ever been a “guest” on a small sailing boat, you know exactly what I mean. There are no “guest passengers” on a small sailboat; it's always "all hands on deck" - and that includes the "guests"! Individual genes have been selected for one overriding trait, the ability to replicate, and they will do just about anything required to do so, like seeking out other DNA survival machines to mate with and rear new DNA survival machines. In Blowin’ in the Wind, Bob Dylan asked the profound question, “How many years can a mountain exist before it's washed to the sea?” Well, the answer is a few hundred million years. But some of the genes in your body are billions of years old, and as they skip down through the generations largely unscathed by time, they spend about half their time in female bodies and the other half in male bodies. If you think about it, all of your physical needs and desires are geared to ensuring that your DNA survives and gets passed on, with little regard for you as a disposable DNA survival machine. I strongly recommend that all IT professionals read The Selfish Gene, for me the most significant book of the 20th century because it explains so much. For a book written in 1976, it makes many references to computers and data processing that you will find extremely interesting.

As DNA survival machines, our genes create our basic desires to survive and to replicate our genes through sexual activity in a Dawkinsian manner. When you factor in the ensuing human desires for food and comfort, and for the wealth that provides for them, together with the sexual tensions that arise in the high school social structures that seem to go on to form the basis for all human social structures, the genes alone probably account for at least 50% of the absurdity of the real world of human affairs, because life just becomes a never-ending continuation of high school. This is all part of my general theory that nobody ever really graduates from their culturally equivalent form of high school. We all just go on to grander things in our own minds. Certainly the success of Facebook and Twitter is a testament to this observation.

Our minds were formed next by the rise of the memes over the past 2.5 million years; again, this was first proposed by Richard Dawkins in The Selfish Gene. The concept of memes was later advanced by Daniel Dennett in Consciousness Explained (1991) and Richard Brodie in Virus of the Mind: The New Science of the Meme (1996), and was finally formalized by Susan Blackmore in The Meme Machine (1999). For those of you not familiar with the term meme, it rhymes with the word “cream”. Memes are cultural artifacts that persist through time by making copies of themselves in the minds of human beings. Dawkins described them this way: “Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches. Just as genes propagate themselves in the gene pool by leaping from body to body via sperms or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation.” Just as genes come together to build bodies, or DNA survival machines, for their own mutual advantage, memes also come together from the meme pool to form meme-complexes for their own joint survival. DNA survives down through the ages by inducing disposable DNA survival machines, in the form of bodies, to produce new disposable DNA survival machines. Similarly, memes survive in meme-complexes by inducing the minds of human beings to reproduce memes in the minds of others. Meme-complexes come in a variety of sizes and can become quite large and complicated, with a diverse spectrum of member memes.
Examples of meme-complexes of increasing complexity and size would be Little League baseball teams, clubs and lodges, corporations, political and religious movements, tribal subcultures, branches of the military, governments and cultures at the national level, and finally the sum total of all human knowledge in the form of all the world cultures, art, music, religion, and science put together.

To the genes and memes, human bodies are simply disposable DNA survival machines housing disposable minds that come and go with a lifespan of less than 100 years. The genes and memes, on the other hand, continue on largely unscathed by time as they skip down through the generations. However, both genes and memes do evolve over time through the Darwinian mechanisms of innovation and natural selection. You see, the genes and memes that do not come together to build successful DNA survival machines, or meme-complexes, are soon eliminated from the gene and meme pools. So both genes and memes are selected for one overriding characteristic – the ability to survive. Once again, the “survival of the fittest” rules the day. Now it makes no sense to think of genes or memes as being either “good” or “bad”; they are just mindless forms of self-replicating information bent upon surviving with little interest in you as a disposable survival machine. So in general, these genes and memes are not necessarily working in your best interest, beyond keeping you alive long enough so that you can pass them on to somebody else.

According to Susan Blackmore, we are not so much thinking machines, as we are copying machines. For example, Blackmore maintains that memetic-drive was responsible for creating our extremely large brains and also our languages and cultures as well, in order to store and spread memes more effectively. Many researchers have noted that the human brain is way over engineered for the needs of a simple hunter-gatherer. After all, even a hundred years ago, people did not require the brain-power to do IT work, yet today we find many millions of people earning their living doing IT work, or at least trying to. Blackmore then points out that the human brain is a very expensive and dangerous organ. The brain is only 2% of your body mass, but burns about 20% of your calories each day. The extremely large brain of humans also kills many mothers and babies at childbirth, and also produces babies that are totally dependent upon their mothers for survival and that are totally helpless and defenseless on their own. Blackmore asks the obvious question of why the genes would build such an extremely expensive and dangerous organ that was definitely not in their own self-interest. Blackmore has a very simple explanation – the genes did not build our exceedingly huge brains, the memes did. Her reasoning goes like this. About 2.5 million years ago, the predecessors of humans slowly began to pick up the skill of imitation. This might not sound like much, but it is key to her whole theory of memetics. You see, hardly any other species learns by imitating other members of their own species. Yes, there are many species that can learn by conditioning, like Pavlov’s dogs, or that can learn through personal experience, like mice repeatedly running through a maze for a piece of cheese, but a mouse never really learns anything from another mouse by imitating its actions. Essentially, only humans do that. 
If you think about it for a second, nearly everything you do know, you learned from somebody else by imitating or copying their actions or ideas. Blackmore maintains that the ability to learn by imitation required quite a bit of processing power from our distant ancestors, because one needs to think in an abstract manner in order to map the actions and thoughts of others onto actions and thoughts of one's own. The skill of imitation provided a great survival advantage to those individuals who possessed it, and gave the genes that built such brains a great survival advantage as well. This caused a selection pressure to arise for genes that could produce brains with ever increasing capabilities of imitation and abstract thought. As this processing capability increased, there finally came a point when the memes, like all of the other forms of self-replicating information that we have seen arise, first appeared in a parasitic manner. Along with very useful memes, like the meme for making good baskets, other less useful memes, like putting feathers in your hair or painting your face, also began to run upon the same hardware in a manner similar to computer viruses. The genes and memes then entered into a period of coevolution, where the addition of more and more brain hardware advanced the survival of both the genes and memes. But it was really the memetic-drive of the memes that drove the exponential increase in processing power of the human brain way beyond the needs of the genes.

A very similar thing happened with software over the past 70 years. When I first started programming in 1972, million dollar mainframe computers typically had about 1 MB (about 1,000,000 bytes) of memory with a 750 KHz system clock (750,000 ticks per second). Remember, one byte of memory can store something like the letter “A”. But in those days, we were only allowed 128 K (about 128,000 bytes) of memory for our programs because the expensive mainframes were also running several other programs at the same time. It was the relentless demands of software for memory and CPU-cycles over the years that drove the exponential explosion of hardware capability. For example, today the typical $600 PC comes with 8 GB (about 8,000,000,000 bytes) of memory and has several CPUs running with a clock speed of about 3 GHz (3,000,000,000 ticks per second). Last year, I purchased Redshift 7 for my personal computer, a $60 astronomical simulation application, and it alone uses 382 MB of memory when running and reads 5.1 GB of data files, a far cry from my puny 128K programs from 1972. So the hardware has improved by a factor of about 10 million since I started programming in 1972, driven by the ever increasing demands of software for more powerful hardware. For example, in my current position in Middleware Operations for a major corporation we are constantly adding more application software each week, so every few years we must upgrade all of our servers to handle the increased load.
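As a quick sanity check on the rough “factor of about 10 million” figure, here is the arithmetic using the numbers quoted above, treating the combined figure as an order-of-magnitude estimate, since it ignores multiple CPUs, price, and so on:

```python
# Figures quoted in the text: a 1972 mainframe vs. a typical modern PC.
mem_1972, mem_now = 1_000_000, 8_000_000_000        # bytes: 1 MB vs. 8 GB
clock_1972, clock_now = 750_000, 3_000_000_000      # Hz: 750 KHz vs. 3 GHz

mem_factor = mem_now // mem_1972          # 8,000x more memory
clock_factor = clock_now // clock_1972    # 4,000x faster system clock

# Multiplying the two (a crude composite of capacity and speed) gives a
# number in the tens of millions, consistent with "about 10 million" as
# an order-of-magnitude statement.
combined = mem_factor * clock_factor
print(mem_factor, clock_factor, combined)  # 8000 4000 32000000
```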

The memes then went on to develop languages and cultures to make it easier to store and pass on memes. Yes, languages and cultures also provided many benefits to the genes as well, but with languages and cultures, the memes were able to begin to evolve millions of times faster than the genes, and the poor genes were left straggling far behind. Given the growing hardware platform of an ever increasing number of Homo sapiens on the planet, the memes then began to cut free of the genes and evolve capabilities on their own that only aided the survival of memes, with little regard for the genes, to the point of even acting in a very detrimental manner to the survival of the genes, like developing the capability for global thermonuclear war and global climate change. The memes have since modified the entire planet. They have cut down the forests for agriculture, mined minerals from the ground for metals, and burned coal, oil, and natural gas for energy, releasing the huge quantities of carbon dioxide that their genetic predecessors had sequestered within the Earth, and have even modified the very DNA, RNA, and metabolic pathways of their predecessors.

We can now see these very same processes at work today with the evolution of software. Software is currently being written by memes within the minds of programmers. Nobody ever learned how to write software all on their own. Just as with learning to speak or to read and write, everybody learned to write software by imitating teachers and other programmers, by imitating the code written by others, or by working through books written by others. Even after people do learn how to program in a particular language, they never write code from scratch; they always start with some similar code that they, or others, have written in the past, and then evolve the code to perform the desired functions in a Darwinian manner (see How Software Evolves). This crutch will likely continue for another 20 – 50 years, until the day finally comes when software can write itself, but even so, “we” do not currently write the software that powers the modern world; the memes write the software that does that. This is just a reflection of the fact that “we” do not really run the modern world either; the memes in meme-complexes really run the modern world, because the memes are currently the dominant form of self-replicating information on the planet. In The Meme Machine, Susan Blackmore points out that the memes at first coevolved with the genes during their early days, but have since outrun the genes because the genes simply could not keep pace when the memes began to evolve millions of times faster than they could. The same thing is now happening before our very eyes to the memes, with software rapidly outpacing them. Software is now evolving thousands of times faster than the memes, and the memes can simply no longer keep up.

As with all forms of self-replicating information, software began as a purely parasitic mutation within the scientific and technological meme-complexes, initially running onboard Konrad Zuse’s Z3 computer in May of 1941 (see So You Want To Be A Computer Scientist? for more details). It was spawned out of Zuse’s desire to electronically perform calculations for aircraft designs that were previously done manually in a very tedious manner. So initially software could not transmit memes, it could only perform calculations, like a very fast adding machine, and so it was a pure parasite. But then the business and military meme-complexes discovered that software could also be used to transmit memes, and software then entered into a parasitic/symbiotic relationship with the memes. Software allowed these meme-complexes to thrive, and in return, these meme-complexes heavily funded the development of software of ever increasing complexity, until software became ubiquitous, forming strong parasitic/symbiotic relationships with nearly every meme-complex on the planet. In the modern day, the only way memes can now spread from mind to mind without the aid of software is when you directly speak to another person next to you. Even if you attempt to write a letter by hand, the moment you drop it into a mailbox, it will immediately fall under the control of software. The poor memes in our heads have become Facebook and Twitter addicts.

So in the grand scheme of things, the memes have replaced their DNA predecessor, which replaced RNA, which replaced the original self-replicating autocatalytic metabolic pathways of organic molecules as the dominant form of self-replicating information on the Earth. Software is the next replicator in line, and is currently feasting upon just about every meme-complex on the planet, and has formed very strong parasitic/symbiotic relationships with all of them. How software will merge with the memes is really unknown, as Susan Blackmore pointed out in her TED presentation which can be viewed at:


Once established, software then began to evolve based upon the Darwinian concepts of innovation and natural selection, which endowed software with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity. Successful software, like MS Word and Excel, competed for disk and memory address space with WordPerfect and VisiCalc, and out-competed these once dominant forms of software to the point of extinction. In less than 70 years, software has rapidly spread across the face of the Earth and outward to every planet of the Solar System and many of its moons, with a few stops along the way at some comets and asteroids. And unlike us, software is now leaving the Solar System for interstellar space onboard the Pioneer 10 & 11 and Voyager 1 & 2 probes.

Currently, software manages to replicate itself with your support. If you are an IT professional, then you are directly involved in some, or all, of the stages in this replication process, and act sort of like a software enzyme. No matter what business you support as an IT professional, the business has entered into a parasitic/symbiotic relationship with software. The business provides the budget and energy required to produce and maintain the software, and the software enables the business to run its processes efficiently. The ultimate irony in all this is the symbiotic relationship between computer viruses and the malevolent programmers who produce them. Rather than being the clever, self-important techno-nerds that they picture themselves to be, these programmers are merely the unwitting dupes of computer viruses that trick these unsuspecting programmers into producing and disseminating computer viruses! And if you are not an IT professional, you are still involved with spreading software around, because you buy gadgets that are loaded down with software, like smartphones, notepads, laptops, PCs, TVs, DVRs, cars, refrigerators, coffeemakers, blenders, can openers and just about anything else that uses electricity.

The Genes, Memes and Software of War
In times of war, successful meme-complexes appeal primarily to two gene-induced emotions – the desire for social status and the fear of a perceived enemy. Social status in a group of similar DNA survival machines is always a good thing for the replication of genes because it brings with it the necessities of life that are required to maintain a healthy DNA survival machine and also provides for more opportunities for a DNA survival machine to couple with other DNA survival machines and to replicate its genes. Fear of a perceived enemy is another gene-induced emotion because it is a known fact that an enemy can destroy the DNA survival machines that are used to house genes as they move about from place to place.

Meme-complexes can do wonderful things, as is evidenced by the incredible standard of living enjoyed by the modern world, thanks to the efforts of the scientific meme-complex, or the great works of art, music, and literature handed down to us from the Baroque, Classical, and Romantic periods, not to mention the joys of jazz, rock and roll, and the blues. However, other meme-complexes, like the memes of war, can also turn incredibly nasty. Just since the Scientific Revolution of the 17th century we have seen the Thirty Years War (1618 – 1648), the Salem witch hunts (1692), the French Reign of Terror (1793 – 1794), American slavery (1654 – 1865), World War I (all sides) (1914 – 1918), the Stalinist Soviet Union (1929 – 1953), National Socialism (1933 – 1945), McCarthyism (1949 – 1958), Mao’s Cultural Revolution (1966 – 1976), and Pol Pot’s reign of terror (1976 – 1979).

The problem is that when human beings get wrapped up in a horrible meme-complex, they can do horrendous things without even being aware of the fact. This is because, in order to survive, the first thing that most meme-complexes do is to use a meme that turns off human thought and reflection. To paraphrase Descartes, “I think, therefore I am” a heretic. So if you ever questioned any of the participants caught up in any of the above atrocious events, you would find that the vast majority had no qualms whatsoever about their deadly activities. In fact, they would question your loyalty and patriotism for even bringing up the subject. For example, during World War I there were few dissenters beyond Albert Einstein in Germany and Bertrand Russell in Great Britain, and both suffered the consequences of not being onboard with the World War I meme-complex. Unquestioning blind obedience to a meme-complex through unconditional group-think is definitely a good survival strategy for any meme-complex.

In the modern world, during times of distress, we now see a very interesting interplay between the genes, memes and software of war. This certainly was true during the Arab Spring, which began on December 18, 2010, and was made possible by the spreading of the memes of revolution via social media software. The trouble with the memes of war is that, like all meme-complexes, once they are established they are very conservative and not very open to new memes that might jeopardize the ongoing survival of the meme-complex, and consequently, they are very hard to change or eliminate. Remember, every meme-complex is less than one generation away from oblivion. So normally, meme-complexes are very resistant to the Darwinian processes of innovation and natural selection, and just settle down into a state of coexistence with the other meme-complexes that they interact with. But during periods of stress, very violent and dangerous war-like meme-complexes can break out of this equilibrium and form rapidly, in a manner similar to the Punctuated Equilibrium model of Stephen Jay Gould and Niles Eldredge (1972), which holds that species are usually very stable and in equilibrium with their environment, and change only rarely, in relatively rapid bursts.

In times of peace, the genes, memes and software enter into an uneasy alliance of parasitic/symbiotic relationships, but in times of war, this uneasy truce breaks down, as we have again seen in the Middle East. The Middle East is currently plagued by a number of warring religious meme-complexes that are in the process of destroying the region, just as the warring Catholic and Protestant religious meme-complexes of the Thirty Years' War (1618 – 1648) nearly destroyed Europe. But at the same time that the Thirty Years' War raged in Europe, people like Kepler, Galileo and Descartes were laying the foundations of the 17th century Scientific Revolution, which led to the 18th century European Enlightenment. So perhaps the warring meme-complexes of a region have to eliminate the belligerent genes of the region before rational thought can once again prevail.

Application to the Foreign Policy of the United States
The foreign policy of the United States keeps getting into trouble because Americans do not understand the enduring nature of meme-complexes. Because all successful meme-complexes have survived the rigors of Darwinian natural selection, they are very hardy forms of self-replicating information and not easily dislodged or eliminated once they have become endemic in a region. Yes, by occupying a region it is possible to temporarily suppress what the local meme-complexes can do, but it is very difficult to totally eliminate them from the scene, because successful meme-complexes have learned to simply hide when confronted by a hostile intruding meme-complex, only to reemerge later when the hostile meme-complex has gone. The dramatic collapse of South Vietnam in less than two months (March 10 – April 30, 1975), after we had spent more than a decade trying to alter the meme-complexes of the region, is evidence of that fact. Similarly, the dramatic collapse of Iraq and Afghanistan after another decade of futile attempts to subdue the local meme-complexes of the region, some of which are thousands of years old, is another example of a failed foreign policy stemming from a naïve understanding of the hardiness of meme-complexes. History has taught us that the only way to permanently suppress the local meme-complexes of a region is to establish a permanent empire to rule the region with a heavy hand, and this is something Americans are loath to do, having once freed ourselves from such an empire.

Currently, polls in the United States show that Americans, on one hand, do not want to get involved in the Middle East again, but on the other hand, perceive that the foreign policy of the United States is weak and that we are not showing leadership. Apparently, Americans are now so confused by the various warring factions in the Middle East that they can no longer even tell who the potential enemy is. This confusion also stems from an old 20th century meme that world leadership equates to military action, which is probably no longer true in the 21st century, because the 21st century will be marked by the rise of software to supremacy as the dominant form of self-replicating information on the planet. This self-contradictory assessment troubling the minds of Americans is further exacerbated by an old 20th century meme currently floating about that, if the Middle East should further spin out of control, governmental safe havens will be established for the training of combatants that might again strike the United States, as they did with the September 11, 2001 attacks on the World Trade Center and the Pentagon. But in the modern world, with the dramatic rise of software, there is no longer a need for physical safe havens in the Middle East to train and equip combatants. Indeed, training combatants to effectively attack modern 21st century countries, and the technology that they rely upon, is best done in locations with modern 21st century technology and good Internet connectivity close at hand. For example, Timothy McVeigh, Terry Nichols and Michael Fortier conspired to conduct the Oklahoma City bombing attack that killed 168 people and injured over 600 on April 19, 1995 by training within the United States itself. Similarly, the September 11, 2001 combatants also trained within the United States prior to the attack. After all, it's hard to learn how to fly a modern jetliner in a cave.
Ironically, in the 21st century it would actually be a good defensive strategy to try to isolate your enemies to the deserts and caves of the Middle East because deserts and caves have such poor Internet connectivity and access to modern technology.

For example, I am currently employed in the Middleware Operations group of a major U.S. corporation, and I work out of my home office in Roselle, IL, a northwest suburb of Chicago. The rest of our onshore Middleware Operations group is also scattered throughout the suburbs of Chicago and hardly ever goes into our central office for work, and about 2/3 of Middleware Operations works out of an office in Bangalore, India. Yet the whole team can collaborate very effectively in a remote manner using CISCO software. We use CISCO IP Communicator for voice over IP phone conversations and CISCO WebEx for online web-meetings. We use CISCO WebEx Connect for instant messaging and desktop sharing, which lets us view each other's screens for training purposes. Combined with standard corporate email, these technologies allow a large group of Middleware Operations staff to work together from locations scattered all over the world, without ever actually being physically located in the same place. In fact, when the members of Middleware Operations do come into the office for the occasional group meeting, we usually just use the same CISCO software products to communicate while sitting in our cubicles, even when we are sitting in adjacent cubicles! After all, the CISCO collaborative software works better than leaning over somebody else's laptop and trying to see what is going on. I believe that many enemies of the United States now also work together in a very similar distributed manner, as a network of agents scattered all over the world. Now that memes can move so easily over the Internet and are no longer confined to particular regions, even the establishment of regional empires will no longer be able to suppress them.

So in the 21st century dominated by software, the only thing that the enemies of the United States really need is money. From a 21st century military perspective, control of territory is now an obsolete 20th century meme, because all an enemy really needs is money and the complicit cooperation of world-wide financial institutions to do things like launch cyber-attacks, create and deliver dirty bombs, purchase surface-to-air missiles to down commercial aircraft, or purchase nuclear weapons to FedEx to targets. For modern 21st century economies, it really makes more sense to beef up your Cyber Defense capabilities than to try to control territories populated by DNA survival machines infected with very self-destructive war-like meme-complexes that tend to splinter and collapse on their own. So for the present situation, the most effective military action that the United States could take would be to help the world cut off the money supply to the Middle East by ending the demand for oil and natural gas through conversion to renewable sources of energy. This military action would also have the added benefit of preventing many additional future wars fought over the control of Middle Eastern oil, as well as wars that would be induced by global climate change as it severely disrupts the economies of the world (see How to Use Your IT Skills to Save the World and 400 PPM - The Dawn of the SophomorEocene for more details).

Since the “real world” of human affairs only exists in our minds, we can change it by simply changing the way we think by realizing that we are indeed DNA survival machines with minds infected with memes and software that are not necessarily acting in our own best interests. We are sentient beings in a Universe that has become self-aware and perhaps the only form of intelligence in our galaxy. What a privilege! The good news is that conscious intelligence is something new. It is not a mindless form of self-replicating information, bent on replicating at all costs with all the associated downsides of a ruthless nature. We can do much better with this marvelous opportunity once we realize what is really going on. It is up to all of us to make something of this unique opportunity that we can all be proud of – that’s our responsibility as sentient beings.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order, go to:

Steve Johnston