Saturday, April 08, 2017

Introduction to Softwarephysics

Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance, and support based upon concepts from physics, chemistry, biology, and geology that I have been using on a daily basis for over 35 years as an IT professional. For those of you not in the business, IT is short for Information Technology, commercial computer science. The original purpose of softwarephysics was to explain why IT was so difficult, to suggest possible remedies, and to provide a direction for thought. Since then, softwarephysics has taken on a larger scope, as it became apparent that it could also assist the physical sciences with some of the Big Problems that they are currently struggling with. So if you are an IT professional, a general computer user, or simply someone interested in computer science, physics, chemistry, biology, or geology, then softwarephysics might be of interest to you, if not in an entirely serious manner, then perhaps at least in an entertaining one.

The Origin of Softwarephysics
From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT and spent about 20 years in development. For the past 17 years, I have been in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I first transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So, like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software by comparing software to how things behave in the physical Universe. Softwarephysics depicts software as a virtual substance and relies upon our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 75 years, through the uncoordinated efforts of over 50 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

For more on the origin of softwarephysics please see Some Thoughts on the Origin of Softwarephysics and Its Application Beyond IT.

Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily upon two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based upon real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models upon which these very successful theories were based did not work very well for small things like atoms, or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) for objects moving at high velocities, and the general theory of relativity (1915) for objects moving in strong gravitational fields.

Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In his Principia (1687), Newton outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.
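For reference, here is Newton's inverse-square law written out symbolically (my own addition for clarity; the notation does not appear in the original posting):

F = G m₁m₂ / r²

where F is the attractive gravitational force between the two masses m₁ and m₂, r is the distance between their centers, and G is the universal gravitational constant. The formula tells you how strongly the masses pull on each other, but, as Newton himself conceded, it says nothing at all about what gravity "really" is.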

The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving at less than 10% of the speed of light and that are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics, and for very fast things or things in strong gravitational fields, we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based upon models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide an effective theory of software behavior that makes useful predictions applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave, and we try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.
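To make the notion of an effective range a little more concrete, here is a minimal numerical sketch (my own illustration, with an arbitrary 1 kg test mass and arbitrary test speeds, not something from the original text) comparing the Newtonian kinetic energy ½mv² with the relativistic value (γ − 1)mc². Below about 10% of the speed of light the two agree to better than 1%, which is exactly why Newtonian mechanics remains such a useful effective theory within its proper domain:

```python
# A minimal sketch comparing Newtonian and relativistic kinetic energy.
# The 1 kg test mass and the list of test speeds are arbitrary choices.

c = 2.998e8  # speed of light in m/s

def newtonian_ke(m, v):
    # Classical kinetic energy: (1/2) m v^2
    return 0.5 * m * v**2

def relativistic_ke(m, v):
    # Relativistic kinetic energy: (gamma - 1) m c^2
    gamma = 1.0 / (1.0 - (v / c)**2) ** 0.5
    return (gamma - 1.0) * m * c**2

for fraction in (0.01, 0.1, 0.5, 0.9):
    v = fraction * c
    newton = newtonian_ke(1.0, v)
    einstein = relativistic_ke(1.0, v)
    error = (einstein - newton) / einstein
    print(f"v = {fraction:4.2f} c  ->  Newtonian value is off by {error:6.2%}")
```

Running this shows the Newtonian answer off by a tiny fraction of a percent at 1% of the speed of light, by under 1% at 10% of the speed of light, and by tens of percent as the speed approaches c - a simple picture of an effective theory quietly running out of its effective range.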

GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately. In order to do that, it is very important to have very accurate timing measurements. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips on board the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are further from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day due to the weaker gravitational field out there. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, positioning errors of roughly 10 kilometers per day would accrue. The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics, or QED (1948), which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles to an accuracy of the width of a human hair!
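The GPS timing numbers quoted above can be checked with a simple back-of-the-envelope calculation. The short Python sketch below is my own illustration; the physical constants and the simple weak-field, circular-orbit approximations are assumptions I am supplying, not something from the original text. It comes out close to the 7.2, 45.9 and 38.7 microseconds per day quoted above, and shows why an uncorrected clock drift of that size translates into a positioning error on the order of 10 kilometers per day:

```python
# Back-of-the-envelope check of the GPS relativistic clock corrections.
# All constants and approximations are illustrative assumptions.
import math

G   = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
M   = 5.972e24         # mass of the Earth (kg)
c   = 2.998e8          # speed of light (m/s)
R_e = 6.371e6          # mean radius of the Earth (m)
r   = R_e + 2.02e7     # GPS orbital radius: ~12,600 mile (~20,200 km) altitude

seconds_per_day = 86400.0

# Special relativity: a clock moving at orbital speed v runs slow by ~v^2/(2c^2).
v = math.sqrt(G * M / r)                          # circular orbital speed (~3.9 km/s)
sr_loss = (v**2 / (2 * c**2)) * seconds_per_day

# General relativity: a clock higher up in the Earth's gravitational well runs
# fast by ~(GM/c^2) * (1/R_e - 1/r) relative to a clock on the surface.
gr_gain = (G * M / c**2) * (1 / R_e - 1 / r) * seconds_per_day

net = gr_gain - sr_loss
print(f"special relativity loss  : {sr_loss * 1e6:5.1f} microseconds/day")
print(f"general relativity gain  : {gr_gain * 1e6:5.1f} microseconds/day")
print(f"net gain                 : {net * 1e6:5.1f} microseconds/day")
print(f"position error if ignored: {net * c / 1000:5.1f} km/day")
```

Since the ranging signals travel at the speed of light, the last line simply multiplies the accumulated clock error by c, which is how a few tens of microseconds per day of uncorrected drift turns into kilometers of positioning error per day.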

So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based upon completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based upon models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark upon your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.

If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.

Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 25 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog an unintended consequence arose in my mind as I became profoundly aware of the enormity of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.

But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For example, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 75 years, or 2.4 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, has closely followed the same path as life on Earth over the past 4.0 billion years in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 75 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and also of Freeman Dyson’s two-stage theory of the origin of life on Earth. In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.

Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10^9 Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed-down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10^43 Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors, and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. In Model-Dependent Realism - A Positivistic Approach to Realism we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we at long last explore what software might really be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.

The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that most computer simulations of the physical Universe suffer from. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call upon the local IT department of a major corporation in your area. Start with a visit to the Command Center of their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.

The Impact of Self-Replicating Information Upon the Planet
One of the key findings of softwarephysics is concerned with the magnitude of the impact upon the planet of self-replicating information.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Note that the self-replicating autocatalytic metabolic pathways of organic molecules, RNA, and DNA have become so heavily intertwined over time that I now simply refer to them collectively as the “genes”. Over the past 4.0 billion years, the surface of the Earth has been totally reworked by three forms of self-replicating information – the genes, memes and software, with software rapidly becoming the dominant form of self-replicating information on the planet. For more on this see:

A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia

Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact upon one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what it's all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely a working hypothesis that is still very much a work in progress. Along these lines, you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:

MoneyPhysics – my impression of the 2008 world financial meltdown.

The Fundamental Problem of Everything – if you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!

What’s It All About? and Genes, Memes and Software – my current working hypothesis on what it’s all about.

How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.

Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.

The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and The Dawn of Galactic ASI - Artificial Superintelligence - my take on some of the issues that will arise for mankind as software becomes the dominant form of self-replicating information upon the planet over the coming decades.

Making Sense of the Absurdity of the Real World of Human Affairs - how software has aided the expansion of our less desirable tendencies in recent years.

Some Specifics About These Postings
The postings in this blog are supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton upon which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read from the oldest to the most recent, which is the reverse of the order in which the blog displays them, beginning with my original posting on SoftwarePhysics. In addition, several universities now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". At this point we will finish up with my original plan for this blog on softwarephysics with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.

For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because, in order to get the Introduction to Softwarephysics listed as the first post in the context root of http://softwarephysics.blogspot.com/, I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of http://softwarephysics.blogspot.com/. The side effect of all this is that the real posting date of a post is the date that appears on the post you reach by clicking on the Newer Post link at the bottom left of the posting webpage.

SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Microsoft OneDrive.

SoftwarePhysics 101 – The Physics of Cyberspacetime - Original PowerPoint document

Entropy – A spreadsheet referenced in the document

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, March 07, 2017

Cyber Civil Defense

In my posting Cyber Defense about six years ago I warned that, like the global disaster of World War I that the Powers That Be accidentally unleashed upon mankind more than 100 years ago, the current world powers may not fully understand what they have wrought with their large stockpiles of cyberweapons and cybersoldiers. Recall that the world powers that ran the world 100 years ago, just prior to World War I, did not recognize the game-changing effects of the mechanization of warfare. The development of high-volume rail systems capable of quickly transporting large numbers of troops and munitions, the invention of the machine gun, and the arrival of mechanized transport vehicles and tanks greatly increased the killing power of nation-states. But this was not generally recognized by the Powers That Be prior to the catastrophe of World War I, which resulted in 40 million casualties and the deaths of 20 million people for apparently no particular reason at all. Similarly, it now seems that the first large-scale cyberattack by Russia upon the United States of America may have successfully elected a president of the United States, but in this posting I would like to propose that there may be some dreadful unintended consequences to this incredible Russian cybervictory that could leave even more dead in its wake. First of all, we should take note that this was not the first president of the United States that Russia managed to put into office.

I was born in 1951 during the Korean War, and so I lived through all of the very tense Cold War events of the 1950s and 1960s, including the Cuban Missile Crisis of October 1962, which brought us all closer to the prospect of a global nuclear war than we should ever have come, so let us begin there. On October 4, 1957, the Soviet Union successfully launched Sputnik 1 atop a Soviet R-7 rocket, making it the world's very first man-made object to enter Earth orbit. Earlier in 1957, the Soviet R-7 had become the world's very first functional ICBM after its successful 3,700 mile test flight on August 21, 1957. At the time, all of these Russian firsts threw the United States of America into a Cold War frenzy that is now hard to fathom, and had a huge impact upon the country. For example, it built my high school and put me through college. Back in the 1950s, School District 88 in Illinois was having a hard time trying to convince the stingy local residents of the need for a new high school in the area. But that all changed in January of 1958 when, out of the fear generated by Sputnik 1 and the demonstrable superiority of Russian missile technology at the time, the local residents eagerly voted in a referendum to build a new Willowbrook High School. Suddenly, Americans began to take science and education seriously once again, and finally began to hold them in the esteem that they actually deserved. For example, in 1969 when I first began work on a B.S. in physics at the University of Illinois, tuition was only $181 per semester, and I was easily able to put myself through college simply by cleaning movie theaters seven days a week during the summers at $2.25/hour. For my M.S. in geophysics at the University of Wisconsin, my tuition and fees were waived, and I received a generous stipend to live on while working as a research assistant, courtesy of a grant from the National Science Foundation. The end result was that, in 1975 when I finished school, I had $3000 in the bank, instead of the crushing student debt that most graduates now face, because the United States had not yet given up on supporting education, as it did after the Cold War seemed to have ended on December 25, 1991, when the Soviet Union collapsed under its own weight.

Figure 1 - The launch of Sputnik 1 by the Russians on October 4, 1957 on top of an R-7 ICBM rocket threw the United States of America into a Cold War panic that is now hard to imagine.

But the Russians did far more than that with Sputnik 1. They also managed to elect their very first president of the United States with it. Given the astounding success that the Soviets had had with Sputnik 1 and the R-7 ICBM in 1957, early in 1958 John F. Kennedy seized upon the issue of a "missile gap" with the Soviet Union that the Eisenhower Administration had failed to prevent. Now it turns out that by November of 1960 the "missile gap" had largely been closed in reality, but it still remained in the public zeitgeist of the time as a real issue, and it helped to elect John F. Kennedy as president of the United States by a very narrow margin. Apparently, John F. Kennedy actually knew at the time that the "missile gap" was a myth, but just the same, used it as a useful political tool to help get elected. The Soviets, on the other hand, regarded Kennedy's "missile gap" and the attempted Bay of Pigs invasion of Cuba in 1961 as indications that Kennedy was a dangerous and weak leader who might cave in to his more militaristic generals, like General Curtis LeMay, during a crisis and launch a nuclear first strike. The Soviet R-7 ICBMs actually required about 20 hours of preparation to launch, so they were easy targets for conventional bombers to take out before they could be launched during a global nuclear war, and thus the R-7 ICBMs were actually less threatening than long-range bombers, like the B-52. All of this led the Soviet military planners to conclude that additional deterrence measures were in order, and as a consequence, plans were put into place to install medium-range nuclear missiles in Cuba that were more accurate than the R-7. When these missiles were first discovered by American U-2 flights in September of 1962, the Cuban Missile Crisis of October 1962 soon followed. At the time, Kennedy's generals recommended an invasion of Cuba in response, but fortunately for all, Kennedy turned out to be a stronger leader than the Soviets had predicted, and Kennedy countered with a Cuban blockade instead, which allowed both sides enough time to come to their senses. There are now reports that, unknown to the United States at the time, the Soviet field commanders in Cuba had actually been given authority to launch the nuclear weapons under their control by the Soviet High Command in the event of an invasion, the only time such authority has ever been delegated by the Soviet High Command. The Soviet field commanders had at least twenty nuclear warheads for the medium-range R-12 Dvina ballistic missiles under their control, each rated at one megaton and capable of reaching cities in the United States, including Washington D.C., along with nine tactical nuclear missiles with smaller warheads. If the Soviet field commanders had launched their missiles, many millions of Americans would have been killed in the initial attack, and the ensuing retaliatory nuclear strike against the Soviet Union would have killed roughly one hundred million Russians. The final Soviet counter-attack would have killed a similar number of Americans.

But the above nuclear catastrophe did not happen because reasonable minds on both sides of the conflict always prevailed. In fact, all during the Cold War we had highly capable leaders in both the United States and the Soviet Union at all times, who always thought and behaved in a rational manner founded on sound logical grounds. Now during the Cold War, the United States may not have agreed with the Soviet Union on most things, but both sides always operated in a rational and logical manner, and that is what made the MAD (Mutual Assured Destruction) stalemate work, which prevented nuclear war from breaking out and ending it all. Additionally, the scientific community in the United States always respected those in the Russian scientific community for their brilliant scientific efforts, and this limited scientific dialog helped to keep open the political channels between both countries as well. In fact, I have always been amazed by the astounding Russian scientific achievements over the years that were made without the benefits of the freedom of thought that the 18th century Enlightenment had brought to the western democracies. Despite the limitations imposed upon the Russian scientific community by a series of political strongmen over many decades, they always managed to prevail in the long term. I am not so confident that the scientific communities of the western democracies could do as well under the thumbs of the Alt-Right strongmen that wish to come to power now.

So now, thanks to Russian cyberwarfare, we have a new president of the United States of very limited ability. It seems that the principal skill of this new president lies solely in making questionable real estate deals, but he has no experience with global political nuclear strategy whatsoever, and that is very dangerous for the United States and for Mother Russia as well. True, he does seem to unquestionably favor Russia, for some unknown reason, and that favoritism is currently being investigated by the FBI and both houses of Congress. But those investigations will take quite some time to complete. Meanwhile, we now have a mentally unhinged leader of North Korea, a Stalinist holdover from the previous century, rapidly moving towards obtaining ICBMs armed with nuclear warheads that could strike the United States. This has never happened before. We have never had potentially warring nation-states with nuclear weapons headed by administrations that had no idea of what they were doing with such weapons. This is not good for the world, or for Russia either. In the 1988 American vice-presidential debate between Lloyd Bentsen and Dan Quayle there is a famous remark by Lloyd Bentsen, after Dan Quayle made a vague analogy between himself and John F. Kennedy, that goes "Senator, I served with Jack Kennedy. I knew Jack Kennedy. Jack Kennedy was a friend of mine. Senator, you're no Jack Kennedy." And that certainly is true of the new president of the United States that Russian cyberwarriors helped to elect. Yes, he might seem to be overly friendly to Russian interests, but his administration has already stated that military action might be required to prevent North Korea from obtaining an ICBM capable of delivering a nuclear warhead that could strike the United States, and this new Administration has also wondered why we cannot use nuclear weapons if we already have them - otherwise, why build such weapons in the first place? An attack on North Korea could be the flashpoint that ignites a global nuclear war between the United States, North Korea and China, much like the original Korean War of 1950. True, Russia itself might not get drawn into such a conflict, or maybe it would, based upon earlier precedents like World War I, but nonetheless, the high levels of global radioactive fallout and nuclear winter effects resulting from a large-scale nuclear exchange would bring disaster to Russia as well.

Cyber Civil Defense - How to Build a Cyber Fallout Shelter Against External Influences
Now all during the 1950s and early 1960s, great attention was paid in the United States to the matter of civil defense against a possible nuclear strike by the Russians. During those times, the government of the United States essentially admitted that it could not defend the citizens of the United States from a Soviet bomber attack with nuclear weapons, and so it was up to the individual citizens of the United States to prepare for such a nuclear attack.

Figure 2 - During the 1950s, as a very young child, with the beginning of each new school year, I was given a pamphlet by my teacher describing how my father could build an inexpensive fallout shelter in our basement out of cinderblocks and 2x4s.

Figure 3 - But to me these cheap cinderblock fallout shelters always seemed a bit small for a family of 5, and my parents never bothered to build one because we lived only 25 miles from downtown Chicago.

Figure 4 - For the more affluent, more luxurious accommodations could be constructed for a price.

Figure 5 - But no matter what your socioeconomic level was at the time, all students in the 1950s participated in "duck and cover" drills for a possible Soviet nuclear attack.

Figure 6 - And if you were lucky enough to survive the initial flash and blast of a Russian nuclear weapon with your "duck and cover" maneuver, your school, and all other public buildings, also had a fallout shelter in the basement to help you get through the next two weeks, while the extremely radioactive nuclides from the Russian nuclear weapons rapidly decayed away.

Unfortunately, living just 25 miles from downtown Chicago, the second largest city in the United States at the time, meant that the whole Chicagoland area was destined to be targeted by a multitude of overlapping 10 and 20 megaton bombs from the Soviet bomber force, so I would have been killed multiple times over as my atoms were repeatedly vaporized and carried away in the winds of the Windy City.

Now all of these somber thoughts from the distant 1950s might sound a bit bleak, but they are the reason that I pay very little attention when I hear our Congressmen and Senators explain that we have to investigate this Russian meddling with our 2016 presidential election "so that this never happens again". But of course it will happen again because we largely did it to ourselves! The Russians, like all major foreign powers, simply exploited the deep political divide between the Democrats and Republicans in our country. This is nothing new. All major foreign powers throughout history have always sought to meddle in the internal political affairs of other countries, in order to advance their own interests. The United States of America has a long history of doing so, and rightly so! It is always far better to try to modify the ambitions of a possible adversary politically, rather than to do so later militarily. The only difference this time was that the Russians used the full capabilities of the world-wide software infrastructure that is already in place to further their ends, like another Sputnik first, in keeping with my contention that software is now rapidly becoming the dominant form of self-replicating information on the planet. Consequently, cyberspacetime is now the most valuable terrain, from a long-term strategic perspective, to be found on the planet, and once again, the Russians got there first. The Russians realized that, for less than the price of a single ICBM, they could essentially paralyze the United States of America for many years, or perhaps even an entire decade, by simply using the existing software infrastructure of the world to their advantage, and the great divide between the Democrats and Republicans.

Now as an 18th century liberal and a 20th century conservative, I must admit that I am a 20th century Republican who has only voted for Democrats for the past 15 years. I parted with the 21st century Republican Party in 2002 when it turned its back on science, and took up some other new policies that I did not favor. So over the past 45 years, there have been long stretches of time when I was a Republican, and long stretches of time when I was a Democrat, but at all times I tried to remain an American and hold the best thinking of both parties dear to my heart. But the problem today is that most Republicans and most Democrats now view members of the other party as a greater threat to the United States of America than all of the other foreign powers in the world put together. This was the fundamental flaw in today's American society that the Russians exploited using our very own software! Hence, I would like to propose that, since the government of the United States cannot really protect us from such a cyberattack in the future, just as it could not back in the 1950s, we need to institute a Civilian Cyber Civil Defense program of our own. In fact, this time it is much easier to do so because we do not need to physically build and stock a huge number of fallout shelters. All we need to do is simply follow the directions in this official 1961 CONELRAD Nuclear Attack Message by not listening to false rumors or broadcasts spread by agents of the enemy:

https://www.youtube.com/watch?v=7iaQMbfazQk

which in today's divisive world simply means:

STOP BELIEVING THE STUPID THINGS YOU READ ON THE INTERNET!

In The Danger of Believing in Things I highlighted the dangers of not employing critical thought when evaluating assertions in our physical Universe, and the same goes for politics. In that posting, I explained that it is very dangerous to believe in things because that means you have turned off your faculty of critical thought, which, hopefully, allows you to uncover attempts at deception by others. If you have ever purchased a new car, you know exactly what I am talking about. Instead, you should always approach things with some level of confidence that is less than 100%, and that confidence should always be based upon the evidence at hand. In fact, at an age of 65 years, I now have very little confidence in most forms of human thought beyond the sciences and mathematics. But in today's demented political world, most Americans are now mainly foaming at the mouth over the horrible thoughts and acts of the opposition party, and paying very little attention to the actions of the other foreign powers of the world. Since the 18th century European Enlightenment brought us democracies with the freedom of speech, we all need to recognize as responsible adults that it is impossible for our government to prevent foreign powers from exploiting that freedom by injecting "fake news" and false stories into the political debate. We must realize that, although the Russians are very intelligent and sophisticated people, they never fully benefited from the 18th century European Enlightenment and the freedoms that it brought, so we cannot retaliate in kind against Russia in its next election. So the only defense we have against another similar cyberattack by Russia, or some other foreign power during an election year, is to use the skepticism and critical thinking that science uses every day. As Carl Sagan used to say, "Extraordinary claims require extraordinary evidence". So in the course of the next election cycle, if you see on the Internet or cable network news that the opposition candidate has been found to be running a child pornography ring out of a local pizza parlor in your city, stop and think. Most likely, you are being tricked by a foreign power into believing something that you already want to believe.

This may not be as easy as it first sounds because, although we all believe ourselves to be sophisticated, rational and open-minded individuals who only pay attention to accurate news accounts, we all do savor the latest piece of political gossip that we see about the opposition candidate, and are more likely to believe it, if it reconfirms our current worldview. In the April 2017 issue of Scientific American, Walter Quattrociocchi published a very interesting article, Inside the Echo Chamber, on this very subject. He showed that studies in Italy have found that instead of creating a "collective intelligence", the Internet has actually created a vast echo chamber of misinformation that has been dramatically amplified by social media software like Facebook and Twitter. The computational social scientists who study the viral spread of such misinformation on the Internet find that, frequently, users of social media software simply confine their Internet activities to websites that only feature similar misinformation that simply reconfirms their current distorted worldview. Worse yet, when confronted with debunking information, people were found to be 30% more likely to continue to read the same distorted misinformation that reaffirms their current worldview, rather than to reconsider their position. Clearly, a bit of memetics could be of help. Softwarephysics maintains that currently software is rapidly becoming the 5th wave of self-replicating information to come to predominance on the Earth, as it continues to form very strong parasitic/symbiotic relationships with the memes that currently rule the world - please see A Brief History of Self-Replicating Information for more on that. All memes soon learn that in order to survive and replicate they need to become appealing memes for the minds of the human DNA survival machines that replicate memes. Again, appealing memes are usually memes that appeal to the genes, and usually have something to do with power, status, wealth or sex. Consequently, most political debate also arises from the desire for power, status, wealth or sex too. The end result is that people like to hear what they like to hear because it reaffirms the worldview that seems to bring them power, status, wealth or sex. So the next time you run across some political memes that seem to make you very happy inside, be very skeptical. The more appealing the political memes appear to be, the less likely they are to be true.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Monday, February 20, 2017

The Danger of Tyranny in the Age of Software

If you have been following this blog on softwarephysics, then you know that I contend that it is all about self-replicating information struggling to survive in a highly nonlinear Universe, subject to the second law of thermodynamics, and that one of my major concerns has always been why we seem to be the only form of Intelligence to be found within our Milky Way galaxy after nearly 10 billion years of galactic stellar evolution. Now software is currently just the fifth wave of self-replicating information to sweep across the planet and totally rework its surface - see A Brief History of Self-Replicating Information for more on that. But more importantly, software is the first form of self-replicating information to appear on the Earth that can already travel at the speed of light, and software never dies, so it is superbly preadapted for interstellar space travel. Since we now know that nearly all of the 400 billion stars within our galaxy have planets, for all intents and purposes we should now find ourselves knee-deep in von Neumann probes, self-replicating robotic probes stuffed with alien software that travel from star system to star system, building copies along the way as they seek out additional resources and safety from potential threats, but that is clearly not the case. So what gives? Clearly, something must be very wrong with my current thinking.

One of my assumptions all along has been that capitalism, and the free markets that it naturally enables, would necessarily bring software to predominance as the dominant form of self-replicating information on the planet, as the Powers That Be who currently rule the Earth try to reduce the costs of production. But now I have my doubts. As an 18th century liberal and a 20th century conservative, I have always been a strong proponent of the 17th century Scientific Revolution, which brought forth the heretical proposition that rational thought, combined with evidence-based reasoning, could reveal the absolute truth and allow individuals to actually govern themselves, without the need for an authoritarian monarchy. This change in thinking led to the 18th century Enlightenment, and brought forth the United States of America as a self-governing political entity. But unfortunately, the United States of America has always been a very dangerous experiment in human nature, to see if the masses could truly govern themselves without succumbing to the passions of the mob. Up until now, I have always maintained that we could, but now I am not so sure.

In my last posting, The Continuing Adventures of Mr. Tompkins in the Software Universe, I highlighted some of the recent political absurdities on the Internet that seem to call into question the very nature of reality in the modern world, and consequently, threaten the very foundations of the 18th century Enlightenment that made the United States of America possible. But the recent arrival of this fact-free virtual cyber-reality is just one element of a much more disturbing rise of Alt-Right movements throughout the world. Many contend that this resurgence of nationalistic authoritarianism is a rejection of the economic globalization that has occurred over the past 30 years or so, and the resulting economic displacement of the middle classes. But my contention is that these Alt-Right movements in such places as the United States, the UK, Germany, France and other western democracies throughout the world are just another sign of software rapidly becoming the dominant form of self-replicating information on the planet. As software comes to predominance it has caused a great deal of social, political and economic unrest, as discussed in The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and Machine Learning and the Ascendance of the Fifth Wave. Basically, the arrival of software in the 1950s slowly began to automate middle class clerical and manufacturing jobs. The evaporation of middle class clerical jobs really began to accelerate in the 1960s, with the arrival of mainframe computers in the business world, and the evaporation of manufacturing jobs picked up considerably in the 1980s, with the arrival of small microprocessors that could be embedded into the machining and assembly machines found on factory floors. In addition, the creation of world-wide high-speed fiber optic networks in the 1990s to support the Internet explosion of 1995 led to software that allowed managers in modern economies to move manual and low-skilled work to the emerging economies of the world where wage scales were substantially lower, because it was now possible to remotely manage such operations using software. But as the capabilities of software continue to progress and general purpose androids begin to appear later in the century, there will come a point when even the highly reduced labor costs of the emerging economies will become too dear. At that point the top 1% ruling class may not have much need for the remaining 99% of us, especially if the androids start building the androids. This will naturally cause some stresses within the current oligarchical structure of societies, as their middle classes continue to evaporate and more and more wealth continues to concentrate into the top 1%.

Figure 1 - Above is a typical office full of clerks in the 1950s. Just try to imagine how many clerks were required in a world without software to simply process all of the bank transactions, insurance premiums and claims, stock purchases and sales and all of the other business transactions in a single day.

Figure 2 - Similarly, the Industrial Revolution brought the assembly line and created huge numbers of middle class manufacturing jobs.

Figure 3 - But the arrival of automation software on the factory floor displaced many middle class manufacturing jobs, and will ultimately displace all middle class manufacturing jobs some time in the future.

Figure 4 - Manufacturing jobs in the United States have been on the decline since the 1960s as software has automated many manufacturing processes.

Figure 5 - Contrary to popular opinion, the actual manufacturing output of the United States has dramatically increased over the years, while at the same time, the percentage of the workforce in manufacturing has steadily decreased. This was due to the dramatic increases in worker productivity made possible by the introduction of automation software.

Figure 6 - Self-driving trucks and cars will be the next advance of software to eliminate a large segment of middle class jobs.

So the real culprit behind the great loss of middle class jobs in the western democracies over the past 40 years was the vast expansion of automation software at home, and less so the offshoring of jobs. True, a substantial amount of job loss can currently be attributed to the offshoring of jobs to lower wage scale economies, but that is really just a transient effect. Those offshored jobs will evaporate even faster as software continues on to predominance. Let's face it, with the rapidly advancing capabilities of AI software, all human labor will be reduced to a value of zero over the next 10 - 100 years, and that raises an interesting possible solution for my concerns about not being knee-deep in von Neumann probes.

The Theory and Practice of Oligarchical Collectivism
Now I am not about to compare the bizarre social media behaviors of the new Administration of the United States of America to something out of Nineteen Eighty-Four, written by George Orwell in 1949, and the ability of its infamous Ministry of Truth to distort reality, but I must admit that the numerous Tweets from the new Administration have jogged my memory a bit. I first read Nineteen Eighty-Four in 1964 as a high school freshman at the tender age of 13. At the time, I thought that the book was a very fascinating science fiction story describing a very distant possible future, but given the very anemic IT technology of the day, it seemed much more like a very entertaining political fantasy than something I should really worry much about actually coming true. However, in 2014 I decided to read the book again to see if 50 years of IT progress had made much of a difference to my initial childhood impressions. It should come as no surprise that in 2014 I found the book to be totally doable from a modern IT perspective. Indeed, I found that, with a few tweaks, a modern oligarchical state run by the 2% of the population at the very top could now easily monitor and control the remaining 98% of the population, given the vast IT infrastructure we already had in place.

But in recent days I have had even more disturbing thoughts. Recall that The Theory and Practice of Oligarchical Collectivism is the book-within-a-book of Nineteen Eighty-Four that describes what is actually going on in the lives of the main characters of the book. The Theory and Practice of Oligarchical Collectivism explains that, ever since we first invented civilization, all civilizations have adopted a hierarchy of the High, the Middle and the Low, no matter what economic system may have been adopted at the time. The High constitute about 2% of the population, and the High run all things within the society. The Middle constitute about 13% of the population, and work for the High to make sure that the Low get things properly done. The Low constitute about 85% of the population, and the Low do all of the non-administrative work to make it all happen. The Low are so busy just trying to survive that they present little danger to the High. The Theory and Practice of Oligarchical Collectivism explains that, throughout history, the Middle has always tried to overthrow the High with the aid of the Low, to establish themselves as the new High. So the Middle must always be viewed as a constant threat to the High. The solution to this problem in Nineteen Eighty-Four was for the High to constantly terrorize the Middle with thugs from the Ministry of Love and other psychological manipulations like doublethink, thoughtcrimes and newspeak, to deny the Middle even the concept of a physical reality existing beyond the fabricated reality created by the Party.

Granted, this was not an ideal solution because it required a great deal of diligence and effort on the part of the High, but it was seen as a necessary evil, because the Middle was always needed to perform all of the administrative functions to keep the High in their elevated positions. But what if there were no need for a Middle? Suppose there came a day when AI software could perform all of the necessary functions of a Middle, without the threat of the Middle overthrowing the High? That would be an even better solution. Indeed, advanced AI software could allow for a 2% High to rule a 98% Low, with no need of a Middle whatsoever. Since the High had absolute control over all of society in The Theory and Practice of Oligarchical Collectivism, it also controlled all scientific advancement of society, and chose to eliminate any scientific advancements that might put the current social order in jeopardy. Such an oligarchical society could then prevent any AI software from advancing beyond the narrow AI technological levels needed to completely monitor and control society with 100% efficiency. That could lead to a society of eternal stasis that would certainly put an end to my von Neumann probes exploring the galaxy.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Sunday, December 25, 2016

The Continuing Adventures of Mr. Tompkins in the Software Universe

George Gamow was a highly regarded theoretical physicist and cosmologist from the last century who liked to explain concepts in modern physics to the common people by having them partake in adventures along with him in alternative universes that had alternative values for the physical constants found within our own Universe. He did so by creating a delightful fictional character back in 1937 by the name of Mr. Tompkins. Mr. Tompkins was an inquisitive bank clerk who was the main character in a series of four popular science books in which he participated in a number of such scientific adventures in alternative universes. I bring this up because back in 1979, when I first switched careers from being an exploration geophysicist to become an IT professional, I had a very similar experience. At the time, it seemed to me as if the strange IT people that I was now working with on a daily basis had created their own little Software Universe, with themselves as the sole inhabitants. But over the years, I have seen this strange alternative Software Universe slowly expand in size, to the point that nearly all of the Earth's inhabitants are now also inhabitants of this alternative Software Universe.

Mr. Tompkins first appeared in George Gamow's mind in 1937 when he wrote a short story called A Toy Universe and unsuccessfully tried to have it published by the magazines of the day, such as Harper's, The Atlantic Monthly and Coronet. However, in 1938 he was finally able to publish a series of articles in a British magazine called Discovery that later became the book Mr Tompkins in Wonderland in 1939. Later he published Mr Tompkins Explores the Atom in 1944 and two other books at later dates. The adventures of Mr. Tompkins begin when he spends the afternoon of a bank holiday attending a lecture on the theory of relativity. During the lecture he drifts off to sleep and enters a dream world in which the speed of light is a mere 4.5 m/s (10 mph). This becomes apparent to him when he notices that passing cyclists are subject to a noticeable Lorentz–FitzGerald contraction.

As I explained in the Introduction to Softwarephysics, softwarephysics is a simulated science designed to help explain how the simulated Software Universe that we have created for ourselves behaves. To do so, I simply noticed that, like our physical Universe, the Software Universe is quantized and extremely nonlinear in nature. For more on that, please see The Fundamental Problem of Software. Thanks to quantum mechanics (1926) we now know that our physical Universe is quantized into very small chunks of matter and energy, and probably small chunks of space and time as well. Similarly, the Software Universe is composed of quantized chunks of software that start off as discrete characters in software source code (see Quantum Software for details). Thanks to quantum mechanics, we also now know that the macroscopic behaviors of our Universe are an outgrowth of the quantum mechanical operations of the atoms within it. Similarly, the macroscopic operations of the Software Universe are an outgrowth of the quantized operations of the source code that makes it all work. Now because very small changes to software source code can produce hugely significant changes to the way software operates, software is probably the most nonlinear substance known to mankind. The extreme nonlinear behavior of quantized software, combined with the devastating tendency of the second law of thermodynamics to normally produce very buggy, non-functional software, necessarily brings in the Darwinian pressures that have caused software to slowly evolve over the past 75 years, or 2.4 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941. For more on this see The Fundamental Problem of Software.
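To make this nonlinearity concrete, here is a tiny toy Python sketch; the function names and sample numbers are purely illustrative. Changing a single character in the source code, a "+" into a "-", completely changes what the program does, which is exactly the sort of lethal sensitivity to tiny changes that programmers battle every day:

def average(values):
    total = 0
    for v in values:
        total += v              # add up the values
    return total / len(values)

def broken_average(values):
    total = 0
    for v in values:
        total -= v              # a one-character "mutation": += became -=
    return total / len(values)

data = [98.6, 98.7, 98.5, 98.8]
print(average(data))            # about 98.65 - the intended behavior
print(broken_average(data))     # about -98.65 - one character ruined it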

Now the reason Mr. Tompkins never noticed the effects of the special theory of relativity in his everyday life was because the speed of light is so large, but once the speed of light was reduced to 10 mph in an alternative universe for Mr. Tompkins, all of the strange effects of the special theory of relativity became readily apparent, and with enough time, would have become quite normal to him as a part of everyday life. Similarly, the strange effects of quantum mechanics only seem strange to us because Planck's constant is so very small - 6.62607004 × 10^-34 kg m²/second - and therefore those effects only become apparent for very small things like atoms and electrons. However, if Planck's constant were very much larger, then we would also begin to grow accustomed to the strange behaviors of objects behaving in a quantum mechanical way. For example, in quantum mechanics the spin of a single electron can be both up and down at the same time, but in the classical Universe that we are used to, macroscopic things like a child's top can only have a spin of up or down at any given time. The top can only spin in a clockwise or counterclockwise manner at one time - it cannot do both at the same time. Similarly, in quantum mechanics a photon or electron can go through both slits of a double slit experiment at the same time, so long as you do not put detectors at the slit locations.

Figure 1 – A macroscopic top can only spin clockwise or counterclockwise at one time.

Figure 2 – But electrons can be in a mixed quantum mechanical state in which they both spin up and spin down at the same time.

Figure 3 – Similarly, tennis balls can only go through one slit in a fence at a time. They cannot go through both slits of a fence at the same time.

Figure 4 – But at the smallest of scales in our quantum mechanical Universe, electrons and photons can go through both slits at the same time, producing an interference pattern.

Figure 5 – You can see this interference pattern of photons if you look at a distant porch light through the mesh of a sheer window curtain.

So in quantum mechanics at the smallest of scales, things can be both true and false at the same time. Fortunately for us, at the macroscopic sizes of everyday life, these bizarre quantum effects of nature seem to fade away, so that the things I just described are either true or false. Macroscopic tops either spin clockwise or counterclockwise, and tennis balls pass through either one slit or the other, but not both at the same time. Indeed, it is rather strange that, although all of the fundamental particles of our Universe seem to behave in a fuzzy quantum mechanical manner in which true things and false things can both seem to blend into a cosmic grayness of ignorance, at the macroscopic level of our physical Universe there are still such things as absolute truths and absolute falsehoods that can be measured in a laboratory in a reproducible manner. This must have been so for the Darwinian processes of innovation honed by natural selection to have brought us forth. After all, if Schrödinger's cat could really be both dead and alive at the same time, these Darwinian processes could not have worked, and we would not be here contemplating the differences between true and false assertions. The end result is that in our physical Universe there is no absolute truth at the smallest of scales, only quantum mechanical opinions, but at the macroscopic level of everyday life there are indeed such things as absolute truths and absolute falsehoods.

The Current Bizarre World of Political Social Media Software in the United States
Now imagine that our Mr. Tompkins had entered into a bizarre alternative universe in which things were just the opposite. Imagine a universe in which, at the smallest of scales things operated classically, as if things were either absolutely true or false, but at a macroscopic level, things were seen to be both true and false at the same time! Well, we currently do have such an alternative universe close at hand to explore. It is the current bizarre world of political social media software in the United States of America. Recall that currently, the Software Universe runs on classical computers in which a bit can be either a "1" or a "0". In a classical computer a bit can only be a "1" or a "0" at any given time - it cannot be both a "1" and a "0" at the same time. For that you would need to have software running on a quantum computer, and for the most part, we are not there yet. So at the smallest of scales in our current Software Universe, the concept of there actually being a real difference between true and false assertions is fundamental. None of the current software code that makes it all work could possibly run if this were not the case. So it is quite strange that at the macroscopic level of political social media software in the United States, just the opposite seems to be the case. Unfortunately, in today's strange world of political social media software, there seems to be no right or wrong and no distinction between the truth and lies. We now have "alternative facts" and claims of "fake news" abounding, and Twitter feeds from those in power loaded down with false information. Because of this, for any given assertion, 30% of Americans will think that the assertion is true, while 70% of Americans will think that the assertion is false. In the Software Universe there are no longer any facts; there are only opinions in a seemingly upside-down quantum mechanical sense.
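To see that difference in code, here is a minimal Python sketch; it is purely illustrative and only stores two real amplitudes, so it is nothing like a real quantum computer. It simply contrasts a classical bit, which must be a "0" or a "1", with a simulated qubit in an equal superposition that holds both possibilities at once until it is measured:

import math

# A classical bit is always exactly a 0 or a 1.
classical_bit = 1

# A simulated qubit is described by two amplitudes (a, b) with a*a + b*b = 1.
# In an equal superposition both "0" and "1" are present until a measurement.
a = 1 / math.sqrt(2)        # amplitude of the |0> state
b = 1 / math.sqrt(2)        # amplitude of the |1> state

print("classical bit:", classical_bit)
print("P(measure 0) =", a ** 2)     # about 0.5 - half the time we would see a 0
print("P(measure 1) =", b ** 2)     # about 0.5 - half the time we would see a 1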

The Danger of Believing in Things
In The Danger of Believing in Things I highlighted the dangers of not employing critical thought when evaluating assertions in our physical Universe. The problem today is that most people now seem to spend more time living in the simulated Software Universe that we have created than in our actual physical Universe. The end result is that, instead of seeking out the truth, the worldview memes infecting our minds simply seek out supporting memes in the Software Universe that reinforce the worldview memes already within our minds. But unlike in our current simulated Software Universe, where those worldview memes can be both absolutely true and absolutely false at the same time, in our physical Universe, which behaves classically at the day-to-day scales in which we all live, things can still only be absolutely true or false, but not both. The most dangerous aspect of this new fake reality is that the new Administration of the United States of America maintains that climate change is a hoax, simply because they say it is a hoax, and sadly, for many Americans that is good enough. Now climate change might indeed be a hoax in our simulated Software Universe, or it might not be a hoax, because there is no absolute truth in our simulated Software Universe at the macroscopic level; there are only opinions. But that is not the case in the physical Universe in which we all actually live, where climate change is rapidly underway. For more on that please see This Message on Climate Change Was Brought to You by SOFTWARE. In the real physical Universe in which we all actually live, it is very important that we always take the words of Richard Feynman very seriously, for "reality must take precedence over public relations, for nature cannot be fooled."

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Thursday, September 22, 2016

Some Thoughts on the Origin of Softwarephysics and Its Application Beyond IT

On December 1, 2016 I retired at age 65 from 37+ years as an IT professional and a 41+ year career working for various corporations. During those long years I accidentally stumbled upon the fundamentals of softwarephysics, while traipsing through the jungles of several corporate IT departments, and I thought that it might be a good time to take a look back over the years and outline how all that happened.

The Rise of Software
Currently, we are witnessing one of those very rare moments in time when a new form of self-replicating information, in the form of software, is coming to dominance. Software is now so ubiquitous that it seems like the whole world is immersed in a Software Universe of our own making, surrounded by PCs, tablets, smart phones and the software now embedded in most of mankind's products. In fact, I am now quite accustomed to sitting with audiences of younger people who are completely engaged with their "devices" before, during and after a performance. But this is a very recent development in the history of mankind. In the initial discussion below, I will first outline a brief history of the evolution of hardware technology to explain how we got to this state, but it is important to keep in mind that it was the relentless demands of software for more and more memory and CPU-cycles over the years that really drove the exponential explosion of hardware capability. After that, I will explain how the concept of softwarephysics slowly developed in my mind over the years as I interacted with the software running on these rapidly developing machines.

It all started back in May of 1941 when Konrad Zuse first cranked up his Z3 computer. The Z3 was the world's first real computer and was built with 2400 electromechanical relays that were used to perform the switching operations that all computers use to store information and to process it. To build a computer, all you need is a large network of interconnected switches that have the ability to switch each other on and off in a coordinated manner. Switches can be in one of two states, either open (off) or closed (on), and we can use those two states to store the binary digits "0" and "1". By using a number of switches teamed together in open (off) or closed (on) states, we can store even larger binary numbers, like "01100100" = 100. We can also group the switches into logic gates that perform logical operations. For example, in Figure 1 below we see an AND gate composed of two switches A and B. Both switch A and B must be closed in order for the light bulb to turn on. If either switch A or B is open, the light bulb will not light up.

Figure 1 – An AND gate can be simply formed from two switches. Both switches A and B must be closed, in a state of “1”, in order to turn the light bulb on.

Additional logic gates can be formed from other combinations of switches as shown in Figure 2 below. It takes about 2 - 8 switches to create each of the various logic gates shown below.

Figure 2 – Additional logic gates can be formed from other combinations of 2 – 8 switches.

Once you can store binary numbers with switches and perform logical operations upon them with logic gates, you can build a computer that performs calculations on numbers. To process text, like names and addresses, we simply associate each letter of the alphabet with a binary number, like in the ASCII code set where A = "01000001" and Z = "01011010", and then process the associated binary numbers.
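To make this concrete, here is a toy Python sketch that treats each switch as a boolean value, True for closed (on) and False for open (off). It builds a few logic gates out of those switches, reads off the number stored by a group of eight switches, and encodes a letter using ASCII. This is only an illustration, of course; real hardware does all of this with physical switches:

# Model each switch as a boolean: closed (on) = True = 1, open (off) = False = 0.

def AND(a, b):              # both switches must be closed to light the bulb
    return a and b

def OR(a, b):               # either switch being closed lights the bulb
    return a or b

def NOT(a):
    return not a

def NAND(a, b):             # more complex gates can be built from simpler ones
    return NOT(AND(a, b))

print(AND(True, True), AND(True, False))    # True False

# A group of eight switches stores an 8-bit binary number.
print(int("01100100", 2))                   # 100

# Text is handled by mapping each letter to a binary number, as in ASCII.
print(format(ord("A"), "08b"))              # 01000001
print(chr(int("01011010", 2)))              # Z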

Figure 3 – Konrad Zuse with a reconstructed Z3 in 1961


Figure 4 – Block diagram of the Z3 architecture

The electrical relays used by the Z3 were originally meant for switching telephone conversations. Closing one relay allowed current to flow to another relay’s coil, causing that relay to close as well.

Figure 5 – The Z3 was built using 2400 electrical relays, originally meant for switching telephone conversations.

Figure 6 – The electrical relays used by the Z3 for switching were very large, very slow and used a great deal of electricity which generated a great deal of waste heat.

Now I was born about 10 years later in 1951, a few months after the United States government installed its very first commercial computer, a UNIVAC I, for the Census Bureau on June 14, 1951. The UNIVAC I was 25 feet by 50 feet in size, and contained 5,600 vacuum tubes, 18,000 crystal diodes and 300 relays with a total memory of 12 K. From 1951 to 1958 a total of 46 UNIVAC I computers were built and installed.

Figure 7 – The UNIVAC I was very impressive on the outside.

Figure 8 – But the UNIVAC I was a little less impressive on the inside.

Figure 9 – Most of the electrical relays of the Z3 were replaced with vacuum tubes in the UNIVAC I, which were also very large, used lots of electricity and generated lots of waste heat too, but the vacuum tubes were 100,000 times faster than relays.

Figure 10 – Vacuum tubes contain a hot negative cathode that glows red and boils off electrons. The electrons are attracted to the cold positive anode plate, but there is a control grid between the cathode and anode plate. By changing the voltage on the grid, the vacuum tube can control the flow of electrons like the handle of a faucet. The grid voltage can be adjusted so that the electron flow is full blast, a trickle, or completely shut off, and that is how a vacuum tube can be used as a switch.

In the 1960s the vacuum tubes were replaced by discrete transistors and in the 1970s the discrete transistors were replaced by thousands of transistors on a single silicon chip. Over time, the number of transistors that could be put onto a silicon chip increased dramatically, and today, the silicon chips in your personal computer hold many billions of transistors that can be switched on and off in about 10^-10 seconds. Now let us look at how these transistors work.

There are many different kinds of transistors, but I will focus on the FET (Field Effect Transistor) that is used in most silicon chips today. A FET transistor consists of a source, gate and a drain. The whole affair is laid down on a very pure silicon crystal using a multi-step photolithographic process that engraves circuit elements onto the crystal. Silicon lies directly below carbon in the periodic table, so like carbon, silicon has 4 electrons in its outer shell and is also missing 4 electrons, and this makes silicon a semiconductor. Pure silicon is not very electrically conductive, but by doping the silicon crystal with very small amounts of impurities, it is possible to create silicon that has a surplus of free electrons. This is called N-type silicon. Similarly, it is possible to dope silicon with small amounts of impurities that remove free electrons, leaving behind positively charged holes, and this is called P-type silicon. To make an FET transistor you simply use a photolithographic process to create two N-type silicon regions on a substrate of P-type silicon. Between the N-type regions is found a gate which controls the flow of electrons between the source and drain regions, like the grid in a vacuum tube. When a positive voltage is applied to the gate, it attracts the remaining free electrons in the P-type substrate and repels its positive holes. This creates a conductive channel between the source and drain which allows a current of electrons to flow.

Figure 11 – A FET transistor consists of a source, gate and drain. When a positive voltage is applied to the gate, a current of electrons can flow from the source to the drain and the FET acts like a closed switch that is “on”. When there is no positive voltage on the gate, no current can flow from the source to the drain, and the FET acts like an open switch that is “off”.

Figure 12 – When there is no positive voltage on the gate, the FET transistor is switched off, and when there is a positive voltage on the gate the FET transistor is switched on. These two states can be used to store a binary “0” or “1”, or can be used as a switch in a logic gate, just like an electrical relay or a vacuum tube.



Figure 13 – Above is a plumbing analogy that uses a faucet or valve handle to simulate the actions of the source, gate and drain of an FET transistor.
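Following the faucet analogy above, here is a small Python sketch that models an N-channel FET as nothing more than a voltage-controlled switch; the 1.0 volt threshold below is just an illustrative number, not a real device parameter. Putting two of these FET switches in series gives the same AND behavior as the two relay switches of Figure 1:

# A toy model of an N-channel FET as a voltage-controlled switch.
V_THRESHOLD = 1.0           # illustrative threshold voltage, not a real device value

def fet_conducts(gate_voltage):
    # The source-drain channel conducts only when the gate voltage is high.
    return gate_voltage > V_THRESHOLD

def series_fets_conduct(gate_a, gate_b):
    # Two FET switches in series conduct only when both gates are high,
    # the same AND behavior as the two relay switches of Figure 1.
    return fet_conducts(gate_a) and fet_conducts(gate_b)

print(series_fets_conduct(3.3, 3.3))    # True  - both gates high, current flows
print(series_fets_conduct(3.3, 0.0))    # False - one gate low, the switch is open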

The CPU chip in your computer consists largely of transistors in logic gates, but your computer also has a number of memory chips that use transistors that are “on” or “off” and can be used to store binary numbers or text that is encoded using binary numbers. The next thing we need is a way to coordinate the billions of transistor switches in your computer. That is accomplished with a system clock. My current work laptop has a clock speed of 2.5 GHz which means it ticks 2.5 billion times each second. Each time the system clock on my computer ticks, it allows all of the billions of transistor switches on my laptop to switch on, off, or stay the same in a coordinated fashion. So while your computer is running, it is actually turning on and off billions of transistors billions of times each second – and all for a few hundred dollars!
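As a cartoon of that coordinated switching, here is a little Python sketch of a 4-bit binary counter in which the four "switches" are only allowed to change state when the clock ticks, and they all update together. It is only a toy, of course; a real CPU updates billions of such bits on every tick of its hardware clock:

# A toy 4-bit binary counter: the four "switches" only change state when the
# clock ticks, and they all update together in a coordinated fashion.
state = [0, 0, 0, 0]

def clock_tick(bits):
    value = int("".join(str(b) for b in bits), 2)       # read the current state
    next_value = (value + 1) % 16                       # compute the next state
    return [int(b) for b in format(next_value, "04b")]  # all bits change at once

for tick in range(1, 6):
    state = clock_tick(state)
    print("tick", tick, "->", state)
# tick 1 -> [0, 0, 0, 1], tick 2 -> [0, 0, 1, 0], tick 3 -> [0, 0, 1, 1], ...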

Again, it was the relentless drive of software for ever increasing amounts of memory and CPU-cycles that made all this happen, and that is why you can now comfortably sit in a theater with a smart phone that can store more than 10 billion bytes of data, while back in 1951 the UNIVAC I occupied an area of 25 feet by 50 feet to store 12,000 bytes of data. But when I think back to my early childhood in the early 1950s, I can still vividly remember a time when there essentially was no software at all in the world. In fact, I can still remember my very first encounter with a computer on Monday, Nov. 19, 1956 watching the Art Linkletter TV show People Are Funny with my parents on an old black and white console television set that must have weighed close to 150 pounds. Art was showcasing the 21st UNIVAC I to be constructed, and had it sorting through the questionnaires from 4,000 hopeful singles, looking for the ideal match. The machine paired up John Caran, 28, and Barbara Smith, 23, who later became engaged. And this was more than 40 years before eHarmony.com! To a five-year-old boy, a machine that could “think” was truly amazing. Since that very first encounter with a computer back in 1956, I have personally witnessed software slowly becoming the dominant form of self-replicating information on the planet, and I have also seen how software has totally reworked the surface of the planet to provide a secure and cozy home for more and more software of ever increasing capability. For more on this please see A Brief History of Self-Replicating Information. That is why I think there would be much to be gained in exploring the origin and evolution of the $10 trillion computer simulation that the Software Universe provides, and that is what softwarephysics is all about. Let me explain where this idea came from.

My First Experiences with Software
Back in the 1950s, scientists and engineers first began to use computers to analyze experimental data and perform calculations, essentially using computers as souped-up sliderules to do data reduction. But by the 1960s, computers had advanced to the point where scientists and engineers were able to begin to use computers to perform simulated experiments to model things that previously had to be physically constructed in a lab. This dramatically helped to speed up research because it was found to be much easier to create a software simulation of a physical system, and perform simulated experiments on it, than to actually build the physical system itself in the lab. This revolution in the way science was done personally affected me. I finished up my B.S. in physics at the University of Illinois in Urbana, Illinois in 1973 with the sole support of my trusty sliderule, but fortunately, I did take a class in FORTRAN programming my senior year. I then immediately began work on an M.S. degree in geophysics at the University of Wisconsin at Madison. For my thesis, I worked with a group of graduate students who were shooting electromagnetic waves into the ground to model the conductivity structure of the Earth’s upper crust. We were using the Wisconsin Test Facility (WTF) of Project Sanguine to send very low frequency electromagnetic waves, with a bandwidth of about 1 – 100 Hz, into the ground, and then we measured the reflected electromagnetic waves in cow pastures up to 60 miles away. All this information has been declassified and can be downloaded from the Internet at: http://www.fas.org/nuke/guide/usa/c3i/fs_clam_lake_elf2003.pdf. Project Sanguine built an ELF (Extremely Low Frequency) transmitter in northern Wisconsin and another transmitter in northern Michigan in the 1970s and 1980s. The purpose of these ELF transmitters was to send messages to the U.S. nuclear submarine force at a frequency of 76 Hz. These very low frequency electromagnetic waves can penetrate the highly conductive seawater of the oceans to a depth of several hundred feet, allowing the submarines to remain at depth, rather than coming close to the surface for radio communications. You see, normal radio waves in the Very Low Frequency (VLF) band, at frequencies of about 20,000 Hz, only penetrate seawater to a depth of 10 – 20 feet. This ELF communications system became fully operational on October 1, 1989, when the two transmitter sites began synchronized transmissions of ELF broadcasts to the U.S. submarine fleet.
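A quick back-of-the-envelope check in Python shows why the frequency matters so much. An electromagnetic wave decays inside a conductor over a characteristic skin depth of sqrt(2 / (omega * mu * sigma)), and assuming a round-number seawater conductivity of about 4 S/m, a 76 Hz ELF wave gets a skin depth of roughly 95 feet while a 20,000 Hz VLF wave gets only about 6 feet. Since a signal remains detectable over a few skin depths, that is consistent with the penetration depths quoted above:

import math

MU_0 = 4 * math.pi * 1e-7        # magnetic permeability of free space (H/m)
SIGMA_SEAWATER = 4.0             # assumed conductivity of seawater (S/m)

def skin_depth_feet(frequency_hz, conductivity):
    # skin depth = sqrt(2 / (omega * mu * sigma)), converted from meters to feet
    omega = 2 * math.pi * frequency_hz
    depth_m = math.sqrt(2.0 / (omega * MU_0 * conductivity))
    return depth_m * 3.281

print(skin_depth_feet(76, SIGMA_SEAWATER))       # about 95 feet per skin depth
print(skin_depth_feet(20000, SIGMA_SEAWATER))    # about 6 feet per skin depth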

Anyway, back in the summers of 1973 and 1974 our team was collecting electromagnetic data from the WTF using a DEC PDP-8/e minicomputer. The machine cost about $30,000 in 1973 dollars and was about the size of a large side-by-side refrigerator, with 32K of magnetic core memory. We actually hauled this machine through the lumber trails of the Chequamegon National Forest and powered it with an old diesel generator to digitally record the reflected electromagnetic data in the field. For my thesis, I then created models of the Earth’s upper conductivity structure down to a depth of about 20 km, using programs written in BASIC. The beautiful thing about the DEC PDP-8/e was that the computer time was free, so I could play around with different models until I got a good fit to what we recorded in the field. The one thing I learned by playing with the models on the computer was that the electromagnetic waves did not go directly down into the Earth from the WTF, as common sense would lead you to believe. Instead, the ELF waves traveled through the air in a waveguide between the ionosphere and the conductive rock of the Earth to where you were observing, and then made a nearly 90 degree turn straight down into the Earth, as they were refracted into the much more conductive rock. So at your observing station, you really only saw ELF plane waves going straight down and reflecting straight back up off the conductivity differences in the upper crust, and this made modeling much easier than dealing with ELF waves transmitted through the Earth from the WTF. And this is what happens for our submarines too; the ELF waves travel through the air all over the world, channeled between the conductive seawater of the oceans and the conductive ionosphere of the atmosphere, like a huge coax cable. When the ELF waves reach a submarine, they are partially refracted straight down to the submarine. I would never have gained this insight by simply solving Maxwell’s equations (1864) for electromagnetic waves alone! This made me realize that one could truly use computers to do simulated experiments to uncover real knowledge: take the fundamental laws of the Universe, really the handful of effective theories that we currently have, like Maxwell's equations, simulate those equations in computer code, let them unfold in time, and actually watch the emerging behaviors of complex systems arise in a simulated Universe. All the sciences now do this routinely, but back in 1974 it was quite a surprise for me.
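That nearly 90 degree turn falls right out of Snell's law once you remember how large the wavenumber of a 76 Hz wave becomes inside conductive rock. Here is a rough Python sketch of the idea, using an illustrative upper-crust conductivity of 0.001 S/m; the number is chosen only for illustration, not taken from my thesis models:

import math

MU_0 = 4 * math.pi * 1e-7        # magnetic permeability of free space (H/m)
C = 3.0e8                        # speed of light in air (m/s)
SIGMA_CRUST = 1e-3               # assumed upper-crust conductivity (S/m)

def transmitted_angle_deg(frequency_hz, incidence_deg, conductivity):
    # Snell's law using the wavenumber magnitudes of air and of a good conductor.
    omega = 2 * math.pi * frequency_hz
    k_air = omega / C                                  # wavenumber in air
    k_rock = math.sqrt(omega * MU_0 * conductivity)    # |wavenumber| in the rock
    sin_t = (k_air / k_rock) * math.sin(math.radians(incidence_deg))
    return math.degrees(math.asin(sin_t))

# Even at nearly grazing incidence the wave enters the rock almost vertically.
print(transmitted_angle_deg(76, 89.0, SIGMA_CRUST))    # about 0.1 degrees from vertical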

Figure 14 – Some graduate students huddled around a DEC PDP-8/e minicomputer. Notice the teletype machines in the foreground on the left that were used to input code and data into the machine and to print out results as well.

After I graduated from Wisconsin in 1975, I went to work for Shell and Amoco exploring for oil between 1975 – 1979, before switching into a career in IT in 1979. But even during this period, I mainly programmed geophysical models of seismic data in FORTRAN for Shell and Amoco. It was while programming computer simulations of seismic data that the seeds of softwarephysics began to creep into my head, as I painstakingly assembled lots of characters of computer code into complex patterns that did things, only to find that, no matter how carefully I worked, my code always seemed to fail because there were just way too many ways to assemble the characters into computer code that was "close" but not quite right. It was sort of like trying to assemble lots of atoms into complex organic molecules that do things, only to find that you were off by a small amount, and those small errors made the computer code fail. At this point, I was beginning to have some fuzzy thoughts about being the victim of the second law of thermodynamics misbehaving in a nonlinear Universe. But those initial thoughts about softwarephysics accelerated dramatically in 1979 when I made a career change to become an IT professional. One very scary Monday morning, I was escorted to my new office cubicle in Amoco’s IT department, and I immediately found myself surrounded by a large number of very strange IT people, all scurrying about in a near state of panic, like the characters in Alice in Wonderland. Suddenly, it seemed like I was trapped in a frantic computer simulation, like the ones I had programmed on the DEC PDP-8/e, buried in punch card decks and fan-fold listings. After nearly 38 years in the IT departments of several major corporations, I can now state with confidence that most corporate IT departments can best be described as “frantic” in nature. This new IT job was a totally alien experience for me, and I immediately thought that I had just made a very dreadful mistake. Granted, I had been programming geophysical models for my thesis and for oil companies ever since taking a basic FORTRAN course back in 1972, but that was the full extent of my academic credentials in computer science.

The Beginnings of Softwarephysics
So to help myself cope with the daily mayhem of life in IT, I began to develop softwarephysics. This was because I noticed that, unlike all of the other scientific and engineering professions, IT professionals did not seem to have a theoretical framework to help them cope with the daily mayhem of life in IT. But I figured that if you could apply physics to geology, why not apply physics to software? When I first switched from physics to geophysics in 1973, I was very impressed by the impact that applying simple 19th century physics had had upon geology during the plate tectonics revolution (1965 – 1970). When I graduated from the University of Illinois in 1973 with a B.S. in physics, I was very dismayed to find that the end of the Space Race and a temporary lull in the Cold War had left very few prospects open for a budding physicist. So on the advice of my roommate, a geology major, I headed up north to the University of Wisconsin in Madison to obtain an M.S. in geophysics, with the hope of obtaining a job with an oil company exploring for oil. These were heady days for geology because we were at the very tail end of the plate tectonics revolution that totally changed the fundamental models of geology. The plate tectonics revolution peaked during the five year period 1965 – 1970. Having never taken a single course in geology during all of my undergraduate studies, I was accepted into the geophysics program with many deficiencies in geology, so I had to take many undergraduate geology courses to get up to speed in this new science. The funny thing was that the geology textbooks had not yet caught up with the plate tectonics revolution of the previous decade, so they still embraced the “classical” geological models of the past, which now seemed a little bit silly in light of the new plate tectonics model. But this was also very enlightening. It was like looking back at the prevailing thoughts in physics prior to Newton or Einstein. What the classical geological textbooks taught me was that over the course of several hundred years, the geologists had figured out what had happened, but not why it had happened. Up until 1960 geology was mainly an observational science relying upon the human senses of sight and touch, and by observing and mapping many outcrops in detail, the geologists had figured out how mountains had formed, but not why.

In classical geology, most geomorphology was thought to arise from local geological processes. For example, in classical geology, fold mountains formed off the coast of a continent when a geosyncline formed because the continental shelf underwent a dramatic period of subsidence for some unknown reason. Then very thick layers of sedimentary rock were deposited into the subsiding geosyncline, consisting of alternating layers of sand and mud that turned into sandstones and shales, intermingled with limestones that were deposited from the carbonate shells of dead sea life floating down or from coral reefs. Next, for some unknown reason, the sedimentary rocks were laterally compressed into folded structures that slowly rose from the sea. More compression then followed, exceeding the ability of the sedimentary rock to deform plastically, resulting in thrust faults that uplifted blocks of sedimentary rock even higher. As compression continued, some of the sedimentary rocks were forced down to great depths within the Earth, where they were placed under great pressures and temperatures. These sedimentary rocks were then far from the thermodynamic equilibrium of the Earth’s surface where they had originally formed, and thus the atoms within them recrystallized into new metamorphic minerals. At the same time, for some unknown reason, huge plumes of granitic magma rose from deep within the Earth’s interior as granitic batholiths. Then over several hundred million years, the overlying folded sedimentary rocks slowly eroded away, revealing the underlying metamorphic rocks and granitic batholiths, allowing human beings to cut and polish them into pretty rectangular slabs for the purpose of slapping them up onto the exteriors of office buildings and onto kitchen countertops. In 1960, classical geologists had no idea why the above sequence of events, producing very complicated geological structures, seemed to happen over and over again many times over the course of billions of years. But with the advent of plate tectonics (1965 – 1970), all was suddenly revealed. It was the lateral movement of plates on a global scale that made it all happen. With plate tectonics, everything finally made sense. Fold mountains did not form from purely local geological factors in play. There was the overall controlling geological process of global plate tectonics making it happen. For a quick overview, please see:

Fold Mountains
http://www.youtube.com/watch?v=Jy3ORIgyXyk

Figure 15 – Fold mountains occur when two tectonic plates collide. A descending oceanic plate first causes subsidence offshore of a continental plate, which forms a geosyncline that accumulates sediments. When all of the oceanic plate between two continents has been consumed, the two continental plates collide and compress the accumulated sediments in the geosyncline into fold mountains. This is how the Himalayas formed when India crashed into Asia.

Now the plate tectonics revolution was really made possible by the availability of geophysical data. It turns out that most of the pertinent action of plate tectonics occurs under the oceans, at the plate spreading centers and subduction zones, far removed from the watchful eyes of geologists in the field with their notebooks and trusty hand lenses. Geophysics really took off after World War II, when universities were finally able to get their hands on cheap war surplus gear. By mapping variations in the Earth’s gravitational and magnetic fields and by conducting deep oceanic seismic surveys, geophysicists were finally able to figure out what was happening at the plate spreading centers and subduction zones. Actually, the geophysicist and meteorologist Alfred Wegener had figured this all out in 1912 with his theory of Continental Drift, but at the time Wegener was ridiculed by the geological establishment. You see, Wegener had been an arctic explorer and had noticed that sometimes sea ice split apart, like South America and Africa, only later to collide again to form mountain-like pressure ridges. Unfortunately, Wegener froze to death in 1930 trying to provision some members of his last exploration party to Greenland, never knowing that one day he would finally be vindicated.

So when I first joined the IT department of Amoco, I had the vague feeling that perhaps much of the angst that I saw in my fellow IT coworkers was really due to the lack of an overall theoretical framework, like plate tectonics, that could help to explain their daily plight, and also help to alleviate some of its impact, by providing some insights into why doing IT for a living was so difficult, and to suggest some possible remedies, and to provide a direction for thought as well. So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all of the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software, by comparing software to how things behaved in the physical Universe. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software.

So my original intent for softwarephysics was to merely provide a theoretical framework for IT professionals, to help them better understand the behavior of software during its development and its behavior under load when running in Production. My initial thoughts were that the reason IT work was so difficult was that programmers were constantly fighting a losing battle with the second law of thermodynamics in a nonlinear Universe. You see, programmers must assemble a huge number of characters into complex patterns of source code in order to instruct a computer to perform useful operations, and because the Universe is largely nonlinear in nature, meaning that small changes to initial conditions will most likely result in dramatic, and many times lethal, outcomes for software, IT work was nearly impossible to do, and that is why most IT professionals were usually found to be on the verge of a nervous breakdown during the course of a normal day in IT. For more on that see The Fundamental Problem of Software. At the same time, I subconsciously also knew that living things must assemble an even larger number of atoms into complex molecules in order to perform the functions of life in a nonlinear Universe, so obviously, it would seem that the natural solution to the problem that IT professionals faced each day would be simply to apply a biological approach to developing and maintaining software. However, this did not gel in my mind at first, until one day, while I was working on some code, I came up with the notion that we needed to stop writing code - we needed to "grow" code instead in a biological manner. For more on that see Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework.

Using Softwarephysics to Help Explore the Origin of Life
But as I saw complex corporate software slowly evolve over the decades, it became more and more evident to me that much could be gained by studying this vast computer simulation that the IT community has been working on for the past 75 years, or 2.4 billion seconds. NASA has defined life broadly as "A self-sustaining chemical system capable of Darwinian evolution." Personally, after many years of reflection, I feel that the research community currently exploring the origin of life on the Earth and elsewhere is too obsessed with simply finding other carbon-based life forms like themselves. Carbon-based life forms are really just one form of self-replicating information currently found on our planet, so I feel that more attention should be focused upon finding other forms of self-replicating information sharing the Universe with us, and the best place to start, with the least cost, is to simply look right here on the Earth. To do that, all we need to do is remove the "chemical" term from NASA's definition of life and redefine self-replicating information as "A self-sustaining system capable of Darwinian evolution." That is why I have been stressing in many of my postings that the origin and evolution of commercial software provides a unique opportunity for those interested in the origin and early evolution of life on the Earth, and elsewhere, because both programmers and living things are faced with nearly identical problems. My suggestion in those postings has been that everybody has been looking just a couple of levels too low in the hierarchy of self-replicating information. Carbon-based living things are just one form of self-replicating information, and all forms of self-replicating information have many characteristics in common as they battle the second law of thermodynamics in a nonlinear Universe. So far we have seen at least five waves of self-replicating information sweep across the Earth, with each wave greatly reworking the surface and near subsurface of the planet as it came to predominance:

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is now rapidly becoming the dominant form of self-replicating information on the planet, and is having a major impact on mankind as it comes to predominance. For more on this see: A Brief History of Self-Replicating Information. However, of the five waves of self-replicating information, the only form that we currently have a good history of is software, going all the way back to May of 1941 when Konrad Zuse first cranked up his Z3 computer. So the best model for the origin of life might be obtained by studying the hodge-podge of precursors, false starts, and failed attempts that led to the origin and early evolution of software, with particular attention paid to the parasitic/symbiotic relationships that allowed software to bootstrap itself into existence.

Yes, there are many other examples of universal Darwinism at work in the Universe, such as the evolution of languages or political movements, but I think that the origin and evolution of software provides a unique example because both programmers and living things are faced with nearly identical problems. A programmer must assemble a huge number of characters into complex patterns of source code to instruct a computer to perform useful operations. Similarly, living things must assemble an even larger number of atoms into complex molecules in order to perform the functions of life. And because the Universe is largely nonlinear in nature, meaning that small changes to initial conditions will most likely result in dramatic, and many times lethal, outcomes for both software and living things, the evolutionary histories of living things on Earth and of software have both converged upon very similar solutions to overcome the effects of the second law of thermodynamics in a nonlinear Universe. For example, both living things and software went through a very lengthy prokaryotic architectural period, with little internal structure, followed by a eukaryotic architectural period with a great deal of internal structure, which later laid the foundations for forms with a complex multicellular architecture. And both also experienced a dramatic Cambrian explosion, in which large multicellular systems arose consisting of huge numbers of somatic cells that relied upon the services of cells organized into a number of discrete organs. For more on this see the SoftwarePaleontology section of SoftwareBiology and Software Embryogenesis.

Also, software presents a much clearer distinction between the genotype and phenotype of a system than do other complex systems, like languages or other technologies that also undergo evolutionary processes. The genotype of software is determined by the source code files of programs, while the phenotype of software is expressed by the compiled executable files that run upon a computer and that are generated from the source code files by a transcription process similar to the way genes are transcribed into proteins. Also, like a DNA or RNA sequence, source code provides a very tangible form of self-replicating information that can be studied over historical time without ambiguity. Source code is also not unique, in that many different programs, and even programs written in different languages, can produce executable files with identical phenotypes or behaviors.
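As a toy illustration of that genotype-phenotype distinction, here are two Python "genotypes" that look nothing alike at the source code level, yet express exactly the same observable "phenotype" when run; the function names and the choice of calculation are just illustrative:

# Two very different "genotypes" (source code) ...
def sum_of_first_n_v1(n):
    total = 0
    for i in range(1, n + 1):   # add the integers 1..n one at a time
        total += i
    return total

def sum_of_first_n_v2(n):
    return n * (n + 1) // 2     # use the closed-form formula instead

# ... with an identical observable "phenotype" (behavior).
print(sum_of_first_n_v1(100), sum_of_first_n_v2(100))   # 5050 5050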

Currently, many researchers working on the origin of life and astrobiology are trying to produce computer simulations to help investigate how life could have originated and evolved at its earliest stages. But trying to incorporate all of the relevant elements into a computer simulation is proving to be a very daunting task indeed. Why not simply take advantage of the naturally occurring $10 trillion computer simulation that the IT community has already patiently evolved over the past 75 years and has already run for 2.4 billion seconds? It has been hiding there in plain sight the whole time for anybody with a little bit of daring and flair to explore.

Some might argue that this is an absurd proposal because software currently is a product of the human mind, while biological life is not a product of intelligent design. Granted, biological life is not a product of intelligent design, but neither is the human mind. The human mind and biological life are both the result of natural processes at work over very long periods of time. This objection simply stems from the fact that we are all still, for the most part, self-deluded Cartesian dualists at heart, with seemingly a little “Me” running around within our heads that just happens to have the ability to write software and to do other challenging things. But since the human mind is a product of natural processes in action, so is the software that it produces. For more on that see The Ghost in the Machine the Grand Illusion of Consciousness.

Still, I realize that there might be some hesitation to pursue this line of research because it might be construed by some as an advocacy of intelligent design, but that is hardly the case. The evolution of software over the past 75 years has essentially been a matter of Darwinian inheritance, innovation and natural selection converging upon solutions similar to those of biological life. For example, it took the IT community about 60 years of trial and error to finally stumble upon an architecture similar to that of complex multicellular life that we call SOA – Service Oriented Architecture. The IT community could have easily discovered SOA back in the 1960s if it had adopted a biological approach to software and intelligently designed software architecture to match that of the biosphere. Instead, the world-wide IT architecture we see today essentially evolved on its own, because nobody really sat back and designed this very complex world-wide software architecture; it just sort of evolved through small incremental changes brought on by many millions of independently acting programmers through a process of trial and error. When programmers write code, they always take some old existing code first and then modify it slightly by making a few changes. Then they add a few additional new lines of code and test the modified code to see how far they have come. Usually, the code does not work on the first attempt because of the second law of thermodynamics, so they then try to fix the code and try again. This happens over and over, until the programmer finally has a good snippet of new code. Thus, new code comes into existence through the Darwinian mechanisms of inheritance coupled with innovation and natural selection. Some might object that this coding process is actually a form of intelligent design, but that is not the case. It is important to differentiate between intelligent selection and intelligent design. In softwarephysics we extend the concept of natural selection to include all selection processes that are not supernatural in nature, so for me, intelligent selection is just another form of natural selection. This is really nothing new. Predators and prey constantly make “intelligent” decisions about what to pursue and what to evade, even if those “intelligent” decisions are only made with the benefit of a few interconnected neurons or molecules. So in this view, the selection decisions that a programmer makes after each iteration of working on some new code really are a form of natural selection. After all, programmers are just DNA survival machines with minds infected with memes for writing software, and the selection processes that the human mind undergoes while writing software are just as natural as the Sun drying out worms on a sidewalk or a cheetah deciding upon which gazelle in a herd to pursue.
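That write-test-fix loop is easy to caricature in code. Here is a toy Python sketch of inheritance (copy the existing text), innovation (change one random character) and selection (keep the copy only if it scores at least as well), which eventually "evolves" a target string out of random gibberish. It is only a cartoon of the real process, and the target string is just an illustrative stand-in for working code:

import random

random.seed(42)                      # fixed seed so the toy run is repeatable
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
TARGET = "grow code instead"         # an illustrative stand-in for working code

def score(candidate):
    # Selection criterion: how many characters already match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(parent):
    # Inheritance plus innovation: copy the parent and change one character.
    i = random.randrange(len(parent))
    return parent[:i] + random.choice(ALPHABET) + parent[i + 1:]

candidate = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generations = 0
while candidate != TARGET:
    mutant = mutate(candidate)
    if score(mutant) >= score(candidate):   # selection keeps the better copy
        candidate = mutant
    generations += 1

print(candidate, "- reached after", generations, "mutations")

Nothing in this little loop requires any design from above, and the same was true at a much larger scale.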

For example, when IT professionals slowly evolved our current $10 trillion world-wide IT architecture over the past 2.4 billion seconds, they certainly did not do so with the teleological intent of creating a simulation of the evolution of the biosphere. Instead, like most organisms in the biosphere, these IT professionals were simply trying to survive just one more day in the frantic world of corporate IT. It is hard to convey the daily mayhem and turmoil of corporate IT to outsiders. When I first hit the floor of Amoco’s IT department, I was in total shock, but I quickly realized that all IT jobs essentially boiled down to simply pushing buttons. All you had to do was to push the right buttons, in the right sequence, at the right time, and with zero errors. How hard could that be? Well, it turned out to be very difficult indeed, and in response I began to subconsciously work on softwarephysics to try to figure out why this job was so hard, and how I could dig myself out of the mess that I had gotten myself into. After a while, it dawned on me that the fundamental problem was the second law of thermodynamics operating in a nonlinear simulated universe. The second law made it very difficult to push the right buttons in the right sequence and at the right time because there were so many erroneous combinations of button pushes. Writing and maintaining software was like looking for a needle in a huge utility phase space. There was just a nearly infinite number of ways of pushing the buttons “wrong”. The other problem was that we were working in a very nonlinear utility phase space, meaning that pushing just one button incorrectly usually brought everything crashing down. Next, I slowly began to think of pushing the correct buttons in the correct sequence as stringing together the correct atoms into the correct sequence to make molecules in chemical reactions that could do things. I also knew that living things were really great at doing that. Living things apparently overcame the second law of thermodynamics by dumping entropy into heat as they built low entropy complex molecules from high entropy simple molecules and atoms. I then began to think of each line of code that I wrote as a step in a biochemical pathway. The variables were like organic molecules composed of characters or “atoms”, and the operators were like chemical reactions between the molecules in the line of code. The logic in several lines of code was the same thing as the logic found in several steps of a biochemical pathway, and a complete function was the equivalent of a full-fledged biochemical pathway in itself. But one nagging question remained - how could I take advantage of these similarities to save myself? That’s a long story, but in 1985 I started working on BSDE – the Bionic Systems Development Environment, which was used at Amoco to “grow” software biologically from an “embryo” by having programmers turn on and off a set of “genes”. For more on that see Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework.
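Just to get a feel for how huge that utility phase space really is, here is a one-line back-of-the-envelope calculation in Python; the 80-character line length and the 95 printable ASCII characters are just convenient round numbers:

# The number of possible 80-character lines of code drawn from the roughly
# 95 printable ASCII characters - the size of just a tiny corner of the phase space.
line_length = 80
printable_characters = 95

print(printable_characters ** line_length)   # about 1.6 x 10^158 possible lines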

The Social Impacts of the Coming Predominance of Software
Over the years, I have seen the Software Universe that I first encountered back in 1979 expand from the small population of IT workers in the world to encompass the entire world at large. I have also seen that, as software comes to predominance, it has caused a great deal of social, political and economic unrest, as discussed in The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight, Machine Learning and the Ascendance of the Fifth Wave, and Making Sense of the Absurdity of the Real World of Human Affairs. The immediate difficulty is that software has displaced many workers over the past 75 years, and as software comes to predominance, it will eventually reduce all human labor to a value of zero over the next 10 - 100 years. How will the age-old oligarchical societies of the world deal with that in a manner that allows civilization to continue? The 2016 Presidential Election cycle in the United States was a dramatic example of this in action. The election was totally dominated by the effects of software coming to predominance - rogue email servers, hacking, leaking, software security breaches in general and wild Twitter feeds by candidates. But the election was primarily determined by the huge loss of middle class jobs due to automation by software. Now it's pretty hard to get mad at software because it is so intangible in nature, so many mistakenly directed their anger at other people instead, because that is what mankind has been doing for the past 200,000 years. But this time is different because the real culprit is software coming of age. Unfortunately, those low-skilled factory jobs that have already evaporated are not coming back, no matter what some may promise. And those jobs are just the first in a long line. With the current pace of AI and Machine Learning research and implementation, now that they both can make lots of money, we will soon find self-driving trucks and delivery vehicles, automated cranes at container ports and automated heavy construction machinery at job sites. We have already lost lots of secretaries, bank tellers, stock brokers, insurance agents, retail salespeople and travel agents, but that is just the beginning. Soon we will see totally automated fast food restaurants, to be followed later by the automation of traditional sit-down restaurants, and automated retail stores without a single employee, like the totally automated parking garages we already have.

I have now been retired for nearly a month, and after stopping work for the first time in 50 years, I can now state that there are plenty of things to do to keep busy. For example, my wife and I provide the daycare for two of our grandchildren for our daughter, a high school Biology and Chemistry teacher, and I now have all the time in the world for the online MOOC courses that I like to take. So the end of working for a living is really not a bad thing, but the way we currently have civilization set up, with its norm of rewarding people for what they produce, will not work in a future without work. How that will all unfold remains one of the great mysteries of our time. For an intriguing view of one possibility please see THE MACHINE STOPS by E.M. Forster (1909) at:

http://archive.ncsa.illinois.edu/prajlich/forster.html

Yes - from 1909!

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston