Thursday, September 22, 2016

Some Thoughts on the Origin of Softwarephysics and Its Application Beyond IT

On December 1, 2016, I retired at age 65, ending 37+ years as an IT professional and a 41+ year career working for various corporations. During those long years, I accidentally stumbled upon the fundamentals of softwarephysics while traipsing through the jungles of several corporate IT departments, and I thought that it might be a good time to take a look back over the years and outline how all of that happened.

The Rise of Software
Currently, we are witnessing one of those very rare moments in time when a new form of self-replicating information, in the form of software, is coming to dominance. Software is now so ubiquitous that it seems like the whole world is immersed in a Software Universe of our own making, surrounded by PCs, tablets, smart-phones and the software now embedded in most of mankind's products. In fact, I am now quite accustomed to sitting with audiences of younger people who are completely engaged with their "devices" before, during and after a performance. But this is a very recent development in the history of mankind. In the initial discussion below, I will first outline a brief history of the evolution of hardware technology to explain how we got to this state, but it is important to keep in mind that it was the relentless demands of software for more and more memory and CPU-cycles over the years that really drove the exponential explosion of hardware capability. After that, I will explain how the concept of softwarephysics slowly developed in my mind over the years as I interacted with the software running on these rapidly developing machines.

It all started back in May of 1941 when Konrad Zuse first cranked up his Z3 computer. The Z3 was the world's first real computer and was built with 2400 electromechanical relays that were used to perform the switching operations that all computers use to store information and to process it. To build a computer, all you need is a large network of interconnected switches that have the ability to switch each other on and off in a coordinated manner. Switches can be in one of two states, either open (off) or closed (on), and we can use those two states to store the binary numbers of “0” or “1”. By using a number of switches teamed together in open (off) or closed (on) states, we can store even larger binary numbers, like “01100100” = 100. We can also group the switches into logic gates that perform logical operations. For example, in Figure 1 below we see an AND gate composed of two switches A and B. Both switches A and B must be closed in order for the light bulb to turn on. If either switch A or B is open, the light bulb will not light up.

Figure 1 – An AND gate can be simply formed from two switches. Both switches A and B must be closed, in a state of “1”, in order to turn the light bulb on.
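
To make this a little more concrete, here is a small Python sketch of my own (just an illustration, certainly not code from the Z3) that models switches as 0s and 1s: a row of switches stores a binary number, and an AND gate is simply two switches wired in series, like the two switches in Figure 1.

# A minimal sketch (my own illustration): switches modeled as 0 (open) or 1 (closed).
def switches_to_number(switches):
    """Interpret a list of open (0) / closed (1) switches as a binary number."""
    value = 0
    for state in switches:
        value = value * 2 + state
    return value

def and_gate(switch_a, switch_b):
    """The light bulb only turns on (1) when both switches are closed (1)."""
    return 1 if (switch_a == 1 and switch_b == 1) else 0

print(switches_to_number([0, 1, 1, 0, 0, 1, 0, 0]))  # prints 100, the value of "01100100"
print(and_gate(1, 1))  # 1 - the bulb lights up
print(and_gate(1, 0))  # 0 - the bulb stays dark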

Additional logic gates can be formed from other combinations of switches as shown in Figure 2 below. It takes about 2 - 8 switches to create each of the various logic gates shown below.

Figure 2 – Additional logic gates can be formed from other combinations of 2 – 8 switches.

Once you can store binary numbers with switches and perform logical operations upon them with logic gates, you can build a computer that performs calculations on numbers. To process text, like names and addresses, we simply associate each letter of the alphabet with a binary number, as in the ASCII code set where A = “01000001” and Z = “01011010”, and then process the associated binary numbers.
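
Here is another tiny Python illustration of my own showing the ASCII idea: each character is stored as an 8-bit binary number, and text is just a sequence of those numbers.

# A small sketch (mine, not from any real code set implementation) of ASCII encoding.
def letter_to_bits(letter):
    """Return the 8-bit ASCII pattern for a single character."""
    return format(ord(letter), '08b')

print(letter_to_bits('A'))   # 01000001
print(letter_to_bits('Z'))   # 01011010
print([letter_to_bits(c) for c in "Zuse"])  # text is just a string of binary numbers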

Figure 3 – Konrad Zuse with a reconstructed Z3 in 1961


Figure 4 – Block diagram of the Z3 architecture

The electrical relays used by the Z3 were originally meant for switching telephone conversations. Closing one relay allowed current to flow to another relay’s coil, causing that relay to close as well.

Figure 5 – The Z3 was built using 2400 electrical relays, originally meant for switching telephone conversations.

Figure 6 – The electrical relays used by the Z3 for switching were very large, very slow and used a great deal of electricity which generated a great deal of waste heat.

Now I was born about 10 years later in 1951, a few months after the United States government installed its very first commercial computer, a UNIVAC I, for the Census Bureau on June 14, 1951. The UNIVAC I was 25 feet by 50 feet in size and contained 5,600 vacuum tubes, 18,000 crystal diodes and 300 relays with a total memory of 12 K. From 1951 to 1958 a total of 46 UNIVAC I computers were built and installed.

Figure 7 – The UNIVAC I was very impressive on the outside.

Figure 8 – But the UNIVAC I was a little less impressive on the inside.

Figure 9 – Most of the electrical relays of the Z3 were replaced with vacuum tubes in the UNIVAC I. Vacuum tubes were also very large, used lots of electricity and generated lots of waste heat, but they were 100,000 times faster than relays.

Figure 10 – Vacuum tubes contain a hot negative cathode that glows red and boils off electrons. The electrons are attracted to the cold positive anode plate, but there is a grid electrode between the cathode and the anode plate. By changing the voltage on the grid, the vacuum tube can control the flow of electrons like the handle of a faucet. The grid voltage can be adjusted so that the electron flow is full blast, a trickle, or completely shut off, and that is how a vacuum tube can be used as a switch.

In the 1960s the vacuum tubes were replaced by discrete transistors and in the 1970s the discrete transistors were replaced by thousands of transistors on a single silicon chip. Over time, the number of transistors that could be put onto a silicon chip increased dramatically, and today, the silicon chips in your personal computer hold many billions of transistors that can be switched on and off in about 10^-10 seconds. Now let us look at how these transistors work.

There are many different kinds of transistors, but I will focus on the FET (Field Effect Transistor) that is used in most silicon chips today. A FET transistor consists of a source, a gate and a drain. The whole affair is laid down on a very pure silicon crystal using a multi-step photolithographic process that engraves the circuit elements onto the crystal. Silicon lies directly below carbon in the periodic table, so like carbon, it has 4 electrons in its outer shell and is also missing 4 electrons. This makes silicon a semiconductor. Silicon is not very electrically conductive in its pure state, but by doping the silicon crystal with very small amounts of impurities, it is possible to create silicon that has a surplus of free electrons. This is called N-type silicon. Similarly, it is possible to dope silicon with small amounts of impurities that reduce the number of free electrons and create positive holes instead, producing P-type silicon. To make a FET transistor you simply use a photolithographic process to create two N-type silicon regions on a substrate of P-type silicon. Between the N-type regions lies a gate that controls the flow of electrons between the source and drain regions, like the grid in a vacuum tube. When a positive voltage is applied to the gate, it attracts the remaining free electrons in the P-type substrate and repels its positive holes. This creates a conductive channel between the source and drain which allows a current of electrons to flow.

Figure 11 – A FET transistor consists of a source, gate and drain. When a positive voltage is applied to the gate, a current of electrons can flow from the source to the drain and the FET acts like a closed switch that is “on”. When there is no positive voltage on the gate, no current can flow from the source to the drain, and the FET acts like an open switch that is “off”.

Figure 12 – When there is no positive voltage on the gate, the FET transistor is switched off, and when there is a positive voltage on the gate the FET transistor is switched on. These two states can be used to store a binary “0” or “1”, or can be used as a switch in a logic gate, just like an electrical relay or a vacuum tube.



Figure 13 – Above is a plumbing analogy that uses a faucet or valve handle to simulate the actions of the source, gate and drain of an FET transistor.
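
To tie this back to Figure 1, here is a hedged little Python sketch of my own (a crude abstraction, not a real device model, and the 0.7 volt threshold is just an illustrative number) showing the FET as a voltage-controlled switch, with two such switches in series behaving like an AND gate.

# A toy model (my own illustration) of a FET as a voltage-controlled switch.
THRESHOLD_VOLTS = 0.7  # illustrative threshold voltage, not a real device specification

def fet_is_on(gate_voltage):
    """A crude model: the channel conducts when the gate voltage is high enough."""
    return gate_voltage > THRESHOLD_VOLTS

def and_gate_from_fets(gate_voltage_a, gate_voltage_b):
    """Two FETs in series: current flows only if both transistors are switched on."""
    return 1 if (fet_is_on(gate_voltage_a) and fet_is_on(gate_voltage_b)) else 0

print(and_gate_from_fets(3.3, 3.3))  # 1 - both transistors conduct
print(and_gate_from_fets(3.3, 0.0))  # 0 - one transistor blocks the current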

The CPU chip in your computer consists largely of transistors in logic gates, but your computer also has a number of memory chips that use transistors that are “on” or “off” and can be used to store binary numbers or text that is encoded using binary numbers. The next thing we need is a way to coordinate the billions of transistor switches in your computer. That is accomplished with a system clock. My current work laptop has a clock speed of 2.5 GHz, which means it ticks 2.5 billion times each second. Each time the system clock on my computer ticks, it allows all of the billions of transistor switches on my laptop to switch on, off, or stay the same in a coordinated fashion. So while your computer is running, it is actually turning on and off billions of transistors billions of times each second – and all for a few hundred dollars!
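
To put a number on that, here is a tiny Python sketch of my own (an illustration only, nothing that actually runs inside the hardware) that works out the clock period of a 2.5 GHz machine and mimics a 4-bit register whose switches only change state on a tick of the clock.

# A small sketch (mine) of the system clock idea.
clock_speed_hz = 2.5e9                 # 2.5 GHz, as mentioned above
clock_period_seconds = 1.0 / clock_speed_hz
print(clock_period_seconds)            # 4e-10, i.e. 0.4 nanoseconds per tick

register = 0                           # a toy register built from transistor switches
for tick in range(5):                  # five ticks of the system clock
    register = (register + 1) % 16     # all of the switches update together on each tick
    print("tick", tick, "register =", format(register, '04b'))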

Again, it was the relentless drive of software for ever-increasing amounts of memory and CPU-cycles that made all this happen, and that is why you can now comfortably sit in a theater with a smart-phone that can store more than 10 billion bytes of data, while back in 1951 the UNIVAC I occupied an area of 25 feet by 50 feet to store 12,000 bytes of data. But when I think back to my early childhood in the early 1950s, I can still vividly remember a time when there essentially was no software at all in the world. In fact, I can still remember my very first encounter with a computer on Monday, Nov. 19, 1956, watching the Art Linkletter TV show People Are Funny with my parents on an old black and white console television set that must have weighed close to 150 pounds. Art was showcasing the 21st UNIVAC I to be constructed and had it sorting through the questionnaires from 4,000 hopeful singles, looking for the ideal match. The machine paired up John Caran, 28, and Barbara Smith, 23, who later became engaged. And this was more than 40 years before eHarmony.com! To a five-year-old boy, a machine that could “think” was truly amazing. Since that very first encounter with a computer back in 1956, I have personally witnessed software slowly becoming the dominant form of self-replicating information on the planet, and I have also seen how software has totally reworked the surface of the planet to provide a secure and cozy home for more and more software of ever-increasing capability. For more on this please see A Brief History of Self-Replicating Information. That is why I think there would be much to be gained in exploring the origin and evolution of the $10 trillion computer simulation that the Software Universe provides, and that is what softwarephysics is all about. Let me explain where this idea came from.

My First Experiences with Software
Back in the 1950s, scientists and engineers first began to use computers to analyze experimental data and perform calculations, essentially using computers as souped-up sliderules to do data reduction. But by the 1960s, computers had advanced to the point where scientists and engineers were able to begin to use computers to perform simulated experiments to model things that previously had to be physically constructed in a lab. This dramatically helped to speed up research because it was found to be much easier to create a software simulation of a physical system, and perform simulated experiments on it, than to actually build the physical system itself in the lab. This revolution in the way science was done personally affected me. I finished up my B.S. in physics at the University of Illinois in Urbana, Illinois in 1973 with the sole support of my trusty sliderule, but fortunately, I did take a class in FORTRAN programming my senior year. I then immediately began work on an M.S. degree in geophysics at the University of Wisconsin at Madison. For my thesis, I worked with a group of graduate students who were shooting electromagnetic waves into the ground to model the conductivity structure of the Earth’s upper crust. We were using the Wisconsin Test Facility (WTF) of Project Sanguine to send very low-frequency electromagnetic waves, with a bandwidth of about 1 – 100 Hz, into the ground, and then we measured the reflected electromagnetic waves in cow pastures up to 60 miles away. All this information has been declassified and can be downloaded from the Internet at: http://www.fas.org/nuke/guide/usa/c3i/fs_clam_lake_elf2003.pdf. Project Sanguine built an ELF (Extremely Low-Frequency) transmitter in northern Wisconsin and another transmitter in northern Michigan in the 1970s and 1980s. The purpose of these ELF transmitters was to send messages to the U.S. nuclear submarine force at a frequency of 76 Hz. These very low-frequency electromagnetic waves can penetrate the highly conductive seawater of the oceans to a depth of several hundred feet, allowing the submarines to remain at depth, rather than coming close to the surface for radio communications. You see, normal radio waves in the Very Low-Frequency (VLF) band, at frequencies of about 20,000 Hz, only penetrate seawater to a depth of 10 – 20 feet. This ELF communications system became fully operational on October 1, 1989, when the two transmitter sites began synchronized transmissions of ELF broadcasts to the U.S. submarine fleet.
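
For those who like to check the numbers, here is a back-of-the-envelope Python calculation of my own showing why 76 Hz works where 20,000 Hz does not. It uses the standard skin depth formula for a good conductor, delta = sqrt(2 / (omega * mu0 * sigma)), and assumes a typical textbook seawater conductivity of about 4 S/m - an assumed value, not a figure from the Project Sanguine documents.

# A rough skin depth calculation (my own illustration) for seawater.
import math

MU0 = 4.0e-7 * math.pi      # permeability of free space (H/m)
SIGMA_SEAWATER = 4.0        # assumed conductivity of seawater (S/m)

def skin_depth_meters(frequency_hz):
    omega = 2.0 * math.pi * frequency_hz
    return math.sqrt(2.0 / (omega * MU0 * SIGMA_SEAWATER))

for f in (76.0, 20000.0):
    d = skin_depth_meters(f)
    print(f"{f:>8.0f} Hz: skin depth ~ {d:6.1f} m ({d * 3.28:5.0f} feet)")
# Roughly 29 m (~95 feet) at 76 Hz versus about 1.8 m (~6 feet) at 20 kHz. Since the
# field falls off by a factor of e for every skin depth, the ELF signal remains usable
# at far greater depths than the VLF signal.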

Anyway, back in the summers of 1973 and 1974, our team was collecting electromagnetic data from the WTF using a DEC PDP-8/e minicomputer. The machine cost about $30,000 in 1973 dollars and was about the size of a large side-by-side refrigerator, with 32K of magnetic core memory. We actually hauled this machine through the lumber trails of the Chequamegon National Forest and powered it with an old diesel generator to digitally record the reflected electromagnetic data in the field. For my thesis, I then created models of the Earth’s upper conductivity structure down to a depth of about 20 km, using programs written in BASIC. The beautiful thing about the DEC PDP-8/e was that the computer time was free, so I could play around with different models until I got a good fit to what we recorded in the field. The one thing I learned by playing with the models on the computer was that the electromagnetic waves did not go directly down into the Earth from the WTF like common sense would lead you to believe. Instead, the ELF waves traveled through the air in a wave-guide between the ionosphere and the conductive rock of the Earth to where you were observing and then made a nearly 90 degree turn straight down into the Earth, as they were refracted into the much more conductive rock. So at your observing station, you really only saw ELF plane waves going straight down and reflecting straight back up off the conductivity differences in the upper crust, and this made modeling much easier than dealing with ELF waves transmitted through the Earth from the WTF. And this is what happens for our submarines too; the ELF waves travel through the air all over the world, channeled between the conductive seawater of the oceans and the conductive ionosphere of the atmosphere, like a huge coax cable. When the ELF waves reach a submarine, they are partially refracted straight down to the submarine. I would never have gained this insight by simply solving Maxwell’s equations (1864) for electromagnetic waves alone! This made me realize that one could truly use computers to do simulated experiments to uncover real knowledge: by taking the fundamental laws of the Universe, really the handful of effective theories that we currently have, like Maxwell's equations, simulating those equations in computer code and letting them unfold in time, one could actually see the emergent behaviors of complex systems arise in a simulated Universe. All the sciences now do this routinely, but back in 1974, it was quite a surprise for me.
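
As a flavor of that kind of 1D modeling, here is a toy Python version of my own - with made-up resistivity values, and certainly not my original BASIC thesis code - of an ELF plane wave traveling straight down and reflecting off a conductivity contrast in the crust. For two good conductors with the same permeability, the normal-incidence reflection coefficient reduces to R = (sqrt(sigma1) - sqrt(sigma2)) / (sqrt(sigma1) + sqrt(sigma2)).

# A toy 1D reflection calculation (my own illustration, hypothetical resistivities).
import math

def reflection_coefficient(resistivity_upper_ohm_m, resistivity_lower_ohm_m):
    sigma1 = 1.0 / resistivity_upper_ohm_m   # conductivity of the upper layer
    sigma2 = 1.0 / resistivity_lower_ohm_m   # conductivity of the layer below
    return (math.sqrt(sigma1) - math.sqrt(sigma2)) / (math.sqrt(sigma1) + math.sqrt(sigma2))

# Hypothetical crustal model: resistive upper crust over a more conductive layer.
print(reflection_coefficient(1000.0, 100.0))   # about -0.52: a strong reflection
print(reflection_coefficient(1000.0, 900.0))   # about -0.03: a weak reflection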

Figure 14 – Some graduate students huddled around a DEC PDP-8/e minicomputer. Notice the teletype machines in the foreground on the left that were used to input code and data into the machine and to print out results as well.

After I graduated from Wisconsin in 1975, I went to work for Shell and Amoco exploring for oil from 1975 to 1979, before switching into a career in IT in 1979. But even during this period, I mainly programmed geophysical models of seismic data in FORTRAN for Shell and Amoco. It was while programming computer simulations of seismic data that the seeds of softwarephysics began to creep into my head, as I painstakingly assembled lots of characters of computer code into complex patterns that did things, only to find that no matter how carefully I tried to do that, my code always seemed to fail because there were just way too many ways to assemble the characters into computer code that was "close" but not quite right. It was sort of like trying to assemble lots of atoms into complex organic molecules that do things, only to find that you were off by a small factor, and those small errors made the computer code fail. At this point, I was beginning to have some fuzzy thoughts about being the victim of the second law of thermodynamics misbehaving in a nonlinear Universe. But those initial thoughts about softwarephysics accelerated dramatically in 1979 when I made a career change to become an IT professional. One very scary Monday morning, I was conducted to my new office cubicle in Amoco’s IT department, and I immediately found myself surrounded by a large number of very strange IT people, all scurrying about in a near state of panic, like the characters in Alice in Wonderland. Suddenly, it seemed like I was trapped in a frantic computer simulation, like the ones I had programmed on the DEC PDP-8/e, buried in punch card decks and fan-fold listings. After nearly 38 years in the IT departments of several major corporations, I can now state with confidence that most corporate IT departments can best be described as “frantic” in nature. This new IT job was a totally alien experience for me, and I immediately thought that I had just made a very dreadful mistake. Granted, I had been programming geophysical models for my thesis and for oil companies, ever since taking a basic FORTRAN course back in 1972, but that was the full extent of my academic credentials in computer science.

The Beginnings of Softwarephysics
So to help myself cope with the daily mayhem of life in IT, I began to develop softwarephysics. This was because I noticed that, unlike all of the other scientific and engineering professions, IT professionals did not seem to have a theoretical framework to help them cope with that daily mayhem. But I figured that if you could apply physics to geology, why not apply physics to software? When I first switched from physics to geophysics in 1973, I was very impressed by the impact that applying simple 19th-century physics to geology had had upon the field during the plate tectonics revolution (1965 – 1970). When I first graduated from the University of Illinois in 1973 with a B.S. in physics, I was very dismayed to find that the end of the Space Race and a temporary lull in the Cold War had left very few prospects open for a budding physicist. So on the advice of my roommate, a geology major, I headed up north to the University of Wisconsin in Madison to obtain an M.S. in geophysics, with the hope of obtaining a job with an oil company exploring for oil. These were heady days for geology because we were at the very tail end of the plate tectonics revolution that totally changed the fundamental models of geology. The plate tectonics revolution peaked during the five-year period 1965 – 1970. Having never taken a single course in geology during all of my undergraduate studies, I was accepted into the geophysics program with many deficiencies in geology, so I had to take many undergraduate geology courses to get up to speed in this new science. The funny thing was that the geology textbooks of the time had not yet had time to catch up with the new plate tectonics revolution of the previous decade, so they still embraced the “classical” geological models of the past, which now seemed a little bit silly in light of the new plate tectonics model. But this was also very enlightening. It was like looking back at the prevailing thoughts in physics prior to Newton or Einstein. What the classical geological textbooks taught me was that over the course of several hundred years, the geologists had figured out what had happened, but not why it had happened. Up until 1960 geology was mainly an observational science relying upon the human senses of sight and touch, and by observing and mapping many outcrops in detail, the geologists had figured out how mountains had formed, but not why.

In classical geology, most geomorphology was thought to arise from local geological processes. For example, fold mountains were thought to form off the coast of a continent when a geosyncline developed because the continental shelf underwent a dramatic period of subsidence for some unknown reason. Then very thick layers of sedimentary rock were deposited into the subsiding geosyncline, consisting of alternating layers of sand and mud that turned into sandstones and shales, intermingled with limestones that were deposited from the carbonate shells of dead sea life floating down or from coral reefs. Next, for some unknown reason, the sedimentary rocks were laterally compressed into folded structures that slowly rose from the sea. More compression then followed, exceeding the ability of the sedimentary rock to deform plastically, resulting in thrust faults forming that uplifted blocks of sedimentary rock even higher. As compression continued, some of the sedimentary rocks were forced down to great depths within the Earth and placed under great pressures and temperatures. These sedimentary rocks were then far from the thermodynamic equilibrium of the Earth’s surface where they had originally formed, and thus the atoms within recrystallized into new metamorphic minerals. At the same time, for some unknown reason, huge plumes of granitic magma rose from deep within the Earth’s interior as granitic batholiths. Then over several hundred million years, the overlying folded sedimentary rocks slowly eroded away, revealing the underlying metamorphic rocks and granitic batholiths, allowing human beings to cut and polish them into pretty rectangular slabs for the purpose of slapping them up onto the exteriors of office buildings and onto kitchen countertops. In 1960, classical geologists had no idea why the above sequence of events, producing very complicated geological structures, seemed to happen over and over again many times over the course of billions of years. But with the advent of plate tectonics (1965 – 1970), all was suddenly revealed. It was the lateral movement of plates on a global scale that made it all happen. With plate tectonics, everything finally made sense. Fold mountains did not form from purely local geological factors in play. There was the overall controlling geological process of global plate tectonics making it happen. For a quick overview, please see:

Fold Mountains
http://www.youtube.com/watch?v=Jy3ORIgyXyk

Figure 15 – Fold mountains occur when two tectonic plates collide. A descending oceanic plate first causes subsidence offshore of a continental plate, which forms a geosyncline that accumulates sediments. When all of the oceanic plate between two continents has been consumed, the two continental plates collide and compress the accumulated sediments in the geosyncline into fold mountains. This is how the Himalayas formed when India crashed into Asia.

Now the plate tectonics revolution was really made possible by the availability of geophysical data. It turns out that most of the pertinent action of plate tectonics occurs under the oceans, at the plate spreading centers and subduction zones, far removed from the watchful eyes of geologists in the field with their notebooks and trusty hand lenses. Geophysics really took off after World War II, when universities were finally able to get their hands on cheap war surplus gear. By mapping variations in the Earth’s gravitational and magnetic fields and by conducting deep oceanic seismic surveys, geophysicists were finally able to figure out what was happening at the plate spreading centers and subduction zones. Actually, the geophysicist and meteorologist Alfred Wegener had figured this all out in 1912 with his theory of Continental Drift, but at the time Wegener was ridiculed by the geological establishment. You see, Wegener had been an Arctic explorer and had noticed that sometimes sea ice split apart, like South America and Africa, only later to collide again to form mountain-like pressure ridges. Unfortunately, Wegener froze to death in 1930 trying to provision some members of his last exploration party to Greenland, never knowing that one day he would finally be vindicated.

So when I first joined the IT department of Amoco, I had the vague feeling that perhaps much of the angst that I saw in my fellow IT coworkers was really due to the lack of an overall theoretical framework, like plate tectonics, that could help to explain their daily plight, and also help to alleviate some of its impact, by providing some insights into why doing IT for a living was so difficult, and to suggest some possible remedies, and to provide a direction for thought as well. So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all of the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software, by comparing software to how things behaved in the physical Universe. So in physics, we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software.

So my original intent for softwarephysics was to merely provide a theoretical framework for IT professionals to help them better understand the behavior of software during its development and its behavior under load when running in Production. My initial thoughts were that the reason IT work was so difficult was that programmers were constantly fighting a losing battle with the second law of thermodynamics in a nonlinear Universe. You see, programmers must assemble a huge number of characters into complex patterns of source code in order to instruct a computer to perform useful operations, and because the Universe is largely nonlinear in nature, meaning that small changes to initial conditions will most likely result in dramatic, and many times, lethal outcomes for software, IT work was nearly impossible to do, and that is why most IT professionals were usually found to be on the verge of a nervous breakdown during the course of a normal day in IT. For more on that see The Fundamental Problem of Software. At the same time, I subconsciously knew that living things must also assemble an even larger number of atoms into complex molecules in order to perform the functions of life in a nonlinear Universe, so obviously, it would seem that the natural solution to the problem that IT professionals faced each day would be simply to apply a biological approach to developing and maintaining software. However, this did not gel in my mind at first, until one day, while I was working on some code, I came up with the notion that we needed to stop writing code - we needed to "grow" code instead in a biological manner. For more on that see Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework.

Using Softwarephysics to Help Explore the Origin of Life
But as I saw complex corporate software slowly evolve over the decades, it became more and more evident to me that much could be gained by studying this vast computer simulation that the IT community had been working on for the past 75 years, or 2.4 billion seconds. NASA has defined life broadly as "A self-sustaining chemical system capable of Darwinian evolution." Personally, after many years of reflection, I feel that the research community that is currently exploring the origin of life on the Earth and elsewhere is too obsessed with simply finding other carbon-based life forms like themselves. Carbon-based life forms are really just one form of self-replicating information to be currently found on our planet, so I feel that more attention should really be focused upon finding other forms of self-replicating information sharing the Universe with ourselves, and the best place to start that, with the least cost, is to simply look right here on the Earth. To do that, all we need to do is simply remove the word "chemical" from NASA's definition of life and redefine self-replicating information as "A self-sustaining system capable of Darwinian evolution." That is why, in many of my postings, I have been stressing that the origin and evolution of commercial software provides a unique opportunity for those interested in the origin and early evolution of life on the Earth, and elsewhere, because both programmers and living things are faced with nearly identical problems. My suggestion in those postings has been that everybody has been looking just a couple of levels too low in the hierarchy of self-replicating information. Carbon-based living things are just one form of self-replicating information, and all forms of self-replicating information have many characteristics in common as they battle the second law of thermodynamics in a nonlinear Universe.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics:

1. All self-replicating information evolves over time through the Darwinian processes of inheritance, innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.

2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.

3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.

4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.

5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.

6. Most hosts are also forms of self-replicating information.

7. All self-replicating information has to be a little bit nasty in order to survive.

8. The defining characteristic of self-replicating information is the ability of self-replicating information to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic.

Over the past 4.56 billion years we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each wave came to dominate the Earth:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Some object to the idea of memes and software being forms of self-replicating information because currently, both are products of the human mind. But I think that objection stems from the fact that most people simply do not consider themselves to be a part of the natural world. Instead, most people consciously or subconsciously consider themselves to be a supernatural and immaterial spirit that is temporarily haunting a carbon-based body. In order for evolution to take place, we need all three Darwinian processes at work – inheritance, innovation and natural selection. And that is the case for all forms of self-replicating information, including carbon-based life, memes and software. Currently, software is being written and maintained by human programmers, but that will likely change in the next 10 – 50 years when the Software Singularity occurs and AI software will be able to write and maintain software better than a human programmer. Even so, one must realize that human programmers are also just machines with a very complicated and huge neural network of neurons that have been trained with very advanced Deep Learning techniques to code software. Nobody learned how to code software sitting alone in a dark room. All programmers inherited the memes for writing software from teachers, books, other programmers or by looking at the code of others. Also, all forms of selection are “natural” unless they are made by supernatural means. So a programmer pursuing bug-free software by means of trial and error is no different than a cheetah deciding upon which gazelle in a herd to pursue.

Software is now rapidly becoming the dominant form of self-replicating information on the planet and is having a major impact on mankind as it comes to predominance. For more on this see: A Brief History of Self-Replicating Information. However, of the five waves of self-replicating information, the only form that we currently have a good history of is software, going all the way back to May of 1941 when Konrad Zuse first cranked up his Z3 computer. So the best model for the origin of life might be obtained by studying the hodge-podge of precursors, false starts, and failed attempts that led to the origin and early evolution of software, with particular attention paid to the parasitic/symbiotic relationships that allowed software to bootstrap itself into existence.

Yes, there are many other examples of universal Darwinism at work in the Universe, such as the evolution of languages or political movements, but I think that the origin and evolution of software provides a unique example because both programmers and living things are faced with nearly identical problems. A programmer must assemble a huge number of characters into complex patterns of source code to instruct a computer to perform useful operations. Similarly, living things must assemble an even larger number of atoms into complex molecules in order to perform the functions of life. And because the Universe is largely nonlinear in nature, meaning that small changes to initial conditions will most likely result in dramatic, and many times, lethal outcomes for both software and living things, the evolutionary histories of living things on Earth and of software have both converged upon very similar solutions to overcome the effects of the second law of thermodynamics in a nonlinear Universe. For example, both living things and software went through a very lengthy prokaryotic architectural period, with little internal structure, to be followed by a eukaryotic architectural period with a great deal of internal structure, which later laid the foundations for forms with a complex multicellular architecture. And both also experienced a dramatic Cambrian explosion in which large multicellular systems arose, consisting of huge numbers of somatic cells that relied upon the services of the cells found within a number of discrete organs. For more on this see the SoftwarePaleontology section of SoftwareBiology and Software Embryogenesis.

Also, software presents a much clearer distinction between the genotype and phenotype of a system than do other complex systems, like languages or other technologies that also undergo evolutionary processes. The genotype of software is determined by the source code files of programs, while the phenotype of software is expressed by the compiled executable files that run upon a computer and that are generated from the source code files by a transcription process similar to the way genes are transcribed and translated into proteins. Also, like a DNA or RNA sequence, source code provides a very tangible form of self-replicating information that can be studied over historical time without ambiguity. Source code is also not unique, in that many different programs, and even programs written in different languages, can produce executable files with identical phenotypes or behaviors.
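
Here is a trivial Python illustration of my own of that last point: two different "genotypes" (source texts) with an identical "phenotype" (observable behavior).

# Two different source texts, one behavior (my own toy example).
def double_v1(n):
    return n * 2          # one way to write it

def double_v2(n):
    return n + n          # a different source text, same behavior

assert all(double_v1(n) == double_v2(n) for n in range(-1000, 1000))
print("different source, identical behavior")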

Currently, many researchers working on the origin of life and astrobiology are trying to produce computer simulations to help investigate how life could have originated and evolved at its earliest stages. But trying to incorporate all of the relevant elements into a computer simulation is proving to be a very daunting task indeed. Why not simply take advantage of the naturally occurring $10 trillion computer simulation that the IT community has already patiently evolved over the past 75 years and has already run for 2.4 billion seconds? It has been hiding there in plain sight the whole time for anybody with a little bit of daring and flair to explore.

Some might argue that this is an absurd proposal because software currently is a product of the human mind, while biological life is not a product of intelligent design. Granted, biological life is not a product of intelligent design, but neither is the human mind. The human mind and biological life are both the result of natural processes at work over very long periods of time. This objection simply stems from the fact that we are all still, for the most part, self-deluded Cartesian dualists at heart, with seemingly a little “Me” running around within our heads that just happens to have the ability to write software and to do other challenging things. Thus, most human beings do not think of themselves as part of the natural world. Instead, they think of themselves, and others, as immaterial spirits temporarily haunting a body, and when that body dies the immaterial spirit lives on. In this view, human beings are not part of the natural world. Instead, they are part of the supernatural. But since the human mind is a product of natural processes in action, so is the software that it produces. For more on that see The Ghost in the Machine the Grand Illusion of Consciousness.

Still, I realize that there might be some hesitation to pursue this line of research because it might be construed by some as an advocacy of intelligent design, but that is hardly the case. The evolution of software over the past 75 years has essentially been a matter of Darwinian inheritance, innovation and natural selection converging upon solutions similar to those of biological life. For example, it took the IT community about 60 years of trial and error to finally stumble upon an architecture similar to that of complex multicellular life that we call SOA – Service Oriented Architecture. The IT community could have easily discovered SOA back in the 1960s if it had adopted a biological approach to software and intelligently designed software architecture to match that of the biosphere. Instead, the worldwide IT architecture we see today essentially evolved on its own because nobody really sat back and designed this very complex worldwide software architecture; it just sort of emerged through small incremental changes brought on by many millions of independently acting programmers through a process of trial and error. When programmers write code, they always take some old existing code first and then modify it slightly by making a few changes. Then they add a few additional new lines of code and test the modified code to see how far they have come. Usually, the code does not work on the first attempt because of the second law of thermodynamics, so they then try to fix the code and try again. This happens over and over until the programmer finally has a good snippet of new code. Thus, new code comes into existence through the Darwinian mechanisms of inheritance coupled with innovation and natural selection. Some might object that this coding process of software is actually a form of intelligent design, but that is not the case. It is important to differentiate between intelligent selection and intelligent design. In softwarephysics we extend the concept of natural selection to include all selection processes that are not supernatural in nature, so for me, intelligent selection is just another form of natural selection. This is really nothing new. Predators and prey constantly make “intelligent” decisions about what to pursue and what to evade, even if those “intelligent” decisions are only made with the benefit of a few interconnected neurons or molecules. So in this view, the selection decisions that a programmer makes after each iteration of working on some new code really are a form of natural selection. After all, programmers are just DNA survival machines with minds infected with memes for writing software, and the selection processes that the human mind undergoes while writing software are just as natural as the Sun drying out worms on a sidewalk or a cheetah deciding upon which gazelle in a herd to pursue.

For example, when IT professionals slowly evolved our current $10 trillion worldwide IT architecture over the past 2.4 billion seconds, they certainly did not do so with the teleological intent of creating a simulation of the evolution of the biosphere. Instead, like most organisms in the biosphere, these IT professionals were simply trying to survive just one more day in the frantic world of corporate IT. It is hard to convey the daily mayhem and turmoil of corporate IT to outsiders. When I first hit the floor of Amoco’s IT department, I was in total shock, but I quickly realized that all IT jobs essentially boiled down to simply pushing buttons. All you had to do was to push the right buttons, in the right sequence, at the right time, and with zero errors. How hard could that be? Well, it turned out to be very difficult indeed, and in response I began to subconsciously work on softwarephysics to try to figure out why this job was so hard, and how I could dig myself out of the mess that I had gotten myself into. After a while, it dawned on me that the fundamental problem was the second law of thermodynamics operating in a nonlinear simulated universe. The second law made it very difficult to push the right buttons in the right sequence and at the right time because there were so many erroneous combinations of button pushes. Writing and maintaining software was like looking for a needle in a huge utility phase space. There were just nearly an infinite number of ways of pushing the buttons “wrong”. The other problem was that we were working in a very nonlinear utility phase space, meaning that pushing just one button incorrectly usually brought everything crashing down. Next, I slowly began to think of pushing the correct buttons in the correct sequence as stringing together the correct atoms into the correct sequence to make molecules in chemical reactions that could do things. I also knew that living things were really great at doing that. Living things apparently overcame the second law of thermodynamics by dumping entropy into heat as they built low entropy complex molecules from high entropy simple molecules and atoms. I then began to think of each line of code that I wrote as a step in a biochemical pathway. The variables were like organic molecules composed of characters or “atoms” and the operators were like chemical reactions between the molecules in the line of code. The logic in several lines of code was the same thing as the logic found in several steps of a biochemical pathway, and a complete function was the equivalent of a full-fledged biochemical pathway in itself. But one nagging question remained - how could I take advantage of these similarities to save myself? That’s a long story, but in 1985 I started working on BSDE – the Bionic Systems Development Environment, which was used at Amoco to “grow” software biologically from an “embryo” by having programmers turn on and off a set of “genes”. For more on that see Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework.
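
To put some rough numbers behind that "needle in a huge utility phase space" claim, here is a quick Python calculation of my own (the 95-character printable ASCII alphabet and the 80-character line length are just illustrative assumptions):

# A rough count (my own illustration) of how many ways there are to push the buttons "wrong".
printable_characters = 95      # roughly the number of printable ASCII characters
line_length = 80               # a single 80-character line of code

possible_lines = printable_characters ** line_length
print(f"possible 80-character lines: about 10^{len(str(possible_lines)) - 1}")
# About 10^158 possible lines - vastly more than the roughly 10^80 atoms in the
# observable Universe - and only a minuscule fraction of them even compile,
# let alone do something useful.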

The Social Impacts of the Coming Predominance of Software
Over the years, I have seen the Software Universe that I first encountered back in 1979 expand from the small population of IT workers in the world to the entire world at large. I have also seen that as software comes to predominance, it has caused a great deal of social, political and economic unrest as discussed in The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight, Machine Learning and the Ascendance of the Fifth Wave and Making Sense of the Absurdity of the Real World of Human Affairs. The immediate difficulty is that software has displaced many workers over the past 75 years, and as software comes to predominance, it will eventually reduce all human labor to a value of zero over the next 10 - 100 years. How will the age-old oligarchical societies of the world deal with that in a manner that allows civilization to continue? The 2016 Presidential Election cycle in the United States was a dramatic example of this in action. The election was totally dominated by the effects of software coming to predominance - rogue email servers, hacking, leaking, software security breaches in general and wild Twitter feeds by candidates. But the election was primarily determined by the huge loss of middle class jobs due to automation by software. Now it's pretty hard to get mad at software because it is so intangible in nature, so many mistakenly directed their anger at other people because that is what mankind has been doing for the past 200,000 years. But this time is different because the real culprit is software coming of age. Unfortunately, those low-skilled factory jobs that have already evaporated are not coming back, no matter what some may promise. And those jobs are just the first in a long line. With the current pace of AI and Machine Learning research and implementation, now that they both can make lots of money, we will soon find self-driving trucks and delivery vehicles, automated cranes at container ports and automated heavy construction machinery at job sites. We have already lost lots of secretaries, bank tellers, stock brokers, insurance agents, retail salespeople and travel agents, but that is just the beginning. Soon we will see totally automated fast food restaurants, to be later followed by the automation of traditional sit-down restaurants, and automated retail stores without a single employee, like the totally automated parking garages we already have.

I have now been retired for nearly a month, and after having stopped working for the first time in 50 years, I can now state that there are plenty of things to do to keep busy. For example, my wife and I are doing the daycare for two of our grandchildren for our daughter, a high school Biology and Chemistry teacher, and I like to take online MOOC courses, and now I have all the time in the world to do that. So the end of working for a living is really not a bad thing, but the way we currently have civilization set up will not work in a future without work, using the current norm we have of rewarding people for what they produce. How that will all unfold remains one of the great mysteries of our time. For an intriguing view of one possibility please see THE MACHINE STOPS by E.M. Forster (1909) at:

https://www.ele.uri.edu/faculty/vetter/Other-stuff/The-Machine-Stops.pdf

Yes - from 1909!

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston