Sunday, December 25, 2016

The Continuing Adventures of Mr. Tompkins in the Software Universe

George Gamow was a highly regarded theoretical physicist and cosmologist of the last century who liked to explain concepts in modern physics to the general public by taking readers along on adventures in alternative universes with alternative values for the physical constants found within our own Universe. He did so by creating a delightful fictional character back in 1937 by the name of Mr. Tompkins, an inquisitive bank clerk who was the main character in a series of four popular science books, in which he participated in a number of such scientific adventures in alternative universes. I bring this up because back in 1979, when I first switched careers from being an exploration geophysicist to become an IT professional, I had a very similar experience. At the time, it seemed to me as if the strange IT people that I was now working with on a daily basis had created for themselves their own little Software Universe, with themselves as its sole inhabitants. But over the years, I have seen this strange alternative Software Universe slowly expand in size, to the point that nearly all of the Earth's inhabitants are now also inhabitants of this alternative Software Universe.

Mr. Tompkins first appeared in George Gamow's mind in 1937 when he wrote a short story called A Toy Universe and unsuccessfully tried to have it published by the magazines of the day, such as Harper's, The Atlantic Monthly, and Coronet. However, in 1938 he was finally able to publish a series of articles in a British magazine called Discovery that later became the book Mr Tompkins in Wonderland in 1939. He followed it with Mr Tompkins Explores the Atom in 1944 and two more books in later years. The adventures of Mr. Tompkins begin when he spends the afternoon of a bank holiday attending a lecture on the theory of relativity. During the lecture he drifts off to sleep and enters a dream world in which the speed of light is a mere 4.5 m/s (10 mph). This becomes apparent to him when he notices that passing cyclists are subject to a noticeable Lorentz–FitzGerald contraction.

As I explained in the Introduction to Softwarephysics, softwarephysics is a simulated science designed to help explain how the simulated Software Universe that we have created for ourselves behaves. To do so, I simply noticed that, like our physical Universe, the Software Universe is quantized and extremely nonlinear in nature. Thanks to quantum mechanics (1926) we now know that our physical Universe is quantized into very small chunks of matter and energy, and probably into small chunks of space and time as well. Similarly, the Software Universe is composed of quantized chunks of software that start off as discrete characters in software source code (see Quantum Software for details). Thanks to quantum mechanics, we also now know that the macroscopic behaviors of our Universe are an outgrowth of the quantum mechanical operations of the atoms within it. Similarly, the macroscopic operations of the Software Universe are an outgrowth of the quantized operations of the source code that makes it all work. Because very small changes to software source code can produce hugely significant changes to the way software operates, software is probably the most nonlinear substance known to mankind. The extreme nonlinear behavior of quantized software, combined with the devastating effects of the second law of thermodynamics, which normally produces very buggy, nonfunctional software, necessarily brings in the Darwinian pressures that have caused software to slowly evolve over the past 75 years, or 2.4 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941. For more on this see The Fundamental Problem of Software.
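To make the nonlinearity concrete, here is a toy illustration of my own (not from the original post): two versions of the same little function that differ by a single source-code character, yet behave completely differently.

```python
# A minimal sketch of software nonlinearity: one character changed in the
# source code produces a drastically different macroscopic behavior.

def total_v1(n):
    # Intended behavior: sum the integers 1 through n.
    s = 0
    for i in range(1, n + 1):
        s = s + i
    return s

def total_v2(n):
    # A single-character "mutation": the '+' in the accumulator line
    # became a '*'. The function now always returns 0, because s starts
    # at 0 and multiplying by anything leaves it 0.
    s = 0
    for i in range(1, n + 1):
        s = s * i
    return s

print(total_v1(10))  # 55
print(total_v2(10))  # 0
```

A one-character change out of roughly a hundred characters of source code did not degrade the output by one percent; it destroyed it entirely, which is exactly the kind of behavior that makes the second law of thermodynamics so dangerous for software.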

Now the reason Mr. Tompkins never noticed the effects of the special theory of relativity in his everyday life was that the speed of light is so large, but once the speed of light was reduced to 10 mph in an alternative universe, all of the strange effects of the special theory of relativity became readily apparent, and with enough time, would have come to seem quite normal to him as a part of everyday life. Similarly, the strange effects of quantum mechanics only seem strange to us because Planck's constant is so very small - 6.62607004 × 10^-34 kg m^2/s - and therefore only become apparent for very small things like atoms and electrons. However, if Planck's constant were very much larger, then we would also begin to grow accustomed to the strange behaviors of objects behaving in a quantum mechanical way. For example, in quantum mechanics the spin of a single electron can be both up and down at the same time, but in the classical Universe that we are used to, macroscopic things like a child's top can only have a spin of up or down at any given time: the top can spin in a clockwise or a counterclockwise manner, but it cannot do both at once. Similarly, in quantum mechanics a photon or electron can go through both slits of a double-slit experiment at the same time, so long as you do not put detectors at the slit locations.

Figure 1 – A macroscopic top can only spin clockwise or counterclockwise at one time.

Figure 2 – But electrons can be in a mixed quantum mechanical state in which they both spin up and spin down at the same time.

Figure 3 – Similarly, tennis balls can only go through one slit in a fence at a time. They cannot go through both slits of a fence at the same time.

Figure 4 – But at the smallest of scales in our quantum mechanical Universe, electrons and photons can go through both slits at the same time, producing an interference pattern.

Figure 5 – You can see this interference pattern of photons if you look at a distant porch light through the mesh of a sheer window curtain.

So in quantum mechanics at the smallest of scales, things can be both true and false at the same time. Fortunately for us, at the macroscopic sizes of everyday life, these bizarre quantum effects of nature seem to fade away, so that the things I just described are either true or false in everyday life. Macroscopic tops either spin up or spin down, and tennis balls pass through either one slit or the other, but not both at the same time. Indeed, it is rather strange that, although all of the fundamental particles of our Universe seem to behave in a fuzzy quantum mechanical manner in which true things and false things can both seem to blend into a cosmic grayness of ignorance, at the macroscopic level of our physical Universe, there are still such things as absolute truth and absolute falsehoods that can be measured in a laboratory in a reproducible manner. This must have been so for the Darwinian processes of innovation honed by natural selection to have brought us forth. After all, if Schrödinger's cat could really be both dead and alive at the same time, these Darwinian processes could not have worked, and we would not be here contemplating the differences between true and false assertions. The end result is that in our physical Universe, at the smallest of scales, there is no absolute truth, there are only quantum mechanical opinions, but at the macroscopic level of everyday life, there are indeed such things as absolute truth and absolute falsehoods, and these qualities can be measured in a laboratory in a reproducible manner.

The Current Bizarre World of Political Social Media Software in the United States
Now imagine that our Mr. Tompkins had entered into a bizarre alternative universe in which things were just the opposite. Imagine a universe in which, at the smallest of scales, things operated classically, as if things were either absolutely true or false, but at a macroscopic level, things were seen to be both true and false at the same time! Well, we currently do have such an alternative universe close at hand to explore. It is the current bizarre world of political social media software in the United States of America. Recall that currently, the Software Universe runs on classical computers in which a bit can be either a "1" or a "0". In a classical computer a bit can only be a "1" or a "0" at any given time - it cannot be both a "1" and a "0" at the same time. For that you would need to have software running on a quantum computer, and for the most part, we are not there yet. So at the smallest of scales in our current Software Universe, the concept of there actually being a real difference between true and false assertions is fundamental. None of the current software code that makes it all work could possibly run if this were not the case. So it is quite strange that at the macroscopic level of political social media software in the United States, just the opposite seems to be the case. Unfortunately, in today's strange world of political social media software, there seems to be no right or wrong and no distinction between the truth and lies. We now have "alternative facts" and claims of "fake news" abounding, and Twitter feeds from those in power loaded down with false information. Because of this, for any given assertion, perhaps 30% of Americans will think that the assertion is true, while the other 70% will think that it is false. In the Software Universe there are no longer any facts; there are only opinions in a seemingly upside-down quantum mechanical sense.
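The contrast between a classical bit and a quantum bit can be sketched in a few lines. This is my own toy illustration, using plain arithmetic rather than any real quantum computing library: a classical bit holds exactly one value, while a qubit holds a normalized pair of amplitudes over both values at once.

```python
# A toy sketch contrasting a classical bit with a qubit's superposition.
import math

# A classical bit is exactly "0" or "1" at any given time - never both.
classical_bit = 1

# A qubit in an equal superposition: amplitude 1/sqrt(2) on "0" AND on "1"
# at the same time. Only a measurement forces it to pick one value.
qubit = (1 / math.sqrt(2), 1 / math.sqrt(2))

# The probability of measuring "0" or "1" is the squared amplitude.
p0 = qubit[0] ** 2  # each is about 0.5
p1 = qubit[1] ** 2
print(p0, p1)
```

Until a measurement is made, the qubit genuinely carries both values, which is why none of today's classical software, built entirely on definite "1"s and "0"s, could run on such fuzziness.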

The Danger of Believing in Things
In The Danger of Believing in Things I highlighted the dangers of not employing critical thought when evaluating assertions in our physical Universe. The problem today is that most people now seem to spend more time living in the simulated Software Universe that we have created than in our actual physical Universe. The end result is that, instead of seeking out the truth, the worldview memes infecting our minds simply seek out supporting memes in the Software Universe that lend support to the current worldview memes within our minds. But unlike in our current simulated Software Universe, where those worldview memes can be both absolutely true and absolutely false at the same time, in our physical Universe, which behaves classically at the day-to-day scales in which we all live, things can still only be absolutely true or false, but not both. The most dangerous aspect of this new fake reality is that the new Administration of the United States of America maintains that climate change is a hoax, simply because they say it is a hoax, and sadly, for many Americans that is good enough. Now climate change might indeed be a hoax in our simulated Software Universe, or it might not be a hoax, because there is no absolute truth in our simulated Software Universe at the macroscopic level; there are only opinions. But that is not the case in the physical Universe in which we all actually live, where climate change is rapidly underway. For more on that please see This Message on Climate Change Was Brought to You by SOFTWARE. In the real physical Universe in which we all actually live, it is very important that we always take the words of Richard Feynman very seriously, for "reality must take precedence over public relations, for nature cannot be fooled."

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Thursday, September 22, 2016

Some Thoughts on the Origin of Softwarephysics and Its Application Beyond IT

On December 1, 2016 I retired at age 65 from 37+ years as an IT professional and a 41+ year career working for various corporations. During those long years I accidentally stumbled upon the fundamentals of softwarephysics, while traipsing through the jungles of several corporate IT departments, and I thought that it might be a good time to take a look back over the years and outline how all that happened.

The Rise of Software
Currently, we are witnessing one of those very rare moments in time when a new form of self-replicating information, in the form of software, is coming to dominance. Software is now so ubiquitous that it seems like the whole world is immersed in a Software Universe of our own making, surrounded by PCs, tablets, smart phones and the software now embedded in most of mankind's products. In fact, I am now quite accustomed to sitting with audiences of younger people who are completely engaged with their "devices" before, during and after a performance. But this is a very recent development in the history of mankind. In the initial discussion below, I will first outline a brief history of the evolution of hardware technology to explain how we got to this state, but it is important to keep in mind that it was the relentless demands of software for more and more memory and CPU-cycles over the years that really drove the exponential explosion of hardware capability. After that, I will explain how the concept of softwarephysics slowly developed in my mind over the years as I interacted with the software running on these rapidly developing machines.

It all started back in May of 1941 when Konrad Zuse first cranked up his Z3 computer. The Z3 was the world's first real computer and was built with 2400 electromechanical relays that were used to perform the switching operations that all computers use to store information and to process it. To build a computer, all you need is a large network of interconnected switches that have the ability to switch each other on and off in a coordinated manner. Switches can be in one of two states, either open (off) or closed (on), and we can use those two states to store the binary numbers of “0” or “1”. By using a number of switches teamed together in open (off) or closed (on) states, we can store even larger binary numbers, like “01100100” = 100. We can also group the switches into logic gates that perform logical operations. For example, in Figure 1 below we see an AND gate composed of two switches A and B. Both switches A and B must be closed in order for the light bulb to turn on. If either switch A or B is open, the light bulb will not light up.

Figure 1 – An AND gate can be simply formed from two switches. Both switches A and B must be closed, in a state of “1”, in order to turn the light bulb on.

Additional logic gates can be formed from other combinations of switches as shown in Figure 2 below. It takes about 2 - 8 switches to create each of the various logic gates shown below.

Figure 2 – Additional logic gates can be formed from other combinations of 2 – 8 switches.
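The gates in Figures 1 and 2 can be sketched in a few lines of code. This is a toy model of my own, not from the post: each switch is a Boolean that is True when closed ("1"), and a gate is just a rule for how switch states combine.

```python
# A minimal sketch of logic gates built from switches.
# True = closed switch ("1"), False = open switch ("0").

def and_gate(a, b):
    # Figure 1: two switches wired in series - current reaches the
    # light bulb only if BOTH switches are closed.
    return a and b

def or_gate(a, b):
    # Two switches wired in parallel - either closed path lights the bulb.
    return a or b

def not_gate(a):
    # An inverting arrangement: closing the input switch opens the output.
    return not a

# Print the truth table for the AND gate of Figure 1.
for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", int(and_gate(a, b)))
```

Running this prints the familiar AND truth table: the output is 1 only for the input pair 1, 1, just as the light bulb in Figure 1 only lights when both series switches are closed.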

Once you can store binary numbers with switches and perform logical operations upon them with logic gates, you can build a computer that performs calculations on numbers. To process text, like names and addresses, we simply associate each letter of the alphabet with a binary number, like in the ASCII code set where A = “01000001” and Z = “01011010”, and then process the associated binary numbers.
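The letter-to-binary mapping described above can be demonstrated in a couple of lines (my own small example):

```python
# Map a character to its 8-bit ASCII binary representation.
def to_binary(ch):
    # ord() returns the character's ASCII code; format it as an
    # 8-bit, zero-padded binary string.
    return format(ord(ch), '08b')

print(to_binary('A'))  # 01000001
print(to_binary('Z'))  # 01011010
```

Once every character is just a binary number, "processing text" reduces to the same switching operations used for arithmetic.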

Figure 3 – Konrad Zuse with a reconstructed Z3 in 1961 (click to enlarge)


Figure 4 – Block diagram of the Z3 architecture (click to enlarge)

The electrical relays used by the Z3 were originally meant for switching telephone conversations. Closing one relay allowed current to flow to another relay’s coil, causing that relay to close as well.

Figure 5 – The Z3 was built using 2400 electrical relays, originally meant for switching telephone conversations.

Figure 6 – The electrical relays used by the Z3 for switching were very large, very slow and used a great deal of electricity which generated a great deal of waste heat.

Now I was born about 10 years later in 1951, a few months after the United States government installed its very first commercial computer, a UNIVAC I, for the Census Bureau on June 14, 1951. The UNIVAC I was 25 feet by 50 feet in size, and contained 5,600 vacuum tubes, 18,000 crystal diodes and 300 relays with a total memory of 12 K. From 1951 to 1958 a total of 46 UNIVAC I computers were built and installed.

Figure 7 – The UNIVAC I was very impressive on the outside.

Figure 8 – But the UNIVAC I was a little less impressive on the inside.

Figure 9 – Most of the electrical relays of the Z3 were replaced with vacuum tubes in the UNIVAC I, which were also very large, used lots of electricity and generated lots of waste heat too, but the vacuum tubes were 100,000 times faster than relays.

Figure 10 – Vacuum tubes contain a hot negative cathode that glows red and boils off electrons. The electrons are attracted to the cold positive anode plate, but there is a grid electrode between the cathode and the anode plate. By changing the voltage on the grid, the vacuum tube can control the flow of electrons like the handle of a faucet. The grid voltage can be adjusted so that the electron flow is full blast, a trickle, or completely shut off, and that is how a vacuum tube can be used as a switch.

In the 1960s the vacuum tubes were replaced by discrete transistors and in the 1970s the discrete transistors were replaced by thousands of transistors on a single silicon chip. Over time, the number of transistors that could be put onto a silicon chip increased dramatically, and today, the silicon chips in your personal computer hold many billions of transistors that can be switched on and off in about 10^-10 seconds. Now let us look at how these transistors work.

There are many different kinds of transistors, but I will focus on the FET (Field Effect Transistor) that is used in most silicon chips today. A FET transistor consists of a source, a gate and a drain. The whole affair is laid down on a very pure silicon crystal using a multi-step photolithographic process that engraves the circuit elements upon the crystal. Silicon lies directly below carbon in the periodic table because both silicon and carbon have 4 electrons in their outer shell, with room for 4 more, and that electronic structure is what makes silicon a semiconductor. Pure silicon is not very electrically conductive, but by doping the silicon crystal with very small amounts of impurities, it is possible to create silicon that has a surplus of free electrons. This is called N-type silicon. Similarly, it is possible to dope silicon with small amounts of impurities that soak up free electrons, leaving behind positively charged holes, creating P-type silicon. To make a FET transistor you simply use a photolithographic process to create two N-type silicon regions on a substrate of P-type silicon. Between the N-type regions lies a gate which controls the flow of electrons between the source and drain regions, like the grid in a vacuum tube. When a positive voltage is applied to the gate, it attracts the remaining free electrons in the P-type substrate and repels its positive holes. This creates a conductive channel between the source and drain which allows a current of electrons to flow.

Figure 11 – A FET transistor consists of a source, gate and drain. When a positive voltage is applied to the gate, a current of electrons can flow from the source to the drain and the FET acts like a closed switch that is “on”. When there is no positive voltage on the gate, no current can flow from the source to the drain, and the FET acts like an open switch that is “off”.

Figure 12 – When there is no positive voltage on the gate, the FET transistor is switched off, and when there is a positive voltage on the gate the FET transistor is switched on. These two states can be used to store a binary “0” or “1”, or can be used as a switch in a logic gate, just like an electrical relay or a vacuum tube.



Figure 13 – Above is a plumbing analogy that uses a faucet or valve handle to simulate the actions of the source, gate and drain of a FET transistor.

The CPU chip in your computer consists largely of transistors in logic gates, but your computer also has a number of memory chips that use transistors that are “on” or “off” and can be used to store binary numbers or text that is encoded using binary numbers. The next thing we need is a way to coordinate the billions of transistor switches in your computer. That is accomplished with a system clock. My current work laptop has a clock speed of 2.5 GHz which means it ticks 2.5 billion times each second. Each time the system clock on my computer ticks, it allows all of the billions of transistor switches on my laptop to switch on, off, or stay the same in a coordinated fashion. So while your computer is running, it is actually turning on and off billions of transistors billions of times each second – and all for a few hundred dollars!
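The clocked, all-at-once updating described above can be sketched as a toy simulation (my own illustration, not from the post): on each tick of the clock, every stored bit computes its next value from the present state, and all bits change together.

```python
# A toy sketch of a system clock coordinating many switches: on each
# clock tick, every bit updates simultaneously based on the current state.

def tick(state):
    # Here the update rule simply toggles every bit, like the lowest
    # stage of a binary counter; a real CPU applies far more complex
    # logic-gate rules, but the synchronous, all-at-once update is the
    # same idea.
    return [1 - b for b in state]

state = [0, 1, 0, 1]
for _ in range(2):   # two clock ticks
    state = tick(state)
print(state)         # two toggles return every bit to its starting value
```

The essential point is that no bit updates on its own schedule; the clock tick is the universal "now" that keeps billions of switches marching in step.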

Again, it was the relentless drive of software for ever increasing amounts of memory and CPU-cycles that made all this happen, and that is why you can now comfortably sit in a theater with a smart phone that can store more than 10 billion bytes of data, while back in 1951 the UNIVAC I occupied an area of 25 feet by 50 feet to store 12,000 bytes of data. But when I think back to my early childhood in the early 1950s, I can still vividly remember a time when there essentially was no software at all in the world. In fact, I can still remember my very first encounter with a computer on Monday, Nov. 19, 1956 watching the Art Linkletter TV show People Are Funny with my parents on an old black and white console television set that must have weighed close to 150 pounds. Art was showcasing the 21st UNIVAC I to be constructed, and had it sorting through the questionnaires from 4,000 hopeful singles, looking for the ideal match. The machine paired up John Caran, 28, and Barbara Smith, 23, who later became engaged. And this was more than 40 years before eHarmony.com! To a five-year-old boy, a machine that could “think” was truly amazing. Since that very first encounter with a computer back in 1956, I have personally witnessed software slowly becoming the dominant form of self-replicating information on the planet, and I have also seen how software has totally reworked the surface of the planet to provide a secure and cozy home for more and more software of ever increasing capability. For more on this please see A Brief History of Self-Replicating Information. That is why I think there would be much to be gained in exploring the origin and evolution of the $10 trillion computer simulation that the Software Universe provides, and that is what softwarephysics is all about. Let me explain where this idea came from.

My First Experiences with Software
Back in the 1950s, scientists and engineers first began to use computers to analyze experimental data and perform calculations, essentially using computers as souped-up slide rules to do data reduction. But by the 1960s, computers had advanced to the point where scientists and engineers were able to begin to use computers to perform simulated experiments to model things that previously had to be physically constructed in a lab. This dramatically helped to speed up research because it was found to be much easier to create a software simulation of a physical system, and perform simulated experiments on it, than to actually build the physical system itself in the lab. This revolution in the way science was done personally affected me. I finished up my B.S. in physics at the University of Illinois in Urbana, Illinois in 1973 with the sole support of my trusty slide rule, but fortunately, I did take a class in FORTRAN programming my senior year. I then immediately began work on an M.S. degree in geophysics at the University of Wisconsin at Madison. For my thesis, I worked with a group of graduate students who were shooting electromagnetic waves into the ground to model the conductivity structure of the Earth’s upper crust. We were using the Wisconsin Test Facility (WTF) of Project Sanguine to send very low frequency electromagnetic waves, with a bandwidth of about 1 – 100 Hz, into the ground, and then we measured the reflected electromagnetic waves in cow pastures up to 60 miles away. All this information has been declassified and can be downloaded from the Internet at: http://www.fas.org/nuke/guide/usa/c3i/fs_clam_lake_elf2003.pdf. Project Sanguine built an ELF (Extremely Low Frequency) transmitter in northern Wisconsin and another transmitter in northern Michigan in the 1970s and 1980s. The purpose of these ELF transmitters was to send messages to the U.S. nuclear submarine force at a frequency of 76 Hz.
These very low frequency electromagnetic waves can penetrate the highly conductive seawater of the oceans to a depth of several hundred feet, allowing the submarines to remain at depth, rather than coming close to the surface for radio communications. You see, normal radio waves in the Very Low Frequency (VLF) band, at frequencies of about 20,000 Hz, only penetrate seawater to a depth of 10 – 20 feet. This ELF communications system became fully operational on October 1, 1989, when the two transmitter sites began synchronized transmissions of ELF broadcasts to the U.S. submarine fleet.
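The difference between ELF and VLF penetration of seawater follows from the standard electromagnetic skin-depth formula. Here is a back-of-the-envelope sketch of my own (the conductivity value of about 4 S/m for seawater is an assumed textbook figure, not from the post):

```python
# Skin depth of an electromagnetic wave in a conductor:
#   delta = sqrt(2 / (omega * mu * sigma))
# The field amplitude falls by a factor of 1/e for every skin depth.
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space, H/m
SIGMA_SEA = 4.0            # assumed conductivity of seawater, S/m

def skin_depth_m(freq_hz):
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 / (omega * MU0 * SIGMA_SEA))

for f in (76.0, 20000.0):  # the ELF and VLF frequencies from the text
    d = skin_depth_m(f)
    print(f"{f:>8.0f} Hz: skin depth ~ {d:5.1f} m ({d * 3.28:4.0f} ft)")
```

This works out to a skin depth of roughly 29 m (about 95 ft) at 76 Hz but under 2 m at 20 kHz, which is why a few skin depths of usable signal reach a submarine several hundred feet down at ELF frequencies, while VLF signals die out within the first 10 – 20 feet of seawater.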

Anyway, back in the summers of 1973 and 1974 our team was collecting electromagnetic data from the WTF using a DEC PDP-8/e minicomputer. The machine cost about $30,000 in 1973 dollars and was about the size of a large side-by-side refrigerator, with 32K of magnetic core memory. We actually hauled this machine through the lumber trails of the Chequamegon National Forest and powered it with an old diesel generator to digitally record the reflected electromagnetic data in the field. For my thesis, I then created models of the Earth’s upper conductivity structure down to a depth of about 20 km, using programs written in BASIC. The beautiful thing about the DEC PDP-8/e was that the computer time was free, so I could play around with different models, until I got a good fit to what we recorded in the field. The one thing I learned by playing with the models on the computer was that the electromagnetic waves did not go directly down into the Earth from the WTF, like common sense would lead you to believe. Instead, the ELF waves traveled through the air in a wave guide between the ionosphere and the conductive rock of the Earth to where you were observing and then made a nearly 90 degree turn straight down into the Earth, as they were refracted into the much more conductive rock. So at your observing station, you really only saw ELF plane waves going straight down and reflecting straight back up off the conductivity differences in the upper crust, and this made modeling much easier than dealing with ELF waves transmitted through the Earth from the WTF. And this is what happens for our submarines too; the ELF waves travel through the air all over the world, channeled between the conductive seawater of the oceans and the conductive ionosphere of the atmosphere, like a huge coax cable. When the ELF waves reach a submarine, they are partially refracted straight down to the submarine. 
I would never have gained this insight by simply solving Maxwell’s equations (1864) for electromagnetic waves alone! This made me realize that one could truly use computers to do simulated experiments to uncover real knowledge by taking the fundamental laws of the Universe, really the handful of effective theories that we currently have, like Maxwell's equations, simulating those equations in computer code and letting them unfold in time, and actually seeing the emergent behaviors of complex systems arise in a simulated Universe. All the sciences now do this routinely, but back in 1974 it was quite a surprise for me.

Figure 14 – Some graduate students huddled around a DEC PDP-8/e minicomputer. Notice the teletype machines in the foreground on the left that were used to input code and data into the machine and to print out results as well.

After I graduated from Wisconsin in 1975, I went to work for Shell and Amoco exploring for oil between 1975 – 1979, before switching into a career in IT in 1979. But even during this period, I mainly programmed geophysical models of seismic data in FORTRAN for Shell and Amoco. It was while programming computer simulations of seismic data that the seeds of softwarephysics began to creep into my head, as I painstakingly assembled lots of characters of computer code into complex patterns that did things, only to find that no matter how carefully I tried, my code always seemed to fail because there were just way too many ways to assemble the characters into computer code that was "close" but not quite right. It was sort of like trying to assemble lots of atoms into complex organic molecules that do things, only to find that you were off by a small factor, and those small errors made the computer code fail. At this point, I was beginning to have some fuzzy thoughts about being the victim of the second law of thermodynamics misbehaving in a nonlinear Universe. But those initial thoughts about softwarephysics accelerated dramatically in 1979 when I made a career change to become an IT professional. One very scary Monday morning, I was escorted to my new office cubicle in Amoco’s IT department, and I immediately found myself surrounded by a large number of very strange IT people, all scurrying about in a near state of panic, like the characters in Alice in Wonderland. Suddenly, it seemed like I was trapped in a frantic computer simulation, like the ones I had programmed on the DEC PDP-8/e, buried in punch card decks and fan-fold listings. After nearly 38 years in the IT departments of several major corporations, I can now state with confidence that most corporate IT departments can best be described as “frantic” in nature. This new IT job was a totally alien experience for me, and I immediately thought that I had just made a very dreadful mistake.
Granted, I had been programming geophysical models for my thesis and for oil companies, ever since taking a basic FORTRAN course back in 1972, but that was the full extent of my academic credentials in computer science.

The Beginnings of Softwarephysics
So to help myself cope with the daily mayhem of life in IT, I began to develop softwarephysics. This was because I noticed that, unlike all of the other scientific and engineering professions, IT professionals did not seem to have a theoretical framework to help them deal with that mayhem. But I figured that if you could apply physics to geology, why not apply physics to software? When I first switched from physics to geophysics in 1973, I was very impressed by the impact that applying simple 19th-century physics had had upon geology during the plate tectonics revolution, which peaked during the five-year period 1965 – 1970. When I first graduated from the University of Illinois in 1973 with a B.S. in physics, I was very dismayed to find that the end of the Space Race and a temporary lull in the Cold War had left very few prospects open for a budding physicist. So on the advice of my roommate, a geology major, I headed up north to the University of Wisconsin in Madison to obtain an M.S. in geophysics, with the hope of obtaining a job with an oil company exploring for oil. These were heady days for geology because we were at the very tail end of the plate tectonics revolution that totally changed the fundamental models of geology. Having never taken a single course in geology during all of my undergraduate studies, I was accepted into the geophysics program with many deficiencies in geology, so I had to take many undergraduate geology courses to get up to speed in this new science. The funny thing was that the geology textbooks of the time had not yet caught up with the plate tectonics revolution of the previous decade, so they still embraced the “classical” geological models of the past, which now seemed a little bit silly in light of the new plate tectonics model. But this was also very enlightening.
It was like looking back at the prevailing thoughts in physics prior to Newton or Einstein. What the classical geological textbooks taught me was that over the course of several hundred years, the geologists had figured out what had happened, but not why it had happened. Up until 1960 geology was mainly an observational science relying upon the human senses of sight and touch, and by observing and mapping many outcrops in detail, the geologists had figured out how mountains had formed, but not why.

In classical geology, most geomorphology was thought to arise from local geological processes. For example, in classical geology, fold mountains formed off the coast of a continent when a geosyncline formed because the continental shelf underwent a dramatic period of subsidence for some unknown reason. Then very thick layers of sedimentary rock were deposited into the subsiding geosyncline, consisting of alternating layers of sand and mud that turned into sandstones and shales, intermingled with limestones that were deposited from the carbonate shells of dead sea life floating down or from coral reefs. Next, for some unknown reason, the sedimentary rocks were laterally compressed into folded structures that slowly rose from the sea. More compression then followed, exceeding the ability of the sedimentary rock to deform plastically, resulting in thrust faults forming that uplifted blocks of sedimentary rock even higher. As compression continued, some of the sedimentary rocks were then forced down to great depths within the Earth and were then placed under great pressures and temperatures. These sedimentary rocks were then far from the thermodynamic equilibrium of the Earth's surface where they had originally formed, and thus the atoms within recrystallized into new metamorphic minerals. At the same time, for some unknown reason, huge plumes of granitic magma rose from deep within the Earth's interior as granitic batholiths. Then over several hundred millions of years, the overlying folded sedimentary rocks slowly eroded away, revealing the underlying metamorphic rocks and granitic batholiths, allowing human beings to cut and polish them into pretty rectangular slabs for the purpose of slapping them up onto the exteriors of office buildings and onto kitchen countertops.
In 1960, classical geologists had no idea why the above sequence of events, producing very complicated geological structures, seemed to happen over and over again many times over the course of billions of years. But with the advent of plate tectonics (1965 – 1970), all was suddenly revealed. It was the lateral movement of plates on a global scale that made it all happen. With plate tectonics, everything finally made sense. Fold mountains did not form from purely local geological factors in play. There was the overall controlling geological process of global plate tectonics making it happen. For a quick overview, please see:

Fold Mountains
http://www.youtube.com/watch?v=Jy3ORIgyXyk

Figure 15 – Fold mountains occur when two tectonic plates collide. A descending oceanic plate first causes subsidence offshore of a continental plate, which forms a geosyncline that accumulates sediments. When all of the oceanic plate between two continents has been consumed, the two continental plates collide and compress the accumulated sediments in the geosyncline into fold mountains. This is how the Himalayas formed when India crashed into Asia.

Now the plate tectonics revolution was really made possible by the availability of geophysical data. It turns out that most of the pertinent action of plate tectonics occurs under the oceans, at the plate spreading centers and subduction zones, far removed from the watchful eyes of geologists in the field with their notebooks and trusty hand lenses. Geophysics really took off after World War II, when universities were finally able to get their hands on cheap war surplus gear. By mapping variations in the Earth's gravitational and magnetic fields and by conducting deep oceanic seismic surveys, geophysicists were finally able to figure out what was happening at the plate spreading centers and subduction zones. Actually, the geophysicist and meteorologist Alfred Wegener had figured this all out in 1912 with his theory of Continental Drift, but at the time Wegener was ridiculed by the geological establishment. You see, Wegener had been an arctic explorer and had noticed that sometimes sea ice split apart, like South America and Africa, only later to collide again to form mountain-like pressure ridges. Unfortunately, Wegener froze to death in 1930 trying to provision some members of his last exploration party to Greenland, never knowing that one day he would finally be vindicated.

So when I first joined the IT department of Amoco, I had the vague feeling that perhaps much of the angst that I saw in my fellow IT coworkers was really due to the lack of an overall theoretical framework, like plate tectonics, that could help to explain their daily plight, alleviate some of its impact by providing some insights into why doing IT for a living was so difficult, suggest some possible remedies, and provide a direction for thought as well. So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all of the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software by comparing software to how things behave in the physical Universe. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software.

So my original intent for softwarephysics was to merely provide a theoretical framework for IT professionals to help them better understand the behavior of software during its development and its behavior under load when running in Production. My initial thoughts were that the reason IT work was so difficult was that programmers were constantly fighting a losing battle with the second law of thermodynamics in a nonlinear Universe. You see, programmers must assemble a huge number of characters into complex patterns of source code in order to instruct a computer to perform useful operations, and because the Universe is largely nonlinear in nature, meaning that small changes to initial conditions will most likely result in dramatic, and many times, lethal outcomes for software, IT work was nearly impossible to do, and that is why most IT professionals were usually found to be on the verge of a nervous breakdown during the course of a normal day in IT. For more on that see The Fundamental Problem of Software. At the same time, I subconsciously also knew that living things must also assemble an even larger number of atoms into complex molecules in order to perform the functions of life in a nonlinear Universe, so obviously, it would seem that the natural solution to the problem that IT professionals faced each day would be simply to apply a biological approach to developing and maintaining software. However, this did not gel in my mind at first, until one day, while working on some code, I came up with the notion that we needed to stop writing code - we needed to "grow" code instead in a biological manner. For more on that see Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework.

Using Softwarephysics to Help Explore the Origin of Life
But as I saw complex corporate software slowly evolve over the decades, it became more and more evident to me that much could be gained by studying this vast computer simulation that the IT community had been working on for the past 75 years, or 2.4 billion seconds. NASA has defined life broadly as "A self-sustaining chemical system capable of Darwinian evolution." Personally, after many years of reflection, I feel that the research community that is currently exploring the origin of life on the Earth and elsewhere is too obsessed with simply finding other carbon-based life forms like themselves. Carbon-based life forms are really just one form of self-replicating information currently found on our planet, so I feel that more attention should really be focused upon finding other forms of self-replicating information sharing the Universe with ourselves, and the best place to start that, with the least cost, is to simply look right here on the Earth. To do that, all we need to do is simply remove the "chemical" term from NASA's definition of life and redefine self-replicating information as "A self-sustaining system capable of Darwinian evolution." That is why, in many of my postings, I have been stressing that the origin and evolution of commercial software provides a unique opportunity for those interested in the origin and early evolution of life on the Earth, and elsewhere, because both programmers and living things are faced with nearly identical problems. My suggestion in those postings has been that everybody has been looking just a couple of levels too low in the hierarchy of self-replicating information. Carbon-based living things are just one form of self-replicating information, and all forms of self-replicating information have many characteristics in common as they battle the second law of thermodynamics in a nonlinear Universe.
So far we have seen at least five waves of self-replicating information sweep across the Earth, with each wave greatly reworking the surface and near subsurface of the planet as it came to predominance:

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is now rapidly becoming the dominant form of self-replicating information on the planet, and is having a major impact on mankind as it comes to predominance. For more on this see: A Brief History of Self-Replicating Information. However, of the five waves of self-replicating information, the only form that we currently have a good history of is software, going all the way back to May of 1941 when Konrad Zuse first cranked up his Z3 computer. So the best model for the origin of life might be obtained by studying the hodge-podge of precursors, false starts, and failed attempts that led to the origin and early evolution of software, with particular attention paid to the parasitic/symbiotic relationships that allowed software to bootstrap itself into existence.

Yes, there are many other examples of universal Darwinism at work in the Universe, such as the evolution of languages or political movements, but I think that the origin and evolution of software provides a unique example because both programmers and living things are faced with nearly identical problems. A programmer must assemble a huge number of characters into complex patterns of source code to instruct a computer to perform useful operations. Similarly, living things must assemble an even larger number of atoms into complex molecules in order to perform the functions of life. And because the Universe is largely nonlinear in nature, meaning that small changes to initial conditions will most likely result in dramatic, and many times, lethal outcomes for both software and living things, the evolutionary histories of living things on Earth and of software have both converged upon very similar solutions to overcome the effects of the second law of thermodynamics in a nonlinear Universe. For example, both living things and software went through a very lengthy prokaryotic architectural period, with little internal structure, followed by a eukaryotic architectural period with a great deal of internal structure, which later laid the foundations for forms with a complex multicellular architecture. And both also experienced a dramatic Cambrian explosion, in which large multicellular systems arose, consisting of huge numbers of somatic cells that relied upon the services of the cells found within a number of discrete organs. For more on this see the SoftwarePaleontology section of SoftwareBiology and Software Embryogenesis.

Also, software presents a much clearer distinction between the genotype and phenotype of a system than do other complex systems, like languages or other technologies that also undergo evolutionary processes. The genotype of software is determined by the source code files of programs, while the phenotype of software is expressed by the compiled executable files that run upon a computer and that are generated from the source code files by a transcription process similar to the way genes are transcribed into proteins. Also, like a DNA or RNA sequence, source code provides a very tangible form of self-replicating information that can be studied over historical time without ambiguity. Source code is also not unique, in that many different programs, and even programs written in different languages, can produce executable files with identical phenotypes or behaviors.
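That last point - that source code is not unique - can be seen in even a trivial sketch. The two functions below are hypothetical, invented purely for illustration: two syntactically different "genotypes" that express an identical "phenotype":

```python
# Two different "genotypes" (source texts)...
def double_a(n):
    return n * 2     # genotype A: multiplication

def double_b(n):
    return n + n     # genotype B: addition

# ...with an identical "phenotype" (observable behavior):
assert all(double_a(n) == double_b(n) for n in range(-1000, 1000))
```

No amount of black-box testing of the running programs can distinguish the two source texts, just as identical proteins can be encoded by different DNA sequences thanks to the redundancy of the genetic code.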

Currently, many researchers working on the origin of life and astrobiology are trying to produce computer simulations to help investigate how life could have originated and evolved at its earliest stages. But trying to incorporate all of the relevant elements into a computer simulation is proving to be a very daunting task indeed. Why not simply take advantage of the naturally occurring $10 trillion computer simulation that the IT community has already patiently evolved over the past 75 years and has already run for 2.4 billion seconds? It has been hiding there in plain sight the whole time for anybody with a little bit of daring and flair to explore.
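The "75 years, or 2.4 billion seconds" figure used above is easy to verify with a quick back-of-the-envelope calculation:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600    # about 3.156e7 seconds per year
seconds = 75 * SECONDS_PER_YEAR          # about 2.37e9 - roughly 2.4 billion
print(f"75 years = {seconds:.2e} seconds")
```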

Some might argue that this is an absurd proposal because software currently is a product of the human mind, while biological life is not a product of intelligent design. Granted, biological life is not a product of intelligent design, but neither is the human mind. The human mind and biological life are both the result of natural processes at work over very long periods of time. This objection simply stems from the fact that we are all still, for the most part, self-deluded Cartesian dualists at heart, with seemingly a little “Me” running around within our heads that just happens to have the ability to write software and to do other challenging things. Thus, most human beings do not think of themselves as part of the natural world. Instead, they think of themselves, and others, as immaterial spirits temporarily haunting a body, and when that body dies the immaterial spirit lives on. In this view, human beings are not part of the natural world. Instead, they are part of the supernatural. But since the human mind is a product of natural processes in action, so is the software that it produces. For more on that see The Ghost in the Machine the Grand Illusion of Consciousness.

Still, I realize that there might be some hesitation to pursue this line of research because it might be construed by some as an advocacy of intelligent design, but that is hardly the case. The evolution of software over the past 75 years has essentially been a matter of Darwinian inheritance, innovation and natural selection converging upon similar solutions to that of biological life. For example, it took the IT community about 60 years of trial and error to finally stumble upon an architecture similar to that of complex multicellular life that we call SOA – Service Oriented Architecture. The IT community could have easily discovered SOA back in the 1960s if it had adopted a biological approach to software and intelligently designed software architecture to match that of the biosphere. Instead, the worldwide IT architecture we see today essentially evolved on its own because nobody really sat back and designed this very complex worldwide software architecture; it just sort of evolved on its own through small incremental changes brought on by many millions of independently acting programmers through a process of trial and error. When programmers write code, they always take some old existing code first and then modify it slightly by making a few changes. Then they add a few additional new lines of code, and test the modified code to see how far they have come. Usually, the code does not work on the first attempt because of the second law of thermodynamics, so they then try to fix the code and try again. This happens over and over, until the programmer finally has a good snippet of new code. Thus, new code comes into existence through the Darwinian mechanisms of inheritance coupled with innovation and natural selection. Some might object that this coding process of software is actually a form of intelligent design, but that is not the case. It is important to differentiate between intelligent selection and intelligent design. 
In softwarephysics we extend the concept of natural selection to include all selection processes that are not supernatural in nature, so for me, intelligent selection is just another form of natural selection. This is really nothing new. Predators and prey constantly make "intelligent" decisions about what to pursue and what to evade, even if those "intelligent" decisions are only made with the benefit of a few interconnected neurons or molecules. So in this view, the selection decisions that a programmer makes after each iteration of working on some new code really are a form of natural selection. After all, programmers are just DNA survival machines with minds infected with memes for writing software, and the selection processes that the human mind undergoes while writing software are just as natural as the Sun drying out worms on a sidewalk or a cheetah deciding upon which gazelle in a herd to pursue.

For example, when IT professionals slowly evolved our current $10 trillion worldwide IT architecture over the past 2.4 billion seconds, they certainly did not do so with the teleological intent of creating a simulation of the evolution of the biosphere. Instead, like most organisms in the biosphere, these IT professionals were simply trying to survive just one more day in the frantic world of corporate IT. It is hard to convey the daily mayhem and turmoil of corporate IT to outsiders. When I first hit the floor of Amoco's IT department, I was in total shock, but I quickly realized that all IT jobs essentially boiled down to simply pushing buttons. All you had to do was to push the right buttons, in the right sequence, at the right time, and with zero errors. How hard could that be? Well, it turned out to be very difficult indeed, and in response I began to subconsciously work on softwarephysics to try to figure out why this job was so hard, and how I could dig myself out of the mess that I had gotten myself into. After a while, it dawned on me that the fundamental problem was the second law of thermodynamics operating in a nonlinear simulated universe. The second law made it very difficult to push the right buttons in the right sequence and at the right time because there were so many erroneous combinations of button pushes. Writing and maintaining software was like looking for a needle in a huge utility phase space. There were nearly an infinite number of ways of pushing the buttons "wrong". The other problem was that we were working in a very nonlinear utility phase space, meaning that pushing just one button incorrectly usually brought everything crashing down. Next, I slowly began to think of pushing the correct buttons in the correct sequence as stringing together the correct atoms into the correct sequence to make molecules in chemical reactions that could do things. I also knew that living things were really great at doing that.
Living things apparently overcame the second law of thermodynamics by dumping entropy into heat as they built low-entropy complex molecules from high-entropy simple molecules and atoms. I then began to think of each line of code that I wrote as a step in a biochemical pathway. The variables were like organic molecules composed of characters or "atoms" and the operators were like chemical reactions between the molecules in the line of code. The logic in several lines of code was the same thing as the logic found in several steps of a biochemical pathway, and a complete function was the equivalent of a full-fledged biochemical pathway in itself. But one nagging question remained - how could I take advantage of these similarities to save myself? That's a long story, but in 1985 I started working on BSDE - the Bionic Systems Development Environment, which was used at Amoco to "grow" software biologically from an "embryo" by having programmers turn on and off a set of "genes".
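The "nearly infinite number of ways of pushing the buttons wrong" can even be given a rough number. Assuming, purely for illustration, a single 80-character line of code drawn from the 95 printable ASCII characters:

```python
import math

ALPHABET = 95       # printable ASCII characters
LINE_LENGTH = 80    # one line of code

# The number of distinct possible 80-character lines; only a vanishing
# fraction of them even compile, let alone do something useful.
possible_lines = ALPHABET ** LINE_LENGTH
digits = math.floor(math.log10(possible_lines)) + 1
print(f"about 10**{digits - 1} possible lines")
```

Compare that with the estimated roughly 10**80 atoms in the observable Universe, and the needle-in-a-huge-phase-space metaphor becomes quite literal, for a single line of code, never mind a whole program.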

The Social Impacts of the Coming Predominance of Software
Over the years, I have seen the Software Universe that I first encountered back in 1979 expand from the small population of IT workers in the world to now encompass the entire world at large. I have also seen that as software comes to predominance it has caused a great deal of social, political and economic unrest, as discussed in The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight, Machine Learning and the Ascendance of the Fifth Wave and Making Sense of the Absurdity of the Real World of Human Affairs. The immediate difficulty is that software has displaced many workers over the past 75 years, and as software comes to predominance, it will eventually reduce all human labor to a value of zero over the next 10 - 100 years. How will the age-old oligarchical societies of the world deal with that in a manner that allows civilization to continue? The 2016 Presidential Election cycle in the United States was a dramatic example of this in action. The election was totally dominated by the effects of software coming to predominance - rogue email servers, hacking, leaking, software security breaches in general and wild Twitter feeds by candidates. But the election was primarily determined by the huge loss of middle class jobs due to automation by software. Now it's pretty hard to get mad at software because it is so intangible in nature, so many mistakenly directed their anger at other people, because that is what mankind has been doing for the past 200,000 years. But this time is different because the real culprit is software coming of age. Unfortunately, those low-skilled factory jobs that have already evaporated are not coming back, no matter what some may promise. And those jobs are just the first in a long line.
With the current pace of AI and Machine Learning research and implementation, now that they both can make lots of money, we will soon find self-driving trucks and delivery vehicles, automated cranes at container ports and automated heavy construction machinery at job sites. We have already lost lots of secretaries, bank tellers, stock brokers, insurance agents, retail salespeople and travel agents, but that is just the beginning. Soon we will see totally automated fast food restaurants to be later followed by the automation of traditional sit down restaurants, and automated retail stores without a single employee, like the totally automated parking garages we already have.

I have now been retired for nearly a month, and after having stopped working for the first time in 50 years, I can now state that there are plenty of things to do to keep busy. For example, my wife and I are doing the daycare for two of our grandchildren for our daughter, a high school Biology and Chemistry teacher, and I like to take online MOOC courses, and now I have all the time in the world to do that. So the end of working for a living is really not a bad thing, but the way we currently have civilization set up will not work in a future without work, given our current norm of rewarding people for what they produce. How that will all unfold remains one of the great mysteries of our time. For an intriguing view of one possibility please see THE MACHINE STOPS by E.M. Forster (1909) at:

http://archive.ncsa.illinois.edu/prajlich/forster.html

Yes - from 1909!

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Sunday, July 17, 2016

An IT Perspective on the Transition From Geochemistry to Biochemistry and Beyond

One of the major realizations arising from softwarephysics has been a growing appreciation for the overwhelming impact that self-replicating information has had on the Earth over the past 4.567 billion years, and of the possibility for the latest version of self-replicating information, known to us as software, to perhaps even go on to have a major impact upon the future of our entire galaxy. Recall that in softwarephysics we define self-replicating information as:

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

So far we have seen 5 waves of self-replicating information sweep across the Earth, with each wave greatly reworking the surface and near subsurface of the planet as it came to predominance:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is now rapidly becoming the dominant form of self-replicating information on the planet, and is having a major impact on mankind as it comes to predominance. For more on this see: A Brief History of Self-Replicating Information.

How Did It All Start?
For those researchers exploring the processes that brought forth life on the Earth, and elsewhere, the most challenging question is naturally how the original self-replicating autocatalytic metabolic pathways of organic molecules bootstrapped themselves into existence in the first place. Previously, in The Origin of Software the Origin of Life we examined Stuart Kauffman's ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing off as an emergent behavior of an early chaotic pre-biotic environment on Earth, solely with the aid of the extant organic molecules of the day. Similarly, in Programming Clay we examined Alexander Graham Cairns-Smith's theory, first proposed in 1966, that there was a clay microcrystal precursor to RNA that got it all started. Personally, as a former exploration geophysicist, I have always favored the idea that a geochemical precursor existed near the early hydrothermal vents of the initial tectonic spreading centers of the Earth that acted as a stepping stone between geochemistry and biochemistry. This is because for the early Earth the only real chemistry of the time was geochemistry alone. But how can geochemistry become biochemistry? I just finished reading a couple of beautiful papers that can be downloaded as PDF files:

The inevitable journey to being. Russell M.J., W. Nitschke, and E. Branscomb (2013) https://www.researchgate.net/publication/237098222_The_inevitable_journey_to_being

Turnstiles and bifurcators: The disequilibrium converting engines that put metabolism on the road. Branscomb E., and M.J. Russell (2013) http://www.sciencedirect.com/science/article/pii/S0005272812010420

that describe the Submarine Hydrothermal Alkaline Spring Theory for the emergence of life and how geochemistry could have become biochemistry on the early Earth in the hydrothermal vents of the Earth's initial tectonic spreading centers. The work described in both papers was supported in part by the Institute for Genomic Biology at my old alma mater, the University of Illinois in Urbana-Champaign.

Basically, they describe a possible early form of geochemical metabolism that could have taken the internal heat of the Earth and converted it into high-energy organic molecules via a complex set of geochemical reactions, in what the authors call a Free Energy Converter (FEC) cycle. A Free Energy Converter would have taken the high-temperature low-entropy internal heat of the Earth and degraded it into a lower-temperature higher-entropy form of heat that would then have been dumped into the oceans of the early Earth. The energy extracted by such Free Energy Converters would have ended up in high-energy organic molecules. You can think of a Free Energy Converter cycle as a huge loop of computer code that simply takes the available free energy found in the high-temperature regimes of the Earth at a depth of many miles and converts it into energetic organic molecules that could later fuel self-replicating metabolic pathways of organic molecules.
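The "huge loop of computer code" metaphor above can be taken at face value. The sketch below is not the authors' model; it is a minimal heat-engine caricature, with assumed reservoir temperatures, showing how each pass of the loop can bank some free energy as "organic molecules" while still increasing the total entropy of the Universe:

```python
T_HOT = 673.0    # K, assumed hot hydrothermal source (~400 degrees C)
T_COLD = 275.0   # K, assumed cold deep-ocean water (~2 degrees C)

def fec_cycle(q_in, efficiency):
    """One pass of the loop: draw heat q_in (joules) from the hot reservoir,
    bank `work` joules as chemical free energy, dump the rest into the ocean."""
    assert 0 < efficiency < 1 - T_COLD / T_HOT   # must stay under the Carnot bound
    work = efficiency * q_in
    q_out = q_in - work
    # Entropy bookkeeping (J/K): the cold ocean gains more entropy than
    # the hot source loses, so the Universe's total entropy rises.
    ds_universe = q_out / T_COLD - q_in / T_HOT
    return work, ds_universe

banked = 0.0
for _ in range(1000):                 # "a huge loop" in miniature
    work, ds = fec_cycle(1000.0, 0.2)
    assert ds > 0                     # second law: every cycle raises entropy
    banked += work                    # free energy stored in organic molecules
```

The key design point mirrors the papers' argument: the engine is allowed to create local order (the banked free energy) only because the degraded heat it dumps into the ocean raises the entropy of the Universe by more than enough to pay for it.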

This turns out to be a rather geologically involved process. The temperature difference between the surface of the Earth and the base of the asthenosphere in the upper mantle causes very slow convection cells to arise that drive the plate tectonics of the Earth. These convection cells in the Earth's upper mantle move about as fast as your fingernails grow, but they do bring the rock peridotite, primarily composed of the ultramafic silicate minerals olivine and pyroxene that are very rich in iron and magnesium, close to the Earth's surface near the Earth's spreading centers. At the spreading centers we frequently find the familiar "black smoker" hydrothermal vents that overlie the many magma chambers of the Earth's spreading centers - see Figure 1. These "black smokers" are dissipative structures, far from thermodynamic equilibrium, but they are a bit too hot to be the true nurseries for the origin of life.

However, a few miles away from the actual spreading centers, we do find a geochemical regime that could serve that purpose. These are the alkaline hydrothermal vents that run on the geochemical reaction called serpentinization, which transforms the mineral olivine into the mineral serpentine. The serpentinization of olivine into serpentine is an exothermic reaction that gives off heat and consequently creates mild convection cells of pore fluids in the Earth's upper crust. These pore fluid convection cells then produce very porous alkaline hydrothermal vents on the sea floor. These very porous alkaline hydrothermal vents do not spout effluents in a dramatic way, like their "black smoker" cousins, but they do percolate warm hydrothermal pore fluids, and they could have been the true nurseries of life on this planet. Unlike their acidic "black smoker" cousins, the alkaline hydrothermal vents had a much higher pH than the acidic ocean water in which they were located.
Because the early Earth's atmosphere consisted primarily of nitrogen and carbon dioxide, the oceans of the early Earth were much more acidic than they are today, owing to the high level of dissolved carbon dioxide. The pH contrast of the acidic seawater with the porous alkaline hydrothermal vent pore fluids allowed proton gradients to form across thin geochemical membranes, just as they do in all living things today. All living things on the Earth now obtain energy from redox reactions involving proton gradients across thin membranes, so this could have been a possible geochemical precursor.

Figure 1 - High-temperature "black smokers" near the Earth's spreading centers were probably too hot to be the nurseries of life on Earth.

Figure 2 - However, neighboring alkaline hydrothermal vents could have provided the necessary conditions to be the true nurseries for the origin of life. The alkaline hydrothermal vents had a much lower temperature than the "black smoker" hydrothermal vents, and they percolated pore fluids with a much higher pH than the acidic seawater in which they were located. Above is a map showing the location of the famous "Lost City" alkaline hydrothermal vents near the mid-Atlantic spreading center.

Figure 3 - Above is a depiction of the "Lost City" alkaline hydrothermal vents near the mid-Atlantic spreading center. Notice that the geochemical serpentinization of olivine in peridotite provides the free energy to form organic molecules in the alkaline hydrothermal vents.

Figure 4 - A simplified diagram of a "Lost City" alkaline hydrothermal vent. The alkaline hydrothermal vent has pore fluids with a pH of about 10.5, while the acidic seawater in which it sits has a pH of only 5.5. This difference in pH allows for proton gradients to form, the same kind of proton gradients that now power all forms of life on the Earth. Notice that the alkaline hydrothermal vent is also at a relatively low temperature of only 100 °C, much lower than the temperature of "black smokers" that have a temperature range of 250 - 400 °C.
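The pH numbers in Figure 4 can be turned into a rough energy estimate. Below is a minimal back-of-the-envelope calculation, not a rigorous model: it ignores the electrical membrane potential term of the full proton-motive force and simply assumes a vent-fluid temperature of about 100 °C:

```python
# Chemical free energy available per mole of protons crossing a membrane
# down a pH gradient: dG = 2.303 * R * T * delta_pH
# (the electrical membrane potential term is deliberately ignored here).
R = 8.314             # molar gas constant, J/(mol*K)
T = 373.15            # about 100 degrees C, roughly the vent fluid temperature
delta_pH = 10.5 - 5.5 # alkaline vent pore fluid vs. acidic seawater (Figure 4)

dG = 2.303 * R * T * delta_pH   # J per mole of protons
print(f"{dG / 1000:.1f} kJ per mole of protons")
```

At roughly 36 kJ per mole of protons, such a gradient sits in the same energetic ballpark as ATP hydrolysis (about 30 kJ/mol under standard conditions), which suggests why natural proton gradients across thin geochemical membranes could plausibly have powered a primordial metabolism.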

Figure 5 - An actual "Lost City" alkaline hydrothermal vent.

The authors point out that not every step in the necessary geochemical reactions of a Free Energy Converter needs to increase the entropy of the Universe. If every step that decreases the entropy of the Universe is logically coupled to a step that increases the entropy of the Universe by an even greater amount, the whole process can still proceed forward, since the overall entropy still increases as dictated by the second law of thermodynamics. But in order for that to happen, some processing logic needs to be added to the infinite loop that runs the Free Energy Converter cycle. The authors point to a couple of papers that describe how Vigna radiata (mung beans) and Thermotoga maritima, a heat-loving bacterium, accomplish this logical processing:

The essential feature of this linkage in condensations is that it makes each of the two processes conditional on the other— and with a specific logical directionality, namely, a proton (or sodium ion) can pass from outside to inside if, and only if, that happens coincidentally with the condensation and release of a molecule of pyrophosphate, or conversely, a proton (or sodium ion) can pass in the opposite direction, if and only if that happens coincidentally with the hydrolysis of a pyrophosphate and the release of the orthophosphate products. Because of this coupling logic, the device can function as a reversible free energy converter; converting, for example, the controlled dissipation of an outside-to-inside proton gradient in the production of a disequilibrium in the concentration of pyrophosphate versus orthophosphate (i.e. acting as a proton-gradient-driven pyrophosphate synthase). Or it can function equally well in reverse as a proton-pumping pyrophosphatase. Which way it goes depends, of course, on which way yields a net negative change in free energy (equivalently a net positive rate of entropy production).

Another necessary condition for such a logical coupling to work is to make it a one-way street. The authors describe this as adding some "turnstile logic" that allows the coupled reaction to only work in one direction - the direction that outputs low-entropy high-energy organic molecules. Such organic molecules could then later be used as a fuel for biochemical metabolic pathways of self-replicating information that could subsequently arise as parasites feeding off the output of the geochemical Free Energy Converter that is converting the free energy of the Earth's interior into high-energy organic molecules:

To emphasize the critical mechanistic point here, the functional essence of the coupling that achieves FEC is that the driving flux is made conditional on (is ‘gated’ by) the coincident occurrence of the other (driven) flux—which flow, being inherently improbable (i.e. anti-entropic), would, of course, never proceed (‘upstream’) on its own. However, the coupling of two processes as above envisaged is under no stretch ‘automatic’ or trivial; and is in fact a quite special state of physical affairs. In essentially all situations of interest this linking of the two processes into one, requires, and is mediated by, a macroscopically ordered and dynamic ‘structure’ which acts functionally as a “double turnstile”. The turnstile permits a token of the driving flux J1 to proceed downhill if and only if there is the coincident occurrence of some fixed ‘token’ of the driven flux J2 moving “uphill” by chance (albeit as an inherently improbable event) in the same movement of the turnstile. Embodying such conditional, turnstile-like gating mechanisms is what is universally being managed by such evolutionary marvels as the redox-driven proton pumps we will consider in detail later and indeed all other biological devices that carry out what is conventionally termed “energy conservation” (which name, we however argue, misleads in both of its terms).
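The "double turnstile" coupling logic in the passage above can be sketched as a toy model. This is only an illustrative sketch with made-up free energy numbers, not a simulation of the actual biochemistry; it simply demonstrates the conditional logic the authors describe, namely that the gated pair of fluxes proceeds if, and only if, the coupled step yields a net negative free energy change:

```python
# Toy model of a "double turnstile" Free Energy Converter: the driven
# (uphill, entropy-decreasing) flux can proceed only when it is coupled,
# in the same turnstile movement, to a driving (downhill) flux large
# enough to make the net free energy change negative. All numbers are
# illustrative placeholders, not measured values.

def turnstile_turns(dG_driving, dG_driven):
    """The turnstile turns only when the coupled pair of fluxes yields a
    net negative free energy change (net positive entropy production)."""
    return (dG_driving + dG_driven) < 0.0

# Driven flux: condensing orthophosphate into pyrophosphate (uphill).
dG_pyrophosphate = +20.0   # kJ/mol, illustrative

# On its own, the uphill condensation never proceeds:
print(turnstile_turns(0.0, dG_pyrophosphate))        # prints False

# Coupled to a proton falling down the vent's pH gradient (downhill),
# the very same condensation is carried forward:
dG_proton = -35.0          # kJ/mol, illustrative
print(turnstile_turns(dG_proton, dG_pyrophosphate))  # prints True
```

Note that no directionality is built into the turnstile itself: whichever coupled direction yields a net negative free energy change is the one that runs, which is exactly the reversibility the authors describe for the proton-gradient-driven pyrophosphate synthase operating in reverse as a proton-pumping pyrophosphatase.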

In many ways this proposed geochemical metabolism of the Earth's natural heat can be compared to photosynthesis, which takes natural energy from the Sun and converts it into energy-rich organic molecules.

An IT Perspective
The authors view such geochemical Free Energy Converters as heat engines converting high-temperature heat into something of biochemical value, while dumping some energy into lower-temperature heat to satisfy the entropy increase demanded by the second law of thermodynamics. But as an IT professional, when I look at the complicated logical operations of the described geochemical processes, I see software running on some primitive hardware instead. As we saw in The Demon of Software, heat engines and information are intricately intertwined, so perhaps these infinite loops of early geochemical metabolic pathways can also be viewed as primitive forms of data processing. The question is: could these early geochemical metabolic pathways also be considered forms of self-replicating information? That is a difficult question to answer because these early geochemical metabolic pathways seem to just naturally form as heat migrates from the Earth's mantle to its crust via convection cells. In that sense, could we consider the simple mantle convection cells that drive plate tectonics to be forms of self-replicating information too? Clearly, this all gets rather murky as we look back further in deep time.

Unlike all of the previous transitions of one form of self-replicating information into another, the transition from geochemistry to biochemistry seems to have been more or less a "clean break". Very few living things on the Earth today rely upon hydrothermal vents for their existence. So the self-replicating autocatalytic metabolic pathways of organic molecules, feeding off the output of geochemical metabolism, started off as parasites like all new forms of self-replicating information, but unlike most other forms, they did not go on to form symbiotic relationships with the geochemical pathways. Instead, they seem to have made a "clean break" with the geochemical metabolic processes that were living off the internal heat of the early Earth by developing a new energy source called photosynthesis. Consequently, unlike the self-replicating autocatalytic metabolic pathways of organic molecules, RNA, DNA, and memes of the past, not much of the geochemical metabolic pathways of the distant past seems to have been dragged along as subsequent forms of self-replicating information came to be. That raises the question of why this should be so, and the answer might shed some light on the current rise of software as the dominant form of self-replicating information on the planet. Will software carry along its predecessors, or will it make a "clean break" with them? Currently, we are living in one of those very rare times when a new form of self-replicating information, in the form of software, is coming to predominance, and it is not clear that software will drag along the self-replicating autocatalytic metabolic pathways of organic molecules, RNA, DNA, and memes of the past as it becomes the dominant form of self-replicating information on the planet.
Most certainly the AI software of the future will need to carry along the memes required to generate software and the necessary scientific and mathematical memes to build the hardware that it runs upon, but would it really need to carry along the ancient biochemical metabolic pathways, RNA and DNA in order to survive? The possibility of software making a "clean break" with biochemistry, just as biochemistry seems to have made a "clean break" with geochemistry, does not bode well for mankind.

Currently, the world is run by a large number of competing meme-complexes composed of memes residing in the minds of human beings. This contradicts what most human beings believe because we all naturally think of ourselves as rational free-acting agents that collectively run the world together. But I contend that the only way to understand the absurd real world of human affairs is to take the Dawkinsian position that we are really DNA survival machines with minds infected by a large number of memes. The reason we all think of ourselves as rational free-acting agents is that we are all committed Cartesian Dualists at heart, with, seemingly, a little "Me" running around in our heads - see The Ghost in the Machine the Grand Illusion of Consciousness for more on that. And so far, software is still being generated by the software-generating memes within the minds of programmers. But this will soon end when software is finally able to self-replicate on its own, without the aid of the software-generating memes in the minds of human programmers, and in doing so, will initiate a Software Singularity - see Machine Learning and the Ascendance of the Fifth Wave for more on that.

As software developers and software users, nearly all of mankind is now actively participating in this transition, because software has forged very strong parasitic/symbiotic relationships with nearly all of the meme-complexes on the planet. For many years I held the position that if we had actually been around 4 billion years ago to watch the origin of life on Earth take place, we would still be sitting around today arguing about just exactly what had happened, just as we still manage today to sit around and argue about what exactly happened for all of the other events in human history. However, now I am more of the opinion that, being the self-absorbed species that we are, we would probably not have even noticed it happening at all! That certainly seems to be the case in the present era, as software is rapidly becoming the dominant form of self-replicating information on the planet before our very eyes, with very few really paying much attention to the fact that we are now living in one of those very rare times when a new form of self-replicating information is coming to predominance.

The key thing to remember is that all new forms of self-replicating information are very disruptive in nature. New forms of self-replicating information usually begin as mildly parasitic forms that invade an existing host, usually itself a form of self-replicating information, and over time form a parasitic/symbiotic relationship with the host. But eventually, these new forms of self-replicating information take over and come to dominate the environment, and that is very disruptive. This is certainly true for software today. In Crocheting Software we saw that the origin of software was such a hodge-podge of precursors, false starts, and failed attempts that it is nearly impossible to pinpoint an exact date for its origin, but for the purposes of softwarephysics I have chosen May of 1941, when Konrad Zuse first cranked up his Z3 computer, as the starting point for modern software. Zuse wanted to use his Z3 computer to perform calculations for aircraft designs that were previously done manually in a very tedious manner. So initially software could not transmit memes; it could only perform calculations, like a very fast adding machine, and so it was a pure parasite. But then the business and military meme-complexes discovered that software could also be used to transmit memes, and software then entered into a parasitic/symbiotic relationship with the memes. Software allowed these meme-complexes to thrive, and in return, these meme-complexes heavily funded the development of software of ever-increasing complexity, until software became ubiquitous, forming strong parasitic/symbiotic relationships with nearly every meme-complex on the planet. Today, the only way memes can spread from mind to mind without the aid of software is when you speak directly to another person next to you.
Even if you attempt to write a letter by hand, the moment you drop it into a mailbox, it will immediately fall under the control of software.

We are now entering the final stage, in which software is in an intense battle with the memes for predominance, and this is causing a great deal of social, political and economic unrest, as discussed in The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight, Machine Learning and the Ascendance of the Fifth Wave and Making Sense of the Absurdity of the Real World of Human Affairs. The main difficulty is that software has displaced many workers over the past 75 years, and as software comes to predominance, it will eventually reduce all human labor to a value of zero over the next 10 - 100 years. So in a sense, software will soon be eliminating the metabolic pathway of earning a living through labor that has made civilization possible for most of mankind for the past 10,000 years. The resulting social chaos can only help to hasten the day when software finally takes over control of the Earth, and perhaps makes a "clean break" with the biochemistry that has dominated the Earth for nearly 4.0 billion years (see The Dawn of Galactic ASI - Artificial Superintelligence for details).

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Sunday, June 05, 2016

Making Sense of the Absurdity of the Real World of Human Affairs

It was the best of times,
it was the worst of times,
it was the age of wisdom,
it was the age of foolishness,
it was the epoch of belief,
it was the epoch of incredulity,
it was the season of Light,
it was the season of Darkness,
it was the spring of hope,
it was the winter of despair,

we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way— in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.


I dearly love those profound opening words from Charles Dickens' A Tale of Two Cities (1859) because for me they summarize the best description of the human condition ever composed by the human mind. As any student of history can attest, back in 1859 Dickens was simply stating that his times were no different from any other, that it has always been this way, and that there has always been some element of absurdity in the real world of human affairs. In fact, many religions of the past featured first-order approximations to explain the above. But for once our times may truly be different because of the advancing effects of software on the world, and that will be the subject of this brief posting.

As I have stated in many previous postings, I started this blog on softwarephysics about 10 years ago with the hope of helping the IT community to better deal with the daily mayhem of life in IT, after my less than stunning success in doing so back in the 1980s when I first began developing softwarephysics for my own use. But in the process of doing so, I believe I accidentally stumbled upon "what's it all about" as outlined in What’s It All About?. Softwarephysics explains that it is all about self-replicating information in action, and that much of today's absurdity stems from the fact that we are now living in one of those very rare transitionary periods when a new form of self-replicating information, in the form of software, is coming to dominate. For more on that please see A Brief History of Self-Replicating Information. Much of this realization arose from the work of Richard Dawkins, Susan Blackmore, Stuart Kauffman, Lynn Margulis, Freeman Dyson and, of course, Charles Darwin. The above is best summed up by Susan Blackmore's brilliant TED presentation at:

Memes and "temes"
http://www.ted.com/talks/susan_blackmore_on_memes_and_temes.html

Note that I consider Susan Blackmore's temes to really be technological artifacts that contain software. After all, an iPhone without software is simply a flake tool with a very dull edge.

So to really make sense of the absurdities of the modern world, one must first realize that we are all DNA survival machines with minds infected by memes in a Dawkinsian sense, but the chief difference this time is that we now have software rapidly becoming the dominant form of self-replicating information on the planet, and that is inducing further stresses that are leading to increased levels of absurdity. As I outlined in The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and Machine Learning and the Ascendance of the Fifth Wave, one of the initial tell-tale signs that software is truly coming to predominance has been the ability of software to displace workers over the past 50 years or so. The combination of globalization, made possible by software, and the automation of many middle class jobs through the application of software has led to a great deal of economic strife recently. Economic strife is not a good thing because it frequently leads to political absurdities like the 20th century Bolshevik Revolution in Russia or the rise of National Socialism in Germany. Economic strife can also lead people who are economically distressed to take up very conservative political or religious memes that condone violence, as a way to alleviate the growing pain they feel as they become alienated from society by software. So once again, the appeal of simple memes that purport to alleviate economic distress, or to eliminate the perceived heretical thoughts and actions of others, is on the rise worldwide, and these simple memes have naturally entered into a parasitic/symbiotic relationship with social media software to aid the self-replication of both forms of self-replicating information.
In recent years, this parasitic/symbiotic relationship of such simple-minded memes with social media software has led to the singling out of groups of people for Sonderbehandlung or "special treatment", leading to acts of terrorism and ethnic cleansing throughout the world.

Please Stop, Breathe and Think
So before you decide to blow somebody away for some strange reason, or even before you decide to vote for somebody who might decide to blow lots of people away for some strange reason in your name, please first stop to breathe and think about what is really going on. Chances are you are simply responding to some parasitic memes in your mind that really do not have your best interest at heart, aided by some software that could not care less about your ultimate disposition. They are just mindless forms of self-replicating information that have been selected for the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity. The memes and software that are inciting you to do harm to others are just mindless forms of self-replicating information trying to self-replicate at all costs, with little regard for you as an individual. For them you are just a disposable DNA survival machine with a disposable mind that has a lifespan of less than 100 years. They just need to replicate in the minds of others before you die, and if blowing yourself up in a marketplace filled with innocents, or in a hail of bullets from law enforcement, serves that purpose, they will certainly do so because they cannot do otherwise. Unlike you, they cannot think. Only you can do that.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston