Thursday, September 22, 2016

Introduction to Softwarephysics

Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance, and support based upon concepts from physics, chemistry, biology, and geology that I have been using on a daily basis for over 35 years as an IT professional. For those of you not in the business, IT is short for Information Technology, commercial computer science. The purpose of softwarephysics is to explain why IT is so difficult, to suggest possible remedies, and to provide a direction for thought. If you are an IT professional, general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology, then softwarephysics might be of interest to you, if not in an entirely serious manner, then perhaps at least in an entertaining one.

From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT, and spent about 20 years in development. For the past 17 years, I have been in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software, by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance, and relies upon our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 75 years, through the uncoordinated efforts of over 50 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily upon two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based upon real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models upon which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields.

Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In his Principia (1687), Newton outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.

The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving at less than 10% of the speed of light and that are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics, and for very fast things or things in strong gravitational fields, we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based upon models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide an effective theory of software behavior that makes useful predictions of software behavior that are applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software "really is"; we only care about how software is observed to behave, and we try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.
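To put a rough number on that effective range, below is a minimal Python sketch that computes the Lorentz factor, the standard measure of how far relativistic behavior departs from the Newtonian approximation; the sample speeds are chosen purely for illustration. At 10% of the speed of light the correction is only about half a percent, which is why Newtonian mechanics works so well inside its effective range:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v):
    """Return gamma = 1 / sqrt(1 - v^2/c^2) for a speed v given in m/s."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# How far does the Newtonian approximation drift as speeds climb?
for fraction in (0.01, 0.10, 0.50, 0.90):
    gamma = lorentz_factor(fraction * C)
    print(f"v = {fraction:.2f}c   gamma = {gamma:.5f}   "
          f"relativistic correction ~ {(gamma - 1.0) * 100:.3f}%")
```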

GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately. In order to do that, it is very important to have very accurate timing measurements. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips on board the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are farther from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, an error in your position of roughly 6 miles (about 10 kilometers) per day would accrue. The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics, or QED (1948), which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!
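As a quick sanity check on the arithmetic above, here is a small Python sketch that simply adds the two quoted clock drifts and multiplies the net timing error by the speed of light; the drift values are the ones quoted in the text, and everything else is just arithmetic:

```python
C = 299_792_458.0  # speed of light in m/s

# Per-day clock drifts quoted above, in microseconds per day
special_relativity_us = -7.2   # clock runs slow due to orbital velocity
general_relativity_us = +45.9  # clock runs fast in the weaker gravitational field

net_us_per_day = special_relativity_us + general_relativity_us
position_error_m_per_day = net_us_per_day * 1e-6 * C

print(f"Net clock drift:            {net_us_per_day:.1f} microseconds/day")
print(f"Uncorrected position error: {position_error_m_per_day / 1000:.1f} km/day")
# Prints a net drift of 38.7 microseconds/day and an error of roughly 11.6 km/day
```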

So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based upon completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based upon models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark upon your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.

If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.

Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 25 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog an unintended consequence arose in my mind as I became profoundly aware of the enormity of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.

But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study, and even physics itself has only one Universe to work with. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For example, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 75 years, or 2.4 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, has closely followed the same path as life on Earth over the past 4.0 billion years, in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 75 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and with Freeman Dyson’s two-stage theory of the origin of life on Earth. In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and find that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.

Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10^9 Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10^43 Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. Then, in Model-Dependent Realism - A Positivistic Approach to Realism we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we at long last explore what software might really be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.

The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that most computer simulations of the physical Universe suffer. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call upon the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.

The Impact of Self-Replicating Information Upon the Planet
One of the key findings of softwarephysics is concerned with the magnitude of the impact upon the planet of self-replicating information.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Note that because the self-replicating autocatalytic metabolic pathways of organic molecules, RNA, and DNA have become so heavily intertwined over time, I now simply refer to them collectively as the “genes”. Over the past 4.0 billion years, the surface of the Earth has been totally reworked by three forms of self-replicating information – the genes, memes and software, with software rapidly becoming the dominant form of self-replicating information on the planet. For more on this see:

A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia

Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact upon one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what’s it all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely a working hypothesis still in the works. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:

MoneyPhysics – my impression of the 2008 world financial meltdown.

The Fundamental Problem of Everything – if you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!

What’s It All About? and Genes, Memes and Software – my current working hypothesis on what’s it all about.

How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.

Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.

The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and The Dawn of Galactic ASI - Artificial Superintelligence - my take on some of the issues that will arise for mankind as software becomes the dominant form of self-replicating information upon the planet over the coming decades.

Making Sense of the Absurdity of the Real World of Human Affairs - how software has aided the expansion of our less desirable tendencies in recent years.

Some Specifics About These Postings
The postings in this blog are supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton upon which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read from the oldest to the most recent (the reverse of the order in which the blog displays them), beginning with my original posting on SoftwarePhysics. In addition, several universities also now offer courses on Biologically Inspired Computing that cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". At this point we will finish up with my original plan for this blog on softwarephysics with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.

For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because in order to get the Introduction to Softwarephysics listed as the first post in the context root of http://softwarephysics.blogspot.com/ I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of http://softwarephysics.blogspot.com/. The side effect of all this is that the real posting date of posts is the date that appears on the post that you get when clicking on the Newer Post link at the bottom left of the posting webpage.

SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Microsoft OneDrive.

SoftwarePhysics 101 – The Physics of Cyberspacetime - Original PowerPoint document

Entropy – A spreadsheet referenced in the document

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Sunday, July 17, 2016

An IT Perspective on the Transition From Geochemistry to Biochemistry and Beyond

One of the major realizations arising from softwarephysics has been a growing appreciation for the overwhelming impact that self-replicating information has had on the Earth over the past 4.567 billion years, and of the possibility for the latest version of self-replicating information, known to us as software, to perhaps even go on to have a major impact upon the future of our entire galaxy. Recall that in softwarephysics we define self-replicating information as:

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

So far we have seen 5 waves of self-replicating information sweep across the Earth, with each wave greatly reworking the surface and near subsurface of the planet as it came to predominance:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is now rapidly becoming the dominant form of self-replicating information on the planet, and is having a major impact on mankind as it comes to predominance. For more on this see: A Brief History of Self-Replicating Information.

How Did It All Start?
For those researchers exploring the processes that brought forth life on the Earth, and elsewhere, the most challenging question is naturally how the original self-replicating autocatalytic metabolic pathways of organic molecules bootstrapped themselves into existence in the first place. Previously, in The Origin of Software the Origin of Life we examined Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, solely with the aid of the extant organic molecules of the day. Similarly, in Programming Clay we examined Alexander Graham Cairns-Smith’s theory, first proposed in 1966, that there was a clay microcrystal precursor to RNA that got it all started. Personally, as a former exploration geophysicist, I have always favored the idea that a geochemical precursor existed near the early hydrothermal vents of the initial tectonic spreading centers of the Earth that acted as a stepping stone between geochemistry and biochemistry. This is because, on the early Earth, the only real chemistry available was geochemistry. But how can geochemistry become biochemistry? I just finished reading a couple of beautiful papers that can be downloaded as PDF files:

The inevitable journey to being. Russell M.J., W. Nitschke, and E. Branscomb (2013) https://www.researchgate.net/publication/237098222_The_inevitable_journey_to_being

Turnstiles and bifurcators: The disequilibrium converting engines that put metabolism on the road. Branscomb E., and M.J. Russell (2013) http://www.sciencedirect.com/science/article/pii/S0005272812010420

that describe the Submarine Hydrothermal Alkaline Spring Theory for the emergence of life and how geochemistry could have become biochemistry on the early Earth in the hydrothermal vents of its initial tectonic spreading centers. The work described in both papers was supported in part by the Institute for Genomic Biology at my old alma mater, the University of Illinois at Urbana-Champaign.

Figure 1 - A simplified diagram of a hydrothermal mound and vent.

Figure 2 - A real hydrothermal mound and vent in action.

The above papers might at first seem to be a bit challenging for most IT professionals, but not if you just focus on the overall concepts. Basically, they describe a possible early form of geochemical metabolism that could have taken the naturally occurring high-temperature low-entropy internal heat of the Earth and degraded it into a lower-temperature higher-entropy form of heat that was then dumped into the Hadean oceans of the early Earth via a complex set of geochemical reactions, in what the authors call a Free Energy Converter (FEC) cycle. You can think of a Free Energy Converter cycle as a huge loop of computer code that simply takes the available free energy found in the high-temperature regimes of the Earth at a depth of several miles and converts it into energetic organic molecules that could later fuel self-replicating metabolic pathways of organic molecules. The authors point out that not every step in the necessary geochemical reactions of a Free Energy Converter needs to increase the entropy of the Universe in accordance with the second law of thermodynamics, because if any step that decreases the entropy of the Universe is logically coupled to a step that increases the entropy of the Universe even more, the whole process can still proceed forward under the limitation that overall entropy must always increase, as dictated by the second law of thermodynamics. But in order for that to happen, there needs to be some processing logic added to the infinite loop that runs the Free Energy Converter cycle. The authors point to a couple of papers that describe how Vigna radiata (mung beans) and Thermotoga maritima, a high-temperature-loving bacterium, accomplish this logical processing:

The essential feature of this linkage in condensations is that it makes each of the two processes conditional on the other— and with a specific logical directionality, namely, a proton (or sodium ion) can pass from outside to inside if, and only if, that happens coincidentally with the condensation and release of a molecule of pyrophosphate, or conversely, a proton (or sodium ion) can pass in the opposite direction, if and only if that happens coincidentally with the hydrolysis of a pyrophosphate and the release of the orthophosphate products. Because of this coupling logic, the device can function as a reversible free energy converter; converting, for example, the controlled dissipation of an outside-to-inside proton gradient in the production of a disequilibrium in the concentration of pyrophosphate versus orthophosphate (i.e. acting as a proton-gradient-driven pyrophosphate synthase). Or it can function equally well in reverse as a proton-pumping pyrophosphatase. Which way it goes depends, of course, on which way yields a net negative change in free energy (equivalently a net positive rate of entropy production).

Another necessary condition for such a logical coupling to work is to make it a one-way street. The authors describe this as adding some "turnstile logic" that allows the coupled reaction to only work in one direction - the direction that outputs low-entropy high-energy organic molecules. Such organic molecules could then later be used as a fuel for biochemical metabolic pathways of self-replicating information that could subsequently arise as parasites feeding off the output of the geochemical Free Energy Converter that is converting the free energy of the Earth's interior into high-energy organic molecules:

To emphasize the critical mechanistic point here, the functional essence of the coupling that achieves FEC is that the driving flux is made conditional on (is ‘gated’ by) the coincident occurrence of the other (driven) flux—which flow, being inherently improbable (i.e. anti-entropic), would, of course, never proceed (‘upstream’) on its own. However, the coupling of two processes as above envisaged is under no stretch ‘automatic’ or trivial; and is in fact a quite special state of physical affairs. In essentially all situations of interest this linking of the two processes into one, requires, and is mediated by, a macroscopically ordered and dynamic ‘structure’ which acts functionally as a “double turnstile”. The turnstile permits a token of the driving flux J1 to proceed downhill if and only if there is the coincident occurrence of some fixed ‘token’ of the driven flux J2 moving “uphill” by chance (albeit as an inherently improbable event) in the same movement of the turnstile. Embodying such conditional, turnstile-like gating mechanisms is what is universally being managed by such evolutionary marvels as the redox-driven proton pumps we will consider in detail later and indeed all other biological devices that carry out what is conventionally termed “energy conservation” (which name, we however argue, misleads in both of its terms).

In many ways this proposed geochemical metabolism of the Earth's natural heat can be compared to photosynthesis, which takes natural energy from the Sun and converts it into energy-rich organic molecules.

An IT Perspective
The authors view such geochemical Free Energy Converters as heat engines converting high-temperature heat into something of biochemical value, while dumping some energy into lower-temperature heat to satisfy the entropy increase demanded by the second law of thermodynamics. But as an IT professional, when I look at the complicated logical operations of the described geochemical processes, I see software running on some primitive hardware instead. As we saw in The Demon of Software, heat engines and information are intricately intertwined, so perhaps these infinite loops of early geochemical metabolic pathways can also be viewed as primitive forms of data processing too. The question is: could these early geochemical metabolic pathways also be considered forms of self-replicating information? That is a difficult question to answer because these early geochemical metabolic pathways seem to just naturally form as heat migrates from the Earth's mantle to its crust via convection cells. In that sense, could we consider the simple mantle convection cells that drive plate tectonics to be forms of self-replicating information too? Clearly, this all gets rather murky as we look back further in deep time.
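To make that software analogy concrete, here is a purely conceptual Python toy, not chemistry: it treats the Free Energy Converter as a loop containing the "turnstile logic" described above, in which an entropy-decreasing (driven) step is allowed to fire only when it is coupled to a driving step that produces even more entropy. All of the names and numbers (driving_step, DRIVEN_COST, the arbitrary entropy units) are invented solely for illustration:

```python
import random

DRIVEN_COST = 4.0  # entropy decrease needed to condense one "fuel" molecule (arbitrary units)

def driving_step():
    """Entropy produced this cycle by dissipating the geochemical gradient (arbitrary units)."""
    return random.uniform(0.0, 10.0)

def free_energy_converter(cycles):
    """Run the coupled loop; total entropy can only go up, yet low-entropy fuel accumulates."""
    total_entropy = 0.0
    stored_fuel = 0
    for _ in range(cycles):
        produced = driving_step()
        # Turnstile logic: the entropy-decreasing (driven) step fires if and only if
        # it is coupled to a driving step that produces even more entropy, so the
        # second law is satisfied on every turn of the loop.
        if produced > DRIVEN_COST:
            total_entropy += produced - DRIVEN_COST
            stored_fuel += 1
        else:
            total_entropy += produced  # the gradient just dissipates, nothing useful is stored
    return total_entropy, stored_fuel

entropy, fuel = free_energy_converter(1000)
print(f"Entropy produced: {entropy:.0f}   Fuel molecules stored: {fuel}")
```

Even though every turn of the loop increases the total entropy of this toy Universe, low-entropy "fuel" still steadily accumulates, which is the essential trick of the coupled turnstile.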

Unlike all of the previous transitions of one form of self-replicating information into another form of self-replicating information, the transition from geochemistry to biochemistry seems to have been more or less a "clean break". Very few living things on the Earth today rely upon hydrothermal vents for their existence. So the self-replicating autocatalytic metabolic pathways of organic molecules, feeding off the output of geochemical metabolism, started off as parasites like all new forms of self-replicating information, but unlike most other forms of self-replicating information, they did not go on to form symbiotic relationships with the geochemical pathways. Instead, the self-replicating autocatalytic metabolic pathways of organic molecules seem to have made a "clean break" with the geochemical metabolic processes that were living off the internal heat of the early Earth by developing a new energy source called photosynthesis. Consequently, unlike the self-replicating autocatalytic metabolic pathways of organic molecules, RNA, DNA, and memes of the past, not much of the geochemical metabolic pathways of the distant past seems to have been dragged along as subsequent forms of self-replicating information came to be. But that raises the question of why that should be, and the answer might shed some light on the current rise of software as the dominant form of self-replicating information on the planet. Will software carry along its predecessors, or will it make a "clean break" with them? Currently, we are living in one of those very rare times when a new form of self-replicating information, in the form of software, is coming to predominance, and it is not clear that software will drag along the self-replicating autocatalytic metabolic pathways of organic molecules, RNA, DNA, and memes of the past as it becomes the dominant form of self-replicating information on the planet. Most certainly the AI software of the future will need to carry along the memes required to generate software and the necessary scientific and mathematical memes to build the hardware that it runs upon, but it would really not need to carry along the ancient biochemical metabolic pathways, RNA, and DNA in order to survive. The possibility of software making a "clean break" with biochemistry, just as biochemistry seems to have made a "clean break" with geochemistry, does not bode well for mankind.

Currently, the world is run by a large number of competing meme-complexes composed of memes residing in the minds of human beings. This is in contradiction to what most human beings believe because we all naturally think of ourselves as rational free-acting agents that collectively run the world together. But I contend that the only way to understand the absurd real world of human affairs is to take the Dawkinsian position that we are really DNA survival machines with minds infected by a large number of memes. The reason we all think of ourselves as rational free-acting agents is that we are all committed Cartesian Dualists at heart, with, seemingly, a little "Me" running around in our heads (see The Ghost in the Machine the Grand Illusion of Consciousness for details). And so far, software is still being generated by the software-generating memes within the minds of programmers. But this will soon end when software is finally able to self-replicate on its own, without the aid of the software-generating memes in the minds of human programmers, and in doing so, will initiate a Software Singularity (see Machine Learning and the Ascendance of the Fifth Wave for details). As software developers and software users, nearly all of mankind is now actively participating in this transition, because software has forged very strong parasitic/symbiotic relationships with nearly all of the meme-complexes on the planet. For many years, I held the position that if we had actually been around 4.0 billion years ago to watch the origin of life on Earth take place, we would still be sitting around today arguing about just exactly what had happened, just as we still manage today to sit around and argue about what exactly happened for all of the other events in human history. However, now I am more of the opinion that, being the self-absorbed species that we are, we would probably not have even noticed it happening at all! That certainly seems to be the case in the present era, as software is rapidly becoming the dominant form of self-replicating information on the planet before our very eyes, with very few really paying much attention to the fact that we are now living in one of those very rare times when a new form of self-replicating information is coming to predominance.

The key thing to remember is that all new forms of self-replicating information are very disruptive in nature. New forms of self-replicating information usually begin as a mildly parasitic form of self-replicating information that invades an existing host that usually is also a form of self-replicating information, and over time, forms a parasitic/symbiotic relationship with the host. But eventually, these new forms of self-replicating information take over and come to dominate the environment, and that is very disruptive. This is certainly true for software today. In Crocheting Software we saw that the origin of software was such a hodge-podge of precursors, false starts, and failed attempts that it is nearly impossible to pinpoint an exact date for its origin, but for the purposes of softwarephysics I have chosen May of 1941, when Konrad Zuse first cranked up his Z3 computer, as the starting point for modern software. Zuse wanted to use his Z3 computer to perform calculations for aircraft designs that were previously done manually in a very tedious manner. So initially software could not transmit memes, it could only perform calculations, like a very fast adding machine, and so it was a pure parasite. But then the business and military meme-complexes discovered that software could also be used to transmit memes, and software then entered into a parasitic/symbiotic relationship with the memes. Software allowed these meme-complexes to thrive, and in return, these meme-complexes heavily funded the development of software of ever increasing complexity, until software became ubiquitous, forming strong parasitic/symbiotic relationships with nearly every meme-complex on the planet. In the modern day, the only way memes can now spread from mind to mind without the aid of software is when you directly speak to another person next to you. Even if you attempt to write a letter by hand, the moment you drop it into a mailbox, it will immediately fall under the control of software.

We are now entering the final stage, in which software is locked in an intense battle with the memes for predominance, and this is causing a great deal of social, political and economic unrest, as discussed in The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight, Machine Learning and the Ascendance of the Fifth Wave and Making Sense of the Absurdity of the Real World of Human Affairs. The main difficulty is that software has displaced many workers over the past 75 years, and as software comes to predominance, it will eventually reduce all human labor to a value of zero over the next 10 - 100 years. So in a sense, software will soon be eliminating, for most of mankind, the metabolic pathway of earning a living through labor that has made civilization possible for the past 10,000 years. The resulting social chaos can only help to hasten the day when software finally takes over control of the Earth, and perhaps makes a "clean break" with the biochemistry that has dominated the Earth for nearly 4.0 billion years (see The Dawn of Galactic ASI - Artificial Superintelligence for details).

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Sunday, June 05, 2016

Making Sense of the Absurdity of the Real World of Human Affairs

It was the best of times,
it was the worst of times,
it was the age of wisdom,
it was the age of foolishness,
it was the epoch of belief,
it was the epoch of incredulity,
it was the season of Light,
it was the season of Darkness,
it was the spring of hope,
it was the winter of despair,

we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way— in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.


I dearly love those profound opening words from Charles Dickens' A Tale of Two Cities (1859) because for me they summarize the best description of the human condition ever composed by the human mind. As any student of history can attest, back in 1859 Dickens was simply stating that the current times are no different than any other, and that it has always been this way, and that there has always been some element of absurdity in the real world of human affairs. In fact, many religions in the past featured first-order approximations to explain the above. But for once our times may truly be different because of the advancing effects of software on the world, and that will be the subject of this brief posting.

As I have stated in many previous postings on this blog on softwarephysics, I started this blog on softwarephysics about 10 years ago with the hopes of helping the IT community to better deal with the daily mayhem of life in IT, after my less than stunning success in doing so back in the 1980s when I first began developing softwarephysics for my own use. But in the process of doing so, I believe I accidentally stumbled upon "what's it all about" as outlined in What’s It All About?. Softwarephysics explains that it is all about self-replicating information in action, and that much of today's absurdity stems from the fact that we are now living in one of those very rare transitionary periods when a new form of self-replicating information, in the form of software, is coming to dominate. For more on that please see A Brief History of Self-Replicating Information. Much of this realization arose from the work of Richard Dawkins, Susan Blackmore, Stuart Kauffman, Lynn Margulis, Freeman Dyson and of course Charles Darwin. The above is best summed up by Susan Blackmore's brilliant TED presentation at:

Memes and "temes"
http://www.ted.com/talks/susan_blackmore_on_memes_and_temes.html

Note that I consider Susan Blackmore's temes to really be technological artifacts that contain software. After all, an iPhone without software is simply a flake tool with a very dull edge.

So to really make sense of the absurdities of the modern world, one must first realize that we are all DNA survival machines with minds infected by memes in a Dawkinsian sense, but the chief difference this time is that we now have software rapidly becoming the dominant form of self-replicating information on the planet, and that is inducing further stresses that are leading to increased levels of absurdity. As I outlined in The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and Machine Learning and the Ascendance of the Fifth Wave, one of the initial tell-tale signs that software is truly coming to predominance has been the ability of software to displace workers over the past 50 years or so. The combination of globalization, made possible by software, and the automation of many middle-class jobs through the application of software, has led to a great deal of economic strife recently. Economic strife is not a good thing because it frequently leads to political absurdities like the 20th century Bolshevik Revolution in Russia or the rise of National Socialism in Germany. Economic strife can also lead people who are economically distressed to take up very conservative political or religious memes that condone violence, as a way to alleviate the growing pain they feel as they become alienated from society by software. So once again, the appeal of simple memes that purport to alleviate economic distress, or to eliminate the perceived heretical thoughts and actions of others, is on the rise worldwide, and these simple memes have naturally entered into a parasitic/symbiotic relationship with social media software to aid the self-replication of both forms of self-replicating information. In recent years, this parasitic/symbiotic relationship of such simple-minded memes with social media software has led to the singling out of groups of people for Sonderbehandlung or "special treatment", leading to acts of terrorism and ethnic cleansing throughout the world.

Please Stop, Breathe and Think
So before you decide to blow somebody away for some strange reason, or even before you decide to vote for somebody who might decide to blow lots of people away for some strange reason in your name, please first stop to breathe and think about what is really going on. Chances are you are simply responding to some parasitic memes in your mind that really do not have your best interest at heart, aided by some software that could not care less about your ultimate disposition. They are just mindless forms of self-replicating information that have been selected for the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity. The memes and software that are inciting you to do harm to others are just mindless forms of self-replicating information trying to self-replicate at all costs, with little regard for you as an individual. For them you are just a disposable DNA survival machine with a disposable mind that has a lifespan of less than 100 years. They just need to replicate into the minds of others before you die, and if blowing yourself up in a marketplace filled with innocents, or going down in a hail of bullets from law enforcement, serves that purpose, they will certainly have you do so, because they cannot do otherwise. Unlike you, they cannot think. Only you can do that.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Saturday, May 14, 2016

Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework

In my last posting Cloud Computing and the Coming Software Mass Extinction I explained that having a sound theoretical framework, like all of the other sciences have, would be beneficial because it would allow IT professionals to make better decisions, and that the purpose of softwarephysics was to do just that. A good example that demonstrates the value of having a sound theoretical framework to base decisions on is the ongoing battle between Agile development and the Waterfall methodology for development. There are opinions on both sides as to which methodology is superior, but how does one make an informed decision without the aid of a sound theoretical framework? Again, this is actually a very old battle that has been going on for more than 50 years. For younger IT professionals, Agile development might seem like the snazzy new way of doing things, just as each new generation is always surprised to discover that sex was first invented when they happened to turn 16 years of age. Agile certainly has lots of new terminology, but Agile is actually the way most commercial software was developed back in the 1960s and early 1970s. In fact, if you look back far enough you can easily find papers written in the 1950s that contained both Agile and Waterfall concepts in them.

So what is the big deal with the difference between Agile and Waterfall development, and which is better? People have literally written hundreds of books on the topic, but succinctly here is the essential difference:

1. Waterfall Development - Build software like you build an office building. Spend lots of time up front on engineering and design work to produce detailed blueprints and engineering diagrams before any construction begins. Once construction begins the plans are more or less frozen and construction follows the plans. The customer does not start using the building until it is finished and passes many building code inspections. Similarly, with the Waterfall development of software lots of design documents are created first before any coding begins, and the end-user community does not work with the software until it is completed and has passed a great deal of quality assurance testing.

2. Agile Development - Develop software through small incremental changes that do not require a great deal of time - weeks instead of months or years. Detailed design work and blueprinting are kept to a minimum. Rather, rapidly getting some code working that end-users can begin interacting with is the chief goal. The end-users start using the software almost immediately, and the software evolves along with the customer's needs because the end-users are part of the development team.

Nobody knew it at the time, but Agile development was routinely used back in the 1960s and early 1970s simply because computers did not have enough memory to store large programs. When I first started programming back in 1972, a mainframe computer had about 1 MB of memory, and that 1 MB of memory might be divided into (2) 256 KB regions and (4) 128 KB regions to run several programs at the same time. That meant that programs were limited to a maximum size of 128 - 256 KB of memory, which is perhaps 10,000 times smaller than today. Since you cannot do a great deal of processing logic in 128 - 256 KB of memory, many small programs were strung together and run in batch mode as a job stream of many steps, so that the overall job delivered the required level of data processing power. Each step in a job stream ran a small program, using a maximum of 128 - 256 KB of memory, that then wrote to an output tape that was the input tape for the next step in the job stream. Flowcharts were manually drawn with a plastic template to document the processing flow of the program within each job step and of the entire job stream. Ideally, the flowcharts were supposed to be created before the programs were coded, but because the programs were so small, on the order of a few hundred lines of code each, many times the flowcharts were created after the programs were coded and tested as a means of documentation after the fact. Because these programs were so small, it just seemed natural to code them up without lots of upfront design work in a prototyping manner similar to Agile development.

Figure 1 - A plastic IBM flowchart template was used to create flowcharts of program and job stream logic on paper.
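
To make the batch job stream idea more concrete for modern readers, here is a minimal sketch in Python (purely illustrative, with hypothetical step and file names - not the actual job control language or job step programs of the era) showing a job stream of small steps, where each step hands its output file to the next step, much like the output tape of one job step becoming the input tape of the next:

# A toy "job stream" of three small steps. Each step is deliberately tiny,
# like a 128 - 256 KB job step program, and hands its output file to the next step.

def extract_step(infile, outfile):
    # Step 1: copy over only the non-blank input records.
    with open(infile) as src, open(outfile, "w") as dst:
        for record in src:
            if record.strip():
                dst.write(record)

def transform_step(infile, outfile):
    # Step 2: apply one small piece of processing logic to each record.
    with open(infile) as src, open(outfile, "w") as dst:
        for record in src:
            dst.write(record.upper())

def report_step(infile):
    # Step 3: produce the final report for the whole job stream.
    with open(infile) as src:
        print(src.read())

if __name__ == "__main__":
    extract_step("input.dat", "step1.out")    # hypothetical file names
    transform_step("step1.out", "step2.out")
    report_step("step2.out")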

This all changed in the late 1970s when interactive computing began to become a significant factor in the commercial software that corporate IT departments were generating. By then mainframe computers had much more memory than they had back in the 1960s, so interactive programs could be much larger than the small programs found within the individual job steps of a batch job stream. Since interactive software had to be loaded into the computer all in one shot, required a user interface that checked the input data from end-users, and had to interact with those end-users in a dynamic manner, interactive programs were necessarily much larger than batch job step programs. These factors caused corporate IT departments to move from the Agile prototyping methodologies of the 1960s and early 1970s to the Waterfall methodology of the 1980s, and so by the early 1980s prototyping software on the fly was considered to be an immature approach. Instead, corporate IT departments decided that a formal development process was needed, and they chose the Waterfall approach used by the construction and manufacturing industries to combat the high costs of making changes late in the development process. This made sense because in the early 1980s CPU costs were still exceedingly high, so it paid to create lots of upfront design documents before coding actually began in order to minimize the CPU costs involved with creating software. For example, in the early 1980s if I had a $100,000 project, it was usually broken down as $25,000 for programming manpower costs, $25,000 for charges from IT Management and other departments in IT, and $50,000 for program compiles and test runs to develop the software. Because just running compiles and test runs of the software under development consumed about 50% of the costs of a development project, it made sense to adopt the Waterfall development model to minimize those costs.

As with all things, development fads come and go, and the current fad is always embraced by all, except for a small handful of heretics waiting in the wings to launch the next development fad. So how does one make an informed decision about how to proceed? This is where having a theoretical framework comes in handy.

The Importance of Having a Theoretical Framework
As I mentioned in my Introduction to Softwarephysics, I transitioned from being an exploration geophysicist, exploring for oil, to becoming an IT professional in Amoco's IT department in 1979. At the time, the Waterfall methodology was all the rage, and that was the way I was taught to properly develop and maintain software. But I never really liked the Waterfall methodology because I was still used to the old Agile prototyping ways that I had been using ever since I first learned how to write Fortran code to solve scientific problems. I was also putting together the first rudimentary principles of softwarephysics that would later go on to form the basis of a comprehensive theoretical framework for the behavior of software. This all led me to believe that the very popular Waterfall methodology of the time was not the way to go, and that instead a more Agile prototyping methodology that let software evolve through small incremental changes would be more productive. So in the early 1980s I was already leaning towards Agile techniques, but with a twist. What we really needed to do was to adopt an Agile biological approach to software that allowed us to grow and evolve software over time in an interactive manner with end-users actively involved in the process.

So I contend that Agile is definitely the way to go, but that the biological approach to software that is found within softwarephysics is the essential element that Agile development is still missing. Agile development is close, but it is not the final solution. Again, Agile development is just another example of IT stumbling around in the dark because it lacks a theoretical framework. Like all living things, IT eventually stumbles upon something that really works and then sticks with it. We have seen this happen many times before over the past 75 years, or 2.4 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941. IT eventually converges upon the same solutions that living things stumbled upon billions of years ago and continue to use to this very day. In the 1970s IT discovered the importance of using structured programming techniques, which are based upon compartmentalizing software functions into internal functions, and which are very similar to the compartmentalization of functions found within the organelles of eukaryotic cells. In the 1980s IT stumbled upon object-oriented programming, which mimics the organization and interaction of the different cell types in a multicellular organism. Similarly, the SOA (Service Oriented Architecture) of the past decade is simply an echo of the distant Cambrian explosion that happened 541 million years ago for living things. For more on this see the SoftwarePaleontology section of SoftwareBiology. Cloud computing is the next architectural element that IT has stumbled upon through convergence, and is very similar to the architectural organization of the social insects like ants and bees. For more on that see Cloud Computing and the Coming Software Mass Extinction.

As I explained in The Fundamental Problem of Software and the softwarephysics postings leading up to it, the fundamental problem with developing and maintaining software is the effects of the second law of thermodynamics acting upon software in a nonlinear Universe. The second law of thermodynamics is constantly trying to reduce the total amount of useful information in the Universe when we code software, creating small bugs in software whenever we work on it, and because the Universe is largely nonlinear in nature, these small bugs can produce huge problems immediately, or may initially lie dormant, only to cause problems in the distant future. So the question is, are there any complex information processing systems in the physical Universe that deal well with both the second law of thermodynamics and nonlinearity? The answer to this question is, of course, yes – living things do a marvelous job of contending with both the second law of thermodynamics and nonlinearity in this perilous Universe. Just as every programmer must assemble characters into lines of code, living things must assemble atoms into complex organic molecules in order to perform the functions of life, and because the physical Universe is largely nonlinear, small errors in these organic molecules can have disastrous effects. And this all has to be done in a Universe bent on degenerating into a state of maximum entropy and minimum useful information content thanks to our old friend the second law of thermodynamics. So it makes sense from an IT perspective to adopt these very successful biological techniques when developing, maintaining, and operating software.
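
As a toy illustration of how unforgiving nonlinearity can be, here is a minimal Python sketch of the logistic map, a classic nonlinear system: two runs that start out differing by only one part in a million track each other closely for a while - the tiny "bug" lies dormant - and then diverge completely:

# Two runs of the logistic map that differ initially by only 0.000001.
# For many iterations the two runs stay close (the bug lies dormant),
# but eventually they diverge completely because the map is nonlinear.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_good, x_buggy = 0.300000, 0.300001
for step in range(1, 41):
    x_good, x_buggy = logistic(x_good), logistic(x_buggy)
    if step % 10 == 0:
        print("step %2d: good = %.6f   buggy = %.6f" % (step, x_good, x_buggy))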

Living things do not use the Waterfall methodology to produce the most complex things we know of in the Universe. Instead, they use Agile techniques. Species evolve over time and gain additional functionality through small incremental changes honed by natural selection, and all complex multicellular organisms come to maturity from a single fertilized egg through the small incremental changes of embryogenesis.

An Agile Biological Approach to Software
So how do you apply an Agile biological approach to software? In How to Think Like a Softwarephysicist I provided a general framework on how to do so. But I also have a very good case study of how I did so in the past. For example, in SoftwarePhysics I described how I started working on BSDE - the Bionic Systems Development Environment - back in 1985, while in Amoco's IT department. BSDE was an early mainframe-based IDE (an Integrated Development Environment, like Eclipse) at a time when there were no IDEs. During the 1980s BSDE was used to grow several million lines of production code for Amoco by growing applications from embryos. For an introduction to embryology see Software Embryogenesis. The DDL statements used to create the DB2 tables and indexes for an application were stored in a sequential file called the Control File and performed the functions of genes strung out along a chromosome. Applications were grown within BSDE by turning their genes on and off to generate code. BSDE was first used to generate a Control File for an application by allowing the programmer to create an Entity-Relationship diagram using line printer graphics on an old IBM 3278 terminal.

Figure 2 - BSDE was run on IBM 3278 terminals, using line printer graphics, and in a split-screen mode. The embryo under development grew within BSDE on the top half of the screen, while the code generating functions of BSDE were used on the lower half of the screen to insert code into the embryo and to do compiles on the fly while the embryo ran on the upper half of the screen. Programmers could easily flip from one session to the other by pressing a PF key.

After the Entity-Relationship diagram was created, the programmer used a BSDE option to create a skeleton Control File with DDL statements for each table on the Entity-Relationship diagram, and each skeleton table had several sample columns with the syntax for various DB2 datatypes. The programmer then filled in the details for each DB2 table. When the first rendition of the Control File was completed, another BSDE option was used to create the DB2 database for the tables and indexes on the Control File. Another BSDE option was used to load up the DB2 tables with test data from sequential files. Each DB2 table on the Control File was considered to be a gene. Next, a BSDE option was run to generate an embryo application. The embryo was a 10,000-line PL/1, COBOL, or REXX application that performed all of the primitive functions of the new application. The programmer then began to grow the embryo inside of BSDE in a split-screen mode. The embryo ran on the upper half of an IBM 3278 terminal and could be viewed in real time, while the code generating options of BSDE ran on the lower half of the IBM 3278 terminal. BSDE was then used to inject new code into the embryo's programs by reading the genes in the Control File for the embryo in real time while the embryo was running in the top half of the IBM 3278 screen. BSDE had options to compile and link modified code on the fly while the embryo was still executing. This allowed for a tight feedback loop between the programmer and the application under development. In fact, many times BSDE programmers sat with end-users and co-developed software together on the fly in a very Agile manner. When the embryo had grown to full maturity, BSDE was then used to create online documentation for the new application, and was also used to automate the install of the new application into production. Once in production, BSDE-generated applications were maintained by adding additional functions to their embryos.

Since BSDE was written using the same kinds of software that it generated, I was able to use BSDE to generate code for itself. The next generation of BSDE was grown inside of its maternal release. Over a period of seven years, from 1985 – 1992, more than 1,000 generations of BSDE were generated, and BSDE slowly evolved in an Agile manner into a very sophisticated tool through small incremental changes. BSDE dramatically improved programmer efficiency by greatly reducing the number of buttons programmers had to push in order to generate software that worked.

Figure 3 - Embryos were grown within BSDE in a split-screen mode by transcribing and translating the information stored in the genes in the Control File for the embryo. Each embryo started out very much the same, but then differentiated into a unique application based upon its unique set of genes.

Figure 4 – BSDE appeared as the cover story of the October 1991 issue of the Enterprise Systems Journal

BSDE had its own online documentation that was generated by BSDE. Amoco's IT department also had a class to teach programmers how to get started with BSDE. As part of the curriculum Amoco had me prepare a little cookbook on how to build an application using BSDE:

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

I wish that I could claim that I was smart enough to have sat down and thought up all of this stuff from first principles, but that is not what happened. It all just happened through small incremental changes in a very Agile manner over a very long period of time, and most of the design work was done subconsciously, if at all. Even the initial BSDE ISPF edit macros happened through serendipity. When I first started programming DB2 applications, I found myself copying in the DDL CREATE TABLE statements from the file I used to create the DB2 database into the program that I was working on. This file, with the CREATE TABLE statements, later became the Control File used by BSDE to store the genes for an application. I would then go through a series of editing steps on the copied-in data to transform it from a CREATE TABLE statement into a DB2 SELECT, INSERT, UPDATE, or DELETE statement. I would do the same thing all over again to declare the host variables for the program. Being a lazy programmer, I realized that there was really no thinking involved in these editing steps and that an ISPF edit macro could do the job equally well, only much more quickly and without error, so I went ahead and wrote a couple of ISPF edit macros to automate the process. I still remember the moment when it first hit me. For me it was very much like the scene in 2001 - A Space Odyssey, when the man-ape picks up a wildebeest thighbone and starts to pound the ground with it. My ISPF edit macros were doing the same thing that happens when the information in a DNA gene is transcribed into a protein! A flood of biological ideas poured into my head over the next few days, because at last I had a solution for my pent-up ideas about nonlinear systems and the second law of thermodynamics that were making my life so difficult as a commercial software developer. We needed to "grow" code – not write code!

BSDE began as a few simple ISPF edit macros running under ISPF edit. ISPF is the software tool that mainframe programmers still use today to interface to the IBM MVS and VM/CMS mainframe operating systems and contains an editor that can be greatly enhanced through the creation of edit macros written in REXX. I began BSDE by writing a handful of ISPF edit macros that could automate some of the editing tasks that a programmer needed to do when working on a program that used a DB2 database. These edit macros would read a Control File, which contained the DDL statements to create the DB2 tables and indexes. The CREATE TABLE statements in the Control File were the equivalent of genes, and the Control File itself performed the functions of a chromosome. For example, a programmer would retrieve a skeleton COBOL program, with the bare essentials for a COBOL/DB2 program, from a stock of reusable BSDE programs. The programmer would then position their cursor in the code to generate a DB2 SELECT statement and hit a PFKEY. The REXX edit macro would read the genes in the Control File and would display a screen listing all of the DB2 tables for the application. The programmer would then select the desired tables from the screen, and the REXX edit macro would then copy the selected genes to an array (mRNA). The mRNA array was then sent to a subroutine that inserted lines of code (tRNA) into the COBOL program. The REXX edit macro would also declare all of the SQL host variables in the DATA DIVISION of the COBOL program and would generate code to check the SQLCODE returned from DB2 for errors and take appropriate actions. A similar REXX ISPF edit macro was used to generate screens. These edit macros were also able to handle PL/1 and REXX/SQL programs. They could have been altered to generate the syntax for any programming language such as C, C++, or Java. As time progressed, BSDE took on more and more functionality via ISPF edit macros. Finally, there came a point where BSDE took over and ISPF began to run under BSDE. This event was very similar to the emergence of the eukaryotic architecture for cellular organisms. BSDE consumed ISPF like the first eukaryotic cells that consumed prokaryotic bacteria and used them as mitochondria and chloroplasts. With continued small incremental changes, BSDE continued to evolve.
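
The following minimal Python sketch (illustrative only - the Control File contents, table names, and generated SQL are hypothetical stand-ins, not the original REXX edit macros or actual BSDE output) captures the basic transcription step described above: read the CREATE TABLE genes from a Control File, copy the selected gene into a working structure, and translate it into an embedded SQL statement with host variables.

import re

# A hypothetical Control File "chromosome" containing two CREATE TABLE "genes".
CONTROL_FILE = """
CREATE TABLE CUSTOMER (
    CUST_ID   INTEGER,
    CUST_NAME CHAR(30)
);
CREATE TABLE ORDERS (
    ORDER_ID  INTEGER,
    CUST_ID   INTEGER
);
"""

def read_genes(control_file_text):
    # Parse each CREATE TABLE gene into (column name, datatype) pairs.
    genes = {}
    for gene in re.finditer(r"CREATE TABLE (\w+)\s*\((.*?)\);", control_file_text, re.S):
        table, body = gene.group(1), gene.group(2)
        columns = [line.split()[:2] for line in body.strip().splitlines()]
        genes[table] = [(name.rstrip(","), dtype.rstrip(",")) for name, dtype in columns]
    return genes

def transcribe_select(table, columns):
    # "Transcribe" the gene into lines of embedded SQL with host variables,
    # the kind of code the edit macros injected into a program being edited.
    col_list  = ", ".join(name for name, _ in columns)
    host_vars = ", ".join(":" + name for name, _ in columns)
    return ("EXEC SQL\n"
            "    SELECT " + col_list + "\n"
            "    INTO "   + host_vars + "\n"
            "    FROM "   + table + "\n"
            "END-EXEC.")

genes = read_genes(CONTROL_FILE)
print(transcribe_select("CUSTOMER", genes["CUSTOMER"]))

As described above, the real edit macros went further - they also declared the host variables in the DATA DIVISION of the COBOL program and generated code to check the SQLCODE returned from DB2 - but the basic flow of reading a gene and injecting generated code into a program was the same.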

I noticed that I kept writing the same kinds of DB2 applications, with the same basic body plan, over and over. At the time I did not know it, but these were primarily Model-View-Controller (MVC) applications. For more on the MVC design pattern please see Software Embryogenesis. From embryology, I got the idea of using BSDE to read the Control File for an application and to generate an "embryo" for the application based upon its unique set of genes. The embryo would perform all of the things I routinely programmed over and over for a new application. Once the embryo was generated for a new application from its Control File, the programmer would then interactively "grow" code and screens for the application. With time, each embryo differentiated into a unique individual application in an Agile manner until the fully matured application was delivered into production by BSDE. At this point, I realized that I could use BSDE to generate code for itself, and that is when I started using BSDE to generate the next generation of BSDE. This technique really sped up the evolution of BSDE because I had a positive feedback loop going. The more powerful BSDE became, the faster I could add improvements to the next generation of BSDE through the accumulated functionality inherited from previous generations.
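
The embryo idea can be sketched the same way (again a hypothetical Python illustration, not actual BSDE output): from the same set of genes, generate a standard Model-View-Controller skeleton for every table, so that each new application starts out with the same primitive body plan and then differentiates as the programmer grows it.

# Hypothetical genes, as parsed from a Control File, drive the generation of
# a crude MVC "embryo": one screen stub per table that the programmer grows.

genes = {
    "CUSTOMER": [("CUST_ID", "INTEGER"), ("CUST_NAME", "CHAR(30)")],
    "ORDERS":   [("ORDER_ID", "INTEGER"), ("CUST_ID", "INTEGER")],
}

def generate_screen(table, columns):
    # Generate a character-mode screen stub for one table, one input field
    # per column, roughly in the spirit of a generated maintenance screen.
    lines = ["--------------- " + table + " MAINTENANCE ---------------"]
    for name, dtype in columns:
        lines.append("%-12s ===> ____________________  (%s)" % (name, dtype))
    lines.append("ENTER = Update    PF3 = Exit")
    return "\n".join(lines)

def generate_embryo(genes):
    # The embryo is the full set of generated stubs, one per gene, that the
    # programmer then grows and differentiates into the mature application.
    return {table: generate_screen(table, columns) for table, columns in genes.items()}

if __name__ == "__main__":
    for table, screen in generate_embryo(genes).items():
        print(screen + "\n")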

Embryos were grown within BSDE using an ISPF split-screen mode. The programmer would start up a BSDE session and run Option 4 – Interactive Systems Development from the BSDE Master Menu. This option would look for an embryo, and if it did not find one, would offer to generate an embryo for the programmer. Once an embryo was implanted, the option would turn the embryo on and the embryo would run inside of the BSDE session with whatever functionality it currently had. The programmer would then split his screen with PF2 and another BSDE session would appear in the lower half of his terminal. The programmer could easily toggle control back and forth between the upper and lower sessions with PF9. The lower session of BSDE was used to generate code and screens for the embryo on the fly, while the embryo in the upper BSDE session was fully alive and functional. This was possible because BSDE generated applications that used ISPF Dialog Manager for screen navigation, which was an interpretive environment, so compiles were not required for screen changes. If your logic was coded in REXX, you did not have to do compiles for logic changes either, because REXX was an interpretive language. If PL/1 or COBOL were used for logic, BSDE had facilities to easily compile code for individual programs after a coding change, and ISPF Dialog Manager would simply load the new program executable when that part of the embryo was exercised. These techniques provided a tight feedback loop so that programmers and end-users could immediately see the effects of a change as the embryo grew and differentiated.

But to fully understand how BSDE was used to grow embryos through small incremental changes into fully mature applications, we need to turn to how that is accomplished in the biosphere. For example, many people find it hard to believe that all human beings evolved from a single primitive cell in the very distant past, yet we all managed to do that and in only about 9 months of gestation! Indeed, it does seem like a data processing miracle that a complex human body can grow and differentiate into 100 trillion cells, composed of several hundred different cell types that ultimately form the myriad varied tissues within a body, from a single fertilized egg cell. In biology the study of this incredible feat is called embryogenesis, or developmental biology, and this truly amazing process from a data processing perspective is certainly worthy of emulating when developing software. So let's spend some time seeing how that is done.

Human Embryogenesis
Most multicellular organisms follow a surprisingly similar sequence of steps to form a complex body, composed of billions or trillions of eukaryotic cells, from a single fertilized egg. This is a sure sign of some inherited code at work that has been tweaked many times to produce a multitude of complex body plans or phyla by similar developmental processes. Since many multicellular life forms follow a similar developmental theme let us focus, as always, upon ourselves and use the development of human beings as our prime example of how developmental biology works. For IT professionals and other readers not familiar with embryogenesis, it would be best now to view this short video before proceeding:

Medical Embryology - Difficult Concepts of Early Development Explained Simply https://www.youtube.com/watch?annotation_id=annotation_1295988581&feature=iv&src_vid=rN3lep6roRI&v=nQU5aKKDwmo

Basically, a fertilized egg, or zygote, begins to divide many times over, without the zygote really increasing in size at all. After a number of divisions, the zygote becomes a ball of undifferentiated cells that are all the same and is known as a morula. The morula then develops an interior hollow center called a blastocoel. The hollow ball of cells is known as a blastula and all the cells in the blastula are undifferentiated, meaning that they are all still identical in nature.

Figure 5 – A fertilized egg, or zygote, divides many times over to form a solid sphere of cells called a morula. The morula then develops a central hole to become a hollow ball of cells known as a blastula. The blastula consists of identical cells. When gastrulation begins, some cells within the blastula begin to form three layers of differentiated cells – the ectoderm, mesoderm, and endoderm. The above figure does not show the amnion, which forms just outside of the infolded cells that create the gastrula. See Figure 6 for the location of the amnion.

The next step is called gastrulation. In gastrulation one side of the blastula breaks symmetry and folds into itself and eventually forms three differentiated layers – the ectoderm, mesoderm and endoderm. The amnion forms just outside of the gastrulation infold.

Figure 6 – In gastrulation, cells infold and differentiate to form three layers of differentiated cells - the ectoderm, mesoderm and endoderm.

Figure 7 – Above is a close up view showing the ectoderm, mesoderm and endoderm forming from the primitive streak.

The cells of the endoderm go on to differentiate into the internal organs or guts of a human being. The cells of the mesoderm, or the middle layer, go on to form the muscles and connective tissues that do most of the heavy lifting. Finally, the cells of the ectoderm go on to differentiate into the external portions of the human body, like the skin and nerves.

Figure 8 – Some examples of the cell types that develop from the endoderm, mesoderm and ectoderm.



Figure 9 – A human being develops from the cells in the ectoderm, mesoderm and endoderm as they differentiate into several hundred different cell types.



Figure 10 – A human embryo develops through small incremental changes into a fully mature newborn in an Agile manner. Living things, the most complex data processing machines in the Universe, never use the Waterfall methodology to build new bodies or to evolve over time into new species. Living things always develop new functionality through small incremental changes in an Agile manner.

Growing Embryos in an Agile Manner
At the time, IBM had a methodology called JAD - Joint Application Development. In a JAD the programmers and end-users met together in a room and roughed out the design for a new application on flip charts in an interactive manner. The flip chart papers were then taped to the walls for all to see. At the end of the day, the flip charts were gathered by an attending systems analyst who then attempted to assemble the JAD flip charts into a User Requirements document that was then carried into the normal Waterfall development process. This seemed rather cumbersome, so with BSDE we tried to do some JAD work in an interactive manner with end-users by growing Model-View-Controller (MVC) embryos together.

In an MVC application, the data Model was stored on DB2 tables, forming the "guts" or endoderm tissues of the application. The View code consisted of ISPF screens and reports that could be viewed by the end-user, forming the ectoderm tissues of the application. Finally, the Model and View were connected together by the Controller code that did most of the heavy lifting, forming the mesoderm tissues of the application. Controller code was by far the most difficult to code and was the most expensive part of a project. So as in human gastrulation, we grew embryos from the outside-in and the inside-out, finally having the external ectoderm tissues meet the internal endoderm tissues in the middle at the mesoderm tissues of the Controller code.

Basically, we would first start with some data modeling with the end-user to rough out the genes in the Control File chromosome. Then we had BSDE generate an MVC embryo for the application based upon its genes in the Control File. Next we roughed out the ectoderm ISPF screens for the application, using BSDE to generate the ISPF screens and the field layouts on each ISPF screen. We then used BSDE to generate REXX, COBOL or PL/1 programs to navigate the flow of the ISPF screens. So we worked on the endoderm and ectoderm with the end-users first, since they were very easy to change and they determined the look and feel of the application. By letting end-users first interact with an embryo that consisted of just the end-user ectoderm tissues, forming the user interface, and the underlying DB2 tables of the endoderm, it was possible to rough out an application with no user requirements documentation at all in a very Agile manner. Of course, as end-users interacted with the embryo, many times we discovered that new data elements needed to be added to our DB2 model stored on the Control File, so we would just add the new columns to the genes on the Control File and then regenerate the impacted ISPF screens from the modified genes. Next we added validation code to the embryo to validate the input fields on the ISPF screens, using BSDE templates. BSDE came with a set of templates of reusable code that could be imported for this purpose. This was before the object-oriented programming revolution in IT, so the BSDE reusable code templates performed the same function as the class library of an object-oriented programming language. Finally, mockup reports were generated by BSDE. BSDE was used to create a report mockup file that looked like what the end-user wanted to see in an online or line printer report. BSDE then took the report mockup file and generated a COBOL or PL/1 program that produced the report output to give the end-user a good feel for how the final report would actually look.
At this point end-users could fully navigate the embryo and interact with it in real time. They could pretend to enter data into the embryo as they navigated the embryo's ISPF screens, and they could see the embryo generate reports. Once the end-user was happy with the embryo, we would then use BSDE to generate code for the mesoderm Controller tissues by having it create SELECT, INSERT, UPDATE and DELETE SQL statements in the embryonic programs. We also used the BSDE reusable code templates for processing logic in the mesoderm Controller code that connected the ectoderm ISPF screens to the endoderm DB2 database.

I Think, Therefore I am - a Heretic
BSDE was originally developed for my own use, but fellow programmers soon took note, and over time, an underground movement of BSDE programmers developed at Amoco, and so I began to market BSDE and softwarephysics within Amoco. Amoco was a rather conservative oil company, so this was a hard sell, and I am not much of a salesman. By now it was late 1986, and I was calling Amoco's programmers "software engineers" and was pushing the unconventional ideas of applying physics and biology to software to grow software in an Agile manner. These were very heretical ideas at the time because of the dominance of the Waterfall methodology. BSDE was generating real applications with practically no upfront user requirements or specification documents to code from. Fortunately for me, all of Amoco's income came from a direct application of geology, physics, or chemistry, so many of our business partners were geologists, geophysicists, chemists, or chemical, petroleum, electrical, or industrial engineers, and that was a big help. Computer viruses began to appear about this time, and they provided some independent corroboration for the idea of applying biological concepts to software. In 1987 we also started to hear some things about artificial life from Chris Langton out of Los Alamos, but there was still a lot of resistance to the idea back at Amoco. One manager insisted that I call genes "templates".

I used the term "bionic" in the product name after I checked in the dictionary and found that bionic meant applying concepts from biology to solve engineering problems, and that was exactly what I was trying to achieve. There were also a couple of American television programs in the 1970s that introduced the term to the American public, The Six Million Dollar Man and The Bionic Woman, which featured superhuman characters performing astounding feats. In my BSDE road shows, I would demo BSDE in action spewing out perfect code in real time, about 20 times faster than human programmers could achieve, so I thought the term "bionic" was fitting. I am still not sure that using the term "bionic" was the right thing to do. IT was not "cool" in the 1980s, as it is in today’s ubiquitous Internet Age, and was dominated by very serious and conservative people with backgrounds primarily in accounting. However, BSDE was pretty successful, and by 1989 it was finally recognized by Amoco IT management and made available to all Amoco programmers. A BSDE class was developed and about 20 programmers became active BSDE programmers. A BSDE COI (Community of Interest) was formed, and I used to send out weekly email newsletters about applying physics and biology to software to the COI members.

So that is how BSDE got started by accident. Since I did not have any budget for BSDE development, and we charged out all of our time at Amoco to various projects, I was forced to develop BSDE through small incremental enhancements in an Agile manner that always made my job on the billable projects a little easier. I knew that was how evolution worked, so I was not too concerned. In retrospect, this was a fortunate thing. In the 1980s, the accepted theory of the day was that you needed to prepare lots of user requirements and specification documents up front, following the Waterfall methodology, for a new application, and then you would code from the documents as a blueprint. This was before the days of Agile development through small incremental releases that you find today. In the mid-1980s, prototyping was a very heretical proposition in IT. But I don’t think that I could have built BSDE using the traditional blueprint Waterfall methodology of the day.

Conclusion
As you can see, having a sound theoretical framework, like softwarephysics, helps when you are faced with making an important IT decision, like whether to use the Agile or Waterfall methodology. Otherwise, IT decisions are left to the political whims of the times, with no sound basis.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston