Sunday, July 17, 2016

Introduction to Softwarephysics

Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance, and support based upon concepts from physics, chemistry, biology, and geology that I have been using on a daily basis for over 35 years as an IT professional. For those of you not in the business, IT is short for Information Technology, commercial computer science. The purpose of softwarephysics is to explain why IT is so difficult, to suggest possible remedies, and to provide a direction for thought. If you are an IT professional, general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology then softwarephysics might be of interest to you, if not in an entirely serious manner, perhaps at least in an entertaining one.

From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT, and spent about 20 years in development. For the past 17 years, I have been in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software, by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance, and relies upon our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 75 years, through the uncoordinated efforts of over 50 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily upon two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based upon real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models upon which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields.

Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In his Principia (1687), Newton outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.

The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving less than 10% of the speed of light and which are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics and for very fast things moving in strong gravitational fields, we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based upon models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide for an effective theory of software behavior that makes useful predictions of software behavior that are applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave and try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.
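
To make the idea of an effective range a bit more concrete, here is a minimal Python sketch, just a toy calculation, that compares the Newtonian kinetic energy ½mv² with the relativistic kinetic energy (γ - 1)mc² for a 1 kg object at various fractions of the speed of light. The two effective theories agree to better than about 1% below 10% of the speed of light, and then rapidly part company.

    import math

    C = 299792458.0  # speed of light in m/s

    def newtonian_ke(m, v):
        # Classical kinetic energy: (1/2) m v^2
        return 0.5 * m * v * v

    def relativistic_ke(m, v):
        # Relativistic kinetic energy: (gamma - 1) m c^2
        gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
        return (gamma - 1.0) * m * C * C

    # Compare the two effective theories over a range of speeds
    for fraction in (0.001, 0.01, 0.1, 0.5, 0.9):
        v = fraction * C
        newton = newtonian_ke(1.0, v)
        einstein = relativistic_ke(1.0, v)
        error = abs(einstein - newton) / einstein * 100.0
        print("v = %5.3f c   Newtonian error = %8.4f %%" % (fraction, error))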

GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately. In order to do that, it is very important to have very accurate timing measurements. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips on board the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are further from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day due to the weaker gravitational field out there. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, an error in your position of 100 yards/day would accrue. The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics QED (1948) which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!
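
For readers who enjoy checking such numbers, here is a minimal Python sketch, using rounded textbook values for the GPS orbit (so the answers land within a few tenths of a microsecond of the figures quoted above), that estimates the special relativistic slowdown, the general relativistic speedup, and the net daily clock correction.

    import math

    C = 299792458.0          # speed of light (m/s)
    GM = 3.986004418e14      # gravitational parameter of the Earth (m^3/s^2)
    R_EARTH = 6.371e6        # mean radius of the Earth (m)
    R_ORBIT = 2.656e7        # approximate GPS orbital radius (m)
    SECONDS_PER_DAY = 86400.0

    # Orbital speed of a GPS satellite in a roughly circular orbit
    v = math.sqrt(GM / R_ORBIT)

    # Special relativity: the fast-moving clock runs slow by about v^2 / (2 c^2)
    sr_loss = v * v / (2.0 * C * C) * SECONDS_PER_DAY

    # General relativity: the clock higher up in the Earth's gravitational well
    # runs fast by the difference in gravitational potential divided by c^2
    gr_gain = GM * (1.0 / R_EARTH - 1.0 / R_ORBIT) / (C * C) * SECONDS_PER_DAY

    print("Special relativity loss: %5.1f microseconds/day" % (sr_loss * 1e6))
    print("General relativity gain: %5.1f microseconds/day" % (gr_gain * 1e6))
    print("Net gain:                %5.1f microseconds/day" % ((gr_gain - sr_loss) * 1e6))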

So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based upon completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based upon models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark upon your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.

If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.

Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 25 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog an unintended consequence arose in my mind as I became profoundly aware of the sheer scale of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.

But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For example, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 75 years, or 2.4 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, has closely followed the same path as life on Earth over the past 4.0 billion years in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 75 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and also of Freeman Dyson’s two-stage theory of the origin of life on Earth. In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.

Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10^9 Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10^43 Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. In Model-Dependent Realism - A Positivistic Approach to Realism, we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we at long last explore what software might really be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.

The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that most computer simulations of the physical Universe suffer. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call upon the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.

The Impact of Self-Replicating Information Upon the Planet
One of the key findings of softwarephysics is concerned with the magnitude of the impact upon the planet of self-replicating information.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Note that the self-replicating autocatalytic metabolic pathways of organic molecules, RNA, and DNA have become so heavily intertwined over time that I now simply refer to them collectively as the “genes”. Over the past 4.0 billion years, the surface of the Earth has been totally reworked by three forms of self-replicating information – the genes, memes and software, with software rapidly becoming the dominant form of self-replicating information on the planet. For more on this see:

A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia

Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact upon one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what’s it all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely a working hypothesis still in the works. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:

MoneyPhysics – my impression of the 2008 world financial meltdown.

The Fundamental Problem of Everything – If you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!

What’s It All About? and Genes, Memes and Software – my current working hypothesis on what’s it all about.

How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.

Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.

The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and The Dawn of Galactic ASI - Artificial Superintelligence - my take on some of the issues that will arise for mankind as software becomes the dominant form of self-replicating information upon the planet over the coming decades.

Some Specifics About These Postings
The postings in this blog are supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton upon which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read from the oldest to the most recent, the reverse of the order in which they are displayed, beginning with my original posting on SoftwarePhysics. In addition, several universities also now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". At this point we will finish up with my original plan for this blog on softwarephysics with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.

For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because in order to get the Introduction to Softwarephysics listed as the first post in the context root of http://softwarephysics.blogspot.com/ I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of http://softwarephysics.blogspot.com/. The side effect of all this is that the real posting date of posts is the date that appears on the post that you get when clicking on the Newer Post link at the bottom left of the posting webpage.

SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Microsoft OneDrive.

SoftwarePhysics 101 – The Physics of Cyberspacetime - Original PowerPoint document

Entropy – A spreadsheet referenced in the document

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Sunday, June 05, 2016

Making Sense of the Absurdity of the Real World of Human Affairs

It was the best of times,
it was the worst of times,
it was the age of wisdom,
it was the age of foolishness,
it was the epoch of belief,
it was the epoch of incredulity,
it was the season of Light,
it was the season of Darkness,
it was the spring of hope,
it was the winter of despair,

we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way— in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.


I dearly love those profound opening words from Charles Dickens' A Tale of Two Cities (1859) because for me they are the best description of the human condition ever composed by the human mind. As any student of history can attest, back in 1859 Dickens was simply stating that his own times were no different from any other, and that it has always been this way, and that there has always been some element of absurdity in the real world of human affairs. In fact, many religions in the past featured first-order approximations to explain the above. But for once our times may truly be different because of the advancing effects of software on the world, and that will be the subject of this brief posting.

As I have stated in many previous postings on this blog on softwarephysics, I started this blog on softwarephysics about 10 years ago with the hopes of helping the IT community to better deal with the daily mayhem of life in IT, after my less than stunning success in doing so back in the 1980s when I first began developing softwarephysics for my own use. But in the process of doing so, I believe I accidentally stumbled upon "what's it all about" as outlined in What’s It All About?. Softwarephysics explains that it is all about self-replicating information in action, and that much of today's absurdity stems from the fact that we are now living in one of those very rare transitionary periods when a new form of self-replicating information, in the form of software, is coming to dominate. For more on that please see A Brief History of Self-Replicating Information. Much of this realization arose from the work of Richard Dawkins, Susan Blackmore, Stuart Kauffman, Lynn Margulis, Freeman Dyson and of course Charles Darwin. The above is best summed up by Susan Blackmore's brilliant TED presentation at:

Memes and "temes"
http://www.ted.com/talks/susan_blackmore_on_memes_and_temes.html

Note that I consider Susan Blackmore's temes to really be technological artifacts that contain software. After all, an iPhone without software is simply a flake tool with a very dull edge.

So to really make sense of the absurdities of the modern world one must first realize that we are all DNA survival machines with minds infected by memes in a Dawkinsian sense, but the chief difference this time is that we now have software rapidly becoming the dominant form of self-replicating information on the planet, and that is inducing further stresses that are leading to increased levels of absurdity. As I outlined in The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and Machine Learning and the Ascendance of the Fifth Wave, one of the initial tell-tale signs that software is truly coming to predominance has been the ability of software to displace workers over the past 50 years or so. The combination of globalization, made possible by software, and the automation of many middle class jobs through the application of software, has led to a great deal of economic strife recently. Economic strife is not a good thing because it frequently leads to political absurdities like the 20th century Bolshevik Revolution in Russia or the rise of National Socialism in Germany. Economic strife can also lead people who are economically distressed to take up very conservative political or religious memes that condone violence, as a way to alleviate the growing pain they feel as they become alienated from society by software. So once again, the appeal of simple memes that purport to alleviate economic distress, or to eliminate the perceived heretical thoughts and actions of others, is on the rise worldwide, and these simple memes have naturally entered into a parasitic/symbiotic relationship with social media software to aid the self-replication of both forms of self-replicating information. In recent years, this parasitic/symbiotic relationship of such simple-minded memes with social media software has led to the singling out of groups of people for Sonderbehandlung or "special treatment", leading to acts of terrorism and ethnic cleansing throughout the world.

Please Stop, Breathe and Think
So before you decide to blow somebody away for some strange reason, or even before you decide to vote for somebody who might decide to blow lots of people away for some strange reason in your name, please first stop to breathe and think about what is really going on. Chances are you are simply responding to some parasitic memes in your mind that really do not have your best interest at heart, aided by some software that could not care less about your ultimate disposition. They are just mindless forms of self-replicating information that have been selected for the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity. The memes and software that are inciting you to do harm to others are just mindless forms of self-replicating information trying to self-replicate at all costs, with little regard for you as an individual. For them you are just a disposable DNA survival machine with a disposable mind that has a lifespan of less than 100 years. They just need you to help them replicate in the minds of others before you die, and if blowing yourself up in a marketplace filled with innocents, or in a hail of bullets from law enforcement serves that purpose, they will certainly do so because they cannot do otherwise. Unlike you, they cannot think. Only you can do that.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Saturday, May 14, 2016

Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework

In my last posting Cloud Computing and the Coming Software Mass Extinction I explained that having a sound theoretical framework, like all of the other sciences have, would be beneficial because it would allow IT professionals to make better decisions, and that the purpose of softwarephysics was to do just that. A good example that demonstrates the value of having a sound theoretical framework to base decisions on is the ongoing battle between Agile development and the Waterfall methodology for development. There are opinions on both sides as to which methodology is superior, but how does one make an informed decision without the aid of a sound theoretical framework? Again, this is actually a very old battle that has been going on for more than 50 years. For younger IT professionals, Agile development might seem like the snazzy new way of doing things, just as each new generation is always surprised to discover that sex was first invented when they happened to turn 16 years of age. Agile certainly has lots of new terminology, but Agile is actually the way most commercial software was developed back in the 1960s and early 1970s. In fact, if you look back far enough you can easily find papers written in the 1950s that contained both Agile and Waterfall concepts in them.

So what is the big deal with the difference between Agile and Waterfall development, and which is better? People have literally written hundreds of books on the topic, but succinctly here is the essential difference:

1. Waterfall Development - Build software like you build an office building. Spend lots of time up front on engineering and design work to produce detailed blueprints and engineering diagrams before any construction begins. Once construction begins the plans are more or less frozen and construction follows the plans. The customer does not start using the building until it is finished and passes many building code inspections. Similarly, with the Waterfall development of software lots of design documents are created first before any coding begins, and the end-user community does not work with the software until it is completed and has passed a great deal of quality assurance testing.

2. Agile Development - Develop software through small incremental changes that do not require a great deal of time - weeks instead of months or years. Detailed design work and blueprinting are kept to a minimum. Rather, rapidly getting some code working that end-users can begin interacting with is the chief goal. The end-users start using the software almost immediately, and the software evolves along with the customer's needs because the end-users are part of the development team.

Nobody knew it at the time, but Agile development was routinely used back in the 1960s and early 1970s simply because computers did not have enough memory to store large programs. When I first started programming back in 1972, a mainframe computer had about 1 MB of memory, and that 1 MB of memory might be divided into two 256 KB regions and four 128 KB regions to run several programs at the same time. That meant that programs were limited to a maximum size of 128 - 256 KB of memory, which is perhaps 10,000 times smaller than today. Since you cannot do a great deal of processing logic in 128 - 256 KB of memory, many small programs were strung together and run in a batch mode using a job stream of many steps to produce a job that delivered the required overall level of data processing power. Each step in a job stream ran a small program, using a maximum of 128 - 256 KB of memory, that then wrote to an output tape that was the input tape for the next step in the job stream. Flowcharts were manually drawn with a plastic template to document the processing flow of the program within each job step and of the entire job stream. Ideally, the flowcharts were supposed to be created before the programs were coded, but because the programs were so small, on the order of a few hundred lines of code each, many times the flowcharts were created after the programs were coded and tested as a means of documentation after the fact. Because these programs were so small, it just seemed natural to code them up without lots of upfront design work in a prototyping manner similar to Agile development.

Figure 1 - A plastic IBM flowchart template was used to create flowcharts of program and job stream logic on paper.
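
For readers who have never seen a batch job stream, the following Python sketch is a purely hypothetical illustration of the architecture described above, not actual JCL or mainframe code: a chain of small, independent steps, each reading the "tape" written by the previous step and writing a new one for the next.

    # A toy job stream: each "step" is a small program that reads the output
    # file ("tape") of the previous step and writes a new one for the next step.
    # The file names and record layouts here are purely illustrative.

    def step1_extract(outfile):
        # Pretend to pull raw records from a master file
        records = ["1001,WIDGET,5", "1002,GADGET,3", "1003,WIDGET,7"]
        with open(outfile, "w") as f:
            f.write("\n".join(records))

    def step2_filter(infile, outfile):
        # Keep only the WIDGET records
        with open(infile) as f:
            kept = [line for line in f.read().splitlines() if ",WIDGET," in line]
        with open(outfile, "w") as f:
            f.write("\n".join(kept))

    def step3_report(infile):
        # Summarize the filtered records
        with open(infile) as f:
            total = sum(int(line.split(",")[2]) for line in f.read().splitlines())
        print("Total WIDGET quantity:", total)

    # The "job stream" simply runs the small steps in order, passing data via files
    step1_extract("step1.out")
    step2_filter("step1.out", "step2.out")
    step3_report("step2.out")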

This all changed in the late 1970s when interactive computing began to become a significant factor in the commercial software that corporate IT departments were generating. By then mainframe computers had much more memory than they had back in the 1960s, so interactive programs could be much larger than the small programs found within the individual job steps of a batch job stream. Since the interactive software had to be loaded into a computer all in one shot, required some kind of a user interface that did things like checking the input data from an end-user, and had to interact with the end-user in a dynamic manner, it was necessarily much larger than the small programs found in the individual job steps of a batch job stream. These factors caused corporate IT departments to move from the Agile prototyping methodologies of the 1960s and early 1970s to the Waterfall methodology of the 1980s, and so by the early 1980s prototyping software on the fly was considered to be an immature approach. Instead, corporate IT departments decided that a formal development process was needed, and they chose the Waterfall approach used by the construction and manufacturing industries to combat the high costs of making changes late in the development process. This was because in the early 1980s CPU costs were still exceedingly high, so it made sense to create lots of upfront design documents before coding actually began to minimize the CPU costs involved with creating software. For example, in the early 1980s if I had a $100,000 project, it was usually broken down as $25,000 for programming manpower costs, $25,000 for charges from IT Management and other departments in IT, and $50,000 for program compiles and test runs to develop the software. Because just running compiles and test runs of the software under development consumed about 50% of the costs of a development project, it made sense to adopt the Waterfall development model to minimize those costs.

As with all things, development fads come and go, and the current fad is always embraced by all, except for a small handful of heretics waiting in the wings to launch the next development fad. So how does one make an informed decision about how to proceed? This is where having a theoretical framework comes in handy.

The Importance of Having a Theoretical Framework
As I mentioned in my Introduction to Softwarephysics I transitioned from being an exploration geophysicist, exploring for oil, to become an IT professional in Amoco's IT department in 1979. At the time, the Waterfall methodology was all the rage, and that was the way I was taught to properly develop and maintain software. But I never really liked the Waterfall methodology because I was still used to the old Agile prototyping ways that I had been using ever since I first learned how to write Fortran code to solve scientific problems. At the time, I was also putting together the first rudimentary principles of softwarephysics that would later go on to form the basis of a comprehensive theoretical framework for the behavior of software. This all led me to believe that the very popular Waterfall methodology of the time was not the way to go, and instead, a more Agile prototyping methodology that let software evolve through small incremental changes would be more productive. So in the early 1980s I was already leaning towards Agile techniques, but with a twist. What we really needed to do was to adopt an Agile biological approach to software that allowed us to grow and evolve software over time in an interactive manner with end-users actively involved in the process.

So I contend that Agile is definitely the way to go, but that the biological approach to software that is found within softwarephysics is the essential element that Agile development is still missing. So Agile development is close, but not the final solution. Again, Agile development is just another example of IT stumbling around in the dark because it lacks a theoretical framework. Like all living things, IT eventually stumbles upon something that really works and then sticks with it. We have seen this happen many times before over the past 75 years, or 2.4 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941. IT eventually converges upon the same solutions that living things stumbled upon billions of years ago and continue to use to this very day. In the 1970s IT discovered the importance of using structured programming techniques, which are based upon compartmentalizing software functions into internal functions, and which are very similar to the compartmentalization of functions found within the organelles of eukaryotic cells. In the 1980s IT stumbled upon object-oriented programming, which mimics the organization and interaction of cells with different cell types in a multicellular organism. Similarly, the SOA (Service Oriented Architecture) of the past decade is simply an echo of the distant Cambrian explosion that happened 541 million years ago for living things. For more on this see the SoftwarePaleontology section of SoftwareBiology. Cloud computing is the next architectural element that IT has stumbled upon through convergence, and is very similar to the architectural organization of the social insects like ants and bees. For more on that see Cloud Computing and the Coming Software Mass Extinction.

As I explained in The Fundamental Problem of Software and the softwarephysics postings leading up to it, the fundamental problem with developing and maintaining software is the effects of the second law of thermodynamics acting upon software in a nonlinear Universe. The second law of thermodynamics is constantly trying to reduce the total amount of useful information in the Universe when we code software, creating small bugs in software whenever we work on it, and because the Universe is largely nonlinear in nature, these small bugs can produce huge problems immediately, or may even initially lie dormant, but later cause problems in the distant future. So the question is, are there any complex information processing systems in the physical Universe that deal well with both the second law of thermodynamics and nonlinearity? The answer to this question is of course, yes – living things do a marvelous job in this perilous Universe of contending with both the second law of thermodynamics and nonlinearity. Just as every programmer must assemble characters into lines of code, living things must assemble atoms into complex organic molecules in order to perform the functions of life, and because the physical Universe is largely nonlinear, small errors in these organic molecules can have disastrous effects. And this all has to be done in a Universe bent on degenerating into a state of maximum entropy and minimum useful information content thanks to our old friend the second law of thermodynamics. So it makes sense from an IT perspective to adopt these very successful biological techniques when developing, maintaining, and operating software.
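
To see why nonlinearity makes those small bugs so dangerous, here is a minimal Python sketch that uses the logistic map, a classic nonlinear system, as a stand-in for a piece of software; a "bug" in the ninth decimal place of the input quickly produces completely different behavior.

    def logistic_map(x, r=4.0):
        # A simple nonlinear update rule; r = 4.0 puts it in the chaotic regime
        return r * x * (1.0 - x)

    x_good = 0.300000000    # the "correct" input value
    x_buggy = 0.300000001   # the same value with a tiny "bug" in the ninth decimal place

    for step in range(1, 51):
        x_good = logistic_map(x_good)
        x_buggy = logistic_map(x_buggy)
        if step % 10 == 0:
            print("step %2d   good = %.6f   buggy = %.6f" % (step, x_good, x_buggy))

    # After a few dozen iterations the two trajectories bear no resemblance to
    # each other, even though the initial difference was almost immeasurable.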

Living things do not use the Waterfall methodology to produce the most complex things we know of in the Universe. Instead, they use Agile techniques. Species evolve over time and gain additional functionality through small incremental changes honed by natural selection, and all complex multicellular organisms come to maturity from a single fertilized egg through the small incremental changes of embryogenesis.

An Agile Biological Approach to Software
So how do you apply an Agile biological approach to software? In How to Think Like a Softwarephysicist I provided a general framework on how to do so. But I also have a very good case study of how I did so in the past. For example, in SoftwarePhysics I described how I started working on BSDE - the Bionic Systems Development Environment back in 1985 while in the IT department of Amoco. BSDE was an early mainframe-based IDE (Integrated Development Environment like Eclipse) at a time when there were no IDEs. During the 1980s BSDE was used to grow several million lines of production code for Amoco by growing applications from embryos. For an introduction to embryology see Software Embryogenesis. The DDL statements used to create the DB2 tables and indexes for an application were stored in a sequential file called the Control File and performed the functions of genes strung out along a chromosome. Applications were grown within BSDE by turning their genes on and off to generate code. BSDE was first used to generate a Control File for an application by allowing the programmer to create an Entity-Relationship diagram using line printer graphics on an old IBM 3278 terminal.

Figure 2 - BSDE was run on IBM 3278 terminals, using line printer graphics, and in a split-screen mode. The embryo under development grew within BSDE on the top half of the screen, while the code generating functions of BSDE were used on the lower half of the screen to insert code into the embryo and to do compiles on the fly while the embryo ran on the upper half of the screen. Programmers could easily flip from one session to the other by pressing a PF key.

After the Entity-Relationship diagram was created, the programmer used a BSDE option to create a skeleton Control File with DDL statements for each table on the Entity-Relationship diagram, and each skeleton table had several sample columns with the syntax for various DB2 datatypes. The programmer then filled in the details for each DB2 table. When the first rendition of the Control File was completed, another BSDE option was used to create the DB2 database for the tables and indexes on the Control File. Another BSDE option was used to load up the DB2 tables with test data from sequential files. Each DB2 table on the Control File was considered to be a gene. Next a BSDE option was run to generate an embryo application. The embryo was a 10,000-line PL/1, COBOL, or REXX application that performed all of the primitive functions of the new application. The programmer then began to grow the embryo inside of BSDE in a split-screen mode. The embryo ran on the upper half of an IBM 3278 terminal and could be viewed in real time, while the code generating options of BSDE ran on the lower half of the IBM 3278 terminal. BSDE was then used to inject new code into the embryo's programs by reading the genes in the Control File for the embryo in real time while the embryo was running in the top half of the IBM 3278 screen. BSDE had options to compile and link modified code on the fly while the embryo was still executing. This allowed for a tight feedback loop between the programmer and the application under development. In fact, many times BSDE programmers sat with end-users and co-developed software together on the fly in a very Agile manner. When the embryo had grown to full maturity, BSDE was then used to create online documentation for the new application, and was also used to automate the install of the new application into production. Once in production, BSDE-generated applications were maintained by adding additional functions to their embryos.

Since BSDE was written using the same kinds of software that it generated, I was able to use BSDE to generate code for itself. The next generation of BSDE was grown inside of its maternal release. Over a period of seven years, from 1985 – 1992, more than 1,000 generations of BSDE were generated, and BSDE slowly evolved in an Agile manner into a very sophisticated tool through small incremental changes. BSDE dramatically improved programmer efficiency by greatly reducing the number of buttons programmers had to push in order to generate software that worked.

Figure 3 - Embryos were grown within BSDE in a split-screen mode by transcribing and translating the information stored in the genes in the Control File for the embryo. Each embryo started out very much the same, but then differentiated into a unique application based upon its unique set of genes.

Figure 4 – BSDE appeared as the cover story of the October 1991 issue of the Enterprise Systems Journal

BSDE had its own online documentation that was generated by BSDE. Amoco's IT department also had a class to teach programmers how to get started with BSDE. As part of the curriculum Amoco had me prepare a little cookbook on how to build an application using BSDE:

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

I wish that I could claim that I was smart enough to have sat down and thought up all of this stuff from first principles, but that is not what happened. It all just happened through small incremental changes in a very Agile manner over a very long period of time and most of the design work was done subconsciously, if at all. Even the initial BSDE ISPF edit macros happened through serendipity. When I first started programming DB2 applications, I found myself copying in the DDL CREATE TABLE statements from the file I used to create the DB2 database into the program that I was working on. This file, with the CREATE TABLE statements, later became the Control File used by BSDE to store the genes for an application. I would then go through a series of editing steps on the copied in data to transform it from a CREATE TABLE statement into a DB2 SELECT, INSERT, UPDATE, or DELETE statement. I would do the same thing all over again to declare the host variables for the program. Being a lazy programmer, I realized that there was really no thinking involved in these editing steps and that an ISPF edit macro could do the job equally as well, only very quickly and without error, so I went ahead and wrote a couple of ISPF edit macros to automate the process. I still remember the moment when it first hit me. For me it was very much like the scene in 2001 - A Space Odyssey, when the man-ape picks up a wildebeest thighbone and starts to pound the ground with it. My ISPF edit macros were doing the same thing that happens when the information in a DNA gene is transcribed into a protein! A flood of biological ideas poured into my head over the next few days, because at last I had a solution for my pent-up ideas about nonlinear systems and the second law of thermodynamics that were making my life so difficult as a commercial software developer. We needed to "grow" code – not write code!

BSDE began as a few simple ISPF edit macros running under ISPF edit. ISPF is the software tool that mainframe programmers still use today to interface to the IBM MVS and VM/CMS mainframe operating systems and contains an editor that can be greatly enhanced through the creation of edit macros written in REXX. I began BSDE by writing a handful of ISPF edit macros that could automate some of the editing tasks that a programmer needed to do when working on a program that used a DB2 database. These edit macros would read a Control File, which contained the DDL statements to create the DB2 tables and indexes. The CREATE TABLE statements in the Control File were the equivalent of genes, and the Control File itself performed the functions of a chromosome. For example, a programmer would retrieve a skeleton COBOL program, with the bare essentials for a COBOL/DB2 program, from a stock of reusable BSDE programs. The programmer would then position their cursor in the code to generate a DB2 SELECT statement and hit a PFKEY. The REXX edit macro would read the genes in the Control File and would display a screen listing all of the DB2 tables for the application. The programmer would then select the desired tables from the screen, and the REXX edit macro would then copy the selected genes to an array (mRNA). The mRNA array was then sent to a subroutine that inserted lines of code (tRNA) into the COBOL program. The REXX edit macro would also declare all of the SQL host variables in the DATA DIVISION of the COBOL program and would generate code to check the SQLCODE returned from DB2 for errors and take appropriate actions. A similar REXX ISPF edit macro was used to generate screens. These edit macros were also able to handle PL/1 and REXX/SQL programs. They could have been altered to generate the syntax for any programming language such as C, C++, or Java. As time progressed, BSDE took on more and more functionality via ISPF edit macros. Finally, there came a point where BSDE took over and ISPF began to run under BSDE. This event was very similar to the emergence of the eukaryotic architecture for cellular organisms. BSDE consumed ISPF like the first eukaryotic cells that consumed prokaryotic bacteria and used them as mitochondria and chloroplasts. With continued small incremental changes, BSDE continued to evolve.
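
The original edit macros were written in REXX and ran under ISPF edit, but the core "transcription" step can be sketched in a few lines of Python. The Control File gene and the generated embedded SQL below are purely illustrative, not actual BSDE output; the point is simply reading a gene (a CREATE TABLE statement) and expressing it as code (a SELECT statement with host variables).

    import re

    # A toy "gene" - a DDL CREATE TABLE statement like those stored in a Control File
    CONTROL_FILE_GENE = """
    CREATE TABLE CUSTOMER (
        CUST_ID    INTEGER,
        CUST_NAME  CHAR(30),
        BALANCE    DECIMAL(9,2)
    )
    """

    def parse_gene(ddl):
        # Extract the table name and its column names from the CREATE TABLE statement
        table = re.search(r"CREATE TABLE (\w+)", ddl).group(1)
        columns = re.findall(r"^\s*(\w+)\s+\w+", ddl.split("(", 1)[1], re.MULTILINE)
        return table, columns

    def transcribe_select(table, columns):
        # "Transcribe" the gene into an embedded SQL SELECT with host variables
        select_list = ", ".join(columns)
        into_list = ", ".join(":HV-" + c for c in columns)
        return ("EXEC SQL\n"
                "    SELECT %s\n"
                "    INTO %s\n"
                "    FROM %s\n"
                "END-EXEC." % (select_list, into_list, table))

    table, columns = parse_gene(CONTROL_FILE_GENE)
    print(transcribe_select(table, columns))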

I noticed that I kept writing the same kinds of DB2 applications, with the same basic body plan, over and over. At the time I did not know it, but these were primarily Model-View-Controller (MVC) applications. For more on the MVC design pattern please see Software Embryogenesis. From embryology, I got the idea of using BSDE to read the Control File for an application and to generate an "embryo" for the application based upon its unique set of genes. The embryo would perform all of the things I routinely programmed over and over for a new application. Once the embryo was generated for a new application from its Control File, the programmer would then interactively "grow" code and screens for the application. With time, each embryo differentiated into a unique individual application in an Agile manner until the fully matured application was delivered into production by BSDE. At this point, I realized that I could use BSDE to generate code for itself, and that is when I started using BSDE to generate the next generation of BSDE. This technique really sped up the evolution of BSDE because I had a positive feedback loop going. The more powerful BSDE became, the faster I could add improvements to the next generation of BSDE through the accumulated functionality inherited from previous generations.

Embryos were grown within BSDE using an ISPF split screen mode. The programmer would start up a BSDE session and run Option 4 – Interactive Systems Development from the BSDE Master Menu. This option would look for an embryo, and if it did not find one, would offer to generate an embryo for the programmer. Once an embryo was implanted, the option would turn the embryo on and the embryo would run inside of the BSDE session with whatever functionality it currently had. The programmer would then split his screen with PF2 and another BSDE session would appear in the lower half of his terminal. The programmer could easily toggle control back and forth between the upper and lower sessions with PF9. The lower session of BSDE was used to generate code and screens for the embryo on the fly, while the embryo in the upper BSDE session was fully alive and functional. This was possible because BSDE generated applications that used ISPF Dialog Manager for screen navigation, which was an interpretive environment, so compiles were not required for screen changes. If your logic was coded in REXX, you did not have to do compiles for logic changes either, because REXX was an interpretive language. If PL/1 or COBOL were used for logic, BSDE had facilities to easily compile code for individual programs after a coding change, and ISPF Dialog Manager would simply load the new program executable when that part of the embryo was exercised. These techniques provided a tight feedback loop so that programmers and end-users could immediately see the effects of a change as the embryo grew and differentiated.
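
To give a feel for why the interpretive ISPF Dialog Manager environment made this possible, here is a hedged sketch of the kind of screen-navigation REXX code a BSDE embryo might have contained. The panel name CUST01, the dialog variables ACTION and CUSTID, and the program names CUSTSEL and CUSTUPD are all hypothetical; they merely illustrate how a REXX exec can display a panel and route control with no compile step at all.

/* REXX - a sketch of interpretive screen-navigation code for an embryo.   */
/* Panel, variable and program names below are hypothetical.               */
ADDRESS ISPEXEC
DO FOREVER
  "DISPLAY PANEL(CUST01)"          /* show the ISPF Dialog Manager panel   */
  IF RC = 8 THEN LEAVE             /* PF3/END pressed - leave this screen  */
  /* ACTION and CUSTID are input fields on the panel; ISPF places them     */
  /* into the function pool, so they are directly visible to this exec     */
  SELECT
    WHEN action = 'S' THEN "SELECT CMD(%CUSTSEL "custid")"      /* another REXX exec      */
    WHEN action = 'U' THEN "SELECT PGM(CUSTUPD) PARM("custid")" /* a compiled program     */
    OTHERWISE NOP
  END
END
EXIT 0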

But to fully understand how BSDE was used to grow embryos through small incremental changes into fully mature applications, we need to turn to how that is accomplished in the biosphere. For example, many people find it hard to believe that all human beings evolved from a single primitive cell in the very distant past, yet we all managed to do just that, and in only about 9 months of gestation! Indeed, it does seem like a data processing miracle that a complex human body can grow and differentiate into 100 trillion cells, composed of several hundred different cell types that ultimately form the myriad varied tissues within a body, from a single fertilized egg cell. In biology the study of this incredible feat is called embryogenesis, or developmental biology, and this truly amazing process is, from a data processing perspective, certainly worthy of emulating when developing software. So let's spend some time seeing how that is done.

Human Embryogenesis
Most multicellular organisms follow a surprisingly similar sequence of steps to form a complex body, composed of billions or trillions of eukaryotic cells, from a single fertilized egg. This is a sure sign of some inherited code at work that has been tweaked many times to produce a multitude of complex body plans or phyla by similar developmental processes. Since many multicellular life forms follow a similar developmental theme let us focus, as always, upon ourselves and use the development of human beings as our prime example of how developmental biology works. For IT professionals and other readers not familiar with embryogenesis, it would be best now to view this short video before proceeding:

Medical Embryology - Difficult Concepts of Early Development Explained Simply https://www.youtube.com/watch?annotation_id=annotation_1295988581&feature=iv&src_vid=rN3lep6roRI&v=nQU5aKKDwmo

Basically, a fertilized egg, or zygote, begins to divide many times over, without the zygote really increasing in size at all. After a number of divisions, the zygote becomes a ball of undifferentiated cells that are all the same and is known as a morula. The morula then develops an interior hollow center called a blastocoel. The hollow ball of cells is known as a blastula and all the cells in the blastula are undifferentiated, meaning that they are all still identical in nature.

Figure 5 – A fertilized egg, or zygote, divides many times over to form a solid sphere of cells called a morula. The morula then develops a central hole to become a hollow ball of cells known as a blastula. The blastula consists of identical cells. When gastrulation begins, some cells within the blastula begin to form three layers of differentiated cells – the ectoderm, mesoderm, and endoderm. The above figure does not show the amnion, which forms just outside of the infolded cells that create the gastrula. See Figure 6 for the location of the amnion.

The next step is called gastrulation. In gastrulation one side of the blastula breaks symmetry and folds into itself and eventually forms three differentiated layers – the ectoderm, mesoderm and endoderm. The amnion forms just outside of the gastrulation infold.

Figure 6 – In gastrulation, cells infold and differentiate to form three layers of differentiated cells - the ectoderm, mesoderm and endoderm.

Figure 7 – Above is a close up view showing the ectoderm, mesoderm and endoderm forming from the primitive streak.

The cells of the endoderm go on to differentiate into the internal organs or guts of a human being. The cells of the mesoderm, or the middle layer, go on to form the muscles and connective tissues that do most of the heavy lifting. Finally, the cells of the ectoderm go on to differentiate into the external portions of the human body, like the skin and nerves.

Figure 8 – Some examples of the cell types that develop from the endoderm, mesoderm and ectoderm.



Figure 9 – A human being develops from the cells in the ectoderm, mesoderm and endoderm as they differentiate into several hundred different cell types.



Figure 10 – A human embryo develops through small incremental changes into a fully mature newborn in an Agile manner. Living things, the most complex data processing machines in the Universe, never use the Waterfall methodology to build new bodies or to evolve over time into new species. Living things always develop new functionality through small incremental changes in an Agile manner.

Growing Embryos in an Agile Manner
At the time IBM had a methodology called JAD - Joint Application Development. In a JAD the programmers and end-users met together in a room and roughed out the design for a new application on flip charts in an interactive manner. The flip chart papers were then taped to the walls for all to see. At the end of the day, the flip charts were gathered by an attending systems analyst who then attempted to put together the JAD flip charts into a User Requirements document that was then carried into the normal Waterfall development process. This seemed rather cumbersome, so with BSDE we tried to do some JAD work in an interactive manner with end-users by growing Model-View-Controller (MVC) embryos together. In an MVC application, the data Model was stored on DB2 tables, forming the "guts" or endoderm tissues of the application. The View code consisted of ISPF screens and reports that could be viewed by the end-user, forming the ectoderm tissues of the application. Finally, the Model and View were connected together by the Controller code that did most of the heavy lifting, forming the mesoderm tissues of the application. Controller code was by far the most difficult to code and was the most expensive part of a project. So, as in human gastrulation, we grew embryos from the outside in and the inside out, with the external ectoderm tissues finally meeting the internal endoderm tissues in the middle, at the mesoderm tissues of the Controller code.

Basically, we would first start with some data modeling with the end-user to rough out the genes in the Control File chromosome. Then we had BSDE generate an MVC embryo for the application based upon its genes in the Control File. Next we roughed out the ectoderm ISPF screens for the application, using BSDE to generate the ISPF screens and field layouts on each ISPF screen. We then used BSDE to generate REXX, Cobol or PL/1 programs to navigate the flow of the ISPF screens. So we worked on the endoderm and ectoderm with the end-users first, since they were very easy to change and they determined the look and feel of the application. By letting end-users first interact with an embryo that consisted of just the end-user ectoderm tissues, forming the user interface, and the underlying DB2 tables of the endoderm, it was possible to rough out an application with no user requirements documentation at all in a very Agile manner. Of course, as end-users interacted with the embryo, we often discovered that new data elements needed to be added to our DB2 model stored in the Control File, so we would simply add the new columns to the genes in the Control File and then regenerate the impacted ISPF screens from the modified genes.

Next we added validation code to the embryo to validate the input fields on the ISPF screens, using BSDE templates. BSDE came with a set of templates of reusable code that could be imported for this purpose. This was before the object-oriented programming revolution in IT, so the BSDE reusable code templates performed the same function as the class library of an object-oriented programming language. Finally, mockup reports were generated by BSDE. BSDE was used to create a report mockup file that looked like what the end-user wanted to see in an online or line printer report. BSDE then took the report mockup file and generated a Cobol or PL/1 program that produced the report output to give the end-user a good feel for how the final report would actually look.
At this point end-users could fully navigate the embryo and interact with it in real time. They could pretend to enter data into the embryo as they navigated the embryo's ISPF screens, and they could see the embryo generate reports. Once the end-user was happy with the embryo, we would start to use BSDE to generate code for the mesoderm Controller tissues by having it create SELECT, INSERT, UPDATE and DELETE SQL statements in the embryonic programs. We also used the BSDE reusable code templates for processing logic in the mesoderm Controller code that connected the ectoderm ISPF screens to the endoderm DB2 database.
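
As an illustration of the reusable code templates mentioned above, the REXX sketch below shows one way such a template member might be pulled into a growing embryo, with placeholders swapped out for the embryo's own genes. Again, this is only a hypothetical sketch written in modern TSO/E REXX - the template library dataset, the member name VALIDATE, and the &TABNAME and &SCREEN placeholders are made-up stand-ins for whatever the real BSDE templates used.

/* REXX - sketch of importing a reusable code template into an embryo.     */
/* The dataset, member and placeholder names are hypothetical.             */
ADDRESS ISREDIT
"MACRO"                               /* run as an edit macro on the program  */
"(LAST) = LINENUM .ZLAST"             /* last line of the member being edited */

/* Read the reusable code template member                                  */
ADDRESS TSO
"ALLOC F(TMPL) DA('MY.BSDE.TEMPLATES(VALIDATE)') SHR REUSE"
"EXECIO * DISKR TMPL (STEM tmpl. FINIS"
"FREE F(TMPL)"

/* Substitute the placeholders in the template with this embryo's genes    */
/* (assumes the template lines contain no apostrophes)                     */
DO i = 1 TO tmpl.0
  line = tmpl.i
  line = CHANGESTR('&TABNAME', line, 'CUSTOMER')   /* DB2 table gene        */
  line = CHANGESTR('&SCREEN',  line, 'CUST01')     /* ISPF panel name       */
  tmpl.i = line
END

/* Append the customized template code to the end of the embryonic program */
ADDRESS ISREDIT
DO i = tmpl.0 TO 1 BY -1              /* insert bottom-up to keep the order  */
  "LINE_AFTER "last" = DATALINE '"tmpl.i"'"
END
EXIT 0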

I Think, Therefore I am - a Heretic
BSDE was originally developed for my own use, but fellow programmers soon took note, and over time an underground movement of BSDE programmers developed at Amoco, so I began to market BSDE and softwarephysics within Amoco. Amoco was a rather conservative oil company, so this was a hard sell, and I am not much of a salesman. By now it was late 1986, and I was calling Amoco's programmers "software engineers" and was pushing the unconventional ideas of applying physics and biology to software to grow software in an Agile manner. These were very heretical ideas at the time because of the dominance of the Waterfall methodology. BSDE was generating real applications with practically no upfront user requirements or specification documents to code from. Fortunately for me, all of Amoco's income came from a direct application of geology, physics, or chemistry, so many of our business partners were geologists, geophysicists, chemists, or chemical, petroleum, electrical, or industrial engineers, and that was a big help. Computer viruses began to appear about this time, and they provided some independent corroboration for the idea of applying biological concepts to software. In 1987 we also started to hear some things about artificial life from Chris Langton out of Los Alamos, but there was still a lot of resistance to the idea back at Amoco. One manager insisted that I call genes "templates". I used the term "bionic" in the product name after I checked the dictionary and found that bionic meant applying concepts from biology to solve engineering problems, and that was exactly what I was trying to achieve. There were also a couple of American television programs in the 1970s that introduced the term to the American public, The Six Million Dollar Man and The Bionic Woman, which featured superhuman characters performing astounding feats. In my BSDE road shows, I would demo BSDE in action, spewing out perfect code in real time and about 20 times faster than normal human programmers could achieve, so I thought the term "bionic" was fitting. I am still not sure that using the term "bionic" was the right thing to do. IT was not "cool" in the 1980s, as it is in today’s ubiquitous Internet Age, and was dominated by very serious and conservative people with backgrounds primarily in accounting. However, BSDE was pretty successful, and by 1989 it was finally recognized by Amoco IT management and made available to all Amoco programmers. A BSDE class was developed and about 20 programmers became active BSDE programmers. A BSDE COI (Community of Interest) was formed, and I used to send out weekly email newsletters about applying physics and biology to software to the COI members.

So that is how BSDE got started by accident. Since I did not have any budget for BSDE development, and we charged out all of our time at Amoco to various projects, I was forced to develop BSDE through small incremental enhancements in an Agile manner that always made my job on the billable projects a little easier. I knew that was how evolution worked, so I was not too concerned. In retrospect, this was a fortunate thing. In the 1980s, the accepted theory of the day was that you needed to prepare lots of user requirements and specifications documents up front, following the Waterfall methodology, for a new application, and then you would code from the documents as a blueprint. This was before the days of Agile development through small incremental releases that you find today. In the mid-1980s, prototyping was a very heretical proposition in IT. But I don’t think that I could have built BSDE using the traditional blueprint Waterfall methodology of the day.

Conclusion
As you can see, having a sound theoretical framework, like softwarephysics, helps when you are faced with making an important IT decision, like whether to use the Agile or Waterfall methodology. Otherwise, IT decisions are left to the political whims of the times, with no sound basis.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Wednesday, March 09, 2016

Cloud Computing and the Coming Software Mass Extinction

I started this blog on softwarephysics about 10 years ago with the hopes of helping the IT community to better deal with the daily mayhem of life in IT, after my less than stunning success in doing so back in the 1980s when I first began developing softwarephysics for my own use. My original purpose for softwarephysics was to formulate a theoretical framework that was independent of both time and technology, and which therefore could be used by IT professionals to make decisions in both tranquil times and in times of turmoil. A theoretical framework is an especially good thing to have in turbulent times because it helps to make sense of things when dramatic changes are happening. Geologists have plate tectonics, biologists have Darwinian thought and chemists have atomic theory to fall back upon when the going gets tough, and a firm theoretical framework provides the necessary comfort and support to get through such times. With the advent of Cloud Computing, we once again seem to be heading into turbulent times within the IT industry that will have lasting effects on the careers of many IT professionals, and that will be the subject of this posting.

One of the fundamental tenets of softwarephysics is that software and living things are both forms of self-replicating information, and that both of them have converged upon very similar solutions to common data processing problems. This convergence means that they are also both subject to common experiences. Consequently, there is much that IT professionals can learn by studying living things, and also much that biologists can learn by studying the evolution of software over the past 75 years, or 2.4 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941. For example, the success of both software and living things is very dependent upon environmental factors, and when the environment dramatically changes, it can lead to mass extinctions of both software and living things. In fact, the geological timescale during the Phanerozoic, when complex multicellular life first came to be, is divided by two great mass extinctions into three segments - the Paleozoic or "old life", the Mesozoic or "middle life", and the Cenozoic or "new life". The Paleozoic-Mesozoic boundary is defined by the Permian-Triassic greenhouse gas mass extinction 252 million years ago, while the Mesozoic-Cenozoic boundary is defined by the Cretaceous-Tertiary mass extinction caused by the impact of a small asteroid, about 6 miles in diameter, with the Earth 65 million years ago. During both of these major mass extinctions a large percentage of the extant species went extinct and the look and feel of the entire biosphere dramatically changed. By far, the Permian-Triassic greenhouse gas mass extinction 252 million years ago was the worst. A massive flood basalt known as the Siberian Traps covered an area about the size of the continental United States with several thousand feet of basaltic lava, with eruptions that lasted for about one million years. Flood basalts, like the Siberian Traps, are thought to arise when large plumes of hotter than normal mantle material rise from near the mantle-core boundary of the Earth and break to the surface. This causes a huge number of fissures to open over a very large area, which then begin to disgorge massive amounts of basaltic lava over a very large region. After the eruptions of basaltic lava began, it took about 100,000 years for the carbon dioxide that bubbled out of the basaltic lava to dramatically raise the level of carbon dioxide in the Earth's atmosphere and initiate the greenhouse gas mass extinction. This led to an Earth with a daily high of 140 °F and purple oceans choked with hydrogen-sulfide-producing bacteria, beneath a dingy green sky and an atmosphere tainted with toxic levels of hydrogen sulfide gas and an oxygen level of only about 12%. The Permian-Triassic greenhouse gas mass extinction killed off about 95% of marine species and 70% of land-based species, and dramatically reduced the diversity of the biosphere for about 10 million years. It took a full 100 million years to recover from it.

Figure 1 - The geological timescale of the Phanerozoic Eon is divided into the Paleozoic, Mesozoic and Cenozoic Eras by two great mass extinctions - click to enlarge.

Figure 2 - Life in the Paleozoic, before the Permian-Triassic mass extinction, was far different than life in the Mesozoic.

Figure 3 - In the Mesozoic the dinosaurs ruled after the Permian-Triassic mass extinction, but small mammals were also present.

Figure 4 - Life in the Cenozoic, following the Cretaceous-Tertiary mass extinction, has so far been dominated by the mammals. This will likely soon change as software becomes the dominant form of self-replicating information on the planet, ushering in a new geological Era that has yet to be named.

IT experienced a similarly devastating mass extinction about 25 years ago, when an environmental change took us from the Age of the Mainframes to the Distributed Computing Platform. Suddenly mainframe Cobol/CICS and Cobol/DB2 programmers were no longer in demand. Instead, everybody wanted C and C++ programmers who worked on cheap Unix servers. This was a very traumatic time for IT professionals. Of course the mainframe programmers never went entirely extinct, but their numbers were greatly reduced. The number of IT workers in mainframe Operations also dramatically decreased, while at the same time the demand for Operations people familiar with the Unix-based software of the new Distributed Computing Platform skyrocketed. This was around 1992, and at the time I was a mainframe programmer used to working with IBM's MVS and VM/CMS operating systems, writing Cobol, PL-1 and REXX code using DB2 databases. So I had to teach myself Unix and C and C++ to survive. In order to do that, I bought my very first PC, an 80-386 machine running Windows 3.0 with 5 MB of memory and a 100 MB hard disk for $1500. I also bought the Microsoft C7 C/C++ compiler for something like $300. And that was in 1992 dollars! One reason for the added expense was that there were no Internet downloads in those days because there were no high-speed ISPs. PCs did not have CD/DVD drives either, so the software came on 33 diskettes, each with a 1.44 MB capacity, that had to be loaded one diskette at a time in sequence. The software also came with about a foot of manuals describing the C++ class library on very thin paper. Indeed, suddenly finding yourself to be obsolete is not a pleasant thing and calls for drastic action.

Figure 5 – An IBM OS/360 mainframe from 1964.

Figure 6 – The Distributed Computing Platform replaced a great deal of mainframe computing with a large number of cheap self-contained servers running software that tied the servers together.

The problem with the Distributed Computing Platform was that although the server hardware was cheaper than mainframe hardware, the granular nature of the Distributed Computing Platform meant that it created a very labor-intensive infrastructure that was difficult to operate and support, and as the level of Internet traffic dramatically expanded over the past 20 years, the Distributed Computing Platform became nearly impossible to support. For example, I have been in Middleware Operations with my current employer for the past 14 years, and during that time our Distributed Computing Platform infrastructure exploded in size by a factor of at least a hundred. It is now so complex and convoluted that we can barely keep it all running, and we really do not even have enough change window time to properly apply maintenance to it, as I described in The Limitations of Darwinian Systems. Clearly, the Distributed Computing Platform was not sustainable, and an alternative was desperately needed. This is because the Distributed Computing Platform was IT's first shot at running software on a multicellular architecture, as I described in Software Embryogenesis, but it simply had too many moving parts, all working together independently on their own, to fully embrace the advantages of a multicellular organization. In many ways the Distributed Computing Platform was much like the ancient stromatolites that tried to reap the advantages of a multicellular organism by simply tying together the diverse interests of multiple layers of prokaryotic cyanobacteria into a "multicellular organism" that seemingly benefited the interests of all.

Figure 7 – Stromatolites are still found today in Shark Bay, Australia. They consist of mounds of alternating layers of prokaryotic bacteria.

Figure 8 – The cross section of an ancient stromatolite displays the multiple layers of prokaryotic cyanobacteria that came together for their own mutual self-survival to form a primitive "multicellular" organism that seemingly benefited the interests of all. The servers and software of the Distributed Computing Platform were very much like the primitive stromatolites.

The Rise of Cloud Computing
Now I must admit that I have no experience with Cloud Computing whatsoever because I am an old "legacy" mainframe programmer who has now become an old "legacy" Distributed Computing Platform person. But this is where softwarephysics comes in handy because it provides an unchanging theoretical framework. As I explained in The Fundamental Problem of Software, the second law of thermodynamics is not going away, and the fact that software must operate in a nonlinear Universe is not going away either, so although I may not be an expert in Cloud Computing, I do know enough softwarephysics to sense that a dramatic environmental change is taking place, and I do know from firsthand experience what comes next. So let me offer some advice to younger IT professionals on that basis.

The alternative to the Distributed Computing Platform is the Cloud Computing Platform, which is usually displayed as a series of services all stacked into levels. The highest level, SaaS (Software as a Service), runs the common third-party office software like Microsoft Office 365 and email. The second level, PaaS (Platform as a Service), is where the custom business software resides, and the lowest level, IaaS (Infrastructure as a Service), provides for an abstract tier of virtual servers and other resources that automatically scale with varying load levels. From an Applications Development standpoint the PaaS layer is the most interesting because that is where Applications Development teams will be installing the custom application software used to run the business and also to run the high-volume corporate websites that their customers use. Currently, that custom application software is installed into the middleware that is running on the Unix servers of the Distributed Computing Platform. The PaaS level will be replacing the middleware software, such as the Apache webservers and the J2EE application servers, like WebSphere, WebLogic and JBoss, that currently do that. For Operations, the IaaS level, and to a large extent the PaaS level as well, are of most interest because those levels will be replacing the middleware and other support software running on hundreds or thousands of individual self-contained servers. The Cloud architecture can be run on a company's own hardware, or it can be run on a timesharing basis on the hardware at Amazon, Microsoft, IBM or other Cloud providers, using the Cloud software that the Cloud providers market.

Figure 9 – Cloud Computing returns us to the timesharing days of the 1960s and 1970s by viewing everything as a service.

Basically, the Cloud Computing Platform is based upon two defining characteristics:

1. Returning to the timesharing days of the 1960s and 1970s when many organizations could not afford to support a mainframe infrastructure of their own.

2. Taking the multicellular architecture of the Distributed Computing Platform to the next level by using Cloud Platform software to produce a full-blown multicellular organism, and even higher, by introducing the self-organizing behaviors of the social insects like ants and bees.

Let's begin with the timesharing concept first because that should be the most familiar to IT professionals. This characteristic of Cloud Computing is actually not so new. For example, in 1968 my high school ran a line to the Illinois Institute of Technology to connect a local card reader and printer to the Institute's mainframe computer. We had our own keypunch machine to punch up the cards for Fortran programs that we could submit to the distant mainframe which then printed back the results of our program runs. The Illinois Institute of Technology also did the same for several other high schools in my area. This allowed high school students in the area to gain some experience with computers even though their high schools could clearly not afford to maintain a mainframe infrastructure.

Figure 10 - An IBM 029 keypunch machine like the one installed in my high school in 1968.

Figure 11 - Each card could hold a maximum of 80 bytes. Normally, one line of code was punched onto each card.

Figure 12 - The cards for a program were held together into a deck with a rubber band, or for very large programs, the deck was held in a special cardboard box that originally housed blank cards. Many times the data cards for a run followed the cards containing the source code for a program. The program was compiled and linked in two steps of the run and then the generated executable file processed the data cards that followed in the deck.

Figure 13 - To run a job, the cards in a deck were fed into a card reader, as shown on the left above, to be compiled, linked, and executed by a million-dollar mainframe computer back at the Illinois Institute of Technology. In the above figure, the mainframe is located directly behind the card reader.

Figure 14 - The output of Fortran programs run at the Illinois Institute of Technology was printed locally at my high school on a line printer.

Similarly, when I first joined the IT department of Amoco in 1979, we were using GE Timeshare for all of our interactive programming needs. GE Timeshare was running VM/CMS on mainframes, and we would connect to the mainframe using a 300 baud acoustic coupler with a standard telephone. You would first dial GE Timeshare with your phone and when you heard the strange "BOING - BOING - SHHHHH" sounds from the mainframe, you would jam the telephone receiver into the acoustic coupler so that you could login to the mainframe over a "glass teletype" dumb terminal.

Figure 15 – To timeshare we connected to GE Timeshare over a 300 baud acoustic coupler modem.

Figure 16 – Then we did interactive computing on VM/CMS using a dumb terminal.

But timesharing largely went out of style in the 1970s when many organizations began to create their own datacenters and ran their own mainframes within them. They did so because with timesharing you paid the timesharing provider by the CPU-second and by the byte for disk space, and that became expensive as timesharing consumption rose. Timesharing also limited flexibility because businesses were limited by the constraints of their timesharing providers. For example, during the early 1980s Amoco created 31 VM/CMS datacenters around the world, and tied them all together with communications lines to support the interactive programming needs of the business. This network of 31 VM/CMS datacenters was called the Corporate Timeshare System, and allowed end-users to communicate with each other around the world using an office system that Amoco developed in the late 1970s called PROFS (Professional Office System). PROFS had email, calendaring and document sharing software that later IBM sold as a product called OfficeVision. Essentially, this provided Amoco with a SaaS layer in the early 1980s.

So for IT professionals in Operations, the main question is whether corporations will initially move to a Cloud Computing Platform hosted by Cloud providers, but then become dissatisfied with the Cloud providers, as they did with the timesharing providers of the 1970s, and go on to create their own private corporate Cloud platforms, or whether corporations will keep using the Cloud Computing Platform of Cloud providers on a timesharing basis. It could also go the other way. Corporations might start up their own Cloud platforms within their existing datacenters, but then decide that it would be cheaper to move to the Cloud platform hosted by a Cloud provider. Right now, it is too early to know how Cloud Computing will unfold.

The second characteristic of Cloud Computing is more exciting. All of the major software players are currently investing heavily in the PaaS and IaaS levels of Cloud Computing, which promise to reduce the complexity and difficulty we had with running software that had a multicellular organization on the Distributed Computing Platform - see Software Embryogenesis for more on this. The PaaS software makes the underlying IaaS hardware look much like a huge integrated mainframe, and the IaaS level can spawn new virtual servers on the fly as the processing load increases, much like new worker ants can be spawned as needed within an ant colony. The PaaS level will still consist of billions of objects (cells) running in a multicellular manner, but the objects will be running in a much more orderly manner, using a PaaS architecture that is more like the architecture of a true multicellular organism, or more accurately, a colony of social multicellular organisms.

Figure 17 – The IaaS level of Cloud Computing Platform can spawn new virtual servers on the fly, much like new worker ants can be spawned as needed.

How to Survive a Software Mass Extinction
Softwarephysics suggests always looking to the biosphere for solutions to IT problems. To survive a mass extinction, the best things to do are:

1. Avoid those environments that experience the greatest change during a mass extinction event.

2. Be preadapted to the new environmental factors arising from the mass extinction.

For example, during the Permian-Triassic mass extinction 95% of marine species went extinct, while only 70% of land-based species went extinct, so there was a distinct advantage in being a land-dweller. This was probably because the high levels of carbon dioxide in the Earth's atmosphere made the oceans more acidic than usual, and most marine life could simply not easily escape the adverse effects of the increased acidity. And if you were a land-dweller, being preadapted to the low level of oxygen in the Earth's atmosphere was necessary to be a survivor. For example, after the Permian-Triassic mass extinction about 95% of land-based vertebrates were members of the Lystrosaurus species. Such dominance has never happened before or since. It is thought that Lystrosaurus was a barrel-chested creature, about the size of a pig, that lived in deep burrows underground. This allowed Lystrosaurus to escape the extreme heat of the day, and Lystrosaurus was also preadapted to live in the low oxygen levels that are found in deep burrows filled with stale air.

Figure 18 – After the Permian-Triassic mass extinction Lystrosaurus accounted for 95% of all land-based vertebrates.

The above strategies worked well for me during the Mainframe → Distributed Computing Platform mass extinction 25 years ago. By exiting the mainframe environment as quickly as possible and preadapting myself to writing C and C++ code on Unix servers, I was able to survive the Mainframe → Distributed Computing Platform mass extinction. So what is the best strategy for surviving the Distributed Computing Platform → Cloud Computing Platform mass extinction? The first step is to determine which IT environments will be most affected. Obviously, IT professionals in Operations supporting the current Distributed Computing Platform will be impacted most severely. People in Applications Development will probably feel the least strain. In fact, the new Cloud Computing Platform will most likely just make their lives easier because most of the friction in moving code to Production will be greatly reduced when code is deployed to the PaaS level. Similarly, mainframe developers and Operations people should pull through okay because whatever caused them to survive the Mainframe → Distributed Computing Platform mass extinction 25 years ago should still be in play. So the real hit will be for "legacy" Operations people supporting the Distributed Computing Platform as it disappears. For those folks the best thing would be to head back to Applications Development if possible. Otherwise, preadapt yourselves to the new Cloud Computing Platform as quickly as possible. The trouble is that there are many players producing software for the Cloud Computing Platform, so it is difficult to choose which product to pursue. If your company is building its own Cloud Computing Platform, try to be part of that effort. The odds are that your company will not forsake its own Cloud for an external Cloud provider. But don't forget that the Cloud Computing Platform will need far fewer Operations staff than the Distributed Computing Platform, even for a company running its own Cloud. Those Operations people who first adapt to the new Cloud Computing Platform will most likely be the Lystrosaurus of the new Cloud Computing Platform. So hang on tight for the next few years and good luck to all!

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston