Saturday, May 14, 2016

Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework

In my last posting, Cloud Computing and the Coming Software Mass Extinction, I explained that having a sound theoretical framework, like all of the other sciences have, would be beneficial because it would allow IT professionals to make better decisions, and that the purpose of softwarephysics was to provide just such a framework. A good example of the value of having a sound theoretical framework to base decisions on is the ongoing battle between Agile development and the Waterfall methodology. There are opinions on both sides as to which methodology is superior, but how does one make an informed decision without the aid of a sound theoretical framework? This is actually a very old battle that has been going on for more than 50 years. For younger IT professionals, Agile development might seem like the snazzy new way of doing things, just as each new generation is always surprised to discover that sex was first invented when they happened to turn 16 years of age. Agile certainly has lots of new terminology, but Agile is actually the way most commercial software was developed back in the 1960s and early 1970s. In fact, if you look back far enough, you can easily find papers written in the 1950s that contain both Agile and Waterfall concepts.

So what is the big deal with the difference between Agile and Waterfall development, and which is better? People have literally written hundreds of books on the topic, but, succinctly, here is the essential difference:

1. Waterfall Development - Build software like you build an office building. Spend lots of time up front on engineering and design work to produce detailed blueprints and engineering diagrams before any construction begins. Once construction begins, the plans are more or less frozen, and construction follows the plans. The customer does not start using the building until it is finished and has passed many building code inspections. Similarly, with the Waterfall development of software, lots of design documents are created before any coding begins, and the end-user community does not work with the software until it is completed and has passed a great deal of quality assurance testing.

2. Agile Development - Develop software through small incremental changes that do not require a great deal of time - weeks instead of months or years. Detailed design work and blueprinting are kept to a minimum. Rather, rapidly getting some code working that end-users can begin interacting with is the chief goal. The end-users start using the software almost immediately, and the software evolves along with the customer's needs because the end-users are part of the development team.

Nobody knew it at the time, but Agile development was routinely used back in the 1960s and early 1970s simply because computers did not have enough memory to store large programs. When I first started programming back in 1972, a mainframe computer had about 1 MB of memory, and that 1 MB might be divided into two 256 KB regions and four 128 KB regions so that several programs could run at the same time. That meant programs were limited to a maximum size of 128 - 256 KB of memory, which is perhaps 10,000 times smaller than today. Since you cannot do a great deal of processing logic in 128 - 256 KB of memory, many small programs were strung together into a job stream of many steps and run in batch mode, so that the job as a whole delivered the required level of data processing power. Each step in a job stream ran a small program, using a maximum of 128 - 256 KB of memory, that wrote to an output tape, which then became the input tape for the next step in the job stream. Flowcharts were manually drawn with a plastic template to document the processing flow of the program within each job step and of the entire job stream. Ideally, the flowcharts were supposed to be created before the programs were coded, but because the programs were so small, on the order of a few hundred lines of code each, the flowcharts were often created after the programs were coded and tested, as documentation after the fact. Because these programs were so small, it just seemed natural to code them up without lots of upfront design work, in a prototyping manner similar to Agile development.
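For readers who never worked with a batch job stream, here is a minimal modern sketch of the pattern in Python rather than the JCL and tape files of the era; the step names, file names, and record layout are hypothetical, but the idea is the same: a job is a chain of small steps, each reading the previous step's output.

import csv

# A job stream of small steps: each step reads the previous step's output
# file and writes a new output file for the next step, much as each job
# step once read the previous step's output tape.

def step1_extract(infile, outfile):
    # Keep only the records flagged ACTIVE in column 0.
    with open(infile, newline="") as fin, open(outfile, "w", newline="") as fout:
        writer = csv.writer(fout)
        for row in csv.reader(fin):
            if row and row[0] == "ACTIVE":
                writer.writerow(row)

def step2_sort(infile, outfile):
    # Sort the extracted records by account number in column 1.
    with open(infile, newline="") as fin:
        rows = sorted(csv.reader(fin), key=lambda r: r[1])
    with open(outfile, "w", newline="") as fout:
        csv.writer(fout).writerows(rows)

def step3_summarize(infile, outfile):
    # Total the amounts in column 2 and write a one-line report.
    with open(infile, newline="") as fin:
        total = sum(float(r[2]) for r in csv.reader(fin))
    with open(outfile, "w") as fout:
        fout.write(f"TOTAL {total:.2f}\n")

# The job stream itself: run the small steps in order, chained by their files.
step1_extract("master.csv", "step1.out")
step2_sort("step1.out", "step2.out")
step3_summarize("step2.out", "report.txt")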

Figure 1 - A plastic IBM flowchart template was used to create flowcharts of program and job stream logic on paper.

This all changed in the late 1970s when interactive computing began to become a significant factor in the commercial software that corporate IT departments were generating. By then mainframe computers had much more memory than they had back in the 1960s, so interactive programs could be much larger than the small programs found within the individual job steps of a batch job stream. Indeed, they had to be larger, because interactive software had to be loaded into the computer all in one shot, needed a user interface that did things like validating the input data from end-users, and had to interact with end-users in a dynamic manner. These factors caused corporate IT departments to move from the Agile prototyping methodologies of the 1960s and early 1970s to the Waterfall methodology of the 1980s, and so by the early 1980s prototyping software on the fly was considered to be an immature approach. Instead, corporate IT departments decided that a formal development process was needed, and they chose the Waterfall approach used by the construction and manufacturing industries to combat the high costs of making changes late in the development process. This made some economic sense because in the early 1980s CPU costs were still exceedingly high, so creating lots of upfront design documents before coding actually began helped to minimize the CPU costs involved with creating software. For example, in the early 1980s, a $100,000 project was usually broken down as $25,000 for programming manpower, $25,000 for charges from IT Management and other departments in IT, and $50,000 for program compiles and test runs to develop the software. Because just running compiles and test runs of the software under development consumed about 50% of the cost of a development project, it made sense to adopt the Waterfall development model to minimize those costs.

As with all things, development fads come and go, and the current fad is always embraced by all, except for a small handful of heretics waiting in the wings to launch the next development fad. So how does one make an informed decision about how to proceed? This is where having a theoretical framework comes in handy.

The Importance of Having a Theoretical Framework
As I mentioned in my Introduction to Softwarephysics, I transitioned from being an exploration geophysicist, exploring for oil, to being an IT professional in Amoco's IT department in 1979. At the time, the Waterfall methodology was all the rage, and that was the way I was taught to properly develop and maintain software. But I never really liked the Waterfall methodology because I was still used to the old Agile prototyping ways that I had been using ever since I first learned how to write Fortran code to solve scientific problems. At the time, I was also putting together the first rudimentary principles of softwarephysics that would later form the basis of a comprehensive theoretical framework for the behavior of software. This all led me to believe that the very popular Waterfall methodology of the time was not the way to go, and that a more Agile prototyping methodology that let software evolve through small incremental changes would be more productive. So in the early 1980s, I was already leaning towards Agile techniques, but with a twist. What we really needed to do was to adopt an Agile biological approach to software that allowed us to grow and evolve software over time in an interactive manner, with end-users actively involved in the process.

So I contend that Agile is definitely the way to go, but that the biological approach to software found within softwarephysics is the essential element that Agile development is still missing. Agile development is close, but it is not the final solution; it is just another example of IT stumbling around in the dark because it lacks a theoretical framework. Like all living things, IT eventually stumbles upon something that really works and then sticks with it. We have seen this happen many times before over the past 75 years, or 2.4 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941. IT eventually converges upon the same solutions that living things stumbled upon billions of years ago and continue to use to this very day. In the 1970s IT discovered the importance of structured programming techniques, which are based upon compartmentalizing software logic into separate internal functions, much like the compartmentalization of functions found within the organelles of eukaryotic cells. In the 1980s IT stumbled upon object-oriented programming, which mimics the organization and interaction of cells of different cell types in a multicellular organism. Similarly, the SOA (Service Oriented Architecture) of the past decade is simply an echo of the distant Cambrian explosion that happened 541 million years ago for living things. For more on this see the SoftwarePaleontology section of SoftwareBiology. Cloud computing is the next architectural element that IT has stumbled upon through convergence, and it is very similar to the architectural organization of social insects like ants and bees. For more on that see Cloud Computing and the Coming Software Mass Extinction.

As I explained in The Fundamental Problem of Software and the softwarephysics postings leading up to it, the fundamental problem with developing and maintaining software is the effect of the second law of thermodynamics acting upon software in a nonlinear Universe. The second law of thermodynamics is constantly trying to reduce the total amount of useful information in the Universe, creating small bugs in software whenever we work on it, and because the Universe is largely nonlinear in nature, these small bugs can produce huge problems immediately, or may initially lie dormant and only cause problems in the distant future. So the question is, are there any complex information processing systems in the physical Universe that deal well with both the second law of thermodynamics and nonlinearity? The answer is, of course, yes – living things do a marvelous job of contending with both the second law of thermodynamics and nonlinearity in this perilous Universe. Just as every programmer must assemble characters into lines of code, living things must assemble atoms into complex organic molecules in order to perform the functions of life, and because the physical Universe is largely nonlinear, small errors in these organic molecules can have disastrous effects. And all of this has to be done in a Universe bent on degenerating into a state of maximum entropy and minimum useful information content, thanks to our old friend the second law of thermodynamics. So it makes sense from an IT perspective to adopt these very successful biological techniques when developing, maintaining, and operating software.
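To make the nonlinearity point concrete, here is a small illustrative sketch in Python (my own example, not part of the original argument) using the logistic map, a classic nonlinear system: two starting values that differ only in the ninth decimal place, the numerical equivalent of a tiny bug, quickly diverge into completely different trajectories.

# Iterate the logistic map x -> r*x*(1 - x), a simple nonlinear system.
# A difference of one billionth in the starting value grows into a
# completely different trajectory - the hallmark of nonlinear behavior.
r = 4.0
x_good, x_buggy = 0.300000000, 0.300000001   # a one-billionth "bug"

for step in range(1, 51):
    x_good = r * x_good * (1.0 - x_good)
    x_buggy = r * x_buggy * (1.0 - x_buggy)
    if step % 10 == 0:
        print(f"step {step:2d}: good = {x_good:.6f}   buggy = {x_buggy:.6f}")

By around step 30 the two runs no longer have anything in common, even though they started out essentially identical.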

Living things do not use the Waterfall methodology to produce the most complex things we know of in the Universe. Instead, they use Agile techniques. Species evolve over time and gain additional functionality through small incremental changes honed by natural selection, and all complex multicellular organisms come to maturity from a single fertilized egg through the small incremental changes of embryogenesis.

An Agile Biological Approach to Software
So how do you apply an Agile biological approach to software? In How to Think Like a Softwarephysicist I provided a general framework for doing so, but I also have a very good case study from my own past. In SoftwarePhysics I described how I started working on BSDE - the Bionic Systems Development Environment - back in 1985 while in the IT department of Amoco. BSDE was an early mainframe-based IDE (Integrated Development Environment, like Eclipse) at a time when there were no IDEs. During the 1980s BSDE was used to grow several million lines of production code for Amoco by growing applications from embryos. For an introduction to embryology see Software Embryogenesis. The DDL statements used to create the DB2 tables and indexes for an application were stored in a sequential file called the Control File and performed the functions of genes strung out along a chromosome. Applications were grown within BSDE by turning their genes on and off to generate code. BSDE was first used to generate a Control File for an application by allowing the programmer to create an Entity-Relationship diagram using line printer graphics on an old IBM 3278 terminal.

Figure 2 - BSDE was run on IBM 3278 terminals, using line printer graphics, and in a split-screen mode. The embryo under development grew within BSDE on the top half of the screen, while the code generating functions of BSDE were used on the lower half of the screen to insert code into the embryo and to do compiles on the fly while the embryo ran on the upper half of the screen. Programmers could easily flip from one session to the other by pressing a PF key.

After the Entity-Relationship diagram was created, the programmer used a BSDE option to create a skeleton Control File with DDL statements for each table on the Entity-Relationship diagram, and each skeleton table had several sample columns showing the syntax for various DB2 datatypes. The programmer then filled in the details for each DB2 table. When the first rendition of the Control File was completed, another BSDE option was used to create the DB2 database for the tables and indexes on the Control File. Another BSDE option was used to load up the DB2 tables with test data from sequential files. Each DB2 table on the Control File was considered to be a gene. Next, a BSDE option was run to generate an embryo application. The embryo was a 10,000-line PL/1, Cobol or REXX application that performed all of the primitive functions of the new application. The programmer then began to grow the embryo inside of BSDE in a split-screen mode. The embryo ran on the upper half of an IBM 3278 terminal and could be viewed in real-time, while the code generating options of BSDE ran on the lower half of the IBM 3278 terminal. BSDE was then used to inject new code into the embryo's programs by reading the genes in the Control File for the embryo in a real-time manner while the embryo was running in the top half of the IBM 3278 screen. BSDE had options to compile and link modified code on the fly while the embryo was still executing. This allowed for a tight feedback loop between the programmer and the application under development. In fact, many times BSDE programmers sat with end-users and co-developed software together on the fly in a very Agile manner. When the embryo had grown to full maturity, BSDE was then used to create online documentation for the new application and was also used to automate the install of the new application into production. Once in production, BSDE-generated applications were maintained by adding additional functions to their embryos.
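As a rough illustration of that first step, here is a small sketch in Python (the real tool was REXX edit macros under ISPF, and the entity and column names below are hypothetical) of generating a skeleton Control File of DDL genes, one CREATE TABLE skeleton per entity on the Entity-Relationship diagram, for the programmer to fill in.

# Sketch: write a skeleton Control File containing one CREATE TABLE "gene"
# per entity on the Entity-Relationship diagram. Each skeleton gene carries
# sample columns showing the DDL syntax for the programmer to fill in.
entities = ["CUSTOMER", "ORDER_HDR", "ORDER_ITEM"]   # hypothetical entities

skeleton_gene = """CREATE TABLE {name}
  ( {name}_ID       CHAR(8)      NOT NULL,
    SAMPLE_TEXT     VARCHAR(30),
    SAMPLE_NUMBER   INTEGER,
    SAMPLE_DATE     DATE );
"""

with open("control.file", "w") as control_file:
    for entity in entities:
        control_file.write(skeleton_gene.format(name=entity) + "\n")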

Since BSDE was written using the same kinds of software that it generated, I was able to use BSDE to generate code for itself. The next generation of BSDE was grown inside of its maternal release. Over a period of seven years, from 1985 – 1992, more than 1,000 generations of BSDE were generated, and BSDE slowly evolved in an Agile manner into a very sophisticated tool through small incremental changes. BSDE dramatically improved programmer efficiency by greatly reducing the number of buttons programmers had to push in order to generate software that worked.

Figure 3 - Embryos were grown within BSDE in a split-screen mode by transcribing and translating the information stored in the genes in the Control File for the embryo. Each embryo started out very much the same but then differentiated into a unique application based upon its unique set of genes.

Figure 4 – BSDE appeared as the cover story of the October 1991 issue of the Enterprise Systems Journal.

BSDE had its own online documentation that was generated by BSDE. Amoco's IT department also had a class to teach programmers how to get started with BSDE. As part of the curriculum Amoco had me prepare a little cookbook on how to build an application using BSDE:

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

I wish that I could claim that I was smart enough to have sat down and thought up all of this stuff from first principles, but that is not what happened. It all just happened through small incremental changes in a very Agile manner over a very long period of time, and most of the design work was done subconsciously, if at all. Even the initial BSDE ISPF edit macros happened through serendipity. When I first started programming DB2 applications, I found myself copying the DDL CREATE TABLE statements from the file I used to create the DB2 database into the program that I was working on. This file, with the CREATE TABLE statements, later became the Control File used by BSDE to store the genes for an application. I would then go through a series of editing steps on the copied-in data to transform it from a CREATE TABLE statement into a DB2 SELECT, INSERT, UPDATE, or DELETE statement. I would do the same thing all over again to declare the host variables for the program. Being a lazy programmer, I realized that there was really no thinking involved in these editing steps and that an ISPF edit macro could do the job equally well, only very quickly and without error, so I went ahead and wrote a couple of ISPF edit macros to automate the process. I still remember the moment when it first hit me. For me, it was very much like the scene in 2001 - A Space Odyssey when the man-ape picks up a wildebeest thighbone and starts to pound the ground with it. My ISPF edit macros were doing the same thing that happens when the information in a DNA gene is transcribed into a protein! A flood of biological ideas poured into my head over the next few days because, at last, I had a solution for my pent-up ideas about nonlinear systems and the second law of thermodynamics that were making my life so difficult as a commercial software developer. We needed to "grow" code – not write code!

BSDE began as a few simple ISPF edit macros running under ISPF edit. ISPF is the software tool that mainframe programmers still use today to interface to the IBM MVS and VM/CMS mainframe operating systems and contains an editor that can be greatly enhanced through the creation of edit macros written in REXX. I began BSDE by writing a handful of ISPF edit macros that could automate some of the editing tasks that a programmer needed to do when working on a program that used a DB2 database. These edit macros would read a Control File, which contained the DDL statements to create the DB2 tables and indexes. The CREATE TABLE statements in the Control File were the equivalent of genes, and the Control File itself performed the functions of a chromosome. For example, a programmer would retrieve a skeleton COBOL program, with the bare essentials for a COBOL/DB2 program, from a stock of reusable BSDE programs. The programmer would then position their cursor in the code to generate a DB2 SELECT statement and hit a PFKEY. The REXX edit macro would read the genes in the Control File and would display a screen listing all of the DB2 tables for the application. The programmer would then select the desired tables from the screen, and the REXX edit macro would then copy the selected genes to an array (mRNA). The mRNA array was then sent to a subroutine that inserted lines of code (tRNA) into the COBOL program. The REXX edit macro would also declare all of the SQL host variables in the DATA DIVISION of the COBOL program and would generate code to check the SQLCODE returned from DB2 for errors and take appropriate actions. A similar REXX ISPF edit macro was used to generate screens. These edit macros were also able to handle PL/1 and REXX/SQL programs. They could have been altered to generate the syntax for any programming language such as C, C++, or Java. As time progressed, BSDE took on more and more functionality via ISPF edit macros. Finally, there came a point where BSDE took over and ISPF began to run under BSDE. This event was very similar to the emergence of the eukaryotic architecture for cellular organisms. BSDE consumed ISPF like the first eukaryotic cells that consumed prokaryotic bacteria and used them as mitochondria and chloroplasts. With continued small incremental changes, BSDE continued to evolve.
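The original macros were REXX running under the ISPF editor and are long gone, but the transcription step they performed is easy to sketch. Below is a minimal modern sketch in Python, with a hypothetical CUSTOMER gene and hypothetical COBOL fragments, of the technique described above: read a CREATE TABLE gene from the Control File, copy its columns into an array (the mRNA), and translate that array into embedded SQL code lines (the tRNA), complete with host variables and an SQLCODE check.

import re

# A hypothetical "gene" as it might appear in a Control File.
CONTROL_FILE = """
CREATE TABLE CUSTOMER
  ( CUST_ID     CHAR(8)     NOT NULL,
    CUST_NAME   VARCHAR(30),
    BALANCE     INTEGER );
"""

def read_gene(ddl, table):
    # Transcribe one CREATE TABLE gene into a list of (column, datatype)
    # pairs - the "mRNA" copy of the gene.
    body = re.search(r"CREATE TABLE\s+" + table + r"\s*\((.*?)\);",
                     ddl, re.S | re.I).group(1)
    mrna = []
    for chunk in body.split(","):
        parts = chunk.split()
        if len(parts) >= 2:
            mrna.append((parts[0], parts[1]))
    return mrna

def translate_to_select(table, mrna):
    # Translate the mRNA into "tRNA" code lines: an embedded SQL SELECT
    # with COBOL-style host variables and an SQLCODE check.
    cols = [col for col, _ in mrna]
    hosts = [":" + col.replace("_", "-") for col in cols]
    return "\n".join([
        "           EXEC SQL",
        "               SELECT " + ", ".join(cols),
        "               INTO " + ", ".join(hosts),
        "               FROM " + table,
        "               WHERE " + cols[0] + " = " + hosts[0],
        "           END-EXEC.",
        "           IF SQLCODE NOT = 0",
        "               PERFORM 9999-SQL-ERROR.",
    ])

mrna = read_gene(CONTROL_FILE, "CUSTOMER")
print(translate_to_select("CUSTOMER", mrna))

A real edit macro inserted lines like these at the cursor position inside the program being edited and also generated the matching host variable declarations for the DATA DIVISION, but the gene-to-code transcription is the heart of the idea.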

I noticed that I kept writing the same kinds of DB2 applications, with the same basic body plan, over and over. At the time I did not know it, but these were primarily Model-View-Controller (MVC) applications. For more on the MVC design pattern please see Software Embryogenesis. From embryology, I got the idea of using BSDE to read the Control File for an application and to generate an "embryo" for the application based upon its unique set of genes. The embryo would perform all of the things I routinely programmed over and over for a new application. Once the embryo was generated for a new application from its Control File, the programmer would then interactively "grow" code and screens for the application. With time, each embryo differentiated into a unique individual application in an Agile manner until the fully matured application was delivered into production by BSDE. At this point, I realized that I could use BSDE to generate code for itself, and that is when I started using BSDE to generate the next generation of BSDE. This technique really sped up the evolution of BSDE because I had a positive feedback loop going. The more powerful BSDE became, the faster I could add improvements to the next generation of BSDE through the accumulated functionality inherited from previous generations.
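Continuing the sketch above under the same assumptions (Python standing in for the original REXX, and purely hypothetical names), the embryo-generation step amounted to reading every gene on the Control File and emitting the routine body plan, a menu, a screen, and stub add/change/delete/inquire functions per table, that every MVC application needed before any application-specific growth began.

import re

def list_genes(control_file_text):
    # Every CREATE TABLE statement on the Control File is one gene.
    return re.findall(r"CREATE TABLE\s+(\w+)", control_file_text, re.I)

def generate_embryo(control_file_text):
    # Emit the routine "body plan": one panel and four stub functions per gene.
    lines = ["* EMBRYO MAIN MENU"]
    for gene in list_genes(control_file_text):
        lines.append(f"*   {gene}: panel {gene}-PANEL with stubs "
                     f"ADD-{gene}, CHANGE-{gene}, DELETE-{gene}, INQUIRE-{gene}")
    return "\n".join(lines)

control_file_text = """
CREATE TABLE CUSTOMER  ( CUST_ID  CHAR(8)  NOT NULL );
CREATE TABLE ORDER_HDR ( ORDER_ID CHAR(10) NOT NULL );
"""
print(generate_embryo(control_file_text))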

Embryos were grown within BSDE using an ISPF split-screen mode. The programmer would start up a BSDE session and run Option 4 – Interactive Systems Development from the BSDE Master Menu. This option would look for an embryo, and if it did not find one, would offer to generate an embryo for the programmer. Once an embryo was implanted, the option would turn the embryo on and the embryo would run inside of the BSDE session with whatever functionality it currently had. The programmer would then split his screen with PF2 and another BSDE session would appear in the lower half of his terminal. The programmer could easily toggle control back and forth between the upper and lower sessions with PF9. The lower session of BSDE was used to generate code and screens for the embryo on the fly, while the embryo in the upper BSDE session was fully alive and functional. This was possible because BSDE generated applications that used ISPF Dialog Manager for screen navigation, which was an interpretive environment, so compiles were not required for screen changes. If your logic was coded in REXX, you did not have to do compiles for logic changes either, because REXX was an interpretive language. If PL/1 or COBOL were used for logic, BSDE had facilities to easily compile code for individual programs after a coding change, and ISPF Dialog Manager would simply load the new program executable when that part of the embryo was exercised. These techniques provided a tight feedback loop so that programmers and end-users could immediately see the effects of a change as the embryo grew and differentiated.

But to fully understand how BSDE was used to grow embryos through small incremental changes into fully mature applications, we need to turn to how that is accomplished in the biosphere. For example, many people find it hard to believe that all human beings evolved from a single primitive cell in the very distant past, yet each of us manages to repeat a similar feat in only about 9 months of gestation! Indeed, it does seem like a data processing miracle that a complex human body can grow and differentiate into 100 trillion cells, composed of several hundred different cell types that ultimately form the myriad varied tissues within a body, all from a single fertilized egg cell. In biology, the study of this incredible feat is called embryogenesis, or developmental biology, and this truly amazing process is certainly worthy of emulating from a data processing perspective when developing software. So let's spend some time seeing how that is done.

Human Embryogenesis
Most multicellular organisms follow a surprisingly similar sequence of steps to form a complex body, composed of billions or trillions of eukaryotic cells, from a single fertilized egg. This is a sure sign of some inherited code at work that has been tweaked many times to produce a multitude of complex body plans, or phyla, by similar developmental processes. Since many multicellular life forms follow a similar developmental theme, let us focus, as always, upon ourselves and use the development of human beings as our prime example of how developmental biology works. For IT professionals and other readers not familiar with embryogenesis, it would be best now to view this short video before proceeding:

Medical Embryology - Difficult Concepts of Early Development Explained Simply https://www.youtube.com/watch?annotation_id=annotation_1295988581&feature=iv&src_vid=rN3lep6roRI&v=nQU5aKKDwmo

Basically, a fertilized egg, or zygote, begins to divide many times over, without really increasing in size at all. After a number of divisions, the zygote becomes a solid ball of undifferentiated cells known as a morula. The morula then develops a hollow interior called a blastocoel. The resulting hollow ball of cells is known as a blastula, and all the cells in the blastula are still undifferentiated, meaning that they are all still identical in nature.

Figure 5 – A fertilized egg, or zygote, divides many times over to form a solid sphere of cells called a morula. The morula then develops a central hole to become a hollow ball of cells known as a blastula. The blastula consists of identical cells. When gastrulation begins, some cells within the blastula begin to form three layers of differentiated cells – the ectoderm, mesoderm, and endoderm. The above figure does not show the amnion, which forms just outside of the infolded cells that create the gastrula. See Figure 6 for the location of the amnion.

The next step is called gastrulation. In gastrulation, one side of the blastula breaks symmetry and folds in on itself, eventually forming three differentiated layers – the ectoderm, mesoderm and endoderm. The amnion forms just outside of the gastrulation infold.

Figure 6 – In gastrulation, cells infold and differentiate to form three layers of differentiated cells - the ectoderm, mesoderm and endoderm.

Figure 7 – Above is a close-up view showing the ectoderm, mesoderm and endoderm forming from the primitive streak.

The cells of the endoderm go on to differentiate into the internal organs or guts of a human being. The cells of the mesoderm, or the middle layer, go on to form the muscles and connective tissues that do most of the heavy lifting. Finally, the cells of the ectoderm go on to differentiate into the external portions of the human body, like the skin and nerves.

Figure 8 – Some examples of the cell types that develop from the endoderm, mesoderm and ectoderm.



Figure 9 – A human being develops from the cells in the ectoderm, mesoderm and endoderm as they differentiate into several hundred different cell types.



Figure 10 – A human embryo develops through small incremental changes into a fully mature newborn in an Agile manner. Living things, the most complex data processing machines in the Universe, never use the Waterfall methodology to build new bodies or to evolve over time into new species. Living things always develop new functionality through small incremental changes in an Agile manner.

Growing Embryos in an Agile Manner
At the time IBM had a methodology called JAD - Joint Application Development. In a JAD the programmers and end-users met together in a room and roughed out the design for a new application on flip charts in an interactive manner. The flip chart papers were then taped to the walls for all to see. At the end of the day, the flip charts were gathered by an attending systems analyst who then attempted to assemble the JAD flip charts into a User Requirements document that was then carried into the normal Waterfall development process. This seemed rather cumbersome, so with BSDE we tried to do some JAD work in an interactive manner with end-users by growing Model-View-Controller (MVC) embryos together. In an MVC application, the data Model was stored on DB2 tables, forming the "guts" or endoderm tissues of the application. The View code consisted of ISPF screens and reports that could be viewed by the end-user, forming the ectoderm tissues of the application. Finally, the Model and View were connected together by the Controller code that did most of the heavy lifting, forming the mesoderm tissues of the application. Controller code was by far the most difficult to code and was the most expensive part of a project. So as in human gastrulation, we grew embryos from the outside in and the inside out, by finally having the external ectoderm tissues meet the internal endoderm tissues in the middle, at the mesoderm tissues of the Controller code. Basically, we would first start with some data modeling with the end-user to rough out the genes in the Control File chromosome. Then we had BSDE generate an MVC embryo for the application based upon its genes in the Control File. Next we roughed out the ectoderm ISPF screens for the application, using BSDE to generate the ISPF screens and the field layouts on each screen. We then used BSDE to generate REXX, Cobol or PL/1 programs to navigate the flow of the ISPF screens. So we worked on the endoderm and ectoderm with the end-users first, since they were very easy to change and they determined the look and feel of the application. By letting end-users first interact with an embryo that consisted of just the end-user ectoderm tissues, forming the user interface, and the underlying DB2 tables of the endoderm, it was possible to rough out an application with no user requirements documentation at all in a very Agile manner. Of course, as end-users interacted with the embryo, we often discovered that new data elements needed to be added to our DB2 model stored on the Control File, so we would simply add the new columns to the genes on the Control File and then regenerate the impacted ISPF screens from the modified genes. Next we added validation code to the embryo to validate the input fields on the ISPF screens, using BSDE templates. BSDE came with a set of templates of reusable code that could be imported for this purpose. This was before the object-oriented programming revolution in IT, so the BSDE reusable code templates performed the same function as the class library of an object-oriented programming language. Finally, mockup reports were generated by BSDE. BSDE was used to create a report mockup file that looked like what the end-user wanted to see in an online or line printer report. BSDE then took the report mockup file and generated a Cobol or PL/1 program that produced the report output, to give the end-user a good feel for how the final report would actually look.

At this point end-users could fully navigate the embryo and interact with it in real-time. They could pretend to enter data into the embryo as they navigated the embryo's ISPF screens, and they could see the embryo generate reports. Once the end-user was happy with the embryo, we would start to use BSDE to generate code for the mesoderm Controller tissues by having it create SELECT, INSERT, UPDATE and DELETE SQL statements in the embryonic programs. We also used the BSDE reusable code templates for processing logic in the mesoderm Controller code that connected the ectoderm ISPF screens to the endoderm DB2 database.
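To give a flavor of that ectoderm-first step, here is one more small sketch under the same assumptions as before (Python in place of the original REXX, hypothetical column names and widths): roughing out a screen field layout directly from the columns of a gene, so that end-users have something to react to before any Controller code exists.

# Sketch: rough out an "ectoderm" screen layout from one gene's columns,
# giving end-users a user interface to react to before any Controller
# (mesoderm) code exists. Column names and display widths are hypothetical.
columns = [("CUST_ID", 8), ("CUST_NAME", 30), ("BALANCE", 11)]

def generate_screen(table, columns):
    lines = ["---------------  " + table + " INQUIRY  ---------------", ""]
    for name, width in columns:
        label = name.replace("_", " ").title().ljust(12)
        lines.append("  " + label + ": " + "_" * width)
    lines += ["", "  PF3=Exit   PF5=Add   PF6=Change   PF9=Delete"]
    return "\n".join(lines)

print(generate_screen("CUSTOMER", columns))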

I Think, Therefore I am - a Heretic
BSDE was originally developed for my own use, but fellow programmers soon took note, and over time an underground movement of BSDE programmers developed at Amoco, so I began to market BSDE and softwarephysics within Amoco. Amoco was a rather conservative oil company, so this was a hard sell, and I am not much of a salesman. By now it was late 1986, and I was calling our programmers "software engineers" and pushing the unconventional idea of applying physics and biology to software in order to grow software in an Agile manner. These were very heretical ideas at the time because of the dominance of the Waterfall methodology. BSDE was generating real applications with practically no upfront user requirements or specification documents to code from. Fortunately for me, all of Amoco's income came from a direct application of geology, physics, or chemistry, so many of our business partners were geologists, geophysicists, chemists, or chemical, petroleum, electrical, or industrial engineers, and that was a big help. Computer viruses began to appear about this time, and they provided some independent corroboration for the idea of applying biological concepts to software, and in 1987 we also started to hear some things about artificial life from Chris Langton out of Los Alamos, but there was still a lot of resistance to the idea back at Amoco. One manager insisted that I call genes "templates". I used the term "bionic" in the product name after I checked the dictionary and found that bionic meant applying concepts from biology to solve engineering problems, which was exactly what I was trying to achieve. There were also a couple of American television programs in the 1970s that introduced the term to the American public, The Six Million Dollar Man and The Bionic Woman, which featured superhuman characters performing astounding feats. In my BSDE road shows, I would demo BSDE in action, spewing out perfect code in real-time about 20 times faster than normal human programmers could achieve, so I thought the term "bionic" was fitting. I am still not sure that using the term "bionic" was the right thing to do. IT was not "cool" in the 1980s, as it is in today's ubiquitous Internet Age, and was dominated by very serious and conservative people with backgrounds primarily in accounting. However, BSDE was pretty successful, and by 1989 BSDE was finally recognized by Amoco IT management and made available to all Amoco programmers. A BSDE class was developed, and about 20 programmers became active BSDE programmers. A BSDE COI (Community of Interest) was formed, and I used to send out weekly email newsletters about applying physics and biology to software to the COI members.

So that is how BSDE got started by accident. Since I did not have any budget for BSDE development, and we charged out all of our time at Amoco to various projects, I was forced to develop BSDE through small incremental enhancements in an Agile manner that always made my job on the billable projects a little easier. I knew that was how evolution worked, so I was not too concerned. In retrospect, this was a fortunate thing. In the 1980s, the accepted theory of the day was that you needed to prepare lots of user requirements and specification documents up front, following the Waterfall methodology, for a new application, and then you would code from those documents as a blueprint. This was before the days of Agile development through small incremental releases that you find today. In the mid-1980s, prototyping was a very heretical proposition in IT. But I don't think that I could have built BSDE using the traditional blueprint Waterfall methodology of the day.

Conclusion
As you can see, having a sound theoretical framework, like softwarephysics, helps when you are faced with making an important IT decision, like whether to use the Agile or Waterfall methodology. Otherwise, IT decisions are left to the political whims of the times, with no sound basis.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston