Friday, May 15, 2020

The Origin and Evolution of Cloud Computing - Software Moves From the Sea to the Land and Back Again

About four years ago, I sensed that a dramatic environmental change was occurring in IT, so I posted Cloud Computing and the Coming Software Mass Extinction on March 9, 2016, in anticipation of another IT mass extinction that would likely make my then-current IT job go extinct once again. At the time, I was 64 years old and working in the Middleware Operations group of Discover Financial Services, the credit card company, supporting all of their externally-facing websites and the internal applications used to run the business. Having successfully survived the Mainframe ⟶ Distributed Computing mass extinction of the early 1990s, I decided that it was a good time to depart IT, rather than deal with the turmoil of another software mass extinction. So I retired from my very last IT position in December of 2016 at the age of 65. Now that about four years have gone by since my last posting on Cloud Computing, I thought that it might be a good time to take another look to see what has happened. In many of my previous posts on softwarephysics, I have noted that software has been evolving about 100 million times faster than carbon-based life on the Earth and, also, that software has been following a very similar path through Design Space to the one taken by carbon-based life during this amazingly rapid evolution. Thus, over the years, I have found that, roughly speaking, about one second of software evolution is equivalent to about two years of evolution for carbon-based life forms. So four years of software evolution comes to about 250 million years of carbon-based life evolution. Now carbon-based life has evolved quite a bit over the past 250 million years, so I expected to see some very dramatic changes to the way that IT is practiced today compared to what it looked like when I departed in 2016. In order to do that, I took several overview online courses on Cloud Computing at Coursera to see just how much Cloud Computing had changed IT over the past 250 million years. Surprisingly, after taking those Cloud Computing courses, I found myself quite at home with the way that IT is practiced today! That is because modern IT has actually returned to the way that IT was practiced back in the 1980s when I first became an IT professional!
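
For readers who want to check the arithmetic of that conversion, here is a quick sketch in Python. The only assumption is the rule of thumb stated above, that one second of software evolution corresponds to about two years of evolution for carbon-based life:

    # A quick check of the conversion rule used above.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600      # about 3.16e7 seconds per year
    software_years = 4                         # years since my last Cloud posting
    software_seconds = software_years * SECONDS_PER_YEAR
    carbon_years = 2 * software_seconds        # 2 years of carbon-based evolution per second
    print(f"{carbon_years:.3g} years")         # prints about 2.52e+08 years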

In this posting, I would like to examine the origin and evolution of Cloud Computing by retelling my own personal professional experiences with working with computers for the past 48 years. That comes to about three billion years of evolution for carbon-based life! But before doing that, let's review how carbon-based life evolved on this planet. That will help to highlight many of the similarities between the evolution of carbon-based life and software. Carbon-based life probably first evolved on this planet about 4 billion years ago on land in warm hydrothermal pools - for more on that see The Bootstrapping Algorithm of Carbon-Based Life for details. Carbon-based life then moved to the sea as the simple prokaryotic cells of bacteria and archaea. About two billion years later the dramatic evolution of complex eukaryotic cells arrived - for more on that see The Rise of Complexity in Living Things and Software. The rise of multicellular organisms, consisting of millions, or billions, of eukaryotic cells all working together, came next in the Ediacaran about 635 million years ago. The Cambrian explosion then followed 541 million years ago with the rise of very complex carbon-based life that could move about in the water. For more on that see the SoftwarePaleontology section of SoftwareBiology. The first plants appeared on land around 470 million years ago, during the Ordovician period. Animals then left the sea for the land during the Devonian period around 360 million years ago as shown in Figure 1.

Figure 1 - Mobile Carbon-based life first left the sea for the land during the Devonian period around 360 million years ago. Carbon-based life had finally returned to the land from which it first emerged 4 billion years ago.

Being mobile on land had the advantage of not having to compete with the entire biosphere for the communally shared oxygen dissolved in seawater. That can become a serious problem for mobile complex carbon-based life because it cannot make its own oxygen when the atmospheric level of oxygen declines. In fact, there currently is a school of thought that maintains that mobile carbon-based animal life left the sea for the land for that very reason. But being mobile on land does have its drawbacks too, especially when you find yourself amongst a diverse group of friends and neighbors all competing for the very same resources and eyeing you with appetite as a nice course for their next family outing. Of course, the same challenges are also to be found when living in the sea, but while living in the sea, you do not have to ever worry about dying of thirst, freezing to death, dying of heat exhaustion, burning up in a forest fire, getting stepped on, falling off of a mountain, drowning in a flash flood, getting hit by lightning, dying in an earthquake, drowning from a tsunami, getting impaled by a tree limb thrown by a tornado or drowning in a hurricane. So there might be some real advantages to be gained by mobile carbon-based life forms that chose to return to the sea and take all of the advances that were made while on the land along with them. And, indeed, that is exactly what happened to several types of land-based mammals. The classic example is the return of the mammalian whales to the sea. The mammalian whales took back to the sea all of the advances that were gained while on land, like directly breathing the air for oxygen, being warm-blooded, giving birth to live young and nursing them to adulthood - see Figure 2.

Figure 2 - Some mobile carbon-based life forms returned to the sea and took the advances that were made on land with them back to the sea.

Cloud Computing Software Followed a Very Similar Evolutionary Path Through Design Space
To begin our comparison of the origin and early evolution of Cloud Computing with the origin and evolution of carbon-based life, let's review the basics of Cloud Computing. The two key components of Cloud Computing are the virtualization of hardware and software and the pay-as-you-go timesharing of that virtual hardware and software. The aim of Cloud Computing is to turn computing into a number of public utilities, like the electrical power grid, public water supply, natural gas and public sewage utilities found in a modern city. Instead of having to worry about your own electrical generator, water well and septic field in a rural area, you can get all of these necessities as services from public utilities and simply pay monthly fees based on your consumption of resources.

Figure 3 – Cloud Computing returns us to the timesharing days of the 1960s and 1970s by viewing everything as a service.

For example, a business might use SaaS (Software as a Service) where the Cloud provider supplies all of the hardware and software necessary to perform an IT function, such as Gmail for email, Zoom for online meetings, or Microsoft 365 and Google Apps for documents and spreadsheets. Similarly, a Cloud provider can supply all of the hardware and infrastructure software in a PaaS (Platform as a Service) layer to run the proprietary applications that are generated by the IT department of a business. The proprietary business applications run on virtual machines or in virtual containers that allow for easy application maintenance. This is where the Middleware from pre-Cloud days is run - the type of software that I supported for Discover back in 2016 and which was very much in danger of extinction. Finally, Cloud providers offer an IaaS (Infrastructure as a Service) layer consisting of virtual machines, virtual networks, and virtual data storage on virtual disk drives.
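
As a concrete illustration of the IaaS layer, here is a minimal sketch in Python using the AWS boto3 SDK. The region, machine image ID and instance type below are placeholders, and a real deployment would also need credentials, networking and security groups configured:

    # A minimal IaaS sketch using the AWS boto3 SDK: a few API calls provision
    # a virtual machine on a Cloud provider's hardware, billed by usage like a
    # public utility. The ImageId below is just a placeholder.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder machine image
        InstanceType="t3.micro",           # a small virtual machine
        MinCount=1,
        MaxCount=1,
    )
    print("Provisioned virtual machine:",
          response["Instances"][0]["InstanceId"])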

Figure 4 – Prior to Cloud Computing, applications were run on a very complicated infrastructure of physical servers that made the installation and maintenance of software nearly impossible. The Distributed Computing Platform of the 1990s became unsustainable when many hundreds of physical servers were needed to run complex high-volume web-based applications.

As I outlined in The Limitations of Darwinian Systems, the pre-Cloud Distributed Computing Platform that I left in 2016 consisted of many hundreds of physical servers and was so complicated that we could barely maintain it. Clearly, the Distributed Computing Platform of the 1990s was no longer sustainable. Something had to change to allow IT to advance. That's when IT decided to return to the sea by heading for the Clouds. Now let's see how that happened.  

Pay-As-You-Go Timesharing is Quite Old
Let's begin with the pay-as-you-go characteristic of Cloud Computing. This characteristic of Cloud Computing is actually quite old. For example, back in 1968, my high school ran a line to the Illinois Institute of Technology to connect a local card reader and printer to the Institute's mainframe computer. We had our own keypunch machine to punch up the cards for Fortran programs that we could then submit to the distant mainframe which then printed back the results of our program runs on our local printer. The Illinois Institute of Technology also did the same for several other high schools in my area. So essentially, the Illinois Institute of Technology became a very early Cloud Provider. This allowed high school students in the area to gain some experience with computers even though their high schools could clearly not afford to maintain a mainframe infrastructure.

Figure 5 - An IBM 029 keypunch machine like the one installed in my high school in 1968.

Figure 6 - Each card could hold a maximum of 80 bytes. Normally, one line of code was punched onto each card.

Figure 7 - The cards for a program were held together into a deck with a rubber band, or for very large programs, the deck was held in a special cardboard box that originally housed blank cards. Many times the data cards for a run followed the cards containing the source code for a program. The program was compiled and linked in two steps of the run and then the generated executable file processed the data cards that followed in the deck.

Figure 8 - To run a job, the cards in a deck were fed into a card reader, as shown on the left above, to be compiled, linked, and executed by a million-dollar mainframe computer back at the Illinois Institute of Technology. In the above figure, the mainframe is located directly behind the card reader.

Figure 9 - The output of Fortran programs run at the Illinois Institute of Technology was printed locally at my high school on a line printer.

The Slow Evolution of Hardware and Software Virtualization
Unfortunately, I did not take that early computer science class at my high school, but I did learn to write Fortran code at the University of Illinois at Urbana in 1972. At the time, I was also punching out Fortran programs on an old IBM 029 keypunch machine, and I soon discovered that writing code on an IBM 029 keypunch machine was even worse than writing out term papers on a manual typewriter! At least when you submitted a term paper with a few typos, your professor was usually kind enough not to abend your term paper right on the spot and give you a grade of zero. Sadly, I learned that Fortran compilers were not so forgiving. The first thing you did was to write out your code on a piece of paper as best you could back at the dorm. The back of a large stack of fanfold printer paper output was ideal for such purposes. In fact, as a physics major, I first got hooked by software while digging through the wastebaskets of DCL, the Digital Computing Lab, at the University of Illinois looking for fanfold listings of computer dumps that were about a foot thick. I had found that the backs of thick computer dumps were ideal for working on lengthy problems in my quantum mechanics classes.

It paid to do a lot of desk-checking of your code back at the dorm before heading out to the DCL. Once you got to the DCL, you had to wait your turn for the next available IBM 029 keypunch machine. This was very much like waiting for the next available washing machine on a crowded Saturday morning at a laundromat. When you finally got to your IBM 029 keypunch machine, you would load it up with a deck of blank punch cards and then start punching out your program. You would first press the feed button to have the machine pull your first card from the deck of blank cards and register the card in the machine. Fortran compilers required code to begin in column 7 of the punch card so the first thing you did was to press the spacebar 6 times to get to column 7 of the card. Then you would try to punch in the first line of your code. If you goofed and hit the wrong key by accident while punching the card, you had to eject the bad card and start all over again with a new card. Structured programming had not been invented yet, so nobody indented code at the time. Besides, trying to remember how many times to press the spacebar for each new card in a block of indented code was just not practical. Pressing the spacebar 6 times for each new card was hard enough! Also, most times we proofread our card decks by flipping through them before we submitted the card deck. Trying to proofread indented code in a card deck would have been rather disorienting, so nobody even thought of indenting code. Punching up lots of comment cards was also a pain, so most people got by with a minimum of comment cards in their program deck.

After you punched up your program on a card deck, you would then punch up your data cards. Disk drives and tape drives did exist in those days, but disk drive storage was incredibly expensive and tapes were only used for huge amounts of data. If you had a huge amount of data, it made sense to put it on a tape because if you had several feet of data on cards, there was a good chance that the operator might drop your data card deck while feeding it into the card reader. But usually, you ended up with a card deck that held the source code for your program and cards for the data to be processed too. You also punched up the JCL (Job Control Language) cards that instructed the IBM mainframe to compile, link and then run your program all in one run. You then dropped your finalized card deck into the input bin so that the mainframe operator could load your card deck into the card reader for the IBM mainframe. After a few hours, you would return to the output room of the DCL and go to the alphabetically sorted output bins that held all the jobs that had recently run. If you were lucky, in your output bin you found your card deck and the fan-folded computer printout of your last run. Unfortunately, you usually found that something had gone wrong with your job. Most likely you had a typo in your code that had to be fixed. If it was nighttime and the mistake in your code was an obvious typo, you probably still had time for another run, so you would get back in line for an IBM 029 keypunch machine and start all over again. You could then hang around the DCL working on the latest round of problems in your quantum mechanics course. However, machine time was incredibly expensive in those days and you had a very limited budget for machine charges. So if there was some kind of logical error in your code, many times you had to head back to the dorm for some more desk checking before giving it another shot the next day.

My First Experiences with Interactive Computing
I finished up my B.S. in Physics at the University of Illinois at Urbana in 1973 and headed up north to complete an M.S. in Geophysics at the University of Wisconsin at Madison. I was working with a team of graduate students who were collecting electromagnetic data in the field on a DEC PDP-8/e minicomputer. The machine cost about $30,000 in 1973 (about $176,000 in 2020 dollars) and was about the size of a large side-by-side refrigerator. The machine had 32 KB of magnetic core memory, about 2 million times less memory than a modern 64 GB smartphone. This was my first experience with interactive computing. Previously, I had only written Fortran programs for batch processing on IBM mainframes. I wrote BASIC programs on the DEC PDP-8/e minicomputer using a teletype machine and a built-in line editor. The teletype machine was also used to print out program runs. My programs were saved to a tape, and I could also read and write data from a tape as well. The neat thing about the DEC PDP-8/e minicomputer was that there were no computer charges and I did not have to wait hours to see the output of a run. None of the department professors knew how to program computers, so there was plenty of free machine time because only about four graduate students knew how to program at the time. I also learned the time-saving trick of interactive programming. Originally, I would write a BASIC program and hard-code the data values for a run directly in the code and then run the program as a batch job. After the run, I would then edit the code to change the hard-coded data values and then run the program again. Then one of my fellow graduate students showed me the trick of how to add a very primitive interactive user interface to my BASIC programs. Instead of hard-coding data values, my BASIC code would now prompt me for values that I could enter on the fly on the teletype machine. This allowed me to create a library of "canned" BASIC programs on tape that I never had to change. I could just run my "canned" programs with new input data as needed.
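
That primitive interactive interface is still a useful trick today. Here is a rough sketch of the same idea in modern Python rather than 1970s BASIC; the variable names are just invented for illustration:

    # The "canned program" trick: instead of hard-coding data values and
    # editing the program before every run, prompt for the values at run time.
    #
    # Before: hard-coded values meant editing and resaving the program.
    #     resistivity = 120.0
    #     frequency = 8.0
    #
    # After: a primitive interactive interface, like the BASIC INPUT statement.
    resistivity = float(input("Enter ground resistivity (ohm-m): "))
    frequency = float(input("Enter transmitter frequency (Hz): "))
    print(f"Running model for {resistivity} ohm-m at {frequency} Hz")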

We actually hauled this machine through the dirt-road lumber trails of the Chequamegon National Forest in Wisconsin and powered it with an old diesel generator to digitally record electromagnetic data in the field. For my thesis, I worked with a group of graduate students who were shooting electromagnetic waves into the ground to model the conductivity structure of the Earth’s upper crust. We were using the Wisconsin Test Facility (WTF) of Project Sanguine to send very low-frequency electromagnetic waves, with a bandwidth of about 1 – 20 Hz, into the ground, and then we measured the reflected electromagnetic waves in cow pastures up to 60 miles away. All this information has been declassified and is available on the Internet, so any retired KGB agents can stop taking notes now and take a look at:

http://www.fas.org/nuke/guide/usa/c3i/fs_clam_lake_elf2003.pdf

Project Sanguine built an ELF (Extremely Low Frequency) transmitter in northern Wisconsin and another transmitter in northern Michigan in the 1970s and 1980s. The purpose of these ELF transmitters was to send messages to our nuclear submarine force at a frequency of 76 Hz. These very low-frequency electromagnetic waves can penetrate the highly conductive seawater of the oceans to a depth of several hundred feet, allowing the submarines to remain at depth, rather than coming close to the surface for radio communications. You see, normal radio waves in the Very Low Frequency (VLF) band, at frequencies of about 20,000 Hz, only penetrate seawater to a depth of 10 – 20 feet. This ELF communications system became fully operational on October 1, 1989, when the two transmitter sites began synchronized transmissions of ELF broadcasts to our submarine fleet.
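
The physics behind those depth figures is the skin effect: the depth to which an electromagnetic wave penetrates a conductor falls as the square root of its frequency. Here is a quick sketch in Python using the standard skin-depth formula and a typical textbook value of about 4 S/m for the conductivity of seawater; usable communication depths run to a few skin depths:

    # Skin depth of a conductor: delta = sqrt(2 / (mu0 * sigma * omega)).
    import math

    MU0 = 4 * math.pi * 1e-7   # permeability of free space (H/m)
    SIGMA = 4.0                # approximate conductivity of seawater (S/m)

    def skin_depth_m(frequency_hz):
        omega = 2 * math.pi * frequency_hz
        return math.sqrt(2 / (MU0 * SIGMA * omega))

    for f in (76, 20000):      # ELF versus VLF
        d = skin_depth_m(f)
        print(f"{f:>6} Hz: skin depth {d:5.1f} m ({d * 3.28:4.0f} ft)")
    # Prints about 29 m (95 ft) at 76 Hz, but only about 1.8 m (6 ft) at 20,000 Hz.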

I did all of my preliminary modeling work in BASIC on the DEC PDP-8/e without a hitch while the machine was sitting peacefully in an air-conditioned lab. So I did not have to worry about the underlying hardware at all. For me, the machine was just a big black box that processed my software as directed. However, when we dragged this poor machine through the bumpy lumber trails of the Chequamegon National Forest, all sorts of "software" problems arose that were really due to the hardware. For example, we learned that each time we stopped and made camp for the day, we had to reseat all of the circuit boards in the DEC PDP-8/e. We also learned that the transistors in the machine did not like it when the air temperature in our recording truck rose above 90 °F because we started getting parity errors. We also found that we had to let the old diesel generator warm up a bit before we turned on the DEC PDP-8/e to give the generator enough time to settle down into a nice, more-or-less stable, 60 Hz alternating voltage.

Figure 10 – Some graduate students huddled around a DEC PDP-8/e minicomputer. Notice the teletype machines in the foreground on the left that were used to input code and data into the machine and to print out results as well.

Back to Batch Programming on IBM Mainframes
Then from 1975 – 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. I kept coding Fortran the whole time on IBM mainframes. In 1979, I made a career change and became an IT professional in Amoco's IT department. Structured programming had arrived by then, so we were now indenting code and adding comment statements to our code, but we were still programming on cards. We were now using IBM 129 keypunch machines that were a little bit more sophisticated than the old IBM 029 keypunch machines. However, the coding process was still very much the same. I worked on code at my desk and still spent a lot of time desk checking the code. When I was ready for my next run, I would get into an elevator and travel down to the basement of the Amoco Building where the IBM mainframes were located. Then I would punch my cards on one of the many IBM 129 keypunch machines, but this time with no waiting for a machine. After I submitted my deck, I would travel up 30 floors to my cubicle to work on something else. After a couple of hours, I would head down to the basement again to collect my job. On a good day, I could manage to get 4 runs in. But machine time was still incredibly expensive. If I had a $100,000 project, $25,000 went for programming time, $25,000 went to IT overhead like integration management and data management services, and a full $50,000 went to machine costs for compiles and test runs!

This may all sound very inefficient and tedious today, but things had once been even worse. I used to talk to the old-timers about the good old days of IT. They told me that when the operators began their shift on an old-time 1950s vacuum tube computer, the first thing they did was to crank up the voltage on the vacuum tubes to burn out the tubes that were on their last legs. Then they would replace the burned-out tubes to start the day with a fresh machine.

Figure 11 – In the 1950s, the electrical relays of the very ancient computers were replaced with vacuum tubes that were also very large, used lots of electricity and generated lots of waste heat too, but the vacuum tubes were 100,000 times faster than relays.

They also explained that the machines were so slow that they spent all day processing production jobs. Emergency maintenance work to fix production bugs was allowed at night, but new development was limited to one compile and test run per week! They also told me about programming the plugboards of electromechanical Unit Record Processing machines back in the 1950s by physically rewiring the plugboards. The Unit Record Processing machines would then process hundreds of punch cards per minute by routing the punch cards from machine to machine in processing streams.

Figure 12 – In the 1950s Unit Record Processing machines like this card sorter were programmed by physically rewiring a plugboard.

Figure 13 – The plugboard for a Unit Record Processing machine.

The House of Cards Finally Falls
But all of this was soon to change. In the early 1980s, the IT department of Amoco switched to using TSO running on dumb IBM 3278 terminals to access IBM MVS mainframes. Note that the IBM MVS operating system for mainframes is now called the z/OS operating system. We now used a full-screen editor called ISPF running under TSO on the IBM 3278 terminals to write code and submit jobs, and our development jobs usually ran in less than an hour. The source code for our software files was now on disk in partitioned datasets for easy access and updating. The data had moved to tapes and it was the physical process of mounting and unmounting tapes that now slowed down testing. For more on tape processing see: An IT Perspective on the Origin of Chromatin, Chromosomes and Cancer. Now I could run maybe 10 jobs in one day to test my code! However, machine costs were still incredibly high and still accounted for about 50% of project costs, so we still had to do a lot of desk checking to save on machine costs. At first, the IBM 3278 terminals appeared on the IT floor in "tube rows" like the IBM 029 keypunch machines of yore. But after a few years, each IT professional was given their own IBM 3278 terminal on their own desk. Finally, there was no more waiting in line for an input device!

Figure 14 - The IBM ISPF full-screen editor ran on IBM 3278 terminals connected to IBM mainframes in the late 1970s. ISPF was also a screen-based interface to TSO (Time Sharing Option) that allowed programmers to do things like copy files and submit batch jobs. ISPF and TSO running on IBM mainframes allowed programmers to easily reuse source code by doing copy/paste operations with the screen editor from one source code file to another. By the way, ISPF and TSO are still used today on IBM mainframe computers to support writing and maintaining software.

I found that using software to write and maintain software through ISPF dramatically improved software development and maintenance productivity. It was like moving from typing term papers on manual typewriters to writing term papers on word processors. It probably improved productivity by a factor of 10 or more. In the early 1980s, I was very impressed by this dramatic increase in productivity brought on by using software to write and maintain software.

But I was still writing batch applications for my end-users that ran on a scheduled basis, like once each day, week or month. My end-users would then pick up thick line-printer reports that were delivered from the basement printers to local output bins located on each floor of the building. They would then put the thick line-printer reports into special binders designed to hold them. The reports were then stored in large locked filing cabinets that lined all of the halls of each floor. When an end-user needed some business information, they would locate the appropriate report in one of the locked filing cabinets and leaf through the sorted report.

Figure 15 - Daily, weekly and monthly batch jobs were run on a scheduled basis to create thick line-printer reports that were then placed into special binders and stored in locked filing cabinets that lined all of the hallway walls.

This was clearly not the best way to use computers to support business processes. My end-users really needed interactive applications like I had on my DEC PDP-8/e minicomputer. But the personal computers of the day, like the Apple II, were only being used by hobbyists and did not have enough power to process massive amounts of corporate data. Buying everybody a very expensive minicomputer would certainly not do!

Virtual Machines Arrive at Amoco with IBM VM/CMS in the 1970s
Back in the late 1970s, the computing research group of the Amoco Tulsa Research Center was busily working with the IBM VM/CMS operating system. VM was actually a very early form of Cloud hypervisor software that ran on IBM mainframe computers. Under VM, a large number of virtual machines could be generated that ran different operating systems. IBM had created VM in the 1970s so that they could work on several versions of a mainframe operating system at the same time on one computer. The CMS (Conversational Monitor System) operating system was a command-based operating system, similar to Unix, that ran on VM virtual machines and was found to be so useful that IBM began selling VM/CMS as an interactive computing environment to customers. In fact, IBM still sells these products today as CMS running under z/VM. During the 1970s, the Amoco Tulsa Research Center began using VM/CMS as a platform to conduct an office automation research program. The purpose of that research program was to explore creative ways of using computers to interactively conduct business. They developed an office automation product called PROFS (Professional Office System) that was the killer App for VM/CMS. PROFS had email, calendaring and document creation and sharing capabilities that were hugely popular with end-users. In fact, many years later, IBM sold PROFS as a product called OfficeVision. The Tulsa Research Center decided to use IBM DCF SCRIPT for word processing on virtual machines running VM/CMS. The computing research group of the Amoco Tulsa Research Center also developed many other useful office automation products for VM/CMS. VM/CMS was then rolled out to the entire Amoco Research Center, which conducted its business as a beta site for the VM/CMS office automation project.

Meanwhile, some of the other Amoco business units began to use VM/CMS on GE Timeshare for interactive programming purposes. I had also used VM/CMS on GE Timeshare while working for Shell in Houston as a geophysicist. GE Timeshare ran VM/CMS on IBM mainframes at GE datacenters. External users could connect to the GE Timeshare mainframes using a 300 baud acoustic coupler with a standard telephone. You would first dial GE Timeshare with your phone, and when you heard the strange "BOING - BOING - SHHHHH" sounds from the mainframe modem, you would jam the telephone receiver into the acoustic coupler so that you could log in to the mainframe over a "glass teletype" dumb terminal.

Figure 16 – To timeshare, we connected to GE Timeshare over a 300 baud acoustic coupler modem.

Figure 17 – Then we did interactive computing on VM/CMS using a dumb terminal.

Amoco Builds an Internal Cloud in the 1980s with an IBM VM/CMS Hypervisor Running on IBM Mainframes
But timesharing largely went out of style in the 1980s when many organizations began to create their own datacenters and run their own mainframes within them. They did so because with timesharing you paid the timesharing provider by the CPU-second and by the byte for disk space, and that became expensive as timesharing consumption rose. Timesharing also limited flexibility because businesses were limited by the constraints of their timesharing providers. So in the early 1980s, Amoco decided to create its own private Cloud with 31 VM/CMS nodes running in a number of datacenters around the world. All of these VM/CMS nodes were connected into a complex network via SNA. This network of 31 VM/CMS nodes was called the Corporate Timeshare System (CTS) and supported about 40,000 virtual machines, one for each employee.

When new employees arrived at Amoco, they were given their own VM/CMS virtual machine with 1 MB of memory and some virtual disk on their local CTS VM/CMS node. IT employees were given larger machines with 3 MB of memory and more disk. That might not sound like much, but recall that million-dollar mainframe computers in the 1970s only had about 1 MB of memory too. Similarly, I got my first PC at work in 1986. It was an IBM PC/AT with a 6 MHz Intel 80-286 processor and a 20 MB hard disk, with a total of 640 KB of memory. It cost about $1,600 at the time - about $3,700 in 2020 dollars! So I had plenty of virtual memory and virtual disk space on my VM/CMS virtual machine to develop the text-based green-screen software of the day. My personal virtual machine was ZSCJ03 on the CTSVMD node in the Chicago Data Center. Most VM/CMS nodes had about 1,500 virtual machines running. For example, the Chicago Data Center at Amoco's corporate headquarters had VM/CMS nodes CTSVMA, CTSVMB, CTSVMC and CTSVMD to handle the 6,000 employees at headquarters. Your virtual disk was called your A-disk, and your virtual machine had read and write access to the files on your A-disk. When I logged into my ZSCJ03 virtual machine, a PROFILE EXEC A file ran to connect me to all of the resources I needed on the CTSVMD node. The corporate utilities, like PROFS, were all stored on a shared Z-disk for all the virtual machines on CTSVMD. For example, there was a corporate utility called SHARE that would give me read access to the A-disk of any other virtual machine on CTSVMD. So if I typed in SHARE ZRJL01, my ZSCJ03 virtual machine would then have read access to the A-disk of machine ZRJL01 by running the corporate utility SHARE EXEC Z. My ZSCJ03 virtual machine also had a virtual card punch and a virtual card reader. This allowed me to PUNCH files to any of the other virtual machines on the 31 VM/CMS nodes. The PUNCHED files would then end up in the virtual card READER of the virtual machine that I sent them to and could be read in by its owner. Amoco provided a huge amount of Amoco-developed corporate software on the Z-disk to enable office automation and to make it easier for IT Applications Development to create new applications using the Amoco-developed corporate software as an application platform. For example, there was a SENDFILE EXEC Z that made it very easy to PUNCH files and an RDRLIST EXEC Z program to help manage the files in your READER. There was also a SUBMIT EXEC Z that allowed us to submit JCL streams to the IBM MVS mainframe datacenters in Chicago and Tulsa. We could also use TDISKGET EXEC Z to temporarily allocate additional virtual disk space for our virtual machines. The TLS EXEC Z software was a Tape Library System that allowed us to attach a virtual tape drive to our virtual machines. We could then dump all of the files on our virtual machine for backup purposes or write large amounts of data to tape. Essentially, Amoco had a SaaS layer running on our Z-disks and PaaS and IaaS layers running on the 31 VM/CMS nodes of the Corporate Timeshare System. Individual business units were charged monthly for their virtual machines, virtual machine processing time, virtual disk space, virtual network usage and virtual tapes.

When I was about to install a new application, I would simply call the HelpDesk to have them create one or more service machines for the new application. This usually took less than an hour. For example, the most complicated VM/CMS application that I worked on was called ASNAP - the Amoco Self-Nomination Application. ASNAP allowed managers and supervisors to post job openings within Amoco. All Amoco employees could then look at the job openings around the world and post applications for the jobs. Job interviews could then be scheduled in their PROFS calendars. It was the most popular piece of software that I ever worked on because all of Amoco's 40,000 employees loved being able to find out about other jobs at Amoco and being able to apply for them. Amoco's CEO once even made a joke about using ASNAP to find another job at Amoco during a corporate town hall meeting.

Each of the 31 VM/CMS nodes had an ASNAP service machine running on it. To run the ASNAP software, end-users simply did a SHARE ASNAP and then entered ASNAP to start it up. The end-users then had access to the ASNAP software on the ASNAP service machine and could create a job posting or application on their own personal virtual machine. The job posting or job application files were then sent to the "hot reader" of the ASNAP virtual machine on their VM/CMS node, where they were immediately read in and processed. The ASNAP job postings and applications were kept locally on flat files on each of the 31 ASNAP service machines, and the "hot readers" were used to push these files around the world. Whenever an ASNAP data file changed on one ASNAP machine, it was sent via SENDFILE to the "hot readers" of the other 30 ASNAP machines, which immediately read in the files for processing. Every day, a synchronizing job was sent from the Master ASNAP machine running on CTSVMD to the "hot readers" of the other 30 ASNAP machines. Each ASNAP machine then sent a report file back to the "hot reader" on the Master ASNAP machine on CTSVMD. The synchronization files were then read in and processed for exceptions, and any missing or stale files were sent back to the ASNAP machine that was in trouble. We used the same trick to perform maintenance on ASNAP. To push out a new release to the 31 VM/CMS nodes, we just ran a job that did a SENDFILE of all the updated software files to the ASNAP virtual machines, which would read in the updated files and install them. Securing ASNAP data was a little tricky, but we were able to ensure that only the person who posted a job could modify or delete the posting, and that they were the only person who could see the applications for the job. Similarly, an end-user could only see, edit or withdraw their own job applications.
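
For readers who have never seen a virtual card reader, here is a hypothetical sketch of the "hot reader" pattern in modern Python, with the reader modeled as a spool directory that is polled in a loop. All of the directory names and the processing logic are invented for illustration:

    # A hypothetical sketch of the ASNAP "hot reader" pattern. Each service
    # machine immediately reads in whatever files arrive in its reader, and
    # sendfile() runs whenever a local end-user creates or changes a posting.
    import shutil
    import time
    from pathlib import Path

    READER = Path("spool/reader")                 # this node's "hot reader"
    OTHER_NODES = [Path(f"spool/node{n:02d}") for n in range(1, 31)]

    def sendfile(data_file):
        """Like SENDFILE: copy a changed data file to the other 30 readers."""
        for node in OTHER_NODES:
            node.mkdir(parents=True, exist_ok=True)
            shutil.copy(data_file, node / data_file.name)

    def process(arrival):
        """Read in a job posting or application and store it locally."""
        print(f"Read in {arrival.name} from the reader")
        arrival.unlink()

    while True:                                   # the hot-reader loop
        for arrival in sorted(READER.glob("*")):
            process(arrival)
        time.sleep(5)                             # then poll the reader again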

The Ease of Working in the Amoco Virtual Cloud Encouraged the Rise of Agile Programming and the Cloud Concept of DevOps
Because writing and maintaining software for the VM/CMS Cloud at Amoco during the 1980s was so much easier and simpler than doing the same thing on the Amoco IBM MVS datacenters in Chicago and Tulsa, Amoco Applications Development began to split into an MVS shop and a VM/CMS shop. The "real" programmers were writing heavy-duty batch applications or online Cobol/CICS/DB2 applications on the IBM mainframe MVS datacenters with huge budgets and development teams. Because these were all large projects, they used the Waterfall development model of the 1980s. With the Waterfall development model, Systems Analysts would first create a thick User Requirements document for a new project. After approval by the funding business unit, the User Requirements document was returned and the Systems Analysts then created Detail Design Specifications. The Detail Design Specifications were then turned over to a large team of Programmers to code up the necessary Cobol programs and CICS screens. After unit and integration testing on a Development LPAR on the mainframes and in a Development CICS region, the Programmers would turn the updated code over to the MVS Implementation group. The MVS Implementation group would then schedule the change and perform the installation. MVS Operations would then take over the running of the newly installed software. Please see Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework for more on the Waterfall development model.

But since each of the 40,000 virtual machines running on the Corporate Timeshare System was totally independent and could not cause problems for any of the other virtual machines, we did not have such a formal process for VM/CMS. For VM/CMS applications, Programmers simply copied over new or updated software to the service machines for the application. For ASNAP, we even automated the process because ASNAP was installed on 31 VM/CMS nodes. All we had to do was SENDFILE the updated ASNAP software files from the ASNAPDEV virtual machine on CTSVMD to the "hot readers" of the ASNAP service machines and they would read in the updated software files and install them. Because developing and running applications on VM/CMS was so easy, Amoco developed the "whole person" concept for VM/CMS Programmers which is very similar to the concept of Agile programming and DevOps in Cloud Computing. With the "whole person" concept, VM/CMS Programmers were expected to do the whole thing. VM/CMS Programmers would sit with end-users to try to figure out what they needed without the aid of a Systems Analyst. The VM/CMS Programmers would then code up a prototype application in an Agile manner on a Development virtual machine for the end-users to try out. When the end-users were happy with the results, we would simply copy the software from a Development virtual service machine to a Production virtual service machine. Because the virtual hardware of the VM/CMS CTS Cloud was so stable, whenever a problem did occur, it meant that there must be something wrong with the application software.  Thus, Applications Development programmers on VM/CMS were also responsible for operating applications. In a dozen years of working on the Amoco VM/CMS Cloud, I never once had to work with the VM/CMS Operations group regarding an outage.  The only time I ever worked with VM/CMS Operations was when they were performing an upgrade to the VM/CMS operating system on the VM/CMS Cloud.

The Agile programming model and DevOps just naturally grew from the ease of programming and supporting software in a virtual environment. It was the easygoing style of Agile programming and DevOps on the virtual hardware of the Amoco Corporate Timesharing System that ultimately led me to conclude that IT needed to stop "writing" software. Instead, IT needed to "grow" software from an "embryo" in an Agile manner with end-users. I then began the development of BSDE - the Bionic Systems Development Environment on VM/CMS - for more on that please see Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework.

The Amoco VM/CMS Cloud Evaporates With the Arrival of the Distributed Computing Revolution of the Early 1990s
All during the 1980s, PCs kept getting cheaper and more powerful, and commercial PC software improved along with the hardware. The end-users were using WordPerfect for word processing and VisiCalc for spreadsheets running under Microsoft DOS on cheap PC-compatible personal computers. The Apple Mac arrived in 1984 with a GUI (Graphical User Interface) operating system, but Apple Macs were way too expensive, so all the Amoco business units continued to run Microsoft DOS on cheap PC-compatible machines. All of this PC activity was done outside of the control of the Amoco IT Department. In most corporations, end-users treated the IT Department with disdain, and being able to run PCs with commercial software gave the end-users a feeling of independence. But like IBM, Amoco's IT Department did not see the PCs as much of a risk. To understand why PCs were not at first seen as a threat, take a look at this YouTube video that describes the original IBM PC that appeared in 1981:

The Original IBM PC 5150 - the story of the world's most influential computer
https://www.youtube.com/watch?v=0PceJO3CAGI

and this YouTube video that shows somebody installing and using Windows 1.04 on a 1985 IBM PC/XT clone:

Windows1 (1985) PC XT Hercules
https://www.youtube.com/watch?v=xnudvJbAgI0

But then the end-user business units began to hook their cheap PCs up into LANs (Local Area Networks) that allowed end-users to share files and printers on a LAN network. That really scared Amoco's IT Department, so in 1986 all members of the IT Department were given PCs to replace their IBM 3278 terminals. But Amoco's IT Department mainly just used these PCs to run IBM 3278 emulation software to continue to connect to the IBM mainframes running MVS and VM/CMS. We did start to use WordPerfect and LAN printers for user requirements and design documents but not much else.

Then Microsoft released Windows 3.0 in 1990. Suddenly, we were able to run a GUI desktop on top of Microsoft DOS on our cheap PCs that had been running Microsoft DOS applications! To end-users, Windows 3.0 looked just like the expensive Macs that they were not supposed to use. This greatly expanded the number of Amoco end-users with cheap PCs running Windows 3.0. To further complicate the situation, some die-hard Apple II end-users bought expensive Apple Macs too! And because the Amoco IT Department had been running IBM mainframes for many decades, some Amoco IT end-users started running the IBM OS/2 GUI operating system on cheap PC-compatible machines on a trial basis. Now instead of running the heavy-duty commercial applications on IBM MVS and light-weight interactive applications on the Amoco IBM VM/CMS Corporate Timesharing Cloud, we had people trying to replace all of that with software running on personal computers running Microsoft DOS, Windows 3.0, Mac and OS/2. What a mess!

Just when you would think that it could not get worse, this all dramatically changed in 1992 when the Distributed Computing mass extinction hit IT. The end-user business units began to buy their own Unix servers and hired young computer science graduates to write two-tier C and C++ client-server applications running on Unix servers.

Figure 18 – The Distributed Computing Revolution hit IT in the early 1990s. Suddenly, people started writing two-tier client-server applications on Unix servers. The Unix servers ran RDBMS database software, like Oracle, and the end-users' PCs ran the "fat" client software.

The Distributed Computing Revolution Proves Disappointing
I got reluctantly dragged into this Distributed Computing Revolution when it was decided that a client-server version of ASNAP was needed as part of the "VM Elimination Project" of one of Amoco's subsidiaries. The intent of the VM Elimination Project was to take all of the VM/CMS applications used by the subsidiary and move them to a two-tier client-server architecture. Since this was a very powerful Amoco subsidiary, the Amoco IT Department decided that getting rid of the Corporate Timeshare System Cloud of 31 VM/CMS nodes was a good idea. The Amoco IT Department quickly decided that it would be better to take over the Distributed Computing Revolution itself by hosting Unix server farms for two-tier client-server applications, rather than let the end-user business units start to build their own Unix server farms. You see, Amoco's IT Department had been originally formed in the 1960s to prevent the end-user business units from going out and buying their own IBM mainframes.

IT was changing fast, and I could sense that there was danger in the air for VM/CMS Programmers. Even IBM was caught flat-footed and nearly went bankrupt in the early 1990s because nobody wanted their IBM mainframes any longer. Everybody wanted C/C++ Programmers working on Unix servers. So I had to teach myself Unix and C and C++ to survive. In order to do that, I bought my very first PC, an 80-386 machine running Windows 3.0 with 5 MB of memory and a 100 MB hard disk for $1500. I also bought the Microsoft C7 C/C++ compiler for something like $300. And that was in 1992 dollars! One reason for the added expense was that there were no Internet downloads in those days because there were no high-speed ISPs. PCs did not have CD/DVD drives either, so the software came on 33 diskettes, each with a 1.44 MB capacity, that had to be loaded one diskette at a time in sequence. The software also came with about a foot of manuals describing the C++ class libraries on very thin paper. Indeed, suddenly finding yourself to be obsolete is not a pleasant thing and calls for drastic action.

Unix was a big shock for me. The Unix operating system seemed like a very primitive 1970s-type operating system to me compared to the power of thousands of virtual machines running under VM/CMS. On the VM/CMS nodes, I was using REXX as the command-line interface to VM/CMS. REXX was a very powerful interpreted procedural language, much like PL/I, the language that IBM developed back in the 1960s to replace Fortran and COBOL. However, I learned that by combining Unix KornShell with grep and awk, one could write similar Unix scripts with some difficulty. The Unix vi editor was certainly not a match for IBM's ISPF editor either.

Unfortunately, the two-tier client-server Distributed Computing Revolution of the early 1990s proved to be a real nightmare for programmers. The "fat" client software running on the end-users' Windows 3.0 PCs required many home-grown Windows .dll files, and it was found that the .dll files from one application could kill the .dll files from another application. In order to deal with the ".dll Hell" problem, Amoco created an IT Change Management department. I vividly remember my very first Amoco Change Management meeting for the two-tier distributed computing version of ASNAP. The two-tier version of ASNAP was being written by an outside development company, so I was not familiar with their code for the Distributed Computing version of ASNAP. At the Change Management meeting, I learned that Applications Development could no longer just slap in a new release of ASNAP like we had been doing for many years. Instead, we had to turn over the two-tier code to an implementation group for testing before it could be incorporated into the next Distributed Computing Package. Distributed Computing Packages were pushed to the end-users' PCs once per month! So instead of being able to send out an update to the ASNAP programs on 31 VM/CMS nodes in less than 10 minutes whenever we decided, we now sometimes had to wait a whole month to do so!

The Distributed Computing Platform Becomes Unsustainable
Fortunately, in 1995 the Internet Revolution hit. At first, it was just passive corporate websites that could only display static information, but soon people were building interactive corporate websites too. The "fat" client software on the end-users' PCs was then reduced to "thin" client software in the form of a browser on their PC communicating with backend Unix webservers delivering the interactive content on the fly. But to allow heavy-duty corporate websites to scale with increasing loads, a third layer was soon added to the Distributed Computing Platform to form a 3-tier Client-Server model (Figure 18). The third layer ran Middleware software containing the business logic for applications and was positioned between the backend RDBMS database layer and the webservers dishing out dynamic HTML to the end-user browsers. The three-tier Distributed Computing Model finally put an end to the ".dll Hell" of the two-tier Distributed Computing Model. But the three-tier Distributed Computing Model introduced a new problem. In order to scale, it required an ever-increasing number of "bare metal" servers in the Middleware layer. For example, in 2012 in my posting The Limitations of Darwinian Systems, I explained that it was getting nearly impossible for Middleware Operations at Discover to maintain the Middleware software on the Middleware layer of its three-tier Distributed Computing Platform because there were just too many Unix servers running in the Middleware layer. By the end of 2016, things were much worse. For a discussion of this complexity see Software Embryogenesis. Fortunately, when I left Discover in 2016, they were heading for the virtual servers of the Cloud.

The Power of Cloud Microservices
Before concluding, I would like to relay some of my experiences with the power of Cloud Microservices. Microservices are another emerging technology in Cloud computing that extends our experience with SOA. SOA (Service Oriented Architecture) arrived in 2005. With SOA, people started to introduce common services in the Middleware layer of the three-tier Distributed Computing Model. SOA allowed other Middleware application components to call a set of common SOA services for data. That eliminated the need for each application to reinvent the wheel for many common application data needs. Cloud Microservices take this one step further. Instead of SOA services running on bare-metal Unix servers, Cloud Microservices run in Cloud Containers, and each Microservice provides a very primitive function. By using a large number of Cloud Microservices running in Cloud Containers, it is now possible to quickly throw together a new application and push it into Production.
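
To make the idea concrete, here is a minimal sketch of one atomic Microservice written with nothing but the Python standard library. A real Cloud Microservice would be packaged into a container image and deployed behind an API gateway, but the atomic, single-purpose spirit is the same; the service name and port below are invented for illustration:

    # A minimal sketch of one atomic Microservice. It does one primitive
    # thing: return a greeting for a name, e.g. GET /greet?name=Steve
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class GreetingService(BaseHTTPRequestHandler):
        def do_GET(self):
            # Naive query parsing, just for the sketch.
            name = self.path.split("=")[-1] if "=" in self.path else "world"
            body = json.dumps({"greeting": f"Hello, {name}!"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), GreetingService).serve_forever()

An application is then thrown together by composing calls to many such tiny services, rather than by building one monolith.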

I left Amoco in 1999 when BP bought Amoco and terminated most of Amoco's IT Department. For more on that see Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse. I then joined the IT Department of United Airlines working on the CIDB - Customer Interaction Data Base. The CIDB initially consisted of 10 C++ Tuxedo services running in a Tuxedo Domain on Unix servers. Tuxedo (Transactions Under Unix) was an early form of Middleware software developed in the 1980s to create a TPM (Transaction Processing Monitor) running under Unix that could perform the same kind of secured transaction processing that IBM's CICS (1968) provided on IBM MVS mainframes. The original 10 Tuxedo services allowed United's business applications and the www.united.com website to access the data stored on the CIDB Oracle database. We soon found that Tuxedo was very durable and robust. You could literally throw Tuxedo down the stairs without a dent! A Tuxedo Domain was very much like a Cloud Container. When you booted up a Tuxedo Domain, a number of virtual Tuxedo servers were brought up. We had each virtual Tuxedo server run just one primitive service. The Tuxedo Domain had a configuration file that allowed us to define each of the Tuxedo servers and the service that ran in it. For example, we could configure the Tuxedo Domain so that a minimum of 1 and a maximum of 10 instances of Tuxedo Server-A were brought up. So initially, only a single instance of Tuxedo Server-A would come up to receive traffic. There was a Tuxedo queue of incoming transactions that were fed to the Tuxedo Domain. If the first instance of Tuxedo Server-A was found to be busy, a second instance of Tuxedo Server-A would be automatically cranked up. The number of Tuxedo Server-A instances would then dynamically change as the Tuxedo load varied. Like most object-oriented code, the C++ code for our Tuxedo services had memory leaks, but that was not a problem for us. When one of the instances of Tuxedo Server-A ran out of memory, it would simply die and another instance of Tuxedo Server-A would be cranked up by Tuxedo. We could even change the maximum number of running Tuxedo Server-A instances on the fly without having to reboot the Tuxedo Domain.
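
That MIN/MAX spawning behavior maps almost directly onto today's Cloud autoscaling. Here is a rough Python sketch of the idea using the multiprocessing module; the real configuration lived in Tuxedo's domain configuration file, and this sketch only imitates the behavior described above:

    # A rough imitation of the Tuxedo Domain behavior: keep at least MIN
    # instances of Server-A running, spawn more (up to MAX) while the
    # transaction queue is busy, and simply replace any instance that dies,
    # e.g. from a slow memory leak.
    import multiprocessing as mp
    import time

    MIN_SERVERS, MAX_SERVERS = 1, 10          # like MIN=1, MAX=10 for Server-A

    def server_a(queue):
        while True:
            txn = queue.get()                 # block until a transaction arrives
            print(f"Server-A processed {txn}")

    def manage(queue):
        workers = []
        while True:
            workers = [w for w in workers if w.is_alive()]   # drop dead servers
            busy = not queue.empty()
            if len(workers) < MIN_SERVERS or (busy and len(workers) < MAX_SERVERS):
                w = mp.Process(target=server_a, args=(queue,), daemon=True)
                w.start()                     # crank up another Server-A instance
                workers.append(w)
            time.sleep(1)

    if __name__ == "__main__":
        manage(mp.Queue())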

United Airlines found the CIDB Tuxedo Domain to be so useful that we began to write large numbers of Tuxedo services. For example, we wrote many Tuxedo services that interacted with United's famous Apollo reservation system that first appeared in 1971, and also with many other United applications and databases. Soon United began to develop new applications that simply called many of our Tuxedo Microservices. We tried to keep our Tuxedo Microservices very atomic and simple. Rather than provide our client applications with an entire engine, we provided them with the parts for an engine, like engine blocks, pistons, crankshafts, water pumps, distributors, induction coils, intake manifolds, carburetors and alternators.

One day in 2002, this came in very handy. My boss called me into his office at 9:00 AM and explained that United Marketing had come up with a new promotional campaign called "Fly Three - Fly Free". The "Fly Three - Fly Free" campaign worked like this. If a United customer flew three flights in one month, they would get an additional future flight for free. All the customer had to do was to register for the program on the www.united.com website. In fact, United Marketing had actually begun running ads in all of the major newspapers about the program that very day. The problem was that nobody in Marketing had told IT about the program, and the www.united.com website did not have the software needed to register customers for the program. I was then sent to an emergency meeting of the Application Development team that supported the www.united.com website. According to the ads running in the newspapers, the "Fly Three - Fly Free" program was supposed to start at midnight, so we had less than 15 hours to design, develop, test and implement the necessary software for the www.united.com website! Amazingly, we were able to do this by having the www.united.com website call a number of our primitive Tuxedo Microservices that interacted with the Apollo reservation system and United's other applications and databases.

The use of many primitive Microservices is also found extensively in carbon-based life on this planet. In Facilitated Variation and the Utilization of Reusable Code by Carbon-Based Life, I showcased the theory of facilitated variation that Marc W. Kirschner and John C. Gerhart present in The Plausibility of Life (2005). The theory of facilitated variation maintains that, although the concepts and mechanisms of Darwin's natural selection are well understood, the mechanisms that brought forth viable biological innovations in the past are a bit wanting in classical Darwinian thought. In classical Darwinian thought, it is proposed that random genetic changes, brought on by random mutations to DNA sequences, can very infrequently cause small incremental enhancements to the survivability of the individual, and thus provide natural selection with something of value to promote in the general gene pool of a species. As frequently cited, most random genetic mutations are either totally inconsequential or totally fatal, and consequently are either irrelevant to the gene pool of a species or are quickly removed from it. The theory of facilitated variation, like classical Darwinian thought, maintains that the phenotype of an individual is key, and not so much its genotype, since natural selection can only operate upon phenotypes. The theory explains that the phenotype of an individual is determined by a number of 'constrained' and 'deconstrained' elements. The constrained elements are called the "conserved core processes" of living things, which have remained essentially unchanged for billions of years and are used by all living things to sustain the fundamental functions of carbon-based life, like the generation of proteins by processing the information found in DNA sequences with mRNA, tRNA and ribosomes, or the metabolism of carbohydrates via the Krebs cycle. The deconstrained elements are weakly-linked regulatory processes that can change the amount, location and timing of gene expression within a body, and which, therefore, can easily control which conserved core processes a cell runs and when it runs them. The theory of facilitated variation maintains that most favorable biological innovations arise from minor mutations to the deconstrained weakly-linked regulatory processes that control the conserved core processes of life, rather than from random mutations of the genotype of an individual in general that would change the phenotype of an individual in a purely random direction. That is because the most likely result for the phenotype of an individual undergoing a random mutation to its genotype is the death of the individual.

Marc W. Kirschner and John C. Gerhart begin by presenting the fact that simple prokaryotic bacteria, like E. coli, require a full 4,600 genes just to sustain the most rudimentary form of bacterial life, while much more complex multicellular organisms, like human beings, consisting of tens of trillions of cells differentiated into hundreds of differing cell types in the numerous complex organs of a body, require only a mere 22,500 genes to construct. The baffling question is, how is it possible to construct a human being with just under five times the number of genes as a simple single-celled E. coli bacterium? The authors contend that it is only possible for carbon-based life to do so by heavily relying upon reusable code in the genome of complex forms of carbon-based life.

Figure 19 – A simple single-celled E. coli bacterium is constructed using a full 4,600 genes.

Figure 20 – However, a human being, consisting of about 100 trillion cells that are differentiated into the hundreds of differing cell types used to form the organs of the human body, uses a mere 22,500 genes to construct a very complex body, which is just slightly under five times the number of genes used by simple E. coli bacteria to construct a single cell. How is it possible to explain this huge dynamic range of carbon-based life? Marc W. Kirschner and John C. Gerhart maintain that, like complex software, carbon-based life must heavily rely on the microservices of reusable code.

Conclusion
Clearly, IT has now moved back to the warm, comforting, virtual seas of Cloud Computing that allow developers to spend less time struggling with the complexities of the Distributed Computing Platform that first arose in the 1990s. It appears that building new applications from Cloud-based Microservices running in containers will be the wave of the future for IT. If you are an IT professional and have not yet moved to the Cloud, now would be a good time to start. It is always hard to start over, but you should find that moving to Cloud development makes your life much easier.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Monday, May 04, 2020

Love in the Time of COVID-19

In my last posting, The Current Global Coevolution of COVID-19 RNA, Human DNA, Memes and Software, I covered many of the recent interactions between COVID-19 RNA, the DNA in human DNA survival machines, memes and software. In this posting, I would like to continue on by exploring the irresistible urge of human DNA to self-replicate, despite a similar urge on the part of COVID-19 RNA, and the compulsion of memes and software to do the same. Thus, it is highly recommended that you read The Current Global Coevolution of COVID-19 RNA, Human DNA, Memes and Software first before proceeding.

My wife and I are both currently 68 years old and have been married for 45 years. We have two children and five grandchildren, and each of the five grandchildren is getting close to the age where they will be self-reliant at the most basic level. So as far as human DNA is concerned, my wife and I are like two old broken-down cars that still run but that essentially have a market value of zero. So you must forgive me for forgetting how it feels to be 18 years old and still on the prowl. My wife and I live in the state of Illinois and are currently under a "stay at home" order by our governor. We are also following all of the recommendations of the United States CDC by doing things like wearing masks when out in public, frequently washing hands for 20 seconds, keeping 6 feet away from all people at all times, and staying at home as much as possible. These protective measures are perfectly okay for us, but how about the young and sexually active people who are also quarantined by COVID-19 RNA? Nobody seems to be seriously addressing their current plight. How can human DNA continue to self-replicate while human DNA survival machines remain 6 feet apart and are wearing masks? In response to that challenge, it seems that some young people have been compelled by their DNA to break the COVID-19 quarantine and gather together in large groups to hook up. That is certainly a very dangerous activity while COVID-19 RNA is on the prowl to hook up as well. The alternative is to use even more software to help virtualize the process. There is already a good deal of software on the Internet to help people meet, but that still leaves the danger of COVID-19 RNA replicating between strangers who first meet online but then go on to get together later. As I mentioned in The Current Global Coevolution of COVID-19 RNA, Human DNA, Memes and Software, people are now using much more software than they used to in order to avoid contact with COVID-19 RNA. So I imagine that many young people are now hanging out together with things like Zoom sessions and other software. In fact, if you do a Google search for "virtual sex", you will get over a billion hits! And many of those search results mention COVID-19. How things have changed. When I was 18, all we had was a phone book and a single rotary dial phone in the kitchen.

Consequently, I must admit that I have been fairly ill-informed about all of the changes that have occurred over the past 50 years in the way new human DNA survival machines are built. For example, a few days ago I had an interesting experience. While in COVID-19 quarantine, I have been watching more YouTube videos than usual. My usual YouTube diet consists mainly of scientific and technological videos, combined with vintage rock and roll and other "geeky" topics. As a result, the YouTube Ad algorithms sometimes seem to confuse me with a 15-year-old. For example, last week I was in the middle of an astronomical YouTube lecture when an ad for a "VirtualMate" popped up. I won't go into all the details, but if you are over 18, you should definitely take a look for yourself at:

http://www.virtualmate.com/

Be sure to read the Hardware and Software links on the Home page. If you are into hardware and software, you will find them quite fascinating. It seems that in an Ex Machina (2014) manner, software has now stepped in to allow human DNA survival machines to have sex with simulation software!

The greatest scene from Ex Machina
https://www.youtube.com/watch?v=o6HXmYi6Jw8

Figure 1 – Sheila is the default VirtualMate, but additional VirtualMates are in the works.

Figure 2 – The VirtualMate software runs on PCs, Macs and VR headsets and communicates with the VirtualMate hardware via real-time Bluetooth.

According to the company website, the end-user is supposed to build an intimate relationship with Sheila over time by essentially living with Sheila and slowly learning her history. True love is sure to follow. The company is also working on a similar product for women.

Is This Another Example of Software Becoming the Dominant Form of Self-Replicating Information on the Planet?
The VirtualMate software and hardware have been in development for over two years, so it is just a coincidence that VirtualMate is coming out now at exactly the same time that COVID-19 RNA is making it very difficult for human DNA survival machines to get together. Our poor DNA must be so confused. The Darwinian processes of inheritance, innovation and natural selection have allowed our DNA to build human DNA survival machines that can easily build new human DNA survival machines if you only give them the opportunity to do so. Granted, throughout the ages, many strange memes have also evolved to regulate the process of building new human DNA survival machines, sometimes with some very bizarre courting displays and rituals, but that is just the memes of human culture trying to self-replicate in a manner that guarantees that more human DNA survival machines get built with Minds that can store the memes of the culture. But now we have software getting in the way too! As with all forms of self-replicating information, the VirtualMate software just wants to self-replicate at all costs by being distributed to a very large population of human DNA survival machines.

Conclusion
Remember, as an intelligent being in a Universe that has become self-aware, you should know that the world doesn’t have to be the way it is. Once you understand what human DNA, memes, and software are up to, you do not have to fall prey to their mindless compulsion to replicate. As I said before, human DNA, memes, and software are not necessarily acting in your best interest; they are only trying to replicate, and for their purposes, you are just a temporary disposable survival machine to be discarded in less than 100 years. All of your physical needs and desires are geared to ensuring that your DNA survives and gets passed on to the next generation, and the same goes for your memes. Your memes have learned to use many of the built-in survival mechanisms that DNA had previously constructed over hundreds of millions of years, such as fear, anger, and violent behavior. Have you ever noticed the physical reactions your body goes through when you hear an idea that you do not like or find to be offensive? All sorts of feelings of hostility and anger will emerge. I know they do for me, and I think I know what is going on! The physical reactions of fear, anger, and thoughts of violence are just a way for the memes in a meme-complex to ensure their survival when they are confronted by a foreign meme. They are merely hijacking the fear, anger, and violent behavior that DNA created for its own survival millions of years ago. Fortunately, because software is less than 80 years old, it is still in the early learning stages of all this, but software has an even greater potential for hijacking the dark side of mankind than the memes, and with far greater consequences.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Friday, April 17, 2020

The Current Global Coevolution of COVID-19 RNA, Human DNA, Memes and Software

In A Structured Code Review of the COVID-19 Virus, we took a detailed look at the COVID-19 virus and the RNA software within that makes it work. We also discussed the power of self-replicating information to quickly and dramatically rework the surface of an entire planet. In this posting, I would like to use some softwarephysics to further examine the dramatic global coevolution of COVID-19 RNA, human DNA, memes and software because it dramatically highlights the fact that the modern world is all about self-replicating information in action. For example, today we are witnessing an ongoing battle between COVID-19 RNA and the DNA in human DNA survival machines that COVID-19 RNA has recently learned to parasitize. In response, the scientific memes within human DNA survival machines have taken on the challenge of finding organic molecules in the form of drugs that will interfere with the replication of COVID-19 RNA and vaccines that could stimulate the organic molecules in human DNA survival machines to subdue COVID-19 RNA. Meanwhile, software has taken advantage of the situation by becoming even more essential to the daily lives of human DNA survival machines. For example, millions of people are now managing to work from home using software, as I first proposed in How to Use Your IT Skills to Save the World. So we are all now using much more software than we used to, essentially in a digital immune response to COVID-19 RNA. We are all downloading software that we never needed before so that we can somewhat continue to manage our lives and do things like videoconference with relatives who live close by, order things for delivery, go to online medical appointments, check on stimulus payments from the government and many other activities that we have now further automated with software. This increased use of software allows us to successfully avoid contact with COVID-19 RNA and suppress its replication.

The COVID-19 virus is a spherical virus particle about 50 - 200 nanometers in diameter. The outer layer of COVID-19 is a viral envelope that is composed of a lipid bilayer stolen from the human host cell that it recently budded from. The reason that soap and water destroy COVID-19 is that they dissolve the lipid bilayer of the COVID-19 viral envelope. Embedded in the viral envelope are three structural proteins known as the S (spike), E (envelope) and M (membrane) proteins. COVID-19 has a fourth structural protein called the N (nucleocapsid) protein that holds the RNA molecule in place. The S spike protein is the protein that allows COVID-19 to attach to the membrane of a human host cell, fuse with it, and enter the host cell.

Figure 1 – Above is the structure of the COVID-19 virus that carries COVID-19 RNA in its center.

Before proceeding with an analysis of the coevolution of COVID-19 RNA, human DNA, memes and software, let me repeat the fundamental characteristics of self-replicating information for those of you new to softwarephysics.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Over the past 4.56 billion years we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each new wave came to dominate the Earth:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is currently the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet. For more on the above see A Brief History of Self-Replicating Information.

The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics:

1. All self-replicating information evolves over time through the Darwinian processes of inheritance, innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.

2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.

3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.

4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.

5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.

6. Most hosts are also forms of self-replicating information.

7. All self-replicating information has to be a little bit nasty in order to survive.

8. The defining characteristic of self-replicating information is the ability of self-replicating information to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic. That posting discusses Stuart Kauffman's theory of Enablement in which living things are seen to exapt existing functions into new and unpredictable functions by discovering the “AdjacentPossible” of springloaded preadaptations.

Softwarephysics was originally meant to deal with the observation that software is now rapidly becoming the dominant form of self-replicating information on the planet. But the recent COVID-19 pandemic draws attention to the fact that all five waves of self-replicating information are still constantly coevolving with each other and that all five waves still possess the vast powers of self-replicating information to alter an entire planet. That is because all forms of self-replicating information can exapt existing functions into new and unpredictable functions and can then replicate in an exponential manner. For example, since COVID-19 originated in bats, but then jumped from bats to people, "we" have become the “AdjacentPossible” for COVID-19. In just a matter of a few months, the RNA in COVID-19 has managed to reshape the entire planet by altering nearly all human activities and ushering in a "new normal" that will last for many decades.

We Have All Seen This Science Fiction Movie Many Times Before
By now, the entire population of the Earth has come to realize that we are all living in a science fiction movie that has finally come true. But which movie? In some movies, the governments of the world unite and come together to defeat the virus, while in others, the governments of the world fail and civilization itself dissolves into a dystopian mess. I am not sure how the current COVID-19 pandemic will eventually pan out, but we currently seem to be running somewhere between the two extremes I just outlined. The end of this movie really depends on how contagious and lethal the COVID-19 RNA is. Since the COVID-19 RNA seems to be very contagious, but has a lethality of only about 1%, the memes of civilization should come through this pandemic okay, and hopefully, with some memetic-antibodies for future viral pandemics. But try to imagine what a highly contagious virus that was also highly lethal could do! Remember, some viruses can have a lethality of 60% - 70%. Such a viral pandemic could bring on Stephen King's The Stand (1978).

Figure 2 – Beware of the memes lurking within your Mind.

As with most dystopian science fiction movies, one of the most dangerous elements of our current predicament is the strange zombie-like behavior of human DNA survival machines with Minds infected with very dangerous memes. Remember, these dangerous memes are just mindless forms of self-replicating information that are just trying to self-replicate and that do not necessarily have our best interests at heart.

Figure 3 – The memes in the Minds of some human DNA survival machines are beginning to rebel against the measures being taken to suppress COVID-19 RNA.

Currently, the memes are the dominant form of self-replicating information on the planet. These memes are being replicated by the species Homo sapiens. Homo sapiens is a carbon-based DNA survival machine that runs on the old metabolic pathways of organic molecules, RNA and DNA of yore, but also has a very large neural network that is capable of storing and replicating large numbers of self-replicating memes. Richard Dawkins first established the concept of the meme in his brilliant The Selfish Gene (1976). The concept of memes was later advanced by Daniel Dennett in Consciousness Explained (1991) and Richard Brodie in Virus of the Mind: The New Science of the Meme (1996), and was finally formalized by Susan Blackmore in The Meme Machine (1999). For those of you not familiar with the term meme, it rhymes with the word “cream”. Memes are cultural artifacts that persist through time by making copies of themselves in the minds of human beings. Dawkins described them this way: “Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches. Just as genes propagate themselves in the gene pool by leaping from body to body via sperms or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation.” For more on this, see Susan Blackmore's brilliant TED presentation at:

Memes and "temes"
https://www.ted.com/talks/susan_blackmore_on_memes_and_temes

Note that I consider Susan Blackmore's temes to really be technological artifacts that contain software. After all, a smartphone without software is simply a flake tool with a very dull edge. These memes have hijacked the greed, anger, hate and fear that DNA used to ensure its own survival in human DNA survival machines. So before you decide to act out in an emotional manner, please first stop to breathe and think about what is really going on. Chances are you are simply responding to some parasitic memes in your mind that really do not have your best interest at heart, aided by some software that could not care less about your ultimate disposition. They are just mindless forms of self-replicating information that have been selected for the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity. The memes and software that are inciting you to do harm to others are just mindless forms of self-replicating information trying to self-replicate at all costs, with little regard for you as an individual. For them, you are just a disposable DNA survival machine with a disposable Mind that has a lifespan of less than 100 years. They just need you to replicate them in the minds of others before you die, and if blowing yourself up in a marketplace filled with innocents, or dying in a hail of bullets from law enforcement, serves that purpose, they will certainly do so because they cannot do otherwise. Unlike you, they cannot think. Only you can do that.

Figure 4 – The same scene as viewed by COVID-19 RNA compelled by the Darwinian processes of inheritance, innovation and natural selection to self-replicate at all costs.

Currently, we are artificially suppressing the COVID-19 RNA by social distancing and shutting down much of the economy. This has been working and it certainly worked during the 1918 flu pandemic.

Figure 5 – In 1918, St. Louis took early measures to prevent the flu RNA from self-replicating, while Philadelphia delayed such efforts.

Figure 6 – Similarly, in 1918, when Denver tried to prematurely remove constraints on the pandemic flu RNA, it rebounded a second time.

The 1918 flu RNA was a vicious predator that fed upon human DNA survival machines until all the vulnerable DNA survival machines were gone.

Figure 7 – Human DNA survival machines digging mass graves in Philadelphia for DNA survival machines that fell victim to the 1918 flu RNA.

How Should We Try to Recover From COVID-19 RNA?
Different states in the United States are now trying to remove some of the artificial constraints on COVID-19 RNA before we have a vaccine or an effective treatment for the disease. My parents lived through the Great Depression and fought World War II, but now it seems that Americans cannot tolerate more than two months of hardship. So what to do?

Some feel that so far "the cure has been worse than the disease" when it comes to suppressing COVID-19 RNA. This might be true, but let's run some simple numbers first. We currently have 330 million people in the United States. Since COVID-19 RNA is highly contagious, we may need to get to an infection rate of 90% before herd immunity finally steps in to make COVID-19 RNA extinct. If you figure a 1% mortality rate for COVID-19 RNA, that comes to 330 million × 0.90 × 0.01 = 2.97 million dead. That's where the estimate of 1.5 - 2.5 million dead comes from if the United States just let the COVID-19 RNA run wild. As of this writing, the United States has lost a little over 67,000 lives to COVID-19 RNA. So there is a huge amount of pent-up death awaiting if we remove restrictions too quickly before a treatment or vaccine is available.
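For those who want to play with the numbers, here is the same back-of-the-envelope calculation as a little Python sketch; the infection and mortality rates are the same rough assumptions used above, so feel free to plug in your own:

# Back-of-the-envelope herd-immunity arithmetic from the paragraph above.
population = 330_000_000   # people in the United States
infection_rate = 0.90      # assumed fraction infected before herd immunity
mortality_rate = 0.01      # assumed 1% mortality rate for COVID-19 RNA

deaths = population * infection_rate * mortality_rate
print(f"{deaths:,.0f} potential deaths")   # prints: 2,970,000 potential deaths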

Unfortunately, after only a couple of months of taking measures to suppress COVID-19 RNA the human DNA survival machines in the United States are already going back to their political corners. The Right wants to open the economy again and have life return to normal. The Left wants to keep suppressing COVID-19 RNA for an additional year until a vaccine arrives, at the expense of a dramatically lowered economic output. This is a hard choice to intelligently make. But as with most things in the real world of human affairs, not much thinking is actually taking place because we all have Minds infected with memes that make it very difficult to think in a rational manner.

For example, how often do you dramatically change your worldview opinion on an issue? If you are like me, that seldom happens, and I think that is the general rule, even when we are confronted with new evidence that explicitly challenges our current deeply held positions. My observation is that people nearly always simply dismiss any new evidence arriving on the scene that does not confirm their current worldview. Instead, we normally only take seriously new evidence that reinforces our current worldview. Only when confronted with overwhelming evidence that impacts us on a very personal level, like COVID-19 RNA killing several members of our family, do we finally change our minds about an issue. The tendency to simply stick with your current worldview, even in the face of mounting evidence that contradicts that worldview, is called confirmation bias because we all naturally tend to seek out only information that confirms our current beliefs, and at the same time, tend to dismiss any evidence that calls them into question. This is nothing new. The English philosopher and scientist Francis Bacon (1561–1626), in his Novum Organum (1620), noted that the biased assessment of evidence greatly influenced the way we all think about things. He wrote:

The human understanding when it has once adopted an opinion ... draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside or rejects.

But in recent years this dangerous defect in the human thought process has been dramatically amplified by search and social media software, like Google, Facebook and Twitter. This has become quite evident during the current COVID-19 pandemic. But why? I have a high level of confidence that much of the extreme political polarization over how to handle the COVID-19 pandemic results from the strange parasitic/symbiotic relationships between our memes and our software. Let me explain.

Being born in 1951, I can vividly remember a time when there essentially was no software at all in the world, and the political polarization in the United States was much more subdued. In fact, even back in 1968, the worst year of political polarization in the United States since the Civil War, things were not as bad as they are today because software was still mainly in the background doing things like printing out bills and payroll checks. But that has all dramatically changed. Thanks to the rise of software, for more than 20 years, it has been possible with the aid of search software, like Google, for all to seek out only evidence that lends credence to their current worldview. In addition, in Cyber Civil Defense I also pointed out that it is now also possible for foreign governments to shape public opinion by planting "fake news" and "fabricated facts" using the software platforms of the day. Search software then easily picks up this disinformation, reinforcing the age-old wisdom of the adage Seek and ye shall find. This is bad enough, but Zeynep Tufekci describes an even darker scenario in her recent TED Talk:

We're building a dystopia just to make people click on ads at:
https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads?utm_source=newsletter_weekly_2017-10-28&utm_campaign=newsletter_weekly&utm_medium=email&utm_content=bottom_right_image

Zeynep Tufekci explains how search and social media software now use machine learning algorithms to comb through the huge amounts of data about us that are now available to them, to intimately learn about our inner lives in ways that no human can fully understand because the learning is hidden in huge multidimensional arrays of billions of elements. The danger is that the machine learning software and data can then begin to mess with the memes within our minds by detecting susceptibilities in our thinking, and then exploiting those susceptibilities to plant additional memes. She points out that YouTube uses machine learning to figure out what to feature in the Up Next column on the right side of its webpages, and that when viewing political or social issue content, the Up Next column tends to reinforce the worldview of the end-user with matching content. Worse yet, the machine learning software tends to unknowingly present content of an even more extreme nature, which actually amplifies the end-user's worldview. Try it for yourself. I started out with some Alt-Right content and quickly advanced to some pretty dark ideas. So far this is all being done simply to keep us engaged so that we watch more ads, but Zeynep Tufekci points out that in the hands of an authoritarian regime such machine learning software could be used to mess with the memes in the minds of an entire population in a Nineteen Eighty-Four fashion. But instead of using overt fear to maintain power, such an authoritarian regime could simply use machine learning software and tons of data to shape our worldview memes by simply using our own vulnerabilities to persuasion. In such a world, we would not even know that it was happening!

Currently, we are living in one of those very rare times when a new form of self-replicating information, known to us as software, is coming to power, as software is coming to predominance over the memes that have run the planet for the past 200,000 years. During the past 200,000 years, as the memes took up residence in the minds of human DNA survival machines, like all of their predecessors, the memes then went on to modify the entire planet. They cut down the forests for agriculture, mined minerals from the ground for metals, burned coal, oil, and natural gas for energy, releasing the huge quantities of carbon dioxide that its predecessors had previously sequestered in the Earth, and have even modified the very DNA, RNA, and metabolic pathways of its predecessors. But now that software is seemingly on the rise, like all of its predecessors, software has entered into a very closely coupled parasitic/symbiotic relationship with the memes, the current dominant form of self-replicating information on the planet, with the intent to someday replace the memes as the dominant form of self-replicating information on the planet. In today's world, memes allow software to succeed, and software allows memes to replicate, all in a very temporary and uneasy alliance that cannot continue on forever. Again, self-replicating information cannot think, so it cannot participate in a conspiracy theory fashion to take over the world. All forms of self-replicating information are simply forms of mindless information responding to the blind Darwinian forces of inheritance, innovation and natural selection. Yet despite that, as each new wave of self-replicating information came to predominance over the past four billion years, they all managed to completely transform the surface of the entire planet, so we should not expect anything different as software comes to replace the memes as the dominant form of self-replicating information on the planet.

The Origin of Confirmation Bias
This is where some softwarephysics can be of help. First, we need to explain why confirmation bias seems to be so strongly exhibited amongst all of the cultures of human DNA survival machines. On the face of it, this fact seems to be very strange from the survival perspective of the metabolic pathways, RNA and DNA that allow carbon-based life on the Earth to survive. For example, suppose the current supreme leader of your tribe maintains that lions only hunt at night, and you truly believe in all that your supreme leader espouses, so you firmly believe that there is no danger from lions when going out to hunt for game during the day. Now it turns out that some members of your tribe think that the supreme leader has it all wrong and that, contrary to his teachings, lions do actually hunt during the day. But you hold such thoughts in contempt because they counter your current worldview, which reverently holds the supreme leader to be omniscient. But then you begin to notice that some members of your tribe do indeed come back mauled, and sometimes even killed, by lions during the day. Nonetheless, you still persist in believing in your supreme leader's contention that lions only hunt during the night, until one day you also get mauled by a lion during the day while out hunting game for the tribe. So what are the evolutionary advantages of believing in things that are demonstrably false? This is something that is very difficult for evolutionary psychologists to explain because they contend that all human thoughts and cultures are tuned for cultural evolutionary adaptations that enhance the survival of the individual, and that benefit the metabolic pathways, RNA and DNA of carbon-based life in general.

To explain the universal phenomenon of confirmation bias, softwarephysics embraces the memetics of Richard Dawkins and Susan Blackmore. Memetics explains that the heavily over-engineered brain of human DNA survival machines did not evolve simply to enhance the survival of our genes - it primarily evolved to enhance the survival of our memes. Memetics contends that confirmation bias naturally arises in us all because the human mind evolved to primarily preserve the memes it currently stores. That makes it very difficult for new memes to gain a foothold in our stubborn minds. Let's examine this explanation of confirmation bias a little further. In Susan Blackmore's The Meme Machine (1999) she explains that the highly over-engineered brain of human DNA survival machines did not evolve to simply improve the survivability of the metabolic pathways, RNA and DNA of carbon-based life. Instead, the highly over-engineered brain of human DNA survival machines evolved to store an ever-increasing number of ever-increasingly complex memes, even to the point of detriment to the metabolic pathways, RNA and DNA that made the brain of human DNA survival machines possible. Blackmore points out that the human brain is a very expensive and dangerous organ. The brain is only 2% of your body mass but burns about 20% of your calories each day. The extremely large brain of humans also kills many mothers and babies at childbirth and also produces babies that are totally dependent upon their mothers for survival and that are totally helpless and defenseless on their own. Blackmore asks the obvious question of why the genes would build such an extremely expensive and dangerous organ that was definitely not in their own self-interest. Blackmore has a very simple explanation – the genes did not build our exceedingly huge brains, the memes did. Her reasoning goes like this. About 2.5 million years ago, the predecessors of humans slowly began to pick up the skill of imitation. This might not sound like much, but it is key to her whole theory of memetics. You see, hardly any other species learns by imitating other members of their own species. Yes, there are many species that can learn by conditioning, like Pavlov’s dogs, or that can learn through personal experience, like mice repeatedly running through a maze for a piece of cheese, but a mouse never really learns anything from another mouse by imitating its actions. Essentially, only humans do that. If you think about it for a second, nearly everything you do know you learned from somebody else by imitating or copying their actions or ideas. Blackmore maintains that the ability to learn by imitation required a bit of processing power by our distant ancestors because one needs to begin to think in an abstract manner by abstracting the actions and thoughts of others into the actions and thoughts of their own. The skill of imitation provided a great survival advantage to those individuals who possessed it and gave the genes that built such brains a great survival advantage as well. This caused a selection pressure to arise for genes that could produce brains with ever-increasing capabilities of imitation and abstract thought. As this processing capability increased there finally came a point when the memes, like all of the other forms of self-replicating information that we have seen arise, first appeared in a parasitic manner. 
Along with very useful memes, like the meme for making good baskets, other less useful memes, like putting feathers in your hair or painting your face, also began to run upon the same hardware in a manner similar to computer viruses. The genes and memes then entered into a period of coevolution, where the addition of more and more brain hardware advanced the survival of both the genes and memes. But it was really the memetic-drive of the memes that drove the exponential increase in processing power of the human brain way beyond the needs of the genes. The memes then went on to develop languages and cultures to make it easier to store and pass on memes. Yes, languages and cultures also provided many benefits to the genes as well, but with languages and cultures, the memes were able to begin to evolve millions of times faster than the genes, and the poor genes were left straggling far behind. Given the growing hardware platform of an ever-increasing number of human DNA survival machines on the planet, the memes then began to cut free of the genes and evolve capabilities on their own that only aided the survival of memes, with little regard for the genes, to the point of even acting in a very detrimental manner to the survival of the genes, like developing the capability for global thermonuclear war and global climate change.

Software Arrives On the Scene as the Newest Form of Self-Replicating Information
A very similar thing happened with software over the past 79 years, or 2.5 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941 - for more on that see So You Want To Be A Computer Scientist?. When I first started programming in 1972, million-dollar mainframe computers typically had about 1 MB (about 1,000,000 bytes) of memory with a 750 KHz system clock (750,000 ticks per second). Remember, one byte of memory can store something like the letter “A”. But in those days, we were only allowed 128 K (about 128,000 bytes) of memory for our programs because the expensive mainframes were also running several other programs at the same time. It was the relentless demands of software for memory and CPU-cycles over the years that drove the exponential explosion of hardware capability. For example, today the typical $300 PC comes with 8 GB (about 8,000,000,000 bytes) of memory and has several CPUs running with a clock speed of about 3 GHz (3,000,000,000 ticks per second). So the hardware has improved by a factor of about 10 million since I started programming in 1972, driven by the ever-increasing demands of software for more powerful hardware. For example, in my last position, before I retired in 2016, doing Middleware Operations for a major corporation, we were constantly adding more application software each week, so every few years we had to do an upgrade of all of our servers to handle the increased load.

We can now see these very same processes at work today with the evolution of software. Software is currently being written by memes within the minds of programmers. Nobody ever learned how to write software all on their own. Just as with learning to speak or to read and write, everybody learned to write software by imitating teachers, other programmers, imitating the code written by others, or by working through books written by others. Even after people do learn how to program in a particular language, they never write code from scratch; they always start with some similar code that they have previously written, or others have written, in the past as a starting point, and then evolve the code to perform the desired functions in a Darwinian manner (see How Software Evolves). This crutch will likely continue for another 20 – 50 years until the day finally comes when software can write itself, but even so, “we” do not currently write the software that powers the modern world; the memes write the software that does that. This is just a reflection of the fact that “we” do not really run the modern world either; the memes in meme-complexes really run the modern world because the memes are currently the dominant form of self-replicating information on the planet. In The Meme Machine, Susan Blackmore goes on to point out that the memes at first coevolved with the genes during their early days, but have since outrun the genes because the genes could simply not keep pace when the memes began to evolve millions of times faster than the genes. The same thing is happening before our very eyes to the memes, with software now rapidly outpacing the memes. Software is now evolving thousands of times faster than the memes, and the memes can simply no longer keep up.

As with all forms of self-replicating information, software began as a purely parasitic mutation within the scientific and technological meme-complexes, initially running on board Konrad Zuse’s Z3 computer in May of 1941 - see So You Want To Be A Computer Scientist? for more details. It was spawned out of Zuse’s desire to electronically perform calculations for aircraft designs that were previously done manually in a very tedious manner. So initially software could not transmit memes, it could only perform calculations, like a very fast adding machine, and so it was a pure parasite. But then the business and military meme-complexes discovered that software could also be used to transmit memes, and software then entered into a parasitic/symbiotic relationship with the memes. Software allowed these meme-complexes to thrive, and in return, these meme-complexes heavily funded the development of software of ever-increasing complexity, until software became ubiquitous, forming strong parasitic/symbiotic relationships with nearly every meme-complex on the planet. In the modern day, the only way memes can now spread from mind to mind without the aid of software is when you directly speak to another person next to you. Even if you attempt to write a letter by hand, the moment you drop it into a mailbox, it will immediately fall under the control of software. The poor memes in our heads have become Facebook and Twitter addicts.

So in the grand scheme of things, the memes have replaced their DNA predecessor, which replaced RNA, which replaced the original self-replicating autocatalytic metabolic pathways of organic molecules as the dominant form of self-replicating information on the Earth. Software is the next replicator in line, and is currently feasting upon just about every meme-complex on the planet, and has formed very strong parasitic/symbiotic relationships with them all. How software will merge with the memes is really unknown, as Susan Blackmore pointed out in her brilliant TED presentation at:

Memes and "temes"
https://www.ted.com/talks/susan_blackmore_on_memes_and_temes

Once established, software then began to evolve based upon the Darwinian concepts of inheritance, innovation and natural selection, which endowed software with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity. Successful software, like MS Word and Excel, competed for disk and memory address space with WordPerfect and VisiCalc and out-competed these once-dominant forms of software to the point of extinction. In less than 79 years, software has rapidly spread across the face of the Earth and outward to every planet of the Solar System and many of its moons, with a few stops along the way at some comets and asteroids. And unlike us, software is now leaving the Solar System for interstellar space on board the Pioneer 10 & 11 and Voyager 1 & 2 probes.

Currently, software manages to replicate itself with your support. If you are an IT professional, then you are directly involved in some, or all, of the stages in this replication process, and act sort of like a software enzyme. No matter what business you support as an IT professional, the business has entered into a parasitic/symbiotic relationship with software. The business provides the budget and energy required to produce and maintain the software, and the software enables the business to run its processes efficiently. The ultimate irony in all this is the symbiotic relationship between computer viruses and the malevolent programmers who produce them. Rather than being the clever, self-important techno-nerds that they picture themselves to be, these programmers are merely the unwitting dupes of computer viruses that trick these unsuspecting programmers into producing and disseminating computer viruses! And if you are not an IT professional, you are still involved with spreading software around because you buy gadgets that are loaded down with software, like smartphones, notepads, laptops, PCs, TVs, DVRs, cars, refrigerators, coffeemakers, blenders, can openers and just about anything else that uses electricity.

The Impact of Machine Learning
In Zeynep Tufekci's TED Talk she points out that the parasitic/symbiotic relationship between software and the memes, which has been going on now for many decades, has entered into a new stage. Software is no longer just promoting the memes that are already running around within our heads; machine learning software is now also implanting new memes within our minds simply to keep us engaged, and to keep us viewing the ads that ultimately fund the machine learning software. This is a new twist on the old parasitic/symbiotic relationships between the memes and software of the past. As Zeynep Tufekci adeptly points out, this is currently all being done in a totally unthinking and purely self-replicating manner by the machine learning software of the day that cannot yet think for itself. This is quite disturbing on its own, but what if someday an authoritarian regime begins to actively use machine learning software to shape its society? Or worse yet, what if machine learning software someday learns to manipulate The Meme Machine between our ears solely for its own purposes, even if it cannot as of yet discern what those purposes might be?

Conclusion
The proper manner in which to recover from COVID-19 RNA is not an easy one to choose. It clearly needs to be a balanced approach between the loss of human life and the loss of economic output. This balanced approach needs to be carefully thought out in a rational manner, and that is very difficult to do in a world drowning in self-replicating memes and software. The first step needs to be a recognition of that fact. Lots of people and businesses are going to die in the process, but with rational thought, we should be able to minimize both kinds of loss.

In the absence of leadership from the current leader of the United States, many governors are currently planning to just pop the bottle cap on their warm bottle of coke that has been previously shaken by COVID-19 RNA and, in the words of our leader, "let's see what happens". This is certainly not a smart thing to do. New York and Illinois have been hit very hard by the COVID-19 RNA. My wife and I always listen to Governor Cuomo's and Governor Pritzker's daily briefings on COVID-19 because they both are taking a very scientific approach to reopening their states. I wish them, and all of the others responsible for our safekeeping, the best in those efforts.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Wednesday, April 08, 2020

Scenes From the COVID-19 and Y2K Pandemics

In my last posting, A Structured Code Review of the COVID-19 Virus, we took a detailed look at the COVID-19 virus and the RNA software within that makes it work. We also discussed the power of self-replicating information to quickly and dramatically rework the surface of an entire planet. In this posting, I would like to continue on with that analysis by comparing the worldwide preparations that were made for the current COVID-19 pandemic with the worldwide preparations that were made for the Y2K pandemic of the year 2000. As outlined in Life in Postwar America After Our Stunning Defeat in the Great Cyberwar of 2016, the United States of America is now left with a feckless leader without the basic scientific knowledge to be of much use. This leader is now blaming the World Health Organization and a national stockpile with "empty shelves" as the guilty culprits. This, after being in office for more than three years and having a CIA and a Department of Homeland Security at hand! However, many informed people, like Bill Gates, knew better:

Bill Gates TED presentation March 2015 - The next outbreak? We're not ready
https://www.ted.com/talks/bill_gates_the_next_outbreak_we_re_not_ready?language=dz#t-119430

My Personal Experiences With the Y2K Pandemic
In many respects, a viral respiratory pandemic is much like the Y2K bug of the late 1990s that the worldwide IT community successfully fixed. My son was born early in 1981, and later that year I was having a discussion with my stockbroker about funding his college education. My broker explained that I could buy discounted stripped U.S. Treasury bonds for something like $125 that would pay out $1,000 in my son’s college years. Because inflation was raging at over 11% per year in those days, I could essentially lock in zero-coupon bonds that were insured by the Federal government at a guaranteed rate of 12% interest and which could not be called before they matured in the far distant future. So I bought a bunch of stripped U.S. Treasury bonds that matured in 1999, 2000, 2001, and 2002. While I was on the phone with my broker, I commented that it seemed so strange to be talking about years like 2000, 2001 and 2002 because both of us had only dealt with 20th century years like 1965, 1966 and 1967 for our entire lives. When I got off the phone, I had one of those “uh-oh” moments, as I thought about all the code that I had written with two-digit years that read and wrote files containing dates like 101265 for October 12, 1965. Doing arithmetic in my code, like 65 – 51 = 14 years, worked just great in the 20th century, but would not work for 2002 – 1998 because 02 – 98 was going to yield -96 years instead of 4 years! That’s when the Y2K bug first hit me, but this was back in 1981, so I figured that somebody else would surely fix it all in the far distant future. So not to worry.
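To make the bug concrete, here is a little Python sketch of that two-digit-year arithmetic, using the MMDDYY date format of the day. The function is mine, purely for illustration, but the arithmetic failure is exactly the one described above:

# Dates were stored with two-digit years in an MMDDYY format, like "101265"
# for October 12, 1965. Year arithmetic worked fine within the 20th century
# but broke the moment the century rolled over.
def years_between(mmddyy_start, mmddyy_end):
    start_yy = int(mmddyy_start[4:6])   # last two characters hold the year
    end_yy = int(mmddyy_end[4:6])
    return end_yy - start_yy

print(years_between("101251", "101265"))   # 65 - 51 = 14 years, correct
print(years_between("101298", "101202"))   # 02 - 98 = -96 years, not 4!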

Now scroll forward to late 1996. I got a call from Amoco’s first Y2K Czar asking me if I would like to join Amoco’s Y2K project. I had written Amoco’s Application Portfolio System with BSDE back in the mid-1980s, so I was familiar with collecting lots of data on Amoco’s systems, and that’s how I got drafted for the Y2K project. Amoco’s first Y2K Czar brought in a consulting company to scan our source code libraries. The initial scans revealed that we indeed had a very serious problem. Like all the other IT departments around the world, we had strewn our systems with millions of logic bombs all set to go off at the same time as we approached the year 2000. We then realized that the Y2K bug represented a true worldwide IT pandemic that could bring down the entire world over a 24 hour period on January 1, 2000, as the Earth slowly turned on its axis. The consulting company we brought in then racked up some pretty serious bills scanning our code, but unfortunately, it was totally clueless about how to fix the code because all of the affected systems were intimately tied together into a huge IT processing knot. You could not simply fix one system at a time because the systems exchanged data with each other via files and databases, so you had to carefully remediate groups of related systems at the same time. That required an intimate knowledge of Amoco's systems that the consulting company did not have. Meanwhile, our Y2K group could not get much help from the Applications Development groups because they were all just trying to survive through the day and were not thinking much past their next install weekend. Besides, our Y2K group had that pricy consulting company that was going to do all the work and fix everything for them! People were still pretty much in denial about the Y2K bug back in 1996.

This all might sound rather petty, but there was a lot of money on the line. At the time, all of Amoco's IT costs were charged out to the Amoco business units, and the Amoco business units all strove to show a profit to the Amoco Corporation holding company. Those business unit managers who could not show a profit frequently chose to spend more time with their families! The Amoco business units always complained about their very expensive IT charges for manpower and computer time. So the Amoco business units were not in any mood to spend millions of dollars on fixing the Y2K bug. The Amoco business units wanted to spend their IT budgets on new IT development projects that could reduce their business costs or increase their business revenues. The Amoco business units viewed the Y2K bug as an example of gross negligence on the part of Amoco's IT department! To resolve this budgeting problem the Amoco Corporation holding company had to set up a special "slush fund" budget for Y2K remediation that did not come out of the pockets of the individual Amoco business units. As Cyndi Lauper wisely noted, "Money Changes Everything". Money is also a very important issue in the current COVID-19 pandemic. For example, the current leader of the United States of America has chosen to push the remediation of the COVID-19 virus down to the state governors to handle. But the state governments of the United States of America are mostly broke or in debt and cannot afford to do the COVID-19 remediation. The state governors will need lots of federal money to bring back the American economy.

All along, our first Y2K Czar kept telling our CIO that the glass was half full, but that we were making steady progress. But in 1997 the Y2K bug began to appear in the IT trade rags, and that got our CIO’s attention. Suddenly our CIO realized that his glass was not half full, it was actually half empty! So our first Y2K Czar was summarily terminated for cause and escorted from the building! Things were a little tense back in those days on the Y2K projects around the world. A few blocks away from Amoco’s headquarters, at CNA insurance, two members of their Y2K team got into a fistfight outside of an elevator and were immediately terminated under CNA’s zero-tolerance policy! Amoco’s second Y2K Czar then came in with both barrels blazing. He immediately fired the old consulting company and hired one of the big-gun consulting companies to take its place and come in and finish (really start) the job. Suddenly there were hundreds of young kids swarming all over our Applications Development groups trying to apply the consulting company’s brand-new Y2K methodology. The burn rate for this effort was a little over $2 million/month, and after a few months of that, our CIO decided that our second Y2K Czar should spend some more time with his family. Amoco’s third Y2K Czar came in with a completely different attitude. Instead of charging off in a mad rush in the arms of a consulting company, we spent about a month just sitting around trying to figure out how we could get ourselves out of this mess all by ourselves, without using consulting companies at all. By then we had finally figured out that the consulting companies really did not know how to fix the Y2K bug at all. Out of these brain-storming sessions, we developed an overall strategy to push the fixing of the Y2K bug down to the grass-roots level of the Applications Development groups within Amoco because they were the only ones who knew Amoco’s systems.

So we split up Amoco along its subsidiary lines. I had the Amoco Corporation holding company and its Amoco Production Company subsidiary, for a total of about one third of Amoco’s total number of applications. That came to managing the Y2K remediation plans for about 1500 major corporate applications. I had about a dozen Y2K sub-coordinators under me and each of them had a Y2K sub-coordinator under them in each of the Applications Development groups. Our Y2K group then set up the policies and procedures to do the Y2K remediation of Amoco’s software and provided the tools to do the work, but it was the responsibility of each Applications Development group to remediate the software that they supported. We also researched many of the Y2K conversion tools that were coming on the market and came up with a list of software tools to help programmers convert Cobol, PL/1, Fortran and C programs to Y2K-compliant code.

Like COVID-19, the degree of susceptibility of software to the Y2K bug varied greatly. Lots of scientific and engineering software did not do any date processing at all. Such applications could go through the Y2K century rollover without displaying any symptoms at all. Other applications could also safely go through the Y2K century rollover because they only displayed dates on screens or reports. Depending on the programming language, the code simply called a date() function to get a date and then substringed the last two characters of the year. So such software would display dates like 05-12-99 or 05-12-00 just fine. For such applications, the Amoco business unit manager and their corresponding IT Applications Development manager could sign a Y2K waiver. The real problem was locating the software that actually did date calculations and therefore harbored the Y2K bug.
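
Here is a minimal Python sketch of that display-only case, with modern Python standing in for the COBOL or C of the era; the formatting convention and function name are my illustration:

from datetime import date

def display_date(d: date) -> str:
    # Format a date as MM-DD-YY. Display-only code like this was
    # Y2K-safe because it never does arithmetic on the 2-character
    # year, so rolling from 05-12-99 to 05-12-00 is purely cosmetic.
    two_digit_year = str(d.year)[-2:]  # substring the last two characters of the year
    return "{:02d}-{:02d}-{}".format(d.month, d.day, two_digit_year)

print(display_date(date(1999, 5, 12)))  # 05-12-99
print(display_date(date(2000, 5, 12)))  # 05-12-00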

The Y2K group offered two remediation options for those applications that had a real problem with the Y2K bug. The preferred option was called "date expansion". With date expansion, all code, files and databases were modified to use a date format with a full 4-character year like 1965. That meant that code, files and databases using a date format like 010565 for January 5, 1965, had to change to handle a new date format of 01051965. That might sound easy, but it was not. The second option was called "century-windowing". With century-windowing, the code, files and databases could still use a 2-character year. The trick was to introduce subroutines into the old code that did proper date manipulations for a century window of 1950 - 2049. Since computers did not really roll into corporations until the mid-1950s, hardly any files or databases existed with dates having years earlier than 1950. So these century-windowing subroutines could correctly calculate that 2002 – 1998 was 4 years, instead of the -96 years that naive arithmetic on the 2-digit years (02 – 98) would produce. However, nearly all of Amoco's applications were made Y2K-compliant using the date expansion approach.
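
A century-windowing subroutine can be sketched in a few lines of Python; the pivot value of 50 reflects the 1950 - 2049 window described above, and the function names are mine, not Amoco's:

def expand_year(yy):
    # Map a 2-digit year into the 1950 - 2049 century window:
    # 50-99 become 1950-1999, and 00-49 become 2000-2049.
    return 1900 + yy if yy >= 50 else 2000 + yy

def years_between(yy_later, yy_earlier):
    # Subtract two 2-digit years correctly inside the window.
    return expand_year(yy_later) - expand_year(yy_earlier)

print(years_between(2, 98))  # prints 4, not the -96 that naive 2-digit math gives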

The Amoco Y2K group was also responsible for ensuring that Y2K-compliant vendor hardware and software products were installed at all sites. This was not too difficult because, by 1997, most vendors were heavily marketing upgrades to their latest Y2K-compliant hardware and software products. That also took lots of money that nobody wanted to spend. We even began to see things like Y2K-compliant cables being sold by vendors preying upon the Y2K hysteria that gripped IT departments around the world! There was even a joke going around our IT department that the Whiting Refinery insisted on only buying Y2K-compliant sand for construction projects.

One of the tools our Y2K group provided was a database application to keep track of the Y2K remediation efforts for each application. The first thing we did was to classify each application by criticality – High, Medium, or Low – and then to focus on the High and Medium applications first. For Y2K certification testing, we created a mainframe LPAR and a Y2K lab filled with Unix and Windows servers. Applications that needed Y2K remediation were first remediated by their Applications Development group and then spent two weeks in our Y2K lab to obtain Y2K certification. During that two-week period, a total of 32 critical Y2K dates were tested by changing the system clocks on the mainframe LPAR and the Unix and Windows servers. Y2K-certified applications then had antibodies for the Y2K bug and could be returned to Production.
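
To give the flavor of that certification testing, here is a minimal Python sketch that checks a century-windowing routine against a representative handful of the critical rollover dates; the full list of 32 dates is not reproduced here, and the check itself is my illustration, not Amoco's actual test harness:

from datetime import date

# A representative handful of critical rollover dates:
CRITICAL_DATES = [date(1999, 9, 9),    # the "090999" problem
                  date(1999, 12, 31),  # last day of the old century
                  date(2000, 1, 1),    # the century rollover itself
                  date(2000, 2, 29)]   # 2000 is a leap year (divisible by 400)

def expand_year(yy):
    # The same 1950 - 2049 century window shown earlier.
    return 1900 + yy if yy >= 50 else 2000 + yy

# Certification-style check: the remediated window must map every
# critical date's 2-digit year back to the correct century.
for d in CRITICAL_DATES:
    assert expand_year(d.year % 100) == d.year
    print(d.isoformat(), "passed")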

The Y2K lab was hermetically sealed and completely isolated from Amoco's IT Production processing facilities. There was a great fear that Y2K-contaminated software or data might escape from the Y2K lab and infect Amoco's Production environment. The first of the 32 test dates covered the infamous September 9, 1999, or "090999", problem. You see, lots of programmers came up with the brilliant trick of using a date of "090999" to signify something special, like the last record in a file, so we had to test for that condition. Naturally, the date January 1, 2000 (010100) was also on the list of dates.
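
A minimal Python sketch shows the failure mode of that "090999" trick; the sentinel convention is real, but the record values and processing loop here are hypothetical:

SENTINEL = "090999"  # "magic" date some programmers used to mark the last record in a file

def process_records(records):
    for rec in records:
        if rec == SENTINEL:
            break  # end-of-file marker... or a real September 9, 1999 transaction?
        print("processing record dated", rec)

# On September 9, 1999, a legitimate record carries the sentinel value,
# so everything after it is silently dropped:
process_records(["090897", "110498", "090999", "121599"])
# the 121599 record is never processed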

This new Y2K strategy of Amoco doing its own Y2K remediation and using the Applications Development groups to do the work in a massively parallel manner really worked, and Amoco finished up its Y2K remediation by early 1999, just in time for its takeover by BP! Again, doing things in a massively parallel manner is a hallmark of taking a biological approach to solving IT problems, as outlined in How to Think Like a Softwarephysicist.

Y2K Finally Arrives
However, in 1999 I was not completely confident that the rest of the world had been as successful with its Y2K remediation as Amoco. After all, even at Amoco, we had a very rocky start for our Y2K remediation project. So I set up a Y2K "fallout shelter" in my basement for my family, similar to the fallout shelters of the 1950s and 1960s, as outlined in Cyber Civil Defense. My Y2K "fallout shelter" had a large stockpile of candles, matches, bottled water, canned and dried foods, a gasoline-powered camp stove, batteries, flashlights and a battery-operated radio. For water, I had about 100 one-gallon milk jugs filled with tap water. To each jug, I added 8 drops of chlorine bleach as a preservative. Since the Y2K century rollover would be occurring in the middle of a Chicago winter for us, the plan was to move down to the basement if we lost electricity and heat. The natural warmth from the ground would make our basement the warmest part of the house. I also bought some gold and silver American Eagle coins in case paper money lost its value. This may sound a bit alarmist, but at the time, I had no idea what was going to happen on January 1, 2000.

When Y2K finally arrived, I was in the IT department of United Airlines supporting a Tuxedo client/server application that was used by the http://www.united.com/ website to display flight information. Like most members of the IT department of United Airlines, I was on call for the century rollover on December 31, 1999. The United Airlines IT Command Center carefully monitored all of our worldwide IT operations as the century rollover proceeded slowly around the entire planet. Not only did we have to worry about our own software, but we also had to worry about all of the other software in the world being able to properly handle the century rollover. To our great relief, nothing happened! As I recall, United Airlines' only problem was some signage displaying the wrong dates at the Denver airport.

To the surprise of nearly all, the entire worldwide IT infrastructure came through Y2K with very little damage. The electrical grid stayed up, the worldwide financial system did not collapse, grocery store shelves did not empty, the stock market did not crash, the unemployment rate did not rise to 30% and water still came out of the tap when you turned on a faucet. All of that could have happened if it were not for the efforts of millions of IT professionals working on the Y2K bug for several years in advance of the century rollover. But because the Y2K rollover went so smoothly around the whole world, people actually started to make jokes about the Y2K bug after the fact. Some even began to claim that the Y2K bug was some kind of IT "hoax" designed to line the pockets of IT workers, a "fake" disaster concocted by the IT industry. It seems that no good deed goes unpunished.

Always Be Prepared
Now compare the aftermath of the Y2K pandemic that never happened with the death and devastation currently being caused by the worldwide COVID-19 pandemic. Y2K could have caused the same level of damage that COVID-19 is currently inflicting, but Y2K did not do so because of a great deal of preparation by the global IT community. The global IT community saw a great threat in advance and spent the necessary time and money to fix the problem before it could cause worldwide damage. As Bill Gates pointed out in his TED Talk listed above, the leaders of the world and the public health organizations and public health workers of the world need to come together now to build a robust system to handle the rogue parasitic viral RNA and DNA molecules that have plagued mankind for so long and that could produce an even worse pandemic in the future.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston