Friday, October 31, 2008

How to Think Like a Softwarephysicist

In this posting, I would like to sum up softwarephysics and explain how you can use it in your daily activities as an IT professional.

The Basic Idea of Softwarephysics
In softwarephysics, I have tried to imagine what a physicist would do if he were suddenly transported to another universe with physical laws similar to, but slightly different from, those of the physical Universe we are all used to. This is essentially what happened to me in 1979, when I switched careers from being an exploration geophysicist, exploring for oil in the Gulf of Suez with Amoco, to being a professional IT person supporting Amoco’s production geophysical software. One very scary Monday morning, I suddenly found myself surrounded by all these strange IT people scurrying about in a controlled state of panic, like the characters in Alice in Wonderland. This was totally alien terrain for me. Granted, I had been programming geophysical models for oil companies since taking a FORTRAN class back in 1972, but that was the full extent of my academic credentials in computer science. Fortunately, back in 1979 most of the other IT professionals I came across were also refugees from the physical sciences, engineering, mathematics, or accounting, but there was also a disturbing influx of young computer science graduates from the recently formed computer science departments of the major universities within the United States. Being totally lost in this new career, I naturally gravitated back to the physics, chemistry, biology, and geology that had served me so well in my previous career. I figured that if you could apply physics to geology, why not apply physics to software? So, like the Gulf of Suez exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software, in order to help me cope with the daily mayhem of life in IT, viewing software as a virtual substance whose behavior could be modeled, but not fully understood at the deepest levels.

Again, softwarephysics is just an effective theory of software behavior that only makes valid predictions over a limited range of IT conditions and only provides a limited insight into the true nature of software. So in softwarephysics, we adopt a very positivistic point of view, in that we do not care what software “really” is; we only care about how software appears to behave, with the intended goal of developing models that reproduce and predict software behavior.

In all of my previous postings on softwarephysics, I have tried to illustrate the evolutionary change in the way people approach physics and the other physical sciences. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based upon real tangible things like particles, waves, and fields. According to Newton, light was a stream of particles; for Maxwell, it was an electromagnetic wave; and for Einstein, it was both. Now with QED, we think of light as a probability wave that simultaneously follows all possible paths as you observe your image in a mirror. Somehow, each photon seems to instantly perform an infinite number of calculations to determine where and how it will ultimately end its journey. So by the end of the 20th century, most of physics had dissolved into pure mathematics, and worse yet, we now realize that all these theories and models are just approximations of reality, if reality even exists.
In a similar manner, in softwarephysics, I have tried to pattern software behavior after the very successful effective theories of physics, most notably thermodynamics, statistical mechanics, the general theory of relativity, and quantum mechanics. But always keep in mind that these models are just approximations of software behavior. Do not fall into the same trap that the early physicists did, by confusing models of software behavior with what software “really” is. In fact, nobody “really” knows what software is because if you keep digging deeper you eventually cross over to the hardware upon which the software runs, and then you get into real trouble – just ask any physicist, because nobody “really” knows what the hardware is made of.

In The Fundamental Problem of Software, I laid down the three laws of software mayhem:

1. The second law of thermodynamics tends to introduce small bugs into software that are never detected through testing.

2. Because software is inherently nonlinear, these small bugs cause general havoc when they reach production.

3. But even software that is absolutely bug-free can reach a critical tipping point and cross over from linear to nonlinear behavior, with disastrous and unpredictable results, as the load on software is increased.

So as an IT professional, you have two major pitfalls to avoid – the second law of thermodynamics and nonlinearity. In Entropy – the Bane of Programmers, I described how software can be viewed macroscopically as the functions the software performs, the speed with which those functions are performed, and the stability and reliability of its performance. In The Demon of Software, I showed how software can also be viewed microscopically, as a large number of characters in a huge number of possible microstates, or versions, and in both of these postings, I showed how the entropy, or disorder, of software tends to increase, and its internal information content decrease, whenever a change is made, because of the second law of thermodynamics – there simply are far more “buggy” versions, or microstates, of a piece of software than there are correct versions. In SoftwareChemistry, I carried this further by describing a model which depicted software as a series of interacting organic molecules (lines of code) consisting of atoms (characters), each of which can be in one of 256 quantum ASCII states. In this model, each character or atom in a line of code is defined by 8 bits, or electrons, in a spin-up ↑ or spin-down ↓ state. In SoftwareBiology, I described how living things are faced with exactly the same problem as programmers, in that they must assemble complex organic molecules from individual atoms with very low error rates in order to perform the functions of life in a Universe dominated by the second law of thermodynamics. And in Software Chaos, I showed how both living things and software must also contend with the effects of existing in a nonlinear Universe, where very small changes to these organic molecules or lines of code can create dramatic and unpredictable results. In Self-Replicating Information, I showed how both software and living things are forms of self-replicating information, and that is why there are so many similarities between biology and IT. So as IT professionals, there is much we can learn from the survival strategies that living things have developed over the past 4 billion years to overcome the second law of thermodynamics and nonlinearity.
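
To get a feel for just how lopsided the odds are, here is a little back-of-the-envelope calculation; the 80-character line of code is just an illustration:

# Each character in a line of code can be in one of 256 ASCII states, so a
# single 80-character line of code has 256^80 possible versions, or microstates:
echo "256^80" | bc
# which works out to about 4.6 x 10^192 microstates - only a vanishingly small
# number of which are bug-free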

Keep an Open Mind and Learn From the Other Sciences
The first thing you need to do is open your mind and learn from the other sciences. In my opinion, computer science is a very isolated science indeed, much more so than the other sciences, which also suffer from this malady. Indeed, inbred thinking is a dire threat faced by all the sciences, and it severely limits the pace at which they can make progress. In Self-Replicating Information and The Fundamental Problem of Everything, I tried to describe how the very conservative nature of meme-complexes severely restricts the adoption of new ideas, or memes, into a meme-complex. To some extent, this is a good survival strategy for a meme-complex, so that it does not pick up mutant memes of a dubious nature, but it seems that nearly all meme-complexes, and especially IT, take this conservative position to an extreme, and that prevents them from making valuable progress. Yes, there is always a risk in adopting a new idea, or meme, and that is why meme-complexes are so conservative. But there is also a risk in not adopting, or at least examining, a new idea or meme. So the first thing you need to do as a softwarephysicist is to keep an open mind to new ideas from the other sciences beyond what you were taught in computer science classes. Do not fall into the “But we do not think about IT problems like that” trap, without at least thinking the whole thing through. Perhaps the source of all your problems has been precisely that you have not thought about IT problems in those terms!

Google and Wikipedia are great ways to brush up on your general scientific knowledge. Another thing I like to do is to read the many popular books written by some of the greatest scientists in the world today; I read about one of these books each week. These scientists invested a great deal of their valuable time, which could easily have been put to other uses, in writing these books to help us all gain a better understanding of the Universe, and they certainly did not do it for the money! Finally, I like to visit the district library of my township to check out very heavy 800-page college textbooks on subjects like chemistry, organic chemistry, biochemistry, astronomy, cosmology, geology, molecular biology, cell biology, and the like. When reading these textbooks, I just read through the entire volume, skipping over all the chapter problem sets and not worrying about exams in the least. I really loved the university life of my youth, but being an old man free from the pressures of problem sets and exams allows me to focus on the essence of the subject matter at hand, and unlike most university courses, which only cover perhaps 30% of the volume, I can read the entire textbook at my own leisurely pace. As I said before, I frequently have thoughts of going back to my old physics department to change the plaque hanging on the department wall to read: “I can do the problems; I just don’t understand the material!”.

Use Softwarephysics as Your Secret Weapon
Next, if you are young and upwardly mobile, I would recommend keeping softwarephysics to yourself for your own personal use. As I originally explained in SoftwarePhysics, I decided to wait 25 years, until the year 2004, before attempting to approach the IT community with softwarephysics a second time. My hope was that by 2004, the IT and computer science communities would have matured to the point where softwarephysics might be more favorably accepted than it was back in the 1980s. But I did not take into consideration the dramatic deterioration of the scientific curiosity and literacy of the American public over this same 25-year period. Indeed, I have found the marketing of softwarephysics in the 21st century to be even more difficult than it was back in the early 1980s, when most IT professionals came from a background in the physical sciences or mathematics. The formation of a very conservative computer science meme-complex in the 1980s, with its own set of accepted and legitimate ideas that preclude all others, did not help either. So I continue to be frustrated by strange looks whenever I try to explain to my fellow IT colleagues why they are being tormented so by software and how to avoid some of the pitfalls. I am 57 years old, on a glide path to retirement, happy where I am, and with very little to lose in a compassionately frozen career path, but if you are young and on the way up, spouting ideas about softwarephysics in meetings will definitely not enhance your upward mobility – believe me, I was once young too. But you can still secretly use the concepts from softwarephysics to steer yourself clear of many of the pitfalls of IT, and that will definitely bode well for your career. So use softwarephysics as your secret weapon in tight situations, when you have to quickly make tough IT decisions. Your fellow IT colleagues will be trying to get by on IT common sense, and I have shown you how unreliable common sense can be in the Universe we live in. So to quote Rudyard Kipling, “If you can keep your head when all about you are losing theirs”, then you are obviously unaware of the situation, or perhaps you are secretly using softwarephysics to get yourself through it all.

Take a Biological Approach to Software
In SoftwareBiology and Self-Replicating Information, we saw that software is a form of self-replicating information that has entered into a parasitic/symbiotic relationship with the business your IT efforts support. The business provides a substantial IT budget for the care and feeding of software and also for the promotion of its replication, and in return, the software allows the business to conduct its business processes and operations. Smart businesses view this as a symbiotic relationship, while less agile businesses view software in a more parasitic manner. For example, I was in the IT department of Amoco for more than 22 years, and at Amoco, IT was definitely viewed as a parasitic cost - just a necessary evil to its core business of finding, producing, refining, and marketing oil. On the other hand, my current employer, the best company I have ever worked for, views software completely differently. My current employer is in the financial industry, and since the only thing this business does is to store and process information, IT is core to its business processes and is viewed as a means of gaining competitive advantage. And then there is Amoco, the last of the mighty Standard Oil Trust companies to still use the infamous Standard Oil Torch and Oval emblem, but which is no longer with us, having finally managed to achieve its lifelong ambition of reducing its IT budget to $0.

In SoftwareBiology, we saw that as an IT professional, you are the equivalent of a software enzyme, greasing the wheels for the care and feeding of the software upon which your business runs. But the problem has been that over the past 70 years, we have not learned to be very effective software enzymes. The good news is that you can drastically improve your IT competence and productivity by adopting the same survival strategies that living things have discovered through 4 billion years of innovation and natural selection to overcome the second law of thermodynamics and nonlinearity. That will be the emphasis for the remainder of this posting.

Do Not Fight the Second Law of Thermodynamics
The second law of thermodynamics is really just a scientific restatement of Murphy’s Law – if it can go wrong, it will go wrong. The second law guarantees that all systems will naturally seek a level of maximum entropy, or disorder, with minimum information content and minimum free energy. In my day-to-day activities, I constantly see my fellow IT professionals taunt the second law of thermodynamics with abandon, like poking a sleeping bear with a sharp stick, only to get gobbled up in the end. Living things, on the other hand, do not fight the second law of thermodynamics but use it instead to fashion the structures out of which they are made. In SoftwareBiology, we saw how living things compartmentalize things by surrounding them with membranes. Membranes form the external cell membrane itself, which isolates the innards of cells from their external environment, like object instances in the memory of a computer, and they also surround the organelles within, like mitochondria and chloroplasts, isolating functions like the methods of an object instance. We saw that these very useful membranes are constructed by simply letting phospholipids form a bilayer which minimizes the free energy of the phospholipid molecules, like a ball rolling down a hill.

Thus, living cells use the second law of thermodynamics to construct these very useful cell membranes one molecule at a time, with no construction crew on the payroll. In addition, they embed protein receptors into the phospholipid bilayer of the cell membrane in order to communicate with the outside world. Molecules called ligands that are secreted by other cells can plug into the socket of a receptor protein embedded in the phospholipid bilayer of a cell membrane, causing the protein to change shape as it minimizes its free energy. This shape change of a receptor protein then causes other internal cellular molecules to take actions. The receptor protein itself results from amino acids being strung together into a polypeptide chain, as a DNA gene is transcribed into mRNA and then translated into protein by tRNA and ribosomes. Once the polypeptide chain of amino acids forms, it naturally folds itself up into a complex 3-dimensional structure by minimizing its free energy too.

++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++@++++@+++++++++++++++++++++++
++++++++++++++++++++++++@@@@++++++++++++++++++++++++
OOOOOOOOOOOOOOOOOOOOOOOO@@@@OOOOOOOOOOOOOOOOOOOOOOOO
||||||||||||||||||||||||@@@@||||||||||||||||||||||||
||||||||||||||||||||||||@@@@||||||||||||||||||||||||
OOOOOOOOOOOOOOOOOOOOOOOO@@@@OOOOOOOOOOOOOOOOOOOOOOOO
++++++++++++++++++++++++@@@@++++++++++++++++++++++++
++++++++++++++++++++++++@@@@++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++


There is much to learn from this ingenious strategy from an IT perspective. IT professionals are normally in a constant battle with the second law of thermodynamics, and we usually take it on in a deadly frontal assault, which gives the second law all the advantages, as it happily machine-guns us down from a protected position. As you can see, living things use more of a guerrilla warfare approach to the second law, sneaking behind the lines and tricking the second law into firing on its own troops. Instead of turning cars into piles of rust, the second law suddenly finds itself constructing beautiful and intricate structures from simple molecules!

One way for IT professionals to trick the second law is to always use templates, like the DNA, mRNA, and tRNA templates used in protein synthesis. My description of BSDE – the Bionic Systems Development Environment – is a good example of how I used templates to generate code. In that tool, I went to the extreme of trying to fully replicate the protein synthesis steps used by living cells: SQL code for the programs of an “embryo” application was generated from “genes”, and the embryo developed within BSDE through embryonic growth and differentiation as the “genes” were turned on and off by a programmer.
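
As a much humbler illustration of the same idea, here is a little sketch of template-driven code generation in shell; the template, table, and column names are purely illustrative and are not the actual BSDE code:

# A "gene" of template SQL with placeholders:
cat > select_gene.tpl <<'EOF'
SELECT %COLUMNS%
FROM   %TABLE%
WHERE  %KEY% = ?
EOF

# Expand the template into working SQL for a particular "protein":
sed -e 's/%COLUMNS%/CUST_NAME, CUST_STATUS/' \
    -e 's/%TABLE%/CUSTOMER/' \
    -e 's/%KEY%/CUST_ID/' select_gene.tpl > get_customer.sql

The point is that the template gets written and debugged once, and the second law only gets a crack at the handful of substitutions.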

Another way to use templates is to copy/paste code and text as much as possible, rather than retyping it. For example, I currently approve all the change tickets that Middleware Operations performs, and it is always a challenge for me to get the people who submit these change tickets to include accurate information. In WebSphere, you install Applications into a WebSphere Appserver via an .ear file, like Customer.ear, that contains all the code required by the Customer Application. These .ear files are stored on a server controlled by our Source Code Management group, so in the change ticket, the requester has to specify the path to the required .ear files. Quite frequently, the paths and .ear file names are misspelled, which causes us all sorts of grief when we try to install the .ear files in the middle of the night. I keep telling change ticket submitters that all they have to do is navigate in Windows Explorer to the desired .ear file, highlight the Address textbox showing the path to the .ear file, and then copy/paste the path into their change ticket. Right-clicking on the .ear file and displaying the .ear file’s Properties allows them to copy/paste the exact name of the .ear file too. I am always frustrated by the great reluctance on the part of change ticket submitters to follow this simple foolproof process.
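
A hypothetical pre-install check along these lines could catch those misspellings long before the middle of the night; the file name ticket_ear_paths.txt is just for illustration and is not part of our actual change management process:

# Verify that every .ear path listed in a change ticket really exists
# before the install window opens:
while read ear_path
do
    if [ -f "$ear_path" ]
    then
        echo "OK       $ear_path"
    else
        echo "MISSING  $ear_path - send the change ticket back"
    fi
done < ticket_ear_paths.txt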

Another way to trick the second law is to minimize typing with your fingers whenever possible. Every time you type out some software or a Unix command on the command line, there is always the chance of making a mistake. For example, I currently support over 200 production servers, as well as a large number of UAT, Integration, and Development servers. Trying to keep track of all those servers and what they do is a real challenge. I have a script on our home server called /home/scj03/p that sets my Unix environment variables the way I like them and also defines about 400 aliases that significantly reduce my need to type. Reducing typing also speeds up my ability to handle tough situations. When we have a website outage, I will get paged into a conference call for the problem, and I might end up with 20 windows open to different servers and monitoring tools. Just logging into all those servers and getting to the appropriate directories can take a long time, so using aliases really helps speed things up. For example, my “p” script sets:

alias g="grep"
alias ap1="ssh apw01it.unitedamoco.com"
day=`date +'%C%y%m%d'`
alias apl="echo 'cd to the Apache log files for today';cd /apache/logs; ls -l | grep $day"

So if I want to look at today’s Apache webserver logs on webserver apw01it.unitedamoco.com for the customer instance, instead of typing:

ssh apw01it.unitedamoco.com
cd /apache/logs
ls -l | grep 20090204 | grep custom

I just type:

ap1
apl | g custom

and I see:

cd to the Apache log files for today
-rw-r--r-- 1 apache apache 35599711 Feb 4 01:59 access.customer-public-14000.20090204.00h00m.log
-rw-r--r-- 1 apache apache 29864992 Feb 4 03:59 access.customer-public-14000.20090204.02h00m.log
-rw-r--r-- 1 apache apache 23817935 Feb 4 05:59 access.customer-public-14000.20090204.04h00m.log
-rw-r--r-- 1 apache apache 81466621 Feb 4 07:59 access.customer-public-14000.20090204.06h00m.log

I also have some WebSphere aliases that let me quickly look at the SystemOut.log for a WebSphere Appserver. For example, suppose the WebSphere Appserver is running on dwas99it.unitedamoco.com:

alias dw1="ssh dwas99it.unitedamoco.com"
alias l="echo 'cd to the WAS Appserver log directories';cd /data/WebSphere; ls"
alias clm="echo 'cd to logs directory and more SystemOut.log';. /home/scj03/bin/clm6"

The above clm alias calls a small clm6 script in my bin directory:

/home/scj03/bin/clm6
#!/usr/bin/ksh
#
if [ $# -lt 1 ]
then
    echo " Syntax error - need the name of a directory"
    echo " Example:"
    echo " clm6 appserver1"
    echo " Does a:"
    echo " cd appserver1/ver6/logs"
    echo " more SystemOut.log"
else
    cd $1/ver6/logs
    dir=`pwd`
    echo "more $dir/SystemOut.log"
    echo " "
    more SystemOut.log
fi

So instead of typing in:

ssh dwas99it.unitedamoco.com
cd /data/WebSphere
cd orders
cd ver6/logs
more SystemOut.log

I just type:

dw1
l
clm orders

and I immediately see the SystemOut.log file for the orders WebSphere appserver:

************ Start Display Current Environment ************
WebSphere Platform 6.1 [ND 6.1.0.13 cf130745.06] running with process name UnitedAmocoCell\was6node1\orders and process id 164482
Host Operating System is AIX, version 5.3
Java version = J2RE 1.5.0 IBM J9 2.3 AIX ppc-32 j9vmap3223-20071007 (JIT enabled)
J9VM - 20071004_14218_bHdSMR
JIT - 20070820_1846ifx1_r8
GC - 200708_10, Java Compiler = j9jit23, Java VM name = IBM J9 VM

Since I have over 400 aliases, I have an alias that helps me find other aliases:

alias a="echo ' '; alias | grep -i"

So if I want to find all the aliases that have to do with our Apache webservers, I just type in:

a apache

and I see:

apb='echo '\''cd to the Apache Start/Stop scripts directory'\'';cd /opdata/apache/scripts; ls'
apc='echo '\''cd to the Apache servers directory'\'';cd /opdata/apache/conf; ls -l'
apcon='echo '\''cd to the Apache static content directory'\'';cd /data/content; ls -l'
apl='echo '\''cd to the Apache log files for today'\'';cd /opdata/apache/logs; ls -l | grep '\''Feb 04'\'
aps='echo '\''cd to the Apache servers directory'\'';cd /opdata/apache/conf; ls -l'
sa='echo '\''sudo su - apache'\'';sudo su - apache'

I have my /home/scj03/p script on all the servers I support, and I have a script, send_p.sh, that scp’s the p script to all the servers whenever I make changes to it. When I log in to a server, I frequently have to sudo to a production ID that has permissions on the production files, such as “sudo su - apache”. After I do that, I press F2, which I have set to “. /home/scj03/p” in my telnet software, to set all my aliases for the production ID.
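
For what it is worth, a script like send_p.sh does not have to be anything fancier than the following sketch; the server list file shown here is illustrative rather than the actual script:

# Push the p script out to every server on the list:
for server in `cat /home/scj03/server_list.txt`
do
    echo "Pushing p to $server"
    scp /home/scj03/p scj03@$server:/home/scj03/p
done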

Do Not Fight the Nonlinear Nature of Software
In Software Chaos, I described the nonlinear nature of software, in that small changes to software or the load that software processes can cause dramatic and unpredictable results. I also suggested that IT needs to stop thinking of software in linear terms, constantly looking for root causes, when none may even exist. What is the root cause of a tornado? The sad truth is that because of the nonlinear nature of weather, a tornado might simply be generated by a gentle wind blowing over a hill. IT needs to view software outages in the same way that the civil authorities view severe weather conditions like tornados. The first priority should be reducing the loss of life and the damage to property, by the constant monitoring of conditions and an early warning system that generates alerts for people to take action. You can always go back to look at log files and dumps to try to figure out what happened, but your first action should be to fail production traffic over to your backup software infrastructure or to start bouncing impacted software components if you have no backup infrastructure. You should think of your IT infrastructure like the ER of a hospital. Your first priority should be to stabilize the patient and get them out of danger, then you can begin to look for root causes. First get the patient’s heart pumping again, then get him breathing, stop the bleeding, and only then start to look for bullets in his log files and core dumps.
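
As a purely illustrative sketch of the early warning idea (the URL and mail address below are made up), even a trivial check run every few minutes from cron beats waiting for the help desk calls to roll in:

# Poll the website and page the on-call person the moment it stops
# answering, instead of waiting for end-users to start calling in:
URL="https://www.unitedamoco.com/customer/login"
code=`curl -s -o /dev/null -m 10 -w "%{http_code}" "$URL"`
if [ "$code" != "200" ]
then
    echo "`date` $URL returned $code" | mail -s "Website early warning" oncall@unitedamoco.com
fi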

In Software Chaos, we also saw that many nonlinear systems, like a simple pendulum, do have regimes in which they behave like nice controllable linear systems. Many times, if you overbuild your software infrastructure sufficiently, you can safely operate your software in such a linear regime. Unfortunately, in IT there has always been a tendency to push the software infrastructure beyond 80% of capacity to get your money's worth. Many times this simply happens because the growth of the processing load overtakes the capacity of the infrastructure when rigorous capacity planning is not continuously performed.
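
The arithmetic of ignoring capacity planning is not kind. As a purely illustrative example, a server already running at 80% of capacity, with its load growing a modest 10% per quarter, blows past 100% in well under a year:

# 80% utilization compounded by 10% growth per quarter:
echo "scale=1; 80 * 1.1 * 1.1 * 1.1" | bc     # 106.4% of capacity after three quarters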

Follow Standards
All living things use the same 20 amino acids to build proteins, the same nucleic acids in the form of DNA and RNA to store and process genetic information, the same simple sugars to build polysaccharide structures, the same metabolic pathways, like the Krebs cycle, to oxidize carbohydrates, and the same fatty acids to make fats and oils - the list goes on and on. And these standards that all living things follow are not necessarily predetermined things. For example, you can build an almost infinite number of amino acids from their basic structure of:

    H   H   O
    |   |   ||
H - N - C - C - OH
        |
        R

by simply changing the atoms in the R side group. But of all these millions of possible amino acids, all living things from bacteria to humans, use the same 20 amino acids to build proteins. And they also all use the same bases A, C, T, G, and U for DNA and RNA and the same mapping table to map amino acids to the 3-bit base-pair bytes of genetic information. It even goes further than that. Many of the organic molecules involved in biochemical reactions can come in left and right-handed versions, called stereoisomers or optical isomers. To see how this works, hold up your left and right hands palm down. Notice that they both look completely identical, with the same number of fingers and the same size and shape. But are they? Imagine you lost your right hand in a car accident, and by some miracle of surgery, they replaced it with a donor hand, but the donor hand happened to be a left hand! This would present all sorts of difficulties for you, beyond just trying to put your gloves on. As an IT professional, you can easily simulate these difficulties by simply crossing over your hands on your keyboard. Now try to type some code! You will find that your fingers no longer line up properly with the keys on your keyboard. Similarly, right-handed organic molecules have a hard time lining up with left-handed organic molecules in a lock and key configuration. This is why nearly all of the amino acids used by living things are left-handed, or L amino acids, while nearly all sugars are right-handed or D sugars. Living things could probably have gotten by if they had reversed things by only using D amino acids and L sugars, but there had to be a standard set by the biosphere at some point to go with one or the other. Otherwise, predators would not be able to eat their prey, without worrying about the arbitrary coding standard their prey happened to have decided upon!

This is a valuable lesson for all IT professionals – use standards whenever possible. This can be applied to all the artifacts of IT – design documents, variable names, method names, database names, database column names, job names, job schedule names – the list goes on. For nearly 30 years, I have watched IT people suffer the consequences of the expediency of the moment. They plunge ahead in the heat of the moment to get something quickly into production, with the sincere intent of coming back to fix everything to conform to standards, but they never seem to do so. If you learn nothing more from softwarephysics than the value in following standards to overcome the effects of the second law of thermodynamics and nonlinearity, as all living things do, you will find it well worth your effort. Too much of IT is hand-crafted and non-standard.
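
Even a small amount of automation helps enforce such standards. Here is a hypothetical sketch (the naming pattern app_env_nn and the sample names are made up for illustration) that flags WebSphere Appserver names that wander away from an agreed-upon standard:

# Flag any Appserver name that does not follow the app_env_nn pattern:
for name in customer_prd_01 orders_uat_02 TempFixAppserver
do
    echo "$name" | grep -E -q '^[a-z]+_(dev|uat|prd)_[0-9][0-9]$' || echo "Nonstandard name: $name"
done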

If It Ain’t Broke, Don’t Fix It!
Living things evolve through small incremental enhancements that always improve their survivability. Richard Dawkins calls this Climbing Mount Improbable (1996). In this book, Dawkins depicts survivability as a mountainous landscape, with foothills and valleys leading up the slope. Living things do not jump straight up the face of a steep cliff in this survivability terrain; instead, they follow a slow arduous upward path in their ascent to improved survivability. They also do not descend into valleys of lower survivability, which might have an easier path to higher survivability on the other side of the valley, because living things cannot “foresee” the terrain of the survivability landscape lying ahead. Natural selection only lives for the moment and selects for advantageous mutations and against disadvantageous mutations at every point along the landscape. Consequently, living things can only move upwards in climbing mount improbable, or at best stand still. There is no selective advantage in decreasing your chance of survival by descending into a valley of lower survivability. Now there is both good and bad in this strategy from an IT perspective. As an IT professional, you are not limited to this path of small incremental enhancements in your climb up mount improbable, because you are privy to an expanded view of the survivability terrain of software. This gives you a better view of the hills and valleys leading up the mountain than living things have, but sometimes it is also advantageous to take the more limited view that living things have in formulating IT designs. So when somebody comes up with a brilliant enhancement in a meeting that sounds like a good idea, I frequently put the question to myself – “Would this enhancement naturally evolve in nature? What are the selective advantages of this approach? What are the selective disadvantages?”.

Here is a recent example. In WebSphere, there is a plugin-cfg.xml file that is installed on Apache webserver instances that tells Apache where to send the WebSphere requests it receives from browsers over the Internet. These requests are sent to physical WebSphere servers hosting WebSphere Applications installed in WebSphere Appservers. Now a typical physical Apache webserver might have 20 Apache instances serving up webpages for the many websites being hosted by the physical Apache webserver. Prior to WebSphere 6.0, we had just one communal plugin-cfg.xml file for all the Apache instances on a physical Apache webserver. The danger was that when we used WebSphere to regenerate the plugin-cfg.xml file for changes to Application A, there was a slight chance that it could mess up the plugin-cfg.xml file section for Application B. However, in 5 years of WebSphere administration, I do not recall a single instance of this actually happening. To overcome this non-existent problem, in WebSphere 6.0 IBM came up with an enhancement that allows for separate plugin-cfg.xml files for each of the 20 Apache instances we might currently have on an Apache webserver, providing complete isolation between the Applications being serviced by the Apache webserver. Now on the face of it, this sounded like a great idea to all, except to me. I immediately thought to myself, “Would this enhancement have a chance to evolve in nature? Would it be viewed as a positive mutation, enhancing survivability, or as a genetic disease to be ruthlessly eliminated from the gene pool?” My conclusion was that this enhancement would cause all sorts of problems because of the second law of thermodynamics and nonlinearity, with no offsetting advantages. My fear was that the developers who created our change tickets would forget to mention which of the 20 Apache instances we should map their WebSphere Applications to, and that our Offshore team members doing the installs in the middle of the night would occasionally get mixed up and accidentally copy the wrong plugin-cfg.xml file from one of the 20 directories storing plugin-cfg.xml files on the WebSphere Deployment Manager server to its corresponding directory on the 8 Apache webservers. You see, the WebSphere Deployment Manager server now has 20 directories storing plugin-cfg.xml files, and each of the 8 Apache webservers also has 20 directories storing plugin-cfg.xml files, so the number of permutations for messing this up is huge. With WebSphere 5.0 there was only one plugin-cfg.xml file stored in one directory on the WebSphere Deployment Manager server and only one directory on each of the 8 Apache webservers storing the single plugin-cfg.xml file. This was pretty hard to mess up, and it never happened over the course of five years. Now with the WebSphere 6.0 enhancement, we had 20 plugin-cfg.xml files to worry about on the WebSphere Deployment Manager server and 20 plugin-cfg.xml files on each of the 8 Apache webservers! Just do the math to come up with the ways you could mess that up! And sure enough, one pager cycle when I was Primary, we got hundreds of calls into our call center about people getting intermittent 404s when trying to log in to our website. Naturally, I could not reproduce the problem. It turned out that our install team had accidentally copied the wrong plugin-cfg.xml file onto one of our 8 Apache webservers, causing a 1 in 8 chance of generating a 404.
This was a very difficult problem to troubleshoot because I never had this problem when there was only one plugin-cfg.xml file used by all the Apache instances, so I never even suspected a bad plugin-cfg.xml file. Now, do I find fault with our install team for this mishap? Not in the least. Our install team does a marvelous job with the very difficult task of doing production installs. Too many times in IT we needlessly taunt the second law of thermodynamics when we tell people to avoid mistakes by simply “being more careful”. This is just setting people up for failure, like poking the second law of thermodynamics with a sharp stick. To alleviate this problem we now have a script that automatically pushes out the plugin-cfg.xml file from a menu. All you have to do is pick the correct plugin-cfg.xml file listed on the menu of 20 plugin-cfg.xml files. Even so, this requires a huge amount of intellectual energy to do correctly. The developers have to specify the correct plugin-cfg.xml file to regenerate for the Apache instance their Application uses on their change tickets and our Offshore team has to pick the correct plugin-cfg.xml file from the menu at install time. My contention is that this “enhancement” would be viewed in nature as a genetic disease, with a great deal of selection pressure against it because it consumes energy and resources, but provides no competitive survival advantage.
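
The menu script itself does not have to be elaborate. Here is a minimal sketch of the idea; the directory layout, server names, and paths are illustrative and are not our actual script:

# Present a menu of the plugin-cfg.xml directories on the Deployment
# Manager and push the chosen file to each of the 8 Apache webservers:
cd /data/WebSphere/plugin-cfg || exit 1
PS3="Pick the Apache instance whose plugin-cfg.xml should be pushed: "
select instance in *
do
    [ -n "$instance" ] || continue
    for web in apw01it apw02it apw03it apw04it apw05it apw06it apw07it apw08it
    do
        scp "$instance/plugin-cfg.xml" "$web.unitedamoco.com:/apache/conf/$instance/plugin-cfg.xml"
    done
    break
done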

Follow the Lead of Bacteria and Viruses for Day-To-Day and Project Management Activities
As I pointed out in SoftwareBiology, prokaryotic bacteria are the epitome of good IT design; consequently, bacteria are found to exist under much harsher conditions, and under a much wider range of conditions, than multicellular organisms. We saw that bacteria block their genes to minimize I/O operations and do not load up their genomes with tons of junk DNA and introns, as do the “higher” forms of life. Similarly, viruses are also very good examples of lean IT architectural design, performing extraordinary feats of data processing at the molecular level with a minimum of structure. The simplicity, resourcefulness, and frugality of bacteria and viruses are surely to be admired from an IT perspective. So I strongly recommend that all IT professionals read a college textbook on microbiology to obtain a full appreciation of the very sophisticated data processing and project management techniques that bacteria and viruses have developed. There is much to be learned from them. So for the mundane day-to-day issues of IT, I frequently ask myself “What would a bacterium or virus do in this situation?”.

I also use bacteria and viruses as superb role models for good IT project management practices. An E. coli bacterium is a far more complicated data processing machine than any PC and can certainly outperform any PC on the market today (Figure 1). Yet an E. coli bacterium can build a clone of itself in 22 minutes from simple atoms using massively parallel project management processes. First, a self-replicating E. coli bacterium must use complex biosynthetic pathways to assemble the basic macromolecules of life, like amino acids, sugars, lipids, and DNA and RNA nucleotides, from carbon dioxide, water, and nitrogen for its clone. These pathways have to operate at transaction rates of billions of transactions per second to build the necessary feedstock of new cell wall and cell membrane material for the clone. The E. coli then proceeds to elongate by extending its cell wall and cell membrane, adding additional proteins, phospholipids, and peptidoglycan to the existing structure. Eventually, the E. coli will split into two separate E. coli bacteria in a process called transverse fission, but before it can do that it must replicate the 5 million base pairs of DNA that define the 5,000 genes of its genome. The 5 million base pairs of DNA are supercoiled into the tiny E. coli bacterium, like stuffing a 30-foot string into a box that is ½ of an inch on a side. An E. coli can replicate DNA at the rate of about 1,000 base pairs per second in each direction around its main loop of DNA from an initial starting point called the oriC. Surprisingly, humans do not have that many more genes than E. coli; we only have about 25,000 genes, but we require a full 9 months to self-replicate, so bacteria definitely have us beat on that score. Now at an overall rate of 2,000 base pairs per second, it takes about 40 minutes for an E. coli to replicate 5 million base pairs of DNA, and yet E. coli can self-replicate in only 22 minutes. How do they manage that feat of project management? Well, as soon as the two new daughter loops of DNA begin to form, they immediately start to self-replicate too, so that by the time an E. coli fissions, it already has the DNA for the next generation well underway! E. coli are able to replicate their DNA with an error rate of about 1 error in every 100,000 base pairs. However, E. coli also have a DNA polymerase III enzyme that proofreads the recently replicated DNA and corrects about 99 out of each 100 errors, so that E. coli can replicate with an overall error rate of about 1 error in every 10 million base pairs. Thus, when an E. coli bacterium replicates, there is only about a 50% chance that its twin cell will contain even one error in the whole 5 million base pairs that define its genome!
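
The arithmetic behind those figures is easy to check; this is just the back-of-the-envelope version of the numbers quoted above:

# 5 million base pairs copied at 2,000 base pairs per second, in minutes:
echo "5000000 / 2000 / 60" | bc -l     # about 41.7 minutes for one pass
# Expected errors per replication at 1 error per 10 million base pairs:
echo "5000000 / 10000000" | bc -l      # about 0.5 errors per twin cell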

Figure 1 – An E. coli bacterium

While all this DNA is being self-replicated, the E. coli is also busy forming huge numbers of the roughly 3,000 different proteins it builds from amino acids for the enzymes that run the E. coli biosynthetic pathways and for the proteins that must be incorporated into the newly forming cell membrane material for its twin. To create the multiple proteins required to perform a particular task in one I/O operation, the E. coli DNA is transcribed to mRNA at a rate of about 50 bases per second by an enzyme protein called RNA polymerase (Figure 2). While the growing strip of mRNA, which contains multiple blocked genes for proteins, is being transcribed from the E. coli DNA, multiple ribosomes clamp onto the freshly generated mRNA at the same time and begin reading the mRNA I/O buffer, peeling off polypeptide chains of amino acids at the rate of 30 amino acids per second. You can actually see this all happen with the aid of an electron microscope, which shows a large number of ribosomes zipping along the mRNA and peeling off polypeptide chains of amino acids which later fold up into complex 3-dimensional proteins as the polypeptide chains seek a configuration that minimizes their free energy in accordance with the second law of thermodynamics.

Figure 2 – Multiple blocked genes are transcribed to mRNA by RNA polymerase and then a large number of ribosomes read the mRNA in parallel to form proteins by stringing together amino acids in the proper sequence. Individual proteins peel off as the ribosomes reach stop codons (end of file bytes) along the mRNA I/O buffer

Viruses are even more efficient project managers. Figure 3 shows the T4 virus, which preys upon E. coli bacteria. The T4 is one of the most complex viruses known; it is about 900 atoms wide and 2,000 atoms long, looks very much like the Lunar Landing Module of the Apollo missions, and performs a very similar function. The head, collar, sheath, base plate, and tail fibers of the T4 are composed of several thousand individual protein subunits held together by the electromagnetic force. The head contains the DNA for 200 T4 genes encoded on 168,800 base pairs, which is far more genes than most viruses have - the smallest viruses contain only about 10 genes. When a T4 lands upon the surface of an E. coli bacterium, the T4 tail fibers grip onto specific molecules in the outer cell envelope of the E. coli. The contractile sheath of the T4 consists of 24 protein rings, but when the T4 lands on an E. coli, the base plate is unplugged, and the 24 rings contract into 12 rings, which injects the T4 DNA into the E. coli like a syringe. Within one minute of the T4 DNA being injected into the E. coli, the production of E. coli mRNA for E. coli proteins ceases, because the E. coli RNA polymerase is actually more strongly attracted to the promoters for the T4 genes on the T4 DNA than to those of its own DNA! In less than 2 minutes, the E. coli RNA polymerase begins transcribing specific T4 genes to produce early T4 proteins that take over the biosynthetic processes of the E. coli bacterium. Within 5 minutes of infection, these T4 proteins begin to break down the 5 million base pairs of E. coli DNA into nucleotides that can be reused to make DNA for new T4 viruses. In less than 10 minutes, under the direction of the T4 DNA, the E. coli then begins making about 50 different enzymes that can replicate T4 DNA about 10 times faster than E. coli DNA, but with a higher error rate. So now the E. coli bacterium starts to make T4 DNA at a rate of 10,000 base pairs per second, yielding one T4 DNA strand every 17 seconds. This is also done in parallel at multiple sites to create the DNA for 100 – 300 T4 viruses. At the same time as all this T4 DNA is being generated, the E. coli is also manufacturing the proteins for the T4 heads based upon the instructions in the T4 genes. As each T4 head is completed, it is stuffed with a T4 DNA strand. Next, T4 DNA is transcribed into mRNA for the tail proteins. The base plate is formed first, and the contractile sheath is assembled upon the base plate, which is then capped. The completed head and tail assemblies are then welded together to form a completed T4 virus. A T4 gene then creates a protein that can eat through the E. coli cell membrane and wall. This forms a hole in the E. coli bacterium, and about 25 minutes after infection, 100 – 300 T4 viruses emerge from the now dead E. coli bacterium, ready to infect other E. coli.

Figure 3 – A T4 virus Lunar Landing Module

While on the subject of viruses, it should be noted that viruses are marvelous DNA survival machines and very good examples of self-replicating information that come in many sizes, shapes, and configurations. Some have complex structure, like the T4, while others, like polio and cold viruses, come in a simple spherically shaped capsid protein shell like the head of a T4. The HIV virus is an example of a complex virus that has a capsid protein shell surrounded by a second protein shell, which in turn, is surrounded by a phospholipid membrane with embedded proteins, like the surface of a normal human cell. Viruses can contain single-stranded DNA or double-stranded DNA in the form of linear strands or a circular loop. They also can contain single or double-stranded RNA. A virus may contain tens to hundreds of genes, and since they are totally dependent upon full-fledged cells to replicate, viruses have evolved to replicate quickly with less fidelity than normal cells. Viruses are in a constant battle with normal cells because, for the most part, they are purely parasitic forms of DNA or RNA – just mindless forms of self-replicating information. Normal cells have evolved a number of strategies to combat the viruses, like altering the proteins in their outer cell membranes so that the viruses cannot attach to their cell membranes. So viruses tend to replicate DNA and RNA with less fidelity to allow favorable mutations to arise more easily to combat the defense mechanisms of normal cells. Because viruses are constantly evolving new survival strategies, they evolve more quickly than bacteria or other life forms, and that is why new strains of cold and flu viruses are constantly emerging. Some viruses have even evolved to form a parasitic/symbiotic relationship with their hosts. For example, certain viruses that infect bacteria have become lysogenic temperate viruses. A lysogenic virus does not usually kill its host bacterium. Instead, the injected viral DNA splices itself into the bacterium’s main loop of bacterial DNA. Thus, each time the bacterium replicates, it passes along the viral DNA to its descendants. After all, the name of the game for self-replicating information is to find a way to replicate; it really does not matter how that is achieved. Some of the genes in the viral DNA that is now part of the bacterium’s main loop of DNA even alter the proteins in the host bacterium’s outer cell membrane to make it harder for other viruses to attach to the infected bacterium! Thus, a bacterium infected by a lysogenic virus gains some immunity to other viruses, and the bacterium and viral DNA are truly in a parasitic/symbiotic relationship like that explored in Software Symbiogenesis. However, a lysogenic virus still needs to replicate on its own once in a while, otherwise it would lose its identity and just become part of a new strain of bacteria. So about one time in a thousand, the infected bacterium will go through the normal lytic process to generate hundreds of viruses that escape to infect other bacteria, like a T4 virus does.

Because viruses cannot replicate themselves without the aid of normal cells, most biologists do not consider viruses to be alive. There is also some controversy as to how the viruses originated. Most biologists think that viruses are escaped genes – self-replicating DNA or RNA that were once part of a normal cell, but which have developed their own little spaceships to explore the biosphere with. Others believe that viruses are active fossils of self-replicating information from a time before there was true life on Earth and that viruses evolved from precursors of true living cells. The Equivalence Conjecture of Softwarephysics might be of help here. In the Software Universe, we definitely know that computer viruses arose in the 1980s after the development of operating systems and application software. Many computer viruses are lysogenic, in that they splice themselves into the executable file of application software and are launched each time the application is executed, and they look for other hosts to infect with each application launch. Others just look for other executables on a PC to infect and rely on those executables being accessed by other PCs on a network. However, I don’t think many computer viruses form symbiotic relationships with their hosts by making applications even better - software vendors call those upgrades and charge dearly for them.

I frequently try to follow the superb project management techniques of bacteria and viruses when doing IT project management work, especially the use of massively parallel repeatable processes. For example, in the late 1990s, I spent about three years on Amoco’s Y2K project. I was in charge of coordinating the remediation of about 1/3 of Amoco’s applications, about 1500 applications. The overall project manager of this effort was constantly assigning us impossible tasks, like “Please review all of your remediation plans and report back next week on those that might be behind schedule”. Just do the math. If you spent just one hour reviewing each of 1500 remediation plans, that would come to 1500 hours. For a 40 hour work week that would come to 37.5 weeks just to do a cursory review of all 1500 remediation plans! The way we finally worked through issues like this was to form a hierarchy of coordinators all using the same repeatable processes. I had 8 sub-coordinators and each of them had a sub-coordinator in each Application Development group, and the sub-coordinators in each Application Development group were responsible for the Y2K remediation of all the applications the group supported. As a member of the top tier of Y2K coordinators, I spent most of my time putting together repeatable processes for managing the Y2K remediation of the applications that could be run in parallel on all 1500 applications.

We created a hermetically sealed lab for the distributed applications running on Unix and Windows servers and an LPAR on our mainframes for Y2K testing. The first step was to determine which applications needed Y2K remediation and to create remediation plans for those that did. Luckily for me, as a former geophysicist, I had all of the exploration and production applications, many of which did not do date processing and did not require Y2K remediation. However, I also had the Amoco corporate applications which took data from the Amoco subsidiaries and these applications were very rich in date processing, so I still had plenty of work to do. The determination of Y2K readiness and the production of remediation plans was done in a more or less parallel manner by all of the sub-coordinators. We had some software that scanned for date processing that helped with the analysis, and we created a database and some software to create and store all of the remediation plans. This allowed us to keep track of the overall progress of the Y2K remediations and to manage the whole process. We then scheduled each application that needed Y2K remediation for a “flight” in the Y2K lab. The system clocks on the Unix and Windows servers and the mainframe LPAR were then cycled through about 25 critical dates over a one-week “flight”, starting with the date 09/09/1999 (the dreaded date = “9999” problem) and continuing on with 12/31/1999, 01/01/2000, … etc. Production test data was allowed into the hermetically sealed Y2K lab, but no data was allowed to escape it, for fear of contaminating our current 20th-century production processing with 21st-century data. So for your next IT project that calls for the migration of all your applications to a new operating system release or a new version of Oracle or some other global infrastructural change, take a tip from bacteria and viruses and use a project management approach based upon repeatable parallel processes and you will make your deadline for sure.
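
In shell terms, the repeatable parallel process boils down to something like the following sketch; the script name and inventory file are illustrative, not the actual Y2K tooling:

# Run the same date-scan against every application in the inventory,
# a batch at a time, instead of one long serial pass:
while read app
do
    ./scan_for_dates.sh "$app" > "reports/$app.txt" &
    [ `jobs | wc -l` -ge 10 ] && wait      # let the current batch finish before starting more
done < application_inventory.txt
wait                                       # wait for the final batch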

Focus on Implementation and Change Management Processes
It is also useful to follow the lead of bacteria and viruses when it comes to IT implementation processes and change management activities. When I transitioned from geophysics into IT in 1979, there were no Change Management groups, no DBAs, no UAT testers, no IT Security, no IT Project Management department, and we didn’t even have IT! I was in the ISD (Information Services Department), and I was a programmer, not a developer, and we ruled the day because we did it all. Yes, we had to deal with some meddling from those pesky systems analysts with their freshly minted MBA degrees but no programming experience, but thankfully, they were usually busy with preparing user requirement documents of a dubious nature and did not get into our way too often. ISD had two sections - computer systems and manual systems. The people in manual systems worked on the workflows for manual systems with the Forms Design department, while the computer systems people programmed COBOL, PL/I, or FORTRAN code for the IBM MVS/TSO mainframes running the OS/370 operating system. Luckily for me, I knew some FORTRAN and ended up in the computer systems section of ISD, because for some reason, designing paper forms all day long did not exactly appeal to me. We created systems in those days, not applications, and nearly all of them were batch. We coded up and tested individual programs that were run in sequence, one after the other, in a job stream controlled by JCL or a cataloged procedure. To implement these systems all we had to do was fill out a 3-part carbon paper form for the Data Management clerks – they got one copy, we got one copy, and somebody else must have gotten the third copy, but I never found out who. The form simply told the Data Management clerks which load modules to move from my development PDS (partitioned dataset) to my production PDS and how to move the cataloged procedures from my development proclib to the production proclib. That was it. Systems were pretty simple back then and there was no change management.

This worked great until the Distributed Computing Revolution hit in the early 1990s. Then computer systems became applications, and they were spread over a large number of shared computing resources – PCs, Unix servers, Windows servers, and mainframes – and the old implementation model no longer worked. Suddenly, IT had to come up with change management and implementation processes for the new complicated distributed architecture. But unfortunately, the old mindset of Application Development (programmers) ruling the day still persisted through this revolution, even though the IT world had totally changed. And because the three laws of software mayhem never went away, the Application Development Life Cycle (ADLC) on the distributed platform always tended to get behind schedule, so that the last step in the life cycle, the implementation step, got compressed into an IT afterthought along with change management. Implementers were still at the bottom of the IT totem pole, like the Data Management clerks that I dealt with in the 1980s, and oftentimes had to deal with half-baked implementation plans crammed through the new change management processes at the last minute.

I have found that different IT shops have matured to various degrees with change management procedures and the implementation step of the ADLC, with some doing better than others, but as a whole, IT could certainly improve on these vital processes. So I would like to propose a biological approach to change management processes and the implementation step of the ADLC. As we saw above, when a T4 virus lands upon an E. coli bacterium, it has a detailed implementation plan of exactly what to do and when, with each step rigorously specified. The net result is that 25 minutes after landing upon an E. coli bacterium, the T4 has produced 100 to 300 new T4 viruses and has flawlessly manipulated the reactions of a trillion trillion molecules in the process. Think of it in another way. In medicine, surgeons are at the top of the medical totem pole. Before you go into surgery, you will see your family GP and probably an internist, cardiologist, or oncologist who will help with the diagnosis of your problem by ordering a large number of tests and perhaps engaging the services of a radiologist for a CAT scan or MRI, and maybe a pathologist to read some biopsy slides. When you go into the OR, an anesthesiologist will put you under and keep you alive during the procedure, but it will be the surgeon who either cures you or kills you. In medicine, they don’t like to find out halfway through an operation that the artificial heart valve is the wrong size or that they have run out of blood for the patient. In medicine, the implementation step is key, and the surgeon, or implementer, is the Captain of the ship and is well prepared and fully supported by the rest of the medical community. Last-minute shoot-from-the-hip efforts are not tolerated for long. Similarly, IT needs to refocus some energy into the vital process of change management and the implementation step of the ADLC. After all, like a surgical patient, end-users do not experience the beneficial or ill effects of software until it is implemented.

Follow the Lead of Multicellular Organisms and the Biosphere for Architectural Issues
For higher level IT issues, look to the superb organization found in multicellular organisms and the biosphere. For example, we are currently going through the SOA (Service Oriented Architecture) revolution in IT. With SOA, we develop a large number of common services, like looking up the current status of a customer, which can then be called by any number of applications, rather than having each application individually code the same functionality. As we saw in SoftwareBiology, SOA is just the IT implementation of the dramatic eruption of multicellular architecture during the Cambrian Explosion 541 million years ago. Consider the plight of a single, lonely, aerobic bacterium which is challenged to find its own sources of food, water, oxygen, and warmth and must also find an environment in which it can dump its wastes without poisoning its immediate surroundings, and do all this while it tries to avoid all the dangers of a hostile microbial world. Contrast that with the relatively fat and easy life of the 100 trillion cell instances in your body that find themselves bathed in all the food, water, oxygen, and warmth they could desire, without the least worry about dumping their wastes into their immediate environments because their wastes are briskly flushed away, and without the least worry of being taken out to lunch by a predator. The poor little bacterium must have code that handles all these difficult situations, while the cells in a multicellular organism just rely on the services of other cell instances residing in SOA service machines, or organs, known as hearts, lungs, intestines, livers, kidneys, and spleens to handle all these common services. In Software Symbiogenesis and Self-Replicating Information, we saw how the biosphere has taken SOA to an even higher level. Not only do the cells in a multicellular organism rely upon the services of cells within the organs of their own body, they can also rely upon the services of cells in other bodies via parasitic/symbiotic relationships between organisms. We are beginning to see this in IT too, as companies begin to expose services to other companies over the Internet to allow B-to-B applications to form symbiotic relationships between businesses in a supply chain.
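In IT terms, the heart of SOA is nothing more than a common service with a published interface that many applications call, instead of each application maintaining its own copy of the logic. Here is a minimal sketch of the idea in Java; the CustomerStatusService and BillingApplication names are hypothetical illustrations, not anybody's actual API.

// One shared service, many callers - the SOA pattern in miniature.
// The names below are hypothetical illustrations, not any vendor's API.
interface CustomerStatusService {
    String lookupStatus(String customerId);   // the common service exposed to all applications
}

class BillingApplication {
    private final CustomerStatusService statusService;

    BillingApplication(CustomerStatusService statusService) {
        this.statusService = statusService;   // the application is handed the shared service
    }

    void printInvoiceHeader(String customerId) {
        // Instead of coding its own customer lookup, the application delegates to
        // the shared service, much as a cell relies on an organ.
        System.out.println("Customer " + customerId + " is "
                + statusService.lookupStatus(customerId));
    }
}

public class SoaSketch {
    public static void main(String[] args) {
        CustomerStatusService service = customerId -> "ACTIVE";   // a stub implementation of the service
        new BillingApplication(service).printInvoiceHeader("12345");
    }
}

Any number of other applications could be handed that same service, which is the whole point.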

I became convinced of the value of SOA many years ago at United Airlines before SOA had even appeared on the IT scene. At the time, I was supporting the middleware for www.united.com running on Tuxedo. Through a process of innovation and natural selection, the middleware group ended up writing a large number of Tuxedo services to access things like customer data from a series of databases and reservation data stored in the Apollo reservation system to support www.united.com. We then opened up these Tuxedo services for other internal applications at United to use via a JOLT call from their Java applications. Over a period of a year or two, we accidentally evolved a pretty sophisticated SOA architecture. This SOA architecture was put to the test one day when my boss got a call from our CIO. It turns out that our marketing department had published a “Fly Three, Fly Free” campaign in hundreds of newspapers that day. The campaign explained that if you flew three flights with United in any given month, you were then eligible to fly free to any destination within the continental United States. All you had to do was to register for the campaign to obtain your flight benefit. Now that was the problem. Our marketing department had forgotten to tell IT about the new campaign, and in the newspaper ads, it stated that the promotion was going into effect that day at 12:00 midnight! So we had 15 hours to design, develop, test, and implement an application that could be used by our call centers and also run from www.united.com. We immediately formed a number of crash workgroups in applications development, database, QA testing, and call center training to field this application as quickly as possible by working in parallel wherever we could. And because we could stitch together an application quickly by calling our already existing SOA Tuxedo services, we actually made the 15-hour deadline! After that experience, I was totally sold on the idea of SOA, before SOA even existed.

Let Softwarephysics Guide Your Career Path
SOA has the advantage of reducing the amount of redundant code that a corporation has to maintain and allows developers to quickly throw together new applications because they can just assemble them from a collection of existing components or services. The problem that SOA presents is governance - who decides how and when to change components? SOA can also be a problem for Operations when troubleshooting, because many applications draw upon the same services, and a bug in one service can impact many applications. So is SOA a good thing or a bad thing? And how do you decide?

Well, there was this same controversy about object-oriented programming in the early 1990s. Was object-oriented programming a good thing or a bad thing? And was it just a flash in the pan, or something that would catch on? There is a lot of hype in IT, and you can easily get misdirected down the wrong architectural path. For example, in the early 1980s, they told us to start looking for a different job because IT would soon be vanishing. You see, in the early 1980s, the theory was that 4GL languages (4th Generation Languages), like Mark IV and Focus, would soon allow end-users to simply write their own software, and the Japanese were also hard at work on their Fifth Generation Computer project, due to be completed by about 1990, which would be based upon an operating system using Artificial Intelligence. End-users would interact with this new operating system like the folks in 2001: A Space Odyssey, with no need for pesky IT people. But none of that ever happened! So in the early 1990s, the COBOL/CICS/DB2 mainframe folks had a dim view of object-oriented languages like C++ because they thought they were just another flash in the pan. There was also a lot of FUD (Fear, Uncertainty, and Doubt) in the early 1990s because of the impending architectural change to a distributed computing platform, which made matters even more confusing. We saw folks throwing together LANs with cheap PCs, and then there were those nasty Unix people bringing in Unix servers to replace the mainframes too. Mainframes were supposed to be gone by 1995, and IBM nearly did go bankrupt in those days. So maybe Unix and C++ might be something good to learn - or maybe not?

So how do you make sense of all this? How do you make good IT decisions about your career path when IT architectural changes seem to be happening, but there is all this hype? Fortunately, I had softwarephysics back in the early 1990s to help me make decisions. Based upon softwarephysics, I taught myself Unix and C++, and then Java when it came out in 1995, because softwarephysics told me that object-oriented programming was a keeper and would have a big impact on IT. Softwarephysics also predicts that SOA is the way to go. Softwarephysics claims that object-oriented programming and SOA are really just implementations of multicellular organization in software. An object instance is just a cell. The human body has 210 cell types, or Classes. Each of the 100 trillion cells in your body is an object instance of one of these 210 Classes. Each cell is isolated by a cell membrane that protects it from other cells so that other cells cannot directly change the cell's "data". In object-oriented languages, this is called encapsulation. Each cell contains many internal methods(), used by the cell to conduct biological operations. Each cell also has a set of public or exposed methods() that can be called by other cells via the secretion of organic molecules that bind to receptors on the cell membrane and induce the cell to perform biological processes. The cells in your body also use the object-oriented concepts of inheritance and polymorphism. The 100 trillion cell instances in your body use the SOA services of other cell instances, like the cell instances in your lungs, liver, kidneys, heart, and intestines. These organs, or service machines, provide your cell instances with oxygen, food, and water, and also dispose of the wastes they excrete. So given the success of the multicellular architecture found in the biosphere, softwarephysics predicts that object-oriented programming and SOA will also be successful and worthy of investing some time in learning.
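For readers who like to see an analogy in code, here is a toy sketch in Java that maps the biological terms above onto the object-oriented ones. The particular cell types and the hormone signal are just illustrative props.

// A toy mapping of the cell analogy onto Java - illustrative only.
abstract class Cell {                                  // a cell type is a Class
    private double energy = 1.0;                       // encapsulation: no other cell can touch this directly

    public void receiveSignal(String hormone) {        // an exposed public method(), like a membrane receptor
        energy -= 0.1;                                 // internal state changes only through the cell's own code
        metabolize(hormone);                           // an internal method() carries out the biological operation
    }

    protected abstract void metabolize(String hormone);
}

class LiverCell extends Cell {                         // inheritance: one of the 210 cell types
    protected void metabolize(String hormone) {
        System.out.println("Liver cell instance processing " + hormone);
    }
}

class HeartCell extends Cell {
    protected void metabolize(String hormone) {
        System.out.println("Heart cell instance contracting in response to " + hormone);
    }
}

public class Tissue {
    public static void main(String[] args) {
        Cell[] tissue = { new LiverCell(), new HeartCell() };
        for (Cell cell : tissue)
            cell.receiveSignal("adrenaline");          // polymorphism: the same call, different behavior
    }
}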

Conclusion
As you can see, softwarephysics is not a magic bullet to cure all of your IT woes. It is really more of a way to think about software. Because it provides an effective theory of software behavior, softwarephysics can be a very robust and flexible tool in your IT arsenal. All you have to do is use your imagination!


Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, September 09, 2008

MoneyPhysics

Yes, I know this posting was supposed to be about how you could use softwarephysics on a daily basis as an IT professional, but the “real world” of human affairs has once again intervened with the current worldwide financial crisis, so as in The Fundamental Problem of Everything, I once again wish to take a slight detour along our path to IT enlightenment. I find the current worldwide financial crisis to be a wonderful case study demonstrating the value of having an effective theory for the behavior of the virtual substance we call money. Because in softwarephysics we also view software as a virtual substance to be modeled by effective theories, I believe there are some valuable lessons to be learned from our current economic problems. However, before we can analyze the current worldwide financial crisis using some of the tools we learned in our study of softwarephysics, we need to do a brief recap to refresh our memories.

Recap of Softwarephysics
Recall that in softwarephysics we view software as a virtual substance with observable characteristics that can be modeled, but not quite fully understood at the deepest levels. Just as the quantum field theories of QED and QCD of the Standard Model of particle physics provide very accurate effective theories for the behavior of the virtual substance we call matter, softwarephysics attempts to do the same for the virtual substance we call software. So softwarephysics is simply an attempt to provide an effective theory of software behavior that can be used to make predictions and provide a direction for thought that allows you to make better decisions as an IT professional. Recall that an effective theory is just an approximation of reality that only works over an effective range of conditions and only provides a limited level of insight into the true nature of the phenomenon at hand. With softwarephysics, I have tried to pattern software behavior after the very successful effective theories of physics, most notably thermodynamics, statistical mechanics, the general theory of relativity, and quantum mechanics. These very successful effective theories produced very valuable models for the behavior of real things in the physical Universe in the 20th century. For example, QED has predicted the gyromagnetic ratio of the electron accurate to 11 decimal places, and the general theory of relativity has predicted the orbital decay rate of a binary system of neutron stars accurate to 14 decimal places, but unfortunately, both effective theories are totally incompatible with each other. The general theory of relativity works great for large masses interacting at large distances, but not for little masses interacting over very small distances like atoms, while QED works for very small things like electrons and photons over very short distances, but not for large masses over large distances. So we now know that although both theories are very accurate in making predictions over a limited range of conditions, they really are only models of reality and that neither one is a complete depiction of true reality itself. Over many of the previous postings, I have tried to convey this idea that all the current effective theories of physics are just models of reality, if indeed reality even exists, and are not what true “reality” really is. Similarly, in softwarephysics we take a positivistic approach to software behavior; we really don’t care what software really is, we only care about modeling how software appears to behave. Confusing “reality” with models of reality is a hard thing to let go of. I think this is perhaps the greatest challenge for IT professionals new to softwarephysics.

In SoftwareChemistry we also saw that chemistry is really just a very useful set of heuristic models sitting on top of QED that abstract and simplify the very difficult calculations of QED, and in SoftwareBiology we saw that biology is also a series of simplifying models that abstract the complicated models of biochemistry. So the physical sciences can be viewed as an onion-like hierarchy of simplified models of reality that make exquisite predictions for the behaviors of the layers below. But in Software Chaos we also saw that for each layer of modeling, emergent behaviors can arise due to the nonlinear nature of the underlying phenomena. These emergent behaviors are the difficult ones to predict without introducing another layer of modeling at a higher level. Similarly, we saw that software can be viewed as a series of layers of abstract models stacked on top of each other, with applications as the highest form of organized software, overlaying lower levels of software organization like systems software (Apache webservers, J2EE Appservers, DB2, Oracle, Mainframe Gateways, etc.), Unix shells, Unix kernels, machine instructions, and finally microcode. Emergent behaviors also seem to appear in software as we proceed up the hierarchy of software organization, and we shall see that this applies to financial theory as well.

A Case Study in Virtual Substances - the Current Financial Crisis
Now we have all the tools to proceed. In Software as a Virtual Substance, I tried to relate the virtual substance we call software to the more familiar virtual substance we call money as a useful analogy that highlights the benefits of having an effective theory of software behavior. In truth, economics is just a collection of effective theories that try to describe the behavior of money and all its associated functions like the production and consumption of goods and services. So economics can really be viewed as a kind of moneyphysics. As I previously mentioned, I have always been fascinated with how real money seems to be for most people, when for the most part, it is just a large number of bits on a vast network of computers. For example, the total amount of U.S. currency in circulation is only about 10% of the M2 indicator of the U.S. money supply (currency + savings deposits + money market deposits + CDs), and don’t forget that currency itself is really just pieces of paper with certain ink markings. So money is really not real at all. Money is just another meme, and the very complicated world financial system is a gigantic meme-complex. Now don’t get me wrong, I have nothing against the concept of money. As an 18th-century liberal and 20th-century conservative, I am a strong proponent of capitalism because I see capitalism, despite all its faults, as the best defense against the abuses of the Powers That Be. But if we go back to my favorite definition of physical reality:

Physical Reality - Something that does not go away even when you stop believing in it.

we see that money really is not a part of physical reality. Once people stop believing in money, it simply disappears. That is why the current worldwide financial meltdown is so disturbing. In case you have not looked at the balance of your 401(k) lately, the concept of money is in real danger. Most of the Powers That Be in the worldwide financial meme-complex are in a desperate struggle to keep the illusion that money is real, alive in the minds of the Earth’s population, and I wish them all the best in this effort! Going back to barter would certainly not be a good thing for anybody. So let’s use the current worldwide financial disaster as a timely case study that shows the value in having an accurate effective theory of the virtual substance we call money, in hopes that it will raise the issue of why we need an effective theory for the behavior of the virtual substance we call software.

It seems that once again, we have gotten ourselves into quite a financial pickle through the ingenious development of arcane derivatives and the use of excessive leverage, just as we did in the stock market crash of 1929 and the ensuing Great Depression. An entertaining analysis of the 1929 stock market bubble, and ultimate crash, can be found in John Kenneth Galbraith’s famous The Great Crash 1929 (1954). In 1929, the derivatives and excessive leverage were based upon investment trusts and holding companies, which held common stocks that were inflating in price way beyond the profits generated by the underlying companies. This time, the derivatives and leverage were based upon MBSs (Mortgage Backed Securities), CDOs (Collateralized Debt Obligations), and CDSs (Credit Default Swaps) invested in subprime mortgages on properties that were inflating in price way beyond a sustainable level.

In both cases, the financial community managed to take a linear financial system, which produced small changes in value from small changes in profits, and turn it into a very nonlinear financial system, which produced large changes in value from small changes in profits, through the miracle of leverage. Leverage is borrowing money from somebody in order to invest the money in some kind of financial instrument, rather than using your own money to buy the instrument. The hope is that the borrowed money you invest will generate much more income than the cost of the money you borrowed. A simple form of leverage is buying stock on margin. In 1929 people were able to buy stock with a 10% margin, meaning that you could buy a $100 share of stock for $10 and owe your broker the remaining $90. If the stock quickly rose to $150 during the market bubble, you could then sell the stock for a quick $50 profit, less commission costs and the interest on the $90 you borrowed from your broker. In the current crisis, many speculators did the same thing, but instead of buying stocks on margin, they were buying real estate on margin by taking out low-interest-rate ARMs (Adjustable Rate Mortgages) with little money down on houses and condos, and then flipping them for a quick profit before the ARM rate was scheduled to rise dramatically.
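The arithmetic of margin is simple enough to put into a few lines of code. Here is a small sketch in Java using the same numbers as the example above - a $100 share bought with $10 down - to show how the leverage that produces the quick $50 gain also produces losses bigger than your original stake; interest and commissions are ignored.

// Buying a $100 share on 10% margin - the numbers from the text, ignoring
// broker interest and commissions.
public class MarginExample {
    public static void main(String[] args) {
        double ownMoney = 10.0;                       // your 10% margin
        double borrowed = 90.0;                       // owed to the broker

        // The stock rises to $150 during the bubble: the whole gain is yours.
        double equityAfterRise = 150.0 - borrowed;    // $60 of equity
        System.out.println("Profit on your $10: $" + (equityAfterRise - ownMoney));   // $50

        // The same arithmetic in reverse: a drop to $80 more than wipes out your $10.
        double equityAfterDrop = 80.0 - borrowed;     // -$10 of equity
        System.out.println("Equity after a drop to $80: $" + equityAfterDrop);
    }
}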

In 1929, the investment banks on Wall Street came up with an even more ingenious form of leverage through the invention of a derivative known as an investment trust. In finance, a derivative is a financial instrument whose value relies on the value of some other underlying assets and is used to transfer the risk associated with the value of the underlying assets from one party to another. For example, a gold mining company and a jeweler could sign a futures contract to exchange a specified amount of cash for a specified amount of gold in the future. The benefit of the futures contract is that it locks down the future price of gold for both the gold mine and the jeweler, and thus, reduces uncertainty for both of them. The gold mine and the jeweler both reduce one risk and increase another risk at the same time when they sign the futures contract. The gold mine reduces the risk that the price of gold will fall below the price set by the contract, but takes on the risk that the price of gold might rise above the price set by the contract. The jeweler reduces the risk that the price of gold will rise above the price set by the contract, but takes on the risk that the price of gold might fall below the price set by the contract. Once the contract has been formed, it can be traded to other parties who are betting on the fluctuating price of gold. The market value of the futures contract depends upon the current price of gold and how close the contract is to the settlement date. But most futures contracts are never physically fulfilled – the gold is never really delivered; the party that is to deliver the gold merely buys another contract that cancels out the first one. So a futures derivative provides a mechanism to hedge risk and provides something that can be traded in the open market by speculators. Because most futures contracts are really never fulfilled, they are mainly a way of speculating on the future prices of commodities.
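A few lines of code also make the two sides of a futures contract easy to see: the hedgers have locked in the contract price, so their revenue and cost are fixed no matter what the spot price of gold does at settlement, while a speculator who later buys the contract simply gains or loses the difference between the spot price and the contract price. The prices and quantities below are invented purely for illustration.

// A toy gold futures contract: the hedgers lock in a price, the speculator
// bets on the difference. All numbers are invented for illustration.
public class GoldFutures {
    public static void main(String[] args) {
        double contractPrice = 900.0;                  // agreed price per ounce
        double ounces = 1000.0;

        for (double spot : new double[] { 800.0, 900.0, 1000.0 }) {
            double mineRevenue = contractPrice * ounces;                    // fixed for the gold mine, whatever spot does
            double jewelerCost = contractPrice * ounces;                    // fixed for the jeweler, whatever spot does
            double longSpeculatorGain = (spot - contractPrice) * ounces;    // the pure bet on the future price of gold
            System.out.printf("spot=%.0f  mine revenue=%.0f  jeweler cost=%.0f  speculator gain=%.0f%n",
                    spot, mineRevenue, jewelerCost, longSpeculatorGain);
        }
    }
}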

As I mentioned in The Fundamental Problem of Everything, there is good and bad in all things. After all, a vice is just a virtue carried to an extreme. The concept of derivatives began in the commodities markets to help ameliorate the risk that farmers were subject to when bringing their crops to market, which is a good thing, but because derivatives can also be traded in an open market by speculators with no real interest in the underlying assets, they can lead to market bubbles by obscuring those assets. And because derivatives tend to spread risk around to a large number of people who have no stake in the underlying assets, they also dilute the responsibility anyone has for evaluating and maintaining the value of those assets.

So let’s compare how derivatives and extreme leverage led to financial chaos in 1929 and in the present situation. In 1929, investment banks like Goldman Sachs would create an investment trust by issuing $100 million in bonds, $100 million in preferred stock, and $100 million in common stock. They then took the $300 million and bought common stock in real companies like RCA or Ford Motor Company. Now suppose the common stock held by the investment trust rose by 50% during the market bubble from $300 million to a value of $450 million. Since the bonds and preferred stock issued by the investment trust were static instruments, with more or less fixed values, they would still only have a total value of $200 million. The remaining $250 million of value had to go somewhere and ended up being reflected in the value of the common stock issued by the investment trust. So the common stock issued by the trust would rise by 150% from $100 million to $250 million. This is where leverage comes into play for the investment trust derivative. Notice that by borrowing money from other people in the form of bonds and preferred stock, investment banks were able to create a derivative that increased in value by 150% when the underlying assets of the investment trust only increased by 50%, producing a leverage of 3:1. To make matters more interesting, Goldman Sachs then created new investment trusts with a 3:1 leverage that held the common stocks of other investment trusts, yielding a total leverage of 9:1. This continued on and on, with even more investment trusts holding the common stock of other investment trusts, which held the common stock of other investment trusts..., creating a gigantic Ponzi scheme. Now imagine that you took your $10 and used it to buy a $100 share in an investment trust with a built-in leverage of 9:1. This would give you a leverage of 90:1, and allow you to reap the benefits of a virtual $900 worth of stock! But the problem with leverage is that it is a two-way street: it magnifies profits, but it also magnifies losses. So long as stocks kept rising during 1928 and 1929, things were great and everybody made tons of money on paper, without doing anything intrinsically productive. But all bubbles must finally burst, and when the market bubble of 1927 – 1929 came to an end, all the excessive leveraging of derivatives caused the market to collapse as everybody tried to sell all the fabricated paper at once. For example, if the stocks behind your virtual $900 position slipped by just a few percent as the market declined, the leverage in the chain could easily knock your $100 share of the investment trust down to a market value of $70, and your broker would make a margin call because you still owed your broker $90. Suddenly your $10 investment had turned into a $30 loss because of the downside of leverage. In October 1929, many investors could not make their margin calls and had to sell their shares at a loss instead, which depressed the market even further and greatly magnified losses because of leverage. The leveraged downward spiral caused the market to crash. For example, the Goldman Sachs investment trust known as Shenandoah Corp., which had been trading at $36 per share in late July 1929, fell to $0.53 by July 1932.
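Since the leverage arithmetic is the heart of the story, here is a small sketch in Java that stacks the ratios from the example above - 3:1 for a single trust, 9:1 for a trust holding another trust's common stock, and roughly 90:1 once you buy that common stock on 10% margin - and then runs the same arithmetic in reverse for a small market dip. It is a simplified illustration, not a model of any actual 1929 trust.

// The 1929 trust-of-trusts leverage, reduced to its arithmetic.
// A trust funded 1/3 by common stock and 2/3 by bonds and preferred stock
// passes the gains and losses on its portfolio to its common stock at 3:1.
public class TrustLeverage {
    static double trustCommonReturn(double portfolioReturn) {
        return 3.0 * portfolioReturn;
    }

    public static void main(String[] args) {
        // The way up: the underlying stocks gain 50% during the bubble.
        double firstTier = trustCommonReturn(0.50);                // a 150% gain on the trust common
        double secondTier = trustCommonReturn(firstTier);          // 9:1 overall for a trust of trusts
        System.out.printf("Gain on second-tier trust common: %.0f%%%n", secondTier * 100);

        // The way down: a mere 5% dip in the underlying stocks, amplified 9:1,
        // knocks 45% off the $100 share you bought with $10 of your own money.
        double dip = trustCommonReturn(trustCommonReturn(-0.05));  // -45%
        double equity = 100.0 * (1.0 + dip) - 90.0;                // share value minus the $90 owed to the broker
        System.out.printf("Your equity after a 5%% market dip: $%.2f%n", equity);   // deep in margin-call territory
    }
}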

This time, Wall Street used subprime mortgages as the underlying instruments in a derivative called an MBS (Mortgage Backed Security). About 30 years ago I took out a mortgage on a house when we moved back to Chicago. At the time, the mortgage was held by a local bank in the suburb I had moved to, and I made monthly payments to the bank, so the bank originating the loan was very interested in my ability to repay the mortgage. But in recent years banks and mortgage brokerage firms, like Countrywide, originated mortgages, but then sold them off to Fannie Mae, Freddie Mac, or investment banks like Lehman Brothers and Bear Stearns, which then packaged them up into MBS securities, containing perhaps 1,000 mortgages. Now Fannie Mae was created in 1938 in the midst of the Great Depression by FDR precisely to do this sort of thing. In 1968, Lyndon Johnson privatized Fannie Mae to get it off the books because of the rising costs of the war in Vietnam, and in 1970, Freddie Mac was also created by the government to provide some competition for Fannie Mae in the growing secondary mortgage market. So why didn’t this catastrophe happen decades ago? The reason is that both Fannie Mae and Freddie Mac held high standards for the mortgages they bought, at least until recent years. When the dot.com bubble burst in the spring of 2000, and after 9/11 put additional pressures on the financial markets late in 2001, Wall Street was desperate to trade something – anything. So the “securitization” of mortgages – any mortgage at all, no matter how risky - in the form of MBSs sold by the investment banks of Wall Street took off. The selling off of mortgages by the originating local banks and mortgage brokers freed up more cash for the banks and mortgage brokers to originate and sell even more mortgages because the mortgages they sold off did not impact their mandated reserve ratios of cash, and they made a good living off the loan origination fees. Just like the investment trusts of 1929, large commercial banks and investment banks like Lehman Brothers used leverage to buy and hold huge amounts of these MBS derivatives. For example, they could borrow money at 4% from other people and use the funds to buy MBS derivatives with a yield of 8%. Since the underlying assets of the MBS derivatives were mortgages on homes that were continuously going up in value, the MBS derivatives were perceived to be a safe way to make an easy 4% on all the money that was borrowed. The more money you borrowed, the more money you made, so Lehman Brothers ended up running up a leverage of 30:1 before it went bankrupt.
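To see why 30:1 leverage on MBSs felt like free money right up until it did not, here is a rough sketch in Java of the carry trade described above: borrow at 4%, hold MBSs yielding 8%, and then watch what happens to a billion dollars of capital when the cash flow from the mortgages drops and the market value of the MBSs is marked down. The numbers are round illustrative figures, not Lehman's actual books.

// A leveraged MBS carry trade in round numbers (all figures in billions of dollars).
public class LeveragedCarry {
    public static void main(String[] args) {
        double capital = 1.0;                     // the firm's own money
        double leverage = 30.0;                   // Lehman-style 30:1
        double assets = capital * leverage;       // $30 billion of MBSs held
        double borrowed = assets - capital;       // $29 billion of borrowed money
        double fundingRate = 0.04;                // borrow at 4%
        double mbsYield = 0.08;                   // MBSs yield 8%

        // In good times the spread on all that borrowed money looks like free income.
        double goodYear = assets * mbsYield - borrowed * fundingRate;
        System.out.printf("Profit in a good year: $%.2f billion on $1 billion of capital%n", goodYear);

        // If defaults cut the MBS cash flow in half, the interest on the borrowed
        // money still has to be paid and the easy profit evaporates.
        double badYear = assets * (mbsYield * 0.5) - borrowed * fundingRate;
        System.out.printf("Profit with half the cash flow: $%.2f billion%n", badYear);

        // Worse, a mere 5% markdown in the market value of the MBSs is a $1.5 billion
        // hit - more than all of the firm's capital.
        System.out.printf("Capital left after a 5%% markdown: $%.2f billion%n", capital - assets * 0.05);
    }
}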

The MBSs were further split up into another kind of derivative called a CDO (Collateralized Debt Obligation) by the investment banks. A CDO takes a bunch of MBSs and sets up a number of cash flow buckets or tranches. When you invest in a CDO, you invest in the cash flow from the MBSs and not the MBSs themselves. The tranches form a hierarchy of risk. The riskiest tranche gets paid last from the cash flow of the underlying MBSs, but it earns a higher rate of return. The senior tranche is the first bucket in line for the cash flow, so it bears the least risk and earns the lowest rate of return. So now we had derivatives of derivatives, just like back in 1929. The end result of all this was that the banks and mortgage brokers originating the mortgages did not really care if the mortgages would go into default because they were selling them off, and because the investment banks were hungry for more mortgages for their MBSs, the local banks and mortgage brokers began issuing subprime mortgages to people who would normally never qualify for a loan. The packaging of mortgages into MBSs and CDOs further obscured the risk of these subprime mortgages.
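The tranche structure is really just a payment priority rule, and a short sketch makes it concrete: the cash from the underlying MBSs fills the senior bucket first, then the mezzanine bucket, and whatever is left dribbles into the riskiest equity bucket. The tranche sizes and cash flows below are invented purely for illustration.

import java.util.LinkedHashMap;
import java.util.Map;

// A toy CDO cash flow "waterfall" - payment priority by seniority.
// Tranche sizes and cash flows are invented for illustration only.
public class CdoWaterfall {
    public static void main(String[] args) {
        Map<String, Double> promisedPayments = new LinkedHashMap<>();   // insertion order = seniority
        promisedPayments.put("senior", 60.0);
        promisedPayments.put("mezzanine", 30.0);
        promisedPayments.put("equity", 10.0);

        for (double mbsCashFlow : new double[] { 100.0, 70.0, 40.0 }) {  // shrinking mortgage payments
            System.out.println("MBS cash flow this period: " + mbsCashFlow);
            double remaining = mbsCashFlow;
            for (Map.Entry<String, Double> tranche : promisedPayments.entrySet()) {
                double paid = Math.min(remaining, tranche.getValue());   // the senior buckets are filled first
                remaining -= paid;
                System.out.println("  " + tranche.getKey() + " tranche receives " + paid);
            }
        }
    }
}

Notice that the equity tranche is the first to stop receiving anything as the mortgage payments dry up, which is exactly why it was promised the highest rate of return.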

Just as in 1929, everything was fine until the American housing bubble burst. As home values began to decline across the entire country, homeowners that were holding subprime mortgages with ARM rates about to rise discovered that, suddenly, the value of their homes was less than their outstanding mortgages. For these homeowners, the logical thing to do was to simply default on their mortgages and walk away. This caused the default rate on the mortgages within the MBSs to skyrocket and the cash flow from the MBSs to nose dive. The market value of the MBSs dropped dramatically, to the point where nobody wanted to buy them at any price, because nobody could tell how bad the mortgage default rate might get. But the financial institutions that had used large amounts of leverage to buy the MBSs still had to make interest payments on the billions of dollars they had borrowed to buy them, even though the cash flow from the MBSs had dropped significantly, pushing these institutions towards bankruptcy.

The insurance companies like AIG got into this mess by issuing an instrument called a CDS (Credit Default Swap) to the investment banks and financial institutions holding the MBS and CDO derivatives. A CDS is really an insurance policy against an MBS or CDO going into default, and like all insurance policies, the investment banks or financial institutions had to pay a premium to AIG for the CDS protection on their MBS and CDO derivatives. Because financial institutions were insuring their MBSs and CDOs with CDSs, those holdings were treated as shielded from risk and did not enter into the calculation of their mandated reserve ratios. This freed up cash for the financial institutions to buy even more MBSs and CDOs. The reason the insurance companies called these contracts a CDS, rather than an insurance policy, is that if they had called them an insurance policy, the CDSs would have been regulated by the agencies that regulate insurance policies. Thus, by calling these instruments a CDS, the insurance industry was able to create a very lucrative source of income with no regulation. You see, the agencies that regulate insurance companies require that an insurance company keep some cash on hand to make good on the insurance policies that it issues in order to protect the policyholders, but this was not the case for the CDSs, so the insurance companies were able to take on huge CDS liabilities with little cash reserve set aside to pay them off if they went bad. However, the CDSs did present two problems for the insurance companies. Firstly, they did not know how to accurately calculate the risk involved with the MBSs that the CDSs were insuring. Insuring an MBS was not like insuring a home against fire because it was very hard to estimate the likelihood of an MBS failure, so the premiums charged by the insurance companies were probably too low for the risk they took on. This was not a problem, so long as they were just taking in premiums and not paying off on bad MBSs and CDOs. Secondly, most losses that insurance companies suffer are isolated – not all homes in America will burn down at the same time. But this was not true of the MBSs and CDOs that the CDSs were insuring. As home values began to decline across the entire country and homeowners holding subprime mortgages with ARM rates about to rise began defaulting on their mortgages, essentially millions of homes across the country did burn down all at the same time from the perspective of the insurance companies issuing CDSs on MBSs. It is estimated that there are about $60 trillion of CDSs in the market today, which makes even the current U.S. national debt of $10.6 trillion look small.

All this was further obscured by the bond rating agencies like Fitch, Moody’s, and Standard & Poor’s. In a scathing July 8, 2008, Securities and Exchange Commission report, it was found that the bond rating agencies were rating these MBSs and CDOs as AAA even though they were filled with worthless mortgages. Believe it or not, these agencies are paid to evaluate the MBSs and CDOs by the very same investment banks that issue them! This has been going on for decades with bonds and other securities, but to maintain a semblance of objectivity, the rating agencies have always been careful to shield their analysts from external influence by the investment banks issuing securities. After all, if the rating agencies were viewed by the investment public as just biased extensions of the investment banks and on the take, then an AAA rating of a security would be worthless. Unfortunately, the July 8, 2008, S.E.C. report found just that. In an infamous December 2006 email, one rating analyst confided to another, “Let’s hope we are all wealthy and retired by the time this house of cards falters.” The report found that the rating analysts were under a great deal of pressure to quickly rate the MBSs and CDOs as AAA, without doing due diligence or fulfilling their fiduciary responsibilities. Other emails in the report showed that analysts were afraid that if they did not rate MBSs and CDOs as AAA, then they would lose business to competing rating agencies.

So all these very complicated and over-leveraged derivatives did a great job of spreading the risk of a huge number of subprime mortgages throughout the financial systems of the world. Indeed, these derivatives became so complicated and hard to calculate valuations for, that Wall Street turned to physics for help and hired many Ph.D. physicists, called quants, to help with the calculations and analysis of these instruments. These derivatives made quantum mechanics look simple, and the newly hired physicists from academia truly did become moneyphysicists. Unfortunately, the quants fell prey to a couple of predicaments familiar to all commercial scientists working in the real world of human affairs. The first was the pressure of MICOR (Make It Come Out Right), where you start with the result that you desire and work the analysis backwards to MICOR. Frequently, the desired analytical results of Quantitative Finance models were graciously provided by the Powers That Be at many institutions in a subtle manner, and the job of the quants was to MICOR the analysis. The second problem was that the quants did not take to heart that their Quantitative Finance models were just effective theories of reality and not reality itself. In How To Think Like A Scientist, I stressed the importance of the final step in the scientific method where you test your models:

The Scientific Method
1. Formulate a set of hypotheses based upon inspiration/revelation with a little empirical inductive evidence mixed in.

2. Expand the hypotheses into a self-consistent model or theory by deducing the implications of the hypotheses.

3. Use more empirical induction to test the model or theory by analyzing many documented field observations or performing controlled experiments to see if the model or theory holds up. It helps to have a healthy level of skepticism at this point. As philosopher Karl Popper pointed out, you cannot prove a theory to be true; you can only prove it to be false. Galileo pointed out that the truth is not afraid of scrutiny; the more you pound on the truth, the more you confirm its validity.

Unfortunately, the quants became enamored with the power of their own models to make predictions, even if those predictions did not take into account the very nonlinear emergent behaviors of a worldwide financial collapse, like the avalanche that follows the final grain of sand added to a sand pile beyond its critical angle. See Software Chaos for more details on nonlinear systems. The really bad thing about all these derivatives was that nobody really understood them, not even the physicists. Nobody knew how to properly price them or evaluate their inherent risks, so the derivatives essentially released everybody from taking responsibility for the underlying subprime mortgages. If you watched any of the congressional hearings on the current financial crisis on C-SPAN, this became quite evident.

So now we see that in October of 2008, we are in a very similar situation to where we were back in October of 1929, with derivatives and extreme leverage running amuck. So are we heading for another Great Depression? I do not believe so because we now have much better effective theories for the behavior of the virtual substance we call money. Fortunately, both the Federal Reserve chairman Ben Bernanke and Treasury Secretary Henry Paulson are monetarists, in the tradition of Milton Friedman. Milton Friedman and Anna Schwartz, in their book A Monetary History of the United States, 1867-1960, argued that the Great Depression of the 1930s was caused by a massive contraction of the money supply and not by the lack of investment as Keynes had argued. In this book, Friedman contended that, in the 1930s, the Federal Reserve allowed 1/3 of U.S. banks to fail, which produced a 1/3 drop in the M2 indicator of the U.S. money supply, and this turned a mild recession into the Great Depression. This is why Bernanke, Paulson, and the other leaders of all the central banks of the world are desperately trying to prop up all the banks and financial institutions on the planet by guaranteeing deposits and bailing out investment banks, commercial banks, and insurance companies like AIG, and lowering interest rates.

But in 1929, the Federal Reserve System was new to managing the virtual substance called money, having only recently been created in 1913 to provide a stabilizing central banking system for the United States, following the “Panic of 1907”. Because the Federal Reserve was inexperienced, it took just the opposite actions, based upon the prevailing effective theory of money at the time. In 1929, the Federal Reserve felt that “times were tough” because of the recession that followed the October 1929 market crash. When “times are tough”, the Federal Reserve thought that it made sense to keep interest rates high, to ensure that only financially sound and trustworthy applicants applied for loans. It also felt that it should allow weak banks to go under in order for the strong banks to survive, and provide a sound financial foundation for the country. That is why, during the early 1930s, the Federal Reserve kept interest rates high and let commercial banks, investment banks, and other financial institutions go out of business. Now bank deposits were not insured by the FDIC in the early 1930s, so when a bank went under, everybody lost the money they had deposited in it, and about 1/3 of the U.S. money supply went from being a virtual substance to being a virtual vacuum by the end of 1932. In 1933, the Glass-Steagall Act was passed, which established the FDIC to insure deposits against bank failures and also introduced many banking reforms that separated commercial banks, the kind of banks you put money into and take out loans from, from investment banks, which issue stocks, bonds, and derivatives for corporations. Unfortunately, the banking system reforms of the Glass-Steagall Act that regulated commercial and investment banking activities were repealed on November 12, 1999, by the Republican-led Gramm-Leach-Bliley Act, based upon the assumption that the greed and stupidity of the 1920s were no longer with us – not a very prudent assumption.

The Value of Having an Accurate Effective Theory of Virtual Substances
So we now see the value in having good effective theories for the virtual substance we call money. But is software any less real than money? If we go back to my favorite definition of physical reality, I contend that software is certainly more “real” than money, and yet there are no softwarephysicists to speak of. In the modern world, software certainly plays as much a role in the storage and transfer of economic value as does money, so I find it strange that one can receive a Nobel Prize for modeling the virtual substance we call money, but not for modeling the virtual substance we call software.

Hopefully, the coordinated actions of the central banks of the entire planet will prevent the collapse of the world money supply and prevent another Great Depression because now we have a much better set of effective theories for the behavior of money. Everyone seems to agree that we are forced to bail out the institutions that got us into this jam, but we all know deep down that it sets a very bad precedent. In the 1930s, we let these institutions fail, and the people running them learned a hard lesson in the process. The CEOs of these failed institutions suffered dramatic personal economic consequences for their poor decisions, but that will not be the case for the current crisis when the governments of the world bail out these institutions and their CEOs. As a 20th-century conservative, I will gaze in disbelief around Christmastime this year when the CEOs of Wall Street still demand year-end bonuses in the tens of millions of dollars for nearly bankrupting their institutions and the country, and those bonuses will be coming out of the pockets of U.S. taxpayers this time! I was amazed to see the CEOs of Wall Street go from being staunch capitalists to grateful socialists in the span of a single week.

A Need For Reform and Regulation
As an 18th-century liberal and 20th-century conservative, I believe that the government that regulates least is the government that governs best. Capitalism and free markets are the easiest way to run an economy through the miracle of Adam Smith’s “Invisible Hand” because they allow people to work in their own self-interest, which is much easier than trying to force people to do things. It is also a “natural” economic system that fights the second law of thermodynamics through the same processes of innovation and natural selection that Charles Darwin described in his theory of evolution. The simple beauty of capitalism is that billions of people, all working in their own self-interest, will naturally create a complex and dynamic world economy, with no need for an external designer. In fact, the 20th century has shown us, through the failure of socialism and communism, that it is essentially impossible for any human intelligence to design the complex world economy that we have today. But capitalism does have its limitations. Because capitalism is a Darwinian economic system, it is subject to the same constraints we find in the Earth’s biosphere. Darwinian systems evolve through small incremental changes, adapting through innovation and natural selection to the environment that presently confronts them. Thus, Darwinian systems do not have a teleological or anticipatory nature; they cannot foresee dramatic environmental changes, like an asteroid impact killing off the dinosaurs, or a worldwide financial meltdown killing off the world’s financial markets. In “The Tragedy of the Commons” (Science, 1968), Garrett Hardin proposed that capitalism and free markets do not work very well for things that have no explicit owner, like the Earth’s atmosphere or the world’s money supply. For example, the financial meltdowns of 1929 and 2008 were both caused by large numbers of people all working in their own self-interest, with no thought to the Tragedy of the Commons. Just as it is currently in everybody’s self-interest to pollute the Earth’s atmosphere with unchecked emissions of carbon dioxide, in recent years it was in the self-interest of all to pollute the worldwide financial system with toxic subprime mortgages. Everybody benefited, so long as American home values kept rising - low-income homeowners, real estate speculators, local banks, mortgage brokers, Fannie Mae, Freddie Mac, investment banks, bond rating agencies, insurance companies like AIG, and private investors all benefited from the leverage and complex derivatives hiding the impending catastrophe of subprime mortgages, until the bubble burst. Having everybody working in their own self-interest does amazing things, so long as we pay attention to the Tragedy of the Commons. So capitalism does need some help from government in order for it to work its miracles in a sustainable manner. Capitalism cannot exist in a lawless society, like the extreme of Russian capitalism, where simply killing your competitors is considered a smart marketing strategy.

On November 15, 2008, the leaders of the world financial system will be meeting to start work on such a legal framework to ensure this financial disaster never happens again. My hope is that the new American administration will also convene a similar meeting for the impending disaster of climate change. After all, we seem to be taking unprecedented measures to preserve the world’s money supply, which we have seen is not even a part of physical reality. Wouldn’t it make sense to do the same for the Earth’s climate, which is indeed a part of physical reality? Unfortunately, left to its own devices, world capitalism will optimize the burning of every economically recoverable drop of oil and lump of coal, yielding an atmosphere with 2400 ppm of carbon dioxide. Sure, Antarctica will gain a nice balmy climate, like present-day Florida, as we melt the polar ice caps and the permafrost of the Earth, producing a sea level rise of 300 feet, allowing my descendants to frolic on the beaches of the new seacoast in southern Illinois, but the economic costs of trying to adapt to this new climate or fix it will be staggering. This is where government can help capitalism achieve its miracles by simply levying a stiff carbon tax on all carbon-based fuels and a carbon tariff on imported goods from countries spewing carbon dioxide into the atmosphere. The income from the carbon tax and tariffs could be offset by an income tax credit to all citizens to make these taxes revenue neutral. This would provide an incentive to repower the U.S. economy with low carbon footprint sources of energy and provide citizens with the funds to buy low carbon footprint products via income tax credits. And for the common defense, the government should also create incentives and fund efforts to rebuild our electrical grid with a superconducting infrastructure and promote nuclear, solar, geothermal, wind, and ocean current sources of energy, so that we do not spend the rest of the 21st century fighting wars over Persian Gulf oil with China, India, and the other rising economies of the Earth. Being energy independent is certainly as important as funding B-1 bombers or nuclear aircraft carriers.

Similarly, for national security purposes, we need to ensure that the U.S. monetary system is never again placed into such jeopardy, by bringing back the Glass-Steagall Act separating commercial and investment banking, and by introducing new legislation regulating MBSs, CDOs, and CDSs. And injecting a little realistic capitalism into the world of Wall Street CEOs would also be beneficial. Theoretically, the stockholders of a corporation are supposed to elect the board of directors, who then hire a management team to run the corporation, but in the real world of human affairs, the CEOs of Wall Street and the top layer of the corporate management team seem to hire the board of directors, who then appoint compensation committees of underlings to tell them how much the CEO and top managers should be paid, and who also award outlandish bonuses to these executives for incredibly poor performance. There seems to be a bit of a conflict of interest here, but I do not know how to fix it.

Sir Isaac Newton - the First Moneyphysicist
Wall Street and the banking industry will surely object to these measures as being too harsh, but they should be grateful that they never had to deal with the very first moneyphysicist, Sir Isaac Newton! Newton published his famous Principia in 1687, in which he outlined Newtonian mechanics, and it made him renowned throughout the kingdom as the Einstein of his day. In 1696, Newton was made Warden of the Royal Mint, a figurehead patronage job with no real duties, as a reward, and in 1699 he became Master of the Mint when his predecessor died in office. To the surprise of all, Newton took these figurehead patronage jobs seriously and became a bit of a loose cannon for the Powers That Be running the Royal Mint. At the time, England was at war with France, and perhaps 20% of the English coinage was counterfeit, which made it difficult to pay for the costs of war. Also, many of the English coins were clipped – people were shaving silver off the edges of the coins. In 1688, the average circulating silver coin was about 15% underweight because of clipping, and by late 1695, over one-half of an average circulating coin had been shaved away. To remedy these problems, Newton oversaw the Great Recoinage of 1696, a massive recoinage effort to replace the untrustworthy coins of the realm with authentic coins of standard weight and silver content that had milled edges to prevent clipping. In 1705, Newton was knighted by Queen Anne as Sir Isaac Newton, not for his great scientific achievements, but for his services to the Crown at the Royal Mint.

Newton was also in charge of tracking down and prosecuting counterfeiters and clippers. Counterfeiting was considered High Treason and a capital offense at the time because it undermined the national defense, and thus counterfeiters were viewed as traitors to the Crown. Newton used to disguise himself and walk the streets of London at night, looking for counterfeiters, and actively pursued their arrest and punishment. He attended the public executions of counterfeiters for High Treason, which were gruesome affairs in England prior to their ban in 1814. The condemned were drawn and quartered, as described in Wikipedia:

1. Dragged on a hurdle (a wooden frame) to the place of execution. (This is one possible meaning of drawn.)

2. Hanged by the neck for a short time or until almost dead (hanged).

3. Disembowelled and emasculated and the genitalia and entrails burned before the condemned's eyes (this is another meaning of drawn).

4. Beheaded and the body divided into four parts (quartered).

As you can see, this was far worse than presiding over any annual shareholders meeting filled with angry shareholders! So when the CEOs of Wall Street receive their massive Christmas bonuses this year, funded by taxpayers, and millions of taxpayers are similarly rewarded with devastated 401(k)s and pink slips, the CEOs should be very grateful that Sir Isaac Newton is no longer running about dispensing justice!

Next time we will definitely sum things up with the lessons learned from softwarephysics for IT professionals, including a list of tips on how you can improve your performance and make your IT job easier by using softwarephysics on a daily basis.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston