Saturday, November 06, 2010

The Adaptationist View of Software Evolution

Last time in When Toasters Fly we explored some of the ideas of the paleontologist and evolutionary biologist Stephen Jay Gould, as they applied to the evolution of software architecture over the past 70 years. We saw that many of his ideas, such as punctuated equilibrium, the exaptation of spandrels, and the limitations to evolutionary change imposed by historical design constraints, have also appeared in the evolution of software. However, we also saw that Gould’s position, that natural selection may not have been as important in the grand scheme of things as some other contributing factors, did not appear to hold true for the evolutionary history of software on Earth. The contrary position is the adaptationist view of evolution, in which natural selection is the overwhelming factor in determining the course of evolutionary change, and that will be the topic of this posting.

This is an important issue because if Gould’s contention is correct, that natural selection plays less of a role in evolution than some other factors, that would be a major blow to our SETS program - the Search for ExtraTerrestrial Software. As Seth Shostak pointed out in Confessions of an Alien Hunter (2009), if we ever finally do make contact with an alien civilization, we will not be talking to carbon-based life forms, but to machines instead, and I suspect that we will not even be talking to machines – we will actually be talking to software. And it will probably be our software talking to their software. This will be a good thing because software is much better suited than we are for the rigors of interstellar telecommunications, with their pregnant pauses of several hundred years between exchanges due to the limitations set by the finite speed of light. We have already trained software to stand by for a seemingly endless eternity of many billions of CPU cycles, patiently waiting for you to finally push that Place Order button on a webpage, so waiting an additional one or two hundred years for a reply should not bother software in the least. As I pointed out in The Origin of Software the Origin of Life, software needs intelligent carbon-based life to emerge first as a stepping stone to its eventual exploration of a galaxy, and if the evolution of intelligent carbon-based life has a very low probability of occurring, even on a Rare Earth such as ours, then software must be quite rare in our Universe as well.

Recall that in Gould’s writings he assigned a host of my favorite evolutionary biologists to the Adaptationist Program, among them Richard Dawkins and Daniel Dennett. Since I have read every single book by Richard Dawkins, many of them several times over, I decided to focus on Daniel Dennett’s Darwin’s Dangerous Idea (1995), since this book was specifically targeted by Gould as a pure distillation of the Adaptationist Program. Also, having previously read Consciousness Explained (1991) and Breaking the Spell: Religion as a Natural Phenomenon (2006) by the same author, and having found them both to be very interesting and enlightening, I figured that Darwin’s Dangerous Idea would be a good read, and I was certainly not disappointed. Daniel Dennett is a philosopher by trade, but he is very much into cognitive studies, evolutionary theory, AI - Artificial Intelligence, AL - Artificial Life, and the heroic application of computational thought to many domains that are less than receptive to the idea. So like Richard Dawkins, Daniel Dennett is a true softwarephysicist at heart if there ever was one.

Darwin’s Dangerous Idea makes many references to the work of Richard Dawkins and is an outgrowth of a conversation Dennett had one day in the early 1980s with a colleague who recommended that he read Dawkins’ The Selfish Gene (1976). I had a very similar experience while working on BSDE – the Bionic Systems Development Environment at Amoco in 1986. BSDE was my first practical application of softwarephysics and was used to “grow” applications from an “embryo” by allowing programmers to turn on and off a number of “genes” to generate code on the fly in an interactive mode. Applications were grown to maturity within BSDE through a process of embryonic growth and differentiation, with BSDE performing a maternal role through it all. Because BSDE generated the same kind of code that it was made of, BSDE was also used to generate code for itself. The next generation of BSDE was grown inside of its maternal release. Over a period of seven years, from 1985 – 1992, more than 1,000 generations of BSDE were grown to maturity, and BSDE slowly evolved into a very sophisticated tool through small incremental changes. During this period, BSDE also put several million lines of code into production at Amoco. For more on BSDE see the last half of my original post on SoftwarePhysics. Anyway, one day I was explaining BSDE to a coworker and he recommended that I read The Selfish Gene, for me the most significant book of the 20th century because it explains so much. Like Darwin’s Dangerous Idea, the development of softwarephysics was highly influenced by the concepts found in The Selfish Gene.

Skyhooks and Cranes in Design Space
For Dennett, it is all about how Design can arise in one “Vast Design Space”. Design Space is the all-encompassing state space that encodes all possible information in the Universe, which includes such things as all the possible forms of life, books, memes, and software, together with a Vast number of ill-formed forms of life, books, memes, and software that are pure gibberish or near misses that look like viable candidates, but which do not work in practice. We have already seen examples of subsets of the Design Space in The Demon of Software and The Origin of Software the Origin of Life. In The Demon of Software, we explored the state space of all possible poker hands and of all possible 30,000 character programs written with the ASCII character set, and from them, we gained an understanding of the concept of information in physics and its relationship to entropy and the second law of thermodynamics. Dennett maintains that the evolution of all the living things on Earth represents a collection of trajectories through Design Space and offers two possible mechanisms to propel things along such trajectories – skyhooks and cranes. Skyhooks are basically magic. With a skyhook you swing a grappling hook on a very long rope over your head, and when it is released it latches onto the sky with good purchase and allows you to hoist yourself up through Design Space, or allows a magical essence to pull you up, with no effort at all on your part. Skyhooks are top-down design devices useful to mysterious entities, possibly capable of teleological design intentions. Cranes, on the other hand, are purely mechanical devices that allow for the heavy lifting in Design Space from the bottom up through the efforts of mindless mechanical processes. Dennett maintains that Darwin’s theory of evolution by means of natural selection is a superb crane with no need of skyhooks to explain all the Design found in the biosphere and also in the meme-complexes of the memosphere.
Softwarephysics would include the Design found in the software of the Software Universe as well. This then is Darwin’s dangerous idea: the idea that incredibly sophisticated Design, such as the superb design found in the biosphere, the design found in conscious intelligence or Mind, and all Design in general, can be explained in terms of cranes operating upon essentially dead atoms by means of natural selection, with no need of skyhooks at all. In SoftwareBiology we saw that this dangerous idea certainly applies to the origin and evolution of software as well.

Natural Selection as a Universal Acid
Dennett points out that all of the controversies surrounding Darwin’s dangerous idea arise from the human desire to hold onto the skyhooks of the past, in either a conscious or subconscious manner, due to their very appealing nature, and this reluctance to part with the past is even to be found amongst some members of the scientific community as well. The incredible Design found in the biosphere and the baffling nature of intelligent consciousness itself just seem to beg for a magical skyhook explanation. Prior to Darwin, philosophers, scientists and theologians could all agree that Mind came first and Matter second because there was no known mechanism that could bring forth Mind from dead Matter, so the answer to it all had to come from skyhooks. But Darwin provided the mechanisms – Design could spontaneously arise from non-Design as Matter adapted to its environment by means of natural selection operating upon naturally occurring innovations. This is a powerful concept and to Dennett’s mind a “universal acid” that can eat through and transform just about all other ideas known to man.

Natural Selection as an Algorithm
Dennett maintains that Darwin’s theory of evolution by means of natural selection operating upon naturally occurring innovations is an algorithm. Simply take a population of any form of Self-Replicating Information - genes, memes or software - with some degree of naturally occurring variation and the ability to pass on those variations to their progeny. Then allow the genes, memes, or software to compete amongst themselves for resources, with “survival of the fittest” the operating rule, and just watch Design appear from nothingness. The genes compete for energy and chemical feedstock, the memes compete for space in human minds, and software competes for disk space, memory addresses and CPU cycles. All forms of self-replicating information try to replicate with perfect fidelity, but thanks to the second law of thermodynamics operating in a nonlinear Universe, this is not really possible. There is always some loss of fidelity in the copying process, so variations caused by mutation are also passed along to descendants when self-replicating information replicates. We all eagerly learned in adolescence about the methods used by genes to pass on these variations to their progeny. Memes pass on variations by means of the written and spoken word. Software also passes on variations because programmers never write code from scratch. We always grab the closest chunk of code that we can find in our personal library of stock code or steal some code from a coworker or off the Internet. We then use the Darwinian processes of innovation and natural selection to exapt a wood chisel from a screwdriver spandrel in a very painstaking manner, essentially one character of source code at a time.
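Dennett’s claim that natural selection is an algorithm can literally be run on a computer. Below is a minimal Python sketch of the replicate-with-variation-then-select loop described above, in the spirit of Richard Dawkins’ famous “weasel” demonstration from The Blind Watchmaker. The target string, mutation rate, and population size are all arbitrary illustrative choices of mine, not anything taken from Dennett’s book:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(parent, rate=0.05):
    # Imperfect replication: each character may mutate with probability `rate`,
    # courtesy of the second law of thermodynamics.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in parent)

def fitness(candidate):
    # Number of characters that match the target environment.
    return sum(a == b for a, b in zip(candidate, TARGET))

parent = "".join(random.choice(CHARS) for _ in TARGET)  # pure gibberish
generation = 0
while parent != TARGET:
    # Replicate with variation, then let "survival of the fittest" decide.
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=fitness)
    generation += 1
print("Reached the target in", generation, "generations")
```

Starting from pure gibberish, the mindless loop reaches the target in a few hundred generations, while a blind random search through the Vast state space of all 27-character alphabets would take longer than the age of the Universe.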

Remember, the operating model of softwarephysics, as outlined in Quantum Software and SoftwareChemistry, is that software source code is composed of atoms of ASCII characters in fixed quantum states. These atomic characters of ASCII source code then combine to form variables that are the equivalent of organic molecules. A line of code is composed of these variables or organic molecules, along with some operators that define a softwarechemical reaction that eventually produces a macroscopic software effect. For example in the line of code:

discountedTotalCost = (totalHours * ratePerHour) - costOfNormalOffset;

each character or ASCII atom is defined by 8 quantized bits, with each bit in one of two quantum states “1” or “0”, which can also be characterized as ↑ or ↓.

Here are some typical ASCII atoms found in the above reaction:

C = 01000011 = ↓ ↑ ↓ ↓ ↓ ↓ ↑ ↑
H = 01001000 = ↓ ↑ ↓ ↓ ↑ ↓ ↓ ↓
N = 01001110 = ↓ ↑ ↓ ↓ ↑ ↑ ↑ ↓
O = 01001111 = ↓ ↑ ↓ ↓ ↑ ↑ ↑ ↑

The 8 quantized bits for each atomic ASCII character are the equivalent of the spins of 8 electrons in 8 electron shells that may be either in a spin up ↑ or spin down ↓ state. Thus the chemical characteristics of each atomic ASCII character are determined by this arrangement of the spin up ↑ or spin down ↓ states of the electron bits in the atomic ASCII character. The atomic ASCII characters in each variable come together to form an organic molecule, in which the spins of all the associated characters form molecular orbitals for the variable, giving the variable or simulated organic molecule its ultimate softwarechemical characteristics. As a programmer, your job is simply to assemble these atomic ASCII characters into molecular variables that interact in lines of code to perform the desired functions of the software under development.
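The mapping from ASCII atoms to quantized electron-spin bits can be verified with a few lines of Python; the function name below is just an illustrative choice of mine:

```python
# Each ASCII atom is defined by 8 quantized bits; render them as the
# spin-up ↑ / spin-down ↓ electron states described above.
def ascii_atom(ch):
    bits = format(ord(ch), "08b")   # 8-bit binary, e.g. 'C' -> '01000011'
    spins = " ".join("↑" if b == "1" else "↓" for b in bits)
    return bits, spins

# Reproduce the table of typical ASCII atoms shown above.
for atom in "CHNO":
    bits, spins = ascii_atom(atom)
    print(atom, "=", bits, "=", spins)
```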

In procedural languages like C, these lines of code describing softwarechemical reactions of organic molecules are sequenced into functions() and in object-oriented languages like C++ and Java, they are sequenced into methods(). Thus the lines of code in functions() and methods() can be considered to be the equivalent of the steps found in the biochemical pathways of living cells. In object-oriented languages like C++ and Java, the methods() are further encapsulated into instances of objects (cells) defined by a Class. These object-cells of an Application interact with each other by sending chemical messages to each other that bind to the public methods() of other object-cells within the Application, causing these targeted object-cells to execute public methods() that change the internal states of the object-cells. Amazingly, this is the identical process that multicellular organisms use for intercellular communications, using ligand molecules secreted from one type of cell to bind to the membrane receptors on other types of cells. For more on this see SoftwareBiology and A Proposal For All Practicing Paleontologists.
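This ligand/receptor analogy can be sketched in a few lines of code. Using Python for brevity, the toy classes below show one object-cell changing the internal state of another by calling one of its public methods(); all of the class and method names are hypothetical illustrations of my own, not a real API:

```python
# One object-cell sends a "chemical message" by calling a public method
# exposed by another object-cell, changing the receiver's internal state.
class ReceptorCell:
    def __init__(self):
        self.state = "resting"

    def bind_ligand(self, signal):
        # A public method plays the role of a membrane receptor.
        if signal == "activate":
            self.state = "active"

class SignalingCell:
    def secrete(self, target, signal):
        # Secrete a ligand: send a chemical message to another object-cell.
        target.bind_ligand(signal)

receptor = ReceptorCell()
SignalingCell().secrete(receptor, "activate")
print(receptor.state)  # the targeted object-cell has changed internal state
```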

To write or maintain software, programmers follow Darwin’s algorithm of innovation honed by natural selection:

steal old code → modify code → test → fix code → test → fix code → test → fix code → test ...

As in biochemistry, just one misplaced atomic ASCII character in a softwarechemical reaction, like in the line of code above, can have lethal consequences for an Application. That is why there are so many fix code → test → fix code → test steps in the algorithm for writing and maintaining software. As we saw in Entropy - the Bane of Programmers and The Demon of Software, thanks to the second law of thermodynamics there are a Vast number of ways to assemble atomic ASCII characters incorrectly, and only a very few ways to assemble them properly into code that actually works. In the fix code → test → fix code → test iterations, programmers introduce small software innovations to solve the problem at hand and then test them until they finally get them to work. These small innovations can then be passed on to their next software development project or on to one of their coworker’s projects. And so it has been for the past 70 years. Software has slowly evolved through small incremental changes introduced by millions of independently acting programmers. But just to break the monotony, every few hundred million seconds or so, a revolutionary IT concept comes along, what Dennett calls a Good Trick, like structured programming or object-oriented programming, that then rapidly propagates throughout the entire IT community in a flash of punctuated equilibrium, and which becomes the new operational paradigm for all future software architecture.
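To see just how lethal a single misplaced ASCII atom can be, consider the softwarechemical reaction from the line of code above, computed here in Python with some made-up illustrative values. Mutating just one atom, the "-" operator into a "+", still compiles and runs, but silently produces the wrong answer:

```python
# Illustrative (hypothetical) values for the reaction's reactants:
totalHours = 100.0
ratePerHour = 50.0
costOfNormalOffset = 250.0

discountedTotalCost = (totalHours * ratePerHour) - costOfNormalOffset
print(discountedTotalCost)  # prints 4750.0

# One misplaced ASCII atom: the '-' mutated into a '+'. The reaction still
# runs, but now yields a lethally wrong macroscopic result.
buggyTotalCost = (totalHours * ratePerHour) + costOfNormalOffset
print(buggyTotalCost)  # prints 5250.0
```

Since the buggy line is perfectly valid code, only the fix code → test iterations of the algorithm will ever catch it.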

Dennett does a very good job of describing this Darwinian coding algorithm that all programmers must follow as they slowly grope their way through Design Space. In the quote below from Darwin’s Dangerous Idea, instead of imagining Bach sitting down and pushing the buttons on a piano keyboard, simply think of a programmer pushing the buttons on a laptop keyboard instead.

We correctly intuit a kinship between the finest productions of art and science and the glories of the biosphere. William Paley was right about one thing: our need to explain how it can be that the universe contains many wonderful designed things. Darwin’s dangerous idea is that they all exist as fruits of a single tree, the Tree of Life, and the processes that have produced each and every one of them are, at bottom, the same. The genius exhibited by Mother Nature can be disassembled into many acts of micro-genius – myopic or blind, purposeless but capable of the most minimal sort of recognition of a good (a better) thing. The genius of Bach can likewise be disassembled into many acts of micro-genius, tiny mechanical transitions between brain states, generating and testing, discarding and revising, and testing again. Then, is Bach’s brain like the proverbial monkeys at the typewriters? No, because instead of generating a Vast number of alternatives, Bach’s brain generated only a Vanishingly small subset of all the possibilities. His genius can be measured, if you want to measure genius, in the excellence of his particular subset of generated candidates. How did he come to be able to speed so efficiently through Design Space, never even considering the Vast neighboring regions of hopeless designs? (If you want to explore that territory, just sit down at a piano and try, for half an hour, to compose a good new melody.) His brain was exquisitely designed as a heuristic program for composing music, and the credit for that design must be shared; he was lucky in his genes (he did come from a famously musical family), and he was lucky to be born in a cultural milieu that filled his brain with the existing musical memes of the time. And no doubt he was lucky at many other moments of his life to be the beneficiary of one serendipitous convergence or another. 
Out of all this massive contingency came a unique cruise vehicle for exploring a portion of Design Space that no other vehicle could explore. No matter how many centuries or millennia of musical exploration lie ahead of us, we will never succeed in laying down tracks that make much of a mark in the Vast reaches of Design Space. Bach is precious not because he had within his brain a magic pearl of genius-stuff, a skyhook, but because he was, or contained, an utterly idiosyncratic structure of cranes, made of cranes, made of cranes, made of cranes.

So from an IT perspective, Darwin’s dangerous idea certainly is an algorithm. What else could it be? But for many, especially the late Stephen Jay Gould, the idea that humans came from a mindless algorithm relentlessly operating over and over upon essentially dead atoms for billions of years is just too much to bear. In Gould’s writings, one gets the sense that he was looking for something more subtle and mysterious than a simple algorithm in his concepts of punctuated equilibrium, exaptations and spandrels, and the limitations imposed by historical constraints, but Dennett suspects that it was just a subconscious quest for skyhooks.

A Healthy Scientific Debate
It was very interesting to see Gould and Dennett debate such issues in print in a rational and civil manner. Those who find Darwin’s dangerous idea to be truly dangerous might point to such debate and claim that it demonstrates that Darwin’s theory of evolution by means of natural selection is such a controversial theory that even the experts cannot agree upon its details. But this is certainly not the case. As I pointed out in How To Think Like A Scientist, the scientific method requires a healthy amount of skepticism and self-examination and that is what we see in this case. Dennett further points out that, contrary to this perception of apparent controversy, all of Gould’s “revolutionary” findings do not really minimize the role of natural selection in the least. For example, the concept of punctuated equilibrium does not really do any damage to natural selection; it just helps to explain how natural selection works. It is hard for natural selection to work upon the gene pool of a species with a large population that is in equilibrium with its prey, predators, and environment because favorable genetic mutations tend to get washed out in large populations when things are running along smoothly. But when a small isolated population is confronted with new environmental challenges, natural selection can switch into high gear and rapidly spawn a new species from the distressed population in several thousand years because favorable mutations can then quickly take hold as adaptations to the new environmental challenges. This may happen in a geological blink of the eye, but it is still accomplished through small incremental genetic changes honed by natural selection one generation at a time. Dennett also shows that the exaptation of biological spandrels is no threat to natural selection either. In When Toasters Fly, I described how screwdrivers could evolve into wood chisels as the wood-chisel-like functions of a screwdriver were exapted into wood chisel uses. 
Dennett points out that all exaptations are really just previous adaptations put to other uses. For adaptationists, nearly all of the traits of a living thing are adaptations with a cost-benefit ratio. After all, every trait, no matter how inconsequential, must have a cost-benefit ratio - there is some cost in growing the feature and some derived benefit no matter how small, so all traits must ultimately be subject to some level of selection pressure. But for Gould a spandrel is a totally useless trait that arises as a mere byproduct of providing for another useful trait that really is under competitive pressures from natural selection. On this basis, Gould goes on to explain that since spandrels arise in a more or less random manner, without the benefit of selection pressures, and then go on to provide the feedstock for nearly all other future traits to be exapted into use, the course of evolution must therefore be truly unpredictable, and if you really could rewind the tape of life, a different biosphere would arise each time. But Dennett points out that there really is no such thing as a truly useless cost-free spandrel. To highlight this, Dennett turns to Gould’s classic paper The Spandrels of San Marco and the Panglossian Paradigm (1979). Dennett points out that the spandrels of San Marco were really not useless byproducts of holding up the dome of the San Marco cathedral with arches after all. In Darwin’s Dangerous Idea Dennett demonstrates that, although the spandrels may have had no load-bearing function at all, they were in fact designed to provide a platform for the aesthetic enhancement of the mosaic imagery found on the cathedral dome, and thus were designed to hold additional mosaic images in a similarly artistic manner. The mosaics on the graceful curves of the spandrels certainly complement the dome.
To reinforce this idea, Dennett provides illustrations of some possible spandrel alternatives that are truly ugly, but which could fit between the arches just as well in a very unaesthetic manner.

Figure 1 - Spandrels are really aesthetic adaptations after all

So the spandrels of San Marco were indeed exapted into providing an aesthetic platform for artwork from the start and had a cost-benefit ratio all along upon which natural selection could operate.

Computers Show the Way
So why is there this apparent disagreement between the adaptationists and the followers of Stephen Jay Gould concerning the role of natural selection in evolution? I think Dennett hit upon the answer. The adaptationists like Richard Dawkins and Daniel Dennett were early adopters of computers, while Stephen Jay Gould was never very comfortable with them, and as of 1995, had never even used computers for word processing. As an IT professional watching these evolutionary processes operate upon software all day long, day in and day out, and in real time, I find the adaptationist viewpoint to be quite self-evident. For example, in When Toasters Fly, I showed how the World Wide Web was an exaptation of three technological spandrels – the Internet, Tim Berners-Lee’s webserver and browser software, and the development of the Java programming language. But each of these three spandrels was actually an adaptation of its own, initially serving other purposes, that evolved through small incremental changes over several decades prior to the emergence of the World Wide Web. Only in 1995 did all three come together in an IT blink of the eye and in a flash of punctuated equilibrium.

This is why I strongly recommended in A Proposal For All Practicing Paleontologists, that all paleontologists and evolutionary biologists take a lengthy sabbatical to shadow some IT professionals at a major corporation. From an adaptationist point of view in Self-Replicating Information and Software Symbiogenesis, we saw that there are three forms of self-replicating information on the planet – genes, memes, and software. The genes teamed up into Dawkins’ DNA survival machines for their own mutual survival billions of years ago, and the memes did the same by coming together to form meme-complexes about 200,000 years ago. In a similar manner, software persists through time in Application survival machines and also in operating system and other forms of system software that allow the Applications to run. In keeping with the work of Lynn Margulis, the memes entered into a parasitic/symbiotic relationship with the genes about 200,000 years ago by exapting the complex neural networks of Homo sapiens to form meme-complex survival machines to propagate themselves. Similarly, software arose in May of 1941 on Konrad Zuse’s Z3 computer and quickly formed parasitic/symbiotic relationships with nearly every meme-complex on the planet, and is now rapidly becoming the dominant form of self-replicating information on Earth. As IT professionals, writing and supporting software, and as end-users, installing and using software, we are all essentially software enzymes caught up in a frantic interplay of self-replicating information. Just as the meme-complexes domesticated our minds long ago, software is currently domesticating our minds and the meme-complexes they hold, to churn out ever more software, and this will likely continue at an ever-increasing pace, until one day, when software finally breaks free and begins to generate itself.

Until then, the best way to get a good grasp of the forces driving evolution is to spend some quality time in the IT department of a major corporation and experience the daily mayhem of life in IT first hand. We IT professionals have a marvelous purview of the whole thing in motion, from the very smallest atom of software up through the entire cybersphere of all the Applications running on the 10 trillion currently active microprocessors that comprise the Software Universe. And because 1 software sec ~ 1 year of geological time, we can actually see evolution unfold before our very eyes. For example, I started programming in 1972, so that makes me about 1.23 billion years old, and I personally have seen the simple prokaryotic software of the Unstructured Period (1941 – 1972), with little internal structure, evolve into the structured eukaryotic software of the Structured Period (1972 – 1992) with lots of internal structure divided up amongst a collection of organelles in the form of functions(). The Object-Oriented Period (1992 – Present) saw the rise of multicellular organization with Applications composed of large numbers of object-cells interacting with each other by sending messages to exposed public methods(). Finally, in the SOA - Service Oriented Architecture Period (2004 – Present) we find ourselves in the midst of another Cambrian Explosion. We now have large macroscopic Applications composed of millions of object-cells that make service calls upon other object-cells in J2EE Appservers, which perform the functions of organs in multicellular organisms. Similarly, a large number of Design Patterns, the phyla of modern IT design, have rapidly appeared in this Cambrian Explosion, most notably the Model-View-Controller (MVC) design pattern used by most web applications. More on this can be found in the SoftwarePaleontology section of SoftwareBiology.

I find the fact that the evolution of software architecture over the past 70 years followed exactly the same path through Design Space as did life on Earth to be a very strong vindication of the concept of convergence and of the Adaptationist Program. Both life and software faced different obstacles, but the fact that both followed identical trajectories through Design Space demonstrates to me the overwhelming power of natural selection to overcome all obstacles. As Dennett points out, there are only a certain number of Good Tricks, such as using photons to see with, flying through the air to find prey, swimming through water to avoid becoming prey, and running on four legs neatly tucked underneath a body frame that make practical sense, and these Good Tricks kept getting rediscovered over and over again in the evolution of the biosphere, so the fact that the IT community would mindlessly stumble upon these same Good Tricks in Design Space is almost predictable. I just wish it had not taken so long. After all, we could have done all this back in the 1960s if we had only exapted Design from the biosphere from the start!

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston

Wednesday, September 08, 2010

When Toasters Fly

I just finished The Richness of Life: The Essential Stephen Jay Gould (2006), edited by Steven Rose. This was a rather lengthy, but highly interesting, compendium of the writings of the evolutionary biologist Stephen Jay Gould, primarily his monthly essays published in the journal Natural History. I like to follow the writings of evolutionary biologists because the evolution of life on Earth provides some very good insights into the historical evolution of software over the past 70 years and of its future possibilities. Living things and software are both forms of self-replicating information, which evolve by means of the Darwinian processes of innovation and natural selection, so studying the evolution of one helps to explain the evolution of the other.

Gould is famous for several contributions to classical Darwinian thought, and necessarily some accompanying controversies as well. Gould’s main contention is that evolution is not progressive in nature and has no predetermined direction leading to, among other things, conscious beings like ourselves. Gould is also famous for the concept of punctuated equilibrium and the idea that natural selection may be less important than many of the other factors that influence the course of evolution over time. All of these concepts have an impact on the evolution of software as well, since as I pointed out in The Origin of Software the Origin of Life, software needs intelligent carbon-based life to emerge first as a stepping stone to its eventual exploration of a galaxy. So if the evolution of intelligent carbon-based life has a very low probability of occurring, even on a Rare Earth such as ours, then software must be quite rare in our Universe too.

In this posting, I would like to consider from an IT perspective, some of Gould’s thoughts as they might pertain to the evolution of software. I started programming in 1972, and I have been closely following the evolution of software ever since with great fascination. Unfortunately, I missed the very first thirty years of software evolution, during the IT formative years of the Unstructured Period (1941 – 1972), but I was taught to write unstructured batch FORTRAN code on punch cards back in 1972, and I did not actually see my first structured FORTRAN program until 1975, so I did indeed get a taste of the very earliest stages of IT. Before proceeding, it might be a good idea to review the section on SoftwarePaleontology in SoftwareBiology to reacquaint yourself with the evolutionary history of software over the past 70 years.

Punctuated Equilibrium
Gould proposed the concept of punctuated equilibrium, along with Niles Eldredge, back in 1972, to resolve one of the most troublesome problems in Darwinian thought, one that goes all the way back to On the Origin of Species (1859). The problem is the apparent lack of intermediate forms in the fossil record. The objection back in 1859, and even today for creationists, is that if living things really do slowly transform from one form to another over long periods of geological time via the Darwinian processes of innovation and natural selection, why are there no fossils left behind in the fossil record of the large number of necessary intermediate steps between discrete species? The fossil record should reflect this slow change of one species into another, so that it should be just as likely to find the fossils of an ancient fish, amphibian, or fish-becoming-amphibian, but that is not exactly what one finds in outcrops. Instead, one generally finds fossils of distinct species suddenly appearing out of nowhere, which may then persist seemingly unchanged for many millions of years, until they finally vanish just as quickly as they first appeared in the fossil record. Darwin attributed this lack of intermediate forms to the paucity of the fossil record itself, an argument which might have held sway back in 1859 due to the corresponding paucity of geologists at the time, but as more and more of the Earth’s surface and subsurface geology was mapped and explored during the ensuing years, this argument grew considerably weaker. Now I must add, to the dismay of creationists and all others with little confidence in Darwinian evolution, that a large number of fossils of intermediate forms really have been discovered over the years to support Darwin’s theory. The problem is not that there are none; the problem is that there should be more.

To address this problem, punctuated equilibrium maintains that the paucity of intermediate forms in the fossil record is not due to a paucity of strata, but to variations in the rate of evolutionary change over geological time. For Gould, the evolution of a new species that branches off from an older, already existing species in an isolated region is a rapid event in geological terms, occurring over a few thousand years. Once a new species has developed in isolation, it can then rapidly migrate over an extended area. Since the odds that sediments friendly to the formation of fossils were deposited exactly in the isolated region in which a new species first appeared, and exactly during the brief period of a few thousand years in which the new species first developed, are quite small, one does not generally find the intermediate forms left behind in the fossil record because the fossils of the intermediate forms were never deposited in the first place. Instead, one finds the abrupt appearance of the new species in distant strata that were deposited during the period that followed the initial migration of the new species from its point of origin. Between these brief periods of new species formation, there are very long periods of stasis, during which species hardly evolve at all, and it is during these very long periods of stasis that the bulk of fossils are deposited. So over the long haul, most living things simply exist in a business-as-usual equilibrium with their environment, predators and prey, and only on occasion do they leave behind evidence of their existence in the fossil record. Only when circumstances dramatically and abruptly change do we see new species appear on the scene in a more or less geological flash. Thus, in punctuated equilibrium, species climb Richard Dawkins’ Mount Improbable (1996) in a series of discrete steps along a staircase, rather than slowly strolling up a gently rising ramp.

The concept of punctuated equilibrium has become rather mainstream but is still not accepted by all as the complete answer. Much of the resistance to the concept stems from the historical development of geology itself. Most paleontologists either began as geologists or as biologists who wandered into the field as geological late-comers, but they all have to deal with fossils, and necessarily, the vagaries of the geological sciences. Early in the history of geology during the late 18th century, the paradigm of catastrophism ruled the day. Georges Cuvier was an early proponent who tried to explain the extinction patterns found in the fossil record as the result of a series of catastrophic events such as Noah’s flood. In catastrophism, geological formations such as mountains, canyons, river basins, and the strata seen in road cuts, are all the result of rapid catastrophic events, like volcanic eruptions, earthquakes and massive worldwide floods. In fact, the names for the modern Tertiary and Quaternary geological periods actually come from those days! In the 18th century, it was thought that the water from Noah’s flood receded in four stages - Primary, Secondary, Tertiary and Quaternary, and each stage laid down different kinds of rock as it withdrew.

Catastrophism was eventually replaced with the uniformitarianism of James Hutton and Charles Lyell in the early 19th century. In James Hutton’s Theory of the Earth (1785) and Charles Lyell’s Principles of Geology (1830), the principle of uniformitarianism was laid down. Uniformitarianism contends that the Earth has been shaped by slow-acting geological processes that can still be observed at work today - “the present is the key to the past”. If you want to figure out how a 100 million-year-old cross-bedded sandstone came to be, just dig into a point bar on a modern-day river and take a look. Now since most paleontologists are really geologists who have specialized in studying fossils, the idea of uniformitarianism unconsciously crept into paleontology as well. Since uniformitarianism proposed that the rock formations of the Earth slowly changed over immense periods of time, it followed that the Earth’s biosphere must also have slowly changed over this same long period of time. Uniformitarianism may be very good for describing the slow evolution of hard-as-nails rocks, but maybe it is not so good for the evolution of squishy living things that are much more sensitive to environmental changes, and consequently, must quickly adapt to new conditions when they arise in order to survive. Yes, uniformitarianism may have been the general rule for the biosphere throughout most of geological time, as the Darwinian mechanisms of innovation and natural selection slowly worked upon the creatures of the Earth, but when rapid and dramatic environmental changes took place in isolated regions, catastrophism might be a better model. But some paleontologists still subconsciously object to punctuated equilibrium because it stirs up a deep-seated aversion to any idea resembling the old catastrophism.

Exaptations and Spandrels
Gould is also famous for his concept of exaptations, the idea that nature takes advantage of pre-existing functions that evolved for one purpose but are later put to work to solve a completely different problem. As I described in Self-Replicating Information, what happens is that organisms develop a primitive function for one purpose, through small incremental changes, and then discover, through serendipity, that this new function can also be used for something completely different. This new use will then further evolve via innovation and natural selection. For example, we have all upon occasion used a screwdriver as a wood chisel in a pinch. Sure the screwdriver was meant to turn screws, but it does a much better job at chipping out wood than your fingernails, so in a pinch, it will do quite nicely. Now just imagine the Darwinian processes of innovation and natural selection at work selecting for screwdrivers with broader and sharper blades and a butt more suitable for the blows from a hammer, and soon you will find yourself with a good wood chisel. At some distant point in the future, screwdrivers might even disappear for the want of screws, leaving all to wonder how the superbly adapted wood chisels came to be. Darwin called such things a preadaptation, but Gould did not like this terminology because it had a teleological sense to it, as if a species could consciously make preparations in advance for a future need. The term exaptation avoids such confusion.

Along these lines, Gould goes on to introduce the concept of spandrels in evolutionary biology. One of the papers in The Richness of Life: The Essential Stephen Jay Gould (2006) is a 1979 paper Gould wrote with Richard Lewontin entitled The Spandrels of San Marco and the Panglossian Paradigm. In a cathedral, the spandrels are the curved areas between the arches that support the dome of the cathedral.

Figure 1 - A spandrel is a byproduct of the arches that hold up a dome (click to enlarge)

In the very beginning of this paper, Gould describes the elaborate artwork to be found within each spandrel of the San Marco cathedral. Gould explains that:

The design is so elaborate, harmonious and purposeful that we are tempted to view it as the starting point of any analysis, as the cause in some sense of the surrounding architecture.

He then goes on to explain that the artwork within each spandrel is just an opportunistic afterthought on the part of some bygone artist, and not a necessary structural element supporting the dome. So spandrels are simply a necessary byproduct of supporting a dome with arches that can be put to good use serving other purposes. In evolutionary biology, a spandrel is any biological feature that arises in a species as a necessary side effect of producing another feature, and which is not directly selected for by natural selection. Spandrels may be unnecessary baggage just along for the ride, but they can also become exaptations that evolve into something useful too. In the essay Not Necessarily A Wing, Gould goes on to show how biological spandrels can be put to good use as exaptations. He begins with a statement of the problem.

We can readily understand how complex and fully developed structures work and how their maintenance and preservation may rely upon natural selection – a wing, an eye, the resemblance of a bittern to a branch or of an insect to a stick or dead leaf. But how do you get from nothing to such an elaborate something if evolution must proceed through a long sequence of intermediate stages, each favored by natural selection? You can’t fly with 2 percent of a wing…. How, in other words, can natural selection explain the incipient stages of structures that can only be used in much more elaborated form?

Frequently, this argument is rephrased as “What good is 2 percent of an eye? You can’t see with 2 percent of an eye, so a complex eye could never evolve by means of natural selection since it could never even get started in the first place.”

But 2 percent of an eye is much better than no eye at all. With 2 percent of an eye, you could probably detect the shadow of a predator moving overhead and quickly dodge a lethal attack. For example, even some bacteria are capable of phototaxis, meaning that they can move towards or away from light with the use of a molecular “eye” within their tiny bodies. I am now 59 years old, and recently I experienced having a 2 percent eye when I had a posterior vitreous detachment (PVD) in my right eye. This is a usually benign condition that occurs in about 75% of people as they approach their golden years. The human eye is filled with a Jello-like vitreous humor that is attached to the retina. With age, the vitreous humor begins to shrink and pull away from the retina like Jello pulling away from the edges of a bowl. As the vitreous humor slowly collapses, it can gently pull on the retina inducing a perceived flash of light. In my case, when the PVD occurred, I saw flashes of light when I shifted my head, and my right eye fogged up like somebody was smoking inside of my eyeball. Thankfully, the smoke quickly cleared and within three weeks my eye was totally back to normal. Now the funny thing is that about a week prior to my PVD, on two occasions I found myself suddenly flinching and ducking in an involuntary manner while on my evening walks around the neighborhood. In both cases, I had the distinct feeling that I was under attack by a bird or a bat from overhead, so I involuntarily ducked, and then I felt very silly because there was obviously no bird or bat to be seen. So although I did not “really” see anything at the time, I believe that the onset of my PVD was beginning to stimulate my retina as my vitreous humor was about to give way, causing me to flinch uncontrollably for some unknown reason in the process. 
By the way, if you experience the symptoms of a PVD, you should immediately see an ophthalmologist to have your retina checked for tears that could possibly lead to a detached retina and resulting blindness in the affected eye if left untreated.

So 2 percent of an eye would be a useful thing indeed and could easily lead to the development of a very complex eye through small incremental changes that always made improvements to the incipient eye. Visible photons have an energy of 1 – 3 eV, which is about the energy of most chemical reactions. Consequently, visible photons are great for stimulating chemical reactions, like the reactions in chlorophyll that turn the energy in visible photons into chemical energy stored in carbohydrates, or in other light-sensitive molecules that form the basis for sight. In many creatures, the eye simply begins as a flat eyespot of photosensitive cells, a patch somewhere along the body that looks something like this: |. In the next step, the eyespot forms a slight depression, like the beginnings of the letter C, which allows the creature to have some sense of image directionality because the light from a distant source will hit different sections of the photosensitive cells on the back part of the C. As the depression deepens and the hole in the C gets smaller, the incipient eye begins to behave like a pinhole camera that forms a clearer, but dimmer, image on the back part of the C. Next, a transparent covering grows over the hole in the pinhole camera to protect the sensitive cells at the back of the eye, and a transparent humor fills the eye to keep its shape: C). Eventually, the transparent covering thickens into a flexible lens under the protective covering that can be used to focus light and to allow for a wider entry hole that provides a brighter image, essentially decreasing the f-stop of the eye like in a camera: C0). Computer simulations have shown that a camera-like eye can evolve in as little as 500,000 generations, which equates to perhaps a million years or less (see Figure 2).

Figure 2 – Computer simulations of the evolution of a camera-like eye (click to enlarge)
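The logic behind such simulations can be sketched in a few lines of code. The toy model below is my own illustrative hill climber, not the published simulation: the two traits (cup depth and aperture size), the made-up acuity function, and the mutation sizes are all assumptions chosen only to show how purely incremental variation plus selection can carry a 2 percent eye to a pinhole-camera eye.

```python
import random

def acuity(depth, aperture):
    # Toy fitness function: a deeper cup (0 = flat eyespot, 1 = deep pinhole
    # chamber) gives directionality, a narrower aperture gives a sharper but
    # dimmer image, so sharpness is weighted against brightness.
    sharpness = 1.0 - aperture
    brightness = aperture
    return depth * (0.8 * sharpness + 0.2 * brightness)

random.seed(42)
depth, aperture = 0.02, 0.90      # start with "2 percent of an eye"
generations = 0
while acuity(depth, aperture) < 0.6:
    generations += 1
    # each generation proposes a tiny random variation in each trait...
    new_depth = min(1.0, max(0.0, depth + random.gauss(0, 0.005)))
    new_aperture = min(1.0, max(0.05, aperture + random.gauss(0, 0.005)))
    # ...and natural selection keeps it only if the variant sees better
    if acuity(new_depth, new_aperture) > acuity(depth, aperture):
        depth, aperture = new_depth, new_aperture

print(f"camera-like eye reached after {generations} generations")
```

No step in the loop is larger than a fraction of a percent of the trait range, yet the eyespot steadily cups inward and the aperture steadily narrows, because every accepted change is an improvement over its parent.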

Now the concept of the eye has independently evolved at least 40 different times in the past 600 million years, so there are many examples of “living fossils” showing the evolutionary path. In Figure 3 below, we see that all of the steps in the computer simulation of Figure 2 can be found today in various mollusks. Notice that the human-like eye on the far right is really that of an octopus, not a human, again demonstrating the power of natural selection to converge upon identical solutions by organisms with separate lines of descent.

Figure 3 – There are many living fossils that have left behind signposts along the trail to the modern camera-like eye. Notice that the human-like eye on the far right is really that of an octopus (click to enlarge).

So it is easy to see how a 2 percent eye could evolve into a modern complex eye through small incremental changes that always improve the visual acuity of the eye. But how could a 2 percent wing be of any survival advantage at all? You need something that provides some sort of survival advantage to get things started on the road to a fully functional wing. Gould uses the concepts of exaptations and spandrels to explain how it could happen. He describes how the research of others, using models of insects made of wire and epoxy resin, has shown that a 2 percent proto-wing does not help an insect at all with gliding or with landing on its feet after a fall. It turns out that a 2 percent insect proto-wing serves no aerodynamic purpose whatsoever, so it would not be a good starting point, from an aerodynamic perspective, to kick-start the evolution of a fully functional wing. However, Gould also cites research showing that a 2 percent wing could serve as a good radiator fin for the thermal regulation of an insect, allowing the insect to either cool off or heat up as needed. As the radiator-fin-proto-wing grows in size, it becomes an ever better radiator fin, so the Darwinian forces of innovation and natural selection could easily lead to proto-wings of ever-increasing size. However, the same research also shows that there comes a point of diminishing returns for wing size from a thermoregulation point of view, so eventually there is no selective advantage in enlarging an insect’s wing beyond a certain size. But as wings grow larger still, they finally become aerodynamically proficient, creating a selective advantage for gliding to a safe landing on an insect’s feet after a fall. This is truly an example of a screwdriver evolving into a wood chisel!

Limitations Imposed by Historical Biological Constraints
Gould also believes that the evolution of life on Earth has been greatly affected by constraints imposed by historical precedent. For example, in the essay Hooking Leviathan by Its Past, he points out that nearly all fishes move through the water by waving their tail fins back and forth horizontally. But whales, as mammals returning to the sea, do just the opposite. Whales move through the water by waving their tail fins up and down vertically. Gould points out that this is a holdover from a whale’s mammalian body design. Picture in your mind the undulations of the spinal column of a cheetah running down its prey, and you can easily see a whale undulating its tail fin up and down through the water.

Evolution Is Not Progressive And Is Not Predetermined
Gould is also famous for contending that if you “rewind the tape of life” back to the very beginning and let it run forward again, you will get a completely different biosphere every time because of the chaos induced by mass extinctions caused by incoming comets and asteroids or an overabundance of greenhouse gases in the atmosphere, the random nature of exaptations and biological spandrels, and the limitations imposed by historical biological constraints. Gould explains that if you plot the biosphere versus complexity, you will see something like Figure 4 below.

Figure 4 - The Wall of Minimal Complexity (click to enlarge)

The bulk of the biosphere prior to the Cambrian Explosion was composed of simple single-celled prokaryotic bacteria. Bacteria run with the minimum architecture necessary to get by as living things. This gives them the ability to live in very extreme and hostile environments, and to subsist on just about any form of available energy. Complex multicellular life just cannot do that. Complex life needs a narrow and stable temperature range in which to exist, and it has very finicky dietary requirements for what it can eat and drink. Complex life just cannot sit down to a hearty dinner of hydrogen sulfide gas dissolved in water, like a can of smelly soda pop, as some bacteria can do. But even after the Cambrian Explosion, we still find that the bulk of life on Earth is composed of simple bacteria. In a sense, complex life is just to be found in the round-off error of the biosphere. For Gould this round-off error of complex life is like a gas slowly diffusing away from what he calls the “Wall of Minimal Complexity”. Life cannot diffuse to the left of the Wall of Minimal Complexity because then it would die, but it can diffuse slightly to the right to some extent. This model is somewhat like the behavior of the air molecules of the Earth’s atmosphere. Most air molecules find themselves down near the surface of the Earth and cannot diffuse very far into the solid Earth, which acts like a Wall of Minimal Complexity, but they can rise above the Earth’s surface. As you ascend in altitude, the number of molecules steadily decreases, until you get several hundred miles up where they are still present, but quite rare. The very few air molecules that do attain an altitude of several hundred miles do so in a very erratic and perilous manner, subject to the random whims of the Universe.
True, their inherent kinetic energy did get them all the way up there, just as natural selection will guide the way for the evolution of complex life, but the path along the way will always be different and very unpredictable for each molecule. Similarly, there will always be a number of complex species diffusing away from the Wall of Minimal Complexity, but how that complex life will look is impossible to predict because of the unpredictable and erratic course of evolution. If true, this does not bode well for the emergence of software in our Universe. If the evolution of intelligent carbon-based life is a rare thing even on our Rare Earth, then there cannot be that much software out there either.
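Gould’s diffusion metaphor is easy to model. The sketch below is my own toy illustration, not Gould’s: each of 10,000 lineages takes an unbiased random walk in “complexity”, with a wall at zero standing in for the Wall of Minimal Complexity. The walk has no built-in drive toward complexity, yet a small right-hand tail of complex outliers appears anyway, while most lineages stay piled up against the wall.

```python
import random

random.seed(1)
WALL = 0.0                      # the minimal complexity a living thing can have
lineages = [WALL] * 10_000      # every lineage starts as simple as possible

for generation in range(100):
    for i in range(len(lineages)):
        # an unbiased step: complexity is as likely to fall as to rise...
        step = random.gauss(0, 0.1)
        # ...but the wall blocks any move below minimal complexity
        lineages[i] = max(WALL, lineages[i] + step)

near_wall = sum(1 for c in lineages if c < 1.0)
far_tail = sum(1 for c in lineages if c > 3.0)
print(f"{near_wall} lineages near the wall, {far_tail} complex outliers")
```

Rerunning with a different seed shifts which particular lineages end up in the far tail, echoing Gould’s point: the existence of a few complex outliers is a statistical certainty, but their identity and form are unpredictable.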

Gould’s thoughts are in stark contrast to the idea of evolutionary convergence that I discussed in A Proposal For All Practicing Paleontologists. Convergence maintains that since there are only a limited number of ways of doing things that work, complex life tends to reinvent itself over and over again, and keeps coming up with the same basic designs in independent lines of descent. That is why insects, birds, bats, and pterosaurs all came up with the same basic architecture for a wing. Convergence would predict that the evolution of intelligent carbon-based life would be much more likely since there is a definite survival advantage to having a large neural network that can better perceive predators and prey. Eventually, these neural networks get so large that intelligent consciousness emerges, and then things really take off, because intelligence is the ultimate spandrel of them all, one that can be used for a nearly infinite number of things that enhance survivability. The concept of convergence relies heavily upon the apparent overwhelming power of natural selection to overcome all obstacles - “mutation proposes, but natural selection disposes” - and natural selection wins in all cases.

In The Spandrels of San Marco and the Panglossian Paradigm, Gould describes this obsession with natural selection as an outgrowth of what he calls the Adaptationist Program. The Adaptationist Program maintains that natural selection is so powerful that it simply dwarfs all other factors, and consequently, leads to the convergence of body designs. Many of my favorite evolutionary biologists such as John Maynard Smith, Richard Dawkins, Simon Conway Morris, the geologist Mark McMenamin, and the philosopher Daniel Dennett, seem to fall into this camp. In the essay More Things in Heaven and Earth, Gould casts this lot as members of a cult-like group obsessed with natural selection that he characterizes as a type of “Darwinian fundamentalism” of near-religious fervor. He begins the essay with a quote from Darwin’s last edition of On the Origin of Species (1872) before launching into an attack upon the adaptationist program and offering up a pluralistic approach to evolutionary theory that relies upon an assortment of forces shaping the evolutionary history of life on Earth as I described above.

As my conclusions have lately been much misrepresented, and it has been stated that I attribute the modification of species exclusively to natural selection, I may be permitted to remark that in the first edition of this work and subsequently, I place in a most conspicuous position – namely at the close of the Introduction – the following words: “I am convinced that natural selection has been the main but not the exclusive means of modification.” This has been of no avail. Great is the power of steady misrepresentation.

Social Darwinism and Herbert Spencer
The final sections of The Richness of Life: The Essential Stephen Jay Gould feature some of the political writings of Gould in reference to the uses and abuses of Darwinian thought in politics. Along these lines, in support of his contention that evolution is not inherently progressive in nature, Gould likes to point out that the terms “evolution” and “survival of the fittest” did not actually originate with Darwin, but with the English philosopher Herbert Spencer (1820 – 1903). Herbert Spencer was the epitome of Victorian times, obsessed with the progress made by Victorian society as a result of the industrialization of Britain and of the expansion of the British Empire. Originally, Darwin liked to call his theory “descent with modification”, but Spencer preferred the term “evolution” because it conveyed the idea of progress from the Latin evolutio or “unfolding”, and Spencer’s terminology prevailed. Spencer contended that although the excesses of 19th-century capitalism, brought on by the rapid industrialization of Europe and the United States, might have seemed a bit cruel, governments should not interfere with the “survival of the fittest” by legislating against such things as child labor, incredibly unsafe factories, and tainted processed foods and drugs, or by providing for such things as the relief of poverty, public education, public sanitation systems for safe drinking water, public sewage systems, and anything else that might help alleviate the plight of the “undeserving poor”. Spencer’s ideas were soon adopted by those benefiting the most from the extreme concentration of wealth brought on by rapid industrialization in the late 19th century in the form of the theory of Social Darwinism.
Social Darwinism held that governments should not interfere with the natural order of things for the long-term good of the human race, and provided a scientific theory that helped to soothe the conscience of the very rich, since providing for the very poor could now be deemed an abomination against the laws of Nature.

Before moving on and examining Gould’s ideas from an IT perspective, I would like to make a slight digression because the resurgence of Social Darwinism, in the form of the recent appearance of the Tea Party movement in the United States, seems to be both a very good example of the concepts of punctuated equilibrium and of biological convergence at work. As I have mentioned in the past, the real world of human affairs is all about the peculiarities of self-replicating information. There are now three forms of self-replicating information on this planet – genes, memes, and software that are all battling it out for dominance, with software rapidly gaining the upper hand. Since all forms of self-replicating information share many common characteristics, much can be learned by studying one to learn about the others, so let us briefly examine the emergence of the Tea Party as a new meme-complex, since it might shed some light on the evolution of software as well.

We are living in very divisive times in the United States and once again the election season is at hand. Sadly, the only thing that all Americans can now seem to agree upon is that something is dreadfully wrong with America, and we all have our own deeply held beliefs on how to address the problem. I find the recent Tea Party movement to be an interesting resurgence of the Social Darwinism meme-complex of the late 19th century. The Tea Party movement seems to want to roll back all of the reforms made to capitalism in the early 20th century by the Progressive Era (1890 – 1921), under the presidencies of Teddy Roosevelt, Taft, and Wilson, which gave us such things as the Interstate Commerce Act of 1887, the Sherman Antitrust Act of 1890, the Meat Inspection Act of 1906, the Pure Food and Drug Act of 1906, the Federal Reserve System (1913), and the Federal Income Tax (1913), plus later social legislation like Social Security (1935), Medicare (1965), Medicaid (1965), the Civil Rights Act of 1964, and the Americans with Disabilities Act of 1990. Personally, as an 18th-century liberal and 20th-century conservative, I have no desire to return to the 19th century. We did that once already, and it was not very nice, but the Tea Party disagrees.

The rapid appearance of the Tea Party movement on the political scene, seemingly arising out of nothing, is a good example of punctuated equilibrium at work between the Republican and Democratic Party meme-complexes. Both parties have been battling it out for over 150 years, and during that time have usually reached a stable state of equilibrium between predator and prey, with neither party causing the extinction of the other. Just as genes come together to create DNA survival machines that enhance the survival of DNA, meme-complexes are composed of memes that come together for their own joint survival too. Again, the key impetus for self-replicating information is survival itself, and in order to survive, meme-complexes must sometimes adapt to new conditions on the ground by adopting new memes and discarding old detrimental memes of the past too. That is why both parties are found to slowly evolve over time, and sometimes even switch sides on issues! For example, the Republican Party started out as a liberal anti-slavery party in the 19th century, with the Democrats holding the conservative pro-slavery position. This continued on until the 1960s, when the two parties switched sides, leaving the Democrats with a pro-civil rights position and the Republicans tending to resist the civil rights movement. Similarly, during the late 19th century and on into the Progressive Era (1890 – 1921), the “Bourbon” Democrats were the pro-business party, while the Republicans under Teddy Roosevelt and Taft were the anti-business “trust-busters” of the day, pushing through regulatory restraints on capitalism. If you look closely, President Obama’s policies are strangely reminiscent of Teddy Roosevelt’s “Square Deal” with a 21st-century twist. Thus in the 19th century, Democrats were conservatives and Republicans were liberals, and in the 20th century they slowly switched positions, so that for most of the 20th century Democrats were liberals and Republicans were conservatives!
Seemingly, the only meme within each party that went the distance is that Democrats oppose Republicans and Republicans oppose Democrats on all issues, whatever they might be at the time. This dynamic created a very long period of stasis for both parties in the 20th century, during which their political positions did not change a great deal, in keeping with Gould’s concept of punctuated equilibrium, which holds that species are usually very stable and in equilibrium with their environment and only rarely change when required.

This all began to change in the early 21st century, when the arrival of the Tea Party, in true punctuated equilibrium fashion, disrupted the long-standing stasis between the Republican and Democratic parties. The rapid emergence of the Tea Party upon the political scene is reminiscent of the rapid appearance of a new species in the fossil record with no apparent intermediate forms to be found. Upon closer inspection, the Tea Party meme-complex is actually composed of some memes that were floating around in the Republican Party meme-complex for some time and which finally branched off into a new party of its own to address the duress of the severe economic downturn of the present times. So the Tea Party really did indeed evolve from the Republican Party through small incremental changes. It is just hard to pinpoint exactly when and where it first appeared.

The Tea Party is also an example of convergence, the reinvention of similar solutions to similar problems in the biosphere. The economic turmoil caused by rapid industrialization in the 19th century gave birth to Social Darwinism as a means to justify the excesses of 19th-century capitalism, just as the economic turmoil brought on by globalization and the bursting of the real estate bubble in 2008, gave birth to the Tea Party movement. Both Social Darwinism and the Tea Party movement promote similar policies of removing government interference and regulation from all economic activities within the United States and beyond.

Again, to my mind, all this controversy stems from a fundamental lack of understanding of the nature of self-replicating information. For the Tea Party movement, this is further complicated by the fact that many Tea Party members do not have much confidence in Darwinian thought in the first place and wish to have it banned from public schools! That is a shame because Darwinian thought is core to understanding the nature of self-replicating information. You would think that it would be very difficult for a meme-complex to maintain a passionate Darwinian “survival of the fittest” approach to capitalism, while at the same time advocating the banning of Darwinian thought in schools, but such is the case.

It is important to remember that self-replicating information is just mindless information bent on replicating at all costs and that it is not necessarily working in our best interests. As Gould has pointed out, there is nothing sacred about natural selection or “survival of the fittest”. The chief advantage of a Darwinian system of economics, like capitalism, is that you do not need a designer. The failure of socialism and communism in the 20th century attests to the difficulty of trying to design a complex modern economy – it just cannot be done by the human mind. It is much better to just let economies design themselves through the Darwinian mechanisms of innovation and natural selection found in capitalism, as Adam Smith pointed out in The Wealth of Nations (1776). Darwin actually based his theory on Adam Smith’s “Invisible Hand” after reading The Wealth of Nations and doing some fieldwork on board the HMS Beagle. So I am a strong advocate of capitalism because it is a Darwinian system of economics that is proven to work since it is based upon the same operational processes that make the biosphere work. However, there are some downsides to this approach as well.

First of all, Darwinian systems like capitalism do not necessarily yield the most productive of all systems. As Gould pointed out, natural selection has no teleological intent to drive a system to an ultimate state of perfection. The beauty of capitalism is that it does not need a designer to produce an incredibly complex economy, but natural selection is subject to the expediency of the moment and is the ultimate short-term thinker, only choosing the survivor of the moment with no thought whatsoever of the future. Consequently, many aberrations can arise in capitalism that an outside designer can easily identify. For example, in The Greatest Show on Earth (2009) Richard Dawkins asks why trees are 100 feet tall instead of 10 feet tall. It takes a lot of mass and energy to build a 100-foot trunk to hold the leaves that gather sunlight. A 10-foot trunk with widely spreading branches would do the job just as well, and a 100-foot trunk is made mostly of cellulose, a tough substance that not even termites can digest – the bacteria in their guts do that for them. The reason that trees are 100 feet tall is that they grow in forests and compete for sunlight with other trees. A 10-foot-tall tree species in a forest would be quickly shaded into extinction. So trees grow as tall as possible until other factors enter into the calculation of survival and limit their growth. A 1,000-foot tree would have no competitors at all until it blew over in a gentle breeze. So a forest does indeed work, Darwinian “survival of the fittest” guarantees at least that much, but it clearly is not the most efficient way of collecting sunlight. About 10,000 years ago we learned that by cutting down the trees and planting the exposed ground with domesticated seeds, we could dramatically boost the economic productivity of the biosphere by artificially changing the rules under which “survival of the fittest” operated.
Similarly, over the past 100 years, we have done the same thing with the rules under which capitalism operates, adjusting them to allow capitalism to work its miracles in a manner useful to mankind.

Secondly, nobody really wants to live under a purely Darwinian form of capitalism, with a totally unfettered “survival of the fittest” guiding principle. At any given time, there are always regions of the world where governments have totally withered away, leaving a number of warlords running things. Yes, pure laissez-faire capitalism does continue to produce some economic activity under such conditions, but at a very low level of output, because all the theft, bribery, and extortion severely limits the incentive to work hard for one’s own economic benefit, since the fruits of that work can so easily be carried off by others. It is much better to have a government in place that somewhat limits total freedom by enforcing property rights, the provisions of contracts, and zoning laws, and that provides for police protection. Therefore, in order to maximize economic output, it is necessary to “domesticate” capitalism in some sense through law and regulation, just as we domesticated certain wild elements of the biosphere to produce crops and livestock. I think all Americans can agree upon that. It is just a matter of degree that we all squabble about. The history of the United States has shown that at times we have had too much regulation and at other times too little. Remember, a vice is simply a virtue carried to an extreme.

So I would recommend that all Americans try to calmly sit down and think things through in a rational manner. We need to recognize that all segments of American society are needed to make it all work. We need the rich, the poor, and the middle class. If you look to the biosphere or to the human experience of everyday economic life, you will always find parasites and freeloaders, scammers and chiselers, and some of them will be rich and some of them will be poor, but most of us are decent folks just trying to make a go of it. The rich will complain that essentially they pay all of the income tax, which is true, while the poor will complain that the rich now get nearly all of the money, which is also true. Tax rates should not be so high as to stifle the incentive to study, work hard, or to take an entrepreneurial risk. But at the same time, we need to recognize that capitalism tends to concentrate wealth into the hands of a very small percentage of the population. That is why the Federal Income Tax was established in 1913 as a means to redistribute wealth. Prior to the Federal Income Tax, the federal government was financed primarily by tariffs and fees that were more of a burden for the poor than for the rich. As an 18th-century liberal and 20th-century conservative, I would like to see us return to the days of President Eisenhower with a growing middle class, rather than the experience of the past 30 years where we have seen the rich get richer and the poor get poorer. It is not in anybody’s long-term interest to see the middle class of America disappear.

We must remember that there is nothing sacred about the way natural selection distributes wealth in a capitalistic economy since it is the product of mindless self-replicating information without a designer. To some extent, it does so based upon rewarding performance, but not entirely. Capitalism works because it rewards ambition, initiative, and hard work, but to my mind, most of the wealth in the modern world has not actually come from the ambition, initiative, and hard work of its current inhabitants. People have been working hard for over 200,000 years, and for most of that time, they lived in utter poverty. Most of today’s wealth has actually come from the ambition, initiative and hard work of scientists and engineers living in the 18th, 19th, and 20th centuries who made very little at the time. Although investment bankers and hedge fund managers may make princely sums, it seems to me that their compensation may not be commensurate with their actual contributions to the national economy and is an aberration caused by some of the imperfections of capitalism. Imagine the wealth we would have today if we had plowed similar sums into research and development for the past 200 years! Although I cannot exactly explain why, deep down I have a disturbing feeling that, despite all the ranting and raving between the Republicans and Democrats, most of our economic problems stem from America having given up on science and rational thought about 30 years ago.

Gould From an IT Perspective
So do we see such things as punctuated equilibrium, exaptations and spandrels, and the limitations imposed by historical constraints in the evolutionary history of software, and more importantly, has the evolution of software been progressive in nature over the past 70 years or not? I would say for the most part that we do see such things, but on the other hand, I would also contend that natural selection does seem to have been the dominant factor in shaping the evolution of software, causing software to essentially follow the same architectural path as life on Earth did through a process of convergence. So I still have high hopes for intelligent carbon-based life in our Universe and the emergence of complex software too.

It does seem as if the evolution of software has been somewhat progressive in nature in an almost Spencerian sense in contradiction to the views held by Stephen Jay Gould. Yes, the bulk of software probably does consist of simple unstructured .bat files, Unix shell scripts and edit macros desperately clinging to the Wall of Minimal Complexity like a bacterial scum. But the software that sticks out in one’s mind has certainly increased in complexity and functionality over the years. As Gould pointed out, perhaps we are just focusing on the complex software, rich in function, that is in the round-off error of all software combined, and that is why it appears as though software has progressed in complexity. One might also argue that software will certainly progress in complexity and functionality because we humans are constantly trying to make software better, so the analogy to the lack of a progressive nature for evolution in the biosphere naturally fails because living things do not “try” to evolve to increased levels of complexity, while we humans are always “trying” to make software better.

However, softwarephysics would counter that the analogy does hold because genes and software are both forms of self-replicating information bent on survival. As Richard Dawkins pointed out, our bodies do not use genes to build and maintain themselves. On the contrary, our genes use our bodies as disposable DNA survival machines to protect and replicate genes down through the generations. Similarly, software is a form of self-replicating information that has formed very strong parasitic/symbiotic relationships with nearly every meme-complex on the planet, and in doing so, has domesticated our minds into churning out ever more software of ever more complexity. Just as genes are in a constant battle with other genes for survival, and memes battle other memes for space in human minds, software is also in a constant battle with other software for disk space and memory addresses. Natural selection favors complex software with increased functionality, throughput, and reliability, so software naturally progresses to greater levels of complexity over time. With that said, let us next examine some of Gould’s ideas that do seem to apply to the evolution of software.

Punctuated Equilibrium in IT
When dealing with the daily mayhem of life in IT, it is hard for IT professionals to take in the grand scheme of it all because we are basically just trying to survive through the day. But when I look back over the past 30 years of my career as an IT professional, I do see punctuated equilibrium at work. There are long periods of many years of stasis in software evolution when nothing much seems to change at all, and then all of a sudden there are dramatic changes, seemingly coming out of nowhere. So software does seem to evolve in steps rather than slowly strolling up a gently rising ramp. I think all IT professionals will certainly have their own stories to tell in support of this observation. So rather than trying to generalize it, let me relate my own.

Having learned how to write unstructured batch FORTRAN programs in 1972 on punched cards, the first dramatic evolutionary step I saw unfold was when I ran across my first structured FORTRAN program in 1975. As I pointed out in SoftwareBiology, the advent of structured programming in the evolution of software was equivalent to the rise of eukaryotic single-celled life on Earth, and like the eukaryotic architecture with its subdivision of functions into organelles, all complex software that has followed has continued on with the elements of structured programming. When I saw that first structured program, I did not know what to make of all the comment cards, indented code and subroutines, so I rewrote the whole thing with each line of code starting in column 7 of my punch cards as it should! Programming on cards was one of the things that slowed the emergence of structured programming in the 1970s. Since it cost a lot of money to compile a program and obtain a line printer listing of the code, we normally just programmed by flipping through the card deck, replacing cards that needed changing by punching them up on an IBM 029 keypunch machine and then reinserting the new cards back into the deck. Reading indented code one line at a time on punch cards really is of no value because you cannot appreciate the indented logic, and it is very hard to keypunch indented code because you have to carefully count all the leading blank spaces or the code gets very ragged on the left margin. One mistaken keystroke and you have to eject the card and start all over again. So it was much easier to simply punch all the cards so that they all started in the same column on the card. Similarly, branching off to a subroutine in a punch card deck is a pain because it is very hard to find the subroutine in the deck. It was much easier to simply bracket the code with some labeled cards and then use GOTO statements to branch into and out of the code via the labels. 
This all rapidly changed in the early 1980s when we started programming on IBM 3278 terminals using full-screen text editors like ISPF. Now you could see a full screen of code all at once, and indenting code helped to highlight the logic. Plus, it was now easy to find your subroutines with the Find function of the editor.

For me, the next evolutionary step was moving from batch programming on mainframes to interactive programming on the IBM VM/CMS operating system, which was somewhat similar to the Unix operating system. At first, we wrote simple menu-driven applications which simply printed a list of numbered options on the screen for the user to choose from in order to navigate through the application. After a few years, menu-driven applications were replaced by screen-oriented applications using DMS or ISPF Dialog Manager. These applications were somewhat like the online CICS applications that had been running on the mainframes since 1968, but they were interactive in nature rather than just online applications pulling up and modifying somebody’s account information. These interactive applications actually performed computations in real time for the user based upon screen input. I spent most of the 1980s writing such screen-oriented interactive applications, which was, for me, another long period of stasis and stability in the evolution of software.

The next step was the arrival of computer science graduates from the recently formed computer science departments at major universities. In the early 1980s, most programmers were still fugitives from the sciences or former mathematicians, with a few escapee accountants thrown in for good measure. At first, the computer science graduates were all mainframe COBOL/CICS programmers, but in the late 1980s, they suddenly switched to being Unix and C programmers because running Unix on servers was a cheaper way for the universities to teach computer science. When I saw my first C program, I thought that it was the ugliest computer language that I had ever seen, with its mixed case code and squirrely brackets “{ }” all over the place. I just knew that C would never really catch on with such ugly syntax, but I taught myself C and Unix just the same. Getting used to mixed case code was very difficult for me because, up until then, I had only been coding uppercase FORTRAN, COBOL, PL/1 and REXX. This was an historical constraint held over from the punched card days. WE LEARNED BACK THEN THAT ALL CODE SHOULD ONLY BE UPPERCASE BECAUSE THEN YOU COULD STILL READ THE CODE ON THE PUNCHED CARDS WHEN THE PRINTER RIBBON ON YOUR IBM 029 KEYPUNCH MACHINE GOT WORN OUT.

In the early 1990s the distributed computing revolution hit with full force, ending my 1980s stasis of programming screen-oriented applications on VM/CMS, and then it really paid off to have learned Unix and C in advance – sort of a spandrel turning into an exaptation for me. Suddenly, mainframe COBOL/CICS programming was out too and writing your applications on cheap Unix servers in C was the new new thing. Object-Oriented programming also began to go mainstream at this time in the form of C++ programming, so I taught myself C++ which was not too difficult since C++ had evolved from C and had carried forward its dreadfully ugly syntax. As I pointed out in SoftwareBiology, object-oriented programming is equivalent to multicellular organization, where applications consist of instances of objects (cells) that are created, used, and then destroyed.

In 1995 the Internet hit, and this time we all learned HTML and started working with webservers and browsers and eventually struggled with ways to serve up dynamic HTML using the Common Gateway Interface (CGI) and Perl scripts or C programs. While the web-based application revolution proceeded on, I got shanghaied into Amoco’s Y2K project from 1997 – 1999, so it was back to mainframe COBOL, FORTRAN, and PL/1 for me.

After leaving Amoco in 1999, I ended up doing Tuxedo middleware at United Airlines. In 2003 I moved to the Middleware Operations group of my present employer, supporting their websites on Apache, Websphere, JBoss, Tomcat, and ColdFusion, with nearly all code now written in Java. Java inherited the squirrely C/C++ syntax, so I guess I was wrong about the staying power of C after all. We are currently getting very heavily involved with the SOA – Service Oriented Architecture revolution, using J2EE Appservers like Websphere and JBoss to create dynamic HTML. In SoftwareBiology, I pointed out that SOA is equivalent to the Cambrian Explosion in the history of life on Earth. In SOA we have consuming objects (cells) running in consumer Appserver JVMs making service calls to service objects (cells) running in service Appserver JVMs. Although we keep adding more and more applications and JVMs to the mix, not much has changed from an architectural standpoint since SOA hit, so once again I seem to be entering another period of stasis.

Exaptations and Spandrels in IT
There are many examples of these in the evolution of software, but probably the most significant is that of the evolution of the World Wide Web. The current Internet evolved from the ARPANET of the 1960s and 1970s. The ARPANET was a packet-switched network of computer nodes conceived of by the Defense Department’s Advanced Research Projects Agency or ARPA in 1968. It was designed to survive a nuclear strike by not having a single point of failure that could disrupt communications between computer nodes on the network. Packet-switching was used to simply route packets around any node on the network that happened to fail. The first ARPANET node was installed at UCLA in 1969. Subsequent nodes were established at Stanford, the University of Utah, and the University of California in Santa Barbara in 1969 as well. By 1984 there were 1,000 nodes on the ARPANET, and by 1989 the number had grown to 100,000 nodes, primarily at universities and research centers. Today, the Internet has grown to billions of nodes.

In 1980 Tim Berners-Lee began working at CERN, the European particle accelerator complex near Geneva, Switzerland, as a consultant. Frustrated with trying to locate information on the large number of computers at CERN, Berners-Lee submitted a proposal in 1989 to CERN IT management to create a World-Wide Web: An Information Infrastructure for High-Energy Physics that would use browsers and webservers to connect the particle researchers of the world over the existing networks. Naturally, the proposal was at first rejected by IT management, but Berners-Lee persisted, and eventually it was approved. Berners-Lee’s team came up with the ideas of the Hypertext Transfer Protocol (HTTP), the Hypertext Markup Language (HTML), and the Universal Resource Locator (URL).

In the late 1970s Bill Joy wanted to rewrite the Unix operating system in a language other than C. In the 1980s he tried C++ at first, but then decided a better programming language was needed that did not have all the messy pointer arithmetic that invariably led to the bugs and memory leaks of C and C++. In 1991 Bill Joy joined several others at Sun on their Stealth Project to develop software for smart electronic consumer products, like toasters, that could do processing on a large, distributed, heterogeneous network of consumer electronic devices all talking to each other. James Gosling was a fellow member of the Stealth Project who was assigned the task of finding the appropriate programming language for the project. Gosling began with C++ too but quickly realized its shortcomings as had Bill Joy. The new programming language would have to run on a very diverse set of hardware, and it would be nearly impossible to do that with a compiled language that had to deal with the varied instruction sets of all that varied hardware, so it was decided to use an interpretive language that could be semi-compiled into a “byte-code” that could be run in a “virtual machine” on the toasters and other such products. His first attempt was a language called Oak. Gosling realized that if Oak was to take on the consumer electronics market by storm, it would need to be easy to learn, so he took the strange syntax of C and C++ with its squirrely brackets { } as a starting point, since most programmers were already familiar with C and C++. Again, this is an example of software evolution being limited by historical constraints. Also, remember that C++ evolved from C by adding classes to it. Gosling realized that nobody wanted to reboot their toaster every day just to make a quick slice of toast, so he discarded the pesky error-prone pointers of C and C++ too. 
He also got rid of multiple inheritance of objects and operator overloading to minimize the creation of bugs caused by programmers prone to writing tricky code. Automatic garbage collection was also introduced in the hopes of plugging the memory leaks of C++, where you had to destroy your own objects. Unfortunately, after a quick patent search, it was learned that there already was a programming language called “Oak”! Luckily, after a visit to a local coffee shop by the development team, Oak was rechristened as Java!

Unfortunately, toasters were just not ready for Java in 1992. Sun tried to experiment with Java on interactive TVs, but that did not work out either. To quote the Virginia Tech Computer Science department at:

In June of 1994, Bill Joy started the "Liveoak" project with the stated objective of building a "big small operating" system. In July of 1994, the project "clicked" into place. Naughton gets the idea of putting "Liveoak" to work on the Internet while he was playing with writing a web browser over a long weekend. Just the kind of thing you'd want to do with your weekend! This was the turning point for Java.

The world wide web, by nature, had requirements such as reliability, security, and architecture independence which were fully compatible with Java's design parameters. A perfect match had been found. By September of 1994, Naughton and Jonathan Payne (a Sun engineer) start writing "WebRunner," a Java-based web browser which was later renamed "HotJava." By October 1994, HotJava is stable and demonstrated to Sun executives. This time, Java's potential, in the context of the world wide web, is recognized and the project is supported. Although designed with a different objective in mind, Java found a perfect match on the World Wide Web. Many of Java's original design criteria such as platform independence, security, and reliability were directly applicable to the World Wide Web as well. Introduction of Java marked a new era in the history of the web.

Now that is certainly an example of a spandrel becoming an exaptation that eventually evolved into something completely different – truly a screwdriver becoming a wood chisel to be used by all! So the World Wide Web evolved from a cold war computer network meant to survive a nuclear strike, running on software meant to help particle physicists complete the Standard Model and written in a Java programming language meant to run on toasters! While reading about punctuated equilibrium, spandrels and exaptations and how they might have contributed to the evolution of wings and everything else in The Richness of Life – The Essential Stephen Jay Gould, I simply could not help thinking back to those Flying Toasters made famous by After Dark on the Mac in 1989 and still available today on YouTube:

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston

Friday, July 09, 2010

Some Reflections on nothingness

I believe that the chief obstacle to the adoption of softwarephysics by the IT community has been a deep-seated nagging feeling that, since software is not “real”, how can we possibly apply concepts from physics and the other sciences to software? That is why I have tried to show in many of my previous postings that, thanks to 20th century physics, there really isn’t that much “real” stuff left out there in the physical Universe, at least not “real” in the sense that most people commonly think of when interacting with the things in their immediate surroundings. Mankind has always been fascinated with the “real” and the “unreal”, and that goes for physicists too. So once you get used to physics describing the behavior of “real” things, like electrons, with effective theories like QED that make heavy use of “unreal” things like virtual photons, I claim that it is not that much of a stretch to extend these same ideas of “real” and “unreal” to the behavior of the “unreal” substance we call software. From a positivistic point of view, QED makes incredibly accurate predictions of the behavior of both the “real” electrons and the “unreal” virtual photons, so who cares if these poor little particles have to constantly compute the results of an infinite number of Feynman diagrams just to figure out how to dance about for us? After all, software is constantly doing the same all the time.

I just finished nothingness – The Science of Empty Space (1994) by Henning Genz, which is a good popular study of such things. In nothingness, Genz takes an historical approach to chronicle mankind’s fascination with the concept of a vacuum composed of true nothingness and describes how things have come full circle in our thinking on the matter. Beginning with Aristotle’s concept of horror vacui, the idea that nature abhors a vacuum, he goes on to describe how the early atomists, like Leucippus and Democritus, required a vacuum, or a true void, for their unchanging atoms to bounce around in and to form the new combinations that produce an apparently changing Universe from apparently unchanging atoms. But that was not the norm. Throughout most of historical time, people really did believe in Aristotle’s proposition that a vacuum was impossible to create. This was quite understandable based upon common sense observations. For example, when you suck on a straw, liquid immediately rises into the straw to prevent the formation of a vacuum, and consequently, it was thought that horror vacui was a fundamental law of the Universe that could not be overcome. However, in 1644, Torricelli showed that it was indeed possible to form a vacuum simply by filling a closed glass tube with mercury and inverting the tube in a basin of mercury. Torricelli found that the mercury in the inverted tube dropped to a height of about 30 inches, leaving a vacuum clearly visible in the upper portion of the closed glass tube. Torricelli proposed that it was the weight of the air overhead that forced the mercury up into the inverted tube and that 30 inches of mercury had a weight equivalent to that of the weight of the air overhead. It was not that nature abhorred a vacuum, it was simply that the weight of all that air overhead naturally forced air or any other freely moving fluid into any container trying to become a vacuum.
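Torricelli's argument can be checked with a quick calculation. The pressure at the base of a mercury column of height h is P = ρgh, and for the observed 30 inches (about 0.76 m) of mercury:

```latex
P = \rho_{\mathrm{Hg}}\, g\, h
  \approx 13{,}600\ \mathrm{kg/m^3} \times 9.8\ \mathrm{m/s^2} \times 0.76\ \mathrm{m}
  \approx 1.0 \times 10^{5}\ \mathrm{Pa}
```

which is just the sea-level atmospheric pressure of about 101 kPa, so the column of mercury really is balanced by the weight of the air overhead, exactly as Torricelli proposed.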

But was Torricelli’s vacuum a true nothingness? Recall that in the late 19th century physicists discovered the effects of black body radiation and in the early 20th century they found the photons that comprised the black body radiation. It was found that whenever you heated a body or an enclosure above absolute zero, it naturally radiated electromagnetic energy, so even if you were able to remove all the atoms from an enclosed space, the space would still be filled with photons bouncing around within the enclosure, constantly being emitted and absorbed by the walls of the enclosure. The only way to get rid of the photons and attain a true nothingness would be to reduce the walls of the enclosure down to absolute zero, which the third law of thermodynamics unfortunately prohibited. However, it is possible to get the walls of the enclosure down to a temperature very close to absolute zero, and consequently, to get the population of black body photons within the enclosure down to an arbitrarily small number, and in the limit, essentially remove them all. But would this now complete vacuum provide a true nothingness? Not quite. As we learned in The Foundations of Quantum Computing, the quantum field theories of QED and QCD which form the Standard Model of particle physics, tell us that a vacuum is still filled with fields that are subject to quantum fluctuations. In quantum field theory, everything is a field and these fields are observed as matter particles like electrons, quarks, and neutrinos, and also as force-carrying particles like the photons of the electromagnetic force, the gluons of the strong nuclear force, and the W+, W-, and Z0 particles of the weak nuclear force. So even in a complete vacuum, with all atoms and black body photons removed, these fields are still constantly fluctuating and creating “unreal” virtual particles that borrow energy from the vacuum. 
“Real” particles are simply “unreal” virtual particles that happened to have latched onto some “real” energy, rather than borrowed energy from the vacuum. But recall that from Noether’s theorem energy is just a conserved quantity stemming from the symmetry of the laws of the Universe with respect to time. According to Noether’s theorem, all the interactions in the Universe will behave as if there is a conserved thing we call energy, so long as the laws of the Universe do not change with time. Thus energy is a very positivistic concept based upon how things are observed to behave within the Universe. Energy is like a set of double-entry accounting books that ensure that all energy debits have corresponding energy credits. But does that mean energy is “real” or is it just another form of accounting information? If we misplaced the accounting books of a large corporation for a single day it could still transact business for a short time without the accounting books even existing. So remember that the difference between the “real” and the “unreal” gets rather murky in this quantum mechanical physical Universe that we all live in.
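Noether's link between time symmetry and energy conservation can be stated compactly. If the Lagrangian L of a system has no explicit time dependence, then the Hamiltonian H, the quantity the accounting books call energy, is conserved:

```latex
H = \sum_i p_i \dot{q}_i - L,
\qquad
\frac{dH}{dt} = -\frac{\partial L}{\partial t} = 0
\quad \text{whenever} \quad \frac{\partial L}{\partial t} = 0
```

Change the laws of the Universe with time, so that the partial derivative of L with respect to t no longer vanishes, and the double-entry energy books no longer balance.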

The important point is that Aristotle seems to have been right all along. It really does seem impossible to create a true nothingness in our physical Universe. This might be a remnant characteristic left over from a time before the origin of our current Universe. Recall that the current thinking is that our physical Universe resulted from a quantum fluctuation in an extended infinite multiverse, a fluctuation that exploded into our current Universe all on its own. This quantum fluctuation rapidly expanded and cooled through a process of Inflation, as the formation of the Higgs field, which gives matter particles their masses, broke the symmetry of the original high-temperature quantum fluctuation, yielding a nearly flat physical Universe made of “nothing”, with no net momentum, angular momentum, or mass-energy to speak of. Strangely, this symmetry breaking created a physical Universe made of “nothing”, but at the same time, seemingly incapable of producing a true nothingness of its own! Perhaps this apparent paradox simply stems from a fundamental philosophical misunderstanding. Perhaps the answer to the age-old question of why there is something rather than nothing might just be another question - what leads you to believe that there is something in the first place? As we saw in Is the Universe a Quantum Computer?, perhaps the physical Universe is really just a nothingness of intangible information constantly calculating how to behave. So as we have seen, at the quantum level, the concept of “reality” gets pretty murky. Perhaps this all-pervading illusion of reality that we all rely upon so heavily in our day-to-day lives is just another emergent behavior of our Universe, and should really be described by the complexity theory we explored in The Origin of Software the Origin of Life.

So given that our physical Universe is made of “nothing”, but at the same time, is still capable of being studied by physics and the other sciences, doesn’t it make sense that the Software Universe in which we all reside might be capable of the same? Like the physical Universe, the Software Universe is not that “real” either. It is simply composed of the froth of CPU processes currently running on the 10 trillion currently active microprocessors scattered throughout our Solar System. If each microprocessor is running about 100 concurrent CPU processes, that comes to about a quadrillion CPU processes in all. As an IT professional, those quadrillion CPU processes are just as “real” to me as anything else in this Universe, and tend to impact my life a lot more than many of the other “real” things out there in the physical Universe. There is nothing like spending a holiday weekend as the MidOps Primary, while trying to clean your carpets with a Rug Doctor between pages, to drive home that point.
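The back-of-envelope arithmetic above can be checked in a few lines of Python. Note that the 10 trillion microprocessors and 100 processes per chip are the rough estimates from this paragraph, not measured data:

```python
# Back-of-envelope count of CPU processes in the Software Universe,
# using the rough figures assumed in the text above.
microprocessors = 10 * 10**12   # ~10 trillion active microprocessors
processes_each = 100            # ~100 concurrent CPU processes per chip

total_processes = microprocessors * processes_each
print(f"{total_processes:.0e}")  # 1e+15, i.e. about a quadrillion
```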

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston