I just finished reading In the Blink of an Eye (2003) by Andrew Parker, in which he presents his Light Switch theory for the Cambrian Explosion. The Cambrian Explosion is an enigma that has plagued both geologists and evolutionary biologists for more than 150 years, going all the way back to the days of Darwin himself. The Cambrian Explosion is usually characterized as the sudden rapid appearance of complex multicellular organisms in the fossil record. In the strata below the Cambrian, one does not find such fossils, and then, in a flash of geological time, large numbers of fossils, of many varieties, are found in the Cambrian strata. In the classical description of the Cambrian Explosion, it is proposed that in the Precambrian there were only simple worm-like forms of multicellular life, and they did not leave behind good fossils because they had no hard parts, like shells or hard exoskeletons made of chitin. Then suddenly in the Cambrian, we see the rapid diversification of multicellular life into about 35 different phyla, or basic body plans, that left behind good fossils because they did contain hard parts that could easily fossilize. We have all on occasion come across silverfish scurrying about in our homes. Unlike spiders or cockroaches, when you dispatch a silverfish with a piece of tissue paper, all you are left with is a smear of protein, rather than a squashed piece of chitin. Soft-bodied Precambrian life would have left behind a similarly faint trace, and that is why we do not find good fossils of it. The exact onset of the Cambrian Explosion keeps bouncing around in geological time, as researchers continue to do their fieldwork, but it is now thought to have begun about 530 million years ago. The really important point is that the Cambrian Explosion occurred during a very brief period of about 5 million years of geological time, some 500 – 600 million years ago.
The two key points of this finding in the fossil record are that the Cambrian Explosion occurred in a very brief amount of geological time, and that it occurred very recently – a mere 500 - 600 million years ago. Since life originated on Earth about 4,000 million years ago, the two big questions that the Cambrian Explosion presents are:
1. Why did it happen so quickly, once it got started?
2. Why did it take so long to finally happen?
Of the two questions, the second is the more perplexing, and also the more profound. If it took 3,500 million years for complex multicellular life to evolve on Earth, perhaps there was a good chance that it might never have evolved at all. That would certainly not bode well for our finding complex multicellular life elsewhere in the Universe, or for finding complex multicellular life that has further evolved to a level of intelligent consciousness that we could commune with, even if the Kepler space telescope should find a large number of Earth-like planets out there over the next few years.
Design Patterns – the Phyla of IT
Before proceeding further, we need to bring our IT readers up to speed on what exactly a phylum is in biology. A phylum is a basic body plan for earning a living in the biosphere and has some distinguishing characteristics. For example, Homo sapiens is in the phylum Chordata because we all have a spinal cord, while insects are in the phylum Arthropoda because they all have a jointed chitin exoskeleton. In IT a phylum is called a design pattern. Design patterns originated as an architectural concept developed by Christopher Alexander in the 1960s. In Notes on the Synthesis of Form (1964), Alexander noted that all architectural forms are really just implementations of a small set of classic design patterns that have withstood the test of time in the real world of human affairs, and that have been blessed by the architectural community throughout history for both beauty and practicality. Basically, given the physical laws of the Universe and the morphology of the human body, there are really only a certain number of ways of doing things from an architectural point of view that work in practice, so by trial and error architects learned to follow a set of well-established architectural patterns. In 1987, Kent Beck and Ward Cunningham began experimenting with the idea of applying the concept of design patterns to programming and presented their results at OOPSLA, the object-oriented programming conference, that year. So in IT a design pattern describes a certain design motif or way of doing things, just like a phylum describes a basic body plan. A design pattern is a prototypical design architecture that developers can copy and adapt for their particular application to solve the general problem described by the design pattern. This is in recognition of the fact that at any given time there are only a limited number of IT problems that need to be solved at the application level, and it makes sense to apply a general design pattern rather than to reinvent the wheel each time.
Developers can use a design pattern by simply adopting the common structure and organization of the design pattern for their particular application, just as living things adopt an overall body plan, or phylum, to solve the basic problems of existence. Just as the Cambrian Explosion was typified by the rapid onset of 35 phyla in the fossil record, the rise of design patterns in IT was closely associated with the rapid onset of object-oriented programming and the Internet Explosion in the early 1990s. In a similar manner, there are only so many ways to earn a living on Earth, and the biosphere seems to have come up with 35 basic body plans, or phyla, to accomplish that. It is interesting to note that no additional phyla ever evolved after the Cambrian Explosion, so it is rather baffling why all 35 phyla should have appeared at the same time, in a brief period of 5 million years at the base of the Cambrian.
The Light Switch Theory of the Cambrian Explosion
In In the Blink of an Eye, Andrew Parker proposes that the acquisition of vision by trilobites was the root cause of the Cambrian Explosion. Parker’s explanation for the Cambrian Explosion goes like this. During the last few hundred million years of the Precambrian, all 35 current phyla slowly appeared upon the Earth, but all had adopted very similar soft, worm-like bodies, with no distinguishing characteristics, and these soft worm-like bodies did not leave behind very good fossils. Apparently, the worm-like body plan was the optimum body design for the day, and there were no compelling reasons for improvement, as will be explained later. Then over a very brief period of a million years or so, trilobites developed eyes that could produce good images of their surroundings. Suddenly, trilobites could see all of these tiny bits of protein crawling around in worm-like bodies, providing the possibility for hearty meals. Andrew Parker explains that until the invention of an image-forming eye, the Precambrian predators of the Earth practiced passive predation, meaning that they just sat around waiting for prey to fall into their traps, like jellyfish loosely dangling their deadly tentacles, waiting for an unwitting passerby to be stung to death and then consumed. Without vision, it was very difficult for predators to locate their prey and actively pursue them. But once the trilobites developed sophisticated eyes, a dramatic arms race developed. Suddenly, just remaining still when a predator approached no longer worked, because sunlight streams down upon everything and makes everything visible. Under extreme selective pressures, Precambrian prey began to develop defensive armor in the form of hard exoskeletons with nasty spikes and spines to ward off potential attacks by the pesky trilobites. Thus, the soft, worm-like bodies of the Precambrian were no longer the optimal design.
The trilobites also developed hard parts to make it easier to capture and devour prey, and to avoid becoming the prey of other trilobites. Other phyla developed eyes as well, both as a defensive measure against the marauding trilobites and to help find their own prey.
Figure 1 – Fossil of a trilobite with eyes
So the basic idea behind the Light Switch theory is that the Cambrian Explosion was not the sudden appearance of 35 phyla with fossil-forming hard parts; rather, it was the appearance of the first practical eye, which allowed for active predation. The 35 phyla were already in place, but were all hiding in similar soft, worm-like bodies. Thus, it really was the arrival of active predation that changed everything. With active predation, the 35 already existing phyla were under extreme selective pressure to adopt expensive defensive measures in the form of hard parts. These expensive hard parts were not needed during the billions of years of passive predation in the Precambrian, so there had been no selective pressure to form them.
Parker’s Light Switch theory for the Cambrian Explosion goes a long way in explaining question number one outlined above:
1. Why did it happen so quickly, once it got started?
but it does not explain question number two very well:
2. Why did it take so long to finally happen?
Because question number two now really becomes:
2. Why did it take so long for a practical eye to evolve?
Why Did It Take So Long For the Eye to Evolve?
The trilobites did not have camera-like eyes such as ours, but compound insect-like eyes instead, made up of many individually lensed units with hard crystalline lenses composed of the transparent mineral calcite. However, our natural anthropocentric tendencies have always centered the evolutionary controversy over the origin of the eye upon the origin of the complex camera-like human eye. Even Darwin himself had problems with trying to explain how something as complicated as the human eye could have evolved through small incremental changes from some structure that could not see at all. After all, what good is 1% of an eye? As I have often stated in the past, this is not a difficult thing for IT professionals to grasp, because we are constantly evolving software on a daily basis through small incremental changes to our applications. However, when we do look back over the years to what our small incremental changes have wrought, it is quite surprising to see just how far our applications have come from their much simpler ancestors, and to realize that it would be very difficult for an outsider to even recognize their ancestral forms. In fact, with the aid of computers, many researchers in evolutionary biology have shown just how easily a camera-like eye can evolve. Visible photons have an energy of about 1 – 3 eV, which is about the energy of most chemical reactions. Consequently, visible photons are great for stimulating chemical reactions, like the reactions in chlorophyll that turn the energy of visible photons into the chemical energy of carbohydrates, or stimulating the chemical reactions of other light-sensitive molecules that form the basis of sight. In a computer simulation, the eye can simply begin as a flat eyespot of photosensitive cells that looks like this: |.
In the next step, the eyespot forms a slight depression, like the beginnings of the letter C, which gives the simulated eye some sense of image directionality, because the light from a distant source will hit different sections of the photosensitive cells on the back part of the C. As the depression deepens and the hole in the C gets smaller, the incipient eye begins to behave like a pinhole camera that forms a clearer, but dimmer, image on the back part of the C. Next, a transparent covering grows over the hole in the pinhole camera to provide some protection for the sensitive cells at the back of the eye, and a transparent humor fills the eye to keep its shape: C). Eventually, the transparent covering thickens into a flexible lens under the protective covering that can be used to focus light, and to allow for a wider entry hole that provides a brighter image, essentially decreasing the f-stop of the eye, like in a camera: C0).
So it is easy to see how a 1% eye could evolve into a modern complex eye through small incremental changes that always improve the visual acuity of the eye. Such computer simulations predict that a camera-like eye could evolve in as little as 500,000 years.
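The logic of such simulations is easy to sketch. The numbers below are illustrative assumptions in the spirit of these studies, not the actual published figures: even if each selected variant is a mere 1% improvement over its predecessor, compounding still produces a radically different structure in a geologically trivial amount of time.

```python
import math

# Back-of-the-envelope sketch of eye evolution by compounded 1% improvements.
# All numbers here are illustrative assumptions, not published figures.
total_change = 80_000        # assumed overall fold-change in eye morphology
step = 1.01                  # each selected variant is 1% better than the last
steps = math.log(total_change) / math.log(step)   # number of 1% steps needed

generations_per_step = 200   # assumed generations for each 1% step to spread
generations = steps * generations_per_step
years = generations          # assume roughly one generation per year

print(f"{steps:.0f} steps of 1% each, roughly {years:,.0f} years")
```

With these assumed inputs, the total comes out well under the 500,000-year figure quoted above, which is the point: small increments plus deep time are more than enough.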
Figure 2 – Computer simulations of the evolution of a camera-like eye
The eye has independently evolved at least 40 different times in the past 600 million years, so there are many examples of “living fossils” showing the evolutionary path. In Figure 3 below, we see that all of the steps in the computer simulation of Figure 2 can be found today in various mollusks. Notice that the human-like eye on the far right is really that of an octopus, not a human, again demonstrating the power of natural selection to converge upon identical solutions in organisms with separate lines of descent.
Figure 3 – There are many living fossils that have left behind signposts along the trail to the modern camera-like eye. Notice that the human-like eye on the far right is really that of an octopus.
So if the root cause of the Cambrian Explosion hinges upon the arrival of the eye upon the evolutionary scene, and as we have seen above, it is apparently very easy to evolve eyes, why did it take so long? In the very last chapter of In the Blink of an Eye, Andrew Parker tries to address this problem. Parker seems to come to the conclusion that there might have been a dramatic increase in sunlight at the Earth’s surface at the time of the Cambrian Explosion that made eyes physically realizable for the first time. I won’t go into all the details of the explanations offered for why sunlight could have dramatically increased a mere 600 million years ago, because I don’t think that such a dramatic increase in sunlight was really possible. Granted, our Sun is a main sequence star that is gradually getting brighter at the rate of about 1% every 100 million years, so 600 million years ago, the Sun was probably about 6% dimmer than today, and 1,000 million years ago it was perhaps 10% dimmer than today, but that is still much brighter than a cloudy day today, so there surely were plenty of photons bouncing around in the very deep past to see with.
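The Sun's gradual brightening is easy to put into rough numbers. Here is a minimal sketch using the linear rule of thumb above, about 1% brightening per 100 million years; the function name and its linear form are my own illustrative assumptions, not a stellar evolution model:

```python
def relative_brightness(mya, rate=0.01, per_my=100.0):
    """Sun's brightness relative to today, using the rough linear
    rule of ~1% brightening per 100 million years (mya = millions
    of years ago). Illustrative only, not a stellar model."""
    return 1.0 - rate * (mya / per_my)

print(relative_brightness(600))    # ~0.94, i.e. about 6% dimmer
print(relative_brightness(1000))   # ~0.90, i.e. about 10% dimmer
```

Even at 90% of today's output, the Precambrian Sun delivered vastly more light than an overcast day does now, which is why a photon shortage cannot explain the late arrival of eyes.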
Our Sun is indeed getting brighter because, under the high temperatures and pressures in its core, it is turning hydrogen, actually protons, into helium nuclei consisting of two protons and two neutrons. A helium nucleus has about the mass of four protons, but it is only a single particle, and the pressure of a gas depends chiefly upon the number of particles bouncing around, not upon their masses. So as fusion steadily replaces four particles with one, the Sun’s core must contract and grow denser with time to keep its pressure up. A core that is constantly getting denser means that gravity is also constantly getting stronger within the Sun’s core, and consequently, the pressure within the Sun’s core that resists the increasing pull of gravity must also rise to stave off the collapse of the core. The pressure within the Sun’s core can only increase by increasing its temperature, but that is easily achieved because a hotter, denser core also fuses protons into helium nuclei faster than a cooler, less dense core. The protons in a hotter, denser core are bouncing around faster and are in closer quarters too, so they are more likely to come close enough together for the attractive strong nuclear force to overcome the repulsive electromagnetic force between them that tends to keep them apart, and for the weak nuclear force to turn some protons into neutrons, forming helium nuclei. Thus a hotter, denser core produces more energy than a cooler, less dense core, and the generated energy has to go some place. The only place for it to go is away from the Sun, and the Earth just happens to lie in its path. Now a 1% increase per 100 million years might not sound like much, but even a 1% change in the Sun’s current brightness would dramatically change the Earth’s climate.
In fact, the only reason that the Earth has not already burned up is that, over the past 600 million years, vast amounts of carbon dioxide have been slowly removed from the Earth’s atmosphere by the biosphere and deposited upon the ocean floor as carbonate deposits, which were later carried down into the Earth’s interior at its many subduction zones. So it is the plate tectonics of the Earth that has kept the Earth at a reasonable temperature over the past 600 million years. Now we can see that there really must have been nearly as many photons bouncing around on the Earth’s surface in the deep past as there are today, or the Earth would have been completely frozen over for the whole Precambrian. The Earth actually did completely freeze over during a couple of intermittent Snowball Earth episodes during the Precambrian that lasted about 100 million years each, with the last one occurring about 600 – 700 million years ago, but by and large, the Earth was mainly ice-free during the Precambrian, thanks to the very high levels of atmospheric carbon dioxide and methane at the time.
So if there really were lots of photons bouncing around for billions of years during the Precambrian, why were there no eyes to see them with? Let us now turn to the evolutionary history of software for some possible clues.
Using the Evolutionary History of Software as a Model for the Cambrian Explosion
It is possible to glean some insights into why it took so long for the eye to evolve by examining the evolutionary history of software on Earth over the past 2.2 billion seconds, ever since Konrad Zuse cranked up his Z3 computer in May of 1941. Since living things and software are both forms of self-replicating information that have evolved through the Darwinian mechanisms of innovation and natural selection (see Self-Replicating Information for details), and both have converged upon very similar paths through Daniel Dennett’s Design Space, as each had to deal with the second law of thermodynamics in a nonlinear Universe, perhaps we could look to some of the dramatic events in the past evolution of software, when software seemed to have taken similar dramatic leaps, in order to help us understand the Cambrian Explosion. Now although many experts in computer science might vehemently disagree with me as to what caused these dramatic leaps in the evolution of software and exactly when they might have happened, at least we were around to witness them actually happening in real time! And because software is evolving about 60 million times faster than life on Earth, we also have the advantage of reviewing a highly compressed evolutionary history, which has also left behind a very good documented fossil record. Before proceeding, it might be a good idea to review the SoftwarePaleontology section of SoftwareBiology to get a thumbnail sketch of the evolutionary history of software over the past 2.2 billion seconds. When reviewing the evolutionary history of software, it is a good idea to keep in mind that 1 software second ~ 1 year of geological time, and that a billion seconds is about 32 years.
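The time conversions used throughout this discussion are easy to verify. A quick sketch (the variable names are my own):

```python
# Seconds in a year, including leap years.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 31.6 million

# One billion seconds expressed in years:
print(f"1 billion seconds = {1e9 / SECONDS_PER_YEAR:.1f} years")

# Life's ~4,000 million-year history compressed into software's
# ~2.2 billion seconds gives the relative speedup of software evolution:
speedup = (4_000e6 * SECONDS_PER_YEAR) / 2.2e9
print(f"software evolves roughly {speedup / 1e6:.0f} million times faster")
```

Both figures in the text check out: a billion seconds is about 32 years, and comparing the two full histories gives a speedup of roughly 60 million.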
Softwarepaleontology does indeed reveal many dramatic changes to software architecture in deep time that seemed to have occurred overnight, but in most cases, closer examination reveals that the incipient ideas arose much earlier, and then slowly smoldered for many hundreds of millions of seconds before becoming ubiquitous. Here are just a few examples:
1. Mainframe software - The Z3 became operational in May of 1941, and was the world’s first full-fledged computer, but it was not until the introduction of the IBM OS/360 in 1965, that computers took the corporate world by storm. So it took about 24 years, or 757 million seconds, for mainframe software to really catch on.
2. Structured programming - Up until 1972, software was written in an unstructured manner, like the simple unstructured prokaryotic bacteria that dominated the early Earth for the first few billion years after its formation. Then in 1972, the structured programming revolution began, and software came to be composed of small blocks of code using a limited set of control structures. So it took about 31 years, or 978 million seconds, for structured programming techniques to catch on.
3. PC software - The Apple II came out in 1977, and the IBM PC followed in 1981, both with command-based operating systems, like Microsoft MS-DOS. But these command-based operating systems required end-users to learn and use many complex commands to operate their PCs, like the complex commands that PC programmers used on the command-based Unix operating systems that they learned to program on. The MS-DOS applications also did not have a common user interface, so end-users had to learn how to use each MS-DOS application on its own. To address these problems, the Macintosh came out in 1984, with the first commercially successful operating system with a graphical user interface, known as a GUI, which allowed end-users to drag-and-drop their way around a computer, and the Macintosh applications also shared a common user interface, or look and feel, that made it easier to learn new applications. However, the Macintosh GUI only ran on expensive Macintosh machines, so MS-DOS still reigned supreme on the cheaper IBM PC clones. Microsoft came out with a very primitive GUI operating environment, called Windows 1.0, in 1985 that ran on top of MS-DOS, but it was very rudimentary and not very popular. IBM came out with their OS/2 1.1 GUI in 1988, but it required much more memory to run than MS-DOS, so again, price was a limiting factor. Finally, Microsoft came out with Windows 3.0 in 1990. Windows 3.0 was really only a GUI operating environment that ran on top of MS-DOS, but it could run on cheap low-memory IBM PC clones, and it looked just as good as the expensive Macintosh or OS/2 machines, so it was a huge success. Thus, it took about 13 years, or 410 million seconds, for PC software to finally catch on.
4. Object-oriented programming - Object-oriented programs are the implementation in software of multicellular organization. Multicellular organisms first appeared on the Earth about 900 million years ago. Simula, the first object-oriented programming language, was developed by Dahl and Nygaard over a three-year period from 1962 – 1965, and in the period 1983 – 1985 Stroustrup developed C++, which introduced the corporate IT world to object-oriented programming. But object-oriented programming really did not take off until 1995, with the introduction of the Java programming language. So it took about 30 years, or 947 million seconds, for object-oriented programming to really catch on.
5. Internet Explosion - The Internet was first conceived by the Defense Department’s Advanced Research Projects Agency, or ARPA, in 1968. The first four nodes on the ARPANET were installed at UCLA, the Stanford Research Institute (SRI), the University of Utah, and the University of California in Santa Barbara in 1969. However, it was not until 1995 that the Internet changed from being mainly a scientific and governmental research network into the ubiquitous commercial and consumer network that it is today. So again, it took about 26 years, or 821 million seconds, for Internet software to finally catch on.
6. SOA – Service Oriented Architecture - With SOA, client objects can call upon the services of component objects that perform a well-defined set of functions, like looking up a customer’s account information. Thus, SOA architecture is much like the architecture of modern multicellular organisms, with general body cells making service calls upon the cells of the body’s organs, or even the services of cells within the organs of other bodies. Thus, the SOA revolution is somewhat similar to the Cambrian Explosion. SOA first began with CORBA in 1991, but it really did not catch on until 2004, when IBM began to extensively market the concept. So again, it took about 13 years, or 410 million seconds, for SOA to catch on.
Was the Cambrian Explosion a Real Explosion?
So now we see that the evolution of software over the past 2.2 billion seconds has also proceeded along in fits and starts, with long periods of stasis interrupted by apparently abrupt technological advances. As I pointed out in When Toasters Fly, this is simply evidence of the punctuated equilibrium model of Stephen Jay Gould and Niles Eldredge. For some reason, the spark of a new software architectural element spontaneously arises out of nothing, but its significance is not recognized at the time, and then it just languishes for many hundreds of millions of seconds, hiding in the daily background noise of IT. And then, just as suddenly, after perhaps 400 – 900 million seconds, the idea finally catches fire and springs into life. Now why do the evolution of living things and the evolution of software both behave in this strange way? My suggestion is to simply take a good look at the phrase “Cambrian Explosion” – what do you see? Well, it appears that some kind of explosion occurred during the Cambrian, and that is the key to the whole business – it really was an explosion! In Is Self-Replicating Information Inherently Self-Destructive?, I discussed how negative feedback loops are stabilizing mechanisms, while positive feedback loops are destabilizing mechanisms that can lead to uncontrolled explosive processes. I also explained how in 1867, Alfred Nobel was able to stabilize the highly unstable liquid known as nitroglycerin, by adding some diatomaceous earth and sodium carbonate to it, to form the stable solid explosive we now call dynamite. The problem with nitroglycerin was that the slightest shock could easily cause it to detonate, but dynamite requires the substantial activation energy of a blasting cap to set it off. In Figure 4 below we see the potential energy function of dynamite, depicted as a marble resting in the depression of a small negative feedback loop, superimposed upon a much larger explosive positive feedback loop.
So long as the dynamite is only subjected to mild perturbations or shocks, it will remain calmly in a stable equilibrium. However, if the marble is given a sufficient shock to get it over the hump in its potential energy function, like a stick of dynamite subjected to the detonation of a blasting cap, the marble will rapidly convert all of its potential energy into mechanical energy, as it quickly rolls down its potential energy hill, like the molecules in nitroglycerin releasing their chemical potential energy into the heat and pressure energy of a terrific blast. This is the essence of the punctuated equilibrium model. For most times, predators and prey are in a stable equilibrium, but then something happens to disturb this stable equilibrium to the point where it reaches a tipping point, and crosses over from the stability of negative feedback loops, to the explosive instability of positive feedback loops. Predators and prey then enter into an unstable arms race driven by positive feedback loops, and that is when evolution kicks into high gear and gets something done for a change, like creating a new species or technology.
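The marble-in-a-well picture can be sketched numerically. The toy model below is purely illustrative (the potential function and all of its parameters are my own assumptions, not a physical model of dynamite): a shallow quadratic well represents the stabilizing negative feedback loop, and a steep downhill slope beyond the barrier represents the explosive positive feedback loop.

```python
# Toy model of Figure 4: a marble in a shallow local well perched above
# a much larger downhill slope. All numbers are illustrative assumptions.

def potential(x):
    """Shallow stabilizing well around x = 0; past the barrier at x = 1,
    a steep 'explosive' slope that runs away downhill."""
    if x < 1.0:
        return x ** 2                 # negative feedback: restoring well
    return 1.0 - 10.0 * (x - 1.0)     # positive feedback: runaway slope

def settle(x0, steps=1000, dt=0.01):
    """Overdamped dynamics: the marble slides downhill along the
    numerical gradient of the potential."""
    x = x0
    for _ in range(steps):
        h = 1e-6
        grad = (potential(x + h) - potential(x - h)) / (2 * h)
        x -= dt * grad
    return x

print(settle(0.5))   # small shock: relaxes back toward the well at x = 0
print(settle(1.2))   # past the barrier: runs away down the explosive slope
```

A perturbation that leaves the marble inside the well dies out, while a perturbation that pushes it past the barrier grows without bound, which is exactly the tipping-point behavior described above.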
Figure 4 – Like dynamite, new technologies like the eye are trapped in a stable equilibrium by negative feedback loops, until sufficient activation energy comes along to nudge them into a positive feedback loop regime, where they can explode and become ubiquitous
So my suggestion is that the Cambrian Explosion was indeed a real explosion, in the form of an uncontrolled arms race between advancing eyeballs and defensive hard parts. I think that Andrew Parker may have, at long last, really gotten the root cause of the Cambrian Explosion right. The root cause of this arms race was a new form of predation: active predation, aided by a new visual sense made possible by eyes, and this new form of predation was the blasting cap that set it all off. But what set off the blasting cap? My suggestion would be – nothing in particular. When you insert a blasting cap into a stick of dynamite, the blasting cap has a pair of copper wire leads running away from it that are connected together at their far end with a grounding clip, so that stray electrical voltages do not accidentally set off the blasting cap. To detonate the blasting cap, you remove the grounding clip and then connect the lead wires to a battery-operated detonator. As a young geophysicist, exploring for oil on a seismic crew in the swamps of Louisiana, I vividly recall a fistfight that broke out one day between two crew members. Our explosives technician, known as the Loader, was working on a long string of explosive Nitramon cartridges to be later lowered down into a shot hole to generate seismic waves in the Earth. We were behind schedule, so at the same time, the crew foreman, known as the Observer, was busily using a pocket knife to scrape away the plastic insulation from the lead wires running from the recording truck to the blasting cap leads. The trouble was that the blasting cap had already been inserted into the first Nitramon cartridge in the string of cartridges, and the grounding clip had also been removed. So when the Loader saw what the Observer was doing back at the recording truck, he ran over and tore into him with a vengeance, screaming, “Don’t you go messin’ with my life!”.
Our Loader was rightly concerned that the contact of the steel pocket knife blade with the copper lead wires could have triggered a voltage spike that could have detonated the blasting cap and the Nitramon string that he was holding!
So here is my take on the root cause of the Cambrian Explosion. What seems to happen with most new technologies, like eyeballs or new forms of software architecture, is that the very early precursors do not provide that much bang for the buck. If you look at the slightly depressed eyespot of step 2 in Figure 2 above, you can imagine that it probably did not provide very much of a selective advantage in the Precambrian, with all those blind and passive predators stumbling around in the dark, and it probably was not that great at locating prey either. So innovative new technologies, like eyeballs or the Internet, seem to languish for hundreds of millions of years (or seconds), waiting for a blasting cap to go off to really get things started, because initially, these new technologies are just not that great at doing what they ultimately can do. However, once these new technologies do catch fire, they seem to rapidly explode into dominance, like the white-hot ball of gas at 5,000 K from the blast of nitroglycerin in a stick of dynamite. I like to think of this supplement to the Light Switch theory of the Cambrian Explosion as the Dynamite Model of the Cambrian Explosion. Just think of a stick of dynamite with an ungrounded blasting cap, patiently waiting for a stray voltage to come along and set it off. I think the Dynamite Model can help to explain the long gap between the onset of multicellular organisms about 900 million years ago, and the Cambrian Explosion that followed about 400 million years later.
So perhaps the Cambrian Explosion really got started by some soft-bodied trilobites that became stranded in a region with very few prey, where whatever those trilobites were using to find prey at the time was no longer sufficient to keep them alive for very long. Then along comes a single trilobite with a mutation that provided a slightly better-than-usual visual field from its very primitive precursor of a compound eye, and that single, lone, hungry trilobite managed to spot a small wiggling worm on the seafloor within striking range. As with the evolutionary history of software, such a minor event would quickly get lost in the daily noise of everyday life, and that is why it is so difficult to put your finger on the exact cause of a technological explosion like the Cambrian Explosion, but I bet that something like that is all that it took.
Comments are welcome at email@example.com
To see all posts on softwarephysics in reverse order go to: