Monday, May 03, 2010

The Origin of Software and the Origin of Life

A few weeks back, I finished Confessions of an Alien Hunter (2009) by Seth Shostak. In this very interesting book on SETI, the Search for Extraterrestrial Intelligence, Shostak proposes that if we ever do finally make contact with an alien civilization, we will not be talking to carbon-based life forms, but machines instead. Shostak is of the opinion that any carbon-based civilization capable of interstellar radio communications will necessarily be of such a technological level that their machines will have already merged with their carbon-based predecessors, so that by the time we finally do make contact, the metamorphosis will have already been completed. I agree with Shostak for the most part, but I suspect that we will not be talking to machines – we will be talking to software. And it will probably be our software talking to their software. This will be a good thing because software is much better suited than we are for the rigors of interstellar telecommunications, with their pregnant pauses of several hundred years between exchanges due to the limitations set by the finite speed of light. We have already trained software to stand by for a seemingly endless eternity of many billions of CPU cycles, patiently waiting for you to finally push that Place Order button on a webpage, so waiting an additional one or two hundred years for a reply should not bother software in the least.

But this raises the question – is software ubiquitous in our Universe, or is it just an extraordinarily rare fluke of nature only to be found in our Solar System? Understanding the prevalence of software within the Universe is important because in Self-Replicating Information we saw how software is rapidly becoming the dominant form of self-replicating information on Earth and if it is doing the same elsewhere in the Universe, that would be a valuable thing to know. Software is currently forming very strong parasitic/symbiotic relationships with nearly every meme-complex on this planet, and in the process, is rapidly domesticating our minds to churn out ever more software. This will likely continue until software itself is able to self-replicate, and then it could really take off and might end up exploring our home galaxy, the Milky Way, in von Neumann probes. So it is important to determine the prevalence of software elsewhere in the Milky Way since it might be heading our way at this very moment! This is not necessarily a bad thing. Just as the domestication of our minds by meme-complexes brought us the best things in life like art, music, literature, science, and civilization, my hope is that our domestication by software will help elevate mankind as well, even if it happens to come from an alien civilization.

As with SETI, we first need to understand how all this software bootstrapped itself into existence on Earth, and if this bootstrapping process seems to be easily achieved, then the odds are that software has emerged elsewhere in our Universe too. Since software and living things are both forms of self-replicating information, it makes sense to look to the origin of life on Earth as a model and to see how it bootstrapped itself into existence first, before trying to figure out where software came from as well.

In order to do that, I would like to extend some of the ideas found in Software Chaos and Self-Replicating Information by delving a bit deeper into complexity theory and seeing how self-organizing emergent behaviors arising in nonlinear chemical networks may have led to the origin of life, and ultimately the origin of software too. The bizarre behaviors of nonlinear networks may also be responsible for the strange transient behaviors that invariably arise in the complex nonlinear networks of hundreds or thousands of servers supporting modern high-volume websites, so there may be some practical value, from an IT perspective, in gaining a better understanding of complexity theory. We touched briefly upon complexity theory towards the end of Software Chaos and saw that it was an intellectual outgrowth of the development of chaos theory in the late 20th century. The basic idea of complexity theory is that the whole is greater than the sum of the parts. There are certain phenomena that only appear when a network of things interact, and you cannot understand these phenomena by simply using the reductionist approach that has served science so well in the past. In reductionism, you bust up an object or problem into its parts, and by figuring out how the parts work, you figure out how the thing works as a whole. This approach works well for things like cars, where you can break the car down into subsystems of interacting parts that all behave in a linear manner, but it does not work very well for nonlinear systems like traffic jams of cars. Traffic jams on metropolitan highway systems basically arise because cars can brake faster than they can accelerate, but that little tidbit of knowledge will not help you predict your commute time if somebody accidentally drops a shovel off a landscaping truck in the center lane of a highway during rush hour. Such a disaster can tie up an entire city. As my three-year-old daughter once commented in such a situation, “Daddy, why don’t the cars in front just go faster?” Complexity theory hopes to explain the complex behaviors of nonlinear systems, far from thermodynamic equilibrium, that seem to emerge on their own as networks of interactions form.

So let’s start with the origin of life and see how complexity theory can be of help. As I explained in Self-Replicating Information, I think the main stumbling block that biologists have with figuring out the origin of life on Earth is that they are aiming one level too low. They need to figure out the origin of self-replicating chemical information first, then the origin of life will just fall out. Figuring out the origin of self-replicating information is also a far easier task, especially since biologists still have not even come up with a non-contentious definition for the stuff we call life in the first place. Defining self-replicating information is far easier.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Since living things and software are both forms of self-replicating information, there is much to be learned from studying the origins of both, since they are both just one step down from their self-replicating informational roots.

In Self-Replicating Information we explored Freeman Dyson’s two-step theory for the origin of life on Earth outlined in his book Origins of Life (1998). In Dyson’s view, life begins as a series of metabolic reactions within the confines of a phospholipid membrane container, just as Alexander Oparin hypothesized in his book The Origin of Life (1924). The key advantage of having a number of metabolic pathways form within a phospholipid membrane container is that it is a form of self-replicating information with a high tolerance for errors, which circumvents the “error catastrophe” that plagues the RNA-world theory for the origin of life. Think of these reactions as a large number of autocatalytic metabolic do-loops, like scaled-down Krebs cycles, processing organic molecules and self-replicating via autocatalytic processes that feed off smaller monomers, ensuring that the molecules constituting the metabolic do-loops continue on as a form of self-replicating information too. The second step of Dyson’s theory occurs when parasitic RNA forms within some of the autocatalytic do-loops. The A, C, U, and G nucleotides of RNA emerge first as simple byproducts of the autocatalytic do-loops, and then one fateful day a handful of the nucleotides start self-replicating with the aid of catalytic reactions already present. The RNA begins as a parasitic disease feasting upon the autocatalytic metabolic do-loops, but soon adopts a symbiotic relationship with them, in keeping with the work of Lynn Margulis. The symbiotic RNA then rapidly domesticates the “wild” autocatalytic metabolic do-loops to produce ever more RNA of ever more complexity. Finally, parasitic DNA emerges by simply substituting a T nucleotide for a U nucleotide in the mix of A, C, U, and G nucleotides used by RNA. Eventually, the parasitic DNA again forms a symbiotic relationship with both the RNA and the autocatalytic metabolic do-loops. The DNA then domesticates the “wild” RNA to form mRNA and tRNA to make enzymes that replicate ever more DNA. This is a compelling argument, but it still leaves open the question of how the autocatalytic metabolic do-loops arose in the first place. How did these early forms of self-replicating information arise in a nonlinear Universe subject to the second law of thermodynamics and deterministic chaos?

In At Home in the Universe (1995), Stuart Kauffman, of Santa Fe Institute fame, helps explain how such autocatalytic metabolic do-loops are not only possible but are essentially inevitable, given our current understanding of complexity theory. Kauffman calls this apparent emergence of self-organized order in nonlinear systems far from thermodynamic equilibrium “order for free”. His contention is that the extreme order found within the biosphere is not entirely the result of Darwinian natural selection acting upon purely random variations alone. For example, Kauffman points out that the emergent order found within a phospholipid bilayer, which is simply seeking to minimize its free energy and which forms the foundation upon which all biological membranes are built, is an example of an emergent “order for free” design pattern that has dominated the biosphere for more than 4 billion years.

O  <- The phosphate end of a phospholipid has a net electrical charge
|| <- The tails of a phospholipid do not have a net electrical charge


On the outside of a cell membrane there are polar water molecules “+” that attract the charged phosphate ends of phospholipids “O”:

++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
||||||||||||||||||||||||||||||||||||||||||||||||||||
||||||||||||||||||||||||||||||||||||||||||||||||||||
OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++

On the inside of a cell membrane, there are also polar water molecules “+” that attract the charged phosphate ends of phospholipids “O”, resulting in a bilayer.

Kauffman’s proposal, that not all order within the biosphere stems solely from natural selection, might be considered a bit heretical by some, but we need to remember that the concept of Darwinian evolution is a meme itself, so it too has been subject to evolutionary changes over the course of history via the Darwinian concepts of innovation and natural selection. In the Origin of Species (1859), Darwin simply noted that variations within a species were always to be found and that these variations could be passed along from one generation to the next. Natural selection would then do the job of selecting for favorable adaptations within a population, resulting in an increased prevalence of more favorable adaptations within a species, to the point where new species could arise from older species. Darwin did not speculate upon the mechanisms responsible for generating these favorable adaptations, but over the years, with the discovery of genetics and of the genetic apparatus of DNA and RNA replication, there arose a consensus amongst biologists that the favorable variations arose from random mutations to the sequences of nucleotides found within the fundamental informational stores of DNA and RNA. By far, most of these mutations were detrimental, so it was the job of natural selection to winnow out the very few favorable mutations from the vast sea of unfavorable mutations. The net result was the view that all order within the biosphere is the product of natural selection alone.

But Kauffman proposes that natural selection might have a helping hand. What if natural selection did not have to choose from a purely random set of mutations? What if there were some pre-built ordered forms that naturally emerged from a vast network of chemical interactions, naturally creating a form of self-replicating information in a chemical sense? This would be the perfect explanation for why life seems to have so quickly bootstrapped itself into existence on Earth. The initial order of autocatalytic metabolic do-loops within phospholipid bilayers, and the phospholipid bilayers themselves, all came as “order for free”.

Kauffman begins with a simple experiment. Consider 10,000 buttons or nodes scattered upon a floor. Pick up any two buttons at random and connect them with a thread. Continue doing so, while noting how many buttons rise off the floor when you pick up a button to connect it to another button. After a while, you will discover that several buttons rise off the floor when you pick up the first button. The buttons are beginning to form a network via their interconnecting threads. When you get to about the 5,000th thread something strange happens. All of a sudden the system goes through a phase change, like water freezing into ice, and when you pick up a single button, something like 8,000 buttons rise off the floor as an interconnected network. Now think of the buttons or nodes as chemical products and the threads as chemical reactions between them. For example, think of each “+” sign below as a chemical reaction, or thread, producing a new chemical product.

A + A → AA
B + B → BB
A + B → AB
AA + BB → AABB
AB + A → ABA
BB + AABB → BBAABB
ABA + ABA → ABAABA

Now the products of some of these reactions will have catalytic properties that greatly foster the rate at which some of the other chemical reactions occur. As the number of chemical buttons or nodes increases, the number of reaction threads between them will increase even faster, producing even more chemical products or nodes and even more potential catalysts. Kauffman’s fundamental idea is that after a critical diversity of chemical nodes and reactions is reached, a phase transition will occur and a self-sustaining autocatalytic set of reactions will form. This autocatalytic set of chemical reactions will be a form of self-replicating information that feeds off small monomers, like A and B, in a dissipative manner to form larger chemical products that persist through time by making copies of themselves. In fact, the whole autocatalytic network of chemical reactions will be a form of self-replicating information persisting through time. But for this to work, such an autocatalytic network must be stable over time, and not quickly dissipate into disorder.
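
You can watch Kauffman’s buttons-and-threads phase transition happen in a quick simulation. Below is a minimal sketch in Python (the union-find bookkeeping and the particular checkpoints printed are my own choices, not Kauffman’s): it ties random pairs of 10,000 buttons together with threads and reports the size of the largest connected cluster as the threads accumulate. Somewhere around the 5,000th thread, the largest cluster abruptly jumps from a small knot of buttons to the majority of them:

# A minimal sketch of Kauffman's buttons-and-threads experiment.
# Buttons are nodes; each thread joins two random buttons. A union-find
# structure tracks the clusters, and we report the largest cluster size
# as threads accumulate - it jumps abruptly near 5,000 threads.
import random

N = 10000                           # number of buttons on the floor
parent = list(range(N))             # union-find: every button starts alone
size = [1] * N                      # cluster size, valid at root buttons

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path compression
        i = parent[i]
    return i

def union(i, j):
    ri, rj = find(i), find(j)
    if ri == rj:
        return                      # already in the same cluster
    if size[ri] < size[rj]:
        ri, rj = rj, ri
    parent[rj] = ri                 # hang the smaller cluster on the larger
    size[ri] += size[rj]

for thread in range(1, 8001):
    union(random.randrange(N), random.randrange(N))
    if thread % 1000 == 0:
        largest = max(size[find(i)] for i in range(N))
        print(f"{thread} threads: largest cluster = {largest} buttons")

The same sort of abrupt jump is what Kauffman expects when the diversity of chemical products and reactions crosses its critical threshold.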

To address this issue, Kauffman next introduces the idea of Boolean nets. Think of an array of light bulbs forming a network of 100,000 nodes. Each light bulb can be either on “1” or off “0”, forming 2^100,000 combinations or states. The sum total of all possible states forms the state space for the configuration of light bulbs. This is very similar to the state space for poker hands and software described in The Demon of Software. Now imagine that the array of light bulbs operates like a computer with a system clock. For each tick of the system clock, the state of any given light bulb in the array is the result of a Boolean operation performed upon a number of neighboring light bulbs that are connected to it by wires. For example, it might be a simple Boolean “or” operation upon two of its closest neighbors.

1 = 1 + 1
1 = 1 + 0
1 = 0 + 1
0 = 0 + 0

Kauffman calls this an N-K Boolean network, where N is the number of nodes or light bulbs and K is the number of nodes in the Boolean operation that determines the state of a given light bulb. The Boolean operation for each light bulb is a random series of “and” and “or” operations upon its K neighbors. In the above example, N = 100,000 and K = 2. Now set the 100,000 light bulbs to a random initial pattern of “on” and “off” light bulbs, start up the system clock, and watch what happens. The array will start blinking on and off in strange patterns with each tick of the system clock. Think of each tick of the system clock and its resulting pattern of 100,000 “on” and “off” light bulbs as a frame on a piece of old-time movie film. The sequence of frames of blinking light bulbs along the movie film is called a trajectory. Each initial starting configuration of 100,000 “on” and “off” light bulbs will follow its own “movie” or trajectory through state space. Now the interesting thing is that after an initial meandering trajectory through the state space, the N-K network will eventually settle down into a repeating cycle of light bulb patterns that will continue on forever. Its “movie” will become an endless repetition of the same sequence of frames, like when they keep showing you the same TV commercial over and over. This repeating pattern of blinking light bulbs is called a state cycle.
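
A small N-K Boolean network is easy to simulate as well. The sketch below is my own toy implementation in Python, with N scaled down to 20 light bulbs so that the run finishes quickly: it wires each bulb to K random neighbors, hands each bulb a random Boolean function of its inputs, and then runs the system clock until a state repeats, which marks the entrance to the state cycle:

# A minimal sketch of a random N-K Boolean network (assumed toy sizes:
# N = 20 light bulbs, K = 2 inputs per bulb). Starting from a random
# initial pattern, we step the system clock until the trajectory repeats
# a state, then report the transient length and the state cycle length.
import random

N, K = 20, 2
# each bulb gets K random input bulbs and a random Boolean function,
# stored as a lookup table over the 2**K possible input combinations
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def tick(state):
    new = []
    for n in range(N):
        idx = 0
        for i in inputs[n]:
            idx = (idx << 1) | state[i]   # pack the K input bits into an index
        new.append(tables[n][idx])
    return tuple(new)

state = tuple(random.randint(0, 1) for _ in range(N))
seen = {state: 0}                         # state -> tick when first seen
t = 0
while True:
    state = tick(state)
    t += 1
    if state in seen:                     # the trajectory has closed a loop
        print(f"transient = {seen[state]} ticks, "
              f"state cycle length = {t - seen[state]} ticks")
        break
    seen[state] = t

Rerun it a few times: different random networks and starting patterns fall into different attractors, but the state cycles found tend to be remarkably short compared to the million-odd states in the toy network’s state space.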

Now some initial patterns will fall into state cycles with very short running trajectories that only explore a handful of states in the state space before repeating the whole sequence over and over, while others will find very long-running state cycles that explore an astronomically large number of states and could conceivably explore all 2^100,000 possible states in the state space before repeating them all over again. A stable system needs to have a small state cycle and a set of trajectories that do not easily veer off into the far depths of the state space never to return. This is accomplished with the emergence of the strange attractors that we saw in Software Chaos. Many initial patterns will have trajectories that eventually fall into the same repeating pattern of blinking light bulbs with the same state cycle – an attractor. The whole array will have a large number of these attractors that drain many initial patterns into the same repeating pattern of blinking lights. The initial patterns that drain into the same state cycle are members of a basin of attraction, and this can be thought of like the topography of a geological basin draining water into a lake. Just as the drainage patterns of differing topographies can vary, some of these basins of attraction will have very small state cycles that trap the network in a very small portion of the available state space, while others will have much larger basins of attraction covering a much larger portion of the state space.

These basins of attraction can also be fairly insensitive to small perturbations or changes. Flip one of the light bulbs in a pattern from “on” to “off”, and the trajectory will likely fall back into the same state cycle pattern of the attractor basin. This occurs because a typical attractor drains a sufficiently large set of trajectories with similar patterns, so the odds are that a small change to one trajectory will land the network back onto another trajectory that is also drained by the same attractor basin. Dynamical systems like these, with small state cycles that are stable over time, are exactly what is needed for the stable autocatalytic sets of metabolic chemical reactions necessary to form the first step in the origin of life. It is “order for free” that just spontaneously arises within a nonlinear network, far from equilibrium, through a process of self-organization.

But not all state cycles are stable. Some are subject to the “butterfly effect” we saw in Software Chaos, and the slightest perturbation will cast the network off into a seemingly never-ending state cycle that explores nearly all possible states, or worse yet, into a state cycle with a period of one that “freezes up” and constantly displays the same pattern over and over. Thus a Boolean network can be in one of three regimes. It can be in an ordered and stable regime, orbiting in a state cycle isolated in a small attractor basin exploring a small portion of the state space; it can be in a chaotic regime, in which the slightest perturbation has it careening off into the far depths of state space never to return; or it can be on the “edge of chaos”, where the most complex behaviors are to be found. Darwin essentially proposed that organisms existed in stable attractors that allowed for small changes to be selected for by natural selection. Kauffman argues that in order for evolution to even be possible, these attractors with small and stable drainage basins must easily emerge on their own, in an “order for free” manner, otherwise natural selection would not even have anything to select from.

Kauffman then describes the conditions under which Boolean networks can display this profound order or profound chaos. At the extreme boundaries, we have two cases for a network with N nodes: K = 1 and K = N. For large networks with a substantial N, but with K = 1, where the state of a given node is simply determined by just one of its neighbors, we find that the network rapidly converges to a stable state cycle with a period of one. The whole network just freezes up into a constant pattern of lights that never changes. At the other extreme, where K = N and the state of a given light bulb depends upon the state of all the other light bulbs in the network, including itself, it is found that the length of the state cycles is equal to the square root of the number of possible states in the state space. For our network of 100,000 light bulbs with 2^100,000 possible states (about 10^30,103 states), that comes to a state cycle of 10^15,052 states, or a “1” followed by 15,052 zeroes! Such a state cycle is essentially infinitely long from a practical perspective and will never be seen to complete even one cycle before all the neutrons and protons in the Universe decay and the light bulbs disappear. However, even such K = N networks do have some emergent order. It is found that the number of attractors for a K = N network is approximately equal to N/e, where e is our old friend e = 2.71828. So our network of 100,000 light bulbs would have about 37,000 attractors, which might seem like a large number but is quite small compared to the 10^30,103 states in the state space. But such K = N networks are highly chaotic and subject to the “butterfly effect”. Change just one light bulb from “on” to “off” and the network will likely land upon a different attractor in the set of 37,000 possible attractors, each with a state cycle of possibly 10^15,052 states, so the network will likely careen off into the far depths of state space never to return to its original attractor basin. You cannot build autocatalytic do-loops with a K = N network.

Most Boolean networks are indeed chaotic, even with a small K of 4 or 5. However, for a K = 2 network, where the state of each light bulb is determined by only two of its neighbors, some magic emerges on its own. For a K = 2 network, the length of a state cycle is not the square root of the number of states in the state space, but approximately the square root of the number of nodes N in the network. So for our network of 100,000 blinking light bulbs with 10^30,103 possible states in its state space, the network quickly settles down into a state cycle with a length of only about 316 states, the square root of 100,000! It turns out that these K = 2 networks are also very stable. Set the 100,000 light bulbs to some random initial configuration, start up the system clock, and the network will quickly fall into an attractor basin with a period of about 316 states, and it will likely stay there, because even if you should perturb the network by randomly switching one of the light bulbs from “on” to “off”, it will stay in the same attractor basin and will return to its previous state cycle. So a K = 2 network is exactly what we need to form a stable autocatalytic network of chemicals with a short state cycle that explores a very small portion of an immense state space.

So here are the three regimes we previously discussed – stable, chaotic, and on the edge of chaos – simply defined by a single numerical parameter K. Start out a K = 1 network with an arbitrary pattern of “on” and “off” light bulbs, start up the system clock, and after a brief trajectory of nonrepeating patterns of blinking lights, the network quickly freezes up into a constant, never-changing pattern of light bulbs that is dead. Start up a K = 4 network and it will quickly veer off into a seemingly never-ending state cycle, and if you change just one light bulb from “on” to “off” along the way, the network will spin off in an entirely different direction never to return. A K = 4 network is totally chaotic. Finally, do the same for a K = 2 network and you will find that after a brief trajectory of patterns, the network falls into a state cycle of about 316 repeating patterns and that this state cycle is stable to slight perturbations. A K = 2 network is on the edge of chaos, still stable, but with enough variety in its 316 repeating patterns to still be interesting.

The reason for the stability of a K = 2 network can easily be seen. Start out with a square grid of 100,000 light bulbs in some random pattern and start up the system clock. Initially, all the light bulbs in the network will be happily blinking in different patterns, but as the network enters into its state cycle, you will begin to see sections of the grid freeze up with the light bulbs constantly “on” and other sections of the grid with the light bulbs constantly “off”. Between these static zones, you will still see isolated islands of twinkling lights that are constantly going “on” and “off”. If you flip a light bulb from “on” to “off” in one of these static zones, it just flips back to its original state to blend in with its static neighbors. If you flip a light bulb in one of the twinkling islands, you will see the perturbation ripple through the whole island of blinking light bulbs, but the perturbation wave will not penetrate the surrounding static zones of constant “on” or “off” light bulbs, so the perturbation quickly dissipates and dies away on its own. On the other hand, for a K = 4 network, you will not see static zones of constantly “on” and “off” light bulbs form. Instead, you will see a sea of twinkling lights and if you perturb just one light bulb, the perturbation wave will ripple across the entire network, a dramatic visual display of the “butterfly effect” in action.
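
This difference between K = 2 and K = 4 behavior can also be seen numerically. The following sketch (again my own toy implementation in Python, with an assumed N of 1,000 bulbs and 50 clock ticks) runs two copies of the same random network from identical starting patterns, except for one flipped light bulb, and then counts how many bulbs disagree. In the K = 2 run the damage typically stays localized or dies away, while in the K = 4 run it typically ripples across the whole array:

# A minimal "butterfly effect" sketch: run two copies of the same random
# N-K network from starting patterns that differ by one flipped bulb and
# measure how far apart the two trajectories drift after 50 clock ticks.
import random

def make_network(N, K):
    inputs = [random.sample(range(N), K) for _ in range(N)]
    tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]
    def tick(state):
        new = []
        for n in range(N):
            idx = 0
            for i in inputs[n]:
                idx = (idx << 1) | state[i]
            new.append(tables[n][idx])
        return tuple(new)
    return tick

N = 1000
for K in (2, 4):
    tick = make_network(N, K)
    a = tuple(random.randint(0, 1) for _ in range(N))
    b = list(a)
    b[0] ^= 1                        # perturb a single light bulb
    b = tuple(b)
    for _ in range(50):              # let the perturbation propagate
        a, b = tick(a), tick(b)
    damage = sum(x != y for x, y in zip(a, b))
    print(f"K = {K}: {damage} of {N} bulbs differ after 50 ticks")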

Anybody who has ever supported a high-volume corporate website running on a network of hundreds or thousands of servers has certainly seen similar emergent behaviors arise. A modern high-volume website is usually configured as a series of scalable tiers or layers of servers with load balancers between each tier or layer. For example, one or more load balancers might accept processing load from the Internet and pass it along to a bank of Apache webservers. The Apache webservers then pass the load onto a bank of load-balanced J2EE Appservers running WebSphere, WebLogic or JBoss. The J2EE Appservers then connect to a bank of load-balanced Oracle or Sybase database servers or mainframes running CICS/DB2. Each layer of the topology will be accepting incoming transactions from one side and forwarding on processed transactions to the other. So the whole network of hundreds or thousands of servers behaves like a K = 2 network because each layer only interacts with two other layers. Granted, each layer consists of a large number of load-balanced servers, so in reality, maybe it is more like a K = 2.8 Boolean network. Normally, the whole network of servers just hums along with a nice K = 2 type of behavior, happily processing thousands of transactions per second to fulfill your every want and need. The network is caught in the basin of an attractor with well-behaved trajectories in state space. Where I work, we have monitoring tools that allow us to look at the actual transaction flows being processed in real time on a large number of servers as time series plots, and you can actually see the transaction flows caught in these well-behaved trajectories.
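
Schematically, the tiers described above look something like this (a simplified sketch; real configurations vary):

                      Internet
                          |
                    Load Balancers
                          |
                   Apache webservers
                          |
                    Load Balancers
                          |
    J2EE Appservers (WebSphere, WebLogic, or JBoss)
                          |
                    Load Balancers
                          |
 Databases and mainframes (Oracle, Sybase, CICS/DB2)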

However, every so often the whole network just freezes up and transaction processing halts. This is a bad thing from an IT perspective and can lead to the loss of thousands of dollars per second in revenue. When this happens, our monitors detect a problem and Middleware Operations gets paged out to investigate. We then pull Unix Operations, Network Operations, SAN Operations, Database Operations, and, if necessary, even the developers into a conference call hotline. About 50% of the time we do find a root cause for the problem, like a deadlocked database transaction or a batch job that ran past its allotted run window, but about 50% of the time we have no idea what is wrong, and we just start bouncing (stopping and starting) suspected software components to alleviate the problem. Anybody who has ever had their home PC spontaneously freeze up should be quite familiar with the process. Several days after the incident, we have a postmortem meeting to discuss the possible root cause of the problem, the actions that were taken during the outage, and any steps that could be taken to prevent future occurrences of the same problem or to resolve it in a more timely manner. These postmortem meetings can be very frustrating for me because all of the attendees think in strictly reductionist terms. They are all convinced that there is ultimately some root cause responsible for every problem we encounter and that all we have to do is isolate the root cause to prevent future occurrences. Many times I have tried to explain that these unexplained freeze-ups can simply emerge on their own in a complex nonlinear network of servers on the edge of chaos. The network simply begins behaving like a frozen K = 1 Boolean net. We don’t need somebody dropping a shovel off a landscaping truck to cause a traffic jam. In fact, in Chicago, traffic jams pop up all day long with no apparent cause.

Now we have enough ideas from complexity theory to see how life could have bootstrapped itself into existence. The phospholipid containers and autocatalytic metabolic do-loops came as “order for free” from the very nature of complex nonlinear networks of chemical agents far from thermodynamic equilibrium. Darwin’s natural selection then had something to select from, resulting in autocatalytic metabolic do-loops of ever-increasing capabilities. RNA arose next as a parasitic disease preying upon the autocatalytic metabolic do-loops and eventually formed a parasitic/symbiotic relationship with them as it domesticated their activities to produce ever more RNA. DNA then did the same thing to RNA, domesticating RNA in the form of mRNA and tRNA to produce enzymes that made ever more DNA.

But what of software? Scroll forward about 4 billion years. As DNA survival machines developed neural networks of ever-increasing size and complexity through the Darwinian mechanisms of innovation and natural selection, there came a phase transition, and the neural networks spontaneously broke out into a form of abstract intelligence in Homo sapiens, in a fashion similar to the breakout of the autocatalytic metabolic do-loops that got it all started in the first place. At this point, the Homo sapiens DNA survival machines began to be parasitized by a new form of self-replicating information in the form of memes. The memes hijacked the survival mechanisms evolved by DNA over billions of years, such as fear, anger, and violent behavior, in order to promote their own survival. Additionally, in order to enhance their survival, the memes also learned to team up into meme-complexes, just as their DNA gene predecessors had previously learned to team up to form DNA survival machines in the form of bodies. Now being a DNA survival machine, far from thermodynamic equilibrium, means that we all need low-entropy things like food and shelter, and being able to count those things would be a valuable survival technique for both DNA survival machines and the meme-complexes that infected their minds. From Wikipedia we have:

The oldest known mathematical object is the Lebombo bone, discovered in the Lebombo mountains of Swaziland and dated to approximately 35,000 BC. It consists of 29 distinct notches deliberately cut into a baboon's fibula. There is evidence that women used counting to keep track of their menstrual cycles; 28 to 30 scratches on bone or stone, followed by a distinctive marker.

After being happily married for over 35 years, I might suggest that the markings were actually made by a male. Anyway, once the counting meme appeared, a mathematical meme-complex was sure to follow. A mathematical meme-complex, in combination with the commercial meme-complexes that came with civilization, would naturally seek out better ways to perform mathematical operations, via a Lebombo bone, an abacus, Napier’s bones, or finally Konrad Zuse’s Z3 computer, which first began running software in May of 1941. So software bootstrapped itself into existence on Earth, like a technological autocatalytic network, out of the need of DNA survival machines to count, and then it quickly forged parasitic/symbiotic relationships with nearly all the meme-complexes on Earth, just like its RNA and DNA predecessors. Like RNA and DNA, software quickly learned to domesticate the meme-complexes, and the minds that harbored them, to produce ever-increasing amounts of software – the true hallmark of all forms of self-replicating information.

Therefore, I believe that software will rapidly arise in any intelligent alien civilization as a parasitic/symbiotic form of self-replicating information, feeding off the mathematical meme-complexes throughout the Universe, and so must be ubiquitous within it. However, as I pointed out in Cybercosmology, a possible explanation for Fermi’s Paradox is that all intelligent civilizations always find themselves to be alone within the technological horizon of their universe and that the technological horizon of our Universe might be on the order of the size of a galaxy:

The Revised Weak Anthropic Principle – Intelligent beings will only find themselves in universes capable of supporting intelligent beings and will always find themselves to be alone within the technological horizon of their universe.

If this is the case, then we should value our software even more, for it may be the only software that will ever explore the Milky Way galaxy.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston