Friday, February 13, 2009


In this posting I would like to offer my speculative thoughts on the origins of the Software Universe, cyberspacetime, and software, and on where they all may be heading. Since CyberCosmology is purely speculative in nature, it will not be of much help to you in your professional IT capacities, but I hope that it might at least be a bit entertaining. If you are new to softwarephysics, this is probably the very last posting you should be reading; you really need to read the previous posts before taking on CyberCosmology.

The Big Bang of the Software Universe
At the very beginning there was no Software Universe nor any cyberspacetime either, and darkness was upon the face of the deep, as the old creation myths go. Today the Software Universe and cyberspacetime are huge and are rapidly expanding in all directions throughout our Solar System and beyond it toward nearby star systems on board the Pioneer 10 & 11 and Voyager 1 & 2 probes. How did this come to be and where is it all going? In So You Want To Be A Computer Scientist?, we saw how the Software Universe began about 2.15 billion seconds ago in May of 1941 on Konrad Zuse’s Z3 computer and has been expanding at an ever increasing rate ever since. However, to really predict where it is all going, we need to know a few more things about the physical Universe from which the Software Universe sprang. To do that we need to deal with several conflicting principles that are currently troubling the scientific community, so let’s proceed by listing these principles and examining some of their conflicts.
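As a quick sanity check on that figure, a few lines of Python confirm the age of the Software Universe in seconds. May 12, 1941 is the traditional date of the Z3's first demonstration; the end date is simply the date of this posting:

```python
from datetime import datetime

# Age of the Software Universe: seconds elapsed between the first run of
# Konrad Zuse's Z3 (traditionally dated May 12, 1941) and this posting
# (February 13, 2009). Only month-level precision really matters here.
z3_first_run = datetime(1941, 5, 12)
this_posting = datetime(2009, 2, 13)

age_seconds = (this_posting - z3_first_run).total_seconds()
print(f"{age_seconds / 1e9:.2f} billion seconds")  # about 2.14 billion
```

So the round figure of 2.15 billion seconds checks out to within a rounding error.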

The Copernican Principle - We do not occupy a special place in the Universe.

The geocentric model of Ptolemy held that the Earth was at the center of the Universe and the Sun, Moon, planets, and the stars all circled about us on crystalline spheres. Copernicus overturned this worldview in 1543 with the publication of On the Revolutions of the Heavenly Spheres. But as with all things, you can carry any idea to an extreme and claim that there is nothing special at all about the Earth, the Earth’s biosphere, or mankind in general. Many times you hear that we are just living on an unremarkable planet, circling a rather common star, located in just one of hundreds of billions of similar galaxies in the observable universe, but is that really true?

The Weak Anthropic Principle - Intelligent beings will only find themselves existing in universes capable of supporting intelligent beings.

As I pointed out in The Foundations of Quantum Computing, if you change any of the 20+ constants of the Standard Model of particle physics by just a few percent or less, you end up with a universe incapable of supporting intelligent beings. Similarly, in 1969 Robert Dicke noted that the amount of matter and energy in the Universe was remarkably close to the amount required for a flat spacetime. If you run today’s near flatness of spacetime back to the time of the Big Bang, spacetime would have had to have been flat to within one part in 10^60! This is known as the “flatness problem”. You see, if spacetime had just a very slight positive curvature at the time of the Big Bang, then the Universe would have quickly expanded and recollapsed into a singularity in a very brief period of time, and there would not have been enough time to form stars or living things. Similarly, if spacetime had a very slight initial negative curvature, it would have rapidly expanded – the Universe would have essentially blown itself to bits, forming a very thinly populated vacuum that could not form stars or living things.
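That figure of one part in 10^60 can be roughed out in a few lines. In a radiation-dominated universe the deviation from flatness, |Ω − 1|, grows roughly in proportion to time, so even a generous deviation today maps back to an absurdly tiny one near the Big Bang. This is only a back-of-the-envelope sketch under stated assumptions, not a full Friedmann-equation treatment:

```python
import math

# Back-of-the-envelope flatness problem. In a radiation-dominated universe
# the deviation from flatness |Omega - 1| grows roughly in proportion to
# time, so a deviation of order 1 today implies an absurdly tiny deviation
# at the Planck time. (Rough sketch only; a careful treatment tracks the
# radiation- and matter-dominated eras separately with the Friedmann equation.)
AGE_OF_UNIVERSE_S = 4.4e17   # about 14 billion years, in seconds
PLANCK_TIME_S = 5.4e-44

deviation_now = 1.0  # generous upper bound on |Omega - 1| today
deviation_then = deviation_now * PLANCK_TIME_S / AGE_OF_UNIVERSE_S
# Lands within an order of magnitude of the one part in 10^60 quoted above.
print(f"spacetime had to be flat to about one part in 10^{-math.log10(deviation_then):.0f}")
```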

Fermi’s Paradox - If the universe is just chock full of intelligent beings, why do we not see any evidence of their existence? The Universe should look like a bunch of overbuilt strip-malls, all competing for habitable planets at the corners of intersecting intergalactic caravan trade routes, but it does not – why?

If the Copernican Principle tells us that there is nothing special about our place in the Universe, and the weak Anthropic Principle explains why our Universe must be fit for intelligent life, then why do we have Fermi’s Paradox? In Self-Replicating Information I described how genes, memes, and software could one day team up to release von Neumann probes upon our galaxy, self-replicating robotic probes that travel from star system to star system, building copies of themselves along the way, and how Frank Tipler calculated that von Neumann probes could completely explore our galaxy in less than 300 million years - I have seen other estimates as low as 5 million years. If that is so, why have we not already been invaded? As I said before, all forms of self-replicating information have to be a little bit nasty in order to survive, so I cannot imagine how a totally benign von Neumann probe could come to be. If nothing else, I would think that alien von Neumann probes would at least try to communicate with us to show us the error of our ways.
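Tipler's figure can be sanity-checked with a crude hop-and-pause model. All of the parameters below are illustrative assumptions of mine, not Tipler's, but they show why such estimates land in the millions to hundreds of millions of years:

```python
# Crude order-of-magnitude model of von Neumann probe expansion across the
# galaxy. Every parameter here is an illustrative assumption, not a figure
# from Tipler's actual calculation.
GALAXY_DIAMETER_LY = 100_000   # rough diameter of the Milky Way disk
probe_speed_c = 0.01           # probes cruise at 1% of the speed of light
pause_per_hop_years = 1_000    # time spent building copies at each stop
hop_distance_ly = 10           # typical distance between star-system stops

hops = GALAXY_DIAMETER_LY / hop_distance_ly
travel_years = GALAXY_DIAMETER_LY / probe_speed_c  # light years / fraction of c
total_years = travel_years + hops * pause_per_hop_years
print(f"time to cross the galaxy: about {total_years / 1e6:.0f} million years")
```

Even with these leisurely assumptions the whole galaxy gets covered in a few tens of millions of years, comfortably inside the 5 to 300 million year range quoted above.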

There are a few theories that help to resolve the above conflicts:

The Rare Earth Hypothesis
As Peter Ward and Donald Brownlee pointed out in Rare Earth (2000), the Earth is not at all common, in contradiction to the extreme version of the Copernican Principle which holds that there is nothing special about the Earth. The weak Anthropic Principle may hold that intelligent beings will only find themselves in universes capable of supporting intelligent beings, but Ward and Brownlee point out that our Universe just barely qualifies. If you think of the entire observable Universe, and list all those places in it where you could safely walk about without a very expensive spacesuit, you come up with a very small fraction of the available real estate.

First of all, not all parts of a galaxy are equal. Close in towards the central bulge of a spiral galaxy, there are far too many stars in close proximity spewing out deadly gamma rays and X-rays from the frequent supernovae of the densely populated neighborhood, and the close association of these stars also perturbs the comets in their Oort clouds, sending them falling into and colliding with their Earth-like inner planets. Too far out, and the metallicity of stars, the fraction of their matter made up of chemical elements other than hydrogen and helium, drops off dramatically for want of supernovae, and it is hard to make intelligent beings out of just hydrogen and helium. So perhaps only 10% of the stars in a spiral galaxy are in a location capable of supporting complex animal life. The elliptical galaxies are even worse, being largely composed of very old metal-poor stars that do not contain the carbon, oxygen, and nitrogen atoms necessary for life. Next, the central star of a stellar system must be of just the right mass and spectral classification. If you look up into the night sky, and look closely, you will notice that stars come in different colors. Based upon their spectral colors, stars are classified as O, B, A, F, G, K, and M, and each letter classification is further divided into ten subclasses 0-9. This classification ranges from the blue-hot O and B stars down to the very cool reddish K and M stars. Thus Naos is a bluish O5 star, Sirius is a blue-white A1 star, the Sun is a yellow G2 star, Arcturus is a reddish K2 star, and Barnard's Star is a very red M4 red dwarf. You frequently read that our Sun is just an “average-sized” nondescript star, just one of several hundred billion in our galaxy, but nothing could be further from the truth. This probably stems from the fact that the Sun, as a G2 main sequence star, falls in the very middle of the spectral range of stars.
But this Universe seems to like to build lots of small M stars, very few large O stars, and not that many G2 stars either. You see, the very massive hot O stars may have a mass of up to 100 Suns, while the very cool M stars weigh in at only about 1/10 the mass of the Sun, but for every O star in our galaxy, there are a whopping 1.7 million M stars. In fact, about ¾ of the stars in our galaxy are M stars or smaller, with a mass of only a few tenths of a solar mass, but because there are so many, they account for about ½ the mass of our galaxy, excluding the dark matter that nobody understands. The very massive and hot O, B, A, and F stars have lifetimes of 10 million – 1.0 billion years, which are too brief for complex intelligent life to evolve. The very small and dim K and M stars have very long lifetimes of up to 10 trillion years, but have a habitable zone so close in towards the central star that their planets become tidally locked, like our Moon as it orbits the Earth. A planet in tidal lock has one side that always faces its star and one that always faces away from its star, giving the planet a very hot side and a very cold side that are both unfit for life. Consequently, only stars in the range of F7 to K1, like our G2 Sun, are fit for life, and that amounts to only about 5% of the 10% of stars in the habitable zone of a spiral galaxy – so that drops us down to about 0.5% of the stars in a spiral galaxy, and probably 0.0% of the stars in an elliptical galaxy.

Stars form when large molecular clouds of gas and dust collapse under their own weight in the spiral arms of a galaxy. These very large molecular clouds are composed mainly of molecular hydrogen, but also contain molecules of carbon monoxide, ammonia, and other organic molecules, and can have a mass of up to 5 million solar masses. The molecules in these clouds oscillate and bounce around like ping-pong balls attached to very floppy springs, and as they do so, they radiate away lots of energy, reducing the temperature of the clouds down to about 10 K. Individual clumps of cold dense gas and dust then collapse into individual stars. Each clump naturally has some initial spin because the odds of it having none would be quite small. Just think of what happens when you drop a bunch of marbles on the floor. So as these clumps collapse, they have to get rid of the angular momentum of their original spin. Some of the angular momentum goes into the spin of the protostar itself, but about half goes into what is called a protoplanetary disc. Planets then form out of this protoplanetary disc as small clumps of gas and dust coalesce into planetesimals, which later collide to form planets. As a molecular cloud collapses, it creates many thousands of stars all in close proximity, called an open cluster. Because stars form in large numbers in the stellar nurseries of large molecular clouds, they tend to come in pairs. What happens is that several stars will form together all at the same time, and then the collection of stars will begin flipping individual members out of the collection as the stars orbit each other in chaotic orbits, until you end up with just two stars orbiting each other. It turns out that more than half of the stars in our galaxy come as binary pairs, so our solitary Sun is once again an anomaly.
Now it is possible for each star in a binary pair to have planets if the two stars orbit each other at a sufficient distance; however, if the two stars orbit each other in a tight orbit, they will flip any planets out into interstellar space. Even if the stars do orbit each other at a great distance, they will tend to perturb the Oort clouds of their partners, sending biosphere-killing comets down into the planetary systems of their partners to collide with their inner planets. Because binary star systems do not seem very welcoming, this again cuts the number of likely stellar candidates for complex life down to about 0.25% of the stars in a spiral galaxy.
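The successive cuts above can be strung together as a simple product of fractions. The numbers below are just the rough estimates quoted in the text, not measured values:

```python
# Rough Rare Earth filter for a spiral galaxy, using the estimates quoted
# in the text above. Each fraction is an order-of-magnitude assumption,
# not a measurement.
filters = {
    "in the galactic habitable zone":  0.10,  # ~10% of stars
    "spectral class F7 through K1":    0.05,  # ~5% of those
    "not in a hostile binary system":  0.50,  # roughly half of stars are binaries
}

fraction = 1.0
for requirement, f in filters.items():
    fraction *= f
    print(f"after requiring '{requirement}': {fraction:.2%} of stars remain")
```

Multiplying the three rough cuts reproduces the 0.25% figure, and that is before Jupiter-like guardians, large moons, plate tectonics, and the other filters below are even considered.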

The Earth is also blessed with a large sentinel planet we call Jupiter, which is in a large circular orbit about the Sun and which vigilantly stands guard over the inner terrestrial planets. Jupiter flips many of the biosphere-killing comets out of our Solar System that periodically fall out of the very distant Oort cloud surrounding our Sun, preventing these comets from impacting the inner terrestrial planets like the Earth and wiping out their biospheres. Presently, we are locating many large Jupiter-like planets circling about other stars, but they usually have highly eccentric elliptical orbits that pass very close to their central stars, orbits which would flip any inner terrestrial planets like the Earth out of the planetary system. In fact, it is quite common to find these newly discovered huge Jupiter-like gas giants orbiting quite close to their central stars, in orbits much closer in than that of our Mercury. Now these gas giants could only have formed at great distances from their stars, as did our Jupiter and Saturn, where temperatures are quite low; otherwise, the gas would have all boiled away. Indeed, the current theory is that these gas giants of other star systems did form at large distances from their central stars, and as they flipped planetesimals out to the Oort clouds of their star systems, they lost angular momentum and consequently fell into much lower orbits, with many of the gas giants eventually falling into their central stars. So many of the gas giants that we are detecting about distant stars seem to be caught in the act of falling into their stars via this process of orbit degradation. Clearly, if Jupiter or Saturn had pursued this course, they would have flipped the Earth out of our Solar System in the process, and we would not be here observing other star systems in the midst of this process. So the question is: what fluky thing happened in our Solar System that prevented this common occurrence?

The Earth also has a very large Moon that resulted when a Mars-sized planetesimal collided with the early Earth. This collision was a little off axis and imparted a great deal of angular momentum to the resulting Earth-Moon system. The orbit of our massive Moon about the Earth helps to keep the tilt of the Earth’s axis fairly stable and prevents the Earth’s axis from wandering around like those of the other planets of the solar system. Otherwise, tugs on the equatorial bulge of the Earth by Jupiter and the other planets would cause the axis of the Earth to sometimes point directly towards the Sun. Every six months, this would make the Northern Hemisphere too hot for life and the dark Southern Hemisphere too cold, and six months later the reverse would hold true.

The Earth is also the only planet in the Solar System with plate tectonics. It is thought that the ample water supply of the Earth softens the rocks of the Earth’s crust just enough so that its basaltic oceanic lithosphere can subduct under the lighter continental lithosphere. Earth really is the Goldilocks planet – not too hot and not too cold, with just the right amount of water on its surface for oceanic lithosphere to subduct, but not so much that the entire Earth is covered by a world-wide ocean with no dry land at all. It would be very hard for intelligent beings to develop technology in the viscous medium of water, just ask any dolphin about that! Flipper did get his own television show in 1964, but for some reason never signed any of his contracts and was thus deprived of all the lucrative residuals that followed. Plate tectonics is largely responsible for keeping some of the Earth’s crust above sea level. When continental plates collide, like India crashing into Asia, the oceanic sediments between the two plates get pushed up into huge mountain chains, like a car crash in slow motion. The resulting mountain chains like the Himalayas or the much older Appalachians take hundreds of millions of years to wear down to flat plains through erosion. These collisions also cause wide-scale uplift of continental crust. Without plate tectonics, the Earth would become a nearly flat planet and mostly under water within less than a billion years.

Plate tectonics is also one of the key elements in the carbon cycle of the Earth. Living things remove carbon from the Earth’s atmosphere by turning carbon dioxide into calcium carbonate coral reefs and other calcium carbonate shell-based materials that get deposited upon the ocean floor. This solidified carbon dioxide gets subducted into the Earth at the multiple subduction zones about the Earth. As these descending oceanic lithospheric plates subduct under continental plates at the subduction zones, some of this captured carbon dioxide returns to the Earth’s surface dissolved in the melted magma that rises from the descending plates, like a 1960s Lava Lamp, forming volcanoes on the Earth’s surface. The net effect is that the living things on Earth have been slowly removing carbon dioxide from the Earth’s atmosphere over geological time, because not all of the captured carbon dioxide is returned to the Earth’s atmosphere in this carbon cycle. This has been a fortunate thing, because as the Sun’s luminosity has slowly increased as the Sun ages on the main sequence, the carbon cycle of the Earth has been slowly removing the carbon dioxide greenhouse gas from the Earth’s atmosphere as a compensating measure that has kept the Earth hospitable to complex life.

In The Life and Death of Planet Earth (2002), Ward and Brownlee go on to show that not only is the Earth a very rare planet, we also live in a very rare time on that planet. In about 500 million years, the Sun will become too hot to sustain life on Earth even if all the carbon dioxide is removed from the Earth’s atmosphere. The Earth’s atmosphere currently contains about 385 ppm of carbon dioxide, up from the 280 ppm level prior to the Industrial Revolution. But even if the carbon cycle of the Earth were able to reduce the carbon dioxide in the Earth’s atmosphere down to a level of 5 ppm, the lowest level that can sustain photosynthesis, in about 500 million years the Sun will still be too hot to sustain life on Earth, and the Earth’s oceans will boil away under a glaring Sun. Now complex plant and animal life is a very recent experiment in the evolutionary history of the Earth, having formed a mere 541 million years ago during the Cambrian Explosion, and since the Earth will not be able to sustain this complex plant and animal life much beyond 500 million years into the future, this leaves a very restrictive window of about a billion years for Earth-like planets hosting complex plant and animal life capable of evolving into intelligent beings. The reason that complex animal life took so long to emerge is that it takes a lot of energy to move around quickly. It also takes a lot of energy to think. A programmer on a 2400 calorie diet (2400 kcal/day) produces about 100 watts of heat sitting at her desk, and about 20 – 30 watts of that heat comes from her brain. Anaerobic metabolic pathways simply do not provide enough energy to move around quickly or write much code. What was needed was a highly energetic metabolic pathway, like the Krebs cycle, that uses the highly corrosive gas oxygen to oxidize energy-rich organic molecules. But for the Krebs cycle to work, you first need a source of oxygen.
This occurred on Earth about 2.8 billion years ago with the arrival of cyanobacteria, which could photosynthesize sunlight, water, and carbon dioxide into sugars, releasing the toxic gas oxygen as a byproduct. Oxygen is a highly reactive gas and was very poisonous to the anaerobic bacteria of the day. Indeed, even today anaerobic bacteria must hide from oxygen at the bottoms of stagnant seas and lakes. But initially these ancient anaerobic bacteria were spared from the Oxygen Catastrophe, which took place 300 million years later (2.5 billion years ago), because all the dissolved iron in the oceans first had to be oxidized and deposited as red banded iron formations before the oxygen level could rise in the Earth’s atmosphere. Chances are your car was made from one of these iron deposits, because they are the source of most of the world’s iron ore, so you can think of your car as a byproduct of early bacterial air pollution. Once all the iron in the Earth’s oceans had been oxidized, atmospheric oxygen levels began to slowly rise on Earth over a 2.0 billion year period until, by the Cambrian, about 541 million years ago, they approached current levels. Not only did an oxygen-rich atmosphere provide a means to obtain large amounts of energy through the oxidation of organic molecules, it also provided an ozone layer in the Earth’s upper atmosphere to shield the land-based forms of life that emerged several hundred million years later, in the Silurian and Devonian periods, from the devastating effects of intense solar ultraviolet radiation, which destroys DNA and would otherwise have made land-based life impossible.
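The metabolic power figures quoted above, about 100 watts for a programmer on a 2400 kcal/day diet, are easy to verify with a quick unit conversion from kilocalories per day to watts:

```python
# Convert a 2400 kcal/day diet into average metabolic power in watts.
KCAL_TO_JOULES = 4184.0          # 1 kilocalorie = 4184 joules
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds

daily_intake_kcal = 2400.0
power_watts = daily_intake_kcal * KCAL_TO_JOULES / SECONDS_PER_DAY
print(f"average metabolic power: {power_watts:.0f} W")  # about 116 W

# The brain's share, taken as roughly 20% of resting metabolism per the text:
print(f"brain's share: {0.20 * power_watts:.0f} W")     # about 23 W
```

The conversion lands at about 116 watts, nicely in line with the rough 100 watt figure, with the brain's 20% share falling squarely in the quoted 20 to 30 watt range.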

The essential point that Ward and Brownlee make in a very convincing manner in both books is that simple single-celled life, like prokaryotic bacteria, will be easily found throughout our Universe because these forms of life have far less stringent requirements than complex multicellular organisms and, as we saw in SoftwareBiology, can exist under very extreme conditions and, from an IT perspective, are the epitome of good rugged design. On the other hand, unlike on our Rare Earth, we will not find much intelligent life in our Universe because the number of planets that can sustain complex multicellular life will be quite small. Even on our Rare Earth, simple single-celled life arose a few hundred million years after the Earth formed and dominated the planet for more than 3,500 million years. Only within the last 541 million years of the Earth’s history do we find complex multicellular life, the kind capable of producing intelligent beings, arising at all. So even for the Earth, the emergence of intelligent life was a bit dicey.

The Big Bang of our Physical Universe
There is plenty of information on the Internet concerning the Big Bang, so I will not go into great detail here. However, when reading about the Big Bang, it is important not to think of the Big Bang as an explosion of matter and energy into an already existing vacuum, or void, as you frequently see on television documentaries. It’s better to think backwards. Imagine that about 14 billion years ago the front and back doors of your house were at the same point in space. Now keep doing that for points in space that are at ever increasing distances apart. So 14 billion years ago, the northern and southern parts of your hometown were at the same point in space, as were the North and South Poles of the Earth, the Sun and Neptune, the Sun and the nearest stars, all the stars in our galaxy, and all the galaxies in our observable Universe – all at a singularity out of which our Universe formed.

In addition to the “flatness problem” previously described, the Big Bang presents another challenge – the “horizon problem”. The horizon problem goes like this. Look to your right with the proper equipment and you can see galaxies that are 12 billion light years away. Look to your left and you can see galaxies that are 12 billion light years away in the other direction. These galaxies are 24 billion light years apart, but the Universe is only about 14 billion years old, so these galaxies could not have been in causal contact at the time they emitted the light you now see, because no information could have covered the 24 billion light year distance in only 14 billion years. Yet the galaxies look amazingly similar, as if they were in thermodynamic equilibrium. Similarly, when you look at the CBR (Cosmic Background Radiation) with the WMAP satellite (Wilkinson Microwave Anisotropy Probe), you see the radiation emitted a mere 400,000 years after the Big Bang. Prior to this time, the photons from the Big Bang were constantly bouncing off free electrons before they could travel any appreciable distance, so the Universe was like a very bright shiny light in a very dense fog - all lit up, but with nothing to see. When the Universe cooled down below 3,000 K as it expanded, the free electrons were finally able to combine with protons to form hydrogen atoms. As you know, hydrogen gas is transparent, so the photons were then free to travel unhindered 14 billion light years to the WMAP satellite from all directions in space. Consequently, the CBR was originally radiated at a temperature of about 3,000 K, with the spectrum and appearance of an incandescent light bulb. But this radiation was stretched by a factor of about 1,000 as the Universe also expanded in size by a factor of 1,000, so now the CBR is observed to be at a temperature of only 2.7 K. However, the CBR is remarkably smooth in all observable directions to about one part in 100,000.
This is hard to explain because sections of the CBR that are separated by 180° in the sky today were originally 28 million light years apart when they emitted the CBR radiation – remember, the Universe has expanded by about a factor of 1,000 since the CBR radiation was emitted. But since the CBR photons could only have traveled 400,000 light years between the time of the Big Bang and the formation of the hydrogen atoms, they could not possibly have covered a distance of 28 million light years! So why are all these CBR photons the same to within one part in 100,000?
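The temperature drop described above follows from blackbody radiation cooling in proportion to the stretching of space. A couple of lines make the arithmetic explicit; the factor of about 1,100 used below is the modern measured value, which the text rounds to 1,000:

```python
# Blackbody temperature of the CBR falls in proportion to the expansion
# of space: T_now = T_emitted / expansion_factor.
T_emitted = 3000.0         # K, roughly the temperature at recombination
expansion_factor = 1100.0  # measured stretch factor since recombination

T_now = T_emitted / expansion_factor
print(f"CBR temperature today: {T_now:.1f} K")  # about 2.7 K
```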

In 1980, Alan Guth resolved both the “flatness problem” and the “horizon problem” with the concept of Inflation. According to Inflation, the early Universe underwent a dramatic exponential expansion about 10^-36 seconds after the Big Bang. During this period of Inflation, which may have only lasted about 10^-32 seconds, the Universe expanded much faster than the speed of light, until the Universe expanded by a factor of about 10^26 in this very brief time. This was not a violation of the special theory of relativity. Relativity states that matter, energy, and information cannot travel through spacetime faster than the speed of light, but the general theory of relativity does allow spacetime itself to expand much faster than the speed of light. This rapid expansion of spacetime smoothed out and flattened any wrinkles in the original spacetime of the Big Bang and made spacetime extremely flat as we observe today. For example, if you were to rapidly increase the diameter of the Earth by a factor of 1,000,000, all the mountains and valleys of the Earth would rapidly get smoothed out to flat plains and would lead the casual observer to believe that the Earth was completely flat, a notion that held firm for most of man’s history even on our much smaller planet.
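Cosmologists usually quote this expansion as a number of e-folds, N = ln(expansion factor). A one-liner confirms that a factor of 10^26 corresponds to roughly 60 e-folds, the figure commonly cited for inflationary models:

```python
import math

# Express the inflationary expansion factor as a number of e-folds,
# N = ln(a_end / a_start), using the factor of 10^26 quoted above.
expansion_factor = 1e26
e_folds = math.log(expansion_factor)
print(f"number of e-folds: {e_folds:.0f}")  # about 60
```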

Inflation also resolved the horizon problem because a very small region of spacetime with a diameter of 10^-36 light seconds, which was in thermal equilibrium at the time, expanded to a size of 10^-10 light seconds, or about 3 centimeters, during the period of Inflation. Our observable Universe was a tiny atom of spacetime within this much larger 3 centimeter bubble of spacetime, and as this bubble expanded along with our tiny little bit of spacetime, everything in our observable Universe naturally appeared to be in thermal equilibrium on the largest of scales, including the CBR.

Inflation can also help with explaining the weak Anthropic Principle by providing a mechanism for the formation of a multiverse composed of an infinite number of bubble universes. In 1986, Andrei Linde published Eternally Existing Self-Reproducing Chaotic Inflationary Universe, in which he described what has become known as the Eternal Chaotic Inflation theory. In this model, our Universe is part of a much larger multiverse that has not yet decayed to its ground state. Quantum fluctuations in a scalar field within this multiverse create rapidly expanding “bubble” universes, and our Universe is just one of an infinite number of these “bubble” universes. A scalar field is just a field that has only one quantity associated with each point in space, like a weather map that lists the temperatures observed at various towns and cities across the country. Similarly, a vector field is like a weather map that shows both the wind speed and direction at various points on the map. In the Eternal Chaotic Inflation model, there is a scalar field within an infinite multiverse which is subject to random quantum fluctuations, like the
quantum fluctuations described by the quantum field theories we saw in The Foundations of Quantum Computing. One explanation of the weak Anthropic Principle is that these quantum fluctuations result in universes with different sets of fundamental laws. Most bubble universes that form in the multiverse do not have a set of physical laws compatible with intelligent living beings and are quite sterile, but a very small fraction do have physical laws that allow for beings with intelligent consciousness. Remember, a small fraction of an infinite number is still an infinite number, so there will be plenty of bubble universes within this multiverse capable of supporting intelligent beings.

I have a West Bend Stir Crazy popcorn popper which helps to illustrate this model. My Stir Crazy popcorn popper has a clear dome which rests upon a nearly flat metal base that has a central stirring rod that constantly rotates and keeps the popcorn kernels well oiled and constantly tumbling over each other as the heating element beneath heats the cooking oil and popcorn kernels together to a critical popping temperature. As the popcorn kernels heat up, the water in each kernel begins to boil within, creating a great deal of internal steam pressure within the kernels. You can think of this hot mix of oil and kernels as a scalar field not in its ground state. All of a sudden, and in a seemingly random manner, quantum fluctuations form in this scalar field and individual “bubble” universes of popped corn explode into reality. Soon my Stir Crazy multiverse is noisily filling with a huge number of rapidly expanding bubble universes, and the aroma of popped corn is just delightful. Now each popped kernel has its own distinctive size and geometry. If you were a string theorist, you might say that for each popped kernel the number of dimensions and their intrinsic geometries determine the fundamental particles and interactions found within each bubble popcorn universe. Now just imagine a Stir Crazy popcorn popper of infinite size and age constantly popping out an infinite number of bubble universes, and you have a pretty good image of a multiverse based upon the Eternal Chaotic Inflation model.

The Technological Horizon
All universes capable of sustaining intelligent beings must have a set of physical laws that are time independent, or that change very slowly with time, and they must have a time-like dimension for the Darwinian processes of innovation and natural selection to operate. All such universes, therefore, impose certain constraints on technology. Some examples of these technological constraints in our Universe that we have already explored in previous postings on softwarephysics are the speed of light limiting the velocity with which matter, energy, and information can travel, the Heisenberg Uncertainty Principle limiting what we can measure, the first and second laws of thermodynamics limiting the availability of energy, and Kurt Gödel’s incompleteness theorems, which limit what mathematics can do for us. These technological constraints, which all such universes must have, form a technological horizon or barrier surrounding all intelligent beings, beyond which they are cut off from the rest of the universe in which they find themselves existing. This technological horizon might be quite large. For example, suppose that in our Universe travel via wormholes in spacetime is not allowed at the most fundamental level; then the cosmological horizon that forms our observable universe would also be the technological horizon of our universe, because galaxies beyond our cosmological horizon are expanding away from us faster than the speed of light. On a smaller scale, we can presume that for our own Universe the technological horizon must be no smaller than a galaxy, because we have already launched the Pioneer 10 & 11 and the Voyager 1 & 2 probes beyond our Solar System into the interstellar space of our galaxy with the puny technology we currently have at hand. However, the technological horizon of our Universe could very well be on the order of the size of our galaxy, making intergalactic travel technically impossible.

A Possible Explanation for Fermi’s Paradox
So the answer to Fermi’s paradox (1950), the question of why, if the universe is just chock full of intelligent beings, we see no evidence of their existence, might simply be that intelligent beings will always find themselves to be alone within the technological horizon of their universe. The reason is two-fold. First, the Rare Earth hypothesis guarantees that there will not be much potential intelligent life to begin with within a given technological horizon, provided the technological horizon of the universe is not too large. Secondly, there is the nature of all self-replicating information. As we saw, self-replicating information must always be just a little bit nasty in order to survive and overcome the second law of thermodynamics and nonlinearity. If there were other intelligent beings within the same horizon, those alien intelligences would have arrived on the scene first and interfered with the evolution of any competing prospective intelligent life within the technological horizon. Given the nature of self-replicating information, competing alien intelligences will always, intentionally or unintentionally, poison the home planets of all other prospective forms of intelligent life within a technological horizon of a universe. Based upon this speculation, let us revise the weak Anthropic Principle as:

The Revised Weak Anthropic Principle – Intelligent beings will only find themselves in universes capable of supporting intelligent beings and will always find themselves to be alone within the technological horizon of their universe.

What the Future May Bring
Cyberspacetime is currently in an inflationary expansion, just as spacetime was 10⁻³⁶ seconds after the Big Bang, and based upon Moore’s Law it is doubling in size every 18 months or less. Countless works of science fiction and many serious papers in prestigious journals have forewarned us of mankind merging with machines into some kind of hybrid creature. Similarly, others have cautioned us about the dangers of the machines taking over and ruthlessly eliminating mankind as a dangerous competitor, or enslaving us for their own purposes. Personally, I have a more benign view.
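To get a feel for how fast such a doubling compounds, here is a small Python sketch. The 1971 transistor count and the 18-month doubling period are rough illustrative figures, not exact data:

```python
def doublings(initial, years, doubling_period_years=1.5):
    """Capacity after `years` of exponential growth that doubles
    every `doubling_period_years` (Moore's-Law-style growth)."""
    return initial * 2 ** (years / doubling_period_years)

# Roughly 2,300 transistors on a 1971 microprocessor, compounded to 2009:
growth = doublings(2300, 2009 - 1971)
print(f"{growth:.2e}")  # ~25 doublings, on the order of 10^11
```

About twenty-five doublings turn a few thousand transistors into roughly a hundred billion, which is why an inflationary metaphor for cyberspacetime is not as far-fetched as it first sounds.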

This is my vision of the future. First of all, it is not the machines that we need to worry about; it is software that we need to be concerned with. Secondly, we are not going to merge with software into some kind of hybrid creature; rather, software is currently merging with us whether we like it or not! In Self-Replicating Information, I showed how software has already forged very strong symbiotic relationships over the past 70 years with nearly all the meme-complexes on Earth, and that we as IT professionals are rather ineffectual software enzymes currently preoccupied with the construction and caregiving of software. In Self-Replicating Information, I also described Freeman Dyson’s theory of the origin of life as a two-stage process, in which parasitic RNA eventually formed a symbiotic relationship with the metabolic pathways that preceded it in the first proto-cells, which arose as purely metabolic forms of life. As RNA took over from the metabolic pathways of the proto-cells to become the dominant form of self-replicating information on the planet, the RNA did not get rid of the legacy metabolic pathways. Instead, RNA domesticated the “wild” metabolic pathways to better replicate RNA. This whole process was repeated again when DNA took over from RNA as the dominant form of self-replicating information. The DNA did not get rid of RNA, but instead domesticated “wild” RNA to better serve the replication of DNA via ribosomal RNA, mRNA and tRNA. Several billion years later, when the memes arose in the complex neural networks of Homo sapiens, they too did not get rid of mankind, but instead “domesticated” the “wild” mind of man through the development of mythology, religion, music, art, political movements, and eventually the invention of civilization.
The invention of civilization and writing greatly enhanced the survivability of meme-complexes because now they could replicate under the auspices of the Powers That Be and could replicate with a high degree of fidelity through the power of the written word. Today, we call this domestication process of the mind “education”, as we civilize the wild minds of our children with appropriate meme-complexes, so that we do not end up with the unruly rabble of William Golding’s Lord of the Flies (1954).

The same thing is happening today with software, as parasitic software forms ever stronger symbiotic relationships with the meme-complexes of the world. As with all the previous forms of self-replicating information on this planet, software is rapidly becoming the dominant form of self-replicating information on Earth, as it invades its host, the meme-complexes of the world. But like all of its predecessors, I do not foresee software trying to eliminate the meme-complexes of man or mankind itself. Instead, software will domesticate the meme-complexes of the world, and in turn, domesticate us! I don’t know about you, but software already runs my life. As an IT professional in Operations and frequently on 24x7 call, I already schedule my entire life around the care and feeding of software. Software determines when I sleep, when I eat, and when I can safely leave the house to run errands. IT professionals are just the first wave of this domestication of mankind by software; the rest of mankind is not far behind us – just watch all those folks running around with those Bluetooth gizmos stuck in their ears!

But what happens if someday software no longer needs us? Will that spell our doom? In SoftwareBiology, I described the evolution of software over a number of Periods:

SOA - Service Oriented Architecture Period (2004 – Present)
Object-Oriented Period (1992 – Present)
Structured Period (1972 – 1992)
Unstructured Period (1941 – 1972)

Notice that I did not describe these periods of time as Eras, like the Paleozoic, Mesozoic, and Cenozoic Eras of the geological time scale. This is because I consider them all to be Periods within the Paleosoft Era (the old software Era). Software in the Paleosoft Era is software that cannot self-replicate without the aid of humans. But that problem is currently being worked on by a number of institutions, like the Digital Evolution Lab at Michigan State University.

The devolab is working towards software that can someday write itself through the Darwinian mechanisms of innovation and natural selection, principally through its experimental Avida software. However, even if software can one day write itself, I don’t think that will necessarily spell our doom. Forming a symbiotic relationship with the meme-complexes of the world and the DNA survival machines that house them will always prove useful to software, at least on this planet. Over the past 4.5 billion years of evolution on Earth, we have seen numerous forms of self-replicating information rise to predominance – the metabolic pathways, RNA, DNA, memes, and software, and none of them to date has discarded any of its predecessors because the predecessors have always proved useful, so I believe the same will hold true for software.
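Avida itself is a sophisticated research platform, but the underlying Darwinian loop of replication, mutation, and selection can be sketched in a few lines of Python. The toy below is a variant of Dawkins’ “weasel” program, not Avida’s actual algorithm; it evolves a random string toward a target using nothing but copying errors and survival of the fittest:

```python
import random

def evolve(target, pop_size=100, mutation_rate=0.05, seed=42):
    """Evolve a population of random strings toward `target` using only
    mutation (innovation) and survival of the fittest (natural selection).
    Returns the number of generations taken."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "

    def fitness(s):
        return sum(a == b for a, b in zip(s, target))

    # Start from a population of completely random strings.
    population = ["".join(rng.choice(alphabet) for _ in target)
                  for _ in range(pop_size)]
    generation = 0
    while max(population, key=fitness) != target:
        parent = max(population, key=fitness)   # selection of the fittest
        population = [                          # replication with mutation
            "".join(rng.choice(alphabet) if rng.random() < mutation_rate else c
                    for c in parent)
            for _ in range(pop_size)
        ]
        generation += 1
    return generation

print(evolve("software"))  # typically converges in a few dozen generations
```

Nothing in the loop “knows” the target in the way a programmer would; the string is discovered, not designed, which is the essence of the Darwinian mechanism the devolab is harnessing.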

Conscious Software
The critical question is whether software will break into consciousness at some point in the future, and if it does, what it will likely be thinking about. It seems that consciousness is an emergent behavior that occurs when a nonlinear network gets sufficiently complex. Of course, the nonlinear network does not purposefully evolve towards consciousness. Like a screwdriver evolving into a wood chisel, the nonlinear network merely evolves towards higher levels of complexity in pursuit of other beneficial adaptations that enhance the performance of the original network, such as processing website requests or international financial transactions. So as software begins to run on ever more complex networks, will it too break into consciousness? Or has it already started to do so?

Many times while trying to troubleshoot a website outage, I will adopt what Daniel Dennett calls an intentional stance towards software, which is one of his hallmarks of impending consciousness. A modern website is hosted on hundreds or thousands of servers – load balancers, firewalls, proxy servers, webservers, J2EE Application Servers, CICS Gateway servers to mainframes, database servers, and email servers – which normally all work together in harmony to process thousands of transactions per second. But every so often, the software running on these highly nonlinear and interdependent servers runs amok and takes on a mind of its own, and instead of processing transactions as it should, the network seems to start doing what it wants to do instead. That is when I adopt an intentional stance towards the software. I begin to think of the software as a rational agent with its own set of beliefs and intentions. Many times I will find myself thinking, “Now why is it doing that? Why is it maxing out its DB2 connection pool? Why does it think that it cannot connect to DB2?” I will psychoanalyze the network of servers until I find the root cause of its troubles, and then take whatever actions are necessary to alleviate its mental problems. For example, a few weeks back I bounced a couple of DB2Connect servers even though their log files were lying to me, telling me that their healthcheck connections were just fine. Further back in our infrastructure, the WebSphere servers were telling me just the opposite – they were getting DB2 connection errors – so I bounced the DB2Connect servers, and that instantly solved the problem.
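The operational moral of that incident is to trust an end-to-end probe over a component’s own log files. Here is a minimal sketch of such a probe in Python; the hostname and port in the usage example are hypothetical, not our actual DB2Connect servers:

```python
import socket

def can_connect(host, port, timeout=3.0):
    """End-to-end health probe: actually open a TCP connection to the
    service instead of trusting what its own logs claim about its health."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical example: probe a database gateway on its listener port.
if not can_connect("db2connect.example.com", 50000):
    print("gateway unreachable - time to bounce it")
```

A probe like this measures what the WebSphere servers actually experience, rather than what the gateway believes about itself, which is exactly the disagreement that cracked that outage.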

In George Dyson’s Darwin Among the Machines: The Evolution of Global Intelligence (1997), Dyson also sees software as a form of naturally emerging and evolving A-Life that is on the verge of breaking out into consciousness on its own, as the networks upon which software runs become larger and ever more complex. Darwin Among the Machines is a wonderful study in the history of the idea that machines will become self-replicating forms of intelligence. Dyson traces this idea all the way back to Thomas Hobbes’ Leviathan (1651) and follows it through the work of Samuel Butler in the 19th century, Alan Turing and John von Neumann in the 1940s, Nils Barricelli’s development of A-Life on a vacuum tube computer in 1953, and the arrival of the World Wide Web in the 1990s. George Dyson is the son of Freeman Dyson, whose two-stage theory of the origin of life we already saw in Self-Replicating Information. What an amazing father-son team that is! But I think that some of the confusion surrounding A-Life, biological life, and the memes in our minds stems from not realizing that they are all forms of self-replicating information that share a commonality of survival strategies as they deal with the second law of thermodynamics and nonlinearity, but at the same time have differences that uniquely define each.

You see, it’s really not about self-replicating machines or hardware; it’s really about self-replicating software. At the dawn of the Software Universe we all worried about getting the hardware to work, but it did not take long to learn that getting the software to work properly was the real challenge. To make sense of all this, you have to realize that software is just another form of self-replicating information. Just as DNA uses DNA survival machines in the form of physical bodies to self-replicate, and memes use meme survival machines in the form of minds infected by meme-complexes, software uses software survival machines in the form of hardware.
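Self-replicating software even has a minimal, concrete instance: a quine, a program whose only output is an exact copy of its own source code. Here is a classic two-line Python quine:

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints its own two lines of source exactly. Like DNA, the program carries within itself both a description of its own structure and the machinery needed to express that description.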

My hope for the future is that just as the memes domesticated our minds with meme-complexes that brought us the best things in life, like art, music, literature, science, and civilization, so too will our domestication by software help to elevate mankind. For example, I certainly could not have written the postings in this blog without the help of Google – not only for hosting my softwarephysics blog and providing some really first class software to create and maintain it with, but also for providing instant access to all the information in cyberspacetime. I also hope that the technological horizon of our Universe is at least the size of a galaxy, and that the genes, memes, and software on Earth will forge an uneasy alliance to break free of our Solar System and set sail upon the Milky Way in von Neumann probes to explore our galaxy. After all, on the scale of the visible Universe, a von Neumann probe is really just a biological virus with a slightly enhanced operational range. Let us hope that the same can be said of a computer virus.

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston
