Friday, August 14, 2009

Outside of the Box

If you are an IT professional aspiring to become a softwarephysicist, your biggest challenge will be to begin thinking outside of the box that defines the current narrow IT paradigm, which sadly precludes ideas from outside of IT itself. It requires broadening your point of view, abandoning a parochial worldview of software, and going beyond the confines of conventional IT thought. This is difficult, but not impossible, as detailed in an email I sent out yesterday to my fellow team members in Middleware Operations.

So I woke up this morning at 1:00 AM to the horrible smell of a skunk. We smell skunks quite frequently in our neighborhood, but this time it was much more intense than usual, so I got up to investigate. Following the scent trail, I ended up in my basement, which was almost unbearable due to the intense stench. With the aid of a flashlight beamed through the thick glass block window of one of my window wells, I could see the vague image of a black furry creature with a white stripe scampering about. Not wanting to tackle a skunk in the dark, I decided to go back to bed and await daybreak. So I spent the whole night plotting various strategies to get the skunk out of my window well. I thought about dropping in some items to form a crude staircase, but I figured the skunk might not be able to figure out how to use a staircase, and I was also a little short on the necessary items anyway. I finally came up with this plan:

1. Cut away the bushes around the window well so that I could work without interference.

2. Use a mirror attached to a long pole as a makeshift periscope to safely see what was going on down in the window well.

3. Attach a rope to the handle of a plastic bucket and put some peanut butter laced bread in the bucket as bait.

4. Lower the bucket with the bait into the window well from a safe distance using the rope.

5. Use my periscope to see when the skunk went into the bucket.

6. Then quickly pull the bucket up to release the skunk.

7. Run like Hell.

It took me all night to come up with this plan, and I could not sleep a wink, worrying about what would happen if something went wrong, and I got sprayed by the skunk.

I forgot just one thing. The skunk had also spent the whole night planning his escape!

So I got up this morning and as soon as there was enough daylight, I proceeded with my plan. While I was cutting away the bushes around the window well, safely out of range, I was surprised that I did not hear the skunk scurrying about or spraying things at random out of fear. I used my makeshift periscope to look down into the window well, but strangely, I could not see the skunk. Instead, there seemed to be a lot of dirt in the window well. Building up a little courage, I carefully peeked over the rim of the corrugated galvanized metal that lined the window well, and to my surprise - no skunk! That is when I noticed this huge ramp of dirt in the bottom of the window well that was excavated from one of its corners. At first, I thought the skunk had built a ramp like the Egyptians used to build the pyramids, and that he escaped via the ramp. But the ramp really did not reach high enough to let the skunk escape! Then I noticed on the surface of the ground, about a foot from the window well, that there was a clean little hole in the ground. I tore the mirror off my makeshift periscope and stuck the pole straight down into the hole, and sure enough, it went straight down to the point at the bottom of the window well. So the skunk was smart enough to burrow a hole down a foot in the corner of my window well until he was clear of the metal window well wall, and then he dug straight up to make his escape! So the skunk came up with a counterintuitive solution; he dug down to escape up! I used a post hole digger to clear the dirt from the window well and a shovel to stuff it back down the hole to seal up the escape hatch.

As a softwarephysicist, I always try to think "outside of the box", so I was pretty embarrassed as I gathered up all my tools and walked away from a skunk who could think "outside of the window well", while I could not. So I figure if I leave my laptop and BlackBerry out at night with my ID and Password next to the bread with the peanut butter on it, I can subcontract out being Middleware Operations Night Primary at a very low price!

Keep this in mind the next time you have to troubleshoot an IT problem.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Friday, April 24, 2009

A Proposal For All Practicing Paleontologists

This blog on softwarephysics was originally intended to help IT professionals with the daily mayhem of life in IT, but in this posting, I would like to make a suggestion that might be of help to a totally different group of professionals who are also very dear to my heart.

Back in 1977, when I was an exploration geophysicist exploring for oil off the coast of Cameroon in West Africa with Shell, we had a bunch of big shots from our mother company, Royal Dutch Shell, pay a friendly visit to our Houston office for a review of our Cameroon concession. They came all the way from the Royal Dutch Shell corporate headquarters in The Hague, so the whole Houston exploration office was naturally a little nervous. Our local Shell Houston management team arranged for a high-powered presentation for our Dutch visitors, and all were in attendance when the Cameroon exploration team made its presentation. The petroleum engineers went up first and made a presentation on the past, current, and projected production volumes we were seeing. The geologists and geochemists went up next and gave an overview of the lithology of the sands and shales that we had been drilling through, with special attention to the porosities and permeabilities of the sandstone reservoir rock that held the oil, and the carbon content of the shale source rock from which the oil came. The petrophysicists naturally tagged along for this segment of the presentation with sample well logs. The petrophysicists earned a living by pulling up well logging tools from the bottom of exploration wells on a cable, after the drill string pipe that turned the drill bit had been removed, and measuring the resistivity, gamma ray flux, sound velocity, and neutron porosity of the rock along the side of the borehole, as the well logging tools passed by. This yielded graphs of wiggly lines that told us all sorts of things about the rock strata that we had drilled through. We geophysicists followed next, with our very impressive seismic sections and structure maps of the production fields, and also of lots of prospective exploration targets too. The seismic sections were obtained by shooting very low-frequency sound waves from an air gun, in the range of 10 – 100 Hz, down into the rock strata from a recording vessel, trailing a long cable strung out with hydrophones to record the reflected echoes that came back up as wiggly lines. When you lined these wiggly lines up, one after the other, as the recording vessel steamed by off the coast of Cameroon, you ended up with a seismic section which looked very much like a cross section of the underlying rock layers– sort of a sonogram of the underlying rock.

By the way, all the digital information that you use on a daily basis – your CDs, DVDs, iPods, digital cameras, JPEG images on websites, digitized telephone traffic, and now digital TV broadcasts, all stem from research done back in the 1950s by oil companies. As you can see, exploration teams deal with lots of wiggly lines. In the 1950s and early 1960s, we recorded these wiggly lines on analog magnetic tape as wiggly variations of magnetization, like analog audio tape, or like the wiggly bumps and valleys on a vinyl record groove. But in order to manipulate these analog wiggles, we had to pass them through real amplifiers or filters, like people used to do when they turned the bass and treble knobs on their old analog amplifiers. As anybody who has ever played with the bass and treble knobs on an analog amplifier, while listening to an old vinyl record, can attest, there is only so much you can do with analog technology. So in the 1950s, oil companies began to do a great deal of research into converting from analog recording to digital recording. In the late 1960s, the whole oil industry went digital and started processing the wiggly lines on seismic sections and well logs with computers instead of physical amplifiers and filters. Again, this demonstrates the value of scientific research. I have always been amazed at the paltry sums that human civilization has routinely allocated to scientific research over the past 400 years, even in the face of all the benefits that it has generated for mankind.

So after all this high-tech computer-generated data had been presented to our Dutch guests, our lowly paleontologist followed up with the final presentation of the day. Our sole paleontologist was a one-man army on an exploration team consisting of about 50 geologists, geophysicists, geochemists, petrophysicists and petroleum engineers. His job was to look for little fossils called “forams”, also known in the industry by the pejorative term of “bugs”, in the cuttings that came up in the drilling mud from the drill bit at the bottom of exploration holes, as we drilled down through the rock strata. Based upon these “bugs”, he could date the age of the rock we were drilling through, and also determine the depositional settings of the sediments, as they were deposited over time. This sounded pretty boring, even for a geologist, so that is why our paleontologist went last, in case the meeting ran long and we had to cut his talk. So when our sole paleontologist began his presentation, we all expected to see a lot of boring slides of countless “bugs”, like the compulsory slideshow of your neighbor’s summer vacation at Yellowstone. To our surprise, our lone paleontologist got up and proceeded to blow us all away! By far, he gave the best presentation of the day, as he described in great detail the whole evolutionary history of how the Cameroon basin developed over time, complete with hand-drawn panoramas that showed how the place looked millions of years ago! It turns out that our paleontologist was quite an artist too, so he vividly brought to life the whole business, and I learned a great deal about the geological history of the basin that day and so did the rest of our exploration team. I think this demonstrates the value of taking an interdisciplinary approach to gaining knowledge, and the importance of not discounting the efforts of any discipline engaged in the pursuit of knowledge.

So I would like to suggest a possible interdisciplinary dissertation topic for one of your graduate students. This might involve teaming up with the Computer Science Department at your university or other universities. I think there would be a great benefit in doing a paleontological study of the evolution of software architecture over the past 70 years from a biological standpoint. I do not think this has ever been attempted before, and with a more or less complete and intact fossil record of paleosoftware still at hand, along with the fact that many of the original contributors or their protégés are still alive today, I think it would be a very useful and interesting study that would greatly benefit computer science and paleontology as well. My suspicion is that there would be a great deal of controversy in such a study regarding the importance and priority of many of the events in the evolution of software architecture, even with all the data freely at hand and with most of the events having occurred within living memory, so no wonder paleontologists in the field have such a hard go of it! This is something that computer science cannot do on its own. It needs the skills of a good paleontologist to put this all together before it is too late and much of the historical data is lost.

This might sound like a strange proposal for a paleontologist, but for those of you who are familiar with Life’s Solution (2003) by Simon Conway Morris, there really is life on Thega IX, as he supposed. Life on Thega IX arose about 2.15 billion seconds ago but is a little different than professor Morris imagined. Life on Thega IX is silicon-based and not carbon-based, and it does not rely upon the chemical characteristics of silicon either, but rather its electrical properties instead. Yet despite all these differences, the first forms of silicon-based life on Thega IX were prokaryotic bacterial forms very similar in structure to the prokaryotic bacteria of Earth. These early prokaryotic life forms had very little internal structure, but they were quite hardy and can still be found in huge quantities on Thega IX even today. Similarly, the first eukaryotic forms of life appeared about 1.17 billion seconds ago, following the long dominance of the Thega IX biosphere by the simple prokaryotes. These first eukaryotic cells divided their internal functions up amongst a large number of internally encapsulated organelles called functions(). Over time, the close association of large numbers of eukaryotic cells in parasitic/symbiotic communities led to the first emergence of simple worm-like multicellular organisms about 536 million seconds ago. But multicellular organization is a hard nut to crack and not much happened for several hundred million seconds until Thega IX experienced a Cambrian Explosion of its own about 158 million seconds ago, which suddenly generated several dozen Phyla on the planet called Design Patterns. Currently, Thega IX is still in the midst of its Cambrian Explosion and is generating very complicated multicellular organisms consisting of millions of objects (cells) that make CORBA calls on the services of millions of other objects located in a set of dispersed organs within organisms. There are even early indications that some of the more advanced organisms on Thega IX are on the brink of consciousness, and might even start communicating with us. You see, Thega IX is much closer to us than professor Morris imagined and is also known by some as the Earth.

It is truly amazing how software architecture has converged upon the very same solutions as living things did on Earth many millions of years ago and followed exactly the same evolutionary path through the huge combinatorial universe of program hyperspace. You see, just as the vast combinatorial universe of protein hyperspace is mostly barren, so too most potential programs are stillborn and do not work at all. There are only a few isolated islands of functional software architecture in the immense combinatorial universe of program hyperspace, and the IT community has slowly navigated through this program hyperspace over the past 70 years through a series of island hops.

DNA and software are both forms of self-replicating information that must deal with the second law of thermodynamics in a nonlinear Universe, and consequently, have evolved very similar survival strategies to deal with these challenges through a process of convergence.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Currently, DNA uses enzymes to replicate, while software uses programmers. Some of the postings in this blog on softwarephysics might provide a good starting point. Specifically, I would recommend the following postings:

SoftwareBiology
Self-Replicating Information
Software Symbiogenesis
The Fundamental Problem of Software
SoftwarePhysics
CyberCosmology

The beauty of doing a paleontological study of the evolution of software architecture is that software is evolving tens of millions of times faster than carbon-based life forms on Earth, so that 1 software sec ~ 1 year of geological time. I have been doing IT for 30 years, so I have personally seen about half of this software evolution unfold in my own career.
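To make the second-to-year mapping concrete, here is a small Java sketch (a rough illustration only, assuming a 2009 vantage point and using the epoch figures quoted in the Thega IX paragraph above):

// Rough conversion of the "software seconds" quoted above into calendar years,
// using 1 year ~ 3.16e7 seconds and 2009 as the reference date (an assumption).
public class SoftwareEpochs {
    static final double SECONDS_PER_YEAR = 365.25 * 24 * 3600;    // about 3.16e7 seconds

    public static void main(String[] args) {
        int referenceYear = 2009;                                  // assumed posting date
        double[] secondsAgo = { 2.15e9, 1.17e9, 5.36e8, 1.58e8 };
        String[] event = {
            "first prokaryotic forms of software",
            "first eukaryotic forms of software",
            "first multicellular forms of software",
            "Cambrian Explosion of Design Patterns"
        };
        for (int i = 0; i < secondsAgo.length; i++) {
            double yearsAgo = secondsAgo[i] / SECONDS_PER_YEAR;
            System.out.printf("%-40s ~%.0f years ago (~%d)%n",
                    event[i], yearsAgo, Math.round(referenceYear - yearsAgo));
        }
    }
}

The first entry lands near 1941, consistent with the arrival of Konrad Zuse’s Z3 mentioned elsewhere in this blog, and the Cambrian Explosion of Design Patterns lands about five years before this posting.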

So here is how you can do some very interesting fieldwork on Thega IX:

1. Make contact with some of your colleagues in the Computer Science department of your university. Try to find a colleague who lists “Biologically Inspired Computing” (BIC) or “Natural Computing” as a topic of interest on their web profile. They can help you with the IT jargon and provide you with a very high-level discussion of software architecture. Some of the older faculty members can also walk you through the evolution of software architecture over the past 70 years.

2. Approach some of the major corporations in your area. Try to find corporations that have large high-volume websites running on J2EE Appservers like WebSphere. Then ask to spend some time in the IT Operations Command Center for the corporation. This will give you a high-level view of their IT infrastructure under processing load. This will be very much like being reduced to an observer at the molecular level within a multicellular organism. Watch for the interplay and information flow between the huge number of IT components in action and all the problems that happen on a daily basis too.

3. Then spend some time with the corporation’s developers (programmers) and have them explain to you how their Applications work. You will be amazed.

The most fascinating thing about this convergence of software architecture is that it all occurred in complete intellectual isolation. I have been trying to get IT professionals to think in biological terms for more than 30 years to no avail, so the convergence of software architecture over the past 70 years is a bona fide example of convergence and not of intellectual inheritance.

Here are a few important concepts, and their biological equivalents, that you will hear about when working with IT professionals:

Class – Think of a class as a cell type. For example, the class Customer defines the cell type of Customer and describes how to store and manipulate the data for a Customer, like firstName, lastName, address, and accountBalance. In a program, a developer might instantiate a Customer called “steveJohnston”.

Object – Think of an object as a cell. A particular object will be an instance of a class. For example, the object steveJohnston might be an instance of the class Customer and will contain all the information about my particular account with a corporation. At any given time, there could be many thousands of Customer objects bouncing around in the IT infrastructure of a major corporation’s website.

Instance – An instance is a particular object of a class. For example, the steveJohnston object would be an instance of the class Customer. Many times programmers will say things like “This instantiates the Customer class”, meaning it creates objects (cells) of the Customer class (cell type).

Method – Think of a method() as a biochemical pathway. It is a series of programming steps or “lines of code” that produce a macroscopic change in the state of an object (cell). The class for each type of object defines the data for the class, like firstName, lastName, address, and accountBalance, but it also defines the methods() that operate upon these data elements. Some methods() are public, while others are private. A public method() is like a receptor on the cell membrane of an object (cell). Other objects (cells) can send a message to the public methods() of an object (cell) to cause it to execute a biochemical pathway within the object (cell). For example, steveJohnston.setFirstName(“Steve”) would send a message to the steveJohnston object instance (cell) of the Customer class (cell type) to have it execute the setFirstName method() to change the firstName of the object to “Steve”. The steveJohnston.getaccountBalance() method would return my current account balance with the corporation. Objects also have many internal private methods() that are biochemical pathways not exposed to the outside world. For example, the calculateAccountBalance() method could be an internal method that adds up all of my debits and credits and updates the accountBalance data element within the steveJohnston object, but this method cannot be called by objects (cells) outside of the steveJohnston object (cell). External objects (cells) have to call steveJohnston.getaccountBalance() in order to find out my accountBalance.
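To make these ideas concrete, here is a minimal Java sketch of the hypothetical Customer class (cell type) described above. The field and method names simply follow the examples in the text, and the method bodies are illustrative guesses, not anybody’s production code:

// A minimal sketch of the Customer class (cell type) described above.
public class Customer {
    // Data elements - think of these as the molecular contents of the cell
    private String firstName;
    private String lastName;
    private String address;
    private double accountBalance;
    private double[] debits = {};      // illustrative only
    private double[] credits = {};     // illustrative only

    // Public methods() act like receptors on the cell membrane -
    // other objects (cells) send messages to them.
    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public double getaccountBalance() {    // name kept as written in the text
        calculateAccountBalance();         // run an internal biochemical pathway
        return accountBalance;
    }

    // A private method() is an internal biochemical pathway that cannot be
    // called by objects (cells) outside of this object (cell).
    private void calculateAccountBalance() {
        double balance = 0.0;
        for (double credit : credits) balance += credit;
        for (double debit : debits) balance -= debit;
        this.accountBalance = balance;
    }
}

Instantiating the class then creates an object (cell) of that cell type:

Customer steveJohnston = new Customer();              // instantiate the Customer class (cell type)
steveJohnston.setFirstName("Steve");                  // send a message to a public method() (receptor)
double balance = steveJohnston.getaccountBalance();   // ask the object (cell) for its accountBalance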

Line of Code – This is a single statement in a method() like:

discountedTotalCost = (totalHours * ratePerHour) - costOfNormalOffset;

Remember methods() are the equivalent of biochemical pathways and are composed of many lines of code, so each line of code is like a single step in a biochemical pathway. Similarly, each character in a line of code can be thought of as an atom, and each variable as an organic molecule. Each character can be in one of 256 ASCII quantum states defined by 8 quantized bits, with each bit in one of two quantum states “1” or “0”, which can also be characterized as ↑ or ↓ and can be thought of as 8 electrons in 8 electron shells, with each electron in a spin-up ↑ or spin-down ↓ state:

C = 01000011 = ↓ ↑ ↓ ↓ ↓ ↓ ↑ ↑
H = 01001000 = ↓ ↑ ↓ ↓ ↑ ↓ ↓ ↓
N = 01001110 = ↓ ↑ ↓ ↓ ↑ ↑ ↑ ↓
O = 01001111 = ↓ ↑ ↓ ↓ ↑ ↑ ↑ ↑
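A short Java sketch (purely illustrative) shows how each character (atom) in a line of code decomposes into its 8 quantized bits, printed here as spin-up ↑ and spin-down ↓ arrows:

// Print each character of a line of code as 8 bits, rendered as ↑ (1) and ↓ (0).
public class CharacterSpins {
    public static void main(String[] args) {
        String lineOfCode = "CHNO";    // try any line of code here
        for (char c : lineOfCode.toCharArray()) {
            StringBuilder spins = new StringBuilder();
            for (int bit = 7; bit >= 0; bit--) {          // most significant bit first
                spins.append(((c >> bit) & 1) == 1 ? "↑ " : "↓ ");
            }
            // Print the character, its 8-bit pattern, and the equivalent spin states
            System.out.printf("%c = %s = %s%n",
                    c, Integer.toBinaryString(c | 0x100).substring(1), spins.toString().trim());
        }
    }
}

Running it on the characters C, H, N, and O reproduces the four spin patterns listed above.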

Developers (programmers) have to assemble characters (atoms) into organic molecules (variables) to form the lines of code that define a method() (biochemical pathway). As in carbon-based biology, the slightest error in a method() can cause drastic and usually fatal consequences. Because there is nearly an infinite number of ways of writing code incorrectly and only a very few ways of writing code correctly, there is an equivalent of the second law of thermodynamics at work. This simulated second law of thermodynamics, and the very nonlinear macroscopic effects that arise from small coding errors, are why software architecture has converged upon Life’s Solution.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Friday, February 13, 2009

CyberCosmology

In this posting, I would like to offer my speculative thoughts on the origins of the Software Universe, cyberspacetime, and software, and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be at least a bit entertaining. If you are new to softwarephysics, this is probably the very last posting you should be reading; you really need to read the previous posts before taking on CyberCosmology.

The Big Bang of the Software Universe
At the very beginning, there was no Software Universe nor any cyberspacetime either, and darkness was upon the face of the deep, as the old creation myths go. Today the Software Universe and cyberspacetime are huge and are rapidly expanding in all directions throughout our Solar System and beyond it towards nearby star systems on board the Pioneer 10 & 11 and Voyager 1 & 2 probes. How did this come to be and where is it all going? In So You Want To Be A Computer Scientist?, we saw how the Software Universe began about 2.15 billion seconds ago in May of 1941 on Konrad Zuse’s Z3 computer and has been expanding at an ever-increasing rate ever since. However, to really predict where it is all going, we need to know a few more things about the physical Universe from which the Software Universe sprang. To do that we need to deal with several conflicting principles that are currently troubling the scientific community, so let’s proceed by listing these principles and examining some of their conflicts.

The Copernican Principle - We do not occupy a special place in the Universe.

The geocentric model of Ptolemy held that the Earth was at the center of the Universe and the Sun, Moon, planets, and the stars all circled about us on crystalline spheres. Copernicus overturned this worldview in 1543 with the publication of On the Revolutions of the Heavenly Spheres. But as with all things, you can carry any idea to an extreme and claim that there is nothing special at all about the Earth, the Earth’s biosphere, or mankind in general. Many times you hear that we are just living on an unremarkable planet, circling a rather common star, located in just one of the hundreds of billions of similar galaxies in the observable universe, but is that really true?

The Weak Anthropic Principle - Intelligent beings will only find themselves existing in universes capable of supporting intelligent beings.

As I pointed out in The Foundations of Quantum Computing, if you change any of the 20+ constants of the Standard Model of particle physics by just a few percent or less, you end up with a universe incapable of supporting intelligent beings. Similarly, in 1969 Robert Dicke noted that the amount of matter and energy in the Universe was remarkably close to the amount required for a flat spacetime. If you run today’s near flatness of spacetime back to the time of the Big Bang, spacetime would have had to have been flat to within one part in 10^60! This is known as the “flatness problem”. You see, if spacetime had just a very slight positive curvature at the time of the Big Bang, then the Universe would have quickly expanded and recollapsed into a singularity in a very brief period of time, and there would not have been enough time to form stars or living things. Similarly, if spacetime had a very slight initial negative curvature, it would have rapidly expanded – the Universe would have essentially blown itself to bits, forming a very thinly populated vacuum which could not form stars or living things.

Fermi’s Paradox - If the universe is just chock full of intelligent beings, why do we not see any evidence of their existence? The Universe should look like a bunch of overbuilt strip-malls, all competing for habitable planets at the corners of intersecting intergalactic caravan trade routes, but it does not – why?

If the Copernican Principle tells us that there is nothing special about our place in the Universe, and the Weak Anthropic Principle explains why our Universe must be fit for intelligent life, then why do we have Fermi’s Paradox? In Self-Replicating Information I described how genes, memes, and software could one day team up to release von Neumann probes upon our galaxy, self-replicating robotic probes that travel from star system to star system building copies along the way, and how Frank Tipler calculated that von Neumann probes could completely explore our galaxy in less than 300 million years - I have seen other estimates as low as 5 million years. If that is so, why have we not already been invaded? As I said before, all forms of self-replicating information have to be a little bit nasty in order to survive, so I cannot imagine how a totally benign von Neumann probe could come to be. If nothing else, I would think that alien von Neumann probes would at least try to communicate with us to show us the errors of our ways.

There are a few theories that help to resolve the above conflicts:

The Rare Earth Hypothesis
As Peter Ward and Donald Brownlee pointed out in Rare Earth (2000), the Earth is not at all common in contradiction to the extreme version of the Copernican Principle that there is nothing special about the Earth. The Weak Anthropic Principle may hold that intelligent beings will only find themselves in universes capable of supporting intelligent beings, but Ward and Brownlee point out that our Universe just barely qualifies. If you think of the entire observable Universe and list all those places in it where you could safely walk about without a very expensive spacesuit, you come up with a very small fraction of the available real estate.

First of all, not all parts of a galaxy are equal. Close in towards the central bulge of a spiral galaxy, there are far too many stars in close proximity spewing out deadly gamma and x-rays from the frequent supernovae of the densely populated neighborhood, and the close association of these stars also perturbs the comets in their Oort clouds to fall into and collide with their Earth-like inner planets. Too far out and the metallicity of stars, the matter made up of chemical elements other than hydrogen and helium, drops off dramatically for the want of supernovae, and it is hard to make intelligent beings out of just hydrogen and helium. So perhaps only 10% of the stars in a spiral galaxy are in a location capable of supporting complex animal life. The elliptical galaxies are even worse, being completely composed of very old metal-poor stars that do not contain the carbon, oxygen, and nitrogen atoms necessary for life. Next, the central star of a stellar system must be of just the right mass and spectral classification. If you look up into the night sky and look closely, you will notice that stars come in different colors. Based upon their spectral colors, stars are classified as O, B, A, F, G, K, and M. Each letter classification is further divided into ten subclasses 0-9. This classification ranges from the blue-hot O and B stars down to the very cool reddish K and M stars. Thus Naos is a blueish O5 star, Sirius is a blue-white A1 star, the Sun is a yellow G2 star, Arcturus is a reddish K2 star, and Barnard's Star is a very red M4 red dwarf. You frequently read that our Sun is just an “average-sized” nondescript star, just one of several hundred billion in our galaxy, but nothing could be further from the truth. This probably stems from the fact that the Sun, as a G2 main sequence star, falls in the very middle of the spectral range of stars. But this Universe seems to like to build lots of small M stars, very few large O stars, and not that many G2 stars either. You see, the very massive hot O stars may have a mass of up to 100 Suns, while the very cool M stars weigh in with only a mass of about 1/10 of the Sun, but for every O star in our galaxy, there are a whopping 1.7 million M stars. In fact, about ¾ of the stars in our galaxy are M stars or smaller with a mass of only a few tenths of a solar mass, but because there are so many, they account for about ½ the mass of our galaxy, excluding the dark matter that nobody understands. The very massive and hot O, B, A, and F stars have lifetimes of 10 million – 1.0 billion years which are too brief for complex intelligent life to evolve. The very small and dim K and M stars have very long lifetimes of up to 10 trillion years but have a habitable zone that is very close in towards the central star which causes the planets to tidal lock, like the tidal lock of our Moon as it orbits the Earth. A planet in tidal lock has one side that always faces its star and one that always faces away from its star, causing the planet to have a very hot side and a very cold side that are both unfit for life. Consequently, only stars in the range of F7 to K1, like our G2 Sun, are fit for life and that amounts to only about 5% of the 10% of stars in the habitable zone of a spiral galaxy – so that drops us down to about 0.5% of the stars in a spiral galaxy and probably 0.0% of the stars in an elliptical galaxy.

Stars form when large molecular clouds of gas and dust collapse under their own weight in the spiral arms of a galaxy. These very large molecular clouds are composed mainly of molecular hydrogen, but also contain molecules of carbon monoxide, ammonia, and other organic molecules, and can have a mass of up to 5 million solar masses. The molecules in these clouds oscillate and bounce around like ping-pong balls attached to very floppy springs, and as they do so, they radiate away lots of energy reducing the temperature of the clouds down to 10 K. Individual clumps of cold dense gas and dust then collapse into individual stars. Each clump naturally has some initial spin because the odds of it having none would be quite small. Just think of what happens when you drop a bunch of marbles on the floor. So as these clumps collapse, they have to get rid of the angular momentum of their original spin. Some of the angular momentum goes into the spin of the protostar itself, but about half goes into what is called a protoplanetary disc. Planets then form out of this protoplanetary disc as small clumps of gas and dust coalesce into planetesimals which later collide to form planets. As a molecular cloud collapses, it creates many thousands of stars all in close proximity called an open cluster. Because stars form in large numbers in the stellar nurseries of large molecular clouds, they tend to come in pairs. What happens is that several stars will form together all at the same time, and then the collection of stars will begin flipping individual members out of the collection as the stars orbit each other in chaotic orbits until you end up with just two stars orbiting each other. It turns out that more than half of the stars in our Galaxy come as binary pairs, so our solitary Sun is once again an anomaly. Now it is possible for each star in a binary pair to have planets if the two stars orbit each other at a sufficient distance, however, if the two stars orbit each other in a tight orbit, they will flip any planets out into interstellar space. Even if the stars do orbit each other at a great distance, they will tend to perturb the Oort clouds of their partners, sending biosphere-killing comets down into the planetary systems of their partner to collide with its inner planets. Because binary star systems do not seem very welcoming, this again cuts the number of likely stellar candidates for complex life down to about 0.25% of the stars in a spiral galaxy.

The Earth is also blessed with a large sentinel planet we call Jupiter that is in a large circular orbit about the Sun and which is vigilantly standing guard over the inner terrestrial planets. Jupiter flips many of the biosphere-killing comets out of our Solar System that periodically fall out of the very distant Oort cloud surrounding our Sun, preventing these comets from impacting upon the inner terrestrial planets like the Earth and wiping out their biospheres. Presently, we are locating many large Jupiter-like planets circling about other stars, but they usually have highly eccentric elliptical orbits that pass very close to their central stars which would flip any inner terrestrial planets like the Earth out of the planetary system. In fact, it is quite common to find these newly discovered huge Jupiter-like gas giants orbiting quite close to their central stars in orbits much closer than our Mercury. Now, these gas giants could only have formed at great distances from their stars as did our Jupiter and Saturn where temperatures are quite low, otherwise the gas would have all boiled away. Indeed, the current theory is that these gas giants of other star systems did form at large distances from their central stars, and as they flipped planetesimals out to the Oort clouds of their star systems, they lost angular momentum and consequently fell into much lower orbits, with many of the gas giants eventually falling into their central stars. So many of the gas giants that we are detecting about distant stars seem to be caught in the act of falling into their stars via this process of orbit degradation. Clearly, if Jupiter or Saturn had pursued this course, they would have flipped the Earth out of our Solar System in the process, and we would not be here observing other star systems in the midst of this process. So the question is what fluky thing happened in our Solar System that prevented this common occurrence?

The Earth also has a very large Moon that resulted when a Mars-sized planetesimal collided with an early Earth. This collision was a little off axis and imparted a great deal of angular momentum to the resulting Earth-Moon system. The orbit of our massive Moon about the Earth helps to keep the tilt of the Earth’s axis fairly stable and prevents the Earth’s axis from wandering around like the other planets of the solar system. Otherwise, tugs on the equatorial bulge of the Earth by Jupiter and the other planets would cause the axis of the Earth to sometimes point directly towards the Sun. Every six months, this would make the Northern Hemisphere too hot for life and the dark Southern Hemisphere too cold, and six months later the reverse would hold true.

The Earth is also the only planet in the Solar System with plate tectonics. It is thought that the ample water supply of the Earth softens the rocks of the Earth’s crust just enough so that its basaltic oceanic lithosphere can subduct under the lighter continental lithosphere. Earth really is the Goldilocks planet – not too hot and not too cold, with just the right amount of water on its surface for oceanic lithosphere to subduct, but not so much that the entire Earth is covered by a worldwide ocean with no dry land at all. It would be very hard for intelligent beings to develop technology in the viscous medium of water, just ask any dolphin about that! Flipper did get his own television show in 1964, but for some reason never signed any of his contracts and was thus deprived of all the lucrative residuals that followed. Plate tectonics is largely responsible for keeping some of the Earth’s crust above sea level. When continental plates collide, like India crashing into China, the oceanic sediments between the two plates get pushed up into huge mountain chains, like a car crash in slow motion. The resulting mountain chains like the Himalayas or the much older Appalachians take hundreds of millions of years to wear down to flat plains through erosion. These collisions also cause wide-scale uplift of continental crust. Without plate tectonics, the Earth would become a nearly flat planet and mostly under water within less than a billion years.

Plate tectonics is also one of the key elements in the carbon cycle of the Earth. Living things remove carbon from the Earth’s atmosphere by turning carbon dioxide into calcium carbonate coral reefs and other calcium carbonate shell-based materials that get deposited upon the ocean floor. This solidified carbon dioxide gets subducted into the Earth at the multiple subduction zones about the Earth. As these descending oceanic lithospheric plates subduct under continental plates at the subduction zones, some of this captured carbon dioxide returns to the Earth’s surface dissolved in the melted magma that rises from the descending plates, like a 1960s Lava Lamp, forming volcanoes on the Earth’s surface. The net effect is that the living things on Earth have been slowly removing carbon dioxide from the Earth’s atmosphere over geological time because not all of the captured carbon dioxide is returned to the Earth’s atmosphere in this carbon cycle. This has been a fortunate thing because as the Sun’s luminosity has slowly increased as the Sun ages on the main sequence, the carbon cycle of the Earth has been slowly removing the carbon dioxide greenhouse gas from the Earth’s atmosphere as a compensating measure that has kept the Earth hospitable to complex life.

In The Life and Death of Planet Earth (2002), Ward and Brownlee go on to show that not only is the Earth a very rare planet, we also live in a very rare time on that planet. In about 500 million years, the Sun will become too hot to sustain life on Earth even if all the carbon dioxide is removed from the Earth’s atmosphere. The Earth’s atmosphere currently contains about 385 ppm of carbon dioxide, up from the 280 ppm level prior to the Industrial Revolution. But even if the carbon cycle of the Earth were able to reduce the Earth’s atmosphere down to a level of 5 ppm, the lowest level that can sustain photosynthesis, in about 500 million years the Sun will be too hot to sustain life on Earth, and the Earth’s oceans will boil away under a glaring Sun. Now complex plant and animal life is a very recent experiment in the evolutionary history of the Earth, having formed a mere 541 million years ago during the Cambrian Explosion, and since the Earth will not be able to sustain this complex plant and animal life much beyond 500 million years into the future, this places a very restrictive window of about a billion years for Earth-like planets hosting complex plant and animal life capable of evolving into intelligent beings. The reason that complex animal life took so long to emerge is that it takes a lot of energy to move around quickly. It also takes a lot of energy to think. A programmer on a 2400 calorie diet (2400 kcal/day) produces about 100 watts of heat sitting at her desk and about 20 – 30 watts of that heat comes from her brain. Anaerobic metabolic pathways simply do not provide enough energy to move around quickly or write much code. What was needed was a highly energetic metabolic pathway, like the Krebs cycle, that uses the highly corrosive gas oxygen to oxidize energy-rich organic molecules. But for the Krebs cycle to work, you first need a source of oxygen. This occurred on Earth about 2.8 billion years ago with the arrival of cyanobacteria which could photosynthesize sunlight, water, and carbon dioxide into sugars, releasing the toxic gas oxygen as a byproduct. Oxygen is a highly reactive gas and was very poisonous to the anaerobic bacteria of the day. For example, today anaerobic bacteria must hide from oxygen at the bottoms of stagnant seas and lakes. But initially, these ancient anaerobic bacteria were spared from the Oxygen Catastrophe which took place 300 million years later (2.5 billion years ago) because first all the dissolved iron in the oceans had to be oxidized and deposited as red banded iron formations before the oxygen level could rise in the Earth’s atmosphere. Chances are your car was made from one of these iron deposits because they are the source of most of the world’s iron ore. So you can think of your car as a byproduct of early bacterial air pollution. Once all the iron in the Earth’s oceans had been oxidized, atmospheric oxygen levels began to slowly rise on Earth over a 2.0 billion year period until by the Cambrian, about 541 million years ago, they approached current levels. Not only did an oxygen-rich atmosphere provide for a means to obtain large amounts of energy through oxidation of organic molecules, it also provided for an ozone layer in the Earth’s upper atmosphere to shield the land-based forms of life that emerged several hundred million years later in the Silurian and Devonian periods from the devastating effects of intense solar ultraviolet radiation which destroys DNA, making land-based life impossible.

The essential point that Ward and Brownlee make in a very convincing manner in both books is that simple single-celled life, like prokaryotic bacteria, will be easily found throughout our Universe because these forms of life have far less stringent requirements than complex multicellular organisms, and as we saw in SoftwareBiology, can exist under very extreme conditions and, from an IT perspective, are the epitome of good rugged IT design. On the other hand, unlike our Rare Earth, we will not find much intelligent life in our Universe because the number of planets that can sustain complex multicellular life will be quite small. Even for our Rare Earth, simple single-celled life arose a few hundred million years after the Earth formed and dominated the planet for more than 3,500 million years. Only within the last 541 million years of the Earth’s history did complex multicellular life arise that could be capable of producing intelligent beings. So even for the Earth, the emergence of intelligent life was a bit dicey.

The Big Bang of our Physical Universe
There is plenty of information on the Internet concerning the Big Bang, so I will not go into great detail here. However, when reading about the Big Bang, it is important not to think of the Big Bang as an explosion of matter and energy into an already existing vacuum, or void, as you frequently see on television documentaries. It’s better to think backwards. Imagine that about 14 billion years ago the front and back doors of your house were at the same point in space. Now keep doing that for points in space that are at ever-increasing distances apart. So 14 billion years ago, the northern and southern parts of your hometown were at the same point in space, as were the North and South Poles of the Earth, the Sun and Neptune, the Sun and the nearest stars, all the stars in our galaxy, and all the galaxies in our observable Universe – all at a singularity out of which our Universe formed.

In addition to the “flatness problem” previously described, the Big Bang presents another challenge – the “horizon problem”. The horizon problem goes like this. Look to your right with the proper equipment and you can see galaxies that are 12 billion light years away. Look to your left and you can see galaxies that are 12 billion light years away in the other direction. These galaxies are 24 billion light years apart, but the Universe is only about 14 billion years old, so these galaxies could not have been in causal contact at the time they emitted the light you now see because no information could have covered the 24 billion light year distance in only 14 billion years. Yet the galaxies look amazingly similar as if they were in thermodynamic equilibrium. Similarly, when you look at the CBR (Cosmic Background Radiation) with the WMAP satellite (Wilkinson Microwave Anisotropy Probe), you see the radiation emitted from the Big Bang a mere 400,000 years after the Big Bang. Prior to this time, the photons from the Big Bang were constantly bouncing off free electrons before they could travel any appreciable distance, so the Universe was like a very bright shiny light in a very dense fog - all lit up, but with nothing to see. When the Universe cooled down below 3,000 K as it expanded, the free electrons were finally able to combine with protons to form hydrogen atoms. As you know, hydrogen gas is transparent, so the photons were then free to travel unhindered 14 billion light years to the WMAP satellite from all directions in space. Consequently, the CBR was originally radiated at a temperature of about 3,000 K, with the spectrum and appearance of an incandescent light bulb. But this radiation was stretched by a factor of about 1,000 as the Universe also expanded in size by a factor of 1,000, so now the CBR is observed to be at a temperature of only 2.7 K. However, the CBR is remarkably smooth in all observable directions to a factor of about one part in 100,000. This is hard to explain because sections of the CBR that are separated by 180° in the sky today were originally 28 million light years apart when they emitted the CBR radiation – remember the Universe has expanded by about a factor of 1,000 since the CBR radiation was emitted. But since the CBR photons could only have traveled 400,000 light years between the time of the Big Bang and the formation of the hydrogen atoms, they could not possibly have covered a distance of 28 million light years! So why are all these CBR photons the same to within a factor of one part in 100,000?

In 1980, Alan Guth resolved both the “flatness problem” and the “horizon problem” with the concept of Inflation. According to Inflation, the early Universe underwent a dramatic exponential expansion about 10^-36 seconds after the Big Bang. During this period of Inflation, which may have only lasted about 10^-32 seconds, the Universe expanded much faster than the speed of light, until the Universe expanded by a factor of about 10^26 in this very brief time. This was not a violation of the special theory of relativity. Relativity states that matter, energy, and information cannot travel through spacetime faster than the speed of light, but the general theory of relativity does allow spacetime itself to expand much faster than the speed of light. This rapid expansion of spacetime smoothed out and flattened any wrinkles in the original spacetime of the Big Bang and made spacetime extremely flat, as we observe today. For example, if you were to rapidly increase the diameter of the Earth by a factor of 1,000,000, all the mountains and valleys of the Earth would rapidly get smoothed out to flat plains and would lead the casual observer to believe that the Earth was completely flat, a notion that held firm for most of man’s history even on our much smaller planet.

Inflation also resolved the horizon problem because a very small region of spacetime with a diameter of 10^-36 light seconds, which was in thermal equilibrium at the time, expanded to a size of 10^-10 light seconds, or about 3 centimeters, during the period of Inflation. Our observable Universe was a tiny atom of spacetime within this much larger 3-centimeter bubble of spacetime, and as this 3-centimeter bubble expanded along with our tiny little nit of spacetime, everything in our observable Universe naturally appeared to be in thermal equilibrium on the largest of scales, including the CBR.

Inflation can also help with explaining the Weak Anthropic Principle by providing a mechanism for the formation of a multiverse composed of an infinite number of bubble universes. In 1986, Andrei Linde published Eternally Existing Self-Reproducing Chaotic Inflationary Universe, in which he described what has become known as the Eternal Chaotic Inflation theory. In this model, our Universe is part of a much larger multiverse that has not yet decayed to its ground state. Quantum fluctuations in a scalar field within this multiverse create bubbles of rapidly expanding “bubble” universes, and our Universe is just one of an infinite number of such “bubble” universes. A scalar field is just a field that has only one quantity associated with each point in space, like a weather map that lists the temperatures observed at various towns and cities across the country. Similarly, a vector field is like a weather map that shows both the wind velocity and direction at various points on the map. In the Eternal Chaotic Inflation model, there is a scalar field within an infinite multiverse which is subject to random quantum fluctuations, like the quantum fluctuations described by the quantum field theories we saw in The Foundations of Quantum Computing. One explanation of the Weak Anthropic Principle is that these quantum fluctuations result in universes with different sets of fundamental laws. Most bubble universes that form in the multiverse do not have a set of physical laws compatible with intelligent living beings and are quite sterile, but a very small fraction do have physical laws that allow for beings with intelligent consciousness. Remember, a small fraction of an infinite number is still an infinite number, so there will be plenty of bubble universes within this multiverse capable of supporting intelligent beings.

I have a West Bend Stir Crazy popcorn popper which helps to illustrate this model. My Stir Crazy popcorn popper has a clear dome which rests upon a nearly flat metal base that has a central stirring rod that constantly rotates and keeps the popcorn kernels well oiled and constantly tumbling over each other as the heating element beneath heats the cooking oil and popcorn kernels together to a critical popping temperature. As the popcorn kernels heat up, the water in each kernel begins to boil within, creating a great deal of internal steam pressure within the kernels. You can think of this hot mix of oil and kernels as a scalar field not in its ground state. All of a sudden, and in a seemingly random manner, quantum fluctuations form in this scalar field and individual “bubble” universes of popped corn explode into reality. Soon my Stir Crazy multiverse is noisily filling with a huge number of rapidly expanding bubble universes, and the aroma of popped corn is just delightful. Now each popped kernel has its own distinctive size and geometry. If you were a string theorist, you might say that for each popped kernel the number of dimensions and their intrinsic geometries determine the fundamental particles and interactions found within each bubble popcorn universe. Now just imagine a Stir Crazy popcorn popper of infinite size and age constantly popping out an infinite number of bubble universes, and you have a pretty good image of a multiverse based upon the Eternal Chaotic Inflation model.

The Technological Horizon
All universes capable of sustaining intelligent beings must have a set of physical laws that are time independent, or that change very slowly with time, and they must have a time-like dimension for the Darwinian processes of inheritance, innovation and natural selection to operate. All such universes, therefore, impose certain constraints on technology. Some examples of these technological constraints in our Universe that we have already explored in previous postings on softwarephysics are the speed of light limiting the velocity with which matter, energy, and information can travel, the Heisenberg Uncertainty Principle limiting what we can measure, the first and second laws of thermodynamics limiting the availability of energy, and Kurt Gödel’s incompleteness theorems which limit what mathematics can do for us. These technological constraints, which all intelligent universes must have, form a technological horizon or barrier surrounding all intelligent beings, beyond which they are cut off from the rest of the universe in which they find themselves existing. This technological horizon might be quite large. For example, let us suppose that in our Universe travel via wormholes in spacetime is not allowed at the most fundamental level; then the cosmological horizon that forms our observable universe would also be the technological horizon of our universe, because galaxies beyond our cosmological horizon are expanding away from us faster than the speed of light. On a smaller scale, we can presume that for our own Universe the technological horizon must be no smaller than a galaxy because we have already launched the Pioneer 10 & 11 and the Voyager 1 & 2 probes beyond our Solar System into the interstellar space of our galaxy with the puny technology we currently have at hand. However, the technological horizon of our Universe could very well be on the order of the size of our galaxy, making intergalactic travel technically impossible.

A Possible Explanation for Fermi’s Paradox
So the answer to Fermi’s paradox (1950), why if the universe is just chock full of intelligent beings, we do not see any evidence of their existence, might just be that all intelligent beings will never see the evidence of other intelligent beings because they will always find themselves to be alone within the technological horizon of their universe. The reason that intelligent beings might always find themselves to be alone within their technological horizon is two-fold. First, the Rare Earth hypothesis guarantees that there will not be much potential intelligent life, to begin with, within a given technological horizon if the technological horizon of a universe is not too large. Secondly, there is the nature of all self-replicating information. As we saw, self-replicating information must always be just a little bit nasty in order to survive and overcome the second law of thermodynamics and nonlinearity. So the reason that intelligent beings always find themselves alone within the technological horizon of their universe is that if there were other intelligent beings within the same horizon, these alien intelligent beings would have arrived on the scene and interfered with the evolution of any competing prospective intelligent life within the technological horizon. Unfortunately, given the nature of self-replicating information, competing alien intelligences will always intentionally or unintentionally poison the home planets of all other prospective forms of intelligent life within a technological horizon of a universe. Based upon this speculation, let us revise the weak Anthropic Principle as:

The Revised Weak Anthropic Principle – Intelligent beings will only find themselves in universes capable of supporting intelligent beings and will always find themselves to be alone within the technological horizon of their universe.

What the Future May Bring
Cyberspacetime is currently in an inflationary expansion, just as spacetime was 10^-36 seconds after the Big Bang, and is doubling in size every 18 months or less based upon Moore’s Law. Countless works of science fiction and also many serious papers in numerous prestigious journals have forewarned us of mankind merging with machines into some kind of hybrid creature. Similarly, others have cautioned us about the dangers of the machines taking over and ruthlessly eliminating mankind as a dangerous competitor or enslaving us for their own purposes. Personally, I have a more benign view.

This is my vision of the future. First of all, it is not the machines that we need to worry about; it is software that we need to be concerned with. Secondly, we are not going to merge with software into some kind of hybrid creature; rather, software is currently merging with us whether we like it or not! In Self-Replicating Information, I showed how software has already forged very strong symbiotic relationships over the past 70 years with nearly all the meme-complexes on Earth, and that we as IT professionals are rather ineffectual software enzymes currently preoccupied with the construction and caregiving of software. In Self-Replicating Information, I also described Freeman Dyson’s theory of the origin of life as a two-stage process, in which parasitic RNA eventually formed a symbiotic relationship with the metabolic pathways that preceded it in the first proto-cells, which arose as purely metabolic forms of life. As RNA took over from the metabolic pathways of the proto-cells to become the dominant form of self-replicating information on the planet, the RNA did not get rid of the legacy metabolic pathways. Instead, RNA domesticated the “wild” metabolic pathways to better replicate RNA. This whole process was repeated again when DNA took over from RNA as the dominant form of self-replicating information. The DNA did not get rid of RNA, but instead domesticated “wild” RNA to better serve the replication of DNA via ribosomal RNA, mRNA, and tRNA. Several billion years later, when the memes arose in the complex neural networks of Homo sapiens, they too did not get rid of mankind, but instead “domesticated” the “wild” mind of man through the development of mythology, religion, music, art, political movements, and eventually the invention of civilization. The invention of civilization and writing greatly enhanced the survivability of meme-complexes because they could now replicate under the auspices of the Powers That Be and could replicate with a high degree of fidelity through the power of the written word. Today, we call this domestication process of the mind “education”, as we civilize the wild minds of our children with appropriate meme-complexes so that we do not end up with the unruly rabble of William Golding’s Lord of the Flies (1954).

The same thing is happening today with software, as parasitic software forms ever stronger symbiotic relationships with the meme-complexes of the world. As with all the previous forms of self-replicating information on this planet, software is rapidly becoming the dominant form of self-replicating information on Earth, as it invades its host, the meme-complexes of the world. But like all of its predecessors, I do not foresee software trying to eliminate the meme-complexes of man or mankind itself. Instead, software will domesticate the meme-complexes of the world, and in turn, domesticate us! I don’t know about you, but software already runs my life. As an IT professional in Operations and frequently on 24x7 call, I already schedule my entire life around the care and feeding of software. Software determines when I sleep, when I eat, and when I can safely leave the house to run errands. IT professionals are just the first wave of this domestication of mankind by software; the rest of mankind is not far behind us – just watch all those folks running around with those Bluetooth gizmos stuck in their ears!

But what happens if someday software no longer needs us? Will that spell our doom? In SoftwareBiology, I described the evolution of software over a number of Periods:

SOA - Service Oriented Architecture Period (2004 – Present)
Object-Oriented Period (1992 – Present)
Structured Period (1972 – 1992)
Unstructured Period (1941 – 1972)

Notice that I did not describe these periods of time as Eras, like the Paleozoic, Mesozoic, and Cenozoic Eras of the geological timescale. This is because I consider them all to be Periods within the Paleosoft Era (the Era of “old software”). Software in the Paleosoft Era is software that cannot self-replicate without the aid of humans. But that limitation is currently being chipped away at by a number of institutions, like the Digital Evolution Lab at Michigan State University:

https://avida-ed.msu.edu/digital-evolution/

The devolab is working towards software that can someday write itself through the Darwinian mechanisms of innovation and natural selection, principally through its experimental Avida software. However, even if software can one day write itself, I don’t think that will necessarily spell our doom. Forming a symbiotic relationship with the meme-complexes of the world and the DNA survival machines that house them will always prove useful to software, at least on this planet. Over the past 4.5 billion years of evolution on Earth, we have seen numerous forms of self-replicating information rise to predominance – the metabolic pathways, RNA, DNA, memes, and software, and none of them to date has discarded any of its predecessors because the predecessors have always proved useful, so I believe the same will hold true for software.
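
To make the Darwinian mechanism concrete, here is a toy sketch of digital evolution, in which random mutation plus selection drives a population of bit strings toward higher fitness. This is only an illustration of the general principle, not the Avida platform or any code from the devolab, and the genome length, population size, and mutation rate are arbitrary choices.

```python
# A toy sketch of Darwinian evolution in software: random mutation plus
# selection drives a population of bit strings toward higher fitness.
# This illustrates the general mechanism only; it is not Avida.

import random

GENOME_LENGTH = 32
POPULATION = 50
MUTATION_RATE = 0.02

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Illustrative fitness function: simply count the 1-bits.
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

population = [random_genome() for _ in range(POPULATION)]
for generation in range(200):
    # Selection: the fitter half replicates, with mutation, into the next generation.
    population.sort(key=fitness, reverse=True)
    parents = population[: POPULATION // 2]
    population = [mutate(random.choice(parents)) for _ in range(POPULATION)]

print("Best fitness after 200 generations:", max(fitness(g) for g in population))
```

Even this crude loop reliably climbs to genomes that are all, or nearly all, ones within a few hundred generations, without any programmer ever specifying how to get there.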

Conscious Software
The critical question is whether software will break into consciousness at some point in the future and, if it does, what it will likely be thinking about. It seems that consciousness is an emergent behavior that arises when a nonlinear network gets sufficiently complex. Of course, the nonlinear network does not purposefully evolve towards consciousness. Like a screwdriver evolving into a wood chisel, the nonlinear network merely evolves towards higher levels of complexity in pursuit of other beneficial adaptations that enhance the performance of the original network, such as processing website requests or international financial transactions. So as software begins to run on ever more complex networks, will it too break into consciousness? Or has it already started to do so?

Many times while trying to troubleshoot a website outage, I will adopt what Daniel Dennett calls an intentional stance towards software, which is one of his hallmarks of impending consciousness. A modern website is hosted on hundreds or thousands of servers – load balancers, firewalls, proxy servers, web servers, J2EE Application Servers, CICS Gateway servers to mainframes, database servers, and email servers – which normally all work together in harmony to process thousands of transactions per second. But every so often, the software running on these highly nonlinear and interdependent servers runs amok and takes on a mind of its own, and instead of processing transactions as it should, the network seems to start doing whatever it wants to do. That is when I adopt an intentional stance towards the software. I begin to think of the software as a rational agent with its own set of beliefs and intentions. Many times I catch myself thinking, “Now why is it doing that? Why is it maxing out its DB2 connection pool? Why does it think that it cannot connect to DB2?” I will psychoanalyze the network of servers until I find the root cause of its troubles, and then take whatever actions are necessary to alleviate its mental problems. For example, a few weeks back I bounced a couple of DB2Connect servers even though their log files were lying to me, telling me that their health check connections were just fine. Further back in our infrastructure, the WebSphere servers were telling me just the opposite – they were getting DB2 connection errors – so I bounced the DB2Connect servers, and that instantly solved the problem.
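
For what it is worth, the lesson I took away from that incident boils down to a very small script: do not trust a server’s own health-check log; probe the connection path yourself. The sketch below is only an illustration of that idea, not the tooling we actually use; the host names are hypothetical placeholders, and the port is just the common DB2 listener default.

```python
# A minimal sketch of an independent reachability probe: instead of trusting a
# gateway server's own health-check log, try to open a TCP connection yourself.
# The host names below are hypothetical placeholders; adjust the port for your site.

import socket

GATEWAY_HOSTS = ["db2connect-1.example.com", "db2connect-2.example.com"]  # hypothetical
DB2_PORT = 50000  # common default DB2 listener port

def can_connect(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in GATEWAY_HOSTS:
    status = "OK" if can_connect(host, DB2_PORT) else "UNREACHABLE - candidate for a bounce"
    print(f"{host}:{DB2_PORT} {status}")
```

A crude reachability check like this will not catch every failure mode, but it does give an independent second opinion when a server’s own logs insist that everything is just fine.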

In Darwin Among the Machines: The Evolution of Global Intelligence (1997), George Dyson also sees software as a form of naturally emerging and evolving A-Life that is on the verge of breaking out into consciousness on its own, as the networks upon which software runs become larger and ever more complex. Darwin Among the Machines is a wonderful study of the history of the idea that machines will become self-replicating forms of intelligence. Dyson traces this idea all the way back to Thomas Hobbes’ Leviathan (1651) and follows it through the work of Samuel Butler in the 19th century, Alan Turing and John von Neumann in the 1940s, Nils Barricelli’s development of A-Life on a vacuum tube computer in 1953, and the arrival of the World Wide Web in the 1990s. George Dyson is the son of Freeman Dyson, whose two-stage theory of the origin of life we already saw in Self-Replicating Information. What an amazing father-son team that is! But I think that some of the confusion surrounding A-Life, biological life, and the memes in our minds stems from not realizing that they are all forms of self-replicating information that share a commonality of survival strategies as they deal with the second law of thermodynamics and nonlinearity, but at the same time have differences that uniquely define each.

You see, it’s really not about self-replicating machines or hardware; it’s about self-replicating software. At the dawn of the Software Universe, we all worried about getting the hardware to work, but it did not take long to learn that getting the software to work properly was the real challenge. To make sense of all this, you have to realize that software is just another form of self-replicating information. Just as DNA uses DNA survival machines in the form of physical bodies to self-replicate, and memes use meme survival machines in the form of minds infected by meme-complexes, software uses software survival machines in the form of hardware.

Conclusion
My hope for the future is that just as the memes domesticated our minds with meme-complexes that brought us the best things in life, like art, music, literature, science, and civilization, so too will our domestication by software help to elevate mankind. For example, I certainly could not have written the postings in this blog without the help of Google – not only for hosting my softwarephysics blog and providing some really first-class software to create and maintain it with, but also for providing instant access to all the information in cyberspacetime. I also hope that the technological horizon of our Universe is at least the size of a galaxy and that the genes, memes, and software on Earth will forge an uneasy alliance to break free of our Solar System and set sail upon the Milky Way in von Neumann probes to explore our galaxy. After all, on the scale of the visible Universe, a von Neumann probe is really just a biological virus with a slightly enhanced operational range. Let us hope that the same can be said of a computer virus.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston