I just finished reading The Book of Universes (2011) by John D. Barrow, which is a wonderful overview of the rise and advancement of cosmology in the 20th and 21st centuries. Professor Barrow is a strong proponent of using Brandon Carter’s Weak Anthropic Principle (1973) as a practical tool for singling out and eliminating obviously fallacious ideas in cosmology.
The Weak Anthropic Principle - Intelligent beings will only find themselves existing in universes capable of sustaining intelligent beings.
Like Occam's razor, the idea that the simplest explanation for a phenomenon is usually the best one, John Barrow wields the Weak Anthropic Principle in The Book of Universes like a skilled plastic surgeon with a sharp scalpel, cutting away the unwanted imperfections of cosmology. There have been many advances in cosmology since John Barrow coauthored his first book, The Anthropic Cosmological Principle (1986), another one of my favorites, with Frank J. Tipler, and Professor Barrow uses The Book of Universes to quickly bring us up to speed on the current state of affairs in cosmology in a very concise and clear manner. But when all is said and done, and all of the current and past models of our Universe have been explored, Barrow seems to lean heavily towards our Universe being a member of an infinite multiverse with no beginning and no end. In such a model, the multiverse endures forever and has always existed in a state of self-replication, in keeping with Andrei Linde’s Eternal Chaotic Inflation model (1986), which proposes that the multiverse is in an unending state of inflation and self-replication that is constantly generating new universes wherever inflation ceases. That is also my current favorite model for our Universe, as I outlined in CyberCosmology, Is the Universe Fine-Tuned for Self-Replicating Information? and Genes, Memes and Software.
But in the chapter on Post-Modern Universes, Professor Barrow has a section on Fake Universes in which he explores some of the more disturbing ramifications of such a multiverse composed of an infinite number of universes, which consequently must also contain an infinite subset of universes capable of sustaining intelligent beings. One proposed problem with this is that within this infinite subset of universes capable of sustaining intelligent beings, there will necessarily be found intelligent beings capable of developing complex software that is able to simulate universes containing simulated intelligent beings. In fact, there will be so many of these simulated universes containing simulated intelligent beings that, if you should find yourself to be an intelligent being, the overwhelming odds are that you are a simulated intelligent being in a simulated universe! You might even be a simulated scientist doing simulated science in a simulated universe. That disturbing idea certainly sounds a lot like softwarephysics and hits a little too close to home for my comfort. As I explained in the Introduction to Softwarephysics, softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. Since I have been actively doing softwarephysics for over 30 years in the computer-simulated Software Universe as an IT professional, I think that I may have some practical experience with doing simulated science in a simulated universe that can be of help in differentiating between working in a “real” and a simulated universe.
In The Book of Universes, Barrow points out that there are several ways for intelligent observers to determine if they are living in a “real” universe or a simulated universe. First of all, a simulated universe will have a tendency to gloss over the details and the fine points of reality. For example, as we saw in The Foundations of Quantum Computing, the particles defined by the quantum field theories of our physical Universe use Richard Feynman’s path integral formulation of quantum mechanics, first proposed in 1948, to determine how to behave. So these poor little particles have to constantly compute the results of an infinite number of Feynman diagrams just to figure out how to dance about for us. Since this would lead to the consumption of an infinite amount of computer time to simply simulate a single particle, it is necessary to take some computational shortcuts whenever programming a simulation of a universe. For example, in classical mechanics, the shortcut is to allow particles to only follow a single path and not an infinite number of paths. In classical mechanics, a particle will follow the path that minimizes its action. Action is defined as the integral over time of the Lagrangian taken along the path of the particle between its initial position and its final position. Remember, when you integrate something, you just add up all of its little parts, so taking the integral of the Lagrangian over time of a particle’s path is just adding up all of the little contributions to the Lagrangian over its path. The Lagrangian L is defined as:
L = T – U
L = Lagrangian
T = kinetic energy of the particle
U = potential energy of the particle
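To make the principle of least action concrete, here is a minimal numerical sketch of my own (not from Barrow's book) that compares the action of the true free-fall path of a falling particle with that of a rival constant-velocity path between the same two endpoints. The variable names and the NumPy-based approach are my own assumptions:

```python
import numpy as np

g = 9.8                      # gravitational acceleration, m/sec²
m = 9.109e-31                # electron mass, kg
D = 1.0                      # total distance fallen, m
t_f = np.sqrt(2 * D / g)     # fall time, from d = ½ g t²
t = np.linspace(0.0, t_f, 10001)
dt = t[1] - t[0]

def action(y):
    """Action = integral over time of the Lagrangian L = T - U."""
    v = np.gradient(y, t)    # velocity along the path
    T = 0.5 * m * v**2       # kinetic energy
    U = m * g * (D - y)      # potential energy (y = distance already fallen)
    return ((T - U) * dt).sum()

y_true = 0.5 * g * t**2      # the classical free-fall path
y_rival = D * t / t_f        # a rival path: fall at constant velocity

# The true path yields the smaller action of the two:
print(action(y_true) < action(y_rival))  # True
```

Any other rival path you cook up with the same endpoints will likewise come out with a larger action than the classical free-fall path.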
Now suppose you drop an electron in a vacuum chamber on the surface of the Earth and want to determine its state of motion after it has fallen a distance D of one meter. Before you drop the stationary electron it has zero kinetic energy because it is not moving, but it does have lots of potential energy as it rests in the Earth’s gravitational field, so its Lagrangian begins with a minimum value of:
L = 0 – U = -U
Since the action is the integral of the Lagrangian over time, and the electron wants to follow the path that minimizes its action over time, it will want to spend lots of time near its starting gate of D = 0 in the vacuum chamber where L = -U. Once released, the electron will reluctantly begin to fall straight downward with a very small initial velocity, so that it spends lots of time in the region where L is nearly -U. As the electron slowly falls in the Earth’s gravitational field, it picks up additional speed and kinetic energy as it slowly turns some of its original potential energy into motion. As it does so, L continuously gets larger as T gets bigger and U gets smaller. So in order to minimize its action, the electron has to move faster and faster through this region of increasing L, so that it spends less and less time in regions where L keeps getting larger. Thus in classical mechanics, we see the electron start out with an initial velocity of zero and see its velocity slowly increase with an acceleration of g = 9.8 m/sec², and after falling one meter the electron will have attained a velocity of 4.427 m/sec. In classical mechanics we can accurately predict the exact path that the electron follows and its exact velocity “v” and distance traveled “d” at all times, and that is certainly something easily calculated in a computer simulation:
d = ½ g t²
v² = 2 g d
where g = 9.8 m/sec²
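These two formulas are easy to check with a few lines of Python (a quick sketch of my own):

```python
import math

g = 9.8                     # m/sec²
d = 1.0                     # distance fallen, m

t = math.sqrt(2 * d / g)    # solve d = ½ g t² for the fall time t
v = math.sqrt(2 * g * d)    # v² = 2 g d gives the final velocity

print(round(t, 3))          # 0.452 seconds to fall one meter
print(round(v, 3))          # 4.427 m/sec, the velocity quoted above
```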
And indeed, if we were to drop a golf ball in a vacuum chamber at the Earth’s surface, it would follow a single path that minimizes its action to a very remarkable degree. But unfortunately, electrons are not observed to behave in this way. Instead, an electron behaves in a strange quantum mechanical manner because of its innate wavelike behavior that explores all possible paths, so we are stuck with computing the results of an infinite number of Feynman diagrams just for this single little electron, and that’s not a very efficient way to run a computer simulation!
So a simulated universe, like the Software Universe, must necessarily take some simplifying shortcuts at least at the quantum level. For example, in Quantum Software and SoftwareChemistry I showed how each character in a line of code in the Software Universe can be thought of as an atom, and each variable as an organic molecule:
discountedTotalCost = (totalHours * ratePerHour) - costOfNormalOffset;
Each character in a line of code can be in one of 256 quantum ASCII states defined by 8 quantized bits, with each bit in one of two quantum states “1” or “0”, which can also be characterized as ↑ or ↓ and can be thought of as 8 electrons in 8 electron shells, with each electron in a spin-up ↑ or spin-down ↓ state:
C = 01000011 = ↓ ↑ ↓ ↓ ↓ ↓ ↑ ↑
H = 01001000 = ↓ ↑ ↓ ↓ ↑ ↓ ↓ ↓
N = 01001110 = ↓ ↑ ↓ ↓ ↑ ↑ ↑ ↓
O = 01001111 = ↓ ↑ ↓ ↓ ↑ ↑ ↑ ↑
Figure 1 – The electron configuration of a carbon atom is similar to the ASCII code for the letter C in the source code of a program (click to enlarge)
We may then think of each character in the above line of code as an atom in an organic molecule. Thus, each variable in the line of code becomes an organic molecule in a chemical reaction with the other variables or organic molecules in the line of code and ultimately produces a macroscopic software effect. The 8 quantized bits for each character are the equivalent of the spins of 8 electrons in 8 electron shells that may be either in a spin-up ↑ or spin-down ↓ state. And the chemical characteristics of each simulated atom are determined by the arrangements of the spin-up ↑ or spin-down ↓ state of the electrons in the character. The atoms in each variable come together to form an organic molecule, in which the spins of all the associated characters form molecular orbitals for the variable, giving the variable its ultimate softwarechemical characteristics. Notice that although the above simulation may be useful to IT professionals because it paves the way towards pursuing a biological approach to software through organic softwarechemistry as I depicted in SoftwareBiology, it really is just a very crude approximation of the quantum field theory of quantum electrodynamics – QED, and is not representative of the complexity of what we find in our Universe.
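The mapping from characters to spin states is easy to play with in Python; this little sketch of my own (the function name is hypothetical) reproduces the spin configurations listed above:

```python
def spins(ch):
    """Render a character's 8-bit ASCII code as 8 spin-up/spin-down electrons."""
    bits = format(ord(ch), '08b')      # e.g. 'C' -> '01000011'
    return ' '.join('↑' if b == '1' else '↓' for b in bits)

for atom in 'CHNO':
    print(atom, '=', format(ord(atom), '08b'), '=', spins(atom))

# A variable name then becomes an "organic molecule" of such atoms:
molecule = [spins(c) for c in 'totalHours']
```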
In The Book of Universes, Barrow also points out that simulated universes are liable to crash, and when they do so, leave behind baffled simulated scientists to ponder why the laws of their universe suddenly failed or dramatically changed. So if you should find yourself in a universe that is 13.7 billion years old and that seems to have run in a very stable and reproducible manner for that entire time, you can probably be assured that you are living in a “real” universe and not in a computer simulation. Now some might argue that, given enough technology, intelligent beings somewhere in the multiverse should certainly be able to create computer simulations of a universe that are both stable and well-behaved over cosmological periods of time. But that would be a violation of the three laws of software mayhem that I presented in The Fundamental Problem of Software:
The Three Laws of Software Mayhem
1. The second law of thermodynamics tends to introduce small bugs into software that are never detected through testing.
2. Because software is inherently nonlinear, these small bugs cause general havoc when they reach production.
3. But even software that is absolutely bug-free can reach a critical tipping point and cross over from linear to nonlinear behavior, with disastrous and unpredictable results, as the load on software is increased.
In Self-Replicating Information I explained how both living things and software are forms of self-replicating information. Since the origin of software in a universe is contingent upon the emergence of intelligent beings in the universe, and since they are both forms of self-replicating information, for the above three laws to be truly universal, we must find that all universes capable of sustaining intelligent beings are also nonlinear and have a second law of thermodynamics. In Is the Universe Fine-Tuned for Self-Replicating Information? I explained that such must be the case in order for intelligent beings and software to arise.
Nonlinearity and a second law of thermodynamics are both necessary for the emergence of self-replicating information because they are the driving forces behind the Darwinian mechanisms of innovation and natural selection, which select for self-replicating information over information that does not self-replicate and allow it to emerge and evolve. The second law guarantees that some level of copying errors will occur whenever self-replicating information replicates, resulting in mutations that, on rare occasions, provide for beneficial innovations. The universe must also be nonlinear, so that small coding errors or mutations can cause dramatic effects, in order for natural selection to winnow out undesirable mutations. The second law also guarantees that the low-entropy materials used to encode self-replicating information, and the free energy necessary to replicate it, will be in short supply, since the necessary building materials have a tendency to degrade into high-entropy waste materials, and the free energy tends to degrade into useless heat energy. The scarcity of these items creates a struggle for existence amongst the competing forms of self-replicating information, leading to the Darwinian mechanism of natural selection. After all, if it had not been for the second law of thermodynamics and nonlinearity, and food and shelter spontaneously emerged out of nothing, we would all be fat and happy bacteria today! Finally, because there will always be more microstates for a disordered high-entropy macrostate than there are for an ordered low-entropy macrostate, all possible universes must necessarily have a second law of thermodynamics, no matter what physics they might be running on at the moment. It’s just in the cards.
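The counting argument behind that last point is easy to demonstrate. This sketch of my own counts the microstates of a toy system of 100 two-state spins using the binomial coefficient: an ordered macrostate has exactly one arrangement, while a disordered half-up, half-down macrostate has an astronomical number of them.

```python
from math import comb

N = 100                        # a system of 100 two-state "spins" (or bits)

# Ordered macrostate: all spins down -- exactly one way to arrange it.
ordered_microstates = comb(N, 0)

# Disordered macrostate: half the spins up -- vastly more arrangements.
disordered_microstates = comb(N, N // 2)

print(ordered_microstates)      # 1
print(disordered_microstates)   # roughly 1.0 x 10^29 microstates
```

With the overwhelming majority of microstates belonging to disordered macrostates, a randomly evolving system drifts toward disorder no matter what the underlying physics looks like.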
So in order for intelligent beings and software to arise in the first place in a universe, it must necessarily be a universe in which it is impossible to write software with the perfection required to run a simulated universe of the complexity seen in our Universe. I believe this puts a real Catch-22 limitation on computer simulations of universes. If a universe is capable of producing intelligent beings and software, then it is a universe in which it is impossible to write computer simulations of other universes that are not easily exposed as being fraudulent, and I think this is another demonstration of how the Weak Anthropic Principle can be used to eliminate obviously fallacious ideas in cosmology.
So I would contend that it is safe to say that we are not living in a computer-simulated universe created by alien intelligent beings. On the other hand, I am a Platonist at heart, so in a sense, I view our Universe and the multiverse as a mathematical simulation, as I described in What’s It All About?. Plato believed that the most fundamental form of reality was composed of a set of perfect Forms, like the concept of a circle as a set of points equidistant from a central point. The perfect Form of a circle is a blueprint of perfection, but any circle made by man, no matter how accurate, is simply a debased version of the perfect Form of a circle, which by nature is physically unattainable. I contend that the same can be said of software. No matter how hard intelligent beings may strive for perfection, software will always be a debased form of mathematics, without a level of innate perfection sufficient to run a real universe. For that, you need the real thing. For a Platonist, mathematics exists all on its own as a set of perfect Forms, within a higher reality of its own that can only be discovered by intelligent beings through rational mathematical reasoning, while for a realist, mathematics is just a set of abstract imaginary concepts cooked up by our overly active pattern-matching minds as they interact with empirical observations of the Universe, with no concrete physical basis in reality. But for me as a Platonist, the Universe and multiverse are indeed simulations – just not computer simulations.
For further discussions of these matters, see Konrad Zuse’s Calculating Space (1967), Nick Bostrom’s Are You Living in a Computer Simulation? (2002), or Paul Davies' Cosmic Jackpot: Why Our Universe Is Just Right for Life (2007).
Comments are welcome at email@example.com
To see all posts on softwarephysics in reverse order go to: