Sunday, July 24, 2011

Is Our Universe a Computer Simulation?

I just finished reading The Book of Universes (2011) by John D. Barrow, which is a wonderful overview of the rise and advancement of cosmology in the 20th and 21st centuries. Professor Barrow is a strong proponent of using Brandon Carter’s Weak Anthropic Principle (1973) as a practical tool for singling out and eliminating obviously fallacious ideas in cosmology.

The Weak Anthropic Principle - Intelligent beings will only find themselves existing in universes capable of sustaining intelligent beings.

Like Occam's razor, the idea that the simplest explanation for a phenomenon is usually the best one, John Barrow wields the Weak Anthropic Principle in The Book of Universes like a skilled plastic surgeon with a sharp scalpel, cutting away the unwanted imperfections of cosmology. There have been many advances in cosmology since John Barrow coauthored another one of my favorites, The Anthropic Cosmological Principle (1986), with Frank J. Tipler, and Professor Barrow uses The Book of Universes to quickly bring us up to speed on the current state of affairs in cosmology in a very concise and clear manner. But when all is said and done, and all of the current and past models of our Universe have been explored, Barrow seems to lean heavily towards our Universe being a member of an infinite multiverse with no beginning and no end. In such a model, the multiverse endures forever and has always existed in a state of self-replication, in keeping with Andrei Linde’s Eternal Chaotic Inflation model (1986), which proposes that the multiverse is in an unending state of inflation and self-replication that is constantly generating new universes wherever inflation ceases. That is also my current favorite model for our Universe, as I outlined in CyberCosmology, Is the Universe Fine-Tuned for Self-Replicating Information? and Genes, Memes and Software.

But in the chapter on Post-Modern Universes, Professor Barrow has a section on Fake Universes in which he explores some of the more disturbing ramifications of such a multiverse composed of an infinite number of universes, which consequently must also contain an infinite subset of universes capable of sustaining intelligent beings. One proposed problem with this is that within this infinite subset of universes capable of sustaining intelligent beings, there will necessarily be found intelligent beings capable of developing complex software that is able to simulate universes containing simulated intelligent beings. In fact, there will be so many of these simulated universes containing simulated intelligent beings that, if you should find yourself to be an intelligent being, the overwhelming odds are that you are a simulated intelligent being in a simulated universe! You might even be a simulated scientist doing simulated science in a simulated universe. That disturbing idea certainly sounds a lot like softwarephysics and hits a little too close to home for my comfort. As I explained in the Introduction to Softwarephysics, softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. Since I have been actively doing softwarephysics for over 30 years in the computer-simulated Software Universe as an IT professional, I think that I may have some practical experience with doing simulated science in a simulated universe that can be of help in differentiating between working in a “real” and a simulated universe.

In The Book of Universes, Barrow points out that there are several ways for intelligent observers to determine if they are living in a “real” universe or a simulated universe. First of all, a simulated universe will have a tendency to gloss over the details and the fine points of reality. For example, as we saw in The Foundations of Quantum Computing, the particles defined by the quantum field theories of our physical Universe use Richard Feynman’s path integral formulation of quantum mechanics, first proposed in 1948, to determine how to behave. So these poor little particles have to constantly compute the results of an infinite number of Feynman diagrams just to figure out how to dance about for us. Since this would consume an infinite amount of computer time simply to simulate a single particle, it is necessary to take some computational shortcuts whenever programming a simulation of a universe. For example, in classical mechanics, the shortcut is to allow particles to follow only a single path and not an infinite number of paths. In classical mechanics, a particle will follow the path that minimizes its action. Action is defined as the integral over time of the Lagrangian taken along the path of the particle between its initial position and its final position. Remember, when you integrate something, you just add up all of its little parts, so integrating the Lagrangian over time along a particle’s path is just adding up all of the little contributions to the Lagrangian along its path. The Lagrangian L is defined as:

L = T – U

where:
L = Lagrangian
T = kinetic energy of the particle
U = potential energy of the particle

Now suppose you drop an electron in a vacuum chamber on the surface of the Earth and want to determine its state of motion after it has fallen a distance D of one meter. Before you drop the stationary electron, it has zero kinetic energy because it is not moving, but it does have lots of potential energy as it rests in the Earth’s gravitational field, so its Lagrangian begins with a minimum value of:

L = 0 – U = -U

Since the action is the integral of the Lagrangian over time, and the electron wants to follow the path that minimizes its action, it will want to spend lots of time near its starting gate of D = 0 in the vacuum chamber where L = -U. Once released, the electron will reluctantly begin to fall straight downward with a very small initial velocity, so that it spends lots of time in the region where L is nearly -U. As the electron slowly falls in the Earth’s gravitational field, it will pick up additional speed and kinetic energy as it turns some of its original potential energy into motion. As it does so, L continuously gets larger as T gets bigger and U gets smaller. So in order to minimize its action, the electron will have to move faster and faster through this region of increasing L, so that it spends less and less time in regions where L is continuously getting larger and larger. Thus in classical mechanics, we see the electron start out with an initial velocity of zero and its velocity slowly increase with an acceleration of g = 9.8 m/sec², and after falling one meter the electron will have attained a velocity of 4.427 m/sec. In classical mechanics we can accurately predict the exact path that the electron follows and its exact velocity “v” and distance traveled “d” at all times, and that is certainly something easily calculated in a computer simulation:

d = ½ g t²
v² = 2 g d

where g = 9.8 m/sec²
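
If you would like to check these numbers yourself, here is a minimal Python sketch of my own (the function name action, the unit mass, and the rival constant-velocity path are all just illustrative choices, not anything from Barrow's book). It numerically adds up the little pieces of L dt along the classical free-fall path and along a rival straight-line path with the same endpoints, and the classical path does indeed come out with the smaller action, while the final velocity works out to the 4.427 m/sec quoted above:

import math

g = 9.8          # gravitational acceleration in m/sec^2
D = 1.0          # fall distance in meters
m = 1.0          # a unit mass; only the relative size of the two actions matters
T = math.sqrt(2 * D / g)          # classical fall time from d = 1/2 g t^2

def action(path, velocity, steps=100000):
    # Numerically integrate the action S = integral of L dt, with L = T - U.
    # 'path' and 'velocity' give the distance fallen and the speed at time t.
    # The zero point of potential energy is chosen at the release point, which
    # merely adds the same constant to every path's action.
    dt = T / steps
    S = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt                     # midpoint of each little time slice
        kinetic = 0.5 * m * velocity(t) ** 2
        potential = -m * g * path(t)           # potential energy drops as the electron falls
        S += (kinetic - potential) * dt        # add up the little contributions of L dt
    return S

# The classical free-fall path: d = 1/2 g t^2, v = g t
S_free_fall = action(lambda t: 0.5 * g * t ** 2, lambda t: g * t)

# A rival path with the same endpoints: constant velocity D/T all the way down
S_constant = action(lambda t: (D / T) * t, lambda t: D / T)

print("Fall time                  =", round(T, 3), "sec")
print("Final velocity sqrt(2 g D) =", round(math.sqrt(2 * g * D), 3), "m/sec")  # about 4.427
print("Action along free fall     =", round(S_free_fall, 3))                    # about 2.95
print("Action at constant speed   =", round(S_constant, 3))                     # about 3.32

The exact numbers depend on the unit mass I picked, but the ordering does not: any path other than the free-fall path piles up more action, which is the shortcut that makes classical mechanics so cheap to simulate.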

And indeed, if we were to drop a golf ball in a vacuum chamber at the Earth’s surface, it would follow a single path that minimizes its action to a very remarkable degree. But unfortunately, electrons are not observed to behave in this way. Instead, an electron behaves in a strange quantum mechanical manner because of its innate wavelike behavior that explores all possible paths, so we are stuck with computing the results of an infinite number of Feynman diagrams just for this single little electron, and that’s not a very efficient way to run a computer simulation!

So a simulated universe, like the Software Universe, must necessarily take some simplifying shortcuts at least at the quantum level. For example, in Quantum Software and SoftwareChemistry I showed how each character in a line of code in the Software Universe can be thought of as an atom, and each variable as an organic molecule:

discountedTotalCost = (totalHours * ratePerHour) - costOfNormalOffset;

Each character in a line of code can be in one of 256 quantum ASCII states defined by 8 quantized bits, with each bit in one of two quantum states “1” or “0”, which can also be characterized as ↑ or ↓ and can be thought of as 8 electrons in 8 electron shells, with each electron in a spin-up ↑ or spin-down ↓ state:

C = 01000011 = ↓ ↑ ↓ ↓ ↓ ↓ ↑ ↑
H = 01001000 = ↓ ↑ ↓ ↓ ↑ ↓ ↓ ↓
N = 01001110 = ↓ ↑ ↓ ↓ ↑ ↑ ↑ ↓
O = 01001111 = ↓ ↑ ↓ ↓ ↑ ↑ ↑ ↑

Figure 1 – The electron configuration of a carbon atom is similar to the ASCII code for the letter C in the source code of a program

We may then think of each character in the above line of code as an atom in an organic molecule. Thus, each variable in the line of code becomes an organic molecule in a chemical reaction with the other variables or organic molecules in the line of code and ultimately produces a macroscopic software effect. The 8 quantized bits for each character are the equivalent of the spins of 8 electrons in 8 electron shells that may be either in a spin-up ↑ or spin-down ↓ state. And the chemical characteristics of each simulated atom are determined by the arrangements of the spin-up ↑ or spin-down ↓ state of the electrons in the character. The atoms in each variable come together to form an organic molecule, in which the spins of all the associated characters form molecular orbitals for the variable, giving the variable its ultimate softwarechemical characteristics. Notice that although the above simulation may be useful to IT professionals because it paves the way towards pursuing a biological approach to software through organic softwarechemistry as I depicted in SoftwareBiology, it really is just a very crude approximation of the quantum field theory of quantum electrodynamics – QED, and is not representative of the complexity of what we find in our Universe.
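
As a concrete little illustration of this character-as-atom analogy, here is a short Python sketch of my own (the helper name spin_state is just my invention) that turns any character into its 8-bit ASCII code and the corresponding string of spin-up ↑ and spin-down ↓ arrows used in the table above:

def spin_state(character):
    # Convert a character to its 8-bit ASCII code, then map each quantized bit
    # to an electron spin: '1' becomes spin-up and '0' becomes spin-down.
    bits = format(ord(character), '08b')             # e.g. 'C' -> '01000011'
    arrows = ' '.join('↑' if bit == '1' else '↓' for bit in bits)
    return bits, arrows

for atom in "CHNO":
    bits, arrows = spin_state(atom)
    print(atom, '=', bits, '=', arrows)
# Prints C = 01000011 = ↓ ↑ ↓ ↓ ↓ ↓ ↑ ↑ and so on, matching the table above.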

In The Book of Universes, Barrow also points out that simulated universes are liable to crash, and when they do so, leave behind baffled simulated scientists to ponder why the laws of their universe suddenly failed or dramatically changed. So if you should find yourself in a universe that is 13.7 billion years old and that seems to have run in a very stable and reproducible manner for that entire time, you can probably be assured that you are living in a “real” universe and not in a computer simulation. Now some might argue that, given enough technology, intelligent beings somewhere in the multiverse should certainly be able to create computer simulations of a universe that are both stable and well-behaved over cosmological periods of time. But that would be a violation of the three laws of software mayhem that I presented in The Fundamental Problem of Software:

The Three Laws of Software Mayhem

1. The second law of thermodynamics tends to introduce small bugs into software that are never detected through testing.

2. Because software is inherently nonlinear, these small bugs cause general havoc when they reach production.

3. But even software that is absolutely bug-free can reach a critical tipping point and cross over from linear to nonlinear behavior, with disastrous and unpredictable results, as the load on software is increased.
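
The third law is the easiest one to play with in a toy model. The Python sketch below uses the logistic map, a textbook example of a nonlinear system, and the identification of its parameter r with the load on a piece of software is purely an analogy of my own, not a model of any real application: as r is turned up, the long-term behavior passes from settling on a single predictable value, to oscillating between a few values, to full-blown chaos in which tiny differences in the starting value produce wildly different results.

def iterate_logistic(r, x0=0.5, warmup=500, samples=4):
    # Iterate the nonlinear map x -> r * x * (1 - x), throw away the transient,
    # and return the next few values so we can see the long-term behavior.
    x = x0
    for _ in range(warmup):
        x = r * x * (1.0 - x)
    values = []
    for _ in range(samples):
        x = r * x * (1.0 - x)
        values.append(round(x, 4))
    return values

for r in (2.8, 3.2, 3.5, 3.9):        # slowly turn up the "load"
    print('r =', r, '->', iterate_logistic(r))
# r = 2.8 settles to one steady value, r = 3.2 flips between two values,
# r = 3.5 cycles through four, and r = 3.9 never repeats at all.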

In Self-Replicating Information I explained how both living things and software are forms of self-replicating information. Since the origin of software in a universe is contingent upon the emergence of intelligent beings in that universe, and since living things and software are both forms of self-replicating information, for the above three laws to be truly universal, we must find that all universes capable of sustaining intelligent beings are also nonlinear and have a second law of thermodynamics. In Is the Universe Fine-Tuned for Self-Replicating Information? I explained that such must be the case in order for intelligent beings and software to arise.

Nonlinearity and a second law of thermodynamics are both necessary for the emergence of self-replicating information because they are the driving forces behind the Darwinian mechanisms of innovation and natural selection that select for self-replicating information over information that does not self-replicate and allow it to emerge and evolve. The second law guarantees that some level of copying errors will occur whenever self-replicating information replicates, resulting in mutations that, on rare occasions, provide for beneficial innovations. The universe must also be nonlinear, where small coding errors or mutations can cause dramatic effects, in order for natural selection to winnow out undesirable mutations. The second law also guarantees that the low-entropy materials used to encode self-replicating information, and the free energy necessary to replicate it, will be in short supply, since the necessary building materials have a tendency to degrade into high-entropy waste materials, and the free energy tends to degrade into useless heat energy. The scarcity of these items creates a struggle for existence amongst the competing forms of self-replicating information, leading to the Darwinian mechanism of natural selection. After all, if it were not for the second law of thermodynamics and nonlinearity, and food and shelter could spontaneously emerge out of nothing, we would all be fat and happy bacteria today! Finally, because there will always be more microstates for a disordered high-entropy macrostate than there are for an ordered low-entropy macrostate, all possible universes must necessarily have a second law of thermodynamics, no matter what physics they might be running on at the moment. It’s just in the cards that all possible universes must have a second law of thermodynamics. So in order for intelligent beings and software to arise in the first place in a universe, it must necessarily be a universe in which it is impossible to write software with the perfection required to run a simulated universe of the complexity seen in our Universe. I believe this puts a real Catch-22 limitation on computer simulations of universes. If a universe is capable of producing intelligent beings and software, then it is a universe in which it is impossible to write computer simulations of other universes that are not easily exposed as being fraudulent, and I think this is another demonstration of how the Weak Anthropic Principle can be used to eliminate obviously fallacious ideas in cosmology.
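
To make that counting argument concrete, think of a toy universe made of nothing but 100 bits. The short Python sketch below, entirely my own illustration, simply counts microstates with the binomial coefficient: the perfectly ordered macrostate with all 100 bits set to "1" can be realized in exactly one way, while the disordered 50/50 macrostate can be realized in roughly 10^29 ways, and that lopsided counting is all that a second law of thermodynamics really needs.

from math import comb

N = 100                          # a toy universe made of 100 bits

# Count the microstates (distinct bit patterns) for each macrostate, where a
# macrostate is defined only by how many of the bits are set to '1'.
ordered    = comb(N, N)          # all 100 bits set to '1': exactly 1 way
disordered = comb(N, N // 2)     # 50 of the 100 bits set to '1': about 1.01 x 10^29 ways

print('Ordered macrostate   (100 ones):', ordered)
print('Disordered macrostate (50 ones):', disordered)
# A randomly shuffled toy universe is overwhelmingly likely to be found in a
# high-entropy macrostate simply because there are vastly more ways to be one.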

So I would contend that it is safe to say that we are not living in a computer-simulated universe created by alien intelligent beings. On the other hand, I am a Platonist at heart, so in a sense, I view our Universe and the multiverse as a mathematical simulation, as I described in What’s It All About?. Plato believed that the most fundamental form of reality consisted of a set of perfect Forms, like the concept of a circle as a set of points equidistant from a central point. The perfect Form of a circle is a blueprint of perfection, but any circle made by man, no matter how accurate, is simply a debased version of the perfect Form of a circle, which by nature is physically unattainable. I contend that the same can be said of software. No matter how hard intelligent beings may strive for perfection, software will always be a debased form of mathematics, without a level of innate perfection sufficient to run a real universe. For that, you need the real thing. For a Platonist, mathematics exists all on its own as a set of perfect Forms within a higher reality that can only be discovered by intelligent beings through rational mathematical reasoning, while for a realist, mathematics is just a set of abstract imaginary concepts cooked up by our overly active pattern-matching minds as they interact with empirical observations of the Universe and has no concrete physical basis in reality. But for me as a Platonist, the Universe and multiverse are indeed simulations – just not computer simulations.

For further discussions of these matters see Konrad Zuse’s Calculating Space (1967) at:
http://www.mathrix.org/zenil/ZuseCalculatingSpace-GermanZenil.pdf

or Nick Bostrom’s Are You Living in a Computer Simulation? (2002) at:
http://www.simulation-argument.com/simulation.html

or Paul Davies' Cosmic Jackpot: Why Our Universe Is Just Right for Life (2007)

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Monday, July 11, 2011

Why We Need IT Grandparents

I just finished reading The Evolution of Grandparents by Rachel Caspari in the August 2011 issue of Scientific American, and it really touched upon one of my IT hot buttons - the rampant age discrimination to be found within IT. I will be turning 60 years old in October, and last January 23, 2011, I actually did become a grandparent for the very first time. So having started programming computers back in 1972, that indeed makes me an IT grandpa! Age discrimination begins at about age 30 in IT and starts to become a serious problem at age 40, so being nearly 60 years old and still doing IT on a daily basis is a rarity. However, thanks to the Scientific Revolution of the 17th century and the benefits of the medical science that came with it, I still feel like I am about 20 years old, so I would like to work to about age 70, and I think that there is no physical reason why I should not be able to do so. Although I may still feel young at heart, I do feel a little bit wiser from an IT perspective, and I now can easily get by on only four hours of sleep per night for extended periods of time – a handy thing in IT that I certainly could not have done when I was much younger. I am also an empty-nester, with none of the pressures of raising young children, so from a purely biological perspective, I would contend that I am now in an even better position to do IT work in my 60s than I was in my 20s! However, that is not how things work in IT. I am quite aware that if I were to lose my current IT position, there is no way I could possibly find another at age 60+. This is quite a sad situation. If you look at the other sciences and the engineering fields, you will find substantial numbers of practitioners in their 50s and 60s, but not so in computer science or IT. In IT you are supposed to be in your 20s, and then quietly disappear when you hit 30. The problem with this is that IT gains no wisdom over time, and we keep making the same mistakes over and over.

The key point of The Evolution of Grandparents is that up until about 30,000 years ago, humans did not live long enough to become grandparents. The article explains that, like today, humans and their evolutionary predecessors became fertile at about age 15, and then had their first child at about that age. But prior to 30,000 years ago, like the IT workers of today, very few individuals lived past age 30, so very few individuals ever became grandparents. Then, about 30,000 years ago, that all changed. Suddenly humans began to live much longer, and at the same time, a dramatic cultural change occurred as well. The primitive low-tech culture of the Middle Paleolithic, with its simple stone scrapers and flint points, was replaced with the high-tech culture of the Upper Paleolithic, characterized by complex tools and works of art not to be found in the Middle Paleolithic. The author of the article contends that this was no accident. She concludes that a complex feedback loop took place between the longevity of humans and the culture that they were able to pass along. With a dramatic increase in lifespan, older and wiser humans were suddenly able to pass along their accumulated wisdom, and this, in turn, lengthened the lifespan of the average human even more, allowing even more wisdom to be passed along to the next generation.

Again, this is evidence of the complex parasitic/symbiotic relationship that was forged between the genes and the memes about 200,000 years ago when Homo sapiens first appeared. As I outlined in Self-Replicating Information, the arrival of a species on the scene with a very complex neural network allowed the memes to domesticate the minds of Homo sapiens in order to churn out ever-increasing levels of memes, of ever-increasing complexity, and in return, the genes benefited from the technological breakthroughs brought on by the memes of the emerging technological meme-complex that today keeps us all alive. In a similar manner, software entered into a complex parasitic/symbiotic relationship with both the genes and the memes. Ever since Konrad Zuse cranked up his Z3 computer in May of 1941, software has domesticated our minds into churning out ever-increasing levels of software, of ever-increasing complexity, in order to promote the survival of software, and in return, software has provided the genes of Homo sapiens with the means to support a population of 7 billion DNA survival machines that are all infected with the meme-complexes of the world’s cultures. Again, as I pointed out in What’s It All About? and Genes, Memes and Software, it’s really all about self-replicating information in the form of genes, memes, and software all trying to survive in a nonlinear universe that is subject to the second law of thermodynamics, with software rapidly becoming the dominant form of self-replicating information on the planet.

In 1979, I made a career change into IT, after being an exploration geophysicist with Shell and Amoco. Exploration teams are multidisciplinary teams consisting of geologists, geophysicists, petrophysicists, geochemists, and paleontologists all working together towards a common purpose. You see, oil companies try to throw all the science they can muster at trying to figure out what is going on in a prospective basin before they start spending lots of money drilling holes. Now when I moved into Amoco’s IT department, I came into contact with many talented and intelligent IT people, but I was dismayed to discover that, unlike my old exploration teams, there was very little sharing of ideas between computer science and the other sciences. It seemed as though computer science was totally isolated from the other sciences. At the time, I realized that computer science was still a very young science, and that it was more of a technological craft than a science, but that was over 30 years ago! Worse yet, as I began to age within the IT community, I began to realize that IT could not even learn from its own past because of the rampant age discrimination within IT. The trouble with the IT meme-complex is that because it is relatively young and immature, it has not had enough time to discover the benefits of paying heed to the accumulated knowledge of its own past, and is quick to discard it instead. But the problems that I pointed out in The Fundamental Problem of Software have not changed over time and never will, so tragically, we seem destined to keep making the same mistakes over and over again. When I first transitioned from geophysics into IT in 1979, there were no Change Management groups, no DBAs, no UAT testers, no IT Security, no IT Project Management departments, no Source Code Management groups, no Integrated Development Environments, and we didn’t even have IT! I was in the ISD (Information Services Department), and I was a programmer working with punched cards, not a developer with a whole support structure in place, and we ruled the day because we did it all ourselves. But since then we have learned a great number of things the hard way, and it is really a shame to lose all that knowledge because of age discrimination in IT. As I outlined in How to Think Like a Softwarephysicist, it is important to keep an open mind to new ideas, but this applies to old ideas as well!

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston