Sunday, May 10, 2015

Introduction to Softwarephysics

Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance, and support based upon concepts from physics, chemistry, biology, and geology that I have been using on a daily basis for over 35 years as an IT professional. For those of you not in the business, IT is short for Information Technology, commercial computer science. The purpose of softwarephysics is to explain why IT is so difficult, to suggest possible remedies, and to provide a direction for thought. If you are an IT professional, general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology then softwarephysics might be of interest to you, if not in an entirely serious manner, perhaps at least in an entertaining one.

From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT, and spent about 20 years in development. For the past 14 years, I have been in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software, by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance, and relies upon our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 70 years, through the uncoordinated efforts of over 50 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily upon two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based upon real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models upon which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields.

Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In his Principia (1687), Newton outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.

The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving less than 10% of the speed of light and which are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics and for very fast things moving in strong gravitational fields, we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based upon models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide for an effective theory of software behavior that makes useful predictions of software behavior that are applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave and try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.

GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately. In order to do that, it is very important to have very accurate timing measurements. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips onboard the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are further from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day due to the weaker gravitational field out there. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, errors in your computed position would accumulate at a rate of more than 6 miles (10 kilometers) per day. The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics (QED) (1948), which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!
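For readers who like to check the arithmetic, here is a minimal Python sketch that combines the two relativistic clock effects quoted above and converts the resulting timing error into a ranging error. The speed of light is the standard value; this is an illustrative back-of-the-envelope calculation, not how GPS receivers actually compute positions:

# Relativistic clock corrections for a GPS satellite (values taken from the text above)
special_relativity_loss = -7.2e-6    # seconds per day lost due to the satellite's orbital velocity
general_relativity_gain = 45.9e-6    # seconds per day gained due to the weaker gravitational field
speed_of_light = 299792458.0         # meters per second

net_drift_per_day = special_relativity_loss + general_relativity_gain
print("Net clock drift: %.1f microseconds per day" % (net_drift_per_day * 1e6))
# Net clock drift: 38.7 microseconds per day

# An uncorrected timing error translates into a ranging error of roughly c times the time error
range_error_per_day = net_drift_per_day * speed_of_light
print("Uncorrected ranging error: about %.1f kilometers per day" % (range_error_per_day / 1000.0))
# Uncorrected ranging error: about 11.6 kilometers per day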

So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based upon completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based upon models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark upon your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.

If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.

Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 20 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog an unintended consequence arose in my mind as I became profoundly aware of the enormity of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.

But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For example, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 70 years has closely followed the same path as life on Earth over the past 4.0 billion years, in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 70 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and also of Freeman Dyson’s two-stage theory of the origin of life on Earth. In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.

Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10⁹ Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10⁴³ Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. Then, in Model-Dependent Realism - A Positivistic Approach to Realism we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis we at long last explore what software might really be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.

The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that most computer simulations of the physical Universe suffer. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call upon the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.

The Impact of Self-Replicating Information Upon the Planet
One of the key findings of softwarephysics is concerned with the magnitude of the impact upon the planet of self-replicating information.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Note that because the self-replicating autocatalytic metabolic pathways of organic molecules, RNA, and DNA have become so heavily intertwined over time, I now simply refer to them collectively as the “genes”. Over the past 4.0 billion years, the surface of the Earth has been totally reworked by three forms of self-replicating information – the genes, memes and software, with software rapidly becoming the dominant form of self-replicating information on the planet. For more on this see:

A Brief History of Self-Replicating Information
How to Use an Understanding of Self-Replicating Information to Avoid War
How to Use Softwarephysics to Revive Memetics in Academia
Is Self-Replicating Information Inherently Self-Destructive?
Is the Universe Fine-Tuned for Self-Replicating Information?
Self-Replicating Information

Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact upon one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what’s it all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely a working hypothesis still in the works. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:

MoneyPhysics – my impression of the 2008 world financial meltdown.

The Fundamental Problem of Everything – If you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!

What’s It All About? and Genes, Memes and Software – my current working hypothesis on what’s it all about.

How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.

Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.

Some Specifics About These Postings
The postings in this blog are a supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton upon which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read in reverse order from the oldest to the most recent, beginning with my original posting on SoftwarePhysics. In addition, several universities also now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". At this point we will finish up with my original plan for this blog on softwarephysics with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.

For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because in order to get the Introduction to Softwarephysics listed as the first post in the context root of http://softwarephysics.blogspot.com/ I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of http://softwarephysics.blogspot.com/. The side effect of all this is that the real posting date of posts is the date that appears on the post that you get when clicking on the Newer Post link at the bottom left of the posting webpage.

SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Microsoft OneDrive.

SoftwarePhysics 101 – The Physics of Cyberspacetime - Original PowerPoint document

Entropy – A spreadsheet referenced in the document

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, February 10, 2015

The Software Universe as an Implementation of the Mathematical Universe Hypothesis

In my Introduction to Softwarephysics posting, I explained how I first began working on softwarephysics back in 1979 when I switched careers from being an exploration geophysicist, exploring for oil in the Gulf of Suez with Amoco, to being an IT professional supporting Amoco’s Production software in Amoco’s IT department. One very scary Monday morning, I was conducted to my new office cubicle in Amoco’s IT department, and I immediately found myself surrounded by a large number of very strange IT people, all scurrying about in a near state of panic, like the characters in Alice in Wonderland. After 36 years in the IT departments of several major corporations, I can now state with confidence that most corporate IT departments can best be described as “frantic”. This new IT job was a totally alien experience for me, and I immediately thought that I had just made a very dreadful career path mistake, with little hope of salvation. Granted, I had been programming geophysical models for my thesis and for oil companies ever since taking a basic FORTRAN course back in 1972, but that was the full extent of my academic credentials in computer science. It seemed as though these strange IT people had created, and were living in, their own little universe, a Software Universe so to speak. Suddenly, software was determining when I ate, when I slept and when I saw my family. So subconsciously I immediately began to work on softwarephysics to help myself get out of this mess. I figured that if you could apply physics to geology, why not apply physics to software? At the time, in a very pragmatic manner induced by the dire urgency of the situation, I decided to take a very positivistic approach to software out of immediate necessity, in that I would not focus on what software “really was”. Instead, I just wanted to develop a handful of effective theories to help explain how software seemed to behave in order to help myself to better cope with the daily mayhem of life in IT. Recall that an effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. In the famous words of Samuel Johnson - "Depend upon it, sir, when a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully." So at the time, I did not have the luxury to worry much about what software “really” was; I just wanted a set of effective theories to help explain how software seemed to behave, and I needed them quickly in order to survive this new job and to bring home the groceries. But now in 2015, with my retirement from IT looming in the foreseeable future, perhaps it is finally time to worry about what software “really” is and what the Software Universe might actually be made of.

Strangely, it seems that physicists may have been doing this very same thing for the past 400 years by not worrying so much about what’s it all about, and instead, focusing more upon working on a set of effective theories that explain how our Universe seems to work. It is well known that all of our current theories in physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories like QED and QCD of the Standard Model of particle physics are just effective theories that are based upon models of reality, and are not reality itself, because all of these effective theories are approximations. All of these effective theories are intrinsically fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. And all of these effective theories are also based upon completely contradictory models. General Relativity maintains that spacetime is curved by matter, energy and pressure, but that matter and energy are continuous, while quantum mechanics and quantum field theories maintain that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent things and universal for all, with matter and energy being continuous. The important point is that all of these effective theories and scientific models are approximations – they are all fundamentally "wrong", but knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek out better models of reality, and that is what physics is all about.

So it seems that in both physics and softwarephysics there has been a tendency to focus upon compartmentalizing the Universe into an arbitrary collection of different size and energy regimes, and then to come up with a set of effective theories to deal with each regime. Granted, physics has sought a TOE – a Theory of Everything in recent decades, but without much success. But the one unifying characteristic for all of these effective theories in physics and softwarephysics is that they are all fundamentally mathematical in nature. This unifying feature of the effective theories of physics becomes quite evident to students as they do their coursework. Quite frequently, physics students will find themselves studying the very same mathematics in both a physics course and a mathematics course in the very same semester. Why is that?

Back in 2011 in What’s It All About?, I described my current working hypothesis for what’s it all about. I explained that my current working hypothesis was that our multiverse was a form of self-replicating mathematical information that I dubbed the FEU – the Fundamental Essence of the Universe. In that posting I alluded to Eugene Wigner’s oft-cited paper The Unreasonable Effectiveness of Mathematics in the Natural Sciences (1960), which is available at:

http://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html

and to the strange fact that our Universe seems to behave mathematically to an extreme. This observation first began with Galileo, who determined that the distance an object fell was proportional to the square of the time that it fell, and that this distance did not depend upon anything other than the time of the fall. It did not depend upon what the object was made of or how heavy the object was – all objects fell with exactly the same acceleration. For example, an object will fall four times further in twice the time, no matter what the object is made of or how heavy the object might be. As time passed and physics progressed, the mathematics describing our Universe has become ever more complex with each passing decade, until now we have Einstein’s General Theory of Relativity (see Cyberspacetime), with its reliance on a Riemannian metric defined by 4-dimensional tensors representing a large number of nonlinear differential equations that cannot be solved except for the simplest of cases, and the quantum field theories of the Standard Model of particle physics with their reliance on rotations in U(1), SU(2), and SU(3) internal symmetry spaces (see The Foundations of Quantum Computing). The mathematics of string theory is so difficult that it cannot even be uniquely defined at this time. So why is our Universe so mathematical in nature?
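Galileo’s result is easy to state in modern form: an object falling from rest covers a distance d = ½ g t², where g is the local gravitational acceleration and the mass of the object never appears. A minimal Python sketch, using the modern value g = 9.8 m/s² that Galileo of course did not have, shows the “four times further in twice the time” behavior:

# Galileo: the distance fallen from rest grows as the square of the elapsed time
g = 9.8                     # gravitational acceleration in m/s^2 (modern value, assumed here)

def distance_fallen(t):
    return 0.5 * g * t * t  # d = (1/2) g t^2, independent of the object's mass

for t in (1.0, 2.0, 3.0):
    print("after %.0f s: %.1f m" % (t, distance_fallen(t)))
# after 1 s: 4.9 m
# after 2 s: 19.6 m   (4 times further in twice the time)
# after 3 s: 44.1 m   (9 times further in three times the time)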

I just finished reading Our Mathematical Universe: My Quest for the Ultimate Nature of Reality (2014) by Max Tegmark and Love and Math: The Heart of Hidden Reality (2014) by Edward Frenkel, and these two books help to answer that question. Edward Frenkel makes a very good case that mathematicians do not invent mathematics; instead they discover the mathematics that already exists in an abstract manner beyond our Universe in an eternal Platonic plane of existence. Consequently, if we ever do finally make contact with an alien civilization we will most likely find that they have discovered the very same mathematics that we have, like Euler's identity:

e^(iπ) + 1 = 0

which relates the constants e (used to calculate your mortgage payment), the imaginary number i, the π from circles, and the source of the counting numbers 0 and 1. Edward Frenkel is deeply involved in the Langlands Program, an effort to unify all of the various branches of mathematics under a grand TOE – a Theory of Everything for mathematics, similar to the TOE that physicists have been searching for.
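For what it is worth, Python’s standard cmath library will happily confirm Euler’s identity numerically, up to floating-point rounding:

import cmath

# Euler's identity: e^(i*pi) + 1 = 0
result = cmath.exp(1j * cmath.pi) + 1
print(result)                 # prints a complex number within about 1e-16 of zero
print(abs(result) < 1e-12)    # True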

Max Tegmark then takes this assumption that mathematics exists in its own eternal abstract space, independent of physical reality, and proposes that the simplest explanation for why our Universe behaves so mathematically is that our Universe is not simply described by mathematics, but rather that our Universe is mathematics! Basically, our Universe and all that we perceive to be “reality” is just an abstract mathematical structure, and that is why our Universe seems to behave so mathematically. Max Tegmark first proposed this intriguing idea in The Mathematical Universe (2007) at http://arxiv.org/pdf/0704.0646v2

The Mathematical Universe Hypothesis – MUH
Max Tegmark begins his paper with two hypotheses:

External Reality Hypothesis (ERH): There exists an external physical reality completely independent of us humans.

Mathematical Universe Hypothesis (MUH): Our external physical reality is a mathematical structure.

Most physicists easily subscribe to the first hypothesis (see Model-Dependent Realism - A Positivistic Approach to Realism for further discussion), and spend their entire careers working on effective theories to explain how the external reality behaves. Max Tegmark explains that these effective theories they produce usually consist of two components – mathematical equations and human “baggage”. The human “baggage” consists of the words that humans use to explain how the mathematical equations are connected to what we humans observe and intuitively understand. Humans use words like atoms, molecules, cells and organisms to make sense of the mathematical equations, and to allude to things without having to deal with the underlying mathematics that reveal their true fundamental essences in the ERH. These words then become a quick and convenient notational shorthand that allow humans to communicate with each other. But Max Tegmark contends that this human “baggage” is totally unnecessary, and often times gets in the way of a true understanding of the ERH because the “baggage” words can insert a good deal of subconscious human emotion into the discussion. For example, nearly all human cultures initially develop mythologies to help explain how the Universe operates, and these mythologies are usually composed 100% of human “baggage”. But even in science, words like “atom” can raise all sorts of misleading images in one’s mind of little colored electron balls orbiting little colored proton and neutron balls, and the words “electron”, “proton” and “neutron” can do even further damage. Max Tegmark then goes on to ask if there is a way to describe the ERH without any human “baggage” at all by making our description of the ERH totally abstract by only using symbols with no preconceived meanings whatsoever. In such a description of the ERH, the entities in external reality and the relations between them would become completely abstract, forcing any words or other symbols used to denote them to become mere labels with no preconceived meanings whatsoever. A mathematical structure is precisely this: abstract entities with relations between them. A mathematical structure simply consists of abstract entities with relations between these abstract entities, like the set of integers or real numbers and the mathematical operations that can be performed on them. For example, in his paper, Max Tegmark describes the mathematical structure known as the group with two elements. We could give these two elements the labels “0” and “1” and define the following relationships between them:

0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 0

Note that since this mathematical structure only contains two elements and there is no concept of a “2” we have to wrap the relationship of 1 + 1 around back to 0. Think of a clock with only a “0” and a “1” on its face, with the “0” at the top of the clock face and the “1” at the bottom. When adding a “1” to a clock that has its hand already pointing down to “1”, the clock hand returns back to the “0” at the top of the clock face. In mathematics this is known as addition modulo two. But Max Tegmark then goes on to explain that the above mathematical structure could also be expressed as:

Even and even make even
Even and odd make odd
Odd and even make odd
Odd and odd make even

since the elements and relationships within this simple mathematical structure have no intrinsic meaning and are purely abstract in nature. In Love and Math Edward Frenkel describes additional examples of more complex mathematical structures, such as the counting numbers 1, 2, 3, 4…. and the relations that define addition, subtraction, multiplication and division. Next he throws in the negative numbers to define the integers …. –3, -2, -1, 0, 1, 2, 3 … and relations that define addition, subtraction, multiplication and division amongst them. Divisions of the integers by 10, 100, 1000… lead us to decimals like 1.3467 and divisions of integers by integers, like 345/259, naturally lead us to the rational numbers, and all of these differing mathematical structures can have the same set of relationships of addition, subtraction, multiplication and division. Next we can introduce the irrational numbers, like the square root of 2 as the distance along the hypotenuse of a 1 x 1 right triangle in flat space, and also the transcendental numbers like π and e to yield the real numbers. To bring in the complex numbers we simply define “i” to be the square root of –1:
i  =  √(-1)

(or in other words, i² = -1)

Figure 1 – The square root of 2 can be defined as the distance along the hypotenuse of a 1 x 1 right triangle in a flat space and is approximately 1.4142136… It cannot be expressed as a terminating or repeating decimal, nor as the ratio of two integers, so it is not a rational number, yet it logically still exists “out there” someplace.

The point that Edward Frenkel makes in Love and Math is that even though these mathematical structures are purely abstract in nature, they do seem to really “exist” out there in their own abstract plane of existence and thus can be discovered by any sentient beings willing to make the effort. Additionally, because proper mathematical structures also have to be self-consistent, and cannot contradict themselves, they represent a small subset of all possible ideas. This is important because, as we all know, much of human thought is not self-consistent and is indeed self-contradictory in nature instead. So if we think of the set of all possible mathematical structures and conclude that each is equivalent to a physical universe, we end up with a very large number of possible universes that are members of a mathematical multiverse.
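To make the idea of a “baggage”-free mathematical structure concrete, here is a tiny Python sketch of Max Tegmark’s group with two elements, showing that the “0”/“1” table above and the “even”/“odd” table are one and the same abstract structure, namely addition modulo two:

# The group with two elements, written twice with different "baggage" labels
add_mod_two = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

relabel = {0: "even", 1: "odd"}   # a purely cosmetic relabeling of the same structure
parity_table = {(relabel[a], relabel[b]): relabel[c] for (a, b), c in add_mod_two.items()}

print(add_mod_two[(1, 1)])            # 0
print(parity_table[("odd", "odd")])   # even -- the same relation, just different labels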

In order to explore this vast multiverse of mathematical structures, in both his paper and book, Max Tegmark classifies multiverses into four levels – Level I, Level II, Level III and finally a Level IV multiverse composed of all possible mathematical structures. To understand his reasoning we need some background in what has been going on in cosmology and physics for the last 35 years.

A Multitude of Multiverses
This section will provide a brief overview of the concept of the multiverse that seems to be gelling in both modern cosmology and physics. The following books and articles provide further details on this grand synthesis that has been taking place:

1. The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos (2011) by Brian Greene.

2. Many Worlds in One: The Search for Other Universes (2007) by Alex Vilenkin.

3. The Cosmic Landscape: String Theory and the Illusion of Intelligent Design (2006) by Leonard Susskind.

4. The Self-Reproducing Inflationary Universe Scientific American (1994) by Andrei Linde at http://www.stanford.edu/%7Ealinde/1032226.pdf

5. Andrei Linde’s website at http://web.stanford.edu/~alinde/ and specifically this paper: Inflation, Quantum Cosmology and the Anthropic Principle (2002) http://arxiv.org/pdf/hep-th/0211048

Back in the 1920s we first discovered that our Universe was expanding, meaning that the space between galaxy clusters was observed to be steadily increasing. If the Universe is indeed expanding, and you simply reverse time, you can then deduce that everything in the Universe must have been a lot closer together in the past because there was less space between things back then. Carried to its logical conclusion, that means everything must have started off from a very small region, or possibly even from a single point. This realization led to the Big Bang model of cosmology, where all of the space, time, matter and energy of the Universe was created at a single point about 13.8 billion years ago and then expanded into what we see today. But there were several problems with the original Big Bang model:

1. The Flatness Problem – The spacetime of our Universe seems to be flat, meaning that it can be described by the Euclidean geometry that you learned in high school. In Euclidean geometry parallel lines do not meet, and the angles in a triangle always add up to 180°. People came to this conclusion by using the General Theory of Relativity. They observed that when all of the mass in the Universe that could be seen was added up, it came to about 4% of the critical density needed to make spacetime flat, and that was pretty close to 100% at the time because most of the numbers in cosmology back then were only good to an order of magnitude. The problem was that if the Universe were nearly flat today, then it had to be very flat at the time of the Big Bang, accurate to about one part in 10⁶²! Why would just the right amount of matter and energy be created at the time of the Big Bang to make the Universe so flat?

2. The Horizon Problem – If you look to your left you can detect photons from the CBR - Cosmic Background Radiation that were generated about 380,000 years after the Big Bang. These photons have been traveling for 13.8 billion years and were generated by atoms that are now about 46 billion light years away because the Universe has been expanding while the photons were traveling to you, and the atoms were carried along with the expanding space. Now if you look to your right, you can also detect CBR photons that have been traveling for 13.8 billion years and that were also generated by atoms that are now 46 billion light years to your right. The problem is that if you work out the mathematics, the atoms on your left and right that generated these photons 380,000 years after the Big Bang were about 38 million light years apart at the time that they generated the CBR photons. So your left-side atoms and right-side atoms never had time to come into thermal equilibrium with each other before they emitted the photons because they were about 100 times further apart than the distance light could have covered in 380,000 years (see the short arithmetic sketch just after this list). And yet the photons coming into your detector, from both your left and right sides, now have the very same black body wavelength distribution of 2.725 K accurate to one part in 100,000. How is that possible? It would be like all of the members of your high school class suddenly showing up at a restaurant without ever communicating with each other. That’s called a high school reunion, and it takes a lot of planning and communications to pull one off successfully.

3. The Magnetic-Monopole Problem – At the time of the Big Bang temperatures were so high that, according to the proposed Grand Unified Theories of physics that people were working on in the 1970s, very heavy and stable magnetic-monopole particles should have been generated and should today be as common as electrons. Unlike a bar magnet that has both a North and South pole, these magnetic-monopole particles would only have one North or South pole. The problem is that we do not see such particles today.
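Here is the short arithmetic sketch promised above for the Horizon Problem, written as a few lines of Python using the figures quoted in item 2. It is a rough back-of-the-envelope check that ignores the details of how light actually travels in an expanding universe:

# Horizon problem, back of the envelope (figures from item 2 above)
age_at_recombination = 380_000      # years after the Big Bang when the CBR photons were released
separation_then = 38_000_000        # light years separating the left-side and right-side atoms

light_travel_horizon = age_at_recombination * 1.0   # light years covered at one light year per year
print(separation_then / light_travel_horizon)       # 100.0 -- the regions were ~100 light-travel times apart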

In 1980, Alan Guth was working on a solution for the magnetic-monopole problem. During his research, he came to the sudden realization that a positive scalar field, now known as the Inflaton field, could cause spacetime to rapidly expand in an exponential manner and carry off the magnetic-monopoles to far-flung distant regions of the Universe, and that would explain why we do not see them today. He called this effect “Inflation” and proposed that it operated over a very brief period from 10⁻³⁶ to 10⁻³³ seconds after the Big Bang. But suddenly Alan Guth also realized that the rapid inflation could also fix the Flatness and Horizon problems as well. Inflation proposes that our Universe appears to be extraordinarily flat because the rapid expansion would have smoothed out any bumps or curvatures that it might have started out with. And the reason why CBR photons coming in from your left match the temperature of the CBR photons coming in from your right is that the atoms that released them were in thermal equilibrium just before inflation dramatically blew them millions of light years apart.

But why does this mystical scalar Inflaton field cause the exponential rapid expansion of space known as Inflation and what exactly is a scalar field anyway? For that we need to go back to quantum field theories (see The Foundations of Quantum Computing for details). Recall that in the quantum field theories of particle physics that form the Standard Model, everything is a quantum field. The normal matter particles that we are familiar with, known as fermion particles, are composed of electron quantum fields, up quark quantum fields, down quark quantum fields, and neutrino quantum fields. These matter quantum fields are smeared out across the entire Universe, but the strength of the fields at any given spot tell us how probable it is to find an electron, up quark, down quark or neutrino at that spot. Unfortunately, we cannot directly detect these quantum fields. Instead, we detect electrons, protons (2 up quarks with 1 down quark), neutrons (1 up quark with two down quarks) or neutrinos when we try to detect the quantum fields themselves. That’s why they are called quantum fields, as opposed to the plain old electric and magnetic fields of the 19th century, because they are quantized into particles when measured. There are also some other exotic quantum fields, like the charm, strange, top and bottom quark fields too, but we only briefly see them in particle accelerators as those fields generate particles in collisions. Particle accelerators also excite lepton muon and tauon quantum fields in collisions too. In addition to the fermion matter quantum fields, we also have quantum fields for the force carrying boson particles. So we have photon quantum fields, Z0 quantum fields, W+ quantum fields, W- quantum fields and gluon quantum fields as well, and again, we cannot detect these quantum fields directly. When we try to do that we detect the corresponding particle. Now the proposed scalar Inflaton field would have just been another quantum field with a positive energy.

Figure 2 – The Standard Model of particle physics is composed of quantum fields that we observe as particles. The matter particles are called fermions and have a spin of ½ . The force carrying particles are called bosons and they have a spin of 1. The Higgs boson has a spin of 0 and is a scalar field like the proposed scalar Inflaton field.

Next we need to bring in the General Theory of Relativity (1915) (see Cyberspacetime for details) to explain why a scalar quantum field with a positive energy could cause space to expand so rapidly. The General Theory of Relativity is Einstein’s theory of gravity, which, by the way, is the only force of nature not covered by the Standard Model of particle physics. In fact, the General Theory of Relativity is not a quantum theory at all, it is a classical theory that does not have the quantized characteristics of a quantum field theory. Currently, we do not have a theory of quantum gravity, so we have to make do with the General Theory of Relativity for now. Prior to the General Theory of Relativity, we used Newton’s much simpler theory of gravity, and we still use Newton’s theory most of the time today because it works quite well for most practical problems, like sending a probe to Saturn. Now for Newton a ton of lead would produce the same amount of gravity no matter what. For example, a ton of hot lead under great pressure would produce exactly the same gravitational field as a ton of cold lead under no pressure at all. But that is not true for Einstein’s theory of gravity. In the General Theory of Relativity, matter, energy and pressure can all distort spacetime, and it is the distortion of spacetime that produces the illusion of gravity. So for Einstein a ton of hot lead would produce more gravity than a ton of cold lead because the hot lead has more thermal energy than the cold lead and the extra thermal energy would produce some additional gravity. And the same goes for a ton of lead under great pressure. The pressure within the lead would also produce some additional gravity. Now the really strange thing about the General Theory of Relativity is that pressure can be both positive and negative, and consequently, can add to the gravity that an object produces or reduce the amount of gravity that an object produces. In the General Theory of Relativity when you stretch a rubber band, it produces a little less gravity than you would expect because the stretched rubber band has negative pressure. It would also produce a little more gravity because of the potential energy that it stored due to being stretched, but the negative pressure of the tension in the rubber band would still reduce the total amount of gravity produced. The end result is that negative pressure produces negative gravity and negative gravity is just what we need to make space expand. Now if the scalar Inflaton field had positive energy, the positive energy would produce positive gravity, and that would cause space to contract. But what if the Inflaton field was like our stretched rubber band and also had lots of negative pressure at the same time? In that case the negative gravity produced by the negative pressure could overwhelm the positive gravity produced by the positive energy of the Inflaton field, causing space to rapidly expand, and as more space was created, it would also come with the built-in scalar Inflaton field that was driving the rapid expansion of space. Clearly, this could lead to a runaway condition, with the Universe rapidly inflating its brains out, as more and more space was created containing more and more of the scalar Inflaton field.

But where would all of this positive energy of the Inflaton field come from as it spiraled out of control? Aren’t we violating the first law of thermodynamics and the conservation of energy by having a constant Inflaton field with positive energy continuously growing in a rapidly expanding Universe? Now here comes the really neat part. The positive energy of the scalar Inflaton field also creates an ordinary gravitational field because the General Theory of Relativity states that matter, energy and pressure distort spacetime in such a manner as to create the illusion of a gravitational field, and gravitational fields have negative energy. So the positive energy of the scalar Inflaton field is offset by the negative energy of the gravitational field that it generates as it comes into existence. The idea that gravitational fields contain negative energy sounds a bit strange and is rather hard to imagine, so let’s delve into it a bit further. The easiest way to see this is to use Newton’s classical theory of gravity. Like I said, most times we do not need to use the General Theory of Relativity for most problems dealing with gravity, and this is a perfect example. So let’s imagine a large spherical metal spaceship way out in space. The walls of this spherical metal spaceship have a certain thickness, and the spherical spaceship itself has a certain radius. Now according to Newton’s theory of gravity, if you are outside of the spaceship, the gravitational field that the spaceship creates is exactly like the gravitational field that would be created if all of the mass of the metal walls were concentrated at a single point at the center of the spaceship. An external observer would see gravitational field lines all emerging from the spherical spaceship as if all of its mass was concentrated at its center (see Figure 3).

Figure 3 – Outside of the spaceship the gravitational field looks like it was generated by a single point at the center of the spaceship with a mass equal to the total mass of the metal walls.

The really strange thing about Newtonian gravity is that if you were inside of the spaceship, there would be no gravitational field at all! That is because no matter where you are inside of the spaceship, you will be tugged equally in all directions by the spherical metal walls of the spaceship (see Figure 4). So the mental picture I am trying to paint here consists of a gravitational field on the outside of the spaceship pointing away from the center of the spaceship in all directions as if all of the mass of the spaceship were at its center, but inside of the spaceship there is no gravitational field at all.

Figure 4 – No matter where you are inside of the spaceship the metal walls tug on you equally in all directions so there is no gravity at all on the inside and the gravitational field is zero at all points within the spaceship.

Now suppose we allowed the spherical walls of the spaceship to contract by having segments of the walls slide past each other like sliding doors as the walls contracted. This would allow the segments of the spaceship walls to fall towards the center of the spaceship. Now as the wall segments fell in towards the center of the spaceship, they would gain kinetic energy that could be used for some useful task. For example, it is a falling weight on a chain that drives a grandfather clock. Now if we look at our contracting spaceship from the outside, we will see that as the spherical walls contract and pick up kinetic energy, an additional volume of gravitational field will have been created just outside of the spaceship because the spaceship has gotten smaller. The net effect is that as the spaceship walls pick up positive kinetic energy, some volume of the Universe outside of the shrinking spaceship that originally was inside of the spaceship with no gravitational field now has a gravitational field. So where did the positive kinetic energy of the falling spaceship walls come from? Well, it came from creating some volume of space with a compensating gravitational field with negative energy, and that is why gravitational fields contain negative energy. This is where Alan Guth gets his famous "ultimate free lunch". Once a scalar Inflaton field gets started, it’s like opening a large cosmic zipper in spacetime. As the zipper unzips, it creates a scalar Inflaton field with positive energy, with a large negative pressure that drives Inflation, and also a compensating gravitational field with negative energy. The positive energy of the Inflaton field exactly matches the negative energy of the gravitational field, and the two add up to precisely zero net energy. But the large negative pressure of the scalar Inflaton field continues to cause the Universe to explosively grow in an exponential manner.
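Here is a back-of-the-envelope Python sketch of my own for the shrinking-spaceship bookkeeping. The Newtonian self-energy formula for a thin spherical shell is standard textbook physics; the spaceship’s mass and radii are made-up numbers purely for illustration.

```python
# In Newtonian gravity, a thin spherical shell of mass M and radius R has a
# gravitational self-energy of
#     U(R) = -G * M**2 / (2 * R)
# Letting the shell contract from R1 down to R2 releases kinetic energy equal
# to U(R1) - U(R2).  The positive kinetic energy of the falling walls is paid
# for by the newly created volume of gravitational field, which carries the
# corresponding negative energy.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2

def shell_self_energy(M, R):
    """Newtonian gravitational self-energy (J) of a thin spherical shell."""
    return -G * M**2 / (2.0 * R)

M  = 1.0e6             # hypothetical mass of the spaceship walls, kg
R1 = 100.0             # initial radius, m
R2 = 50.0              # radius after the walls have slid inward, m

kinetic_energy = shell_self_energy(M, R1) - shell_self_energy(M, R2)
print(f"Kinetic energy gained by the falling walls:   {kinetic_energy:.2e} J")
print(f"Energy now stored in the gravitational field: {-kinetic_energy:.2e} J")
```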

Now how do you get one of these Inflaton fields going? Well, calculations show that all you need is about 0.01 mg of mass at the Planck density. The Planck density is about 10^93 times the density of water, so that is a little hard to attain, but thanks to quantum mechanics it is not impossible. Recall that in quantum mechanics, particles and antiparticles are constantly coming into existence due to fluctuations in their quantum fields, so if you wait long enough you will get a quantum fluctuation of 0.01 mg at the Planck density. To put that into perspective, a U.S. 5-cent nickel has a mass that is 500,000 times greater than 0.01 mg. The net effect of all this inflation is that shortly after the Big Bang we end up with an infinitely large Universe made of “nothing”. Indeed, the WMAP and Planck satellite mappings of the CBR now show that our Universe is flat to within an error of less than 1%. That means that the total amount of positive energy in our Universe, arising from its matter and energy content, exactly matches the negative gravitational energy of all that stuff pulling on itself. Our Universe also seems to have no net momentum, angular momentum, electrical charge or color charge, so it really does appear to be made of “nothing”. It’s like adding up all of the real numbers, both positive and negative, and obtaining a sum of exactly zero.
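Those numbers are easy to check with a few lines of Python. The Planck density (roughly 5 × 10^96 kg/m³) and the 5-gram mass of a U.S. nickel are standard published figures that I am supplying here; the 0.01 mg seed mass is the value quoted above.

```python
# Quick arithmetic check of the figures quoted in the text.
seed_mass      = 0.01e-6          # 0.01 mg expressed in kg
nickel_mass    = 5.0e-3           # a U.S. nickel, kg
planck_density = 5.2e96           # kg/m^3, roughly 10^93 times water
water_density  = 1000.0           # kg/m^3

print(f"Nickel / seed mass ratio:       {nickel_mass / seed_mass:.0e}")         # ~5e5
print(f"Planck / water density ratio:   {planck_density / water_density:.0e}")  # ~5e93
print(f"Volume of the seed at Planck density: {seed_mass / planck_density:.1e} m^3")
```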

Now if our Universe is infinite, and the small portion of the Universe that we can see can only be in a finite number of quantum states, then there must also be an infinite number of exact copies of our visible Universe out there in the infinite Universe. And there are also an infinite number of close-call Universes that are near-perfect copies. Remember, we can only see photons that have been traveling for the past 13.8 billion years, and those photons came from atoms that are now about 46 billion light years away from us because the Universe has expanded and carried those atoms away from us during those 13.8 billion years. So our visible Universe only has a radius of about 46 billion light years, and that is quite puny in an infinite Universe. People have actually calculated approximately how far away our closest twin Universe should be, and naturally it is too far away for humans to really conceive of, but it is a finite distance away from us in an infinite Universe. Max Tegmark calls this infinite collection of visible universes a Level I multiverse.
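The 46-billion-light-year figure can be checked with a short numerical integration of the standard Friedmann model. The cosmological parameters below are commonly published values that I am plugging in myself, not something taken from this post.

```python
# Rough check that light traveling for 13.8 billion years comes from matter
# that is now about 46 billion light years away (the comoving particle horizon).
import math

c       = 2.998e5     # speed of light, km/s
H0      = 67.7        # Hubble constant, km/s/Mpc
Omega_m = 0.31        # matter
Omega_r = 9.0e-5      # radiation
Omega_L = 1.0 - Omega_m - Omega_r   # dark energy, assuming a flat universe

def H(z):
    """Hubble rate at redshift z, km/s/Mpc."""
    return H0 * math.sqrt(Omega_r*(1+z)**4 + Omega_m*(1+z)**3 + Omega_L)

# Comoving distance D = c * integral of dz / H(z), crude trapezoid rule.
z_max, steps = 1.0e5, 1_000_000
dz = z_max / steps
D_Mpc = 0.5 * dz * (c/H(0.0) + c/H(z_max)) + dz * sum(c / H(i*dz) for i in range(1, steps))
D_Gly = D_Mpc * 3.2616e-3    # 1 Mpc = 3.2616 million light years

print(f"Comoving radius of the visible Universe: about {D_Gly:.0f} billion light years")
```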

Now the hard thing about inflation is not getting inflation going; quantum fluctuations will take care of that. The hard part is getting inflation to stop, because once it gets going there does not seem to be any good reason for it to ever stop. It should just keep on creating more and more space containing the Inflaton field with a positive energy density and a corresponding gravitational field with a negative energy density that zeros out the Inflaton field’s positive energy. But don’t forget that the Inflaton field is a quantum field. Since the Inflaton field has a positive energy, it can decay into other quantum fields with a lower level of energy and turn the surplus energy into something else, like you and me. In general, energy-rich quantum fields will decay into other quantum fields with a lower energy with a certain half-life, and as the positive energy of the Inflaton field decayed, it created all sorts of other quantum fields, like the electron and quark quantum fields that we are made of. But what if space inflates faster than the Inflaton field can decay? Then we could have little spots in the rapidly expanding space where the Inflaton field decays into matter and energy and inflation stops. To the inhabitants of these little spots, it would look like a Big Bang creation of their Universe. In between these little spots, where the Inflaton field decayed with a Big Bang into a universe, inflation would continue on and would rapidly separate these little universes from each other much faster than the speed of light (see Figure 5).
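Here is a toy Python sketch of my own for that competition between expansion and decay. The numbers are arbitrary, and this is only a cartoon of the exponential bookkeeping, not a real inflationary calculation.

```python
# Toy model of eternal inflation: the comoving volume still inflating is
# multiplied by exp(3*H*t) through expansion but depleted by exp(-Gamma*t)
# through decay of the Inflaton field.  If 3*H > Gamma the inflating volume
# keeps growing forever, even though every individual region eventually
# decays into a Big Bang of its own.
import math

def inflating_volume(H, Gamma, t):
    """Comoving volume still inflating at time t, starting from one unit of volume."""
    return math.exp((3.0 * H - Gamma) * t)

H, t = 1.0, 10.0                    # arbitrary units
for Gamma in (2.0, 3.0, 4.0):       # decay rate below, equal to, and above 3*H
    print(f"Gamma = {Gamma}: inflating volume at t = {t} is {inflating_volume(H, Gamma, t):.2e}")
```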

In 1983 Andrei Linde proposed just such a model that he called Eternal Chaotic Inflation. Not only could inflation continue on for an infinitely long time into the future, it could also have been going on for an infinitely long time in the past, and even the little bubble universes that it produced could sprout additional regions of inflating space via quantum fluctuations that generated the necessary 0.01 mg of Planck density matter. Thus Eternal Chaotic Inflation produces an eternal fractal-like structure (Figure 6).

Figure 5 – In a Level II multiverse the Inflaton field decays into an infinite number of Level I multiverses that continue to expand at a very much slower rate because the Inflaton field has decayed into the matter and energy that the occupants perceive as their Big Bang. Inflation continues on between the little spots and causes the little Level I multiverses to rapidly fly apart.

Figure 6 – Andrei Linde’s Eternal Chaotic Inflation creates an eternal multiverse that has always existed and continues to spawn additional areas of inflating space in a fractal-like manner.

That sort of brings us back full circle to Fred Hoyle’s steady state model of cosmology, where our Universe is eternal and constantly creating additional matter as it expands. In Eternal Chaotic Inflation the Inflaton field has always existed and has dramatically created an infinite multiverse of rapidly growing space that contains an infinite number of little Level I multiverses made of “nothing” where the Inflaton field decayed into matter and energy, and each of the little Level I multiverses is infinite in size too. Max Tegmark labels such a multiverse a Level II multiverse.

But how could each Level I multiverse generated within the Level II multiverse be infinitely large too? For that we have to return to relativity theory (see Is Information Real? and Cyberspacetime for details). Recall that in relativity theory space and time get all mixed up together into a spacetime, and that my time can become your space and vice versa. From the perspective of an observer outside of a bubble Level I multiverse where the Inflaton field has decayed into matter and energy, the bubble universe has a finite spatial extent that continues to slowly expand, but at a much slower rate than those portions of the Level II multiverse where the Inflaton field still exists and keeps inflation going. Such an observer would also see the bubble Level I multiverse continue to expand forever, and would therefore maintain that the bubble Level I multiverse has a finite extent in space but an infinite extent in time. An observer within the bubble Level I multiverse, however, would see something quite different. They would not see the walls of the bubble as existing in space, but as existing in time instead. To them, the bubble wall would be infinite in spatial extent and would mark the time of their Big Bang. For them, every point along the bubble wall would look like the moment of the Big Bang, and the wall itself would be infinitely large. It’s the “my time” is “your space” thing in relativity theory all over again, but this time at a cosmic scale.

Figure 7 – From outside of the bubble Level I multiverses it looks like the bubbles are finite in space but infinite in time. From inside a bubble Level I multiverse it looks like the walls of the bubble define the time of the bubble’s Big Bang and the bubble wall is spatially infinite.

So in a Level II multiverse we now have an infinite number of Level I multiverses as before, but with a difference. In a Level II multiverse each bubble Level I multiverse can decay into a multiverse with its own kind of physics. Here is the reason why. When the scalar Inflaton field decays away and inflation stops in a bubble Level I multiverse, other quantum fields take over and those fields will generate a vacuum with a certain vacuum energy. This vacuum energy can be positive, negative or zero. If the remaining vacuum energy is positive, the bubble Level I multiverse will continue to expand, but at a much slower rate than the inflating portions of the Level II multiverse where the Inflaton field still exists. If the vacuum energy is negative, the bubble Level I multiverse will collapse into nothingness, and if the vacuum energy is zero, the bubble Level I multiverse will not expand or contract at all. Each vacuum energy will have its own kind of physics because the vacuum energy will be defined by the quantum fields and particles it contains. For example, we now know that the vacuum energy of our Universe is slightly positive. This is because we discovered in 1998 that the expansion rate of our Universe is not decreasing as was expected. Instead, it was found to be increasing. This fact has been confirmed by numerous observations, and has led to the realization that about 72% of our Universe is composed of dark energy. The dark energy is just the positive vacuum energy of our Universe that is driving its expansion rate to increase.
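Here is a toy Python integration of my own showing how the sign of the leftover vacuum energy decides the fate of a bubble universe; the units and numbers are arbitrary and only meant to show the trend described above.

```python
# For a vacuum-dominated universe the acceleration equation reduces to
#     a_ddot = (8*pi*G/3) * rho_vac * a
# Positive rho_vac -> accelerating expansion, negative -> recollapse,
# zero -> the scale factor just sits there.
import math

def evolve(rho_vac, a0=1.0, dt=0.001, t_end=20.0):
    """Crude integration of a_ddot = (8*pi/3)*rho_vac*a, with G absorbed into the units."""
    k = (8.0 * math.pi / 3.0) * rho_vac
    a, v, t = a0, 0.0, 0.0
    while t < t_end and a > 0.0:
        v += k * a * dt     # update the "velocity" of the scale factor
        a += v * dt         # update the scale factor itself
        t += dt
    return a, t

for rho_vac in (+0.1, 0.0, -0.1):
    a, t = evolve(rho_vac)
    if a <= 0.0:
        print(f"rho_vac = {rho_vac:+.1f}: recollapsed at t = {t:.2f}")
    else:
        print(f"rho_vac = {rho_vac:+.1f}: a = {a:.2e} at t = {t:.2f}")
```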

The fact that our Universe has a small positive vacuum energy causes some theoretical problems. Recall that the quantum field theories of the Standard Model of particle physics maintain that our Universe is filled with a sea of virtual particles constantly popping into and out of existence. In fact, those quantum field theories predict that the virtual matter fermion particles should produce a positive vacuum energy that is about 10^123 times greater than the very weak positive vacuum energy that we have observed. On the other hand, the virtual force-carrying boson particles should create a negative vacuum energy that is also about 10^123 times larger in magnitude than the vacuum energy that we have observed. Now before we discovered the very weak positive vacuum energy of our Universe that is created by the dark energy, everybody thought that the positive energy of the fermions exactly cancelled out the negative energy of the bosons to yield a vacuum energy of exactly zero. Physicists are never much bothered by the numbers “0”, “1” or even infinity, but it has proven quite difficult to explain how a positive number with 123 digits could cancel a negative number with 123 digits so nearly exactly that they differ only in the very last digit. One possibility would be to augment the Standard Model of particle physics with SUSY – the Supersymmetric Standard Model. In SUSY each fermion has a boson twin with exactly the same mass. For example, in pure SUSY the electron fermion particle has a boson twin called the selectron with exactly the same mass. Since the number of fermions would then exactly match the number of bosons in the Universe, their positive and negative contributions to the vacuum energy would exactly cancel out to zero. The only problem with pure SUSY is that selectrons should be just as common as electrons in our Universe, and we have yet to find any. Actually, we could never go looking for selectrons in a pure SUSY universe in the first place. Because the selectrons and electrons would have exactly the same mass, the electrons in atoms could easily turn into selectrons and speed away. That would quickly put an end to chemistry in our Universe and put an end to us as well. So clearly a pure SUSY universe with a zero vacuum energy could not produce intelligent self-aware entities like ourselves. This is another example of the weak Anthropic Principle in action. Our existence as sentient beings rules out lots of physics that is not capable of supporting sentient beings. Thus in a Level II multiverse we should only expect to find ourselves in those bubble Level I multiverses with a small, but nonzero, vacuum energy. For example, the current thinking is that we live in a slightly dented universe with a SUSY that is broken and not exactly symmetric. At the LHC they are looking for SUSY particles, and one of the hopes is that some of the lighter SUSY particles make up the dark matter of our Universe, which accounts for about 24% of the critical density that makes our Universe flat (72% dark energy, 24% dark matter and 4% normal matter).
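To get a feel for how unnatural that cancellation looks, here is a trivial Python illustration of my own, with a made-up 123-digit number standing in for the fermion contribution.

```python
# A made-up 123-digit positive number (the "fermion" contribution) and a
# negative number (the "boson" contribution) that cancel in every digit
# except the very last one -- the kind of fine-tuning described in the text.
import random
random.seed(0)

fermion_energy = random.randrange(10**122, 10**123)   # a random 123-digit positive integer
boson_energy   = -(fermion_energy - 1)                # cancels every digit but the last

print(len(str(fermion_energy)))                       # 123 digits
print(fermion_energy + boson_energy)                  # the residue: 1
print(f"residue / contribution ~ {1.0 / fermion_energy:.1e}")
```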

In The Cosmic Landscape: String Theory and the Illusion of Intelligent Design, Leonard Susskind thinks he has an explanation for this fantastic cancellation of the huge positive and negative energy densities of the fermions and bosons in our Universe. Leonard Susskind is at Stanford with Andrei Linde, and together they make a formidable team. Leonard Susskind was one of the originators of string theory, which has now matured into M-theory. The original purpose of string theory was to overcome some of the basic problems with quantum field theory. Again, in the quantum field theories of the Standard Model, when we try to measure a quantum field we find a particle instead, and the particle is supposed to have zero size. For example, when we try to measure an electron quantum field, we find an electron instead, and the electron is supposed to have a radius of zero and is not supposed to be made of anything smaller. Yet the electron has an electrical charge of –1, a mass of 0.511 MeV and a spin of ½. How can something that takes up no space at all contain all of that? Where in the world would it put it all? But it gets much worse. Let's explore this further by taking a classical view of an electron. Imagine a very small spherical body of electron stuff that has a charge of –1 and a mass of 0.511 MeV. Now let the radius of the sphere go to zero. The electric field at the surface of the sphere is given by E = kq/R², and 1/R² grows without bound as R gets closer to 0. So as the sphere of charged electron stuff gets smaller and smaller, the electric field surrounding the electron goes to infinity. Since an electric field contains energy, the energy of the electric field goes to infinity as well, and since E = mc² that means that the mass of the electron should also go to infinity, and that is clearly not what we observe. String theory gets around these problems by imagining that electrons are not point particles. Instead they have a very small, but finite extent, because they are made of little vibrating strings of mathematics. That gets rid of all of the problems with infinities in quantum field theories. Originally, it was hoped that string theory would produce values for the many constants that have to be plugged into the Standard Model by hand. For example, the Standard Model does not predict that electrons have a mass of 0.511 MeV. That number, and lots more, have to be measured first and then plugged into the theory. Unfortunately, string theory never came up with a set of unique numbers for the Standard Model. Instead, it was found that string theory could yield a nearly infinite number of possible universes, all with differing physics. Currently, it is estimated that string theory could produce at least 10^500 different kinds of universes, and probably lots more.
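Here is a short Python sketch of that runaway field energy, using the standard textbook formula for the electrostatic energy stored outside a charged sphere, U(R) = k e²/(2R); the constants are standard values that I am supplying, and none of this comes from the post itself.

```python
# The electrostatic field energy stored outside a sphere of radius R carrying
# the electron's charge blows up as R -> 0.  Setting that energy equal to
# m_e * c**2 gives the "classical electron radius", the scale at which the
# field energy alone would already account for the electron's entire mass.

k   = 8.988e9        # Coulomb constant, N m^2 / C^2
e   = 1.602e-19      # elementary charge, C
m_e = 9.109e-31      # electron mass, kg
c   = 2.998e8        # speed of light, m/s

def field_energy(R):
    """Electrostatic energy (J) stored outside radius R for charge e."""
    return k * e**2 / (2.0 * R)

for R in (1e-10, 1e-15, 1e-20, 1e-25):
    print(f"R = {R:.0e} m  ->  field energy = {field_energy(R):.2e} J")

r_classical = k * e**2 / (m_e * c**2)
print(f"Classical electron radius: {r_classical:.2e} m")   # about 2.8e-15 m
```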

However, Leonard Susskind has pointed out that this might be a good thing after all based upon what we have learned about cosmology over the past 35 years. Rather than thinking that string theory should produce a unique set of numbers for the Standard Model, perhaps we should think of string theory as producing a unique framework for building universes in a Level II multiverse. For example, people never expected Newtonian mechanics and gravity to explain why the Earth orbits the Sun at a distance of 93 million miles. Instead, they were quite happy with using Newtonian physics to explain the billions of planetary orbits in our galaxy that we are slowly on the way to discovering. In this view, the Earth orbits the Sun at a distance of 93 million miles because if it did not, we would not be here wondering about it. After all, nobody is on the surface of Mercury or Venus marveling at their orbital distances from the Sun. Leonard Susskind calls this very large array of 10^500 possible universes the Cosmic Landscape and relies on the Anthropic Principle to explain why our vacuum energy is just slightly positive and not zero. Out of necessity, we can only find ourselves in a bubble Level I multiverse within the infinite Level II multiverse with such a low vacuum energy, and string theory does a great job at providing the framework for that in physics.

So with a Level II multiverse we now have an eternal and infinite multiverse composed of an infinite number of Level I multiverses that can have an infinite number of ways of doing physics. Each Level I universe will also be infinitely large and will contain an infinite number of copies of each visible Universe within it, but the physics throughout the entire Level I universe will be the same. This makes things very interesting. It means that life might be much like the movie Groundhog Day (1993), with each of us reliving our lives over and over an infinite number of times and in an infinite number of ways.

Finally, we come to what Max Tegmark defines as a Level III multiverse. A Level III multiverse might span all of the Level II and Level I universes put together via the Many-Worlds interpretation of quantum mechanics (see Quantum Software, The Foundations of Quantum Computing, and Is the Universe a Quantum Computer? for details). Recall that the founding fathers of quantum mechanics had a great deal of trouble with interpreting what the Schrödinger equation was trying to tell them. According to Max Tegmark, their trouble came from the human “baggage” they added to it. The Schrödinger equation was telling them that although electrons were smeared out over the entire Universe, they were more likely to be found where the amplitudes of the solutions to the Schrödinger equation were most intense. The problem was that whenever a measurement was taken they always found an electron at a single spot and not smeared out all over the place. So they came up with the Copenhagen interpretation of quantum mechanics in the 1920s. According to the Copenhagen interpretation, when a measurement is taken of an electron, the act of making the measurement causes the wavefunction solution to the Schrödinger equation that describes the electron’s position to collapse to a single point, and that is where the electron will be found. However, there is nothing in the mathematics of quantum mechanics that describes how this collapse of the wavefunction takes place. The explanation for the wavefunction collapse is just some more human “baggage” to help humans deal with the Schrödinger equation. Still, when I took my very first course in quantum mechanics back in 1970, the Copenhagen interpretation was presented as if it were a fact.

In 1957, Hugh Everett, working on his Ph.D. under John Wheeler, proposed the Many-Worlds interpretation of quantum mechanics as an alternative. The Many-Worlds interpretation admits an absolute reality, but claims that there are an infinite number of absolute realities spread across an infinite number of parallel universes. In the Many-Worlds interpretation, when electrons or photons encounter a two-slit experiment, they go through one slit or the other, and when they hit the projection screen they interfere with electrons or photons from other universes that went through the other slit! In Everett’s original version of the Many-Worlds interpretation, the entire Universe splits into two distinct universes whenever a particle is faced with a choice of quantum states, and so all of these universes are constantly branching into an ever-growing number of additional universes. In the Many-Worlds interpretation of quantum mechanics, the wavefunctions or probability clouds of electrons surrounding an atomic nucleus are the result of overlaying the images of many “real” electrons in many parallel universes. Thus, according to the Many-Worlds interpretation, wavefunctions never collapse. They just deterministically evolve in an abstract mathematical Hilbert space. In recent years the Many-Worlds interpretation seems to have been gaining support in the physics community, with the Copenhagen interpretation on the decline.
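Whichever interpretation you prefer, the arithmetic that produces the interference fringes is simple amplitude addition. Here is a toy Python sketch of my own, with arbitrary geometry, just to show that |ψ1 + ψ2|² oscillates across the screen; it is not tied to any particular interpretation.

```python
# Toy two-slit bookkeeping: add the complex amplitudes for the two paths
# first, then square the magnitude.  The cross term is what produces the
# interference fringes on the screen.  All numbers are arbitrary toy values.
import cmath

wavelength = 1.0      # arbitrary units
slit_gap   = 5.0      # separation between the two slits
distance   = 100.0    # slits-to-screen distance

def probability(x):
    """Relative detection probability at screen position x."""
    r1 = ((x - slit_gap/2)**2 + distance**2) ** 0.5     # path length via slit 1
    r2 = ((x + slit_gap/2)**2 + distance**2) ** 0.5     # path length via slit 2
    psi1 = cmath.exp(2j * cmath.pi * r1 / wavelength)   # plane-wave phase via slit 1
    psi2 = cmath.exp(2j * cmath.pi * r2 / wavelength)   # plane-wave phase via slit 2
    return abs(psi1 + psi2) ** 2                        # ranges from 0 (dark) to 4 (bright)

for x in range(0, 31, 5):
    print(f"x = {x:2d}: probability ~ {probability(x):.2f}")
```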

The one common thread throughout all of these multiverses is a fundamental reliance on mathematics to describe how they behave. Max Tegmark makes the conjecture that the reason the Level I, Level II and Level III multiverses are so mathematical in nature is that they simply are implementations of mathematical structures. Consequently, he adds one additional level – a Level IV multiverse composed of all possible mathematical structures, and he proposes that each of those mathematical structures is responsible for generating the Level I, Level II and Level III multiverses. With this view, the reason we keep coming up with better and better effective theories in physics is that we just keep discovering better and better mathematical approximations of the mathematical structure that is our particular Universe in the Level IV multiverse described by all possible mathematical structures. As Max Tegmark put it:

The MUH provides this missing explanation. It explains the utility of mathematics for describing the physical world as a natural consequence of the fact that the latter is a mathematical structure, and we are simply uncovering this bit by bit. The various approximations that constitute our current physics theories are successful because simple mathematical structures can provide good approximations of certain aspects of more complex mathematical structures. In other words, our successful theories are not mathematics approximating physics, but mathematics approximating mathematics.

This idea is quite familiar to IT professionals because we are very used to thinking in terms of the “logical” design of software in contrast to its “physical” implementation, which can vary greatly. Over the course of time, software has been physically implemented on electromagnetic relay switches, vacuum tubes, discrete transistors, and integrated circuit chips, and one day it may be implemented on optical chips or as qubits in quantum mechanical devices.

The Computable Universe Hypothesis
IT professionals will be glad to learn that Max Tegmark has found a special place for us in the Level IV multiverse. This is where the Software Universe that we live and work in is located. It could also be where our Level I, Level II and Level III multiverses are located, but Max Tegmark is not prepared to make that claim at this time because we do not have sufficient data.

In his paper and book, in addition to the ERH and the MUH, Max Tegmark proposes:

Computable Universe Hypothesis (CUH): The mathematical structure that is our external physical reality is defined by computable functions.

and goes on to define it as:

By this we mean that the relations (functions) that define the mathematical structure as in Appendix A1 can all be implemented as computations that are guaranteed to halt after a finite number of steps. We will see that the CUH places extremely restrictive conditions on mathematical structures, which has attractive advantages as well as posing serious challenges.

Max Tegmark uses Figure 8 below to further explain this in terms of formal systems. A formal system describes a mathematical structure or a computation, and when it does, the mathematical structure or computation is said to be a model of the formal system.

Figure 8 – The relationships between formal systems, mathematical structures and computations.

Because Max Tegmark wonders whether our Universe really does satisfy the CUH, he struggles with the CUH in terms of the infamous “halting” problem in computer science. But we in IT know much better. We have no problems whatsoever with writing software that never runs to completion. In fact, most of the software in the Software Universe is “buggy” software that does not perform properly in all cases, but that never stands in our way for long.
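Here is a tiny Python illustration of my own for both points: Tegmark’s picture of a computer as an iterated map n -> f(n) on integers, and why “guaranteed to halt” is such a restrictive demand. The Collatz map below is a well-known example for which nobody has proved that the iteration halts for every starting integer, even though it always has in practice.

```python
# The whole "machine state" here is just the integer n, and the computation
# is the iteration n -> f(n) -> f(f(n)) -> ... , exactly the picture Tegmark
# describes.  Whether this iteration always reaches 1 (the Collatz conjecture)
# is still an open problem.

def collatz_step(n):
    """One step of the memory-state update."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def run(n, max_steps=1000):
    """Iterate until the state reaches 1, or give up after max_steps."""
    steps = 0
    while n != 1 and steps < max_steps:
        n = collatz_step(n)
        steps += 1
    return steps if n == 1 else None

for start in (6, 27, 97, 871):
    print(f"start = {start}: halts after {run(start)} steps")
```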

Thus we finally come to what the Software Universe is made of. It is made of binary arithmetic and Boolean logic in a Level IV multiverse forming the software that we all dearly love. In the indexing notation that Max Tegmark has developed for describing different locations in the Level IV multiverse, all of Boolean Algebra, and the Software Universe itself, can be simply expressed as a mathematical structure defined by the string 11220001110. And as all IT professionals and most end-users know, it really does not matter if that software is even running. In the words of Max Tegmark:

Above we discussed how mathematical structures and computations are closely related, in that the former are defined by the latter. On the other hand, computations are merely special cases of mathematical structures. For example, the information content (memory state) of a digital computer is a string of bits, say “1001011100111001...” of great but finite length, equivalent to some large but finite integer n written in binary. The information processing of a computer is a deterministic rule for changing each memory state into another (applied over and over again), so mathematically, it is simply a function f mapping the integers onto themselves that gets iterated: n -> f(n) -> f(f(n)) -> .... In other words, even the most sophisticated computer simulation is merely a special case of a mathematical structure, hence included in the Level IV multiverse

I have drawn a question mark in the center of the triangle to suggest that the three mathematical structures, formal systems, and computations are simply different aspects of one underlying transcendent structure whose nature we still do not fully understand. This structure (perhaps restricted to the defined/decidable/halting part as hypothesized in Section VII below) exists “out there”, and is both the totality of what has mathematical existence and the totality of what has physical existence, i.e., the Level IV multiverse.

For example, since every universe simulation corresponds to a mathematical structure, and therefore already exists in the Level IV multiverse, does it in some meaningful sense exist “more” if it is in addition run on a computer? This question is further complicated by the fact that eternal inflation predicts an infinite space with infinitely many planets, civilizations, and computers, and that the Level IV multiverse includes an infinite number of possible simulations. The above-mentioned fact that our universe (together with the entire Level III multiverse) may be simulatable by quite a short computer program (Section VIB) calls into question whether it makes any ontological difference whether simulations are “run” or not. If, as argued above, the computer need only describe and not compute the history, then the complete description would probably fit on a single memory stick, and no CPU power would be required. It would appear absurd that the existence of this memory stick would have any impact whatsoever on whether the multiverse it describes exists “for real”. Even if the existence of the memory stick mattered, some elements of this multiverse will contain an identical memory stick that would “recursively” support its own physical existence. This would not involve any Catch-22 “chicken-and-egg” problem regarding whether the stick or the multiverse existed first, since the multiverse elements are 4-dimensional spacetimes, whereas “creation” is of course only a meaningful notion within a spacetime.


And at long last we see some support for the Equivalence Conjecture of Softwarephysics that I first postulated back in 1979 and have always had in the Introduction to Softwarephysics:

Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics Over the past 70 years, through the uncoordinated efforts of over 50 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.


By the way, you can see some excellent lectures by Max Tegmark, Alan Guth, Andrei Linde and Edward Frenkel at the World Science U under the Master Classes section of the website:

http://www.worldscienceu.com/

While you are there, be sure to take Brian Greene’s excellent math-based Special Relativity class, even if you are already an expert on the Special Theory of Relativity. I guarantee that you will pick up some additional insights that you never realized before.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston