Saturday, September 21, 2019

Introduction to Softwarephysics

Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance, and support based on concepts from physics, chemistry, biology, and geology that I used on a daily basis for over 37 years as an IT professional. For those of you not in the business, IT is short for Information Technology, commercial computer science. I retired in December of 2016 at the age of 65, but since then I have remained an actively interested bystander following the evolution of software in our time. The original purpose of softwarephysics was to explain why IT was so difficult, to suggest possible remedies, and to provide a direction for thought. Since then softwarephysics has taken on a larger scope, as it became apparent that softwarephysics could also assist the physical sciences with some of the Big Problems that they are currently having difficulties with. So if you are an IT professional, general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology, then softwarephysics might be of interest to you, if not in an entirely serious manner, then perhaps at least in an entertaining one.

The Origin of Softwarephysics
From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT and spent about 20 years in development. For the last 17 years of my career, I was in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I first transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software, by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance, and relies upon our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 75 years, through the uncoordinated efforts of over 50 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

For more on the origin of softwarephysics please see Some Thoughts on the Origin of Softwarephysics and Its Application Beyond IT.

Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily upon two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based upon real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models upon which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields.

Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In his Principia (1687), Newton outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.

The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving less than 10% of the speed of light and which are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics and for very fast things moving in strong gravitational fields, we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based upon models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide for an effective theory of software behavior that makes useful predictions of software behavior that are applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave and try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.
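To put a rough number on that 10%-of-the-speed-of-light limit, here is a minimal Python sketch (the speed fractions chosen are just illustrative) that computes the relativistic Lorentz factor γ = 1/√(1 - v²/c²). Newtonian mechanics implicitly assumes γ = 1, so γ - 1 is a measure of how badly the Newtonian approximation is doing:

import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_gamma(v):
    """Relativistic Lorentz factor; Newtonian mechanics implicitly assumes gamma = 1."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for fraction in (0.01, 0.1, 0.5, 0.9):
    v = fraction * C
    gamma = lorentz_gamma(v)
    print(f"v = {fraction:>4.2f} c   gamma = {gamma:.5f}   "
          f"Newtonian error ~ {(gamma - 1.0) * 100:.3f}%")

At 10% of the speed of light the Newtonian error is only about half a percent, which is why Newtonian mechanics remains such a useful effective theory over its restricted range.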

GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately. In order to do that, it is very important to have very accurate timing measurements. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips on board the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are further from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day due to the weaker gravitational field out there. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, errors in your position of roughly 10 kilometers per day would accrue. The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics, or QED (1948), which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!
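As a hedged back-of-the-envelope check, the standard weak-field formulas reproduce the approximate magnitudes quoted above. The Python sketch below uses rounded constants and a rough orbital radius, so it lands near, rather than exactly on, the 7.2 and 45.9 microsecond figures:

import math

G = 6.674e-11          # gravitational constant (m^3 kg^-1 s^-2)
M = 5.972e24           # mass of the Earth (kg)
C = 2.998e8            # speed of light (m/s)
R_EARTH = 6.371e6      # radius of the Earth (m)
R_ORBIT = 2.66e7       # rough GPS orbital radius: ~12,600 mile altitude plus Earth's radius (m)
SECONDS_PER_DAY = 86_400

# Special relativity: a clock moving at orbital speed v runs slow by roughly v^2 / (2 c^2)
v = math.sqrt(G * M / R_ORBIT)                 # circular orbital speed
sr_loss = (v**2 / (2 * C**2)) * SECONDS_PER_DAY

# General relativity: a clock higher in Earth's gravitational well runs fast by
# roughly (GM/R_earth - GM/R_orbit) / c^2
gr_gain = ((G * M / R_EARTH - G * M / R_ORBIT) / C**2) * SECONDS_PER_DAY

print(f"special relativity loss: {sr_loss * 1e6:5.1f} microseconds/day")
print(f"general relativity gain: {gr_gain * 1e6:5.1f} microseconds/day")
print(f"net gain:                {(gr_gain - sr_loss) * 1e6:5.1f} microseconds/day")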

So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based upon completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based upon models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark upon your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.

If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.

Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 30 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog an unintended consequence arose in my mind, as I became profoundly aware of the sheer scale of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge, and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.

But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For example, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 78 years, or 2.46 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, has closely followed the same path as life on Earth over the past 4.0 billion years in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 75 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and also of Freeman Dyson’s two-stage theory of the origin of life on Earth. In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.

Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10⁹ Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10⁴³ Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. Then, in Model-Dependent Realism - A Positivistic Approach to Realism we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we at long last explore what software might really be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.

The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that most computer simulations of the physical Universe suffer. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call upon the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.

The Impact of Self-Replicating Information Upon the Planet
One of the key findings of softwarephysics is concerned with the magnitude of the impact upon the planet of self-replicating information.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Note that because the self-replicating autocatalytic metabolic pathways of organic molecules, RNA and DNA have become so heavily intertwined over time, I now simply refer to them collectively as the “genes”. Over the past 4.0 billion years, the surface of the Earth has been totally reworked by three forms of self-replicating information – the genes, memes and software, with software rapidly becoming the dominant form of self-replicating information on the planet. For more on this see:

A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia

Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact upon one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what it’s all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely a working hypothesis still in the works. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:

MoneyPhysics – my impression of the 2008 world financial meltdown.

The Fundamental Problem of Everything – if you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!

What’s It All About? and Genes, Memes and Software – my current working hypothesis on what it’s all about.

How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.

Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.

The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and The Dawn of Galactic ASI - Artificial Superintelligence - my take on some of the issues that will arise for mankind as software becomes the dominant form of self-replicating information upon the planet over the coming decades.

The Continuing Adventures of Mr. Tompkins in the Software Universe, The Danger of Tyranny in the Age of Software, Cyber Civil Defense, Oligarchiology and the Rise of Software to Predominance in the 21st Century and Is it Finally Time to Reboot Civilization with a New Release? - my worries that the world might abandon democracy in the 21st century, as software rises to become the dominant form of self-replicating information on the planet.

Making Sense of the Absurdity of the Real World of Human Affairs - how software has aided the expansion of our less desirable tendencies in recent years.

Some Specifics About These Postings
The postings in this blog are supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton upon which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read in the reverse order of their appearance on the blog, from the oldest to the most recent, beginning with my original posting on SoftwarePhysics. In addition, several universities also now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". At this point we will finish up with my original plan for this blog on softwarephysics with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.

For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because in order to get the Introduction to Softwarephysics listed as the first post in the context root of http://softwarephysics.blogspot.com/ I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of http://softwarephysics.blogspot.com/. The side effect of all this is that the real posting date of posts is the date that appears on the post that you get when clicking on the Newer Post link at the bottom left of the posting webpage.

SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Microsoft OneDrive.

SoftwarePhysics 101 – The Physics of Cyberspacetime - Original PowerPoint document

Entropy – A spreadsheet referenced in the document

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Saturday, September 07, 2019

Life is Like a Box of Timelines

In my last posting Digital Physics and the Software Universe, we covered the idea that our Universe might seem to behave like a large network of quantum computers calculating how to behave, if not in an actual literal sense as the realists would have us believe, at least, perhaps, in a positivistic manner that makes for a good model that yields useful predictions about how our Universe seems to behave. Recall that positivism is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. In keeping with that observation, I would like to recommend two offerings on Netflix. Now if you do not currently have Netflix, you can easily stream it for free for 30 days on a trial basis, so I do not feel conflicted about making this recommendation. The first recommendation comes from the Black Mirror series on Netflix that features stories from the dark side of IT. This particular Black Mirror feature is an interactive movie entitled Bandersnatch. The second recommendation is the Netflix series entitled Russian Doll. Both Bandersnatch and the Russian Doll are based on the idea that our Universe may be composed of software running on some kind of cosmic quantum computer, and both feature a computer game developer operating under a tight deadline for a new computer game. While Bandersnatch takes place in 1984 and the Russian Doll series takes place in the present day, both convey the time-pressures that developers experience while working on new code, but with a twist. In both features, the computer game developer slowly comes to the realization that they are caught in some kind of "bad" cosmic code themselves that keeps looping in time. In fact, the line "Life is Like a Box of Timelines" comes from the Russian Doll. Both developers then try to debug and fix the bad cosmic code that they are stuck in. As with all code, debugging and fixing bad cosmic code can be very frustrating and time-consuming. It takes many runs of the bad code with trial code fixes to unit test the cosmic code until it finally performs as intended - see How Software Evolves for more on that.

Fortunately, for the last few decades, we have had IDEs (Integrated Development Environments) like Eclipse that allow developers to step through bad code and watch the variables change with time and also to follow the code logic to see how the code branches at various conditional branch-points. In an IDE, you can also set breakpoints at various points in the code that allow the code to run until the breakpoint is reached. When a breakpoint is reached, the code then stops executing and the developer can then examine the state of all the current code variables. At a breakpoint, the developer can even change the value of a variable on the fly in order to run the code down a different logical path at a conditional branch-point than the code would normally run. Bandersnatch allows the viewer to do this very same thing. At certain points in Bandersnatch, a breakpoint is reached and the viewer then gets to choose which path the cosmic code takes. This allows the viewer to interactively run through the Bandersnatch cosmic code nearly an infinite number of different ways. However, sometimes the viewer will run down a dead-end code path that causes the cosmic code to complete and the credits then begin to roll. At that point, you have to rerun the Bandersnatch cosmic code from the very beginning again. Here is a tip. The Bandersnatch cosmic code also seems to run in the background as a daemon process when you stop debugging the code and come back several hours later. So do not be surprised when you come back and start up Bandersnatch again and find yourself in a different section of the cosmic code than you previously left. All developers should be quite familiar with all of these debugging activities and, therefore, should find debugging cosmic code to be fairly straightforward. Now, in Digital Physics and the Software Universe, we saw that some very bright people hold the position that the best model for explaining the very strange way that our Universe behaves at the microscopic level of atoms and particles is to think of the Universe as some kind of cosmic code running in a quantum-mechanical IDE. But for that, you need some understanding of quantum mechanics. For a brief introduction to quantum mechanics see Quantum Software.
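For readers who have never stepped through code in an IDE, here is a minimal, purely hypothetical Python sketch of a conditional branch-point; the function and variable names are made up for illustration. Python's built-in breakpoint() drops you into its debugger, where you can inspect variables or change one on the fly to force the code down the other logical path, much as the Bandersnatch viewer does:

def choose_path(mood):
    """A trivial conditional branch-point, like the choices in Bandersnatch (hypothetical example)."""
    # Pausing here drops you into the debugger (pdb). While paused you can
    # inspect variables, or type something like: mood = "rebellious"
    # to force the code down the other logical path before continuing.
    breakpoint()
    if mood == "obedient":
        return "Follow the script and ship the game on time."
    else:
        return "Tear up the script and see where the other timeline leads."

if __name__ == "__main__":
    print(choose_path("obedient"))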

The reason why you will find quantum mechanics useful is that both Bandersnatch and the Russian Doll are based on the idea that our Universe may be composed of software running on some kind of network of cosmic quantum computers, and therefore, also bring in some of the more mind-bending ideas to be found in some of the more esoteric interpretations of quantum mechanics. It should be noted that the more esoteric interpretations of quantum mechanics are also now more important because the classic Copenhagen Interpretation of quantum mechanics no longer seems to carry the weight that it once did many long decades ago. For example, in Quantum Computing and the Many-Worlds Interpretation of Quantum Mechanics we covered Hugh Everett's Many-Worlds Interpretation of quantum mechanics and in Is the Universe a Quantum Computer? we covered John Cramer's Transactional Interpretation of quantum mechanics. Reading both of those postings in advance would help to clarify some of the strange scenes found in both of my Netflix recommendations. For example, in order to explain the strange quantum-mechanical effects that we observe in the lab, the Many-Worlds Interpretation of quantum mechanics relies on timelines in multiple parallel Universes, while the Transactional Interpretation of quantum mechanics relies on a Universe that features multiple timelines moving both forwards and backwards in time at the same time. And both Bandersnatch and the Russian Doll have scenes that display such phenomena. Now to make it a bit easier to understand such phenomena, let us briefly review the Copenhagen, Many-Worlds and the Transactional Interpretations of quantum mechanics. But in order to do that we need to know a bit about quantum mechanics and how physicists use the wave model to explain certain phenomena.

The Wave Model
The chief characteristic of the wave model is that waves tend to be everywhere, but nowhere in particular, at the same time and simultaneously explore all possible paths. To see a wave in action, drop a small pebble into a still pond of water containing many obstacles and watch the resulting waves spread out and reflect off the obstacles and interfere with each other before eventually reaching a particular destination near the edge of the pond.

In 1801, Thomas Young conducted a series of experiments with waves. First, using water waves in a shallow ripple tank, he demonstrated the concept of interference. When a water wave encounters a barrier with two slits, the ripples passing through the slits interfere with each other on the other side of the barrier (Figure 1). Where two crests intersect, the wave amplitude doubles in height, and where a crest meets a trough, the two waves cancel each other out entirely. Next, Young used a distant light source with two closely spaced slits in an opaque barrier. On the other side of the barrier, he placed a white projection screen. When light from the distant light source passed through the double-slit barrier, Young observed an interference pattern of alternating bright and dark fringes projected onto the screen which demonstrated the wavelike behavior of light.

Figure 1 – The interference pattern from two slits

You can easily repeat Young’s experiment with a piece of thin cloth. At night, hold up a single ply of a pillowcase in front of a distant light source, such as a far-off street light or the filament in your neighbor’s decorative front door light that uses a clear light bulb. Instead of a single diffuse spot of light shining through the pillowcase, you will see a pronounced checkerboard interference pattern of spots, because the weave of your pillowcase has both vertical and horizontal slits between the threads.
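If you want to play with the fringes numerically, here is a minimal Python sketch of the standard two-slit interference formula, I(θ) ∝ cos²(π d sin θ / λ), ignoring the single-slit envelope; the wavelength and slit spacing below are just illustrative values, not measurements of your pillowcase:

import math

WAVELENGTH = 500e-9    # green light, in meters (illustrative)
SLIT_SPACING = 50e-6   # distance between the two slits, in meters (illustrative)

def fringe_intensity(theta):
    """Relative two-slit intensity at angle theta, ignoring the single-slit envelope."""
    phase = math.pi * SLIT_SPACING * math.sin(theta) / WAVELENGTH
    return math.cos(phase) ** 2

# Crude text plot of bright and dark fringes across a small range of angles
for i in range(-20, 21):
    theta = i * 0.5e-3          # angle in radians
    bar = "#" * int(40 * fringe_intensity(theta))
    print(f"{theta * 1000:6.2f} mrad |{bar}")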

The Birth of Modern Quantum Mechanics
As we saw in Quantum Software, Erwin Schrödinger first developed the Schrödinger equation in the winter of 1926 to explain the strange behavior of electrons in atoms and the fact that the electrons only radiated light at certain frequencies when excited. The 1-dimensional version of this famous equation is:

(-ħ²/2m) ∂²Ψ/∂x²  =  iħ ∂Ψ/∂t

In the above 1-dimensional Schrödinger equation, Ψ is called the wavefunction of a particle and is pronounced like the word “sigh”. In quantum mechanics, the wavefunction Ψ contains all of the information that can ever be known about the particle.

Now if the particle is just quietly sitting around on its own and not interacting with other particles, like an electron that has been sitting quietly in an atom for a billion years, it means the wavefunction Ψ should not be changing with time, and we can use the 1-dimensional time-independent version of the Schrödinger equation that does not have the time variable "t" in the equation:

(-ħ²/2m) d²ψ(x)/dx²  +  V(x) ψ(x)  =  E ψ(x)

The lower-case wavefunction ψ is still pronounced like the word "sigh", but we use the lower-case ψ to signify that this is a time-independent wavefunction that does not change with time. When the 3-dimensional Schrödinger equation is solved for the hydrogen atom consisting of just one electron trapped by one proton in an electromagnetic well we get a number of quantized wavefunctions as solutions:

Figure 2 – The n=1 and n=2 orbitals or wavefunctions for the hydrogen atom.
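The quantization also shows up directly in the allowed energies. Here is a minimal Python sketch using the textbook result for hydrogen, Eₙ = -13.6 eV / n², which is why an excited hydrogen atom can only radiate light at certain discrete frequencies:

RYDBERG_EV = 13.6  # approximate ground-state binding energy of hydrogen, in eV

def hydrogen_energy(n):
    """Allowed (quantized) energy of the hydrogen electron for principal quantum number n."""
    return -RYDBERG_EV / n**2

# Only these discrete energies are allowed - nothing in between
for n in range(1, 5):
    print(f"n = {n}:  E = {hydrogen_energy(n):7.3f} eV")

# A photon emitted in a jump from n=3 to n=2 carries the energy difference
# (the red H-alpha line of the Balmer series, roughly 1.89 eV)
print(f"E(3) - E(2) = {hydrogen_energy(3) - hydrogen_energy(2):.2f} eV")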

The Strange Motion of Quantum Particles in Space and Time
Now for quantum particles like electrons or photons that are on the move we need to use Richard Feynman’s "sum over histories" approach to quantum mechanics. In Feynman's "sum over histories" approach to quantum mechanics, the wavefunction amplitude of an electron or photon is the same in all directions, like when you drop a pebble in a still pond, but the phase angles of the wavefunction will differ depending upon the path that is taken. So to figure out the probability of finding an electron or photon at a particular point, you have to add up the amplitudes and phases of all the possible paths that the electron or photon could have taken to reach the destination point. Although there are an infinite number of possible paths, the key insight is that most of the paths will be out of phase with each other and will cancel out like the destructive interference shown in Figure 1. This produces some rather strange experimental observations. Imagine a very dim source of photons or electrons that can fire one photon or electron at a time. If we fired the particles at a screen with two slits, as in Young’s experiment, we would expect to see a pattern similar to Figure 3 build up over time, based upon the particle model for electrons and photons.

Figure 3 – What common sense and the particle model would predict for a source that fires electrons or photons one at a time

However, what is actually observed is an interference pattern similar to Figure 4, even though the electrons or photons pass through the slits one at a time. According to quantum mechanics, the individual electrons or photons interfere with themselves as they go through both slits at the same time! This means that if your neighbor could turn down the light by his front door to a very low level, so that it only emitted one photon at a time, and your eye could record a long exposure image, you would still see a checkerboard pattern of light spots through your pillowcase, even though the photons went through the fabric mesh one at a time.

Figure 4 – We actually observe an interference pattern as each particle interferes with itself

Now here comes the really strange part. If we put detectors just in front of the slits so that we can record which slit the electron or photon actually passed through, and keep firing one particle at a time, the interference pattern will disappear, and we will see the pattern in Figure 3 instead. If we turn the detectors off, the interference pattern returns, and we see the pattern in Figure 4. For some reason, Nature will not allow us to observe electrons or photons behaving like particles and waves at the same time. It’s some kind of information thing again. But it gets worse. If we put the detectors at some distance behind the slits and turn them on, the interference pattern again disappears, but if we turn the detectors off, the interference pattern returns. Now, this is after the electrons or photons have already passed through the slits! How do they know whether to behave like a wave or a particle in advance, before they know if the detectors are on or off? In fact, experiments have been performed where the decision to turn the detectors on or off is not made until after the individual electrons or photons have already passed through the slits, but even so, if the detectors are turned on, the interference pattern disappears, and if the detectors are turned off, the interference pattern returns! This means that the present can change the past! This is the famous delayed-choice experiment proposed by John Wheeler in 1978, versions of which have since been carried out in the laboratory, including by a group that included Alain Aspect. In another experiment, the detectors are placed beyond the observation screen to detect entangled twin photons that are created in a splitting process. By observing these twin photons, it is possible to determine which slit an individual photon passed through after its twin has already hit the observation screen. When these distant detectors are turned on, the interference pattern once again disappears, and if the detectors are turned off, the interference pattern returns. Again, the decision to turn the detectors on or off can be made after the photons have already hit the observation screen. This means that the future can change the present!
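One way to see why turning the detectors on destroys the fringes is to compare how quantum mechanics combines the two paths: with no which-path information you add the complex amplitudes of the two paths and then square, while with the detectors on you add the probabilities of the two paths. Here is a minimal Python sketch of that bookkeeping, using an arbitrary illustrative phase difference between the paths:

import cmath
import math

def detection_probability(phase_difference, detectors_on):
    """Relative probability of landing at a spot on the screen reachable by two paths."""
    amp1 = 1.0                                   # amplitude for the path through slit 1
    amp2 = cmath.exp(1j * phase_difference)      # slit 2 path, shifted in phase by the extra path length
    if detectors_on:
        # Which-path information known: add probabilities - no interference
        return abs(amp1) ** 2 + abs(amp2) ** 2
    # No which-path information: add amplitudes first, then square - interference
    return abs(amp1 + amp2) ** 2

for degrees in (0, 90, 180):
    phi = math.radians(degrees)
    print(f"phase difference {degrees:3d} deg:  "
          f"detectors off -> {detection_probability(phi, False):.2f}   "
          f"detectors on -> {detection_probability(phi, True):.2f}")

With the detectors off, the result swings between 4 and 0 as the phase difference changes, giving bright and dark fringes; with the detectors on, it is a flat 2 everywhere.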

In 1928, Paul Dirac combined quantum mechanics (1926) with the special theory of relativity (1905) and came up with a relativistic reformulation of the Schrödinger equation. Now, strangely, the solutions to Dirac’s equation predicted both the existence of electrons with a negative charge and positive mass energy and also positrons, the antimatter equivalent of electrons, with a positive charge and a negative mass energy. But in 1947 Richard Feynman came up with an alternate interpretation for Dirac’s positrons with negative mass energy. Feynman proposed that positrons were actually normal electrons moving backwards in time! Recall that the full-blown wave function of an object with constant energy can be expressed as a time-independent wavefunction ψ(x) multiplied by a time-varying term:

Ψ(x, t)  =  e^(-iEt/ħ) ψ(x)

Now the solutions to Dirac’s equation predicted both the existence of electrons with positive mass energy and also positrons, the antimatter equivalent of electrons, with negative mass energy. For a particle with negative mass energy, the above equation looks like:

Ψ(x, t)  =  e^(-i(-E)t/ħ) ψ(x)

but since:

-i(-E)t/ħ = -iE(-t)/ħ

Feynman realized that an equivalent equation could be written by simply moving the minus sign from the energy E over to the time t, yielding:

Ψ(x, t)  =  e^(-iE(-t)/ħ) ψ(x)

So a positron with negative mass energy –E could mathematically be thought of as a regular old electron with positive mass energy E moving backwards in time! Indeed, today that is the preferred interpretation. All antimatter is simply regular matter moving backwards in time.
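As a quick sanity check on the algebra above, the two phase factors really are numerically identical; here is a short Python verification using arbitrary illustrative values for E, t and ħ:

import cmath

E, t, hbar = 2.0, 3.0, 1.0   # arbitrary illustrative values in natural units
negative_energy_forward = cmath.exp(-1j * (-E) * t / hbar)    # negative mass energy, moving forward in time
positive_energy_backward = cmath.exp(-1j * E * (-t) / hbar)   # positive mass energy, moving backwards in time
print(negative_energy_forward, positive_energy_backward)
print(cmath.isclose(negative_energy_forward, positive_energy_backward))   # True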

Figure 5 – Above is a Feynman diagram showing an electron colliding with a positron, the antimatter version of an electron.

In Figure 5 we see an electron colliding with a positron, the antimatter version of an electron. When the two particles meet they annihilate each other and turn into two gamma rays (γ). In the Feynman diagram, space runs along the horizontal axis and time runs along the vertical axis. In the diagram, we see an electron e- with negative charge and positive mass energy on the left and a positron e+ with positive charge and negative mass energy on the right. As time progresses up the vertical time axis, we see the electron e- and the positron e+ approach each other along the horizontal space axis. When the two particles get very close, they annihilate each other and we see two gamma rays (γ) departing the collision as time proceeds along the vertical time axis. But notice the red arrowheads on the red arrow lines in the Feynman diagram. The red arrowhead for the negative electron e- is moving upwards on the diagram and forward in time, while the positive positron e+ is moving downwards on the diagram and backwards in time! So in the diagram, the positive positron e+ is portrayed as an ordinary negative electron e- moving backwards in time!

The Copenhagen Interpretation of Quantum Mechanics
In 1927, Niels Bohr and Werner Heisenberg proposed a very positivistic interpretation of quantum mechanics now known as the Copenhagen Interpretation. You see, Bohr was working at the University of Copenhagen Institute of Theoretical Physics at the time. The Copenhagen Interpretation contends that absolute reality does not really exist. Instead, there are an infinite number of potential realities, defined by the wavefunction ψ of a quantum system, and when we make a measurement of a quantum system, the wavefunction of the quantum system collapses into a single value that we observe, and thus brings the quantum system into reality (see Quantum Software for more on wavefunctions). This satisfied Max Born’s contention that wavefunctions are just probability waves. The Copenhagen Interpretation suffers from several philosophical problems though. For example, Eugene Wigner pointed out that the devices we use to measure quantum events are also made out of atoms which are quantum objects in themselves, so when an observation is made of a single atom of uranium to see if it has gone through a radioactive decay using a Geiger counter, the atomic quantum particles of the Geiger counter become entangled in a quantum superposition of states with the uranium atom. If the uranium has decayed, then the uranium atom and the Geiger counter are in one quantum state, and if the atom has not decayed, then the uranium atom and the Geiger counter are in a different quantum state. If the Geiger counter is fed into an amplifier, then we have to add in the amplifier too into our quantum superposition of states. If a physicist is patiently listening to the Geiger counter, we have to add him into the chain as well, so that he can write and publish a paper which is read by other physicists and is picked up by Time magazine for a popular presentation to the public. So when does the “measurement” actually take place? We seem to have an infinite regress. Wigner’s contention is that the measurement takes place when a conscious being first becomes aware of the observation. Einstein had a hard time with the Copenhagen Interpretation of quantum mechanics for this very reason because he thought that it verged upon solipsism. Solipsism is a philosophical idea from Ancient Greece. In solipsism, your Mind is the whole thing, and the physical Universe is just a figment of your imagination. So I would like to thank you very much for thinking of me and bringing me into existence! Einstein’s opinion of the Copenhagen Interpretation of quantum mechanics can best be summed up by his statement "Is it enough that a mouse observes that the Moon exists?". Einstein objected to the requirement for a conscious being to bring the Universe into existence because, in Einstein’s view, measurements simply revealed to us the condition of an already existing reality that does not need us around to make measurements in order to exist. But in the Copenhagen Interpretation, the absolute reality of Einstein does not really exist. Additionally, in the Copenhagen Interpretation, objects do not really exist until a measurement is taken, which collapses their associated wavefunctions, but the mathematics of quantum mechanics does not shed any light on how a measurement could collapse a wavefunction.

The collapse of the wavefunction is also a one-way street. According to the mathematics of quantum mechanics a wavefunction changes with time in a deterministic manner, so like all of the other current effective theories of physics, it is reversible in time and can be run backwards. This is also true in the Copenhagen Interpretation, so long as you do not observe the wavefunction and collapse it by the process of observing it. In the Copenhagen Interpretation, once you observe a wavefunction and collapse it, you cannot undo the collapse, so the process of observation becomes nonreversible in time. That means if you fire photons at a target, but do not observe them, it is possible to reverse them all in time and return the Universe back to its original state. That is how all of the other effective theories of physics currently operate. But in the Copenhagen Interpretation, if you do observe the outgoing photons you can never return the Universe back to its original state. This can best be summed up by the old quantum mechanical adage - look particle, don’t look wave. A good way to picture this in your mind is to think of a circular tub of water. If you drop a pebble into the exact center of a circular tub of water, a series of circular waves will propagate out from the center. Think of those waves as the wavefunction of an electron changing with time into the future according to the Schrödinger equation. When the circular waves hit the circular walls of the tub they will be reflected back to the center of the tub. Essentially, they can be viewed as moving backwards in time. This can happen in the Copenhagen Interpretation so long as the electron is never observed as its wavefunction moves forward or backward in time. However, if the wavefunction is observed and collapsed, it can never move backwards in time, so observation becomes a one-way street.

The Many-Worlds Interpretation of Quantum Mechanics
In 1956, Hugh Everett, working on his Ph.D. under John Wheeler, proposed the Many-Worlds Interpretation of quantum mechanics as an alternative. The Many-Worlds Interpretation admits to an absolute reality but claims that there are an infinite number of absolute realities spread across an infinite number of parallel universes. In the Many-Worlds Interpretation, when electrons or photons encounter a two-slit experiment, they go through one slit or the other, and when they hit the projection screen they interfere with electrons or photons from other universes that went through the other slit! In Everett’s original version of the Many-Worlds Interpretation, the entire Universe splits into two distinct universes whenever a particle is faced with a choice of quantum states, and so all of these universes are constantly branching into an ever-growing number of additional universes. In the Many-Worlds Interpretation of quantum mechanics, the wavefunctions or probability clouds of electrons surrounding an atomic nucleus are the result of overlaying the images of many “real” electrons in many parallel universes. Thus, according to the Many-Worlds Interpretation wavefunctions never collapse. They just deterministically evolve in an abstract mathematical Hilbert space and are reversible in time, like everything else in physics.

Because Einstein detested the Copenhagen Interpretation of quantum mechanics so much, he published a paper in 1935 with Boris Podolsky and Nathan Rosen which outlined what is now known as the EPR Paradox. But to understand the EPR Paradox we need a little background in experimental physics. Electrons have a quantum mechanical property called spin. You can think of an electron’s spin like the electron has a little built-in magnet. In fact, it is the spins of the little electron magnets that add up to make the real magnets that you put on your refrigerator. Now in quantum mechanics, the spin of a single electron can be both up and down at the same time because the single electron can be in a mixture of quantum states! But in the classical Universe that we are used to, macroscopic things like a child's top can only have a spin of up or down at any given time. The top can only spin in a clockwise or counterclockwise manner at one time - it cannot do both at the same time. Similarly, in quantum mechanics, a photon or electron can go through both slits of a double slit experiment at the same time, so long as you do not put detectors at the slit locations.

Figure 6 – A macroscopic top can only spin clockwise or counterclockwise at one time.

Figure 7 – But electrons can be in a mixed quantum mechanical state in which they both spin up and spin down at the same time.

Figure 8 – Similarly, tennis balls can only go through one slit in a fence at a time. They cannot go through both slits of a fence at the same time.

Figure 9 – But at the smallest of scales in our quantum mechanical Universe, electrons and photons can go through both slits at the same time, producing an interference pattern.

Figure 10 – You can see this interference pattern of photons if you look at a distant porch light through the mesh of a sheer window curtain or a pillowcase.

When you throw an electron through a distorted magnetic field that is pointing up, the electron will pop out in one of two states. It will either be aligned with the magnetic field (called spin-up) or it will be pointing 180° in the opposite direction of the magnetic field (called spin-down). Both the spin-up and spin-down conditions are called eigenstates. Prior to the observation of the electron’s spin, the electron is in a superposition of states and is not in an eigenstate. Now if the electron in the eigenstate of spin-up is sent through the same magnetic field again, it will be found to pop out in the eigenstate of spin-up again. Similarly, a spin-down electron that is sent through the magnetic field again will also pop out as a spin-down electron. Now here is the strange part. If you rotate the magnetic field by 90° and send spin-up electrons through it, 50% of the electrons will pop out with a spin pointing to the left, and 50% will pop out with a spin pointing to the right. And you cannot predict in advance which way a particular spin-up electron will pop out. It might spin to the left, or it might spin to the right. The same goes for the spin-down electrons – 50% will pop out spinning to the left and 50% will pop out spinning to the right.
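That 50/50 split is exactly what the standard spin-1/2 measurement rule predicts: a spin-up electron measured along an axis tilted by an angle θ comes out "up" along the new axis with probability cos²(θ/2). Here is a minimal Python sketch of that rule, plus a simple simulated run of the 90° case:

import math
import random

def probability_up(theta_degrees):
    """Probability that a spin-up electron is measured 'up' along an axis tilted by theta."""
    theta = math.radians(theta_degrees)
    return math.cos(theta / 2.0) ** 2

# At 0 degrees the electron always comes out spin-up again;
# at 90 degrees the outcome is a 50/50 coin flip; at 180 degrees it is always 'down'.
for theta in (0, 90, 180):
    print(f"analyzer tilted {theta:3d} deg:  P(up) = {probability_up(theta):.2f}")

# Simulate firing 10,000 spin-up electrons through a 90-degree-rotated field
trials = 10_000
ups = sum(1 for _ in range(trials) if random.random() < probability_up(90))
print(f"simulated 90-degree run: {ups} of {trials} came out 'up'")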

Figure 11 - In the Stern-Gerlach experiment we shoot electrons through a distorted magnetic field. Classically, we would expect the electrons to be spinning in random directions and the magnetic field should deflect them in random directions, creating a smeared out spot on the screen. Instead, we see that the act of measuring the spins of the electrons puts them into eigenstates with eigenvalues of spin-up or spin-down and the electrons are either deflected up or down. If we rotate the magnets by 90°, we find that the electrons are deflected to the right or to the left.

The EPR Paradox goes like this. Suppose we prepare many pairs of quantum mechanically “entangled” electrons that conserve angular momentum. Each pair consists of one spin-up electron and one spin-down electron, but we do not know which is which at the onset. Now let the pairs of electrons fly apart and let two observers measure their spins. If observer A measures an electron, there will be a 50% probability that he will find a spin-up electron and a 50% chance that he will find a spin-down electron, and the same goes for observer B: 50% of observer B’s electrons will be found to have a spin-up, while 50% will be found with a spin-down. Now the paradox, from the perspective of the Copenhagen Interpretation, is that when observer A and observer B come together to compare notes, they find that each time observer A found a spin-up electron, observer B found a spin-down electron, even though the electrons did not know which way they were spinning before the measurements were performed. Somehow when observer A measured the spin of an electron, it instantaneously changed the spin of the electron that observer B measured. Einstein hated this “spooky action at a distance” feature of the Copenhagen Interpretation that made physics nonlocal, meaning that things separated by great distances could still instantaneously change each other. He thought that it violated the speed limit of his special theory of relativity, which does not allow information to travel faster than the speed of light. Einstein thought that the EPR Paradox was the final nail in the coffin of quantum mechanics. There had to be some “hidden variables” that allowed electrons to know if they “really” were a spin-up or spin-down electron. You see, for Einstein, absolute reality really existed. For Einstein, the apparent probabilistic nature of quantum mechanics was an illusion, like the random() function found in most computer languages. The random() function just generates a deterministic sequence of apparently random numbers that are totally predictable if you know the algorithm and its starting point in advance. You normally initialize the random() function with a “seed” from the system clock of the computer you are running on to simulate randomness by starting the sequence at a different point each time.
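
Here is a minimal Python sketch of that idea - just an illustration of pseudorandomness, not anything from the EPR paper. Once the generator is seeded, the “random” numbers it produces are completely reproducible, which is exactly the kind of hidden, deterministic machinery Einstein hoped was lurking underneath quantum probabilities.

import random

# Seeding the pseudorandom generator with a fixed value instead of the system
# clock makes the "random" sequence completely predictable.
random.seed(42)
first_run = [random.random() for _ in range(3)]

random.seed(42)                      # re-seed with the same value...
second_run = [random.random() for _ in range(3)]

print(first_run)
print(second_run)
print(first_run == second_run)       # True - the same "random" numbers appear again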

However, in 1964 John S. Bell published a paper in which he proposed an experiment that could actually test the EPR Paradox. In the 1980s and 1990s, a series of experiments were indeed performed that showed that Einstein was actually wrong. Using photons and polarimeters, instead of the spin of electrons, these experiments showed that photons really do not know their quantum states in advance of being measured and that determining the polarization of a photon by observer A can immediately change the polarization of another photon 60 miles away. These experiments demonstrated that the physical Universe is nonlocal, meaning that the spooky “action at a distance” that Einstein so disliked is built into our Universe, at least for entangled quantum particles. This might sound like a violation of the special theory of relativity because it seems like we are sending an instantaneous message faster than the speed of light, but that is really not the case. Both observer A and observer B will measure photons with varying polarizations at their observing stations separated by 60 miles. Only when observer A and observer B come together to compare results will they realize that their observations were correlated, so it is impossible to send a message with real information using this experimental scheme. Clearly, our common-sense ideas about space and time are still lacking, and so are our current effective theories.

Hugh Everett solves this problem by letting the electrons be in all possible spin states in a large number of parallel universes. When observers measure the spin of an electron, they really do not measure the spin of the electron. They really measure which universe they happen to be located in, and since everything in the Many-Worlds Interpretation relies on “correlated” composite wavefunctions, it should come as no surprise that when observer A and observer B come together, they find that their measurements of the electron spins are correlated. In the Many-Worlds Interpretation, Hugh Everett proposes that when a device, like our magnets above, measures the spin of an electron that is in an unknown state, and not in a spin-up or spin-down eigenstate, the device does not put the electron into a spin-up or spin-down eigenstate as the Copenhagen Interpretation maintains. Instead, the device and the electron enter into a correlated composite system state, or combined wavefunction, with an indeterminate spin for the electron. Hugh Everett explains how this new worldview can be used to explain what we observe in the lab. In fact, he proposes that from the perspective of the measuring magnets and the electron, two independent observational histories will emerge, one with the measuring magnets finding a spin-up electron and one with the measuring magnets finding a spin-down electron, and both of these will be just as “real” as the other. For them, the Universe has essentially split in two, with each outcome unfolding in its own Universe. That is where the “Many-Worlds” in the Many-Worlds Interpretation of quantum mechanics comes from.
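
Whatever interpretation you prefer, the statistics that the two observers record are easy to reproduce. The short Python sketch below is my own illustration of the bookkeeping only - it builds the anticorrelation in by hand rather than deriving it from any interpretation - but it shows how each observer can see a perfectly random 50/50 stream of results locally while the combined records are perfectly correlated.

import random

random.seed(1)
n_pairs = 100_000
a_up = b_up = agreements = 0

for _ in range(n_pairs):
    # Each entangled pair carries one spin-up and one spin-down electron;
    # which observer receives which is a 50/50 coin flip.
    a_spin = random.choice(("up", "down"))
    b_spin = "down" if a_spin == "up" else "up"   # anticorrelation built in by hand
    a_up += a_spin == "up"
    b_up += b_spin == "up"
    agreements += a_spin == b_spin

print(f"Observer A saw spin-up {a_up / n_pairs:.1%} of the time")
print(f"Observer B saw spin-up {b_up / n_pairs:.1%} of the time")
print(f"A and B recorded the same spin on {agreements / n_pairs:.1%} of the pairs")

Neither observer can tell that anything unusual is happening from their own list alone, which is why no faster-than-light message can be sent this way.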

While doing research for The Software Universe as an Implementation of the Mathematical Universe Hypothesis, I naturally consulted Max Tegmark’s Home Page at:

http://space.mit.edu/home/tegmark/mathematical.html

and I found a link there to Hugh Everett’s original 137-page Jan 1956 draft Ph.D. thesis in which he laid down the foundations for the Many-Worlds Interpretation. This is a rare document indeed, because on March 1, 1957, Everett submitted a very compressed version of his theory in his final 36-page doctoral dissertation, "On the Foundations of Quantum Mechanics". That final version had been heavily edited by his thesis advisor John Wheeler to make it more palatable to the committee that would be hearing Everett's oral defense, and also to avoid offending Niels Bohr, one of the founding fathers of the Copenhagen Interpretation and still one of its most prominent proponents. But years later, John Wheeler really did want to know what Niels Bohr thought of Hugh Everett’s new theory and encouraged Everett to visit Copenhagen in order to meet with Bohr. Everett and his wife did finally travel to Copenhagen in March of 1959 and spent six weeks there. But by all accounts, the meeting between Bohr and Everett was a disaster, with Bohr not even willing to discuss the Many-Worlds Interpretation with Everett.

Below is the link to Hugh Everett’s original 137-page Jan 1956 draft Ph.D. thesis:

http://www.pbs.org/wgbh/nova/manyworlds/pdf/dissertation.pdf

I have also placed his thesis on Microsoft One Drive at:

https://onedrive.live.com/redir?resid=21488ff1cf19c88b!1437&authkey=!ADIm_WTYLkbx90I&ithint=file%2cpdf

In Quantum Computing and the Many-Worlds Interpretation of Quantum Mechanics, I step through the above document page-by-page and offer up a translation of the mathematics into easily understood terms.

The Transactional Interpretation of Quantum Mechanics
In Is the Universe a Quantum Computer? I covered John Cramer's Transactional Interpretation of quantum mechanics and compared it to TCP/IP transactions on the Internet. In an email exchange with John Cramer, I learned that such a comparison had never been done before. Now in the Copenhagen interpretation of quantum mechanics, the wavefunctions Ψ of particles and photons are not “real” waves, they are only probability waves – just convenient mathematical constructs that don’t “really” exist. But in Cramer’s Transactional Interpretation of quantum mechanics, the wavefunctions Ψ of particles and photons really do exist. For a physics student new to quantum mechanics, this is truly a comforting idea. Before they teach you about quantum mechanics, you go through a lengthy development of wave theory in courses on classical electrodynamics, optics, and differential equations. In all these courses, you only deal with waves that are mathematically real, meaning that these waves have no imaginary parts built from the imaginary number i, where i² = -1. But in your first course on quantum mechanics, you are introduced to Schrödinger’s equation:

-(ħ²/2m) ∂²Ψ/∂x²  =  iħ ∂Ψ/∂t

and learn that generally, the wavefunction solutions to Schrödinger’s equation contain both real and imaginary parts containing the nasty imaginary number i. Consequently, the conventional wisdom is that the wavefunction solutions to Schrödinger’s equation cannot really exist as real tangible things. They must just be some kind of useful mathematical construct. However, in the same course, you are also taught about Davisson and Germer bouncing electrons off the lattice of a nickel crystal and observing an interference pattern, so something must be waving! I would venture to suggest that nearly all students new to quantum mechanics initially think of wavefunctions as real waves waving in space. Only with great coaxing by their professors do these students “unlearn” this idea with considerable reluctance.
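
It is easy to check for yourself that the textbook solutions really are complex. Below is a minimal sympy sketch - my own verification, not anything from the original coursework - confirming that a free-particle plane wave satisfies Schrödinger’s equation above and that its imaginary part is not zero.

from sympy import symbols, I, exp, diff, simplify, im

x, t, k, m, hbar = symbols('x t k m hbar', positive=True)

# A free-particle plane wave with the usual dispersion relation w = hbar*k^2/(2m).
w = hbar * k**2 / (2 * m)
psi = exp(I * (k * x - w * t))

lhs = -hbar**2 / (2 * m) * diff(psi, x, 2)   # -(hbar^2/2m) d2(psi)/dx2
rhs = I * hbar * diff(psi, t)                #  i*hbar d(psi)/dt

print(simplify(lhs - rhs))   # 0 -> the plane wave solves Schrodinger's equation
print(simplify(im(psi)))     # sin(k*x - hbar*k**2*t/(2*m)) -> a nonzero imaginary part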

As we saw previously, the imaginary parts of wavefunctions really bothered the founding fathers of quantum mechanics too. Recall that in 1928, Max Born came up with the clever trick of multiplying the wavefunctions Ψ by their complex conjugates Ψ* to get rid of the imaginary parts. To create the complex conjugate of a complex number or function, all you have to do is replace the imaginary number i with –i wherever you see it. According to Born’s conjecture, the probability of things happening in the quantum world is proportional to the wavefunction multiplied by its complex conjugate, Ψ*Ψ. Mathematically, this is the same thing as finding the square of the amplitude of the wavefunction. Now earlier in this posting, I mentioned how Richard Feynman pointed out that instead of thinking of positrons as having negative mass energy, you could also think of positrons as regular electrons with negative charge moving backwards in time by shifting the position of the “-“ sign in the wavefunction of a positron. But that is just the same thing as using the complex conjugate Ψ* of an electron wavefunction for a positron. So mathematically, we can think of the complex conjugate wavefunction Ψ* of a particle as the wavefunction of the particle moving backwards in time. Cramer suggests that Born’s Ψ*Ψ, representing the probability of a quantum event, is not just a mathematical trick or construct; rather, it is the collision of an outgoing “retarded” wave Ψ moving forwards in time with an incoming “advanced” wave Ψ* moving backwards in time. Essentially, John Cramer's Transactional Interpretation of quantum mechanics sees every quantum event as such a handshake between retarded waves moving forwards in time and advanced waves moving backwards in time.
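
As a quick numerical illustration of Born's trick - my own example, with arbitrary complex amplitudes - multiplying any complex amplitude by its complex conjugate always yields a real, non-negative number equal to the squared amplitude:

import numpy as np

# Born's rule: psi* psi is always real and non-negative, even though psi itself
# is complex. The amplitudes below are arbitrary example values.
amplitudes = np.array([0.6 + 0.8j, 1.0 - 2.0j, -0.3j])

for psi in amplitudes:
    born = np.conj(psi) * psi            # psi* psi
    print(f"psi = {psi}   psi*psi = {born}   |psi|^2 = {abs(psi) ** 2}")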

The Transactional Interpretation easily explains all of the apparent paradoxes of quantum mechanics. As we have seen, there is actual experimental evidence that electrons and photons seem to “know” in advance what they will encounter on the other side of a double-slit experiment. This is easily explained by the Transactional Interpretation. The electrons or photons send out retarded waves into the future which interact with whatever lies beyond the slits. If there are detectors that are turned on, the retarded waves interact with them; if there are no detectors, the waves interact with some electrons on a projection screen instead. In either case, an advanced wave is sent backwards in time from the detectors or the projection screen to the point of origin of the electrons or photons, so that they “know” how to behave before they get to the two-slit screen.

Conclusion
I hope that some of the above quantum-mechanical craziness that we observe in our Universe helps with watching the Netflix features Bandersnatch and Russian Doll. I would also like to thank Netflix for producing these fine quantum-mechanical productions that can help the general public become more familiar with the phenomena of quantum mechanics and with the potential of quantum computer software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Monday, August 26, 2019

Digital Physics and the Software Universe

I just watched a very interesting panel discussion from an old World Science Festival meeting that was held on Saturday, June 4, 2011, 8:00 PM - 9:30 PM:

Rebooting the Cosmos: Is the Universe the Ultimate Computer?
https://www.worldsciencefestival.com/videos/rebooting-the-cosmos-is-the-universe-the-ultimate-computer/

The panel discussion was moderated by John Hockenberry and featured Edward Fredkin, Seth Lloyd, Jürgen Schmidhuber and Fotini Markopoulou-Kalamara. The World Science Festival is an annual event hosted by string theorist Brian Greene, and the online World Science U is an outgrowth of the Festival. At the World Science U, you can view many very interesting lectures and courses, and at the World Science Festival website you can view many very interesting panel discussions:

World Science U
http://www.worldscienceu.com/

World Science Festival
https://www.worldsciencefestival.com/

Edward Fredkin started working with computers back in 1956. In 1960, he wrote the first OS and the first Assembler program for the DEC PDP-1. Then in 1968, Edward Fredkin returned to academia and became a full professor at MIT. In 1990, Edward Fredkin published the paper Digital Mechanics - An informational process based on reversible universal cellular automata in the ACM, in which he proposed that the physical Universe might be a cellular automaton programmed to act like physics and launched the field of Digital Physics. Since then, Fredkin has broadened the field by renaming it to Digital Philosophy. You can find his Digital Philosophy website at:

Digital Philosophy
http://www.digitalphilosophy.org/

On the Home page, he briefly defines Digital Philosophy as:

"What is Digital Philosophy?
Digital Philosophy (DP) is a new way of thinking about the fundamental workings of processes in nature. DP is an atomic theory carried to a logical extreme where all quantities in nature are finite and discrete. This means that, theoretically, any quantity can be represented exactly by an integer. Further, DP implies that nature harbors no infinities, infinitesimals, continuities, or locally determined random variables. This paper explores Digital Philosophy by examining the consequences of these premises.

At the most fundamental levels of physics, DP implies a totally discrete process called Digital Mechanics. Digital Mechanics[1] (DM) must be a substrate for Quantum Mechanics. Digital Philosophy makes sense with regard to any system if the following assumptions are true:

All the fundamental quantities that represent the state information of the system are ultimately discrete. In principle, an integer can always be an exact representation of every such quantity. For example, there is always an integral number of neutrons in a particular atom. Therefore, configurations of bits, like the binary digits in a computer, can correspond exactly to the most microscopic representation of that kind of state information.

In principle, the temporal evolution of the state information (numbers and kinds of particles) of such a system can be exactly modeled by a digital informational process similar to what goes on in a computer. Such models are straightforward in the case where we are keeping track only of the numbers and kinds of particles. For example, if an oracle announces that a neutron decayed into a proton, an electron, and a neutrino, it’s easy to see how a computer could exactly keep track of the changes to the numbers and kinds of particles in the system. Subtract 1 from the number of neutrons, and add 1 to each of the numbers of protons, electrons, and neutrinos.

The possibility that DP may apply to various fields of science motivates this study."
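
As a toy illustration of the integer bookkeeping Fredkin describes in his neutron-decay example, here is a minimal Python sketch - my own, with a hypothetical dictionary of particle counts - that tracks the state of such a system exactly.

# Toy version of Fredkin's example: the state is just a set of integer particle
# counts, and a neutron decay updates those integers exactly.
state = {"neutrons": 10, "protons": 5, "electrons": 5, "neutrinos": 0}

def neutron_decay(counts):
    # n -> p + e + neutrino, tracked as exact integer bookkeeping
    counts["neutrons"]  -= 1
    counts["protons"]   += 1
    counts["electrons"] += 1
    counts["neutrinos"] += 1

neutron_decay(state)
print(state)   # {'neutrons': 9, 'protons': 6, 'electrons': 6, 'neutrinos': 1}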


While you are on his website, be sure to check out some of Edward Fredkin's publications at:

http://www.digitalphilosophy.org/index.php/essays/

In 2002, Seth Lloyd at MIT published The Computational Universe, in which he calculated the computing power of the entire physical Universe treated as one large quantum computer. You can read this fascinating paper at:

http://www.edge.org/3rd_culture/lloyd2/lloyd2_p2.html

Seth Lloyd is currently working on quantum computers at MIT and is the first quantum-mechanical engineer in MIT’s Mechanical Engineering department. Seth Lloyd is recognized for proposing the first technologically feasible design for a quantum computer. In 2006 he published the book Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos, in which he contends that our Universe is a quantum mechanical Computational Universe that has been calculating how to behave from the very beginning. He came to this conclusion as MIT’s sole quantum mechanical engineer working on building practical quantum computers. During the course of his research, Seth Lloyd has learned how to talk to atoms in a quantum mechanical way. Through intimate dealings with atoms, he has found that atoms are constantly flipping quantum mechanical states in a controlled manner prescribed by quantum mechanics. Since a computer is simply a large number of switches that operate in a controlled manner, our Universe can therefore be thought of as a Computational Universe that is necessarily capable of computation. In fact, our current quest to build quantum computers can simply be viewed as an attempt to domesticate this natural tendency for our Universe to compute in a quantum mechanical manner. Seth Lloyd calculates that our section of the Computational Universe, which is defined by our current cosmic horizon and consists of all quantum particles out to a distance of 46 billion light years, has performed about 10¹²² operations on 10⁹² bits over the past 13.8 billion years. This domestication of the quantum mechanical behavior of our Computational Universe has already led to the construction of many trillions of classical computers.
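
Lloyd's figure can be roughly reproduced with the Margolus-Levitin bound, which limits a system with average energy E to about 2E/(πħ) elementary operations per second. The Python sketch below is only my own order-of-magnitude estimate - the 10⁵³ kg mass figure is a commonly quoted rough value for the ordinary matter within our cosmic horizon, not a number taken from Lloyd's paper - but it lands within an order of magnitude of his 10¹²² operations.

import math

hbar = 1.055e-34            # reduced Planck constant, J*s
c = 3.0e8                   # speed of light, m/s
age = 13.8e9 * 3.156e7      # age of the Universe in seconds, about 4.4e17 s

# Rough, commonly quoted estimate for the ordinary matter inside our cosmic
# horizon - treat this as an assumption for a back-of-envelope calculation.
mass = 1.0e53               # kg
energy = mass * c**2        # about 9e69 J

ops_per_second = 2 * energy / (math.pi * hbar)   # Margolus-Levitin bound
total_ops = ops_per_second * age

print(f"roughly 10^{math.log10(total_ops):.0f} operations since the Big Bang")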

Since Seth Lloyd proposes that our Universe is simply a vast quantum computer calculating how to perform, perhaps in 1,000 years when software has finally become the dominant form of self-replicating information on the planet and is running on huge networks of quantum computers, it will make no distinction between the “real” Universe and the “simulated” universes that it can easily cook up on its own hardware. Perhaps as we saw in Quantum Computing and the Many-Worlds Interpretation of Quantum Mechanics the software running on these vast networks of quantum computers of the future will come to realize that the Many-Worlds interpretation of quantum mechanics is indeed correct, and that the humans of long ago were simply a large collection of quantum particles constantly getting entangled or “correlated” with other quantum particles, and splitting off into parallel universes in the process. This constant splitting gave the long-forgotten humans the delusion that they were conscious beings with free will and led them to do very strange things, like look for similarly deluded entities.

Jürgen Schmidhuber is a renowned AI researcher and currently the Scientific Director of the Swiss AI Lab IDSIA (Istituto Dalle Molle di Studi sull'Intelligenza Artificiale). His Home Page has many interesting links:

Jürgen Schmidhuber's Home Page
http://people.idsia.ch/~juergen/

Fotini Markopoulou-Kalamara is one of the founding faculty members of the Perimeter Institute for Theoretical Physics and works on loop quantum gravity. Loop quantum gravity is a theory that tries to bridge the gap between Einstein's general relativity and quantum mechanics to produce a quantum theory of gravity. Nearly all of the current theories of physics are background-dependent theories that unfold upon a stage of pre-existing spacetime. For example, the Standard Model of particle physics and string theory both assume that there is a stage of pre-existing spacetime upon which they act to produce what we observe in our Universe. Loop quantum gravity does not have such a stage and is therefore background-independent. In loop quantum gravity, spacetime is quantized into a network of nodes called a spin network. The minimum distance between nodes is about one Planck length, roughly 10⁻³⁵ meters. Loop quantum gravity is a background-independent theory because the spin network can be an emergent property of the Universe that evolves with time. Similarly, Digital Physics is a background-independent theory because spacetime emerges as a quantized entity and is not a stage upon which physics acts.

Nick Bostrom’s Are You Living in a Computer Simulation? (2002) at:

http://www.simulation-argument.com/simulation.html

is also a good reference on this topic.

Rebooting the Cosmos: Is the Universe the Ultimate Computer? examines the idea that the physical Universe may essentially be running on a large network of quantum computers. The most interesting thing about this panel discussion was that midway through it, the participants brought up the initial work of Konrad Zuse on this topic. Recall that Konrad Zuse started working on building real computers back in 1936, the same year that Alan Turing of early computer science fame published the mathematical concept of the Turing Machine in On Computable Numbers, with an Application to the Entscheidungsproblem that today underlies the architecture of all modern computers. Alan Turing’s work was completely conceptual in nature, and in the paper, he proposed the theoretical concept of a Turing Machine. A Turing Machine was composed of a read/write head and an infinitely long paper tape. On the paper tape was stored a sequential series of 1s and 0s, and the read/write head could move back and forth along the paper tape in a motion based upon the 1s and 0s that it read. The read/write head could also write 1s and 0s to the paper tape as well. In Turing’s paper, he mathematically proved that such an arrangement could be used to encode any mathematical algorithm, like multiplying two very large numbers together and storing the result on the paper tape. In many ways, a Turing Machine is much like a ribosome reading mRNA and writing out the amino acids of a polypeptide chain that eventually folds up into an operational protein.

Figure 1 - A Turing Machine had a read/write head and an infinitely long paper tape. The read/write head could read instructions on the tape that were encoded as a sequence of 1s and 0s and could write the results of following those instructions back to the tape as a sequence of 1s and 0s.

Figure 2 – A ribosome behaves much like the read/write head of a Turing Machine. The ribosome reads an mRNA tape that was transcribed earlier from a section of DNA tape that encodes the information in a gene. The ribosome read/write head then reads the A, C, G, and U nucleobases that code for amino acids three at a time. As each three-base codon is read on the mRNA tape, the ribosome writes out an amino acid to a growing polypeptide chain, as tRNA units bring in one amino acid at a time. The polypeptide chain then goes on to fold up into a 3-D protein molecule.
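
To make the idea of a Turing Machine concrete, below is a minimal Python sketch of one - my own toy example, with a made-up transition table that simply scans along the tape and flips every bit before halting at the first blank.

# A minimal Turing Machine sketch: a finite-state read/write head stepping
# along a tape of 1s and 0s according to a table of rules.
def run_turing_machine(tape, rules, state="scan", head=0, blank=" "):
    tape = list(tape)
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape)

flip_rules = {
    ("scan", "0"): ("1", "R", "scan"),   # read a 0 -> write a 1, move right
    ("scan", "1"): ("0", "R", "scan"),   # read a 1 -> write a 0, move right
    ("scan", " "): (" ", "R", "halt"),   # first blank -> stop
}

print(run_turing_machine("101100", flip_rules))   # prints "010011 " (with a trailing blank)

The machines in Turing’s paper were, of course, far more elaborate, but every one of them boils down to the same loop: read a symbol, consult a finite table, write a symbol, move the head, and change state.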

In a sense, all modern computers are loosely based upon the concept of a Turing Machine. Turing did not realize it, but at the same time he was formulating the concept of a Turing Machine back in 1936, Konrad Zuse was constructing his totally mechanical Z1 computer in the living room of his parents’ apartment in Germany, and the Z1 really did use a paper tape to store the program and data that it processed, much like a Turing Machine. Neither one of these early computer pioneers had any knowledge of the other at the time. For more about how Konrad Zuse independently developed physical implementations of many of Alan Turing’s mathematical concepts in the form of the world’s very first real computers, see the following article that was written in his own words:

http://ei.cs.vt.edu/~history/Zuse.html

Figure 3 - A reconstructed mechanical Z1 computer completed by Konrad Zuse in 1989. The original Z1 was constructed 1936 - 1938 in the living room of his parents’ apartment in Germany. The Z1 was not a full-fledged modern computer, like Zuse’s Z3 computer that became operational in May of 1941, because it read its program from a punched tape rather than storing it in the Z1’s mechanical memory. In that regard, the Z1 was more like a Turing Machine than are modern computers.



Figure 4 – Konrad Zuse with a reconstructed Z3 in 1961.


Figure 5 – Block diagram of the Z3 architecture.


Zuse's totally mechanical Z1 became operational in 1938. Zuse then went on to build his electrical Z3 computer with 2400 electromechanical telephone relays. The Z3 was the world’s very first full-fledged computer and became operational in May of 1941. The Z3 used a 22-bit word and had a total memory of 64 words. It only had two registers, but it could read in its programs via a punched tape. Because the Z3 used very slow electromechanical telephone relays for switches, it had a clock speed of 5.33 Hz, and it took about 3 seconds to multiply two large numbers together. Modern laptops have a clock speed of 2.5 - 3.5 GHz, so they are over half a billion times faster than the Z3. Electromechanical telephone relays have a switching speed of about 10⁻¹ seconds, while vacuum tubes are about 100,000 times faster with a switching speed of about 10⁻⁶ seconds. However, back in 1941, Zuse thought that building a computer with thousands of vacuum tubes would use too much electricity and would be too unreliable for a practical computer. But in the 1950s we actually did end up building computers with thousands of vacuum tubes.
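
The speed comparisons above are easy to check with a couple of lines of Python - the 3 GHz laptop clock is just an illustrative value of my own choosing:

# Back-of-envelope check of the speed comparisons in the text.
z3_clock = 5.33             # Hz
laptop_clock = 3.0e9        # an illustrative 3 GHz modern laptop
relay_switch_time = 1e-1    # seconds per switch for an electromechanical relay
tube_switch_time = 1e-6     # seconds per switch for a vacuum tube

print(f"laptop / Z3 clock ratio: {laptop_clock / z3_clock:.2e}")                       # about 5.6e8
print(f"vacuum tube speedup over relays: {relay_switch_time / tube_switch_time:.0e}")  # 1e5, i.e. 100,000x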



Figure 6 – The electrical relays used by the Z3 for switching were very large, very slow, used a great deal of electricity and generated a great deal of waste heat.

Figure 7 – In the 1950s, the electrical relays of the Z3 were replaced with vacuum tubes that were also very large, used lots of electricity and generated lots of waste heat too, but the vacuum tubes were 100,000 times faster than relays.

Figure 8 – Vacuum tubes contain a hot negative cathode that glows red and boils off electrons. The electrons are attracted to the cold positive anode plate, but there is a grid electrode between the cathode and anode plate. By changing the voltage on the grid, the vacuum tube can control the flow of electrons like the handle of a faucet. The grid voltage can be adjusted so that the electron flow is full blast, a trickle, or completely shut off, and that is how a vacuum tube can be used as a switch.

When I first changed careers to become an IT professional in 1979, I used to talk to the old-timers about the good old days of IT. They told me that when the operators began their shift on an old-time 1950s vacuum tube computer, the first thing they did was to crank up the voltage on the vacuum tubes to burn out the tubes that were on their last legs. Then they would replace the burned-out tubes to start the day with a fresh machine. So using slow electromechanical telephone relays for the Z3 was really not such a bad idea back in 1941. They also told me about programming the plugboards of electromechanical Unit Record Processing machines back in the 1950s by physically rewiring the plugboards. The Unit Record Processing machines would then process hundreds of punch cards per minute by routing the punch cards from machine to machine in processing streams.

Figure 9 – In the 1950s, Unit Record Processing machines like this card sorter were programmed by physically rewiring a plugboard.

Figure 10 – The plugboard for a Unit Record Processing machine.

In 1945, while Berlin was being bombed by over 800 bombers each day, Zuse worked on the Z4 and developed Plankalkül, the first high-level computer language, more than 10 years before the appearance of FORTRAN in 1956. Zuse was able to write the world’s first chess program with Plankalkül. And in 1950 his startup company Zuse-Ingenieurbüro Hopferau began to sell the world’s first commercial computer, the Z4, 10 months before the sale of the first UNIVAC I. However, the Z4 still used the very slow electromechanical relays, while the UNIVAC I primarily used vacuum tubes. The UNIVAC I was 25 feet by 50 feet in size and contained 5,600 vacuum tubes, 18,000 crystal diodes and 300 electromechanical relays, with a total memory of 12 K.

Figure 11 – The UNIVAC I was very impressive on the outside.

Figure 12 – But the UNIVAC I was a little less impressive on the inside.

The Z4 just could not stand up to such powerful hardware!

Not only is Konrad Zuse the first person to ever build a modern operational computer, but he is also responsible for the idea of using a network of computers as a model for the behavior of the physical Universe. In 1969, Konrad Zuse published Rechnender Raum, which translates into English as Calculating Space. An English translation of this short book can be downloaded at:

ftp://ftp.idsia.ch/pub/juergen/zuserechnenderraum.pdf

or can be viewed or downloaded at:

https://1drv.ms/b/s!AivHXwhqeXDEkEmjOQ3VBUnTPvae

In Rebooting the Cosmos: Is the Universe the Ultimate Computer?, Edward Fredkin explained that he found the original German version of Calculating Space in the MIT library and had it translated into English so that he could read it. After reading Calculating Space, Edward Fredkin contacted Konrad Zuse about his ideas that our Universe was a simulation running on a network of computers. Unfortunately, Konrad Zuse had to explain to Edward Fredkin that after he published Calculating Space, people stopped talking to him because they thought that he was some kind of "crackpot". Even so, later, Edward Fredkin invited Konrad Zuse to MIT to a conference hosted by Richard Feynman, with John Wheeler in attendance, to discuss his ideas. For more on the work of Richard Feynman see Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse and The Foundations of Quantum Computing.

This conference probably influenced John Wheeler's "it from bit" ideas, but my Internet searches cannot confirm that. For example, in 1998 John Wheeler stated, "it is not unreasonable to imagine that information sits at the core of physics, just as it sits at the core of a computer". Building upon his famous "it from bit" commentary, David Chalmers of the Australian National University has summarized Wheeler’s thoughts as:

"Wheeler (1990) has suggested that information is fundamental to the physics of the universe. According to this "it from bit" doctrine, the laws of physics can be cast in terms of information, postulating different states that give rise to different effects without actually saying what those states are. It is only their position in an information space that counts. If so, then information is a natural candidate to also play a role in a fundamental theory of consciousness. We are led to a conception of the world on which information is truly fundamental, and on which it has two basic aspects, corresponding to the physical and the phenomenal features of the world".

For Jürgen Schmidhuber's thoughts on the work of Konrad Zuse see these pages on his website:

Zuse's Thesis: The Universe is a Computer
http://people.idsia.ch/~juergen/digitalphysics.html

Computable Universes & Algorithmic Theory of Everything: The Computational Multiverse
http://people.idsia.ch/~juergen/computeruniverse.html

Later in life, Konrad Zuse took up art and began painting some very striking modernistic works. If you Google for images of "Konrad Zuse Paintings" you will find quite a few examples. Below is a Zuse painting of his concept of the Universe as a running program.

Figure 13 – Konrad Zuse's In the beginning was the code.

So is the Universe Really Software Running on a Cosmic Computer?
From the above material, we can see that Konrad Zuse, Edward Fredkin, Jürgen Schmidhuber and Nick Bostrom make the case that our Universe is indeed just one of many possible computer simulations running on some kind of cosmic computer. Seth Lloyd, on the other hand, leans more to the idea of the Universe itself being some kind of a quantum computer calculating how to behave. Now with softwarephysics, I have maintained more of a positivistic position. Recall that positivism is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. So with softwarephysics, we simply observe that the Universe appears to behave like software running on a cosmic computer and leave it at that. Recall that softwarephysics depicts software as a virtual substance, and relies on our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics, we use software to simulate the behavior of the Universe, while in softwarephysics, we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 78 years, through the uncoordinated efforts of over 50 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

The battle between realists and positivists goes all the way back to the beginning. It is generally thought that the modern Scientific Revolution of the 16th century began in 1543 when Nicolaus Copernicus published On the Revolutions of the Heavenly Spheres, in which he proposed his heliocentric theory holding that the Earth was not the center of the Universe, but that the Sun held that position and that the Earth and the other planets revolved about the Sun. A few years ago I read On the Revolutions of the Heavenly Spheres and found that it began with a very strange foreword that essentially said that the book was not claiming that the Earth actually revolved about the Sun; rather, the foreword proposed that astronomers may adopt many different models that explain the observed motions of the Sun, Moon, and planets in the sky, and so long as these models make reliable predictions, they don’t have to exactly match up with the absolute truth. Since the foreword did not anticipate space travel, it also implied that, because nobody would ever be able to see from above what was really going on, nobody would ever really know for sure anyway, so there was no need to get too bent out of shape over the idea of the Earth moving. I found this foreword rather puzzling and so disturbing that I almost put On the Revolutions of the Heavenly Spheres down. But a little further research revealed the true story. Before we get to that, however, below is the foreword to On the Revolutions of the Heavenly Spheres in its entirety. It is well worth reading because it perfectly encapsulates the ongoing philosophical clash between positivism and realism in the history of physics.

"To the Reader
Concerning the Hypotheses of this Work

There have already been widespread reports about the novel hypotheses of this work, which declares that the earth moves whereas the sun is at rest in the center of the universe. Hence certain scholars, I have no doubt, are deeply offended and believe that the liberal arts, which were established long ago on a sound basis, should not be thrown into confusion. But if these men are willing to examine the matter closely, they will find that the author of this work has done nothing blameworthy. For it is the duty of an astronomer to compose the history of the celestial motions through careful and expert study. Then he must conceive and devise the causes of these motions or hypotheses about them. Since he cannot in any way attain to the true causes, he will adopt whatever suppositions enable the motions to be computed correctly from the principles of geometry for the future as well as for the past. The present author has performed both these duties excellently. For these hypotheses need not be true nor even probable. On the contrary, if they provide a calculus consistent with the observations, that alone is enough. Perhaps there is someone who is so ignorant of geometry and optics that he regards the epicycle of Venus as probable, or thinks that it is the reason why Venus sometimes precedes and sometimes follows the sun by forty degrees and even more. Is there anyone who is not aware that from this assumption it necessarily follows that the diameter of the planet at perigee should appear more than four times, and the body of the planet more than sixteen times, as great as at apogee? Yet this variation is refuted by the experience of every age. In this science there are some other no less important absurdities, which need not be set forth at the moment. For this art, it is quite clear, is completely and absolutely ignorant of the causes of the apparent nonuniform motions. And if any causes are devised by the imagination, as indeed very many are, they are not put forward to convince anyone that they are true, but merely to provide a reliable basis for computation. However, since different hypotheses are sometimes offered for one and the same motion (for example, eccentricity and an epicycle for the sun’s motion), the astronomer will take as his first choice that hypothesis which is the easiest to grasp. The philosopher will perhaps rather seek the semblance of the truth. But neither of them will understand or state anything certain, unless it has been divinely revealed to him.

Therefore alongside the ancient hypotheses, which are no more probable, let us permit these new hypotheses also to become known, especially since they are admirable as well as simple and bring with them a huge treasure of very skillful observations. So far as hypotheses are concerned, let no one expect anything certain from astronomy, which cannot furnish it, lest he accept as the truth ideas conceived for another purpose, and depart from this study a greater fool than when he entered it.

Farewell."


Now here is the real behind-the-scenes story. Back in 1539 Georg Rheticus, a young mathematician, came to study with Copernicus as an apprentice. It was actually Rheticus who convinced the aging Copernicus to finally publish On the Revolutions of the Heavenly Spheres shortly before his death. When Copernicus finally turned over his manuscript for publication to Rheticus, he did not know that Rheticus would subcontract out the overseeing of the printing and publication of the book to a Lutheran theologian by the name of Andreas Osiander, and it was Osiander who anonymously wrote and inserted the infamous foreword. My guess is that Copernicus was a realist at heart who really did think that the Earth revolved about the Sun, while his publisher, who worried more about the public reaction to the book, took a more cautious positivistic position. I think that all scientific authors can surely relate to this story.

Another early example of the clash between positivism and realism can be found in Newton’s Principia (1687), in which he outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter:

"I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction."

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.

Conclusion
So for the purposes of softwarephysics, it really does not matter whether the Universe is "actually" a quantum computer calculating how to behave or "actually" some kind of cosmic software running on some kind of cosmic computer. The important thing is that the Universe does indeed seem to behave like software running on a computer, and that provides a very useful model for all of science to use. Perhaps such a model could provide some insights into Max Tegmark's Mathematical Universe Hypothesis, as I outlined in The Software Universe as an Implementation of the Mathematical Universe Hypothesis. The Mathematical Universe Hypothesis proposes that the Multiverse is composed of all possible mathematical structures, that our Universe is just one of them, and that these structures include all of the computable universes that can exist in software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston