Monday, August 26, 2019

Digital Physics and the Software Universe

I just watched a very interesting panel discussion from an old World Science Festival meeting that was held on Saturday, June 4, 2011, 8:00 PM - 9:30 PM:

Rebooting the Cosmos: Is the Universe the Ultimate Computer?
https://www.worldsciencefestival.com/videos/rebooting-the-cosmos-is-the-universe-the-ultimate-computer/

The panel discussion was moderated by John Hockenberry and featured Edward Fredkin, Seth Lloyd, Jürgen Schmidhuber and Fotini Markopoulou-Kalamara. The World Science Festival is an annual event hosted by string theorist Brian Greene, with World Science U as its companion online-learning site. At World Science U you can view many very interesting lectures and courses, and at the World Science Festival website you can view many very interesting panel discussions:

World Science U
http://www.worldscienceu.com/

World Science Festival
https://www.worldsciencefestival.com/

Edward Fredkin started working with computers back in 1956. In 1960, he wrote the first operating system and the first assembler for the DEC PDP-1. In 1968, he returned to academia and became a full professor at MIT. In 1990, he published the paper Digital Mechanics - An Informational Process Based on Reversible Universal Cellular Automata in the journal Physica D, in which he proposed that the physical Universe might be a cellular automaton programmed to act like physics, and in doing so launched the field of Digital Physics. Since then, Fredkin has broadened the field by renaming it Digital Philosophy. You can find his Digital Philosophy website at:

Digital Philosophy
http://www.digitalphilosophy.org/

On the Home page, he briefly defines Digital Philosophy as:

"What is Digital Philosophy?
Digital Philosophy (DP) is a new way of thinking about the fundamental workings of processes in nature. DP is an atomic theory carried to a logical extreme where all quantities in nature are finite and discrete. This means that, theoretically, any quantity can be represented exactly by an integer. Further, DP implies that nature harbors no infinities, infinitesimals, continuities, or locally determined random variables. This paper explores Digital Philosophy by examining the consequences of these premises.

At the most fundamental levels of physics, DP implies a totally discrete process called Digital Mechanics. Digital Mechanics[1] (DM) must be a substrate for Quantum Mechanics. Digital Philosophy makes sense with regard to any system if the following assumptions are true:

All the fundamental quantities that represent the state information of the system are ultimately discrete. In principle, an integer can always be an exact representation of every such quantity. For example, there is always an integral number of neutrons in a particular atom. Therefore, configurations of bits, like the binary digits in a computer, can correspond exactly to the most microscopic representation of that kind of state information.

In principle, the temporal evolution of the state information (numbers and kinds of particles) of such a system can be exactly modeled by a digital informational process similar to what goes on in a computer. Such models are straightforward in the case where we are keeping track only of the numbers and kinds of particles. For example, if an oracle announces that a neutron decayed into a proton, an electron, and a neutrino, it’s easy to see how a computer could exactly keep track of the changes to the numbers and kinds of particles in the system. Subtract 1 from the number of neutrons, and add 1 to each of the numbers of protons, electrons, and neutrinos.

The possibility that DP may apply to various fields of science motivates this study."
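To make the idea of a totally discrete, fully reversible informational process a little more concrete, below is a minimal sketch in Python of a second-order reversible cellular automaton, in the spirit of the reversible universal cellular automata mentioned in the title of Fredkin's paper. The particular local rule, lattice size and step counts are my own hypothetical choices for illustration only; the point is that the entire state is a finite collection of bits and that every step can be run backward exactly, so no information is ever lost:

import random

N = 32                                  # hypothetical lattice size

def f(left, center, right):
    # An arbitrary local rule on a 3-cell neighborhood (illustrative only).
    return left ^ (center | right)

def step(past, present):
    # Advance one tick: future[i] = f(neighborhood of present) XOR past[i].
    future = [f(present[(i - 1) % N], present[i], present[(i + 1) % N]) ^ past[i]
              for i in range(N)]
    return present, future              # the new (past, present) pair

def step_back(present, future):
    # Run the same rule in reverse to recover the previous state exactly.
    past = [f(present[(i - 1) % N], present[i], present[(i + 1) % N]) ^ future[i]
            for i in range(N)]
    return past, present

# Evolve a random initial condition forward 100 ticks, then backward 100 ticks.
past = [random.randint(0, 1) for _ in range(N)]
present = [random.randint(0, 1) for _ in range(N)]
start = (past[:], present[:])

state = (past, present)
for _ in range(100):
    state = step(*state)
for _ in range(100):
    state = step_back(*state)

print(state == start)                   # prints True: the evolution is exactly reversible

Because the XOR with the earlier state makes every step invertible, this little toy universe harbors none of the infinities or continuities that Digital Philosophy rules out, and its complete history can be recovered from any two consecutive states.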


While you are on his website, be sure to check out some of Edward Fredkin's publications at:

http://www.digitalphilosophy.org/index.php/essays/

In 2002, Seth Lloyd at MIT published The Computational Universe, in which he calculated the computing power of the entire physical Universe treated as one large quantum computer. You can read this fascinating paper at:

http://www.edge.org/3rd_culture/lloyd2/lloyd2_p2.html

Seth Lloyd is currently working on quantum computers at MIT and is the first quantum-mechanical engineer in MIT’s Mechanical Engineering department. He is recognized for proposing the first technologically feasible design for a quantum computer. In 2006 he published the book Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos, in which he contends that our Universe is a quantum mechanical Computational Universe that has been calculating how to behave from the very beginning. He came to this conclusion through his work on building practical quantum computers. During the course of his research, Seth Lloyd has learned how to talk to atoms in a quantum mechanical way. Through intimate dealings with atoms, he has found that atoms are constantly flipping quantum mechanical states in a controlled manner prescribed by quantum mechanics. Since a computer is simply a large number of switches that operate in a controlled manner, our Universe can therefore be thought of as a Computational Universe that is necessarily capable of computation. In fact, our current quest to build quantum computers can simply be viewed as an attempt to domesticate this natural tendency of our Universe to compute in a quantum mechanical manner. Seth Lloyd calculates that our section of the Computational Universe, which is defined by our current cosmic horizon and consists of all quantum particles out to a distance of 46 billion light years, has performed about 10^122 operations on 10^92 bits over the past 13.8 billion years. This domestication of the quantum mechanical behavior of our Computational Universe has already led to the construction of many trillions of classical computers.
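As a rough back-of-the-envelope check on the scale of those numbers, the Margolus-Levitin theorem limits any physical system with average energy E to at most 2E/(πħ) elementary operations per second. The little Python sketch below plugs in an assumed ~10^53 kg of ordinary matter within the cosmic horizon and an age of 13.8 billion years (round illustrative values of my own, not Lloyd's actual inputs) and lands within about an order of magnitude of his figure:

import math

# Rough, illustrative inputs (assumed round numbers, not Lloyd's exact values)
hbar = 1.055e-34          # reduced Planck constant in J*s
c = 3.0e8                 # speed of light in m/s
mass = 1e53               # ordinary matter within the cosmic horizon in kg (assumed)
age = 13.8e9 * 3.156e7    # age of the Universe in seconds

energy = mass * c**2                             # total rest-mass energy in J
ops_per_second = 2 * energy / (math.pi * hbar)   # Margolus-Levitin bound
total_ops = ops_per_second * age

print(f"about 10^{math.log10(total_ops):.0f} operations")   # on the order of 10^121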

Since Seth Lloyd proposes that our Universe is simply a vast quantum computer calculating how to perform, perhaps in 1,000 years when software has finally become the dominant form of self-replicating information on the planet and is running on huge networks of quantum computers, it will make no distinction between the “real” Universe and the “simulated” universes that it can easily cook up on its own hardware. Perhaps as we saw in Quantum Computing and the Many-Worlds Interpretation of Quantum Mechanics the software running on these vast networks of quantum computers of the future will come to realize that the Many-Worlds interpretation of quantum mechanics is indeed correct, and that the humans of long ago were simply a large collection of quantum particles constantly getting entangled or “correlated” with other quantum particles, and splitting off into parallel universes in the process. This constant splitting gave the long-forgotten humans the delusion that they were conscious beings with free will and led them to do very strange things, like look for similarly deluded entities.

Jürgen Schmidhuber is a renowned AI researcher and currently the Scientific Director of the Swiss AI Lab IDSIA (Istituto Dalle Molle di Studi sull'Intelligenza Artificiale). His Home Page has many interesting links:

Jürgen Schmidhuber's Home Page
http://people.idsia.ch/~juergen/

Fotini Markopoulou-Kalamara is one of the founding faculty members of the Perimeter Institute for Theoretical Physics and works on loop quantum gravity. Loop quantum gravity is a theory that tries to bridge the gap between Einstein's general relativity and quantum mechanics to produce a quantum theory of gravity. Nearly all of the current theories of physics are background-dependent theories that unfold upon a stage of pre-existing spacetime. For example, the Standard Model of particle physics and string theory both assume that there is a stage of pre-existing spacetime upon which they act to produce what we observe in our Universe. Loop quantum gravity does not have such a stage and is therefore background-independent. In loop quantum gravity, spacetime is quantized into a network of nodes called a spin network. The minimum distance between nodes is about one Planck length, roughly 10^-35 meters. Loop quantum gravity is a background-independent theory because the spin network can be an emergent property of the Universe that evolves with time. Similarly, Digital Physics is a background-independent theory because spacetime emerges as a quantized entity and is not a stage upon which physics acts.

Nick Bostrom’s Are You Living in a Computer Simulation? (2002) at:

http://www.simulation-argument.com/simulation.html

is also a good reference on this topic.

Rebooting the Cosmos: Is the Universe the Ultimate Computer? examines the idea that the physical Universe may essentially be running on a large network of quantum computers. The most interesting thing about this panel discussion was that midway through it, the participants brought up the initial work of Konrad Zuse on this topic. Recall that Konrad Zuse started working on building real computers back in 1936, the same year that Alan Turing of early computer science fame published On Computable Numbers, with an Application to the Entscheidungsproblem, the paper that introduced the mathematical concept of the Turing Machine, which today underlies the architecture of all modern computers. Alan Turing’s work was completely conceptual in nature. A Turing Machine was composed of a read/write head and an infinitely long paper tape. On the paper tape was stored a sequential series of 1s and 0s, and the read/write head could move back and forth along the paper tape in a motion based upon the 1s and 0s that it read. The read/write head could also write 1s and 0s to the paper tape. In the paper, Turing mathematically proved that such an arrangement could be used to encode any mathematical algorithm, like multiplying two very large numbers together and storing the result on the paper tape. In many ways, a Turing Machine is much like a ribosome reading mRNA and writing out the amino acids of a polypeptide chain that eventually folds up into an operational protein.

Figure 1 - A Turing Machine had a read/write head and an infinitely long paper tape. The read/write head could read instructions on the tape that were encoded as a sequence of 1s and 0s and could write the results of following those instructions back to the tape as a sequence of 1s and 0s.

Figure 2 – A ribosome read/write head behaves much like the read/write head of a Turing Machine. The ribosome reads an mRNA tape that was transcribed earlier from a section of DNA tape that encodes the information in a gene. The ribosome read/write head reads the A, C, G, and U nucleobases that code for amino acids three at a time. As each three-nucleobase codon is read from the mRNA tape, the ribosome writes out an amino acid to a growing polypeptide chain, as tRNA units bring in one amino acid at a time. The polypeptide chain then goes on to fold up into a 3-D protein molecule.
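To make the read/write head of Figure 1 concrete, here is a minimal Turing Machine simulator in Python. The rule table at the bottom is a hypothetical example of my own that simply inverts every 1 and 0 it scans and then halts on a blank; Turing's point was that rule tables of this kind, given enough tape, can encode any mathematical algorithm:

def run_turing_machine(program, tape, state="start", head=0, max_steps=1000):
    # program maps (state, symbol) -> (new_symbol, move, new_state)
    cells = dict(enumerate(tape))          # sparse tape; unwritten cells read as blank "_"
    for _ in range(max_steps):
        symbol = cells.get(head, "_")
        if (state, symbol) not in program:
            break                          # halt when no rule applies
        new_symbol, move, state = program[(state, symbol)]
        cells[head] = new_symbol           # write to the tape
        head += 1 if move == "R" else -1   # move the read/write head
    return "".join(cells[i] for i in sorted(cells))

# Hypothetical rule table: invert the bits from left to right, then halt on a blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run_turing_machine(flip_bits, "10110"))   # prints 01001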

In a sense, all modern computers are loosely based upon the concept of a Turing Machine. Turing did not realize it, but at the same time he was formulating the concept of a Turing Machine back in 1936, Konrad Zuse was constructing his totally mechanical Z1 computer in the living room of his parents’ apartment in Germany, and the Z1 really did use a paper tape to store the program and data that it processed, much like a Turing Machine. Neither one of these early computer pioneers had any knowledge of the other at the time. For more about how Konrad Zuse not only independently developed a physical implementation of many of Alan Turing’s mathematical concepts, but also implemented them in practical terms in the form of the world’s very first real computers, see the following article that was written in his own words:

http://ei.cs.vt.edu/~history/Zuse.html

Figure 3 - A reconstructed mechanical Z1 computer completed by Konrad Zuse in 1989. The original Z1 was constructed 1936 - 1938 in the living room of his parents’ apartment in Germany. The Z1 was not a full-fledged modern computer like Zuse’s Z3 computer that became operational in May of 1941, because the programs it read from a punched tape were not stored in the mechanical memory of the Z1. In that regard, the Z1 was more like a Turing Machine than modern computers are.



Figure 4 – Konrad Zuse with a reconstructed Z3 in 1961.


Figure 5 – Block diagram of the Z3 architecture.


Zuse's totally mechanical Z1 became operational in 1938. Zuse then went on to build his electrical Z3 computer with 2400 electromechanical telephone relays. The Z3 was the world’s very first full-fledged computer and became operational in May of 1941. The Z3 used a 22-bit word and had a total memory of 64 words. It only had two registers, but it could read in programs via a punched tape. Because the Z3 used very slow electromechanical telephone relays for switches, it had a clock speed of 5.33 Hz, and it took about 3 seconds to multiply two large numbers together. Modern laptops have a clock speed of 2.5 - 3.5 GHz, so they are nearly a billion times faster than the Z3. Electromechanical telephone relays have a switching speed of about 10^-1 seconds, while vacuum tubes are about 100,000 times faster, with a switching speed of about 10^-6 seconds. However, back in 1941, Zuse thought that building a computer with thousands of vacuum tubes would use too much electricity and would be too unreliable for a practical computer. But in the 1950s we actually did end up building computers with thousands of vacuum tubes.
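A quick sanity check in Python on the ratios quoted above, taking 3 GHz as an assumed midpoint of the modern laptop range:

z3_clock_hz = 5.33                # Z3 clock speed in Hz
laptop_clock_hz = 3.0e9           # roughly 3 GHz for a modern laptop (assumed midpoint)
print(laptop_clock_hz / z3_clock_hz)         # about 5.6e8, i.e. nearly a billion times faster

relay_switch_time = 1e-1          # electromechanical relay switching time in seconds
tube_switch_time = 1e-6           # vacuum tube switching time in seconds
print(relay_switch_time / tube_switch_time)  # about 100,000 times faster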



Figure 6 – The electrical relays used by the Z3 for switching were very large, very slow, used a great deal of electricity and generated a great deal of waste heat.

Figure 7 – In the 1950s, the electrical relays of the Z3 were replaced with vacuum tubes that were also very large, used lots of electricity and generated lots of waste heat too, but the vacuum tubes were 100,000 times faster than relays.

Figure 8 – Vacuum tubes contain a hot negative cathode that glows red and boils off electrons. The electrons are attracted to the cold positive anode plate, but there is a control grid between the cathode and the anode plate. By changing the voltage on the grid, the vacuum tube can control the flow of electrons like the handle of a faucet. The grid voltage can be adjusted so that the electron flow is full blast, a trickle, or completely shut off, and that is how a vacuum tube can be used as a switch.

When I first changed careers to become an IT professional in 1979, I used to talk to the old-timers about the good old days of IT. They told me that when the operators began their shift on an old-time 1950s vacuum tube computer, the first thing they did was to crank up the voltage on the vacuum tubes to burn out the tubes that were on their last legs. Then they would replace the burned-out tubes to start the day with a fresh machine. So using slow electromechanical telephone relays for the Z3 was really not such a bad idea back in 1941. They also told me about programming the plugboards of electromechanical Unit Record Processing machines back in the 1950s by physically rewiring the plugboards. The Unit Record Processing machines would then process hundreds of punch cards per minute by routing the punch cards from machine to machine in processing streams.

Figure 9 – In the 1950s Unit Record Processing machines like this card sorter were programmed by physically rewiring a plugboard.

Figure 10 – The plugboard for a Unit Record Processing machine.

In 1945, while Berlin was being bombed by over 800 bombers each day, Zuse worked on the Z4 and developed Plankalkuel, the first high-level computer language, more than 10 years before the appearance of FORTRAN in 1956. Zuse was able to write the world’s first chess program with Plankalkuel. And in 1950 his startup company Zuse-Ingenieurbüro Hopferau began to sell the world’s first commercial computer, the Z4, 10 months before the sale of the first UNIVAC I. However, the Z4 still used the very slow electromechanical relays, while the UNIVAC I primarily used vacuum tubes. The UNIVAC I was 25 feet by 50 feet in size and contained 5,600 vacuum tubes, 18,000 crystal diodes and 300 electromechanical relays, with a total memory of 12 K.

Figure 11 – The UNIVAC I was very impressive on the outside.

Figure 12 – But the UNIVAC I was a little less impressive on the inside.

The Z4 just could not stand up to such powerful hardware!

Not only was Konrad Zuse the first person ever to build a modern operational computer, but he was also responsible for the idea of using a network of computers as a model for the behavior of the physical Universe. In 1969, Konrad Zuse published Rechnender Raum, which translates into English as Calculating Space. An English translation of this short book can be downloaded at:

ftp://ftp.idsia.ch/pub/juergen/zuserechnenderraum.pdf

or can be viewed or downloaded at:

https://1drv.ms/b/s!AivHXwhqeXDEkEmjOQ3VBUnTPvae

In Rebooting the Cosmos: Is the Universe the Ultimate Computer?, Edward Fredkin explained that he found the original German version of Calculating Space in the MIT library and had it translated into English so that he could read it. After reading Calculating Space, Edward Fredkin contacted Konrad Zuse about his ideas that our Universe was a simulation running on a network of computers. Unfortunately, Konrad Zuse had to explain to Edward Fredkin that after he published Calculating Space, people stopped talking to him because they thought that he was some kind of "crackpot". Even so, Edward Fredkin later invited Konrad Zuse to a conference at MIT hosted by Richard Feynman, with John Wheeler in attendance, to discuss his ideas. For more on the work of Richard Feynman see Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse and The Foundations of Quantum Computing.

This conference probably influenced John Wheeler's "it from bit" ideas, but my Internet searches cannot confirm that. For example, in 1998 John Wheeler stated, "it is not unreasonable to imagine that information sits at the core of physics, just as it sits at the core of a computer". Building upon his famous "it from bit" commentary, David Chalmers of the Australian National University has summarized Wheeler’s thoughts as:

"Wheeler (1990) has suggested that information is fundamental to the physics of the universe. According to this "it from bit" doctrine, the laws of physics can be cast in terms of information, postulating different states that give rise to different effects without actually saying what those states are. It is only their position in an information space that counts. If so, then information is a natural candidate to also play a role in a fundamental theory of consciousness. We are led to a conception of the world on which information is truly fundamental, and on which it has two basic aspects, corresponding to the physical and the phenomenal features of the world".

For Jürgen Schmidhuber's thoughts on the work of Konrad Zuse see these pages on his website:

Zuse's Thesis: The Universe is a Computer
http://people.idsia.ch/~juergen/digitalphysics.html

Computable Universes & Algorithmic Theory of Everything: The Computational Multiverse
http://people.idsia.ch/~juergen/computeruniverse.html

Later in life, Konrad Zuse took up art and began painting some very striking modernistic works. If you Google for images of "Konrad Zuse Paintings" you will find quite a few examples. Below is a Zuse painting of his concept of the Universe as a running program.

Figure 13 – Konrad Zuse's In the beginning was the code.

So is the Universe Really Software Running on a Cosmic Computer?
From the above material, we can see that Konrad Zuse, Edward Fredkin, Jürgen Schmidhuber and Nick Bostrom make the case that our Universe is indeed just one of many possible computer simulations running on some kind of cosmic computer. Seth Lloyd, on the other hand, leans more to the idea of the Universe itself being some kind of a quantum computer calculating how to behave. Now with softwarephysics, I have maintained more of a positivistic position. Recall that positivism is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. So with softwarephysics, we simply observe that the Universe appears to behave like software running on a cosmic computer and leave it at that. Recall that softwarephysics depicts software as a virtual substance, and relies on our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics, we use software to simulate the behavior of the Universe, while in softwarephysics, we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 78 years, through the uncoordinated efforts of over 50 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

The battle between realists and positivists goes all the way back to the beginning of modern science. It is generally thought that the modern Scientific Revolution of the 16th century began in 1543 when Nicolaus Copernicus published On the Revolutions of the Heavenly Spheres, in which he proposed his heliocentric theory that held that the Earth was not the center of the Universe, but that the Sun held that position and that the Earth and the other planets revolved about the Sun. A few years ago I read On the Revolutions of the Heavenly Spheres and found that it began with a very strange foreword that essentially said that the book was not claiming that the Earth actually revolved about the Sun; rather, the foreword proposed that astronomers may adopt many different models that explain the observed motions of the Sun, Moon, and planets in the sky, and so long as these models make reliable predictions, they don’t have to exactly match up with the absolute truth. Since the foreword did not anticipate space travel, it also implied that since nobody would ever be able to see from above what is really going on, there was no need to get too bent out of shape over the idea of the Earth moving. I found this foreword rather puzzling and so disturbing that I almost put On the Revolutions of the Heavenly Spheres down. But a little further research revealed the true story. However, before we get to that, below is the foreword to On the Revolutions of the Heavenly Spheres in its entirety. It is well worth reading because it perfectly encapsulates the ongoing philosophical clash between positivism and realism in the history of physics.

"To the Reader
Concerning the Hypotheses of this Work

There have already been widespread reports about the novel hypotheses of this work, which declares that the earth moves whereas the sun is at rest in the center of the universe. Hence certain scholars, I have no doubt, are deeply offended and believe that the liberal arts, which were established long ago on a sound basis, should not be thrown into confusion. But if these men are willing to examine the matter closely, they will find that the author of this work has done nothing blameworthy. For it is the duty of an astronomer to compose the history of the celestial motions through careful and expert study. Then he must conceive and devise the causes of these motions or hypotheses about them. Since he cannot in any way attain to the true causes, he will adopt whatever suppositions enable the motions to be computed correctly from the principles of geometry for the future as well as for the past. The present author has performed both these duties excellently. For these hypotheses need not be true nor even probable. On the contrary, if they provide a calculus consistent with the observations, that alone is enough. Perhaps there is someone who is so ignorant of geometry and optics that he regards the epicycle of Venus as probable, or thinks that it is the reason why Venus sometimes precedes and sometimes follows the sun by forty degrees and even more. Is there anyone who is not aware that from this assumption it necessarily follows that the diameter of the planet at perigee should appear more than four times, and the body of the planet more than sixteen times, as great as at apogee? Yet this variation is refuted by the experience of every age. In this science there are some other no less important absurdities, which need not be set forth at the moment. For this art, it is quite clear, is completely and absolutely ignorant of the causes of the apparent nonuniform motions. And if any causes are devised by the imagination, as indeed very many are, they are not put forward to convince anyone that they are true, but merely to provide a reliable basis for computation. However, since different hypotheses are sometimes offered for one and the same motion (for example, eccentricity and an epicycle for the sun’s motion), the astronomer will take as his first choice that hypothesis which is the easiest to grasp. The philosopher will perhaps rather seek the semblance of the truth. But neither of them will understand or state anything certain, unless it has been divinely revealed to him.

Therefore alongside the ancient hypotheses, which are no more probable, let us permit these new hypotheses also to become known, especially since they are admirable as well as simple and bring with them a huge treasure of very skillful observations. So far as hypotheses are concerned, let no one expect anything certain from astronomy, which cannot furnish it, lest he accept as the truth ideas conceived for another purpose, and depart from this study a greater fool than when he entered it.

Farewell."


Now here is the real behind-the-scenes story. Back in 1539 Georg Rheticus, a young mathematician, came to study with Copernicus as an apprentice. It was actually Rheticus who convinced the aging Copernicus to finally publish On the Revolutions of the Heavenly Spheres shortly before his death. When Copernicus finally turned over his manuscript for publication to Rheticus, he did not know that Rheticus would subcontract the overseeing of the printing and publication of the book to a Lutheran theologian by the name of Andreas Osiander, and it was Osiander who anonymously wrote and inserted the infamous foreword. My guess is that Copernicus was a realist at heart who really did think that the Earth revolved about the Sun, while his publisher, who worried more about the public reaction to the book, took a more cautious positivistic position. I think that all scientific authors can surely relate to this story.

Another early example of the clash between positivism and realism can be found in Newton’s Principia (1687), in which he outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects is proportional to the product of their masses and inversely proportional to the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter:

"I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction."

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.
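For reference, the law that Newton states verbally above can be written in modern notation as

F = G \frac{m_1 m_2}{r^2}

where m_1 and m_2 are the two masses, r is the distance between them, and G is the gravitational constant, a symbol and a measured value that only came long after Newton's time.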

Conclusion
So for the purposes of softwarephysics, it really does not matter whether the Universe is "actually" a quantum computer calculating how to behave or "actually" some kind of cosmic software running on some kind of cosmic computer. The important thing is that the Universe does indeed seem to behave like software running on a computer, and that provides a very useful model for all of science to use. Perhaps such a model could provide some insights into Max Tegmark's Mathematical Universe Hypothesis, as I outlined in The Software Universe as an Implementation of the Mathematical Universe Hypothesis. The Mathematical Universe Hypothesis proposes that the Multiverse is composed of all possible mathematical structures, that our Universe is just one of them, and that these structures include all of the computable universes that can exist in software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, August 06, 2019

How to Study the Origin of Life on the Earth and Elsewhere in the Universe Right Here at Home

Over the past year, I have taken several excellent online courses on the origin of life on the Earth and elsewhere in the Universe. In all of these very interesting courses, I have noticed that as we go further and further back into deep time, it becomes ever harder to figure out exactly what might have happened. Indeed, this is very difficult to do because we only have one example of carbon-based life here on the Earth to examine. This is further complicated by the fact that none of these excellent courses have been able to definitively define exactly what life on the Earth is, nor what life elsewhere in the Universe might look like. It is very difficult to determine the origin of something that cannot even be properly defined. That is why I have recommended in many of my softwarephysics posts to first step up one level higher and begin by definitively defining the general concept of self-replicating information. Once that has been done, we can then view carbon-based life on the Earth as just one of many forms of self-replicating information. Then we can proceed to explore the general characteristics of all forms of self-replicating information and also some possible commonalities that might exist with their origins.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Over the past 4.56 billion years we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each new wave came to dominate the Earth:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet. For more on the above see A Brief History of Self-Replicating Information.

The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics:

1. All self-replicating information evolves over time through the Darwinian processes of inheritance, innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.

2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.

3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.

4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.

5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.

6. Most hosts are also forms of self-replicating information.

7. All self-replicating information has to be a little bit nasty in order to survive.

8. The defining characteristic of self-replicating information is the ability of self-replicating information to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic. That posting discusses Stuart Kauffman's theory of Enablement, in which living things are seen to exapt existing functions into new and unpredictable functions by discovering the “Adjacent Possible” of spring-loaded preadaptations.

By focusing on a definition of self-replicating information first and the common characteristics of all forms of self-replicating information, we eliminate the problems of trying to define “life” itself. In previous posts, I proposed that since carbon-based life on the Earth and software are both forms of self-replicating information, it only makes sense to look to the origin and early evolution of software for some clues to the origin and early evolution of carbon-based life. For example, in the SoftwarePaleontology section of SoftwareBiology, I explained how the evolution of software over the past 78 years, or 2.46 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, has closely followed the same path through Design Space as did carbon-based life on this planet over the past 4.0 billion years, in keeping with Simon Conway Morris's contention that convergence has played the dominant role in the evolution of life on Earth.

Software is now rapidly becoming the dominant form of self-replicating information on the planet and is having a major impact on mankind as it comes to predominance. We are living in one of those very rare times when a new form of self-replicating information comes to predominance. For biologists, this presents an invaluable opportunity because software has been evolving about 100 million times faster than living things over the past 2.46 billion seconds. And the evolution of software over that period of time is the only history of a form of self-replicating information that has actually been recorded as it happened. In fact, the evolutionary history of software has all occurred within a single human lifetime, and many of those humans are still alive today to testify as to what actually happened, something that those working on the origin of life on the Earth and its early evolution can only try to imagine.

As I saw in many of these courses, some are now trying to model the origin of carbon-based life on the Earth and elsewhere in the Universe with software. But why not just take advantage of the $10 trillion worldwide IT infrastructure that has been evolving all on its own for the past 2.46 billion seconds? Certainly, the biosphere of the Earth and this $10 trillion IT infrastructure are the most complicated information processing systems that we know of, billions of times more complicated than anything a research group could ever build. Take a look at A Lesson for IT Professionals - Documenting the Flowcharts of Carbon-Based Life in the KEGG Online Database for example. The simplest way to do so would be to do some fieldwork in the corporate IT department of a major corporation or governmental agency. The objective of this fieldwork would be to observe the complexities of supporting heavy-duty commercial software in action on a network of hundreds of physical or virtual servers. I think that observing the IT processes that are used to write new code and install it into production on a large network of servers, and then watching the emergent behaviors of the code under load, would help researchers to understand the origins of carbon-based life and its complex information processing networks of organic molecules.

And there are tons of data for researchers to work with. Most large-scale IT operations have a great deal of monitoring software in place that displays and records the performance of many thousands of software components under processing load. For example, when I retired in December of 2016, my employer had more than 500 Unix servers running the software for their external websites and the internal web-based applications used to run the business. At peak load, these Unix servers were running several billion Objects (think eukaryotic cells) in Java Virtual Machines (JVMs), and all of these Objects were sending messages to each other, like the cells in a complex multicellular organism.

During an outage, the Command Center of the IT Department will page out perhaps 10 people to join an outage conference call. Then, using the display consoles of the monitoring software on their laptops, the members of the outage call will all try to figure out what the heck is going on so that they can quickly fix the problem. Outages can cost anywhere from thousands to millions of dollars per second, depending on the business being conducted by the commercial software. Outages frequently happen in the middle of the night when new code goes into production, or during the following day when the new code hits peak load. That would be an example of a simple deterministic outage caused by a low-level code change. But many times an outage just happens all on its own for no apparent reason. The whole network of information processes just seems to go berserk. Naturally, such “out of the blue” outages greatly displease IT Management because IT Management always wants to know the root cause of an outage. I spent many years trying to tell IT Management that such emergent outages naturally arise in complex nonlinear networks far from thermal equilibrium, but that explanation never went over very well. Anyway, the monitoring software records tons of data that can be used later to try to find the root cause of the outage. And this monitoring software runs continuously, providing a continuous stream of data that a research team could put to good use.

For a description of what a heavy-duty IT infrastructure looks like see Software Embryogenesis. For a more detailed post on such a proposal see A Proposal for an Odd Collaboration to Explore the Origin of Life with IT Professionals. In that post, I proposed that it might be worthwhile for researchers working on the origin of life or astrobiology to collaborate with the department of their university that teaches business-oriented computer science and with the IT department of a local major corporation or government agency to add some IT professionals to their research teams to bring in some new insights to their research efforts.

Some may object to the idea of software being a form of self-replicating information because currently, software is a product of the human mind. But I think that objection stems from the fact that most people simply do not consider themselves to be a part of the natural world. Instead, most people consciously or subconsciously consider themselves to be a supernatural and immaterial spirit that is temporarily haunting a carbon-based body. For more on that see The Ghost in the Machine the Grand Illusion of Consciousness.

In order for evolution to take place, we need all three Darwinian processes at work – inheritance, innovation and natural selection. And that is the case for all forms of self-replicating information, including carbon-based life, memes and software. Currently, software is being written and maintained by human programmers, but that will likely change in the next 10 – 50 years when the Software Singularity occurs and AI software becomes able to write and maintain software better than human programmers. Even so, one must realize that human programmers are also just machines with a very complicated and huge neural network of neurons that has been trained with very advanced Deep Learning techniques to code software. Nobody learned how to code software sitting alone in a dark room. All programmers inherited the memes for writing software from teachers, books, other programmers or by looking at the code of others. Also, all forms of selection are “natural” unless they are made by supernatural means. So a programmer pursuing bug-free software by means of trial and error is no different from a cheetah deciding upon which gazelle in a herd to pursue.

Conclusion
In The Danger of Believing in Things, I discussed what can happen when a science gets "stuck" on a problem that seems impossible to solve. Sometimes the best thing to do when apparently "stuck" is to simply walk away from the problem for a bit and seek the advice of others, especially others with an entirely different perspective of the problem at hand.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston