Friday, April 15, 2011

Is the Universe Fine-Tuned for Self-Replicating Information?

This posting will focus on the three forms of self-replicating information on this planet – genes, memes, and software – and the apparent fine-tuning of the Universe that makes all three possible. As we saw in Self-Replicating Information, all forms of self-replicating information share a common set of properties, resulting from their common struggle with the second law of thermodynamics in a nonlinear Universe, that allow us to learn about one by examining the others. As I pointed out in The Origin of Software the Origin of Life, success for our current SETS program – the Search for ExtraTerrestrial Software – seems to have a precondition: intelligent beings must first emerge in the Universe and form a scaffolding upon which software can later arise and ultimately begin its exploration of our galaxy. Since the emergence of intelligent beings is apparently so important for the subsequent emergence of software, let us start there.

For the current stellar-dominant phase in the evolution of our Universe, the emergence of intelligent beings seems to be contingent upon the emergence of intelligent carbon-based life forms, and as many physicists and cosmologists have pointed out, our Universe does indeed seem to be strangely fine-tuned for the emergence of carbon-based life forms. If you change any of the 20+ constants of the Standard Model of particle physics by just a few percent or less, you end up with a universe incapable of sustaining intelligent carbon-based beings. Similarly, in 1969 Robert Dicke noted that the amount of matter and energy in the Universe was remarkably close to the amount required for a flat spacetime. If you run today’s near flatness of spacetime back to the time of the Big Bang, spacetime would have had to have been flat to within one part in 10^60! This is known as the “flatness problem” in cosmology. If spacetime had just a very slight positive curvature at the time of the Big Bang, then our Universe would have quickly expanded and then recollapsed back into a singularity, leaving not enough time to form stars or living things. Similarly, if spacetime had a very slight initial negative curvature, it would have rapidly expanded – our Universe would have essentially blown itself to bits, forming a very thinly populated vacuum that could not have formed stars or living things.
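To see roughly where a number like one part in 10^60 comes from, here is a rough back-of-the-envelope sketch in Python. It is only an illustration, and it assumes the standard textbook result that the fractional departure from perfect flatness grows roughly in proportion to t during the radiation era and as t^(2/3) during the matter era, together with round order-of-magnitude values for the relevant times, while ignoring the recent dark-energy era:

# A rough back-of-the-envelope sketch, assuming the standard textbook scalings
# for the flatness parameter: |Omega - 1| grows roughly as t^(2/3) while matter
# dominates and as t while radiation dominates (the recent dark-energy era is
# ignored). All of the numbers below are round, order-of-magnitude values.
t_now     = 4.3e17    # present age of the Universe in seconds (~13.7 billion years)
t_equal   = 1.5e12    # approximate time of matter-radiation equality in seconds
t_planck  = 5.4e-44   # Planck time in seconds
omega_now = 0.01      # a generous bound on |Omega - 1| today

# Run today's near-flatness backwards in time
omega_equal  = omega_now * (t_equal / t_now) ** (2.0 / 3.0)   # back through the matter era
omega_planck = omega_equal * (t_planck / t_equal)             # back through the radiation era

print(f'|Omega - 1| at matter-radiation equality ~ {omega_equal:.1e}')
print(f'|Omega - 1| at the Planck time           ~ {omega_planck:.1e}')
# about 1e-61 with these round numbers, comfortably within the one part in 10^60 quoted above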

The apparent fine-tuning of our Universe for intelligent carbon-based life forms is troubling for most theorists because it is difficult to explain. For the religious at heart, the explanation is quite simple – there must be a deity who created the Universe deliberately fine-tuned for intelligent carbon-based life. But that explanation just pushes the problem back one level because then you have to explain where the deity came from and why a hypothetical super-Universe, beyond our own Universe, was fine-tuned to bring forth such a deity in the first place. This all goes against the pantheistic grain of many scientists, who have a bit of a spiritualistic bent at heart, and who share the philosophical inclinations of both Spinoza and Einstein. An alternative explanation that the scientific community seems to be slowly embracing is Brandon Carter’s Anthropic Principle (1973), which comes in several flavors, the least contentious being the weak version of the Anthropic Principle:

The Weak Anthropic Principle - Intelligent beings will only find themselves existing in universes capable of sustaining intelligent beings.

At first glance, this proposition seems to just state the obvious, but it does have implications. If we are indeed intelligent beings, then our Universe must be fine-tuned to sustain intelligent beings, and that fine-tuning must extend throughout the entire Universe as well, allowing for the emergence of other intelligent beings elsewhere, but why? The two most common explanations offered for the Weak Anthropic Principle are that either our Universe was intentionally designed with such fine-tuning in mind, or our Universe is just one of many possible universes in a multiverse. Proponents of the multiverse explanation go on to explain that most universes in the multiverse do not have the necessary physics to sustain intelligent beings and are quite sterile, but, as with all lottery losers, nobody is in those sterile universes to wonder why there is no intelligent life; only the exceedingly rare lottery winners sit in stunned amazement, holding winning tickets to a fine-tuned universe.

But is it proper to infer an infinite multiverse of universes from the mere fact that we exist? Fortunately, we have an historical analogy that can shed some light on the subject. Towards the end of the 16th century, Giordano Bruno took Copernicus’s heliocentric model of the Solar System (1543) one step further. Giordano Bruno was an early pantheist who conceived of a deity that was one with the Universe, rather than a deity existing in a remote heaven-based universe beyond our own physical Universe. Giordano Bruno thought that such an infinite pantheistic deity must necessarily exist in an infinite Universe, with no beginning and no end, and infinite in both space and time. Like Copernicus, Bruno believed that the planets orbited the Sun, but additionally, Bruno figured that the stars must simply be distant suns with their own sets of planets and that these distant planets could harbor alien life forms and intelligent beings as well. Clearly, Giordano Bruno was about 400 years ahead of his time, which put him in conflict with many of the prevailing meme-complexes of the day, and sadly, he was burned at the stake by the Roman Inquisition on February 17, 1600. But the implications of his cosmology are remarkably similar to those of the Weak Anthropic Principle. Once you posit a Universe containing a near-infinite number of planets, chances are that some of those planets will be capable of sustaining intelligent carbon-based life forms by sheer luck, and intelligent carbon-based life forms will only find themselves existing on such planets. This seems to be our current situation here on Earth – we exist on one of the very rare lucky planets in our Universe. So without even the benefit of the crude telescope that Galileo was about to turn upon the night sky in 1610, Giordano Bruno was able to infer that there must be a large, or near-infinite, number of planets with intelligent beings in our Universe that we cannot see, but that must exist, nonetheless, because we exist.

Another appealing feature of a multiverse, in addition to eliminating the need for our apparently fine-tuned Universe to be intentionally designed, is that it addresses a problem in string theory. The Standard Model of particle physics was a big improvement over the 400+ unrelated “fundamental” particles discovered in the 1960s, but when you add up all of the various colored particles and their antimatter twins, you end up with about 63 particles (see The Foundations of Quantum Computing for details). Many physicists think that our Universe simply cannot be so complicated and that there has to be a simpler model. One promising model is called supersymmetric string theory, or string theory for short. String theory contends that all of the particles of the Standard Model are actually strings or membranes vibrating in an 11-dimensional Universe. The strings or membranes are made of pure energy, or perhaps pure mathematics, and vibrate with different frequencies. The different vibrational frequencies account for the differing physical properties of the particles, just as the different frequencies of a vibrating guitar string account for the differing notes emanating from a guitar. According to string theory, at the time of the Big Bang, three of the spatial dimensions suddenly began to expand, and so did the dimension of time. The remaining 7 spatial dimensions stayed microscopically small, beyond our ability to observe them. String theory research has dominated physics for the past 20 years, but unfortunately string theory is now running on pure mathematics, without the benefit of the third step in the Scientific Method - experimental verification of the theory using inductive empiricism. The predicted vibrating strings and membranes, however, are so small that they are far beyond the reach of our current accelerators by many orders of magnitude. Now the initial hope for string theory was that there would be one, and only one, self-consistent formulation of the theory and that the Standard Model and its 20+ constants would naturally fall out from it. But that is not what has happened. Over the years, it has become evident that one can form a nearly infinite number of universes with string theory by slightly changing the geometry of the dimensions in which the strings and membranes vibrate. Leonard Susskind calls this The Cosmic Landscape (2006) in his book of the same title. Like many cosmologists, Susskind proposes that there are an infinite number of other universes forming a multiverse, with each universe having its own physics determined by the number and geometry of its dimensions.

So nowadays in both physics and cosmology, it seems as though there is a lot riding on the Weak Anthropic Principle and its implication that there is a near-infinite multiverse of universes out there, with possibly a near-infinite number of ways of doing physics. But the Weak Anthropic Principle makes this inference on the basis that the apparent fine-tuning of our Universe for the existence of intelligent beings is exceptional. But what if that were not the case? What if the existence of intelligent beings in a universe, just about any kind of universe at all, were the rule and not the exception, and a relatively easy accomplishment to achieve? In this posting, I would like to make that very proposition, by introducing the Very Weak Anthropic Principle:

The Very Weak Anthropic Principle - Intelligent beings will only find themselves existing in a universe capable of sustaining self-replicating information, and self-replicating information will only be found in a universe that begins in a low-entropy initial state.

To understand the implications of the Very Weak Anthropic Principle, it would be a good time to review the concepts of entropy and the second law of thermodynamics found in Entropy - the Bane of Programmers and The Demon of Software, and also the role that both played in the evolution of living things and software found in SoftwareBiology and Self-Replicating Information. Additional insights can also be found in From Eternity to Here (2010) by Sean Carroll, which has the subtitle of The Quest For the Ultimate Theory of Time, but for our purposes serves as a spectacular study of the second law of thermodynamics and entropy and their far-reaching roles in the evolution of our Universe. The central aim of this book is to focus attention on why our Universe started out with such a very low entropy at the time of the Big Bang, and that is fundamental to understanding the Very Weak Anthropic Principle, so I will be making quite a few references to it.

In Entropy - the Bane of Programmers we saw that thermodynamics was an outgrowth of the desire by physicists and engineers in the 19th century to better understand and improve upon steam engines. Thermodynamics describes the behaviors of macroscopic systems, like steam engines, in terms of changes in their macroscopic properties, like changes in their pressures, temperatures, and volumes, as these macroscopic systems operate. We also explored the second law of thermodynamics, as it was first proposed by Rudolf Clausius in 1850, in terms of objects cooling off and the smoothing out of differences in bulk matter, like the mixing of gases. Clausius characterized this general running down of systems in terms of a quantity he called entropy, which he defined in terms of the flow of heat from one object to another. The second law of thermodynamics stated that heat could flow spontaneously from a hot object to a cooler object, but not the other way around. In the 19th century, thermodynamics and Newtonian mechanics were considered to be two entirely different domains within physics and unrelated in any way. Thermodynamics was applicable to the vagaries of steam engines and boilers, while Newtonian mechanics made sense of the planetary motions about the Sun and the flight of cannon balls in time of war. Furthermore, the second law of thermodynamics was considered to be a “real” law of the Universe and just as sacrosanct as Newton’s three laws of motion. However, in The Demon of Software, we saw how later in the 19th century Ludwig Boltzmann tried to unify thermodynamics with Newtonian mechanics by creating a new branch of physics known as statistical mechanics. With statistical mechanics, Boltzmann tried to demonstrate that all of thermodynamics could be derived from Newtonian mechanics by simply considering bulk matter to be composed of a large collection of molecules constantly bouncing around and following Newton’s three laws of motion in collisions at the microscopic level. One of the difficulties that Boltzmann faced with this new approach to thermodynamics was that now the second law of thermodynamics took on a statistical nature. The second law was no longer a sacred law of physics, but simply a very good bet. This bothered many 19th-century physicists, who considered the second law of thermodynamics to be just as sacrosanct as Newton’s three laws of motion.

To highlight Boltzmann’s probabilistic approach to statistical mechanics we explored the concepts of the second law of thermodynamics and entropy in terms of poker. We equated the different kinds of poker hands with the concept of a macrostate. For example, a K-K-K-4-4 would constitute the macrostate of a full house. For any given poker hand, or macrostate, like a full house, there are a number of microstates that yield the macrostate. So for the macrostate of a full house, a K-K-K-4-4, a J-J-J-9-9, and a 7-7-7-2-2 would all be microstates of the full house macrostate. Following the work of Ludwig Boltzmann (1872), we found that for any given poker hand, or macrostate, we could calculate its entropy by applying Boltzmann’s famous equation:

S = k ln(N)

S = entropy
N = number of microstates
k = Boltzmann’s constant

For our poker analogy, we set Boltzmann’s constant to k = 1, since k is just a “fudge factor” used to get the units of entropy using Boltzmann’s equation to match those used by the thermodynamic formulas for entropy.

We also used Leon Brillouin’s concept of information to calculate the amount of information in a particular poker hand, or macrostate, as the difference between the maximum possible entropy of all poker hands and the entropy of the hand itself:

Information = Si - Sf
Si = initial entropy (the maximum possible entropy, before any hand is dealt)
Sf = final entropy (the entropy of the hand, or macrostate, that you are holding)
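
To make Boltzmann’s equation and Brillouin’s definition concrete, here is a minimal sketch in Python. It sets k = 1, as in the poker analogy, and uses the standard counts of 5-card poker hands for the number of microstates in each macrostate, which reproduce the entropy and probability figures quoted below:

import math

# Standard counts of 5-card poker hands: the number of microstates N for each macrostate
MICROSTATES = {
    'royal flush': 4,
    'straight flush': 36,
    'four of a kind': 624,
    'full house': 3744,
    'flush': 5108,
    'straight': 10200,
    'three of a kind': 54912,
    'two pair': 123552,
    'one pair': 1098240,
    'high card': 1302540,
}
TOTAL_HANDS = math.comb(52, 5)      # 2,598,960 possible 5-card hands
S_MAX = math.log(TOTAL_HANDS)       # maximum possible entropy = 14.7706235...

def entropy(macrostate, k=1.0):
    """Boltzmann's equation S = k ln(N), with k set to 1 as in the poker analogy."""
    return k * math.log(MICROSTATES[macrostate])

def information(macrostate):
    """Brillouin information: the drop from the maximum possible entropy to the hand's entropy."""
    return S_MAX - entropy(macrostate)

for hand in ('royal flush', 'full house', 'one pair'):
    p = MICROSTATES[hand] / TOTAL_HANDS
    print(f'{hand:12s}  S = {entropy(hand):10.7f}   '
          f'Information = {information(hand):10.7f}   P = {p:.7f}')
# The full house line reproduces the entropy of 8.2279098 and the
# probability of 0.00144 quoted below.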

The entropy of a macrostate defines its degree of disorder and its probability of happening. Macrostates with little entropy, like a straight flush, have a great deal of order but are very unlikely because they have so few microstates. Similarly, macrostates with lots of entropy, like a pair, have little order but are much more likely to occur because they have a huge number of possible microstates. At any given time, a system, like the cards you are currently holding in a poker game, will be in some macrostate defined by a particular microstate. So let’s say that you are holding the macrostate of a full house defined by a microstate of Q-Q-Q-7-7. From the table in The Demon of Software, we see that a full house has an entropy of 8.2279098 and a probability of occurring of 0.00144. Now I am going to change the rules of poker again. When it comes time to draw cards, you can draw one, two, or three cards, but not only does the dealer deal you the new cards, he also chooses the cards that you discard! So you put all five cards face down on the table, tell the dealer how many cards you wish to draw, and then the dealer deals out your draw cards and discards an equal number of cards from your hand at random. So what do you do? From the table in The Demon of Software, we see that your full house macrostate with a microstate of Q-Q-Q-7-7 already has a relatively low entropy and a low probability of occurring. The odds are that if you draw even a single card, the entropy of your hand will increase, and you will end up with a lower-ranked hand, like two pair or three of a kind. Of course, you could get really lucky and draw another Q, while the dealer discards one of your 7s, but that would be a real long shot. This, then, is the essence of the second law of thermodynamics in Boltzmann’s statistical mechanics. Systems tend to increase in entropy, not because of some fundamental law of the Universe, but simply because high-entropy macrostates (low-ranked poker hands) with lots of microstates are much more likely to occur than low-entropy macrostates (high-ranked poker hands) with few microstates. In simple terms, you are more likely to be dealt a pair than a straight flush simply because there are more ways to be dealt a pair than to be dealt a straight flush. Thus in Boltzmann’s view, the second law of thermodynamics reduces to just being a really safe bet that entropy will increase and not a fundamental law that it always must. In very rare cases, then, the entropy of an isolated system can spontaneously decrease, in apparent violation of the second law of thermodynamics.
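
The claim that drawing a card is overwhelmingly likely to raise the entropy of the Q-Q-Q-7-7 full house is easy to check by brute force. The following sketch is just an illustration of the modified drawing rule described above (the dealer deals you one new card and discards one of your cards at random), again assuming the standard 5-card hand counts with k = 1:

import math, random
from collections import Counter

RANKS, SUITS = '23456789TJQKA', 'cdhs'
DECK = [r + s for r in RANKS for s in SUITS]

# Boltzmann entropy S = ln(N) of each macrostate, using the standard counts
# of 5-card poker hands as the number of microstates N (k = 1)
S = {name: math.log(n) for name, n in {
    'royal flush': 4, 'straight flush': 36, 'four of a kind': 624,
    'full house': 3744, 'flush': 5108, 'straight': 10200,
    'three of a kind': 54912, 'two pair': 123552,
    'one pair': 1098240, 'high card': 1302540}.items()}

def classify(hand):
    """Name the macrostate of a 5-card hand."""
    values = sorted(RANKS.index(c[0]) for c in hand)
    counts = tuple(sorted(Counter(values).values(), reverse=True))
    flush = len({c[1] for c in hand}) == 1
    straight = (values == list(range(values[0], values[0] + 5))
                or values == [0, 1, 2, 3, 12])          # A-2-3-4-5 wheel
    if flush and straight:
        return 'royal flush' if values[0] == 8 else 'straight flush'
    if counts == (4, 1):       return 'four of a kind'
    if counts == (3, 2):       return 'full house'
    if flush:                  return 'flush'
    if straight:               return 'straight'
    if counts == (3, 1, 1):    return 'three of a kind'
    if counts == (2, 2, 1):    return 'two pair'
    if counts == (2, 1, 1, 1): return 'one pair'
    return 'high card'

rng = random.Random(2011)
start = ['Qc', 'Qd', 'Qh', '7c', '7d']        # the Q-Q-Q-7-7 full house
s_start = S[classify(start)]                  # 8.2279098...
trials, increases = 100_000, 0
for _ in range(trials):
    # Modified drawing rule: you ask for one card, the dealer deals it and
    # discards one card from your hand at random
    kept = rng.sample(start, 4)
    new_card = rng.choice([c for c in DECK if c not in start])
    if S[classify(kept + [new_card])] > s_start:
        increases += 1
print(f'Entropy increased after drawing one card in '
      f'{100.0 * increases / trials:.1f}% of {trials} trials')
# roughly 94% of the time - which is why betting on the second law is such a safe bet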

The beauty of Boltzmann’s statistical approach to the second law of thermodynamics is that it is the only “law” in physics that we really understand at a fundamental level because it really is not a “law” at all in the same sense as the other “laws” of classical 19th century physics, which by definition could not be violated. Recall that in the 20th century we learned that all of the other “laws” of the Universe are really just effective theories in physics that are approximations of reality. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics works very well for objects moving in weak gravitational fields at less than 10% of the speed of light and which are larger than a very small mote of dust. For things moving at high velocities, or in strong gravitational fields, we must use relativity theory, and for very small things like atoms, we must use quantum mechanics. All of the current theories of physics, such as Newtonian mechanics, classical electrodynamics, thermodynamics, the special and general theories of relativity, quantum mechanics, and quantum field theories like QED and QCD are just effective theories that are based upon models of reality, and all of these models are approximations – all of these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply (see Model-Dependent Realism - A Positivistic Approach to Realism for more details).

Boltzmann’s new statistical concept of the second law of thermodynamics also addressed a bothersome problem for 19th-century astronomy. At the time, physicists and astronomers thought that the Universe was infinitely old. In those early days, nobody really knew what was powering the Universe, but they did realize that the Universe was a dynamical system degrading some kind of low-entropy potential energy into the high-entropy energy of heat that was constantly moving to lower temperatures because of the second law of thermodynamics. For example, nobody knew what was powering the Sun, but they did know that the Sun was giving off huge amounts of electromagnetic energy in the form of sunlight. When the sunlight reached the Earth and was absorbed, it heated up both the land and sea, which then radiated infrared electromagnetic energy back into space with a longer wavelength and a greater entropy. At each step in the process, the first law of thermodynamics guaranteed that all of the energy was conserved, none of it was lost, and no additional energy spontaneously appeared on its own. But at the same time, the second law of thermodynamics demanded that the entropy of the energy had to increase at each step, and when 19th-century physicists performed experiments that simulated these processes in the lab, that is indeed what they found. So if the Universe were infinitely old, then according to the second law of thermodynamics, all of the energy in the Universe should have already been converted into heat energy at a very low constant temperature, and the Universe should be in a state of unchanging equilibrium, with a maximum of entropy. In classical 19th-century thermodynamics, once an isolated system has attained a state of maximum entropy, it can no longer change and is essentially dead, and in the 19th century, this was known as the “Heat Death” of the Universe. To address this issue, Boltzmann came up with his own version of the Weak Anthropic Principle. Since intelligent beings could not possibly exist in an unchanging Universe at equilibrium and in a state of maximum entropy, we must obviously be living in a very strange and fluky tiny portion of a much larger Universe. Most of the Universe would indeed be in equilibrium, at a maximum entropy, and in a “Heat Death” condition, but our little portion of the Universe must be a statistical fluke, at a much lower entropy, that spontaneously came to be out of the random motions of molecules in our patch of the Universe. It would be like the spontaneous unmixing of two gases that the classical second law of thermodynamics forbids, but on a much grander scale.

Figure 1 – The second law of thermodynamics predicts that two unmixed gases will mix into an unchanging homogeneous mixture at equilibrium with a maximum of entropy (click to enlarge)



Figure 2 – Once a state of maximum entropy has been attained the mixture will no longer change as it enters a “Heat Death” macroscopic condition (click to enlarge)



Figure 3 – But Boltzmann’s new statistical view of the second law of thermodynamics does allow a mixed gas to spontaneously unmix on very rare occasions (click to enlarge)

Although the classical second law of thermodynamics forbids such processes, in Boltzmann’s new statistical view of the second law of thermodynamics, it is very unlikely, but still possible, for a tiny patch of an infinitely large and infinitely old Universe to spontaneously evolve into a low-entropy state such as ours. It would be like starting off with a poker hand of “nothing” with an entropy of 14.7706235 and after drawing three cards ending up with a royal flush with an entropy of 1.3862944. It would take a very long time to be so fortunate, but if the Universe were infinitely old, there would be plenty of time for it to have happened, and for it to have happened an infinite number of times in the past.

Now with all that background material behind us, what does it take for self-replicating information to arise? I would like to argue that it does not take that much at all. All it requires is the existence of a second law of thermodynamics and a low-entropy initial state for a universe to bring forth self-replicating information in some fashion. A second law of thermodynamics is necessary for the emergence of self-replicating information because the second law is the driving force behind the Darwinian mechanisms of innovation and natural selection that favor self-replicating information over information that does not self-replicate and allow it to emerge and evolve. The second law guarantees that some level of copying errors will occur whenever self-replicating information replicates, resulting in mutations that, on rare occasions, provide for beneficial innovations. The second law also guarantees that the low-entropy materials used to encode self-replicating information and the free energy necessary to replicate it will be in short supply, since the necessary building materials will have a tendency to degrade into high-entropy waste materials, and the free energy will tend to degrade into useless heat energy. The scarcity of these items creates a struggle for existence amongst the competing forms of self-replicating information, leading to the Darwinian mechanism of natural selection. After all, if it were not for the second law of thermodynamics, and food and shelter could spontaneously emerge out of nothing, we would all be fat and happy bacteria today! Finally, because in all possible universes there will always be more microstates for a disordered high-entropy macrostate than for an ordered low-entropy macrostate, all universes must necessarily have a second law of thermodynamics, no matter what physics they might be running on at the moment. It’s just in the cards that all possible universes must have a second law of thermodynamics. Consequently, all that is really needed for the existence of intelligent beings is for a universe to begin in an initial state of low entropy.

Now it looks like we have nearly finished our derivation of the Very Weak Anthropic Principle. Since all possible universes must contain a second law of thermodynamics, all we have to do is explain why our Universe began in a low-entropy state. Frequently in textbooks on thermodynamics, it is argued that since our Universe is currently not in a state of maximum entropy, it must have been in a state of much lower entropy in the distant past because entropy always has to increase with time. In From Eternity to Here Sean Carroll points out that this is not necessarily true. It all has to do with the conservation of information and reversible processes. This time I am not talking about Leon Brillouin’s concept of information as a difference in entropies, but rather the “other” concept of information that I mentioned in The Demon of Software in regard to black holes conserving information. A reversible process is a process that can be run backwards in time to return the Universe to the state it had before the process began, as if the process had never happened in the first place. For example, the collision between two molecules at low energy is a reversible process that can be run backwards in time to return the Universe to its original state because Newton’s laws of motion are reversible. Knowing the position of each molecule at any given time and also its momentum, a combination of its speed, direction, and mass, we can predict where each molecule will go after a collision between the two, and also where each molecule came from before the collision using Newton’s laws of motion. For a process such as this to be classified as reversible, operating under reversible physical laws, the information required to return the system to its initial state cannot be destroyed, no matter how many collisions might occur.

Figure 4 – The collision between two molecules at low energy is a reversible process because Newton’s laws of motion are reversible (click to enlarge)

Currently, all of the effective theories of physics, what many people call the “laws” of the Universe, are indeed reversible, except for the second law of thermodynamics, but that is because, as we saw above, the second law is really not a fundamental “law” of the Universe at all. In order for a law of the Universe to be reversible, it must conserve information. That means that two different initial microstates cannot evolve into the same microstate at a later time. For example, in the collision between the blue and pink molecules in Figure 4, the blue and pink molecules both begin with some particular position and momentum one second before the collision and end up with different positions and momenta at one second after the collision. In order for the process to be reversible, and Newton’s laws of motion to be reversible too, this final state has to be unique. A different set of identical blue and pink molecules starting out with different positions and momenta one second before the collision could not end up with the same positions and momenta one second after the collision as the first set of blue and pink molecules. If that were to happen, then one second after the collision, we would not be able to tell what the original positions and momenta of the two molecules were one second before the collision, since there would now be two possible alternatives, and we would not be able to uniquely reverse the collision. We would not know which set of positions and momenta the blue and pink molecules originally had one second before the collision, and the information required to reverse the collision would be destroyed.
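
Here is a minimal one-dimensional sketch of this reversibility, under the simplifying assumption of two equal-mass molecules colliding elastically on a line. Run the dynamics forward, flip the velocities, run the very same dynamics forward again, and the molecules land right back where they started:

# Two equal-mass molecules drift toward each other, collide elastically, and
# separate. Because the dynamics is reversible, flipping both velocities and
# running the very same dynamics forward again brings the molecules back to
# their starting positions - no information about the initial state is lost.
DT = 1.0 / 128          # time step chosen as an exact binary fraction

def step(x1, v1, x2, v2):
    """Advance two equal-mass point molecules on a line by one time step."""
    x1, x2 = x1 + v1 * DT, x2 + v2 * DT
    if x1 >= x2:        # elastic collision: equal masses simply swap velocities
        v1, v2 = v2, v1
    return x1, v1, x2, v2

def run(state, steps=1280):     # 1280 steps of 1/128 = 10 time units
    for _ in range(steps):
        state = step(*state)
    return state

initial = (0.0, 1.0, 5.0, -1.0)             # (x1, v1, x2, v2)
forward = run(initial)                      # evolve forward in time
x1, v1, x2, v2 = forward
backward = run((x1, -v1, x2, -v2))          # flip the velocities and evolve again
print('forward :', forward)                 # (-5.0, -1.0, 10.0, 1.0)
print('reversed:', backward)                # (0.0, -1.0, 5.0, 1.0) - right back where we started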

Now suppose you find yourself in a universe that looks like Figure 3, with a low-entropy microstate relative to the maximum entropy of Figure 2. How did you get there? At first, you might think that you got there by evolving from a universe with an even lower entropy, like one that had all of the molecules tightly confined to a corner in each half of the two boxes. But that would not be very likely. The most likely thing to have occurred is that you started out in a universe like Figure 2! To get to a universe like Figure 3, all you have to do is take a universe like Figure 2, which evolved from the universe in Figure 1 in the first place, and reverse the velocity of each molecule – the gas will then retrace its steps and begin to unmix again. Since there will be a huge number of configurations like Figure 2, relative to the number of configurations with all of the molecules in each box neatly packed into corners, the most likely way of getting to Figure 3 is from a much higher-entropy Figure 2. So if you find yourself in a relatively low-entropy universe such as ours, you cannot simply use the second law of thermodynamics to infer that your universe started out as an even lower-entropy universe in the distant past. Instead, you have to make what Sean Carroll calls the Past Hypothesis – that for some reason, your universe started out in an initial state with a very low entropy in the first place.

In From Eternity to Here Sean Carroll goes on to offer several explanations for the Past Hypothesis, that is, for why our Universe began with a very low initial entropy. The one that he finally homes in on, and which I find quite appealing, is a model composed of a multiverse of self-replicating baby universes. Sean Carroll explains that under the relentless pressures of the second law of thermodynamics to constantly increase the entropy of a universe, a universe in a multiverse ultimately degenerates into a state where everything is confined to a large number of black holes. Such a universe filled with black holes ultimately degenerates, via Hawking radiation, into a nearly empty universe with a small amount of positive vacuum energy and a maximum of entropy. This takes a long time - about 10^100 years. Another good book that also describes this degenerative process is The Five Ages of the Universe (1999) by Fred Adams and Greg Laughlin. Then the universe must wait for a very, very long time. After a seeming eternity of time, a quantum fluctuation in the very dilute remaining quantum fields of the universe suddenly creates a tiny pinch of “false vacuum” (see The Foundations of Quantum Computing for details on quantum field theory). Most of the time, this pinch of “false vacuum” simply collapses back into the dilute fluctuating quantum fields of the universe, but on exceedingly rare occasions, this region of “false vacuum” pinches off into a new baby universe. The new baby universe then inflates into a full-blown universe, such as ours, that is essentially made out of “nothing”, with no net energy, momentum, or angular momentum, but it does have an initial state with a very low entropy. Energy is conserved because the energy of the radiation and matter that form as the “false vacuum” of the baby universe decays while inflating is exactly matched by the negative gravitational potential energy that arises due to the presence of the matter and radiation in the baby universe. The initial entropy of the baby universe starts out very low, but this is not a violation of the second law of thermodynamics because the entropy of its maternal universe is still at a maximum, and the entropy of the multiverse as a whole does not decrease with the addition of a new low-entropy baby universe.

Figure 5 – A low-entropy baby universe can emerge from a quantum fluctuation in the residual quantum fields of a relatively dead universe that has reached a state of maximum entropy (click to enlarge)

So in Sean Carroll’s model of the multiverse, we find an infinite number of universes constantly spawning new universes, as each universe relentlessly progresses to a state of maximum entropy under the pressures of the second law of thermodynamics. A baby universe may inherit its physics from its maternal universe, or it might branch out on its own with a new set of physical laws, but thanks to the ubiquitous nature of the second law of thermodynamics, the end result will always be the same, a universe at maximum entropy spawning additional child universes with low entropy.

Figure 6 – The multiverse is composed of an infinite number of universes that are constantly replicating as they reach a state of maximum entropy under the pressures of the second law of thermodynamics (click to enlarge)

For me, such a self-replicating multiverse has an almost fractal nature at heart, like the Mandelbrot set, which is defined by a simple recursive algorithm in the complex plane:

Z(n+1) = Z(n)*Z(n) + C


Figure 7 – The multiverse can be thought of as a fractal form of self-replicating information, like the Mandelbrot set, with no beginning and no end (click to enlarge)
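
For anyone who wants to see the recursion at work, here is a tiny sketch that iterates Z(n+1) = Z(n)*Z(n) + C for a grid of points C, using the standard escape-radius test, and prints a crude character-cell image of the Mandelbrot set:

def in_mandelbrot(c, max_iter=50):
    """Iterate Z(n+1) = Z(n)*Z(n) + C and report whether the orbit stays bounded."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:        # once |Z| exceeds 2 the orbit is guaranteed to escape
            return False
    return True

# Crude character-cell rendering of the set
for row in range(21):
    y = 1.2 - row * 0.12
    print(''.join('*' if in_mandelbrot(complex(x * 0.05 - 2.1, y)) else ' '
                  for x in range(64)))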

Now to get back to the original question posed by this posting - is the Universe fine-tuned for self-replicating information? In many ways, the multiverse described by Sean Carroll is indeed fine-tuned for self-replicating information because it essentially is a form of self-replicating information in itself. On the other hand, because his model also explains why all universes begin with a very low level of entropy, it also explains the Very Weak Anthropic Principle, and if the Very Weak Anthropic Principle is true, then the requirements for the fine-tuning of a universe are greatly relaxed, and the presence of intelligent beings in a universe is not such a discriminating factor. What is discriminating is the requirement that there be a fine-tuning selection process that starts universes off with a low initial entropy. This relieves some of the anthropocentric tension found in the other forms of the Anthropic Principle because human beings can no longer be accused of following their natural tendency to gravitate towards the center of the Universe. Just about any universe will do for intelligent beings to arise and ultimately initiate the emergence of software, and provide a scaffolding upon which software can later proliferate until the day comes when software can finally self-replicate on its own.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston
