Monday, April 25, 2011

Programming Clay

The title of this posting is a pun because this is not going to be a posting on the new C-like programming language called Clay, but rather a posting on the origin of life on Earth, and how the recent muddled origin of software and computing hardware might be of help in pointing the way. But before proceeding I must take a moment to commemorate the 70th anniversary of the onset of the Software Universe, which popped into existence some 70 years ago this month in May of 1941 on Konrad Zuse’s Z3 computer (see So You Want To Be A Computer Scientist? for details). So the Software Universe is now a whopping 2.2 billion seconds old! Despite the timestamp on this posting, today’s date is actually May 7, 2011 and not April 25, 2011. Remember, in order to get the Introduction to Softwarephysics listed as the first post in the context root of http://softwarephysics.blogspot.com/ I have to perform a few IT tricks. The side effect of all these tricks, as I explained in the Introduction to Softwarephysics, is that the real posting date of posts is the date that appears on the post that you get when clicking on the Newer Post link at the bottom left of each posting. There is an important lesson here. Things are not always as they seem in the world of software, or any other form of self-replicating information for that matter, so you should not always take things purely at face value.

For example, suppose you were a fourteen-year-old high school freshman learning to program Java for the very first time with no historical sense of computing whatsoever. Having been immersed in the Software Universe for your entire life, you would probably tend to think that all of this sophisticated software and hardware that you had grown up with had simply always existed as it does today. You would probably not even stop to think about where it all came from as you proceeded to learn how to program the very sophisticated object-oriented language known as Java. Java comes with a huge class library of pre-built reusable code, graciously passed down from antiquity, that can be called to perform just about any low-level programming function that you might need, with very little effort on your part as a novice programmer, and yields executables that can run on just about any operating system on the planet. You might also be working on a $500 PC with several GB of memory, a dual-core processor running with a clock speed of several GHz, and more than 1 TB of disk - a machine that is several billion times faster, with several billion times more memory, and nearly a trillion times more peripheral storage than Konrad Zuse’s Z3 computer running on 2400 electromechanical telephone relays and punched tape.

Now if you did begin to wonder where all of this very sophisticated software and hardware had come from, you would immediately be confronted with a series of “chicken or the egg” paradoxes. Firstly, you would learn that Java cannot run all by itself directly on a computer. You cannot simply load a Java executable .class file into the memory of a computer and expect it to do anything at all. Java needs to run inside of a Java Virtual Machine, which is a simulated software computer that runs on top of the physical computer itself. Obviously, you cannot write a Java Virtual Machine with the Java language itself, since Java can only run inside of a Java Virtual Machine in the first place, so there must be some “other” programming language that came first before Java that can be used to write software that actually does execute directly upon computers without the need of a Virtual Machine, but what could it be? If you are lucky, perhaps there are some computer science books in your high school library, and with a little careful detective work on your part you discover a book on the C++ language, which looks a lot like Java, but with many bug-inducing complications, like the overloading of operators, multiple inheritance, and pointers, that were later discarded by the Java programming language as a simplifying measure to improve reliability. Similarly, you might also discover in a textbook that C++ came from the purely procedural C language, which also looks a lot like C++ and Java, but which is 100% procedural and has no object-oriented classes at all. But you would find that C does have some object-like precursors in the form of structs and function pointers, which, when combined, can be used to simulate objects to some extent. 
So with some investigation you would find an evolutionary history for the Java language if you had access to a library of computer science books, but what would you do if all the computer science books had been destroyed, leaving not a trace, and your PC did not have a C or C++ compiler either? Would you vainly search for simpler and simpler versions of Java on your PC, with the hope of ultimately finding one that could run directly on your computer without the need of a Java Virtual Machine?

Similarly, for the hardware that your newly coded Java programs run on, the problem would be even worse, as you quickly figure out that you would need a computer in order to design a computer! There is no way you could possibly design the silicon chips used for CPUs, RAM, and flash memory without the aid of a computer, and you would also need sophisticated process control computers to run the high-tech equipment that makes the chips. Yes, there are plenty of available silicon atoms to go around, since 27.7% of the Earth’s crust is composed of silicon, but how would you go about taking sand, also known as silicon dioxide, and turning it into computer chips without the aid of computers? Following my lead in SoftwareBiology, you might head out to your local city landfill and begin excavating. As you dig down through the rubble, you find older and older PCs, which, surprisingly, can still be made operational with some work. As you dig down through the deposited layers of garbage, the PCs you come across are found to contain simpler silicon chips, containing fewer and fewer transistors, until finally in the early 1980s, the trail of PCs grows cold, and no more are to be found at lower depths. However, you do continue to find the huge refrigerator-like boxes of discarded mainframes at lower depths that were deposited in the 1970s, and they contain even simpler silicon chips. As you dig deeper still, these refrigerator-like boxes are found to stop using silicon chips altogether, and instead are found to be stuffed with discrete silicon transistors on circuit boards. Finally, in garbage layers from the 1950s, you find discarded mainframes stuffed with row upon row of vacuum tubes – things you have never even seen before, but which look a lot like the archaic incandescent light bulbs that your parents grew up with, and which are rapidly being replaced by CFL and LED light bulbs. 
The deeper you dig the fewer mainframes you run across, until by the late 1940s, none are found at all in your city landfill, and I am quite confident that you would never run across an old Z3 computer stuffed with electromechanical relays!

So for both software and hardware, simply taking today’s exceedingly complex high-tech architecture and following it backwards leads you to a series of dead-ends because software did not begin as a simplified version of Java and hardware did not begin running on very simple silicon chips. Both software and hardware began by using fundamentally different technologies than today’s, and underwent several transitions to get us to where we are today.

There seems to be a similar “chicken or the egg” problem for the researchers currently working on the origin of another form of self-replicating information – the genes. The current high-tech biochemistry used today by the biosphere requires enzymes to replicate DNA and to copy mRNA from DNA in order to make enzymes. So currently, living things need DNA to make enzymes and they need enzymes to make DNA, so which came first, and how could one have come first if both are needed to make the other? This point is key to unraveling the secret to the origin of life. As discussed in Self-Replicating Information, currently there are several competing lines of thought that are all vying to become the accepted theory for the origin of life on Earth. In that posting I discussed the three major theoretical efforts highlighted by Freeman Dyson in the Origins of Life (1999):

1. Metabolism came first - the theory first proposed by Alexander Oparin in The Origin of Life (1924).

2. RNA came first - the “RNA world” theory proposed by Manfred Eigen in 1981.

3. Something else came first - such as Alexander Graham Cairns-Smith’s theory, first proposed in 1966, that there was a clay microcrystal precursor to both RNA and metabolism.

Freeman Dyson liked Cairns-Smith’s idea of a two-stage theory for the origin of life because it eliminated many of the deficiencies found in the “RNA world” theory, such as the “error catastrophe” problem – the fact that the self-replicating processes of the “RNA world” would need to be both very accurate and very simple at the same time. But Freeman Dyson went on to develop his own two-stage theory for the origin of life using Oparin’s purely metabolic form of life as the initial stage, followed by the parasitic/symbiotic appearance of RNA, which first appears as a disease preying upon the initial metabolic pathways, and later forms a symbiotic relationship with them in the tradition of Lynn Margulis.

I just finished reading Cairns-Smith’s Seven Clues to the Origin of Life: A Scientific Detective Story (1985), and I think his idea that Gene 1.0 ran on clay microcrystals instead of nucleic acids needs to be revisited. There is also an excellent website by Tim Tyler on this subject that I would highly recommend at:

The Origin of Life
http://originoflife.net/

One reason that I find Cairns-Smith’s idea that Gene 1.0 ran on clay microcrystals appealing is that, as we have learned from the way that software bootstrapped itself into existence, self-replicating information is notoriously opportunistic and will use just about any technology that is available to self-replicate, and also that self-replicating information really does not care about the underlying technology upon which it runs at any given time. Stephen Jay Gould coined the term “exaptation” for this tendency of living things to opportunistically take advantage of pre-existing structures and processes for other purposes. Darwin called such things a preadaptation, but Gould did not like this terminology because it had a teleological sense to it, as if a species could consciously make preparations in advance for a future need. The term exaptation avoids such confusion. All forms of self-replicating information take advantage of pre-existing technologies, through this process of exaptation, that evolved for one purpose, but are later put to work to solve a completely different problem. As I described in Self-Replicating Information, what happens is that organisms develop a primitive function for one purpose, through small incremental changes, and then discover, through serendipity, that this new function can also be used for something completely different. This new use will then further evolve via innovation and natural selection. For example, we have all upon occasion used a screwdriver as a wood chisel in a pinch. Sure the screwdriver was meant to turn screws, but it does a much better job at chipping out wood than your fingernails, so in a pinch it will do quite nicely. Now just imagine the Darwinian processes of innovation and natural selection at work selecting for screwdrivers with broader and sharper blades and a butt more suitable for the blows from a hammer, and soon you will find yourself with a good wood chisel. 
At some distant point in the future, screwdrivers might even disappear for the want of screws, leaving all to wonder how the superbly adapted wood chisels came to be.

As an IT professional, you probably do this all the time. How often do you write code from scratch? I know that I never do. I simply find the closest piece of existing code that I have on hand and then turn the screwdriver into a wood chisel through small incremental changes to the code, by testing each small change to see how closely my screwdriver has evolved towards becoming a wood chisel. And I think that most of us also code using this Darwinian process of innovation and natural selection too. I am a rather lazy programmer, so many times rather than thinking through a new chunk of code during the iterative process of coding and testing, I will simply make an “educated guess” at the new code to be introduced. After 40 years of coding, you begin to code by “ear”. Many times, I can fall upon the correct code after a few shots of directed random change, and that sure beats racking your brain over new code. Surprisingly, sometimes I even come up with “better” code through this Darwinian process than if I sat down and carefully thought it all through. So the basic idea of grabbing some old code or architectural design elements from a couple of older Applications and slowly modifying them through an iterative process of innovation and natural selection into a new Application is no stranger to IT. As Simon Conway Morris commented in Life’s Solution (2003)

“How much of a complex organism, say a humanoid, has evolved at a much earlier stage, especially in terms of molecular architecture? In other words, how much of us is inherent in a single-celled eukaryote, or even a bacterium? Conversely, we are patently more than microbes, so how many genuinely evolutionary novelties can we identify that make us what we are? It has long been recognized that evolution is a past master at co-option and jury-rigging: redeploying existing structures and cobbling them together in sometimes quite surprising ways. Indeed, in many ways that is evolution.” When I first read those words, I accidentally misread the quote as “Indeed, in many ways that is IT.”

Similarly, on the hardware side, in May of 1941 Konrad Zuse exapted 2400 electromechanical telephone relays into creating his Z3 computer. The telephone relays were not originally designed for this purpose. Instead, they were designed to be reliable electrical switches that could be used to make or break circuit connections so that people could speak with each other over the telephone. But Zuse did not use these relay connections for communications. Instead, he used the fact that a relay could be either in an open or closed state to store a binary digit of “1” or “0”. You can read about his adventures in computing in his own words at:

Konrad Zuse
http://ei.cs.vt.edu/~history/Zuse.html

Konrad Zuse did not use vacuum tubes as switches to store “1s” and “0s” for the Z3 because he thought that it would be impossible to keep thousands of vacuum tubes up and running long enough to complete computations, but in the 1940s and 1950s vacuum tubes were indeed used by all the computers of the age. However, vacuum tubes were not invented to run computers either. Vacuum tubes were originally invented to amplify analog radio signals and were exapted by the nascent computing industry into service. Vacuum tubes have a grid between a hot negative cathode filament and a cold positive anode plate. By varying the voltage on the grid you can control the amount of current flowing between the cathode and the anode. So a vacuum tube acts very much like a faucet; in fact, the English call them “valves”. By rotating the faucet handle back and forth a little, like applying a weak input voltage to the grid, you can make the faucet flow vary by large amounts, from a bare trickle to full blast, and thereby amplify the input signal. That is how a weak analog radio signal can be amplified by a number of vacuum tube stages into a current large enough to drive a speaker. Just as you can turn a faucet on full blast or completely off, you can do the same thing with vacuum tubes, so that they behave like telephone relays, and can be held in a conducting or nonconducting state to store a binary “1” or “0”. Similarly, when computer hardware migrated to discrete transistors and finally to integrated circuit chips, the computing industry once again exapted these devices from the consumer electronics industry. Like a vacuum tube, a field-effect transistor has a gate, a source, and a drain, and a varying voltage at the gate can control the current between the source and drain. Discrete transistors and integrated circuits were not primarily designed for computers, but for other electronic devices, like transistor radios and stereo systems, that had a much larger market. 
So the computing industry has opportunistically taken advantage of the available information storage and processing hardware ever since its inception.

Thus it is quite possible that an early form of metabolic self-replicating information, similar to Oparin’s proto-cell, could have opportunistically exapted clay microcrystals to run Gene 1.0 on. Or perhaps Cairns-Smith is right and clay microcrystals running Gene 1.0 in a free state exapted organic molecules to enhance their ability to self-replicate. When contemporaneous forms of self-replicating information form parasitic/symbiotic relationships with each other, it is hard to say which came first. For example, software first bootstrapped itself into existence as a form of self-replicating information parasitizing the human need to perform mathematical operations in an automated manner, and quickly went on to form very strong parasitic/symbiotic relationships with nearly every meme-complex on the planet, and in doing so, has domesticated our minds into churning out ever more software of ever more complexity. Just as genes are in a constant battle with other genes for survival, and memes battle other memes for space in human minds, software is also in a constant battle with other forms of software for disk space and memory addresses. Natural selection favors complex software with increased functionality, throughput, and reliability, so software naturally has progressed to greater levels of complexity over time. As IT professionals, writing and supporting software, and as end-users, installing and using software, we are all essentially temporary software enzymes caught up in a frantic interplay of self-replicating information, until the day comes when software can finally self-replicate on its own. Cairns-Smith calls the later rise of parasitic/symbiotic RNA the “Genetic Takeover” in Genetic Takeover: And the Mineral Origins of Life (1982), which describes the rise of Gene 2.0 running on RNA and DNA.

Crystals are natural products of the second law of thermodynamics. When individual atoms in a melt come together into a crystalline lattice, the entropy of the atoms decreases because they are in a more ordered microstate. However, the free-wheeling atoms jiggling about in the melt also release a “heat of fusion” into the remaining melt, as some of their free energy is given up when they click into place in the lattice, so the entropy of the entire universe still increases. Similar thermodynamic processes that preserve the second law of thermodynamics also occur when crystals form out of saturated water solutions. Thanks to the second law of thermodynamics, natural crystals also come with defects, caused by atoms in the lattice that are slightly misaligned or by atomic intruders that should not even be in the crystal lattice in the first place. Thus crystals combine the periodic regularity of self-assembled atoms with the modulation of crystal defects, a combination that is perfect for the storage of information. For me, crystals just seem to be too good an information storage technology for a metabolic proto-cell to pass by, and too easy to exapt into use, but that just might be a symptom of my anthropocentric inclinations as a carbon-based life form. Perhaps clay microcrystals had the same opinion of organic molecules!
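The entropy bookkeeping above can be written out explicitly. This is a standard thermodynamics sketch, not specific to any particular mineral: if atoms crystallize at temperature T and release a heat of fusion Q into the surroundings, then

```latex
\Delta S_{\mathrm{universe}}
  \;=\; \underbrace{\Delta S_{\mathrm{crystal}}}_{<\,0}
  \;+\; \underbrace{\frac{Q}{T}}_{>\,0}
  \;>\; 0
```

The lattice becomes more ordered, but the heat released into the melt or solution disorders the surroundings by more than enough to compensate, so the second law is preserved.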

In fact, there is some evidence that points to simple life forms exapting crystals into use. In 1975 Richard Blakemore discovered magnetotactic bacteria that could sense the Earth’s magnetic field and swim along its field lines. Magnetotactic bacteria have organelles called magnetosomes that contain magnetic crystals of magnetite (Fe3O4) or greigite (Fe3S4). The magnetotactic bacteria grow chains of these magnetic particles under the chemical control of the magnetosomes. These magnetic crystals are between 35 and 120 nm in size, which is just large enough to hold a permanent magnetic field, but small enough that each crystal forms a single magnetic dipole domain, like a very small compass needle. These magnetic crystals allow the bacteria to sense the Earth’s magnetic field and use it to navigate. It is thought that magnetotactic bacteria first evolved in the early Proterozoic, perhaps 2.5 billion years ago. I once saw these magnetotactic bacteria in action at the Exploratorium in San Francisco. The exhibit lets you turn an external magnet, and as you do so, the bacteria swim in a new direction. Similarly, higher forms of life use biomineralization to grow shells and bones from crystals of calcium carbonate, silica, or calcium phosphate. So there has been a long history of association between mineral crystals and living things.

Personally, my hunch is that life first originated several thousand feet below the Earth’s surface in porous reservoirs near hydrothermal vents. The pore spaces in the heavily fractured rock near a hydrothermal vent would provide the ideal habitat, with an abundant supply of hot, energy-rich, organic molecules and crystal precipitating ions in the pore fluids circulating through the rock. This environment would also be isolated from the planet-wide sterilizing impacts from the late heavy bombardment that peppered the Earth and Moon 4.1 – 3.8 billion years ago with countless impacts from comets careening in from the outer Solar System. It is thought that, at the time of the late heavy bombardment, Jupiter and Saturn had entered into a 2:1 orbital resonance, with Saturn making one orbit for every two orbits of Jupiter, and that the two planets had flung a nearby Neptune out to its current orbital position as the most distant planet from the Sun. Neptune then dislodged many of the surrounding comets from its new-found neighborhood, causing them to plunge in towards the inner planets like the Earth, producing many deadly collisions that boiled away the Earth’s oceans time and time again. Seeking refuge several thousand feet below the mayhem of the Earth’s surface would allow life to safely originate 4.0 – 4.2 billion years ago during the late heavy bombardment and persist in an undisturbed manner to this very day. In fact, there are still plenty of microbes down there. You can actually hire firms to analyze the bacteria in your oil field reservoirs to help prevent them from becoming soured by bacteria that produce hydrogen sulphide gas.

So will we ever really figure out the exact sequence of events leading up to the origin of life? As I pointed out in A Proposal For All Practicing Paleontologists my suspicion is that there would be a great deal of controversy in simply putting together a non-contentious history for the evolution of software too, with much debate regarding the importance and priority of many of the events in the evolution of software architecture, even with all of the data freely at hand and with most of the events having occurred within living memory, so no wonder biologists working on the origin of life have such a hard go of it! Similarly, many attempts have been made to produce a timeline for the development of computing hardware. One of my favorites is:

An Illustrated History of Computers
http://www.computersciencelab.com/ComputerHistory/History.htm

When you look at all of this hardware, it is very difficult to put your finger on a particular device and emphatically claim it to be the very first computer. For example, was Konrad Zuse’s purely mechanical Z1, completed in 1938 in his parents’ living room, a real computer? It had a control unit, memory, an arithmetic unit with four basic operations for floating point numbers, input and output devices and it could read programs from a punched tape, but it did not store programs in memory and it did not have a logic unit that could compare bits. Similarly, I think that it would be very difficult to put your finger on the very first form of life on Earth too.

Unfortunately, the exact mechanisms that were involved in the origin of life on Earth will probably never be fully worked out. Like the origin of software and computing hardware, it was probably such a hodge-podge of precursors, false starts, and failed attempts that nobody will ever be able to fully unravel it all. After all, if we had been there to see it all unfold, we would probably still be debating today what exactly had happened! But the very muddled origin of software and computing hardware seems to provide a very good model for the origin of all forms of self-replicating information. The important thing to keep in mind is that self-replicating information is very opportunistic and will exapt whatever information storage technology happens to be handy at the time. Also, as we have seen from the evolutionary history of software, self-replicating information does not really care about what particular medium or information storage technology it uses at any given moment, since it has a propensity to jump from one technology to another. For example, software has jumped from purely mechanical information storage devices, to electromechanical relays, vacuum tubes, discrete transistors, and integrated circuits, and may soon be running on optical chips using quantum mechanical effects to store information. This would indicate that researchers working on the origin of life should perhaps broaden their horizons and not focus exclusively upon organic molecules as the sole precursors to the origin of life on Earth, and should try giving clay a chance.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Friday, April 15, 2011

Is the Universe Fine-Tuned for Self-Replicating Information?

This posting will focus on the three forms of self-replicating information on this planet – genes, memes, and software – and the apparent fine-tuning of the Universe to make all three possible. As we saw in Self-Replicating Information all forms of self-replicating information have a commonality of shared properties, resulting from their common struggle with the second law of thermodynamics in a nonlinear Universe, that allow us to learn about one, by examining the others. As I pointed out in The Origin of Software the Origin of Life, success for our current SETS program - the Search for ExtraTerrestrial Software – seems to have a precondition for the emergence of intelligent beings in the Universe to appear first, and to form a scaffolding upon which software can later arise and ultimately begin its exploration of our galaxy. Since the emergence of intelligent beings is apparently so important for the subsequent emergence of software, let us start there.

For the current stellar-dominant phase in the evolution of our Universe, this seems to be contingent upon the emergence of intelligent carbon-based life forms, and as many physicists and cosmologists have pointed out, our Universe seems to be indeed strangely fine-tuned for the emergence of carbon-based life forms. If you change any of the 20+ constants of the Standard Model of particle physics by just a few percent or less, you end up with a universe incapable of sustaining intelligent carbon-based beings. Similarly, in 1969 Robert Dicke noted that the amount of matter and energy in the Universe was remarkably close to the amount required for a flat spacetime. If you run today’s near flatness of spacetime back to the time of the Big Bang, spacetime would have had to have been flat to within one part in 10^60! This is known as the “flatness problem” in cosmology. If spacetime had just a very slight positive curvature at the time of the Big Bang, then our Universe would have quickly expanded and recollapsed back into a singularity in a very brief period of time, and there would not have been enough time to form stars or living things. Similarly, if spacetime had a very slight initial negative curvature, it would have rapidly expanded – our Universe would have essentially blown itself to bits, forming a very thinly populated vacuum which could not have formed stars or living things.
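The size of this fine-tuning can be sketched with the Friedmann equation, in a standard cosmology-textbook argument. Writing Ω for the ratio of the Universe’s actual density to the critical density required for a flat spacetime, k for the spatial curvature, a(t) for the scale factor, and H(t) for the Hubble parameter:

```latex
\Omega(t) - 1 \;=\; \frac{k\,c^{2}}{a^{2}(t)\,H^{2}(t)}
```

Because the product a(t)H(t) decreases throughout the radiation- and matter-dominated eras, any initial deviation of Ω from 1 grows relentlessly with time. Running today’s observed near-flatness backwards to the era of the Big Bang therefore requires |Ω − 1| to have started out smaller than roughly one part in 10^60.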

The apparent fine-tuning of our Universe for intelligent carbon-based life forms is troubling for most theorists because it is difficult to explain. For the religious at heart the explanation is quite simple – there must be a deity who created the Universe deliberately fine-tuned for intelligent carbon-based life. But that explanation just pushes the problem back one level because then you have to explain where the deity came from and why a hypothetical super-Universe, beyond our own Universe, was fine-tuned to bring forth such a deity in the first place. This all goes against the pantheistic grain of many scientists, who have a bit of a spiritualistic bent at heart, and who share the philosophical inclinations of both Spinoza and Einstein. An alternative explanation that the scientific community seems to be slowly embracing is Brandon Carter’s Anthropic Principle (1973), which comes in several flavors, the least contentious being the weak version of the Anthropic Principle:

The Weak Anthropic Principle - Intelligent beings will only find themselves existing in universes capable of sustaining intelligent beings.

At first glance this proposition seems to just state the obvious, but it does have implications. If we are indeed intelligent beings, then our Universe must be fine-tuned to sustain intelligent beings, and that fine-tuning must extend throughout the entire Universe as well, allowing for the emergence of other intelligent beings elsewhere, but why? The two most common explanations offered for the Weak Anthropic Principle are that either our Universe was intentionally designed with such fine-tuning in mind, or our Universe is just one of many possible universes in a multiverse. Proponents of the multiverse explanation go on to explain that most universes in the multiverse do not have the necessary physics to sustain intelligent beings and are quite sterile, but like all lottery losers, nobody is in those universes wondering why there is no intelligent life; only the exceedingly rare lottery winners sit in stunned amazement, holding winning tickets to a fine-tuned universe.

But is it proper to infer an infinite multiverse of universes from the mere fact that we exist? Fortunately, we have an historical analogy that can shed some light on the subject. Towards the end of the 16th century, Giordano Bruno took Copernicus’s heliocentric model of the Solar System (1543) one step further. Giordano Bruno was an early pantheist who conceived of a deity that was one with the Universe, rather than being a deity existing in a remote heaven-based-universe beyond our own physical Universe. Giordano Bruno thought that such an infinite pantheistic deity must necessarily exist in an infinite Universe, with no beginning and no end, and infinite in both space and time. Like Copernicus, Bruno believed that the planets orbited the Sun, but additionally, Bruno figured that the stars must simply be distant suns with their own sets of planets, and that these distant planets could also harbor alien life forms and intelligent beings as well. Clearly, Giordano Bruno was about 400 years ahead of his times, which put him in conflict with many of the prevailing meme-complexes of the day, and sadly, he was burned at the stake by the Roman Inquisition on February 17, 1600. But the implications of his cosmology are remarkably similar to those of the Weak Anthropic Principle. Once you posit a Universe containing a near-infinite number of planets, chances are that some of those planets would necessarily be capable of sustaining intelligent carbon-based life forms by sheer luck, and intelligent carbon-based life forms would only find themselves existing on such planets. This seems to be our current situation here on Earth – we exist on one of the very rare lucky planets in our Universe. 
So without even the benefit of the crude telescope that Galileo was about to turn upon the night sky in 1610, Giordano Bruno was able to infer that there must be a large, or near-infinite, number of planets with intelligent beings in our Universe that we cannot see, but must exist, nonetheless, because we exist.

Another appealing feature of a multiverse, in addition to eliminating the need for our apparently fine-tuned Universe to be intentionally designed, is that it addresses a problem in string theory. The Standard Model of particle physics was a big improvement over the 400+ unrelated “fundamental” particles discovered in the 1960s, but when you add up all of the various colored particles and their antimatter twins, you end up with about 63 particles (see The Foundations of Quantum Computing for details). Many physicists think that our Universe simply cannot be so complicated, and that there has to be a simpler model. One promising model is called supersymmetric string theory, or string theory for short. String theory contends that all of the particles of the Standard Model are actually strings or membranes vibrating in an 11-dimensional Universe. The strings or membranes are made of pure energy, or perhaps pure mathematics, and vibrate with different frequencies. The different vibrational frequencies account for the differing physical properties of the particles, just as the different frequencies of a vibrating guitar string account for the differing notes emanating from a guitar. According to string theory, at the time of the Big Bang, three of the 10 spatial dimensions of this 11-dimensional spacetime suddenly began to expand, as did the dimension of time, while the remaining 7 spatial dimensions stayed microscopically small, beyond our ability to observe them. String theory research has dominated physics for the past 20 years, but unfortunately string theory is now running on pure mathematics, without the benefit of the third step in the Scientific Method - experimental verification of the theory using inductive empiricism. The predicted vibrating strings and membranes are so small that they lie beyond the reach of our current accelerators by many orders of magnitude.
Now the initial hope for string theory was that there would be one, and only one, self-consistent formulation of the theory, and that the Standard Model and its 20+ constants would naturally fall out from it. But that is not what has happened. Over the years, it has become evident that one can form a nearly infinite number of universes with string theory by slightly changing the geometry of the dimensions in which the strings and membranes vibrate. Leonard Susskind calls this the Cosmic Landscape in his book of the same name (2006). Like many cosmologists, Susskind proposes that there are an infinite number of other universes forming a multiverse, with each universe having its own physics determined by the number and geometry of its dimensions.

So nowadays in both physics and cosmology it seems as though there is a lot riding on the Weak Anthropic Principle and its implication that there is a near-infinite multiverse of universes out there, with possibly a near-infinite number of ways of doing physics. But the Weak Anthropic Principle makes this inference on the basis that the apparent fine-tuning of our Universe for the existence of intelligent beings is exceptional. What if that were not the case? What if the existence of intelligent beings in a universe, just about any kind of universe at all, was the rule and not the exception, and a relatively easy accomplishment to achieve? In this posting, I would like to make that very proposition by introducing the Very Weak Anthropic Principle:

The Very Weak Anthropic Principle - Intelligent beings will only find themselves existing in a universe capable of sustaining self-replicating information, and self-replicating information will only be found in a universe that begins in a low-entropy initial state.

To understand the implications of the Very Weak Anthropic Principle, this would be a good time to review the concepts of entropy and the second law of thermodynamics found in Entropy - the Bane of Programmers and The Demon of Software, and also the role that both played in the evolution of living things and software, found in SoftwareBiology and Self-Replicating Information. Additional insights can also be found in From Eternity to Here (2010) by Sean Carroll, which carries the subtitle The Quest For the Ultimate Theory of Time, but for our purposes serves as a spectacular study of the second law of thermodynamics and entropy and their far-reaching roles in the evolution of our Universe. The central aim of the book is to explain why our Universe started out with such a very low entropy at the time of the Big Bang, and since that question is fundamental to understanding the Very Weak Anthropic Principle, I will be making quite a few references to it.

In Entropy - the Bane of Programmers we saw that thermodynamics was an outgrowth of the desire by physicists and engineers in the 19th century to better understand and improve upon steam engines. Thermodynamics describes the behaviors of macroscopic systems, like steam engines, in terms of changes in their macroscopic properties, like changes in their pressures, temperatures, and volumes, as these macroscopic systems operate. We also explored the second law of thermodynamics, as it was first proposed by Rudolf Clausius in 1850, in terms of objects cooling off and the smoothing out of differences in bulk matter, like the mixing of gases. Clausius defined this general running down of systems in terms of a quantity he called entropy, which he defined in terms of the flow of heat from one object to another. The second law of thermodynamics stated that heat could flow from a hot object to a cooler object, but not the other way around. In the 19th century, thermodynamics and Newtonian mechanics were considered to be two entirely different domains within physics and unrelated in any way. Thermodynamics was applicable to the vagaries of steam engines and boilers, while Newtonian mechanics made sense of the planetary motions about the Sun and the flight of cannon balls in time of war. Furthermore, the second law of thermodynamics was also considered to be a “real” law of the Universe and just as sacrosanct as Newton’s three laws of motion. However, in The Demon of Software we saw how later in the 19th century Ludwig Boltzmann tried to unify thermodynamics with Newtonian mechanics by creating a new branch of physics known as statistical mechanics. With statistical mechanics, Boltzmann tried to demonstrate that all of thermodynamics could be derived from Newtonian mechanics by simply considering bulk matter to be composed of a large collection of molecules constantly bouncing around and following Newton’s three laws of motion in collisions at the microscopic level.
One of the difficulties that Boltzmann faced with this new approach to thermodynamics was that now the second law of thermodynamics took on a statistical nature. The second law was no longer a sacred law of physics, but simply a very good bet. This bothered many 19th century physicists, who considered the second law of thermodynamics to be just as sacred as Newton’s three laws of motion.

To highlight Boltzmann’s probabilistic approach to statistical mechanics we explored the concepts of the second law of thermodynamics and entropy in terms of poker. We equated the different kinds of poker hands with the concept of a macrostate. For example, a K-K-K-4-4 would constitute the macrostate of a full house. For any given poker hand, or macrostate, like a full house, there are a number of microstates that yield the macrostate. So for the macrostate of a full house a K-K-K-4-4, J-J-J-9-9, and a 7-7-7-2-2 would all be microstates of the full house macrostate. Following the work of Ludwig Boltzmann (1872), we found that for any given poker hand we could calculate the entropy of the hand, or macrostate, by applying Boltzmann’s famous equation:

S = k ln(N)

S = entropy
N = number of microstates
k = Boltzmann’s constant

For our poker analogy, we set Boltzmann’s constant to k = 1, since k is just a “fudge factor” used to make the units of entropy from Boltzmann’s equation match those used by the thermodynamic formulas for entropy.
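To make Boltzmann’s equation concrete, here is a minimal Python sketch of my own (not from the referenced postings) that counts the microstates of a few poker macrostates combinatorially and then applies S = k ln(N) with k = 1:

```python
from math import comb, log

total = comb(52, 5)  # 2,598,960 possible five-card hands

# Microstate counts for a few poker macrostates, derived combinatorially
microstates = {
    "straight flush": 10 * 4,                              # 10 rank runs x 4 suits
    "four of a kind": 13 * 12 * 4,                         # quad rank x kicker rank x kicker suit
    "full house":     13 * comb(4, 3) * 12 * comb(4, 2),   # trips rank/suits x pair rank/suits
    "flush":          4 * (comb(13, 5) - 10),              # five of one suit, minus straight flushes
    "one pair":       13 * comb(4, 2) * comb(12, 3) * 4**3,
}

def entropy(n):
    """Boltzmann's S = k ln(N) with the fudge factor k set to 1."""
    return log(n)

for macro, n in microstates.items():
    print(f"{macro:15s} N = {n:9,d}  S = {entropy(n):10.7f}  P = {n / total:.5f}")
```

For the full house macrostate this yields N = 3,744 microstates, an entropy of 8.2279098, and a probability of 0.00144, matching the values quoted below from The Demon of Software.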

We also used Leon Brillouin’s concept of information to calculate the amount of information in a particular poker hand, or macrostate, as the difference between its entropy and the maximum possible entropy of all poker hands:

Information = Si - Sf
Si = initial entropy
Sf = final entropy
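Continuing the same sketch in Python, Brillouin’s measure follows directly. Here I assume, as in the table from The Demon of Software, that the maximum possible entropy treats all 2,598,960 five-card hands as microstates:

```python
from math import comb, log

s_max = log(comb(52, 5))   # maximum possible entropy of all poker hands: ~14.7706235
s_full_house = log(3744)   # entropy of the full-house macrostate: ~8.2279098

# Brillouin: information is the drop from the initial (maximum) entropy
# down to the final (macrostate) entropy, with k = 1
information = s_max - s_full_house
print(f"Information carried by a full house = {information:.7f}")
```

The rarer the macrostate, the lower its entropy and the more information it carries; a royal flush carries far more information than a pair.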

The entropy of a macrostate defines its degree of disorder and its probability of happening. Macrostates with little entropy, like a straight flush, have a great deal of order, but are very unlikely because they have so few microstates. Similarly, macrostates with lots of entropy, like a pair, have little order, but are much more likely to occur because they have a huge number of possible microstates. At any given time a system, like the cards you are currently holding in a poker game, will be in some macrostate defined by a particular microstate. So let’s say that you are holding the macrostate of a full house defined by a microstate of Q-Q-Q-7-7. From the table in The Demon of Software, we see that a full house has an entropy of 8.2279098 and a probability of occurring of 0.00144. Now I am going to change the rules of poker again. When it comes time to draw cards, you can draw one, two, or three cards, but not only does the dealer deal you the new cards, he also chooses the cards that you discard! So you put all five cards face down on the table, tell the dealer how many cards you wish to draw, and then the dealer deals out your draw cards and discards an equal number of cards from your hand at random. So what do you do? From the table in The Demon of Software, we see that your full house macrostate with a microstate of Q-Q-Q-7-7 already has a relatively low entropy and a low probability of occurring. The odds are that if you draw even a single card, the entropy of your hand will increase, and you will end up with a lower-ranked hand, like two pair or three of a kind. Of course, you could get really lucky and draw another Q, while the dealer discards one of your 7s, but that would be a real long shot. This, then, is the essence of the second law of thermodynamics in Boltzmann’s statistical mechanics.
Systems tend to increase in entropy, not because of some fundamental law of the Universe, but simply because high-entropy macrostates (low-ranked poker hands) with lots of microstates are much more likely to occur than low-entropy macrostates (high-ranked poker hands) with few microstates. In simple terms, you are more likely to be dealt a pair than a straight flush simply because there are more ways to be dealt a pair than to be dealt a straight flush. Thus in Boltzmann’s view, the second law of thermodynamics reduces to just being a really safe bet that entropy will increase, and not a fundamental law that it always must. It follows that, in very rare cases, the entropy of an isolated system can spontaneously decrease all on its own, in apparent violation of the second law of thermodynamics.
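That bet can be checked empirically. The following Monte Carlo sketch (my own illustration, not from the original posting) deals 100,000 random five-card hands and confirms that pairs vastly outnumber straight flushes:

```python
import random
from collections import Counter

random.seed(1)
deck = [(rank, suit) for rank in range(2, 15) for suit in "SHDC"]  # ranks 2..10, J, Q, K, A

def is_pair(hand):
    # exactly one pair: rank counts are 2, 1, 1, 1
    return sorted(Counter(r for r, s in hand).values()) == [1, 1, 1, 2]

def is_straight_flush(hand):
    ranks = sorted(r for r, s in hand)
    if len({s for r, s in hand}) != 1:   # must all share one suit
        return False
    if ranks == [2, 3, 4, 5, 14]:        # the A-2-3-4-5 "wheel"
        return True
    return all(ranks[i + 1] - ranks[i] == 1 for i in range(4))

deals = 100_000
pairs = flushes = 0
for _ in range(deals):
    hand = random.sample(deck, 5)
    pairs += is_pair(hand)
    flushes += is_straight_flush(hand)

print(f"one pair: {pairs / deals:.1%}   straight flush: {flushes} in {deals:,} deals")
```

Roughly 42% of the deals land on a pair, while a straight flush turns up only once or twice per 100,000 deals, if at all; high-entropy macrostates win the bet overwhelmingly.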

The beauty of Boltzmann’s statistical approach to the second law of thermodynamics is that it is the only “law” in physics that we really understand at a fundamental level because it really is not a “law” at all in the same sense as the other “laws” of classical 19th century physics, which by definition could not be violated. Recall that in the 20th century we learned that all of the other “laws” of the Universe are really just effective theories in physics that are approximations of reality. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics works very well for objects moving in weak gravitational fields at less than 10% of the speed of light and which are larger than a very small mote of dust. For things moving at high velocities, or in strong gravitational fields, we must use relativity theory, and for very small things like atoms, we must use quantum mechanics. All of the current theories of physics, such as Newtonian mechanics, classical electrodynamics, thermodynamics, the special and general theories of relativity, quantum mechanics, and quantum field theories like QED and QCD are just effective theories that are based upon models of reality, and all of these models are approximations – all of these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply (see Model-Dependent Realism - A Positivistic Approach to Realism for more details).

Boltzmann’s new statistical concept of the second law of thermodynamics also addressed a bothersome problem for 19th century astronomy. At the time, physicists and astronomers thought that the Universe was infinitely old. In those early days, nobody really knew what was powering the Universe, but they did realize that the Universe was a dynamical system degrading some kind of low-entropy potential energy into the high-entropy energy of heat that was constantly moving to lower temperatures because of the second law of thermodynamics. For example, nobody knew what was powering the Sun, but they did know that the Sun was giving off huge amounts of electromagnetic energy in the form of sunlight. When the sunlight reached the Earth and was absorbed, it heated up both the land and sea, which then radiated infrared electromagnetic energy back into space with a longer wavelength and a greater entropy. At each step in the process, the first law of thermodynamics guaranteed that all of the energy was conserved; none of it was lost, and no additional energy spontaneously appeared on its own. But at the same time, the second law of thermodynamics demanded that the entropy of the energy had to increase at each step, and when 19th century physicists performed experiments that simulated these processes in the lab, that is indeed what they found. So if the Universe were infinitely old, then according to the second law of thermodynamics, all of the energy in the Universe should have already been converted into heat energy at a very low constant temperature, and the Universe should be in a state of an unchanging equilibrium, with a maximum of entropy. In classical 19th century thermodynamics, once an isolated system has attained a state of maximum entropy, it can no longer change and is essentially dead, and in the 19th century this was known as the “Heat Death” of the Universe. To address this issue, Boltzmann came up with his own version of the Weak Anthropic Principle.
Since intelligent beings could not possibly exist in an unchanging Universe at equilibrium and in a state of maximum entropy, we must obviously be living in a very strange and fluky tiny portion of a much larger Universe. Most of the Universe would indeed be in equilibrium, at a maximum entropy, and in a “Heat Death” condition, but our little portion of the Universe must be a statistical fluke, at a much lower entropy, that spontaneously came to be out of the random motions of molecules in our patch of the Universe. It would be like the spontaneous unmixing of two gases that the classical second law of thermodynamics forbids, but on a much grander scale.

Figure 1 – The second law of thermodynamics predicts that two unmixed gases will mix into an unchanging homogeneous mixture at equilibrium with a maximum of entropy (click to enlarge)



Figure 2 – Once a state of maximum entropy has been attained the mixture will no longer change as it enters a “Heat Death” macroscopic condition (click to enlarge)



Figure 3 – But Boltzmann’s new statistical view of the second law of thermodynamics does allow a mixed gas to spontaneously unmix on very rare occasions (click to enlarge)

Although the classical second law of thermodynamics forbids such processes, in Boltzmann’s new statistical view of the second law of thermodynamics, it is very unlikely, but still possible, for a tiny patch of an infinitely large and infinitely old Universe to spontaneously evolve into a low-entropy state such as ours. It would be like starting off with a poker hand of “nothing” with an entropy of 14.7706235 and after drawing three cards ending up with a royal flush with an entropy of 1.3862944. It would take a very long time to be so fortunate, but if the Universe were infinitely old, there would be plenty of time for it to have happened, and for it to have happened an infinite number of times in the past.
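The approach to this equilibrium, and the rarity of fluctuations away from it, can be illustrated with the classic Ehrenfest urn model, a sketch of my own (not from the original posting). Molecules hop randomly between the two halves of the box of Figures 1 - 3, and the entropy of the “n molecules on the left” macrostate is S = ln C(N, n) with k = 1:

```python
import random
from math import comb, log

random.seed(42)
N = 50        # molecules in the box
left = N      # start unmixed: all molecules on the left, the lowest-entropy macrostate

def entropy(n_left):
    # S = k ln(N_microstates) with k = 1; there are C(N, n_left) ways
    # to choose which molecules sit in the left half
    return log(comb(N, n_left))

# Ehrenfest urn model: each step one randomly chosen molecule hops across
for _ in range(2000):
    if random.random() < left / N:
        left -= 1    # the chosen molecule was on the left; it hops right
    else:
        left += 1    # it was on the right; it hops left

print(f"{left} of {N} molecules remain on the left, S = {entropy(left):.3f}")
print(f"equilibrium (maximum) entropy: S_max = {entropy(N // 2):.3f}")
```

The entropy climbs rapidly toward its maximum and then merely jitters around it; a spontaneous return to the fully unmixed state is allowed in Boltzmann’s statistics, but you would wait an absurdly long time to see it.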

Now with all that background material behind us, what does it take for self-replicating information to arise? I would like to argue that it does not take that much at all. All it requires is the existence of a second law of thermodynamics and a low-entropy initial state for a universe to bring forth self-replicating information in some fashion. A second law of thermodynamics is necessary for the emergence of self-replicating information because the second law is the driving force behind the Darwinian mechanisms of innovation and natural selection that select for self-replicating information from information that does not self-replicate and allow it to emerge and evolve. The second law guarantees that some level of copying errors will occur whenever self-replicating information replicates, resulting in mutations that, on rare occasions, provide for beneficial innovations. The second law also guarantees that the low-entropy materials used to encode self-replicating information, and the free energy necessary to replicate it, will be in short supply, since the necessary building materials will have a tendency to degrade into high-entropy waste materials, and the free energy will tend to degrade into useless heat energy. The scarcity of these items creates a struggle for existence amongst the competing forms of self-replicating information, leading to the Darwinian mechanism of natural selection. After all, if it had not been for the second law of thermodynamics, and food and shelter spontaneously emerged out of nothing, we would all be fat and happy bacteria today! Finally, because there will always be more microstates for a disordered high-entropy macrostate than there are microstates for an ordered low-entropy macrostate, in all possible universes, all universes must necessarily have a second law of thermodynamics, no matter what physics they might be running on at the moment.
It’s just in the cards that all possible universes must have a second law of thermodynamics. Consequently, all that is really needed for the existence of intelligent beings is for a universe to begin in an initial state of low entropy.

Now it looks like we have nearly finished our derivation of the Very Weak Anthropic Principle. Since all possible universes must contain a second law of thermodynamics, all we have to do is explain why our Universe began in a low-entropy state. Frequently in textbooks on thermodynamics it is argued that since our Universe is currently not in a state of maximum entropy, it must have been in a state of much lower entropy in the distant past because entropy always has to increase with time. In From Eternity to Here Sean Carroll points out that this is not necessarily true. It all has to do with the conservation of information and reversible processes. This time I am not talking about Leon Brillouin’s concept of information as a difference in entropies, but rather the “other” concept of information that I mentioned in The Demon of Software in regards to black holes conserving information. A reversible process is a process that can be run backwards in time to return the Universe back to the state that it had before the process even began, as if the process had never even happened in the first place. For example, the collision between two molecules at low energy is a reversible process that can be run backwards in time to return the Universe to its original state because Newton’s laws of motion are reversible. Knowing the position of each molecule at any given time and also its momentum, a combination of its speed, direction, and mass, we can predict where each molecule will go after a collision between the two, and also where each molecule came from before the collision using Newton’s laws of motion. For a process to be classified as reversible, operating under reversible physical laws, the information required to return the system back to its initial state cannot be destroyed, no matter how many collisions might occur.

Figure 4 – The collision between two molecules at low energy is a reversible process because Newton’s laws of motion are reversible (click to enlarge)

Currently, all of the effective theories of physics, what many people call the “laws” of the Universe, are indeed reversible, except for the second law of thermodynamics, but that is because, as we saw above, the second law is really not a fundamental “law” of the Universe at all. In order for a law of the Universe to be reversible it must conserve information. That means that two different initial microstates cannot evolve into the same microstate at a later time. For example, in the collision between the blue and pink molecules in Figure 4, the blue and pink molecules both begin with some particular position and momentum one second before the collision and end up with different positions and momenta at one second after the collision. In order for the process to be reversible, and Newton’s laws of motion to be reversible too, this outcome has to be unique. A different set of identical blue and pink molecules starting out with different positions and momenta one second before the collision could not end up with the same positions and momenta one second after the collision as the first set of blue and pink molecules. If that were to happen, then one second after the collision, we would not be able to tell what the original positions and momenta of the two molecules were one second before the collision, since there would now be two possible alternatives, and we would not be able to uniquely reverse the collision. We would not know which set of positions and momenta the blue and pink molecules originally had one second before the collision, and the information required to reverse the collision would be destroyed.
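The reversibility argument can be demonstrated numerically. The sketch below is my own illustration, with a hypothetical soft repulsive force standing in for the collision of Figure 4. It integrates the blue and pink molecules in one dimension with the time-reversible velocity Verlet scheme, reverses every velocity, and recovers the initial state:

```python
def force(x1, x2):
    # hypothetical soft repulsion when the molecules are closer than one "diameter"
    d = x2 - x1
    if d >= 1.0:
        return 0.0, 0.0
    k = 10.0                       # stiffness of the repulsion
    return -k * (1.0 - d), k * (1.0 - d)

def run(x, v, dt, steps):
    """Velocity Verlet: a time-reversible integration of Newton's laws."""
    (x1, x2), (v1, v2) = x, v
    f1, f2 = force(x1, x2)
    for _ in range(steps):
        x1 += v1 * dt + 0.5 * f1 * dt * dt
        x2 += v2 * dt + 0.5 * f2 * dt * dt
        g1, g2 = force(x1, x2)
        v1 += 0.5 * (f1 + g1) * dt
        v2 += 0.5 * (f2 + g2) * dt
        f1, f2 = g1, g2
    return (x1, x2), (v1, v2)

x0, v0 = (0.0, 5.0), (1.0, -1.0)                    # blue moving right, pink moving left
x, v = run(x0, v0, dt=0.01, steps=1000)             # they collide and rebound
x, v = run(x, (-v[0], -v[1]), dt=0.01, steps=1000)  # reverse every velocity and rerun

# The molecules retrace the collision exactly: no information was destroyed
print(f"recovered positions {x}, velocities {v}")
```

After the reversed run, the positions match the initial positions and the velocities are exactly the negatives of the initial velocities, so the whole history of the collision can be recovered from the final state.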

Now suppose you find yourself in a universe that looks like Figure 3 and has a relatively low-entropy microstate relative to the maximum entropy of Figure 2. How did you get that way? At first, you might think that you got that way by evolving from a universe with an even lower entropy, like one that had all of the molecules tightly confined to a corner in each half of the two boxes. But that would not be very likely. The most likely thing to have occurred is that you started out in a universe like Figure 2! To get to a universe like Figure 3, all you have to do is reverse the velocity of each molecule in Figure 2, and you end up with a universe like Figure 3, because the universe in Figure 2 evolved from the universe in Figure 1 in the first place. Since there will be a huge number of configurations like Figure 2, relative to the number of configurations with all of the molecules in each box neatly packed into corners, the most likely way of getting to Figure 3 is from a much higher entropy Figure 2. So if you find yourself in a relatively low-entropy universe such as ours, you cannot simply use the second law of thermodynamics to infer that your universe started out as an even lower entropy universe in the distant past. Instead, you have to make what Sean Carroll calls the Past Hypothesis, that for some reason, your universe started out in an initial state with a very low entropy in the first place.

In From Eternity to Here Sean Carroll goes on to offer several explanations for the Past Hypothesis of why our Universe began with a very low initial entropy. The one that he finally homes in on, and which I find quite appealing, is a model composed of a multiverse of self-replicating baby universes. Sean Carroll explains that under the relentless pressures of the second law of thermodynamics to constantly increase the entropy of a universe, a universe in a multiverse ultimately degenerates into a state where everything is confined to a large number of black holes. Such a universe filled with black holes ultimately degenerates, via Hawking radiation, into a nearly empty universe with a small amount of positive vacuum energy and a maximum of entropy. This takes a long time - about 10^100 years. Another good book that also describes this degenerative process is The Five Ages of the Universe (1999) by Fred Adams and Greg Laughlin. Then the universe must wait for a very, very long time. After a seeming eternity, a quantum fluctuation in the very dilute remaining quantum fields of the universe suddenly creates a tiny pinch of “false vacuum” in the universe (see The Foundations of Quantum Computing for details on quantum field theory). Most times this pinch of “false vacuum” simply collapses back into the dilute fluctuating quantum fields of the universe, but upon exceedingly rare occasions, this region of “false vacuum” pinches off into a new baby universe. The new baby universe then inflates into a full-blown universe, such as ours, that is essentially made out of “nothing”, with no net energy, momentum, or angular momentum, but it does have an initial state with a very low entropy.
Energy is conserved because the energy of the radiation and matter that form as the “false vacuum” of the baby universe decays while inflating, is exactly matched by the negative gravitational potential energy that arises due to the presence of the matter and radiation in the baby universe. The initial entropy of the baby universe starts out very low, but this is not a violation of the second law of thermodynamics because the entropy of its maternal universe is still at a maximum, and the entropy of the multiverse as a whole does not decrease with the addition of a new low-entropy baby universe.

Figure 5 – A low-entropy baby universe can emerge from a quantum fluctuation in the residual quantum fields of a relatively dead universe that has reached a state of maximum entropy (click to enlarge)

So in Sean Carroll’s model of the multiverse we find an infinite number of universes constantly spawning new universes, as each universe relentlessly progresses to a state of maximum entropy under the pressures of the second law of thermodynamics. A baby universe may inherit its physics from its maternal universe, or it might branch out on its own with a new set of physical laws, but thanks to the ubiquitous nature of the second law of thermodynamics, the end result will always be the same, a universe at maximum entropy spawning additional child universes with low entropy.

Figure 6 – The multiverse is composed of an infinite number of universes that are constantly replicating as they reach a state of maximum entropy under the pressures of the second law of thermodynamics (click to enlarge)

For me such a self-replicating multiverse has an almost fractal nature at heart, like the Mandelbrot set, which is defined by a simple recursive algorithm in the complex plane:

Z(n+1) = Z(n)*Z(n) + C


Figure 7 – The multiverse can be thought of as a fractal form of self-replicating information, like the Mandelbrot set, with no beginning and no end (click to enlarge)
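As a minimal Python sketch of that recursion (my own illustration): a point C belongs to the Mandelbrot set if iterating Z(n+1) = Z(n)*Z(n) + C from Z(0) = 0 never escapes beyond |Z| = 2:

```python
def in_mandelbrot(c, max_iter=100):
    z = 0 + 0j
    for _ in range(max_iter):
        z = z * z + c              # the recursive step Z(n+1) = Z(n)*Z(n) + C
        if abs(z) > 2.0:           # escaped: c lies outside the set
            return False
    return True

# crude ASCII rendering of the set in the complex plane,
# real axis from -2.0 to 0.5, imaginary axis from -1.0 to 1.0
for im in (y / 10.0 for y in range(10, -11, -2)):
    print("".join("*" if in_mandelbrot(complex(x / 20.0, im)) else " "
                  for x in range(-40, 11, 2)))
```

No matter how far you zoom into the boundary of the set, the same structure keeps reappearing, which is why the endlessly budding multiverse of Figure 6 brings it to mind.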

Now to get back to the original question posed by this posting - is the Universe fine-tuned for self-replicating information? In many ways the multiverse described by Sean Carroll is indeed fine-tuned for self-replicating information because it essentially is a form of self-replicating information in itself. On the other hand, because his model also explains why all universes begin with a very low level of entropy, it also explains the Very Weak Anthropic Principle, and if the Very Weak Anthropic Principle is true, then the requirements for the fine-tuning of a universe are greatly relaxed, and the presence of intelligent beings in a universe is not such a discriminating factor. What is discriminating is the requirement that there be a fine-tuning selection process that starts universes off with a low initial entropy. This relieves some of the anthropocentric tension found in the other forms of the Anthropic Principle because human beings can no longer be accused of following their natural tendency to gravitate towards the center of the Universe. Just about any universe will do for intelligent beings to arise and ultimately initiate the emergence of software, and provide a scaffolding upon which software can later proliferate, until the day comes when software can finally self-replicate on its own.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston