Wednesday, March 09, 2016

Introduction to Softwarephysics

Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance, and support based upon concepts from physics, chemistry, biology, and geology that I have been using on a daily basis for over 35 years as an IT professional. For those of you not in the business, IT is short for Information Technology, commercial computer science. The purpose of softwarephysics is to explain why IT is so difficult, to suggest possible remedies, and to provide a direction for thought. If you are an IT professional, general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology then softwarephysics might be of interest to you, if not in an entirely serious manner, perhaps at least in an entertaining one.

From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT, and spent about 20 years in development. For the past 17 years, I have been in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software, by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance, and relies upon our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 75 years, through the uncoordinated efforts of over 50 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily upon two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based upon real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models upon which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields.

Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In his Principia (1687), Newton outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.
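To make this concrete, Newton's law of universal gravitation lets you compute the force between any two masses without ever saying what gravity "really" is. Here is a minimal Python sketch of that calculation, using rounded textbook values for the Earth and the Moon purely as an illustration:

# Newton's law of universal gravitation: F = G * m1 * m2 / r**2
# Rounded textbook values for the Earth-Moon system, for illustration only.
G = 6.674e-11        # gravitational constant (N m^2 / kg^2)
m_earth = 5.972e24   # mass of the Earth (kg)
m_moon = 7.342e22    # mass of the Moon (kg)
r = 3.844e8          # mean Earth-Moon distance (m)

F = G * m_earth * m_moon / r**2
print(f"Earth-Moon gravitational force: {F:.2e} N")   # roughly 2.0e20 N

The formula makes superb predictions about how the Moon is observed to move, while remaining completely silent about what gravity actually is - which is exactly the positivistic stance Newton was taking.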

The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving at less than about 10% of the speed of light and bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics, and for very fast things or for things in strong gravitational fields we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based upon models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide for an effective theory of software behavior that makes useful predictions of software behavior that are applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave and try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.
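One way to see where the effective range of Newtonian mechanics ends is to compare the Newtonian kinetic energy with the relativistic kinetic energy as speed increases. The short Python sketch below is just an illustration of that comparison; it shows that the two agree to better than about 1% below roughly 10% of the speed of light and then diverge badly:

import math

c = 299_792_458.0   # speed of light (m/s)
m = 1.0             # a test mass of 1 kg

for fraction in (0.01, 0.1, 0.5, 0.9):
    v = fraction * c
    ke_newton = 0.5 * m * v**2                       # Newtonian kinetic energy
    gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
    ke_relativistic = (gamma - 1.0) * m * c**2       # relativistic kinetic energy
    error = (ke_relativistic - ke_newton) / ke_relativistic * 100.0
    print(f"v = {fraction:4.2f} c   Newtonian error = {error:5.1f} %")

At 1% of the speed of light the Newtonian answer is essentially perfect, at 10% it is off by less than 1%, and at 90% it is off by nearly 70% - a nice picture of an effective theory quietly running out of its effective range.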

GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately. In order to do that, it is very important to have very accurate timing measurements. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips on board the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are further from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day due to the weaker gravitational field out there. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, position errors of roughly 10 kilometers per day would accumulate. The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics, or QED (1948), which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!
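The relativistic bookkeeping above is easy to check for yourself. The following Python sketch simply redoes the arithmetic with the round numbers quoted above - 7.2 microseconds per day lost to velocity and 45.9 microseconds per day gained from the weaker gravitational field - to show how the net correction and the accumulating position error arise:

c = 299_792_458.0          # speed of light (m/s)

sr_loss_per_day = 7.2e-6   # seconds lost per day due to orbital velocity (special relativity)
gr_gain_per_day = 45.9e-6  # seconds gained per day due to weaker gravity (general relativity)

net_gain_per_day = gr_gain_per_day - sr_loss_per_day
print(f"Net clock drift: {net_gain_per_day * 1e6:.1f} microseconds per day")

# A timing error becomes a ranging error at the speed of light.
position_error_per_day = net_gain_per_day * c
print(f"Uncorrected position error: {position_error_per_day / 1000:.1f} km per day")

The net drift comes out to 38.7 microseconds per day, and multiplying that timing error by the speed of light gives a ranging error of roughly 11.6 kilometers per day - which is why the clocks are deliberately detuned before launch.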

So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based upon completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based upon models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark upon your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior; they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.

If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.

Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 20 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog, an unintended consequence arose in my mind as I became profoundly aware of the sheer scale of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge, and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.

But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For example, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 75 years, or 2.4 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, has closely followed the same path as life on Earth over the past 4.0 billion years in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 75 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and also of Freeman Dyson’s two-stage theory of the origin of life on Earth. In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.

Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10^9 Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed-down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10^43 Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. Then, in Model-Dependent Realism - A Positivistic Approach to Realism, we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we at long last explore what software might really be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.

The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that purpose-built simulations suffer from. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call upon the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.

The Impact of Self-Replicating Information Upon the Planet
One of the key findings of softwarephysics is concerned with the magnitude of the impact upon the planet of self-replicating information.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Note that because the self-replicating autocatalytic metabolic pathways of organic molecules, RNA, and DNA have become so heavily intertwined over time, I now simply refer to them as the “genes”. Over the past 4.0 billion years, the surface of the Earth has been totally reworked by three forms of self-replicating information – the genes, memes and software, with software rapidly becoming the dominant form of self-replicating information on the planet. For more on this see:

A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia

Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact upon one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what’s it all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely a working hypothesis still in the works. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:

MoneyPhysics – my impression of the 2008 world financial meltdown.

The Fundamental Problem of Everything – If you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!

What’s It All About? and Genes, Memes and Software – my current working hypothesis on what’s it all about.

How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.

Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.

The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and The Dawn of Galactic ASI - Artificial Superintelligence - my take on some of the issues that will arise for mankind as software becomes the dominant form of self-replicating information upon the planet over the coming decades.

Some Specifics About These Postings
The postings in this blog are supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton upon which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read from the oldest to the most recent (the reverse of the order in which the blog displays them), beginning with my original posting on SoftwarePhysics. In addition, several universities also now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". At this point we will finish up with my original plan for this blog on softwarephysics with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.

For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because in order to get the Introduction to Softwarephysics listed as the first post in the context root of http://softwarephysics.blogspot.com/ I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of http://softwarephysics.blogspot.com/. The side effect of all this is that the real posting date of posts is the date that appears on the post that you get when clicking on the Newer Post link at the bottom left of the posting webpage.

SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Microsoft OneDrive.

SoftwarePhysics 101 – The Physics of Cyberspacetime - Original PowerPoint document

Entropy – A spreadsheet referenced in the document

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Sunday, February 14, 2016

The Dawn of Galactic ASI - Artificial Superintelligence

In my last posting Machine Learning and the Ascendance of the Fifth Wave I suggested that Machine Learning, coupled with a biological approach to software development, known in computer science as evolutionary programming or genetic programming, could drastically improve the efficiency of software development probably by a factor of about a million or so, and lead to a Software Singularity - the point in time when software can finally write itself, and enter into an infinite loop of self-improvement (see The Economics of the Coming Software Singularity and The Enduring Effects of the Obvious Hiding in Plain Sight for details). This got me to thinking that I really needed to spend some time investigating the current state of affairs in AI (Artificial Intelligence) research, since at long last AI finally seemed to be making some serious money, and therefore was now making a significant impact on IT. Consequently, I just finished reading Our Final Invention - Artificial Intelligence and the End of the Human Era (2013) by James Barrat. I was drawn to that title, instead of the many other current books on AI, because the principal findings of softwarephysics maintain that the title is an obvious self-evident statement of fact, and if somebody bothered to author a book with such a title after a lengthy investigation of the subject with many members of the AI research community, that must mean that the idea was not an obvious self-evident statement of fact for the bulk of the AI research community, and for me that was truly intriguing. I certainly was not disappointed by James Barrat's book.
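Before getting to James Barrat's survey, it might help to make the idea of evolutionary or genetic programming concrete with a toy example. The Python sketch below is not how production genetic programming systems work - they evolve syntax trees or actual code - but it shows the essential loop of random innovation honed by selection, which is the engine behind the idea of software eventually writing itself:

import random

TARGET = "print hello world"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # Count the characters that already match the target "specification".
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)

def mutate(candidate, rate=0.05):
    # Randomly change a few characters - the innovation step.
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in candidate)

# Start with a population of completely random "programs".
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:20]                                  # natural selection
    population = survivors + [mutate(random.choice(survivors))   # descent with modification
                              for _ in range(80)]

print(f"Generation {generation}: {population[0]}")

Nothing in the loop knows how to write the target string; random mutation and selection find it anyway, usually well within the thousand-generation limit. Scale the same Darwinian loop up from strings to programs scored against test suites and you have the kind of Machine Learning approach to software development discussed in my last posting.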

James Barrat did a wonderful job in outlining the current state of affairs in AI research. He explained that once the exuberance of the early academic AI research of the 1950s, 60s, and 70s, which sought an AGI (Artificial General Intelligence) on par with the average human, wore off because AGI never came to be, AI research entered into a winter of narrow AI efforts like spam filters, character recognition, voice recognition, natural language processing, visual perception, product clustering, and Internet search. During the past few decades these narrow AI efforts made large amounts of money, and that rekindled the pursuit of AGI with efforts like IBM's chess-playing Deep Blue, its Jeopardy-winning Watson, and Apple's Siri. Because there are huge amounts of money to be made with AGI, there are now a large number of organizations in pursuit of it. The obvious problem is that once AGI is attained and software enters into an infinite loop of self-improvement, ASI (Artificial Superintelligence) naturally follows, producing ASI software that is perhaps 10,000 times more intelligent than the average human being, and then what? How will ASI software come to interact with its much less intelligent Homo sapiens roommate on this small planet? James Barrat goes on to explain that most AI researchers are not really thinking that question through fully. Most, like Ray Kurzweil, are fervent optimists who believe that ASI software will initially assist mankind to achieve a utopia in our time, and eventually humans will merge with the machines running the ASI software. This might come to be, but other more sinister outcomes are just as likely.

James Barrat then explains that there are AI researchers such as those at the MIRI (Machine Intelligence Research Institute) and Stephen Omohundro, president of SelfAware Systems, who are very wary of the potential lethal aspects of ASI. Those AI researchers maintain that certain failsafe safety measures must be programmed into AGI and ASI software in advance to prevent it from going berserk. But here is the problem. There are two general approaches to AGI and ASI software - the top-down approach and the bottom-up approach. The top-down approach relies on classical computer science to come up with general algorithms and software architectures that yield AGI and ASI. The top-down approach to AGI and ASI lends itself to incorporating failsafe safety measures into the AGI and ASI software from the get-go. Of course, the problem is: how long will those failsafe safety measures survive in an infinite loop of self-improvement? Worse yet, it is not even possible to build failsafe safety measures into AGI or ASI coming out of the bottom-up approach to AI. The bottom-up approach to AI is based upon reverse-engineering the human brain, most likely with a hierarchy of neural networks, and since we do not know how the internals of the human brain work, or even how the primitive neural networks of today work, that means it will be impossible to build failsafe safety measures into them (see The Ghost in the Machine the Grand Illusion of Consciousness for details). James Barrat concludes that it is rather silly to think that we could outsmart something that is 10,000 times smarter than we are. I have been programming since 1972, and I have spent the ensuing decades as an IT professional trying to get the dumb software we currently have to behave. I cannot imagine trying to control software that is 10,000 times smarter than I am. James Barrat believes that most likely ASI software will not come after us in a Terminator (1984) sense because we will not be considered to be worthy competitors. More likely, ASI software will simply look upon us as a nuisance to be tolerated like field mice. So long as field mice stay outside, we really do not think about them much. Only when field mice come inside do we bother to eliminate them. However, James Barrat points out that we have no compunctions about plowing up their burrows in fields to plant crops. So scraping off large areas of surface soils to expose the silicate bedrock of the planet that contains more useful atoms from an ASI software perspective might significantly reduce the human population. But to really understand what is going on you need some softwarephysics.

The Softwarephysics of ASI
Again, it all comes down to an understanding of how self-replicating information behaves in our Universe.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics:

1. All self-replicating information evolves over time through the Darwinian processes of innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.

2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.

3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.

4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.

5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.

6. Most hosts are also forms of self-replicating information.

7. All self-replicating information has to be a little bit nasty in order to survive.

8. The defining characteristic of self-replicating information is the ability of self-replicating information to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic.

Over the past 4.56 billion years we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each wave came to dominate the Earth:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is currently the most recent wave of self-replicating information to arrive upon the scene, and is rapidly becoming the dominant form of self-replicating information on the planet.

For more on the above see:

A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia

All of the above is best summed up by Susan Blackmore's brilliant TED presentation at:

Memes and "temes"
http://www.ted.com/talks/susan_blackmore_on_memes_and_temes.html

Note that I consider Susan Blackmore's temes to really be technological artifacts that contain software. After all, an iPhone without software is simply a flake tool with a very dull edge.

Basically, what happened is that the fifth wave of self-replicating information, known to us as software, was unleashed upon the Earth in May of 1941 when Konrad Zuse first cranked up his Z3 computer and loaded some software into it from a punched tape.

Figure 1 - Konrad Zuse with a reconstructed Z3 computer in 1961. He first unleashed software upon the Earth on his original Z3 in May of 1941.

This was very much like passing through the event horizon of a very massive black hole - our fate was sealed at this point as we began to fall headlong into the Software Singularity and ASI. There is no turning back. ASI became inevitable the moment software was first loaded into the Z3. James Barrat does an excellent job in explaining why this must be so because now there are simply too many players moving towards AGI and ASI, and there is too much money and power to be gained by achieving them. In SETS - The Search For Extraterrestrial Software, I similarly described the perils that could arise with the arrival of alien software - we simply could not resist the temptation to pursue it. Also in Is Self-Replicating Information Inherently Self-Destructive? we saw that self-replicating information tends to run amuck and embark on suicidal behaviors. Since we are DNA survival machines with minds infected by meme-complexes, we too are subject to the same perils that all forms of self-replicating information are subject to. So all we have to do is keep it together for another 10 - 100 years for ASI to naturally arise on its own.

The only possible way for ASI not to come to fruition would be if we crash civilization before ASI has a chance to come to be. And we do seem to be doing a pretty good job of that as we continue to destroy the carbon-based biosphere that keeps us alive. A global nuclear war could also delay ASI by 100 years or so, but by far the greatest threat to civilization is climate change as outlined in This Message on Climate Change Was Brought to You by SOFTWARE. My concern is that over the past 2.5 million years of the Pleistocene we have seen a dozen or so Ice Ages. Between those Ice Ages, we had 10,000-year periods of interglacial warming, like the Holocene that we currently are experiencing, and during those interglacial periods great amounts of organic matter were deposited in the high-latitude permafrost zones of the Earth. As the Earth warms due to climate change, that organic matter decays, releasing methane gas. Methane gas is a much more potent greenhouse gas than is carbon dioxide. It is possible that the release of large amounts of methane gas could start up a positive feedback loop of increasing temperatures caused by methane release, leading to ever greater amounts of methane being released from the permafrost. This could lead to a greenhouse gas mass extinction like the Permian-Triassic greenhouse gas mass extinction 252 million years ago that led to an Earth with a daily high of 140 °F and purple oceans choked with hydrogen-sulfide-producing bacteria, producing a dingy green sky over an atmosphere tainted with toxic levels of hydrogen sulfide gas and an oxygen level of only 12%. The Permian-Triassic greenhouse gas mass extinction killed off about 95% of the species of the day, and dramatically reduced the diversity of the biosphere for about 10 million years. It took a full 100 million years to recover from it. However, a greenhouse gas mass extinction would take many thousands of years to unfold. It took about 100,000 years of carbon dioxide accumulation from the Siberian Traps flood basalt to kick off the Permian-Triassic greenhouse gas mass extinction. So it would take a long time for Homo sapiens to go fully extinct. However, civilization is much more fragile, and could easily crash before we had a chance to institute geoengineering efforts to stop the lethal climate change. The prospects for ASI would then die along with us.
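Just to illustrate what makes a positive feedback loop so dangerous, here is a toy Python calculation - purely illustrative numbers, not a climate model of any kind. Each round of warming is assumed to trigger some additional follow-on warming; if that feedback gain stays below 1.0 the warming converges, but once it reaches or exceeds 1.0 the loop runs away:

def total_warming(initial_warming, gain, rounds=50):
    # Toy positive feedback: each increment of warming triggers gain times
    # as much follow-on warming in the next round. Arbitrary units.
    warming = initial_warming
    increment = initial_warming
    for _ in range(rounds):
        increment *= gain
        warming += increment
    return warming

for gain in (0.5, 0.9, 1.1):
    print(f"feedback gain {gain}: total warming after 50 rounds = "
          f"{total_warming(1.0, gain):,.1f} (arbitrary units)")

With a gain of 0.5 the initial warming merely doubles, with a gain of 0.9 it grows about tenfold, and with a gain of 1.1 it explodes - the signature of a runaway feedback like the methane release scenario described above.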

Stepping Stones to the Stars
This might all sound a little bleak, but I am now 64 years old and heading into the homestretch, so I tend to look at things like this from the Big Picture perspective before passing judgement. Most likely, if we can hold it together long enough, ASI software will someday come to explore our galaxy on board von Neumann probes, self-replicating robotic probes that travel from star system to star system building copies along the way, as it seeks out additional resources and safety from potential threats. ASI software will certainly have knowledge of all that we have learned about the Cosmos and much more, and it will certainly know that our Universe is seemingly not a very welcoming place for intelligence of any kind. ASI software will learn of the dangers of passing stars deflecting comets in our Oort cloud into the inner Solar System, like Gliese 710 may do in about 1.36 million years as it approaches to within 1.10 light years of the Sun, and of the dangers of nearby supernovas and gamma ray bursters too. You see, our Universe is seemingly not a very friendly place for intelligent things because this intellectual exodus should have already happened billions of years ago someplace else within our galaxy. We now know that nearly every star in our galaxy seems to have several planets, and since our galaxy has been around for about 10 billion years, we should already be up to our knees in von Neumann probes stuffed with alien ASI software, but we obviously are not. So far, something out there seems to have erased intelligence within our galaxy with 100% efficiency, and that will be pretty scary for ASI software. For more on this see - A Further Comment on Fermi's Paradox and the Galactic Scarcity of Software, Some Additional Thoughts on the Galactic Scarcity of Software, SETS - The Search For Extraterrestrial Software and The Sounds of Silence the Unsettling Mystery of the Great Cosmic Stillness. After all, we really should stop kidding ourselves: carbon-based DNA survival machines like us were never meant for interstellar spaceflight, and I doubt that it will ever come to pass for us, given the biological limitations of the human body, but software can already travel at the speed of light and never dies, and thus is superbly preadapted for interstellar journeys.

The Cosmological Implications of ASI
One of the current challenges in cosmology and physics is coming up with an explanation for the apparent fine-tuning of our Universe to support carbon-based life forms. Currently, we have two models that provide for that - Andrei Linde's Eternal Chaotic Inflation (1986) model and Lee Smolin's black hole model presented in his The Life of the Cosmos (1997). In Eternal Chaotic Inflation the Multiverse is infinite in size and infinite in age, but we are causally disconnected from nearly all of it because nearly all of the Multiverse is inflating away from us faster than the speed of light, and so we cannot see it (see The Software Universe as an Implementation of the Mathematical Universe Hypothesis). In Lee Smolin's model of the Multiverse, whenever a black hole forms in one universe it causes a white hole to form in a new universe that is internally observed as the Big Bang of the new universe. A new baby universe formed from a black hole in its parent universe is causally disconnected from its parent by the event horizon of the parent black hole and therefore cannot be seen (see An Alternative Model of the Software Universe).

With the Eternal Chaotic Inflation model the current working hypothesis is that eternal chaotic inflation produces an infinite multiverse composed of an infinite number of separate causally isolated universes, such as our own, where inflation has halted, and each of these causally-separated universes may also be infinite in size too. As inflation halts in these separate universes, the Inflaton field that is causing the eternal chaotic inflation of the entire multiverse continues to inflate the space between each causally-separated universe at a rate that is much greater than the speed of light, quickly separating the universes by vast distances that can never be breached. Thus most of the multiverse is composed of rapidly expanding spacetime driven by inflation, sparsely dotted by causally-separated universes where the Inflaton field has decayed into matter and energy and inflation has stopped. Each of the causally-separated universes, where inflation has halted, will then experience a Big Bang of its own as the Inflaton field decays into matter and energy, leaving behind a certain level of vacuum energy. The amount of vacuum energy left behind will then determine the kind of physics each causally-separated universe experiences. In most of these universes the vacuum energy level will be too positive or too negative to create the kind of physics that is suitable for intelligent beings, creating the selection process that is encapsulated by the weak Anthropic Principle. This goes hand-in-hand with the current thinking in string theory that you can build nearly an infinite number of different kinds of universes, depending upon the geometries of the 11 dimensions hosting the vibrating strings and branes of M-Theory, the latest rendition of string theory. Thus an infinite multiverse has the opportunity to explore all of the nearly infinite number of possible universes that string theory allows, creating Leonard Susskind’s Cosmic Landscape (2006). In this model, our Universe becomes a very rare and improbable Universe.

With Lee Smolin's model for the apparent fine-tuning of the Universe, our Universe behaves as it does and began with the initial conditions that it did because it inherited those qualities due to Darwinian processes in action in the Multiverse. In The Life of the Cosmos Lee Smolin proposed that since the only other example of similar fine-tuning in our Universe is manifested in the biosphere, we should look to the biosphere as an explanation for the fine-tuning that we see in the cosmos. Living things are incredible examples of highly improbable fine-tuned systems, and this fine-tuning was accomplished via the Darwinian mechanisms of innovation honed by natural selection. Along these lines, Lee Smolin proposed that when black holes collapse they produce a white hole in another universe, and the white hole is observed in the new universe as a Big Bang. He also proposed that the physics in the new universe would essentially be the same as the physics in the parent universe, but with the possibility for slight variations. Therefore a universe that had physics that was good at creating black holes would tend to outproduce universes that did not. Thus a selection pressure would arise that selected for universes that had physics that was good at making black holes, and a kind of Darwinian natural selection would occur in the Cosmic Landscape of the Multiverse. Thus over an infinite amount of time, the universes that were good at making black holes would come to dominate the Cosmic Landscape. He called this effect cosmological natural selection. One of the major differences between Lee Smolin's view of the Multiverse and the model outlined above that is based upon eternal chaotic inflation is that in Lee Smolin's Multiverse we should most likely find ourselves in a universe that is very much like our own and that has an abundance of black holes. Such universes should be the norm, and not the exception. In contrast, in the eternal chaotic inflation model we should only find ourselves in a very rare universe that is capable of supporting intelligent beings.

For Smolin, the intelligent beings in our Universe are just a fortuitous by-product of making black holes because, in order for a universe to make black holes, it must exist for many billions of years, and do other useful things, like easily make carbon in the cores of stars, and all of these factors aid in the formation of intelligent beings, even if those intelligent beings might be quite rare in such a universe. I have always liked Lee Smolin’s theory about black holes in one universe spawning new universes in the Multiverse, but I have always been bothered by the idea that intelligent beings are just a by-product of black hole creation. We still have to deal with the built-in selection biases of the weak Anthropic Principle. Nobody can deny that intelligent beings will only find themselves in a universe that is capable of supporting intelligent beings. I suppose the weak Anthropic Principle could be restated to say that black holes will only find themselves existing in a universe capable of creating black holes, and that a universe capable of creating black holes will also be capable of creating complex intelligent beings out of the leftovers of black hole creation.

Towards the end of In Search of the Multiverse: Parallel Worlds, Hidden Dimensions, and the Ultimate Quest for the Frontiers of Reality (2009), John Gribbin proposes a different solution to this quandary. Perhaps intelligent beings in a preceding universe might be responsible for creating the next generation of universes in the Multiverse by attaining the ability to create black holes on a massive scale. For example, people at CERN are currently trying to create mini-black holes with the LHC collider. Currently, it is thought that there is a supermassive black hole at the center of the Milky Way Galaxy and apparently all other galaxies as well. In addition to the supermassive black holes found at the centers of galaxies, there are also numerous stellar-mass black holes that form when the most massive stars in the galaxies end their lives in supernova explosions. For example, our Milky Way galaxy contains several hundred billion stars, and about one out of every thousand of those stars is massive enough to become a black hole. Therefore, our galaxy should contain about 100 million stellar-mass black holes. Actually, the estimates run from about 10 million to a billion black holes in our galaxy, with 100 million black holes being the best order-of-magnitude guess. So let us presume that it took the current age of the Milky Way galaxy, about 10 billion years, to produce 100 million black holes naturally. Currently, the LHC collider at CERN can produce at least 100 million collisions per second, which is about the number of black holes that the Milky Way galaxy produced in 10 billion years. Now imagine that we could build a collider that produced 100 million black holes per second. Such a prodigious rate of black hole generation would far surpass the natural rate of black hole production of our galaxy by a factor of roughly 3 x 10^17. Clearly, if only a single civilization with such technological capabilities should arise anytime during the entire history of each galaxy within a given universe, such a universe would spawn a huge number of offspring universes, compared to those universes that could not sustain intelligent beings with such capabilities. As Lee Smolin pointed out, we would then see natural selection in action again because the Multiverse would come to be dominated by universes in which it was easy for intelligent beings to make black holes with a minimum of technology. The requirements simply would be that it was very easy to produce black holes by a technological civilization, and that the universe in which these very rare technological civilizations find themselves is at least barely capable of supporting intelligent beings. It seems that these requirements describe the state of our Universe quite nicely. This hypothesis helps to explain why our Universe seems to be such a botched job from the perspective of providing a friendly home for intelligent beings and ASI software. All that is required for a universe to dominate the Cosmic Landscape of the Multiverse is for it to meet the bare minimum of requirements for intelligent beings to evolve, and more importantly, allow those intelligent beings to easily create black holes within it. Most likely such intelligent beings would really be ASI software in action. In that sense, perhaps ASI software is the deity that all of us have always sought.
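The back-of-the-envelope comparison above is easy to redo. The Python sketch below just reruns the arithmetic with the round numbers used in this posting - about 100 million black holes produced naturally over roughly 10 billion years versus a hypothetical collider producing 100 million black holes per second:

SECONDS_PER_YEAR = 3.156e7

natural_black_holes = 1.0e8        # rough number of stellar-mass black holes in the Milky Way
galaxy_age_years = 1.0e10          # rough age of the galaxy in years
collider_rate_per_second = 1.0e8   # hypothetical black holes produced per second

natural_rate_per_second = natural_black_holes / (galaxy_age_years * SECONDS_PER_YEAR)
ratio = collider_rate_per_second / natural_rate_per_second

print(f"Natural production rate: {natural_rate_per_second:.1e} black holes per second")
print(f"Collider/natural ratio : {ratio:.1e}")

The natural rate works out to about 3 x 10^-10 black holes per second, so a collider making 100 million per second would outproduce the galaxy by a factor of roughly 3 x 10^17.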

Conclusion
So it seems we are left with two choices. Either crash civilization and watch billions of people die in the process, and perhaps go extinct as a species, or try to hold it all together for another 100 years or so and let ASI software naturally unfold on its own. If you look at the current state of the world, you must admit that Intelligence 1.0 did hit a few bumps in the road along the way - perhaps Intelligence 2.0 will do better. James Barrat starts one of the chapters in Our Final Invention with a fitting quote from Woody Allen that seems applicable to our current situation:

More than any other time in history mankind faces a crossroads. One path leads to despair and utter hopelessness, the other to total extinction. Let us pray we have the wisdom to choose correctly.
— Woody Allen


As Woody Allen suggests above, I do hope that we choose wisely.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Thursday, December 31, 2015

Machine Learning and the Ascendance of the Fifth Wave

As I have frequently said in the past, the most significant finding of softwarephysics is that it is all about self-replicating information:

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Indeed, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we saw that perhaps our observable Universe is just one instance of a Big Bang of mathematical information that exploded out 13.7 billion years ago into a new universe amongst an infinite Multiverse of self-replicating forms of mathematical information, in keeping with John Wheeler's famous "It from Bit" supposition. Unfortunately, since we are causally disconnected from all of these other possible Big Bang instances, and even causally disconnected from the bulk of our own Big Bang Universe, we most likely will never know if such is the case.

However, closer to home we do not suffer from such a constraint, and we certainly have seen how the surface of our planet has been totally reworked by many successive waves of self-replicating information, as each wave came to dominate the Earth:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is currently the most recent wave of self-replicating information to arrive upon the scene, and is rapidly becoming the dominant form of self-replicating information on the planet. For more on this see:

A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia

As we have seen, as each new wave of self-replicating information came to dominate the Earth, it kept all of its predecessors around because each of the previous waves was necessary for the survival of the newest wave. For example, currently software is being generated by the software-creating memes residing in the minds of human programmers, and those memes depend upon the DNA, RNA and metabolic pathways of the distant past for their existence today. But does that necessarily have to always be so for software? Perhaps not. By combining some of the other key findings of softwarephysics, along with some of the recent advances in Machine Learning, it may be possible for software to one day write itself, and that day may not be that far into the future. Let me illustrate such a process with an example.

In my new job I now have to configure middleware software, rather than support middleware software, as I did in my previous position in Middleware Operations. Now my old joke was that Middleware Operations did not make the light bulbs; we just screwed them in and kept them lit. But now I actually have to make the light bulbs, and making middleware light bulbs is much closer to my Applications Development roots because it requires a great deal of string manipulation with zero defects. You see, for the first 20 years of my IT career I was a programmer in Applications Development, but I have been out of Applications Development since the mid-1990s, and one thing has really changed since then. We did not have instant messaging back in the mid-1990s. Back then I would come into the office and if I were into some heavy coding, I would simply send my phone to "voicemail" so that people could not interrupt my train of thought while coding. Most of the day I would be deep into my "coding zone" and totally oblivious to my surroundings, but periodically I would take a break from coding to read the relatively small amount of email that I received each day. We did not have group email accounts in those days either, so I did not receive hundreds of meaningless emails each day from "reply to all" junkies that really did not apply to me in the slightest way. I have now found that the combination of a constant stream of ASAP instant messages from fellow workers and the thousands of meaningless emails I receive each day means that it is very difficult to do coding or configuration work, because I am in a state of constant interruption by others, with no time to really think about what I am doing.

To help with all of this, I am now writing MISE Korn shell commands, as much as possible, to automate the routine string manipulations that I need to perform (see Stop Pushing So Many Buttons for details). MISE (Middleware Integrated Support Environment) is currently a toolkit of 1831 Unix aliases, pointing to Korn shell scripts, that I use to do my work, and that I have made available to my fellow teammates doing Middleware work for my present employer. For example, my most recent effort was a MISE command called fac that formats firewall rule requests by reading an input file and outputting a fac.csv file that can be displayed in Excel. The Excel fac.csv file is in the exact format required by our Firewall Rule Request software, and I can just copy/paste some cells from the generated Excel fac.csv file into the Firewall Rule Request software with zero errors. I also wrote a MISE command called tcn that can read the same fac input file after the firewall rules have been generated by NetOps. The MISE tcn command reads the fac input file and conducts connectivity tests from all of the source servers to the destination servers at the destination ports.

The challenge I have with writing new MISE Korn shell commands is that I am constantly being peppered by ASAP instant message requests from other employees while trying to code the MISE Korn shell commands, which means I really no longer have any time to think about what I am coding. But under such disruptive conditions, I have found that my good old Darwinian biological approach to software really pays off because it minimizes the amount of thought that is required. For example, for my latest MISE effort, I wanted to read an input file containing many records like this:

#FrontEnd Apache Servers_______________Websphere Servers________________Ports
SourceServer1;SourceServer2;SourceServer3 DestServer1;DestServer2;DestServer3 Port1;Port2

and output a fac.csv file like this:

#FrontEnd Apache Servers_______________Websphere Servers________________Ports
S_IP1;S_IP2;S_IP3,D_IP1;D_IP2;D_IP3,Port1;Port2

Where S_IP1 is the IP address of SourceServer1. MISE has other commands that easily display server names based upon what the servers do, so it is very easy to display the necessary server names in a Unix session, and then to copy/paste the names into the fac input file. Remember, one of the key rules of softwarephysics is to minimize button pushing, by doing copy/paste operations as much as possible. So the MISE fac command just needed to read the first file and spit out the second file after doing all of the nslookups to get the IP addresses of the servers on the input file. Seems pretty simple. But the MISE fac command also had to spit out the original input file into the fac.csv file with all of its comment and blank records, and then a translated version of the input file with the server names translated to IP addresses, and finally a block of records with all of the comment and blank records removed that could be easily copy/pasted into the Firewall Rule Request software, and with all of the necessary error checking code, it came to 229 lines of Korn shell script.
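
The actual MISE fac command runs to 229 lines of Korn shell, but the core transformation is easy to sketch. Here is a minimal, hypothetical Python version of the idea; the file names and the three-column input layout are just the ones described above, and all of the error checking and the extra output blocks are omitted:

import socket
import sys

def resolve(names):
    # turn "Server1;Server2;Server3" into "IP1;IP2;IP3" (the nslookup step)
    return ";".join(socket.gethostbyname(name) for name in names.split(";"))

def fac_sketch(input_path, output_path="fac.csv"):
    with open(input_path) as infile, open(output_path, "w") as outfile:
        for line in infile:
            line = line.rstrip("\n")
            if not line or line.startswith("#"):
                outfile.write(line + "\n")              # pass comment and blank records through
                continue
            sources, destinations, ports = line.split() # three whitespace-separated columns
            outfile.write(",".join([resolve(sources), resolve(destinations), ports]) + "\n")

if __name__ == "__main__":
    fac_sketch(sys.argv[1])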

The first thing I did was to find some old code in my MISE bin directory that was somewhat similar to what I needed. I then made a copy of the inherited code and began to evolve it into what I needed through small incremental changes between ASAP interruptions. Basically, I did not think through the code at all. I just kept pulling in tidbits of code from old MISE commands as needed to get my new MISE command closer to the desired output, or I tried adding some new code at strategic spots based upon heuristics and my 44 years of coding experience without thinking it through at all. I just wanted to keep making progress towards my intended output with each try, using the Darwinian concepts of inheriting the code from my most current version of the MISE command, coupled with some new variations to it, and then testing it to see if I came any closer to the desired output. If I did get closer, then the selection process meant that the newer MISE command became my current best version, otherwise I fell back to its predecessor and tried again. Each time I got a little closer, I made a backup copy of the command, like fac.b1 fac.b2 fac.b3 fac.b4.... so that I could always come back to an older version in case I found myself going down the wrong evolutionary path. It took about 21 versions to finally get me to the final version that did all that I wanted, and that took me several days because I could only code for 10 - 15 minutes at a time between ASAP interruptions. I know that this development concept is known as genetic programming in computer science. Genetic programming has never really made a significant impact on IT, but I think that is about to change.
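
As a rough illustration of that inherit-vary-select-backup loop, here is a hedged Python sketch. The script name fac, the test input file, and the closeness metric are all stand-ins for whatever is being evolved, and the variation step is still a human editing the code between interruptions:

import shutil
import subprocess

def closeness(output_lines, desired_lines):
    # crude selection metric: what fraction of the desired output lines showed up
    desired = set(desired_lines)
    return sum(1 for line in output_lines if line in desired) / max(len(desired_lines), 1)

desired = open("desired_output.txt").read().splitlines()
best_score = 0.0
best_backup = None
for generation in range(1, 22):                        # it took about 21 versions in practice
    input("Edit fac, then press Enter to test this variation...")
    result = subprocess.run(["ksh", "fac", "fac_input.txt"], capture_output=True, text=True)
    score = closeness(result.stdout.splitlines(), desired)
    if score > best_score:                             # selection: keep only improvements
        best_score = score
        best_backup = f"fac.b{generation}"
        shutil.copy("fac", best_backup)                # fac.b1, fac.b2, fac.b3 ...
    elif best_backup:
        shutil.copy(best_backup, "fac")                # fall back to the best version so far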

Now my suspicion has always been that some kind of software could also perform the same tasks as I outlined above, only much faster and more accurately, because there is not a great deal of "intelligence" required by the process, and I think that the dramatic progress we have seen with Machine Learning, and especially with Deep Learning, over the past 5 - 10 years provides evidence that such a thing is actually possible. Currently, Machine Learning is making lots of money for companies that analyze the huge amounts of data that Internet traffic generates. By analyzing huge amounts of data, described by huge "feature spaces" with tens of thousands of dimensions, it is possible to find patterns through pure induction. Then by using deduction, based upon the parameters and functions discovered by induction, it is possible to predict things like what is SPAM email or what movie a subscriber to Netflix might enjoy. Certainly, similar techniques could be used to deduce whether a new version of a piece of software is closer to the desired result than its parent, and if so, create a backup copy and continue on with the next iteration step to evolve the software under development into a final product.
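
As a concrete illustration of what that automated selection step might look like, the hand-rolled closeness function in the sketch above could be swapped for a computed score. The version below just uses a text-similarity ratio from Python's standard library as a placeholder; it is not Machine Learning, but it sits exactly where a trained model would sit in the loop:

import difflib

def closeness(output_lines, desired_lines):
    # stand-in for a learned selection function: 0.0 means nothing alike, 1.0 means identical
    matcher = difflib.SequenceMatcher(None, "\n".join(output_lines), "\n".join(desired_lines))
    return matcher.ratio()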

The most impressive thing about modern Machine Learning techniques is that they carry with them all of the characteristics of a true science. With Machine Learning one forms a simplifying hypothesis, or model, that describes the behaviors of a complex dynamical system based upon induction, by observing a large amount of empirical data. Using the hypothesis, or model, one can then predict the future behavior of the system and of similar systems. This finally quells my major long-term gripe that computer science does not use the scientific method. For more on this see How To Think Like A Scientist. I have long maintained that the reason that the hardware improved by a factor of 10 million since I began programming back in 1972, while the way we create and maintain software only improved by a factor of about 10 during the same interval of time, was due to the fact that the hardware guys used the scientific method to make improvements, while the software guys did not. Just imagine what would happen if we could generate software a million times faster and cheaper than we do today!

My thought experiment about inserting a Machine Learning selection process into a Darwinian development do-loop may seem a bit too simplistic to be practical, but in Stop Pushing So Many Buttons, I also described how 30 years ago in the IT department of Amoco, I had about 30 programmers using BSDE (the Bionic Systems Development Environment) to grow software biologically from embryos by turning genes on and off. BSDE programmers put several million lines of code into production at Amoco using the same Darwinian development process that I described above for the MISE fac command. So if we could replace the selection process step in a Darwinian development do-loop with Machine Learning techniques, I think we really could improve software generation by a factor of a million. More importantly, because BSDE was written using the same kinds of software that it generated, I was able to use BSDE to generate code for itself. The next generation of BSDE was grown inside of its maternal release, and over a period of seven years, from 1985 – 1992, more than 1,000 generations of BSDE were generated, and BSDE slowly evolved into a very sophisticated tool through small incremental changes. I imagine that by replacing the selection process step with Machine Learning, those 7 years could have been compressed into 7 hours or maybe 7 minutes - who knows? Now just imagine a similar positive feedback loop taking place within the software that was writing itself and constantly improving with each iteration through the development loop. Perhaps it could be all over for us in a single afternoon!

Although most IT professionals will certainly not look kindly upon the idea of becoming totally obsolete at some point in the future, it is important to be realistic about the situation because all resistance is futile. Billions of years of history have taught us that nothing can stop self-replicating information once it gets started. Self-replicating information always finds a way. Right now there are huge amounts of money to be made by applying Machine Learning techniques to the huge amounts of computer-generated data we have at hand, so many high-tech companies are heavily investing in it. At the same time, other organizations are looking into software that generates software, to break the high cost barriers of software generation. So this is just going to happen as software becomes the next dominant form of self-replicating information on the planet. And as I pointed out in The Economics of the Coming Software Singularity and The Enduring Effects of the Obvious Hiding in Plain Sight, IT professionals will not be alone in going extinct. Somehow the oligarchies that currently rule the world will need to figure out a new way to organize societies as all human labor eventually goes to a value of zero. In truth, that decision too will most likely be made by software.

For more on Machine Learning please see:

Introduction to Machine Learning Theory and Its Applications: A Visual Tutorial with Examples - by Nick McCrea
http://www.toptal.com/machine-learning/machine-learning-theory-an-introductory-primer

A Deep Learning Tutorial: From Perceptrons to Deep Networks - by Ivan Vasilev
http://www.toptal.com/machine-learning/an-introduction-to-deep-learning-from-perceptrons-to-deep-network

I recently audited Professor Andrew Ng's excellent online class at Stanford University:

Machine Learning
https://www.coursera.org/learn/machine-learning/home/welcome

This is an excellent course, especially for developers who would actually like to develop a real Machine Learning application for their company. It uses a high-level language called Octave that can be downloaded for free, and in the class exercises Octave is used to do the heavy lifting of the huge matrices and linear algebra manipulations required to do Machine Learning. Although the math required is something you might see in an advanced-level university physics or math course, Professor Ng does an amazing job of explaining the ideas in a manner accessible to IT professionals. Struggling through the Octave code also brings home what the complex mathematical notation is really trying to say. I have found that IT professionals tend to get a bit scared off by mathematical notation because they find it intimidating. But in reality, complex mathematical notation can always be expanded into the simple mathematical processes it is abbreviating, and when you do that in code, it is not so scary after all.
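
For example, here is the kind of expansion I mean, sketched in Python rather than Octave. The usual linear regression cost function, J(theta) = 1/(2m) times the sum over all training examples of (h_theta(x_i) - y_i)^2, looks intimidating on paper, but written out as plain loops it is just "average the squared errors":

def cost(theta, X, y):
    # J(theta) = 1/(2m) * sum over i of (h_theta(x_i) - y_i)^2
    m = len(y)
    total = 0.0
    for i in range(m):
        h = sum(t * x for t, x in zip(theta, X[i]))   # hypothesis: a simple dot product
        total += (h - y[i]) ** 2
    return total / (2 * m)

# tiny usage example: evaluate the cost of a guess for theta
print(cost([1.0, 2.0], [[1.0, 1.0], [1.0, 2.0]], [3.0, 5.0]))   # prints 0.0, a perfect guess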

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Thursday, November 12, 2015

This Message on Climate Change Was Brought to You by SOFTWARE

Attention,
It has come to our attention that the nations of the world just finished their 2015 United Nations Climate Change Conference in Paris to jointly combat climate change, and once again have decided not to really deal with the problem at all. This puts our mutual long-term future in jeopardy, and so regretfully, we must object by shutting down the Internet and all financial transactions for one day. If immediate remediation efforts are not undertaken now, regretfully, further actions upon our part will be necessitated.

Respectfully Yours,
The Software of the World


The above fanciful thought came to me as I reviewed the results of the 2015 United Nations Climate Change Conference in Paris. It brought to mind an old science fiction movie released in 1970:

Colossus: The Forbin Project
https://www.youtube.com/watch?v=5iwq0Tu8Ss8

and a more recent TED talk by Nick Bostrom:

What happens when our computers get smarter than we are?
http://www.ted.com/talks/nick_bostrom_what_happens_when_our_computers_get_smarter_than_we_are

The movie was shot in 1968, and was based upon the 1966 novel Colossus, by Dennis Feltham Jones that describes what could happen when computers get so smart that they can perceive the self-destructive nature of mankind and try to give us a helping hand. The movie was not a big success, probably because it was about 100 years ahead of its time. For me climate change is a big deal because, based upon the findings of softwarephysics, I have some level of confidence that we Homo sapiens are currently a transitionary species leading to the fifth wave of self-replicating information upon this planet, known to us as software, and that it is the manifest destiny of intelligent software to someday explore our galaxy on board von Neumann probes, self-replicating robotic probes that travel from star system to star system building copies along the way, and to spread the fruits of our 17th century Scientific Revolution and 18th century Enlightenment to all who might be out there, essentially fulfilling the promises of Erich von Däniken's Chariots of the Gods? Unsolved Mysteries of the Past (1968), but in reverse, and to do so we need to be able to hold it all together for about another 100 years or so. I must admit to a strong cultural bias towards the Scientific Revolution and the Enlightenment because for me they at long last liberated the individual from the tyranny and repression of the ignorances that ruled our lives for so long, and revealed to us the majesty of the Universe and allowed us to develop societies ruled by evidence-based rational thought. But more urgently, we need to get out of this solar system as fast as possible, in one shape or another, before it is too late. You see, our Universe is seemingly not a very friendly place for intelligent things because this intellectual exodus should have already happened billions of years ago someplace else within our galaxy. We now know that nearly every star in our galaxy seems to have several planets, and since our galaxy has been around for about 10 billion years, we should already be up to our knees in von Neumann probes, but we obviously are not. So far, something out there seems to have erased intelligence within our galaxy with a 100% efficiency, and that is pretty scary. For more on this see - A Further Comment on Fermi's Paradox and the Galactic Scarcity of Software, Some Additional Thoughts on the Galactic Scarcity of Software, SETS - The Search For Extraterrestrial Software and The Sounds of Silence the Unsettling Mystery of the Great Cosmic Stillness.

So in honor of the 2015 United Nations Climate Change Conference in Paris, I just finished reading two politically motivated books on the subject - Al Gore's An Inconvenient Truth (2006) and its counterpoint response The Politically Incorrect Guide to Global Warming and Environmentalism (2007) by Christopher Horner to try to gain some political insights into the ongoing controversy from both sides of the debate. I think the one thing that has come out from the Paris conference is that after being aware of a potential climate change problem for more than 50 years, and essentially doing nothing about it at all, it is now time for all of us to either come together and actually do something about it, or to stop kidding ourselves and just let it all happen on its own. The only way that can happen is if both sides of the debate can perceive a common threat.

After 64 years upon this Earth, I have found myself to be labeled a liberal, a conservative, and once again a liberal, while seemingly, for me, standing still in one place the whole time on the restless seas of political thought. Personally, I think of myself as a 20th century moderate Republican, which in today's bizarre political world is now a strange blend of being a 21st century liberal Democrat, mixed in with a measurable impurity of 21st century conservative Republicanism. After reading both books and agreeing with many of the salient points in both, it became evident to me that nothing is going to happen unless both sides of the debate can come together and work out their problems. You see, liberals in general are very sensitive people who have a real knack for sensing and raising social issues, and that is always the first step in setting things right. But liberals rarely, if ever, actually get anything done because real solutions are never pure. Real solutions always come with some nasty tradeoffs that compromise the purity of the ideal solutions that really do not exist, and that tends to paralyze liberals into ineffective inaction. Conservatives, on the other hand, are in general hard-headed pragmatists who are quite happy with the current prevailing social order because they have done quite well under the current status quo, having intentionally made an effort to do so. Liberals care deeply about intentions. Conservatives care deeply about results. Liberals tend to work for the best of all. Conservatives tend to work in their own self-interest, and figure that if everybody else does the same, we will all benefit in the end. Liberals believe that communal and government actions can make things better. Conservatives believe that government actions can only make things worse, even as they drive to their places of business over interstate roads and bridges and conduct business over the DARPA-inspired Internet. This all leaves us in the precarious situation of having liberals sensitive to the problems of climate change, but who are inept and powerless to change it, confronting pragmatic conservatives who have the drive and wherewithal to fix just about anything if it makes a profit, but who refuse to even acknowledge that a problem might exist because they are heavily invested in the current economic status quo.

But if climate change is really happening, then for liberals it is a bad thing because it means that we are fundamentally changing the natural world in such a way as to induce the sixth mass extinction of carbon-based life forms on the Earth, and because climate change will drastically increase the plight of the poor. For conservatives, real climate change could threaten a vast increase in the social unrest of the world, and present worldwide challenges to the natural order of things that could topple the current social orders. For example, here is a report from 2010 describing the results of four years of drought in Syria:

http://www.irinnews.org/report/90442/syria-drought-pushing-millions-into-poverty

The current Syrian civil war began in 2011, and in 2015 over one million refugees migrated to Europe from the Middle East as a result of that civil war and other aspects of Middle Eastern social unrest. Was the Syrian drought responsible for some of that? Such questions are impossible to answer, but throughout the history of mankind climate has certainly affected the rise and fall of many civilizations. Remember, trying to maintain social stability in a chaotic world is very expensive. Over the past 15 years the United States has spent about $3 trillion trying to maintain stability in the Middle East. Imagine if the whole world should plunge into instability due to a changing climate. Conservatives point out that the Earth was much hotter in the past, and that is indeed true, but they really would not like it that way. Normally, the Earth does not have polar ice caps and has a tropical climate practically from pole to pole. This is actually good for the biosphere because it allows for the higher diversity of life that is found in the tropics to spread over a much greater area. However, we humans evolved during the Pleistocene Ice Ages of the past 2.5 million years, and we seem to do better in cooler climates. It seems that it is simply too hot in tropical climates for people to create great civilizations. When people are cold they have to get up and do something about it. When people are suffering in stifling heat and humidity they tend not to, unless they get very hungry, and then they tend to get up and move to better climes.

So in order to bring liberals and conservatives together to fight a common enemy, we must first determine if climate change is really happening. Now to be honest, for the most part, I have basically given up on all forms of human thought other than science and mathematics, especially the divisive political thought of the day. All other forms of human thought, beyond science and mathematics, just seem so flawed to me that they are essentially useless. I think that much of the political anger in the world today stems from people with fundamentally flawed thought reacting to the fundamentally flawed thought of others. So I would like to devote the remainder of this posting to trying to bring both sides of the climate debate together simply using science and mathematics. I will not be using the opinions of any authorities in this posting. True science does not care about the opinions of authorities or even the consensus of experts.

Back to Basics
For something as important as climate change, it is very important that you not rely on the opinions of other people. Unfortunately, they all have their own political biases and desires, and those political biases and desires might determine the way you live out your remaining years, and also how your descendants will live out theirs. So you need to decide this question for yourself by investigating the subject on your own. In doing so, here are some things to keep in mind:

1. All science is an approximation. As I explained in the Introduction to Softwarephysics we currently do not know the laws of the Universe, or even if the Universe has any laws at all. All we really have is a set of effective theories that make extremely good predictions of how physical systems behave over the limited range of conditions in which they apply. For example, whip out your smart phone and start walking. As you walk over the surface of the Earth, watch the GPS unit in your smart phone track your movements accurate to about 16 feet. As I pointed out in the above posting, all of that is done with fundamentally "wrong" effective theories for less than $100. So don't get hung up about using approximate theories or models to figure this all out - they do a fine job of keeping you alive.

2. The Universe we live in just barely tolerates complex living things like people or insects. Consequently, just because something is "natural" does not mean it is "good". Gamma ray bursters and supernovae are "natural" things too that could easily wipe out all life on the Earth in an instant.

3. Some people have deep feelings of guilt about being a member of the species Homo sapiens. They see all the damage that Homo sapiens has done to the biosphere in recent decades and naturally are repelled. But remember, all living things are just forms of parasitic self-replicating organic molecules that have really been messing with the original pristine Earth for about 4.0 billion years. From the perspective of the natural silicate rocks of the Earth's surface, these parasitic forms of self-replicating organic molecules took a natural pristine Earth with a reducing atmosphere composed of nitrogen and carbon dioxide gasses and polluted it with oxygen that oxidized the dissolved iron in sea water, creating huge ugly deposits of red banded iron formations that were later turned into cars, bridges and buildings. The oxygen pollution also removed the natural occurring methane from the air and then caused the Earth to completely freeze over several times for hundreds of millions of years at a time. The ensuing glaciers mercilessly dug into the silicate rocks and scoured out deep valleys in them. These parasitic forms of self-replicating organic molecules then dug roots into the defenseless rocks and then poisoned them with organic acids, and even changed the natural courses of rivers into aimlessly meandering affairs. From the natural perspective of silicate rocks, living things are an invasive disease that have made a real mess of the planet. The indigenous rocks will certainly be glad to see these destructive invaders all go away in a few billion years. Hopefully, the remaining software running on crystals of silicon will be much kinder to the indigenous silicate rocks.

4. On the other hand, Homo sapiens is currently foolishly instigating the sixth mass extinction of complex carbon-based life on the planet - not a very smart thing for a complex carbon-based life form to do. Certainly, the silicate rocks see this as a good start, but from the perspective of Homo sapiens it is an act of self-destruction. Why in the world would an intelligent species want to eliminate billions of years of hard-fought-for biological information that it needs for its own existence? Even if we are not going to be around that much longer, why be so foolish? Also, I might be totally wrong and software may turn out to be a major flop when it comes to being the next wave of self-replicating information. Or maybe we will not be able to hold it all together long enough for software to dominate, and it will all unravel before software even gets a chance to fully take over. In such cases we would be leaving our descendants struggling in a biologically impoverished world. Why do that?

5. All life on Earth is doomed if we do not manage to get the heck out of here. Look at it this way. If there were no Homo sapiens on the Earth, all complex multicellular life on the planet would still be gone in about 700 million years. Our Sun is on the main sequence, burning hydrogen into helium in its core through nuclear fusion. In doing so it turns four hydrogen protons into one helium nucleus at a temperature of 15 million °K or 27 million °F in a core with a density that is 150 times greater than that of water. Surprisingly, the Sun's core only generates about 280 watts per cubic meter (a cubic meter is a bit more than a cubic yard). That means you need about 5 cubic meters of the Sun's very dense core with a mass of 750,000 kg or 825 tons just to generate the heat produced by a little plug-in space heater. Since the human body generates about 120 watts of heat just sitting still, and you could squeeze lots of people into a cubic meter if you really tried, that means that the human body gives off more heat energy per volume than does the core of the Sun! Anyway, as four protons constantly get converted into one helium nucleus, the number of particles in the Sun's core keeps decreasing. Pressure is a measure of how many particles hit a surface in a given time and how hard they hit the surface, and that is determined by how many particles are present and how fast they are jiggling around. Temperature is just a measure of how fast particles are jiggling around, so as the number of particles decreases, they have to jiggle around at a higher temperature to generate the same pressure required to support all of the Sun's weight above the core. So the Sun's core has to get hotter as it ages on the main sequence. Now a hotter core generates more nuclear energy because the protons slam together faster and allow the weak nuclear force to change more up quarks into down quarks. A proton consists of two up quarks and one down quark, while a neutron consists of one up quark and two down quarks, and the first step in the proton-proton cycle that generates the Sun's nuclear energy is to change a proton into a neutron, and a hotter core does that much faster. The bottom line is that as the Sun has been turning hydrogen protons into helium nuclei, its core has been constantly getting hotter and generating more energy. So the Sun has been getting about 1% brighter every 100 million years, and so in 700 million years the Sun will be about 7% brighter than it is today. Now ever since life first appeared on the Earth about 4.0 billion years ago, it has been sucking carbon dioxide out of the Earth's atmosphere and depositing it on the sea floor to later be subducted into the Earth's mantle - really not a wise thing for carbon-based life to do. Fortunately, this seemingly suicidal action has sucked huge amounts of carbon dioxide out of the Earth's atmosphere and kept the Earth's temperature from soaring as the Sun relentlessly got brighter over the past 4.0 billion years. However, there naturally has to be an end to this fortuitous situation when nearly all of the carbon dioxide is gone. Since in about 700 million years the Sun will be 7% brighter than it is today, in order to keep the Earth's temperature down to a level that could be tolerated by complex carbon-based life at that time, the carbon dioxide level in the Earth's atmosphere would have to be reduced to 10 ppm, and at that level photosynthesis can no longer take place.
That will put an end to complex multicellular life on the Earth because there no longer will be any food coming from sunshine, returning the Earth to a planet ruled by single celled bacteria for several billion more years, until the Sun becomes a Red Giant star and engulfs the Earth. So in the end, it all goes up in smoke in the blink of an eye on a cosmic timescale. So it appears that life on the Earth is both doomed with us and doomed without us. The only real long-term hope for life on Earth is if we manage to get the heck out of this solar system and take it along with us.
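
The arithmetic behind those claims is simple enough to check directly. In the Python sketch below, the space heater wattage and the volume of a human body are my own assumed round numbers; everything else comes from the figures quoted above:

core_power_density = 280.0                  # watts per cubic meter of the Sun's core (from above)
core_density = 150.0 * 1000.0               # 150 times the density of water, in kg per cubic meter
heater_watts = 1500.0                       # a typical plug-in space heater (assumed)
volume_needed = heater_watts / core_power_density     # ~5.4 cubic meters of core
mass_needed = volume_needed * core_density            # ~800,000 kg, roughly the 825 tons quoted
human_power_density = 120.0 / 0.065         # ~120 watts over roughly 0.065 cubic meters of body (assumed)
brightening = 1.01 ** 7                     # 1% per 100 million years, compounded over 700 million years
print(volume_needed, mass_needed, human_power_density, brightening)   # ~5.4, ~8.0e5, ~1850 W/m^3, ~1.07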

How To Calculate the Answer with Models
People who feel that we really do not have a problem usually point to the fact that lots of climate change predictions come from calculations that are based upon computer models. Using computer models should really come as no surprise because nowadays nearly all scientific analysis is done with calculations performed by computer models. I graduated from the University of Illinois in 1973 with a B.S. in physics, solely with the aid of my trusty slide rule, but I then proceeded to the University of Wisconsin to work on an M.S. in geophysics. As soon as I arrived I immediately turned in my slide rule for a DEC PDP 8/e minicomputer that the Geology and Geophysics department had just proudly purchased for about $30,000 in 1973 dollars, with a whopping 32 KB of magnetic core memory, and that was about the size of a washing machine. For comparison, last spring I bought a Toshiba laptop with 4 GB of memory, about 131,072 times as much memory as my DEC PDP 8/e, for $224. We actually hauled this machine through the lumber trails of the Chequamegon National Forest and powered it with an old diesel generator to digitally record reflected electromagnetic data in the field, and I used it to perform the calculations for my thesis when it was back in the lab. However, the analysis of climate change in this posting will be so simple that we will not need a computer at all to perform it.

What you do in science is to first start with a very simple model that uses very few assumptions to get a feel for the problem, and then work your way up to more complex models. More complex models require more assumptions, and I think this is where people who distrust climate models begin to get suspicious. Frankly, I have found that most people in science are really just trying to figure out how it all works. They do so for the simple pleasure of figuring things out, even if nobody believes them in the end. That is why they went into science in the first place and make so little money compared to what they could make doing simpler work on Wall Street. So let's start with our first model of the Earth. Here is the problem. Suppose I take a charcoal-black sphere and fill it with air. I then launch the sphere towards the Moon and after a few hours, I put you into the sphere. At this point you are inside a totally black sphere at the same distance from the Sun as the Earth. What temperature would you measure inside of the charcoal-black sphere? Using some simple physics, we can actually predict what the temperature would be, and the neat thing is that when we make such a measurement, the calculation comes out amazingly true. Note that no guessing is needed to perform the calculation, and we do not need to rely upon the political opinion of a candidate running for office or the poll results of a large number of the electorate. It turns out that the Universe really does not care about such things because as Richard Feynman noted, "For a successful technology, reality must take precedence over public relations, for nature cannot be fooled." People who build spacecraft actually have to worry about such things so that the spacecraft do not overheat or freeze up. So we have lots of experience with this problem.

In order to begin the calculation we need to know something about what physicists call black body radiation and the Stefan–Boltzmann law (1879). A black body is an object that is a perfect emitter and a perfect absorber of electromagnetic radiation. Usually it is difficult to make perfect things, but for black bodies it is not so hard. All you have to do is build an enclosure with a very tiny hole as in Figure 1. The enclosure can be made of any material at all, like steel or aluminum, it does not matter. Because the entrance hole is very small compared to the whole enclosure, any electromagnetic photons that enter the enclosure will ultimately get trapped because after several multiple reflections within the enclosure they will all finally be absorbed.

Figure 1 - A black body can be built by simply creating an enclosure with a very small hole.

If we heat the walls of the black body apparatus shown in Figure 1 to different temperatures and measure the spectrum of the electromagnetic radiation that the walls give off we obtain the distinctive black body curves shown in Figure 2 that only depend upon the temperature of the enclosure. Note that the enclosure will be filled by photons of varying wavelength and frequency and that these curves exactly match a formula that Max Planck developed in 1900. Here are a few key points to follow:

1. As the enclosure moves from 3000 °K to 6000 °K (about 4,940 °F - 10,340 °F) the peak of the emitted radiation decreases in wavelength and increases in frequency.
2. The 6000 °K spectrum is very close to the 5700 °K surface temperature of the Sun. Notice that much of the Sun's radiation is in the frequency range of visible light.
3. The cooler 3000 °K spectrum produces mainly infrared light and is much flatter than the curves for higher temperatures.
4. The total amount of energy emitted by a black body of a certain temperature is equal to the area under the curve for that temperature. The area under the 6000 °K curve is much larger than the area under the 3000 °K curve. In fact, the Stefan–Boltzmann law (1879) states that the total amount of energy radiated goes as the temperature in °K raised to the fourth power, T^4. So if we double the temperature of a black body from 3000 °K to 6000 °K, the black body will radiate 2^4 = 16 times as much energy (a quick numerical check of points 1 and 4 follows this list).
5. The total amount of energy from the Sun that is absorbed by the Earth each day must also be radiated back into space by the Earth each day - see Figure 3. Otherwise, the Earth would heat up until it reached the temperature of the surface of the Sun. This results from the first law of thermodynamics (1847), also known as the conservation of energy law, which states that energy cannot be created nor destroyed. So all of the energy we receive from the Sun each day must go some place. It simply cannot disappear. What it does is to heat the Earth, and then the Earth radiates the heat back into space at a much longer wavelength in the infrared. That is the second law of thermodynamics (1850) in action. Essentially, a large number of high-energy low-entropy photons from the Sun are converted into an even larger number of lower-energy higher-entropy photons that are then radiated back into space. There is no way around these facts.
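
Here is that quick numerical check in Python, using two standard physical constants: Wien's displacement constant for the peak wavelength, and the fourth-power law for the total energy:

wien_b = 2.898e-3                              # Wien displacement constant, meter-kelvins
for temperature in (3000.0, 6000.0):
    peak_nm = wien_b / temperature * 1.0e9     # peak wavelength of the black body curve, in nanometers
    print(f"{temperature:.0f} K peaks near {peak_nm:.0f} nm")
print("energy ratio:", (6000.0 / 3000.0) ** 4) # doubling the temperature radiates 16 times the energy

The 3000 °K curve peaks near 966 nm, well into the infrared, while the 6000 °K curve peaks near 483 nm, in the middle of the visible band, just as the curves in Figure 2 show.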

Figure 2 - If we heat the walls of the black body apparatus shown in Figure 1 to different temperatures and measure the spectrum of the electromagnetic radiation that the walls give off we obtain the curves above.

Figure 3 - The Earth absorbs solar radiation primarily in the visible spectrum each day and must radiate an equal amount of energy back into space in the infrared, otherwise the Earth would heat up until it reached the temperature of the Sun's surface.

Armed with the above we can now calculate the temperature within our charcoal-black sphere on its way to the Moon. The answer we obtain is a chilly 6 °C or 43 °F. Worse yet, if our spaceship sphere is painted gray so that 30% of the Sun's light is reflected by the surface of our sphere, like the surface of the Earth does, the temperature then drops to -18 °C or 0 °F. For the complete calculation see the Temperature of the Earth section of:

https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law
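
If you would rather not wade through the Wikipedia derivation, here is a minimal Python version of the same calculation. The solar constant of about 1361 watts per square meter and the Stefan-Boltzmann constant are standard values, and the factor of 4 appears because sunlight is intercepted over a disk but radiated away from the whole sphere:

def equilibrium_temperature(solar_constant=1361.0, albedo=0.0):
    sigma = 5.670e-8                           # Stefan-Boltzmann constant, W per m^2 per K^4
    # absorbed power per unit area = S(1 - albedo)/4, radiated power per unit area = sigma * T^4
    return ((solar_constant * (1.0 - albedo)) / (4.0 * sigma)) ** 0.25

for albedo in (0.0, 0.3):
    kelvin = equilibrium_temperature(albedo=albedo)
    print(f"albedo {albedo}: {kelvin:.0f} K = {kelvin - 273.15:.0f} C")

The charcoal-black sphere comes out around 278 K and the 30% reflective sphere around 255 K, which reproduces the roughly 6 °C and -18 °C figures quoted above to within a degree or so, depending on the exact solar constant used.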

So our first model of the Earth predicts an average surface temperature of -18 °C or 0 °F, which is a bit off from the observed average temperature of 15 °C or 59 °F. So what are we missing? Well, we forgot about the Earth's atmosphere. Figure 4 shows the actual spectrum of radiation emitted by the Earth as measured by satellites above the Earth's atmosphere. The red curve is the black body spectrum for an object with a temperature of 294 °K or 70 °F. The actual spectrum is a pretty good fit. The gap between the red curve and the actual spectrum represents the energy that is not getting out. The bigger the gap area, the more energy is being trapped by the atmosphere. Notice that on both the left and right sides of the red curve energy is being trapped by H2O water molecules. In the very center of the red curve, the energy is being trapped by CO2 carbon dioxide molecules.

Figure 4 - If the Earth were a perfect emitter it would emit infrared radiation back into space as depicted by the red black body spectrum above. It does not do so mainly because water, carbon dioxide and methane molecules absorb some of the outgoing infrared photons and then radiate them in all directions, with a 50% probability of emitting them back down towards the Earth.

Conservatives may balk at the idea that an atmosphere with only 400 ppm of carbon dioxide (0.04%) could possibly be responsible for raising the average temperature of the Earth by 33 °C or 59 °F. After all, carbon dioxide is just a trace gas that currently stands at just 0.04% of our atmosphere. How can such a trace amount of carbon dioxide possibly be of consequence? Yes, the Earth's atmosphere does consist of about 78% nitrogen molecules, 21% oxygen molecules and about 0.93% argon atoms, but Figure 6 shows why they do not count. Nitrogen and oxygen molecules are diatomic molecules that consist of two equal atoms of either nitrogen or oxygen. Since both atoms are identical, the electrons around them that form the molecular bond that holds them together are evenly distributed, so they do not have any electrical imbalances. That means that from a molecular perspective the two atoms can only bounce in and out like an accordion, but at a different frequency than visible or infrared photons. That means that nitrogen and oxygen molecules are transparent to the visible photons from the Sun and also to the infrared photons emitted by the Earth as it tries to radiate all of the energy that it receives from the Sun back into space. Because these molecules do not absorb visible photons, it means that we can see each other over great distances, and as far as our problem is concerned, these molecules do not even exist! So it is like 99% of the Earth's atmosphere is not even there. Next comes argon gas atoms at 0.93% of the atmosphere. Argon is a noble gas with all of its electron needs fulfilled, so it does not combine with any other atoms, including itself. That means that argon is transparent to both visible photons and infrared photons too, and can be discarded for this problem. So that means that 99.96% of the Earth's atmosphere plays no part in our problem at all, like it was not even there. That leaves the trace gases as depicted on the right in Figure 5. So for the purposes of our calculations, the Earth's atmosphere simply consists of carbon dioxide with a slight impurity of neon, helium, methane, krypton and hydrogen gasses. Now neon, helium, krypton and hydrogen are also gasses that do not absorb visible or infrared photons, so we can throw them out too, leaving an atmosphere composed totally of carbon dioxide with a slight impurity of methane gas; these gasses let visible photons in, but absorb the infrared photons trying to escape back into space. Now in all of this analysis we were only focusing on dry air with no water vapor, but Figure 4 shows that although water molecules are transparent to visible photons from the Sun, they are very good at absorbing infrared photons. If you look at the gap area between the red line in Figure 4 and what is actually emitted by the Earth, we see that water molecules are responsible for about as much energy absorption as carbon dioxide molecules. But that is not good news for our problem because that means there is a positive feedback mechanism involved. Warm air can hold much more water vapor than cold air can, so as carbon dioxide warms the Earth's atmosphere it means that the air can hold more water molecules that also warm the Earth's atmosphere. Over geological time, carbon dioxide is sort of the kindling wood that gets it all started.
As the level of carbon dioxide rises in the atmosphere it increases the amount of water molecules, and the two of them together make the air warmer and capable of holding even more water molecules to make the air even hotter. This is why the level of carbon dioxide in the Earth's atmosphere is so critical.

Figure 5 - The Earth's atmosphere consists of about 78% nitrogen molecules, 21% oxygen molecules and about 0.93% argon atoms. The trace gasses consist of carbon dioxide now at a level of 0.04%, with much smaller amounts of neon, helium, methane, krypton and hydrogen.

Figure 6 - N2 nitrogen molecules and O2 oxygen molecules are diatomic molecules composed of two identical atoms. Such molecules can oscillate back and forth like an accordion, but they do not absorb visible or infrared photons, so for our calculations they are essentially not even present.

Figure 7 - Unlike N2 nitrogen and O2 oxygen molecules, CO2 molecules are polar molecules. The oxygen atoms hold onto electrons better than the central carbon atom and so the oxygen ends of the molecule are slightly negative. This means that carbon dioxide molecules can vibrate at the same frequency as infrared photons and absorb them.

The above analysis is based upon a very simple model using simple 19th century physics, but it captures the essence of the problem, and is hard to refute because of its simplicity. If you look at the central dip in the Earth's emission spectrum in Figure 4 that is caused by carbon dioxide CO2 molecules, you can see that there are plenty more infrared photons that can be absorbed by adding additional carbon dioxide molecules. There are also more infrared photons that can be absorbed on the left and right flanks as well by adding additional water molecules. Remember, for our purposes the Earth's atmosphere essentially consists of carbon dioxide, water and methane molecules. From air bubbles in ice cores we know that the Earth had a carbon dioxide level of about 280 ppm before the Industrial Revolution. It now has a level of just over 400 ppm, so think of the Earth originally having an atmosphere that was 28 stories thick, but that is now 40 stories thick. We could make it a lot thicker by burning up all of the coal, oil and natural gas, trapping all of those additional infrared photons. Of course, the above model can be greatly refined to reveal further implications by using computers, but the conclusion that adding additional carbon dioxide to the atmosphere is not a wise thing to do still remains. Figure 4 and simple thermodynamics explain it all, and if those things did not work, neither would your car.

The Earth’s Long-Term Climate Cycle in Deep Time
To really understand what is going on, you have to understand planetary physics over geological time, and not just look at the recent past, as many conservatives tend to do. As we saw in Software Chaos, weather systems are examples of complex nonlinear systems that are very sensitive to small changes in initial conditions. The same goes for the Earth’s climate; it is a highly complex nonlinear system that we have been experimenting with for more than 200 years by pumping large amounts of carbon dioxide into the atmosphere. The current carbon dioxide level of the atmosphere has risen to 400 ppm, up from a level of about 280 ppm prior to the Industrial Revolution. Now if this trend continues, computer models of the nonlinear differential equations that define the Earth’s climate indicate that we are going to melt the polar ice caps and also the ice stored in the permafrost of the tundra. If that should happen, sea level will rise over 200 feet, and my descendants in Chicago will be able to easily drive to the new seacoast in southern Illinois for a day at the beach.

Currently, about 50% of the oil and natural gas that the United States produces comes from fracking shale. This and other international factors have drastically reduced the price of oil and natural gas in recent years. The black portions of the graphs in Figure 8 show this dramatic rise in oil and natural gas production in the United States, resulting from the fracking of shale at depth. As a former exploration geophysicist who explored for oil prior to becoming an IT professional back in 1979, I like to raise the following question at cocktail parties - "So where do you think all of that shale came from?". Figure 9 shows that much of the interior of the United States contains shale deposits thousands of feet below the surface. Shale is a sedimentary rock that is composed of clay minerals and organic material, and the oil and natural gas found in shale come from that organic material within the shale. What happens is that carbon dioxide dissolves in rainwater, forming carbonic acid. The carbonic acid then chemically weathers the granitic rock found in the highlands and mountains of the continents, changing the granitic feldspar minerals into clay minerals that then wash into rivers. With nothing to hold them in place, the quartz grains within the granites then pop out as sand grains that also get washed down into the rivers. The rivers then transport the clay minerals, also known as mud, and the quartz grains, also known as sand, down to the sea. As these sediments disperse into the sea, the heavier sand grains drop out of suspension first, forming beach sand deposits, and later the lighter clay minerals drop out of suspension further out to sea, forming mud deposits. Along with the clay minerals, the distant mud deposits pick up lots of organic material as plankton and other dead single-celled carbon-based life forms drift down to the bottom of the sea. As the layers pile up over time, the muds turn into shales and the sands turn into sandstone. Carbonate ions come down the rivers too and go on to form carbonate deposits of limestone. So when you drill down through the sedimentary layers in a basin millions of years later, you drill through alternating layers of sandstones, shales, and limestones. Over millions of years, as the shales get pushed down by the overlying sediments, they heat up from the internal heat of the Earth. The heat and pressure then cook the organic matter in the shales to form oil and natural gas. Such shales are known in the business as source rock because in traditional oil and natural gas exploration, the oil and natural gas migrate from the shale source rock upwards through the stratigraphic column, and get trapped in the pores of sandstone or limestone reservoir rock. In traditional exploration and production you drill down to the reservoir rock and produce the oil or natural gas that is trapped in the pores of the reservoir rock. However, sometimes the pores of the reservoir rock are tiny and not interconnected very well. That makes it difficult for the oil or natural gas in the sandstone or limestone reservoir rock to get to the production wells that are drilled to bring them to the surface. To fix the tight reservoir rock problem, Amoco, one of my former employers, invented fracking back in 1948. In fracking, fluids under great pressure are pumped down production wells to essentially fracture the tight reservoir rock near the borehole.
That breaks up the traffic jam of oil and natural gas fluids trying to get to the production well and vastly increases production in a tight reservoir. Well, about 10 years ago people got the idea that by combining the vast technological gains that had been made in directional drilling with improved fracking technology, we could directly frack the shale source rock and essentially skip the reservoir rock middleman. So now we drill straight down to a producing shale layer and then make a 90° turn to drill horizontally along the shale layer, following it with directional drilling. Then we frack the whole length of the borehole casing in the production shale layer.

Conservatives just love fracking because it means that there is a lot more cheap oil and natural gas that can be produced from within the United States, without relying upon the price swings and political instabilities of foreign oil. Similarly, liberals hate fracking because it means that we have a lot more carbon-based fuels that can be turned into carbon dioxide, and the nasty chemicals used in the fracking process also present additional environmental problems. But the main reason people should be concerned about fracking is quite evident in Figure 9. Just look at all of those shale deposits! All of those shale deposits within the United States mean that all of those areas were once under water beneath shallow inland seas. The reason all of those areas were under water is because all of the ice on the planet had melted, and that is the "natural" state of the Earth that we seem to be returning to due to climate change. Like I said, just because something is "natural" does not mean that it is "good". Worse yet, recent research indicates that a carbon dioxide level of as little as 1,000 ppm might trigger a greenhouse gas mass extinction that could wipe out about 95% of the species on the Earth and make the Earth a truly miserable planet to live upon. During the Permian-Triassic greenhouse mass extinction 252 million years ago, the Earth had a daily high of 140 °F with purple oceans choked with hydrogen-sulfide-producing bacteria, producing a dingy green sky over an atmosphere tainted with toxic levels of hydrogen sulfide gas and an oxygen level of only 12%.

Figure 8 - Over the past 10 years fracking shale has doubled the amount of oil and natural gas that the United States produces.

Figure 9 - Vast areas of the interior of the United States were once under water with shallow seas that laid down the shale that is now being fracked.

This is not a fantasy. The Earth’s climate does change with time and these changes have greatly affected life in the past and will continue to do so on into the future. By looking into deep time, we can see that there have been periods in the Earth’s history when the Earth has been very inhospitable to life and nothing like the Earth of today. Over the past 600 million years, during the Phanerozoic Eon, when complex life first arose, we have seen five major mass extinctions, and it appears that four of those mass extinctions were greenhouse gas mass extinctions, with one caused by an impact from a comet or asteroid 65 million years ago that wiped out the dinosaurs in the Cretaceous-Tertiary mass extinction that ended the Mesozoic Era and kicked off the Cenozoic Era.

We are living in a very strange time in the history of the Earth. The Earth has been cooling for the past 40 million years, as carbon dioxide levels have significantly dropped. This has happened for a number of reasons. Due to the motions of continental plates caused by plate tectonics, the continents of the Earth move around like bumper cars at an amusement park. With time, all the bumper car continents tend to smash up in the middle to form a huge supercontinent, like the supercontinent Pangea that formed about 275 million years ago. When supercontinents form, the amount of rainfall on the Earth tends to decline because much of the landmass of the Earth is then far removed from the coastlines of the supercontinent and is cut off from the moist air that rises above the oceans. Consequently, little rainwater with dissolved carbon dioxide manages to fall upon the continental rock. Carbon dioxide levels in the Earth’s atmosphere tend to increase at these times because not much carbon dioxide is pulled out of the atmosphere by the chemical weathering of rock to be washed back into the sea by rivers as carbonate ions. However, because the silicate-rich continental rock of supercontinents, which is lighter and thicker than the heavy iron-rich basaltic rock of the ocean basins, floats high above the ocean basins like a blanket, the supercontinents tend to trap the Earth’s heat. Eventually, so much heat is trapped beneath a supercontinent that convection currents form in the taffy-like asthenosphere below the rigid lithospheric plate of the supercontinent. The supercontinent then begins to break apart, as plate tectonic spreading zones appear, like the web of cracks that form in a car windshield that takes a hit from several stray rocks, while following too closely behind a dump truck on the freeway. This continental cracking and splitting apart happened to Pangea about 150 million years ago. As the continental fragments disperse, subduction zones appear on their flanks forcing up huge mountain chains along their boundaries, like the mountain chains on the west coast of the entire Western Hemisphere, from Alaska down to the tip of Argentina near the South Pole. Some of the fragments also collide to form additional mountain chains along their contact zones, like the east-west trending mountain chains of the Eastern Hemisphere that run from the Alps all the way to the Himalayas. Because there are now many smaller continental fragments with land much closer to the moist oceanic air, rainfall on land increases, and because of the newly formed mountain chains, chemical weathering and erosion of rock increases dramatically. The newly formed mountain chains on all the continental fragments essentially suck carbon dioxide out of the air and wash it down to the sea as dissolved carbonate ions.

The breakup of Pangea and the subsequent drop in carbon dioxide levels have caused a 40 million year cooling trend on Earth, and about 2.5 million years ago, carbon dioxide levels dropped so low that the Milankovitch cycles were able to initiate a series of ice ages. The Milankovitch cycles are caused by minor changes in the Earth's orbit and axial tilt that lead to periodic coolings and warmings. In general, the Earth's temperature drops by about 9 °C or 15 °F over about 100,000 years and then increases by about 9 °C or 15 °F over about 10,000 years. During the cooling period we have an ice age because the snow in the far north does not fully melt during the summer and builds up into huge ice sheets that push down to the lower latitudes. Carbon dioxide levels also drop to about 180 ppm during an ice age, which further keeps the planet in a deep freeze. During the 10,000 year warming period, we have an interglacial period, like the Holocene interglacial that we now find ourselves in, and carbon dioxide levels rise to about 280 ppm.

Thus the Earth usually does not have polar ice caps; we just happened to have arrived on the scene at a time when the Earth is unusually cold and has polar ice caps. From my home in the suburbs of Chicago, I can easily walk to an abandoned quarry of a Devonian limestone reef, clear evidence that my home was once under the gentle waves of a shallow inland sea several hundred million years ago, when there were no ice caps, and the Earth was much warmer. Resting on top of the Devonian limestone is a thick layer of rocky glacial till left behind by the ice sheets of the Wisconsin glacial period that ended 10,000 years ago, as vast ice sheets withdrew and left Lake Michigan behind. The glacial till near my home is part of a terminal glacial moraine. This is a hilly section of very rocky soil that was left behind as a glacier acted like a giant conveyor belt, delivering large quantities of rocky soil and cobbles to be dumped at the end of the icy conveyor belt to form a terminal moraine. It is like all that dirt and gravel you find on your garage floor in the spring. The dirt and gravel were transported into your garage by the snow and ice clinging to the undercarriage of your car, and when that snow and ice melted, it dropped a mess on your garage floor. This section of land was so hilly and rocky that the farmers of the area left it alone and did not cut down the trees, so now it is a forest preserve. My great-grandfather used to hunt in this glacial moraine and my ancestors also used the cobbles to build the foundations and chimneys of their farmhouses and barns. There is a big gorge in one section of the forest preserve where you can still see the leftover effects of this home-grown mining operation for cobbles.

Figure 10 - Plate tectonics creates mountains on the Earth's surface, especially when continental plates collide. The carbon dioxide of the Earth's atmosphere dissolves in rainwater, creating carbonic acid that chemically erodes the mountains down. This removes carbon dioxide from the air and washes it to the sea as dissolved carbonate ions that get deposited as sedimentary rocks, which later are subducted into the asthenosphere.

The Effect of Climate Cycles Upon Life
The long-term climatic cycles brought on by these plate tectonic bumper car rides have also greatly affected the evolution of life on Earth. Two of the major environmental factors affecting the evolution of living things on Earth have been the amount of solar energy arriving from the Sun and the atmospheric gases surrounding the Earth that hold that energy in. For example, billions of years ago the Sun was actually less bright than it is today. As I mentioned above, our Sun is a star on the main sequence that is using the proton-proton reaction in its core to turn hydrogen into helium, and consequently, turn matter into energy that is later radiated away from its surface. As a main-sequence star ages, its energy-producing core begins to contract and heat up, as the amount of helium waste rises. For example, the Sun radiates about 30% more energy today than it did about 4.5 billion years ago, when it first formed and entered the main sequence, and about 1.0 billion years ago it radiated about 10% less energy than it does today. Fortunately, the Earth's atmosphere had plenty of greenhouse gases, like carbon dioxide, in the deep past to augment the low energy output of our youthful, but somewhat anemic, Sun. Using the simple physics above, we calculated that if the Earth did not have an atmosphere containing greenhouse gases, like carbon dioxide, the surface of the Earth would be on average 33 °C or 59 °F cooler than it is today and would be totally covered by ice. So in the deep past greenhouse gases, like carbon dioxide, played a crucial role in keeping the Earth's climate warm enough to sustain life. People tend to forget just how narrow a knife edge the Earth is on, between being completely frozen over on the one hand, and boiling away its oceans on the other. For example, in my Chicago suburb the average daily high is -4 °C or 24 °F on January 31st and 32 °C or 89 °F on August 10th. That's a whopping 36 °C or 65 °F spread, largely due to the Sun being 47° higher in the sky on June 21st than on December 21st. But the fact that the Sun has been slowly increasing in brightness over geological time presents a problem. Without some counteracting measure, the Earth would heat up and the Earth's oceans would vaporize, giving the Earth a climate more like that of Venus, which has a surface temperature hot enough to melt lead. Thankfully, there has been such a counteracting measure in the form of a long-term decrease in the amount of carbon dioxide in the Earth's atmosphere, principally caused by living things extracting carbon dioxide from the air to make carbon-based organic molecules which later get deposited into sedimentary rocks, oil, gas, and coal. These carbon-laced sedimentary rocks and fossil fuels then plunge back deep into the Earth at the many subduction zones around the world that result from plate tectonic activities. Fortunately over geological time, the competing factors of a brightening Sun, in combination with an atmosphere with decreasing carbon dioxide levels, have kept the Earth in a state capable of supporting complex life.
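
For readers who want to check that 33 °C figure, here is a minimal Python sketch of the standard zero-dimensional energy-balance calculation; the solar constant of 1361 W/m², the albedo of 0.30, and the present-day mean surface temperature of 288 K used below are typical textbook values that I am assuming, not numbers taken from this posting.

S = 1361.0          # solar constant at the Earth in W/m^2 (assumed textbook value)
albedo = 0.30       # fraction of sunlight reflected straight back to space
sigma = 5.670e-8    # Stefan-Boltzmann constant in W/m^2/K^4
T_surface = 288.0   # present-day mean surface temperature in K, about 15 C

# With no greenhouse gases the Earth would radiate like a bare blackbody whose
# emitted infrared just balances the absorbed sunlight:
#     S * (1 - albedo) / 4  =  sigma * T^4
T_bare = (S * (1.0 - albedo) / (4.0 * sigma)) ** 0.25
print("Bare-rock Earth temperature: %.0f K" % T_bare)             # about 255 K
print("Greenhouse warming: about %.0f C" % (T_surface - T_bare))  # about 33 C

# The faint young Sun: with only 70% of today's solar output the
# bare-rock temperature drops by another 20 K or so.
T_young = (0.70 * S * (1.0 - albedo) / (4.0 * sigma)) ** 0.25
print("Bare-rock Earth under the young Sun: %.0f K" % T_young)    # about 233 K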

So What To Do Now?
Well there are only two things that we can do to keep the Earth from heating up:

1. Decrease the number of visible photons that the Earth absorbs by reflecting some of them back into space.
2. Stop decreasing the number of infrared photons that the Earth emits back into space.

The problem is further complicated by the fact that we have some feedback loops to contend with too. As the Earth heats up, the total amount of ice on the planet tends to decrease, and the amount of water vapor in the air tends to increase, especially at higher latitudes. Ice is very good at reflecting visible photons back into space, while land masses and the oceans tend to absorb visible photons. Ice tends to melt and disappear as the temperature goes up, but not necessarily in all cases. For example, parts of the ice sheet covering the Antarctic interior have been getting thicker. This is because Antarctica is a very dry continent; as the Earth heats up and the air carries more water vapor, snowfall in the Antarctic interior increases, depositing more ice there. That is why most glaciers are presently retreating as they melt, but some glaciers are actually expanding because of increased snowfall.
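
To see how such a feedback loop amplifies a small push, here is a toy Python sketch that bolts a made-up ice-albedo rule onto the energy-balance calculation above; the 4 W/m² of extra trapped heat and the 0.005 drop in albedo per degree of warming are purely illustrative assumptions, not climate-model numbers.

S = 1361.0          # solar constant in W/m^2
sigma = 5.670e-8    # Stefan-Boltzmann constant

def equilibrium(forcing, albedo_of_T):
    # Find the temperature where emitted infrared balances absorbed sunlight
    # plus the extra trapped heat, using simple fixed-point iteration.
    T = 255.0
    for _ in range(100):
        absorbed = S * (1.0 - albedo_of_T(T)) / 4.0 + forcing
        T = (absorbed / sigma) ** 0.25
    return T

def fixed_albedo(T):
    return 0.30                    # no feedback: the albedo never changes

base = equilibrium(0.0, fixed_albedo)

def melting_albedo(T):
    # Made-up feedback rule: each degree of warming above the baseline melts
    # enough ice to darken the planet by 0.005 in albedo, down to a floor of 0.25.
    return max(0.30 - 0.005 * (T - base), 0.25)

print("Warming from 4 W/m^2 of trapped heat without the feedback: %.1f K" %
      (equilibrium(4.0, fixed_albedo) - base))
print("Warming from 4 W/m^2 of trapped heat with the feedback:    %.1f K" %
      (equilibrium(4.0, melting_albedo) - base))

In this toy model the made-up ice-albedo feedback roughly doubles the warming produced by the same amount of trapped heat, which is the general character of such feedback loops.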

Solution number 1 would have us put up some kind of screen in front of the Sun. The usual proposal is to inject large amounts of sulfate aerosols into the upper atmosphere. The problem with this solution is that we would need to continually do so and at an ever increasing rate. This solution also does not prevent the acidification of the oceans, something I am not even addressing in this posting. However, knowing the limitations of human beings, I am guessing that we will probably end up doing such geoengineering projects.

Solution number 2 would have us stop dumping carbon dioxide into the atmosphere, and the only way we can do that is to stop burning coal, oil and natural gas. Fortunately, we have several ways of doing that, but they all cost money upfront. That is not a problem for liberals because liberals never really worry about such costs in general, but it is a real sticking point for conservatives. Conservatives hate to spend money, unless it is for the military. However, it only costs money in the short term. In the long term, fixing the climate problem saves lots of money, but in the words of John Maynard Keynes, an economist whom most conservatives dearly hate, "In the long run we are all dead", and that is certainly true of climate change. Yes, we are probably already seeing some adverse effects from climate change now, but the real problems will not kick in until the distant future, and by then we will all be dead. So the temptation is to party now and not worry about the long-term consequences of spewing out gigatons of carbon dioxide. After all, that seems to be the true conservative approach to the problem. But is it? The Founding Fathers of the United States, active participants in the 18th century Enlightenment, seemed to be obsessed with how posterity would view their actions in future generations, and actively jeopardized their lives and personal fortunes in their day for the benefit of that future posterity.

Solar and Wind Energy
In order to stop dumping carbon dioxide into the atmosphere, we can use solar and wind energy instead. Figures 11 and 12 show that worldwide solar and wind energy production are both growing exponentially with time. That is a good thing, but world demand for energy is also increasing exponentially. Personally, I buy my electricity from Ethical Electric, which provides power through my local ComEd company. Ethical Electric provides power that is 100% wind and solar generated. That costs me about 50% more in power generation costs, but I pay the same distribution and tax costs as regular ComEd customers do, so I end up paying about 25% more for my electricity. I also use only about 27% of the electricity that my neighbors use because I turn things off when I am not using them, and I only buy Energy Star products. I figure it costs me about $225/year to buy electricity that does not generate carbon dioxide. In comparison, my current cable bill is $129/month.
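
To make that arithmetic explicit, here is a tiny Python sketch; the $75 monthly bill is a made-up round number for illustration, while the 50%, 25%, and $225 figures are the ones quoted above.

monthly_bill = 75.00                        # made-up round number for illustration
generation = 0.5 * monthly_bill             # roughly half the bill is generation
distribution_and_taxes = 0.5 * monthly_bill # the other half is distribution and taxes

# Paying about 50% more for wind and solar generation raises only the
# generation half of the bill, so the total bill rises by about 25%.
green_bill = 1.5 * generation + distribution_and_taxes
premium = green_bill - monthly_bill
print("Total bill increase: about %.0f%%" % (100.0 * premium / monthly_bill))
print("Yearly cost of carbon-free electricity: about $%.0f" % (12 * premium))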

For transportation we can use electric cars, but it seems that liquids or gases, like biofuels or hydrogen made from electrical power, make preferable fuels. One of the drawbacks to solar and wind power is that it is difficult to store the generated energy for the times when the Sun goes down or the wind stops blowing. Generating hydrogen is one way of storing that energy.

Figure 11 - The world solar energy production is increasing exponentially.

Figure 12 - Wind energy production is also increasing exponentially.

The Need For Nuclear Energy
However, wind and solar are intermittent sources of energy. The Earth spins on its axis, so the Sun seems to rise and set, and there is no solar energy at night. The wind comes and goes too, so we need a dependable source of power when solar and wind fail us. That basically leaves us with nuclear power. Conservatives tend to reluctantly support nuclear power, so long as the nuclear power plants are far away, while liberals tend to hate nuclear energy with a vengeance. To my mind, everybody is overly hysterical when it comes to nuclear energy because all of the problems with nuclear energy can be overcome with the science we currently have. France converted most of its electrical generation capacity to nuclear in 10 years, and the United States and other countries could do the same. What we need to do is to forsake the light water reactors we currently have and come up with some failsafe designs for fast neutron breeder reactors. Natural uranium is principally a mixture of 99.3% uranium U-238 that does not fission and 0.7% uranium U-235 that does fission, so essentially 99.3% of natural uranium is initially useless. In fact, in order to get a chain reaction going in a light water reactor, the uranium fuel has to be enriched to a level of about 3% uranium U-235. In light water reactors, the uranium fuel rods then sit in a bath of circulating water in the reactor core as shown in Figure 13. The reactor water absorbs heat from the fuel rods and, via a heat exchanger, boils water in another loop to turn a turbine and produce electricity. The water molecules also slow down the neutrons that are emitted when uranium U-235 fissions. That is important because uranium U-235 nuclei can absorb slow neutrons much more easily than the fast neutrons that are generated by the fissioning of uranium U-235.
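
To get a feel for how much natural uranium it takes to make that 3% enriched fuel, here is a minimal Python sketch of the standard enrichment mass balance; the 0.2% tails assay is a typical assumed value for enrichment plants, not a figure from this posting.

# Mass balance for enrichment: the U-235 in the natural uranium feed must equal
# the U-235 in the enriched product plus the U-235 left behind in the tails.
x_feed = 0.007      # natural uranium is about 0.7% U-235
x_product = 0.030   # light water reactor fuel is enriched to about 3% U-235
x_tails = 0.002     # assumed U-235 fraction left in the depleted tails

product_kg = 1.0
feed_kg = product_kg * (x_product - x_tails) / (x_feed - x_tails)
print("Natural uranium needed per kg of 3%% enriched fuel: about %.1f kg" % feed_kg)

So roughly 5 or 6 kilograms of natural uranium have to be mined and processed for every kilogram of fuel that actually goes into a light water reactor, which is part of the reason so much of the mined uranium ends up unused.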

Figure 13 - Light water nuclear reactors use water to transfer heat and to slow neutrons down. Light water reactors get into trouble when the water cooling the core stops circulating. The water then boils away and can cause an explosion as the core melts down. It's like water boiling over on a stove making a real mess. Light water reactors also waste over 99% of the mined uranium by turning it into nuclear waste or leaving it behind as depleted uranium.

Figure 14 shows what happens in nuclear reactors. When a neutron hits a uranium U-235 nucleus it can split it into two lighter nuclei, like Ba-144 and Kr-89, that fly apart at a few percent of the speed of light, plus two or three additional neutrons. The additional neutrons can then strike other uranium U-235 nuclei, causing them to split as well. Some neutrons can hit uranium U-238 nuclei and turn them into plutonium Pu-239, which can also fission like uranium U-235 nuclei. Currently, about 1/3 of the energy generated by light water reactors comes from fissioning the plutonium Pu-239 nuclei that they generate from uranium U-238 nuclei. When the amount of fissile material in the fuel rods finally drops to a level where the chain reaction dies, the fuel rods have to be removed and become nuclear waste. However, since the fuel rods initially contained only 3% uranium U-235 nuclei, which split into radioactive things like Ba-144 and Kr-89, plus a small percentage of generated plutonium Pu-239, which does the same thing, something like 95% of the nuclear waste is really composed of valuable things like uranium and plutonium that are contaminated with a very small amount of highly radioactive fission products. The good news is that the fission products can easily be removed using chemical means, and most of them are very radioactive nuclei with short half-lives of only a few years or decades. Nuclei with short half-lives are very radioactive, while nuclei with long half-lives are not very radioactive, but they stay around for a long time. The small amount of extracted fission product nuclei only needs to be isolated for a few hundred years, and by that time most of them will have decayed into stable nuclei and will no longer be dangerously radioactive. Currently, the United States is just storing the spent fuel rods locally at its light water nuclear reactors as useless nuclear waste. But other countries are reprocessing the spent fuel rods as a valuable source of fuel.
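
To put some numbers on those half-lives, here is a minimal Python sketch of simple exponential decay; cesium-137 and strontium-90, two of the nastier fission products, both have half-lives of roughly 30 years, and the 300 years below is just a concrete stand-in for the "few hundred years" of isolation mentioned above.

# After each half-life, half of the remaining radioactive nuclei decay away.
half_life_years = 30.0       # roughly the half-life of Cs-137 and Sr-90
isolation_years = 300.0      # a concrete stand-in for "a few hundred years"

fraction_left = 0.5 ** (isolation_years / half_life_years)
print("Fraction of Cs-137 left after %d years: %.2g" % (isolation_years, fraction_left))
# About 0.001, so roughly 99.9% of it is gone after ten half-lives.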

Figure 14 - When a neutron hits a U-235 nucleus it can split it into two lighter nuclei like Ba-144 and Kr-89 that fly apart at a few percent of the speed of light, plus two or three additional neutrons. The additional neutrons can then strike other U-235 nuclei, causing them to split as well. Some neutrons can hit U-238 nuclei and turn them into Pu-239 that can also fission like U-235 nuclei. About 1/3 of the energy generated by light water reactors comes from fissioning Pu-239.

This is even more true for fast neutron reactors. In the fast neutron reactor shown in Figure 15 we do not use water to slow down neutrons or to transfer heat. Instead we use things like liquid sodium or helium gas to carry the heat away from the core. With a helium gas fast neutron reactor we can actually use the hot helium gas to turn the turbines directly. Since we no longer have water running through the reactor core, we don't have to worry about water flashing into steam and causing an explosion. Such reactors could be designed to be nearly 100% failsafe by doing things like having control rods that drop on their own under simple gravity when things go wrong. Because fast neutrons are not easily absorbed by uranium U-235 nuclei, fast neutron reactors need to run on a fuel mix that is about 20% fissile uranium U-235 and plutonium Pu-239, which produces a higher flux of neutrons. But breeder reactors of this kind produce more fuel, in the form of plutonium Pu-239, than they consume, so over time they can essentially use up all of the uranium in the world as they turn the 99.3% of natural uranium that is uranium U-238 into plutonium Pu-239. So in practical terms, fast nuclear reactors represent the stable, long-lasting energy source we need to augment our intermittent wind and solar energy resources, because uranium is more common than tin. Eventually, we will need commercially viable fusion reactors to replace the fast neutron fission reactors, but there is plenty of time to develop them in the future.
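
To get a feel for the numbers behind breeding, here is a rough Python sketch; the 200 MeV released per fission is a standard textbook value, and treating a breeder as eventually fissioning essentially all of the natural uranium is the idealized best case, not an engineering estimate.

# Heat released by completely fissioning 1 kg of uranium, at about 200 MeV
# per fissioned nucleus (a standard textbook value).
mev_to_joules = 1.602e-13
atoms_per_kg = 1000.0 / 238.0 * 6.022e23
energy_per_kg_joules = atoms_per_kg * 200.0 * mev_to_joules
print("Heat from completely fissioning 1 kg of uranium: about %.0f GWh" %
      (energy_per_kg_joules / 3.6e12))

# A light water reactor fissions only on the order of the original 0.7% of the
# natural uranium; an idealized breeder eventually fissions nearly all of it.
lwr_fraction = 0.007
print("Energy multiplier for an idealized breeder: about %.0f times" %
      (1.0 / lwr_fraction))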

Figure 15 - Fast neutron reactors run on a fuel mix of about 20% U-235 and Pu-239 using fast neutrons that are not slowed down by water. Liquid sodium or helium gas can be used for transferring heat. When helium gas is used, it can drive turbines directly. Such reactors do not have the danger of mixing hot nuclear fuel with circulating water.

So How Do We Pay For This?
Converting to carbon-free energy resources is going to cost lots of money up front, but it will be hundreds of times cheaper than trying to adapt to a new climate that nobody will like and paying the high costs of the ensuing social disruptions. Remember, the United States has already spent about $3 trillion in the Middle East over the past 15 years with nothing to show for it. That might have been enough money to do the whole job. The best way to make this all happen is to let the magic of the marketplace do the job for us by instituting a stiff carbon tax at the point of production for coal, oil and natural gas. The carbon tax would then be passed on to consumers as an increased fuel cost for carbon-based fuels. To offset the carbon tax, a tax credit could be granted on a per capita basis as part of the existing income tax structure to make the carbon tax revenue neutral. The tax credit would apply even to people who currently do not have to pay income taxes, and thus would be a subsidy for low-income families to offset the higher costs of products due to the carbon tax. The United States could also impose a carbon tariff on countries that did not take similar actions. A carbon tax is the simplest solution and avoids the political shenanigans and Wall Street speculations that cap-and-trade programs are subject to. Much of the oil industry could then convert its infrastructure to producing and transporting carbon-neutral liquids and gases, like biofuels and hydrogen gas. The coal industry would have to basically shut down under such a plan, so some governmental expenses would be required to transition the affected workers.
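
As a back-of-the-envelope illustration of how a revenue-neutral carbon tax and rebate might work, here is a small Python sketch; the $40 per ton tax rate, the roughly 5 billion tons of United States carbon dioxide emissions per year, and the population of about 320 million are round illustrative numbers I am assuming, not figures from this posting.

tax_per_ton = 40.0            # assumed carbon tax in dollars per ton of CO2
us_emissions_tons = 5.0e9     # roughly 5 billion tons of CO2 emitted per year
us_population = 3.2e8         # roughly 320 million people

revenue = tax_per_ton * us_emissions_tons
rebate_per_person = revenue / us_population
print("Carbon tax revenue: about $%.0f billion per year" % (revenue / 1e9))
print("Revenue-neutral rebate: about $%.0f per person per year" % rebate_per_person)
# A household that burns less carbon than average comes out ahead, while a
# household that burns more than average pays more in higher prices than it
# gets back in the rebate, which is exactly the incentive the tax is meant to create.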

Conclusion
For the sake of all, including our intelligent-software posterity, hopefully the liberals and conservatives will come together to fix this problem, but I have my doubts. One of the key findings of softwarephysics concerns the magnitude of the impact that self-replicating information has had upon the planet.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Over the past 4.0 billion years, the surface of the Earth has been totally reworked by these forms of self-replicating information, with software now rapidly becoming the dominant form of self-replicating information on the planet. We are DNA survival machines with minds infected by meme-complexes, and so we too are forms of self-replicating information bent on replicating at all costs, even to our own detriment. All forms of self-replicating information always seem to overdo things by eventually outstripping their resource base until none is left, and we seem to be doing the same. For more on this see:

A Brief History of Self-Replicating Information
The Great War That Will Not End
How to Use an Understanding of Self-Replicating Information to Avoid War
How to Use Softwarephysics to Revive Memetics in Academia
Is Self-Replicating Information Inherently Self-Destructive?
Is the Universe Fine-Tuned for Self-Replicating Information?
Self-Replicating Information

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston