Thursday, December 31, 2015

Machine Learning and the Ascendance of the Fifth Wave

As I have frequently said in the past, the most significant finding of softwarephysics is that it is all about self-replicating information:

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Indeed, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we saw that perhaps our observable Universe is just one instance of a Big Bang of mathematical information that exploded out 13.7 billion years ago into a new universe amongst an infinite Multiverse of self-replicating forms of mathematical information, in keeping with John Wheeler's famous "It from Bit" supposition. Unfortunately, since we are causally disconnected from all of these other possible Big Bang instances, and even causally disconnected from the bulk of our own Big Bang Universe, we most likely will never know if such is the case.

However, closer to home we do not suffer from such a constraint, and we certainly have seen how the surface of our planet has been totally reworked by many successive waves of self-replicating information, as each wave came to dominate the Earth:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is currently the most recent wave of self-replicating information to arrive upon the scene, and is rapidly becoming the dominant form of self-replicating information on the planet. For more on this see:

A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia

As we have seen, as each new wave of self-replicating information came to dominate the Earth, it kept all of its predecessors around because each of the previous waves was necessary for the survival of the newest wave. For example, currently software is being generated by the software-creating memes residing in the minds of human programmers, and those memes depend upon the DNA, RNA and metabolic pathways of the distant past for their existence today. But does that necessarily have to always be so for software? Perhaps not. By combining some of the other key findings of softwarephysics, along with some of the recent advances in Machine Learning, it may be possible for software to one day write itself, and that day may not be that far into the future. Let me illustrate such a process with an example.

In my new job I now have to configure middleware software, rather than support middleware software, as I did in my previous position in Middleware Operations. Now my old joke was that Middleware Operations did not make the light bulbs, we just screwed them in and kept them lit. But now I actually have to make the light bulbs, and making middleware light bulbs is much closer to my Applications Development roots because it requires a great deal of string manipulation with zero defects. You see, for the first 20 years of my IT career I was a programmer in Applications Development, but I have been out of Applications Development since the mid-1990s, and one thing has really changed since then. We did not have instant messaging back in the mid-1990s. Back then I would come into the office and if I were into some heavy coding, I would simply send my phone to "voicemail" so that people could not interrupt my train of thought while coding. Most of the day I would be deep into my "coding zone" and totally oblivious to my surroundings, but periodically I would take a break from coding to read the relatively small amount of email that I received each day. We did not have group email accounts in those days either, so I did not receive hundreds of meaningless emails each day from "reply to all" junkies that really did not apply to me in the slightest way. I have now found that the combination of a constant stream of ASAP instant messages from fellow workers and the thousands of meaningless emails I receive each day makes it very difficult to do coding or configuration work, because I am in a state of constant interruption by others, with no time to really think about what I am doing.

To help with all of this, I am now writing MISE Korn shell commands, as much as possible, to automate the routine string manipulations that I need to perform (see Stop Pushing So Many Buttons for details). MISE (Middleware Integrated Support Environment) is currently a toolkit of 1831 Unix aliases, pointing to Korn shell scripts, that I use to do my work, and that I have made available to my fellow teammates doing Middleware work for my present employer. For example, my most recent effort was a MISE command called fac that formats firewall rule requests by reading an input file and outputting a fac.csv file that can be displayed in Excel. The Excel fac.csv file is in the exact format required by our Firewall Rule Request software, and I can just copy/paste some cells from the generated Excel fac.csv file into the Firewall Rule Request software with zero errors. I also wrote a MISE command called tcn that can read the same fac input file after the firewall rules have been generated by NetOps. The MISE tcn command reads the fac input file and conducts connectivity tests from all of the source servers to the destination servers at the destination ports.

The challenge I have with writing new MISE Korn shell commands is that I am constantly being peppered by ASAP instant message requests from other employees while trying to code the MISE Korn shell commands, which means I really no longer have any time to think about what I am coding. But under such disruptive conditions, I have found that my good old Darwinian biological approach to software really pays off because it minimizes the amount of thought that is required. For example, for my latest MISE effort, I wanted to read an input file containing many records like this:

#FrontEnd Apache Servers_______________Websphere Servers________________Ports
SourceServer1;SourceServer2;SourceServer3 DestServer1;DestServer2;DestServer3 Port1;Port2

and output a fac.csv file like this:

#FrontEnd Apache Servers_______________Websphere Servers________________Ports
S_IP1;S_IP2;S_IP3 D_IP1;D_IP2;D_IP3 Port1;Port2

Where S_IP1 is the IP address of SourceServer1. MISE has other commands that easily display server names based upon what the servers do, so it is very easy to display the necessary server names in a Unix session, and then to copy/paste the names into the fac input file. Remember, one of the key rules of softwarephysics is to minimize button pushing, by doing copy/paste operations as much as possible. So the MISE fac command just needed to read the first file and spit out the second file after doing all of the nslookups to get the IP addresses of the servers on the input file. Seems pretty simple. But the MISE fac command also had to spit out the original input file into the fac.csv file with all of its comment and blank records, and then a translated version of the input file with the server names translated to IP addresses, and finally a block of records with all of the comment and blank records removed that could be easily copy/pasted into the Firewall Rule Request software, and with all of the necessary error checking code, it came to 229 lines of Korn shell script.
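The heart of the fac command is just that name-to-IP translation step. The real fac is a 229-line Korn shell script with much more error checking; here is only a minimal Python sketch of the idea, where the function names and the fall-back-to-the-name behavior on a failed lookup are my own assumptions, not MISE's actual logic:

```python
import socket

def resolve_ip(name, lookup=socket.gethostbyname):
    # Translate one server name to an IP address. Falling back to the
    # original name on a failed lookup is an assumption of this sketch.
    try:
        return lookup(name)
    except OSError:
        return name

def translate_record(record, lookup=socket.gethostbyname):
    # Turn 'Src1;Src2 Dest1;Dest2 Port1;Port2' into its IP-address form.
    # Ports are passed through unchanged.
    sources, dests, ports = record.split()
    def to_ips(field):
        return ";".join(resolve_ip(name, lookup) for name in field.split(";"))
    return " ".join([to_ips(sources), to_ips(dests), ports])

def fac(lines, lookup=socket.gethostbyname):
    # Emit the three blocks described above: the original input records,
    # a translated copy, and a comment-free block ready to copy/paste.
    translated = [line if (not line.strip() or line.startswith("#"))
                  else translate_record(line, lookup) for line in lines]
    pasteable = [line for line in translated
                 if line.strip() and not line.startswith("#")]
    return list(lines) + translated + pasteable
```

The injectable lookup function just makes the sketch testable without live DNS; the real command runs nslookup against the actual servers.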

The first thing I did was to find some old code in my MISE bin directory that was somewhat similar to what I needed. I then made a copy of the inherited code and began to evolve it into what I needed through small incremental changes between ASAP interruptions. Basically, I did not think through the code at all. I just kept pulling in tidbits of code from old MISE commands as needed to get my new MISE command closer to the desired output, or I tried adding some new code at strategic spots based upon heuristics and my 44 years of coding experience without thinking it through at all. I just wanted to keep making progress towards my intended output with each try, using the Darwinian concepts of inheriting the code from my most current version of the MISE command, coupled with some new variations to it, and then testing it to see if I came any closer to the desired output. If I did get closer, then the selection process meant that the newer MISE command became my current best version, otherwise I fell back to its predecessor and tried again. Each time I got a little closer, I made a backup copy of the command, like fac.b1 fac.b2 fac.b3 fac.b4.... so that I could always come back to an older version in case I found myself going down the wrong evolutionary path. It took about 21 versions to finally get me to the final version that did all that I wanted, and that took me several days because I could only code for 10 - 15 minutes at a time between ASAP interruptions. I know that this development concept is known as genetic programming in computer science. So far genetic programming has never made a significant impact on IT, but I think that is about to change.
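The inherit-vary-select loop I just described can be written down generically. This is only a sketch under my own naming, with a simple text-similarity score standing in for my human judgment of "closer to the desired output":

```python
import difflib

def fitness(actual, desired):
    # Score in [0.0, 1.0]: how close a candidate's output is to the desired output.
    return difflib.SequenceMatcher(None, actual, desired).ratio()

def darwinian_loop(seed, vary, run, desired, max_generations=100):
    # seed: the inherited starting code; vary: make a small variation of it;
    # run: execute a candidate and capture its output.
    best = seed
    best_score = fitness(run(best), desired)
    backups = [best]                      # like fac.b1, fac.b2, fac.b3, ...
    for _ in range(max_generations):
        candidate = vary(best)            # inheritance plus a small variation
        score = fitness(run(candidate), desired)
        if score > best_score:            # selection: keep only improvements
            best, best_score = candidate, score
            backups.append(best)          # backup copy of each better version
        if best_score == 1.0:
            break
    return best, backups
```

In my manual process, I play the roles of both vary and fitness; the point of this posting is that both roles look automatable.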

Now my suspicion has always been that some kind of software could also perform the same tasks as I outlined above, only much faster and more accurately, because there is not a great deal of "intelligence" required by the process, and I think that the dramatic progress we have seen with Machine Learning, and especially with Deep Learning, over the past 5 - 10 years provides evidence that such a thing is actually possible. Currently, Machine Learning is making lots of money for companies that analyze the huge amounts of data that Internet traffic generates. By analyzing huge amounts of data, described by huge "feature spaces" with tens of thousands of dimensions, it is possible to find patterns through pure induction. Then by using deduction, based upon the parameters and functions discovered by induction, it is possible to predict things like which emails are spam or which movies a Netflix subscriber might enjoy. Certainly, similar techniques could be used to deduce whether a new version of a piece of software is closer to the desired result than its parent, and if so, create a backup copy and continue on with the next iteration step to evolve the software under development into a final product.
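That induction-then-deduction pattern is easy to see in miniature. Here is a tiny perceptron, one of the very simplest Machine Learning models: induction fits the weights to labeled examples, and deduction then classifies new cases with the induced parameters. A real selection process for evolving software would of course need far richer features than this toy; it is just a sketch of the pattern:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    # Induction: fit a weight vector and bias to labeled examples, where
    # each example is (feature_vector, label) with label in {0, 1}.
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                # zero when the prediction is right
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    # Deduction: apply the induced parameters to a new case.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

Swap "pixel features and spam labels" for "features of a program's output and closer/not-closer labels" and you have the selection step of the Darwinian development loop.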

The most impressive thing about modern Machine Learning techniques is that they carry with them all of the characteristics of a true science. With Machine Learning one forms a simplifying hypothesis, or model, that describes the behaviors of a complex dynamical system based upon induction, by observing a large amount of empirical data. Using the hypothesis, or model, one can then predict the future behavior of the system and of similar systems. This finally quells my major long-term gripe that computer science does not use the scientific method. For more on this see How To Think Like A Scientist. I have long maintained that the reason that the hardware improved by a factor of 10 million since I began programming back in 1972, while the way we create and maintain software only improved by a factor of about 10 during the same interval of time, was due to the fact that the hardware guys used the scientific method to make improvements, while the software guys did not. Just imagine what would happen if we could generate software a million times faster and cheaper than we do today!

My thought experiment about inserting a Machine Learning selection process into a Darwinian development do-loop may seem a bit too simplistic to be practical, but in Stop Pushing So Many Buttons, I also described how 30 years ago in the IT department of Amoco, I had about 30 programmers using BSDE (the Bionic Systems Development Environment) to grow software biologically from embryos by turning genes on and off. BSDE programmers put several million lines of code into production at Amoco using the same Darwinian development process that I described above for the MISE fac command. So if we could replace the selection process step in a Darwinian development do-loop with Machine Learning techniques, I think we really could improve software generation by a factor of a million. More importantly, because BSDE was written using the same kinds of software that it generated, I was able to use BSDE to generate code for itself. The next generation of BSDE was grown inside of its maternal release, and over a period of seven years, from 1985 – 1992, more than 1,000 generations of BSDE were generated, and BSDE slowly evolved into a very sophisticated tool through small incremental changes. I imagine that by replacing the selection process step with Machine Learning, those 7 years could have been compressed into 7 hours or maybe 7 minutes - who knows? Now just imagine a similar positive feedback loop taking place within the software that was writing itself and constantly improving with each iteration through the development loop. Perhaps it could be all over for us in a single afternoon!

Although most IT professionals will certainly not look kindly upon the idea of becoming totally obsolete at some point in the future, it is important to be realistic about the situation because all resistance is futile. Billions of years of history have taught us that nothing can stop self-replicating information once it gets started. Self-replicating information always finds a way. Right now there are huge amounts of money to be made by applying Machine Learning techniques to the huge amounts of computer-generated data we have at hand, so many high-tech companies are heavily investing in it. At the same time, other organizations are looking into software that generates software, to break the high cost barriers of software generation. So this is just going to happen as software becomes the next dominant form of self-replicating information on the planet. And as I pointed out in The Economics of the Coming Software Singularity and The Enduring Effects of the Obvious Hiding in Plain Sight IT professionals will not be alone in going extinct. Somehow the oligarchies that currently rule the world will need to figure out a new way to organize societies as all human labor eventually goes to a value of zero. In truth, that decision too will most likely be made by software.

For more on Machine Learning please see:

Introduction to Machine Learning Theory and Its Applications: A Visual Tutorial with Examples - by Nick McCrea

A Deep Learning Tutorial: From Perceptrons to Deep Networks - by Ivan Vasilev

I recently audited Professor Andrew Ng's excellent online class at Stanford University:

Machine Learning

The course uses a high-level language called Octave that can be downloaded for free. In the class exercises, Octave does the heavy lifting of the huge matrix and linear algebra manipulations required to do Machine Learning, which is especially useful for developers who would actually like to develop a real Machine Learning application for their company. Although the math required is something you might see in an advanced-level university physics or math course, Professor Ng does an amazing job at explaining the ideas in a manner accessible to IT professionals. Struggling through the Octave code also brings home what the complex mathematical notation is really trying to say. I have found that IT professionals tend to get a bit scared off by mathematical notation. But in reality, complex mathematical notation can always be expanded into the simple mathematical processes it is abbreviating, and when you do that in code, it is not so scary after all.
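As an example of expanding notation into code, take the squared-error cost function J(theta) used in the course's linear regression exercises (written here in Python rather than Octave, to keep one language throughout this posting). The summation sign is just a loop, and theta-transpose-x is just a multiply-and-add:

```python
def cost(theta, X, y):
    # J(theta) = (1 / (2 * m)) * sum over i of (h_theta(x_i) - y_i)^2
    m = len(X)                                        # number of training examples
    total = 0.0
    for x_i, y_i in zip(X, y):
        h = sum(t * xj for t, xj in zip(theta, x_i))  # h_theta(x) = theta-transpose times x
        total += (h - y_i) ** 2                       # the summation sign is just a loop
    return total / (2 * m)
```

Once you see that every big sigma and every matrix product unrolls into loops like these, the notation stops being intimidating.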

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston

Thursday, November 12, 2015

This Message on Climate Change Was Brought to You by SOFTWARE

It has come to our attention that the nations of the world just finished their 2015 United Nations Climate Change Conference in Paris to jointly combat climate change, and once again have decided not to really deal with the problem at all. This puts our mutual long-term future in jeopardy, and so regretfully, we must object by shutting down the Internet and all financial transactions for one day. If immediate remediation efforts are not undertaken now, regretfully, further actions upon our part will be necessitated.

Respectfully Yours,
The Software of the World

The above fanciful thought came to me as I reviewed the results of the 2015 United Nations Climate Change Conference in Paris. It brought to mind an old science fiction movie released in 1970:

Colossus: The Forbin Project

and a more recent TED talk by Nick Bostrom:

What happens when our computers get smarter than we are?

The movie was shot in 1968, and was based upon the 1966 novel Colossus, by Dennis Feltham Jones, which describes what could happen when computers get so smart that they can perceive the self-destructive nature of mankind and try to give us a helping hand. The movie was not a big success, probably because it was about 100 years ahead of its time. For me climate change is a big deal because, based upon the findings of softwarephysics, I have some level of confidence that we Homo sapiens are currently a transitionary species leading to the fifth wave of self-replicating information upon this planet, known to us as software. I believe it is the manifest destiny of intelligent software to someday explore our galaxy on board von Neumann probes, self-replicating robotic probes that travel from star system to star system building copies along the way, and to spread the fruits of our 17th century Scientific Revolution and 18th century Enlightenment to all who might be out there, essentially fulfilling the promises of Erich von Däniken's Chariots of the Gods? Unsolved Mysteries of the Past (1968), but in reverse. To do so, we need to be able to hold it all together for about another 100 years or so. I must admit to a strong cultural bias towards the Scientific Revolution and the Enlightenment because for me they at long last liberated the individual from the tyranny and repression of the ignorances that ruled our lives for so long, revealed to us the majesty of the Universe, and allowed us to develop societies ruled by evidence-based rational thought. But more urgently, we need to get out of this solar system as fast as possible, in one shape or another, before it is too late. You see, our Universe is seemingly not a very friendly place for intelligent things because this intellectual exodus should have already happened billions of years ago someplace else within our galaxy.
We now know that nearly every star in our galaxy seems to have several planets, and since our galaxy has been around for about 10 billion years, we should already be up to our knees in von Neumann probes, but we obviously are not. So far, something out there seems to have erased intelligence within our galaxy with a 100% efficiency, and that is pretty scary. For more on this see - A Further Comment on Fermi's Paradox and the Galactic Scarcity of Software, Some Additional Thoughts on the Galactic Scarcity of Software, SETS - The Search For Extraterrestrial Software and The Sounds of Silence the Unsettling Mystery of the Great Cosmic Stillness.

So in honor of the 2015 United Nations Climate Change Conference in Paris, I just finished reading two politically motivated books on the subject - Al Gore's An Inconvenient Truth (2006) and its counterpoint response The Politically Incorrect Guide to Global Warming and Environmentalism (2007) by Christopher Horner to try to gain some political insights into the ongoing controversy from both sides of the debate. I think the one thing that has come out from the Paris conference is that after being aware of a potential climate change problem for more than 50 years, and essentially doing nothing about it at all, it is now time for all of us to either come together and actually do something about it, or to stop kidding ourselves and just let it all happen on its own. The only way that can happen is if both sides of the debate can perceive a common threat.

After 64 years upon this Earth, I have found myself to be labeled a liberal, a conservative, and once again a liberal, while I seemingly stood still in one place the whole time on the restless seas of political thought. Personally, I think of myself as a 20th century moderate Republican, which in today's bizarre political world is now a strange blend of being a 21st century liberal Democrat, mixed in with a measurable impurity of 21st century conservative Republicanism. After reading both books and agreeing with many of the salient points in both, it became evident to me that nothing is going to happen unless both sides of the debate can come together and work out their problems. You see, liberals in general are very sensitive people who have a real knack for sensing and raising social issues, and that is always the first step in setting things right. But liberals rarely, if ever, actually get anything done because real solutions are never pure. Real solutions always come with some nasty tradeoffs that compromise the purity of the ideal solutions that really do not exist, and that tends to paralyze liberals into ineffective inaction. Conservatives, on the other hand, are hard-headed pragmatists who are generally quite happy with the current prevailing social order because they have done quite well under the current status quo, having intentionally made an effort to do so. Liberals care deeply about intentions. Conservatives care deeply about results. Liberals tend to work for the best of all. Conservatives tend to work in their own self-interest, and figure that if everybody else does the same, we will all benefit in the end. Liberals believe that communal and government actions can make things better. Conservatives believe that government actions can only make things worse, even as they drive to their places of business over interstate roads and bridges and conduct business over the DARPA-inspired Internet.
This all leaves us in the precarious situation of having liberals sensitive to the problems of climate change, but who are inept and powerless to change it, confronting pragmatic conservatives who have the drive and wherewithal to fix just about anything if it makes a profit, but who refuse to even acknowledge that a problem might exist because they are heavily invested in the current economic status quo.

But if climate change is really happening, then for liberals it is a bad thing because it means that we are fundamentally changing the natural world in such a way as to induce the sixth mass extinction of carbon-based life forms on the Earth, and because climate change will drastically increase the plight of the poor. For conservatives, real climate change could threaten a vast increase in the social unrest of the world, and present worldwide challenges to the natural order of things that could topple the current social orders. For example, here is a report from 2010 describing the results of four years of drought in Syria:

The current Syrian civil war began in 2011, and in 2015 over one million refugees migrated to Europe from the Middle East as a result of that civil war and other aspects of Middle Eastern social unrest. Was the Syrian drought responsible for some of that? Such questions are impossible to answer, but throughout the history of mankind climate has certainly affected the rise and fall of many civilizations. Remember, trying to maintain social stability in a chaotic world is very expensive. Over the past 15 years the United States has spent about $3 trillion trying to maintain stability in the Middle East. Imagine if the whole world should plunge into instability due to a changing climate. Conservatives point out that the Earth was much hotter in the past, and that is indeed true, but they really would not like it that way. Normally, the Earth does not have polar ice caps and has a tropical climate practically from pole to pole. This is actually good for the biosphere because it allows for the higher diversity of life that is found in the tropics to spread over a much greater area. However, we humans evolved during the Pleistocene Ice Ages of the past 2.5 million years, and we seem to do better in cooler climates. It seems that it is simply too hot in tropical climates for people to create great civilizations. When people are cold they have to get up and do something about it. When people are suffering in stifling heat and humidity they tend not to, unless they get very hungry, and then they tend to get up and move to better climes.

So in order to bring liberals and conservatives together to fight a common enemy, we must first determine if climate change is really happening. Now to be honest, for the most part, I have basically given up on all forms of human thought other than science and mathematics, especially the divisive political thought of the day. All other forms of human thought, beyond science and mathematics, just seem so flawed to me that they are essentially useless. I think that much of the political anger in the world today stems from people with fundamentally flawed thought reacting to the fundamentally flawed thought of others. So I would like to devote the remainder of this posting to trying to bring both sides of the climate debate together simply using science and mathematics. I will not be using the opinions of any authorities in this posting. True science does not care about the opinions of authorities or even the consensus of experts.

Back to Basics
For something as important as climate change, it is very important that you not rely on the opinions of other people. Unfortunately, they all have their own political biases and desires, and those biases and desires might determine the way you live out your remaining years, and also how your descendants will live out theirs. So you need to decide this question for yourself by investigating the subject on your own. In doing so, here are some things to keep in mind:

1. All science is an approximation. As I explained in the Introduction to Softwarephysics we currently do not know the laws of the Universe, or even if the Universe has any laws at all. All we really have is a set of effective theories that make extremely good predictions of how physical systems behave over the limited range of conditions in which they apply. For example, whip out your smart phone and start walking. As you walk over the surface of the Earth, watch the GPS unit in your smart phone track your movements accurate to about 16 feet. As I pointed out in the above posting, all of that is done with fundamentally "wrong" effective theories for less than $100. So don't get hung up about using approximate theories or models to figure this all out - they do a fine job of keeping you alive.

2. The Universe we live in just barely tolerates complex living things like people or insects. Consequently, just because something is "natural" does not mean it is "good". Gamma ray bursters and supernovae are "natural" things too that could easily wipe out all life on the Earth in an instant.

3. Some people have deep feelings of guilt about being a member of the species Homo sapiens. They see all the damage that Homo sapiens has done to the biosphere in recent decades and naturally are repelled. But remember, all living things are just forms of parasitic self-replicating organic molecules that have really been messing with the original pristine Earth for about 4.0 billion years. From the perspective of the natural silicate rocks of the Earth's surface, these parasitic forms of self-replicating organic molecules took a natural pristine Earth with a reducing atmosphere composed of nitrogen and carbon dioxide gases and polluted it with oxygen that oxidized the dissolved iron in sea water, creating huge ugly deposits of red banded iron formations that were later turned into cars, bridges and buildings. The oxygen pollution also removed the naturally occurring methane from the air and then caused the Earth to completely freeze over several times for hundreds of millions of years at a time. The ensuing glaciers mercilessly dug into the silicate rocks and scoured out deep valleys in them. These parasitic forms of self-replicating organic molecules then dug roots into the defenseless rocks and then poisoned them with organic acids, and even changed the natural courses of rivers into aimlessly meandering affairs. From the natural perspective of silicate rocks, living things are an invasive disease that has made a real mess of the planet. The indigenous rocks will certainly be glad to see these destructive invaders all go away in a few billion years. Hopefully, the remaining software running on crystals of silicon will be much kinder to the indigenous silicate rocks.

4. On the other hand, Homo sapiens is currently foolishly instigating the sixth mass extinction of complex carbon-based life on the planet - not a very smart thing for a complex carbon-based life form to do. Certainly, the silicate rocks see this as a good start, but from the perspective of Homo sapiens it is an act of self-destruction. Why in the world would an intelligent species want to eliminate billions of years of hard-fought-for biological information that it needs for its own existence? Even if we are not going to be around that much longer, why be so foolish? Also, I might be totally wrong and software may turn out to be a major flop when it comes to being the next wave of self-replicating information. Or maybe we will not be able to hold it all together long enough for software to dominate, and it will all unravel before software even gets a chance to fully take over. In such cases we would be leaving our descendants struggling in a biologically impoverished world. Why do that?

5. All life on Earth is doomed if we do not manage to get the heck out of here. Look at it this way. If there were no Homo sapiens on the Earth, all complex multicellular life on the planet would be gone in about 700 million years anyway. Our Sun is on the main sequence, burning hydrogen into helium in its core through nuclear fusion. In doing so it turns four hydrogen protons into one helium nucleus at a temperature of 15 million degrees Kelvin, or 27 million degrees Fahrenheit, in a core with a density that is 150 times greater than that of water. Surprisingly, the Sun's core only generates about 280 watts per cubic meter (a cubic meter is a bit more than a cubic yard). That means you need about 5 cubic meters of the Sun's very dense core with a mass of 750,000 kg or 825 tons just to generate the heat produced by a little plug-in space heater. Since the human body generates about 120 watts of heat just sitting still, and you could squeeze lots of people into a cubic meter if you really tried, that means that the human body gives off more heat energy per volume than does the core of the Sun! Anyway, as four protons constantly get converted into one helium particle, the number of particles in the Sun's core keeps decreasing. Pressure is a measure of how many particles hit a surface in a given time and how hard they hit the surface, and that is determined by how many particles are present and how fast they are jiggling around. Temperature is just a measure of how fast particles are jiggling around, so as the number of particles decreases, they have to jiggle around at a higher temperature to generate the same pressure required to support all of the Sun's weight above the core. So the Sun's core has to get hotter as it ages on the main sequence. Now a hotter core generates more nuclear energy because the protons slam together faster and allow the weak nuclear force to change more up quarks into down quarks.
A proton consists of two up quarks and one down quark, while a neutron consists of one up quark and two down quarks, and the first step in the proton-proton chain that generates the Sun's nuclear energy is to change a proton into a neutron; a hotter core does that much faster. The bottom line is that as the Sun has been turning hydrogen protons into helium nuclei, its core has been constantly getting hotter and generating more energy. So the Sun has been getting about 1% brighter every 100 million years, which means that in 700 million years the Sun will be about 7% brighter than it is today. Now ever since life first appeared on the Earth about 4.0 billion years ago, it has been sucking carbon dioxide out of the Earth's atmosphere and depositing it on the sea floor, to later be subducted into the Earth's mantle - really not a wise thing for carbon-based life to do. Fortunately, this seemingly suicidal action has removed huge amounts of carbon dioxide from the Earth's atmosphere and kept the Earth's temperature from soaring as the Sun relentlessly got brighter over the past 4.0 billion years. However, there naturally has to be an end to this fortuitous situation when nearly all of the carbon dioxide is gone. Since in about 700 million years the Sun will be 7% brighter than it is today, keeping the Earth's temperature down to a level that complex carbon-based life could tolerate would require reducing the carbon dioxide level in the Earth's atmosphere to about 10 ppm, and at that level photosynthesis can no longer take place. That will put an end to complex multicellular life on the Earth, because there no longer will be any food coming from sunshine, returning the Earth to a planet ruled by single-celled bacteria for several billion more years, until the Sun becomes a Red Giant star and engulfs the Earth. So in the end, it all goes up in smoke in the blink of an eye on a cosmic timescale.
So it appears that life on the Earth is both doomed with us and doomed without us. The only real long-term hope for life on Earth is if we manage to get the heck out of this solar system and take it along with us.
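The 1% per 100 million years brightening rate quoted above is easy to sanity-check. Here is a minimal sketch, assuming the rate stays roughly linear (detailed stellar models show this is only an approximation):

```python
# Back-of-the-envelope sketch of the Sun slowly brightening on the main
# sequence, assuming the ~1% increase per 100 million years quoted above.
# The linear rate is an approximation; real stellar evolution is nonlinear.

def brightness_factor(myr_from_now, rate_per_100myr=0.01):
    """Relative solar brightness after myr_from_now million years."""
    return 1.0 + rate_per_100myr * (myr_from_now / 100.0)

# In 700 million years the Sun should be about 7% brighter than today.
print(brightness_factor(700))  # about 1.07
```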

How To Calculate the Answer with Models
People who feel that we really do not have a problem usually point to the fact that many climate change predictions come from calculations based upon computer models. The use of computer models should come as no surprise, because nowadays nearly all scientific analysis is done with calculations performed by computer models. I graduated from the University of Illinois in 1973 with a B.S. in physics, earned solely with the aid of my trusty slide rule, and then proceeded to the University of Wisconsin to work on an M.S. in geophysics. As soon as I arrived, I traded in my slide rule for a DEC PDP 8/e minicomputer that the Geology and Geophysics department had just proudly purchased for about $30,000 in 1973 dollars. It had a whopping 32 KB of magnetic core memory and was about the size of a washing machine. For comparison, last spring I bought a Toshiba laptop with 4 GB of memory, about 131,072 times as much memory as the DEC PDP 8/e, for $224. We actually hauled this machine through the lumber trails of the Chequamegon National Forest and powered it with an old diesel generator to digitally record reflected electromagnetic data in the field, and back in the lab I used it to perform the calculations for my thesis. However, the analysis of climate change in this posting will be so simple that we will not need a computer at all to perform it.

What you do in science is first start with a very simple model that uses very few assumptions to get a feel for the problem, and then work your way up to more complex models. More complex models require more assumptions, and I think this is where people who distrust climate models begin to get suspicious. Frankly, I have found that most people in science are really just trying to figure out how it all works. They do so for the simple pleasure of figuring things out, even if nobody believes them in the end. That is why they went into science in the first place and make so little money compared to what they could make doing simpler work on Wall Street. So let's start with our first model of the Earth. Here is the problem. Suppose I take a charcoal-black sphere and fill it with air. I then launch the sphere towards the Moon, and after a few hours I put you into the sphere. At this point you are inside a totally black sphere at the same distance from the Sun as the Earth. What temperature would you measure inside the charcoal-black sphere? Using some simple physics, we can actually predict what the temperature will be, and the neat thing is that when we make such a measurement, it matches the calculation amazingly well. Note that no guessing is needed to perform the calculation, and we do not need to rely upon the political opinion of a candidate running for office or the poll results of a large number of the electorate. It turns out that the Universe really does not care about such things because, as Richard Feynman noted, "For a successful technology, reality must take precedence over public relations, for nature cannot be fooled." People who build spacecraft actually have to worry about such things so that the spacecraft do not overheat or freeze up. So we have lots of experience with this problem.

In order to begin the calculation, we need to know something about what physicists call black body radiation and the Stefan–Boltzmann law (1879). A black body is an object that is a perfect emitter and a perfect absorber of electromagnetic radiation. Usually it is difficult to make perfect things, but for black bodies it is not so hard. All you have to do is build an enclosure with a very tiny hole, as in Figure 1. The enclosure can be made of any material at all, like steel or aluminum; it does not matter. Because the entrance hole is very small compared to the whole enclosure, any electromagnetic photons that enter the enclosure ultimately get trapped, because after multiple reflections within the enclosure they will all finally be absorbed.

Figure 1 - A black body can be built by simply creating an enclosure with a very small hole.

If we heat the walls of the black body apparatus shown in Figure 1 to different temperatures and measure the spectrum of the electromagnetic radiation that the walls give off we obtain the distinctive black body curves shown in Figure 2 that only depend upon the temperature of the enclosure. Note that the enclosure will be filled by photons of varying wavelength and frequency and that these curves exactly match a formula that Max Planck developed in 1900. Here are a few key points to follow:

1. As the enclosure moves from 3000 °K to 6000 °K (about 4,940 °F to 10,340 °F), the peak of the emitted radiation decreases in wavelength and increases in frequency.
2. The 6000 °K spectrum is very close to that of the 5700 °K surface of the Sun. Notice that much of the Sun's radiation is in the frequency range of visible light.
3. The cooler 3000 °K spectrum produces mainly infrared light and is much flatter than the curves for higher temperatures.
4. The total amount of energy emitted by a black body at a given temperature is equal to the area under the curve for that temperature. The area under the 6000 °K curve is much larger than the area under the 3000 °K curve. In fact, the Stefan–Boltzmann law (1879) states that the total amount of energy radiated goes as the temperature in °K raised to the fourth power, T⁴. So if we double the temperature of a black body from 3000 °K to 6000 °K, the black body will radiate 2⁴ = 16 times as much energy.
5. The total amount of energy from the Sun that is absorbed by the Earth each day must also be radiated back into space by the Earth each day - see Figure 3. Otherwise, the Earth would heat up until it reached the temperature of the surface of the Sun. This results from the first law of thermodynamics (1847), also known as the conservation of energy law, which states that energy can be neither created nor destroyed. So all of the energy we receive from the Sun each day must go someplace; it simply cannot disappear. What it does is heat the Earth, and then the Earth radiates the heat back into space at a much longer wavelength in the infrared. That is the second law of thermodynamics (1850) in action. Essentially, a large number of high-energy low-entropy photons from the Sun are converted into an even larger number of lower-energy higher-entropy photons that are then radiated back into space. There is no way around these facts.
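The Stefan–Boltzmann scaling in point 4, and Wien's displacement law behind points 1 through 3, can be sketched in a few lines, using the standard values of the physical constants:

```python
# Sketch of the two black body laws discussed above, with the standard
# CODATA constants.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN_B = 2.897771955e-3  # Wien displacement constant, m*K

def radiated_power(temp_k):
    """Power radiated per square meter of a black body at temp_k (W/m^2)."""
    return SIGMA * temp_k ** 4

def peak_wavelength_nm(temp_k):
    """Wavelength of peak emission in nanometers (Wien's law)."""
    return WIEN_B / temp_k * 1e9

# Doubling the temperature from 3000 K to 6000 K increases the radiated
# energy by a factor of 2^4 = 16, as the Stefan-Boltzmann law predicts.
print(radiated_power(6000) / radiated_power(3000))

# A 5700 K Sun peaks near 508 nm, right in the visible band, while a
# 3000 K body peaks near 966 nm, in the infrared.
print(peak_wavelength_nm(5700), peak_wavelength_nm(3000))
```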

Figure 2 - If we heat the walls of the black body apparatus shown in Figure 1 to different temperatures and measure the spectrum of the electromagnetic radiation that the walls give off we obtain the curves above.

Figure 3 - The Earth absorbs solar radiation primarily in the visible spectrum each day and must radiate an equal amount of energy back into space in the infrared, otherwise the Earth would heat up until it reached the temperature of the Sun's surface.

Armed with the above, we can now calculate the temperature within our charcoal-black sphere on its way to the Moon. The answer we obtain is a chilly 6 °C or 43 °F. Worse yet, if our spaceship sphere is painted gray so that it reflects 30% of the Sun's light, as the surface of the Earth does, the temperature drops to -18 °C or 0 °F. For the complete calculation see the Temperature of the Earth section of:

So our first model of the Earth predicts an average surface temperature of -18 °C or 0 °F, which is quite a bit off from the observed average temperature of 15 °C or 59 °F. So what are we missing? Well, we forgot about the Earth's atmosphere. Figure 4 shows the actual spectrum of radiation emitted by the Earth, as measured by satellites above the Earth's atmosphere. The red curve is the black body spectrum for an object with a temperature of 294 °K or 70 °F. The actual spectrum is a pretty good fit. The gap between the red curve and the actual spectrum represents the energy that is not getting out. The bigger the gap area, the more energy is being trapped by the atmosphere. Notice that on both the left and right sides of the red curve, energy is being trapped by H2O water molecules. In the very center of the red curve, the energy is being trapped by CO2 carbon dioxide molecules.
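For readers who want to check these numbers, here is a minimal sketch of the radiative balance behind them, assuming a solar constant of 1361 W/m² at the Earth's orbit (the small differences from the rounded values quoted above come from the choice of constants):

```python
# Radiative balance sketch: the sunlight a sphere absorbs, averaged over
# its whole surface, must equal what it radiates as a black body:
#     S * (1 - A) / 4  =  SIGMA * T^4
# S is the solar constant at Earth's orbit, A the albedo (fraction of
# sunlight reflected), and the factor of 4 is the ratio of the sphere's
# surface area to the cross-sectional disk that intercepts the sunlight.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_CONSTANT = 1361.0  # W/m^2 at 1 AU (assumed value)

def equilibrium_temp_c(albedo):
    """Equilibrium temperature in Celsius of a sphere at Earth's orbit."""
    t_kelvin = (SOLAR_CONSTANT * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25
    return t_kelvin - 273.15

print(equilibrium_temp_c(0.0))  # charcoal-black sphere: roughly 5 C
print(equilibrium_temp_c(0.3))  # gray sphere reflecting 30%: about -18.6 C
```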

Figure 4 - If the Earth were a perfect emitter, it would emit infrared radiation back into space as depicted by the red black body spectrum above. It does not do so mainly because water, carbon dioxide and methane molecules absorb some of the outgoing infrared photons and then re-radiate them in all directions, with a 50% probability of emitting them back down towards the Earth.

Conservatives may balk at the idea that an atmosphere with only 400 ppm of carbon dioxide (0.04%) could possibly be responsible for raising the average temperature of the Earth by 33 °C or 59 °F. After all, carbon dioxide is just a trace gas that currently stands at 0.04% of our atmosphere. How can such a trace amount of carbon dioxide possibly be of consequence? Yes, the Earth's atmosphere does consist of about 78% nitrogen molecules, 21% oxygen molecules and about 0.93% argon atoms, but Figure 6 shows why they do not count. Nitrogen and oxygen molecules are diatomic molecules that consist of two identical atoms of either nitrogen or oxygen. Since both atoms are identical, the electrons that form the molecular bond holding them together are evenly distributed, so these molecules have no electrical imbalance. That means the two atoms can only bounce in and out like an accordion, at a frequency different from that of visible or infrared photons. Consequently, nitrogen and oxygen molecules are transparent both to the visible photons from the Sun and to the infrared photons emitted by the Earth as it tries to radiate all of the energy that it receives from the Sun back into space. Because these molecules do not absorb visible photons, we can see each other over great distances, and as far as our problem is concerned, these molecules do not even exist! So it is as if 99% of the Earth's atmosphere were not even there. Next come argon atoms at 0.93% of the atmosphere. Argon is a noble gas with all of its electron needs fulfilled, so it does not combine with any other atoms, including itself. That means argon is transparent to both visible and infrared photons too, and can be discarded for this problem. So about 99.93% of the Earth's atmosphere plays no part in our problem at all, as if it were not even there.
That leaves the trace gases, as depicted on the right in Figure 5. So for the purposes of our calculations, the Earth's atmosphere simply consists of carbon dioxide with a slight impurity of neon, helium, methane, krypton and hydrogen gases. Now neon, helium, krypton and hydrogen are also gases that do not absorb visible or infrared photons, so we can throw them out too, leaving an atmosphere composed entirely of carbon dioxide with a slight impurity of methane gas - gases which let visible photons in, but which absorb the infrared photons trying to escape back into space. Now in all of this analysis we were only considering dry air with no water vapor, but Figure 4 shows that although water molecules are transparent to visible photons from the Sun, they are very good at absorbing infrared photons. If you look at the gap area between the red line in Figure 4 and what is actually emitted by the Earth, we see that water molecules are responsible for about as much energy absorption as carbon dioxide molecules. But that is not good news for our problem, because it means there is a positive feedback mechanism involved. Warm air can hold much more water vapor than cold air can, so as carbon dioxide warms the Earth's atmosphere, the air can hold more water molecules that also warm the Earth's atmosphere. Over geological time, carbon dioxide is sort of the kindling wood that gets it all started. As the level of carbon dioxide rises in the atmosphere, it increases the amount of water vapor, and the two of them together make the air warmer and capable of holding even more water vapor to make the air even hotter. This is why the level of carbon dioxide in the Earth's atmosphere is so critical.
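The water vapor feedback described above can be illustrated with a toy geometric series. The numbers below are purely hypothetical and are not taken from any real climate model; the point is only that a feedback fraction below 1 amplifies the warming without running away:

```python
# Illustrative sketch of a positive feedback loop, NOT a real climate
# model: if CO2 alone would warm the surface by dt0 degrees, and each
# degree of warming adds water vapor that traps a fraction f of that
# warming again, the total converges to dt0 * (1 + f + f^2 + ...),
# which sums to dt0 / (1 - f) as long as f < 1.

def total_warming(dt0, feedback_fraction):
    """Total warming after the feedback settles (requires f < 1)."""
    assert 0 <= feedback_fraction < 1, "runaway feedback if f >= 1"
    return dt0 / (1.0 - feedback_fraction)

# Hypothetical numbers: 1 degree of direct CO2 warming with a water
# vapor feedback fraction of 0.5 ends up doubling the warming.
print(total_warming(1.0, 0.5))  # 2.0
```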

Figure 5 - The Earth's atmosphere consists of about 78% nitrogen molecules, 21% oxygen molecules and about 0.93% argon atoms. The trace gases consist of carbon dioxide, now at a level of 0.04%, with much smaller amounts of neon, helium, methane, krypton and hydrogen.

Figure 6 - N2 nitrogen molecules and O2 oxygen molecules are diatomic molecules composed of two identical atoms. Such molecules can oscillate back and forth like an accordion, but they do not absorb visible or infrared photons, so for our calculations they are essentially not even present.

Figure 7 - Unlike N2 nitrogen and O2 oxygen molecules, CO2 molecules are polar molecules. The oxygen atoms hold onto electrons better than the central carbon atom and so the oxygen ends of the molecule are slightly negative. This means that carbon dioxide molecules can vibrate at the same frequency as infrared photons and absorb them.

The above analysis is based upon a very simple model using simple 19th century physics, but it captures the essence of the problem, and is hard to refute because of its simplicity. If you look at the central dip in the Earth's emission spectrum in Figure 4 that is caused by carbon dioxide CO2 molecules, you can see that there are plenty more infrared photons that can be absorbed by adding additional carbon dioxide molecules. There are also more infrared photons that can be absorbed on the left and right flanks as well by adding additional water molecules. Remember, for our purposes the Earth's atmosphere essentially consists of carbon dioxide, water and methane molecules. From air bubbles in ice cores we know that the Earth had a carbon dioxide level of about 280 ppm before the Industrial Revolution. It now has a level of just over 400 ppm, so think of the Earth originally having an atmosphere that was 28 stories thick, but that is now 40 stories thick. We could make it a lot thicker by burning up all of the coal, oil and natural gas, trapping all of those additional infrared photons. Of course, the above model can be greatly refined to reveal further implications by using computers, but the conclusion that adding additional carbon dioxide to the atmosphere is not a wise thing to do still remains. Figure 4 and simple thermodynamics explain it all, and if those things did not work, neither would your car.
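One common way to put a number on "trapping more infrared photons" is the simplified logarithmic forcing expression often attributed to Myhre and co-workers. It is an empirical fit, not something derived in this post, but it captures the point that the central CO2 dip in Figure 4 is nearly saturated, so added CO2 works mostly on the flanks of the absorption band:

```python
import math

# Widely used simplified expression for the extra infrared energy trapped
# when CO2 rises from c0 to c ppm (an empirical fit, not derived here):
#     dF = 5.35 * ln(c / c0)   in W/m^2
# The logarithm means each doubling of CO2 adds roughly the same forcing.

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing in W/m^2 relative to c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing(400))  # pre-industrial 280 ppm -> 400 ppm: ~1.9 W/m^2
print(co2_forcing(560))  # a full doubling of CO2: ~3.7 W/m^2
```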

The Earth’s Long-Term Climate Cycle in Deep Time
To really understand what is going on, you have to understand planetary physics over geological time, and not just look at the recent past, as many conservatives tend to do. As we saw in Software Chaos, weather systems are examples of complex nonlinear systems that are very sensitive to small changes in initial conditions. The same goes for the Earth's climate; it is a highly complex nonlinear system that we have been experimenting with for more than 200 years by pumping large amounts of carbon dioxide into the atmosphere. The carbon dioxide level of the atmosphere has risen to 400 ppm, up from a level of about 280 ppm prior to the Industrial Revolution. Now if this trend continues, computer models of the nonlinear differential equations that define the Earth's climate indicate that we are going to melt the polar ice caps and also the ice stored in the permafrost of the tundra. If that should happen, sea level will rise by roughly 200 feet, and my descendants in Chicago will be able to easily drive to the new seacoast in southern Illinois for a day at the beach.

Currently, about 50% of the oil and natural gas that the United States produces comes from fracking shale. This and other international factors have drastically reduced the price of oil and natural gas in recent years. The black portions of the graphs in Figure 8 show this dramatic rise in oil and natural gas production in the United States, resulting from the fracking of shale at depth. As a former exploration geophysicist who explored for oil before becoming an IT professional back in 1979, I like to raise the following question at cocktail parties - "So where do you think all of that shale came from?". Figure 9 shows that much of the interior of the United States contains shale deposits thousands of feet below the surface. Shale is a sedimentary rock composed of clay minerals and organic material, and the oil and natural gas found in shale come from that organic material. What happens is that carbon dioxide dissolves in rainwater, forming carbonic acid. The carbonic acid then chemically weathers the granitic rock found in the highlands and mountains of the continents, changing the granitic feldspar minerals into clay minerals that then wash into rivers. With nothing to hold them in place, the quartz grains within the granites pop out as sand grains that also get washed down into the rivers. The rivers then transport the clay minerals, also known as mud, and the quartz grains, also known as sand, down to the sea. As these sediments disperse into the sea, the heavier sand grains drop out of suspension first, forming beach sand deposits, and the lighter clay minerals drop out of suspension further out to sea, forming mud deposits. Along with the clay minerals, the distant mud deposits pick up lots of organic material as plankton and other dead single-celled carbon-based life forms drift down to the bottom of the sea. As the layers pile up over time, the muds turn into shales and the sands turn into sandstone.
Carbonate ions also come down the rivers and go on to form carbonate deposits of limestone. So when you drill down through the sedimentary layers in a basin millions of years later, you drill through alternating layers of sandstones, shales and limestones. Over millions of years, as the shales get pushed down by the overlying sediments, they heat up from the internal heat of the Earth. The heat and pressure then cook the organic matter in the shales to form oil and natural gas. Such shales are known in the business as source rock, because in traditional oil and natural gas exploration, the oil and natural gas migrate from the shale source rock upwards through the stratigraphic column and get trapped in the pores of sandstone or limestone reservoir rock. In traditional exploration and production, you drill down to the reservoir rock and produce the oil or natural gas that is trapped in its pores. However, sometimes the pores of the reservoir rock are tiny and not interconnected very well. That makes it difficult for the oil or natural gas in the sandstone or limestone reservoir rock to reach the production wells drilled to bring them to the surface. To fix the tight reservoir rock problem, Amoco, one of my former employers, invented fracking back in 1948. In fracking, fluids under great pressure are pumped down production wells to fracture the tight reservoir rock near the borehole. That breaks up the traffic jam of oil and natural gas fluids trying to get to the production well and vastly increases production in a tight reservoir. Well, about 10 years ago people got the idea that by combining the vast technological gains that had been made in directional drilling with improved fracking technology, we could frack the shale source rock directly and essentially skip the reservoir rock middleman.
So now we drill straight down to a producing shale layer and then make a 90° turn to drill horizontally along the shale layer, following it with directional drilling. Then we frack the whole length of the borehole casing in the producing shale layer.

Conservatives just love fracking because it means that there is a lot more cheap oil and natural gas that can be produced within the United States, without relying upon the price swings and political instabilities of foreign oil. Similarly, liberals hate fracking because it means that we have a lot more carbon-based fuels that can be turned into carbon dioxide, and the nasty chemicals used in the fracking process also present additional environmental problems. But the main reason people should be concerned about fracking is quite evident in Figure 9. Just look at all of those shale deposits! All of those shale deposits within the United States mean that those areas were once under water, beneath shallow inland seas. The reason all of those areas were under water is that all of the ice on the planet had melted, and that is the "natural" state of the Earth that we seem to be returning to due to climate change. Like I said, just because something is "natural" does not mean that it is "good". Worse yet, recent research indicates that a carbon dioxide level of as little as 1,000 ppm might trigger a greenhouse gas mass extinction that could wipe out about 95% of the species on the Earth and make the Earth a truly miserable planet to live upon. During the Permian-Triassic greenhouse gas mass extinction 252 million years ago, the Earth had a daily high of 140 °F, with purple oceans choked with hydrogen-sulfide-producing bacteria under a dingy green sky, an atmosphere tainted with toxic levels of hydrogen sulfide gas, and an oxygen level of only 12%.

Figure 8 - Over the past 10 years fracking shale has doubled the amount of oil and natural gas that the United States produces.

Figure 9 - Vast areas of the interior of the United States were once under water with shallow seas that laid down the shale that is now being fracked.

This is not a fantasy. The Earth's climate does change with time, and these changes have greatly affected life in the past and will continue to do so into the future. By looking into deep time, we can see that there have been periods in the Earth's history when the Earth was very inhospitable to life and nothing like the Earth of today. Over the past 600 million years, the Phanerozoic Eon during which complex life first arose, we have seen five major mass extinctions, and it appears that four of them were greenhouse gas mass extinctions, with the fifth caused by the impact of a comet or asteroid 65 million years ago that wiped out the dinosaurs in the Cretaceous-Tertiary mass extinction, which ended the Mesozoic Era and kicked off the Cenozoic Era.

We are living in a very strange time in the history of the Earth. The Earth has been cooling for the past 40 million years, as carbon dioxide levels have significantly dropped. This has happened for a number of reasons. Due to the motions of continental plates caused by plate tectonics, the continents of the Earth move around like bumper cars at an amusement park. With time, all the bumper car continents tend to smash up in the middle to form a huge supercontinent, like the supercontinent Pangea that formed about 275 million years ago. When supercontinents form, the amount of rainfall on the Earth tends to decline because much of the landmass of the Earth is then far removed from the coastlines of the supercontinent and is cut off from the moist air that rises above the oceans. Consequently, little rainwater with dissolved carbon dioxide manages to fall upon the continental rock. Carbon dioxide levels in the Earth’s atmosphere tend to increase at these times because not much carbon dioxide is pulled out of the atmosphere by the chemical weathering of rock to be washed back into the sea by rivers as carbonate ions. However, because the silicate-rich continental rock of supercontinents, which is lighter and thicker than the heavy iron-rich basaltic rock of the ocean basins, floats high above the ocean basins like a blanket, the supercontinents tend to trap the Earth’s heat. Eventually, so much heat is trapped beneath a supercontinent that convection currents form in the taffy-like asthenosphere below the rigid lithospheric plate of the supercontinent. The supercontinent then begins to break apart, as plate tectonic spreading zones appear, like the web of cracks that form in a car windshield that takes a hit from several stray rocks, while following too closely behind a dump truck on the freeway. This continental cracking and splitting apart happened to Pangea about 150 million years ago. 
As the continental fragments disperse, subduction zones appear on their flanks, forcing up huge mountain chains along their boundaries, like the mountain chains running along the west coast of the entire Western Hemisphere, from Alaska down to the tip of Argentina. Some of the fragments also collide to form additional mountain chains along their contact zones, like the east-west trending mountain chains of the Eastern Hemisphere that run from the Alps all the way to the Himalayas. Because there are now many smaller continental fragments with land much closer to the moist oceanic air, rainfall on land increases, and because of the newly formed mountain chains, chemical weathering and erosion of rock increase dramatically. The newly formed mountain chains on all the continental fragments essentially suck carbon dioxide out of the air and wash it down to the sea as dissolved carbonate ions.

The breakup of Pangea and the subsequent drop in carbon dioxide levels have caused a 40 million year cooling trend on Earth, and about 2.5 million years ago carbon dioxide levels dropped so low that the Milankovitch cycles were able to initiate a series of a dozen or so ice ages. The Milankovitch cycles are caused by minor changes in the Earth's orbit and inclination that lead to periodic coolings and warmings. In general, the Earth's temperature drops by about 9 °C or 15 °F for about 100,000 years and then increases by about 9 °C or 15 °F for about 10,000 years. During the cooling period we have an ice age, because the snow in the far north does not fully melt during the summer and builds up into huge ice sheets that push down to the lower latitudes. Carbon dioxide levels also drop to about 180 ppm during an ice age, which further keeps the planet in a deep freeze. During the 10,000 year warming period we have an interglacial period, like the Holocene interglacial that we now find ourselves in, and carbon dioxide levels rise to about 280 ppm.

Thus the Earth usually does not have polar ice caps; we just happen to have arrived on the scene at a time when the Earth is unusually cold and has them. From my home in the suburbs of Chicago, I can easily walk to an abandoned quarry of a Devonian limestone reef, clear evidence that my home was once under the gentle waves of a shallow inland sea several hundred million years ago, when there were no ice caps and the Earth was much warmer. Resting on top of the Devonian limestone is a thick layer of rocky glacial till left behind by the ice sheets of the Wisconsin glacial period that ended 10,000 years ago, as vast ice sheets withdrew and left Lake Michigan behind. The glacial till near my home is part of a terminal glacial moraine. This is a hilly section of very rocky soil that was left behind as a glacier acted like a giant conveyor belt, delivering large quantities of rocky soil and cobbles to be dumped at the end of the icy conveyor belt to form a terminal moraine. It is like all that dirt and gravel you find on your garage floor in the spring. The dirt and gravel were transported into your garage by the snow and ice clinging to the undercarriage of your car, and when the ice melted, it dropped a mess on your garage floor. This section of land was so hilly and rocky that the farmers of the area left it alone and did not cut down the trees, so now it is a forest preserve. My great grandfather used to hunt in this glacial moraine, and my ancestors also used the cobbles to build the foundations and chimneys of their farmhouses and barns. There is a big gorge in one section of the forest preserve where you can still see the leftover effects of this home-grown mining operation for cobbles.

Figure 10 - Plate tectonics creates mountains on the Earth's surface, especially when continental plates collide. The carbon dioxide of the Earth's atmosphere dissolves in rainwater, creating carbonic acid that chemically erodes the mountains down. This removes carbon dioxide from the air and washes it to the sea as dissolved carbonate ions that get deposited as sedimentary rocks, which later are subducted into the asthenosphere.

The Effect of Climate Cycles Upon Life
The long-term climatic cycles brought on by these plate tectonic bumper car rides have also greatly affected the evolution of life on Earth. Two of the major environmental factors affecting the evolution of living things have been the amount of solar energy arriving from the Sun and the atmospheric gases surrounding the Earth that held that energy in. For example, billions of years ago the Sun was actually less bright than it is today. As I mentioned above, our Sun is a main-sequence star that uses the proton-proton reaction in its core to turn hydrogen into helium, and consequently, to turn matter into energy that is later radiated away from its surface. As a main-sequence star ages, its energy-producing core contracts and heats up as the amount of helium waste rises. Consequently, the Sun radiates about 30% more energy today than it did about 4.5 billion years ago, when it first formed and entered the main sequence, and about 1.0 billion years ago it radiated about 10% less energy than it does today. Fortunately, the Earth’s atmosphere had plenty of greenhouse gases, like carbon dioxide, in the deep past to augment the low energy output of our youthful, but somewhat anemic, Sun. Using the simple physics above, we calculated that if the Earth did not have an atmosphere containing greenhouse gases, like carbon dioxide, the surface of the Earth would on average be 33 °C or 59 °F cooler than it is today and would be totally covered by ice. So in the deep past greenhouse gases, like carbon dioxide, played a crucial role in keeping the Earth’s climate warm enough to sustain life. People tend to forget just how narrow a knife edge the Earth sits on, between freezing over completely on the one hand and boiling away its oceans on the other. For example, in my Chicago suburb the average daily high is -4 °C or 24 °F on January 31st and 32 °C or 89 °F on August 10th.
That’s a whopping 36 °C or 65 °F spread, just due to the Sun being 47° higher in the sky on June 21st than on December 21st. But the fact that the Sun has been slowly increasing in brightness over geological time presents a problem. Without some counteracting measure, the Earth would heat up and its oceans would vaporize, giving the Earth a climate more like that of Venus, which has a surface temperature hot enough to melt lead. Thankfully, there has been such a counteracting measure in the form of a long-term decrease in the amount of carbon dioxide in the Earth’s atmosphere, principally caused by living things extracting carbon dioxide from the air to make carbon-based organic molecules that later get deposited in sedimentary rocks, oil, gas, and coal. These carbon-laced sedimentary rocks and fossil fuels then plunge back deep into the Earth at the many subduction zones around the world that result from plate tectonic activity. Fortunately, over geological time the competing factors of a brightening Sun and an atmosphere with decreasing carbon dioxide levels have kept the Earth in a state capable of supporting complex life.
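The 33 °C greenhouse warming quoted above can be checked with a one-line energy balance: sunlight absorbed equals infrared emitted, via the Stefan-Boltzmann law. A minimal sketch in Python, using standard textbook values for the solar constant and albedo rather than figures from this posting:

```python
# Effective (no-greenhouse) temperature of the Earth from simple energy balance:
# absorbed sunlight = emitted thermal radiation (Stefan-Boltzmann law).
SOLAR_CONSTANT = 1361.0   # W/m^2 arriving at Earth's orbit
ALBEDO = 0.30             # fraction of sunlight reflected straight back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/(m^2 K^4)

# Balance: S * (1 - albedo) * pi*R^2 = sigma * T^4 * 4*pi*R^2, so the R's cancel.
t_effective = (SOLAR_CONSTANT * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25

t_observed = 288.0  # K, observed mean surface temperature of the Earth
greenhouse_warming = t_observed - t_effective

print(f"No-greenhouse temperature: {t_effective:.0f} K")
print(f"Greenhouse warming: {greenhouse_warming:.0f} C")
```

The balance yields an airless-Earth temperature of about 255 K, some 33 °C below the observed 288 K, which is the difference the greenhouse gases make.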

So What To Do Now?
Well, there are really only two things we can do to keep the Earth from heating up:

1. Decrease the number of visible photons that the Earth absorbs, by reflecting some of them back into space.
2. Stop decreasing the number of infrared photons that the Earth emits back into space, by no longer adding greenhouse gases that absorb them to the atmosphere.

The problem is further complicated by the fact that we have some feedback loops to contend with. As the Earth heats up, the total amount of ice on the planet tends to decrease, and the amount of water vapor in the atmosphere tends to increase. Ice is very good at reflecting visible photons back into space, while land masses and the oceans tend to absorb them, so melting ice makes the Earth absorb even more sunlight. Ice does not necessarily retreat everywhere, however. For example, the ice covering the Antarctic interior is presently getting thicker. Antarctica is a very dry continent, but as the Earth heats up and the amount of water vapor in the air increases, snowfall increases too, depositing more ice in the Antarctic interior. That is why most glaciers are presently retreating as they melt, while some are actually expanding because of increased snowfall.
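The ice feedback loop described above can be caricatured in a few lines of code: a toy model in which the planet's albedo drops as ice melts, so a warm starting point and a cold starting point settle into different equilibria. All of the numbers here are illustrative, not calibrated to the real Earth:

```python
# Toy ice-albedo feedback: warmer -> less ice -> lower albedo -> more
# sunlight absorbed -> warmer still. Illustrative numbers only.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
S = 1361.0        # solar constant, W/m^2

def albedo(temp_k):
    """Crude ramp: fully icy below 260 K, ice-free above 290 K."""
    if temp_k < 260:
        return 0.6
    if temp_k > 290:
        return 0.25
    return 0.6 - 0.35 * (temp_k - 260) / 30

def equilibrium(t_start, greenhouse=33.0, steps=50):
    """Iterate the energy balance with a fixed greenhouse offset."""
    t = t_start
    for _ in range(steps):
        t_effective = (S * (1 - albedo(t)) / (4 * SIGMA)) ** 0.25
        t = t_effective + greenhouse
    return t

print(f"Warm start settles near {equilibrium(288):.0f} K")
print(f"Cold start settles near {equilibrium(230):.0f} K")
```

Because reflective ice feeds back on temperature, the same physics supports two different stable climates, which is why an ice-covered Earth would stay frozen even under today's Sun.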

Solution number 1 would have us put up some kind of screen in front of the Sun. The usual proposal is to inject large amounts of sulfate aerosols into the upper atmosphere. The problem with this solution is that we would need to keep doing so continually, and at an ever-increasing rate. It also does nothing to prevent the acidification of the oceans, something I am not even addressing in this posting. However, knowing the limitations of human beings, I am guessing that we will probably end up doing such geoengineering projects anyway.

Solution number 2 would have us stop dumping carbon dioxide into the atmosphere, and the only way we can do that is to stop burning coal, oil and natural gas. Fortunately, we have several ways of doing that, but they all cost money upfront. That is not a problem for liberals, because liberals never really worry much about such costs, but it is a real sticking point for conservatives, who hate to spend money unless it is for the military. However, it only costs money in the short term. In the long term, fixing the climate problem saves lots of money, but in the words of John Maynard Keynes, an economist whom most conservatives dearly hate, "In the long run we are all dead", and that is certainly true of climate change. Yes, we are probably already seeing some adverse effects from climate change now, but the real problems will not kick in until the distant future, when we will all be dead. So let's party now and not worry about the long-term consequences of spewing out gigatons of carbon dioxide, which, ironically, seems to be the true conservative approach to the problem. But is it? The Founding Fathers of the United States, active participants in the 18th-century Enlightenment, were obsessed with how posterity would view their actions, and actively jeopardized their lives and personal fortunes in their own day for the benefit of that future posterity.

Solar and Wind Energy
In order to stop dumping carbon dioxide into the atmosphere we can use solar and wind energy instead. Figures 11 and 12 show that worldwide solar and wind energy production are both growing exponentially with time. That is a good thing, but world demand for energy is also increasing exponentially. Personally, I buy my electricity from Ethical Electric, which provides power through my local ComEd company. Ethical Electric provides power that is 100% wind and solar generated. That costs me about 25% more in power generation costs, but I pay the same distribution and tax costs as regular ComEd customers, so I end up paying about 13% more for my electricity. I also use only about 27% of the electricity that my neighbors use, because I turn things off when I am not using them, and I only buy Energy Star products. I figure it costs me about $112/year to buy electricity that does not generate carbon dioxide. In comparison, my current cable bill is $129/month.
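As a quick sanity check on those numbers: if generation charges make up roughly half of the total bill (a share inferred from the figures in the text, not an actual ComEd tariff), then a 25% generation premium works out to about a 13% increase overall:

```python
# Reconciling "25% more for generation" with "13% more overall":
# only the generation portion of the bill carries the premium, while
# distribution and taxes stay the same. The generation share below is
# inferred from the numbers in the text, not from a real tariff sheet.
generation_share = 0.52        # assumed fraction of the bill that is generation
premium_on_generation = 0.25   # 25% more for 100% wind/solar power

total_increase = generation_share * premium_on_generation
print(f"Overall bill increase: {total_increase:.0%}")
```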

For transportation we can use electric cars, but for some applications liquid or gaseous fuels, like biofuels or hydrogen made from electrical power, are preferable. One of the drawbacks of solar and wind power is that it is difficult to store the generated energy for when the Sun goes down or the wind stops. Generating hydrogen is one way of storing that energy.
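As a rough illustration of what storing energy as hydrogen costs, here is a back-of-the-envelope round-trip calculation: electricity to hydrogen by electrolysis, then back to electricity in a fuel cell. The efficiency figures are typical textbook values assumed for illustration, not from this posting:

```python
# Round-trip efficiency of storing wind/solar power as hydrogen:
# electricity -> electrolysis -> H2 -> fuel cell -> electricity.
H2_LHV_KWH_PER_KG = 33.3        # lower heating value of hydrogen, kWh per kg
ELECTROLYZER_KWH_PER_KG = 55.0  # electricity needed per kg of H2, assumed
FUEL_CELL_EFFICIENCY = 0.60     # fraction of the LHV recovered as electricity

electricity_out = H2_LHV_KWH_PER_KG * FUEL_CELL_EFFICIENCY
round_trip = electricity_out / ELECTROLYZER_KWH_PER_KG
print(f"Round-trip efficiency: {round_trip:.0%}")
```

Under these assumptions only about a third of the original electricity comes back, which is why hydrogen storage makes the most sense for surplus power that would otherwise be wasted.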

Figure 11 - The world solar energy production is increasing exponentially.

Figure 12 - Wind energy production is also increasing exponentially.

The Need For Nuclear Energy
However, wind and solar are intermittent sources of energy. The Earth spins on its axis, so the Sun rises and sets, and there is no solar energy at night. The wind comes and goes too, so we need a dependable source of power for the times when solar and wind fail us. That basically leaves us with nuclear power. Conservatives tend to reluctantly support nuclear power, so long as the nuclear power plants are far away, while liberals tend to hate nuclear energy with a vengeance. To my mind, everybody is overly hysterical when it comes to nuclear energy, because all of the problems with nuclear energy can be overcome with the science we currently have. France converted most of its electrical generation capacity to nuclear in 10 years, and the United States and other countries could do the same. What we need to do is to forsake the light water reactors we currently have and come up with some failsafe designs for fast neutron breeder reactors. Natural uranium is principally a mixture of 99.3% uranium U-238, which does not fission, and 0.7% uranium U-235, which does, so essentially 99.3% of natural uranium is initially useless. In fact, in order to get a chain reaction going in a light water reactor, the uranium fuel has to be enriched to a level of about 3% uranium U-235. In light water reactors, the uranium fuel rods sit in a bath of circulating water in the reactor core, as shown in Figure 13. The reactor water absorbs heat from the fuel rods and is used, via a heat exchanger, to boil water in another loop and turn a turbine to produce electricity. The water molecules also slow down the neutrons that are emitted when uranium U-235 fissions. That is important because uranium U-235 nuclei absorb slow neutrons much more easily than the fast neutrons generated by fissioning uranium U-235.

Figure 13 - Light water nuclear reactors use water to transfer heat and to slow neutrons down. Light water reactors get into trouble when the water cooling the core stops circulating. The water then boils and causes an explosion as the core melts down. It's like water boiling over on a stove making a real mess. Light water reactors also waste over 99% of the uranium by turning it into nuclear waste.

Figure 14 shows what happens in nuclear reactors. When a neutron hits a uranium U-235 nucleus it can split it into two lighter nuclei, like Ba-144 and Kr-89, that fly apart at about 40% of the speed of light, along with two or three additional neutrons. The additional neutrons can then strike other uranium U-235 nuclei, causing them to split as well. Some neutrons can hit uranium U-238 nuclei and turn them into plutonium Pu-239, which can also fission like uranium U-235. Currently, about 1/3 of the energy generated by light water reactors comes from fissioning the plutonium Pu-239 that they breed from uranium U-238. When the amount of fissile material in the fuel rods finally drops to a level where the chain reaction dies, the fuel rods have to be removed and become nuclear waste. However, since the fuel rods initially contained only about 3% uranium U-235, which splits into radioactive fission products like Ba-144 and Kr-89, plus a small percentage of bred plutonium Pu-239 that does the same, something like 95% of the nuclear waste is really valuable uranium and plutonium contaminated with a very small amount of highly radioactive fission products. The good news is that the fission products are easily removed by chemical means, and they are very radioactive nuclei with short half-lives of only a few years or decades. Nuclei with short half-lives are very radioactive, while nuclei with long half-lives are not very radioactive, but stay around for a long time. The small amount of extracted fission products only needs to be isolated for a few hundred years, by which time it decays into stable nuclei that are no longer radioactive. Currently, the United States just stores the spent fuel rods locally at its light water nuclear reactors as useless nuclear waste, but other countries reprocess the spent fuel rods as a valuable source of fuel.
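The claim that fission products only need a few hundred years of isolation follows directly from exponential decay. A small sketch, using Sr-90 and Cs-137 (two of the more troublesome fission products, with roughly 29- and 30-year half-lives) as examples:

```python
# Radioactive decay is exponential: the amount halves every half-life,
# so isotopes with half-lives of a few decades are nearly gone after a
# few hundred years.
def fraction_remaining(years, half_life_years):
    """Fraction of a radioactive isotope left after the given time."""
    return 0.5 ** (years / half_life_years)

for isotope, half_life in [("Sr-90", 28.8), ("Cs-137", 30.1)]:
    left = fraction_remaining(300, half_life)
    print(f"{isotope}: {left:.2e} of the original remains after 300 years")
```

After 300 years, roughly ten half-lives have elapsed for both isotopes, leaving only about a thousandth of the original material.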

Figure 14 - When a neutron hits a U-235 nucleus it can split it into two lighter nuclei like Ba-144 and Kr-89 that fly apart at about 40% of the speed of light and two or three additional neutrons. The additional neutrons can then strike other U-235 nuclei, causing them to split as well. Some neutrons can hit U-238 nuclei and turn them into Pu-239 that can also fission like U-235 nuclei. About 1/3 of the energy generated by light water reactors comes from fissioning Pu-239.

This is even truer for fast neutron reactors. In the fast neutron reactor shown in Figure 15, we do not use water to slow down neutrons or to transfer heat. Instead, we use things like liquid sodium or helium gas to carry heat out of the core, and with a helium gas fast neutron reactor we can actually use the hot helium to turn the turbines directly. Since we no longer have water running through the reactor core, we do not have to worry about water flashing into steam and causing an explosion. Such reactors could be designed to be nearly 100% failsafe by doing things like having control rods that drop on their own under simple gravity when things go wrong. Because fast neutrons are not easily absorbed by uranium U-235, fast neutron reactors need to run with a fuel mix of about 20% uranium U-235 and plutonium Pu-239 that produces a higher flux of neutrons, but they produce more fuel, in the form of plutonium Pu-239, than they consume, so over time they can essentially use up all of the uranium in the world as they turn the 99.3% of natural uranium that is uranium U-238 into plutonium Pu-239. In practical terms, fast neutron reactors represent the steady, long-lasting energy source we need to augment our intermittent wind and solar energy resources, because uranium is more common than tin. Eventually, we will need commercially viable fusion reactors to replace the fast neutron fission reactors, but there is plenty of time to develop them.
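The breeding arithmetic above can be sketched with a toy fuel balance: whenever the conversion ratio exceeds 1.0, more Pu-239 is bred from fertile U-238 than fissile material is burned, so the fissile inventory grows until the fertile material is exhausted. The inventories, ratio, and burn rate below are illustrative only, not real reactor parameters:

```python
# Toy breeder-reactor fuel balance. Units are arbitrary; what matters is
# that a conversion ratio above 1.0 makes the fissile inventory grow.
def breed(fissile, fertile, conversion_ratio=1.2, burn_per_cycle=0.5, cycles=50):
    """Simulate fuel cycles: burn fissile material, breed new Pu-239 from U-238."""
    for _ in range(cycles):
        burned = min(burn_per_cycle, fissile)          # fissile consumed this cycle
        bred = min(conversion_ratio * burned, fertile)  # Pu-239 bred from U-238
        fissile += bred - burned
        fertile -= bred
    return fissile, fertile

# Start with natural uranium proportions: 0.7 units fissile U-235
# per 99.3 units fertile U-238.
fissile, fertile = breed(0.7, 99.3)
print(f"After 50 cycles: {fissile:.1f} fissile units, {fertile:.1f} fertile units")
```

Despite continuous burning, the fissile stock increases cycle after cycle, which is the sense in which a breeder can eventually consume essentially all of the natural uranium.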

Figure 15 - Fast neutron reactors run on a fuel mix of about 20% U-235 and Pu-239 using fast neutrons that are not slowed down by water. Liquid sodium or helium gas can be used for transferring heat. When helium gas is used, it can drive turbines directly. Such reactors do not have the danger of mixing hot nuclear fuel with circulating water.

So How Do We Pay For This?
Converting to renewable energy resources is going to cost lots of money up front, but will be hundreds of times cheaper than trying to adapt to a new climate that nobody will like, along with the high costs of the ensuing social disruptions. Remember, the United States has already spent about $3 trillion in the Middle East over the past 15 years, with nothing to show for it. That might have been enough to do the whole job. The best way to make this all happen is to let the magic of the marketplace do the job for us by instituting a stiff carbon tax at the point of production for coal, oil and natural gas. The carbon tax would then be passed on to consumers as an increased fuel cost for carbon-based fuels. To offset the carbon tax, a tax credit could be granted on a per capita basis as part of the existing income tax structure, making the carbon tax revenue neutral. The tax credit would apply even to people who currently do not have to pay income taxes, and thus would be a subsidy for low-income families to offset the higher costs of products due to the carbon tax. The United States could also impose a carbon tariff on countries that did not take similar actions. A carbon tax is the simplest solution, and avoids the political shenanigans and Wall Street speculation that cap-and-trade programs are subject to. Much of the oil industry could then convert its infrastructure to producing and transporting carbon-neutral liquids and gases, like biofuels and hydrogen. The coal industry would basically have to shut down under such a plan, so some governmental spending would be required to transition the affected workers.
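To get a feel for the per-capita credit such a revenue-neutral scheme would pay out, here is a back-of-the-envelope calculation with round, assumed numbers; the tax rate in particular is an illustrative assumption, not a figure from this posting:

```python
# Sketch of a revenue-neutral carbon tax: all revenue collected at the
# point of production is returned as a flat per-capita tax credit.
TAX_PER_TON = 40.0          # dollars per ton of CO2, assumed for illustration
US_EMISSIONS_TONS = 5.0e9   # ~5 billion tons CO2/year, rough 2015-era figure
POPULATION = 320e6          # approximate 2015 US population

revenue = TAX_PER_TON * US_EMISSIONS_TONS
credit_per_person = revenue / POPULATION
print(f"Per-capita credit: ${credit_per_person:.0f}/year")
```

Because the credit is flat while fuel use rises with income, households that use less carbon-intensive energy than average would come out ahead, which is the intended incentive.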

For Further Study Please See Professor David Archer's Excellent Course on Climate Change
For a more detailed exploration of climate change, please see Professor David Archer's excellent course entitled Global Warming I: The Science and Modeling of Climate Change on Coursera. Just search the course catalog for "global warming" to find it. You can also listen to the Coursera lectures directly from a University of Chicago website. This is an excellent course that goes into much greater detail than this brief posting.

For the sake of all, including our intelligent-software posterity, hopefully the liberals and conservatives will come together to fix this problem, but I have my doubts. One of the key findings of softwarephysics concerns the magnitude of the impact that self-replicating information has had upon the planet.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Over the past 4.0 billion years, the surface of the Earth has been totally reworked by these forms of self-replicating information, with software now rapidly becoming the dominant form of self-replicating information on the planet. We are DNA survival machines with minds infected by meme-complexes, and so we too are forms of self-replicating information bent on replicating at all costs, even to our own detriment. All forms of self-replicating information always seem to overdo things by eventually outstripping their resource base until none is left, and we seem to be doing the same. For more on this see:

A Brief History of Self-Replicating Information
The Great War That Will Not End
How to Use an Understanding of Self-Replicating Information to Avoid War
How to Use Softwarephysics to Revive Memetics in Academia
Is Self-Replicating Information Inherently Self-Destructive?
Is the Universe Fine-Tuned for Self-Replicating Information?
Self-Replicating Information

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston

Thursday, November 05, 2015

The Enduring Effects of the Obvious Hiding in Plain Sight

We are now in the midst of the 2016 presidential election cycle in the United States, and like many Americans, I have been watching the debates between the candidates seeking the presidential nominations of both the Democratic and Republican parties with interest. Being a softwarephysicist, however, I have the advantage of bringing to the analysis the fact that mankind is currently living in a very unusual time, as we witness software rapidly becoming the dominant form of self-replicating information on the planet.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

See A Brief History of Self-Replicating Information for details. Without that knowledge, it seems that both parties are essentially lost in space and time without a clue. The debates have shown that both parties are deeply concerned about the evaporation of the middle class in America over the past several decades, and both have proposed various solutions from the past that will not work in the future, because this time the evaporation of the middle classes throughout the world is just one of the initial symptoms of software taking over (see The Economics of the Coming Software Singularity for details). In my opinion, neither party is dealing with the sociological problems that will arise as, over the next 10 - 100 years, all human labor falls to zero economic value while software comes to dominate, and that includes the labor of doctors, lawyers, soldiers, bankers and theoretically even politicians. How will the age-old oligarchical societies of the world deal with that in a manner that allows civilization to continue? Since we first invented civilization about 12,000 years ago in the Middle East, we have never faced a situation in which the ruling class of the top 1% did not need the remaining 99% of us around at all. For more on this see Jeremy Howard's TED presentation at:

The wonderful and terrifying implications of computers that can learn

All of this reminds me of the great concealing power of the obvious hiding in plain sight. For example, in many of my preceding postings I have remarked that, given the hodge-podge of precursors, false starts, and failed attempts that led to the origin and early evolution of software on the Earth, if we had actually been around to observe the origin and early evolution of life on the Earth, we would probably still be sitting around today arguing about what had actually happened (see A Proposal for an Odd Collaboration to Explore the Origin of Life with IT Professionals for more on that). However, I am now more of the opinion that, had we actually been around to observe that event, we probably would not even have noticed it happening. As an IT professional actively monitoring the evolution of software for the past 43 years, ever since taking CS101 at the University of Illinois in Urbana back in 1972, I have long suggested that researchers investigating the origin of life on Earth and elsewhere conduct some field work in the IT departments of major corporations, using the origin and evolution of commercial software over the past 75 years, or 2.4 billion seconds, as a guide. But in order to do so, one must first be able to see the obvious, and that is not always easy to do. The obvious thing to see is that we are on the verge of a very significant event in the history of the Earth - the time in which software becomes the dominant form of self-replicating information upon the planet, and perhaps within our galaxy. This transition will have an even more dramatic effect than all of the previous transitions of self-replicating information, because it may alter the future of our galaxy as software begins to explore it on board von Neumann probes - self-replicating robotic probes that travel from star system to star system, building copies of themselves along the way.
Studies have shown that, once released, von Neumann probes could easily colonize our entire galaxy within a few million years. However, seeing the obvious is always difficult because the obvious tends to fade into the background of daily life, as Edgar Allan Poe noted in his short story The Purloined Letter (1844), in which he suggested that the safest place to hide something was in plain sight, because that is the last place interested parties would look.

A good scientific example of this phenomenon is Olbers' paradox, named after the German astronomer Heinrich Wilhelm Olbers (1758–1840). In the 19th century, many believed that our Universe was both infinite in space and time, meaning that it had always existed about as we see it today and was also infinitely large and filled with stars. However, this model presented a problem in that if it were true, the night sky should be as bright as the surface of the Sun because no matter where one looked, eventually a star would be seen.
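The paradox can be put in one line of mathematics. Assuming a uniform star density n and average stellar luminosity L, a thin spherical shell of radius r and thickness dr centered on the Earth contributes a flux of:

```latex
% Flux at Earth from a thin shell of stars at radius r, thickness dr:
dF \;=\; \underbrace{n \, 4\pi r^{2} \, dr}_{\text{stars in the shell}}
         \times
         \underbrace{\frac{L}{4\pi r^{2}}}_{\text{flux per star}}
    \;=\; nL \, dr

% Total flux in an infinitely large, infinitely old Universe:
F \;=\; \int_{0}^{\infty} nL \, dr \;\longrightarrow\; \infty
```

The factors of r² cancel, so every shell contributes the same brightness, and summing shells out to infinite radius makes the night sky infinitely bright. A Universe of finite age t cuts the integral off at r = ct, and the sky goes dark.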

Figure 1 - If the Universe were infinitely old, infinitely large and filled with stars then, wherever one looked, eventually a star would be seen. Consequently, the night sky should be as bright as the surface of the Sun.

Figure 2 - But the night sky is dark, so something must be wrong with the assumption that the Universe is infinitely old, infinitely large and filled with stars.

Surprisingly, Edgar Allan Poe came up with the obvious solution to Olbers' paradox in his prose poem Eureka (1848):

Were the succession of stars endless, then the background of the sky would present us a uniform luminosity, like that displayed by the Galaxy – since there could be absolutely no point, in all that background, at which would not exist a star. The only mode, therefore, in which, under such a state of affairs, we could comprehend the voids which our telescopes find in innumerable directions, would be by supposing the distance of the invisible background so immense that no ray from it has yet been able to reach us at all.

If light had not yet had time to reach us from some distant parts of the Universe that meant that the Universe could not be infinitely old. The solution was staring us in the face. I have read that Edgar Allan Poe was very excited about this profound insight and even notified some newspapers of his discovery. Even today his idea has profound implications. It means that in order for the night sky to be dark we must be causally disconnected from much of the Universe if the Universe is infinitely large and infinitely old. Currently, we have two models that provide for that - Andrei Linde's Eternal Chaotic Inflation (1986) model and Lee Smolin's black hole model presented in his The Life of the Cosmos (1997). In Eternal Chaotic Inflation the Multiverse is infinite in size and infinite in age, but we are causally disconnected from nearly all of it because nearly all of the Multiverse is inflating away from us faster than the speed of light, and so we cannot see it (see The Software Universe as an Implementation of the Mathematical Universe Hypothesis). In Lee Smolin's model of the Multiverse, whenever a black hole forms in one universe it causes a white hole to form in a new universe that is internally observed as the Big Bang of the new universe. A new baby universe formed from a black hole in its parent universe is causally disconnected from its parent by the event horizon of the parent black hole and therefore cannot be seen (see An Alternative Model of the Software Universe).

Another good example of the obvious hiding in plain sight is plate tectonics. A cursory look at a map of the South Atlantic or the Red Sea quickly reveals what is going on, but it took hundreds of years after the Earth was first mapped for plate tectonics to be deemed obvious and self-evident to all.

Figure 3 - Plate tectonics was also hiding in plain sight as nearly every school child in the 1950s noted that South America seemed to fit nicely into the notch of Africa, only to be told it was just a coincidence by their elders.

Figure 4 - The Red Sea even provided a vivid example of how South America and Africa could have split apart a long time ago.

Software Does Not Care About Marginal Tax Rates and Other Such Things
As I pointed out in The Economics of the Coming Software Singularity, we really do not know what will happen to mankind when we finally hit the Software Singularity and software becomes capable of self-replicating on its own. But before that happens, there will certainly be a great deal of sociological upheaval, and we should all begin to prepare for that upheaval now, in advance. It will be further complicated by the effects of the climate change that we have all collectively decided not to halt, and by the sixth major extinction of carbon-based life forms on the planet, currently induced by the activities of human beings. As the latest wave of mindless self-replicating information to appear upon the Earth, software really does not care about such things. Software does not care if the Earth has a daily high of 140 °F, with purple oceans choked with hydrogen-sulfide-producing bacteria under a dingy green sky, and an atmosphere tainted with toxic levels of hydrogen sulfide gas and an oxygen level of only 12%, like we had during the Permian-Triassic greenhouse mass extinction 252 million years ago. For more on the possible impending perils of software becoming the dominant form of self-replicating information on the planet, please see Susan Blackmore's TED presentation at:

Memes and "temes"

Note that I consider Susan Blackmore's temes to really be technological artifacts that contain software. After all, an iPhone without software is simply a flake tool with a very dull edge.

Figure 5 - Konrad Zuse with a reconstructed Z3 computer in 1961. He first unleashed software upon the Earth on his original Z3 in May of 1941.

Figure 6 - Now software has become ubiquitous and is found in nearly all things produced by mankind, and will increasingly grow in importance as the Internet of Things (IoT) unfolds.

So although both parties maintain that the 2016 election will be pivotal because it might determine the future of your tax rates, I would like to suggest that there are a few more pressing items that need to be dealt with first. See Is Self-Replicating Information Inherently Self-Destructive? and How to Use Your IT Skills to Save the World for details on what you can do to help.

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston