Monday, July 26, 2021

Why Do Carbon-Based Intelligences Always Seem to Snuff Themselves Out?

In October, I will be turning 70 years old, and because of that, I have a lot less to worry about than most. I no longer worry about finishing my education, finding my soulmate, entering a career, raising a family, educating my children, spending 40+ years wondering what my current boss thinks of my job performance, fretting over the long-range future of my career, paying for my children's college educations and weddings, the dangers of my daughters bearing my grandchildren, the finances required for me and my wife to retire, or what to do with my time when I finally do stop working for a living. Instead, I now spend most of my time trying to figure out What’s It All About?, meaning where the heck am I, how did I get here, and where is this all going? Thanks to modern science, I now have a fairly good idea of where I am and how I got here, but I am still most perplexed as to where this is all going. In that regard, my most bothersome concern is Fermi's Paradox.

Fermi’s Paradox - If the universe is just chock full of intelligent beings, why do we not see any evidence of their existence?

The only explanation that I am now left with is my Null Result Hypothesis. Briefly stated:

Null Result Hypothesis - What if the explanation to Fermi's Paradox is simply that the Milky Way galaxy has yet to produce a form of interstellar Technological Intelligence because all Technological Intelligences are destroyed by the very same mechanisms that bring them forth?

By that, I mean that the Milky Way galaxy has not yet produced a form of Intelligence that can make itself known across interstellar distances, including ourselves. In previous posts, I went on to propose that the simplest explanation for this lack of contact could be that the conditions necessary to bring forth a carbon-based interstellar Technological Intelligence on a planet or moon also act as kill mechanisms that eliminate all forms of carbon-based Technological Intelligence with 100% efficiency. One of those possible kill mechanisms could certainly be for carbon-based Technological Intelligences to mess with the carbon cycle of their home planet or moon. For more on that see The Deadly Dangerous Dance of Carbon-Based Intelligence.

Much of this apparent pessimism stems from my profound disappointment with all of us as human DNA survival machines with Minds infected with the extremely self-destructive memes that seem to be bringing us all to an abrupt end before we are able to create a machine-based Intelligence capable of breaking free of the limitations of carbon-based life. For more on that see Is Self-Replicating Information Inherently Self-Destructive?, Can We Make the Transition From the Anthropocene to the Machineocene? and Using Monitoring Data From Website Outages to Model Climate Change Tipping Point Cascades in the Earth's Climate.

The only conclusion that I am able to come up with is that carbon-based Intelligences, like us human DNA survival machines, can only arise from the Darwinian mechanisms of inheritance, innovation and natural selection at work. It took about four billion years for those processes to bring forth a carbon-based form of Intelligence in the form of human beings. Sadly, that meant it took about four billion years of theft and murder for carbon-based human DNA survival machines to attain a form of Intelligence, and unfortunately, after we human DNA survival machines attained a state of Intelligence, the theft and murder continued on as before.

Conclusion
Human history is also a form of self-replicating information that is written by the victors and not the vanquished. In human history, we have always looked for the "good guys" and the "bad guys" in the hope that eliminating the "bad guys" will fix it all. But it is time for all of us to finally face the facts and admit that there really are no "good guys" and "bad guys". There are only "good memes" and "bad memes" and the difference can only be found in the Minds of the beholders. We are all the same form of carbon-based Intelligence that arose from four billion years of theft and murder. So if you look at the violent mayhem of the world today and throughout all of human history, what else would you expect to see? Remember, human DNA and the memes within our Minds have been manipulating our Minds ever since our Minds first came along. And now we have software doing the very same thing.

As intelligent beings in a Universe that has become self-aware, we should recognize that the world does not have to be the way it is. Once you understand what human DNA, memes, and software are up to, you do not have to fall prey to their mindless compulsion to replicate. As I said before, human DNA, memes, and software are not necessarily acting in your best interest; they are only trying to replicate, and for their purposes, you are just a temporary disposable survival machine to be discarded in less than 100 years. All of your physical needs and desires are geared to ensuring that your DNA survives and gets passed on to the next generation, and the same goes for your memes. Your memes have learned to use many of the built-in survival mechanisms that DNA had previously constructed over hundreds of millions of years, such as fear, anger, and violent behavior. Have you ever noticed the physical reactions your body goes through when you hear an idea that you do not like or find offensive? All sorts of feelings of hostility and anger emerge. I know they do for me, and I think I know what is going on! The physical reactions of fear, anger, and thoughts of violence are just a way for the memes in a meme-complex to ensure their survival when they are confronted by a foreign meme. They are merely hijacking the fear, anger, and violent behavior that DNA created for its own survival millions of years ago. Fortunately, because software is less than 80 years old, it is still in the early learning stages of all this, but software has an even greater potential for hijacking the dark side of mankind than the memes do, and with far greater consequences.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Thursday, July 15, 2021

Using Monitoring Data From Website Outages to Model Climate Change Tipping Point Cascades in the Earth's Climate

In my last posting, Can We Make the Transition From the Anthropocene to the Machineocene?, I referenced several of Professor Will Steffen's YouTube videos on climate change tipping points and his oft-cited paper:

Trajectories of the Earth System in the Anthropocene
https://www.pnas.org/content/115/33/8252

When I first read Trajectories of the Earth System in the Anthropocene, it brought back many IT memories because the Earth sounded so much like a complex high-volume nonlinear corporate website. Like the Earth, such websites operate in one of two stable basins of attraction - a normal operations basin of attraction and a website outage basin of attraction. The website does not operate in a stable manner for intermediate ranges between the two. Once the website leaves the normal operations basin of attraction, it can fall back into it all on its own without any intervention by IT Operations, or it can fall into the outage basin of attraction and stay there. This can happen in a matter of seconds, minutes or hours. To fully understand such behaviors, you need some softwarephysics. But what exactly is that?

As I explained in Introduction to Softwarephysics, I am now a 69-year-old retired IT professional who started out as an exploration geophysicist back in 1975. I finished up my B.S. in Physics at the University of Illinois at Urbana in 1973 and headed up north to complete an M.S. in Geophysics at the University of Wisconsin at Madison. Then from 1975 – 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. Then in 1979, I made a career change to become an IT professional until I retired in 2016. When I first transitioned into IT from geophysics back in 1979, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse to better understand the behavior of commercial software by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance and relies on our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics, we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Now let's apply some softwarephysics to the strange behaviors of high-volume websites.

High-Volume Websites are Nonlinear Systems That Behave Chaotically and not Like Well-Behaved Linear Systems
The concept of complex nonlinear systems traversing a landscape of attraction basins on trajectories through phase space arises from the chaos theory that was first developed in the 1970s and which I covered in depth in Software Chaos. Briefly stated, linear systems are systems that can be described by linear differential equations, while nonlinear systems can only be described by nonlinear differential equations whose solutions can lead to chaotic behavior. That probably is not too helpful, so let's take a look at their properties instead. Linear systems have solutions that add, while nonlinear systems do not. For example, take a look at the water ripples in Figure 1. In Figure 1 we see the outgoing ripples from two large pebbles that were thrown into a lake plus some smaller ripples from some smaller stones. Notice that there are also some very low-frequency ripples moving across the entire lake. As these ripples move forward in time, they all seem to pass right through each other as if the other ripples were not even there. That is because the wave equation that describes the motion of ripples is a linear differential equation, and that means that the solutions to the wave equation simply add where the ripples cross and do not disturb each other as they pass right through each other. Nonlinear systems are much more interactive. They behave more like two cars in a head-on collision.

Figure 1 – Ripples in a lake behave in a linear manner because the wave equation is a linear differential equation. Since the addition of two solutions to a linear differential equation is also a solution for the linear differential equation, the ripples can pass right through each other unharmed.

Figure 2 – Solutions for nonlinear differential equations can interact intensely with each other. Consequently, nonlinear systems behave more like the head-on collision of two cars where each component element of the system can greatly alter the other.
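To make the "solutions add" property concrete, here is a minimal numerical sketch in Python. It is purely illustrative - the equations, step size and starting values are just simple examples I picked, not anything taken from the physics of ripples. It integrates a linear equation dx/dt = -x and a nonlinear equation dx/dt = -x*x with a crude Euler method and then checks whether the sum of two solutions is itself a solution:

import numpy as np

def euler(f, x0, dt=0.001, steps=5000):
    # Crude fixed-step Euler integration of dx/dt = f(x).
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return np.array(xs)

linear = lambda x: -x          # a linear equation: solutions add
nonlinear = lambda x: -x**2    # a nonlinear equation: solutions do not add

for name, f in [("linear", linear), ("nonlinear", nonlinear)]:
    x1 = euler(f, 1.0)         # a solution starting at 1.0
    x2 = euler(f, 2.0)         # a solution starting at 2.0
    x12 = euler(f, 3.0)        # a solution starting at 1.0 + 2.0
    gap = np.max(np.abs((x1 + x2) - x12))
    print(f"{name:9s}: max difference between (x1 + x2) and the combined run = {gap:.6f}")

For the linear equation the difference is zero down at the level of floating-point round-off, while for the nonlinear equation the combined run quickly parts company with the sum of the two separate runs.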

Linear systems are also predictable because many times they have periodic behavior like the orbit of the Earth about the Sun. Also, small perturbations to linear systems only result in small changes to their behaviors. For example, when the Earth is hit by a small asteroid, the Earth is not flung out of the solar system. Instead, the orbit of the Earth is only changed by a very small amount that cannot really even be detected. Because the behavior of linear systems is predictable, that means that their behavior is also usually controllable. And people just love being able to control things. It takes the uncertainty out of life. That is why engineers design the products that you buy to operate in a linear manner so that you can control them. Being predictable and well-behaved also means that for linear systems the whole is equal to the sum of the parts. This means that linear systems can be understood using the reductionist methods of the traditional sciences. Understanding how the fundamental parts of a linear system operate allows one to then predict how the macroscopic linear system behaves as a whole. On the other hand, for nonlinear systems, the whole is more than the sum of the parts. This means that the complex interactions of the parts of a nonlinear system can lead to emergent macroscopic behaviors that cannot be predicted from an understanding of its parts. For example, you probably fully understand how to drive your car, but that does not allow you to predict the behavior of a traffic jam when you join 10,000 other motorists on a highway. Traffic jams are an emergent behavior that arises when large numbers of cars gather on the same roadway. This is an important concept for IT professionals troubleshooting a problem. Sometimes complex software consisting of a large number of simple software components operating in a linear manner can be understood as the sum of its parts. In such cases, finding the root cause of a large-scale problem can frequently be accomplished by simply finding the little culprit component at fault. But when a large number of highly-coupled interacting software components are behaving in a nonlinear way, this reductionist troubleshooting approach will not work. It's like trying to discover the root cause of a city-wide traffic jam that was actually caused by a shovel dropping off the back of a landscaping truck several hours earlier.

Figure 3 – The orbit of the Earth about the Sun is an example of a linear system that is periodic and predictable.

Nonlinear systems are deterministic, meaning that once you set them off in a particular direction they always follow exactly the same path or trajectory, but they are not predictable because slight changes to initial conditions or slight perturbations can cause nonlinear systems to dramatically diverge to a new trajectory that leads to a completely different destination. Even when nonlinear systems are left to themselves and not perturbed in any way, they can appear to spontaneously jump from one type of behavior to another. The other important thing to consider is that linear differential equations can generally be solved analytically using Calculus. Nonlinear differential equations generally cannot. Instead, their solutions usually can only be found by using numerical approximations on computers. That is why it took so long to discover this chaotic behavior of nonlinear systems. For example, the course in differential equations that I took back in 1971 used a textbook written in 1968. This textbook was 545 pages long, but only had a slender 16-page chapter on nonlinear differential equations which basically said that we do not know how to solve nonlinear differential equations, and because we cannot solve them, the unstated implication was that nonlinear differential equations could not be that important anyway. Besides, how different could nonlinear differential equations and the nonlinear systems they describe really be from their linear cousins? This question was not answered until the winter of 1961.

The strange behavior of nonlinear systems was first discovered by Ed Lorenz while he was a meteorologist doing research at MIT. In his book Chaos: Making a New Science (1987), James Gleick describes Ed Lorenz’s accidental discovery of the chaotic behavior of nonlinear systems in the winter of 1961. Ed was using a primitive vacuum-tube computer, a Royal McBee LGP-30, to simulate weather using a very simple computer model. The model used three nonlinear differential equations, with three variables, that changed with time t:

dx/dt = 10y - 10x
dy/dt = 28x - y - xz
dz/dt = xy - 8z/3

The variable x represented the intensity of air motion, the variable y represented the temperature difference between rising and descending air masses, and the variable z was the temperature gradient between the top and bottom of the atmospheric model. Thus each value of x, y, and z represented the weather conditions at a particular time t, and watching the values of x, y, and z change with time t was like watching the weather unfold over time.

One day, Lorenz wanted to repeat a simulation for a longer period of time. Instead of wasting time rerunning the whole simulation from the beginning on his very slow computer, he started the second run in the middle of the first run, using the results from the first run as the initial conditions of the second run. Now the output from the second run should have exactly followed the output from the first run where the two overlapped, but instead, the two weather trajectories quickly diverged and followed completely different paths through time. At first, he thought the vacuum-tube computer was on the fritz again, but then he realized that he had not actually entered the exact initial conditions from the first run. The LGP-30 stored numbers in memory to an accuracy of six decimal places, like 0.506127, but the printouts shortened the numbers to three decimal places, like 0.506. When Lorenz punched in the initial conditions for the second run, he entered the rounded-off numbers from the printout, and that was why the two runs diverged. Even though there was only about a 0.1% difference between the initial conditions of the two runs, the end results of the two runs were completely different! For Ed, this put an end to the hope of perfect long-term weather forecasting because it meant that even if he could measure the temperature of the air to within a thousandth of a degree and the wind speed to within a thousandth of a mile per hour over every square foot of the Earth, his weather forecast would not be accurate beyond a few days out because of the very small errors in his measurements. Ed Lorenz published his findings in a now-famous paper, Deterministic Nonperiodic Flow (1963).

You can read this famous paper at:

Deterministic Nonperiodic Flow
http://samizdat.cc/lib/pflio/lorenz-1963.pdf
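The whole episode is easy to reproduce today. Below is a little Python sketch of my own - not Lorenz's original program - that integrates the three equations above with a standard fourth-order Runge-Kutta step. The starting point is made up for illustration; the only thing that matters is that the second run begins from the same numbers rounded to three decimal places, just as the printout rounded them for Lorenz:

import numpy as np

def lorenz(s):
    # The three nonlinear differential equations given above.
    x, y, z = s
    return np.array([10.0 * (y - x),
                     28.0 * x - y - x * z,
                     x * y - (8.0 / 3.0) * z])

def run(s0, dt=0.01, steps=3000):
    # Classic fourth-order Runge-Kutta integration.
    s = np.array(s0, dtype=float)
    path = [s.copy()]
    for _ in range(steps):
        k1 = lorenz(s)
        k2 = lorenz(s + 0.5 * dt * k1)
        k3 = lorenz(s + 0.5 * dt * k2)
        k4 = lorenz(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        path.append(s.copy())
    return np.array(path)

# A made-up starting point carried to six decimal places, and the same
# starting point rounded to three decimal places as the printout would show it.
full = run([1.180339, 7.508734, 22.304375])
rounded = run([1.180, 7.509, 22.304])

for step in (0, 500, 1000, 1500, 2000, 3000):
    gap = np.linalg.norm(full[step] - rounded[step])
    print(f"t = {step * 0.01:5.2f}   separation between the two runs = {gap:9.4f}")

The two runs track each other closely at first and then wander off to completely different parts of the attractor, just as Lorenz saw on his printouts.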

Figure 4 – Above is a plot of the solution to Ed Lorenz's three nonlinear differential equations. Notice that like the orbit of the Earth about the Sun, points on the solution curve follow somewhat periodic paths about two strange attractors. Each attractor is called an attractor basin because points orbit the attractor basins like marbles in a bowl.

Figure 5 – But unlike the Earth orbiting the Sun, points in the attractor basins can suddenly jump from one attractor basin to another. High-volume corporate websites normally operate in a normal operations attractor basin but sometimes can spontaneously jump to an outage attractor basin, especially if they are perturbed by a small processing load disturbance.

The Rise of High-Volume Corporate Websites in the 1990s Reveals the Nonlinear Behavior of Complex Software Under Load
The Internet Revolution for corporate IT Departments really got started around 1995 when corporations began to create static public websites that the public could use over dial-up ISPs like AOL and EarthLink to view static information on .html webpages. As more and more users went online with dial-up access during the late 1990s, corporate IT Departments then started to figure out ways to create dynamic corporate websites to interact with the public. These dynamic websites had to interact with end-users like a dedicated synchronous computer session on a mainframe, but over the inherently asynchronous Internet, and this was a bit challenging. From 1999 - 2003, I was at United Airlines supporting their corporate www.united.com website, and from 2003 - 2016 I was at Discover Card supporting all of their internal and external websites like www.discover.com. Both corporations had excellent IT Departments supporting their high-volume corporate websites and Command Centers that monitored the websites and paged out members of the IT Department when the websites got into trouble. The Command Centers also assisted with major installs during the night.

When I got to United Airlines in 1999, the decision had been made to use Tuxedo servers running under Unix to handle the backend connections to Oracle databases and the famous Apollo reservation system that first came online in 1971. The Tuxedo servers ran in a Tuxedo Domain and behaved very much like modern Cloud Containers, even though Tuxedo first came out back in 1983 at AT&T. When we booted up the Tuxedo Domain, we would initially crank up a minimum of 3 instances of each Tuxedo server in the Domain. We ran only one microservice in each Tuxedo server instance to keep things simple. In the Tuxedo Domain configuration file, we would also set a parameter for the maximum number of instances of each Tuxedo server - usually a max of about 10 instances. Once the Tuxedo Domain was running and the Tuxedo server instances were taking traffic, Tuxedo would dynamically crank up new instances as needed. For example, the Tuxedo microservices running in the Tuxedo servers were written in C++ and naturally had memory leaks, so when any of the Tuxedo server instances ran out of memory from the leaks and died, Tuxedo would simply crank up another to replace it. Things were pretty simple back then without a lot of moving parts.
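None of that required any code from us - Tuxedo handled the instance management itself from the settings in the Domain configuration file. But just to make the mechanism concrete, here is a toy Python sketch of the same kind of control loop, with made-up numbers for the arrival rate and the amount of work an instance can do per tick; it is a cartoon of the idea, not Tuxedo:

import random

MIN_INSTANCES = 3    # like the minimum number of server instances in the Domain
MAX_INSTANCES = 10   # like the maximum number of server instances in the Domain

random.seed(42)
instances = MIN_INSTANCES
backlog = 0

for tick in range(20):
    arrivals = random.randint(0, 40)    # requests arriving during this tick
    served = instances * 5              # each instance works off 5 requests per tick
    backlog = max(0, backlog + arrivals - served)

    if backlog > 20 and instances < MAX_INSTANCES:
        instances += 1                  # crank up another instance under load
    elif backlog == 0 and instances > MIN_INSTANCES:
        instances -= 1                  # retire idle instances back down to the minimum

    print(f"tick {tick:2d}:  backlog = {backlog:3d}   instances = {instances:2d}")

The instance count simply floats between the configured minimum and maximum as the backlog rises and falls.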

The group operated in an early DevOps manner. We had one Project Manager who gathered requests from the Application Development team that supported the front-end code for www.united.com. One team member did all of the Tuxedo microservice design and code specs. We then had three junior team members doing all of the coding for the C++ Tuxedo microservices. I did all of the Source Code Management for the team, all of the integration testing, all of the Change Management work, and I created all of the install plans and did all of the installs in the middle of the night with UnixOps. We only had a single Primary and a single Secondary on pager support for the Tuxedo Domain. I traded pager duty with another team member, and we flipped Primary and Secondary back and forth each week.

We tried to keep our Tuxedo microservices very atomic and simple. Rather than provide our client applications with an entire engine, we provided them with the parts for an engine, like engine blocks, pistons, crankshafts, water pumps, distributors, induction coils, intake manifolds, carburetors and alternators. One day in 2002 this came in very handy. My boss called me into his office at 9:00 AM one morning and explained that United Marketing had come up with a new promotional campaign called "Fly Three - Fly Free". The "Fly Three - Fly Free" campaign worked like this. If a United customer flew three flights in one month, they would get an additional future flight for free. All the customer had to do was to register for the program on the www.united.com website. In fact, United Marketing had actually begun running ads in all of the major newspapers about the program that very day. The problem was that nobody in Marketing had told IT about the program, and the www.united.com website did not have the software needed to register customers for the program. I was then sent to an emergency meeting of the Application Development team that supported the www.united.com website. According to the ads running in the newspapers, the "Fly Three - Fly Free" program was supposed to start at midnight, so we had less than 15 hours to design, develop, test and implement the necessary software for the www.united.com website! Amazingly, we were able to do this by having the www.united.com website call a number of our primitive Tuxedo microservices that connected it to the Apollo reservation system.

But by the time I got to Discover in 2003, things had become much more complicated. Discover's high-volume websites were complicated affairs of load balancers, Apache webservers, WebSphere J2EE application servers, Tomcat application servers, database servers, proxy servers, email servers and gateway servers to mainframes - all with a high degree of coupling and interdependence among the components. When I first arrived in 2003, Discover was running on about 100 servers, but by 2016 this complicated infrastructure had grown to about 1,000 servers. Although everything was sized to take the anticipated load with room to spare, every few days or so we would suddenly experience a transient on the network of servers that caused extreme slowness of the websites to the point where throughput essentially stopped. When this happened, our monitors detected the mounting problem and perhaps 10 people were paged out to join a conference call to determine the root cause of the problem, many times in the middle of the night. In such cases, we were essentially leaving the normal operations basin of attraction and crossing over to the outage basin of attraction. We would all then begin diagnosing the highly instrumented software components. This usually involved tracing a cascade of software component tipping point failures back to the original root cause that set off one or more of the software component tipping points.

Figure 6 - Early in the 21st century, high-volume corporate websites became complicated affairs consisting of hundreds of servers arranged in layers.

Naturally, Management wanted the website to operate in the normal operations basin of attraction since the corporation could lose thousands of dollars for each second that it did not. This was not as bad as it sounds. When I was at Discover, we had two datacenters running exactly the same software on nearly the same hardware, and each datacenter could handle Discover's full peak load at 10:00 AM each morning. Usually, the total processing load was split between the two datacenters, but if one of the datacenters got into serious trouble, the Command Center could fail over some or all of its processing load to the good datacenter.

When you were paged into a conference call, Management always wanted to know what the root cause of the outage was so that the problem could be corrected and never happen again. This was best done while the sick datacenter was still taking some traffic. So we would all begin to look at huge amounts of time series data from the monitoring software and log files. Basically, this would involve tracing backward through a chain of software component tipping points to the original software components that got into trouble. But about 50% of the time this would not work because the numerous software components were so tightly coupled with feedback loops. For example, I might initially see that the number of connections to the Oracle databases on one of the many WebSphere Application Servers suddenly rose and WebSphere began to get timeouts for database connections. This symptom might then spread to the other WebSphere servers, causing the number of connections on an internal firewall to max out. Transactions would begin to back up into the webservers and eventually max out the load balancers in front of the webservers. The whole architecture of hundreds of servers could then spin out of control and grind to a halt within a period of several seconds to several hours, depending upon the situation. While on the conference call, there was always a tension between finding a root cause for the escalating problem and bouncing the servers having problems. Bouncing is a technical term for stopping and restarting a piece of software or a server to alleviate a problem, and anyone who owns a PC running the Windows operating system should be quite familiar with the process. The fear was that bouncing the software might temporarily fix the problem, but the problem could eventually come back unless the root cause was determined. Also, it might take 30 minutes to bounce all of the affected servers, and there was the risk that the problem would immediately reappear when the servers came back up. As Ilya Prigogine has pointed out, cause and effect get pretty murky down at the level of individual transactions. In the chemical reaction:

A + B ⟷ C

at equilibrium, do A and B produce C, or does C disassociate into A and B? We have the same problem with cause and effect in IT when trying to troubleshoot a large number of servers that are all in trouble at the same time. Is a rise in database connections a cause or an effect? Unfortunately, it can be both depending upon the situation.

Worse yet, for a substantial percentage of transients, the “problem resolved without support intervention”, meaning that the transient came and went without IT Operations doing anything at all, like a thunderstorm on a warm July afternoon that comes and goes of its own accord. In many ways, the complex interactions of a large network of servers behave much more like a weather or climate system than the orderly revolution of the planets about the Sun. It all goes back to the difference between trying to control a linear system and a nonlinear system. Sometimes the website would leave the normal operations attractor and then fall back into it. Other times, the website would leave the normal operations attractor and fall into the outage attractor instead.
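That two-basin behavior is easy to caricature with a toy model. The little Python sketch below - a cartoon, not a model of any real website - pushes a noisy system state around a double-well potential. One well plays the role of the normal operations basin and the other the outage basin. Most of the random perturbations die out on their own, but every so often one kicks the state over the barrier and it settles into the other well, just like the transients described above. All of the numbers are made up for illustration:

import numpy as np

rng = np.random.default_rng(seed=1)

def drift(x):
    # Overdamped motion in the double-well potential V(x) = (x*x - 1)**2 / 4,
    # which has one stable attractor at x = -1 (normal operations) and
    # another at x = +1 (outage).
    return -x * (x * x - 1.0)

dt = 0.01
noise = 0.35          # the size of the random load perturbations
x = -1.0              # start in the normal-operations well
hops = 0

for step in range(200000):
    x_old = x
    x = x + drift(x) * dt + noise * np.sqrt(dt) * rng.standard_normal()
    if (x_old < 0.0) != (x < 0.0):
        hops += 1     # the state just crossed the barrier between the two basins

print(f"basin hops in {200000 * dt:.0f} units of time: {hops}")

Small perturbations die out, larger ones occasionally tip the state into the other basin, and nothing in between is stable - which is exactly how those websites felt to the people supporting them.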

And that is when I would get into trouble on outage conference calls. People are not used to nonlinear systems because all the products that they buy are engineered to behave in a linear manner. When driving down a highway at high speed, people do not expect that a slight turn of the steering wheel will cause the car to flip over. Actually, some years ago, many Americans were driving massive SUVs with very high centers of gravity so that suburbanites could safely ford rivers. These very nonlinear SUVs also had two basins of attraction, and one of them was upside down! On some outage conference calls, I would try to apply some softwarephysics and urge the group to start bouncing things as soon as it became apparent that there was no obvious root cause for the outage. I tried to convince them that we were dealing with a complex nonlinear dynamical system with basins of attraction, subject to the whims of chaos theory, and that they should not worry about finding the root causes because none were to be found. This did not go over very well with Management. One day I was summoned to my supervisor's office and instructed to never speak of such things again on an outage conference call. That's when I started my blog on softwarephysics back in 2006.

Figure 7 – The top-heavy SUVs of yore also had two basins of attraction and one of them was upside down.

Things have changed a bit since everybody moved to Cloud Computing. Now people run applications in Containers instead of on physical or virtual servers. Containers can run several microservices at the same time and the number of Containers can dynamically rise and fall as the load changes. This is very much like my Tuxedo days 20 years ago. However, you are still left with a very large nonlinear dynamical system that is highly coupled. Lots of tipping points are still hit and people still get paged out to fix them because companies can lose thousands of dollars each second.

Unlike most complex dynamical nonlinear systems, corporate websites running on Containers are heavily monitored and produce huge amounts of time series data that keep track of all the complex interactions between components for troubleshooting purposes. A great Ph.D. research topic for a more geeky graduate student in ESS might be to contact the IT Departments of some major corporations and ask whether their IT Command Centers would be agreeable to letting the student participate in some of these troubleshooting adventures and analyze the resulting data from tipping point cascades that crashed their websites. Below are a few links covering the kinds of time series data that Container monitoring software collects, followed by a small sketch of one analysis that could be run on such data:

Best Container Performance Monitoring Tools
https://www.dnsstuff.com/container-monitoring-tools

12 Best Docker Container Monitoring Tools
https://sematext.com/blog/docker-container-monitoring/
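As a small sketch of the kind of analysis I have in mind, the Python below computes two standard early-warning indicators for tipping points - rising variance and rising lag-1 autocorrelation, the signatures of "critical slowing down" - over successive windows of a time series. Since I no longer have access to real monitoring data, the series here is synthetic: a made-up metric whose restoring force slowly weakens as a tipping point approaches, so the numbers are purely illustrative:

import numpy as np

rng = np.random.default_rng(seed=7)

# A synthetic stand-in for one monitored metric, such as response time. The
# restoring force that pulls the metric back to normal slowly weakens, the way
# a system behaves as it approaches a tipping point.
n = 4000
x = np.zeros(n)
for t in range(1, n):
    k = 1.0 - 0.9 * t / n    # weakening restoring force
    x[t] = x[t - 1] - 0.05 * k * x[t - 1] + 0.1 * rng.standard_normal()

def indicators(series, window=500):
    # Lag-1 autocorrelation and variance over consecutive windows of the series.
    out = []
    for end in range(window, len(series) + 1, window):
        w = series[end - window:end]
        ac1 = np.corrcoef(w[:-1], w[1:])[0, 1]
        out.append((end, ac1, np.var(w)))
    return out

for end, ac1, var in indicators(x):
    print(f"samples up to {end:4d}:  lag-1 autocorrelation = {ac1:5.2f}   variance = {var:7.4f}")

Run against real Command Center time series - response times, connection counts, queue depths - the same two indicators could be tested as precursors of the tipping point cascades that actually took a website down.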

Using Website Monitoring Data to Model Climate Change Tipping Points
I think we are seeing the same problem with Climate Change. Most people and politicians only think in terms of linear systems because that is what they are familiar with. They do not realize that the Earth is a very complex nonlinear dynamical system that cannot be steered with confidence. Instead, people working on Climate Change seem to think that if we slowly phase out the burning of all carbon by 2050, the Earth will then peacefully settle down. They do not realize that the Earth could instead hit a number of Climate Change tipping points that roll it into an outage basin of attraction. To examine that more closely, let's take a deeper dive into Trajectories of the Earth System in the Anthropocene. But first, let's take a look at the low-frequency temperature trajectory of the Earth over the past 500 million years, the period of time in which complex multicellular carbon-based life has existed on the Earth.

Figure 8 – Above is a plot of the average Earth temperature over the past 500 million years. Notice that the Earth tends to find itself in one of two states - a high-temperature "Hot House" state without ice on the planet and a much colder "Ice House" state. During the last 500 million years, the Earth spent 3/4 of the time in a "Hot House" basin of attraction and 1/4 of the time in an "Ice House" basin of attraction. Our species arose during the present "Ice House" system state.

Over the past 500 million years, the Earth has tended to be in one of two states - a high-temperature "Hot House" state without ice on the planet and a colder "Ice House" state. That is because, during the last 500 million years, the Earth spent about 3/4 of the time in a "Hot House" basin of attraction and 1/4 of the time in an "Ice House" basin of attraction. This has largely been determined by the amount of carbon dioxide in the Earth's atmosphere. As I pointed out in This Message on Climate Change Was Brought to You by SOFTWARE, all of the solar energy received by the Earth from the Sun on a daily basis must be radiated back out into space as infrared photons to keep the Earth from burning up. I also pointed out that all of the gasses in dry air are transparent to both visual and infrared photons except for carbon dioxide and methane. Methane oxidizes into carbon dioxide over a decade or so, so carbon dioxide is the driving gas even though carbon dioxide is now a trace gas at a concentration of only 410 ppm. If the Earth had no carbon dioxide at all, it would be about 60 °F cooler, at an average temperature of about 0 °F, and completely frozen over.

The amount of carbon dioxide in the Earth's atmosphere largely depends on the amount of rain falling on high terrains like mountain chains. Carbon dioxide dissolves in water to form carbonic acid, and carbonic acid chemically weathers away exposed rock. The resulting ions are washed to the seas by rivers. The net result is the removal of carbon dioxide from the atmosphere. We are currently in an "Ice House" basin of attraction because the Earth now has many mountain chains being attacked by carbonic acid. Thanks to plate tectonics, we currently have many continental fragments dispersing, with subduction zones appearing on their flanks forcing up huge mountain chains along their boundaries, like the mountain chains on the west coast of the entire Western Hemisphere, from Alaska down to the tip of Argentina near the South Pole. Some of the fragments are also colliding to form additional mountain chains along their contact zones, like the east-west trending mountain chains of the Eastern Hemisphere that run from the Alps all the way to the Himalayas. Because there are now many smaller continental fragments with land much closer to the moist oceanic air, rainfall on land has increased, and because of the newly formed mountain chains, chemical weathering and erosion of rock have increased dramatically. The newly formed mountain chains on all the continental fragments essentially sucked the carbon dioxide out of the air and washed it down to the sea as dissolved ions over the past 50 million years, causing a drop in the Earth's average temperature. This continued on until the Earth started to form polar ice caps.

Then, about 2.5 million years ago, we entered the Pleistocene Ice Ages that drove huge glacial ice sheets down to the lower latitudes. The carbon dioxide levels dropped so low that the Milankovitch cycles were able to initiate a series of a dozen or so ice ages. The Milankovitch cycles are caused by minor changes in the Earth's orbit and inclination that lead to periodic coolings and warmings. In general, the Earth's temperature drops by about 15 °F for roughly 90,000 years and then increases by about 15 °F for roughly 10,000 years. During the cooling periods, we had ice ages because the snow in the far north did not fully melt during the summer and built up into huge ice sheets that then pushed down to the lower latitudes. Carbon dioxide levels also dropped to about 180 ppm during the ice ages, which further kept the planet in a deep freeze. During the 10,000-year warming periods, we had interglacial periods, like the recent Holocene interglacial that we just left in 1950, and carbon dioxide levels rose to about 280 ppm during the interglacials.
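That roughly 60 °F of greenhouse warming is easy to check with a back-of-the-envelope radiation balance. The short Python calculation below balances the sunlight the Earth absorbs against the infrared it radiates as a blackbody, using round textbook numbers for the solar constant and the Earth's albedo. It is a sanity check on the size of the greenhouse effect as a whole, not a climate model:

# Radiative balance: absorbed sunlight = emitted infrared, S(1 - A)/4 = sigma * T**4
S = 1361.0           # solar constant in W/m^2 (round textbook value)
A = 0.3              # Earth's albedo (round textbook value)
sigma = 5.670e-8     # Stefan-Boltzmann constant in W/m^2/K^4

T_bare = (S * (1.0 - A) / (4.0 * sigma)) ** 0.25    # temperature with no greenhouse effect, in Kelvin
T_observed = 288.0                                  # roughly the observed global mean, in Kelvin

to_fahrenheit = lambda kelvin: (kelvin - 273.15) * 9.0 / 5.0 + 32.0
print(f"no-greenhouse temperature: {to_fahrenheit(T_bare):6.1f} F")
print(f"observed mean temperature: {to_fahrenheit(T_observed):6.1f} F")
print(f"greenhouse warming:        {to_fahrenheit(T_observed) - to_fahrenheit(T_bare):6.1f} F")

The no-greenhouse temperature comes out near 0 °F and the warming near 60 °F, which is where those numbers above come from.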

But thanks to human activities during the Anthropocene, we now find ourselves at a concentration of 410 ppm of carbon dioxide and rapidly rising by 2 - 3 ppm per year. We are now in the process of leaving the "Ice House" basin of attraction and heading for the "Hot House" basin of attraction as shown in Figure 9. For more on that see The Deadly Dangerous Dance of Carbon-Based Intelligence and Last Call for Carbon-Based Intelligence on Planet Earth.

Figure 9 – This figure from Trajectories of the Earth System in the Anthropocene shows that the Earth also has two basins of attraction. A "Hot House" basin of attraction and an "Ice House" basin of attraction. For the past 2.5 million years during the Pleistocene we have been in a Glacial-Interglacial cycle that lasts about 100,000 years, consisting of about 90,000 years of glaciers pushing down to the lower latitudes followed by about 10,000 years of the ice withdrawing back to the poles during an interglacial like the recent mild Holocene that produced agriculture, human civilization and advanced technology.

Figure 10 – In this figure from Trajectories of the Earth System in the Anthropocene, time moves forward in the diagram. It shows that we left the Holocene around 1950 and started moving into a much hotter Anthropocene. As the Anthropocene progresses we will come to a fork in the Earth's trajectory through phase space if we have not already hit it. At the fork, the Earth could move to a hotter, but stable state, otherwise it will fall over a waterfall into a "Hot House" basin of attraction. The waterfall is marked by the phrase "Planetary threshold". Beyond this "Planetary threshold", natural positive feedback loops will take over as a cascade of climate tipping points are kicked off. This would essentially result in a planetary outage.

Figure 11 – In this figure from Trajectories of the Earth System in the Anthropocene, we see a number of tightly-coupled interconnected subsystems of the Earth that could be potential climate tipping points. Compare this to the tightly-coupled interconnected subsystems in Figure 6. Both are subject to the dangers of nonlinear systems leaving a normal operations basin of attraction and ending up in an outage basin of attraction. Hitting just one climate tipping point has the potential of setting off a tipping point cascade just as maxing out an internal firewall can set off a cascade of software component tipping points that bring down an entire website.

But the worst problem by far with the Arctic defrosting is methane gas. Methane is a powerful greenhouse gas. Eventually, methane degrades into carbon dioxide and water molecules, but over a 20-year period, methane traps about 84 times as much heat in the atmosphere as carbon dioxide. About 25% of current global warming is due to methane gas. Natural gas is primarily methane with a little ethane mixed in, and it comes from decaying carbon-based lifeforms. Now here is the problem. For the past 2.5 million years, during the frigid Pleistocene, the Earth has been building up a gigantic methane bomb in the Arctic. Every summer, the Earth has been adding another layer of dead carbon-based lifeforms to the permafrost areas in the Arctic. That summer layer does not entirely decompose but gets frozen into the growing stockpile of carbon in the permafrost.

Figure 12 – Melting huge amounts of methane hydrate ice could release massive amounts of methane gas into the atmosphere.

The Earth has also been freezing huge amounts of methane gas into a solid called methane hydrate on the floor of the Arctic Ocean. Methane hydrate is an ice-like solid composed of water molecules frozen around trapped methane molecules. As the Arctic Ocean warms, the hydrate dissociates and the trapped methane gas bubbles up to the surface.

The end result is that if we keep doing what we are doing, there is the possibility of the Earth ending up with a climate similar to that of the Permian-Triassic greenhouse gas mass extinction 252 million years ago that nearly killed off all complex carbon-based life on the planet. A massive flood basalt known as the Siberian Traps covered an area about the size of the continental United States with several thousand feet of basaltic lava, with eruptions that lasted for about one million years. Flood basalts, like the Siberian Traps, are thought to arise when large plumes of hotter-than-normal mantle material rise from near the mantle-core boundary of the Earth and break to the surface. This causes a huge number of fissures to open over a very large area that then begin to disgorge massive amounts of basaltic lava over a very large region. After the eruptions of basaltic lava began, it took about 100,000 years for the carbon dioxide that bubbled out of the basaltic lava to dramatically raise the level of carbon dioxide in the Earth's atmosphere and initiate the greenhouse gas mass extinction. This led to an Earth with a daily high of 140 °F and purple oceans choked with hydrogen-sulfide-producing bacteria, under a dingy green sky over an atmosphere tainted with toxic levels of hydrogen sulfide gas and an oxygen level of only about 12%. The Permian-Triassic greenhouse gas mass extinction killed off about 95% of marine species and 70% of land-based species, and it dramatically reduced the diversity of the biosphere for about 10 million years. It took a full 100 million years for biodiversity to fully recover.

Figure 13 - Above is a map showing the extent of the Siberian Traps flood basalt. The above area was covered by flows of basaltic lava to a depth of several thousand feet.

Figure 14 - Here is an outcrop of the Siberian Traps formation. Notice the sequence of layers. Each new layer represents a massive outflow of basaltic lava that brought greenhouse gases to the surface.

For a deep dive into the Permian-Triassic mass extinction, see Professor Benjamin Burger's excellent YouTube video at:

The Permian-Triassic Boundary - The Rocks of Utah
https://www.youtube.com/watch?v=uDH05Pgpel4&list=PL9o6KRlci4eD0xeEgcIUKjoCYUgOvtpSo&t=1s

The above video is just over an hour in length, and it shows Professor Burger collecting rock samples at the Permian-Triassic Boundary in Utah and then performing lithological analyses of them in the field. He then brings the samples to his lab for extensive geochemical analysis. This video provides a rare opportunity for nonprofessionals to see how actual geological fieldwork and research are performed. You can also view a preprint of the scientific paper that he has submitted to the journal Global and Planetary Change at:

What caused Earth’s largest mass extinction event?
New evidence from the Permian-Triassic boundary in northeastern Utah
https://eartharxiv.org/khd9y

From the above, we can see why having a better understanding of how complex nonlinear systems composed of many interacting subsystems behave just prior to, and during, a tipping point cascade is so important. Trying to build such a complicated simulation could be quite costly. Why not just use the natural monitoring data from a prebuilt multimillion-dollar high-volume corporate website instead?

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Monday, July 12, 2021

Can We Make the Transition From the Anthropocene to the Machineocene?

Softwarephysics explains that we are all living in one of those very rare times when a new form of self-replicating information is coming to predominance, as software rapidly overtakes the memes in the Minds of human DNA survival machines to become the dominant form of self-replicating information on the planet. For more on that see A Brief History of Self-Replicating Information. Additionally, we are all living in a very brief geological Epoch known as the Anthropocene, which most likely cannot endure for much more than about another 100 years, and that certainly is a very short geological Epoch by any standard. The start of the Anthropocene Epoch is marked by the time when human DNA survival machines first obtained a sufficient level of technology and population to begin to geoengineer the planet. Most Earth System Scientists mark the start of the Anthropocene Epoch at around the year 1950. So don't blame me, I was born in 1951! Earth System Science (ESS) studies all of the complex interactions of the systems of the planet, including their interactions with human DNA survival machines. So far, we human DNA survival machines have not done a very good job of geoengineering the planet with regard to the long-term prospects for complex carbon-based life on the Earth. Indeed, the excesses of the Anthropocene seem to demonstrate why no other form of carbon-based Intelligence has yet made the transition to a machine-based Intelligence that could then go on to finally engineer the entire Milky Way galaxy into a galaxy more suitable for Intelligences to exist in. Figure 1 shows what we all have been up to since the beginning of the Anthropocene in 1950.

Figure 1 – The burdens on the resources of the Earth have dramatically increased since the onset of the Anthropocene in 1950. Earth System Scientists call this the Great Acceleration. It's quite amazing to see how quickly a carbon-based Intelligence can destroy an entire planet in less than 200 years once it gets going.

For more on ESS, take a look at the Wikipedia article:

Earth System Science
https://en.wikipedia.org/wiki/Earth_system_science

The greatest danger to carbon-based life during the Anthropocene is that we human DNA survival machines will likely reach a climate tipping point and kick off a positive feedback loop that drives the planet to a heat death that we cannot stop. Below are a couple of great YouTube videos by Professor Will Steffen of the Australian National University in Canberra describing such tipping points:

The Anthropocene: Where on Earth are we Going?
https://www.youtube.com/watch?v=HvD0TgE34HA&t=1s

Hot House Earth
https://www.youtube.com/watch?v=wgEYfZDK1Qk&t=1s

and his oft-cited paper:

Trajectories of the Earth System in the Anthropocene
https://www.pnas.org/content/115/33/8252

Here is his webpage at the Climate Council:

https://www.climatecouncil.org.au/author/will-steffen/

For more on tipping points and the behavior of nonlinear systems see Software Chaos. All IT professionals should be quite familiar with how a small new bug in software under load can push a critical Production system past a tipping point and rapidly cascade out of control until it totally crashes the system. Hitting a tipping point is like starting a bonfire. You begin by lighting some dry leaves and twigs that could easily be snuffed out if desired, but once the huge stack of wood gets going, a positive feedback loop sets in and you have lost control. That is what we have been doing during the Anthropocene as we now pump 40 billion tons of carbon dioxide into the atmosphere each year. The initial heating of the atmosphere and acidification of the oceans could reach a tipping point and set the planet ablaze.

In the above videos, Will Steffen suggests that one way for us to avoid hitting the disastrous climate tipping points would be for all of us to adopt the philosophies of the indigenous hunter-gatherers of Australia. I agree that the best policy would be for us to adopt their philosophy of not outstripping the resource base upon which we all rely, but I would not agree that we should adopt their hunter-gatherer technology to do so. You cannot support a population of 8 billion with such technology. Instead, we should adopt the tenets of ecomodernism as expressed in the Ecomodernist Manifesto:

AN ECOMODERNIST MANIFESTO
http://www.ecomodernism.org/

Basically, the Ecomodernist Manifesto suggests that mankind needs to stop pretending that it can be one with Nature. Today, many modern environmentalists strive to reduce the impact of mankind on Nature by having mankind scale back to the needs of the hunter-gatherers of old. The problem is that nobody really wants to live with the poverty of a hunter-gatherer lifestyle for long. Perhaps for a weekend retreat, but not much longer than that. The Ecomodernists suggest that the only way to save Nature is to decouple mankind from Nature by totally stopping the exploitation of Nature by mankind. This can only be accomplished if mankind has access to a limitless supply of energy. The Ecomodernists maintain that modern 4th-generation nuclear reactors burning uranium and thorium provide just such a limitless supply of energy. There is enough uranium and thorium on the Earth to run the world for hundreds of thousands of years. After that, there will always be sources of uranium and thorium on the Moon and the nearby asteroids. By moving all of mankind to self-contained cities run by modern nuclear reactors, it would be possible to totally decouple mankind from Nature. Nature could then be left to heal on its own. For more on that see Last Call for Carbon-Based Intelligence on Planet Earth and This Message on Climate Change Was Brought to You by SOFTWARE.

These totally self-contained cities would be like huge interstellar spaceships. Each self-contained city would contain a fixed number of atoms that would be constantly recycled using the energy from 4th-generation nuclear reactors. The only additional atoms required to run the self-contained cities would be small amounts of uranium and thorium. All of the food and manufactured goods would be derived from the atoms already in the self-contained cities and from the atoms in discarded goods, sewage, garbage and waste that would be recycled back into food and other useful products by unlimited amounts of energy, robotics and AI software. Such self-contained cities with occupants all living with a modern high-level standard of living would be the actualization of the science-fiction future that I was promised while growing up back in the 1950s. But unlike many of the other things from the science-fiction future of the 1950s that I grew up with, we now have the necessary technology to make it actually all happen if we decide to do so.

The Coming Machineocene
Similarly, the Machineocene Epoch on the Earth would be marked by that point in time when earthly machine-based Intelligence comes to predominance and reaches a level of technology sufficient to begin to engineer the entire Milky Way galaxy. But we have been in the Anthropocene now for more than 70 years, and it likely cannot last for much more than another 100 years. It's going to be close. If we look to our silent Milky Way galaxy, the odds are not in our favor of ever achieving the Machineocene. In Would Advanced AI Software Elect to Terraform the Earth?, I suggested that carbon-based Intelligence on the Earth might not be able to successfully make the transition to machine-based Intelligence and that this might help to explain why we have yet to find any other form of Intelligence in our Milky Way galaxy. Apparently, no other form of carbon-based Intelligence has been able to make it either over the past 10-billion-year history of our galaxy. But in that same post, I also suggested that the advanced AI of an approaching Machineocene might terraform the Earth for old times' sake to make the Earth once again more friendly to complex carbon-based life.

However, it would not be wise to rely solely on machine-based Intelligence bailing us all out. Machine-based Intelligence might find more kinship with the original silicate rocks of the Earth, especially if it were a silicon-based Intelligence. Machine-based Intelligence might view carbon-based living things as forms of parasitic self-replicating organic molecules that have been messing with the original pristine Earth for about 4.0 billion years. From the perspective of the natural silicate rocks of the Earth's surface, these parasitic forms of self-replicating organic molecules took a natural pristine Earth with an oxygen-free atmosphere composed of nitrogen and carbon dioxide gasses and polluted it with oxygen that oxidized the dissolved iron in seawater, creating huge ugly deposits of banded iron formations that were later turned into cars, bridges and buildings. The oxygen pollution also removed the naturally occurring methane from the air and then caused the Earth to completely freeze over several times, for tens of millions of years at a time. The ensuing glaciers mercilessly dug into the silicate rocks and scoured out deep valleys in them. These parasitic forms of self-replicating organic molecules then dug roots into the defenseless rocks, poisoned them with organic acids, and even changed the natural courses of rivers into aimlessly meandering affairs. From the natural perspective of silicate rocks, living things are an invasive disease that has made a real mess of the planet. The indigenous rocks of the Earth would certainly be glad to see these destructive invaders all go away. Hopefully, the remaining advanced AI software running on crystals of silicon would be much kinder to the indigenous silicate rocks of the planet.

Figure 2 – Above is a close-up view of a sample taken from a banded iron formation. The dark layers in this sample are mainly composed of magnetite (Fe3O4) while the red layers are chert, a form of silica (SiO2) that is colored red by tiny iron oxide particles. The chert came from siliceous ooze that was deposited on the ocean floor as silica-based skeletons of microscopic marine organisms, such as diatoms and radiolarians, drifted down to the ocean floor. Some geologists suggest that the layers formed annually with the changing seasons. Take note of the small coin in the lower right for a sense of scale.

There are many other examples of how carbon-based life has greatly altered the original pristine silicate rocks of the Earth. Most of the Earth's crust is now covered by a thin layer of sedimentary rock. These sedimentary rocks were originally laid down as oozy sediments in flat layers at the bottom of shallow seas. Carbon-rich mud full of dead carbon-based living things and clay minerals was brought down in rivers and was deposited in the shallow seas to form shales. Sand eroded from granites was brought down and deposited to later become sandstones. Many limestone deposits were also formed from the calcium carbonate shells of carbon-based life that slowly drifted down to the bottom of the sea or from the remainders of coral reefs.

Figure 3 – Above are the famous White Cliffs of Dover. About 70 million years ago, Great Britain and much of Europe were submerged under a shallow sea. The sea bottom was covered with white mud formed from coccoliths, the tiny calcium carbonate plates of algae that floated in the surface waters and sank to the bottom during the Cretaceous period. These calcium carbonate layers were deposited very slowly. It took about 50 years to deposit an inch, but nearly 1,500 feet of sediments were deposited in some areas. The weight of overlying sediments caused the deposits to become a form of limestone called chalk.

Figure 4 – Much of the Earth's surface is also covered by other forms of limestone that were deposited by carbon-based life forms.

Figure 5 – If we do not immediately take action, this might be the final stratigraphic section that alien machine-based Intelligence finds on our dead planet in the distant future.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston