Thursday, July 15, 2021

Using Monitoring Data From Website Outages to Model Climate Change Tipping Point Cascades in the Earth's Climate

In my last posting, Can We Make the Transition From the Anthropocene to the Machineocene? I referenced several of Professor Will Steffen's YouTube videos on climate change tipping points and his oft-cited paper:

Trajectories of the Earth System in the Anthropocene
https://www.pnas.org/content/115/33/8252

When I first read Trajectories of the Earth System in the Anthropocene, it brought back many IT memories because the Earth sounded so much like a complex high-volume nonlinear corporate website. Like the Earth, such websites operate in one of two stable basins of attraction - a normal operations basin of attraction and a website outage basin of attraction. The website does not operate in a stable manner for intermediate ranges between the two. Once the website leaves the normal operations basin of attraction, it can fall back into the normal operations basin of attraction all on its own without any intervention by IT Operations, or it can fall into the outage basin of attraction and stay there. This can happen in a matter of seconds, minutes or hours. To fully understand such behaviors, you need some softwarephysics. But what exactly is that?

As I explained in Introduction to Softwarephysics, I am now a 69-year-old retired IT professional who started out as an exploration geophysicist back in 1975. I finished up my B.S. in Physics at the University of Illinois at Urbana in 1973 and headed up north to complete an M.S. in Geophysics at the University of Wisconsin at Madison. Then from 1975 – 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT and worked as an IT professional until I retired in 2016. When I first transitioned into IT from geophysics back in 1979, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse to better understand the behavior of commercial software by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance and relies on our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics, we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Now let's apply some softwarephysics to the strange behaviors of high-volume websites.

High-Volume Websites are Nonlinear Systems That Behave Chaotically and not Like Well-Behaved Linear Systems
The concept of complex nonlinear systems traversing a landscape of attraction basins on trajectories through phase space arises from the chaos theory that was first developed in the 1960s and 1970s and which I covered in depth in Software Chaos. Briefly stated, linear systems are systems that can be described by linear differential equations, while nonlinear systems can only be described by nonlinear differential equations whose solutions can lead to chaotic behavior. That probably is not too helpful, so let's take a look at their properties instead. Linear systems have solutions that add, while nonlinear systems do not. For example, take a look at the water ripples in Figure 1. In Figure 1 we see the outgoing ripples from two large pebbles that were thrown into a lake plus some smaller ripples from some smaller stones. Notice that there are also some very low-frequency ripples moving across the entire lake. As these ripples move forward in time, they all seem to pass right through each other as if the other ripples were not even there. That is because the wave equation that describes the motion of the ripples is a linear differential equation, which means that the solutions to the wave equation simply add where the ripples cross and do not disturb each other as they pass through. Nonlinear systems are much more interactive. They behave more like two cars in a head-on collision.

Figure 1 – Ripples in a lake behave in a linear manner because the wave equation is a linear differential equation. Since the addition of two solutions to a linear differential equation is also a solution for the linear differential equation, the ripples can pass right through each other unharmed.

Figure 2 – Solutions for nonlinear differential equations can interact intensely with each other. Consequently, nonlinear systems behave more like the head-on collision of two cars where each component element of the system can greatly alter the other.
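To make this superposition property concrete, here is a minimal Python sketch (my own illustration, not something from the original post) that launches two Gaussian pulses toward each other as d'Alembert solutions of the one-dimensional wave equation. The amplitudes add where the pulses cross, and each pulse then emerges unchanged, just like the ripples in Figure 1:

import numpy as np

# Two counter-propagating Gaussian pulses, each a d'Alembert solution of the
# linear wave equation u_tt = c^2 u_xx. Because the equation is linear, the
# sum of the two pulses is also a solution.
c = 1.0
x = np.linspace(-10.0, 10.0, 2001)

def pulse(s, center, width=0.5):
    return np.exp(-((s - center) / width) ** 2)

def u(x, t):
    # right-moving pulse launched at x = -5, left-moving pulse launched at x = +5
    return pulse(x - c * t, -5.0) + pulse(x + c * t, 5.0)

for t in (0.0, 5.0, 10.0):
    print(f"t = {t:4.1f}   maximum amplitude = {u(x, t).max():.3f}")

# Expected output (approximately):
#   t =  0.0   maximum amplitude = 1.000   (the pulses are far apart)
#   t =  5.0   maximum amplitude = 2.000   (the amplitudes add as they cross)
#   t = 10.0   maximum amplitude = 1.000   (each pulse emerges unharmed)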

Linear systems are also predictable because they often have periodic behavior like the orbit of the Earth about the Sun. Small perturbations to linear systems only result in small changes to their behaviors. For example, when the Earth is hit by a small asteroid, the Earth is not flung out of the solar system. Instead, the orbit of the Earth is only changed by a very small amount that can hardly even be detected. Because the behavior of linear systems is predictable, their behavior is also usually controllable. And people just love being able to control things. It takes the uncertainty out of life. That is why engineers design the products that you buy to operate in a linear manner so that you can control them. Being predictable and well-behaved also means that for linear systems the whole is equal to the sum of the parts. This means that linear systems can be understood using the reductionist methods of the traditional sciences. Understanding how the fundamental parts of a linear system operate allows one to predict how the macroscopic linear system behaves as a whole. On the other hand, for nonlinear systems, the whole is more than the sum of the parts. This means that the complex interactions of the parts of a nonlinear system can lead to emergent macroscopic behaviors that cannot be predicted from an understanding of its parts. For example, you probably fully understand how to drive your car, but that does not allow you to predict the behavior of a traffic jam when you join 10,000 other motorists on a highway. Traffic jams are an emergent behavior that arises when large numbers of cars gather on the same roadway.

This is an important concept for IT professionals troubleshooting a problem. Sometimes complex software consisting of a large number of simple software components operating in a linear manner can be understood as the sum of its parts. In such cases, finding the root cause of a large-scale problem can frequently be done by simply finding the little culprit component at fault. But when a large number of highly-coupled interacting software components are behaving in a nonlinear way, this reductionist troubleshooting approach will not work. It's like trying to discover the root cause of a city-wide traffic jam that was actually caused by a shovel dropping off the back of a landscaping truck several hours earlier.
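As an aside, the emergence of phantom traffic jams from simple driving rules can be seen in the well-known Nagel-Schreckenberg cellular automaton. Below is a minimal Python sketch of that model (the road length, car count and probabilities are my own illustrative parameters, not anything from this post). Each car just accelerates, keeps its distance and occasionally slows down at random, yet jams of stopped cars emerge once the road gets crowded:

import numpy as np

rng = np.random.default_rng(0)

# Nagel-Schreckenberg cellular automaton on a circular road.
ROAD, CARS, VMAX, P_SLOW, STEPS = 200, 60, 5, 0.3, 200

pos = np.sort(rng.choice(ROAD, CARS, replace=False))  # car positions on the road
vel = np.zeros(CARS, dtype=int)                       # car speeds in cells per step

for _ in range(STEPS):
    gaps = (np.roll(pos, -1) - pos - 1) % ROAD        # empty cells to the car ahead
    vel = np.minimum(vel + 1, VMAX)                   # 1. accelerate
    vel = np.minimum(vel, gaps)                       # 2. do not hit the car ahead
    vel = np.where(rng.random(CARS) < P_SLOW,
                   np.maximum(vel - 1, 0), vel)       # 3. random slowdown
    pos = (pos + vel) % ROAD                          # 4. move

print("cars stuck in a jam (speed 0) after", STEPS, "steps:", int((vel == 0).sum()))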

Figure 3 – The orbit of the Earth about the Sun is an example of a linear system that is periodic and predictable.

Nonlinear systems are deterministic, meaning that once you set them off in a particular direction they always follow exactly the same path or trajectory, but they are not predictable because slight changes to initial conditions or slight perturbations can cause them to dramatically diverge onto a new trajectory that leads to a completely different destination. Even when nonlinear systems are left to themselves and not perturbed in any way, they can appear to spontaneously jump from one type of behavior to another. The other important thing to consider is that linear differential equations can generally be solved analytically using calculus. Nonlinear differential equations usually cannot. Instead, the solutions to nonlinear differential equations can usually only be found by using numerical approximations on computers. That is why it took so long to discover the chaotic behavior of nonlinear systems. For example, the course in differential equations that I took back in 1971 used a textbook written in 1968. This textbook was 545 pages long, but it only had a slender 16-page chapter on nonlinear differential equations, which basically said that we do not know how to solve nonlinear differential equations. Because we could not solve them, the unstated implication was that nonlinear differential equations could not be that important anyway. Besides, how different could nonlinear differential equations and the nonlinear systems they described be compared to their linear cousins? This question was not answered until the winter of 1961.

The strange behavior of nonlinear systems was first discovered by Ed Lorenz while he was a meteorologist doing research at MIT. In his book Chaos - Making a New Science (1987), James Gleick describes Ed Lorenz’s accidental discovery of the chaotic behavior of nonlinear systems in the winter of 1961. Ed was using a primitive vacuum-tube computer, a Royal McBee LGP-30, to simulate weather, using a very simple computer model. The model used three nonlinear differential equations, in three variables that changed with time t:

dx/dt = 10y - 10x
dy/dt = 28x - y - xz
dz/dt = xy - 8z/3

The variable x represented the intensity of air motion, the variable y represented the temperature difference between rising and descending air masses, and the variable z represented the temperature gradient between the top and bottom of the atmospheric model. Thus each set of values of x, y, and z represented the weather conditions at a particular time t, and watching the values of x, y, and z change with time t was like watching the weather unfold over time.
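Since these three equations cannot be solved with calculus, they have to be integrated numerically. Here is a minimal Python sketch (using SciPy, my choice of tool, certainly not Lorenz's Royal McBee) that steps the model forward in time from an arbitrary starting state:

import numpy as np
from scipy.integrate import solve_ivp

# Ed Lorenz's three nonlinear differential equations.
def lorenz(t, state):
    x, y, z = state
    return [10.0 * y - 10.0 * x,        # dx/dt = 10y - 10x
            28.0 * x - y - x * z,       # dy/dt = 28x - y - xz
            x * y - 8.0 * z / 3.0]      # dz/dt = xy - 8z/3

# Start the "weather" at an arbitrary state and watch it unfold for 50 time units.
solution = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0],
                     dense_output=True, rtol=1e-9, atol=1e-12)
x, y, z = solution.sol(50.0)
print(f"weather at t = 50:  x = {x:.3f}  y = {y:.3f}  z = {z:.3f}")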

One day, Lorenz wanted to repeat a simulation for a longer period of time. Instead of wasting time rerunning the whole simulation from the beginning on his very slow computer, he started the second run in the middle of the first run, using the results from the first run for the initial conditions of the second run. Now the output from the second run should have exactly followed the output from the first run where the two overlapped, but instead, the two weather trajectories quickly diverged and followed completely different paths through time. At first, he thought the vacuum tube computer was on the fritz again, but then he realized that he had not actually entered the exact initial conditions from the first run. The LGP-30 stored numbers in memory to an accuracy of six decimal places, like 0.506127, but the line printer printouts shortened the numbers to three decimal places, like 0.506. When Lorenz punched in the initial conditions for the second run, he entered the rounded-off numbers from the printout, and that was why the two runs diverged. Even though there was only a 0.1% difference between the initial conditions of the two runs, the end result of the two runs was completely different! For Ed, this put an end to the hope of perfect long-term weather forecasting because it meant that even if he could measure the temperature of the air to within a thousandth of a degree and the wind speed to within a thousandth of a mile per hour over every square foot of the Earth, his weather forecast would not be accurate beyond a few days out because of the very small errors in his measurements. Ed Lorenz published his findings in a now-famous paper Deterministic Nonperiodic Flow (1963).

You can read this famous paper at:

Deterministic Nonperiodic Flow
http://samizdat.cc/lib/pflio/lorenz-1963.pdf
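Lorenz's accident is easy to reproduce. The Python sketch below (again my own illustration, with made-up starting values) integrates the same three equations twice, once from a full-precision starting state and once from that state rounded to three decimal places the way his printout was, and prints how far apart the two "forecasts" drift:

import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state):
    x, y, z = state
    return [10.0 * (y - x), 28.0 * x - y - x * z, x * y - 8.0 * z / 3.0]

start = np.array([1.234567, 2.345678, 3.456789])   # hypothetical full-precision state
rounded = np.round(start, 3)                       # what a three-decimal printout kept

t_eval = np.linspace(0.0, 40.0, 401)
run1 = solve_ivp(lorenz, (0.0, 40.0), start, t_eval=t_eval, rtol=1e-9, atol=1e-12)
run2 = solve_ivp(lorenz, (0.0, 40.0), rounded, t_eval=t_eval, rtol=1e-9, atol=1e-12)

separation = np.linalg.norm(run1.y - run2.y, axis=0)
for i in (0, 100, 200, 300, 400):
    print(f"t = {t_eval[i]:4.1f}   separation between the two runs = {separation[i]:8.3f}")

# The separation starts out well under 0.001 and grows until the two runs bear
# no resemblance to each other, even though the equations are fully deterministic.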

Figure 4 – Above is a plot of the solution to Ed Lorenz's three nonlinear differential equations. Notice that, like the orbit of the Earth about the Sun, points on the solution curve follow somewhat periodic paths about the two lobes of a strange attractor. Each lobe acts like an attractor basin because points orbit it like marbles circling around inside a bowl.

Figure 5 – But unlike the Earth orbiting the Sun, points in the attractor basins can suddenly jump from one attractor basin to another. High-volume corporate websites normally operate in a normal operations attractor basin but sometimes can spontaneously jump to an outage attractor basin, especially if they are perturbed by a small processing load disturbance.

The Rise of High-Volume Corporate Websites in the 1990s Reveals the Nonlinear Behavior of Complex Software Under Load
The Internet Revolution for corporate IT Departments really got started around 1995 when corporations began to create static public websites that could be used by the public over dial-up ISPs like AOL and EarthLink to view static information on .html webpages. As more and more users went online with dial-up access during the late 1990s, corporate IT Departments then started to figure out ways to create dynamic corporate websites to interact with the public. These dynamic websites had to interact with end-users like a dedicated synchronous computer session on a mainframe, but over the inherently asynchronous Internet, and that was a bit challenging. From 1999 - 2003, I was at United Airlines supporting their corporate www.united.com website, and from 2003 - 2016 I was at Discover Card supporting all of their internal and external websites like www.discover.com. Both corporations had excellent IT Departments supporting their high-volume corporate websites and Command Centers that monitored the websites and paged out members of the IT Department when the websites got into trouble. The Command Centers also assisted with major installs during the night.

When I got to United Airlines in 1999, the decision had been made to use Tuxedo servers running under Unix to do the backend connection to Oracle databases and the famous Apollo reservation system that first came online in 1971. The Tuxedo servers ran in a Tuxedo Domain and behaved very much like modern Cloud Containers, even though Tuxedo first came out back in 1983 at AT&T. When we booted up the Tuxedo Domain, we would initially crank up a minimum of 3 instances of each Tuxedo server in the Tuxedo Domain. We only ran one microservice in each Tuxedo server instance to keep things simple. In the Tuxedo Domain configuration file, we would also set a parameter for the maximum number of instances of each Tuxedo server - usually a maximum of about 10 instances. Once the Tuxedo Domain was running and the Tuxedo server instances were taking traffic, Tuxedo would dynamically crank up new instances as needed. For example, the Tuxedo microservices running in the Tuxedo servers were written in C++ and naturally had memory leaks, so when any of the Tuxedo server instances ran out of memory from the leaks and died, Tuxedo would simply crank up another to replace it. Things were pretty simple back then without a lot of moving parts.
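The behavior is easy to picture with a toy model. The Python sketch below is not Tuxedo code, just a hypothetical illustration of the pool dynamics described above: at least 3 instances of a server always run, additional instances are spawned under load up to a maximum of 10, and any instance that dies from a memory leak is simply replaced:

import random

MIN_INSTANCES, MAX_INSTANCES = 3, 10   # the configured minimum and maximum
random.seed(1)

for minute in range(1, 11):
    load = random.randint(1, 12)                       # concurrent requests this minute
    crashed = random.random() < 0.2                    # a leaky C++ instance dies this minute
    # The domain keeps the pool sized to the load, between the minimum and the
    # maximum, and immediately respawns anything that crashed.
    instances = min(MAX_INSTANCES, max(MIN_INSTANCES, load))
    note = "  (one instance crashed and was respawned)" if crashed else ""
    print(f"minute {minute:2d}:  load = {load:2d}   running instances = {instances:2d}{note}")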

The group operated in an early DevOps manner. We had one Project Manager who gathered requests from the Application Development team that supported the front-end code for www.united.com. One team member did all of the Tuxedo microservice design and code specs. We then had three junior team members doing all of the coding for the C++ Tuxedo microservices. I did all of the Source Code Management for the team, all of the integration testing, all of the Change Management work, and I created all of the install plans and did all of the installs in the middle of the night with UnixOps. We only had a single Primary and a single Secondary on pager support for the Tuxedo Domain. I traded pager duty with another team member, and we flipped Primary and Secondary back and forth each week.

We tried to keep our Tuxedo microservices very atomic and simple. Rather than provide our client applications with an entire engine, we provided them with the parts for an engine, like engine blocks, pistons, crankshafts, water pumps, distributors, induction coils, intake manifolds, carburetors and alternators. One day in 2002 this came in very handy. My boss called me into his office at 9:00 AM one morning and explained that United Marketing had come up with a new promotional campaign called "Fly Three - Fly Free". The "Fly Three - Fly Free" campaign worked like this. If a United customer flew three flights in one month, they would get an additional future flight for free. All the customer had to do was to register for the program on the www.united.com website. In fact, United Marketing had actually begun running ads in all of the major newspapers about the program that very day. The problem was that nobody in Marketing had told IT about the program and the www.united.com website did not have the software needed to register customers for the program. I was then sent to an emergency meeting of the Application Development team that supported the www.united.com website. According to the ads running in the newspapers, the "Fly Three - Fly Free" program was supposed to start at midnight, so we had less than 15 hours to design, develop, test and implement the necessary software for the www.united.com website! Amazingly, we were able to do this by having the www.united.com website call a number of our primitive Tuxedo microservices that interacted with the www.united.com website and the Apollo reservation system.

But by the time I got to Discover in 2003, things had become much more complicated. Discover's high-volume websites were complicated affairs of load balancers, Apache webservers, WebSphere J2EE application servers, Tomcat application servers, database servers, proxy servers, email servers and gateway servers to mainframes - all with a high degree of coupling and interdependence among the components. When I first arrived in 2003, Discover was running on about 100 servers, but by 2016 this complicated infrastructure had grown to about 1,000 servers. Although everything was sized to take the anticipated load with room to spare, every few days or so we would suddenly experience a transient on the network of servers that caused extreme slowness of the websites to the point where throughput essentially stopped. When this happened, our monitors detected the mounting problem and perhaps 10 people were paged out to join a conference call to determine the root cause of the problem. Many times this was in the middle of the night. In such cases, we were essentially leaving the normal operations basin of attraction and crossing over to the outage basin of attraction. We would all then begin diagnosing the highly instrumented software components. This usually involved tracing a cascade of software component tipping point failures back to the original root cause that set off one or more of the software component tipping points.
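The monitors themselves were conceptually simple. A minimal Python sketch of that kind of alarm (the metric, numbers and threshold below are all made up for illustration) just watches for a metric to wander several standard deviations away from its recent rolling baseline:

import numpy as np

rng = np.random.default_rng(7)

# Synthetic response-time data: 300 samples of normal operations followed by
# 60 samples in which a tipping point is hit and response times climb rapidly.
response_ms = np.concatenate([
    rng.normal(120.0, 10.0, 300),
    np.linspace(120.0, 900.0, 60) + rng.normal(0.0, 25.0, 60),
])

WINDOW, THRESHOLD = 60, 4.0   # rolling baseline length and alarm threshold in sigmas
for t in range(WINDOW, len(response_ms)):
    baseline = response_ms[t - WINDOW:t]
    z = (response_ms[t] - baseline.mean()) / baseline.std()
    if z > THRESHOLD:
        print(f"ALERT at sample {t}: response time {response_ms[t]:.0f} ms is "
              f"{z:.1f} sigma above the rolling baseline - time to page people out")
        break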

Figure 6 - Early in the 21st century, high-volume corporate websites became complicated affairs consisting of hundreds of servers arranged in layers.

Naturally, Management wanted the website to operate in the normal operations basin of attraction since the corporation could lose thousands of dollars for each second that it did not. This was not as bad as it sounds. When I was at Discover, we had two datacenters running exactly the same software on nearly the same hardware, and each datacenter could handle Discover's full peak load at 10:00 AM each morning. Usually, the total processing load was split between the two datacenters, but if one of the datacenters got into trouble, the Command Center could fail over some or all of the processing load to the good datacenter. When the Command Center detected a mounting problem, they would page out perhaps 10 people to figure out what the problem was.

When you were paged into a conference call, Management always wanted to know the root cause of the outage so that the problem could be corrected and never happen again. This was best done while the sick datacenter was still taking some traffic. So we would all begin to look at huge amounts of time series data from the monitoring software and log files. Basically, this involved tracing backward through a chain of software component tipping points to the original software components that got into trouble. But about 50% of the time this would not work because the numerous software components were so tightly coupled with feedback loops. For example, I might initially see that the number of connections to the Oracle databases on one of the many WebSphere Application Servers suddenly rose and that WebSphere began to get timeouts for database connections. This symptom might then spread to the other WebSphere servers, causing the number of connections on an internal firewall to max out. Transactions would begin to back up into the webservers and eventually max out the load balancers in front of the webservers. The whole architecture of hundreds of servers could then spin out of control and grind to a halt within a period of several seconds to several hours, depending upon the situation.

While on the conference call, there is a tension between finding a root cause for the escalating problem and bouncing the servers having problems. Bouncing is a technical term for stopping and restarting a piece of software or a server to alleviate a problem, and anyone who owns a PC running the Windows operating system should be quite familiar with the process. The fear is that bouncing software may temporarily fix the problem, but the problem may eventually come back unless the root cause is determined. Also, it might take 30 minutes to bounce all of the affected servers, and there is the risk that the problem will immediately reappear when the servers come back up. As Ilya Prigogine has pointed out, cause and effect get pretty murky down at the level of individual transactions. In the chemical reaction:

A + B ⟷ C

at equilibrium, do A and B produce C, or does C dissociate into A and B? We have the same problem with cause and effect in IT when trying to troubleshoot a large number of servers that are all in trouble at the same time. Is a rise in database connections a cause or an effect? Unfortunately, it can be both depending upon the situation.
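One crude trick for untangling cause and effect after the fact is to look for the time lag that best lines up two monitoring metrics. The Python sketch below uses entirely synthetic data with the answer planted 45 samples apart and finds the lag that maximizes the correlation between a database metric and a webserver metric. In a tightly coupled system with feedback loops this only suggests which symptom moved first - it cannot prove a root cause:

import numpy as np

rng = np.random.default_rng(3)
n, true_lag = 600, 45

# Synthetic monitoring data: the webserver queue depth follows the database
# connection count 45 samples later, plus a little measurement noise.
db_connections = np.cumsum(rng.normal(0.0, 1.0, n)) + 50.0
web_queue = np.roll(db_connections, true_lag) + rng.normal(0.0, 0.5, n)

def best_lag(leader, follower, max_lag=120):
    # Return the lag (in samples) at which follower best correlates with leader.
    lags = list(range(-max_lag, max_lag + 1))
    scores = []
    for k in lags:
        if k >= 0:
            a, b = leader[:n - k], follower[k:]
        else:
            a, b = leader[-k:], follower[:n + k]
        scores.append(np.corrcoef(a, b)[0, 1])
    return lags[int(np.argmax(scores))]

print("the webserver queue trails the database connection count by",
      best_lag(db_connections, web_queue), "samples")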

Worse yet, for a substantial percentage of transients, the “problem resolved without support intervention”, meaning that the transient came and went without IT Operations doing anything at all, like a thunderstorm on a warm July afternoon that comes and goes of its own accord. In many ways, the complex interactions of a large network of servers behave much more like a weather or climate system than the orderly revolution of the planets about the Sun. It all goes back to the difference between trying to control a linear system and a nonlinear system. Sometimes the website would leave the normal operations attractor and then fall back into it. Other times, the website would leave the normal operations attractor and fall into the outage attractor instead.

And that is when I would get into trouble on outage conference calls. People are not used to nonlinear systems because all the products that they buy are engineered to behave in a linear manner. When driving down a highway at high speed, people do not expect that a slight turn of the steering wheel will cause the car to flip over. Actually, some years ago, many Americans were driving massive SUVs with very high centers of gravity so that suburbanites could safely ford rivers. These very nonlinear SUVs also had two basins of attraction, and one of them was upside down! On some outage conference calls, I would try to apply some softwarephysics and urge the group to start bouncing things as soon as it became apparent that there was no obvious root cause for the outage. I tried to convince them that we were dealing with a complex nonlinear dynamical system with basins of attraction that was subject to the whims of chaos theory, and not to worry about finding the root causes because none were to be found. This did not go over very well with Management. One day I was summoned to my supervisor's office and instructed to never speak of such things again on an outage conference call. That's when I started my blog on softwarephysics back in 2006.

Figure 7 – The top-heavy SUVs of yore also had two basins of attraction and one of them was upside down.

Things have changed a bit since everybody moved to Cloud Computing. Now people run applications in Containers instead of on physical or virtual servers. Containers can run several microservices at the same time and the number of Containers can dynamically rise and fall as the load changes. This is very much like my Tuxedo days 20 years ago. However, you are still left with a very large nonlinear dynamical system that is highly coupled. Lots of tipping points are still hit and people still get paged out to fix them because companies can lose thousands of dollars each second.

Unlike most complex dynamical nonlinear systems, corporate websites running on Containers are heavily monitored and produce huge amounts of time series data that keep track of all the complex interactions between components for troubleshooting purposes. A great Ph.D. research topic for a more geeky graduate student in Earth System Science (ESS) might be to contact the IT Departments of some major corporations to see if their corporate IT Command Centers would be willing to let the student participate in some of these troubleshooting adventures and analyze the resulting data from the tipping point cascades that crashed their websites. Below are a few links covering the kinds of time series data that Container monitoring software collects:

Best Container Performance Monitoring Tools
https://www.dnsstuff.com/container-monitoring-tools

12 Best Docker Container Monitoring Tools
https://sematext.com/blog/docker-container-monitoring/

Using Website Monitoring Data to Model Climate Change Tipping Points
I think we are seeing the same problem with Climate Change. Most people and politicians only think in terms of linear systems because that is what they are familiar with. They do not realize that the Earth is a very complex nonlinear dynamical system that cannot be steered with confidence. Instead, people working on Climate Change seem to think that if we slowly phase out the burning of all carbon by 2050, the Earth will then peacefully settle down. They do not realize that the Earth could instead hit a number of Climate Change tipping points and roll into an outage basin of attraction of its own. To examine that more closely, let's take a deeper dive into Trajectories of the Earth System in the Anthropocene. But first, let's take a look at the low-frequency temperature trajectory of the Earth over the past 500 million years, the period of time during which complex multicellular carbon-based life has existed on the Earth.

Figure 8 – Above is a plot of the average Earth temperature over the past 500 million years. Notice that the Earth tends to find itself in one of two states - a high-temperature "Hot House" state without ice on the planet and a much colder "Ice House" state. During the last 500 million years, the Earth spent 3/4 of the time in a "Hot House" basin of attraction and 1/4 of the time in an "Ice House" basin of attraction. Our species arose during the present "Ice House" system state.

Over the past 500 million years, the Earth has tended to be in one of two states - a high-temperature "Hot House" state without ice on the planet and a colder "Ice House" state. That is because, during the last 500 million years, the Earth spent about 3/4 of the time in a "Hot House" basin of attraction and 1/4 of the time in an "Ice House" basin of attraction. This has largely been determined by the amount of carbon dioxide in the Earth's atmosphere. As I pointed out in This Message on Climate Change Was Brought to You by SOFTWARE, all of the solar energy received by the Earth from the Sun on a daily basis must be radiated back out into space as infrared photons to keep the Earth from burning up. I also pointed out that all of the gasses in dry air are transparent to both visual and infrared photons except for carbon dioxide and methane. Methane oxidizes into carbon dioxide over a decade or so, so carbon dioxide is the driving gas, even though carbon dioxide is now a trace gas at a concentration of only 410 ppm. If the Earth had no carbon dioxide at all, it would be 60 °F cooler, at an average temperature of 0 °F, and completely frozen over.

The amount of carbon dioxide in the Earth's atmosphere largely depends on the amount of rain falling on high terrains like mountain chains. Carbon dioxide dissolves in water to form carbonic acid, and carbonic acid chemically weathers away exposed rock. The resulting ions are washed to the seas by rivers. The net result is the removal of carbon dioxide from the atmosphere. We are currently in an "Ice House" basin of attraction because the Earth now has many mountain chains being attacked by carbonic acid. Thanks to plate tectonics, we currently have many continental fragments dispersing, with subduction zones appearing on their flanks forcing up huge mountain chains along their boundaries, like the mountain chains on the west coast of the entire Western Hemisphere, from Alaska down to the tip of Argentina. Some of the fragments are also colliding to form additional mountain chains along their contact zones, like the east-west trending mountain chains of the Eastern Hemisphere that run from the Alps all the way to the Himalayas. Because there are now many smaller continental fragments with land much closer to the moist oceanic air, rainfall on land has increased, and because of the newly formed mountain chains, chemical weathering and erosion of rock have increased dramatically. The newly formed mountain chains on all the continental fragments essentially sucked the carbon dioxide out of the air and washed it down to the sea as dissolved ions over the past 50 million years, causing a drop in the Earth's average temperature. This continued until the Earth started to form polar ice caps.

Then, about 2.5 million years ago, we entered the Pleistocene Ice Ages that drove huge glacial ice sheets down to the lower latitudes. The carbon dioxide levels dropped so low that the Milankovitch cycles were able to initiate a series of a dozen or so ice ages. The Milankovitch cycles are caused by minor changes in the Earth’s orbit and inclination that lead to periodic coolings and warmings. In general, the Earth’s temperature drops by about 15 °F for about 100,000 years and then increases by about 15 °F for about 10,000 years. During the cooling periods, we had ice ages because the snow in the far north did not fully melt during the summer and built up into huge ice sheets that then pushed down to the lower latitudes. Carbon dioxide levels also dropped to about 180 ppm during the ice ages, which further kept the planet in a deep freeze. During the 10,000-year warming periods, we had interglacial periods, like the recent Holocene interglacial that we just left in 1950, and carbon dioxide levels rose to about 280 ppm during the interglacials.
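The 60 °F figure quoted above can be checked with a back-of-the-envelope energy-balance calculation. The Python sketch below computes the temperature at which a bare, greenhouse-free Earth would radiate away the sunlight it absorbs and compares it to the observed average surface temperature of about 288 K; the roughly 33 °C (60 °F) difference is the total greenhouse warming that this post attributes to carbon dioxide acting as the controlling greenhouse gas:

# Zero-dimensional energy-balance sketch: the Earth must radiate away, as
# infrared photons, the sunlight it absorbs.
S = 1361.0          # solar constant at the Earth's distance, W/m^2
albedo = 0.30       # fraction of sunlight reflected straight back to space
sigma = 5.670e-8    # Stefan-Boltzmann constant, W/m^2/K^4
T_surface = 288.0   # observed global mean surface temperature, K (about 59 °F)

absorbed = S * (1.0 - albedo) / 4.0        # sunlight averaged over the whole sphere
T_effective = (absorbed / sigma) ** 0.25   # radiating temperature with no greenhouse gases

def kelvin_to_fahrenheit(T):
    return (T - 273.15) * 9.0 / 5.0 + 32.0

print(f"radiating temperature without greenhouse gases: {T_effective:.0f} K "
      f"({kelvin_to_fahrenheit(T_effective):.0f} °F)")
print(f"greenhouse warming: {T_surface - T_effective:.0f} K "
      f"(about {(T_surface - T_effective) * 9.0 / 5.0:.0f} °F)")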

But thanks to human activities during the Anthropocene, we now find ourselves at a carbon dioxide concentration of 410 ppm that is rising rapidly by 2 - 3 ppm per year. We are now in the process of leaving the "Ice House" basin of attraction and heading for the "Hot House" basin of attraction, as shown in Figure 9. For more on that, see The Deadly Dangerous Dance of Carbon-Based Intelligence and Last Call for Carbon-Based Intelligence on Planet Earth.

Figure 9 – This figure from Trajectories of the Earth System in the Anthropocene shows that the Earth also has two basins of attraction - a "Hot House" basin of attraction and an "Ice House" basin of attraction. For the past 2.5 million years of the Pleistocene, we have been in a Glacial-Interglacial cycle that lasts about 100,000 years, consisting of about 90,000 years of glaciers pushing down to the lower latitudes followed by about 10,000 years of the ice withdrawing back to the poles during an interglacial, like the recent mild Holocene that produced agriculture, human civilization and advanced technology.

Figure 10 – In this figure from Trajectories of the Earth System in the Anthropocene, time moves forward through the diagram. It shows that we left the Holocene around 1950 and started moving into a much hotter Anthropocene. As the Anthropocene progresses, we will come to a fork in the Earth's trajectory through phase space, if we have not already hit it. At the fork, the Earth could move to a hotter but stable state; otherwise, it will fall over a waterfall into a "Hot House" basin of attraction. The waterfall is marked by the phrase "Planetary threshold". Beyond this "Planetary threshold", natural positive feedback loops will take over as a cascade of climate tipping points is kicked off. This would essentially result in a planetary outage.

Figure 11 – In this figure from Trajectories of the Earth System in the Anthropocene, we see a number of tightly-coupled interconnected subsystems of the Earth that could be potential climate tipping points. Compare this to the tightly-coupled interconnected subsystems in Figure 6. Both are subject to the dangers of nonlinear systems leaving a normal operations basin of attraction and ending up in an outage basin of attraction. Hitting just one climate tipping point has the potential of setting off a tipping point cascade just as maxing out an internal firewall can set off a cascade of software component tipping points that bring down an entire website.

But the worst problem by far with the Arctic defrosting is methane gas. Methane gas is a powerful greenhouse gas. Eventually, methane degrades into carbon dioxide and water molecules, but over a 20-year period, methane traps 84 times as much heat in the atmosphere as carbon dioxide. About 25% of current global warming is due to methane gas. Natural gas is primarily methane gas with a little ethane mixed in, and it comes from decaying carbon-based lifeforms. Now here is the problem. For the past 2.5 million years, during the frigid Pleistocene, the Earth has been building up a gigantic methane bomb in the Arctic. Every summer, the Earth has been adding another layer of dead carbon-based lifeforms to the permafrost areas of the Arctic. That summer layer does not entirely decompose but gets frozen into the growing stockpile of carbon in the permafrost.

Figure 12 – Melting huge amounts of methane hydrate ice could release massive amounts of methane gas into the atmosphere.

The Earth has also been freezing huge amounts of methane gas into a solid called methane hydrate on the floor of the Arctic Ocean. Methane hydrate is an ice-like solid in which cages of water molecules surround individual methane molecules. As the Arctic Ocean warms, the methane hydrate melts and the trapped methane gas bubbles up to the surface.

The end result is that if we keep doing what we are doing, there is the possibility of the Earth ending up with a climate similar to that of the Permian-Triassic greenhouse gas mass extinction 252 million years ago that nearly killed off all complex carbon-based life on the planet. A massive flood basalt known as the Siberian Traps covered an area about the size of the continental United States with several thousand feet of basaltic lava, with eruptions that lasted for about one million years. Flood basalts, like the Siberian Traps, are thought to arise when large plumes of hotter-than-normal mantle material rise from near the mantle-core boundary of the Earth and break to the surface. This causes a huge number of fissures to open over a very large area, which then begin to disgorge massive amounts of basaltic lava. After the eruptions of basaltic lava began, it took about 100,000 years for the carbon dioxide that bubbled out of the basaltic lava to dramatically raise the level of carbon dioxide in the Earth's atmosphere and initiate the greenhouse gas mass extinction. This led to an Earth with a daily high of 140 °F and purple oceans choked with hydrogen-sulfide-producing bacteria, producing a dingy green sky over an atmosphere tainted with toxic levels of hydrogen sulfide gas and an oxygen level of only about 12%. The Permian-Triassic greenhouse gas mass extinction killed off about 95% of marine species and 70% of land-based species, and dramatically reduced the diversity of the biosphere for about 10 million years. It took a full 100 million years for the biosphere to fully recover from it.

Figure 13 - Above is a map showing the extent of the Siberian Traps flood basalt. The above area was covered by flows of basaltic lava to a depth of several thousand feet.

Figure 14 - Here is an outcrop of the Siberian Traps formation. Notice the sequence of layers. Each new layer represents a massive outflow of basaltic lava that brought greenhouse gases to the surface.

For a deep dive into the Permian-Triassic mass extinction, see Professor Benjamin Burger's excellent YouTube video at:

The Permian-Triassic Boundary - The Rocks of Utah
https://www.youtube.com/watch?v=uDH05Pgpel4&list=PL9o6KRlci4eD0xeEgcIUKjoCYUgOvtpSo&t=1s

The above video is just over an hour in length, and it shows Professor Burger collecting rock samples at the Permian-Triassic Boundary in Utah and then performing lithological analyses of them in the field. He then brings the samples to his lab for extensive geochemical analysis. This video provides a rare opportunity for nonprofessionals to see how actual geological fieldwork and research are performed. You can also view a preprint of the scientific paper that he has submitted to the journal Global and Planetary Change at:

What caused Earth’s largest mass extinction event?
New evidence from the Permian-Triassic boundary in northeastern Utah
https://eartharxiv.org/khd9y

From the above, we can see why having a better understanding of how complex nonlinear systems composed of many interacting subsystems behave just prior to, and during, a tipping point cascade is so important. Building a realistic simulation of such a system from scratch could be quite costly. Why not just use the natural monitoring data from a prebuilt multimillion-dollar high-volume corporate website instead?

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston
