In this posting, we will explore the last of the great achievements of 20th century physics – chaos theory. The key insight of chaos theory is the fundamental difference between linear and nonlinear dynamical systems. A dynamical system is simply a system that changes with time, like the number of cars passing a particular point along I-90, or the response time of an EJB. The fundamental difference between linear and nonlinear systems is their sensitivity to slight changes in initial conditions. But before proceeding with a deeper analysis of chaos theory and nonlinear dynamical systems, let us examine a case study in source code management that highlights the differences between linear and nonlinear systems, and which would be a true nightmare for any corporate SCM (Source Code Management) group.

**A Case Study of SCM Failure From 1776**

One of my all-time favorites is the play and movie *1776* which is a fairly accurate historical depiction of the events surrounding the passage of the American Declaration of Independence by the Second Continental Congress. I frequently watch the movie version of *1776* on the 4th of July with my family because it vividly brings to life the all-too-human internal conflicts surrounding the birth of the United States. As you might have guessed, I am an 18th century liberal and a huge fan of the 18th century Enlightenment and the 17th century Scientific Revolution, with their attendant focus on the freedom of thought and the investigation of reality through rational deliberation, things which seem to be sadly lacking in the modern world. But I suppose I am just being a 19th century romantic when it comes to the 18th century; it must have been just as irrational as the world is today. I loved the intellectual freedom of the 1960s, compared to the stifling 1950s, but it was hard raising kids with stock video of your peers frolicking at Woodstock, just as the French Revolution (1789) put the kibosh on *The Age of Reason* (1795). Still, the 1960s taught me to distrust authority and to keep an open mind to all sides of an issue and to take a cue from the effective theories of physics, by keeping in mind that all of my deeply held opinions are just approximations of reality too. As I have mentioned before, knowing that you are “wrong” gives you a huge advantage over people who know that they are “right”.

Now that you know of my fondness for *1776*, I would like to offer a slightly different rendition of *1776* from an IT perspective. To begin with, can you find the problem with the following code deployed in 1776?

*”We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”*

This is probably the only line from the Declaration of Independence that most Americans can quote from memory, and surprisingly, more than 50% of them get it wrong, but for an understandable reason. Let me explain.

All during the first two quarters of 1776, it was becoming fairly obvious in the daily Board meetings that many of the board members wanted to take the Company private with a leveraged buyout from Corporate. This came to a head on June 7th, when Richard Henry Lee of Virginia read his resolution beginning: *"Resolved: That these United Colonies are, and of right ought to be, free and independent States, that they are absolved from all allegiance to the British Crown, and that all political connection between them and the State of Great Britain is, and ought to be, totally dissolved."*

On June 11, consideration of the Lee Resolution by the Board was postponed by a vote of seven to five, with New York abstaining. The Board then recessed for 3 weeks, after funding a five-man development team to work on a Declaration of Independence for the leveraged buyout. The development team consisted of John Adams, Benjamin Franklin, Roger Sherman, Robert Livingston, and Thomas Jefferson, with Adams and Franklin as team leads. Naturally, nearly all the coding fell to the junior team member, Thomas Jefferson, and Jefferson did all of the unit and integration testing too because there were no QA team members available for a project thrown together on such short notice. Coding began on June 11th with very little direction from the business. Shortly before June 28th, a structured walkthrough of the code was conducted by the development team, and Adams and Franklin made some minor code changes to the original baseline developed by the junior Jefferson.

The Lee resolution was passed by the Board on July 2, 1776, and the Declaration of Independence immediately went into UAT testing, which lasted through the morning of July 4, 1776. On the 4th, the Declaration finally passed UAT testing following a total of 86 code changes. Now during all this time “inalienable” was used in all releases of the code, including the final build that was signed off for UAT testing on July 4th by the Board. After adoption, the Board opened a change ticket for the production release of the Declaration. A staged release was planned, with John Dunlap working in his print shop through the night of the 4th to produce the first beta production build. This first staged beta production release did not have an SSL signing cert and is now known as the Dunlap Broadside release of the Declaration. The Dunlap Broadside beta release was staged to a small population of the total user base for normal end-user validation. George Washington and his troops in New York were part of the beta test population and began using the unsigned Dunlap Broadside on July 9th. On July 19th, the Board opened another change ticket for the final production release of the Declaration and an accompanying SSL signing cert.

Now the problem was that before John Dunlap produced the first beta production build of the Declaration, John Adams changed the code to use "unalienable", instead of the "inalienable" found in the version of code that passed UAT testing! Unfortunately, this unapproved and untested change carried through to the final production build that was released to the general user population. And that is why the official production release of the Declaration that you see in the rotunda of the National Archives contains the word “unalienable”, and why so many Americans are so confused!

A frequent plot device in Hollywood movies has a small turn of events dramatically changing a character’s life and subsequently the entire Universe. For example, in 1998, Gwyneth Paltrow starred in the film *Sliding Doors*, in which she played a character named Helen whose life hinges upon whether or not the doors of a subway train block her entrance to a subway car. At the crucial point in the film, Helen experiences a Hugh Everett type Many-Worlds bifurcation, with one instance getting on the subway car and the other instance stuck back in the station. The two instances of Helen then proceed on drastically different paths through spacetime. By the way, Gwyneth Paltrow’s mom, Blythe Danner, played Thomas Jefferson’s wife Martha in *1776*. I am a big fan of both. Now the question is how often do earth-shattering things like this really happen? For example, how different would the world be today if John Adams had not changed “inalienable” to “unalienable”? Clearly, for at least 50% of Americans, it would not have made the slightest difference at all, and for the rest of the world it would have mattered even less.

**Software is Nonlinear**

Now in the telling of my rendition of *1776* I mentioned that the scenario I was laying down would be a nightmare for any corporate SCM group. But why? The Declaration of Independence is 8073 characters or bytes long. Now take the source code for a small 8073 byte C++ program and randomly change an “i” to a “u” at some point in the code. Because C++ is a strongly typed computer language, the 8073 byte program will probably not even compile, which is a fortunate thing, but what if the program did compile and was deployed into production? Most likely it would perform quite badly. If the C++ program does not compile, just change one of the bits in the executable from a “1” to a “0” with your favorite editor in HEX mode and see what happens when the program runs. Again, disastrous results will ensue. So why is an 8073 byte C++ program so radically different from the Declaration of Independence? The difference is that software is nonlinear in nature, while the Declaration of Independence and many other things in day-to-day life are linear. That means that software is very sensitive to small changes in initial conditions, while many things in life are not. Consequently, all IT people are terrified by the slightest change to software. And this goes for all forms of software, including source code, shell scripts, JCL streams, firewall rules, proxy policies, .xml configuration files, lookup tables, job schedules, database schemas, etc. The slightest change to any of them can produce wild and unpredictable behaviors that might not even appear until days or weeks after the change has been made. This is why IT has developed such elaborate Change Management procedures and safeguards.
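You can see this single-bit sensitivity without touching a compiler. The little Python sketch below is my own toy illustration, not a real deployment scenario: it flips one bit in the 8-byte IEEE 754 representation of the number 1776.0 and turns it into a number vanishingly close to zero.

```python
import struct

def flip_bit(data: bytes, bit_index: int) -> bytes:
    """Return a copy of data with one bit flipped (bit 0 = most significant)."""
    byte_i, bit_i = divmod(bit_index, 8)
    flipped = bytearray(data)
    flipped[byte_i] ^= 1 << (7 - bit_i)
    return bytes(flipped)

# Pack the number 1776.0 as an 8-byte IEEE 754 double, then flip a single
# bit in its exponent field.
original = struct.pack(">d", 1776.0)
corrupted = flip_bit(original, 1)

print(struct.unpack(">d", original)[0])    # 1776.0
print(struct.unpack(">d", corrupted)[0])   # a positive number smaller than 1e-300
```

Change one bit in 64 and the value does not change by a couple of percent; it changes by more than 300 orders of magnitude. A one-character change to the Declaration, by contrast, left its meaning essentially intact.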

There is another area in IT where we also see the effects of nonlinearity. In my current IT job, I support several large websites residing on about 200 production servers. These websites are complicated affairs of load balancers, webservers, WebSphere J2EE application servers, database servers, proxy servers, email servers and gateway servers to mainframes - all with a high degree of coupling and interdependence of the components. Although everything is sized to take the anticipated load with room to spare, every few days or so, we suddenly experience a transient on the network of servers that causes extreme slowness of the websites to the point where throughput essentially stops. When this happens, our monitors detect the mounting problem and perhaps 10 people join a conference call to determine the root cause of the problem. Sometimes we do find the root cause and can quickly restore normal operations, but about 50% of the time we do not find the root cause, and we start bouncing (stopping and restarting) the most suspicious components until performance improves. Worse yet, for a substantial percentage of transients, the “problem resolves without support intervention”, meaning that the transient comes and goes without IT Operations doing anything at all, like a thunderstorm on a warm July afternoon that comes and goes of its own accord. As we shall see later in this posting, in many ways, the complex interactions of a large network of servers behave much more like a weather system than the orderly revolution of the planets about the Sun. So why is there this difference between linear and nonlinear systems?

**Linear and Nonlinear Systems**

It all goes back to an interplay between the evolution of common sense and classical mechanics back in the 18th and 19th centuries. In 1687, Newton published his *Principia*, in which he laid down the foundations of classical mechanics. The book comes in three parts, in which Newton nails one physics problem after another. Although Newton had developed calculus for his own use, strangely, in the *Principia* all these physics problems are solved using geometrical techniques with the use of the concept of limits borrowed from calculus, but there is no outright use of calculus in the *Principia* itself. Classical mechanics was reformulated in the 18th and 19th centuries by people like Lagrange, Euler, Laplace, and Hamilton, and in this reformulation of classical mechanics, calculus, in the form of differential equations, played a major role. During this time, the supreme achievement of classical mechanics was celestial mechanics and its ability to accurately predict the motion of the planets, comets, and asteroids about the Sun. This accomplishment was possible because the differential equation for a body orbiting the Sun is a linear differential equation. As you may have guessed, linear systems are defined by linear differential equations and nonlinear systems are defined by nonlinear differential equations. So let’s examine the differences between the two types of differential equations. Recall that the general equation for a line is:

y = m x + b

where m is the slope of the line and b is the y-intercept of the line, which is where the line cuts through the y-axis. For example, the line defined by:

y = 3 x + 5

has a slope of 3 and cuts through the y-axis at a height of 5. Differential equations are just equations that have derivatives in them. The first derivative of a curvy line or function dy/dx is just the slope of the curvy line at each point x along the x-axis, while the second derivative d²y/dx² is the curvature of the curvy line at each point along the x-axis. Now a linear differential equation is a differential equation similar to the general equation of a line in that there are just simple “+” signs between simple combinations of derivatives, hence the term “linear”. For example, the following differential equation calls for us to find a function of x or curvy line such that:

2 d²y/dx² + 4 dy/dx + 16 y = 0

This equation tells us to find a curvy line or lines, such that for every point along the x-axis, 2 times the curvature of the curvy line, plus 4 times the slope of the curvy line, plus 16 times the y-value of the curvy line at each point x is equal to 0. Now in previous posts we studied the differential equations for two wave equations, the wave equation for strings and Schrödinger’s equation from quantum mechanics. In both differential equations we used the symbol ∂² for the second derivative or curvature of the wave, and ∂ was used for the first derivative or slope of the wave. This is because a wave changes both in time and space. Imagine a hilly topography: at any given point on a hill there will be one curvature and one slope for the hill in an east-west direction, and another curvature and slope for the hill in a north-south direction. That is why we use the symbols ∂² and ∂ in wave equations: these derivatives only look at curvatures and slopes along either the time t-axis or the spatial x-axis, and are therefore called partial derivatives. In the differential equation above, we are only interested in how y changes along the x-axis, so we use the symbols d²/dx² and d/dx.

Solving a differential equation is much like playing the TV game show *Jeopardy*, in that you are given the answer to the question in the form of a differential equation, and you must figure out what the question is - the function or curvy line that meets the stated requirements. When you take a course in differential equations, they teach you all sorts of tricky techniques to come up with the curvy line, and like in *Jeopardy*, guessing is a preferred technique. Now the surprising thing is that most differential equations have no solution in closed form! It is not hard to see that the requirements stated in a differential equation can be so outrageous that no function or curvy line can meet them. Similarly, I could tell a real estate agent that I am looking for a condo on Lake Shore Drive in Chicago, that is above the 30th floor, has a southern view overlooking one of the harbors off Lake Michigan, and has an asking price below $300,000. Now I certainly could find a condo in Chicago that meets some of these requirements, but not all at the same time, and so it goes with differential equations. But for the differential equation above there is a solution:

y = e^{-x} ( c_{1} cos √7 x + c_{2} sin √7 x )

where c_{1} and c_{2} are constant numbers. So there are actually an infinite number of curvy lines that satisfy the differential equation above because we can change c_{1} and c_{2} to different values. To tell that the above solution works for the differential equation at hand, all we have to do is use some calculus to ensure that 2 times the second derivative, plus 4 times the first derivative, plus 16 times the y-value of the formula at each point x is equal to 0. If the above differential equation were being used to solve a physics problem for the position of a particle, we would set up the differential equation, solve it, and then we would need two initial conditions to complete the solution because we would need to figure out the values of c_{1} and c_{2} that work for our problem. These initial conditions might be the initial position and velocity of the particle. Once we have the solution to the differential equation and the c_{1} and c_{2} constants nailed down, marvelous things ensue. We can now predict the motion of the particle for all future and past times. The motion of the particle is deterministic, meaning that it will always follow the same path through space and time for the given set of initial conditions, and the motion is predictable. For example, when you solve the linear differential equation for the Earth orbiting the Sun, you obtain a solution that is an ellipse with the Sun at one focus of the ellipse. Once you plug in the position of the Earth and its velocity at that position, you can predict all future motion of the Earth about the Sun because the elliptical solution is deterministic. And if a system is deterministic and predictable, then that means there is a good chance that it is controllable.
Now we do not have the technology at hand to control the motion of the Earth about the Sun, but using the same mathematics, we have managed to send probes to every planet in the solar system because the motions of the probes were deterministic, predictable, and controllable.
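That check on the solution is also easy to automate. The little Python sketch below is my own illustration: it approximates the derivatives of the solution with central differences and confirms that 2 times the second derivative, plus 4 times the first derivative, plus 16 times the y-value comes out to 0 (to within round-off) at several sample points.

```python
import math

SQRT7 = math.sqrt(7)

def y(x: float, c1: float = 1.0, c2: float = 2.0) -> float:
    """The proposed solution y = e^{-x} ( c1 cos √7 x + c2 sin √7 x )."""
    return math.exp(-x) * (c1 * math.cos(SQRT7 * x) + c2 * math.sin(SQRT7 * x))

def residual(x: float, h: float = 1e-5) -> float:
    """Evaluate 2 y'' + 4 y' + 16 y using central-difference derivatives."""
    dy = (y(x + h) - y(x - h)) / (2 * h)
    d2y = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    return 2 * d2y + 4 * dy + 16 * y(x)

for point in (0.0, 0.5, 1.0, 2.0):
    print(abs(residual(point)) < 1e-4)   # True at every point we test
```

Note that the check works for any values of c_{1} and c_{2}; I simply picked 1 and 2 to have something concrete to compute.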

Figure 1 – The orbit of the Earth about the Sun is an example of a linear system that is deterministic and predictable (click to enlarge)

The success of classical mechanics and the celestial mechanics that ensued was so powerful that it gradually crept into the Zeitgeist of the times and eventually into our common sense view of the world. Before Newton and classical mechanics, the Universe was wild and unpredictable; after Newton, it became understandable, deterministic, predictable, and many times controllable. This new view of the Universe was carried to an extreme in 1814, when Pierre-Simon Laplace commented:

*"We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes."*

Even so, the above quote is a fair representation of the state of affairs for classical physics up until the 1960s. There was just one problem. Most differential equations are not linear and neither are the physics problems they define. For example, we can easily turn the above linear differential equation into a nonlinear differential equation by simply changing the 4 dy/dx term to 4y dy/dx:

2 d²y/dx² + 4 y dy/dx + 16 y = 0

The course in differential equations that I took in 1971 used a textbook written in 1968. This textbook was 545 pages long, but only had a slender 16-page chapter on nonlinear differential equations, which basically said that we do not know how to solve nonlinear differential equations, and because we cannot solve them, the unstated implication was that nonlinear differential equations could not be that important anyway. Besides, how different could nonlinear differential equations and the nonlinear systems they described be, compared to their linear cousins?

Figure 2 – Even a simple pendulum of length L is a nonlinear system (click to enlarge)

To get around the problem of not being able to solve nonlinear differential equations, physicists and engineers fell back to an old trick used by politicians. Rather than answer the question that you have posed, answer a much simpler, but seemingly related, question instead. For example, when you lay down the differential equation for a simple pendulum, consisting of a mass m suspended by a massless rod of length L that makes an angle θ with the vertical, it turns out to be a nonlinear differential equation:

m d²θ/dt² + m k² sin(θ) = 0

where k = √(g/l) and g = 32 ft/sec² is the gravitational acceleration at the Earth’s surface.

It’s the sin(θ) term in the differential equation that causes the problem. A solution to this differential equation would tell us how the angle θ that the pendulum makes with the vertical changes with time, but because the differential equation is nonlinear, we cannot solve it with the standard techniques of calculus, and as far as those techniques can tell, no closed-form solution even exists. It’s like that condo on Lake Shore Drive. But here is a trick to get around the problem of nonlinearity; cram the nonlinear differential equation into a tight fitting linear box, like our 200-pound man trying to squeeze into an old suit for his 40th high school reunion. To do that, we need to expand the sin(θ) term into a Maclaurin series as:

sin(θ) = θ - θ^{3}/3! + θ^{5}/5! - θ^{7}/7! + …

where the angle θ is measured in radians. Recall that π radians is 180°, so that 1 radian = 57.3°.

and that the factorial function ! is defined as:

3! = 1 x 2 x 3 = 6

5! = 1 x 2 x 3 x 4 x 5 = 120

7! = 1 x 2 x 3 x 4 x 5 x 6 x 7 = 5040

For example, for θ = 0.2 radians = 11.46° we obtain:

sin(0.2) = 0.2 - 0.008/6 + 0.00032/120 - 0.0000128/5040 + ...

We can see that the higher terms of the series quickly die off, so that for small angles:

sin(θ) ≈ θ

For example, for:

0.1 radians = 5.729°   sin(0.1) = 0.0998 ≈ 0.1

0.2 radians = 11.46°   sin(0.2) = 0.1986 ≈ 0.2

0.3 radians = 17.19°   sin(0.3) = 0.2955 ≈ 0.3

0.4 radians = 22.92°   sin(0.4) = 0.3894 ≈ 0.4
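These small-angle values are easy to reproduce for yourself; a few lines of Python (my own illustration) compare the truncated Maclaurin series with the true sine:

```python
import math

def sin_series(theta: float, terms: int = 4) -> float:
    """Truncated Maclaurin series: θ - θ³/3! + θ⁵/5! - θ⁷/7! + ..."""
    return sum((-1) ** n * theta ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

for theta in (0.1, 0.2, 0.3, 0.4):
    print(f"{theta} radians = {math.degrees(theta):6.2f}°   "
          f"sin({theta}) = {math.sin(theta):.4f}   "
          f"series = {sin_series(theta):.4f}")
```

Even at 0.4 radians the approximation sin(θ) ≈ θ is off by only about 1 part in 40, and the four-term series is accurate to many more digits than that.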

so we can approximate the nonlinear differential equation for the motion of a pendulum with the closely related linear differential equation shown below, so long as the angle θ through which the pendulum swings remains less than about 25°, as in a grandfather clock:

m d²θ/dt² + m k² θ = 0

Because it is linear, the above differential equation can be solved with calculus and yields a solution of:

θ = c_{1} cos kt + c_{2} sin kt

where again c_{1} and c_{2} are constants determined by the initial conditions of the pendulum.

This solution has periodic motion with a period that only depends upon the length of the pendulum rod l and g = 32 ft/sec²:

T = 2 π √(l/g)

For example, for a pendulum that is 6 feet long the above formula allows us to predict that the period of the pendulum will be 2.72 seconds, no matter how heavy the pendulum mass is. But let’s say that we want the period to be an even 3 seconds. We can use the same formula to calculate that we need to increase the length of the pendulum to 7.30 feet. Now that is control! But remember, the above formula only works when the pendulum swings through small angles where sin(θ) ≈ θ, and consequently, the motion of the pendulum can be approximated by a linear differential equation. For larger swings, the linear approximation begins to break down, and the above formula for the period T of the pendulum no longer makes accurate predictions of the period, and because we cannot accurately predict the period of the pendulum, we can no longer accurately control the period either.
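Both of these calculations can be checked with a few lines of Python (a quick sketch of my own, in the same feet-and-seconds units used above):

```python
import math

g = 32.0  # gravitational acceleration in ft/sec²

def pendulum_period(length_ft: float) -> float:
    """Period T = 2π√(l/g) of a pendulum in the small-angle (linear) regime."""
    return 2 * math.pi * math.sqrt(length_ft / g)

def pendulum_length(period_sec: float) -> float:
    """Invert T = 2π√(l/g) to find the length l giving a desired period T."""
    return g * (period_sec / (2 * math.pi)) ** 2

print(round(pendulum_period(6.0), 2))   # period of a 6-foot pendulum ≈ 2.72 sec
print(round(pendulum_length(3.0), 2))   # length for a 3-second period ≈ 7.3 ft
```

Notice that the mass m never appears; it cancels out of the linear equation, which is why the period does not depend on how heavy the pendulum bob is.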

So by the close of the 1950s, the general procedure for physicists and engineers was to reduce the Universe to an assemblage of linear systems, described by linear differential equations, yielding solutions that were both deterministic and predictable, because those were the kinds of differential equations that they could solve with calculus, lending credence to the old adage that, when all you have is a hammer, everything begins to look like a nail! The side benefit to all this was that a linear, deterministic, and predictable Universe was also a controllable Universe, and human beings just love to control things. So engineers were always careful to only operate their nonlinear systems within a range where they behaved linearly and could be controlled. This worldview was truly fitting for the 1950s and its stifling pursuit of uniformity and conformity.

By the end of the first half of the 20th century, classical physics had suffered through two revolutions, relativity and quantum mechanics, and like the 1950s, it was ripe for one more. In the 20th century, physicists followed the same strategy that I took when I got stumped by the very first problem on a physics final. With a vague sense of impending doom, I would simply skip ahead to the more challenging problems, with the sincere intent of coming back to the first problem a little later, but somehow I never did. Physicists in the first half of the 20th century did the same thing; they skipped over things like dripping faucets, the turbulent flow of air over wings, and the motions of air masses in weather systems because the differential equations involved were nonlinear and could not be solved with their calculus hammers.

**Computers Come to the Rescue**

All this began to change with the turbulent 1960s and its seemingly never ending expansion of intellectual freedom. There was a collision about to happen between the physical Universe and the Software Universe. The Software Universe began as a few bytes of machine code on Konrad Zuse’s Z3 computer in the spring of 1941 and began to rapidly expand and evolve through the 1950s. But the expansion and evolution of the Software Universe was about 10 billion times faster than the expansion and evolution of the physical Universe, so by the 1960s, the Software Universe had finally begun to catch up, and the two universes began to interact. In the 1950s, scientists and engineers began to use computers to analyze experimental data and perform calculations, essentially using computers as souped-up sliderules. But by the 1960s, computers had advanced to the point where scientists and engineers were able to begin to use computers to perform simulated experiments to model things that previously had to be physically constructed in a lab. Thus efforts in the Software Universe began to pay off, and rapidly speed up research in the physical Universe, because it was much easier to create a software simulation of a physical system, than to actually build the physical system itself in the lab.

This revolution in the way science was done personally affected me. I finished up my B.S. in physics at the University of Illinois in 1973 with the sole support of my trusty sliderule, but fortunately, I did take a class in FORTRAN programming my senior year. I then immediately began work on a M.S. degree in geophysics at the University of Wisconsin at Madison. For my thesis, I worked with a group of graduate students who were shooting electromagnetic waves into the ground to model the conductivity structure of the Earth’s upper crust. We were using the Wisconsin Test Facility (WTF) of Project Sanguine to send very low frequency electromagnetic waves, with a bandwidth of about 1 – 20 Hz into the ground, and then we measured the reflected electromagnetic waves in cow pastures up to 60 miles away. All this information has been declassified and is available on the Internet, so any retired KGB agents can stop taking notes now and take a look at:

http://www.fas.org/nuke/guide/usa/c3i/fs_clam_lake_elf2003.pdf

Project Sanguine built an ELF (Extremely Low Frequency) transmitter in northern Wisconsin and another transmitter in northern Michigan in the 1970s and 1980s. The purpose of these ELF transmitters is to send messages to our nuclear submarine force at a frequency of 76 Hz. These very low frequency electromagnetic waves can penetrate the highly conductive seawater of the oceans to a depth of several hundred feet, allowing the submarines to remain at depth, rather than coming close to the surface for radio communications. You see, normal radio waves in the Very Low Frequency (VLF) band, at frequencies of about 20,000 Hz, only penetrate seawater to a depth of 10 – 20 feet. This ELF communications system became fully operational on October 1, 1989, when the two transmitter sites began synchronized transmissions of ELF broadcasts to our submarine fleet.

Anyway, back in the summers of 1973 and 1974 our team was collecting electromagnetic data from the WTF using a DEC PDP 8/e minicomputer. The machine cost about $30,000 in 1973 dollars and was about the size of a washing machine, with 32K of magnetic core memory. We actually hauled this machine through the lumber trails of the Chequamegon National Forest and powered it with an old diesel generator to digitally record the reflected electromagnetic data in the field. For my thesis, I then created models of the Earth’s upper conductivity structure down to a depth of about 12 miles, using programs written in BASIC. The beautiful thing about the DEC PDP 8/e was that the computer time was free, so I could play around with different models, until I got a good fit to what we recorded in the field. The one thing I learned by playing with the models on the computer was that the electromagnetic waves did not go directly down into the Earth from the WTF, like common sense would lead you to believe. Instead, the ELF waves traveled through the air to where you were observing and then made a nearly 90° turn straight down into the Earth, as they refracted into the much more conductive rock. So at your observing station, you really only saw ELF waves going straight down and reflecting straight back up off the conductivity differences in the upper crust, and this made modeling much easier than dealing with ELF waves transmitted through the Earth from the WTF. And this is what happens for our submarines too; the ELF waves travel through the air all over the world, channeled between the conductive seawater of the oceans and the conductive ionosphere of the atmosphere, like a huge coax cable. When the ELF waves reach a submarine, they are partially refracted straight down to the submarine. I would never have gained this insight by solving Maxwell’s differential equations for electromagnetic waves alone!

After I graduated from Wisconsin in 1975, I went to work for Shell and Amoco, exploring for oil from 1975 to 1979, before switching into a career in IT in 1979. But even during this period, I mainly programmed geophysical models of seismic data in FORTRAN for Shell and Amoco. It was while programming computer simulations of seismic data that the seeds of softwarephysics began to creep into my head. When I made a career change into IT in 1979, it seemed like I was trapped in a frantic computer simulation, like the ones I had programmed on the DEC PDP 8/e, buried in punch card decks and fan-fold listings. So to help myself cope with the daily mayhem of life in IT, I developed softwarephysics.

Numerical solutions to nonlinear differential equations finally became possible in the 1960s thanks to the rapid advance of computers. Computers allowed physicists to solve nonlinear differential equations numerically, for the first time, using techniques that could not practically be performed with sliderules alone. For example, the orbit of the Earth about the Sun portrayed in Figure 1 was not calculated by solving the differential equation for the Earth revolving about the Sun. It is from an old program that I wrote in 1994. All I did in that program was to start with some initial position and velocity of the Earth; then I used the curvature of the Earth’s orbit, defined by the 2nd order linear differential equation that describes the motion of the Earth about the Sun, to figure out where the Earth would be in a small amount of time in the future. After I plotted this next point in the orbit, I began the process of finding the next point in the orbit all over again. This was all done in a do-loop, so as I iterated through the loop, the elliptical orbit of the Earth with the Sun at one focus just popped out. Before computers, I would have solved the differential equation for the Earth orbiting the Sun and obtained a solution that defines a family of ellipses, with a couple of constants that were determined by the initial position and velocity of the Earth. Then I would use the equation of the ellipse, and its constants, to plot the position of the Earth. But with a computer, I just did a step-wise plot of the trajectory of the Earth’s orbit about the Sun; my program had no idea that it was laying down an elliptical orbit. Back in the 1960s, using this same technique with the computers of the day, physicists could have solved the nonlinear differential equations that were not solvable with calculus.
But in the 1960s, physicists were busy with other things, especially with the problem of unraveling the hundreds of newly discovered particles of high energy physics, and developing the Standard Model previously discussed. It would have taken a very brave graduate student indeed to explain to his thesis advisor that physics had skipped over dripping faucets in the pursuit of quarks, and that he wanted to do his Ph.D. dissertation on dripping faucets instead.

In 1962, Thomas Kuhn published *The Structure of Scientific Revolutions*, in which he proposed that science did not progress by a steadily increasing accumulation of knowledge. Instead, an effective theory of science encountered the limits of its effective range of reliable prediction and reached a crisis point, at which the effective theory no longer worked. At that point, a “paradigm shift” was required to formulate a new effective theory that covered the new territory not covered by its predecessor. This was the case for both the paradigm shifts of relativity and quantum mechanics in the early 20th century, when it was realized that the effective theory of classical electrodynamics did not work at high velocities, and that classical Newtonian mechanics did not work for very small things like atoms. However, the physics of nonlinear dynamical systems turned out to be so bizarre that even Kuhn’s concept of paradigm shifts failed to initiate the final revolution of 20th century physics. This revolution was so strange that it had to come from outside physics itself!

**Surprise! - Nonlinear Systems are not like Linear Systems at All**

Ed Lorenz did not have a problem with his physics thesis advisor about studying nonlinear systems because Ed was a meteorologist at MIT. In his book *Chaos: Making a New Science* (1987), James Gleick describes Lorenz’s accidental discovery of chaos in the winter of 1961. Ed was using a primitive vacuum tube computer, a Royal McBee LGP-30, to simulate weather, using a very simple computer model. The model used three nonlinear differential equations, with three variables, that changed with time t:

dx/dt = 10y - 10x

dy/dt = 28x - y - xz

dz/dt = xy - 8z/3

The variable x represented the intensity of air motion, the variable y represented the temperature difference between rising and descending air masses, and the variable z was the temperature gradient between the top and bottom of the atmospheric model. Thus each value of x, y, and z represented the weather conditions at a particular time t and watching the values of x, y, and z change with time t was like watching the weather unfold over time.

One day, Lorenz wanted to repeat a simulation for a longer period of time. Instead of wasting time rerunning the whole simulation from the beginning on his very slow computer, he started the second run in the middle of the first run, using the results from the first run as the initial conditions of the second run. Now the output from the second run should have exactly followed the output from the first run where the two overlapped, but instead, the two weather trajectories quickly diverged and followed completely different paths through time. At first, he thought the vacuum tube computer was on the fritz again, but then he realized that he had not actually entered the exact initial conditions from the first run. Using single precision floating point variables, the LGP-30 computer stored numbers to an accuracy of six decimal places in memory, like 0.506127, but the line printer printouts shortened the numbers to three decimal places, like 0.506. When Lorenz punched in the initial conditions for the second run, he entered the rounded-off numbers from the printout, and that was why the two runs diverged. Even though there was only a 0.1% difference between the initial conditions of the two runs, the end results of the two runs were completely different! For Ed, this put an end to the hope of perfect long-term weather forecasting, because it meant that even if he could measure the temperature of the air to within a thousandth of a degree and the wind speed to within a thousandth of a mile per hour over every square foot of the Earth, his weather forecast would not be accurate beyond a few days out because of the very small errors in his measurements. Ed Lorenz published his findings in a now famous paper *Deterministic Nonperiodic Flow* (1963).

You can read this famous paper at:

http://samizdat.cc/lib/pflio/lorenz-1963.pdf

Figure 2 shows a plot of Lorenz’s nonlinear model. This plot was generated with the Winplot application that is available free from the Exeter Academy at:

http://math.exeter.edu/rparris/winplot.html

After you download and install Winplot, start up Winplot and do a:

Window > 3-dim > Equa > Differential

and keep clicking the OK buttons to choose the Lorenz equation, which is the default differential equation that you will see listed. Then do a:

One > Deq trajectory

and specify x = 10.00000, y = 0.00000, z = 0.00000 in the dialog box. Do a View > Axis to plot the x, y, and z axes. Then click on the “Watch” button to start plotting the solution to the 3 nonlinear differential equations, and use “Q” to stop the plotting. Notice how the solution bounces back and forth between the two lobes of the strange attractor. Winplot has a lot of options that you can play with, so have some fun with it. Try “limit duration to” 60 iterations for two runs, one with x = 10.00000 and the other with x = 10.00001, then use the “end pt” button to view the two final positions of the runs; you will see that they are very different, even though the starting points 10.00000 and 10.00001 are nearly identical! That is the “Butterfly Effect” that Ed discovered about nonlinear systems.

Figure 3 – Ed Lorenz’s model depicting strange attractors for his 3 nonlinear differential equations (click to enlarge)

Comparing Figure 1 to Figure 3, we can see the difference between the linear system of the Earth orbiting the Sun and a nonlinear weather system. Like nearly all linear systems, the Earth orbiting the Sun is periodic, in that it takes a year for the Earth to orbit the Sun, and the Earth follows the same deterministic elliptical path on each orbit. The numerical solution to Ed Lorenz’s three nonlinear differential equations is just the opposite. Although the solution is deterministic, in that if you start the simulation with the same initial conditions, you will always get the same end points after 60 iterations, the solution is not periodic. It bounces back and forth between the two lobes of the strange attractor and never repeats the same path. That is why Lorenz called it *Deterministic Nonperiodic Flow*. But the most crucial difference is that if the Earth gets a little push in one direction or another by a passing comet, it does not crash into the Sun! Like the Declaration of Independence, linear systems are not very sensitive to small changes in initial conditions. For linear systems, small changes produce small effects. Nonlinear systems, on the other hand, are much more like software, in that small changes can cause dramatic effects, as demonstrated by our two runs of Lorenz’s model, one with a starting point of x = 10.00000 and the other with a starting point of x = 10.00001.

Although *Deterministic Nonperiodic Flow* is now regarded as the seminal paper launching chaos theory, at the time of publication it went largely unnoticed, especially by the physics community, because it was published in a journal for meteorologists, *The Journal of the Atmospheric Sciences*, and it would remain largely unnoticed for more than a decade. Sadly, I must report that Ed Lorenz passed away on April 16, 2008. So along with Einstein, Heisenberg and Schrödinger, we have lost the founding father of the final revolution of 20th century physics.

**Ladybug Chaos in my Backyard**

Our next stop is in 1976, when ecologist Robert May published *Simple Mathematical Models with Very Complicated Dynamics* in *Nature*. May was an ecologist at Princeton trying to use a computer to model the way the population of a species varied from year to year. For example, in recent years the Chicago area has been infested with the Asian ladybug. Some summers the infestation is very light, but in other summers the ladybugs swarm in huge numbers and get into everything. May wanted to know why populations could vary so much from year to year. A very simple population model would be one in which the population of the ladybugs next summer X_{n+1} was simply some number “a” times the population of ladybugs this summer X_{n}:

X_{n+1} = aX_{n} (n=0,1,2...)

For the ensuing analysis let's count the number of ladybugs in my backyard in units of a million ladybugs, so that a starting population of X_{0} = 0.5 means that we are starting with a population of half a million ladybugs in my backyard. So if we started with X_{0} = 0.5 and a = 1.6 we would get a population growth of:

X_{0} = 0.500

X_{1} = (1.6) (0.500) = 0.800

X_{2} = (1.6) (0.800) = 1.280

X_{3} = (1.6) (1.280) = 2.048

X_{4} = (1.6) (2.048) = 3.279

X_{5} = (1.6) (3.279) = 5.246

X_{6} = (1.6) (5.246) = 8.394

But this would not be a very good model because the population of ladybugs would simply increase without bound. So May multiplied the growth term aX_{n} by a killing-off factor (1 - X_{n}) that reduces the population as it grows.

The killing off term limits the population to fall between 0 and 1 million, because if X_{n} should ever reach a value of 1 then:

(1 - X_{n}) = (1 - 1) = 0

and there would be no ladybugs at all for the next summer. This could happen if the ladybug population reached 1 million and they ate up all the aphids in my backyard, so that the entire ladybug population starved to death. The final equation is called the logistic map:

X_{n+1} = aX_{n} (1 - X_{n}) (n=0,1,2...)

Now if we start again with X_{0} = 0.5 and a = 1.6 we get a population growth of:

X_{0} = 0.500

X_{1} = (1.6) (0.500) (1 - 0.500) = 0.400

X_{2} = (1.6) (0.400) (1 - 0.400) = 0.384

X_{3} = (1.6) (0.384) (1 - 0.384) = 0.378

X_{4} = (1.6) (0.378) (1 - 0.378) = 0.376

X_{5} = (1.6) (0.376) (1 - 0.376) = 0.375

X_{6} = (1.6) (0.375) (1 - 0.375) = 0.375

which seems to stabilize at a constant population of 0.375. IT professionals should realize that the logistic map is an example of a recursive procedure. The best way to study the logistic map is to experiment with a graphical representation. The following plots of the logistic map were all created with an applet available at:

http://brain.cc.kogakuin.ac.jp/~kanamaru/Chaos/e/Logits/
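For IT professionals who would rather see code than a table, here is a minimal Python sketch of the logistic map (written as a simple loop rather than a literal recursive call) that reproduces the run above:

```python
def logistic(a, x0, n):
    """Iterate the logistic map X_{n+1} = a * X_n * (1 - X_n),
    returning the list X_0, X_1, ..., X_n."""
    xs = [x0]
    for _ in range(n):
        xs.append(a * xs[-1] * (1.0 - xs[-1]))
    return xs

xs = logistic(1.6, 0.5, 50)
# For a = 1.6 the population settles at the fixed point 1 - 1/1.6 = 0.375
```

The fixed point is no accident: setting X_{n+1} = X_{n} = X in the logistic map gives X = 1 - 1/a, which for a = 1.6 is exactly 0.375.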

Figure 4 is a plot of the above table with a starting point of X_{0} = 0.5 and a = 1.6 that converges to a value of 0.375. The inverted parabola is the graphical depiction of the logistic map, while the line at an angle of 45° can be used to reflect the output of each iteration back into the next iteration as its input. The colored lines bouncing back and forth between the inverted parabola and the 45° mirror line are just showing the output of each iteration being fed back in as the input for the next iteration of the loop.

Figure 4 – Plot of the logistic map for a =1.6, with a starting point X_{0} of 0.5 (click to enlarge)

Figure 5 is also a plot of the logistic map for a = 1.6, but in this plot we show the trajectories of two additional starting populations for X_{0}. We see that all three starting populations converge to the same final value of 0.375. This is because the inverted parabola is fairly linear when a = 1.6, so the logistic map for this value of "a" is behaving in a linear manner and is not very sensitive to initial conditions.

Figure 5 – Plot of the logistic map for a =1.6, displaying insensitivity to initial conditions (click to enlarge)

In Figure 6 we increase "a" to a value of a = 2.86, keeping the starting population X_{0} = 0.5. Notice that X_{n} converges to a higher value than it did when a = 1.6 and that there is an initial transient in the plot that eventually dies out.

Figure 6 – Plot of the logistic map for a = 2.86, displaying a transient that dies out (click to enlarge)

In Figure 7 we increase "a" to 3.13 and we see X_{n} begin to oscillate between two values in what is called the first bifurcation.

Figure 7 – Plot of the logistic map for a =3.13, displaying the first bifurcation (click to enlarge)

In Figure 8 we increase "a" to 3.54 and observe that X_{n} has gone through a second bifurcation and now oscillates between four values.

Figure 8 – Plot of the logistic map for a =3.54, displaying the second bifurcation (click to enlarge)

Finally, in Figure 9, we increase "a" to 3.709 and find that the logistic map has entered the region of chaos. The values of X_{n} now seem to behave randomly and change very quickly as the variable "a" is varied by very small amounts.

Figure 9 – Plot of the logistic map for a =3.709 displaying chaos (click to enlarge)

Figure 10 again shows the logistic map for a = 3.709, but with three closely spaced starting populations X_{0}. Notice that although the starting populations are very close together, their trajectories dramatically diverge from one another after just a few iterations. The logistic map is now behaving very nonlinearly in this chaotic region. Although the logistic map is totally deterministic, in that the very same starting population X_{0} and value of "a" will always produce the very same results, the logistic map is now totally unpredictable and seems to behave in a random manner.

Figure 10 – Plot of the logistic map for a =3.709, displaying that the chaotic range is very sensitive to initial conditions (click to enlarge)
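The sensitivity shown in Figure 10 can be reproduced with the same kind of loop. In this Python sketch (the 0.00001 offset and the 200-iteration run length are my own choices), two almost identical starting populations are pushed through the chaotic regime at a = 3.709:

```python
def logistic_orbit(a, x0, n):
    """Return n iterates of the logistic map starting from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(a * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two starting populations differing by only one part in fifty thousand
run1 = logistic_orbit(3.709, 0.50000, 200)
run2 = logistic_orbit(3.709, 0.50001, 200)
gap = [abs(p - q) for p, q in zip(run1, run2)]
# In the chaotic regime the tiny initial difference is amplified until
# the two trajectories bear no resemblance to one another
```

Both runs are completely deterministic, yet after a few dozen iterations they are as different as two random sequences; that is exactly the behavior the applet displays.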

In my current IT job, we use a product called Wily to monitor our WebSphere J2EE appservers. Figure 10 looks remarkably like the Wily plot of the response time of a WebSphere EJB or other WebSphere component. That is because the complex architecture supporting a modern website is a nonlinear system of load balancers, webservers, J2EE application servers, database servers, proxy servers, email servers and gateway servers to mainframes, with a high degree of coupling and interdependence between the components. Linear systems, on the other hand, are composed of highly independent components that do not interact with each other to a great degree, like the planets of the solar system orbiting the Sun or traffic on an expressway at 4:00 AM. The behavior of linear systems can be understood by decomposing them into their component parts because the whole is equal to the sum of the parts. What we have learned with chaos theory, with the aid of computers, is that we cannot use this reductionist approach for nonlinear systems because the whole is greater than the sum of the parts for nonlinear systems. As we have already seen, small changes to nonlinear systems can cause dramatic effects; what Ed Lorenz called the “Butterfly Effect”, where the flapping of the wings of a butterfly in Chicago can cause a thunderstorm in London a week later. Nonlinear systems behave much more like weather systems or heavy traffic on an expressway at 4:00 PM, when a trucker lightly steps on the brakes to avoid a newspaper on the road, and causes a major traffic jam 20 minutes later. The response time of an EJB may appear random, but it is not, because the response time of an EJB is deterministic. If you feed exactly the same load into an EJB and keep all the initial conditions identical between runs, you will get the same response time curve every time. But the response time of an EJB is chaotic and that is why it *appears* to be random.

This explains why 50% of the time we never find a root cause for website outages. The reason is that we are dealing with a highly nonlinear system subject to the “Butterfly Effect”. Yes indeed, there is some root cause for the outage, but the root cause is so inconsequential that we never find it, because it hides in the white noise of problems that constantly bombard the Command Center of any large IT operation. And that is why a good percentage of our IT problems “resolve without support intervention”. The transient problems are like thunderstorms on a warm July afternoon, that roll in and then move on of their own accord. We have to stop thinking of IT operations as a predictable linear system, like the planets orbiting the Sun, and instead, think of it more as a nonlinear weather system not totally within our control. But that does not mean that we should give up all hope of taming IT operational problems. Just as it is prudent to have a plan for hurricanes and tornadoes, there are some measures that we can take to alleviate problems. First of all, as we have seen with pendulums and the logistic map, nonlinear systems frequently have regimes in which they display linear behavior, so for pendulums, if we keep the swing of the pendulum below 25°, it will behave like a nice linear system. Similarly for the logistic map, you can think of the variable "a" as a load parameter that defines the degree of nonlinearity of the logistic map. If we keep the variable "a" to a low value as in Figure 4, it too will behave like a predictable linear system that is not very sensitive to initial conditions. And finally, if you build expressways with enough lanes to keep traffic densities low, they too will behave nicely. So it is important to size your IT infrastructure with a good deal of extra capacity, so that your nonlinear infrastructure operates in a range where it displays predictable linear behavior. There is another important lesson to learn from chaos theory.
When an IT outage does occur, the primary goal should be to restore normal operations as quickly as possible. Either fail over to your backup infrastructure or immediately start bouncing suspected components. Do not waste time looking for root causes, since there is a good chance that the root cause will never be found because it is so inconsequential that it does not readily stand out. You can always look for root causes later in logs and dumps.

**The Fractal Nature of Chaos**

Our final stop in the story of chaos again brings us back to 1975. In 1975, Mitchell Feigenbaum was at the Los Alamos National Laboratory working on turbulence. Feigenbaum had been issued a HP-65 calculator for the project, the first pocket calculator that could be programmed. I vividly remember the day in 1972, when my professor in Analytical Mechanics walked into class with an HP-35 calculator, the very first electronic slide rule calculator. He spent the whole class going over its functions, and we all sat there dumbfounded. That’s when I learned that 355/113 = 3.1415929 … matches π to six decimal places (an easy mnemonic: write down 113355 and split it in half to get 355/113)! Feigenbaum programmed his HP-65 to run the logistic map described above. As he experimented with the logistic map on the HP-65, he noticed a strange thing about the onset of the successive period-doubling bifurcations seen in Figures 7 and 8 above. He determined that the ratio of the differences in the variable "a" between successive bifurcations approached 4.6692016090 …, and he was able to prove this mathematically. He then discovered that this same constant ratio between bifurcations was found for all sorts of other mathematical functions more complicated than the simple logistic map. This "ratio of convergence" is now known as the Feigenbaum constant and is a new constant of nature like π = 3.1415927 … In 1978, Feigenbaum published his findings in the paper *Quantitative Universality for a Class of Nonlinear Transformations*.
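Feigenbaum’s discovery can be checked with a little arithmetic. The values of "a" below are standard published estimates (not computed here) of the parameter at which each of the first few period-doubling bifurcations of the logistic map occurs:

```python
# Approximate published values of "a" at the logistic map's first
# period-doubling bifurcations (periods 2, 4, 8, and 16 appear):
a1, a2, a3, a4 = 3.000000, 3.449490, 3.544090, 3.564407

ratio1 = (a2 - a1) / (a3 - a2)  # first gap over second gap
ratio2 = (a3 - a2) / (a4 - a3)  # second gap over third gap
# The successive ratios close in on Feigenbaum's constant 4.6692016090...
```

The first ratio comes out near 4.75 and the second near 4.66, already closing in on 4.6692016090 …, just as Feigenbaum found on his HP-65.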

In 1975, mathematician Benoît Mandelbrot coined the term fractal to describe self-similar structures like coastlines that seemed to have a geometry based upon a fractional dimension. As they taught you in high school geometry, a point has a dimension of 0, while the points on a line have a dimension of 1, the points defining a plane have a dimension of 2, and the points in a cube have a dimension of 3. But Mandelbrot wondered about things like the wiggly line that describes a coastline. Mandelbrot proposed that a wiggly coastline must have a dimension somewhere between 1 and 2. For example, it has been calculated that the coastline of Great Britain has a dimension of about 1.24. He also noted that coastlines always looked the same no matter the scale. If you start very far out in space and then continuously zoom in on a coastline, it always has the same basic wiggly shape. Mandelbrot published these findings in *Les objets fractals, forme, hasard et dimension* (1975; an English translation *Fractals: Form, Chance and Dimension* was published in 1977). Strangely enough, Mandelbrot’s concept of fractal geometry had a role to play in chaos theory too. For example, the orbits in phase space of a chaotic system about a strange attractor are fractal in that they always look the same as you zoom in or out from the strange attractor.

**Feigenbaum’s Bifurcations are Fractal**

Let us examine the role of fractals in the logistic map. Figures 11 and 12 were generated with an applet available at:

http://brain.cc.kogakuin.ac.jp/~kanamaru/Chaos/e/BifArea/

In Figure 11 we have a plot of the value that X_{n} converges to vs. the variable "a" of the logistic map. Notice that in the region where "a" varies from 2.0 – 3.0, the value of X_{n} slowly increases. Then around a = 3.0, X_{n} bifurcates and begins to take on two distinct values. At around a = 3.45, X_{n} bifurcates again and begins to take on four distinct values. This bifurcating and doubling of values continues at an increasing pace, with the spacing between bifurcations shrinking by Feigenbaum’s ratio, as the variable "a" is increased. Finally, when "a" reaches a high enough value, X_{n} breaks into chaos and takes on a large number of seemingly random values.
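The bifurcation cascade of Figure 11 can also be seen numerically by counting how many distinct values X_{n} settles into for each "a". In this Python sketch, the transient length, sample count, and rounding precision are my own choices:

```python
def attractor(a, transient=2000, samples=64):
    """Long-run behavior of the logistic map for a given "a":
    discard a transient, then collect the values X_n keeps visiting."""
    x = 0.5
    for _ in range(transient):
        x = a * x * (1.0 - x)
    values = set()
    for _ in range(samples):
        x = a * x * (1.0 - x)
        values.add(round(x, 6))
    return sorted(values)

# One settled value before the first bifurcation, two after it,
# four after the second, and many once "a" reaches the chaotic region
sizes = [len(attractor(a)) for a in (2.8, 3.2, 3.5, 3.9)]
```

Running it shows the counts 1, 2, 4 and then dozens of values, which is the bifurcation diagram of Figure 11 in miniature.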

Figure 11 – Plot of X_{n} vs. "a" displaying bifurcations (click to enlarge)

In Figure 12 we zoom in on Figure 11 and see that the plot of X_{n} vs. the variable "a" is fractal: it looks the same at all scales, like the coastline of Great Britain.

Figure 12 – Zoomed in plot of X_{n} vs. the variable "a", showing that bifurcations are fractal (click to enlarge)

To further explore the fascinating world of fractals, download the Fractal Explorer application for free at:

http://www.softpedia.com/get/Multimedia/Graphic/Graphic-Others/Fractal-Explorer.shtml

Fractal Explorer is a very sophisticated application for generating fractals and is a good successor to the classic DOS Fractint application which, sadly, no longer runs on most PCs. Of course, you will begin by exploring the classic Mandelbrot set, the most complicated mathematical figure known to man. The Mandelbrot set easily fits on your computer screen, but its boundary is infinitely long and has a fractal dimension of 2.0 – the same as for an entire plane!

**The New Worldview**

So the linear, deterministic, predictable, and controllable Universe of the 1950s has been replaced by the nonlinear, deterministic, chaotic and uncontrollable Universe of today. The recognition in the 1980s that the whole is greater than the sum of the parts led to an outgrowth of chaos theory now called complexity theory, which looks at complex systems in much the same way as chaos theory does. Complexity theory is a very new multidisciplinary science composed of physicists, economists, mathematicians, biologists, computer scientists, electrical engineers and anybody else who is interested, bucking the long historical trend of reducing science itself to a patchwork of fragmented and isolated disciplines that no longer speak to each other. It is a throwback to the original approach of the Scientific Revolution in the 17th century, which took on Nature as a whole. Complexity theory is an active area of ongoing research at many institutions, stemming from the original work carried on at the Santa Fe Institute in New Mexico, founded in 1984 by George Cowan, and the Center for Complex Systems Research at my old alma mater, the University of Illinois, founded in 1986 by Stephen Wolfram. In his book *Complexity: The Emerging Science at the Edge of Order and Chaos* (1993), M. Mitchell Waldrop defines complexity as *"a chaos of behaviors in which the components of the system never quite lock into place, yet never quite dissolve into turbulence either"*. Complexity lies at the edge of chaos, like the complex boundary of the Mandelbrot set, which sits on the knife edge of chaos between the strange attractors of infinity and the point (0,0) in the complex plane. Although this edge of chaos is a narrow transition phase, it can be vast, like the infinite perimeter of the Mandelbrot set.

It is amazing how we human beings can ignore the obvious once we get an idea into our heads. For example, Aristotle taught that when you throw a ball, it travels in a straight line until it runs out of impetus, and then it falls straight to the earth. Obviously, for anyone who has ever thrown a ball this is clearly false, but it held sway for nearly 2,000 years until Galileo pointed out that a thrown ball follows a curved parabolic path. Similarly, the model of a linear, deterministic, predictable, and controllable Universe slowly approaching equilibrium at maximum entropy is clearly wrong. Just look around! The Universe is amazingly vibrant and complex; not dull and boring. But this new worldview has not quite penetrated the Zeitgeist of our times. Most people still think in linear terms. The fact that engineers build nonlinear systems, but only run these systems in regions of linear behavior, so that the nonlinear systems are predictable and controllable like our Grandfather clock, only reinforces the misconception that the Universe is linear. For example, presidents frequently lose elections because people actually believe that the highly nonlinear and chaotic national economy can be controlled by the president! But a linear predictable Universe slowly degenerating into a state of maximum entropy would truly be a boring Universe indeed and possibly a Universe incapable of supporting living things. Perhaps only a nonlinear chaotic Universe, such as ours, that is far from equilibrium, is capable of the emergent complexity required for living things. For more information on the Santa Fe Institute see:

http://www.santafe.edu/

and for the Center for Complex Systems go to:

http://www.ccsr.illinois.edu/
**One Final Thought on Nonlinearity for IT Professionals**

Being an 18th century liberal and following the advice of Winston Churchill, I began in my youth as a 20th century liberal in the 1960s and aged into a 20th century conservative in the 1970s, sadly a political philosophy that is now deader than 18th century liberalism. As a 20th century conservative, I believe that it is important for all of us to take personal responsibility for our actions, and I believe that there is a pressing issue at hand important to all IT professionals that needs to be raised, even if it is not welcome news.

As we have seen, weather systems are examples of complex nonlinear systems that are very sensitive to small changes to initial conditions. Well, the same goes for the Earth’s climate; it is a highly complex nonlinear system that we have been experimenting with for more than 200 years by pumping large amounts of carbon dioxide into the atmosphere. The current carbon dioxide level of the atmosphere has risen to 383 ppm, up from a level of about 280 ppm prior to the Industrial Revolution. Now if this trend continues, computer models of the nonlinear differential equations that define the Earth’s climate indicate that we are going to melt the polar ice caps and also the ice stored in the permafrost of the tundra. If that should happen, sea level will rise about 300 feet, and my descendants in Chicago will be able to easily drive to the new seacoast in southern Illinois for a day at the beach. This is not a fantasy. The Earth usually does not have polar ice caps; we just happened to have arrived on the scene at a time when the Earth is unusually cold and has polar ice caps. From my home in the suburbs of Chicago, I can easily walk to an abandoned quarry of a Devonian limestone reef, clear evidence that my home was once under the gentle waves of a shallow inland sea several hundred million years ago. Global warming is not an environmental problem. Most environmental problems can usually be corrected in a few decades or so by halting the production of one offending organic molecule and substituting a more benign molecule in its place. Global warming, on the other hand, is a geophysical problem that might take hundreds of thousands of years to correct. In a future posting, we will discuss the resilience of biological systems in adapting to new environments, no matter how challenging the situation, so although global warming will initially cause a great deal of biological damage, the biological systems will adapt to the new climate.
The real impact of global warming will fall mainly upon the human race and will hurt billions of people because our cultures and economies are tuned to the present cool climate of the Earth.

Now for those of you who think that global warming is a hoax, let me try a different tack. First of all, let me say that I respect your position. As I have said in the past, I try not to believe in things, but rather have a level of confidence in theories. On February 2, 2007, the Intergovernmental Panel on Climate Change, made up of hundreds of scientists from 113 countries, published a report that claimed the panel was about 90% confident that human-generated greenhouse gases accounted for most of the global rise in temperatures over the past half-century. But yes, the bulk of the scientific community might be wrong about global warming. After all, we are dealing with nonlinear systems here, and as we have seen in this posting, nonlinear systems are pretty flaky. Then again, it is not too smart to mess with nonlinear systems because we know how easily they get ticked off by the slightest thing. But certainly you have noticed the $4.00 gasoline at your local gas station lately. The Universe is trying to send us a message. Today, the world will burn 82 million barrels of oil and will find less than 41 million barrels to replace it. And this has been going on for more than a decade. But even the recent rise in gasoline prices has not significantly decreased gasoline consumption because gasoline consumption suffers from inelastic demand, in that the amount of gasoline you use each week does not change greatly with price. If gasoline is $0.40 per gallon or $4.00 per gallon, you still have to drive to work, buy groceries and pick up the kids at soccer games. But in the next 10 – 20 years world oil production will peak and as inelastic demand bumps its head against a rigid supply ceiling, gasoline prices will rise exponentially. You probably have already begun to notice this price rise at your local gas station, as we begin to approach the peak in world oil production. 
So it makes sense from the perspective of a strong national defense not to spend the remainder of the 21st century fighting over the last trillion barrels of Persian Gulf oil with China and the other emerging energy consumers. As Alan Greenspan has pointed out, over the past seven years we have already spent over $875 billion defending our access to Persian Gulf oil, and we cannot afford to keep doing so and remain a solvent sovereign state. I have seen estimates in *Scientific American* of $450 billion in government subsidies over a 40 year period to convert the U.S. to a solar economy. That sounds like a comparative bargain to me for something so critical to our national defense. Now why would a 20th century conservative want the government to subsidize the conversion of the U.S. energy infrastructure? Because we already did this once before when we built the Dwight D. Eisenhower National System of Interstate and Defense Highways under the National Interstate and Defense Highways Act of 1956 for a total cost of $425 billion in 2006 dollars, and it took over 35 years to complete. Yes, the Interstate Highway system was started by President Eisenhower, another 20th century conservative, as a defense measure and is the largest public works project ever completed by mankind. You see, in 1919, Eisenhower participated in the Army's first transcontinental motor convoy from Washington, D.C., to San Francisco, which took a grueling 62 days, and during World War II, Eisenhower saw the military advantages of the German autobahn system. Eisenhower later noted that *”The old convoy had started me thinking about good, two-lane highways, but Germany had made me see the wisdom of broader ribbons across the land”*. This was the height of the Cold War, and in the event of a nuclear war with the Soviet Union, an Interstate Highway system between cities would have been essential for the survivors.
I still remember doing duck-and-cover drills under my schoolroom desk and each fall bringing home the obligatory civil defense pamphlet on how to build a fallout shelter. Granted, converting the U.S. energy infrastructure to energy independence through the use of renewable energy resources that cannot be disrupted by foreign powers would certainly be comparable to the cost and efforts of building another Interstate Highway system, but certainly it would provide an even greater national security benefit, and like the Interstate Highway system, even greater commercial benefits as well.

Here is a dramatic example of what I am talking about. If there is a war with Iran, something 20th century conservatives would seek to avoid through deterrence, the Iranians will try to blockade the narrow Strait of Hormuz between Iran and Oman with missiles, rockets, and anything else that is available. Again, this is something I would recommend avoiding through deterrence. Paraphrasing Clausewitz: diplomacy is war by other means. The Strait of Hormuz is a mere 34 miles wide at its narrowest point, with just two one-mile-wide navigational channels for passage into and out of the Persian Gulf. Fully 20% of the world's oil production must pass through the narrow Strait each day on slow-moving tankers that would be easy targets. And because of the growing alliance between Venezuela and Iran, there is a good chance that Venezuela would boycott the U.S. and divert its tankers to an oil-starved Europe. When I began class work for my M.S. in geophysics at the University of Wisconsin in September 1973, oil was trading for $2.50/barrel. But when the Yom Kippur War broke out a month later in October 1973, the Arab members of OPEC boycotted the U.S. because of our support of Israel during the war. By Christmas of 1973, oil was trading for $12.50/barrel, and the U.S. was plagued by gas lines and rationing. Now in 1973 the U.S. used only about 15 million barrels of oil per day, of which 1/3 was imported, while today we use 21 million barrels per day and 2/3 is imported. Because of the inelastic demand for gasoline, a 20% reduction in world oil production could easily push gasoline to $10 - $15 per gallon. So for our own self-preservation, we need to convert our dependency on foreign oil to nuclear energy and renewable energy resources like wind, solar, and geothermal energy.
In the meantime, considering the great sacrifices that have already been made, if we really want to be patriotic Americans, we need to treat the conservation of gasoline with the same sense of urgency and commitment that our forebears did during World War II by not squandering gasoline shamelessly as we do today.
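To put the import figures above in perspective, here is a quick back-of-the-envelope check, using only the consumption numbers and import fractions quoted in the text:

```python
# Imported oil, then and now, from the figures quoted above:
# 1973: about 15 million barrels/day, with 1/3 imported.
# Today: about 21 million barrels/day, with 2/3 imported.

def imported_barrels(daily_usage_millions, import_fraction):
    """Millions of barrels of imported oil per day."""
    return daily_usage_millions * import_fraction

imports_1973 = imported_barrels(15, 1 / 3)   # about 5 million bbl/day
imports_now = imported_barrels(21, 2 / 3)    # about 14 million bbl/day

# Import dependence has nearly tripled in absolute terms.
print(imports_1973, imports_now, imports_now / imports_1973)
```

In other words, even though total consumption grew by less than half, the imported share nearly tripled, which is why a supply disruption abroad hits so much harder today than it did in 1973.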

But how do you get 6 billion people to cooperate in reducing carbon dioxide emissions if it requires the self-sacrifice of the individual for the greater good of all? Clearly, given the limitations of human nature, if that is our only recourse, we are all doomed. However, in the past decade or so we have been able to get nearly 6 billion people to cooperate through the spread of capitalism and the globalization of the world economy. But this was not accomplished through the self-sacrifice of the individual. Instead, it was accomplished through the amazing “Invisible Hand” of Adam Smith, which allows the self-interest of the individual to benefit society as a whole. The only way we can possibly solve this problem is by making it easier and cheaper not to emit carbon dioxide than to continue on with business as usual.

This is where you come in as an IT professional. IT technology has tremendous potential for reducing our consumption of oil. For example, I work for a really great company that has recently instituted a “hoteling” initiative that allows IT professionals to work at least 50% of the time from home. The purpose of the initiative is to have the IT department work out the bugs in the infrastructure, so that the program can be extended to the rest of the corporation as well. Cisco instituted a similar program several years ago and believes that it has already saved $250 million in real estate costs alone. Just do a search on telecommuting to see what is happening out there in cyberspacetime. All you need is broadband connectivity to a good VPN, some email and instant messaging software, some voice over IP telephony software, and some software for Web conferencing for meetings and group collaboration efforts, and you have a virtual distributed office network! Your company probably has a lot of this in place already. Our set-up is so good that it passes the equivalent of a Turing Test: I cannot tell whether my co-workers are in the office or out of the office when I work with them. We have the technology to solve this problem. What really is needed is a change in thinking. As I pointed out previously, cyberspacetime is flat. Imagine how much energy could be saved if we converted all office work to a distributed office platform based upon telecommuting! The reason we physically travel to a central location for office work stems from a change in work habits brought about by the Industrial Revolution. Prior to the Industrial Revolution, craftsmen worked out of their homes. It was the steam engines of the Industrial Revolution that brought us together, but the steam engines are gone now. As I pointed out previously, as IT professionals we are all warriors in the Information Revolution.
So if your company has not instituted a similar program, gather up a little courage and email your CEO a proposal explaining how moving to a virtual distributed office system based upon telecommuting can save your company and fellow employees a great deal of money and ensure business continuity in the event of an extended disruption of Persian Gulf oil. You can play a significant role in fixing this problem through the implementation of IT technology, so put your IT skills and abilities to work in securing our energy independence. After all, you have a lot less to lose than those first Americans who signed the Declaration of Independence.

This concludes the portion of softwarephysics that deals mainly with physics. We will touch again on many of these topics in a future posting on cybercosmology, but little new material will be introduced at that time. Next time we will use all of the softwarephysics that we have covered to date to finally frame the fundamental problem of software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:

http://softwarephysics.blogspot.com/

Regards,

Steve Johnston