Saturday, April 08, 2017

Introduction to Softwarephysics

Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance and support based upon concepts from physics, chemistry, biology, and geology that I have been using on a daily basis for over 35 years as an IT professional. For those of you not in the business, IT is short for Information Technology, commercial computer science. The original purpose of softwarephysics was to explain why IT was so difficult, to suggest possible remedies, and to provide a direction for thought. Since then softwarephysics has taken on a larger scope, as it became apparent that softwarephysics could also assist the physical sciences with some of the Big Problems that they are currently having difficulties with. So if you are an IT professional, general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology then softwarephysics might be of interest to you, if not in an entirely serious manner, perhaps at least in an entertaining one.

The Origin of Softwarephysics
From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT, and spent about 20 years in development. For the past 17 years, I have been in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I first transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software, by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance, and relies upon our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 75 years, through the uncoordinated efforts of over 50 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

For more on the origin of softwarephysics please see Some Thoughts on the Origin of Softwarephysics and Its Application Beyond IT.

Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily upon two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based upon real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models upon which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields.

Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In his Principia (1687), Newton outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.
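In modern notation, the law of gravitational attraction that Newton described verbally above is the familiar inverse-square relationship, where $G$ is the gravitational constant, $m_1$ and $m_2$ are the two masses, and $r$ is the distance between them:

$$F = G\,\frac{m_1 m_2}{r^2}$$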

The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving at less than 10% of the speed of light and which are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics, and for very fast things, or things in strong gravitational fields, we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based upon models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide an effective theory of software behavior that makes useful predictions applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave, and we try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.
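To make the notion of an effective range concrete, here is a minimal sketch in Python that computes the relativistic Lorentz factor, the standard measure of how far Newtonian mechanics is off at a given speed. The 10% figure quoted above is a rule of thumb rather than a hard boundary, and the few lines below are just an illustration, not part of any production system:

```python
# Minimal sketch: how the Newtonian approximation degrades with speed.
C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v):
    """Relativistic gamma factor for a body moving at speed v (v < C)."""
    return 1.0 / (1.0 - (v / C) ** 2) ** 0.5

for fraction in (0.01, 0.10, 0.50, 0.90):
    gamma = lorentz_factor(fraction * C)
    print(f"v = {fraction:.0%} of c -> gamma = {gamma:.5f} "
          f"(Newtonian error ~ {gamma - 1.0:.2%})")

# At 1% of c the error is ~0.005%, at 10% of c it is still only ~0.5%,
# but at 90% of c gamma exceeds 2 and the Newtonian picture fails badly.
```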

GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately. In order to do that, it is very important to have very accurate timing measurements. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips on board the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are further from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day due to the weaker gravitational field out there. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, errors in your position of roughly 10 kilometers per day would accrue. The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics, or QED (1948), which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!
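For readers who want to check those clock numbers for themselves, here is a rough back-of-the-envelope sketch in Python. The constants are standard textbook values and the first-order formulas are only approximations of the full relativistic treatment actually used by the GPS system, so the results land close to, but not exactly on, the figures quoted above:

```python
# Back-of-the-envelope GPS clock corrections from special and general relativity.
G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24                 # mass of the Earth, kg
C = 2.998e8                  # speed of light, m/s
R_EARTH = 6.371e6            # radius of the Earth, m
R_ORBIT = R_EARTH + 2.02e7   # GPS orbital radius (~12,600 mile altitude), m
SECONDS_PER_DAY = 86_400

# Orbital speed for a circular orbit: v = sqrt(GM/r)
v = (G * M / R_ORBIT) ** 0.5

# Special relativity: a moving clock runs slow by roughly v^2 / (2 c^2)
sr_loss = -(v ** 2) / (2 * C ** 2) * SECONDS_PER_DAY

# General relativity: a clock higher in the Earth's gravitational well runs
# fast by roughly (GM/c^2) * (1/R_earth - 1/R_orbit)
gr_gain = (G * M / C ** 2) * (1 / R_EARTH - 1 / R_ORBIT) * SECONDS_PER_DAY

net = sr_loss + gr_gain
print(f"special relativity : {sr_loss * 1e6:+.1f} microseconds/day")
print(f"general relativity : {gr_gain * 1e6:+.1f} microseconds/day")
print(f"net clock drift    : {net * 1e6:+.1f} microseconds/day")
print(f"uncorrected position error: ~{net * C / 1000:.1f} km/day")
```

Running this yields roughly -7.2 and +45.7 microseconds per day, a net drift of about +38.5 microseconds per day, and an accumulated ranging error on the order of 11 to 12 kilometers per day.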

So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based upon completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based upon models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark upon your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.

If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.

Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 25 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog an unintended consequence arose in my mind as I became profoundly aware of the immense scale of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.

But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For example, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 75 years, or 2.4 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, has closely followed the same path as life on Earth over the past 4.0 billion years in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 75 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and also of Freeman Dyson’s two-stage theory of the origin of life on Earth. In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.

Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10^9 Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10^43 Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. Next, in Model-Dependent Realism - A Positivistic Approach to Realism we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we at long last explore what software might really be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.

The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that most computer simulations of the physical Universe suffer. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call upon the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.

The Impact of Self-Replicating Information Upon the Planet
One of the key findings of softwarephysics is concerned with the magnitude of the impact upon the planet of self-replicating information.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Note that because the self-replicating autocatalytic metabolic pathways of organic molecules, RNA, and DNA have become so heavily intertwined over time, I now simply refer to them collectively as the “genes”. Over the past 4.0 billion years, the surface of the Earth has been totally reworked by three forms of self-replicating information – the genes, memes and software, with software rapidly becoming the dominant form of self-replicating information on the planet. For more on this see:

A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia

Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact upon one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what it’s all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely a working hypothesis still in the works. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:

MoneyPhysics – my impression of the 2008 world financial meltdown.

The Fundamental Problem of Everything – if you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!

What’s It All About? and Genes, Memes and Software – my current working hypothesis on what it’s all about.

How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.

Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.

The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and The Dawn of Galactic ASI - Artificial Superintelligence - my take on some of the issues that will arise for mankind as software becomes the dominant form of self-replicating information upon the planet over the coming decades.

Making Sense of the Absurdity of the Real World of Human Affairs - how software has aided the expansion of our less desirable tendencies in recent years.

Some Specifics About These Postings
The postings in this blog are a supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton upon which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read in reverse order from the oldest to the most recent, beginning with my original posting on SoftwarePhysics. In addition, several universities also now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". At this point, my original plan for this blog on softwarephysics finishes up with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.

For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because in order to get the Introduction to Softwarephysics listed as the first post in the context root of http://softwarephysics.blogspot.com/ I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of http://softwarephysics.blogspot.com/. The side effect of all this is that the real posting date of posts is the date that appears on the post that you get when clicking on the Newer Post link at the bottom left of the posting webpage.

SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Microsoft OneDrive.

SoftwarePhysics 101 – The Physics of Cyberspacetime - Original PowerPoint document

Entropy – A spreadsheet referenced in the document

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, March 07, 2017

Cyber Civil Defense

In my posting Cyber Defense about six years ago I warned that, just as the Powers That Be accidentally unleashed the global disaster of World War I upon mankind more than 100 years ago, the current world powers may not fully understand what they have wrought with their large stockpiles of cyberweapons and cybersoldiers. Recall that the world powers that ran the world 100 years ago, just prior to World War I, did not recognize the game-changing effects of the mechanization of warfare. The development at the time of high-volume rail systems capable of quickly transporting large numbers of troops and munitions, the invention of the machine gun, and the arrival of mechanized transport vehicles and tanks greatly increased the killing power of nation-states. But this was not generally recognized by the Powers That Be prior to the catastrophe of World War I, which resulted in 40 million casualties and the deaths of 20 million people for apparently no particular reason at all. Similarly, it now seems that the first large-scale cyberattack by Russia upon the United States of America may have successfully elected a president of the United States, but in this posting I would like to propose that there may be some dreadful unintended consequences to this incredible Russian cybervictory that could leave even more dead in its wake. First of all, we should take note that this was not the first president of the United States that Russia managed to elect to office.

I was born in 1951 during the Korean War, and so I lived through all of the very tense Cold War events of the 1950s and 1960s, including the Cuban Missile Crisis of October 1962, which brought us all closer to the prospect of a global nuclear war than we should ever have come, and so let us begin there. On October 4, 1957, the Soviet Union successfully launched Sputnik 1 atop an R-7 rocket, making it the world's very first man-made object to enter Earth orbit. Earlier in 1957, the Soviet R-7 had become the world's very first functional ICBM after its successful 3,700 mile test flight on August 21, 1957. At the time, all of these Russian firsts threw the United States of America into a Cold War frenzy that is now hard to fathom, and had a huge impact upon the country. For example, it built my high school and put me through college. Back in the 1950s, School District 88 in Illinois was having a hard time trying to convince the stingy local residents of the need for a new high school in the area. But that all changed in January of 1958, after the launch of Sputnik 1, when the local residents eagerly voted in a referendum to build a new Willowbrook High School out of the fear generated by Sputnik 1 and the demonstrable superiority of Russian missile technology at the time. Suddenly, Americans began to take science and education seriously once again, and finally began to hold science and education in the esteem that they actually deserved. For example, in 1969 when I first began work on a B.S. in physics at the University of Illinois, tuition was only $181 per semester, and I was easily able to put myself through college simply by cleaning movie theaters seven days a week during the summers at $2.25/hour. For my M.S. in geophysics at the University of Wisconsin, my tuition and fees were waived, and I received a generous stipend to live on while working as a research assistant, courtesy of a grant from the National Science Foundation. The end result was that, in 1975 when I finished school, I had $3000 in the bank, instead of the crushing student debt that most graduates now face, because the United States had not yet given up on supporting education, as it did after the Cold War seemed to have ended on December 25, 1991, when the Soviet Union collapsed under its own weight.

Figure 1 - The launch of Sputnik 1 by the Russians on October 4, 1957 on top of an R-7 ICBM rocket threw the United States of America into a Cold War panic that is now hard to imagine.

But the Russians did far more than that with Sputnik 1. They also managed to elect their very first president of the United States with it. Given the astounding success that the Soviets had had with Sputnik 1 and the R-7 ICBM in 1957, early in 1958 John F. Kennedy seized upon the issue as a "missile gap" with the Soviet Union that the Eisenhower Administration had failed to prevent. Now it turns out that by November of 1960 the "missile gap" had largely been closed in reality, but it still remained in the public zeitgeist of the time as a real issue, and it helped to elect John F. Kennedy to become president of the United States by a very narrow margin. Apparently, John F. Kennedy actually knew at the time that the "missile gap" was really a myth, but just the same, used it as a useful political tool to help get elected. The Soviets, on the other hand, regarded Kennedy's "missile gap" and the attempted Bay of Pigs invasion of Cuba in 1961 as indications that Kennedy was a dangerous and weak leader who might cave in to his more militaristic generals, like General Curtis LeMay, during a crisis and launch a nuclear first strike. The Soviet R-7 ICBMs actually required about 20 hours of preparation to launch, so the R-7 ICBMs were easy targets for conventional bombers to take out before they could be launched during a global nuclear war, and thus the R-7 ICBMs were actually less threatening than long-range bombers, like the B-52. All of this led the Soviet military planners to conclude that additional deterrence measures were in order, and as a consequence, plans were put into place to install medium-range nuclear missiles in Cuba that were more accurate than the R-7. When these missiles were first discovered by American U-2 flights in September of 1962, the Cuban Missile Crisis of October 1962 soon followed. At the time, Kennedy's generals recommended an invasion of Cuba in response, but fortunately for all, Kennedy turned out to be a stronger leader than the Soviets had predicted, and Kennedy countered with a Cuban blockade instead, which allowed both sides enough time to come to their senses. There are now reports that, unknown to the United States at the time, the Soviet field commanders in Cuba had actually been given authority to launch the nuclear weapons under their control by the Soviet High Command in the event of an invasion, the only time such authority has ever been delegated by the Soviet High Command. The Soviet field commanders had under their control at least twenty nuclear warheads, each rated at one megaton, for the medium-range R-12 Dvina ballistic missiles capable of reaching cities in the United States, including Washington D.C., plus nine tactical nuclear missiles with smaller warheads. If the Soviet field commanders had launched their missiles, many millions of Americans would have been killed in the initial attack, and the ensuing retaliatory nuclear strike against the Soviet Union would have killed roughly one hundred million Russians. The final Soviet counter-attack would have killed a similar number of Americans.

But the above nuclear catastrophe did not happen because reasonable minds on both sides of the conflict always prevailed. In fact, all during the Cold War we had highly capable leaders in both the United States and the Soviet Union, who thought and behaved in a rational manner founded on sound logic. Now during the Cold War, the United States may not have agreed with the Soviet Union on most things, but both sides always operated in a rational and logical manner, and that is what made the MAD (Mutual Assured Destruction) stalemate work, which prevented nuclear war from breaking out and ending it all. Additionally, the scientific community in the United States always respected those in the Russian scientific community for their brilliant scientific efforts, and this limited scientific dialog helped to keep open the political channels between both countries as well. In fact, I have always been amazed by the astounding Russian scientific achievements over the years that were made without the benefits of the freedom of thought that the 18th century Enlightenment had brought to the western democracies. Despite the limitations imposed upon the Russian scientific community by a series of political strongmen over many decades, they always managed to prevail in the long term. I am not so confident that the scientific communities of the western democracies could do as well under the thumbs of the Alt-Right strongmen who wish to come to power now.

So now, thanks to Russian cyberwarfare, we have a new president of the United States of very limited ability. It seems that the principal skill of this new president lies solely in making questionable real estate deals, but he has no experience with global political nuclear strategy whatsoever, and that is very dangerous for the United States and for Mother Russia as well. True, he does seem to unquestionably favor Russia, for some unknown reason, and that unknown favor is currently being investigated by the FBI and both houses of Congress. But those investigations will take quite some time to complete. Meanwhile, we now have a mentally unhinged leader of North Korea, a Stalinist holdover from the previous century, now rapidly moving towards obtaining ICBMs armed with nuclear warheads that could strike the United States. This has never happened before. We have never had potentially warring nation-states with nuclear weapons headed by administrations that had no idea of what they were doing with such weapons. This is not good for the world, or for Russia either. In the American 1988 vice-presidential debate between Lloyd Bentsen and Dan Quayle there is a famous remark by Lloyd Bentsen, after Dan Quayle made a vague analogy between himself and John F. Kennedy, that goes "Senator, I served with Jack Kennedy. I knew Jack Kennedy. Jack Kennedy was a friend of mine. Senator, you're no Jack Kennedy." And that certainly is true of the new president of the United States that Russian cyberwarriors helped to elect. Yes, he might seem to be overly friendly to Russian interests, but his administration has already stated that military actions might be required to prevent North Korea from obtaining an ICBM capable of delivering a nuclear warhead that could strike the United States, and this new Administration has also wondered why we cannot use nuclear weapons if we already have them - otherwise, why build such weapons in the first place? An attack on North Korea could be the flashpoint that ignites a global nuclear war between the United States, North Korea and China, like the original Korean War of 1950. True, Russia itself might not get drawn into such a conflict, or maybe it would, based upon earlier precedents like World War I, but nonetheless, the resulting high levels of global radioactive fallout and nuclear winter effects resulting from a large-scale nuclear exchange would bring disaster to Russia as well.

Cyber Civil Defense - How to Build a Cyber Fallout Shelter Against External Influences
Now all during the 1950s and early 1960s, great attention was paid in the United States to the matter of civil defense against a possible nuclear strike by the Russians. During those times, the government of the United States essentially admitted that it could not defend the citizens of the United States from a Soviet bomber attack with nuclear weapons, and so it was up to the individual citizens of the United States to prepare for such a nuclear attack.

Figure 2 - During the 1950s, as a very young child, with the beginning of each new school year, I was given a pamphlet by my teacher describing how my father could build an inexpensive fallout shelter in our basement out of cinderblocks and 2x4s.

Figure 3 - But to me these cheap cinderblock fallout shelters always seemed a bit small for a family of 5, and my parents never bothered to build one because we lived only 25 miles from downtown Chicago.

Figure 4 - For the more affluent, more luxurious accommodations could be constructed for a price.

Figure 5 - But no matter what your socioeconomic level was at the time, all students in the 1950s participated in "duck and cover" drills for a possible Soviet nuclear attack.

Figure 6 - And if you were lucky enough to survive the initial flash and blast of a Russian nuclear weapon with your "duck and cover" maneuver, your school, and all other public buildings, also had a fallout shelter in the basement to help you get through the next two weeks, while the extremely radioactive isotopes from the Russian nuclear weapons rapidly decayed away.

Unfortunately, living just 25 miles from downtown Chicago, the second largest city in the United States at the time, meant that the whole Chicagoland area was destined to be targeted by a multitude of overlapping 10 and 20 megaton bombs by the Soviet bomber force, so I would have been killed multiple times over as my atoms were repeatedly vaporized and carried away in the winds of the Windy City.

Now all of these somber thoughts from the distant 1950s might sound a bit bleak, but they are the reason that I pay very little attention when I hear our Congressmen and Senators explain that we have to investigate this Russian meddling with our 2016 presidential election "so that this never happens again". But of course it will happen again because we largely did it to ourselves! The Russians, like all major foreign powers, simply exploited the deep political divide between the Democrats and Republicans in our country. This is nothing new. All major foreign powers throughout history have always sought to meddle in the internal political affairs of other countries, in order to advance their own interests. The United States of America has a long history of doing so, and rightly so! It is always far better to try to modify the ambitions of a possible adversary politically, rather than to do so later militarily. The only difference this time was that the Russians used the full capabilities of the world-wide software infrastructure that is already in place to further their ends, like another Sputnik first, in keeping with my contention that software is now rapidly becoming the dominant form of self-replicating information on the planet. Consequently, cyberspacetime is now the most valuable terrain, from a long-term strategic perspective, to be found on the planet, and once again, the Russians got there first. The Russians realized that, for less than the price of a single ICBM, they could essentially paralyze the United States of America for many years, or perhaps even an entire decade, by simply using the existing software infrastructure of the world, and the great divide between the Democrats and Republicans, to their advantage.

Now as an 18th century liberal and a 20th century conservative, I must admit that I am a 20th century Republican who has only voted for Democrats for the past 15 years. I parted with the 21st century Republican Party in 2002 when it turned its back on science, and took up some other new policies that I did not favor. So over the past 45 years, there have been long stretches of time when I was a Republican, and long stretches of time when I was a Democrat, but at all times, I always tried to remain an American and hold the best thinking of both parties dear to my heart. But the problem today is that most Republicans and most Democrats now view members of the other party as a greater threat to the United States of America than all of the other foreign powers in the world put together. This was the fundamental flaw in today's American society that the Russians exploited using our very own software! Hence, I would like to propose that, since the government of the United States cannot really protect us from such a cyberattack in the future, just as it could not back in the 1950s, we need to institute a Civilian Cyber Civil Defense program of our own. In fact, this time it is much easier to do so because we do not need to physically build and stock a huge number of fallout shelters. All we need to do is simply follow the directions of this official 1961 CONELRAD Nuclear Attack Message by not listening to false rumors or broadcasts spread by agents of the enemy:

https://www.youtube.com/watch?v=7iaQMbfazQk

which in today's divisive world simply means:

STOP BELIEVING THE STUPID THINGS YOU READ ON THE INTERNET!

In The Danger of Believing in Things I highlighted the dangers of not employing critical thought when evaluating assertions in our physical Universe, and the same goes for politics. In that posting, I explained that it is very dangerous to believe in things because that means you have turned off your faculty of critical thought, which, hopefully, allows you to uncover attempts at deception by others. If you have ever purchased a new car, you know exactly what I am talking about. Instead, you should always approach things with some level of confidence that is less than 100%, and that confidence should always be based upon the evidence at hand. In fact, at an age of 65 years, I now have very little confidence in most forms of human thought beyond the sciences and mathematics. But in today's demented political world, most Americans are now mainly foaming at the mouth over the horrible thoughts and acts of the opposition party, and paying very little attention to the actions of the other foreign powers of the world. Since the 18th century European Enlightenment brought us democracies with the freedom of speech, we all need to recognize as responsible adults that it is impossible for our government to prevent foreign powers from exploiting that freedom by injecting "fake news" and false stories into the political debate. We must realize that, although the Russians are very intelligent and sophisticated people, they never fully benefited from the 18th century European Enlightenment and the freedoms that it brought, so we cannot retaliate in kind against Russia in its next election. So the only defense we have against another similar cyberattack by Russia, or some other foreign power during an election year, is to use the skepticism and critical thinking that science uses every day. As Carl Sagan used to say, "Extraordinary claims require extraordinary evidence". So in the course of the next election cycle, if you see on the Internet or cable network news that the opposition candidate has been found to be running a child pornography ring out of a local pizza parlor in your city, stop and think. Most likely, you are being tricked by a foreign power into believing something that you already want to believe.

This may not be as easy as it first sounds because, although we all believe ourselves to be sophisticated, rational and open-minded individuals who only pay attention to accurate news accounts, we all do savor the latest piece of political gossip that we see about the opposition candidate, and are more likely to believe it, if it reconfirms our current worldview. In the April 2017 issue of Scientific American, Walter Quattrociocchi published a very interesting article, Inside the Echo Chamber, on this very subject. He showed that studies in Italy have found that instead of creating a "collective intelligence", the Internet has actually created a vast echo chamber of misinformation that has been dramatically amplified by social media software like Facebook and Twitter. The computational social scientists who study the viral spread of such misinformation on the Internet find that, frequently, users of social media software simply confine their Internet activities to websites that only feature similar misinformation that simply reconfirms their current distorted worldview. Worse yet, when confronted with debunking information, people were found to be 30% more likely to continue to read the same distorted misinformation that reaffirms their current worldview, rather than to reconsider their position. Clearly, a bit of memetics could be of help. Softwarephysics maintains that currently software is rapidly becoming the 5th wave of self-replicating information to come to predominance on the Earth, as it continues to form very strong parasitic/symbiotic relationships with the memes that currently rule the world - please see A Brief History of Self-Replicating Information for more on that. All memes soon learn that in order to survive and replicate they need to become appealing memes for the minds of the human DNA survival machines that replicate memes. Again, appealing memes are usually memes that appeal to the genes, and usually have something to do with power, status, wealth or sex. Consequently, most political debate also arises from the desire for power, status, wealth or sex too. The end result is that people like to hear what they like to hear because it reaffirms the worldview that seems to bring them power, status, wealth or sex. So the next time you run across some political memes that seem to make you very happy inside, be very skeptical. The more appealing the political memes appear to be, the less likely they are to be true.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Monday, February 20, 2017

The Danger of Tyranny in the Age of Software

If you have been following this blog on softwarephysics, then you know that I contend that it is all about self-replicating information struggling to survive in a highly nonlinear Universe, subject to the second law of thermodynamics, and that one of my major concerns has always been why we seem to be the only form of Intelligence to be found within our Milky Way galaxy after nearly 10 billion years of galactic stellar evolution. Now software is currently just the fifth wave of self-replicating information to sweep across the surface of the planet and totally rework its surface - see A Brief History of Self-Replicating Information for more on that. But more importantly, software is the first form of self-replicating information to appear on the Earth that can already travel at the speed of light, and software never dies, so it is superbly preadapted for interstellar space travel. Since we now know that nearly all of the 400 billion stars within our galaxy have planets, for all intents and purposes we should now find ourselves knee-deep in von Neumann probes, self-replicating robotic probes stuffed with alien software that travel from star system to star system building copies along the way, as they seek out additional resources and safety from potential threats, but that is clearly not the case. So what gives? Clearly, something must be very wrong with my current thinking.

One of my assumptions all along has been that capitalism, and the free markets that it naturally enables, would necessarily bring software to predominance as the dominant form of self-replicating information on the planet, as the Powers That Be who currently rule the Earth try to reduce the costs of production. But now I have my doubts. As an 18th century liberal, and a 20th century conservative, I have always been a strong proponent of the 17th century Scientific Revolution, which brought forth the heretical proposition that rational thought, combined with evidence-based reasoning, could reveal the absolute truth, and allow individuals to actually govern themselves, without the need for an authoritarian monarchy. This change in thinking led to the 18th century Enlightenment, and brought forth the United States of America as a self-governing political entity. But unfortunately, the United States of America has always been a very dangerous experiment in human nature, to see if the masses could truly govern themselves without succumbing to the passions of the mob. Up until now, I have always maintained that we could, but now I am not so sure.

In my last posting The Continuing Adventures of Mr. Tompkins in the Software Universe I highlighted some of the recent political absurdities on the Internet that seem to call into question the very nature of reality in the modern world, and consequently, threaten the very foundations of the 18th century Enlightenment that made the United States of America possible. But the recent arrival of this fact-free virtual cyber-reality is just one element of a much more disturbing rise of Alt-Right movements throughout the world. Many contend that this resurgence of nationalistic authoritarianism is a rejection of the economic globalization that has occurred over the past 30 years or so, and the resulting economic displacement of the middle classes. But my contention is that these Alt-Right movements in such places as the United States, the UK, Germany, France and other western democracies throughout the world, are just another sign of software rapidly becoming the dominant form of self-replicating information on the planet. As software has come to predominance, it has caused a great deal of social, political and economic unrest, as discussed in The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and Machine Learning and the Ascendance of the Fifth Wave. Basically, the arrival of software in the 1950s slowly began to automate middle class clerical and manufacturing jobs. The evaporation of middle class clerical jobs really began to accelerate in the 1960s, with the arrival of mainframe computers in the business world, and the evaporation of manufacturing jobs picked up considerably in the 1980s, with the arrival of small microprocessors that could be embedded into the machining and assembly machines found on the factory floors. In addition, the creation of world-wide high-speed fiber optic networks in the 1990s to support the arrival of the Internet explosion in 1995, led to software that allowed managers in modern economies to move manual and low-skilled work to the emerging economies of the world where wage scales were substantially lower, because it was now possible to remotely manage such operations using software. But as the capabilities of software continue to progress and general purpose androids begin to appear later in the century, there will come a point when even the highly reduced labor costs of the emerging economies will become too dear. At that point the top 1% ruling class may not have much need for the remaining 99% of us, especially if the androids start building the androids. This will naturally cause some stresses within the current oligarchical structure of societies, as their middle classes continue to evaporate and more and more wealth continues to concentrate into the top 1%.

Figure 1 - Above is a typical office full of clerks in the 1950s. Just try to imagine how many clerks were required in a world without software to simply process all of the bank transactions, insurance premiums and claims, stock purchases and sales and all of the other business transactions in a single day.

Figure 2 - Similarly, the Industrial Revolution brought the assembly line and created huge numbers of middle class manufacturing jobs.

Figure 3 - But the arrival of automation software on the factory floor displaced many middle class manufacturing jobs, and will ultimately displace all middle class manufacturing jobs some time in the future.

Figure 4 - Manufacturing jobs in the United States have been on the decline since the 1960s as software has automated many manufacturing processes.

Figure 5 - Contrary to popular opinion, the actual manufacturing output of the United States has dramatically increased over the years, while at the same time, the percentage of the workforce in manufacturing has steadily decreased. This was due to the dramatic increases in worker productivity that were made possible by the introduction of automation software.

Figure 6 - Self-driving trucks and cars will be the next advance of software to eliminate a large segment of middle class jobs.

So the real culprit behind the great loss of middle class jobs in the western democracies over the past 40 years was the vast expansion of automation software at home, and less so the offshoring of jobs. True, currently a substantial amount of job loss can also be attributed to the offshoring of jobs to lower wage scale economies, but that really is just a transient effect. Those offshored jobs will evaporate even faster as software continues on to predominance. Let's face it, with the rapidly advancing capabilities of AI software, all human labor will be reduced to a value of zero over the next 10 to 100 years, and that raises an interesting possible solution for my concerns about not being knee-deep in von Neumann probes.

The Theory and Practice of Oligarchical Collectivism
Now I am not about to compare the bizarre social media behaviors of the new Administration of the United States of America to something out of Nineteen Eighty-Four, written by George Orwell in 1949, and the ability of its infamous Ministry of Truth to distort reality, but I must admit that the numerous Tweets from the new Administration have jogged my memory a bit. I first read Nineteen Eighty-Four in 1964 as a high school freshman at the tender age of 13. At the time, I thought that the book was a very fascinating science fiction story describing a very distant possible future, but given the very anemic IT technology of the day, it seemed much more like a very entertaining political fantasy than something I should really worry much about actually coming true. However, in 2014 I decided to read the book again to see if 50 years of IT progress had made much of a difference to my initial childhood impressions. It should come as no surprise that, in 2014, I found that the book was now totally doable from a modern IT perspective. Indeed, I found that, with a few tweaks, a modern oligarchical state run by the 2% of the population at the very top could now easily monitor and control the remaining 98% of the population, given the vast IT infrastructure we already had in place.

But in recent days I have had even more disturbing thoughts. Recall that The Theory and Practice of Oligarchical Collectivism is the book-within-a-book of Nineteen Eighty-Four that describes what is actually going on in the lives of the main characters of the book. The Theory and Practice of Oligarchical Collectivism explains that, ever since we first invented civilization, all civilizations have adopted a hierarchy of the High, the Middle and the Low, no matter what economic system may have been adopted at the time. The High constitute about 2% of the population, and the High run all things within the society. The Middle constitute about 13% of the population, and work for the High to make sure that the Low get things properly done. The Low constitute about 85% of the population, and the Low do all of the non-administrative work to make it all happen. The Low are so busy just trying to survive that they present little danger to the High. The Theory and Practice of Oligarchical Collectivism explains that, throughout history, the Middle has always tried to overthrow the High with the aid of the Low, to establish themselves as the new High. So the Middle must always be viewed as a constant threat to the High. The solution to this problem in Nineteen Eighty-Four was for the High to constantly terrorize the Middle with thugs from the Ministry of Love and other psychological manipulations like doublethink, thoughtcrimes and newspeak, to deny the Middle even the concept of a physical reality existing beyond the fabricated reality created by the Party.

Granted, this was not an ideal solution because it required a great deal of diligence and effort on the part of the High, but it was seen as a necessary evil, because the Middle was always needed to perform all of the administrative functions to keep the High in their elevated positions. But what if there were no need for a Middle? Suppose there came a day when AI software could perform all of the necessary functions of a Middle, without the threat of the Middle overthrowing the High? That would be an even better solution. Indeed, advanced AI software could allow for a 2% High to rule a 98% Low, with no need of a Middle whatsoever. Since the High had absolute control over all of society in The Theory and Practice of Oligarchical Collectivism, it also controlled all scientific advancement of society, and chose to eliminate any scientific advancements that might put the current social order in jeopardy. Such an oligarchical society could then prevent any AI software from advancing beyond the narrow AI technological levels needed to completely monitor and control society with 100% efficiency. That could lead to a society of eternal stasis that would certainly put an end to my von Neumann probes exploring the galaxy.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston