Tuesday, June 13, 2017

Introduction to Softwarephysics

Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance and support based upon concepts from physics, chemistry, biology, and geology that I have been using on a daily basis for over 35 years as an IT professional. For those of you not in the business, IT is short for Information Technology, commercial computer science. The original purpose of softwarephysics was to explain why IT was so difficult, to suggest possible remedies, and to provide a direction for thought. Since then softwarephysics has taken on a larger scope, as it became apparent that softwarephysics could also assist the physical sciences with some of the Big Problems that they are currently having difficulties with. So if you are an IT professional, general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology then softwarephysics might be of interest to you, if not in an entirely serious manner, perhaps at least in an entertaining one.

The Origin of Softwarephysics
From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT, and spent about 20 years in development. For the past 17 years, I have been in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I first transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance, and relies upon our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 75 years, through the uncoordinated efforts of over 50 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

For more on the origin of softwarephysics please see Some Thoughts on the Origin of Softwarephysics and Its Application Beyond IT.

Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily upon two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based upon real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models upon which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields.

Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In his Principia (1687), Newton outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flack for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.
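Newton's inverse-square law is simple enough to check with a few lines of code. As a minimal sketch, here it is applied to the Earth-Moon pair; the constants are standard textbook values of my own choosing, not figures taken from this posting:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / d**2
# Standard textbook values (assumed here for illustration):
G = 6.674e-11        # gravitational constant, N m^2 / kg^2
m_earth = 5.972e24   # mass of the Earth, kg
m_moon = 7.348e22    # mass of the Moon, kg
d = 3.844e8          # mean Earth-Moon distance, m

def gravitational_force(m1, m2, distance):
    """Attractive force between two point masses, per the inverse-square law."""
    return G * m1 * m2 / distance**2

F = gravitational_force(m_earth, m_moon, d)
print(f"Earth-Moon attraction: {F:.3e} N")  # on the order of 2e20 N
```

Note that doubling the distance cuts the force to exactly one quarter, which is the telltale signature of any inverse-square law.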

The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving less than 10% of the speed of light and which are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics and for very fast things moving in strong gravitational fields, we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based upon models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide for an effective theory of software behavior that makes useful predictions of software behavior that are applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave and try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.
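The 10% of the speed of light limit mentioned above is easy to see numerically. The sketch below, using illustrative speeds of my own choosing, compares the Newtonian kinetic energy with the exact relativistic value:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def newtonian_ke(m, v):
    """Newtonian kinetic energy, valid only well below the speed of light."""
    return 0.5 * m * v**2

def relativistic_ke(m, v):
    """Exact kinetic energy from special relativity."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C)**2)
    return (gamma - 1.0) * m * C**2

def relative_error(v):
    """Fractional error of the Newtonian approximation (the mass cancels out)."""
    exact = relativistic_ke(1.0, v)
    return (exact - newtonian_ke(1.0, v)) / exact

for fraction in (0.01, 0.1, 0.5):
    print(f"v = {fraction:4.2f}c -> Newtonian KE low by {relative_error(fraction * C):.2%}")
```

At 0.01c the error is under a hundredth of a percent, at 0.1c it is still under 1%, but at 0.5c it approaches 20% - exactly the sort of restricted range of conditions that defines an effective theory.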

GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately. In order to do that, accurate timing measurements are essential. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips on board the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are farther from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, an error in your position of roughly 7 miles (11 kilometers) per day would accrue.
The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics, or QED (1948), which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!
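The GPS clock corrections quoted above make for a nice back-of-the-envelope exercise. The sketch below recomputes them from first principles for a circular orbit, using standard values for the Earth's gravitational parameter, radius, and the GPS orbital radius (assumed values of mine, not figures from this posting); this simple estimate lands within a fraction of a microsecond per day of the quoted 7.2 and 45.9 microsecond figures:

```python
import math

C = 299_792_458.0      # speed of light, m/s
GM = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6      # mean radius of the Earth, m
R_ORBIT = 2.6571e7     # GPS orbital radius (about a 12,550 mile altitude), m
SECONDS_PER_DAY = 86_400

# Special relativity: the satellite's orbital speed slows its clock down.
v = math.sqrt(GM / R_ORBIT)  # circular-orbit speed, about 3.9 km/s
sr_loss_us = (v**2 / (2 * C**2)) * SECONDS_PER_DAY * 1e6

# General relativity: the weaker gravitational field aloft speeds the clock up.
gr_gain_us = (GM / C**2) * (1 / R_EARTH - 1 / R_ORBIT) * SECONDS_PER_DAY * 1e6

net_gain_us = gr_gain_us - sr_loss_us
range_error_km = C * net_gain_us * 1e-6 / 1000  # daily drift, as equivalent range

print(f"SR loss:  {sr_loss_us:5.1f} microseconds/day")
print(f"GR gain:  {gr_gain_us:5.1f} microseconds/day")
print(f"Net gain: {net_gain_us:5.1f} microseconds/day")
print(f"Uncorrected range drift: about {range_error_km:.0f} km/day")
```

Multiplying the net daily drift by the speed of light shows why the correction matters: left uncorrected, the clocks would wander off by kilometers of equivalent range every day.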

So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based upon completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based upon models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark upon your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.

If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.

Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 25 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog an unintended consequence arose in my mind as I became profoundly aware of the immensity of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.

But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For example, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 75 years, or 2.4 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, has closely followed the same path as life on Earth over the past 4.0 billion years in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 75 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and also of Freeman Dyson’s two-stage theory of the origin of life on Earth. 
In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.

Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10^9 Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10^43 Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. Then, in Model-Dependent Realism - A Positivistic Approach to Realism we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we at long last explore what software might really be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.

The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, free of the built-in biases from which intentional simulations suffer. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call upon the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.

The Impact of Self-Replicating Information Upon the Planet
One of the key findings of softwarephysics is concerned with the magnitude of the impact upon the planet of self-replicating information.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Note that because the self-replicating autocatalytic metabolic pathways of organic molecules, RNA, and DNA have become so heavily intertwined over time, I now simply refer to them collectively as the “genes”. Over the past 4.0 billion years, the surface of the Earth has been totally reworked by three forms of self-replicating information – the genes, memes and software, with software rapidly becoming the dominant form of self-replicating information on the planet. For more on this see:

A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia

Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact upon one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what’s it all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely a working hypothesis still in the works. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:

MoneyPhysics – my impression of the 2008 world financial meltdown.

The Fundamental Problem of Everything – if you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!

What’s It All About? and Genes, Memes and Software – my current working hypothesis on what’s it all about.

How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.

Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.

The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and The Dawn of Galactic ASI - Artificial Superintelligence - my take on some of the issues that will arise for mankind as software becomes the dominant form of self-replicating information upon the planet over the coming decades.

The Continuing Adventures of Mr. Tompkins in the Software Universe, The Danger of Tyranny in the Age of Software, Cyber Civil Defense, and Oligarchiology and the Rise of Software to Predominance in the 21st Century - my worries that the world might abandon democracy in the 21st century, as software rises to become the dominant form of self-replicating information on the planet.

Making Sense of the Absurdity of the Real World of Human Affairs - how software has aided the expansion of our less desirable tendencies in recent years.

Some Specifics About These Postings
The postings in this blog are supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton upon which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read from the oldest to the most recent, which is the reverse of the order in which the blog displays them, beginning with my original posting on SoftwarePhysics. In addition, several universities also now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". At this point we will finish up with my original plan for this blog on softwarephysics with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.

For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because in order to get the Introduction to Softwarephysics listed as the first post in the context root of http://softwarephysics.blogspot.com/ I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of http://softwarephysics.blogspot.com/. The side effect of all this is that the real posting date of posts is the date that appears on the post that you get when clicking on the Newer Post link at the bottom left of the posting webpage.

SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Microsoft OneDrive.

SoftwarePhysics 101 – The Physics of Cyberspacetime - Original PowerPoint document

Entropy – A spreadsheet referenced in the document

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:

Steve Johnston

Saturday, April 08, 2017

Oligarchiology and the Rise of Software to Predominance in the 21st Century

In my posting Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse I noted that it was very strange that the study of human hierarchies was not initiated until 1969, when the late Professor Laurence Peter finally published his very famous book The Peter Principle: Why Things Always Go Wrong. This was because, ever since the beginning of the Holocene 11,500 years ago, which marked the last retreat of the world-wide glaciers of the Earth that had impeded the rise of civilization amongst the new species Homo sapiens on this planet, all the civilizations and all of the human organizations that they have wrought throughout history have primarily always been based upon a hierarchical architecture. It is quite amazing that it took so long for mankind to finally recognize this very self-evident fact of human nature. Certainly, nearly all historians of the countless civilizations that have arisen throughout history can agree that all civilizations and all of the human organizations that they have fostered have primarily been based upon hierarchies of political and economic power.

However, in that posting I also explained that although Professor Peter's profound insight into the nature of human nature was indeed revolutionary, his original definition of the Peter Principle had been largely based upon an exceptional point in time in American history during the 1950s and early 1960s, when the United States of America had emerged victorious from World War II with no apparent economic competitors whatsoever, since all of its competitors had essentially self-destructed during the war. To generalize Professor Peter's keen insight, I then proposed that there was indeed an alternative, and timeless, version of the Peter Principle that endured for all times and for all hierarchies:

The Time Invariant Peter Principle: In a hierarchy, successful subordinates tell their superiors what their superiors want to hear, while unsuccessful subordinates try to tell their superiors what their superiors need to hear. Only successful subordinates are promoted within a hierarchy, and eventually, all levels of a hierarchy will tend to become solely occupied by successful subordinates who only tell their superiors what their superiors want to hear.

Continuing on along these lines, in this posting I would like to propose that, in addition to always being organized into hierarchies of political and economic power, all civilizations throughout history have also always had those hierarchies capped by a very powerful oligarchy of individuals who control the entire civilization. In The Danger of Tyranny in the Age of Software I alluded to George Orwell's 1949 dystopian view of the future in his famous book Nineteen Eighty-Four, in which Orwell outlined a very grim possible future in his book-within-a-book The Theory and Practice of Oligarchical Collectivism. The Theory and Practice of Oligarchical Collectivism maintained that ever since civilization had first been invented, all societies always organized themselves in a hierarchical manner into an oligarchy where 2% of the High ran the entire society. Under the High was a 13% Middle that served the necessary administrative functions to maintain production for the society and to keep the 85% of the Low in their place within the society. Indeed, all civilizations throughout human history have always been organized upon oligarchies of varying degrees of power. This fact is of importance in order to fully understand what will likely happen during the 21st century as software becomes the dominant form of self-replicating information on the planet.

Again, to really make sense of the modern world, you must first realize that we are all living in a very rare time in which a new form of self-replicating information, known to us as software, is coming to predominance. So in the current world of human politics and economics, it's all about software coming to predominance and not much else. For more on that see: A Brief History of Self-Replicating Information, The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and Machine Learning and the Ascendance of the Fifth Wave. If you do not come to that realization, nothing else makes much sense in the current world, and that is the reason why most of the current political and economic pundits cannot explain what the heck is really going on. Being born in 1951, I can vividly remember a time when there essentially was no software on the Earth at all, so unlike many of today's political and economic pundits, I have the advantage of remembering a time when software really did not play any role whatsoever in the world.

The Need for the New Discipline of Oligarchiology
I just Googled the Internet for the string "Oligarchiology", and to my surprise, I came up with a total of three hits, but only one hit actually had the string "Oligarchiology" in it. Similarly, when I Googled the random nonsensical string "fizzlemonger", I also came up with a total of three hits, but all three hits actually had the string "fizzlemonger" in them! So I am now quite confident that the science of Oligarchiology has never been fully explored by academia, even though it is quite self-evident that all human civilizations throughout time have always been ruled by oligarchies of varying political and economic power. What an extraordinary opportunity! So like Professor Peter, I would now like to initiate the new science of Oligarchiology as the study of oligarchies throughout human history. As I always say, better late than never, but regrettably, being retired and at an age of 65 years, I personally now must rely on much younger researchers to carry on with this new discipline in the future. For my purposes, I just need the fact from Oligarchiology that the world is currently run by a number of oligarchies, as it always has been. This oligarchical fact of life has been true under numerous social and economic systems - autocracies, aristocracies, feudalism, capitalism, socialism and communism. It just seems that there have always been about 2% of the population who liked to run things, no matter how things were set up, and there is nothing wrong with that. We certainly always do need somebody around to run things because, honestly, 98% of us simply do not have the ambition or desire to do so. Of course, the problem throughout history has always been that the top 2% naturally tended to abuse the privilege a bit and overdid things a little, resulting in 98% of the population having a substantially lower economic standard of living than the top 2%, and that has led to several revolutions in the past that did not always end so well.
However, historically, so long as the bulk of the population had a relatively decent life, things went well in general for the entire oligarchical society. The key to this economic stability has always been that the top 2% has always needed the remaining 98% of us around to do things for them, and that maintained the hierarchical peace within societies. But that will be changing in the 21st century as software continues to displace more and more workers. To my mind, this hierarchical sorting all begins at about the age of 14, no matter what culture you may have been raised in. In the United States of America, this all begins when you enter high school at the age of 14, when it becomes quite apparent that about 2% of the class take on a leadership role. This 2% High then recruits a 13% Middle to administer the 85% Low, who are mostly lost in space and time during all of high school. As a proud member of a local high school Low 50 years ago, I have no problem with this self-evident fact of human nature.

Software and the Spread of False Political Memes in the 21st Century
In many of my recent postings, I have alluded to my concern that such hierarchical tranquility in the oligarchical societies of the world may not hold in the 21st century as software rises to predominance. Again, software is just the fifth wave of self-replicating information to come to predominance on this planet over the past 4.567 billion years. For more on that please see A Brief History of Self-Replicating Information. My concern is that the current oligarchies of the world could easily turn very ugly, as they did for most of human history. The rise of Alt-Right movements, formerly known as fascist movements, within the western democracies, and the return of Tzarist Russia do not bode well for the continuing spread of democracy. As software eliminates more and more jobs in the 21st century, there will be a rising level of discontent amongst the displaced against the current world order of oligarchies and their current social orders. This will allow strongmen, with simple solutions that can easily fit on a bumper sticker, to come to power, as has recently happened in the United States of America. These strongmen will argue that the messy checks and balances of 18th century democracy are the enemy of the people, and will not allow them to do what is necessary to make their country great again.

The 18th century Enlightenment brought us both capitalism and the western democracies with the necessary checks and balances that could moderate the excesses of capitalism to create optimal oligarchical societies. Granted, these societies were still oligarchical in nature, because that is just the natural order of things, but they were relatively benign oligarchies that tolerated both political and economic dissent and also allowed for some upward mobility amongst those in the lower classes. The tolerance of dissent is essential for both political and economic progress. Now the 18th century Enlightenment was an outgrowth of the 17th century Scientific Revolution, which revealed to us that there was indeed an absolute reality that could be understood by gathering facts, building models or theories that could explain those observed facts, and thoroughly testing those models and theories to see if they truly could explain the observed facts and any new facts that came along. Consequently, both the 17th century Scientific Revolution and the 18th century Enlightenment were based upon the use of evidence-based reasoning to figure things out, and then to take actions to change things if necessary. Therefore, we should all rebel against the new Alt-Right and Tzarist strongmen who request that we suspend evidence-based reasoning for the good of the country and simply "believe" and follow them instead. See The Danger of Believing in Things and The Danger of Tyranny in the Age of Software for more on that.

Of course the suspension of belief in a physical reality by the masses, as they blindly follow a strongman leader into oblivion, is nothing new, but now we have software in the picture as a complicating factor that makes this much easier. This is because software allows for the easy spread of false memes. The invention of writing many thousands of years ago greatly aided the spread of political memes, but there has always been a substantial printing and distribution cost associated with spreading false political memes. Software has now reduced those costs to nearly zero, and that is the problem. There is no editorial board on the Internet that demands that news stories be corroborated by multiple sources and fact-checked by others before publication in order to maintain the credibility of the publishing house and ensure that it remains economically viable. Also, printed false political memes cannot self-replicate like software-enabled false political memes can. In the Software Universe of the Internet, anybody can now publish a blatantly false political meme that can then be instantly and easily replicated on social media software. This is why memes and software, both forms of self-replicating information, have forged such a tight parasitic/symbiotic relationship. False political memes simply need to find the minds of willing human DNA survival machines in order to self-replicate and grow unchecked in an exponential manner. In fact, we have now found that false political memes can do this much better than true political memes, because false political memes usually come with an irresistible scandalous element that attracts the minds of human DNA survival machines, like flowering plants attract bees, in order to replicate. For more on that see Cyber Civil Defense. The 2016 presidential election in the United States is a vivid example of all of this in action. Just try to imagine the outcome of that election if there were no such thing as software!
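The effect of reducing replication costs to nearly zero can be illustrated with a toy branching-process model. This is purely an illustrative sketch of my own, not drawn from any study: the appeal probabilities and contact counts below are made-up numbers, chosen only to show how a modest difference in how "shareable" a meme is separates unchecked exponential growth from extinction once each copy costs nothing to make.

```python
# A toy branching-process model of meme spread on cost-free social media.
# All parameters here are hypothetical, for illustration only.

def spread(appeal, contacts=10, generations=8):
    """Expected number of minds reached when each recipient re-shares
    a meme to `contacts` others with probability `appeal`."""
    reached, sharers = 1.0, 1.0
    for _ in range(generations):
        # Expected number of re-sharers produced by the current generation.
        sharers = sharers * appeal * contacts
        reached += sharers
    return reached

# Assume a scandalous false meme is more appealing (more likely to be
# re-shared) than a sober true one.
false_meme = spread(appeal=0.3)   # growth factor 3 per generation: exponential growth
true_meme = spread(appeal=0.08)   # growth factor 0.8 per generation: spread dies out

print(f"false meme reach: {false_meme:.0f}")
print(f"true meme reach:  {true_meme:.0f}")
```

With these assumed numbers, the scandalous meme reaches thousands of minds after eight sharing generations, while the sober one fizzles out after reaching only a handful. The qualitative point is that once the per-copy printing and distribution cost is zero, total reach is governed almost entirely by appeal.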

The Rise of Software During the 21st Century
At the current rate, software will most likely become the dominant form of self-replicating information on the planet sometime during the 21st century. As I have outlined in several recent postings, this will be a dramatic development for the entire planet, not to mention a very traumatic experience for mankind in general, since we really do not know how it will all unfold. In the meantime, while we still have some degree of control, as human beings we should all stand up to the fascist tendencies of mankind by embracing the ideals of the Enlightenment that brought us the western democracies and the freedoms to pursue the best in mankind.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:

Steve Johnston

Tuesday, March 07, 2017

Cyber Civil Defense

In my posting Cyber Defense about six years ago I warned that, like the global disaster of World War I that the Powers That Be accidentally unleashed upon mankind more than 100 years ago, the current world powers may not fully understand what they have wrought with their large stockpiles of cyberweapons and cybersoldiers. Recall that the world powers that ran the world 100 years ago, just prior to World War I, did not recognize the game-changing effects of the mechanization of warfare. The development of high-volume rail systems capable of quickly transporting large numbers of troops and munitions, the invention of the machine gun, and the arrival of mechanized transport vehicles and tanks greatly increased the killing power of nation-states. But this was not generally recognized by the Powers That Be prior to the catastrophe of World War I, which resulted in 40 million casualties and the deaths of 20 million people for apparently no particular reason at all. Similarly, it now seems that the first large-scale cyberattack by Russia upon the United States of America may have successfully elected a president of the United States, but in this posting I would like to propose that there may be some dreadful unintended consequences to this incredible Russian cybervictory that could leave even more dead in its wake. First of all, we should take note that this was not the first president of the United States that Russia managed to elect into office.

I was born in 1951 during the Korean War, and so I lived through all of the very tense Cold War events of the 1950s and 1960s, including the Cuban Missile Crisis of October 1962, which brought us all closer to the prospect of a global nuclear war than we should ever have come, so let us begin there. On October 4, 1957, the Soviet Union successfully launched Sputnik 1 atop an R-7 rocket, making it the world's very first man-made object to enter Earth orbit. Earlier in 1957, the R-7 had become the world's first functional ICBM after its successful 3,700 mile test flight on August 21, 1957. At the time, all of these Russian firsts threw the United States of America into a Cold War frenzy that is now hard to fathom, and had a huge impact upon the country. For example, it built my high school and put me through college. Back in the 1950s, School District 88 in Illinois was having a hard time trying to convince the stingy local residents of the need for a new high school in the area. But that all changed in January of 1958, after the launch of Sputnik 1, when the local residents eagerly voted in a referendum to build the new Willowbrook High School out of the fear generated by Sputnik 1 and by the demonstrable superiority of Russian missile technology at the time. Suddenly, Americans also began to take science and education seriously once again, and for once finally began to hold science and education in the esteem that they actually deserved. For example, in 1969 when I first began work on a B.S. in physics at the University of Illinois, tuition was only $181 per semester, and I was easily able to put myself through college simply by cleaning movie theaters seven days a week during the summers at $2.25/hour. For my M.S.
in geophysics at the University of Wisconsin, my tuition and fees were waived, and I received a generous stipend to live on while working as a research assistant, courtesy of a grant from the National Science Foundation. The end result of this was that, in 1975 when I finished school, I had $3000 in the bank, instead of the crushing student debt that most graduates now face, because the United States had not yet given up on supporting education, like it did after the Cold War seemed to have ended on December 25, 1991 when the Soviet Union collapsed under its own weight.

Figure 1 - The launch of Sputnik 1 by the Russians on October 4, 1957 on top of an R-7 ICBM rocket threw the United States of America into a Cold War panic that is now hard to imagine.

But the Russians did far more than that with Sputnik 1. They also managed to elect their very first president of the United States with it. Given the astounding success that the Soviets had had with Sputnik 1 and the R-7 ICBM in 1957, early in 1958 John F. Kennedy seized upon the issue as a "missile gap" with the Soviet Union that the Eisenhower Administration had failed to prevent. Now it turns out that by November of 1960 the "missile gap" had largely been closed in reality, but it still remained in the public zeitgeist of the time as a real issue, and it helped to elect John F. Kennedy president of the United States by a very narrow margin. Apparently, John F. Kennedy actually knew at the time that the "missile gap" was really a myth, but just the same, he used it as a useful political tool to help get elected. The Soviets, on the other hand, regarded Kennedy's "missile gap" and the attempted Bay of Pigs invasion of Cuba in 1961 as indications that Kennedy was a dangerous and weak leader who might cave in to his more militaristic generals, like General Curtis LeMay, during a crisis and launch a nuclear first strike. The Soviet R-7 ICBMs actually required about 20 hours of preparation to launch, so the R-7 ICBMs were easy targets for conventional bombers to take out before they could be launched during a global nuclear war, and thus the R-7 ICBMs were actually less threatening than long-range bombers, like the B-52. All of this led the Soviet military planners to conclude that additional deterrence measures were in order, and as a consequence, plans were put into place to install medium-range nuclear missiles in Cuba that were more accurate than the R-7. When these missiles were first discovered by American U-2 flights in September of 1962, the Cuban Missile Crisis of October 1962 soon followed.
At the time, Kennedy's generals recommended an invasion of Cuba in response, but fortunately for all, Kennedy turned out to be a stronger leader than the Soviets had predicted, and Kennedy countered with a Cuban blockade instead, which allowed both sides enough time to come to their senses. There are now reports that, unknown to the United States at the time, the Soviet field commanders in Cuba had actually been given authority to launch the nuclear weapons under their control by the Soviet High Command in the event of an invasion, the only time such authority has ever been delegated by the Soviet High Command. The Soviet field commanders had at least twenty nuclear warheads on the medium-range R-12 Dvina ballistic missiles under their control that were capable of reaching cities in the United States, including Washington D.C., each carrying a one megaton warhead, and nine tactical nuclear missiles with smaller warheads. If the Soviet field commanders had launched their missiles, many millions of Americans would have been killed in the initial attack, and the ensuing retaliatory nuclear strike against the Soviet Union would have killed roughly one hundred million Russians. The final Soviet counter-attack would have killed a similar number of Americans.

But the above nuclear catastrophe did not happen because reasonable minds on both sides of the conflict always prevailed. In fact, all during the Cold War we had highly capable leaders in both the United States and the Soviet Union at all times, who always thought and behaved in a rational manner. During the Cold War, the United States may not have agreed with the Soviet Union on most things, but both sides always operated in a rational and logical manner, and that is what made the MAD (Mutual Assured Destruction) stalemate work, which prevented nuclear war from breaking out and ending it all. Additionally, the scientific community in the United States always respected those in the Russian scientific community for their brilliant scientific efforts, and this limited scientific dialog helped to keep open the political channels between both countries as well. In fact, I have always been amazed by the astounding Russian scientific achievements over the years that were made without the benefits of the freedom of thought that the 18th century Enlightenment had brought to the western democracies. Despite the limitations imposed upon the Russian scientific community by a series of political strongmen over many decades, they always managed to prevail in the long term. I am not so confident that the scientific communities of the western democracies could do as well under the thumbs of the Alt-Right strongmen that wish to come to power now.

So now, thanks to Russian cyberwarfare, we have a new president of the United States of very limited ability. It seems that the principal skill of this new president lies solely in making questionable real estate deals, but he has no experience with global political nuclear strategy whatsoever, and that is very dangerous for the United States and for Mother Russia as well. True, he does seem to unquestionably favor Russia, for some unknown reason, and that unknown favor is currently being investigated by the FBI and both houses of Congress. But those investigations will take quite some time to complete. Meanwhile, we now have a mentally unhinged leader of North Korea, a Stalinist holdover from the previous century, rapidly moving towards obtaining ICBMs armed with nuclear warheads that could strike the United States. This has never happened before. We have never had potentially warring nation-states with nuclear weapons headed by administrations that had no idea of what they were doing with such weapons. This is not good for the world, or for Russia either. In the 1988 American vice-presidential debate between Lloyd Bentsen and Dan Quayle there is a famous remark by Lloyd Bentsen, after Dan Quayle made a vague analogy between himself and John F. Kennedy, that goes "Senator, I served with Jack Kennedy. I knew Jack Kennedy. Jack Kennedy was a friend of mine. Senator, you're no Jack Kennedy." And that certainly is true of the new president of the United States that Russian cyberwarriors helped to elect. Yes, he might seem to be overly friendly to Russian interests, but his administration has already stated that military actions might be required to prevent North Korea from obtaining an ICBM capable of delivering a nuclear warhead that could strike the United States, and this new Administration has also wondered why we cannot use nuclear weapons if we already have them - otherwise, why build such weapons in the first place?
An attack of North Korea could be the flashpoint that ignites a global nuclear war between the United States, North Korea and China, like the original Korean War of 1950. True, Russia itself might not get drawn into such a conflict, or maybe it would, based upon earlier precedents like World War I, but nonetheless, the resulting high levels of global radioactive fallout and nuclear winter effects resulting from a large-scale nuclear exchange would bring disaster to Russia as well.

Cyber Civil Defense - How to Build a Cyber Fallout Shelter Against External Influences
Now all during the 1950s and early 1960s, great attention was paid in the United States to the matter of civil defense against a possible nuclear strike by the Russians. During those times, the government of the United States essentially admitted that it could not defend the citizens of the United States from a Soviet bomber attack with nuclear weapons, and so it was up to the individual citizens of the United States to prepare for such a nuclear attack.

Figure 2 - During the 1950s, as a very young child, with the beginning of each new school year, I was given a pamphlet by my teacher describing how my father could build an inexpensive fallout shelter in our basement out of cinderblocks and 2x4s.

Figure 3 - But to me these cheap cinderblock fallout shelters always seemed a bit small for a family of 5, and my parents never bothered to build one because we lived only 25 miles from downtown Chicago.

Figure 4 - For the more affluent, more luxurious accommodations could be constructed for a price.

Figure 5 - But no matter what your socioeconomic level was at the time, all students in the 1950s participated in "duck and cover" drills for a possible Soviet nuclear attack.

Figure 6 - And if you were lucky enough to survive the initial flash and blast of a Russian nuclear weapon with your "duck and cover" maneuver, your school, and all other public buildings, also had a fallout shelter in the basement to help you get through the next two weeks, while the extremely radioactive nuclides from the Russian nuclear weapons rapidly decayed away.

Unfortunately, living just 25 miles from downtown Chicago, the second largest city in the United States at the time, meant that the whole Chicagoland area was destined to be targeted by a multitude of overlapping 10 and 20 megaton bombs by the Soviet bomber force, meaning that I would be killed multiple times as my atoms were repeatedly vaporized and carried away in the winds of the Windy City.

Now all of these somber thoughts from the distant 1950s might sound a bit bleak, but they are the reason that I pay very little attention when I hear our Congressmen and Senators explain that we have to investigate this Russian meddling with our 2016 presidential election "so that this never happens again". But of course it will happen again, because we largely did it to ourselves! The Russians, like all major foreign powers, simply exploited the deep political divide between the Democrats and Republicans in our country. This is nothing new. All major foreign powers throughout history have always sought to meddle in the internal political affairs of other countries in order to advance their own interests. The United States of America has a long history of doing so, and rightly so! It is always far better to try to modify the ambitions of a possible adversary politically, rather than to do so later militarily. The only difference this time was that the Russians used the full capabilities of the world-wide software infrastructure that is already in place to further their ends, like another Sputnik first, in keeping with my contention that software is now rapidly becoming the dominant form of self-replicating information on the planet. Consequently, cyberspacetime is now the most valuable terrain, from a long-term strategic perspective, to be found on the planet, and once again, the Russians got there first. The Russians realized that, for less than the price of a single ICBM, they could essentially paralyze the United States of America for many years, or perhaps even an entire decade, simply by using the existing software infrastructure of the world, along with the great divide between the Democrats and Republicans, to their advantage.

Now as an 18th century liberal and a 20th century conservative, I must admit that I am a 20th century Republican who has only voted for Democrats for the past 15 years. I parted with the 21st century Republican Party in 2002 when it turned its back on science, and took up some other new policies that I did not favor. So over the past 45 years, there have been long stretches of time when I was a Republican, and long stretches of time when I was a Democrat, but at all times, I always tried to remain an American and hold the best thinking of both parties dear to my heart. But the problem today is that most Republicans and most Democrats now view members of the other party as a greater threat to the United States of America than all of the other foreign powers in the world put together. This was the fundamental flaw in today's American society that the Russians exploited using our very own software! Hence, I would like to propose that since the government of the United States cannot really protect us from such a cyberattack in the future, like back in the 1950s, we need to institute a Civilian Cyber Civil Defense program of our own. In fact, this time it is much easier to do so because we do not need to physically build and stock a huge number of fallout shelters. All we need to do is to simply follow the directions on this official 1961 CONELRAD Nuclear Attack Message by not listening to false rumors or broadcasts spread by agents of the enemy:


which in today's divisive world simply means:


In The Danger of Believing in Things I highlighted the dangers of not employing critical thought when evaluating assertions in our physical Universe, and the same goes for politics. In that posting, I explained that it is very dangerous to believe in things, because that means you have turned off your faculty of critical thought that, hopefully, allows you to uncover attempts at deception by others. If you have ever purchased a new car, you know exactly what I am talking about. Instead, you should always approach things with some level of confidence that is less than 100%, and that confidence should always be based upon the evidence at hand. In fact, at an age of 65 years, I now have very little confidence in most forms of human thought beyond the sciences and mathematics. But in today's demented political world, most Americans are now mainly foaming at the mouth over the horrible thoughts and acts of the opposition party, and paying very little attention to the actions of the other foreign powers of the world. Since the 18th century European Enlightenment brought us democracies with the freedom of speech, we all need to recognize as responsible adults that it is impossible for our government to prevent foreign powers from exploiting that freedom by injecting "fake news" and false stories into the political debate. We must realize that, although the Russians are very intelligent and sophisticated people, they never fully benefited from the 18th century European Enlightenment and the freedoms that it brought, so we cannot retaliate in kind against Russia in its next election. So the only defense we have against another similar cyberattack by Russia, or some other foreign power during an election year, is to use the skepticism and critical thinking that science uses every day. As Carl Sagan used to say, "Extraordinary claims require extraordinary evidence".
So in the course of the next election cycle, if you see on the Internet or cable network news, that the opposition candidate has been found to be running a child pornography ring out of a local pizza parlor in your city, stop and think. Most likely, you are being tricked by a foreign power into believing something that you already want to believe.

This may not be as easy as it first sounds because, although we all believe ourselves to be sophisticated, rational and open-minded individuals who only pay attention to accurate news accounts, we all do savor the latest piece of political gossip that we see about the opposition candidate, and are more likely to believe it if it reconfirms our current worldview. In the April 2017 issue of Scientific American, Walter Quattrociocchi published a very interesting article, Inside the Echo Chamber, on this very subject. He showed that studies in Italy have found that instead of creating a "collective intelligence", the Internet has actually created a vast echo chamber of misinformation that has been dramatically amplified by social media software like Facebook and Twitter. The computational social scientists who study the viral spread of such misinformation on the Internet find that, frequently, users of social media software simply confine their Internet activities to websites that only feature similar misinformation that reconfirms their current distorted worldview. Worse yet, when confronted with debunking information, people were found to be 30% more likely to continue to read the same distorted misinformation that reaffirms their current worldview, rather than to reconsider their position. Clearly, a bit of memetics could be of help. Softwarephysics maintains that software is rapidly becoming the fifth wave of self-replicating information to come to predominance on the Earth, as it continues to form very strong parasitic/symbiotic relationships with the memes that currently rule the world - please see A Brief History of Self-Replicating Information for more on that. All memes soon learn that in order to survive and replicate they need to become appealing memes for the minds of the human DNA survival machines that replicate memes.
Again, appealing memes are usually memes that appeal to the genes, and usually have something to do with power, status, wealth or sex. Consequently, most political debate also arises from the desire for power, status, wealth or sex too. The end result is that people like to hear what they like to hear because it reaffirms the worldview that seems to bring them power, status, wealth or sex. So the next time you run across some political memes that seem to make you very happy inside, be very skeptical. The more appealing the political memes appear to be, the less likely they are to be true.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:

Steve Johnston

Monday, February 20, 2017

The Danger of Tyranny in the Age of Software

If you have been following this blog on softwarephysics, then you know that I contend that it is all about self-replicating information struggling to survive in a highly nonlinear Universe, subject to the second law of thermodynamics, and that one of my major concerns has always been why we seem to be the only form of Intelligence to be found within our Milky Way galaxy after nearly 10 billion years of galactic stellar evolution. Now software is currently just the fifth wave of self-replicating information to sweep across the surface of the planet and totally rework its surface - see A Brief History of Self-Replicating Information for more on that. But more importantly, software is the first form of self-replicating information to appear on the Earth that can already travel at the speed of light, and software never dies, so it is superbly preadapted for interstellar space travel. Since we now know that nearly all of the 400 billion stars within our galaxy have planets, then for all intents and purposes, we should now find ourselves knee-deep in von Neumann probes, self-replicating robotic probes stuffed with alien software that travel from star system to star system, building copies along the way as they seek out additional resources and safety from potential threats, but that is clearly not the case. So what gives? Clearly, something must be very wrong with my current thinking.

One of my assumptions all along has been that capitalism, and the free markets that it naturally enables, would necessarily bring software to predominance as the dominant form of self-replicating information on the planet, as the Powers that Be who currently rule the Earth try to reduce the costs of production. But now I have my doubts. As an 18th century liberal, and a 20th century conservative, I have always been a strong proponent of the 17th century Scientific Revolution, which brought forth the heretical proposition that rational thought, combined with evidence-based reasoning, could reveal the absolute truth, and allow individuals to actually govern themselves, without the need for an authoritarian monarchy. This change in thinking led to the 18th century Enlightenment, and brought forth the United States of America as a self-governing political entity. But the United States of America has always been a very dangerous experiment in human nature, a test to see if the masses could truly govern themselves without succumbing to the passions of the mob. Up until now, I have always maintained that we could, but now I am not so sure.

In my last posting, The Continuing Adventures of Mr. Tompkins in the Software Universe, I highlighted some of the recent political absurdities on the Internet that seem to call into question the very nature of reality in the modern world, and consequently, threaten the very foundations of the 18th century Enlightenment that made the United States of America possible. But the recent arrival of this fact-free virtual cyber-reality is just one element of a much more disturbing rise of Alt-Right movements throughout the world. Many contend that this resurgence of nationalistic authoritarianism is a rejection of the economic globalization that has occurred over the past 30 years or so, and the resulting economic displacement of the middle classes. But my contention is that these Alt-Right movements in such places as the United States, the UK, Germany, France and other western democracies throughout the world are just another sign of software rapidly becoming the dominant form of self-replicating information on the planet. As software has come to predominance, it has caused a great deal of social, political and economic unrest, as discussed in The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and Machine Learning and the Ascendance of the Fifth Wave. Basically, the arrival of software in the 1950s slowly began to automate middle class clerical and manufacturing jobs. The evaporation of middle class clerical jobs really began to accelerate in the 1960s, with the arrival of mainframe computers in the business world, and the evaporation of manufacturing jobs picked up considerably in the 1980s, with the arrival of small microprocessors that could be embedded into the machining and assembly machines found on factory floors.
In addition, the creation of world-wide high-speed fiber optic networks in the 1990s to support the arrival of the Internet explosion in 1995, led to software that allowed managers in modern economies to move manual and low-skilled work to the emerging economies of the world where wage scales were substantially lower, because it was now possible to remotely manage such operations using software. But as the capabilities of software continue to progress and general purpose androids begin to appear later in the century there will come a point when even the highly reduced labor costs of the emerging economies will become too dear. At that point the top 1% ruling class may not have much need for the remaining 99% of us, especially if the androids start building the androids. This will naturally cause some stresses within the current oligarchical structure of societies, as their middle classes continue to evaporate and more and more wealth continues to concentrate into the top 1%.

Figure 1 - Above is a typical office full of clerks in the 1950s. Just try to imagine how many clerks were required in a world without software to simply process all of the bank transactions, insurance premiums and claims, stock purchases and sales and all of the other business transactions in a single day.

Figure 2 - Similarly, the Industrial Revolution brought the assembly line and created huge numbers of middle class manufacturing jobs.

Figure 3 - But the arrival of automation software on the factory floor displaced many middle class manufacturing jobs, and will ultimately displace all middle class manufacturing jobs some time in the future.

Figure 4 - Manufacturing jobs in the United States have been on the decline since the 1960s as software has automated many manufacturing processes.

Figure 5 - Contrary to popular public opinion, the actual manufacturing output of the United States has dramatically increased over the years, while at the same time, the percentage of the workforce in manufacturing has steadily decreased. This was due to the dramatic increases in worker productivity that were made possible by the introduction of automation software.

Figure 6 - Self-driving trucks and cars will be the next advance of software to eliminate a large segment of middle class jobs.

So the real culprit behind the great loss of middle class jobs in the western democracies over the past 40 years was the vast expansion of automation software at home, and less so the offshoring of jobs. True, a substantial amount of job loss can currently be attributed to the offshoring of jobs to lower wage scale economies, but that is just a temporary transient effect. Those offshored jobs will evaporate even faster as software continues on to predominance. Let's face it, with the rapidly advancing capabilities of AI software, all human labor will be reduced to a value of zero over the next 10 - 100 years, and that raises an interesting possible solution for my concerns about not being knee-deep in von Neumann probes.

The Theory and Practice of Oligarchical Collectivism
Now I am not about to compare the bizarre social media behaviors of the new Administration of the United States of America to something out of Nineteen Eighty-Four, written by George Orwell in 1949, and the ability of its infamous Ministry of Truth to distort reality, but I must admit that the numerous Tweets from the new Administration have jogged my memory a bit. I first read Nineteen Eighty-Four in 1964 as a high school freshman at the tender age of 13. At the time, I thought that the book was a very fascinating science fiction story describing a very distant possible future, but given the very anemic IT technology of the day, it seemed much more like a very entertaining political fantasy than something I should really worry much about actually coming true. However, in 2014 I decided to read the book again to see if 50 years of IT progress had made much of a difference to my initial childhood impressions. It should come as no surprise that, in 2014, I found that the book was now totally doable from a modern IT perspective. Indeed, I found that, with a few tweaks, a modern oligarchical state run by 2% of the population at the very top could now easily monitor and control the remaining 98% of the population, given the vast IT infrastructure we already had in place.

But in recent days I have had even more disturbing thoughts. Recall that The Theory and Practice of Oligarchical Collectivism is the book-within-a-book of Nineteen Eighty-Four that describes what is actually going on in the lives of the main characters of the book. The Theory and Practice of Oligarchical Collectivism explains that, ever since we first invented civilization, all civilizations have adopted a hierarchy of the High, the Middle and the Low, no matter what economic system may have been adopted at the time. The High constitute about 2% of the population, and the High run all things within the society. The Middle constitute about 13% of the population, and work for the High to make sure that the Low get things properly done. The Low constitute about 85% of the population, and the Low do all of the non-administrative work to make it all happen. The Low are so busy just trying to survive that they present little danger to the High. The Theory and Practice of Oligarchical Collectivism explains that, throughout history, the Middle has always tried to overthrow the High with the aid of the Low, to establish themselves as the new High. So the Middle must always be viewed as a constant threat to the High. The solution to this problem in Nineteen Eighty-Four was for the High to constantly terrorize the Middle with thugs from the Ministry of Love and other psychological manipulations like doublethink, thoughtcrimes and newspeak, to deny the Middle even the concept of a physical reality beyond the fabricated reality created by the Party.

Granted, this was not an ideal solution because it required a great deal of diligence and effort on the part of the High, but it was seen as a necessary evil, because the Middle was always needed to perform all of the administrative functions to keep the High in their elevated positions. But what if there were no need for a Middle? Suppose there came a day when AI software could perform all of the necessary functions of a Middle, without the threat of the Middle overthrowing the High? That would be an even better solution. Indeed, advanced AI software could allow for a 2% High to rule a 98% Low, with no need of a Middle whatsoever. Since the High had absolute control over all of society in The Theory and Practice of Oligarchical Collectivism, it also controlled all scientific advancement of society, and chose to eliminate any scientific advancements that might put the current social order in jeopardy. Such an oligarchical society could then prevent any AI software from advancing beyond the narrow AI technological levels needed to completely monitor and control society with 100% efficiency. That could lead to a society of eternal stasis that would certainly put an end to my von Neumann probes exploring the galaxy.


Sunday, December 25, 2016

The Continuing Adventures of Mr. Tompkins in the Software Universe

George Gamow was a highly regarded theoretical physicist and cosmologist from the last century who liked to explain concepts in modern physics to the common people by having them partake in adventures along with him in alternative universes that had alternative values for the physical constants found within our own Universe. He did so by creating a delightful fictional character back in 1937 by the name of Mr. Tompkins. Mr. Tompkins was an inquisitive bank clerk who was the main character in a series of four popular science books in which he participated in a number of such scientific adventures in alternative universes. I bring this up because back in 1979, when I first switched careers from being an exploration geophysicist to become an IT professional, I had a very similar experience. At the time, it seemed to me as if the strange IT people that I was now working with on a daily basis had created for themselves their own little Software Universe, with themselves as the sole inhabitants. But over the years, I have seen this strange alternative Software Universe slowly expand, to the point that nearly all of the Earth's inhabitants are now also inhabitants of this alternative Software Universe.

Mr. Tompkins first appeared in George Gamow's mind in 1937, when he wrote a short story called A Toy Universe and unsuccessfully tried to have it published by the magazines of the day, such as Harper's, The Atlantic Monthly and Coronet. However, in 1938 he was finally able to publish a series of articles in a British magazine called Discovery that later became the book Mr Tompkins in Wonderland in 1939. Later he published Mr Tompkins Explores the Atom in 1944, along with two other books at later dates. The adventures of Mr. Tompkins begin when he spends the afternoon of a bank holiday attending a lecture on the theory of relativity. During the lecture he drifts off to sleep and enters a dream world in which the speed of light is a mere 4.5 m/s (10 mph). This becomes apparent to him when he notices that passing cyclists are subject to a noticeable Lorentz–FitzGerald contraction.
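As a quick worked example of Gamow's scenario (my own calculation, not from the book, with an assumed cyclist speed of 4 m/s), the Lorentz–FitzGerald contraction of a moving object of rest length L₀ in a universe where c = 4.5 m/s would be:

```latex
L = L_0 \sqrt{1 - \frac{v^2}{c^2}}
  = L_0 \sqrt{1 - \frac{(4\ \mathrm{m/s})^2}{(4.5\ \mathrm{m/s})^2}}
  \approx 0.46\, L_0
```

So a cyclist pedaling past at 4 m/s would appear compressed to less than half his rest length, which is why the effect was so obvious to Mr. Tompkins.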

As I explained in the Introduction to Softwarephysics, softwarephysics is a simulated science designed to help explain how the simulated Software Universe that we have created for ourselves behaves. To do so, I simply noticed that, like our physical Universe, the Software Universe is quantized and extremely nonlinear in nature. For more on that, please see The Fundamental Problem of Software. Thanks to quantum mechanics (1926), we now know that our physical Universe is quantized into very small chunks of matter and energy, and probably small chunks of space and time as well. Similarly, the Software Universe is composed of quantized chunks of software that start off as discrete characters in software source code (see Quantum Software for details). Thanks to quantum mechanics, we also now know that the macroscopic behaviors of our Universe are an outgrowth of the quantum mechanical operations of the atoms within it. Similarly, the macroscopic operations of the Software Universe are an outgrowth of the quantized operations of the source code that makes it all work. Now, because very small changes to software source code can produce hugely significant changes in the way software operates, software is probably the most nonlinear substance known to mankind. The extreme nonlinear behavior of quantized software, combined with the devastating effects of the second law of thermodynamics, which normally produce very buggy, non-functional software, necessarily brings in the Darwinian pressures that have caused software to slowly evolve over the past 75 years, or 2.4 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941.
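The extreme nonlinearity of software can be sketched in a few lines. This is my own hedged illustration, not code from the post: a single-character "mutation" in source code completely changes a program's behavior, which is exactly the kind of copying error the second law of thermodynamics makes nearly inevitable.

```python
# A one-character change in source code produces a drastically
# different program - the extreme nonlinearity of software.

def intended(balance, rate):
    # The intended code: accrue interest on a balance.
    return balance * (1 + rate)

def mutated(balance, rate):
    # The same code after a one-character mutation: '+' became '-'.
    # The account now shrinks instead of growing.
    return balance * (1 - rate)

print(round(intended(1000.0, 0.05), 2))  # 1050.0
print(round(mutated(1000.0, 0.05), 2))   # 950.0

# The post's figure of "75 years, or 2.4 billion seconds" checks out:
seconds = 75 * 365.25 * 24 * 3600
print(round(seconds / 1e9, 1))  # 2.4
```

A contaminated sample of a physical substance still behaves much like the pure substance, but a program differing by one character from a working program usually does not work at all.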

Now the reason Mr. Tompkins never noticed the effects of the special theory of relativity in his everyday life was because the speed of light is so large, but once the speed of light was reduced to 10 mph in an alternative universe, all of the strange effects of the special theory of relativity became readily apparent, and with enough time, would have become quite normal to him as a part of everyday life. Similarly, the strange effects of quantum mechanics only seem strange to us because Planck's constant is so very small - 6.62607004 × 10⁻³⁴ kg m²/sec - and therefore, only become apparent for very small things like atoms and electrons. However, if Planck's constant were very much larger, then we would also begin to grow accustomed to the strange behaviors of objects behaving in a quantum mechanical way. For example, in quantum mechanics the spin of a single electron can be both up and down at the same time, but in the classical world that we are used to, macroscopic things like a child's top can only have a spin of up or down at any given time. The top can only spin in a clockwise or counterclockwise manner at one time - it cannot do both at the same time. Similarly, in quantum mechanics a photon or electron can go through both slits of a double slit experiment at the same time, so long as you do not put detectors at the slit locations.

Figure 1 – A macroscopic top can only spin clockwise or counterclockwise at one time.

Figure 2 – But electrons can be in a mixed quantum mechanical state in which they both spin up and spin down at the same time.

Figure 3 – Similarly, tennis balls can only go through one slit in a fence at a time. They cannot go through both slits of a fence at the same time.

Figure 4 – But at the smallest of scales in our quantum mechanical Universe, electrons and photons can go through both slits at the same time, producing an interference pattern.

Figure 5 – You can see this interference pattern of photons if you look at a distant porch light through the mesh of a sheer window curtain.

So in quantum mechanics at the smallest of scales, things can be both true and false at the same time. Fortunately for us, at the macroscopic sizes of everyday life, these bizarre quantum effects of nature seem to fade away, so that the things I just described are either true or false in everyday life. Macroscopic tops either spin up or spin down, and tennis balls pass through either one slit or the other, but not both at the same time. Indeed, it is rather strange that, although all of the fundamental particles of our Universe seem to behave in a fuzzy quantum mechanical manner in which true things and false things can both seem to blend into a cosmic grayness of ignorance, at the macroscopic level of our physical Universe, there are still such things as absolute truth and absolute falsehoods that can be measured in a laboratory in a reproducible manner. This must have been so for the Darwinian processes of innovation honed by natural selection to have brought us forth. After all, if Schrödinger's cat could really be both dead and alive at the same time, these Darwinian processes could not have worked, and we would not be here contemplating the differences between true and false assertions. The end result is that in our physical Universe, at the smallest of scales, there is no absolute truth, there are only quantum mechanical opinions, but at the macroscopic level of everyday life, there are indeed such things as absolute truth and absolute falsehoods, and these qualities can be measured in a laboratory in a reproducible manner.
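The two quantum behaviors described above can be written compactly in standard quantum mechanics notation (the notation is textbook material, not specific to this post). An electron in an equal superposition of spin-up and spin-down, and the double-slit rule that amplitudes add before squaring, look like this:

```latex
|\psi\rangle = \tfrac{1}{\sqrt{2}}\left(|\uparrow\rangle + |\downarrow\rangle\right)

P = \left|\psi_1 + \psi_2\right|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}\!\left(\psi_1^{*}\psi_2\right)
```

The cross term 2 Re(ψ₁*ψ₂) is the interference term that photons and electrons exhibit and tennis balls do not; classical probabilities simply add as |ψ₁|² + |ψ₂|².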

The Current Bizarre World of Political Social Media Software in the United States
Now imagine that our Mr. Tompkins had entered into a bizarre alternative universe in which things were just the opposite. Imagine a universe in which, at the smallest of scales things operated classically, as if things were either absolutely true or false, but at a macroscopic level, things were seen to be both true and false at the same time! Well, we currently do have such an alternative universe close at hand to explore. It is the current bizarre world of political social media software in the United States of America. Recall that currently, the Software Universe runs on classical computers in which a bit can be either a "1" or a "0". In a classical computer a bit can only be a "1" or a "0" at any given time - it cannot be both a "1" and a "0" at the same time. For that you would need to have software running on a quantum computer, and for the most part, we are not there yet. So at the smallest of scales in our current Software Universe, the concept of there actually being a real difference between true and false assertions is fundamental. None of the current software code that makes it all work could possibly run if this were not the case. So it is quite strange that at the macroscopic level of political social media software in the United States, just the opposite seems to be the case. Unfortunately, in today's strange world of political social media software, there seems to be no right or wrong and no distinction between the truth and lies. We now have "alternative facts" and claims of "fake news" abounding, and Twitter feeds from those in power loaded down with false information. Because of this, for any given assertion, 30% of Americans will think that the assertion is true, while 70% of Americans will think that the assertion is false. In the Software Universe there are no longer any facts; there are only opinions in a seemingly upside-down quantum mechanical sense.
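The contrast between classical bits and qubits drawn above can be sketched in a few lines of Python (a hedged illustration of my own; the variable names are mine): a classical bit is exactly 0 or 1, while a qubit is described by two amplitudes whose squared magnitudes give the probabilities of measuring "0" or "1".

```python
import math

# A classical bit: exactly 0 or 1, never both at the same time.
classical_bit = 1
assert classical_bit in (0, 1)

# A qubit in an equal superposition of "0" and "1": two amplitudes
# whose squared magnitudes are the measurement probabilities.
amp0 = 1 / math.sqrt(2)
amp1 = 1 / math.sqrt(2)
p0 = abs(amp0) ** 2            # probability of measuring "0"
p1 = abs(amp1) ** 2            # probability of measuring "1"
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
assert math.isclose(p0 + p1, 1.0)  # probabilities must sum to 1
```

Until quantum computers become commonplace, every piece of software in the Software Universe ultimately rests on the first half of this sketch: bits that are unambiguously "1" or "0".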

The Danger of Believing in Things
In The Danger of Believing in Things I highlighted the dangers of not employing critical thought when evaluating assertions in our physical Universe. The problem today is that most people now seem to spend more time living in the simulated Software Universe that we have created than in our actual physical Universe. The end result is that, instead of seeking out the truth, the worldview memes infecting our minds simply seek out supporting memes in the Software Universe that reinforce them. But unlike in our current simulated Software Universe, where those worldview memes can be both absolutely true and absolutely false at the same time, in our physical Universe, which behaves classically at the day-to-day scales in which we all live, things can still only be absolutely true or false, but not both. The most dangerous aspect of this new fake reality is that the new Administration of the United States of America maintains that climate change is a hoax, simply because they say it is a hoax, and sadly, for many Americans that is good enough. Now climate change might indeed be a hoax in our simulated Software Universe, or it might not be, because there is no absolute truth in our simulated Software Universe at the macroscopic level; there are only opinions. But that is not the case in the physical Universe in which we all actually live, where climate change is rapidly underway. For more on that please see This Message on Climate Change Was Brought to You by SOFTWARE. In the real physical Universe in which we all actually live, it is very important that we always take the words of Richard Feynman seriously: "reality must take precedence over public relations, for Nature cannot be fooled."
