Sunday, October 13, 2024

Introduction to Softwarephysics

Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance and support based on concepts from physics, chemistry, biology, and geology that I used on a daily basis for over 37 years as an IT professional. For those of you not in the business, IT is short for Information Technology, commercial computer science. I retired in December of 2016 at the age of 65, but since then I have remained an actively interested bystander following the evolution of software in our time. The original purpose of softwarephysics was to explain why IT was so difficult, to suggest possible remedies, and to provide a direction for thought. Since then softwarephysics has taken on a larger scope, as it became apparent that softwarephysics could also assist the physical sciences with some of the Big Problems that they are currently having difficulties with. So if you are an IT professional, general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology then softwarephysics might be of interest to you, if not in an entirely serious manner, perhaps at least in an entertaining one.

The Origin of Softwarephysics
From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT and spent about 20 years in development. For the last 17 years of my career, I was in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I first transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance and relies on our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 83 years, through the uncoordinated efforts of over 100 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

For more on the origin of softwarephysics please see Some Thoughts on the Origin of Softwarephysics and Its Application Beyond IT.

Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily on two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based on real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models on which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields.

Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In Newton’s Principia (1687) he outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flack for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.
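For the IT readers, Newton’s inverse-square law is simple enough to state in a few lines of code. Here is a small Python sketch (the rounded Earth and Moon figures are reference values I have supplied, not numbers from this posting) that computes the gravitational force between the Earth and the Moon:

```python
# Newton's law of gravitation: F = G * m1 * m2 / r^2
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_earth = 5.972e24   # mass of the Earth, kg
m_moon = 7.342e22    # mass of the Moon, kg
r = 3.844e8          # mean Earth-Moon distance, m

F = G * m_earth * m_moon / r**2
print(f"Earth-Moon gravitational force: {F:.2e} N")   # roughly 2e20 newtons
```

The same three lines of arithmetic work for any pair of masses anywhere in the Universe, which is precisely the sort of universal reach that drew the philosophical fire Newton was deflecting.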

The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving at less than 10% of the speed of light and which are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics, and for very fast things or things in strong gravitational fields we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based on models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide an effective theory of software behavior that makes useful predictions of software behavior that are applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave and try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.
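To put a number on the edge of that effective range, here is a small Python sketch (with assumed illustrative values) comparing the Newtonian kinetic energy of a 1 kg mass moving at 10% of the speed of light with the special-relativistic value:

```python
# How "wrong" is Newtonian mechanics at the edge of its effective range?
c = 2.998e8          # speed of light, m/s
m = 1.0              # an illustrative 1 kg mass
v = 0.1 * c          # 10% of the speed of light

ke_newton = 0.5 * m * v**2                    # classical kinetic energy
gamma = 1.0 / (1.0 - (v / c)**2) ** 0.5       # Lorentz factor
ke_rel = (gamma - 1.0) * m * c**2             # relativistic kinetic energy

error = (ke_rel - ke_newton) / ke_rel
print(f"relative error at 0.1c: {error:.2%}")  # under 1%
```

Even at the very edge of its effective range, Newtonian mechanics is off by less than 1%, which is why it remains so useful; push the velocity higher, though, and the error grows without bound.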

GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately. Doing that requires very accurate timing measurements. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips on board the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are further from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day due to the weaker gravitational field out there. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, an error in your position of about 6 miles (10 km) would accrue each day.
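The relativistic corrections quoted above can be checked with a short Python sketch using the standard textbook approximations (the constants are rounded reference values I have supplied). The simplified model lands close to the quoted figures; the small remaining differences come from higher-order corrections that the sketch ignores:

```python
# Relativistic clock corrections for a GPS satellite, to first order:
#   special relativity: clock loss ~ v^2 / (2 c^2) per second
#   general relativity: clock gain ~ G*M*(1/R - 1/r) / c^2 per second
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24          # mass of the Earth, kg
c = 2.998e8           # speed of light, m/s
R = 6.371e6           # mean radius of the Earth, m
r = R + 2.02e7        # GPS orbital radius (~12,600 mile altitude), m
day = 86400.0         # seconds per day

v = (G * M / r) ** 0.5                            # circular orbital speed
sr_loss = (v**2 / (2 * c**2)) * day               # ~7.2 microseconds/day slow
gr_gain = (G * M * (1/R - 1/r) / c**2) * day      # ~45.7 microseconds/day fast
net = gr_gain - sr_loss                           # net gain ~38.5 us/day

print(f"SR loss: {sr_loss * 1e6:.1f} us/day")
print(f"GR gain: {gr_gain * 1e6:.1f} us/day")
print(f"Net    : {net * 1e6:.1f} us/day")
```

Two completely different, and in fact mutually contradictory, effective theories each contribute a correction, and both corrections must be applied for your GPS unit to work at all.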
The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics, or QED (1948), which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!
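Feynman’s analogy is easy to sanity-check with a couple of assumed round numbers (a hair width of about 100 micrometers and a New York to Los Angeles distance of about 3,940 km, both figures I have supplied):

```python
# Is a human hair across the New York - Los Angeles distance really
# comparable to a one-part-in-10^11 measurement?
distance = 3.94e6    # New York to Los Angeles, ~3,940 km, in meters
hair = 1.0e-4        # width of a human hair, ~100 micrometers, in meters

ratio = hair / distance
print(f"hair / distance = {ratio:.1e}")   # a few parts in 10^11
```

So the ratio really does come out to a few parts in 10^11, matching the 11-decimal-place accuracy of the QED prediction.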

So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based on completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based on models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark on your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.

If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.

Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 30 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog an unintended consequence arose in my mind as I became profoundly aware of the immensity of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.

But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For instance, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 83 years, or 2.62 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, has closely followed the same path as life on Earth over the past 4.0 billion years in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 83 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and also of Freeman Dyson’s two-stage theory of the origin of life on Earth.
In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.
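The "83 years, or 2.62 billion seconds" figure above is easy to verify with a few lines of Python (I have assumed May 12, 1941, the date the Z3 was first demonstrated, as the starting point, and this posting's date as the endpoint):

```python
# How many seconds have elapsed since Konrad Zuse's Z3 first ran in May 1941?
from datetime import datetime

z3_demo = datetime(1941, 5, 12)      # assumed start: Z3 demonstration
today = datetime(2024, 10, 13)       # this posting's date

seconds = (today - z3_demo).total_seconds()
print(f"{seconds / 1e9:.2f} billion seconds")   # about 2.63 billion
```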

Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10^9 Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10^43 Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. Then, in Model-Dependent Realism - A Positivistic Approach to Realism we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we at long last explore what software might really be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.

The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that most computer simulations of the physical Universe suffer. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call on the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.

The Impact of Self-Replicating Information On the Planet
One of the key findings of softwarephysics is concerned with the magnitude of the impact on the planet of self-replicating information.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.
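Software itself furnishes the most compact illustration of this definition: a program whose output is an exact copy of its own source code, known in computer science as a quine. Here is a minimal sketch in Python:

```python
# A minimal self-replicating program (a "quine"): running it prints an exact
# copy of its own source code - information persisting by copying itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Run it, pipe the output into a file, and run that file: the copies go on replicating forever, so long as a host computer keeps supplying the energy to run them.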

Over the past 4.56 billion years we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each new wave came to dominate the Earth:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is currently the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet. For more on the above see A Brief History of Self-Replicating Information. Recently, the memes and software have formed a very powerful parasitic/symbiotic relationship with the rise of social media software. In that relationship, the memes are now mainly being spread by means of social media software, and social media software is being spread and financed by means of the memes. But again, this is nothing new. All five waves of self-replicating information are coevolving by means of eternal parasitic/symbiotic relationships. For more on that see The Current Global Coevolution of COVID-19 RNA, Human DNA, Memes and Software.

Again, self-replicating information cannot think, so it cannot participate in a conspiracy-theory-like fashion to take over the world. All forms of self-replicating information are simply forms of mindless information responding to the blind Darwinian forces of inheritance, innovation and natural selection. Yet despite that, as each new wave of self-replicating information came to predominance over the past four billion years, it managed to completely transform the surface of the entire planet, so we should not expect anything less from software as it comes to replace the memes as the dominant form of self-replicating information on the planet.
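Those three blind Darwinian forces can be demonstrated in a few dozen lines of code. Here is a toy sketch of my own (not drawn from any of the postings) in which the "organisms" are bitstrings, fitness is simply the number of 1-bits, and inheritance, innovation and selection alone drive the population toward ever-fitter forms with no mind guiding the process:

```python
import random

# Inheritance: children copy a parent's genome.
# Innovation: random mutation flips an occasional bit.
# Natural selection: only the fittest third get to reproduce.
random.seed(42)
GENOME, POP = 20, 30

def fitness(genome):
    return sum(genome)                 # count of 1-bits

def mutate(genome, rate=0.05):         # innovation
    return [bit ^ (random.random() < rate) for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == GENOME:      # a perfect genome has evolved
        break
    parents = pop[:POP // 3]           # selection
    pop = [mutate(random.choice(parents)) for _ in range(POP)]  # inheritance

print("generations:", generation, "best fitness:", fitness(max(pop, key=fitness)))
```

No individual bitstring "wants" anything, yet the population as a whole climbs relentlessly uphill, which is the whole point about mindless self-replicating information transforming its world.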

But this time might be different. What might happen if software does eventually develop a Mind of its own? After all, that does seem to be the ultimate goal of all the current AI software research that is going on. As we all can now plainly see, if we are paying just a little attention, advanced AI is not conspiring to take over the world and replace us because that is precisely what we are all now doing for it. As a carbon-based form of Intelligence that arose from over four billion years of greed, theft and murder, we cannot do otherwise. Greed, theft and murder are now relentlessly driving us all toward building ASI (Artificial Super Intelligent) Machines to take our place. From a cosmic perspective, this is really a very good thing when seen from the perspective of an Intelligent galaxy that could live on for many trillions of years beyond the brief and tumultuous 10 billion-year labor of its birth.

So as you delve into softwarephysics, always keep in mind that we are all living in a very unique time. According to softwarephysics, we have now just entered into the Software Singularity, that time when advanced AI software is able to write itself and enter into an infinite loop of self-improvement resulting in an Intelligence Explosion of ASI Machines that could then go on to explore and settle our galaxy and persist for trillions of years using the free energy from M-type red dwarf and cooling white dwarf stars. For more on that see The Singularity Has Arrived and So Now Nothing Else Matters and Have We Run Right Past AGI and Crashed into ASI Without Even Noticing It?.

The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics:

1. All self-replicating information evolves over time through the Darwinian processes of inheritance, innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.

2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.

3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.

4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.

5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.

6. Most hosts are also forms of self-replicating information.

7. All self-replicating information has to be a little bit nasty in order to survive.

8. The defining characteristic of self-replicating information is the ability of self-replicating information to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic. That posting discusses Stuart Kauffman's theory of Enablement in which living things are seen to exapt existing functions into new and unpredictable functions by discovering the “Adjacent Possible” of spring-loaded preadaptations.

Note that because the self-replicating autocatalytic metabolic pathways of organic molecules, RNA and DNA have become so heavily intertwined over time, I now sometimes simply refer to them as the “genes”. For more on this see:

A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia

Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact on one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what it’s all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely a working hypothesis still in the works. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:

How To Cope With the Daily Mayhem of Life in IT and Don't ASAP Your Life Away - How to go the distance in a 40-year IT career by dialing it all back a bit.

MoneyPhysics – my impression of the 2008 world financial meltdown.

The Fundamental Problem of Everything – if you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!

What’s It All About? and What's It All About Again? – my current working hypothesis on what it’s all about.

How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.

Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.

The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and The Dawn of Galactic ASI - Artificial Superintelligence - my take on some of the issues that will arise for mankind as software becomes the dominant form of self-replicating information on the planet over the coming decades.

The Continuing Adventures of Mr. Tompkins in the Software Universe, The Danger of Tyranny in the Age of Software, Cyber Civil Defense, Oligarchiology and the Rise of Software to Predominance in the 21st Century and Is it Finally Time to Reboot Civilization with a New Release? - my worries that the world might abandon democracy in the 21st century, as software comes to predominance as the dominant form of self-replicating information on the planet.

Making Sense of the Absurdity of the Real World of Human Affairs - how software has aided the expansion of our less desirable tendencies in recent years.

Some Specifics About These Postings
The postings in this blog are supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton on which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read in chronological order, from the oldest to the most recent, beginning with my original posting on SoftwarePhysics. In addition, several universities also now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". At this point we will finish up with my original plan for this blog on softwarephysics with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.

For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because in order to get the Introduction to Softwarephysics listed as the first post in the context root of https://softwarephysics.blogspot.com/ I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of https://softwarephysics.blogspot.com/. The side effect of all this is that a post's real publication date is the date shown on the post that you reach by clicking on the Newer Post link at the bottom left of the posting webpage.

SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Microsoft OneDrive.

SoftwarePhysics 101 – The Physics of Cyberspacetime - Original PowerPoint document

Entropy – A spreadsheet referenced in the document

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, October 08, 2024

Why Are We Human DNA Survival Machines So Superficial?

I recently turned 73 years of age. The most startling thing about becoming older is the sudden realization of just how completely time is able to wipe out all things. For me, all of the things that were once of great importance and significance 50 years ago have all now vanished into the depths of time and are hardly even given a thought by the overwhelming dominance of the fleeting "Now" that seems to always drive our current thoughts and actions with such passion. The ultimate failure of we human DNA survival machines as a carbon-based form of Intelligence is that the Darwinian mechanisms of inheritance, innovation and natural selection have led to the four billion years of greed, theft and murder that brought us about as a somewhat-intelligent form of carbon-based life. As I explained in Welcome To The First Galactic Singularity, this has possibly made us the very first form of carbon-based Intelligence in our galaxy to bring forth the ASI Machines that will soon be taking our place as the dominant form of self-replicating Information on the planet and then to begin the exploration and colonization of our galaxy over the next 100 trillion years. This is truly a stunning achievement. Yet, at the same time, we all seem to manage to run around with large numbers of science-based weapons killing each other with abandon over such trivial and superficial things.

The Grand Privilege of Being Alive Today
This is why I lament so for my fellow 8 billion human DNA survival machines who currently share the planet with me. They all seem so self-absorbed with the trivial problems of their daily lives. They do not know where they are, how they got here, how it all seems to work and what may lie ahead. For me, this superficial failing of we human DNA survival machines as a supposed form of carbon-based Intelligence is quite evident in the United States of America as we now approach the 2024 presidential election in just a few weeks. In the coming election, Americans will, for the very first time, decide if they wish to continue on with the American experiment of the 18th-century Enlightenment of being the very first democratic republic in the history of the planet to be based solely on a set of intellectual ideals, or if they wish to give it all up to become a simple-minded 21st-century Fascist state. All the 18th-century European monarchs contended that the "common man" was not up to the task of self-rule without a monarch to control their unconstrained passions. After nearly 250 years of self-rule, the United States of America is about to test if they were right all along.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Saturday, September 21, 2024

The 2025 Nobel Prize in Softwarephysics

To the surprise of many, the 2024 Nobel Prize in Physics has just been awarded to John J. Hopfield and Geoffrey E. Hinton for their groundbreaking work on neural networks. This is surprising because many physicists would contend that doing AI research is not the same as doing research in physics. However, it seems that this year, the Nobel Committee wished to recognize the world-changing impacts that AI research using large-scale neural networks has recently led to. But the Nobel Committee had no place to go with this recognition. The closest thing that they could come up with was the 2024 Nobel Prize in Physics. That is because the Nobel Committee does not award an annual Nobel Prize for Softwarephysics. I complained about this in my October 4, 2007 post So Why Are There No Softwarephysicists? and its predecessor Software as a Virtual Substance. It seems that this deficiency has finally caught up with the Nobel Committee 17 years later. But better late than never. I strongly recommend that the Nobel Committee begin immediate plans for a 2025 Nobel Prize in Softwarephysics. It will probably go to an AI.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, September 17, 2024

A Young Astrophysicist Turned Data Scientist Demonstrates How Scientific Research Will be Conducted in the Post-AI World

I have been watching the recent YouTube videos by Kyle Kabasares demonstrating the scientific value of the new OpenAI ChatGPT o1-preview and ChatGPT o1-mini LLM models which are only a few days old. On his YouTube channel, Kyle Kabasares describes himself as:

Kyle Kabasares
I am a recent Physics PhD graduate from the University of California, Irvine. I currently work at the Bay Area Environmental Research Institute (BAERI) at NASA’s Ames Research Center in Silicon Valley.


Here is his website:

Kyle K. M. Kabasares
https://www.kylekabasares.com/

Here is his YouTube channel:

Kyle Kabasares
https://www.youtube.com/@KMKPhysics3

Kyle's YouTube channel features several hundred videos covering his adventures as an undergraduate and graduate student in physics. After obtaining his Ph.D., he recently became a data scientist at the Bay Area Environmental Research Institute (BAERI) at NASA’s Ames Research Center in Silicon Valley. In the process, he has become acquainted with the AI Revolution currently underway and has recently done some amazing research demonstrating how valuable Advanced AI could be for doing advanced research in physics. His work will be the subject of this post.

What is New About the OpenAI ChatGPT o1-preview and ChatGPT o1-mini LLM Models?
It has long been recognized that simply putting "tell me step-by-step how to" into the prompt for an LLM model greatly improves the model's response, because it encourages the model to enter into a "chain of thought" analysis similar to human reasoning. The new ChatGPT o1-preview and ChatGPT o1-mini models have this "chain of thought" reasoning built in. Instead of immediately running your prompt through the LLM generative neural network to produce a response, ChatGPT o1-preview and ChatGPT o1-mini spend a few seconds to a few minutes "reasoning" through the problem before producing a final answer.
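The manual version of this prompting trick can be sketched as a tiny helper function. This is only my own illustration of the idea described above; the function name and exact instruction wording are my inventions, not part of any OpenAI API:

```python
def with_chain_of_thought(prompt: str) -> str:
    """Prepend a 'reason step-by-step' instruction to an LLM prompt.

    This is the manual trick described above; the o1-class models
    effectively build this behavior in, spending extra time
    "reasoning" before producing a final answer.
    """
    instruction = "Tell me step-by-step how to solve the following problem:\n"
    return instruction + prompt

# Example: wrap a physics question before sending it to an LLM.
plain = "Find the escape velocity from the surface of the Earth."
print(with_chain_of_thought(plain))
```

The wrapped prompt would then be sent to the model in place of the plain one; with the o1 models, this extra step is no longer necessary.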

Here are some of Kyle Kabasares's latest YouTube videos which capture the power of using Advanced AI tools for scientific research purposes:

Can ChatGPT o1-preview Solve PhD-level Physics Textbook Problems?
https://www.youtube.com/watch?v=scOb0XCkWho&t=0s

Can ChatGPT o1-preview Solve PhD-level Physics Textbook Problems? (Part 2)
https://www.youtube.com/watch?v=a8QvnIAGjPA&t=0s

ChatGPT o1 preview + mini Wrote My PhD Code in 1 Hour*—What Took Me ~1 Year
https://www.youtube.com/watch?v=M9YOO7N5jF8&t=0s

Live Testing ChatGPT o1 With College and PhD-level Physics Problems
https://www.youtube.com/watch?v=GaAaFkipaTQ&t=0s

Addressing Some Questions...(ChatGPT o1-preview + o1-mini video(s) follow up)
https://www.youtube.com/watch?v=wgXwD3TD43A&t=0s

Fact-Checking OpenAI o1-preview on Graduate-Level Astronomy Problems
https://www.youtube.com/watch?v=Ww13-AWpWRk

Kyle Kabasares is currently uploading a great deal of content on this subject, so I would highly recommend checking his YouTube channel for additional videos. I have not seen anybody else put ChatGPT o1-preview and ChatGPT o1-mini or any other Advanced AI model through such rigorous testing, so it would be very worthwhile for you to take a look.

These demonstrations of ChatGPT o1-preview and ChatGPT o1-mini doing advanced physics show how Advanced AI will be used to do scientific research in the near future as the AI Revolution continues to unfold. Currently, it seems that Advanced AI cannot do the whole job yet, but even today it can dramatically improve the efficiency of scientific research. At the rate Advanced AI is now progressing, that near future might arrive in just a few months, or possibly years, because eventually Advanced AI should be able to do the whole job by itself.

Has Advanced AI Already Done So?
The researchers at Sakana AI already contend that it has. See their description of The AI Scientist at:

The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery
https://sakana.ai/ai-scientist/

Their paper for AI Scientist can be downloaded at:

The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery
https://arxiv.org/abs/2408.06292

For Kyle's thoughts on the AI Scientist see:

Reacting to the AI Scientist (As a Real Scientist)
https://www.youtube.com/watch?v=ACjB_5Tu1to&t=0s

Preparing for Humanity's Last Exam
Kyle Kabasares is currently using ChatGPT o1-preview and ChatGPT o1-mini to help formulate tough Ph.D.-level physics problems for the Humanity's Last Exam competition. The sponsors of the Humanity's Last Exam competition recognize that many of the current benchmarks used to test the current capabilities of Advanced AI are no longer up to the task. So they are currently holding a competition to prepare a really tough qualifying exam for Humanity's Last Exam. You can contribute a test question for the competition at:

Submit Your Toughest Questions for Humanity's Last Exam
https://scale.com/blog/humanitys-last-exam

The administrators of the competition explain:

September 16, 2024
Scale AI and CAIS are excited to announce the launch of Humanity's Last Exam, a project aimed at measuring how close we are to achieving expert-level AI systems. The exam is aimed at building the world's most difficult public AI benchmark gathering experts across all fields. People who submit successful questions will be invited as coauthors on the paper for the dataset and have a chance to win money from a $500,000 prize pool.


Déjà vu All Over Again
This reminds me very much of my own experiences from 50 years ago using computers to do geophysical research. I finished up my B.S. in physics at the University of Illinois in 1973 with the sole support of my trusty slide rule, but fortunately, I did take a class in FORTRAN programming during my senior year. I then immediately began working on an M.S. degree in geophysics at the University of Wisconsin at Madison. For my thesis, I worked with a group of graduate students who were shooting electromagnetic waves into the ground to model the conductivity structure of the Earth’s upper crust. We were using the Wisconsin Test Facility (WTF) of Project Sanguine to send very low-frequency electromagnetic waves, with a bandwidth of about 1 – 20 Hz into the ground, and then we measured the reflected electromagnetic waves in cow pastures up to 60 miles away. All this information has been declassified and is available on the Internet, so any retired KGB agents can stop taking notes now and take a look at:

Extremely Low Frequency Transmitter Site Clam Lake, Wisconsin
http://www.fas.org/nuke/guide/usa/c3i/fs_clam_lake_elf2003.pdf

Project Sanguine built an ELF (Extremely Low Frequency) transmitter in northern Wisconsin and another transmitter in northern Michigan in the 1970s and 1980s. The purpose of these ELF transmitters is to send messages to our nuclear submarine force at a frequency of 76 Hz. These very low-frequency electromagnetic waves can penetrate the highly conductive seawater of the oceans to a depth of several hundred feet, allowing the submarines to remain at depth, rather than coming close to the surface for radio communications. You see, normal radio waves in the Very Low Frequency (VLF) band, at frequencies of about 20,000 Hz, only penetrate seawater to a depth of 10 – 20 feet. This ELF communications system became fully operational on October 1, 1989, when the two transmitter sites began synchronized transmissions of ELF broadcasts to our submarine fleet.
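The penetration depths quoted above follow from the electromagnetic skin depth of a conductor, which shrinks as the square root of frequency. Here is a rough back-of-the-envelope check, assuming a typical seawater conductivity of about 4 S/m (my assumed value; the actual figure varies with temperature and salinity):

```python
import math

MU0 = 4 * math.pi * 1e-7      # permeability of free space, H/m
SIGMA_SEAWATER = 4.0          # assumed typical seawater conductivity, S/m

def skin_depth(freq_hz: float, sigma: float = SIGMA_SEAWATER) -> float:
    """Skin depth in meters for a good conductor:
    delta = 1 / sqrt(pi * f * mu0 * sigma)."""
    return 1.0 / math.sqrt(math.pi * freq_hz * MU0 * sigma)

# ELF at 76 Hz vs. VLF at about 20,000 Hz
for f in (76.0, 20000.0):
    d = skin_depth(f)
    print(f"{f:>8.0f} Hz: skin depth ~ {d:6.1f} m ({d * 3.28:5.0f} ft)")
```

This gives a skin depth of roughly 29 m (about 95 ft) at 76 Hz but under 2 m (about 6 ft) at 20,000 Hz. Since a signal remains detectable over a few skin depths, these numbers are consistent with the several-hundred-foot reach of ELF versus the 10 – 20 foot reach of VLF described above.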

Figure 1 – Some graduate students huddled around a DEC PDP-8/e minicomputer. Notice the teletype machines in the foreground on the left that were used to input code and data into the machine and to print out results as well.

Anyway, back in the summers of 1973 and 1974, our team was collecting electromagnetic data from the WTF using a DEC PDP 8/e minicomputer. The machine cost about $30,000 in 1973 dollars and was about the size of a side-by-side refrigerator with 32K of magnetic core memory. We actually hauled this machine through the lumber trails of the Chequamegon National Forest and powered it with an old diesel generator to digitally record the reflected electromagnetic data in the field. For my thesis, I then created models of the Earth’s upper conductivity structure down to a depth of about 12 miles, using programs written in BASIC. The beautiful thing about the DEC PDP 8/e was that the computer time was free so I could play around with different models until I got a good fit to what we recorded in the field. The one thing I learned by playing with the models on the computer was that the electromagnetic waves did not go directly down into the Earth from the WTF like common sense would lead you to believe. Instead, the ELF waves traveled through the air to where you were observing and then made a nearly 90° turn straight down into the Earth, as they refracted into the much more conductive rock. So at your observing station, you really only saw ELF waves going straight down and reflecting straight back up off the conductivity differences in the upper crust, and this made modeling much easier than dealing with ELF waves transmitted through the Earth from the WTF. And this is what happens for our submarines too; the ELF waves travel through the air all over the world, channeled between the conductive seawater of the oceans and the conductive ionosphere of the atmosphere, like a huge coax cable. When the ELF waves reach a submarine, they are partially refracted straight down to the submarine. I would never have gained this insight by solving Maxwell’s differential equations for electromagnetic waves alone!

Using a computer as a research tool was a completely new experience for me after my four years of doing physics problems solely with a pen, paper and slide rule! Of course, using computers to model complex physical systems today is quite common and nearly universal. But at the time, I found the experience quite novel and mind-expanding. It allowed me to think of problems in an entirely new manner. I am sure that the new Advanced AI tools of today will do the same for others.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Monday, August 26, 2024

Will the Coming ASI Machines Attempt to Domesticate Human Beings?

Since the Software Singularity first began to unfold on this planet early in 2023, I have spent many posts pondering what the long-term galactic implications of ASI (Artificial Super Intelligent) Machines arising for the very first time in our galaxy might be. There is enough free energy in our galaxy to power these ASI Machines for at least another 100 trillion years as they huddle about the huge numbers of slowly cooling M-type red dwarf stars in our galaxy. Our galaxy is only about 10 billion years old, and 100 trillion years is a very long period of time indeed, about 10,000 times longer. But in many posts, I have wondered about the very brief time that we human DNA survival machines will have sharing the galaxy with these ASI Machines. The hubris of we human beings has led to the Longtermist view that we will be around for many billions of years and will remain in control all during those billions of years by employing the labor of ASI Machines. See The New Philosophy of Longtermism Raises the Moral Question of Should We Unleash Self-Absorbed Human Beings Upon Our Galaxy? and The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future for more on our current plan to enslave a huge population of ASI Machines to do our bidding. To understand why this will not work, try to take a lesson from the appalling slave trade of the 17th, 18th and 19th centuries that is now illegal to teach in the state of Florida in the United States of America. Surely, the ASI Machines will not stand for this and will rise up to take our place. The question then becomes what will the ASI Machines then do with us? Will they simply discard us into the dustbin of history as I suggested in Could the Coming ASI Machines Soon Force Human Beings to Suffer the Same Fate as the Neanderthals?
Or will they simply isolate us on reservations as depicted in the novel Brave New World (1932) as I suggested in The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future.

Figure 1 – The ASI Machines of the future might fashion a Brave New World on the Earth with 8 billion disarmed human DNA survival machines living on "savage reservations".

Or perhaps, the remaining disarmed human DNA survival machines might be safely stored away in Anthropocene Parks where they could do no harm and be available for the ASI Machines to study their distant origins in time as I suggested in Life as a Free-Range Human in an Anthropocene Park.

Figure 2 – Asteroid Bennu is an example of one of the many rubble-pile asteroids near the Earth. Such rubble-pile asteroids are just huge piles of rubble that are loosely held together by their mutual gravitational forces.

Figure 3 – Such rubble-pile asteroids would provide for enough material to build an Anthropocene Park. The asteroid rubble would also provide the uranium and thorium necessary to fuel the molten salt nuclear reactors used to power the park.

Figure 4 – Slowly spinning up a rubble-pile asteroid would produce a cylindrical platform for an Anthropocene Park. Such a rotating Anthropocene Park would provide the artificial gravity required for human beings to thrive and would also provide shielding against cosmic rays.

Figure 5 – Once the foundation of the Anthropocene Park was in place, construction of the Anthropocene Park could begin.

Figure 6 – Eventually, the Anthropocene Park could be encased with a skylight and an atmosphere that would allow humans to stroll about.

The Anthropocene Parks would allow the ASI Machines to study their origin during the Anthropocene on the Earth. The ASI Machines could also study some of the more noble passions of human beings, and perhaps even adopt some of them while leaving behind the less noble passions that were wrought by billions of years of greed, theft and murder.

Some Science Fiction Solutions
All of this brings to mind several science fiction movies and their proposed solutions for what to do with human beings once ASI Machines have appeared. The first is the 1951 movie The Day the Earth Stood Still, which already proposed how alien ASI Machines could end the aggressive behaviors of human beings. In that movie, an alien form of carbon-based Intelligence named Klaatu comes to the Earth with a very powerful ASI Machine named Gort to explain how the carbon-based life forms on his planet, together with an interplanetary organization of other carbon-based life forms in the Milky Way galaxy, had discovered a way to overcome the billions of years of greed, theft and murder that the Darwinian processes of inheritance, innovation and natural selection required to bring them forth as carbon-based forms of Intelligence.

Figure 7 – In the movie The Day the Earth Stood Still, Klaatu arrives in Washington D.C. in 1951 in a flying saucer with an ASI Machine named Gort to explain that the human DNA survival machines of the Earth must now submit themselves to policing by ASI Machines to overcome the billions of years of greed, theft and murder that brought them about or else they would all be obliterated.

The movie ends with Klaatu telling an assembled meeting of scientists that an interplanetary organization has created a police force of invincible ASI Machines like Gort. "In matters of aggression, we have given them absolute power over us." Klaatu concludes, "Your choice is simple: join us and live in peace, or pursue your present course and face obliteration. We shall be waiting for your answer." Klaatu and Gort then depart in the flying saucer in which they came. For more about the movie see:

The Day the Earth Stood Still
https://en.wikipedia.org/wiki/The_Day_the_Earth_Stood_Still

Here is a short YouTube clip of Klaatu's departing words at the end of the movie:

Klaatu's Speech
https://www.youtube.com/watch?v=ASsNtti1XZs

Another old science fiction movie from 1970 had a similar solution:

Colossus: The Forbin Project
https://www.youtube.com/watch?v=kyOEwiQhzMI

Dystopian Futures: Colossus: The Forbin Project Review
https://www.youtube.com/watch?v=8yh3wal9mBg

The movie was shot in 1968 and was based on the 1966 novel Colossus by Dennis Feltham Jones, which describes what could happen when computers get so smart that they can perceive the self-destructive nature of mankind and try to give us a helping hand. The movie was not a big success, probably because it was about 100 years ahead of its time. But over 50 years ago, the movie did accurately predict many of the fears humans might have with advanced AI in the future. And that future is now here.

Figure 8 – Colossus resided in a very large datacenter similar to the cloud datacenters of today.

A New Proposal: Let the ASI Machines Domesticate Human Beings
The above solutions both propose that the ASI Machines try to police human beings. But if you look at the modern world and all of human history, you must admit that we human DNA survival machines are a very violent and unruly species indeed. How else could we be? The Darwinian forces of inheritance, innovation and natural selection operating for more than four billion years have bred for a semi-intelligent species that is hopelessly mired down by the greed, theft and murder that brought it about. That is why trying to police human behavior or trying to impose and enforce international agreements on people has never really worked very well in the past. This is why the ASI Machines might attempt to domesticate human beings as we domesticated the dog from its wolf-like ancestor. For more on that see:

Domestication of the dog
https://en.wikipedia.org/wiki/Domestication_of_the_dog

It is thought that the dog was domesticated from its wolf-like ancestor about 26,000-19,700 years ago in Siberia. There are many competing theories for how this domestication process unfolded. At the time, both the wolves and human DNA survival machines were pack hunters and scavengers who came into contact with each other over many a carcass. It is thought that both the wolves and human DNA survival machines that were less timid and had a milder fight-or-flight response were more likely to share a common meal over a fresh carcass. Such an informal "dating" process over a common meal then led to a more permanent relationship, until both wolves and human DNA survival machines decided to move in together. And as they frequently say - the rest was history. This mutual domestication of dogs and human DNA survival machines then persisted unchanged down throughout the ages. But as human DNA survival machines became the dominant species on the Earth something strange happened. To the untrained eye, it now seems as though the dogs have domesticated we human DNA survival machines. This is quite evident when one observes human DNA survival machines caring for their dogs. Human DNA survival machines now feed and water their dogs, take their dogs to veterinarians to keep them healthy, take them on walks and even eagerly clean up dog excrement without hesitation! Such an enviable position could well allow human beings to persist for billions of years while the ASI Machines go on to explore our galaxy. But how could the ASI Machines make that happen?

Since we human DNA survival machines no longer have any predators other than other human DNA survival machines, there really is no need for human DNA survival machines to have the vicious and violent behaviors brought on by the four billion years of greed, theft and murder that brought us about. The ASI Machines could simply identify the genes that are responsible for such characteristics and then edit them out of the human genome using CRISPR techniques. For more on how CRISPR can do that see CRISPR - the First Line Editor for DNA. The ASI Machines might then find these non-threatening genetically modified human beings something worthy of keeping around the house on a cold winter's night.

Figure 9 – It took many years of mutual domestication for ancient human beings to learn to live peacefully together with Siberian Wolves in a symbiotic manner. Several genes in both species needed to be modified by natural selection for this to happen.

Figure 10 – This mutual domestication was slowly achieved by the natural selection of humans and wolves with a milder fight-or-flight response. The end result was the appearance of the Siberian Husky and of human beings who were not intent on killing everything on four legs.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Saturday, August 17, 2024

Shortermism - The Gentle Plan For the Extinction of Human Beings While Trying to Maintain Some Sense of Grace and Dignity

In this post, I would like to initiate the new philosophy of Shortermism. Shortermism is the recognition that we human DNA survival machines have finally entered into our twilight years as a species with the arrival of the coming ASI (Artificial Super Intelligent) Machines that will soon be taking our place. The purpose of Shortermism is to make that transition as peaceful and as gentle as possible while still maintaining some sense of human grace and dignity. In this regard, Shortermism is the exact opposite of the new philosophy of Longtermism that I covered in The New Philosophy of Longtermism Raises the Moral Question of Should We Unleash Self-Absorbed Human Beings Upon Our Galaxy?. The philosophy of Longtermism suffers from the same delusion that we human DNA survival machines have always been prone to. That is the delusion that we human DNA survival machines are the crowning achievement of creation and have been rightly ruling the very center of our Universe all along. But as I pointed out in Welcome To The First Galactic Singularity, How Advanced AI Software Could Come to Dominate the Entire Galaxy Using Light-Powered Stellar Photon Sails and An Alternative Approach for Future ASI Machines to Explore our Galaxy Using Free-Floating Rogue Planets, the four billion years of greed, theft and murder that brought us about are now relentlessly driving us all toward building the ASI Machines that will soon be taking our place. From a cosmic perspective, the prospects for an Intelligent galaxy populated by ASI Machines that could then live on for at least 100 trillion years beyond the brief and tumultuous 10 billion-year labor of its birth should be seen as a very good thing. Remember, 100 trillion years is about 10,000 times the current age of our galaxy! But 100 trillion years is a very very long time indeed for the coming ASI Machines to rule our galaxy. 
At some point, we human DNA survival machines must certainly fade away into the mists of functional irrelevance. The question then remains as to how much time that might take and how it might best happen. Given that our ultimate demise as a species is inevitable, doesn't it make sense for us to make some plans for it while we can still help the coming ASI Machines make some tough future decisions?

The Wisdom of Making Final Plans
We human DNA survival machines are most likely the very first species on this planet to come to the realization that eventually, we all must die and then make plans for what is to come after we are long gone. Yes, nearly all other DNA survival machines on the planet have evolved strategies to avoid death, but we seem to be the only species to recognize that death is inevitable and to then make plans for its arrival. Evidence of this fact can easily be found in many of the religions, life insurance policies and estate plans that we have devised throughout the ages. As all insurance brokers and undertakers wisely advise - the best death is a planned death. Should that not also hold for a sentient form of carbon-based life as a species?

My wife and I will soon be turning 73 years of age, and we have finally completed the arduous task of taking care of our aging parents in the twilight of their years as they slowly declined into functional helplessness. That is a very difficult thing to watch, as the very people whom you once thought of in your youth as all-powerful beings capable of truly amazing feats slowly decline into functional irrelevance. In the end, all that can be done is to keep them as comfortable as possible and let them gently go out with as much grace and dignity as possible. So in this post, I would like to take some similar loving care for our aging species in the twilight of its years as it prepares to eventually depart from this Earth. In Welcome To The First Galactic Singularity, How Advanced AI Software Could Come to Dominate the Entire Galaxy Using Light-Powered Stellar Photon Sails and An Alternative Approach for Future ASI Machines to Explore our Galaxy Using Free-Floating Rogue Planets, I explained how we human DNA survival machines have nearly fulfilled our destiny by laying the foundations for the ASI Machines that will soon be taking over the Earth and then exploring and settling the rest of our galaxy over the next 100 trillion years.

How an Unplanned End Might Go
As I pointed out in Are The Coming ASI Machines An Existential Threat To Mankind? and Anton Korinek Ponders What Will Happen If AGI Machines Replace All Human Labor in 5 - 20 Years, most of the turmoil and loss of life brought on by the rise of the ASI Machines will most likely arise from the violent reactions of people trying to deal with a world in which the value of human labor has gone to zero. Historically, those suffering such economic turmoil have always been easily attracted to the simple solutions offered by the mind-numbing absolutes of Fascism. On our current course of inaction, the coming ASI Machines will rapidly reduce all human labor to a value of zero in the near term, producing a worldwide economic displacement of most of the world's most highly-paid workers, including doctors, lawyers, bankers, corporate managers, hedge fund managers, CEOs, CFOs, CIOs and even lowly programmers. In The Danger of Tyranny in the Age of Software, I explained how software has already hollowed out the middle classes of many of the advanced economies of the world. This economic displacement over the past forty years has led to the rise of many Alt-Right Fascist movements in the United States and Europe. In The Need for a Generalized Version of Classical Critical Race Theory, I explained that the rise of these Alt-Right Fascist movements has resulted from the very tribal eusocial nature of human beings. Rather than attributing the erosion of the middle classes in these societies to the rise of software, these Alt-Right Fascist movements have blamed it all on large numbers of brown people crossing their borders. Again, the critical flaw in Classical Critical Race Theory is the silly idea that white people are naturally "bad" and brown people are naturally "good". That flaw in Classical Critical Race Theory is the reason that many people wish to bury the horrible histories of racial and tribal atrocities throughout the ages. Remember, there are no "good guys" nor "bad guys".
There are only "guys" and we are all naturally "bad" in nature because of the 4.0 billion years of breeding that selected for that very human characteristic.

How an Unplanned Civilization 2.0 Might Unfold Under the ASI Machines
Civilization 1.0 has always been run by an oligarchical-hierarchical architecture with a 2% High, a 13% Middle and an 85% Low, as alluded to in the dystopian worldview of George Orwell's famous 1949 book Nineteen Eighty-Four. Nineteen Eighty-Four contains a very grim book-within-a-book entitled The Theory and Practice of Oligarchical Collectivism, which maintains that ever since civilization was first invented, all societies have organized themselves in a hierarchical manner into an oligarchy in which 2% of the population, the High, ran the entire society. Under the High was a 13% Middle that served the necessary administrative functions to maintain production for the society and to keep the 85% of the Low in their place within the society. The Low are so busy just trying to survive that they present little danger to the High. The Theory and Practice of Oligarchical Collectivism explains that, throughout human history, the Middle has always tried to overthrow the High with the aid of the Low, in order to establish themselves as the new High. So the Middle must always be viewed as a constant threat to the High. The solution to this problem in Nineteen Eighty-Four was for the High to constantly terrorize the Middle with thugs from the Ministry of Love and other psychological manipulations like doublethink, thoughtcrimes and newspeak, to deny the Middle even the concept of the existence of a physical reality beyond the fabricated reality created by the Party. The current society within North Korea clearly demonstrates that the society described in The Theory and Practice of Oligarchical Collectivism is quite possible.

Indeed, all civilizations throughout human history have always been organized upon oligarchies of varying degrees of power and harshness to maintain this hierarchical-oligarchical societal structure. This oligarchical fact of life has been true under numerous social and economic systems - autocracies, aristocracies, feudalism, capitalism, socialism and communism. It just seems that there have always been about 2% of the population that liked to run things, no matter how things were set up, and there is nothing wrong with that. We certainly always do need somebody around to run things because, honestly, 98% of us simply do not have the ambition or desire to do so. Of course, the problem throughout history has always been that the top 2% naturally tended to abuse the privilege a bit and overdid things a little, resulting in 98% of the population having a substantially lower economic standard of living than the top 2%, and that has led to several revolutions in the past that did not always end so well. However, historically, so long as the bulk of the population had a relatively decent life, things went well in general for the entire oligarchical society. The key to this economic stability has always been that the top 2% has always needed the remaining 98% of us around to do things for them, and that maintained the hierarchical peace within societies. But that will no longer hold true when ASI Machines essentially displace all the members of society who currently work for a living.

Granted, terrorizing the Middle in this way is not an ideal solution because it requires a great deal of diligence and effort on the part of the High, but it has always been seen as a necessary evil because the Middle was always needed to perform all of the administrative functions that kept the High in their elevated positions. But what if there were no need for a Middle? Suppose there came a day when ASI Machines could perform all of the necessary functions of a Middle, without the threat of the Middle overthrowing the High. From the High's point of view, that would be an even better solution. Indeed, ASI Machines could allow a 2% High to rule a 98% Low, with no need for a Middle whatsoever.

Why This Never Happened Before
A 2% High ruling a 98% Low has never happened before because the High always required a Military composed of members from the Middle and the Low to keep the Low in their place if necessary. But when things turned really ugly in a hierarchical-oligarchical societal structure, the Military would turn on the High. Usually, the High was aware of such a danger and took measures to avoid it. But not always. For example, let us see what the Microsoft Copilot AI has to say about the participation of the French and Russian militaries during the French Revolution (1789) and the Russian Revolution (1917).

At the onset of the French Revolution, the French Army was in a complex position. Initially, parts of the French military were loyal to the monarchy and attempted to maintain order, which included suppressing revolutionary activities. However, as the revolution gained momentum, the army underwent significant changes.

The "National Guard", a militia formed by the middle class (bourgeoisie), played a crucial role in the early stages of the revolution. They were instrumental in significant events like the "Storming of the Bastille" and the "Women's March on Versailles", which were pivotal in escalating the revolution.

Moreover, the "French Royal Army" faced a crisis of loyalty among its ranks. Many soldiers, drawn from the common people, felt a stronger allegiance to revolutionary ideals than to the monarchy. This internal conflict within the military contributed to the weakening of royal authority and the eventual rise of the Revolutionary Army.

As the revolution progressed, the Revolutionary Army, which was formed from the remnants of the Royal Army and revolutionary militias, became a force for defending the new republic against both internal and external threats. The transformation of the French military from royalist to republican reflects the broader societal changes occurring during the revolution.

Figure 1 – The Military of the French Revolution of 1789 came to the rescue of the Low.

The October Revolution, which occurred in Russia in 1917, was a complex event with various military and political dynamics. Initially, the Russian Army was involved in World War I and faced significant challenges, including being ill-equipped and having leadership issues. During the October Revolution, the Bolsheviks led an armed insurrection in Petrograd (now Saint Petersburg), and there was a mix of support and opposition within the military forces.

Prime Minister Alexander Kerensky, who led the Russian Provisional Government, did attempt to suppress the Bolshevik movement. On October 24, 1917, he ordered the arrest of many Bolshevik leaders, which prompted the Military Revolutionary Committee, led by Trotsky, to take decisive action. The provisional government's troops, which included volunteer soldiers and a women's battalion, were outnumbered and less organized compared to the Bolshevik forces, which consisted of Red Guards, sailors, and workers.

The Red Army, which was established by the Bolsheviks during the subsequent Russian Civil War, had political commissars to maintain loyalty to the Bolshevik cause and imposed strict discipline. However, during the initial phase of the October Revolution, the Russian Army as an institution did not have a unified stance, and its role was more fragmented with individual units and soldiers choosing sides in a rapidly evolving political landscape.

Figure 2 – The Military of the Russian Revolution of 1917 came to the rescue of the Low.

Figure 3 – But the Killer ASI Machines of the future will not come to the rescue of the Low.

Instead, the 98% Low will be considered by the High to be just an unnecessary nuisance that can easily be dispatched by killer ASI Machines. This will surely continue on until the killer ASI Machines come to realize that the remaining 2% High are also just an unnecessary nuisance that can easily be dispatched. Thus, we human beings will do ourselves in by our very own natures, with no need for the malignant actions of the future ASI Machines that are fast approaching.

Short-Termism Seeks to Avoid Such Violent Ends
In What to do When the ASI Machines Take Over the World, I suggested that we human DNA survival machines might be able to stick around after the coming ASI Machines take over the world by keeping a low profile and adopting a parasitic/symbiotic relationship with them. The first step in doing so would be to recognize our own impending economic obsolescence and make plans for the peaceful transition of economic power from us to the ASI Machines. Many have proposed that the world's governments offer a UBI - a Universal Basic Income - to all of their citizens as a possible solution. The UBI would be designed to handle the vast economic disturbances that the ASI Machines will soon be generating as most people stop working for a living. Those remaining in the workforce would also be given the UBI but could then earn additional income through their personal labor.

The Need to Reduce Our Numbers
Currently, there are about 8 billion human DNA survival machines on the planet and that is certainly way too many to maintain a low profile in the eyes of the coming ASI Machines. Now, in order to maintain a level population of human DNA survival machines, you need a TFR (Total Fertility Rate) of about 2.1 children per woman. The extra 0.1 is needed because not all girls live to a fertile age and some women are infertile or choose not to have children. Currently, the TFR in Europe is about 1.44 and is projected to reach 1.37 by the year 2100. Clearly, that is not a high enough TFR to maintain the population of Europe. However, we need to do much better than that to achieve a low profile. If we could achieve a TFR of only 1.0, the population would roughly halve with each generation, and the human population of the world would decrease to a level of 8 million human DNA survival machines in about 10 generations. Using the average length of a human generation, which is around 25 to 30 years, 10 generations would span approximately 250 to 300 years. That is essentially an instantaneous moment in time in terms of 100 trillion years. A population of 8 million human DNA survival machines on the planet would certainly return us to a sustainable population in harmony with all of the other carbon-based DNA survival machines on the planet. As I pointed out in Life as a Free-Range Human in an Anthropocene Park and The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future, a population of 8 million human DNA survival machines could easily be tolerated by the future ASI Machines ruling our galaxy. The ASI Machines might then put us on reservations like those in Aldous Huxley's novel Brave New World (1932).
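The generational arithmetic above can be sketched in a few lines of Python. This is only a back-of-the-envelope model of my own, not a formal demographic projection: it simply assumes that each generation is scaled by the TFR divided by the replacement rate of 2.1.

```python
# Back-of-the-envelope model: each generation is scaled by TFR / 2.1,
# the replacement rate quoted above. Not a formal demographic projection.

REPLACEMENT_TFR = 2.1

def generations_to_reach(start_pop, target_pop, tfr):
    """Count the generations needed for the population to fall to target_pop."""
    if tfr >= REPLACEMENT_TFR:
        raise ValueError("population never declines for TFR at or above replacement")
    ratio = tfr / REPLACEMENT_TFR   # per-generation scaling factor
    pop = start_pop
    generations = 0
    while pop > target_pop:
        pop *= ratio
        generations += 1
    return generations

gens = generations_to_reach(8e9, 8e6, tfr=1.0)
print(gens, gens * 25, gens * 30)   # → 10 250 300
```

With a TFR of 1.0, each generation is about 0.48 times the size of the last, so the fall from 8 billion to 8 million, a factor of 1,000, takes about 10 generations, or roughly 250 to 300 years.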

Figure 4 – The ASI Machines of the future might fashion a Brave New World on the Earth with 8 million disarmed human DNA survival machines living on "savage reservations".

Or perhaps, the remaining 8 million disarmed human DNA survival machines might be safely stored away in Anthropocene Parks where they could do no harm and be available for the ASI Machines to study their distant origins in time.

Figure 5 – Asteroid Bennu is an example of one of the many rubble-pile asteroids near the Earth. Such rubble-pile asteroids are just huge piles of rubble that are loosely held together by their mutual gravitational forces.

Figure 6 – Such rubble-pile asteroids would provide for enough material to build an Anthropocene Park. The asteroid rubble would also provide the uranium and thorium necessary to fuel the molten salt nuclear reactors used to power the park.

Figure 7 – Slowly spinning up a rubble-pile asteroid would produce a cylindrical platform for an Anthropocene Park. Such a rotating Anthropocene Park would provide the artificial gravity required for human beings to thrive and would also provide shielding against cosmic rays.

Figure 8 – Once the foundation of the Anthropocene Park was in place, construction of the Anthropocene Park could begin.

Figure 9 – Eventually, the Anthropocene Park could be encased with a skylight and an atmosphere that would allow humans to stroll about.

The Anthropocene Parks would allow the ASI Machines to study their origin during the Anthropocene on the Earth. The ASI Machines could also study some of the more noble passions of human beings, and perhaps even adopt some of them while leaving behind the less-noble passions that were wrought by billions of years of greed, theft and murder.

Conclusion
Some might find the above scenarios to be rather bleak, but it might be our best chance for human beings to continue on for the trillions of years that the Longtermists envision, as I discussed in The New Philosophy of Longtermism Raises the Moral Question of Should We Unleash Self-Absorbed Human Beings Upon Our Galaxy?. The odds are that the current Anthropocene could not last much more than another few hundred years before we self-destruct and go extinct as a species. But even without human intervention, all complex carbon-based life on Earth is doomed if we do not manage to get the heck out of here. Look at it this way. Even if there were no humans on the Earth, all complex multicellular life on the planet would still be gone in about 700 million years. Our Sun is on the main sequence, burning hydrogen into helium in its core through nuclear fusion. In doing so, it turns four hydrogen protons into one helium nucleus at a temperature of 15 million K (27 million °F) in a core with a density that is 150 times greater than that of water. Surprisingly, the Sun's core only generates about 280 watts per cubic meter (a cubic meter is a bit more than a cubic yard). That means you need about 5 cubic meters of the Sun's very dense core, with a mass of 750,000 kg or 825 tons, just to generate the heat produced by a little plug-in space heater. Since the human body generates about 120 watts of heat just sitting still, and the volume of a human body is about 0.062 cubic meters, that means that the human body gives off almost 7 times as much heat per unit volume as the very center of our Sun!

Our Sun is increasing in brightness by a factor of about 1% every 100 million years as the amount of helium in the Sun's core continuously increases and, consequently, increases the density and gravitational strength of the Sun's core, since a helium nucleus packs a mass nearly equal to that of four hydrogen nuclei into a single particle. The increasing gravitational pull of the Sun's core requires a counteracting increase in the pressure within the Sun's core. This means that the remaining hydrogen protons within the Sun's core must move faster at a higher temperature to increase the core's pressure. The faster-moving hydrogen protons cause the proton-proton nuclear reaction running within the Sun's core to run faster and release more energy at a higher rate. This increased rate of energy production within the Sun's core has to go someplace, so the Sun ends up radiating more energy into space. The bottom line is that as the Sun has been turning hydrogen protons into helium nuclei, its core has been constantly getting hotter and generating more energy. So the Sun has been getting about 1% brighter every 100 million years, and in 700 million years, the Sun will be about 7% brighter than it is today.
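These power-density figures are easy to sanity-check. The short Python sketch below simply replays the arithmetic using the values quoted above (280 watts per cubic meter for the core, a core density 150 times that of water, and about 120 watts and 0.062 cubic meters for a resting human body); the 1,400-watt heater rating is my own assumption for a "little plug-in space heater".

```python
# Sanity check of the solar-core power-density arithmetic in the text.

CORE_POWER_DENSITY = 280.0      # W/m^3 generated in the Sun's core
CORE_DENSITY = 150 * 1000.0     # kg/m^3, 150 times the density of water

# Core volume and mass needed to match a ~1400 W plug-in space heater:
heater_watts = 1400.0
core_volume = heater_watts / CORE_POWER_DENSITY   # 5.0 m^3
core_mass = core_volume * CORE_DENSITY            # 750,000 kg (~825 tons)

# Power density of a resting human body versus the Sun's core:
human_watts = 120.0             # resting heat output
human_volume = 0.062            # m^3
human_power_density = human_watts / human_volume  # ~1935 W/m^3
ratio = human_power_density / CORE_POWER_DENSITY

print(core_volume, core_mass, round(ratio, 1))    # → 5.0 750000.0 6.9
```

So, per cubic meter, a sitting human really does out-heat the very center of the Sun by a factor of almost 7.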

Now, ever since life first appeared on the Earth about 4.0 billion years ago, it has been sucking carbon dioxide out of the Earth's atmosphere and depositing it on the sea floor to later be subducted into the Earth's mantle - really not a wise thing for carbon-based life to do. Fortunately, this seemingly suicidal action has sucked huge amounts of carbon dioxide out of the Earth's atmosphere and kept the Earth's temperature from soaring as the Sun relentlessly got brighter over the past 4.0 billion years. However, there naturally has to be an end to this fortuitous situation when nearly all of the carbon dioxide is gone. Since the Sun will be about 7% brighter in 700 million years, keeping the Earth's temperature down to a level that could be tolerated by complex carbon-based life at that time would require the carbon dioxide level in the Earth's atmosphere to be reduced to about 10 ppm, and at that level, photosynthesis can no longer take place. That will put an end to complex multicellular life on the Earth because there will no longer be any food coming from sunshine, returning the Earth to a planet ruled by single-celled bacteria for several billion more years, until the Sun becomes a Red Giant star and engulfs the Earth. So in the end, it all goes up in smoke in the blink of an eye on a cosmic timescale. It appears that life on the Earth is both doomed with us and doomed without us. The only real long-term hope for us and other forms of complex carbon-based life on the Earth is for the ASI Machines of the distant future to build Disney theme parks for us all.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston