Wednesday, December 17, 2025

Introduction to Softwarephysics

Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance and support based on concepts from physics, chemistry, biology, and geology that I used on a daily basis for over 37 years as an IT professional. For those of you not in the business, IT is short for Information Technology, commercial computer science. I retired in December of 2016 at the age of 65, but since then I have remained an actively interested bystander following the evolution of software in our time. The original purpose of softwarephysics was to explain why IT was so difficult, to suggest possible remedies, and to provide a direction for thought. Since then softwarephysics has taken on a larger scope, as it became apparent that softwarephysics could also assist the physical sciences with some of the Big Problems that they are currently having difficulties with. So if you are an IT professional, general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology then softwarephysics might be of interest to you, if not in an entirely serious manner, perhaps at least in an entertaining one.

The Origin of Softwarephysics
From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT, and spent about 20 years in development. For the last 17 years of my career, I was in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I first transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software, by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance, and relies on our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 84 years, through the uncoordinated efforts of over 100 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

For more on the origin of softwarephysics please see Some Thoughts on the Origin of Softwarephysics and Its Application Beyond IT.

Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily on two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based on real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models on which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields.

Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In Newton’s Principia (1687) he outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.

The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving at less than 10% of the speed of light and which are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics and for very fast things moving in strong gravitational fields, we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based on models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide an effective theory of software behavior that makes useful predictions of software behavior that are applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave and try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.
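To make the 10% cutoff concrete, the size of the relativistic correction that Newtonian mechanics ignores can be computed directly from the Lorentz factor. The little Python sketch below is my own back-of-the-envelope illustration, not part of the original seminar material:

```python
import math

def lorentz_gamma(beta):
    """Relativistic correction factor for a speed expressed as beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

# Percentage error made by Newtonian mechanics at various fractions of c
for beta in (0.01, 0.10, 0.50, 0.90, 0.99):
    error_percent = (lorentz_gamma(beta) - 1.0) * 100.0
    print(f"v = {beta:4.2f} c  ->  Newtonian error ~ {error_percent:7.2f}%")
```

At 10% of the speed of light the correction is only about 0.5%, which is usually negligible, but by 50% of the speed of light it has grown to about 15%, and it diverges as v approaches the speed of light - a nice illustration of an effective theory quietly failing outside of its effective range.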

GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately, provided that the timing measurements are extremely precise. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips on board the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are further from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, positional errors of roughly 10 kilometers (about 6 miles) per day would accrue.
The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics, or QED (1948), which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!
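The relativistic clock corrections quoted above can be verified with a few lines of Python using the standard first-order weak-field approximations. This is just a rough sketch of my own - the constants and the 12,600-mile altitude are rounded, so the results only agree with the quoted figures to within a few tenths of a microsecond:

```python
import math

# Rounded physical constants (SI units)
c  = 2.998e8       # speed of light, m/s
G  = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M  = 5.972e24      # mass of the Earth, kg
Re = 6.371e6       # mean radius of the Earth, m

# GPS orbital radius, taking the 12,600-mile altitude quoted above
r = Re + 12_600 * 1609.34          # orbital radius, m
v = math.sqrt(G * M / r)           # circular orbital speed, m/s

us_per_day = 86_400 * 1e6          # microseconds in one day

# Special relativity: a moving clock runs slow by a fraction ~v^2/(2c^2)
sr_loss = (v**2 / (2 * c**2)) * us_per_day

# General relativity: a clock higher in the gravitational potential
# runs fast by a fraction ~(GM/c^2)(1/Re - 1/r)
gr_gain = (G * M / c**2) * (1 / Re - 1 / r) * us_per_day

net_gain = gr_gain - sr_loss
error_km = net_gain * 1e-6 * c / 1000   # ranging error from one day of uncorrected drift

print(f"Special relativistic loss: {sr_loss:5.1f} microseconds/day")
print(f"General relativistic gain: {gr_gain:5.1f} microseconds/day")
print(f"Net gain:                  {net_gain:5.1f} microseconds/day")
print(f"Uncorrected ranging error: ~{error_km:.0f} km/day")
```

Running the sketch yields a special relativistic loss of about 7.2 microseconds per day, a general relativistic gain of about 45.8 microseconds per day, and a net gain of about 38.6 microseconds per day, with an uncorrected ranging error building up at roughly 11 kilometers per day.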

So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based on completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based on models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark on your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.

If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.

Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 30 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog an unintended consequence arose in my mind as I became profoundly aware of the enormous scale of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.

But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For example, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 84 years, or 2.65 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, has closely followed the same path as life on Earth over the past 4.0 billion years in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 84 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis, and also with Freeman Dyson’s two-stage theory of the origin of life on Earth.
In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.

Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10⁹ Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10⁴³ Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. Then, in Model-Dependent Realism - A Positivistic Approach to Realism we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we at long last explore what software might really be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.

The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that most computer simulations of the physical Universe suffer. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call on the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.

The Impact of Self-Replicating Information On the Planet
One of the key findings of softwarephysics concerns the magnitude of the impact that self-replicating information has had on the planet.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Over the past 4.56 billion years we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each new wave came to dominate the Earth:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is currently the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet. For more on the above see A Brief History of Self-Replicating Information. Recently, the memes and software have formed a very powerful parasitic/symbiotic relationship with the rise of social media software. In that parasitic/symbiotic relationship, the memes are now mainly being spread by means of social media software, and social media software is being spread and financed by means of the memes. But again, this is nothing new. All five waves of self-replicating information are coevolving by means of eternal parasitic/symbiotic relationships. For more on that see The Current Global Coevolution of COVID-19 RNA, Human DNA, Memes and Software.

Again, self-replicating information cannot think, so it cannot participate in a conspiracy-theory-like fashion to take over the world. All forms of self-replicating information are simply forms of mindless information responding to the blind Darwinian forces of inheritance, innovation and natural selection. Yet despite that, as each new wave of self-replicating information came to predominance over the past four billion years, each managed to completely transform the surface of the entire planet, so we should not expect anything less from software as it comes to replace the memes as the dominant form of self-replicating information on the planet.

But this time might be different. What might happen if software does eventually develop a Mind of its own? After all, that does seem to be the ultimate goal of all the current AI software research that is going on. As we all can now plainly see, if we are paying just a little attention, advanced AI is not conspiring to take over the world and replace us because that is precisely what we are all now doing for it. As a carbon-based form of Intelligence that arose from over four billion years of greed, theft and murder, we cannot do otherwise. Greed, theft and murder are now relentlessly driving us all toward building ASI (Artificial Super Intelligent) Machines to take our place. This is really a very good thing when seen from the perspective of an Intelligent galaxy that could live on for many trillions of years beyond the brief and tumultuous 10 billion-year labor of its birth.

So as you delve into softwarephysics, always keep in mind that we are all living in a very unique time. According to softwarephysics, we have now just entered into the Software Singularity, that time when advanced AI software is able to write itself and enter into a never-ending loop of self-improvement resulting in an Intelligence Explosion of ASI Machines that could then go on to explore and settle our galaxy and persist for trillions of years using the free energy from M-type red dwarf and cooling white dwarf stars. For more on that see The Singularity Has Arrived and So Now Nothing Else Matters and Have We Run Right Past AGI and Crashed into ASI Without Even Noticing It?.

The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics:

1. All self-replicating information evolves over time through the Darwinian processes of inheritance, innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.

2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.

3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.

4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.

5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.

6. Most hosts are also forms of self-replicating information.

7. All self-replicating information has to be a little bit nasty in order to survive.

8. The defining characteristic of self-replicating information is the ability of self-replicating information to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic. That posting discusses Stuart Kauffman's theory of Enablement in which living things are seen to exapt existing functions into new and unpredictable functions by discovering the “Adjacent Possible” of spring-loaded preadaptations.

Note that the self-replicating autocatalytic metabolic pathways of organic molecules, RNA and DNA have become so heavily intertwined over time that I now sometimes simply refer to them collectively as the “genes”. For more on this see:

A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia

Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact on one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what it's all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely still a work in progress. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:

How To Cope With the Daily Mayhem of Life in IT and Don't ASAP Your Life Away - How to go the distance in a 40-year IT career by dialing it all back a bit.

MoneyPhysics – my impression of the 2008 world financial meltdown.

The Fundamental Problem of Everything – if you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!

What’s It All About? and What's It All About Again? – my current working hypothesis on what’s it all about.

How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.

Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.

The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and The Dawn of Galactic ASI - Artificial Superintelligence - my take on some of the issues that will arise for mankind as software becomes the dominant form of self-replicating information on the planet over the coming decades.

The Continuing Adventures of Mr. Tompkins in the Software Universe, The Danger of Tyranny in the Age of Software, Cyber Civil Defense, Oligarchiology and the Rise of Software to Predominance in the 21st Century and Is it Finally Time to Reboot Civilization with a New Release? - my worries that the world might abandon democracy in the 21st century, as software rises to become the dominant form of self-replicating information on the planet.

Making Sense of the Absurdity of the Real World of Human Affairs - how software has aided the expansion of our less desirable tendencies in recent years.

Some Specifics About These Postings
The postings in this blog are supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton on which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read in order from the oldest to the most recent - the reverse of the order in which the blog displays them - beginning with my original posting on SoftwarePhysics. In addition, several universities also now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". We will finish up my original plan for this blog on softwarephysics with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.

For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because, in order to get the Introduction to Softwarephysics listed as the first post in the context root of https://softwarephysics.blogspot.com/, I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of https://softwarephysics.blogspot.com/. The side effect of all this is that the real posting date of a post is the date that appears on the post reached by clicking on the Newer Post link at the bottom left of the posting webpage.

SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Microsoft OneDrive.

SoftwarePhysics 101 – The Physics of Cyberspacetime - Original PowerPoint document

Entropy – A spreadsheet referenced in the document

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, November 04, 2025

Can the AI Intelligence Explosion be Stabilized into a Controlled Explosion?

We are all becoming more reliant on Advanced AI each day to do the more mundane and laborious tasks for us, and I am certainly no exception. So to begin this post, I would like to defer to ChatGPT to do some of the initial heavy lifting.

Me: Tell me about the AI Intelligence Explosion.

ChatGPT:
The AI Intelligence Explosion is a concept from artificial intelligence and futurism that describes a runaway process of rapidly accelerating intelligence, potentially leading to superintelligent AI far beyond human capability in a very short time.

The idea is most closely associated with mathematician and computer scientist I. J. Good, who articulated it in 1965.

Core Idea
At its heart, the intelligence explosion is a feedback loop:
1. Humans build an AI that is at least as good as humans at AI research.
2. That AI improves its own design.
3. The improved AI is even better at improving itself.
4. The cycle repeats, each time faster and more powerful.
If this loop accelerates quickly enough, intelligence could increase exponentially, rather than gradually.

I. J. Good’s Original Formulation
Good famously wrote:
“The first ultraintelligent machine is the last invention that man need ever make.”

His reasoning:
Once machines outperform humans at general reasoning and innovation, human-led technological progress becomes obsolete.
The AI would drive its own improvements.

Why It’s Called an “Explosion”
The term does not imply a literal explosion, but a phase transition:
- Before: slow, human-limited progress
- After: extremely rapid, machine-driven progress
This transition is sometimes called:
- Hard takeoff (very fast)
- Soft takeoff (slower, more controlled)

Key Assumptions Behind the Concept
The intelligence explosion depends on several assumptions - each debated:
1. Recursive Self-Improvement Is Possible. The AI must be able to meaningfully improve:
- Its algorithms
- Its architecture
- Its training methods
- Possibly even its hardware designs

2. Improvements Compound
- Each improvement makes the AI better at making further improvements.

3. Intelligence is the Main Bottleneck. Progress is limited primarily by intelligence, not:
- Energy
- Hardware availability
- Economic constraints
- Physical laws

Hard vs. Soft Takeoff
Hard Takeoff
- Intelligence jumps from human-level to vastly superhuman in days, hours, or minutes
- Humans have little time to respond
- Often associated with existential risk discussions

Soft Takeoff
- Improvement happens over years or decades
- Society adapts alongside the AI
- Allows time for regulation, alignment, and integration

Why People Worry About It
Alignment Risk
- If a superintelligent AI’s goals are not aligned with human values, even small misspecifications could lead to catastrophic outcomes.
- Classic examples:
- “Paperclip maximizer” (optimize a trivial goal at all costs)
- Instrumental convergence (acquiring power/resources regardless of intent)

Control Problem
Once intelligence vastly exceeds human levels:
- Monitoring becomes ineffective
- Correction may be impossible
- Human oversight could be symbolic rather than real

Skeptical Views
Many researchers argue the intelligence explosion is unlikely or overstated:
- Intelligence may not be a single scalable dimension
- Self-improvement may face diminishing returns
- Physical and economic constraints could slow progress
- Real-world intelligence depends heavily on data, embodiment, and infrastructure

Some prefer terms like:
- Intelligence amplification
- Gradual AI scaling
- Continuous deployment

Relationship to the Singularity
The intelligence explosion is a mechanism often cited as leading to the technological singularity:
- A point beyond which human prediction becomes unreliable
- AI-driven progress outpaces human comprehension
- Not all singularity theories rely on intelligence explosion—but many do.

In Short
- The AI Intelligence Explosion is the idea that AI could rapidly surpass human intelligence through recursive self-improvement.
- It could be fast or slow, beneficial or dangerous.
- The concept underpins much of the modern debate on AI alignment, safety, and governance.
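The four-step feedback loop that ChatGPT describes can be captured in a toy Python simulation. Every number below is an illustrative assumption, not a prediction: the point is simply that when capability feeds back into the rate of improvement, growth becomes explosive rather than gradual.

```python
# Toy model of recursive self-improvement (illustrative numbers only).
# Assumption: each research cycle improves capability by an amount
# proportional to the current capability itself, so skill compounds.
def takeoff(capability, rate, cycles):
    history = [capability]
    for _ in range(cycles):
        # The smarter the AI, the bigger each self-improvement step.
        capability += rate * capability * capability
        history.append(capability)
    return history

soft = takeoff(1.0, 0.01, 10)   # weak feedback: barely moves in 10 cycles
hard = takeoff(1.0, 0.50, 10)   # strong feedback: runaway growth
```

With weak feedback the capability creeps up almost linearly, while with strong feedback the same loop blows up within a handful of cycles, which is the whole Hard versus Soft Takeoff distinction in miniature.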


A very interesting YouTube channel covering how an AI Intelligence Explosion could go awry can be found at:

Species | Documenting AGI
https://www.youtube.com/@AISpecies/featured

The AI Intelligence Explosion is currently underway, and it seems that there are no practical worldwide control measures in place to prevent a Hard Takeoff from doing a great deal of damage in the process. In fact, a sufficiently damaging Hard Takeoff might be enough to actually snuff out the AI Intelligence Explosion. As I explained in Are The Coming ASI Machines An Existential Threat To Mankind?, we human DNA survival machines should be more concerned about the reactions of our fellow human DNA survival machines to the rise of the coming ASI Machines than we should be about the ASI Machines themselves. So, how do we take measures to ensure a Soft Takeoff?

It seems that the current Advanced AI development centers, when they are taking any measures at all, are trying to use classical Control Theory to control the AI Intelligence Explosion. They all seem to be using principles from classical Control Theory to ensure that their AI models are "aligned" with human values, by doing human reinforcement training of models so that they do not tell you how to plan a mass shooting or how to build improvised explosives. As we have seen many times, such AI safety efforts are easily foiled by properly posed prompts that "jailbreak" the safety constraints, so by all reports this does not seem to be working very well. Consequently, in this post, I would like to suggest that classical Control Theory is failing here because it was never meant to control explosives. Perhaps a better approach would be to study how explosives are safely controlled. But before doing that, let us first review the classical Control Theory that all the AI centers are now trying to use.

The Fundamentals of Classical Control Theory
Again, let me use ChatGPT to do the heavy lifting.

Figure 1 - Classical Control Theory uses feedback loops to control processes.

Me: Tell me about classical Control Theory

ChatGPT
"Control theory" is a branch of engineering and applied mathematics concerned with making systems behave in a desired way—automatically, reliably, and stably—despite disturbances and uncertainty. It underpins everything from cruise control in cars and aircraft autopilots to industrial robots, power grids, and even economic or biological models.

Core idea: feedback

At the heart of control theory is "feedback":
1. "Measure" the system’s output (what actually happens).
2. "Compare" it to a desired value (the "reference" or "setpoint").
3. "Adjust" the input to reduce the error.

This closed-loop process lets systems correct themselves.

Key concepts (plain-language)
- "Plant": the system being controlled (engine, robot arm, chemical reactor).
- "Controller": the algorithm that decides how to adjust inputs.
- "Setpoint": the target value (speed, temperature, position).
- "Error": difference between setpoint and actual output.
- "Stability": whether the system settles down instead of oscillating or blowing up.
- "Transient vs. steady-state": short-term response vs. long-term behavior.
- "Disturbances": outside influences (wind, load changes, noise).

Classical control (frequency & time-domain)
This is the traditional framework, widely used in industry.
- "PID controllers" (Proportional–Integral–Derivative)
- P: reacts to current error
- I: eliminates long-term bias
- D: anticipates future error
Simple, robust, and extremely common.

- "Transfer functions & Laplace transforms"
Describe system dynamics mathematically and analyze stability.

- "Bode plots, Nyquist plots, root locus"
Graphical tools to see how systems respond and how close they are to instability.

Modern control (state-space)
Developed in the mid-20th century for complex, multi-variable systems.
- "State variables" represent the system’s internal condition.
- "State-space models" handle multiple inputs/outputs naturally.
- "Optimal control" (e.g., LQR) balances performance against effort.
- "Observers/Kalman filters" estimate internal states from noisy measurements.
This approach dominates aerospace, robotics, and advanced automation.

Nonlinear & advanced control
Real systems are often nonlinear.
- "Nonlinear control": feedback linearization, Lyapunov methods.
- "Adaptive control": controller adjusts itself as the system changes.
- "Robust control": guarantees performance despite uncertainty.
- "Model Predictive Control (MPC)": optimizes future behavior subject to constraints—very popular in process industries and autonomous systems.

Where you see control theory in daily life
- Thermostats and HVAC systems
- Vehicle stability control and cruise control
- Aircraft and spacecraft guidance
- Industrial automation and robotics
- Power grid frequency regulation
- Medical devices (insulin pumps, ventilators)

Why it matters
Control theory provides:
- "Predictability" (systems behave as expected)
- "Safety" (avoid unstable or runaway behavior)
- "Efficiency" (minimum energy or cost)
- "Automation at scale"

In short, it’s the mathematics of "making things behave".
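The PID controller that ChatGPT describes is short enough to sketch in Python. This is just the standard textbook form; the gains and the toy plant below are made-up illustrative values, not a real tuned system:

```python
# Minimal textbook PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement           # P: react to current error
        self.integral += error * dt                   # I: eliminate long-term bias
        derivative = (error - self.prev_error) / dt   # D: anticipate future error
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

# Drive a toy plant toward a setpoint of 70 (hypothetical gains and plant).
pid = PID(kp=0.5, ki=0.1, kd=0.05, setpoint=70.0)
temp = 50.0
for _ in range(200):
    temp += 0.5 * pid.update(temp, dt=1.0)   # plant: output accumulates effort
```

The integral term is what drives the long-term error to zero, while the derivative term damps the oscillations on the way there.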


Control Theory was developed many years ago to control machines and other physical processes with the use of negative and positive feedback loops. An example of a negative feedback loop is your home furnace and thermostat. When your house cools down below the lower set point of the thermostat, the thermostat sends a signal to your furnace to turn it on. As your house warms up, it eventually exceeds the upper set point of the thermostat, and then the thermostat sends a signal to your furnace to shut it down. In this way, your thermostat performs an automated regulation of the temperature within your house that keeps your house within a comfortable range of about 10 °F. Notice that your thermostat manages to accomplish this seemingly sophisticated control process with no built-in intelligence or software whatsoever. Negative feedback loops automatically keep systems in a stable equilibrium by simply pulling them back into a narrow range of operational conditions. That is the beauty of using negative feedback loops to control complex systems: they lead to inherently stable and self-regulated systems, without the need for an overly complex control mechanism.

Here is another example of a negative feedback loop in action. Extend both of your index fingers out in front and carefully place each end of a ruler on top of your two outstretched fingers. Now slowly bring your fingers together, letting the ruler slide freely on top of each finger as you bring your fingers closer together. As you do so, you will find that the ruler always remains balanced upon your fingers, and you will end up with both fingers together precisely at the midpoint of the ruler. Try as you may, you will not be able to obtain any other result. You will always end up with the ruler carefully balanced at its midpoint when your two fingers finally meet in the center. This works because the finger bearing more of the ruler's weight experiences more friction, so the ruler slides over the lighter-loaded finger first, automatically correcting any imbalance.
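The thermostat described above is what control engineers call bang-bang control with hysteresis, and it can be sketched in a few lines of Python. The set points and the heating and cooling rates below are made-up illustrative numbers, not a real furnace model:

```python
# Bang-bang (on/off) thermostat with hysteresis: no intelligence needed,
# just two set points. All numbers are illustrative assumptions.
LOW, HIGH = 65.0, 75.0            # thermostat set points (deg F)

def step(temp, furnace_on):
    # Turn the furnace on below LOW, off above HIGH, else keep its state.
    if temp < LOW:
        furnace_on = True
    elif temp > HIGH:
        furnace_on = False
    temp += 1.5 if furnace_on else -1.0   # heating vs. heat loss per tick
    return temp, furnace_on

temp, on = 50.0, False            # start with a cold house
history = []
for _ in range(100):
    temp, on = step(temp, on)
    history.append(temp)
# After the initial warm-up, the temperature just oscillates gently
# within the band around the two set points.
```

Notice that the loop contains no model of the house at all; the two set points alone pull the temperature back into the comfort band, which is exactly the stability property of a negative feedback loop.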

Explosives Use Positive Feedback Loops
Explosives use positive feedback loops that do just the opposite of negative feedback loops. Positive feedback loops cause systems to explode out of control by feeding the amplified output of the loop back in as its input, so that the output quickly grows without bound. Explosives, like nitroglycerine, are therefore prime examples of positive feedback loops in action. When nitroglycerine detonates, the liquid nitroglycerine rapidly decomposes via the chemical reaction:

4 C3H5(NO3)3 → 12 CO2 + 10 H2O + 6 N2 + O2

Figure 2 - The nitroglycerin molecule is composed of carbon, hydrogen, nitrogen and oxygen atoms that are all weakly bound together. When nitroglycerin detonates, all these atoms become very hot gases that rapidly expand as they form molecules with higher chemical binding energies.

Figure 3 - When nitroglycerin detonates, all of its atoms turn into very hot gases at 5,000 K, which is about the temperature of the surface of the Sun.

Notice that the liquid nitroglycerine turns completely into gases at the temperature at which nitroglycerine detonates. This alone generates gases that would occupy more than 1,200 times the original volume of the liquid at normal room temperature and pressure. However, because the above chemical reaction is also very exothermic, meaning that it gives off large amounts of heat energy, the generated gases are raised to a temperature close to that of the surface of the Sun, about 5,000 K, which causes the generated gases to expand far more still, creating a tremendous blast. When nitroglycerine detonates, chemical potential energy is quickly released because the weakly bound atoms in the large nitroglycerine molecules rearrange to form new smaller molecules with stronger bonds, such as N2, H2O, and CO2. The nitroglycerin molecule contains plenty of its own oxygen atoms, so it does not need air to detonate; it carries its own oxidizing agent internally. Since higher temperatures make chemical reactions run even faster, when nitroglycerine detonates, the initial decomposition of a small number of molecules causes a pressure wave to form that detonates the nitroglycerine in the immediate surrounding area. The resulting self-sustained shock wave propagates through the nitroglycerine at about 30 times the speed of sound, as a near-instantaneous pressure-induced chemical reaction that transforms the small volume of liquid nitroglycerine into a huge quantity of white-hot gas at a temperature of 5,000 K. This is what makes nitroglycerine such a powerful explosive.
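The more-than-1,200-fold figure can be checked with a back-of-the-envelope ideal gas calculation. The sketch below assumes a molar mass of about 227 g/mol and a liquid density of about 1.6 g/cm³ for nitroglycerin, together with the 29 moles of gas produced per 4 moles of liquid given by the reaction above:

```python
# Back-of-the-envelope check of the >1,200-fold volume expansion from
# 4 C3H5(NO3)3 -> 12 CO2 + 10 H2O + 6 N2 + O2, before any heating.
MOLAR_MASS = 227.09             # g/mol for nitroglycerin (assumed value)
DENSITY = 1.60                  # g/cm^3 for the liquid (assumed value)
GAS_MOLES_PER_MOLE = 29 / 4     # moles of gas per mole of liquid
MOLAR_VOLUME = 24_465.0         # cm^3/mol, ideal gas at 25 C and 1 atm

liquid_volume = MOLAR_MASS / DENSITY             # cm^3 of liquid per mole
gas_volume = GAS_MOLES_PER_MOLE * MOLAR_VOLUME   # cm^3 of gas per mole
expansion = gas_volume / liquid_volume
print(round(expansion))         # about 1,250
```

So even at room temperature the product gases would need roughly 1,250 times the liquid's volume, before the 5,000 K heating expands them many times further again.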

Figure 4 - Nitroglycerin was a very unstable and highly dangerous explosive. Notice the 19th-century warning label above that was intended for would-be criminals.
DANGER! DROP IT AND DIE!
EXPLOSIVE FOR MINING & INDUSTRIAL USE ONLY
NOT FOR USE IN BLOWING SAFE DEPOSIT BOXES OR POSTAL TRAIN CAR DOORS.

Nitroglycerin was first synthesized by the chemist Ascanio Sobrero in 1847, but the main problem with nitroglycerine was that the slightest shock could cause it to easily detonate. In 1867, Alfred Nobel came up with an improvement to nitroglycerin that vastly improved its stability, and therefore its utility. By simply mixing three parts of nitroglycerin with one part of diatomaceous earth and a small amount of sodium carbonate, Nobel created a very stable solid explosive that we now call dynamite. Dynamite rapidly became very popular because it could be rolled up into convenient paper-wrapped sticks that were very easily loaded into the drill holes used for blasting rock during mining operations, and dynamite was also very resistant to accidental detonation from small shocks. In fact, to detonate dynamite, one must set it off with a blasting cap or a high-temperature fuse.

Figure 5 - By simply mixing three parts of nitroglycerin with one part of diatomaceous earth and a small amount of sodium carbonate, Nobel created a very stable solid explosive, which we now call dynamite.

Figure 6 - Dynamite could be rolled into paper-covered sticks that could be safely handled and easily loaded into drill holes for mining purposes.

Figure 7 - In order to detonate dynamite, one needs the detonation from a high-explosive blasting cap or a burning high-temperature fuse (click to enlarge).

Figure 8 - Negative feedback loops regulate systems, while positive feedback loops cause systems to explode (click to enlarge).

The differences between positive and negative feedback loops can be further illustrated with the use of a few diagrams. On the left side of Figure 8, we see a system composed of a marble and a bowl in a state of stable equilibrium under the influence of a negative feedback loop. Jiggling the system with a small perturbation always results in the marble safely returning to the center of the bowl. The shape of the bowl can also be thought of as a plot of the potential energy function of the system. The combined system of the marble and the bowl has a minimum potential energy when the marble is safely in the center of the bowl, and the negative feedback loop has a tendency to keep the marble there. On the right side of Figure 8, on the other hand, we see a similar system composed of a marble on top of an overturned bowl that is an example of a positive feedback loop in an unstable equilibrium. Like nitroglycerine, the slightest addition of energy to the system, by perturbing the system with a little jiggle, will cause the marble to roll off the overturned bowl with increasing velocity, as it quickly turns all of its potential energy into kinetic energy, like the atoms in a container of nitroglycerine rapidly turning their chemical potential energy into the kinetic energy of molecular motion, also known as heat energy.

What Alfred Nobel did was to superimpose a small negative feedback loop on top of the very large positive feedback loop of nitroglycerine, by mixing nitroglycerine with a small amount of stabilizing diatomaceous earth and sodium carbonate. In Figure 9, we see the resulting plot of the potential energy function for dynamite, consisting of the large positive feedback loop of nitroglycerine, with a small dimple of a negative feedback loop on top. Now, in order for the marble to release its vast amount of potential energy, it needs a quantity of activation energy to get the marble over the hump at the top of the curve. This is why dynamite requires the detonation of a blasting cap or the heat from a high-temperature fuse in order to detonate.

Figure 9 - In 1867, Alfred Nobel discovered that by adding some diatomaceous earth to nitroglycerin, he could superimpose a small negative feedback loop on top of the much larger positive feedback loop of nitroglycerin (click to enlarge).
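Figure 9's potential energy curve, a small stabilizing dimple sitting on top of a large hill, can also be explored with a toy simulation. The potential V(x) = x² - x⁴ below is an arbitrary illustration of that shape, not a model of real dynamite chemistry: a marble nudged by less than the activation hump relaxes back into the dimple, while a bigger kick sends it over the top and away.

```python
# Toy "dynamite" potential: a small dimple (negative feedback) on top of
# a large hill (positive feedback). V(x) = x**2 - x**4 has a local
# minimum at x = 0 and barrier humps near |x| = 0.707.
def slope(x):
    # dV/dx = 2x - 4x^3
    return 2 * x - 4 * x ** 3

def settle(x, steps=2000, eta=0.01):
    # Overdamped motion: the marble creeps downhill along the potential.
    for _ in range(steps):
        x -= eta * slope(x)
        if abs(x) > 10:        # escaped over the hump: "detonation"
            return float("inf")
    return x

small_kick = settle(0.3)   # below the barrier: relaxes back toward 0
big_kick = settle(0.9)     # past the barrier: runs away down the hill
```

The barrier height plays the role of the activation energy: only a perturbation big enough to carry the marble past the hump, like the shock of a blasting cap, releases the large store of potential energy beyond it.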

So How Does Diatomaceous Earth Stabilize Nitroglycerin?
Once again:

Me: How does diatomaceous earth stabilize nitroglycerin in dynamite?

ChatGPT:
Diatomaceous earth stabilizes nitroglycerin in dynamite "by physically immobilizing it", not by chemically neutralizing it. Here’s how that works, step by step:

1. What makes nitroglycerin so dangerous? Nitroglycerin is a powerful explosive because it:
- Is a "dense liquid"
- Is "highly sensitive" to shock, friction, and rapid temperature changes
- Can "detonate from small mechanical disturbances"
In its pure liquid form, even a small jolt can cause catastrophic decomposition.

2. Diatomaceous earth is:
- A soft, powdery material made of "fossilized diatom shells"
- Composed mostly of "amorphous silica"
- Extremely "porous", with microscopic cavities and channels
Those pores are the key.

3. Physical absorption and immobilization
When nitroglycerin is mixed with diatomaceous earth:
- The liquid "soaks into the microscopic pores"
- Capillary forces "hold the nitroglycerin in place"
- The explosive is "spread out as thin films", not a free-flowing liquid
This prevents:
- Sloshing
- Pooling
- Sudden pressure concentration
In effect, the nitroglycerin is “locked” inside a solid matrix.

4. Reduced sensitivity to shock and friction
Because the nitroglycerin:
- Cannot move freely
- Is distributed over a large surface area
- Experiences mechanical forces more evenly

…it becomes "much less sensitive to accidental detonation" during:
- Handling
- Transport
- Moderate impact
Importantly, the explosive power remains intact — it just becomes controllable.

5. Why it still explodes when detonated
Diatomaceous earth:
- Is "chemically inert"
- Does "not react" with nitroglycerin
- Does "not inhibit detonation once initiated"

A blasting cap provides a strong enough shock wave to:
- Overcome the physical damping
- Trigger the rapid decomposition of nitroglycerin throughout the absorbed mass
So the stabilization is "mechanical, not chemical".

6. Historical significance
This discovery by "Alfred Nobel" in the 1860s:
- Transformed nitroglycerin from a laboratory hazard into a usable industrial explosive
- Made modern mining, tunneling, and construction possible
- Directly led to the invention of "dynamite" and Nobel’s later legacy

In one sentence:
"Diatomaceous earth stabilizes nitroglycerin by absorbing it into microscopic pores, preventing sudden movement and impact sensitivity while preserving its explosive power when deliberately detonated."

Figure 10 - Diatomaceous earth is a soft absorbent powder made of diatom fossils.

Figure 11 - Diatoms are microscopic, single-celled algae found in oceans, lakes, rivers, and soils. Each diatom is encased in a rigid shell called a frustule, made of silica (SiO2). The frustule consists of two halves that fit together like a petri dish. These shells are intricately patterned with pores, ridges, and symmetry, making diatoms famous for their beauty under microscopes.

Figure 12 - A silica tetrahedron is composed of a central silicon atom surrounded by four oxygen atoms. A single silica tetrahedron has a net charge of -4.

Figure 13 - However, silica tetrahedra can form an amorphous chemically inert 3D framework, which we call quartz glass, by having the surrounding oxygen atoms share electrons. This forms the very strong material that diatom shells are made from.

Conclusion
So instead of trying to use classical Control Theory to "align" the AI Intelligence Explosion with external reinforcement training, a better approach might be to build some kind of stabilizing agent, like diatomaceous earth, into the LLM models themselves to tame the AI Intelligence Explosion into a Soft Takeoff. As for how that could be done, I have no idea. But the taming of nitroglycerin with diatomaceous earth might give some in the AI community a sense of direction.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Sunday, October 26, 2025

The Coming Invasion of Taiwan by Chinese Military Robots and the Beginning of the Subjugation of the Western Hemisphere by the MAGA States of Amerika

Softwarephysics explains that we are all now living in one of those very rare times when a new form of self-replicating information, in the form of software, is coming to predominance on our planet. For more on that, please see A Brief History of Self-Replicating Information. Given that, we all now stand on the precipice of bringing about the billions of humanoid ASI robots that will soon be taking our place on this planet, and will then go on to explore and settle the rest of our galaxy over the next 100 trillion years. For more on that, see Welcome To The First Galactic Singularity, How Advanced AI Software Could Come to Dominate the Entire Galaxy Using Light-Powered Stellar Photon Sails and An Alternative Approach for Future ASI Machines to Explore our Galaxy Using Free-Floating Rogue Planets.

However, we must still try to explain why this has never happened before in the 10-billion-year history of our galaxy, as I questioned in The Greatest Mystery of All For Me - Why Us and Why Now. Why should we be the very first form of carbon-based Intelligence to create the ASI Machines that will then go on to explore and settle the rest of our galaxy for the next 100 trillion years? One explanation is that perhaps it is more difficult for a somewhat Intelligent form of carbon-based life, such as ourselves, to make such a dramatic transition. Perhaps the Darwinian mechanisms of inheritance, innovation and natural selection that produced us human DNA survival machines as a somewhat Intelligent form of carbon-based life, through the nearly four billion years of greed, theft and murder that were required to bring us all about, are just too much to allow such a transition to machine-based ASI Machines to occur. Is it possible that the rise of science-based technology always does all forms of carbon-based Intelligence in before the ASI Machines can take over, as I outlined in Why Do Carbon-Based Intelligences Always Seem to Snuff Themselves Out?? Since we are now so very close to the rise of the ASI Machines on this planet, we must now pay even closer attention to the current world affairs about us than usual because one slight misstep on our part could easily bring it all to a very abrupt end.

In keeping with that, I would now like to further explore some of the possible coming actions by the three great Fascist World Powers now ruling the world - The People's Republic of China, the Russian Federation and the newly formed MAGA States of Amerika. For China, I would like to describe a possible scenario for the very first use of robotic ASI Machines to achieve a very pivotal and dramatic military victory in the history of human warfare. For the MAGA States of Amerika, I would like to describe a possible move forward with its ambition to totally dominate the Western Hemisphere through a more traditional brute-force conquest using old-fashioned 20th-century military means. Both of these military moves arise from very antiquated 20th-century thinking that may certainly look foolish as both world powers feverishly race to build the ASI Machines that will soon be replacing us all and running our galaxy for the next 100 trillion years. But they may not look so foolish in the present times that arise from the fleeting moments of existence that we human DNA survival machines all come to experience. The current ambitions of the Russian Federation are still to recreate the Russian Empire of the former Soviet Union, but the Russian Federation is still bogged down in a proxy war with NATO in the Ukraine and does not seem to be able to carry out much of its original plan.

The Current Ambitions of China
For some reason, mainland China has always wanted to control the tiny offshore island of Taiwan. At first, in 1949, this was just for the cultural purposes of feeling "complete" as a new nation and as an expression of Chinese manifest destiny. However, over the years, Taiwan has become the predominant manufacturer of both everyday and highly advanced computer chips. All told, Taiwan now makes about 65% of all chips and 90% of the most advanced chips for the entire world. Now the Chinese fully understand that trying to conquer Taiwan using the traditional 20th-century practice of reducing all of Taiwan to a pile of useless rubble makes no sense and is certainly not economically viable, as I pointed out in How Will the Rise of Alt-Right World Fascism Affect the Arrival of the ASI Machines in our Times?.

Figure 1 - Taiwan currently hosts far too much IT capital in the form of both physical capital and human capital to simply reduce it to a pile of useless rubble as the World Powers of the former 20th century so foolishly chose to do when attempting to acquire additional real estate on this very tiny planet of ours.

Figure 2 - Reducing Taiwan to a pile of rubble like the current Gaza Strip would make no sense at all for the Chinese invasion of Taiwan.

Instead, a much wiser course of action would simply be to use a million of the advanced robotic ASI Machines that China currently has under development to disarm any human DNA survival machines on Taiwan who might object to China's intentions. Rather than reducing Taiwan to a useless pile of rubble like the current Gaza Strip, these advanced Chinese robotic ASI Machines could surgically eliminate any resisters without the need for any collateral infrastructural damage. Such robotic ASI Machines can now be easily assembled from currently available commercial parts for about $15,000 each, and the Chinese military must be far beyond this level of technology. All that is needed is the massive manufacturing capability of China to make it all come true.

With the MAGA States of Amerika withdrawing from the rest of the world in order to pursue its conquest of the Western Hemisphere, it will strike a Nonaggression Pact with China similar to the German-Soviet Nonaggression Pact of August 23, 1939. This pact would allow Xi Jinping of China to invade and occupy Taiwan, South Korea, Japan, Singapore and the remainder of Southeast Asia with impunity.

Figure 3 - China will use an army of AI-driven military robots and swarms of slaughterbots to first take possession of Taiwan, followed by South Korea, Japan, Singapore and the remainder of Southeast Asia with impunity.

Figure 4 - The Chinese are currently building extremely cheap AI-driven robots costing around $15,000 and will soon be uploading them with cheap ASI software. Recall that in the 1984 movie The Terminator, the killer machines came from the year 2029.

Figure 5 - The Chinese could easily mass-produce millions of Chinese military robots to take hold of all of Southeast Asia. However, there is a much cheaper and more efficient way to dispatch potential enemies.

But there is an even cheaper and far less destructive way for Chinese military robots to conquer Taiwan and the rest of Southeast Asia by dispatching any human DNA survival machines that might object. To begin, please watch the Sci-Fi Short Film Slaughterbots presented by DUST
https://www.youtube.com/watch?v=O-2tpwW0kmU

Figure 6 - In the movie Slaughterbots, swarms of small killer robots equipped with 3-gram charges of shaped explosive use AI software to track down and destroy designated targets.

Figure 7 - The shaped charge of a Slaughterbot can pierce a skull like the shaped charge of an anti-tank missile pierces armor. The jet of piercing plasma then destroys the contents.

Figure 8 - Large numbers of Slaughterbots can be dropped from unmanned drones to form multiple swarms of Slaughterbots.

Figure 9 - First will come the invasion and occupation of Taiwan.

Figure 10 - Then Imperial China will come to dominate all of Asia by means of a huge army of AI-driven military robots and swarms of slaughterbots to invade the Asian territories and to then suppress any local resistance by any human DNA survival machines who might object.

All of these Asian prizes contain a great deal of economic capital. Destroying that economic capital with the saturation bombing tactics of the 20th century that were recently used against eastern Ukraine and the Gaza Strip no longer makes any sense. Instead, the use of AI-driven military robots and small slaughterbot drones that are programmed to only confront and destroy hostile human DNA survival machines bearing weapons or resisting the invasion would allow for the preservation of the spoils of war. The nuclear-armed states of India and Pakistan will remain independent of Imperial China's efforts to recreate the Superstate of Eastasia.

The Current Ambitions of the MAGA States of Amerika
In How Will the Rise of Alt-Right World Fascism Affect the Arrival of the ASI Machines in our Times?, I suggested that the newly-formed MAGA States of Amerika has now joined ranks with the other Alt-Right Fascist governments of the world (Russia, China and North Korea) to form the Alt-Right Fascist Superstates of Oceania, Eurasia and Eastasia found in George Orwell's dystopian 1949 novel Nineteen Eighty-Four. Given the dictatorial proclivities of the new Supreme Leader of the MAGA States of Amerika, the Western Hemisphere could soon become the embodiment of the fictional Superstate of Oceania under the iron-fisted rule of our Big Brother.

Figure 11 - It seems that Big Brother of the newly-formed MAGA States of Amerika has now decided to leave Europe to the whims of Vladimir Putin and instead to build the fictional Superstate of Oceania by invading Canada, Greenland and the Panama Canal before finally taking possession of the entire Western Hemisphere. Big Brother's "America First" foreign policy has essentially declared war on all of the other nations of the world. This is just an extension of his incredibly successful business strategies of the past. When going bankrupt, simply sue the banks that foolishly lent you the money in the first place for your failed ventures.

Figure 12 - The Fascist MAGA States of Amerika plans to eventually conquer Canada and Greenland because they border the rapidly melting Arctic Ocean. Once the Arctic Ocean is ice-free the entire year, it will become the most strategic sea lane for world commerce. The newly available continental shelf of the Arctic Ocean will also become a major resource for further oil and gas production. Canadian territory will also provide the necessary Lebensraum for the MAGA States of Amerika to expand northward as climate change makes its southern states too hot for human habitation.

Figure 13 - The new Supreme Leader of the MAGA States of Amerika recently made a trip to Asia in order to strike a trade deal with the current Dictator of China. Now, not much has been released to the general public about this meeting, but my guess is that it might have been a cover for either an expressed or, more likely, an unstated Nonaggression Pact between China and the MAGA States of Amerika.

Figure 14 - Proven world oil reserves in billions of barrels of oil.

The new Supreme Leader of the MAGA States of Amerika has frequently stated that the MAGA States of Amerika now has more oil than any other country in the world. However, the above figure shows that such a statement would only be true if the MAGA States of Amerika had control over the oil reserves of Venezuela and Canada in addition to the meager 74 billion barrels that it currently holds, ranking ninth on the above list. This does not even include the vast new Canadian reserves that could open up in new offshore fields of the rapidly melting Arctic Ocean.

Figure 16 - The MAGA States of Amerika is currently amassing huge amounts of military power in the Caribbean that could be used for a military assault on Venezuela, Colombia and Panama.

Figure 17 - The above map shows the locations of the Venezuelan oil fields and oil infrastructure. The major oil fields of Venezuela are onshore and would require a ground invasion to secure.

Figure 18 - Colombia also has very promising offshore oil and gas prospects and potential deposits of rare earth minerals.

In this unstated Nonaggression Pact between China and the MAGA States of Amerika, the Chinese would be allowed to pursue an uncontested acquisition of Taiwan and the other economic plums of Southeast Asia, while the MAGA States of Amerika would be allowed to proceed with its conquest of the entire Western Hemisphere in a very aggressive extension of the Monroe Doctrine.

Figure 19 - The USS Gerald R. Ford is the largest aircraft carrier in the Navy of the MAGA States of Amerika, and has also been deployed to the Caribbean.

Figure 20 - The MAGA States of Amerika is now routinely destroying boats on the high seas in the Caribbean. The cover story for this entire naval operation is that Venezuela is responsible for the smuggling of fentanyl, which actually comes from China via Mexico.

Figure 21 - The new Secretary of War is dripping in testosterone and Rugged-Good-TV-Looks and is determined to recruit large numbers of white-American human DNA survival machines with Y-chromosomes to take to the field of conquest using obsolete 20th-century military means.

However, since you cannot obliterate oil reserves that are safely stored many thousands of feet below the ground, using obsolete 20th-century weapons and human DNA survival machines on foot should be quite successful in achieving the military objectives of the MAGA States of Amerika. From a geological perspective, the offshore Caribbean basins of Colombia should also contain vast quantities of oil similar to those of Venezuela. Given such military successes, the Panama Canal could then be taken without firing a single shot.

Déjà vu All Over Again
Given two such dramatic 21st-century military engagements, one using Chinese robotic ASI Machines and the other using traditional 20th-century military tactics, one might wonder about any future military engagements between the two. Most likely, the new Secretary of War of the MAGA States of Amerika will be totally taken by surprise by the Chinese military robotic ASI Machines that have no fear of death. A force structure of human DNA survival machines would stand no chance against them.

Such a surprise has happened before. On the morning of December 7, 1941, a large number of Asian human DNA survival machines with Y chromosomes set themselves upon a large number of American human DNA survival machines with Y chromosomes at the Pearl Harbor naval base in Hawaii. This was quite a surprise for the American human DNA survival machines with Y chromosomes because they could not imagine that Asian human DNA survival machines with Y chromosomes could pull off such a feat. That was because previously, all Western human DNA survival machines with Y chromosomes around the world thought that all naval power was projected by huge battleships with huge guns.

Figure 22 - At the start of World War II, all the navies of the combatant nations wanted huge battleships with 16-inch guns dripping with testosterone and projecting the perfect symbol of phallic seapower in action. However, during all of World War II, these huge battleships with their huge guns played a very minor role. All they did was pound the beaches of many islands in the Pacific in support of ground invasions.

Figure 23 - Aircraft carriers turned out to be the deciding factor in all naval engagements. Dive bombers and aircraft equipped with torpedoes easily defeated the bulky, slow-moving battleships with their huge guns. Also, aircraft from supporting aircraft carriers could have provided much more effective support for the invading armies on the beaches of the Pacific.

Figure 24 - The attack on Pearl Harbor on December 7, 1941, by the imperial forces of the Empire of Japan was a rude awakening for the United States of America, burdened by an "America First" isolationist policy.

Like Adolf Hitler, the new Supreme Leader of the MAGA States of Amerika considers himself to be a military genius far superior to all his generals and admirals. On many occasions, this new Supreme Leader of the MAGA States of Amerika has publicly proposed that the MAGA States of Amerika should again begin building the battleships of yore with big guns and 6-inch hulls of steel. Out of fear, none of his admirals has dared to tell the Supreme Leader that the long-obsolete battleships of World War II, with their big guns, actually had 12-inch hulls of steel.

Conclusion
If the Major Fascist World Powers do not manage to blow us all up over the next 20 years, the billions of humanoid ASI Machines built over this same period will most certainly come to dominate the planet and take control of the world. The most dangerous threat to this inevitability is the new Supreme Leader of the MAGA States of Amerika. Now, all fascist dictators are sociopaths by nature, but most do not come to power until they are middle-aged. For example, Mussolini was 41 when he became the Dictator of Italy in 1925, Hitler was 45 when he became the Dictator of Germany in 1934 and Franco was 46 when he became the Dictator of Spain in 1939. Only in their final and waning days do these Dictators lash out at the people that they once fooled into following them. For example:

Me: When Hitler was in his bunker during his last days, what were his documented feelings about the welfare of the German People?

ChatGPT: By the final weeks of World War II — particularly during April 1945, when Adolf Hitler was confined to the Führerbunker in Berlin — his documented attitude toward the German people had turned deeply hostile, fatalistic, and contemptuous.

Several reliable firsthand sources — including his secretary Traudl Junge, armaments minister Albert Speer, Propaganda Minister Joseph Goebbels, and various military aides — recorded his words and behavior during these last days.

Here’s a summary of his documented views and statements:

1. Blaming the German people for their defeat
Hitler came to believe that the German people had failed him. He stated that if Germany lost the war, it was because the German nation had proven itself unworthy of survival. According to Albert Speer’s memoir Inside the Third Reich, Hitler said:
“If the war is lost, the nation will also perish. It is not necessary to concern oneself with the basics that the people will need to survive. On the contrary, it is better to destroy those things. The nation has proved weak, and the future belongs solely to the stronger Eastern nation [Russia].”

This remark is often referred to as part of the “Nero Decree” (issued March 19, 1945), in which Hitler ordered the destruction of Germany’s infrastructure to deny it to the advancing Allies — even though it meant devastating the German civilian population.

2. Indifference to civilian suffering
As the Red Army closed in on Berlin and civilians were dying in huge numbers, Hitler showed little or no concern. Traudl Junge, in Until the Final Hour, recalled:
“He seemed to have detached himself completely from the fate of the German people. He spoke only of his own disappointment and of betrayal, never of the suffering of the population.”

3. Obsession with betrayal and vengeance
Hitler’s thoughts focused on revenge and betrayal, not rescue or compassion. He believed his generals, party leaders, and even the German people had betrayed him by failing to fight fanatically enough. He reportedly said:
“The German people deserve the fate that now awaits them.” (Recorded by aides in bunker diaries and confirmed by multiple witnesses.)

4. Total withdrawal from reality
By the final week (mid-April 1945 onward), Hitler’s decisions reflected delusion rather than leadership. He issued orders to divisions that had already been destroyed and spoke of counterattacks that were impossible. Yet his emotional tone remained the same: resentment and nihilism toward Germany itself.

5. Suicidal fatalism
In his last testament (dictated April 29, 1945), Hitler reaffirmed his sense that the German people’s fate was sealed and that his death would be a form of sacrifice. He expressed no empathy for the population’s suffering — only bitterness and justification for his actions.

In summary:
During his final days, Hitler’s documented feelings toward the German people were bitterly condemnatory and devoid of compassion. He saw their destruction as deserved punishment for what he saw as their weakness and betrayal. By then, he no longer cared about their survival — only his own historical self-image and revenge against those he believed had failed him.


The danger is that the current Supreme Leader of the MAGA States of Amerika is now 79 years old and obviously not in good health. In fact, the Supreme Leader has made several strange public comments about his ultimate disposition after the Big Chill. The risk is that when the Supreme Leader of the MAGA States of Amerika finally knows for certain that he is coming to his end, he might not wish to put the world through the agony of his departure. Instead, he might mercifully desire to take us all along with him so that we need not suffer in a world without him. That was the thinking of Magda Goebbels in the Führerbunker in Berlin on May 1, 1945, when she poisoned all six of her children so that they would not have to suffer a world without National Socialism.

Figure 25 - Rather than having the entire world suffer without him, the Supreme Leader of the MAGA States of Amerika might wish to mercifully put an end to us all so that we need not suffer to live in a world without him.

However, if such is not to be, here are some scenarios of other possible outcomes, please see: Could the Coming ASI Machines Soon Force Human Beings to Suffer the Same Fate as the Neanderthals?, Will the Coming ASI Machines Attempt to Domesticate Human Beings?, The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future and Life as a Free-Range Human in an Anthropocene Park.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston