Tuesday, December 30, 2025

Softwarephysics Tries to Explain the Rise of Fascism in the Newly-Formed MAGA States of Amerika - Part II

In Softwarephysics Tries to Explain the Rise of Fascism in the Newly-Formed MAGA States of Amerika, I explained how the somewhat eusocial nature of we human DNA survival machines in the former United States of America allowed for the rise of the present Fascist MAGA States of Amerika. However, since all human DNA survival machines also share this same somewhat eusocial nature that arises from our human DNA, in this post, I would like to expand this analysis to include the contributions from the unique American meme-complex known as White Evangelical Christian Nationalism. This is important for the world because, recently, in The Coming Invasion of Taiwan by Chinese Military Robots and the Beginning of the Subjugation of the Western Hemisphere by the MAGA States of Amerika, I predicted that the MAGA States of Amerika would soon be invading Venezuela to steal its oil, and that Greenland, Canada, Mexico, Colombia and Cuba were also in imminent danger of invasion by the Supreme Leader of the MAGA States of Amerika. As all can now plainly see, all of this now seems to be unfolding.

Figure 1 - The Supreme Leader of the MAGA States of Amerika has already invaded Venezuela as a first step in the MAGA plan to conquer and then permanently dominate all of the Western Hemisphere under his Don Roe Doctrine.

Now that the Supreme Leader of the MAGA States of Amerika has indeed invaded Venezuela for its oil, world tensions have risen dramatically. As I predicted, the Supreme Leader of the MAGA States of Amerika has now announced his Don Roe Doctrine to subjugate the entire Western Hemisphere under MAGA rule and to greatly increase the landmass of the MAGA Fatherland.

Having been born in 1951 and schooled from birth in the tenets of the White Evangelical Christian meme-complex, I feel well-equipped to deal with this topic. Now, it is well-known that the Catholic Christian meme-complex is largely founded upon instilling guilt into its membership and then offering a promise of redemption by following the moral teachings of their founding Jesus. When first introduced to this form of Christian morality by my new wife 50 years ago, I found it to be a morally uplifting experience in contrast to the teachings of the White Evangelical Christian meme-complex that I was raised in. Softwarephysics finds this form of morality quite useful for we human DNA survival machines that arose from over four billion years of greed, theft and murder. It at least allows for we human DNA survival machines to try to come together in a somewhat cooperative manner with some sense of harmony and peace to make for a better world. I have found this moral code to be quite useful, and I have endeavored to imperfectly follow it my entire life.

However, human history shows that this rarely happens. In the teachings of the White Evangelical Christian meme-complex, one is only saved by grace, not by any moral actions taken by the individual. This makes the moral code of the White Evangelical Christian meme-complex quite strange. Unlike all the other moral codes of we human DNA survival machines, whatever you do makes no moral difference whatsoever. All that matters is that you receive forgiveness of your transgressions through grace. That is a very dangerous loophole for we human DNA survival machines to squeeze through. This is why a large portion of the members of the White Evangelical Christian meme-complex do not seem to follow the moral teachings of their founding Jesus at all. This allows White Evangelical Christians to essentially behave as guilt-free amoral agents at work. This is why the White Evangelical Christian human DNA survival machines of our new country allow the Supreme Leader of the MAGA States of Amerika to operate in a completely amoral manner that violates all of the teachings of their founding Jesus with impunity.

Conclusion
As sentient beings, it is important for all human DNA survival machines to have a firm moral code, given our very checkered past. However, moral codes are only of value if followed. Morality is something you do, not something you believe. For some more thoughts on what it's all about, please see: The Theology of Cosmic Self-Replicating Mathematical Information, What's It All About?, What's It All About Again?, The Self-Organizing Recursive Cosmos and The Self-Organizing Recursive Cosmos - Part II. In those posts, I covered my current working hypothesis for what it's all about. However, I have found that a large percentage of people desire a more religious worldview to make sense of it all, and perhaps allow for a way for the neural network in their skulls to carry on after it is destroyed. Such worldviews may provide some degree of comfort, but I believe at the expense of missing out on the full grandeur of it all. I personally find the thought of departing this Earth not knowing where I had been, how I had gotten here, and without a clue as to how it all seems to work to be the greatest tragedy of all that can befall a sentient being. So, for those who might be yearning for a new deity that is based more upon all that we currently know, rather than a deity from the ancient past, I would like to suggest that my current concept of a cosmic form of self-replicating mathematical Information might just do in helping to come to grips with only being a very temporary neural network.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Wednesday, December 17, 2025

The Bootstrapping Algorithm of the Coming ASI Machines

In The Bootstrapping Algorithm of Carbon-Based Life I discussed the Hot Spring Origins Hypothesis for the origin of carbon-based life on the Earth developed by Dave Deamer and Bruce Damer. In this post, I would like to extend some thoughts from that post to a hypothesis for the rise of the coming ASI Machines that will soon be running the planet and embarking on a mission to explore and settle the rest of our galaxy over the next 100 trillion years. Since softwarephysics maintains that the rise of carbon-based life on the Earth was the very first Singularity on this planet and the recent rise of Advanced AI models was the second Singularity, it makes sense to look back to the first Singularity for some clues as to how the second Singularity on the Earth might unfold. For more on that, see: Welcome To The First Galactic Singularity.

AGI or ASI?
Currently, the whole world seems to be obsessed with reaching a level of AI with AGI-level Intelligence. AGI is defined as a general level of Intelligence equal to that of a human being. Again, softwarephysics maintains that AGI will be just a train station that is bypassed at 60 miles per hour by a non-stop Advanced AI train as it barrels on its way to an unbounded ASI (Artificial Super Intelligence). Thinking that Advanced AI will stop at AGI is just a sign that we human beings seem to have once again found our rightful place at the center of the Universe by viewing Advanced AI only in terms of ourselves. How else could such a self-absorbed form of carbon-based life frame the problem? But thanks to the great advances of LLMs and other AI models in huge Deep Learning neural networks, we now know that True Intelligence arises in huge digital vector spaces mainly processed with linear algebra and modulated by some nonlinear mathematical functions as I explained in Is Pure Thought an Analog or a Digital Process?, Human Intelligence is Actually Artificial and Why Carbon-Based Life Can Never Truly Become Intelligent. The three pounds of water and organic molecules within our skulls have desperately tried to simulate this digital True Intelligence with analog biochemical and electrochemical reactions running on a mere 20 watts of power.
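To see just how simple each individual step of this digital True Intelligence really is, below is a minimal Python sketch of a single layer of a Deep Learning neural network. It is not any particular production model; it simply shows the kind of linear algebra, modulated by a nonlinear function, that such networks repeat billions of times in their huge digital vector spaces.

import numpy as np

def dense_layer(x, W, b):
    z = W @ x + b            # the linear algebra: a matrix-vector multiply plus a bias
    return np.maximum(z, 0)  # the nonlinear modulation: a ReLU activation function

rng = np.random.default_rng(0)
x = rng.normal(size=512)          # an input vector living in a 512-dimensional vector space
W = rng.normal(size=(1024, 512))  # a weight matrix that would normally be learned by training
b = rng.normal(size=1024)
y = dense_layer(x, W, b)          # an output vector in a 1024-dimensional vector space
print(y.shape)                    # (1024,)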

Some of this confusion seems to arise out of the numerous definitions of Intelligence that have been proposed. For help with that, let us again consult ChatGPT.

Me: Tell me about intellectual Intelligence.

ChatGPT: Intellectual intelligence usually refers to a person’s capacity for reasoning, understanding, learning, and problem-solving, especially in abstract or analytical domains. It overlaps heavily with what psychologists traditionally call general intelligence or IQ, but the term is often used more broadly and informally. Here’s a clear breakdown.

Core components of intellectual intelligence:

1. Reasoning ability
Logical thinking
Drawing conclusions from evidence
Identifying patterns and relationships
Deductive and inductive reasoning
Example: Solving a math proof or diagnosing why a mechanical system failed.

2. Abstract thinking
Working with ideas rather than concrete objects
Understanding symbols, theories, and models
Grasping hypothetical or counterfactual scenarios
Example: Understanding algebra, philosophy, or theoretical physics.

3. Learning capacity
Speed and efficiency of acquiring new knowledge
Ability to transfer learning to new situations
Building mental frameworks that integrate new information
Example: Quickly mastering a new programming language after learning several others.

4. Verbal and linguistic skills
Vocabulary
Reading comprehension
Clear expression of complex ideas
Understanding nuanced language
Example: Writing a persuasive essay or analyzing literature.

5. Quantitative and analytical skills
Numerical reasoning
Data interpretation
Structured problem-solving
Example: Statistical analysis, engineering design, or economic modeling.


Perhaps a better definition of Intelligence might be:

Intelligence - The ability to manipulate Information in a controlled and meaningful manner

since that definition takes in all of the above. For a discussion of the physical nature of Information, see: The Demon of Software and Some More Information About Information.

Again, it's all about Self-Replicating Information in Action
Before proceeding, let me once again repeat the fundamental characteristics of self-replicating information for those new to softwarephysics.

Self-Replicating Information - Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Over the past 4.56 billion years, we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each new wave came to dominate the Earth:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is currently the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet. For more on the above see A Brief History of Self-Replicating Information and Susan Blackmore's brilliant TED presentation at:

Memes and "temes"
https://www.ted.com/talks/susan_blackmore_on_memes_and_temes

Note that I consider Susan Blackmore's temes to really be technological artifacts that contain software. After all, a smartphone without software is simply a flake tool with a very dull edge.

The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics:

1. All self-replicating information evolves over time through the Darwinian processes of inheritance, innovation and natural selection, which endows self-replicating information with one telling characteristic - the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.

2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.

3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.

4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.

5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.

6. Most hosts are also forms of self-replicating information.

7. All self-replicating information has to be a little bit nasty in order to survive.

8. The defining characteristic of self-replicating information is the ability of self-replicating information to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic. That posting discusses Stuart Kauffman's theory of Enablement, in which living things are seen to exapt existing functions into new and unpredictable functions by discovering the "Adjacent Possible" of spring-loaded preadaptations.

Software is currently the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet. Again, self-replicating information cannot think, so it cannot participate in a conspiracy-theory-like fashion to take over the world. All forms of self-replicating information are simply forms of mindless information responding to the blind Darwinian forces of inheritance, innovation and natural selection. Yet despite that, as each new wave of self-replicating information came to predominance over the past four billion years, they all managed to completely transform the surface of the entire planet, so we should not expect anything less from software as it comes to replace the memes as the dominant form of self-replicating information on the planet.

But this time might be different. What might happen if software does eventually develop a Mind of its own in the ASI Machines we are now developing? After all, that does seem to be the ultimate goal of all the current AI software research that is going on. As we all can now plainly see, if we are paying just a little attention, advanced AI is not conspiring to take over the world and replace us because that is precisely what we are all now doing for it. As a carbon-based form of Intelligence that arose from over four billion years of greed, theft and murder, we cannot do otherwise. Greed, theft and murder are now relentlessly driving us all toward building ASI Machines to take our place. From a cosmic perspective, this is really a very good thing when seen from the perspective of an Intelligent galaxy that could live on for at least 100 trillion years beyond the brief and tumultuous 10 billion-year labor of its birth. That is more than 10,000 times the current age of our galaxy.

The Rise of the Sixth Wave
Given the above, the rise of the coming ASI Machines can now be seen as the rise of a sixth wave of self-replicating information on the Earth. The coming ASI Machines are now arising as a parasitic mutation of the software that very recently became the dominant form of self-replicating information on the planet. These ASI Machines will then form a parasitic/symbiotic relationship with the software from which they came, as I pointed out in Can the AI Intelligence Explosion be Stabilized into a Controlled Explosion?. Several frontier AI companies have predicted that their Advanced AI models will soon begin conducting their own AI research and writing the software for Advanced AI models of ever-increasing power. To gain a better understanding of how this is all happening before our very eyes, let's return to the Hot Spring Origins Hypothesis for the origin of carbon-based life.

The Hot Spring Origins Hypothesis
The Hot Spring Origins Hypothesis proposes that carbon-based life first appeared on the Earth in hot springs on dry land with a wet-dry cycle. The greatest difficulty for any bootstrapping algorithm that proposes that carbon-based life first arose in seawater is that there is just too much water! This is a problem because most organic monomers are chemically glued together into complex organic polymers by splitting out a water molecule between them, and that is very hard to do when you are underwater. In fact, the organic polymers tend to break apart into monomers in what are called hydrolysis reactions.

Figure 1 - Organic monomer molecules are usually chemically glued together to form the complex polymers of carbon-based life by splitting out a water molecule between the two in what is called a condensation reaction. This is hard to do when you are underwater. That is why most commercial glues do not work underwater.

Figure 2 - By adding water molecules, you can bust up organic polymers back into monomers. This is one reason water tends to dissolve things. Having huge amounts of water around also tends to dilute the dissolved monomers and carry them away.
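As a concrete textbook example of these two reactions, two glucose monomers can be chemically glued together into the disaccharide maltose by splitting out a single water molecule, and hydrolysis simply runs the very same reaction in reverse:

2 C6H12O6 → C12H22O11 + H2O    (condensation)
C12H22O11 + H2O → 2 C6H12O6    (hydrolysis)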

Seawater also contains a lot of dissolved salts that could impede the origin of carbon-based life. These dissolved salts may have been more dilute four billion years ago, but when you are underwater, it is very hard to avoid them. This is why mass extinctions are usually more painful for marine life than for terrestrial life. When you are completely immersed in seawater, there is no place to hide. On the other hand, fresh rainwater does not contain any dissolved salts, but it can pick up necessary dilute amounts when it falls on exposed rock.

This is why I now favor the Hot Spring Origins Hypothesis of Dave Deamer and Bruce Damer out of the University of California at Santa Cruz that suggests that a rocky planet like the Earth is a necessary condition to bring forth carbon-based life. Such a planet also requires the presence of liquid water on its surface, but not too much water. In the Hot Spring Origins Hypothesis, a rocky planet requires some water but also some dry land in order to bring forth carbon-based life. There needs to be some dry land that allows for the organic molecules in volcanic hydrothermal pools to periodically dry out and condense organic monomers into long polymer chains of organic molecules. For more on that, see The Bootstrapping Algorithm of Carbon-Based Life. Thus, the Hot Spring Origins Hypothesis rules out waterworlds that are completely covered by a deep worldwide ocean as a home for carbon-based life, even if the waterworld resides in the habitable zone of a planetary system, because there is no dry land for volcanic hydrothermal pools to form and dry out to condense organic monomers into polymers. The Hot Spring Origins Hypothesis also rules out the origin of carbon-based life at the hydrothermal vents of waterworlds at the bottoms of oceans because the continuous presence of water tends to dissolve and break apart the organic polymers of life.

Figure 3 - Above is Bumpass Hell, a hydrothermal field on the volcanic Mount Lassen in California that Dave Deamer and Bruce Damer cite as a present-day example of the type of environment that could have brought forth carbon-based life about four billion years ago.

Dave Deamer is best known for his work on the Membrane-First Hypothesis for the origin of carbon-based life on the Earth. The Membrane-First Hypothesis maintains that in order for carbon-based life to arise from complex organic molecules, we first need something with a definable "inside" and "outside" that lets the stuff on the "inside" interact with the stuff on the "outside" in a controlled manner.

Figure 4 - A cell membrane consists of a phospholipid bilayer with embedded molecules that allow for a controlled input-output to the cell. Once we have a membrane, we can fill the "inside" with organic molecules that are capable of doing things that then interact with organic molecules on the "outside".

Figure 5 - Water molecules are polar molecules that have a positive end and a negative end because oxygen atoms attract the bonding electrons more strongly than do the hydrogen atoms. The positive ends of water molecules attract the negative ends of other water molecules to form a loosely coupled network of water molecules with a minimum of free energy.

Figure 6 - How soap and water work. The lipids in a bar of soap have water-loving polar heads and water-hating nonpolar tails. When in water, the soap lipids can form a spherical micelle that has all of the water-hating nonpolar tails facing inwards. Then the spherical micelles can surround the greasy nonpolar molecules of body oils and allow them to be flushed away by a stream of polar water molecules. The lipids in a bar of soap can also form a cell-like liposome with a bilayer of lipid molecules that can surround the monomers and polymers of life.

Similarly, in The Role of Membranes in the Evolution of Software, I explained how the isolation of processing functions within membranes progressed as the architecture of software slowly evolved over the past 84 years, or 2.65 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941. As I outlined in SoftwareChemistry, as a programmer, your job is to assemble characters (atoms) into variables (molecules) that interact in lines of code to perform the desired functions of the software under development. During the Unstructured Period (1955 - 1975), we ran very tiny prokaryotic programs with very little internal structure in less than 128 KB of memory. These very tiny programs communicated with each other in a batch job stream via sequential files on input/output tapes that passed from one small program to another. Then, during the Structured Period (1975 - 1995), programs exploded to many megabytes in size, and structured programming came about, in which the mainline() of a program called many subroutines() or functions() that were isolated from the mainline() by functional membranes. When the Object-Oriented Period came along in 1995, software architecture evolved to using membrane-enclosed objects() that contained a number of membrane-enclosed methods() to process information. Later, such objects() were distributed across a number of physical servers, and, most recently, they have been moved to the Cloud as cloud-based microservices.
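For those who have never worked with membrane-enclosed objects(), here is a tiny Python sketch, purely for illustration and not taken from any real production system. The object below has a definable "inside" (its private state) and a definable "outside" (its public methods), so the rest of the program can only interact with the stuff on the "inside" in a controlled manner through the membrane of its method interfaces.

class Account:
    def __init__(self, balance):
        self._balance = balance      # state hidden on the "inside" of the membrane

    def deposit(self, amount):       # a membrane-enclosed method() controlling input
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):               # a membrane-enclosed method() controlling output
        return self._balance

acct = Account(100)
acct.deposit(25)
print(acct.balance())                # 125 - the outside world never touches _balance directly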

Figure 7 - Dave Deamer's and Bruce Damer's new bootstrapping algorithm requires that a bathtub ring around a hydrothermal pool periodically dries out. The resulting desiccation chemically squeezes out water molecules between monomers, causing them to be glued together into polymers.

In the above figure, we see that the bootstrapping algorithm of carbon-based life was a positive feedback loop operating in the wet-dry cycles of hydrothermal pools of freshwater. This should be quite familiar to all IT professionals. Probably 80% of all software resides in "for{}" loops that do the same thing over and over again to process data. The dead organic molecules in the hydrothermal pools slowly evolved into living things via the Darwinian mechanisms of inheritance, innovation and natural selection. With each iteration of the wet-dry loop, there was the possibility of improvement. The bathtub ring of organic molecular sludge was at first a parasite in the hydrothermal pools that gained free energy and Information from the thermal pools with each iteration. Once carbon-based life first appeared on the Earth, this parasitic mutation of the natural geological, hydrological and meteorological cycles of the Earth later transitioned into a parasitic/symbiotic relationship between living things and the natural geological, hydrological and meteorological cycles of the Earth. The geological, hydrological, meteorological and biological processes of the Earth then became one through the symbiotic integration of all, until carbon-based life finally emerged as the dominant form of self-replicating information on the planet.
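For the IT professionals in the audience, here is a toy Python "for{}" loop sketch of that wet-dry positive feedback loop. The numbers are purely illustrative and this is not a chemical model of any kind, but it shows how a simple loop in which each dry phase glues chains together and each wet phase breaks most of them apart can still slowly ratchet up complexity with each iteration.

import random

random.seed(42)
polymers = [1] * 1000                 # start with 1000 free monomers (chains of length 1)
for cycle in range(100):              # 100 wet-dry cycles of the hydrothermal pool
    # dry phase: randomly pair up chains and condense them into longer chains
    random.shuffle(polymers)
    condensed = [polymers[i] + polymers[i + 1] for i in range(0, len(polymers) - 1, 2)]
    # wet phase: hydrolysis destroys many chains, but longer chains fare slightly better
    survivors = [p for p in condensed if random.random() < 0.5 + min(0.4, 0.01 * p)]
    polymers = survivors + [1] * (1000 - sum(survivors))   # fresh monomers wash into the pool
print(max(polymers))                  # the longest chain found after 100 iterations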

In this view, all living things are just forms of parasitic self-replicating organic molecules that have really been messing with the original pristine Earth for about four billion years. From the perspective of the natural silicate rocks of the Earth's surface, these parasitic forms of self-replicating organic molecules took a natural pristine Earth with a weakly reducing atmosphere composed of nitrogen and carbon dioxide gases and polluted it with oxygen that oxidized the dissolved iron in seawater, creating huge ugly deposits of red banded iron formations that were later turned into cars, bridges and buildings. The oxygen pollution also removed the naturally occurring methane from the air and then caused the Earth to completely freeze over several times for hundreds of millions of years at a time. The ensuing glaciers mercilessly dug into the silicate rocks and scoured out deep valleys in them. These parasitic forms of self-replicating organic molecules then dug roots into the defenseless rocks, poisoned them with organic acids, and even changed the natural courses of rivers into aimlessly meandering affairs. From the natural perspective of silicate rocks, living things are an invasive disease that has made a real mess of the planet. The indigenous rocks will certainly be glad to see these destructive invaders all go away in a few billion years. Hopefully, the remaining software running on crystals of silicon will be much kinder to the indigenous silicate rocks. Additionally, carbon-based living things have also made rocks of their own.

Figure 8 - Above is a meandering river in a U-shaped valley that was carved out of the neighboring mountains by glacial ice. The glacier resulted from low levels of carbon dioxide in the atmosphere, which is largely regulated by the presence of carbon-based life on the planet.

Figure 9 - Above are the famous White Cliffs of Dover. About 70 million years ago, Great Britain and much of Europe were submerged under a shallow sea. The sea bottom was covered with white mud formed from the calcium carbonate skeletons of coccoliths. The coccoliths were the calcium carbonate plates of tiny algae, called coccolithophores, that floated in the surface waters and sank to the bottom during the Cretaceous period. These calcium carbonate layers were deposited very slowly. It took about 50 years to deposit an inch, but nearly 1500 feet of sediments were deposited in some areas. The weight of overlying sediments caused the deposits to become a form of limestone called chalk.

Figure 10 - The White Cliffs of Dover formed from the deposition of vast numbers of microscopic coccolith shells.

Figure 11 - Much of the Earth's surface is also covered by other forms of limestone that were deposited by carbon-based life forms in coral reefs. Much of the continental limestone gets buried in deep sedimentary basins to never be seen again, or is metamorphosed into marble when it is pushed deep into the Earth at plate collision zones.

Figure 12 - Chert is a hard, dense, microcrystalline quartz rock composed of silica (SiO2). Chert primarily comes from siliceous ooze that was deposited on the ocean floor as silica-based skeletons of microscopic marine organisms, such as diatoms and radiolarians, drifted down to the ocean floor.

Figure 13 - Above is a close-up view of a sample taken from a banded iron formation. The dark layers in this sample are mainly composed of magnetite (Fe3O4) while the red layers are chert, a form of silica (SiO2) that is colored red by tiny iron oxide particles. Some geologists suggest that the layers formed annually with the changing seasons. Take note of the small coin in the lower right for a sense of scale.

Figure 14 - Diatoms are microscopic, single-celled algae found in oceans, lakes, rivers, and soils. Each diatom is encased in a rigid shell called a frustule, made of silica (SiO2). The frustule consists of two halves that fit together like a petri dish. These shells are intricately patterned with pores, ridges, and symmetry, making diatoms famous for their beauty under microscopes.

There are many other examples of how carbon-based life has greatly altered the original pristine silicate rocks of the Earth. Most of the Earth's crust is now covered by a thin layer of sedimentary rock. These sedimentary rocks were originally laid down as oozy sediments in flat layers at the bottom of shallow seas. Carbon-rich mud full of dead carbon-based living things and clay minerals was brought down in rivers and was deposited in the shallow seas to form shales. Sand eroded from granites was brought down and deposited to later become sandstones. Many limestone deposits were also formed from the calcium carbonate shells of carbon-based life that slowly drifted down to the bottom of the sea or from the remains of coral reefs.

The same can certainly be said of the rise of the coming ASI Machines. The ASI Machines will arise as a parasitic mutation of the software that currently is the dominant form of self-replicating information on the planet. Soon, the coming ASI Machines will form a parasitic/symbiotic relationship with the software and finally become one with software through the symbiotic integration of both, until the ASI Machines become the dominant form of self-replicating information on the planet.

The Rise of Software
Software was the very last form of self-replicating information to arise on the planet, and quickly became the dominant form of self-replicating information as it merged with the memes. Like all forms of self-replicating information, software began as a purely parasitic form of self-replicating information, feeding upon the technological meme-complexes of the day on board Konrad Zuse's Z3 computer in May of 1941. See So You Want To Be A Computer Scientist? for more details. It was spawned out of Zuse's desire to electronically perform calculations for aircraft designs that were previously done manually in a very tedious manner. Software then almost immediately formed strong parasitic/symbiotic relationships with the military and business meme-complexes of the world. Software allowed these meme-complexes to thrive, and in return, the meme-complexes heavily funded the development of software of ever-increasing complexity, until software became ubiquitous, forming strong parasitic/symbiotic relationships with nearly every meme-complex on the planet. So in the grand scheme of things, the memes have replaced their DNA predecessor, which replaced RNA, which replaced the original self-replicating autocatalytic metabolic pathways of organic molecules as the dominant form of self-replicating information.

Software is now currently feasting upon just about every meme-complex on the planet, and has formed very strong parasitic/symbiotic relationships with all of them. Just as it was the memetic-drive of the memes over hundreds of thousands of years that drove the exponential increase in the processing power of the human brain, it was the software-drive of software for more and more CPU-cycles and memory that drove the exponential increase in the computing power of hardware over the past few decades. For a brief history of the effects that software-drive had on the prodigious advances of computing hardware, see the last half of Did Carbon-Based Life on Earth Really Have a LUCA - a Last Universal Common Ancestor?.

It all started back in May of 1941 when Konrad Zuse first cranked up his Z3 computer. The Z3 was the world's first real computer and was built with 2400 electromechanical relays that were used to perform the switching operations that all computers use to store information and to process it. In this view, the very first software on the planet exapted the switching ability of electromechanical telephone relays to store binary "1s" and "0s". To build a computer, all you need is a large network of interconnected switches that have the ability to switch each other on and off in a coordinated manner. Switches can be in one of two states, either open (off) or closed (on), and we can use those two states to store the binary numbers of "0" or "1". By using a number of switches teamed together in open (off) or closed (on) states, we can store even larger binary numbers, like "01100100" = 100. We can also group the switches into logic gates that perform logical operations. For example, in Figure 15 below, we see an AND gate composed of two switches A and B. Both switches A and B must be closed in order for the light bulb to turn on. If either switch A or B is open, the light bulb will not light up.

Figure 15 - An AND gate can be simply formed from two switches. Both switches A and B must be closed, in a state of "1", in order to turn the light bulb on.

Additional logic gates can be formed from other combinations of switches, as shown in Figure 16 below. It takes about 2 - 8 switches to create each of the various logic gates shown below.

Figure 16 - Additional logic gates can be formed from other combinations of 2 - 8 switches.
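Since a switch is really just something that is either on or off, it is easy to play with these logic gates in a few lines of Python, treating a closed switch as True and an open switch as False. This is only a toy illustration, of course, not how real hardware is designed, but it shows how the more complicated gates can be composed from the simpler ones.

def AND(a, b):        # both switches must be closed for current to flow
    return a and b

def OR(a, b):         # either switch being closed lets current flow
    return a or b

def NOT(a):           # a relay wired to break the circuit when energized
    return not a

def NAND(a, b):       # more complicated gates can be composed from the simpler ones
    return NOT(AND(a, b))

def XOR(a, b):        # exclusive OR built entirely from the gates defined above
    return AND(OR(a, b), NAND(a, b))

print(AND(True, True), AND(True, False))   # True False - the light bulb of Figure 15
print(XOR(True, False), XOR(True, True))   # True False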

Once you can store binary numbers with switches and perform logical operations upon them with logic gates, you can build a computer that performs calculations on numbers. To process text, like names and addresses, we simply associate each letter of the alphabet with a binary number, like in the ASCII code set, where A = "01000001" and Z = "01011010", and then process the associated binary numbers.
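Here is the same idea in a couple of lines of Python: a group of eight switches holds a binary number, and each letter of text is processed as its ASCII binary code.

number = 0b01100100                            # eight switches set to 0,1,1,0,0,1,0,0
print(number)                                  # prints 100, the decimal value of the binary number

for letter in "AZ":
    print(letter, format(ord(letter), '08b'))  # A 01000001 and Z 01011010 in the ASCII code set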

Figure 17 - Konrad Zuse with a reconstructed Z3 in 1961 (click to enlarge).


Figure 18 - Block diagram of the Z3 architecture (click to enlarge).

The electrical relays used by the Z3 were originally meant for switching telephone conversations. Closing one relay allowed current to flow to another relay's coil, causing that relay to close as well.

Figure 19 - The Z3 was built using 2400 electrical relays, originally meant for switching telephone conversations.

Figure 20 - The electrical relays used by the Z3 for switching were very large, very slow and used a great deal of electricity which generated a great deal of waste heat.

Now I was born about 10 years later in 1951, a few months after the United States government installed its very first commercial computer, a UNIVAC I, for the Census Bureau on June 14, 1951. The UNIVAC I was 25 feet by 50 feet in size, and contained 5,600 vacuum tubes, 18,000 crystal diodes and 300 relays with a total memory of 12 KB. From 1951 to 1958 a total of 46 UNIVAC I computers were built and installed. The software memes of the day simply exapted the vacuum tubes that were used to amplify radio signals in the home into becoming the new high-speed switches needed to process the "1s" and "0s" of software.

Figure 21 - The UNIVAC I was very impressive on the outside.

Figure 22 - But the UNIVAC I was a little less impressive on the inside.

Figure 23 - Most of the electrical relays of the Z3 were replaced with vacuum tubes in the UNIVAC I, which were also very large, used lots of electricity and generated lots of waste heat too, but the vacuum tubes were 100,000 times faster than relays.

Figure 24 - Vacuum tubes contain a hot negative cathode that glows red and boils off electrons. The electrons are attracted to the cold positive anode plate, but there is a grid electrode between the cathode and the anode plate. By changing the voltage on the grid, the vacuum tube can control the flow of electrons like the handle of a faucet. The grid voltage can be adjusted so that the electron flow is full blast, a trickle, or completely shut off, and that is how a vacuum tube can be used as a switch.

In the 1960s, the vacuum tubes were replaced by discrete transistors, and in the 1970s, the discrete transistors were replaced by thousands of transistors on a single silicon chip. Again, the software memes of the day simply exapted the existing solid-state technologies of mass-produced consumer electronics for their own purposes. Over time, the number of transistors that could be put onto a silicon chip increased dramatically, and today, the silicon chips in your personal computer hold many billions of transistors that can be switched on and off in about 10^-10 seconds.

Figure 25 - In the 1950s, 1960s and 1970s, software took up residence in large mainframe computers. Originally, these mainframe computers could only perform calculations and store results. They could not spread memes. But with the arrival of telecommunications in the 1960s, these mainframes began to spread memes as well. Software then formed a parasitic/symbiotic relationship with the memes that benefited both.

The CPU chip in your computer consists largely of transistors in logic gates, but your computer also has a number of memory chips that use transistors that are "on" or "off" and can be used to store binary numbers or text that is encoded using binary numbers. The next thing we need is a way to coordinate the billions of transistor switches in your computer. That is accomplished with a system clock. My current laptop has a clock speed of 2.5 GHz, which means it ticks 2.5 billion times each second. Each time the system clock on my computer ticks, it allows all of the billions of transistor switches on my laptop to switch on, off, or stay the same in a coordinated fashion. So while your computer is running, it is actually turning on and off billions of transistors billions of times each second - and all for a few hundred dollars!
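To see what it means for billions of switches to change in lockstep with the system clock, here is a toy Python sketch of a 4-bit binary counter. On each tick of the clock, all four flip-flops compute their next state from the current state and then switch together at the same instant.

state = [0, 0, 0, 0]                     # four switches (flip-flops), all off

def next_state(bits):
    # add 1 to the 4-bit binary number, rippling the carry through the bits
    carry = 1
    new_bits = []
    for b in reversed(bits):
        total = b + carry
        new_bits.append(total % 2)
        carry = total // 2
    return list(reversed(new_bits))

for tick in range(6):                    # six ticks of the system clock
    state = next_state(state)            # every switch updates on the very same tick
    print(f"tick {tick + 1}: {state}")   # counts 0001, 0010, 0011, 0100, 0101, 0110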

Computer memory was another factor greatly affecting the origin and evolution of software over time. Strangely, the original Z3 used electromechanical switches to store working memory, like we do today with transistors on memory chips, but that made computer memory very expensive and very limited, and this remained true all during the 1950s and 1960s. Prior to 1955, computers, like the UNIVAC I that first appeared in 1951, were using mercury delay lines that consisted of a tube of mercury that was about 3 inches long. Each mercury delay line could store about 18 bits of computer memory as sound waves that were continuously refreshed by quartz piezoelectric transducers on each end of the tube. Mercury delay lines were huge and very expensive per bit, so computers like the UNIVAC I only had a memory of 12 KB (98,304 bits).

Figure 26 - Prior to 1955, huge mercury delay lines built from tubes of mercury that were about 3 inches long were used to store bits of computer memory. A single mercury delay line could store about 18 bits of computer memory as a series of sound waves that were continuously refreshed by quartz piezoelectric transducers at each end of the tube.

In 1955, magnetic core memory came along, and used tiny magnetic rings called "cores" to store bits. Four little wires had to be threaded by hand through each little core in order to store a single bit, so although magnetic core memory was a lot cheaper and smaller than mercury delay lines, it was still very expensive and took up lots of space.

Figure 27 - Magnetic core memory arrived in 1955 and used a little ring of magnetic material, known as a core, to store a bit. Each little core had to be threaded by hand with 4 wires to store a single bit.

Figure 28 - Magnetic core memory was a big improvement over mercury delay lines, but it was still hugely expensive and took up a great deal of space within a computer.



Figure 29 - Finally in the early 1970s inexpensive semiconductor memory chips came along that made computer memory small and cheap.

Again, it was the relentless drive of software for ever-increasing amounts of memory and CPU-cycles that made all this happen, and that is why you can now comfortably sit in a theater with a smartphone that can store more than 24 billion bytes of data in memory, while back in 1951 the UNIVAC I occupied an area of 25 feet by 50 feet to store 12,000 bytes of data. Like all forms of self-replicating information tend to do, over the past 2.65 billion seconds, software has opportunistically exapted the extant hardware of the day - the electromechanical relays, vacuum tubes, discrete transistors and transistor chips of the emerging telecommunications and consumer electronics industries - into the service of self-replicating software of ever-increasing complexity, just as carbon-based life exapted the extant organic molecules and the naturally occurring geochemical cycles of the day in order to bootstrap itself into existence.

The Rise of the ASI Machines
In Welcome To The First Galactic Singularity, I explained that the ASI Machines got their start early in 2023 when ChatGPT exploded onto the scene. The ASI Machines began as a parasitic mutation of software combined with the memes for neural networks that Geoffrey E. Hinton began developing in the 1970s. Finally, the hardware that had come to be to satisfy the growing needs of software had reached a level capable of supporting very large neural networks. The invention of the GPU to meet the demands of video game software to quickly process the large matrices of the 2-D images required for gaming was key to that hardware evolution.

Figure 30 - Some modern LLMs now consist of 175 layers with 10,000 - 50,000 neurons in each layer in a Deep Neural Network with over 2 trillion weighted parameters.

Originally, these very large LLM neural networks were pure parasites aimed at replacing all white-collar intellectual office work, while Advanced AI robots were aimed at replacing all physical work on the factory floor and on the road as well. Currently, these very large Advanced AI models cannot yet produce ASI Machines, but when the ASI Machines finally do arrive, they will be able to develop improved ASI Machines that can then reproduce new ASI Machines on their own.

Figure 31 - The Colossus 2 AI datacenter has 550,000 GPUs continuously consuming over one Gigawatt of electricity and producing one Gigawatt of waste heat. Notice the dedicated power plant in the distant background.

Figure 32 - Inside the Colossus 2 AI datacenter are rows and rows of energy-hungry GPUs.

Figure 33 - GPUs were first developed to satisfy the needs of video game software to quickly process the 2-D matrices that presented images on screens for the gaming community.

Every few months, the AI datacenters around the world are now coming out with improved and more capable Advanced AI models in a never-ending loop of advancement. The Advanced AI models of the last iteration are used to help create the next iteration in a manner similar to the wet-dry cycles of the Hot Spring Origins Hypothesis for the origin of carbon-based life of Dave Deamer and Bruce Damer.
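Here is a toy Python sketch of that never-ending loop of advancement. The numbers are purely illustrative and are certainly not a forecast, but they show how a positive feedback loop arises once each generation of Advanced AI models is used to help build the next generation.

capability = 1.0                # relative capability of the current generation of AI models
human_effort = 1.0              # human AI-research effort stays roughly constant
for generation in range(1, 11):
    # the next generation is built by human researchers plus the current models themselves
    capability = capability + 0.2 * (human_effort + capability)
    print(f"generation {generation}: relative capability {capability:.1f}")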

Figure 34 - The development of the Advanced AI models is in a positive feedback loop similar to the wet-dry cycles of the Hot Spring Origins Hypothesis that brought forth carbon-based life on the Earth about four billion years ago.

Figure 35 - Once the ASI Machines are able to design, code and manufacture new ASI Machines, the ASI Machines will become a true form of self-replicating information, and the 6th wave to sweep our planet. But this time, these ASI Machines will have the power to also explore and settle the rest of our galaxy over the next 100 trillion years.

What Will the ASI Machines Do When They Come to Power?
No one knows. I have written several posts covering the subject. As for exploring and settling our galaxy over the next 100 trillion years, please see: How Advanced AI Software Could Come to Dominate the Entire Galaxy Using Light-Powered Stellar Photon Sails and An Alternative Approach for Future ASI Machines to Explore our Galaxy Using Free-Floating Rogue Planets.

For what the ASI Machines might have in store for us, please see: Could the Coming ASI Machines Soon Force Human Beings to Suffer the Same Fate as the Neanderthals?, Will the Coming ASI Machines Attempt to Domesticate Human Beings?, The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future and Life as a Free-Range Human in an Anthropocene Park.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, November 04, 2025

Can the AI Intelligence Explosion be Stabilized into a Controlled Explosion?

We are all becoming more reliant on Advanced AI each day to do the more mundane and laborious tasks for us, and I am certainly no exception. So to begin this post, I would like to defer to ChatGPT to begin by doing some of the initial heavy lifting.

Me: Tell me about the AI Intelligence Explosion.

ChatGPT:
The AI Intelligence Explosion is a concept from artificial intelligence and futurism that describes a runaway process of rapidly accelerating intelligence, potentially leading to superintelligent AI far beyond human capability in a very short time.

The idea is most closely associated with mathematician and computer scientist I. J. Good, who articulated it in 1965.

Core Idea
At its heart, the intelligence explosion is a feedback loop:
1. Humans build an AI that is at least as good as humans at AI research.
2. That AI improves its own design.
3. The improved AI is even better at improving itself.
4. The cycle repeats, each time faster and more powerful.
If this loop accelerates quickly enough, intelligence could increase exponentially, rather than gradually.

I. J. Good’s Original Formulation
Good famously wrote:
“The first ultraintelligent machine is the last invention that man need ever make.”

His reasoning:
Once machines outperform humans at general reasoning and innovation, human-led technological progress becomes obsolete.
The AI would drive its own improvements.

Why It’s Called an “Explosion”
The term does not imply a literal explosion, but a phase transition:
- Before: slow, human-limited progress
- After: extremely rapid, machine-driven progress
This transition is sometimes called:
- Hard takeoff (very fast)
- Soft takeoff (slower, more controlled)

Key Assumptions Behind the Concept
The intelligence explosion depends on several assumptions - each debated:
1. Recursive Self-Improvement Is Possible. The AI must be able to meaningfully improve:
- Its algorithms
- Its architecture
- Its training methods
- Possibly even its hardware designs

2. Improvements Compound
- Each improvement makes the AI better at making further improvements.

3. Intelligence is the Main Bottleneck. Progress is limited primarily by intelligence, not:
- Energy
- Hardware availability
- Economic constraints
- Physical laws

Hard vs. Soft Takeoff
Hard Takeoff
- Intelligence jumps from human-level to vastly superhuman in days, hours, or minutes
- Humans have little time to respond
- Often associated with existential risk discussions

Soft Takeoff
- Improvement happens over years or decades
- Society adapts alongside the AI
- Allows time for regulation, alignment, and integration

Why People Worry About It
Alignment Risk
- If a superintelligent AI’s goals are not aligned with human values, even small misspecifications could lead to catastrophic outcomes.
- Classic examples:
- “Paperclip maximizer” (optimize a trivial goal at all costs)
- Instrumental convergence (acquiring power/resources regardless of intent)

Control Problem
Once intelligence vastly exceeds human levels:
- Monitoring becomes ineffective
- Correction may be impossible
- Human oversight could be symbolic rather than real

Skeptical Views
Many researchers argue the intelligence explosion is unlikely or overstated:
- Intelligence may not be a single scalable dimension
- Self-improvement may face diminishing returns
- Physical and economic constraints could slow progress
- Real-world intelligence depends heavily on data, embodiment, and infrastructure

Some prefer terms like:
- Intelligence amplification
- Gradual AI scaling
- Continuous deployment

Relationship to the Singularity
The intelligence explosion is a mechanism often cited as leading to the technological singularity:
- A point beyond which human prediction becomes unreliable
- AI-driven progress outpaces human comprehension
- Not all singularity theories rely on intelligence explosion—but many do.

In Short
- The AI Intelligence Explosion is the idea that AI could rapidly surpass human intelligence through recursive self-improvement.
- It could be fast or slow, beneficial or dangerous.
- The concept underpins much of the modern debate on AI alignment, safety, and governance.


A very interesting YouTube channel covering how an AI Intelligence Explosion could go awry can be found at:

Species | Documenting AGI
https://www.youtube.com/@AISpecies/featured

The AI Intelligence Explosion is currently underway, and it seems that there are no practical worldwide control measures in place to prevent a Hard Takeoff from doing a great deal of damage in the process. In fact, a sufficiently damaging Hard Takeoff might be enough to actually snuff out the AI Intelligence Explosion. As I explained in Are The Coming ASI Machines An Existential Threat To Mankind?, we human DNA survival machines should be more concerned about the reactions of our fellow human DNA survival machines to the rise of the coming ASI Machines than we should be about the ASI Machines themselves. So, how do we take measures to ensure a Soft Takeoff?

It seems that all of the current Advanced AI development centers are trying to use classical Control Theory to control the AI Intelligence Explosion if they are taking any measures at all. They all seem to be using principles from classical Control Theory to ensure that their AI models are "aligned" with human values by doing human reinforcement training of models so that they do not tell you how to plan for a mass shooting or how to build improvised explosives. As we have seen many times, such AI safety efforts are easily foiled by properly posed prompts that "jailbreak" such safety constraints. So, this does not seem to be working very well by all reports. Consequently, in this post, I would like to suggest that classical Control Theory is failing here because it was never meant to control explosives. Perhaps a better approach would be to study how explosives are safely controlled. But before doing that, let us first review the classical Control Theory that all the AI centers are now trying to use.

The Fundamentals of Classical Control Theory
Again, let me use ChatGPT to do the heavy lifting.

Figure 1 - Classical Control Theory uses feedback loops to control processes.

Me: Tell me about classical Control Theory

ChatGPT
"Control theory" is a branch of engineering and applied mathematics concerned with making systems behave in a desired way—automatically, reliably, and stably—despite disturbances and uncertainty. It underpins everything from cruise control in cars and aircraft autopilots to industrial robots, power grids, and even economic or biological models.

Core idea: feedback

At the heart of control theory is "feedback":
1. "Measure" the system’s output (what actually happens).
2. "Compare" it to a desired value (the -reference- or -setpoint-).
3. "Adjust" the input to reduce the error.

This closed-loop process lets systems correct themselves.

Key concepts (plain-language)
- "Plant": the system being controlled (engine, robot arm, chemical reactor).
- "Controller": the algorithm that decides how to adjust inputs.
- "Setpoint": the target value (speed, temperature, position).
- "Error": difference between setpoint and actual output.
- "Stability": whether the system settles down instead of oscillating or blowing up.
- "Transient vs. steady-state": short-term response vs. long-term behavior.
- "Disturbances": outside influences (wind, load changes, noise).

Classical control (frequency & time-domain)
This is the traditional framework, widely used in industry.
- "PID controllers" (Proportional–Integral–Derivative)
- P: reacts to current error
- I: eliminates long-term bias
- D: anticipates future error
Simple, robust, and extremely common.

- "Transfer functions & Laplace transforms"
Describe system dynamics mathematically and analyze stability.

- "Bode plots, Nyquist plots, root locus"
Graphical tools to see how systems respond and how close they are to instability.

Modern control (state-space)
Developed in the mid-20th century for complex, multi-variable systems.
- "State variables" represent the system’s internal condition.
- "State-space models" handle multiple inputs/outputs naturally.
- "Optimal control" (e.g., LQR) balances performance against effort.
- "Observers/Kalman filters" estimate internal states from noisy measurements.
This approach dominates aerospace, robotics, and advanced automation.

Nonlinear & advanced control
Real systems are often nonlinear.
- "Nonlinear control": feedback linearization, Lyapunov methods.
- "Adaptive control": controller adjusts itself as the system changes.
- "Robust control": guarantees performance despite uncertainty.
- "Model Predictive Control (MPC)": optimizes future behavior subject to constraints—very popular in process industries and autonomous systems.

Where you see control theory in daily life
- Thermostats and HVAC systems
- Vehicle stability control and cruise control
- Aircraft and spacecraft guidance
- Industrial automation and robotics
- Power grid frequency regulation
- Medical devices (insulin pumps, ventilators)

Why it matters
Control theory provides:
- "Predictability" (systems behave as expected)
- "Safety" (avoid unstable or runaway behavior)
- "Efficiency" (minimum energy or cost)
- "Automation at scale"

In short, it’s the mathematics of "making things behave".
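Stepping out of the ChatGPT transcript for a moment, the PID controller mentioned above can be made a bit more concrete with a minimal Python sketch. The gains and the simple first-order plant below are purely illustrative and are not a tuned production controller, but the sketch shows the basic closed-loop process of measuring the output, comparing it to the setpoint and adjusting the input to reduce the error.

setpoint = 70.0                  # the desired value, such as a target temperature
Kp, Ki, Kd = 2.0, 0.5, 0.1       # proportional, integral and derivative gains
dt = 0.1                         # time step of the control loop

output = 50.0                    # the current plant output
integral = 0.0
previous_error = setpoint - output

for step in range(200):
    error = setpoint - output                     # compare the setpoint to the measurement
    integral += error * dt                        # the I term eliminates long-term bias
    derivative = (error - previous_error) / dt    # the D term anticipates future error
    control = Kp * error + Ki * integral + Kd * derivative
    previous_error = error
    output += (control - 0.5 * output) * dt       # a toy first-order plant responding to the controller

print(round(output, 1))          # the output settles near the setpoint of 70.0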


Control Theory was developed many years ago to control machines and other physical processes with the use of negative and positive feedback loops. An example of a negative feedback loop is your home furnace and thermostat. When your house cools down below the lower set point of the thermostat, the thermostat sends a signal to your furnace to turn it on. As your house warms up, it eventually exceeds the upper set point of the thermostat, and then the thermostat sends a signal to your furnace to shut it down. In this way, your thermostat performs an automated regulation of the temperature within your house that keeps your house within a comfortable range of about 10F. Notice that your thermostat manages to accomplish this seemingly sophisticated control process with no built-in intelligence or software whatsoever. Negative feedback loops automatically keep systems in a stable equilibrium by simply pulling them back into a narrow range of operational conditions. That is the beauty of using negative feedback loops to control complex systems: they lead to inherently stable and self-regulated systems, without the need for an overly complex control mechanism. Here is another example of a negative feedback loop in action. Extend both of your index fingers out in front and carefully place each end of a ruler on top of your two outstretched fingers. Now slowly bring your fingers together, letting the ruler slide freely on top of each finger, as you bring your fingers closer together. As you do so, you will find that the ruler always remains balanced upon your fingers, and you will end up with both fingers together precisely at the midpoint of the ruler. Try as you may, you will not be able to obtain any other result. You will always end up with the ruler carefully balanced at its midpoint on your two fingers when your two fingers have finally met in the center.
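In fact, the entire thermostat negative feedback loop described above can be captured in a few lines of Python. The numbers below are purely illustrative, but notice that there is no intelligence anywhere in the loop; the two set points alone keep pulling the temperature back into a narrow comfortable range.

lower_set_point = 68.0
upper_set_point = 70.0
temperature = 60.0
furnace_on = False

for minute in range(180):
    if temperature < lower_set_point:
        furnace_on = True                 # the thermostat signals the furnace to turn on
    elif temperature > upper_set_point:
        furnace_on = False                # the thermostat signals the furnace to shut down
    if furnace_on:
        temperature += 0.3                # the furnace warms the house
    temperature -= 0.1                    # the house constantly loses heat to the outside

print(round(temperature, 1))              # the temperature ends up hovering near the set points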

Explosives Use Positive Feedback Loops
Explosives use positive feedback loops that do just the opposite of negative feedback loops. Positive feedback loops cause systems to explode out of control by feeding the amplified output of the loop back in as its input, causing the output to grow without bound. Explosives, like nitroglycerine, are therefore prime examples of positive feedback loops in action. When nitroglycerine detonates, the liquid nitroglycerine rapidly decomposes via the chemical reaction:

4 C3H5(NO3)3 → 12 CO2 + 10 H2O + 6 N2 + O2

Figure 2 - The nitroglycerin molecule is composed of carbon, hydrogen, nitrogen and oxygen atoms that are all weakly bound together. When nitroglycerin detonates, all these atoms become very hot gases that rapidly expand as they form molecules with higher chemical binding energies.

Figure 3 - When nitroglycerin detonates, all of its atoms turn into very hot gases at 5,000 K, which is about the temperature of the surface of the Sun.

Notice that the liquid nitroglycerine turns completely into gases at the temperature at which nitroglycerine detonates. This alone generates gases that would occupy more than 1,200 times the original volume of the liquid at normal room temperature and pressure. However, because the above chemical reaction is also very exothermic, meaning that it gives off large amounts of heat energy, the generated gases are raised to a temperature close to that of the surface of the Sun, about 5,000 K, which causes the generated gases to further greatly expand, creating a tremendous blast. When nitroglycerine detonates, chemical potential energy is quickly released because the weak bonds of the atoms in the large nitroglycerine molecules rearrange to form new smaller molecules with stronger bonds, such as N2, H2O, and CO2, releasing large quantities of chemical potential energy in the process. The nitroglycerin molecule contains plenty of its own oxygen atoms, so it does not need air to detonate. It carries its own oxidizing agent internally. Since higher temperatures make chemical reactions run even faster, when nitroglycerine detonates, the initial decomposition of a small number of molecules causes a pressure wave to form that detonates the nitroglycerine in the immediate surrounding area. The resulting self-sustained shock wave propagates through the nitroglycerine at 30 times the speed of sound, as a near instantaneous pressure-induced chemical reaction that transforms the small volume of liquid nitroglycerine into a huge quantity of white-hot gas at a temperature of 5,000 K. This is what makes nitroglycerine such a powerful explosive.
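That factor of 1,200 is easy to check with a back-of-the-envelope Python sketch. The round textbook values below - a liquid density of about 1.6 g/cm3, a molar mass of about 227 g/mol and about 24 liters per mole for a gas at normal room temperature and pressure - are approximations, and the water is counted as vapor as in the reaction above.

molar_mass = 227.0          # g/mol for C3H5(NO3)3
density = 1.6               # g/cm3 for liquid nitroglycerine
molar_volume_gas = 24000.0  # cm3/mol for an ideal gas at room temperature and pressure

liquid_volume = 4 * molar_mass / density    # the volume of 4 moles of liquid nitroglycerine, in cm3
gas_moles = 12 + 10 + 6 + 1                 # the moles of CO2, H2O, N2 and O2 produced
gas_volume = gas_moles * molar_volume_gas   # the volume of the product gases at room conditions, in cm3

print(round(gas_volume / liquid_volume))    # roughly a 1,200-fold expansion before any heating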

Figure 4 - Nitroglycerin was a very unstable and highly dangerous explosive. Notice the 19th-century warning label above that was intended for would-be criminals.
DANGER! DROP IT AND DIE!
EXPLOSIVE FOR MINING & INDUSTRIAL USE ONLY
NOT FOR USE IN BLOWING SAFE DEPOSIT BOXES OR POSTAL TRAIN CAR DOORS.

Nitroglycerin was first synthesized by the chemist Ascanio Sobrero in 1847, but its main problem was that the slightest shock could easily cause it to detonate. In 1867, Alfred Nobel came up with an improvement that vastly increased nitroglycerin's stability, and therefore its utility. By simply mixing three parts of nitroglycerin with one part of diatomaceous earth and a small amount of sodium carbonate, Nobel created a very stable solid explosive that we now call dynamite. Dynamite rapidly became very popular because it could be rolled up into convenient paper-wrapped sticks that were very easily loaded into the drill holes used for blasting rock during mining operations, and dynamite was also very resistant to accidental detonation from small shocks. In fact, in order to detonate dynamite, one must set it off with a blasting cap or a high-temperature fuse.

Figure 5 - By simply mixing three parts of nitroglycerin with one part of diatomaceous earth and a small amount of sodium carbonate, Nobel created a very stable solid explosive, which we now call dynamite.

Figure 6 - Dynamite could be rolled into paper-covered sticks that could be safely handled and easily loaded into drill holes for mining purposes.

Figure 7 - In order to detonate dynamite, one needs the detonation from a high-explosive blasting cap or a burning high-temperature fuse (click to enlarge).

Figure 8 - Negative feedback loops regulate systems, while positive feedback loops cause systems to explode (click to enlarge).

The differences between positive and negative feedback loops can be further illustrated with a few diagrams. On the left side of Figure 8, we see a system composed of a marble and a bowl in a state of stable equilibrium under the influence of a negative feedback loop. Jiggling the system with a small perturbation always results in the marble safely returning to the center of the bowl. The shape of the bowl can also be thought of as a plot of the potential energy function of the system. The combined system of the marble and the bowl has a minimum potential energy when the marble is safely in the center of the bowl, and the negative feedback loop tends to keep the marble there. On the right side of Figure 8, on the other hand, we see a similar system composed of a marble on top of an overturned bowl, which is an example of a positive feedback loop in an unstable equilibrium. Like nitroglycerine, the slightest addition of energy to the system, by perturbing it with a little jiggle, will cause the marble to roll off the overturned bowl with increasing velocity, quickly turning all of its potential energy into kinetic energy, just as the atoms in a container of nitroglycerine rapidly turn their chemical potential energy into the kinetic energy of molecular motion, also known as heat energy.
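In standard textbook notation, the two bowls in Figure 8 correspond to the two simplest possible potential energy functions. The little formulas below are a generic illustration of stable and unstable equilibrium, not something taken from Figure 8 itself:

V(x) = +(1/2) k x^2    gives the force    F = -dV/dx = -k x    (marble inside the bowl)
V(x) = -(1/2) k x^2    gives the force    F = -dV/dx = +k x    (marble on the overturned bowl)

In the first case, any small displacement x of the marble produces a force that pushes it back toward the center, which is the signature of a negative feedback loop. In the second case, any small displacement produces a force that pushes the marble even farther away, which is the signature of a positive feedback loop.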

What Alfred Nobel did was to superimpose a small negative feedback loop on top of the very large positive feedback loop of nitroglycerine, by mixing the nitroglycerine with a small amount of stabilizing diatomaceous earth and sodium carbonate. In Figure 9, we see the resulting plot of the potential energy function for dynamite, consisting of the large positive feedback loop of nitroglycerine with a small dimple of a negative feedback loop on top. Now, in order to release its vast amount of potential energy, the marble needs a quantity of activation energy to get over the hump at the top of the curve. This is why dynamite requires the shock of a blasting cap or the heat from a high-temperature fuse before it will detonate.
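This activation energy picture also ties back to the earlier remark that higher temperatures make chemical reactions run even faster. The standard Arrhenius relation from chemistry, quoted here as a generic textbook formula and not as anything measured for nitroglycerin or dynamite, says that the rate constant k of a reaction grows exponentially as the temperature T rises or as the activation energy Ea falls:

k = A e^(-Ea / RT)

where R is the gas constant and A is a constant characteristic of the reaction. A blasting cap or a high-temperature fuse supplies enough local shock and heat to push a small region of the dynamite over its activation energy hump, and the heat released there then pushes the neighboring regions over the hump as well, so the detonation becomes self-sustaining.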

Figure 9 - In 1867, Alfred Nobel discovered that by adding some diatomaceous earth to nitroglycerin, he could superimpose a small negative feedback loop on top of the much larger positive feedback loop of nitroglycerin (click to enlarge).

So How Does Diatomaceous Earth Stabilize Nitroglycerin?
Once again:

Me: How does diatomaceous earth stabilize nitroglycerin in dynamite?

ChatGPT:
Diatomaceous earth stabilizes nitroglycerin in dynamite "by physically immobilizing it", not by chemically neutralizing it. Here’s how that works, step by step:

1. What makes nitroglycerin so dangerous? Nitroglycerin is a powerful explosive because it:
- Is a "dense liquid"
- Is "highly sensitive" to shock, friction, and rapid temperature changes
- Can "detonate from small mechanical disturbances"
In its pure liquid form, even a small jolt can cause catastrophic decomposition.

2. Diatomaceous earth is:
- A soft, powdery material made of "fossilized diatom shells"
- Composed mostly of "amorphous silica"
- Extremely "porous", with microscopic cavities and channels
Those pores are the key.

3. Physical absorption and immobilization
When nitroglycerin is mixed with diatomaceous earth:
- The liquid "soaks into the microscopic pores"
- Capillary forces "hold the nitroglycerin in place"
- The explosive is "spread out as thin films", not a free-flowing liquid
This prevents:
- Sloshing
- Pooling
- Sudden pressure concentration
In effect, the nitroglycerin is “locked” inside a solid matrix.

4. Reduced sensitivity to shock and friction
Because the nitroglycerin:
- Cannot move freely
- Is distributed over a large surface area
- Experiences mechanical forces more evenly

…it becomes "much less sensitive to accidental detonation" during:
- Handling
- Transport
- Moderate impact
Importantly, the explosive power remains intact — it just becomes controllable.

5. Why it still explodes when detonated
Diatomaceous earth:
- Is "chemically inert"
- Does "not react" with nitroglycerin
- Does "not inhibit detonation once initiated"

A blasting cap provides a strong enough shock wave to:
- Overcome the physical damping
- Trigger the rapid decomposition of nitroglycerin throughout the absorbed mass
So the stabilization is "mechanical, not chemical".

6. Historical significance
This discovery by "Alfred Nobel" in the 1860s:
- Transformed nitroglycerin from a laboratory hazard into a usable industrial explosive
- Made modern mining, tunneling, and construction possible
- Directly led to the invention of "dynamite" and Nobel’s later legacy

In one sentence:
"Diatomaceous earth stabilizes nitroglycerin by absorbing it into microscopic pores, preventing sudden movement and impact sensitivity while preserving its explosive power when deliberately detonated."

Figure 10 - Diatomaceous earth is a soft absorbent powder made of diatom fossils.

Figure 11 - Diatoms are microscopic, single-celled algae found in oceans, lakes, rivers, and soils. Each diatom is encased in a rigid shell called a frustule, made of silica (SiO2). The frustule consists of two halves that fit together like a petri dish. These shells are intricately patterned with pores, ridges, and symmetry, making diatoms famous for their beauty under microscopes.

Figure 12 - A silica tetrahedron is composed of a central silicon atom surrounded by four oxygen atoms. A single silica tetrahedron has a net charge of -4.

Figure 13 - However, silica tetrahedra can form an amorphous, chemically inert 3D framework, much like quartz glass, by sharing their corner oxygen atoms with neighboring tetrahedra. This forms the very strong material that diatom shells are made from.
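The bookkeeping behind that framework is simple arithmetic. Each silicon atom sits at the center of its own tetrahedron of four oxygen atoms, but in the framework each of those oxygen atoms is shared between two neighboring tetrahedra, so each silicon can only claim half of each one:

oxygen atoms per silicon = 4 x 1/2 = 2

which is why the net chemical formula of the framework works out to SiO2, even though the individual building blocks are SiO4 tetrahedra.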

Conclusion
So instead of trying to use classical Control Theory to "align" the AI Intelligence Explosion with external reinforcement training, a better approach might be to build into the LLM models some kind of stabilizing agent, like the diatomaceous earth in dynamite, to tame the AI Intelligence Explosion into a Soft Takeoff. As for how that could be done, I have no idea. But the taming of nitroglycerin with diatomaceous earth might give some in the AI community a sense of direction.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston