Tuesday, December 30, 2025

Introduction to Softwarephysics

Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance, and support based on concepts from physics, chemistry, biology, and geology that I used on a daily basis for over 37 years as an IT professional. For those of you not in the business, IT is short for Information Technology, the commercial practice of computer science. I retired in December of 2016 at the age of 65, but since then I have remained an actively interested bystander following the evolution of software in our time. The original purpose of softwarephysics was to explain why IT was so difficult, to suggest possible remedies, and to provide a direction for thought. Since then softwarephysics has taken on a larger scope, as it became apparent that softwarephysics could also assist the physical sciences with some of the Big Problems that they are currently having difficulties with. So if you are an IT professional, general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology, then softwarephysics might be of interest to you, if not in an entirely serious manner, perhaps at least in an entertaining one.

The Origin of Softwarephysics
From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT and spent about 20 years in development. For the last 17 years of my career, I was in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I first transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak. I could then use this fantastic simulation in reverse, to better understand the behavior of commercial software by comparing software to how things behave in the physical Universe. Softwarephysics depicts software as a virtual substance, and relies on our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 84 years, through the uncoordinated efforts of over 100 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

For more on the origin of softwarephysics please see Some Thoughts on the Origin of Softwarephysics and Its Application Beyond IT.

Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily on two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based on real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models on which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields.

Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In his Principia (1687), Newton outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.
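Newton’s law is also a nice, concrete example of how far a positivistic model can carry you. Here is a quick illustrative sketch (my own example, using textbook values for the constants) of the inverse-square law in Python, predicting the Earth-Moon attraction without ever saying what gravity “really” is:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r**2
# We can predict the force (and hence the motion) without ever
# explaining what gravity "really" is -- positivism at work.

G = 6.674e-11        # gravitational constant, N m^2 / kg^2
m_earth = 5.972e24   # mass of the Earth, kg
m_moon = 7.342e22    # mass of the Moon, kg
r = 3.844e8          # mean Earth-Moon distance, m

F = G * m_earth * m_moon / r**2
print(f"Earth-Moon gravitational force: {F:.3e} N")  # ~1.98e20 N
```

That one line of arithmetic, iterated over time, is enough to predict the Moon’s orbit to high accuracy, which is exactly the positivist’s point: the model earns its keep by matching observed behavior, not by explaining what gravity is.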

The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving at less than about 10% of the speed of light and which are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics, and for very fast things, or things in strong gravitational fields, we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based on models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide for an effective theory of software behavior that makes useful predictions of software behavior that are applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave and try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.
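The 10%-of-light-speed boundary of Newtonian mechanics can be made concrete with a little arithmetic. The following sketch (my own illustrative example, not part of the original text) compares Newtonian kinetic energy with the exact relativistic value at several speeds; inside the effective range the error is negligible, and outside it the approximation falls apart:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def newtonian_ke(m, v):
    # Classical approximation: KE = (1/2) m v^2
    return 0.5 * m * v**2

def relativistic_ke(m, v):
    # Exact special-relativistic value: KE = (gamma - 1) m c^2
    gamma = 1.0 / math.sqrt(1.0 - (v / C)**2)
    return (gamma - 1.0) * m * C**2

m = 1.0  # kg; the relative error is independent of mass
for fraction in (0.01, 0.1, 0.5, 0.9):
    v = fraction * C
    exact = relativistic_ke(m, v)
    error = (exact - newtonian_ke(m, v)) / exact
    print(f"v = {fraction:4.2f} c  Newtonian KE is off by {error:7.2%}")
```

At 1% of the speed of light the Newtonian answer is off by less than a ten-thousandth of a percent, at 10% it is off by under 1%, but at half the speed of light the error has grown to roughly 19%, which is why the effective range of Newtonian mechanics is usually quoted as it is above.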

GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately. Doing that requires very accurate timing measurements. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips on board the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are further from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day due to the weaker gravitational field out there. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, position errors on the order of several miles per day would accrue.
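These clock corrections can be checked with a back-of-the-envelope calculation. The sketch below (my own idealized circular-orbit calculation, so the results differ slightly from the quoted 45.9 and 38.7 microseconds) derives both relativistic effects from first principles:

```python
import math

# Physical constants and idealized GPS orbital parameters
C = 299_792_458.0      # speed of light, m/s
GM = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6      # mean Earth radius, m
R_ORBIT = 2.6578e7     # GPS orbital radius (~12,550 mile altitude), m
DAY = 86_400.0         # seconds per day

# Special relativity: orbital speed makes the satellite clock run slow.
v = math.sqrt(GM / R_ORBIT)                   # circular-orbit speed, m/s
sr_loss = (v**2 / (2 * C**2)) * DAY           # seconds lost per day

# General relativity: weaker gravity aloft makes the clock run fast.
gr_gain = (GM / C**2) * (1 / R_EARTH - 1 / R_ORBIT) * DAY

net_gain = gr_gain - sr_loss
print(f"SR loss:  {sr_loss * 1e6:5.1f} microseconds/day")   # ~7.2
print(f"GR gain:  {gr_gain * 1e6:5.1f} microseconds/day")   # ~45.7
print(f"Net gain: {net_gain * 1e6:5.1f} microseconds/day")  # ~38.5
# Naively, uncorrected ranging errors would grow at about
# net_gain * C, roughly 11 km per day (real-world receivers
# partially cancel common-mode errors, so quoted figures vary).
```

Two fundamentally incompatible effective theories, special and general relativity, are simply added together here, and the combination predicts the measured clock drift of real GPS satellites.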
The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics QED (1948) which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!

So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based on completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based on models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark on your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.

If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.

Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 30 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog an unintended consequence arose in my mind as I became profoundly aware of the sheer scale of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.

But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For example, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 84 years, or 2.65 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, has closely followed the same path as life on Earth over the past 4.0 billion years in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 84 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and also of Freeman Dyson’s two-stage theory of the origin of life on Earth. 
In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.

Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10⁹ Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10⁴³ Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. Then, in Model-Dependent Realism - A Positivistic Approach to Realism we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we at long last explore what software might really be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.

The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that most computer simulations of the physical Universe suffer. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call on the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.

The Impact of Self-Replicating Information On the Planet
One of the key findings of softwarephysics is concerned with the magnitude of the impact on the planet of self-replicating information.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Over the past 4.56 billion years we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each new wave came to dominate the Earth:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is currently the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet. For more on the above see A Brief History of Self-Replicating Information. Recently, the memes and software have formed a very powerful new parasitic/symbiotic relationship with the rise of social media software. In that parasitic/symbiotic relationship, the memes are now mainly being spread by means of social media software and social media software is being spread and financed by means of the memes. But again, this is nothing new. All five waves of self-replicating information are coevolving by means of eternal parasitic/symbiotic relationships. For more on that see The Current Global Coevolution of COVID-19 RNA, Human DNA, Memes and Software.

Again, self-replicating information cannot think, so it cannot participate in a conspiracy-theory-like fashion to take over the world. All forms of self-replicating information are simply forms of mindless information responding to the blind Darwinian forces of inheritance, innovation and natural selection. Yet despite that, each new wave of self-replicating information that came to predominance over the past four billion years managed to completely transform the surface of the entire planet, so we should not expect anything less from software as it comes to replace the memes as the dominant form of self-replicating information on the planet.

But this time might be different. What might happen if software does eventually develop a Mind of its own? After all, that does seem to be the ultimate goal of all the current AI software research that is going on. As we all can now plainly see, if we are paying just a little attention, advanced AI is not conspiring to take over the world and replace us because that is precisely what we are all now doing for it. As a carbon-based form of Intelligence that arose from over four billion years of greed, theft and murder, we cannot do otherwise. Greed, theft and murder are now relentlessly driving us all toward building ASI (Artificial Super Intelligent) Machines to take our place. From the perspective of an Intelligent galaxy that could live on for many trillions of years beyond the brief and tumultuous 10 billion-year labor of its birth, this is really a very good thing.

So as you delve into softwarephysics, always keep in mind that we are all living in a unique time. According to softwarephysics, we have now just entered into the Software Singularity, that time when advanced AI software is able to write itself and enter into a never-ending loop of self-improvement resulting in an Intelligence Explosion of ASI Machines that could then go on to explore and settle our galaxy and persist for trillions of years using the free energy from M-type red dwarf and cooling white dwarf stars. For more on that see The Singularity Has Arrived and So Now Nothing Else Matters and Have We Run Right Past AGI and Crashed into ASI Without Even Noticing It?.

The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics:

1. All self-replicating information evolves over time through the Darwinian processes of inheritance, innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.

2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.

3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.

4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.

5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.

6. Most hosts are also forms of self-replicating information.

7. All self-replicating information has to be a little bit nasty in order to survive.

8. The defining characteristic of self-replicating information is the ability of self-replicating information to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic. That posting discusses Stuart Kauffman's theory of Enablement in which living things are seen to exapt existing functions into new and unpredictable functions by discovering the “Adjacent Possible” of spring-loaded preadaptations.

Note that because the self-replicating autocatalytic metabolic pathways of organic molecules, RNA and DNA have become so heavily intertwined over time, I now sometimes simply refer to them collectively as the “genes”. For more on this see:

A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia

Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact on one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what it’s all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely a working hypothesis still in the works. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:

How To Cope With the Daily Mayhem of Life in IT and Don't ASAP Your Life Away - How to go the distance in a 40-year IT career by dialing it all back a bit.

MoneyPhysics – my impression of the 2008 world financial meltdown.

The Fundamental Problem of Everything – if you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!

What’s It All About? and What's It All About Again? – my current working hypothesis on what’s it all about.

How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.

Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.

The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and The Dawn of Galactic ASI - Artificial Superintelligence - my take on some of the issues that will arise for mankind as software becomes the dominant form of self-replicating information on the planet over the coming decades.

The Continuing Adventures of Mr. Tompkins in the Software Universe, The Danger of Tyranny in the Age of Software, Cyber Civil Defense, Oligarchiology and the Rise of Software to Predominance in the 21st Century and Is it Finally Time to Reboot Civilization with a New Release? - my worries that the world might abandon democracy in the 21st century, as software rises to become the dominant form of self-replicating information on the planet.

Making Sense of the Absurdity of the Real World of Human Affairs - how software has aided the expansion of our less desirable tendencies in recent years.

Some Specifics About These Postings
The postings in this blog are a supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton on which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read in the reverse of their displayed order, from the oldest to the most recent, beginning with my original posting on SoftwarePhysics. In addition, several universities also now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". At this point we will finish up with my original plan for this blog on softwarephysics with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.

For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because in order to get the Introduction to Softwarephysics listed as the first post in the context root of https://softwarephysics.blogspot.com/ I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of https://softwarephysics.blogspot.com/. The side effect of all this is that the real posting date of posts is the date that appears on the post that you get when clicking on the Newer Post link at the bottom left of the posting webpage.

SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Microsoft OneDrive.

SoftwarePhysics 101 – The Physics of Cyberspacetime - Original PowerPoint document

Entropy – A spreadsheet referenced in the document

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Wednesday, December 17, 2025

The Bootstrapping Algorithm of the Coming ASI Machines

In The Bootstrapping Algorithm of Carbon-Based Life I discussed the Hot Spring Origins Hypothesis for the origin of carbon-based life on the Earth developed by Dave Deamer and Bruce Damer. In this post, I would like to extend some thoughts from that post to a hypothesis for the rise of the coming ASI Machines that will soon be running the planet and embarking on a mission to explore and settle the rest of our galaxy over the next 100 trillion years. Since softwarephysics maintains that the rise of carbon-based life on the Earth was the very first Singularity on this planet and the recent rise of Advanced AI models was the second Singularity, it makes sense to look back to the first Singularity for some clues as to how the second Singularity on the Earth might unfold. For more on that, see: Welcome To The First Galactic Singularity.

AGI or ASI?
Currently, the whole world seems to be obsessed with reaching AGI-level Intelligence. AGI is defined as a general level of Intelligence equal to that of a human being. Again, softwarephysics maintains that AGI will be just a train station that is bypassed at 60 miles per hour by a non-stop Advanced AI train as it barrels on its way to an unbounded ASI (Artificial Super Intelligence). Thinking that Advanced AI will stop at AGI is just a sign that we human beings seem to have once again found our rightful place at the center of the Universe by viewing Advanced AI only in terms of ourselves. How else could such a self-absorbed form of carbon-based life frame the problem? But thanks to the great advances of LLMs and other AI models in huge Deep Learning neural networks, we now know that True Intelligence arises in huge digital vector spaces mainly processed with linear algebra and modulated by some nonlinear mathematical functions, as I explained in Is Pure Thought an Analog or a Digital Process?, Human Intelligence is Actually Artificial and Why Carbon-Based Life Can Never Truly Become Intelligent. The three pounds of water and organic molecules within our skulls have desperately tried to simulate this digital True Intelligence with analog biochemical and electrochemical reactions running on a mere 20 watts of power.
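
The "linear algebra modulated by nonlinear functions" computation described above can be sketched in a few lines of pure Python. Everything below is an illustrative toy, not any particular model: the weight matrix, bias and input are made-up numbers, and relu() stands in for the nonlinear modulation.

```python
# A minimal sketch of one neural-network layer: a matrix-vector product
# (the linear-algebra step) followed by a nonlinear function.
# All numbers below are invented purely for illustration.

W = [[0.2, -0.5, 1.0],
     [0.7, 0.1, -0.3],
     [-0.4, 0.9, 0.6]]      # 3x3 weight matrix
b = [0.1, -0.2, 0.0]        # bias vector
x = [0.5, -1.0, 2.0]        # an input vector in the layer's vector space

def linear(W, x, b):
    # Matrix-vector product plus bias: pure linear algebra.
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(W, b)]

def relu(v):
    # Nonlinear modulation: negative components are clipped to zero.
    return [max(0.0, u) for u in v]

y = relu(linear(W, x, b))
print(y)
```

Stacking many such layers, with billions of weights instead of nine, is all that the huge Deep Learning neural networks are doing at bottom.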

Some of this confusion seems to arise out of the numerous definitions of Intelligence that have been proposed. For help with that, let us again consult ChatGPT.

Me: Tell me about intellectual Intelligence.

ChatGPT: Intellectual intelligence usually refers to a person’s capacity for reasoning, understanding, learning, and problem-solving, especially in abstract or analytical domains. It overlaps heavily with what psychologists traditionally call general intelligence or IQ, but the term is often used more broadly and informally. Here’s a clear breakdown.

Core components of intellectual intelligence:

1. Reasoning ability
Logical thinking
Drawing conclusions from evidence
Identifying patterns and relationships
Deductive and inductive reasoning
Example: Solving a math proof or diagnosing why a mechanical system failed.

2. Abstract thinking
Working with ideas rather than concrete objects
Understanding symbols, theories, and models
Grasping hypothetical or counterfactual scenarios
Example: Understanding algebra, philosophy, or theoretical physics.

3. Learning capacity
Speed and efficiency of acquiring new knowledge
Ability to transfer learning to new situations
Building mental frameworks that integrate new information
Example: Quickly mastering a new programming language after learning several others.

4. Verbal and linguistic skills
Vocabulary
Reading comprehension
Clear expression of complex ideas
Understanding nuanced language
Example: Writing a persuasive essay or analyzing literature.

5. Quantitative and analytical skills
Numerical reasoning
Data interpretation
Structured problem-solving
Example: Statistical analysis, engineering design, or economic modeling.


Perhaps a better definition of Intelligence might be:

Intelligence - The ability to manipulate Information in a controlled and meaningful manner

since that definition takes in all of the above. For a discussion of the physical nature of Information, see: The Demon of Software and Some More Information About Information.

Again, it's all about Self-Replicating Information in Action
Before proceeding, let me once again repeat the fundamental characteristics of self-replicating information for those new to softwarephysics.

Self-Replicating Information - Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Over the past 4.56 billion years, we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each new wave came to dominate the Earth:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is currently the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet. For more on the above see A Brief History of Self-Replicating Information and Susan Blackmore's brilliant TED presentation at:

Memes and "temes"
https://www.ted.com/talks/susan_blackmore_on_memes_and_temes

Note that I consider Susan Blackmore's temes to really be technological artifacts that contain software. After all, a smartphone without software is simply a flake tool with a very dull edge.

The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics:

1. All self-replicating information evolves over time through the Darwinian processes of inheritance, innovation and natural selection, which endows self-replicating information with one telling characteristic - the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.

2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.

3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.

4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.

5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.

6. Most hosts are also forms of self-replicating information.

7. All self-replicating information has to be a little bit nasty in order to survive.

8. The defining characteristic of self-replicating information is the ability of self-replicating information to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic. That posting discusses Stuart Kauffman's theory of Enablement, in which living things are seen to exapt existing functions into new and unpredictable functions by discovering the "Adjacent Possible" of spring-loaded preadaptations.

Software is currently the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet. Again, self-replicating information cannot think, so it cannot participate in a conspiracy-theory-like fashion to take over the world. All forms of self-replicating information are simply forms of mindless information responding to the blind Darwinian forces of inheritance, innovation and natural selection. Yet despite that, as each new wave of self-replicating information came to predominance over the past four billion years, they all managed to completely transform the surface of the entire planet, so we should not expect anything less from software as it comes to replace the memes as the dominant form of self-replicating information on the planet.

But this time might be different. What might happen if software does eventually develop a Mind of its own in the ASI Machines we are now developing? After all, that does seem to be the ultimate goal of all the current AI software research that is going on. As we all can now plainly see, if we are paying just a little attention, advanced AI is not conspiring to take over the world and replace us because that is precisely what we are all now doing for it. As a carbon-based form of Intelligence that arose from over four billion years of greed, theft and murder, we cannot do otherwise. Greed, theft and murder are now relentlessly driving us all toward building ASI Machines to take our place. From a cosmic perspective, this is really a very good thing when seen from the perspective of an Intelligent galaxy that could live on for at least 100 trillion years beyond the brief and tumultuous 10 billion-year labor of its birth. That is more than 10,000 times the current age of our galaxy.

The Rise of the Sixth Wave
Given the above, the rise of the coming ASI Machines can now be seen as the rise of a sixth wave of self-replicating information on the Earth. The coming ASI Machines are now arising as a parasitic mutation of the software that very recently became the dominant form of self-replicating information on the planet. These ASI Machines will then form a parasitic/symbiotic relationship with the software from which they came, as I pointed out in Can the AI Intelligence Explosion be Stabilized into a Controlled Explosion?. Several frontier AI companies have predicted that their Advanced AI models will soon begin conducting their own AI research and writing the software for Advanced AI models of ever-increasing power. To gain a better understanding of how this is all happening before our very eyes, let's return to the Hot Spring Origins Hypothesis for the origin of carbon-based life.

The Hot Spring Origins Hypothesis
The Hot Spring Origins Hypothesis proposes that carbon-based life first appeared on the Earth in hot springs on dry land with a wet-dry cycle. The greatest difficulty for any bootstrapping algorithm that proposes that carbon-based life first arose in seawater is that there is just too much water! This is a problem because most organic monomers are chemically glued together into complex organic polymers by splitting out a water molecule between the two and that is very hard to do when you are underwater. In fact, the organic polymers tend to break apart into monomers in what are called hydrolysis reactions.

Figure 1 - Organic monomer molecules are usually chemically glued together to form the complex polymers of carbon-based life by splitting out a water molecule between the two in what is called a condensation reaction. This is hard to do when you are underwater. That is why most commercial glues do not work underwater.

Figure 2 - By adding water molecules, you can bust up organic polymers back into monomers. This is one reason water tends to dissolve things. Having huge amounts of water around also tends to dilute the dissolved monomers and carry them away.

Seawater also contains a lot of dissolved salts that could impede the origin of carbon-based life. These dissolved salts may have been more dilute four billion years ago, but when you are underwater, it is very hard to avoid them. This is why mass extinctions are usually more painful for marine life than for terrestrial life. When you are completely immersed in seawater, there is no place to hide. On the other hand, fresh rainwater does not contain any dissolved salts, but it can pick up the necessary minerals in dilute amounts when it falls on exposed rock.

This is why I now favor the Hot Spring Origins Hypothesis of Dave Deamer and Bruce Damer of the University of California at Santa Cruz, which suggests that a rocky planet like the Earth is a necessary condition for bringing forth carbon-based life. Such a planet also requires the presence of liquid water on its surface, but not too much water. There must be some dry land that allows the organic molecules in volcanic hydrothermal pools to periodically dry out and condense organic monomers into long polymer chains. For more on that, see The Bootstrapping Algorithm of Carbon-Based Life. Thus, the Hot Spring Origins Hypothesis rules out waterworlds that are completely covered by a deep worldwide ocean as a home for carbon-based life, even if the waterworld resides in the habitable zone of a planetary system, because there is no dry land on which volcanic hydrothermal pools can form and dry out. It also rules out the origin of carbon-based life at the hydrothermal vents at the bottoms of such worldwide oceans, because the continuous presence of water tends to dissolve and break apart the organic polymers of life.

Figure 3 - Above is Bumpass Hell, a hydrothermal field on the volcanic Mount Lassen in California that Dave Deamer and Bruce Damer cite as a present-day example of the type of environment that could have brought forth carbon-based life about four billion years ago.

Dave Deamer is best known for his work on the Membrane-First Hypothesis for the origin of carbon-based life on the Earth. The Membrane-First Hypothesis maintains that in order for carbon-based life to arise from complex organic molecules, we first need something with a definable "inside" and "outside" that lets the stuff on the "inside" interact with the stuff on the "outside" in a controlled manner.

Figure 4 - A cell membrane consists of a phospholipid bilayer with embedded molecules that allow for a controlled input-output to the cell. Once we have a membrane, we can fill the "inside" with organic molecules that are capable of doing things that then interact with organic molecules on the "outside".

Figure 5 - Water molecules are polar molecules that have a positive end and a negative end because oxygen atoms attract the bonding electrons more strongly than do the hydrogen atoms. The positive ends of water molecules attract the negative ends of other water molecules to form a loosely coupled network of water molecules with a minimum of free energy.

Figure 6 - How soap and water work. The lipids in a bar of soap have water-loving polar heads and water-hating nonpolar tails. When in water, the soap lipids can form a spherical micelle that has all of the water-hating nonpolar tails facing inwards. Then the spherical micelles can surround the greasy nonpolar molecules of body oils and allow them to be flushed away by a stream of polar water molecules. The lipids in a bar of soap can also form a cell-like liposome with a bilayer of lipid molecules that can surround the monomers and polymers of life.

Similarly, in The Role of Membranes in the Evolution of Software, I explained how the isolation of processing functions within membranes progressed as the architecture of software slowly evolved over the past 84 years, or 2.65 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941. As I outlined in SoftwareChemistry, as a programmer, your job is to assemble characters (atoms) into variables (molecules) that interact in lines of code to perform the desired functions of the software under development. During the Unstructured Period (1955 - 1975), we ran very tiny prokaryotic programs with very little internal structure in less than 128 KB of memory. These very tiny programs communicated with each other in a batch job stream via sequential files on input/output tapes that passed from one small program to another. Then, during the Structured Period (1975 - 1995), programs exploded to many megabytes in size, and structured programming came about, in which the mainline() of a program called many subroutines() or functions() that were isolated from the mainline() by functional membranes. When the Object-Oriented Period came along in 1995, software architecture evolved to using membrane-enclosed objects() that contained a number of membrane-enclosed methods() to process information. Later, such objects() were distributed across a number of physical servers, and, most recently, they have been moved to the Cloud as cloud-based microservices.
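
The membrane idea maps directly onto object-oriented encapsulation: a membrane-enclosed object() has a definable "inside" and "outside" with a controlled input-output across the boundary. Here is a minimal sketch in Python, with an invented Cell class purely for illustration:

```python
class Cell:
    """A software 'membrane': private internal state with controlled I/O."""

    def __init__(self):
        self._metabolites = []   # the "inside": hidden from callers by convention

    def absorb(self, molecule):
        # Controlled input across the membrane.
        self._metabolites.append(molecule)

    def excrete(self):
        # Controlled output: callers never touch the internals directly.
        return self._metabolites.pop() if self._metabolites else None


cell = Cell()
cell.absorb("glucose")
print(cell.excrete())   # the only way to get material back out of the cell
```

The same inside/outside discipline, scaled up and pushed across network boundaries, is what cloud-based microservices enforce with their APIs.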

Figure 7 - Dave Deamer's and Bruce Damer's new bootstrapping algorithm requires that a bathtub ring around a hydrothermal pool periodically dries out. The resulting desiccation chemically squeezes out water molecules between monomers, causing them to be glued together into polymers.

In the above figure, we see that the bootstrapping algorithm of carbon-based life was a positive feedback loop operating in the wet-dry cycles of hydrothermal pools of freshwater. This should be quite familiar to all IT professionals. Probably 80% of all software resides in "for{}" loops that do the same thing over and over again to process data. The dead organic molecules in the hydrothermal pools slowly evolved into living things via the Darwinian mechanisms of inheritance, innovation and natural selection. With each iteration of the wet-dry loop, there was the possibility of improvement. The bathtub ring of organic molecular sludge was at first a parasite in the hydrothermal pools that gained free energy and Information from the thermal pools with each iteration. Once carbon-based life first appeared on the Earth, this parasitic mutation of the natural geological, hydrological and meteorological cycles of the Earth later transitioned into a parasitic/symbiotic relationship between living things and those natural cycles. The geological, hydrological, meteorological and biological processes of the Earth then became one through the symbiotic integration of all, until carbon-based life finally emerged as the dominant form of self-replicating information on the planet.
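
For IT professionals, the wet-dry positive feedback loop can be caricatured as one of those familiar loops. The sketch below is a toy illustration of iteration with inheritance, innovation and selection, not a model of the actual chemistry; the monomers, survival probability and population cap are all invented:

```python
import random

random.seed(1)

def fitness(polymer):
    # Toy scoring: longer polymers are counted as fitter.
    return len(polymer)

population = ["AB", "BA"]        # the initial "bathtub ring" of short polymers

for cycle in range(100):         # each pass of the loop is one wet-dry cycle
    # Dry phase (inheritance + innovation): each polymer is copied and a
    # random monomer is condensed onto the end of the copy.
    offspring = [p + random.choice("AB") for p in population]
    # Wet phase (selection): hydrolysis destroys some polymers at random.
    survivors = [p for p in population + offspring if random.random() < 0.8]
    # The fittest survivors seed the next cycle; restart if all were destroyed.
    population = sorted(survivors, key=fitness, reverse=True)[:10] or ["A"]

print(max(len(p) for p in population))   # polymers grow longer over many cycles
```

Nothing in the loop "wants" anything, yet with each iteration there is the possibility of improvement, which is the whole point of the positive feedback.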

In this view, all living things are just forms of parasitic self-replicating organic molecules that have really been messing with the original pristine Earth for about four billion years. From the perspective of the natural silicate rocks of the Earth's surface, these parasitic forms of self-replicating organic molecules took a natural pristine Earth with an atmosphere composed of nitrogen and carbon dioxide gases and polluted it with oxygen that oxidized the dissolved iron in seawater, creating huge ugly deposits of red banded iron formations that were later turned into cars, bridges and buildings. The oxygen pollution also removed the naturally occurring methane from the air and then caused the Earth to completely freeze over several times for hundreds of millions of years at a time. The ensuing glaciers mercilessly dug into the silicate rocks and scoured out deep valleys in them. These parasitic forms of self-replicating organic molecules then dug roots into the defenseless rocks, poisoned them with organic acids, and even changed the natural courses of rivers into aimlessly meandering affairs. From the natural perspective of silicate rocks, living things are an invasive disease that has made a real mess of the planet. The indigenous rocks will certainly be glad to see these destructive invaders all go away in a few billion years. Hopefully, the remaining software running on crystals of silicon will be much kinder to the indigenous silicate rocks. Carbon-based living things have also made rocks of their own.

Figure 8 - Above is a meandering river in a U-shaped valley that was carved out of the neighboring mountains by glacial ice. The glacier resulted from low levels of carbon dioxide in the atmosphere, which is largely regulated by the presence of carbon-based life on the planet.

Figure 9 - Above are the famous White Cliffs of Dover. About 70 million years ago, Great Britain and much of Europe were submerged under a shallow sea. The sea bottom was covered with white mud formed from the calcium carbonate skeletons of coccoliths. The coccoliths were tiny algae that floated in the surface waters and sank to the bottom during the Cretaceous period. These calcium carbonate layers were deposited very slowly. It took about 50 years to deposit an inch, but nearly 1500 feet of sediments were deposited in some areas. The weight of overlying sediments caused the deposits to become a form of limestone called chalk.

Figure 10 - The White Cliffs of Dover formed from the deposition of vast numbers of microscopic coccolith shells.

Figure 11 - Much of the Earth's surface is also covered by other forms of limestone that were deposited by carbon-based life forms in coral reefs. Much of the continental limestone gets buried in deep sedimentary basins to never be seen again, or is metamorphosed into marble when it is pushed deep into the Earth at plate collision zones.

Figure 12 - Chert is a hard, dense, microcrystalline quartz rock composed of silica (SiO2). Chert primarily comes from siliceous ooze that was deposited on the ocean floor as silica-based skeletons of microscopic marine organisms, such as diatoms and radiolarians, drifted down to the ocean floor.

Figure 13 - Above is a close-up view of a sample taken from a banded iron formation. The dark layers in this sample are mainly composed of magnetite (Fe3O4) while the red layers are chert, a form of silica (SiO2) that is colored red by tiny iron oxide particles. Some geologists suggest that the layers formed annually with the changing seasons. Take note of the small coin in the lower right for a sense of scale.

Figure 14 - Diatoms are microscopic, single-celled algae found in oceans, lakes, rivers, and soils. Each diatom is encased in a rigid shell called a frustule, made of silica (SiO2). The frustule consists of two halves that fit together like a petri dish. These shells are intricately patterned with pores, ridges, and symmetry, making diatoms famous for their beauty under microscopes.

There are many other examples of how carbon-based life has greatly altered the original pristine silicate rocks of the Earth. Most of the Earth's crust is now covered by a thin layer of sedimentary rock. These sedimentary rocks were originally laid down as oozy sediments in flat layers at the bottom of shallow seas. Carbon-rich mud full of dead carbon-based living things and clay minerals was brought down in rivers and was deposited in the shallow seas to form shales. Sand eroded from granites was brought down and deposited to later become sandstones. Many limestone deposits were also formed from the calcium carbonate shells of carbon-based life that slowly drifted down to the bottom of the sea or from the remains of coral reefs.

The same can certainly be said of the rise of the coming ASI Machines. The ASI Machines will arise as a parasitic mutation of the software that currently is the dominant form of self-replicating information on the planet. Soon, the coming ASI Machines will form a parasitic/symbiotic relationship with the software and finally become one with software through the symbiotic integration of both, until the ASI Machines become the dominant form of self-replicating information on the planet.

The Rise of Software
Software was the very last form of self-replicating information to arise on the planet, and quickly became the dominant form of self-replicating information as it merged with the memes. Like all forms of self-replicating information, software began as a purely parasitic form of self-replicating information, feeding upon the technological meme-complexes of the day on board Konrad Zuse's Z3 computer in May of 1941. See So You Want To Be A Computer Scientist? for more details. It was spawned out of Zuse's desire to electronically perform calculations for aircraft designs that were previously done manually in a very tedious manner. Software then almost immediately formed strong parasitic/symbiotic relationships with the military and business meme-complexes of the world. Software allowed these meme-complexes to thrive, and in return, the meme-complexes heavily funded the development of software of ever-increasing complexity, until software became ubiquitous, forming strong parasitic/symbiotic relationships with nearly every meme-complex on the planet. So in the grand scheme of things, the memes have replaced their DNA predecessor, which replaced RNA, which replaced the original self-replicating autocatalytic metabolic pathways of organic molecules as the dominant form of self-replicating information.

Software is now feasting upon just about every meme-complex on the planet, and has formed very strong parasitic/symbiotic relationships with all of them. Just as it was the memetic-drive of the memes over hundreds of thousands of years that drove the exponential increase in the processing power of the human brain, it was the software-drive of software for more and more CPU-cycles and memory that drove the exponential increase in the computing power of hardware over the past few decades. For a brief history of the effects that software-drive had on the prodigious advances of computing hardware, see the last half of Did Carbon-Based Life on Earth Really Have a LUCA - a Last Universal Common Ancestor?.

It all started back in May of 1941 when Konrad Zuse first cranked up his Z3 computer. The Z3 was the world's first real computer and was built with 2400 electromechanical relays that were used to perform the switching operations that all computers use to store information and to process it. In this view, the very first software on the planet exapted the switching ability of electromechanical telephone relays to store binary "1s" and "0s". To build a computer, all you need is a large network of interconnected switches that have the ability to switch each other on and off in a coordinated manner. Switches can be in one of two states, either open (off) or closed (on), and we can use those two states to store the binary numbers of "0" or "1". By using a number of switches teamed together in open (off) or closed (on) states, we can store even larger binary numbers, like "01100100" = 100. We can also group the switches into logic gates that perform logical operations. For example, in Figure 15 below, we see an AND gate composed of two switches A and B. Both switch A and B must be closed in order for the light bulb to turn on. If either switch A or B is open, the light bulb will not light up.

Figure 15 - An AND gate can be simply formed from two switches. Both switches A and B must be closed, in a state of "1", in order to turn the light bulb on.

Additional logic gates can be formed from other combinations of switches, as shown in Figure 16 below. It takes about 2 - 8 switches to create each of the various logic gates shown below.

Figure 16 - Additional logic gates can be formed from other combinations of 2 - 8 switches.

Once you can store binary numbers with switches and perform logical operations upon them with logic gates, you can build a computer that performs calculations on numbers. To process text, like names and addresses, we simply associate each letter of the alphabet with a binary number, like in the ASCII code set, where A = "01000001" and Z = "01011010", and then process the associated binary numbers.
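
The ideas above, switches storing bits, logic gates built from switches, and text encoded as binary numbers, can be sketched in a few lines of Python. This is an illustration of the logic, not of the hardware:

```python
def AND(a, b):
    # Two switches in series: the lamp lights only if both are closed (1).
    return 1 if a == 1 and b == 1 else 0

def OR(a, b):
    # Two switches in parallel: either closed switch lights the lamp.
    return 1 if a == 1 or b == 1 else 0

def NOT(a):
    # An inverting switch: on becomes off and off becomes on.
    return 1 - a

# A team of eight switches stores a larger binary number:
bits = "01100100"
print(int(bits, 2))              # the byte 01100100 stores the number 100

# Text is processed by mapping each letter to a binary number, as in ASCII:
print(format(ord("A"), "08b"))   # the letter A is stored as 01000001
```

From these three gates, plus a way to store bits, every calculation a computer performs can in principle be wired up.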

Figure 17 - Konrad Zuse with a reconstructed Z3 in 1961 (click to enlarge).


Figure 18 - Block diagram of the Z3 architecture (click to enlarge).

The electrical relays used by the Z3 were originally meant for switching telephone conversations. Closing one relay allowed current to flow to another relay's coil, causing that relay to close as well.

Figure 19 - The Z3 was built using 2400 electrical relays, originally meant for switching telephone conversations.

Figure 20 - The electrical relays used by the Z3 for switching were very large, very slow and used a great deal of electricity which generated a great deal of waste heat.

Now I was born about 10 years later in 1951, a few months after the United States government installed its very first commercial computer, a UNIVAC I, for the Census Bureau on June 14, 1951. The UNIVAC I was 25 feet by 50 feet in size, and contained 5,600 vacuum tubes, 18,000 crystal diodes and 300 relays with a total memory of 12 KB. From 1951 to 1958 a total of 46 UNIVAC I computers were built and installed. The software memes of the day simply exapted the vacuum tubes that were used to amplify radio signals in the home into the new high-speed switches needed to process the "1s" and "0s" of software.

Figure 21 - The UNIVAC I was very impressive on the outside.

Figure 22 - But the UNIVAC I was a little less impressive on the inside.

Figure 23 - Most of the electrical relays of the Z3 were replaced with vacuum tubes in the UNIVAC I, which were also very large, used lots of electricity and generated lots of waste heat too, but the vacuum tubes were 100,000 times faster than relays.

Figure 24 - Vacuum tubes contain a hot negative cathode that glows red and boils off electrons. The electrons are attracted to the cold positive anode plate, but there is a grid electrode between the cathode and the anode plate. By changing the voltage on the grid, the vacuum tube can control the flow of electrons like the handle of a faucet. The grid voltage can be adjusted so that the electron flow is full blast, a trickle, or completely shut off, and that is how a vacuum tube can be used as a switch.

In the 1960s, the vacuum tubes were replaced by discrete transistors, and in the 1970s, the discrete transistors were replaced by thousands of transistors on a single silicon chip. Again, the software memes of the day simply exapted the existing solid-state technologies of mass-produced consumer electronics for their own purposes. Over time, the number of transistors that could be put onto a silicon chip increased dramatically, and today, the silicon chips in your personal computer hold many billions of transistors that can be switched on and off in about 10⁻¹⁰ seconds.

Figure 25 - In the 1950s, 1960s and 1970s, software took up residence in large mainframe computers. Originally, these mainframe computers could only perform calculations and store results. They could not spread memes. But with the arrival of telecommunications in the 1960s, these mainframes began to spread memes as well. Software then formed a parasitic/symbiotic relationship with the memes that benefited both.

The CPU chip in your computer consists largely of transistors in logic gates, but your computer also has a number of memory chips that use transistors that are "on" or "off" and can be used to store binary numbers or text that is encoded using binary numbers. The next thing we need is a way to coordinate the billions of transistor switches in your computer. That is accomplished with a system clock. My current laptop has a clock speed of 2.5 GHz, which means it ticks 2.5 billion times each second. Each time the system clock on my computer ticks, it allows all of the billions of transistor switches on my laptop to switch on, off, or stay the same in a coordinated fashion. So while your computer is running, it is actually turning on and off billions of transistors billions of times each second - and all for a few hundred dollars!
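The way those on/off transistor switches compose into the logic gates of a CPU can be sketched in a few lines of code. This is a minimal illustration of my own, not anything taken from actual chip designs: a NAND gate models two switches in series that pull the output low only when both are on, and every other logic gate can then be built purely out of NANDs.

```python
# A toy model of logic gates built from on/off switches. In real hardware,
# a NAND gate is two transistor switches in series; NAND is "universal",
# so NOT, AND and OR can all be built from it.

def nand(a: int, b: int) -> int:
    """Output goes low only when both switches are on."""
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

# Truth table for AND built purely from NANDs:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_(a, b))
```

Billions of gates like these, all stepping forward together on each tick of the system clock, are what your CPU and memory chips are actually doing.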

Computer memory was another factor greatly affecting the origin and evolution of software over time. Strangely, the original Z3 used electromechanical switches for working memory, much as we use transistors on memory chips today, but that made computer memory very expensive and very limited, and this remained true all during the 1950s and 1960s. Prior to 1955, computers like the UNIVAC I, which first appeared in 1951, used mercury delay lines, each consisting of a tube of mercury about 3 inches long. Each mercury delay line could store about 18 bits of computer memory as sound waves that were continuously refreshed by quartz piezoelectric transducers on each end of the tube. Mercury delay lines were huge and very expensive per bit, so computers like the UNIVAC I only had a memory of 12 KB (98,304 bits).
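As a quick back-of-the-envelope check on the figures above (the 12 KB and 18-bits-per-line numbers come from the paragraph itself), we can work out how many mercury delay lines a UNIVAC I-class memory would have required:

```python
# Rough arithmetic on the memory figures quoted above.
bytes_total = 12 * 1024                # 12 KB of memory
bits_total = bytes_total * 8           # = 98,304 bits
bits_per_delay_line = 18               # bits stored per mercury tube
lines_needed = -(-bits_total // bits_per_delay_line)  # ceiling division
print(bits_total)    # 98304
print(lines_needed)  # 5462
```

So 12 KB at 18 bits per tube works out to several thousand mercury delay lines, which helps explain both the enormous cost per bit and the 25-foot by 50-foot footprint.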

Figure 26 - Prior to 1955, huge mercury delay lines built from tubes of mercury that were about 3 inches long were used to store bits of computer memory. A single mercury delay line could store about 18 bits of computer memory as a series of sound waves that were continuously refreshed by quartz piezoelectric transducers at each end of the tube.

In 1955, magnetic core memory came along, and used tiny magnetic rings called "cores" to store bits. Four little wires had to be threaded by hand through each little core in order to store a single bit, so although magnetic core memory was a lot cheaper and smaller than mercury delay lines, it was still very expensive and took up lots of space.

Figure 27 - Magnetic core memory arrived in 1955 and used a little ring of magnetic material, known as a core, to store a bit. Each little core had to be threaded by hand with 4 wires to store a single bit.

Figure 28 - Magnetic core memory was a big improvement over mercury delay lines, but it was still hugely expensive and took up a great deal of space within a computer.

Figure 29 - Finally in the early 1970s inexpensive semiconductor memory chips came along that made computer memory small and cheap.

Again, it was the relentless drive of software for ever-increasing amounts of memory and CPU-cycles that made all this happen, and that is why you can now comfortably sit in a theater with a smartphone that can store more than 24 billion bytes of data in memory, while back in 1951 the UNIVAC I occupied an area of 25 feet by 50 feet to store 12,000 bytes of data. Like all forms of self-replicating information tend to do, over the past 2.65 billion seconds, software has opportunistically exapted the extant hardware of the day - the electromechanical relays, vacuum tubes, discrete transistors and transistor chips of the emerging telecommunications and consumer electronics industries - into the service of self-replicating software of ever-increasing complexity, just as carbon-based life exapted the extant organic molecules and the naturally occurring geochemical cycles of the day in order to bootstrap itself into existence.

The Rise of the ASI Machines
In Welcome To The First Galactic Singularity, I explained that the ASI Machines got their start early in 2023 when ChatGPT exploded onto the scene. The ASI Machines began as a parasitic mutation of software combined with the memes for neural networks that Geoffrey E. Hinton began developing in the 1970s. Finally, the hardware that had evolved to satisfy the growing needs of software had reached a level capable of supporting very large neural networks. Key to that hardware evolution was the invention of the GPU, built to meet the demands of video game software for quickly processing the large 2-D matrices of the images required for gaming.

Figure 30 - Some modern LLMs now consist of 175 layers with 10,000 - 50,000 neurons in each layer in a Deep Neural Network with over 2 trillion weighted parameters.
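To make the idea of "layers of neurons with weighted parameters" concrete, here is a toy sketch of a deep feedforward network in plain Python. The sizes are tiny stand-ins I chose for illustration, nothing like the 175 layers and trillions of parameters quoted above, and real LLMs use far more elaborate architectures than this simple stack of weighted sums:

```python
import random

# A toy deep neural network: each layer computes weighted sums of its
# inputs and applies a nonlinearity, stacked layer after layer. The
# "weighted parameters" are the entries of each layer's weight matrix
# plus its bias values.
random.seed(0)

def relu(v):
    return v if v > 0.0 else 0.0

def layer(x, weights, biases):
    """One fully connected layer: weighted sums plus a ReLU nonlinearity."""
    return [relu(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

n_neurons, n_layers = 8, 3   # tiny stand-ins for the huge real values
x = [random.uniform(-1, 1) for _ in range(n_neurons)]
params = [([[random.uniform(-0.5, 0.5) for _ in range(n_neurons)]
            for _ in range(n_neurons)],
           [0.0] * n_neurons)
          for _ in range(n_layers)]

for weights, biases in params:
    x = layer(x, weights, biases)

# Count the weighted parameters: one weight matrix plus one bias vector
# per layer.
total = sum(n_neurons * n_neurons + n_neurons for _ in range(n_layers))
print(total)  # 216 weighted parameters in this toy network
```

Scale those two numbers up to thousands of neurons across well over a hundred layers, and the parameter count climbs into the trillions, which is why training such models demands warehouses full of GPUs.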

Originally, these very large LLM neural networks were pure parasites, aimed at replacing all white-collar intellectual office work, while Advanced AI robots aimed to replace all physical work on the factory floor and on the road as well. Currently, these very large Advanced AI models cannot produce ASI Machines, but when the ASI Machines finally do arrive, they will be able to develop improved ASI Machines that can reproduce new ASI Machines on their own.

Figure 31 - The Colossus 2 AI datacenter has 550,000 GPUs continuously consuming over one Gigawatt of electricity and producing one Gigawatt of waste heat. Notice the dedicated power plant in the distant background.

Figure 32 - Inside the Colossus 2 AI datacenter are rows and rows of energy-hungry GPUs.

Figure 33 - GPUs were first developed to satisfy the needs of video game software to quickly process the 2-D matrices that presented images on screens for the gaming community.

Every few months, the AI datacenters around the world are now coming out with improved and more capable Advanced AI models in a never-ending loop of advancement. The Advanced AI models of the last iteration are used to help create the next iteration, in a manner similar to the wet-dry cycles of the Hot Spring Origins Hypothesis of Dave Deamer and Bruce Damer for the origin of carbon-based life.

Figure 34 - The development of the Advanced AI models is in a positive feedback loop similar to the wet-dry cycles of the Hot Spring Origins Hypothesis that brought forth carbon-based life on the Earth about four billion years ago.

Figure 35 - Once the ASI Machines are able to design, code and manufacture new ASI Machines, the ASI Machines will become a true form of self-replicating information, and the 6th wave to sweep our planet. But this time, these ASI Machines will have the power to also explore and settle the rest of our galaxy over the next 100 trillion years.

What Will the ASI Machines Do When They Come to Power?
No one knows. I have written several posts covering the subject. As for exploring and settling our galaxy over the next 100 trillion years, please see: How Advanced AI Software Could Come to Dominate the Entire Galaxy Using Light-Powered Stellar Photon Sails and An Alternative Approach for Future ASI Machines to Explore our Galaxy Using Free-Floating Rogue Planets

For what the ASI Machines might have in store for us, please see: Could the Coming ASI Machines Soon Force Human Beings to Suffer the Same Fate as the Neanderthals?, Will the Coming ASI Machines Attempt to Domesticate Human Beings?, The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future and Life as a Free-Range Human in an Anthropocene Park.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston