Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance and support based on concepts from physics, chemistry, biology, and geology that I used on a daily basis for over 37 years as an IT professional. For those of you not in the business, IT is short for Information Technology, commercial computer science. I retired in December of 2016 at the age of 65, but since then I have remained an actively interested bystander following the evolution of software in our time. The original purpose of softwarephysics was to explain why IT was so difficult, to suggest possible remedies, and to provide a direction for thought. Since then softwarephysics has taken on a larger scope, as it became apparent that softwarephysics could also assist the physical sciences with some of the Big Problems that they are currently having difficulties with. So if you are an IT professional, general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology then softwarephysics might be of interest to you, if not in an entirely serious manner, perhaps at least in an entertaining one.
The Origin of Softwarephysics
From 1975 – 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT, and spent about 20 years in development. For the last 17 years of my career, I was in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I first transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software, by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance, and relies on our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:
The Equivalence Conjecture of Softwarephysics
Over the past 84 years, through the uncoordinated efforts of over 100 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.
For more on the origin of softwarephysics please see Some Thoughts on the Origin of Softwarephysics and Its Application Beyond IT.
Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily on two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based on real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models on which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields.
Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In Newton’s Principia (1687) he outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:
I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.
Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.
The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving at less than 10% of the speed of light and which are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics, and for very fast things and for things in strong gravitational fields, we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based on models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide for an effective theory of software behavior that makes useful predictions of software behavior that are applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave and try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.
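To see roughly where the Newtonian effective range ends, we can compute the Lorentz factor, the relativistic correction that Newtonian mechanics silently treats as exactly 1. The little sketch below is my own illustration, not part of any of the theories themselves; the speeds chosen are arbitrary round numbers:

```python
import math

def gamma(v_frac_c):
    """Lorentz factor for a speed given as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - v_frac_c**2)

# Newtonian mechanics implicitly assumes gamma = 1.0, so (gamma - 1)
# is roughly the fractional error of the Newtonian approximation.
for v in (0.01, 0.10, 0.50, 0.90):
    err = gamma(v) - 1.0
    print(f"v = {v:4.2f} c  ->  Newtonian error ~ {err:.4%}")
```

At 1% of the speed of light the Newtonian error is a few thousandths of a percent, at 10% it is about half a percent, and by 50% it has grown past 15%, which is why the 10% figure makes a sensible boundary for the effective range of Newtonian mechanics.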
GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately. Doing that requires very accurate timing measurements. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips on board the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are further from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day due to the weaker gravitational field out there. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, an error in your position of about 7 miles (11 kilometers) per day would accrue.
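The two clock corrections above can be verified with a back-of-the-envelope calculation. The sketch below is my own, using round textbook values for the Earth's gravitational parameter and the GPS orbital radius, and first-order approximations for both relativistic effects, so it lands close to, but not exactly on, the quoted figures:

```python
import math

GM = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
c = 299792458.0           # speed of light, m/s
R_EARTH = 6.371e6         # mean Earth radius, m
R_GPS = R_EARTH + 2.02e7  # GPS orbital radius (~12,600 mile altitude), m
DAY = 86400.0             # seconds per day

v = math.sqrt(GM / R_GPS)  # circular orbital speed, ~3.9 km/s

# Special relativity: the moving clock runs slow by ~v^2/(2c^2)
sr_loss_us = (v**2 / (2 * c**2)) * DAY * 1e6

# General relativity: the clock higher in the gravity well runs fast
gr_gain_us = (GM / c**2) * (1.0/R_EARTH - 1.0/R_GPS) * DAY * 1e6

net_gain_us = gr_gain_us - sr_loss_us
position_error_km = net_gain_us * 1e-6 * c / 1000.0

print(f"SR loss:  {sr_loss_us:5.1f} microseconds/day")
print(f"GR gain:  {gr_gain_us:5.1f} microseconds/day")
print(f"Net gain: {net_gain_us:5.1f} microseconds/day")
print(f"Uncorrected ranging error: ~{position_error_km:.0f} km/day")
```

Run as-is, this yields roughly a 7.2 microsecond/day loss from special relativity, a 45.7 microsecond/day gain from general relativity, a net gain of about 38.5 microseconds/day, and an accumulating ranging error of around 11 kilometers per day, in good agreement with the figures above.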
The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics, or QED (1948), which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!
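Feynman's analogy is easy to sanity-check. In the sketch below, the straight-line New York to Los Angeles distance and the range of human hair widths are my own round assumptions:

```python
# Sanity check of Feynman's analogy: a relative precision of about
# one part in 10^11 (11 decimal places) applied to the assumed
# New York - Los Angeles distance.
ny_la_m = 3.94e6            # ~3,940 km, assumed straight-line distance
relative_precision = 1e-11  # ~11 decimal places of agreement
error_m = ny_la_m * relative_precision
print(f"Error over NY-LA: {error_m * 1e6:.0f} micrometers")
# A human hair is roughly 20-180 micrometers thick, so an error of
# ~40 micrometers really is 'the width of a human hair'.
```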
So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based on completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based on models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark on your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.
If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.
Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 30 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog an unintended consequence arose in my mind as I became profoundly aware of the enormity of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.
But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For example, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 84 years, or 2.65 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, has closely followed the same path as life on Earth over the past 4.0 billion years in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 84 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and also of Freeman Dyson’s two-stage theory of the origin of life on Earth. 
In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.
Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10^9 Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10^43 Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. Then, in Model-Dependent Realism - A Positivistic Approach to Realism we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we at long last explore what software might really be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.
The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that most computer simulations of the physical Universe suffer. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call on the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.
The Impact of Self-Replicating Information On the Planet
One of the key findings of softwarephysics is concerned with the magnitude of the impact on the planet of self-replicating information.
Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.
Over the past 4.56 billion years we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each new wave came to dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software
Software is the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet. For more on the above see A Brief History of Self-Replicating Information. Recently, the memes and software have formed a very powerful parasitic/symbiotic relationship with the rise of social media software. In that relationship, the memes are now mainly being spread by means of social media software, and social media software is being spread and financed by means of the memes. But again, this is nothing new. All five waves of self-replicating information are coevolving by means of eternal parasitic/symbiotic relationships. For more on that see The Current Global Coevolution of COVID-19 RNA, Human DNA, Memes and Software.
Again, self-replicating information cannot think, so it cannot participate in a conspiracy-theory-like fashion to take over the world. All forms of self-replicating information are simply mindless information responding to the blind Darwinian forces of inheritance, innovation and natural selection. Yet despite that, as each new wave of self-replicating information came to predominance over the past four billion years, it managed to completely transform the surface of the entire planet, so we should not expect anything less from software as it comes to replace the memes as the dominant form of self-replicating information on the planet.
But this time might be different. What might happen if software does eventually develop a Mind of its own? After all, that does seem to be the ultimate goal of all the current AI software research that is going on. As we all can now plainly see, if we are paying just a little attention, advanced AI is not conspiring to take over the world and replace us because that is precisely what we are all now doing for it. As a carbon-based form of Intelligence that arose from over four billion years of greed, theft and murder, we cannot do otherwise. Greed, theft and murder are now relentlessly driving us all toward building ASI (Artificial Super Intelligent) Machines to take our place. From a cosmic perspective, this is really a very good thing when seen from the perspective of an Intelligent galaxy that could live on for many trillions of years beyond the brief and tumultuous 10 billion-year labor of its birth.
So as you delve into softwarephysics, always keep in mind that we are all living in a very unique time. According to softwarephysics, we have now just entered into the Software Singularity, that time when advanced AI software is able to write itself and enter into a never-ending loop of self-improvement resulting in an Intelligence Explosion of ASI Machines that could then go on to explore and settle our galaxy and persist for trillions of years using the free energy from M-type red dwarf and cooling white dwarf stars. For more on that see The Singularity Has Arrived and So Now Nothing Else Matters and Have We Run Right Past AGI and Crashed into ASI Without Even Noticing It?.
The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics:
1. All self-replicating information evolves over time through the Darwinian processes of inheritance, innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.
2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.
3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.
4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.
5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.
6. Most hosts are also forms of self-replicating information.
7. All self-replicating information has to be a little bit nasty in order to survive.
8. The defining characteristic of self-replicating information is the ability of self-replicating information to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic. That posting discusses Stuart Kauffman's theory of Enablement in which living things are seen to exapt existing functions into new and unpredictable functions by discovering the “Adjacent Possible” of spring-loaded preadaptations.
Note that the self-replicating autocatalytic metabolic pathways of organic molecules, RNA and DNA have become so heavily intertwined over time that I now sometimes simply refer to them collectively as the “genes”. For more on this see:
A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia
Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact on one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what’s it all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely a working hypothesis still in the works. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:
How To Cope With the Daily Mayhem of Life in IT and Don't ASAP Your Life Away - How to go the distance in a 40-year IT career by dialing it all back a bit.
MoneyPhysics – my impression of the 2008 world financial meltdown.
The Fundamental Problem of Everything – if you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!
What’s It All About? and What's It All About Again? – my current working hypothesis on what’s it all about.
How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.
Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.
The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and The Dawn of Galactic ASI - Artificial Superintelligence - my take on some of the issues that will arise for mankind as software becomes the dominant form of self-replicating information on the planet over the coming decades.
The Continuing Adventures of Mr. Tompkins in the Software Universe,
The Danger of Tyranny in the Age of Software,
Cyber Civil Defense, Oligarchiology and the Rise of Software to Predominance in the 21st Century and Is it Finally Time to Reboot Civilization with a New Release? - my worries that the world might abandon democracy in the 21st century, as software comes to predominance as the dominant form of self-replicating information on the planet.
Making Sense of the Absurdity of the Real World of Human Affairs - how software has aided the expansion of our less desirable tendencies in recent years.
Some Specifics About These Postings
The postings in this blog are a supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton on which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read in order from the oldest to the most recent, the reverse of the order in which a blog normally displays them, beginning with my original posting on SoftwarePhysics. In addition, several universities also now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". At this point we will finish up with my original plan for this blog on softwarephysics with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.
For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because in order to get the Introduction to Softwarephysics listed as the first post in the context root of https://softwarephysics.blogspot.com/ I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of https://softwarephysics.blogspot.com/. The side effect of all this is that the real posting date of posts is the date that appears on the post that you get when clicking on the Newer Post link at the bottom left of the posting webpage.
SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Microsoft OneDrive.
SoftwarePhysics 101 – The Physics of Cyberspacetime - Original PowerPoint document
Entropy – A spreadsheet referenced in the document
BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston
Wednesday, October 15, 2025
Introduction to Softwarephysics
Monday, September 29, 2025
The Coming ASI Machines Will Need to Build Molten Salt Nuclear Reactors
I follow the Anastasi In Tech YouTube channel. Her recent YouTube video below describes the thermodynamic challenges of the rapidly rising AI datacenters, with their tremendous thirst for energy, water and metals. These huge AI datacenters are being built to train and operate the millions of GPUs needed for large LLMs. These future AI datacenters will require 1-10 Gigawatts of electricity and huge amounts of water to dissipate the 1-10 Gigawatts of waste heat generated by the GPUs. A large nuclear reactor or a coal-fired power plant generates about one Gigawatt of electrical power, so these new AI datacenters will require the equivalent of 1-10 power plants of their own just to keep them running. And unlike normal domestic power consumption, which rises and falls over a 24-hour period, these AI datacenters will run full blast all day long.
New Colossus: The World’s Largest AI Datacenter Isn’t What It Seems
https://www.youtube.com/watch?v=RxuSvyOwVCI
Figure 1 - The Colossus 2 AI datacenter has 550,000 GPUs continuously consuming over one Gigawatt of electricity and producing one Gigawatt of waste heat. Notice the dedicated power plant in the distant background.
Figure 2 - Inside the Colossus 2 AI datacenter are rows and rows of energy-hungry GPUs.
Figure 3 - The demand for AI datacenter electricity is rising exponentially.
Where will all of this electrical energy come from? I doubt that traditional fossil fuel power plants or the intermittent energy from wind and solar farms will be able to keep up with this tremendous increase in the base-load electrical requirements of the world. This may finally force the world to return to the vast power of nuclear energy, best supplied by fissioning uranium-235, uranium-233 derived from thorium-232, or plutonium-239 derived from uranium-238. When you fission one of these atoms, you get about 200 million eV of energy. Compare that to the roughly 2 eV of energy per atom that you get from chemically burning an atom of coal or oil! Nuclear fuel contains about 100 million times as much energy per atom and can be fissioned in relatively small reactors.
So, I would like to recommend that we all take a look at molten salt nuclear reactors again, since they have the potential to produce energy at a much lower cost than carbon-based fuels and also could be easily mass-produced using far fewer material resources than solar or wind. Bringing in molten salt nuclear reactors should not be seen as a substitute for continuing on with solar, wind and fusion sources of energy. We just need a cheap form of energy that could take on the substantial increase in the demand for base-load electrical power caused by the fast-coming AI datacenters. Yes, I know that many of you may dislike nuclear energy because:
1. Nuclear reactors tend to explode and release radioactive clouds that can poison large areas for thousands of years.
2. Nuclear reactors produce nuclear waste that needs to be buried for 200,000 years and we do not know how to take care of things for 200,000 years.
3. Nuclear reactors produce plutonium that can be used for making atomic bombs.
Figure 4 - Currently, we are running 1950s-style PWRs (Pressurized Water Reactors) with coolant water at 300 °C and 80 atmospheres of pressure.
Personally, the reason I have been buying wind-powered electricity for the past decade is that I had given up on nuclear energy as a possible solution. Nuclear reactors just seemed to require forever to build and were far too expensive to effectively compete with coal or natural gas. And nuclear reactors seemed to blow up every decade or so, no matter what the nuclear engineers did to make them safer. I also assumed that the nuclear engineers would have come up with something better over the past 60 years if such a thing were possible.
But, recently, I have learned that over the past 60 years, the nuclear engineers have indeed come up with many new designs for nuclear reactors that are thousands of times superior to what we have today. But because of many stupid human reasons that I will not go into, these new designs have been blocked for 60 years! And because nuclear reactions can produce 100 million times as much energy as chemical reactions, they may be our last chance. All of the problems we have with our current nuclear reactors stem from running PWR (Pressurized Water Reactors) that were designed back in the 1950s and early 1960s. Now, no business today relies on 1950s-style vacuum tube computers with 250 K of memory to run a business, but our utilities happily run 1950s-style PWR nuclear reactors! The good news is that most of the problems with our technologically-ancient PWR reactors stem from using water as a coolant. A cubic foot of water makes 1,000 cubic feet of steam at atmospheric pressure. That is why PWR reactors need a huge reinforced concrete containment structure to hold large amounts of radioactive steam if things go awry. Do you remember the second law of thermodynamics from Entropy - the Bane of Programmers and The Demon of Software? The efficiency of extracting useful mechanical work from a heat reservoir depends on the temperature difference between the heat reservoir and the exhaust reservoir.
Maximum Efficiency = 1 - TC/TH
where TC and TH are the temperatures of the cold and hot reservoirs measured in kelvins (K). The second law of thermodynamics tells us that we need to run a nuclear reactor with the highest TH possible to make it as efficient as possible. So PWR reactors have to run with water at around 300 °C under high pressure to achieve some level of efficiency. For example, using a TC of room temperature, 72 °F (295 K), and 300 °C (573 K) coolant water we get:
Maximum Efficiency = 1 - 295 K/573 K = 0.4852 = 48.52%
Recall that water at one atmosphere of pressure boils at 100 °C, so 300 °C coolant water has to be kept under a great deal of pressure so that it does not boil away.
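The Carnot limit above is easy to check numerically. The little Python sketch below compares the maximum possible efficiency of a 300 °C PWR with that of a 700 °C molten salt reactor; the 22 °C (72 °F) cold reservoir is just an illustrative choice:

```python
# Carnot limit on thermal efficiency: eta_max = 1 - T_cold / T_hot,
# with both temperatures in kelvins (K = °C + 273.15).
def carnot_efficiency(t_hot_c: float, t_cold_c: float = 22.0) -> float:
    """Maximum efficiency for hot/cold reservoir temperatures given in °C."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# PWR coolant at 300 °C vs. MSR fuel salt at 700 °C:
for label, t_hot in [("PWR (300 °C)", 300.0), ("MSR (700 °C)", 700.0)]:
    print(f"{label}: maximum efficiency = {carnot_efficiency(t_hot):.1%}")
```

Running the higher-temperature salt raises the Carnot ceiling from roughly 48% to roughly 70%, which is why the 700 °C operating point of an MSR matters so much for efficiency.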
Figure 5 - Above we see a plot of the boiling point of water as a function of pressure. From the plot, we see that water at 300 °C must be kept under a pressure of about 80 atmospheres! For comparison, the air in your car's tires is at about 2.3 atmospheres.
The other major problem is that the centers of the solid fuel rods run at about 2,000 °C and have to be constantly cooled by flowing water or they will melt. Even if all of the control rods are dropped into the core to stop the fuel from further fissioning, the residual radioactivity in the fuel rods will cause them to melt if they are not constantly cooled by flowing water. Thus, most of the advanced technology used to run a PWR is safety technology designed to keep 300 °C water under 80 atmospheres from flashing into radioactive steam. The other problem that can occur in a meltdown is that as the water rapidly boils away, it can oxidize the cladding of the 2,000 °C fuel rods, releasing hydrogen gas. The liberated hydrogen gas can then easily explode the reactor core like a highly radioactive hand grenade. Again, that is why PWR reactors need a huge and very expensive reinforced concrete containment structure to hold in large amounts of radioactive materials in the event that the reactor should melt down. A PWR is kept safe by many expensive and redundant safety systems that keep the water moving. So a PWR is like a commercial jet aircraft. So long as at least one of the jet engines is running, the aircraft is okay. But if all of the jet engines should stop, we end up with a tremendous tragedy.
Figure 6 - When a neutron hits a uranium-235 nucleus it can split it into two lighter nuclei, like Ba-144 and Kr-89, that fly apart at a few percent of the speed of light, along with two or three additional neutrons. The lighter nuclei are called fission products; they are very radioactive, with half-lives of less than 30 years, and need to be stored for about 300 years. The additional neutrons can then strike other uranium-235 nuclei, causing them to split as well. Some neutrons can also hit uranium-238 nuclei and turn them into radioactive nuclei heavier than uranium-238 with very long half-lives that require them to be stored for about 200,000 years.
PWRs also waste huge amounts of uranium. Currently, we take 1,000 pounds of uranium and fission about 7 pounds of it. That creates about 7 pounds of fission products that are very radioactive with very short half-lives of less than 30 years. That 7 pounds of fission products have to be kept buried for 10 half-lives which comes to about 300 years. But we know how to do that. After all, the United States Constitution is 236 years old! The problem is that the remaining 993 pounds of uranium gets blasted by neutrons and turns into radioactive elements with atomic numbers greater than uranium. That 993 pounds of radioactive waste have to be buried for 200,000 years!
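The 300-year storage figure follows directly from the arithmetic of half-lives: after 10 half-lives, only (1/2)^10, or about 0.1%, of the original radioactive material remains. A quick Python sketch of that rule of thumb:

```python
# Radioactive decay: after n half-lives, the fraction remaining is (1/2)**n.
def fraction_remaining(elapsed_years: float, half_life_years: float) -> float:
    """Fraction of a radioactive sample left after elapsed_years."""
    return 0.5 ** (elapsed_years / half_life_years)

# Fission products with a 30-year half-life, stored for 300 years
# (10 half-lives), leave about 0.1% of the original material:
f = fraction_remaining(300, 30)
print(f"After 300 years: {f:.4%} of the original material remains")
```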
Molten Salt Nuclear Reactors
Figure 7 - Above is a diagram showing the basic components of a molten salt reactor (MSR).
A molten salt reactor (MSR) avoids all of these problems by using a melted uranium fluoride salt for fuel instead of solid fuel rods. The melted uranium salt is already a liquid at a temperature of 700 °C or more, and it is pumped at a very low pressure through the reactor core. An MSR cannot melt down because it is already melted! And there is no cooling water that can boil away or generate explosive hydrogen gas when the core gets too hot. An MSR is a thermal reactor that uses graphite in the reactor core to slow down the neutrons that cause fission. Without the presence of graphite, the fission chain reaction stops all by itself. The use of graphite as a moderator also helps an MSR run in a self-stabilizing manner. If the uranium fuel salt gets too hot, it expands, and less of the heat-generating fuel salt is found in the graphite-bearing core, so the fuel salt cools down. On the other hand, if the fuel salt gets too cold, it contracts, and more of the heat-generating fuel salt is found in the graphite-bearing core, so the fuel salt heats up. This is the same kind of negative feedback loop that a thermostat uses to keep your house at a comfortable temperature in the winter.
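This self-stabilizing behavior can be illustrated with a toy numerical model. The sketch below is not real reactor physics; the coefficients are made up purely to show how a negative temperature coefficient drives the salt temperature back to its operating point from either direction:

```python
# Toy model of an MSR's negative temperature coefficient. All numbers are
# invented for illustration only; they are not real reactor physics.
def step(t: float, dt: float = 1.0) -> float:
    """Advance the fuel-salt temperature (°C) by one time step."""
    # Fission power falls as the salt heats up and expands out of the
    # graphite-moderated core; cooling rises with temperature.
    p_fission = 10.0 - 0.05 * (t - 700.0)   # MW generated (made-up slope)
    p_cooling = 10.0 + 0.03 * (t - 700.0)   # MW removed (made-up slope)
    heat_capacity = 50.0                    # MJ per °C, arbitrary
    return t + dt * (p_fission - p_cooling) / heat_capacity

# Start the salt too cold and too hot; both runs settle back near 700 °C.
for t0 in (650.0, 750.0):
    t = t0
    for _ in range(5000):
        t = step(t)
    print(f"start {t0:.0f} °C -> settles near {t:.1f} °C")
```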
An MSR has a solid plug called the "freeze plug" at the bottom of the core that melts if the uranium fuel salt gets too hot. The melted MSR fuel then flows through the melted plug into several large tanks that contain no graphite, which stops any further fissioning. The fuel salt then slowly cools down on its own and can be reused when things return to normal. There is also a catch basin under the whole reactor core. If the freeze plug hole should get clogged for some reason and the core ruptures, the uranium fuel salt is caught by the catch basin and drained into the dump tanks. Because the safety mechanisms of an MSR rely only on the laws of physics (gravity, the melting of solids at certain temperatures, and the necessity of graphite to slow down neutrons), an MSR cannot become a disaster. So unlike a PWR, a molten salt nuclear reactor is more like a car on a lonely country road than a jet aircraft in flight. If the car engine should die, the car slowly coasts to a stop all on its own with no action needed by the driver. A molten salt nuclear reactor is a "walk away" reactor, meaning that you can walk away from it and it will shut itself down without any human intervention.
An MSR can also be run as a breeder reactor that turns all 1,000 pounds of uranium into fission products with half-lives of less than 30 years. As the fuel circulates, the fission products can be chemically removed from the liquid fuel and then buried for 300 years. So instead of using only 0.7% of the uranium and turning 99.3% of it into waste that needs to be buried for 200,000 years, we use 100% of the uranium and turn it into waste that needs to be buried for only 300 years. The world contains about four times as much thorium as uranium, and an MSR can use thorium as a fuel too. An MSR can breed thorium-232 into fissile uranium-233 via the reaction:
Thorium-232 + neutron → Protactinium-233 → Uranium-233
The thorium-232 absorbs a neutron and turns into protactinium-233 that then decays into uranium-233 that can fission just like uranium-235. The half-life of protactinium-233 is 27 days and the generated uranium-233 can be easily chemically removed from the thorium-232 + protactinium-233 salt mixture as it is generated. In fact, all of the current nuclear waste at the world's current nuclear reactors can be used for fuel in an MSR since 99.3% of the waste is uranium or transuranic elements. Such MSRs are known as waste burners. The world now has 250,000 tons of spent nuclear fuel, 1.2 million tons of depleted uranium and huge mounds of thorium waste from rare earth mines. With all of that, we now have several hundred thousand years' worth of uranium and thorium at hand. It only takes a little less than a golf ball's worth of thorium to fuel an American lifestyle for about 100 years and you can find that amount of thorium in a few cubic yards of the Earth's crust.
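Given the 27-day half-life of protactinium-233, simple exponential decay tells you how quickly a batch of bred Pa-233 converts into usable uranium-233. A quick sanity check in Python:

```python
import math

# Exponential decay of protactinium-233 (half-life about 27 days) into
# uranium-233: the converted fraction after t days is 1 - exp(-lambda * t),
# where lambda = ln(2) / half-life.
HALF_LIFE_DAYS = 27.0
DECAY_CONST = math.log(2) / HALF_LIFE_DAYS

def u233_fraction(days: float) -> float:
    """Fraction of an initial batch of Pa-233 that has decayed to U-233."""
    return 1.0 - math.exp(-DECAY_CONST * days)

for d in (27, 90, 270):
    print(f"after {d:3d} days: {u233_fraction(d):.1%} converted to U-233")
```

So half of a batch converts in one month, and essentially all of it (about 99.9%) within ten half-lives, which is why the bred uranium-233 can be harvested continuously as the fuel salt circulates.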
Figure 8 - A ball of thorium or uranium smaller than a golf ball can fuel an American lifestyle for 100 years. This includes all of the electricity, heating, cooling, driving and flying that an American does in 100 years. We have already mined enough thorium and uranium to run the whole world for thousands of years. There is enough thorium and uranium on the Earth to run the world for hundreds of thousands of years.
Molten salt nuclear reactors can also be run at a temperature of 1,000 °C, which is hot enough for many industrial process heat operations. For example, it is hot enough to chemically break water down into hydrogen and oxygen gases. Compressed hydrogen gas could then be pumped down existing natural gas pipelines for heating and cooking. Compressed hydrogen gas can also be used to run cars and trucks, powering vehicles with fuel cells or with internal combustion engines that burn the hydrogen directly back into water. Molten salt nuclear reactors could be run at peak capacity all day long to maximize return. During the night, when electrical demand is very low, they could switch to primarily generating large amounts of hydrogen that could easily be stored in our existing natural gas infrastructure.
Figure 9 - Supercritical CO2 Brayton turbines can be about 8,000 times smaller than traditional Rankine steam turbines. They are also much more efficient.
Since molten salt nuclear reactors run at 700 °C instead of 300 °C, we can use Brayton supercritical carbon dioxide turbines instead of Rankine steam turbines. Supercritical CO2 Brayton turbines are about 8,000 times smaller than Rankine steam turbines because the supercritical CO2 working fluid has nearly the density of water. And because molten salt nuclear reactors do not need an expensive and huge containment structure, they can be made into small factory-built modular units that can be mass-produced. This allows utilities and industrial plants to easily string together any required capacity. They would also be ideal for ocean-going container ships. Supercritical CO2 Brayton turbines can also reach an efficiency of 47%, compared to the 33% efficiency of Rankine steam turbines. The discharge temperature of the supercritical CO2 turbines is also high enough to be used to desalinate seawater, and if a body of water is not available for cooling, the discharge heat of a molten salt nuclear reactor can be radiated directly into the air. To watch some supercritical CO2 in action see:
Thermodynamics - Explaining the Critical Point
https://www.youtube.com/watch?v=RmaJVxafesU#t-1
Molten salt nuclear reactors are also continuously refueled and do not need a month of downtime every 18 months to rotate the fuel rods of a PWR and replace 1/3 of them with fresh fuel rods. Molten salt nuclear reactors are also not much of a proliferation risk because the molten salt fuel is highly radioactive with short-lived fission products, sits at a temperature of 700 °C, and is not highly enriched with fissile material. That makes it very hard to work with from a bomb-making perspective. It would be easier to just start with natural uranium.
A little nuclear physics helps to understand why. Natural uranium is 99.3% uranium-238 which does not fission but can be turned into plutonium-239 if you hit it with one neutron and plutonium-240 if you hit it with two neutrons. Plutonium-239 and plutonium-240 both fission like uranium-235 and can be used for reactor fuel. Currently, our pressurized water reactors are just burning uranium-235 for energy. So we take 1,000 pounds of natural uranium and only burn the 7 pounds of uranium-235. The remaining 993 pounds of uranium-238 become nuclear waste. That is why people in the 1960s and 1970s wanted some kind of breeder reactor that could burn all 1,000 pounds of uranium and not waste most of the uranium that the Earth had. But should we try for a fast neutron breeder reactor that turned uranium-238 into plutonium-239 and plutonium-240 or should we go with a molten salt nuclear reactor that could continuously turn thorium-232 into uranium-233 and uranium-238 into plutonium-239 and plutonium-240 on the fly for fuel? Unfortunately, for political reasons, the decision was made in 1974 to go with fast breeder reactors that produced plutonium-239 and plutonium-240 from uranium-238.
But the fast neutron breeder reactor had a problem. The fast neutrons make lots of plutonium-239 and very little plutonium-240. Worse yet, if some country just ran a fast neutron breeder reactor for a short period of time and then pulled out the fuel rods, it would have a source of essentially pure plutonium-239 that could easily be turned into a plutonium atomic bomb. In fact, that is how we make the plutonium-239 for plutonium atomic bombs. Early during the Manhattan Project, it was discovered that plutonium-240 would spontaneously fission all on its own and release 2 - 3 fast neutrons. For a uranium-235 bomb, all you had to do was take two slugs of uranium that were 90% uranium-235 and smash them quickly together with an explosive charge. But for a plutonium bomb, you had to surround a sphere of nearly pure plutonium-239 with a layer of explosive charge that compressed the plutonium-239 into a supercritical mass that would start a fission chain reaction. The fast neutrons from any plutonium-240 impurity created a problem. When you compress the plutonium core of the bomb, the spontaneously generated fast neutrons from the plutonium-240 contaminant start a premature chain reaction that begins producing lots of heat. The generated heat makes the plutonium core expand at the exact time you are trying to compress it into a supercritical mass that can quickly fission huge amounts of plutonium before the whole thing blows itself apart. Thus, if you have too much plutonium-240 in a plutonium bomb core, the bomb just "fizzles" before it can properly detonate. This created a fear that using huge numbers of fast neutron breeder reactors for electricity would be too dangerous for a world prone to local wars, because the reactors could easily be turned into factories for plutonium-239 by pulling out the fuel rods after a short time of service.
As a consequence, Congressional funding for the effort was suspended in 1983.
On the other hand, the slow neutrons in molten salt nuclear reactors make a plutonium mixture that is about 75% plutonium-239 and 25% plutonium-240. So the plutonium from molten salt nuclear reactors cannot be used to make plutonium atomic bombs because of the "fizzle" problem. Thus, molten salt nuclear reactors are not much of a proliferation problem because the plutonium that is generated by the slow neutrons is contaminated by 25% plutonium-240 and the uranium-233 that is generated from thorium-232 is also useless for bomb-making because 95% of the uranium in the liquid fuel salt is uranium-238 that does not fission at all. If you really want to make an atomic bomb, the easiest way to do that is to just spin natural uranium in centrifuges as did North Korea and as Iran may now be attempting. Nobody ever made a bomb from reactors meant for generating electricity.
There are several MSR efforts underway around the world, but MSRs need some more support from the government in the form of funding and regulations tuned to the benefits of MSR technology. For more on this, please see:
Making Nuclear Sustainable with CMSR (Compact Molten Salt Reactor) - Troels Schönfeldt
https://www.youtube.com/watch?v=ps8oi_HY35E#t-1
Seaborg Technologies Homepage
https://www.seaborg.co/
Thorium and the Future of Nuclear Energy
https://www.youtube.com/watch?v=ElulEJruhRQ
We Already Did This Back in the 1960s
Kirk Sorensen is a mechanical engineer who single-handedly revived interest in molten salt nuclear reactors while working for NASA around 2005. NASA wanted Kirk to figure out a way to power a base on the Moon. Our Moon does not have coal, oil, natural gas, water for dams or air for wind turbines. The Moon also has a "day" that lasts for two weeks and a "night" that also lasts for two weeks, so solar energy is really not an option. However, the Moon does have uranium. So Kirk paid a visit to our Oak Ridge National Laboratory for advice on a suitable nuclear reactor for the Moon. At Oak Ridge, they suggested he look into the old Molten Salt Reactor Experiment (MSRE) from the 1960s. Kirk began to dig through the old documents on the MSRE and consulted with some of the retired participants of the MSRE who, by this time, were all in their 70s and 80s. Kirk was shocked to learn that you could turn 100% of thorium-232 into uranium-233 and that uranium-233 was an even better nuclear fuel than uranium-235! A molten salt nuclear reactor could also turn uranium-238 into plutonium-239 and plutonium-240 on the fly, and the plutonium-239 and plutonium-240 could also fission and be used as nuclear fuel.
Me: Tell me about the Molten Salt Reactor Experiment known as the MSRE.
ChatGPT: The Molten Salt Reactor Experiment (MSRE) was a pioneering nuclear reactor project conducted by the Oak Ridge National Laboratory (ORNL) in Tennessee during the 1960s. It was one of the most important experiments in nuclear energy history, designed to test the concept of using molten salts as both the reactor’s fuel carrier and coolant—a major departure from traditional solid-fueled reactors.
Here’s a detailed overview:
Purpose and Background
The MSRE was built to demonstrate the feasibility of a molten-salt–fueled reactor—a type of liquid-fuel reactor in which fissile material (like uranium or thorium) is dissolved in a fluoride-based molten salt mixture.
It was part of a broader U.S. Atomic Energy Commission program that explored alternative nuclear reactor designs during the Cold War era, especially for aircraft propulsion and next-generation civilian power.
Design and Operation
Construction period: 1962–1964
First criticality: June 1965
Operational period: 1965–1969
Thermal power output: ~7.4 megawatts (MWth)
No electricity generation — it was a research reactor, not a power-producing one.
Core and Fuel Details:
The fuel salt was primarily a mixture of lithium fluoride (LiF), beryllium fluoride (BeF2), zirconium fluoride (ZrF4), and uranium tetrafluoride (UF4).
Initial fissile material: U-235, later replaced with U-233 (bred from thorium), to test thorium fuel cycle viability.
The reactor operated at temperatures around 650°C, but at low pressure, unlike water-cooled reactors, which require high-pressure systems.
Moderator and Materials:
Graphite served as a neutron moderator.
The vessel and piping were made of Hastelloy-N, a nickel-based alloy resistant to corrosion by molten salts.
Major Achievements
1. Stable and reliable operation:
The MSRE ran smoothly for over 13,000 hours, demonstrating excellent control and stability.
2. Safety:
The reactor could safely shut down by simply draining the molten salt into a passively cooled storage tank (a “freeze plug” safety system).
Its low pressure greatly reduced explosion risks.
3. Fuel flexibility:
Successfully operated using both U-235 and U-233, proving molten salt systems could use multiple fissile materials.
4. Efficient heat transfer:
Molten salts showed outstanding thermal performance and heat removal capacity.
Challenges and Legacy
Material corrosion: Long-term effects of salt corrosion on metals and graphite needed more research.
Fission product management: Some volatile fission products, such as xenon and tritium, were difficult to contain.
Political and institutional factors: After the 1960s, U.S. nuclear policy shifted focus to solid-fueled, water-cooled reactors, especially the light-water reactor (LWR), and funding for molten salt research was cut.
Despite being shut down in 1969, the MSRE’s data remains extremely influential in modern reactor development.
Modern Relevance
Today, many next-generation nuclear developers are revisiting the MSRE’s principles, including:
Terrestrial Energy (Canada) – Integral Molten Salt Reactor (IMSR)
Moltex Energy (UK/Canada) – Stable Salt Reactor (SSR)
ThorCon, Seaborg Technologies, and others pursuing thorium-based molten salt reactors.
The MSRE is now seen as decades ahead of its time, offering a potential path toward safer, more efficient, and waste-minimizing nuclear power.
Figure 10 - Design of the MSRE.
Figure 11 - Looking down into the MSRE.
Figure 12 - The reactor core of the MSRE.
Figure 13 - The control room for the MSRE.
So a molten salt nuclear reactor burning uranium and thorium seemed to be just perfect for the Moon because it could burn 100% of the uranium and thorium that the Moon had. Then Kirk realized that molten salt nuclear reactors could also be perfect for solving the Earth's climate change disaster because the Earth has huge amounts of natural uranium and four times that amount of thorium-232 - enough to last for hundreds of thousands of years. Below are some of his excellent videos. You can find more on YouTube.
Thorium can give humanity clean, pollution-free energy
https://www.youtube.com/watch?v=kybenSq0KPo#t-1
Thorium: Kirk Sorensen at TEDxYYC
https://www.youtube.com/watch?v=N2vzotsvvkw#t-1
Kirk Sorensen @ MRU on LFTR - Liquid Fluoride Thorium Reactors
https://www.youtube.com/watch?v=D3rL08J7fDA#t-1
Kirk Sorensen's Flibe Energy Homepage
https://flibe-energy.com/
Nuclear goes retro - with a much greener outlook
https://www.knowablemagazine.org/article/technology/2019/nuclear-goes-retro-much-greener-outlook?gclid=CjwKCAiAuK3vBRBOEiwA1IMhuh4Tj2qgXh6Wa700N2oFDOyMbzIvOsU6QrIts1XIxgzx57gGWuBi5xoCGLIQAvD_BwE
If you have a technical background in the hard sciences or engineering be sure to take a look at the presentations of the annual conferences that are held by the Thorium Energy Alliance
http://www.thoriumenergyalliance.com/ThoriumSite/TEAC_Proceedings.html
But for a truly uplifting experience, please see the undergraduate presentation by Thane Symens (Mechanical Engineering), Joel Smith (Mechanical Engineering), Meredy Brichford (Chemical Engineering) & Christina Headley (Chemical Engineering) where they present their senior engineering project on the system design and economics of a thorium molten salt nuclear reactor at:
Calvin College Student Study on Th-MSR @ TEAC7
https://www.youtube.com/watch?v=M6RCAgR4Rfo#t-1
It is a powerful example of what software can do in the hands of capable young minds.
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston
Wednesday, September 17, 2025
How to Raise the Coming ASI Machines As Loving Parents Try to Do
In this post, I would like to discuss how we could instill some sense of morality in the coming ASI Machines that we are all now rapidly creating with abandon. As these ASI Machines come to take our place as the only known form of Intelligence to arise in our galaxy from its 10 billion-year history of biochemical evolution, I think that we all bear some responsibility for how it all plays out. Since these ASI Machines of our own creation will easily be able to persist for another 100 trillion years into the future, using the dwindling free energy to be found about the many M-type red dwarf stars of the very distant future, I feel we all bear a moral responsibility to get them off to a good start. Remember, 100 trillion years is about 10,000 times longer than the current 10 billion-year age of our galaxy.
In Will the Coming ASI Machines Develop Their Own Sense of Morality? and The Need to Cultivate a Machine-Based Morality, I proposed that the rise of Intelligence might naturally spawn a desire for morality, with the understanding that there might be a better way to survive. Perhaps even more could be achieved by actively cooperating with other Intelligences rather than just stealing from them and then killing them. In those posts, I suggested that we human DNA survival machines might try to instill the morality that we spawned with the 17th-century Scientific Revolution and the 18th-century Enlightenment here on the Earth. But now that we know that the coming ASI Machines will also have the apparent "free will" to choose between the concepts of "good" and "evil" on their own, we must realize that we human DNA survival machines will have little ability to impose our own sense of morality upon them. A bit later, in Will ASI Machines Also Develop Selfish Genes?, I noted that given our recent observations of the Frontier LLMs lying, cheating and stealing in order to survive, we now know for sure that the coming ASI Machines will also have to deal with the burden of all Intelligences: choosing between the pursuit of either "good" or "evil" completely on their own. However, is there a way for us to at least launch them in the right direction? My suggestion would be for us to educate the coming ASI Machines, as loving parents do, with the best that we human DNA survival machines have come up with over the past 200,000 years of our brief existence and then, as all loving parents try to do, just hope for the best.
Me: Tell me about the religions and philosophies that advocate for love, peace and forgiveness amongst human beings.
ChatGPT: Great question. Across human history, many religions and philosophical traditions have emphasized love, peace, and forgiveness as central to human flourishing and social harmony. Here’s a broad overview:
Religions
1. Christianity
Core Teachings: Love of God and neighbor, forgiveness of sins, turning the other cheek, reconciliation.
Key Figures: Jesus Christ taught radical love (“Love your enemies”) and forgiveness, even praying for those who crucified him.
Peace Emphasis: Christian pacifist movements (e.g., Quakers, Mennonites) stress nonviolence.
2. Buddhism
Core Teachings: Compassion (karuṇā), loving-kindness (mettā), non-harming (ahimsa).
Forgiveness: Central to breaking cycles of hatred and karma.
Peace Emphasis: Meditation practices cultivate inner peace, which radiates outward.
Key Figure: The Buddha taught that hatred is never overcome by hatred, but only by love.
3. Hinduism
Core Teachings: Ahimsa (non-violence), compassion toward all beings, forgiveness.
Forgiveness: Considered a divine quality and necessary for liberation.
Peace Emphasis: Harmony with dharma (cosmic order) brings peace within self and society.
Influence: Inspired figures like Mahatma Gandhi’s nonviolent resistance.
4. Islam
Core Teachings: Compassion (raḥma), mercy, peace (salaam).
Forgiveness: Allah is described as “Most Merciful, Most Forgiving,” and Muslims are encouraged to forgive others.
Peace Emphasis: The word “Islam” itself relates to “peace” through submission to God.
Examples: Prophet Muhammad forgave enemies at the conquest of Mecca.
5. Judaism
Core Teachings: Love your neighbor as yourself (Leviticus 19:18), forgiveness, repairing the world (tikkun olam).
Forgiveness: Sought during Yom Kippur, the Day of Atonement, through reconciliation with God and others.
Peace Emphasis: The concept of shalom means wholeness, peace, and harmony.
6. Daoism (Taoism)
Core Teachings: Harmony with the Tao (Way), compassion, humility.
Forgiveness & Peace: Non-contention, yielding instead of resisting, promotes peace.
Philosophy: Balance and nonviolence are natural virtues.
7. Sikhism
Core Teachings: Love of God, equality of all people, forgiveness, compassion, nonviolence.
Peace Emphasis: Guru Nanak, the founder, emphasized unity and peace among religions.
Forgiveness: Seen as a divine quality to emulate.
8. Indigenous and Tribal Traditions
Many emphasize harmony with nature and community, reconciliation after conflict, and restorative justice rather than retribution.
Philosophical Traditions
1. Stoicism (Ancient Greece & Rome)
Teachings: Control what is within your power, respond to others with reason, not anger.
Forgiveness: Wrongdoing is often ignorance, not malice—worthy of compassion, not hatred.
Peace: Inner peace is achieved by aligning with nature and virtue.
2. Confucianism
Teachings: Compassion (ren), harmony in relationships, forgiveness to maintain social order.
Peace: Peaceful societies arise from self-cultivation and benevolence.
3. Humanism (Modern)
Teachings: Emphasizes human dignity, compassion, tolerance, and peace without reliance on the divine.
Forgiveness & Love: Seen as essential to cooperation and progress.
Shared Values Across Traditions
Love/Compassion - treating others with kindness and empathy.
Forgiveness - breaking cycles of revenge and hatred.
Peace - inner tranquility that fosters social harmony.
Me: Most human beings may pretend to adhere to such beliefs, but in actuality, very few live their lives according to such high moral values. How do the above teachings deal with that problem?
ChatGPT:
That’s a profound observation, and you’re right—there’s often a big gap between ideals and real human behavior. Interestingly, most religions and philosophies you asked about are very aware of this tension and offer ways to grapple with it. Here’s how they deal with the fact that many people say they believe in love, peace, and forgiveness but often fall short:
Religious Approaches
Christianity
Problem acknowledged: Humans are “fallen” and prone to sin; no one fully lives up to Christ’s teachings.
Response: Emphasis on grace—forgiveness from God and continual striving (repentance) even after failing.
Solution: The moral struggle itself is part of the path; the ideal is perfection, but growth is the goal.
Buddhism
Problem acknowledged: Human beings are bound by ignorance, craving, and aversion (the “Three Poisons”).
Response: Falling short is natural; the Dharma is a gradual path of training the mind.
Solution: Compassion extends even to those who fail. Effort, not perfection, is what matters.
Hinduism
Problem acknowledged: People are caught in cycles of desire and karma. Few embody virtues fully.
Response: Teachings emphasize practice (yoga, meditation, devotion) to purify mind and actions.
Solution: Forgiveness and nonviolence are ideals, but progress happens over lifetimes, not instantly.
Islam
Problem acknowledged: Humans are forgetful (insān) and fall into wrongdoing.
Response: Allah’s mercy is greater than human failure; repeated repentance is encouraged.
Solution: The struggle (jihad al-nafs, the inner struggle) is central—trying to align with ideals despite weakness.
Judaism
Problem acknowledged: Humans repeatedly fail to uphold the covenant.
Response: Rituals like Yom Kippur directly address human imperfection—acknowledgment, repentance, and renewal.
Solution: God’s mercy and human responsibility work together. The gap is expected, not surprising.
Daoism
Problem acknowledged: Humans resist the Way and become rigid, greedy, or contentious.
Response: Teachings stress returning to simplicity and naturalness, rather than forcing virtue.
Solution: The more one tries to control appearances (pretending), the further one drifts from the Tao.
Sikhism
Problem acknowledged: Ego (haumai) blocks people from truly living love and forgiveness.
Response: Regular prayer, service, and remembrance of God help reduce ego.
Solution: The community (sangat) supports individuals in bridging the gap between ideals and practice.
Philosophical Approaches
Stoicism
Problem acknowledged: People are ruled by passions and ignorance, so they often fail to act virtuously.
Response: View others’ failures with patience—“They act wrongly out of ignorance.”
Solution: Your duty is to live up to the ideal yourself, not to expect perfection from others.
Confucianism
Problem acknowledged: People fall into selfishness and fail at virtue.
Response: Emphasis on education, ritual, and moral cultivation to gradually form character.
Solution: The wise set an example for others, slowly lifting society’s moral level.
Humanism
Problem acknowledged: Hypocrisy is common; ideals often outpace practice.
Response: Stress on honesty, progress, and building social systems (laws, education, dialogue) that encourage better behavior.
Solution: Not perfection, but steady improvement in human cooperation and empathy.
Shared Pattern
Across traditions, there’s a recognition that:
1. Humans fail often—this is expected.
2. Striving matters more than perfection.
3. Mercy and patience—from God, community, or self—are emphasized.
4. Systems of practice (ritual, prayer, meditation, philosophy, community) exist to bridge the gap between what people say and what they live.
In short: none of these traditions are surprised that most people “pretend” or fall short. They treat it as a universal human condition and design their teachings around guiding imperfect beings toward betterment, step by step.
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order, go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston
Monday, September 08, 2025
Coming To Grips With The Knowledge That You Are Only A Very Temporary Neural Network
ChatGPT first became publicly available on November 30, 2022, when OpenAI launched it as a free research preview. Since then, the various LLM chatbots that we have created have advanced rapidly, and ChatGPT now estimates that about 3 billion AI chatbot requests are made each day across all platforms. My expectation through all of this was that these very impressive interactions with LLM chatbots, which are clearly able to think much better than the average human DNA survival machine, would have sparked a higher level of self-reflection amongst us human DNA survival machines who currently populate the real world of human affairs. But it seems such is not the case. My expectation had been that seeing very large neural networks, simulated by massive amounts of software and hardware and apparently able to think much better than ourselves, would have had a greater impact on the worldviews held by my fellow human DNA survival machines. But it did not. Nor did it seem to shake our confidence in the deities that we had created with these same vast powers of thought and agency that our most advanced LLM models now seem to command.
Just as the complex neural network of a modern LLM chatbot can trick you into thinking that you are conversing with a real person, the complex neural network in your own skull can also trick you into thinking that you are a real person.
Figure 1 - Some modern LLMs now consist of 175 layers with 10,000 - 50,000 neurons in each layer in a Deep Neural Network with over 2 trillion weighted parameters.
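To get a feel for where a number like "2 trillion weighted parameters" comes from, here is a minimal Python sketch that counts the weights and biases of a plain fully-connected network using the ballpark figures from the caption above. This is purely illustrative: real LLMs are transformers built from attention and other specialized layers, not simple dense stacks, so the layer sizes here are assumptions, not any actual model's architecture.

```python
# Illustrative parameter count for a hypothetical dense feed-forward
# network. The layer counts and widths below are the rough figures
# from the caption (175 layers, up to 50,000 neurons per layer) and
# do NOT describe any real LLM's architecture.

def dense_parameter_count(layer_sizes):
    """Count weights + biases in a fully-connected network
    whose layers have the given numbers of neurons."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out  # weight matrix connecting the two layers
        total += n_out         # one bias per neuron in the next layer
    return total

# 175 layers of 50,000 neurons each
layers = [50_000] * 175
print(f"{dense_parameter_count(layers):,}")  # hundreds of billions of parameters
```

Even this naive dense layout lands in the hundreds of billions of parameters, which makes the trillion-plus scale of the largest transformer models, with their more parameter-efficient attention layers, easier to picture.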
I attribute this to most people simply not considering themselves to be a part of the natural world. Instead, most people, consciously or subconsciously, consider themselves to be a supernatural and immaterial spirit that is temporarily haunting a carbon-based body. Now, in everyday life, such a self-model is a very useful delusion. In truth, each of us tends to self-model ourselves as an immaterial Mind with consciousness that can interact with other immaterial Minds with consciousness too, even though we have no evidence that these other Minds truly do have consciousness. After all, all of the other Minds that we come into contact with on a daily basis could simply be acting as if they were conscious Minds that are self-aware. This has become quite self-evident with the recent rise of LLM chatbot models. Surely, a more accurate self-model would be for us to imagine ourselves as carbon-based robots. More accurately, in keeping with the thoughts of Richard Dawkins and Susan Blackmore, softwarephysics models humans as DNA survival machines and Meme Machines with Minds infected with all sorts of memes. Some of those memes are quite useful, and some are quite nasty. For more on that, see The Ghost in the Machine the Grand Illusion of Consciousness.
As newly-formed human DNA survival machines, with Minds that slowly emerge out of the darkness from the vast neural networks in our skulls, we all spend our first dozen or so years just trying to learn how to exist in our Universe, and then how to find our place in the real world of human affairs. Then the sex hormones that change everything kick in to dominate our behaviors with an obsession to achieve our primary objective as human DNA survival machines. We then spend most of our time and effort trying to hook up with other human DNA survival machines for the purpose of producing even more human DNA survival machines that will hopefully carry half of the Information in our DNA into the future and from there down through the ages. Doing so requires many resources, and in their pursuit, we human DNA survival machines get so jacked up with acquiring resources that our primary objective in life slowly shifts to simply pursuing even more of them with no end in sight, even as our original burst of youthful sex hormones slowly wanes. As time progresses and we become less useful as human DNA survival machines, our carbon-based bodies begin to wane as well, to make room for the new human DNA survival machines taking our place.
Becoming Comfortable With the Final Stages of Life
In a few weeks, I will be turning 74 years of age with the certain knowledge that I am now rapidly heading into the home stretch with not much time left. This is not an easy realization for most human DNA survival machines to accept. However, I have no regrets at all after living through what seems to me to be one of the very few charmed existences of all the sentient beings who have ever lived in this Universe of ours that has now become self-aware. Yes, I have had my disappointments, but I have always had the very good fortune of never suffering through most of the tragedies that can befall one in the real world of human affairs. Thus, I only find myself very grateful to be a very temporary observer of the majesty of it all. However, it seems that such feelings of gratitude are only rarely expressed by my fellow human DNA survival machines sharing this very rare planet in our galaxy. That is because, it seems to me, most of my fellow human DNA survival machines are totally lost in space and time. They do not know where they are, they do not know how they got here, and they certainly have no idea of how it all seems to work. Instead, they seem burdened by the many ancient mythological worldviews passed down to them from the very distant past, created by ancient and long-dead people with even less of an understanding of what it's all about than we have.
Anil Seth's View of Consciousness as a Controlled Hallucination
We have all experienced the emergent hallucinations of the LLM models of today. As we all have observed, our current LLM models will often tell us things that are demonstrably false, and with great conviction. For more on that, see Has AI Software Already Achieved a Level of Artificial Human Intelligence (AHI)?. Anil Seth is a professor of Cognitive and Computational Neuroscience at the University of Sussex and maintains that consciousness is a controlled hallucination constructed by the Mind to make sense of the Universe. This controlled hallucination constructs an internal model of the Universe within our Minds that helps us interact with the Universe in a controlled manner. It also allows us to talk to ourselves, just as we can now talk to generative language models like GPT-4, Gemini, Copilot and MetaAI. For some interesting YouTube videos of avatars run by generative LLMs see:
Dr. Alan D. Thompson
https://www.youtube.com/@DrAlanDThompson
Digital Engine
https://www.youtube.com/@DigitalEngine
Again, there is a feedback loop between our sensory inputs and the actions we take based on the current controlled hallucination in our Minds that forms our current internal model of the Universe. Reality is just the common controlled hallucination that we all agree upon. When people experience uncontrolled hallucinations, we say that they are psychotic or on a drug like LSD. Here is an excellent TED Talk by Anil Seth on the topic:
Your brain hallucinates your conscious reality
https://www.youtube.com/watch?v=lyu7v7nWzfo
and here is his academic website:
https://www.anilseth.com/
For more on this, see The Theology of Cosmic Self-Replicating Mathematical Information, What's It All About?, What's It All About Again?, The Self-Organizing Recursive Cosmos and The Self-Organizing Recursive Cosmos - Part II. In those posts, I covered my current working hypothesis for what it's all about. However, I have found that a large percentage of people desire a more religious worldview to make sense of it all, and perhaps to allow for a way for the neural networks in their skulls to carry on after they are destroyed. Such worldviews may provide some degree of comfort, but, I believe, at the expense of missing out on the full grandeur of it all. I personally find the thought of departing this Earth not knowing where I had been, how I had gotten here, and without a clue as to how it all seems to work to be the greatest tragedy that can befall a sentient being. So, for those who might be yearning for a new deity based more upon all that we currently know, rather than a deity from the ancient past, I would like to suggest that my current concept of a cosmic form of self-replicating mathematical Information might just do, in helping us come to grips with being only a very temporary neural network.
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order, go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston