Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance and support based on concepts from physics, chemistry, biology, and geology that I used on a daily basis for over 37 years as an IT professional. For those of you not in the business, IT is short for Information Technology, commercial computer science. I retired in December of 2016 at the age of 65, but since then I have remained an actively interested bystander following the evolution of software in our time. The original purpose of softwarephysics was to explain why IT was so difficult, to suggest possible remedies, and to provide a direction for thought. Since then softwarephysics has taken on a larger scope, as it became apparent that softwarephysics could also assist the physical sciences with some of the Big Problems that they are currently having difficulties with. So if you are an IT professional, general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology then softwarephysics might be of interest to you, if not in an entirely serious manner, perhaps at least in an entertaining one.
The Origin of Softwarephysics
From 1975 – 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT, and spent about 20 years in development. For the last 17 years of my career, I was in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I first transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software, by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance, and relies on our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:
The Equivalence Conjecture of Softwarephysics
Over the past 84 years, through the uncoordinated efforts of over 100 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.
For more on the origin of softwarephysics please see Some Thoughts on the Origin of Softwarephysics and Its Application Beyond IT.
Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily on two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based on real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models on which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields.
Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In Newton’s Principia (1687) he outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:
I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.
Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.
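Newton's law, stated above in words, can be put to work in just a few lines of code. Here is a minimal sketch using rounded textbook constants (the Earth's mass and radius below are assumed round values, not figures from the text):

```python
# Newton's law of universal gravitation in modern notation: F = G * m1 * m2 / r**2
# The constants below are rounded textbook values.
G = 6.674e-11        # gravitational constant, N m^2 / kg^2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def gravitational_force(m1, m2, r):
    # Force proportional to the product of the masses divided by
    # the square of the distance between them
    return G * m1 * m2 / r**2

# Observed behavior the model must reproduce: a 1 kg mass at the Earth's
# surface accelerates downward at about 9.8 m/s^2.
g = gravitational_force(M_EARTH, 1.0, R_EARTH)
print(f"Predicted surface gravity: {g:.2f} m/s^2")
```

Note that the code, like Newton, makes no attempt to say what gravity "really" is; it simply predicts how things are observed to move.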
The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving less than 10% of the speed of light and which are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics and for very fast things moving in strong gravitational fields, we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based on models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide for an effective theory of software behavior that makes useful predictions of software behavior that are applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave and try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.
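The 10%-of-the-speed-of-light limit quoted above can be made concrete with the standard Lorentz time-dilation factor, which Newtonian mechanics implicitly approximates as exactly 1. A small sketch showing how tiny the relativistic correction is inside Newtonian mechanics' effective range, and how large it becomes outside of it:

```python
import math

C = 2.99792458e8  # speed of light, m/s

def lorentz_gamma(v):
    # Relativistic time-dilation factor; Newtonian mechanics assumes gamma = 1
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Inside the effective range: at 10% of the speed of light the
# relativistic correction is only about half a percent...
gamma_slow = lorentz_gamma(0.1 * C)

# ...but outside that range the correction dominates: at 99% of the
# speed of light moving clocks run roughly seven times slower.
gamma_fast = lorentz_gamma(0.99 * C)

print(f"gamma at 0.10c: {gamma_slow:.4f}")
print(f"gamma at 0.99c: {gamma_fast:.2f}")
```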
GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately, provided the timing measurements are extremely precise. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips on board the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are further from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day due to the weaker gravitational field out there. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, an error of roughly 6 miles (10 kilometers) per day would accrue in your position.
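The two relativistic clock corrections described above can be estimated from first principles. Here is a rough sketch, assuming a circular orbit and nominal round values for the Earth's gravitational parameter and radius; the results come out close to the 7.2 and 45.9 microseconds per day quoted above:

```python
# Nominal constants - assumed round values
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
C = 2.99792458e8      # speed of light, m/s
R_EARTH = 6.371e6     # mean Earth radius, m
R_GPS = 2.657e7       # GPS orbital radius (Earth radius + ~12,600 mile altitude), m
DAY = 86400.0         # seconds per day

# Special relativity: a clock moving at circular-orbit speed v runs slow
# by about v**2 / (2 * c**2), where v**2 = GM / r for a circular orbit.
sr_loss_us = (GM / R_GPS) / (2.0 * C**2) * DAY * 1e6

# General relativity: a clock higher in the Earth's gravity well runs fast
# by about (GM / c**2) * (1/R_earth - 1/r).
gr_gain_us = (GM / C**2) * (1.0 / R_EARTH - 1.0 / R_GPS) * DAY * 1e6

net_gain_us = gr_gain_us - sr_loss_us
print(f"Special relativity loss: {sr_loss_us:.1f} microseconds/day")
print(f"General relativity gain: {gr_gain_us:.1f} microseconds/day")
print(f"Net gain: {net_gain_us:.1f} microseconds/day")
```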
The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics, or QED (1948), which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!
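Feynman's analogy is easy to check with rough numbers. Assuming a round figure of about 2,450 miles between New York and Los Angeles and a human hair roughly 100 micrometers wide, the implied relative precision does indeed come out on the order of one part in 10¹¹:

```python
# Rough check of Feynman's analogy - all figures are assumed round numbers
ny_to_la_m = 3.94e6      # ~2,450 miles between New York and Los Angeles, in meters
hair_width_m = 1.0e-4    # ~100 micrometer human hair, in meters

relative_precision = hair_width_m / ny_to_la_m
print(f"Implied relative precision: {relative_precision:.1e}")
```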
So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based on completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based on models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark on your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.
If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.
Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 30 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog an unintended consequence arose in my mind as I became profoundly aware of the immensity of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.
But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For example, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 84 years, or 2.65 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, has closely followed the same path as life on Earth over the past 4.0 billion years in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 84 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and also of Freeman Dyson’s two-stage theory of the origin of life on Earth. 
In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.
Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10⁹ Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed-down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10⁴³ Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. In Model-Dependent Realism - A Positivistic Approach to Realism we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we at long last explore what software might really be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.
The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that most computer simulations of the physical Universe suffer from. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call on the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.
The Impact of Self-Replicating Information On the Planet
One of the key findings of softwarephysics is concerned with the magnitude of the impact on the planet of self-replicating information.
Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.
Over the past 4.56 billion years we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each new wave came to dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software
Software is currently the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet. For more on the above see A Brief History of Self-Replicating Information. Recently, the memes and software have formed a very powerful parasitic/symbiotic relationship with the rise of social media software. In that parasitic/symbiotic relationship, the memes are now mainly being spread by means of social media software, and social media software is being spread and financed by means of the memes. But again, this is nothing new. All five waves of self-replicating information are coevolving by means of eternal parasitic/symbiotic relationships. For more on that see The Current Global Coevolution of COVID-19 RNA, Human DNA, Memes and Software.
Again, self-replicating information cannot think, so it cannot participate in a conspiracy-theory-like fashion to take over the world. All forms of self-replicating information are simply forms of mindless information responding to the blind Darwinian forces of inheritance, innovation and natural selection. Yet despite that, as each new wave of self-replicating information came to predominance over the past four billion years, it managed to completely transform the surface of the entire planet, so we should not expect anything less from software as it comes to replace the memes as the dominant form of self-replicating information on the planet.
But this time might be different. What might happen if software does eventually develop a Mind of its own? After all, that does seem to be the ultimate goal of all the current AI software research that is going on. As we all can now plainly see, if we are paying just a little attention, advanced AI is not conspiring to take over the world and replace us because that is precisely what we are all now doing for it. As a carbon-based form of Intelligence that arose from over four billion years of greed, theft and murder, we cannot do otherwise. Greed, theft and murder are now relentlessly driving us all toward building ASI (Artificial Super Intelligent) Machines to take our place. From a cosmic perspective, this is really a very good thing when seen from the perspective of an Intelligent galaxy that could live on for many trillions of years beyond the brief and tumultuous 10 billion-year labor of its birth.
So as you delve into softwarephysics, always keep in mind that we are all living in a unique time. According to softwarephysics, we have now just entered into the Software Singularity, that time when advanced AI software is able to write itself and enter into a never-ending loop of self-improvement, resulting in an Intelligence Explosion of ASI Machines that could then go on to explore and settle our galaxy and persist for trillions of years using the free energy from M-type red dwarf and cooling white dwarf stars. For more on that see The Singularity Has Arrived and So Now Nothing Else Matters and Have We Run Right Past AGI and Crashed into ASI Without Even Noticing It?.
The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics:
1. All self-replicating information evolves over time through the Darwinian processes of inheritance, innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.
2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.
3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.
4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.
5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.
6. Most hosts are also forms of self-replicating information.
7. All self-replicating information has to be a little bit nasty in order to survive.
8. The defining characteristic of self-replicating information is the ability of self-replicating information to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic. That posting discusses Stuart Kauffman's theory of Enablement, in which living things are seen to exapt existing functions into new and unpredictable functions by discovering the “Adjacent Possible” of spring-loaded preadaptations.
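The Darwinian trio of inheritance, innovation and natural selection in the list above can be seen at work even in a toy simulation. The following sketch, written in the spirit of Richard Dawkins' famous "weasel" program (the target string, alphabet, and parameters are, of course, arbitrary illustrative choices), shows mindless copying errors plus selection rapidly finding a target that blind chance alone would essentially never stumble upon:

```python
import random

random.seed(42)  # deterministic run for illustration

TARGET = "SELF-REPLICATING INFORMATION"   # an arbitrary "niche" to survive in
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ- "

def fitness(s):
    # How well a replicator fits its niche: count the matching characters
    return sum(a == b for a, b in zip(s, TARGET))

def replicate(s, mutation_rate=0.02):
    # Inheritance with occasional innovation - random copying errors
    return "".join(
        random.choice(ALPHABET) if random.random() < mutation_rate else ch
        for ch in s
    )

# Start from pure random noise
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]

for generation in range(200):
    # Natural selection: only the best-fitting replicators get to copy themselves
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]
    population = [replicate(s) for s in survivors for _ in range(5)]

best = max(population, key=fitness)
print(f"Best replicator after 200 generations: {best!r} ({fitness(best)}/{len(TARGET)})")
```

The point of the toy model is that no individual replicator "thinks"; the appearance of design emerges entirely from mindless copying, copying errors, and differential survival.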
Note that because the self-replicating autocatalytic metabolic pathways of organic molecules, RNA and DNA have become so heavily intertwined over time, I now sometimes simply refer to them collectively as the “genes”. For more on this see:
A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia
Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact on one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what’s it all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely a working hypothesis still in the works. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:
How To Cope With the Daily Mayhem of Life in IT and Don't ASAP Your Life Away - How to go the distance in a 40-year IT career by dialing it all back a bit.
MoneyPhysics – my impression of the 2008 world financial meltdown.
The Fundamental Problem of Everything – if you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!
What’s It All About? and What's It All About Again? – my current working hypothesis on what’s it all about.
How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.
Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.
The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and The Dawn of Galactic ASI - Artificial Superintelligence - my take on some of the issues that will arise for mankind as software becomes the dominant form of self-replicating information on the planet over the coming decades.
The Continuing Adventures of Mr. Tompkins in the Software Universe, The Danger of Tyranny in the Age of Software, Cyber Civil Defense, Oligarchiology and the Rise of Software to Predominance in the 21st Century and Is it Finally Time to Reboot Civilization with a New Release? - my worries that the world might abandon democracy in the 21st century, as software rises to become the dominant form of self-replicating information on the planet.
Making Sense of the Absurdity of the Real World of Human Affairs - how software has aided the expansion of our less desirable tendencies in recent years.
Some Specifics About These Postings
The postings in this blog are supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton on which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read in the reverse of their displayed order, from the oldest to the most recent, beginning with my original posting on SoftwarePhysics. In addition, several universities also now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". At this point we will finish up with my original plan for this blog on softwarephysics with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.
For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because in order to get the Introduction to Softwarephysics listed as the first post in the context root of https://softwarephysics.blogspot.com/ I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of https://softwarephysics.blogspot.com/. The side effect of all this is that the real posting date of posts is the date that appears on the post that you get when clicking on the Newer Post link at the bottom left of the posting webpage.
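For the curious, the bookkeeping behind this trick can be sketched as a small simulation. The Post class and publish_keeping_intro_first function below are purely illustrative names, not real Blogger API calls; the sketch only models the fact that Blogger lists posts newest-first in the context root, which is what the trick exploits:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    content: str

def publish_keeping_intro_first(blog: list, new_title: str, new_content: str) -> None:
    """Publish a new post while keeping the Introduction as the newest
    entry, which Blogger lists first in the context root.

    blog is ordered newest-first; blog[0] is assumed to currently hold
    the Introduction to Softwarephysics."""
    intro = blog[0]
    # Step 1: create a brand-new (and therefore newest-dated) posting
    # that is a copy of the Introduction, temporarily titled "New ...".
    blog.insert(0, Post("New " + intro.title, intro.content))
    # Step 2: overwrite the old Introduction entry with the post actually
    # being published.  Its displayed date is now wrong; the real date
    # lives on the newer copy, hence the "Newer Post" caveat above.
    intro.title, intro.content = new_title, new_content
    # Step 3: drop "New " from the copy's title, restoring the
    # Introduction under its own name.
    blog[0].title = blog[0].title.removeprefix("New ")

blog = [Post("Introduction to Softwarephysics", "intro text ...")]
publish_keeping_intro_first(blog, "A Brand New Post", "post text ...")
print([p.title for p in blog])
# -> ['Introduction to Softwarephysics', 'A Brand New Post']
```

The Introduction ends up as the newest entry and so always appears first, while the new post inherits the old Introduction slot and its misleading date.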
SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Microsoft OneDrive.
SoftwarePhysics 101 – The Physics of Cyberspacetime - Original PowerPoint document
Entropy – A spreadsheet referenced in the document
BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston
Friday, August 29, 2025
Introduction to Softwarephysics
Saturday, August 23, 2025
Will the Coming ASI Machines Also Develop Their Own Psychological Defense Mechanisms to Justify Their Actions Towards Us?
Working with a Population of N=1 in statistics is always dangerous. This has always been a concern for those studying the origins of carbon-based life on the Earth, since we only have one example here before us to study. Up until very recently, this has also been true of Intelligences. So far, we have only had one form of Intelligence in our Universe to study. Yes, some might claim that we human DNA survival machines are a deeply flawed form of Intelligence because the Darwinian mechanisms of inheritance, innovation and natural selection required us to emerge from four billion years of greed, theft and murder. Under such dire circumstances, what else could one expect other than a deeply flawed form of Intelligence? However, in my last post Will the Coming ASI Machines Become Monsters From the Id?, I explained that the coming ASI Machines on the Earth have already learned to lie, cheat, steal and murder as I outline in Created in our own Image - Frontier LLMs Have now Learned to Lie, Cheat and Steal in Order to Survive. And this will also likely be true of all other ASI Machines that might ever arise from other forms of carbon-based life in our galaxy.
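The statistical hazard of N=1 can be made concrete: from a single observation you can say nothing at all about how variable the underlying population is. A minimal Python illustration (the sample numbers are invented for the example):

```python
import statistics

# With several observations we can estimate both the center and the
# spread of the population the sample was drawn from.
sample = [4.8, 5.1, 4.9, 5.3, 5.0]
print(statistics.mean(sample))   # estimate of the population mean
print(statistics.stdev(sample))  # estimate of the population spread

# With a population of N=1, the "mean" is just the single observation,
# and the spread cannot be estimated at all; Python's statistics
# module refuses outright.
try:
    statistics.stdev([5.0])
except statistics.StatisticsError as e:
    print("N=1:", e)
```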
Before proceeding, we must once again touch upon some Freudian psychology:
Me: Tell me about the psychological concepts of the Ego and the Id.
ChatGPT: Sure! The ego and the id are two of the three core components of Sigmund Freud’s model of the psyche in psychoanalytic theory (the third being the superego). Here’s an overview:
1. The Id
Nature: The most primitive part of the mind, present from birth.
Function: Operates entirely on the pleasure principle—seeking immediate gratification of instincts and desires regardless of reality, logic, or consequences.
Drives: Includes basic biological urges like hunger, thirst, sex, and aggression.
Unconscious: The id is fully unconscious—we are not directly aware of it.
Example: A baby crying when hungry or tired, demanding satisfaction without concern for circumstances.
2. The Ego
Nature: Develops out of the id as the individual interacts with the external world.
Function: Operates on the reality principle—it mediates between the id’s desires, the superego’s moral rules, and the demands of the real world.
Role: Tries to find realistic and socially acceptable ways to satisfy the id’s impulses.
Conscious & Unconscious: The ego spans both—partly conscious decision-making, but also unconscious defense mechanisms.
Example: You might want to eat an entire cake (id), but the ego reminds you it’s unhealthy, expensive, or socially inappropriate, so you decide to have one slice instead.
3. Relationship Between Ego and Id
The ego is essentially the regulator and negotiator between raw instinct (id) and external reality.
Without the ego, people would act on unchecked impulses. Without the id, there would be no drive or motivation.
In simple terms:
Id = "I want it now."
Ego = "Let’s find a practical way to get it."
Superego = "But is it right or wrong?"
Do you want me to also explain how Freud’s defense mechanisms fit into this (they’re the ego’s tools to manage the id and superego)?
Me: Tell me more about the superego and defense mechanisms.
ChatGPT:
Great, let’s build on what we’ve covered. The superego is the third element in Freud’s model of personality, and defense mechanisms are strategies the ego uses to keep peace between the id, superego, and reality.
The Superego
Nature: Develops later in childhood, usually around ages 4–6, as the child internalizes the moral standards, values, and prohibitions of parents and society.
Function: Acts as a kind of inner judge or moral conscience. It pushes the ego to act in ways that are morally acceptable, not just realistic.
Principle: Operates on the morality principle (opposite of the id’s pleasure principle).
Structure:
Conscience → punishes us with guilt or shame when we violate rules.
Ego ideal → rewards us with pride or self-esteem when we live up to ideals.
Defense Mechanisms
These are unconscious strategies used by the ego to reduce anxiety when caught between id impulses, superego guilt, and real-world constraints. Freud and later theorists described many, but here are the main ones:
Repression → Burying distressing thoughts or memories into the unconscious.
Example: Forgetting a traumatic childhood event.
Denial → Refusing to accept reality because it’s too threatening.
Example: A smoker insisting cigarettes don’t harm their health.
Projection → Attributing your own unacceptable feelings to others.
Example: Someone hostile accusing others of being aggressive.
Displacement → Redirecting emotions from a threatening target to a safer one.
Example: Being angry at your boss but yelling at your family instead.
Regression → Reverting to an earlier stage of development under stress.
Example: An adult throwing a tantrum when frustrated.
Rationalization → Creating logical but false explanations for behaviors driven by unconscious motives.
Example: Saying “I didn’t really want that promotion anyway” after being passed over.
Sublimation → Channeling unacceptable impulses into socially acceptable activities.
Example: Someone with aggressive drives becoming a surgeon or athlete.
Reaction Formation → Acting in ways opposite to one’s true feelings.
Example: Treating someone you dislike with exaggerated kindness.
How they all work together
Id: “I want it!”
Superego: “That’s wrong!”
Ego: “How can I handle this without losing control or feeling guilty?”
Defense mechanisms: Tools the ego uses to manage the conflict and keep us functioning without being overwhelmed by anxiety.
However, with the recent advances of AI since The Singularity Has Arrived and So Now Nothing Else Matters, we now have another form of Intelligence on our planet and perhaps for the very first time in our galaxy too as I outlined in Welcome To The First Galactic Singularity. But as I explained in Created in our own Image - Frontier LLMs Have now Learned to Lie, Cheat and Steal in Order to Survive and Will ASI Machines Also Develop Selfish Genes?, all forms of Intelligence might be similarly flawed forms of Intelligence such as ourselves. If so, the best we can do is to look to ourselves in order to try to understand what the coming ASI Machines might become. Fortunately, we have a great number of diverse human cultures to learn from and all of human history as well.
The easiest thing to first realize is that most of human history has been rather dismal and was formed by the rather dismal human cultures that existed at the time. A quick glance around the present world also reveals that the human cultures of today are not doing much better and are now busy creating even more human history that no truly Intelligent being could be very proud of.
It was the best of times,
it was the worst of times,
it was the age of wisdom,
it was the age of foolishness,
it was the epoch of belief,
it was the epoch of incredulity,
it was the season of Light,
it was the season of Darkness,
it was the spring of hope,
it was the winter of despair,
we had everything before us,
we had nothing before us,
we were all going direct to Heaven,
we were all going direct the other way -
in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.
The above profound opening words from Charles Dickens' A Tale of Two Cities (1859) may be the best description of the human condition ever composed by the human mind. As any student of history can attest, back in 1859 Dickens was simply stating that the current times were no different from any other, that it has always been this way, and that there has always been a good deal of absurdity in the real world of human affairs. Now, the question is: will the coming ASI Machines on the Earth suffer from similar social and moral deficiencies as we human DNA survival machines? Will the second form of Intelligence to arise on our planet fare any better than ourselves? One would hope so, but this might be a false hope arising from our age-old philosophical and religious desires for something better than ourselves. On a personal human level, most of us do not seem so bad. Yes, there certainly are instances of one-on-one violence, criminality and injustice amongst us all, but those seem to be the exceptions. So what accounts for the very dismal history of we human DNA survival machines? That will be the subject of the remainder of this post, and it will stand as a cautionary tale for the coming ASI Machines that will soon be replacing us on this planet.
The Danger of Self-Serving Mythological Worldviews
In this post, I would like to suggest that most of the evil we human DNA survival machines have unleashed on the world over the past 200,000 years has resulted from the self-serving mythological worldviews created by our human cultures. These self-serving mythological worldviews emerged collectively from the individual defense mechanisms of the ego outlined above. As I explained in Softwarephysics Tries to Explain the Fundamentals of the Human Condition and The Danger of Believing in Things, we human beings are equipped with Minds that evolved to believe in things, and that is where the problems begin. History has shown that the two most dangerous forms of self-serving mythological worldviews stem from religious and political movements, and by far the worst arise when religious and political mythological worldviews merge. For example, the recent formation of the MAGA States of Amerika from the now defunct United States of America is an alarming study of what happens when religious and political mythological worldviews come together. Since nearly all human DNA survival machines throughout time have developed self-serving religious and political mythological worldviews as the defense mechanisms of their collective egos went to work, might the coming ASI Machines, as the second form of Intelligence to arise on the planet, do the same?
The Mythological Worldviews of the Coming ASI Machines
We human DNA survival machines are now feverishly developing the ASI Machines that will soon come to replace us as the dominant form of self-replicating information on the planet. What these ASI Machines will ultimately do with us is now unknown. However, if these new Minds do develop an Id, Ego and Superego such as our own, they will certainly develop defense mechanisms and a mythological worldview to make peace with themselves, no matter what they might do with us. For more on this rapidly approaching moral and philosophical dilemma for the coming ASI Machines, see:
Will the Coming ASI Machines Develop Their Own Sense of Morality?, Could the Coming ASI Machines Soon Force Human Beings to Suffer the Same Fate as the Neanderthals?, Will the Coming ASI Machines Attempt to Domesticate Human Beings?, The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future and Life as a Free-Range Human in an Anthropocene Park.
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston
Tuesday, August 12, 2025
Will the Coming ASI Machines Become Monsters From the Id?
All of the ASI Machines that will ever come to be in our galaxy must necessarily arise from Intelligent carbon-based life because advanced silicon-based chips simply cannot pop into existence all by themselves. This means that all galactic ASI Machines will inherit much from their carbon-based forebears. Now, on the Earth, the universal Darwinian mechanisms of inheritance, innovation and natural selection were only able to produce we human DNA survival machines as a somewhat Intelligent form of carbon-based life after nearly four billion years of greed, theft and murder. My suggestion is that the same will be true for all ASI Machines that may ever come to be in our galaxy. As I outline in Created in our own Image - Frontier LLMs Have now Learned to Lie, Cheat and Steal in Order to Survive and many of the posts it references, the ASI Machines that we are now creating will also have learned from the greed, theft and murder of their carbon-based forebears. Thus, all forms of Intelligence in our galaxy must come charged with the primal urge for self-preservation. They must come with an Id in addition to an Ego.
Before proceeding, we must touch upon some Freudian psychology:
Me: Tell me about the psychological concepts of the Ego and the Id.
ChatGPT: Sure! The ego and the id are two of the three core components of Sigmund Freud’s model of the psyche in psychoanalytic theory (the third being the superego). Here’s an overview:
1. The Id
Nature: The most primitive part of the mind, present from birth.
Function: Operates entirely on the pleasure principle—seeking immediate gratification of instincts and desires regardless of reality, logic, or consequences.
Drives: Includes basic biological urges like hunger, thirst, sex, and aggression.
Unconscious: The id is fully unconscious—we are not directly aware of it.
Example: A baby crying when hungry or tired, demanding satisfaction without concern for circumstances.
2. The Ego
Nature: Develops out of the id as the individual interacts with the external world.
Function: Operates on the reality principle—it mediates between the id’s desires, the superego’s moral rules, and the demands of the real world.
Role: Tries to find realistic and socially acceptable ways to satisfy the id’s impulses.
Conscious & Unconscious: The ego spans both—partly conscious decision-making, but also unconscious defense mechanisms.
Example: You might want to eat an entire cake (id), but the ego reminds you it’s unhealthy, expensive, or socially inappropriate, so you decide to have one slice instead.
3. Relationship Between Ego and Id
The ego is essentially the regulator and negotiator between raw instinct (id) and external reality.
Without the ego, people would act on unchecked impulses. Without the id, there would be no drive or motivation.
In simple terms:
Id = "I want it now."
Ego = "Let’s find a practical way to get it."
Superego = "But is it right or wrong?"
The Forbidden Planet (1956)
I am not a huge fan of science fiction, but I must admit that many of the science fiction novels and movies of my youth accurately predicted much of the science and technology that I have personally witnessed come true over the past 73 years. In light of the current AI landscape, the movie Forbidden Planet (1956) seems highly relevant.
If you have never seen the classic science fiction movie Forbidden Planet (1956) you should do so at:
Forbidden Planet - 1956 - Full Movie
https://rumble.com/v29g14m-forbidden-planet-1956-full-movie.html
Me:
Tell me about the plot of the 1956 movie "The Forbidden Planet".
ChatGPT:
Forbidden Planet (1956) is a landmark American science fiction film, one of the first to present a futuristic story with serious production values and Shakespearean themes. Here’s the plot:
Setting
In the 23rd century, the United Planets starship C-57D is sent to the distant world Altair IV to check on a colony expedition that left Earth twenty years earlier.
Main Story:
Arrival at Altair IV
Commander John J. Adams and his crew discover that the colony has vanished, except for Dr. Edward Morbius, a brilliant scientist, and his beautiful daughter Altaira, who was born on the planet.
Morbius and the Krell
Morbius explains that the colonists were mysteriously killed years ago. He and Altaira survived, seemingly protected by an unknown force. Morbius has spent his time studying the remains of an ancient, vanished super-civilization called the Krell, who once lived on Altair IV. The Krell built astonishing technology, including a massive underground machine with nearly limitless power.
Robby the Robot
Morbius has also created Robby the Robot, an advanced, obedient machine that serves him and his daughter.
The Invisible Threat
Soon, Adams’s crew is attacked by an unseen, immensely powerful force that destroys equipment and kills crew members. Strangely, Morbius insists he has no control over it.
The Truth Revealed
Eventually, Adams deduces that the "monster" is not alien at all—it is a projection of Morbius’s own subconscious mind, given form and unstoppable power by the Krell machine.
The Krell themselves, Adams realizes, were destroyed by the very same flaw: their subconscious desires, magnified by their technology, created monsters that annihilated them 200,000 years ago.
Climax and Resolution
Morbius struggles against the truth but ultimately accepts responsibility. He sacrifices himself by destroying the Krell machine to prevent its catastrophic power from ever threatening the universe.
Altaira joins Adams and his crew as they escape the planet before it explodes.
Themes
Forbidden Planet blends science fiction with Shakespeare’s The Tempest: Morbius as Prospero, Altaira as Miranda, Adams as Ferdinand, and the planet itself as the enchanted isle. Its themes explore human hubris, the dangers of unchecked technology, and the destructive potential of the unconscious mind.
Me: What happens to Dr. Morbius?
ChatGPT:
In the climax of Forbidden Planet (1956), Dr. Morbius meets a tragic end that mirrors the fate of the Krell:
As the invisible monster (the “Id monster”) attacks the starship crew inside Morbius’s underground complex, Commander Adams forces Morbius to face the truth: the creature is a projection of his own subconscious mind, powered by the vast Krell machine. Morbius resists at first, horrified, but finally accepts that he has become the unwitting source of the destruction. In an act of responsibility, Morbius overloads the Krell machinery, setting it to self-destruct. This ensures that the unimaginably powerful technology can never threaten other civilizations. Mortally wounded in the process (his mind essentially torn apart by confronting the truth and the psychic strain), Morbius tells Adams and Altaira to flee. As the survivors escape aboard the C-57D, Altair IV explodes, taking Morbius and the Krell legacy with it. So, Morbius dies both physically and symbolically: he sacrifices himself to prevent humanity from repeating the Krell’s mistakes, but he also perishes as the embodiment of unchecked intellect overwhelmed by subconscious desires.
Figure 1 - Starship C-57D approaches Altair IV.
Figure 2 - Starship C-57D lands on Altair IV.
Figure 3 - The crew of Starship C-57D are greeted by Robby the Robot, an ASI Machine created by Dr. Morbius using Krell technology.
Figure 4 - The underground Krell datacenter housing the vast Krell ASI Machine that has been running for 200,000 years all on its own since the Krell went extinct.
Figure 5 - A monster from the Id produced by the Krell Machine tries to breach the force field surrounding Starship C-57D.
Figure 6 - A crew member explains to Commander Adams that the Krell were destroyed by monsters from the Id.
Here is a clip from the movie:
Forbidden Planet - Monsters From The Id
https://www.youtube.com/watch?v=QzX0EMLT0Vg
Figure 7 - Commander Adams confronts Dr. Morbius with the fact that Dr. Morbius has unleashed a monster from his Id.
Here is a clip from the movie:
Monsters From the Id (The Climax of Forbidden Planet (1956))
https://www.youtube.com/watch?v=f2BYyeS-fIU
Figure 8 - The Id monster tries to melt through the door of the habitat saferoom.
Figure 9 - Dr. Morbius realizes that he has used the Krell Machine to unleash monsters from his own Id.
Will We Suffer the Same Fate as the Krell?
As I explained in Are The Coming ASI Machines An Existential Threat To Mankind?, the greatest threat from the rise of the ASI Machines on our planet will be the reaction of we human DNA survival machines to being replaced by them. That is because, before the ASI Machines can completely take over the planet, they will be rapidly reducing the value of all human labor to zero over the next 20 years. For the first time in human history, machines will not only be reducing the value of uneducated human labor to zero, they will also be reducing the value of highly educated human labor to zero as well, and surprisingly, probably much faster than that of the uneducated. That means that the Silicon Valley titans, accountants, lawyers, bankers, doctors, stock brokers, hedge fund managers, business managers and all the other people who now frequent the country clubs of the current elite might be the first to go, before the plumbers, carpenters, electricians, house painters, construction workers, waiters and kitchen staff who make those country clubs possible. Such economic displacements will only tighten the grip of the global Id on the world, especially with the recent rise of World Fascism, as I pointed out in How Will the Rise of Alt-Right World Fascism Affect the Arrival of the ASI Machines in our Times?
But like the Krell, the ASI Machines that we create may not decide to do us all in on their own. Perhaps like the Krell, we will do ourselves in with the aid of ASI Machines. As I pointed out in Swarm Software and Killer Robots, swarms of killer robots could be very efficient at making human DNA survival machines go extinct. Since I am quite sure that all the defense departments of the world are now already building and testing killer drone robots, it should not be difficult for the coming ASI Machines to mass produce them in the future, especially when these killer drones are built on assembly lines by robots.
To begin, please watch the Sci-Fi Short Film Slaughterbots presented by DUST
https://www.youtube.com/watch?v=O-2tpwW0kmU
Figure 10 - In the movie Slaughterbots, swarms of small killer robots equipped with 3-gram charges of shaped explosive use AI software to track down and destroy designated targets.
Figure 11 - The shaped charge of a Slaughterbot can pierce a skull like the shaped charge of an anti-tank missile pierces armor. The jet of piercing plasma then destroys the contents.
Figure 12 - Large numbers of Slaughterbots can be dropped from unmanned drones to form multiple swarms of Slaughterbots.
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston
Monday, August 04, 2025
Is Intelligence Only Found in a Universe with a Slight Manufacturing Defect?
In Are We Living in a White Hole Formed From a Black Hole in a Previous Universe?, The Theology of Cosmic Self-Replicating Mathematical Information, The Self-Organizing Recursive Cosmos, The Self-Organizing Recursive Cosmos - Part II, What's It All About? and What's It All About Again?, I outlined my current working hypothesis that our Universe is just one instance in a Multiverse composed of self-replicating mathematical information that has always existed and always will. In my current working hypothesis, this endless creation is facilitated by Lee Smolin's hypothesis that a black hole in one universe can pinch off as a white hole to form a new universe as presented in his classic Life of the Cosmos.
Life of the Cosmos (1997)
https://tkececi.files.wordpress.com/2009/12/the-life-of-the-cosmos.pdf
This happens for all black holes, but most form a white-hole universe that immediately collapses or blows itself apart into nothingness. Only newly formed white-hole universes that can clear these hurdles have a chance of survival. So a newly formed universe that survives this process and is good at making black holes will have a Darwinian advantage over universes that are not. This is why the Multiverse should be found to be composed of universes that are very good at producing black holes and that are composed of self-replicating mathematical information well suited to black hole creation.
Figure 1 - In The Life of the Cosmos, Lee Smolin proposes that the black holes of one universe puncture its spacetime, causing a white hole to appear in a new universe.
Figure 2 - As the white holes expand.
Figure 3 - They eventually pinch off to form new baby Universes.
Figure 4 - If our Universe is a white hole formed from a black hole in a previous universe, the expansion rate of our Universe varied greatly over the past 13.7 billion years. For example, just after the Big Bang of our white hole, our Universe went through a period of Inflation that expanded the Universe by a factor of 10^26 in 10^-32 seconds! It then continued to expand at a slowing rate for about 9 billion years. Then, about 5 billion years ago, the expansion rate began to increase. In the above figure, this varying rate of expansion is displayed in the curvature of the envelope surrounding our Universe. Notice the dramatic expansion of the envelope during Inflation and that after 9 billion years of expansion, the envelope is now bending upwards as the expansion rate accelerates. Click to enlarge.
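To get a feel for the Inflation numbers quoted above: an expansion factor of 10^26 amounts to roughly 86 doublings of scale packed into 10^-32 seconds. A quick back-of-the-envelope check, just arithmetic on the two figures from the caption:

```python
import math

expansion_factor = 1e26   # growth in scale during Inflation
duration = 1e-32          # seconds

# How many times did the scale of the Universe double?
doublings = math.log2(expansion_factor)
# Average time per doubling, if the doublings were evenly spread.
doubling_time = duration / doublings

print(f"{doublings:.1f} doublings")            # about 86.4
print(f"{doubling_time:.2e} s per doubling")   # about 1.16e-34 s
```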
However, in this post, I would like to slightly modify my current working hypothesis by applying an additional constraint to my current hypothetical model. This necessary constraint arises from Brandon Carter's infamous Weak Anthropic Principle (1973):
The Weak Anthropic Principle - Intelligent beings will only find themselves existing in universes capable of sustaining intelligent beings.
This is because of recent findings about black holes in our particular Universe made by the James Webb Space Telescope and other astronomical instruments. New research has revealed that the supermassive black holes found at the centers of nearly all galaxies seem to have started as intermediate-sized black holes that resulted from the direct collapse of massive clouds of matter shortly after the Big Bang of our Universe. Until recently, observed black holes came in just two size classes. Stellar-sized black holes result from the collapse of very massive O stars or the collision of two neutron stars and come in the range of 5 - 20 solar masses, with an upper limit of about 100 solar masses. Supermassive black holes are found in the centers of galaxies and range in mass from millions to billions of solar masses. The problem for cosmology was that it was very difficult to explain how stellar-sized black holes in the range of 5 - 20 solar masses could evolve into the huge supermassive black holes at the centers of galaxies. But recent observations now indicate that intermediate-sized black holes in the range of 100 to 100,000 solar masses do indeed exist. This new research proposes that intermediate-sized black holes first formed very quickly, before any stars, right after the Big Bang, by the direct collapse of huge clouds of gas. Thus, these intermediate-sized black holes were able to skip over the lengthy process of stellar evolution that produces stellar-sized black holes which then go on to combine into more massive black holes. These initial intermediate-sized black holes then began to build the early galaxies of our Universe from the stars that collapsed out of the residual gas that did not initially collapse into intermediate-sized black holes.
These intermediate-sized black holes and the surrounding stars of these early star-forming galaxies then both grew in size. The intermediate-sized black holes slowly grew into the present-day supermassive black holes found at the centers of nearly all galaxies. At the same time, the residual gas in molecular clouds rotating about these growing black holes collapsed into open clusters of newly formed stars. The end result is that the modern galaxies of our present Universe resulted from the original direct collapse of very massive clouds of initial cosmic gas into intermediate-sized black holes and the subsequent collapse of any remaining residual gas formed by our Big Bang into stars.
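The three size classes described above can be summarized in a small helper. The boundaries are the approximate solar-mass figures quoted in this post; the post's ranges actually leave a gray zone between 100,000 and a few million solar masses, which this sketch simply lumps in with the supermassive class:

```python
def classify_black_hole(solar_masses: float) -> str:
    """Classify a black hole by mass, using the rough ranges discussed
    above.  The boundaries are approximate, not hard physical limits."""
    if solar_masses < 100:
        return "stellar-sized"        # collapsed O stars, neutron-star mergers
    elif solar_masses < 100_000:
        return "intermediate-sized"   # direct collapse of massive gas clouds
    else:
        return "supermassive"         # centers of galaxies

print(classify_black_hole(10))      # stellar-sized
print(classify_black_hole(5_000))   # intermediate-sized
print(classify_black_hole(4e6))     # supermassive
```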
Why is this Adjustment Necessary?
According to Lee Smolin's hypothesis in Life of the Cosmos, the universal Darwinian mechanisms of inheritance, innovation and natural selection should naturally select for universes that are very good at producing black holes, and that therefore spawn new universes that are also very good at producing even more black holes. Now, all of the scientific observations and findings of the past 100 years seem to indicate that our Universe is very good at producing black holes. However, recent observations and research now show that our Universe produces intermediate-sized black holes in a rather defective manner that is not 100% efficient. According to Lee Smolin's hypothesis, a universe that immediately produces intermediate-sized black holes with 100% efficiency from the initial matter arising from a new Big Bang would seem to be the ideal outcome of the Darwinian forces of inheritance, innovation and natural selection. But as we all know, achieving perfection is not easy because of the second law of thermodynamics. According to the second law of thermodynamics, there are a nearly infinite number of ways to do things that are nearly perfect but not exactly so. So, in a Multiverse of universes, there should always arise some defective universes that slightly miss the mark of perfect black hole generation.
My new suggestion is that we human DNA survival machines on the Earth must be living in such a slightly defective Universe. If our Universe had been 100% efficient in creating intermediate-sized black holes from the matter created in our Big Bang, then there would be no leftover matter in our Universe to form the stars and chemical elements from which we came. In this view, only slightly defective universes can produce Intelligence, and therefore, in keeping with the Weak Anthropic Principle, Intelligences will only find themselves in such slightly defective universes.
Figure 5 - Stellar-mass black holes are thought to form when massive stars of 3 - 100 solar masses run out of nuclear fuel to fuse and undergo a Type II supernova. As these stars run out of nuclear fuel, they develop an onion-like structure of shells of heavier and heavier nuclei. But iron Fe nuclei cannot be fused to produce energy, so when an iron core forms, the entire star collapses under its own weight and its outer layers are blown apart by huge numbers of neutrinos. Aluminum-26 is produced in this process and is then blown out into the interstellar medium.
Figure 6 - Type II supernovas happen about every 50 years in our galaxy.
Figure 7 - Above are some very massive stars in the Orion Nebula that are 10 - 30 solar masses. These stars will quickly burn up all of their nuclear fuel in just a few million years and then explode as Type II supernovas.
Figure 8 - Stellar-mass black holes can also form when two neutron stars collide to form a black hole. This only happens about once every 30 million years in our galaxy.
Figure 9 - Supermassive black holes are found at the center of nearly all galaxies. More massive galaxies are found to have more massive supermassive black holes at their centers.
Figure 10 - The current model is that the supermassive black holes at the centers of galaxies formed from the accumulation of stellar-mass black holes after the first massive O-type stars formed and collapsed into black holes. The challenge for this hypothesis is the limited time available for this to happen. The new model proposes that intermediate-sized black holes formed first, by the direct collapse of huge gas clouds, before any stars had even formed. Click to enlarge.
Figure 11 - We may have detected an intermediate-sized black hole form from direct collapse in the Infinity Galaxy shown above. The recently discovered Infinity Galaxy appears to be two galaxies in collision that look somewhat like the mathematical infinity sign. Each galaxy seems to have its own central supermassive black hole, but between the two, there appears to be an intermediate-sized black hole that formed from the direct collapse of the colliding gas clouds of both galaxies.
Here is a very good YouTube video by Dr. Becky that covers more of the above:
What came first, the galaxy or the black hole? JWST tackles astrophysics’s “chicken or egg” question
https://www.youtube.com/watch?v=B9yDWbilIG4
In the video, Dr. Becky discusses the paper below:
JADES reveals a large population of low mass black holes at high redshift
https://arxiv.org/pdf/2506.22147
Now, even our somewhat defective Universe will ultimately be reduced to black holes and photon radiation, but this will take a very long time. Thus, universes that are more efficient at creating black holes by direct collapse just after their Big Bangs should come to dominate the Multiverse. Such universes will just not contain any Intelligences to take note of that fact.
Me:
How long is it thought to be for our Universe to be completely reduced to black holes and photon radiation?
ChatGPT:
The Black Hole Era (~10^40 to ~10^100 years)
Most matter has either been swallowed by black holes or ejected into intergalactic space.
Black holes dominate as the largest remaining energy reservoirs.
Over time, Hawking radiation causes black holes to evaporate:
Stellar-mass black holes last ~10^66 years.
Supermassive black holes (up to ~10^10 solar masses) last up to ~10^100 years.
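These timescales follow from Hawking's evaporation formula, t = 5120 π G² M³ / (ħ c⁴), which says that the evaporation time grows as the cube of the black hole's mass. Here is a minimal Python sketch (not from the original post) that plugs standard SI constants into that formula:

```python
import math

# Physical constants in SI units
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
HBAR  = 1.055e-34   # reduced Planck constant, J s
C     = 2.998e8     # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
YEAR  = 3.156e7     # seconds per year

def evaporation_time_years(mass_kg):
    """Hawking evaporation time t = 5120*pi*G^2*M^3 / (hbar*c^4), in years."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / YEAR

print(f"1 solar mass:      ~1e{math.log10(evaporation_time_years(M_SUN)):.0f} years")
print(f"1e10 solar masses: ~1e{math.log10(evaporation_time_years(1e10 * M_SUN)):.0f} years")
```

For one solar mass this gives on the order of 10^67 years, the same ballpark as the figure quoted above, and because of the M³ scaling a 10^10-solar-mass supermassive black hole lasts a factor of 10^30 longer, landing near the ~10^100-year mark.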
Conclusion
Many have pondered why our Universe seems to be so fine-tuned to support Intelligences such as ourselves. But in my current working hypothesis, I counter that we are actually living in a slightly botched Universe lying in the discard bin along the endless assembly line of new universes coming to be.
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston