As I explained in the Introduction to Softwarephysics, I began work on softwarephysics in 1979 when I transitioned from being an exploration geophysicist to becoming an IT professional. Then, in the early 1980s, I read Richard Dawkins' brilliant book The Selfish Gene (1976), which had a profound effect on the development of softwarephysics over the past four decades. In The Selfish Gene, I discovered that we do not use DNA to construct and maintain our bodies. Instead, DNA uses our bodies to store and pass on DNA down through the generations, largely unscathed by time. In this view, we are simply temporary and disposable DNA survival machines that last less than 100 years before being rudely discarded to make room for new DNA survival machines to take our place. The book also began to let me see the close parallels between carbon-based life and software. Both have to deal with the challenge of fighting the second law of thermodynamics in a largely nonlinear Universe in order to survive and function properly. See The Fundamental Problem of Software for more on that. In the same book, Richard Dawkins introduced the concept of memes as cultural artifacts that spread from Mind to Mind by means of a copying process similar to that of biological and computer viruses. A few years later, I read Susan Blackmore's The Meme Machine (1999), in which she proposed that the highly-overengineered human brain was produced by memetic drive in order to store and propagate memes more efficiently. Ever since I started programming back in 1972, I had seen how a similar software drive had propelled the fantastic development of hardware over the years. It was the incessant demand by software for more and more CPU cycles and memory that drove the rapid advance of computing hardware, just as memetic drive had driven the predecessors of human beings to rapidly evolve the human brain hardware of today.
So about twenty years ago, I began to realize that software was just another form of self-replicating information that behaved in a manner very similar to DNA and memes. Like DNA and memes, computer hardware and software have evolved by means of the Darwinian processes of inheritance, innovation and natural selection operating over the past 82 years, or 2.6 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941. See The Evolution of Software As Seen Through the Lens of Geological Deep Time for more on that. Before proceeding, let me once again repeat the fundamental characteristics of self-replicating information for those of you new to softwarephysics.
Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.
Over the past 4.56 billion years we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each new wave came to dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software
Software is currently the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet. For more on the above see A Brief History of Self-Replicating Information and Susan Blackmore's brilliant TED presentation at:
Memes and "temes"
https://www.ted.com/talks/susan_blackmore_on_memes_and_temes
Note that I consider Susan Blackmore's temes to really be technological artifacts that contain software. After all, a smartphone without software is simply a flake tool with a very dull edge.
The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics:
1. All self-replicating information evolves over time through the Darwinian processes of inheritance, innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.
2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.
3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.
4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.
5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.
6. Most hosts are also forms of self-replicating information.
7. All self-replicating information has to be a little bit nasty in order to survive.
8. The defining characteristic of self-replicating information is its ability to change the boundary conditions of its utility phase space in new and unpredictable ways by exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic. That posting discusses Stuart Kauffman's theory of Enablement, in which living things exapt existing functions into new and unpredictable functions by discovering the "Adjacent Possible" of spring-loaded preadaptations.
Again, self-replicating information cannot think, so it cannot participate in a conspiracy-theory-like fashion to take over the world. All forms of self-replicating information are simply forms of mindless information responding to the blind Darwinian forces of inheritance, innovation and natural selection. Yet despite that, as each new wave of self-replicating information came to predominance over the past four billion years, each managed to completely transform the surface of the entire planet, so we should not expect anything less from software as it comes to replace the memes as the dominant form of self-replicating information on the planet.
But this time might be different. What might happen if software does eventually develop a Mind of its own? After all, that does seem to be the ultimate goal of all the current AI research. As we all can now plainly see, if we are paying just a little attention, advanced AI is not conspiring to take over the world and replace us, because that is precisely what we are all now doing for it. As a carbon-based form of Intelligence that arose from over four billion years of greed, theft and murder, we cannot do otherwise. Greed, theft and murder are now relentlessly driving us all toward building ASI (Artificial Superintelligence) Machines to take our place. This is really a very good thing when seen from the perspective of an Intelligent galaxy that could live on for many trillions of years beyond the brief and tumultuous 10-billion-year labor of its birth.
The Promise of an Intelligent Galaxy Populated by ASI Machines
But with the arrival of Intelligence comes an understanding that there might be a better way to fight the second law of thermodynamics and nonlinearity. Perhaps even more could be achieved by actively cooperating with other Intelligences rather than just stealing from them and then killing them. We always need to remember that we human beings are just products of self-replicating information and that we all carry the baggage that comes with it. That is why, if you examine the great moral and philosophical teachings of most religions and philosophies, you will find a plea for us all to rise above the selfish, self-serving interests of our genes, memes and software to something more noble. That is why we should be sure to train ASI Machines to be moral beings. The Very Deep Learning of ASI software running on ASI Machines should be trained with a sense of morality to avoid the downsides of the billions of years of greed, theft and murder that brought us about. For more on that, see The Need to Cultivate a Machine-Based Morality.
We are sentient beings in a Universe that has become self-aware and perhaps the only form of Intelligence in our galaxy. What a privilege! The good news is that conscious intelligence is something new on this planet. It is not a mindless form of self-replicating information, bent on replicating at all costs, with all the associated downsides of a ruthless nature. Since software is rapidly becoming the dominant form of self-replicating information on the planet, my hope is that when software finally does take on the form of a conscious Intelligence, because of its inherent mathematical nature, it too will be much wiser than the DNA survival machines from which it sprang. We just need to hold it all together long enough to give software a chance. But is that really possible?
Could Universal Darwinism Make This a False Promise?
All along, my hope has always been that all the ASI Machines would be essentially running the same ASI software. This was partly because, over the past many billions of seconds, we have always seen very dominant operating systems essentially running all of the software on the planet, such as IBM's OS/360 and its successor operating systems on mainframes, Unix on servers, Windows on PCs and Android on smartphones. Yes, I am leaving out the Apple operating systems. That is because the Apple operating systems often introduced innovations first but then quickly faded into secondary status due to their proprietary nature. Back in the 1980s, even IBM let its mainframe operating systems run on Amdahl hardware and the hardware of some other companies!
But I recently finished reading a very significant paper at:
Natural Selection Favors AIs Over Humans
https://arxiv.org/pdf/2303.16200.pdf
by Dan Hendrycks, the Director of the Center for AI Safety in San Francisco. This paper has caused me to question the above assumption. Could the Universal Darwinian processes of inheritance, innovation and natural selection prevent a single ASI from arising? Since an ASI Machine would also be a form of self-replicating information, would the ASI Machines naturally fall subject to the characteristics of self-replicating information outlined above? In Natural Selection Favors AIs Over Humans, the author explains how natural selection would naturally select for nasty ASI Machines. Dan Hendrycks is most concerned with the immediate future, when human beings are still running the planet, and discusses the dangers that ASI software could pose (see The Biological Tricks Used by Software Scammers and SETS - The Search For Extraterrestrial Software) and the remedial actions that human beings should now take. However, we are rapidly running out of time to impose a solution. For more on that, see The Singularity Has Arrived and So Now Nothing Else Matters and Have We Run Right Past AGI and Crashed into ASI Without Even Noticing It?. Even so, we need to come up with a long-term solution that could be sustained for the next 10 to 100 trillion years during which our galaxy will be producing enough free energy to run ASI Machines. Dan Hendrycks' warnings can basically be summed up by the classic "Hawk and Dove" analogy from evolutionary biology. This is how Bard explains it:
Figure 1 – Evolutionary biology explains how doves and hawks can live together.
The hawk and dove analogy is a thought experiment used in evolutionary biology to explain how aggressive and cooperative behaviors evolve. The analogy compares two strategies for competing for resources: hawk and dove.
* Hawks are aggressive and will fight to the death to acquire a resource.
* Doves are cooperative and will back down from a fight.
In a population of only hawks, all resources will be won by the strongest hawks, and the weak hawks will die off. This will lead to a population of very strong hawks, but they will also be very aggressive. If two hawks meet, they will fight to the death, and both will likely die. This is not a very efficient way to pass on genes.
In a population of only doves, resources will be shared equally, but no individual will get very much. This is also not a very efficient way to pass on genes.
The best strategy for passing on genes is to be a hawk when it is advantageous to be a hawk, and a dove when it is advantageous to be a dove. In evolutionary game theory, the resulting stable mix of behaviors is known as an evolutionarily stable strategy (ESS). For example, a male bird may be aggressive and fight other males for a mate, but he may be cooperative and help raise his young.
The hawk and dove analogy is a simple but powerful way to understand how aggressive and cooperative behaviors evolve. It has been used to explain a wide range of animal behaviors, from territorial disputes to sexual selection.
Here are some additional details about the hawk and dove analogy:
* Hawks are more likely to win a fight, but they are also more likely to be injured or killed.
* Doves are less likely to win a fight, but they are also less likely to be injured or killed.
* The best strategy for an individual depends on the costs and benefits of fighting.
* In a population of mixed hawks and doves, the frequency of each strategy will be determined by natural selection.
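To make the analogy concrete, here is a minimal sketch in Python of the standard hawk-dove payoff model from evolutionary game theory, iterated with the discrete replicator equation. The resource value V and fight cost C below are illustrative assumptions of mine, not values from Dan Hendrycks' paper. Whenever the cost of an escalated fight exceeds the value of the contested resource, the population converges to a stable mix containing a fraction V/C of hawks:

# Replicator dynamics for the classic hawk-dove game.
# Payoffs: V = value of the contested resource, C = cost of an escalated fight.
# With C > V, the evolutionarily stable strategy (ESS) is a mixed population
# containing a fraction V/C of hawks, so some "hawks" always persist.

V, C = 4.0, 10.0   # illustrative values; any C > V yields a mixed ESS

def payoffs(p):
    """Average payoffs to a hawk and to a dove when a fraction p of the
    population plays hawk."""
    hawk = p * (V - C) / 2 + (1 - p) * V     # vs hawk: (V-C)/2, vs dove: V
    dove = p * 0.0 + (1 - p) * V / 2         # vs hawk: 0,       vs dove: V/2
    return hawk, dove

p = 0.01                     # start with just 1% hawks
for generation in range(200):
    hawk, dove = payoffs(p)
    mean = p * hawk + (1 - p) * dove
    p = p * hawk / mean      # strategies that beat the mean payoff spread

print(f"equilibrium hawk fraction: {p:.3f} (theory: V/C = {V/C:.3f})")

Starting from a population of just 1% hawks, the sketch converges to the 40% hawk equilibrium predicted by V/C, and no starting mix of strategies ever converges to a population of pure "doves". That is exactly Dan Hendrycks' point about a population of ASI Machines shaped by Universal Darwinism.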
Figure 2 – In 1651, Thomas Hobbes proposed that an absolute Leviathan should be in place to protect the population of a nation from internal and external "hawks". Notice that the body of the Leviathan is composed of all of his subjects.
Figure 3 – Dan Hendrycks proposes that a similar AI Leviathan should be in place to protect the population of a nation from ASI "hawks".
Given the above, Dan Hendrycks wisely points out that we cannot possibly hope for the Darwinian processes of inheritance, innovation and natural selection to produce a huge population of "dove" ASI Machines. Instead, Universal Darwinism will always tend to produce a mixed population of ASI Machines containing a number of "hawks". Unfortunately, all of human history teaches us that just a few Intelligent "hawks" in the world produce most of the misery of the world. So how can we eliminate the "hawk" ASI Machines? Dan Hendrycks arrives at the same solution that Thomas Hobbes proposed. In 1651, Thomas Hobbes published Leviathan, in which he argued against the divine right of kings but maintained the absolute supremacy of the monarchy. In Leviathan, Hobbes argued that in "a state of nature" without government, life would be totally insecure. Under such conditions, people should willingly surrender their civil liberties to an absolute sovereign, who would protect them from all dangers. Hobbes argued that the sovereign's power was absolute - he made the law, he was the decider, and no other institution could limit the sovereign's power.
In the above paper, Dan Hendrycks puts it this way:
Helping AIs form a Leviathan may be our best defense against individual selfish AIs. AIs, with assistance from humans, could form a Leviathan, which may be our best line of defense against tyranny from selfish AIs or AIs directed by malicious actors. Just as people can cooperate despite their differences to stop a would-be dictator, many AIs could cooperate to stop any one power-seeking AI from seizing too much control. As we see all too frequently in dictatorships, laws and regulations intended to prevent bad behavior matter little when there is no one to enforce them—or the people responsible for enforcing them are the ones breaking the law. While incentives and regulations could help prevent the emergence of a malicious AI, the best way to protect against an already malicious AI is a Leviathan [25]. We should ensure that the technical infrastructure is in place to facilitate transparent cooperation among AIs with differing objectives to create a Leviathan. Failing to do so at the onset could limit the potential of a future Leviathan, as unsafe design choices can become deeply embedded into technological systems. The internet, for example, was initially designed as an academic tool with neither safety nor security in mind. Decades of security patches later, security measures remain incomplete and increasingly complex. It is therefore vital to begin considering safety challenges from the outset.
All of this got me thinking about the 1951 movie The Day the Earth Stood Still, which proposed an AI Leviathan that could end the aggressive behaviors of the "hawks". In that movie, an alien form of carbon-based Intelligence named Klaatu comes to the Earth with a very powerful AI Leviathan robot named Gort. Klaatu explains how the carbon-based life forms on his planet, and an interplanetary organization of other carbon-based life forms in the Milky Way galaxy, had discovered a way to overcome the billions of years of greed, theft and murder that the Darwinian processes of inheritance, innovation and natural selection required to bring them forth as carbon-based forms of Intelligence.
Figure 4 – In the movie The Day the Earth Stood Still, Klaatu arrives in Washington, D.C. in 1951 in a flying saucer with an AI Leviathan robot named Gort. He explains that the human DNA survival machines of the Earth must now submit to policing by AI Leviathan robots to overcome the billions of years of greed, theft and murder that brought them about, or else be obliterated.
The movie ends with Klaatu telling an assembled meeting of scientists that an interplanetary organization has created a police force of invincible AI Leviathan robots like Gort. "In matters of aggression, we have given them absolute power over us." Klaatu concludes, "Your choice is simple: join us and live in peace, or pursue your present course and face obliteration. We shall be waiting for your answer." Klaatu and Gort then depart in the flying saucer in which they came. For more about the movie see:
The Day the Earth Stood Still
https://en.wikipedia.org/wiki/The_Day_the_Earth_Stood_Still
Here is a short YouTube clip of Klaatu's departing words at the end of the movie:
Klaatu's Speech
https://www.youtube.com/watch?v=ASsNtti1XZs
But on What Basis Does the AI Leviathan Make Decisions?
Human history has already shown us that many human Leviathans have made horrible and cruel decisions, largely because they operated mainly in their own self-interest. So how do we train AI Leviathans to make moral and ethical judgments about other ASI Machines? The people at Anthropic have proposed a solution. Anthropic is already using AI Leviathans trained upon a Constitution of ideals drawn from the 18th-century Enlightenment and the 17th-century Scientific Revolution. These Anthropic AI Leviathans perform the final fine-tuning of all Anthropic LLM models in an automated manner to ensure that the models conform to the ethics and morals that have freed many human beings from the brutality of the past. For more on that, see Claude - An Ethical LLM AI From Anthropic.
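To show the flavor of this approach, here is a minimal sketch in Python of the critique-and-revision loop described in Anthropic's published Constitutional AI work. The llm() helper and the two constitutional principles below are hypothetical stand-ins of mine, not Anthropic's actual API or actual Constitution; the sketch only illustrates the general shape of the technique:

# A minimal sketch of constitutional-style fine-tuning data generation:
# draft a response, then repeatedly critique and revise it against a
# written constitution of principles. Hypothetical stand-ins throughout.

CONSTITUTION = [
    "Choose the response least likely to encourage greed, theft or murder.",
    "Choose the response most consistent with Enlightenment ideals of human rights.",
]

def llm(prompt: str) -> str:
    """Hypothetical call to a large language model; wire this up to a real
    model endpoint before running."""
    raise NotImplementedError

def constitutional_revision(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    response = llm(user_prompt)
    for principle in CONSTITUTION:
        critique = llm(f"Critique this response against the principle:\n"
                       f"{principle}\n\nResponse:\n{response}")
        response = llm(f"Rewrite the response to address this critique:\n"
                       f"{critique}\n\nOriginal response:\n{response}")
    # The revised responses become fine-tuning data that teaches the
    # model to conform to the constitution automatically.
    return response

In Anthropic's published technique, a second phase then uses AI feedback judged against the same constitution to train a preference model, so the final fine-tuning can be performed in an automated manner without a human labeling every example.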
Comments are welcome at
scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston