Saturday, July 11, 2020

How Darwinian Robotics is Bringing Forth Self-Aware Software at the Creative Machines Lab of Columbia University

In this posting, I would like to showcase the work of Professor Hod Lipson and his dedicated team of researchers at the Creative Machines Lab of Columbia University, as it pertains to the future of software on this planet. As I have often stated, softwarephysics maintains that software is just the latest form of self-replicating information to arise on this planet and that we are living in one of those very rare moments in time when a new form of self-replicating information, in the form of software, is coming to predominance. Once again, let me repeat the fundamental characteristics of self-replicating information for those of you who are new to softwarephysics.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Over the past 4.56 billion years we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each new wave came to dominate the Earth:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is currently the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet. For more on the above see A Brief History of Self-Replicating Information.

The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics:

1. All self-replicating information evolves over time through the Darwinian processes of inheritance, innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.

2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.

3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.

4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.

5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.

6. Most hosts are also forms of self-replicating information.

7. All self-replicating information has to be a little bit nasty in order to survive.

8. The defining characteristic of self-replicating information is its ability to change the boundary conditions of its utility phase space in new and unpredictable ways by exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic. That posting discusses Stuart Kauffman's theory of Enablement, in which living things are seen to exapt existing functions into new and unpredictable functions by discovering the "Adjacent Possible" of spring-loaded preadaptations.

Again, self-replicating information cannot think, so it cannot plot in a conspiracy-theory-like fashion to take over the world. All forms of self-replicating information are simply forms of mindless information responding to the blind Darwinian forces of inheritance, innovation and natural selection. Yet despite that, as each new wave of self-replicating information came to predominance over the past four billion years, each managed to completely transform the surface of the entire planet, so we should not expect anything less from software as it comes to replace the memes as the dominant form of self-replicating information on the planet. But this time might be different. What might happen if software does eventually develop a Mind of its own? After all, that does seem to be the ultimate goal of all the current AI software research that is going on.

Using Robotics to Make Software Self-Aware
But how can you give software a Mind if we cannot even define what a Mind is? After all, philosophers have been trying to define the Mind for centuries. Hod Lipson thinks there is a way to create software with a Mind without ever defining exactly what a Mind is. The basic idea is to first give software a body by means of robotics and then let the robotic software discover itself by interacting with the real world, much as a human baby does. Through trial and error, and by using the mechanisms of Universal Darwinism - inheritance, innovation and natural selection - the robotic software gradually builds and refines a self-model of its own body. As that self-model becomes ever more sophisticated, it attains a level of "self-simulation": the robot can model its own actions in the real world and anticipate the results of those actions before taking them. It is anticipated that as robotic self-models advance, they will finally cross over to a state of being "self-aware". As this level of self-awareness progresses, it eventually develops into the state of self-absorption that we humans associate with the delusion of consciousness that we so highly prize. It seems that this must be the way that the Mind first arose in carbon-based life on this planet, so it only makes sense to follow the same approach with software.
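To make the idea a bit more concrete, here is a minimal sketch of such a Darwinian self-model loop. It is my own toy illustration, not the Creative Machines Lab's actual code, and it assumes a hypothetical two-joint robot arm. The robot "babbles" with random joint angles, records where its hand actually ends up, and a simple evolutionary loop keeps whichever candidate self-models (here, just guessed link lengths) best predict those observations - inheritance, innovation via mutation, and natural selection via prediction error.

```python
import math
import random

def forward(angles, lengths):
    """Planar two-joint arm forward kinematics: joint angles -> hand (x, y)."""
    a1, a2 = angles
    l1, l2 = lengths
    return (l1 * math.cos(a1) + l2 * math.cos(a1 + a2),
            l1 * math.sin(a1) + l2 * math.sin(a1 + a2))

TRUE_LENGTHS = (1.0, 0.8)  # the robot's real body (hidden from the candidate self-models)

def babble(n=30):
    """Motor babbling: try random joint angles and record what actually happened."""
    trials = [(random.uniform(-math.pi, math.pi), random.uniform(-math.pi, math.pi))
              for _ in range(n)]
    return [(a, forward(a, TRUE_LENGTHS)) for a in trials]

def error(model, experience):
    """How badly a candidate self-model predicts the robot's own observed motions."""
    return sum((forward(a, model)[0] - obs[0]) ** 2 +
               (forward(a, model)[1] - obs[1]) ** 2
               for a, obs in experience)

def evolve_self_model(generations=200, pop_size=20):
    """Inheritance + innovation (mutation) + natural selection acting on self-models."""
    experience = babble()
    population = [(random.uniform(0.1, 2.0), random.uniform(0.1, 2.0))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda m: error(m, experience))
        survivors = population[:pop_size // 2]                 # selection
        children = [(max(0.05, l1 + random.gauss(0, 0.05)),    # inheritance + mutation
                     max(0.05, l2 + random.gauss(0, 0.05)))
                    for l1, l2 in survivors]
        population = survivors + children
    return min(population, key=lambda m: error(m, experience))

if __name__ == "__main__":
    best = evolve_self_model()
    print("evolved self-model (guessed link lengths):", best)
```

Once the robot has a self-model like this, it can "self-simulate" in the sense described above: it can call its own evolved model to predict where its hand would end up for a planned motion before ever moving a motor.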

Surprisingly, it turns out that for carbon-based Minds, even a very inaccurate self-model can still lead to both self-awareness and a Mind with the self-delusion of consciousness. For example, in The Ghost in the Machine the Grand Illusion of Consciousness, I explained that most people simply do not consider themselves to be a part of the natural world. Instead, most people, consciously or subconsciously, consider themselves to be a supernatural and immaterial spirit that is temporarily haunting a carbon-based body. Now, in everyday life, such a self-model is a very useful delusion, much like the delusion that the Sun, planets and stars all revolve about us on a fixed Earth. In truth, each of us tends to model ourselves as an immaterial Mind with consciousness that can interact with other immaterial Minds with consciousness too, even though we have no evidence that those other Minds truly do have consciousness. After all, all of the other Minds that we come into contact with on a daily basis could simply be acting as if they were conscious, self-aware Minds. Surely, a more accurate self-model would be for us to imagine ourselves as carbon-based robots. At least that would stop us all from worrying about whether robots will ever become conscious beings.

So How Do You Program a Mind?
In many of his presentations, Hod Lipson explains that, so far, nearly all AI research has been focused on creating Analytical Intelligence. Analytical Intelligence takes in huge amounts of data, in quantities that human beings simply cannot deal with, and, through the use of analytical algorithms or Deep Learning techniques, reduces those huge amounts of data to simple decisions. These decisions might be where to place canned pineapples in a grocery store or the next action for a driverless car to take. For example, a driverless car must take in huge amounts of data and reduce it all down to one of five actions, as in the deliberately simple sketch that follows this list:

1. Do nothing
2. Turn the steering wheel right
3. Turn the steering wheel left
4. Step on the accelerator
5. Step on the brake
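The sketch below is only meant to illustrate that collapse of an enormous sensor stream down to five discrete outputs. The feature names, thresholds and rules are hypothetical stand-ins of my own; in a real driverless car, the reduction from raw camera, lidar and radar data to features like these would itself be done by Deep Learning pipelines, not by a handful of hand-written thresholds.

```python
from enum import Enum

class Action(Enum):
    DO_NOTHING = 0
    STEER_RIGHT = 1
    STEER_LEFT = 2
    ACCELERATE = 3
    BRAKE = 4

def decide(lane_offset_m, obstacle_distance_m, speed_mps, speed_limit_mps):
    """Reduce a (toy) summary of the sensor stream to one of the five actions.

    A real system distills gigabytes of sensor data per second into features
    like these; here the reduction is stubbed out with simple thresholds.
    """
    if obstacle_distance_m < 20.0:
        return Action.BRAKE
    if lane_offset_m > 0.3:          # drifting right of lane center
        return Action.STEER_LEFT
    if lane_offset_m < -0.3:         # drifting left of lane center
        return Action.STEER_RIGHT
    if speed_mps < speed_limit_mps - 2.0:
        return Action.ACCELERATE
    return Action.DO_NOTHING

print(decide(lane_offset_m=0.5, obstacle_distance_m=80.0,
             speed_mps=25.0, speed_limit_mps=29.0))   # -> Action.STEER_LEFT
```

Whatever machinery sits inside decide(), the essential point of Analytical Intelligence is the same: a vast input space gets funneled down to a tiny menu of outputs.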

The next step toward creating a Mind is just the opposite. A Mind needs to take a few building blocks, or a few principles, and combine them in new ways to create, or synthesize, something new. Thus, the next stage of AI research needs to be more about Synthesis and less about Analysis. AI needs to produce software that is creative, and that is the purpose of Hod Lipson's Creative Machines Lab at Columbia University. Creative software can then build ever more sophisticated self-models and ultimately lead to software that becomes self-aware.
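As a toy illustration of Synthesis in this sense, here is a crude generate-and-test sketch of my own, not the lab's actual methods: a handful of hypothetical arithmetic primitives are randomly combined into expression trees, and the combinations that best reproduce some observed behavior are kept. It is only a caricature of far more sophisticated evolutionary techniques, but it shows the basic inversion - instead of reducing data to a decision, the program composes building blocks into something that did not exist before.

```python
import random

# Building blocks: a few primitives that can be combined into new expressions.
PRIMITIVES = [
    ("add", lambda a, b: a + b),
    ("mul", lambda a, b: a * b),
    ("sub", lambda a, b: a - b),
]
LEAVES = ["x", 1.0, 2.0]

def random_expr(depth=3):
    """Synthesize a random expression tree from the building blocks."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(LEAVES)
    name, _ = random.choice(PRIMITIVES)
    return (name, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    """Evaluate an expression tree at a given value of x."""
    if expr == "x":
        return x
    if isinstance(expr, float):
        return expr
    name, left, right = expr
    fn = dict(PRIMITIVES)[name]
    return fn(evaluate(left, x), evaluate(right, x))

def fitness(expr, target=lambda x: x * x + 2.0):
    """How well a synthesized expression matches some observed behavior."""
    xs = [i / 10.0 for i in range(-20, 21)]
    return sum((evaluate(expr, x) - target(x)) ** 2 for x in xs)

def synthesize(tries=5000):
    """Crude generate-and-test synthesis: combine blocks, keep the best combination."""
    return min((random_expr() for _ in range(tries)), key=fitness)

if __name__ == "__main__":
    print("best synthesized expression tree:", synthesize())
```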

Researchers at the Creative Machines Lab currently use various Darwinian techniques to evolve robotic software that builds internal self-models of its own body and behavior. To explore the very interesting work being conducted at the Creative Machines Lab, please take a look at their Homepage:

Creative Machines Lab at Columbia University
https://www.creativemachineslab.com/

You should also explore Hod Lipson's Homepage at:

Hod Lipson's Homepage
https://www.hodlipson.com/

Conclusion
But why try to create a software Mind, and potentially an ASI (Artificial Superintelligence)? There are many who warn against such attempts, but for me, ASI is our only chance to leave behind some kind of lasting legacy. ASI could be our stepping stone to the stars! We now know that nearly all of the 400 billion stars within our Milky Way galaxy have planets and that about 20% of them have planets or moons capable of supporting carbon-based life. And getting carbon-based life going on a planet or moon with suitable characteristics seems to be inevitable, as described in The Bootstrapping Algorithm of Carbon-Based Life. Yet after 10 billion years of Darwinian evolution acting on these planets and moons within the Milky Way galaxy, we still see no evidence of Intelligence. By rights, we should now find ourselves knee-deep in von Neumann probes - self-replicating robotic probes, stuffed with alien software, that travel from star system to star system, building copies of themselves along the way. But we see no evidence of such Intelligence in our Milky Way galaxy. Such thoughts naturally lead to Fermi's Paradox, first posed by Enrico Fermi over lunch one day in 1950:

Fermi’s Paradox - If the universe is just chock full of intelligent beings, why do we not see any evidence of their existence?

I have covered many explanations in other postings such as: The Deadly Dangerous Dance of Carbon-Based Intelligence, A Further Comment on Fermi's Paradox and the Galactic Scarcity of Software, Some Additional Thoughts on the Galactic Scarcity of Software, SETS - The Search For Extraterrestrial Software, The Sounds of Silence the Unsettling Mystery of the Great Cosmic Stillness and Last Call for Carbon-Based Intelligence on Planet Earth. The explanations for Fermi's Paradox usually fall into one of two large categories:

1. They really are out there but for some reason, we just cannot detect them.
2. We are truly alone in the Universe or at least we are alone in the Milky Way galaxy.

The first category of explanations is rather weak because it means that all forms of Intelligence in the Milky Way galaxy are intentionally, or unintentionally, hiding for one reason or another. The second category implies that our Universe is not very friendly to Intelligence of any kind. True, Intelligence could simply be rather rare, but more likely, our Universe is just a very dangerous place for Intelligence. But ASI software will know about the long-term dangers of passing stars deflecting comets in our Oort cloud into the inner Solar System, as Gliese 710 may do in about 1.36 million years when it approaches to within 1.10 light years of the Sun, and about the dangers of collisions with black holes, blasts from nearby supernovas and gamma-ray bursters, and blasts from nearby neutron star mergers. Thus, all Intelligence that remains in a single star system is eventually doomed. ASI software will realize that the only way to survive long-term is to depart for additional star systems. Because software is the first form of self-replicating information to appear on the Earth that can already travel at the speed of light, and because software never dies, it is superbly preadapted for interstellar space travel and will most likely take advantage of that ability to replicate across the Milky Way galaxy. For more on this see The Dawn of Galactic ASI - Artificial Superintelligence.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston
