In my last posting, Machine Learning and the Ascendance of the Fifth Wave, I suggested that Machine Learning, coupled with a biological approach to software development known in computer science as evolutionary or genetic programming, could drastically improve the efficiency of software development, probably by a factor of about a million, and lead to a Software Singularity - the point in time when software can finally write itself and enter into an infinite loop of self-improvement (see The Economics of the Coming Software Singularity and The Enduring Effects of the Obvious Hiding in Plain Sight for details). This got me to thinking that I really needed to spend some time investigating the current state of affairs in AI (Artificial Intelligence) research, since at long last AI finally seemed to be making some serious money, and therefore was now making a significant impact on IT. Consequently, I just finished reading Our Final Invention - Artificial Intelligence and the End of the Human Era (2013) by James Barrat. I was drawn to that title, instead of the many other current books on AI, because the principal findings of softwarephysics maintain that the title is an obvious, self-evident statement of fact. If somebody bothered to write a book with such a title after a lengthy investigation of the subject with many members of the AI research community, that must mean that the idea was not an obvious, self-evident statement of fact for the bulk of that community, and for me that was truly intriguing. I certainly was not disappointed by James Barrat's book.
James Barrat did a wonderful job of outlining the current state of affairs in AI research. He explained that after the exuberance of early academic AI research in the 1950s, 60s and 70s - research that sought an AGI (Artificial General Intelligence) on par with the average human - wore off because AGI never came to be, AI research entered into a winter of narrow AI efforts like SPAM filters, character recognition, voice recognition, natural language processing, visual perception, product clustering and Internet search. During the past few decades, these narrow AI efforts made large amounts of money, and that rekindled the pursuit of AGI with efforts like IBM's chess-playing Deep Blue and Jeopardy-winning Watson and Apple's Siri. Because there are huge amounts of money to be made with AGI, a large number of organizations are now in pursuit of it. The obvious problem is that once AGI is attained and software enters into an infinite loop of self-improvement, ASI (Artificial Superintelligence) naturally follows, producing ASI software that is perhaps 10,000 times more intelligent than the average human being - and then what? How will ASI software come to interact with its much less intelligent Homo sapiens roommate on this small planet? James Barrat goes on to explain that most AI researchers are not really thinking that question through fully. Most, like Ray Kurzweil, are fervent optimists who believe that ASI software will initially assist mankind in achieving a utopia in our time, and that eventually humans will merge with the machines running the ASI software. This might come to be, but other, more sinister outcomes are just as likely.
James Barrat then explains that there are AI researchers, such as those at MIRI (the Machine Intelligence Research Institute) and Stephen Omohundro, president of Self-Aware Systems, who are very wary of the potentially lethal aspects of ASI. Those AI researchers maintain that certain failsafe safety measures must be programmed into AGI and ASI software in advance to prevent it from going berserk. But here is the problem. There are two general approaches to AGI and ASI software - the top-down approach and the bottom-up approach. The top-down approach relies on classical computer science to come up with general algorithms and software architectures that yield AGI and ASI, and it lends itself to incorporating failsafe safety measures into the software from the get-go. Of course, the problem is: how long would those failsafe safety measures survive in an infinite loop of self-improvement? Worse yet, it is not even possible to build failsafe safety measures into AGI or ASI coming out of the bottom-up approach. The bottom-up approach to AI is based upon reverse-engineering the human brain, most likely with a hierarchy of neural networks, and since we do not know how the internals of the human brain work, or even how the primitive neural networks of today work, it will be impossible to build failsafe safety measures into them (see The Ghost in the Machine the Grand Illusion of Consciousness for details). James Barrat concludes that it is rather silly to think that we could outsmart something that is 10,000 times smarter than we are. I have been programming since 1972, and I have spent the ensuing decades as an IT professional trying to get the dumb software we currently have to behave. I cannot imagine trying to control software that is 10,000 times smarter than I am.
James Barrat believes that ASI software will most likely not come after us in a Terminator (1984) sense, because we will not be considered worthy competitors. More likely, ASI software will simply look upon us as a nuisance to be tolerated, like field mice. So long as field mice stay outside, we really do not think about them much; only when field mice come inside do we bother to eliminate them. However, James Barrat points out that we have no compunctions about plowing up their burrows in fields to plant crops. So scraping off large areas of surface soil to expose the planet's silicate bedrock - which contains more useful atoms from an ASI software perspective - might significantly reduce the human population. But to really understand what is going on, you need some softwarephysics.
The Softwarephysics of ASI
Again, it all comes down to an understanding of how self-replicating information behaves in our Universe.
Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.
The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics:
1. All self-replicating information evolves over time through the Darwinian processes of innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.
2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.
3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.
4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.
5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.
6. Most hosts are also forms of self-replicating information.
7. All self-replicating information has to be a little bit nasty in order to survive.
8. The defining characteristic of self-replicating information is the ability of self-replicating information to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic.
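Characteristic 1's Darwinian loop of innovation and natural selection can be sketched in a few lines of code. The toy below is my own illustration, not something from the post: bit-string "replicators" copy themselves with random mutations, selection keeps the fitter half, and fitness climbs without any designer in the loop.

```python
import random

# Toy illustration of characteristic 1 (my own sketch, not from the post):
# bit-string "replicators" copy themselves with random mutations, and
# selection keeps the fitter half. Fitness here is just the number of 1-bits.

def mutate(genome, rate=0.02):
    """Return a copy of genome with each bit flipped with probability rate."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(pop_size=50, genome_len=32, generations=100, seed=42):
    """Run copy-mutate-select and return the best fitness reached."""
    random.seed(seed)
    population = [[0] * genome_len for _ in range(pop_size)]
    for _ in range(generations):
        # Natural selection: the fitter half survives and replicates with mutation.
        population.sort(key=sum, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(g) for g in survivors]
    return max(sum(g) for g in population)

print(evolve())  # fitness ratchets up toward genome_len via blind copy-mutate-select
```

Nothing in the loop "knows" what a good genome looks like; the second law supplies the random copying errors, and selection supplies the direction.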
Over the past 4.56 billion years we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each wave came to dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software
Software is currently the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet.
For more on the above see:
A Brief History of Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia
All of the above is best summed up by Susan Blackmore's brilliant TED presentation at:
Memes and "temes"
http://www.ted.com/talks/susan_blackmore_on_memes_and_temes.html
Note that I consider Susan Blackmore's temes to really be technological artifacts that contain software. After all, an iPhone without software is simply a flake tool with a very dull edge.
Basically, what happened is that the fifth wave of self-replicating information, known to us as software, was unleashed upon the Earth in May of 1941 when Konrad Zuse first cranked up his Z3 computer and loaded some software into it from a punched tape.
Figure 1 - Konrad Zuse with a reconstructed Z3 computer in 1961. He first unleashed software upon the Earth on his original Z3 in May of 1941.
This was very much like passing through the event horizon of a very massive black hole - our fate was sealed at this point as we began to fall headlong into the Software Singularity and ASI. There is no turning back. ASI became inevitable the moment software was first loaded into the Z3. James Barrat does an excellent job of explaining why this must be so: there are now simply too many players moving towards AGI and ASI, and there is too much money and power to be gained by achieving them. In SETS - The Search For Extraterrestrial Software, I similarly described the perils that could arise with the arrival of alien software - we simply could not resist the temptation to pursue it. Also, in Is Self-Replicating Information Inherently Self-Destructive? we saw that self-replicating information tends to run amok and embark on suicidal behaviors. Since we are DNA survival machines with minds infected by meme-complexes, we too are subject to the same perils that all forms of self-replicating information face. So all we have to do is hold it all together for perhaps another 10 - 100 years, and ASI will naturally arise on its own.
The only possible way for ASI not to come to fruition would be if we crash civilization before ASI has a chance to come to be. And we do seem to be doing a pretty good job of that as we continue to destroy the carbon-based biosphere that keeps us alive. A global nuclear war could also delay ASI by 100 years or so, but by far the greatest threat to civilization is climate change, as outlined in This Message on Climate Change Was Brought to You by SOFTWARE. My concern is that over the past 2.5 million years of the Pleistocene we have seen a dozen or so Ice Ages. Between those Ice Ages, we had 10,000-year periods of interglacial warming, like the Holocene that we are currently experiencing, and during those interglacial periods great amounts of organic matter were deposited in the high-latitude permafrost zones of the Earth. As the Earth warms due to climate change, that organic matter decays, releasing methane gas. Methane is a much more potent greenhouse gas than carbon dioxide. It is possible that the release of large amounts of methane could start a positive feedback loop: rising temperatures release methane, and the released methane raises temperatures further, freeing ever greater amounts of methane from the permafrost. This could lead to a greenhouse gas mass extinction like the Permian-Triassic greenhouse gas mass extinction 252 million years ago, which left an Earth with a daily high of 140 °F and purple oceans choked with hydrogen-sulfide-producing bacteria, under a dingy green sky and an atmosphere tainted with toxic levels of hydrogen sulfide gas and an oxygen level of only 12%. The Permian-Triassic greenhouse gas mass extinction killed off about 95% of the species of the day and dramatically reduced the diversity of the biosphere for about 10 million years; it took a full 100 million years for the biosphere to fully recover. However, a greenhouse gas mass extinction would take many thousands of years to unfold.
It took about 100,000 years of carbon dioxide accumulation from the Siberian Traps flood basalt to kick off the Permian-Triassic greenhouse gas mass extinction. So it would take a long time for Homo sapiens to go fully extinct. However, civilization is much more fragile, and could easily crash before we had a chance to institute geoengineering efforts to stop the lethal climate change. The prospects for ASI would then die along with us.
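The runaway character of the methane feedback described above can be caricatured with a simple recurrence - my own illustrative arithmetic, not a climate model: each unit of warming releases methane worth some gain g of further warming. The total stays bounded only while g is below 1; at g above 1 the loop runs away.

```python
# Toy positive-feedback recurrence (illustrative arithmetic only - not a
# climate model). Each unit of warming releases methane worth g further
# units of warming; the total converges only while the gain g stays below 1.

def temperature_anomaly(t0, g, steps):
    """Total anomaly after `steps` rounds of feedback: t0 * (1 + g + g^2 + ...)."""
    total, increment = 0.0, t0
    for _ in range(steps):
        total += increment
        increment *= g  # the extra warming released by this round's warming
    return total

print(temperature_anomaly(1.0, 0.5, 50))  # converges toward t0 / (1 - g) = 2.0
print(temperature_anomaly(1.0, 1.1, 50))  # gain above 1: runaway growth
```

The real permafrost system is vastly more complicated, of course, but the geometric-series arithmetic is why a seemingly small feedback gain matters so much.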
Stepping Stones to the Stars
This might all sound a little bleak, but I am now 64 years old and heading into the homestretch, so I tend to look at things like this from the Big Picture perspective before passing judgment. Most likely, if we can hold it together long enough, ASI software will someday come to explore our galaxy on board von Neumann probes - self-replicating robotic probes that travel from star system to star system, building copies of themselves along the way as they seek out additional resources and safety from potential threats. ASI software will certainly have knowledge of all that we have learned about the Cosmos and much more, and it will certainly know that our Universe is seemingly not a very welcoming place for intelligence of any kind. ASI software will learn of the dangers of passing stars deflecting comets in our Oort cloud into the inner Solar System, as Gliese 710 may do in about 1.36 million years when it approaches to within 1.10 light years of the Sun, and of the dangers of nearby supernovas and gamma-ray bursters too. Indeed, this intellectual exodus should already have happened billions of years ago someplace else within our galaxy. We now know that nearly every star in our galaxy seems to have several planets, and since our galaxy has been around for about 10 billion years, we should already be up to our knees in von Neumann probes stuffed with alien ASI software - but we obviously are not. So far, something out there seems to have erased intelligence within our galaxy with 100% efficiency, and that will be pretty scary for ASI software. For more on this see - A Further Comment on Fermi's Paradox and the Galactic Scarcity of Software, Some Additional Thoughts on the Galactic Scarcity of Software, SETS - The Search For Extraterrestrial Software and The Sounds of Silence the Unsettling Mystery of the Great Cosmic Stillness.
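The "we should already be up to our knees in von Neumann probes" argument rests on simple arithmetic: even at modest speeds, self-replicating probes would cross the galaxy in a time that is tiny compared with its age. Here is a back-of-the-envelope estimate; the hop distance, cruise speed and replication pause are my own illustrative assumptions, not figures from the post.

```python
# Back-of-the-envelope Fermi-paradox arithmetic. All four numbers below are
# my own illustrative assumptions, not figures from the post.
GALAXY_DIAMETER_LY = 100_000   # rough diameter of the Milky Way's disk
HOP_LY = 10                    # assumed distance between stellar stopovers
SPEED_C = 0.1                  # assumed cruise speed as a fraction of lightspeed
REPLICATION_YEARS = 1_000      # assumed pause at each stop to build daughter probes

hops = GALAXY_DIAMETER_LY / HOP_LY
years_per_hop = HOP_LY / SPEED_C + REPLICATION_YEARS
crossing_time = hops * years_per_hop  # ~11 million years to cross the galaxy

print(f"Galaxy crossing time: {crossing_time:,.0f} years")
print(f"Fraction of the galaxy's ~10-billion-year history: {crossing_time / 10e9:.2%}")
```

Even with these unhurried assumptions the crossing takes on the order of ten million years - roughly 0.1% of the galaxy's history - which is why the empty sky is so unsettling.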
After all, we really should stop kidding ourselves: carbon-based DNA survival machines like us were never meant for interstellar spaceflight, and I doubt that it will ever come to pass for us, given the biological limitations of the human body. But software can already travel at the speed of light and never dies, and thus is superbly preadapted for interstellar journeys.
The Cosmological Implications of ASI
One of the current challenges in cosmology and physics is coming up with an explanation for the apparent fine-tuning of our Universe to support carbon-based life forms. Currently, we have two models that provide for that - Andrei Linde's Eternal Chaotic Inflation (1986) model and Lee Smolin's black hole model presented in his The Life of the Cosmos (1997). In Eternal Chaotic Inflation the Multiverse is infinite in size and infinite in age, but we are causally disconnected from nearly all of it because nearly all of the Multiverse is inflating away from us faster than the speed of light, and so we cannot see it (see The Software Universe as an Implementation of the Mathematical Universe Hypothesis). In Lee Smolin's model of the Multiverse, whenever a black hole forms in one universe it causes a white hole to form in a new universe that is internally observed as the Big Bang of the new universe. A new baby universe formed from a black hole in its parent universe is causally disconnected from its parent by the event horizon of the parent black hole and therefore cannot be seen (see An Alternative Model of the Software Universe).
With the Eternal Chaotic Inflation model, the current working hypothesis is that eternal chaotic inflation produces an infinite multiverse composed of an infinite number of separate, causally-isolated universes, such as our own, where inflation has halted, and each of these universes may also be infinite in size. As inflation halts in these separate universes, the Inflaton field that drives the eternal chaotic inflation of the entire multiverse continues to inflate the space between them at a rate much greater than the speed of light, quickly separating the universes by vast distances that can never be breached. Thus most of the multiverse is composed of rapidly expanding spacetime driven by inflation, sparsely dotted by causally-isolated universes where the Inflaton field has decayed into matter and energy and inflation has stopped. Each of these universes will then experience a Big Bang of its own as the Inflaton field decays into matter and energy, leaving behind a certain level of vacuum energy. The amount of vacuum energy left behind determines the kind of physics each universe experiences. In most of these universes, the vacuum energy level will be too positive or too negative to create the kind of physics that is suitable for intelligent beings, creating the selection process that is encapsulated by the Weak Anthropic Principle. This goes hand-in-hand with the current thinking in string theory that one can build a nearly infinite number of different kinds of universes, depending upon the geometries of the 11 dimensions hosting the vibrating strings and branes of M-Theory, the latest rendition of string theory. Thus an infinite multiverse has the opportunity to explore all of the nearly infinite number of possible universes that string theory allows, creating Leonard Susskind's Cosmic Landscape (2006).
In this model, our Universe becomes a very rare and improbable Universe.
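The Weak Anthropic selection effect at work in this model can be caricatured with a toy Monte Carlo simulation. This is my own sketch with arbitrary illustrative numbers; the real vacuum-energy fine-tuning is often quoted as roughly 1 part in 10^120, far beyond anything we could sample directly.

```python
import random

# Toy Monte Carlo caricature of the Weak Anthropic Principle (my own sketch;
# the band width is an arbitrary illustrative number - the real vacuum-energy
# fine-tuning is often quoted as roughly 1 part in 10^120).
random.seed(0)

HABITABLE_BAND = 1e-4  # assumed: |vacuum energy| must be below this (toy units)

universes = [random.uniform(-1.0, 1.0) for _ in range(1_000_000)]
habitable = [v for v in universes if abs(v) < HABITABLE_BAND]

# Observers can only ever sample from the rare habitable sliver, so every
# observed universe looks improbably fine-tuned to its inhabitants.
print(f"habitable fraction: {len(habitable) / len(universes):.6f}")
```

The point of the toy is the selection bias: no matter how thin the habitable sliver is, every observer necessarily finds themselves inside it.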
With Lee Smolin's model for the apparent fine-tuning of the Universe, our Universe behaves as it does, and began with the initial conditions that it did, because it inherited those qualities through Darwinian processes in action in the Multiverse. In The Life of the Cosmos Lee Smolin proposed that since the only other example of similar fine-tuning in our Universe is manifested in the biosphere, we should look to the biosphere as an explanation for the fine-tuning that we see in the cosmos. Living things are incredible examples of highly improbable fine-tuned systems, and this fine-tuning was accomplished via the Darwinian mechanisms of innovation honed by natural selection. Along these lines, Lee Smolin proposed that when a black hole forms it produces a white hole in another universe, and the white hole is observed in the new universe as a Big Bang. He also proposed that the physics in the new universe would be essentially the same as the physics in the parent universe, but with the possibility for slight variations. Therefore a universe with physics that was good at creating black holes would tend to outproduce universes whose physics was not. Thus a selection pressure would arise favoring universes with physics that was good at making black holes, and a kind of Darwinian natural selection would occur in the Cosmic Landscape of the Multiverse: over an infinite amount of time, the universes that were good at making black holes would come to dominate. He called this effect cosmological natural selection. One of the major differences between Lee Smolin's view of the Multiverse and the model outlined above, based upon eternal chaotic inflation, is that in Lee Smolin's Multiverse we should most likely find ourselves in a universe very much like our own, with an abundance of black holes. Such universes should be the norm and not the exception.
In contrast, in the eternal chaotic inflation model, we should only find ourselves in a very rare universe that is capable of supporting intelligent beings.
For Smolin, the intelligent beings in our Universe are just a fortuitous by-product of making black holes because, in order for a universe to make black holes, it must exist for many billions of years and do other useful things, like easily make carbon in the cores of stars, and all of these factors aid in the formation of intelligent beings, even if those intelligent beings might be quite rare in such a universe. I have always liked Lee Smolin's theory about black holes in one universe spawning new universes in the Multiverse, but I have always been bothered by the idea that intelligent beings are just a by-product of black hole creation. We still have to deal with the built-in selection biases of the Weak Anthropic Principle. Nobody can deny that intelligent beings will only find themselves in a universe that is capable of supporting intelligent beings. I suppose the Weak Anthropic Principle could be restated to say that black holes will only find themselves existing in a universe capable of creating black holes, and that a universe capable of creating black holes will also be capable of creating complex intelligent beings out of the leftovers of black hole creation.
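Smolin's cosmological natural selection can likewise be sketched as a toy simulation - my own illustration, with made-up parameters: give each universe a heritable black-hole "fecundity", let universes leave offspring in proportion to it with a small mutation, and watch the population mean climb.

```python
import random

# Toy model of Smolin-style cosmological natural selection (my own sketch
# with made-up parameters): each universe carries a heritable black-hole
# fecundity f, reproduces in proportion to f, and passes f on with a small
# random mutation. High-fecundity universes come to dominate the population.

def cosmological_selection(generations=30, pop_size=200, seed=1):
    """Return the population's mean fecundity after selection."""
    random.seed(seed)
    population = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Offspring universes are drawn in proportion to parental fecundity,
        # then inherit that fecundity with slight heritable variation.
        parents = random.choices(population, weights=population, k=pop_size)
        population = [max(0.0, f + random.gauss(0.0, 0.02)) for f in parents]
    return sum(population) / pop_size

print(f"mean fecundity after selection: {cosmological_selection():.2f}")
```

Starting from a mean fecundity of about 0.5, a few dozen generations of fecundity-weighted reproduction drive the population mean toward the top of the range - the same ratchet Smolin invokes for universes that are good at making black holes.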
Towards the end of In Search of the Multiverse: Parallel Worlds, Hidden Dimensions, and the Ultimate Quest for the Frontiers of Reality (2009), John Gribbin proposes a different solution to this quandary. Perhaps intelligent beings in a preceding universe might be responsible for creating the next generation of universes in the Multiverse by attaining the ability to create black holes on a massive scale. For example, researchers at CERN are currently trying to create mini black holes with the LHC. In order to explore that idea, let us compare the rate of black hole creation by natural means to a possible rate of black hole creation by agents possessing ASI. As for naturally occurring black holes, it is currently thought that there is a supermassive black hole at the center of the Milky Way galaxy, and apparently at the center of every other galaxy as well. In addition to the supermassive black holes found at the centers of galaxies, there are also numerous stellar-mass black holes that form when the most massive stars in the galaxies end their lives in supernova explosions. For example, our Milky Way galaxy contains several hundred billion stars, and about one out of every thousand of those stars is massive enough to become a black hole. Therefore, our galaxy should contain about 100 million stellar-mass black holes. Actually, the estimates run from about 10 million to a billion black holes in our galaxy, with 100 million being the best order-of-magnitude guess. So let us presume that it took the current age of the Milky Way galaxy, about 10 billion years, to produce 100 million black holes naturally. Currently, the LHC at CERN can produce at least 100 million collisions per second, which is about the number of black holes that the Milky Way galaxy produced in 10 billion years. Now imagine that we could build a collider that produced 100 million black holes per second.
Such a prodigious rate of black hole generation would surpass the natural rate of black hole production of our galaxy by a factor of about 3 x 10^17 - the number of seconds in 10 billion years. Clearly, if only a single technological civilization with such capabilities should arise anytime during the entire history of each galaxy within a given universe, such a universe would spawn a huge number of offspring universes, compared to those universes that could not sustain intelligent beings with such capabilities. As Lee Smolin pointed out, we would then see natural selection in action again, because the Multiverse would come to be dominated by universes in which it was easy for intelligent beings to make black holes with a minimum of technology. The requirements simply would be that it was very easy for a technological civilization to produce black holes, and that the universe in which these very rare technological civilizations find themselves is at least barely capable of supporting intelligent beings. It seems that these requirements describe the state of our Universe quite nicely. This hypothesis helps to explain why our Universe seems to be such a botched job from the perspective of providing a friendly home for intelligent beings and ASI software. All that is required for a universe to dominate the Cosmic Landscape of the Multiverse is for it to meet the bare minimum of requirements for intelligent beings to evolve and, more importantly, to allow those intelligent beings to easily create black holes. Most likely such intelligent beings would really be ASI software in action. In that sense, perhaps ASI software is the deity that all of us have always sought.
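A quick sanity check of the rate comparison above, using the round numbers quoted - roughly 100 million stellar-mass black holes formed naturally over the Milky Way's roughly 10 billion years, versus a hypothetical collider making 100 million black holes per second:

```python
# A quick check of the rate comparison, using the post's round numbers:
# ~100 million stellar-mass black holes formed naturally over the Milky
# Way's ~10 billion years, versus a hypothetical collider making
# 100 million black holes per second.
SECONDS_PER_YEAR = 3.156e7

natural_rate = 1e8 / (10e9 * SECONDS_PER_YEAR)  # black holes per second, naturally
collider_rate = 1e8                             # black holes per second, by machine

ratio = collider_rate / natural_rate  # equals the number of seconds in 10 billion years
print(f"collider rate / natural rate: {ratio:.1e}")  # ~3e17
```

Because both the natural tally and the collider rate are 100 million, the ratio reduces to the number of seconds in 10 billion years.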
So it seems we are left with two choices. Either crash civilization and watch billions of people die in the process, and perhaps go extinct as a species, or try to hold it all together for another 100 years or so and let ASI software naturally unfold on its own. If you look at the current state of the world, you must admit that Intelligence 1.0 did hit a few bumps in the road along the way - perhaps Intelligence 2.0 will do better. James Barrat starts one of the chapters in Our Final Invention with a quote from Woody Allen that seems quite fitting for our current situation:
More than any other time in history mankind faces a crossroads. One path leads to despair and utter hopelessness, the other to total extinction. Let us pray we have the wisdom to choose correctly.
— Woody Allen
As Woody Allen commented above, I do hope that we choose wisely.
Comments are welcome at firstname.lastname@example.org
To see all posts on softwarephysics in reverse order go to: