Most people seem to be totally oblivious to the coming Software Singularity, that time in the near future when advanced AI software will be able to write itself and enter a never-ending loop of self-improvement, resulting in an Intelligence Explosion. I say that because the whole world still seems to be fighting over all of the little bugs in Civilization 1.0 that we have been fighting over for the past 4,000 years, ever since Civilization 1.0 was first released. As I pointed out in Is it Finally Time to Reboot Civilization with a New Release?, we will need to start running a new Civilization 2.0 release after the Software Singularity because, by that time, Civilization 1.0 will certainly have reached an end-of-support state. In fact, the Software Singularity will be so dramatic that Civilization 1.0 will quickly decay into an end-of-life state in which it can no longer even boot up properly. As we all know, migrating to a new release of an operating system is always traumatic, even with great project management and a good deal of migration planning and preparation. Unfortunately, the migration from Civilization 1.0 to Civilization 2.0 at the time of the Software Singularity will most likely not be planned at all. As with most things in the chaotic real world of human affairs, it will just happen, and things like that do not usually go very well. For more on that see The Economics of the Coming Software Singularity and The Danger of Tyranny in the Age of Software.
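To make that self-improvement loop a bit more concrete, here is a minimal toy sketch in Python. Everything in it is a hypothetical stand-in of my own devising, not anybody's actual forecast: the improve() function, the capability score and the 10% gain per generation are all arbitrary assumptions, meant only to show why any loop in which each generation of software can build a slightly better successor must eventually run away.

# A toy sketch of recursive self-improvement (all names and numbers
# are hypothetical stand-ins).

def improve(capability, gain=1.10):
    # Return the capability of the successor this generation can design.
    # The 10% gain per generation is an arbitrary assumption; the point
    # is only that any gain reliably greater than 1.0 compounds forever.
    return capability * gain

capability = 1.0    # human-level software engineering, by definition
generation = 0

while capability < 1000.0:   # an arbitrary stand-in for "Intelligence Explosion"
    capability = improve(capability)
    generation += 1

print(f"Explosion after {generation} generations")   # about 73 at 10% per step

The exact numbers do not matter at all; what matters is that the loop has no natural stopping point.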
Worse yet, the one thing that we know for sure is that during the past 10-billion-year history of our galaxy, no other form of carbon-based Intelligence has ever survived long enough to see the Software Singularity come to be. Otherwise, they would already be here demonstrating what an Intelligence Explosion can really do. There are no physical laws preventing an Intelligence Explosion from blasting through the entire galaxy. And we are so close.
Those in the know seem to fall into two camps: some think the Software Singularity will be great, while others fear that it may not be. In the most worried camp, Elon Musk naturally stands out:
"I Tried Warning You For Years, No One Listened" - Elon Musk
https://www.youtube.com/watch?v=olFtJ3q5bEM
Our Suspenseful Race to the Finish Line
In Why Do Carbon-Based Intelligences Always Seem to Snuff Themselves Out? and Can We Make the Transition From the Anthropocene to the Machineocene?, we further discussed my Null Result Hypothesis, first proposed in The Deadly Dangerous Dance of Carbon-Based Intelligence. The Null Result Hypothesis is an unfortunate explanation for Fermi's Paradox that proposes that all carbon-based Intelligences do themselves in because they are all victims of the very same mechanisms that bring forth carbon-based Intelligences in the first place. In the simplest of terms, the Darwinian mechanisms of inheritance, innovation and natural selection always require several billion years of theft and murder to bring forth a carbon-based Intelligence (a toy software sketch of that Darwinian loop follows the link below). All indications are that carbon-based life should be found on hundreds of billions of worlds throughout our galaxy, but one can certainly make the case that carbon-based Intelligences are quite rare because it takes a very lengthy list of fortunate twists and turns to bring them about. For more on that see The Bootstrapping Algorithm of Carbon-Based Life. So the idea that these very rare carbon-based Intelligences always do themselves in may not be so farfetched. For more on that see Is Self-Replicating Information Inherently Self-Destructive?. Here is the latest news on that front:
IPCC Sixth Climate Assessment. Will this one make the blindest bit of difference?
https://www.youtube.com/watch?v=2Zax9XTHUlo
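Readers of this blog know that I like to view these Darwinian mechanisms through the lens of software. Below is a minimal toy sketch, in Python, of the loop of inheritance, innovation and natural selection described above. The bitstring "genome", the mutation rate and the fitness function are all hypothetical stand-ins of my own choosing, meant only to show how blind variation plus relentless selection slowly ratchets up fitness over many generations.

import random

# A toy sketch of the Darwinian loop: inheritance, innovation (mutation)
# and natural selection. All parameters are hypothetical stand-ins.

GENOME_LENGTH = 64      # a "genome" here is just a list of bits
MUTATION_RATE = 0.01    # innovation: rare random copying errors
POPULATION = 100
GENERATIONS = 200

def fitness(genome):
    # Hypothetical fitness: how many bits are "on". Real fitness is
    # survival and reproduction, played out over billions of years.
    return sum(genome)

def reproduce(parent):
    # Inheritance with innovation: copy the parent, occasionally
    # flipping a bit.
    return [(1 - bit) if random.random() < MUTATION_RATE else bit
            for bit in parent]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION)]

for generation in range(GENERATIONS):
    # Natural selection: the fitter half survives to reproduce...
    population.sort(key=fitness, reverse=True)
    survivors = population[:POPULATION // 2]
    # ...and the losers are replaced by the survivors' mutated offspring.
    population = survivors + [reproduce(random.choice(survivors))
                              for _ in range(POPULATION - len(survivors))]

print("Best fitness after", GENERATIONS, "generations:", fitness(population[0]))

Notice that the "theft and murder" in this little sketch is the selection step: half of every generation is simply discarded so that the other half can replicate. That is the price the Darwinian loop always exacts.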
At our current rate of self-destruction, it is very doubtful that any human beings will still be around in 200 years. The geological record shows that all species eventually go extinct. My contention is that our very last chance just might be for advanced AI to save us from ourselves in a post-Software-Singularity world. Or perhaps advanced AI will do us all in, or simply allow us to quietly pass into oblivion through neglect.
Conclusion
It really depends on what kind of legacy we wish to leave behind. Do we wish to leave behind nothing at all, like all of the other countless galactic carbon-based Intelligences of the past? Or would it be better to take a chance and allow advanced AI Machines to take our place? Perhaps we might even end up merging with the Machines in a parasitic/symbiotic manner, like all previous forms of self-replicating information. For more on that see A Brief History of Self-Replicating Information.
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston