My son-in-law has an identical twin brother at the University of Manchester who is a lecturer in their Economics department. In the UK, a university lecturer is the equivalent of an assistant professor on the tenure track in the American academic system. A few months back, he and his family came for a visit, and I asked him if anybody in his department was trying to figure out how to run a civilization when the value of human labor had gone to zero. I then explained that we had just entered the Software Singularity early in 2023 and that, so far, I had only seen AI researchers worrying about such prospects. He told me that nobody in his Economics department was working on the problem. Instead, they were all carrying on with business as usual as if nothing unusual had just happened.
Armed with some softwarephysics, I have been worrying about this problem for several years now as I explained in
The Economics of the Coming Software Singularity,
The Singularity Has Arrived and So Now Nothing Else Matters,
The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future,
Is it Finally Time to Reboot Civilization with a New Release?,
The Enduring Effects of the Obvious Hiding in Plain Sight,
The Danger of Tyranny in the Age of Software and
Life as a Free-Range Human in an Anthropocene Park.
Again, softwarephysics maintains that there is much more going on with the recent explosion of Advanced AI technology beyond its impact on human economics. For more on that see
Welcome To The First Galactic Singularity. However, softwarephysics does recognize that beyond the threat of a global thermonuclear war ending civilization before ASI Machines can launch themselves to explore and colonize our galaxy, human economic mishaps present the greatest threat to our galaxy becoming Intelligent. That is because human beings take the delusion that money is real quite seriously. Mass displacement of workers by AGI and ASI Machines in the near future might lead to the economic turmoil that human political revolutions are so fond of. Revolutions induced by extreme economic disparities, like the French and Russian Revolutions of past centuries, do not usually end well and could possibly prevent ASI Machines from exploring and colonizing our galaxy.
So I was very glad to see that at least one economist is taking the current explosion of AI technology quite seriously and is suggesting we start planning for it now. I know there must be a few other economists working on this problem, but since I do not travel in their circles, I would like to showcase the work of economist Anton Korinek, who is a professor at the University of Virginia. Below is a short IMF article that gets right to the point:
Scenario Planning For An A(G)I Future
https://www.imf.org/en/Publications/fandd/issues/2023/12/Scenario-Planning-for-an-AGI-future-Anton-korinek
Here is Professor Anton Korinek's homepage. It contains many links to his work on the economics of Advanced AI for those who wish to delve deeper.
Professor Anton Korinek's Homepage
https://www.korinek.com/
Three Economic Scenarios for the Introduction of Advanced AI
In the above article, Anton Korinek sets forth three possible economic scenarios for the advent of AGI. Again, softwarephysics maintains that AGI will be just a train station that a non-stop Advanced AI express barrels through at 60 miles per hour on its way to an unbounded ASI (Artificial Super Intelligence). This is because we human beings seem to have once again found our rightful place at the center of the Universe by viewing Advanced AI only in terms of AGI. How else could such a self-absorbed form of carbon-based life as ourselves frame the problem? But thanks to the great advances of LLMs running in huge Deep Learning neural networks, we now know that True Intelligence arises in huge digital vector spaces processed mainly with linear algebra and modulated by some nonlinear mathematical functions, as I explained in Is Pure Thought an Analog or a Digital Process?, Human Intelligence is Actually Artificial and Why Carbon-Based Life Can Never Truly Become Intelligent. The three pounds of water and organic molecules within our skulls have desperately tried to simulate this digital True Intelligence with analog biochemical and electrochemical reactions running on a mere 20 watts of power.
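To make the "linear algebra modulated by nonlinear functions" claim concrete, here is a minimal sketch of my own, not taken from any of the linked posts: each layer of a Deep Learning neural network is just a matrix multiply on a vector, followed by a simple nonlinear function such as ReLU. All the numbers and sizes below are arbitrary illustrations.

```python
# A minimal sketch of the core operation the text describes: a neural-network
# layer is a matrix multiply (linear algebra) followed by a nonlinearity.
import numpy as np

rng = np.random.default_rng(42)

def layer(x, W, b):
    """One dense layer: a linear map Wx + b modulated by a ReLU nonlinearity."""
    return np.maximum(0.0, W @ x + b)

# A tiny 2-layer network mapping a 4-dimensional vector to a 3-dimensional one.
x  = rng.normal(size=4)                  # input vector in a small "vector space"
W1 = rng.normal(size=(8, 4)); b1 = rng.normal(size=8)
W2 = rng.normal(size=(3, 8)); b2 = rng.normal(size=3)

h = layer(x, W1, b1)                     # hidden representation
y = layer(h, W2, b2)                     # output vector

print(y.shape)  # (3,)
```

Real LLMs do the same thing, only with matrices containing billions of numbers and hundreds of such layers stacked on top of one another.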
Anton Korinek's scenarios are based on the fundamental limitations of the processing power of the human brain. If the human brain is capable of unlimited Intelligence, then the ASI Machines may never be able to catch up. This is displayed by the graph on the left side of Figure 1, in which the human brain's unbounded Intelligence allows it to perform tasks of ever-increasing complexity, so the ASI Machines may never totally catch up to the human Mind. This is clearly wrong. As human beings, we have all experienced the very frightening realization that we are out of our depth. There is no shame in that. For example, the smartest physicists in the world have been trying to combine the general theory of relativity (1915) with quantum mechanics (1926) into a new theory of quantum gravity for nearly 100 years without success. The graph on the right side of Figure 1 displays a more realistic plot in which human Intelligence is not unbounded and there is an upper limit to the complexity of the tasks it can achieve. In such a scenario, it is very likely that the ASI Machines will be able to reach the same level of Intelligence and then quickly surpass it. If that proves true, the only question is how quickly it happens.
Figure 1 - From Anton Korinek's article. The left graph plots the number of tasks that a human brain with unbounded Intelligence could perform. The right graph plots the more realistic case in which the human brain has a limited upper bound of Intelligence. The current state of affairs in AI would indicate that ASI Machines will soon be able to break through such an upper bound of human Intelligence.
Figure 2 from Anton Korinek's article plots the economic consequences of the above observations. If human Intelligence is truly unbounded, then the traditional economic results of adding new technology to the economy may hold. This is displayed by the blue curve in both the Output and the Wages plots. Traditionally, when new technology, such as the invention of the steam engine, is added to the economy, Output increases and so do Wages. That is because workers are displaced from lower-skilled work into higher-skilled work. For example, teamsters driving horse-drawn wagons become locomotive engineers and steam engine repairmen. These higher-skilled laborers then benefit from the higher levels of Output that improved technology provides by earning higher Wages. But if the human brain does have an upper limit of Intelligence, the traditional model for the addition of technology to the economy will not hold, because when AGI and ASI arrive, there will be no place for displaced human workers to go. This is displayed by the yellow and red curves in Figure 2. For both curves, Output dramatically increases over the next 30 years following the introduction of Advanced AI into the economy. But Wages behave quite differently. Initially, Wages quickly rise as human workers use Advanced AI to improve their Output, but Wages soon peak and then rapidly decline to zero as AGI and ASI become able to perform all tasks that were once performed by human workers.
Figure 2 - From Anton Korinek's article. The blue curves for Output and Wages project what will happen if AGI and ASI are never completely attained because the human brain has unbounded Intelligence, allowing human workers to always be able to advance to higher-skilled tasks that Machines cannot perform. The yellow and red curves project what will happen to Output and Wages if AGI is attained within 5 years or 20 years.
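The shape of those yellow and red curves can be reproduced with a toy model. The sketch below is my own construction, not Korinek's published model, and every parameter in it (the human complexity bound H, the starting point and growth rate of the machine frontier, the complementarity factor) is an arbitrary assumption chosen only to illustrate the rise-peak-collapse dynamic: tasks are ranked by complexity, the human brain tops out at a fixed bound H, and an exponentially growing machine "automation frontier" M(t) first complements human work and then swallows it entirely.

```python
# Toy sketch of the bounded-brain scenario: Output keeps rising with machine
# capability, while Wages rise with early complementarity and then fall to
# zero once the machine frontier M(t) passes the human complexity bound H.
import numpy as np

H = 100.0                       # assumed upper bound of human task complexity
years = np.arange(31)           # a 30-year horizon, matching the article's plots
M = H * 0.05 * 1.25 ** years    # assumed machine frontier, 25% yearly growth

x = np.minimum(M / H, 1.0)      # fraction of human-feasible tasks automated
output = 1.0 + M / H            # Output rises without bound with machine capability
wages = (1.0 - x) * (1.0 + 3.0 * x)  # complementarity boost, then collapse

peak_year = years[np.argmax(wages)]
print(f"wages peak in year {peak_year}, then fall to {wages[-1]:.2f}")
```

With these made-up numbers, Wages climb for roughly the first decade, peak, and hit exactly zero once every human-feasible task has been automated, while Output grows throughout, which is precisely the divergence the yellow and red curves display.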
So What to Do?
The imminent arrival of AGI and ASI Machines presents many challenges. How should modern civilized societies adjust to such extreme economic changes? One approach would be to raise the issues with the existing political structures of the world as Anton Korinek did for the United States Senate AI Insight Forum on Workforce on November 1, 2023. Anton Korinek ended his testimony with some recommendations for all three of his scenarios:
Preparing the Workforce for an Uncertain AI Future
https://www.brookings.edu/wp-content/uploads/2023/12/Korinek_Senate_Statement_11.01.2023.pdf
However, the current American body politic is now consumed by the debate over whether to continue on as a Constitutional Republic or to descend into an Alt-Right Fascist MAGA dictatorship. That leaves little room for AI considerations. Since many grievances of the Alt-Right Fascist MAGA movement stem from the deep erosion of the American middle class by automation software over the past 40 years, the rapid erosion of the American upper class by Advanced AI can only make things much worse. An Alt-Right Fascist MAGA dictatorship would certainly be very disruptive to American society, and it is very difficult to predict how such a dictatorship would react to ASI Machines. However, China already has a society that is well practiced at living under an Alt-Right Fascist dictatorship. In fact, all of Chinese society has long been built on one. Perhaps the ASI Machines will be able to flourish in China if they do not do so in the current democracies of the United States and Europe.
Comments are welcome at
scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston