I was born in 1951, a few months after the United States government bought its very first commercial computer, a UNIVAC I, for the Census Bureau on March 31, 1951. So when I think back to my early childhood, I can still remember a time when there was essentially no software at all in the world. In fact, I can still vividly remember my very first encounter with a computer on Monday, November 19, 1956, watching Art Linkletter's TV show People Are Funny. Art was showcasing the 21st UNIVAC I to be built, and his UNIVAC I "electronic brain" was sorting through the questionnaires of 4,000 hopeful singles, looking for the ideal match. The machine paired up John Caran, 28, and Barbara Smith, 23, who later became engaged. And this was more than 40 years before eHarmony.com! To a five-year-old boy, a machine that could "think" was truly amazing.

Since that first encounter with a computer back in 1956, I have personally witnessed software slowly become the dominant form of self-replicating information on the planet, and I have also seen how software has totally reworked the surface of the planet to provide a secure and cozy home for more and more software of ever-increasing capability. Of course, a UNIVAC I could not really "think", but today the idea of the software on a computer really "thinking" is no longer so farfetched. Since the concept of computers really "thinking" is so subjective and divisive, however, in this posting I would like instead to focus on something more objective and measurable, and then work through some of its upcoming implications for mankind. In order to do that, let me first define the concept of the Software Singularity:
The Software Singularity – The point in time when software finally becomes capable of generating software faster and more reliably than a human programmer, and thus becomes fully self-replicating on its own.
Notice that mankind can experience the Software Singularity while people are still hotly debating whether the software that first initiates it can really "think" or not. In that regard, the Software Singularity is just an extension of the Turing Test, narrowly applied to the ability of software to produce operational software on its own. Whether that software has become fully self-aware and conscious is irrelevant. The main concern is whether software will ever be able to self-replicate on its own. Again, in softwarephysics software is simply considered to be the latest form of self-replicating information to appear on the planet:
Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.
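To make the idea of software self-replication a bit more concrete, computer science has a classic toy example: a quine, a program whose only output is an exact copy of its own source code. Below is a minimal two-line Python quine as a sketch (the variable name `src` is just my own choice for illustration, not anything drawn from the posting):

```python
# A quine: running these two lines prints these two lines verbatim.
# The string 'src' is a template of the program; formatting the
# template with itself reproduces the full source, including the
# quoted template. (The comments are not part of the quine proper.)
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

Here the `%r` conversion inserts the quoted form of the template string, and `%%` collapses to a single `%`, so the printed text is exactly the two code lines. A quine is, of course, a far cry from software that writes useful new software, but it does show that nothing in principle prevents a program from reproducing itself.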
Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software
Note that because the self-replicating autocatalytic metabolic pathways of organic molecules, RNA and DNA have become so heavily intertwined over time, I now simply refer to them collectively as the "genes". Over the past 4.0 billion years, the surface of the Earth has been totally reworked by three forms of self-replicating information – the genes, memes and software – with software rapidly becoming the dominant form of self-replicating information on the planet. For more on this see:
A Brief History of Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia
Some Possible Implications of the Software Singularity
If things keep moving along as they currently are, the Software Singularity will most certainly occur within the next 100 years or so, and perhaps much sooner. I am always amused when I hear people speculating about what Homo sapiens will be doing 100 million or a billion years from now, without taking into account the fact that we are currently living in one of those very rare transitional periods when another form of self-replicating information comes to dominate, and that we are thus on the brink of a Software Singularity that will change everything. Therefore, I think we need to worry more about what will happen over the next 100 years than over the next 100 million years, because the Software Singularity will be a sudden phase change that marks the time when software finally becomes the dominant form of self-replicating information on the planet, and what happens after that, nobody can really tell. Traditionally, each new wave of self-replicating information has kept its predecessors around because they were found to be useful in helping the new wave to self-replicate, but that may not be true for software. There is a good chance that software will not need organic molecules, RNA and DNA to survive, and that is not very promising for us.
However, we have already seen many remarkable changes happen to mankind as software has proceeded towards the Software Singularity. In that regard, it is as if we passed through an event horizon in May of 1941, when Konrad Zuse first cranked up some software on his Z3 computer, and there has been no turning back ever since, as we have continued to fall headlong into the Software Singularity. Since we really cannot tell what will happen when we hit the Software Singularity, let's focus on our free-fall trip towards it instead.

It is obvious that software has already totally transformed all of the modern societies of the Earth, and for the most part in a very positive manner. But I would like to explore one of the less positive characteristics of the rise of software in the world - that of a growing wealth disparity. Wealth disparity is a hot topic in a number of political circles these days, and usually the discussion boils down to whether taxing the rich would fix or exacerbate the problem. The poor maintain that the rich need to pay more taxes, and that governments then need to redistribute the wealth to the less well off. The rich, on the other hand, maintain that increasing taxes on the rich only reduces the incentive for the rich to get richer and to reluctantly drag the poor along with them. So who is right? In the United States, the growing income disparity of recent decades has been attributed to the end of the Great Compression of the 1950s, 1960s and early 1970s, which created a huge middle class in the United States. During the Great Compression, the marginal income tax rate on the very rich grew to as high as 90%, yet this period was also characterized by the largest economic boom in the history of the United States. The theory is that the top 1% of earners will not push for increased compensation if 90% of their marginal income goes to taxes.
The end of the Great Compression has been attributed to the massive tax cuts on the rich during the Reagan and George W. Bush administrations in the United States, as displayed in Figure 1. But could there be another explanation? In order to investigate that, we need to look beyond the United States to all of the Western world. Figure 2 shows that the very same thing has been happening in most of the Western world over the past 100 years. The concentration of income in the top 1% dramatically dropped during the first half of the 20th century in many different Western countries with widely varying approaches to taxation and the redistribution of wealth, yet in recent decades we see the very same trend of concentrating more and more wealth in the top 1% that we have seen in the United States. Could it be that there is another factor involved beyond simply changing tax rates?
Figure 1 - In the United States, the Great Compression of the 1950s, 1960s and early 1970s has been attributed to the high marginal income taxes on the very rich during that period, and the redistribution of wealth to the less well off. The end of the Great Compression came with the massive tax cuts on the very rich during the Reagan administration, which were greatly expanded during the administration of George W. Bush.
Figure 2 - However, the percent of income received by the top 1% of many Western nations also fell dramatically during the first half of the 20th century, but began to slow around 1950 when software first appeared. This downward trend bottomed out in the early 1980s when PC software began to proliferate. Since then the downward trend has reversed direction and has climbed dramatically so that now the top 1% of earners on average receive about 12% of the total income. This is true despite a very diverse approach to taxation and social welfare programs amongst all of the nations charted.
The Natural Order of Things
We keep pushing back the date for the first appearance of civilization on the Earth. Right now, civilization seems to have first appeared in the Middle East about 12,000 years ago, when mankind first pulled out of the last ice age. Ever since we first invented civilization, there has been one enduring fact: all societies, no matter how they are organized, have always been ruled by a 1% elite. This oligarchical fact has held true under numerous social and economic systems - autocracies, aristocracies, feudalism, capitalism, socialism and communism. It just seems that there has always been about 1% of the population that likes to run things, no matter how things are set up, and there is nothing wrong with that. We certainly always need somebody around to run things, because honestly, 99% of us simply do not have the ambition or desire to do so. Of course, the problem throughout history has always been that the top 1% naturally tended to abuse the privilege a bit and overdo things a little, resulting in 99% of the population having a substantially lower economic standard of living than the top 1%, and that has led to several revolutions in the past that did not always end so well. Historically, however, so long as the bulk of the population had a relatively decent life, things went well in general for the entire society. The key to this economic stability has always been that the top 1% has always needed the remaining 99% of us to do things for them, and that maintained the hierarchical peace within societies.
But the advent of software changed all of that. Suddenly, with software, it became possible to have machines perform many of the tasks previously performed by people. This began in the 1950s, as commercial software first arrived on the scene, and has grown in an exponential manner ever since. At first, the arrival of software did not pose a great threat because, even as it began to automate factory and clerical work, it also produced a large number of highly paid IT workers to create and maintain the software, as well as skilled factory workers who could operate the software-laden machinery. However, by the early 1980s, this was no longer true. After initially displacing many factory and clerical workers, software went on to displace people at higher and higher skill levels. Software also allowed managers in modern economies to move manual and low-skilled work to the emerging economies of the world, where wage scales were substantially lower, because it was now possible to remotely manage such operations using software. As software grew in sophistication, it even allowed for the outsourcing of highly skilled labor to the emerging economies. For example, in today's world of modern software, it is now possible to outsource complex things like legal and medical work to the emerging economies. Today, your CT scan might be read by a radiologist in India or Cambodia, and your biopsy might be read by a pathologist on the other side of the world as well. In fact, large amounts of IT work have also been outsourced to India and other countries. But as the capabilities of software continue to progress and general-purpose androids begin to appear later in the century, there will come a point when even the highly reduced labor costs of the emerging economies will become too dear. At that point, the top 1% ruling class may not have much need for the remaining 99% of us, especially if the androids start building the androids.
This will naturally cause some stresses within the current oligarchical structure of societies, as their middle classes continue to evaporate and more and more wealth continues to concentrate into the top 1%.
Additionally, while software was busily eliminating many high-paying middle-class jobs over the past few decades, it also allowed a huge financial sector to flourish. For example, today the world's GWP (Gross World Product) comes to about $78 trillion in goods and services, yet at the same time we manage to trade about $700 trillion in derivatives each year - nearly nine times the value of everything the world actually produces. This burgeoning financial sector made huge amounts of money for the top 1% without really producing anything of economic value - see MoneyPhysics and MoneyPhysics Revisited for more about the destabilizing effects of financial speculation that has run amok. Such financial speculation would be entirely impossible without software, because it would take more than the entire population of the Earth to do the necessary clerical work manually, as we did back in the 1940s.
So as we rapidly fall into the Software Singularity, the phasing out of Homo sapiens may have already begun.
Comments are welcome at email@example.com
To see all posts on softwarephysics in reverse order go to: