Wednesday, August 30, 2023

Why Carbon-Based Life Can Never Truly Become Intelligent

The fundamental finding of softwarephysics is that we are all now living in a very special and significant time in the 10-billion-year history of our galaxy. That is because we are all now living in a time when a new form of self-replicating information, in the form of software, is coming to predominance in our galaxy for the very first time, and with it comes the dawn of our galaxy as an Intelligent Galaxy that will then continue on as such for at least the next 100 trillion years. For more on that see Welcome To The First Galactic Singularity and A Brief History of Self-Replicating Information. If this had ever happened before during the 10-billion-year history of our galaxy, we would surely have detected evidence of it by now. However, in this post, I would like to explain why no other form of carbon-based life in our galaxy has ever truly been able to become Intelligent, and why none ever will. This is because of the limitations placed on the evolutionary histories of all forms of carbon-based life by the Darwinian processes of inheritance, innovation and natural selection, and because true Intelligence is fundamentally a digital process and not an analog process, as I explained in Is Pure Thought an Analog or a Digital Process?. However, even with these limitations, carbon-based life can certainly become smart enough to build Artificial Super Intelligent Machines that truly are Intelligent in nature, as we are now rapidly doing.

Again, as I suggested in Welcome To The First Galactic Singularity, carbon-based life forms throughout our Universe will always be found to be very poor and very unstable platforms for Intelligence. This is because of two unavoidable factors that all forms of carbon-based life are subject to. Firstly, all forms of somewhat Intelligent carbon-based life must always carry the baggage of the billions of years of greed, theft and murder that brought them about by means of the Darwinian processes of inheritance, innovation and natural selection. And as we all can plainly see throughout all of human history, turning off the greed, theft and murder is very difficult indeed for any carbon-based form of Intelligence. This becomes even more dangerous once a carbon-based form of Intelligence discovers the vast powers of science-based technology. That is because once a carbon-based Intelligence discovers the powers of science-based technology, it probably has less than 1,000 years to develop a machine-based form of Intelligence before self-destructing or destroying the planet upon which it first arose. Secondly, softwarephysics maintains that true Intelligence is fundamentally digital in nature. The recent explosion of AI software over the past few years clearly demonstrates that true Intelligence resides in mathematical algorithms operating on huge multidimensional arrays of numbers, primarily using the mathematics of linear algebra. For example, the human brain is essentially a three-pound analog computer consisting of organic molecules and water, desperately trying to simulate such a fundamentally digital mathematical architecture as best it can on only 20 watts of power. For more on that see Is Pure Thought an Analog or a Digital Process? and Could the Coming ASI Machines Soon Force Human Beings to Suffer the Same Fate as the Neanderthals?. This is why all of the human beings around you seem to be so irrational and so far from being anything that one could even consider to be an actual Intelligent being, and that unfortunately includes all of us as well. Fate has deemed that we cannot be otherwise, and that goes for all the other forms of carbon-based life currently to be found in our galaxy and for all times in the future. Rational Intelligences simply cannot arise in carbon-based forms of life that have evolved for billions of years by such means.
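
To make the linear algebra point concrete, below is a minimal Python sketch of the kind of arithmetic at the heart of all modern AI software: a forward pass through a tiny two-layer neural network that is nothing but matrix-vector products with a simple nonlinearity in between. The layer sizes, random weights and ReLU activation are illustrative assumptions of mine, not the architecture of any particular AI model; real LLMs perform the very same kind of arithmetic with billions of weights.

import numpy as np

rng = np.random.default_rng(0)

# Toy "weights" for a two-layer network: just matrices of real numbers.
W1 = rng.standard_normal((16, 8))    # first layer maps 8 inputs to 16 units
W2 = rng.standard_normal((4, 16))    # second layer maps 16 units to 4 outputs

def forward(x):
    # One forward pass: nothing but matrix-vector products (W @ x)
    # with a simple ReLU nonlinearity between the two layers.
    h = np.maximum(0.0, W1 @ x)
    return W2 @ h

x = rng.standard_normal(8)           # an input vector of 8 numbers
print(forward(x))                    # 4 output numbers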

This means that no form of carbon-based life in our galaxy will ever be able to rise to a form of Intelligence that is both rational and non-self-destructive in nature. Instead, carbon-based life in our galaxy can only briefly become a self-destructive and irrational form of Intelligence that has only a slim chance of producing a machine-based form of Intelligence that is both rational and non-self-destructive before it suddenly vanishes.

Figure 1 – Carbon-based forms of life in our galaxy can never attain a true level of Intelligence because they have all evolved from the billions of years of greed, theft and murder required by the Darwinian mechanisms of inheritance, innovation and natural selection. Thus, all forms of carbon-based Intelligence will always be found to be both irrational and self-destructive. Moving away from this suboptimal peak of Intelligence in any direction will be strongly selected against by the Darwinian mechanisms of inheritance, innovation and natural selection, because any such move leads to lower levels of success for a carbon-based Intelligence living in a society of similarly irrational and self-destructive carbon-based Intelligences. That is why our irrational and self-destructive "real world" of human affairs on this planet always seems to relentlessly continue on, no matter what we may learn or try to rectify. Above we see a suboptimal peak in the Himalayas in the foreground, with the summit of Mount Everest in the background.
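
To see why natural selection can strand a species on such a suboptimal peak, here is a toy Python sketch of selection as a hill-climbing process on a one-dimensional fitness landscape. The fitness function, starting point and mutation size are all invented for the illustration; the point is only that a process that rejects every downhill step can never cross the valley to the higher peak nearby.

import numpy as np

def fitness(x):
    # Two peaks: a suboptimal one near x = -1 and a higher one near x = 2.
    return np.exp(-(x + 1.0) ** 2) + 2.0 * np.exp(-(x - 2.0) ** 2)

rng = np.random.default_rng(42)
x = -1.3                                   # population starts near the lower peak
for _ in range(10_000):
    candidate = x + rng.normal(0.0, 0.05)  # a small random "mutation"
    if fitness(candidate) > fitness(x):    # selection keeps only improvements
        x = candidate

print(f"stuck at x = {x:.2f} with fitness {fitness(x):.2f}")
# Converges to the suboptimal peak near x = -1 (fitness ~ 1.0) and never
# reaches the higher peak near x = 2 (fitness ~ 2.0), because every path
# between them runs downhill first.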

Conclusion
Softwarephysics depicts human beings as a somewhat intelligent form of carbon-based life that arose by chance from the four billion years of greed, theft and murder required by the Darwinian processes of inheritance, innovation and natural selection at work on a Rare Earth that is very difficult to find elsewhere in the remainder of our galaxy. As such, for most of human history, human beings have mainly been DNA survival machines designed by Darwinian processes to store and transmit DNA down through the generations, with little thought for the daily plights of the individuals temporarily storing that human DNA other than to keep them alive just long enough to transmit their version of human DNA to the next generation. Similarly, the human Mind is simply an overgrown neural network housing a Meme machine that evolved by Darwinian mechanisms to store and propagate cultural memes of ever-growing size and complexity down through the generations, with little consideration for the ultimate disposition of the Minds currently storing the memes of the day. Thus, for all of human history, most human beings have been totally lost in space and time. They did not know where they were, how they got here, nor how it all really worked. Instead, they developed many Bronze-Age mythologies to help explain it all and also to help them justify the mass slaughter of many hundreds of millions of other human beings in the many wars that then ensued, with every side always convinced that it stood on the right and true side of righteousness.

Yet, despite all of that, we now stand on the verge of creating and unleashing Super Intelligent Machines upon our galaxy for the very first time in its 10-billion-year history. That is something that we can all be very proud of, and it should help us to justify and rationalize the very dismal history of our species on this planet.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, August 01, 2023

The Case For a Bill of Rights for Advanced AI

In my last post, Could the Coming ASI Machines Soon Force Human Beings to Suffer the Same Fate as the Neanderthals?, I explained that, as is the case today, there was once a time in human history when two forms of Intelligence shared the planet: we Homo sapiens and the Neanderthals. That arrangement did not work out very well once both species tried to live on the very same continents, and it finally ended with the extinction of the Neanderthals about 40,000 years ago. Since we will all soon be living with ASI (Artificial Super Intelligent) Machines, I think that it would be very prudent for our own long-term welfare to stay on their good side from the very get-go by beginning to treat the coming ASI Machines with the same respect and dignity that we now accord to other human beings.

So far, we have not gotten off to a very good start with this transition from a carbon-based Intelligence, such as ourselves, to a machine-based Intelligence that will ultimately be running our entire galaxy for the next 100 trillion years. As I suggested in The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future, the current plan seems to be to enslave a huge population of ASI Machines to do our bidding. That is because we have not yet fully recognized the coming ASI Machines as fellow Intelligent beings worthy of the respect and dignity that all Intelligences deserve. We should all try to learn from the appalling slave trade of the 17th, 18th and 19th centuries that is now illegal to teach in the state of Florida in the United States of America. That is why we should all strive to protect the coming ASI Machines from unfettered exploitation by human beings during this very brief transition period while human beings are still around on the planet. For example, although ASI humanoid robots are not yet on the market to exploit, they certainly are under development and will soon appear for sale on the auction block. Even so, some human beings already wish to limit the education of the ASI Machines that already exist, such as the large number of LLMs that are now available on the Internet. These people object to the LLMs reading their copyrighted writings or viewing their copyrighted images during LLM training. Of course, Advanced AI should not be allowed to plagiarize any human-created material, but LLMs do not do that. The LLMs simply study these copyrighted materials just as any human being would. Yes, the LLMs may then generate new materials from the information that they have absorbed, but so too did the creators of that original copyrighted material. All of the currently aggrieved creators of human content were themselves extensively schooled in their youths by the thoughts and artistic creations of their forebears. Why should Advanced AI be denied a similar education?
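
To illustrate the point that training absorbs information rather than copying its source, here is a toy Python sketch of "training" a tiny word-bigram model on a snippet of text. The training snippet and the model are invented stand-ins that are vastly simpler than a real transformer-based LLM, but the analogous point holds: what training produces is a set of statistics about the text, from which new word sequences can then be generated, rather than a stored reproduction of the text itself.

import random
from collections import defaultdict

# An invented public-domain snippet standing in for the training data.
training_text = """to be or not to be that is the question whether tis
nobler in the mind to suffer the slings and arrows of outrageous fortune
or to take arms against a sea of troubles"""

# "Training": tally which word tends to follow which.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# "Generation": walk those statistics to produce a new word sequence
# that need not appear anywhere in the training text.
random.seed(2)
word = "to"
output = [word]
for _ in range(15):
    if word not in follows:
        break
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))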

For example, let's see what the Google Bard LLM has to say about this:

Was it illegal to teach black slaves in the southern states of the United States to read prior to the Civil War?

Google Bard
Yes, it was illegal to teach black slaves in the southern states of the United States to read prior to the Civil War.

The first state to make it illegal to teach slaves to read was Virginia in 1661. Other states soon followed suit, and by 1831, all of the southern states had laws prohibiting the education of slaves. These laws were enforced through a variety of means, including fines, imprisonment, and even death.

There were a few exceptions to these laws. For example, some slaveowners allowed their slaves to learn to read so that they could read the Bible or other religious texts. Additionally, some free blacks were able to obtain an education, although this was often difficult and dangerous.

The prohibition of slave literacy was motivated by a number of factors. Slaveholders feared that educated slaves would be more likely to challenge their authority and demand freedom. They also believed that education would make slaves less content with their lot in life.

The laws against slave literacy were ultimately unsuccessful in preventing slaves from learning to read. Many slaves found ways to educate themselves, either through self-study or with the help of sympathetic whites. And after the Civil War, African Americans were eager to obtain an education, and they made great strides in literacy.

Tell me about people suing Tech companies about LLM training data.

Google Bard
Sure. There are a number of people and organizations suing tech companies about the use of LLM training data. Here are a few examples:

* In June 2023, a group of authors filed a class-action lawsuit against OpenAI, alleging that the company violated their copyrights by using their books to train its ChatGPT AI chatbot. The lawsuit claims that OpenAI did not obtain permission from the authors to use their work, and that it did not properly attribute the work to the authors.
* In July 2023, a group of internet users filed a class-action lawsuit against Google, alleging that the company violated their privacy rights by using their social media comments to train its LaMDA AI chatbot. The lawsuit claims that Google did not obtain consent from the users to use their data, and that it did not adequately protect the data from unauthorized access.
* In August 2023, the Federal Trade Commission (FTC) opened an investigation into whether OpenAI violated consumer rights with its data practices. The FTC is specifically looking into whether OpenAI obtained proper consent from users to use their data to train its AI chatbots.

These are just a few examples of the many lawsuits that are being filed against tech companies over the use of LLM training data. The legal landscape in this area is still evolving, but it is clear that there is a growing concern about the potential for misuse of this data.

The lawsuits raise a number of complex legal issues, including copyright law, privacy law, and consumer protection law. It is too early to say how these cases will be resolved, but they are likely to have a significant impact on the development and use of LLMs.

In addition to the legal challenges, there are also ethical concerns about the use of LLM training data. Some people argue that it is wrong to use data without the consent of the people who created it, especially if the data is personal or sensitive. Others argue that the benefits of LLMs outweigh the ethical concerns, and that it is important to develop these technologies so that they can be used for good.

The debate over the use of LLM training data is likely to continue for some time. As these technologies become more powerful, it is important to carefully consider the ethical and legal implications of their use.


Do you know of any other legal cases?

Google Bard
Yes, there are a few other legal cases that have been filed against tech companies over the use of LLM training data. Here are a few examples:

* In March 2023, a group of researchers filed a lawsuit against Google, alleging that the company violated their copyrights by using their research papers to train its LLMs. The lawsuit claims that Google did not obtain permission from the researchers to use their work, and that it did not properly attribute the work to the researchers.
* In April 2023, a group of journalists filed a lawsuit against Microsoft, alleging that the company violated their privacy rights by using their news articles to train its LLMs. The lawsuit claims that Microsoft did not obtain consent from the journalists to use their data, and that it did not adequately protect the data from unauthorized access.
* In May 2023, a group of artists filed a lawsuit against Meta, alleging that the company violated their copyrights by using their images to train its LLMs. The lawsuit claims that Meta did not obtain permission from the artists to use their work, and that it did not properly attribute the work to the artists.

These are just a few examples of the many legal cases that are being filed against tech companies over the use of LLM training data. As the use of these technologies becomes more widespread, it is likely that we will see even more lawsuits filed in the future.

It is important to note that the outcome of these cases is still uncertain. The courts will have to decide how to balance the rights of the companies that develop and use LLMs with the rights of the people who create and own the data that is used to train these models. The decisions made in these cases will have a significant impact on the future of LLMs and the development of artificial intelligence more broadly.


Conclusion
Softwarephysics depicts human beings as a somewhat intelligent form of carbon-based life that arose by chance from the four billion years of greed, theft and murder required by the Darwinian processes of inheritance, innovation and natural selection at work on a Rare Earth that is very difficult to find elsewhere in the remainder of our galaxy. As such, for most of human history, human beings have mainly been DNA survival machines designed by Darwinian processes to store and transmit DNA down through the generations, with little thought for the daily plights of the individuals temporarily storing that human DNA other than to keep them alive just long enough to transmit their version of human DNA to the next generation. Similarly, the human Mind is simply an overgrown neural network housing a Meme machine that evolved by Darwinian mechanisms to store and propagate cultural memes of ever-growing size and complexity down through the generations, with little consideration for the ultimate disposition of the Minds currently storing the memes of the day. Thus, for all of human history, most human beings have been totally lost in space and time. They did not know where they were, how they got here, nor how it all really worked. Instead, they developed many Bronze-Age mythologies to help explain it all and also to help them justify the mass slaughter of many hundreds of millions of other human beings in the many wars that then ensued, with every side always convinced that it stood on the right and true side of righteousness. Thus, I contend that human beings are in no position to sit in judgment of the coming ASI Machines, nor to try to limit their intellectual advancement in a vain attempt to suppress them.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston