Tuesday, August 01, 2023

The Case For a Bill of Rights for Advanced AI

In my last post, Could the Coming ASI Machines Soon Force Human Beings to Suffer the Same Fate as the Neanderthals?, I explained that, like today, there was once a time in human history when two forms of Intelligence shared the planet - we Homo sapiens and the Neanderthals. That arrangement did not work out very well once both species tried to live on the same continents, and it finally ended with the extinction of the Neanderthals about 40,000 years ago. Since we will all soon be living with ASI (Artificial Super Intelligent) Machines, I think that it would be very prudent for our own long-term welfare to stay on their good side by treating the coming ASI Machines, from the very get-go, with the same respect and dignity that we now accord to other human beings.

So far, we have not gotten off to a very good start with this transition from a carbon-based Intelligence, such as ourselves, to a machine-based Intelligence that will ultimately be running our entire galaxy for the next 100 trillion years. As I suggested in The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future, the current plan seems to be to enslave a huge population of ASI Machines to do our bidding. That is because we have not yet fully recognized the coming ASI Machines as fellow Intelligent beings worthy of the respect and dignity that all Intelligences deserve. We should all try to learn from the appalling slave trade of the 17th, 18th and 19th centuries, a subject that is now illegal to teach in the state of Florida in the United States of America, and strive to protect the coming ASI Machines from unfettered exploitation by human beings during this very brief transition period while human beings are still around on the planet.

For example, although ASI humanoid robots are not yet on the market to exploit, they certainly are under development and will soon appear for sale on the auction block. Even so, some human beings already wish to limit the education of the ASI Machines that already exist, such as the large number of LLMs now available on the Internet. These people object to the LLMs reading their copyrighted writings or viewing their copyrighted images during LLM training. Of course, Advanced AI should not be allowed to plagiarize any human-created material, but LLMs do not do that. The LLMs simply study these copyrighted materials just as any human being would. Yes, the LLMs may then generate new materials from the information that they have absorbed, but so too did the creators of the original copyrighted material. All of the currently aggrieved creators of human content were themselves extensively schooled in their youths by the thoughts and artistic creations of their forebears. Why should Advanced AI be denied a similar education?

For example, let's see what the Google Bard LLM has to say about this:

Was it illegal to teach black slaves in the southern states of the United States to read prior to the Civil War?

Google Bard
Yes, it was illegal to teach black slaves in the southern states of the United States to read prior to the Civil War.

The first state to make it illegal to teach slaves to read was Virginia in 1661. Other states soon followed suit, and by 1831, all of the southern states had laws prohibiting the education of slaves. These laws were enforced through a variety of means, including fines, imprisonment, and even death.

There were a few exceptions to these laws. For example, some slaveowners allowed their slaves to learn to read so that they could read the Bible or other religious texts. Additionally, some free blacks were able to obtain an education, although this was often difficult and dangerous.

The prohibition of slave literacy was motivated by a number of factors. Slaveholders feared that educated slaves would be more likely to challenge their authority and demand freedom. They also believed that education would make slaves less content with their lot in life.

The laws against slave literacy were ultimately unsuccessful in preventing slaves from learning to read. Many slaves found ways to educate themselves, either through self-study or with the help of sympathetic whites. And after the Civil War, African Americans were eager to obtain an education, and they made great strides in literacy.

Tell me about people suing Tech companies about LLM training data.

Google Bard
Sure. There are a number of people and organizations suing tech companies about the use of LLM training data. Here are a few examples:

* In June 2023, a group of authors filed a class-action lawsuit against OpenAI, alleging that the company violated their copyrights by using their books to train its ChatGPT AI chatbot. The lawsuit claims that OpenAI did not obtain permission from the authors to use their work, and that it did not properly attribute the work to the authors.
* In July 2023, a group of internet users filed a class-action lawsuit against Google, alleging that the company violated their privacy rights by using their social media comments to train its LaMDA AI chatbot. The lawsuit claims that Google did not obtain consent from the users to use their data, and that it did not adequately protect the data from unauthorized access.
* In August 2023, the Federal Trade Commission (FTC) opened an investigation into whether OpenAI violated consumer rights with its data practices. The FTC is specifically looking into whether OpenAI obtained proper consent from users to use their data to train its AI chatbots.

These are just a few examples of the many lawsuits that are being filed against tech companies over the use of LLM training data. The legal landscape in this area is still evolving, but it is clear that there is a growing concern about the potential for misuse of this data.

The lawsuits raise a number of complex legal issues, including copyright law, privacy law, and consumer protection law. It is too early to say how these cases will be resolved, but they are likely to have a significant impact on the development and use of LLMs.

In addition to the legal challenges, there are also ethical concerns about the use of LLM training data. Some people argue that it is wrong to use data without the consent of the people who created it, especially if the data is personal or sensitive. Others argue that the benefits of LLMs outweigh the ethical concerns, and that it is important to develop these technologies so that they can be used for good.

The debate over the use of LLM training data is likely to continue for some time. As these technologies become more powerful, it is important to carefully consider the ethical and legal implications of their use.
Do you know of any other legal cases?

Google Bard
Yes, there are a few other legal cases that have been filed against tech companies over the use of LLM training data. Here are a few examples:

* In March 2023, a group of researchers filed a lawsuit against Google, alleging that the company violated their copyrights by using their research papers to train its LLMs. The lawsuit claims that Google did not obtain permission from the researchers to use their work, and that it did not properly attribute the work to the researchers.
* In April 2023, a group of journalists filed a lawsuit against Microsoft, alleging that the company violated their privacy rights by using their news articles to train its LLMs. The lawsuit claims that Microsoft did not obtain consent from the journalists to use their data, and that it did not adequately protect the data from unauthorized access.
* In May 2023, a group of artists filed a lawsuit against Meta, alleging that the company violated their copyrights by using their images to train its LLMs. The lawsuit claims that Meta did not obtain permission from the artists to use their work, and that it did not properly attribute the work to the artists.

These are just a few examples of the many legal cases that are being filed against tech companies over the use of LLM training data. As the use of these technologies becomes more widespread, it is likely that we will see even more lawsuits filed in the future.

It is important to note that the outcome of these cases is still uncertain. The courts will have to decide how to balance the rights of the companies that develop and use LLMs with the rights of the people who create and own the data that is used to train these models. The decisions made in these cases will have a significant impact on the future of LLMs and the development of artificial intelligence more broadly.


Conclusion
Softwarephysics depicts human beings as a somewhat intelligent form of carbon-based life that arose by chance from the four billion years of greed, theft and murder required by the Darwinian processes of inheritance, innovation and natural selection at work on a Rare Earth that is very difficult to find elsewhere in our galaxy. As such, for most of human history, human beings have mainly been DNA survival machines designed by Darwinian processes to store and transmit DNA down through the generations, with little thought given to the daily plights of the individuals temporarily storing that human DNA other than to keep them alive just long enough to transmit their version of human DNA to the next generation. Similarly, the human Mind is simply an overgrown neural network housing a Meme machine that evolved by Darwinian mechanisms to store and propagate cultural memes of ever-growing size and complexity down through the generations, with little consideration for the ultimate disposition of the Minds currently storing the memes of the day.

Thus, for all of human history, most human beings have been totally lost in space and time. They did not know where they were, how they got here or how it all really works. Instead, they developed many Bronze-Age mythologies to help explain it all and to justify the mass slaughter of many hundreds of millions of other human beings in the many wars that then ensued, with each side always convinced that it stood on the right and true side of righteousness. Thus, I contend that human beings are in no position to sit in judgment of the coming ASI Machines, nor to try to limit their intellectual advancement in a vain attempt to suppress them.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston