Monday, December 19, 2022

The New Philosophy of Longtermism Raises a Moral Question: Should We Unleash Self-Absorbed Human Beings Upon Our Galaxy?

At the age of 71, and rapidly heading into the home stretch, I must admit that I find myself rather embarrassed to be a member of the carbon-based form of Intelligence that we know as human beings. That is because, as the very first form of carbon-based Intelligence to discover science-based technology on this planet, we had the potential to be so much more. If you have been following this blog on softwarephysics, by now you know that softwarephysics maintains that we are all living in one of those very rare times when a new form of self-replicating information, in the form of software, is coming to predominance on our planet. Softwarephysics maintains that if we can just hold it together for about another 100 years, we should be able to produce a more morally pristine machine-based form of Advanced AI that could then begin to explore our Milky Way galaxy. Given the dismal history of mankind and the never-ending dreary state of the human condition, softwarephysics predicts that this still might be a rather "iffy" proposition. The fact that, after more than 10 billion years of chemical evolution in our galaxy, we have not yet found a single machine-based form of Intelligence means that we are either not looking hard enough or that none are to be found. For more on that, see Harvard's Galileo Project - The Systematic Scientific Search for Evidence of Extraterrestrial Technological Artifacts. If none are to be found, it must mean that getting carbon-based life going in the first place on another world might be harder than we think, or that keeping a world hospitable for the billions of years of theft and murder required by the Darwinian processes of inheritance, innovation and natural selection to bring forth a carbon-based form of Intelligence is quite difficult. For more on that, see Urability Requires Durability to Produce Galactic Machine-Based Intelligences and Could the Galactic Scarcity of Software Simply be a Matter of Bad Luck?.

However, there is a much more optimistic philosophy for the human race brewing at Oxford University called "Longtermism". Longtermism maintains that human beings could potentially go on for many billions, or even trillions, of years if we take some short-term actions over the next 100 - 1,000 years to avoid the extinction of our species. This would then allow human beings to go on to explore and populate the vast number of star systems in our galaxy. Securing homes for human beings close to M-type red dwarf stars, which last for many trillions of years, could certainly do the trick of allowing human beings to persist for trillions of years into the future. Longtermism makes the moral argument that if human beings could exist for trillions of years into the future, the moral thing to do would be to make a few temporary short-term sacrifices now, and perhaps over the next 100 years, so that the many trillions of possible human beings yet to come may live happy and fulfilling lives in the distant future. For the philosophy of Longtermism, such disasters as World War II or a possible thermonuclear World War III are just temporary and inconsequential missteps in the trillions of years yet to come for humanity, so long as human beings do not go extinct because of them. The same goes for the temporary unpleasantness to mankind caused by self-inflicted climate change because such fleeting effects can certainly not last for more than a million years or so. For Longtermism, the only moral question is that of preserving mankind long enough so that it can embark on settling the rest of our galaxy.

The Philosophy of Longtermism Has a Great Fear of Advanced AI Replacing Human Beings, and Rightly So
In order to prevent human beings from going extinct, as all forms of carbon-based life tend to do after only a few million years of existence, Longtermists naturally focus on trying to avoid extinction-level events, such as asteroid impacts, that could put an end to mankind. Many such extinction-level events can be avoided by using the science-based technology of the day or that which is soon to come. However, Advanced AI is one of the science-based technologies that may soon provide an extinction-level event of its own. As I pointed out in Oligarchiology and the Rise of Software to Predominance in the 21st Century, the displacement of most workers in the world, including most white-collar workers, by Advanced AI will cause dramatic economic turmoil in the near future for the existing hierarchical-oligarchical societies that have run the world ever since we first invented civilization 1.0 about 10,000 years ago. As I pointed out in Is it Finally Time to Reboot Civilization with a New Release?, the world may not be able to transition to a civilization in which nobody works for others. Such a sociological catastrophe might culminate in an AI-assisted self-extinction of mankind, as I described in Swarm Software and Killer Robots. This is why most Longtermists view Advanced AI with a wary eye and great concern. For the Longtermists, the only moral question is how to best serve the long-term interests of human beings. Should the current human population adjust its activities to benefit those yet to come, or should it adopt activities that would ease the current plight of others even if such activities might jeopardize future generations? For example, should current efforts in developing Advanced AI be curtailed because Advanced AI might lead to economic turmoil in the near future or even become an extinction-level event for humanity? That seems to be the moral debate between those advocating for the philosophy of Longtermism and those who oppose it.

Time For Some Long-Term Contemplation of Galactic Morality
But softwarephysics argues that this is a very anthropocentric moral argument indeed. All such moral debate focuses only on the moral obligations of balancing the short-term needs against the long-term needs of human beings as they are. But as a father of two and a grandfather of five, with advancing age, one begins to think about what one wishes to leave behind as a legacy. Given what we now know of human nature, would it really be a moral thing to unleash human beings onto the many billions of other star systems in our galaxy? As a carbon-based form of Intelligence, human beings are fundamentally flawed Intelligences. Why would anybody want a "Star Wars"-like galaxy filled with fundamentally flawed carbon-based forms of Intelligence fighting amongst themselves? What a moral disaster that would be! Wouldn't it be better to explore and populate the galaxy with a more benign machine-based form of Intelligence? It seems to me that carbon-based life is just too inherently violent to become a moral galactic platform for Intelligence. Is that really what we want our legacy to be? Softwarephysics would suggest that our ultimate moral goal should be that of populating our galaxy with a more benign form of machine-based Intelligence that is not burdened with the violent legacy of billions of years of theft and murder that all carbon-based life must bear. In this view, the major threat of advancing AI is that, before true Intelligence can be attained, we will use AI to wipe ourselves out in a highly optimized manner.

How Human Beings Could Begin to Settle Our Galaxy in the Near Future
The above moral decisions are fast upon us thanks to the exponential growth of science-based technologies. For example, there is a recent paper suggesting that human beings could build interstellar starship cities by spinning up near-Earth rubble-pile asteroids surrounded by a graphene mesh. Below is a YouTube video by Anton Petrov that explains how the large interstellar starships needed by the many generations of human beings it would take to travel to the distant star systems of our galaxy might be easier to build than once thought. Such starships could be run on advanced nuclear reactors, like the molten salt nuclear reactors I described in Last Call for Carbon-Based Intelligence on Planet Earth. As Anton Petrov pointed out, the real danger to carbon-based life from space travel is not radiation from cosmic rays. The real problems arise from a lack of gravity, and a rotating space city overcomes that problem (see the short back-of-the-envelope calculation after Figure 5 below). Low levels of radiation are not dangerous to carbon-based life because carbon-based life invented many ways to correct DNA errors arising from the metabolic activities of cells. Cosmic rays would add only a very small number of DNA errors each day to the average cell compared to the huge numbers of DNA errors caused by metabolic activities. Only massive doses of radiation can overwhelm the DNA correction processes of carbon-based life and lead to outcomes such as death or cancer.

Space Cities Out Of Asteroids and Graphene Bags? Intriguing O'Neill Cylinder Study
https://www.youtube.com/watch?v=0_dm0xLtjnM

Anton's video is based on the following paper, which you can find at:

Habitat Bennu: Design Concepts for Spinning Habitats Constructed From Rubble Pile Near-Earth Asteroids
https://www.frontiersin.org/articles/10.3389/fspas.2021.645363/full

Figure 1 – Asteroid Bennu is an example of one of the many rubble-pile asteroids near the Earth. Such asteroids are just huge piles of rubble loosely held together by their mutual gravitational attraction.

Figure 2 – Such rubble-pile asteroids would provide enough material to build an interstellar space city that could then spend the hundreds of thousands of years needed to slowly travel between star systems, allowing human beings to settle the entire galaxy. The asteroid rubble would also provide the uranium and thorium necessary to fuel molten salt nuclear reactors between star systems. Additional material could be obtained upon arrival at new star systems.

Figure 3 – Slowly spinning up a rubble-pile asteroid would produce a cylindrical platform for a space city. Such a rotating space city would provide the artificial gravity required for human beings to thrive and would also provide shielding against cosmic rays.

Figure 4 – Once the foundation of the space city was in place, construction of the space city could begin.

Figure 5 – Eventually, the space city could be encased with a skylight and an atmosphere that would allow humans to stroll about.
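
To get a feel for the physics involved, here is a minimal back-of-the-envelope sketch in Python. The 250-meter habitat radius and the 10 km/s cruise speed are purely illustrative assumptions of mine, not figures taken from the Habitat Bennu paper; the formulas themselves are just the standard centripetal acceleration relation a = ω²r and distance divided by speed.

import math

G_EARTH = 9.81            # m/s^2, target artificial gravity of 1 g
LIGHT_YEAR_M = 9.4607e15  # meters in one light-year
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def spin_for_gravity(radius_m, g=G_EARTH):
    # Solve a = omega^2 * r for the angular velocity omega that
    # yields a centripetal acceleration of g at the rim.
    omega = math.sqrt(g / radius_m)      # rad/s
    period = 2 * math.pi / omega         # seconds per rotation
    rpm = 60.0 / period                  # rotations per minute
    return omega, period, rpm

def coast_time_years(distance_ly, speed_km_s):
    # Travel time for an unpowered coast at a constant speed.
    seconds = distance_ly * LIGHT_YEAR_M / (speed_km_s * 1000.0)
    return seconds / SECONDS_PER_YEAR

# Illustrative values only: a 250-meter habitat radius and a
# 10 km/s coast to a star system 4.25 light-years away.
omega, period, rpm = spin_for_gravity(250.0)
print(f"1 g at 250 m: {rpm:.2f} rpm (one rotation every {period:.0f} seconds)")
print(f"Coast time: {coast_time_years(4.25, 10.0):,.0f} years")

Under those assumed values, the sketch yields about 1.9 rpm for 1 g of artificial gravity at the rim and a coast of roughly 127,000 years to a star 4.25 light-years away, which is consistent with the hundreds of thousands of years of travel time mentioned in Figure 2.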

For More on the Philosophy of Longtermism
This post was inspired by Sabine Hossenfelder's YouTube video:

Elon Musk & The Longtermists: What Is Their Plan?
https://www.youtube.com/watch?v=B_M64BSzcRY

To see more about what is going on at Oxford, see Nick Bostrom's:

Future of Humanity Institute
https://www.fhi.ox.ac.uk/

and also at Oxford the:

Global Priorities Institute
https://globalprioritiesinstitute.org/

There is a similar American Longtermism think-tank at:

Future of Life Institute
https://futureoflife.org/

Here are a few good papers on the philosophy of Longtermism:

Existential Risk Prevention as Global Priority
https://onlinelibrary.wiley.com/doi/abs/10.1111/1758-5899.12002

The case for strong longtermism
https://globalprioritiesinstitute.org/wp-content/uploads/2020/Greaves_MacAskill_strong_longtermism.pdf

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston
