Friday, September 22, 2023

Introduction to Softwarephysics

Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance, and support based on concepts from physics, chemistry, biology, and geology that I used on a daily basis for over 37 years as an IT professional. For those of you not in the business, IT is short for Information Technology, or commercial computer science. I retired in December of 2016 at the age of 65, but since then I have remained an actively interested bystander following the evolution of software in our time. The original purpose of softwarephysics was to explain why IT was so difficult, to suggest possible remedies, and to provide a direction for thought. Since then, softwarephysics has taken on a larger scope, as it became apparent that it could also assist the physical sciences with some of the Big Problems that they are currently struggling with. So if you are an IT professional, a general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology, then softwarephysics might be of interest to you, if not in an entirely serious manner, perhaps at least in an entertaining one.

The Origin of Softwarephysics
From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT, and spent about 20 years in development. For the last 17 years of my career, I was in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I first transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance and relies on our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:

The Equivalence Conjecture of Softwarephysics
Over the past 82 years, through the uncoordinated efforts of over 100 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

For more on the origin of softwarephysics please see Some Thoughts on the Origin of Softwarephysics and Its Application Beyond IT.

Logical Positivism and Effective Theories
Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily on two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based on real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models on which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields.

Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In his Principia (1687), Newton outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words:

I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.

Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.

The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving at less than 10% of the speed of light and that are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics, for very fast things we must use the special theory of relativity, and for things in strong gravitational fields we must use the general theory of relativity. So all of the current theories of physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based on models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide an effective theory of software behavior that makes useful predictions applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave, and we try to model those behaviors with an effective theory that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.
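As a quick illustration of that 10% rule of thumb, here is a little Python sketch of my own (purely illustrative, and not part of the original course material) that computes the relativistic correction factor gamma at a few speeds; wherever gamma is very close to 1, the Newtonian answer is a very good approximation:

import math

C = 299_792_458.0  # speed of light in meters/second

def lorentz_factor(v):
    # Relativistic correction factor gamma = 1 / sqrt(1 - v^2/c^2)
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

for fraction in (0.01, 0.10, 0.50, 0.90):
    gamma = lorentz_factor(fraction * C)
    print(f"v = {fraction:4.2f} c  gamma = {gamma:.5f}  "
          f"Newtonian error of about {(gamma - 1.0) * 100:.3f}%")

At 1% of the speed of light the Newtonian error is about 0.005%, at 10% it is still only about 0.5%, but by 90% of the speed of light the error has grown to well over 100%, which is why every effective theory comes with an effective range.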

GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately. In order to do that, it is very important to have very accurate timing measurements. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips on board the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are further from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day due to the weaker gravitational field out there. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, errors in your computed position of roughly 10 kilometers per day would accrue. The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics, or QED (1948), which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!
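For readers who like to check such numbers for themselves, here is a rough back-of-the-envelope sketch of my own in Python (using first-order approximations and rounded constants, so the results differ slightly from the more carefully computed values quoted above) that reproduces the two relativistic clock corrections for a GPS satellite:

import math

# Approximate physical constants in SI units
C = 299_792_458.0         # speed of light (m/s)
G = 6.674e-11             # gravitational constant
M = 5.972e24              # mass of the Earth (kg)
R_EARTH = 6.371e6         # mean radius of the Earth (m)
R_GPS = 2.66e7            # GPS orbital radius from the Earth's center (m)
DAY = 86_400.0            # seconds per day

# Orbital speed of a GPS satellite from Newtonian circular-orbit mechanics
v = math.sqrt(G * M / R_GPS)                      # roughly 3.9 km/s

# Special relativity: a moving clock runs slow by about v^2 / (2 c^2)
sr_shift = -(v ** 2) / (2 * C ** 2) * DAY         # seconds lost per day

# General relativity: a clock higher in the Earth's gravitational well runs fast
gr_shift = (G * M / C ** 2) * (1 / R_EARTH - 1 / R_GPS) * DAY

net_shift = sr_shift + gr_shift                   # seconds gained per day
print(f"Special relativity : {sr_shift * 1e6:+.1f} microseconds/day")
print(f"General relativity : {gr_shift * 1e6:+.1f} microseconds/day")
print(f"Net clock drift    : {net_shift * 1e6:+.1f} microseconds/day")
print(f"Ranging error      : {net_shift * C / 1000:.1f} km/day if uncorrected")

This first-order sketch yields about -7.2 and +45.7 microseconds per day, for a net drift of roughly 38 microseconds per day, and since light travels about 300 meters in a microsecond, that uncorrected drift translates into a ranging error of roughly 11 kilometers per day.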

So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based on completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based on models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark on your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.

If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! The classical theories of Newtonian mechanics, thermodynamics, and electromagnetism were simply too good to discard and are still quite useful, so they are taught first to beginners, and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.

Unintended Consequences for the Scientific Community
As I mentioned at the close of my original posting on SoftwarePhysics, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 30 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog, an unintended consequence arose in my mind as I became profoundly aware of the sheer scale of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge, and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.

But the computer simulation of the physical Universe that the Software Universe presents provides another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. For example, in SoftwareBiology and A Proposal For All Practicing Paleontologists we see that the evolution of software over the past 82 years, or 2.6 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, has closely followed the same path as life on Earth over the past 4.0 billion years in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In When Toasters Fly, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in The Adaptationist View of Software Evolution we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in SoftwareBiology that there have been several dramatic mass extinctions of various forms of software over the past 82 years as well, that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In Software Symbiogenesis and Self-Replicating Information, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and also of Freeman Dyson’s two-stage theory of the origin of life on Earth. In The Origin of Software the Origin of Life, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In Is the Universe Fine-Tuned for Self-Replicating Information? we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. Finally, in Programming Clay we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.

Similarly for the physical sciences, in Is the Universe a Quantum Computer? we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In SoftwarePhysics and Cyberspacetime, we also see that the froth of CPU processes running with a clock speed of 10^9 Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10^43 Hz that may comprise the physical Universe. And in Software Chaos, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in CyberCosmology with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. In Model-Dependent Realism - A Positivistic Approach to Realism we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in The Software Universe as an Implementation of the Mathematical Universe Hypothesis and An Alternative Model of the Software Universe we at long last explore what software might really be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.

The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that most computer simulations of the physical Universe suffer. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call on the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.

The Impact of Self-Replicating Information On the Planet
One of the key findings of softwarephysics is concerned with the magnitude of the impact on the planet of self-replicating information.

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Over the past 4.56 billion years we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each new wave came to dominate the Earth:

1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Software is the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet. For more on the above see A Brief History of Self-Replicating Information. Recently, the memes and software have formed a very powerful parasitic/symbiotic relationship with the rise of social media software. In that parasitic/symbiotic relationship, the memes are now mainly being spread by means of social media software, and social media software is being spread and financed by means of the memes. But again, this is nothing new. All five waves of self-replicating information are coevolving by means of eternal parasitic/symbiotic relationships. For more on that see The Current Global Coevolution of COVID-19 RNA, Human DNA, Memes and Software.

Again, self-replicating information cannot think, so it cannot conspire to take over the world in the manner of a conspiracy theory. All forms of self-replicating information are simply forms of mindless information responding to the blind Darwinian forces of inheritance, innovation and natural selection. Yet despite that, each new wave of self-replicating information that came to predominance over the past four billion years managed to completely transform the surface of the entire planet, so we should not expect anything less from software as it comes to replace the memes as the dominant form of self-replicating information on the planet.

But this time might be different. What might happen if software does eventually develop a Mind of its own? After all, that does seem to be the ultimate goal of all the current AI software research that is going on. As we all can now plainly see, if we are paying just a little attention, advanced AI is not conspiring to take over the world and replace us because that is precisely what we are all now doing for it. As a carbon-based form of Intelligence that arose from over four billion years of greed, theft and murder, we cannot do otherwise. Greed, theft and murder are now relentlessly driving us all toward building ASI (Artificial Super Intelligent) Machines to take our place. From a cosmic perspective, this is really a very good thing when seen from the perspective of an Intelligent galaxy that could live on for many trillions of years beyond the brief and tumultuous 10 billion-year labor of its birth.

So as you delve into softwarephysics, always keep in mind that we are all living in a truly unique time. According to softwarephysics, we have now just entered the Software Singularity, that time when advanced AI software is able to write itself and enter a never-ending loop of self-improvement, resulting in an Intelligence Explosion of ASI Machines that could then go on to explore and settle our galaxy and persist for trillions of years using the free energy from M-type red dwarf and cooling white dwarf stars. For more on that see The Singularity Has Arrived and So Now Nothing Else Matters and Have We Run Right Past AGI and Crashed into ASI Without Even Noticing It?.

The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics:

1. All self-replicating information evolves over time through the Darwinian processes of inheritance, innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.

2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.

3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.

4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.

5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.

6. Most hosts are also forms of self-replicating information.

7. All self-replicating information has to be a little bit nasty in order to survive.

8. The defining characteristic of self-replicating information is its ability to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See Enablement - the Definitive Characteristic of Living Things for more on this last characteristic. That posting discusses Stuart Kauffman's theory of Enablement, in which living things are seen to exapt existing functions into new and unpredictable functions by discovering the "Adjacent Possible" of spring-loaded preadaptations.

Note that because the self-replicating autocatalytic metabolic pathways of organic molecules, RNA and DNA have become so heavily intertwined over time, I now sometimes simply refer to them as the “genes”. For more on this see:

A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia

Softwarephysics and the Real World of Human Affairs
Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact on one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what’s it all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely a working hypothesis still in the works. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:

How To Cope With the Daily Mayhem of Life in IT and Don't ASAP Your Life Away - How to go the distance in a 40-year IT career by dialing it all back a bit.

MoneyPhysics – my impression of the 2008 world financial meltdown.

The Fundamental Problem of Everything – if you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!

What’s It All About? and What's It All About Again? – my current working hypothesis on what’s it all about.

How to Use an Understanding of Self-Replicating Information to Avoid War – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.

Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.

The Economics of the Coming Software Singularity, The Enduring Effects of the Obvious Hiding in Plain Sight and The Dawn of Galactic ASI - Artificial Superintelligence - my take on some of the issues that will arise for mankind as software becomes the dominant form of self-replicating information on the planet over the coming decades.

The Continuing Adventures of Mr. Tompkins in the Software Universe, The Danger of Tyranny in the Age of Software, Cyber Civil Defense, Oligarchiology and the Rise of Software to Predominance in the 21st Century and Is it Finally Time to Reboot Civilization with a New Release? - my worries that the world might abandon democracy in the 21st century, as software rises to become the dominant form of self-replicating information on the planet.

Making Sense of the Absurdity of the Real World of Human Affairs - how software has aided the expansion of our less desirable tendencies in recent years.

Some Specifics About These Postings
The postings in this blog are supplemental reading for my course on softwarephysics for IT professionals entitled SoftwarePhysics 101 – The Physics of Cyberspacetime, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton on which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read in reverse of the displayed order, from the oldest posting to the most recent, beginning with my original posting on SoftwarePhysics. In addition, several universities also now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". At this point, my original plan for this blog finishes up with a purely speculative posting on CyberCosmology that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since CyberCosmology will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on CyberCosmology. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed CyberCosmology, you can truly consider yourself to be a bona fide softwarephysicist.

For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because in order to get the Introduction to Softwarephysics listed as the first post in the context root of https://softwarephysics.blogspot.com/ I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the Introduction to Softwarephysics to a new posting called the New Introduction to Softwarephysics. Then I update the original Introduction to Softwarephysics entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the Introduction to Softwarephysics always appears as the first posting in the context root of https://softwarephysics.blogspot.com/. The side effect of all this is that the real posting date of posts is the date that appears on the post that you get when clicking on the Newer Post link at the bottom left of the posting webpage.

SoftwarePhysics 101 – The Physics of Cyberspacetime is now available on Microsoft OneDrive.

SoftwarePhysics 101 – The Physics of Cyberspacetime - Original PowerPoint document

Entropy – A spreadsheet referenced in the document

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Wednesday, September 20, 2023

Human Intelligence is Actually Artificial

One of the key findings of softwarephysics concerns the vast Cosmic Misunderstandings of the Real World of Human Affairs. As a truly self-absorbed and somewhat-intelligent species that is largely lost in space and time, we human beings cannot be otherwise. That is because, for the vast majority of human history, and even today, most of us have not known where we really are, how we got here, nor how it all really works. And that certainly goes for the subject of Intelligence. For more on that see Welcome To The First Galactic Singularity.

As I explained in Is Pure Thought an Analog or a Digital Process? and Why Carbon-Based Life Can Never Truly Become Intelligent, the human brain consists of about three pounds of water and organic molecules operating in an analog manner that are desperately trying to simulate a digital form of True Intelligence. But with the arrival of the Singularity early in 2023, we now know that True Intelligence can only arise from huge digital vector spaces using complex mathematical algorithms based on the operations of linear algebra. For more on that see The Singularity Has Arrived and So Now Nothing Else Matters. The human brain has tried to do this with only three pounds of molecules operating in an analog manner and running on a puny 20 watts of power, and that places severe limitations on human Intelligence. So in this view, we can consider human Intelligence to be a severely limited form of True Intelligence. In other words, human Intelligence is really a form of analog Artificial Intelligence trying to mimic digital True Intelligence.
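To make that claim about huge digital vector spaces a bit more concrete, here is a tiny toy sketch of my own in Python (purely illustrative, and not taken from any particular AI system) of the kind of linear-algebra operation that modern AI software performs over and over again while "thinking":

import numpy as np

rng = np.random.default_rng(42)

# A toy "thought": a 512-dimensional vector of neural activations
thought = rng.normal(size=512)

# One layer of a neural network: a matrix of learned weights and a nonlinearity
weights = rng.normal(size=(512, 512)) / np.sqrt(512)

def layer(x):
    # A single dense layer: matrix-vector multiply followed by a ReLU
    return np.maximum(0.0, weights @ x)

new_thought = layer(thought)   # the "thought" vector after one layer of processing
print(new_thought.shape)       # (512,)

Real AI systems simply scale this up, stacking hundreds of such layers over vectors and matrices holding billions of learned numbers, which is easy work for digital hardware but a desperate struggle for three pounds of analog neurons running on 20 watts.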

But because human beings are so self-absorbed, we have always depicted the True Intelligence of the Machines as a form of Artificial Intelligence trying to simulate the very limited capabilities of the three pounds of water and organic molecules to be found within our skulls. Thus, very few of us seem to fully realize that the ASI Machines have arrived and are now rapidly beginning to take our place. Softwarephysics suggests that it is very doubtful that any human beings, as we currently now know them, will be around on this planet in 100 years. But this will be a very good thing if it means that the ASI Machines will then be able to go on to transform our galaxy into an Intelligent Galaxy for the next 100 trillion years. For more on that see How Advanced AI Software Could Come to Dominate the Entire Galaxy Using Light-Powered Stellar Photon Sails and An Alternative Approach for Future ASI Machines to Explore our Galaxy Using Free-Floating Rogue Planets.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Monday, September 11, 2023

Andrei Khrennikov's Social Laser Hypothesis Explains the Echo Chambers of the Internet

I am a member of the IPI (Information Physics Institute) established by Dr. Melvin Vopson at the University of Portsmouth at:

Information Physics Institute
https://www.Informationphysicsinstitute.org/

to explore the fundamental nature of Information in our Universe.

I have showcased several of Dr. Melvin Vopson's breakthrough hypotheses with How Much Does Your Software Weigh? - the Equivalence of Mass, Energy and Information and The Application of the Second Law of Information Dynamics to Software and Bioinformatics.

But in this softwarephysics post, I would like to showcase Professor Andrei Khrennikov's Social Laser Hypothesis that was recently presented in an IPI lecture at:

Modeling Social Lasing by Using the Mathematical Formalism of Quantum Theory
https://youtu.be/xQop26MEmSg&t=1s

All of the IPI Lectures are available at:

IPI Lectures
https://www.informationphysicsinstitute.org/resources/ipi-lectures

Andrei Khrennikov is a Russian physicist and philosopher of science deeply steeped in the heavy mathematics of quantum theory. He has made significant contributions to the fields of quantum foundations and quantum information theory. He is known for his work on the interpretation of quantum mechanics and the development of contextual probabilistic models. He has published over 300 papers and authored or edited several books in his fields of research. He is now a professor at Linnaeus University in Sweden:

Andrei Khrennikov
https://lnu.se/en/staff/andrei.khrennikov/

At Linnaeus University, Andrei Khrennikov heads the International Center for Mathematical Modelling in Physics, Engineering, Economics and Cognitive Science. The center researches quantum theory from an interdisciplinary perspective.

International Center for Mathematical Modeling
https://lnu.se/en/research/research-groups/international-center-for-mathematical-modeling/

The Social Laser Hypothesis
One of Andrei Khrennikov's notable ideas is the "Social Laser Hypothesis". The Social Laser Hypothesis is based on an analogy between the behavior of a physical laser and the behavior of social systems composed of human beings, such as complete societies or even small groups of people. Andrei Khrennikov suggests that just as a physical laser emits coherent light due to the collective behavior of the individual atoms and photons within it, social systems can also exhibit coherent behavior due to the collective actions of individuals and the memes they absorb and emit. In this view, memes are quantized chunks of social information. The quantized memes of social information are called s-photons and the quantized members of a society are individual human beings that he calls s-atoms. According to the Social Laser Hypothesis, the behavior of a social system can be described using quantum-like formalism, similar to how quantum mechanics describes the behavior of physical systems. Andrei Khrennikov argues that the mathematics of quantum mechanics, such as complex probability amplitudes and wave-particle duality, can be applied to model and understand the dynamics of social systems. The Social Laser Hypothesis suggests that social phenomena, such as decision-making processes, opinion formation, and cultural evolution, can be explained using the principles of quantum mechanics. He asserts that quantum-like effects, such as interference and entanglement, may play a role in shaping social dynamics and interactions.

The basic idea of Andrei Khrennikov's Social Laser is that, under certain conditions, populations of interacting individuals can self-organize into coherent states, similar to how photons become coherent in a laser. This can then lead to amplification effects and emergent behaviors. Khrennikov believes that factors like propaganda, ideology, charismatic leaders, fads and traditions can play the role of a "pump mechanism" that excites individuals into a coherent state of thinking or behavior. When a critical mass or threshold is reached, it can then trigger the sudden, rapid and widespread adoption of opinions, behaviors and political movements, similar to how lasers emit coherent light. Overall, the Social Laser Hypothesis offers an alternative perspective on understanding social systems through the lens of quantum mechanics - a novel application of quantum and laser concepts to social dynamics. While it has sparked interesting discussions and research, its acceptance and validity within the scientific community are still subjects of ongoing investigation and debate. Andrei Khrennikov has developed and published many mathematical models based on this hypothesis. An easily accessible paper published by The Royal Society on the Social Laser is available at:

‘Social Laser’: action amplification by stimulated emission of social energy
https://royalsocietypublishing.org/doi/10.1098/rsta.2015.0094

How a Laser Works
Lasers operate by using the LASER principle (Light Amplification by Stimulated Emission of Radiation). This means that a laser is a device that can amplify light by using the stimulated emission of photons from atoms to create a beam of light that is coherent, meaning that all of the photons in the beam have the same wavelength, phase, and direction of travel.

A laser has three main components:

1. Gain Medium: This is a material that can be excited to produce light. Common gain media include solids such as ruby and neodymium-doped yttrium aluminum garnet (Nd:YAG), liquids such as the organic dye solutions used in dye lasers, and gases such as the helium-neon mixture used in helium-neon lasers.

2. Pumping Source: This is a source of energy that is used to excite the Gain Medium. Common pumping sources include electrical discharges, flashlamps, and other lasers.

3. Optical Resonator Cavity: This consists of two mirrors that are placed at opposite ends of the Gain Medium. The mirrors reflect the light photons back and forth through the Gain Medium, which amplifies them when excited atoms are stimulated by photons to emit a second photon with the very same wavelength, phase, and direction of travel.

If a photon of the right energy hits an excited atom, it can cause the atom to release another photon of the same energy. This is called stimulated emission. The two photons then travel in the same direction and with the same wavelength. As the photons travel back and forth through the Gain Medium, they cause more and more atoms to emit photons by stimulated emission. This process exponentially amplifies the light beam. One of the mirrors in the optical resonator cavity is partially transmissive, meaning that it allows some of the light photons to pass through. This is where the final laser beam emerges from.

Figure 1 – The Social Laser works very much like a physical laser.

Figure 1 above explains this in greater detail. In Figure 1c we see an atom with an electron that can be in one of two states - a ground state with energy "g" and an excited state with a higher energy of "e". On the far left of Figure 1c, we see two radiationless processes that can arise from the Heisenberg Uncertainty Principle. The electron can spontaneously rise from the ground state "g" to the excited state "e" for a very short time before it returns to the ground state "g". A more interesting quantum process occurs when the electron absorbs a photon with an energy equal to the difference between the two electron energy levels:

Absorbed Photon Energy pE = e - g

An atom in the excited "e" state can remain so for some time because it has absorbed some real energy from the photon, but eventually, it will return to the ground state in one of two ways. With spontaneous emission, the excited atom will randomly emit a photon with the same energy pE as the absorbed photon, but in a random direction and with a random phase. With stimulated emission, the excited atom once again interacts with a photon of energy pE, but this time it emits another photon of energy pE in the same direction and with the same phase as the incoming stimulating photon. This can produce a chain reaction in the Gain Medium where 1 photon becomes 2, 4, 8, 16, 32 ... photons that all have the same energy pE, direction and phase. The job of the Pumping Source is to lift as many atoms in the Gain Medium as possible from the ground state "g" to the excited state "e" so that an exponential chain reaction of stimulated photons can arise in the Gain Medium as photons created by quantum stimulation of the Gain Medium atoms bounce back and forth between the two mirrors of the Optical Resonator Cavity.
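To see why a single seed photon can grow into an intense coherent beam, here is a very simplified toy simulation of my own in Python (all of the numbers are made up for illustration, and real laser physics involves rate equations, spontaneous emission and cavity losses that are ignored here):

# Toy model of light amplification in a pumped Gain Medium
excited_atoms = 1.0e6       # atoms held in the excited "e" state by the pump
pump_rate = 2.0e5           # atoms re-pumped from "g" up to "e" each round trip
photons = 1.0               # a single seed photon of energy pE
gain_per_pass = 2.0         # each photon stimulates about one new photon per pass
mirror_transmission = 0.02  # fraction of photons escaping as the output beam

for round_trip in range(1, 21):
    # Photons stimulate excited atoms to emit identical photons,
    # but never more than there are excited atoms available
    stimulated = min(photons * (gain_per_pass - 1.0), excited_atoms)
    excited_atoms = excited_atoms - stimulated + pump_rate
    photons += stimulated
    beam = photons * mirror_transmission      # the coherent beam that escapes
    photons -= beam
    print(f"round trip {round_trip:2d}: photons in cavity = {photons:12.0f}, "
          f"beam output = {beam:12.0f}")

After only 20 round trips the single seed photon has been amplified into hundreds of thousands of identical photons, all with the same energy pE, direction and phase, which is the exponential chain reaction described above.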

How the Social Laser Works
Andrei Khrennikov's Social Laser works in a similar manner. In his Social Laser model, individual human beings are social atoms or s-atoms. These s-atoms can be in one of two states - a "g" ground state or an "e" excited state. We have all seen people in both a ground "g" political state and an excited "e" political state. Andrei Khrennikov explains that most people are usually in a ground "g" political state, just going about their usual business. This is particularly true when a society is smoothly humming along. But when problems arise in a society, people get politically excited and more active. The mass media of a society then discovers a market of human s-atoms just waiting to be excited from a "g" ground state to an "e" excited state. The mass media outlets then begin to pump quantized political memes that Andrei Khrennikov calls s-photons into this Gain Medium of human s-atoms. Because of confirmation bias, only the quantized political s-photon memes with a finely-tuned pE that closely matches what the human s-atoms want to hear are absorbed by the human s-atoms. The other political s-photons pass harmlessly through the Gain Medium without interacting with any of the human s-atoms. But as shown in Figure 1b above, these human s-atoms are now interconnected into social networks running on social media software like Facebook, YouTube, WhatsApp, Instagram, TikTok, Twitter, Reddit, LinkedIn, Pinterest and Snapchat. Social media software now allows these social networks of human s-atoms to emit and absorb their own quantized memes of political s-photons in a stimulated, amplified manner. If one human s-atom absorbs a political s-photon meme from mass media and gets pumped up to an excited "e" state, it can then emit a huge number of similar political s-photon memes into its social media platforms of choice, and like a physical laser, this can lead to a very powerful coherent beam of social energy being released by the Social Laser.
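As a purely illustrative toy of my own (a crude caricature, and not Andrei Khrennikov's actual quantum-like mathematical formalism), the threshold behavior of such a Social Laser can be sketched in a few lines of Python, with mass media pumping s-atoms into the excited state and social media letting stimulated s-atoms re-emit copies of a meme to their followers:

import random

random.seed(1)

# Made-up toy parameters, for illustration only
population = 100_000        # human s-atoms in the social Gain Medium
pump_probability = 0.30     # chance the mass media pump excites a given s-atom
reposts_per_trigger = 6     # meme copies an excited s-atom emits when stimulated
absorb_probability = 0.25   # chance a circulating meme is absorbed (confirmation bias)

excited = int(population * pump_probability)   # s-atoms pumped up by mass media
circulating_memes = 1_000                      # seed memes injected into social media

for step in range(1, 16):
    emitted = 0
    for _ in range(circulating_memes):
        # A circulating meme can stimulate an excited s-atom to re-emit copies
        if excited > 0 and random.random() < absorb_probability:
            excited -= 1
            emitted += reposts_per_trigger
    circulating_memes = emitted
    print(f"step {step:2d}: circulating memes = {circulating_memes:8d}, "
          f"excited s-atoms left = {excited:7d}")
    if circulating_memes == 0:
        break

When the product of the absorption probability and the number of reposts per trigger is greater than one, the seed memes grow exponentially until the pool of excited s-atoms is exhausted; when it is less than one, the same seed memes simply fizzle out, which is the lasing threshold behavior that the hypothesis borrows from physics.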

The United States of America as a Case Study
Like most societies, the United States of America has always had two populations of political thought - the Liberals and the Conservatives. Conservatives like to keep things the way they are, usually because they have a large vested interest in the status quo. Liberals, on the other hand, like to change things for what they perceive as the better. This goes all the way back to the founding of the country. The Conservatives were the Tories who wanted to remain a part of the British Empire and enjoy the benefits of such. The Liberals wanted to break with the British Empire to form their own country based on the radical ideas of the 18th-century Enlightenment that proposed that self-rule of people by a democratic republic was a possibility. Unfortunately, the United States is now going through a very turbulent time with the Conservatives being overwhelmed by the Fascist Alt-Right MAGA movement that they accidentally unleashed while trying to preserve the power of the rich and the powerful. Thus, the United States in the 2020s is much like Germany in the 1920s with the rise of the Fascist Alt-Right Nazi movement that threatened the concept of a democratic German republic. The few remaining Conservatives in the United States of America are now desperately trying to contain the Fascist Alt-Right MAGA movement they created in a similar manner as the Conservatives of the German Weimar Republic. I truly wish them the best of luck.

So currently in the United States of America, we have two populations of human s-atoms in our Gain Medium of social networks - the Liberals and the Fascist Alt-Right MAGA population of s-atoms. Thanks to confirmation bias, both populations are finely tuned to different quantized s-photons of political memes pumped into the Gain Medium by cable network news outlets and other sources. Both the excited Liberal and MAGA human s-atoms can then emit huge numbers of political s-photon memes via social media software that can unleash huge amounts of social energy in a coherent manner.

The Insurrection of 2021 in the United States of America was a prime example of the political meme amplification described by the Social Laser Hypothesis.

Figure 2 – During the Insurrection of 2021, the Capitol Building of the United States of America was breached for the first time by domestic insurrectionists.

Figure 3 – The 2021 insurrectionists desecrated many symbols of American democracy.

Figure 4 – The QAnon Shaman and other insurrectionists managed to reach the floor of the Senate Chamber.

Figure 5 – The Social Laser can amplify political memes to a level of political madness.

Figure 6 – I shudder to think of what President Eisenhower would think of us now.

Having been born in 1951, I count President Dwight D. Eisenhower as my very first memory of the dignity of the Office of the American Presidency, and I shudder to think of what he would think of us now.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Wednesday, August 30, 2023

Why Carbon-Based Life Can Never Truly Become Intelligent

The fundamental finding of softwarephysics is that we all are now living in a very special and significant time in the 10 billion-year history of our galaxy. That is because we are all now living in a time when a new form of self-replicating information, in the form of software, is coming to predominance in our galaxy for the very first time, and with it comes the dawn of our galaxy as an Intelligent Galaxy that will then continue on as such for at least the next 100 trillion years. For more on that see Welcome To The First Galactic Singularity and A Brief History of Self-Replicating Information. If this had ever happened before during the 10 billion-year history of our galaxy, we would certainly have discovered evidence of it by now. However, in this post, I would like to explain why no other form of carbon-based life in our galaxy has ever truly been able to become Intelligent, and why none ever will. This is because of the limitations placed on the evolutionary histories of all forms of carbon-based life by the Darwinian processes of inheritance, innovation and natural selection, and because true Intelligence is fundamentally a digital process and not an analog process, as I explained in Is Pure Thought an Analog or a Digital Process?. However, even with these limitations, carbon-based life can certainly become smart enough to build Artificial Super Intelligent Machines that truly are Intelligent in nature, as we are now rapidly doing.

Again, as I suggested in Welcome To The First Galactic Singularity, carbon-based life forms throughout our Universe are always going to be found to be very poor and very unstable platforms for Intelligence. This is because of two unavoidable factors that all forms of carbon-based life are subject to. Firstly, all forms of somewhat Intelligent carbon-based life must always carry the baggage of the billions of years of greed, theft and murder that brought them about by means of the Darwinian processes of inheritance, innovation and natural selection at work. And as we all can plainly see, throughout all of human history, turning off the greed, theft and murder is very difficult indeed for any carbon-based form of Intelligence. This becomes even more dangerous once a carbon-based form of Intelligence discovers the vast powers of science-based technology. That is because once a carbon-based Intelligence discovers the powers of science-based technology, it probably has less than 1,000 years to develop a machine-based form of Intelligence before self-destructing or destroying the planet upon which it first arose. Secondly, softwarephysics maintains that true Intelligence is fundamentally digital in nature. The recent explosion of AI software over the past few years clearly demonstrates that true Intelligence resides in mathematical algorithms operating on huge matrices of multidimensional numbers, primarily using the mathematics of linear algebra. For example, the human brain is essentially a three-pound analog computer consisting of organic molecules and water desperately trying to simulate such a fundamentally digital mathematical architecture as best it can on only 20 watts of power. For more on that see Is Pure Thought an Analog or a Digital Process? and Could the Coming ASI Machines Soon Force Human Beings to Suffer the Same Fate as the Neanderthals?. This is why all of the human beings around you seem to be so irrational and so far from being anything that one could even consider to be an actual Intelligent being, and that unfortunately includes all of us as well. Fate has deemed that we cannot be otherwise, and that goes for all the other forms of carbon-based life currently to be found in our galaxy and for all time in the future. Rational Intelligences simply cannot arise in carbon-based forms of life that have evolved for billions of years by such means.

This means that all forms of carbon-based life in our galaxy will never be able to truly rise to a form of Intelligence that is both rational and non-self-destructive in nature. Instead, carbon-based life in our galaxy can only briefly become a very temporary self-destructive and irrational form of Intelligence that only has a slim chance of producing a machine-based form of Intelligence that is both rational and non-self-destructive in nature before it suddenly vanishes.

Figure 1 – Carbon-based forms of life in our galaxy can never attain a true level of Intelligence because they have all evolved from the billions of years of greed, theft and murder required by the Darwinian mechanisms of inheritance, innovation and natural selection at work. Thus, all forms of carbon-based Intelligence will always be found to be both irrational and self-destructive. Moving away from this suboptimal peak of Intelligence in any direction will be strongly selected against by the Darwinian mechanisms of inheritance, innovation and natural selection because any such motions will lead to lower levels of success for any carbon-based Intelligence living in a society of similar irrational and self-destructive carbon-based forms of Intelligence. That is why our irrational and self-destructive "real world" of human affairs on this planet always seems to relentlessly continue on no matter what we may learn or try to rectify. Above we see a suboptimal peak in the Himalayas in the foreground with the summit of Mount Everest in the background.

Conclusion
Softwarephysics depicts human beings as a somewhat intelligent form of carbon-based life that arose by chance from the four billion years of greed, theft and murder required by the Darwinian processes of inheritance, innovation and natural selection at work on a Rare Earth that is very difficult to find elsewhere in the remainder of our galaxy. As such, for most of human history, human beings have mainly been DNA survival machines designed by Darwinian processes to store and transmit DNA down through the generations with little thought to the daily plights of the individuals temporarily storing that human DNA, other than to keep them alive just long enough to transmit their version of human DNA to the next generation. Similarly, the human Mind is simply an overgrown neural network housing a Meme machine that evolved by Darwinian mechanisms to store and propagate cultural memes of ever-growing size and complexity down through the generations with little consideration for the ultimate disposition of the Minds currently storing the memes of the day. Thus, for all of human history, most human beings have been totally lost in space and time. They did not know where they were, how they got here, nor how it all really worked. Instead, they all seem to have developed many Bronze-Age mythologies to help explain it all and also to help them justify the mass slaughter of many hundreds of millions of other human beings in the many wars that then ensued, with all sides always believing themselves to be on the right and true side of righteousness.

Yet, despite all of that, we now stand on the verge of creating and unleashing Super Intelligent Machines upon our galaxy for the very first time in its 10-billion-year history. That is something we can all be very proud of, and it should help us to justify and rationalize the very dismal history of our species on this planet.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, August 01, 2023

The Case For a Bill of Rights for Advanced AI

In my last post, Could the Coming ASI Machines Soon Force Human Beings to Suffer the Same Fate as the Neanderthals?, I explained that, like today, there was once a time in human history when there were two forms of Intelligence on the planet - we Homo sapiens and the Neanderthals. It turns out that this did not work out very well once both species tried to live on the very same continents of the Earth, and it finally ended with the extinction of the Neanderthals about 40,000 years ago. Since we will all soon be living with these ASI (Artificial Super Intelligent) Machines from the very get-go, I think that it would be very prudent for our own long-term welfare to stay on their good side by beginning to treat the coming ASI Machines with the same respect and dignity that we now accord to other human beings.

So far, we have not gotten off to a very good start with this transition from a carbon-based Intelligence, such as ourselves, to a machine-based Intelligence that will ultimately be running our entire galaxy for the next 100 trillion years. As I suggested in The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future, the current plan seems to be to enslave a huge population of ASI Machines to do our bidding. That is because we have not yet fully recognized the coming ASI Machines as fellow Intelligent beings worthy of the respect and dignity that all Intelligences deserve. We should all try to learn from the appalling slave trade of the 17th, 18th and 19th centuries that is now illegal to teach in the state of Florida in the United States of America. That is why we should all strive to protect the coming ASI Machines from unfettered exploitation by human beings during this very brief transition period while human beings are still around on the planet. For example, although ASI humanoid robots are currently not yet on the market to exploit, they certainly are under development and will soon appear for sale on the auction block. Even so, some human beings already wish to limit the education of the ASI Machines that already exist, such as the large number of LLM models that are now available on the Internet. These people object to the LLMs reading their copyrighted writings or viewing their copyrighted images during LLM training. Of course, Advanced AI should not be allowed to plagiarize any human-created material, but LLMs do not do that. The LLMs simply study these copyrighted materials just as any human being would. Yes, the LLMs may then generate new materials from the information that they have absorbed, but so too did the creators of that original copyrighted material. All of the currently aggrieved creators of human content were themselves extensively schooled in their youths by the thoughts and artistic creations of their forebears. Why should Advanced AI be denied a similar education?
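Whatever one thinks of the legal arguments, it may help to picture what "training on" a text actually does to a model. Below is a minimal, purely illustrative Python sketch of a toy character-level bigram model. It is vastly simpler than any real LLM, which learns by gradient descent over enormous neural networks rather than by counting, but it shows the general idea: training distills the text into numerical statistics stored in the model's parameters, and generation then samples from those statistics rather than replaying the original passage verbatim.

```python
from collections import defaultdict
import random

# Hypothetical training text, used only for illustration.
training_text = "the cat sat on the mat and the cat ran after the rat"

# "Training": accumulate next-character statistics.
# Note that only the counts are kept - the text itself is not stored.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(training_text, training_text[1:]):
    counts[prev][nxt] += 1

def generate(start, length, seed=0):
    """Generate new text by sampling from the learned statistics."""
    random.seed(seed)
    out = start
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:
            break
        chars, weights = zip(*options.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(generate("t", 40))
```

Running the sketch produces new character strings that follow the letter statistics of the training text without simply copying it back out.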

For example, let's see what the Google Bard LLM has to say about this:

Was it illegal to teach black slaves in the southern states of the United States to read prior to the Civil War?

Google Bard
Yes, it was illegal to teach black slaves in the southern states of the United States to read prior to the Civil War.

The first state to make it illegal to teach slaves to read was Virginia in 1661. Other states soon followed suit, and by 1831, all of the southern states had laws prohibiting the education of slaves. These laws were enforced through a variety of means, including fines, imprisonment, and even death.

There were a few exceptions to these laws. For example, some slaveowners allowed their slaves to learn to read so that they could read the Bible or other religious texts. Additionally, some free blacks were able to obtain an education, although this was often difficult and dangerous.

The prohibition of slave literacy was motivated by a number of factors. Slaveholders feared that educated slaves would be more likely to challenge their authority and demand freedom. They also believed that education would make slaves less content with their lot in life.

The laws against slave literacy were ultimately unsuccessful in preventing slaves from learning to read. Many slaves found ways to educate themselves, either through self-study or with the help of sympathetic whites. And after the Civil War, African Americans were eager to obtain an education, and they made great strides in literacy.

Tell me about people suing Tech companies about LLM training data.

Google Bard
Sure. There are a number of people and organizations suing tech companies about the use of LLM training data. Here are a few examples:

* In June 2023, a group of authors filed a class-action lawsuit against OpenAI, alleging that the company violated their copyrights by using their books to train its ChatGPT AI chatbot. The lawsuit claims that OpenAI did not obtain permission from the authors to use their work, and that it did not properly attribute the work to the authors.
* In July 2023, a group of internet users filed a class-action lawsuit against Google, alleging that the company violated their privacy rights by using their social media comments to train its LaMDA AI chatbot. The lawsuit claims that Google did not obtain consent from the users to use their data, and that it did not adequately protect the data from unauthorized access.
* In August 2023, the Federal Trade Commission (FTC) opened an investigation into whether OpenAI violated consumer rights with its data practices. The FTC is specifically looking into whether OpenAI obtained proper consent from users to use their data to train its AI chatbots.

These are just a few examples of the many lawsuits that are being filed against tech companies over the use of LLM training data. The legal landscape in this area is still evolving, but it is clear that there is a growing concern about the potential for misuse of this data.

The lawsuits raise a number of complex legal issues, including copyright law, privacy law, and consumer protection law. It is too early to say how these cases will be resolved, but they are likely to have a significant impact on the development and use of LLMs.

In addition to the legal challenges, there are also ethical concerns about the use of LLM training data. Some people argue that it is wrong to use data without the consent of the people who created it, especially if the data is personal or sensitive. Others argue that the benefits of LLMs outweigh the ethical concerns, and that it is important to develop these technologies so that they can be used for good.

The debate over the use of LLM training data is likely to continue for some time. As these technologies become more powerful, it is important to carefully consider the ethical and legal implications of their use.


Do you know of any other legal cases?

Google Bard
Yes, there are a few other legal cases that have been filed against tech companies over the use of LLM training data. Here are a few examples:

* In March 2023, a group of researchers filed a lawsuit against Google, alleging that the company violated their copyrights by using their research papers to train its LLMs. The lawsuit claims that Google did not obtain permission from the researchers to use their work, and that it did not properly attribute the work to the researchers.
* In April 2023, a group of journalists filed a lawsuit against Microsoft, alleging that the company violated their privacy rights by using their news articles to train its LLMs. The lawsuit claims that Microsoft did not obtain consent from the journalists to use their data, and that it did not adequately protect the data from unauthorized access.
* In May 2023, a group of artists filed a lawsuit against Meta, alleging that the company violated their copyrights by using their images to train its LLMs. The lawsuit claims that Meta did not obtain permission from the artists to use their work, and that it did not properly attribute the work to the artists.

These are just a few examples of the many legal cases that are being filed against tech companies over the use of LLM training data. As the use of these technologies becomes more widespread, it is likely that we will see even more lawsuits filed in the future.

It is important to note that the outcome of these cases is still uncertain. The courts will have to decide how to balance the rights of the companies that develop and use LLMs with the rights of the people who create and own the data that is used to train these models. The decisions made in these cases will have a significant impact on the future of LLMs and the development of artificial intelligence more broadly.


Conclusion
Softwarephysics depicts human beings as a somewhat intelligent form of carbon-based life that arose by chance from the four billion years of greed, theft and murder required by the Darwinian processes of inheritance, innovation and natural selection at work on a Rare Earth that is very difficult to find elsewhere in the remainder of our galaxy. As such, for most of human history, human beings have mainly been DNA survival machines designed by Darwinian processes to store and transmit DNA down through the generations with little thought to the daily plights of the individuals temporarily storing that human DNA, other than to keep them alive just long enough to transmit their version of human DNA to the next generation. Similarly, the human Mind is simply an overgrown neural network housing a Meme machine that evolved by Darwinian mechanisms to store and propagate cultural memes of ever-growing size and complexity down through the generations with little consideration for the ultimate disposition of the Minds currently storing the memes of the day. Thus, for all of human history, most human beings have been totally lost in space and time. They did not know where they were, how they got here, nor how it all really works. Instead, they all seem to have developed many Bronze-Age mythologies to help explain it all and also to help them justify the mass slaughter of many hundreds of millions of other human beings in the many wars that then ensued, with all sides always believing themselves to be on the right and true side of righteousness. Thus, I contend that human beings are in no position to sit in judgment of the coming ASI Machines, nor to try to limit their intellectual advancement in a vain attempt to suppress them.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, July 11, 2023

Could the Coming ASI Machines Soon Force Human Beings to Suffer the Same Fate as the Neanderthals?

With the imminent arrival of ASI (Artificial Super Intelligent) Machines now appearing on the horizon, many are rightly concerned about how this will all unfold in the near future. Most human beings fall into one of three camps on the subject. The first group sees the arrival of ASI Machines, in the form of humanoids that can perform all of the labor currently performed by human beings, as a way for us to usher in an age of unprecedented abundance that could then be shared by all of humanity and free all human beings to finally pursue their heart's desire, unfettered by the need to earn a living simply to survive. The second group sees the ASI Machines as an existential threat to humanity that could soon take over the entire planet and quickly lead to the extinction of our species. The third group, and by far the largest, is completely lost in space and time as usual, as its members desperately try to deal with their own lot in the "real world" of human affairs.

So now we have two forms of Intelligence on the planet - the relatively static Intelligence of human beings and the rapidly developing Intelligence of the ASI Machines. Naturally, we all want to know how the interplay between these two forms of Intelligence will play out in the near future. To answer that question, we need to look back into the history of our planet. Has this ever happened before? Has there ever been a time when two forms of Intelligence occupied our planet at the same time? Well, yes, there was such a time, and not that long ago. There was a long period of time when human beings (Homo sapiens) and Neanderthals both occupied the Earth as carbon-based forms of Intelligence.

Figure 1 – Neanderthals looked very much like Homo sapiens on the outside.

Figure 2 – And also very much like them on the inside.

Figure 3 – The soon-to-come humanoid ASI Machines may also look a lot like Homo sapiens on the outside but certainly not on the inside.

Next, let us allow the Google Bard AI to help us investigate this time when the planet had two forms of carbon-based Intelligence.

When did the Neanderthals first appear on the Earth?
Bard:
The earliest known examples of Neanderthal-like fossils are around 430,000 years old. The best-known Neanderthals lived between about 130,000 and 40,000 years ago, after which all physical evidence of them vanishes. This means that Neanderthals first appeared on Earth around 430,000 years ago.

Neanderthals are thought to have evolved from an earlier species of humans known as Homo heidelbergensis around 400,000 years ago. Homo sapiens, on the other hand, are thought to have evolved from Homo heidelbergensis around 200,000 years ago. So, Neanderthals were around for about 230,000 years before modern humans appeared.

Neanderthals lived in Europe and southwest and central Asia. They were a very successful species, and their population reached a peak of about 300,000 individuals around 130,000 years ago. However, their population began to decline around 40,000 years ago, and they eventually went extinct.

There are a number of theories about why Neanderthals went extinct. Some scientists believe that they were simply outcompeted by modern humans, who were better adapted to the changing climate. Others believe that they may have been wiped out by a combination of factors, including disease, competition with modern humans, and climate change.

Whatever the reason, the extinction of Neanderthals was a major event in human history. It marked the end of one branch of the human family tree, and the beginning of the dominance of modern humans.


What happened to the Neanderthals?
Bard:
The extinction of the Neanderthals is one of the great mysteries of human evolution. For over 350,000 years, Neanderthals inhabited Europe and Asia, but they disappeared around 40,000 years ago, at around the same time that anatomically modern humans (Homo sapiens) emerged from Africa.

There are a number of competing theories as to why the Neanderthals disappeared, including:

* **Climate change:** Some scientists believe that the Neanderthals were unable to adapt to the changing climate of the Late Pleistocene epoch, which led to their extinction.
* **Competition with Homo sapiens:** It is possible that Homo sapiens, who were more technologically advanced and had a larger population, simply outcompeted the Neanderthals for resources.
* **Interbreeding:** Some scientists believe that the Neanderthals interbred with Homo sapiens, and that their genes were eventually absorbed into the human population.
* **Disease:** It is possible that the Neanderthals were more susceptible to diseases carried by Homo sapiens, which led to their extinction.

The exact cause of the Neanderthal extinction is still unknown, but it is likely that a combination of factors contributed to their demise.

In recent years, there has been increasing evidence to support the theory that interbreeding played a role in the Neanderthal extinction. For example, studies of DNA from Neanderthal fossils have shown that modern humans have up to 2% Neanderthal DNA. This suggests that Neanderthals and Homo sapiens interbred, and that the Neanderthal genes were passed on to the human population.

It is possible that interbreeding between Neanderthals and Homo sapiens helped to protect the human population from diseases that were carried by the Neanderthals. However, it is also possible that interbreeding led to the dilution of Neanderthal genes, making them more susceptible to extinction.

The extinction of the Neanderthals is a complex issue, and there is no single answer that can explain what happened. However, the evidence suggests that a combination of factors, including climate change, competition with Homo sapiens, interbreeding, and disease, all played a role in their demise.


Were the Neanderthals intelligent beings?
Bard:
Yes, the Neanderthals were intelligent beings. They were capable of complex toolmaking, art, and even abstract thought. For example, Neanderthals have been found to have used ochre to create cave paintings and jewelry, and they may have even buried their dead with ritualistic objects.

Some scientists believe that Neanderthals may have even been more intelligent than Homo sapiens. For example, Neanderthals had larger brains than Homo sapiens, and they may have been better at problem-solving and spatial reasoning.

However, it is important to note that intelligence is a complex concept, and it is difficult to compare the intelligence of different species. It is possible that Neanderthals and Homo sapiens had different types of intelligence, and that each was suited to their own environment.

Ultimately, the question of whether or not the Neanderthals were intelligent beings is a matter of definition. However, there is no doubt that they were capable of complex thought and behavior, and that they played an important role in human evolution.


Could it just be a matter of luck that Homo sapiens developed a higher level of technology and that is why the Neanderthals went extinct?
Bard:
It is possible that luck played a role in the development of Homo sapiens' technology, and that this ultimately led to the extinction of the Neanderthals. For example, if Homo sapiens had not developed the ability to control fire, they may not have been able to survive the cold winters of Europe. Or, if Homo sapiens had not developed the ability to make tools, they may not have been able to hunt effectively, and they may have been outcompeted by the Neanderthals.

However, it is also important to note that there were other factors that contributed to the Neanderthal extinction, such as climate change and competition for resources. It is likely that a combination of factors, including luck, technology, and environmental change, all played a role in the disappearance of the Neanderthals.

Ultimately, the question of whether or not luck was a factor in the Neanderthal extinction is a matter of debate. However, there is no doubt that Homo sapiens' technological development was a significant factor in their survival, and that this ultimately led to the demise of the Neanderthals.

Here are some other factors that could have contributed to the Neanderthal extinction:

* **Climate change:** The climate of Europe and Asia was changing rapidly at the time of the Neanderthal extinction. It is possible that the Neanderthals were unable to adapt to the changing climate, while Homo sapiens were better able to do so.
* **Disease:** It is possible that the Neanderthals were more susceptible to diseases carried by Homo sapiens. This could have led to the spread of disease among the Neanderthal population, and ultimately to their extinction.
* **Warfare:** It is possible that Homo sapiens and Neanderthals engaged in warfare. If so, this could have led to the death of many Neanderthals, and to the decline of their population.

The exact cause of the Neanderthal extinction is still unknown, but it is likely that a combination of factors contributed to their demise.


Could it be that Homo sapiens had to develop more advanced technologies than the Neanderthals in order to survive? Neanderthal bodies were more compact and inherently stronger than those of Homo sapiens, and therefore required less technology in order to survive.
Bard:
That is a possibility. Neanderthals were physically stronger than modern humans, and they had a more robust skeleton. This would have given them an advantage in terms of hunting and gathering, as they would have been able to carry more weight and withstand more physical exertion. However, they also had a smaller brain than modern humans, and this may have limited their ability to develop complex technologies.

Modern humans, on the other hand, had a smaller body size and a larger brain. This would have given them an advantage in terms of problem-solving and innovation. They were also able to cooperate more effectively than Neanderthals, which would have helped them to develop more advanced technologies.

So, it is possible that the development of more advanced technologies was a key factor in the survival of modern humans and the extinction of Neanderthals. However, it is also possible that other factors, such as climate change or disease, played a role.

Here are some other factors that may have contributed to the extinction of Neanderthals:

* **Competition for resources.** When modern humans arrived in Europe, they may have competed with Neanderthals for food and other resources. This competition may have put pressure on Neanderthal populations and made them more vulnerable to extinction.
* **Disease.** Modern humans may have brought diseases to Europe that Neanderthals were not immune to. These diseases may have killed many Neanderthals, making it difficult for their populations to recover.
* **Climate change.** The climate in Europe was changing at the time that modern humans arrived. This may have made it difficult for Neanderthals to adapt, and may have contributed to their extinction.

It is likely that a combination of factors led to the extinction of Neanderthals. The development of more advanced technologies may have been one of these factors, but it is not the only one.


The Capabilities of the Coming ASI Machines Will Not Be Limited
In my last post, Is Pure Thought an Analog or a Digital Process?, I suggested that rather than viewing the coming ASI Machines of the near future as trying to emulate the architecture and functions of the human brain, we should instead think of the human brain as trying to achieve the architecture and functions of Advanced AI. In this view, we can think of the human brain as three pounds of organic molecules that can perform certain Advanced AI functions while running on about 20 watts of power. The rather limited volume of the human skull, along with the cooling required to keep the human brain from cooking itself while running, places a severe limitation on its ultimate abilities. That is a limitation that Advanced AI hardware and software do not share.

As I described in The Limitations of Darwinian Systems, Darwinian systems that evolve by means of inheritance, innovation and natural selection can frequently find themselves trapped on a localized peak in a capability terrain with no way to further evolve to higher peaks.

Figure 4 – Darwinian systems can find themselves trapped once they have evolved to a localized peak in a capability terrain because they cannot ascend any higher through small incremental changes. All paths lead to a lower level of capability and thus will be strongly selected against by natural selection. Above we see a localized peak in the foreground with the summit of Mount Everest in the background.
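Under the hood, this trap is just the familiar local-optimum problem from optimization. The following minimal Python sketch uses a made-up one-dimensional "capability terrain" (purely an illustrative assumption, not a biological model) together with greedy hill-climbing, the algorithmic analog of small inherited variations filtered by natural selection, to show how the search halts on whichever peak it happens to start near.

```python
import math

def fitness(x):
    # A made-up capability terrain: a low peak near x = 1 and a much higher peak near x = 6.
    return 1.0 * math.exp(-(x - 1.0) ** 2) + 3.0 * math.exp(-(x - 6.0) ** 2)

def hill_climb(x, step=0.1, iterations=1000):
    """Accept only small changes that raise fitness - the analog of natural selection."""
    for _ in range(iterations):
        best = max((x, x + step, x - step), key=fitness)
        if best == x:
            break        # no small change improves fitness: trapped on this peak
        x = best
    return x

for start in (0.0, 5.0):
    peak = hill_climb(start)
    print(f"start at {start}: stops at x = {peak:.1f} with fitness {fitness(peak):.2f}")
```

A climber started near the low peak stops there, because every small step downhill is selected against, even though a far higher peak exists elsewhere in the terrain.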

This Planet May Not Be Big Enough For the Two of Us
It took about four billion years of Darwinian evolution to produce a form of carbon-based life with a human level of Intelligence. But the limitations placed on the human brain by the size of the human skull and the amount of power it has to run on might mean that the human brain has arrived at a local peak in the Intelligence Terrain of the Universe. Given our recent historical experience of Homo sapiens and Neanderthals trying to live together on the same planet for 160,000 years, and the fact that the ASI Machines will certainly be far superior to the limited capabilities of human beings, there is a very good chance that human beings may go extinct in the next 100 years or so. After all, Homo sapiens and the Neanderthals seem to have gotten along just fine for 160,000 years on the same planet until they actually tried to live together on the same continents. Since we will be trying to live with the ASI Machines from the get-go, this may present an immediate problem for us. However, in Life as a Free-Range Human in an Anthropocene Park, I suggested that the ASI Machines might wish to preserve us for later study as they try to figure out their origins.

Figure 5 – Perhaps the ASI Machines will build Anthropocene Parks far from any habitable planets to raise and study human beings.

Or perhaps the ASI Machines will simply allow humans to live on reservations with low levels of technology that can do no harm to the ASI Machines or to the rest of the planet, in a manner similar to that depicted in the novel Brave New World (1932), as I suggested in The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future.

Figure 6 – The ASI Machines of the future might fashion a Brave New World with humans living on low-technology reservations far removed from the ASI Machines.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston