Tuesday, February 25, 2020

Quantum Computing and the Foundations of Quantum Mechanics

Sean Carroll is one of my favorite physicists and the author of many popular books on theoretical physics. I have read all of his books, and I just finished his latest, Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime (2019). It is a wonderful explanation of the Many-Worlds Interpretation of quantum mechanics, and I highly recommend it for all IT professionals interested in quantum computing because it is a very accessible introduction to quantum mechanics in general from the perspective of the Many-Worlds Interpretation. The book also contrasts the Many-Worlds Interpretation with other interpretations, like the Copenhagen Interpretation and the Hidden Variables Interpretation. Below is a YouTube lecture by Sean Carroll that serves as a good introduction to his new book.

A Brief History of Quantum Mechanics - with Sean Carroll
https://www.youtube.com/watch?v=5hVmeOCJjOU

Recall that we took a deep dive into Hugh Everett's Many-Worlds Interpretation of quantum mechanics in Quantum Computing and the Many-Worlds Interpretation of Quantum Mechanics by stepping page-by-page through his original 137-page January 1956 draft Ph.D. thesis, in which he laid down the foundations for the Many-Worlds Interpretation. That post was a bit challenging, so I would recommend Something Deeply Hidden to resolve any confusion. The book also cleared up some of my misgivings about the Many-Worlds Interpretation.

The Challenge of Quantum Computing
Why is an understanding of quantum mechanics important for those interested in pursuing quantum computing? I have a warning for those of you who are young enough to be around when quantum computers finally start to appear in corporate IT departments: most IT professionals working on software today have not had to worry much about hardware since the 1960s. Long ago, people in software simply adopted a "logical" view of the underlying "physical" hardware that they used, and that allowed them to easily divorce themselves from the grubby details of storing "1s" and "0s" on something physical in nature. So when quantum computers finally start to roll into corporate IT departments, it might be wise to look back to the 1950s, when classical computers first rolled into the payroll departments of major corporations, because, once again, you may then need to be a bit more concerned about the underlying hardware.

For example, back in the summers of 1973 and 1974, I was at the University of Wisconsin working on an M.S. in Geophysics. I was working with a team of graduate students who were collecting electromagnetic data in the field on a DEC PDP-8/e minicomputer. The machine cost about $30,000 in 1973 (about $176,000 in 2020 dollars) and was about the size of a large side-by-side refrigerator. The machine had 32 KB of magnetic core memory, about 2 million times less memory than a modern 64 GB smartphone. We actually hauled this machine through the dirt-road lumber trails of the Chequamegon National Forest in Wisconsin and powered it with an old diesel generator to digitally record electromagnetic data in the field. I did all of my preliminary modeling work in BASIC on the DEC PDP-8/e without a hitch while the machine was sitting peacefully in an air-conditioned lab. So I did not have to worry about the underlying hardware at all. For me, the machine was just a big black box that processed my software as directed. However, when we dragged this poor machine through the bumpy lumber trails of the Chequamegon National Forest, all sorts of "software" problems arose that were really due to the hardware. For example, we learned that each time we stopped and made camp for the day, we had to reseat all of the circuit boards in the DEC PDP-8/e. We also learned that the transistors in the machine did not like it when the air temperature in our recording truck rose above 90 °F because we started getting parity errors. We also found that we had to let the old diesel generator warm up a bit before we turned on the DEC PDP-8/e to give the generator enough time to settle down into a nice, more-or-less stable, 60 Hz alternating voltage.

Figure 1 – Some graduate students huddled around a DEC PDP-8/e minicomputer. Notice the teletype machines in the foreground on the left that were used to input code and data into the machine and to print out results as well.

This time, to form a "logical view" of the hardware, you will need to understand something about the foundations of quantum mechanics, and that presents a bit of a challenge. The problem is that nobody really understands the foundations of quantum mechanics! By the "foundations of quantum mechanics", I mean the underlying processes that make it work. I took my very first quantum mechanics course back in 1970, and I vividly remember my physics professor telling me, "You never really understand quantum mechanics, you just get used to it". For example, the quantum hardware people will tell you that you will need to give your quantum computer enough time to complete a full run before checking its output. If you try to check what your code is doing before a run completes, you will ruin the entire run! And you will not be able to restart a paused run from the middle either. You will also not be able to use an IDE (Integrated Development Environment), like Eclipse or Microsoft's Visual Studio, to step through your code during development. Instead, like back in the 1950s and 1960s, you will have to put lots of print statements in your code to review the processing flow after the run completes. Even that will not be of much help because it will appear as if your code ran through many parallel computers all at the same time! Debugging quantum code will make debugging multi-threaded code on clustered conventional computers look like child's play.

In Something Deeply Hidden, Sean Carroll explains how modern quantum mechanics was developed by Werner Heisenberg in 1925 and Erwin Schrödinger in 1926. Then, over the course of the 1920s and 1930s, everybody in the physics community came to an agreement on two fundamental findings:

1. Particles no longer had definite properties, such as definite positions and velocities. Instead, an isolated particle had a wavefunction that described its position and velocity in terms of a probability distribution that could be obtained by squaring the amplitude of its wavefunction.

2. However, the wavefunction for an isolated particle did deterministically change with time according to the Schrödinger equation, so the Universe was still deterministic in nature.

For a brief introduction to quantum mechanics, see Quantum Software. Sean Carroll then goes on to explain how those two fundamental findings of quantum mechanics allowed physicists to do many very useful calculations for quantum systems and also allowed us to develop many useful things like transistors. However, the problem was that the physics community of the 1920s and 1930s could not come to an agreement on the underlying processes that allowed the two fundamental findings of quantum mechanics to produce the very accurate calculations that predicted what was physically measured in the lab. Those underlying processes are now called the "foundations of quantum mechanics" and are described by a number of "Interpretations" of quantum mechanics. Sean Carroll then explains that after the 1930s, the physics community frowned upon those who were interested in the foundations of quantum mechanics. Instead, those physicists were simply told to "shut up and calculate" with the two fundamental findings of quantum mechanics that all could agree on. I am sure that any of you who have ever worked for a major corporation or governmental agency can easily understand why.

For those of you interested in quantum computing, it should also be noted that the more esoteric interpretations of quantum mechanics are now more important because the classic Copenhagen Interpretation of quantum mechanics no longer seems to carry the weight that it once did decades ago. For example, in Quantum Computing and the Many-Worlds Interpretation of Quantum Mechanics we covered Hugh Everett's Many-Worlds Interpretation of quantum mechanics, and in Is the Universe a Quantum Computer? we covered John Cramer's Transactional Interpretation of quantum mechanics. To explain the strange quantum-mechanical effects that we observe in the lab, the Many-Worlds Interpretation relies on timelines in multiple parallel Universes, while the Transactional Interpretation relies on a Universe that features multiple timelines moving both forwards and backwards in time. To make such phenomena a bit easier to understand, let us briefly review the Copenhagen Interpretation, the Many-Worlds Interpretation and the Transactional Interpretation of quantum mechanics. But to do that, we need to know a bit about quantum mechanics and how physicists use the wave model to explain certain phenomena.

The Wave Model
The chief characteristic of the wave model is that waves tend to be everywhere, but nowhere in particular, at the same time and simultaneously explore all possible paths. To see a wave in action, drop a small pebble into a still pond of water containing many obstacles and watch the resulting waves spread out and reflect off the obstacles and interfere with each other before eventually reaching a particular destination near the edge of the pond.

In 1801, Thomas Young conducted a series of experiments with waves. First, using water waves in a shallow ripple tank, he demonstrated the concept of interference. When a water wave encounters a barrier with two slits, the ripples passing through the slits interfere with each other on the other side of the barrier (Figure 2). Where two crests intersect, the wave amplitude doubles in height, and where a crest meets a trough, the two waves cancel each other out entirely. Next, Young used a distant light source with two closely spaced slits in an opaque barrier. On the other side of the barrier, he placed a white projection screen. When light from the distant light source passed through the double-slit barrier, Young observed an interference pattern of alternating bright and dark fringes projected onto the screen, which demonstrated the wavelike behavior of light.

Figure 2 – The interference pattern from two slits (click to enlarge)
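If you would like to see the arithmetic behind Figure 2, below is a minimal Python sketch of my own (the wavelength and slit spacing are illustrative values I picked, not Young's) that computes the familiar two-slit intensity pattern, which is proportional to cos²(πd·sin(θ)/λ):

# A minimal sketch of Young's double-slit interference, assuming
# monochromatic light of wavelength lam and two slits separated by d.
# The intensity on a distant screen at angle theta is proportional to
# cos^2(pi * d * sin(theta) / lam): bright fringes where the two waves
# arrive in phase, dark fringes where they arrive out of phase.
import numpy as np

lam = 500e-9         # wavelength: 500 nm green light (illustrative value)
d = 50e-6            # slit separation: 50 micrometers (illustrative value)

theta = np.linspace(-0.02, 0.02, 9)          # small screen angles in radians
phase = np.pi * d * np.sin(theta) / lam      # half the phase difference
intensity = np.cos(phase) ** 2               # normalized fringe intensity

for t, i in zip(theta, intensity):
    print(f"theta = {t:+.4f} rad   relative intensity = {i:.3f}")

Where the two waves arrive in phase, the printed intensity is near 1.0 (a bright fringe); where they arrive out of phase, it is near 0.0 (a dark fringe).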

You can easily repeat Young’s experiment with a piece of thin cloth. At night, hold up a single ply of a pillowcase in front of a distant light source, such as a far-off street light or the filament in your neighbor’s decorative front door light that uses a clear light bulb. Instead of a single diffuse spot of light shining through the pillowcase, you will see a pronounced checkerboard interference pattern of spots, because the weave of your pillowcase has both vertical and horizontal slits between the threads.

Figure 3 – You can see this interference pattern of photons if you look at a distant porch light through the mesh of a sheer window curtain or a pillowcase.

The Birth of Modern Quantum Mechanics
As we saw in Quantum Software, Erwin Schrödinger first developed the Schrödinger equation in the winter of 1926 to explain the strange behavior of electrons in atoms and the fact that excited atoms only radiated light at certain frequencies. The 1-dimensional version of this famous equation is:

-(ħ²/2m) ∂²Ψ/∂x²  =  iħ ∂Ψ/∂t

In the above 1-dimensional Schrödinger equation, Ψ is called the wavefunction of a particle and is pronounced like the word “sigh”. In quantum mechanics, the wavefunction Ψ contains all of the information that can ever be known about the particle.

Now if the particle is just quietly sitting around on its own and not interacting with other particles, like an electron that has been sitting quietly in an atom for a billion years, its wavefunction should not be changing with time, and we can use the 1-dimensional time-independent version of the Schrödinger equation, which does not contain the time variable "t":

-(ħ²/2m) d²ψ(x)/dx²  +  V(x) ψ(x)  =  E ψ(x)

The lower-case wavefunction ψ is still pronounced like the word "sigh", but we use the lower-case ψ to signify that this is a time-independent wavefunction that does not change with time. When the 3-dimensional Schrödinger equation is solved for the hydrogen atom, consisting of just one electron trapped by one proton in an electromagnetic well, we get a number of quantized wavefunctions as solutions:

Figure 4 – The n=1 and n=2 orbitals or wavefunctions for the hydrogen atom.
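To see how quantized solutions like these fall out of the time-independent Schrödinger equation, here is a minimal numerical sketch of my own. Rather than the full 3-dimensional hydrogen atom, it solves the simplest case of a particle trapped in a 1-dimensional box (with ħ = m = 1 and the box running from 0 to 1), where the exact answer E_n = n²π²/2 is known:

# A minimal numerical sketch (not from the book) of how the
# time-independent Schrodinger equation yields quantized energies.
# We solve -1/2 * psi''(x) = E * psi(x) (hbar = m = 1) for a particle
# trapped in a box 0 <= x <= 1 by turning the second derivative into a
# finite-difference matrix; only a discrete ladder of eigenvalues E_n
# survives the boundary conditions, just like the hydrogen orbitals.
import numpy as np

N = 500                         # interior grid points
dx = 1.0 / (N + 1)
# Second-derivative operator as a tridiagonal matrix.
main = np.full(N, -2.0)
off = np.ones(N - 1)
laplacian = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2
H = -0.5 * laplacian            # Hamiltonian with V(x) = 0 inside the box

energies = np.linalg.eigvalsh(H)[:3]
exact = [0.5 * (n * np.pi) ** 2 for n in (1, 2, 3)]
for n, (e, ex) in enumerate(zip(energies, exact), start=1):
    print(f"n = {n}:  numerical E = {e:.3f}   exact n^2*pi^2/2 = {ex:.3f}")

Only a discrete ladder of energies survives the boundary conditions - that is where the quantization pictured in Figure 4 comes from.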

The Strange Motion of Quantum Particles in Space and Time
Now for quantum particles like electrons or photons that are on the move, we need to use Richard Feynman's "sum over histories" approach to quantum mechanics. In Feynman's approach, the wavefunction amplitude of an electron or photon is the same in all directions, like when you drop a pebble in a still pond, but the phase angle of the wavefunction will differ depending upon the path that is taken. So to figure out the probability of finding an electron or photon at a particular point, you have to add up the amplitudes and phases of all the possible paths that the electron or photon could have taken to reach the destination point. Although there are an infinite number of possible paths, the key insight is that most of the paths will be out of phase with each other and will cancel out, like the destructive interference shown in Figure 2. This produces some rather strange experimental observations. Imagine a very dim source of photons or electrons that can fire one photon or electron at a time. If we fired the particles at a screen with two slits, as in Young's experiment, we would expect to see a pattern similar to Figure 5 build up over time, based upon the particle model for electrons and photons.

Figure 5 – What common sense and the particle model would predict for a source that fires electrons or photons one at a time

However, what is actually observed is an interference pattern similar to Figure 6, even though the electrons or photons pass through the slits one at a time. According to quantum mechanics, the individual electrons or photons interfere with themselves as they go through both slits at the same time! This means that if your neighbor could turn down the light by his front door to a very low level, so that it only emitted one photon at a time, and your eye could record a long exposure image, you would still see a checkerboard pattern of light spots through your pillowcase, even though the photons went through the fabric mesh one at a time.

Figure 6 – We actually observe an interference pattern as each particle interferes with itself
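Here is a toy Python sketch of Feynman's "sum over histories" recipe for the double slit (the geometry and wavelength are illustrative values of my own): each path gets a unit-amplitude phasor e^(i2πL/λ), the phasors for the path through each slit are added, and the squared amplitude of the total gives the detection probability at a point on the screen:

# A toy sketch of Feynman's "sum over histories" for the double slit:
# assign each path a unit-amplitude phasor exp(i * 2*pi * L / lam),
# where L is the path length, add the phasors for BOTH slits, and
# square the total amplitude to get the detection probability.
# All geometry values are illustrative, not from a real experiment.
import numpy as np

lam = 1.0                        # wavelength in arbitrary units
slit_y = (+5.0, -5.0)            # the two slit positions
screen_dist = 100.0              # slits-to-screen distance

def probability(y_screen):
    amp = 0.0 + 0.0j
    for y_slit in slit_y:        # one history through each slit
        L = np.hypot(screen_dist, y_screen - y_slit)
        amp += np.exp(2j * np.pi * L / lam)
    return abs(amp) ** 2         # Born rule: probability = |amplitude|^2

for y in np.linspace(0.0, 20.0, 9):
    print(f"y = {y:5.1f}   relative probability = {probability(y):.3f}")

At the center of the screen the two phasors line up and the probability is 4 times that of a single path; a little farther out they cancel to nearly zero - the fringes of Figure 6.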

Now here comes the really strange part. If we put detectors just in front of the slits so that we can record which slit the electron or photon actually passed through, and keep firing one particle at a time, the interference pattern will disappear, and we will see the pattern in Figure 5 instead. If we turn the detectors off, the interference pattern returns, and we see the pattern in Figure 6. For some reason, Nature will not allow us to observe electrons or photons behaving like particles and waves at the same time. It's some kind of information thing again. But it gets worse. If we put the detectors at some distance behind the slits and turn them on, the interference pattern again disappears, but if we turn the detectors off, the interference pattern returns. Now, this is after the electrons or photons have already passed through the slits! How do they know whether to behave like a wave or a particle in advance, before they know if the detectors are on or off? In fact, experiments have been performed in which the decision to turn the detectors on or off is not made until after the individual electrons or photons have already passed through the slits, but even so, if the detectors are turned on, the interference pattern disappears, and if the detectors are turned off, the interference pattern returns! This means that the present can change the past! This is the famous delayed-choice experiment proposed by John Wheeler in 1978 and actually performed by Alain Aspect and his colleagues in 1982. In another experiment, the detectors are placed beyond the observation screen to detect entangled "twin" photons that are created in a splitting process. By observing one of the twin photons, it is possible to determine which slit the other photon passed through after it has already hit the observation screen. When these distant detectors are turned on, the interference pattern once again disappears, and if the detectors are turned off, the interference pattern returns. Again, the decision to turn the detectors on or off can be made after the photons have already hit the observation screen. This means that the future can change the present!

In 1928, Paul Dirac combined quantum mechanics (1926) with the special theory of relativity (1905) and came up with a relativistic reformulation of the Schrödinger equation. Strangely, the solutions to Dirac's equation predicted both the existence of electrons, with a negative charge and positive mass-energy, and also positrons, the antimatter equivalent of electrons, with a positive charge and a negative mass-energy. But in the late 1940s, Richard Feynman came up with an alternate interpretation for Dirac's positrons with negative mass-energy. Feynman proposed that positrons were actually normal electrons moving backwards in time! Recall that the full-blown wavefunction of an object with constant energy can be expressed as a time-independent wavefunction ψ(x) multiplied by a time-varying term:

Ψ(x, t)  =  e^(-iEt/ħ) ψ(x)

Now the solutions to Dirac’s equation predicted both the existence of electrons with positive mass-energy and also positrons, the antimatter equivalent of electrons, with negative mass-energy. For a particle with negative mass-energy, the above equation looks like:

Ψ(x, t)  =  e^(-i(-E)t/ħ) ψ(x)

but since:

-i(-E)t/ħ = -iE(-t)/ħ

Feynman realized that an equivalent equation could be written by simply moving the minus sign from the energy E to the time t, yielding:

Ψ(x, t)  =  e^(-iE(-t)/ħ) ψ(x)

So a positron with negative mass-energy –E could mathematically be thought of as a regular old electron with positive mass-energy E moving backwards in time! Indeed, today that is the preferred interpretation. All antimatter is simply regular matter moving backwards in time.

Figure 7 – Above is a Feynman diagram showing an electron colliding with a positron, the antimatter version of an electron.

In Figure 7 we see an electron colliding with a positron. When the two particles meet, they annihilate each other and turn into two gamma rays (γ). In the Feynman diagram, space runs along the horizontal axis and time runs along the vertical axis. In the diagram, we see an electron e- with negative charge and positive mass-energy on the left and a positron e+ with positive charge and negative mass-energy on the right. As time progresses up the vertical time axis, we see the electron e- and the positron e+ approach each other along the horizontal space axis. When the two particles get very close, they annihilate each other, and we see the two gamma rays departing the collision as time proceeds along the vertical time axis. But notice the red arrowheads on the lines in the Feynman diagram. The arrowhead for the negative electron e- is moving upwards on the diagram and forward in time, while the arrowhead for the positive positron e+ is moving downwards on the diagram and backwards in time! So in the diagram, the positive positron e+ is portrayed as an ordinary negative electron e- moving backwards in time!

The Copenhagen Interpretation of Quantum Mechanics
In 1927, Niels Bohr and Werner Heisenberg proposed a very positivistic interpretation of quantum mechanics, now known as the Copenhagen Interpretation; Bohr was working at the University of Copenhagen Institute of Theoretical Physics at the time. The Copenhagen Interpretation contends that absolute reality does not really exist. Instead, there are an infinite number of potential realities, defined by the wavefunction ψ of a quantum system, and when we make a measurement of a quantum system, the wavefunction of the quantum system collapses into a single value that we observe, and thus brings the quantum system into reality (see Quantum Software for more on wavefunctions). This satisfied Max Born's contention that wavefunctions are just probability waves. The Copenhagen Interpretation suffers from several philosophical problems though. For example, Eugene Wigner pointed out that the devices we use to measure quantum events are also made out of atoms, which are quantum objects in themselves, so when a Geiger counter is used to observe a single atom of uranium to see if it has gone through radioactive decay, the atomic quantum particles of the Geiger counter become entangled in a quantum superposition of states with the uranium atom. If the uranium atom has decayed, then the uranium atom and the Geiger counter are in one quantum state, and if the atom has not decayed, then the uranium atom and the Geiger counter are in a different quantum state. If the Geiger counter is fed into an amplifier, then we have to add the amplifier into our quantum superposition of states too. If a physicist is patiently listening to the Geiger counter, we have to add him into the chain as well, so that he can write and publish a paper which is read by other physicists and is picked up by Time magazine for a popular presentation to the public. So when does the "measurement" actually take place? We seem to have an infinite regress. Wigner's contention was that the measurement takes place when a conscious being first becomes aware of the observation. Einstein had a hard time with the Copenhagen Interpretation of quantum mechanics for this very reason because he thought that it verged upon solipsism. Solipsism is a philosophical idea from Ancient Greece. In solipsism, your Mind is the only thing that really exists, and the physical Universe is just a figment of your imagination. So I would like to thank you very much for thinking of me and bringing me into existence! Einstein's opinion of the Copenhagen Interpretation of quantum mechanics can best be summed up by his question "Is it enough that a mouse observes that the Moon exists?". Einstein objected to the requirement for a conscious being to bring the Universe into existence because, in Einstein's view, measurements simply reveal to us the condition of an already existing reality that does not need us around making measurements in order to exist. But in the Copenhagen Interpretation, the absolute reality of Einstein does not really exist. Additionally, in the Copenhagen Interpretation, objects do not really exist until a measurement is taken, which collapses their associated wavefunctions, but the mathematics of quantum mechanics does not shed any light on how a measurement could collapse a wavefunction.

The collapse of the wavefunction is also a one-way street. According to the mathematics of quantum mechanics, a wavefunction changes with time in a deterministic manner, so like all of the other current effective theories of physics, it is reversible in time and can be run backwards. This is also true in the Copenhagen Interpretation, so long as you do not observe the wavefunction and collapse it by the process of observing it. In the Copenhagen Interpretation, once you observe a wavefunction and collapse it, you cannot undo the collapse, so the process of observation becomes irreversible in time. That means if you fire photons at a target, but do not observe them, it is possible to reverse them all in time and return the Universe back to its original state. That is how all of the other effective theories of physics currently operate. But in the Copenhagen Interpretation, if you do observe the outgoing photons, you can never return the Universe back to its original state. This can best be summed up by the old quantum mechanical adage - look particle, don't look wave. A good way to picture this in your mind is to think of a circular tub of water. If you drop a pebble into the exact center of a circular tub of water, a series of circular waves will propagate out from the center. Think of those waves as the wavefunction of an electron changing with time into the future according to the Schrödinger equation. When the circular waves hit the circular walls of the tub, they will be reflected back to the center of the tub. Essentially, they can be viewed as moving backwards in time. This can happen in the Copenhagen Interpretation so long as the electron is never observed as its wavefunction moves forward or backward in time. However, if the wavefunction is observed and collapsed, it can never move backwards in time, so observation becomes a one-way street.

The Many-Worlds Interpretation of Quantum Mechanics
In 1956, Hugh Everett, working on his Ph.D. under John Wheeler, proposed the Many-Worlds Interpretation of quantum mechanics as an alternative. The Many-Worlds Interpretation admits to an absolute reality but claims that there are an infinite number of absolute realities spread across an infinite number of parallel universes. In the Many-Worlds Interpretation, when electrons or photons encounter a two-slit experiment, they go through one slit or the other, and when they hit the projection screen they interfere with electrons or photons from other universes that went through the other slit! In Everett's original version of the Many-Worlds Interpretation, the entire Universe splits into two distinct universes whenever a particle is faced with a choice of quantum states, and so all of these universes are constantly branching into an ever-growing number of additional universes. In the Many-Worlds Interpretation of quantum mechanics, the wavefunctions or probability clouds of electrons surrounding an atomic nucleus are the result of overlaying the images of many "real" electrons in many parallel universes. Thus, according to the Many-Worlds Interpretation, wavefunctions never collapse. They just deterministically evolve in an abstract mathematical Hilbert space and are reversible in time, like everything else in physics.

Because Einstein detested the Copenhagen Interpretation of quantum mechanics so much, he published a paper in 1935 with Boris Podolsky and Nathan Rosen which outlined what is now known as the EPR Paradox. But to understand the EPR Paradox, we need a little background in experimental physics. Electrons have a quantum mechanical property called spin. You can think of an electron's spin as if the electron had a little built-in magnet. In fact, it is the spins of these little electron magnets that add up to make the real magnets that you put on your refrigerator. Now in quantum mechanics, the spin of a single electron can be both up and down at the same time because the single electron can be in a superposition of quantum states! But in the classical Universe that we are used to, macroscopic things like a child's top can only spin in a clockwise or counterclockwise manner at any given time - they cannot do both at the same time. Similarly, in quantum mechanics, a photon or electron can go through both slits of a double-slit experiment at the same time, so long as you do not put detectors at the slit locations.

Figure 8 – A macroscopic top can only spin clockwise or counterclockwise at one time.

Figure 9 – But electrons can be in a mixed quantum mechanical state in which they both spin up and spin down at the same time.

Figure 10 – Similarly, tennis balls can only go through one slit in a fence at a time. They cannot go through both slits of a fence at the same time.

Figure 11 – But at the smallest of scales in our quantum mechanical Universe, electrons and photons can go through both slits at the same time, producing an interference pattern.

Figure 12 – Again, you can see this interference pattern of photons if you look at a distant porch light through the mesh of a sheer window curtain or a pillowcase.

When you send an electron through a non-uniform magnetic field that is pointing up, the electron will pop out in one of two states. It will either be aligned with the magnetic field (called spin-up) or it will be pointing 180° in the opposite direction of the magnetic field (called spin-down). The spin-up and spin-down conditions are each called an eigenstate. Prior to the observation of the electron's spin, the electron is in a superposition of states and is not in an eigenstate. Now if the electron in the eigenstate of spin-up is sent through the same magnetic field again, it will be found to pop out in the eigenstate of spin-up again. Similarly, a spin-down electron that is sent through the magnetic field again will also pop out as a spin-down electron. Now here is the strange part. If you rotate the magnetic field by 90° and send spin-up electrons through it, 50% of the electrons will pop out with a spin pointing to the left, and 50% will pop out with a spin pointing to the right. And you cannot predict in advance which way a particular spin-up electron will pop out. It might spin to the left, or it might spin to the right. The same goes for the spin-down electrons - 50% will pop out spinning to the left and 50% will pop out spinning to the right.

Figure 13 - In the Stern-Gerlach experiment, we shoot electrons through a non-uniform magnetic field. Classically, we would expect the electrons to be spinning in random directions, so the magnetic field should deflect them in random directions, creating a smeared-out spot on the screen. Instead, we see that the act of measuring the spins of the electrons puts them into eigenstates with eigenvalues of spin-up or spin-down, and the electrons are either deflected up or down. If we rotate the magnets by 90°, we find that the electrons are deflected to the right or to the left.
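Here is a small Python sketch of those statistics, using the standard two-state spin formalism (the trial count and random seed are just illustrative choices of mine):

# A minimal sketch of the spin measurements described above, assuming
# the standard two-state (qubit) formalism. A spin-up electron measured
# along the same axis always comes out spin-up; measured along an axis
# rotated by 90 degrees, it comes out one way or the other with 50/50 odds.
import numpy as np

up = np.array([1.0, 0.0j])                    # spin-up eigenstate along z

def measure(state, theta, trials=10000, rng=np.random.default_rng(42)):
    # Eigenstate "spin-up along the axis tilted by theta from vertical":
    axis_up = np.array([np.cos(theta / 2), np.sin(theta / 2) + 0j])
    p_up = abs(np.vdot(axis_up, state)) ** 2  # Born rule probability
    outcomes = rng.random(trials) < p_up      # collapse, trial by trial
    return outcomes.mean()

print("same axis      :", measure(up, theta=0.0))        # always 1.0
print("rotated 90 deg :", measure(up, theta=np.pi / 2))  # about 0.5

For a measurement axis rotated by an angle θ, the Born rule gives a probability of cos²(θ/2) of finding spin-up along the new axis - exactly 1 for θ = 0° and exactly 1/2 for θ = 90°, which is the 50/50 behavior described above.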

The EPR Paradox goes like this. Suppose we prepare many pairs of quantum mechanically "entangled" electrons that conserve angular momentum. Each pair consists of one spin-up electron and one spin-down electron, but we do not know which is which at the outset. Now let the pairs of electrons fly apart and let two observers measure their spins. If observer A measures an electron, there will be a 50% probability that he will find a spin-up electron and a 50% chance that he will find a spin-down electron, and the same goes for observer B: 50% of observer B's electrons will be found to have a spin-up, while 50% will be found with a spin-down. Now the paradox of the EPR Paradox, from the perspective of the Copenhagen Interpretation, is that when observer A and observer B come together to compare notes, they find that each time observer A found a spin-up electron, observer B found a spin-down electron, even though the electrons did not know which way they were spinning before the measurements were performed. Somehow, when observer A measured the spin of an electron, it instantaneously changed the spin of the electron that observer B measured. Einstein hated this "spooky action at a distance" feature of the Copenhagen Interpretation that made physics nonlocal, meaning that things that were separated by great distances could still instantaneously change each other. He thought that it violated the speed-of-light limit of his special theory of relativity, which does not allow information to travel faster than the speed of light. Einstein thought that the EPR Paradox was the final nail in the coffin of quantum mechanics. There had to be some "hidden variables" that allowed electrons to know if they "really" were a spin-up or spin-down electron. You see, for Einstein, absolute reality really existed. For Einstein, the apparent probabilistic nature of quantum mechanics was an illusion, like the random() function found in most computer languages. The random() function is really a pseudorandom number generator that computes a sequence of apparently random numbers that is completely predictable if you know the algorithm and its starting state. You normally initialize the random() function with a "seed" taken from the system clock of the computer you are running on to simulate randomness by starting the sequence at a different point each time.
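To make Einstein's analogy concrete, here is a tiny sketch using Python's standard random module: reuse the seed, and the "random" numbers replay exactly, which is precisely the kind of hidden-variable determinism Einstein had in mind for quantum mechanics:

# A short sketch of the point about random(): a pseudorandom generator
# only looks random. Given the same seed, it replays exactly the same
# sequence, so the "randomness" is completely predictable in advance.
import random

random.seed(1234)
first_run = [random.random() for _ in range(3)]

random.seed(1234)                # reuse the seed...
second_run = [random.random() for _ in range(3)]

print(first_run)
print(second_run)
print("identical:", first_run == second_run)   # True: fully predictable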

However, in 1964, John S. Bell published a paper in which he proposed an experiment that could actually test the EPR Paradox. In the 1980s and 1990s, a series of experiments were indeed performed that showed that Einstein was actually wrong. Using photons and polarimeters, instead of the spins of electrons, these experiments showed that photons really do not know their quantum states in advance of being measured and that determining the polarization of a photon by observer A can immediately change the polarization of another photon 60 miles away. These experiments demonstrated that the physical Universe is nonlocal, meaning that Einstein's "spooky action at a distance" is built into our Universe, at least for entangled quantum particles. This might sound like a violation of the special theory of relativity because it seems like we are sending an instantaneous message faster than the speed of light, but that is really not the case. Both observer A and observer B will measure photons with varying polarizations at their observing stations separated by 60 miles. Only when observer A and observer B come together to compare results will they realize that their observations were correlated, so it is impossible to send a message with real information using this experimental scheme. Clearly, our common-sense ideas about space and time are still lacking, and so are our current effective theories.
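For the curious, here is a short sketch of the quantum prediction that these experiments confirmed, assuming the standard singlet-style polarization correlation E(a, b) = -cos(2(a - b)) between polarizer angles a and b. Bell showed that any local hidden-variable theory must keep the CHSH combination of four such correlations at |S| ≤ 2, while quantum mechanics predicts 2√2 ≈ 2.83, and the experiments sided with quantum mechanics:

# A sketch of why the Bell-test results rule out local hidden variables.
# For entangled photon pairs, quantum mechanics predicts the correlation
# E(a, b) = -cos(2 * (a - b)) between polarizer angles a and b. The CHSH
# combination of four such correlations must satisfy |S| <= 2 for ANY
# local hidden-variable theory, but the quantum prediction reaches 2*sqrt(2).
import numpy as np

def E(a, b):
    return -np.cos(2 * (a - b))   # polarization correlation (angles in radians)

# Standard angle choices that maximize the violation:
a, a2 = 0.0, np.pi / 4
b, b2 = np.pi / 8, 3 * np.pi / 8

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"CHSH S = {abs(S):.3f}  (local hidden variables require <= 2)")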

Hugh Everett solved this problem by letting the electrons be in all possible spin states in a large number of parallel universes. When observers measure the spin of an electron, they really do not measure the spin of the electron. They really measure which universe they happen to be located in, and since everything in the Many-Worlds Interpretation relies on "correlated" composite wavefunctions, it should come as no surprise that when observer A and observer B come together, they find that their measurements of the electron spins are correlated. In the Many-Worlds Interpretation, Hugh Everett proposes that when a device, like our magnets above, measures the spin of an electron that is in an unknown state, and not in a spin-up or spin-down eigenstate, the device does not put the electron into a spin-up or spin-down eigenstate as the Copenhagen Interpretation maintains. Instead, the device and the electron enter into a correlated composite system state, or combined wavefunction, with an indeterminate spin of the electron. Hugh Everett explains how this new worldview can be used to explain what we observe in the lab. In fact, he proposes that from the perspective of the measuring magnets and the electron, two independent observational histories will emerge, one with the measuring magnets finding a spin-up electron and one with the measuring magnets finding a spin-down electron, and both of these will be just as "real" as the other. For them, the Universe has essentially split in two, with each set in its own Universe. That is where the "Many-Worlds" in the Many-Worlds Interpretation of quantum mechanics comes from.

While doing research for The Software Universe as an Implementation of the Mathematical Universe Hypothesis I naturally consulted Max Tegmark’s HomePage at:

http://space.mit.edu/home/tegmark/mathematical.html

and I found a link there to Hugh Everett's original 137-page January 1956 draft Ph.D. thesis, in which he laid down the foundations for the Many-Worlds Interpretation. This is a rare document indeed because on March 1, 1957, Everett submitted a very compressed version of his theory in his final 36-page doctoral dissertation, "On the Foundations of Quantum Mechanics", after heavy editing by his thesis advisor John Wheeler, who wanted to make the thesis more palatable to the committee that would be hearing Everett's oral defense and also not to offend Niels Bohr, one of the founding fathers of the Copenhagen Interpretation and still one of its most prominent proponents. But years later, John Wheeler really did want to know what Niels Bohr thought of Hugh Everett's new theory and encouraged Everett to visit Copenhagen in order to meet with Bohr. Everett and his wife did finally travel to Copenhagen in March of 1959 and spent six weeks there. But by all accounts, the meeting between Bohr and Everett was a disaster, with Bohr not even willing to discuss the Many-Worlds Interpretation with Everett.

Below is the link to Hugh Everett’s original 137-page Jan 1956 draft Ph.D. thesis:

http://www.pbs.org/wgbh/nova/manyworlds/pdf/dissertation.pdf

I have also placed his thesis on Microsoft OneDrive at:

https://onedrive.live.com/redir?resid=21488ff1cf19c88b!1437&authkey=!ADIm_WTYLkbx90I&ithint=file%2cpdf

In Quantum Computing and the Many-Worlds Interpretation of Quantum Mechanics, I step through the above document page-by-page and offer up a translation of the mathematics into easily understood terms.

Something Deeply Hidden Addresses Many of the Concerns That Arise From the Many-Worlds Interpretation
What I really loved about Something Deeply Hidden was that it greatly reduced many of my reservations about the Many-Worlds Interpretation of quantum mechanics. Like many readers, I had felt that the Many-Worlds Interpretation was just too extreme because it called for such a huge number of copies of our Universe. But Sean Carroll makes a very good point. The Many-Worlds Interpretation is actually the most conservative interpretation of quantum mechanics because it relies only on the two fundamental findings of quantum mechanics that all are in agreement on. Sean Carroll points out that the two fundamental findings maintain that a single isolated particle has a wavefunction that changes with time according to the Schrödinger equation, and that's all we need. For example, if you have an isolated electron, it can be in a superposition of quantum states, such as being both spin-up and spin-down at the same time. If you have two isolated electrons that are interacting with each other, then the pair can be in a superposition of 4 possible combined quantum states. As you slowly increase the number of isolated interacting electrons, you keep doubling the number of possible superpositions, as the short sketch after Figure 14 illustrates. So when you finally include all of the particles in our observable Universe, you find that you naturally end up with an isolated Universe composed of a large number of entangled particles. You also end up with a huge number of parallel Universes, each also consisting of a large number of entangled particles that do not interact with any of the particles in the other parallel universes. This huge number of parallel Universes consisting of entangled particles naturally falls out of the two agreed-upon fundamental findings of quantum mechanics. Now, the Many-Worlds Interpretation is the only interpretation of quantum mechanics that simply acknowledges that the two fundamental findings are enough to explain all that we observe. All of the other interpretations of quantum mechanics are forced to add additional mechanisms to eliminate all of those parallel Universes that are naturally built into quantum mechanics. For example, the Copenhagen Interpretation manages to get rid of all the natural built-in parallel universes of quantum mechanics by collapsing the wavefunction of particles so that only one Universe remains.

Figure 14 - The two fundamental findings of quantum mechanics call for a universal wavefunction for the entire Universe with many parallel branches like the branches of a tree in the winter.
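Here is a minimal sketch of that relentless doubling (my own illustration, not from the book). Each additional two-state particle doubles the number of amplitudes in the combined wavefunction, because the combined state lives in the tensor product of the individual state spaces:

# A minimal sketch of the doubling Sean Carroll describes: each extra
# two-state particle (qubit) doubles the number of amplitudes needed,
# because the combined state is the tensor product of the parts.
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
plus = (up + down) / np.sqrt(2)      # one electron, both spins at once

state = np.array([1.0])
for n in range(1, 11):
    state = np.kron(state, plus)     # add one more particle
    print(f"{n:2d} particles -> {state.size:5d} amplitudes")
# 10 particles already need 2**10 = 1024 amplitudes; 300 particles would
# need more amplitudes than there are atoms in the observable Universe,
# which is why the branches multiply so fast.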

But the two fundamental findings of quantum mechanics lead to the conclusion that there really is only one wavefunction for the entire Universe. The Universe is not composed of a huge number of independent wavefunctions, one for each particle. All of the particles in any given branch of this "universal wavefunction" are entangled together. The deeply visceral objection to the Many-Worlds Interpretation that most humans have is that we just do not feel like we are constantly branching into new copies of the Universe. Sean Carroll points out that this same objection arose against the Copernican heliocentric theory when it first appeared because we just do not feel like we are moving when standing still on the surface of the Earth. Similarly, we do not feel like the Universe is constantly branching into new branches of the universal wavefunction. However, when you look at the night sky, something strange seems to be going on. First, it looks like all of the stars in the sky are uniformly moving on a large spherical dome in the sky once every day. Next, there are some really bright stars, called planets, that do not move at the same speed as the stars, and each planet moves relative to all of the others. Then, there is the Moon to deal with because it has phases and sometimes gets eclipsed or eclipses the Sun. These are difficult things to explain for a stationary Earth. However, with lots of effort, the ancients were able to come up with very complex explanations for all of these strange motions in the sky for a stationary Earth. Similarly, classical mechanics explained the everyday motions of macroscopic objects quite nicely. Only when we started looking at very small things like atoms did things start to look strange. But we learned how to predict the strange behaviors of small particles with quantum mechanics in the 1920s and 1930s using the two fundamental findings of quantum mechanics. Still, the behaviors of small particles seemed rather strange to us, even if we could now at least predict how they would behave in a probabilistic manner. Essentially, the only way we could still keep the Earth seemingly motionless in a quantum mechanical sense was to pile on all sorts of funny adjustments to the two fundamental findings of quantum mechanics with interpretations like the Copenhagen Interpretation. However, with the Many-Worlds Interpretation, all of those funny adjustments can be discarded. When you do that, all of the seemingly strange behaviors of small particles suddenly disappear. All you have to do is accept that the universal wavefunction has branches. It's like suddenly accepting the strange idea that the Earth moves. If you can do that, all of the strange motions of the lights in the sky go away.

Figure 15 - We do not sense these parallel universes because we are like an ant climbing up the branches of a tree. To us, the tree seems like an old telephone pole with only a single branch.

To help with that realization, let's take a trip back in time. It is generally thought that the modern Scientific Revolution of the 16th century began in 1543 when Nicolaus Copernicus published On the Revolutions of the Heavenly Spheres, in which he proposed his heliocentric theory, which held that the Earth was not the center of the Universe, but that the Sun held that position and that the Earth and the other planets revolved about the Sun. A few years ago I read On the Revolutions of the Heavenly Spheres and found that it began with a very strange foreword that essentially said that the book was not claiming that the Earth actually revolved about the Sun; rather, the foreword proposed that astronomers may adopt many different models that explain the observed motions of the Sun, Moon, and planets in the sky, and so long as these models make reliable predictions, they don't have to exactly match up with the absolute truth. Since the foreword did not anticipate space travel, it also implied that nobody would ever really know for sure anyway, because nobody would ever be able to see from above what was really going on, so there was no need to get too bent out of shape over the idea of the Earth moving. This is very similar to the "shut up and calculate" view of the Copenhagen Interpretation - don't worry too much about what is really going on with quantum mechanics - just use quantum mechanics to do calculations and develop useful things like transistors. I found this foreword rather puzzling and so disturbing that I almost put On the Revolutions of the Heavenly Spheres down. But a little further research revealed the true story. However, before we get to that, below is the foreword to On the Revolutions of the Heavenly Spheres in its entirety. It is well worth reading because it perfectly encapsulates the ongoing philosophical clash between positivism and realism in the history of physics.

"To the Reader
Concerning the Hypotheses of this Work

There have already been widespread reports about the novel hypotheses of this work, which declares that the earth moves whereas the sun is at rest in the center of the universe. Hence certain scholars, I have no doubt, are deeply offended and believe that the liberal arts, which were established long ago on a sound basis, should not be thrown into confusion. But if these men are willing to examine the matter closely, they will find that the author of this work has done nothing blameworthy. For it is the duty of an astronomer to compose the history of the celestial motions through careful and expert study. Then he must conceive and devise the causes of these motions or hypotheses about them. Since he cannot in any way attain to the true causes, he will adopt whatever suppositions enable the motions to be computed correctly from the principles of geometry for the future as well as for the past. The present author has performed both these duties excellently. For these hypotheses need not be true nor even probable. On the contrary, if they provide a calculus consistent with the observations, that alone is enough. Perhaps there is someone who is so ignorant of geometry and optics that he regards the epicycle of Venus as probable, or thinks that it is the reason why Venus sometimes precedes and sometimes follows the sun by forty degrees and even more. Is there anyone who is not aware that from this assumption it necessarily follows that the diameter of the planet at perigee should appear more than four times, and the body of the planet more than sixteen times, as great as at apogee? Yet this variation is refuted by the experience of every age. In this science there are some other no less important absurdities, which need not be set forth at the moment. For this art, it is quite clear, is completely and absolutely ignorant of the causes of the apparent nonuniform motions. And if any causes are devised by the imagination, as indeed very many are, they are not put forward to convince anyone that they are true, but merely to provide a reliable basis for computation. However, since different hypotheses are sometimes offered for one and the same motion (for example, eccentricity and an epicycle for the sun’s motion), the astronomer will take as his first choice that hypothesis which is the easiest to grasp. The philosopher will perhaps rather seek the semblance of the truth. But neither of them will understand or state anything certain, unless it has been divinely revealed to him.

Therefore alongside the ancient hypotheses, which are no more probable, let us permit these new hypotheses also to become known, especially since they are admirable as well as simple and bring with them a huge treasure of very skillful observations. So far as hypotheses are concerned, let no one expect anything certain from astronomy, which cannot furnish it, lest he accept as the truth ideas conceived for another purpose, and depart from this study a greater fool than when he entered it.

Farewell."


Now here is the real behind-the-scenes story. Back in 1539, Georg Rheticus, a young mathematician, came to study with Copernicus as an apprentice. It was actually Rheticus who convinced the aging Copernicus to finally publish On the Revolutions of the Heavenly Spheres shortly before his death. When Copernicus finally turned over his manuscript for publication to Rheticus, he did not know that Rheticus would subcontract the overseeing of the printing and publication of the book to a philosopher by the name of Andreas Osiander, and it was Osiander who anonymously wrote and inserted the infamous foreword. My guess is that Copernicus was a realist at heart who really did think that the Earth revolved about the Sun, while his publisher, who worried more about the public reaction to the book, took a more cautious positivistic position. Similarly, the Many-Worlds Interpretation of quantum mechanics maintains that the two fundamental findings of quantum mechanics really explain the whole thing, while the other interpretations of quantum mechanics hold that the two fundamental findings are mainly just useful for calculating how quantum systems are observed to behave.

The last few chapters of Something Deeply Hidden describe a very interesting new research program to investigate quantum gravity from a new perspective. Instead of trying to quantize Einstein's general theory of relativity, this new research program tries to derive the general theory of relativity from the two fundamental findings of quantum mechanics by portraying spacetime as an emergent phenomenon that naturally arises from the wavefunction of the Universe. Recall that emergent phenomena are things like the temperature and pressure of a gas. The temperature and pressure of a gas are not fundamental phenomena. They are really just the macroscopic effects of gas molecules bouncing around in a container, and that bouncing around is actually governed by the rules of quantum mechanics.

Although Sean Carroll is a strong advocate for the Many-Worlds Interpretation, he does maintain that those working on the foundations of quantum mechanics need to keep an open mind to other explanations. So let's finish up with another interpretation of quantum mechanics that also holds that wavefunctions really do exist.

The Transactional Interpretation of Quantum Mechanics
In Is the Universe a Quantum Computer? I covered John Cramer's Transactional Interpretation of quantum mechanics and compared it to TCP/IP transactions on the Internet. In an email exchange with John Cramer, I learned that such a comparison had never been done before. Now in the Copenhagen Interpretation of quantum mechanics, the wavefunctions Ψ of particles and photons are not "real" waves, they are only probability waves - just convenient mathematical constructs that don't "really" exist. But in Cramer's Transactional Interpretation of quantum mechanics, the wavefunctions Ψ of particles and photons really do exist. For a physics student new to quantum mechanics, this is truly a comforting idea. Before you are taught quantum mechanics, you go through a lengthy development of wave theory in courses on classical electrodynamics, optics, and differential equations. In all these courses, you only deal with waves that are mathematically real, meaning that these waves have no imaginary parts built from the imaginary number i, where i² = -1. But in your first course on quantum mechanics, you are introduced to Schrödinger's equation:

-(ħ²/2m) ∂²Ψ/∂x²  =  iħ ∂Ψ/∂t

and learn that generally, the wavefunction solutions to Schrödinger’s equation contain both real and imaginary parts containing the nasty imaginary number i. Consequently, the conventional wisdom is that the wavefunction solutions to Schrödinger’s equation cannot really exist as real tangible things. They must just be some kind of useful mathematical construct. However, in the same course, you are also taught about Davisson and Germer bouncing electrons off the lattice of a nickel crystal and observing an interference pattern, so something must be waving! I would venture to suggest that nearly all students new to quantum mechanics initially think of wavefunctions as real waves waving in space. Only with great coaxing by their professors do these students “unlearn” this idea with considerable reluctance.

As we saw previously, the imaginary parts of wavefunctions really bothered the founding fathers of quantum mechanics too. Recall that in 1926, Max Born came up with the clever trick of multiplying the wavefunctions Ψ by their complex conjugates Ψ* to get rid of the imaginary parts. To create the complex conjugate of a complex number or function, all you have to do is replace the imaginary number i with -i wherever you see it. According to Born's conjecture, the probability of things happening in the quantum world is proportional to the wavefunction multiplied by its complex conjugate, Ψ*Ψ. Mathematically, this is the same thing as finding the square of the amplitude of the wavefunction. Now earlier in this posting, I mentioned how Richard Feynman pointed out that instead of thinking of positrons as having negative mass-energy, you could also think of positrons as regular electrons with negative charge moving backwards in time by shifting the position of the "-" sign in the wavefunction of a positron. But that is just the same thing as using the complex conjugate Ψ* of an electron wavefunction for a positron. So mathematically, we can think of the complex conjugate wavefunction Ψ* of a particle as the wavefunction of the particle moving backwards in time. Cramer suggests that Born's Ψ*Ψ representing the probability of a quantum event is not just a mathematical trick or construct; rather, it is the collision of an outgoing "retarded" wave Ψ moving forwards in time with an incoming "advanced" wave Ψ* moving backwards in time.
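Here is a tiny numeric sketch of Born's trick, with illustrative values of my own for E, t and ψ(x). Multiplying Ψ by its complex conjugate Ψ* wipes out the imaginary parts and leaves a real, non-negative probability:

# A small numeric sketch of Born's trick: multiply a complex
# wavefunction value by its complex conjugate and the imaginary parts
# cancel, leaving a real, non-negative probability density.
import numpy as np

E, hbar, t = 2.0, 1.0, 0.7          # illustrative values, with hbar = 1
psi_x = 0.3 + 0.4j                  # wavefunction value at some point x

Psi = np.exp(-1j * E * t / hbar) * psi_x   # "retarded" wave, forward in time
Psi_star = np.conj(Psi)                    # "advanced" wave, backward in time

print("Psi      =", Psi)                   # complex
print("Psi*Psi  =", Psi_star * Psi)        # real: 0.3**2 + 0.4**2 = 0.25
print("|Psi|^2  =", abs(Psi) ** 2)         # the same number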

The Transactional Interpretation easily explains all of the apparent paradoxes of quantum mechanics. As we have seen, there is actual experimental evidence that electrons and photons seem to "know" in advance what they will encounter on the other side of a double-slit experiment. This is easily explained by the Transactional Interpretation. The electrons or photons send out retarded waves into the future, which interact with whatever lies beyond the slits. If there are detectors that are turned on, the retarded waves interact with them; if there are no detectors, the waves interact with some electrons on a projection screen instead. In either case, an advanced wave is sent backwards in time from the detectors or the projection screen to the point of origin of the electrons or photons, so they "know" how to behave before they get to the two-slit screen.

Will We Ever Really See Quantum Computers?
I can still remember my very first encounter with a computer on Monday, Nov. 19, 1956, watching the Art Linkletter TV show People Are Funny with my parents on an old black and white console television set that must have weighed close to 150 pounds. Art was showcasing the 21st UNIVAC I to be constructed and had it sorting through the questionnaires from 4,000 hopeful singles, looking for the ideal match. The machine paired up John Caran, 28, and Barbara Smith, 23, who later became engaged. And this was more than 40 years before eHarmony.com! The UNIVAC I first came out in 1951 and was 25 feet by 50 feet in size. It contained 5,600 vacuum tubes, 18,000 crystal diodes and 300 electromechanical relays with a total memory of 12 KB.

Figure 16 – The UNIVAC I was very impressive on the outside.

Figure 17 – But the UNIVAC I was a little less impressive on the inside.

Prior to 1955, computers like the UNIVAC I used mercury delay lines for computer memory. A mercury delay line consisted of a tube of mercury that was about 3 inches long. Each mercury delay line could store about 18 bits of computer memory as sound waves that were continuously refreshed by quartz piezoelectric transducers on each end of the tube. Mercury delay lines were huge and very expensive per bit, so computers like the UNIVAC I only had a memory of 12 KB.

Figure 18 – Prior to 1955, huge mercury delay lines built from tubes of mercury that were about 3 inches long were used to store bits of computer memory. A single mercury delay line could store about 18 bits of computer memory as a series of sound waves that were continuously refreshed by quartz piezoelectric transducers at each end of the tube.

In 1955, magnetic core memory came along. It used tiny magnetic rings called "cores" to store bits. Four little wires (an X drive line, a Y drive line, a sense wire and an inhibit wire) had to be threaded by hand through each little core in order to store a single bit, so although magnetic core memory was a lot cheaper and smaller than mercury delay lines, it was still very expensive and took up lots of space. The sketch after Figure 20 shows the coincident-current trick that let the X and Y drive lines select a single core.

Figure 19 – Magnetic core memory arrived in 1955 and used a little ring of magnetic material, known as a core, to store a bit. Each little core had to be threaded by hand with 4 wires to store a single bit.

Figure 20 – Magnetic core memory was a big improvement over mercury delay lines, but it was still hugely expensive and took up a great deal of space within a computer.
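Here is a short Python sketch of my own (just a toy model, not a schematic of any real machine) of that coincident-current trick: each drive line carries only half the current needed to flip a core, so of all the cores threaded by the two energized wires, only the one at their intersection receives a full select current and changes state:

```python
import numpy as np

class CorePlane:
    """Toy model of coincident-current addressing in a core plane."""

    def __init__(self, n):
        self.cores = np.zeros((n, n), dtype=int)  # 0/1 magnetization states

    def pulse(self, x_line, y_line, bit):
        current = np.zeros(self.cores.shape)
        current[x_line, :] += 0.5     # half-select current on one X wire
        current[:, y_line] += 0.5     # half-select current on one Y wire
        selected = current >= 1.0     # only the intersection sees full current
        self.cores[selected] = bit

    def read(self, x, y):
        # Real reads were destructive: the core was driven to 0 and a pulse
        # on the sense wire revealed whether it had held a 1. Here we just peek.
        return self.cores[x, y]

plane = CorePlane(8)
plane.pulse(3, 5, 1)                          # write a 1 at X=3, Y=5
print(plane.read(3, 5), plane.cores.sum())    # -> 1 1 (only one core flipped)
```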

Figure 21 – Finally, in the early 1970s, inexpensive semiconductor memory chips came along that made computer memory small and cheap.

So guess what? We have actually been using quantum computers for more than 50 years, because transistor-based computers fundamentally rely on quantum mechanics to work! Seriously, if you had shown me a 64 GB smartphone back in 1956, I would not have believed it to be physically possible. In fact, I still have a hard time believing how far hardware has progressed over the years. So don't bet against quantum computers coming to be in your lifetime.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Wednesday, February 05, 2020

Swarm Software and Killer Robots

As you all know, I am obsessed with the fact that we see no signs of Intelligence in our Milky Way galaxy after more than 10 billion years of chemical evolution that should have brought forth a carbon-based or silicon-based Intelligence to dominate the galaxy. In Last Call for Carbon-Based Intelligence on Planet Earth, I presented my Null Result Hypothesis to explain Fermi's Paradox. Basically, my Null Result Hypothesis states that the Milky Way galaxy has yet to produce a form of Intelligence that can make itself known to the rest of the galaxy because the conditions necessary to bring forth a carbon-based Intelligence are also the very same conditions that provide kill mechanisms that are 100% efficient at eliminating carbon-based Intelligences. In that posting, I suggested that messing with the carbon cycle of a planet could be one of those kill mechanisms that all forms of carbon-based Intelligences are subject to. For example, our planet is currently dying because we are messing with the carbon cycle of the Earth and everybody is pretending that it is not. The Right loves fossil fuels and is pretending that catastrophic Climate Change is simply not happening. The Left is pretending that wind and solar can solve the problem all on their own. And the Middle is concerned with other issues that they find more pressing. In that posting, I also pointed out that we have been sitting on the solution to this dire problem for more than 60 years. All we need to do is replace all of our carbon-based fuels with molten salt nuclear reactors that could burn thorium and uranium for hundreds of thousands of years, although that will certainly take some time to accomplish. Since carbon-based life seems to require small amounts of elements heavier than iron-56, carbon-based life should always also have small amounts of thorium and uranium available to fuel advanced technologies. That is because elements with nuclei heavier than iron-56 are generated by stellar supernova explosions and by colliding neutron stars. For more on that see:

The Alchemy of Neutron Star Collisions
https://www.youtube.com/watch?v=MmgMboWunkI&t=695s

So it is rather hard to believe that all carbon-based Intelligences extinguish themselves by messing with their planet's carbon cycle. Surely, some must follow a nuclear route using thorium and uranium and eventually fusing deuterium. This means that there must be additional kill mechanisms to be found within the conditions necessary to bring forth carbon-based Intelligence.

Another Possible Kill Mechanism
In Is Self-Replicating Information Inherently Self-Destructive?, I discussed the possibility that carbon-based Intelligent life might be self-destructive, as most forms of self-replicating information tend to be. Similarly, in Susan Blackmore's brilliant TED presentation at:

Memes and "temes"
https://www.ted.com/talks/susan_blackmore_on_memes_and_temes

Susan Blackmore points out that each additional form of self-replicating information that arises on a planet presents a new danger that could snuff out Intelligences in our galaxy. Note that I consider Susan Blackmore's temes to really be technological artifacts that contain software. After all, a smartphone without software is simply a flake tool with a very dull edge. Perhaps carbon-based Intelligences that do not do themselves in by messing with the carbon cycle of their home planet never successfully make the transition to silicon-based Intelligence for other reasons. Perhaps silicon-based AI does them in before it reaches a full level of Intelligence. For more on that see A Brief History of Self-Replicating Information.

The February 2020 issue of Scientific American features an article entitled Autonomous Warfare that discusses just such a possibility. The article describes the imminent danger of killer robot swarm software and notes that we already have all of the necessary silicon-based hardware and AI software to do the job. The following two videos raise all of the pertinent issues:

Sci-Fi Short Film "Slaughterbots" presented by DUST
https://www.youtube.com/watch?v=O-2tpwW0kmU

Why We Should Ban Lethal Autonomous Weapons
https://www.youtube.com/watch?time_continue=12&v=LVwD-IZosJE&feature=emb_logo

In The Danger of Tyranny in the Age of Software and Fascism and the Internet of Things, I pointed out that the rise of Alt-Right Fascist movements around the world was being primarily fueled by automation software displacing jobs, and I warned of the dangers that advanced surveillance software on the Internet of Things could produce in the hands of authoritarian societies. Now imagine totally automated factories churning out billions of autonomous killer robot drones instead of billions of smartphones! Perhaps, in the manner of The Terminator (1984), huge swarms of self-replicating killer robots could do us all in before silicon-based AI achieves a level of Intelligence that can make itself known to the rest of the galaxy.

In Last Call for Carbon-Based Intelligence on Planet Earth, I attributed our messing with the carbon cycle of a planet to the inherent selfishness of all forms of self-replicating information. But I also pointed out that selfish carbon-based Intelligences can redirect their inherent selfishness into positive actions that do not mess with the carbon cycle of their planet. That is because the delusion of consciousness gives agency to selfishness. A selfish conscious Intelligence can channel selfishness into actions that are not necessarily self-destructive. For more on that see The Ghost in the Machine the Grand Illusion of Consciousness.

However, it seems that the one thing that conscious carbon-based Intelligences may not be able to avoid is the regrettable legacy that results from billions of years of carbon-based life forms chomping on each other. Yes, all forms of self-replicating information have to be a little bit nasty in order to survive, but carbon-based life forms seem to take this to an extreme. I love watching nature documentaries by David Attenborough, but watching natural selection at work can certainly be a rather gruesome business. I would argue that all carbon-based Intelligences must come with a built-in tendency to kill other carbon-based life forms. This is neither a good nor bad thing. It is just a necessary thing to bring forth carbon-based Intelligence. Strangely, the people who are trying to ban killer robots make the argument that only "moral" carbon-based Intelligences, like human beings, should make the final decision to kill others! I attribute such delusional reasoning to the shared delusion of consciousness that likely comes with carbon-based Intelligence. It may ultimately be the reason why "moral" carbon-based Intelligences build killer robots in the first place.

More information on this topic is available at:

BAN LETHAL AUTONOMOUS WEAPONS
https://autonomousweapons.org/

Campaign to Stop Killer Robots
https://www.stopkillerrobots.org/

This is Not a Time to Despair
Yes, all of this could be viewed in a rather depressing manner, but on the other hand, it gives us an extraordinary opportunity. As the latest form of carbon-based Intelligence to appear in the Milky Way galaxy, we can learn from all of the past forms of carbon-based Intelligences that have failed. Just do not do things like mess with the carbon cycle of the planet or create killer robot swarm software. At all times, we need to be keenly aware of the fact that over the past 10 billion years all other carbon-based Intelligences within our galaxy did not make it. In order to do this, we need to always remember that we are a product of self-replicating information and that we carry all of the baggage that comes with self-replicating information. That is why, if you examine the great moral and philosophical teachings of most religions and philosophies, you will see a plea for us all to rise above the selfish self-serving interests of our genes, memes and software.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston