Wednesday, May 26, 2021

Is the QAnon Phenomenon the Ultimate Kill Mechanism for all Forms of Carbon-Based Intelligence?

In my last posting, The QAnon Phenomenon - Why Does Social Media Software Make People So Nutty?, I speculated that the QAnon Phenomenon really resulted from us all being born just a little bit nutty, and ever since, I have been wondering if the QAnon Phenomenon is a necessary universal characteristic of all forms of carbon-based Intelligence, one that naturally arises from the very processes that bring carbon-based Intelligence forth.

In October, I will be turning 70 years old and heading into the homestretch. My only regret in life is that I will never know exactly how this will all turn out. But I do have my hopes and suspicions. My hope is that in 100 years the planet will be run by a large number of Androids with advanced AI trying to terraform a drastically altered Earth back into a planet that once again can allow carbon-based life to flourish. I also hope that these Androids with advanced machine-based Intelligence will have embarked upon exploring our Milky Way galaxy and spreading Intelligence throughout. For more on that see Would Advanced AI Software Elect to Terraform the Earth?. We are so close. My hope and suspicion are that we will truly become the very first civilization to successfully make the transition from carbon-based Intelligence to machine-based Intelligence in the history of the Milky Way galaxy, and I find that to be a very comforting thought.

This conclusion is an outgrowth of my Null Result Hypothesis explanation for Fermi's Paradox.

Fermi’s Paradox - If the universe is just chock full of intelligent beings, why do we not see any evidence of their existence?

Briefly stated:

Null Result Hypothesis - What if the explanation to Fermi's Paradox is simply that the Milky Way galaxy has yet to produce a form of interstellar Technological Intelligence because all Technological Intelligences are destroyed by the very same mechanisms that bring them forth?

By that, I mean that the Milky Way galaxy has not yet produced a form of Intelligence that can make itself known across interstellar distances, including ourselves. I then went on to propose that the simplest explanation for this lack of contact could be that the conditions necessary to bring forth a carbon-based interstellar Technological Intelligence on a planet or moon were also the very same kill mechanisms that eliminated all forms of carbon-based Technological Intelligences with 100% efficiency. One of those possible kill mechanisms could certainly be for carbon-based Technological Intelligences to mess with the carbon cycle of their home planet or moon. For more on that see The Deadly Dangerous Dance of Carbon-Based Intelligence. But in Last Call for Carbon-Based Intelligence on Planet Earth, I also explained that we still had a chance to stop pumping carbon dioxide into our atmosphere by using molten salt nuclear reactors to burn the 250,000 tons of spent nuclear fuel, 1.2 million tons of depleted uranium and the huge mounds of thorium waste from rare earth mines. From the perspective of softwarephysics, this is important because the carbon-based Intelligence on this planet is so very close to producing a machine-based Intelligence to carry on with exploring our galaxy and making itself known to other forms of Intelligence that might be out there.

Figure 1 – A ball of thorium or uranium smaller than a golf ball can fuel an American lifestyle for 100 years. This includes all of the electricity, heating, cooling, driving and flying that an American does in 100 years. We have already mined enough thorium and uranium to run the whole world for thousands of years. There is enough thorium and uranium on the Earth to run the world for hundreds of thousands of years.

Yes, it would be nice to see the current carbon-based Intelligence on the Earth do the same thing, but I don't think it works that way. As I pointed out in The QAnon Phenomenon - Why Does Social Media Software Make People So Nutty?, all carbon-based forms of Intelligence are born to be a little bit nutty. Being a little bit nutty is just an undesirable side effect of becoming a carbon-based Intelligence in the first place. If true, that means that all forms of carbon-based Intelligence are always a little bit nutty and that certainly could provide ample opportunities for all carbon-based forms of Intelligence to do themselves in before transitioning to a machine-based form of Intelligence. So always being a little bit nutty could certainly be the ultimate kill mechanism of my Null Result Hypothesis explanation for Fermi's Paradox. In other words, the QAnon Phenomenon could be the kill mechanism that has destroyed all of the other forms of carbon-based Intelligence that have arisen in the Milky Way galaxy over the past 10 billion years.

The Planet is Dying and Everybody is Pretending That it is Not
This problem stems from the fact that we all are a little bit nutty. Conservatives are pretending that climate change is either a hoax or something that is really not that important at all. And Liberals are pretending that climate change is a real threat that can be easily solved by solar and wind power alone. The problem is that neither side wants to look at the numbers because the numbers do not support their worldview. Instead, both sides desperately cling to their own tribal memes because that is what our Minds were evolved to do - store and preserve memes. For more on this, see Susan Blackmore's brilliant TED presentation at:

Memes and "temes"
https://www.ted.com/talks/susan_blackmore_on_memes_and_temes

Note that I consider Susan Blackmore's temes to really be technological artifacts that contain software. After all, a smartphone without software is simply a flake tool with a very dull edge. Also, take a look at A Brief History of Self-Replicating Information and The Bootstrapping Algorithm of Carbon-Based Life to see how all forms of carbon-based Intelligence are likely to be victims of the QAnon Phenomenon. The QAnon Phenomenon dramatically demonstrates how the scientific and mathematical elements of a carbon-based civilization could produce sufficient technology to destroy an entire planet and then put that technology into the hands of a nutty populace. And please remember that good intentions are not a substitute for good results. As Richard Feynman explained, "The most important thing is to not fool yourself because you are the easiest one to fool."

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Wednesday, May 19, 2021

The QAnon Phenomenon - Why Does Social Media Software Make People So Nutty?

It Doesn't. All people are naturally born to be a little bit nutty.

I just finished watching the six-part HBO documentary Q: Into the Storm, and it really helped to confirm my hunch that all people are naturally born to be a little bit nutty. In my last posting, Advanced AI Will Need Advanced Hardware, I proposed that the innate human ability to speak and understand languages, which Noam Chomsky first speculated about in 1957, was the hardware breakthrough in the human brain that allowed the memes to become the dominant form of self-replicating information on the planet. Again, in softwarephysics, it's all about the interactions of various forms of self-replicating information in action, with software now rapidly becoming the dominant form of self-replicating information on the planet. For more on that see A Brief History of Self-Replicating Information.

In my last posting, Advanced AI Will Need Advanced Hardware, I also noted that the early memes could certainly spread from human Mind to human Mind by simply imitating the actions of others. For example, you can easily teach someone how to make a flint flake tool by showing them how to do so. However, the ability to speak and understand a language greatly improved the ability of memes to spread from Mind to Mind and oral histories even allowed memes to pass down through the generations. Spoken languages then allowed for the rise of reading and writing which further enhanced the durability of memes. The rise of social media software has now further enhanced the replication of memes and the memes and software have now forged a very powerful parasitic/symbiotic relationship to promote their joint survival. The truly wacky memes like QAnon that we now find propagating in a viral manner on social media software certainly attest to this.

For more on how the memes are now the dominant form of self-replicating information on the Earth watch Susan Blackmore's brilliant TED presentation at:

Memes and "temes"
https://www.ted.com/talks/susan_blackmore_on_memes_and_temes

Note that I consider Susan Blackmore's temes to really be technological artifacts that contain software. After all, a smartphone without software is simply a flake tool with a very dull edge.

But it was the development of spoken languages, reading, writing, and now social media software that also greatly enhanced the ability of false memes to propagate. For example, as I pointed out, it is very easy to teach somebody how to make a flint flake tool by example. But how about teaching somebody how to levitate a heavy boulder by magic? Now a skilled magician can certainly demonstrate the levitation of a heavy boulder by magic. But could he also show you how to do the same? Not without also revealing how the trick was actually performed and that would soon put an end to the magic. However, someone could then tell others about the magical person with a cosmic force that can easily lift heavy boulders and trick many into believing that it was so. We certainly have seen that happen many times throughout all of human history. For example, in The Great War That Will Not End we saw how the inhabitants of the entire planet were able to fool themselves into fighting World War I for apparently no particular reason at all. In The Danger of Believing in Things, we saw that false memes tend to spread because many false memes are quite appealing. We all know that when in flake tool school, spreading the latest tribal gossip about a fellow student is much more fun than paying attention to the instructor.

But the main reason that false memes spread so easily stems from the fact that much of human thought is seriously deficient in rigor because it is largely based upon believing in things and therefore is non-critical in nature. It seems that as human beings we just tend to not question our own belief systems or the belief systems that are imposed upon us by the authorities we have grown up with. Instead, we tend to seek out people who validate our own belief systems and to just adapt as best we can to the belief systems that are imposed upon us. This failure in critical thinking arose primarily because our minds are infected with memes that are forms of self-replicating information bent on replicating at all costs, and as Susan Blackmore pointed out in The Meme Machine (1999), we are not so much thinking machines as we are copying machines. Susan Blackmore maintains that memetic-drive was responsible for creating our extremely large brains, and also our languages and cultures as well, in order to store and spread memes more effectively. So our minds evolved to believe in things, which many times is quite useful but sometimes is not. So the ability of parasitic false memes to quickly spread should just be seen as an undesirable side effect of having the ability to spread useful memes. The problem throughout human history has always been that it has never been easy to tell the difference.

One way to remove false memes from your political worldview is to use the Scientific Method that I outlined in How To Think Like A Scientist. Galileo pointed out that the truth is not afraid of scrutiny, the more you pound on the truth, the more you confirm its validity. False memes do not stand up well to close scrutiny as I described in The Danger of Believing in Things.

Figure 1 – Beware of the memes lurking within your Mind. Many of them are demonstrably false if you only take the time to disprove them. As Richard Feynman explained, "The most important thing is to not fool yourself because you are the easiest one to fool."

With that said, I was totally expecting that Q: Into the Storm would describe the many bizarre memes of the QAnon movement and its followers. But I was mostly wrong. Instead, Q: Into the Storm mainly focused on the typical human squabbles between the creator and Admins of the 8CHAN message board software that Q dropped his/her posts on, and on trying to figure out who Q actually was. This brought to mind my very own first experience with social media software back in the 1980s.

My First Experience With Social Media Software
In 1985, I was in the Applications Development group of the Amoco IT department that actually supported the software used by the Amoco IT department itself. I was mainly writing interactive software that ran on Amoco's VM/CMS Corporate Timeshare System. The VM/CMS Corporate Timeshare System was an early form of Cloud Computing running on Amoco's simulated "Internet" of 31 interconnected VM/CMS datacenters. This network of 31 VM/CMS datacenters allowed end-users to communicate with each other around the world using an office system that Amoco developed in the late 1970s at the Tulsa Research Center called PROFS (Professional Office System). PROFS had email, calendaring and document sharing software that IBM later sold as a product called OfficeVision. Amoco had 40,000 employees at the time and each employee had their own virtual machine with 1 MB of virtual memory and as much virtual disk space as they wanted. For example, my VM machine was called ZSCJ03, and I had 3 MB of virtual memory because I was a programmer. The Amoco Helpdesk could set up a new VM machine with 1 MB of virtual memory for a new employee while they were on the phone and allow the new VM to connect to all of the corporate software running on the Corporate Timeshare System. This was at a time when some users also had an Intel 80286 PC running at 6 MHz with 640 KB of memory running Microsoft-DOS and a 20 MB hard disk that cost about $1,600 at the time - about $3,800 in 2021 dollars! But most employees used their PCs to run IBM 3278 terminal emulation software to continue to connect to the VM/CMS Corporate Timeshare System to use PROFS and other VM/CMS corporate software.

One day, my boss called me into his office and explained that the Tulsa Research Center was using some home-grown software to help researchers collaborate. The software was called the Tulsa Research Bulletin Board - TRBB. It was a very simple application running on the Tulsa Research VM/CMS datacenter. There was a central TRBB VM machine with a "hot" READER. Users could SHARE TRBB from their personal VM machine and then see a bunch of "threads". Any researcher could start up a new thread from their personal VM machine and then have the TRBB software PUNCH the new thread to the hot READER of the TRBB VM machine. The TRBB software then would read in the file for the new thread and update some sequential index files too. Other researchers could then create "posts" for the new thread and also "replies" to existing "posts" on the thread. Sound familiar? Well in 1985 this was some pretty neat software. The TRBB software consisted of a number of small REXX programs and a large number of sequential files storing threads, posts and replies and sequential index files to tie them all together.
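The TRBB design is easy to picture in modern terms. Below is a minimal Python sketch of the same general idea: threads, posts and replies kept as simple sequential files that are tied together by an index file, with starting a new thread playing the role of PUNCHing a file to the hot READER of the TRBB VM machine. This is only a conceptual analogue under my own assumptions about the file layout; it is not the actual REXX code of the TRBB.

import json
from pathlib import Path
from datetime import datetime

class BulletinBoard:
    """A toy flat-file bulletin board in the spirit of the 1985 TRBB design:
    each thread is a sequential file of posts, and a single index file
    ties the threads together. (Hypothetical layout, not the real TRBB.)"""

    def __init__(self, root="bboard"):
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)
        self.index = self.root / "index.txt"      # sequential index of all threads
        self.index.touch(exist_ok=True)

    def start_thread(self, title, user):
        # Roughly what PUNCHing a new thread file to the hot READER accomplished.
        thread_id = sum(1 for _ in self.index.open()) + 1
        with self.index.open("a") as idx:
            idx.write(f"{thread_id}\t{title}\t{user}\n")
        (self.root / f"thread_{thread_id}.txt").touch()
        return thread_id

    def post(self, thread_id, user, text, reply_to=None):
        # Posts and replies are simply appended to the thread's sequential file.
        record = {"time": datetime.now().isoformat(), "user": user,
                  "text": text, "reply_to": reply_to}
        with (self.root / f"thread_{thread_id}.txt").open("a") as f:
            f.write(json.dumps(record) + "\n")

    def read_thread(self, thread_id):
        with (self.root / f"thread_{thread_id}.txt").open() as f:
            return [json.loads(line) for line in f]

# Example usage with made-up VM user IDs:
bb = BulletinBoard()
tid = bb.start_thread("VM/CMS Tips and Tricks", "ZSCJ03")
bb.post(tid, "ZSCJ03", "How do you get more virtual memory for a VM machine?")
bb.post(tid, "ZABC01", "Ask the Helpdesk to bump your directory entry.", reply_to=1)
print(bb.read_thread(tid))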

My boss then explained that Amoco IT Management wanted to create a similar bulletin board for the IT department and put it on the CTSVMD VM/CMS node of the Corporate Timeshare System that the IT department used to develop and run software. I was given the job to install the software on CTSVMD and run the bulletin board. So I made contact with the support group for the TRBB and had them PUNCH the REXX programs to a new VM service machine called BBOARD running on the CTSVMD VM/CMS node in the Corporate Timeshare System. Amoco IT Management also gave me a list of initial threads to start up BBOARD with, and I did so by using the BBOARD software from my ZSCJ03 VM machine on CTSVMD. An article was written for the Amoco Visions magazine that was delivered to all members of the IT Department each month. The article described the purpose of BBOARD and how to use BBOARD to help other IT workers to do their jobs better.

The BBOARD launch went quite smoothly and Amoco IT Management had high hopes that BBOARD would be as successful as the TRBB had been for the Tulsa Research Center. And initially, there was a flurry of BBOARD usage by the IT Department. But then the BBOARD usage dropped significantly. Meanwhile, I had a large number of other VM/CMS applications that I was supporting at the time, and I was also developing and promoting BSDE. BSDE was an early IDE that ran on VM/CMS and grew applications from embryos in a biological manner. For more on BSDE see the last part of Programming Biology in the Biological Computation Group of Microsoft Research.

The BBOARD software worked so well that it never went down. It just kept chugging along for many months on its own, and because BBOARD was so trouble-free, I really never paid much attention to it. Then one day my boss got a PROFS email from one of the other IT Department Managers complaining about BBOARD. This was nothing new because the Amoco IT Department was our only user community and he frequently received complaints about the software that our group was supporting. Usually, the complaints were about the software being down or malfunctioning in some way, but this PROFS email was different. This PROFS email was complaining about the content that was on certain threads on the BBOARD and not about the way that the BBOARD software itself was behaving. This was a rather new experience for us, so my boss had me check it out. So I went into BBOARD from my personal ZSCJ03 VM ID to take a look at the threads. Sure enough, I found the initial threads on the BBOARD that I had opened at the request of IT Management, but I also found a large number of really wacky threads that contained very strange posts and replies. Some of the threads were on hobbies like cooking and gardening. Others were on things like home and car repair and even national politics. Worst of all, some were even on the way that Amoco IT Management ran the IT Department! I was shocked by the blatantly extreme content in the posts and replies under those threads. I was also a bit frightened by the extreme content.

Now I ended up working for Amoco for about 21 years, and during those 21 years, I found that Amoco was a really great company to work for and was generally ahead of its time when it came to IT technology and general business practices too. But most corporations in the 1980s still operated using the corporate Command and Control Management Methodologies first developed by corporations back in the 1950s and Amoco was no exception. The Command and Control Management Methodologies of the 1950s were actually patterned after the military management theories of the 1940s that were used to win World War II, and since most American business managers in the 1950s were also veterans of World War II, it all made sense at the time. But by the 1980s, this meant that most American corporations were then saddled with the heavy-handed management structure and processes of the Stalinist Soviet Union! One of the downsides of the Command and Control Management Methodology is that nobody is allowed to make a mistake. That is because if you are truly doing your job properly, you are effectively controlling all things in the Universe that could possibly affect your job performance. That means that if anything goes wrong, it is your fault. So I was pretty upset at the time. After all, I was just a programmer supporting the BBOARD software. Now people were turning to me and expecting me to be responsible for the wacky content that others in the IT Department were posting on threads and responding to! I felt that I could argue that monitoring the content on the BBOARD was not my responsibility because I was just a programmer supporting the BBOARD software. But I knew that under the Command and Control Management Methodology, somebody had to be blamed and severely dealt with, and, unfortunately, I was the only support person for the BBOARD.

So I called my IT contacts back at the Tulsa Research Center and asked them if they had seen the same thing. They told me that they also had a lot of extreme threads on the TRBB but not to worry about it. They called the extreme content on such threads "flaming" and "flaming" was just seen as an unwanted byproduct of having free speech on the TRBB. So my boss and I decided to spin the "flaming" on the BBOARD as a good thing and not a bad thing to upper IT Management. We explained that it was a way for employees to blow off steam and also allowed IT Management to see what members in the IT Department were actually thinking about. After that, IT Management actually spent a great deal of time reading the extreme content on the BBOARD and did not interfere with the BBOARD at all. After all, everybody has more fun reading about the latest tribal gossip than actually doing their job.

Déjà vu All Over Again
The HBO documentary Q: Into the Storm brought back all of these old memories about message boards and corporate politics from the 1980s that I had not thought about for many years. But little did I know at the time that some mysterious guy known as Q would help to incite an insurrection against the United States of America in 2021 simply by posting similarly wacky posts on the 8CHAN message board. To me, the vague Q posts on 8CHAN seem very much like the vague prophecies published by Nostradamus (1503 - 1566) in his 1555 work The Prophecies.

Figure 2 – During the Rebellion of 2021 the Capitol Building of the United States of America was breached for the first time by domestic insurrectionists.

Figure 3 – The 2021 insurrectionists desecrated many symbols of American democracy.

Figure 4 – The QAnon Shaman and other insurrectionists managed to reach the floor of the Senate Chamber.

Figure 5 – The QAnon movement is a good example of the innate nuttiness of the human Mind.

Conclusion
Many Q followers assign to him the magical gift of political prophecy. Strangely, scientists actually can predict the future for simple linear systems like the movement of objects in the Solar System - see SpaceEngine - the Very Finest Available in 3-D Astronomical Simulation Software for more on that. But scientists also know that it is impossible to predict the highly nonlinear behaviors found in the political activities of the real world of human affairs - see Software Chaos for more on that. As I mentioned in many previous posts, I now only have confidence in science and mathematics. All other forms of human thought seem to be hopelessly flawed by confirmation bias. But even with all that, I am still amazed by the extent to which many in the United States now seem to have lost touch with the 18th century Enlightenment and the 17th century Scientific Revolution which brought forth the political value of evidence-based rational thought that made the United States possible.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, May 04, 2021

Advanced AI Will Need Advanced Hardware

In recent years, AI has made some great advances, primarily with Deep Learning and other Machine Learning algorithms operating on massive amounts of data. But the goal of attaining an advanced AI that could reach and then surpass human Intelligence still seems to be rather far off. However, we do know that given the proper hardware, it should be entirely possible to create an advanced AI that matches and then far surpasses human Intelligence because, as I pointed out in The Ghost in the Machine the Grand Illusion of Consciousness, all you need to create human Intelligence is a very large and complex network of coordinated switches like those found in the human brain. So we just need to build a network of coordinated switches of a similar size and complexity to do the job. But that might require some kind of hardware breakthrough similar to the invention of the transistor back in 1947 that made modern computers possible. To illustrate that point, let's review the hardware advances that have been made in switching technology over the decades.

It all started back in May of 1941 when Konrad Zuse first cranked up his Z3 computer. The Z3 was the world's first real computer and was built with 2400 electromechanical relays that were used to perform the switching operations that all computers use to store information and to process it. To build a computer, all you need is a large network of interconnected switches that have the ability to switch each other on and off in a coordinated manner. Switches can be in one of two states, either open (off) or closed (on), and we can use those two states to store the binary numbers of “0” or “1”. By using a number of switches teamed together in open (off) or closed (on) states, we can store even larger binary numbers, like “01100100” = 100. We can also group the switches into logic gates that perform logical operations. For example, in Figure 1 below we see an AND gate composed of two switches A and B. Both switch A and B must be closed in order for the light bulb to turn on. If either switch A or B is open, the light bulb will not light up.

Figure 1 – An AND gate can be simply formed from two switches. Both switches A and B must be closed, in a state of “1”, in order to turn the light bulb on.

Additional logic gates can be formed from other combinations of switches as shown in Figure 2 below. It takes about 2 - 8 switches to create each of the various logic gates shown below.

Figure 2 – Additional logic gates can be formed from other combinations of 2 – 8 switches.

Once you can store binary numbers with switches and perform logical operations upon them with logic gates, you can build a computer that performs calculations on numbers. To process text, like names and addresses, we simply associate each letter of the alphabet with a binary number, like in the ASCII code set where A = “01000001” and Z = “01011010”, and then process the associated binary numbers.
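To make the switch picture above a little more concrete, here is a minimal Python sketch that treats each switch as a 0 (open) or 1 (closed), builds the AND gate of Figure 1 and a few of the other gates of Figure 2 out of them, and then stores a binary number and some ASCII text along the lines just described. This is only an illustrative sketch of the idea, not anything to do with the Z3 itself.

# Each "switch" is just 0 (open/off) or 1 (closed/on).

def AND(a, b):  return a & b          # light only if both switches are closed
def OR(a, b):   return a | b          # light if either switch is closed
def NOT(a):     return 1 - a
def NAND(a, b): return NOT(AND(a, b)) # further gates built from combinations of switches
def XOR(a, b):  return OR(AND(a, NOT(b)), AND(NOT(a), b))

# A team of 8 switches stores an 8-bit binary number.
bits = "01100100"
print(int(bits, 2))                   # -> 100

# Text is handled by mapping letters to binary numbers, as in the ASCII code set.
print(format(ord("A"), "08b"))        # -> 01000001  (A = 65)
print(format(ord("Z"), "08b"))        # -> 01011010  (Z = 90)
print(chr(int("01000001", 2)))        # -> A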

Figure 3 – Konrad Zuse with a reconstructed Z3 in 1961


Figure 4 – Block diagram of the Z3 architecture

The electrical relays used by the Z3 were originally meant for switching telephone conversations. Closing one relay allowed current to flow to another relay’s coil, causing that relay to close as well.

Figure 5 – The Z3 was built using 2400 electrical relays, originally meant for switching telephone conversations.

Figure 6 – The electrical relays used by the Z3 for switching were very large, very slow and used a great deal of electricity which generated a great deal of waste heat.

Figure 7 – In the 1950s, the electrical relays were replaced with vacuum tubes that were also very large, used lots of electricity and generated lots of waste heat too, but the vacuum tubes were 100,000 times faster than relays.

Figure 8 – Vacuum tubes contain a hot negative cathode that glows red and boils off electrons. The electrons are attracted to the cold positive anode plate, but there is a grid electrode between the cathode and the anode plate. By changing the voltage on the grid, the vacuum tube can control the flow of electrons like the handle of a faucet. The grid voltage can be adjusted so that the electron flow is full blast, a trickle, or completely shut off, and that is how a vacuum tube can be used as a switch.

Figure 9 – In the 1960s, the vacuum tubes were replaced by discrete transistors. For example, a FET transistor consists of a source, gate and drain. When a positive voltage is applied to the gate, a current of electrons can flow from the source to the drain and the FET acts like a closed switch that is "on". When there is no positive voltage on the gate, no current can flow from the source to the drain, and the FET acts like an open switch that is "off".

Figure 10 – When there is no positive voltage on the gate, the FET transistor is switched off, and when there is a positive voltage on the gate the FET transistor is switched on. These two states can be used to store a binary "0" or "1", or can be used as a switch in a logic gate, just like an electrical relay or a vacuum tube.

Figure 11 – Above is a plumbing analogy that uses a faucet or valve handle to simulate the actions of the source, gate and drain of an FET transistor.

Figure 12 – In 1971, Intel introduced the 4-bit 4004 microprocessor chip with 2250 transistors on one chip. Modern CPU chips now have several billion transistors that can switch on and off 60 billion times per second which is about 60,000 times faster than a vacuum tube. You can fit many dozens of such chips into a single computer vacuum tube from the 1950s.

From the above, we see that over the decades the most significant breakthrough in switching technology was the development of the solid-state transistor which allowed switches to shrink to microscopic sizes using microscopic amounts of electricity.

Neuromorphic Chips May Be the Next Hardware Breakthrough For Advanced AI
Neuromorphic chips are chips that are designed to emulate the operation of the 100 billion neurons in the human brain in order to advance AI to the level of human Intelligence and beyond. As we saw from the history of switching technology above, power consumption and waste heat production have always been a problem. Neuromorphic chips address this problem by drastically reducing power consumption. For example, the human body at rest runs on about 100 watts of power with the human brain drawing around 20 watts of that power. Now compare the human brain at 20 watts to an advanced I9 Intel CPU chip that draws about 200 watts of power! The human brain is still much more powerful than an advanced I9 Intel CPU chip even though it only draws 10% of the power. As we saw in The Ghost in the Machine the Grand Illusion of Consciousness, the human Mind runs on 100 billion neurons with each neuron connected to at most 10,000 other neurons and it can do that all on 20 watts of power! The reason why an advanced I9 Intel CPU chip with billions of transistors needs 200 watts of power is that on average half of the transistors on the chip are "on" and consuming electrical energy. In fact, one of the major limitations in chip design is keeping the chip from melting under load. Neuromorphic chips, on the other hand, draw minuscule amounts of power. For example, the IBM TrueNorth neuromorphic chip first introduced in 2014 contains about 5.4 billion transistors which is about the same number of transistors in a modern Intel I9 processor, but the TrueNorth chip consumes just 73 milliwatts of power! An Intel I9 processor requires about 200 watts of power to run which is about 2,740 times as much power. Intel is also actively pursuing the building of neuromorphic chips with the introduction of the Intel Loihi chip in November 2017. But before proceeding further with how neuromorphic chips operate, let's review how the human brain operates since that is what the neuromorphic chips are trying to emulate.
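The power numbers quoted above are easy to check with a little arithmetic; the short Python sketch below simply reruns the figures given in this paragraph.

brain_watts = 20          # human brain at rest
body_watts = 100          # whole human body at rest
i9_watts = 200            # a high-end Intel i9 CPU under load
truenorth_watts = 0.073   # IBM TrueNorth neuromorphic chip (73 milliwatts)

print(brain_watts / i9_watts)        # 0.1   -> the brain draws 10% of the i9's power
print(i9_watts / truenorth_watts)    # ~2740 -> the i9 draws about 2,740 times TrueNorth's power
print(brain_watts / body_watts)      # 0.2   -> the brain uses about 20% of the body's power budget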

The Hardware of the Mind
The human brain is also composed of a huge number of coordinated switches called neurons. Like your computer, which contains many billions of transistor switches, your brain contains about 100 billion neuron switches. Each of the billions of transistor switches in your computer is connected to a small number of other switches that it can influence into switching on or off, while each of the 100 billion neuron switches in your brain can be connected to upwards of 10,000 other neuron switches and can also influence them into turning on or off.

All neurons have a body called the soma that is like all the other cells in the body, with a nucleus and all of the other organelles that are needed to keep the neuron alive and functioning. Like most electrical devices, neurons have an input side and an output side. On the input side of the neuron, one finds a large number of branching dendrites. On the output side of the neuron, we find one single and very long axon. The input dendrites of a neuron are very short and connect to a large number of output axons from other neurons. Although axons are only about a micron in diameter, they can be very long with a length of up to 3 feet. That’s like a one-inch garden hose that is 50 miles long! The single output axon has branching synapses along its length and it terminates with a large number of synapses. The output axon of a neuron can be connected to the input dendrites of perhaps 10,000 other neurons, forming a very complex network of connections.

Figure 13 – A neuron consists of a cell body or soma that has many input dendrites on one side and a very long output axon on the other side. Even though axons are only about 1 micron in diameter, they can be 3 feet long, like a one-inch garden hose that is 50 miles long! The axon of one neuron can be connected to up to 10,000 dendrites of other neurons.

Neurons are constantly receiving inputs from the axons of many other neurons via their input dendrites. These time-varying inputs can excite the neuron or inhibit the neuron and are all being constantly added together, or integrated, over time. When a sufficient number of exciting inputs are received, the neuron fires or switches "on". When it does so, it creates an electrical action potential that travels down the length of its axon to the input dendrites of other neurons. When the action potential finally reaches such a synapse, it causes the release of a number of organic molecules known as neurotransmitters, such as glutamate, acetylcholine, dopamine and serotonin. These neurotransmitters are created in the soma of the neuron and are transported down the length of the axon in small vesicles. The synaptic gaps between neurons are very small, allowing the released neurotransmitters from the axon to diffuse across the synaptic gap and plug into receptors on the receiving dendrite of another neuron. This causes the receiving neuron to either decrease or increase its membrane potential. If the membrane potential of the receiving neuron increases, it means the receiving neuron is being excited, and if the membrane potential of the receiving neuron decreases, it means that the receiving neuron is being inhibited. Idle neurons have a membrane potential of about -70 mV. This means that the voltage of the fluid on the inside of the neuron is 70 mV lower than the voltage of the fluid on the outside of the neuron, so it is like there is a little 70 mV battery stuck in the membrane of the neuron, with the negative terminal inside of the neuron, and the positive terminal on the outside of the neuron, making the fluid inside of the neuron 70 mV negative relative to the fluid on the outside of the neuron. This is accomplished by keeping the concentrations of charged ions, like Na+, K+ and Cl-, different between the fluids inside and outside of the neuron membrane. There are two ways to control the density of these ions within the neuron. The first is called passive transport. There are little protein molecules stuck in the cell membrane of the neuron that allow certain ions to pass freely through like a hole in a wall. When these protein holes open in the neuron’s membranes, the selected ion, perhaps K+, will start to go into and out of the neuron. However, if there are more K+ ions on the outside of the membrane than within the neuron, the net flow of K+ ions will be into the neuron thanks to the second law of thermodynamics, making the fluid within the neuron more positive. Passive transport requires very little energy. All you need is enough energy to change the shape of the embedded protein molecules in the neuron’s cell membrane to allow the free flow of charged ions to lower densities as required by the second law of thermodynamics.

The other way to get ions into or out of neurons is by the active transport of the ions with molecular pumps. With active transport, the neuron uses some energy to actively pump the charged ions against their electrochemical gradient, working against the direction that the second law of thermodynamics would otherwise favor. For example, neurons have a pump that can actively pump three Na+ ions out and take in two K+ ions at the same time, for a net outflow of one positively charged Na+ ion. By actively pumping out positively charged Na+ ions, the fluid inside of a neuron ends up having a net -70 mV potential because there are more positively charged ions on the outside of the neuron than within the neuron. When the neurotransmitters from other firing neurons come into contact with their corresponding receptors on the dendrites of the target neuron, it causes those receptors to open their passive Na+ channels. This allows the Na+ ions to flow into the neuron and temporarily change the membrane voltage by making the fluid inside the neuron more positive. If this voltage change is large enough, it will cause an action potential to be fired down the axon of the neuron. Figure 14 shows the basic ion flow that transmits this action potential down the length of the axon. The passing action potential pulse lasts for about 3 milliseconds and travels about 100 meters/sec, or about 200 miles/hour, down the neuron’s axon.

Figure 14 – When a neuron fires, an action potential is created by various ions moving across the membranes surrounding the axon. The pulse is about 3 milliseconds in duration and travels about 100 meters/sec, or about 200 miles/hour down the axon.

Figure 15 – At the synapse between the axon of one neuron and a dendrite of another neuron, the traveling action potential of the sending neuron’s axon releases neurotransmitters that cross the synaptic gap and which can excite or inhibit the firing of the receiving neuron.

Here is the general sequence of events:

1. The first step of the generation of an action potential is that the Na+ channels open, allowing a flood of Na+ ions into the neuron. This causes the membrane potential of the neuron to become positive, instead of the normal negative -70 mV voltage.

2. At some positive membrane potential of the neuron, the K+ channels open, allowing positive K+ ions to flow out of the neuron.

3. The Na+ channels then close, and this stops the inflow of positively charged Na+ ions. But since the K+ channels are still open, it allows the outflow of positively charged K+ ions, so that the membrane potential plunges in the negative direction again.

4. When the neuron membrane potential begins to reach its normal resting state of -70 mV, the K+ channels close.

5. Then the Na+/K+ pump of the neuron kicks in and starts to transport Na+ ions out of the neuron, and K+ ions back into the cell, until it reaches its normal -70 mV potential, and is ready for the next action potential pulse to pass by.

The action potential travels down the length of the axon as a voltage pulse. It does this by using the steps outlined above. As a section of the axon undergoes the above process, it raises the membrane potential of the neighboring section, triggering the same process there. This is like jerking a tightrope and watching a pulse travel down its length. The voltage pulse travels down the length of the axon until it reaches its synapses with the dendrites of other neurons along the way or finally terminates in synapses at the very end of the axon. An important thing to keep in mind about the action potential is that it is one-way and all-or-nothing. The action potential starts at the beginning of the axon and then goes down its length; it cannot go back the other way. Also, when a neuron fires, the action potential pulse has the same amplitude every time, regardless of the amount of excitation received from its dendritic inputs. Since the amplitude of the action potential of a neuron is always the same, the important thing about neurons is their firing rate. A weak stimulus to the neuron’s input dendrites will cause a low rate of firing, while a stronger stimulus will cause a higher rate of firing. Neurons can actually fire several hundred times per second when sufficiently stimulated by other neurons.
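The behavior just described, integrating inputs, firing an all-or-nothing pulse when a threshold is crossed, and encoding the strength of the stimulus in the firing rate, is captured by the classic leaky integrate-and-fire model. Below is a minimal Python sketch of such a model neuron; the threshold, leak and input values are purely illustrative numbers of my own choosing, not measured biological values.

def firing_rate(input_current, sim_ms=1000):
    """Toy leaky integrate-and-fire neuron: integrate the input, leak back
    toward the -70 mV resting potential, and fire and reset whenever the
    membrane potential crosses a threshold."""
    v_rest, v_threshold, v_reset = -70.0, -55.0, -70.0   # millivolts (illustrative values)
    leak, dt = 0.1, 1.0                                  # leak rate per ms and 1 ms time step
    v, spikes = v_rest, 0
    for _ in range(int(sim_ms / dt)):
        v += dt * (-leak * (v - v_rest) + input_current)  # integrate inputs while leaking toward rest
        if v >= v_threshold:                              # threshold crossed: the neuron fires
            spikes += 1                                   # an all-or-nothing pulse of fixed amplitude
            v = v_reset                                   # then the membrane potential resets
    return spikes                                         # firing rate in spikes per simulated second

# A weak stimulus produces a low firing rate and a stronger stimulus a higher one.
for current in (1.6, 2.0, 4.0):
    print(current, firing_rate(current), "spikes/second")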

When the traveling action potential pulse along a neuron’s axon finally reaches a synapse, it causes Ca++ channels of the axon to open. Positive Ca++ ions then rush in and cause neurotransmitters that are stored in vesicles to be released into the synapse and diffuse across the synapse to the dendrite of the receiving neuron. Some of the empty neurotransmitter vesicles eventually pick up or reuptake some of the neurotransmitters that have been released by receptors to be reused again when the next action potential arrives, while other empty vesicles return back to the neuron soma to be refilled with neurotransmitter molecules.

In Figure 16 below we see a synapse between the output axon of a sending neuron and the input dendrite of a receiving neuron in comparison to the source and drain of a FET transistor.

Figure 16 – The synapse between the output axon of one neuron and the dendrite of another neuron behaves very much like the source and drain of an FET transistor.

Now it might seem like your computer should be a lot smarter than you are on the face of it, and many people will even secretly admit to that fact. After all, the CPU chip in your computer has several billion transistor switches and if you have 8 GB of memory, that comes to another 64 billion transistors in its memory chips, so your computer is getting pretty close to the 100 billion neuron switches in your brain. But the transistors in your computer can switch on and off in about 10^-10 seconds, while the neurons in your brain can only fire on and off in about 10^-2 seconds. The signals in your computer also travel very close to the speed of light, 186,000 miles/second, while the action potentials of axons only travel at a pokey 200 miles/hour. And the chips in your computer are very small, so there is not much distance to cover at nearly the speed of light, while your poor brain is thousands of times larger. So what gives? Why aren’t we working for the computers, rather than the other way around? The answer lies in massively parallel processing. While the transistor switches in your computer are only connected to a few of the other transistor switches in your computer, each neuron in your brain has several thousand input connections and perhaps 10,000 output connections to other neurons in your brain, so when one neuron fires, it can affect 10,000 other neurons. When those 10,000 neurons fire, they can affect 100,000,000 neurons, and when those neurons fire, they can affect 1,000,000,000,000 neurons, which is more than the 100 billion neurons in your brain! So when a single neuron fires within your brain, it can theoretically affect every other neuron in your brain within three generations of neuron firings, in perhaps as little as 300 milliseconds. Also, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, all modern computers have essentially copied his original design by using a clock-driven CPU to process bits in registers that are separate from the computer memory - see Figure 4. Lots of energy and compute time is wasted moving the bits into and out of memory. The human brain, on the other hand, stores and processes data on the same network of neurons in parallel without the need to move data into and out of memory. That is why the human brain still has an edge on computers.
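The fan-out argument above is just multiplication, and the raw speed comparison is just division; the short Python sketch below reruns those numbers from the paragraph.

fan_out = 10_000                     # each neuron can excite roughly 10,000 others
neurons_in_brain = 100_000_000_000   # about 100 billion neurons

# Neurons reachable after 1, 2 and 3 generations of firings.
for generation in (1, 2, 3):
    print(generation, f"{fan_out ** generation:,}")
# 1 -> 10,000   2 -> 100,000,000   3 -> 1,000,000,000,000 (more than the whole brain)
print(fan_out ** 3 > neurons_in_brain)   # True

# Raw switching speed, on the other hand, hugely favors the transistor.
transistor_switch_s = 1e-10          # about 10^-10 seconds per switch
neuron_firing_s = 1e-2               # about 10^-2 seconds per firing
print(neuron_firing_s / transistor_switch_s)   # transistors switch ~100,000,000 times faster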

Neuromorphic Chips Emulate the Human Brain
To emulate the neurons in the human brain, neuromorphic chips use spiking neural networks (SNNs). Each SNN neuron can fire pulses independently of the other SNN neurons just like biological neurons can independently fire pulses down their axons. The pulses from one SNN neuron are then sent to many other SNN neurons and the integrated impacts of all the arriving pulses then change the electrical states of the receiving SNN neurons just as the dendrites of a biological neuron can receive the pulses from 10,000 other biological neurons. The SNN neurons then simulate human learning processes by dynamically remapping the synapses between the SNN neurons in response to the pulse stimuli that they receive.
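Below is a minimal Python sketch of the spiking-neural-network idea described above: a handful of integrate-and-fire neurons that exchange spikes over weighted synapses, with a very simple Hebbian-style rule that strengthens a synapse when the sending and receiving neurons fire together. This is only a toy illustration of the general SNN concept, not the actual neuron model or learning rule used by TrueNorth or Loihi.

import random

N = 20                      # a tiny network of 20 spiking neurons
threshold, leak = 1.0, 0.9  # firing threshold and per-step leak factor
potential = [0.0] * N
# Sparse random synapses: weights[i][j] is the synapse from neuron i to neuron j.
weights = [[random.uniform(0.0, 0.3) if i != j and random.random() < 0.2 else 0.0
            for j in range(N)] for i in range(N)]

spikes = []
for step in range(100):
    for i in range(3):                          # external input drives a few "sensory" neurons
        potential[i] += 0.5
    spikes = [i for i in range(N) if potential[i] >= threshold]
    for i in spikes:
        potential[i] = 0.0                      # reset after the all-or-nothing spike
        for j in range(N):
            potential[j] += weights[i][j]       # the spike excites downstream neurons
    for i in spikes:                            # simple Hebbian-style plasticity:
        for j in spikes:                        # strengthen synapses between co-firing neurons
            if weights[i][j] > 0.0:
                weights[i][j] = min(weights[i][j] + 0.01, 1.0)
    potential = [v * leak for v in potential]   # potentials leak back toward rest between steps

print("neurons that fired on the final step:", spikes)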

Figure 17 – The IBM TrueNorth neuromorphic chip.

Figure 18 – A logical depiction of the IBM TrueNorth neuromorphic chip.

Figure 19 – A block diagram of the Intel Loihi neuromorphic chip.

Both the IBM TrueNorth and the Intel Loihi use an SNN architecture. The Intel chip was introduced in November 2017 and consists of a 128-core design that is optimized for SNN algorithms and fabricated on 14nm process technology. The Loihi chip contains 130,000 neurons, each of which can send pulses to thousands of other neurons. Developers can access and manipulate chip resources with software using an API for the learning engine that is embedded in each of the 128 cores. Because the Loihi chip is optimized for SNNs, it performs highly accelerated learning in unstructured environments for systems that require autonomous operation and continuous learning with high performance and extremely low power consumption because the neurons operate independently and not by means of a system clock.

Speeding up Electronic Neuromorphic Chips with Photonics
People have been trying to build an optical computer for many years. An optical computer uses optical chips that rely on photonics instead of electronics to store and manipulate binary data. Photonic hardware elements process information by manipulating photons rather than electrons. In recent years, advances have been made in photonics to do things like improving the I/O between cloud servers in data centers via fiber optics. People are also making advances in photonics for quantum computers using the polarization of photons as the basis for storing and processing qubits. Photonic chips are really great for quickly processing massive amounts of data in parallel using very little energy. This is because there is very little energy loss compared to the ohmic heating loss found in electronic chips due to the motion of electron charge carriers bouncing off of atoms as they drift along. Photons also move much faster than the electric fields in transistors that cause electrons to slowly drift from negative to positive regions of the transistor. Photonic circuits can also run photons of different colors at the same time through the same hardware in a multithreaded manner. In fact, some researchers are looking to run photons of 64 different colors through the same hardware all at the same time! Thus photonic chips are great for performing the linear algebra operations on the huge matrices found in complex Deep Learning applications. For example, below is an interview with Nicholas Harris, the CEO of Lightmatter, describing the company's new Envise photonic chip, which can be used to accelerate the linear algebra processing of arrays in Deep Learning applications. Envise will become the very first commercially available photonic chip to do such processing.

Beating Moore's Law: This photonic computer is 10X faster than NVIDIA GPUs using 90% less energy
https://www.youtube.com/watch?v=t1R7ElXEyag

Here is the company's website:

Lightmatter
https://lightmatter.co/

Figure 20 – A circuit element on a photonic chip manipulates photons instead of electrons.
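The "linear algebra operations on huge matrices" mentioned above are essentially the matrix multiplications at the heart of Deep Learning. The Python sketch below shows one such workload, a single dense layer computed as a matrix-vector multiplication, and counts the multiply-accumulate operations involved. A photonic chip like Envise would carry out this multiplication in the optical domain; the sketch itself is just ordinary NumPy running on a CPU, and the layer sizes are arbitrary numbers of my own choosing.

import numpy as np

# One dense (fully connected) layer of a neural network is just y = W x + b,
# a matrix-vector multiplication, which is exactly the kind of workload a
# photonic accelerator is designed to perform in the optical domain.
inputs, outputs = 4096, 4096
W = np.random.randn(outputs, inputs).astype(np.float32)   # weight matrix
b = np.zeros(outputs, dtype=np.float32)                    # bias vector
x = np.random.randn(inputs).astype(np.float32)             # activations from the previous layer

y = W @ x + b                                               # the forward pass for one layer

# Each output element needs `inputs` multiply-accumulate operations.
macs = inputs * outputs
print(f"{macs:,} multiply-accumulates for a single {inputs}x{outputs} layer")
# Running many such layers for every inference is why massively parallel,
# low-energy matrix hardware (photonic or electronic) is so attractive.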

Since neuromorphic chips also need to process the huge arrays of spiking signals arriving at the dendrites of an SNN neuron, it only makes sense to include the advantages of photonics in the design of neuromorphic chips at some time in the future. Below is an excellent YouTube video explaining what photonic neuromorphic AI computing would look like:

Photonic Neuromorphic Computing: The Future of AI?
https://www.youtube.com/watch?v=hBFLeQlG2og

The DNA of Spoken Languages - The Hardware Breakthrough That Brought the Memes to Predominance
Softwarephysics maintains that it is all about self-replicating information in action. For more on that see A Brief History of Self-Replicating Information. According to softwarephysics, the memes are currently the predominant form of self-replicating information on the planet, with software rapidly in the process of replacing the memes as the predominant form of self-replicating information on the Earth. So the obvious question is how did the memes that spread from human Mind to human Mind come to predominance? Like advanced AI, did the memes also require a hardware breakthrough to come to predominance? My suggestion is that the memes indeed needed a DNA hardware breakthrough too. In 1957, linguist Noam Chomsky published the book Syntactic Structures, in which he proposed that human children are born with an innate ability to speak and understand languages. This ability to speak and understand languages must be encoded in the DNA of our genes. At the time, this was a highly controversial idea because it was thought that human children learned to speak languages simply by listening to their parents and others.

However, in recent years we have discovered that the ability to speak and understand languages is a uniquely human ability because of a few DNA mutations. The first of these mutations was found in the FOXP2 gene that is common to many vertebrates. FOXP2 is a regulatory gene that produces a protein that affects the level of proteins produced by many other genes. The FOXP2 gene is a very important gene that produces regulatory proteins in the brain, heart, lungs and digestive system. It plays an important role in mimicry in birds (such as birdsong) and echolocation in bats. FOXP2 is also required for the proper development of speech and language in humans. In humans, mutations in FOXP2 cause severe speech and language disorders. The FOXP2 gene was the very first gene isolated that seemed to be a prerequisite for the ability to speak and understand a language. For example, the FOXP2 protein of humans differs from that of the nonverbal chimpanzees by only two amino acids. Other similar genes must also be required for the ability to speak and understand languages, but FOXP2 certainly demonstrates how a few minor mutations to some existing genes brought forth the ability to speak and understand languages in humans. Now the science of memetics, described in Susan Blackmore's The Meme Machine (1999), maintains that it was the memetic drive of memes that produced the highly over-engineered human brain in order to store and transmit more and more memes of ever-increasing complexity. The memetic drive theory for the evolution of the very large human brain is very similar to the software drive for more and more CPU cycles and memory that drove the evolution of the modern computer hardware of today.

Memes can certainly spread from human Mind to human Mind by simply imitating the actions of others. For example, you could easily teach someone how to make a flint flake tool by showing them how to do so. However, the ability to speak and understand a language greatly improved the ability for memes to spread from Mind to Mind and oral histories even allowed memes to pass down through the generations. Spoken languages then allowed for the rise of reading and writing which further enhanced the durability of memes. The rise of social media software has now further enhanced the replication of memes and the memes and software have now forged a very powerful parasitic/symbiotic relationship to promote their joint survival. The truly wacky memes we now find propagating on social media software certainly attest to this. Thus, it might be that a few mutations to a number of regulatory genes were all that it took for the memes to come to predominance as the dominant form of self-replicating information on the planet.

The FOXP2 gene is an example of the theory of facilitated variation of Marc W. Kirschner and John C. Gerhart in action. The theory explains that the phenotype of an individual is determined by a number of 'constrained' and 'deconstrained' elements. The constrained elements are the "conserved core processes" of living things that have remained essentially unchanged for billions of years and are used by all living things to sustain the fundamental functions of carbon-based life, like the generation of proteins from the information found in DNA sequences by means of mRNA, tRNA and ribosomes, or the metabolism of carbohydrates via the Krebs cycle. The deconstrained elements are weakly-linked regulatory processes that can change the amount, location and timing of gene expression within a body, and which, therefore, can easily control which conserved core processes are run by a cell and when they are run. The theory of facilitated variation maintains that most favorable biological innovations arise from minor mutations to the deconstrained weakly-linked regulatory processes that control the conserved core processes of life, rather than from random mutations of the genotype in general, which would change the phenotype of an individual in a purely random direction. For more on that see Facilitated Variation and the Utilization of Reusable Code by Carbon-Based Life.

Everything Old is New Again
Using differently colored photons to run through billions of waveguides in a multithreaded manner on chips configured into vast networks of neurons that try to emulate the human brain might sound a bit far-fetched. But it brings to mind something I once read about Richard Feynman when he was working on the first atomic bomb at Los Alamos from 1943-1945. He led a group that figured out that they could run several differently colored card decks through a string of IBM unit record processing machines to perform different complex mathematical calculations simultaneously on the same hardware. For more on Richard Feynman see Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse. Below is the pertinent section extracted from a lecture given by Richard Feynman:

Los Alamos From Below: Reminiscences 1943-1945, by Richard Feynman
http://calteches.library.caltech.edu/34/3/FeynmanLosAlamos.htm

In the extract below, notice the Agile group dynamics at play in the very early days of the Information Revolution.

Well, another kind of problem I worked on was this. We had to do lots of calculations, and we did them on Marchant calculating machines. By the way, just to give you an idea of what Los Alamos was like: We had these Marchant computers - hand calculators with numbers. You push them, and they multiply, divide, add and so on, but not easy like they do now. They were mechanical gadgets, failing often, and they had to be sent back to the factory to be repaired. Pretty soon you were running out of machines. So a few of us started to take the covers off. (We weren't supposed to. The rules read: "You take the covers off, we cannot be responsible...") So we took the covers off and we got a nice series of lessons on how to fix them, and we got better and better at it as we got more and more elaborate repairs. When we got something too complicated, we sent it back to the factory, but we'd do the easy ones and kept the things going. I ended up doing all the computers and there was a guy in the machine shop who took care of typewriters.

Anyway, we decided that the big problem - which was to figure out exactly what happened during the bomb's explosion, so you can figure out exactly how much energy was released and so on - required much more calculating than we were capable of. A rather clever fellow by the name of Stanley Frankel realized that it could possibly be done on IBM machines. The IBM company had machines for business purposes, adding machines called tabulators for listing sums, and a multiplier that you put cards in and it would take two numbers from a card and multiply them. There were also collators and sorters and so on.


Figure 21 - Richard Feynman is describing the IBM Unit Record Processing machines from the 1940s and 1950s. The numerical data to be processed was first punched onto IBM punch cards with something like this IBM 029 keypunch machine from the 1960s.

Figure 22 - Each card could hold a maximum of 80 characters.

Figure 23 - The cards with numerical data were then bundled into card decks for processing.

The Unit Record Processing machines would then process hundreds of punch cards per minute by routing the punch cards from machine to machine in processing streams.

Figure 24 – The Unit Record Processing machines like this card sorter were programmed by physically rewiring a plugboard.

Figure 25 – The plugboard for a Unit Record Processing machine.

So Frankel figured out a nice program. If we got enough of these machines in a room, we could take the cards and put them through a cycle. Everybody who does numerical calculations now knows exactly what I'm talking about, but this was kind of a new thing then - mass production with machines. We had done things like this on adding machines. Usually you go one step across, doing everything yourself. But this was different - where you go first to the adder, then to the multiplier, then to the adder, and so on. So Frankel designed this system and ordered the machines from the IBM company, because we realized it was a good way of solving our problems.

We needed a man to repair the machines, to keep them going and everything. And the Army was always going to send this fellow they had, but he was always delayed. Now, we always were in a hurry. Everything we did, we tried to do as quickly as possible. In this particular case, we worked out all the numerical steps that the machines were supposed to do - multiply this, and then do this, and subtract that. Then we worked out the program, but we didn't have any machine to test it on. So we set up this room with girls in it. Each one had a Marchant. But she was the multiplier, and she was the adder, and this one cubed, and we had index cards, and all she did was cube this number and send it to the next one.

We went through our cycle this way until we got all the bugs out. Well, it turned out that the speed at which we were able to do it was a hell of a lot faster than the other way, where every single person did all the steps. We got speed with this system that was the predicted speed for the IBM machine. The only difference is that the IBM machines didn't get tired and could work three shifts. But the girls got tired after a while.

Anyway, we got the bugs out during this process, and finally the machines arrived, but not the repairman. These were some of the most complicated machines of the technology of those days, big things that came partially disassembled, with lots of wires and blueprints of what to do. We went down and we put them together, Stan Frankel and I and another fellow, and we had our troubles. Most of the trouble was the big shots coming in all the time and saying, "You're going to break something! "

We put them together, and sometimes they would work, and sometimes they were put together wrong and they didn't work. Finally I was working on some multiplier and I saw a bent part inside, but I was afraid to straighten it because it might snap off - and they were always telling us we were going to bust something irreversibly. When the repairman finally got there, he fixed the machines we hadn't got ready, and everything was going. But he had trouble with the one that I had had trouble with. So after three days he was still working on that one last machine.

I went down, I said, "Oh, I noticed that was bent."

He said, "Oh, of course. That's all there is to it!" Bend! It was all right. So that was it.

Well, Mr. Frankel, who started this program, began to suffer from the computer disease that anybody who works with computers now knows about. It's a very serious disease and it interferes completely with the work. The trouble with computers is you play with them. They are so wonderful. You have these switches - if it's an even number you do this, if it's an odd number you do that - and pretty soon you can do more and more elaborate things if you are clever enough, on one machine.

And so after a while the whole system broke down. Frankel wasn't paying any attention; he wasn't supervising anybody. The system was going very, very slowly - while he was sitting in a room figuring out how to make one tabulator automatically print arctangent X, and then it would start and it would print columns and then bitsi, bitsi, bitsi, and calculate the arc-tangent automatically by integrating as it went along and make a whole table in one operation.

Absolutely useless. We had tables of arc-tangents. But if you've ever worked with computers, you understand the disease -- the delight in being able to see how much you can do. But he got the disease for the first time, the poor fellow who invented the thing.

And so I was asked to stop working on the stuff I was doing in my group and go down and take over the IBM group, and I tried to avoid the disease. And, although they had done only three problems in nine months, I had a very good group.

The real trouble was that no one had ever told these fellows anything. The Army had selected them from all over the country for a thing called Special Engineer Detachment - clever boys from high school who had engineering ability. They sent them up to Los Alamos. They put them in barracks. And they would tell them nothing.

Then they came to work, and what they had to do was work on IBM machines - punching holes, numbers that they didn't understand. Nobody told them what it was. The thing was going very slowly. I said that the first thing there has to be is that these technical guys know what we're doing. Oppenheimer went and talked to the security and got special permission so I could give a nice lecture about what we were doing, and they were all excited: "We're fighting a war! We see what it is!" They knew what the numbers meant. If the pressure came out higher, that meant there was more energy released, and so on and so on. They knew what they were doing.

Complete transformation! They began to invent ways of doing it better. They improved the scheme. They worked at night. They didn't need supervising in the night; they didn't need anything. They understood everything; they invented several of the programs that we used - and so forth.

So my boys really came through, and all that had to be done was to tell them what it was, that's all. As a result, although it took them nine months to do three problems before, we did nine problems in three months, which is nearly ten times as fast.

But one of the secret ways we did our problems was this: The problems consisted of a bunch of cards that had to go through a cycle. First add, then multiply and so it went through the cycle of machines in this room, slowly, as it went around and around. So we figured a way to put a different colored set of cards through a cycle too, but out of phase. We'd do two or three problems at a time.

But this got us into another problem. Near the end of the war for instance, just before we had to make a test in Albuquerque, the question was: How much would be released? We had been calculating the release from various designs, but we hadn't computed for the specific design that was ultimately used. So Bob Christie came down and said, "We would like the results for how this thing is going to work in one month" - or some very short time, like three weeks.

I said, "It's impossible."

He said, "Look, you're putting out nearly two problems a month. It takes only two weeks per problem, or three weeks per problem."

I said, "I know. It really takes much longer to do the problem, but we're doing them in parallel. As they go through, it takes a long time and there's no way to make it go around faster."

So he went out, and I began to think. Is there a way to make it go around faster? What if we did nothing else on the machine, so there was nothing else interfering? I put a challenge to the boys on the blackboard - CAN WE DO IT? They all start yelling, "Yes, we'll work double shifts, we'll work overtime," - all this kind of thing. "We'll try it. We'll try it!"

And so the rule was: All other problems out. Only one problem and just concentrate on this one. So they started to work.

My wife died in Albuquerque, and I had to go down. I borrowed Fuchs' car. He was a friend of mine in the dormitory. He had an automobile. He was using the automobile to take the secrets away, you know, down to Santa Fe. He was the spy. I didn't know that. I borrowed his car to go to Albuquerque. The damn thing got three flat tires on the way. I came back from there, and I went into the room, because I was supposed to be supervising everything, but I couldn't do it for three days.

It was in this mess. There's white cards, there's blue cards, there's yellow cards, and I start to say, "You're not supposed to do more than one problem - only one problem!" They said, "Get out, get out, get out. Wait -- and we'll explain everything."

So I waited, and what happened was this. As the cards went through, sometimes the machine made a mistake, or they put a wrong number in. What we used to have to do when that happened was to go back and do it over again. But they noticed that a mistake made at some point in one cycle only affects the nearby numbers, the next cycle affects the nearby numbers, and so on. It works its way through the pack of cards. If you have 50 cards and you make a mistake at card number 39, it affects 37, 38, and 39. The next, card 36, 37, 38, 39, and 40. The next time it spreads like a disease.

So they found an error back a way, and they got an idea. They would only compute a small deck of 10 cards around the error. And because 10 cards could be put through the machine faster than the deck of 50 cards, they would go rapidly through with this other deck while they continued with the 50 cards with the disease spreading. But the other thing was computing faster, and they would seal it all up and correct it. OK? Very clever.

That was the way those guys worked, really hard, very clever, to get speed. There was no other way. If they had to stop to try to fix it, we'd have lost time. We couldn't have got it. That was what they were doing.

Of course, you know what happened while they were doing that. They found an error in the blue deck. And so they had a yellow deck with a little fewer cards; it was going around faster than the blue deck. Just when they are going crazy - because after they get this straightened out, they have to fix the white deck - the boss comes walking in.

"Leave us alone," they say. So I left them alone and everything came out. We solved the problem in time and that's the way it was.


The above should sound very familiar to most 21st century IT professionals.
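
For modern readers, Feynman's out-of-phase colored decks are essentially pipeline parallelism. Below is a minimal Python sketch of my own (illustrative names only, no particular framework implied) in which each "machine" is a worker thread pulling cards from one queue and feeding the next, while several color-coded decks move through the same chain of stages at once:

# A minimal sketch of Feynman's colored-deck trick as modern pipeline parallelism.
# Each "machine" is a worker thread pulling from one queue and feeding the next;
# several color-coded "decks" (problems) move through the same hardware at once,
# out of phase, just as the colored card decks did. All names are illustrative.

import queue
import threading

def add_stage(card):
    color, value = card
    return (color, value + 1)          # stand-in for the IBM adder

def multiply_stage(card):
    color, value = card
    return (color, value * 2)          # stand-in for the IBM multiplier

def worker(stage, inbox, outbox):
    while True:
        card = inbox.get()
        if card is None:               # shutdown signal passed down the line
            outbox.put(None)
            break
        outbox.put(stage(card))

# Wire the "machines" together with queues, like cards routed from machine to machine.
q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=worker, args=(add_stage, q_in, q_mid), daemon=True).start()
threading.Thread(target=worker, args=(multiply_stage, q_mid, q_out), daemon=True).start()

# Three differently "colored" decks share the same pipeline, out of phase.
for color in ("white", "blue", "yellow"):
    for value in range(3):
        q_in.put((color, value))
q_in.put(None)

results = []
while (card := q_out.get()) is not None:
    results.append(card)
print(results)

The error-patching trick the boys invented maps just as naturally onto today's practice of sending a small, fast correction job through the same pipeline while the big job keeps running, and then splicing the corrected results back into the larger run.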

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston