In recent years, AI has made some great advances, primarily with Deep Learning and other Machine Learning algorithms operating on massive amounts of data. But the goal of attaining an advanced AI that could reach and then surpass human Intelligence still seems rather far off. However, we do know that, given the proper hardware, it should be entirely possible to create an advanced AI that matches and then far surpasses human Intelligence because, as I pointed out in The Ghost in the Machine the Grand Illusion of Consciousness, all you need to create human Intelligence is a very large and complex network of coordinated switches like those found in the human brain. So we just need to build a network of coordinated switches that is large enough and complex enough to do the job. But that might require some kind of hardware breakthrough similar to the invention of the transistor back in 1947 that made modern computers possible. To make that case, let's review the hardware advances that have been made in switching technology over the decades.
It all started back in May of 1941 when Konrad Zuse first cranked up his Z3 computer. The Z3 was the world's first real computer and was built with 2400 electromechanical relays that were used to perform the switching operations that all computers use to store and process information. To build a computer, all you need is a large network of interconnected switches that have the ability to switch each other on and off in a coordinated manner. Switches can be in one of two states, either open (off) or closed (on), and we can use those two states to store the binary digits “0” or “1”. By using a number of switches teamed together in open (off) or closed (on) states, we can store larger binary numbers, like “01100100” = 100. We can also group the switches into logic gates that perform logical operations. For example, in Figure 1 below we see an AND gate composed of two switches A and B. Both switch A and B must be closed in order for the light bulb to turn on. If either switch A or B is open, the light bulb will not light up.
Figure 1 – An AND gate can be simply formed from two switches. Both switches A and B must be closed, in a state of “1”, in order to turn the light bulb on.
Additional logic gates can be formed from other combinations of switches as shown in Figure 2 below. It takes about 2 - 8 switches to create each of the various logic gates shown below.
Figure 2 – Additional logic gates can be formed from other combinations of 2 – 8 switches.
Once you can store binary numbers with switches and perform logical operations upon them with logic gates, you can build a computer that performs calculations on numbers. To process text, like names and addresses, we simply associate each letter of the alphabet with a binary number, as in the ASCII code set where A = “01000001” and Z = “01011010”, and then process the associated binary numbers.
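To make these ideas concrete, here is a tiny Python sketch of my own (an illustration only, not part of the historical discussion) that treats switch states as bits, wires two of them into the AND gate of Figure 1, and looks up the ASCII codes for the letters A and Z:

# Treat a string of switch states ('0' = open, '1' = closed) as a binary number
def bits_to_int(bits):
    return int(bits, 2)

# Figure 1 as code: the light bulb turns on only if both switches are closed
def and_gate(switch_a, switch_b):
    return bool(switch_a and switch_b)

print(bits_to_int("01100100"))            # prints 100
print(and_gate(1, 1), and_gate(1, 0))     # prints True False

# ASCII: each letter is just an agreed-upon 8-bit binary number
for letter in "AZ":
    print(letter, format(ord(letter), "08b"))   # A 01000001, Z 01011010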
Figure 3 – Konrad Zuse with a reconstructed Z3 in 1961
Figure 4 – Block diagram of the Z3 architecture
The electrical relays used by the Z3 were originally meant for switching telephone conversations. Closing one relay allowed current to flow to another relay’s coil, causing that relay to close as well.
Figure 5 – The Z3 was built using 2400 electrical relays, originally meant for switching telephone conversations.
Figure 6 – The electrical relays used by the Z3 for switching were very large, very slow and used a great deal of electricity, which generated a great deal of waste heat.
Figure 7 – In the 1950s, the electrical relays were replaced with vacuum tubes that were also very large, used lots of electricity and generated lots of waste heat too, but the vacuum tubes were 100,000 times faster than relays.
Figure 8 – Vacuum tubes contain a hot negative cathode that glows red and boils off electrons. The electrons are attracted to the cold positive anode plate, but there is a grid electrode between the cathode and the anode plate. By changing the voltage on the grid, the vacuum tube can control the flow of electrons like the handle of a faucet. The grid voltage can be adjusted so that the electron flow is full blast, a trickle, or completely shut off, and that is how a vacuum tube can be used as a switch.
Figure 9 – In the 1960s, the vacuum tubes were replaced by discrete transistors. For example, an FET transistor consists of a source, a gate and a drain. When a positive voltage is applied to the gate, a current of electrons can flow from the source to the drain and the FET acts like a closed switch that is "on". When there is no positive voltage on the gate, no current can flow from the source to the drain, and the FET acts like an open switch that is "off".
Figure 10 – When there is no positive voltage on the gate, the FET transistor is switched off, and when there is a positive voltage on the gate the FET transistor is switched on. These two states can be used to store a binary "0" or "1", or can be used as a switch in a logic gate, just like an electrical relay or a vacuum tube.
Figure 11 – Above is a plumbing analogy that uses a faucet or valve handle to simulate the actions of the source, gate and drain of an FET transistor.
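Since each FET behaves like a voltage-controlled switch, we can sketch in Python how the logic gates of Figure 2 emerge from combinations of such switches. This is only a toy model assuming idealized transistors (real FETs have thresholds, leakage and analog behavior); it simply shows how a NAND gate built from switches can then be combined into the other gates:

# Toy model: an idealized FET conducts (switch closed) only when its gate is driven high
def fet_on(gate_high):
    return bool(gate_high)

# A NAND gate can be sketched as two series FET switches pulling the output low
# (a real CMOS NAND also has a pull-up pair); every other gate can be composed from NANDs
def nand(a, b):
    return not (fet_on(a) and fet_on(b))

def not_gate(a):    return nand(a, a)
def and_gate(a, b): return not_gate(nand(a, b))
def or_gate(a, b):  return nand(not_gate(a), not_gate(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", and_gate(a, b), "OR:", or_gate(a, b), "NAND:", nand(a, b))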
Figure 12 – In 1971, Intel introduced the 4-bit 4004 microprocessor chip with 2250 transistors on one chip. Modern CPU chips now have several billion transistors that can switch on and off 60 billion times per second, which is about 60,000 times faster than a vacuum tube. Many dozens of such chips could fit inside a single vacuum tube from a 1950s computer.
From the above, we see that over the decades the most significant breakthrough in switching technology was the development of the solid-state transistor which allowed switches to shrink to microscopic sizes using microscopic amounts of electricity.
Neuromorphic Chips May Be the Next Hardware Breakthrough For Advanced AI
Neuromorphic chips are chips that are designed to emulate the operation of the 100 billion neurons in the human brain in order to advance AI to the level of human Intelligence and beyond. As we saw from the history of switching technology above, power consumption and waste heat production have always been a problem. Neuromorphic chips address this problem by drastically reducing power consumption. For example, the human body at rest runs on about 100 watts of power, with the human brain drawing around 20 watts of that power. Now compare the human brain at 20 watts to an advanced Intel i9 CPU chip that draws about 200 watts of power! The human brain is still much more powerful than an advanced Intel i9 CPU chip even though it draws only 10% of the power. As we saw in The Ghost in the Machine the Grand Illusion of Consciousness, the human Mind runs on 100 billion neurons, with each neuron connected to as many as 10,000 other neurons, and it can do all of that on 20 watts of power! The reason why an advanced Intel i9 CPU chip with billions of transistors needs 200 watts of power is that, on average, half of the transistors on the chip are "on" and consuming electrical energy. In fact, one of the major limitations in chip design is keeping the chip from melting under load. Neuromorphic chips, on the other hand, draw minuscule amounts of power. For example, the IBM TrueNorth neuromorphic chip, first introduced in 2014, contains about 5.4 billion transistors, which is about the same number of transistors as a modern Intel i9 processor, but the TrueNorth chip consumes just 73 milliwatts of power! An Intel i9 processor requires about 200 watts of power to run, which is about 2,740 times as much power. Intel is also actively pursuing the building of neuromorphic chips with the introduction of the Intel Loihi chip in November 2017. But before proceeding further with how neuromorphic chips operate, let's review how the human brain operates since that is what the neuromorphic chips are trying to emulate.
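As a quick aside, the power arithmetic quoted above is easy to verify in a couple of lines of Python (the wattages are the rough figures from this paragraph):

brain_watts     = 20       # human brain at rest
cpu_watts       = 200      # high-end Intel i9 under load (approximate)
truenorth_watts = 0.073    # IBM TrueNorth, 73 milliwatts

print(f"Brain vs. CPU power:     {brain_watts / cpu_watts:.0%}")        # 10%
print(f"CPU vs. TrueNorth power: {cpu_watts / truenorth_watts:,.0f}x")  # about 2,740x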
The Hardware of the Mind
The human brain is also composed of a huge number of coordinated switches called neurons. Like your computer, which contains many billions of transistor switches, your brain contains about 100 billion of these neuron switches. Each of the billions of transistor switches in your computer is connected to a small number of other switches that it can influence into switching on or off, while each of the 100 billion neuron switches in your brain can be connected to upwards of 10,000 other neuron switches and can likewise influence them into turning on or off.
All neurons have a body called the soma that is like all the other cells in the body, with a nucleus and all of the other organelles that are needed to keep the neuron alive and functioning. Like most electrical devices, neurons have an input side and an output side. On the input side of the neuron, one finds a large number of branching dendrites. On the output side of the neuron, we find a single, very long axon. The input dendrites of a neuron are very short and connect to a large number of output axons from other neurons. Although axons are only about a micron in diameter, they can be very long, with lengths of up to 3 feet. Scaled up, that is like a one-inch garden hose that is about 14 miles long! The single output axon has branching synapses along its length and it terminates with a large number of synapses. The output axon of a neuron can be connected to the input dendrites of perhaps 10,000 other neurons, forming a very complex network of connections.
Figure 13 – A neuron consists of a cell body or soma that has many input dendrites on one side and a very long output axon on the other side. Even though axons are only about 1 micron in diameter, they can be 3 feet long, like a one-inch garden hose that is about 14 miles long! The axon of one neuron can be connected to up to 10,000 dendrites of other neurons.
Neurons are constantly receiving inputs from the axons of many other neurons via their input dendrites. These time-varying inputs can excite or inhibit the neuron and are all constantly added together, or integrated, over time. When a sufficient number of exciting inputs are received, the neuron fires or switches "on". When it does so, it creates an electrical action potential that travels down the length of its axon toward the input dendrites of other neurons. When the action potential finally reaches a synapse, it causes the release of a number of organic molecules known as neurotransmitters, such as glutamate, acetylcholine, dopamine and serotonin. These neurotransmitters are created in the soma of the neuron and are transported down the length of the axon in small vesicles. The synaptic gaps between neurons are very small, allowing the released neurotransmitters from the axon to diffuse across the synaptic gap and plug into receptors on the receiving dendrite of another neuron. This causes the receiving neuron to either increase or decrease its membrane potential. If the membrane potential of the receiving neuron increases, the receiving neuron is being excited; if it decreases, the receiving neuron is being inhibited.

Idle neurons have a membrane potential of about -70 mV. This means that the voltage of the fluid on the inside of the neuron is 70 mV lower than the voltage of the fluid on the outside, so it is as if there were a little 70 mV battery stuck in the membrane of the neuron, with the negative terminal inside the neuron and the positive terminal outside, making the fluid inside the neuron 70 mV negative relative to the fluid outside. This is accomplished by keeping the concentrations of charged ions, like Na+, K+ and Cl-, different between the fluids inside and outside of the neuron membrane. There are two ways to control the density of these ions within the neuron. The first is called passive transport. Little protein molecules stuck in the cell membrane of the neuron allow certain ions to pass freely through, like a hole in a wall. When these protein channels open in the neuron’s membrane, the selected ion, perhaps K+, will start to move into and out of the neuron. If there are more K+ ions on the outside of the membrane than within the neuron, the net flow of K+ ions will be into the neuron, thanks to the second law of thermodynamics, making the fluid within the neuron more positive. Passive transport requires very little energy. All you need is enough energy to change the shape of the embedded protein molecules in the neuron’s cell membrane so that the charged ions can flow freely toward lower concentrations, as the second law of thermodynamics demands.
The other way to get ions into or out of neurons is by the active transport of the ions with molecular pumps. With active transport, the neuron expends energy to pump charged ions against their electrochemical gradient, in the direction the second law of thermodynamics would not take them on its own. For example, neurons have a pump that actively pumps three Na+ ions out while taking in two K+ ions at the same time, for a net outflow of one positive charge per cycle. By actively pumping out positively charged Na+ ions, the fluid inside of a neuron ends up with a net -70 mV potential because there are more positively charged ions on the outside of the neuron than within it. When the neurotransmitters from other firing neurons come into contact with their corresponding receptors on the dendrites of the target neuron, those receptors open their passive Na+ channels. This allows Na+ ions to flow into the neuron and temporarily change the membrane voltage by making the fluid inside the neuron more positive. If this voltage change is large enough, it will cause an action potential to be fired down the axon of the neuron. Figure 14 shows the basic ion flow that transmits this action potential down the length of the axon. The passing action potential pulse lasts for about 3 milliseconds and travels at about 100 meters/sec, or about 200 miles/hour, down the neuron’s axon.
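To see how a concentration difference alone can produce a voltage of this size, here is a short Python calculation using the Nernst equation, a standard electrophysiology formula that is not mentioned above; the ion concentrations are typical textbook values that I have assumed for illustration:

import math
R = 8.314       # gas constant, J/(mol*K)
T = 310.0       # body temperature in Kelvin (37 C)
F = 96485.0     # Faraday constant, C/mol
z = +1          # charge of a K+ ion
K_outside = 5.0     # mM, assumed extracellular K+ concentration
K_inside  = 140.0   # mM, assumed intracellular K+ concentration
# Nernst equation: the membrane voltage at which diffusion and electrical forces balance
E_K = (R * T) / (z * F) * math.log(K_outside / K_inside)
print(f"K+ equilibrium potential: {E_K * 1000:.0f} mV")   # roughly -89 mV
# The resting potential of about -70 mV sits above this value because the
# membrane is also slightly permeable to other ions, such as Na+.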
Figure 14 – When a neuron fires, an action potential is created by various ions moving across the membranes surrounding the axon. The pulse is about 3 milliseconds in duration and travels about 100 meters/sec, or about 200 miles/hour down the axon.
Figure 15 – At the synapse between the axon of one neuron and a dendrite of another neuron, the traveling action potential of the sending neuron’s axon releases neurotransmitters that cross the synaptic gap and which can excite or inhibit the firing of the receiving neuron.
Here is the general sequence of events:
1. The first step of the generation of an action potential is that the Na+ channels open, allowing a flood of Na+ ions into the neuron. This causes the membrane potential of the neuron to become positive, instead of the normal negative -70 mV voltage.
2. At some positive membrane potential of the neuron, the K+ channels open, allowing positive K+ ions to flow out of the neuron.
3. The Na+ channels then close, and this stops the inflow of positively charged Na+ ions. But since the K+ channels are still open, it allows the outflow of positively charged K+ ions, so that the membrane potential plunges in the negative direction again.
4. When the neuron membrane potential begins to reach its normal resting state of -70 mV, the K+ channels close.
5. Then the Na+/K+ pump of the neuron kicks in and starts to transport Na+ ions out of the neuron, and K+ ions back into the cell, until it reaches its normal -70 mV potential, and is ready for the next action potential pulse to pass by.
The action potential travels down the length of the axon as a voltage pulse. It does this by using the steps outlined above. As a section of the axon undergoes the above process, it increases the membrane potential of the neighboring section and causes it to rise as well. This is like jerking a tightrope and watching a pulse travel down its length. The voltage pulse travels down the length of the axon until it reaches its synapses with the dendrites of other neurons along the way or finally terminates in synapses at the very end of the axon. An important thing to keep in mind about the action potential is that it is one way, and all or nothing. The action potential starts at the beginning of the axon and then goes down its length; it cannot go back the other way. Also, when a neuron fires the action potential pulse has the same amplitude every time, regardless of the amount of excitation received from its dendritic inputs. Since the amplitude of the action potential of a neuron is always the same, the important thing about neurons is their firing rate. A weak stimulus to the neuron’s input dendrites will cause a low rate of firing, while a stronger stimulus will cause a higher rate of firing of the neuron. Neurons can actually fire several hundred times per second when sufficiently stimulated by other neurons.
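The relationship between stimulus strength and firing rate can be illustrated with a minimal "leaky integrate-and-fire" simulation in Python. This is a standard simplification used in computational neuroscience, not a biophysical model of the ion channels described above, and the input strengths are arbitrary numbers chosen for illustration:

def firing_rate(input_drive, sim_ms=1000, dt=0.1):
    v_rest, v_threshold, v_reset = -70.0, -55.0, -70.0   # membrane voltages in mV
    tau = 10.0                                           # membrane time constant in ms
    v, spikes = v_rest, 0
    for _ in range(int(sim_ms / dt)):
        v += ((-(v - v_rest) + input_drive) / tau) * dt  # leak toward rest plus steady input
        if v >= v_threshold:                             # threshold crossed: fire and reset
            spikes += 1
            v = v_reset
    return spikes                                        # spikes per second, since sim_ms = 1000

for drive in (16, 20, 40):                               # weak, moderate and strong stimuli
    print(f"input drive {drive:>2}: about {firing_rate(drive)} spikes/sec")

A weak drive produces only a few dozen spikes per second, while a strong drive pushes the model neuron toward the several hundred firings per second mentioned above, even though every individual spike is identical.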
When the traveling action potential pulse along a neuron’s axon finally reaches a synapse, it causes Ca++ channels of the axon to open. Positive Ca++ ions then rush in and cause neurotransmitters that are stored in vesicles to be released into the synapse and diffuse across the synapse to the dendrite of the receiving neuron. Some of the empty neurotransmitter vesicles eventually pick up, or reuptake, some of the neurotransmitters that have been released into the synaptic gap, so that they can be reused when the next action potential arrives, while other empty vesicles return to the neuron soma to be refilled with neurotransmitter molecules.
In Figure 16 below we see a synapse between the output axon of a sending neuron and the input dendrite of a receiving neuron, compared to the source and drain of an FET transistor.
Figure 16 – The synapse between the output axon of one neuron and the dendrite of another neuron behaves very much like the source and drain of an FET transistor.
Now it might seem like your computer should be a lot smarter than you are on the face of it, and many people will even secretly admit to that fact. After all, the CPU chip in your computer has several billion transistor switches and if you have 8 GB of memory, that comes to another 64 billion transistors in its memory chips, so your computer is getting pretty close to the 100 billion neuron switches in your brain. But the transistors in your computer can switch on and off in about 10⁻¹⁰ seconds, while the neurons in your brain can only fire on and off in about 10⁻² seconds. The signals in your computer also travel very close to the speed of light, 186,000 miles/second, while the action potentials of axons only travel at a pokey 200 miles/hour. And the chips in your computer are very small, so there is not much distance to cover at nearly the speed of light, while your poor brain is thousands of times larger. So what gives? Why aren’t we working for the computers, rather than the other way around? The answer lies in massively parallel processing. While the transistor switches in your computer are only connected to a few of the other transistor switches in your computer, each neuron in your brain has several thousand input connections and perhaps 10,000 output connections to other neurons in your brain, so when one neuron fires, it can affect 10,000 other neurons. When those 10,000 neurons fire, they can affect 100,000,000 neurons, and when those neurons fire, they can affect 1,000,000,000,000 neurons, which is more than the 100 billion neurons in your brain! So when a single neuron fires within your brain, it can theoretically affect every other neuron in your brain within three generations of neuron firings, in perhaps as little as 300 milliseconds. Also, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, all modern computers have essentially copied his original design by using a clock-driven CPU to process bits in registers that are separate from the computer memory - see Figure 4. Lots of energy and compute time is wasted moving the bits into and out of memory. The human brain, on the other hand, stores and processes data on the same network of neurons in parallel without the need to move data into and out of memory. That is why the human brain still has an edge on computers.
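The fan-out arithmetic in that last point is easy to check:

fan_out = 10_000                      # output connections per neuron
neurons_in_brain = 100_000_000_000    # about 100 billion neurons
reached = 1
for generation in range(1, 4):
    reached *= fan_out
    print(f"generation {generation}: up to {reached:,} neurons reached")
print("more than the whole brain?", reached > neurons_in_brain)   # True: 10^12 > 10^11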
Neuromorphic Chips Emulate the Human Brain
To emulate the neurons in the human brain, neuromorphic chips use spiking neural networks (SNNs). Each SNN neuron can fire pulses independently of the other SNN neurons just like biological neurons can independently fire pulses down their axons. The pulses from one SNN neuron are then sent to many other SNN neurons and the integrated impacts of all the arriving pulses then change the electrical states of the receiving SNN neurons just as the dendrites of a biological neuron can receive the pulses from 10,000 other biological neurons. The SNN neurons then simulate human learning processes by dynamically remapping the synapses between the SNN neurons in response to the pulse stimuli that they receive.
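Here is a minimal Python sketch of the idea, purely as an illustration; it is not how TrueNorth or Loihi are actually implemented. Each model neuron integrates incoming pulses, fires when it crosses a threshold, and strengthens the synapses that contributed to the firing, a crude stand-in for the dynamic remapping described above:

import random

class SpikingNeuron:
    def __init__(self, n_inputs):
        self.weights = [random.uniform(0.1, 0.5) for _ in range(n_inputs)]   # synaptic strengths
        self.potential = 0.0
        self.threshold = 1.0
        self.leak = 0.9            # the potential decays a bit every time step

    def step(self, input_spikes):
        # integrate the weighted incoming spikes on top of the leaky potential
        self.potential = self.potential * self.leak + sum(
            w for w, s in zip(self.weights, input_spikes) if s)
        if self.potential >= self.threshold:
            self.potential = 0.0
            # crude Hebbian rule: strengthen the synapses that just helped this neuron fire
            self.weights = [min(w + 0.05, 1.0) if s else w
                            for w, s in zip(self.weights, input_spikes)]
            return 1               # this neuron emits a spike
        return 0

layer = [SpikingNeuron(n_inputs=4) for _ in range(3)]
for t in range(20):
    input_spikes = [random.random() < 0.3 for _ in range(4)]    # a random input spike train
    output_spikes = [neuron.step(input_spikes) for neuron in layer]
    print(t, [int(s) for s in input_spikes], "->", output_spikes)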
Figure 17 – The IBM TrueNorth neuromorphic chip.
Figure 18 – A logical depiction of the IBM TrueNorth neuromorphic chip.
Figure 19 – A block diagram of the Intel Loihi neuromorphic chip.
Both the IBM TrueNorth and the Intel Loihi use an SNN architecture. The Intel chip was introduced in November 2017 and consists of a 128-core design that is optimized for SNN algorithms and fabricated on 14 nm process technology. The Loihi chip contains 130,000 neurons, each of which can send pulses to thousands of other neurons. Developers can access and manipulate chip resources in software through an API for the learning engine that is embedded in each of the 128 cores. Because the Loihi chip is optimized for SNNs, it can perform highly accelerated learning in unstructured environments for systems that require autonomous operation and continuous learning, and it can do so with high performance and extremely low power consumption because its neurons operate independently rather than being driven by a system clock.
Speeding up Electronic Neuromorphic Chips with Photonics
People have been trying to build an optical computer for many years. An optical computer uses optical chips that rely on photonics instead of electronics to store and manipulate binary data. Photonic hardware elements process information by manipulating photons rather than electrons. In recent years, advances have been made in photonics to do things like improving the I/O between cloud servers in data centers via fiber optics. People are also making advances in photonics for quantum computers, using the polarization of photons as the basis for storing and processing qubits. Photonic chips are really great for quickly processing massive amounts of data in parallel using very little energy. This is because there is very little energy loss compared to the ohmic heating loss found in electronic chips, which comes from electron charge carriers bouncing off of atoms as they drift along. Photons also move much faster than the electric fields in transistors that cause electrons to slowly drift from negative to positive regions of the transistor. Photonic circuits can also run photons of different colors at the same time through the same hardware in a multithreaded manner. In fact, some researchers are looking to run photons of 64 different colors through the same hardware all at the same time! Thus photonic chips are great for performing the linear algebra operations on the huge matrices found in complex Deep Learning applications. For example, below is an interview with Nicholas Harris, the CEO of Lightmatter, describing the company's new Envise photonic chip, which can be used to accelerate the linear algebra processing of arrays in Deep Learning applications. Envise will become the very first commercially available photonic chip to do such processing.
Beating Moore's Law: This photonic computer is 10X faster than NVIDIA GPUs using 90% less energy
https://www.youtube.com/watch?v=t1R7ElXEyag
Here is the company's website:
Lightmatter
https://lightmatter.co/
Figure 20 – A circuit element on a photonic chip manipulates photons instead of electrons.
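The Deep Learning workload that a photonic accelerator like Envise targets is, at bottom, just very large matrix multiplication. A short NumPy sketch of a single dense layer shows the kind of arithmetic involved (the layer sizes are arbitrary numbers chosen for illustration):

import numpy as np
batch, n_in, n_out = 64, 4096, 4096
activations = np.random.randn(batch, n_in).astype(np.float32)   # a batch of input vectors
weights     = np.random.randn(n_in, n_out).astype(np.float32)   # one layer's weight matrix
outputs = activations @ weights                                  # the layer's forward pass
# Each such layer costs roughly 2 * batch * n_in * n_out floating-point operations;
# a photonic chip performs these multiply-accumulates with light instead of electrons.
print(f"about {2 * batch * n_in * n_out / 1e9:.1f} GFLOPs per forward pass")   # about 2.1 GFLOPs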
Since neuromorphic chips also need to process the huge arrays of spiking signals arriving at the dendrites of an SNN neuron, it only makes sense to include the advantages of photonics in the design of neuromorphic chips at some time in the future. Below is an excellent YouTube video explaining what photonic neuromorphic AI computing would look like:
Photonic Neuromorphic Computing: The Future of AI?
https://www.youtube.com/watch?v=hBFLeQlG2og
The DNA of Spoken Languages - The Hardware Breakthrough That Brought the Memes to Predominance
Softwarephysics maintains that it is all about self-replicating information in action. For more on that see A Brief History of Self-Replicating Information. According to softwarephysics, the memes are currently the predominant form of self-replicating information on the planet, with software now rapidly replacing the memes in that role. So the obvious question is: how did the memes that spread from human Mind to human Mind come to predominance? Like advanced AI, did the memes also require a hardware breakthrough? My suggestion is that the memes indeed needed a DNA hardware breakthrough too. In 1957, linguist Noam Chomsky published the book Syntactic Structures, in which he proposed that human children are born with an innate ability to speak and understand languages. This ability to speak and understand languages must be encoded in the DNA of our genes. At the time, this was a highly controversial idea because it was thought that human children learned to speak languages by simply listening to their parents and others.
However, in recent years we have discovered that the ability to speak and understand languages is a uniquely human ability because of a few DNA mutations. The first of these mutations was found in the FOXP2 gene that is common to many vertebrates. FOXP2 is a regulatory gene that produces a protein that affects the level of proteins produced by many other genes. The FOXP2 gene is a very important gene that produces regulatory proteins in the brain, heart, lungs and digestive system. It plays an important role in mimicry in birds (such as birdsong) and echolocation in bats. FOXP2 is also required for the proper development of speech and language in humans. In humans, mutations in FOXP2 cause severe speech and language disorders. The FOXP2 gene was the very first gene isolated that seemed to be a prerequisite for the ability to speak and understand a language. For example, the FOXP2 gene of humans differs from that of the nonverbal chimpanzees by only two base pairs of DNA. Other similar genes must also be required for the ability to speak and understand languages, but FOXP2 certainly demonstrates how a few minor mutations to some existing genes brought forth the ability to speak and understand languages in humans. Now the science of memetics, described in Susan Blackmore's The Meme Machine (1999), maintains that it was the memetic drive of memes that produced the highly over-engineered human brain in order to store and transmit more and more memes of ever-increasing complexity. The memetic drive theory for the evolution of the very large human brain is very similar to the software drive for more and more CPU cycles and memory that drove the evolution of modern computer hardware.
Memes can certainly spread from human Mind to human Mind by simple imitation of the actions of others. For example, you could easily teach someone how to make a flint flake tool by showing them how to do so. However, the ability to speak and understand a language greatly improved the ability of memes to spread from Mind to Mind, and oral histories even allowed memes to pass down through the generations. Spoken languages then allowed for the rise of reading and writing, which further enhanced the durability of memes. The rise of social media software has further enhanced the replication of memes, and the memes and software have now forged a very powerful parasitic/symbiotic relationship to promote their joint survival. The truly wacky memes we now find propagating on social media software certainly attest to this. Thus, it might be that a few mutations to a number of regulatory genes were all that it took for the memes to come to predominance as the dominant form of self-replicating information on the planet.
The FOXP2 gene is an example of the theory of facilitated variation of Marc W. Kirschner and John C. Gerhart in action. The theory explains that the phenotype of an individual is determined by a number of 'constrained' and 'deconstrained' elements. The constrained elements are the "conserved core processes" of living things that have remained essentially unchanged for billions of years and are used by all living things to sustain the fundamental functions of carbon-based life, like the generation of proteins from the information found in DNA sequences, processed by mRNA, tRNA and ribosomes, or the metabolism of carbohydrates via the Krebs cycle. The deconstrained elements are weakly-linked regulatory processes that can change the amount, location and timing of gene expression within a body, and which can therefore easily control which conserved core processes are run by a cell and when they are run. The theory of facilitated variation maintains that most favorable biological innovations arise from minor mutations to these deconstrained, weakly-linked regulatory processes that control the conserved core processes of life, rather than from random mutations to the genotype in general, which would change the phenotype of an individual in a purely random direction. For more on that see Facilitated Variation and the Utilization of Reusable Code by Carbon-Based Life.
Everything Old is New Again
Using differently colored photons to run through billions of waveguides in a multithreaded manner on chips configured into vast networks of neurons that try to emulate the human brain might sound a bit far-fetched. But it brings to mind something I once read about Richard Feynman when he was working on the first atomic bomb at Los Alamos from 1943-1945. He led a group that figured out that they could run several differently colored card decks through a string of IBM unit record processing machines to perform different complex mathematical calculations simultaneously on the same hardware. For more on Richard Feynman see Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse. Below is the pertinent section extracted from a lecture given by Richard Feynman:
Los Alamos From Below: Reminiscences 1943-1945, by Richard Feynman
http://calteches.library.caltech.edu/34/3/FeynmanLosAlamos.htm
In the extract below, notice the Agile group dynamics at play in the very early days of the Information Revolution.
Well, another kind of problem I worked on was this. We had to do lots of calculations, and we did them on Marchant calculating machines. By the way, just to give you an idea of what Los Alamos was like: We had these Marchant computers - hand calculators with numbers. You push them, and they multiply, divide, add and so on, but not easy like they do now. They were mechanical gadgets, failing often, and they had to be sent back to the factory to be repaired. Pretty soon you were running out of machines. So a few of us started to take the covers off. (We weren't supposed to. The rules read: "You take the covers off, we cannot be responsible...") So we took the covers off and we got a nice series of lessons on how to fix them, and we got better and better at it as we got more and more elaborate repairs. When we got something too complicated, we sent it back to the factory, but we'd do the easy ones and kept the things going. I ended up doing all the computers and there was a guy in the machine shop who took care of typewriters.
Anyway, we decided that the big problem - which was to figure out exactly what happened during the bomb's explosion, so you can figure out exactly how much energy was released and so on - required much more calculating than we were capable of. A rather clever fellow by the name of Stanley Frankel realized that it could possibly be done on IBM machines. The IBM company had machines for business purposes, adding machines called tabulators for listing sums, and a multiplier that you put cards in and it would take two numbers from a card and multiply them. There were also collators and sorters and so on.
Figure 21 - Richard Feynman is describing the IBM Unit Record Processing machines from the 1940s and 1950s. The numerical data to be processed was first punched onto IBM punch cards with something like this IBM 029 keypunch machine from the 1960s.
Figure 22 - Each card could hold a maximum of 80 characters.
Figure 23 - The cards with numerical data were then bundled into card decks for processing.
The Unit Record Processing machines would then process hundreds of punch cards per minute by routing the punch cards from machine to machine in processing streams.
Figure 24 – The Unit Record Processing machines like this card sorter were programmed by physically rewiring a plugboard.
Figure 25 – The plugboard for a Unit Record Processing machine.
So Frankel figured out a nice program. If we got enough of these machines in a room, we could take the cards and put them through a cycle. Everybody who does numerical calculations now knows exactly what I'm talking about, but this was kind of a new thing then - mass production with machines. We had done things like this on adding machines. Usually you go one step across, doing everything yourself. But this was different - where you go first to the adder, then to the multiplier, then to the adder, and so on. So Frankel designed this system and ordered the machines from the IBM company, because we realized it was a good way of solving our problems.
We needed a man to repair the machines, to keep them going and everything. And the Army was always going to send this fellow they had, but he was always delayed. Now, we always were in a hurry. Everything we did, we tried to do as quickly as possible. In this particular case, we worked out all the numerical steps that the machines were supposed to do - multiply this, and then do this, and subtract that. Then we worked out the program, but we didn't have any machine to test it on. So we set up this room with girls in it. Each one had a Marchant. But she was the multiplier, and she was the adder, and this one cubed, and we had index cards, and all she did was cube this number and send it to the next one.
We went through our cycle this way until we got all the bugs out. Well, it turned out that the speed at which we were able to do it was a hell of a lot faster than the other way, where every single person did all the steps. We got speed with this system that was the predicted speed for the IBM machine. The only difference is that the IBM machines didn't get tired and could work three shifts. But the girls got tired after a while.
Anyway, we got the bugs out during this process, and finally the machines arrived, but not the repairman. These were some of the most complicated machines of the technology of those days, big things that came partially disassembled, with lots of wires and blueprints of what to do. We went down and we put them together, Stan Frankel and I and another fellow, and we had our troubles. Most of the trouble was the big shots coming in all the time and saying, "You're going to break something! "
We put them together, and sometimes they would work, and sometimes they were put together wrong and they didn't work. Finally I was working on some multiplier and I saw a bent part inside, but I was afraid to straighten it because it might snap off - and they were always telling us we were going to bust something irreversibly. When the repairman finally got there, he fixed the machines we hadn't got ready, and everything was going. But he had trouble with the one that I had had trouble with. So after three days he was still working on that one last machine.
I went down, I said, "Oh, I noticed that was bent."
He said, "Oh, of course. That's all there is to it!" Bend! It was all right. So that was it.
Well, Mr. Frankel, who started this program, began to suffer from the computer disease that anybody who works with computers now knows about. It's a very serious disease and it interferes completely with the work. The trouble with computers is you play with them. They are so wonderful. You have these switches - if it's an even number you do this, if it's an odd number you do that - and pretty soon you can do more and more elaborate things if you are clever enough, on one machine.
And so after a while the whole system broke down. Frankel wasn't paying any attention; he wasn't supervising anybody. The system was going very, very slowly - while he was sitting in a room figuring out how to make one tabulator automatically print arctangent X, and then it would start and it would print columns and then bitsi, bitsi, bitsi, and calculate the arc-tangent automatically by integrating as it went along and make a whole table in one operation.
Absolutely useless. We had tables of arc-tangents. But if you've ever worked with computers, you understand the disease -- the delight in being able to see how much you can do. But he got the disease for the first time, the poor fellow who invented the thing.
And so I was asked to stop working on the stuff I was doing in my group and go down and take over the IBM group, and I tried to avoid the disease. And, although they had done only three problems in nine months, I had a very good group.
The real trouble was that no one had ever told these fellows anything. The Army had selected them from all over the country for a thing called Special Engineer Detachment - clever boys from high school who had engineering ability. They sent them up to Los Alamos. They put them in barracks. And they would tell them nothing.
Then they came to work, and what they had to do was work on IBM machines - punching holes, numbers that they didn't understand. Nobody told them what it was. The thing was going very slowly. I said that the first thing there has to be is that these technical guys know what we're doing. Oppenheimer went and talked to the security and got special permission so I could give a nice lecture about what we were doing, and they were all excited: "We're fighting a war! We see what it is!" They knew what the numbers meant. If the pressure came out higher, that meant there was more energy released, and so on and so on. They knew what they were doing.
Complete transformation! They began to invent ways of doing it better. They improved the scheme. They worked at night. They didn't need supervising in the night; they didn't need anything. They understood everything; they invented several of the programs that we used - and so forth.
So my boys really came through, and all that had to be done was to tell them what it was, that's all. As a result, although it took them nine months to do three problems before, we did nine problems in three months, which is nearly ten times as fast.
But one of the secret ways we did our problems was this: The problems consisted of a bunch of cards that had to go through a cycle. First add, then multiply and so it went through the cycle of machines in this room, slowly, as it went around and around. So we figured a way to put a different colored set of cards through a cycle too, but out of phase. We'd do two or three problems at a time.
But this got us into another problem. Near the end of the war for instance, just before we had to make a test in Albuquerque, the question was: How much would be released? We had been calculating the release from various designs, but we hadn't computed for the specific design that was ultimately used. So Bob Christie came down and said, "We would like the results for how this thing is going to work in one month" - or some very short time, like three weeks.
I said, "It's impossible."
He said, "Look, you're putting out nearly two problems a month. It takes only two weeks per problem, or three weeks per problem."
I said, "I know. It really takes much longer to do the problem, but we're doing them in parallel. As they go through, it takes a long time and there's no way to make it go around faster."
So he went out, and I began to think. Is there a way to make it go around faster? What if we did nothing else on the machine, so there was nothing else interfering? I put a challenge to the boys on the blackboard - CAN WE DO IT? They all start yelling, "Yes, we'll work double shifts, we'll work overtime," - all this kind of thing. "We'll try it. We'll try it!"
And so the rule was: All other problems out. Only one problem and just concentrate on this one. So they started to work.
My wife died in Albuquerque, and I had to go down. I borrowed Fuchs' car. He was a friend of mine in the dormitory. He had an automobile. He was using the automobile to take the secrets away, you know, down to Santa Fe. He was the spy. I didn't know that. I borrowed his car to go to Albuquerque. The damn thing got three flat tires on the way. I came back from there, and I went into the room, because I was supposed to be supervising everything, but I couldn't do it for three days.
It was in this mess. There's white cards, there's blue cards, there's yellow cards, and I start to say, "You're not supposed to do more than one problem - only one problem!" They said, "Get out, get out, get out. Wait -- and we'll explain everything."
So I waited, and what happened was this. As the cards went through, sometimes the machine made a mistake, or they put a wrong number in. What we used to have to do when that happened was to go back and do it over again. But they noticed that a mistake made at some point in one cycle only affects the nearby numbers, the next cycle affects the nearby numbers, and so on. It works its way through the pack of cards. If you have 50 cards and you make a mistake at card number 39, it affects 37, 38, and 39. The next, card 36, 37, 38, 39, and 40. The next time it spreads like a disease.
So they found an error back a way, and they got an idea. They would only compute a small deck of 10 cards around the error. And because 10 cards could be put through the machine faster than the deck of 50 cards, they would go rapidly through with this other deck while they continued with the 50 cards with the disease spreading. But the other thing was computing faster, and they would seal it all up and correct it. OK? Very clever.
That was the way those guys worked, really hard, very clever, to get speed. There was no other way. If they had to stop to try to fix it, we'd have lost time. We couldn't have got it. That was what they were doing.
Of course, you know what happened while they were doing that. They found an error in the blue deck. And so they had a yellow deck with a little fewer cards; it was going around faster than the blue deck. Just when they are going crazy - because after they get this straightened out, they have to fix the white deck - the boss comes walking in.
"Leave us alone," they say. So I left them alone and everything came out. We solved the problem in time and that's the way it was.
The above should sound very familiar to most 21st century IT professionals.
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston