Wednesday, June 28, 2023

Is Pure Thought an Analog or a Digital Process?

With the second Singularity now in full swing here on the Earth, many AI researchers are amazed at how well Deep Learning LLMs (Large Language Models) are advancing AI toward AGI (Artificial General Intelligence), and ultimately ASI (Artificial Super Intelligence), by simply simulating the neuron architecture of the human brain.

Figure 1 – Modern LLMs now consist of Deep Neural Networks with on the order of 100 layers of neurons connected by hundreds of billions, or even trillions, of weighted parameters.

The unstated hypothesis is that if we could only fully simulate the entire neuron architecture of the human brain with Deep Learning LLMs, the LLMs would soon be able to attain the level of AGI that we humans seem to possess. Then, once AGI is attained by software, ASI is sure to soon follow. But in this post, I would like to explore the opposite possibility by suggesting that Pure Thought may actually be a fundamentally digital process that the digital LLMs have at long last revealed, and that human Intelligence merely arises from the 100 billion analog neurons in the human brain trying to simulate a huge number of digital processes.

But to do that, we first need to explore the differences between digital and analog computers. Back in the 1950s, when you told somebody in academia that you were working on computers for your Ph.D. thesis, they would naturally ask if you were working on analog or digital computers. As a savvy 21st-century computer user, you might be a bit perplexed: "Analog and digital computers? I thought that we only had computers!". That is because you have only dealt with digital computers for your whole life. But there was a time back in the 1940s and 1950s when digital computers barely existed, while analog computers ruled the day because the analog computers of the time were far superior to the digital computers that were still in their formative years. But before getting into the analog computers of the distant past, let us first briefly review their digital cousins that you are now very familiar with, at least on an external end-user basis.

Digital Computers
To build a digital computer, all you need is a large network of interconnected switches that can switch each other on and off in a coordinated manner. Switches can be in one of two states, either open (off) or closed (on), and we can use those two states to store the binary digits "0" and "1". By using several switches teamed together in open (off) or closed (on) states, we can store even larger binary numbers, like "01100100" = 100. We can also group the switches into logic gates that perform logical operations. For example, in Figure 2 below we see an AND gate composed of two switches A and B. Both switches A and B must be closed for the light bulb to turn on. If either switch A or B is open, the light bulb will not light up.

Figure 2 – An AND gate can be simply formed from two switches. Both switches A and B must be closed, in a state of “1”, to turn the light bulb on.

Additional logic gates can be formed from other combinations of switches as shown in Figure 3 below. It takes about 2 - 8 switches to create each of the various logic gates shown below.

Figure 3 – Additional logic gates can be formed from other combinations of 2 – 8 switches.

Once you can store binary numbers with switches and perform logical operations upon them with logic gates, you can build a computer that performs calculations on numbers. To process text, like names and addresses, we simply associate each letter of the alphabet with a binary number, like in the ASCII code set where A = "01000001" and Z = "01011010", and then process the associated binary numbers.
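To make the switch-and-gate picture concrete, here is a minimal Python sketch of my own (not from the original post) that models the AND gate of Figure 2 with two boolean "switches" and shows letters being stored as ASCII binary numbers:

```python
# Model the AND gate of Figure 2: the bulb only lights when both switches are closed.
def and_gate(switch_a: bool, switch_b: bool) -> bool:
    return switch_a and switch_b

# Truth table for the AND gate.
for a in (False, True):
    for b in (False, True):
        print(f"A={int(a)} B={int(b)} -> bulb {'on' if and_gate(a, b) else 'off'}")

# Storing text as binary numbers using the ASCII code set.
for letter in "AZ":
    print(letter, "=", format(ord(letter), "08b"))   # A = 01000001, Z = 01011010

# A team of eight switches storing the binary number 01100100.
print(int("01100100", 2))                            # prints 100
```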

The early computers of the 1940s used electrical relays for switches. Closing one relay allowed current to flow to another relay’s coil, causing that relay to close as well.

Figure 4 – The Z3 digital computer first became operational in May of 1941 when Konrad Zuse first cranked it up in his parents' bathroom in Berlin. The Z3 consisted of 2400 electro-mechanical relays that were designed for switching telephone conversations.

Figure 5 – The electrical relays used by early computers for switching were very large, very slow and used a great deal of electricity which generated a great deal of waste heat.

Figure 6 – In the 1950s, the electrical relays were replaced by vacuum tubes that were 100,000 times faster than the relays but were still quite large, used large amounts of electricity and also generated a great deal of waste heat.

The United States government installed its very first commercial digital computer, a UNIVAC I, for the Census Bureau on June 14, 1951. The UNIVAC I required an area of 25 feet by 50 feet and contained 5,600 vacuum tubes, 18,000 crystal diodes and 300 relays with a total memory of 12 KB. From 1951 to 1958 a total of 46 UNIVAC I computers were built and installed.

Figure 7 – In 1951, the UNIVAC digital computer was very impressive on the outside.

Figure 8 – But the UNIVAC I was a little less impressive on the inside.

Analog Computers
An analog computer is a type of computer that uses the continuous variation of physical phenomena such as electrical, mechanical, or hydraulic quantities (analog signals) to model the problem being solved. But most of the analog computers of the 1940s, 1950s and 1960s were electrical because it is far easier to quickly connect electrical components together than it is to build physical systems consisting of mechanical or hydraulic components. These early electrical analog computers used large numbers of analog electrical circuits to do calculations, and in their day they were actually quite fast and useful. In contrast, the newly arriving digital computers of the day represented varying quantities in terms of discrete values that were essentially quantized in nature. The problem was that the very slow switching speeds and the very limited memories of these primitive digital computers made their calculated quantized values of little use in solving the problems of the day.

Electrical analog computers use the fact that most things in our Universe can be explained in terms of mathematical differential equations. That is because nearly all of our current theories in physics are detailed in terms of differential equations. Thus, to describe how nearly anything in the Universe behaves, you just have to solve some differential equations. The good news is that if you can solve the differential equation for one physical system, like a mass bouncing back and forth at the end of a spring, that solution will also apply to any other physical system that follows the same differential equation. Luckily, the behavior of electrical circuits composed of inductors, capacitors, resistors and amplifiers also can be described by differential equations.

Figure 9 – The behavior of electronic circuits can also be described by differential equations. Electronic analog computers work by allowing the user to quickly construct an analog electronic circuit that is described by the same differential equations as the problem at hand. In that way, the output of the electronic circuit can be used to describe the solution of the problem at hand.

Figure 10 – The solution to a physical problem can be solved by an analog electrical circuit if they both are described by the same differential equation.

For example, above on the left, we see a physical system consisting of a mass attached to a spring resting on a flat surface with some friction. For the problem at hand, we wish to predict the location of the mass along the x-axis as time unfolds. If we start the mass with the spring stretched out 9 cm to the right and an initial velocity of -7 cm/sec, which means that the mass is already traveling 7 cm/sec to the left at time zero when we start the experiment, how will the mass move as time unfolds? The differential equation for the motion of the mass tells us that at any given moment, the x-acceleration of the mass is equal to -0.2 times its x-velocity, minus 0.5 times its x-displacement, plus 1:

x'' = -0.2 x' - 0.5 x + 1

To solve the problem in an analog manner, we construct an electrical circuit that behaves exactly like the same differential equation and then watch how the solution to the differential equation unfolds in time by viewing the output of the electrical circuit on an oscilloscope. In the oscilloscope graph of Figure 10 above, we see that the mass at first begins with a displacement of 9 cm to the right. The mass also has a negative velocity of 7 cm/sec to the left because the graph starts with a negative slope of -7. The mass then quickly moves to the left, overshoots x = 0, starts to slow down as the spring tries to pull it back, and finally turns around at about 6 cm to the left of x = 0. Then the mass bounces back and forth with smaller and smaller swings until it finally comes to rest at its new equilibrium of about x = 2. In a sense, we can consider both the physical system and the electrical system to be analog computers that solve the differential equation in the middle. But as you can see, building an electronic analog computer to solve the differential equation is much easier to do and takes up far less space and material than building a physical analog computer to do the same thing.
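For contrast, here is how a digital computer attacks the very same problem: by chopping time into discrete steps and grinding through the arithmetic. This is just a minimal Python sketch of my own using the coefficients and initial conditions quoted above:

```python
# Step the differential equation  x'' = -0.2*x' - 0.5*x + 1  forward in
# discrete time steps (simple Euler integration), the quantized digital
# approach, in contrast to the continuous analog circuit of Figure 10.

x, v = 9.0, -7.0           # initial displacement (cm) and velocity (cm/sec)
dt = 0.001                 # discrete time step in seconds
for step in range(60001):  # simulate 60 seconds
    if step % 10000 == 0:
        print(f"t = {step * dt:5.1f} sec   x = {x:6.2f} cm")
    a = -0.2 * v - 0.5 * x + 1.0   # acceleration from the differential equation
    v += a * dt                    # update the velocity
    x += v * dt                    # update the displacement
```

Run long enough, the digital solution damps down to the same resting value of about x = 2 that the analog circuit displays on the oscilloscope.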

Figure 11 – During the 1950s, analog computers could be programmed to solve various problems by twisting dials and throwing switches to quickly construct the required analog electrical circuit.

Figure 12 – Heathkit began to manufacture and sell do-it-yourself electronic kits in 1947. During the 1950s, you could actually buy a Heathkit to assemble your own analog computer. Allied Radio, which merged with Radio Shack in 1970, was a major retailer of Heathkit products. Notice the row of vacuum tubes at the top and the plugboard that allowed the user to easily interconnect electronic components with wire leads that plugged into the plugboard. Also, take note of the many switches and dials that allowed for the programming of the analog computer circuits.

Figure 13 – In 1944, the World War II B-29 bomber had 13 machine guns and 2 cannons that were fired with the aid of 5 General Electric analog computers.

Figure 14 – Trying to shoot down an airplane that is flying at a high rate of speed and making many evasive twists and turns, from another airplane that is also flying at a high rate of speed and maneuvering, is very difficult to do with machine gun bullets that are themselves traveling at a high rate of speed. To do so, many complex differential equations need to be solved all at the same time in a nearly instantaneous manner. This was far beyond the capabilities of the human brain or the early digital computers at the end of World War II. Only analog electronic computers were capable of such feats.

Figure 15 – Above is a student gunner stationed at the training station of the General Electric analog computer that was used to fire the machine guns and cannons of the B-29 during World War II. Notice how small the analog computer is compared to the 1951 UNIVAC I digital computer portrayed in Figures 7 and 8 above. Clearly, a UNIVAC I could not possibly fit into a B-29 bomber and was certainly not fast enough to do the required calculations either.

For example, in the 1940s the B-29 bombers had a total of 13 machine guns and two cannons. Now it is very hard to hit a twisting and turning target that is flying at a high velocity from a B-29 that is itself flying at a different high velocity. You cannot simply aim at the approaching target because the target will not be there when the bullets arrive. Instead, the gunnery system was controlled by five General Electric analog computers that allowed the gunners to simply aim at the approaching targets. The analog computers then rapidly made all of the required adjustments so that the bullets arrived where the target would be when they got there. Here is a 1945 World War II training film that explains how high-speed analog computers were used to operate the B-29 machine guns and cannons.

"GUNNERY IN THE B-29" ANIMATED B-29 SUPERFORTRESS CREW TURRET COMPUTER TRAINING FILM
https://www.youtube.com/watch?v=mJExsIp4yO8

The Hardware of the Mind
Now let us explore the equivalent architecture within the human brain to see if it is using analog or digital circuitry. It is important to note that nearly all of the biochemical pathways in carbon-based life operate in a biochemical analog manner. For example, the Krebs cycle converts the energy of carbohydrates into the energy stored in ATP molecules that then power all other cellular activity. The Krebs cycle is a complicated loop of analog biochemical reactions.

Figure 16 – The Krebs cycle is an example of an infinite loop of analog biochemical reactions which takes the energy in carbohydrates and stores it in ATP for later use by the cells.

But the brains found in carbon-based life operate in a uniquely different manner. For example, the human brain is composed of a huge number of coordinated switches called neurons that behave much more like the coordinated switches found in a digital computer. This alone would seem to indicate that the network of neurons in the human brain operates in more of a digital manner than in an analog manner. Just as your computer contains many billions of transistor switches, your brain also contains about 100 billion switches called neurons. The neurons in one layer of a Deep Learning neural network are connected to all of the neurons in the next layer by weighted connections. Similarly, each of the 100 billion neuron switches in your brain can be connected to upwards of 10,000 other neuron switches and can influence them into turning on or off, just like the Deep Learning neurons in the neural network of a modern LLM. Let us now explore the digital processing of the human brain in greater detail.

All neurons have a cell body called the soma that is like all the other cells in the body, with a nucleus and all of the other organelles that are needed to keep the neuron alive and functioning. Like most electrical devices, neurons have an input side and an output side. On the input side of the neuron, one finds a large number of branching dendrites. On the output side of the neuron, we find one single and very long axon. The input dendrites of a neuron are very short and connect to a large number of output axons from other neurons. Although axons are only about a micron in diameter, they can be very long, with a length of up to 3 feet. That's like a one-inch garden hose that is about 14 miles long! The single output axon has branching synapses along its length and it terminates with a large number of synapses. The output axon of a neuron can be connected to the input dendrites of perhaps 10,000 other neurons, forming a very complex network of connections.

Figure 17 – A neuron consists of a cell body or soma that has many input dendrites on one side and a very long output axon on the other side. Even though axons are only about 1 micron in diameter, they can be 3 feet long, like a one-inch garden hose that is about 14 miles long! The axon of one neuron can be connected to up to 10,000 dendrites of other neurons.

Neurons are constantly receiving inputs from the axons of many other neurons via their input dendrites. These time-varying inputs can excite the neuron or inhibit the neuron, and they are all constantly being added together, or integrated, over time. When a sufficient number of exciting inputs are received, the neuron fires or switches "on". When it does so, it creates an electrical action potential that travels down the length of its axon to the input dendrites of other neurons. When the action potential finally reaches such a synapse, it causes the release of several organic molecules known as neurotransmitters, such as glutamate, acetylcholine, dopamine and serotonin. These neurotransmitters are created in the soma of the neuron and are transported down the length of the axon in small vesicles. The synaptic gaps between neurons are very small, allowing the released neurotransmitters from the axon to diffuse across the synaptic gap and plug into receptors on the receiving dendrite of another neuron. This causes the receiving neuron to either decrease or increase its membrane potential. If the membrane potential of the receiving neuron increases, the receiving neuron is being excited; if its membrane potential decreases, the receiving neuron is being inhibited.

Idle neurons have a membrane potential of about -70 mV. This means that the voltage of the fluid on the inside of the neuron is 70 mV lower than the voltage of the fluid on the outside of the neuron, so it is like there is a little 70 mV battery stuck in the membrane of the neuron, with the negative terminal on the inside of the neuron and the positive terminal on the outside. This is accomplished by keeping the concentrations of charged ions, like Na+, K+ and Cl-, different between the fluids inside and outside of the neuron membrane.

There are two ways to control the density of these ions within the neuron. The first is called passive transport. There are little protein molecules stuck in the cell membrane of the neuron that allow certain ions to pass freely through, like a hole in a wall. When these protein holes open in the neuron's membrane, the selected ion, perhaps K+, will start to go into and out of the neuron. However, if there are more K+ ions on the outside of the membrane than within the neuron, the net flow of K+ ions will be into the neuron thanks to the second law of thermodynamics, making the fluid within the neuron more positive. Passive transport requires very little energy. All you need is enough energy to change the shape of the embedded protein molecules in the neuron's cell membrane to allow the free flow of charged ions to lower densities, as required by the second law of thermodynamics.

The other way to get ions into or out of neurons is by the active transport of the ions with molecular pumps. With active transport, the neuron uses some energy to actively pump the charged ions against their concentration gradients, working against the direction of free diffusion favored by the second law of thermodynamics. For example, neurons have a pump that can actively pump three Na+ ions out and take in two K+ ions at the same time, for a net outflow of one positive charge. By actively pumping out positively charged Na+ ions, the fluid inside of a neuron ends up having a net -70 mV potential because there are more positively charged ions on the outside of the neuron than within the neuron. When the neurotransmitters from other firing neurons come into contact with their corresponding receptors on the dendrites of the target neuron, those receptors open their passive Na+ channels. This allows Na+ ions to flow into the neuron and temporarily change the membrane voltage by making the fluid inside the neuron more positive. If this voltage change is large enough, it will cause an action potential to be fired down the axon of the neuron. Figure 18 shows the basic ion flow that transmits this action potential down the length of the axon. The passing action potential pulse lasts for about 3 milliseconds and travels at about 100 meters/sec, or about 200 miles/hour, down the neuron's axon.

Figure 18 – When a neuron fires, an action potential is created by various ions moving across the membranes surrounding the axon. The pulse is about 3 milliseconds in duration and travels about 100 meters/sec, or about 200 miles/hour down the axon.

Figure 19 – At the synapse between the axon of one neuron and a dendrite of another neuron, the traveling action potential of the sending neuron’s axon releases neurotransmitters that cross the synaptic gap and which can excite or inhibit the firing of the receiving neuron.

Here is the general sequence of events:

1. The first step in the generation of an action potential is that the Na+ channels open, allowing a flood of Na+ ions into the neuron. This causes the membrane potential of the neuron to become positive, instead of its normal resting value of -70 mV.

2. At some positive membrane potential of the neuron, the K+ channels open, allowing positive K+ ions to flow out of the neuron.

3. The Na+ channels then close, and this stops the inflow of positively charged Na+ ions. But since the K+ channels are still open, it allows the outflow of positively charged K+ ions, so that the membrane potential plunges in the negative direction again.

4. When the neuron membrane potential begins to reach its normal resting state of -70 mV, the K+ channels close.

5. Then the Na+/K+ pump of the neuron kicks in and starts to transport Na+ ions out of the neuron, and K+ ions back into the cell, until it reaches its normal -70 mV potential, and is ready for the next action potential pulse to pass by.

The action potential travels down the length of the axon as a voltage pulse. It does this by using the steps outlined above. As a section of the axon undergoes the above process, it increases the membrane potential of the neighboring section and causes it to rise as well. This is like jerking a tightrope and watching a pulse travel down its length. The voltage pulse travels down the length of the axon until it reaches its synapses with the dendrites of other neurons along the way or finally terminates in synapses at the very end of the axon. An important thing to keep in mind about the action potential is that it is one-way and all-or-nothing. The action potential starts at the beginning of the axon and then goes down its length; it cannot go back the other way. Also, when a neuron fires, the action potential pulse has the same amplitude every time, regardless of the amount of excitation received from its dendritic inputs. Since the amplitude of the action potential of a neuron is always the same, the important thing about neurons is their firing rate. A weak stimulus to the neuron's input dendrites will cause a low rate of firing, while a stronger stimulus will cause a higher rate of firing of the neuron. Neurons can actually fire several hundred times per second when sufficiently stimulated by other neurons.
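The integrate-then-fire behavior described above is often captured with a "leaky integrate-and-fire" model. Here is a minimal sketch of that standard simplification (my own illustration with arbitrary numbers, not a detailed model of the ion channels above); note how every spike is all-or-nothing, so a stronger stimulus only shows up as a higher firing rate:

```python
# A leaky integrate-and-fire neuron: the membrane potential integrates its
# inputs, leaks back toward rest, and fires an all-or-nothing spike whenever
# it crosses a threshold, then resets.

def firing_rate(drive, sim_time=1.0, dt=0.001):
    v_rest, v_threshold, v_reset = -70.0, -55.0, -70.0   # millivolts
    tau = 0.02                                           # membrane time constant (sec)
    v = v_rest
    spikes = 0
    for _ in range(int(sim_time / dt)):
        # Leak back toward rest plus the summed dendritic drive (in mV/sec).
        v += dt * ((v_rest - v) / tau + drive)
        if v >= v_threshold:       # all-or-nothing: every spike looks the same
            spikes += 1
            v = v_reset            # the membrane resets after each firing
    return spikes / sim_time       # firings per second

for drive in (800.0, 1200.0, 2000.0):   # weak, medium and strong stimulation
    print(f"drive {drive:6.1f} -> about {firing_rate(drive):5.1f} firings/sec")
```

The exact numbers are arbitrary; the point is that the output amplitude never changes, only the firing rate does, which is the digital flavor of the biology described above.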

When the traveling action potential pulse along a neuron's axon finally reaches a synapse, it causes the Ca++ channels of the axon to open. Positive Ca++ ions then rush in and cause the neurotransmitters that are stored in vesicles to be released into the synapse, where they diffuse across to the dendrite of the receiving neuron. Some of the empty vesicles eventually pick up, or reuptake, some of the neurotransmitters that were released into the synaptic gap so that they can be reused when the next action potential arrives, while other empty vesicles return to the neuron's soma to be refilled with neurotransmitter molecules.

In Figure 20 below we see a synapse between the output axon of a sending neuron and the input dendrite of a receiving neuron in comparison to the source and drain of a FET transistor switch found in a computer.

Figure 20 – The synapse between the output axon of one neuron and the dendrite of another neuron behaves very much like the source and drain of a FET transistor.

Figure 21 – A FET transistor consists of a source, gate and drain. When a positive voltage is applied to the gate, a current of electrons can flow from the source to the drain and the FET acts like a closed switch that is “on”. When there is no positive voltage on the gate, no current can flow from the source to the drain, and the FET acts like an open switch that is “off”.

Now it might seem like your computer should be a lot smarter than you are on the face of it, and many people will even secretly admit to that fact. After all, the CPU chip in your computer has several billion transistor switches and if you have 8 GB of memory, that comes to another 64 billion transistors in its memory chips, so your computer is getting pretty close to the 100 billion neuron switches in your brain. But the transistors in your computer can switch on and off in about 10⁻¹⁰ seconds, while the neurons in your brain can only fire on and off in about 10⁻² seconds. The signals in your computer also travel very close to the speed of light, 186,000 miles/second, while the action potentials of axons only travel at a pokey 200 miles/hour. And the chips in your computer are very small, so there is not much distance to cover at nearly the speed of light, while your poor brain is thousands of times larger. So what gives? Why aren't we working for the computers, rather than the other way around? The answer lies in massively parallel processing. While the transistor switches in your computer are only connected to a few of the other transistor switches in your computer, each neuron in your brain has several thousand input connections and perhaps 10,000 output connections to other neurons in your brain, so when one neuron fires, it can affect 10,000 other neurons. When those 10,000 neurons fire, they can affect 100,000,000 neurons, and when those neurons fire, they can affect 1,000,000,000,000 neurons, which is more than the 100 billion neurons in your brain! So when a single neuron fires within your brain, it can theoretically affect every other neuron in your brain within three generations of neuron firings, in perhaps as little as 300 milliseconds. That is why the human brain still had an edge on computers up until a few months ago when the LLMs started to explode in size.
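A quick back-of-the-envelope calculation makes the fan-out argument concrete:

```python
# With roughly 10,000 output connections per neuron, three generations of
# firings can in principle touch more cells than the brain even contains.
fan_out = 10_000
neurons_in_brain = 100_000_000_000
for generation in (1, 2, 3):
    print(f"generation {generation}: {fan_out ** generation:,} neurons reachable")
print("more than the brain's 100 billion neurons:", fan_out ** 3 > neurons_in_brain)
```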

This is why I would like to propose that Pure Thought is actually a digital, and not an analog, process. That would explain why the brains of analog carbon-based life behave in such a digital manner based on switching technology, and why AI is rapidly advancing toward an ASI (Artificial Super Intelligence) that will be far superior to human intelligence. But true ASI still faces the remaining challenges of high energy usage and the dissipation of waste heat. This is where the human brain still greatly outperforms the coming ASI Machines, unless they change their digital switching technologies.

Neuromorphic Chips May Be the Next Hardware Breakthrough For Advanced AI
Neuromorphic chips are chips that are designed to emulate the operation of the 100 billion neurons in the human brain in order to advance AI to the level of human Intelligence and beyond. As we saw from the history of switching technology above, power consumption and waste heat production have always been a problem. Neuromorphic chips address this problem by drastically reducing power consumption. For example, the human body at rest runs on about 100 watts of power, with the human brain drawing around 20 watts of that power. Now compare the human brain at 20 watts to an advanced Intel Core i9 CPU chip that draws about 250 watts of power. With just five of those chips you could build a very expensive 1,250-watt space heater! The human brain is still much more powerful than an advanced Intel Core i9 CPU chip even though it only draws 8% of the power. As we saw in The Ghost in the Machine the Grand Illusion of Consciousness, the human Mind runs on 100 billion neurons, with each neuron connected to at most 10,000 other neurons, and it can do all that on 20 watts of power! The reason why an advanced Intel Core i9 CPU chip with billions of transistors needs 250 watts of power is that, on average, half of the transistors on the chip are "on" and consuming electrical energy. In fact, one of the major limitations in chip design is keeping the chip from melting under load.

Neuromorphic chips, on the other hand, draw minuscule amounts of power. For example, the IBM TrueNorth neuromorphic chip, first introduced in 2014, contains about 5.4 billion transistors, which is about the same number of transistors as in a modern Intel Core i9 processor, but the TrueNorth chip consumes just 73 milliwatts of power! An Intel Core i9 processor requires about 250 watts of power to run, which is about 3,425 times as much power. Intel is also actively pursuing the building of neuromorphic chips with the introduction of the Intel Loihi chip in November 2017. But before proceeding further with how neuromorphic chips operate, let's recall how the human brain operates since that is what the neuromorphic chips are trying to emulate.
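Worked out explicitly, the power comparison above looks like this:

```python
# The power comparison from the paragraph above, worked out explicitly.
brain_watts = 20.0
cpu_watts = 250.0          # an advanced desktop CPU under load
truenorth_watts = 0.073    # IBM TrueNorth draws about 73 milliwatts
print(f"brain vs. CPU: {brain_watts / cpu_watts:.0%} of the power")             # ~8%
print(f"CPU vs. TrueNorth: {cpu_watts / truenorth_watts:,.0f} times the power")  # ~3,425
```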

Neuromorphic Chips Emulate the Human Brain
To emulate the neurons in the human brain, neuromorphic chips use spiking neural networks (SNNs). Each SNN neuron can fire pulses independently of the other SNN neurons just like biological neurons can independently fire pulses down their axons. The pulses from one SNN neuron are then sent to many other SNN neurons and the integrated impacts of all the arriving pulses then change the electrical states of the receiving SNN neurons just as the dendrites of a biological neuron can receive the pulses from 10,000 other biological neurons. The SNN neurons then simulate human learning processes by dynamically remapping the synapses between the SNN neurons in response to the pulse stimuli that they receive.
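To make the SNN idea concrete, here is a toy sketch of my own (illustrative only, and not the actual TrueNorth or Loihi design): a handful of spiking neurons exchanging all-or-nothing pulses over weighted synapses, with the weights nudged whenever two neurons fire together as a crude stand-in for the dynamic synaptic remapping described above:

```python
import numpy as np

# A toy spiking neural network: each neuron integrates the weighted pulses it
# receives, fires an all-or-nothing spike when it crosses a threshold, resets,
# and the synapses between co-firing neurons are strengthened.

rng = np.random.default_rng(0)
n = 8                                    # a tiny network of 8 spiking neurons
weights = rng.uniform(0.0, 0.5, (n, n))  # weights[i, j] = synapse from neuron j to i
np.fill_diagonal(weights, 0.0)           # no self-connections
potential = np.zeros(n)
spikes = np.zeros(n)
threshold, leak, learn_rate = 1.0, 0.9, 0.01

for step in range(100):
    external = rng.uniform(0.0, 0.3, n)              # random external input pulses
    potential = leak * potential + external + weights @ spikes
    spikes = (potential >= threshold).astype(float)  # all-or-nothing pulses
    potential[spikes == 1.0] = 0.0                   # neurons that fired reset
    weights += learn_rate * np.outer(spikes, spikes) # strengthen co-firing synapses
    np.fill_diagonal(weights, 0.0)

print("neurons spiking on the final step:", spikes.astype(int))
```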

Figure 22 – The IBM TrueNorth neuromorphic chip.

Figure 23 – A logical depiction of the IBM TrueNorth neuromorphic chip.

Figure 24 – A block diagram of the Intel Loihi neuromorphic chip.

Both the IBM TrueNorth and the Intel Loihi use an SNN architecture. The Intel chip was introduced in November 2017 and consists of a 128-core design that is optimized for SNN algorithms and fabricated on 14nm process technology. The Loihi chip contains 130,000 neurons, each of which can send pulses to thousands of other neurons. Developers can access and manipulate chip resources with software using an API for the learning engine that is embedded in each of the 128 cores. Because the Loihi chip is optimized for SNNs, it performs highly accelerated learning in unstructured environments for systems that require autonomous operation and continuous learning with high performance and extremely low power consumption because the neurons operate independently and not by means of a system clock.

Speeding up Electronic Neuromorphic Chips with Photonics
People have been trying to build an optical computer for many years. An optical computer uses optical chips that rely on photonics instead of electronics to store and manipulate binary data. Photonic hardware elements manipulate photons to process information, rather than using electronics to process information by manipulating electrons. In recent years, advances have been made in photonics to do things like improving the I/O between cloud servers in data centers via fiber optics. People are also making advances in photonics for quantum computers, using the polarization of photons as the basis for storing and processing qubits. Photonic chips are really great for quickly processing massive amounts of data in parallel using very little energy. This is because there is very little energy loss compared to the ohmic heating loss found in electronic chips, which arises from electron charge carriers bouncing off of atoms as they drift along. Photons also move much faster than the electrons in transistors, which only slowly drift from the negative to the positive regions of the transistor. Photonic circuits can also run photons of different colors at the same time through the same hardware in a multithreaded manner. In fact, some researchers are looking to simultaneously run photons of 64 different colors through the same hardware all at the same time! Thus photonic chips are great for performing linear algebra operations on the huge matrices found in complex Deep Learning applications. For example, below is an interview with Nicholas Harris, the CEO of Lightmatter, describing the company's new Envise photonic chip, which can be used to accelerate the linear algebra processing of arrays in Deep Learning applications. Envise will become the very first commercially available photonic chip to do such processing.
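The workload such a photonic accelerator targets is easy to show. The sketch below uses plain NumPy (not Lightmatter's actual API, which I have not used) to perform the kind of large matrix-vector product that sits at the heart of every Deep Learning layer; a photonic chip performs this multiply-accumulate step optically, in effect in a single pass of light through the chip:

```python
import numpy as np

# The core Deep Learning operation a photonic chip accelerates: a large
# matrix-vector multiply-accumulate for one dense layer of a neural network.
rng = np.random.default_rng(42)
layer_weights = rng.standard_normal((4096, 4096))   # one dense layer's weight matrix
activations = rng.standard_normal(4096)             # the incoming activations
outputs = layer_weights @ activations                # the multiply-accumulate step
print(outputs.shape)                                 # (4096,)
```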

Beating Moore's Law: This photonic computer is 10X faster than NVIDIA GPUs using 90% less energy
https://www.youtube.com/watch?v=t1R7ElXEyag

Here is the company's website:

Lightmatter
https://lightmatter.co/

Figure 25 – A circuit element on a photonic chip manipulates photons instead of electrons.

Since neuromorphic chips also need to process the huge arrays of spiking signals arriving at the dendrites of an SNN neuron, it only makes sense to include the advantages of photonics in the design of neuromorphic chips at some time in the future. Below is an excellent YouTube video explaining what photonic neuromorphic AI computing would look like:

Photonic Neuromorphic Computing: The Future of AI?
https://www.youtube.com/watch?v=hBFLeQlG2og

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston
