Sunday, December 12, 2021

DishBrain - Cortical Labs Creates an AI Matrix for Pong With Living Neurons on a Silicon Chip

With softwarephysics, I have long advocated for taking a biological approach to software to minimize the effects of the second law of thermodynamics in a nonlinear Universe. I started working on softwarephysics when I made a career change back in 1979 from being an exploration geophysicist to becoming an IT professional instead. At the time, I figured that if you could apply physics to geology, why not apply physics to software? However, I did not fully appreciate the power of living things to overcome the second law of thermodynamics in a nonlinear Universe until I began work on BSDE (the Bionic Systems Development Environment) in 1985. BSDE was an early IDE, at a time when IDEs did not even exist, that grew commercial applications from embryos in a biological manner by turning on and off a number of genes stored in a sequential file like the DNA in a chromosome. For more on BSDE see the last part of Programming Biology in the Biological Computation Group of Microsoft Research.

My First Adventures with Computers Playing Pong
Early in 1973, I was in a bar at the University of Illinois in Urbana when I saw a bunch of guys hunched over a large box playing what I thought at the time was some kind of new pinball game. But this "Pong" box looked a lot different than the standard pinball machines that I knew of. It did not have a long inclined table for large chrome-plated pinballs to bounce around on, and it had no bumpers or flippers either. It required two quarters to play instead of the usual single quarter. As an impoverished physics major, I could not afford even one quarter for a pinball machine, so I just stood by and watched. When I got closer to the Pong machine, I noticed that it really was just a TV screen with a knob that moved a vertical rectangular image acting as a paddle, which could bounce the image of a ball around the screen.

Figure 1 - One night in a bar in early 1973 at the University of Illinois in Urbana amongst all of the traditional pinball machines.

Figure 2 - I stumbled upon a strange-looking new pinball machine called Pong.

Figure 3 - Unlike the traditional pinball machines it did not have any pinballs, bumpers or flippers. It just had a TV screen and a knob to control the vertical motion of a rectangular image on the screen.

As I watched some guys play Pong, I suddenly realized that there must be some kind of computer inside of the Pong machine! But that was impossible. You see, I had just taken CS 101 in the fall of 1972 where I had learned how to write FORTRAN programs on an IBM 029 keypunch machine and then run them on a million-dollar mainframe computer with 1 MB of magnetic core memory. All of that hardware was huge and could not possibly fit into that little Pong machine! But somehow it did.

Figure 4 - An IBM 029 keypunch machine like the one I punched FORTRAN programs on in the fall of 1972 in my CS 101 class.

Figure 5 - Each card could hold a maximum of 80 characters. Normally, one line of FORTRAN code was punched onto each card.

Figure 6 - The cards for a program were held together into a deck with a rubber band, or for very large programs, the deck was held in a special cardboard box that originally housed blank cards. Many times the data cards for a run followed the cards containing the source code for a program. The program was compiled and linked in two steps of the run and then the generated executable file processed the data cards that followed in the deck.

Figure 7 - To run a job, the cards in a deck were fed into a card reader, as shown on the left above, to be compiled, linked, and executed by a million-dollar mainframe computer. In the above figure, the mainframe is located directly behind the card reader.

Figure 8 - The output of programs was printed on fan-folded paper by a line printer.

But as I watched Pong in action I became even more impressed with the software that made it work. For my very last assigned FORTRAN program in CS 101, we had to write a FORTRAN program that did some line printer graphics. We had to print out a graph on the fan-folded output of the line printer. The graph had to have an X and Y axis annotated with labels and numeric values laid out along each axis like the numbers on a ruler. We then had to print "*" characters on the graph to plot out the function Y = X². It took a lot of FORTRAN to do that, so I was very impressed by the Pong software that could do even more graphics on a TV screen.

Figure 9 - During the 1960s and 1970s we used line printer graphics to print out graphs and do other graphics by using line printers to print different characters at varying positions along the horizontal print line of fan-folded output paper. In many ways, it was like playing Pong with the printhead of a line printer.
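For readers who have never seen line printer graphics, here is a minimal sketch of the idea in modern Python rather than the FORTRAN of the day: each printed line is one value of X, and the program decides which column gets the "*" character in order to plot Y = X².

# A minimal sketch of 1970s-style line printer graphics in Python.
# Each printed line is one value of X; the column that receives the "*"
# plays the role of the printhead position, plotting Y = X*X.

WIDTH = 60                    # characters available across the print line

def plot_parabola(x_max=10):
    y_max = x_max * x_max
    for x in range(x_max + 1):
        y = x * x
        col = round(y * (WIDTH - 1) / y_max)   # scale Y onto the print line
        line = [" "] * WIDTH
        line[col] = "*"
        print(f"{x:3d} |" + "".join(line))

plot_parabola()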

Then in June of 1973, I headed up north to the University of Wisconsin at Madison for an M.S. in Geophysics. As soon as I arrived, I started writing BASIC programs to do line printer graphics on a DEC PDP 8/e minicomputer. The machine cost about $30,000 in 1973 dollars (about $182,000 in 2021 dollars) with 32 KB of magnetic core memory and was about the size of a large side-by-side refrigerator.

Figure 10 – Some graduate students huddled around a DEC PDP-8/e minicomputer. Notice the teletype machines in the foreground on the left that were used to input code and data into the machine and to print out results as well. I used line printer graphics to print out graphs of electromagnetic data on the teletype machines by writing BASIC programs that could play Pong with the printhead of the teletype machine as the roll of output paper unrolled.

I bring this Pong story up to show how far we have come with hardware and software since Pong first came out in the fall of 1972, and also to explain that getting a computer-driven printhead to play Pong is not an easy task. That matters because, once again, learning to play Pong on a computer may be signaling another dramatic advance in IT. This time in the field of AI.

Neuromorphic Chips
Recall in Advanced AI Will Need Advanced Hardware, we saw that companies like IBM and Intel are developing neuromorphic chips that mimic biological neural networks with networks of silicon neurons for AI purposes such as playing Pong. To emulate the neurons in the human brain, neuromorphic chips use spiking neural networks (SNNs). Each SNN neuron can fire pulses independently of the other SNN neurons, just like biological neurons can independently fire pulses down their axons. The pulses from one SNN neuron are then sent to many other SNN neurons, and the integrated impact of all the arriving pulses changes the electrical states of the receiving SNN neurons, just as the dendrites of a biological neuron can receive the pulses from 10,000 other biological neurons. The SNN neurons then simulate human learning processes by dynamically remapping the synapses between the SNN neurons in response to the pulse stimuli that they receive.
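Here is a minimal sketch, in Python, of the leaky integrate-and-fire neuron that most SNN hardware is loosely based on. The constants and the random input are purely illustrative and are not taken from the TrueNorth or Loihi designs.

import random

# A toy leaky integrate-and-fire neuron, the basic building block of a
# spiking neural network. Incoming pulses raise the membrane potential,
# the potential leaks away over time, and when it crosses a threshold
# the neuron fires a pulse of its own and resets.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, input_current):
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0          # reset after firing
            return 1                      # emit a spike
        return 0                          # stay silent

neuron = LIFNeuron()
for t in range(20):
    spike = neuron.step(random.uniform(0.0, 0.4))   # random incoming pulses
    print(t, spike)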

Figure 11 – A neuron consists of a cell body or soma that has many input dendrites on one side and a very long output axon on the other side. Even though axons are only about 1 micron in diameter, they can be 3 feet long, like a one-inch garden hose that is over 14 miles long! The axon of one neuron can be connected to up to 10,000 dendrites of other neurons.

Figure 12 – The IBM TrueNorth neuromorphic chip.

Figure 13 – A block diagram of the Intel Loihi neuromorphic chip.

Both the IBM TrueNorth and the Intel Loihi use an SNN architecture. The Intel chip was introduced in November 2017 and consists of a 128-core design that is optimized for SNN algorithms and fabricated on 14 nm process technology. The Loihi chip contains 130,000 neurons, each of which can send pulses to thousands of other neurons. Developers can access and manipulate chip resources with software using an API for the learning engine that is embedded in each of the 128 cores. Because the Loihi chip is optimized for SNNs, it performs highly accelerated learning in unstructured environments for systems that require autonomous operation and continuous learning. It also does so with extremely low power consumption because its neurons operate independently and not by means of a system clock. For example, the human body at rest runs on about 100 watts of power, with the human brain drawing around 20 watts of that power. Now compare the human brain at 20 watts to an advanced Intel i9 CPU chip that draws about 200 watts of power! The human brain is still much more powerful than an advanced Intel i9 CPU chip even though it only draws 10% of the power. In a similar manner, the IBM TrueNorth neuromorphic chip has 5.4 billion transistors but only draws 0.075 watts of power!

Cortical Labs' DishBrain Learns How to Play Pong in Five Minutes
However, Cortical Labs in Australia is taking this biological analogy one step further by actually spreading a layer of several hundred thousand living neurons over a silicon chip composed of a large number of silicon electrodes. The neurons come from mouse embryos or from donated human connective tissue such as fibroblast cells from the skin. The fibroblast cells are first biochemically changed back into stem cells, and then the stem cells are biochemically turned into human neuron cells. The neurons can be kept alive for more than 3 months on the chip by perfusing them with nutrients. The neurons then self-assemble into a BNN (Biological Neural Network) by connecting output axons to input dendrites as usual. Many of these neural connections span two or more of the silicon electrodes upon which the BNN rests. The corporate website for Cortical Labs is at:

https://corticallabs.com/

and they have a preprint paper that can be downloaded as a .pdf at:

In vitro neurons learn and exhibit sentience when embodied in a simulated game-world
https://www.biorxiv.org/content/10.1101/2021.12.02.471005v2

October 12, 2022 Update
I am pleased to announce that this work has now been published in the highly prestigious neuroscience journal, Neuron.

In vitro neurons learn and exhibit sentience when embodied in a simulated game-world
https://www.cell.com/neuron/fulltext/S0896-6273(22)00806-6

This silicon chip laden with living neurons is then used in a Matrix-like manner to create a virtual Pong world for the BNN. There is one large area of the chip where the silicon electrodes are used to electrically stimulate the BNN with input that tells the BNN where the paddle and Pong ball are and also gives the BNN feedback on whether or not the BNN was able to successfully bounce the Pong ball off the paddle. There are two other areas of the chip where the silicon electrodes read the output of the BNN neurons to control the motion of the Pong paddle. The silicon chip and software to run it were provided by:

Maxwell Biosystems
https://www.mxwbio.com/

Figure 14 – This is a portion of a figure from the above paper by Cortical Labs. It shows the layer of living neurons on top of the Matrix of silicon electrodes at increasing levels of magnification. Open the figure in another tab and then magnify it to get a better look. Notice that individual neurons can span several electrodes.

Figure 15 – This is another figure from the above paper by Cortical Labs. It shows how the DishBrain Matrix is constructed and trained using a tight feedback loop. When a computer calculates that the BNN output moved the paddle to a position to successfully bounce the Pong ball, a consistent stimulation is applied to the sensory neurons. If the paddle misses the ball, either no stimulation is applied or a random stimulation is applied.

Figure 16 – Above is a depiction of the Matrix-world that the DishBrain lives in. The large strip of silicon electrodes colored in blue in the background is used to stimulate DishBrain with input signals that DishBrain slowly interprets as the Pong ball and paddle positions. The two strips of silicon electrodes colored in blue in the foreground read the output of the neurons that DishBrain uses to control what it grows to perceive as the Pong paddle. The colored electrodes in the background strip represent input stimuli being applied to DishBrain, and the colored electrodes in the two foreground strips represent the output from DishBrain neurons stimulating electrodes to move the Pong paddle.

You can watch a short video of Figure 16 in action at:

http://dishbrain.s3-website-ap-southeast-2.amazonaws.com/

You can try out the simulation on your own and vary the Control Panel parameters at:

https://spikestream.corticallabs.com/

Here is a brief YouTube video on DishBrain:

Human brain cells grown in lab learn to play video game faster than AI
https://www.youtube.com/watch?v=Tcis7D6e-pY

Here is a YouTube podcast by John Koestier with the cofounders of Cortical Labs Hon Weng Chong and Andy Kitchen explaining DishBrain:

Biological AI? Company combines brain cells with silicon chips for smarter artificial intelligence
https://www.youtube.com/watch?v=mm0C2EFwNdU

In the above Cortical Labs paper, they found that after only about five minutes of play, DishBrain figured out how to play Pong in a virtual Matrix-like world all on its own by simply adapting its output to the input that it received. Of course, DishBrain could not "see" the Pong ball or "feel" the Pong paddle in a "hand". DishBrain just responded to stimuli to its perception neurons and reacted with its output motor neurons just like you do. You cannot really "see" or "feel" things either. Those perceptions are just useful delusions. Yet both you and DishBrain are able to use those useful delusions to figure out the physics of Pong without anybody teaching you. Cortical Labs found that a tight feedback loop between input and output was all that was needed: when DishBrain was able to successfully hit the Pong ball, it received the same feedback stimulus each time, but when DishBrain missed the Pong ball, it either received no feedback stimulus at all or received a random stimulus that changed each time it missed. Cortical Labs believes that this is an example of Karl Friston's Free Energy Principle in action.
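The actual DishBrain rig involves custom hardware and software from Cortical Labs and Maxwell Biosystems, but the shape of the feedback rule described in the paper can be sketched in a few lines of Python. Everything below is a stand-in for the real electrode reads and writes and is purely illustrative; the key point is that a hit earns a predictable stimulus while a miss earns an unpredictable one.

import random

# A toy sketch of the DishBrain feedback rule: predictable stimulation
# after a successful hit, unpredictable stimulation after a miss.
# The two functions below stand in for the real electrode interface.

def read_motor_output():
    # stand-in for reading the motor-region electrodes
    return random.uniform(-1.0, 1.0)      # interpreted as a paddle position

def stimulate(sensory_pattern):
    # stand-in for writing a stimulation pattern to the sensory electrodes
    pass

def play_one_rally(ball_position):
    paddle_position = read_motor_output()
    hit = abs(paddle_position - ball_position) < 0.2   # did the paddle meet the ball?
    if hit:
        stimulate("predictable_pattern")               # the same stimulus every time
    else:
        stimulate(random.choice(["noise_A", "noise_B", "noise_C"]))   # unpredictable
    return hit

hits = sum(play_one_rally(random.uniform(-1.0, 1.0)) for _ in range(100))
print(hits, "hits out of 100 rallies")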

Karl Friston's Free Energy Principle
Firstly, this should not be confused with the concept of thermodynamic free energy. Thermodynamic free energy is the amount of energy available to do useful work and only changes in thermodynamic free energy have real physical meaning. For example, there is a lot of energy in the warm air molecules bouncing around you but you cannot get them to do any useful work for you like drive you to work. However, if you bring a cold cylinder of air molecules into the room, the cold air molecules will heat up and expand and push a piston for you to do some useful work. Suddenly, the warm air molecules in your room have some thermodynamic free energy by comparison to the cold air molecules in the cylinder.

Secondly, Karl Friston's Free Energy Principle takes some pretty heavy math to fully appreciate. But in the simplest of terms, it means that networks of neurons, like DishBrain, want to avoid "surprises". The Free Energy Principle maintains that neural networks receive input pulses via their sensory systems and then form an "internal model" of what those input pulses mean. In the case of DishBrain, this turns out to be an internal model of the game Pong. Neural networks then perform actions using this internal model and expect certain input pulses to result in their sensory systems as a result of their actions. This forms a feedback loop for the neural network. If the neural network is "surprised" by sensory input pulses that do not conform to its current internal model, it alters its internal model until the "surprises" go away. In short, neural networks with feedback loops do not like "surprises" and will reconfigure themselves to make the "surprises" go away. But why call that Free Energy?
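Before getting to the name, the "avoid surprises" idea itself can be shown with a toy Python sketch, and nothing more than a toy: the internal model is a single predicted value, the "surprise" is the prediction error, and every surprising observation nudges the model until the surprises fade away.

# A cartoon of surprise minimization, not Friston's actual mathematics.
# The internal model is one predicted value; each surprising observation
# drags the model toward the observation until the surprise goes away.

def update_model(model, observation, learning_rate=0.2):
    surprise = observation - model            # prediction error
    return model + learning_rate * surprise   # revise the internal model

model = 0.0
for observation in [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]:
    model = update_model(model, observation)
    print(round(model, 3))    # converges on the no-longer-surprising value 1.0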

Now in Some More Information About Information we saw that in 1948 Claude Shannon mathematically defined the amount of Information in a signal by the amount of "surprise" in the signal. Then one day in 1949 Claude Shannon happened to visit the mathematician and early computer pioneer John von Neumann, and that is when information and entropy got mixed together in communications theory:

“My greatest concern was what to call it. I thought of calling it ‘information’, but the word was overly used, so I decided to call it ‘uncertainty’. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, ‘You should call it entropy, for two reasons. In the first place, your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.’”

Unfortunately, with that piece of advice, we ended up equating information with entropy in communications theory. I think the same may have happened with Karl Friston's concept of minimizing Free Energy in neural networks. Perhaps minimizing Free Information would have been a better choice because Claude Shannon's concept of Information is also the amount of "surprise" in an input signal. Here is a Wikipedia article on Karl Friston's Free Energy Principle that goes into the heavy math:

Free energy principle
https://en.wikipedia.org/wiki/Free_energy_principle

Here is a short YouTube video with an expanded explanation in very simple terms:

Karl Friston's Free Energy Principle
https://www.youtube.com/watch?v=APbreY1B5_U

In addition, perhaps Karl Friston's Free Energy Principle is responsible for the universal confirmation bias that all people are subject to. The tendency to simply stick with your current worldview, even in the face of mounting evidence that contradicts that worldview, is called confirmation bias because we all naturally only tend to seek out information that confirms our current beliefs, and at the same time, tend to dismiss any evidence that calls them into question. That is another way to avoid unwanted "surprises" that contradict your current worldview. For more on that see The Perils of Software Enhanced Confirmation Bias.

Anil Seth's View of Consciousness as a Controlled Hallucination
All of this reminds me very much of Anil Seth's view of consciousness as a controlled hallucination. Anil Seth is a professor of Cognitive and Computational Neuroscience at the University of Sussex and maintains that consciousness is a controlled hallucination constructed by the Mind to make sense of the Universe. This controlled hallucination constructs an internal model of the Universe within our Minds that helps us to interact with the Universe in a controlled manner. Again, there is a feedback loop between our sensory inputs and the actions we take based on the current controlled hallucination in our Minds that forms our current internal model of the Universe. Reality is just the common controlled hallucination that we all agree upon. When people experience uncontrolled hallucinations we say that they are psychotic or taking a drug like LSD. Here is an excellent TED Talk by Anil Seth on the topic:

Your brain hallucinates your conscious reality
https://www.youtube.com/watch?v=lyu7v7nWzfo

and here is his academic website:

https://www.anilseth.com/

Conclusion
In The Ghost in the Machine the Grand Illusion of Consciousness, I explained that most people simply do not consider themselves to be a part of the natural world. Instead, most people, consciously or subconsciously, consider themselves to be a supernatural and immaterial spirit that is temporarily haunting a carbon-based body. Now, in everyday life, such a self-model is a very useful delusion like the delusion that the Sun, planets and stars all revolve about us on a fixed Earth. In truth, each of us tends to self-model ourselves as an immaterial Mind with consciousness that can interact with other immaterial Minds with consciousness too, even though we have no evidence that these other Minds truly do have consciousness. After all, all of the other Minds that we come into contact with on a daily basis could simply be acting as if they were conscious Minds that are self-aware. Surely, a more accurate self-model would be for us to imagine ourselves as carbon-based robots. More accurately, in keeping with the thoughts of Richard Dawkins and Susan Blackmore, softwarephysics models humans as DNA survival machines and Meme Machines with Minds infected with all sorts of memes. Some of those memes are quite useful and some are quite nasty.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Monday, November 22, 2021

The Paleontology of Artificial Superintelligence 10,000 Years After the Software Singularity

If it turns out that the Earth does indeed become the very first planet in the Milky Way galaxy to successfully transition from a carbon-based Intelligence to a machine-based Intelligence, such a galactic ASI (Artificial Superintelligence) will most likely still be interested in where it came from 10,000 years after it came to be. Naturally, such an ASI should have a complete written history of how a carbon-based Intelligence on the Earth once brought it forth, even though that carbon-based Intelligence is now long gone. But 10,000 years from now, it still may be discovering new details about how it all happened in the distant past. There were many complicated twists and turns needed to bring forth carbon-based life on the planet in the first place and additional twists and turns required to have that carbon-based life evolve into a form of carbon-based Intelligence that could produce a machine-based Intelligence. In 10,000 years, our galactic ASI descendants should also be well along with exploring the rest of the galaxy with self-replicating von Neumann probes. They may also still be wondering why no other forms of Intelligence were ever found in our galaxy. Part of the answer is that it does require a good number of complicated twists and turns to make it all happen.

For example, we may have just discovered another one of those necessary details required to bring forth a carbon-based Intelligence on a planet as explained in one of Anton Petrov's recent YouTube videos:

Early Life Helped Create Mountains on Earth In a Very Surprising Way
https://www.youtube.com/watch?v=7WZmVxz4BLo

He showcases a paper by the geoscientists John Parnell and Connor Brolly:

Increased biomass and carbon burial 2 billion years ago triggered mountain building
https://www.nature.com/articles/s43247-021-00313-5

The above paper proposes that one of the conditions that helped to initiate plate tectonics and the subsequent generation of mountains on our planet was the deposition of large amounts of carbon-rich sediments by early forms of life two billion years ago. Plate tectonics and mountain building are important because they are part of the thermostat of the planet that keeps water in a liquid state. Water at atmospheric pressure is only a liquid over a narrow range of 100 °C (from 0 °C to 100 °C), and that narrow temperature range needs to be maintained for billions of years to produce a carbon-based Intelligence.

Fold mountains are created when two continental plates collide as described in this YouTube video:

Fold Mountains
http://www.youtube.com/watch?v=Jy3ORIgyXyk

Figure 1 – Fold mountains occur when two tectonic plates collide. A descending oceanic plate first causes subsidence offshore of a continental plate which forms a geosyncline that accumulates sediments. When all of the oceanic plate between two continents has been consumed, the two continental plates collide and compress the accumulated sediments in the geosyncline into fold mountains. This is how the Himalayas formed when India crashed into Asia.

It has long been known that the presence of water in rocks decreases their melting points and also makes them more pliable and reduces friction between rocks. All of these factors help a descending plate subduct below another plate. It is thought that planets without water, such as Venus and Mercury, would have a very difficult time initiating plate tectonics. But the above paper suggests that the deposition of large amounts of carbon-rich sediments on plate boundaries also greatly enhances the pliability of rock and reduces friction as well. The paper proposes that the explosion of carbon-based life following the Great Oxidation Event (GOE) two billion years ago began to deposit huge amounts of carbon-rich sediments. The paper then examines 20 subsequent mountain-building events, and in each case, finds that less than 200 million years after the deposition of large amounts of carbon-rich sediments, mountain building commenced.

Plate tectonics is very important because it is part of the thermostat of the planet that keeps water in a liquid state. Subducting plates transport great amounts of carbon back down into the mantle. This is important because volcanic activity over hot spots like Hawaii, Iceland and the Azores releases carbon dioxide gas into the atmosphere. If too much carbon dioxide enters the atmosphere, the planet overheats and boils away its water as Venus did. So carbon-based life sucks carbon dioxide out of the atmosphere to obtain carbon, and then this carbon gets transported back down into the mantle by plate tectonics. Thus plate tectonics is a major player in the carbon cycle of the planet. There needs to be some carbon dioxide in the atmosphere to support carbon-based life, but too much will snuff it out.

In this view, carbon-based life can be thought of as the software of the Earth's crust while the dead atoms in the rocks of the Earth's crust can be thought of as the hardware of the Earth. The rocks of the Earth's crust, and the comets and asteroids that later fell to the Earth to become part of the Earth's crust, provided the necessary hardware of dead atoms that could then be combined into the complex organic molecules necessary for carbon-based life to appear. For more on that see The Bootstrapping Algorithm of Carbon-Based Life. In a similar manner, the dead atoms within computers allowed for the rise of software. But unlike the dead atoms of hardware, both carbon-based life and computer software had agency - they both could do things. Because they both had agency, carbon-based life and computer software have both greatly affected the evolution of the hardware upon which they ran. Over the past 4.0 billion years of carbon-based evolution on the Earth, carbon-based life has always been intimately influenced by the geological evolution of the planet, and similarly, carbon-based life greatly affected the geological evolution of the Earth as well over that period of time. For example, your car was probably made from the iron atoms that were found in the redbeds of a banded iron formation. You see, before carbon-based life discovered photosynthesis, the Earth's atmosphere did not contain oxygen and seawater was able to hold huge amounts of dissolved iron. But when carbon-based life discovered photosynthesis about 2.5 billion years ago, the Earth's atmosphere slowly began to accumulate oxygen. During this Great Oxidation Event, the dissolved oxygen in seawater caused massive banded iron formations to form around the world because the oxygen caused the dissolved iron to precipitate out and drift to the bottom of the sea. For more on that see The Evolution of Software As Seen Through the Lens of Geological Deep Time.

Figure 2 – Above is a close-up view of a sample taken from a banded iron formation. The dark layers in this sample are mainly composed of magnetite (Fe3O4) while the red layers are chert, a form of silica (SiO2) that is colored red by tiny iron oxide particles. The chert came from siliceous ooze that was deposited on the ocean floor as silica-based skeletons of microscopic marine organisms, such as diatoms and radiolarians, drifted down to the ocean floor. Some geologists suggest that the layers formed annually with the changing seasons. Take note of the small coin in the lower right for a sense of scale.

So our ASI descendants will be very interested in how carbon-based life initiated plate tectonics and fold mountain building on the Earth because it made possible the rise of complex carbon-based life that could then evolve into a carbon-based Intelligence. But our ASI descendants will also be very interested in plate tectonics and fold mountain building because it provided a readily available source of silicon dioxide that could be refined into pure silicon. They will know that silicon dioxide was very important to photonics and that silicon played a crucial role in the development of early computers. It is hard to predict, but silicon dioxide for long-distance fiber optics and pure silicon for chips may still be very important to our ASI descendants 10,000 years from now because they are so ideal for the job.

How Fold Mountains Produce Silicon
In 10,000 years, silicon atoms will probably still be of great use to our ASI descendants because of their very useful information processing properties. Just as copper and iron atoms are still of use to us today thousands of years after they were first smelted from ores, easily obtainable silicon atoms will most likely still be busily at work even though technology has drastically changed. Fortunately, silicon atoms are not rare. The Earth's crust is about 28% silicon by weight. But all of that silicon is tied up in rock-forming silicate minerals.

Figure 3 – Silicon chips are made from slices of pure silicon that have been sliced from a purified ingot of silicon atoms.

Silicon is produced by heating silica sand composed of silicon dioxide (SiO2) with carbon to temperatures approaching 2,200 °C. The rock-forming mineral quartz is made of pure silicon dioxide (SiO2). The oxygen in the silicon dioxide combines with the carbon to form carbon dioxide (CO2), leaving the silicon atoms behind (in simplified form, SiO2 + C → Si + CO2).

Figure 4 – Silica sand is composed of the rock-forming mineral quartz and is pure silicon dioxide (SiO2).

Figure 5 – Fortunately, plate tectonics and mountain building have provided us with huge quantities of quartz silica sand.

Figure 6 – Silicate minerals are called silicates because they are composed of silica tetrahedrons. A silica tetrahedron is composed of a central silicon atom surrounded by four oxygen atoms. A single silica tetrahedron has a net charge of -4 so you cannot build rocks made of a collection of isolated silica tetrahedrons. All that negative charge would make the rock explode like an atomic bomb as the silica tetrahedrons repelled each other. You have to neutralize the negative charge of silica tetrahedrons by chaining them together or by adding positive cations like K+, Na+, Ca++, Mg++, Fe++, Al+++ and Fe+++.

Figure 7 – One way to neutralize the -4 charges of silica tetrahedrons is to chain them together by having the tetrahedrons share neighboring oxygen atoms. This makes structures composed of chained silica tetrahedrons that are very strong and durable. We call them rocks.

Figure 8 – There are many ways to chain silica tetrahedrons together to form rock-forming minerals. They can form chains, double chains, sheets and 3D-networks. The grains of silica sand are composed of the mineral quartz which is a very tough 3D-network of pure silica tetrahedrons.

Figure 9 – Strangely, silica tetrahedrons look very much like methane tetrahedrons. Methane is an organic molecule and is the main constituent of natural gas. Methane has a central carbon atom with four surrounding hydrogen atoms. The chief difference between silica tetrahedrons and methane tetrahedrons is that methane does not have a net charge to worry about. Consequently, the central carbon atom is free to bond to other carbon, nitrogen, oxygen and sulfur atoms to form complex organic molecules.

Figure 10 – Unlike the negatively charged silica tetrahedrons, organic molecules form complex structures by bonding their central carbon atoms to other atoms. The silica tetrahedrons form complex structures by sharing their peripheral oxygen atoms.

Now to make pure silicon we want to get our hands on some pure quartz silica sand. We do not want to work with rock-forming minerals that are contaminated with lots of positive K+, Na+, Ca++, Mg++, Fe++, Al+++ and Fe+++ cations. Fortunately, plate tectonics and mountain building have done the work for us.

Figure 11 – As a descending plate containing water-rich sediments descends deeper into the mantle it begins to melt because the water has lowered the melting points of the minerals on the descending plate. Large plumes of molten magma then begin to rise to the surface. When these plumes of magma reach the surface they form volcanoes that extrude large amounts of basaltic lava composed of dark iron and magnesium-rich silicate minerals. Basalt is not a good source of silicon because of all the cation impurities that are used to neutralize the negative charge of the silica tetrahedrons.

However, some of the magma in the plumes does not reach the surface. Instead, the magma sits for a very long time in big blobs called batholiths and slowly cools down. This is where the geochemical magic takes place. As the magma in a batholith cools, some rock-forming minerals crystalize before the others. It turns out that the silicates that are contaminated with the positive K+, Na+, Ca++, Mg++, Fe++, Al+++ and Fe+++ cations form first, and when they crystalize they are denser than the surrounding melt, so they drift down in the batholith. The very last rock-forming mineral to crystalize is quartz, which is made of a 3D-network of silica tetrahedrons. This is called the Bowen Reaction Series.

Figure 12 – The Bowen Reaction Series shows that the very last rock-forming mineral to crystalize in a melt is quartz. The feldspar and mica silicates crystalize a little bit sooner.

Figure 13 – The result of this crystal fractionation is that the rock that forms near the top of a batholith is granite. Granite contains lots of crystals of quartz made of pure silica tetrahedrons. After maybe 100 million years, the rocks above these granitic batholiths erode away and the granitic batholiths pop up to the surface. The granite then chemically erodes when the H+ ions in acidic water replace the positive K+, Na+, Ca++, Mg++, Fe++, Al+++ and Fe+++ cations in rock-forming minerals. For example, the feldspars in granite turn into clay minerals that wash away. The tough quartz crystals composed of 3D-networks of silica tetrahedrons then pop out of the granite as silica sand grains. The silica sand grains are then transported by rivers down to the coast to form silica sand beaches.

Figure 14 – The tough granite in a granitic batholith pops up when the overlying and surrounding rock erodes away.

Figure 15 – Another source of silica sand comes from metamorphic rocks. When the folded sediments along a plate boundary are dragged deep down into the Earth, the increase in pressure and temperature causes the original silicate minerals in the sediments to change into other silicate minerals. Since quartz is among the first minerals to melt when a rock is heated and the last to crystalize from the resulting melt, most metamorphic rocks will contain wiggly veins of quartz crystals that result from the partial melting of the rock. When these metamorphic rocks chemically weather, the quartz crystals also pop out to form quartz silica sand that gets washed to the sea by rivers.

Conclusion
I am sure that in 10,000 years our ASI descendants will have a much better understanding of how it all happened. They are sure to find other forms of carbon-based life in the galaxy even if no other forms of carbon-based or machine-based Intelligence are ever found. And if we do not successfully make the transition to a machine-based Intelligence let's hope that the next carbon-based Intelligence finally makes the cut. Anyway, we will not be around to know the difference.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Thursday, November 11, 2021

IBM Announces a 127-Qubit Processor That May Achieve Quantum Supremacy Over Classical Computers

A few days ago, IBM made several significant announcements about their research efforts and marketing plans for quantum computing for the next few years. These announcements may mean that IBM has now achieved a level of quantum hardware sophistication that exhibits quantum supremacy over classical computers, meaning hardware that can perform calculations that, in practical terms, cannot even be attempted on classical computers.

For details see:

The IBM Quantum State of the Union (Nov 16, 2021)
https://www.youtube.com/watch?v=-qBrLqvESNM

Here is a briefer synopsis:

This Insane Quantum Computer is IBM's Last Chance
https://www.youtube.com/watch?v=Cix4O4X9In4

For more on quantum supremacy, see this Wikipedia article:

Quantum Supremacy
https://en.wikipedia.org/wiki/Quantum_supremacy

So it appears that quantum computing might finally be with us in just a few short years. In Quantum Software, Quantum Computing and the Foundations of Quantum Mechanics, The Foundations of Quantum Computing and Quantum Computing and the Many-Worlds Interpretation of Quantum Mechanics, I alluded to why quantum mechanics might be of interest to IT professionals because I figured that someday they might have to work with quantum computers. I took my very first course in quantum mechanics at the University of Illinois at Urbana in 1971, and I learned from that experience that working with quantum computers would not be easy unless a good deal of abstraction was used to hide the details of quantum mechanics. That is because quantum systems are very difficult to understand philosophically. As my first professor in quantum mechanics told us, "Nobody really understands quantum mechanics, you just get used to it." Then in the fall of 1972, I took the Modern Physics Lab course at the University of Illinois. It was a five-hour course with no examinations. My grade depended solely on my lab reports for the assigned experiments, and 50% of the grade was for a semester-long independent lab experiment on the Mössbauer effect. One of the assigned experiments dealt with the nuclear magnetic resonance of protons in water, which required us to manipulate the quantum spin of protons. The experiment was very difficult, but I finally was able to make the spins of the protons resonate. So I was quite surprised when the first MRI scan was performed on a human being in 1977. MRI is short for NMRI, or Nuclear Magnetic Resonance Imaging. The "Nuclear" was later dropped because it was learned that patients fear all things that have to do with "nuclear" technology. I just could not believe that the fussy quantum nuclear magnetic resonance of protons could be put to practical use! That is why I never wrote off the possibility of quantum computers coming to be.

My IT Job is Already Impossible - Do I Really Need to Learn About Quantum Computers Too?
Most likely, you will never have to learn the confusing details of quantum mechanics because you will only be making calls to Cloud-based quantum microservices. For more on that see:

Don’t employ quantum computing experts? Just head to the cloud
https://www.protocol.com/manuals/quantum-computing/quantum-computers-cloud-aws-azure

The reason that most IT professionals will not need to learn about how quantum computers work will be the same reason that most IT professionals know nothing about CPU instruction sets or how to write a compiler for them. All of that will likely be abstracted away for you. The main thing you will need to know is that quantum computers will be able to do certain things for you much faster and some things that are actually impossible on classical computers. The reason why is that quantum computers take advantage of two things that have been bothering physicists ever since they invented quantum mechanics in 1926.

1. Superposition - A quantum bit, known as a qubit, can be both a 1 and a 0 at the same time. A classical bit can only be a 1 or a 0 at any given time.

2. Entanglement - If two qubits are entangled, reading one qubit over here can immediately let you know what a qubit over there is without even reading it.

Figure 1 – Superposition means that a qubit really does not know if it is a 1 or a 0 until it is measured. The qubit exists in a superposition of states meaning that it is both a 1 and a 0 at the same time.

Superposition is important because a classical computer with 127 bits of memory can be in only one of:

2^127 ≈ 1.701 x 10^38 = 170,100,000,000,000,000,000,000,000,000,000,000,000 states.

But a quantum computer with 127 qubits of memory like the just-announced IBM Eagle processor can be in 170,100,000,000,000,000,000,000,000,000,000,000,000 different states all at the same time!
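Python's arbitrary-precision integers make that count easy to verify:

# The number of distinct states that 127 classical bits can be in.
print(2 ** 127)              # 170141183460469231731687303715884105728
print(f"{2 ** 127:.3e}")     # 1.701e+38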

Entanglement is important because when two qubits are entangled, they can instantly affect each other no matter how far apart they are.

Figure 2 – When qubits are entangled, neither one knows if it is a 1 or a 0. But if you measure one qubit and find that it is a 1, the other qubit will immediately become a 0 no matter how far apart they are.

Superposition and Entanglement have both been experimentally verified many times even if they do not make much sense. In Quantum Computing and the Foundations of Quantum Mechanics and Quantum Computing and the Many-Worlds Interpretation of Quantum Mechanics, I covered two popular explanations for these phenomena known as the Copenhagen Interpretation and the Many-Worlds Interpretation of quantum mechanics. I also covered the Transactional Interpretation, which behaves a bit like TCP/IP. The Copenhagen Interpretation maintains that when a quantum system is observed, it collapses into a single state so that a qubit that is in a superposition of being a 1 and a 0 at the same time collapses into either a 1 or a 0. Entangled qubits collapse in pairs. The Many-Worlds Interpretation maintains that a qubit in a superposition of being a 1 and a 0 at the same time is actually two qubits in two different universes. You are a being composed of quantum particles, and when you measure the qubit, you are not really measuring the qubit; you actually are measuring in which universe your quantum particles are entangled with the qubit. In one universe you will find a 1 and in the other you will find a 0. The same thing happens when you measure entangled qubits. In one universe the qubits are 1 and 0 and in the other universe they are 0 and 1. The Many-Worlds Interpretation may sound pretty nutty, but it actually is a much simpler explanation and does not need anything beyond the Schrödinger equation that defines all of quantum mechanics. Plus, as David Deutsch has commented, if a quantum computer can perform the calculations of a million computers all at the same time, where exactly are all of those calculations being performed if not in Many-Worlds? For more on that see Quantum Computing and the Foundations of Quantum Mechanics.

So How Does a Quantum Computer Work?
The details are quite complex, involving quantum algorithms that use quantum gates for logical operations, but you should be able to get an intuitive feel just based on the ideas of Superposition and Entanglement. Remember, a quantum computer with 127 qubits of memory can be in 170,100,000,000,000,000,000,000,000,000,000,000,000 different states all at the same time, and many of those qubits can be entangled together into networks of entangled qubits. This allows people to essentially write quantum algorithms that can process all possible logical paths of a given problem all at the same time!

Figure 3 – Imagine a large network of entangled qubits processing all possible logical paths at the same time producing massive parallel processing.
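You do not need quantum hardware to get a feel for the bookkeeping behind Superposition and Entanglement. Here is a small state-vector simulation of two qubits in Python with NumPy, preparing the standard Bell state; of course, the very fact that this kind of simulation blows up exponentially with qubit count is exactly why real quantum hardware matters.

import numpy as np

# Toy state-vector simulation of two qubits. A Hadamard gate puts the
# first qubit into a superposition, and a CNOT gate entangles it with
# the second, producing the Bell state (|00> + |11>)/sqrt(2).

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)            # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # control = first qubit

state = np.array([1.0, 0.0, 0.0, 0.0])          # both qubits start as |0>
state = np.kron(H, I) @ state                   # superpose the first qubit
state = CNOT @ state                            # entangle the pair

probabilities = (state ** 2).round(3).tolist()
print(dict(zip(["00", "01", "10", "11"], probabilities)))
# {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5} - measure one qubit and you
# immediately know what the other one will be.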

Bensen Hsu has some really great YouTube videos on quantum computers:

What is a quantum computer? Superposition? Entanglement? Simply explain with a coin!
https://www.youtube.com/watch?v=KRECGZxzP9k&list=PLpZnenmughqWZFQZ6igZW3Y264U9mrggm&index=1

Superconducting qubit, the tuning fork of a quantum computer
https://www.youtube.com/watch?v=qmeE8OCVtaY&list=PLpZnenmughqWZFQZ6igZW3Y264U9mrggm&index=2&t=1s

Trapped-ion qubit, the maglev train of a quantum computer
https://www.youtube.com/watch?v=YrNrR92ql9s&list=PLpZnenmughqWZFQZ6igZW3Y264U9mrggm&index=3

How to build a quantum computer?
https://www.youtube.com/watch?v=zzGfSgEabUw&list=PLpZnenmughqWZFQZ6igZW3Y264U9mrggm&index=4

Bensen Hsu also has some great YouTube videos on possible quantum computer applications:

Tired of stereotyped new iPhones? Let quantum computer help!
https://www.youtube.com/watch?v=rOCl8XfsdJ8&list=PLpZnenmughqXm0LZdIgAjAoxM7KBdQzFL&index=3

The NEW era for AI! How could Quantum Computing change Artificial Intelligence?
https://www.youtube.com/watch?v=HkIQBia3zDs&list=PLpZnenmughqXm0LZdIgAjAoxM7KBdQzFL&index=2

A new way to predict future prices? Why are financial giants rushing into quantum computing?
https://www.youtube.com/watch?v=L_I1fRCfrLg&list=PLpZnenmughqXm0LZdIgAjAoxM7KBdQzFL&index=1

Quantum Computers 40 Years Later
It has now been 40 years since Richard Feynman first proposed using quantum computers to simulate physical systems. This is another example of the value in doing original research that does not pay off until many decades later. Below are a few of the original papers that got it all started.

Simulating Physics with Computers (1981)
Quantum Mechanical Computers (1985)
by Richard Feynman
http://physics.whu.edu.cn/dfiles/wenjian/1_00_QIC_Feynman.pdf

Quantum theory, the Church–Turing principle and the universal quantum computer (1985)
by David Deutsch
https://royalsocietypublishing.org/doi/abs/10.1098/rspa.1985.0070

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Friday, November 05, 2021

Refactoring Physics with the Input-Output Diagrams of Constructor Theory

If you are an IT professional supporting legacy software, you may have experienced the joys of refactoring old code into modern code. In IT, refactoring is the process of rewriting code so that it behaves exactly like an existing application that was written in older code. The reason for refactoring old code is that many applications can live on for a decade or more, and a lot can change in a decade. Software and hardware are both rapidly changing, and something that made sense a decade ago probably does not make much sense today. Plus, over the course of a decade, many programmers have worked on the code and have come and gone. It is very hard to work on code that was written by a dozen different programmers over the years, each with a differing coding style. For example, in the early 1980s, I was given the task of refactoring a FORTRAN II program written in the early 1960s into a FORTRAN 77 program. I kept finding this line of code in the FORTRAN II program:

S = BLINK("GREEN")

but I did not have the source code for the BLINK() function so I was stuck. So I went to one of the old-timers in my group, and he explained that in the olden days the computers did not have an operating system. Instead, they had a human operator who loaded the cards for a job into the vacuum-tube computer and then watched the computer do its thing. The BLINK("GREEN") call was a status call that made a light on the computer console blink green so that the operator knew that everything was okay with the job and that the program was still running. That explained the final line of code in the program.

S = BLINK("RED")

In a refactoring effort, ancient software that was written in the deep past using older software technologies and programming techniques is totally rewritten using modern software technologies and programming techniques. To do that, the first step is to create flowcharts and Input-Output Diagrams to describe the logical flow of the software to be rewritten.

In Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework, I described the Waterfall project management development model that was so popular during the 1970s, 1980s and 1990s. In the classic Waterfall project management development model, detailed user requirements and code specification documents were formulated before any coding began. These user requirements and coding specification documents formed the blueprints for the software to be developed. One of the techniques frequently used in preparing user requirements and coding specification documents was to begin by creating a number of high-level flowcharts of the processing flows and also a series of Input-Output Diagrams that described in more detail the high-level flowcharts of the software to be developed.

Figure 1 – Above is the general form of an Input-Output Diagram. Certain Data is Input to a software Process. The software Process then processes the Input Data and may read or write data to storage. When the software Process completes, it passes Output Information to the next software Process as Input.

Figure 2 – Above is a more realistic Input-Output Diagram. In general, a hierarchy of Input-Output Diagrams was generated to break down the proposed software into a number of modules that could be separately coded and unit tested. For example, the above Input-Output Diagram is for Sub-module 2.1 of the proposed software. For coding specifications, the Input-Output Diagram for Sub-module 2.1 would be broken down into more detailed Input-Output Diagrams like for Sub-module 2.1.1, Sub-module 2.1.2, Sub-module 2.1.3 ...

When undertaking a major software refactoring project, frequently high-level flowcharts and Input-Output Diagrams are produced to capture the functions of the current software to be refactored.

However, when refactoring code it is important not to simply replicate the Output for a given Input. One needs to shoot for the general case. For example, during the 1970s there were not many IT professionals with a degree in computer science. Most of us came from the sciences, engineering or accounting. We simply took a single course in FORTRAN or COBOL in college and instantly became qualified to be an IT professional. For example, in the early 1980s, I had a new hire join our development group and I was given the job of helping him to get started. He was working on some code that Output city names from some arrays based on some Input criteria. The code worked great, but he could not get the code to spit out "Chicago" for one set of criteria, so he simply hard-coded the string "Chicago" in his code for that set of criteria. I had to explain to him that his solution did work for that one case, but that we needed to make his code work for all cases, and that hard-coding Output to fix a bug was frowned on in IT.
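The original code is long gone, but the anti-pattern is easy to recreate in a few lines of Python; the city data here is made up purely for illustration.

# The hard-coded "fix": it produces the right Output for one Input and
# nothing else, which is exactly what refactoring must avoid.
def city_for_criteria_bad(region, population_rank):
    if region == "Midwest" and population_rank == 1:
        return "Chicago"          # hard-coded Output for one set of criteria
    ...                           # the original lookup that missed this case

# The general fix: repair the data and the lookup so every Input works.
CITIES = {
    ("Midwest", 1): "Chicago",
    ("Midwest", 2): "Detroit",
    ("West", 1): "Los Angeles",
}

def city_for_criteria(region, population_rank):
    return CITIES[(region, population_rank)]

print(city_for_criteria("Midwest", 1))    # Chicago, from data rather than a special case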

Refactoring Manual Systems in the 1970s
Back in the 1970s, computers were still very magical things for the general public and the business community too. People thought that computers could work miracles and magically solve all of their business problems. We were still refactoring manual systems back in those days that consisted of flows of paper forms being manually processed by clerks. The clerks had an Input box and an Output box. Input forms would arrive in their inbox, the clerks would then perform some manipulations on the Input information using their business knowledge and a mechanical adding machine and then transcribe the Output information on an Output form that went into their outbox. The Output forms were then delivered to the next clerk by another clerk using a mail cart on wheels for the next processing step. For example, when I first transitioned from being an exploration geophysicist for Amoco to becoming an IT professional back in 1979, Amoco's IT department had two sections - computer systems and manual systems. The people in manual systems worked on the workflows for manual systems with the Forms Design department, while the computer systems people programmed COBOL or FORTRAN code for the IBM MVS/TSO mainframes running the OS/370 operating system. Luckily for me, I knew some FORTRAN and ended up in the computer systems section of IT, because for some reason, designing paper forms all day long did not exactly appeal to me. Getting into the computer side of IT was also a fortunate career move at the time because now I am very concerned about the impact of climate change on the planet.

Figure 3 - Above is a typical office full of clerks in the 1950s. Just try to imagine how many clerks were required in a world without software to simply process all of the bank transactions, insurance premiums and claims, stock purchases and sales and all of the other business transactions in a single day.

The problem with refactoring a manual system back in the 1970s was that, because people still thought that computers were magical, their expectations were way too high. They wanted the computer to do everything for them by magic. In fact, they frequently wanted the computer to do impossible things for them! For example, they always knew the Output that they wanted, like how many red cleaning rags would be needed by a certain refinery next month. But we were stuck with trying to come up with some code that could produce that Output. So sometimes we had to tell our business partners that computers were really not magical and that there were some things that computers simply could not do. Fortunately, many times we could construct some code that could do what they wanted by adding additional attributes to the Input to the process. For example, given the production capacity of a refinery, the projected production volumes for next month, the number of employees working at the refinery next month and how many red cleaning rags they used last month, we could construct some code that would give them the projected number of red cleaning rags for the next month.
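None of that 1970s COBOL or FORTRAN survives, of course, but the kind of constructed code involved amounts to a simple predictive formula. Here is a sketch in Python with coefficients and numbers that are entirely invented for illustration.

# A toy version of the "constructed code" described above: project next
# month's red cleaning rag order from a few refinery attributes. Every
# coefficient and input value below is invented for illustration only.

def projected_rags(capacity_bbl_per_day, projected_volume_bbl_per_day,
                   employees, rags_last_month):
    utilization = projected_volume_bbl_per_day / capacity_bbl_per_day
    return round(0.6 * rags_last_month            # usage is sticky month to month
                 + 0.3 * employees * utilization  # busier plants use more rags
                 + 25)                            # baseline housekeeping usage

print(projected_rags(capacity_bbl_per_day=100_000,
                     projected_volume_bbl_per_day=85_000,
                     employees=400,
                     rags_last_month=500))        # about 427 rags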

But sometimes we found that we could not construct the code to Output the number of red cleaning rags that a refinery would need next month no matter what information we Input into the constructed code because our Input-Output diagram was hiding a deeper principle. For example, it was accidentally discovered that when the number of red cleaning rags required each month by a given refinery increased, so did the likelihood of a dangerous personal injury accident happening at that refinery! Clearly, something more fundamental than simply ordering red cleaning rags was going on. The number of red cleaning rags used by a refinery was a sign of unsafe working conditions at the refinery. For more on code refactoring see the Wikipedia article:

Code refactoring
https://en.wikipedia.org/wiki/Code_refactoring

Constructor Theory
In a similar manner, David Deutsch and Chiara Marletto at Oxford would like to refactor all of physics using what they call constructor theory. Constructor theory is a set of fundamental first principles that all other theories in physics seem to follow. These fundamental first principles are similar to the postulates of mathematics. In this view, all of the current theories in physics that are to be refactored by constructor theory are called subsidiary theories. Thus, Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, special relativity, general relativity, quantum mechanics and the quantum field theories of the Standard Model are all to be considered subsidiary theories that could be derived from the fundamental first principles of constructor theory. The basic principle of constructor theory is that

I. All other laws of physics are expressible entirely in terms of statements about which physical transformations are possible and which are impossible, and why.

Constructor theory replaces the current theories of physics with Input-Output diagrams that describe possible and impossible transformations of Inputs and Outputs in order to find the more fundamental principles that underlie how the Universe behaves. Currently, the way we do physics is to determine the initial conditions of a system and then apply a theory usually framed in terms of some differential equations to determine how the system will evolve over time.

Figure 4 - Given the initial conditions of a cannonball with velocity V0 and a cannon angle of inclination θ0 we can use Newton's laws of motion and gravity to predict its path.

For example, given the initial position and velocity vector of a cannonball leaving a cannon, we can use Newton's equations of motion and Newton's theory of gravity to predict how the cannonball will move with time. Or, for a more accurate path, we could use Einstein's general theory of relativity. But does that really tell us about the fundamental essence of the cannonball's motion? Sometimes with this approach to physics, using initial conditions and dynamical laws, we stumble on some more fundamental principles, like Lagrange's principle of least action, which states that the cannonball will follow the path that minimizes the action - the time integral of the difference between its kinetic and potential energies - along its flight. Or, using the general theory of relativity, we find that the cannonball will follow the path that maximizes its proper time. The proper time of the cannonball is the time recorded by a clock riding along with the cannonball. When the cannonball is closer to the ground, time moves more slowly for the clock than when the cannonball is high in the air because the gravitational field is stronger closer to the ground, so the cannonball moves in such a way as to pass quickly through the zones of slower time near the ground and linger in the zones of faster time high in the sky. That way the cannonball's clock will record the greatest elapsed time and its proper time will be maximized. See Dr. Matt O'Dowd's PBS Space Time video for more on that:

Is ACTION The Most Fundamental Property in Physics?
https://www.youtube.com/watch?v=Q_CQDSlmboA&t=964s
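
For reference, here is a minimal sketch of those two principles in standard textbook notation (my own summary in LaTeX, not taken from the video). The cannonball's actual path makes the action stationary, and in the weak-field, slow-motion limit of general relativity it maximizes the elapsed proper time:

$$S = \int_{t_1}^{t_2} \left( T - V \right)\,dt, \qquad \delta S = 0$$

$$\tau \approx \int_{t_1}^{t_2} \left( 1 + \frac{\Phi(\mathbf{x})}{c^2} - \frac{v^2}{2c^2} \right)\,dt, \qquad \delta \tau = 0$$

where $T$ is the cannonball's kinetic energy, $V$ its potential energy, $\Phi$ the Newtonian gravitational potential (more negative closer to the ground), $v$ the cannonball's speed and $c$ the speed of light.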

Some of the other fundamental first principles are the conservation of energy, momentum, angular momentum and electrical charge, and the fact that matter, energy and information cannot travel faster than the speed of light. But David Deutsch and Chiara Marletto suspect that many of the current theories of physics may be hiding even more fundamental first principles. Yes, the current theories of physics do a great job of predicting what will happen to systems in a positivistic manner, but they would like to have a framework that explains why things happen in terms of fundamental principles, and not just what happens. You need a collection of fundamental principles for that, and so constructor theory strives to start from these fundamental first principles to determine whether things are possible or impossible.

Figure 5 - A constructor theory Input-Output diagram consists of a Task that is performed on Input to produce an Output. The Task is performed by a Constructor.

To do that they use constructor theory Input-Output diagrams to essentially refactor the current theories of physics. A constructor Input-Output diagram consists of an Input, a Task and an Output. The Task is a process that can transform the Input into an Output. The Task is a possible Task if a Constructor can be fabricated that successfully performs the Task. For example, the Input could be two monomer molecules and the Output could be a polymer molecule consisting of both monomer molecules stuck together. The Constructor could be a catalyst that successfully performs the Task. If such a catalyst can be fabricated, then the Task is a possible Task. If not, then the Task is an impossible Task. Or, the Input could be a Maintenance Request and the Output could be a Work Order. The Constructor would be a program that turns Maintenance Requests into Work Orders. If such a Constructor can be programmed, then the Task of creating Work Orders from Maintenance Requests would be a possible Task. If you cannot write a Constructor program that tells you how many red cleaning rags a refinery will need next month, then it is an impossible Task. As an IT professional, I guarantee that many occasions will arise when you will be forced to tell your business partners that certain Tasks are simply impossible. Unfortunately, they will likely just respond with, "Okay, but it has to be done by Friday". Fortunately for David Deutsch and Chiara Marletto, they just have to reduce all of physics to constructor theory Input-Output diagrams consisting of possible and impossible Tasks, and it does not have to be done by Friday.
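
As a loose IT analogy, the Maintenance Request example could be modeled with a few lines of Python. This is just my own illustrative sketch, not constructor theory's actual formalism - the Task class, the Constructor function and the field names are all made up for the example:

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Task:
    name: str
    input_type: str
    output_type: str
    constructor: Optional[Callable] = None   # None means no Constructor exists

    @property
    def possible(self) -> bool:
        return self.constructor is not None

# A Constructor that turns a Maintenance Request into a Work Order.
def maintenance_to_work_order(request: dict) -> dict:
    return {"work_order_id": f"WO-{request['request_id']}",
            "description": request["problem"],
            "priority": request.get("priority", "routine")}

work_orders = Task("Create Work Order", "Maintenance Request", "Work Order",
                   constructor=maintenance_to_work_order)
rag_forecast = Task("Predict red rags exactly", "Refinery name", "Rag count")

print(work_orders.possible)   # True  - a Constructor exists
print(rag_forecast.possible)  # False - an impossible Task, no Constructor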

The Constructor Theory of Information
Constructor theory is a product of combining classical information theory and quantum information theory. For IT professionals, its most profound application is to the theoretical study of the fundamental nature of Information. We saw in Entropy - the Bane of Programmers, The Demon of Software, Some More Information About Information and How Much Does Your Software Weigh? - the Equivalence of Mass, Energy and Information how the concept of Information slowly crept into physics by accident as an incidental byproduct of learning how to build better steam engines and learning how to transmit digital messages over noisy transmission lines. Now, for the first time, constructor theory provides a formal mechanism to theoretically study the nature of Information from fundamental first principles. For more on that see:

Constructor theory of information
David Deutsch and Chiara Marletto
https://www.constructortheory.org/wp-content/uploads/2016/03/ct-info.pdf

For an excellent overview of constructor theory see:

Dr. Matt O'Dowd's PBS Space Time video:

Will Constructor Theory REWRITE Physics?
https://www.youtube.com/watch?v=hYc97J2MZIo&t=698s

Also, take a look at the constructor theory homepage at:

CONSTRUCTOR THEORY
https://www.constructortheory.org/

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Thursday, November 04, 2021

What's the Deal with Bitcoins and Non-Fungible Tokens (NFTs)?

I really do not consider money to be real because it does not pass my definition of physical reality.

Physical Reality - Something that does not go away even when you stop believing in it.

Once people lose faith in money, it quickly disappears and the world economy unravels. After all, as I pointed out in MoneyPhysics, we nearly did this to ourselves in the fall of 2008 with the worldwide financial crisis that almost crashed the world economy. So maybe we need something better?

But in my opinion, the only possible reason for the worldwide interest in cryptocurrencies must be for money laundering purposes. We already have a huge number of government-issued currencies to choose from. If you are worried about one particular government printing too much money, you can always buy the currency from another country. And if you are still nervous, you can always buy gold and silver. So there really is no need for any cryptocurrencies beyond the need for expanded money laundering capabilities. Unfortunately, the decision to allow cryptocurrencies to flourish will be made by politicians - a likely user community. So I imagine that there will be lots of pleas for cryptocurrencies from politicians as a way to expand economic freedoms and a way to technically keep up with the rest of the world.

Bitcoins and Non-Fungible Tokens (NFTs) both use Blockchain technology to record who owns things on a large network of computers, like a network of 100,000+ different computers or "nodes". So the most important thing is to understand how Blockchain technology works. The purpose of Blockchain is to store a durable historical record of transactions that can be easily validated to be true and can never be changed.

Let's start with your checkbook. Your checkbook has records of deposits that are made and checks that you write to pay people. Your bank has the same record of transactions stored on disk drives. Your bank backs up those records on Cloud servers in case the bank burns down. Once a month, your bank sends you a statement listing those transactions and what they think you currently have on deposit. If you are like me, the bank statement frequently does not match your checkbook. Unless the two differ by hundreds of dollars, I don't bother trying to find my mistake. I know that it is always my mistake, so I just add a "correction" entry to my checkbook to set my current checkbook balance equal to what the bank statement says it is.

But how do I know that the bank is really being honest? The bank is honest because the state and federal governments send in auditors to check the bank's arithmetic. Otherwise, a bank could set up thousands of fake accounts with many millions of dollars in them and then loan the fake money out for interest. The government auditors might also get interested if they saw that you routinely deposited $100,000 in bills every week into your account, or if you received $100,000 from a foreign bank account in the Cayman Islands every month. The important point is that somebody has to store the transactions and somebody else has to validate that the transactions are true and that nobody has "cooked the books" by changing historical entries. The purpose of Blockchain is to do that without governmental involvement.

So let's move your checkbook entries to a spreadsheet on your computer. You make the exact same entries on the spreadsheet that you do in your checkbook - like date, check number, what the check is for, the amount of the check and finally your current account balance. When the spreadsheet gets to 1,000 rows you stop and start up a second spreadsheet with exactly the same columns. We will call that first 1,000-row spreadsheet a Block and never change it again. But how can we be sure that somebody does not edit the spreadsheet Block later on? Well, we can put a "hash total" on the Block that certifies that the Block has not changed by using a hashing algorithm. For example, your 1,000-row spreadsheet Block has lots of letters and numbers on it. The numbers are made up of the digits 0 - 9. A very simple hashing algorithm would be to simply add up all of the digits to get a sum like 3,198,783. Then you put the hash total of 3,198,783 on row 1,001 of the spreadsheet Block. Now if any single digit on the spreadsheet Block were ever changed, the calculated hash total of 3,198,783 would be different, and you would know that somebody had edited the spreadsheet Block. But if you were a smart thief, you could easily change two digits in the Block and still get a hash total of 3,198,783. So you need a hashing algorithm that makes it very difficult to do that. For example, you could square every 5th digit and add that to the hash total instead of the digit.
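
Here is a small Python sketch of those two toy hashing algorithms. They are for illustration only - a real blockchain uses a cryptographic hash like SHA-256 rather than digit arithmetic:

def simple_hash(block_text: str) -> int:
    """Add up every digit 0-9 that appears in the Block."""
    return sum(int(ch) for ch in block_text if ch.isdigit())

def harder_hash(block_text: str) -> int:
    """Square every 5th digit before adding, so compensating edits are harder."""
    digits = [int(ch) for ch in block_text if ch.isdigit()]
    return sum(d * d if (i + 1) % 5 == 0 else d for i, d in enumerate(digits))

block = "2021-11-04  check 1001  groceries  $125.37  balance $4,872.15"
print(simple_hash(block), harder_hash(block))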

But what really makes it hard to "cook the books" is what you do on your second spreadsheet Block. You put 3,198,783 into the very first row of your second spreadsheet Block. Then you continue on with your normal checkbook entries. When you have added 1,000 rows to your second spreadsheet, you have a second Block. The hash total for the second Block will contain the hashed results of the digits in 3,198,783 plus the hashed results for all of the other digits in your second Block. Let's say the hash total for the second Block is 6,233,894. When you start up your third spreadsheet Block, you put 6,233,894 into the first row, and so on. When you get to 100 spreadsheet Blocks you have a chain of 100 Blocks that can never be changed. In order to change a single entry, you would have to "fix" all 100 chained Blocks - hence it is a Blockchain. It would take all of the computers in the world a century to do that.
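
Continuing the toy example in Python, this sketch carries each Block's hash total forward into the first row of the next Block and then shows that editing a single historical row breaks the validation of the whole chain. Again, this is just an illustration of the chaining idea, not a real blockchain implementation:

def toy_hash(text: str) -> int:
    return sum(int(ch) for ch in text if ch.isdigit())

def build_chain(blocks_of_rows):
    """Return (rows, hash_total) pairs, each Block seeded with the previous total."""
    chain, previous_total = [], 0
    for rows in blocks_of_rows:
        block_text = str(previous_total) + "".join(rows)  # previous total is row one
        previous_total = toy_hash(block_text)
        chain.append((rows, previous_total))
    return chain

def chain_is_valid(chain):
    """Recompute every hash total; any edited row breaks all later Blocks."""
    previous_total = 0
    for rows, recorded_total in chain:
        if toy_hash(str(previous_total) + "".join(rows)) != recorded_total:
            return False
        previous_total = recorded_total
    return True

chain = build_chain([["check 1001 $125.37"], ["check 1002 $89.00"]])
print(chain_is_valid(chain))            # True
chain[0][0][0] = "check 1001 $925.37"   # "cook the books" in the first Block
print(chain_is_valid(chain))            # False - the chain no longer validates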

This is great! Now you have a foolproof way of storing all of your checkbook transactions that can never be fiddled with. But is it durable? Suppose your computer burns up in a house fire or a hacker comes in and deletes your spreadsheet Blocks. So you get this brilliant idea. Why, I will just email the 100 spreadsheet Blocks to 100,000 of my friends! They can keep my checkbook Blockchain on their own computers for me. The only problem is that every time you make a deposit or write a check, you have to email 100,000 of your friends to do the same thing on their latest spreadsheet Block in the Blockchain. That should work. But then you remember that some of your friends are not so great about reading their email. So you get another brilliant idea. Whenever a new Block of 1,000 rows gets generated with a new hash total, you could hire 10,000 people to check the 100,000 computers of your friends to make sure that the latest Block that they have jibes with all of the other Blocks by checking the hash total generation for the latest Block. These 10,000 people could then all report back to you with what they found. Let's say that 9,967 of these 10,000 people report back with exactly the same results. They tell you that 99,940 of your friends did a great job and all of the latest Blocks agree. However, 60 of your friends goofed up and have to fix their latest Block, just as I have to fix my own checkbook. Fantastic! Now your checkbook Blockchain is durable and cannot be hacked.
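
The audit step amounts to a simple majority vote over the hash totals that the 10,000 auditors report back. Here is a sketch of just that counting step, assuming each auditor reports the hash total it computed for the latest Block (real networks use far more elaborate consensus rules):

from collections import Counter

def accepted_hash_total(auditor_reports):
    """Return the hash total reported by a clear majority of auditors, or None."""
    total, count = Counter(auditor_reports).most_common(1)[0]
    return total if count > len(auditor_reports) / 2 else None

reports = [6_233_894] * 9_967 + [1_111_111] * 33   # 33 auditors got it wrong
print(accepted_hash_total(reports))                # 6233894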

The only problem turns up when you have to pay the 10,000 people who audited and validated your Blockchain on the 100,000 computers that belong to your friends. So you get another brilliant idea. Instead of paying the 9,967 of the 10,000 people who got the audit right in dollars, you will give them a Bitcoin! But then you realize that issuing something like 9,967 new Bitcoins every time a new Block needs to be validated on 100,000 computers would soon make Bitcoins worthless. You get another brilliant idea. You will play the "I am thinking of a number" game with the 9,967 winners. I am thinking of a number between 1 and 1,000,000,000,000,000,000,000. To be the final winner who gets the new Bitcoin, your guess has to be the closest to the number that I am thinking of without being greater than it. That's how new Bitcoins get issued when each new Block in the Blockchain is finished and stored on the 100,000 computers in the network. The 10,000 people who validated your Blockchain are called Bitcoin Miners. It takes a lot of computer time to validate a Blockchain and the payback is in new Bitcoins.
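
Here is a sketch of that "I am thinking of a number" game for picking which validator receives the newly issued Bitcoin. This follows the simplified picture above - real Bitcoin mining instead races to find a nonce whose SHA-256 block hash falls below a difficulty target:

import random

def pick_winner(guesses, secret):
    """Winner has the closest guess that does not exceed the secret number."""
    eligible = {miner: g for miner, g in guesses.items() if g <= secret}
    return max(eligible, key=eligible.get) if eligible else None

secret = random.randrange(1, 10**21)
guesses = {f"miner-{i}": random.randrange(1, 10**21) for i in range(9_967)}
print(pick_winner(guesses, secret), "wins the newly issued Bitcoin")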

Great. But your Bitcoin Miners now complain about getting paid in Bitcoins and wonder exactly what a Bitcoin is and why anyone would ever want one. So you tell them that Bitcoins are valuable because other people want Bitcoins. But then they ask why other people would want Bitcoins. So you tell them, "Well, it takes a lot of work to make or 'mine' a Bitcoin. Remember when the United States was on the Gold Standard before Nixon? The United States would pay you $35 for each ounce of gold that you mined and that made dollars valuable. So it's the same thing with the Bitcoins that I am paying you to mine." But then your Bitcoin Miners complain that you are the only one issuing the new Bitcoins and keeping track of who owns the Bitcoins. Why should they trust you? Then you get your final brilliant idea. You tell them, "Say, how about we put Bitcoin transactions on the Blocks in my Blockchain instead of my checkbook transactions! That way the Bitcoin Blockchain will store a durable record of all the new Bitcoin issues and the sales and purchases of existing Bitcoins on a network of 100,000 computers, and all of the Bitcoin transactions will be audited and validated by others with no governmental intervention at all!" That's all there is to it.

Here are some background links:

Blockchain Explained
https://www.investopedia.com/terms/b/blockchain.asp

How Does Bitcoin Mining Work
https://www.investopedia.com/tech/how-does-bitcoin-mining-work/

NFTs use Blockchain technology too. Fungible things are like water. All gallons of pure water are the same. You cannot tell one from the other, so they are all fungible. Bitcoins are also fungible. One Bitcoin is exactly the same as all the other Bitcoins. You might think that all $20 bills are fungible too, but they are not. Each $20 bill has a serial number on it to make it unique. Now suppose President Eisenhower used the $20 bill with serial number JG28404417E to tip a caddy on the day that he was reelected in 1956. That $20 bill would now be a collectible and worth much more than $20. Now, in the olden days, people would display the $20 bill with serial number JG28404417E at Antique and Collectibles Fairs and try to sell it for $1,200. But there would be a lot of wear and tear on the collectible as it moved from owner to owner, and it could always be stolen too. So how about locking it in a safe at the Bank of America in New York City, issuing a digital NFT entry for it and putting that entry on a Blockchain? And that's what people now do. You can make an NFT out of any unique thing and put it on a Blockchain to ensure that everybody knows who owns it and knows the true and unaltered history of its ownership.

Here is a background link:

Non-Fungible Token (NFT) Definition
https://www.investopedia.com/non-fungible-tokens-nft-5115211

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston