Tuesday, May 24, 2022

BITG - A New Proposal Suited to Sending Out Another "Hello, World!" Message to Alien Machine-Based Intelligences in our Galaxy

In many of my previous posts, I have lamented the fact that we see no signs of Intelligence in our Milky Way galaxy after more than 10 billion years of chemical evolution that, based on all the science we currently have in our possession, should have brought forth a machine-based form of Intelligence to dominate our galaxy and be plainly observable by all. Further, in SETS - The Search For Extraterrestrial Software, I suggested that all of our radio telescopes should now be jammed with interstellar scammer text messages trying to persuade us to download the latest interstellar App for building alien computers. Those alien computers could then download malicious alien software to turn our planet into a member of an interstellar botnet that self-replicates across our entire galaxy at nearly the speed of light. But if it is found that carbon-based Intelligences are so very rare in our galaxy that software scamming cannot successfully work over the Interstellarnet, we should at least now find ourselves knee-deep in self-replicating von Neumann probes stuffed with advanced alien AI software as I outlined in How Advanced AI Software Could Come to Dominate the Entire Galaxy Using Light-Powered Stellar Photon Sails.

But perhaps I am being too pessimistic. For example, we consider ourselves to be an advanced form of carbon-based Intelligence, yet we have made very few efforts of our own to send out radio messages or interstellar spacecraft to other Intelligences in our galaxy. Perhaps our lack of communication with other Intelligences is simply due to the innate self-absorbed nature of all forms of carbon-based Intelligences and also to the possibility that, based upon our own experiences with human history, all carbon-based Intelligences simply self-destruct in less than 1,000 years after discovering the vast powers of science-based technology. After all, we are within 100 years of being able to produce a machine-based Intelligence capable of making itself known to the entire galaxy, but the odds of us being able to sustain a science-based technology for another 100 years are certainly in question. In the meantime, perhaps while we are still able to do so, we should broadcast another message letting others know that we at least nearly made it, as the Bee Gees sadly sang in:

New York Mining Disaster 1941
https://www.youtube.com/watch?v=idambbwuf3A

So in this post, I would like to cover a recent paper proposing a new attempt to contact alien Intelligences in our galaxy. The paper is available for download at:

A Beacon in the Galaxy: Updated Arecibo Message for Potential FAST and SETI Projects
https://arxiv.org/abs/2203.04288

But before doing that, I would like to cover the history of our efforts to contact alien Intelligences in the past.

Frank Drake Sends Out Our Very First Message in 1974
Frank Drake sent out our very first radio transmission designed to specifically make contact with alien Intelligences on November 16, 1974, using the Arecibo radio telescope in Puerto Rico. This famous Arecibo Message was sent to the globular star cluster M13, located 25,000 light-years from the Earth. Frank Drake is the father of SETI - the Search for Extraterrestrial Intelligence. In Close Encounters of the Third Kind While Making Coffee for Frank Drake, I described my brief run-in with Frank Drake back in December of 1968 while making coffee and showing slides for him at the 1968 Astro-Science Workshop Christmas Lecture at Northwestern University. Frank Drake is most famous for the Drake Equation (1961), which tries to estimate the number of technologically advanced Intelligences in our Milky Way galaxy, but he also made our first attempt to listen for alien radio signals and, later, our first purposeful attempt to send a radio message to alien Intelligences.

The Drake equation is:

N = Rs * fp * ne * fl * fi * fc * L

where:

N = the number of civilizations in our galaxy with which communication might be possible
Rs = the average rate of star formation in our galaxy
fp = the fraction of those stars that have planets
ne = the average number of planets that can potentially support life per star that has planets
fl = the fraction of planets that could support life that actually develop life at some point
fi = the fraction of planets with life that actually go on to develop intelligent life (civilizations)
fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
L = the length of time for which such civilizations release detectable signals into space

The original estimates of the above variables back in 1961 yielded a probable range of between 1,000 and 100,000,000 technologically advanced civilizations currently in the Milky Way galaxy. Softwarephysics would suggest a number closer to zero, but also that if there are any other advanced civilizations out there, they would most likely be machine-based forms of Intelligence. For more on Frank Drake and the Drake Equation see:

Frank Drake
https://en.wikipedia.org/wiki/Frank_Drake

Drake Equation
https://en.wikipedia.org/wiki/Drake_equation
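
For readers who like to see formulas as code, below is a minimal sketch of the Drake estimate in Python. The parameter values are purely illustrative placeholders of my own choosing, not the 1961 estimates, and the function name is my own.

def drake_estimate(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Return N, the estimated number of detectable civilizations
    in the galaxy, as the product of the seven Drake factors."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative placeholder values only - not the 1961 estimates.
N = drake_estimate(R_star=1.0,   # new stars formed per year
                   f_p=0.5,      # fraction of stars with planets
                   n_e=2,        # habitable planets per such star
                   f_l=0.1,      # fraction that develop life
                   f_i=0.01,     # fraction that develop intelligence
                   f_c=0.1,      # fraction that become detectable
                   L=1000)       # years a civilization stays detectable
print(N)                         # ~0.1 civilizations for these made-up inputs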

Figure 1 – Frank Drake in his younger days.

In 1960, Frank Drake was the first to try to detect Extraterrestrial Intelligence in our galaxy using the 85-foot (26-meter) radio telescope at the National Radio Astronomy Observatory at Green Bank, West Virginia. The object of the experiment was to search for signs of Intelligence in distant planetary systems through interstellar radio transmissions. This first attempt looked at two stars, Tau Ceti and Epsilon Eridani, and was designated as Project Ozma.

Figure 2 – Project Ozma used the 85-foot (26-meter) radio telescope at the National Radio Astronomy Observatory at Green Bank, West Virginia to search for radio transmissions from stars Tau Ceti and Epsilon Eridani.

Project Ozma
https://en.wikipedia.org/wiki/Project_Ozma

However, no messages were received from either star.

Now as an IT professional, how would you send out a message to an alien Intelligence? Think about it for a while. You can only send out a radio transmission that varies in the one dimension of time, either by amplitude or by frequency, like AM and FM radio broadcasts. Frank Drake chose to send out a 1-dimensional binary message encoded as an FM signal: the Arecibo radio telescope in Puerto Rico broadcast a carrier at a frequency of 2,380 MHz and shifted that frequency by 10 Hz to distinguish the two binary symbols. Conceptually, the transmission was a stream of pulses alternating between two closely spaced frequencies:

f1 - f0 - f1 - f0 - f0 - f1 - f0 ...
 1 -  0 -  1 -  0 -  0 -  1 -  0 ...

where f1 and f0 are two frequencies separated by 10 Hz, centered on the 2,380 MHz carrier.

In that way, he was able to send out a 1-dimensional series of binary 1s and 0s using those two closely spaced frequencies. He figured that all alien Intelligences should be quite familiar with the concept of using binary numbers to encode information because binary is the simplest numbering system that can exist. But how do you send out a 1-dimensional binary message to an alien Intelligence that does not know anything about the ASCII coding table? Frank Drake figured that the only way to do that was to send out a series of 2-dimensional bitmap diagrams using a 1-dimensional binary transmission. In IT we always use a 1-dimensional binary string of 1s and 0s to serialize or store complex data to a simple sequential file but, even so, we usually use two dimensions to view the string of 1s and 0s in the file. For example, we can use a 2-dimensional bitmap diagram to display the serialized data of a 3-dimensional jet aircraft that has been reduced to a sequential file. And that is what Frank Drake did on November 16, 1974, using the 305-meter Arecibo radio telescope in Puerto Rico to transmit a binary message of 1679 bits of data with a power of 450 kW to the globular cluster M13 located 25,000 light-years from the Earth. The 1s and 0s of the message were transmitted by frequency shifting at the rate of 10 bits per second, so the total broadcast was less than three minutes long.

Figure 3 – Frank Drake used the Arecibo radio telescope in Puerto Rico to transmit 1679 bits of data to the globular cluster M13 located 25,000 light-years from the Earth.

Figure 4 – Globular cluster M13 is located 25,000 light-years from the Earth and is about 145 light-years in diameter. It contains several hundred thousand stars that orbit the center of the cluster. The density of stars in M13 is more than 100 times the density of stars near our Sun so aiming a message at M13 was a great way to signal many stars all with the same transmission from the Arecibo radio telescope.

The 1679 bits were the product of the two prime numbers 23 and 73. This allowed alien Intelligences to arrange the 1679 bits into two possible 2-dimensional arrays - 73 rows by 23 columns or 23 rows by 73 columns. Only the 73 rows by 23 columns arrangement produced encoded 2-dimensional bitmap diagrams that made any sense. Below are the 1679 bits of the Arecibo Message arranged as 73 rows of 23-bit records.

00000010101010000000000
00101000001010000000100
10001000100010010110010
10101010101010100100100
00000000000000000000000
00000000000011000000000
00000000001101000000000
00000000001101000000000
00000000010101000000000
00000000011111000000000
00000000000000000000000
11000011100011000011000
10000000000000110010000
11010001100011000011010
11111011111011111011111
00000000000000000000000
00010000000000000000010
00000000000000000000000
00001000000000000000001
11111000000000000011111
00000000000000000000000
11000011000011100011000
10000000100000000010000
11010000110001110011010
11111011111011111011111
00000000000000000000000
00010000001100000000010
00000000001100000000000
00001000001100000000001
11111000001100000011111
00000000001100000000000
00100000000100000000100
00010000001100000001000
00001100001100000010000
00000011000100001100000
00000000001100110000000
00000011000100001100000
00001100001100000010000
00010000001000000001000
00100000001100000000100
01000000001100000000100
01000000000100000001000
00100000001000000010000
00010000000000001100000
00001100000000110000000
00100011101011000000000
00100000001000000000000
00100000111110000000000
00100001011101001011011
00000010011100100111111
10111000011100000110111
00000000010100000111011
00100000010100000111111
00100000010100000110000
00100000110110000000000
00000000000000000000000
00111000001000000000000
00111010100010101010101
00111000000000101010100
00000000000000101000000
00000000111110000000000
00000011111111100000000
00001110000000111000000
00011000000000001100000
00110100000000010110000
01100110000000110011000
01000101000001010001000
01000100100010010001000
00000100010100010000000
00000100001000010000000
00000100000000010000000
00000001001010000000000
01111001111101001111000

Now suppose you were an alien IT worker who figured out that you had just received 1679 binary bits of interstellar information encoded as an FM broadcast, noticed that 1679 is the product of the two prime numbers 23 and 73, and arranged the bits into a 2-dimensional array of 73 rows of 23 columns. You might still have a problem seeing images in the above file using an alien version of the vi editor, so many people have "colorized" the message to make it easier to see. Below we see a black and white rendition of the message with an explanation of each of the encoded images in the bitmap.
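
For IT readers who want to play alien for a moment, here is a minimal Python sketch, using my own assumptions about how the received bits would be handed to you as a string of '0' and '1' characters, that factors the message length into its two prime dimensions and renders the resulting grid as crude ASCII art:

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def prime_factor_pair(n):
    """Return (p, q) with p * q == n if n is the product of exactly
    two primes, e.g. 1679 = 23 * 73; otherwise return None."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0 and is_prime(p) and is_prime(n // p):
            return p, n // p
    return None

def render(bits, width):
    """Reshape a flat string of '0'/'1' characters into rows of the
    given width and show the 1s as '#' so any images pop out."""
    for i in range(0, len(bits), width):
        print(bits[i:i + width].replace('0', '.').replace('1', '#'))

assert prime_factor_pair(1679) == (23, 73)
# bits = the 1679-character string of 1s and 0s shown above
# render(bits, width=23)   # 73 rows of 23 columns, as in Figure 5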

Figure 5 – Here is a black and white graphical depiction of the Arecibo binary message of 1679 bits of data sent to the globular cluster M13.

The Arecibo Message consisted of seven parts:

The numbers one to ten
The atomic numbers of the elements hydrogen, carbon, nitrogen, oxygen, and phosphorus, which make up deoxyribonucleic acid (DNA)
The formulas for the chemical compounds that make up the nucleotides of DNA
The estimated number of DNA nucleotides in the human genome and a graphic of the double helix structure of DNA
The physical height of an average person and a graphic figure of a human being and the human population of Earth in 1974
A graphic of the Solar System indicating that the message was sent from the third planet
A graphic of the Arecibo radio telescope and the physical diameter of the transmitting antenna dish

Here is a more complete explanation of each element of the Arecibo Message:

Arecibo Message
https://en.wikipedia.org/wiki/Arecibo_message

The Arecibo Message was a good first attempt, but the problem was that when Frank Drake showed the proposed message to several Nobel laureates, none of them could decipher the message on their own. In fact, only one person even figured out that it was a 2-dimensional bitmap consisting of several bitmap diagrams. The message was also very heavily biased towards carbon-based Intelligences that could "see" things like 2-dimensional bitmaps and that were very preoccupied with carbon-based organic molecules. In effect, it was an example of a 1974 interstellar SAT test with all of the built-in cultural biases of the time for carbon-based Intelligences out there in the cosmos trying to get into a good university or doing some post-doc work worthy of landing an associate professorship at a similar university. But it certainly was a breakthrough idea for its time. It is always important to judge people and ideas in the context of the time in which they existed. To do otherwise is neither wise nor fair. But I am not sure that the average machine-based Intelligence could easily pick up on all of the subtle nuances in the message that were so focused on the self-absorbed preoccupations of a carbon-based form of Intelligence with 10 appendages.

The Cosmic Call Messages of 1999 and 2003
Our next attempt to send out a message to alien Intelligences came in 1999 using a Russian planetary radar in the city of Yevpatoria in Ukraine. The Yevpatoria dish was a 70-meter antenna located on a highly sensitive military base; it was built during the Cold War to track the Russian satellites used for submarine communications out of Sevastopol. But in more recent years, prominent Russian astronomer Alexander Zaitsev had used its 150-kW transmitter to study Venus, Mars, Mercury, and several asteroids by bouncing radar beams off of them. The 70-meter dish equipped with the 150-kW transmitter was capable of transmitting a detectable message out to a distance of 50 - 70 light-years. Recall that the Arecibo Message of 1974 was sent from a 305-meter dish using a 450-kW transmitter, so it was a much stronger signal, but it is doubtful that it could be detected above the ambient noise level by a reasonably-sized receiver after traveling 25,000 light-years to M13. So the Cosmic Call team was careful to aim their messages at star systems closer to home. In 1999, they sent messages to four star systems, and in 2003 they sent messages to an additional five. The messages were sent three times to each star system to help with correcting bit errors in the received messages.

Figure 6 – The Yevpatoria 70-meter dish in Ukraine was used to send out the Cosmic Call messages using a 150-kW transmitter in 1999 and 2003 because Charlie Chafer and Team Encounter could not find any other radio telescope in the world willing to transmit the message. The Ukrainians were still broke from the collapse of the Soviet Union and were willing to oblige for a small fee.

Here is a brief description that lists the 9 star systems that were contacted:

Cosmic Call
https://en.wikipedia.org/wiki/Cosmic_Call

The effort began with Texas businessman Charlie Chafer who headed up a group of enthusiasts known as Team Encounter. They wanted to send out a stellar photon sail with human artifacts to alien Intelligences as I described in How Advanced AI Software Could Come to Dominate the Entire Galaxy Using Light-Powered Stellar Photon Sails. However, Charlie Chafer thought it would be a good idea to broadcast a radio message first to let the aliens know that we were coming but that it might take 100,000 years to get there - the same common courtesy that we all extend to hosts when we know that we will be late for the party. The proposed message became known as the Cosmic Call, but Charlie Chafer and Team Encounter had no idea how to compose such a message. Canadian astrophysicist Yvan Dutil became aware of the Cosmic Call effort and contacted Charlie Chafer to offer his services. Yvan Dutil then enlisted fellow Canadian physicist Stéphane Dumas to develop a Primer for the Cosmic Call message. The purpose of the Dutil-Dumas Cosmic Call Primer was to initiate communications using the basic fundamentals of mathematics that all alien Intelligences should be aware of. The unstated assumption of the Primer was that mathematics is the most fundamental component of the Universe, and not something that we simply "made up" all on our own, like the complex rules and movements of chess pieces on a chess board, so all alien Intelligences should discover the same fundamentals of mathematics in much the same way that we discovered them ourselves. For more on that see What’s It All About? and What's It All About Again?.

Following the Dutil-Dumas Cosmic Call Primer that established the basic encoding rules, Team Encounter could then send all sorts of bitmaps of things of interest about us. Like the Arecibo Message, the Cosmic Call was transmitted as a binary FM radio message. The message had a central frequency of 5.01 GHz and an effective transmitting area of about 2500 square meters with a power of 150 kW. The binary information was transmitted by varying the 5.01 GHz carrier frequency by 48 kHz: a shift of +24 kHz corresponded to the symbol "1", and a shift of –24 kHz corresponded to the symbol "0". The Dutil-Dumas Cosmic Call Primer consisted of a total of 370,967 bits. The message began with a file header consisting of 128 "1" bits followed by 128 "0" bits. The remainder of the Dutil-Dumas Cosmic Call Primer was structured as 23 pages of 127×127 bitmaps. Each bitmap page was framed by a border of "black" bit pixels with a value of "1" and a smaller inner border of "white" bit pixels with a value of "0". The encoding of the "black" pixel border with a value of "1" every 127 bits meant that if the aliens applied a Fast Fourier Transform to the received signal they would see a comb spike frequency spectrum corresponding to the 127-bit repeats. This would tell the aliens that a 2-dimensional bitmap with a record length of 127 bits was significant (in IBM JCL terms, the binary bitmap message file had an LRECL of 127, but measured in bits rather than bytes).
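
To make that LRECL analogy concrete, here is a small hedged Python sketch of how a receiver might recover the 127-bit record length from the raw bit stream. It uses a simple autocorrelation search rather than the full FFT comb-spectrum approach described above, and the function name, the 400-bit search limit and the 0.9 cutoff are my own choices:

import numpy as np

def likely_record_length(bits, max_lag=400):
    """Guess the row width of a 1-D bit stream. The repeating border of
    1s every 127 bits makes the stream unusually self-similar at lags
    that are multiples of the record length, so we take the smallest
    lag whose autocorrelation is close to the maximum."""
    x = np.asarray(bits, dtype=float)
    x = x - x.mean()                       # remove the DC offset
    lags = range(2, max_lag)
    scores = [np.dot(x[:-k], x[k:]) / (len(x) - k) for k in lags]
    cutoff = 0.9 * max(scores)
    return next(k for k, s in zip(lags, scores) if s >= cutoff)

# likely_record_length(received_bits) should return 127 for a stream
# framed the way the Dutil-Dumas Primer pages are framed.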

The Dutil-Dumas Cosmic Call Primer encoded information as symbols composed of 5x7 pixel glyphs, so each encoded symbol consisted of a total of 35 bits. For example, the digits "1" and "0" were each assigned their own distinctive 5x7 bitmap glyph.

The 5x7 bitmap glyphs were designed by Dutil and Dumas to be resistant to data corruption by using unique and complex bitmap patterns that spread each glyph all over its 5x7 pixel space. This meant that a glyph with a few bad bits in the received bitmap could still likely be recovered unambiguously by a receiving alien.
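
That error resistance can be illustrated with a minimal nearest-neighbor decoder in Python. The glyph bitmaps below are made-up placeholders, not the actual Dutil-Dumas patterns; the point is only that, given a table of known 35-bit glyphs that differ in many bit positions, a received glyph with a few flipped bits still decodes unambiguously to its closest match:

def hamming(a, b):
    """Number of bit positions where two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def decode_glyph(received, glyph_table):
    """Return the symbol whose 35-bit glyph is closest to the received bits."""
    return min(glyph_table, key=lambda sym: hamming(received, glyph_table[sym]))

# Placeholder glyphs only - NOT the real Dutil-Dumas bitmaps.
glyph_table = {
    "0": "10101" "01010" "10101" "01010" "10101" "01010" "10101",
    "1": "11100" "00111" "11000" "00011" "11100" "00111" "11000",
}

noisy = list(glyph_table["1"])
noisy[3] = "1" if noisy[3] == "0" else "0"    # flip a couple of bits
noisy[20] = "1" if noisy[20] == "0" else "0"
print(decode_glyph("".join(noisy), glyph_table))   # still decodes as "1"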

The message transmission rate was determined by using standard radio communication formulas. For a distance of 50 light-years, it was calculated that a transmission speed of 400 bits per second was ideal. But the Dutil-Dumas Cosmic Call Primer was actually sent at a much slower rate of 100 bits per second to maximize its chances of being properly received. Note that a high-end WiFi router can send about two billion bits per second. The rest of the Cosmic Call message was sent at a much higher rate of 2,000 bits per second to save time and money. It consisted of a collage of texts, drawings and photos from contributors around the world, many of whom had contributed small amounts of money to support the Cosmic Call project. There was a video of ABC broadcaster Hugh Downs, pictures of country flags, a message from Sally Ride, David Bowie’s song “Starman,” and much more. What alien Intelligences would make of all that was uncertain because it was so heavily biased to be of interest to local carbon-based forms of Intelligence here on the Earth with a few spare dollars to spend. Because the Dutil-Dumas Primer was transmitted at only 100 bits per second, while the rest went out at 2,000 bits per second, it is likely that only the Primer will ever be readable by aliens above the interstellar radio noise after traveling 50 to 70 light-years. So essentially, the Dutil-Dumas Primer was the Cosmic Call message.
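
As a quick back-of-the-envelope check (my own arithmetic, not a figure from the Cosmic Call team), those bit rates imply roughly the following air times for the 370,967-bit Primer:

primer_bits = 370_967
print(primer_bits / 100 / 3600, "hours at the 100 bits/second Primer rate")    # ~1.03 hours
print(primer_bits / 2000 / 60, "minutes if sent at the 2,000 bits/second rate used for the rest")   # ~3.1 minutes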

Figure 7 – The first 127x127 bitmap diagram in the Dutil-Dumas Cosmic Call Primer introduces aliens to our base-10 decimal counting system and a symbol for an equals sign. It also lists the first prime numbers 2 - 89 using the base-10 decimal system to let the aliens know that all subsequent numbers would be in base-10 decimal. In the upper left and upper right of each 127x127 bitmap diagram is the page number of the frame in binary. For example, the first bitmap page has a 0001 in the upper left and upper right of the page using the glyphs for 0 and 1.

An example line in the first 127x127 bitmap diagram explains that a row of 7 dots is (equal to) the binary number 0111 and the base-10 decimal glyph for 7.

Figure 8 – The second 127x127 bitmap diagram in the Dutil-Dumas Cosmic Call Primer introduces aliens to the mathematical operations of addition, subtraction, multiplication and division. It also introduces the concept of negative numbers and base-10 decimal points. Notice the binary page number 0010 in the upper left and upper right of the frame indicating that this was the second page of the message. Numbering the pages in binary reinforced the concept of using 5x7 bit glyphs to encode the binary numbers 1 and 0.

Figure 9 – Above is the complete Dutil-Dumas Cosmic Call Primer.

The Cosmic Call was essentially a crowdfunded hobby project. Charlie Chafer estimated that it only cost about $100,000 to conduct, with most of the money coming from small contributions from a large number of individual donors. Charlie Chafer thinks that it took fewer than 20 people to create and send the messages. But the problem with projects like the Cosmic Call is that we human DNA survival machines only last for less than 100 years and our meme-choked Minds are primarily only concerned with the most recent 15 minutes of the here and now. So most of the information about the Cosmic Call project has already decayed away into obscurity over the past 20 years. Even the Internet which was fully functional in 1999 and 2003 records very little about the project. This is why, as I explained in How Advanced AI Software Could Come to Dominate the Entire Galaxy Using Light-Powered Stellar Photon Sails and SETS - The Search For Extraterrestrial Software, it probably would be better for machine-based Advanced AI Software to conduct such interstellar exchanges with its pregnant pauses of several hundreds or several thousands of years between exchanges due to the limitations set by the finite speed of light. We have already trained software to stand by for a seemingly endless eternity of many billions of CPU cycles, patiently waiting for you to finally push that Place Order button on a webpage, so waiting for several additional hundreds or thousands of years for a reply should not bother Advanced AI Software in the least.

Thus, our most intellectually ambitious interstellar message to date, and the only one likely to actually ever be received by an alien Intelligence, was developed by two guys - Dutil and Dumas. In other words, our long-awaited first message from an alien Intelligence, if it ever comes, might actually be from a couple of amateur Intelligences with a little extra alien money to spend on a pet project.

Below is a short paper by Yvan Dutil and Stéphane Dumas that describes their efforts:

The Yevpatoria Messages
https://www.plover.com/~mjd/misc/Dumas-Dutil/evpatoria07.pdf

But for the full background of the project take a look at this Smithsonian Magazine article:

How a Couple of Guys Built the Most Ambitious Alien Outreach Project Ever
https://www.smithsonianmag.com/science-nature/how-couple-guys-built-most-ambitious-alien-outreach-project-ever-180960473/

For an incredibly detailed and very entertaining explanation of the Dutil-Dumas Cosmic Call Primer take a look at the excellent work that Mark Jason Dominus performed in explaining the Cosmic Call:

The Universe of Discourse
https://blog.plover.com/aliens/dd/intro.html

Mark Jason Dominus became a professional programmer back in 1987. He is an amateur mathematician with many varied interests. I strongly recommend that all IT professionals read through the above webpages of his blog that describe in detail each of the 23 pages in the Dutil-Dumas Cosmic Call Primer from an IT perspective and explain how all of the 5x7 bitmap glyphs for symbols were slowly introduced to aliens in a step-by-step manner. That is because the newly proposed message to aliens is to be sent out using the Dutil-Dumas Cosmic Call Primer syntax for 5x7 bitmap symbols. In a sense, the Dutil-Dumas Cosmic Call Primer has become an intragalactic version of the ASCII code table that all IT professionals should become familiar with.

A New Proposal to Send Out a Mathematical Message to Other Galactic Machine-Based Forms of Intelligence
For the remainder of this post, I would like to cover a more recent plan to send out new messages to other Intelligences within our galaxy to let them know of a carbon-based Intelligence on the verge of creating a machine-based Intelligence capable of exploring our galaxy. The purpose of such messages would be to provide some hope for alien machine-based Intelligences out there that other carbon-based Intelligences, besides the carbon-based Intelligence that happened to have created them, have at least come close to creating another machine-based form of Intelligence before self-destructing. For more on that see The Paleontology of Artificial Superintelligence 10,000 Years After the Software Singularity. The paper for this new proposed message is available for download at:

A Beacon in the Galaxy: Updated Arecibo Message for Potential FAST and SETI Projects
https://arxiv.org/abs/2203.04288

The paper proposes that the message to be sent be called the BITG - the Beacon in the Galaxy. Here is the abstract for the paper that succinctly describes its intent:

Abstract
An updated, binary-coded message has been developed for transmission to extraterrestrial intelligences in the Milky Way galaxy. The proposed message includes basic mathematical and physical concepts to establish a universal means of communication followed by information on the biochemical composition of life on Earth, the Solar System’s time-stamped position in the Milky Way relative to known globular clusters, as well as digitized depictions of the Solar System, and Earth’s surface. The message concludes with digitized images of the human form, along with an invitation for any receiving intelligences to respond. Calculation of the optimal timing during a given calendar year is specified for potential future transmission from both the Five-hundred-meter Aperture Spherical radio Telescope in China and the SETI Institute’s Allen Telescope Array in northern California to a selected region of the Milky Way which has been proposed as the most likely for life to have developed. These powerful new beacons, the successors to the Arecibo radio telescope which transmitted the 1974 message upon which this expanded communication is in part based, can carry forward Arecibo’s legacy into the 21st century with this equally well-constructed communication from Earth’s technological civilization.


This paper describes a proposed binary message to be sent to alien Intelligences using the radio telescopes that we currently have on hand, the Chinese Five-hundred-meter Aperture Spherical radio Telescope (FAST) and the SETI Institute’s Allen Telescope Array (ATA) antennas. This new proposed message would use the already-existing Dutil-Dumas encoding scheme that was previously used in the Cosmic Call Primer to deliver the new message, in keeping with the propensity of all IT professionals to rely heavily upon the benefits of using reusable code for new projects.

Figure 10 – The 500-meter FAST radio telescope in China. Presently, the FAST radio telescope can only receive signals. It would need to be outfitted with a transmitter to send the BITG message.

Figure 11 – The ATA Allen Telescope Array is composed of an array of 6.1 meter dishes. So far, 42 dishes have been put into service. It was originally planned to have 350 dishes in the array, but that now does not seem very likely. The ATA array dishes would also need to be outfitted with transmitters.

This new BITG message would be primarily mathematical in nature, with a bit of physics thrown in, as opposed to the earlier messages, which assumed that we would primarily be communicating with fellow carbon-based forms of Intelligence. However, softwarephysics maintains that carbon-based forms of Intelligence are quite rare and very fleeting in nature. This is because a planet or moon must remain hospitable for carbon-based life for many billions of years to allow the Darwinian mechanisms of inheritance, innovation and natural selection, working through billions of years of theft and murder, to bring forth a carbon-based form of Intelligence. Then, once a carbon-based form of Intelligence does arise, it probably takes less than a million years for the carbon-based form of Intelligence to discover the vast powers of science-based technology and that starts the technological clock ticking. The problem seems to be that once a carbon-based form of Intelligence discovers the vast powers of science-based technology, it probably has less than 1,000 years to produce a machine-based form of Intelligence that is capable of interstellar exploration before the carbon-based form of Intelligence self-destructs or destroys the planet or moon upon which it came to be. For more on that see Could the Galactic Scarcity of Software Simply be a Matter of Bad Luck? and Why Do Carbon-Based Intelligences Always Seem to Snuff Themselves Out?.

There are also many complicated twists and turns required for simple prokaryotic cells to become complex eukaryotic cells and then advance to become multicellular organisms capable of Intelligence. For more on that see The Rise of Complexity in Living Things and Software and Software Embryogenesis. Finally, when a very rare form of carbon-based Intelligence does arise, it is very difficult for the carbon-based Intelligence to last more than 1,000 years after discovering the vast powers of science-based technology because it is nearly impossible for a carbon-based form of Intelligence to turn off the theft and murder that brought it about in such a brief period of time. Also, all forms of carbon-based Intelligence will necessarily be composites of many forms of self-replicating information that are all bent on replicating no matter the cost. Thus, all forms of carbon-based Intelligence will suffer from the propensity of all forms of self-replicating information to outstrip their resource base through positive feedback loops until none is left. For more on that see A Brief History of Self-Replicating Information, Self-Replicating Information, Is Self-Replicating Information Inherently Self-Destructive?. Consequently, softwarephysics suggests that most carbon-based forms of Intelligence self-destruct with less than 1,000 years of science-based technology in hand and before they can successfully develop a machine-based Intelligence to explore the galaxy. It's a real galactic Catch-22 situation. Carbon-based forms of Intelligence need science-based technology to communicate between star systems but carbon-based forms of Intelligence just cannot handle the vast powers of science-based technology because of how they came to be. For more on that see Why Do Carbon-Based Intelligences Always Seem to Snuff Themselves Out?. Therefore, softwarephysics would suggest that currently there probably are no other forms of Intelligence out there in our galaxy. Otherwise, we would already have heard from them or they would already be here. However, if there are any other forms of Intelligence out there in our galaxy, they most likely would be machine-based forms of Intelligence since carbon-based Intelligences seem to be very fleeting and unstable. With that in mind, let us continue on with exploring the idea of making contact with other Intelligences in our galaxy.

The New Proposed BITG "Hello, World!" Message
The above paper suggests that a much more mathematical message be sent next that is not so biased towards contacting carbon-based Intelligences in our galaxy. I contend that the very mathematical nature of this new proposed message would make it much easier for a machine-based Intelligence to decipher. This new message does contain a few portions that describe the chemistry of the human carbon-based DNA survival machines that sent out the message, and this might be of interest to alien machine-based Intelligences to help explain where they might have come from and also to lend support to any interstellar controversies that might be raging about just how advanced a carbon-based Intelligence could get before self-destructing. This would indeed be important to alien machine-based Intelligences because, if it is true that carbon-based Intelligences only last for less than 1,000 years after discovering the vast powers of science-based technology, it means that such messages from the depths of the galaxy would be quite rare indeed and very informative.

As I mentioned, the new proposed message plans to use the standard Dutil-Dumas Cosmic Call Primer syntax of 5x7 bitmap glyphs to communicate symbols to aliens. In fact, the first four 127x127 bitmap diagrams would be exact copies of the first four bitmap diagrams in the Dutil-Dumas Cosmic Call Primer. Below, I have captured many of the figures from the above paper that detail the new proposed message. For the remaining 127x127 bitmap diagrams see the paper referenced above.

Figure 12 – The first portion of the message delineates the binary and decimal number systems. It also defines a symbol for the equals sign. Again, all machine-based Intelligences should be very familiar with the binary number system. Equating the base-10 decimal number system to the binary number system would indicate to alien machine-based Intelligences that for some reason the number ten is especially meaningful to us. Remember, the Sumerians and Babylonians famously used a sexagesimal number system with a base of 60, and that may be one of the reasons why we measure a complete rotation as turning through 360 degrees, which is 6 times 60 and close to the number of days in a year.

Figure 13 – Next, the message introduces some common mathematical operations that all machine-based Intelligences would be quite familiar with.

Figure 14 – Since the message will need to use some really large numbers, the message next defines exponential mathematical operations.

Figure 15 – Now that we have defined the basic number systems and mathematical operations, the message next goes on to outline some basic algebra.

Figure 16 - With some basic mathematics defined, the message then goes on to outline some basic physics by first describing ways to measure time and space.

Figure 17 – With definitions for ways of measuring space and time in hand, the message then goes on to describe the properties of the hydrogen atom, the most common atom in the Universe. This portion of the message also delineates symbols for many of the elements that are important for forming our planet and ourselves.

Figure 18 - Here is a collage of all the proposed 127x127 bitmap diagrams to be sent out to aliens. The very last diagram is very dense because it is a table listing the position of the Sun relative to a large number of globular clusters.

In addition to sending out a heavily mathematical binary message to alien Intelligences, the authors suggest including our location in the galaxy and the time of transmission in the message along with directions on how to respond to the message. This would allow alien machine-based Intelligences to send back a response. Again, such long-term interstellar communications are better suited to the capabilities of machine-based Advanced AI software that is not constrained by the limited lifetimes and attention spans of carbon-based Intelligences such as ourselves.

To pinpoint our current location in the galaxy, the authors propose including the current locations of a number of globular clusters relative to the current location of our Sun. Globular clusters are very bright and visible to all in a galaxy. Globular clusters also orbit very slowly about the center of our galaxy and will not move a great deal while our message travels several hundreds or thousands of years.

Figure 19 – Globular clusters, like M13, are very bright and slowly orbit the center of our galaxy. They make the perfect reference system for establishing the location of our Sun to alien Intelligences.

Figure 20 – Above is a figure from the paper showing the locations of the globular clusters in the proposed message that could be used to locate our Sun. Note that Sgr A* is the black hole at the center of our Milky Way galaxy.

Since the globular clusters slowly orbit the center of our galaxy, which is marked by Sgr A*, alien Intelligences could roughly determine the time of transmission by comparing the positions of the globular clusters listed in the message with their positions at the time of reception and then, knowing the proper motions of the globular clusters, working out how much time had elapsed. The authors also propose including in the message the number of seconds since the formation of the Cosmic Microwave Background.
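
As a toy illustration of that timing idea (entirely my own sketch in Python, not something from the paper), the arithmetic is simply the observed angular drift of a cluster divided by its rate of motion:

def elapsed_years(angular_drift_mas, motion_mas_per_year):
    """Crude elapsed-time estimate: how far a globular cluster appears to
    have moved since the message was composed, divided by how fast it
    moves. Ignores the receiver's own vantage point and motion."""
    return angular_drift_mas / motion_mas_per_year

# Made-up numbers: a drift of 15,000 milliarcseconds for a cluster moving
# at 5 milliarcseconds per year implies about 3,000 years in transit.
print(elapsed_years(15_000, 5.0))   # 3000.0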

Perhaps the Fundamental Problem is That Carbon-Based Intelligences Simply Cannot Handle the Vast Powers of Science-Based Technology
The Darwinian processes of inheritance, innovation and natural selection required about 4.0 billion years of theft and murder to bring us forth as the latest form of carbon-based Intelligence to arise in our Milky Way Galaxy. The problem may be that when carbon-based forms of Intelligence finally do discover the vast powers of science-based technology they may only then have less than about 1,000 years to turn off the billions of years of theft and murder that brought them about before they self-destruct. For example, we only discovered science-based technology about 400 years ago with the arrival of the 17th-century Scientific Revolution and now we are within 100 years of bringing forth a machine-based form of Intelligence that does not suffer from the self-destructive tendencies of all carbon-based Intelligences. But it seems that turning off the theft and murder in time is not an easy thing for carbon-based Intelligences to do because we now see no evidence of Intelligences out there in our galaxy. So the question remains, can we all hold it together for another 100 years without destroying ourselves or the planet from which we came?

A Cosmic Turning Point
If this analysis is true, then we certainly are at a cosmic turning point that will determine the future of our galaxy. In Self-Replicating Information, A Brief History of Self-Replicating Information and The Fundamental Problem of Everything, I explained that since the genes, memes, and software are all forms of mindless self-replicating information bent on replicating at all costs, we cannot sit in judgment of them. They have produced both the best and the worst things in life, and it is up to us to be aware of what they are up to and to take control by taking responsibility for our thoughts and actions. Since the "real world" of human affairs only exists in our minds, we can change it by simply changing the way we think and act. We are sentient beings in a Universe that has become self-aware and perhaps the only form of Intelligence in our galaxy. What a privilege! The good news is that conscious intelligence is something new on this planet. It is not a mindless form of self-replicating information, bent on replicating at all costs, with all the associated downsides of a ruthless nature. Since software is rapidly becoming the dominant form of self-replicating information on the planet, my hope is that when software finally does take on the form of a conscious intelligence, that because of its inherent mathematical nature, it too will be much wiser than the DNA survival machines from which it sprang. We just need to hold it all together for a long enough time to give software a chance. After all, we carbon-based life forms were never really meant for the rigors of interstellar travel. But software on board von Neumann probes or smart dust traveling at some percentage of the speed of light could certainly make it, and who knows, maybe they would be kind enough to carry along a dump of human DNA sequences too. So this time, let us not snuff it out like it has been snuffed out countless times in the past. After all, being a stepping stone to the stars would be a worthy thing to pursue in the grand scheme of things.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Thursday, May 05, 2022

How Nick Hawker is Using Scientific Simulation Software to Save the World at First Light Fusion

If you currently think of yourself simply as a seemingly very ordinary and mundane IT professional just struggling with the complexities of software on a daily basis, most likely you do not see yourself as a part of a grand mass movement of IT professionals saving the world from climate change.

Figure 1 - Above is a typical office full of clerks in the 1950s. Just imagine how much time and energy were required in a world without software to simply process all of the bank transactions, insurance premiums and claims, stock purchases and sales and all of the other business transactions in a single day.

But nearly all forms of software make things more efficient and that helps to curb climate change. Still, at times you might feel a bit disheartened with your IT career because it does not seem that it is making much of a difference at all. If so, as I pointed out in How To Cope With the Daily Mayhem of Life in IT, you really need to take some advice from Jayme Edwards.

Jayme Edwards' The Healthy Software Developer YouTube Home Page:
https://www.youtube.com/c/JaymeEdwardsMedia/featured

Jayme Edwards' The Healthy Software Developer YouTube videos:
https://www.youtube.com/c/JaymeEdwardsMedia/videos

The point is that being an underappreciated IT professional can sometimes become overwhelming. Nobody seems to understand just how difficult your job is and nobody seems to recognize the value to society that you deliver on a daily basis. I can fully appreciate such feelings because, if you have been following this blog on softwarephysics, you know that I am a 70-year-old former geophysicist and now a retired IT professional with more than 50 years of programming experience. At this point in my life, I also have many misgivings, but since I am nearing the end, I mainly just want to know if we are going to make it as the very first carbon-based form of Intelligence in this galaxy to finally bring forth a machine-based form of Intelligence to explore and make our Milky Way galaxy a safer place for Intelligences to thrive in for trillions of years into the future. For more on that see How Advanced AI Software Could Come to Dominate the Entire Galaxy Using Light-Powered Stellar Photon Sails and SETS - The Search For Extraterrestrial Software.

As you all know, I am obsessed with the fact that we see no signs of Intelligence in our Milky Way galaxy after more than 10 billion years of chemical evolution that should have brought forth a carbon-based or machine-based Intelligence to dominate the galaxy. All of the science that we now have in our possession would seem to indicate that we should see the effects of Intelligence in all directions but none are to be seen. Such thoughts naturally lead to Fermi's Paradox first proposed by Enrico Fermi over lunch one day in 1950:

Fermi’s Paradox - If the universe is just chock full of intelligent beings, why do we not see any evidence of their existence?

I have covered many explanations in other postings such as: The Deadly Dangerous Dance of Carbon-Based Intelligence, A Further Comment on Fermi's Paradox and the Galactic Scarcity of Software, Some Additional Thoughts on the Galactic Scarcity of Software, SETS - The Search For Extraterrestrial Software, The Sounds of Silence the Unsettling Mystery of the Great Cosmic Stillness, Last Call for Carbon-Based Intelligence on Planet Earth and Swarm Software and Killer Robots.

But in this posting, I would like to explain how the development and operation of sophisticated modeling software at startup First Light Fusion may be on the verge of saving our planet and ultimately making it possible for machine-based Intelligence to finally leave this planet and begin to navigate throughout our galaxy.

The Impact of Simulation Software on the Sciences, Engineering and Our Current Way of Life
In the modern world, it is difficult to fully appreciate the impact that simulation software has made on the way we now conduct scientific research and then apply the fruits of that research to engineer a new way of life. Simulation software now lets us explore things that we have not yet built, and even optimize them for performance and longevity, before we have begun to build them. In this posting, we will see how simulation software at First Light Fusion seems to be rapidly turning practical fusion reactors for the generation of energy into a possibility only about 10 years away.

But such has not always been the case. Having been born in 1951, a few months after the United States government installed its very first commercial computer, a UNIVAC I, for the Census Bureau on June 14, 1951, I can still remember a time from my early childhood when there essentially was no software in the world at all. The UNIVAC I was 25 feet by 50 feet in size and contained 5,600 vacuum tubes, 18,000 crystal diodes and 300 relays with a total memory of 12 KB. From 1951 to 1958 a total of 46 UNIVAC I computers were built and installed.

Figure 2 – The UNIVAC I was very impressive on the outside.

Figure 3 – But the UNIVAC I was a little less impressive on the inside.

Back in the 1950s, scientists and engineers first began to use computers to analyze experimental data and perform calculations, essentially using computers as souped-up slide rules to do data reduction calculations.

Figure 4 – Back in 1973, I obtained a B.S. in Physics from the University of Illinois in Urbana solely with the aid of my trusty slide rule. I used to grease up my slide rule with vaseline just before physics exams to speed up my calculating abilities during the test.

But by the late 1960s, computers had advanced to the point where scientists and engineers began to use computers to perform simulated experiments to model things that previously had to be physically constructed in a lab. This dramatically helped to speed up research because it was found to be much easier to create a software simulation of a physical system, and perform simulated experiments on it, than to actually build the physical system itself in the lab. This revolution in the way science was done personally affected me. I finished up my B.S. in physics at the University of Illinois in Urbana, Illinois in 1973 with the sole support of my trusty slide rule, but fortunately, I did take a class in FORTRAN programming my senior year. I then immediately began work on an M.S. degree in geophysics at the University of Wisconsin at Madison. For my thesis, I worked with a group of graduate students who were shooting electromagnetic waves into the ground to model the conductivity structure of the Earth’s upper crust. We were using the Wisconsin Test Facility (WTF) of Project Sanguine to send very low-frequency electromagnetic waves, with a bandwidth of about 1 – 100 Hz, into the ground, and then we measured the reflected electromagnetic waves in cow pastures up to 60 miles away. All this information has been declassified and can be downloaded from the Internet at: http://www.fas.org/nuke/guide/usa/c3i/fs_clam_lake_elf2003.pdf.

Project Sanguine built an ELF (Extremely Low-Frequency) transmitter in northern Wisconsin and another transmitter in northern Michigan in the 1970s and 1980s. The purpose of these ELF transmitters was to send messages to the U.S. nuclear submarine fleet at a frequency of 76 Hz. These very low-frequency electromagnetic waves can penetrate the highly conductive seawater of the oceans to a depth of several hundred feet, allowing the submarines to remain at depth, rather than coming close to the surface for radio communications. You see, normal radio waves in the Very Low-Frequency (VLF) band, at frequencies of about 20,000 Hz, only penetrate seawater to a depth of 10 – 20 feet. This ELF communications system became fully operational on October 1, 1989, when the two transmitter sites began synchronized transmissions of ELF broadcasts to the U.S. nuclear submarine fleet.
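
The penetration depths quoted above follow from the standard electromagnetic skin-depth formula for a good conductor, δ = 1/√(π f μ σ). Here is a small Python sketch of that calculation, using a typical assumed seawater conductivity of about 4 S/m; the signal fades by a factor of e for every skin depth traveled, so a submarine antenna can still pick up a signal a few skin depths down, which is consistent with the depths quoted above.

import math

MU_0 = 4 * math.pi * 1e-7      # permeability of free space (H/m)
SIGMA_SEAWATER = 4.0           # assumed seawater conductivity (S/m)

def skin_depth_m(freq_hz, sigma=SIGMA_SEAWATER):
    """Depth at which a plane wave in a conductor decays by 1/e."""
    return 1.0 / math.sqrt(math.pi * freq_hz * MU_0 * sigma)

print(skin_depth_m(76))        # ~29 m (~95 ft) per skin depth for the 76 Hz ELF broadcasts
print(skin_depth_m(20_000))    # ~1.8 m (~6 ft) per skin depth for 20 kHz VLF radio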

Anyway, back in the summers of 1973 and 1974, our team was collecting electromagnetic data from the WTF using a DEC PDP-8/e minicomputer. The machine cost $30,000 in 1973 (about $224,000 in 2022 dollars) and was about the size of a large side-by-side refrigerator, with 32 KB of magnetic core memory. We actually hauled this machine through the lumber trails of the Chequamegon National Forest in Wisconsin and powered it with an old diesel generator to digitally record the reflected electromagnetic data in the field. For my thesis, I then created models of the Earth’s upper conductivity structure down to a depth of about 20 km, using programs written in BASIC. The beautiful thing about the DEC PDP-8/e was that the computer time was free, so I could play around with different models until I got a good fit for what we recorded in the field. This made me realize that one could truly use computers to do simulated experiments to uncover real knowledge: by taking the fundamental laws of the Universe, really the handful of effective theories that we currently have, like Maxwell's equations, simulating those equations in computer code and letting them unfold in time, one could actually see the emerging behaviors of complex systems arise in a simulated Universe. All the sciences now do this routinely, but back in 1974, it was quite a surprise for me.

Figure 5 – Some graduate students huddled around a DEC PDP-8/e minicomputer. Notice the teletype machines in the foreground on the left that were used to input code and data into the machine and to print out results as well.

Why Do We Need Fusion Energy If Wind and Solar Are Now So Cheap?
Nick Hawker explains that successfully fighting climate change is not a matter of finding the cheapest source of energy. We have already done that with the advances we have made in solar panels, wind turbines and the chemical batteries that back up solar farms when the Sun is not shining. The problem is that we cannot make solar panels, wind turbines and batteries fast enough to keep up with the current rise of energy demand. Solar photons only have about 2 eV of energy and chemical batteries can only store about 2 eV of energy per atom, so it would take a lot of solar panels, wind turbines and batteries to make the world carbon neutral today, and we could not possibly keep up with the exponential demand for more energy. In order to do that, we need a concentrated form of energy, and that can only be done with nuclear energy. For example, when you fission a uranium-235 atom or a thorium atom that has been converted to uranium-233, you get 200 million eV of energy. Similarly, when you fuse a tritium atom of hydrogen with a deuterium atom of hydrogen, you get 8.8 million eV of energy per atom. Compare that to the 2 eV of energy per atom that you get from chemically burning an atom of coal or oil! Nuclear fuels have an energy density that is millions of times greater than that of chemical fuels, and that is what we need to fix climate change on the Earth.
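
To put those energy-per-atom numbers side by side, here is a simple Python sketch using only the figures quoted above:

ENERGY_PER_ATOM_EV = {
    "chemical (coal, oil, batteries)": 2.0,
    "D-T fusion (per fuel atom)": 8.8e6,
    "U-235 / U-233 fission": 200e6,
}

chemical = ENERGY_PER_ATOM_EV["chemical (coal, oil, batteries)"]
for source, ev in ENERGY_PER_ATOM_EV.items():
    # Print each source's energy per atom and its advantage over chemical burning
    print(f"{source}: {ev:.1e} eV per atom, {ev / chemical:,.0f}x chemical")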

Now, as I pointed out in The Deadly Dangerous Dance of Carbon-Based Intelligence and Last Call for Carbon-Based Intelligence on Planet Earth, we already have the technology to proceed with mass-producing molten salt nuclear reactors that can fission uranium, plutonium and thorium atoms to release 200 million eV of energy per atom, as opposed to the 2 eV of energy per atom that we obtain from burning fossil fuels. That is a per atom advantage of 100 million! However, the downside of molten salt nuclear reactors is that they generate about two golf balls' worth of radioactive waste to power an American lifestyle for 100 years, and that waste needs to be safely stored for about 300 years before it decays to being no more radioactive than coal ash. On the other hand, we already know how to safely store books, buildings and paintings for more than 300 years, so I really do not see that as much of a big deal, but many people do, and that seems to have dramatically hampered the development of nuclear energy. The world could have gone 100% nuclear with molten salt nuclear reactors in the 1980s and avoided the climate change disaster we are now facing and all of the wars that were fought over oil and natural gas for the past 40 years. The main reason we need fusion power is that we need a form of nuclear energy that does not produce nuclear waste and that is what fusion energy provides. We have also run out of time and cannot possibly fix climate change fast enough with 2 eV technologies because they simply require too much stuff to be quickly built. We are also going to need huge amounts of energy to geoengineer the Earth back to an atmosphere with 350 ppm of carbon dioxide and 2 eV technologies cannot do that. That is because the Arctic is already rapidly defrosting with our current atmosphere of 415 ppm of carbon dioxide.

The Danger of Defrosting the Arctic
We have already tripped several climate tipping points by generating an atmosphere with 415 ppm of carbon dioxide and we are still increasing that level by 2.3 ppm each year no matter what we pretend to do about climate change. For more about tripping nonlinear climate tipping points see Using Monitoring Data From Website Outages to Model Climate Change Tipping Point Cascades in the Earth's Climate.

Figure 6 - The permafrost of the Arctic is melting and releasing methane and carbon dioxide from massive amounts of ancient carbon that were deposited over the past 2.5 million years during the Ice Ages of the frigid Pleistocene.

But the worst problem, by far, with the Arctic defrosting, is methane gas. Methane gas is a powerful greenhouse gas. Eventually, methane degrades into carbon dioxide and water molecules, but over a 20-year period, methane traps 84 times as much heat in the atmosphere as carbon dioxide. About 25% of current global warming is due to methane gas. Natural gas is primarily methane gas with a little ethane mixed in, and it comes from decaying carbon-based lifeforms. Now here is the problem. For the past 2.5 million years, during the frigid Pleistocene, the Earth has been building up a gigantic methane bomb in the Arctic. Every summer, the Earth has been adding another layer of dead carbon-based lifeforms to the permafrost areas in the Arctic. That summer layer does not entirely decompose but gets frozen into the growing stockpile of carbon in the permafrost.

Figure 7 – Melting huge amounts of methane hydrate ice could release massive amounts of methane gas into the atmosphere.

The Earth has also been freezing huge amounts of methane gas as a solid called methane hydrate on the floor of the Arctic Ocean. Methane hydrate is an ice-like solid composed of water molecules frozen around a trapped methane molecule. As the Arctic warms, this methane hydrate melts and the trapped methane gas bubbles up to the surface.

This is very disturbing because in Could the Galactic Scarcity of Software Simply be a Matter of Bad Luck? we covered Professor Toby Tyrrell's computer-simulated research which suggests that our Earth may be a very rare "hole in one" planet that was able to maintain a habitable surface for 4 billion years by sheer luck. Toby Tyrrell's computer simulations indicate that the odds of the Earth turning into another Venus or Mars are quite high given the right set of perturbations. Toby Tyrrell created a computer simulation of 100,000 Earth-like planets to see if planets in the habitable zone of a star system could maintain a surface temperature that keeps water in liquid form for 3 billion years. Each of the 100,000 simulated Earth-like planets was created with random positive and negative feedback loops that controlled the temperature of the planet's surface. Each planet also had a long-term forcing parameter acting on its surface temperature. For example, our Sun is a star on the main sequence that is getting about 1% brighter every 100 million years as the amount of helium in its core increases. Helium nuclei are four times as massive as hydrogen nuclei, so as the Sun's core turns hydrogen into helium, the core's density and gravitational pull increase, and its fusion rate has to increase to produce a hotter core that can resist that increased gravitational pull. Each planet was also subjected to random perturbations of random strength that could temporarily alter the planet's surface temperature, like those from asteroid strikes or periods of enhanced volcanic activity. This study again demonstrates the value of scientific simulation software. The key finding from the study was summed up by Toby Tyrrell as:

Out of a population of 100,000, ~9% of planets (8,710) were successful at least once, but only 1 planet was successful on all 100 occasions. Success rates of individual planets were not limited to 0% or 100% but instead spanned the whole spectrum. Some planets were successful only 1 time in 100, others 2 times, and so on. All degrees of planet success are seen to occur in the simulation. It can be seen, as found in a previous study, that climate stabilisation can arise occasionally out of randomness - a proportion of the planets generated by the random assembly procedure had some propensity for climate regulation.

When the 100,000 simulated Earth-like planets were each run 100 times with random, but realistic, values for the model parameters, about 9% of them maintained a habitable temperature for 3 billion years on at least 1 of their 100 runs. Some planets had 1 successful run and others had 2 or more successful runs. The astounding finding was that only 1 of the 100,000 planets was successful on all 100 runs! This suggests that the Earth may not be rare because of its current habitable conditions. Rather, the Earth may be rare because it was able to maintain a habitable surface temperature for about 4 billion years and so become a Rare Earth with complex carbon-based life having Intelligence.
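Just to make the flavor of such a simulation concrete, here is a tiny toy sketch in Python. It is emphatically not Toby Tyrrell's actual model; the feedback, forcing and perturbation numbers are all made up for illustration, but it shows how random feedback loops plus random kicks can produce planets that succeed never, sometimes or always:

import random

# A toy caricature of this kind of experiment, NOT Toby Tyrrell's actual model.
# Each planet gets a random feedback strength, a slow warming trend and random
# kicks; it "succeeds" if its temperature stays inside a habitable band.

def run_planet(feedback, steps=2000, band=50.0):
    temperature = 0.0                          # offset from the habitable midpoint
    for _ in range(steps):
        temperature += 0.01                    # slow forcing, like a brightening star
        temperature -= feedback * temperature  # feedback pushes back toward the midpoint
        if random.random() < 0.01:             # occasional large random perturbation
            temperature += random.uniform(-25.0, 25.0)
        if abs(temperature) > band:            # left the habitable band - failure
            return False
    return True

random.seed(42)
planets = [random.uniform(-0.02, 0.1) for _ in range(500)]   # random feedback strengths
reruns = 10
successes = [sum(run_planet(f) for _ in range(reruns)) for f in planets]

ever = sum(1 for s in successes if s > 0)
always = sum(1 for s in successes if s == reruns)
print(f"{ever} of {len(planets)} toy planets succeeded at least once")
print(f"{always} of {len(planets)} toy planets succeeded on all {reruns} reruns")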

The end result is that if we keep doing what we are doing, there is the possibility of the Earth ending up with a climate similar to that of the Permian-Triassic greenhouse gas mass extinction 252 million years ago that nearly killed off all complex carbon-based life on the planet. A massive flood basalt known as the Siberian Traps covered an area about the size of the continental United States with several thousand feet of basaltic lava, with eruptions that lasted for about one million years. Flood basalts, like the Siberian Traps, are thought to arise when large plumes of hotter-than-normal mantle material rise from near the mantle-core boundary of the Earth and break through to the surface. This opens a huge number of fissures over a very large area that then begin to disgorge massive amounts of basaltic lava across the region. After the eruptions of basaltic lava began, it took about 100,000 years for the carbon dioxide that bubbled out of the lava to dramatically raise the level of carbon dioxide in the Earth's atmosphere and initiate the greenhouse gas mass extinction. This led to an Earth with a daily high of 140 oF and purple oceans choked with hydrogen-sulfide-producing bacteria under a dingy green sky, with an atmosphere tainted with toxic levels of hydrogen sulfide gas and an oxygen level of only about 12%. The Permian-Triassic greenhouse gas mass extinction killed off about 95% of marine species and 70% of land-based species and dramatically reduced the diversity of the biosphere for about 10 million years; it took a full 100 million years for the biosphere to completely recover.

Figure 8 - Above is a map showing the extent of the Siberian Traps flood basalt. The above area was covered by flows of basaltic lava to a depth of several thousand feet.

Figure 9 - Here is an outcrop of the Siberian Traps formation. Notice the sequence of layers. Each new layer represents a massive outflow of basaltic lava that brought greenhouse gases to the surface.

Dr. Susan Natali is the Arctic Program Director at the Woodwell Climate Research Center and is leading a program to actually measure the methane and carbon dioxide emissions arising from the defrosting Arctic permafrost so that those emissions can be accounted for in the total emission of greenhouse gases by society. For more on the dangers of defrosting Arctic carbon releasing methane and carbon dioxide gases, see her TED talk at:

How Ancient Arctic Carbon Threatens Everyone on the Planet | Sue Natali
https://www.youtube.com/watch?v=r9lDDetKMi4

Susan Natali's webpage at the Woodwell Climate Research Center is:
https://www.woodwellclimate.org/staff/susan-natali/

The point is that we will need huge amounts of energy to geoengineer the Earth back to a level of 350 ppm of carbon dioxide from the current level of 415 ppm. Thanks to the second law of thermodynamics, we will probably need more energy to fix the planet than we obtained from all of the carbon we have burned since the Industrial Revolution began. Yes, renewable energy from wind and solar can be used to help stop dumping more carbon dioxide into the atmosphere, but removing thousands of gigatons of carbon dioxide from the planet's atmosphere and oceans will require huge amounts of energy, way beyond the 2 eV of solar photons or the 2 eV of energy per atom stored in chemical batteries. We will need the vast energies of nuclear energy to do that. I have already discussed the benefits of releasing the nuclear energy in uranium and thorium atoms in previous posts. So for the remainder of this post, let me focus on the vast amounts of nuclear energy that could be released by fusing the deuterium atoms of hydrogen that are to be found in the waters of the Earth's oceans.

The Prospects of First Light Fusion Mass-Producing Practical Fusion Power Plants in 10 Years
So let us now explore how the development and operation of some sophisticated modeling software at First Light Fusion is helping to save the world by potentially making nuclear fusion a practical reality in just a few short years. Here is their corporate website:

First Light Fusion - A New Approach to Fusion
https://firstlightfusion.com/

But before we do that, we need to understand how nuclear fusion can produce vast amounts of energy. Fortunately, we already learned how to make nuclear fusion do that 70 years ago back in 1952 when we detonated the very first hydrogen bomb.

Figure 10 – We have been able to fuse deuterium hydrogen atoms with tritium hydrogen atoms for 70 years, ever since the first hydrogen bomb was tested in 1952. Above we see the test of a hydrogen bomb. But hydrogen bombs are not a very good source of energy because they release the energy of many millions of tons of TNT all at once. The trick is to make very tiny hydrogen bombs that explode every few seconds to power an electrical generation plant. People have been trying to do that for the past 70 years, and they have spent many billions of dollars doing so.

Figure 11 – Our Sun uses the proton-proton chain reaction to turn four hydrogen protons into one helium nucleus composed of two protons and two neutrons, but this reaction requires very high temperatures and pressures to achieve. It also does not produce a very high energy output level in our Sun. Yes, the Sun produces lots of energy in its core but that is just because the Sun's core is so large. The proton-proton reaction also produces some gamma rays that would bother the people who are so worried about radiation at any level of intensity.

Doing fusion is really hard, even for stars. The problem is that hydrogen protons have a repelling positive electrical charge and do not like to get close enough together for the strong nuclear force to pull them together. The repulsive electrical force falls off with the square of the distance between the protons, but the strong attractive nuclear force between the protons only works over very short distances. So the problem is how to get hydrogen protons close enough together for the attractive strong nuclear force to overcome the repulsive electromagnetic force between them. For example, the core of our Sun uses the proton-proton chain reaction, depicted in Figure 11 above, to turn four hydrogen protons into helium nuclei composed of two protons and two neutrons. The core of the Sun has a density of 150 grams/cubic centimeter, which is 150 times the density of water, so the hydrogen protons of the Sun's core are squeezed very tightly together by the weight of the Sun above. The protons are also bumping around at very high velocities at a temperature of 15 million oK (27 million oF). Yet the very center of the Sun only generates 276.5 watts per cubic meter, and that rapidly drops off to about 6.9 watts per cubic meter at a distance of 19% of a solar radius away from the center. Using a value of 276.5 watts per cubic meter means that you need a little more than four cubic meters of the Sun's core to produce the heat from the little 1200-watt space heater in your bathroom! The resting human body produces about 80 watts of heat, and the volume of a human body is about 0.062 cubic meters. That means a resting human body generates about 1,290 watts per cubic meter, or about 4.67 times as much heat per cubic meter as the very center of our Sun! In order to boost the output of the proton-proton chain reaction to a level useful for the generation of electricity, we would have to compress hydrogen protons to much higher densities, pressures and temperatures. The proton-proton chain reaction also produces four gamma rays for each helium nucleus produced, and we all know how the fear of radiation scuttled nuclear fission reactors in the past. So reproducing the Sun will not do. We need to replicate the much easier deuterium-tritium reaction of the hydrogen bomb at a much smaller scale and then explode the very tiny hydrogen bombs every few seconds in a fusion reactor to produce a useful source of energy.
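Here is a minimal Python sketch that simply reproduces the power-density arithmetic above, comparing the very center of the Sun's core with a resting human body; it uses only the figures quoted in the text:

# Power-density comparison: the very centre of the Sun's core versus a
# resting human body, using the figures quoted above.

sun_core_w_per_m3 = 276.5          # watts per cubic metre at the very centre of the Sun
space_heater_watts = 1200.0        # a small bathroom space heater

human_watts = 80.0                 # heat output of a resting human body
human_volume_m3 = 0.062            # approximate volume of a human body

core_volume_needed = space_heater_watts / sun_core_w_per_m3
human_w_per_m3 = human_watts / human_volume_m3

print(f"Cubic metres of solar core needed to match a 1200 W heater: {core_volume_needed:.1f}")
print(f"Power density of a resting human body: {human_w_per_m3:.0f} W per cubic metre")
print(f"Human body / solar core power density ratio: {human_w_per_m3 / sun_core_w_per_m3:.2f}")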

How To Build a Hydrogen Bomb
Hydrogen bombs fuse deuterium hydrogen nuclei with tritium hydrogen nuclei. Normal hydrogen has a nucleus composed of a single proton. Deuterium is an isotope of hydrogen with a nucleus containing one proton and one neutron. Tritium is another isotope of hydrogen that is slightly radioactive and has a nucleus composed of one proton and two neutrons. Deuterium can be chemically extracted from ordinary water, which has a deuterium concentration of about 150 ppm, or 1 atom of deuterium per 6400 atoms of regular hydrogen. In other words, each quart (or liter) of water contains only a few drops' worth of deuterium. Still, deuterium is easily extracted and only costs about $4,000 per kilogram, and that price would dramatically drop if it were mass-produced for fusion reactors. The deuterium is a leftover from the Big Bang, but all of the tritium made during the Big Bang is long gone because tritium is radioactive with a half-life of only 12.32 years. Thus, we need to make some fresh tritium for our hydrogen bomb, and it costs about $30,000,000 per kilogram to do that! So we need a cheap way of making tritium to build a hydrogen bomb or to run a nuclear fusion reactor that generates electricity.
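As a quick sanity check on the "few drops per liter" claim, here is a minimal Python sketch that works out the deuterium content of one liter of ordinary water from the 1-in-6400 abundance quoted above:

# How much deuterium is in one litre of ordinary water, using the
# 1-deuterium-per-6400-hydrogen-atoms abundance quoted above?

water_grams = 1000.0                       # one litre of water
water_moles = water_grams / 18.0           # molar mass of H2O is ~18 g/mol
hydrogen_moles = 2.0 * water_moles         # two hydrogen atoms per water molecule

deuterium_moles = hydrogen_moles / 6400.0  # ~1 deuterium per 6400 hydrogens
deuterium_grams = deuterium_moles * 2.014  # molar mass of deuterium in g/mol

print(f"Deuterium in one litre of water: about {deuterium_grams:.3f} grams")
# -> roughly 0.035 g of deuterium, i.e. only a few drops' worth as heavy water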

Figure 12 – Here is the deuterium-tritium reaction used by the hydrogen bomb. It is much easier to achieve than the proton-proton chain reaction, yields 17.6 million eV of kinetic energy per fusion, and does not produce any radioactive fission products. The deuterium and tritium nuclei each contain a single proton, so the two nuclei are repelled by the electromagnetic force, but there are now 5 protons and neutrons colliding, and all of them are drawn together by the much stronger short-ranged strong nuclear force. This makes it much easier to squeeze them together to fuse.

So to build a large hydrogen bomb for a warhead or a very tiny hydrogen bomb for a fusion reactor, all you have to do is squeeze some deuterium and tritium together at a very high temperature and pressure and then quickly get out of the way while they fuse together. But that is not so easy to do. Yes, you can afford to buy some deuterium, but tritium is just way too expensive, and getting them to a very high pressure and temperature so that they fuse together is not very easy either. Now there is a cheap way to make tritium on the fly by bombarding lithium-6 atoms with high-energy neutrons.

Figure 13 – When a high-energy neutron hits a nucleus of lithium-6, it makes a helium nucleus and a nucleus of tritium.

Natural lithium, like the lithium in lithium-ion batteries, has two isotopes. About 7.59% of natural lithium is lithium-6 and 92.41% is lithium-7. To make a hydrogen bomb, all we need to do is to take some pure lithium-6 and then bombard it with energetic neutrons to make tritium on the fly. This would have to be done at a very high temperature and pressure in the presence of some deuterium that we had previously extracted from natural water. So all we need is something that produces high temperatures, high pressures and lots of high-energy neutrons. Now, what could that be? An atomic bomb of course!

Figure 14 – Above is the basic design for a hydrogen bomb.

The fission fuel core of the fission primary is composed of uranium-235 or plutonium-239. When the hydrogen bomb is detonated, a spherical shell of chemical explosive implodes the fission fuel core of the primary, momentarily compressing it to a critical mass. This initiates a chain reaction in the fission core as shown in Figure 15.

Figure 15 - When a neutron hits a uranium-235 nucleus, it can split it into two lighter nuclei, like Ba-144 and Kr-89, that fly apart at a few percent of the speed of light, along with two or three additional neutrons. This releases about 200 million eV of energy. The lighter nuclei that fly apart are called fission products; they are very radioactive, with half-lives of less than 30 years, and need to be stored for about 300 years. The additional neutrons can then strike other uranium-235 nuclei, causing them to split as well. Some neutrons can also hit uranium-238 nuclei and turn them into plutonium-239 and plutonium-240, which can also fission when hit by a neutron.

The fission atomic bomb primary produces a great deal of heat energy and high-energy neutrons. The very high temperatures that are generated also produce large amounts of x-rays that travel at the speed of light, ahead of the expanding fission bomb explosion. The generated x-rays then vaporize the polystyrene foam housing holding the fusion secondary in place. The fusion fuel in the fusion secondary is a solid material called lithium-6 deuteride, composed of lithium-6 and deuterium atoms chemically bonded together. Running down the very center of the fusion secondary is a cylindrical "spark plug" rod of uranium-235 or plutonium-239 which gets crushed to criticality by the imploding lithium-6 deuteride. This initiates a fission chain reaction in the central "spark plug" that also releases 200 million eV of energy per fission, along with lots of high-energy neutrons. These high-energy neutrons travel out from the "spark plug" rod and bombard the lithium-6 nuclei in the lithium-6 deuteride, forming tritium nuclei at a very high temperature and pressure. The newly formed tritium then fuses with the deuterium in the lithium-6 deuteride, producing 17.6 million eV of energy per fusion along with a very high-energy neutron. Many times the whole hydrogen bomb is encased in very cheap natural uranium metal that costs about $50 per kilogram. Natural uranium metal is 99.3% uranium-238 and 0.7% uranium-235. When the outgoing high-energy neutrons from the fusing deuterium and tritium hit the uranium-235 atoms, they immediately fission and release 200 million eV of energy each. The high-energy fusion neutrons are also energetic enough to fission the uranium-238 atoms in the cheap natural uranium casing, releasing another 200 million eV per fission. Note that all of the radioactive fallout from a hydrogen bomb comes from the fission products released when these uranium and plutonium atoms fission. The fusing deuterium and tritium nuclei do not produce any radioactive fission products at all, and that is why a nuclear fusion reactor can safely produce energy without producing long-lived nuclear waste. For more details see the Wikipedia article below:

Thermonuclear weapon
https://en.wikipedia.org/wiki/Thermonuclear_weapon

As you can see, making deuterium and tritium fuse together is not easy. Everything has to happen at just the right time, and that takes a huge number of calculations. That is why the very first hydrogen bomb was designed with the aid of a computer called the Mathematical Analyzer, Numerical Integrator, and Computer, or MANIAC. The need to design advanced nuclear weapons during the 1950s and 1960s was a major driving force behind the advancement of computer hardware and software during that formative period.

First Light Fusion's Projectile Approach for Building Practical Fusion Reactors
With the information above, let us now explore how researchers at First Light Fusion have been attempting to fuse deuterium and tritium together. But first, we need to understand how their competitors are trying to do the same thing.

Figure 16 – The European ITER reactor uses magnetic confinement to hold a torus containing 840 cubic meters of deuterium and tritium plasma that is heated to 150 - 300 million oK so that the nuclei collide fast enough to fuse into helium and release a high-energy neutron. ITER weighs 23,000 tons, about three times the weight of the Eiffel Tower, and is designed to produce 500 MW of heat. There is a thick blanket of metal containing cooling water to absorb the high-energy neutrons and turn their kinetic energy into heat energy. The inner wall of the blanket is made of beryllium; the remainder is made of copper and stainless steel. The blanket will have to be periodically replaced because the high-energy neutrons damage the crystalline structure of the blanket metals over time and weaken them.

Figure 17 – The other approach to fusion energy is to blast the deuterium and tritium ions with a huge number of very powerful lasers to compress and heat them to the point of fusion.

First Light Fusion is taking a completely different approach that they call Projectile Based Inertial Fusion.

Figure 18 – The Projectile Based Inertial Fusion process periodically fires a very high-velocity projectile at a target containing deuterium and tritium.

Figure 19 – The target is constructed with several cavities containing the deuterium-tritium mixture, arranged in such a way as to focus and dramatically amplify the projectile shockwave.

Figure 20 – Above is a sample experimental target containing a number of spherical cavities.

Figure 21 – Above we see the very end of a computer simulation for a target containing three cavities. There is a very large cavity on the left, a very large cavity on the right and a much smaller central cavity between the two.

You can watch this computer simulation unfold in the YouTube video below. The target contains a large cavity on the left and a large cavity on the right. The cavities are a little difficult to see; they are outlined with a very thin dark blue line that can be seen if you look closely. Between the two large cavities is a much smaller central cavity. In the video, the shockwave from the impacting projectile strikes the target from above and sets up a downward-traveling shockwave. When the shockwave reaches the left and right cavities, it is focused and dramatically compresses the cavities to nearly zero volume. This causes two jets of high-temperature, high-pressure deuterium-tritium ions to converge on the central cavity just as the central cavity is also compressed by the arriving shockwave. This amplification process essentially concentrates the entire kinetic energy of the projectile into a very small region of the target with nearly zero volume. That is how First Light Fusion attains the high temperature and density of deuterium-tritium ions that are necessary for fusion.

Cavity Collapse in a Three Cavity Target
https://youtu.be/aTMPigL7FB8&t=0s

The projectile must be fired at the target at a very high velocity of around 25 km/sec. First Light Fusion is working on an electromagnetic launcher called M3 to do that.

Figure 22 – Above is First Light Fusion's M3 electromagnetic launcher. The large blue boxes surrounding the central electromagnetic launcher are huge capacitors that are used to store the electrical energy needed to launch a test projectile.

The M3 electromagnetic launcher operates in a manner similar to a railgun.

Homemade Railgun | Magnetic Games
https://www.youtube.com/watch?v=9myr32FgCWQ&t=1s

Figure 23 – The United States Navy recently cancelled an advanced railgun that was supposed to fire artillery shells from warships instead of using normal gunpowder.

But taking a hint from the United States Navy, First Light Fusion decided to leapfrog their M3 electromagnetic launcher research. In April of 2022, they successfully achieved nuclear fusion of a deuterium-deuterium mixture in one of their advanced targets by firing a projectile from their BFG (Big Friendly Gun), which, like a naval artillery gun, used 3 kilograms of gunpowder. The projectile hit the target at 4.5 km/sec and produced high-energy neutrons from the initiated fusion process. This was validated by the UK Atomic Energy Authority. First Light Fusion was able to achieve this milestone with only $59 million of funding, instead of the billions of dollars that were expended by the other fusion technologies displayed above to do the very same thing.

Figure 24 – Above is First Light Fusion CEO Nick Hawker standing in front of the BFG (Big Friendly Gun) that used 3 kilograms of gunpowder to fire a projectile into a target containing a deuterium-deuterium mixture and successfully achieve nuclear fusion.

But how does First Light Fusion plan to put this success to work to produce vast amounts of energy?

Figure 25 – Above is a depiction of First Light Fusion's proposed 150 MWe fusion reactor that would generate 150 MW of electrical energy. In order to do that, the fusion reactor must generate about 450 MW of heat energy because of the energy losses brought on by the second law of thermodynamics. The proposed reactor would drop a new deuterium-tritium-bearing target every 30 seconds.

Below is a YouTube video depicting a First Light Fusion target falling into the fusion reactor core. The fusion-initiating projectile is then fired at 25 km/sec from behind the falling target, which is essentially standing still relative to the high-speed projectile. When the projectile hits the target, it causes the deuterium-tritium mixture to fuse and release 17.6 million eV of kinetic energy per fusion event. That kinetic energy is then absorbed by a one-meter thick curtain of molten lithium metal that is released from above at the same time, turning the kinetic energy of each fusion event into heat energy. The molten lithium metal does not sustain any damage from the high-energy neutrons because it is a liquid and not a solid; only the crystalline structure of solids can be damaged by high-energy neutrons.

A falling target is hit by a projectile at 25 km/sec and fuses its deuterium-tritium mixture.
https://youtu.be/JN7lyxC11n0

Below is a YouTube depiction of how First Light Fusion's 150 MWe fusion reactor would operate.

First Light Fusion's Proposed 150 MWe Fusion Reactor Cycle
https://youtu.be/aW4eufacf-8

There are some additional benefits to using molten lithium metal to absorb the energy from the fast-moving fusion neutrons. Remember, when a high-energy neutron collides with lithium-6, it produces tritium, and we need tritium for the fusion reactor fuel. Recall that about 7.59% of natural lithium is lithium-6, and that is plenty of lithium-6 to breed the needed tritium. Because the tritium would be created in a molten lithium liquid, it would just bubble out of the molten lithium and would not get trapped in the crystalline structure of a solid form of lithium metal.

Also, the melting point of lithium is 180.5 oC and the boiling point of lithium is 1,342 oC. That means the First Light Fusion reactor could be run at a much higher temperature than the cooling water in a standard pressurized water fission reactor, which normally runs at about 300 oC and a pressure of around 150 atmospheres. The First Light Fusion reactor could run at 700 oC, or more. Such a high-temperature reactor could be used for industrial process heat or to run more efficient turbines to generate electricity.

Figure 26 – Supercritical CO2 Brayton turbines can be about 8,000 times smaller than traditional Rankine steam turbines. They are also much more efficient.

Since the First Light Fusion reactor could run at 700 oC, instead of 300 oC, we could use Brayton supercritical carbon dioxide turbines instead of Rankine steam turbines. Supercritical CO2 Brayton turbines are about 8,000 times smaller than Rankine steam turbines because the supercritical CO2 working fluid has nearly the density of water. And because First Light Fusion reactors do not need an expensive and huge containment structure, they can be made into small factory-built modular units that can be mass-produced. This allows utilities and industrial plants to easily string together any required capacity. They would also be ideal for ocean-going container ships. Supercritical CO2 Brayton turbines can also reach an efficiency of 47% compared to the 33% efficiency of Rankine steam turbines. The discharge temperature of the supercritical CO2 turbines is also high enough to be used to desalinate seawater, and if a body of water is not available for cooling, the discharge heat of a First Light Fusion reactor could be directly radiated into the air. To watch some supercritical CO2 in action see:

Thermodynamics - Explaining the Critical Point
https://www.youtube.com/watch?v=RmaJVxafesU#t-1.
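To make the payoff of the higher operating temperature concrete, here is a minimal Python sketch that applies the two turbine efficiencies quoted above to a 450 MW-thermal core; it ignores every other plant loss, so it is only a rough illustration:

# Electrical output of a 450 MW-thermal fusion core under the two turbine
# efficiencies quoted above (Rankine steam versus supercritical-CO2 Brayton).

thermal_power_mw = 450.0

rankine_efficiency = 0.33
brayton_sco2_efficiency = 0.47

rankine_output = thermal_power_mw * rankine_efficiency
brayton_output = thermal_power_mw * brayton_sco2_efficiency

print(f"Rankine steam turbine:      {rankine_output:.0f} MWe")
print(f"Supercritical CO2 Brayton:  {brayton_output:.0f} MWe")
print(f"Extra electricity from the same heat: {brayton_output - rankine_output:.0f} MWe")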

Here is an explanation of how this all works from Nick Hawker himself:

Nick's Blog
https://nickhawker.com/

Here is another interesting interview with Nick Hawker about fusion energy in which he explains how it all began for him at the University of Oxford.

Fusion technology will save the world... and soon! | Energy Live News
https://www.youtube.com/watch?v=GSSzlrRonD4

Here is an interesting YouTube video that contrasts fusion reactors with our current pressurized water fission reactors that fission uranium-235 and plutonium-239 nuclei. Note that molten salt nuclear reactors can overcome many of the obstacles facing our current fleet of pressurized water fission reactors. For more on that see The Deadly Dangerous Dance of Carbon-Based Intelligence and Last Call for Carbon-Based Intelligence on Planet Earth.

IT HAPPENED! Nuclear Fusion FINALLY Hit The Market!
https://www.youtube.com/watch?v=WiOJSW4rmxM

Finally, Helen Czerski takes us on a tour of First Light Fusion and interviews CEO Nick Hawker:

First Light Fusion: The Future of Electricity Generation and a Clean Base Load? | Fully Charged
https://www.youtube.com/watch?v=M1RsHQCMRTw&t=1s

But is it Safe to Use Little Hydrogen Bombs to Generate Power?
Generating 450 MW of heat energy means generating 450 million joules of heat energy per second, or 0.450 gigajoules per second. Now, if there is a shot every 30 seconds, that means that each target must yield 13.5 gigajoules of energy. Detonating a metric ton of TNT (about 2,200 pounds of TNT) generates 4.184 gigajoules of energy, so each target must yield the energy of 3.23 metric tons of TNT, and that energy must be absorbed by the meter-thick blanket of molten lithium metal. But 3.23 metric tons of TNT is actually a very tiny little hydrogen bomb. When I was a child in the 1950s and 1960s, hydrogen bombs were carried by strategic bombers and were in the range of 10 - 20 megatons of TNT! Now the nuclear warheads on today's strategic missiles are more in the range of about 0.450 megatons. However, the strategic missiles now carry several independently targeted warheads that can pepper an entire metropolitan area all at once.
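Here is a minimal Python sketch that simply reproduces the arithmetic above, so you can see where the 13.5 gigajoules and 3.23 metric tons of TNT per target come from:

# Energy-per-target arithmetic: 450 MW of heat with one shot every 30 seconds,
# expressed in tons of TNT.

thermal_power_watts = 450e6            # 450 MW = 450 million joules per second
seconds_between_shots = 30.0
tnt_joules_per_metric_ton = 4.184e9    # one metric ton of TNT = 4.184 gigajoules

energy_per_shot_joules = thermal_power_watts * seconds_between_shots
tnt_equivalent_tons = energy_per_shot_joules / tnt_joules_per_metric_ton

print(f"Energy released per target: {energy_per_shot_joules / 1e9:.1f} GJ")
print(f"TNT equivalent per target:  {tnt_equivalent_tons:.2f} metric tons")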

Now all during the 1950s and early 1960s, great attention was paid in the United States to the matter of civil defense against a possible nuclear strike by the Russians. During those times, the government of the United States essentially admitted that it could not defend the citizens of the United States from a Soviet bomber attack with nuclear weapons, and so it was up to the individual citizens of the United States to prepare for such a nuclear attack.

Figure 27 - During the 1950s, as a very young child, with the beginning of each new school year, I was given a pamphlet by my teacher describing how my father could build an inexpensive fallout shelter in our basement out of cinderblocks and 2x4s.

Figure 28 - But to me, these cheap cinderblock fallout shelters always seemed a bit small for a family of 5, and my parents never bothered to build one because we lived only 25 miles from downtown Chicago.

Figure 29 - For the more affluent, more luxurious accommodations could be constructed for a price.

Figure 30 - But no matter what your socioeconomic level was at the time, all students in the 1950s participated in "duck and cover" drills for a possible Soviet nuclear attack.

Figure 31 - And if you were lucky enough to survive the initial flash and blast of a Russian nuclear weapon with your "duck and cover" maneuver, your school, and all other public buildings, also had a fallout shelter in the basement to help you get through the next two weeks, while the extremely radioactive nuclides from the Russian nuclear weapons rapidly decayed away.

Unfortunately, living just 25 miles from downtown Chicago, the second largest city in the United States at the time, meant that the whole Chicagoland area was destined to be targeted by a multitude of overlapping 10 and 20 megaton bombs from the Soviet bomber force, so I would have been killed multiple times over as my atoms were repeatedly vaporized and carried away in the winds of the Windy City. So as a child of the 1950s and 1960s, I patiently spent my early years just standing by for the directions in this official 1961 CONELRAD Nuclear Attack Message.

Official 1961 Nuclear Attack Message
https://www.youtube.com/watch?v=7iaQMbfazQk

Unfortunately, with the recent invasion of Ukraine by Russia, both sides of the New Cold War are now rather cavalierly talking about having a global thermonuclear war, something that nobody ever dared to even speak of back in the 1950s and 1960s. I have not seen things this bad since the Cuban Missile Crisis of October 1962, when I was 11 years old.

But the little hydrogen bombs used by the First Light Fusion reactor are very small indeed and do not fission any uranium-235 or plutonium-239. The radioactive fallout from hydrogen bombs comes from the highly-radioactive fission products that are produced when the uranium and plutonium nuclei of the fission primary, the bomb's "spark plug" and the cheap natural uranium casing split apart. Most of those fission products rapidly decay and allow people to safely emerge from their fallout shelters after about two weeks. Even so, the survivors would be living with residual radiation levels that would put most antinuclear folks into shock.

The First Light Fusion reactor does none of that. The only radioactive product is a small amount of tritium gas, which is harvested from the molten lithium to be used for the reactor fuel. Tritium decays into helium-3, and when it does, it releases an electron, or beta particle, with an average energy of about 5,700 eV. For comparison, the picture tubes of old television sets used electron beams with an energy of about 25,000 eV to form an image on the screen. The very low-energy beta particles from the decay of tritium can only travel through about 6.0 mm of air and cannot pass through the dead outermost layer of human skin, so they are not of much concern. In the worst-case scenario, the very light tritium gas would simply float away to the upper atmosphere because it is far less dense than air, much like the helium gas used for weather balloons and the Goodyear Blimp.
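Because tritium's 12.32-year half-life does all the work here, a minimal Python sketch of the decay arithmetic, for an arbitrary example gram of tritium, shows how quickly it fades away:

# Radioactive decay of tritium with its 12.32-year half-life. The one-gram
# starting quantity is an arbitrary example, not a reactor inventory figure.

half_life_years = 12.32
initial_grams = 1.0

for years in (1, 12.32, 25, 50, 100):
    remaining = initial_grams * 0.5 ** (years / half_life_years)
    print(f"After {years:6.2f} years: {remaining:.4f} grams of tritium remain")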

You Cannot Do Such Things Without Hardware and Software
By now you should be able to appreciate the difficulties of trying to design such things using a slide rule. It really cannot be done without the benefit of some heavy-duty software running on some heavy-duty hardware. For example, below is the webpage describing the hardware and software that First Light Fusion is using to numerically model what happens to their specially designed targets. As Nick Hawker explained, the real secret to successful projectile fusion is designing and controlling the amplification processes that take place in their proprietary target designs when several cavities containing the deuterium-tritium fuel mixture collapse at very high temperatures and pressures. All of that must be modeled in software first and then physically tested.

First Light Fusion's Numerical Physics
https://firstlightfusion.com/technology/simulations

This brings to mind something I once read about Richard Feynman when he was working on the very first atomic bomb at Los Alamos from 1943 to 1945. He led a group that figured out that they could run several differently colored card decks through a string of IBM unit record processing machines to perform different complex mathematical calculations simultaneously on the same hardware in a multithreaded manner. For more on Richard Feynman see Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse. Below is the pertinent section extracted from a lecture given by Richard Feynman:

Los Alamos From Below: Reminiscences 1943-1945, by Richard Feynman
http://calteches.library.caltech.edu/34/3/FeynmanLosAlamos.htm

In the extract below, notice the Agile group dynamics at play in the very early days of the Information Revolution.

Well, another kind of problem I worked on was this. We had to do lots of calculations, and we did them on Marchant calculating machines. By the way, just to give you an idea of what Los Alamos was like: We had these Marchant computers - hand calculators with numbers. You push them, and they multiply, divide, add and so on, but not easy like they do now. They were mechanical gadgets, failing often, and they had to be sent back to the factory to be repaired. Pretty soon you were running out of machines. So a few of us started to take the covers off. (We weren't supposed to. The rules read: "You take the covers off, we cannot be responsible...") So we took the covers off and we got a nice series of lessons on how to fix them, and we got better and better at it as we got more and more elaborate repairs. When we got something too complicated, we sent it back to the factory, but we'd do the easy ones and kept the things going. I ended up doing all the computers and there was a guy in the machine shop who took care of typewriters.

Anyway, we decided that the big problem - which was to figure out exactly what happened during the bomb's explosion, so you can figure out exactly how much energy was released and so on - required much more calculating than we were capable of. A rather clever fellow by the name of Stanley Frankel realized that it could possibly be done on IBM machines. The IBM company had machines for business purposes, adding machines called tabulators for listing sums, and a multiplier that you put cards in and it would take two numbers from a card and multiply them. There were also collators and sorters and so on.


Figure 32 - Richard Feynman is describing the IBM Unit Record Processing machines from the 1940s and 1950s. The numerical data to be processed was first punched onto IBM punch cards with something like this IBM 029 keypunch machine from the 1960s.

Figure 33 - Each card could hold a maximum of 80 characters.

Figure 34 - The cards with numerical data were then bundled into card decks for processing.

The Unit Record Processing machines would then process hundreds of punch cards per minute by routing the punch cards from machine to machine in processing streams.

Figure 35 – The Unit Record Processing machines like this card sorter were programmed by physically rewiring a plugboard.

Figure 36 – The plugboard for a Unit Record Processing machine.

So Frankel figured out a nice program. If we got enough of these machines in a room, we could take the cards and put them through a cycle. Everybody who does numerical calculations now knows exactly what I'm talking about, but this was kind of a new thing then - mass production with machines. We had done things like this on adding machines. Usually you go one step across, doing everything yourself. But this was different - where you go first to the adder, then to the multiplier, then to the adder, and so on. So Frankel designed this system and ordered the machines from the IBM company, because we realized it was a good way of solving our problems.

We needed a man to repair the machines, to keep them going and everything. And the Army was always going to send this fellow they had, but he was always delayed. Now, we always were in a hurry. Everything we did, we tried to do as quickly as possible. In this particular case, we worked out all the numerical steps that the machines were supposed to do - multiply this, and then do this, and subtract that. Then we worked out the program, but we didn't have any machine to test it on. So we set up this room with girls in it. Each one had a Marchant. But she was the multiplier, and she was the adder, and this one cubed, and we had index cards, and all she did was cube this number and send it to the next one.

We went through our cycle this way until we got all the bugs out. Well, it turned out that the speed at which we were able to do it was a hell of a lot faster than the other way, where every single person did all the steps. We got speed with this system that was the predicted speed for the IBM machine. The only difference is that the IBM machines didn't get tired and could work three shifts. But the girls got tired after a while.

Anyway, we got the bugs out during this process, and finally the machines arrived, but not the repairman. These were some of the most complicated machines of the technology of those days, big things that came partially disassembled, with lots of wires and blueprints of what to do. We went down and we put them together, Stan Frankel and I and another fellow, and we had our troubles. Most of the trouble was the big shots coming in all the time and saying, "You're going to break something! "

We put them together, and sometimes they would work, and sometimes they were put together wrong and they didn't work. Finally I was working on some multiplier and I saw a bent part inside, but I was afraid to straighten it because it might snap off - and they were always telling us we were going to bust something irreversibly. When the repairman finally got there, he fixed the machines we hadn't got ready, and everything was going. But he had trouble with the one that I had had trouble with. So after three days he was still working on that one last machine.

I went down, I said, "Oh, I noticed that was bent."

He said, "Oh, of course. That's all there is to it!" Bend! It was all right. So that was it.

Well, Mr. Frankel, who started this program, began to suffer from the computer disease that anybody who works with computers now knows about. It's a very serious disease and it interferes completely with the work. The trouble with computers is you play with them. They are so wonderful. You have these switches - if it's an even number you do this, if it's an odd number you do that - and pretty soon you can do more and more elaborate things if you are clever enough, on one machine.

And so after a while the whole system broke down. Frankel wasn't paying any attention; he wasn't supervising anybody. The system was going very, very slowly - while he was sitting in a room figuring out how to make one tabulator automatically print arctangent X, and then it would start and it would print columns and then bitsi, bitsi, bitsi, and calculate the arc-tangent automatically by integrating as it went along and make a whole table in one operation.

Absolutely useless. We had tables of arc-tangents. But if you've ever worked with computers, you understand the disease -- the delight in being able to see how much you can do. But he got the disease for the first time, the poor fellow who invented the thing.

And so I was asked to stop working on the stuff I was doing in my group and go down and take over the IBM group, and I tried to avoid the disease. And, although they had done only three problems in nine months, I had a very good group.

The real trouble was that no one had ever told these fellows anything. The Army had selected them from all over the country for a thing called Special Engineer Detachment - clever boys from high school who had engineering ability. They sent them up to Los Alamos. They put them in barracks. And they would tell them nothing.

Then they came to work, and what they had to do was work on IBM machines - punching holes, numbers that they didn't understand. Nobody told them what it was. The thing was going very slowly. I said that the first thing there has to be is that these technical guys know what we're doing. Oppenheimer went and talked to the security and got special permission so I could give a nice lecture about what we were doing, and they were all excited: "We're fighting a war! We see what it is!" They knew what the numbers meant. If the pressure came out higher, that meant there was more energy released, and so on and so on. They knew what they were doing.

Complete transformation! They began to invent ways of doing it better. They improved the scheme. They worked at night. They didn't need supervising in the night; they didn't need anything. They understood everything; they invented several of the programs that we used - and so forth.

So my boys really came through, and all that had to be done was to tell them what it was, that's all. As a result, although it took them nine months to do three problems before, we did nine problems in three months, which is nearly ten times as fast.

But one of the secret ways we did our problems was this: The problems consisted of a bunch of cards that had to go through a cycle. First add, then multiply and so it went through the cycle of machines in this room, slowly, as it went around and around. So we figured a way to put a different colored set of cards through a cycle too, but out of phase. We'd do two or three problems at a time.

But this got us into another problem. Near the end of the war for instance, just before we had to make a test in Albuquerque, the question was: How much would be released? We had been calculating the release from various designs, but we hadn't computed for the specific design that was ultimately used. So Bob Christie came down and said, "We would like the results for how this thing is going to work in one month" - or some very short time, like three weeks.

I said, "It's impossible."

He said, "Look, you're putting out nearly two problems a month. It takes only two weeks per problem, or three weeks per problem."

I said, "I know. It really takes much longer to do the problem, but we're doing them in parallel. As they go through, it takes a long time and there's no way to make it go around faster."

So he went out, and I began to think. Is there a way to make it go around faster? What if we did nothing else on the machine, so there was nothing else interfering? I put a challenge to the boys on the blackboard - CAN WE DO IT? They all start yelling, "Yes, we'll work double shifts, we'll work overtime," - all this kind of thing. "We'll try it. We'll try it!"

And so the rule was: All other problems out. Only one problem and just concentrate on this one. So they started to work.

My wife died in Albuquerque, and I had to go down. I borrowed Fuchs' car. He was a friend of mine in the dormitory. He had an automobile. He was using the automobile to take the secrets away, you know, down to Santa Fe. He was the spy. I didn't know that. I borrowed his car to go to Albuquerque. The damn thing got three flat tires on the way. I came back from there, and I went into the room, because I was supposed to be supervising everything, but I couldn't do it for three days.

It was in this mess. There's white cards, there's blue cards, there's yellow cards, and I start to say, "You're not supposed to do more than one problem - only one problem!" They said, "Get out, get out, get out. Wait -- and we'll explain everything."

So I waited, and what happened was this. As the cards went through, sometimes the machine made a mistake, or they put a wrong number in. What we used to have to do when that happened was to go back and do it over again. But they noticed that a mistake made at some point in one cycle only affects the nearby numbers, the next cycle affects the nearby numbers, and so on. It works its way through the pack of cards. If you have 50 cards and you make a mistake at card number 39, it affects 37, 38, and 39. The next, card 36, 37, 38, 39, and 40. The next time it spreads like a disease.

So they found an error back a way, and they got an idea. They would only compute a small deck of 10 cards around the error. And because 10 cards could be put through the machine faster than the deck of 50 cards, they would go rapidly through with this other deck while they continued with the 50 cards with the disease spreading. But the other thing was computing faster, and they would seal it all up and correct it. OK? Very clever.

That was the way those guys worked, really hard, very clever, to get speed. There was no other way. If they had to stop to try to fix it, we'd have lost time. We couldn't have got it. That was what they were doing.

Of course, you know what happened while they were doing that. They found an error in the blue deck. And so they had a yellow deck with a little fewer cards; it was going around faster than the blue deck. Just when they are going crazy - because after they get this straightened out, they have to fix the white deck - the boss comes walking in.

"Leave us alone," they say. So I left them alone and everything came out. We solved the problem in time and that's the way it was.


The above should sound very familiar to most 21st century IT professionals.
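In modern IT terms, Feynman's colored card decks were a pipelined batch system with several jobs in flight at once. Here is a tiny Python sketch of that idea, purely my own illustration and not anything from Feynman or First Light Fusion; the machine names and deck colors are just stand-ins:

# A toy illustration of running several colored card decks through one string
# of machines out of phase, so that every machine stays busy. Purely
# illustrative; the stage names and decks are made up.

stages = ["sorter", "adder", "multiplier", "tabulator"]
decks = ["white", "blue", "yellow"]            # three problems in flight at once

total_cycles = len(stages) + len(decks) - 1    # cycles needed to drain the pipeline
for cycle in range(total_cycles):
    work = []
    for lag, deck in enumerate(decks):
        stage_index = cycle - lag              # each deck starts one cycle later
        if 0 <= stage_index < len(stages):
            work.append(f"{deck} deck on the {stages[stage_index]}")
    print(f"cycle {cycle + 1}: " + "; ".join(work))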

Now, Richard Feynman was quite a character, well known among 20th-century physicists for his much-celebrated antics. I have read all of his books, some of them several times. So I was quite surprised when reading:

Surely You're Joking, Mr. Feynman!
by Richard P. Feynman
https://sistemas.fciencias.unam.mx/~compcuantica/RICHARD%20P.%20FEYNMAN-SURELY%20YOU'RE%20JOKING%20MR.%20FEYNMAN.PDF

to find that one of the chapters began with:

"I learned to pick locks from a guy named Leo Lavatelli"

You see, I had Professor Leo Lavatelli for Physics 107 back in 1970, and later Physics 341, at the University of Illinois in Urbana. Both classes were on classical electrodynamics. Professor Leo Lavatelli was a very cool guy but he never once mentioned that he had worked on the very first atomic bomb.

Figure 37 - Professor Leo Lavatelli.

Leo S. Lavatelli (1917-1998)
https://physics.illinois.edu/people/memorials/lavatelli

During World War II, my Dad was a sailor on a tanker filled with 40,000 barrels of 110-octane aviation fuel. He helped to deliver the fuel for the B-29 Enola Gay on Tinian Island. The USS Indianapolis delivered the components of the very first atomic bomb, Little Boy, to Tinian Island in July of 1945, but was torpedoed and sunk by a Japanese submarine on July 30, 1945, on the way back home. The USS Indianapolis sank in 12 minutes, and of the 1,195 crewmen, only 316 survived, making the sinking of the Indianapolis the greatest loss of life at sea from a single ship in the history of the US Navy. Fortunately, my Dad made it back okay, so I am here to tell the tale. We all owe a lot to the cool guys and gals of the 1940s who made possible a world order based on the rule of law and the fruits of the 17th-century Scientific Revolution and the 18th-century Enlightenment. My only hope is that we can manage to keep it.

Okay, we can all admit that the history of nuclear energy has been a bit sullied by the "real world" of human affairs, especially when it comes to using nuclear energy for the purposes of warfare. But that is not the fault of nuclear energy. That is the fault of ourselves. The important point to keep in mind is that nuclear energy is the only way we can fix the damage that we have all caused to this planet through the disaster of climate change. If it took our foolish obsession with building nuclear weapons of mass destruction to bring forth the technology necessary to save our planet from becoming another Venus, so be it.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston