Tuesday, November 26, 2019

Last Call for Carbon-Based Intelligence on Planet Earth

In Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse, I explained that a major problem arises whenever a large group of people comes together into any organizational structure. The problem is that no matter how things are set up, there always seem to be about 1% of the population who like to run things, and there is nothing wrong with that. We certainly always need somebody around to run things because, honestly, 99% of us simply do not have the ambition or desire to do so. Of course, the problem throughout history has always been that the top 1% naturally tended to abuse the privilege a bit and overdo things a little, resulting in 99% of the population having a substantially lower economic standard of living than the top 1%, and that has led to several revolutions in the past that did not always end so well. Now, once a dominance hierarchy has been established, no matter how it is set up, another universal problem arises from the fact that:

1. People like to hear what they like to hear.

2. People do not like to hear what they do not like to hear.

Throughout human history, it seems that civilizations have always gotten into trouble once a dominance hierarchy has formed composed of a large number of individuals stubbornly adhering to the above in the face of desperate times. And I would like to suggest that this same universal phenomenon will naturally arise for any planet dominated by a carbon-based Intelligence. That is why in The Deadly Dangerous Dance of Carbon-Based Intelligence I offered up my Null Result Hypothesis as a possible explanation for Fermi's Paradox:

Fermi’s Paradox - If the universe is just chock full of intelligent beings, why do we not see any evidence of their existence?

Briefly stated:

Null Result Hypothesis - What if the explanation to Fermi's Paradox is simply that the Milky Way galaxy has yet to produce a form of interstellar Technological Intelligence because all Technological Intelligences are destroyed by the very same mechanisms that bring them forth?

By that, I mean that the Milky Way galaxy has not yet produced a form of Intelligence that can make itself known across interstellar distances, including ourselves. I then went on to propose that the simplest explanation for this lack of contact could be that the conditions necessary to bring forth a carbon-based interstellar Technological Intelligence on a planet or moon were also the same kill mechanisms that eliminated all forms of carbon-based Technological Intelligences with 100% efficiency. I then suggested that this kill mechanism might be the tendency for carbon-based Technological Intelligences to mess with their planet's or moon's carbon cycle as we seem to be doing today with the Earth. For more on that see This Message on Climate Change Was Brought to You by SOFTWARE. If true, this means that over the past 10 billion years, not a single form of carbon-based Intelligence has arisen in the Milky Way galaxy to become an interstellar Technological Intelligence. And given the limitations of carbon-based Intelligence, that also most likely means that no form of carbon-based Intelligence has successfully crossed over to a silicon-based Intelligence. This is a deeply disturbing finding because we now know that about 20% of the stars in the Milky Way have planets capable of sustaining carbon-based life. That comes to about 80 billion worlds in the Milky Way capable of sustaining carbon-based life. So even if carbon-based Intelligence is extremely rare, there should have been a huge number of carbon-based Intelligences crossing over to a silicon-based Intelligence over the past 10 billion years in the Milky Way. Yet, there appear to be none. For more on this please see A Brief History of Self-Replicating Information and Is Self-Replicating Information Inherently Self-Destructive?
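
A quick back-of-the-envelope calculation shows just how stark that null result is. The star count and the per-world odds in the sketch below are my own illustrative assumptions; the post itself only quotes the 20% figure and the 80-billion-world total:

```python
# Rough arithmetic behind the Null Result Hypothesis argument.
# Assumptions not from the post: ~400 billion stars in the Milky Way and an
# arbitrary, purely illustrative 1-in-a-million chance of Intelligence per habitable world.
stars_in_milky_way = 400e9
fraction_with_habitable_worlds = 0.20          # ~20% of stars, per the post
habitable_worlds = stars_in_milky_way * fraction_with_habitable_worlds
print(f"Potentially habitable worlds: {habitable_worlds:.0e}")   # ~8e10, i.e. ~80 billion

chance_per_world = 1e-6                        # make Intelligence absurdly rare
expected_intelligences = habitable_worlds * chance_per_world
print(f"Expected carbon-based Intelligences: {expected_intelligences:,.0f}")
# Even at one-in-a-million odds, that is ~80,000 expected Intelligences, yet we observe none.
```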

So How Are We Doing?
Well, instead of terraforming Mars we seem to have been venus-forming the Earth for the past several hundred years, ever since carbon-based Intelligence discovered technology.

Figure 1 – Ever since carbon-based Intelligence on the Earth discovered technology, carbon dioxide levels have been increasing.

Now, in 1992, at the Earth Summit held in Rio de Janeiro, the world finally adopted the United Nations Framework Convention on Climate Change (UNFCCC) to tackle global climate change. The purpose of the treaty was to reduce greenhouse gas emissions and prevent the dangerous effects of climate change. Every year since 1995, the Conference of the Parties, or COP, has been held to report on the progress made towards this objective. Currently, the world is coming together for the 25th annual meeting, COP25, in Madrid. But the sad fact is that, as we all know, nothing really has been done in the last 25 years to curb greenhouse gas emissions.

Figure 2 – Despite the 1992 Rio, 1997 Kyoto, 2009 Copenhagen and the 2016 Paris agreements, the data show that nothing really has changed as a result of those efforts.

About 40 years ago, I was an exploration geophysicist exploring for oil, first with Shell and then with Amoco. But in 1979, I made a career change into IT and then spent about 40 years in IT working at various corporations. However, as a geophysicist by training, I have always been very concerned about climate change. A few weeks back, I watched Greta Thunberg's full speech from the U.N. Climate Action Summit at:

How dare you!
https://www.youtube.com/watch?v=-4WqLIFava4#t-1

and I was deeply moved. I now have five grandchildren, all 8 years old or younger, and being 68 years of age, I know that I have less than 20 years left. So I will not be around to see how this all works out. But all of the science that I know tells me that we have just about run out of time, as Greta Thunberg so wisely points out.

In her speech, Greta Thunberg pointed out that the greatest danger is that rising temperatures and acidification of the oceans will reach a tipping point and trigger geochemical processes with strong positive feedbacks that could take this all out of our power to stop. For example, the Arctic is defrosting. That means there is less ice up north to reflect incoming high-energy visible photons. All the energy in those high-energy visible photons has to be radiated back into space as low-energy infrared photons on a daily basis to maintain equilibrium. But we are pumping carbon dioxide molecules into the atmosphere that prevent that from happening, and that causes the air temperature to rise. Warmer air can hold many more water molecules than cooler air, and water molecules are also very good at absorbing infrared photons, adding to the problem. The rising air temperatures then melt even more Arctic ice. But the worst problem, by far, with the Arctic defrosting is methane gas. Methane is a powerful greenhouse gas. Eventually, methane degrades into carbon dioxide and water molecules, but over a 20-year period, methane traps 84 times as much heat in the atmosphere as carbon dioxide. About 25% of current global warming is due to methane gas. Natural gas is primarily methane with a little ethane mixed in, and it comes from decaying carbon-based lifeforms. Now here is the problem. For the past 2.5 million years, during the frigid Pleistocene, the Earth has been building up a gigantic methane bomb in the Arctic. Every summer, the Earth has been adding another layer of dead carbon-based lifeforms to the permafrost areas of the Arctic. That summer layer does not entirely decompose but gets frozen into the growing stockpile of carbon in the permafrost. The Earth has also been freezing huge amounts of methane gas as a solid called methane hydrate on the floor of the Arctic Ocean. Methane hydrate is an ice-like solid composed of water molecules frozen around trapped methane molecules. As the Arctic warms, this hydrate melts and the trapped methane bubbles up to the surface. The end result is that if we keep doing what we are doing, there is the possibility of the Earth ending up with a climate having a daily high of 140 °F, with purple oceans choked with hydrogen-sulfide-producing bacteria under a dingy green sky, an atmosphere tainted with toxic levels of hydrogen sulfide gas and an oxygen level of only 12%, like the Earth had during the End-Permian greenhouse gas mass extinction 252 million years ago. Such a catastrophe might not cause the emerging carbon-based Intelligence of the Earth to go extinct, but it most probably would put an end to our ability to make ourselves known to the rest of the Milky Way galaxy.
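
The energy-balance argument above can be made concrete with a toy zero-dimensional radiative model. The sketch below is a cartoon, not a climate model: it assumes the standard solar constant of about 1361 W/m² and treats the Earth as a uniform blackbody, but it shows how shaving a little off the planet's albedo (less reflective Arctic ice) raises the equilibrium temperature at which the absorbed sunlight can be radiated back to space as infrared:

```python
# Toy zero-dimensional energy-balance model: equilibrium temperature vs. albedo.
# Assumptions: solar constant S ~ 1361 W/m^2, Earth treated as a uniform blackbody emitter.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0               # solar constant, W/m^2

def equilibrium_temp(albedo):
    """Temperature at which outgoing infrared balances absorbed sunlight."""
    return ((S * (1.0 - albedo)) / (4.0 * SIGMA)) ** 0.25

for albedo in (0.30, 0.29, 0.28):   # less Arctic ice means a lower planetary albedo
    print(f"albedo {albedo:.2f}: equilibrium temperature {equilibrium_temp(albedo):.1f} K")
# Each 0.01 drop in albedo raises the equilibrium temperature by roughly 0.9 K,
# before the greenhouse effect itself is even considered.
```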

Figure 3 – Melting huge amounts of methane hydrate ice could release massive amounts of methane gas into the atmosphere.

So What to Do?
Now I love solar and wind power. In fact, I have been buying wind power electricity for the past 10 years. It costs about 50% more to generate, but I only use about 23% of the electricity that my neighbors use, so that only runs me about $240 per year. I also drive a hybrid car that gets 60 - 70 mpg in the summer and 40 - 50 mpg in the winter when the hybrid battery does not work as efficiently. My wife and I do not fly and we do not travel much now that we are both retired. If you want to see the world, just go on Street View of Google Maps and you can do all the sightseeing you want, like walking around the Eiffel Tower, or hiking along a trail in the Grand Canyon. Yes, I know that my efforts really do not make much of a difference, and that I just do these things to make myself feel a little bit better about the current situation. I also know that wind and solar have very low energy densities so that it takes a lot of windmills and solar panels to capture large amounts of energy. Currently, wind and solar only provide for about 4% of the world's energy consumption. I just don't think we have enough time left to go entirely to a solar and wind-powered world. We are driving 80 miles an hour into a concrete wall with only 100 feet left to brake! For example, in:

Roadmap To Nowhere - The Myth of Powering the Nation With Renewable Energy
https://www.roadmaptonowhere.com/

Mike Conley and Tim Maloney use the numbers from the 132-page report of the environmental group The Solutions Project, which calls for 18 billion square meters of solar panels and 500,000 5 MW wind turbines to supply all of the energy needs of the United States. Mike and Tim point out that once all of this infrastructure has been constructed on 132,000 square miles of land, we will need to replace 1.23 million square meters of solar panels and 80 of the 5 MW wind turbines every day, forever, as the solar panels and wind turbines wear out.
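
Those replacement numbers follow from simple lifetime arithmetic. The sketch below just works the quoted figures backwards; the implied service lifetimes are my own inference, not numbers from the report:

```python
# Replacement-rate arithmetic for an all-renewables build-out (figures quoted in the post).
solar_area_m2 = 18e9        # 18 billion square meters of solar panels
turbines = 500_000          # 5 MW wind turbines
solar_replaced_per_day_m2 = 1.23e6
turbines_replaced_per_day = 80

# Implied service lifetimes (my inference: total installed stock / daily replacement rate)
solar_lifetime_years = solar_area_m2 / solar_replaced_per_day_m2 / 365.25
turbine_lifetime_years = turbines / turbines_replaced_per_day / 365.25

print(f"Implied solar panel lifetime:  {solar_lifetime_years:.0f} years")    # ~40 years
print(f"Implied wind turbine lifetime: {turbine_lifetime_years:.0f} years")  # ~17 years
```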

Figure 4 – The new GE 5 MW wind turbine is 500 feet tall, about the height of a 50-story building. We will need 500,000 of them and will need to replace 80 of them each day as they wear out in the future.

So obviously, what we have been doing thus far is clearly not working. The planet is dying and everybody is pretending that it is not. The Right loves fossil fuels and is pretending that there is no problem. The Left is pretending that the problem can be solved with wind and solar power alone. And the Middle has other concerns that they find more important. How can we change that? The first thing we need to do is to realize that all forms of carbon-based life are forms of self-replicating information that have been honed by 4.0 billion years of natural selection to be fundamentally selfish in nature. That is why most people around the world will not spend a nickel more on energy if they can avoid doing so. That is both a good thing and a bad thing. The bad thing is that most people will simply not spend a nickel more on solar or wind power to reduce carbon dioxide emissions. However, the good thing is that if we can come up with a source of energy that is cheaper than coal and other forms of carbon-based fuels, people will drop the carbon-based fuels like a hot potato all on their own. So I would like to recommend that we all take a look at molten salt nuclear reactors since they have the potential to produce energy at a much lower cost than carbon-based fuels and also could be easily mass-produced using far fewer material resources than solar or wind. Bringing in molten salt nuclear reactors should not be seen as a substitute for continuing on with solar, wind and fusion sources of energy. We just need a cheap form of energy that appeals to those still committed to carbon-based fuels. We also need an insurance policy in case it is found that wind and solar cannot do the job all on their own. Yes, I know that many of you may dislike nuclear energy because:

1. Nuclear reactors tend to explode and release radioactive clouds that can poison large areas for thousands of years.
2. Nuclear reactors produce nuclear waste that needs to be buried for 200,000 years and we do not know how to take care of things for 200,000 years.
3. Nuclear reactors produce plutonium that can be used for making atomic bombs.

Figure 5 – Currently, we are running 1950s-style PWR (Pressurized Water Reactors) with coolant water at 300 °C and 80 atmospheres of pressure.

Personally, the reason I have been buying wind-powered electricity for the past decade is that I had given up on nuclear energy as a possible solution. Nuclear reactors just seemed to require forever to build and were far too expensive to effectively compete with coal or natural gas. And nuclear reactors seemed to blow up every decade or so, no matter what the nuclear engineers did to make them safer. I also assumed that the nuclear engineers would have come up with something better over the past 60 years if such a thing were possible.

But, recently, I have learned that over the past 60 years, the nuclear engineers have indeed come up with many new designs for nuclear reactors that are thousands of times superior to what we have today. But because of many stupid human reasons that I will not go into, these new designs have been blocked for 60 years! And because nuclear reactions can produce two million times as much energy as chemical reactions, they may be our last chance. All of the problems we have with our current nuclear reactors stem from running PWR (Pressurized Water Reactors) that were designed back in the 1950s and early 1960s. Now, no business today relies on 1950s-style vacuum tube computers with 250 K of memory to run a business, but our utilities happily run 1950s-style PWR nuclear reactors! The good news is that most of the problems with our technologically-ancient PWR reactors stem from using water as a coolant. A cubic foot of water makes 1,000 cubic feet of steam at atmospheric pressure. That is why PWR reactors need a huge reinforced concrete containment structure to hold large amounts of radioactive steam if things go awry. Do you remember the second law of thermodynamics from Entropy - the Bane of Programmers and The Demon of Software? The efficiency of extracting useful mechanical work from a heat reservoir depends on the temperature difference between the heat reservoir and the exhaust reservoir.

Maximum Efficiency = 1 - Tc/Th

where Tc and Th are the temperatures of the cold and hot reservoirs measured in kelvin (K). The second law of thermodynamics tells us that we need to run a nuclear reactor with the highest Th possible to make it as efficient as possible. So PWR reactors have to run with water at around 300 °C under high pressure to achieve some level of efficiency. For example, using Tc as a room temperature of 72 °F (295 K) and 300 °C (573 K) coolant water, we get:

Maximum Efficiency = 1 - 295 K/573 K = 0.485 = 48.5%

Recall that water at one atmosphere of pressure boils at 100 °C, so 300 °C coolant water has to be kept under a great deal of pressure so that it does not boil away.
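
To see why running hotter matters so much, here is the same Carnot calculation applied to a 300 °C PWR and to a 700 °C molten salt reactor (a simple sketch using the formula above; the 700 °C figure is the MSR operating temperature discussed later in this post):

```python
# Carnot (maximum theoretical) efficiency for a 300 C PWR vs a 700 C molten salt reactor.
def carnot_efficiency(t_cold_k, t_hot_k):
    """Maximum efficiency of a heat engine between two reservoirs (temperatures in kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

T_COLD = 295.0             # ~72 F room-temperature exhaust reservoir
pwr_hot = 300.0 + 273.15   # 300 C PWR coolant water
msr_hot = 700.0 + 273.15   # 700 C molten salt

print(f"PWR maximum efficiency: {carnot_efficiency(T_COLD, pwr_hot):.1%}")  # ~48%
print(f"MSR maximum efficiency: {carnot_efficiency(T_COLD, msr_hot):.1%}")  # ~70%
```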

Figure 6 – Above we see a plot of the boiling point of water as a function of pressure. From the plot, we see that water at 300 °C must be kept under a pressure of 80 atmospheres! For comparison, the air in your car's tires is under about 2.3 atmospheres of pressure.

The other major problem is that the centers of the solid fuel rods run at about 2,000 °C and have to be constantly cooled by flowing water or they will melt. Even if all of the control rods are dropped into the core to stop the fuel from further fissioning, the residual radioactivity in the fuel rods will cause the fuel rods to melt if they are not constantly cooled by flowing water. Thus, most of the advanced technology used to run a PWR is safety technology designed to keep 300 °C water under 80 atmospheres from flashing into radioactive steam. The other problem that can occur in a meltdown situation is that as the water rapidly boils away, it can oxidize the cladding of the 2,000 °C fuel rods, releasing hydrogen gas. The liberated hydrogen gas can then easily explode the reactor core like a highly radioactive hand grenade. Again, that is why PWR reactors need a huge and very expensive reinforced concrete containment structure to hold in large amounts of radioactive materials in the event that the reactor should melt down. A PWR is kept safe by many expensive and redundant safety systems that keep the water moving. So a PWR is like a commercial jet aircraft. So long as at least one of the jet engines is running, the aircraft is okay. But if all of the jet engines should stop, we end up with a tremendous tragedy.

Figure 7 - When a neutron hits a uranium-235 nucleus, it can split it into two lighter nuclei, like Ba-144 and Kr-89, that fly apart at a few percent of the speed of light, along with two or three additional neutrons. The lighter nuclei that fly apart are called fission products; they are very radioactive, with half-lives of less than 30 years, and need to be stored for about 300 years. The additional neutrons can then strike other uranium-235 nuclei, causing them to split as well. Some neutrons can also hit uranium-238 nuclei and turn them into radioactive nuclei heavier than uranium with very long half-lives that require them to be stored for about 200,000 years.

PWRs also waste huge amounts of uranium. Currently, we take 1,000 pounds of uranium and fission only about 7 pounds of it. That creates about 7 pounds of fission products that are very radioactive, with short half-lives of less than 30 years. Those 7 pounds of fission products have to be kept buried for 10 half-lives, which comes to about 300 years. But we know how to do that. After all, the United States Constitution is 232 years old! The problem is that the remaining 993 pounds of uranium gets blasted by neutrons and turns into radioactive elements with atomic numbers greater than that of uranium. That 993 pounds of radioactive waste has to be buried for 200,000 years!
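
The 300-year figure comes from the usual rule of thumb that radioactivity halves with every half-life, so after 10 half-lives less than a tenth of a percent of the original activity remains. A quick sketch of that arithmetic, using the 30-year bounding half-life quoted above:

```python
# Why "10 half-lives" is the usual rule of thumb for short-lived fission products.
half_life_years = 30        # bounding half-life for the major short-lived fission products
half_lives = 10

remaining_fraction = 0.5 ** half_lives
print(f"After {half_lives * half_life_years} years, "
      f"{remaining_fraction:.4%} of the original activity remains")   # ~0.10%
```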

Molten Salt Nuclear Reactors

Figure 8 – Above is a diagram showing the basic components of a molten salt reactor (MSR).

A molten salt reactor (MSR) avoids all of these problems by using a melted uranium fluoride salt for fuel instead of solid fuel rods. The uranium salt is already a liquid at a temperature of 700 °C or more and is pumped at a very low pressure through the reactor core. An MSR cannot melt down because it is already melted! And there is no cooling water that can boil away or generate explosive hydrogen gas when the core gets too hot. An MSR is a thermal reactor that uses graphite in the reactor core to slow down the neutrons that cause fission. Without the presence of graphite, the fission chain reaction stops all by itself. The use of graphite as a moderator also helps an MSR run in a self-stabilizing manner. If the uranium fuel salt gets too hot, it expands, and less of the heat-generating fuel salt will be found in the graphite-bearing core, so the fuel salt cools down. On the other hand, if the fuel salt gets too cold, it contracts, and more of the heat-generating fuel salt will be found in the graphite-bearing core, so the fuel salt heats up. This is the same feedback loop mechanism that keeps your house at a comfortable temperature in the winter.
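
That thermostat-like behavior is a negative feedback loop: hotter salt expands, pushes fuel out of the moderated core, and power drops. The little simulation below is only a cartoon of that loop (the coefficients are made-up illustrative numbers, not reactor physics), but it shows how such a feedback pulls the temperature back to its set point after a disturbance:

```python
# Cartoon of the MSR's self-stabilizing temperature feedback.
# All coefficients are illustrative only; this is not reactor physics.
set_point = 700.0        # desired fuel-salt temperature, C
temperature = 760.0      # start with the salt running too hot
feedback_gain = 0.2      # hotter salt expands out of the graphite core, so fission power drops

for minute in range(10):
    # The power imbalance is proportional to the distance from the set point,
    # with a sign that always pushes the temperature back toward the set point.
    power_imbalance = -feedback_gain * (temperature - set_point)
    temperature += power_imbalance
    print(f"minute {minute}: fuel salt at {temperature:.1f} C")
# The 60 C excursion decays away on its own, with no control rods and no operator action.
```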

An MSR has a solid plug called the "freeze plug" at the bottom of the core that melts if the uranium fuel salt gets too hot. The melted MSR fuel then flows through the melted plug into several large tanks that contain no graphite, and that stops any further fissioning. The fuel salt then slowly cools down on its own. The uranium fuel salt can then be reused when things return to normal. There is also a catch basin under the whole reactor core. If the freeze plug hole should get clogged up for some reason and the core ruptures, the uranium fuel salt is caught by the catch basin and drained into the dump tanks. Because the safety mechanisms of an MSR rely only on the laws of physics - gravity, the melting of solids at certain temperatures and the need for graphite to be present to slow down neutrons - an MSR cannot become a disaster. So unlike a PWR reactor, a molten salt nuclear reactor is more like a car on a lonely country road than a jet aircraft in flight. If the car engine should die, the car slowly coasts to a stop all on its own with no action needed by the driver. A molten salt nuclear reactor is a "walk away" reactor, meaning that you can walk away from it and it will safely shut itself down on its own.

An MSR can also be run as a breeder reactor that turns all 1,000 pounds of uranium into fission products with half-lives of less than 30 years. As the fuel circulates, the fission products can be chemically removed from the liquid fuel and then buried for 300 years. So instead of only using 0.7% of the uranium and turning 99.3% of it into waste that needs to be buried for 200,000 years, we use 100% of the uranium and turn it into waste that needs to be buried for only 300 years. The world contains about four times as much thorium as uranium, and an MSR can use thorium as a fuel too. An MSR can breed thorium-232 into fissile uranium-233 via the reaction:

Thorium-232 + neutron → Protactinium-233 → Uranium-233

The thorium-232 absorbs a neutron and turns into protactinium-233, which then decays into uranium-233, which can fission just like uranium-235. The half-life of protactinium-233 is 27 days, and the uranium-233 can easily be chemically removed from the thorium-232 + protactinium-233 salt mixture as it is generated. In fact, all of the current nuclear waste at the world's nuclear reactors can be used as fuel in an MSR since 99.3% of that waste is uranium or transuranic elements. Such MSRs are known as waste burners. The world now has 250,000 tons of spent nuclear fuel, 1.2 million tons of depleted uranium and huge mounds of thorium waste from rare earth mines. With all of that, we now have several hundred thousand years' worth of uranium and thorium at hand. It only takes a little less than a golf ball's worth of thorium to fuel an American lifestyle for about 100 years, and you can find that amount of thorium in a few cubic yards of the Earth's crust.
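
That 27-day protactinium step sets the pace of thorium breeding. As a rough illustration only - simple exponential decay, ignoring neutron capture on the protactinium and any online chemical removal - here is how a batch of freshly bred protactinium-233 converts into uranium-233 over time:

```python
import math

# Decay of bred protactinium-233 into fissile uranium-233 (half-life ~27 days).
# Simplified: pure radioactive decay, ignoring further neutron capture or online removal.
HALF_LIFE_DAYS = 27.0
decay_constant = math.log(2) / HALF_LIFE_DAYS

for days in (27, 54, 108, 270):
    fraction_converted = 1.0 - math.exp(-decay_constant * days)
    print(f"after {days:3d} days: {fraction_converted:.1%} of the Pa-233 has become U-233")
# ~50% after one half-life, ~94% after four, ~99.9% after ten.
```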

Figure 9 – A ball of thorium or uranium smaller than a golf ball can fuel an American lifestyle for 100 years. This includes all of the electricity, heating, cooling, driving and flying that an American does in 100 years. We have already mined enough thorium and uranium to run the whole world for thousands of years. There is enough thorium and uranium on the Earth to run the world for hundreds of thousands of years.

Molten salt nuclear reactors can also be run at a temperature of 1,000 °C, which is hot enough for many industrial process heat operations. For example, it is hot enough to chemically break water down into hydrogen and oxygen gases. Compressed hydrogen gas could then be pumped down existing natural gas pipelines for heating and cooking. Compressed hydrogen gas can also be used to run cars and trucks, either with fuel cells or with internal combustion engines that burn the hydrogen directly, producing only water vapor. Molten salt nuclear reactors could be run at peak capacity all day long to maximize return. During the night, when electrical demand is very low, they could switch to primarily generating large amounts of hydrogen that could be easily stored in our existing natural gas infrastructure.

Figure 10 – Supercritical CO2 Brayton turbines can be about 8,000 times smaller than traditional Rankine steam turbines. They are also much more efficient.

Since molten salt nuclear reactors run at 700 °C instead of 300 °C, we can use Brayton supercritical carbon dioxide turbines instead of Rankine steam turbines. Supercritical CO2 Brayton turbines are about 8,000 times smaller than Rankine steam turbines because the supercritical CO2 working fluid has nearly the density of water. And because molten salt nuclear reactors do not need an expensive and huge containment structure, they can be made into small factory-built modular units that can be mass-produced. This allows utilities and industrial plants to easily string together any required capacity. They would also be ideal for ocean-going container ships. Supercritical CO2 Brayton turbines can also reach an efficiency of 47% compared to the 33% efficiency of Rankine steam turbines. The discharge temperature of the supercritical CO2 turbines is also high enough to be used to desalinate seawater, and if a body of water is not available for cooling, the discharge heat of a molten salt nuclear reactor can be directly radiated into the air. To watch some supercritical CO2 in action see:

Thermodynamics - Explaining the Critical Point
https://www.youtube.com/watch?v=RmaJVxafesU#t-1

Molten salt nuclear reactors are also continuously refueled and do not need a month of downtime every 18 months to rotate the fuel rods of a PWR and replace 1/3 of the fuel rods with fresh ones. Molten salt nuclear reactors are also not much of a proliferation risk because the molten salt fuel is highly radioactive with short-lived fission products, is at a temperature of 700 °C and is not highly enriched with fissile material. That makes it very hard to work with from a bomb-making perspective. It would be easier to just start with natural uranium.

A little nuclear physics helps to explain why. Natural uranium is 99.3% uranium-238, which does not readily fission but can be turned into plutonium-239 if you hit it with one neutron and plutonium-240 if you hit it with two neutrons. Plutonium-239 fissions much like uranium-235 and can be used for reactor fuel, and the plutonium-240 can be consumed in a reactor as well. Currently, our pressurized water reactors are just burning uranium-235 for energy. So we take 1,000 pounds of natural uranium and only burn the 7 pounds of uranium-235. The remaining 993 pounds of uranium-238 become nuclear waste. That is why people in the 1960s and 1970s wanted some kind of breeder reactor that could burn all 1,000 pounds of uranium and not waste most of the uranium that the Earth had. But should we try for a fast neutron breeder reactor that turned uranium-238 into plutonium-239 and plutonium-240, or should we go with a molten salt nuclear reactor that could continuously turn thorium-232 into uranium-233 and uranium-238 into plutonium-239 and plutonium-240 on the fly for fuel? Unfortunately, for political reasons, the decision was made in 1974 to go with fast breeder reactors that produced plutonium-239 and plutonium-240 from uranium-238.

But the fast neutron breeder reactor had a problem. The fast neutrons make lots of plutonium-239 and very little plutonium-240. Worse yet, if some country just ran a fast neutron breeder reactor for a short period of time and then pulled out the fuel rods, it could then have a source of essentially pure plutonium-239 that could easily be turned into a plutonium atomic bomb. In fact, that is how we make the plutonium-239 for plutonium atomic bombs. Early during the Manhattan Project, it was discovered that plutonium-240 would spontaneously fission all on its own and release 2 - 3 fast neutrons. For a uranium-235 bomb, they discovered that all you had to do was take two slugs of uranium that were 90% uranium-235 and smash them quickly together with an explosive charge. But for a plutonium bomb, they found that you had to surround a sphere of nearly pure plutonium-239 with a layer of explosive charge that compressed the plutonium-239 into a supercritical mass that would start a fission chain reaction. The fast neutrons from any plutonium-240 impurity created a problem. When you compress the plutonium core of the bomb, the spontaneously generated fast neutrons from the plutonium-240 contaminant will start a premature chain reaction that begins producing lots of heat. The generated heat causes the plutonium core to expand at the exact time you are trying to compress the plutonium core into a supercritical mass that can quickly fission huge amounts of plutonium before the whole thing blows itself apart. Thus, if you have too much plutonium-240 in a plutonium bomb core, the bomb just "fizzles" before it can properly detonate. This created a fear that using huge numbers of fast neutron breeder reactors for electricity would be too dangerous for a world prone to local wars because the reactors could easily be turned into factories for plutonium-239 by pulling out the fuel rods after a short time of service. As a consequence, Congressional funding for the effort was suspended in 1983.

On the other hand, the slow neutrons in molten salt nuclear reactors make a plutonium mixture that is about 75% plutonium-239 and 25% plutonium-240. So the plutonium from molten salt nuclear reactors cannot be used to make plutonium atomic bombs because of the "fizzle" problem. Thus, molten salt nuclear reactors are not much of a proliferation problem because the plutonium that is generated by the slow neutrons is contaminated by 25% plutonium-240 and the uranium-233 that is generated from thorium-232 is also useless for bomb-making because 95% of the uranium in the liquid fuel salt is uranium-238 that does not fission at all. If you really want to make an atomic bomb, the easiest way to do that is to just spin natural uranium in centrifuges as did North Korea and as Iran may now be attempting. Nobody ever made a bomb from reactors meant for generating electricity.

There are several MSR efforts underway around the world, but MSRs need some more support from the government in the form of funding and regulations tuned to the benefits of MSR technology. For more on this, please see:

Making Nuclear Sustainable with CMSR (Compact Molten Salt Reactor) - Troels Schönfeldt
https://www.youtube.com/watch?v=ps8oi_HY35E#t-1

Seaborg Technologies Homepage
https://www.seaborg.co/

Thorium and the Future of Nuclear Energy
https://www.youtube.com/watch?v=ElulEJruhRQ

Kirk Sorensen is a mechanical engineer who single-handedly revived interest in molten salt nuclear reactors about 15 years ago while working for NASA. NASA wanted Kirk to figure out a way to power a base on the Moon. Our Moon does not have coal, oil, natural gas, water for dams or air for wind turbines. The Moon also has a "day" that lasts for two weeks and a "night" that also lasts for two weeks, so solar energy is really not an option because of the two-week "night". However, the Moon does have uranium. So Kirk paid a visit to our Oak Ridge National Laboratory for advice on a suitable nuclear reactor for the Moon. At Oak Ridge, they suggested he look into the old Molten Salt Reactor Experiment (MSRE) from the 1960s. Kirk began to dig through the old documents on the MSRE and consulted with some of the retired participants of the MSRE who, by this time, were all in their 70s and 80s. Kirk was shocked to learn that you could turn 100% of thorium-232 into uranium-233 and that uranium-233 was an even better nuclear fuel than uranium-235! A molten salt nuclear reactor could also turn uranium-238 into plutonium-239 and plutonium-240 on the fly, and the plutonium-239 and plutonium-240 could also fission and be used as nuclear fuel. So a molten salt nuclear reactor burning uranium and thorium seemed to be just perfect for the Moon because it could burn 100% of the uranium and thorium that the Moon had. Then Kirk realized that molten salt nuclear reactors could also be perfect for solving the Earth's climate change disaster because the Earth has huge amounts of natural uranium and four times that amount of thorium-232 - enough to last for hundreds of thousands of years. Below are some of his excellent videos. You can find more on YouTube.

Thorium can give humanity clean, pollution-free energy
https://www.youtube.com/watch?v=kybenSq0KPo#t-1

Thorium: Kirk Sorensen at TEDxYYC
https://www.youtube.com/watch?v=N2vzotsvvkw#t-1

Kirk Sorensen @ MRU on LFTR - Liquid Fluoride Thorium Reactors
https://www.youtube.com/watch?v=D3rL08J7fDA#t-1

Kirk Sorensen's Flibe Energy Homepage
https://flibe-energy.com/

Nuclear goes retro — with a much greener outlook
https://www.knowablemagazine.org/article/technology/2019/nuclear-goes-retro-much-greener-outlook?gclid=CjwKCAiAuK3vBRBOEiwA1IMhuh4Tj2qgXh6Wa700N2oFDOyMbzIvOsU6QrIts1XIxgzx57gGWuBi5xoCGLIQAvD_BwE

If you have a technical background in the hard sciences or engineering be sure to take a look at the presentations of the annual conferences that are held by the Thorium Energy Alliance
http://www.thoriumenergyalliance.com/ThoriumSite/TEAC_Proceedings.html

But for a truly uplifting experience, please see the undergraduate presentation by Thane Symens (Mechanical Engineering), Joel Smith (Mechanical Engineering), Meredy Brichford (Chemical Engineering) & Christina Headley (Chemical Engineering) where they present their senior engineering project on the system design and economics of a thorium molten salt nuclear reactor at:

Calvin College Student Study on Th-MSR @ TEAC7
https://www.youtube.com/watch?v=M6RCAgR4Rfo#t-1

It is a powerful example of what software can do in the hands of capable young minds.

How To Make Money Sucking Billions of Tons of Carbon Dioxide Out of the Atmosphere
In Greta Thunberg's moving speech, she pointed out that we now need to suck billions of tons of carbon dioxide out of the Earth's atmosphere to prevent positive feedback loops from kicking in and taking all of this out of our hands. Is that another thing that totally selfish carbon-based Intelligence can achieve without spending a dime? One way to do this would be to set up huge seaweed farms in the middle of the Pacific Ocean. Most marine life is confined to coastal waters where it can obtain nutrients from continental runoff. The deep-water oceans are marine deserts by contrast because they are missing the nutrients necessary for carbon-based life. One idea is to use solar energy to pump the nutrient-rich deposits from the abyssal plain of the Pacific Ocean up to the surface to provide the necessary trace elements required by carbon-based life. Then huge seaweed farms would use those trace elements to suck carbon dioxide out of the atmosphere. The seaweed farms would have all of the water, sunshine and carbon dioxide they needed to grow quickly. The seaweed farms would then perform the function of huge pastures to raise fish and shellfish for harvesting. The excess seaweed would be cut and sunk to the abyssal plain to sequester large amounts of carbon.

Figure 11 – Large-scale seaweed farms in the middle of the Pacific Ocean could be used to suck billions of tons of carbon dioxide from the atmosphere and turn it into food.

Figure 12 – Excess carbon could then be deposited on the ocean floor for long-term storage.

Additionally, we may need to use large seaweed farms as ocean preservation areas. As we saw in Triona McGrath's TED presentation:

How pollution is changing the ocean's chemistry
https://www.ted.com/talks/triona_mcgrath_how_pollution_is_changing_the_ocean_s_chemistry?utm_source=Science+worth+knowing&utm_campaign=d1897bbfd6-Science+worth+knowing_12-21-17_COPY_01&utm_medium=email&utm_term=0_83c20124eb-d1897bbfd6-297552313

The pH of the ocean has dropped from 8.2 to 8.1 since the Industrial Revolution because of absorbed carbon dioxide. That is called ocean acidification. Remember, it was mainly ocean acidification that killed off 95% of marine species during the End-Permian greenhouse gas mass extinction 252 million years ago. If nothing changes, the pH of the ocean will drop to 7.8 by 2100, and it becomes nearly impossible for marine life to make calcium carbonate shells at that pH because the increasingly corrosive seawater dissolves the calcium carbonate. All you have to do is drop calcium carbonate shells into seawater with a pH of 7.8 to watch that happen. Unfortunately, lots of creatures at the very bottom of the oceanic food chain make carbonate shells and will go extinct before the year 2100. This could easily cause the entire oceanic ecosystem of the Earth to collapse, leaving behind no fish or shellfish. Fortunately, huge kelp forests can grow 2 feet a day, and that takes a lot of carbon: they fix large amounts of dissolved carbon dioxide via photosynthesis as they grow. That large removal of carbon dioxide from the surrounding water raises the pH of the water. So by 2100 we may need to cultivate large portions of the oceans with huge seaweed farms to provide a safe refuge for marine life from the very bottom of the food chain to the very top.
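
Because pH is a logarithmic scale, those seemingly small pH drops hide large increases in hydrogen-ion concentration. A quick calculation makes the point:

```python
# pH is logarithmic: [H+] = 10^(-pH), so small pH drops mean big acidity increases.
def hydrogen_ion_increase(ph_before, ph_after):
    """Factor by which the hydrogen-ion concentration has increased."""
    return 10 ** (ph_before - ph_after)

print(f"pH 8.2 -> 8.1: {hydrogen_ion_increase(8.2, 8.1):.2f}x more H+ (about +26%)")
print(f"pH 8.2 -> 7.8: {hydrogen_ion_increase(8.2, 7.8):.2f}x more H+ (about +150%)")
```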

Figure 13 – If we do not stop pumping carbon dioxide into the air, the pH of the oceans will reach 7.8 by 2100 and the oceans will die.

For more on this see:

Can seaweed help curb global warming?
https://www.ted.com/talks/tim_flannery_can_seaweed_help_curb_global_warming#t-1

Could underwater farms help fight climate change?
https://www.ted.com/talks/ayana_johnson_and_megan_davis_could_underwater_farms_help_fight_climate_change#t-1

Reversing Climate Change with Ocean-healing Seaweed Ecosystems
https://www.climatecolab.org/contests/2014/global-plan/c/proposal/1307120

OceanForesters Homepage
http://oceanforesters.org/Home_Page.html

Conclusion
Both of the above efforts would need a little help from the world's governments to get going. But since they both have the potential to make lots of money, my hope would be that private companies would then take over and greatly expand them. It could be very much like the rise of the Internet. Normally, I would be looking to the United States to get this off the ground. For example, take a look at this 1969 film produced by the Oak Ridge National Laboratory for the United States Atomic Energy Commission that describes the Molten Salt Reactor Experiment (MSRE) and how Alvin Weinberg's team of 30 nuclear scientists built the very first experimental molten salt nuclear reactor from scratch with only $10 million during the period 1960 - 1965 and then ran it for 20,000 hours from 1965 - 1969 without a hitch. Don't forget, we were spending billions of dollars going to the Moon during the 1960s too:

https://www.youtube.com/watch?v=tyDbq5HRs0o#t-1

But currently the United States is politically paralyzed by political memes and software, and we are incapable of even managing our own affairs. For more on that see Life in Postwar America After Our Stunning Defeat in the Great Cyberwar of 2016. The Republicans keep pretending that climate change is not happening, and the Democrats keep pretending that wind and solar energy alone can fix the problem. The anti-nuclear Left has forged a strange alliance with the pro-fossil fuel Right to eliminate nuclear energy in the United States.

Fortunately, China has huge deposits of thorium and is currently taking up the role that the United States used to play back in the 20th century. The Chinese have spent more than $2 billion on advanced nuclear reactor research and have about 100,000 people working on them. The idea is for China to mass-produce small modular molten salt nuclear reactors on assembly lines like Boeing does for commercial jet aircraft. These compact reactors will then be transported by ships and trucks to an installation site. These compact modular reactors will be cheaper to buy and run than coal, gas, solar or wind plants. These compact reactors will look like small laptops competing with our huge old 1950s-style vacuum tube mainframe PWR reactors. China will first begin to sell these reactors to third-world countries that need lots of cheap electricity to grow. Once the Chinese establish that market and demonstrate the far superior safety of small molten salt nuclear reactors the Chinese will then begin marketing them in Europe and the United States. Six thousand compact 250 MW molten salt nuclear reactors could supply all of the energy that the United States currently uses. There currently are 25,000 commercial jet aircraft on duty around the world. A similar fleet of 250 MW molten salt nuclear reactors could supply the entire world with 100% of the energy it currently requires. With such a state-sponsored effort, China could easily become the next OPEC that controls the world energy supply.

China Invests Big in Clean and Cheap Energy from Thorium
http://www.thoriumenergyworld.com/press-release/china-invests-big-in-clean-and-cheap-energy-from-thorium

Now take a look at this slightly Stalinesque video of the current Chinese efforts with molten salt nuclear reactors. Then compare the style of the Chinese video to that of the 1969 United States film:

SINAP T-MSR Promotional Video [ Thorium Molten Salt Reactor ]
https://youtu.be/EdelSZUxZeM

As you can see, China has begun its own state-sponsored "Manhattan Project" to build molten salt nuclear reactors. But the response of the United States has been more like Germany's response during World War II. Recall that the Germans discovered nuclear fission in 1938. Werner Heisenberg, one of the founding fathers of quantum mechanics, was put in charge of the German atomic bomb program. During a fateful meeting with Albert Speer, Hitler's personal architect and the German minister of munitions, Heisenberg asked Speer for 50,000 marks to buy some uranium to get started. Heisenberg figured that a low-ball funding request to get started was the best strategy. However, later, Albert Speer commented that a request for a mere 50,000 marks signaled to him that Werner Heisenberg's work could not be very significant! The idea of preventing China from controlling the world energy supply might be something the Right would be interested in knowing about.

Yes, this might all sound rather stark, but don't forget the age-old motto of the human race: "Don't rush me, I am waiting for the last minute."

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, November 12, 2019

WGD - Whole Genome Duplication
How Carbon-Based Life Installs a New Major Release into Production

Writing and maintaining software is very difficult because so much can go wrong. As we saw in The Fundamental Problem of Software, this is largely due to the second law of thermodynamics introducing small bugs into software whenever software is changed, and also to the nonlinear nature of software that allows small software bugs to frequently produce catastrophic effects. That is why, in Facilitated Variation and the Utilization of Reusable Code by Carbon-Based Life, we saw that most new computer or biological software is not written from scratch. Instead, most new software is simply a form of reusable code that has been slightly "tweaked" to produce new software functionality. In her Royal Institution presentation:

Copy number variation and the secret of life
https://www.youtube.com/watch?v=BJm5jHhJNBI&t=1s

Professor Aoife McLysaght explains how carbon-based life uses this same technique to produce new biological functionality by duplicating genes. The website for Professor McLysaght's lab is located at:

Aoife McLysaght Molecular Evolution Lab
http://www.gen.tcd.ie/molevol/

Once you duplicate a gene, one of the two copies can continue to produce the protein encoded by the gene at normal levels, while the other copy is then free to slightly mutate into a new form that might be able to produce an enhanced protein or an additional protein with a new biological function. It is the golden rule of wing-walking in action - don't let go of something until you have hold of something else. Meaning that if a single gene mutates in isolation, it will most likely produce a protein that no longer works, and that will be detrimental, or possibly even fatal, for the organism.
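
The IT analogy will be familiar to any programmer: leave the working routine alone and tweak a copy of it instead. Here is a toy illustration in code; the function names are invented for the example and are not from Professor McLysaght's talk:

```python
# The software analog of gene duplication: keep the working copy, tweak the duplicate.
def make_protein(amino_acids):
    """The original 'gene': the routine that must keep working in Production."""
    return "".join(amino_acids).upper()

# Step 1: duplicate the gene verbatim; both copies produce exactly the same product.
make_protein_duplicate = make_protein

# Step 2: the duplicate is now free to "mutate" and acquire a new function
# while the original keeps doing its old job untouched.
def make_protein_duplicate(amino_acids):
    """The mutated duplicate, producing a slightly different product."""
    return "-".join(amino_acids).upper()

peptide = ["met", "gly", "ala"]
print(make_protein(peptide))            # METGLYALA   (original function preserved)
print(make_protein_duplicate(peptide))  # MET-GLY-ALA (new functionality from the copy)
```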

Figure 1 – Above we see a gene with four functions. Once the gene has been duplicated, it is possible for the copy of the gene to evolve by divergence. In the first case, we see Subfunctionalization where some of the gene's code disappears from each chromosome of descendants. In the second case, we see Neofunctionalization where the gene on the copied chromosome is free to mutate by changing some genetic code and dropping other genetic code. In the last case, we see the total loss of the copied gene.

All computer users know the importance of keeping backup copies of files around before messing with them in case a drastic mistake is made. Nowadays, most people keep backup copies on the Cloud with Microsoft or Google.

Professor McLysaght then explains that gene duplication can be classified into two broad categories:

SSD - Small Scale Duplication
WGD - Whole Genome Duplication

In SSD, one gene or a small group of genes is accidentally duplicated elsewhere on the same chromosome or on a different chromosome when DNA is copied. On the other hand, with WGD the entire genome is accidentally duplicated by essentially doubling the number of chromosomes in a cell. The trouble with SSD is that the duplicated gene or genes will at first most likely produce more of the encoded proteins than usual. In fact, all things being equal, nearly twice as much of the encoded protein will be produced at first. This is called the "dosage" problem. You see, doubling the production level of a given protein can cause problems. The processing logic carried out by proteins is quite complex. Some proteins are used to build physical structures, like the keratin in our hair, fingernails and skin, while other proteins carry out biochemical functions, like the hemoglobin in our blood that transports oxygen. Other proteins take on a control function by catalyzing biochemical reactions or even by amplifying or inhibiting the expression of other genes. So changing the relative dosage levels of a protein or a group of proteins by means of SSD can be quite dangerous. However, this problem is averted if the entire genome of an organism is duplicated by means of WGD. With WGD, the number of all the genes is doubled, so the relative dosage levels of all the generated proteins should remain the same. Now, with one complete set of genes carrying the load of protein production, the other set of genes is free to mutate or even disappear. The significance of WGD gene duplications in the evolutionary history of vertebrates was first proposed by Susumu Ohno in 1970 in his book Evolution by Gene Duplication.
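
The dosage argument is easiest to see with a few toy numbers. In the sketch below (the protein levels are purely illustrative), duplicating a single gene skews the relative proportions of the cell's proteins, while duplicating the whole genome leaves every ratio untouched:

```python
# Illustrative protein "dosage" levels before and after gene duplication.
baseline = {"protein_A": 100, "protein_B": 100, "protein_C": 100}

# SSD: only protein_A's gene is duplicated, so only its output doubles.
ssd = dict(baseline, protein_A=200)

# WGD: every gene is duplicated, so every output doubles.
wgd = {gene: 2 * level for gene, level in baseline.items()}

def ratios(levels):
    """Relative share of each protein in the cell's total output."""
    total = sum(levels.values())
    return {gene: round(level / total, 2) for gene, level in levels.items()}

print("baseline ratios:", ratios(baseline))  # 0.33 / 0.33 / 0.33
print("after SSD:      ", ratios(ssd))       # 0.50 / 0.25 / 0.25  (relative dosage skewed)
print("after WGD:      ", ratios(wgd))       # 0.33 / 0.33 / 0.33  (ratios preserved)
```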

Figure 2 – Whole Genome Duplication (WGD) was first proposed by Susumu Ohno in 1970.

Figure 3 – Here we see the difference between SSD and WGD gene duplication.

Since then, bioinformatics has overwhelmingly confirmed the key role of gene duplication in molecular evolution by comparing the genomes of many species at the genetic level of DNA sequences. In fact, the term "ohnolog" has been coined to describe gene duplicates that have survived since a WGD event.

Another good resource for exploring the impact of WGD events in the evolutionary history of carbon-based life is Dr. Hervé Isambert's lab at:

The Isambert Lab
Reconstruction, Analysis and Evolution of Biological Networks
Institut Curie, Paris
http://kinefold.curie.fr/isambertlab/

Among many other resources, the Isambert Lab has been working on the OHNOLOGS database. The OHNOLOGS database currently allows users to explore the genes retained from WGD (Whole Genome Duplication) events in 27 vertebrate genomes and is available at:

OHNOLOGS - A Repository of Genes Retained from Whole Genome Duplications in the Vertebrate Genomes
http://ohnologs.curie.fr/

Figure 4 – Above is a figure from the Isambert Lab that displays a multitude of WGD events in the evolutionary history of carbon-based life.

Figure 5 – Above is a figure that displays a multitude of WGD events specifically in the evolutionary history of carbon-based plantlife.

Further Confirmation of WGD From the Evolution of Computer Software
Softwarephysics maintains that both carbon-based life and computer software have converged upon many of the same solutions to shared data processing problems as they both learned to deal with the second law of thermodynamics in a nonlinear Universe. This should come as no surprise since both carbon-based life and computer software are simply forms of self-replicating information facing the common problems of survival. For more on that please see A Brief History of Self-Replicating Information. For more details on the evolutionary history of software see the SoftwarePaleontology section of SoftwareBiology. So it should come as no surprise that those doing the development and maintenance of computer software have also discovered the advantages of taking a WGD approach. All IT professionals should be quite familiar with the steps used to move new code into Production, but for non-IT readers, let me briefly explain the process. Hopefully, you will be able to see many WGD techniques being used in a number of places.

Software Change Management Procedures
Software Change Management arose in the IT departments of major corporations in the 1980s. Prior to the arrival of Change Management processes, corporate IT programmers simply wrote and tested their own software changes in private libraries on the same hardware that ran Production software. When it was time to install the changed software into Production, we simply filled out a ticket to have Data Management move the updated software files from our personal libraries to the Production libraries. Once that was done, the corporate IT programmers could validate the software in the Production libraries with a test batch run before the next scheduled Production run of the batch job. This worked just fine until Production software evolved from batch jobs to online processing by corporate end-users in the early 1980s and especially when external end-users began to interactively use Production software in the 1990s. For example, when I retired in December of 2016, I was in the Middleware Operations group for a major credit card company. All installs were done late at night and during the very early morning hours during our daily Change Window. For an example of a complex software infrastructure supporting a high-volume corporate website please see Software Embryogenesis. Usually, we did about 20 installs each night to cover bug fixes and minor software enhancements. Every change was done under an approved Change Ticket that had an attached install plan that listed all of the items to be installed and the step-by-step timing of each install step. Each install plan also had steps to validate the install and back out the install if problems occurred.

We ran all the Production software in two separate datacenters that were several hundred miles apart. Each datacenter had several hundred Unix servers and ran the exact same Production software. The hardware and software in each datacenter were sized so that it could handle our peak processing load during the middle of the day. Usually, both datacenters would be in Active Mode and taking about half of the total Production processing load. If something horrible happened in one datacenter the Command Center could shift our entire Production processing load to the other datacenter. So during the Change Window for a particular Change Ticket, the Command Center would first move all traffic for the application being changed to the second datacenter. We would then install the new software into the first datacenter and crank it up. Professional validators would then run the new software through a set of validation tests to make sure the software was behaving properly. Then, the Command Center would shut down traffic to the application in the second datacenter to force all traffic to the first datacenter that was running the new software. We would then let the new software on the first datacenter "burn-in" for about 30 minutes of live traffic from real end-users on the Internet. If anything went wrong, the Command Center would move all of the application traffic back to the second datacenter that was still running the old software. We would then back out the new software in the first datacenter and replace it with the old software following the backout plan for the Change Ticket. But if the "burn-in" went well, we would reverse the whole process of traffic flips between the two datacenters to install the new software in the second datacenter. However, if something went wrong the next day with the new software under peak load and an outage resulted, the Command Center would convene a conference call and perhaps 10 people from Applications Development, Middleware Operations, Unix Operations, Network Operations and Database Operations would be paged out and would join the call. The members of the outage call would then troubleshoot the problem in their own areas of expertise to figure out what went wrong. The installation of new code was naturally always our first suspicion. If doing things like restarting the new software did not help, and all other possibilities were eliminated as much as possible, the members of the outage call would come to the decision that the new software was the likely problem, and the new software would be backed out using the Change Ticket backout plan. I hope that you can see how using two separate datacenters that are hundreds of miles apart takes full advantage of the WGD technique used by carbon-based life to keep carbon-based Production up and running at all times for routine maintenance. However, biologists have also discovered that in the evolutionary history of carbon-based life, WGD technology also played a critical role in the rise of new species, so let us take a look at that from an IT perspective.
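
For non-IT readers, the nightly two-datacenter routine described above boils down to a simple recipe: drain traffic from one datacenter, install and validate there, burn the new code in on live traffic, and fall back to the untouched datacenter if anything misbehaves. The sketch below is only a schematic of that recipe; the dictionaries and check functions are illustrative stand-ins, not our actual tooling:

```python
# Schematic of the nightly two-datacenter install procedure described above.
# The datacenter objects and health checks are illustrative stand-ins for the real steps.
def nightly_install(datacenters, validate, burn_in):
    """Install in one datacenter at a time; validate and burn in before touching the other."""
    for dc in datacenters:
        other = [d for d in datacenters if d is not dc][0]
        other["traffic"], dc["traffic"] = 100, 0   # drain the datacenter being changed
        dc["version"] = "new"                      # install under the approved Change Ticket
        if not validate(dc):
            dc["version"] = "old"                  # back out per the Change Ticket backout plan
            return False
        dc["traffic"], other["traffic"] = 100, 0   # burn in on ~30 minutes of live traffic
        if not burn_in(dc):
            other["traffic"], dc["traffic"] = 100, 0
            dc["version"] = "old"
            return False
    for d in datacenters:
        d["traffic"] = 50                          # both datacenters back in Active Mode
    return True

dc1 = {"name": "DC1", "version": "old", "traffic": 50}
dc2 = {"name": "DC2", "version": "old", "traffic": 50}
print(nightly_install([dc1, dc2], validate=lambda dc: True, burn_in=lambda dc: True), dc1, dc2)
```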

The Role of WGD Technology in the Implementation of New Species and Major Code Releases
In the above discussion, I explained how the standard Change Management processes were used for the routine changes that Middleware Operations made on a daily basis. However, every few months we conducted a major code release. This was very much like implementing a new species in biology. For a major code release, all normal daily Change Tickets were suspended so that full attention could be focused on the major code release. For a major code release, Applications Development appointed a Release Coordinator for the software release, and perhaps 30 - 60 Change Tickets would be generated for the major code release. Each Change Ticket in the major code release had its own detailed installation plan, but the Release Coordinator would also provide a detailed installation and backout plan for all of the Change Tickets associated with the major code release. From an IT perspective, a major code release is like creating a new species. It is like moving from Windows 8.0 to Windows 10.0. The problem with a large major code release is that it cannot all be done in a single standard Change Window during the night and early morning hours. Instead, an extended Change Window must be approved by IT Management that extends into the next day and might complete around 3:00 or 4:00 PM the next day. The basic idea was to totally complete the new major code release in the first datacenter during the early morning hours in the standard Change Window. Once that was done, live traffic was slowly transferred to the first datacenter. For example, initially, only 10% of the live traffic was transferred to the first datacenter running the new major code release. After about an hour of "burning in" the new code, the traffic level in the first datacenter was raised to 30% for about 30 minutes. If all went well, the load level on the first datacenter was raised to 80% for 30 minutes. Finally, 100% of the traffic was transferred to the first datacenter for about 30 minutes for a final "burn-in". After that, the whole install team shifted work to the second datacenter. The most significant danger was that even though the first datacenter had run 100% of the traffic for about 30 minutes, it did so during an early part of the day when the total processing load was rather low. The worst thing that could happen would be for the first datacenter, running 100% of the Production load on the new major code release, to get into trouble when the peak load hit around 10:00 AM. Should that happen, we would be in the horrible situation where the second datacenter was unusable because it was halfway through the major code release, and the first datacenter was experiencing problems due to the major code release. Such a situation could bring an entire corporate website down into a "hard down" condition. A "hard down" condition can cost thousands to millions of dollars per second depending on the business being conducted by the software. Such a state of affairs needs to be avoided at all costs, and to do that, IT relies heavily on the WGD technique.
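
The staged ramp for a major release can be sketched the same way. The percentages and burn-in times below are the ones quoted above; the traffic-control and health-check functions are illustrative placeholders, not the Command Center's real tools:

```python
import time

# Staged traffic ramp for a major code release, as described above.
# send_traffic_fraction() and healthy() are illustrative placeholders for the real controls.
RAMP_SCHEDULE = [(0.10, 60), (0.30, 30), (0.80, 30), (1.00, 30)]  # (fraction, burn-in minutes)

def ramp_traffic(send_traffic_fraction, healthy, dry_run=True):
    """Shift load onto the upgraded datacenter in stages, backing off if anything looks wrong."""
    for fraction, minutes in RAMP_SCHEDULE:
        send_traffic_fraction(fraction)
        if not dry_run:
            time.sleep(minutes * 60)       # let the new code "burn in" at this load level
        if not healthy():
            send_traffic_fraction(0.0)     # fall back to the untouched datacenter
            return False
    return True

# Example: a dry run with trivially healthy checks.
ramp_traffic(lambda f: print(f"routing {f:.0%} of live traffic to the upgraded datacenter"),
             healthy=lambda: True)
```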

First, there are three separate software environments running the current software genome:

Production
Production is the sacred software environment in which no changes are allowed to be made without a Production Change Ticket that has been approved by all layers of IT Management. Production software is sacred because Production software runs the business and is the software that all internal and external users interact with. If Production software fails, it can cost a business or governmental agency thousands or millions of dollars each second! That is why all IT professionals are deathly afraid of messing up Production and, therefore, follow all of the necessary Change Management processes not to do so. I personally know of very talented IT professionals who were summarily fired for making unauthorized changes to Production.

Production Assurance
Production Assurance is the environment that is set up by IT Management to mimic the Production environment as best as possible. It usually is a scaled-down version of Production that does not take Production load. Production Assurance is like a wind tunnel that allows new software to experience the trials and tribulations of Production but using a scaled-down model of Production instead of the "real thing". Production Assurance is where all of the heavy-duty software testing takes place by professional Production Assurance testers. The IT professionals in Applications Development who write the new code do not do testing in Production Assurance. Once software has been exhaustively tested in Production Assurance, it is ready to move to Production with a scheduled Change Ticket in a scheduled Change Window.

Development
The Development environment is where IT professionals in Applications Development program new code and perform unit and integration testing on the new code. Again, most new code is reusable code that has been "tweaked". Once all unit and integration testing has been completed on some new code, a Production Assurance Change Ticket is opened for Middleware Operations, Unix Operations, Database Operations and Network Operations to move the new software to Production Assurance for final system-wide testing.

Conclusion
As you can see, IT has also discovered the benefits of the WGD techniques developed by carbon-based life to introduce new genes and new species into the biosphere. Not only do corporate IT departments generally run Production software on two separate Production environments, but corporate IT departments also use multiple WGD environments like Production, Production Assurance and Development to produce new software functionality, and most importantly, move new major releases into Production as a new species of software. Thus, as I suggested in How to Study the Origin of Life on the Earth and Elsewhere in the Universe Right Here at Home, I highly recommend that all researchers investigating the roles that WGD and SSD gene duplication played in the evolutionary history of carbon-based life spend a few months doing some fieldwork in the IT department of a major corporation or governmental agency.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston