Saturday, December 07, 2019

The Unintended Consequences of Good People Trying to do the Right Thing

Most of us are very familiar with the popular quote, "The only thing necessary for the triumph of evil is that good men do nothing," and we have all probably used that quote in a moralistic sense during political debates with our friends and relatives. But the problem has always been: what exactly do we mean by the terms "evil" and "good men"? If you have been around for a while, you probably know that, surprisingly, the concepts of "evil" and "good men" vary greatly depending on whom you talk to. In fact, we cannot even come to an agreement on who first uttered the quote. Was it Edmund Burke, R. Murray Hyslop, Charles F. Aked or John Stuart Mill? Nobody really knows for sure.

I just finished watching the Senate Impeachment Trial and saw the United States Senate finally deliver its verdict. Being retired, I was able to watch the whole Impeachment process unfold in both the House of Representatives and the Senate. For me, it was a beautiful example of the unintended consequences of Good People trying to do the Right Thing. Now, I firmly believe that all of the participants were trying to do the Right Thing as they saw it. After 68 years, my observation is that all people always try to do the Right Thing as they see it. And if they cannot do the Right Thing, they then do the Necessary Thing to make the Right Thing ultimately happen. But that is the fundamental problem. At this point in life, I only have confidence in Science and Mathematics, because all other forms of human thought seem to be fatally flawed by confirmation bias - by the efforts of Good People trying to do the Right Thing, or the Necessary Thing, as they see it.

I am bringing up this issue because I think it directly bears on my last posting, Last Call for Carbon-Based Intelligence on Planet Earth. In that posting, I explained that the planet is dying and that everybody is pretending that it is not. The Right loves fossil fuels and is pretending that catastrophic Climate Change is simply not happening. The Left is pretending that wind and solar can solve the problem all on their own. And the Middle is concerned with other issues that they find more pressing.

My favorite physicist of the 20th century, besides Albert Einstein, was Richard Feynman. Feynman was a famously colorful character who is frequently quoted. My favorite of his quotes is, "The first principle is that you must not fool yourself - and you are the easiest person to fool." Feynman was also a member of the Presidential Commission on the Space Shuttle Challenger Accident that was headed by the former Secretary of State William Rogers. Feynman was the person who uncovered the fact that the Challenger blew up because of a bad O-ring design that NASA Management knew about. The official Presidential Commission Report ended with Feynman's Appendix F - Personal observations on the reliability of the Shuttle. The very last line of Appendix F is "For a successful technology, reality must take precedence over public relations, for nature cannot be fooled."

That is a very important observation to keep in mind when searching for a solution to Climate Change - "for nature cannot be fooled". Robert Stone is a long-time environmentalist deeply concerned about Climate Change who, like me, was originally anti-nuclear. In fact, Robert Stone actually made an anti-nuclear documentary many years ago. However, in 2013 Robert Stone, like Jim Hansen, had a change of heart about advanced nuclear energy, such as molten salt nuclear reactors, and he made this documentary:

PANDORA'S PROMISE
https://www.youtube.com/watch?v=ObcgG9vjUbs

I was 12 years old when President Kennedy was assassinated in 1963. I was walking to an English class when a friend coming from another class told me. I dearly loved President Kennedy at the time. Many years later, I learned that if President Kennedy had caved in during the Cuban Missile Crisis and had approved the Cuban invasion that all of his advisors wanted, the Soviets had standing orders to launch their missiles at the United States! It seems that President Kennedy and his brother Robert were the only ones resisting an invasion. I was 17 when Robert Kennedy was assassinated in 1968, and I loved Robert Kennedy at the time too. They were both Good People trying to do the Right Thing.

Below is a YouTube video of Robert Kennedy's son, Robert F. Kennedy Jr., having a discussion with Robert Stone in front of an audience that had just watched Robert Stone's documentary PANDORA'S PROMISE. I think that it is an excellent example of two Good People trying to do the Right Thing.

Robert F. Kennedy Jr. & Director of Pandora's Promise Spar Over Nuclear Power
https://www.youtube.com/watch?v=HaP9GuGK8r4

But don't forget, no matter how good your intentions may be, "nature cannot be fooled". The planet is dying and we have run out of time. The energy densities of wind and sunlight are very low, both sources are intermittent, and physics makes it very hard to store their energy for the times when they are not available. We need a technology that does not try to fool nature with good intentions.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, November 26, 2019

Last Call for Carbon-Based Intelligence on Planet Earth

In Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse, I explained that a major problem arises whenever a large group of people comes together into any organizational structure. The problem is that, no matter how things are set up, there always seem to be about 1% of the population who like to run things, and there is nothing wrong with that. We certainly always need somebody around to run things because, honestly, 99% of us simply do not have the ambition or desire to do so. Of course, the problem throughout history has always been that the top 1% naturally tended to abuse the privilege a bit and overdo things a little, resulting in 99% of the population having a substantially lower economic standard of living than the top 1%, and that has led to several revolutions in the past that did not always end so well. Now once a dominance hierarchy has been established, no matter how it is set up, another universal problem arises from the fact that:

1. People like to hear what they like to hear.

2. People do not like to hear what they do not like to hear.

Throughout human history, it seems that civilizations have always gotten into trouble once a dominance hierarchy composed of a large number of individuals stubbornly adhering to the above has formed in the face of desperate times. And I would like to suggest that this same universal phenomenon will naturally arise on any planet dominated by a carbon-based Intelligence. That is why in The Deadly Dangerous Dance of Carbon-Based Intelligence I offered up my Null Result Hypothesis as a possible explanation for Fermi's Paradox:

Fermi’s Paradox - If the universe is just chock full of intelligent beings, why do we not see any evidence of their existence?

Briefly stated:

Null Result Hypothesis - What if the explanation to Fermi's Paradox is simply that the Milky Way galaxy has yet to produce a form of interstellar Technological Intelligence because all Technological Intelligences are destroyed by the very same mechanisms that bring them forth?

By that, I mean that the Milky Way galaxy has not yet produced a form of Intelligence that can make itself known across interstellar distances, including ourselves. I then went on to propose that the simplest explanation for this lack of contact could be that the conditions necessary to bring forth a carbon-based interstellar Technological Intelligence on a planet or moon were also the same kill mechanisms that eliminated all forms of carbon-based Technological Intelligences with 100% efficiency. I then suggested that this kill mechanism might be the tendency for carbon-based Technological Intelligences to mess with their planet's or moon's carbon cycle as we seem to be doing today with the Earth. For more on that see This Message on Climate Change Was Brought to You by SOFTWARE. If true, this means that over the past 10 billion years, not a single form of carbon-based Intelligence has arisen in the Milky Way galaxy to become an interstellar Technological Intelligence. And given the limitations of carbon-based Intelligence, that also most likely means that no form of carbon-based Intelligence has successfully crossed over to a silicon-based Intelligence. This is a deeply disturbing finding because we now know that about 20% of the stars in the Milky Way have planets capable of sustaining carbon-based life. That comes to about 80 billion worlds in the Milky Way capable of sustaining carbon-based life. So even if carbon-based Intelligence is extremely rare, there should have been a huge number of carbon-based Intelligences crossing over to a silicon-based Intelligence over the past 10 billion years in the Milky Way. Yet, there appear to be none. For more on this please see A Brief History of Self-Replicating Information and Is Self-Replicating Information Inherently Self-Destructive?

So How Are We Doing?
Well, instead of terraforming Mars we seem to have been venus-forming the Earth for the past several hundred years, ever since carbon-based Intelligence discovered technology.

Figure 1 – Ever since carbon-based Intelligence on the Earth discovered technology, carbon dioxide levels have been increasing.

Now, in 1992 at the Earth Summit held in Rio de Janeiro, the world finally adopted the United Nations Framework Convention on Climate Change (UNFCCC) to tackle global climate change. The purpose of the treaty was to reduce greenhouse gas emissions and prevent the dangerous effects of climate change. Every year since 1995, the Conference of the Parties, or COP, has been held to report on the progress that has been made towards this objective. Currently, the world is coming together for the 25th annual meeting at COP25 in Madrid. But the sad fact is that, as we all know, nothing really has been done in the last 25 years to curb greenhouse gas emissions.

Figure 2 – Despite the 1992 Rio, 1997 Kyoto, 2009 Copenhagen and the 2016 Paris agreements, the data show that nothing really has changed as a result of those efforts.

About 40 years ago, I was an exploration geophysicist exploring for oil, first with Shell and then with Amoco. But in 1979, I made a career change into IT and then spent about 40 years in IT working at various corporations. However, as a geophysicist by training, I have always been very concerned about climate change. A few weeks back, I watched Greta Thunberg's full speech from the U.N. Climate Action Summit at:

How dare you!
https://www.youtube.com/watch?v=-4WqLIFava4#t-1

and I was deeply moved. I now have five grandchildren, all 8 years old or younger, and being 68 years of age, I know that I have less than 20 years left. So I will not be around to see how this all works out. But all of the science that I know tells me that we have just about run out of time, as Greta Thunberg so wisely points out.

In her speech, Greta Thunberg pointed out that the greatest danger is that rising temperatures and acidification of the oceans will reach a tipping point and trigger geochemical processes with strong positive feedbacks that could take this all out of our power to stop. For example, the Arctic is defrosting. That means there is less ice up north to reflect incoming high-energy visible photons. All the energy in those high-energy visible photons has to be radiated back into space as low-energy infrared photons on a daily basis to maintain equilibrium. But we are pumping carbon dioxide molecules into the atmosphere that prevent that from happening, and that causes the air temperature to rise. Warmer air can hold many more water molecules than cooler air, and water molecules are really good at absorbing infrared photons too, adding to the problem. The rising air temperatures then cause even more Arctic ice to melt.

But the worst problem, by far, with the Arctic defrosting is methane gas. Methane is a powerful greenhouse gas. Eventually, methane degrades into carbon dioxide and water molecules, but over a 20-year period, methane traps 84 times as much heat in the atmosphere as carbon dioxide. About 25% of current global warming is due to methane gas. Natural gas is primarily methane with a little ethane mixed in, and it comes from decaying carbon-based lifeforms. Now here is the problem. For the past 2.5 million years, during the frigid Pleistocene, the Earth has been building up a gigantic methane bomb in the Arctic. Every summer, the Earth has been adding another layer of dead carbon-based lifeforms to the permafrost areas in the Arctic. That summer layer does not entirely decompose but gets frozen into the growing stockpile of carbon in the permafrost. The Earth has also been freezing huge amounts of methane gas as a solid called methane hydrate on the floor of the Arctic Ocean. Methane hydrate is an ice-like solid composed of water molecules frozen around trapped methane molecules. As the Arctic warms, this methane hydrate melts and the trapped methane gas bubbles up to the surface.

The end result is that if we keep doing what we are doing, there is the possibility of the Earth ending up with a climate having a daily high of 140 °F, purple oceans choked with hydrogen-sulfide-producing bacteria, a dingy green sky and an atmosphere tainted with toxic levels of hydrogen sulfide gas and an oxygen level of only 12%, like the Earth had during the End-Permian greenhouse gas mass extinction 252 million years ago. Such a catastrophe might not cause the emerging carbon-based Intelligence of Earth to go extinct, but it most probably would put an end to our ability to make ourselves known to the rest of the Milky Way galaxy.

Figure 3 – Melting huge amounts of methane hydrate ice could release massive amounts of methane gas into the atmosphere.

So What to Do?
Now I love solar and wind power. In fact, I have been buying wind-power electricity for the past 10 years. It costs about 50% more to generate, but I only use about 23% of the electricity that my neighbors use, so that only runs me about $240 per year. I also drive a hybrid car that gets 60 - 70 mpg in the summer and 40 - 50 mpg in the winter when the hybrid battery does not work as efficiently. My wife and I do not fly and we do not travel much now that we are both retired. If you want to see the world, just go on Street View of Google Maps and you can do all the sightseeing you want, like walking around the Eiffel Tower, or hiking along a trail in the Grand Canyon. Yes, I know that my efforts really do not make much of a difference, and that I just do these things to make myself feel a little bit better about the current situation. I also know that wind and solar have very low energy densities, so it takes a lot of windmills and solar panels to capture large amounts of energy. Currently, wind and solar only provide about 4% of the world's energy consumption. I just don't think we have enough time left to go entirely to a solar and wind-powered world. We are driving 80 miles an hour into a concrete wall with only 100 feet left to brake! For example, in:

Roadmap To Nowhere - The Myth of Powering the Nation With Renewable Energy
https://www.roadmaptonowhere.com/

Mike Conley and Tim Maloney use the numbers from the 132-page report of The Solutions Project, an environmental group, which calls for 18 billion square meters of solar panels and 500,000 5 MW wind turbines to supply all of the energy needs of the United States. Mike and Tim point out that once all of this infrastructure has been constructed on 132,000 square miles of land, we will need to replace 1.23 million square meters of solar panels and 80 of the 5 MW wind turbines every day, forever, as the solar panels and wind turbines wear out.
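To see where those daily replacement rates come from, here is a minimal back-of-the-envelope sketch in Python. It simply divides the total inventory by the daily replacement rate to get the implied service life of the equipment; the only inputs are the Roadmap To Nowhere numbers quoted above.

# A minimal sketch checking the replacement-rate arithmetic quoted above.
# The inputs are the Roadmap To Nowhere numbers; the implied service life
# is just the total inventory divided by the daily replacement rate.

total_panel_area_m2 = 18e9          # 18 billion square meters of solar panels
panels_replaced_m2_per_day = 1.23e6
total_turbines = 500_000            # 5 MW wind turbines
turbines_replaced_per_day = 80

panel_life_years = total_panel_area_m2 / panels_replaced_m2_per_day / 365.25
turbine_life_years = total_turbines / turbines_replaced_per_day / 365.25

print(f"Implied solar panel service life:  {panel_life_years:.0f} years")
print(f"Implied wind turbine service life: {turbine_life_years:.0f} years")
# Implied solar panel service life:  40 years
# Implied wind turbine service life: 17 years

In other words, the daily replacement rates simply reflect solar panels wearing out after roughly 40 years and wind turbines after roughly 17 years.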

Figure 4 – The new GE 5 MW wind turbine is 500 feet tall, about the height of a 50-story building. We will need 500,000 of them and will need to replace 80 of them each day as they wear out.

So obviously, what we have been doing thus far is clearly not working. The planet is dying and everybody is pretending that it is not. The Right loves fossil fuels and is pretending that there is no problem. The Left is pretending that the problem can be solved with wind and solar power alone. And the Middle has other concerns that they find more important. How can we change that? The first thing we need to do is to realize that all forms of carbon-based life are forms of self-replicating information that have been honed by 4.0 billion years of natural selection to be fundamentally selfish in nature. That is why most people around the world will not spend a nickel more on energy if they can avoid doing so. That is both a good thing and a bad thing. The bad thing is that most people will simply not spend a nickel more on solar or wind power to reduce carbon dioxide emissions. However, the good thing is that if we can come up with a source of energy that is cheaper than coal and other forms of carbon-based fuels, people will drop the carbon-based fuels like a hot potato all on their own. So I would like to recommend that we all take a look at molten salt nuclear reactors since they have the potential to produce energy at a much lower cost than carbon-based fuels and also could be easily mass-produced using far fewer material resources than solar or wind. Bringing in molten salt nuclear reactors should not be seen as a substitute for continuing on with solar, wind and fusion sources of energy. We just need a cheap form of energy that appeals to those still committed to carbon-based fuels. We also need an insurance policy in case it is found that wind and solar cannot do the job all on their own. Yes, I know that many of you may dislike nuclear energy because:

1. Nuclear reactors tend to explode and release radioactive clouds that can poison large areas for thousands of years.
2. Nuclear reactors produce nuclear waste that needs to be buried for 200,000 years and we do not know how to take care of things for 200,000 years.
3. Nuclear reactors produce plutonium that can be used for making atomic bombs.

Figure 5 – Currently, we are running 1950s-style PWRs (Pressurized Water Reactors) with coolant water at 300 °C and 80 atmospheres of pressure.

Personally, the reason I have been buying wind-powered electricity for the past decade is that I had given up on nuclear energy as a possible solution. Nuclear reactors just seemed to require forever to build and were far too expensive to effectively compete with coal or natural gas. And nuclear reactors seemed to blow up every decade or so, no matter what the nuclear engineers did to make them safer. I also assumed that the nuclear engineers would have come up with something better over the past 60 years if such a thing were possible.

But, recently, I have learned that over the past 60 years, the nuclear engineers have indeed come up with many new designs for nuclear reactors that are thousands of times superior to what we have today. But because of many stupid human reasons that I will not go into, these new designs have been blocked for 60 years! And because nuclear reactions can produce 100 million times as much energy as chemical reactions, they may be our last chance. All of the problems we have with our current nuclear reactors stem from running PWRs (Pressurized Water Reactors) that were designed back in the 1950s and early 1960s. Now, no business today relies on 1950s-style vacuum tube computers with 250 K of memory to run a business, but our utilities happily run 1950s-style PWR nuclear reactors! The good news is that most of the problems with our technologically-ancient PWR reactors stem from using water as a coolant. A cubic foot of water makes about 1,600 cubic feet of steam at atmospheric pressure. That is why PWR reactors need a huge reinforced concrete containment structure to hold large amounts of radioactive steam if things go awry. Do you remember the second law of thermodynamics from Entropy - the Bane of Programmers and The Demon of Software? The efficiency of extracting useful mechanical work from a heat reservoir depends on the temperature difference between the heat reservoir and the exhaust reservoir.

Maximum Efficiency = 1 - TC/TH

where TC and TH are the temperatures of the cold and hot reservoirs measured in kelvins (K). The second law of thermodynamics tells us that we need to run a nuclear reactor with the highest TH possible to make it as efficient as possible. So PWR reactors have to run with water at around 300 °C under high pressure to achieve some level of efficiency. For example, using a room temperature of 72 °F (295 K) for TC and 300 °C (573 K) coolant water for TH, we get:

Maximum Efficiency = 1 - 295 K/573 K = 0.485 = 48.5%

Recall that water at one atmosphere of pressure boils at 100 °C, so 300 °C coolant water has to be kept under a great deal of pressure so that it does not boil away.
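To make the arithmetic easy to play with, here is a minimal Python sketch of the Carnot limit above. It compares the 300 °C coolant water of a PWR with the 700 °C fuel salt of the molten salt reactors discussed below; the 700 °C comparison is my own illustrative addition, and these are ideal maximums, not the efficiencies of real plants.

# Carnot (maximum) efficiency: 1 - Tc/Th, with both temperatures in kelvins.

def carnot_efficiency(t_cold_k, t_hot_k):
    return 1.0 - t_cold_k / t_hot_k

T_COLD_K = 295.0                     # about 72 °F room temperature

for label, t_hot_c in [("PWR coolant water", 300.0), ("MSR fuel salt", 700.0)]:
    t_hot_k = t_hot_c + 273.15       # convert °C to kelvins
    eff = carnot_efficiency(T_COLD_K, t_hot_k)
    print(f"{label} at {t_hot_c:.0f} °C: maximum efficiency {eff:.1%}")

# PWR coolant water at 300 °C: maximum efficiency 48.5%
# MSR fuel salt at 700 °C: maximum efficiency 69.7%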

Figure 6 – Above we see a plot of the boiling point of water as a function of pressure. From the plot, we see that water at 300 °C must be kept under a pressure of about 80 atmospheres! For comparison, the air in your car's tires is under about 2.3 atmospheres of pressure.

The other major problem is that the centers of the solid fuel rods run at about 2,000 °C and have to be constantly cooled by flowing water or they will melt. Even if all of the control rods are dropped into the core to stop the fuel from further fissioning, the residual radioactivity in the fuel rods will cause the fuel rods to melt if they are not constantly cooled by flowing water. Thus, most of the advanced technology used to run a PWR is safety technology designed to keep 300 °C water under 80 atmospheres from flashing into radioactive steam. The other problem that can occur in a meltdown situation is that, as the water rapidly boils away, it can oxidize the cladding of the 2,000 °C fuel rods, releasing hydrogen gas. The liberated hydrogen gas can then easily explode the reactor core like a highly radioactive hand grenade. Again, that is why PWR reactors need a huge and very expensive reinforced concrete containment structure to hold in large amounts of radioactive materials in the event that the PWR reactor should melt down. A PWR is kept safe by many expensive and redundant safety systems designed to keep the water moving. So a PWR is like a commercial jet aircraft. So long as at least one of the jet engines is running, the aircraft is okay. But if all of the jet engines should stop, we end up with a tremendous tragedy.

Figure 7 - When a neutron hits a uranium-235 nucleus, it can split it into two lighter nuclei, like Ba-144 and Kr-89, that fly apart at a few percent of the speed of light, along with two or three additional neutrons. The nuclei that fly apart are called fission products; they are very radioactive, with half-lives of less than 30 years, and need to be stored for about 300 years. The additional neutrons can then strike other uranium-235 nuclei, causing them to split as well. Some neutrons can also hit uranium-238 nuclei and turn them into radioactive nuclei heavier than uranium-238 with very long half-lives that require them to be stored for about 200,000 years.

PWRs also waste huge amounts of uranium. Currently, we take 1,000 pounds of uranium and fission only about 7 pounds of it. That creates about 7 pounds of fission products that are very radioactive but have short half-lives of less than 30 years. Those 7 pounds of fission products have to be kept buried for about 10 half-lives, which comes to about 300 years. But we know how to do that. After all, the United States Constitution is 232 years old! The problem is that the remaining 993 pounds of uranium gets blasted by neutrons, and some of it turns into radioactive elements with atomic numbers greater than that of uranium. Those 993 pounds of neutron-blasted uranium have to be buried for 200,000 years!
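Here is a quick sketch of the two rules of thumb in that paragraph: the roughly 0.7% burnup of today's once-through fuel cycle, and the fact that 10 half-lives reduce a fission product's radioactivity by about a factor of 1,000.

# Two quick checks of the waste arithmetic above.

# 1. Burnup fraction of today's once-through fuel cycle.
fissioned_lbs = 7
total_lbs = 1000
print(f"Fraction of the uranium actually fissioned: {fissioned_lbs / total_lbs:.1%}")

# 2. The "bury it for 10 half-lives" rule of thumb: a fission product with a
#    30-year half-life decays away by a factor of 2**10, about 1,000, in 300 years.
half_life_years = 30
storage_years = 300
fraction_remaining = 0.5 ** (storage_years / half_life_years)
print(f"Fraction remaining after {storage_years} years: {fraction_remaining:.4%}")

# Fraction of the uranium actually fissioned: 0.7%
# Fraction remaining after 300 years: 0.0977%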

Molten Salt Nuclear Reactors

Figure 8 – Above is a diagram showing the basic components of a molten salt reactor (MSR).

A molten salt reactor (MSR) avoids all of these problems by using a melted uranium fluoride salt for fuel instead of solid fuel rods. The melted uranium salt is already a liquid at a temperature of 700 °C, or more, and is pumped at very low pressure through the reactor core. An MSR cannot melt down because it is already melted! And there is no cooling water that can boil away or generate explosive hydrogen gas when the core gets too hot. An MSR is a thermal reactor that uses graphite in the reactor core to slow down the neutrons that cause fission. Without the presence of graphite, the fission chain reaction stops all by itself. The use of graphite as a moderator also helps an MSR run in a self-stabilizing manner. If the uranium fuel salt gets too hot, it expands, and less of the heat-generating fuel salt will be found in the graphite-bearing core, so the fuel salt cools down. On the other hand, if the fuel salt gets too cold, it contracts, and more of the heat-generating fuel salt will be found in the graphite-bearing core, so the fuel salt heats up. This is the same negative feedback loop mechanism that a thermostat uses to keep your house at a comfortable temperature in the winter.
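That self-stabilizing behavior is just a negative feedback loop, and it is easy to see its shape in a toy simulation. The coefficients below are made up purely for illustration and are not real reactor physics; the point is only that a salt that starts out too hot settles back toward its design temperature with no control system at all.

# Toy negative-feedback model of a molten salt reactor core. All of the
# coefficients are made up for illustration only - this is not real
# reactor physics, just the shape of the feedback loop described above.

SETPOINT_C = 700.0    # design fuel-salt temperature, in °C
COOLING_MW = 100.0    # heat constantly carried away to the power turbines
FEEDBACK = 1.0        # MW of fission power lost per °C the salt runs hot
HEAT_STEP = 0.5       # °C of salt temperature change per MW of excess heat per step

temperature = 760.0   # start the salt 60 °C too hot
for step in range(8):
    # Hot salt expands, pushing fuel out of the graphite-moderated core,
    # so fission power falls below the 100 MW being carried away.
    power = COOLING_MW - FEEDBACK * (temperature - SETPOINT_C)
    temperature += HEAT_STEP * (power - COOLING_MW)
    print(f"step {step}: fission power {power:6.1f} MW, salt temperature {temperature:6.1f} °C")
# The salt settles back toward 700 °C all on its own.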

An MSR has a solid plug, called the "freeze plug", at the bottom of the core that melts if the uranium fuel salt gets too hot. The melted MSR fuel then flows through the melted plug into several large drain tanks that contain no graphite, and that stops any further fissioning. The fuel salt then slowly cools down on its own. The uranium fuel salt can then be reused when things return to normal. There is also a catch basin under the whole reactor core. If the freeze plug hole should get clogged up for some reason and the core ruptures, the uranium fuel salt is caught by the catch basin and drained into the dump tanks. Because the safety mechanisms for an MSR rely only on the laws of physics - gravity, the melting of solids at certain temperatures and the necessity for the presence of graphite to slow down neutrons - an MSR cannot become a disaster. So unlike a PWR reactor, a molten salt nuclear reactor is more like a car on a lonely country road than a jet aircraft in flight. If the car engine should die, the car slowly coasts to a stop all on its own with no action needed by the driver. A molten salt nuclear reactor is a "walk away" reactor, meaning that you can walk away from it and it will safely shut itself down all on its own.

An MSR can also be run as a breeder reactor that turns all 1,000 pounds of uranium into fission products with half-lives of less than 30 years. As the fuel circulates, the fission products can be chemically removed from the liquid fuel and then buried for 300 years. So instead of only using 0.7% of the uranium and turning 99.3% of the uranium into waste that needs to be buried for 200,000 years, we use 100% of the uranium and turn it into waste that needs to be buried for only 300 years. The world contains about four times as much thorium as uranium, and an MSR can use thorium as a fuel too. An MSR can breed thorium-232 into fissile uranium-233 via the reaction:

Thorium-232 + neutron → Thorium-233 → Protactinium-233 → Uranium-233

The thorium-232 absorbs a neutron and becomes thorium-233, which quickly decays into protactinium-233, which in turn decays into uranium-233 that can fission just like uranium-235. The half-life of protactinium-233 is 27 days, and the generated uranium-233 can be easily chemically removed from the thorium-232 + protactinium-233 salt mixture as it is generated. In fact, all of the current nuclear waste at the world's current nuclear reactors can be used for fuel in an MSR since 99.3% of the waste is uranium or transuranic elements. Such MSRs are known as waste burners. The world now has 250,000 tons of spent nuclear fuel, 1.2 million tons of depleted uranium and huge mounds of thorium waste from rare earth mines. With all of that, we now have several hundred thousand years' worth of uranium and thorium at hand. It only takes a little less than a golf ball's worth of thorium to fuel an American lifestyle for about 100 years, and you can find that amount of thorium in a few cubic yards of the Earth's crust.

Figure 9 – A ball of thorium or uranium smaller than a golf ball can fuel an American lifestyle for 100 years. This includes all of the electricity, heating, cooling, driving and flying that an American does in 100 years. We have already mined enough thorium and uranium to run the whole world for thousands of years. There is enough thorium and uranium on the Earth to run the world for hundreds of thousands of years.
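Here is a rough, order-of-magnitude check of that golf ball claim. The per-capita energy figure, the golf ball size and the thorium density are my own assumptions for illustration; the only physics input is that complete fission releases about 200 MeV per heavy nucleus, or roughly 80 terajoules of heat per kilogram of thorium or uranium.

# A rough order-of-magnitude check of the golf-ball claim above. The inputs
# below are stated assumptions, not precise figures.

import math

GOLF_BALL_DIAMETER_CM = 4.27            # a regulation golf ball
THORIUM_DENSITY_G_PER_CM3 = 11.7
FISSION_HEAT_J_PER_KG = 8.2e13          # ~200 MeV per nucleus, completely fissioned
US_PRIMARY_ENERGY_J_PER_YEAR = 3.2e11   # roughly 10 kW of primary energy per American

radius_cm = GOLF_BALL_DIAMETER_CM / 2
volume_cm3 = 4.0 / 3.0 * math.pi * radius_cm ** 3
mass_kg = volume_cm3 * THORIUM_DENSITY_G_PER_CM3 / 1000.0
heat_j = mass_kg * FISSION_HEAT_J_PER_KG
years_covered = heat_j / US_PRIMARY_ENERGY_J_PER_YEAR

print(f"Golf ball of thorium: {mass_kg:.2f} kg")
print(f"Heat from complete fission: {heat_j:.1e} J")
print(f"Years of one American's primary energy use: {years_covered:.0f}")
# About 0.48 kg and roughly 4e13 J - on the order of 100 years of primary
# energy, the same order of magnitude as the claim in the figure caption.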

Molten salt nuclear reactors can also be run at a temperature of 1,000 °C, which is hot enough for many industrial process heat operations. For example, it is hot enough to chemically break water down into hydrogen and oxygen gases. Compressed hydrogen gas could then be pumped down existing natural gas pipelines for heating and cooking. Compressed hydrogen gas can also be used to run cars and trucks, either with fuel cells or with internal combustion engines burning the hydrogen directly and producing only water. Molten salt nuclear reactors could be run at peak capacity all day long to maximize return. During the night, when electrical demand is very low, they could switch to primarily generating large amounts of hydrogen that could be easily stored in our existing natural gas infrastructure.

Figure 10 – Supercritical CO2 Brayton turbines can be about 8,000 times smaller than traditional Rankine steam turbines. They are also much more efficient.

Since molten salt nuclear reactors run at 700 °C, instead of 300 °C, we can use Brayton supercritical carbon dioxide turbines instead of Rankine steam turbines. Supercritical CO2 Brayton turbines are about 8,000 times smaller than Rankine steam turbines because the supercritical CO2 working fluid has nearly the density of water. And because molten salt nuclear reactors do not need an expensive and huge containment structure, they can be made into small factory-built modular units that can be mass-produced. This allows utilities and industrial plants to easily string together any required capacity. They would also be ideal for ocean-going container ships. Supercritical CO2 Brayton turbines can also reach an efficiency of 47%, compared to the 33% efficiency of Rankine steam turbines. The discharge temperature of the supercritical CO2 turbines is also high enough to be used to desalinate seawater, and if a body of water is not available for cooling, the discharge heat of a molten salt nuclear reactor can be rejected directly to the air. To watch some supercritical CO2 in action see:

Thermodynamics - Explaining the Critical Point
https://www.youtube.com/watch?v=RmaJVxafesU#t-1

Molten salt nuclear reactors are also continuously refueled and do not need a month of downtime every 18 months to rotate the fuel rods of a PWR and replace 1/3 of them with fresh fuel rods. Molten salt nuclear reactors are also not much of a proliferation risk because the molten salt fuel is highly radioactive with short-lived fission products, runs at a temperature of 700 °C and is not highly enriched with fissile material. That makes it very hard to work with from a bomb-making perspective. It would be easier to just start with natural uranium.

A little nuclear physics helps to understand why. Natural uranium is 99.3% uranium-238, which does not readily fission but can be turned into plutonium-239 if you hit it with one neutron, and into plutonium-240 if the plutonium-239 absorbs a second neutron. Plutonium-239 fissions much like uranium-235 and can be used for reactor fuel, while plutonium-240 can absorb yet another neutron to become fissile plutonium-241, so both can ultimately serve as reactor fuel. Currently, our pressurized water reactors are just burning uranium-235 for energy. So we take 1,000 pounds of natural uranium and only burn the 7 pounds of uranium-235. The remaining 993 pounds of uranium-238 become nuclear waste. That is why people in the 1960s and 1970s wanted some kind of breeder reactor that could burn all 1,000 pounds of uranium and not waste most of the uranium that the Earth had. But should we try for a fast neutron breeder reactor that turned uranium-238 into plutonium-239 and plutonium-240, or should we go with a molten salt nuclear reactor that could continuously turn thorium-232 into uranium-233, and uranium-238 into plutonium-239 and plutonium-240, on the fly for fuel? Unfortunately, for political reasons, the decision was made in 1974 to go with fast breeder reactors that produced plutonium-239 and plutonium-240 from uranium-238.

But the fast neutron breeder reactor had a problem. The fast neutrons make lots of plutonium-239 and very little plutonium-240. Worse yet, if some country just ran a fast neutron breeder reactor for a short period of time and then pulled out the fuel rods, it could then have a source of essentially pure plutonium-239 that could easily be turned into a plutonium atomic bomb. In fact, that is how we make the plutonium-239 for plutonium atomic bombs. Early during the Manhattan Project, it was discovered that plutonium-240 would spontaneously fission all on its own and release 2 - 3 fast neutrons. For a uranium-235 bomb, they discovered that all you had to do was take two slugs of uranium that were 90% uranium-235 and smash them quickly together with an explosive charge. But for a plutonium bomb, they found that you had to surround a sphere of nearly pure plutonium-239 with a layer of explosive charge that compressed the plutonium-239 into a supercritical mass that would start a fission chain reaction. The fast neutrons from any plutonium-240 impurity created a problem. When you compress the plutonium core of the bomb, the spontaneously generated fast neutrons from the plutonium-240 contaminant will start a premature chain reaction that begins producing lots of heat. The generated heat causes the plutonium core to expand at the exact time you are trying to compress it into a supercritical mass that can quickly fission huge amounts of plutonium before the whole thing blows itself apart. Thus, if you have too much plutonium-240 in a plutonium bomb core, the bomb just "fizzles" before it can properly detonate. This created a fear that using huge numbers of fast neutron breeder reactors for electricity would be too dangerous for a world prone to local wars because the reactors could easily be turned into factories for plutonium-239 by pulling out the fuel rods after a short time of service. As a consequence, Congressional funding for the effort was suspended in 1983.

On the other hand, the slow neutrons in molten salt nuclear reactors make a plutonium mixture that is about 75% plutonium-239 and 25% plutonium-240. So the plutonium from molten salt nuclear reactors cannot be used to make plutonium atomic bombs because of the "fizzle" problem. Thus, molten salt nuclear reactors are not much of a proliferation problem because the plutonium that is generated by the slow neutrons is contaminated by 25% plutonium-240 and the uranium-233 that is generated from thorium-232 is also useless for bomb-making because 95% of the uranium in the liquid fuel salt is uranium-238 that does not fission at all. If you really want to make an atomic bomb, the easiest way to do that is to just spin natural uranium in centrifuges as did North Korea and as Iran may now be attempting. Nobody ever made a bomb from reactors meant for generating electricity.

There are several MSR efforts underway around the world, but MSRs need some more support from the government in the form of funding and regulations tuned to the benefits of MSR technology. For more on this, please see:

Making Nuclear Sustainable with CMSR (Compact Molten Salt Reactor) - Troels Schönfeldt
https://www.youtube.com/watch?v=ps8oi_HY35E#t-1

Seaborg Technologies Homepage
https://www.seaborg.co/

Thorium and the Future of Nuclear Energy
https://www.youtube.com/watch?v=ElulEJruhRQ

Kirk Sorensen is a mechanical engineer who single-handedly revived interest in molten salt nuclear reactors about 15 years ago while working for NASA. NASA wanted Kirk to figure out a way to power a base on the Moon. Our Moon does not have coal, oil, natural gas, water for dams or air for wind turbines. The Moon also has a "day" that lasts for two weeks, followed by a "night" that lasts another two weeks. So solar energy is really not an option because of the two-week "night". However, the Moon does have uranium and thorium. So Kirk paid a visit to the Oak Ridge National Laboratory for advice on a suitable nuclear reactor for the Moon. At Oak Ridge, they suggested he look into the old Molten Salt Reactor Experiment (MSRE) from the 1960s. Kirk began to dig through the old documents on the MSRE and consulted with some of the retired participants of the MSRE who, by this time, were all in their 70s and 80s. Kirk was shocked to learn that you could turn 100% of thorium-232 into uranium-233 and that uranium-233 was an even better nuclear fuel than uranium-235! A molten salt nuclear reactor could also turn uranium-238 into plutonium on the fly, and that plutonium could also fission and be used as nuclear fuel. So a molten salt nuclear reactor burning uranium and thorium seemed to be just perfect for the Moon because it could burn 100% of the uranium and thorium that the Moon had. Then Kirk realized that molten salt nuclear reactors could also be perfect for solving the Earth's climate change disaster because the Earth has huge amounts of natural uranium and four times that amount of thorium-232 - enough to last for hundreds of thousands of years. Below are some of his excellent videos. You can find more on YouTube.

Thorium can give humanity clean, pollution-free energy
https://www.youtube.com/watch?v=kybenSq0KPo#t-1

Thorium: Kirk Sorensen at TEDxYYC
https://www.youtube.com/watch?v=N2vzotsvvkw#t-1

Kirk Sorensen @ MRU on LFTR - Liquid Fluoride Thorium Reactors
https://www.youtube.com/watch?v=D3rL08J7fDA#t-1

Kirk Sorensen's Flibe Energy Homepage
https://flibe-energy.com/

Nuclear goes retro — with a much greener outlook
https://www.knowablemagazine.org/article/technology/2019/nuclear-goes-retro-much-greener-outlook

If you have a technical background in the hard sciences or engineering, be sure to take a look at the presentations from the annual conferences held by the Thorium Energy Alliance:
http://www.thoriumenergyalliance.com/ThoriumSite/TEAC_Proceedings.html

But for a truly uplifting experience, please see the undergraduate presentation by Thane Symens (Mechanical Engineering), Joel Smith (Mechanical Engineering), Meredy Brichford (Chemical Engineering) & Christina Headley (Chemical Engineering) where they present their senior engineering project on the system design and economics of a thorium molten salt nuclear reactor at:

Calvin College Student Study on Th-MSR @ TEAC7
https://www.youtube.com/watch?v=M6RCAgR4Rfo#t-1

It is a powerful example of what software can do in the hands of capable young minds.

How To Make Money Sucking Billions of Tons of Carbon Dioxide Out of the Atmosphere
In Greta Thunberg's moving speech, she pointed out that we now need to suck billions of tons of carbon dioxide out of the Earth's atmosphere to prevent positive feedback loops from kicking in and taking all of this out of our hands. Is that another thing that totally selfish carbon-based Intelligence can achieve without spending a dime? One way to do this would be to set up huge seaweed farms in the middle of the Pacific Ocean. Most marine life is confined to coastal waters where it can obtain nutrients from continental runoff. The deep-water oceans are marine deserts by contrast because they are missing the necessary nutrients for carbon-based life. One idea is to use solar energy to pump nutrient-rich water up from the abyssal plain of the Pacific Ocean to the surface to provide the necessary trace elements required by carbon-based life. Then huge seaweed farms would use those trace elements to suck carbon dioxide out of the atmosphere. The seaweed farms would have all of the water, sunshine and carbon dioxide they needed to quickly grow. The seaweed farms would then perform the function of huge pastures used to raise fish and shellfish for harvesting. The excess seaweed would be cut and sunk to the abyssal plain to sequester large amounts of carbon.

Figure 11 – Large-scale seaweed farms in the middle of the Pacific Ocean could be used to suck billions of tons of carbon dioxide from the atmosphere and turn it into food.

Figure 12 – Excess carbon could then be deposited on the ocean floor for long-term storage.

Additionally, we may need to use large seaweed farms as ocean preservation areas. As we saw in Triona McGrath's TED presentation:

How pollution is changing the ocean's chemistry
https://www.ted.com/talks/triona_mcgrath_how_pollution_is_changing_the_ocean_s_chemistry

The pH of the ocean has dropped from 8.2 to 8.1 since the Industrial Revolution because of absorbed carbon dioxide. That is called ocean acidification. Remember, it was mainly ocean acidification that killed off 95% of marine species during the End-Permian greenhouse gas mass extinction 252 million years ago. If nothing changes, the pH of the ocean will drop to 7.8 by 2100, and it will be nearly impossible for marine life to make calcium carbonate shells at that pH because the more acidic seawater will tend to dissolve them. All you have to do is dump calcium carbonate shells into water with a pH of 7.8 to watch that happen. Unfortunately, lots of creatures at the very bottom of the oceanic food chain make carbonate shells and will go extinct before the year 2100. This could easily cause the entire oceanic ecosystem of the Earth to collapse, leaving behind no fish or shellfish. Fortunately, huge kelp forests can grow 2 feet a day, and that growth fixes a lot of dissolved carbon dioxide via photosynthesis. The large removal of carbon dioxide from the surrounding water raises the pH of the water. So by 2100 we may need to cultivate large portions of the oceans with huge seaweed farms to provide a safe refuge for marine life from the very bottom of the food chain to the very top.
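Because pH is a logarithmic scale, those seemingly small pH numbers hide large changes in acidity. Here is a minimal sketch of the arithmetic:

# pH is the negative base-10 logarithm of the hydrogen ion concentration,
# so small drops in pH mean large increases in acidity.

def hydrogen_ion_concentration(ph):
    return 10.0 ** (-ph)

pre_industrial, today, projected_2100 = 8.2, 8.1, 7.8

ratio_today = hydrogen_ion_concentration(today) / hydrogen_ion_concentration(pre_industrial)
ratio_2100 = hydrogen_ion_concentration(projected_2100) / hydrogen_ion_concentration(pre_industrial)

print(f"Hydrogen ion concentration today vs pre-industrial:   {ratio_today:.2f}x")
print(f"Hydrogen ion concentration in 2100 vs pre-industrial: {ratio_2100:.2f}x")
# about 1.26x more acidic so far, and about 2.51x if the pH falls to 7.8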

Figure 13 – If we do not stop pumping carbon dioxide into the air, the pH of the oceans will reach 7.8 by 2100 and the oceans will die.

For more on this see:

Can seaweed help curb global warming?
https://www.ted.com/talks/tim_flannery_can_seaweed_help_curb_global_warming#t-1

Could underwater farms help fight climate change?
https://www.ted.com/talks/ayana_johnson_and_megan_davis_could_underwater_farms_help_fight_climate_change#t-1

Reversing Climate Change with Ocean-healing Seaweed Ecosystems
https://www.climatecolab.org/contests/2014/global-plan/c/proposal/1307120

OceanForesters Homepage
http://oceanforesters.org/Home_Page.html

Conclusion
Both of the above efforts would need a little help from the world's governments to get going. But since they both have the potential to make lots of money, my hope would be that private companies would then take over and greatly expand them. It could be very much like the rise of the Internet. Normally, I would be looking to the United States to get this off the ground. For example, take a look at this 1969 film produced by the Oak Ridge National Laboratory for the United States Atomic Energy Commission that describes the Molten Salt Reactor Experiment (MSRE) and how Alvin Weinberg's team of 30 nuclear scientists built the very first experimental molten salt nuclear reactor from scratch with only $10 million during the period 1960 - 1965 and then ran it for 20,000 hours from 1965 - 1969 without a hitch. Don't forget we were spending billions of dollars going to the Moon during the 1960s too:

https://www.youtube.com/watch?v=tyDbq5HRs0o#t-1

But currently the United States is politically paralyzed by political memes and software, and we are incapable of even managing our own affairs. For more on that see Life in Postwar America After Our Stunning Defeat in the Great Cyberwar of 2016. The Republicans keep pretending that climate change is not happening, and the Democrats keep pretending that wind and solar energy alone can fix the problem. The anti-nuclear Left has forged a strange alliance with the pro-fossil fuel Right to eliminate nuclear energy in the United States.

Fortunately, China has huge deposits of thorium and is currently taking up the role that the United States used to play back in the 20th century. The Chinese have spent more than $2 billion on advanced nuclear reactor research and have about 100,000 people working on them. The idea is for China to mass-produce small modular molten salt nuclear reactors on assembly lines like Boeing does for commercial jet aircraft. These compact reactors will then be transported by ships and trucks to an installation site. These compact modular reactors will be cheaper to buy and run than coal, gas, solar or wind plants. These compact reactors will look like small laptops competing with our huge old 1950s-style vacuum tube mainframe PWR reactors. China will first begin to sell these reactors to third-world countries that need lots of cheap electricity to grow. Once the Chinese establish that market and demonstrate the far superior safety of small molten salt nuclear reactors the Chinese will then begin marketing them in Europe and the United States. Six thousand compact 250 MW molten salt nuclear reactors could supply all of the energy that the United States currently uses. There currently are 25,000 commercial jet aircraft on duty around the world. A similar fleet of 250 MW molten salt nuclear reactors could supply the entire world with 100% of the energy it currently requires. With such a state-sponsored effort, China could easily become the next OPEC that controls the world energy supply.

China Invests Big in Clean and Cheap Energy from Thorium
http://www.thoriumenergyworld.com/press-release/china-invests-big-in-clean-and-cheap-energy-from-thorium

Now take a look at this slightly Stalinesque video of the current Chinese efforts with molten salt nuclear reactors. Then compare the style of the Chinese video to that of the 1969 United States film:

SINAP T-MSR Promotional Video [ Thorium Molten Salt Reactor ]
https://youtu.be/EdelSZUxZeM

As you can see, China has begun its own state-sponsored "Manhattan Project" to build molten salt nuclear reactors. But the response of the United States has been more like Germany's response during World War II. Recall that the Germans discovered nuclear fission in 1938. Werner Heisenberg, one of the founding fathers of quantum mechanics, was put in charge of the German atomic bomb program. During a fateful meeting with Albert Speer, Hitler's personal architect and the German minister of munitions, Heisenberg asked Speer for 50,000 marks to buy some uranium to get started. Heisenberg figured that a low-ball funding request to get started was the best strategy. However, later, Albert Speer commented that a request for a mere 50,000 marks signaled to him that Werner Heisenberg's work could not be very significant! The idea of preventing China from controlling the world energy supply might be something the Right would be interested in knowing about.

Yes, this might all sound rather stark, but don't forget the age-old motto of the human race, "Don't rush me, I am waiting for the last minute."

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Tuesday, November 12, 2019

WGD - Whole Genome Duplication
How Carbon-Based Life Installs a New Major Release into Production

Writing and maintaining software is very difficult because so much can go wrong. As we saw in The Fundamental Problem of Software, this is largely due to the second law of thermodynamics introducing small bugs into software whenever software is changed, and also to the nonlinear nature of software that allows small software bugs to frequently produce catastrophic effects. That is why in Facilitated Variation and the Utilization of Reusable Code by Carbon-Based Life we saw that most new computer or biological software is not written from scratch. Instead, most new software is simply a form of reusable code that has been slightly "tweaked" to produce new software functionality. In her Royal Institution presentation:

Copy number variation and the secret of life
https://www.youtube.com/watch?v=BJm5jHhJNBI&t=1s

Professor Aoife McLysaght explains how carbon-based life uses this same technique to produce new biological functionality by duplicating genes. The website for Professor McLysaght's lab is located at:

Aoife McLysaght Molecular Evolution Lab
http://www.gen.tcd.ie/molevol/

Once a gene has been duplicated, one of the two copies can continue to produce the protein encoded by the gene at normal levels, while the other copy is free to slowly mutate into a new form that might produce an enhanced protein or an additional protein with a new biological function. It is the golden rule of wing-walking in action - don't let go of something until you have hold of something else. Meaning that if a single-copy gene mutates, it will most likely produce a protein that no longer works, and that will be detrimental, or possibly even fatal, for the organism.

Figure 1 – Above we see a gene with four functions. Once the gene has been duplicated, it is possible for the copy of the gene to evolve by divergence. In the first case, we see Subfunctionalization where some of the gene's code disappears from each chromosome of descendants. In the second case, we see Neofunctionalization where the gene on the copied chromosome is free to mutate by changing some genetic code and dropping other genetic code. In the last case, we see the total loss of the copied gene.

All computer users know the importance of keeping backup copies of files around before messing with them in case a drastic mistake is made. Nowadays, most people keep backup copies on the Cloud with Microsoft or Google.

Professor McLysaght then explains that gene duplication can be classified into two broad categories:

SSD - Small Scale Duplication
WGD - Whole Genome Duplication

In SSD, one gene or a small group of genes is accidentally duplicated elsewhere on the same chromosome or on a different chromosome when DNA is copied. On the other hand, with WGD the entire genome is accidentally duplicated by essentially doubling the number of chromosomes in a cell. The trouble with SSD is that the duplicated gene or genes will at first most likely produce more of the encoded proteins than usual. In fact, all things being equal, nearly twice as much of those proteins will at first be produced. This is called the "dosage" problem. You see, doubling the production level of a given protein can cause problems. The processing logic carried out by proteins is quite complex. Some proteins are used to build physical structures, like the keratin in our hair, fingernails and skin, while other proteins carry out biochemical functions, like the oxygen-carrying hemoglobin in our blood. Other proteins take on a control function by catalyzing biochemical reactions or even by amplifying or inhibiting the expression of other genes. So changing the relative dosage levels of a protein or a group of proteins by means of SSD can be quite dangerous. However, this problem is averted if the entire genome of an organism is duplicated by means of WGD. With WGD, the number of all the genes is doubled, so the relative dosage levels of all the generated proteins should remain the same. Now, with one complete set of genes taking the load of protein production, the other set of genes is free to mutate or even disappear. The significance of WGD gene duplications in the evolutionary history of vertebrates was first proposed by Susumu Ohno in 1970 in his book Evolution by Gene Duplication.

Figure 2 – Whole Genome Duplication (WGD) was first proposed by Susumu Ohno in 1970.

Figure 3 – Here we see the difference between SSD and WGD gene duplication.
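The "dosage" problem is easiest to see with a toy example. Below is a minimal Python sketch with made-up protein names and production levels; only the relative proportions matter. SSD doubles one protein and disturbs the proportions, while WGD doubles everything and leaves the proportions untouched.

# Toy illustration of the gene "dosage" problem. The protein names and
# production levels are made up; only the relative proportions matter.

baseline = {"protein_A": 100, "protein_B": 50, "protein_C": 25}

def proportions(levels):
    total = sum(levels.values())
    return {name: round(amount / total, 3) for name, amount in levels.items()}

# SSD: only protein_B's gene is duplicated, so only protein_B doubles.
ssd = dict(baseline)
ssd["protein_B"] *= 2

# WGD: every gene is duplicated, so every protein doubles.
wgd = {name: amount * 2 for name, amount in baseline.items()}

print("baseline proportions:", proportions(baseline))
print("after SSD:           ", proportions(ssd))   # the relative dosages are disturbed
print("after WGD:           ", proportions(wgd))   # the relative dosages are unchanged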

Since then, bioinformatics has overwhelmingly confirmed the key role of gene duplication in molecular evolution by comparing the genomes of many species at the genetic level of DNA sequences. In fact, the term "ohnolog" has been coined to describe gene duplicates that have survived since a WGD event.

Another good resource for exploring the impact of WGD events in the evolutionary history of carbon-based life is Dr. Hervé Isambert's lab at:

The Isambert Lab
Reconstruction, Analysis and Evolution of Biological Networks
Institut Curie, Paris
http://kinefold.curie.fr/isambertlab/

Among many other resources, the Isambert Lab has been working on the OHNOLOGS database. The OHNOLOGS database currently allows users to explore the genes retained from WGD (Whole Genome Duplication) events in 27 vertebrate genomes and is available at:

OHNOLOGS - A Repository of Genes Retained from Whole Genome Duplications in the Vertebrate Genomes
http://ohnologs.curie.fr/

Figure 4 – Above is a figure from the Isambert Lab that displays a multitude of WGD events in the evolutionary history of carbon-based life.

Figure 5 – Above is a figure that displays a multitude of WGD events specifically in the evolutionary history of carbon-based plantlife.

Further Confirmation of WGD From the Evolution of Computer Software
Softwarephysics maintains that both carbon-based life and computer software have converged upon many of the same solutions to shared data processing problems as they both learned to deal with the second law of thermodynamics in a nonlinear Universe. This should come as no surprise since both carbon-based life and computer software are simply forms of self-replicating information facing the common problems of survival. For more on that please see A Brief History of Self-Replicating Information. For more details on the evolutionary history of software see the SoftwarePaleontology section of SoftwareBiology. So it should come as no surprise that those doing the development and maintenance of computer software should have also discovered the advantages of taking a WGD approach. All IT professionals should be quite familiar with the steps used to move new code into Production, but for those non-IT readers, let me briefly explain the process. Hopefully, you will be able to see many WGD techniques being used in a number of places.

Software Change Management Procedures
Software Change Management arose in the IT departments of major corporations in the 1980s. Prior to the arrival of Change Management processes, corporate IT programmers simply wrote and tested their own software changes in private libraries on the same hardware that ran Production software. When it was time to install the changed software into Production, we simply filled out a ticket to have Data Management move the updated software files from our personal libraries to the Production libraries. Once that was done, the corporate IT programmers could validate the software in the Production libraries with a test batch run before the next scheduled Production run of the batch job. This worked just fine until Production software evolved from batch jobs to online processing by corporate end-users in the early 1980s, and especially when external end-users began to interactively use Production software in the 1990s. For example, when I retired in December of 2016, I was in the Middleware Operations group for a major credit card company. All installs were done late at night and in the very early morning hours during our daily Change Window. For an example of a complex software infrastructure supporting a high-volume corporate website, please see Software Embryogenesis. Usually, we did about 20 installs each night to cover bug fixes and minor software enhancements. Every change was done under an approved Change Ticket that had an attached install plan that listed all of the items to be installed and the step-by-step timing of each install step. Each install plan also had steps to validate the install and back out the install if problems occurred.

We ran all the Production software in two separate datacenters that were several hundred miles apart. Each datacenter had several hundred Unix servers and ran the exact same Production software. The hardware and software in each datacenter were sized so that either datacenter could handle our peak processing load during the middle of the day. Usually, both datacenters would be in Active Mode and taking about half of the total Production processing load. If something horrible happened in one datacenter, the Command Center could shift our entire Production processing load to the other datacenter. So during the Change Window for a particular Change Ticket, the Command Center would first move all traffic for the application being changed to the second datacenter. We would then install the new software into the first datacenter and crank it up. Professional validators would then run the new software through a set of validation tests to make sure the software was behaving properly. Then, the Command Center would shut down traffic to the application in the second datacenter to force all traffic to the first datacenter that was running the new software. We would then let the new software in the first datacenter "burn in" for about 30 minutes of live traffic from real end-users on the Internet. If anything went wrong, the Command Center would move all of the application traffic back to the second datacenter that was still running the old software. We would then back out the new software in the first datacenter and replace it with the old software following the backout plan for the Change Ticket. But if the "burn-in" went well, we would reverse the whole process of traffic flips between the two datacenters to install the new software in the second datacenter.

However, if something went wrong the next day with the new software under peak load and an outage resulted, the Command Center would convene a conference call, and perhaps 10 people from Applications Development, Middleware Operations, Unix Operations, Network Operations and Database Operations would be paged out and would join the call. The members of the outage call would then troubleshoot the problem in their own areas of expertise to figure out what went wrong. The installation of new code was naturally always our first suspicion. If doing things like restarting the new software did not help, and all other possibilities had been eliminated as much as possible, the members of the outage call would conclude that the new software was the likely problem, and the new software would be backed out using the Change Ticket backout plan. I hope that you can see how running Production in two separate datacenters that are hundreds of miles apart takes full advantage of the same WGD technique that carbon-based life uses to keep its own "Production" up and running at all times during routine maintenance. However, biologists have also discovered that in the evolutionary history of carbon-based life, WGD technology also played a critical role in the rise of new species, so let us take a look at that from an IT perspective.
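
The nightly dance between the two datacenters may be easier to follow as code. Below is a minimal sketch in Python of the control flow just described; the application name, version numbers and all of the function bodies are illustrative stubs, since the real work was done by the Command Center with global traffic managers, load balancers and the install plans attached to each Change Ticket.

import time

def route_all_traffic(app, dc):   print(f"Command Center: all {app} traffic -> DC{dc}")
def route_traffic_evenly(app):    print(f"Command Center: {app} traffic split 50/50 across both datacenters")
def install(app, version, dc):    print(f"Install team: {app} {version} installed and started in DC{dc}")
def backout(app, version, dc):    print(f"Install team: DC{dc} backed out to {app} {version}")
def validate(app, dc):            return True   # stands in for the professional validators' test suite
def healthy(app, dc):             return True   # stands in for monitoring during the live "burn-in"

def nightly_install(app, new_version, old_version):
    route_all_traffic(app, dc=2)          # drain live traffic away from DC1
    install(app, new_version, dc=1)
    if not validate(app, dc=1):           # validation failed - back out and restore normal traffic
        backout(app, old_version, dc=1)
        route_traffic_evenly(app)
        return False
    route_all_traffic(app, dc=1)          # force all live traffic onto the new software
    time.sleep(1)                         # stands in for roughly 30 minutes of live "burn-in"
    if not healthy(app, dc=1):            # trouble - fall back to the old software still in DC2
        route_all_traffic(app, dc=2)
        backout(app, old_version, dc=1)
        return False
    install(app, new_version, dc=2)       # now repeat the flip to upgrade DC2
    route_traffic_evenly(app)
    return True

nightly_install("CreditAuth", "release-2.4.1", "release-2.4.0")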

The Role of WGD Technology in the Implementation of New Species and Major Code Releases
In the above discussion, I explained how the standard Change Management processes were used for the routine changes that Middleware Operations made on a daily basis. However, every few months we conducted a major code release. This was very much like implementing a new species in biology. For a major code release, all normal daily Change Tickets were suspended so that full attention could be focused on the major code release. Applications Development appointed a Release Coordinator for the software release, and perhaps 30 - 60 Change Tickets would be generated for it. Each Change Ticket in the major code release had its own detailed installation plan, but the Release Coordinator would also provide a detailed installation and backout plan for all of the Change Tickets associated with the major code release. From an IT perspective, a major code release is like creating a new species. It is like moving from Windows 8 to Windows 10. The problem with a large major code release is that it cannot all be done in a single standard Change Window during the night and early morning hours. Instead, an extended Change Window must be approved by IT Management that extends into the next day and might complete around 3:00 or 4:00 PM. The basic idea was to totally complete the new major code release in the first datacenter during the early morning hours of the standard Change Window. Once that was done, live traffic was slowly transferred to the first datacenter. For example, initially, only 10% of the live traffic was transferred to the first datacenter running the new major code release. After about an hour of "burning in" the new code, the traffic level in the first datacenter was raised to 30% for about 30 minutes. If all went well, the load level on the first datacenter was raised to 80% for 30 minutes. Finally, 100% of the traffic was transferred to the first datacenter for about 30 minutes for a final "burn-in". After that, the whole install team shifted work to the second datacenter.

The most significant danger was that even though the first datacenter had run 100% of the traffic for about 30 minutes, it did so during an early part of the day when the total processing load was rather low. The worst thing that could happen would be for the first datacenter, now running 100% of the Production load on the new major code release, to get into trouble when the peak load hit around 10:00 AM. Should that happen, we would be in the horrible situation where the second datacenter was unusable because it was halfway through the major code release and the first datacenter was experiencing problems due to the major code release. Such a situation could bring an entire corporate website down into a "hard down" condition. A "hard down" condition can cost thousands to millions of dollars per second depending on the business being conducted by the software. Such a state of affairs needs to be avoided at all costs, and to do that, IT relies heavily on the WGD technique.
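
Here is a similar minimal sketch of the staged traffic ramp used during an extended Change Window for a major code release. The percentages and hold times come straight from the description above; the traffic-routing and health-check calls are, again, illustrative stubs.

import time

RAMP_SCHEDULE = [      # (percent of live traffic sent to DC1, hold time in minutes)
    (10, 60),          # an initial trickle of live traffic onto the new release
    (30, 30),
    (80, 30),
    (100, 30),         # final "burn-in" at full load
]

def set_traffic_split(app, percent_to_dc1):
    print(f"Command Center: {percent_to_dc1}% of {app} traffic -> DC1, "
          f"{100 - percent_to_dc1}% -> DC2")

def looks_healthy(app):
    return True        # stands in for the validators and the monitoring alerts

def ramp_major_release(app):
    for percent, hold_minutes in RAMP_SCHEDULE:
        set_traffic_split(app, percent)
        time.sleep(1)                      # stands in for hold_minutes of real clock time
        if not looks_healthy(app):
            set_traffic_split(app, 0)      # fall back to the old release still running in DC2
            return False
    return True        # DC1 now carries 100% - the install team moves on to DC2

ramp_major_release("CorporateWebsite")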

First, there are three separate software environments running the current software genome:

Production
Production is the sacred software environment in which no changes are allowed to be made without a Production Change Ticket that has been approved by all layers of IT Management. Production software is sacred because it runs the business and is the software that all internal and external users interact with. If Production software fails, it can cost a business or governmental agency thousands or millions of dollars each second! That is why all IT professionals are deathly afraid of messing up Production and, therefore, follow all of the necessary Change Management processes to avoid doing so. I personally know of very talented IT professionals who were summarily fired for making unauthorized changes to Production.

Production Assurance
Production Assurance is the environment that is set up by IT Management to mimic the Production environment as best as possible. It usually is a scaled-down version of Production that does not take Production load. Production Assurance is like a wind tunnel that allows new software to experience the trials and tribulations of Production but using a scaled-down model of Production instead of the "real thing". Production Assurance is where all of the heavy-duty software testing takes place by professional Production Assurance testers. The IT professionals in Applications Development who write the new code do not do testing in Production Assurance. Once software has been exhaustively tested in Production Assurance, it is ready to move to Production with a scheduled Change Ticket in a scheduled Change Window.

Development
The Development environment is where IT professionals in Applications Development program new code and perform unit and integration testing on the new code. Again, most new code is reusable code that has been "tweaked". Once all unit and integration testing has been completed on the new code, a Production Assurance Change Ticket is opened for Middleware Operations, Unix Operations, Database Operations and Network Operations to move the new software to Production Assurance for final system-wide testing.

Conclusion
As you can see, IT has also discovered the benefits of the WGD techniques developed by carbon-based life to introduce new genes and new species into the biosphere. Not only do corporate IT departments generally run Production software in two separate datacenters, but they also use multiple WGD environments like Production, Production Assurance and Development to produce new software functionality and, most importantly, to move new major releases into Production as a new species of software. Thus, as I suggested in How to Study the Origin of Life on the Earth and Elsewhere in the Universe Right Here at Home, I highly recommend that all researchers investigating the roles that WGD and SSD gene duplication played in the evolutionary history of carbon-based life spend a few months doing some fieldwork in the IT department of a major corporation or governmental agency.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Saturday, September 21, 2019

Programming Biology in the Biological Computation Group of Microsoft Research

In the early 1980s, I strongly advocated taking a biological approach to the development and maintenance of software while in the IT department of Amoco. At the time, I was actively working on a simulated science that I called softwarephysics for my own software development purposes while holding down a development job there, and looking back, I must now thank the long-gone IT Management of Amoco for patiently tolerating my efforts. For more on that please see Some Thoughts on the Origin of Softwarephysics and Its Application Beyond IT and Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework. The relatively new field of computational biology, on the other hand, advocates just the reverse. Computational biology uses techniques from computer science to help unravel the mysteries of biology. I just watched a very interesting TED presentation on the computational biology efforts at Microsoft Research by Dr. Sara-Jane Dunn:

The next software revolution: programming biological cells
https://www.ted.com/talks/sara_jane_dunn_the_next_software_revolution_programming_biological_cells?utm_source=newsletter_weekly_2019-11-01&utm_campaign=newsletter_weekly&utm_medium=email&utm_content=top_left_image#t-1

in which she explains how the Biological Computation research group at Microsoft Research is applying standard IT development and maintenance techniques to discover the underlying biological programming code that runs the very complex biological processes of cells and organisms. In her TED talk, Dr. Sara-Jane Dunn maintains that the last half of the 20th century was dominated by the rise of software on electronic hardware, but that the early 21st century will be dominated by being able to first understand, and then later manipulate, the software running on biological hardware through the new science of biological computation. Dr. Sara-Jane Dunn is a member of the Microsoft Research Biological Computation group. The homepage for the Microsoft Biological Computation group is at:

Microsoft Biological Computation
https://www.microsoft.com/en-us/research/group/biological-computation/

The mission statement for the Microsoft Research Biological Computation group is:

Our group is developing theory, methods and software for understanding and programming information processing in biological systems. Our research currently focuses on three main areas: Molecular Programming, Synthetic Biology and Stem Cell Biology. Current projects include designing molecular circuits made of DNA, and programming synthetic biological devices to perform complex functions over time and space. We also aim to understand the computation performed by cells during development, and how the adaptive immune system detects viruses and cancers. We are tackling these questions through the development of computational models and domain-specific computational tools, in close collaboration with leading scientific research groups. The tools we develop are being integrated into a common software environment, which supports simulation and analysis across multiple scales and domains. This environment will serve as the foundation for a biological computation platform.

In her TED Talk, Dr. Sara-Jane Dunn explains that in many ways biologists are like new members of an Applications Development group supporting the biological software that runs on living cells and organisms. Yes, biologists certainly do know a lot about cells and living organisms and also many details about cellular structure. They even have access to the DNA source code of many cells at hand. The problem is that like all new members of an Applications Development group, they just do not know how the DNA source code works. Dr. Sara-Jane Dunn then goes on to explain that by using standard IT systems analysis techniques, it should be possible to slowly piece together the underlying biological software that runs the cells that living things are all made of. Once that underlying software has been pieced together, it should then be possible to perform maintenance on the underlying biological code to fix disease-causing bugs in biological cells and organisms, and also to add functional enhancements to them. For readers not familiar with doing maintenance on large-scale commercial software, let's spend some time describing the daily mayhem of life in IT.

The Joys of Software Maintenance
As I explained in A Lesson for IT Professionals - Documenting the Flowcharts of Carbon-Based Life in the KEGG Online Database, many times IT professionals in Applications Development will experience the anxiety of starting out on a new IT job or moving to a new area within their current employer's IT Department. Normally, the first thing your new Applications Development group will do is to show you a chart similar to Figure 1. Figure 1 shows a simplified flowchart for the NAS system. Of course, being new to the group, you may have at least heard about the NAS system, but in truth, you barely even know what the NAS system does! Nonetheless, the members of your new Applications Development group will assume that you are completely familiar with all of the details of the NAS system. Even though you have barely even heard of the NAS system, they will naturally act as if you are a NAS system expert like themselves. For example, they will expect that you will easily be able to estimate the work needed to perform large-scale maintenance efforts to the NAS system, create detailed project plans for large-scale maintenance efforts to the NAS system, perform the necessary code changes needed for large-scale maintenance efforts to the NAS system and easily put large-scale maintenance efforts for the NAS system into production. In truth, during that very first terrifying meeting with your new Applications Development group, the only thing that you will honestly be thankful for is that your new Applications Development group started you out by showing you a simplified flowchart of the NAS system instead of a detailed flowchart of the NAS system! Worse yet, when you are later tasked with heading up a large-scale maintenance effort for the NAS system, you will, unfortunately, discover that a detailed flowchart of the NAS system does not even exist! Instead, you will sadly discover that the excruciatingly complex details of the NAS system can only be found within the minds of the current developers in the Applications Development group supporting the NAS system as a largely unwritten oral history. You will learn that this secret, and largely unwritten, oral history of the NAS system can only be slowly learned by patiently attending many meetings with members of your new Applications Development group and by gradually learning parts of the code for the NAS system through working on portions of it on a trial-and-error basis.

Normally, as a very frightened and very timid new member of the Applications Development group supporting the NAS system, you will soon discover that most of the knowledge about the NAS system that you quickly gain will result from goofing around with the code for the NAS system in an IDE (Integrated Development Environment), like Eclipse or Microsoft's Visual Studio, in the development computing environment that your IT Department has set up for coding and testing software. In such a development environment, you can make many experimental runs on portions of the NAS system and watch the code execute line-by-line.

Figure 1 – Above is a simplified flowchart for the NAS system.

The Maintenance Plight of Biologists
Now over the past 100 years, biologists have also been trying to figure out the processing flowcharts of carbon-based life by induction, deduction and by performing experiments. This flowcharting activity has been greatly enhanced by the rise of bioinformatics. A fine example of this is the KEGG (Kyoto Encyclopedia of Genes and Genomes). The KEGG is an online collection of databases detailing genomes, biochemical pathways, diseases, drugs, and biochemical molecules. KEGG is available at:

https://www.genome.jp/kegg/kegg2.html

A good description of the KEGG is available on Wikipedia at:

https://en.wikipedia.org/wiki/KEGG

Figure 2 – Above is a simplified flowchart of the metabolic pathways used by carbon-based life.

Figure 3 – Above is a high-level flowchart of the metabolic pathways used by carbon-based life as presented in the KEGG online database. You can use the KEGG online database to drill down into the above chart and to dig down into the flowcharts for many other biological processes.

I encourage all IT professionals to try out the KEGG online database and drill down into the documented flowcharts of carbon-based life, both to appreciate the complexities of carbon-based life and to see an excellent example of the documentation of complex information flows. Softwarephysics has long suggested that a biological approach to software is needed to progress to such levels of complexity. For more on that see Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework. The challenge for biologists is to piece together the underlying software that runs these complicated flowcharts.
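
For IT professionals who would rather explore KEGG programmatically, KEGG also exposes a simple REST interface. Below is a minimal sketch in Python that lists the human metabolic pathways and pulls down one pathway entry. The endpoint paths (the /list and /get operations on rest.kegg.jp) reflect my reading of the KEGG REST documentation, so treat them as an assumption and check the KEGG site for the current interface before relying on them.

from urllib.request import urlopen

def kegg(operation, argument):
    """Fetch one KEGG REST resource as plain text."""
    with urlopen(f"https://rest.kegg.jp/{operation}/{argument}") as response:
        return response.read().decode("utf-8")

# List the human (hsa) pathways - each line holds a pathway id and a name.
pathways = kegg("list", "pathway/hsa")
print(pathways.splitlines()[:5])

# Pull down the flat-file entry for the glycolysis pathway.
print(kegg("get", "hsa00010")[:500])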

Computer Science Comes to the Rescue in the Form of Computational Biology
IT professionals in Applications Development face a similar challenge when undertaking a major refactoring effort of a large heavy-duty commercial software system. In a refactoring effort, ancient software that was written in the deep past using older technologies and programming techniques is totally rewritten using modern ones. In order to do that, the first step is to create flowcharts and Input-Output Diagrams to describe the logical flow of the software to be rewritten. In a sense, the Microsoft Research Biological Computation group is trying to do the very same thing by refactoring the underlying biological software that runs carbon-based life.

In Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework, I described the Waterfall project management development model that was so popular during the 1970s, 1980s and 1990s. In the classic Waterfall model, detailed user requirements and code specification documents were formulated before any coding began. These documents formed the blueprints for the software to be developed. One of the techniques frequently used in preparing them was to begin by creating a number of high-level flowcharts of the processing flows and also a series of Input-Output Diagrams that described those high-level flowcharts in more detail.

Figure 4 – Above is the general form of an Input-Output Diagram. Certain Data is Input to a software Process. The software Process then processes the Input Data and may read or write data to storage. When the software Process completes, it passes Output Information to the next software Process as Input.

Figure 5 – Above is a more realistic Input-Output Diagram. In general, a hierarchy of Input-Output Diagrams was generated to break down the proposed software into a number of modules that could be separately coded and unit tested. For example, the above Input-Output Diagram is for Sub-module 2.1 of the proposed software. For coding specifications, the Input-Output Diagram for Sub-module 2.1 would be broken down into more detailed Input-Output Diagrams like for Sub-module 2.1.1, Sub-module 2.1.2, Sub-module 2.1.3 ...
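
For readers who think more easily in code than in diagrams, here is a minimal sketch in Python of the idea behind a hierarchy of Input-Output Diagrams: each Process takes Input data, may read or write storage, and hands its Output to the next Process as Input. The sub-module names echo the hypothetical ones in the figures above, and the processing they do is purely illustrative.

storage = {}                                   # stands in for files and databases

def submodule_2_1_1(input_data):
    storage["raw"] = input_data                # write the raw Input to storage
    return [record.strip() for record in input_data]

def submodule_2_1_2(input_data):
    return [record.upper() for record in input_data]

def submodule_2_1_3(input_data):
    return {"record_count": len(input_data), "records": input_data}

def submodule_2_1(input_data):
    # The Output of each sub-module becomes the Input of the next one.
    output = input_data
    for process in (submodule_2_1_1, submodule_2_1_2, submodule_2_1_3):
        output = process(output)
    return output

print(submodule_2_1([" alpha ", " beta "]))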

When undertaking a major software refactoring project, frequently high-level flowcharts and Input-Output Diagrams are produced to capture the functions of the current software to be refactored. When trying to uncover the underlying biological software that runs on carbon-based life, computational biologists need to do the very same thing. The Microsoft Research Biological Computation group is working on a number of large projects in this effort and I would like to highlight two of them.

(RE:IN) - the Reasoning Engine for Interaction Networks
In order to perform a similar software analysis for the functions of living cells and organisms, the Microsoft Research Biological Computation group developed some software called (RE:IN) - the Reasoning Engine for Interaction Networks. The purpose of the (RE:IN) software is to produce a number of possible models that describe the underlying biological software that runs certain biological processes. The (RE:IN) software essentially outputs possible models consisting of flowcharts and Input-Output Diagrams that explain the input data provided to it. To use the (RE:IN) software, you first enter a number of components. A component can be a gene, a protein or any interacting molecule. The user then enters a number of known interactions between the components. Finally, the user enters a number of experimental observations that have been made on the entire network of components and interactions. The (RE:IN) software then calculates a number of logical models that explain the results of the experimental observations in terms of the components and the interactions amongst the components. I imagine the (RE:IN) software could also be used to analyze commercial software in terms of data components and processing interactions to produce logical models of software behavior for major software refactoring efforts too. The homepage for the (RE:IN) software is at:

https://www.microsoft.com/en-us/research/project/reasoning-engine-for-interaction-networks-rein/

You can download the documentation for the (RE:IN) at:

https://www.microsoft.com/en-us/research/uploads/prod/2016/06/reintutorial.docx

From the (RE:IN) homepage you can follow a link that will start up the (RE:IN) software in your browser, or you can use this link:

https://rein.cloudapp.net/

The (RE:IN) homepage also has a couple of examples that have already been filled in with components, interactions and experimental observations such as this one:

The network governing naive pluripotency in mouse embryonic stem cells:
https://rein.cloudapp.net/?pluripotency

It takes a bit of time to compute a Solution to the model, so click on the Options tab and set the Solutions Limit to "1" instead of "10". Then click on the "Run Analysis" button and wait while the processing wheel turns. Eventually, the analysis will finish and display a message box telling you that the process has completed.
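
To make the three kinds of input a little more concrete, here is a small Python illustration of the sort of information the tool works from - components, possible interactions and experimental observations. This is emphatically not the actual (RE:IN) input format, and the gene names and observations below are made up purely for illustration.

# NOT the actual (RE:IN) input format - just an illustration of the three
# kinds of input: components, possible interactions between them, and
# experimental observations that any candidate model must reproduce.

components = ["GeneA", "GeneB", "GeneC"]       # purely hypothetical genes

# Possible interactions: (source, target, sign). The reasoning engine decides
# which of these to keep in each candidate model.
possible_interactions = [
    ("GeneA", "GeneB", "activates"),
    ("GeneB", "GeneC", "represses"),
    ("GeneC", "GeneA", "activates"),
]

# Experimental observations: a starting state and what was observed later.
# A candidate model is only accepted if it can reproduce every observation.
observations = [
    ({"GeneA": 1, "GeneB": 0, "GeneC": 0}, {"GeneB": 1}),
    ({"GeneA": 0, "GeneB": 1, "GeneC": 1}, {"GeneC": 0}),
]

print(f"{len(components)} components, {len(possible_interactions)} possible "
      f"interactions, {len(observations)} observations to satisfy")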

Station B - Microsoft's Integrated Development Environment for Programming Biology
Station B is Microsoft's project for creating an IDE for programming biology that is similar to Microsoft's Visual Studio IDE for computer software. The homepage for Microsoft's Station B is at:

https://www.microsoft.com/en-us/research/project/stationb/

The general idea behind Station B is to provide for biological programming the kind of IDE that Microsoft's Visual Studio provides for computer software development and maintenance. Here is the Overview from the Station B homepage:

Building a platform for programming biology
The ability to program biology could enable fundamental breakthroughs across a broad range of industries, including medicine, agriculture, food, construction, textiles, materials and chemicals. It could also help lay the foundation for a future bioeconomy based on sustainable technology. Despite this tremendous potential, programming biology today is still done largely by trial-and-error. To tackle this challenge, the field of synthetic biology has been working collectively for almost two decades to develop new methods and technology for programming biology. Station B is part of this broader effort, with a focus on developing an integrated platform that enables selected partners to improve productivity within their own organisations, in line with Microsoft’s core mission. The Station B project builds on over a decade of research at Microsoft on understanding and programming information processing in biological systems, in collaboration with several leading universities. The name Station B is directly inspired by Station Q, which launched Microsoft’s efforts in quantum computing, but focuses instead on biological computing.

The Station B platform is being developed at Microsoft Research in Cambridge, UK, which houses Microsoft’s first molecular biology laboratory. The platform aims to improve all phases of the Design-Build-Test-Learn workflow typically used for programming biological systems:

• The Design phase will incorporate biological programming languages that operate at the molecular, genetic and network levels. These languages can in turn be compiled to a hierarchy of biological abstractions, each with their associated analysis methods, where different abstractions can be selected depending on the biological question being addressed. For example, a Continuous Time Markov Chain can be used to determine how random noise affects system function, using stochastic simulation or probabilistic model-checking methods.

• The Build phase will incorporate compilers that translate high-level programs to DNA code, together with a digital encoding of the biological experiments to be performed.

• The Test phase will execute biological experiments using lab robots in collaboration with technology partner Synthace, by using their award-winning Antha software, a powerful software platform built on Azure Internet of Things that gives biologists sophisticated control over lab hardware.

• The Learn phase will incorporate a range of methods for extracting biological knowledge from experimental data, including Bayesian inference, symbolic reasoning and deep learning methods, running at scale on Azure.

These phases will be integrated with a biological knowledge base that stores computational models representing the current understanding of the biological systems under consideration. As new experiments are performed, the knowledge base will be updated via automated learning.
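
To give a feel for the kind of stochastic simulation of a Continuous Time Markov Chain mentioned in the Design phase above, here is a minimal Gillespie-style sketch in Python of a single gene's mRNA count being produced and degraded. The rate constants are made up for illustration, and the example is mine, not Microsoft's.

import random

def gillespie(production_rate=5.0, degradation_rate=0.5, t_end=50.0, seed=42):
    """Simulate one trajectory of a simple birth-death Continuous Time Markov Chain."""
    random.seed(seed)
    t, count = 0.0, 0
    trajectory = [(t, count)]
    while t < t_end:
        rates = [production_rate, degradation_rate * count]   # propensities of the two reactions
        total = sum(rates)
        t += random.expovariate(total)                        # exponentially distributed waiting time
        if random.random() < rates[0] / total:                # pick a reaction, weighted by its rate
            count += 1                                        # one mRNA molecule produced
        else:
            count -= 1                                        # one mRNA molecule degraded
        trajectory.append((t, count))
    return trajectory

# Random noise makes every run end differently, even with identical rate constants.
print([gillespie(seed=s)[-1][1] for s in range(5)])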


For those of you not familiar with Microsoft's Visual Studio IDE, you can read about it and download a free community version of Visual Studio for your own personal use at:

https://visualstudio.microsoft.com/vs/

When you crank up Microsoft's Visual Studio, you will find all of the software tools that you need to develop new software or maintain old software. The Visual Studio IDE allows software developers to perform development and maintenance chores that took days or weeks back in the 1960s, 1970s and 1980s in a matter of minutes.

Figure 6 – Above is a screenshot of Microsoft's Visual Studio IDE. It assists developers with writing new software or maintaining old software by automating many of the labor-intensive and tedious chores of working on software. The intent of Station B is to do the same for biologists.

Before you download a free community version of Visual Studio be sure to watch some of the Visual Studio 2019 Launch videos at the bottom of the Visual Studio 2019 download webpage to get an appreciation for what a modern IDE can do.

Some of My Adventures with Programming Software in the 20th Century
But to really understand the significance of an IDE for biologists like Station B, we need to look back a bit to the history of writing software in the 20th century. Like the biological labwork of today, writing and maintaining software in the 20th century was very inefficient, time-consuming and tedious. For example, when I first learned to write Fortran code at the University of Illinois at Urbana in 1972, we were punching out programs on an IBM 029 keypunch machine, and I discovered that writing code on an IBM 029 keypunch machine was even worse than writing term papers on a manual typewriter. At least when you submitted a term paper with a few typos, your professor was usually kind enough not to abend your term paper right on the spot and give you a grade of zero. Sadly, I learned that such was not the case with Fortran compilers! The first thing you did was to write out your code on a piece of paper as best you could back at the dorm. The back of a large stack of fanfold printer paper output was ideal for such purposes. In fact, as a physics major, I first got hooked by software while digging through the wastebaskets of DCL, the Digital Computing Lab, at the University of Illinois looking for fanfold listings of computer dumps that were about a foot thick. I had found that the backs of thick computer dumps were ideal for working on lengthy problems in my quantum mechanics classes.

It paid to do a lot of desk-checking of your code back at the dorm before heading out to the DCL. Once you got to the DCL, you had to wait your turn for the next available IBM 029 keypunch machine. This was very much like waiting for the next available washing machine on a crowded Saturday morning at a laundromat. When you finally got to your IBM 029 keypunch machine, you would load it up with a deck of blank punch cards and then start punching out your program. You would first press the feed button to have the machine pull your first card from the deck of blank cards and register the card in the machine. Fortran compilers required code to begin in column 7 of the punch card so the first thing you did was to press the spacebar 6 times to get to column 7 of the card. Then you would try to punch in the first line of your code. If you goofed and hit the wrong key by accident while punching the card, you had to eject the bad card and start all over again with a new card. Structured programming had not been invented yet, so nobody indented code at the time. Besides, trying to remember how many times to press the spacebar for each new card in a block of indented code was just not practical. Pressing the spacebar 6 times for each new card was hard enough! Also, most times we proofread our card decks by flipping through them before we submitted the card deck. Trying to proofread indented code in a card deck would have been rather disorienting, so nobody even thought of indenting code. Punching up lots of comment cards was also a pain, so most people got by with a minimum of comment cards in their program deck.

After you punched up your program on a card deck, you would then punch up your data cards. Disk drives and tape drives did exist in those days, but disk drive storage was incredibly expensive and tapes were only used for huge amounts of data. If you had a huge amount of data, it made sense to put it on a tape because if you had several feet of data on cards, there was a good chance that the operator might drop your data card deck while feeding it into the card reader. But usually, you ended up with a card deck that held the source code for your program and cards for the data to be processed too. You also punched up the JCL (Job Control Language) cards for the IBM mainframe that instructed the IBM mainframe to compile, link and then run your program all in one run. You then dropped your finalized card deck into the input bin so that the mainframe operator could load your card deck into the card reader for the IBM mainframe. After a few hours, you would then return to the output room of the DCL and go to the alphabetically sorted output bins that held all the jobs that had recently run. If you were lucky, in your output bin you found your card deck and the fanfolded computer printout of your last run. Unfortunately, normally you found that something probably went wrong with your job. Most likely you had a typo in your code that had to be fixed. If it was nighttime and the mistake in your code was an obvious typo, you probably still had time for another run, so you would get back in line for an IBM 029 keypunch machine and start all over again. You could then hang around the DCL working on the latest round of problems in your quantum mechanics course. However, machine time was incredibly expensive in those days and you had a very limited budget for machine charges. So if there was some kind of logical error in your code, many times you had to head back to the dorm for some more desk checking of your code before giving it another shot the next day.

Figure 7 - An IBM 029 keypunch machine like the one I first learned to program on at the University of Illinois in 1972.

Figure 8 - Each card could hold a maximum of 80 bytes. Normally, one line of code was punched onto each card.

Figure 9 - The cards for a program were held together into a deck with a rubber band, or for very large programs, the deck was held in a special cardboard box that originally housed blank cards. Many times the data cards for a run followed the cards containing the source code for a program. The program was compiled and linked in two steps of the run and then the generated executable file processed the data cards that followed in the deck.

Figure 10 - To run a job, the cards in a deck were fed into a card reader, as shown on the left above, to be compiled, linked, and executed by a million-dollar mainframe computer with a clock speed of about 750 KHz and about 1 MB of memory.

Figure 11 - The output of a run was printed on fanfolded paper and placed into an output bin along with your input card deck.

I finished up my B.S. in Physics at the University of Illinois at Urbana in 1973 and headed up north to complete an M.S. in Geophysics at the University of Wisconsin at Madison. Then from 1975 – 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. I kept coding Fortran the whole time. In 1979, I made a career change into IT and spent about 20 years in development. For the last 17 years of my career, I was in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. In 1979, when I became an IT professional in Amoco's IT department, I noticed that not much had changed with the way software was developed and maintained. Structured programming had arrived, so we were now indenting code and adding comment statements to the code, but I was still programming on cards. We were now using IBM 129 keypunch machines that were a little bit more sophisticated than the old IBM 029 keypunch machines. However, the coding process was still very much the same. I worked on code at my desk and still spent a lot of time desk checking the code. When I was ready for my next run, I would get into an elevator and travel down to the basement of the Amoco Building where the IBM mainframes were located. Then I would punch my cards on one of the many IBM 129 keypunch machines but this time with no waiting for a machine. After I submitted my deck, I would travel up 30 floors to my cubicle to work on something else. After a couple of hours, I would head down to the basement again to collect my job. On a good day, I could manage to get 4 runs in. But machine time was still incredibly expensive. If I had a $100,000 project, $25,000 went for programming time, $25,000 went to IT overhead like management and data management services costs, and a full $50,000 went to machine costs for compiles and test runs!

This may all sound very inefficient and tedious today, but it can be even worse. When I first changed careers to become an IT professional in 1979, I used to talk to the old-timers about the good old days of IT. They told me that when the operators began their shift on an old-time 1950s vacuum tube computer, the first thing they did was to crank up the voltage on the vacuum tubes to burn out the tubes that were on their last legs. Then they would replace the burned-out tubes to start the day with a fresh machine. They also explained that the machines were so slow that they spent all day processing production jobs. Emergency maintenance work to fix production bugs was allowed at night, but new development was limited to one compile and test run per week! They also told me about programming the plugboards of electromechanical Unit Record Processing machines back in the 1950s by physically rewiring the plugboards. The Unit Record Processing machines would then process hundreds of punch cards per minute by routing the punch cards from machine to machine in processing streams.

Figure 12 – In the 1950s Unit Record Processing machines like this card sorter were programmed by physically rewiring a plugboard.

Figure 13 – The plugboard for a Unit Record Processing machine.

Using Software to Write Software
But all of this was soon to change. In the early 1980s, the IT department of Amoco switched to using TSO running on dumb IBM 3278 terminals to access IBM mainframes. We now used a full-screen editor called ISPF running under TSO on the IBM 3278 terminals to write code and submit jobs, and our development jobs usually ran in less than an hour. The source code for our software files was now on disk in partitioned datasets for easy access and updating. The data had moved to tapes and it was the physical process of mounting and unmounting tapes that now slowed down testing. For more on tape processing see: An IT Perspective on the Origin of Chromatin, Chromosomes and Cancer. Now I could run maybe 10 jobs in one day to test my code! However, machine costs were still incredibly high and still accounted for about 50% of project costs, so we still had to do a lot of desk checking to save on machine costs. At first, the IBM 3278 terminals appeared on the IT floor in "tube rows" like the IBM 029 keypunch machines of yore. But after a few years, each IT professional was given their own IBM 3278 terminal on their own desk. Finally, there was no more waiting in line for an input device!

Figure 14 - The IBM ISPF full-screen editor ran on IBM 3278 terminals connected to IBM mainframes in the late 1970s. ISPF was also a screen-based interface to TSO (Time Sharing Option) that allowed programmers to do things like copy files and submit batch jobs. ISPF and TSO running on IBM mainframes allowed programmers to easily reuse source code by doing copy/paste operations with the screen editor from one source code file to another. By the way, ISPF and TSO are still used today on IBM mainframe computers to support writing and maintaining software.

I found that using software like ISPF to write and maintain software dramatically improved software development and maintenance productivity. It was like moving from typing term papers on manual typewriters to writing them on word processors. It probably improved productivity by a factor of 10 or more. In the early 1980s, I was very impressed by this dramatic increase in productivity brought on by using software to write and maintain software. I was working on softwarephysics at the time, and its findings led me to believe that what programmers really needed was an integrated software tool that would help to automate all of the tedious and repetitious activities of writing and maintaining software. This would allow programmers to overcome the effects of the second law of thermodynamics in a highly nonlinear Universe - for more on that see The Fundamental Problem of Software. But how?

BSDE - An Early Software IDE Founded On Biological Principles
I first began by slowly automating some of my mundane programming activities with ISPF edit macros written in REXX. In SoftwarePhysics I described how this automation activity slowly grew into the development of BSDE - the Bionic Systems Development Environment back in 1985 while in the IT department of Amoco. I am going to spend a bit of time on BSDE because I think it highlights the similarities between writing and maintaining computer software and biological software and the value of having an IDE for each.

BSDE slowly evolved into a full-fledged mainframe-based IDE, like Microsoft's modern Visual Studio, over a number of years at a time when software IDEs did not exist. During the 1980s, BSDE was used to grow several million lines of production code for Amoco by growing applications from embryos. For an introduction to embryology see Software Embryogenesis. The DDL statements used to create the DB2 tables and indexes for an application were stored in a sequential file called the Control File and performed the functions of genes strung out along a chromosome. Applications were grown within BSDE by turning their genes on and off to generate code. BSDE was first used to generate a Control File for an application by allowing the programmer to create an Entity-Relationship diagram using line printer graphics on an old IBM 3278 terminal.

Figure 15 - BSDE was run on IBM 3278 terminals, using line printer graphics, and in a split-screen mode. The embryo under development grew within BSDE on the top half of the screen, while the code generating functions of BSDE were used on the lower half of the screen to insert code into the embryo and to do compiles on the fly while the embryo ran on the upper half of the screen. Programmers could easily flip from one session to the other by pressing a PF key.

After the Entity-Relationship diagram was created, the programmer used a BSDE option to create a skeleton Control File with DDL statements for each table on the Entity-Relationship diagram and each skeleton table had several sample columns with the syntax for various DB2 datatypes. The programmer then filled in the details for each DB2 table. When the first rendition of the Control File was completed, another BSDE option was used to create the DB2 database for the tables and indexes on the Control File. Another BSDE option was used to load up the DB2 tables with test data from sequential files. Each DB2 table on the Control File was considered to be a gene. Next, a BSDE option was run to generate an embryo application. The embryo was a 10,000 line of code PL/1, Cobol or REXX application that performed all of the primitive functions of the new application. The programmer then began to grow the embryo inside of BSDE in a split-screen mode. The embryo ran on the upper half of an IBM 3278 terminal and could be viewed in real-time, while the code generating options of BSDE ran on the lower half of the IBM 3278 terminal. BSDE was then used to inject new code into the embryo's programs by reading the genes in the Control File for the embryo in a real-time manner while the embryo was running in the top half of the IBM 3278 screen. BSDE had options to compile and link modified code on the fly while the embryo was still executing. This allowed for a tight feedback loop between the programmer and the application under development. In fact many times BSDE programmers sat with end-users and co-developed software together on the fly in a very Agile manner. When the embryo had grown to full maturity, BSDE was then used to create online documentation for the new application and was also used to automate the install of the new application into production. Once in production, BSDE generated applications were maintained by adding additional functions to their embryos.

Since BSDE was written using the same kinds of software that it generated, I was able to use BSDE to generate code for itself. The next generation of BSDE was grown inside of its maternal release. Over a period of seven years, from 1985 – 1992, more than 1,000 generations of BSDE were generated, and BSDE slowly evolved in an Agile manner into a very sophisticated tool through small incremental changes. BSDE dramatically improved programmer efficiency by greatly reducing the number of buttons programmers had to push in order to generate software that worked.

Figure 16 - Embryos were grown within BSDE in a split-screen mode by transcribing and translating the information stored in the genes in the Control File for the embryo. Each embryo started out very much the same but then differentiated into a unique application based upon its unique set of genes.

Figure 17 – BSDE appeared as the cover story of the October 1991 issue of the Enterprise Systems Journal

BSDE had its own online documentation that was generated by BSDE. Amoco's IT department also had a class to teach programmers how to get started with BSDE. As part of the curriculum Amoco had me prepare a little cookbook on how to build an application using BSDE:

BSDE – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.

I wish that I could claim that I was smart enough to have sat down and thought up all of this stuff from first principles, but that is not what happened. It all just happened through small incremental changes in a very Agile manner over a very long period of time and most of the design work was done subconsciously, if at all. Even the initial BSDE ISPF edit macros happened through serendipity. When I first started programming DB2 applications, I found myself copying in the DDL CREATE TABLE statements from the file I used to create the DB2 database into the program that I was working on. This file, with the CREATE TABLE statements, later became the Control File used by BSDE to store the genes for an application. I would then go through a series of editing steps on the copied in data to transform it from a CREATE TABLE statement into a DB2 SELECT, INSERT, UPDATE, or DELETE statement. I would do the same thing all over again to declare the host variables for the program. Being a lazy programmer, I realized that there was really no thinking involved in these editing steps and that an ISPF edit macro could do the job equally as well, only very quickly and without error, so I went ahead and wrote a couple of ISPF edit macros to automate the process. I still remember the moment when it first hit me. For me, it was very much like the scene in 2001 - A Space Odyssey when the man-ape picks up a wildebeest thighbone and starts to pound the ground with it. My ISPF edit macros were doing the same thing that happens when the information in a DNA gene is transcribed into a protein! A flood of biological ideas poured into my head over the next few days, because at last, I had a solution for my pent-up ideas about nonlinear systems and the second law of thermodynamics that were making my life so difficult as a commercial software developer. We needed to "grow" code – not write code!

BSDE began as a few simple ISPF edit macros running under ISPF edit. ISPF is the software tool that mainframe programmers still use today to interface to the IBM MVS and VM/CMS mainframe operating systems and contains an editor that can be greatly enhanced through the creation of edit macros written in REXX. I began BSDE by writing a handful of ISPF edit macros that could automate some of the editing tasks that a programmer needed to do when working on a program that used a DB2 database. These edit macros would read a Control File, which contained the DDL statements to create the DB2 tables and indexes. The CREATE TABLE statements in the Control File were the equivalent of genes, and the Control File itself performed the functions of a chromosome. For example, a programmer would retrieve a skeleton COBOL program, with the bare essentials for a COBOL/DB2 program, from a stock of reusable BSDE programs. The programmer would then position their cursor in the code to generate a DB2 SELECT statement and hit a PFKEY. The REXX edit macro would read the genes in the Control File and would display a screen listing all of the DB2 tables for the application. The programmer would then select the desired tables from the screen, and the REXX edit macro would then copy the selected genes to an array (mRNA). The mRNA array was then sent to a subroutine that inserted lines of code (tRNA) into the COBOL program. The REXX edit macro would also declare all of the SQL host variables in the DATA DIVISION of the COBOL program and would generate code to check the SQLCODE returned from DB2 for errors and take appropriate actions. A similar REXX ISPF edit macro was used to generate screens. These edit macros were also able to handle PL/1 and REXX/SQL programs. They could have been altered to generate the syntax for any programming language such as C, C++, or Java. As time progressed, BSDE took on more and more functionality via ISPF edit macros. Finally, there came a point where BSDE took over and ISPF began to run under BSDE. This event was very similar to the emergence of the eukaryotic architecture for cellular organisms. BSDE consumed ISPF like the first eukaryotic cells that consumed prokaryotic bacteria and used them as mitochondria and chloroplasts. With continued small incremental changes, BSDE continued to evolve.
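
As a rough modern illustration of what those first edit macros did, here is a toy Python sketch that reads a single CREATE TABLE "gene" and transcribes it into a DB2 SELECT statement plus some crude COBOL host-variable declarations. The original macros were, of course, written in REXX under ISPF edit and generated proper PICture clauses from the DB2 datatypes; the CUSTOMER table below is hypothetical.

import re

control_file_gene = """
CREATE TABLE CUSTOMER (
    CUST_ID     INTEGER      NOT NULL,
    CUST_NAME   CHAR(30),
    BALANCE     DECIMAL(9,2)
)
"""

def parse_gene(ddl):
    # The table name follows CREATE TABLE; the column lines are indented.
    table = re.search(r"CREATE TABLE (\w+)", ddl).group(1)
    columns = re.findall(r"^\s+(\w+)\s+(\w+(?:\(\d+(?:,\d+)?\))?)", ddl, re.MULTILINE)
    return table, columns

def transcribe(ddl):
    table, columns = parse_gene(ddl)
    column_names = [name for name, _ in columns]
    host_names = [name.replace("_", "-") for name in column_names]   # COBOL-style names
    select = (f"SELECT {', '.join(column_names)}\n"
              f"  INTO {', '.join(':' + h for h in host_names)}\n"
              f"  FROM {table}")
    # The real macros generated proper COBOL PICture clauses from the DB2
    # datatypes; PIC X(30) is just a crude stand-in here.
    declarations = "\n".join(f"01  {h}    PIC X(30)." for h in host_names)
    return select, declarations

select_stmt, host_var_decls = transcribe(control_file_gene)
print(select_stmt)
print(host_var_decls)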

I noticed that I kept writing the same kinds of DB2 applications, with the same basic body plan, over and over. At the time I did not know it, but these were primarily applications using the Model-View-Controller (MVC) design pattern. The idea of using design patterns had not yet been invented in computer science and IT, so my failing to take note of this is understandable. For more on the MVC design pattern please see Software Embryogenesis. From embryology, I got the idea of using BSDE to read the Control File for an application and to generate an "embryo" for the application based on its unique set of genes. The embryo would perform all of the things I routinely programmed over and over for a new application. Once the embryo was generated for a new application from its Control File, the programmer would then interactively "grow" code and screens for the application. With time, each embryo differentiated into a unique individual application in an Agile manner until the fully matured application was delivered into production by BSDE. At this point, I realized that I could use BSDE to generate code for itself, and that is when I started using BSDE to generate the next generation of BSDE. This technique really sped up the evolution of BSDE because I had a positive feedback loop going. The more powerful BSDE became, the faster I could add improvements to the next generation of BSDE through the accumulated functionality inherited from previous generations.

Embryos were grown within BSDE using an ISPF split-screen mode. The programmer would start up a BSDE session and run Option 4 – Interactive Systems Development from the BSDE Master Menu. This option would look for an embryo, and if it did not find one, it would offer to generate an embryo for the programmer. Once an embryo was implanted, the option would turn the embryo on and the embryo would run inside of the BSDE session with whatever functionality it currently had. The programmer would then split his screen with PF2 and another BSDE session would appear in the lower half of his terminal. The programmer could easily toggle control back and forth between the upper and lower sessions with PF9. The lower session of BSDE was used to generate code and screens for the embryo on the fly, while the embryo in the upper BSDE session was fully alive and functional. This was possible because BSDE generated applications that used ISPF Dialog Manager for screen navigation, which was an interpretive environment, so compiles were not required for screen changes. If your logic was coded in REXX, you did not have to do compiles for logic changes either, because REXX was an interpretive language. If PL/1 or COBOL were used for logic, BSDE had facilities to easily compile code for individual programs after a coding change, and ISPF Dialog Manager would simply load the new program executable when that part of the embryo was exercised. These techniques provided a tight feedback loop so that programmers and end-users could immediately see the effects of a change as the embryo grew and differentiated.

Unfortunately, the early 1990s saw the downfall of BSDE. The distributed computing model hit with full force, and instead of deploying applications on mainframe computers, we began to distribute applications across a network of servers and client PCs. Since BSDE generated applications for mainframe computers, it could not compete, and BSDE quickly went extinct in the minds of Amoco IT management. I was left with just a theory and no tangible product, and it became much harder to sell softwarephysics at that point. So after a decade of being considered a little "strange", I decided to give it a rest, and I began to teach myself how to write C and C++ software for Unix servers and PCs. I started out with the Microsoft C/C++ C7 compiler which was a DOS application for writing C/C++ DOS applications. But I converted to Microsoft Visual C/C++ when it first came out in 1993. Microsoft Visual C/C++ was Microsoft's first real IDE and the predecessor to Microsoft's modern Visual Studio IDE. Visual C/C++ was so powerful that I knew that a PC version of BSDE could never compete, so I abandoned all thoughts of producing a PC version of BSDE.

Conclusion
I think that the impact that software development IDEs had on software development and maintenance in the 20th century strongly suggests that Microsoft's Station B project to build an IDE for biological programming will provide a similarly dramatic leap in biological programming productivity in the 21st century, and that this will allow us to harness the tremendous power that self-replicating biological systems can provide.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston