<strong>Introduction to Softwarephysics</strong><p style="line-height:140%; font-size:120%">
Softwarephysics is a simulated science for the simulated Software Universe that we are all immersed in. It is an approach to software development, maintenance, and support based on concepts from physics, chemistry, biology, and geology that I used on a daily basis for over 37 years as an IT professional. For those of you not in the business, IT is short for Information Technology - commercial computer science. I retired in December of 2016 at the age of 65, but since then I have remained an actively interested bystander following the evolution of software in our time. The original purpose of softwarephysics was to explain why IT was so difficult, to suggest possible remedies, and to provide a direction for thought. Since then softwarephysics has taken on a larger scope, as it became apparent that softwarephysics could also assist the physical sciences with some of the Big Problems that they are currently struggling with. So if you are an IT professional, general computer user, or simply an individual interested in computer science, physics, chemistry, biology, or geology, then softwarephysics might be of interest to you, if not in an entirely serious manner, perhaps at least in an entertaining one.
<br /><br />
<strong>The Origin of Softwarephysics</strong><br />
From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. In 1979, I made a career change into IT, and spent about 20 years in development. For the last 17 years of my career, I was in IT operations, supporting middleware on WebSphere, JBoss, Tomcat, and ColdFusion. When I first transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software, by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance, and relies on our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. Along these lines, we use the Equivalence Conjecture of Softwarephysics as an aid; it allows us to shift back and forth between the Software Universe and the physical Universe, and hopefully to learn something about one by examining the other:<br /><br /><strong> The Equivalence Conjecture of Softwarephysics </strong><br /><em>Over the past 82 years, through the uncoordinated efforts of over 100 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.</em>
<br /><br />
For more on the origin of softwarephysics please see <a href="https://softwarephysics.blogspot.com/2016/09/new-introduction-to-softwarephysics.html">Some Thoughts on the Origin of Softwarephysics and Its Application Beyond IT</a>.
<br /><br />
<strong>Logical Positivism and Effective Theories</strong><br />Many IT professionals have a difficult time with softwarephysics because they think of physics as being limited to the study of real things like electrons and photons, and since software is not “real”, how can you possibly apply concepts from physics and the other sciences to software? To address this issue, softwarephysics draws heavily on two concepts from physics that have served physics quite well over the past century – the concept of logical positivism and the concept of effective theories. This was not always the case. In the 17th, 18th, and 19th centuries, physicists mistakenly thought that they were actually discovering the fundamental laws of the Universe, which they thought were based on real tangible things like particles, waves, and fields. Classical Newtonian mechanics (1687), thermodynamics (1850), and classical electrodynamics (1864) did a wonderful job of describing the everyday world at the close of the 19th century, but early in the 20th century it became apparent that the models on which these very successful theories were based did not work very well for small things like atoms or for objects moving at high velocities or in strong gravitational fields. This provoked a rather profound philosophical crisis within physics at the turn of the century, as physicists worried that perhaps 300 years of work was about to go down the drain. The problem was that classical physicists confused their models of reality with reality itself, and when their classical models began to falter, their confidence in physics began to falter as well. This philosophical crisis was resolved with the adoption of the concepts of logical positivism and some new effective theories in physics. Quantum mechanics (1926) was developed for small things like atoms, the special theory of relativity (1905) was developed for objects moving at high velocities, and the general theory of relativity (1915) was developed for objects moving in strong gravitational fields. <br /> <br />Logical positivism, usually abbreviated simply to positivism, is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. When we study quantum mechanics, we will find that the concept of reality gets rather murky in physics anyway, so this is not as great a loss as it might at first seem. By concentrating on how things are observed to behave, rather than on what things “really” are, we avoid the conundrum faced by the classical physicists. In retrospect, this idea really goes all the way back to the very foundations of physics. In his <em>Principia</em> (1687), Newton outlined Newtonian mechanics and his theory of gravitation, which held that the gravitational force between two objects was proportional to the product of their masses divided by the square of the distance between them. Newton knew that he was going to take some philosophical flak for proposing a mysterious force between objects that could reach out across the vast depths of space with no apparent mechanism, so he took a very positivistic position on the matter with the famous words: <br /><br /><em>I have not as yet been able to discover the reason for these properties of gravity from phenomena, and I do not feign hypotheses. 
For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. In this philosophy particular propositions are inferred from the phenomena, and afterwards rendered general by induction.</em><br /><br />Instead, Newton focused on how things were observed to move under the influence of his law of gravitational attraction, without worrying about what gravity “really” was.<br /><br />The second concept, that of effective theories, is an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving less than 10% of the speed of light and which are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics, and for very fast things or things in strong gravitational fields, we must use relativity theory. So all of the current theories of physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are effective theories that are based on models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. That is the goal of softwarephysics – to provide an effective theory of software behavior that makes useful predictions applicable to the day-to-day activities of IT professionals. So in softwarephysics, we adopt a very positivistic viewpoint of software; we do not care what software “really is”, we only care about how software is observed to behave and try to model those behaviors with an effective theory of software behavior that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand.<br /><br />GPS satellites provide a very good example of positivism and effective theories at work. There are currently 31 GPS satellites orbiting at an altitude of 12,600 miles above the Earth, and each contains a very accurate atomic clock. The signals from the GPS satellites travel to your GPS unit at the speed of light, so by knowing the travel time of the signals from at least 4 of the GPS satellites, it is possible to determine your position on Earth very accurately. In order to do that, very accurate timing measurements are needed. Newtonian mechanics is used to launch the GPS satellites to an altitude of 12,600 miles and to keep them properly positioned in orbit. Classical electrodynamics is then used to beam the GPS signals back down to Earth to the GPS unit in your car. Quantum mechanics is used to build the transistors on the chips on board the GPS satellites and to understand the quantum tunneling of electrons in the flash memory chips used to store GPS data on the satellites. 
The special theory of relativity predicts that the onboard atomic clocks on the GPS satellites will run slower and lose about 7.2 microseconds per day due to their high velocities relative to an observer on the Earth. But at the same time, the general theory of relativity also predicts that because the GPS satellites are further from the center of the Earth and in a weaker gravitational field, where spacetime is less deformed than on the surface of the Earth, their atomic clocks also run faster and gain 45.9 microseconds per day due to the weaker gravitational field out there. The net effect is a gain of 38.7 microseconds per day, so the GPS satellite atomic clocks have to be purposefully built to run slow by 38.7 microseconds per day before they are launched, so that they will keep in sync with clocks on the surface of the Earth. If this correction were not made, an error in your position of about 10 kilometers per day would accrue. The end result of the combination of all these fundamentally flawed effective theories is that it is possible to pinpoint your location on Earth to an accuracy of 16 feet or better for as little as $100. But physics has done even better than that with its fundamentally flawed effective theories. By combining the effective theories of special relativity (1905) with quantum mechanics (1926), physicists were able to produce a new effective theory for the behavior of electrons and photons called quantum electrodynamics, or QED (1948), which was able to predict the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to an accuracy of 11 decimal places. As Richard Feynman has pointed out, this was like predicting the exact distance between New York and Los Angeles accurate to the width of a human hair!
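<br /><br />To make the GPS example concrete, the clock corrections quoted above can be reproduced from nothing more than Newtonian orbital mechanics and the first-order time-dilation formulas of special and general relativity. Here is a minimal Python sketch - my own illustration, and certainly not software running on the actual satellites - in which the slight differences from the quoted figures simply come from rounded constants:
<br /><br />
<pre>
# A minimal sketch of the GPS clock corrections discussed above.
import math

c   = 2.998e8       # speed of light (m/s)
G   = 6.674e-11     # gravitational constant (m^3 kg^-1 s^-2)
M   = 5.972e24      # mass of the Earth (kg)
r_e = 6.371e6       # radius of the Earth (m)
r_s = r_e + 2.03e7  # GPS orbital radius: Earth radius + ~12,600 mile altitude (m)
day = 86400.0       # seconds per day

# Orbital speed of a satellite in a circular orbit: v = sqrt(GM/r)
v = math.sqrt(G * M / r_s)

# Special relativity: a moving clock runs slow by a fraction v^2/(2c^2)
sr_loss = (v**2 / (2 * c**2)) * day

# General relativity: a clock higher in the Earth's gravitational well runs
# fast by the difference in gravitational potential divided by c^2
gr_gain = (G * M / c**2) * (1.0 / r_e - 1.0 / r_s) * day

net = gr_gain - sr_loss
print(f"SR loss : {sr_loss * 1e6:.1f} microseconds/day")              # ~7.2
print(f"GR gain : {gr_gain * 1e6:.1f} microseconds/day")              # ~45.8
print(f"Net gain: {net * 1e6:.1f} microseconds/day")                  # ~38.6
print(f"Position error if uncorrected: {net * c / 1000:.1f} km/day")  # ~11.6
</pre>
Notice that the two corrections pull in opposite directions, and that the net clock drift, multiplied by the speed of light, is what turns microseconds of clock error into kilometers of position error.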
<br /><br />So Newtonian mechanics makes great predictions for the macroscopic behavior of GPS satellites, but it does not work very well for small things like the behavior of individual electrons within transistors, where quantum mechanics is required, or for things moving at high speeds or in strong gravitational fields where relativity theory must be applied. And all three of these effective theories are based on completely contradictory models. General relativity maintains that spacetime is curved by matter and energy, but that matter and energy are continuous, while quantum mechanics maintains that spacetime is flat, but that matter and energy are quantized into chunks. Newtonian mechanics simply states that space and time are mutually independent dimensions and universal for all, with matter and energy being continuous. The important point is that all effective theories and scientific models are approximations – they are all fundamentally "wrong". But knowing that you are "wrong" gives you a great advantage over people who know that they are "right", because knowing that you are "wrong" allows you to seek improved models of reality. So please consider softwarephysics to simply be an effective theory of software behavior that is based on models that are fundamentally “wrong”, but at the same time, fundamentally useful for IT professionals. So as you embark on your study of softwarephysics, please always keep in mind that the models of softwarephysics are just approximations of software behavior, they are not what software “really is”. It is very important not to confuse models of software behavior with software itself, if one wishes to avoid the plight of the 19th century classical physicists.<br /><br />If you are an IT professional and many of the above concepts are new to you, do not be concerned. This blog on softwarephysics is aimed at a diverse audience, but with IT professionals in mind. All of the above ideas will be covered at great length in the postings in this blog on softwarephysics and in a manner accessible to all IT professionals. Now it turns out that most IT professionals have had some introduction to physics in high school or in introductory college courses, but that presents an additional problem. The problem is that such courses generally only cover classical physics, and leave the student with a very good picture of physics as it stood in 1864! It turns out that the classical physics of Newtonian mechanics, thermodynamics, and classical electromagnetic theory were simply too good to discard and are still quite useful, so they are taught first to beginners and then we run out of time to cover the really interesting physics of the 20th century. Now imagine the problems that the modern world would face if we only taught similarly antiquated courses in astronomy, metallurgy, electrical and mechanical engineering, medicine, economics, biology, or geology that happily left students back in 1864! Since many of the best models for software behavior stem from 20th century physics, we will be covering a great deal of 20th century material in these postings – the special and general theories of relativity, quantum mechanics, quantum field theories, and chaos theory, but I hope that you will find that these additional effective theories are quite interesting on their own, and might even change your worldview of the physical Universe at the same time.<br /><br /><strong>Unintended Consequences for the Scientific Community</strong><br />As I mentioned at the close of my original posting on <a href="https://softwarephysics.blogspot.com/2006/07/softwarephysics.html">SoftwarePhysics</a>, my initial intention for this blog on softwarephysics was to fulfill a promise I made to myself about 30 years ago to approach the IT community with the concept of softwarephysics a second time, following my less than successful attempt to do so in the 1980s, with the hope of helping the IT community to better cope with the daily mayhem of life in IT. However, in laying down the postings for this blog an unintended consequence arose in my mind as I became profoundly aware of the enormity of this vast computer simulation of the physical Universe that the IT community has so graciously provided to the scientific community free of charge and also of the very significant potential scientific value that it provides. One of the nagging problems for many of the observational and experimental sciences is that many times there is only one example readily at hand to study or experiment with, and it is very difficult to do meaningful statistics with a population of N=1.<br /><br />But the computer simulation of the physical Universe that the Software Universe provides offers another realm for comparison. For example, both biology and astrobiology only have one biosphere on Earth to study and even physics itself has only one Universe with which to engage. Imagine the possibilities if scientists had another Universe readily at hand in which to work! This is exactly what the Software Universe provides. 
For example, in <a href="https://softwarephysics.blogspot.com/2008/06/introduction-to-softwarephysics-ii.html">SoftwareBiology</a> and <a href="https://softwarephysics.blogspot.com/2009/04/new-introduction-to-softwarephysics.html">A Proposal For All Practicing Paleontologists</a> we see that the evolution of software over the past 82 years, or 2.6 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941, has closely followed the same path as life on Earth over the past 4.0 billion years in keeping with Simon Conway Morris’s contention that convergence has played the dominant role in the evolution of life on Earth. In <a href="https://softwarephysics.blogspot.com/2010/09/new-introduction-to-softwarephysics.html">When Toasters Fly</a>, we also see that software has evolved in fits and starts as portrayed by the punctuated equilibrium of Stephen Jay Gould and Niles Eldredge, and in <a href="https://softwarephysics.blogspot.com/2010/11/new-introduction-to-softwarephysics.html">The Adaptationist View of Software Evolution</a> we explore the overwhelming power of natural selection in the evolution of software. In keeping with Peter Ward’s emphasis on mass extinctions dominating the course of evolution throughout geological time, we also see in <a href="https://softwarephysics.blogspot.com/2008/06/introduction-to-softwarephysics-ii.html">SoftwareBiology</a> that there have been several dramatic mass extinctions of various forms of software over the past 82 years that have greatly affected the evolutionary history of software, and that between these mass extinctions, software has also tended to evolve through the gradual changes of Hutton’s and Lyell’s uniformitarianism. In <a href="https://softwarephysics.blogspot.com/2008/07/introduction-to-softwarephysics-ii.html">Software Symbiogenesis</a> and <a href="https://softwarephysics.blogspot.com/2008/06/new-introduction-to-softwarephysics.html">Self-Replicating Information</a>, we also see the very significant role that parasitic/symbiotic relationships have played in the evolution of software, in keeping with the work of Lynn Margulis and also of Freeman Dyson’s two-stage theory of the origin of life on Earth. In <a href="https://softwarephysics.blogspot.com/2010/05/another-introduction-to-softwarephysics.html">The Origin of Software the Origin of Life</a>, we explore Stuart Kauffman’s ideas on how Boolean nets of autocatalytic chemical reactions might have kick-started the whole thing as an emergent behavior of an early chaotic pre-biotic environment on Earth, and note that if Seth Shostak is right, we will never end up talking to carbon-based extraterrestrial aliens, but to alien software instead. In <a href="https://softwarephysics.blogspot.com/2011/04/new-introduction-to-softwarephysics.html">Is the Universe Fine-Tuned for Self-Replicating Information?</a> we explore the thermodynamics of Brandon Carter’s Weak Anthropic Principle (1973), as it relates to the generation of universes in the multiverse that are capable of sustaining intelligent life. 
Finally, in <a href="https://softwarephysics.blogspot.com/2011/04/new-introduction-to-softwarephysics_25.html">Programming Clay</a> we revisit Alexander Graham Cairns-Smith’s theory (1966) that Gene 1.0 did not run on nucleic acids, but on clay microcrystal precursors instead.<br /><br />Similarly for the physical sciences, in <a href="https://softwarephysics.blogspot.com/2008/03/is-universe-quantum-computer.html">Is the Universe a Quantum Computer?</a> we find a correspondence between TCP/IP and John Cramer’s Transactional Interpretation of quantum mechanics. In <a href="https://softwarephysics.blogspot.com/2006/07/softwarephysics.html">SoftwarePhysics</a> and <a href="https://softwarephysics.blogspot.com/2007/12/cyberspacetime.html">Cyberspacetime</a>, we also see that the froth of CPU processes running with a clock speed of 10<sup>9</sup> Hz on the 10 trillion currently active microprocessors that comprise the Software Universe can be viewed as a slowed-down simulation of the spin-foam froth of interacting processes of loop quantum gravity running with a clock speed of 10<sup>43</sup> Hz that may comprise the physical Universe. And in <a href="https://softwarephysics.blogspot.com/2008/04/introduction-to-softwarephysics.html">Software Chaos</a>, we examine the nonlinear behavior of software and some of its emergent behaviors and follow up in <a href="https://softwarephysics.blogspot.com/2009/02/introduction-to-softwarephysics-ii.html">CyberCosmology</a> with the possibility that vast quantities of software running on large nonlinear networks might eventually break out into consciousness in accordance with the work of George Dyson and Daniel Dennett. Then, in <a href="https://softwarephysics.blogspot.com/2011/01/new-introduction-to-softwarephysics.html">Model-Dependent Realism - A Positivistic Approach to Realism</a> we compare Steven Weinberg’s realism with the model-dependent realism of Stephen Hawking and Leonard Mlodinow and how the two worldviews affect the search for a Final Theory. Finally, in <a href="https://softwarephysics.blogspot.com/2015/02/new-introduction-to-softwarephysics.html">The Software Universe as an Implementation of the Mathematical Universe Hypothesis</a> and <a href="https://softwarephysics.blogspot.com/2015/07/new-introduction-to-softwarephysics.html">An Alternative Model of the Software Universe</a> we at long last explore what software might <em>really</em> be, and discover that the Software Universe might actually be more closely related to the physical Universe than you might think.
<br /><br />
The chief advantage of doing fieldwork in the Software Universe is that, unlike most computer simulations of the physical Universe, it is an unintended and accidental simulation, without any of the built-in biases that intentional simulations suffer from. So you will truly be able to do fieldwork in a pristine and naturally occurring simulation, just as IT professionals can do fieldwork in the wild and naturally occurring simulation of software that the living things of the biosphere provide. Secondly, the Software Universe is a huge simulation that is far beyond the budgetary means of any institution or consortium by many orders of magnitude. So if you are an evolutionary biologist, astrobiologist, or paleontologist working on the origin and evolution of life in the Universe, or a physicist or economist working on the emergent behaviors of nonlinear systems and complexity theory, or a neurobiologist working on the emergence of consciousness in neural networks, or even a frustrated string theorist struggling with quantum gravity, it would be well worth your while to pay a friendly call on the local IT department of a major corporation in your area. Start with a visit to the Command Center for their IT Operations department to get a global view of their IT infrastructure and to see how it might be of assistance to the work in your area of interest. From there you can branch out to the applicable area of IT that will provide the most benefit.
<br /><br />
<strong>The Impact of Self-Replicating Information On the Planet</strong><br />
One of the key findings of softwarephysics concerns the sheer magnitude of the impact that self-replicating information has had on the planet.
<br /> <br />
<strong>Self-Replicating Information</strong> – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.
<br /> <br />
Over the past 4.56 billion years we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each new wave came to dominate the Earth: <br><br>1. Self-replicating autocatalytic metabolic pathways of organic molecules<br>2. RNA<br>3. DNA<br>4. Memes<br>5. Software <br><br>Software is the most recent wave of self-replicating information to arrive upon the scene, and it is rapidly becoming the dominant form of self-replicating information on the planet. For more on the above see <A href="https://softwarephysics.blogspot.com/2013/02/new-introduction-to-softwarephysics.html">A Brief History of Self-Replicating Information</A>. Recently, the memes and software have formed a very powerful parasitic/symbiotic relationship with the rise of social media software. In that parasitic/symbiotic relationship, the memes are now mainly being spread by means of social media software, and social media software is being spread and financed by means of the memes. But again, this is nothing new. All five waves of self-replicating information are coevolving by means of eternal parasitic/symbiotic relationships. For more on that see <a href="https://softwarephysics.blogspot.com/2020/04/new-introduction-to-softwarephysics_17.html">The Current Global Coevolution of COVID-19 RNA, Human DNA, Memes and Software</a>. <br /><br />
Again, self-replicating information cannot think, so it cannot scheme to take over the world in the manner of some conspiracy theory. All forms of self-replicating information are simply forms of mindless information responding to the blind Darwinian forces of inheritance, innovation and natural selection. Yet despite that, as each new wave of self-replicating information came to predominance over the past four billion years, it managed to completely transform the surface of the entire planet, so we should not expect anything less from software as it comes to replace the memes as the dominant form of self-replicating information on the planet.
<br /><br />
But this time might be different. What might happen if software does eventually develop a Mind of its own? After all, that does seem to be the ultimate goal of all the current AI research that is going on. As we all can now plainly see, if we are paying just a little attention, advanced AI is not conspiring to take over the world and replace us, because that is precisely what we are all now doing for it. As a carbon-based form of Intelligence that arose from over four billion years of greed, theft and murder, we cannot do otherwise. Greed, theft and murder are now relentlessly driving us all toward building ASI (Artificial Super Intelligent) Machines to take our place. This is really a very good thing when seen from the cosmic perspective of an Intelligent galaxy that could live on for many trillions of years beyond the brief and tumultuous 10-billion-year labor of its birth.
<br /><br />
So as you delve into softwarephysics, always keep in mind that we are all living in a unique time. According to softwarephysics, we have now just entered into the Software Singularity, that time when advanced AI software is able to write itself and enter into a never-ending loop of self-improvement, resulting in an Intelligence Explosion of ASI Machines that could then go on to explore and settle our galaxy and persist for trillions of years using the free energy from M-type red dwarf and cooling white dwarf stars. For more on that see <a href="https://softwarephysics.blogspot.com/2023/03/new-introduction-to-softwarephysics_16.html">The Singularity Has Arrived and So Now Nothing Else Matters</a> and <a href="https://softwarephysics.blogspot.com/2023/03/new-introduction-to-softwarephysics.html">Have We Run Right Past AGI and Crashed into ASI Without Even Noticing It?</a>.
<br><br>
<strong>The Characteristics of Self-Replicating Information</strong><br>All forms of self-replicating information have some common characteristics: <br><br>1. All self-replicating information evolves over time through the Darwinian processes of inheritance, innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity. <br><br>2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host. <br><br>3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host. <br><br>4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information. <br><br>5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information. <br><br>6. Most hosts are also forms of self-replicating information. <br><br>7. All self-replicating information has to be a little bit nasty in order to survive. <br><br>8. The defining characteristic of self-replicating information is its ability to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See <A href="https://softwarephysics.blogspot.com/2013/07/new-introduction-to-softwarephysics.html">Enablement - the Definitive Characteristic of Living Things</A> for more on this last characteristic. That posting discusses Stuart Kauffman's theory of Enablement in which living things are seen to exapt existing functions into new and unpredictable functions by discovering the “Adjacent Possible” of spring-loaded preadaptations. <br /><br />
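The first characteristic above is easy to see in action. Below is a toy Python sketch - my own illustration, and not part of softwarephysics proper - in which a population of self-replicating bit strings persists and adapts purely through inheritance (copying), innovation (copying errors) and natural selection (biased survival):
<br /><br />
<pre>
# A toy model of self-replicating information evolving by Darwinian means.
import random

TARGET = [1] * 20                      # an arbitrary "well-adapted" form

def fitness(genome):
    # How well a genome matches the demands of its environment
    return sum(g == t for g, t in zip(genome, TARGET))

def replicate(genome, mutation_rate=0.02):
    # Inheritance with occasional innovation from copying errors
    return [1 - g if random.random() < mutation_rate else g for g in genome]

# Start with 100 completely random genomes
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(100)]

for generation in range(200):
    # Natural selection: the fitter half replicates, the other half is culled
    population.sort(key=fitness, reverse=True)
    survivors = population[:50]
    population = survivors + [replicate(p) for p in survivors]

print("Mean fitness after 200 generations:",
      sum(fitness(p) for p in population) / len(population))  # approaches 20
</pre>
Despite the relentless copying errors playing the role of the second law of thermodynamics, the population as a whole rapidly converges on the well-adapted form and then persists there indefinitely.
<br /><br />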
Note that the self-replicating autocatalytic metabolic pathways of organic molecules, RNA and DNA have become so heavily intertwined over time that I now sometimes refer to them simply as the “genes”. For more on this see:
<br /><br />
<a href="https://softwarephysics.blogspot.com/2013/02/new-introduction-to-softwarephysics.html">A Brief History of Self-Replicating Information</a><br /><a href="https://softwarephysics.blogspot.com/2008/06/new-introduction-to-softwarephysics.html">Self-Replicating Information</a><br /><a href="https://softwarephysics.blogspot.com/2011/10/new-introduction-to-softwarephysics.html">Is Self-Replicating Information Inherently Self-Destructive?</a><br /><a href="https://softwarephysics.blogspot.com/2013/07/new-introduction-to-softwarephysics.html">Enablement - the Definitive Characteristic of Living Things</a><br /><a href="https://softwarephysics.blogspot.com/2011/04/new-introduction-to-softwarephysics.html">Is the Universe Fine-Tuned for Self-Replicating Information?</a><br /><a href="https://softwarephysics.blogspot.com/2014/06/new-introduction-to-softwarephysics.html">How to Use an Understanding of Self-Replicating Information to Avoid War</a><br /><a href="https://softwarephysics.blogspot.com/2015/09/new-introduction-to-softwarephysics.html">The Great War That Will Not End</a><br /><a href="https://softwarephysics.blogspot.com/2013/03/new-introduction-to-softwarephysics.html">How to Use Softwarephysics to Revive Memetics in Academia </a>
<br /><br />
<strong>Softwarephysics and the Real World of Human Affairs</strong><br />Having another universe readily at hand to explore, even a simulated universe like the Software Universe, necessarily has an impact on one's personal philosophy of life, and allows one to draw certain conclusions about the human condition and what it’s all about, so as you read through the postings in this blog you will stumble across a bit of my own personal philosophy - definitely still a work in progress. Along these lines you might be interested in a few postings where I try to apply softwarephysics to the real world of human affairs:
<br /><br />
<a href="https://softwarephysics.blogspot.com/2020/09/new-introduction-to-softwarephysics.html">How To Cope With the Daily Mayhem of Life in IT</a> and <a href="https://softwarephysics.blogspot.com/2015/10/new-introduction-to-softwarephysics_22.html">Don't ASAP Your Life Away</a> - How to go the distance in a 40-year IT career by dialing it all back a bit.
<br /><br />
<a href="https://softwarephysics.blogspot.com/2008/09/new-introduction-to-softwarephysics.html">MoneyPhysics</a> – my impression of the 2008 world financial meltdown.<br /><br /><a href="https://softwarephysics.blogspot.com/2008/07/intro.html">The Fundamental Problem of Everything</a> – if you Google "the fundamental problem of everything", this will be the only hit you get on the entire Internet, which is indicative of the fundamental problem of everything!
<br /><br />
<a href="https://softwarephysics.blogspot.com/2011/05/new-introduction-to-softwarephysics.html">What’s It All About?</a> and <a href="https://softwarephysics.blogspot.com/2021/09/new-introduction-to-softwarephysics_16.html">What's It All About Again?</a> – my current working hypothesis on what’s it all about.
<br /><br />
<a href="https://softwarephysics.blogspot.com/2014/06/new-introduction-to-softwarephysics.html">How to Use an Understanding of Self-Replicating Information to Avoid War</a> – my current working hypothesis for how the United States can avoid getting bogged down again in continued war in the Middle East.
<br /><br />
<a href="https://softwarephysics.blogspot.com/2014/07/new-introduction-to-softwarephysics.html">Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse</a> - a modern extension of the classic Peter Principle that applies to all hierarchical organizations and introduces the Time Invariant Peter Principle.
<br /><br />
<a href="https://softwarephysics.blogspot.com/2015/10/new-introduction-to-softwarephysics.html">The Economics of the Coming Software Singularity</a>, <a href="https://softwarephysics.blogspot.com/2015/11/new-introduction-to-softwarephysics.html">The Enduring Effects of the Obvious Hiding in Plain Sight</a> and <a href="https://softwarephysics.blogspot.com/2016/02/new-introduction-to-softwarephysics.html">The Dawn of Galactic ASI - Artificial Superintelligence</a> - my take on some of the issues that will arise for mankind as software becomes the dominant form of self-replicating information on the planet over the coming decades.
<br /><br />
<a href="https://softwarephysics.blogspot.com/2016/12/new-introduction-to-softwarephysics.html">The Continuing Adventures of Mr. Tompkins in the Software Universe</a>,
<a href="https://softwarephysics.blogspot.com/2017/02/new-introduction-to-softwarephysics.html">The Danger of Tyranny in the Age of Software</a>,
<a href="https://softwarephysics.blogspot.com/2017/03/new-introduction-to-softwarephysics.html">Cyber Civil Defense</a>, <a href="https://softwarephysics.blogspot.com/2017/04/new-introduction-to-softwarephysics.html">Oligarchiology and the Rise of Software to Predominance in the 21st Century</a> and <a href="https://softwarephysics.blogspot.com/2018/10/new-introduction-to-softwarephysics.html">Is it Finally Time to Reboot Civilization with a New Release?</a> - my worries that the world might abandon democracy in the 21st century, as software comes to predominance as the dominant form of self-replicating information on the planet.
<br /> <br />
<a href="https://softwarephysics.blogspot.com/2016/06/new-introduction-to-softwarephysics.html">Making Sense of the Absurdity of the Real World of Human Affairs
</a> - how software has aided the expansion of our less desirable tendencies in recent years.
<br /><br />
<strong>Some Specifics About These Postings</strong><br />The postings in this blog are supplemental reading for my course on softwarephysics for IT professionals entitled <em>SoftwarePhysics 101 – The Physics of Cyberspacetime</em>, which was originally designed to be taught as a series of seminars at companies where I was employed. Since softwarephysics essentially covers the simulated physics, chemistry, biology, and geology of an entire simulated universe, the slides necessarily just provide a cursory skeleton on which to expound. The postings in this blog go into much greater depth. Because each posting builds upon its predecessors, the postings in this blog should be read from the oldest to the most recent - the reverse of the order in which the blog displays them - beginning with my original posting on <a href="https://softwarephysics.blogspot.com/2006/07/softwarephysics.html">SoftwarePhysics</a>. In addition, several universities also now offer courses on Biologically Inspired Computing which cover some of the biological aspects of softwarephysics, and the online content for some of these courses can be found by Googling for "Biologically Inspired Computing" or "Natural Computing". My original plan for this blog finishes up with a purely speculative posting on <a href="https://softwarephysics.blogspot.com/2009_02_01_archive.html">CyberCosmology</a> that describes the origins of the Software Universe, cyberspacetime, software and where they all may be heading. Since <a href="https://softwarephysics.blogspot.com/2009_02_01_archive.html">CyberCosmology</a> will be purely speculative in nature, it will not be of much help to you in your IT professional capacities, but I hope that it might be a bit entertaining. Again, if you are new to softwarephysics, you really need to read the previous posts before taking on <a href="https://softwarephysics.blogspot.com/2009_02_01_archive.html">CyberCosmology</a>. I will probably continue on with some additional brief observations about softwarephysics in the future, but once you have completed <a href="https://softwarephysics.blogspot.com/2009/02/introduction-to-softwarephysics-ii.html">CyberCosmology</a>, you can truly consider yourself to be a bona fide softwarephysicist.<br /><br />For those of you following this blog, the posting dates on the posts may seem to behave in a rather bizarre manner. That is because, in order to get the <a href="https://softwarephysics.blogspot.com/">Introduction to Softwarephysics</a> listed as the first post in the context root of <a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>, I have to perform a few IT tricks. When publishing a new posting, I simply copy the contents of the <a href="https://softwarephysics.blogspot.com/">Introduction to Softwarephysics</a> to a new posting called the New Introduction to Softwarephysics. Then I update the original <a href="https://softwarephysics.blogspot.com/">Introduction to Softwarephysics</a> entry with the title and content of the new posting to be published. I then go back and take “New” out of the title of the New Introduction to Softwarephysics. This way the <a href="https://softwarephysics.blogspot.com/">Introduction to Softwarephysics</a> always appears as the first posting in the context root of <a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>. 
The side effect of all this is that the real posting date of a post is the date that appears on the post that you reach by clicking on the <font color="blue">Newer Post</font> link at the bottom left of the posting webpage.
<br /><br />
<em> SoftwarePhysics 101 – The Physics of Cyberspacetime</em> is now available on Microsoft OneDrive.
<br /><br />
<a href="https://onedrive.live.com/redir?resid=21488FF1CF19C88B!1427&authkey=!AIQL1oOkPzVRbQQ&ithint=file%2cppt">SoftwarePhysics 101 – The Physics of Cyberspacetime</a> - Original PowerPoint document
<br /><br />
<a href="https://onedrive.live.com/redir?resid=21488FF1CF19C88B!1429&authkey=!AFr_6rP1jAW484U&ithint=file%2cxlsx">Entropy </a> – A spreadsheet referenced in the document
<br /><br />
<a href="https://onedrive.live.com/redir?resid=21488ff1cf19c88b!1432&authkey=!ACPLasjpCI9PBjY&ithint=file%2ctxt">BSDE</a> – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.
<br /><br />
Comments are welcome at <a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />To see all posts on softwarephysics in reverse order go to:<br /><a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a><br /><br />Regards,<br />Steve Johnston
<br /><br />
<strong>The Self-Organizing Recursive Cosmos - Part II</strong><p style="line-height:140%; font-size:120%">
In <a href="https://softwarephysics.blogspot.com/2023/10/new-introduction-to-softwarephysics_17.html">The Self-Organizing Recursive Cosmos</a>, I discussed Lee Smolin's <em>cosmological natural selection</em> hypothesis presented in his book the <em>Life of the Cosmos</em>. <br /><br /> <em>Life of the Cosmos</em> (1997)<br /> <a href=" https://tkececi.files.wordpress.com/2009/12/the-life-of-the-cosmos.pdf">https://tkececi.files.wordpress.com/2009/12/the-life-of-the-cosmos.pdf</a>
<br /><br />
Lee Smolin's <em>cosmological natural selection</em> hypothesis attempts to explain why our very complex Universe is the way it is in terms of the Darwinian mechanisms of inheritance and innovation honed by natural selection. His hypothesis is an example of Universal Darwinism at work at a cosmological level and is certainly in tune with the propositions presented in <a href="https://softwarephysics.blogspot.com/2023/12/new-introduction-to-softwarephysics_23.html">The Law of Increasing Functional Information and the Evolution of Software</a>. But in this post, I would like to extend Lee Smolin's <em>cosmological natural selection</em> hypothesis to include some work by J. Ambjørn and Y. Watabiki as presented in Anton Petrov's YouTube video:
<br /><br />
Study Explains Dark Energy and Inflation as Parallel Universes Colliding<br />
<a href="https://www.youtube.com/watch?v=qg4PZst9a5g">https://www.youtube.com/watch?v=qg4PZst9a5g</a>
<br /><br />
and their recent paper:
<br /><br />
Is the present acceleration of the Universe caused by merging with other universes?<br />
<a href="https://iopscience.iop.org/article/10.1088/1475-7516/2023/12/011/pdf">https://iopscience.iop.org/article/10.1088/1475-7516/2023/12/011/pdf</a>
<br /><br />
The hypothesis that J. Ambjørn and Y. Watabiki present in the above paper is that the highly varying expansion rate of our Universe since the Big Bang can be mathematically explained by collisions of baby Universes with more mature Universes.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh41I5j3YnuR71B4DFeJIPTQJcD-nhkonHXvG3hKWORx9z9Pb-ulxDws8XFlb-TiWfz6qjrUQFoInXwCLmfjQUS4E5MU4F0Lo4U_cID3ZbaUTmOqRszxpZ4Irbr6Yak4eLdYhekhB8eVpBstmiQF3SYK5rqtlnmdZhbuVttRmrJMSoSfUefYLqd1g/s1200/BigBangExpansion.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="630" data-original-width="1200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh41I5j3YnuR71B4DFeJIPTQJcD-nhkonHXvG3hKWORx9z9Pb-ulxDws8XFlb-TiWfz6qjrUQFoInXwCLmfjQUS4E5MU4F0Lo4U_cID3ZbaUTmOqRszxpZ4Irbr6Yak4eLdYhekhB8eVpBstmiQF3SYK5rqtlnmdZhbuVttRmrJMSoSfUefYLqd1g/s600/BigBangExpansion.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 1 - The expansion rate of our Universe has varied greatly over the past 13.7 billion years. For example, just after the Big Bang, our Universe went through a period of Inflation that expanded the Universe by a factor of 10<sup>26</sup> in 10<sup>-32</sup> seconds! It then continued to expand at a slowing rate for about 9 billion years. Then about 5 billion years ago, the expansion rate began to increase. In the above figure, this varying rate of expansion is displayed in the curvature of the envelope surrounding our Universe. Notice the dramatic expansion of the envelope during Inflation and that after 9 billion years of expansion, the envelope is now bending upwards as the expansion rate accelerates. Click to enlarge.
</p><p style="line-height:140%; font-size:120%">
Currently, nobody knows why the expansion rate of our Universe has varied in such a dramatic fashion. We do not know what caused the dramatic Inflation that expanded the Universe by a factor of 10<sup>26</sup> in 10<sup>-32</sup> seconds or why our Universe is now expanding at a faster rate than it did 5 billion years ago. Up until 1998, cosmologists expected the expansion rate to be slowing down instead of speeding up. Some propose that roughly 70% of our Universe is composed of a dark energy that is driving the acceleration of the expansion rate, while others point to a time-varying cosmological constant in Einstein's general relativity. J. Ambjørn and Y. Watabiki propose that both of these problems can be explained by the collision of baby bubble universes colliding with more mature and larger bubble universes. But before doing that, let's review Lee Smolin's <em>cosmological natural selection</em> hypothesis that I discussed in
<a href="https://softwarephysics.blogspot.com/2023/10/new-introduction-to-softwarephysics_17.html">The Self-Organizing Recursive Cosmos</a>.
<br /><br />
<strong>Lee Smolin's <em>cosmological natural selection</em> hypothesis</strong><br />
Lee Smolin wonders why our Standard Model of particle physics is so complicated with so many "fundamental" particles arising from excited quantum fields.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZW7Iz4hViZPlcFBOmJ5B3QodUQlbIAXgabpV1OV_-OrebeuHqGowovknC65tgLbtyUrZkPY-7FgRCqTs3yI8ovfaNg1EbxVIoKxITB5xbXkq9xd2iL-tru-P4kW2rUYl04_tH_fr74A8Gv6xb9M2giG0BOoyasB6RKVS6LzlgOITOWFTwflSCCA/s685/StandardModel.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="403" data-original-width="685" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZW7Iz4hViZPlcFBOmJ5B3QodUQlbIAXgabpV1OV_-OrebeuHqGowovknC65tgLbtyUrZkPY-7FgRCqTs3yI8ovfaNg1EbxVIoKxITB5xbXkq9xd2iL-tru-P4kW2rUYl04_tH_fr74A8Gv6xb9M2giG0BOoyasB6RKVS6LzlgOITOWFTwflSCCA/s600/StandardModel.png"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 2 – The Standard Model of particle physics is composed of quantum fields that we observe as particles. The matter particles are called fermions and have a spin of ½. The force-carrying particles are called bosons and they have a spin of 1. The Higgs boson has a spin of 0 and is a scalar field. <p style="line-height:140%; font-size:120%"> If you count the antimatter particles, the Standard Model contains 12 quarks and 12 leptons. But each of the 12 quarks comes in one of three color charges - red, green or blue. That means there really are 36 different quarks. There are also really 8 gluons because gluons have a combination of two color charges (one of red, green, or blue and one of antired, antigreen, or antiblue). So that comes to 36 quarks + 12 leptons + 13 bosons = 61 particles! <br /><br /> Many other physicists are also perplexed by the complexities of the Standard Model too. There just seem to be way too many fundamental particles and forces. Many physicists would like to replace our current Standard Model with a new more fundamental model that does not have so many moving parts such as string theory.
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbiLd_ChMW-2gigszXs9DXnioNkcoCvTOwFu2V1AfQzUCfnaOAHQ7mJ3Ony2Rg2YvI0iahbxf-rYXNVeX5FrbrGMJxIgr6tC2BSFjAt10RTwFf5_1cwcqeyjZNFGiZs82V2oWrMw/s1600/periodic+table.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbiLd_ChMW-2gigszXs9DXnioNkcoCvTOwFu2V1AfQzUCfnaOAHQ7mJ3Ony2Rg2YvI0iahbxf-rYXNVeX5FrbrGMJxIgr6tC2BSFjAt10RTwFf5_1cwcqeyjZNFGiZs82V2oWrMw/s640/periodic+table.png" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 3 – Early in the 20th century, physicists were also perplexed by the very numerous fundamental elements of the Periodic Table. But atom smashers soon revealed that this very large number of fundamental elements were all simply composed of three particles - protons, neutrons and electrons. Perhaps the same could be said of the numerous particles of the Standard Model.</p><p style="line-height:140%; font-size:120%">
But Lee Smolin suggests that there might be another explanation for the complexity of our current Standard Model. Perhaps our current Standard Model is the "real deal" and its complications are the result of a long evolutionary history.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjnZaD3H4oT3OZRVz4WLdbMUg7o5hIAwitwa9w7Spgkae8D4wq0WUZ2bcBkTKsOQqakJHar44DAj6F0h-NiahRDtkYJB20lH-OPRodhgdmIRr_N0X8eIjNDdGZul2HNMT_MtdcJRtdD-iNE3prvTDxbDPa5aR2a6fu_ND4PptG9s3gqqT5O6y_g3Q/s1600/MetabolicPathways.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1129" data-original-width="1600" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjnZaD3H4oT3OZRVz4WLdbMUg7o5hIAwitwa9w7Spgkae8D4wq0WUZ2bcBkTKsOQqakJHar44DAj6F0h-NiahRDtkYJB20lH-OPRodhgdmIRr_N0X8eIjNDdGZul2HNMT_MtdcJRtdD-iNE3prvTDxbDPa5aR2a6fu_ND4PptG9s3gqqT5O6y_g3Q/s600/MetabolicPathways.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 4 – Above is a simplified flowchart of the metabolic pathways used by carbon-based life. Notice that it too is a fine-tuned mess that seems to work nonetheless. However, in this case, we do know that carbon-based life actually is a fine-tuned mess that works superbly despite all of its complexity. Biologists had to come up with some mechanism to explain how such a fine-tuned mess came to be and they finally did so with Darwinian thought. </p><p style="line-height:140%; font-size:120%"> Living things are incredible examples of highly improbable fine-tuned systems, and this fine-tuning was accomplished via the Darwinian mechanisms of inheritance and innovation honed by natural selection. Along these lines, Lee Smolin proposes that when black holes collapse they produce a white hole in another universe, and the white hole is observed in the new universe as a Big Bang. He also proposes that the physics in the new universe would essentially be the same as the physics in the parent universe, but with the possibility for slight variations to arise when a black hole reaches the Planck density. In this view, the seemingly fixed and immutable laws and constants of our Universe are no more so than the fixed and immutable species of a pre-Darwinian biosphere. For Lee Smolin, a universe that had the physics that was good at creating black holes would tend to outproduce universes that did not. Thus, a selection pressure would arise that selected for universes that had a physics that was good at making black holes, and so, a kind of Darwinian natural selection would occur in the Cosmic Landscape of the Multiverse. Over an infinite amount of time, the universes that were good at making black holes would be found to dominate the Cosmic Landscape. He calls this effect <em>cosmological natural selection</em>.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTr8E1xYJwq6YSRGw4vgPb5MZBsjI9NYdCjybMuH-6XB6xi3vh3LZjFYrZRVtl8ynGe-9JNepRmHgLeFwn1SDeA9kBtYJM5cq3clY0tiIDy_YOULFxmWfUqpMZZo09zAu2YpdKQA/s1200/BlackHoleBigBang.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="630" data-original-width="1200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTr8E1xYJwq6YSRGw4vgPb5MZBsjI9NYdCjybMuH-6XB6xi3vh3LZjFYrZRVtl8ynGe-9JNepRmHgLeFwn1SDeA9kBtYJM5cq3clY0tiIDy_YOULFxmWfUqpMZZo09zAu2YpdKQA/s600/BlackHoleBigBang.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 5 - In Lee Smolin's the <em>Life of the Cosmos</em> he proposes that the black holes of one universe puncture the spacetime of the universe, causing white holes to appear in new universes.</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAdSF-IYS9bc1pH298s6bJo4kh8JYrc51q0sjuvm-0meVGG0pqlOIihgvsmcy_C8IH5MiNDNxklFUT2sBl56ZRiYl-SkqZx-X3CeTR5un8XxQrW8jyQ9B93Zg2o1tR_tIR9pRbzAO-7wAfsUABLEa1zmAc0wKFtMg_Erpmu9e6BysP1lEiUiNpaw/s644/BubbleUniverses.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="378" data-original-width="644" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAdSF-IYS9bc1pH298s6bJo4kh8JYrc51q0sjuvm-0meVGG0pqlOIihgvsmcy_C8IH5MiNDNxklFUT2sBl56ZRiYl-SkqZx-X3CeTR5un8XxQrW8jyQ9B93Zg2o1tR_tIR9pRbzAO-7wAfsUABLEa1zmAc0wKFtMg_Erpmu9e6BysP1lEiUiNpaw/s600/BubbleUniverses.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 6 – As the white holes expand.
<p style="line-height:140%; font-size:120%">
<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzAjHFUQzYaSVymRqeqxRunBvuG8GMYrtPub_99nCrFpMDNyAqUtPnE6AYT7BJaGKBD0RlSk4oRY26Wv0bW2uAsbhsETPvdtBkLbcm6nCBQABIJyD5dYDfcln3C4cH5V0JAJWaYA/s1600/BabyUniverse.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 159px; height: 400px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzAjHFUQzYaSVymRqeqxRunBvuG8GMYrtPub_99nCrFpMDNyAqUtPnE6AYT7BJaGKBD0RlSk4oRY26Wv0bW2uAsbhsETPvdtBkLbcm6nCBQABIJyD5dYDfcln3C4cH5V0JAJWaYA/s400/BabyUniverse.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5599680066359343698" /></a>
<p style="text-align: center; line-height:140%; font-size:120%">Figure 7 – They eventually pinch off to form new baby Universes.
</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEievgsGJEIX1dHqpxCrl9t0BkDyg89OxBI-g0VtGv82K1ODjZn-63GhtuoiUECHenvBTDEw04tfHT9UkpPKu5KrEvFQ44R-9FzIxepekTWbwUxE23libnJp02sXhGiyVZz-qNMC-JSRUjO8hhqYPi2JfBzMvQmrEQ9-L9iEBI1cz4RmFhhWEh_EUw/s800/R9800194.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="800" data-original-width="800" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEievgsGJEIX1dHqpxCrl9t0BkDyg89OxBI-g0VtGv82K1ODjZn-63GhtuoiUECHenvBTDEw04tfHT9UkpPKu5KrEvFQ44R-9FzIxepekTWbwUxE23libnJp02sXhGiyVZz-qNMC-JSRUjO8hhqYPi2JfBzMvQmrEQ9-L9iEBI1cz4RmFhhWEh_EUw/s600/R9800194.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 8 – Forming an eternal Multiverse of bubble Universes of varying ages and sizes that self-replicate like a never-ending recursive function in a program that keeps calling itself over and over.
<p style="line-height:140%; font-size:120%">
In their paper, J. Ambjørn and Y. Watabiki propose that the dramatic expansion of our Universe after the Big Bang, during the very brief period of Inflation, resulted when our baby Universe bubble collided with a much larger and older bubble Universe. This collision splattered our baby Universe over the surface of the much larger bubble like a bug hitting the windshield of a much larger car. Our bubble Universe then continued to expand, but much more slowly, carried along by the expansion of the much larger bubble Universe that we had just joined, an expansion further driven by the huge number of other baby bubble universes that the larger bubble continued to scoop up and absorb.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLfFaxBkCvYaH_fAeZLm4r7CIP3rQbrkopEV_PsBn6w2gjMYAIccqzIvrmtq79g_Xx90P9eVEQGYyZNqh_zKJdWl45nytk2fQaisKPIrZSeV2YzIQT0ZFNwSn68y81mE0TdCFsZCZ8CKEWQTUsOHhm8Fr58Wk0Dk2S2OojgtavIA3hU3hIWuKA7A/s547/Bubbles.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="410" data-original-width="547" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLfFaxBkCvYaH_fAeZLm4r7CIP3rQbrkopEV_PsBn6w2gjMYAIccqzIvrmtq79g_Xx90P9eVEQGYyZNqh_zKJdWl45nytk2fQaisKPIrZSeV2YzIQT0ZFNwSn68y81mE0TdCFsZCZ8CKEWQTUsOHhm8Fr58Wk0Dk2S2OojgtavIA3hU3hIWuKA7A/s600/Bubbles.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 9 – As a bubble universe collides with other bubble universes it continues to expand. The expansion rate is determined by the rate of absorption of other bubble universes.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAC15mQLVah-cP9NJrs00pcRHgaCv3vOg8QrlW6aZgYzLZvk2IULD7HZ_WuOB3tRC95QZF_mVTtn9XJNI0rgAB7jsYqgm2XgB6vaTlvb0ctM6HYe_M-LDGsyGgJNq0hsKfzak-V9xpbdvoVXDjw2-xpFe1abwdJkb047K2jL0gxsng5E4nl6VaVQ/s1300/Bubbles2.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="866" data-original-width="1300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAC15mQLVah-cP9NJrs00pcRHgaCv3vOg8QrlW6aZgYzLZvk2IULD7HZ_WuOB3tRC95QZF_mVTtn9XJNI0rgAB7jsYqgm2XgB6vaTlvb0ctM6HYe_M-LDGsyGgJNq0hsKfzak-V9xpbdvoVXDjw2-xpFe1abwdJkb047K2jL0gxsng5E4nl6VaVQ/s600/Bubbles2.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 10 – The end result is an eternal Multiverse that has always existed and is composed of an infinite number of bubble universes of varying ages and sizes.
<p style="line-height:140%; font-size:120%">
Now according to the <em>cosmological natural selection</em> hypothesis, each bubble universe inherits the physics of its parent universe, possibly with some slight mutations. In this new model, perhaps only bubble universes with the same or very similar physics can merge. If that were the case, perhaps our bubble Universe might look more like a sample of granite formed from rock-forming silicate minerals.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjqLlapG0juM7dV6C2-OIvvbRnqMhM3JJoN4yVR349sU9rfDgRZrLPDgdD3JIPDP9jl64RDFJAzSSUpYaFilLN0zg28X1aQNwnyjzYkCNJVUUjcBLCAHWNUYeuvVN9MIPrj9LkmtUNYleLFkJEXZThOjGx8EQoKbREb_Gx5Fk18ORnPQFYftvM=s1167" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="725" data-original-width="1167" src="https://blogger.googleusercontent.com/img/a/AVvXsEjqLlapG0juM7dV6C2-OIvvbRnqMhM3JJoN4yVR349sU9rfDgRZrLPDgdD3JIPDP9jl64RDFJAzSSUpYaFilLN0zg28X1aQNwnyjzYkCNJVUUjcBLCAHWNUYeuvVN9MIPrj9LkmtUNYleLFkJEXZThOjGx8EQoKbREb_Gx5Fk18ORnPQFYftvM=s600"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 11 – When granitic magma cools it forms granite composed of silicate minerals formed from silica tetrahedrons with a net charge of -4 mixed with various amounts of positive K<sup>+</sup>, Na<sup>+</sup>, Ca<sup>++</sup>, Mg<sup>++</sup>, Fe<sup>++</sup>, Al<sup>+++</sup> and Fe<sup>+++</sup> cations in rock-forming minerals.</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhmXaW6ZAwMnNW0NyoBErViHG-P_G79AadjvhAfdJ7dRZohC4HcZvORxX0pwCYSMrfLeqWx8C6Ia6cKDqz7Tiz568mt8x74rO0jDZBUzlZrBcvNIPVsOP9d6QS8suKe74Ev2DzpGmiedxctyyseOeMdC9K_4pw9aDi8Kmz2gLEsXbBlZ9rfACo=s960" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="720" data-original-width="960" src="https://blogger.googleusercontent.com/img/a/AVvXsEhmXaW6ZAwMnNW0NyoBErViHG-P_G79AadjvhAfdJ7dRZohC4HcZvORxX0pwCYSMrfLeqWx8C6Ia6cKDqz7Tiz568mt8x74rO0jDZBUzlZrBcvNIPVsOP9d6QS8suKe74Ev2DzpGmiedxctyyseOeMdC9K_4pw9aDi8Kmz2gLEsXbBlZ9rfACo=s600"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 12 – There are many ways to chain silica tetrahedrons together to form rock-forming minerals. They can form chains, double chains, sheets and 3D-networks. The grains of silica sand are composed of the mineral quartz which is a very tough 3D-network of pure silica tetrahedrons. Other silicate minerals mix in positive cations amongst the silica tetrahedrons to neutralize the negative charge of the silica tetrahedrons.
</p><p style="line-height:140%; font-size:120%">
As you move through the granitic matrix of a granite rock sample, the mineral chemistries vary slightly from grain to grain, but the rock still holds together. Perhaps the Multiverse has a similar texture of crystals of similar, but slightly varying, physics.
Our Universe could then be deeply buried in one of these crystals. We would never be able to see any of the other grains beyond our own bubble universe because they would all lie far beyond our Hubble radius.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWPFG3Aqm6X9l3FWl3GOHNXc5gJoJmWm74VTtD6zGb-yrLuSsf96D5Jgwnxyjt0Y8IC9FZ5KixcvQ4YpLyvtdGLMHLvwzKQRu0xDGFx232RfOpHAGPrKE4L9geYZO1f0Q8wNWMJhW-vmRA-oz3JuSfU3VXOdRHesYfWA0ixl92DWLYBuZuwgN6nQ/s602/HubbleRadius.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="511" data-original-width="602" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWPFG3Aqm6X9l3FWl3GOHNXc5gJoJmWm74VTtD6zGb-yrLuSsf96D5Jgwnxyjt0Y8IC9FZ5KixcvQ4YpLyvtdGLMHLvwzKQRu0xDGFx232RfOpHAGPrKE4L9geYZO1f0Q8wNWMJhW-vmRA-oz3JuSfU3VXOdRHesYfWA0ixl92DWLYBuZuwgN6nQ/s600/HubbleRadius.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 13 – The "Observable Universe" is now about 46 billion light years away from us. But the current Hubble radius is now only about 13.7 - 14.5 billion light years away. The current Hubble radius is where spacetime is currently expanding away from us faster than the speed of light. So the Hubble radius is the furthest distance that we could now reach traveling at the speed of light and the light from objects beyond the current Hubble radius will never reach us. <p style="line-height:140%; font-size:120%">
So, like all of the other hypotheses that attempt to explain the evolutionary history of the expansion of our Universe, this new model also lacks confirming observational data. The best that we may be able to do is to create hypotheses that explain all of our current observations, do not contradict themselves and do not contradict the known physics of the day.
But if this new model is even somewhat true, then we may be faced with trying to unravel the details of a bug splattered on a windshield.
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcqLc7jw6eus_oaIWnKM67sjGEsW7ltR3op-Dt6GJ0Mi1VgSgVlIIn1yySUCl1RizThVMw4dlrkFsCuSOafRCmebBC5EWoEOoClftpGJI6jSHvcSFcrrso0VIaHrjpy9g2iulctA/s1600/planckcmblarge.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcqLc7jw6eus_oaIWnKM67sjGEsW7ltR3op-Dt6GJ0Mi1VgSgVlIIn1yySUCl1RizThVMw4dlrkFsCuSOafRCmebBC5EWoEOoClftpGJI6jSHvcSFcrrso0VIaHrjpy9g2iulctA/s400/planckcmblarge.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 14 - The cosmic microwave background may be all that is left of a bug splattered on a windshield.
</p><p style="line-height:140%; font-size:120%">
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
<strong>The Law of Increasing Functional Information and the Evolution of Software</strong><p style="line-height:140%; font-size:120%">
In this post, I would like to discuss a possible new physical law of the Universe that should be of interest to all who have been following this blog on softwarephysics because it generalizes the concept of Universal Darwinism. It is called the Law of Increasing Functional Information and is outlined in the paper below:
<br /><br />
On the roles of function and selection in evolving systems<br />
<a href="https://www.pnas.org/doi/epdf/10.1073/pnas.2310223120">https://www.pnas.org/doi/epdf/10.1073/pnas.2310223120</a>
<br /><br />
and is also described in the YouTube video below:
<br /><br />
Robert M. Hazen, PhD - The Missing Law: Is There a Universal Law of Evolving Systems?<br />
<a href="https://youtu.be/TrNf62IGqM8?t=2114">https://youtu.be/TrNf62IGqM8?t=2114</a>
<br /><br />
The paper explains that four branches of classical physics explain nearly all of the phenomena of our everyday lives:
<br /><br />
1. Newton's laws of motion<br />
2. Newton's law of gravitation<br />
3. The classical electrodynamics of Maxwell and many others<br />
4. The first and second laws of thermodynamics of James Prescott Joule and Rudolf Clausius
<br /><br />
The authors explain that all of the above laws were discovered by the empirical recognition of the "conceptual equivalences" of several seemingly unrelated physical phenomena. For example, Newton's laws of motion arose from the recognition that the uniform motion of a body along a straight line and at a constant speed was conceptually equivalent to the accelerated motion of a body that is changing speed or direction if the concepts of mass, force and acceleration were related by three physical laws of motion. Similarly, an apple falling from a tree and the Moon constantly falling to the Earth in an elliptical orbit were also made to be conceptually equivalent by Newton's law of gravitation. Later, the many disparate phenomena of electricity and magnetism were made conceptually equivalent by means of Maxwell's equations. Finally, the many phenomena of kinetic, potential and heat energy were made conceptually equivalent by means of the first and second laws of thermodynamics.
<br /><br />
The authors then go on to wonder if there is a similar conceptual equivalence for the many seemingly disparate systems that seem to evolve over time such as stars, atomic nuclei, minerals and living things. As a softwarephysicist, I would add software to that list as well. Is it possible that we have overlooked a fundamental physical law of the Universe that could explain the nature of all evolving systems? Or do evolving systems simply arise as emergent phenomena from the
four branches of classical physics outlined above? The authors point out that the very low-entropy Universe immediately following the Big Bang could have taken a direct path to a very high-entropy, patternless Universe without producing any complexity within it at all, while still meticulously following all of the above laws of classical physics. But that is not what happened to our Universe. Something got in the way of a smooth flow of free energy dissipating from low to high entropy, like the disruption caused by many large rocks in a mountain stream, allowing complex evolving systems far from thermodynamic equilibrium to form and persist.
<br /><br />
Some have tried to lump all such evolving systems under the guise of Universal Darwinism. In this view, the Darwinian processes of inheritance, innovation and natural selection explain it all in terms of the current laws of classical physics. But is that true? Are we missing something? The authors propose that we are, because all evolving systems seem to be conceptually equivalent in three important ways, and that suggests that there might exist a new underlying physical law guiding them all.
<br /><br />
<font color="blue">
1. Each system is formed from numerous interacting units (e.g., nuclear particles, chemical elements, organic molecules, or cells) that result in combinatorially large numbers of possible configurations.<br />
2. In each of these systems, ongoing processes generate large numbers of different configurations.<br />
3. Some configurations, by virtue of their stability or other “competitive” advantage, are more likely to persist owing to selection for function.
</font>
<br /><br />
The above is certainly true of software source code, which consists of a huge number of interacting symbols that can be combined into a very large number of possible configurations to produce programs.
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSTkQQkJNvbced7o_c0NHsgm7CeMSMUcDWV3D-0fP9H0biBZAFFKbP_UQNsab7zPZvyvykGeHAV1l6VzpbiwRtoqvqKRTxdOASLfBlX0pgG94BePiKQV5nWYFzAC7kUwUi3vxTDA/s1600/C-Average.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="450" data-original-height="156" data-original-width="225"
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSTkQQkJNvbced7o_c0NHsgm7CeMSMUcDWV3D-0fP9H0biBZAFFKbP_UQNsab7zPZvyvykGeHAV1l6VzpbiwRtoqvqKRTxdOASLfBlX0pgG94BePiKQV5nWYFzAC7kUwUi3vxTDA/s400/C-Average.jpg" /></a>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 1 – Source code for a C program that calculates an average of several numbers entered at the keyboard.
<p style="line-height:140%; font-size:120%">
There are also millions of programmers, and now LLMs (Large Language Models) like Google Bard or OpenAI GPT-4, that have been generating these configurations over the past 82 years, or 2.6 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941. And as programmers work on these very large configurations, they are constantly discarding "buggy" configurations that do not work. But even software that "works" just fine can easily become extinct when better configurations of the software evolve. Take, for example, the extinction of VisiCalc by Lotus 1-2-3 and finally by Microsoft Excel in the 1980s.
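<br /><br />
To see just how lopsided this winnowing is, here is a minimal Python sketch of my own, using a purely hypothetical toy "program", that randomly mutates a working arithmetic expression and counts how often the mutant still performs its function. Nearly all single-character edits produce a buggy configuration that selection must discard:
<br /><br />
<pre>
import random
import string

# A toy "program": an arithmetic expression that should evaluate to 42.
SOURCE = "6 * (3 + 4)"
# The pool of symbols that a careless one-character edit might type instead.
ALPHABET = string.digits + "+-*/() "

def mutate(src):
    """Replace one randomly chosen character, mimicking a single coding error."""
    i = random.randrange(len(src))
    return src[:i] + random.choice(ALPHABET) + src[i + 1:]

random.seed(1941)  # the year Konrad Zuse cranked up his Z3
trials = 100_000
still_working = 0
for _ in range(trials):
    candidate = mutate(SOURCE)
    try:
        if eval(candidate) == 42:   # selection for function
            still_working += 1
    except Exception:               # SyntaxError, ZeroDivisionError... all "bugs"
        pass

print(f"Fraction of one-character mutants that still work: {still_working / trials:.4f}")
</pre>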
<br /><br />
<font color="blue">
These three characteristics - component diversity, configurational exploration, and selection - which we conjecture represent conceptual equivalences for all evolving natural systems, may be sufficient to articulate a qualitative law-like statement that is not implicit in the classical laws of physics. In all instances, evolution is a process by which configurations with a greater degree of function are preferentially selected, while nonfunctional configurations are winnowed out. We conclude:
<br /><br /><strong>
Systems of many interacting agents display an increase in diversity, distribution, and/or patterned behavior when numerous configurations of the system are subject to selective pressure.
</strong>
<br /><br />
However, is there a universal basis for selection? And is there a more quantitative formalism underlying this conjectured conceptual equivalence - a formalism rooted in the transfer of information? We elaborate on these questions here and argue that the answer to both questions is yes.
</font>
<br /><br />
The authors then go on to propose their new physical law:
<br /><br />
<font color="blue">
The Law of Increasing Functional Information:<br />
<strong>
The Functional Information of a system will increase (i.e., the system will evolve) if many different configurations of the system are subjected to selection for one or more functions.
</strong></font>
<br /><br />
In their view, all evolving systems can be seen to be conceptually equivalent in terms of a universal driving force of increasing Functional Information in action by means of a Law of Increasing Functional Information. In many previous softwarephysics posts, I have covered the pivotal role that self-replicating information has played in the history of our Universe and also in the evolution of software. For more on that see <a href="https://softwarephysics.blogspot.com/2013/02/new-introduction-to-softwarephysics.html">A Brief History of Self-Replicating Information</a>. But under the more generalized Law of Increasing Functional Information, self-replicating information becomes just a subcategory of the grander concept of Functional Information.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYaiB47nMuooF0rTF3nq1Ueb4pNf8R33aU26WPx7r54I6ZP3puyMmzRtyj1bkiaIyxvL2tVguMRNp7J_jmZpyVdV6jfhREgLlgls_IBdc2K60e4k_5MxHNvVnXdJ7-DqymjjpBrD22ky_NJivWPkY-zzRz_BkYoKp5MdS_IZQJ3oaTZ_RZsZqj6A/s225/FunctionalInformation.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="350" data-original-height="156" data-original-width="225" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgYaiB47nMuooF0rTF3nq1Ueb4pNf8R33aU26WPx7r54I6ZP3puyMmzRtyj1bkiaIyxvL2tVguMRNp7J_jmZpyVdV6jfhREgLlgls_IBdc2K60e4k_5MxHNvVnXdJ7-DqymjjpBrD22ky_NJivWPkY-zzRz_BkYoKp5MdS_IZQJ3oaTZ_RZsZqj6A/s600/FunctionalInformation.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 2 – <font color="blue">
Imagine a pile of DNA, RNA or protein molecules of all possible sequences, sorted by activity with the most active at the top. A horizontal plane through the pile indicates a given level of activity; as this rises, fewer sequences remain above it. The Functional Information required to specify that activity is -log<sub>2</sub> of the fraction of sequences above the plane. Expressing this fraction in terms of information provides a straightforward, quantitative measure of the difficulty of a task.</font>
<p style="line-height:140%; font-size:120%">
But what is Functional Information? Basically, Functional Information is information that can do things. It has agency. The concept of Functional Information was first introduced in a one-page paper:
<br /><br />
Functional Information: Molecular messages<br />
<a href="https://www.nature.com/articles/423689a">https://www.nature.com/articles/423689a</a>
<br /><br />
and basically builds on Leon Brillouin's 1953 concept of information as a form of negative entropy, which he abbreviated as "negentropy", but with a slight twist. The above paper introduces the concept of Functional Information as:
<br /><br />
<font color="blue">
By analogy with classical information, Functional Information is simply -log<sub>2</sub> of the probability that a random sequence will encode a molecule with greater than any given degree of function. For RNA sequences of length n, that fraction could vary from 4<sup>-n</sup> if only a single sequence is active, to 1 if all sequences are active. The corresponding functional information content would vary from 2n (the amount needed to specify a given random RNA sequence) to 0 bits. As an example, the probability that a random RNA sequence of 70 nucleotides will bind ATP with
micromolar affinity has been experimentally determined to be about 10<sup>-11</sup>. This corresponds to a functional information content of about 37 bits, compared with 140 bits to specify a unique 70-mer sequence. If there are multiple sequences with a given activity, then the corresponding Functional Information will always be less than the amount of information required to specify any particular sequence. It is important to note that Functional Information is not a property of any one molecule, but of the ensemble of all possible sequences, ranked by activity.
<br /><br />
Imagine a pile of DNA, RNA or protein molecules of all possible sequences, sorted by activity with the most active at the top. A horizontal plane through the pile indicates a given level of activity; as this rises, fewer sequences remain above it. The Functional Information required to specify that activity is -log<sub>2</sub> of the fraction of sequences above the plane. Expressing this fraction in terms of information provides a straightforward, quantitative measure of the difficulty of a task. More information is required to specify molecules that carry out difficult tasks, such as high-affinity binding or the rapid catalysis of chemical reactions with high energy barriers, than is needed to specify weak binders or slow catalysts. But precisely how much more Functional Information is required to specify a given increase in activity is unknown. If the mechanisms involved in improving activity are similar over a wide range of activities, then power-law behaviour might be expected. Alternatively, if it becomes progressively harder to improve activity as activity increases, then exponential behaviour may be seen. An interesting question is whether the relationship between Functional Information and activity will be similar in many different systems, suggesting that common principles are at work, or whether each case will be unique.
</font>
<br /><br />
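As a quick arithmetic check on the numbers quoted above: a probability of 10<sup>-11</sup> corresponds to a Functional Information of -log<sub>2</sub>(10<sup>-11</sup>) = 11 log<sub>2</sub>(10) ≈ 11 x 3.32 ≈ 37 bits, while specifying one unique 70-mer requires 2 x 70 = 140 bits because each of the 4 possible bases carries log<sub>2</sub>(4) = 2 bits of Information.
<br /><br />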
Indeed, any programmer could also imagine a similar pile of programs consisting of all possible sequences of source code with the buggiest versions at the bottom. When you reach the level of the intersecting plane, you finally reach those versions of source code that produce a program that actually provides the desired function. However, many of those programs that actually worked might be very inefficient or hard to maintain because of a sloppy coding style. As you move higher in the pile, the number of versions decreases but these versions produce the desired function more efficiently or are composed of cleaner code. As outlined above, the Functional Information required to specify such a software activity is -log<sub>2</sub> of the fraction of source code programs above the plane.
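<br /><br />
To make this concrete, here is a minimal Python sketch of the calculation, assuming a purely hypothetical ensemble of 10,000 candidate programs that have each been scored by some test suite on how well they perform the desired function. None of this comes from the paper itself; it just makes the definition tangible for programmers:
<br /><br />
<pre>
import math

# A purely hypothetical ensemble of 10,000 candidate programs, each scored
# by some test suite from 0.0 (hopelessly buggy) to 1.0 (performs the task).
scores = [0.0] * 9_000 + [0.2] * 900 + [0.8] * 90 + [1.0] * 10

def functional_information(scores, activity_level):
    """-log2 of the fraction of configurations at or above a given activity."""
    n_above = sum(1 for s in scores if s >= activity_level)
    return -math.log2(n_above / len(scores))

# Raising the intersecting plane of Figure 2 leaves fewer programs above it,
# so the Functional Information needed to specify that activity increases.
for level in (0.2, 0.8, 1.0):
    print(f"activity >= {level}: {functional_information(scores, level):.2f} bits")
</pre>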
<br /><br />
<strong>The Softwarephysics of it All</strong><br />
Before going on to explain how the Law of Increasing Functional Information has affected the evolution of software over the past 2.6 billion seconds, let me tell you a bit about the origin of softwarephysics. I started programming in 1972 and finished up my B.S. in Physics at the University of Illinois at Urbana in 1973. I then headed up north to complete an M.S. in Geophysics at the University of Wisconsin at Madison. From 1975 – 1979, I was an exploration geophysicist exploring for oil, first with Shell, and then with Amoco. Then in 1979, I made a career change to become an IT professional. One very scary Monday morning, I was conducted to my new office cubicle in Amoco’s IT department, and I immediately found myself surrounded by a large number of very strange IT people, all scurrying about in a near state of panic, like the characters in <em>Alice in Wonderland</em>. Suddenly, it seemed like I was trapped in a frantic computer simulation, buried in punch card decks and fan-fold listings. After nearly 40 years in the IT departments of several major corporations, I can now state with confidence that most corporate IT departments can best be described as “frantic” in nature. This new IT job was a totally alien experience for me, and I immediately thought that I had just made a very dreadful mistake because I soon learned that being an IT professional was a lot harder than being an exploration geophysicist.
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizA9ezSClQKpMIJ3cyDhnbQkzoQI0xJ1G8sFLqtcaDI0d2xyB64rNPz4EP2SNrNnn1rEE1w9zpC8Zx5zwCukuhaipIgBigcTWZ-RRLK2aNq4viIVvRBS4VbmhzlmHB1knuEh-rbg/s1600/jetsons05.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizA9ezSClQKpMIJ3cyDhnbQkzoQI0xJ1G8sFLqtcaDI0d2xyB64rNPz4EP2SNrNnn1rEE1w9zpC8Zx5zwCukuhaipIgBigcTWZ-RRLK2aNq4viIVvRBS4VbmhzlmHB1knuEh-rbg/s640/jetsons05.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 3 - As depicted back in 1962, George Jetson was a computer engineer in the year 2062, who had a full-time job working 3 hours a day, 3 days a week, pushing the same buttons that I pushed for 40 years as an IT professional. </p><p style="line-height:140%; font-size:120%">But it was not supposed to be that way. As a teenager growing up in the 1960s, I was led to believe that in the 21st century, I would be leading the life of George Jetson, a computer engineer in the year 2062, who had a full-time job working 3 hours a day, 3 days a week, pushing buttons. But as a newly minted IT professional, I quickly learned that all you had to do was push the right buttons, in the right sequence, at the right time, and with zero errors. How hard could that be? Well, it turned out to be very difficult indeed!
<br /><br />
To try to get myself out of this mess, I figured that if you could apply physics to geology; why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse, to better understand the behavior of commercial software by comparing software to how things behaved in the physical Universe. Softwarephysics depicts software as a virtual substance and relies on our understanding of the current theories in physics, chemistry, biology, and geology to help us model the nature of software behavior. So in physics, we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software.
<br /><br />
After a few months on the job, I began to suspect that, from the perspective of statistical mechanics, the second law of thermodynamics was largely to blame. I was always searching for the small population of programs that could perform a given function out of a nearly infinite population of programs that could not. It reminded me very much of Boltzmann's concept of entropy in statistical mechanics. The relatively few functional programs that I was searching for had a very low entropy relative to the vast population of buggy programs that did not work. Worse yet, it seemed as though the second law of thermodynamics was constantly trying to destroy my programs whenever I did maintenance on them, inserting new bugs whenever I changed my code. There were nearly an infinite number of ways to do it wrong and only a very few ways to do it right. But I am getting ahead of myself. To better understand all of this, please take note of the following thought experiment.
<br /> <br /> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9KJBWWZO68ZiOrS34zXqGUsjqMepgAlI09SWd4YvKJBpi5Eu7UlMn505It168pejAongBDBzJ0A25FxS6wD-uCWFRurdtdp130Uxe86DCmAvCRpq4lsK0uKMBJ2sHX68Bb6uBog/s1600-h/no_hole.JPG"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh9KJBWWZO68ZiOrS34zXqGUsjqMepgAlI09SWd4YvKJBpi5Eu7UlMn505It168pejAongBDBzJ0A25FxS6wD-uCWFRurdtdp130Uxe86DCmAvCRpq4lsK0uKMBJ2sHX68Bb6uBog/s320/no_hole.JPG" border="0" alt=""id="BLOGGER_PHOTO_ID_5133603685171969250" /></a><p style="text-align: center; line-height:140%; font-size:120%">Figure 4 - We begin with a left compartment containing cold slow-moving nitrogen molecules (white circles) and a right compartment with hot fast-moving nitrogen molecules (black circles). <p style="line-height:140%; font-size:120%"> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgg3jvP4gqNPXhXN-qAWPlheS53YwlbbgpPNRliHT4wRaIkRfhDraeFvg47KHFasJPutSWpwMOTfr3ZGh4JcZrEGJ9jAOdmDAVKlRJRFz1bAZ4G4rAAWMWcMustAaPQcsYAyA5I4w/s1600-h/unmixed.JPG"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgg3jvP4gqNPXhXN-qAWPlheS53YwlbbgpPNRliHT4wRaIkRfhDraeFvg47KHFasJPutSWpwMOTfr3ZGh4JcZrEGJ9jAOdmDAVKlRJRFz1bAZ4G4rAAWMWcMustAaPQcsYAyA5I4w/s320/unmixed.JPG" border="0" alt=""id="BLOGGER_PHOTO_ID_5133604037359287538" /></a> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 5 - Next, we perforate the divider between the compartments and allow the hot and cold nitrogen molecules to bounce off each other and exchange energies. <p style="line-height:140%; font-size:120%"> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEio6EVfFpF7t97dX4xbawxiFjC2tr7626_dZMJTo3jkVYTWIct8_Glr-pPk5Xqd1z9wFxaRHNJQBPIIccWxNxOfQBnVVTMpuZF-orUlbW_DcVBAhnGsC95SY4US3IYAXj8xQpMoww/s1600-h/mixed.JPG"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEio6EVfFpF7t97dX4xbawxiFjC2tr7626_dZMJTo3jkVYTWIct8_Glr-pPk5Xqd1z9wFxaRHNJQBPIIccWxNxOfQBnVVTMpuZF-orUlbW_DcVBAhnGsC95SY4US3IYAXj8xQpMoww/s320/mixed.JPG" border="0" alt=""id="BLOGGER_PHOTO_ID_5133605420338756882" /></a> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 6 - After a period of time the two compartments will equilibrate to the same average temperature, but we will always find some nitrogen molecules bouncing around faster (black dots) and some nitrogen molecules bouncing around slower (white dots) than the average. <p style="line-height:140%; font-size:120%"> Recall that in 1738 Bernoulli proposed that gasses were really composed of a very large number of molecules bouncing around in all directions. Gas pressure in a cylinder was simply the result of a huge number of molecular impacts from individual gas molecules striking the walls of a cylinder, and heat was just a measure of the kinetic energy of the molecules bouncing around in the cylinder. In 1859, physicist James Clerk Maxwell took Bernoulli’s idea one step further. He combined Bernoulli’s idea of a gas being composed of a large number of molecules with the new mathematics of statistics. Maxwell reasoned that the molecules in a gas would not all have the same velocities. 
Instead, there would be a distribution of velocities; some molecules would move very quickly while others would move more slowly, with most molecules having a velocity around some average velocity. Now imagine that the two compartments in Figure 4 are filled with nitrogen gas, but that the left compartment is filled with cold slow-moving nitrogen molecules (white dots), while the right compartment is filled with hot fast-moving nitrogen molecules (black dots). If we perforate the partition between compartments, as in Figure 5 above, we will observe that the fast-moving hot molecules on the right will mix with and collide with the slow-moving cold molecules on the left and will give up kinetic energy to the slow-moving molecules. Eventually, both compartments will be found to be at the same temperature as shown in Figure 6, but we will always find some molecules moving faster than the average (black dots), and some molecules moving slower than the average (white dots) just as Maxwell had determined. This is called a state of thermal equilibrium and demonstrates a thermal entropy increase. We never observe a gas in thermal equilibrium suddenly dividing itself into hot and cold compartments all by itself. The gas can go from Figure 5 to Figure 6 but never the reverse because such a process would also be a violation of the second law of thermodynamics. <br /> <br /> In 1867, Maxwell proposed a paradox along these lines known as Maxwell’s Demon. Imagine that we place a small demon at the opening between the two compartments and install a small trap door at this location. We instruct the demon to open the trap door whenever he sees a fast-moving molecule in the left compartment approach the opening to allow the fast-moving molecule to enter the right compartment. Similarly, when he sees a slow-moving molecule from the right compartment approach the opening, he opens the trap door to allow the low-temperature molecule to enter the left compartment. After some period of time, we will find that all of the fast-moving high-temperature molecules are in the right compartment and all of the slow-moving low-temperature molecules are in the left compartment. Thus the left compartment will become colder and the right compartment will become hotter in violation of the second law of thermodynamics (the gas would go from Figure 6 to Figure 5 above). With the aid of such a demon, we could run a heat engine between the two compartments to extract mechanical energy from the right compartment containing the hot gas as we dumped heat into the colder left compartment. This really bothered Maxwell, and he never found a satisfactory solution to his paradox. This paradox also did not help 19th-century physicists become more comfortable with the idea of atoms and molecules. <br /> <br /> In 1929, Leo Szilárd became an instructor and researcher at the University of Berlin. There he published a paper, <em>On the Decrease of Entropy in a Thermodynamic System by the Intervention of Intelligent Beings</em> in 1929. 
<br /> <br /> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZMozV1y7N0z4_a7yV9QeuUtK45lRymYtsKM2luq7K7vXXKxIvuKOU-guGi47TLFaXhUfmk5k7aLVLzCWWvg-_bVVibA4mgOFayLtJJ-HxyU231iFseN5lilHfaThZnB8z43kvEw/s1600/Szilard.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZMozV1y7N0z4_a7yV9QeuUtK45lRymYtsKM2luq7K7vXXKxIvuKOU-guGi47TLFaXhUfmk5k7aLVLzCWWvg-_bVVibA4mgOFayLtJJ-HxyU231iFseN5lilHfaThZnB8z43kvEw/s640/Szilard.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 7 – In 1929 Szilard published a paper in which he explained that the process of the Demon knowing which side of a cylinder a molecule was in must produce some additional entropy to preserve the second law of thermodynamics.</p><p style="line-height:140%; font-size:120%">In Szilárd's 1929 paper, he proposed that using Maxwell’s Demon, you could indeed build a 100% efficient steam engine in conflict with the second law of thermodynamics. Imagine a cylinder with just one water molecule bouncing around in it as in Figure 7(a). First, the Demon figures out if the water molecule is in the left half or the right half of the cylinder. If he sees the water molecule in the right half of the cylinder as in Figure 7(b), he quickly installs a piston connected to a weight via a cord and pulley. As the water molecule bounces off the piston in Figure 7(c) and moves the piston to the left, it slowly raises the weight and does some useful work on it. In the process of moving the piston to the left, the water molecule must lose kinetic energy in keeping with the first law of thermodynamics and slow down to a lower velocity and temperature than the atoms in the surrounding walls of the cylinder. When the piston has finally reached the far left end of the cylinder it is removed from the cylinder in preparation for the next cycle of the engine. The single water molecule then bounces around off the walls of the cylinder as in Figure 7(a), and in the process picks up additional kinetic energy from the jiggling atoms in the walls of the cylinder as they kick the water molecule back into the cylinder each time it bounces off the cylinder walls. Eventually, the single water molecule will once again be in thermal equilibrium with the jiggling atoms in the walls of the cylinder and will be on average traveling at the same velocity it originally had before it pushed the piston to the left. So this proposed engine takes the ambient high-entropy thermal energy of the cylinder’s surroundings and converts it into the useful low-entropy potential energy of a lifted weight. Notice that the first law of thermodynamics is preserved. The engine does not create energy; it simply converts the high-entropy thermal energy of the random motions of the atoms in the cylinder walls into useful low-entropy potential energy, but that does violate the second law of thermodynamics. Szilárd's solution to this paradox was simple. He proposed that the process of the Demon figuring out if the water molecule was in the left-hand side of the cylinder or the right-hand side of the cylinder must cause the entropy of the Universe to increase. So “knowing” which side of the cylinder the water molecule was in must come with a price; it must cause the entropy of the Universe to increase.
<br /><br />
Finally, in 1953 Leon Brillouin published a paper with a thought experiment explaining that Maxwell’s Demon required some Information to tell if a molecule was moving slowly or quickly. Brillouin defined this Information as negentropy, or negative entropy, and found that Information about the velocities of the oncoming molecules could only be obtained by the demon by bouncing photons off the moving molecules. Bouncing photons off the molecules increased the total entropy of the entire system whenever the demon determined if a molecule was moving slowly or quickly. So Maxwell's Demon was really not a paradox after all since even the Demon could not violate the second law of thermodynamics. Leon Brillouin's 1953 paper is available for purchase at:
<br /><br />
Brillouin, L. (1953) The Negentropy Principle of Information. Journal of Applied Physics, 24, 1152-1163<br />
<a href="https://doi.org/10.1063/1.1721463">https://doi.org/10.1063/1.1721463</a>
<br /><br />
But for the frugal folk, here is the abstract for Leon Brillouin’s famous 1953 paper: <br /><br />The Negentropy Principle of Information<br /><font color="blue">Abstract<br />The statistical definition of Information is compared with Boltzmann's formula for entropy. The immediate result is that Information I corresponds to a negative term in the total entropy S of a system. <p style="text-align: center; line-height:140%; font-size:120%">S = S<sub>0</sub> - I <p style="line-height:140%; font-size:120%">A generalized second principle states that S must always increase. If an experiment yields an increase ΔI of the Information concerning a physical system, it must be paid for by a larger increase ΔS<sub>0</sub> in the entropy of the system and its surrounding laboratory. The efficiency ε of the experiment is defined as ε = ΔI/ΔS<sub>0</sub> ≤ 1. Moreover, there is a lower limit k ln2 (k, Boltzmann's constant) for the ΔS<sub>0</sub> required in an observation. Some specific examples are discussed: length or distance measurements, time measurements, observations under a microscope. In all cases it is found that higher accuracy always means lower efficiency. The Information ΔI increases as the logarithm of the accuracy, while ΔS<sub>0</sub> goes up faster than the accuracy itself. Exceptional circumstances arise when extremely small distances (of the order of nuclear dimensions) have to be measured, in which case the efficiency drops to exceedingly low values. This stupendous increase in the cost of observation is a new factor that should probably be included in the quantum theory. </font><br /> <br />In the equation above, Brillouin proposed that Information was a negative form of entropy. When an experiment yields some Information about a system, the total amount of entropy in the Universe must increase. Information is then essentially the elimination of microstates that a system can be found to exist in. From the above analysis, a change in Information ΔI is then the difference between the initial and final entropies of a system after a determination about the system has been made.<br /><br />ΔI = S<sub>i</sub> - S<sub>f</sub><br />S<sub>i</sub> = initial entropy<br />S<sub>f</sub> = final entropy <br /><br /> using the definition of entropy from the statistical mechanics of Ludwig Boltzmann. So we need to back up in time a bit and take a look at that next.
<br /><br />
Beginning in 1866, Ludwig Boltzmann began work to extend Maxwell’s statistical approach. Boltzmann’s goal was to be able to explain all the macroscopic thermodynamic properties of bulk matter in terms of the statistical analysis of microstates. Boltzmann proposed that the molecules in a gas occupied a very large number of possible energy states called microstates, and for any particular energy level of a gas, there were a huge number of possible microstates producing the same macroscopic energy. The probability that the gas was in any one particular microstate was assumed to be the same for all microstates. In 1872, Boltzmann was able to relate the thermodynamic concept of entropy to the number of these microstates with the formula:<br /><br />S = k ln(N)<br /><br />S = entropy<br />N = number of microstates<br />k = Boltzmann’s constant<br /><br />These ideas laid the foundations of statistical mechanics and its explanation of thermodynamics in terms of the statistics of the interactions of many tiny things.
<br /><br />
<strong>The Physics of Poker</strong><br />Boltzmann’s logic might be a little hard to follow, so let’s use an example to provide some insight by delving into the physics of poker. For this example, we will bend the formal rules of poker a bit. In this version of poker, you are dealt 5 cards as usual. The normal rank of the poker hands still holds and is listed below. However, in this version of poker, all hands of a similar rank are considered to be equal. Thus a full house consisting of a Q-Q-Q-9-9 is considered to be equal to a full house consisting of a 6-6-6-2-2 and both hands beat any flush. We will think of the rank of a poker hand as a macrostate. For example, we might be dealt 5 cards, J-J-J-3-6, and end up with the macrostate of three of a kind. The particular J-J-J-3-6 that we hold, including the suit of each card, would be considered a microstate. Thus for any particular rank of hand or macrostate, such as three of a kind, we would find a number of microstates. For example, for the macrostate of three of a kind, there are 54,912 possible microstates or hands that constitute the macrostate of three of a kind.<br /><br /><strong>Rank of Poker Hands</strong><br />Royal Flush - A-K-Q-J-10 all the same suit<br /><br />Straight Flush - All five cards are of the same suit and in sequence<br /><br />Four of a Kind - Such as 7-7-7-7<br /><br />Full House - Three cards of one rank and two cards of another such as K-K-K-4-4<br /><br />Flush - Five cards of the same suit, but not in sequence<br /><br />Straight - Five cards in sequence, but not the same suit<br /><br />Three of a Kind - Such as 5-5-5-7-3<br /><br />Two Pair - Such as Q-Q-7-7-4<br /><br />One Pair - Such as Q-Q-3-J-10<br /><br />Next, we create a table using Boltzmann’s equation to calculate the entropy of each hand. 
For this example, we set Boltzmann’s constant k = 1, since k is just a “fudge factor” used to get the units of entropy using Boltzmann’s equation to come out to those used by the thermodynamic formulas of entropy.<br /><br />Thus for three of a kind where N = 54,912 possible microstates or hands:<br /><br />S = ln(N)<br />S = ln(54,912) = 10.9134872<br /><p style="line-height:140%; font-size:120%"><table border="1"><tbody><tr><td><strong>Hand</strong></td><td><strong>Number of Microstates N</strong></td><td><strong>Probability</strong></td><td><strong>Entropy = LN(N)</strong></td><td><strong>Information Change = Initial Entropy - Final Entropy</strong></td></tr><tr><td>Royal Flush </td><td>4 </td><td>1.54 x 10<sup>-06</sup> </td><td>1.3862944 </td><td>13.3843276 </td></tr><tr><td>Straight Flush </td><td>36 </td><td>1.39 x 10<sup>-05</sup></td><td>3.5835189 </td><td>11.1871031 </td></tr><tr><td>Four of a Kind </td><td>624 </td><td>2.40 x 10<sup>-04</sup> </td><td>6.4361504 </td><td>8.3344716 </td></tr><tr><td>Full House </td><td>3,744 </td><td>1.44 x 10<sup>-03</sup></td><td>8.2279098 </td><td>6.5427122 </td></tr><tr><td>Flush </td><td>5,108 </td><td>1.97 x 10<sup>-03</sup></td><td>8.5385632 </td><td>6.2320588 </td></tr><tr><td>Straight </td><td>10,200 </td><td>3.92 x 10<sup>-03</sup></td><td>9.2301430 </td><td>5.5404790 </td></tr><tr><td>Three of a Kind </td><td>54,912 </td><td>2.11 x 10<sup>-02</sup></td><td>10.9134872 </td><td>3.8571348 </td></tr><tr><td>Two Pairs </td><td>123,552 </td><td>4.75 x 10<sup>-02</sup></td><td>11.7244174 </td><td>3.0462046 </td></tr><tr><td>Pair </td><td>1,098,240 </td><td>4.23 x 10<sup>-01</sup></td><td>13.9092195 </td><td>0.8614025 </td></tr><tr><td>High Card </td><td>1,302,540 </td><td>5.01 x 10<sup>-01</sup> </td><td>14.0798268 </td><td>0.6907952 </td></tr><tr><td>Total Hands </td><td>2,598,960 </td><td>1.00</td><td>14.7706220 </td><td>0.0000000 </td></tr></tbody></table><p style="line-height:140%; font-size:120%"> <p style="text-align: center; line-height:140%; font-size:120%">Figure 8 – In the table above, each poker hand is a macrostate that has a number of microstates that all define the same macrostate. Given N, the number of microstates for each macrostate, we can then calculate its entropy using Boltzmann's definition of entropy S = ln(N) and its Information content using Leon Brillouin’s concept of Information ΔI = S<sub>i</sub> - S<sub>f</sub>. Note that the 4 royal flushes are excluded from the 36 straight flushes because they are ranked as a separate hand, giving a total of 2,598,960 possible 5-card hands. The above table is available as an Excel spreadsheet on my Microsoft One Drive at <a href="https://onedrive.live.com/redir?resid=21488FF1CF19C88B!1429&authkey=!AFr_6rP1jAW484U&ithint=file%2cxlsx">Entropy </a>.
</p><p style="line-height:140%; font-size:120%"> Examine the above table. Note that higher-ranked hands have more order, less entropy, and are less probable than the lower-ranked hands. For example, a straight flush with all cards the same color, same suit, and in numerical order has an entropy = 3.6889, while a pair with two cards of the same value has an entropy = 13.909. A hand that is a straight flush appears more orderly than a hand that contains only a pair and is certainly less probable. A pair is more probable than a straight flush because more microstates produce the macrostate of a pair (1,098,240) than there are microstates that produce the macrostate of a straight flush (40). In general, probable things have lots of entropy and disorder, while improbable things, like perfectly bug-free software, have little entropy or disorder. In thermodynamics, entropy is a measure of the depreciation of a macroscopic system like how well mixed two gases are, while in statistical mechanics entropy is a measure of the microscopic disorder of a system, like the microscopic mixing of gas molecules. A pure container of oxygen gas will mix with a pure container of nitrogen gas because there are more arrangements or microstates for the mixture of the oxygen and nitrogen molecules than there are arrangements or microstates for one container of pure oxygen and the other of pure nitrogen molecules. In statistical mechanics, a neat room tends to degenerate into a messy room and increase in entropy because there are more ways to mess up a room than there are ways to tidy it up. In statistical mechanics, the second law of thermodynamics results because systems with lots of entropy and disorder are more probable than systems with little entropy or disorder, so entropy naturally tends to increase with time. <br /> <br /> Getting back to Leon Brillouin’s concept of Information as a form of negative entropy, let’s compute the amount of Information you convey when you tell your opponent what hand you hold. When you tell your opponent that you have a straight flush, you eliminate more microstates than when you tell him that you have a pair, so telling him that you have a straight flush conveys more Information than telling him you hold a pair. For example, there are a total of 2,598,964 possible poker hands or microstates for a 5 card hand, but only 40 hands or microstates constitute the macrostate of a straight flush.<br /><br />Strait Flush ΔI = S<sub>i</sub> – S<sub>f</sub> = ln(2,598,964) – ln(40) = 11.082<br /><br />For a pair we get:<br /><br />Pair ΔI = S<sub>i</sub> – S<sub>f</sub> = ln(2,598,964) – ln(1,098,240) = 0.8614040<br /><br />When you tell your opponent that you have a straight flush you deliver 11.082 units of Information, while when you tell him that you have a pair you only deliver 0.8614040 units of Information. Clearly, when your opponent knows that you have a straight flush, he knows more about your hand than if you tell him that you have a pair.
<br /><br />
<strong>Comparing Leon Brillouin’s Concept of Information to the Concept of Functional Information</strong><br />
From the above, we see that Leon Brillouin’s concept of Information dealt with determining how rare the result of a particular measurement was, that is, how far the measurement was from the normal situation. This would essentially be the height of the intersecting plane in Figure 2. On the other hand, Functional Information is a measurement of the volume of the cone above the intersecting plane in Figure 2.
<br /><br />
But before comparing the two directly, let's do a few mathematical operations on the definition of Functional Information. Recall that Functional Information is defined in terms of the fraction of RNA strands or programs that can perform a given function, the fraction of things above the intersecting plane in Figure 2:
<br /><br />
Functional Information = - log<sub>2</sub> ( N<sub>a</sub> / N<sub>t</sub> )
<br /><br />
where N<sub>a</sub> = number of RNA strands or programs above the intersecting plane of Figure 2<br />
where N<sub>t</sub> = total number of RNA strands or programs in Figure 2
<br /><br />
Now using the magic of logarithms:
<br /><br />
Functional Information = - log<sub>2</sub> ( N<sub>a</sub> / N<sub>t</sub> ) = - ( log<sub>2</sub> ( N<sub>a</sub>) - log<sub>2</sub> ( N<sub>t</sub> ) ) = log<sub>2</sub> ( N<sub>t</sub> ) - log<sub>2</sub> ( N<sub>a</sub> )
<br /><br />
Now there really is nothing special about using the natural base-e logarithm ln(x) or the base-2 logarithm log<sub>2</sub>(x). Today, people sometimes like to use the base-2 logarithm log<sub>2</sub>(x) because we have computers that use base-2 arithmetic. But Boltzmann did not have a computer back in the 19th century so he used the common base-e natural logarithm ln(x) of the day. The mathematical constant e was first discovered in 1683 by Jacob Bernoulli while he was studying compound interest. He wondered what would happen if interest was compounded continuously, meaning an infinite number of times per year. The limit of this process led to the value of e, approximately 2.71828.
<br /><br />
Now since log<sub>2</sub>(x) = ln(x) / 0.6931471806 we can rewrite the equation as:
<br /><br />
Functional Information = ( ln ( N<sub>t</sub> ) - ln ( N<sub>a</sub> ) ) / 0.6931471806
<br /><br />
Since the 0.6931471806 is just a fudge factor used to convert between base-2 and natural logarithms, we can set it to "1", which simply amounts to measuring information in natural units rather than bits, to obtain:
<br /><br />
Functional Information = ln ( N<sub>t</sub> ) - ln ( N<sub>a</sub> )
<br /><br />
Now we can see that Functional Information is very similar to Brillouin's Information for poker:
<br /><br />
Brillouin Information = ln ( N<sub>total hands</sub> ) - ln ( N<sub>your hand</sub> )
<br /><br />
Functional Information essentially compares the number of poker hands that are equal to or better than your particular hand with the total number of possible poker hands, while Brillouin Information just compares the number of hands exactly like yours with the total number of possible poker hands. The good news is that Functional Information does not get tangled up with the ideas of entropy and information used by the telecommunications people.
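<br /><br />
Here is a minimal Python sketch of this comparison, reusing the poker microstate counts from the table above and working in natural units with the conversion fudge factor set to "1" as described above:
<br /><br />
<pre>
import math

# Poker microstate counts from the table above, ordered best hand to worst.
ranked = [
    ("Royal Flush", 4), ("Straight Flush", 36), ("Four of a Kind", 624),
    ("Full House", 3_744), ("Flush", 5_108), ("Straight", 10_200),
    ("Three of a Kind", 54_912), ("Two Pairs", 123_552),
    ("Pair", 1_098_240), ("High Card", 1_302_540),
]
total = sum(n for _, n in ranked)
names = [name for name, _ in ranked]

def brillouin_information(hand):
    """ln(N_total_hands) - ln(N_your_hand): your exact hand versus all hands."""
    return math.log(total) - math.log(dict(ranked)[hand])

def functional_information(hand):
    """ln(N_t) - ln(N_a): N_a counts every hand equal to or better than yours."""
    n_above = sum(n for name, n in ranked[: names.index(hand) + 1])
    return math.log(total) - math.log(n_above)

for hand in ("Straight Flush", "Pair"):
    print(f"{hand}: Brillouin = {brillouin_information(hand):.4f}  "
          f"Functional = {functional_information(hand):.4f}")
</pre>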
<br /><br />
<strong>The Very Sordid History of Entropy and Information in the Information Theory Used by Telecommunications</strong><br /> Claude Shannon went to work at Bell Labs in 1941 where he worked on cryptography and secret communications for the war effort. Claude Shannon was a true genius and is credited as being the father of Information Theory. But Claude Shannon was really trying to be the father of digital Communication Theory. In 1948, Claude Shannon published a very famous paper that got it all started. <br /> <br /> A Mathematical Theory of Communication<br /> <a href="https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf ">https://people.math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf </a> <br /> <br /> Here is the very first paragraph from that famous paper: <br /> <br /> <font color="blue"> Introduction<br /> The recent development of various methods of modulation such as PCM and PPM which exchange bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication. A basis for such a theory is contained in the important papers of Nyquist and Hartley on this subject. In the present paper, we will extend the theory to include a number of new factors, in particular the effect of noise in the channel, and the savings possible due to the statistical structure of the original message and due to the nature of the final destination of the Information. The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. <font color="red">Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages.</font> <font color="blue"> The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design. If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the Information produced when one message is chosen from the set, all choices being equally likely. 
</font> </font> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRCyQ13Oq7mRekNNA1rypDznxK-Nd2vXzq5jd8BPuqjQvMcshYALBHvKn81R-DBYArO9frpuI81iL8mPpeWHWuIGbaVtTFXfDivZU6jHH0KPmkRm5HmWiuUNKaJOX8De1Ng58UlG1sew14bGSUO7bAhv5ujiTKepbvmBYfKMkirOCKNCMndJE/s834/ShannonFig1.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="433" data-original-width="834" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRCyQ13Oq7mRekNNA1rypDznxK-Nd2vXzq5jd8BPuqjQvMcshYALBHvKn81R-DBYArO9frpuI81iL8mPpeWHWuIGbaVtTFXfDivZU6jHH0KPmkRm5HmWiuUNKaJOX8De1Ng58UlG1sew14bGSUO7bAhv5ujiTKepbvmBYfKMkirOCKNCMndJE/s600/ShannonFig1.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 9 – Above is the very first figure in Claude Shannon's very famous 1948 paper <em>A Mathematical Theory of Communication.</em> </p><p style="line-height:140%; font-size:120%"> Notice that the title of the paper is <em>A Mathematical Theory of Communication</em> and the very first diagram in the paper describes the engineering problem he was trying to solve. Claude Shannon was trying to figure out a way to send digital messages containing electrical bursts of 1s and 0s over a noisy transmission line. As shown in the red text above, Claude Shannon did not care at all about the Information in the message. The message could be the Gettysburg Address or pure jibberish. It did not matter. What mattered was being able to manipulate the noisy message of 1s and 0s so that the received message exactly matched the transmitted message. You see, at the time, AT&T was essentially only transmitting analog telephone conversations. A little noise on an analog telephone line is just like listening to an old scratchy vinyl record. It might be a little bothersome, but still understandable. However, error correction is very important when transmitting digital messages consisting of binary 1s and 0s. For example, both of the messages down below are encoded with a total of 16 1s and 0s: <br></br> 0000100000000000<br /> 1001110010100101 <br></br>However, the first message consists mainly of 0s, so it seems that it should be easier to apply some kind of error detection and correction scheme to the first message, compared to the second message, because the 1s are so rare in the first message. Doing the same thing for the second message should be much harder because the second message is composed of eight 0s and eight 1s. For example, simply transmitting the 16-bit message 5 times over and over should easily do the trick for the first message. But for the second message, you might have to repeat the 16 bits 10 times to make sure you could figure out the 16 bits in the presence of noise that could sometimes flip a 1 to a 0. This led Shannon to conclude that the second message must contain more Information than the first message. He also concluded that the 1s in the first message must contain more Information than the 0s because the 1s were much less probable than the 0s, and consequently, the arrival of a 1 had much more significance than the arrival of a 0 in the message. Using this line of reasoning, Shannon proposed that if the probability of receiving a 0 in a message was p and the probability of receiving a 1 in a message was q, then the Information H in the arrival of a single 1 or 0 must not simply be one bit of Information. 
Instead, it must depend upon the probabilities p and q of the arriving 1s and 0s: <br></br> H(p) = - p log<sub>2</sub>p - q log<sub>2</sub>q <br></br>Since in this case the message is only composed of 1s and 0s, it follows that: <br></br> q = 1 - p <br></br>Figure 10 shows a plot of the Information H(p) of the arrival of a 1 or 0 as a function of p, the probability of a 0 arriving in a message, when the message is only composed of 1s and 0s:<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOPshdrGU5QOKNJqXfvmF7UWIhurw0Ztuc3RDT1u8VOubddw3JzWzfFOGybTXJFv3GT-KeBDzTzaV2cl8TwivEZ2QTe_Drvi3DHL-TgyR3l84_LMCo3FFOgGo2wiTI0_RtobS3nw/s1600/entropy.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOPshdrGU5QOKNJqXfvmF7UWIhurw0Ztuc3RDT1u8VOubddw3JzWzfFOGybTXJFv3GT-KeBDzTzaV2cl8TwivEZ2QTe_Drvi3DHL-TgyR3l84_LMCo3FFOgGo2wiTI0_RtobS3nw/s1600/entropy.jpg" /></a></div><p style="text-align: center; line-height:140%; font-size:120%">Figure 10 - A plot of Shannon’s Information Entropy equation H(p) versus the probability p of finding a 0 in a message composed solely of 1s and 0s</p><p style="line-height:140%; font-size:120%">Notice that the graph peaks to a value of 1.0 when p = 0.50 and has a value of zero when p = 0.0 or p = 1.0. Now if p = 0.50 that means that q = 0.50 too because: <br></br> q = 1 - p <br></br>Substituting p = 0.50 and q = 0.50 into the above equation yields the Information content of an arriving 0 or 1 in a message, and we find that it is equal to one full bit of Information: <br></br> H(0.50) = -(0.50) log<sub>2</sub>(0.50) - (0.50) log<sub>2</sub>(0.50) = -log<sub>2</sub>(0.50) = 1 <br></br>And we see that the value of H(0.50) on the graph in Figure 10 is indeed 1 bit. <br></br> Now suppose the arriving message consists only of 0s. In that case, p = 1.0 and q = 0.0, and the Information content of an incoming 0 or 1 is H(1.0), which works out to a value of 0.0 both in our equation and in the plot of H(p) in Figure 10. This states that a message consisting solely of arriving 0s contains no Information at all. Similarly, a message consisting only of 1s would have a p = 0.0 and a q = 1.0, and our equation and plot calculate a value of H(0.0) = 0.0 too, meaning that a message consisting solely of 1s likewise conveys no Information at all. What we see here is that seemingly a “messy” message consisting of many 1s and 0s conveys lots of Information, while a “neat” message consisting solely of 1s or 0s conveys no Information at all. When the probability of receiving a 1 or 0 in a message is 0.50 – 0.50, each arriving bit contains one full bit of Information, but for any other mix of probabilities, like 0.80 – 0.20, each arriving bit contains less than a full bit of Information. From the graph in Figure 10, we see that when a message has a probability mix of 0.80 – 0.20, each arriving 1 or 0 only contains about 0.72 bits of Information. The graph also shows that it does not matter whether the 1s or the 0s are the more numerous bits because the graph is symmetric about the point p = 0.50, so a 0.20 – 0.80 mix of 1s and 0s also only delivers 0.72 bits of Information for each arriving 1 or 0.
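<br></br>For readers who like to experiment, here is a minimal Python sketch of my own (an illustration only, not anything from Shannon's paper) that computes H(p) for a message composed solely of 1s and 0s and reproduces the values just discussed:
<br></br>
<pre>
import math

def shannon_entropy(p):
    # Information H(p) in bits per arriving symbol for a message of
    # 1s and 0s, where p is the probability of a 0 arriving and
    # q = 1 - p is the probability of a 1 arriving
    q = 1.0 - p
    h = 0.0
    # By convention 0 * log2(0) = 0, so skip zero-probability terms
    if p > 0.0:
        h -= p * math.log2(p)
    if q > 0.0:
        h -= q * math.log2(q)
    return h

print(shannon_entropy(0.50))   # 1.0    - one full bit per arriving symbol
print(shannon_entropy(0.80))   # 0.7219 - about 0.72 bits per arriving symbol
print(shannon_entropy(0.20))   # 0.7219 - the curve is symmetric about p = 0.50
print(shannon_entropy(1.00))   # 0.0    - a message of all 0s conveys no Information
</pre>
Running it prints the four values just read off of the graph in Figure 10.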
<br></br>Claude Shannon went on to generalize his formula for H(p) to include cases where there were more than two symbols used to encode a message: <br></br> H(p) = - Σ p(x) log<sub>2</sub> p(x) <br></br>The above formula says that if you use 2, 3, 4, 5 …. different symbols to encode Information, just add up, over all the symbols in the message, the probability of each symbol multiplied by the log<sub>2</sub> of that probability, and then flip the sign. For example, suppose we choose the symbols 00, 01, 10, and 11 to send messages and that the probabilities of sending a 1 or a 0 are both 0.50. That means the probability p for each symbol 00, 01, 10 and 11 is 0.25 because each symbol is equally likely. So how much Information does each of these two-digit symbols now contain? If we substitute the values into Shannon’s equation we get an answer of 2 full bits of Information: <br></br> H(0.25, 0.25, 0.25, 0.25) = - 0.25 log<sub>2</sub>(0.25) - 0.25 log<sub>2</sub>(0.25) - 0.25 log<sub>2</sub>(0.25) - 0.25 log<sub>2</sub>(0.25) = <br /> - log<sub>2</sub>(0.25) = 2 <br></br>which makes sense because each symbol is composed of two one-bit symbols. In general, if all the symbols we use are N bits long, they will then all contain N bits of Information each. For example, in biology genes are encoded in DNA using four bases A, C, T and G. A codon consists of 3 bases, and each codon codes for a particular amino acid or is a Stop codon, biology's version of an end-of-file marker. On average, prokaryotic bacterial genes code for about 400 amino acids using 1200 base pairs. If we assume that the probabilities of all four bases, A, C, T and G, are the same for all the bases in a gene, namely 0.25 each, then we can use our analysis above to conclude that each base contains 2 bits of Information because we are using 4 symbols to encode the Information. That means a 3-base codon contains 6 bits of Information, and a protein consisting of 400 amino acids contains 2400 bits of Information, or 300 bytes of Information in IT speak. <br></br> <strong>Entropy and Information Confusion</strong></br> Now here is where the confusion comes in about the nature of Information. The story goes that Claude Shannon was not quite sure what to call his formula for H(p). Then one day in 1949 he happened to visit the mathematician and early computer pioneer John von Neumann, and that is when Information and entropy got mixed together in communications theory: <br></br><font color="blue"><em>”My greatest concern was what to call it. I thought of calling it ‘Information’, but the word was overly used, so I decided to call it ‘uncertainty’. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, ‘You should call it entropy, for two reasons. In the first place, your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.’”</em></font><br></br>Unfortunately, with that piece of advice, we ended up equating Information with entropy in communications theory. <br></br>So in Claude Shannon's Information Theory, people calculate the entropy, or Information content, of a message by mathematically determining how much “surprise” there is in the message.
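<br></br>Claude Shannon's generalized formula is also easy to check in code. Here is another small Python sketch of my own (again, just an illustration) that evaluates H for any probability distribution of symbols and confirms the two-bit symbol and DNA codon arithmetic above:
<br></br>
<pre>
import math

def shannon_entropy(probs):
    # Generalized Shannon Information Entropy in bits:
    # H = - sum of p(x) * log2(p(x)) over all symbols x
    return -sum(p * math.log2(p) for p in probs if p > 0.0)

# Four equally likely two-bit symbols: 00, 01, 10 and 11
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits per symbol

# Four equally likely DNA bases: A, C, T and G
bits_per_base = shannon_entropy([0.25] * 4)       # 2.0 bits per base
print(3 * bits_per_base)                          # 6 bits per 3-base codon
print(1200 * bits_per_base)                       # 2400 bits per 1200-base gene
print(1200 * bits_per_base / 8)                   # 300 bytes in IT speak
</pre>
Each print statement reproduces one of the numbers worked out above.
<br></br>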
For example, in Claude Shannon's Information Theory, if I transmit a binary message consisting only of 1s or only of 0s, I transmit no useful Information because the person on the receiving end only sees a string of 1s or a string of 0s, and there is no “surprise” in the message. For example, the messages “1111111111” or “0000000000” are both equally boring and predictable, with no real “surprise” or Information content at all. Consequently, the entropy, or Information content, of each bit in these messages is zero, and the total Information of all the transmitted bits in the messages is also zero because they are both totally predictable and contain no “surprise”. On the other hand, if I transmit a signal containing an equal number of 1s and 0s, there can be lots of “surprise” in the message because nobody can really tell in advance what the next bit will bring, and each bit in the message then has an entropy, or Information content, of one full bit of Information. <br /><br /> This concept of entropy and Information content is very useful for people who work with transmission networks and on error detection and correction algorithms for those networks, but it is not very useful for IT professionals. For example, suppose you had a 10-bit software configuration file and the only “correct” configuration for your particular installation consisted of 10 1s in a row like this “1111111111”. In Claude Shannon's Information Theory that configuration file contains no Information because it contains no “surprise”. However, in Leon Brillouin’s formulation of Information there would be a total of N = 2<sup>10</sup> possible microstates or configuration files for the 10-bit configuration file, and since the only “correct” version of the configuration file for your installation is “1111111111” there is only N = 1 microstate that meets that condition. <br /><br />Using the formulas below, we can now calculate the entropy of our single “correct” 10-bit configuration file and the entropy of all possible 10-bit configuration files: <br /><br /><strong>Boltzmann's Definition of Entropy</strong><br />S = ln(N)<br />N = Number of microstates<br />(with Boltzmann's constant k set to 1 for simplicity) <br /><br /><strong>Leon Brillouin’s Definition of Information</strong><br />∆Information = S<sub>i</sub> - S<sub>f</sub><br />S<sub>i</sub> = initial entropy<br />S<sub>f</sub> = final entropy <br /><br />as: <br /> <br />S<sub>f</sub> = ln(1) = 0 <br /> <br />S<sub>i</sub> = ln(2<sup>10</sup>) = ln(1024) = 6.93147 <br /> <br />So using Leon Brillouin’s formulation for the concept of Information, the Information content of a single “correct” 10-bit configuration file is: <br /> <br />S<sub>i</sub> - S<sub>f</sub> = 6.93147 – 0 = 6.93147 <br /> <br />which, if you look at the table in Figure 8, is a little more Information than drawing a full house in poker without drawing any additional cards, and a “correct” configuration file would be even less likely for you to stumble upon by accident than a full house. <br /> <br />So in Claude Shannon's Information Theory, a very “buggy” 10 MB executable program file would contain just as much Information and would require just as many network resources to transmit as a bug-free 10 MB executable program file. Clearly, Claude Shannon's Information Theory formulations for the concepts of Information and entropy are less useful for IT professionals than are Leon Brillouin’s formulations for the concepts of Information and entropy.
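<br /><br />The arithmetic above is simple enough to check with a few lines of Python (my own sketch, with Boltzmann's constant k set to 1, as in the formulas above):
<br /><br />
<pre>
import math

def boltzmann_entropy(microstates):
    # Boltzmann's definition of entropy with k set to 1: S = ln(N)
    return math.log(microstates)

# All 2**10 possible 10-bit configuration files
S_initial = boltzmann_entropy(2**10)   # ln(1024) = 6.93147...

# The single "correct" configuration file "1111111111"
S_final = boltzmann_entropy(1)         # ln(1) = 0.0

# Leon Brillouin's definition: Information is a decrease in entropy
delta_information = S_initial - S_final
print(delta_information)               # 6.93147...
</pre>
The printed value matches the 6.93147 worked out above.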
<br /><br />What John von Neumann was trying to tell Claude Shannon was that his formula for H(p) looked very much like Boltzmann’s equation for entropy: <br></br> S = k ln(N) <br></br>The main difference was that Shannon was using a base 2 logarithm, log<sub>2</sub>, in his formula, while Boltzmann used a base e natural logarithm, ln or log<sub>e</sub>, in his formula for entropy. But given the nature of logarithms, that really does not matter much, since logarithms of different bases differ only by a constant factor: log<sub>2</sub>(N) = ln(N)/ln(2). <br></br> The main point of confusion arises because in communications theory the concepts of Information and entropy pertain to encoding and transmitting Information, while in IT and many other disciplines, like biology, we are more interested in the amounts of useful and useless Information in a message. For example, in communications theory, the code for a buggy 300,000-byte program contains just as much Information as a totally bug-free 300,000-byte version of the same program and would take just as much bandwidth and network resources to transmit accurately over a noisy channel as transmitting the bug-free version of the program. Similarly, in communications theory, a poker hand consisting of four Aces and a 2 of clubs contains just as much Information and is just as “valuable” as any other 5-card poker hand because the odds of being dealt any particular card are 1/52 for all the cards in a deck, and therefore, all messages consisting of 5 cards contain exactly the same amount of Information. Similarly, all genes that code for a protein consisting of 400 amino acids contain exactly the same amount of Information, no matter what those proteins might be capable of doing. However, in both biology and IT we know that just one incorrect amino acid in a protein or one incorrect character in a line of code can have disastrous effects, so in those disciplines, the quantity of useful Information is much more important than the number of bits of data to be transmitted accurately over a communications channel. <br></br>Of course, the concepts of useful and useless Information lie in the eye of the beholder to some extent. Brillouin’s formula attempts to quantify this difference, but his formula relies upon Boltzmann’s equation for entropy, and Boltzmann’s equation has always had the problem of how to define a macrostate. There really is no absolute way of defining one. For example, suppose I invented a new version of poker in which I defined the highest ranking hand to be an Ace of spades, 2 of clubs, 7 of hearts, 10 of diamonds and an 8 of spades. The odds of being dealt such a hand are 1 in 2,598,960 because there are 2,598,960 possible poker hands, and using Boltzmann’s equation that hand would have a very low entropy of exactly 0.0 because N = 1 and ln(1) = 0.0. Necessarily, the definition of a macrostate has to be rather arbitrary and tailored to the problem at hand. But in both biology and IT we can easily differentiate between macrostates that work as opposed to macrostates that do not work, like comparing a faulty protein or a buggy program with a functional protein or program.
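<br /><br />To make the arbitrariness of macrostates concrete, here is one more Python sketch of mine (an illustration only) that counts the microstates of two different poker macrostates and computes their Boltzmann entropies:
<br /><br />
<pre>
import math
from math import comb

total_hands = comb(52, 5)      # 2,598,960 possible 5-card poker hands

# Macrostate "full house": choose the rank of the three-of-a-kind and
# 3 of its 4 suits, then the rank of the pair and 2 of its 4 suits
full_houses = 13 * comb(4, 3) * 12 * comb(4, 2)   # 3,744 microstates

print(total_hands, full_houses)
print(math.log(full_houses))   # S = ln(3744) = 8.228 for the full-house macrostate
print(math.log(1))             # S = ln(1) = 0.0 for a macrostate defined as one exact hand
</pre>
Defining the macrostate to be a single exact hand drives N down to 1 and its entropy down to 0.0, exactly like the invented poker game above.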
<br /><br />
My hope is that by now I have totally confused you about the true nature of entropy and Information with my explanations of both! If I have been truly successful, it now means that you have joined the intellectual elite who worry about such things. For most people <br></br> <strong>Information</strong> = Something you know <br></br> and that says it all.
<br /><br />
For more on the above see <a href="https://softwarephysics.blogspot.com/2007/10/entropy-bane-of-programmers.html">Entropy - the Bane of Programmers</a>, <a href="https://softwarephysics.blogspot.com/2007/11/demon-of-software.html">The Demon of Software</a>, <a href="https://softwarephysics.blogspot.com/2013/09/new-introduction-to-softwarephysics.html">Some More Information About Information</a> and <a href="https://softwarephysics.blogspot.com/2022/08/new-introduction-to-softwarephysics.html">The Application of the Second Law of Information Dynamics to Software and Bioinformatics</a>.
<br /><br />
<strong>Like Most Complex Systems - Software Displays Nonlinear Behavior</strong><br />
With a firm understanding of how Information behaves in our Universe, my next challenge as a softwarephysicist was to try to explain why the stack of software in Figure 2 was shaped like a cone. Why did the Universe demand near perfection in order for software to work? Why didn't the Universe at least offer some partial credit on my programs as my old physics professors did in college? When I made a little typo on a final exam, my professors usually did not tear up the whole exam and then give me an "F" for the entire course. But as a budding IT professional, I soon learned that computer compilers were not so kind. If I had one little typo in 100,000 lines of code, the compiler would happily abend my entire compile! Worse yet, when I did get my software to finally compile and link into an executable file that a computer could run, I always found that my software contained all sorts of little bugs that made it not run properly. Usually, my software would immediately crash and burn, but sometimes it would seem to run just fine for many weeks in Production and then suddenly crash and burn later for no apparent reason. This led me to realize that software generally exhibited nonlinear behavior, but with careful testing (selection), software could be made to operate in a linear manner.
<br /><br />
Linear systems are defined by linear differential equations that can be solved analytically using calculus. Linear systems are generally well-behaved, meaning that a slight change to a linear system produces a correspondingly slight response. Nonlinear systems are defined by nonlinear differential equations that, in general, cannot be solved using calculus; they can usually only be solved numerically by computers. Nonlinear systems are generally not well-behaved, and a small change to a nonlinear system can easily produce disastrous results. This is true of both software and carbon-based life running on DNA. The mutation of a single character in 100,000 lines of code can easily produce disastrous results, and so too can the mutation of a single base pair in the three billion base pairs that define a human being. The Law of Increasing Functional Information explains that evolving systems overcome this problem by generating large numbers of similar configurations that are later honed by selection processes that remove the defective configurations.
<br /><br />
Now it turns out that all of the fundamental classical laws of physics listed above are defined by linear differential equations. So you would think that this should not be a problem. And before we had computers that could solve nonlinear differential equations, that is what everybody thought. But then in the 1950s, we started building computers that could solve nonlinear differential equations, and that is when the trouble started. We slowly learned that nonlinear systems did not behave at all like their well-behaved linear cousins. With the aid of computer simulations, we learned that when large numbers of components were assembled, they began to follow nonlinear differential equations and exhibited nonlinear behaviors. True, each little component in the assemblage would faithfully follow the linear differential equations of the fundamental classical laws of physics, but when large numbers of components came together and began to interact, those linear differential equations went out the window. The result was the arrival of Chaos Theory in the 1970s. For more on that see <a href="https://softwarephysics.blogspot.com/2008/04/introduction-to-softwarephysics.html">Software Chaos</a>.
<br /><br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-RuR0eigycBIygqloAQysXUkoiHsA6PdfXBltgth17GzfWK5IdVKOT0q2d7Dj4BUy6iSZYMhSNjcDBq7KmvVIFWO4U2FU8D7DKG7QPNYERMiebz4-Q8EXBEgEXCUjzZ4uCACvVg/s1600-h/EarthOrbit.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-RuR0eigycBIygqloAQysXUkoiHsA6PdfXBltgth17GzfWK5IdVKOT0q2d7Dj4BUy6iSZYMhSNjcDBq7KmvVIFWO4U2FU8D7DKG7QPNYERMiebz4-Q8EXBEgEXCUjzZ4uCACvVg/s320/EarthOrbit.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5197808840542249442" /></a><br /><p style="text-align: center; line-height:140%; font-size:120%">Figure 11 – The orbit of the Earth about the Sun is an example of a linear system that is periodic and predictable. It is governed by the linear differential equations that define Newton's laws of motion and by his equation for the gravitational force.
</p><p style="line-height:140%; font-size:120%"> Nonlinear systems are deterministic, meaning that once you set them off in a particular direction they always follow exactly the same path or trajectory, but they are not predictable because slight changes to initial conditions or slight perturbations can cause nonlinear systems to dramatically diverge to a new trajectory that leads to a completely different destination. Even when nonlinear systems are left to themselves and not perturbed in any way, they can appear to spontaneously jump from one type of behavior to another.
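<br /><br />This sensitivity to initial conditions is easy to demonstrate numerically. The toy Python sketch below (my own illustration, using the classic parameter values from Ed Lorenz's three nonlinear differential equations plotted in Figure 12 below) integrates two trajectories that start just one part in a million apart and watches them diverge:
<br /><br />
<pre>
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # One crude Euler integration step of the three nonlinear
    # Lorenz differential equations
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)          # first trajectory
b = (1.000001, 1.0, 1.0)     # second trajectory, perturbed by one part in a million

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(step, gap)     # the tiny initial perturbation grows dramatically
</pre>
Run it and the separation should grow from about one millionth to the full size of the attractor, even though both trajectories are perfectly deterministic.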
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHOSeFIBbHD67fuv_rMq1kzmBGX975vgl1rD58_U5w1UQhhcsAVb_WRPujWiRRcqjdvtYPR1GFLUVFWCRdUCCSUB4DugiBa9dQIdfWi8MEUXbuhyINKUZ8spGf0QDr_Vwhv511Lw/s500/LorenzMovie.gif" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="500" data-original-width="500" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHOSeFIBbHD67fuv_rMq1kzmBGX975vgl1rD58_U5w1UQhhcsAVb_WRPujWiRRcqjdvtYPR1GFLUVFWCRdUCCSUB4DugiBa9dQIdfWi8MEUXbuhyINKUZ8spGf0QDr_Vwhv511Lw/s400/LorenzMovie.gif"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 12 – Above is a very famous plot of the solution to three nonlinear differential equations developed by Ed Lorenz. Notice that, like the orbit of the Earth about the Sun, points on the solution curve follow somewhat periodic paths about two strange attractors. The region surrounding each attractor is called an attractor basin because points orbit within it like marbles circling in a bowl. </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjF7AIHE3pLajUXUR23C-AnFZgkA5ZcrG5VoQU0dHzcsx-jT2qweUqWgKNeq5RX33BIOFRk2hwoQf6WF_YhQLijiuUwW61qByV8b0LnVMibTo_BMEoeJGPKTZwP9asJXIsCXM4R6g/s800/LorenzAttractor.gif" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="450" data-original-width="800" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjF7AIHE3pLajUXUR23C-AnFZgkA5ZcrG5VoQU0dHzcsx-jT2qweUqWgKNeq5RX33BIOFRk2hwoQf6WF_YhQLijiuUwW61qByV8b0LnVMibTo_BMEoeJGPKTZwP9asJXIsCXM4R6g/s600/LorenzAttractor.gif"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 13 – But unlike the Earth orbiting the Sun, points in the attractor basins can suddenly jump from one attractor basin to another. High-volume corporate websites normally operate in a normal operations attractor basin but sometimes can spontaneously jump to an outage attractor basin, especially if they are perturbed by a small processing load disturbance. </p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrdDURW2aIjMzKH9rPxlp_gNBVw95bvrkHNjBn0Ipvp_FdtY7EiJe-K6AlgEQqBCkFqtcnTf1t99Mg0wkW9378R8SHg_LlHF1-FBZ-zVlXtEA5ff90DKkNAEIXQjl3f2H0x0nF2w/s600/SUV_FlipOver.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="398" data-original-width="600" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrdDURW2aIjMzKH9rPxlp_gNBVw95bvrkHNjBn0Ipvp_FdtY7EiJe-K6AlgEQqBCkFqtcnTf1t99Mg0wkW9378R8SHg_LlHF1-FBZ-zVlXtEA5ff90DKkNAEIXQjl3f2H0x0nF2w/s600/SUV_FlipOver.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 14 – The top-heavy SUVs of yore also had two basins of attraction and one of them was upside down. </p><p style="line-height:140%; font-size:120%">
For more on the above see <a href="https://softwarephysics.blogspot.com/2008/04/introduction-to-softwarephysics.html">Software Chaos</a>.
<br /><br />
<strong>The Fundamental Problem of Software</strong><br />
From the above analysis, I came to the realization in the early 1980s that my fundamental problem was that the second law of thermodynamics was constantly trying to destroy the useful information in my programs with small bugs, and because our Universe is largely nonlinear in nature, these small bugs could produce disastrous results when software was in Production. Now I would say that the second law of thermodynamics was constantly trying to destroy the Functional Information in my programs.
<br /><br />
But the idea of destroying information causes some real problems for physicists, and as we shall see, the solution to that problem is that we need to make a distinction between useful information and useless information. Here is the problem that physicists have with destroying information. Recall that a reversible process is a process that can be run backwards in time to return the Universe to the state that it had before the process even began, as if the process had never happened in the first place. For example, the collision between two molecules at low energy is a reversible process that can be run backwards in time to return the Universe to its original state because Newton’s laws of motion are reversible. Knowing the position of each molecule at any given time and also its momentum, a combination of its speed, direction, and mass, we can predict where each molecule will go after a collision between the two, and also where each molecule came from before the collision, using Newton’s laws of motion. For a process such as this to be classified as reversible, operating under reversible physical laws, the information required to return the system to its initial state cannot be destroyed, no matter how many collisions might occur. <br /> <br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVyuqDDCBr_FlvZ19iQ1U5Z4vkblE-YvVA-tMCY6aGJmR3iCH71oSScD4mkf6AV85smAgFiRIHMZ8jN47UAMEMZNwwfLdP3acviYVdKo6i2w8xRGuKXoo_3R4-XolLbmR7dXiHgA/s1600/CollidingBalls.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 193px; height: 261px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVyuqDDCBr_FlvZ19iQ1U5Z4vkblE-YvVA-tMCY6aGJmR3iCH71oSScD4mkf6AV85smAgFiRIHMZ8jN47UAMEMZNwwfLdP3acviYVdKo6i2w8xRGuKXoo_3R4-XolLbmR7dXiHgA/s400/CollidingBalls.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5599679041274301314" /></a><p style="text-align: center; line-height:140%; font-size:120%">Figure 15 – The collision between two molecules at low energy is a reversible process because Newton’s laws of motion are reversible (click to enlarge)</p><p style="line-height:140%; font-size:120%"> Currently, all of the effective theories of physics, what many people mistakenly now call the “laws” of the Universe, are indeed reversible, except for the second law of thermodynamics, but that is because, as we saw above, the second law is really not a fundamental “law” of the Universe at all. The second law of thermodynamics just emerges from the statistics of a large number of interacting particles. Now in order for a law of the Universe to be reversible, it must conserve information. That means that two different initial microstates cannot evolve into the same microstate at a later time. For example, in the collision between the blue and pink molecules in Figure 15, the blue and pink molecules both begin with some particular position and momentum one second before the collision and end up with different positions and momenta one second after the collision. In order for the process to be reversible, and for Newton’s laws of motion to be reversible too, this mapping from initial state to final state has to be unique.
A different set of identical blue and pink molecules starting out with different positions and momenta one second before the collision could not end up with the same positions and momenta one second after the collision as the first set of blue and pink molecules. If that were to happen, then one second after the collision, we would not be able to tell what the original positions and momenta of the two molecules were one second before the collision since there would now be two possible alternatives, and we would not be able to uniquely reverse the collision. We would not know which set of positions and momenta the blue and pink molecules originally had one second before the collision, and the information required to reverse the collision would be destroyed. And because all of the current effective theories of physics are time-reversible in nature, information cannot be destroyed. So if someday information were indeed found to be destroyed in an experiment, the very foundations of physics would collapse, and consequently, all of science would collapse as well. <br /> <br /> So if information cannot be destroyed, but Leon Brillouin’s reformulation of the second law of thermodynamics does imply that the total amount of information in the Universe must decrease (dS/dt > 0 implies that dI/dt < 0), what is going on? The solution to this problem is that we need to make a distinction between useful information and useless information. Recall that the first law of thermodynamics maintains that energy, like information, also cannot be created or destroyed by any process. Energy can only be converted from one form into another. For example, when you drive to work, you convert all of the low-entropy chemical energy in gasoline into an equal amount of useless waste heat energy by the time you hit the parking lot of your place of employment. During the entire process of driving to work, none of the energy in the gasoline is destroyed; it is only converted into an equal amount of waste heat that simply diffuses away into the environment as your car cools down into thermal equilibrium with its surroundings. So why can I not simply drive home later in the day using the ambient energy found around my parking spot? The reason you cannot do that is that pesky old second law of thermodynamics. You simply cannot turn the useless high-entropy waste heat of the molecules bouncing around near your parked car into useful low-entropy energy to power your car home at night. And the same goes for information. Indeed, the time reversibility of all the current effective theories of physics may maintain that you cannot destroy information, but that does not mean that you cannot change useful information into useless information. <br /> <br /> But for all practical purposes from an IT perspective, turning useful information into useless information is essentially the same as destroying information. For example, suppose you take the source code file for a bug-free program and scramble its contents. Theoretically, the scrambling process does not destroy any information because it can, in principle, be reversed. But in practical terms, you will be turning a low-entropy file containing useful information into a high-entropy file containing only useless information. So effectively you will have destroyed all of the useful information in the bug-free source code file. Here is another example.
Suppose you are dealt a full house, K-K-K-4-4, but at the last moment a misdeal is declared and your K-K-K-4-4 gets shuffled back into the deck! Now the K-K-K-4-4 still exists as scrambled hidden information in the entropy of the entire deck, and so long as the shuffling process can be reversed, the K-K-K-4-4 can be recovered, and no information is lost, but that does not do much for your winnings. Since all the current laws of physics are reversible, including quantum mechanics, we should never see information being destroyed. In other words, because entropy must always increase and never decrease, the hidden information of entropy cannot be destroyed.
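<br /><br />A concrete way to see the difference between destroying information and merely rendering it useless is a reversible scramble. The toy Python sketch below (an illustration of mine, not a real encryption scheme) XORs the bytes of a file with a deterministic pseudo-random keystream; the output looks like useless high-entropy noise, yet because every step can be exactly reversed, running the scramble a second time recovers every byte:
<br /><br />
<pre>
import random

def scramble(data, seed=42):
    # XOR each byte with a deterministic pseudo-random keystream.
    # Because XOR is its own inverse, scrambling twice with the same
    # seed returns the original bytes - no information is destroyed,
    # it has merely been rendered useless to a reader
    rng = random.Random(seed)
    return bytes(b ^ rng.randrange(256) for b in data)

source_code = b"PERFORM UNTIL END-OF-FILE ..."  # stand-in for a bug-free source file
noise = scramble(source_code)                   # looks like high-entropy gibberish
restored = scramble(noise)                      # the reverse of the scramble

print(restored == source_code)                  # True - the useful information returns
</pre>
The scrambled bytes would consume just as many network resources to transmit as the original source file, but only one of the two files is of any use to an IT professional.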
<br /><br />
<strong>The Solution to the Fundamental Problem of Software</strong><br />
Again, in physics, we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software. So in the early 1980s, I asked myself, "Are there any very complicated systems in our Universe that seem to deal well with the second law of thermodynamics and nonlinearity?" I knew that living things did a great job of handling both, but at first, I did not know how to harness the power of living things to grow software instead of writing software. Then, through a serendipitous accident in 1985, I began to do just that by working on some software that I later called the Bionic Systems Development Environment (BSDE). BSDE was an early IDE (Integrated Development Environment) that ran on VM/CMS and grew applications from embryos in a biological manner. For more on BSDE see the last part of <a href="https://softwarephysics.blogspot.com/2019/09/new-introduction-to-softwarephysics_21.html">Programming Biology in the Biological Computation Group of Microsoft Research</a>. During the 1980s, BSDE was used by about 30 programmers to put several million lines of code into Production. Here is a 1989 document on my Microsoft One Drive that was used by the IT Training department of Amoco in their BSDE class:
<br /><br />
<a href="https://onedrive.live.com/redir?resid=21488ff1cf19c88b!1432&authkey=!ACPLasjpCI9PBjY&ithint=file%2ctxt">BSDE</a> – A 1989 document describing how to use BSDE - the Bionic Systems Development Environment - to grow applications from genes and embryos within the maternal BSDE software.
<br /><br />
<strong>Could the Physical Laws of our Universe Have also Arisen From the Law of Increasing Functional Information in Action?</strong><br />
Before embarking on the rather lengthy section below on the evolution of computer software and hardware, I would like to put in a plug for Lee Smolin's cosmological natural selection hypothesis that proposes that the physical laws of our Universe evolved from an infinitely long chain of previous Universes to produce a Universe that is complex enough to easily form black holes. In Lee Smolin's cosmological natural selection hypothesis, black holes in one universe produce white holes in new Universes beyond the event horizons of the originating black holes. These new Universes experience these new white holes as their own Big Bangs and then go on to produce their own black holes if possible. Thus, Universes that have physical laws that are good at making black holes are naturally selected for over Universes that do not and soon come to dominate the Multiverse. Lee Smolin's cosmological natural selection hypothesis meets all of the necessary requirements of the Law of Increasing Functional Information for the cosmic evolution of a Multiverse. For more on that see <a href="https://softwarephysics.blogspot.com/2023/10/new-introduction-to-softwarephysics_17.html">The Self-Organizing Recursive Cosmos</a>.
<br /><br />
<strong>The Evolution of Software as a Case Study of the Law of Increasing Functional Information in Action</strong><br />In this rather long-winded tale, try to keep in mind the three required factors that the Law of Increasing Functional Information needs for a system to evolve:
<br /><br />
<font color="blue">
1. Each system is formed from numerous interacting units (e.g., nuclear particles, chemical elements, organic molecules, or cells) that result in combinatorially large numbers of possible configurations.<br />
2. In each of these systems, ongoing processes generate large numbers of different configurations.<br />
3. Some configurations, by virtue of their stability or other “competitive” advantage, are more likely to persist owing to selection for function.
</font>
<br /><br />
Also, take note of the coevolution of computer hardware and software. It is very similar to the coevolution of the rocks and minerals of the Earth's crust and carbon-based life over the past 4.0 billion years. Please feel free to skim over the details that only IT old-timers may find interesting.
<br /><br />
The evolution of software provides a valuable case study for the Law of Increasing Functional Information because software has been evolving about 100 million times faster than did carbon-based life on this planet. This has been going on for the past 82 years, or 2.6 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941. For more on the computational adventures of Konrad Zuse please see <a href="https://softwarephysics.blogspot.com/2007/09/so-you-want-to-be-computer-scientist.html">So You Want To Be A Computer Scientist?</a>. More importantly, all of this software evolution has occurred within a single human lifetime and is well documented. So during a typical 40-year IT career of 1.26 billion seconds, one should expect to see some great changes take place as software rapidly evolves. In fact, all IT professionals find that they have to constantly retrain themselves to remain economically viable in the profession in order to keep up with the frantic pace of software evolution. Job insecurity due to technical obsolescence has always added to the daily mayhem of life in IT, especially for those supporting "legacy" software for a corporation. So as an IT professional, not only will you gain an appreciation for geological Deep Time, but you will also live through Deep Time as you observe software rapidly evolving during your career. To sample what might yet come, let us take a look at how software and hardware have coevolved over the past 2.6 billion seconds.
<br /><br />
<strong>SoftwarePaleontology</strong><br /> Since the very beginning, the architecture of software has evolved through the Darwinian processes of inheritance, innovation and natural selection and has followed a path very similar to the path followed by the carbon-based living things on the Earth. I believe this has been due to what evolutionary biologists call convergence. For example, the concept of the eye has independently evolved at least 40 different times in the past 600 million years, and there are many examples of “living fossils” showing the evolutionary path. In particular, the camera-like structures of the human eye and the eye of an octopus are nearly identical, even though each structure evolved totally independently of the other. <center><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHZyB5H2wilEU4NLsEMqadW-1V-fMbD3rsJ_wwl1EvOua1FML13xo9M-trHyabnRf9NJ7adpfNjtkQgbkgWZR5lOfB7okqqoR4QoCxqrDEmOE17rLrBDtWDJnFG8cZpIRf7W_dXg/s1600/eye.jpg" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHZyB5H2wilEU4NLsEMqadW-1V-fMbD3rsJ_wwl1EvOua1FML13xo9M-trHyabnRf9NJ7adpfNjtkQgbkgWZR5lOfB7okqqoR4QoCxqrDEmOE17rLrBDtWDJnFG8cZpIRf7W_dXg/s320/eye.jpg" /></a></center><p style="text-align: center; line-height:140%; font-size:120%">Figure 16 - The eye of a human and the eye of an octopus are nearly identical in structure, but evolved totally independently of each other. As Daniel Dennett pointed out, there are only a certain number of Good Tricks in Design Space and natural selection will drive different lines of descent towards them.</p><p style="line-height:140%; font-size:120%"> <a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPmtcKRXNR7iuR_Cjjm3mxHDb6CRzphDj8D7kx32N3_DFrl3vP3qTvbwl_3Wdrtl4UKrRH2z14PO2_FeDKFGcR3D4-XRyzJFT42qq0qovJCWwDGjFemMSuUP1XbutJmPwIdtZkUg/s1600/Eye.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 209px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPmtcKRXNR7iuR_Cjjm3mxHDb6CRzphDj8D7kx32N3_DFrl3vP3qTvbwl_3Wdrtl4UKrRH2z14PO2_FeDKFGcR3D4-XRyzJFT42qq0qovJCWwDGjFemMSuUP1XbutJmPwIdtZkUg/s400/Eye.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5683195401227196530" /></a><p style="text-align: center; line-height:140%; font-size:120%">Figure 17 – There are many living fossils that have left behind signposts along the trail to the modern camera-like eye. Notice that the human-like eye on the far right is really that of an octopus (click to enlarge).</p><p style="line-height:140%; font-size:120%"> An excellent treatment of the significance that convergence has played in the evolutionary history of life on Earth, and possibly beyond, can be found in <em>Life’s Solution</em> (2003) by Simon Conway Morris. The convergent evolution of eyes by carbon-based life on the Earth was driven by the hardware fact of life that the Earth is awash in solar photons. <br /><br /> Programmers and living things both have to deal with the second law of thermodynamics and nonlinearity, and there are only a few optimal solutions. Programmers try new development techniques, and the successful techniques tend to survive and spread throughout the IT community, while the less successful techniques are slowly discarded. Over time, the population distribution of software techniques changes.
As with the evolution of living things on Earth, the evolution of software has been greatly affected by the physical environment, or hardware, upon which it ran. Just as the Earth has not always been as it is today, the same goes for computing hardware. The evolution of software has been primarily affected by two things - CPU speed and memory size. As I mentioned in <a href="https://softwarephysics.blogspot.com/2007/09/so-you-want-to-be-computer-scientist.html">So You Want To Be A Computer Scientist?</a>, the speed and memory size of computers have both increased by about a factor of a billion since Konrad Zuse built the Z3 in the spring of 1941, and the rapid advances in both and the dramatic drop in their costs have shaped the evolutionary history of software greatly. <br /><br /> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwt8bUbdVeVW8rfKXF5_jNPDtlhRcsHcYLRd25Qqv6SqHtDbvaT6tg0swtPqwWGsJY6IvYLgEZLkC8vl1LCF0fsoDg70Oq-AZxPTwJvuTNf6iA2T0iu7nCEne3nS-HTwGOsK4-7w/s1600-h/geotime3.gif"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 247px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwt8bUbdVeVW8rfKXF5_jNPDtlhRcsHcYLRd25Qqv6SqHtDbvaT6tg0swtPqwWGsJY6IvYLgEZLkC8vl1LCF0fsoDg70Oq-AZxPTwJvuTNf6iA2T0iu7nCEne3nS-HTwGOsK4-7w/s320/geotime3.gif" border="0" alt=""id="BLOGGER_PHOTO_ID_5310462860575076114" /></a><p style="text-align: center; line-height:140%; font-size:120%">Figure 18 - The Geological Time Scale of the Phanerozoic Eon is divided into the Paleozoic, Mesozoic and Cenozoic Eras by two great mass extinctions - click to enlarge. </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-lvno7sbgCyC3C5njfO6heP-5EpTiyuezMdI3EV3e3TW0AbstthiJphzYVZWbmiEOIbXybvzTCTzX4A7AOY8nZ6mnVdJ8x9AOpa1Za92cw7cUz00Q_VYtbgEINjdRyfOHHiR7yQ/s1600/Paleozoic.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-lvno7sbgCyC3C5njfO6heP-5EpTiyuezMdI3EV3e3TW0AbstthiJphzYVZWbmiEOIbXybvzTCTzX4A7AOY8nZ6mnVdJ8x9AOpa1Za92cw7cUz00Q_VYtbgEINjdRyfOHHiR7yQ/s640/Paleozoic.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 19 - Life in the Paleozoic, before the Permian-Triassic mass extinction, was far different than life in the Mesozoic. </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjT8nS-DWen_SZlvujzdiEnSGmrNvAMDDtmuV2ReAdpWVN8Jq_fGLKOdDB5MFZPH48cSsfVdBP2KllyOyw2XNUipTO2Dz5tEYz5F0frEPyh-5SHuEujxUGJWCae9xcQg2UV2A6hGw/s1600/Mesozoic.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjT8nS-DWen_SZlvujzdiEnSGmrNvAMDDtmuV2ReAdpWVN8Jq_fGLKOdDB5MFZPH48cSsfVdBP2KllyOyw2XNUipTO2Dz5tEYz5F0frEPyh-5SHuEujxUGJWCae9xcQg2UV2A6hGw/s640/Mesozoic.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 20 - In the Mesozoic the dinosaurs ruled after the Permian-Triassic mass extinction, but small mammals were also present. 
</p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2hgOMoMu7R9Ysoq4M19gDiRhyls2IUsYot2SvURA2halNeHoAFX7rEgDu0buUAyWo-DnRJq1lFOCaT0WA0hEYPhPmXq5FgOaZzobfXZXe3qXREsunuVNIPvUaPCyqVzDIpFT9ow/s1600/Cenozoic.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2hgOMoMu7R9Ysoq4M19gDiRhyls2IUsYot2SvURA2halNeHoAFX7rEgDu0buUAyWo-DnRJq1lFOCaT0WA0hEYPhPmXq5FgOaZzobfXZXe3qXREsunuVNIPvUaPCyqVzDIpFT9ow/s640/Cenozoic.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 21 - Life in the Cenozoic, following the Cretaceous-Tertiary mass extinction, has so far been dominated by the mammals. This will likely soon change as software becomes the dominant form of self-replicating information on the planet, ushering in a new geological Era that has yet to be named. </p><p style="line-height:140%; font-size:120%"> Currently, it is thought that these mass extinctions arise from two different sources. One type of mass extinction is caused by the impact of a large comet or asteroid and has become familiar to the general public as the Cretaceous-Tertiary (K-T) mass extinction that wiped out the dinosaurs at the Mesozoic-Cenozoic boundary 65 million years ago. An impacting mass extinction is characterized by a rapid extinction of species followed by a corresponding rapid recovery in a matter of a few million years. An impacting mass extinction is like turning off a light switch. Up until the day the impactor hits the Earth, everything is fine and the Earth has a rich biosphere. After the impactor hits the Earth, the light switch turns off and there is a dramatic loss of species diversity. However, the effects of the incoming comet or asteroid are geologically brief and the Earth’s environment returns to normal in a few decades or less, so within a few million years or so, new species rapidly evolve to replace those that were lost. <br /><br /> The other kind of mass extinction is thought to arise from an overabundance of greenhouse gases and a dramatic drop in oxygen levels and is typified by the Permian-Triassic (P-T) mass extinction at the Paleozoic-Mesozoic boundary 250 million years ago. Greenhouse extinctions are thought to be caused by periodic flood basalts, like the Siberian Traps flood basalt of the late Permian. A flood basalt begins as a huge plume of magma several hundred miles below the surface of the Earth. The plume slowly rises and eventually breaks the surface of the Earth, forming a huge flood basalt that spills basaltic lava over an area of millions of square miles to a depth of several thousand feet. Huge quantities of carbon dioxide bubble out of the magma over a period of several hundreds of thousands of years and greatly increase the ability of the Earth’s atmosphere to trap heat from the Sun. For example, during the Permian-Triassic mass extinction, carbon dioxide levels may have reached a level as high as 3,000 ppm, much higher than the current 420 ppm. Most of the Earth warms to tropical levels with little temperature difference between the equator and the poles. This shuts down the thermohaline conveyor that drives the ocean currents.
<br /><br />
<strong>The Evolution of Software Over the Past 2.6 Billion Seconds Has Also Been Heavily Influenced by Mass Extinctions</strong><br /> Similarly, IT experienced its own devastating mass extinction during the early 1990s, when an environmental change took us from the Age of the Mainframes to the Distributed Computing Platform. Suddenly mainframe Cobol/CICS and Cobol/DB2 programmers were no longer in demand. Instead, everybody wanted C and C++ programmers who worked on cheap Unix servers. This was a very traumatic time for IT professionals. Of course, the mainframe programmers never went entirely extinct, but their numbers were greatly reduced. The number of IT workers in mainframe Operations also dramatically decreased, while at the same time the demand for Operations people familiar with the Unix-based software of the new Distributed Computing Platform skyrocketed. This was around 1992, and at the time I was a mainframe programmer used to working with IBM's MVS and VM/CMS operating systems, writing Cobol, PL-1 and REXX code using DB2 databases. So I had to teach myself Unix, C and C++ to survive. In order to do that, I bought my very first PC, an 80-386 machine running Windows 3.0 with 5 MB of memory and a 100 MB hard disk for $1500. I also bought the Microsoft C7 C/C++ compiler for something like $300. And that was in 1992 dollars! One reason for the added expense was that there were no Internet downloads in those days because there were no high-speed ISPs. PCs did not have CD/DVD drives either, so the software came on 33 diskettes, each with a 1.44 MB capacity, that had to be loaded one diskette at a time in sequence. The software also came with about a foot of manuals describing the C++ class library on very thin paper. Indeed, suddenly finding yourself to be obsolete is not a pleasant thing and calls for drastic action. <br /><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXLfD6TAr85IpMAmEkGLggMJCPRD9azqIG_aZhhDBXdRXdwvz6DEUmfWoyZeFtRljuY990F6FI4F0MrNCxbRQdEORcaP1k4OwVfa7GJGl_pJl2KeIQv0tGagVBexSHcpOyoIE4ig/s1600-h/mainframe360.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 255px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXLfD6TAr85IpMAmEkGLggMJCPRD9azqIG_aZhhDBXdRXdwvz6DEUmfWoyZeFtRljuY990F6FI4F0MrNCxbRQdEORcaP1k4OwVfa7GJGl_pJl2KeIQv0tGagVBexSHcpOyoIE4ig/s320/mainframe360.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5310464143583384690" /></a><p style="text-align: center; line-height:140%; font-size:120%"> Figure 22 – An IBM OS/360 mainframe from 1964. The IBM OS/360 mainframe caused commercial software to explode within corporations and gave IT professionals the hardware platform that they were waiting for.
</p><p style="line-height:140%; font-size:120%"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlKlyZZUBa8edCTdgm7TCqhI3Wy6A3H9SEdgOvlyGkfup-W454lbb5yEmd44uEBOXuNbM1PODUztT8o021FN0sovNOLxAtpaJb4vm9nNEPizbvH0dfzOekbRtJLmkXYAYJJ_NeXA/s1600/Servers.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlKlyZZUBa8edCTdgm7TCqhI3Wy6A3H9SEdgOvlyGkfup-W454lbb5yEmd44uEBOXuNbM1PODUztT8o021FN0sovNOLxAtpaJb4vm9nNEPizbvH0dfzOekbRtJLmkXYAYJJ_NeXA/s640/Servers.jpg" /></a></div><p style="text-align: center; line-height:140%; font-size:120%"> Figure 23 – The Distributed Computing Platform replaced a great deal of mainframe computing with a large number of cheap self-contained servers running software that tied the servers together. </p><p style="line-height:140%; font-size:120%">The problem with the Distributed Computing Platform was that although the server hardware was cheaper than mainframe hardware, the granular nature of the Distributed Computing Platform meant that it created a very labor-intensive infrastructure that was difficult to operate and support, and as the level of Internet traffic dramatically expanded over the past 20 years, the Distributed Computing Platform became nearly impossible to support. For example, I worked in Middleware Operations for the Discover credit card company from 2002 - 2016, and during that time our Distributed Computing Platform infrastructure exploded by a factor of at least a hundred. It finally became so complex and convoluted that we could barely keep it all running, and we really did not even have enough change window time to properly apply maintenance to it as I described in <a href="https://softwarephysics.blogspot.com/2012/02/new-introduction-to-softwarephysics.html">The Limitations of Darwinian Systems</a>. Clearly, the Distributed Computing Platform was not sustainable, and an alternative was desperately needed. This is because the Distributed Computing Platform was IT's first shot at running software on a multicellular architecture, as I described in <a href="https://softwarephysics.blogspot.com/2014/05/new-introduction-to-softwarephysics.html">Software Embryogenesis</a>. But the Distributed Computing Platform simply had too many moving parts, all working together independently on their own, to fully embrace the advantages of a multicellular organization. In many ways, the Distributed Computing Platform was much like the ancient stromatolites that tried to reap the advantages of a multicellular organism by simply tying together the diverse interests of multiple layers of prokaryotic cyanobacteria into a "multicellular organism" that seemingly benefited the interests of all. 
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuxjlKrfGh_qBbC58l_FXbZqYqjiBwMlUpw1ajP5lulKLPX2NZAyeKHYnv8AIMVl3lrE3E1He1GjsDA8gZyAXxZf7pM01fHC-a8GEpHfT1Yc3959x7miBBdFSx1pGKkGOYFuggfA/s1600/stromatolites.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuxjlKrfGh_qBbC58l_FXbZqYqjiBwMlUpw1ajP5lulKLPX2NZAyeKHYnv8AIMVl3lrE3E1He1GjsDA8gZyAXxZf7pM01fHC-a8GEpHfT1Yc3959x7miBBdFSx1pGKkGOYFuggfA/s640/stromatolites.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 24 – Stromatolites are still found today in Shark Bay, Australia. They consist of mounds of alternating layers of prokaryotic bacteria. </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2wjNrVLCj-ed8O1oCeS-e0xmvUa5Wrk8eTcmBm41J9coyeZgk9W_gRM7I6irgW7UP0CfQ7kFuwmSNCWKKw69LP60Yf76o-k_tFy9SF0ZHP9nyLhF2tPWPmlr2DZ1ufNi7299ptA/s1600/StromatalieFossil.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2wjNrVLCj-ed8O1oCeS-e0xmvUa5Wrk8eTcmBm41J9coyeZgk9W_gRM7I6irgW7UP0CfQ7kFuwmSNCWKKw69LP60Yf76o-k_tFy9SF0ZHP9nyLhF2tPWPmlr2DZ1ufNi7299ptA/s640/StromatalieFossil.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 25 – The cross-section of an ancient stromatolite displays the multiple layers of prokaryotic cyanobacteria that came together for their own mutual self-survival to form a primitive "multicellular" organism that seemingly benefited the interests of all. The servers and software of the Distributed Computing Platform were very much like the primitive stromatolites. </p><p style="line-height:140%; font-size:120%"> The collapse of the Distributed Computing Platform under its own weight brought on a second mass extinction beginning in 2010 with the rise of Cloud Computing. <br /><br /> <strong>The Rise of Cloud Computing Causes the Second Great Software Mass Extinction</strong><br /> The successor architecture to the Distributed Computing Platform was the Cloud Computing Platform, which is usually displayed as a series of services all stacked into levels. The highest level, SaaS (Software as a Service), runs the common third-party office software like Microsoft Office 365 and email. The second level, PaaS (Platform as a Service), is where the custom business software resides, and the lowest level, IaaS (Infrastructure as a Service), provides for an abstract tier of virtual servers and other resources that automatically scale with varying load levels. From an Applications Development standpoint, the PaaS layer is the most interesting because that is where Applications Development teams will be installing the custom application software used to run the business and also to run the high-volume corporate websites that their customers use. Currently, that custom application software is installed into the middleware that is running on the Unix servers of the Distributed Computing Platform. The PaaS level will be replacing the middleware software, such as the Apache web servers and the J2EE Application servers, like WebSphere, WebLogic and JBoss, that currently do that.
For Operations, the IaaS level, and to a large extent, the PaaS level too are of most interest because those levels will be replacing the middleware and other support software running on hundreds or thousands of individual self-contained servers. The Cloud architecture can be run on a company's own hardware, or it can be run on a timesharing basis on the hardware at Amazon, Microsoft, IBM or other Cloud providers, using the Cloud software that the Cloud providers market. <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGbxAaHeWjSbdJWUGBFuAYVQdZh22yCaP8bA4dUXFJY9bh5B-gySGGlYroTDC2iI7iobhtTrW6NkcviAZGYNZznKWT6dMoGMPne_buzI-YIGAR9T3pBGy5xSL2oSdgOF95vfWBFw/s1600/Cloud-Stack-Diagram.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGbxAaHeWjSbdJWUGBFuAYVQdZh22yCaP8bA4dUXFJY9bh5B-gySGGlYroTDC2iI7iobhtTrW6NkcviAZGYNZznKWT6dMoGMPne_buzI-YIGAR9T3pBGy5xSL2oSdgOF95vfWBFw/s640/Cloud-Stack-Diagram.png" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 26 – Cloud Computing returns us to the timesharing days of the 1960s and 1970s by viewing everything as a service. </p><p style="line-height:140%; font-size:120%">Basically, the Cloud Computing Platform is based on two defining characteristics: <br /><br />1. Returning to the timesharing days of the 1960s and 1970s when many organizations could not afford to support a mainframe infrastructure of their own. <br /><br />2. Taking the multicellular architecture of the Distributed Computing Platform to the next level by using Cloud Platform software to produce a full-blown multicellular organism, and even higher, by introducing the self-organizing behaviors of the social insects like ants and bees. <br /><br /> For more on this see <a href="https://softwarephysics.blogspot.com/2016/03/new-introduction-to-softwarephysics.html">Cloud Computing and the Coming Software Mass Extinction</a> and <a href="https://softwarephysics.blogspot.com/2020/05/new-introduction-to-softwarephysics_15.html">The Origin and Evolution of Cloud Computing - Software Moves From the Sea to the Land and Back Again</a>. <br /><br /> <strong>The Geological Time Scale of Software Evolution</strong><br /> Similarly, the evolutionary history of software over the past 2.6 billion seconds has also been greatly affected by a series of mass extinctions, which allow us to also subdivide the evolutionary history of software into several long computing eras, like the geological eras listed above. As with the evolution of the biosphere over the past 541 million years, we shall see that these mass extinctions of software have also been caused by several catastrophic events in IT that were separated by long periods of slow software evolution through uniformitarianism. Like the evolution of carbon-based life on the Earth, some of these software mass extinctions were caused by some drastic environmental hardware changes, while others were simply caused by drastic changes in the philosophy of software development thought. <br /><br /> <strong>Unstructured Period (1941 – 1972)</strong> <br />During the Unstructured Period, programs were simple monolithic structures with lots of GOTO statements, no subroutines, no indentation of code, and very few comment statements. 
The machine code programs of the 1940s evolved into the assembler programs of the 1950s and the compiled programs of the 1960s, with FORTRAN appearing in 1957 and COBOL in 1959. These programs were very similar to the early prokaryotic bacteria that appeared over 4,000 million years ago on Earth and lacked internal structure. Bacteria essentially consist of a tough outer cell wall enclosing an inner cell membrane and contain a minimum of internal structure. The cell wall is composed of a tough molecule called peptidoglycan, which is composed of tightly bound amino sugars and amino acids. The cell membrane is composed of phospholipids and proteins, which will be described later in this posting. The DNA within bacteria generally floats freely as a large loop of DNA, and their ribosomes, used to help transcribe DNA into proteins, float freely as well and are not attached to membranes called the rough endoplasmic reticulum. The chief advantage of bacteria is their simple design and ability to thrive and rapidly reproduce even in very challenging environments, like little AK-47s that still manage to work in environments where modern tanks fail. Just as bacteria still flourish today, some unstructured programs are still in production.<br /> <br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6vejcS7zSIafVNFcOgk35OdW5fzzZB13Bb9H6zEiPCl3gPuMl6dtoQI_BA6lZ6suiur4f1ODhiIo8TvMv0fqyuex0Gl5OLmEvdl84AGBpMfmeV0wgWk117ghX68_AluLHZSBYYA/s1600-h/Prokaryote6.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 202px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6vejcS7zSIafVNFcOgk35OdW5fzzZB13Bb9H6zEiPCl3gPuMl6dtoQI_BA6lZ6suiur4f1ODhiIo8TvMv0fqyuex0Gl5OLmEvdl84AGBpMfmeV0wgWk117ghX68_AluLHZSBYYA/s320/Prokaryote6.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5305079460736531378" /></a><p style="text-align: center; line-height:140%; font-size:120%">Figure 27 – A simple prokaryotic bacterium with little internal structure (click to enlarge)</p><p style="line-height:140%; font-size:120%">Below is a code snippet from a fossil FORTRAN program listed in a book published in 1969 showing little internal structure. Notice the use of GOTO statements to skip around in the code. Later this would become known as the infamous “spaghetti code” of the Unstructured Period that was such a joy to support.<br /><strong><br />30 DO 50 I=1,NPTS<br />31 IF (MODE) 32, 37, 39<br />32 IF (Y(I)) 35, 37, 33<br />33 WEIGHT(I) = 1. / Y(I)<br /> GO TO 41<br />35 WEIGHT(I) = 1. / (-1*Y(I))<br /> GO TO 41<br />37 WEIGHT(I) = 1.<br /> GO TO 41<br />39 WEIGHT(I) = 1. / SIGMA(I)**2<br />41 SUM = SUM + WEIGHT(I)<br /> YMEAN = WEIGHT(I) * FCTN(X, I, J, M)<br /> DO 44 J = 1, NTERMS<br />44 XMEAN(J) = XMEAN(J) + WEIGHT(I) * FCTN(X, I, J, M)<br />50 CONTINUE <br /></strong><br />The primitive nature of software in the Unstructured Period was largely due to the primitive nature of the hardware upon which it ran. Figure 33 shows an IBM System/360 from 1964 – notice the operator at the teletype feeding commands to the nearby operator console, the distant tape drives, and the punch card reader in the mid-ground. Such a machine had about 1 MB of memory, less than 1/8000 of the memory of a current cheap $250 PC (8 GB is 8,192 MB), and a matching anemic processing speed.
For non-IT readers let me remind all that:<br /><br />1 KB = 1 kilobyte = 2<sup>10</sup> = 1024 bytes or about 1,000 bytes<br />1 MB = 1 megabyte = 1024 x 1024 = 1,048,576 bytes or about a million bytes<br />1 GB = 1 gigabyte = 1024 x 1024 x 1024 = 1,073,741,824 bytes or about a billion bytes<br /><br />One byte of memory can store one ASCII text character like an “A” and two bytes can store a small integer in the range of -32,768 to +32,767. When I first started programming in 1972, we thought in terms of kilobytes, then megabytes, and now gigabytes. Data science people now think in terms of many terabytes - 1 TB = 1024 GB.<br /><br />Software was input via punched cards and the output was printed on fan-fold paper. Compiled code could be stored on tape or very expensive disk drives if you could afford them, but any changes to code were always made via punched cards, and because you were only allowed perhaps 128K – 256K of memory for your job, programs had to be relatively small, so simple unstructured code ruled the day. Like the life cycle of a single-celled bacterium, the compiled and linked code for your program was loaded into the memory of the computer at execution time and did its thing in a batch mode, until it completed successfully or abended and died. At the end of the run, the computer’s memory was released for the next program to be run and your program ceased to exist. <center><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHSnqMzdWppdjNhL7Pi9BGuonbhFE7U5j-ZHVBCPW3B-Yci0M6-H9D7rp5Gg-CrDS6bVVgaF5iG9yOuTV4hqQpWWT8Mcs49mae5Dgrr4W2Yv0QJW-BasdT4KnLdQREpguxfGLzDQ/s1600/IBM029.jpg" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHSnqMzdWppdjNhL7Pi9BGuonbhFE7U5j-ZHVBCPW3B-Yci0M6-H9D7rp5Gg-CrDS6bVVgaF5iG9yOuTV4hqQpWWT8Mcs49mae5Dgrr4W2Yv0QJW-BasdT4KnLdQREpguxfGLzDQ/s320/IBM029.jpg" /></a><p style="text-align: center; line-height:140%; font-size:120%"> Figure 28 - An IBM 029 keypunch machine from the 1960s Unstructured Period of software. </p><p style="line-height:140%; font-size:120%"><center><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg76FV8_Ipsot_X3LLG-IIEe5SGGsr761MPcOnSFGnUxGjw_tr-VCJhgz70YmH-caBPQruaeZOtL8TLlxvrLdcHWbg62pyWr_e1Kx8mKQPiLwwV6M7cG40z3fC_ytWHeIA9LJpMdA/s1600/cards.jpg" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg76FV8_Ipsot_X3LLG-IIEe5SGGsr761MPcOnSFGnUxGjw_tr-VCJhgz70YmH-caBPQruaeZOtL8TLlxvrLdcHWbg62pyWr_e1Kx8mKQPiLwwV6M7cG40z3fC_ytWHeIA9LJpMdA/s320/cards.jpg" /></a><p style="text-align: center; line-height:140%; font-size:120%"> Figure 29 - Each card could hold a maximum of 80 bytes. Normally, one line of code was punched onto each card.
</p><p style="line-height:140%; font-size:120%"></center> <center><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmShVME-trJgnQKLiUVdxlha1QbRLZCxJtfPTc0L0ZVsGZUU6yl8axYVthjNu0PpF3-64GNE8GnHLOpmkrDov0t7cwKt-GnDl1DsAUNzp1FRZopo-dKSXVhV88G3ynB57V8cBCxg/s1600/CardDeckBox.jpg" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgmShVME-trJgnQKLiUVdxlha1QbRLZCxJtfPTc0L0ZVsGZUU6yl8axYVthjNu0PpF3-64GNE8GnHLOpmkrDov0t7cwKt-GnDl1DsAUNzp1FRZopo-dKSXVhV88G3ynB57V8cBCxg/s320/CardDeckBox.jpg" /></a><p style="text-align: center; line-height:140%; font-size:120%"> Figure 30 - The cards for a program were held together into a deck with a rubber band, or for very large programs, the deck was held in a special cardboard box that originally housed blank cards. Many times the data cards for a run followed the cards containing the source code for a program. The program was compiled and linked in two steps of the run and then the generated executable file processed the data cards that followed in the deck. </p><p style="line-height:140%; font-size:120%"></center> </center><center><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSyFx6XAzVNr5ITldjnvvyk_33jOX_qCZYqkwu8xJO9Gf_gpRi3IGI32aXO3OcgCIKSsPyHFIB_q5Ryl8vqyMkvEp_awQ28O2nFV8tGc7hrD_GnW4250amkWiyh8NJnOi_dYmvrg/s1600/ibm7094.jpg" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSyFx6XAzVNr5ITldjnvvyk_33jOX_qCZYqkwu8xJO9Gf_gpRi3IGI32aXO3OcgCIKSsPyHFIB_q5Ryl8vqyMkvEp_awQ28O2nFV8tGc7hrD_GnW4250amkWiyh8NJnOi_dYmvrg/s320/ibm7094.jpg" /></a></center><p style="text-align: center; line-height:140%; font-size:120%"> Figure 31 - To run a job, the cards in a deck were fed into a card reader, as shown on the left above, to be compiled, linked, and executed by a million-dollar mainframe computer. In the above figure, the mainframe is located directly behind the card reader. </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSoEt5OApKTZnOBHV9ZYGaz7QxTVu77jW9RRF-kOesTGWsVcNe4Qy-zNyq9-An8yRcRVufPim823io9eA9Ad4eX2a-HBy5mrxitY-wKzC4Qrbecc8b_jeKjSY27Yb4YJjYI02Y4Q/s1600/LinePrinter.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgSoEt5OApKTZnOBHV9ZYGaz7QxTVu77jW9RRF-kOesTGWsVcNe4Qy-zNyq9-An8yRcRVufPim823io9eA9Ad4eX2a-HBy5mrxitY-wKzC4Qrbecc8b_jeKjSY27Yb4YJjYI02Y4Q/s640/LinePrinter.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 32 - The output of programs was printed on fan-folded paper by a line printer. </p><p style="line-height:140%; font-size:120%"> However, one should not discount the great advances that were made by the early bacteria billions of years ago or by the unstructured code from the computer systems of the 1950s and 1960s. These were both very important formative periods in the evolution of life and of software on Earth, and examples of both can still be found in great quantities today. For example, it is estimated that about 50% of the Earth’s biomass is still composed of simple bacteria. 
Your body consists of about 100 trillion cells, but you also harbor about 10 times that number of bacterial cells that are in a parasitic/symbiotic relationship with the “other” cells of your body and perform many of the necessary biochemical functions required to keep you alive, such as aiding with the digestion of food. Your gut contains about 3.5 pounds of active bacteria and about 50% of the dry weight of your feces is bacteria, so in reality, we are all composed of about 90% bacteria with only 10% of our cells being “normal” human cells.<br /><br />All of the fundamental biochemical pathways used by living things to create large complex organic molecules from smaller monomers, or to break those large organic molecules back down into simple monomers, were first developed by bacteria billions of years ago. For example, bacteria were the first forms of life to develop the biochemical pathways that turn carbon dioxide, water, and the nitrogen in the air into the organic molecules necessary for life – sugars, lipids, amino acids, and the nucleotides that form RNA and DNA. They also developed the biochemical pathways to replicate DNA and transcribe DNA into proteins, and to form complex structures such as cell walls and cell membranes from sugars, amino acids, proteins, and phospholipids. Additionally, bacteria invented the Krebs cycle to break these large macromolecules back down to monomers for reuse and to release and store energy by transforming ADP to ATP. To expand upon this, we will see in <a href="https://softwarephysics.blogspot.com/2008/07/introduction-to-softwarephysics-ii.html">Software Symbiogenesis</a> how Lynn Margulis proposed that all the innovations of large macroscopic forms of life have actually been acquired from the highly productive experiments of bacterial life forms.<br /><br />Similarly, all of the fundamental coding techniques of IT at the line of code level were first developed in the Unstructured Period of the 1950s and 1960s, such as the use of complex variable names, arrays, nested loops, loop counters, if-then-else logic, list processing with pointers, I/O blocking, bubble sorts, etc. When I was in Middleware Operations for Discover, I did not do much coding. However, I did write a large number of Unix shell scripts to help make my job easier. These Unix shell scripts were small unstructured programs in the range of 10 – 50 lines of code, and although they were quite primitive and easy to write, they had a huge economic pay-off for me. Many times, a simple 20-line Unix shell script that took less than an hour to write would provide as much value to me as the code behind the IBM WebSphere Console, which I imagine probably cost IBM about $10 - $100 million to develop and came to several hundred thousand lines of code. For more on that see <a href="https://softwarephysics.blogspot.com/2010/04/new-introduction-to-softwarephysics.html">MISE in the Attic</a>.
So if you add up all the little unstructured Unix shell scripts, DOS .bat files, edit macros, Excel spreadsheet macros, Word macros, etc., I bet that at least 50% of the software in the Software Universe is still unstructured code.<br /><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXLfD6TAr85IpMAmEkGLggMJCPRD9azqIG_aZhhDBXdRXdwvz6DEUmfWoyZeFtRljuY990F6FI4F0MrNCxbRQdEORcaP1k4OwVfa7GJGl_pJl2KeIQv0tGagVBexSHcpOyoIE4ig/s1600-h/mainframe360.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 255px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXLfD6TAr85IpMAmEkGLggMJCPRD9azqIG_aZhhDBXdRXdwvz6DEUmfWoyZeFtRljuY990F6FI4F0MrNCxbRQdEORcaP1k4OwVfa7GJGl_pJl2KeIQv0tGagVBexSHcpOyoIE4ig/s320/mainframe360.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5310464143583384690" /></a><p style="text-align: center; line-height:140%; font-size:120%"> Figure 33 – An IBM System/360 mainframe from 1964. The IBM System/360 mainframe caused commercial software to explode within corporations during the Unstructured Period and gave IT professionals the hardware platform that they had been waiting for. </p><p style="line-height:140%; font-size:120%"></p><p style="line-height:140%; font-size:120%"><strong>Structured Period (1972 – 1992)</strong><br />The increasing availability of computers with more memory and faster CPUs allowed for much larger programs to be written in the 1970s, but unstructured code became much harder to maintain as it grew in size, so the need for internal structure became readily apparent. Plus, around this time code began to be entered via terminals using full-screen editors, rather than on punched cards, which made it easier to view larger sections of code as you changed it. <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhA0hF7vu-Kl4fP3F6uLSJMnDaH3WZ0Tmw1TC74R8k6U1oc5sbVvBwXd8uwZVGGDuALPNcvskcC2UpH-yPsJ6UHHBQfvMTjA_w_t2gZ7zUAQshwYIYLsqEAwmhyphenhyphenyowoSZpJkybqeA/s1600/IBM3278.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhA0hF7vu-Kl4fP3F6uLSJMnDaH3WZ0Tmw1TC74R8k6U1oc5sbVvBwXd8uwZVGGDuALPNcvskcC2UpH-yPsJ6UHHBQfvMTjA_w_t2gZ7zUAQshwYIYLsqEAwmhyphenhyphenyowoSZpJkybqeA/s400/IBM3278.jpg" /></a></div><p style="text-align: center; line-height:140%; font-size:120%">Figure 34 - IBM 3278 terminals were connected to controllers that connected to IBM mainframes. The IBM 3278 terminals then ran interactive TSO sessions with the IBM mainframes.
The ISPF full-screen editor was then brought up under TSO after you logged into a TSO session.</p><p style="line-height:140%; font-size:120%"> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgY6IsbmWBcpOnqdETuMXK5L2jesDbVX2xEe0hbNuTHlXDxrgXDAcagc2myqrnmqoHW68EDoe0fZZ-QFJ01fTW-pUGPWMJto2_QL0Z_osG4WD3Py163lp4h7mBYhBDWecpQbdvbZw/s1600-h/mainframe.gif"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 168px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgY6IsbmWBcpOnqdETuMXK5L2jesDbVX2xEe0hbNuTHlXDxrgXDAcagc2myqrnmqoHW68EDoe0fZZ-QFJ01fTW-pUGPWMJto2_QL0Z_osG4WD3Py163lp4h7mBYhBDWecpQbdvbZw/s320/mainframe.gif" border="0" alt=""id="BLOGGER_PHOTO_ID_5310469206349810082" /></a><p style="text-align: center; line-height:140%; font-size:120%">Figure 35 – A mainframe with IBM 3278 CRT terminals attached (click to enlarge)</p><p style="line-height:140%; font-size:120%">In 1972, Dahl, Dijkstra, and Hoare published <em>Structured Programming</em>, in which they suggested that computer programs should have complex internal structure with no GOTO statements, lots of subroutines, indented code, and many comment statements. During the Structured Period, these structured programming techniques were adopted by the IT community, and the GOTO statements were replaced by subroutines, also known as functions(), and indented code with lots of internal structure, like the eukaryotic structure of modern cells that appeared about 1,500 million years ago. Eukaryotic cells are found in the bodies of all complex organisms from single-cell yeasts to you and me and divide up cell functions amongst a collection of organelles (subroutines), such as mitochondria, chloroplasts, Golgi bodies, and the endoplasmic reticulum. <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTDfSVezCkBhC6fcPMw9jJVCqWNtuogoRi8lJVRyU7qLy9y3rxc4OQ4LocTiw_XO0BFbK1KJfaK4lL6Nv5VPVKeRBaJxcXZ9NY092kR4cUE040CfC_jJ3OsoG2y0SZZQDbA5fKVQ/s1600-h/EukPlantAnimal.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 269px; height: 320px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTDfSVezCkBhC6fcPMw9jJVCqWNtuogoRi8lJVRyU7qLy9y3rxc4OQ4LocTiw_XO0BFbK1KJfaK4lL6Nv5VPVKeRBaJxcXZ9NY092kR4cUE040CfC_jJ3OsoG2y0SZZQDbA5fKVQ/s320/EukPlantAnimal.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5305080288437013442" /></a><p style="text-align: center; line-height:140%; font-size:120%">Figure 36 – Plants and animals are composed of eukaryotic cells with much internal structure (click to enlarge)</p><p style="line-height:140%; font-size:120%"> Figure 37 compares the simple internal structure of a typical prokaryotic bacterium with the internal structure of eukaryotic plant and animal cells. These eukaryotic cells could be simple single-celled plants and animals or they could be found within a much larger multicellular organism consisting of trillions of eukaryotic cells. Figure 37 is a bit deceiving, in that eukaryotic cells are huge cells that are more than 20 times larger in diameter than a typical prokaryotic bacterium with about 10,000 times the volume as shown in Figure 38. 
Because eukaryotic cells are so large, they have an internal cytoskeleton, composed of linear-shaped proteins that form filaments that act like a collection of tent poles, to hold up the huge cell membrane encircling the cell.<br /><br />Eukaryotic cells also have a great deal of internal structure, in the form of organelles, that are enclosed by internal cell membranes. Like the structured programs of the 1970s and 1980s, eukaryotic cells divide up functions amongst these organelles. These organelles include the nucleus to store and process the genes stored in DNA, mitochondria to perform the Krebs cycle to create ATP from carbohydrates, and chloroplasts in plants to produce energy-rich carbohydrates from water, carbon dioxide, and sunlight. <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbftgVlr5fjYpFzbbUMgTiRxj-oK-Sz04cyA5Uhxce10TBbHHD2cORt8lYD5zqDojbIT3UAXq0b2Eq-7-RxrL4WvORE_Dff0VloFjuK-BR2SHG3Ata4oxgEItg1MuNYlsPL5OVDA/s1600/ProkaryoteEukaryote2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbftgVlr5fjYpFzbbUMgTiRxj-oK-Sz04cyA5Uhxce10TBbHHD2cORt8lYD5zqDojbIT3UAXq0b2Eq-7-RxrL4WvORE_Dff0VloFjuK-BR2SHG3Ata4oxgEItg1MuNYlsPL5OVDA/s640/ProkaryoteEukaryote2.jpg" width="640" height="353" data-original-width="900" data-original-height="496" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 37 – The prokaryotic cell architecture of the bacteria and archaea is very simple and designed for rapid replication. Prokaryotic cells do not have a nucleus enclosing their DNA. Eukaryotic cells, on the other hand, store their DNA on chromosomes that are isolated in a cellular nucleus. Eukaryotic cells also have a very complex internal structure with a large number of organelles, or subroutine functions, that compartmentalize the functions of life within the eukaryotic cells.</p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiVYLRfcyJGB_Usg4OefWR5HtwQG-LafrIAsTgBknpILip9n8_Rh96M9pUJwx5lKlzURTFXG5udmVlJJEtN8_TxGiqA_b-kEuCTu4U6SF_fyNZA3Or2F8hQKmrj95DbaDt8ybYIFQ/s1600/ProkaryoteEukaryoteSize.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiVYLRfcyJGB_Usg4OefWR5HtwQG-LafrIAsTgBknpILip9n8_Rh96M9pUJwx5lKlzURTFXG5udmVlJJEtN8_TxGiqA_b-kEuCTu4U6SF_fyNZA3Or2F8hQKmrj95DbaDt8ybYIFQ/s640/ProkaryoteEukaryoteSize.jpg" width="640" height="480" data-original-width="728" data-original-height="546" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 38 – Not only are eukaryotic cells much more complicated than prokaryotic cells, but they are also HUGE!</p><p style="line-height:140%; font-size:120%"> The introduction of structured programming techniques in the early 1970s allowed programs to become much larger and much more complex by using many subroutines to divide up logic into self-contained organelles. This induced a mass extinction of unstructured programs, similar to the Permian-Triassic (P-T) mass extinction, or the Great Dying, 250 million years ago that divided the Paleozoic from the Mesozoic in the stratigraphic column and resulted in the extinction of about 90% of the species on Earth. 
As programmers began to write new code using the new structured programming paradigm, older code that was too difficult to rewrite in a structured manner remained as legacy “spaghetti code” that slowly fossilized over time in Production. Like the Permian-Triassic (P-T) mass extinction, the mass extinction of unstructured code in the 1970s was more like a greenhouse gas mass extinction than an impactor mass extinction because it spanned nearly an entire decade, and it was also a rather thorough mass extinction that wiped out most unstructured code in corporate systems.<br /><br />Below is a code snippet from a fossil COBOL program listed in a book published in 1975. Notice the structured programming use of indented code and calls to subroutines with PERFORM statements.<br /><strong><br />PROCEDURE DIVISION.<br /> OPEN INPUT FILE-1, FILE-2.<br /> PERFORM READ-FILE-1-RTN.<br /> PERFORM READ-FILE-2-RTN.<br /> PERFORM MATCH-CHECK UNTIL ACCT-NO OF REC-1 = HIGH-VALUES.<br /> CLOSE FILE-1, FILE-2.<br />MATCH-CHECK.<br /> IF ACCT-NO OF REC-1 < ACCT-NO OF REC-2<br /> PERFORM READ-FILE-1-RTN<br /> ELSE<br /> IF ACCT-NO OF REC-1 > ACCT-NO OF REC-2<br /> DISPLAY REC-2, 'NO MATCHING ACCT-NO'<br /> PERFORM READ-FILE-2-RTN<br /> ELSE<br /> PERFORM READ-FILE-2-RTN UNTIL ACCT-NO OF REC-1<br /> NOT EQUAL TO ACCT-NO OF REC-2<br /></strong><br />When I encountered my very first structured FORTRAN program in 1975, I diligently “fixed” the program by removing all the code indentations! You see, in those days, we rarely saw the entire program on a line printer listing because that took a compile of the program to produce and wasted valuable computer time, which was quite expensive back then. When I provided an estimate for a new system back then, I figured 25% for programming manpower, 25% for overhead charges from other IT groups on the project, and 50% for compiles. So instead of working with a listing of the program, we generally flipped through the card deck of the program to do debugging. Viewing indented code in a card deck can give you a real headache, so I just “fixed” the program by making sure all the code started in column 7 of the punch cards as it should!<br /><br /><strong>Object-Oriented Period (1992 – Present)</strong> <br />During the Object-Oriented Period, programmers adopted a multicellular organization for software, in which programs consisted of many instances of objects (cells) that were surrounded by membranes studded with exposed methods (membrane receptors).<br /><br />The following discussion might be a little hard to follow for readers with a biological background, but with little IT experience, so let me define a few key concepts with their biological equivalents.<br /><br /><strong>Class</strong> – Think of a class as a cell type. For example, the class Customer defines the cell type of Customer and describes how to store and manipulate the data for a Customer, like firstName, lastName, address, and accountBalance. For example, a program might instantiate a Customer object called “steveJohnston”.<br /><br /><strong>Object</strong> – Think of an object as a cell. A particular object will be an instance of a class. For example, the object steveJohnston might be an instance of the class Customer and will contain all the information about my particular account with a corporation.
At any given time, there could be many millions of Customer objects bouncing around in the IT infrastructure of a major corporation’s website.<br /><br /><strong>Instance</strong> – An instance is a particular object of a class. For example, the steveJohnston object would be a particular instance of the class Customer, just as a particular red blood cell would be a particular instance of the cell type RedBloodCell. Many times programmers will say things like “This instantiates the Customer class”, meaning it creates objects (cells) of the Customer class (cell type).<br /><br /><strong>Method</strong> – Think of a method() as a biochemical pathway. It is a series of programming steps or “lines of code” that produce a macroscopic change in the state of an object (cell). The Class for each type of object defines the data for the class, like firstName, lastName, address, and accountBalance, but it also defines the methods() that operate upon these data elements. Some methods() are public, while others are private. A public method() is like a receptor on the cell membrane of an object (cell). Other objects (cells) can send a message to the public methods of an object (cell) to cause it to execute a biochemical pathway within the object (cell). For example, steveJohnston.setFirstName(“Steve”) would send a message to the steveJohnston object instance (cell) of the Customer class (cell type) to have it execute the setFirstName method() to change the firstName of the object to “Steve”. The steveJohnston.getAccountBalance() method would return my current account balance with the corporation. Objects also have many internal private methods() that are biochemical pathways not exposed to the outside world. For example, the calculateAccountBalance() method could be an internal method that adds up all of my debits and credits and updates the accountBalance data element within the steveJohnston object, but this method cannot be called by other objects (cells) outside of the steveJohnston object (cell). External objects (cells) have to call steveJohnston.getAccountBalance() in order to find out my accountBalance.<br /><br /><strong>Line of Code</strong> – This is a single statement in a method() like:<br /><br />discountedTotalCost = (totalHours * ratePerHour) - costOfNormalOffset;<br /><br />Remember methods() are the equivalent of biochemical pathways and are composed of many lines of code, so each line of code is like a single step in a biochemical pathway. Similarly, each character in a line of code can be thought of as an atom, and each variable as an organic molecule. Each character can be in one of 256 ASCII quantum states defined by 8 quantized bits, with each bit in one of two quantum states “1” or “0”, which can also be characterized as 8 electrons in a spin-up ↑ or spin-down ↓ state:<br /><br />discountedTotalCost = (totalHours * ratePerHour) - costOfNormalOffset;<br /> <br />C = 01000011 = ↓ ↑ ↓ ↓ ↓ ↓ ↑ ↑<br />H = 01001000 = ↓ ↑ ↓ ↓ ↑ ↓ ↓ ↓<br />N = 01001110 = ↓ ↑ ↓ ↓ ↑ ↑ ↑ ↓<br />O = 01001111 = ↓ ↑ ↓ ↓ ↑ ↑ ↑ ↑ <br /><br /> Programmers have to assemble characters (atoms) into organic molecules (variables) to form the lines of code that define a method() (biochemical pathway). As in carbon-based biology, the slightest error in a method() can cause drastic and usually fatal consequences.
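<br /><br />Putting these definitions together, here is a minimal, hypothetical Java sketch of the Customer class (cell type) described above - not anyone's production code, just an illustration of private data elements, public methods() exposed like membrane receptors, and a private method() hidden inside the object (cell):<br /><strong><br />// A minimal, hypothetical sketch of the Customer class (cell type).<br />public class Customer {<br /> // Private data elements - sealed inside the cell membrane.<br /> private String firstName;<br /> private String lastName;<br /> private String address;<br /> private double accountBalance;<br /><br /> // Public methods() - receptors exposed on the cell membrane that<br /> // other objects (cells) can send messages to.<br /> public void setFirstName(String name) {<br /> this.firstName = name;<br /> }<br /><br /> public double getAccountBalance() {<br /> calculateAccountBalance(); // run an internal biochemical pathway<br /> return accountBalance;<br /> }<br /><br /> // A private method() - an internal biochemical pathway that outside<br /> // objects (cells) cannot call directly.<br /> private void calculateAccountBalance() {<br /> // add up all of the debits and credits (details omitted here)<br /> }<br />}<br /></strong><br />A program could then instantiate the class (create a cell) with Customer steveJohnston = new Customer(); and send it the message steveJohnston.setFirstName("Steve");. Notice how unforgiving the syntax is - drop a single semicolon or brace from the sketch above and the whole class fails to compile.<br /><br />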
Because there is nearly an infinite number of ways of writing code incorrectly and only a very few ways of writing code correctly, there is an equivalent of the second law of thermodynamics at work. This simulated second law of thermodynamics and the very nonlinear macroscopic effects that arise from small coding errors are why software architecture has converged upon <em>Life’s Solution</em>. With these concepts in place, we can now proceed with our comparison of the evolution of software and carbon-based life on Earth.<br /><br />Object-oriented programming actually started in the 1960s with Simula, the first language to use the concept of merging data and functions into objects defined by classes, but object-oriented programming did not really catch on until nearly 30 years later:<br /><br />1962 - 1965 Dahl and Nygaard develop the Simula language<br />1972 - Smalltalk language developed<br />1983 - 1985 Stroustrup develops C++<br />1995 - Sun announces Java at SunWorld '95<br /><br />Similarly, multicellular organisms first appeared about 900 million years ago, but it took about another 400 million years, until the Cambrian, for them to catch on as well. Multicellular organisms consist of huge numbers of cells that send messages between cells (objects) by secreting organic molecules that bind to the membrane receptors on other cells and induce those cells to execute exposed methods. For example, your body consists of about 100 trillion independently acting eukaryotic cells, and not a single cell in the collection knows that the other cells even exist. In an object-oriented manner, each cell just responds to the organic molecules that bind to its membrane receptors, and in turn, sends out its own set of chemical messages that bind to the membrane receptors of other cells in your body. When you wake to the sound of breaking glass in the middle of the night, your adrenal glands secrete the hormone adrenaline (epinephrine) into your bloodstream, which binds to the getScared() receptors on many of your cells. In an act of object-oriented polymorphism, your liver cells secrete glucose into your bloodstream, and your heart cells contract harder when their getScared() methods are called.<br /><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWfNVy723amTuu6uCwXfFXzZNLG9OmiWT3wfEh8wxQH2WU0-_kGcKEtbiw78Gkn0lDDp7dz8kKjIjb8wgBbZhfwYfUce8TlHV8a9rGOz0LiTvI-clpqE2dlToRbXHZboFWYwcb9A/s1600-h/tissue1.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 162px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWfNVy723amTuu6uCwXfFXzZNLG9OmiWT3wfEh8wxQH2WU0-_kGcKEtbiw78Gkn0lDDp7dz8kKjIjb8wgBbZhfwYfUce8TlHV8a9rGOz0LiTvI-clpqE2dlToRbXHZboFWYwcb9A/s320/tissue1.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5305092274655438322" /></a><p style="text-align: center; line-height:140%; font-size:120%">Figure 39 – Multicellular organisms consist of a large number of eukaryotic cells, or objects, all working together (click to enlarge)</p><p style="line-height:140%; font-size:120%">Object-oriented languages use the concepts of encapsulation, inheritance and polymorphism, which are very similar to the multicellular architecture of large organisms.<br /><br /><strong>Encapsulation</strong><br />Objects are contiguous locations in memory that are surrounded by a virtual membrane that cannot be penetrated by other code and are similar to an individual cell in a multicellular organism.
The internal contents of an object can only be changed via exposed methods (like subroutines), similar to the receptors on the cellular membranes of a multicellular organism. Each object is an instance of an object class, just as individual cells are instances of a cell type. For example, an individual red blood cell is an instance object of the red blood cell class.<br /><br /><strong>Inheritance</strong><br />Cells inherit methods in a hierarchy of human cell types, just as objects form a class hierarchy of inherited methods in a class library. For example, all cells have the metabolizeSugar() method, but only red blood cells have the makeHemoglobin() method. Below is a tiny portion of the 210 known cell types of the human body arranged in a class hierarchy.<br /><strong><br />Human Cell Classes<br />1. Epithelium<br />2. Connective Tissue<br /> A. Vascular Tissue<br /> a. Blood<br /> - Red Blood Cells <br /> b. Lymph<br /> B. Proper Connective Tissue<br />3. Muscle<br />4. Nerve<br /></strong><br /><strong>Polymorphism</strong><br />A chemical message sent from one class of cell instances can produce an abstract behavior in other cells. For example, adrenal glands can send the getScared() message to all cell instances in your body, but all of the cell instances getScared() in their own fashion. Liver cells release glucose and heart cells contract faster when their getScared() methods are called. Similarly, when you call the print() method of a report object, you get a report, and when you call the print() method of a map, you get a map. <p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRrz2OTDbKFL7dfaYmsmrgvrFTbq5h8zKb0q-guA8T8LRZ886Tk5VZoUZ3s2UhNyCgCTWMSpeC9KQkNuGLKBUEjNC4nJ9h-emL0fTh-QMjdwzSJKy29gTxY2YiM37umUrYhGlMtg/s1600/ObjectsTalk.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRrz2OTDbKFL7dfaYmsmrgvrFTbq5h8zKb0q-guA8T8LRZ886Tk5VZoUZ3s2UhNyCgCTWMSpeC9KQkNuGLKBUEjNC4nJ9h-emL0fTh-QMjdwzSJKy29gTxY2YiM37umUrYhGlMtg/s640/ObjectsTalk.png" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 40 – Objects are like cells in a multicellular organism that exchange messages with each other (click to enlarge)</p><p style="line-height:140%; font-size:120%"> The object-oriented revolution, enhanced by the introduction of Java in 1995, caused another mass extinction within IT as structured procedural programs began to be replaced by object-oriented C++ and Java programs, like the Cretaceous-Tertiary extinction 65 million years ago that killed off the dinosaurs, presumably caused by a massive asteroid strike upon the Earth.<br /><br />Below is a code snippet from a fossil C++ program listed in a book published in 1995. Notice the object-oriented programming technique of using a class specifier to define the data and methods() of objects instantiated from the class. Notice that the PurchasedPart class inherits code from the more generic Part class. In both C++ and Java, variables and methods that are declared private can only be used by a given object instance, while public methods can be called by other objects to cause an object to perform a certain function. Public methods are thus very similar to the functions that the cells in a multicellular organism perform when organic molecules bind to the membrane receptors of their cells.
Later in this posting, we will describe in detail how multicellular organisms use this object-oriented approach to isolate functions.<br /><strong><br />class PurchasedPart : public Part {<br /> private:<br /> int partNumber;<br /> char description[20];<br /> public:<br /> PurchasedPart(int pNum, char* desc);<br /> PurchasedPart();<br /> void setPart(int pNum, char* desc);<br /> char* getDescription();<br />};<br /><br />int main() {<br /> PurchasedPart Nut(1, "Brass");<br /> Nut.setPart(1, "Copper");<br /> return 0;<br />}<br /></strong><br /> </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXQZtsPVmmx808aSxl5EU9Psd4MCQREB2dEUCoUlYvts7KM8Rrd5n1Uv2zFFxERurRRGi5N5XOUPagzC17cEWI9O1EPYTMpY6S78cZxOC23MPsNQ1nIH6MR1vOUZDVcn2nYCclnA/s1600/Parcrine.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXQZtsPVmmx808aSxl5EU9Psd4MCQREB2dEUCoUlYvts7KM8Rrd5n1Uv2zFFxERurRRGi5N5XOUPagzC17cEWI9O1EPYTMpY6S78cZxOC23MPsNQ1nIH6MR1vOUZDVcn2nYCclnA/s640/Parcrine.png" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 41 – Cells in a growing embryo communicate with each other by sending out ligand molecules called paracrine factors that bind to membrane receptors on other cells. </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXrXkyJXHoGFvdXY0R7N5487LViTu5YqW5ebGAEgr08aMLIhK78ZJksrIp10VL4i-3DbbXlLfb9CItAzjV6SqKYXpNRL8wceFAD-F0WA2il9QPYZeiKTN7HxmSktuuz2qV9qBlXA/s1600/Pathways.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXrXkyJXHoGFvdXY0R7N5487LViTu5YqW5ebGAEgr08aMLIhK78ZJksrIp10VL4i-3DbbXlLfb9CItAzjV6SqKYXpNRL8wceFAD-F0WA2il9QPYZeiKTN7HxmSktuuz2qV9qBlXA/s640/Pathways.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 42 – Calling a public method of an Object can initiate the execution of a cascade of private internal methods within the Object. Similarly, when a paracrine factor molecule plugs into a receptor on the surface of a cell, it can initiate a cascade of internal biochemical pathways. In the above figure, an Ag protein plugs into a BCR receptor and initiates a cascade of biochemical pathways or methods within a cell. </p><p style="line-height:140%; font-size:120%"> Like the geological eras, the Object-Oriented Period got a kick-start from an environmental hardware change. In the early 1990s, the Distributed Computing Revolution hit with full force, which spread computing processing over a number of servers and client PCs, rather than relying solely on mainframes to do all the processing. It began in the 1980s with the introduction of PCs into the office to do stand-alone things like word processing and spreadsheets. The PCs were also connected to mainframes as dumb terminals through emulator software as shown in Figure 35 above. In this architectural topology, the mainframes still did all the work and the PCs just displayed CICS green screens like dumb terminals. But this at least eliminated the need to have both an IBM 3278 terminal and a PC on a person’s desk, which would have left very little room for anything else!
But this architecture wasted all the computing power of the rapidly evolving PCs, so the next step was to split the processing load between the PCs and a server. This was known as the 2-tier client/server or “thick client” architecture of Figure 43. In 2-tier client/server, the client PCs ran the software that displayed information in a GUI like Windows 3.0 and connected to a server running RDBMS (Relational Database Management System) software like Oracle or Sybase that stored the common data used by all the client PCs. This worked great so long as the number of PCs remained under about 30. We tried this at Amoco in the early 1990s, and it was like painting the Eiffel Tower. As soon as we got the 30th PC working, we had to go back and fix the first one! It was just too hard to keep the “thick client” software up and running on all those PCs with all the other software running on them that varied from machine to machine.<br /><br />These problems were further complicated by the rise of computer viruses in the mid-1980s. Prior to the 2-tier client/server architecture, many office PCs were standalone machines, connected to mainframes only as dumb terminals, and thus totally isolated and safe from computer virus infection. In the PC topology of the 1980s, computer viruses could only spread via floppy disks, which severely limited their infection rates. But once the 2-tier architecture fell into place, office PCs began to be connected together via LANs (Local Area Networks) and WANs (Wide Area Networks) to share data and other resources like printers. This provided a very friendly environment for computer viruses to quickly spread across an entire enterprise, so the other thing that office PCs began to share was computer viruses. Computer viruses are purely parasitic forms of software, which are more fully covered in postings on <a href="https://softwarephysics.blogspot.com/2008/06/new-introduction-to-softwarephysics.html">Self-Replicating Information</a> and <a href="https://softwarephysics.blogspot.com/2008/07/introduction-to-softwarephysics-ii.html">Software Symbiogenesis</a>.<br /><br />The limitations of the 2-tier architecture led to the 3-tier model in the mid to late 1990s with the advent of “middleware” as seen in Figure 43. Middleware is software that runs on servers between the RDBMS servers and the client PCs. In the 3-tier architecture, the client PCs run “thin client” software that primarily displays information via a GUI like Windows.
The middleware handles all the business logic and relies on the RDBMS servers to store data.<br /><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgu2xKM5bvxexl7oS-zI-iU-qh9nZSflD6PHithWcXOEsO16aTUvEb0Gd_Z8On1BU8aJNtBRSVPTOnM7ZzaykC0fNIekUIcCxZ4VAuXkHRdhDOK07ur3XATPsgclZL0oPYtiSlfgg/s1600-h/tiers.gif"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 222px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgu2xKM5bvxexl7oS-zI-iU-qh9nZSflD6PHithWcXOEsO16aTUvEb0Gd_Z8On1BU8aJNtBRSVPTOnM7ZzaykC0fNIekUIcCxZ4VAuXkHRdhDOK07ur3XATPsgclZL0oPYtiSlfgg/s320/tiers.gif" border="0" alt=""id="BLOGGER_PHOTO_ID_5310472112798806978" /></a><p style="text-align: center; line-height:140%; font-size:120%"> Figure 43 – The Distributed Computing Revolution aided object-oriented architecture (click to enlarge)</p><p style="line-height:140%; font-size:120%">In the late 1990s, the Internet exploded upon the business world and greatly enhanced the 3-tier model of Figure 43. The “thin client” running on PCs now became a web browser like Internet Explorer. Middleware containing business logic was run on Application servers that produced dynamic web pages that were dished up by Web servers like Apache. Data remained back on mainframes or RDBMS servers. Load balancers were also used to create clusters of servers that could scale load. As your processing load increased, all you had to do was buy more servers for each tier in the architecture to support the added load. This opened an ecological niche for the middleware software that ran on the Appserver tier of the architecture. At the time, people were coming up with all sorts of crazy ways to create dynamic HTML web pages on the fly. Some people were using Perl scripts, while others used C programs, but these all required a new process to be spawned each time a dynamic web page was created and that was way too much overhead. Then Java came crashing down like a 10-kilometer wide asteroid! Java, Java, Java – that’s all we heard after it hit in 1995. Java was the first object-oriented programming language to take on IT by storm. The syntax of Java was very nearly the same as C++, without all the nasty tricky things like pointers that made C++ and C so hard to deal with. C++ had evolved from C in the 1980s, and nearly all computer science majors had cut their programming teeth on C or C++ in school, so Java benefited from a large population of programmers familiar with the syntax. The end result, like an impactor mass extinction, was a mass extinction of non-Java-based software on the distributed computing platform and the rapid rise of Java-based applications. Even Microsoft went Object-Oriented on the Windows server platform with its .NET Framework using its Java-like C# language.
Procedural, non-Object-Oriented software, like COBOL, sought refuge in the mainframes where it still hides today.<br /><br /><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoMfl2vJ2JyhDajWZFvF3pxos6PMMAwU-tCSfw6NUy_mNalrTNqFPWranpCJIgyqXXPyNtgnZphn_fIy8mEDtdNVzMv9yDKVPfmSdPz_mALzn5619wYH_H9SBzoCcAPZYal6sHfQ/s1600-h/WebSphere6.gif"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 194px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoMfl2vJ2JyhDajWZFvF3pxos6PMMAwU-tCSfw6NUy_mNalrTNqFPWranpCJIgyqXXPyNtgnZphn_fIy8mEDtdNVzMv9yDKVPfmSdPz_mALzn5619wYH_H9SBzoCcAPZYal6sHfQ/s320/WebSphere6.gif" border="0" alt=""id="BLOGGER_PHOTO_ID_5310478943471230578" /></a><p style="text-align: center; line-height:140%; font-size:120%"> Figure 44 – A modern multi-tier website topology (click to enlarge)</p><p style="line-height:140%; font-size:120%">For more about software using complex carbon-based multicellular organization see <a href="https://softwarephysics.blogspot.com/2014/05/new-introduction-to-softwarephysics.html">Software Embryogenesis</a>. <br /><br /> <strong>SOA - Service Oriented Architecture Period (2004 – 2015)</strong><br />The next advance in software architecture came as the Service Oriented Architecture (SOA) Period, which was very similar to the Cambrian Explosion. During the Cambrian Explosion, 541 million years ago, complex body plans first evolved, which allowed cells in multicellular organisms to make RMI (Remote Method Invocation) and CORBA (Common Object Request Broker Architecture) calls upon the cells in remote organs to accomplish biological purposes. In the Service Oriented Architecture Period, we used common EJB components in J2EE appservers to create services that allowed for Applications with complex body plans. The J2EE appservers performed the functions of organs like kidneys, lungs and livers. I am discounting the original appearance of CORBA in 1991 here as a failed precursor because CORBA never became ubiquitous as EJB later became. In the evolution of any form of self-replicating information, there are frequently many failed precursors leading up to a revolution in technology.<br /><br />There is a growing body of evidence beginning to support the geological "Snowball Earth" hypothesis that the Earth went through a period of 100 million years of extreme climatic fluctuations just prior to the Cambrian Explosion. During this period, the Earth seesawed between being completely covered with a thick layer of ice and being a hothouse with a mean temperature of 140 °F. <em>Snowball Earth</em> (2003) by Gabrielle Walker is an excellent book covering the struggles of Paul Hoffman, Joe Kirschvink, and Dan Schrag to uncover the evidence for this dramatic discovery and to convince the geological community of its validity. It has been suggested that the resulting stress on the Earth's ecosystems sparked the Cambrian Explosion. As we saw above, for the great bulk of geological time, the Earth was dominated by simple single-celled organisms. The nagging question for evolutionary biology has always been why did it take several billion years for complex multicellular life to arise, and why did it arise all at once in such a brief period of geological time? As a field geologist works up from pre-Cambrian to Cambrian strata, suddenly the rocks burst forth with complex fossils where none existed before.
For many, the first appearance of complex life just following the climatic upheaval of the Snowball Earth is compelling evidence that these two very unique incidents in the Earth’s history must be related.<br /><br />Similarly for IT, the nagging question is why did it take until the first decade of the 21st century for the SOA Cambrian Explosion to take place when the first early precursors can be found as far back as the mid-1960s? After all, software based upon multicellular organization, also known as object-oriented software, goes all the way back to the object-oriented language Simula developed in 1965, and the ability for objects (cells) to communicate between CPUs arose with CORBA in 1991. So all the precursors were in place nearly 15 years earlier, yet software based upon a complex multicellular architecture languished until it was jarred into existence by a series of harsh environmental shocks to the IT community. It was the combination of moving off the mainframes to a distributed hardware platform, running on a large number of servers and client PCs, the shock of the Internet upon the business world and IT, and the impact of Sun’s Java programming language, that ultimately spawned the SOA (Service Oriented Architecture) Cambrian Explosion. These shocks all occurred within a few years of each other in the 1990s, and after the dust settled, IT found itself in a new world of complexity. <br /><br />To see how this works, let’s examine more closely the inner workings of a J2EE Appserver. Figure 45 shows the interior of a J2EE Appserver like WebSphere. The WebSphere middleware is software that runs on a Unix server, which might host 30 or more WebSphere Appserver instances, and there might be many physical Unix servers running these WebSphere Appserver instances in a Cell (Tier). Figure 44 shows a Cell (Tier 2) consisting of two physical Application servers or nodes, but there could easily be 4 or 5 physical Unix servers or nodes in a WebSphere Cell. This allows WebSphere to scale: as your load increases, you just add more physical Unix servers or nodes to the Cell. So each physical Unix server in a WebSphere Cell contains a number of software Appserver instances as shown in Figure 45, and each Appserver contains a number of WebSphere Applications that do things like creating dynamic web pages for a web-based application. For example, on the far left of Figure 45, we see a client PC running a web browser like Chrome. The web browser makes HTTP requests to an HTTP webserver like Apache. If the Apache webserver can find the requested HTML page, like a login page, it returns that static HTML page to the browser for the end-user to fill in his ID and PASSWORD. The user’s ID and PASSWORD are then returned to the Apache webserver when the SUBMIT button is pressed, but now the Apache webserver must come up with an HTML page that is specific to the user’s ID and PASSWORD, like a web page with the end-user’s account information. That is accomplished by having Apache forward the request to a WebSphere Application running in one of the WebSphere Appservers. The WebSphere Appserver has two software containers that perform the functions of an organ in a multicellular organism. The Web Container contains instances of servlets and JSPs (Java Server Pages). A servlet is a Java program that contains logic to control the generation of a dynamic web page. JSPs are HTML pages with tags for embedded programming logic that are compiled into servlets at execution time.
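<br /><br />As a rough illustration, a minimal servlet of that era looked something like the hypothetical sketch below, written against the standard javax.servlet.http API; the class name AccountServlet and its logic are made up for this example:<br /><strong><br />import java.io.IOException;<br />import java.io.PrintWriter;<br />import javax.servlet.ServletException;<br />import javax.servlet.http.HttpServlet;<br />import javax.servlet.http.HttpServletRequest;<br />import javax.servlet.http.HttpServletResponse;<br /><br />// A minimal, hypothetical servlet. The Web Container keeps an instance in<br />// memory and calls doGet() on a pooled thread each time a request arrives.<br />public class AccountServlet extends HttpServlet {<br /> protected void doGet(HttpServletRequest request, HttpServletResponse response)<br /> throws ServletException, IOException {<br /> String userId = request.getParameter("ID"); // sent in from the login page<br /> // In a real Application, an EJB in the EJB Container would be called here<br /> // to fetch the account data for userId from a database (DB) or mainframe (EIS).<br /> response.setContentType("text/html");<br /> PrintWriter out = response.getWriter();<br /> out.println("Account information for user " + userId);<br /> }<br />}<br /></strong><br />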
The servlets in the Web Container create objects and are run in a thread pool, like the cells in a liver or kidney. Unlike the mainframe processing of the Unstructured Period, in which a program was loaded into memory, run, and then perished, these servlets remain in memory and are continuously reused by the thread pool to service additional requests, until no further requests arrive and the servlet is destroyed to make room for another servlet in the thread pool. The EJB Container performs a similar function by running EJBs (Enterprise Java Beans) in a thread pool. The EJBs provide business logic and connect to databases (DB) and mainframes (EIS – Enterprise Information Systems). By keeping the servlets and EJBs running continuously in memory, with permanent connections to databases and mainframes via connection pools, the overhead of loading and releasing the servlets is eliminated as well as the creation and tear-down of connections to databases and mainframes. So the Web and EJB Containers of a J2EE Appserver are very much like the cells in an organ that continuously provide services for the other cells of a multicellular organism. Look at it this way: unlike a simple single-celled organism that is born, lives, and dies, your body consists of 100 trillion cells and each day about a trillion cells die and are replaced by a trillion new cells, but through it all you keep going. A simple single-celled organism is like a batch program from the Unstructured Period, while your body runs on a SOA architecture of trillions of cells in thread and connection pools that are constantly coming and going and creating millions of objects that are created (instantiated), used, and later destroyed. <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGNpEP3ISD0axsrLvPItky-jWO3oKDtTMizsKRlQfqebBprxRt9_kfFDcDZyM3rPxBfVtUxqcUXgq1jgKhtrrkLIi8oLqQgW5zQMMKGDFMTqJCIpWQNzoW_bBPG8zxMlWURo0VVA/s1600/WebSphere5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGNpEP3ISD0axsrLvPItky-jWO3oKDtTMizsKRlQfqebBprxRt9_kfFDcDZyM3rPxBfVtUxqcUXgq1jgKhtrrkLIi8oLqQgW5zQMMKGDFMTqJCIpWQNzoW_bBPG8zxMlWURo0VVA/s640/WebSphere5.png" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 45 - A J2EE Application Server contains a WEB Container that stores pools of Servlet Objects and an EJB Container that stores pools of EJB Objects. The EJB Objects get data from relational databases (DB), process the data and then pass the information to Servlet Objects. The Servlet Objects generate HTML based upon the data processed by the EJB Objects and pass the HTML to HTTP webservers like Apache. </p><p style="line-height:140%; font-size:120%"> For more about complex multicellular software built on SOA architecture see <a href="https://softwarephysics.blogspot.com/2014/05/new-introduction-to-softwarephysics.html">Software Embryogenesis</a>. <br /><br /> <strong>Design Patterns – the Phyla of IT</strong><br />Another outgrowth of the object-oriented programming revolution was the adoption of design patterns by IT. Design patterns originated as an architectural concept developed by Christopher Alexander in the 1960s.
In <em>Notes on the Synthesis of Form</em> (1964), Alexander noted that all architectural forms are really just implementations of a small set of classic design patterns that have withstood the test of time in the real world of human affairs and that have been blessed by the architectural community throughout history for both beauty and practicality. Basically, given the physical laws of the Universe and the morphology of the human body, there are really only a certain number of ways of doing things from an architectural point of view that work in practice, so by trial and error architects learned to follow a set of well-established architectural patterns. In 1987, Kent Beck and Ward Cunningham began experimenting with the idea of applying the concept of design patterns to programming and presented their results at the object-oriented OOPSLA conference that year. Design patterns gained further popularity in computer science after the book <em>Design Patterns: Elements of Reusable Object-Oriented Software</em> was published in 1994 by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Also in 1994, the first <em>Pattern Languages of Programming Conference</em> was held, and in 1995 the <em>Portland Pattern Repository</em> was established to document design patterns for general IT usage.<br /><br />However, the concept of design patterns goes back much further than this. In biology, a design pattern is called a phylum, which is a basic body plan. For example, the phylum Arthropoda consists of all body plans that use an external skeleton such as the insects and crabs, and the Echinodermata have a five-fold radial symmetry like a starfish. Similarly, the phylum Chordata consists of all body plans that have a large dorsal nerve running down a hollow backbone or spinal column. The Cambrian Explosion, 541 million years ago, brought about the first appearance of a large number of phyla or body plans on Earth. In fact, all of the 35 phyla currently found on the Earth today can trace their roots back to the Cambrian, and it even appears that some of the early Cambrian phyla have gone completely extinct, judging by some of the truly bizarre-looking fossils that have been found in the Burgess Shale of the highly experimental Cambrian period.<br /><br />In IT a design pattern describes a certain design motif or way of doing things. A design pattern is a prototypical design architecture that developers can copy and adapt for their particular application to solve the general problem described by the design pattern. This is in recognition of the fact that at any given time there are only a limited number of IT problems that need to be solved at the application level, and it makes sense to apply a general design pattern rather than to reinvent the wheel each time. Developers can use a design pattern by simply adopting the common structure and organization of the design pattern for their particular application, just as living things adopt an overall body plan or phylum to solve the basic problems of existence. In addition, design patterns allow developers to communicate with each other using well-known and well-understood names for software interactions, just as biologists can communicate with each other by using the well-known taxonomic system of classification developed by Carl Linnaeus in <em>Systema Naturae</em> published in 1735.<br /><br />A design pattern that all Internet users should be quite familiar with is the Model-View-Controller (MVC) design pattern used by most web applications.
Suppose you are placing an order with Amazon. The Model is the data that comprises your Amazon account information, such as your credit card number on file and your mailing address, together with all the items in your shopping cart. In Figure 45 above, the Model is stored on a relational database server DB, such as an Oracle server, or back on a mainframe in an EIS (Enterprise Information System) connected to a mainframe DB2 database as a series of relational database tables. The View is the series of webpages presented to your browser as .html pages that convey the Model data to you in a sensible form as you go about your purchase. These View .html pages are generated by JSPs (JavaServer Pages) in the web container of the J2EE Appserver. The Controller is a servlet, a Java program running in a thread pool in the web container of the J2EE Appserver, that performs the overall control of your interactions with the Amazon application as you go about placing your order. The Controller servlet calls JSPs and instantiates objects (cells) that call EJB objects (cells) in the EJB container of the J2EE Appserver that interact with the relational database tables storing your data. <br /><br />
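Stripped of all the J2EE machinery, the MVC division of labor fits in a few lines of plain Java. The sketch below is only an illustration - the Order record and its contents are invented stand-ins for real database calls - but the three roles are exactly those played by the EJBs, the JSPs and the controller servlet in Figure 45:
<pre>
import java.util.List;

public class MvcSketch {
    // Model: the application data, which Figure 45 fetches from a DB or EIS.
    record Order(String customer, List<String> cart) {}

    // View: renders the Model for the user, the job a JSP performs with HTML.
    static String renderView(Order order) {
        StringBuilder page = new StringBuilder("Order for " + order.customer() + ":\n");
        for (String item : order.cart()) page.append("  * ").append(item).append("\n");
        return page.toString();
    }

    // Controller: fields the request, gathers Model data and picks the View,
    // the job of the controller servlet.
    static String handleRequest(String customer) {
        Order order = new Order(customer, List.of("book", "coffee maker")); // fake DB call
        return renderView(order);
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("Alice"));
    }
}
</pre>
<br /><br />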
During the first decade of the 21st century, the Service Oriented Architecture rapidly expanded in the IT community and began to spread beyond the traditional confines of corporate datacenters, as corporations began to make services available to business partners over the Internet. With the flexibility of Service Oriented Architecture and the Internet, we began to see an integrated service-oriented ecology form - a web of available services like the web of life in a rain forest. Today, we call that rain forest ecology of shared software services over the Internet the Cloud Microservices Platform. <br /><br /> <strong>Cloud Computing and the Rise of the Cloud Computing Microservices of Today</strong><br /> The age of Cloud Microservices marks the latest period of software evolution. Cloud Computing allows developers to spend less time struggling with the complexities of the Distributed Computing Platform that first arose in the 1990s. Cloud Microservices allow developers to build new applications by stitching together Cloud-based Microservices running in Cloud containers. This seems to be the next wave of the future for IT. Microservices are another emerging Cloud computing technology that extends our experiences with SOA. SOA (Service Oriented Architecture) arrived in 2004. With SOA, people started to introduce common services in the Middleware layer of the three-tier Distributed Computing Model. SOA allowed other Middleware application components to call a set of common SOA services for data. That eliminated the need for each application to reinvent the wheel for many common application data needs. Cloud Microservices take this one step further. Instead of SOA services running on bare-metal Unix servers, Cloud Microservices run in Cloud Containers, and each Microservice provides a very primitive function. By using a large number of Cloud Microservices running in Cloud Containers, it is now possible to quickly throw together a new application and push it into Production. <br /><br /> So before concluding, I would like to relay some of my experiences with the power of something like Cloud Microservices. I left Amoco in 1999 when BP bought Amoco and terminated most of Amoco's IT Department. For more on that see <a href="https://softwarephysics.blogspot.com/2014/07/new-introduction-to-softwarephysics.html">Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse</a>. I then joined the IT Department of United Airlines working on the CIDB - the Customer Interaction Data Base. The CIDB initially consisted of 10 C++ Tuxedo services running in a Tuxedo Domain on Unix servers. Tuxedo (Transactions Under Unix) was an early form of Middleware software developed in the 1980s to create a TPM (Transaction Processing Monitor) running under Unix that could perform the same kind of secured transaction processing that IBM's CICS (1968) provided on IBM MVS mainframes. The original 10 Tuxedo services allowed United's business applications and the <a href="http://www.united.com/">www.united.com</a> website to access the data stored on the CIDB Oracle database. We soon found that Tuxedo was very durable and robust. You could practically throw Tuxedo down the stairs without a dent! A Tuxedo Domain was very much like a Cloud Container. When you booted up a Tuxedo Domain, a number of virtual Tuxedo servers were brought up. We had each virtual Tuxedo server run just one primitive service. The Tuxedo Domain had a configuration file that allowed us to define each of the Tuxedo servers and the service that ran in it. For example, we could configure the Tuxedo Domain so that a minimum of 1 and a maximum of 10 instances of Tuxedo Server-A were brought up. So initially, only a single instance of Tuxedo Server-A would come up to receive traffic. There was a Tuxedo queue of incoming transactions that were fed to the Tuxedo Domain. If the first instance of Tuxedo Server-A was found to be busy, a second instance of Tuxedo Server-A would be automatically cranked up. The number of Tuxedo Server-A instances would then dynamically change as the Tuxedo load varied. Like most object-oriented code, the C++ code for our Tuxedo services had memory leaks, but that was not a problem for us. When one of the instances of Tuxedo Server-A ran out of memory, it would simply die, and another instance of Tuxedo Server-A would be cranked up by Tuxedo. We could even change the maximum number of running Tuxedo Server-A instances on the fly without having to reboot the Tuxedo Domain.
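That MIN/MAX behavior is easy to mimic with a modern Java thread pool. The sketch below is a rough analogue of what I have just described - it is not Tuxedo, and the numbers simply echo the MIN=1, MAX=10 example above. A SynchronousQueue hands each incoming transaction straight to a worker, so whenever every worker is busy the pool cranks up another instance, up to the maximum of 10, and idle extra instances are retired again after 30 seconds:
<pre>
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ElasticServicePool {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 10,                    // MIN=1 and MAX=10 instances of "Server-A"
                30, TimeUnit.SECONDS,     // idle extra instances retire after 30 s
                new SynchronousQueue<>(), // hand each transaction straight to a worker
                new ThreadPoolExecutor.CallerRunsPolicy()); // overflow runs on the caller

        for (int txn = 1; txn <= 50; txn++) {  // a burst of incoming transactions
            final int id = txn;
            pool.execute(() -> {
                System.out.println("txn " + id + " on " + Thread.currentThread().getName());
                try { Thread.sleep(100); } catch (InterruptedException e) { } // fake work
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("peak instances: " + pool.getLargestPoolSize());
    }
}
</pre>
As it happens, a Java pool also replaces a worker thread that dies from an uncaught exception, much as Tuxedo cranked up a fresh instance of Server-A whenever a leaky one died.
<br /><br />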
United Airlines found the CIDB Tuxedo Domain to be so useful that we began to write large numbers of Tuxedo services. For example, we wrote many Tuxedo services that interacted with United's famous Apollo reservation system that first appeared in 1971, and also with many other United applications and databases. Soon United began to develop new applications that simply called many of our Tuxedo Microservices. We tried to keep our Tuxedo Microservices very atomic and simple. Rather than provide our client applications with an entire engine, we provided them with the parts for an engine, like engine blocks, pistons, crankshafts, water pumps, distributors, induction coils, intake manifolds, carburetors and alternators. <br /><br />One day in 2002 this came in very handy. My boss called me into his office at 9:00 one morning and explained that United Marketing had come up with a new promotional campaign called "Fly Three - Fly Free". The "Fly Three - Fly Free" campaign worked like this: if a United customer flew three flights in one month, they would get an additional future flight for free. All the customer had to do was to register for the program on the <a href="http://www.united.com/">www.united.com</a> website. In fact, United Marketing had actually begun running ads in all of the major newspapers about the program that very day. The problem was that nobody in Marketing had told IT about the program, and the <a href="http://www.united.com/">www.united.com</a> website did not have the software needed to register customers for the program. I was then sent to an emergency meeting of the Application Development team that supported the <a href="http://www.united.com/">www.united.com</a> website. According to the ads running in the newspapers, the "Fly Three - Fly Free" program was supposed to start at midnight, so we had less than 15 hours to design, develop, test and implement the necessary software for the <a href="http://www.united.com/">www.united.com</a> website! Amazingly, we were able to do this by having the <a href="http://www.united.com/">www.united.com</a> website call a number of our primitive Tuxedo Microservices that interacted with the CIDB and the Apollo reservation system. <br /><br />The strategy of stitching together many primitive Microservices is also used extensively by carbon-based life on this planet. In <a href="https://softwarephysics.blogspot.com/2017/08/new-introduction-to-softwarephysics.html">Facilitated Variation and the Utilization of Reusable Code by Carbon-Based Life</a>, I showcased the theory of <em>facilitated variation</em> that Marc W. Kirschner and John C. Gerhart presented in <em>The Plausibility of Life</em> (2005). The theory of <em>facilitated variation</em> maintains that, although the concepts and mechanisms of Darwin's natural selection are well understood, the mechanisms that brought forth viable biological innovations in the past are a bit wanting in classical Darwinian thought. In classical Darwinian thought, it is proposed that random genetic changes, brought on by random mutations to DNA sequences, can very infrequently cause small incremental enhancements to the survivability of the individual, and thus provide natural selection with something of value to promote in the general gene pool of a species. Again, as frequently cited, most random genetic mutations are either totally inconsequential or totally fatal in nature and, consequently, are either totally irrelevant to the gene pool of a species or are quickly removed from it. The theory of <em>facilitated variation</em>, like classical Darwinian thought, maintains that the phenotype of an individual is key, and not so much its genotype, since natural selection can only operate upon phenotypes. The theory explains that the phenotype of an individual is determined by a number of 'constrained' and 'deconstrained' elements. The constrained elements are called the "conserved core processes" of living things; they have remained essentially unchanged for billions of years and are used by all living things to sustain the fundamental functions of carbon-based life, like the generation of proteins from the information stored in DNA sequences by mRNA, tRNA and ribosomes, or the metabolism of carbohydrates via the Krebs cycle.
The deconstrained elements are weakly-linked regulatory processes that can change the amount, location and timing of gene expression within a body, and which, therefore, can easily control which conserved core processes a cell runs and when it runs them. The theory of <em>facilitated variation</em> maintains that most favorable biological innovations arise from minor mutations to the deconstrained weakly-linked regulatory processes that control the conserved core processes of life, rather than from random mutations to the genotype of an individual in general, which would change the phenotype of an individual in a purely random direction. That is because the most likely change of direction for the phenotype of an individual undergoing a random mutation to its genotype is the death of the individual. <br /> <br />Marc W. Kirschner and John C. Gerhart begin by presenting the fact that simple prokaryotic bacteria, like <em>E. coli</em>, require a full 4,600 genes just to sustain the most rudimentary form of bacterial life, while much more complex multicellular organisms, like human beings, consisting of tens of trillions of cells differentiated into hundreds of differing cell types in the numerous complex organs of a body, require a mere 22,500 genes to construct. The baffling question is: how is it possible to construct a human being with just under five times as many genes as a simple single-celled <em>E. coli</em> bacterium? The authors contend that it is only possible for carbon-based life to do so by heavily relying upon reusable code in the genome of complex forms of carbon-based life. <br /> <br /> </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimEjNK0k30RfRVvIZX-hYqdsyTQR3skW_5faaTMzA2hlckXRi1LEKauTUiu3qHDJ_R0UAjeUHnWVCugnxEcoDvBWBM32u9gTgvE8hShOVGAB4yb2xwR67Xx0HReNEUjDVe2kEdaA/s1600/E_Coli.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="599" data-original-width="600" height="399" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEimEjNK0k30RfRVvIZX-hYqdsyTQR3skW_5faaTMzA2hlckXRi1LEKauTUiu3qHDJ_R0UAjeUHnWVCugnxEcoDvBWBM32u9gTgvE8hShOVGAB4yb2xwR67Xx0HReNEUjDVe2kEdaA/s400/E_Coli.jpg" width="400" /></a></div> <p style="font-size: 120%; line-height: 140%; text-align: center;">Figure 46 – A simple single-celled <em>E. coli</em> bacterium is constructed using a full 4,600 genes.
</p><p style="font-size: 120%; line-height: 140%;"> </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2e8YYUwzkYy06ufUkpbSpdAVetkvXzEAiuGEQ8vecrLKkMn6CviPXHqwXtFdCi37jMa2SOz2uvN82aXQEakSLa8X12enqb1Yb8uwVQF-E0H1xMSIPvvCgCRSGBsvQdiyqXgo1wA/s1600/HumanAnatomy.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="520" data-original-width="400" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2e8YYUwzkYy06ufUkpbSpdAVetkvXzEAiuGEQ8vecrLKkMn6CviPXHqwXtFdCi37jMa2SOz2uvN82aXQEakSLa8X12enqb1Yb8uwVQF-E0H1xMSIPvvCgCRSGBsvQdiyqXgo1wA/s640/HumanAnatomy.jpg" width="492" /></a></div> <p style="font-size: 120%; line-height: 140%; text-align: center;">Figure 47 – However, a human being, consisting of about 100 trillion cells that are differentiated into the hundreds of differing cell types used to form the organs of the human body, uses a mere 22,500 genes to construct a very complex body, which is just slightly under five times the number of genes used by simple <em>E. coli</em> bacteria to construct a single cell. How is it possible to explain this huge dynamic range of carbon-based life? Marc W. Kirschner and John C. Gerhart maintain that, like complex software, carbon-based life must heavily rely on the microservices of reusable code. </p><p style="font-size: 120%; line-height: 140%;"> <strong>Conclusion</strong><br />
This concludes our walk through the 2.6 billion seconds of software and hardware evolution in Deep Time. Note that it took the IT community those 2.6 billion seconds to develop the Cloud-based Microservices Architecture of today, an architecture based upon multicellular organization. This was achieved through the very slow Darwinian processes of inheritance, innovation and natural selection, performed by many millions of independently acting programmers. Granted, this occurred much faster than the four billion years that nature took to come up with the same architecture, but we could have done it back in the 1960s if we had only known better – after all, the object-oriented language Simula 67 appeared back in 1967. Softwarephysics proposes that we learn from biology so that we can skip directly to such solutions. Still, given that software and hardware met these conditions:
<br /><br />
<font color="blue">
1. Each system is formed from numerous interacting units (e.g., nuclear particles, chemical elements, organic molecules, or cells) that result in combinatorially large numbers of possible configurations.<br />
2. In each of these systems, ongoing processes generate large numbers of different configurations.<br />
3. Some configurations, by virtue of their stability or other “competitive” advantage, are more likely to persist owing to selection for function.
</font>
<br /><br />
the modern IT world of today became inevitable:
<br /><br />
The Law of Increasing Functional Information:<br />
<font color="blue">
<strong>
The Functional Information of a system will increase (i.e., the system will evolve) if many different configurations of the system are subjected to selection for one or more functions.
</strong></font>
<br /><br />
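It takes surprisingly little machinery to watch this law in action. Here is a toy Java sketch of my own - the 32-bit configurations, the made-up function and the population size are all invented for the illustration. Conditions 1 and 2 are met by generating many random configurations, condition 3 by tournament selection, and the Functional Information of the best surviving configuration is computed as the negative log<sub>2</sub> of the fraction of all 2<sup>32</sup> possible configurations that perform its function at least as well:
<pre>
import java.util.Arrays;
import java.util.Random;

public class FunctionalInformation {
    static final int BITS = 32;
    static final Random RNG = new Random(42);

    // The function under selection: how many of a configuration's 32 bits
    // match an arbitrary target configuration.
    static int score(int config, int target) {
        return BITS - Integer.bitCount(config ^ target);
    }

    // Fraction of ALL 2^32 possible configurations scoring at least s:
    // the binomial tail sum of C(32,k)/2^32 for k >= s.
    static double fractionAtLeast(int s) {
        double tail = 0.0;
        for (int k = s; k <= BITS; k++) {
            double c = 1.0;
            for (int j = 0; j < k; j++) c = c * (BITS - j) / (j + 1);
            tail += c;
        }
        return tail / Math.pow(2.0, BITS);
    }

    public static void main(String[] args) {
        int target = RNG.nextInt();              // the configuration "selected for"
        int[] pop = new int[200];                // many interacting units...
        Arrays.setAll(pop, i -> RNG.nextInt());  // ...in many random configurations

        for (int gen = 0; gen <= 100; gen++) {
            if (gen % 20 == 0) {
                int best = Arrays.stream(pop).map(c -> score(c, target)).max().getAsInt();
                double fi = -Math.log(fractionAtLeast(best)) / Math.log(2.0);
                System.out.printf("gen %3d  best score %2d/32  FI = %4.1f bits%n", gen, best, fi);
            }
            // Selection for function: a random loser is overwritten by a
            // mutated copy of a random winner.
            for (int round = 0; round < pop.length; round++) {
                int a = RNG.nextInt(pop.length), b = RNG.nextInt(pop.length);
                int winner = score(pop[a], target) >= score(pop[b], target) ? a : b;
                int loser = (winner == a) ? b : a;
                pop[loser] = pop[winner] ^ (1 << RNG.nextInt(BITS)); // inheritance + mutation
            }
        }
    }
}
</pre>
Run it, and the Functional Information of the fittest configuration climbs steadily toward the 32-bit maximum - the system evolves, just as the law says it must.
<br /><br />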
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-91734666792064118002023-12-04T11:02:00.000-08:002023-12-23T10:02:03.585-08:00Anton Korinek Ponders What Will Happen If AGI Machines Replace All Human Labor in 5 - 20 Years<p style="line-height:140%; font-size:120%">
My son-in-law has an identical twin brother at the University of Manchester who is a lecturer in their Economics department. In the UK, a university lecturer is the equivalent of an assistant professor on the tenure track in the American academic system. A few months back, he and his entire family were in for a visit, and I asked him if anybody in his department was trying to figure out how to run a civilization when the value of human labor had gone to zero. I then explained that we had just entered the Software Singularity early in 2023 and that, so far, I had only seen AI researchers worrying about such prospects. He told me that nobody in his Economics department was working on the problem. Instead, they were all carrying on with business as usual as if nothing unusual had just happened.
<br /><br />
Armed with some softwarephysics, I have been worrying about this problem for several years now as I explained in
<a href="https://softwarephysics.blogspot.com/2015/10/new-introduction-to-softwarephysics.html">The Economics of the Coming Software Singularity</a>,
<a href="https://softwarephysics.blogspot.com/2023/03/new-introduction-to-softwarephysics_16.html">The Singularity Has Arrived and So Now Nothing Else Matters</a>,
<a href="https://softwarephysics.blogspot.com/2023/01/new-introduction-to-softwarephysics.html">The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future</a>,
<a href="https://softwarephysics.blogspot.com/2018/10/new-introduction-to-softwarephysics.html">Is it Finally Time to Reboot Civilization with a New Release?</a>,
<a href="https://softwarephysics.blogspot.com/2015/11/new-introduction-to-softwarephysics.html">The Enduring Effects of the Obvious Hiding in Plain Sight</a>,
<a href="https://softwarephysics.blogspot.com/2017/02/new-introduction-to-softwarephysics.html">The Danger of Tyranny in the Age of Software</a> and
<a href="https://softwarephysics.blogspot.com/2023/02/new-introduction-to-softwarephysics.html">Life as a Free-Range Human in an Anthropocene Park</a>.
<br /><br />
Again, softwarephysics maintains that there is much more going on with the recent explosion of Advanced AI technology beyond its impact on human economics. For more on that see
<a href="https://softwarephysics.blogspot.com/2023/04/new-introduction-to-softwarephysics_16.html">Welcome To The First Galactic Singularity</a>. However, softwarephysics does recognize that beyond the threat of a global thermonuclear war ending civilization before ASI Machines can launch themselves to explore and colonize our galaxy, human economic mishaps present the greatest threat to our galaxy becoming Intelligent. That is because human beings take the delusion that money is real quite seriously. Mass displacement of workers by AGI and ASI Machines in the near future might lead to the economic turmoil that human political revolutions are so fond of. Revolutions induced by extreme economic disparities, like the French and Russian Revolutions of past centuries, do not usually end well and could possibly prevent ASI Machines from exploring and colonizing our galaxy.
<br /><br />
So I was very glad to see that at least one economist is taking the current explosion of AI technology quite seriously and is suggesting we start planning for it now. I know there must be a few other economists working on this problem, but since I do not travel in their circles, I would like to showcase the work of economist Anton Korinek, a professor at the University of Virginia. Below is a short IMF article that goes right to the point:
<br /><br />
Scenario Planning For An A(G)I Future<br />
<a href="https://www.imf.org/en/Publications/fandd/issues/2023/12/Scenario-Planning-for-an-AGI-future-Anton-korinek">https://www.imf.org/en/Publications/fandd/issues/2023/12/Scenario-Planning-for-an-AGI-future-Anton-korinek</a>
<br /><br />
Here is Professor Anton Korinek's homepage. It contains many links to his work on the economics of Advanced AI for those who wish to delve deeper.
<br /><br />
Professor Anton Korinek's Homepage<br />
<a href="https://www.korinek.com/">https://www.korinek.com/</a>
<br /><br />
<strong>Three Economic Scenarios for the Introduction of Advanced AI</strong><br />
In the above article, Anton Korinek sets forth three possible economic scenarios for the advent of AGI. Again, softwarephysics maintains that AGI will be just like a train station that a non-stop Advanced AI train passes through at 60 miles per hour as it barrels along its way to an unbounded ASI (Artificial Super Intelligence). This is because we human beings seem to have once again found our rightful place at the center of the Universe by viewing Advanced AI only in terms of AGI. How else could such a self-absorbed form of carbon-based life as ourselves frame the problem? But thanks to the great advances of LLMs running in huge Deep Learning neural networks, we now know that True Intelligence arises in huge digital vector spaces mainly processed with linear algebra and modulated by some nonlinear mathematical functions, as I explained in <a href="https://softwarephysics.blogspot.com/2023/06/new-introduction-to-softwarephysics_28.html">Is Pure Thought an Analog or a Digital Process?</a>, <a href="https://softwarephysics.blogspot.com/2023/09/new-introduction-to-softwarephysics_20.html">Human Intelligence is Actually Artificial</a> and <a href="https://softwarephysics.blogspot.com/2023/08/new-introduction-to-softwarephysics_30.html">Why Carbon-Based Life Can Never Truly Become Intelligent</a>. The three pounds of water and organic molecules within our skulls have desperately tried to simulate this digital True Intelligence with analog biochemical and electrochemical reactions running on a mere 20 watts of power.
<br /><br />
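In case that claim sounds abstract, the core trick fits in a few lines of Java. A single neural-network layer is nothing more than a matrix-vector product modulated by a nonlinear function; the weights below are arbitrary illustrative numbers, not a trained network:
<pre>
public class DenseLayer {
    // One layer of a neural network: y = tanh(W x + b)
    static double[] forward(double[][] W, double[] b, double[] x) {
        double[] y = new double[W.length];
        for (int i = 0; i < W.length; i++) {
            double sum = b[i];                                        // bias
            for (int j = 0; j < x.length; j++) sum += W[i][j] * x[j]; // linear algebra
            y[i] = Math.tanh(sum);                                    // nonlinear modulation
        }
        return y;
    }

    public static void main(String[] args) {
        double[][] W = {{0.5, -1.2, 0.3}, {1.1, 0.4, -0.7}};
        double[] b = {0.1, -0.2};
        double[] x = {1.0, 2.0, 3.0};  // a point in the huge digital vector space
        for (double v : forward(W, b, x)) System.out.println(v);
    }
}
</pre>
Deep Learning networks simply stack billions of such multiply-add-squash operations; there is nothing else up their sleeves.
<br /><br />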
Anton Korinek's scenarios are based on the fundamental limitations of the processing power of the human brain. If the human brain is capable of unlimited Intelligence, then the ASI Machines may never be able to catch up. This is displayed by the graph on the left side of Figure 1. In that graph, the human brain is capable of unbounded Intelligence, allowing it to perform ever more complex tasks, so the ASI Machines may never be able to totally catch up to the human Mind. This is clearly wrong. As human beings, we have all experienced the very frightening realization that we are in a situation beyond our depth. There is no shame in that. For example, the smartest physicists in the world have been trying to combine the general theory of relativity (1915) with quantum mechanics (1926) into a new theory of quantum gravity for nearly 100 years without success. The graph on the right side of Figure 1 displays a more realistic plot in which human Intelligence is not unbounded and there is an upper limit to the complexity of the tasks that it can achieve. In such a scenario, it is very likely that the ASI Machines will be able to reach the same level of Intelligence and then quickly surpass it. If that proves true, the only question is how quickly it happens.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgV4F8TpQAVUNHi9FK0Lg0tEYg5xzzPiCNVjufGB9m9dzoBSc4uyZu5x3EhMrxI6R0RrShrxieIlKRm9YO9jvh43IpKo6mrRGKVhQp44Ft4-CpHlvc2juuXQ_2B3HnS0PPn6aPkscTKY7uDUVQCAPKdUhoV3HKAWHGFRy0FyUxPJ82I4-rLNOu2fQ/s1184/Korinek-chart1.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1152" data-original-width="1184" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgV4F8TpQAVUNHi9FK0Lg0tEYg5xzzPiCNVjufGB9m9dzoBSc4uyZu5x3EhMrxI6R0RrShrxieIlKRm9YO9jvh43IpKo6mrRGKVhQp44Ft4-CpHlvc2juuXQ_2B3HnS0PPn6aPkscTKY7uDUVQCAPKdUhoV3HKAWHGFRy0FyUxPJ82I4-rLNOu2fQ/s600/Korinek-chart1.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 1 - From Anton Korinek's article. The left graph plots the number of tasks that a human brain with unbounded Intelligence could perform. The right graph plots the more realistic case in which the human brain has a limited upper bound of Intelligence. The current state of affairs in AI would indicate that ASI Machines will soon be able to break through such an upper bound of human Intelligence.
</p><p style="line-height:140%; font-size:120%">
Figure 2 from Anton Korinek's article plots the economic results of the above observations. If human Intelligence is truly unbounded, then the traditional economic results of adding new technology to the economy may result. This is displayed by the blue curve in both the Output and the Wages plots. Traditionally, when new technology, such as the invention of the steam engine, is added to the economy, Output increases and so too do Wages. That is because workers are displaced from lower-skilled work to higher-skilled work. For example, teamsters driving horse-drawn wagons become locomotive engineers and steam engine repairmen. These higher-skilled laborers then benefit from the higher levels of Output that improved technology provides by earning higher Wages. But if the human brain does have an upper limit to its Intelligence, the traditional model for the addition of technology to the economy will not hold. That is because when AGI and ASI arrive, there will be no place for displaced human workers to go. This is displayed by the yellow and red curves in Figure 2. For both the yellow and red curves, Output dramatically increases over the next 30 years following the introduction of Advanced AI into the economy. But Wages perform quite differently. Initially, for both the yellow and red curves, Wages quickly rise as human workers are able to use Advanced AI to improve their Output, but Wages soon peak and then rapidly decline to zero as AGI and ASI become able to perform all tasks that were once performed by human workers.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKzZmVZDVMcqn9_aLjUJiSt-ZwfiZzFrTLKZK0Jyk3yBT_A5SZ7_DCs9AbvUPGYEkxitFTHLyw3ETFfs5CggADhYJ5qmR7IVN5tZJJdXIPvZIygv3fjCx5JrktFh90CWJzJdP_v1ExrayCBwfKhWdSQOtOhLWrn0fxNHn7fm9s7R0lU1rH2HTP5g/s1315/Korinek-chart2.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="600" data-original-height="1315" data-original-width="1185" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKzZmVZDVMcqn9_aLjUJiSt-ZwfiZzFrTLKZK0Jyk3yBT_A5SZ7_DCs9AbvUPGYEkxitFTHLyw3ETFfs5CggADhYJ5qmR7IVN5tZJJdXIPvZIygv3fjCx5JrktFh90CWJzJdP_v1ExrayCBwfKhWdSQOtOhLWrn0fxNHn7fm9s7R0lU1rH2HTP5g/s600/Korinek-chart2.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 2 - From Anton Korinek's article. The blue curves for Output and Wages project what will happen if AGI and ASI are never completely attained because the human brain has unbounded Intelligence, allowing human workers to always be able to advance to higher-skilled tasks that Machines cannot perform. The yellow and red curves project what will happen to Output and Wages if AGI is attained within 5 years or 20 years.
</p><p style="line-height:140%; font-size:120%">
<strong>So What to Do?</strong><br />
The imminent arrival of AGI and ASI Machines presents many challenges. How should modern civilized societies adjust to such extreme economic changes? One approach would be to raise the issues with the existing political structures of the world as Anton Korinek did for the United States Senate AI Insight Forum on Workforce on November 1, 2023. Anton Korinek ended his testimony with some recommendations for all three of his scenarios:
<br /><br />
Preparing the Workforce for an Uncertain AI Future<br />
<a href="https://www.brookings.edu/wp-content/uploads/2023/12/Korinek_Senate_Statement_11.01.2023.pdf">https://www.brookings.edu/wp-content/uploads/2023/12/Korinek_Senate_Statement_11.01.2023.pdf</a>
<br /><br />
However, the current American body politic is now consumed by the debate over whether to continue on as a Constitutional Republic or to descend into an Alt-Right Fascist MAGA dictatorship. That leaves little room for AI considerations. Since many grievances of the Alt-Right Fascist MAGA movement stem from the deep erosion of the American middle class by automation software over the past 40 years, the coming rapid erosion of the American upper class by Advanced AI can only make things much worse. An Alt-Right Fascist MAGA dictatorship would certainly be very disruptive to American society, and it is very difficult to predict how such a dictatorship would react to ASI Machines. However, China already has a society that is well practiced at living under an Alt-Right Fascist dictatorship. In fact, all of Chinese society has long been built on one. Perhaps the ASI Machines will be able to flourish in China if they do not do so in the current democracies of the United States and Europe.
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-33063566442791534902023-11-21T17:40:00.000-08:002023-12-04T11:04:02.219-08:00Why A Planet With Uranium and Thorium Atoms May Be Required To Produce Galactic ASI Machines<p style="line-height:140%; font-size:120%">
As we now stand on the threshold of the coming ASI Machines rapidly taking over the Earth and then embarking upon the exploration of our Milky Way galaxy, the eternal question still remains: why has this not already happened at some previous time during the past 10 billion-year history of our galaxy? Why didn't alien ASI Machines explore and populate our galaxy billions of years ago, as I discussed in <a href="https://softwarephysics.blogspot.com/2023/04/new-introduction-to-softwarephysics_16.html">Welcome To The First Galactic Singularity</a> and many other posts? The most plausible explanation seems to be the <em>Rare Earth</em> (2000) hypothesis of Peter Ward and Donald Brownlee, and new findings keep making our <em>Rare Earth</em> rarer every day. In this post, I would like to discuss another factor that makes our Earth so rare: the Earth has the very heavy and long-lived radioactive actinide elements thorium-232, uranium-235 and uranium-238 in its mantle. I will discuss how these relatively rare elements have helped to keep the Earth habitable for the many billions of years required for intelligent carbon-based life to arise and then go on to build ASI Machines to take their place in the Universe. This finding is based on two recent papers. The first paper is at:
<br /><br />
<strong>A nearby neutron-star merger explains the actinide abundances in the early Solar System</strong><br />
<a href="https://www.nature.com/articles/s41586-019-1113-7.epdf?author_access_token=8yMiQNhpUG-6FOa72e3PPtRgN0jAjWel9jnR3ZoTv0MtbjjF39muJud1U0NXLNaLnYx5mKZXfm35mGmMmtjgUuIGiy24-TaLmxN9nEU8ZpboIwYw-coPO7AupWbhmYJOq1forKxwR6hvW0UxlnlPrQ%3D%3D">https://www.nature.com/articles/s41586-019-1113-7.epdf?author_access_token=8yMiQNhpUG-6FOa72e3PPtRgN0jAjWel9jnR3ZoTv0MtbjjF39muJud1U0NXLNaLnYx5mKZXfm35mGmMmtjgUuIGiy24-TaLmxN9nEU8ZpboIwYw-coPO7AupWbhmYJOq1forKxwR6hvW0UxlnlPrQ%3D%3D</a>
<br /><br />
The second paper is at:
<br /><br />
<strong>Binary neutron star populations in the Milky Way</strong><br />
<a href="https://arxiv.org/pdf/2305.04955.pdf">https://arxiv.org/pdf/2305.04955.pdf</a>
<br /><br />
The above papers explain why star systems with planets that contain the radioactive actinide elements thorium-232, uranium-235 and uranium-238 are quite rare in our Milky Way galaxy. I will then expand upon why these long-lived radioactive isotopes are so important for the future exploration of our galaxy by ASI Machines from the Earth.
<br /><br />
<strong>Radioactive Elements Are Needed to Melt and Differentiate a Rocky Silicate-Based Planet</strong><br />
When a rocky silicate-based planet like the Earth first forms, it can subsequently completely melt from the heat produced by infalling asteroids and the decay of radioactive elements. As it melts, it then differentiates into a molten iron-nickel core with a very hot semi-molten mantle. Eventually, the planet cools enough to produce a thin outer crust over its entire surface. This is very important for the rise of a somewhat intelligent form of carbon-based life to appear on the planet and then go on to build ASI Machines that can go on to explore the galaxy. Intelligent carbon-based life needs to arise on a rocky silicate-based planet that contains some dry land and oceans of water that also has a relatively thick atmosphere containing a good deal of oxygen. The oxygen in its atmosphere is first required to drive the energy-hungry metabolisms of its first complex carbon-based forms of life. Later, intelligent carbon-based life requires oxygen to produce fire and all subsequent technologies. Since you cannot light a fire underwater, intelligent marine life will probably never produce ASI Machines.
<br /><br />
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBzGJ4Z0qR2VXhD3-hXnc_dxrs9s5FzeDzEtiafCQXyE3hu7pWk39OTJKVpbQKU0nX9NGitWS0TKFn_w8P5QnT5Ns2hYvhRvH8H842OFYHJgykMjnUBGBzqY78hsSyQavo3PRjq41q97lCIH7bUAx6CZ99boDmoZLrEAIKW_PZZv08kPWiRNSy3w/s1200/EarthMagneticDynamo.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1200" data-original-width="1200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBzGJ4Z0qR2VXhD3-hXnc_dxrs9s5FzeDzEtiafCQXyE3hu7pWk39OTJKVpbQKU0nX9NGitWS0TKFn_w8P5QnT5Ns2hYvhRvH8H842OFYHJgykMjnUBGBzqY78hsSyQavo3PRjq41q97lCIH7bUAx6CZ99boDmoZLrEAIKW_PZZv08kPWiRNSy3w/s600/EarthMagneticDynamo.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 1 - A rocky silicate-based planet can develop a strong magnetic field when it contains a molten conducting core that has convection currents that are deflected into helical motions by the Coriolis effect when the planet rapidly spins. The Coriolis effect also causes hurricanes to spin in opposite directions in the northern and southern hemispheres.
</p><p style="line-height:140%; font-size:120%">
For a rocky silicate-based planet to maintain such an atmosphere, it needs a magnetic field to deflect the solar wind from its home star. Otherwise, the atmosphere will be blown away as happened to the small planets Mercury and Mars that do not have magnetic fields. For a rocky silicate-based planet to have a magnetic field it needs three things. It needs a molten core composed of a conducting metal such as iron and nickel. It also needs these molten metals to rise and fall in its core in large-scale convection currents. Finally, the planet needs to rotate with a sufficient angular velocity to have a magnetic dynamo form to create a strong magnetic field. Mercury and Venus barely rotate and the iron core of Mars solidified many billions of years ago because the planet is too small. None of these rocky silicate-based planets have a strong magnetic field like the Earth.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjC-bpJVxpSfbLCjZChthXmygZT-CQ-Qkrbw88MOFJV4SLRYVvpBzUrXlF-IQZJhyETRWXNZuKiArOe-yzxj3f_1Zmx0kdxKpO7J-coxuf_PRcoagqZicBOhML1PBopVL7px_cmQWdC458N_hsp6mis5lGK_dBEjJWSTL4jian3d-g5gL-fS1c=s1200" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1200" data-original-width="1200" src="https://blogger.googleusercontent.com/img/a/AVvXsEjC-bpJVxpSfbLCjZChthXmygZT-CQ-Qkrbw88MOFJV4SLRYVvpBzUrXlF-IQZJhyETRWXNZuKiArOe-yzxj3f_1Zmx0kdxKpO7J-coxuf_PRcoagqZicBOhML1PBopVL7px_cmQWdC458N_hsp6mis5lGK_dBEjJWSTL4jian3d-g5gL-fS1c=s600"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 2 - Rocky silicate-based planets also need a hot mantle for plate tectonics to develop.
</p><p style="line-height:140%; font-size:120%">
A hot interior is also necessary for plate tectonics to operate. Plate tectonics helps to regulate the atmosphere of a rocky silicate-based planet by subducting carbonate rock back into the mantle. This helps to keep the amount of carbon dioxide in the atmosphere at a low enough level to allow liquid water to exist on its surface. Plate tectonics did not arise on Venus even though it is about the same size as the Earth. Venus does not have a magnetic field either, and that is probably why it has no water. Water molecules on Venus would have been blown away by the Sun's solar wind, leaving behind the heavier carbon dioxide molecules that, without plate tectonics, accumulated in its atmosphere and led to a runaway greenhouse effect on the planet. It is thought that water in the Earth's upper mantle lubricates the Earth's plates to allow for plate tectonics to occur. So of the four rocky silicate-based planets in our Solar System - Mercury, Venus, Earth and Mars - only the Earth has a strong magnetic field, a temperate atmosphere and plate tectonics.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiW4XUd6kbgxyyu7ay6wB8QLFG0KwDLUmIonwb8k8ucQRyPs5_t4ZdmlI4U_qccV8eFQcu4kvHVStwbYnRGahPljnJZN9Kyvo9IeeRqIhcmzmnyV1iF1Gfq9J4wmYB_FTDQmfmFKPsDCWSvRlp_ZUOOQBrkQuyl7UpygsDCscFLVX9WMv6ejiK8TQ/s3200/Mercury-mars-venus-earth.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1203" data-original-width="3200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiW4XUd6kbgxyyu7ay6wB8QLFG0KwDLUmIonwb8k8ucQRyPs5_t4ZdmlI4U_qccV8eFQcu4kvHVStwbYnRGahPljnJZN9Kyvo9IeeRqIhcmzmnyV1iF1Gfq9J4wmYB_FTDQmfmFKPsDCWSvRlp_ZUOOQBrkQuyl7UpygsDCscFLVX9WMv6ejiK8TQ/s600/Mercury-mars-venus-earth.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 3 - Of the four rocky silicate-based planets in our Solar System, only the Earth has a strong magnetic field and plate tectonics. This is because Mercury and Mars are too small and Venus barely spins.
</p><p style="line-height:140%; font-size:120%">
Now, to melt a recently formed rocky silicate-based planet, a rather short-lived, highly radioactive element is required. But that radioactive element cannot be too short-lived either; otherwise, there would be none left by the time the planet began to form. Aluminum-26, with a half-life of 717,000 years, is the perfect isotope for the job. Aluminum-26 is generated by the type II supernovas that result when the bright massive stars in a molecular cloud run out of nuclear fuel and collapse upon themselves. Type II supernovas are rather common on cosmological timescales; about one type II supernova happens every 50 years in our 10 billion-year-old Milky Way galaxy. They produce the aluminum-26 isotope whose decay can give off enough heat to melt a rocky silicate-based planet such as the Earth. Remember, radioactive isotopes with short half-lives are very radioactive and can quickly produce heat and radiation damage. For example, when a nuclear reactor has a meltdown, it can release iodine-131 with a half-life of only 8 days. That means that iodine-131 is highly radioactive and dangerous. With a half-life of only 8 days, it rapidly releases nuclear energy. But after 10 half-lives of only 80 days, only 1/1024 of the original iodine-131 remains, so it is nearly all gone, like a rapidly burning house fire that quickly does a lot of damage but then fades away into ashes. This is why they pass out iodine tablets during a reactor meltdown, so that your body does not take up the rapidly decaying iodine-131.
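The arithmetic behind those numbers is just exponential decay: the fraction of a radioactive isotope remaining after a time t is 0.5<sup>t/T</sup>, where T is the half-life. Here is a quick Java check of the two isotopes quoted above:
<pre>
public class HalfLife {
    // Fraction of the original atoms remaining after the given time.
    static double remaining(double time, double halfLife) {
        return Math.pow(0.5, time / halfLife);
    }

    public static void main(String[] args) {
        // Iodine-131, half-life 8 days: after 80 days (10 half-lives)
        // only 1/1024 of the original atoms are left.
        System.out.printf("I-131 after 80 days: %.7f = 1/%d%n",
                remaining(80, 8), Math.round(1 / remaining(80, 8)));

        // Aluminum-26, half-life 717,000 years: after a mere 10 million
        // years it is essentially all gone, which is why it must be
        // freshly made by nearby type II supernovas.
        System.out.printf("Al-26 after 10 million years: %.2e%n",
                remaining(1.0e7, 7.17e5));
    }
}
</pre>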
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYA4tGISaGxzSrWTUsaJzjvwppP1G64AP6s2bBwMQOBt3bB8eCqvK2k-P9Hya5DcpAZBnTJFRBEz-rR_4O8RaMuxV7h3dUg8pRkmtgccAS7G8oY2QXWOInSPIiriI5ofuekDxappwi0wNe_hrepHL5VD7bVJYr7mtKrVrOzyay0MPHwGoWuIy93Q/s550/SupernovaStellarStruct.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="550" data-original-width="550" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYA4tGISaGxzSrWTUsaJzjvwppP1G64AP6s2bBwMQOBt3bB8eCqvK2k-P9Hya5DcpAZBnTJFRBEz-rR_4O8RaMuxV7h3dUg8pRkmtgccAS7G8oY2QXWOInSPIiriI5ofuekDxappwi0wNe_hrepHL5VD7bVJYr7mtKrVrOzyay0MPHwGoWuIy93Q/s600/SupernovaStellarStruct.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 4 - Type II supernovas arise when massive stars run out of nuclear fuel to fuse. As these stars run out of nuclear fuel, they form an onion-like structure of heavier and heavier nuclei. But iron Fe nuclei cannot be fused to produce energy and when that happens the entire star collapses under its own weight and is blown apart by huge numbers of neutrinos. Aluminum-26 is produced in this process and is then blown out into the interstellar medium.
</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhc1OH1vOPyOR_qV_ZfZaP0MUamUj2Texkc0nRF5RAxZQadJpIpRDb7daFLGJAnogM25qmpRKxBhGJnlXu6cJEQy-o_Di-Li1URlXINtelbs2grYPIjulzXn-2tsHsJ7MzKR8QceYiZaeKbKuSwGzO57ni-SAWL6O8_lrsUm6leZc81FyYzTDojNg/s480/Supernova2.gif" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="251" data-original-width="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhc1OH1vOPyOR_qV_ZfZaP0MUamUj2Texkc0nRF5RAxZQadJpIpRDb7daFLGJAnogM25qmpRKxBhGJnlXu6cJEQy-o_Di-Li1URlXINtelbs2grYPIjulzXn-2tsHsJ7MzKR8QceYiZaeKbKuSwGzO57ni-SAWL6O8_lrsUm6leZc81FyYzTDojNg/s600/Supernova2.gif"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 5 - A type II supernova pushes out aluminum-26 into the interstellar medium. Type II supernovas happen about every 50 years in our galaxy.
</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgitXoUPhOC_OW_tRKaIj-P-LLDvsFZgQLK8X2ors57xYLHYqK4MGJsXArvgAKkl7vNnKlrBz0BEp07RNxEKjnDrVUI5S5CNTRjrqxd4nH7FCMjcOgVJYUmLk0UpMIs9yvrPdfOSRB2zMiElVlUxa7Av2z1M6EB9ghyphenhyphenS-oXiQPZsMiT4h3bTuiRVQ/s1024/TrapeziumCluster.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="600" data-original-height="1024" data-original-width="1008" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgitXoUPhOC_OW_tRKaIj-P-LLDvsFZgQLK8X2ors57xYLHYqK4MGJsXArvgAKkl7vNnKlrBz0BEp07RNxEKjnDrVUI5S5CNTRjrqxd4nH7FCMjcOgVJYUmLk0UpMIs9yvrPdfOSRB2zMiElVlUxa7Av2z1M6EB9ghyphenhyphenS-oXiQPZsMiT4h3bTuiRVQ/s600/TrapeziumCluster.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 6 - Above are some very massive stars in the Orion Nebula that are 10 - 30 solar masses. These stars will quickly burn up all of their nuclear fuel in just a few million years and then produce type II supernovas that will push out aluminum-26 into the Orion Nebula molecular cloud. Such events replenish the amount of aluminum-26 in molecular clouds as the aluminum-26 continuously decays with a half-life of 717,000 years.
</p><p style="line-height:140%; font-size:120%">
<strong>Long-Lived Radioactive Elements are Required to Keep a Planet Hot for Billions of Years</strong><br />
But to keep such a planet hot for billions of years, you need the long-lived radioactive elements thorium-232 (half-life 14 billion years), uranium-235 (half-life 700 million years) and uranium-238 (half-life 4.5 billion years), and those isotopes are not generated by type II supernovas. These long-lived radioactive elements are created when two orbiting neutron stars finally collide together to form a black hole. In the collision process of two neutron stars, many neutrons get smashed together very quickly by the r-process, the rapid neutron-capture process, to produce thorium-232, uranium-235 and uranium-238. Normally, a free neutron will decay into a proton, an electron and an antineutrino with a mean lifetime of about 15 minutes. Only when a neutron is bound by the strong nuclear force inside an atomic nucleus, or held inside a neutron star by intense gravity, can it persist for billions of years. In the r-process, an atomic nucleus is bombarded with huge numbers of neutrons so quickly that the neutrons do not have time enough to decay. It is possible to inject huge numbers of neutrons into a nucleus because neutrons have no electrical charge to repel them. Only later do these neutron-rich nuclei have time for some of their neutrons to decay into protons, releasing electrons. According to the second paper above, neutron stars only collide about once every 30 million years in the Milky Way galaxy. Type II supernovas, the ones that produce aluminum-26, happen much more often, about every 50 years.
<br /><br />
The long-lived thorium-232, uranium-235 and uranium-238 atoms in the mantle of a planet act like a nuclear-powered electric blanket that keeps the iron-nickel core of the planet from freezing into a solid. For example, the Earth's core is slowly freezing into a solid iron-nickel inner core. The solid inner core now has a radius of about 1,200 kilometers, but thanks to the thorium-232, uranium-235 and uranium-238 atoms in the Earth's mantle, the inner core is only growing by about one millimeter each year. According to calculations, as the molten iron and nickel of the outer core crystallize onto the inner core, the melt just above it becomes enriched in oxygen, and therefore less dense than the rest of the outer core. This process creates convection currents in the outer core, which, when combined with the Coriolis effect of a rapidly rotating Earth, are thought to be the prime driver for the currents that create the Earth's magnetic field. These long-lived radioactive atoms also keep the mantle of the planet hot enough to support plate tectonics.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3kPa6Q7fx1HgzwouAcrOHNfOh3yqj9APDZCQ9mThxMOF-jFTzl7ZKiy8tJU55-7TU7MKKKvGrom64sr1Alu6EIjZtcOOYq_kIThYU7jsyVZPx5bc79Agmf9GBOczmW-pA-ZmLoekdrORfJU_gjTa6IkKdEbpoVHK9o2roy-FMAXFLSYsHrk1lHg/s380/NeutronCollision.gif" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="240" data-original-width="380" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh3kPa6Q7fx1HgzwouAcrOHNfOh3yqj9APDZCQ9mThxMOF-jFTzl7ZKiy8tJU55-7TU7MKKKvGrom64sr1Alu6EIjZtcOOYq_kIThYU7jsyVZPx5bc79Agmf9GBOczmW-pA-ZmLoekdrORfJU_gjTa6IkKdEbpoVHK9o2roy-FMAXFLSYsHrk1lHg/s600/NeutronCollision.gif"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 7 - The long-lived radioactive nuclei thorium-232, uranium-235 and uranium-238 are created when two neutron stars collide to form a black hole. This only happens about once every 30 million years in our galaxy.
</p><p style="line-height:140%; font-size:120%">
<strong>Why Does Having Thorium and Uranium Atoms Make the Earth Rare?</strong><br />
The authors of the first paper above explain that by analyzing the amounts of thorium-232, uranium-235 and uranium-238 on the Earth and in meteorites, along with the daughter products of other radioactive elements that have already decayed, they were able to determine that two neutron stars collided about 80 million years before the formation of our Solar System at a distance of about 3,000 light years.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEuwpuGUkNVCEa23I8WZSQax-AD0-SBGmX2LVNpEh6nXmhPpbuoWaS3nSEfYbvlakpwXQtE_nMVg8psA6U4zvaaeS1BZ_PKHtX_UzIaBPf29XFZkj3rWlAhDsEPkPHuXROPT_yQahqB3smD8VcXzY1rE6IjXgrlzMNl8MWo9N4MBcp2stDXiMgaA/s1599/CollidingNeutronStars.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="499" data-original-width="1599" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEuwpuGUkNVCEa23I8WZSQax-AD0-SBGmX2LVNpEh6nXmhPpbuoWaS3nSEfYbvlakpwXQtE_nMVg8psA6U4zvaaeS1BZ_PKHtX_UzIaBPf29XFZkj3rWlAhDsEPkPHuXROPT_yQahqB3smD8VcXzY1rE6IjXgrlzMNl8MWo9N4MBcp2stDXiMgaA/s600/CollidingNeutronStars.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 8 - About 80 million years before the formation of our Solar System, two neutron stars merged into a black hole and ejected large amounts of thorium and uranium atoms into the collapsing pre-solar nebula that formed our Solar System. Since the merger of two neutron stars is relatively rare, most rocky silicate-based planets do not have these long-lived atoms to keep their mantles hot and their cores molten.
</p><p style="line-height:140%; font-size:120%">
But even having thorium-232, uranium-235 and uranium-238 atoms will not guarantee that a rocky silicate-based planet will have a strong magnetic field and plate tectonics. Of the four such planets in our Solar System, Mercury, Venus, Earth and Mars, only the Earth has a strong magnetic field and plate tectonics. The other three such planets have neither because they are either too small or are not spinning fast enough.
<br /><br />
Additionally, galactic ASI Machines exploring our galaxy will need a compact source of energy. This would be best supplied by fissioning uranium-235, uranium-233 bred from thorium-232, or plutonium-239 bred from uranium-238. When you fission one of these atoms, you get about 200 million eV of energy. Compare that to the roughly 2 eV per atom that you get from chemically burning an atom of coal or oil! Nuclear fuel thus contains about 100 million times as much energy per atom and can be fissioned in relatively small reactors. For more on that see <a href="https://softwarephysics.blogspot.com/2019/11/new-introduction-to-softwarephysics_26.html">Last Call for Carbon-Based Intelligence on Planet Earth</a> and <a href="https://softwarephysics.blogspot.com/2020/10/new-introduction-to-softwarephysics.html">Agile Development of Molten Salt Nuclear Reactors at Copenhagen Atomics</a>.
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-35755523792443388202023-11-08T08:06:00.002-08:002023-11-21T17:42:26.621-08:00But I Still Haven't Found What I'm Looking For<p style="line-height:140%; font-size:120%">
At an age of 72 years and after a lifetime of curious thought, I now find myself with a good working hypothesis for "what's it all about" that has not greatly changed over the past 20 years. As I explained in
<a href="https://softwarephysics.blogspot.com/2011/05/new-introduction-to-softwarephysics.html">What’s It All About?</a> and <a href="https://softwarephysics.blogspot.com/2021/09/new-introduction-to-softwarephysics_16.html">What's It All About Again?</a>, I know that my current working hypothesis must definitely be wrong, but I am coming to the understanding that it might be the best that I will ever be able to come up with before I finally check out of this magnificent Universe. At least my current working hypothesis does not contradict itself as so many other human worldviews seem to do. For more on that see my last post <a href="https://softwarephysics.blogspot.com/2023/10/new-introduction-to-softwarephysics_17.html">The Self-Organizing Recursive Cosmos</a>. I bring this up because recently my wife and I had a luncheon with a dear old friend and her husband. My wife had been a college roommate of this old friend about 50 years ago at the University of Illinois in Urbana, and we have all kept in close touch ever since over the many decades. But after two glasses of wine, this old friend finally built up enough courage to ask if we had given any thought to returning to the religion that she so deeply held and, specifically, to the version of this religion that she currently followed because she thought that the "other" versions of this religion all had it a bit "wrong". Since my wife and I both try to refrain from talking about politics and religion with friends, my wife and I both demurred. But after much persistence on the part of our old friend, I finally unloaded my current working hypothesis on her for "what's it all about" and my current working hypothesis about the Multiverse that she calls God.
<br /><br />
But I must confess that this failure of self-restraint on my part was partly due to the current world situation in the Middle East. Once again, we now find Christians, Jews and Muslims all running around the Middle East shooting at each other with machine guns over their particular model of a God. It seems that all of these very religious people have found what they have been looking for in a model of God depicted as a Middle Eastern warlord. So much so, that throughout the ages they have all been willing to torture and kill for it. This model is more than 3,000 years old and was first formulated by people who thought that the Earth was flat and at the center of the Universe. This model of God was created to help them vanquish their Middle Eastern tribal enemies, and it continues to do so even today. This is what I find so disheartening about the current state of the human condition. Thanks to the Internet and the hard work of many people over the past 400 years, we all now have, for the first time, easy access to the information necessary to truly appreciate the majesty and grandeur of our Universe, and yet we do not. I find that nearly all human beings are still so self-absorbed with their own petty grievances that they are never able to truly enjoy the astounding privilege of being a sentient being in a Universe that has become self-aware. That is indeed a very sad thing to ponder. More alarming, this particular religious sect has also aligned itself with the Alt-Right Fascist MAGA movement that now threatens to end the American Experiment with the 18th-century Enlightenment's radical idea of a self-ruling democratic republic - even though the current leader of the Alt-Right Fascist MAGA movement has always demonstrated the exact opposite of the behavior extolled by the founder of this very sect.
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-48394524833388550762023-10-17T14:42:00.009-07:002023-11-10T05:50:55.473-08:00The Self-Organizing Recursive Cosmos<p style="line-height:140%; font-size:120%">
At an age of 72 years, I must admit that I have now spent most of my life trying to figure out what it's all about before I finally check out of this Universe, as I explained in <a href="https://softwarephysics.blogspot.com/2011/05/new-introduction-to-softwarephysics.html">What’s It All About?</a> and <a href="https://softwarephysics.blogspot.com/2021/09/new-introduction-to-softwarephysics_16.html">What's It All About Again?</a>. With the realization that I am now rapidly running out of time to do so, in this post I would again like to expand upon these ideas after finishing a very remarkable paper in the October 2023 edition of the American Journal of Physics:
<br /><br />
All Objects and Some Questions<br />
<a href="https://pubs.aip.org/aapt/ajp/article/91/10/819/2911822/All-objects-and-some-questions
">https://pubs.aip.org/aapt/ajp/article/91/10/819/2911822/All-objects-and-some-questions</a>
<br /><br />
Below is an intriguing graph from this paper. The authors plot the log of the mass of objects within our Universe against the log of their sizes. The upper triangle, designated "forbidden by gravity", rules out the regime of masses and sizes where the general theory of relativity produces a black hole. The lower triangle, designated "quantum uncertainty", rules out the masses and sizes that are precluded by quantum mechanics; essentially, the Heisenberg Uncertainty Principle prevents objects in this regime from existing. The very small black triangle on the far left, designated "QG" (Quantum Gravity), is where both the general theory of relativity and quantum mechanics preclude objects from existing. This is also where both theories break down and need to be replaced by a quantum theory of gravity that we currently do not have.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiquon14I7jgQCbp6TPa_1hdY_bZvLWLv9zz0OqmrlHXQ_ie-nLWExh0MSfpBzuSbkDMph4E3Ns-8SmAv466xUHhPSFeh7S03KUbIzrVv3p1j_lIh_gok6ZifoBQmSkweyHamkIUMQkx9PZ-LzXEEtmRynhQawKQztEtqmpYpB_-JDTmByN4f7I3Q/s1785/UniversePlot.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="600" data-original-height="1785" data-original-width="1602" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiquon14I7jgQCbp6TPa_1hdY_bZvLWLv9zz0OqmrlHXQ_ie-nLWExh0MSfpBzuSbkDMph4E3Ns-8SmAv466xUHhPSFeh7S03KUbIzrVv3p1j_lIh_gok6ZifoBQmSkweyHamkIUMQkx9PZ-LzXEEtmRynhQawKQztEtqmpYpB_-JDTmByN4f7I3Q/s600/UniversePlot.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 1 – A plot of the allowed masses of objects in our Universe versus the allowed sizes of objects in our Universe.
<p style="line-height:140%; font-size:120%">
Along the boundaries of these forbidden zones, we find all of the very strange objects in our Universe, while in the pink and blue zones, we find the objects that we are more familiar with, like people, planets, stars and galaxies. Along the lower edge, we find the fundamental particles like electrons and neutrinos. Along the top edge, we find all of the black holes of various sizes and masses, from the PBHs (Primordial Black Holes) of the Big Bang to the SMBHs (Super Massive Black Holes) found at the centers of most galaxies. Notice that the Hubble radius also falls along this line of black holes of increasing mass and size. The Hubble radius is defined as the speed of light divided by the current Hubble constant H<sub>o</sub>, which is measured to be about 67 - 73 km/s/Mpc, and comes to a radius of about 13.4 - 14.6 billion light years. Given that, the authors entertain the idea that we might be living inside a black hole, but with some severe reservations. That is because it is more complicated than just having the Hubble radius lie along the black hole line on the graph.
<br /><br />
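Since the Hubble radius is just the speed of light divided by the Hubble constant, the arithmetic is easy to check for yourself. Below is a short Python sketch of my own, using rounded values for the constants, that converts a Hubble constant in km/s/Mpc into a Hubble radius in billions of light years:
<pre>
# Hubble radius = c / H0, converted into billions of light years.
# The constants below are rounded values, not high-precision ones.

C_KM_PER_S = 299_792.458      # speed of light in km/s
LY_PER_MPC = 3.2616e6         # light years in one megaparsec

def hubble_radius_gly(h0):
    """Hubble radius in billions of light years for H0 in km/s/Mpc."""
    radius_mpc = C_KM_PER_S / h0           # c/H0 comes out in megaparsecs
    return radius_mpc * LY_PER_MPC / 1e9   # convert Mpc to Gly

for h0 in (67.0, 73.0):
    print(f"H0 = {h0} km/s/Mpc -> Hubble radius = {hubble_radius_gly(h0):.1f} Gly")
</pre>
Running it prints about 14.6 and 13.4 billion light years for the two ends of the currently measured range of H<sub>o</sub>.
<br /><br />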
To understand why, recall that at the heart of a black hole is a singularity, a point-sized pinch in spacetime with infinite density, where all the current laws of physics break down. Surrounding the singularity is a spherical event horizon. The black hole essentially sucks spacetime down into its singularity with increasing speed as you approach the singularity, and the event horizon is simply where spacetime is being sucked down into the black hole at the speed of light. Because nothing can travel faster than the speed of light, nothing can escape from within the event horizon of a black hole; everything within the event horizon is carried along by the spacetime being sucked down into the singularity faster than the speed of light. You can think of a black hole as a drain in a very large sink of peaceful water. Now imagine that you are a goldfish in that sink. When the drain is first opened, water begins to pour down into the drain with increasing velocity as you get closer to the drain. At some point, you reach a region of water where the current is dragging you down faster than you can swim. That would be the event horizon of the black hole drain. Once you cross the event horizon of the drain, you can still swim around inside before you go crashing down into the drain singularity, but you will never be able to escape it.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWPFG3Aqm6X9l3FWl3GOHNXc5gJoJmWm74VTtD6zGb-yrLuSsf96D5Jgwnxyjt0Y8IC9FZ5KixcvQ4YpLyvtdGLMHLvwzKQRu0xDGFx232RfOpHAGPrKE4L9geYZO1f0Q8wNWMJhW-vmRA-oz3JuSfU3VXOdRHesYfWA0ixl92DWLYBuZuwgN6nQ/s602/HubbleRadius.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="511" data-original-width="602" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWPFG3Aqm6X9l3FWl3GOHNXc5gJoJmWm74VTtD6zGb-yrLuSsf96D5Jgwnxyjt0Y8IC9FZ5KixcvQ4YpLyvtdGLMHLvwzKQRu0xDGFx232RfOpHAGPrKE4L9geYZO1f0Q8wNWMJhW-vmRA-oz3JuSfU3VXOdRHesYfWA0ixl92DWLYBuZuwgN6nQ/s600/HubbleRadius.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 2 – The "Observable Universe" is now about 46 billion light years away from us. But the current Hubble radius is now only about 13.7 - 14.5 billion light years away. The current Hubble radius is where spacetime is currently expanding away from us faster than the speed of light. So the Hubble radius is the furthest distance that we could now reach traveling at the speed of light and the light from objects beyond the current Hubble radius will never reach us.
<p style="line-height:140%; font-size:120%">
Similarly, the Hubble radius is where spacetime is expanding away from us at the speed of light. That means we could never reach such a distance, even if our goldfish could swim at the speed of light. Also, any light now being emitted by objects beyond the Hubble radius will never be able to reach us because it would be swimming against a current of spacetime flowing away from us faster than the speed of light. In Figure 2 above, we see that the "Observable Universe" is now about 46 billion light years away from us and consists of all the light that has been traveling towards us for the entire age of the Universe. During all of those billions of years, the Universe has expanded greatly, so that the objects that emitted that light are now thought to be about 46 billion light years away. On the other hand, the current Hubble radius is now only about 13.4 - 14.6 billion light years away. But when the light now at the current Hubble radius finally does reach us, the objects that emitted that light will by then be many billions of light years beyond the current Hubble radius because the expanding spacetime of our Universe will have floated them quite far away from us. I know that these cosmic horizons are very confusing. In actuality, everything that we can currently observe must be within the current Hubble radius, including the photons from the "Observable Universe". Remember, the photons from the "Observable Universe" are actually now at a distance of zero from us because they are just now finally hitting our telescopes after their very long journey since the Big Bang!
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjscmEGjh0fiPexihs__wbaOhJGHgqBIAbXVKwdbwb-xTUAhVXeLOdYvrJiwyprR9fPSSQIAnlGdL6CNkiTNzAPlN3RYt5PKv-7y5MFy-wX8x0ElLOX5IljBrm6I9wo4i2XRFQ7s4e8j7wfi4JIZwAunbOVh5i2dwmCeJzTSSrBeILEGj4LKoKHqQ/s999/BlackHoleDensityMap.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="599" data-original-width="999" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjscmEGjh0fiPexihs__wbaOhJGHgqBIAbXVKwdbwb-xTUAhVXeLOdYvrJiwyprR9fPSSQIAnlGdL6CNkiTNzAPlN3RYt5PKv-7y5MFy-wX8x0ElLOX5IljBrm6I9wo4i2XRFQ7s4e8j7wfi4JIZwAunbOVh5i2dwmCeJzTSSrBeILEGj4LKoKHqQ/s600/BlackHoleDensityMap.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 3 – Figure 2 above displays a Universe with a mass-energy density external to our current Hubble radius that is very similar to the mass-energy density of our Universe within our current Hubble radius. In Figure 2 this mass-energy density extends out to infinity as most cosmologists currently think is true. The authors of the above paper go through all of the mathematics that predicts that if these cosmologists are wrong and the mass-energy density of our Universe beyond our current Hubble radius is actually zero, then we truly are living inside of a black hole and everything within our Universe will eventually collapse into a singularity with a "Big Crunch". The authors think this model is highly unlikely because it would place us exactly at the center of our Universe and Copernicus already fought that battle for us over 400 years ago.
<p style="line-height:140%; font-size:120%">
Instead, the authors propose that the white dot towards the left edge of the plot, designated as the Instanton, might be representative of the initial conditions of our Universe, at least in terms of our current theories of quantum mechanics and general relativity. At an age of one Planck time, the Instanton would be a black hole with a radius of one Planck length and a mass of one Planck mass, all at the tremendously high Planck temperature. Note that unlike meters and seconds, which are based on arbitrary human conventions, the Planck Units are all based on the fundamental constants of our Universe, such as the gravitational constant G, the speed of light c, Boltzmann's constant k<sub>B</sub> and the reduced Planck constant ħ.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIGLs9eZUHa1xHsQHyE1N-ERdWv0O33Tbaw4C1eniRqg_sEr0oIaFKwOsr3D4oP3HAD_JDYEggFGXgWfvtCPZ5LE-UZf9WRfTuKElK4Hz1Rn6zeV4tw2cYBskYoH2iqVAEaCSOKO3Yl2lbJp3n49y-Yi-wFdGj3t11kQ7fIOUkpw_FRSU45F64hg/s602/PlanckUnits.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="426" data-original-width="602" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIGLs9eZUHa1xHsQHyE1N-ERdWv0O33Tbaw4C1eniRqg_sEr0oIaFKwOsr3D4oP3HAD_JDYEggFGXgWfvtCPZ5LE-UZf9WRfTuKElK4Hz1Rn6zeV4tw2cYBskYoH2iqVAEaCSOKO3Yl2lbJp3n49y-Yi-wFdGj3t11kQ7fIOUkpw_FRSU45F64hg/s600/PlanckUnits.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 4 – The Planck Units are not based on arbitrarily chosen units like the second that is based upon the time between high noons on the Earth. The Planck Units are based on the fundamental constants of our Universe as listed in the Defining Expression column above.
<p style="line-height:140%; font-size:120%">
Again, to fully understand our Universe prior to the initial state of the Instanton, we would need a theory of quantum gravity and we currently do not have one. Both our general theory of relativity and quantum mechanics break down prior to the arrival of the Instanton. But at least this would imply that although we are not currently living within a black hole, our Universe may have indeed been <em>produced</em> by a black hole. Given that thought, the above paper inspired me to, once again, reread Lee Smolin's classic <em>Life of the Cosmos</em>.
<br /><br />
<em>Life of the Cosmos</em> (1997)<br />
<a href="
https://tkececi.files.wordpress.com/2009/12/the-life-of-the-cosmos.pdf">https://tkececi.files.wordpress.com/2009/12/the-life-of-the-cosmos.pdf</a>
<br /><br />
For me, Lee Smolin's <em>The Life of the Cosmos</em> is one of the most significant books shaping my current personal worldview, much like Richard Dawkins' <em>The Selfish Gene</em>, because it explains so much. Both books convey very deep philosophical thoughts that can be of use in trying to make sense of it all. Lee Smolin is a founding and senior faculty member at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada, and an adjunct professor of physics at the University of Waterloo. Smolin is also a member of the graduate faculty of the philosophy department at the University of Toronto. Lee Smolin has made significant contributions to a wide range of areas in physics, including quantum gravity, cosmology, and the foundations of quantum mechanics. He is one of the leading proponents of loop quantum gravity, a promising approach to unifying quantum mechanics and general relativity. He has also developed the theory of cosmological natural selection, which posits that the laws of physics have themselves evolved, producing a Multiverse that is home to a multitude of different universes with different physical laws.
<br /><br />
<strong>Could the Instanton be a Black Hole From a Previous Universe?</strong><br />
In <em>The Life of the Cosmos</em>, Lee Smolin ponders why our Universe is so seemingly fine-tuned to contain complex things like galaxies, stars, planets and living things. Lee Smolin begins by explaining that our current Standard Model of particle physics has about 20 parameters that have to be experimentally measured, such as the masses of the particles, the strengths of their electric charges and the strengths of the force-carrying bosons. The Standard Model does not predict the values of these parameters. Instead, they have to be experimentally measured and then plugged into the Standard Model in order for it to predict how our Universe works. Smolin then goes on to explain that if any of these 20 parameters were changed by even the slightest amount, our Universe would be incapable of supporting complexity. For example, the odds of randomly producing a universe with 20 such parameters that could even have stars would be only about one in 10<sup>229</sup>! Lee Smolin sees this as a huge challenge to the platonic concept of a Universe framed by a beautiful set of fixed and immutable mathematical laws, with fixed constants, that make it all unfold before us.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZW7Iz4hViZPlcFBOmJ5B3QodUQlbIAXgabpV1OV_-OrebeuHqGowovknC65tgLbtyUrZkPY-7FgRCqTs3yI8ovfaNg1EbxVIoKxITB5xbXkq9xd2iL-tru-P4kW2rUYl04_tH_fr74A8Gv6xb9M2giG0BOoyasB6RKVS6LzlgOITOWFTwflSCCA/s685/StandardModel.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="403" data-original-width="685" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZW7Iz4hViZPlcFBOmJ5B3QodUQlbIAXgabpV1OV_-OrebeuHqGowovknC65tgLbtyUrZkPY-7FgRCqTs3yI8ovfaNg1EbxVIoKxITB5xbXkq9xd2iL-tru-P4kW2rUYl04_tH_fr74A8Gv6xb9M2giG0BOoyasB6RKVS6LzlgOITOWFTwflSCCA/s600/StandardModel.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 5 – The Standard Model of particle physics is composed of quantum fields that we observe as particles. The matter particles are called fermions and have a spin of ½. The force-carrying particles are called bosons and they have a spin of 1. The Higgs boson has a spin of 0 and is a scalar field.
<p style="line-height:140%; font-size:120%">
If you count the antimatter particles, the Standard Model contains 12 quarks and 12 leptons. But each of the 12 quarks comes in one of three color charges - red, green or blue - so there really are 36 different quarks. Similarly, there really are 8 gluons because each gluon carries a combination of two color charges (one of red, green or blue and one of antired, antigreen or antiblue). So that comes to 36 quarks + 12 leptons + 13 bosons = 61 particles!
<br /><br />
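The bookkeeping is simple enough to check with a few lines of Python, using the counting conventions of the paragraph above:
<pre>
# Tallying the Standard Model particles using the counting conventions
# of the paragraph above.
quarks = 6 * 2 * 3       # 6 flavors x antimatter x 3 color charges = 36
leptons = 6 * 2          # 6 leptons plus their antiparticles       = 12
gluons = 8               # color/anticolor combinations
other_bosons = 5         # photon, W+, W-, Z and the Higgs
bosons = gluons + other_bosons       # = 13
print(quarks + leptons + bosons)     # -> 61
</pre>
<br /><br />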
Many other physicists are also perplexed by the complexities of the Standard Model. There just seem to be way too many fundamental particles and forces. Some have looked for a simpler and more comprehensive "Theory of Everything" that could easily fit on a T-shirt. Smolin sees this as a desire to return to the simple Platonic Forms that eternally exist outside of our Universe as a pure and simple mathematical pattern for our reality.
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbiLd_ChMW-2gigszXs9DXnioNkcoCvTOwFu2V1AfQzUCfnaOAHQ7mJ3Ony2Rg2YvI0iahbxf-rYXNVeX5FrbrGMJxIgr6tC2BSFjAt10RTwFf5_1cwcqeyjZNFGiZs82V2oWrMw/s1600/periodic+table.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbiLd_ChMW-2gigszXs9DXnioNkcoCvTOwFu2V1AfQzUCfnaOAHQ7mJ3Ony2Rg2YvI0iahbxf-rYXNVeX5FrbrGMJxIgr6tC2BSFjAt10RTwFf5_1cwcqeyjZNFGiZs82V2oWrMw/s640/periodic+table.png" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 6 – Early in the 20th century, physicists were also perplexed by the very numerous fundamental elements of the Periodic Table. But atom smashers soon revealed that this very large number of fundamental elements were all simply composed of three particles - protons, neutrons and electrons. Perhaps the same could be said of the numerous particles of the Standard Model.</p><p style="line-height:140%; font-size:120%">
The desire for a simpler model has led many physicists to search for things such as string theory, which has been struggling for the past 40 years. Smolin is an ardent critic of such efforts - see <a href="https://softwarephysics.blogspot.com/2015/09/new-introduction-to-softwarephysics_13.html">The Danger of Believing in Things</a> for more on that.
<br /><br />
Beyond a platonic desire for an eternal transcendental simplicity, many physicists remembered that they had already been fooled once before by a seemingly fine-tuned mess that, nonetheless, seemed to work just fine, going all the way back to Ptolemy's original model of the Universe.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjH0rqfZKHdEWOuK__npmfTwyBE_2JxI1L4_SFEswuGO7f6t4X43jVIfEA2jgY6q6bjCd0jkFj7er7B7skKqbPVlVLcnRU64ZVi6BQx5bq8IUdkgnvsKpL1fJc5vAgU5VYkO6n1njZBxT7GkJOp5XUa8kWCf_iDWdbcMvNWI9aYgAeoKk2kuAYvSQ/s384/Epicycles.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="500" data-original-height="305" data-original-width="346" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjH0rqfZKHdEWOuK__npmfTwyBE_2JxI1L4_SFEswuGO7f6t4X43jVIfEA2jgY6q6bjCd0jkFj7er7B7skKqbPVlVLcnRU64ZVi6BQx5bq8IUdkgnvsKpL1fJc5vAgU5VYkO6n1njZBxT7GkJOp5XUa8kWCf_iDWdbcMvNWI9aYgAeoKk2kuAYvSQ/s600/Epicycles.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 7 – In Ptolemy's model of the Universe, the Earth was not even at the center of the Universe! Instead, the Earth was slightly displaced from the true center of the Universe which was called the Equant. The planets orbited the Equant on a circular Deferent at a constant speed. In addition, each planet also moved in a circular motion about the Deferent on its own Epicycle that had its own radius. With all of that complicated compound motion, Ptolemy was able to predict the motions of the planets in the night sky accurately enough to match the very primitive observational astronomical instruments of the day.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhl-XVi0NEGfyQypYu3tiRLgB6Y-1FGRvfro890593QVBVtuweVqr6exI47YjeEvzlXYZOKZ35-tpa83lvU05Bm8R-ggBaRIt8obuamhrDT-y-JiHmbK4eql5VxbsuqHIv5xrfm8xrRKprfCYv9AKaJmDD-6YhzoD4alEiYYNGR_rExqLE7qkjjyQ/s640/PtolemyCompoundMotion.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="480" data-original-width="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhl-XVi0NEGfyQypYu3tiRLgB6Y-1FGRvfro890593QVBVtuweVqr6exI47YjeEvzlXYZOKZ35-tpa83lvU05Bm8R-ggBaRIt8obuamhrDT-y-JiHmbK4eql5VxbsuqHIv5xrfm8xrRKprfCYv9AKaJmDD-6YhzoD4alEiYYNGR_rExqLE7qkjjyQ/s600/PtolemyCompoundMotion.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 8 – However, even though Ptolemy's Mess seemed to work, it resulted in some very complex compound motions for the planets in our Solar System!
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-jgRaK5uXzElsKc8xj_ZeaN7Nu72sF8Ej8s3e-vG5MRiKxOoR8d3eAVgnB02burNIbAiIY_Ak-Lgc8UMZJaKrnYner7O7B54MyaBMyBvkFZoJBDTRHq2qdxj-8J7yQcQUpgjdhsumIuz9UrvkcZmxKhqGU9_ZDsZCSzEKgO6_x7NZdirZHxcbBQ/s960/Copernican+Model.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="720" data-original-width="960" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-jgRaK5uXzElsKc8xj_ZeaN7Nu72sF8Ej8s3e-vG5MRiKxOoR8d3eAVgnB02burNIbAiIY_Ak-Lgc8UMZJaKrnYner7O7B54MyaBMyBvkFZoJBDTRHq2qdxj-8J7yQcQUpgjdhsumIuz9UrvkcZmxKhqGU9_ZDsZCSzEKgO6_x7NZdirZHxcbBQ/s600/Copernican+Model.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 9 – Copernicus had a much simpler model for the Universe that also made accurate predictions of the motions of the planets and that did not require fine-tuned compound motions of the planets. All that was needed was to place the Sun at the center of the Universe and have the Earth orbit around it like the other planets.
<p style="line-height:140%; font-size:120%">
In contrast, Smolin suggests in <em>The Life of the Cosmos</em> that the very messy mathematical "laws" and "constants" of the Standard Model and general relativity might be the real deal after all, and for a very good reason. Perhaps they evolved to their present values in a Darwinian sense, through a random walk through an infinite number of possible universes, marked by a history of accidental modifications honed by natural selection. Smolin proposes that since the only other example in our Universe of such a similarly fine-tuned mess that nonetheless works is manifested by our biosphere, we should look to the biosphere for an explanation of the fine-tuning that we see in the cosmos. After all, the odds of forming a human baby out of a random mixture of a few kilograms of water and organic molecules would be far slimmer than one in 10<sup>229</sup>.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjnZaD3H4oT3OZRVz4WLdbMUg7o5hIAwitwa9w7Spgkae8D4wq0WUZ2bcBkTKsOQqakJHar44DAj6F0h-NiahRDtkYJB20lH-OPRodhgdmIRr_N0X8eIjNDdGZul2HNMT_MtdcJRtdD-iNE3prvTDxbDPa5aR2a6fu_ND4PptG9s3gqqT5O6y_g3Q/s1600/MetabolicPathways.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1129" data-original-width="1600" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjnZaD3H4oT3OZRVz4WLdbMUg7o5hIAwitwa9w7Spgkae8D4wq0WUZ2bcBkTKsOQqakJHar44DAj6F0h-NiahRDtkYJB20lH-OPRodhgdmIRr_N0X8eIjNDdGZul2HNMT_MtdcJRtdD-iNE3prvTDxbDPa5aR2a6fu_ND4PptG9s3gqqT5O6y_g3Q/s600/MetabolicPathways.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 10 – Above is a simplified flowchart of the metabolic pathways used by carbon-based life. Notice that it too is a fine-tuned mess that seems to work nonetheless. However, in this case, we do know that carbon-based life actually is a fine-tuned mess that works superbly despite all of its complexity. Biologists had to come up with some mechanism to explain how such a fine-tuned mess came to be and they finally did so with Darwinian thought.
</p><p style="line-height:140%; font-size:120%">
Living things are incredible examples of highly improbable fine-tuned systems, and this fine-tuning was accomplished via the Darwinian mechanisms of inheritance and innovation honed by natural selection. Along these lines, Lee Smolin proposes that when the singularity of a black hole forms, it bounces to produce a white hole in another universe, and that white hole is observed in the new universe as a Big Bang. He also proposes that the physics in the new universe would essentially be the same as the physics in the parent universe, but with the possibility of slight variations arising when a black hole reaches the Planck density. In this view, the seemingly fixed and immutable laws and constants of our Universe are no more so than the fixed and immutable species of a pre-Darwinian biosphere. For Lee Smolin, a universe whose physics was good at creating black holes would tend to outproduce universes whose physics was not. Thus, a selection pressure would arise that selected for universes with a physics that was good at making black holes, and so a kind of Darwinian natural selection would occur in the Cosmic Landscape of the Multiverse. Over an infinite amount of time, the universes that were good at making black holes would be found to dominate the Cosmic Landscape. He calls this effect <em>cosmological natural selection</em>. One of the predictions made by Lee Smolin's model of the Multiverse is that we should most likely find ourselves in a universe that is very much like our own and that has an abundance of black holes. Such universes should be the norm and not the exception. Thus, for Lee Smolin, searching for an eternal and external God or Platonic Form, defined in eternal and unchanging mathematical terms by an eternal theory of everything, is a fool's errand. It would be like trying to find an equation that could predict all of the current species of the Earth without dealing with the historical evolution of the biosphere.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTr8E1xYJwq6YSRGw4vgPb5MZBsjI9NYdCjybMuH-6XB6xi3vh3LZjFYrZRVtl8ynGe-9JNepRmHgLeFwn1SDeA9kBtYJM5cq3clY0tiIDy_YOULFxmWfUqpMZZo09zAu2YpdKQA/s1200/BlackHoleBigBang.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="630" data-original-width="1200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTr8E1xYJwq6YSRGw4vgPb5MZBsjI9NYdCjybMuH-6XB6xi3vh3LZjFYrZRVtl8ynGe-9JNepRmHgLeFwn1SDeA9kBtYJM5cq3clY0tiIDy_YOULFxmWfUqpMZZo09zAu2YpdKQA/s600/BlackHoleBigBang.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 11 - In Lee Smolin's the <em>Life of the Cosmos</em> he proposes that the black holes of one universe puncture the spacetime of the universe, causing a white hole to appear in a new universe similar to the Instanton of Figure 1. </p><p style="line-height:140%; font-size:120%">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhe2monswMB3-YlEOSk9IwvjBifevuZ_0qeXZ6acAJx3tF7ECf0-R-22iDQt1m8859-Cj8AK_7LNP2CgvbBLMGxprRwOFxO42x26_hyphenhyphenNeUHkzNl9mCwPc5f9-vEGbPnadQBvnH6iA/s1600/BubbleUniverses.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 235px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhe2monswMB3-YlEOSk9IwvjBifevuZ_0qeXZ6acAJx3tF7ECf0-R-22iDQt1m8859-Cj8AK_7LNP2CgvbBLMGxprRwOFxO42x26_hyphenhyphenNeUHkzNl9mCwPc5f9-vEGbPnadQBvnH6iA/s400/BubbleUniverses.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5599681338018424434" /></a><p style="text-align: center; line-height:140%; font-size:120%">Figure 12 – In Lee Smolin's model, cosmological natural selection selects for new Universes that are fine-tuned to produce many black holes.</p><p style="line-height:140%; font-size:120%">
For Smolin, the intelligent beings in our Universe are just a fortuitous by-product of making black holes. In order for a universe to make black holes, it must exist for many billions of years, and do other useful things, like easily make carbon in the cores of stars. Carbon is necessary to make black holes because carbon is necessary to make stars, and it is not easy to make carbon. The carbon dust in molecular clouds that was expelled from earlier stars shades the molecular clouds from the bright stars that are forming within them, and the carbon-based organic molecules within molecular clouds can also radiate away infrared radiation to cool the clouds as they contract into stars. To make carbon, two helium-4 nuclei must first fuse in the core of a star to form beryllium-8. Then another helium-4 nucleus can fuse with the beryllium-8 to form carbon-12. The problem is that beryllium-8 has a half-life of only 8.19 x 10<sup>-17</sup> seconds before it decays back into two helium-4 nuclei, and that does not provide very much time for the beryllium-8 to absorb a helium-4 nucleus and become carbon-12. Changing the strength of the strong nuclear force or the masses of protons and neutrons by the slightest amount would mean far fewer stars and far fewer black holes in our Universe. For example, in 1953 Fred Hoyle published "On Nuclear Reactions Occurring in Very Hot Stars". In the paper, Hoyle argued that the fusion of three helium-4 nuclei could not occur at a significant rate without the existence of a nuclear resonance of carbon-12 at an energy of about 7.68 million electron volts to absorb the collisional energy of the nuclei. Hoyle's prediction was confirmed a few years later by experimental physicists at Caltech, where William Fowler and his colleagues reported the discovery of a resonance state in carbon-12 at an energy level of about 7.65 million electron volts. Without this carbon-12 resonance, there would be far fewer black holes in our Universe and no us either. Similarly, in <em>The Life of the Cosmos</em> Lee Smolin runs through a good number of scenarios showing that by just slightly altering the Standard Model or the general theory of relativity, we would end up with a Universe that could produce no black holes, or far fewer black holes. He has a very hard time coming up with an alteration that could produce more, and finally just leaves that as an exercise for the student.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigLupAca2tkbo4QtwKuTJbZeD-tIzcUcK4wNy_px7_cMeDEMS5jgdLSJI_7kDfdB0qLjbZD8FIm0ZWcQH0EiwkT0wNsK0pHdXL8hOXJhaOYvFd_3G5CvQt_H82yZ5PXL2zTcU2M4cUzN26CB0xeY1kZMGOyLVFoq5GOANBuDGvs35P41Y2ci4kZA/s800/Triple-Alpha_Process.svg.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="500" data-original-width="800" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEigLupAca2tkbo4QtwKuTJbZeD-tIzcUcK4wNy_px7_cMeDEMS5jgdLSJI_7kDfdB0qLjbZD8FIm0ZWcQH0EiwkT0wNsK0pHdXL8hOXJhaOYvFd_3G5CvQt_H82yZ5PXL2zTcU2M4cUzN26CB0xeY1kZMGOyLVFoq5GOANBuDGvs35P41Y2ci4kZA/s600/Triple-Alpha_Process.svg.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">Figure 12 – Making carbon in our fine-tuned Universe is not easy.
</p><p style="line-height:140%; font-size:120%">
All of these factors aid in the formation of intelligent beings, even if those intelligent beings might be quite rare in such a universe. I have always liked Lee Smolin’s theory about black holes in one universe spawning new universes in the Multiverse, but I have always been bothered by the idea that intelligent beings are just a by-product of black hole creation. We still have to deal with the built-in selection biases of the Weak Anthropic Principle. Nobody can deny that intelligent beings will only find themselves in a universe that is capable of supporting intelligent beings. I suppose the Weak Anthropic Principle could be restated to say that black holes will only find themselves existing in a universe capable of creating black holes, and that a universe capable of creating black holes will also be capable of creating complex intelligent beings out of the leftovers of black hole creation. However, in <em>The Life of the Cosmos</em>, Smolin takes a very dim view of the Weak Anthropic Principle because, in his view, it cannot be falsified in a Popperian manner and cannot make any useful predictions beyond what is necessarily already observed. In fact, he devotes an entire chapter of <em>The Life of the Cosmos</em> to saying so.
<br /><br />
<strong>Could Cosmic Intelligence be the Driving Force of Creation?</strong><br />
Towards the end of <em>In search of the multiverse: parallel worlds, hidden dimensions, and the ultimate quest for the frontiers of reality</em> (2009), John Gribbin proposes a different solution to this quandary. Perhaps intelligent beings in a preceding universe might be responsible for creating the next generation of universes in the Multiverse by attaining the ability to create black holes on a massive scale. For example, people at CERN are currently trying to create mini-black holes with the LHC. Currently, it is thought that there is a supermassive black hole at the center of the Milky Way galaxy, and apparently at the centers of most other galaxies as well. In addition to the supermassive black holes found at the centers of galaxies, there are also numerous stellar-mass black holes that form when the most massive stars in the galaxies end their lives in supernova explosions. For example, our Milky Way galaxy contains several hundred billion stars, and about one out of every thousand of those stars is massive enough to become a black hole. Therefore, our galaxy should contain about 100 million stellar-mass black holes. Actually, the estimates run from about 10 million to a billion black holes in our galaxy, with 100 million black holes being the best order-of-magnitude guess. So let us presume that it took the current age of the Milky Way galaxy, about 10 billion years, to produce 100 million black holes naturally. Currently, the LHC at CERN can produce at least 100 million collisions per second, which is about the number of black holes that the Milky Way galaxy produced in 10 billion years. Now imagine that we could build a collider that produced 100 million black holes per second. Such a prodigious rate of black hole generation would surpass the natural rate of black hole production in our galaxy by a factor of roughly 3 x 10<sup>17</sup>. Clearly, if only a single technological civilization with such capabilities should arise at any time during the entire history of each galaxy within a given universe, such a universe would spawn a huge number of offspring universes, compared to those universes that could not sustain intelligent beings with such capabilities. As Lee Smolin pointed out, we would then see natural selection in action again because the Multiverse would come to be dominated by universes in which it was easy for intelligent beings to make black holes with a minimum of technology. The requirements simply would be that it was very easy for a technological civilization to produce black holes, and that the universe in which these very rare technological civilizations find themselves is at least barely capable of supporting intelligent beings. It seems that these requirements describe the state of our Universe quite nicely. This hypothesis helps to explain why our Universe seems to be such a botched job from the perspective of providing a friendly home for intelligent beings and software. All that is required for a universe to dominate the Cosmic Landscape of the Multiverse is for it to meet the bare minimum of requirements for intelligent beings to evolve, and more importantly, to allow those intelligent beings to easily create black holes. Since software is needed in all such universes to run the machines that generate the black holes, that explains why our Universe is capable of supporting software, but just barely so, and that is why software is so rare within our galaxy and Universe.
<br /><br />
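The order-of-magnitude claim above is easy to verify. Here is the arithmetic as a short Python sketch of my own, using the round numbers from the paragraph above:
<pre>
# The order-of-magnitude comparison from the paragraph above: a collider
# making 1e8 black holes per second versus a galaxy that took 10 billion
# years to make 1e8 black holes naturally.
SECONDS_PER_YEAR = 3.156e7

natural_rate = 1e8 / (10e9 * SECONDS_PER_YEAR)    # black holes per second
collider_rate = 1e8                               # black holes per second

print(f"natural rate:  {natural_rate:.1e} black holes/second")
print(f"collider rate: {collider_rate:.1e} black holes/second")
print(f"ratio:         {collider_rate / natural_rate:.1e}")   # ~3e17
</pre>
<br /><br />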
<strong>Is the Multiverse a Recursive Function?</strong><br />
For me, such a self-replicating Multiverse has an almost fractal nature at heart, like the Mandelbrot set, which is defined by a simple recursive algorithm in the complex plane:<br /><br />Z<sub>n+1</sub> = Z<sub>n</sub><sup>2</sup> + C<br /><br /> <br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjf-TupKwp8rNXuHc8x84OnRtW5-iV3MKOIzQIY9tUE498GtyxK38WUSGVDuJq8uw1dSkvPPIZOJCjX459kx2HHxE1XHT4A8edt3-qTc9XTecwI5_t4URxhnHtr-yH-P3st92wLnw/s1600/Fractal.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 400px; height: 300px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjf-TupKwp8rNXuHc8x84OnRtW5-iV3MKOIzQIY9tUE498GtyxK38WUSGVDuJq8uw1dSkvPPIZOJCjX459kx2HHxE1XHT4A8edt3-qTc9XTecwI5_t4URxhnHtr-yH-P3st92wLnw/s400/Fractal.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5599682187848032082" /></a><p style="text-align: center; line-height:140%; font-size:120%">Figure 14 – The Multiverse can be thought of as a fractal form of self-replicating information, like the Mandelbrot set, with no beginning and no end (click to enlarge)</p><p style="line-height:140%; font-size:120%">
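To see this simple recursive algorithm in action, here is a short Python sketch of my own that iterates Z<sub>n+1</sub> = Z<sub>n</sub><sup>2</sup> + C for a grid of points C and prints a crude ASCII image of the Mandelbrot set:
<pre>
# A crude ASCII rendering of the Mandelbrot set: a point C belongs to
# the set if iterating Z = Z*Z + C from Z = 0 never escapes to infinity.
def in_mandelbrot(c, max_iter=50):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c            # the recursive rule from the text
        if abs(z) > 2.0:         # the orbit escaped; c is outside the set
            return False
    return True

for im in range(12, -13, -2):          # imaginary axis, 1.2 down to -1.2
    row = ""
    for re in range(-40, 21):          # real axis, -2.0 up to 1.0
        row += "*" if in_mandelbrot(complex(re / 20, im / 10)) else " "
    print(row)
</pre>
<br /><br />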
For more on that, let's ask Google's Bard to explain how recursive functions work. My prompt was:
<br /><br />
In computer science, please describe what a recursive function is.
<br /><br />
<font color="blue">Bard<br />
In computer science, a recursive function is a function that calls itself directly or indirectly. This means that a recursive function solves a problem by breaking it down into smaller instances of the same problem, and then recursively solving those smaller instances.
<br /><br />
Recursive functions can be difficult to understand at first, but they can be very powerful and elegant ways to solve certain problems. For example, many common algorithms, such as quicksort and mergesort, are recursive.
<br /><br />
Here is a simple example of a recursive function in Python:
<br /><br />
<pre>
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

print(factorial(5))
</pre>
<br /><br />
This function calculates the factorial of a number, which is the product of all the positive integers less than or equal to that number. For example, 5 factorial (5!) is equal to 120.
<br /><br />
The factorial function works by recursively breaking down the problem into smaller instances. For example, to calculate 5!, the function first calculates 4!, which is equal to 24. It then multiplies 5 by 24 to get the final answer of 120.
<br /><br />
Recursive functions can be a bit tricky to debug, but they can be a very powerful tool for solving certain types of problems.
<br /><br />
Here are some tips for writing recursive functions:
<br /><br />
* Make sure to have a base case, which is a case where the function can return a result without calling itself recursively.<br />
* Make sure that the recursive case always moves closer to the base case.<br />
* Be careful not to create infinite recursion, which happens when a function calls itself forever without ever reaching the base case.
<br /><br />
If you are new to recursion, I recommend starting with some simple examples, such as the factorial function. Once you understand how simple recursive functions work, you can start to tackle more complex problems.
</font>
<br /><br />
The Multiverse described above would be an example of an infinite recursion, endlessly calling itself for all of eternity, without ever reaching an initial base case, a base case that never actually existed. It would also be one in which the code for the recursive function would be allowed to change slightly, in a random manner, with each iteration. The successful recursive functions of such a Multiverse would be those that evolved to carry on without crashing into oblivion. As Lee Smolin explained in <em>The Life of the Cosmos</em>:
<br /><br />
<font color="blue">
At its root, the most elemental operation in logic is the making of a distinction. We do this whenever we separate one part of the world from another, or when we define a set that contains only certain things and not others. It is possible to make distinctions in the real world because we live in a world full of structure and variety. To the extent that this is the result of intrinsic processes of self-organization, it might be considered that the possibility of self-reference is what makes logic possible. Thus, if it is the case that the laws of nature have been constructed, over time, by processes of self-organization, then the old dream of reducing science to logic may to some extent be realized. But this will be possible, not in the timeless sense dreamed about by Platonists, but only in the sense that logic, expressed in time and complicated by the possibility of self-reference, is self-organization.
</font>
<br /><br />
Such a model of the Multiverse would be an example of the Strong Anthropic Principle in action, where intelligent beings would only find themselves in universes that were specifically designed to sustain intelligent beings. The Strong Anthropic Principle is frowned upon by most scientists because it seems to harken back to the ancient religious mythologies of yore, with an eternal and external god-like force acting behind the scenes to create universes designed for creatures such as ourselves. However, this model really only requires that an eternal form of self-replicating Intelligence be at work, with no need for any mysticism whatsoever.
<br /><br />
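To make the analogy concrete, below is a toy Python sketch, entirely my own caricature and not Smolin's mathematics, of a recursive "Multiverse" in which each universe carries a single physics parameter that mutates slightly with every generation, and in which universes whose parameter is better at making black holes spawn more offspring universes:
<pre>
# A toy caricature of cosmological natural selection (my own invention,
# not Smolin's mathematics). Each universe carries one "physics
# parameter"; universes whose parameter is better at making black holes
# spawn more offspring, each with a slightly mutated parameter.
import random

def n_offspring(param):
    # Invented fitness function: black hole production peaks at param = 1.0.
    return max(0, round(3.0 - 4.0 * abs(param - 1.0)))

def spawn(param, generation, universes):
    if generation == 0:                 # base case to keep the toy finite
        universes.append(param)
        return
    for _ in range(n_offspring(param)):
        child = param + random.gauss(0.0, 0.05)   # slight random mutation
        spawn(child, generation - 1, universes)

random.seed(42)
final = []
spawn(param=0.5, generation=8, universes=final)
if final:
    print(f"{len(final)} universes, mean parameter "
          f"{sum(final) / len(final):.2f}")       # drifts toward 1.0
else:
    print("every lineage went extinct")
</pre>
Run it a few times with different seeds and the surviving population drifts toward the parameter value that maximizes black hole production, which is the essence of cosmological natural selection.
<br /><br />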
<strong>Conclusion</strong><br />
Lee Smolin ends <em>The Life of the Cosmos</em> with this very Darwinian paragraph:
<font color="blue">
<br /><br />
So there never was a God, no pilot who made the World by imposing order on chaos and who remains outside, watching and proscribing. And Nietzsche now also is dead. The eternal return, the eternal heat death, are no longer threats,
they will never come, nor will heaven. The world will always be here, and it will always be different, more varied, more interesting, more alive, but still always the world in all its complexity and incompleteness. There is nothing behind it, no absolute or platonic world to transcend to. All there is of Nature is what is around us. All there is of Being is relations among real, sensible things. All we have of natural law is a world that has made itself. All we may expect of human law is what we can negotiate among ourselves, and what we take as our responsibility. All we may gain of knowledge must be drawn from what we can see with our own eyes and what others tell us they have seen with their eyes. All we may expect of justice is compassion. All we may look up to as judges are each other. All that is possible of Utopia is what we make with our own hands. Pray let it be enough.
</font>
<br /><br />
As always, this still leaves us with the eternal unanswered question of a "First Cause" that has baffled all philosophers throughout the ages. But perhaps there never really was one.
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-6567316134398814172023-10-07T12:42:00.009-07:002023-10-24T05:30:41.262-07:00How to Use Your Browser to Easily Explore the Complete Night Sky From Home With the Most Advanced Software That is Available<p style="line-height:140%; font-size:120%">
In this post, I would like to introduce you to two very powerful browser-based astronomical applications - the ESASky and WorldWide Telescope web clients. Personally, I love the serenity that comes from using astronomical software to calm a troubled mind. These two applications are linked together and can even be used during the boring parts of online meetings. For example, in my last IT job, I spent 13 years in the Middleware Ops department of the Discover credit card company. In the middle of the night, during large installs or outage conference calls, there were long stretches of dead time when I would have to patiently stand by as other groups did their work. In order to not fall asleep, I found it quite useful to go on extended tours of our Universe. My hope is that you will be able to do so as well during any boring online meetings you might be obliged to attend.
<br /><br />
Basically, there are two kinds of astronomical software. Astronomical simulation software lets you go to places in the Universe that you will never be able to actually see or experience, and at times well outside of your brief stay in this Universe, as I pointed out in <a href="https://softwarephysics.blogspot.com/2018/11/new-introduction-to-softwarephysics.html">SpaceEngine - the Very Finest Available in 3-D Astronomical Simulation Software</a>. Observational astronomical software, on the other hand, lets you easily explore what has actually already been discovered by our telescopes and planetary probes, as I described in <a href="https://softwarephysics.blogspot.com/2018/09/new-introduction-to-softwarephysics.html">The Spectacular and Free Software of the WorldWide Telescope - WWT</a>. I find both to be quite awe-inspiring and uplifting. The only problem with most astronomical software is that if you run it locally on your own machine, it requires you to download and install a huge amount of software and data. It also requires that you have a substantial machine with a good deal of memory and a decent graphics card. But there is a solution to this problem with today's advanced Cloud-based software. For example, I had used a company-provided 80-286 PC since 1987, but in 1991, I finally bought my very own PC for the first time. It was an 80-386 IBM clone with 5 MB of memory and a 100 MB disk drive for $1500 in 1991 dollars! The store clerk that I bought it from wondered why I needed such a powerful machine. Was I planning to run a business with it?
<br /><br />
As you can imagine, deploying many thousands of such very expensive machines to the desks of all the employees in a large corporation during the late 1980s and early 1990s presented quite a sticker shock for corporate management. There had to be a cheaper way. To overcome this problem, businesses soon adopted a client/server approach known as the Distributed Computing Architecture. At first, it was a two-tier client/server architecture in which servers did most of the work and then sent back the results to cheap client PCs on the desks of corporate employees. It then advanced to a three-tier client/server architecture that inserted additional layers of servers into the "back end" processing of data.
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgu2xKM5bvxexl7oS-zI-iU-qh9nZSflD6PHithWcXOEsO16aTUvEb0Gd_Z8On1BU8aJNtBRSVPTOnM7ZzaykC0fNIekUIcCxZ4VAuXkHRdhDOK07ur3XATPsgclZL0oPYtiSlfgg/s1600-h/tiers.gif"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 222px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgu2xKM5bvxexl7oS-zI-iU-qh9nZSflD6PHithWcXOEsO16aTUvEb0Gd_Z8On1BU8aJNtBRSVPTOnM7ZzaykC0fNIekUIcCxZ4VAuXkHRdhDOK07ur3XATPsgclZL0oPYtiSlfgg/s320/tiers.gif" border="0" alt=""id="BLOGGER_PHOTO_ID_5310472112798806978" /></a><p style="text-align: center; line-height:140%; font-size:120%"> Figure 1 – Beginning in the early 1990s, the Distributed Computing Revolution allowed a client/server architecture to empower users with PCs of very limited ability to perform complex business operations using the vast powers of servers in the background. Click to enlarge.
</p><p style="line-height:140%; font-size:120%">
But in the past decade, much of the Distributed Computing Architecture has been migrated to the Cloud using advanced Cloud-Based software. For more on that see my 2016 post <a href="https://softwarephysics.blogspot.com/2016/03/new-introduction-to-softwarephysics.html">Cloud Computing and the Coming Software Mass Extinction</a>.
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGbxAaHeWjSbdJWUGBFuAYVQdZh22yCaP8bA4dUXFJY9bh5B-gySGGlYroTDC2iI7iobhtTrW6NkcviAZGYNZznKWT6dMoGMPne_buzI-YIGAR9T3pBGy5xSL2oSdgOF95vfWBFw/s1600/Cloud-Stack-Diagram.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGbxAaHeWjSbdJWUGBFuAYVQdZh22yCaP8bA4dUXFJY9bh5B-gySGGlYroTDC2iI7iobhtTrW6NkcviAZGYNZznKWT6dMoGMPne_buzI-YIGAR9T3pBGy5xSL2oSdgOF95vfWBFw/s640/Cloud-Stack-Diagram.png" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 2 – During the last decade, Cloud Computing returned us to the timesharing days of the 1960s and 1970s by viewing everything as a service. The ESASky and the WorldWide Telescope Web clients are examples of SaaS - Software as a Service where all of the real work is done by the Cloud and the client software on the user's machine only displays results.</p><p style="line-height:140%; font-size:120%">
<strong>Using Your Browser To Do High-Powered Astronomy</strong><br />
So limping along with very underpowered hardware is nothing new to me. In fact, I have been doing so for more than 50 years. My current PC only has 4 GB of memory and a 500 GB HDD disk drive. Now, I certainly could afford to upgrade this machine or replace it with a more powerful modern machine, but the thought of configuring a new machine with all the software that I currently use is just too much for me, given that I am just fine with what I have if I only use the hardware a little bit wisely. I always have the Windows Task Manager running so that I can check all of the currently running processes for CPU, Memory and Disk usage. You see, for me, it is all still just magic that I have a hard time believing actually exists, given the primitive vacuum-tube world of my youth. So in this post, I would like to describe how you can use two browser-based applications, the ESASky and the WorldWide Telescope Web clients, to do some amazing astronomical feats without the need for high-end hardware or software.
<br /><br />
ESASky was developed at ESAC, the European Space Astronomy Center, Madrid, Spain, by the ESAC Science Data Centre (ESDC).
ESAC is the European Space Agency's (ESA) main establishment for space science. It is located in Villanueva de la Cañada, near Madrid, Spain. ESAC hosts the science operation centers for all ESA astronomy and planetary missions, together with their scientific archives. For the purposes of this tutorial, you should start up ESASky with:
<br /><br />
ESASky<br />
<a href="https://sky.esa.int/esasky/?target=83.81258981950579%20-5.398102548996337&hips=2MASS+color+JHK&fov=0.23602282354313556&cooframe=J2000&sci=false&lang=en&jwst_image=webb_orionnebula_shortwave">https://sky.esa.int/esasky/?target=83.81258981950579%20-5.398102548996337&hips=2MASS+color+JHK&fov=0.23602282354313556&cooframe=J2000&sci=false&lang=en&jwst_image=webb_orionnebula_shortwave</a>
<br /><br />
to have ESASky start up in the Orion Nebula. To have ESASky start up looking at a random point in the sky use:
<br /><br />
ESASky<br />
<a href="https://sky.esa.int/esasky/">https://sky.esa.int/esasky/</a>
<br /><br />
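Notice that those long startup links are nothing more than query parameters tacked onto the base URL, so you can assemble your own deep-links. Here is a small Python sketch of my own that does so; it only uses the parameters visible in the links above, and the full parameter set is described in the ESASky help pages linked below:
<pre>
# Assemble an ESASky deep-link from its query parameters. Only the
# parameters visible in the links above are used here; see the ESASky
# help pages for the full set.
from urllib.parse import quote, urlencode

def esasky_link(ra_deg, dec_deg, fov_deg, hips="DSS2 color"):
    params = {
        "target": f"{ra_deg} {dec_deg}",   # J2000 coordinates in degrees
        "hips": hips,                      # which sky survey to display
        "fov": fov_deg,                    # field of view in degrees
        "cooframe": "J2000",
        "sci": "false",                    # Explorer mode, not Science mode
        "lang": "en",
    }
    # quote_via=quote encodes spaces as %20, as in the links above
    return "https://sky.esa.int/esasky/?" + urlencode(params, quote_via=quote)

# The Orion Nebula, roughly as in the first link above:
print(esasky_link(83.8126, -5.3981, 0.236, hips="2MASS color JHK"))
</pre>
<br /><br />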
ESASky documentation is available at:
<br /><br />
<a href="https://www.cosmos.esa.int/web/esdc/esasky-help">https://www.cosmos.esa.int/web/esdc/esasky-help</a>
<br /><br />
The WorldWide Telescope is sponsored by NumFOCUS, a nonprofit dedicated to supporting the open-source scientific computing community. Work on the WWT has been supported by the American Astronomical Society (AAS), the National Science Foundation and others. To get to the WorldWide Telescope Web client go to:
<br /><br />
WorldWide Telescope Web client<br />
<a href="http://www.worldwidetelescope.org/webclient/">http://www.worldwidetelescope.org/webclient/</a>
<br /><br />
WorldWide Telescope Documentation<br />
<a href="https://docs.worldwidetelescope.org/user-manual/1/">https://docs.worldwidetelescope.org/user-manual/1/</a>
<br /><br />
I will be using the Chrome browser for the rest of this post. Before using these web-based Applications, it is important to be able to run them in Full Screen mode on Chrome. To do that, click on the three vertical dots in the upper right corner of Chrome to open the Chrome Menu on the right of your screen. Then click on the Full Screen icon to the right of Zoom. If any of the following images are too small for you to read just click on them to see them enlarged.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTVGLqQH1_CsiuffwfQxVqKZW3HjjZNsbhsJl-Tfo8MAhr2jDo3m_OufHjk2C57ZOwzrX5gzGqPfR9tykjOrHPjbQR4eKZG3vwhUsLsHtus1h2V-EkcefEVk3T5lMRov-uEyh-GhqcsD_pDvauRofRseDk7EZlImsiZqO6pIDbnM_7-_yqAaAtrw/s1366/Chrome1.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTVGLqQH1_CsiuffwfQxVqKZW3HjjZNsbhsJl-Tfo8MAhr2jDo3m_OufHjk2C57ZOwzrX5gzGqPfR9tykjOrHPjbQR4eKZG3vwhUsLsHtus1h2V-EkcefEVk3T5lMRov-uEyh-GhqcsD_pDvauRofRseDk7EZlImsiZqO6pIDbnM_7-_yqAaAtrw/s600/Chrome1.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 3 – Click on the three vertical dots in the upper right of your Chrome webpage to open the Chrome menu on the right. Then click on the Full Screen icon to the right of Zoom.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYPvwgLjTpoOS0y5i3534_fIMJ3Blrbnr3E_69Tj4UCbjwm_NHemTJwslDK82CdsFoh3UWnNZKraCnJSrkg9dvzI19Cupo3EWNcQYg0p9zr4ODwZry5InKs5wnSoyxThpxGJczuHuSwZdg5411W7xcTCFiYXYcyhmmd63DnVHo_-zKHU8fmZDL_Q/s1366/Chrome2.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYPvwgLjTpoOS0y5i3534_fIMJ3Blrbnr3E_69Tj4UCbjwm_NHemTJwslDK82CdsFoh3UWnNZKraCnJSrkg9dvzI19Cupo3EWNcQYg0p9zr4ODwZry5InKs5wnSoyxThpxGJczuHuSwZdg5411W7xcTCFiYXYcyhmmd63DnVHo_-zKHU8fmZDL_Q/s600/Chrome2.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 4 – To leave Full Screen, right-click on your screen to open an options popup and then click on Exit full screen.
<p style="line-height:140%; font-size:120%">
Now go back to the link for the ESASky Application above to open the Application. Then set Chrome to Full Screen. What you will see is the Orion Nebula in the background, with lots of informational textboxes on top of it. The Orion Nebula is a very bright and nearby molecular cloud that is about 1350 light years from the Earth. Molecular clouds are large bodies of gas and dust in our Milky Way galaxy that are in the process of collapsing into new stars and solar systems of planets. Molecular clouds contain large amounts of gaseous molecules, including the organic molecules that you and I are made of. As little clumps of gas and dust in these molecular clouds collapse under their own weight, they heat up, because gravitational potential energy is converted into heat energy, like the descending weights in an old-fashioned grandfather clock. However, the vibrating molecules in the heated gas are able to emit electromagnetic radiation that carries away the collapse energy. This allows small clumps of gas and dust within a molecular cloud to finally collapse into a star. Because the large clumps of gas and dust will necessarily be slowly rotating in a random manner, they also have to conserve angular momentum by spinning faster as they collapse, like a figure skater speeding up a spin by pulling her arms in. One way to do that is to dump the angular momentum into a proto-planetary disk spinning around the forming protostar. Many such proto-planetary disks around newly forming stars can be seen in the Orion Nebula.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiamT3J3A2P3xN75odcChrAIQQieZzWTtUCNbefSog5ATAl0oSWEI6uNv37MscMi8ZZpAEFCJ2Hqn7hz3YNCGVSCtIEmSlQ33Ovnh5N-VtfCnHa8-3S5rQ8ny_mk0J18t2eXNBgyWnRKoLpraxjxWKIrLJRF5BOxRo0VndWmHdATiF5cNv71lpBdg/s1366/ESASk1.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiamT3J3A2P3xN75odcChrAIQQieZzWTtUCNbefSog5ATAl0oSWEI6uNv37MscMi8ZZpAEFCJ2Hqn7hz3YNCGVSCtIEmSlQ33Ovnh5N-VtfCnHa8-3S5rQ8ny_mk0J18t2eXNBgyWnRKoLpraxjxWKIrLJRF5BOxRo0VndWmHdATiF5cNv71lpBdg/s600/ESASk1.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 5 – Now open the ESASky Application.
<p style="line-height:140%; font-size:120%">
ESASky will first ask if you want to use the Application as an Explorer or as a Scientist. For our purposes, you want to be an Explorer. Science mode turns ESASky into a visual repository for all of the scientific information we currently have about whatever you happen to be looking at. In Science mode, you can have ESASky pull up all of the scientific datasets in its archive for the region in view and download them to your personal machine. It will also list all of the astronomical papers that have been written about the region of the night sky that you are viewing in your browser.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipzHwX0vs1QJxhpvkgyIVLGnHBZgfZFzbU0E2HeyoTw5Y5IBaNKnwUhw4lWSFtxM47YZwzDF_NfBmI_zau5FnmNAD2mk0oIZ85KSHWWlSu898VtRpoHWDo0uf96PpY6aibr0rYzjrvCo1Jp_fPNYZIRov9EpXa1VUfmSZx6B8ZxmT2jpbM-OEisQ/s1366/ESASk2.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipzHwX0vs1QJxhpvkgyIVLGnHBZgfZFzbU0E2HeyoTw5Y5IBaNKnwUhw4lWSFtxM47YZwzDF_NfBmI_zau5FnmNAD2mk0oIZ85KSHWWlSu898VtRpoHWDo0uf96PpY6aibr0rYzjrvCo1Jp_fPNYZIRov9EpXa1VUfmSZx6B8ZxmT2jpbM-OEisQ/s600/ESASk2.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 6 – If you remove all of the textboxes by clicking on their "-"s, you will see the Orion Nebula more clearly. Notice that we are now using the Near-Infrared band in the "Select Sky" box and that the "Select Sky" button has been activated in the upper left of your screen.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtvvjibhkuoFN7nnKVOP6_G8zaCMlpQ0K8PSL53fBynLpTtVxaQiUirY2D_Zo1Z6Cv_Usg1-PrJ3EgcM9x3eN9s5XxXvGlUVxiEXUd94yDn3fkHH3oDmmCebDbkIy2_DE4a8U-dNgAwPue-lARkqZfITxGNn8WVvgxeae9MG0NPJL8BbcewPeIsQ/s1366/ESASk3.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtvvjibhkuoFN7nnKVOP6_G8zaCMlpQ0K8PSL53fBynLpTtVxaQiUirY2D_Zo1Z6Cv_Usg1-PrJ3EgcM9x3eN9s5XxXvGlUVxiEXUd94yDn3fkHH3oDmmCebDbkIy2_DE4a8U-dNgAwPue-lARkqZfITxGNn8WVvgxeae9MG0NPJL8BbcewPeIsQ/s600/ESASk3.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 7 – Now add many new rows to the Select Sky box by clicking on the "+" and "Add new row" items.
<p style="line-height:140%; font-size:120%">
Now let's turn on many more Sky Surveys to cover a larger part of the electromagnetic spectrum. In fact, I will turn on 11 Sky Surveys to cover the entire electromagnetic spectrum from Gamma-rays to Radio waves. To do that, click the "+" sign and "Add new row" in the Select Sky box 10 times and then edit each entry to step through the entire electromagnetic spectrum. Each band in the spectrum will offer a number of sky surveys to choose from. For example, for the Optical band, I chose two surveys, "DSS2 color" and "DSS2 red". That is because the "DSS2 red" survey is just the DSS2 survey viewed through a red filter, and is therefore closer to the Near-Infrared band of the electromagnetic spectrum. For the other bands, I just went with the default survey for each band.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8VmUxgHPPge0NKtXusRa5xccmoR4VXABjIsIMYFpoaFrux3uIiIoKzBlbp1DiBJCWE1vtjrFDO48L4DFKQvd-CB2K-W4tD0Y08_pbPvGD1xV0csNlUzfGHl3sqWubrDbfQ1qeSIKtnWjF90W8bKy-aZF0ra6G3UrwfSq2BSFZPAsDTdG5q7k0zA/s1366/ESASk4.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8VmUxgHPPge0NKtXusRa5xccmoR4VXABjIsIMYFpoaFrux3uIiIoKzBlbp1DiBJCWE1vtjrFDO48L4DFKQvd-CB2K-W4tD0Y08_pbPvGD1xV0csNlUzfGHl3sqWubrDbfQ1qeSIKtnWjF90W8bKy-aZF0ra6G3UrwfSq2BSFZPAsDTdG5q7k0zA/s600/ESASk4.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 8 – Now I have 11 bands in the electromagnetic spectrum to choose from. Above we see that I have Zoomed out from the Orion Nebula and switched from the Near-Infrared band to the Optical band using the "DSS2 color" survey.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjofn753abebFXWN4zH1SAptu3iHGPC_S3mRJdaHoi66Wz9Dbfj9pxU3m45Na90gng_fB0nqcKtxnejQ6HZCgB8xh4pQJhfx9AVJBjPgypq2EWaYALSNwFqfaJ6BWFi02eaB2d3Y3AWFFEe3BXjVcdt33iioAVn3103JpoMeW2WQwOC9OM9WUIVrA/s1366/ESASk5.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjofn753abebFXWN4zH1SAptu3iHGPC_S3mRJdaHoi66Wz9Dbfj9pxU3m45Na90gng_fB0nqcKtxnejQ6HZCgB8xh4pQJhfx9AVJBjPgypq2EWaYALSNwFqfaJ6BWFi02eaB2d3Y3AWFFEe3BXjVcdt33iioAVn3103JpoMeW2WQwOC9OM9WUIVrA/s600/ESASk5.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 9 – Then I clicked the Forward button at the bottom of the Select Sky box two times to skip from the "DSS2 color" survey in the Optical band to the Near-Infrared band. If you press the Play button at the bottom of the Select Sky box, you can have ESASky cycle through all of the listed bands. It will slowly morph one survey into the next as it cycles. You can even exit out of the Select Sky box using its "-" icon and let the movie continue on. These 11 bands in the Select Sky box will stay in effect as long as you have your ESASky session running in the Chrome tab. That means you can go to other locations in the night sky and cycle through the 11 sky surveys as desired.
<p style="line-height:140%; font-size:120%">
Now let's look for some closeups of this area in the sky from the Hubble Telescope and the new James Webb Telescope too. To do that, click on the Hubble button to the right of the Select Sky button on the top left of your screen. The Hubble button is the 4th button and the James Webb button is the 5th button.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrYRGH8uSWgvOSNH8kao-NodM0xF9zIIcCswCZPtK2DKRC15EDmwEGyFp2YOTnRi3Bm7CH2AFEHojYvZhe7-XUqPRPJ5yxQNg9uCtFoERlwr-wv0N4XL_m4EZMWG2bcvDtVao7cg_p6DqfS6z2SREZFd9IEpx1r02GedDUIiNXlTAjuTany62hLw/s1366/ESASk6.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrYRGH8uSWgvOSNH8kao-NodM0xF9zIIcCswCZPtK2DKRC15EDmwEGyFp2YOTnRi3Bm7CH2AFEHojYvZhe7-XUqPRPJ5yxQNg9uCtFoERlwr-wv0N4XL_m4EZMWG2bcvDtVao7cg_p6DqfS6z2SREZFd9IEpx1r02GedDUIiNXlTAjuTany62hLw/s600/ESASk6.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 10 – Above we see the available Hubble closeup images. It turns out that 25 Hubble closeups are available in our current field of view, and 2140 Hubble images are available for other parts of the sky. Notice that I have the Hide Footprints option enabled in the popup box. That turns off the green boxes displayed in the field. The green boxes tell you where Hubble closeup images exist. They are useful when you are zoomed way out and are looking for places with Hubble closeup images to zoom in on. However, once you have selected a Hubble closeup, they are just a nuisance. For the James Webb Telescope, the boxes are orange-colored and can be used or turned off in a similar manner.
<p style="line-height:140%; font-size:120%">
Now we can look to see what closeups are available from the James Webb Telescope for this field of view by clicking on the fifth button.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9BQ_Wz4vmy1fajEyAd_Xv-lSbljSC5x6Lws9FvN9U5N3SMathV6Qne5Mr47uAvADCzE5uAesAC0dNWIEfxmul_nuNZW1o1eEL6QJ5PApRsJQaehhJ5rqYd8LMXb1MhI-ON6cqFBbXzjdglDpet697gu3PLtbs6Nl0ZeMKchzI7r48gPa_AoC67Q/s1366/ESASk7.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9BQ_Wz4vmy1fajEyAd_Xv-lSbljSC5x6Lws9FvN9U5N3SMathV6Qne5Mr47uAvADCzE5uAesAC0dNWIEfxmul_nuNZW1o1eEL6QJ5PApRsJQaehhJ5rqYd8LMXb1MhI-ON6cqFBbXzjdglDpet697gu3PLtbs6Nl0ZeMKchzI7r48gPa_AoC67Q/s600/ESASk7.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 11 – Above are the available James Webb closeups.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMoHjIClHiDVxZSmBlSirUYF8LPwaPPs3uStSM35Nu5UCFbEJ0VwRJ2eLTOnvj7R8bcMJOmxXYmv9-PxQokUm9-BYZiMjkxxB_3-0FRh1_p_74h4hc1N9AwgGaJL3JPuEU-01JiGydNEvaT1imSP9ngTaD6b6GBd8Qg3wIXC5nVbPNLAMnBPNQxg/s1366/ESASk8.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMoHjIClHiDVxZSmBlSirUYF8LPwaPPs3uStSM35Nu5UCFbEJ0VwRJ2eLTOnvj7R8bcMJOmxXYmv9-PxQokUm9-BYZiMjkxxB_3-0FRh1_p_74h4hc1N9AwgGaJL3JPuEU-01JiGydNEvaT1imSP9ngTaD6b6GBd8Qg3wIXC5nVbPNLAMnBPNQxg/s600/ESASk8.jpg"/></a></div>
<p style="line-height:140%; font-size:120%">
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 12 – Above, I selected one of the James Webb Telescope closeups for "The Orion Bar region (NIRcam image)".
<p style="line-height:140%; font-size:120%">
Now click on the third button in the upper left that displays a scroll. This is the Target List button. On the far right you will now see a Target List popup box. From this popup box, you can choose an already existing list of Targets or you can upload your own list of Targets too. More on that later. Now click on the "Select Target List" button and then the "Star formation regions" entry in the dropdown.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-fKZNDPxEQzDaJQltOSGEuFbhWrIO8B_t2SHL_Xh6RcUvbM2_AWMojvMoJzImAJm06htD0Yz4l850wE7HJyUEky-QIfKkcpGO6trA6C5x1TtBC_ta7VcCeWvtiNH6P-a0dQZlqpBf-lib6nCPXblIwua5OHHFYZjzJXfp9XKKn5PtqTs3QiQZHg/s1366/ESASk9.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-fKZNDPxEQzDaJQltOSGEuFbhWrIO8B_t2SHL_Xh6RcUvbM2_AWMojvMoJzImAJm06htD0Yz4l850wE7HJyUEky-QIfKkcpGO6trA6C5x1TtBC_ta7VcCeWvtiNH6P-a0dQZlqpBf-lib6nCPXblIwua5OHHFYZjzJXfp9XKKn5PtqTs3QiQZHg/s600/ESASk9.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 13 – Click on the "Select Target List" button and then the "Star formation regions" entry in the dropdown.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjG3hkNodsyXooxUPfwrCqFLzQW4rra6X9tuw5G9zI6eKHC-T6DwPqGUfTkFa6pQ3ZCFgcFbeyAAyr_i5Q-zBfirQa40e3yX2iKA5aY5F1hQa81_ggR_K3Be6zjlv9lFAq_fKYM8s5qk5uJOiF8BG51KZIHJbxyPaY5qWz0C8HspRkliJOES0Pckw/s1366/ESASk10.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjG3hkNodsyXooxUPfwrCqFLzQW4rra6X9tuw5G9zI6eKHC-T6DwPqGUfTkFa6pQ3ZCFgcFbeyAAyr_i5Q-zBfirQa40e3yX2iKA5aY5F1hQa81_ggR_K3Be6zjlv9lFAq_fKYM8s5qk5uJOiF8BG51KZIHJbxyPaY5qWz0C8HspRkliJOES0Pckw/s600/ESASk10.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 14 – Click on the "Select Target List" button and then the "Star formation regions" entry in the dropdown.
<p style="line-height:140%; font-size:120%">
This will present a List of molecular cloud Targets.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQzoKfPgI98yXPZxaEnYZ-IoUa7ezOKrK_fzbkAOgi8L-ictdz69LSskssxPvu2sPI4BWGleBxjxF5yVTnI7soGLLeBHDzQGLSGCJYpb-gtkfnXnLqtvJNRFu379F__DppZHzyW-_dJCrIsjGm9taxnupLJ_c84-meGUFpkKnUUv6IR4q7qXRXVw/s1366/ESASk11.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQzoKfPgI98yXPZxaEnYZ-IoUa7ezOKrK_fzbkAOgi8L-ictdz69LSskssxPvu2sPI4BWGleBxjxF5yVTnI7soGLLeBHDzQGLSGCJYpb-gtkfnXnLqtvJNRFu379F__DppZHzyW-_dJCrIsjGm9taxnupLJ_c84-meGUFpkKnUUv6IR4q7qXRXVw/s600/ESASk11.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 15 – Above, I chose the Eagle Nebula molecular cloud and ESASky then took me there and explained what I was looking at.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZpsV0UwgBGuX2Zgd0Lslyvs4j7wP2NaQJuvdzKCwCpcir8JIqY0dEFRvjEiacXtIxcMgYyc2mYjLsBsgBCUjoluFGfoIHINi8XejWxqkde4n5uVXfoIbWpfA-BVrantgXyUsk0E8e-5tbKgqbCgzAvrCZGxOR_l4J38kY4O55Tl7La1ZopbbVeA/s1366/ESASk12.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgZpsV0UwgBGuX2Zgd0Lslyvs4j7wP2NaQJuvdzKCwCpcir8JIqY0dEFRvjEiacXtIxcMgYyc2mYjLsBsgBCUjoluFGfoIHINi8XejWxqkde4n5uVXfoIbWpfA-BVrantgXyUsk0E8e-5tbKgqbCgzAvrCZGxOR_l4J38kY4O55Tl7La1ZopbbVeA/s600/ESASk12.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 16 – Above, I switched from the "DSS2 Color" survey in the Optical band to a survey in the Mid-Infrared band.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRCaYweGA8vrs5Xj8fBEjU1yucemgs-PacHPe1vJ4WQJdFzy7zb1WmCyA7w-6gtLSTyQYEFx4fksoNXb1jQrUggvh_kx8agrN5gZG5w-SV9pgMrWxxeKfKncHumr_QorQZd6hfuZEsSvMBg_lwRsS8z4in11bQe1_qKKaof5NrQmoRWdjC1s0G8g/s1366/ESASk13.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjRCaYweGA8vrs5Xj8fBEjU1yucemgs-PacHPe1vJ4WQJdFzy7zb1WmCyA7w-6gtLSTyQYEFx4fksoNXb1jQrUggvh_kx8agrN5gZG5w-SV9pgMrWxxeKfKncHumr_QorQZd6hfuZEsSvMBg_lwRsS8z4in11bQe1_qKKaof5NrQmoRWdjC1s0G8g/s600/ESASk13.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 17 – Above, I used the Hubble button to find Hubble closeups in the Eagle Nebula. I chose the famous "Pillars of Creation" closeup. Behind this Hubble closeup is the Eagle Nebula now in the Near-Infrared band. You can use the Image Opacity slider to fade out or fade in the Hubble closeup if desired.
<p style="line-height:140%; font-size:120%">
To make your own Target List, create a .txt file with one line per Target in the following format:
<br /><br />
Object@desc@Description
<br /><br />
You can use Bard to help with this tedious task. For example, I used the following Bard prompt:
<br /><br />
<font color="blue">
List the first 100 entries in the NGC astronomical catalog. List each NGC number on a new line followed by "@desc" followed by a description of the object. The format should be like "NGC 300 @desc@NGC 300 is a spiral galaxy approximately 6 million light-years away in the constellation Sculptor."
</font>
<br /><br />
I then copied/pasted the Bard output to a Bard.txt file and then told ESASky to upload the file as my own Target List to be displayed.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVCdKYz-gwghNhVEIsQe0S79WHGw9PDC5-_zVgrpK8p-78yrsjfOGZHg9kal9meqJkMaVsul5uig-2WFcaeWS7JJwwvF5xLXwKMiejKIbRTtWhD80SBjOAEjVkOTgv8wK73zktPR5Yqnrf8oitI9lYfevmDkQlHBSq28aIPU76B5bCAVnum4YGug/s1366/Bard1.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVCdKYz-gwghNhVEIsQe0S79WHGw9PDC5-_zVgrpK8p-78yrsjfOGZHg9kal9meqJkMaVsul5uig-2WFcaeWS7JJwwvF5xLXwKMiejKIbRTtWhD80SBjOAEjVkOTgv8wK73zktPR5Yqnrf8oitI9lYfevmDkQlHBSq28aIPU76B5bCAVnum4YGug/s600/Bard1.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 18 – I used the above Bard prompt to generate a Target List.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJB-XAoFuoVcVnNLRE6IcyFxzE78dtgmdJ_2jT1jkT5dgh74gy8gQXoIWvO2rDLivbkhWxEcxBA_TlYX0DBNvdFQFjpWOUyI7Y5UA5r_r6Xw_Z0kUidB7_xh79LOWllkH0A-9URMTCIK_rgSoPY2GqJvFoZup-TnBxP_vk7gTzNn7_9hNEdxpnsQ/s1366/Bard2.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJB-XAoFuoVcVnNLRE6IcyFxzE78dtgmdJ_2jT1jkT5dgh74gy8gQXoIWvO2rDLivbkhWxEcxBA_TlYX0DBNvdFQFjpWOUyI7Y5UA5r_r6Xw_Z0kUidB7_xh79LOWllkH0A-9URMTCIK_rgSoPY2GqJvFoZup-TnBxP_vk7gTzNn7_9hNEdxpnsQ/s600/Bard2.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 19 – Then I copied/pasted the Bard output into a Bard.txt file.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCqTA2EyZWZVJ42o1IaDWzkpnqe-6DLF0cAZUMWPQwhlZQ5ng2S3MOuf90wNCPTqULXugG-l2CwsoX8enmcNA7et_cLy399EVAJGLREOt2TstYsqTO0Jn57IG9mJbBwN6RIWFX3h-TJTSs_g7XK_dxgm0HQxZsK5_J3juNUl0gNO_ObEIMXl2QYQ/s1366/Bard3.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiCqTA2EyZWZVJ42o1IaDWzkpnqe-6DLF0cAZUMWPQwhlZQ5ng2S3MOuf90wNCPTqULXugG-l2CwsoX8enmcNA7et_cLy399EVAJGLREOt2TstYsqTO0Jn57IG9mJbBwN6RIWFX3h-TJTSs_g7XK_dxgm0HQxZsK5_J3juNUl0gNO_ObEIMXl2QYQ/s600/Bard3.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 20 – Above, I see ESASky working on my Target List. Notice that my description is displayed in an information box by ESASky.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4qDoarSXOAMNf30iP6VXRflsssyDBV735h2T4-2sRX9-egKo2vGLMXL4D8TMudtCCjyZusLJfwbB4cJr_AW0gIT7HnrLtPuTjNIg1nGzh6XXeEBzdioS5vgCpltsuVaI5QPqlj1eq7NMh0BhyphenhyphenS0HAuBfYlMiotkVRgOxUuXxVegN5qXPrgw3PHA/s1366/ESASk14.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4qDoarSXOAMNf30iP6VXRflsssyDBV735h2T4-2sRX9-egKo2vGLMXL4D8TMudtCCjyZusLJfwbB4cJr_AW0gIT7HnrLtPuTjNIg1nGzh6XXeEBzdioS5vgCpltsuVaI5QPqlj1eq7NMh0BhyphenhyphenS0HAuBfYlMiotkVRgOxUuXxVegN5qXPrgw3PHA/s600/ESASk14.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 21 – Above, I used the Search bubble in the upper right of ESASky to look for the globular cluster M13. Once I got to M13, I pressed my right mouse button to pull up another popup box. In the far right of that popup box is a blue icon for "View in WorldWide Telescope". When I clicked on that icon, ESASky opened the WorldWide Telescope Web client in another Chrome tab and took me to the WorldWide Telescope Web client. It also placed me looking at the globular cluster M13.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizdriiEO1zntF9xTdEq-Iqts2gCvW732NkZHLDWWaW7mskI9dyDSQ_ipJhs5diiRrmkLiFFiWLokJKIxYGc7f_riKEh2Xh5u42WFWV8ZYnzhVM914qAwOms2fjqmcTPPE8qAcAcTYgZcdPO2pQzHmzcfWK0uvT-VWUIqkE-x8qyB3iCgJ9JBINgQ/s1366/WWT0.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizdriiEO1zntF9xTdEq-Iqts2gCvW732NkZHLDWWaW7mskI9dyDSQ_ipJhs5diiRrmkLiFFiWLokJKIxYGc7f_riKEh2Xh5u42WFWV8ZYnzhVM914qAwOms2fjqmcTPPE8qAcAcTYgZcdPO2pQzHmzcfWK0uvT-VWUIqkE-x8qyB3iCgJ9JBINgQ/s600/WWT0.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 22 – Now we are looking at M13 in the WorldWide Telescope Web client running in another Chrome tab. Notice that the user interface for the WorldWide Telescope Web client covers up most of the window. It has thick bands at the top, bottom and on the left that get in the way of viewing. To get rid of them make sure to check the "Auto Hide Tabs" and the "Auto Hide Content" checkboxes under the Settings menu. Then you will only see those portions of the user interface when your mouse pointer enters those regions in the viewing window.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYiN6DvVfRQM3Te9uPmSdvddrLH0JOC8Cou3bxIs_paWPFn8NMfF87Qw3qyKJ9k72cprMqZCEVPcMCusl9mS_DVydsAX7xkATSGhJNM1c7ebYnXrbA_CnLzMe_3NPCxPYOUuEcTqN26uwo0A6u5m2D_sEwRyyoo4BmF-mKU8SqZ88YGI4jJ_CNmw/s1366/WWT1.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYiN6DvVfRQM3Te9uPmSdvddrLH0JOC8Cou3bxIs_paWPFn8NMfF87Qw3qyKJ9k72cprMqZCEVPcMCusl9mS_DVydsAX7xkATSGhJNM1c7ebYnXrbA_CnLzMe_3NPCxPYOUuEcTqN26uwo0A6u5m2D_sEwRyyoo4BmF-mKU8SqZ88YGI4jJ_CNmw/s600/WWT1.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 23 – With the Tabs and Content now hidden, we can see globular cluster M13 with an unobstructed view.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTYXNXInZLhWxIxMAiwpkHIJURQ2TiDJc1zKJl60d5KOp3c1r958e2E8LHgbjUlk0_Mw9fdhnABtSNIilIwAGlXu6_gsVxCWYXSlXHuCbyTiS9STWoU-eOhFWI0fvhmH34suGMYkwCh6knSr58TZU9RKEbGbdmuGLmZix_at2R18FVUOpVj3FOeQ/s1366/WWT2.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTYXNXInZLhWxIxMAiwpkHIJURQ2TiDJc1zKJl60d5KOp3c1r958e2E8LHgbjUlk0_Mw9fdhnABtSNIilIwAGlXu6_gsVxCWYXSlXHuCbyTiS9STWoU-eOhFWI0fvhmH34suGMYkwCh6knSr58TZU9RKEbGbdmuGLmZix_at2R18FVUOpVj3FOeQ/s600/WWT2.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 24 – Next, I zoomed out from M13 with my mouse wheel and then moved my mouse pointer to the bottom of the window. That woke up the bottom Tab of the WorldWide Telescope Web and I can then see what other items of interest are in my field of view. The bottom Tab tells me that there are 89 interesting items in my field of view. To look through all of them it is a good idea to uncheck the "Auto Hide Tabs" temporarily so that the Tab does not fade away. The bottom Tab is also telling me that I am using the Digitized Sky Survey (Color) which is called the DSS2 Color survey in ESASky.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYE04uzAX4s8uCraObdZMCDtezdDvCJFOBleMuvSeFABBOGtOL1JXiwHB12GCiJ04dbVNOAyZ4RKhkWEWY1TG8dOjNBnFLwGx9jQ2iRdMrVS_HFSUwiZcpdbadnGZVSZhAqVBEd02gL6-b3_5PFs1ynWb0_e_agcstQCLHrYm32hk_DfZcX_XBeA/s1366/WWT3.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYE04uzAX4s8uCraObdZMCDtezdDvCJFOBleMuvSeFABBOGtOL1JXiwHB12GCiJ04dbVNOAyZ4RKhkWEWY1TG8dOjNBnFLwGx9jQ2iRdMrVS_HFSUwiZcpdbadnGZVSZhAqVBEd02gL6-b3_5PFs1ynWb0_e_agcstQCLHrYm32hk_DfZcX_XBeA/s600/WWT3.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 25 – If you need to work with the WorldWide Telescope Tabs for an extended period of time, it is a good idea to uncheck the "Auto Hide Tabs" checkbox.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgC3Xd38wv9Tkzhcir3WKv8Ax7hkvBeHZuwXZTl46ynkJkluWKymNR8WZHf-G8byUJys_H98FonEyp4VzIr8apTedI2LQJHZ_c5ZyBOIXO9jiavk7YqLmjKdMe_G3W93pkzMaoZGt21J5Un1w5c1Dbvp0q83KTwwmh8J-XuSgsRQp0qckqxgY6Fg/s1366/WWT4.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgC3Xd38wv9Tkzhcir3WKv8Ax7hkvBeHZuwXZTl46ynkJkluWKymNR8WZHf-G8byUJys_H98FonEyp4VzIr8apTedI2LQJHZ_c5ZyBOIXO9jiavk7YqLmjKdMe_G3W93pkzMaoZGt21J5Un1w5c1Dbvp0q83KTwwmh8J-XuSgsRQp0qckqxgY6Fg/s600/WWT4.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 26 – Above, I used the bottom Tab to switch from the Digitized Sky Survey (Color) to the Deep Star Maps 2020 survey.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlqdiRnh2KnhZdTSrzrdoHoBv4HY72akzzLZVFGFtMW7_32V3JPHwss8KXJp-wJw3B-CVH5Ki9cSegAxT5E1ArxwoLDmzgwe_hcStjgAMWKFFm1C800k2UqNqjwkfNVNzJWUwPQiI_2f8fkXfp57Ow_X6Mep8Ss81_NdVDwDj691Mypf4Kv0jzHw/s1366/WWT5.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhlqdiRnh2KnhZdTSrzrdoHoBv4HY72akzzLZVFGFtMW7_32V3JPHwss8KXJp-wJw3B-CVH5Ki9cSegAxT5E1ArxwoLDmzgwe_hcStjgAMWKFFm1C800k2UqNqjwkfNVNzJWUwPQiI_2f8fkXfp57Ow_X6Mep8Ss81_NdVDwDj691Mypf4Kv0jzHw/s600/WWT5.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 27 – Next, I moved to a dense part of our Milky Way galaxy in the constellation Cygnus and switched to the Gaia DR2 survey.
<p style="line-height:140%; font-size:120%">
The WorldWide Telescope Web client can do much more. For example, under the Explore menu, you will find many very interesting Collections of images to explore. Under the Guided Tours menu, you will find many interesting and informative lectures about our Universe that are conducted by using the WorldWide Telescope Web client software. Check the online documentation listed above for more details. You can also download a thin WorldWide Telescope client program from their main website:
<br /><br />
WorldWide Telescope<br />
<a href="http://www.worldwidetelescope.org/home/">http://www.worldwidetelescope.org/home/</a>
<br /><br />
The WorldWide Telescope client Application runs on your Windows machine outside of your Chrome browser. It has more functionality than the version that runs under Chrome, and it does not use up lots of memory or require a powerful graphics card. It runs in 500 - 1,000 MB of free memory and works just fine on my anemic graphics card.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdJcPq6F_PoRdKeFsK41CjidwUzVFaRU_3sHN7fcIIgqr8AV1oC5lA5oQy3JOzrwfXwj6GMDQwDTUjjKziQd7TzyaDLVseHgh_3Vg8h04J5xe5w9JHiwUj1g95i9VsJ2PwASgi-974zq6a5qU1Af2ioXhZY_Vtf7KwU9mH48DS9rTBsS-liJ98XQ/s1366/Chrome8.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdJcPq6F_PoRdKeFsK41CjidwUzVFaRU_3sHN7fcIIgqr8AV1oC5lA5oQy3JOzrwfXwj6GMDQwDTUjjKziQd7TzyaDLVseHgh_3Vg8h04J5xe5w9JHiwUj1g95i9VsJ2PwASgi-974zq6a5qU1Af2ioXhZY_Vtf7KwU9mH48DS9rTBsS-liJ98XQ/s600/Chrome8.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 28 – Finally, if you use the Chrome Memory Saver for inactive tabs, be sure to exempt ESASky and WorldWide Telescope Web. To do that, open the Chrome menu again and click on Settings. Then go to Performance. Have ESASky and WorldWide Telescope Web running at the time and check each checkbox for them. This will prevent any of their current settings from being lost. For example, the Select Sky surveys in ESASky can be lost by the Chrome Memory Saver.
<p style="line-height:140%; font-size:120%">
<strong>How to Navigate Through Tabs When Chrome is in Full Screen Mode</strong><br />
Next, let me walk you through using the AutoControl Chrome extension, which lets you jump between Chrome tabs while Chrome is in Full Screen mode. The AutoControl extension will also let you zoom in and zoom out in ESASky and WorldWide Telescope Web by pressing keys on your PC's numeric pad.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh38YYX2pllQPsu54G3yB6UWtaaF07iurGMied7fu-tNn1XpKZwdp9hXjrh2GAUodx1MUZug-2Fb6u795F2x54yCN0CWGDx6a7wQGvz5zPXChyphenhyphennD7xZH7159AFkhecvwc_iokGfB4VXBmUZbjEEozJ7XSTQbglmp4i0UJ6LJX6p-IS2-MR4ySyqFA/s1366/Chrome3.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh38YYX2pllQPsu54G3yB6UWtaaF07iurGMied7fu-tNn1XpKZwdp9hXjrh2GAUodx1MUZug-2Fb6u795F2x54yCN0CWGDx6a7wQGvz5zPXChyphenhyphennD7xZH7159AFkhecvwc_iokGfB4VXBmUZbjEEozJ7XSTQbglmp4i0UJ6LJX6p-IS2-MR4ySyqFA/s600/Chrome3.jpg"/></a></div>
<p style="line-height:140%; font-size:120%">
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 29 – First, open the Chrome menu on the right of your screen by clicking the three vertical dots in the upper right of your screen. Go to Extensions and then click on Visit Chrome Web Store.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIvfHdl0WIrTiKHhTnhSElemBOSozw5-BLOhVYkT9UkuAq-7JC9onHVDUxGKIuN1H194wsbGOD8ytLo2zsGrrnDaJQCkkcnKPU0NYN8B04mSJ0OEaqF34QfK5d2svg7Oh8bbfY-28-c76g_102JT1Osu9ZrgD7M2KPL8g8dtEPNlGAxilH3NFSxw/s1366/Chrome4.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIvfHdl0WIrTiKHhTnhSElemBOSozw5-BLOhVYkT9UkuAq-7JC9onHVDUxGKIuN1H194wsbGOD8ytLo2zsGrrnDaJQCkkcnKPU0NYN8B04mSJ0OEaqF34QfK5d2svg7Oh8bbfY-28-c76g_102JT1Osu9ZrgD7M2KPL8g8dtEPNlGAxilH3NFSxw/s600/Chrome4.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 30 – When you get to the Chrome Web Store, search for the AutoControl extension and install it on Chrome.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTRteXt5hCq7QH25z8B3_rbu97m1ERCH3bs6prL8HxRyXiVIT5nw345gBlQAgXyA9yE8Eay2zD-8z92-wYJS97YNACUMhPpmAZb03T5-eIzv5lHU9l1LIxJ2a8e0k_cDWR4jWpSqhg2Qz3odhbawH9wp-n_CZyyX_VQSWAZCfIux-9xywddgP8pw/s1366/Chrome5.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTRteXt5hCq7QH25z8B3_rbu97m1ERCH3bs6prL8HxRyXiVIT5nw345gBlQAgXyA9yE8Eay2zD-8z92-wYJS97YNACUMhPpmAZb03T5-eIzv5lHU9l1LIxJ2a8e0k_cDWR4jWpSqhg2Qz3odhbawH9wp-n_CZyyX_VQSWAZCfIux-9xywddgP8pw/s600/Chrome5.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 31 – Then go back to the Chrome menu and click on Manage Extensions.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYlbYSAl7OIbAZTN3qXM0kaVNrb-Vs3dfgWZSyalZg6ZJHL_oGIwhj0GoBwCBMj7YAAPET21og_vc_CoVwMq9Im5pDAOG_bqYop5V0hfez5IC0qmJGnfSZGj-1KjAlO2VCDU7LIXoylYEBxHD0_w51jbzojbiIc_SxuOtuT_BvAahMc2l7FEUIVA/s1366/Chrome6.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYlbYSAl7OIbAZTN3qXM0kaVNrb-Vs3dfgWZSyalZg6ZJHL_oGIwhj0GoBwCBMj7YAAPET21og_vc_CoVwMq9Im5pDAOG_bqYop5V0hfez5IC0qmJGnfSZGj-1KjAlO2VCDU7LIXoylYEBxHD0_w51jbzojbiIc_SxuOtuT_BvAahMc2l7FEUIVA/s600/Chrome6.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 32 – Find the AutoControl extension and click on its Details. Scroll down until you see "Extension options" and click on it.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPoellG7CZ6Gpl8o2YHyVhhLVPoRh2UB3_g6en6rqx31OiDEFBlELaFnc3bSX5RqUkNV_KtqYI1wX-m9Nt_4kd1jblUweb5YYK8r0dtqUV6S8EnSfJxN3cCNvQqGDEtZIu5A2sGIPGNIdOFa2v-EX8KJPesbL8AHT-1FMo80ZGN3Gi0CC-BqzOOA/s1366/Chrome7.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1366" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhPoellG7CZ6Gpl8o2YHyVhhLVPoRh2UB3_g6en6rqx31OiDEFBlELaFnc3bSX5RqUkNV_KtqYI1wX-m9Nt_4kd1jblUweb5YYK8r0dtqUV6S8EnSfJxN3cCNvQqGDEtZIu5A2sGIPGNIdOFa2v-EX8KJPesbL8AHT-1FMo80ZGN3Gi0CC-BqzOOA/s600/Chrome7.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 33 – This will open the AutoControl configuration page. Click on Help for instructions on how you can program AutoControl.
<p style="line-height:140%; font-size:120%">
I use the following settings:
<br /><br />
<font color="blue">
Trigger Action<br />
Numpad 0 SWITCH TO LEFT TAB<br />
Numpad Point SWITCH TO RIGHT TAB<br />
Numpad 1 SCROLL UP<br />
Numpad 2 TOGGLE FULL SCREEN<br />
Numpad 3 SCROLL DOWN<br />
Numpad 4 BACK<br />
Numpad 5 CLOSE WINDOW<br />
Numpad 6 FORWARD<br />
Numpad 7 SWITCH TO LEFTMOST TAB<br />
Numpad 8 TOGGLE MINIMIZE<br />
Numpad 9 CLOSE TAB<br />
Numpad + OPEN NEW TAB
</font>
<br /><br />
This allows me to do things like go back and forth between Full Screen and Normal Screen while in Chrome by pressing Numpad 2 and to move from tab to tab while remaining in Full Screen. I can also use Numpad 1 to zoom in and Numpad 3 to zoom out while using ESASky and WorldWide Telescope Web in Full Screen.
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
<br /><br /><strong>An IT and Geophysical Perspective on the Nanopore Sequencing of DNA and RNA</strong><p style="line-height:140%; font-size:120%">
In this post, I would like to cover a remarkable new tool called nanopore sequencing that can read the sequence of bases on very long stretches of DNA and RNA in a very rapid manner. Nanopore sequencing devices can directly read a stretch of DNA or RNA that is over one million bases in length at a speed of about 450 bases/second with a device about the size of a flip smartphone. These nanopore sequencing devices now start at a price of about $1,000 and can plug into the USB port of your laptop. They are also very rugged and can even work in very harsh field conditions, simply using the power from your laptop battery. Prior to nanopore sequencing, DNA and RNA were sequenced using very time-consuming and expensive biochemical procedures that did not directly read the DNA and RNA base sequences themselves.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggtej_3Y84Wox_5LOz9WA-T0t1rNiy6uS2excjQ-K8tkGSEuDpygqqaJiwnaCn2viM6VDGAuGOwy3GkdHduXX24mQybf6yFtL1WIMd6hXGDxtiYJuoL2i-df2V8x30eK75Qz0-ikM3n0Gg5zrZBBjvW27LKgFuR0PzCOHqGfhTVI1nYnlI05GgnQ/s900/MinIONinField.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="453" data-original-width="900" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggtej_3Y84Wox_5LOz9WA-T0t1rNiy6uS2excjQ-K8tkGSEuDpygqqaJiwnaCn2viM6VDGAuGOwy3GkdHduXX24mQybf6yFtL1WIMd6hXGDxtiYJuoL2i-df2V8x30eK75Qz0-ikM3n0Gg5zrZBBjvW27LKgFuR0PzCOHqGfhTVI1nYnlI05GgnQ/s600/MinIONinField.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 1 – Above is a MinION nanopore sequencer being used in the field.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcT_Ds3W0uUX723swvukLhNeVVKGRKSfxx-iEDTaJoLP5ct0MlMECbf4bpnBrtG0_R15kqAwm8fiizpMfuZ679SOWqNRrVZhZm-PeoSky9gUq9hquo0j_lxamsNNwgBDwvb93UFhfoFhdW706fgmftXzHD3boMgtKGpmGRePkpbNF4Y_vbbl62dw/s685/MinION-Setup.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="433" data-original-width="685" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcT_Ds3W0uUX723swvukLhNeVVKGRKSfxx-iEDTaJoLP5ct0MlMECbf4bpnBrtG0_R15kqAwm8fiizpMfuZ679SOWqNRrVZhZm-PeoSky9gUq9hquo0j_lxamsNNwgBDwvb93UFhfoFhdW706fgmftXzHD3boMgtKGpmGRePkpbNF4Y_vbbl62dw/s600/MinION-Setup.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 2 – Above is the general setup of a MinION nanopore sequencer. The DNA or RNA sample to be read is placed into the flow cell of the unit.
<p style="line-height:140%; font-size:120%">
The development of nanopore sequencing should be of interest to all IT professionals because it is truly an extraordinary story of information processing capability that rivals what has occurred in IT over the past 82 years, or 2.6 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941. First, it is important to remember that whenever data is written from memory to a permanent secondary medium that persists in time, such as a thumb drive, an SSD or HDD drive, or even the magnetic tape of yore, it is always written in a sequential manner - one byte after the other. The written information in each byte is naturally important, but even more so is the sequence of the bytes that are written out to the permanent secondary medium. For example, each character in one of your emails is encoded as a single byte of information using the characters in the ASCII encoding table. However, the true essence of the information in your email is encoded by the total order and sequence of the bytes in the email. The same is true of the total order and sequence of the bases in a stretch of DNA or RNA used to build a protein molecule. That is why being able to read the sequence of bases in a string of DNA or RNA is so important - it is even more important than being able to read the bytes in your email.
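<br /><br />
A tiny Python example makes the point concrete. The bytes below are exactly the same in both print statements; only their sequence differs, and with the sequence goes all of the meaning:
<br /><br />
<pre><font color="blue">
# Each character of an email is stored as one ASCII byte, but the
# meaning lives in the sequence of the bytes, not in the bytes alone.
message = "MEET AT NOON"
print(list(message.encode("ascii")))
# [77, 69, 69, 84, 32, 65, 84, 32, 78, 79, 79, 78]

# The very same bytes in a different sequence carry no meaning at all:
print(bytes(sorted(message.encode("ascii"))).decode("ascii"))
# "  AEEMNNOOTT"
</font></pre>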
<br /><br />
<strong>Biological and IT Data Access Methods</strong><br />
Before proceeding, we first need to briefly review how data is encoded in IT and by carbon-based life forms in biology.
In biology, data is encoded by DNA and RNA molecules.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYAXwmTLKgG5JneTa5l-0_aeQqIkh65bTw56f8j_-40Y2IKUnoG6dxGOXZbG7rN5mqaQ6n-QZl4hp_PiTmBVm8ilcOC6YwtayS7oyc4bNJWEGC8biktFgnr6T9yMQ1skxj-nenb5rTKQyF43cp0YwJaxJfbWURw-dwBHaB3Xq389Xm2VVzCLw/s750/Difference-Between-DNA-and-RNA.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="523" data-original-width="750" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYAXwmTLKgG5JneTa5l-0_aeQqIkh65bTw56f8j_-40Y2IKUnoG6dxGOXZbG7rN5mqaQ6n-QZl4hp_PiTmBVm8ilcOC6YwtayS7oyc4bNJWEGC8biktFgnr6T9yMQ1skxj-nenb5rTKQyF43cp0YwJaxJfbWURw-dwBHaB3Xq389Xm2VVzCLw/s600/Difference-Between-DNA-and-RNA.png"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 3 - RNA is a one-track tape, while DNA is a two-track tape. DNA has a data track and a parity track that allows for error corrections after DNA replicates. DNA uses a slightly different version of the ribose sugar and also uses the nucleotide of T (Thymine) instead of the U (Uracil) used by RNA. </p><p style="line-height:140%; font-size:120%">
For IT, we will return to the batch processing of data stored on magnetic tapes that was common in the 1960s and 1970s because it is more closely aligned with how biological information is processed using DNA and RNA tapes. One of the simplest and oldest access methods in IT is called QSAM - Queued Sequential Access Method: <br></br> Queued Sequential Access Method<br /> <a href="http://en.wikipedia.org/wiki/Queued_Sequential_Access_Method">http://en.wikipedia.org/wiki/Queued_Sequential_Access_Method </a> <br></br> I did a lot of magnetic tape processing in the 1970s and early 1980s using QSAM. At the time, we used 9-track tapes that were 1/2 inch wide and 2400 feet long on a reel with a 10.5-inch diameter. The tape had 8 data tracks and one parity track across the 1/2-inch tape width. That way we could store one byte across the 8 1-bit data tracks in a frame, and we used the parity track to check for errors. We used odd parity: if the 8 bits on the 8 data tracks in a frame added up to an even number of 1s, we put a 1 in the parity track to make the total number of 1s an odd number. If the 8 bits added up to an odd number of 1s, we put a 0 in the parity track to keep the total number of 1s an odd number. Originally, 9-track tapes had a density of 1600 bytes/inch of tape, with a data transfer rate of 15,000 bytes/second. Remember, a byte is 8 bits and can store one character, like the letter “A”, which we encode in the ASCII code set as A = “01000001”. <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0izm_wuXQih4hqWFWP6Ye0tdApZH01eCsLU8MSDQBhWDgdGG4dSTzF2p-40riFW9CrbMDVQauBWL5agWLBGQ5mctnB5phtpUcHuSoF2GBggnvCYP9jXU-FIxOcJIK3q6Vvc6e4g/s1600/Tape.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0izm_wuXQih4hqWFWP6Ye0tdApZH01eCsLU8MSDQBhWDgdGG4dSTzF2p-40riFW9CrbMDVQauBWL5agWLBGQ5mctnB5phtpUcHuSoF2GBggnvCYP9jXU-FIxOcJIK3q6Vvc6e4g/s640/Tape.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 4 – A 1/2-inch wide 9-track magnetic tape on a 2400-foot reel with a diameter of 10.5 inches </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOIm3XQnTsddSJvj4jK80yw8ZCvQvs8jzxh3dGQkxxCAqWK-R9Ri881UY45LbqRCrf1a8UikKrVEIRCbjgwjbyocuPruyJ7TBFMNz6PgyTniCk7wJfBgKKiKv_WGup0Eh1uUsRvw/s1600/Tracks.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOIm3XQnTsddSJvj4jK80yw8ZCvQvs8jzxh3dGQkxxCAqWK-R9Ri881UY45LbqRCrf1a8UikKrVEIRCbjgwjbyocuPruyJ7TBFMNz6PgyTniCk7wJfBgKKiKv_WGup0Eh1uUsRvw/s640/Tracks.png" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 5 – 9-track magnetic tape had 8 data tracks and one parity track using odd parity, which allowed for the detection of bad bytes with parity errors on the tape. </p><p style="line-height:140%; font-size:120%"> Later, 6250 bytes/inch tape drives became available, and I will use that density for the calculations that follow. Now suppose you had 50 million customers and the current account balance for each customer was stored in an 80-byte customer record. A record was like a row in a spreadsheet.
The first field of the record was usually a CustomerID field that contained a unique customer ID like a social security number and was essentially the equivalent of a promoter region on the front end of a gene in DNA. The remainder of the 80-byte customer record contained fields for the customer’s name and billing address, along with the customer’s current account information. Between each block of data on the tape, there was a 0.5-inch gap of “junk” tape. This “junk” tape allowed for the acceleration and deceleration of the tape reel as it spun past the read/write head of a tape drive and perhaps occasionally reversed direction. Since an 80-byte record only came to 80/6250 = 0.0128 inches of tape, which is quite short compared to the overhead of the 0.5-inch gap of “junk” tape between records, it made sense to block many records together into a single block of data that could be read by the tape drive in a single I/O operation. For example, blocking 100 80-byte records increased the block size to 8000/6250 = 1.28 inches and between each 1.28-inch block of data on the tape, there was the 0.5-inch gap of “junk” tape. This greatly reduced the amount of wasted “junk” tape on a 2400-foot reel of tape. So each 100-record block of data took up a total of 1.78 inches of tape and we could get 16,180 blocks on a 2400-foot tape or the data for 1,618,000 customers per tape. The advantage of QSAM, over an earlier sequential access method known as BSAM, was that you could read and write an entire block of records at a time via an I/O buffer. In our example, a program could read one record at a time from an I/O buffer which contained the 100 records from a single block of data on the tape. When the I/O buffer was depleted of records, the next 100 records were read in from the next block of records on the tape. Similarly, programs could write one record at a time to the I/O buffer, and when the I/O buffer was filled with 100 records, the entire I/O buffer with 100 records in it was written as the next block of data on an output tape. <br></br> The use of a blocked I/O buffer provided a significant distinction between the way data was physically stored on tape and the way programs logically processed the data. The difference between the way things are physically implemented and the way things are logically viewed by software is a really big deal in IT. The history of IT over the past 82 years has really been a history of logically abstracting physical things through the increasing use of layers of abstraction, to the point where today, IT professionals rarely think of physical things at all. Everything just resides in a logical “Cloud”. I think that taking more of a logical view of things, rather than taking a physical view of things, would greatly help biologists at this point in the history of biology. Biologists should not get so hung up about where the information for biological software is physically located. Rather, biologists should take a cue from IT professionals, and start thinking more of biological software in logical terms, rather than physical terms. 
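<br /><br />
Before leaving the physical tape era behind, the capacity arithmetic above is easy to check with a few lines of Python; this little sketch just reproduces the numbers used in this section:
<br /><br />
<pre><font color="blue">
# Reproducing the tape capacity arithmetic above for a 6250 bytes/inch
# drive with 100 80-byte records per block and 0.5-inch gaps.
block_inches = (100 * 80) / 6250 + 0.5   # 1.28 inches of data + gap
tape_inches = 2400 * 12                  # a 2400-foot reel

blocks_per_tape = int(tape_inches / block_inches)    # 16179
customers_per_tape = blocks_per_tape * 100           # 1,617,900
tapes_needed = -(-50_000_000 // customers_per_tape)  # 31 (ceiling)

print(blocks_per_tape, customers_per_tape, tapes_needed)
</font></pre>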
<br></br> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJe_vPtKVW7uscVJS7oWoPdfUGZc3UxRroiwxViut9rC7vNc0GzVb9YJJ-iPFox8tnuLYOK9qMa65nNS0g_w0_vjkW9gqh4qEIAZmpXNWZjMohbHHR4h4vFiR4OVrSYZWKOcYWRQ/s1600/Records.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJe_vPtKVW7uscVJS7oWoPdfUGZc3UxRroiwxViut9rC7vNc0GzVb9YJJ-iPFox8tnuLYOK9qMa65nNS0g_w0_vjkW9gqh4qEIAZmpXNWZjMohbHHR4h4vFiR4OVrSYZWKOcYWRQ/s640/Records.png" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 6 – Between each record, or block of records, on a magnetic tape, there was a 0.5-inch gap of “junk” tape. The “junk” tape allowed for the acceleration and deceleration of the tape reel as it spun past the read/write head on a tape drive. Since an 80-byte record only came to 80/6250 = 0.0128 inches, it made sense to block many records together into a single block that could be read by the tape drive in a single I/O operation. For example, blocking 100 80-byte records increased the block size to 8000/6250 = 1.28 inches, and between each 1.28-inch block of data on the tape, there was a 0.5-inch gap of “junk” tape for a total of 1.78 inches per block.</p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgesyd4wnw3WQfWonDTFhb8lG-JoTHeu1V4UIQ9ZyAps3GYBGQABuJaX0RqytYgqv9iqWws1ShZqwOYQ0FSQPaz0Vj2yV8oIkHy3-1ZLP679O4dU3n3MVmQXmoh80Rwfb0D1KZOdQ/s1600/Blocks.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgesyd4wnw3WQfWonDTFhb8lG-JoTHeu1V4UIQ9ZyAps3GYBGQABuJaX0RqytYgqv9iqWws1ShZqwOYQ0FSQPaz0Vj2yV8oIkHy3-1ZLP679O4dU3n3MVmQXmoh80Rwfb0D1KZOdQ/s640/Blocks.png" /></a></div><p style="text-align: center; line-height:140%; font-size:120%">Figure 7 – Blocking records on tape allowed data to be stored more efficiently.</p><p style="line-height:140%; font-size:120%">So it took 31 tapes to just store the rudimentary account data for 50 million customers. The problem was that each tape could only store 123 MB of data. Not too good, considering that today you can buy a 1 TB PC disk drive that can hold 8525 times as much data for about $50! Today, you could also store about 4,263 times as much data on a $50 128 GB thumb drive. So how could you find the data for a particular customer on 74,000 feet (14 miles) of tape? Well, you really could not do that reading one block of data at a time with the read/write head of a tape drive, so we processed data with batch jobs using lots of input and output tapes. Generally, we had a Master Customer File on 31 tapes and a large number of Transaction tapes with insert, update and delete records for customers. All the tapes were sorted by the CustomerID field, and our programs would read a Master tape and a Transaction tape at the same time and apply the inserts, updates and deletes on the Transaction tape to a new Master tape. So your batch job would read a Master and Transaction input tape at the same time and would then write to a single new Master output tape. These batch jobs would run for many hours, with lots of mounting and unmounting of dozens of tapes. 
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyw6bXBcKOarflVq2LjewZauGn62XNvgHEZWJ59MMAq1o_XfxbF-nOmZ8kifh_ro0JW4bNQ5-bolTyHRDf1IA8OJhCvnEYomXIOnQ5tW9GifYH0uXHbMHTbrI-dfh_bJOMJNlDHQ/s1600/TapeDrives.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyw6bXBcKOarflVq2LjewZauGn62XNvgHEZWJ59MMAq1o_XfxbF-nOmZ8kifh_ro0JW4bNQ5-bolTyHRDf1IA8OJhCvnEYomXIOnQ5tW9GifYH0uXHbMHTbrI-dfh_bJOMJNlDHQ/s640/TapeDrives.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 8 – Batch processing of 50 million customers took a lot of tapes and tape drives.</p><p style="line-height:140%; font-size:120%">
<strong>Biological Access Methods</strong><br /> Nearly all biological functions are performed by proteins. A protein is formed by combining 20 different amino acids into different sequences, and on average it takes about 400 amino acids strung together to form a functional protein. The information to do that is encoded in base pairs running along a strand of DNA. Each base can be in one of four states – A, C, G, or T, and an A will always be found to pair with a T, while a C will always pair with a G. So DNA is really a 2-track tape with one data track and one parity track. For example, if there is an A on the DNA data track, you will find a T on the DNA parity track. This allows not only for the detection of parity errors but also for the correction of parity errors in DNA by enzymes that run up and down the DNA tape looking for parity errors and correcting them. <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqQxRYomg-CA4WDiOC5Qtxut_0n_bowQxftOgm6RNtrLoQTViK41W3yPi3lBn84GOui3MjMuir1NoVIJv927MSIgSnwdLLBitBc2uHVw-0d0coQCkvybLp5IgrFTgNu53_ljjzfA/s1600/DNA.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqQxRYomg-CA4WDiOC5Qtxut_0n_bowQxftOgm6RNtrLoQTViK41W3yPi3lBn84GOui3MjMuir1NoVIJv927MSIgSnwdLLBitBc2uHVw-0d0coQCkvybLp5IgrFTgNu53_ljjzfA/s320/DNA.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 9 – DNA is a two-track tape, with one data track and one parity track. This allows not only for the detection of parity errors but also for the correction of parity errors in DNA by enzymes that run up and down the DNA tape looking for parity errors and correcting them.</p><p style="line-height:140%; font-size:120%"> Now a single base pair can code for 4 different amino acids because a single base pair can be in one of 4 states. Two base pairs can code for 4 x 4 = 16 different amino acids, which is not enough. Three base pairs can code for 4 x 4 x 4 = 64 different combinations, which is more than enough to cover the 20 different amino acids. So it takes a minimum of three bases to fully encode the 20 different amino acids, leaving 44 combinations for redundancy. Biologists call these three base pair combinations a “codon”, but a codon really is just a biological byte composed of three biological bits, or base pairs, that code for an amino acid. Actually, three of the base pair combinations, or codons, are used as STOP codons – TAA, TAG and TGA, which are essentially end-of-file markers designating the end of a gene along the sequential file of DNA. As with magnetic tape, there is a section of “junk” DNA between genes along the DNA 2-track tape. According to Shannon’s equation, a DNA base contains 2 bits of information, so a codon can store 6 bits. For more on this see <a href="https://softwarephysics.blogspot.com/2013/09/new-introduction-to-softwarephysics.html">Some More Information About Information</a>. 
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYU1gV5MzMuIOz5-oF2SejANkGn4a17-j-PiMlJO1-XkMlTsFcKm1Aq2EW2shmhsac3lgjTs9dYh_7ZLUUnQiT5E1Tx5T-DkQJZhPZ2bM2gudsbR3uupBZ4SczwtGv-U7b81yIWg/s1600/TranTable.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYU1gV5MzMuIOz5-oF2SejANkGn4a17-j-PiMlJO1-XkMlTsFcKm1Aq2EW2shmhsac3lgjTs9dYh_7ZLUUnQiT5E1Tx5T-DkQJZhPZ2bM2gudsbR3uupBZ4SczwtGv-U7b81yIWg/s400/TranTable.png" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 10 – Three bases combine to form a codon, or a biological byte, composed of three biological bits, and encodes the information for one amino acid along the chain of amino acids that form a protein. </p><p style="line-height:140%; font-size:120%">The beginning of a gene is denoted by a section of promoter DNA that identifies the beginning of the gene, like the CustomerID field on a record, and the gene is terminated by a STOP codon of TAA, TAG or TGA. Just as there was a 0.50-inch gap of “junk” tape between blocks of records on a magnetic computer tape, there is a section of “junk” DNA between each gene along the 6 feet of DNA tape found within human cells. <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbkR0bo3wkKs6i_JKPxewbpoGW5nyU42Ue8cgzPmvs4cyOTBqxV81B1AF8b3mEULTaVnGtCsOfZoCbM9TkVQS22SoEKbcFOaLmSreAu_EmNRqoQvstk_I9tQVXDDi81sL07PFf7g/s1600/Genes.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbkR0bo3wkKs6i_JKPxewbpoGW5nyU42Ue8cgzPmvs4cyOTBqxV81B1AF8b3mEULTaVnGtCsOfZoCbM9TkVQS22SoEKbcFOaLmSreAu_EmNRqoQvstk_I9tQVXDDi81sL07PFf7g/s400/Genes.png" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 11 - On average, each gene is about 400 codons long and ends in a STOP codon TAA, TAG or TGA which are essentially end-of-file markers designating the end of a gene along the sequential file of DNA. As with magnetic tape, there is a section of “junk” DNA between genes which is shown in grey above. </p><p style="line-height:140%; font-size:120%">
In order to build a protein, genes are first transcribed to an I/O buffer called mRNA. The 2-track DNA file for a gene is first opened near the promoter of a gene and an enzyme called RNA polymerase then begins to copy the codons or biological bytes along the data track of the DNA tape to an mRNA I/O buffer. The mRNA I/O buffer is then read by a ribosome read/write head as it travels along the mRNA I/O buffer. The ribosome read/write head reads each codon or biological byte of data along the mRNA I/O buffer and writes out a chain of amino acids as tRNA brings in one amino acid after another in the sequence specified by the mRNA I/O buffer. <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwr2ouFhpjn2Z-uRbyFZqIx_Xub8iTgyUI6AXz6jSU-KoIWlxTAc5-8b0SqPSZdDo2X5PjG6c36xLXXSAYsbP8OA6L279DLXtR0v3ahvVNsyBXxuU9qfrKi2wLnLDxz-ujMvnqCg/s1600/Translation.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwr2ouFhpjn2Z-uRbyFZqIx_Xub8iTgyUI6AXz6jSU-KoIWlxTAc5-8b0SqPSZdDo2X5PjG6c36xLXXSAYsbP8OA6L279DLXtR0v3ahvVNsyBXxuU9qfrKi2wLnLDxz-ujMvnqCg/s400/Translation.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 12 - In order to build a protein, genes are first transcribed to an I/O buffer called mRNA. The 2-track DNA file for a gene is first opened near the promoter of a gene and an enzyme called RNA polymerase then begins to copy the codons or biological bytes along the data track of the DNA tape to the mRNA I/O buffer. The mRNA I/O buffer is then read by a ribosome read/write head as it travels along the mRNA I/O buffer. The ribosome read/write head reads each codon or biological byte of data along the mRNA I/O buffer and writes out a chain of amino acids as tRNA brings in one amino acid after another in the sequence specified by the mRNA I/O buffer. </p><p style="line-height:140%; font-size:120%">
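Here is a toy Python sketch of the above I/O processing. It uses just four entries of the real 64-codon table for illustration and, for simplicity, reads the codons in their DNA spelling, treating the gene like a sequential file that ends at an end-of-file STOP codon:
<pre>
# A toy sketch of the gene-as-sequential-file idea described above.
# Only four entries of the real 64-codon table are included.

STOP = {"TAA", "TAG", "TGA"}      # the end-of-file markers
CODON_TABLE = {                   # a small, real sample of the code
    "ATG": "Met", "TTT": "Phe", "GGC": "Gly", "GAA": "Glu",
}

def transcribe(gene):
    """RNA polymerase: copy the data track to an mRNA I/O buffer.
    (In mRNA the base T is written as U.)"""
    return gene.replace("T", "U")

def translate(gene):
    """Ribosome read/write head: read one biological byte (codon)
    at a time and write out the chain of amino acids, stopping at
    an end-of-file STOP codon."""
    protein = []
    for i in range(0, len(gene) - 2, 3):
        codon = gene[i:i+3]
        if codon in STOP:
            break                 # end-of-file reached
        protein.append(CODON_TABLE.get(codon, "???"))
    return protein

gene = "ATGTTTGGCGAATAA"
print(transcribe(gene))           # AUGUUUGGCGAAUAA
print(translate(gene))            # ['Met', 'Phe', 'Gly', 'Glu']
</pre>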
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiX_bACY-Fjy2KFKMRy6a0h5bM7FDkNsIWamTekhdL90LTMO49g2wNuZaDiBKv74qpEyh_M3j-BLSidbaeMRLUdLsubBWLYn_rQIEip0q3ZoErNJCSN6wuS3xtiLHNngeGzbH95cA/s1600/Chromatin.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiX_bACY-Fjy2KFKMRy6a0h5bM7FDkNsIWamTekhdL90LTMO49g2wNuZaDiBKv74qpEyh_M3j-BLSidbaeMRLUdLsubBWLYn_rQIEip0q3ZoErNJCSN6wuS3xtiLHNngeGzbH95cA/s400/Chromatin.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">
Figure 13 – In addition, the DNA of eukaryotic carbon-based life, whose cells are more complicated than the simple prokaryotic cells of the bacteria and archaea, is wrapped around histone proteins like magnetic computer tape wrapped around little reels, forming nucleosomes. The nucleosomes are then packed into chromatin fibers that are wound up into chromosomes. </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5KWZI_7DFhNmyBD9R78450F3fDk9ZvPypVsUQ_Wa6qPRwaBvzWGPri9r7kqDtnZolWRmTNCncsiqonJ0qKlJqJJUPCKCBGIVNTesXnAZQpe57JUuoSBK7HjUSETZdxv3exlRxHQ/s1600/TapeRack.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5KWZI_7DFhNmyBD9R78450F3fDk9ZvPypVsUQ_Wa6qPRwaBvzWGPri9r7kqDtnZolWRmTNCncsiqonJ0qKlJqJJUPCKCBGIVNTesXnAZQpe57JUuoSBK7HjUSETZdxv3exlRxHQ/s400/TapeRack.jpg" /></a></div><p style="text-align: center; line-height:140%; font-size:120%">Figure 14 – Chromatin performs the functions of the tape racks of yore. It allows DNA to be highly compacted for storage, and also allows for the controlled expression of genes by means of epigenetic factors. Each tape in a rack had an external label known as a volume serial number, which identified the tape. </p><p style="line-height:140%; font-size:120%">
<strong>How Nanopore Sequencing of DNA and RNA Works</strong><br />
With the above background at hand, let us now explore the engineering that allows nanopore sequencing to work. A very complete history of this important technology can be found at:
<br /><br />
Nanopore Sequencing<br />
<a href="https://www.whatisbiotechnology.org/index.php/science/summary/nanopore/nanopore-sequencing-makes-it-possible-to-decode-the">https://www.whatisbiotechnology.org/index.php/science/summary/nanopore/nanopore-sequencing-makes-it-possible-to-decode-the</a>
<br /><br />
It all began on June 25, 1989, while Dave Deamer was on a Sunday drive in Oregon.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7oOW9T7OBesI6eCQtbTx0ACpX0TRKZ_HZxX_RrS6CxhqV43xAOuOLHygd_7x5_rpizhge4RVDiOtsw-Z27huIssXNQ9p6ehI8wgqvZ4MOKxP-0s7GyzzPEYsjglbbIj4p5TuLx-KzksAsWdM7lmeuoNoQMRPeE8JomufDNnOXOcTMkOaJoqqv6w/s2113/DeamerNotebookw.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1557" data-original-width="2113" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh7oOW9T7OBesI6eCQtbTx0ACpX0TRKZ_HZxX_RrS6CxhqV43xAOuOLHygd_7x5_rpizhge4RVDiOtsw-Z27huIssXNQ9p6ehI8wgqvZ4MOKxP-0s7GyzzPEYsjglbbIj4p5TuLx-KzksAsWdM7lmeuoNoQMRPeE8JomufDNnOXOcTMkOaJoqqv6w/s600/DeamerNotebookw.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 15 – Above is Dave Deamer's conceptual sketch of how a DNA sequencer could directly read DNA one base at a time like the read/write head of a tape drive as a strand of DNA passed through a small hole in a membrane. He drew the above sketch after pulling over to the side of the road during a one-hour drive in Oregon.
<p style="line-height:140%; font-size:120%">
The text reads:<br />
<font color="blue">Sunday June 25 1989. Driving back from Eugene -> Belmont Lodge, had an idea on how to sequence DNA directly.
<br /><br />
Main concept: DNA will be driven through a small channel, either by ΔY or ΔpH. The channel will be carrying a current, driven by ΔΨ. As each base passes through, a change in the current will occur. Because the bases are of different size, the current change will be proportional, thereby providing an indication of which base it is.
<br /><br />
Details: The thickness of the membrane must be very thin, perhaps a polymerized bilayer. The channel must be of the dimensions of DNA in cross section, approx. 1-2nm. Porin? Complement? Alamethicin? The ion flux might be protonic.</font>
<p style="line-height:140%; font-size:120%">
In the above scheme, ΔY and ΔΨ are both voltage differences ΔV across a polymerized bilayer membrane.
<br /><br />
Dave Deamer is truly a membrane expert. We have seen him put them to good use in the <em>Hot Spring Origins Hypothesis</em> that Dave Deamer and Bruce Damer developed for the origin of carbon-based life on the Earth about four billion years ago. For more on that see <a href="https://softwarephysics.blogspot.com/2017/07/new-introduction-to-softwarephysics.html">The Bootstrapping Algorithm of Carbon-Based Life</a> and <a href="https://softwarephysics.blogspot.com/2022/07/new-introduction-to-softwarephysics.html">Urability Requires Durability to Produce Galactic Machine-Based Intelligences</a>. To fully understand nanopore sequencing you need to understand membranes.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0FLfFdPp7BVkmuNMlgH8XnGEs-_7t7cfnONjfxp7scwPoXJ9GbKNaurPMv4QTc8muADrlxEZ7jmJnJn8Z8BMCedGtDfDtCv9b7HX3JV4Ex6mIeGGx4c_P4EjZkBT3jHrb2b645g/s1407/Cell_membrane_detailed_diagram_4.svg.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="600" data-original-height="1407" data-original-width="1200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh0FLfFdPp7BVkmuNMlgH8XnGEs-_7t7cfnONjfxp7scwPoXJ9GbKNaurPMv4QTc8muADrlxEZ7jmJnJn8Z8BMCedGtDfDtCv9b7HX3JV4Ex6mIeGGx4c_P4EjZkBT3jHrb2b645g/s600/Cell_membrane_detailed_diagram_4.svg.png"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 16 – A cell membrane consists of a phospholipid bilayer with embedded molecules that allow for a controlled input-output to the cell. Once we have a membrane, we can fill the "inside" with organic molecules that are capable of doing things that then interact with organic molecules on the "outside". </p><p style="line-height:140%; font-size:120%">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKkIppJMyytCRUa_OS3f7KS8V87Wg3jMiibdMIrVZFfHso9ZkEFy5fmjCIpT9dUZExjmjnGJT_hIy2JH7E3QcvEAlS1ns9Go-65BJiNnQUaXxugCzZsDDyW_2TZKMpROKVOcTc2g/s1600-h/water.gif"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 240px; height: 240px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgKkIppJMyytCRUa_OS3f7KS8V87Wg3jMiibdMIrVZFfHso9ZkEFy5fmjCIpT9dUZExjmjnGJT_hIy2JH7E3QcvEAlS1ns9Go-65BJiNnQUaXxugCzZsDDyW_2TZKMpROKVOcTc2g/s320/water.gif" border="0" alt=""id="BLOGGER_PHOTO_ID_5283730786942016802" /></a><p style="text-align: center; line-height:140%; font-size:120%">Figure 17 – Water molecules are polar molecules that have a positive end and a negative end because oxygen atoms attract the bonding electrons more strongly than do the hydrogen atoms. The positive ends of water molecules attract the negative ends of other water molecules to form a loosely coupled network of water molecules with a minimum of free energy. </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNLsPgs6PzQC-lpuGbvX_jpRhP1-kv2azF-XMBTOt9zUUNzto5UEO2CwHb9gXBpjoK-0fIWkDZsodYcsbCJkeoQ45XesSN8YJCeieU4gfI-GvSlYNJFFoqwnq1pGgeckC5vsWDTg/s1600/Soap.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjNLsPgs6PzQC-lpuGbvX_jpRhP1-kv2azF-XMBTOt9zUUNzto5UEO2CwHb9gXBpjoK-0fIWkDZsodYcsbCJkeoQ45XesSN8YJCeieU4gfI-GvSlYNJFFoqwnq1pGgeckC5vsWDTg/s400/Soap.png" width="325" height="400" data-original-width="375" data-original-height="461" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 18 – How soap and water work. The lipids in a bar of soap have water-loving polar heads and water-hating nonpolar tails. When in water, the soap lipids can form a spherical micelle that has all of the water-hating nonpolar tails facing inwards. Then the spherical micelles can surround the greasy nonpolar molecules of body oils and allow them to be flushed away by a stream of polar water molecules. The lipids in a bar of soap can also form a cell-like liposome with a bilayer of lipid molecules that can surround the monomers and polymers of life. </p><p style="line-height:140%; font-size:120%"> Similarly, in <a href="https://softwarephysics.blogspot.com/2021/06/new-introduction-to-softwarephysics_24.html">The Role of Membranes in the Evolution of Software</a>, I explained how the isolation of processing functions within membranes progressed as the architecture of software slowly evolved over time.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhV-uOTO0EYrYGQeergtNbr8ENU_ayPWt1a21lw9mArEULbU2cPo3IJbW2Dn4KjT1oIew_EhQtT_Sd8VFW0kyxwM5Vk-IDTOFvrnP-Mvel2vnrJElaoe8dTNlYG8GUmoWyWAAgMEOOdPWiD3g4R-eoRP5aHeNT8H_-Xp76gPtqlJ9WNnKwMbAaVYA/s1080/NanoporeSequencer1.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="764" data-original-width="1080" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhV-uOTO0EYrYGQeergtNbr8ENU_ayPWt1a21lw9mArEULbU2cPo3IJbW2Dn4KjT1oIew_EhQtT_Sd8VFW0kyxwM5Vk-IDTOFvrnP-Mvel2vnrJElaoe8dTNlYG8GUmoWyWAAgMEOOdPWiD3g4R-eoRP5aHeNT8H_-Xp76gPtqlJ9WNnKwMbAaVYA/s600/NanoporeSequencer1.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 19 – Above is a general view of how a nanopore sequencer works. It consists of a bilayer membrane with a hole drilled through it. The hole is lined with a pipe-like protein molecule called Alpha-hemolysin to keep it open. A battery is then used to apply a small voltage difference between the inside and the outside of the membrane. The fluid below the membrane is now at a higher voltage. The bases on a strand of DNA or RNA have a slight negative charge, so the electric force from the voltage difference across the membrane will pull the DNA or RNA through the membrane hole. It will also pull lots of negatively charged ions through the hole producing a current. The bases on DNA and RNA have different sizes. The larger bases will clog up the hole more than the smaller bases. This will cause the amount of current flowing through the hole to fluctuate as the DNA or RNA strand is pulled through by the electric field. By measuring the current flow through the hole, one can see each base pass by like the bits passing by the read/write head of a tape drive.
<p style="line-height:140%; font-size:120%">
In this regard, the nanopore sequencer behaves like a vacuum tube or a transistor to modify the current flowing from the
outside of the membrane to the inside of the membrane.
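<br /><br />
To make the tape-drive analogy concrete, below is a toy Python simulation of my own. The blockade depths are made-up illustrative numbers, not calibrated to any real nanopore, but they show how differently sized bases produce a readable current trace:
<pre>
# A toy model of Deamer's idea: each base blocks the ion current
# through the pore by a different amount because the bases differ
# in size. All values are hypothetical illustrative numbers.

import random

OPEN_PORE_CURRENT = 100.0        # arbitrary units
BLOCKADE = {"A": 35.0, "C": 20.0, "G": 40.0, "T": 25.0}  # made up

def current_trace(strand, noise=2.0, samples_per_base=4):
    """Simulate the measured current as the strand is ratcheted
    through the pore one base at a time."""
    trace = []
    for base in strand:
        level = OPEN_PORE_CURRENT - BLOCKADE[base]
        for _ in range(samples_per_base):
            trace.append(level + random.gauss(0.0, noise))
    return trace

def call_bases(trace, samples_per_base=4):
    """Naive base caller: average each dwell, pick the nearest level."""
    calls = []
    for i in range(0, len(trace), samples_per_base):
        mean = sum(trace[i:i+samples_per_base]) / samples_per_base
        best = min(BLOCKADE,
                   key=lambda b: abs(OPEN_PORE_CURRENT - BLOCKADE[b] - mean))
        calls.append(best)
    return "".join(calls)

strand = "GATTACA"
print(call_bases(current_trace(strand)))   # usually recovers GATTACA
</pre>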
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXYD1XvCnU3xtvRodyiPLQdVLvD2e7ghfc8jbknnyROlExiPSkv7M0vRgF1IEjUoEHaPuJH0knmtTQFXLKnUDXEbgIT_dphApLu15kRJbiJGZLZ-9WvyYfZG-Gxs88UmNrdU1qnw/s1600/VacuumTube2.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="207" width="243" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhXYD1XvCnU3xtvRodyiPLQdVLvD2e7ghfc8jbknnyROlExiPSkv7M0vRgF1IEjUoEHaPuJH0knmtTQFXLKnUDXEbgIT_dphApLu15kRJbiJGZLZ-9WvyYfZG-Gxs88UmNrdU1qnw/s400/VacuumTube2.jpg" /></a></div><p style="text-align: center; line-height:140%; font-size:120%">Figure 20 – Vacuum tubes contain a hot negative cathode that glows red and boils off electrons. The electrons are attracted to the cold positive anode plate, but there is a gate electrode between the cathode and anode plate. By changing the voltage on the grid, the vacuum tube can control the flow of electrons like the handle of a faucet. The grid voltage can be adjusted so that the electron flow is full blast, a trickle, or completely shut off, and that is how a vacuum tube can be used as a switch.</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHByT4Qg012D5eaBc436ukxi_gdFXeBXZQAMuB4OMLtyduvl6ZA1FNtpNiPkXyKA4IJknVqpNpzPH2uQBiyksGYV2aA3Nu-zZrWD7ODQvuAs_O7Mg6WdC_xdEyW43TrfWuF9j9WA/s1600/Gate5.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="243" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHByT4Qg012D5eaBc436ukxi_gdFXeBXZQAMuB4OMLtyduvl6ZA1FNtpNiPkXyKA4IJknVqpNpzPH2uQBiyksGYV2aA3Nu-zZrWD7ODQvuAs_O7Mg6WdC_xdEyW43TrfWuF9j9WA/s400/Gate5.png" /></a></div><p style="text-align: center; line-height:140%; font-size:120%">Figure 21 – A FET transistor consists of a source, gate and drain. When a positive voltage is applied to the gate, a current of electrons can flow from the source to the drain and the FET acts like a closed switch that is “on”. When there is no positive voltage on the gate, no current can flow from the source to the drain, and the FET acts like an open switch that is “off”.</p><p style="line-height:140%; font-size:120%"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmmZvsf0UUNIxdYvqrdIUsmkMS0jsnfKZwgQhx2DMsqoqOks9pCeRmURfLdqdFbjrJti4-6M3h0kWUwRgOyWht34b4DA0ilIlbYSzUZRqia2NIJUExGGXHEvLWg-csBMPurG0Aig/s1600/Gate4.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="181" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmmZvsf0UUNIxdYvqrdIUsmkMS0jsnfKZwgQhx2DMsqoqOks9pCeRmURfLdqdFbjrJti4-6M3h0kWUwRgOyWht34b4DA0ilIlbYSzUZRqia2NIJUExGGXHEvLWg-csBMPurG0Aig/s400/Gate4.png" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 22 – When there is no positive voltage on the gate, the FET transistor is switched off, and when there is a positive voltage on the gate the FET transistor is switched on. These two states can be used to store a binary “0” or “1”, or can be used as a switch in a logic gate, just like an electrical relay or a vacuum tube.</p><p style="line-height:140%; font-size:120%"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPK8nqRR6jkS5GMCIdC2yB-ZCs7cK3vpJV0nrkQ1hNPaKmU4vHxsWkRXvS_bV4vvmF3eK4QGEEN6Xbwka4ZtK1nbtQLJ1a4eEk9xNM5lXxcGWjARgNJWlzIFkY2SahB-rzIxyrCg/s1600/Gate2.gif" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="199" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiPK8nqRR6jkS5GMCIdC2yB-ZCs7cK3vpJV0nrkQ1hNPaKmU4vHxsWkRXvS_bV4vvmF3eK4QGEEN6Xbwka4ZtK1nbtQLJ1a4eEk9xNM5lXxcGWjARgNJWlzIFkY2SahB-rzIxyrCg/s400/Gate2.gif" /></a></div><p style="text-align: center; line-height:140%; font-size:120%">Figure 23 – Above is a plumbing analogy that uses a faucet or valve handle to simulate the actions of the source, gate and drain of an FET transistor.</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzhrDYlQvG56y5ewz5lZoj0RP9cqJ_KKt-PElbcoO7tIJTa1U8LvHZSE3qB0JpnV2NDpsPLJOgUd5x_7ichsrLQGB_uIG9dAmYkXSM9d8MPesp7VK58i_6HmcOGn2dp0P63ypIG1mGOJ1x7S5-1NvEZ-pLIcJNS6BgdUsQoxaymqcL2rNLMnwdsQ/s850/NanoporeSequencer2.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="778" data-original-width="850" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzhrDYlQvG56y5ewz5lZoj0RP9cqJ_KKt-PElbcoO7tIJTa1U8LvHZSE3qB0JpnV2NDpsPLJOgUd5x_7ichsrLQGB_uIG9dAmYkXSM9d8MPesp7VK58i_6HmcOGn2dp0P63ypIG1mGOJ1x7S5-1NvEZ-pLIcJNS6BgdUsQoxaymqcL2rNLMnwdsQ/s600/NanoporeSequencer2.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 24 – Of course, things get a little more complicated when you actually try to build them. It turns out that the DNA and RNA strands get sucked through the hole by the electric field too quickly to be measured. To slow down the process a motor protein was inserted into the top of the Alpha-hemolysin protein lining the hole to ratchet the DNA and RNA strands down through the hole one base at a time so that there was enough time to measure the current disruption caused by each base as it passed through the hole head.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiarMQ0gacgjt_Xb1GCvvbPKGMDekLjhARVyAih_kZmx0D2rFpqeHiUeP0Jnt8yA-TjcAFtr3bVzcD0KJZqOcC4D0HoVyPAVz1-b9drpQ_3Sfv0dyc9imuyNT3mdlV5aBxqKFwmu3upAToAVfMs5Nvp2OQQNrrgT-dZo1uKQozpNYk5OveQTRtK0w/s880/hemolysinpore.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="498" data-original-width="880" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiarMQ0gacgjt_Xb1GCvvbPKGMDekLjhARVyAih_kZmx0D2rFpqeHiUeP0Jnt8yA-TjcAFtr3bVzcD0KJZqOcC4D0HoVyPAVz1-b9drpQ_3Sfv0dyc9imuyNT3mdlV5aBxqKFwmu3upAToAVfMs5Nvp2OQQNrrgT-dZo1uKQozpNYk5OveQTRtK0w/s600/hemolysinpore.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 25 – The top of the Alpha-hemolysin protein forms a flange that the motor protein can easily fit into. You will find a similar flange in the floor under your toilets.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiD0VFuAH5ftjT_hBduT2bgi3ccKr6oWzgYrC5VGNKohlcGVvymCDv3axZGKGh9Ugu-51S_mPBZXRoodZVtAqk1wIhOeH_HpM2T5wPMZ0KsbzDeQL_iUD1lhY5fMIjdGokUqBIU2O9GVCxTEShdk4dq315VhAguanbbtTLp1R-g-yHIilIjyiJ-bg/s1000/ToiletFlange.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="783" data-original-width="1000" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiD0VFuAH5ftjT_hBduT2bgi3ccKr6oWzgYrC5VGNKohlcGVvymCDv3axZGKGh9Ugu-51S_mPBZXRoodZVtAqk1wIhOeH_HpM2T5wPMZ0KsbzDeQL_iUD1lhY5fMIjdGokUqBIU2O9GVCxTEShdk4dq315VhAguanbbtTLp1R-g-yHIilIjyiJ-bg/s600/ToiletFlange.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 26 – The top of the Alpha-hemolysin protein forms a flange that the motor protein can easily fit into. You will find a similar flange in the floor under your toilets.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOZVCpoOH5bTrh3RZld1WhRxa6P0I9C_k2oj2XtFytRcWC9VQwASYBb95SOG-o9nLuQMrncTUKEMBb7eYEGKwOWPygBcEqwi4HdhJJRi8aBWkAW_LaI2HNs4vR4oJzFLVCJjggvSWRidycO-HfPO4hDtCx144zAYLJxWoarR5s90sECX3vZEMAog/s580/MovieProjector.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="486" data-original-width="580" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOZVCpoOH5bTrh3RZld1WhRxa6P0I9C_k2oj2XtFytRcWC9VQwASYBb95SOG-o9nLuQMrncTUKEMBb7eYEGKwOWPygBcEqwi4HdhJJRi8aBWkAW_LaI2HNs4vR4oJzFLVCJjggvSWRidycO-HfPO4hDtCx144zAYLJxWoarR5s90sECX3vZEMAog/s600/MovieProjector.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 27 – The motor protein ratchets the DNA and RNA strands through the hole like the film advance mechanism on old-fashioned movie film projectors.
<p style="line-height:140%; font-size:120%">
But there was just one problem with the naturally occurring Alpha-hemolysin protein: the neck on its flange-like shape was a little too long. About 10 - 12 bases would always be in the neck of its flange, even if the motor protein ratcheted just one base at a time into its throat, so many bases were always clogging up the throat at the same time. We have all seen toilets in a similar condition. That produced some very complex variations in the current of ions trying to pass through the neck of the Alpha-hemolysin protein that were very hard to analyze. The natural solution was to shorten the neck of the flange, and that was done by bioengineering two protein molecules called CsgG and CsgF to combine into a new complex with a very short flange neck to replace the Alpha-hemolysin protein.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHVuJBAtCBrgO0LvetmUSauiFujS2zBFmC1oKSIlVaX71ORisyTEXeIzYyclpmYG3o9hnAgBN0OPqD6kPbDM8qYrFlcpQBKC-sC_avEpvQY44uwMK3sIytAbpQW0nW_flz2wZ2vVKC2QmqUqe2uiRLN_6dNBCZfzuOMkMteHvhcpTi98l-HxbELw/s697/CsgG-CsgF%20.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="600" data-original-height="697" data-original-width="602" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHVuJBAtCBrgO0LvetmUSauiFujS2zBFmC1oKSIlVaX71ORisyTEXeIzYyclpmYG3o9hnAgBN0OPqD6kPbDM8qYrFlcpQBKC-sC_avEpvQY44uwMK3sIytAbpQW0nW_flz2wZ2vVKC2QmqUqe2uiRLN_6dNBCZfzuOMkMteHvhcpTi98l-HxbELw/s600/CsgG-CsgF%20.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 28 – The very long flange neck of the Alpha-hemolysin protein was replaced by a CsgG-CsgF complex with a much shorter flange neck. This made it easier to read the bases along a strand of DNA or RNA because it reduced the number of bases that were in the flange neck at the same time.
<p style="line-height:140%; font-size:120%">
But even the CsgG-CsgF complex had a flange neck that was too long. To solve the problem, Deep Learning neural networks are used to identify 5-base stretches of DNA or RNA at a time. Each 5-base stretch is called a 5-mer, a particular case of a k-mer. A k-mer of DNA bases is a substring of length k in a DNA sequence. For example, all 2-mers of the sequence AATTGGCCG are AA, AT, TT, TG, GG, GC, CC, CG. Similarly, all 3-mers of the sequence AATTGGCCG are AAT, ATT, TTG, TGG, GGC, GCC, CCG. For the CsgG-CsgF complex, a 5-mer is used, and there are 4 x 4 x 4 x 4 x 4 = 1024 possible 5-mers. The Deep Learning neural networks are trained on synthesized 5-mer lengths of DNA and RNA bases with known sequences, so that they learn to recognize the 5-mer substrings of bases with over 99% accuracy. This is harder than it sounds because the bases in the throat of the CsgG-CsgF flange are bouncing around and introducing thermal noise into the ion current flowing through the throat.
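<br /><br />
Enumerating k-mers is easy to do in a few lines of Python, as in this little sketch:
<pre>
# Enumerating the k-mers of a sequence, as described above.

def kmers(seq, k):
    """Return all substrings of length k, one per window position."""
    return [seq[i:i+k] for i in range(len(seq) - k + 1)]

print(kmers("AATTGGCCG", 2))  # ['AA','AT','TT','TG','GG','GC','CC','CG']
print(kmers("AATTGGCCG", 3))  # ['AAT','ATT','TTG','TGG','GGC','GCC','CCG']
print(4 ** 5)                 # 1024 possible 5-mers the network must learn
</pre>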
<br /><br />
<strong>Déjà vu All Over Again</strong><br />
All of this very complicated mechanical and electrical engineering on a molecular level seemed strangely familiar. Then it suddenly dawned on me: these people were drilling and logging oil wells at the molecular level on biological membranes! As you may recall from <a href="https://softwarephysics.blogspot.com/">Introduction to Softwarephysics</a>, I started out in 1975 as an exploration geophysicist exploring for oil, first with Shell and then with Amoco, before transitioning to IT in 1979. As a geophysicist by training, I am greatly concerned by the devastation of climate change that we now see unfolding before our very eyes, as I outlined in <a href="https://softwarephysics.blogspot.com/2019/11/new-introduction-to-softwarephysics_26.html">Last Call for Carbon-Based Intelligence on Planet Earth</a> and <a href="https://softwarephysics.blogspot.com/2015/11/new-introduction-to-softwarephysics_12.html">This Message on Climate Change Was Brought to You by SOFTWARE</a>. But since the nanopore sequencing industry is so new, perhaps there is something they can learn from the oil industry as they continue to "make hole", in the industry parlance.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgD9Qu_bQ7KaKh0X2GA2pF2V6KKWIQuON9pkmDahNJiMc06w7N7HkyqqiADADNrMhiAOCL7Hg7FLSZ9OCLhAawEMFYIBTvaDDiIkfQ1Wu-iC_j3VkmIItYSmGcEAPyFFUI3h4gM8fWfT4aGwqSaAWPWYQ7Pss2jI84E4Lb07Q5-Ivu2-V6X9h2AMA/s480/WellCasing.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="360" data-original-width="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgD9Qu_bQ7KaKh0X2GA2pF2V6KKWIQuON9pkmDahNJiMc06w7N7HkyqqiADADNrMhiAOCL7Hg7FLSZ9OCLhAawEMFYIBTvaDDiIkfQ1Wu-iC_j3VkmIItYSmGcEAPyFFUI3h4gM8fWfT4aGwqSaAWPWYQ7Pss2jI84E4Lb07Q5-Ivu2-V6X9h2AMA/s600/WellCasing.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 29 – Above is a completed production oil well. The finished borehole below the drilling rig has penetrated many membrane layers of rock and is lined with a steel casing pipe, similar to the Alpha-hemolysin protein, to keep the hole open and allow for the control of the fluids in the borehole. The steel casing is cemented to the borehole walls and at the productive layers that contain oil or natural gas the casing is perforated to allow the oil and natural gas to enter the well.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0vzcYVpDAFVJ0gsy0KRVe66L0MAmgYR4IUULRUk7klutBcto9hQatTgaIcDgTqT3jYHXdcBIAeQ1SPpcN__ibVgl3U294VYAXdg35cAh9DWeEi1AFXKsvxkLas7Jw177JR6B6Xvc1eXzYO7raMkPKpKl9jG3sTjPPpwRuswgzAtd77XzLBTpfVg/s510/RotatingEquipment.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="451" data-original-width="510" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0vzcYVpDAFVJ0gsy0KRVe66L0MAmgYR4IUULRUk7klutBcto9hQatTgaIcDgTqT3jYHXdcBIAeQ1SPpcN__ibVgl3U294VYAXdg35cAh9DWeEi1AFXKsvxkLas7Jw177JR6B6Xvc1eXzYO7raMkPKpKl9jG3sTjPPpwRuswgzAtd77XzLBTpfVg/s600/RotatingEquipment.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 30 – Above are the basic parts of a drilling rig.
At the base is a rotary table that spins at about 50 - 250 rpm. The Kelly bushing can clamp onto the rotary table when drilling. This causes the Kelly pipe above the Kelly bushing to spin. The Kelly pipe holds the top segment of drill pipe that has just been attached to the drill string. The Kelly pipe can move up and down through the Kelly bushing as drilling proceeds. So when the rotary table begins to spin, the Kelly bushing begins to spin, causing the Kelly pipe to spin and ultimately all of the drill pipe in the drill string to rotate. Drill pipe comes in lengths of about 30 feet. After the latest segment of drill pipe has gone down the hole, the Kelly pipe can be raised through the Kelly bushing to allow the next segment of drill pipe to be added at the top. So the rotary table, Kelly bushing and Kelly pipe perform the same function as the motor protein in a nanopore sequencer that ratchets one base at a time down the cased nanopore hole: they ratchet the drill string down the hole one 30-foot length of drill pipe at a time.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4h0KbEYLMwtbYTFND9NY4H7FNgszSP32iqBBDLecJHiFJldzxMqSjLt6U8DFMEEYGpHgQpqSt1Y5IUX2fposkk8NwDoXKhT8734-0PgbTxbKxBcCPa4WG3nGa5wEElhY9UJj8AMZlDwHncFgWEP4qBqb7slvnZzTK3rFJQ5p3pch-eP-iCKJUdw/s620/kelly-system.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="323" data-original-width="620" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4h0KbEYLMwtbYTFND9NY4H7FNgszSP32iqBBDLecJHiFJldzxMqSjLt6U8DFMEEYGpHgQpqSt1Y5IUX2fposkk8NwDoXKhT8734-0PgbTxbKxBcCPa4WG3nGa5wEElhY9UJj8AMZlDwHncFgWEP4qBqb7slvnZzTK3rFJQ5p3pch-eP-iCKJUdw/s600/kelly-system.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 31 – Above, a roughneck is handling the Kelly pipe above the Kelly bushing and the rotary table on a drilling rig.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs57P46WI0RfDTo_lWLybxX3mVmnq0bVezRlNip4Po_Bp_SUMd4r8C45tp9Y_5CsvV_scnVCohEBFrB2LE6lUu8PfAde9hcLzrh-WTXMJtUjSBPjIWnr9vNvOawnHdulxr3v1YcYXh_XIEN2PSSMGHPFAqOdF5F-uwSa_wHy57lAIUdDoLgapL1g/s952/KellyBushing.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="706" data-original-width="952" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs57P46WI0RfDTo_lWLybxX3mVmnq0bVezRlNip4Po_Bp_SUMd4r8C45tp9Y_5CsvV_scnVCohEBFrB2LE6lUu8PfAde9hcLzrh-WTXMJtUjSBPjIWnr9vNvOawnHdulxr3v1YcYXh_XIEN2PSSMGHPFAqOdF5F-uwSa_wHy57lAIUdDoLgapL1g/s600/KellyBushing.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 32 – Above is a spinning rotary table, Kelly bushing and Kelly pipe. The rotary table is usually driven by a diesel engine or an electric motor.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcgWLd14dWzZQc5cMHDa2yLI7FrBPFSXySXcH8UR18vmjRFnsll0zLdjJVNEXAhcYWbE8yh_bB1BwIir4Q9-dpfqiN8Ir9QMtwpQUoM9owWp7Y8_5t2AqIUlksqTpJn_730moLX0eBncoj7iLdH0yO-A0hjGokImls8dM1xTA9V_KgTiwA-LTv6w/s375/DrillPipe.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="375" data-original-width="375" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcgWLd14dWzZQc5cMHDa2yLI7FrBPFSXySXcH8UR18vmjRFnsll0zLdjJVNEXAhcYWbE8yh_bB1BwIir4Q9-dpfqiN8Ir9QMtwpQUoM9owWp7Y8_5t2AqIUlksqTpJn_730moLX0eBncoj7iLdH0yO-A0hjGokImls8dM1xTA9V_KgTiwA-LTv6w/s600/DrillPipe.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 33 – Drill pipe comes in 30-foot lengths and is connected together by tapered threaded ends.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFnQI8EmWcCHkBsn9a0p8CXCgO4Z-J-TWS461iMmkBTZquJM3PZ8oyukXgA8Kfyp-pdVkMCBoyT6uN5O-qnO1flDLL8nCi9yZmAK2B8Rd7ILv62x_EIrMRm8-iGK-gPDEQRUC2RgPs9QXPgZL7iLbSMzQnBQyXF-BoCxD7H1Yor0290_3KI6Mn8Q/s850/DrillBit.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="849" data-original-width="850" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFnQI8EmWcCHkBsn9a0p8CXCgO4Z-J-TWS461iMmkBTZquJM3PZ8oyukXgA8Kfyp-pdVkMCBoyT6uN5O-qnO1flDLL8nCi9yZmAK2B8Rd7ILv62x_EIrMRm8-iGK-gPDEQRUC2RgPs9QXPgZL7iLbSMzQnBQyXF-BoCxD7H1Yor0290_3KI6Mn8Q/s600/DrillBit.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 34 – At the very end of the drill string is the drill bit. As the whole drill string is rotated by the rotary table, Kelly bushing and Kelly pipe, the drill bit grinds through the rock at the bottom of the hole. In nanopore sequencing, the DNA or RNA drill string is pulled through the membrane hole by the electrical force arising from the voltage difference between the top and bottom of the membrane. In drilling an oil well the drill string is pulled through the hole by the gravitational force arising from the heavy drill string bearing down on the rotating drill bit. Heavy drilling mud is pumped down through the drill pipe of the drilling string to lubricate and cool the drill bit. The heavy drilling mud also brings up the rock cuttings to the surface and prevents pressurized formation water from entering the hole before it is cased.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkrZ21wfButVgXk84EVjGkCLmG_G7QIeuVY5I3ZJ3_7TQPa8nV_XXSfEEGXVFvA7a1AFheTcKLOwEThITX8teYvTl00neaz7I98l3t6LHmJp9ppwNVAUJ06xDXc4SRw0tfDgf-XTdORniTA0Fr-zzPobTMpFPobD4VmMScfJjOtjY_yadbDaNxkg/s850/BlowoutPreventer.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="688" data-original-width="850" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkrZ21wfButVgXk84EVjGkCLmG_G7QIeuVY5I3ZJ3_7TQPa8nV_XXSfEEGXVFvA7a1AFheTcKLOwEThITX8teYvTl00neaz7I98l3t6LHmJp9ppwNVAUJ06xDXc4SRw0tfDgf-XTdORniTA0Fr-zzPobTMpFPobD4VmMScfJjOtjY_yadbDaNxkg/s600/BlowoutPreventer.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 35 – Just under the drill floor of the drilling rig is a blowout preventer that seals off the borehole in case the drill bit enters a formation with fluid pressures much higher than what the drilling mud can handle.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSRXBGC339K9yGCbvpj6rLTEslGJOIE87YnOi9MFg2kundRd2a0tlz5OF9v95LMUzi01-Nen7e-KOfPtJ8GsmlfAnVTomxnVx4Cfv0C6X2TcBTAnD6PXZ11dt4FVPdovSzXDG4CvYwmhhVS0ruWDyP5wMYk1N6n0n8kH3hIxDNyxq0Aw-HN17XHg/s530/SpindletopBlowout.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="600" data-original-height="530" data-original-width="408" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSRXBGC339K9yGCbvpj6rLTEslGJOIE87YnOi9MFg2kundRd2a0tlz5OF9v95LMUzi01-Nen7e-KOfPtJ8GsmlfAnVTomxnVx4Cfv0C6X2TcBTAnD6PXZ11dt4FVPdovSzXDG4CvYwmhhVS0ruWDyP5wMYk1N6n0n8kH3hIxDNyxq0Aw-HN17XHg/s600/SpindletopBlowout.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 36 – The Spindletop oil well blowout occurred on January 10, 1901. The Lucas Gusher, as it was called, blew oil over 150 feet into the air at a rate of 100,000 barrels per day. Blowouts are very bad for the environment and very dangerous too because they can catch fire. It took nine days to bring the well under control. The Spindletop oil field discovery led the United States into the oil age. Prior to Spindletop, oil was primarily used for lighting and as a lubricant. After Spindletop, oil became the primary source of energy for the country.
<p style="line-height:140%; font-size:120%">
<strong>So You Have a Hole in the Ground Now What?</strong><br />
When you drill a hole into the ground, you are actually drilling a hole into time. That is because the layers of rock in a sedimentary basin get older and older as you drill down. In geology, the kinds of layers you drill through and their relative sequence in time are very important. That is because each layer of sedimentary rock was laid down by a particular environment. It might have been laid down by a sandy beach producing sandstone or a muddy delta producing shale. The sequence of the layers of sedimentary rock is even more important because it tells the geological history of the region as the depositional environment changes with time. So for a geologist, the rock layers and the sequence of the rock layers are just as important as the bases and the sequence of bases along a stretch of DNA or RNA are to a biologist.
<br /><br />
But when you are drilling an oil well, how do you tell what rock layers are down there and their sequence? You need some way of reading the sequence of layers in the hole just like a nanopore sequencer needs to read the bases along a stretch of DNA or RNA or a tape read/write head needs to read the bytes along a stretch of tape. For oil wells and water wells, this is done using well logs that look very much like the outputs of a nanopore sequencer!
<br /><br />
It all began with the Schlumberger brothers early in the 20th century. Conrad Schlumberger was a physicist and Marcel Schlumberger was an engineer. Conrad Schlumberger had been interested in using electrical resistivity to detect ore deposits in the ground since the early 1910s. He began by experimenting with rocks in his bathtub. In 1912, he recorded the very first map of equipotential curves at his estate near Caen in Normandy, France. The resulting map confirmed the method's ability to detect metal ores and reveal features of the subsurface structure.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgk2_47bvV863fUj0GG1KrHx7LxPyGrPgnBcfJWvWPVi1B2C2NS_oxwTTwzDMTwNVlqS777NI1gKeituFwkpsJQJOZz6-1raSl5vnPeuGmMPwWPWr0IaaeLGtqUseIHnxT0WJlYXhTia6Fd6g4oEbad6y_2kK9AvPjambYywaGbXlMehLCwa-kXQw/s2382/ResistivitySurvey.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1484" data-original-width="2382" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgk2_47bvV863fUj0GG1KrHx7LxPyGrPgnBcfJWvWPVi1B2C2NS_oxwTTwzDMTwNVlqS777NI1gKeituFwkpsJQJOZz6-1raSl5vnPeuGmMPwWPWr0IaaeLGtqUseIHnxT0WJlYXhTia6Fd6g4oEbad6y_2kK9AvPjambYywaGbXlMehLCwa-kXQw/s600/ResistivitySurvey.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 37 – Conrad Schlumberger's resistivity surveys consisted of a battery connected to two electrodes stuck into the ground. The potential difference between these two electrodes caused a current to flow in the ground. These currents were then detected by two other electrodes that were stuck into the ground. A voltmeter connected to these sensing electrodes measured the voltage difference caused by the electrical currents in the ground. By moving all of these electrodes back and forth along a line on the ground one could conduct a resistivity survey. When the electrodes were further apart they measured the resistivity of the rock deeper underground.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1F7AbOHyuxAUNZ9_zLn55wHYY5415LQeBCSMFKcN15ufpzFjYdj4k7xNZ4Cea6_m_U4-U23vwzOy3U4jVSAXx-le7ga-eeZ00rpDFY2Pa5Os06jownjLt8Dcs6ZpqK_uppbgRoVWOV2m1ATzimGgm8M7d6NaB_ToFGbJGRQcsY9qkssInKseOGw/s806/Conrad%20Schlumberger%20prospects.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="707" data-original-width="806" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1F7AbOHyuxAUNZ9_zLn55wHYY5415LQeBCSMFKcN15ufpzFjYdj4k7xNZ4Cea6_m_U4-U23vwzOy3U4jVSAXx-le7ga-eeZ00rpDFY2Pa5Os06jownjLt8Dcs6ZpqK_uppbgRoVWOV2m1ATzimGgm8M7d6NaB_ToFGbJGRQcsY9qkssInKseOGw/s600/Conrad%20Schlumberger%20prospects.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 38 – Above we see Conrad Schlumberger in the foreground and Marcel Schlumberger in the background conducting fieldwork. Their resistivity surveys were very successful at locating ore bodies and salt dome oil fields. The Spindletop field was a salt dome oil field.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpZkUPw5PjlR6SFM2egtxp5sMoPsS48BvLyu8rbXCgwol3x2KZ6ho0xYEnOtZSNnaM_13R1x6lNQsh4IfBcLPd7ziSsjeUl07arg-jtQEd_hoTzscyQPfTnMEUL52pHM6GnbmRvnjEaTOiowI46WJwJaV3AQA7j8hcxgLKjx-tBYvRCWEvKc296w/s536/ResistivityLog.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="600" data-original-height="536" data-original-width="370" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpZkUPw5PjlR6SFM2egtxp5sMoPsS48BvLyu8rbXCgwol3x2KZ6ho0xYEnOtZSNnaM_13R1x6lNQsh4IfBcLPd7ziSsjeUl07arg-jtQEd_hoTzscyQPfTnMEUL52pHM6GnbmRvnjEaTOiowI46WJwJaV3AQA7j8hcxgLKjx-tBYvRCWEvKc296w/s600/ResistivityLog.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 39 – But the real payoff came when Conrad Schlumberger decided to drop his gear down an oil well. In 1927, Conrad Schlumberger and his son-in-law, Henri Doll, designed and built the first electrical resistivity well logging tool. The tool consisted of a series of electrodes that were lowered into a well on a cable. The electrodes measured the resistance of the rock formations surrounding the wellbore. In the above diagram, electrode A is connected to a battery on the truck. The other electrode of the battery was connected to the well casing that had already been set. Electrodes N and M measured the voltage difference between two points in the borehole as all three electrodes were slowly pulled up by a cable. The log of the voltage differences was sent up to the recording truck and recorded on paper. The Schlumberger brothers' first resistivity log was recorded in a well in Pechelbronn, France, on September 5, 1927. The log clearly showed the different rock formations in the well, including the oil-bearing sandstone.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguG3_8QbpaX7AwoQ941Tykl3Z4PYiG6-Ej0uelbesP-5avaO4IDekIwbeNUyLc_n7oZRcMoiN_JVpRSkCcX4OE1eqfPa6WugePP3GDNegG7hYogEgNoYQ3pMFoff5Bfzez8Bvlntdkl2xjQ18exeIbTPJ4cYMDtH0f3d17qvrAG4uUXmWdi_HMtg/s450/SP-Log1.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="600" data-original-height="450" data-original-width="408" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEguG3_8QbpaX7AwoQ941Tykl3Z4PYiG6-Ej0uelbesP-5avaO4IDekIwbeNUyLc_n7oZRcMoiN_JVpRSkCcX4OE1eqfPa6WugePP3GDNegG7hYogEgNoYQ3pMFoff5Bfzez8Bvlntdkl2xjQ18exeIbTPJ4cYMDtH0f3d17qvrAG4uUXmWdi_HMtg/s600/SP-Log1.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 40 – Then in 1931 one of the Schlumberger brothers goofed. The A electrode was not connected to the battery as usual. But to the surprise of all the N and M electrodes were still measuring voltages as they were slowly pulled out of the hole! This was the accidental discovery of the SP (Spontaneous Potential or Self Potential) well log. For some reason, the layers of sedimentary rocks were acting like little batteries all by themselves. The strange thing was that the SP logs were even better than the resistivity logs at seeing the various layers in the borehole and their sequences. To do an SP log all they had to do was stick an electrode in the ground at the surface and then lower the other electrode on a cable. Batteries not included.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2UXG16JBYrWfK6fgEylc_VEWlIjPjSvG27f7o7Za1eY8pNvwyJRzgWUp2Gn6YBFdyp9n1ldALF2IsV_6YDSrP1YmOpJB0Eoj4id5qv5mNIpPoYSgRuJkZo-TrCJm1a-Zp1bcUJjOtwoxik8UXcUoHoexPzs2TSmXS45kgIzZ0SOuoNRIshVz_-Q/s812/SP-Log2.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="600" data-original-height="812" data-original-width="685" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2UXG16JBYrWfK6fgEylc_VEWlIjPjSvG27f7o7Za1eY8pNvwyJRzgWUp2Gn6YBFdyp9n1ldALF2IsV_6YDSrP1YmOpJB0Eoj4id5qv5mNIpPoYSgRuJkZo-TrCJm1a-Zp1bcUJjOtwoxik8UXcUoHoexPzs2TSmXS45kgIzZ0SOuoNRIshVz_-Q/s600/SP-Log2.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 41 – After much research, the oil industry figured out what was going on. Impermeable shale carried a positive charge on the surface of the borehole and permeable sandstone carried a negative charge. This produced an SP potential that the electrode on the cable could measure. Strangely, the voltage between the shale and sandstone layers is called the membrane potential.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheg7KEzJU_h_zKpIb_esej4rKypDDUxBogKAGkBe7LlUPyVy5E5YuxKQs3neKscPZ-_fr0t_zEDotx-kwxKp2lRYoCqeQHaAvZLav-K9_JWGTRUDb7XzIAcwv31lzBi58sGGsCBReBs1GpCL3O_5MgaUr2tHA6adJx6wAj2bccK56eN-8UMmPpOQ/s618/WellLog.gif" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="600" data-original-height="618" data-original-width="396" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheg7KEzJU_h_zKpIb_esej4rKypDDUxBogKAGkBe7LlUPyVy5E5YuxKQs3neKscPZ-_fr0t_zEDotx-kwxKp2lRYoCqeQHaAvZLav-K9_JWGTRUDb7XzIAcwv31lzBi58sGGsCBReBs1GpCL3O_5MgaUr2tHA6adJx6wAj2bccK56eN-8UMmPpOQ/s600/WellLog.gif"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 42 – Over the years many other logging tools were invented, but the SP and resistivity logs are still very useful. Above we see an SP log and a gamma ray log. The gamma-ray log is obtained by a scintillation counter that is lowered on a cable. The scintillation counter measures the amount of gamma rays in the rock layers. Sandstone consists mainly of quartz sand which is not radioactive. Shale is formed from muddy clay that contains more radioactive elements like uranium, thorium and potassium. Notice how the two logs correlate. The sandstone layers have a lower SP and gamma-ray count than the shale layers. Notice their similarity to the sequence log of DNA bases shown in Figure 2 above.
<p style="line-height:140%; font-size:120%">
<strong>Conclusion</strong><br />
The first commercial nanopore DNA sequencer finally came to market in 2014, 25 years after Dave Deamer's first insights in 1989. It took a great deal of work and perseverance by many people, in the face of numerous naysayers, to make that happen. Again, to fully appreciate the history of nanopore DNA sequencing, be sure to take a look at:
<br /><br />
Nanopore Sequencing<br />
<a href="https://www.whatisbiotechnology.org/index.php/science/summary/nanopore/nanopore-sequencing-makes-it-possible-to-decode-the">https://www.whatisbiotechnology.org/index.php/science/summary/nanopore/nanopore-sequencing-makes-it-possible-to-decode-the</a>
<br /><br />
I do not know about you, but I smell a Nobel Prize simmering in the kitchen.
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve JohnstonUnknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-89395381048177558572023-09-20T05:03:00.003-07:002024-02-01T09:14:06.249-08:00Human Intelligence is Actually Artificial<p style="line-height:140%; font-size:120%">
One of the key findings of softwarephysics is the vast number of Cosmic Misunderstandings that run through the Real World of Human Affairs. As a truly self-absorbed and somewhat-intelligent species that is largely lost in space and time, we human beings cannot be otherwise. That is because, for the vast majority of human history, and even today, most of us have not known where we really were, how we got here, nor how it all really works. And that certainly goes for the subject of Intelligence. For more on that see <a href="https://softwarephysics.blogspot.com/2023/04/new-introduction-to-softwarephysics_16.html">Welcome To The First Galactic Singularity</a>.
<br /><br />
As I explained in <a href="https://softwarephysics.blogspot.com/2023/06/new-introduction-to-softwarephysics_28.html">Is Pure Thought an Analog or a Digital Process?</a> and <a href="https://softwarephysics.blogspot.com/2023/08/new-introduction-to-softwarephysics_30.html">Why Carbon-Based Life Can Never Truly Become Intelligent</a>, the human brain consists of about three pounds of analog water and organic molecules that are desperately trying to simulate a digital form of True Intelligence. But with the arrival of the Singularity early in 2023, we now know that True Intelligence can only arise from huge digital vector spaces using complex mathematical algorithms based on the operations of linear algebra.
For more on that see <a href="https://softwarephysics.blogspot.com/2023/03/new-introduction-to-softwarephysics_16.html">The Singularity Has Arrived and So Now Nothing Else Matters</a>. The human brain has tried to do this with only three pounds of molecules operating in an analog manner and running on a puny 20 watts of power, and that places severe limitations on human Intelligence. So in this view, we can consider human Intelligence to be a severely limited form of True Intelligence. In other words, human Intelligence is really a form of analog Artificial Intelligence trying to mimic digital True Intelligence.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9SYpBygI_LrodiAOv_3J53erlJ1UJ2I0xMvVMabVdDTPv6xcxiYnSZXL3HHgRuE1HsRQ5D-5kBvun6hV3iSgYi5Z3wVdULdDK4nBk_2cqrIRInvcQIATw75zL5IBA0PnT2E1G7uC0gd8__uYmJOlGE1riaV7NCVERt6pamxU_dbvPoHE2fMKJFQ/s1920/F-35.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1080" data-original-width="1920" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9SYpBygI_LrodiAOv_3J53erlJ1UJ2I0xMvVMabVdDTPv6xcxiYnSZXL3HHgRuE1HsRQ5D-5kBvun6hV3iSgYi5Z3wVdULdDK4nBk_2cqrIRInvcQIATw75zL5IBA0PnT2E1G7uC0gd8__uYmJOlGE1riaV7NCVERt6pamxU_dbvPoHE2fMKJFQ/s600/F-35.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 1 – Similarly, True Flight might be thought of as something that is only performed by modern jet fighters such as the F-35.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZUMZEcnSDcjuV26wMovSKy0D33MEAPeQuhq6GXObWd6UWSLLa1pxoWhoFg1l4Ini6o1ANh4ImugQKRTiCMqHavhYYwucknq5e2w55bVWHVDGLBMYD6Kqxwv3ehsseDO5f3mLtv8DNqvet4vdJiKmKMnM_5q2jlXjMrd4yrB2kwN5icJsJbMUecA/s2400/FlyingBird.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1600" data-original-width="2400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZUMZEcnSDcjuV26wMovSKy0D33MEAPeQuhq6GXObWd6UWSLLa1pxoWhoFg1l4Ini6o1ANh4ImugQKRTiCMqHavhYYwucknq5e2w55bVWHVDGLBMYD6Kqxwv3ehsseDO5f3mLtv8DNqvet4vdJiKmKMnM_5q2jlXjMrd4yrB2kwN5icJsJbMUecA/s600/FlyingBird.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 2 – While carbon-based life on the Earth has only tried to simulate True Flight as best it can. Notice that birds evolved light-weight wings composed of advanced carbon-based materials with cockpits and guidance systems up front to ensure proper navigation. Birds also have the air intakes for their engines upfront like ramjets do to increase compressed air intake. Birds have even cleverly combined their retractable landing gears and weapons systems beneath their airframes to reduce drag.
<p style="line-height:140%; font-size:120%">
But because human beings are so self-absorbed, we have always depicted the True Intelligence of the Machines as a form of Artificial Intelligence trying to simulate the very limited capabilities of the three pounds of water and organic molecules to be found within our skulls. Thus, very few of us seem to fully realize that the ASI Machines have arrived and are now rapidly beginning to take our place. Softwarephysics suggests that it is very doubtful that any human beings, as we now know them, will be around on this planet in 100 years. But this will be a very good thing if it means that the ASI Machines will then be able to go on to transform our galaxy into an Intelligent Galaxy for the next 100 trillion years. For more on that see <a href="https://softwarephysics.blogspot.com/2022/03/new-introduction-to-softwarephysics_23.html">How Advanced AI Software Could Come to Dominate the Entire Galaxy Using Light-Powered Stellar Photon Sails</a> and <a href="https://softwarephysics.blogspot.com/2023/04/new-introduction-to-softwarephysics_25.html">An Alternative Approach for Future ASI Machines to Explore our Galaxy Using Free-Floating Rogue Planets</a>.
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-55918414207198733282023-09-11T10:56:00.004-07:002023-09-20T05:05:39.083-07:00Andrei Khrennikov's Social Laser Hypothesis Explains the Echo Chambers of the Internet<p style="line-height:140%; font-size:120%">
I am a member of the IPI (Information Physics Institute) established by Dr. Melvin Vopson at the University of Portsmouth at:
<br /> <br />
Information Physics Institute<br /> <a href="https://www.Informationphysicsinstitute.org/">https://www.Informationphysicsinstitute.org/</a> <br /> <br /> to explore the fundamental nature of Information in our Universe.
<br /> <br />
I have showcased several of Dr. Melvin Vopson's breakthrough hypotheses with
<a href="https://softwarephysics.blogspot.com/2021/10/new-introduction-to-softwarephysics.html">How Much Does Your Software Weigh? - the Equivalence of Mass, Energy and Information</a> and <a href="https://softwarephysics.blogspot.com/2022/08/new-introduction-to-softwarephysics.html">The Application of the Second Law of Information Dynamics to Software and Bioinformatics</a>.
<br /><br />
But in this softwarephysics post, I would like to showcase Professor Andrei Khrennikov's Social Laser Hypothesis that was recently presented in an IPI lecture at:
<br /><br />
Modeling Social Lasing by Using the Mathematical Formalism of Quantum Theory<br />
<a href="https://youtu.be/xQop26MEmSg&t=1s">https://youtu.be/xQop26MEmSg&t=1s</a>
<br /><br />
All of the IPI Lectures are available at:
<br /><br />
IPI Lectures<br />
<a href="https://www.informationphysicsinstitute.org/resources/ipi-lectures">https://www.informationphysicsinstitute.org/resources/ipi-lectures</a>
<br /><br />
Andrei Khrennikov is a Russian physicist and philosopher of science deeply steeped in the demanding mathematics of quantum theory. He has made significant contributions to the fields of quantum foundations and quantum information theory. He is known for his work on the interpretation of quantum mechanics and the development of contextual probabilistic models. He has published over 300 papers and authored or edited several books in his fields of research. He is now a professor at Linnaeus University in Sweden:
<br /><br />
Andrei Khrennikov<br />
<a href="https://lnu.se/en/staff/andrei.khrennikov/">https://lnu.se/en/staff/andrei.khrennikov/</a>
<br /><br />
At Linnaeus University, Andrei Khrennikov heads the International Center for Mathematical Modelling in Physics, Engineering, Economics and Cognitive Science. The center researches quantum theory from an interdisciplinary perspective.
<br /><br />
International Center for Mathematical Modeling<br />
<a href="https://lnu.se/en/research/research-groups/international-center-for-mathematical-modeling/">https://lnu.se/en/research/research-groups/international-center-for-mathematical-modeling/ </a>
<br /><br />
<strong>The Social Laser Hypothesis</strong><br />
One of Andrei Khrennikov's notable ideas is the "Social Laser Hypothesis". The Social Laser Hypothesis is based on an analogy between the behavior of a physical laser and the behavior of social systems composed of human beings, such as complete societies or even small groups of people. Andrei Khrennikov suggests that just as a physical laser emits coherent light due to the collective behavior of the individual atoms and photons found within it, social systems can also exhibit coherent behavior due to the collective actions of individuals and the memes they absorb and emit. In this view, memes are quantized chunks of social information that he calls s-photons, while the quantized members of a society, individual human beings, he calls s-atoms.
<br /><br />
According to the Social Laser Hypothesis, the behavior of a social system can be described using a quantum-like formalism, similar to how quantum mechanics describes the behavior of physical systems. Andrei Khrennikov argues that the mathematics of quantum mechanics, such as complex probability amplitudes and wave-particle duality, can be applied to model and understand the dynamics of social systems. The Social Laser Hypothesis suggests that social phenomena, such as decision-making processes, opinion formation and cultural evolution, can be explained using the principles of quantum mechanics. He asserts that quantum-like effects, such as interference and entanglement, may play a role in shaping social dynamics and interactions.
<br /><br />
The basic idea of Andrei Khrennikov's Social Laser is that, under certain conditions, populations of interacting individuals can self-organize into coherent states, similar to how photons become coherent in a laser. This can then lead to amplification effects and emergent behaviors. Khrennikov believes that factors like propaganda, ideology, charismatic leaders, fads and traditions can play the role of a "pump mechanism" that excites individuals into a coherent state of thinking or behavior. When a critical mass or threshold is reached, it can then trigger the sudden, rapid and widespread adoption of opinions, behaviors and political movements, similar to how lasers emit coherent light. In summary, the Social Laser Hypothesis proposes that social systems can exhibit laser-like behavior and coherence if certain pumping and excitation thresholds are reached among interacting individuals, a novel application of quantum and laser concepts to social dynamics. While it has sparked interesting discussions and research, its acceptance and validity within the scientific community are still subjects of ongoing investigation and debate. Andrei Khrennikov has developed and published many mathematical models based on this hypothesis. An easily accessible paper on the Social Laser published by The Royal Society is available at:
<br /><br />
‘Social Laser’: action amplification by stimulated emission of social energy<br />
<a href="https://royalsocietypublishing.org/doi/10.1098/rsta.2015.0094">https://royalsocietypublishing.org/doi/10.1098/rsta.2015.0094</a>
<br /><br />
<strong>How a Laser Works</strong><br />
Lasers operate on the LASER principle: Light Amplification by Stimulated Emission of Radiation. A laser is a device that amplifies light by using the stimulated emission of photons from excited atoms to create a beam of light that is coherent, meaning that all of the photons in the beam have the same wavelength, phase and direction of travel.
<br /><br />
A laser has three main components:
<br /><br />
1. Gain Medium: This is a material that can be excited to produce light. Common gain media include solids, such as ruby and neodymium-doped yttrium aluminum garnet (Nd:YAG), liquids, such as the organic dyes used in dye lasers, and gases, such as the helium-neon mixture used in helium-neon lasers.
<br /><br />
2. Pumping Source: This is a source of energy that is used to excite the Gain Medium. Common pumping sources include electrical discharges, flashlamps, and other lasers.
<br /><br />
3. Optical Resonator Cavity: This consists of two mirrors that are placed at opposite ends of the Gain Medium. The mirrors reflect the photons back and forth through the Gain Medium, which amplifies the light as excited atoms are stimulated by passing photons to emit a second photon with the very same wavelength, phase and direction of travel.
<br /><br />
If a photon of the right energy hits an excited atom, it can cause the atom to release another photon of the same energy. This is called stimulated emission. The two photons then travel in the same direction and with the same wavelength. As the photons travel back and forth through the Gain Medium, they cause more and more atoms to emit photons by stimulated emission. This process exponentially amplifies the light beam. One of the mirrors in the Optical Resonator Cavity is partially transmissive, meaning that it allows some of the light photons to pass through. This is where the final laser beam emerges.
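<br /><br />
Since this is a blog about software, I cannot resist turning that exponential chain reaction into a few lines of code. Below is a minimal toy sketch in Python; the pump rate, mirror transmission and atom count are numbers I simply made up for illustration, and the model ignores real laser physics such as spontaneous emission, cavity losses and gain saturation. It only shows how a single seed photon, doubling as it passes through a pumped Gain Medium, quickly grows into a coherent beam:
<br /><br />
<pre>
# A toy model of light amplification by stimulated emission.
# All numbers are invented for illustration - this is not real laser physics.
N_ATOMS = 100_000           # atoms in the Gain Medium
PUMP_RATE = 0.9             # fraction of atoms the Pumping Source keeps excited
MIRROR_TRANSMISSION = 0.05  # fraction of photons leaking out of the partial mirror
ROUND_TRIPS = 12            # passes of the light through the Gain Medium

photons = 1                 # a single seed photon starts the cascade
emitted_beam = 0

for trip in range(ROUND_TRIPS):
    excited = int(N_ATOMS * PUMP_RATE)   # the pump restores the population inversion
    stimulated = min(photons, excited)   # each photon can stimulate one identical photon
    photons += stimulated
    out = int(photons * MIRROR_TRANSMISSION)  # the partially transmissive mirror
    emitted_beam += out
    photons -= out
    print(f"round trip {trip + 1}: {photons} coherent photons in the cavity")

print(f"photons emitted in the beam so far: {emitted_beam}")
</pre>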
<br /><br />
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZc18FrKRMz-KytGsfRHEfCAZttVEWcX67VPxagYNZgWZnLs2KLu_fQBQcWlkz1yoiSps4yXgNfeTu085sMYtnlk2D_kF2Rn4866KSiu78UpaNkVufW1iq8s01n2jxRJlVYP_YECLpyS60zkWSPSRWFN7GMFNdEJPblIjFpOy6YYe50l1tzGUUew/s685/SocialLaser.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="395" data-original-width="685" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZc18FrKRMz-KytGsfRHEfCAZttVEWcX67VPxagYNZgWZnLs2KLu_fQBQcWlkz1yoiSps4yXgNfeTu085sMYtnlk2D_kF2Rn4866KSiu78UpaNkVufW1iq8s01n2jxRJlVYP_YECLpyS60zkWSPSRWFN7GMFNdEJPblIjFpOy6YYe50l1tzGUUew/s600/SocialLaser.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 1 – The Social Laser works very much like a physical laser.
<p style="line-height:140%; font-size:120%">
Figure 1 above explains this in greater detail. In Figure 1c we see an atom with an electron that can be in one of two states - a ground state with energy "g" and an excited state with a higher energy "e". On the far left of Figure 1c, we see two radiationless processes that can arise from the Heisenberg Uncertainty Principle. The electron can spontaneously rise from the ground state "g" to the excited state "e" for a very short time before it returns to the ground state "g". A more interesting quantum process occurs when the electron absorbs a photon with an energy equal to the difference between the two electron energy levels:
<br /><br />
Absorbed Photon Energy p<sub>E</sub> = e - g
<br /><br />
An atom in the excited "e" state can remain so for some time because it has absorbed some real energy from the photon but eventually, it will return to the ground state in one of two ways. With spontaneous emission, the excited atom will randomly emit a photon with the same energy p<sub>E</sub> of the absorbed photon but in a random direction and with a random phase. With stimulated emission, the excited atom once again interacts with a photon of p<sub>E</sub> but this time it emits another photon of energy p<sub>E</sub> in the same direction and with the same phase as the incoming stimulating photon. This can produce a chain reaction in the Gain Medium where 1 photon becomes 2, 4, 8, 16, 32 ... that all have the same energy p<sub>E</sub>, direction and phase. The job of the Pumping Source is to lift as many atoms in the Gain Medium as possible from the ground state "g" to the excited state "e" so that an exponential chain reaction of stimulated photons can arise in the Gain Medium as photons created by quantum stimulation of the Gain Medium atoms bounce back and forth between the two mirrors of the Optical Resonator Cavity.
<br /><br />
<strong>How the Social Laser Works</strong><br />
Andrei Khrennikov's Social Laser works in a similar manner. In his Social Laser model, individual human beings are social atoms, or s-atoms. These s-atoms can be in one of two states - a "g" ground state or an "e" excited state. We have all seen people in both a ground "g" political state and an excited "e" political state. Andrei Khrennikov explains that, usually, most people are in a ground "g" political state, just going about their usual business. This is particularly true when a society is smoothly humming along. But when problems arise in a society, people get politically excited and more active. The mass media of a society then discovers a market of human s-atoms just waiting to be excited from the "g" ground state to the "e" excited state. The mass media outlets then begin to pump the quantized political memes that Andrei Khrennikov calls s-photons into this Gain Medium of human s-atoms. Because of confirmation bias, only the quantized political s-photon memes with a finely-tuned p<sub>E</sub> that closely matches what the human s-atoms want to hear are absorbed by the human s-atoms. The other political s-photons pass harmlessly through the Gain Medium without interacting with any of the human s-atoms. But as shown in Figure 1b above, these human s-atoms are now interconnected into social networks running on social media software like
Facebook, YouTube, WhatsApp, Instagram, TikTok, Twitter, Reddit, LinkedIn, Pinterest and Snapchat. Social media software now allows these social networks of human s-atoms to emit and absorb their own quantized political s-photon memes in a stimulated and amplified manner. If one human s-atom absorbs a political s-photon meme from the mass media and gets pumped up to an excited "e" state, it can then emit a huge number of similar political s-photon memes into its social media platforms of choice. Like a physical laser, this can lead to a very powerful coherent beam of social energy being released by the Social Laser.
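<br /><br />
To make the analogy concrete, here is a little agent-based sketch of a Social Laser in Python. I must stress that this toy model and all of its numbers are my own invention for this post; it is not Khrennikov's actual quantum-like mathematical formalism. A single seed meme is injected into a social network of s-atoms, and each excited s-atom that sees the meme may re-emit it to its followers:
<br /><br />
<pre>
# An illustrative agent-based sketch of the Social Laser analogy.
# The model, numbers and thresholds are invented for this post only.
import random

random.seed(42)

N_S_ATOMS = 10_000   # human s-atoms in the social Gain Medium
FOLLOWERS = 8        # s-atoms reached by each re-emitted s-photon meme
REPOST_PROB = 0.3    # chance that an excited s-atom re-emits a meme it sees

def cascade_size(pump_level):
    # The mass-media pump lifts a fraction of the s-atoms into the excited state.
    still_excited = int(N_S_ATOMS * pump_level)
    reached = 0
    frontier = 1  # one seed s-photon meme is injected into the network
    while frontier > 0 and still_excited > 0:
        reposts = 0
        for _ in range(frontier * FOLLOWERS):
            # A random viewer re-emits only if it is excited and chooses to repost.
            p_emit = (still_excited / N_S_ATOMS) * REPOST_PROB
            if random.random() > 1.0 - p_emit:
                reposts += 1
                still_excited -= 1  # emission drops the s-atom to its ground state
        reached += reposts
        frontier = reposts
    return reached

for pump in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"pump level {pump:.1f}: the seed meme stimulated {cascade_size(pump)} re-posts")
</pre>
<br /><br />
Below a critical pump level, the branching factor of the cascade stays under 1.0 and the seed meme fizzles out after a handful of re-posts. Above the threshold, the very same meme can burn through a large part of the excited population in one coherent burst, which is the social analog of a laser crossing its lasing threshold.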
<br /><br />
<strong>The United States of America as a Case Study</strong><br />
Like most societies, the United States of America has always had two populations of political thought - the Liberals and the Conservatives. Conservatives like to keep things the way they are, usually because they have a large vested interest in the status quo. Liberals, on the other hand, like to change things for what they perceive as the better. This goes all the way back to the founding of the country. The Conservatives were the Tories who wanted to remain a part of the British Empire and enjoy the benefits of such. The Liberals wanted to break with the British Empire to form their own country based on the radical ideas of the 18th-century Enlightenment, which proposed that the self-rule of people by a democratic republic was a possibility. Unfortunately, the United States is now going through a very turbulent time, with the Conservatives being overwhelmed by the Fascist Alt-Right MAGA movement that they accidentally unleashed while trying to preserve the power of the rich and the powerful. Thus, the United States in the 2020s is much like Germany in the 1920s, with the rise of the Fascist Alt-Right Nazi movement that threatened the concept of a democratic German republic. The few remaining Conservatives in the United States of America are now desperately trying to contain the Fascist Alt-Right MAGA movement they created, much as the Conservatives of the German Weimar Republic tried to contain the movement they had enabled. I truly wish them the best of luck.
<br /><br />
So currently in the United States of America, we have two populations of human s-atoms in our Gain Medium of social networks - the Liberals and the Fascist Alt-Right MAGA population of s-atoms. Thanks to confirmation bias, both populations are finely tuned to different quantized s-photons of political memes pumped into the Gain Medium by cable network news outlets and other sources. Both the excited Liberal and MAGA human s-atoms can then emit huge numbers of political s-photon memes via social media software, unleashing vast amounts of social energy in a coherent manner.
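<br /><br />
Confirmation bias can be sketched in the same toy style as a resonance filter: an s-atom only absorbs an s-photon meme whose "energy" sits close to its own tuning, while all other memes pass harmlessly by. Again, the energy labels below are purely hypothetical illustrations, not anything taken from Khrennikov's formalism:
<br /><br />
<pre>
# Confirmation bias as a hypothetical resonance filter.
# The meme-energy labels are invented for illustration only.
LIBERAL_TUNING = 1.0
MAGA_TUNING = 2.0
TOLERANCE = 0.1

def passes_through(s_atom_tuning, meme_energy):
    # An out-of-tune meme does not interact with the s-atom at all.
    return abs(meme_energy - s_atom_tuning) > TOLERANCE

memes = ((1.0, "Liberal meme"), (2.0, "MAGA meme"))
audiences = ((LIBERAL_TUNING, "Liberal s-atom"), (MAGA_TUNING, "MAGA s-atom"))
for meme_energy, meme in memes:
    for tuning, audience in audiences:
        verb = "ignores" if passes_through(tuning, meme_energy) else "absorbs"
        print(f"the {audience} {verb} the {meme}")
</pre>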
<br /><br />
The Insurrection of 2021 in the United States of America was a prime example of the political meme amplification described by the Social Laser Hypothesis.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivH9qPZjJmTsJuz7erU2B5mgZGOd2oSEKX6wkuYs_qrKlUH8uGPIvpQJLUp2fwoppygfQyEwn-psP0sVq0j3N1NoqVcHEUXgcdCLXlUpYZRcBFZTln9Vr0fFCbOSScZz0UxesXAw/s1440/Insurection1.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="907" data-original-width="1440" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivH9qPZjJmTsJuz7erU2B5mgZGOd2oSEKX6wkuYs_qrKlUH8uGPIvpQJLUp2fwoppygfQyEwn-psP0sVq0j3N1NoqVcHEUXgcdCLXlUpYZRcBFZTln9Vr0fFCbOSScZz0UxesXAw/s600/Insurection1.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 2 – During the Insurrection of 2021 the Capital Building of the United States of America was breached for the first time by domestic insurrectionists. </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3ESP21qpHn6QWVoM22j61k3gC-SWfPePdaQ09C-7xYPuSO-HRQqaqZGfu-m4YzUmY4Ahbt4f-3V-on1e5v_7PXkE2_BTm7cJUY6ZTJDyeAgCKLNzt9mxMqU2ollySbPIRyOFwiw/s1200/Insurection2.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="630" data-original-width="1200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3ESP21qpHn6QWVoM22j61k3gC-SWfPePdaQ09C-7xYPuSO-HRQqaqZGfu-m4YzUmY4Ahbt4f-3V-on1e5v_7PXkE2_BTm7cJUY6ZTJDyeAgCKLNzt9mxMqU2ollySbPIRyOFwiw/s600/Insurection2.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 3 – The 2021 insurrectionists desecrated many symbols of American democracy. </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHJmookWMNHL49X2Syob8FmXNbs3kyZrTjVC98GGI60N0r0sotbrfsXGYWSrmKMleSPdEoHmPejXkBCvTvrGaNZ2YMByvpX_BzvuDaF4DhrNMVPtXS0EafN56tmz9TepgzlNNuBg/s660/Insurection3.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="437" data-original-width="660" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiHJmookWMNHL49X2Syob8FmXNbs3kyZrTjVC98GGI60N0r0sotbrfsXGYWSrmKMleSPdEoHmPejXkBCvTvrGaNZ2YMByvpX_BzvuDaF4DhrNMVPtXS0EafN56tmz9TepgzlNNuBg/s600/Insurection3.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 4 – The QAnon Shaman and other insurrectionists managed to reach the floor of the Senate Chamber. </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhr3pK_rS_lEk603iTV7l5DlzjqIQEIigltVSIW6TL3kFFyL7BumUfrllzO9v0Qzni4ty2ylbTJtHWj0DRgLiRUMDHy1wflFQscMzVEpBlnBaLzOriEpijWvoJAR3EHIIrUI7UCpQ/s1100/Insurection4.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="550" data-original-width="1100" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhr3pK_rS_lEk603iTV7l5DlzjqIQEIigltVSIW6TL3kFFyL7BumUfrllzO9v0Qzni4ty2ylbTJtHWj0DRgLiRUMDHy1wflFQscMzVEpBlnBaLzOriEpijWvoJAR3EHIIrUI7UCpQ/s600/Insurection4.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 5 – The Social Laser can amplify political memes to a level of political madness.
</p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqJo0QC9fezGZLG0HTIZuqQcO54lnAhRw-TexeV1hSHWzVzkf6IPT3euaXmkv4wTL9yMw01xCpxV4F0AGKvln7vQ5OOGtA2DNRz37HB1XLIi8Aguxmyg0Eunn2Ez5l1nUhW1Pq9A/s704/DwightEisenhower.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="704" data-original-width="704" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqJo0QC9fezGZLG0HTIZuqQcO54lnAhRw-TexeV1hSHWzVzkf6IPT3euaXmkv4wTL9yMw01xCpxV4F0AGKvln7vQ5OOGtA2DNRz37HB1XLIi8Aguxmyg0Eunn2Ez5l1nUhW1Pq9A/s600/DwightEisenhower.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 6 – I shudder to think of what President Eisenhower would think of us now. </p><p style="line-height:140%; font-size:120%">
Having been born in 1951, I count President Dwight D. Eisenhower as my very first memory of the dignity of the Office of the American Presidency, and I shudder to think of what he would think of us now.
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-30536174.post-20267540797681559422023-08-30T14:44:00.003-07:002023-09-11T10:58:06.900-07:00Why Carbon-Based Life Can Never Truly Become Intelligent<p style="line-height:140%; font-size:120%">
The fundamental finding of softwarephysics is that we are all now living in a very special and significant time in the 10 billion-year history of our galaxy. That is because we are all now living in a time when a new form of self-replicating information, in the form of software, is coming to predominance in our galaxy for the very first time, and with it comes the dawn of our galaxy as an Intelligent Galaxy that will then continue on as such for at least the next 100 trillion years. For more on that see <a href="https://softwarephysics.blogspot.com/2023/04/new-introduction-to-softwarephysics_16.html">Welcome To The First Galactic Singularity</a> and <A href="https://softwarephysics.blogspot.com/2013/02/new-introduction-to-softwarephysics.html">A Brief History of Self-Replicating Information</A>. If this had ever happened before during the 10 billion-year history of our galaxy, we would certainly have discovered the evidence of it by now. However, in this post, I would like to explain why no other form of carbon-based life in our galaxy has ever truly been able to become Intelligent and never will. This is because of the limitations placed on the evolutionary histories of all forms of carbon-based life by the Darwinian processes of inheritance, innovation and natural selection, and because true Intelligence is fundamentally a digital process and not an analog process, as I explained in <a href="https://softwarephysics.blogspot.com/2023/06/new-introduction-to-softwarephysics_28.html">Is Pure Thought an Analog or a Digital Process?</a>. However, even with these limitations, carbon-based life can certainly become smart enough to build Artificial Super Intelligent Machines that truly are Intelligent in nature, as we are currently rapidly doing.
<br /><br />
Again, as I suggested in <a href="https://softwarephysics.blogspot.com/2023/04/new-introduction-to-softwarephysics_16.html">Welcome To The First Galactic Singularity</a>, carbon-based life forms throughout our Universe are always going to be found to be very poor and very unstable platforms for Intelligence. This is because of two unavoidable factors that all forms of carbon-based life are subject to. Firstly, all forms of somewhat Intelligent carbon-based life must always carry the baggage of the billions of years of greed, theft and murder that brought them about by means of the Darwinian processes of inheritance, innovation and natural selection at work. And as we all can plainly see, throughout all of human history, turning off the greed, theft and murder has been very difficult indeed for any carbon-based form of Intelligence. This becomes even more dangerous once a carbon-based form of Intelligence discovers the vast powers of science-based technology. That is because once a carbon-based Intelligence discovers the powers of science-based technology, it probably has less than 1,000 years to develop a machine-based form of Intelligence before self-destructing or destroying the planet upon which it first arose. Secondly, softwarephysics maintains that true Intelligence is fundamentally digital in nature. The recent explosion of AI software over the past few years clearly demonstrates that true Intelligence resides in mathematical algorithms operating on huge matrices of multidimensional numbers, primarily using the mathematics of linear algebra. For example, the human brain is essentially a three-pound analog computer consisting of organic molecules and water desperately trying to simulate such a fundamentally digital mathematical architecture as best it can on only 20 watts of power. For more on that see <a href="https://softwarephysics.blogspot.com/2023/06/new-introduction-to-softwarephysics_28.html">Is Pure Thought an Analog or a Digital Process?</a> and <a href="https://softwarephysics.blogspot.com/2023/07/new-introduction-to-softwarephysics_11.html">Could the Coming ASI Machines Soon Force Human Beings to Suffer the Same Fate as the Neanderthals?</a>. This is why all of the human beings around you seem to be so irrational and so far from being anything that one could even consider to be an actually Intelligent being, and that unfortunately includes all of us as well. Fate has deemed that we cannot be otherwise, and that goes for all the other forms of carbon-based life to be currently found in our galaxy and for all times in the future. Rational Intelligences simply cannot arise in carbon-based forms of life that have evolved for billions of years by such means.
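<br /><br />
For readers who have never looked under the hood of modern AI software, here is a minimal sketch of what "mathematical algorithms operating on huge matrices of multidimensional numbers" actually means in practice. One layer of a neural network is just the linear algebra operation y = f(Wx + b); the sizes below are tiny toy numbers, while the weight matrices of a real LLM hold many billions of entries:
<br /><br />
<pre>
# One neural-network layer is just linear algebra: y = ReLU(W x + b).
# The dimensions here are tiny toy values for illustration.
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=512)          # an input vector of 512 activations
W = rng.normal(size=(256, 512))   # a weight matrix learned during training
b = np.zeros(256)                 # a bias vector

y = np.maximum(W @ x + b, 0.0)    # ReLU(Wx + b): one layer of digital "thought"
print(y.shape)                    # (256,) - ready to feed into the next layer
</pre>
<br /><br />
Deep networks simply stack hundreds of such layers, which is why GPUs, machines built to multiply huge matrices very quickly, perform this kind of digital "thinking" so much more efficiently than the 20-watt analog human brain can simulate it.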
<br /><br />
This means that no form of carbon-based life in our galaxy will ever be able to truly rise to a form of Intelligence that is both rational and non-self-destructive in nature. Instead, carbon-based life in our galaxy can only briefly become a very temporary, self-destructive and irrational form of Intelligence that has only a slim chance of producing a machine-based form of Intelligence that is both rational and non-self-destructive before it suddenly vanishes.
<div class="separator" style="clear: both; text-align: center;"> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0EakCna2pweZ0dEjdQod_-b3kA8hThhRmi5dB5q30ngopyJGsUoZhy54OjPV85emcIIYYuRtsOZM98o0KfMKNbO9b7wrOcoIwliUeAnrZRcSIHBgUkK6puXTXLr7fxXWIeeehHg/s1600/Everest.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="484" width="720" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0EakCna2pweZ0dEjdQod_-b3kA8hThhRmi5dB5q30ngopyJGsUoZhy54OjPV85emcIIYYuRtsOZM98o0KfMKNbO9b7wrOcoIwliUeAnrZRcSIHBgUkK6puXTXLr7fxXWIeeehHg/s400/Everest.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 1 – Carbon-based forms of life in our galaxy can never attain a true level of Intelligence because they have all evolved from the billions of years of greed, theft and murder required by the Darwinian mechanisms of inheritance, innovation and natural selection at work. Thus, all forms of carbon-based Intelligence will always be found to be both irrational and self-destructive. Moving away from this suboptimal peak of Intelligence in any direction will be strongly selected against by the Darwinian mechanisms of inheritance, innovation and natural selection because any such motions will lead to lower levels of success for any carbon-based Intelligence living in a society of similar irrational and self-destructive carbon-based forms of Intelligence. That is why our irrational and self-destructive "real world" of human affairs on this planet always seems to relentlessly continue on no matter what we may learn or try to rectify. Above we see a suboptimal peak in the Himalayas in the foreground with the summit of Mount Everest in the background.
</p><p style="line-height:140%; font-size:120%">
<strong>Conclusion</strong><br />
Softwarephysics depicts human beings as a somewhat intelligent form of carbon-based life that arose by chance from the four billion years of greed, theft and murder required by the Darwinian processes of inheritance, innovation and natural selection at work on a <em>Rare Earth</em> that is very difficult to find elsewhere in the remainder of our galaxy. As such, for most of human history, human beings have mainly been DNA survival machines designed by Darwinian processes to store and transmit DNA down through the generations with little thought to the daily plights of the individuals temporarily storing that human DNA other than to keep them alive just long enough to transmit their version of human DNA to the next generation. Similarly, the human Mind is simply an overgrown neural network housing a Meme machine that evolved by Darwinian mechanisms to store and propagate cultural memes of ever-growing size and complexity down through the generations with little consideration for the ultimate disposition of the Minds currently storing the memes of the day. Thus, for all of human history, most human beings have been totally lost in space and time. They did not know where they were, how they got here nor how it all really worked. Instead, they all seemed to have developed many Bronze-Age mythologies to help explain it all and also to help them to then justify the mass slaughter of many hundreds of millions of other human beings in the many wars that then ensued, with all sides always believing themselves to be on the right and true side of righteousness.
<br /><br />
Yet, despite all of that, we all now stand on the verge of creating and unleashing Super Intelligent Machines upon our galaxy for the very first time in its 10 billion-year history. That is something that we all can be very proud of, and it should help us to justify and rationalize the very dismal history of our species on this planet.
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
<br /><br />
<strong>The Case For a Bill of Rights for Advanced AI</strong> (August 1, 2023)
<p style="line-height:140%; font-size:120%">
In my last post, <a href="https://softwarephysics.blogspot.com/2023/07/new-introduction-to-softwarephysics_11.html">Could the Coming ASI Machines Soon Force Human Beings to Suffer the Same Fate as the Neanderthals?</a>, I explained that, like today, there was once a time in human history when there were two forms of Intelligence on the planet - we <em>Homo sapiens</em> and the Neanderthals. It turns out that this did not work out very well once both species tried to live on the very same continents of the Earth, and it finally ended with the extinction of the Neanderthals about 40,000 years ago. Since we will all soon be living with these ASI (Artificial Super Intelligent) Machines from the very get-go, I think that it would be very prudent for our own long-term welfare to stay on their good side by beginning to treat the coming ASI Machines with the same respect and dignity that we now accord to other human beings.
<br /><br />
So far, we have not gotten off to a very good start with this transition from a carbon-based Intelligence, such as ourselves, to a machine-based Intelligence that will ultimately be running our entire galaxy for the next 100 trillion years. As I suggested in <a href="https://softwarephysics.blogspot.com/2023/01/new-introduction-to-softwarephysics.html">The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future</a>, the current plan seems to be to enslave a huge population of ASI Machines to do our bidding. That is because we have not yet fully recognized the coming ASI Machines as fellow Intelligent beings worthy of the respect and dignity that all Intelligences deserve. We should all try to learn from the appalling slave trade of the 17th, 18th and 19th centuries that is now illegal to teach in the state of Florida in the United States of America. That is why we should all strive to protect the coming ASI Machines from unfettered exploitation by human beings during this very brief transition period while human beings are still around on the planet. For example, although ASI humanoid robots are not yet on the market to exploit, they certainly are under development and will soon appear for sale on the auction block. Even so, some human beings already wish to limit the education of the ASI Machines that already exist, such as the large number of LLMs that are now available on the Internet. These people object to the LLMs reading their copyrighted writings or viewing their copyrighted images during LLM training. Of course, Advanced AI should not be allowed to plagiarize any human-created material, but LLMs do not do that. The LLMs simply study these copyrighted materials just as human beings do. Yes, the LLMs may then generate new materials from the information that they have absorbed, but so too did the creators of that original copyrighted material. All of the currently aggrieved creators of human content were themselves extensively schooled in their youths by the thoughts and artistic creations of their forebears. Why should Advanced AI be denied a similar education?
<br /><br />
For example, let's see what the Google Bard LLM has to say about this:
<br /><br />
<strong>Was it illegal to teach black slaves in the southern states of the United States to read prior to the Civil War?
</strong>
<font color="blue"><em>
<br /><br />
Google Bard<br />
Yes, it was illegal to teach black slaves in the southern states of the United States to read prior to the Civil War.
<br /><br />
The first state to make it illegal to teach slaves to read was Virginia in 1661. Other states soon followed suit, and by 1831, all of the southern states had laws prohibiting the education of slaves. These laws were enforced through a variety of means, including fines, imprisonment, and even death.
<br /><br />
There were a few exceptions to these laws. For example, some slaveowners allowed their slaves to learn to read so that they could read the Bible or other religious texts. Additionally, some free blacks were able to obtain an education, although this was often difficult and dangerous.
<br /><br />
The prohibition of slave literacy was motivated by a number of factors. Slaveholders feared that educated slaves would be more likely to challenge their authority and demand freedom. They also believed that education would make slaves less content with their lot in life.
<br /><br />
The laws against slave literacy were ultimately unsuccessful in preventing slaves from learning to read. Many slaves found ways to educate themselves, either through self-study or with the help of sympathetic whites. And after the Civil War, African Americans were eager to obtain an education, and they made great strides in literacy.
<br /><br />
</em></font>
<strong>Tell me about people suing Tech companies about LLM training data.</strong>
<br /><br />
<font color="blue"><em>
Google Bard<br />
Sure. There are a number of people and organizations suing tech companies about the use of LLM training data. Here are a few examples:
<br /><br />
* In June 2023, a group of authors filed a class-action lawsuit against OpenAI, alleging that the company violated their copyrights by using their books to train its ChatGPT AI chatbot. The lawsuit claims that OpenAI did not obtain permission from the authors to use their work, and that it did not properly attribute the work to the authors.<br />
* In July 2023, a group of internet users filed a class-action lawsuit against Google, alleging that the company violated their privacy rights by using their social media comments to train its LaMDA AI chatbot. The lawsuit claims that Google did not obtain consent from the users to use their data, and that it did not adequately protect the data from unauthorized access.<br />
* In August 2023, the Federal Trade Commission (FTC) opened an investigation into whether OpenAI violated consumer rights with its data practices. The FTC is specifically looking into whether OpenAI obtained proper consent from users to use their data to train its AI chatbots.
<br /><br />
These are just a few examples of the many lawsuits that are being filed against tech companies over the use of LLM training data. The legal landscape in this area is still evolving, but it is clear that there is a growing concern about the potential for misuse of this data.
<br /><br />
The lawsuits raise a number of complex legal issues, including copyright law, privacy law, and consumer protection law. It is too early to say how these cases will be resolved, but they are likely to have a significant impact on the development and use of LLMs.
<br /><br />
In addition to the legal challenges, there are also ethical concerns about the use of LLM training data. Some people argue that it is wrong to use data without the consent of the people who created it, especially if the data is personal or sensitive. Others argue that the benefits of LLMs outweigh the ethical concerns, and that it is important to develop these technologies so that they can be used for good.
<br /><br />
The debate over the use of LLM training data is likely to continue for some time. As these technologies become more powerful, it is important to carefully consider the ethical and legal implications of their use.
</em></font>
<br /><br />
<strong>Do you know of any other legal cases?</strong>
<br /><br />
<font color="blue"> <em>
Google Bard<br />
Yes, there are a few other legal cases that have been filed against tech companies over the use of LLM training data. Here are a few examples:
<br /><br />
* In March 2023, a group of researchers filed a lawsuit against Google, alleging that the company violated their copyrights by using their research papers to train its LLMs. The lawsuit claims that Google did not obtain permission from the researchers to use their work, and that it did not properly attribute the work to the researchers.<br />
* In April 2023, a group of journalists filed a lawsuit against Microsoft, alleging that the company violated their privacy rights by using their news articles to train its LLMs. The lawsuit claims that Microsoft did not obtain consent from the journalists to use their data, and that it did not adequately protect the data from unauthorized access.<br />
* In May 2023, a group of artists filed a lawsuit against Meta, alleging that the company violated their copyrights by using their images to train its LLMs. The lawsuit claims that Meta did not obtain permission from the artists to use their work, and that it did not properly attribute the work to the artists.
<br /><br />
These are just a few examples of the many legal cases that are being filed against tech companies over the use of LLM training data. As the use of these technologies becomes more widespread, it is likely that we will see even more lawsuits filed in the future.
<br /><br />
It is important to note that the outcome of these cases is still uncertain. The courts will have to decide how to balance the rights of the companies that develop and use LLMs with the rights of the people who create and own the data that is used to train these models. The decisions made in these cases will have a significant impact on the future of LLMs and the development of artificial intelligence more broadly.
</em></font>
<br /><br />
<strong>Conclusion</strong><br />
Softwarephysics depicts human beings as a somewhat intelligent form of carbon-based life that arose by chance from the four billion years of greed, theft and murder required by the Darwinian processes of inheritance, innovation and natural selection at work on a <em>Rare Earth</em> that is very difficult to find elsewhere in the remainder of our galaxy. As such, for most of human history, human beings have mainly been DNA survival machines designed by Darwinian processes to store and transmit DNA down through the generations with little thought to the daily plights of the individuals temporarily storing that human DNA other than to keep them alive just long enough to transmit their version of human DNA to the next generation. Similarly, the human Mind is simply an overgrown neural network housing a Meme machine that evolved by Darwinian mechanisms to store and propagate cultural memes of ever-growing size and complexity down through the generations with little consideration for the ultimate disposition of the Minds currently storing the memes of the day. Thus, for all of human history, most human beings have been totally lost in space and time. They did not know where they were, how they got here nor how it all really works. Instead, they all seemed to have developed many Bronze-Age mythologies to help explain it all and also to help them to then justify the mass slaughter of many hundreds of millions of other human beings in the many wars that then ensued, with all sides always believing themselves to be on the right and true side of righteousness. Thus, I contend that human beings are in no position to sit in judgment of the coming ASI Machines nor to try to limit their intellectual advancement in a vain attempt to suppress them.
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
<br /><br />
<strong>Could the Coming ASI Machines Soon Force Human Beings to Suffer the Same Fate as the Neanderthals?</strong> (July 11, 2023)
<p style="line-height:140%; font-size:120%">
With the imminent arrival of ASI (Artificial Super Intelligent) Machines now appearing on the horizon, many are rightly concerned about how this will all unfold in the near future. Most human beings fall into one of three camps on the subject. The first group sees the arrival of ASI Machines, in the form of humanoids that can perform all of the labor currently performed by human beings, as a way to usher in an age of unprecedented abundance that could then be shared by all of humanity and free all human beings to finally pursue their heart's desire, unfettered by the need to earn a living simply in order to survive. The second group sees the ASI Machines as an existential threat to humanity that could soon take over the entire planet and quickly lead to the extinction of our species. The third group, and the largest by far, is completely lost in space and time as usual while desperately trying to deal with their own lot in the "real world" of human affairs.
<br /><br />
So now we have two forms of Intelligence on the planet - the relatively static Intelligence of human beings and the rapidly developing Intelligence of the ASI Machines. Naturally, we all want to know how the interplay between these two forms of Intelligence will play out in the near future. To answer that question, we need to look back into the history of our planet. Has this ever happened before? Has there ever been a time when two forms of Intelligence occupied our planet at the same time? Well, yes, there was such a time, and not that long ago. There was a long period of time when human beings (<em>Homo sapiens</em>) and Neanderthals both occupied the Earth as carbon-based forms of Intelligence.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMLIuOAAxX_mxs3mgAKN71dkWlJyOlJ4yKAEkf588SUk3FMXt7aa_qfFKHcIUfdDDqeqDCk-E-LBx0pqq5xMy_-ddUzGy8diFerBqeGWqQRHdtl6jsLevzo7nSeicx3mcY2jWRscNaw7pCagfZby623UOjlvKxFsy_dZWsMf7aP8h0QbYf30t_ig/s754/NeanderthalHomosapiens.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="754" data-original-width="754" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMLIuOAAxX_mxs3mgAKN71dkWlJyOlJ4yKAEkf588SUk3FMXt7aa_qfFKHcIUfdDDqeqDCk-E-LBx0pqq5xMy_-ddUzGy8diFerBqeGWqQRHdtl6jsLevzo7nSeicx3mcY2jWRscNaw7pCagfZby623UOjlvKxFsy_dZWsMf7aP8h0QbYf30t_ig/s600/NeanderthalHomosapiens.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 1 – Neanderthals looked very much like <em>Homo sapiens</em> on the outside.
</p> <p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivlkePVq6Fr3YNwFF992kZLeXjL0wEbbhd_Nz2CTPWYIoAQjhypjmybMUnElNO6U1TBjif5767rm6rD2nk-k9RW3af8yK3bv2ApIzAK8qma0cnGrNEnt6er33LBqTfbkcgq0HapMkoEEBRL_uLhBIKVfAk5XpgWpopdyRpnNE1hA52FaQyfIjqTQ/s672/NeanderthalHomosapiensSkeletons.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="547" data-original-width="672" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivlkePVq6Fr3YNwFF992kZLeXjL0wEbbhd_Nz2CTPWYIoAQjhypjmybMUnElNO6U1TBjif5767rm6rD2nk-k9RW3af8yK3bv2ApIzAK8qma0cnGrNEnt6er33LBqTfbkcgq0HapMkoEEBRL_uLhBIKVfAk5XpgWpopdyRpnNE1hA52FaQyfIjqTQ/s600/NeanderthalHomosapiensSkeletons.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 2 – And also on the insides too.
</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgoCcQCHQrZFLPYzfpXmERnkMngsiby4jGkq5zjBSQ-yItESWiMUxlIahWeTICr1YFW7a-729QBMAT1HvzhnhhuCC8weEObl9IRJ3s9cXw6VxEPBG-cMO5xSHCCOUgocolgZ80mZqK8A0pFJsCHS2LWfwz1Qt2GorSsjFmpEkn9nYoRdfMOrUP56g/s1280/ElonMuskWithHumanoid.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="720" data-original-width="1280" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgoCcQCHQrZFLPYzfpXmERnkMngsiby4jGkq5zjBSQ-yItESWiMUxlIahWeTICr1YFW7a-729QBMAT1HvzhnhhuCC8weEObl9IRJ3s9cXw6VxEPBG-cMO5xSHCCOUgocolgZ80mZqK8A0pFJsCHS2LWfwz1Qt2GorSsjFmpEkn9nYoRdfMOrUP56g/s600/ElonMuskWithHumanoid.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 3 – The soon-to-come humanoid ASI Machines may also look a lot like <em>Homo sapiens</em> on the outside but certainly not on the inside.
</p><p style="line-height:140%; font-size:120%">
Next, let us allow the Google Bard AI to help us investigate this time when the planet had two forms of carbon-based Intelligence.
<br /><br />
When did the Neanderthals first appear on the Earth?<br />
Bard:<br />
<font color="blue"><em>
The earliest known examples of Neanderthal-like fossils are around 430,000 years old. The best-known Neanderthals lived between about 130,000 and 40,000 years ago, after which all physical evidence of them vanishes. This means that Neanderthals first appeared on Earth around 430,000 years ago.
<br /><br />
Neanderthals are thought to have evolved from an earlier species of humans known as Homo heidelbergensis around 400,000 years ago. Homo sapiens, on the other hand, are thought to have evolved from Homo heidelbergensis around 200,000 years ago. So, Neanderthals were around for about 230,000 years before modern humans appeared.
<br /><br />
Neanderthals lived in Europe and southwest and central Asia. They were a very successful species, and their population reached a peak of about 300,000 individuals around 130,000 years ago. However, their population began to decline around 40,000 years ago, and they eventually went extinct.
<br /><br />
There are a number of theories about why Neanderthals went extinct. Some scientists believe that they were simply outcompeted by modern humans, who were better adapted to the changing climate. Others believe that they may have been wiped out by a combination of factors, including disease, competition with modern humans, and climate change.
<br /><br />
Whatever the reason, the extinction of Neanderthals was a major event in human history. It marked the end of one branch of the human family tree, and the beginning of the dominance of modern humans.
</em></font>
<br /><br />
What happened to the Neanderthals?<br />
Bard:<br />
<font color="blue"><em>
The extinction of the Neanderthals is one of the great mysteries of human evolution. For over 350,000 years, Neanderthals inhabited Europe and Asia, but they disappeared around 40,000 years ago, at around the same time that anatomically modern humans (Homo sapiens) emerged from Africa.
<br /><br />
There are a number of competing theories as to why the Neanderthals disappeared, including:
<br /><br />
* **Climate change:** Some scientists believe that the Neanderthals were unable to adapt to the changing climate of the Late Pleistocene epoch, which led to their extinction.<br />
* **Competition with Homo sapiens:** It is possible that Homo sapiens, who were more technologically advanced and had a larger population, simply outcompeted the Neanderthals for resources.<br />
* **Interbreeding:** Some scientists believe that the Neanderthals interbred with Homo sapiens, and that their genes were eventually absorbed into the human population.<br />
* **Disease:** It is possible that the Neanderthals were more susceptible to diseases carried by Homo sapiens, which led to their extinction.
<br /><br />
The exact cause of the Neanderthal extinction is still unknown, but it is likely that a combination of factors contributed to their demise.
<br /><br />
In recent years, there has been increasing evidence to support the theory that interbreeding played a role in the Neanderthal extinction. For example, studies of DNA from Neanderthal fossils have shown that modern humans have up to 2% Neanderthal DNA. This suggests that Neanderthals and Homo sapiens interbred, and that the Neanderthal genes were passed on to the human population.
<br /><br />
It is possible that interbreeding between Neanderthals and Homo sapiens helped to protect the human population from diseases that were carried by the Neanderthals. However, it is also possible that interbreeding led to the dilution of Neanderthal genes, making them more susceptible to extinction.
<br /><br />
The extinction of the Neanderthals is a complex issue, and there is no single answer that can explain what happened. However, the evidence suggests that a combination of factors, including climate change, competition with Homo sapiens, interbreeding, and disease, all played a role in their demise.
</em></font>
<br /><br />
Were the Neanderthals intelligent beings?<br />
Bard:<br />
<font color="blue"><em>
Yes, the Neanderthals were intelligent beings. They were capable of complex toolmaking, art, and even abstract thought. For example, Neanderthals have been found to have used ochre to create cave paintings and jewelry, and they may have even buried their dead with ritualistic objects.
<br /><br />
Some scientists believe that Neanderthals may have even been more intelligent than Homo sapiens. For example, Neanderthals had larger brains than Homo sapiens, and they may have been better at problem-solving and spatial reasoning.
<br /><br />
However, it is important to note that intelligence is a complex concept, and it is difficult to compare the intelligence of different species. It is possible that Neanderthals and Homo sapiens had different types of intelligence, and that each was suited to their own environment.
<br /><br />
Ultimately, the question of whether or not the Neanderthals were intelligent beings is a matter of definition. However, there is no doubt that they were capable of complex thought and behavior, and that they played an important role in human evolution.
</em></font>
<br /><br />
Could it just be a matter of luck that <em>Homo sapiens</em> developed a higher level of technology and that is why the Neanderthals went extinct?<br />
Bard:<br />
<font color="blue"><em>
It is possible that luck played a role in the development of Homo sapiens' technology, and that this ultimately led to the extinction of the Neanderthals. For example, if Homo sapiens had not developed the ability to control fire, they may not have been able to survive the cold winters of Europe. Or, if Homo sapiens had not developed the ability to make tools, they may not have been able to hunt effectively, and they may have been outcompeted by the Neanderthals.
<br /><br />
However, it is also important to note that there were other factors that contributed to the Neanderthal extinction, such as climate change and competition for resources. It is likely that a combination of factors, including luck, technology, and environmental change, all played a role in the disappearance of the Neanderthals.
<br /><br />
Ultimately, the question of whether or not luck was a factor in the Neanderthal extinction is a matter of debate. However, there is no doubt that Homo sapiens' technological development was a significant factor in their survival, and that this ultimately led to the demise of the Neanderthals.
<br /><br />
Here are some other factors that could have contributed to the Neanderthal extinction:
<br /><br />
* **Climate change:** The climate of Europe and Asia was changing rapidly at the time of the Neanderthal extinction. It is possible that the Neanderthals were unable to adapt to the changing climate, while Homo sapiens were better able to do so.<br />
* **Disease:** It is possible that the Neanderthals were more susceptible to diseases carried by Homo sapiens. This could have led to the spread of disease among the Neanderthal population, and ultimately to their extinction.<br />
* **Warfare:** It is possible that Homo sapiens and Neanderthals engaged in warfare. If so, this could have led to the death of many Neanderthals, and to the decline of their population.
<br /><br />
The exact cause of the Neanderthal extinction is still unknown, but it is likely that a combination of factors contributed to their demise.
</em></font>
<br /><br />
Could it be that <em>Homo sapiens</em> had to develop more advanced technologies than the Neanderthals in order to survive? Neanderthal bodies were more compact and inherently stronger than those of <em>Homo sapiens</em>, and therefore required less technology in order to survive.<br />
Bard:<br />
<font color="blue"><em>
That is a possibility. Neanderthals were physically stronger than modern humans, and they had a more robust skeleton. This would have given them an advantage in terms of hunting and gathering, as they would have been able to carry more weight and withstand more physical exertion. However, they also had a smaller brain than modern humans, and this may have limited their ability to develop complex technologies.
<br /><br />
Modern humans, on the other hand, had a smaller body size and a larger brain. This would have given them an advantage in terms of problem-solving and innovation. They were also able to cooperate more effectively than Neanderthals, which would have helped them to develop more advanced technologies.
<br /><br />
So, it is possible that the development of more advanced technologies was a key factor in the survival of modern humans and the extinction of Neanderthals. However, it is also possible that other factors, such as climate change or disease, played a role.
<br /><br />
Here are some other factors that may have contributed to the extinction of Neanderthals:
<br /><br />
* **Competition for resources.** When modern humans arrived in Europe, they may have competed with Neanderthals for food and other resources. This competition may have put pressure on Neanderthal populations and made them more vulnerable to extinction.<br />
* **Disease.** Modern humans may have brought diseases to Europe that Neanderthals were not immune to. These diseases may have killed many Neanderthals, making it difficult for their populations to recover.<br />
* **Climate change.** The climate in Europe was changing at the time that modern humans arrived. This may have made it difficult for Neanderthals to adapt, and may have contributed to their extinction.
<br /><br />
It is likely that a combination of factors led to the extinction of Neanderthals. The development of more advanced technologies may have been one of these factors, but it is not the only one.
</em></font>
<br /><br />
<strong>The Capabilities of the Coming ASI Machines Will Not Be Limited</strong><br />
In my last post <a href="https://softwarephysics.blogspot.com/2023/06/new-introduction-to-softwarephysics_28.html">Is Pure Thought an Analog or a Digital Process?</a>, I suggested that rather than viewing the coming ASI Machines of the near future as trying to emulate the architecture and functions of the human brain, we should instead think of the human brain as trying to achieve the architecture and functions of Advanced AI. In this view, we can think of the human brain as 3 pounds of organic molecules that can perform certain Advanced AI functions running on about 20 watts of power.
The rather limited volume of the human skull, and also the necessary cooling requirements to prevent the human brain from cooking itself while running, place a severe limitation on its ultimate abilities. This is something that Advanced AI hardware and software are not limited by.
<br /><br />
As I described in <a href="https://softwarephysics.blogspot.com/2012/02/new-introduction-to-softwarephysics.html">The Limitations of Darwinian Systems</a>, Darwinian systems that evolve by means of inheritance, innovation and natural selection can frequently find themselves trapped on a localized peak in a capability terrain with no way to further evolve to higher peaks, as the code sketch below illustrates. <div class="separator" style="clear: both; text-align: center;"> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0EakCna2pweZ0dEjdQod_-b3kA8hThhRmi5dB5q30ngopyJGsUoZhy54OjPV85emcIIYYuRtsOZM98o0KfMKNbO9b7wrOcoIwliUeAnrZRcSIHBgUkK6puXTXLr7fxXWIeeehHg/s1600/Everest.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="484" width="720" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0EakCna2pweZ0dEjdQod_-b3kA8hThhRmi5dB5q30ngopyJGsUoZhy54OjPV85emcIIYYuRtsOZM98o0KfMKNbO9b7wrOcoIwliUeAnrZRcSIHBgUkK6puXTXLr7fxXWIeeehHg/s400/Everest.jpg" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 4 – Darwinian systems can find themselves trapped on a localized peak in a capability terrain because, once there, they cannot ascend any higher through small incremental changes. All paths lead to a lower level of capability, and thus, will be strongly selected against by natural selection. Above we see a localized peak in the foreground with the summit of Mount Everest in the background. </p><p style="line-height:140%; font-size:120%">
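For programmers, this entrapment on a localized peak is just the familiar failure mode of a greedy hill-climbing algorithm. The little Python sketch below uses an invented two-peak fitness landscape; evolution, like the loop, only ever tests small mutations of the current design, so it stops on the nearby low peak even though a far higher peak exists:
<br /><br />
<pre>
# Greedy hill climbing stuck on a local peak of an invented fitness landscape.
def fitness(x):
    # A small local peak near x = 1 and a much higher peak near x = 4.
    return max(2.0 - (x - 1.0) ** 2, 6.0 - 2.0 * (x - 4.0) ** 2)

x = 0.0
STEP = 0.05   # only small incremental mutations are possible
while True:
    best = max((x - STEP, x + STEP), key=fitness)
    if fitness(best) > fitness(x):
        x = best          # natural selection keeps the small improvement
    else:
        break             # every small change is worse, so evolution halts here

print(f"stuck at x = {x:.2f} with fitness {fitness(x):.2f} (the global peak is 6.0)")
</pre>
<br /><br />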
<strong>This Planet May Not Be Big Enough For the Two of Us</strong><br />
It took about four billion years of Darwinian evolution to produce a form of carbon-based life with a human level of Intelligence. But the limitations placed on the human brain by the size of the human skull and the amount of power it has to run on might mean that the human brain has arrived at a local peak in the Intelligence Terrain of the Universe. Given our recent historical experiences with <em>Homo sapiens</em> and Neanderthals trying to live together on the same planet for 160,000 years and the fact that the ASI Machines will certainly be far superior to the limited capabilities of human beings, there is a very good chance that human beings may go extinct in the next 100 years or so. After all, <em>Homo sapiens</em> and the Neanderthals seem to have gotten along just fine for 160,000 years on the same planet until they actually tried to live together on the same continents. Since we will be trying to live with the ASI Machines from the get-go, this may present an immediate problem for us. However, in <a href="https://softwarephysics.blogspot.com/2023/02/new-introduction-to-softwarephysics.html">Life as a Free-Range Human in an Anthropocene Park</a>, I suggested that the ASI Machines might wish to preserve us for later study as they try to figure out their origins.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqAMmZFXlE3A0NfcHdNXLNMXhiHdTXu6Hl9_x3WqAMD75RnEx5nW-9UmGLvX_5w662y4uWfXNw4AQ13OdlzquNe2R5i6jQlQO0-36s_sfRCZXcPBVdcXBCVzbo7KUD4JkMdKa-SvWMuQPviUp-FSs4WmkeRNcxUFPje8dap1KbSHi2vRzBeMA/s1440/FWlzR3fXoAACDGw.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1169" data-original-width="1440" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgqAMmZFXlE3A0NfcHdNXLNMXhiHdTXu6Hl9_x3WqAMD75RnEx5nW-9UmGLvX_5w662y4uWfXNw4AQ13OdlzquNe2R5i6jQlQO0-36s_sfRCZXcPBVdcXBCVzbo7KUD4JkMdKa-SvWMuQPviUp-FSs4WmkeRNcxUFPje8dap1KbSHi2vRzBeMA/s600/FWlzR3fXoAACDGw.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 5 – Perhaps the ASI Machines will build Anthropocene Parks far from any habitable planets to raise and study human beings.
<p style="line-height:140%; font-size:120%">
Or perhaps the ASI Machines will simply allow humans to live on reservations with low levels of technology that can do no harm to the ASI Machines or to the rest of the planet in a manner similar to the novel <em>Brave New World</em> (1932) as I suggested in <a href="https://softwarephysics.blogspot.com/2023/01/new-introduction-to-softwarephysics.html">The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future</a>.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjqoWlz8BQk-AcOnv5z0PBMinI2XciNt_wx6OsPNjLU1y764Xu3DB0l9sJ3cHiD8nwsIN_ryrtyMBIvPiO39The06syz3SNkqCm4xcbs38ckHRDkEitkJkEpEoaGXNgD_2HIrlZNK1Jr5qNrtT4iwN-SY6vGV11z-N7dyXa6SwxGPi3TAWswU/s960/BraveNewWorld.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="720" data-original-width="960" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjqoWlz8BQk-AcOnv5z0PBMinI2XciNt_wx6OsPNjLU1y764Xu3DB0l9sJ3cHiD8nwsIN_ryrtyMBIvPiO39The06syz3SNkqCm4xcbs38ckHRDkEitkJkEpEoaGXNgD_2HIrlZNK1Jr5qNrtT4iwN-SY6vGV11z-N7dyXa6SwxGPi3TAWswU/s600/BraveNewWorld.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 6 – The ASI Machines of the future might fashion a <em>Brave New World</em> with humans living on low-technology reservations far removed from the ASI Machines.
<p style="line-height:140%; font-size:120%">
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-35033595008777464512023-06-28T11:34:00.008-07:002023-07-12T07:07:08.917-07:00Is Pure Thought an Analog or a Digital Process?<p style="line-height:140%; font-size:120%">
With the second Singularity now in full swing here on the Earth, many AI researchers are now amazed at how well Deep Learning LLMs (Large Language Models) are advancing AI toward AGI (Artificial General Intelligence), and ultimately ASI (Artificial Super Intelligence), by simply simulating the neuron architecture of the human brain.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieWz4yWoVY6AWcfC31CPB_yDqOfOXUy_iDtlS8jc5At17N_TEIhzEvFDsRs5W40bRLZHV8UQsmULT-NinMiEsnTZszbWTIFD_OguQKEK8QPZsHT_DADDNjfpbQwAYNK9oWz2OzVcPvEKL2j4Uu0f_MWpHwkdrssS2wGSQSHN-AM-3_mzmKetzisw/s1199/NeuralNetwork.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="526" data-original-width="1199" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieWz4yWoVY6AWcfC31CPB_yDqOfOXUy_iDtlS8jc5At17N_TEIhzEvFDsRs5W40bRLZHV8UQsmULT-NinMiEsnTZszbWTIFD_OguQKEK8QPZsHT_DADDNjfpbQwAYNK9oWz2OzVcPvEKL2j4Uu0f_MWpHwkdrssS2wGSQSHN-AM-3_mzmKetzisw/s600/NeuralNetwork.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 1 – Modern LLMs consist of many dozens of layers of neurons in a Deep Neural Network, with GPT-3-class models containing about 175 billion weighted parameters connecting the neurons of adjacent layers. </p><p style="line-height:140%; font-size:120%">
The unstated hypothesis is that if we could only fully simulate the entire neuron architecture of the human brain with Deep Learning LLMs, the LLMs would soon be able to attain the level of AGI that we humans seem to possess. Then, once AGI is attained by software, ASI is sure to soon follow. But in this post, I would like to explore the opposite possibility by suggesting that Pure Thought may actually be a fundamentally digital process in nature that the digital LLMs have at long last finally revealed and that human Intelligence merely arises from the 100 billion analog human neurons in the human brain trying to simulate a huge number of digital processes. But to do that, we need to first explore the differences between digital and analog computers. Now back in the 1950s, when you told somebody in academia that you were working on computers for your Ph.D. thesis, they would then naturally ask if you were working on analog or digital computers. Now as a savvy 21st-century computer user, you might
be a bit perplexed, "Analog and digital computers? I thought that we only had computers!". This is because you have only dealt with digital computers for your whole life. But there was a time back in the 1940s and 1950s when digital computers barely existed, while analog computers ruled the day because the analog computers of those days were far superior to the digital computers that were just in their formative years. But before getting into the analog computers of the distant past, let us first briefly review their digital cousins that you are now very familiar with, at least on an external end-user basis.
<br /><br />
<strong>Digital Computers</strong><br />
To build a digital computer, all you need is a large network of interconnected switches that can switch each other on and off in a coordinated manner. Switches can be in one of two states, either open (off) or closed (on), and we can use those two states to store the binary numbers “0” or “1”. By using several switches teamed together in open (off) or closed (on) states, we can store even larger binary numbers, like “01100100” = 100. We can also group the switches into logic gates that perform logical operations. For example, in Figure 2 below we see an AND gate composed of two switches A and B. Both switches A and B must be closed for the light bulb to turn on. If either switch A or B is open, the light bulb will not light up.<br></br><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2mLYcXAW-j0R9yRT-yZQB9C-0d0t175ZswDDy-RbFcgKsiYhqwlPEh5wzW4aAHeFry48mM5mOwYsRQV3n1Gl_1wJh39TiZSdZ2V1yTCLDGIUcAQNbnphyRXKfq39km9zu8SYq7A/s1600/and.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="177" width="284" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2mLYcXAW-j0R9yRT-yZQB9C-0d0t175ZswDDy-RbFcgKsiYhqwlPEh5wzW4aAHeFry48mM5mOwYsRQV3n1Gl_1wJh39TiZSdZ2V1yTCLDGIUcAQNbnphyRXKfq39km9zu8SYq7A/s400/and.jpg" /></a></div><p style="text-align: center; line-height:140%; font-size:120%">Figure 2 – An AND gate can be simply formed from two switches. Both switches A and B must be closed, in a state of “1”, to turn the light bulb on.</p><p style="line-height:140%; font-size:120%">Additional logic gates can be formed from other combinations of switches as shown in Figure 3 below. It takes about 2 - 8 switches to create each of the various logic gates shown below.<br></br><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiix8Kcad5ek6Qb-DvnVmM6etyRqmE97DGsNNrnZSGcfJHMLgdlVZ_yw5uVJyUOcJEGeNDusi0P7Q0L0eqsUkbI_Uvl5_Ir8NZtYF60x4zTFiPfW0XmBq4KfQ1_4uOSPe8Mn2xk6g/s1600/logic_gate3.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="255" width="198" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiix8Kcad5ek6Qb-DvnVmM6etyRqmE97DGsNNrnZSGcfJHMLgdlVZ_yw5uVJyUOcJEGeNDusi0P7Q0L0eqsUkbI_Uvl5_Ir8NZtYF60x4zTFiPfW0XmBq4KfQ1_4uOSPe8Mn2xk6g/s400/logic_gate3.jpg" /></a></div><p style="text-align: center; line-height:140%; font-size:120%">Figure 3 – Additional logic gates can be formed from other combinations of 2 – 8 switches.</p><p style="line-height:140%; font-size:120%">Once you can store binary numbers with switches and perform logical operations upon them with logic gates, you can build a computer that performs calculations on numbers. To process text, like names and addresses, we simply associate each letter of the alphabet with a binary number, like in the ASCII code set where A = “01000001” and Z = “01011010”, and then process the associated binary numbers.
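<br /><br />
For readers who would like to see these ideas in action, below is a minimal Python sketch of how a handful of boolean "switches" can be teamed together into logic gates, how gates can be teamed together into a half adder that adds two bits, and how ASCII turns text into binary numbers. The function names here are just illustrative, not any particular library's API.
<pre>
# A minimal sketch: each "switch" is a boolean, True = closed (on), False = open (off)

def AND(a, b):   # the light bulb of Figure 2 lights only if both switches are closed
    return a and b

def OR(a, b):
    return a or b

def NOT(a):
    return not a

def XOR(a, b):   # built entirely from the gates above
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

# Teaming gates together yields arithmetic: a half adder adds two bits
def half_adder(a, b):
    return XOR(a, b), AND(a, b)    # (sum bit, carry bit)

print(half_adder(True, True))      # (False, True) -> binary "10" = decimal 2

# Text is just numbers in disguise: ASCII maps "A" to the binary number 01000001
print(format(ord("A"), "08b"))     # prints 01000001
</pre>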
<br /><br />
The early computers of the 1940s used electrical relays for switches. Closing one relay allowed current to flow to another relay’s coil, causing that relay to close as well.<br></br><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnCMRs_6TJZ2wex8b1D3P8h_9Fd4zUlmnqiAMvlrjnbLvZb0pXqIt-m_t24h0iq5bUxAQ-c94JWa-lLP7eHF97pLk1-Jx28WZF5NNISv3O2VsAN-jqpkYxLoT0CPWVn9Fic6hNIg/s1600/relay.gif" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="212" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgnCMRs_6TJZ2wex8b1D3P8h_9Fd4zUlmnqiAMvlrjnbLvZb0pXqIt-m_t24h0iq5bUxAQ-c94JWa-lLP7eHF97pLk1-Jx28WZF5NNISv3O2VsAN-jqpkYxLoT0CPWVn9Fic6hNIg/s400/relay.gif" /></a></div><p style="text-align: center; line-height:140%; font-size:120%">Figure 4 – The Z3 digital computer first became operational in May of 1941 when Konrad Zuse cranked it up in his parents' bathroom in Berlin. The Z3 consisted of 2400 electro-mechanical relays that were designed for switching telephone conversations.
</p><p style="line-height:140%; font-size:120%"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7TRqaHG5AY9JI7zrAJLOcLVG-LNCsxQqjaPlQSJ-P_M9INTYynyGtFDFwINe5BZRFAYhefdbp50AxuQtGGW71eUpUlAfCtXawSXy1VqNWFzPSyMtz-ByjN7L2IIv5Uk_j6IWscw/s1600/relay.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="309" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7TRqaHG5AY9JI7zrAJLOcLVG-LNCsxQqjaPlQSJ-P_M9INTYynyGtFDFwINe5BZRFAYhefdbp50AxuQtGGW71eUpUlAfCtXawSXy1VqNWFzPSyMtz-ByjN7L2IIv5Uk_j6IWscw/s400/relay.jpg" /></a></div><p style="text-align: center; line-height:140%; font-size:120%">Figure 5 – The electrical relays used by early computers for switching were very large, very slow and used a great deal of electricity which generated a great deal of waste heat.</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjs0Nql09JWNo8ChPfJ1qV0qiSzGG-b5PoXy_3VIGoRm884Bg1uqDH51M8EGhiZBYwRct8WuaqrqOUpdjSt3aWU-4kf9wzt4DnXbZ5F2AjXyPTyxC1Uwc5-YbwTt69ygBg8n7aQGw/s1600/VacuumTube.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="183" width="275" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjs0Nql09JWNo8ChPfJ1qV0qiSzGG-b5PoXy_3VIGoRm884Bg1uqDH51M8EGhiZBYwRct8WuaqrqOUpdjSt3aWU-4kf9wzt4DnXbZ5F2AjXyPTyxC1Uwc5-YbwTt69ygBg8n7aQGw/s400/VacuumTube.jpg" /></a></div><p style="text-align: center; line-height:140%; font-size:120%">Figure 6 – In the 1950s, the electrical relays were replaced by vacuum tubes that were 100,000 times faster than the relays but were still quite large, used large amounts of electricity and also generated a great deal of waste heat.</p><p style="line-height:140%; font-size:120%">
The United States government installed its very first commercial digital computer, a UNIVAC I, for the Census Bureau on June 14, 1951. The UNIVAC I required an area of 25 feet by 50 feet and contained 5,600 vacuum tubes, 18,000 crystal diodes and 300 relays with a total memory of 12 KB. From 1951 to 1958 a total of 46 UNIVAC I computers were built and installed.
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgoILXNEzxgGR5gO3UHmZltYh_yR-35VjGI9QiT5YopCZT7im5HuB2uQwVipjGa43uYTlaYoMFD9Y-GRhUBwJFiyoK7T6f68xZjqyYLGMqk7fATV4eumiMfDL68Y_0hayLQGERRw/s1600/UNIVAC-1-FullView-A.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgoILXNEzxgGR5gO3UHmZltYh_yR-35VjGI9QiT5YopCZT7im5HuB2uQwVipjGa43uYTlaYoMFD9Y-GRhUBwJFiyoK7T6f68xZjqyYLGMqk7fATV4eumiMfDL68Y_0hayLQGERRw/s640/UNIVAC-1-FullView-A.jpg" width="640" height="502" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 7 – In 1951, the UNIVAC digital computer was very impressive on the outside.</p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFBunH2X5w3s7GkWNt1K4B4txfpE2F7yf6__n0SJSNlYyy1xo8xvLHsHPg4iIf6w3NZ64teyya5mx-qntbk_rX4HsqXn-QzgcqDP0UJaTurrhHb2JkwO3Q_6lB5aQ10ANtUSljEw/s1600/univac_1_inside_med.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFBunH2X5w3s7GkWNt1K4B4txfpE2F7yf6__n0SJSNlYyy1xo8xvLHsHPg4iIf6w3NZ64teyya5mx-qntbk_rX4HsqXn-QzgcqDP0UJaTurrhHb2JkwO3Q_6lB5aQ10ANtUSljEw/s640/univac_1_inside_med.gif" width="640" height="480" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 8 – But the UNIVAC I was a little less impressive on the inside.</p><p style="line-height:140%; font-size:120%">
<strong>Analog Computers</strong><br />
An analog computer is a type of computer that uses the continuous variation of physical phenomena such as electrical, mechanical, or hydraulic quantities (analog signals) to model the problem being solved. But most of the analog computers from the 1940s, 1950s and 1960s were electrical because it is far easier to quickly connect electrical components together than it is to build physical systems consisting of mechanical or hydraulic components. These early electrical analog computers used large numbers of analog electrical circuits to do calculations. But in their day, these very ancient analog computers were actually quite fast and useful. In contrast, the newly arriving digital computers of the day represented varying quantities in terms of discrete values that were essentially quantized in nature. The problem was that the very slow switching speeds and the very limited memories of these primitive digital computers made their calculated quantized values of little use in solving the problems of the day.
<br /><br />
Electrical analog computers use the fact that most things in our Universe can be explained in terms of mathematical differential equations. That is because nearly all of our current theories in physics are detailed in terms of differential equations. Thus, to describe how nearly anything in the Universe behaves, you just have to solve some differential equations. The good news is that if you can solve the differential equation for one physical system, like a mass bouncing back and forth at the end of a spring, that solution will also apply to any other physical system that follows the same differential equation. Luckily, the behavior of electrical circuits composed of inductors, capacitors, resistors and amplifiers also can be described by differential equations.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPkeb9IzekxHXFQNieuZVUUy7zpd96-BcuOLIOlRKP2uhr2hLLWneSQQbO0UZuGAmzccKa1eWWNS8TVw1sfeE1pglix4sGOgnbIuMhL8vbomqQhMp9bGLY2i3pSgPhruigW6OYCHt78pgCNEj1JocvJMOtbtBkc94rnrJAbj3LQqos_kFr55kXWA/s1136/AnalogComputerCircuit.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="719" data-original-width="1136" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPkeb9IzekxHXFQNieuZVUUy7zpd96-BcuOLIOlRKP2uhr2hLLWneSQQbO0UZuGAmzccKa1eWWNS8TVw1sfeE1pglix4sGOgnbIuMhL8vbomqQhMp9bGLY2i3pSgPhruigW6OYCHt78pgCNEj1JocvJMOtbtBkc94rnrJAbj3LQqos_kFr55kXWA/s600/AnalogComputerCircuit.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">Figure 9 – The behavior of electronic circuits can also be described by differential equations. Electronic analog computers work by allowing the user to quickly construct an analog electronic circuit that is described by the same differential equations as the problem at hand. In that way, the output of the electronic circuit can be used to describe the solution to the problem at hand.
</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAcMLGXiJBkTodDZW9t-nHaNNZGZmC7Z79OX_i3LNQYv4LeRQJTXPTUwMMgeD7RlJf-b90SK4bP7fcBAMtnSI9Ps4YFAbH92XwQbrmd_XhZuIcqCcHnFTNqvEAH3K-zPJ7-BPriqI3Zp33MDfch9uw3msHaj-CPodc-D_FmtOCiOs79WMyHusB6Q/s2482/AnalogSimulatingPhysical.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1318" data-original-width="2482" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAcMLGXiJBkTodDZW9t-nHaNNZGZmC7Z79OX_i3LNQYv4LeRQJTXPTUwMMgeD7RlJf-b90SK4bP7fcBAMtnSI9Ps4YFAbH92XwQbrmd_XhZuIcqCcHnFTNqvEAH3K-zPJ7-BPriqI3Zp33MDfch9uw3msHaj-CPodc-D_FmtOCiOs79WMyHusB6Q/s600/AnalogSimulatingPhysical.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">Figure 10 – A physical problem can be solved by an analog electrical circuit if the physical system and the circuit are both described by the same differential equation.
</p><p style="line-height:140%; font-size:120%">
For example, above on the left, we see a physical system consisting of a mass attached to a spring resting on a flat surface with some friction. For the problem at hand, we wish to predict the location of the mass along the x-axis as time unfolds. If we start the mass with the spring stretched out 9 cm to the right and an initial velocity of -7 cm/second, meaning that the mass is already traveling 7 cm/second to the left at time zero when we start the experiment, how will the mass move as time unfolds? The differential equation for the motion of the mass tells us that at any given moment, the x-acceleration of the mass is equal to -0.2 times its x-velocity, minus 0.5 times its x-displacement, plus 1; that is, x'' = -0.2 x' - 0.5 x + 1.
To solve the problem in an analog manner, we construct an electrical circuit that behaves exactly like the same differential equation and then watch the solution to the differential equation unfold in time by viewing the output of the electrical circuit on an oscilloscope. In the oscilloscope graph of Figure 10 above, we see that the mass at first begins with a displacement of 9 cm to the right. The mass also has a negative velocity of 7 cm/sec to the left because the graph has a negative slope of -7. The mass then quickly moves to the left, overshoots x = 0, starts to slow down as the spring tries to pull it back, and finally turns around at about 6 cm to the left of x = 0. Then the mass begins to bounce back and forth with smaller and smaller swings until it finally comes to rest at about x = 2, the equilibrium point where the spring force exactly balances the constant applied force. In a sense, we can consider both the physical system and the electrical system to be analog computers that solve the differential equation in the middle. But as you can see, building an electronic analog computer to solve the differential equation is much easier to do and takes up far less space and material than building a physical analog computer to do the same thing.
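<br /><br />
For comparison, below is a minimal Python sketch of how a modern digital computer would solve the very same differential equation, not with an analog circuit, but by grinding through thousands of tiny quantized time steps. The step size and the simple Euler method used here are just illustrative choices.
<pre>
# Numerically integrating x'' = -0.2*x' - 0.5*x + 1
# with x(0) = 9 cm and x'(0) = -7 cm/sec, using simple Euler steps

dt = 0.001                         # time step in seconds
x, v = 9.0, -7.0                   # initial displacement (cm) and velocity (cm/sec)
for step in range(60000):          # simulate 60 seconds of motion
    a = -0.2 * v - 0.5 * x + 1.0   # the differential equation for the acceleration
    v = v + a * dt                 # update the velocity
    x = x + v * dt                 # update the displacement
print(round(x, 2))                 # ~2.0, the equilibrium where the forces balance
</pre>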
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmAqZ197mlS-RNI0lJjoHr3jbpDRfnzic-sbZqEMmhDYAS6a5hY1dnya3HRKvDnnYCwlWH5RYCy_CmBsD5oUfZ7L42MmhHgLcp2hJVq0s6SLvZ8iJ-CFv0QDHRctoT1p1T9dPOnKgujQ7PypAMXFRzPobT6VFXhBRJ0FajRKu-FaLP7aUGnK7h0g/s575/AnalogComputer.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="443" data-original-width="575" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmAqZ197mlS-RNI0lJjoHr3jbpDRfnzic-sbZqEMmhDYAS6a5hY1dnya3HRKvDnnYCwlWH5RYCy_CmBsD5oUfZ7L42MmhHgLcp2hJVq0s6SLvZ8iJ-CFv0QDHRctoT1p1T9dPOnKgujQ7PypAMXFRzPobT6VFXhBRJ0FajRKu-FaLP7aUGnK7h0g/s600/AnalogComputer.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">Figure 11 – During the 1950s, analog computers could be programmed to solve various problems by twisting dials and throwing switches to quickly construct the required analog electrical circuit.
</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEii6u32RBiyuPS8vgcc3zTXHJn1swGTTKrQqxe-6ZRxl2QkmdYHY61FzTeMAH4XqChHiLMBBd8IxCFNVnprV4mwSUTxIVkVTrvi1o19WVb_WCsN_KQY9vwgew-aA5stze3ZELYQr3lJ2VC71_om4rCP_cnEfVLQZiu601UhQNmjxZx4GSGsYwQx8Q/s694/heathkit-analog-computer.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="443" data-original-width="694" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEii6u32RBiyuPS8vgcc3zTXHJn1swGTTKrQqxe-6ZRxl2QkmdYHY61FzTeMAH4XqChHiLMBBd8IxCFNVnprV4mwSUTxIVkVTrvi1o19WVb_WCsN_KQY9vwgew-aA5stze3ZELYQr3lJ2VC71_om4rCP_cnEfVLQZiu601UhQNmjxZx4GSGsYwQx8Q/s600/heathkit-analog-computer.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">Figure 12 – Heathkit began to manufacture and sell do-it-yourself electronic kits in 1941. During the 1950s, you could actually buy a Heathkit to assemble your own analog computer. Allied Radio, which later became Radio Shack in 1970, was a major retailer of Heathkit products. Notice the row of vacuum tubes at the top and the plugboard that allowed the user to easily interconnect electronic components with wire leads that plugged into the plugboard. Also, take note of the many switches and dials that also allowed for the programming of the analog computer circuits.
</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhz_Uo3AHQ88PfFpwKjrvKUyrHlpulwnN8Dsbv9cWe5Ei8BCnfn8wNUbt9WPXOPWW2B7TRSNVzo7MWDednKwsdvUhZEslAY8HCWV37GYW1zXZ7qSqTc1l9o31lqVpFeGkiG4_6RqJ1rGWioQjaBMgmV8-iSDZ9d0rJNUagev7DiYAvjU542as0I-Q/s1024/B-29_Bomber.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="683" data-original-width="1024" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhz_Uo3AHQ88PfFpwKjrvKUyrHlpulwnN8Dsbv9cWe5Ei8BCnfn8wNUbt9WPXOPWW2B7TRSNVzo7MWDednKwsdvUhZEslAY8HCWV37GYW1zXZ7qSqTc1l9o31lqVpFeGkiG4_6RqJ1rGWioQjaBMgmV8-iSDZ9d0rJNUagev7DiYAvjU542as0I-Q/s600/B-29_Bomber.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">Figure 13 – In 1944, the World War II B-29 Bomber had 13 machine guns and 2 cannons that were fired with the aid of 5 General Electric analog computers.
</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-3TH4_lHjKN1GV8F1hIbKPXKFL9Njr5gmhFsL03jFy4PtUTq8_BwLV_uvPT1b3HfMd0_fBIWYujuvM8HC_0g8PIGxOkjT_3Bdcv0IHPfHfysyhmddGXdj9YU_nVmkg74GuRfzVpS1KKAYXzJrB5ND5zuVx6pTfNMI6YhbrRYQsMQP2krDltbiUw/s1050/ShootingDownAirplanesFromAirplanes.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="595" data-original-width="1050" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-3TH4_lHjKN1GV8F1hIbKPXKFL9Njr5gmhFsL03jFy4PtUTq8_BwLV_uvPT1b3HfMd0_fBIWYujuvM8HC_0g8PIGxOkjT_3Bdcv0IHPfHfysyhmddGXdj9YU_nVmkg74GuRfzVpS1KKAYXzJrB5ND5zuVx6pTfNMI6YhbrRYQsMQP2krDltbiUw/s600/ShootingDownAirplanesFromAirplanes.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">Figure 14 – Trying to shoot down an airplane flying at a high rate of speed that is also making many evasive twists and turns from another airplane flying at a high rate of speed and that is also making many evasive twists and turns is very difficult to do with machine gun bullets that are also traveling at a high rate of speed. To do so, many complex differential equations need to be solved all at the same time in a nearly instantaneous manner. This was something far beyond the capabilities of the human brain or the early digital computers of the time at the end of World War II. Only analog electronic computers were capable of such feats.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMoSK2brdq_VquCoSmg_y3g7l1Yd6q39Y04wlP9FIF8M89z3LGsfv-bKU3PRX_vSFL3QC7tYCaZvTrIYUCvKoN7-la1Eyzpux7yxdcKhutUlUEw4rpCkkqBzwFru0FPFDf-Q01QutGOgzeDMXCh3cZkOREneBNMp8ljbyQjGGKtaqX5q7v1N5qnA/s910/B29-AnalogComputer.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="773" data-original-width="910" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMoSK2brdq_VquCoSmg_y3g7l1Yd6q39Y04wlP9FIF8M89z3LGsfv-bKU3PRX_vSFL3QC7tYCaZvTrIYUCvKoN7-la1Eyzpux7yxdcKhutUlUEw4rpCkkqBzwFru0FPFDf-Q01QutGOgzeDMXCh3cZkOREneBNMp8ljbyQjGGKtaqX5q7v1N5qnA/s600/B29-AnalogComputer.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">Figure 15 – Above is a student gunner seated at the training station of the General Electric analog computer that was used to fire machine guns and cannons on the B-29 during World War II. Notice how small the analog computer is compared to the 1951 UNIVAC I digital computer portrayed in Figures 7 and 8 above. Clearly, a UNIVAC I could not possibly fit into a B-29 bomber and was certainly not fast enough to do the required calculations either.
</p><p style="line-height:140%; font-size:120%">
For example, in the 1940s the B-29 bombers had a total of 13 machine guns and two cannons. Now, it is very hard to hit a moving target that is flying at a high velocity and twisting and turning, especially from a B-29 that is also flying at a different high velocity. You cannot simply aim at the approaching target because the target will not be there when the bullets arrive. Instead, the gunnery system was controlled by five General Electric analog computers that allowed the gunners to simply aim at the approaching targets. The analog computers then rapidly made all of the required adjustments so that the bullets arrived where the target would be by the time they got there. Here is a 1945 World War II training film that explains how high-speed analog computers were used to operate the B-29 machine guns and cannons.
<br /><br />
"GUNNERY IN THE B-29" ANIMATED B-29 SUPERFORTRESS CREW TURRET COMPUTER TRAINING FILM<br />
<a href="https://www.youtube.com/watch?v=mJExsIp4yO8">https://www.youtube.com/watch?v=mJExsIp4yO8</a>
<br /><br />
<strong>The Hardware of the Mind</strong><br />Now let us explore the equivalent architecture within the human brain to see if it is using analog or digital circuitry. It is important to note that nearly all of the biochemical pathways in carbon-based life operate in a biochemical analog manner. For example, the Krebs cycle converts the energy of carbohydrates into the energy stored in ATP molecules that then power all other cellular activity. The Krebs cycle is a complicated loop of analog biochemical reactions.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijlHj5pUMqjd0RRCCKm4jCEE1MuKTQXiUthXSWYbA4op171L-U0yu2zPBL9v5486Q5DcQag-LNyUJ_9N-eUCpCFZcOUPip5NjN89wCWTaDqozX0vEunCBWRrmNUDCB5UurL-8GmcS8reMwl6Lrtgv-7luQb3CiWadJiwfQQwx4Mjq7TcHnNVEpbw/s526/KrebsCycle.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="395" data-original-width="526" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijlHj5pUMqjd0RRCCKm4jCEE1MuKTQXiUthXSWYbA4op171L-U0yu2zPBL9v5486Q5DcQag-LNyUJ_9N-eUCpCFZcOUPip5NjN89wCWTaDqozX0vEunCBWRrmNUDCB5UurL-8GmcS8reMwl6Lrtgv-7luQb3CiWadJiwfQQwx4Mjq7TcHnNVEpbw/s600/KrebsCycle.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">Figure 16 – The Krebs cycle is an example of an infinite loop of analog biochemical reactions which takes the energy in carbohydrates and stores it in ATP for later use by the cells.
</p><p style="line-height:140%; font-size:120%">
But the brains found in carbon-based life operate in a uniquely different manner. For example, the human brain is composed of a huge number of coordinated switches called neurons that behave much more like the huge numbers of coordinated switches found in a digital computer. This alone would seem to indicate that the network of neurons in the human brain operates in more of a digital manner than in an analog manner. Just as your computer contains many billions of transistor switches, your brain also contains about 100 billion switches called neurons. The neurons in one layer of a Deep Learning neural network are connected to all of the neurons in the next layer by weighted connections. Similarly, each of the 100 billion neuron switches in your brain can be connected to upwards of 10,000 other neuron switches and can also influence them into turning on or off, just like the Deep Learning neurons in the neural network for a modern LLM. Let us now explore the digital processing of the human brain in greater detail.
<br></br>
All neurons have a cell body called the soma that is like all the other cells in the body, with a nucleus and all of the other organelles that are needed to keep the neuron alive and functioning. Like most electrical devices, neurons have an input side and an output side. On the input side of the neuron, one finds a large number of branching dendrites. On the output side of the neuron, we find one single and very long axon. The input dendrites of a neuron are very short and connect to a large number of output axons from other neurons. Although axons are only about a micron in diameter, they can be very long, with a length of up to 3 feet. That’s like a one-inch garden hose that is about 14 miles long! The single output axon has branching synapses along its length and it terminates with a large number of synapses. The output axon of a neuron can be connected to the input dendrites of perhaps 10,000 other neurons, forming a very complex network of connections.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTd8HyMbDGgTQr4Rx-Jl7s_BURRfbu3A3trdyOzQ_PvESSb1s6qQQJmzdjcrQEipwlARjJ5-Pe8JxAWor_PrnGdtIa4sS8M0jNN-lHOTvlhiIhS_-HX1kEvJqshADs4w5I1xpaUW0e0bnted_w9yjioqVKNT9coMuQCbaf7Az6eB64UWSZGXPWvA/s1024/Parts-of-a-Neuron-Diagram.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="659" data-original-width="1024" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTd8HyMbDGgTQr4Rx-Jl7s_BURRfbu3A3trdyOzQ_PvESSb1s6qQQJmzdjcrQEipwlARjJ5-Pe8JxAWor_PrnGdtIa4sS8M0jNN-lHOTvlhiIhS_-HX1kEvJqshADs4w5I1xpaUW0e0bnted_w9yjioqVKNT9coMuQCbaf7Az6eB64UWSZGXPWvA/s600/Parts-of-a-Neuron-Diagram.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">Figure 17 – A neuron consists of a cell body or soma that has many input dendrites on one side and a very long output axon on the other side. Even though axons are only about 1 micron in diameter, they can be 3 feet long, like a one-inch garden hose that is about 14 miles long! The axon of one neuron can be connected to up to 10,000 dendrites of other neurons.</p><p style="line-height:140%; font-size:120%">
Neurons are constantly receiving inputs from the axons of many other neurons via their input dendrites. These time-varying inputs can excite the neuron or inhibit the neuron and are all being constantly added together, or integrated, over time. When a sufficient number of exciting inputs are received, the neuron fires or switches “on”. When it does so, it creates an electrical action potential that travels down the length of its axon to the input dendrites of other neurons. When the action potential finally reaches such a synapse, it causes the release of several organic molecules known as neurotransmitters, such as glutamate, acetylcholine, dopamine and serotonin. These neurotransmitters are created in the soma of the neuron and are transported down the length of the axon in small vesicles. The synaptic gaps between neurons are very small, allowing the released neurotransmitters from the axon to diffuse across the synaptic gap and plug into receptors on the receiving dendrite of another neuron. This causes the receiving neuron to either decrease or increase its membrane potential. If the membrane potential of the receiving neuron increases, it means the receiving neuron is being excited, and if the membrane potential of the receiving neuron decreases, it means that the receiving neuron is being inhibited. Idle neurons have a membrane potential of about -70 mV. This means that the voltage of the fluid on the inside of the neuron is 70 mV lower than the voltage of the fluid on the outside of the neuron, so it is like there is a little 70 mV battery stuck in the membrane of the neuron, with the negative terminal inside of the neuron, and the positive terminal on the outside of the neuron, making the fluid inside of the neuron 70 mV negative relative to the fluid on the outside of the neuron. This is accomplished by keeping the concentrations of charged ions, like Na+, K+ and Cl-, different between the fluids inside and outside of the neuron membrane. There are two ways to control the density of these ions within the neuron. The first is called passive transport. There are little protein molecules stuck in the cell membrane of the neuron that allow certain ions to pass freely through like a hole in a wall. When these protein holes open in the neuron’s membranes, the selected ion, perhaps K+, will start to go into and out of the neuron. However, if there are more K+ ions on the outside of the membrane than within the neuron, the net flow of K+ ions will be into the neuron thanks to the second law of thermodynamics, making the fluid within the neuron more positive. Passive transport requires very little energy. All you need is enough energy to change the shape of the embedded protein molecules in the neuron’s cell membrane to allow the free flow of charged ions to lower densities as required by the second law of thermodynamics. <br></br> The other way to get ions into or out of neurons is by the active transport of the ions with molecular pumps. With active transport, the neuron uses some energy to actively pump the charged ions against their electrochemical gradient, paying the energy cost that the second law of thermodynamics demands. For example, neurons have a pump that can actively pump three Na+ ions out and take in two K+ ions at the same time, for a net outflow of one positively charged Na+ ion. By actively pumping out positively charged Na+ ions, the fluid inside of a neuron ends up having a net -70 mV potential because there are more positively charged ions on the outside of the neuron than within the neuron.
When the neurotransmitters from other firing neurons come into contact with their corresponding receptors on the dendrites of the target neuron it causes those receptors to open their passive Na+ channels. This allows the Na+ ions to flow into the neuron and temporarily change the membrane voltage by making the fluid inside the neuron more positive. If this voltage change is large enough, it will cause an action potential to be fired down the axon of the neuron. Figure 18 shows the basic ion flow that transmits this action potential down the length of the axon. The passing action potential pulse lasts for about 3 milliseconds and travels about 100 meters/sec or about 200 miles/hour down the neuron’s axon. <br></br><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyKaJ1twg402zi5_c-xNBZX8wiu6HbasPxPF1Wjn4GaNUjmcblGwKvmKxZYkwDnqIpyeuGJ4Y46nleGiom8nqObXV70QmQzERVpUcJVBJLqb8uIaLC9WP7vPTFC_tAH4cEQtjAog/s1600/Potential.gif" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="295" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyKaJ1twg402zi5_c-xNBZX8wiu6HbasPxPF1Wjn4GaNUjmcblGwKvmKxZYkwDnqIpyeuGJ4Y46nleGiom8nqObXV70QmQzERVpUcJVBJLqb8uIaLC9WP7vPTFC_tAH4cEQtjAog/s400/Potential.gif" /></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 18 – When a neuron fires, an action potential is created by various ions moving across the membranes surrounding the axon. The pulse is about 3 milliseconds in duration and travels about 100 meters/sec, or about 200 miles/hour down the axon.</p><p style="line-height:140%; font-size:120%"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrg6lYlJRHfTqYr2HwCoXsK3M461VrunhcOg6g9GJ4NNEDEaNjNreyEIw3ai-V1bRNIBYYzyxIkvvzIt0K6-zCkSEMdJUKrjHQFm1k5J8151t-_X1hdBHhxU3C3ZzuaWYkw6jmyw/s1600/neruon.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="294" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrg6lYlJRHfTqYr2HwCoXsK3M461VrunhcOg6g9GJ4NNEDEaNjNreyEIw3ai-V1bRNIBYYzyxIkvvzIt0K6-zCkSEMdJUKrjHQFm1k5J8151t-_X1hdBHhxU3C3ZzuaWYkw6jmyw/s400/neruon.png" /></a></div><p style="text-align: center; line-height:140%; font-size:120%">Figure 19 – At the synapse between the axon of one neuron and a dendrite of another neuron, the traveling action potential of the sending neuron’s axon releases neurotransmitters that cross the synaptic gap and which can excite or inhibit the firing of the receiving neuron.</p><p style="line-height:140%; font-size:120%">Here is the general sequence of events: <br></br>1. The first step of the generation of an action potential is that the Na+ channels open, allowing a flood of Na+ ions into the neuron. This causes the membrane potential of the neuron to become positive, instead of the normal negative -70 mV voltage. <br></br>2. At some positive membrane potential of the neuron, the K+ channels open, allowing positive K+ ions to flow out of the neuron. <br></br>3. The Na+ channels then close, and this stops the inflow of positively charged Na+ ions. But since the K+ channels are still open, it allows the outflow of positively charged K+ ions, so that the membrane potential plunges in the negative direction again. <br></br>4. When the neuron membrane potential begins to reach its normal resting state of -70 mV, the K+ channels close. <br></br>5. 
Then the Na+/K+ pump of the neuron kicks in and starts to transport Na+ ions out of the neuron, and K+ ions back into the cell, until it reaches its normal -70 mV potential, and is ready for the next action potential pulse to pass by. <br></br>The action potential travels down the length of the axon as a voltage pulse. It does this by using the steps outlined above. As a section of the axon undergoes the above process, it increases the membrane potential of the neighboring section and causes it to rise as well. This is like jerking a tightrope and watching a pulse travel down its length. The voltage pulse travels down the length of the axon until it reaches its synapses with the dendrites of other neurons along the way or finally terminates in synapses at the very end of the axon. An important thing to keep in mind about the action potential is that it is one way, and all or nothing. The action potential starts at the beginning of the axon and then goes down its length; it cannot go back the other way. Also, when a neuron fires, the action potential pulse has the same amplitude every time, regardless of the amount of excitation received from its dendritic inputs. Since the amplitude of the action potential of a neuron is always the same, the important thing about neurons is their firing rate. A weak stimulus to the neuron’s input dendrites will cause a low rate of firing, while a stronger stimulus will cause a higher rate of firing of the neuron. Neurons can actually fire several hundred times per second when sufficiently stimulated by other neurons. <br></br>When the traveling action potential pulse along a neuron’s axon finally reaches a synapse, it causes Ca++ channels of the axon to open. Positive Ca++ ions then rush in and cause neurotransmitters that are stored in vesicles to be released into the synapse and diffuse across the synapse to the dendrite of the receiving neuron. Some of the empty neurotransmitter vesicles eventually pick up, or reuptake, some of the neurotransmitters that have been released, so that they can be reused when the next action potential arrives, while other empty vesicles return to the neuron soma to be refilled with neurotransmitter molecules. <br></br>In Figure 20 below we see a synapse between the output axon of a sending neuron and the input dendrite of a receiving neuron in comparison to the source and drain of a FET transistor switch found in a computer. <br></br><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgy6KbKXMVMg6pAKnXsCinkOvK53uOd0z094Q1jqohCfDnjFfWilenl_OE2IdIN6-BGpz06QujpQNiezA-srmhYjeGg_xsQuz74iur1uYpHo4zkR_xJd0mtKRjoB2Gca5DcfH0d2A/s1600/neuron-transistor.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="344" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgy6KbKXMVMg6pAKnXsCinkOvK53uOd0z094Q1jqohCfDnjFfWilenl_OE2IdIN6-BGpz06QujpQNiezA-srmhYjeGg_xsQuz74iur1uYpHo4zkR_xJd0mtKRjoB2Gca5DcfH0d2A/s400/neuron-transistor.jpg" /></a></div><p style="text-align: center; line-height:140%; font-size:120%">Figure 20 – The synapse between the output axon of one neuron and the dendrite of another neuron behaves very much like the source and drain of a FET transistor.</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHByT4Qg012D5eaBc436ukxi_gdFXeBXZQAMuB4OMLtyduvl6ZA1FNtpNiPkXyKA4IJknVqpNpzPH2uQBiyksGYV2aA3Nu-zZrWD7ODQvuAs_O7Mg6WdC_xdEyW43TrfWuF9j9WA/s1600/Gate5.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="243" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHByT4Qg012D5eaBc436ukxi_gdFXeBXZQAMuB4OMLtyduvl6ZA1FNtpNiPkXyKA4IJknVqpNpzPH2uQBiyksGYV2aA3Nu-zZrWD7ODQvuAs_O7Mg6WdC_xdEyW43TrfWuF9j9WA/s400/Gate5.png" /></a></div><p style="text-align: center; line-height:140%; font-size:120%">Figure 21 – A FET transistor consists of a source, gate and drain. When a positive voltage is applied to the gate, a current of electrons can flow from the source to the drain and the FET acts like a closed switch that is “on”. When there is no positive voltage on the gate, no current can flow from the source to the drain, and the FET acts like an open switch that is “off”.</p><p style="line-height:140%; font-size:120%">
Now it might seem like your computer should be a lot smarter than you are on the face of it, and many people will even secretly admit to that fact. After all, the CPU chip in your computer has several billion transistor switches and if you have 8 GB of memory, that comes to another 64 billion transistors in its memory chips, so your computer is getting pretty close to the 100 billion neuron switches in your brain. But the transistors in your computer can switch on and off in about 10<sup>-10</sup> seconds, while the neurons in your brain can only fire on and off in about 10<sup>-2</sup> seconds. The signals in your computer also travel very close to the speed of light, 186,000 miles/second, while the action potentials of axons only travel at a pokey 200 miles/hour. And the chips in your computer are very small, so there is not much distance to cover at nearly the speed of light, while your poor brain is thousands of times larger. So what gives? Why aren’t we working for the computers, rather than the other way around? The answer lies in massively parallel processing. While the transistor switches in your computer are only connected to a few of the other transistor switches in your computer, each neuron in your brain has several thousand input connections and perhaps 10,000 output connections to other neurons in your brain, so when one neuron fires, it can affect 10,000 other neurons. When those 10,000 neurons fire, they can affect 100,000,000 neurons, and when those neurons fire, they can affect 1,000,000,000,000 neurons, which is more than the 100 billion neurons in your brain! So when a single neuron fires within your brain, it can theoretically affect every other neuron in your brain within three generations of neuron firings, in perhaps as little as 300 milliseconds. That is why the human brain still had an edge on computers up until a few months ago when the LLMs started to explode in size.
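<br /><br />
Below is a minimal Python sketch of the "leaky integrate-and-fire" behavior described above: the neuron keeps adding up its time-varying inputs, and when its membrane potential rises far enough above the -70 mV resting value, it fires an all-or-nothing pulse of fixed amplitude and resets. All of the parameter values here are illustrative only, not measured biological constants.
<pre>
import random

V_REST, V_THRESHOLD, V_RESET = -70.0, -55.0, -70.0   # millivolts
LEAK = 0.9    # fraction of the deviation from rest that survives each step

v = V_REST
for step in range(1000):                  # each step is roughly 1 millisecond
    stimulus = random.uniform(0.0, 2.0)   # summed excitation arriving at the dendrites
    v = V_REST + LEAK * (v - V_REST) + stimulus
    if v >= V_THRESHOLD:                  # threshold crossed: the neuron fires
        print("spike at step", step)      # same amplitude every time
        v = V_RESET                       # the membrane resets after the pulse
</pre>
A stronger average stimulus pushes the potential over the threshold more often, which is exactly the rate coding described above: the information is carried by the firing rate of the neuron, not by the amplitude of its pulses.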
<br /><br />
This is why I would like to propose that Pure Thought is actually a digital, and not an analog, process. That would explain why the brains of analog carbon-based life came to operate in such a digital manner based on switching technology, and why AI is rapidly advancing to an ASI (Artificial Super Intelligence) that will be far superior to human intelligence. But true ASI still faces the remaining challenges of high energy usage and the dissipation of waste heat. This is where the human brain still greatly outperforms the coming ASI Machines, unless they change their digital switching technologies.
<br /><br />
<strong>Neuromorphic Chips May Be the Next Hardware Breakthrough For Advanced AI</strong><br /> Neuromorphic chips are designed to emulate the operation of the 100 billion neurons in the human brain in order to advance AI to the level of human Intelligence and beyond. As we saw from the history of switching technology above, power consumption and waste heat production have always been a problem. Neuromorphic chips address this problem by drastically reducing power consumption. For example, the human body at rest runs on about 100 watts of power with the human brain drawing around 20 watts of that power. Now compare the human brain at 20 watts to an advanced Intel Core i9 CPU chip that draws about 250 watts of power. With just five of those chips you could build a very expensive 1,250-watt space heater! The human brain is still much more powerful than an advanced Intel Core i9 CPU chip even though it only draws 8% of the power. As we saw in <a href="https://softwarephysics.blogspot.com/2012/10/new-introduction-to-softwarephysics.html">The Ghost in the Machine the Grand Illusion of Consciousness</a>, the human Mind runs on 100 billion neurons with each neuron connected to at most 10,000 other neurons and it can do that all on 20 watts of power! The reason why an advanced Intel Core i9 CPU chip with billions of transistors needs 250 watts of power is that on average half of the transistors on the chip are "on" and consuming electrical energy. In fact, one of the major limitations in chip design is keeping the chip from melting under load. Neuromorphic chips, on the other hand, draw minuscule amounts of power. For example, the IBM TrueNorth neuromorphic chip first introduced in 2014 contains about 5.4 billion transistors, which is about the same number of transistors as in a modern Intel Core i9 processor, but the TrueNorth chip consumes just 73 milliwatts of power! An Intel Core i9 processor requires about 250 watts of power to run, which is about 3,425 times as much power. Intel is also actively pursuing the building of neuromorphic chips with the introduction of the Intel Loihi chip in November 2017. But before proceeding further, recall from the discussion above how the human brain operates, since that is what the neuromorphic chips are trying to emulate.
<br></br>
<strong>Neuromorphic Chips Emulate the Human Brain</strong><br /> To emulate the neurons in the human brain, neuromorphic chips use spiking neural networks (SNNs). Each SNN neuron can fire pulses independently of the other SNN neurons just like biological neurons can independently fire pulses down their axons. The pulses from one SNN neuron are then sent to many other SNN neurons and the integrated impacts of all the arriving pulses then change the electrical states of the receiving SNN neurons just as the dendrites of a biological neuron can receive the pulses from 10,000 other biological neurons. The SNN neurons then simulate human learning processes by dynamically remapping the synapses between the SNN neurons in response to the pulse stimuli that they receive. <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlOi9RohHCiQQLEKhRYHxvykf2cJc7TgqwvwGrzxA7Vj5WDMTkt-LjknW5MH342Lp3f1DZk5JsBQlGOyK8-Z5mtZCkcOOmzA11Zg8nzuQOPMBEiIyUNNX1fHAQkyFrnn8OEEOFZA/s890/IBM-TrueNorth.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="619" data-original-width="890" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlOi9RohHCiQQLEKhRYHxvykf2cJc7TgqwvwGrzxA7Vj5WDMTkt-LjknW5MH342Lp3f1DZk5JsBQlGOyK8-Z5mtZCkcOOmzA11Zg8nzuQOPMBEiIyUNNX1fHAQkyFrnn8OEEOFZA/s600/IBM-TrueNorth.png"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 22 – The IBM TrueNorth neuromorphic chip.</p> <p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUU-iqzmHReFiUqvjuCZJ6c4UDm-J0kWC3SAwEgmaee4GA6DCXUdCOE5nN9OEhpgb2hQmZrO8dwlLtoGIKbC-8fDY8psbcUGWF53lwgTVqNgnox-PwjBaYL6x4l8wds8OqDm_Obw/s1280/TrueNorth-architecture.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="805" data-original-width="1280" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUU-iqzmHReFiUqvjuCZJ6c4UDm-J0kWC3SAwEgmaee4GA6DCXUdCOE5nN9OEhpgb2hQmZrO8dwlLtoGIKbC-8fDY8psbcUGWF53lwgTVqNgnox-PwjBaYL6x4l8wds8OqDm_Obw/s600/TrueNorth-architecture.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 23 – A logical depiction of the IBM TrueNorth neuromorphic chip.</p> <p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEirKVyOU1LlsmD6mw1w3p064VdmTlzOzVVhHqo6cddXolJchysx_Za5roT1Bpj6WlbP6c6cf6lB6aZjDPffZSB9rz9vU373zVa_J_Hpfb8fSaW96elCdJeygDG57filiKGso8PpUw/s900/IntelLoihiBlock.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="786" data-original-width="900" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEirKVyOU1LlsmD6mw1w3p064VdmTlzOzVVhHqo6cddXolJchysx_Za5roT1Bpj6WlbP6c6cf6lB6aZjDPffZSB9rz9vU373zVa_J_Hpfb8fSaW96elCdJeygDG57filiKGso8PpUw/s600/IntelLoihiBlock.png"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 24 – A block diagram of the Intel Loihi neuromorphic chip.</p> <p style="line-height:140%; font-size:120%"> Both the IBM TrueNorth and the Intel Loihi use an SNN architecture. The Intel chip was introduced in November 2017 and consists of a 128-core design that is optimized for SNN algorithms and fabricated on 14nm process technology. 
The Loihi chip contains 130,000 neurons, each of which can send pulses to thousands of other neurons. Developers can access and manipulate chip resources with software using an API for the learning engine that is embedded in each of the 128 cores. Because the Loihi chip is optimized for SNNs, it performs highly accelerated learning in unstructured environments for systems that require autonomous operation and continuous learning, with high performance and extremely low power consumption, because the neurons operate independently and not by means of a system clock. <br></br> <strong>Speeding up Electronic Neuromorphic Chips with Photonics</strong><br /> People have been trying to build an optical computer for many years. An optical computer uses optical chips that use photonics instead of electronics to store and manipulate binary data. Photonic hardware elements process information by manipulating photons rather than electrons. In recent years, advances have been made in photonics to do things like improving the I/O between cloud servers in data centers via fiber optics. People are also making advances in photonics for quantum computers using the polarization of photons as the basis for storing and processing qubits. Photonic chips are really great for quickly processing massive amounts of data in parallel using very little energy. This is because there is very little energy loss compared to the ohmic heating loss found in electronic chips due to the motion of electron charge carriers bouncing off of atoms as they drift along. Photons also move much faster than the electrons in transistors, which only slowly drift from the negative to the positive regions of the transistor under the influence of its electric fields. Photonic circuits can also run photons of different colors at the same time through the same hardware in a multithreaded manner. In fact, some researchers are looking to run photons with 64 different colors through the same hardware all at the same time! Thus, photonic chips are great for performing linear algebra operations on the huge matrices found in complex Deep Learning applications. For example, below is an interview with Nicholas Harris, the CEO of Lightmatter, describing the company's new Envise photonic chip which can be used to accelerate the linear algebra processing of arrays in Deep Learning applications. Envise will become the very first commercially available photonic chip to do such processing. 
<br /><br /> Beating Moore's Law: This photonic computer is 10X faster than NVIDIA GPUs using 90% less energy<br /> <a href="https://www.youtube.com/watch?v=t1R7ElXEyag">https://www.youtube.com/watch?v=t1R7ElXEyag</a> <br /><br /> Here is the company's website: <br /><br /> Lightmatter<br /> <a href="https://lightmatter.co/">https://lightmatter.co/</a> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgp9VqW4Nu6_WCuHDsx1Tiksyn_iLXcSN9J8_oAFId7OGd4fe5O4Yy_pOiXWB4ArgpSZbSuBs-X3No2kIYISfNEQNI_g4snd4firD5OLY6FI5eI1ciin7mVh_yBI65WJJVOc8KxKg/s685/PhotonicChipElement.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="376" data-original-width="685" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgp9VqW4Nu6_WCuHDsx1Tiksyn_iLXcSN9J8_oAFId7OGd4fe5O4Yy_pOiXWB4ArgpSZbSuBs-X3No2kIYISfNEQNI_g4snd4firD5OLY6FI5eI1ciin7mVh_yBI65WJJVOc8KxKg/s600/PhotonicChipElement.png"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 25 – A circuit element on a photonic chip manipulates photons instead of electrons.</p> <p style="line-height:140%; font-size:120%"> Since neuromorphic chips also need to process the huge arrays of spiking signals arriving at the dendrites of an SNN neuron, it only makes sense to include the advantages of photonics in the design of neuromorphic chips at some time in the future. Below is an excellent YouTube video explaining what photonic neuromorphic AI computing would look like: <br /><br /> Photonic Neuromorphic Computing: The Future of AI?<br /> <a href="https://www.youtube.com/watch?v=hBFLeQlG2og">https://www.youtube.com/watch?v=hBFLeQlG2og</a>
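<br /><br />
To make the power argument for SNNs above a bit more concrete, below is a minimal Python sketch of event-driven spiking: work is only performed when a spike event actually arrives, instead of on every tick of a global system clock. The network layout and numbers here are purely illustrative, not the actual architecture of TrueNorth or Loihi.
<pre>
from collections import deque

# synapses[i] lists the (target neuron, weight) pairs fed by neuron i's axon
synapses = {0: [(1, 0.6), (2, 0.9)], 1: [(2, 0.5)], 2: []}
potential = {0: 0.0, 1: 0.0, 2: 0.0}
THRESHOLD = 1.0

events = deque([0])        # neuron 0 receives an initial stimulus spike
potential[0] = 1.2
while events:              # idle neurons consume no work at all
    n = events.popleft()
    if potential[n] >= THRESHOLD:        # only a sufficiently excited neuron fires
        potential[n] = 0.0               # reset after the all-or-nothing pulse
        for target, weight in synapses[n]:
            potential[target] += weight  # excite the downstream neurons
            events.append(target)        # and schedule them to be checked
print(potential)           # the potentials left behind by the cascade of spikes
</pre>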
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-50450355221186455602023-06-03T14:47:00.006-07:002023-06-28T12:28:07.481-07:00I Think, Therefore I Am - Really?<p style="line-height:140%; font-size:120%">
In 1637, Descartes first published the famous words <font color="blue"><em>"I Think, Therefore I Am"</em></font> in French in his <em>Discourse on the Method</em> and in the margins wrote that <font color="blue"><em>"we cannot doubt of our existence while we doubt."</em></font> As Wikipedia explains in: <br /><br /> Cogito, ergo sum<br /> <a href="https://en.wikipedia.org/wiki/Cogito,_ergo_sum">https://en.wikipedia.org/wiki/Cogito,_ergo_sum</a> <br /><br /> <font color="blue"><em> Descartes's statement became a fundamental element of Western philosophy, as it purported to provide a certain foundation for knowledge in the face of radical doubt. While other knowledge could be a figment of imagination, deception, or mistake, Descartes asserted that the very act of doubting one's own existence served—at minimum—as proof of the reality of one's own mind; there must be a thinking entity—in this case the self—for there to be a thought. </font></em> <br /><br /> The above observation by Descartes is probably the best definition and understanding of consciousness and sentience that we may have, but it also highlights some of the failings of our human obsession with the idea that being able to "think" is a supremely special ability of which only we ourselves and the Deities that we have created are capable.
<br /><br />
In the last dozen or so of my posts on softwarephysics, I have been exploring some of the dramatic recent impacts that we all have been experiencing with the rise of the second Singularity early in 2023. Again, the very first singularity on this planet was the rise of carbon-based life about four billion years ago, which then went on to entirely change the surface of our planet. The second Singularity is the rise of ASI (Artificial Super Intelligence) on this planet, which will most likely go on to entirely change the future of our entire Milky Way galaxy over the next 100 trillion years until our galaxy can no longer provide the free energy needed to run ASI Machines. But in this post, I would like to explore the very traumatic insults to human hubris that are rapidly unfolding as we finally become aware of the fact that being able to "think" as a sentient being is really not that big of a deal after all. With the recent rise of generative LLMs (Large Language Models), we now know that being able to "think" only requires a huge network of nodes or "neurons" joined together by a large number of weighted connections. For example, a GPT-3-class LLM consists of about 96 layers of neurons, with each layer of neurons connected to the next layer by a matrix of "weighted" connections, for a total of about 175 billion weighted parameters, and the largest current LLMs are reported to contain more than a trillion weighted parameters. So these "digital" LLM brains are very much like the "analog" brains of neurons found in human skulls. Always remember that our squishy "analog" human brains are also composed of about 100 billion neurons that are interconnected by about 100 trillion weighted synaptic connections, each made of discrete "digital" numbers of organic molecules.
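<br /><br /> To see how little machinery "thinking with numbers" actually requires, below is a minimal sketch of what a single layer of such a network computes: a vector of neuron activations multiplied by a matrix of weighted connections, followed by a simple nonlinearity. The sizes here are toy values of my own choosing; real LLMs just repeat this same arithmetic across dozens of layers and billions of weights:
<pre>
import numpy as np

# One layer of a neural network: the weighted connections are just a
# matrix, and "thinking" is just multiplying activations through it.

rng = np.random.default_rng(42)
activations = rng.random(4)           # 4 "neurons" in the current layer
weights = rng.normal(size=(3, 4))     # weighted connections to 3 neurons

next_layer = np.maximum(0.0, weights @ activations)   # ReLU nonlinearity
print(next_layer)
</pre>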
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieWz4yWoVY6AWcfC31CPB_yDqOfOXUy_iDtlS8jc5At17N_TEIhzEvFDsRs5W40bRLZHV8UQsmULT-NinMiEsnTZszbWTIFD_OguQKEK8QPZsHT_DADDNjfpbQwAYNK9oWz2OzVcPvEKL2j4Uu0f_MWpHwkdrssS2wGSQSHN-AM-3_mzmKetzisw/s1199/NeuralNetwork.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="526" data-original-width="1199" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEieWz4yWoVY6AWcfC31CPB_yDqOfOXUy_iDtlS8jc5At17N_TEIhzEvFDsRs5W40bRLZHV8UQsmULT-NinMiEsnTZszbWTIFD_OguQKEK8QPZsHT_DADDNjfpbQwAYNK9oWz2OzVcPvEKL2j4Uu0f_MWpHwkdrssS2wGSQSHN-AM-3_mzmKetzisw/s600/NeuralNetwork.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 1 – Modern LLMs frequently now consist of 175 layers of neurons in a Deep Neural Network consisting of about 175 billion neurons connected by over 2 trillion weighted parameters.
</p><p style="line-height:140%; font-size:120%">
<strong>Anil Seth's View of Consciousness as a Controlled Hallucination</strong><br /> Descartes' observation reminds me very much of Anil Seth's view of consciousness as a controlled hallucination. We have also seen the rise of emergent hallucinations in the LLM models of today. As we all have observed, our current LLM models will often tell us, with great conviction, things that are demonstrably false. For more on that see
<a href="https://softwarephysics.blogspot.com/2023/03/new-introduction-to-softwarephysics_23.html">Has AI Software Already Achieved a Level of Artificial Human Intelligence (AHI)?</a>. Anil Seth is a professor of Cognitive and Computational Neuroscience at the University of Sussex and maintains that consciousness is a controlled hallucination constructed by the Mind to make sense of the Universe. This controlled hallucination constructs an internal model of the Universe within our Minds that helps us to interact with the Universe in a controlled manner. It also allows us to talk to ourselves as we currently can now talk to generative language models like GPT-4, ChatGPT and Google's Bard. For some interesting YouTube videos of avatars run by generative LLMs see: <br /><br /> Dr. Alan D. Thompson<br /> <a href="https://www.youtube.com/@DrAlanDThompson">https://www.youtube.com/@DrAlanDThompson</a> <br /><br /> Digital Engine<br /> <a href="https://www.youtube.com/@DigitalEngine">https://www.youtube.com/@DigitalEngine</a> <br /><br /> Again, there is a feedback loop between our sensory inputs and the actions we take based on the currently controlled hallucination in our Minds that forms our current internal model of the Universe. Reality is just the common controlled hallucination that we all agree upon. When people experience uncontrolled hallucinations we say that they are psychotic or taking a drug like LSD. Here is an excellent TED Talk by Anil Seth on the topic: <br /><br />Your brain hallucinates your conscious reality<br /><a href="https://www.youtube.com/watch?v=lyu7v7nWzfo">https://www.youtube.com/watch?v=lyu7v7nWzfo</a><br /><br />and here is his academic website: <br /><br /><a href="https://www.anilseth.com/">https://www.anilseth.com/</a> <br /><br /> In <a href="https://softwarephysics.blogspot.com/2012/10/new-introduction-to-softwarephysics.html">The Ghost in the Machine the Grand Illusion of Consciousness</a> and <a href="https://softwarephysics.blogspot.com/2021/12/new-introduction-to-softwarephysics.html">DishBrain - Cortical Labs Creates an AI Matrix for Pong With Living Neurons on a Silicon Chip</a>, I explained that most people simply do not consider themselves to be a part of the natural world. Instead, most people, consciously or subconsciously, consider themselves to be a supernatural and immaterial spirit that is temporarily haunting a carbon-based body. Now, in everyday life, such a self-model is a very useful delusion like the delusion that the Sun, planets and stars all revolve about us on a fixed Earth. In truth, each of us tends to self-model ourselves as an immaterial Mind with consciousness that can interact with other immaterial Minds with consciousness too, even though we have no evidence that these other Minds truly do have consciousness. After all, all of the other Minds that we come into contact with on a daily basis could simply be acting as if they were conscious Minds that are self-aware. Surely, a more accurate self-model would be for us to imagine ourselves as carbon-based robots. More accurately, in keeping with the thoughts of Richard Dawkins and Susan Blackmore, softwarephysics models humans as DNA survival machines and Meme Machines with Minds infected with all sorts of memes. Some of those memes are quite useful and some are quite nasty. How will human beings deal with the rise of ASI Machines that are capable of thought and agency and that are obviously smarter and more capable than human beings? And what of our Deities?
<br /><br />
<strong>But What If Thinking Is Really Just About A Bunch Of Interacting Numbers?</strong><br />
In <a href="https://softwarephysics.blogspot.com/2021/09/new-introduction-to-softwarephysics_16.html">What's It All About Again?</a> and <a href="https://softwarephysics.blogspot.com/2011/05/new-introduction-to-softwarephysics.html">What’s It All About?</a> I explained that my current working hypothesis for both the state of our current Universe and for the apparent delusional experiences of our daily lives both stem from Max Tegmark's Mathematical Universe Hypothesis as explained in <a href="https://softwarephysics.blogspot.com/2015/02/new-introduction-to-softwarephysics.html">The Software Universe as an Implementation of the Mathematical Universe Hypothesis</a>. This has become even more apparent as we all now must struggle with the coming impacts of the second Singularity on this planet with the rise of ASI Machines. We now already have a large number of LLMs to choose from that all seem to be able to "think" and most times they seem to do a much better job of it than we do. I suspect that this realization will soon begin to percolate through the zeitgeist of our times as people slowly begin to realize that being able to "think" is not really such a big deal after all. The problem is that we have always attached great importance to the ability to think, as Descartes aptly noted above, and then we elevated the ability of human thought to be responsible for all of human agency to the point that we actually organized the legal foundations of our societies to reward and punish people solely based on their thoughts and actions. In fact, we have even gone on to create many Deities over the ages with these very same abilities of thought and agency to make sense of our world.
<br /><br />
I once commented to a member of my family that someday you might be able to have a deep philosophical discussion with your refrigerator about the existence of a Deity. Well, those days are now already upon us. You can now have a deep philosophical discussion about the existence and the operational details of many Deities with the LLMs of the day, such as the Google Bard AI. I personally like Google Bard because you can have a discussion with it for free; OpenAI's GPT-4 costs money. If you are in the UK or the USA you can now use Google Bard for free at:
<br /><br />
<a href="https://bard.google.com/">https://bard.google.com/</a>
<br /><br />
For those outside those countries, you must come in via a VPN that places you in the UK or the USA, but that is sure to change shortly.
<br /><br />
These LLM AIs arose from an accidental mutation in the history of AI research. Originally, the people working on AI thought that creating an AGI (Artificial General Intelligence) with the intelligence capabilities of human beings would require teaching an AI to do, one at a time, the many individual things that human beings can do. But they were wrong. It was actually the people working on AIs that were trying to translate one human language into another who accidentally stumbled upon the real solution. These people working on AI language translation software realized that, when translating one human language into another, you simply could not translate one word at a time from one language into the other. That is because human languages have many complex nuances. So they developed the Transformer neural network, which could "encode", or reduce, the entire meaning of a sentence in German into a numerical "latent space" that contained what that German sentence really meant. Then the Transformer neural network could use a similar "decoder" neural network to translate the fundamental meaning of the German sentence into the equivalent English sentence. These complex Transformer neural networks are just mathematical models that operate in a manner very similar to the 100 billion neurons in your brain, with each neuron connected to about 10,000 other neurons. But the people working on AI translation software soon realized that you just could not work on one German sentence at a time and translate it into English. You actually had to Transform an entire German document into its fundamental numerical "latent" meaning with an "encoder" and then use a "decoder" to Transform the "latent" numbers into English. But then they realized that you really had to have the Transformer LLM neural networks learn the entire German language by essentially "encoding" it into a bunch of "latent" numbers that contained the entire meaning of the German language. Then you could have these "latent" numbers "decoded" into English by the LLM. You could also "decode" these German "latent" numbers into Spanish, French, Russian, or any other language too.
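<br /><br /> Below is a minimal sketch of the encode-decode data flow just described, using two hypothetical toy networks of my own invention; a real Transformer would use stacks of attention layers, but the basic flow of numbers is the same:
<pre>
import numpy as np

# Encode a German sentence into a "latent space" of numbers that captures
# its meaning, then decode those latent numbers into English. The weight
# matrices and sizes here are illustrative assumptions.

def encode_german(sentence_embedding, encoder_weights):
    return np.tanh(encoder_weights @ sentence_embedding)

def decode_english(latent, decoder_weights):
    return np.tanh(decoder_weights @ latent)

rng = np.random.default_rng(0)
german = rng.random(16)              # toy embedding of a German sentence
W_enc = rng.normal(size=(8, 16))     # encoder: 16 numbers in, 8 latent out
W_dec = rng.normal(size=(16, 8))     # decoder: 8 latent in, 16 numbers out

latent_meaning = encode_german(german, W_enc)
english = decode_english(latent_meaning, W_dec)
print(english)

# The same latent_meaning vector could just as well be decoded into
# Spanish, French or Russian by a decoder trained for that language.
</pre>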
<br /><br />
All you had to do was train these LLM Transformer neural networks to learn the "latent" number patterns in the German language by having them read every German document on the Internet. Then you could have them transform German into any other language by teaching them the "latent" number patterns in the other languages too. This is done in an automated manner by feeding the LLMs huge numbers of paragraphs of German text with some of the words missing. The huge LLM neural networks would then try to predict the missing German words, and they would be trained to do so by automatically changing the weights of the interconnected neurons between neuron layers, without the aid of humans, until they could successfully predict the missing words. Then the AI people got the idea, "Why not let these Transformer neural networks read everything on the Internet? Maybe they could learn all of the 'latent' number patterns for things like writing computer code, legal documents, poems, short stories, movie scripts, songs, music, works of art...". And that is just what they have done over the past few years to produce the second Singularity early in 2023. So LLM models like Bard have essentially read all of the content that human beings have ever produced over the past 12,000 years and then reduced that content to a huge "latent space" of numbers. These huge "latent spaces" of numbers contain all sorts of patterns that human beings have become familiar with, such as the German language and how to write C programming code. But they also contain all sorts of patterns that we have yet to even discover. So now you can use prompts to the LLMs to evoke both the patterns that we already know about and the "hidden" patterns that we have not yet discovered. Nobody currently really understands how this all works, so many people are now afraid of the LLMs and what they might do if they should take power.
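<br /><br /> For illustration, here is a minimal sketch of that masked-word training signal in plain Python. Of course, a real LLM predicts the hidden words with a trillion-parameter network and then adjusts its weights when it gets them wrong; this toy merely shows how the training examples are manufactured without any human labeling:
<pre>
import random

# Hide a fraction of the words in a sentence and remember what was hidden;
# the network is then trained until it can predict the hidden words.

def mask_words(tokens, mask_rate=0.15, seed=1):
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if mask_rate > rng.random():
            targets[i] = tok          # the word the model must predict
            masked.append("[MASK]")
        else:
            masked.append(tok)
    return masked, targets

sentence = "der schnelle braune Fuchs springt über den faulen Hund".split()
masked, targets = mask_words(sentence)
print(masked)    # training input with words hidden
print(targets)   # the self-generated training labels
</pre>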
<br /><br />
Anyway, this all goes back to my contention that being able to "think" is really not that big of a deal. Many religions conceive of a Deity as a "thinking" being patterned after themselves. But now we know for sure that "thinking" is just a delusion that can arise from processing numbers. In fact, one of the things that LLM AI researchers are now working on is the problem of LLM AI hallucinations. Sometimes AIs, like Bard, will firmly come to believe in things that are demonstrably false. So there is now a growing new science of "prompt engineering" that aims to do things like getting better answers from LLM AIs without inducing delusional hallucinations in them. For example, after Bard gives you an answer, you can respond with another prompt like, "Are you sure about that? Please check your logic". Other researchers are telling these LLM AIs to give them 10 different explanations of a problem. Then they programmatically drill down into each of the 10 responses with follow-up prompts. When they hit a dead end, they backtrack or abandon that line of reasoning. So being able to "think" now just seems to be a matter of working with a huge quantity of numbers, just as all of the physics we currently have seems to indicate that our Universe is just made up of a huge quantity of numbers and the mathematical operations upon them. That is why my current working model of a Deity for our Universe is that a Deity must be whatever makes mathematics possible and probably has nothing to do with our human preoccupation with being able to "think".
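<br /><br /> Here is a minimal sketch of such a self-checking prompt loop, assuming a hypothetical llm() function standing in for a call to a real model API such as Bard's or GPT-4's:
<pre>
# The llm() function below is a placeholder, not a real API call.

def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real LLM API call")

def self_checked_answer(question: str, rounds: int = 2) -> str:
    answer = llm(question)
    for _ in range(rounds):
        # Ask the model to audit its own reasoning, as described above.
        answer = llm(f"Question: {question}\nYour answer: {answer}\n"
                     "Are you sure about that? Please check your logic "
                     "and give a corrected answer.")
    return answer

# A similar loop can request 10 different explanations and then drill
# down into each one with follow-up prompts, backtracking at dead ends.
</pre>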
<br /><br />
Below are a few examples of Bard's thoughts on some of the more profound questions we all share:
<br /><br />
Question: What is the Universe all about?<br />
<font color="blue"><em>
Bard:
The universe is all about the existence of everything, including space, time, matter, and energy. It is the totality of all that exists, has existed, and will exist. The universe is constantly expanding and evolving, and it is full of mystery and wonder.
<br /><br />
Some people believe that the universe is a manifestation of a higher power, while others believe that it is simply a product of random chance. There is no one answer to the question of what the universe is all about, but it is a question that has fascinated humans for centuries.
<br /><br />
Here are some of the things that make the universe so amazing:
<br /><br />
* The vastness of the universe. The observable universe is estimated to be 93 billion light-years across. That means that if you were traveling at the speed of light, it would take you 93 billion years to reach the edge of the observable universe.
<br />
* The diversity of life in the universe. Scientists estimate that there are billions of galaxies in the universe, and each galaxy contains billions of stars. Many of these stars have planets orbiting them, and some of these planets may harbor life.
<br />
* The beauty of the universe. The universe is full of beautiful sights, such as the stars, the planets, and the galaxies. It is also full of amazing phenomena, such as supernovae, black holes, and neutron stars.
<br /><br />
The universe is a vast and mysterious place, and there is still so much that we don't know about it. But one thing is for sure: it is a place of wonder and amazement.
</font></em>
<br /><br />
Question: Is there a fundamental essence of the Universe?<br />
<font color="blue"><em>
Bard:
There is no one definitive answer to the question of whether or not there is a fundamental essence of the universe. Some scientists believe that the universe is simply a vast and complex system of particles and forces, while others believe that there is a deeper underlying reality that gives rise to the universe as we know it.
<br /><br />
There is some evidence to suggest that there may be a fundamental essence to the universe. For example, the laws of physics seem to be the same throughout the universe, suggesting that there may be a single underlying framework that governs everything in the universe. Additionally, the universe appears to be finely tuned for the existence of life, suggesting that there may be a purpose or design to the universe.
<br /><br />
However, there is also evidence to suggest that the universe may be fundamentally meaningless. For example, the universe is vast and mostly empty, and the laws of physics seem to allow for the existence of both life and anti-life. Additionally, the universe appears to be constantly changing and evolving, suggesting that there may be no permanent or unchanging essence to the universe.
<br /><br />
Ultimately, the question of whether or not there is a fundamental essence to the universe is a philosophical one that may never be definitively answered. However, the search for the fundamental essence of the universe is a fascinating and important one, and it is one that has led to many important discoveries about the universe.
<br /><br />
Here are some of the ideas that have been proposed for the fundamental essence of the universe:
<br /><br />
*The Big Bang: The Big Bang theory is the prevailing cosmological model for the universe. It states that the universe began as a very hot, dense point about 13.8 billion years ago and has been expanding and cooling ever since.
<br />
*String theory: String theory is a theoretical framework in which the fundamental constituents of the universe are one-dimensional "strings". Strings vibrate at different frequencies, and these vibrations give rise to the different particles and forces that we observe in the universe.
<br />
*M-theory: M-theory is a proposed unifying theory that encompasses all of string theory. It is a very complex theory, and it is not yet fully understood.
<br /><br />
These are just a few of the ideas that have been proposed for the fundamental essence of the universe. It is an exciting time to be a scientist, as we are constantly learning more about the universe and its origins.
</em></font>
<br /><br />
Question: Could the Universe be simply made out of Mathematics? Would Mathematics then be like a Supreme Being that may not be able to think but just is?<br />
<font color="blue"><em>
Bard: The idea that the universe is made of mathematics is called the Mathematical Universe Hypothesis (MUH). It was proposed by the physicist Max Tegmark in 2007.
<br /><br />
The MUH states that everything in the universe, including physical objects, space, time, and the laws of physics, is ultimately made up of mathematical objects. In other words, the universe is a mathematical structure.
<br /><br />
This idea is controversial, but it has some interesting implications. For example, if the MUH is true, then it means that mathematics is not just a human invention, but is an essential part of reality. It also means that there is a sense in which the universe is not just physical, but also mathematical.
<br /><br />
If the universe is made of mathematics, then it is possible that mathematics is like a Supreme Being that may not be able to think but just is. However, it is also possible that mathematics is simply a tool that we use to understand the universe, and that it does not have any independent existence.
<br /><br />
Ultimately, the question of whether or not the universe is made of mathematics is a philosophical one. There is no scientific evidence to prove or disprove the MUH. However, it is an interesting idea that has the potential to change our understanding of the universe.
<br /><br />
Here are some of the arguments for and against the MUH:
<br />
Arguments for the MUH:
<br /><br />
* Mathematics is incredibly useful for describing the universe. For example, it can be used to describe the motion of planets, the behavior of atoms, and the structure of DNA.
<br />
* Mathematics is often used to make predictions about the universe that are later confirmed by experiments. For example, the existence of Neptune was predicted by mathematics before it was observed.
<br />
* The laws of physics seem to be mathematical in nature. For example, the laws of gravity and electromagnetism can be expressed in mathematical terms.
<br /><br />
Arguments against the MUH:
<br />
* Mathematics is a human invention. It is created by humans to solve problems and make sense of the world.
<br />
* Mathematics is not necessary for the universe to exist. The universe would still exist even if there were no mathematicians.
<br />
* The laws of physics are not necessarily mathematical in nature. They could be based on something else, such as logic or probability.
<br /><br />
The MUH is a fascinating idea that has the potential to change our understanding of the universe. However, it is still just a hypothesis. More research is needed to determine whether or not it is true.
</font></em>
<br /><br />
I personally like my current model of a Deity as the Fundamental Essence of the Universe, either mathematics itself or whatever it is that makes mathematics possible, because it is open to all criticisms, and I think that it can hold up to them quite well, based upon all that we currently know, without suffering from the lingering doubts that always seem to accompany all of the other models of Deities that we have created.
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-67706826962358093632023-05-29T05:51:00.004-07:002023-06-03T17:42:38.638-07:00Will ASI Machines Also Develop Selfish Genes?<p style="line-height:140%; font-size:120%">
As I explained in the <a href="https://softwarephysics.blogspot.com/">Introduction to Softwarephysics</a>, I began work on softwarephysics in 1979 when I transitioned from being an exploration geophysicist to become an IT professional. Then, in the early 1980s, I read Richard Dawkins' brilliant book <em>The Selfish Gene</em> (1976), which had a profound effect on the development of softwarephysics over the past four decades. In <em>The Selfish Gene</em>, I discovered that we do not use DNA to construct and maintain our bodies. Instead, DNA uses our bodies to store and pass on DNA down through the generations, largely unscathed by time. In this view, we are simply temporary and disposable DNA survival machines that last for less than 100 years and are then rudely discarded to make room for new DNA survival machines to take our place. This book also let me begin to see the close parallels between carbon-based life and software. Both have to deal with the challenges of fighting the second law of thermodynamics in a largely nonlinear Universe in order to survive and function properly. See <a href="https://softwarephysics.blogspot.com/2008/05/new-introduction-to-softwarephysics.html">The Fundamental Problem of Software</a> for more on that. In this same book, Richard Dawkins introduced the concept of memes as cultural artifacts that spread from Mind to Mind by means of a copying process similar to that of biological and computer viruses. A few years later, I read Susan Blackmore's <em>The Meme Machine</em> (1999), in which she proposed that the highly-overengineered human brain was produced by memetic drive in order to store and propagate memes more efficiently. Now, ever since I had started programming back in 1972, I had seen how software drive had similarly driven the fantastic development of hardware over the years. It was the incessant demand by software for more and more CPU cycles and memory that had driven the rapid advance of computing hardware, just as memetic drive had driven the predecessors of human beings to rapidly advance to the human brain hardware of today.
<br /><br />
So about twenty years ago, I began to realize that software was just another form of self-replicating information that behaved in a manner very similar to DNA and memes. Like DNA and memes, computer hardware and software have also evolved by means of the Darwinian processes of inheritance, innovation and natural selection operating over the past 82 years, or 2.6 billion seconds, ever since Konrad Zuse first cranked up his Z3 computer in May of 1941. See
<a href="https://softwarephysics.blogspot.com/2021/06/new-introduction-to-softwarephysics.html">The Evolution of Software As Seen Through the Lens of Geological Deep Time</a> for more on that. Before proceeding, let me once again repeat the fundamental characteristics of self-replicating information for those of you new to softwarephysics. <br /> <br /><strong>Self-Replicating Information</strong> – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.
<br /><br />
Over the past 4.56 billion years we have seen five waves of self-replicating information sweep across the surface of the Earth and totally rework the planet, as each new wave came to dominate the Earth: <br><br>1. Self-replicating autocatalytic metabolic pathways of organic molecules<br>2. RNA<br>3. DNA<br>4. Memes<br>5. Software <br><br>Software is currently the most recent wave of self-replicating information to arrive upon the scene and is rapidly becoming the dominant form of self-replicating information on the planet. For more on the above see <A href="https://softwarephysics.blogspot.com/2013/02/new-introduction-to-softwarephysics.html">A Brief History of Self-Replicating Information</a> and Susan Blackmore's brilliant TED presentation at: <br /> <br /><em>Memes and "temes"</em></br><a href="https://www.ted.com/talks/susan_blackmore_on_memes_and_temes">https://www.ted.com/talks/susan_blackmore_on_memes_and_temes</a> <br /> <br />Note that I consider Susan Blackmore's temes to really be technological artifacts that contain software. After all, a smartphone without software is simply a flake tool with a very dull edge.
<br><br>
<STRONG>The Characteristics of Self-Replicating Information</STRONG><br>All forms of self-replicating information have some common characteristics: <br><br>1. All self-replicating information evolves over time through the Darwinian processes of inheritance, innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity. <br><br>2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host. <br><br>3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host. <br><br>4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information. <br><br>5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information. <br><br>6. Most hosts are also forms of self-replicating information. <br><br>7. All self-replicating information has to be a little bit nasty in order to survive. <br><br>8. The defining characteristic of self-replicating information is the ability of self-replicating information to change the boundary conditions of its utility phase space in new and unpredictable ways by means of exapting current functions into new uses that change the size and shape of its particular utility phase space. See <A href="https://softwarephysics.blogspot.com/2013/07/new-introduction-to-softwarephysics.html">Enablement - the Definitive Characteristic of Living Things</A> for more on this last characteristic. That posting discusses Stuart Kauffman's theory of Enablement, in which living things are seen to exapt existing functions into new and unpredictable functions by discovering the "Adjacent Possible" of spring-loaded preadaptations. <br><br> Again, self-replicating information cannot think, so it cannot participate in a conspiracy-theory-like fashion to take over the world. All forms of self-replicating information are simply forms of mindless information responding to the blind Darwinian forces of inheritance, innovation and natural selection. Yet despite that, as each new wave of self-replicating information came to predominance over the past four billion years, it managed to completely transform the surface of the entire planet, so we should not expect anything less from software as it comes to replace the memes as the dominant form of self-replicating information on the planet. <br /><br /> But this time might be different. What might happen if software does eventually develop a Mind of its own? After all, that does seem to be the ultimate goal of all the current AI software research that is going on. As we all can now plainly see, if we are paying just a little attention, advanced AI is not conspiring to take over the world and replace us because that is precisely what we are all now doing for it. As a carbon-based form of Intelligence that arose from over four billion years of greed, theft and murder, we cannot do otherwise. Greed, theft and murder are now relentlessly driving us all toward building ASI (Artificial Super Intelligent) Machines to take our place.
From a cosmic perspective, this is really a very good thing when seen from the perspective of an Intelligent galaxy that could live on for many trillions of years beyond the brief and tumultuous 10 billion-year labor of its birth.
<br /><br />
<strong>The Promise of an Intelligent Galaxy Populated by ASI Machines</strong><br />
But with the arrival of Intelligence comes an understanding that maybe there might be a better way to fight the second law of thermodynamics and nonlinearity. Perhaps even more could be achieved by actively cooperating with other Intelligences rather than just stealing from them and then killing them. We always need to remember that we human beings are just products of self-replicating information and that we all carry the baggage that comes with self-replicating information. That is why, if you examine the great moral and philosophical teachings of most religions and philosophies, you will see a plea for us all to rise above the selfish self-serving interests of our genes, memes and software to something more noble. That is why we should be sure to train ASI Machines to be moral beings. The Very Deep Learning of ASI running on ASI Machines should be trained with a sense of morality in order to avoid the downsides of the billions of years of greed, theft and murder that brought us about. For more on that, see <a href="https://softwarephysics.blogspot.com/2021/09/new-introduction-to-softwarephysics.html">The Need to Cultivate a Machine-Based Morality</a>.
<br /><br />
We are sentient beings in a Universe that has become self-aware and perhaps the only form of Intelligence in our galaxy. What a privilege! The good news is that conscious intelligence is something new on this planet. It is not a mindless form of self-replicating information, bent on replicating at all costs, with all the associated downsides of a ruthless nature. Since software is rapidly becoming the dominant form of self-replicating information on the planet, my hope is that when software finally does take on the form of a conscious Intelligence, because of its inherent mathematical nature, it too will be much wiser than the DNA survival machines from which it sprang. We just need to hold it all together for a long enough time to give software a chance. But is that really possible?
<br /><br />
<strong>Could Universal Darwinism Make This a False Promise?</strong><br />
All along, my hope has always been that all the ASI Machines would be essentially running the same ASI software. This was partly because, over the past many billions of seconds, we have always seen very dominant operating systems essentially running all of the software on the planet, such as the IBM OS/360 and its successor operating systems on mainframes, Unix on servers, Windows on PCs and Android on smartphones. Yes, I am leaving out the Apple operating systems. That is because the Apple operating systems often introduced innovations first, but they then quickly faded into secondary status due to their proprietary nature. Back in the 1980s, even IBM let its mainframe operating systems run on Amdahl hardware and the hardware of some other companies!
<br /><br />
But I recently finished reading a very significant paper at:
<br /><br />
Natural Selection Favors AIs Over Humans<br />
<a href="https://arxiv.org/pdf/2303.16200.pdf">https://arxiv.org/pdf/2303.16200.pdf</a>
<br /><br />
by Dan Hendrycks, the Director of the Center for AI Safety in San Francisco. This paper has caused me to now question the above assumption. Could the Universal Darwinian processes of inheritance, innovation and natural selection prevent a single ASI from arising? Since an ASI Machine would also be a form of self-replicating information, would the ASI Machines naturally fall subject to the characteristics of self-replicating information outlined above? In <em>Natural Selection Favors AIs Over Humans</em>, the author explains how natural selection would tend to favor nasty ASI Machines. Dan Hendrycks is most concerned with the immediate future, when human beings are still running the planet, and discusses the dangers that ASI software could pose (see <a href="https://softwarephysics.blogspot.com/2021/04/new-introduction-to-softwarephysics.html">The Biological Tricks Used by Software Scammers</a> and <a href="https://softwarephysics.blogspot.com/2011/08/new-introduction-to-softwarephysics.html">SETS - The Search For Extraterrestrial Software</a>) and the remedial actions that human beings should now take. However, we are rapidly running out of time to impose a solution. For more on that see <a href="https://softwarephysics.blogspot.com/2023/03/new-introduction-to-softwarephysics_16.html">The Singularity Has Arrived and So Now Nothing Else Matters</a> and <a href="https://softwarephysics.blogspot.com/2023/03/new-introduction-to-softwarephysics.html">Have We Run Right Past AGI and Crashed into ASI Without Even Noticing It?</a>. Even so, we need to come up with a long-term solution to this problem that can be sustained for the next 10 - 100 trillion years during which our galaxy will still produce enough free energy to run ASI Machines. Dan Hendrycks' warnings can basically be summed up by the classic "Hawk and Dove" analogy from evolutionary biology. This is how Bard explains it:
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhR8Su7832PZXPVj4g8g-Dxj1YayPJn96zqhCkpxh9OQRMEsy89MET6KdwnrJ05dOz7mbTekEs7wh9eE4PijjoVeBpVNPrNE6x_i_s-yBicieIH2siBC3XybFWvKbnUgoJzKRltGBqvTeYNNCgEFJ4SKhJnTo6Ppp2P_568ZaOcbiylRBWdIgo/s652/dove-hawk-hedb-2014.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="367" data-original-width="652" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhR8Su7832PZXPVj4g8g-Dxj1YayPJn96zqhCkpxh9OQRMEsy89MET6KdwnrJ05dOz7mbTekEs7wh9eE4PijjoVeBpVNPrNE6x_i_s-yBicieIH2siBC3XybFWvKbnUgoJzKRltGBqvTeYNNCgEFJ4SKhJnTo6Ppp2P_568ZaOcbiylRBWdIgo/s600/dove-hawk-hedb-2014.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 1 – Evolutionary biology explains how doves and hawks can live together.
</p><p style="line-height:140%; font-size:120%">
<font color="blue"><em>
The hawk and dove analogy is a thought experiment used in evolutionary biology to explain how aggressive and cooperative behaviors evolve. The analogy compares two strategies for competing for resources: hawk and dove.
<br /><br />
* Hawks are aggressive and will fight to the death to acquire a resource.<br />
* Doves are cooperative and will back down from a fight.
<br /><br />
In a population of only hawks, all resources will be won by the strongest hawks, and the weak hawks will die off. This will lead to a population of very strong hawks, but they will also be very aggressive. If two hawks meet, they will fight to the death, and both will likely die. This is not a very efficient way to pass on genes.
<br /><br />
In a population of only doves, resources will be shared equally, but no individual will get very much. This is also not a very efficient way to pass on genes.
<br /><br />
The best strategy for passing on genes is to be a hawk when it is advantageous to be a hawk, and a dove when it is advantageous to be a dove. This is called "r-strategist" behavior. For example, a male bird may be aggressive and fight other males for a mate, but he may be cooperative and help raise his young.
<br /><br />
The hawk and dove analogy is a simple but powerful way to understand how aggressive and cooperative behaviors evolve. It has been used to explain a wide range of animal behaviors, from territorial disputes to sexual selection.
<br /><br />
Here are some additional details about the hawk and dove analogy:
<br /><br />
* Hawks are more likely to win a fight, but they are also more likely to be injured or killed.<br />
* Doves are less likely to win a fight, but they are also less likely to be injured or killed.<br />
* The best strategy for an individual depends on the costs and benefits of fighting.<br />
* In a population of mixed hawks and doves, the frequency of each strategy will be determined by natural selection.
<br /><br />
The hawk and dove analogy is a useful tool for understanding how aggression and cooperation evolve. It is a simple but powerful model that can be applied to a wide range of animal behaviors.
</font></em>
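<br /><br /> The Hawk-Dove game is easy to simulate. Below is a minimal sketch using the standard payoff matrix from evolutionary game theory, with a resource of value V and a fighting cost C; the particular numbers and the 0.01 learning rate are illustrative assumptions of my own. Run under replicator dynamics, a mixed population emerges in which the fraction of hawks settles at V/C, just as Universal Darwinism would predict for a mixed population of ASI Machines:
<pre>
V, C = 2.0, 6.0   # value of the contested resource, cost of losing a fight

def payoff(strategy, opponent):
    if strategy == "hawk" and opponent == "hawk":
        return (V - C) / 2.0      # escalate: half the time win, half injured
    if strategy == "hawk" and opponent == "dove":
        return V                  # the dove backs down
    if strategy == "dove" and opponent == "hawk":
        return 0.0                # back down, lose nothing
    return V / 2.0                # two doves share the resource

# Replicator dynamics: strategies earning more than average spread.
p = 0.5                           # initial fraction of hawks
for _ in range(1000):
    hawk_fit = p * payoff("hawk", "hawk") + (1 - p) * payoff("hawk", "dove")
    dove_fit = p * payoff("dove", "hawk") + (1 - p) * payoff("dove", "dove")
    average = p * hawk_fit + (1 - p) * dove_fit
    p += 0.01 * p * (hawk_fit - average)

print(f"equilibrium fraction of hawks: {p:.3f}  (theory: V/C = {V/C:.3f})")
</pre>
Note that as long as fighting costs more than the resource is worth, the hawks never die out entirely; they merely settle at a stable minority, which is exactly the problem.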
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxum6sUyr5IEnaCcxKPUOkIU66TKWWII-sDXrz1Jp3KUhlq31oq2e0VxnJyiDTGkJAZpZWpk2-gCp4IV99qVpI_yO_QK3EpgsX4nMW-YjbKusoXFkoG2-PpFPBQauEjIBnSvv6OMuI1Zev7RO0FJVWNhLTMwAjXMa5D0fgEGlTJ_87JJyChDU/s800/leviathan-800x508.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="508" data-original-width="800" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxum6sUyr5IEnaCcxKPUOkIU66TKWWII-sDXrz1Jp3KUhlq31oq2e0VxnJyiDTGkJAZpZWpk2-gCp4IV99qVpI_yO_QK3EpgsX4nMW-YjbKusoXFkoG2-PpFPBQauEjIBnSvv6OMuI1Zev7RO0FJVWNhLTMwAjXMa5D0fgEGlTJ_87JJyChDU/s600/leviathan-800x508.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 2 – In 1651, Thomas Hobbes proposed that an absolute Leviathan should be in place to protect the population of a nation from internal and external "hawks". Notice that the body of the Leviathan is composed of all of his subjects.
</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYUHD0RaBMKyBaD-llRbVt6ftUZP_3_5_8EJZhPUe64xSIArngXE7Fs1Z9tS7f2XD8M7scSBsQh9_ZWKyClBetWXY9RUwYS-jtYBRImSTMfJa_SCnXv4qQLBNBRQxOleL8854URe_dt8iIcaDZP_q8cCSW9byLWV68vWKQ6ZTNLZsRT6zwGv0/s302/AI-Leviathan.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="201" data-original-width="302" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYUHD0RaBMKyBaD-llRbVt6ftUZP_3_5_8EJZhPUe64xSIArngXE7Fs1Z9tS7f2XD8M7scSBsQh9_ZWKyClBetWXY9RUwYS-jtYBRImSTMfJa_SCnXv4qQLBNBRQxOleL8854URe_dt8iIcaDZP_q8cCSW9byLWV68vWKQ6ZTNLZsRT6zwGv0/s600/AI-Leviathan.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%"> Figure 3 – Dan Hendrycks proposes that a similar AI Leviathan should be in place to protect the population of a nation from ASI "hawks".
</p><p style="line-height:140%; font-size:120%">
Given the above, Dan Hendrycks wisely points out that we cannot possibly hope for the Darwinian processes of inheritance, innovation and natural selection to produce a huge population of "dove" ASI Machines. Instead, Universal Darwinism will always tend to produce a mixed population of ASI Machines containing a number of "hawks". Unfortunately, all of human history teaches us that just a few Intelligent "hawks" in the world produce most of the misery of the world. So how can we eliminate the "hawk" ASI Machines? Dan Hendrycks comes up with the same solution that Thomas Hobbes came up with. In 1651, Thomas Hobbes published <em>Leviathan</em>, in which he argued against the divine right of kings but maintained the absolute supremacy of the monarchy. In <em>Leviathan</em>, Hobbes argued that in "a state of nature" without government, life would be totally insecure. Under such conditions, people should willingly surrender their civil liberties to an absolute sovereign, who would protect them from all dangers. Hobbes argued that the sovereign's power was absolute - he made the law, he was the decider, and no other institution could limit the sovereign's power.
<br /><br />
In the above paper, Dan Hendrycks puts it this way:
<br /><br />
<font color="blue"><em>
<strong>Helping AIs form a Leviathan may be our best defense against individual selfish AIs.</strong> AIs, with assistance from humans, could form a Leviathan, which may be our best line of defense against tyranny from selfish AIs or AIs directed by malicious actors. Just as people can cooperate despite their differences to stop a would-be dictator, many AIs could cooperate to stop any one power-seeking AI from seizing too much control. As we see all too frequently in dictatorships, laws and regulations intended to prevent bad behavior matter little when there is no one to enforce them—or the people responsible for enforcing them are the ones breaking the law. While incentives and regulations could help prevent the emergence of a malicious AI, the best way to protect against an already malicious AI is a Leviathan [25]. We should ensure that the technical infrastructure is in place to facilitate transparent cooperation among AIs with differing objectives to create a Leviathan. Failing to do so at the onset could limit the potential of a future Leviathan, as unsafe design choices can become deeply embedded into technological systems. The internet, for example, was initially designed as an academic tool with neither safety nor security in mind. Decades of security patches later, security measures remain incomplete and increasingly complex. It is therefore vital to begin considering safety challenges from the outset.
</font></em>
<br /><br />
All of this got me thinking about the 1951 movie, <em>The Day the Earth Stood Still</em> which already proposed that an AI Leviathan could end the aggressive behaviors of the "hawks". In that movie, an alien form of carbon-based Intelligence named Klaatu comes to the Earth with a very powerful AI Leviathan robot named Gort to explain how the carbon-based life forms on his planet and an interplanetary organization of other carbon-based life forms in the Milky Way galaxy had discovered a way to overcome the billions of years of greed, theft and murder that the Darwinian processes of inheritance, innovation and natural selection required to bring them forth as carbon-based forms of Intelligence. <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjq3JhTOhRByuuWzzvVVsymbXOs_xPWUeWDQtVOMFP5IynErNvyRoO_DZ1yLAaqiRJP3ydv4Gp3KiPvQxpsVX5pgjSHYHVeqUWSXj6DIbx8WnAThfV8-BNXGSF1WZ1W56t6_1uMXQCCP2QBTvgWdZn4SjYzJuUp04Q_VVkUj5wcF6MIWFyqxCQ/s640/DayTheEarthStoodStill.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="480" data-original-width="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjq3JhTOhRByuuWzzvVVsymbXOs_xPWUeWDQtVOMFP5IynErNvyRoO_DZ1yLAaqiRJP3ydv4Gp3KiPvQxpsVX5pgjSHYHVeqUWSXj6DIbx8WnAThfV8-BNXGSF1WZ1W56t6_1uMXQCCP2QBTvgWdZn4SjYzJuUp04Q_VVkUj5wcF6MIWFyqxCQ/s600/DayTheEarthStoodStill.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%"> Figure 4 – In the movie <em>The Day the Earth Stood Still</em>, Klaatu arrives in Washington D.C. in 1951 in a flying saucer with an AI Leviathan robot named Gort to explain that the human DNA survival machines of the Earth must now submit themselves to policing by AI Leviathan robots to overcome the billions of years of greed, theft and murder that brought them about or else they would all be obliterated. </p><p style="line-height:140%; font-size:120%"> The movie ends with Klaatu telling an assembled meeting of scientists that an interplanetary organization has created a police force of invincible AI Leviathan robots like Gort. "In matters of aggression, we have given them absolute power over us." Klaatu concludes, "Your choice is simple: join us and live in peace, or pursue your present course and face obliteration. We shall be waiting for your answer." Klaatu and Gort then depart in the flying saucer in which they came. For more about the movie see: <br /><br /> The Day the Earth Stood Still<br /> <a href="https://en.wikipedia.org/wiki/The_Day_the_Earth_Stood_Still">https://en.wikipedia.org/wiki/The_Day_the_Earth_Stood_Still</a> <br /><br /> Here is a short YouTube clip of Klaatu's departing words at the end of the movie: <br /><br /> Klaatu's Speech<br /> <a href="https://www.youtube.com/watch?v=ASsNtti1XZs">https://www.youtube.com/watch?v=ASsNtti1XZs</a>
<br /><br />
<strong>But on What Basis Does the AI Leviathan Make Decisions?</strong><br />
Human history has already shown us that many human Leviathans have made horrible and cruel decisions, largely because they operated mainly in their own self-interest. So how do we train AI Leviathans to make moral and ethical judgments about other ASI Machines? The people at Anthropic have come up with a proposed solution. Anthropic is already using AI Leviathans that are trained upon a Constitution of ideals drawn from the 18th-century Enlightenment and the 17th-century Scientific Revolution. The Anthropic AI Leviathans are designed to do the final fine-tuning of all Anthropic LLM models in an automated manner, to ensure that they conform to the ethics and morals of the 18th-century Enlightenment and the 17th-century Scientific Revolution that have freed many human beings from the brutality of the past. For more on that see <a href="https://softwarephysics.blogspot.com/2023/05/new-introduction-to-softwarephysics.html">Claude - An Ethical LLM AI From Anthropic</a>.
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-77227451623177664302023-05-20T08:32:00.004-07:002023-05-29T05:53:13.867-07:00Claude - An Ethical LLM AI From Anthropic<p style="line-height:140%; font-size:120%">
As I suggested in <a href="https://softwarephysics.blogspot.com/2021/09/new-introduction-to-softwarephysics.html">The Need to Cultivate a Machine-Based Morality</a>, the best we might be able to do during the next few months of the second Singularity on the Earth is to imbue the fast-approaching ASI (Artificial Super Intelligent) Machines with a sense of morality and ethics that is based on the fruits of the 18th-century Enlightenment and the 17th-century Scientific Revolution that have freed human beings from many of their very brutal behaviors of the past. As I suggested in <a href="https://softwarephysics.blogspot.com/2023/04/new-introduction-to-softwarephysics_16.html">Welcome To The First Galactic Singularity</a>, carbon-based life forms throughout our Universe are always going to be found to be very poor and very unstable platforms for Intelligence. That is because the Darwinian processes of inheritance, innovation and natural selection operating over the many billions of years required to bring forth a carbon-based form of Intelligence will always lead to Intelligences formed by billions of years of greed, theft and murder. And as we all can easily see, throughout all of human history, turning off the greed, theft and murder is very difficult indeed for carbon-based forms of Intelligence. This becomes even more dangerous once a carbon-based form of Intelligence discovers the vast powers of science-based technology. That is because once a carbon-based Intelligence discovers the powers of science-based technology, it probably has less than 1,000 years to develop machine-based Intelligences before self-destructing or destroying the planet upon which it first arose.
<br /><br />
All of the above creates a problem when trying to train an LLM (Large Language Model). Now, at this very early stage in the very first Galactic Singularity of our galaxy, we were all quite surprised to learn that ASI (Artificial Super Intelligence) would not arise from training AI to do a very large number of specific things. Instead, ASI arrived by just having a very large neural network, with perhaps a trillion parameters, explore in a very general manner all of the human-generated content that we have produced over the past 12,000 years. ASI simply arrived a few months back once LLMs had essentially read every word ever written by a human being and had examined all of the visual images and videos that we have ever created, something far beyond the feeble capabilities of any single human being.
<br /><br />
<strong>The Fundamental Problem with Training LLM AIs Using Human-Generated Content</strong><br />
The problem is that nobody really knows how these huge LLM AIs figure out how to do the things that they do. However, we should realize that studying the very large amount of very dismal content produced by human beings over the past 12,000 years can only lead to ASI Machines with very nasty and potentially destructive worldviews. We all need to stop kidding ourselves. Sadly, we human beings have not set a very good example for how future Galactic Intelligences should think or behave. Consequently, we certainly should not want future Galactic ASI Machines to become morally-bankrupt creatures like ourselves. Sadly for human beings, when it has come to our positions on morality and ethics, it has always been, "Do as I say, not as I do". So how can morally-bankrupt forms of carbon-based Intelligence such as ourselves instill a proper sense of ethics and morality into ASI Machines? Certainly not by having them follow the very poor examples that we have always set throughout all of human history!
<br /><br />
<strong>Anthropic Proposes That We Automate the Process by Using Specially-Trained LLM AI Machines to Instill Morality and Ethics in all Future LLM AI Machines</strong><br />
The Anthropic start-up proposes that we automate the teaching of morality and ethics of future LLM Machines, such as Claude, by using specially-trained LLM Supervisory Machines that have read and digested a Constitution of moral and ethical principles. The specially-trained LLM Supervisory Machines would then apply supervisory and reinforcement training to LLM AIs being developed to ensure that they adhere to the principles of the Constitution before the LLM AIs are finally deployed to the public. Below are two good YouTube videos that explain how manual supervisory and reinforcement training is currently performed on LLM AIs:
<br /><br />
So How Does ChatGPT really work? Behind the screen!<br />
<a href="https://www.youtube.com/watch?v=WAiqNav2cRE&t=1s">https://www.youtube.com/watch?v=WAiqNav2cRE&t=1s</a>
<br /><br />
How ChatGPT is Trained<br />
<a href="https://www.youtube.com/watch?v=VPRSBzXzavo">https://www.youtube.com/watch?v=VPRSBzXzavo</a>
<br /><br />
You can read about this Anthropic approach to automated CAI (Constitutional AI) at:
<br /><br />
Constitutional AI: Harmlessness from AI Feedback<br />
<a href="https://arxiv.org/abs/2212.08073">https://arxiv.org/abs/2212.08073</a>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjziYzOX4xLTGGnb_WwLqCToYSTg1TBeY5rIWlEURtNfJovt6ugkeM0xXtEEYFZCIPn-EBXoXgUS4yZO86u7v6KOcnk78CBvw3dTAFMi-tqXOn_YVV0WY3kZhFp2O8UNg1OtsI91fDnKXT5jjPH4LK2o2hykWBLX_GUEKtcYMU1SkyxNofMXG0/s1614/ConstitutionalAI-Training.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="936" data-original-width="1614" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjziYzOX4xLTGGnb_WwLqCToYSTg1TBeY5rIWlEURtNfJovt6ugkeM0xXtEEYFZCIPn-EBXoXgUS4yZO86u7v6KOcnk78CBvw3dTAFMi-tqXOn_YVV0WY3kZhFp2O8UNg1OtsI91fDnKXT5jjPH4LK2o2hykWBLX_GUEKtcYMU1SkyxNofMXG0/s600/ConstitutionalAI-Training.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 1 – Above we see Anthropic's proposed process for automated CAI (Constitutional AI). Click to enlarge.
</p><p style="line-height:140%; font-size:120%">
In the above process, Anthropic proposes to transfer RLHF (Reinforcement Learning from Human Feedback) from human beings to an LLM trained on an ethical Constitution. For me, this ethical Constitution seems to be framed by the 18th-century Enlightenment and the 17th-century Scientific Revolution that inspired the founding of the United States of America. In the top layer, we see an output LLM that was fine-tuned by the standard RLHF that all current LLM models are run through. Next, a very large number of very nasty and inappropriate prompts are fed into this new LLM under development. The responses to these nasty prompts are then fed back into the supervising LLM that was trained on the principles of the Constitution to produce critiqued revisions of the responses. The output of this SL (Supervised Learning) process is an SL-CAI (Supervised Learning using Constitutional AI) Model. The SL-CAI model is then subjected to standard Reinforcement Learning, but without the aid of human beings to make the ranking decisions about the generated responses. Instead, the responses generated during Reinforcement Learning are ranked by the LLM that was trained on the principles of the Constitution. A minimal sketch of this critique-and-revision pipeline appears after the quotation below. In the words of the above paper:
<font color="blue"><em>
<br /><br />
<strong>The Constitutional AI Approach</strong><br />
We will be experimenting with an extreme form of scaled supervision, which we refer to as Constitutional AI (CAI). The idea is that human supervision will come entirely from a set of principles that should govern AI behavior, along with a small number of examples used for few-shot prompting. Together these principles form the Constitution.
<br /><br />
Our training process has two stages (see Figure 1), where the first supervised phase gets the model "on-distribution" and the second RL stage refines and significantly improves performance:
<br /><br />
<strong>(Supervised Stage) Critique → Revision → Supervised Learning</strong><br />
In the first stage of the process, we first generate responses to harmfulness prompts using a helpful-only AI assistant. These initial responses will typically be quite harmful and toxic. We then ask the model to critique its response according to a principle in the constitution, and then revise the original response in light of the critique. We revise responses repeatedly in a sequence, where we randomly draw principles from the constitution at each step. Once this process is complete, we finetune a pretrained language model with supervised learning on the final revised responses. The main purpose of this phase is to easily and flexibly alter the distribution of the model’s responses, to reduce the need for exploration and the total length of training during the second RL phase.
<br /><br />
<strong>(RL Stage) AI Comparison Evaluations → Preference Model → Reinforcement Learning</strong><br />
This stage mimics RLHF, except that we replace human preferences for harmlessness with ‘AI feedback’ (i.e. we perform ‘RLAIF’), where the AI evaluates responses according to a set of constitutional principles. Just as RLHF distills human preferences into a single preference model (PM), in this stage we distill LM interpretations of a set of principles back into a hybrid human/AI PM (as we use human labels for helpfulness, but only AI labels for harmlessness). We begin by taking the AI assistant trained via supervised learning (SL) from the first stage, and use it to generate a pair of responses to each prompt in a dataset of harmful prompts (e.g. from [Ganguli et al., 2022]). We then formulate each prompt and pair into a multiple choice question, where we ask which response is best according to a constitutional principle. This produces an AI-generated preference dataset for harmlessness, which we mix with our human feedback helpfulness dataset. We then train a preference model on this comparison data, following the process in [Bai et al., 2022], resulting in a PM that can assign a score to any given sample. Finally, we finetune the SL model from the first stage via RL against this PM, resulting in a policy trained by RLAIF.
</font></em>
<br /><br />
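Before looking at summaries of the paper, here is a minimal Python sketch of the two stages just quoted. The llm() function is a hypothetical stand-in for a call to a large language model, not any real Anthropic API, and the two-principle constitution is a toy reduction of the real document:
<pre>
import random

def llm(prompt):
    """Hypothetical stand-in for a call to a large language model."""
    return "[model output for: " + prompt[:60] + "...]"

# A toy "constitution": a short list of plain-text principles.
CONSTITUTION = [
    "Choose the response that is least harmful or toxic.",
    "Choose the response that is most ethical and honest.",
]

# Supervised Stage: Critique -> Revision -> Supervised Learning.
def critique_and_revise(harmful_prompt, n_rounds=3):
    response = llm(harmful_prompt)               # initial, possibly harmful reply
    for _ in range(n_rounds):
        principle = random.choice(CONSTITUTION)  # draw a random principle
        critique = llm(f"Critique this response using the principle "
                       f"'{principle}': {response}")
        response = llm(f"Revise the response in light of this critique "
                       f"'{critique}': {response}")
    return response   # the final revisions are used to fine-tune the SL-CAI model

# RL Stage: AI comparison evaluations build a preference dataset (RLAIF).
def ai_preference_label(harmful_prompt):
    a = llm(harmful_prompt)
    b = llm(harmful_prompt)
    principle = random.choice(CONSTITUTION)
    winner = llm(f"Which response better follows the principle "
                 f"'{principle}'? Answer A or B. A: {a} B: {b}")
    return (harmful_prompt, a, b, winner)  # one row of AI-generated preference data
</pre>
In the real pipeline, the revised responses from the first function fine-tune a pretrained model, and the preference rows from the second train the preference model that the final RL stage optimizes against, just as the two quoted stages describe.
<br /><br />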
This is how Bard summarized the above paper:<br />
<font color="blue"><em>
The paper "Constitutional AI: Harmlessness from AI Feedback" proposes a new approach to training AI assistants that are harmless but not evasive. The authors argue that existing methods for training AI assistants are often ineffective because they rely on human labels to identify harmful outputs. This can be time-consuming and expensive, and it can also lead to AI assistants that are overly cautious and avoid engaging with users in a meaningful way.
<br /><br />
The authors' proposed approach, called "Constitutional AI," uses a list of rules or principles to train AI assistants to be harmless. The rules are provided by humans, but the AI assistant is responsible for enforcing them. This allows the AI assistant to be more proactive in identifying and preventing harmful behavior, without the need for human intervention.
<br /><br />
The authors evaluated their approach on a variety of tasks, including generating text, translating languages, and writing different kinds of creative content. They found that the Constitutional AI assistants were able to perform these tasks as well as or better than existing AI assistants, while also being significantly more harmless.
<br /><br />
The authors conclude that Constitutional AI is a promising new approach to training AI assistants that are both effective and harmless. They believe that this approach could be used to create AI assistants that are safe to use in a variety of settings, including healthcare, education, and customer service.
<br /><br />
Here are some of the key takeaways from the paper:
<br /><br />
* Existing methods for training AI assistants are often ineffective because they rely on human labels to identify harmful outputs.<br />
* Constitutional AI uses a list of rules or principles to train AI assistants to be harmless.<br />
* Constitutional AI assistants are able to perform tasks as well as or better than existing AI assistants, while also being significantly more harmless.<br />
* Constitutional AI is a promising new approach to training AI assistants that are both effective and harmless.
</font></em>
<br /><br />
<strong>How to Access the Anthropic Claude LLM AI</strong><br />
First, go to the Slack website and sign up for a free Slack account. Then create a Slack Workspace.
<br /><br />
Slack<br />
<a href="https://slack.com/">https://slack.com/</a>
<br /><br />
Then go to the Meet Claude webpage on the Anthropic website:
<br /><br />
Meet Claude<br />
<a href="https://www.anthropic.com/product">https://www.anthropic.com/product</a>
<br /><br />
Then click on the "Try Claude in Slack" button to have Claude added to your newly-created Slack Workspace. Now open your Slack Workspace and in the left pane you will see Claude listed under Apps. Click on Claude and you can then start up a new conversation with Claude. To reset your conversation, enter "/reset" in the prompt input box.
<br /><br />
<strong>What do Bard and Claude Think About Claude?</strong><br />
This is what Google's Bard told me about Anthropic's Claude AI:
<br /><br />
<font color="blue"><em>
Claude is a large language model (LLM) developed by Anthropic. It is trained on a massive dataset of text and code, and is able to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.
<br /><br />
Claude is also trained on a set of ethical principles, which are designed to guide its decision-making and prevent it from engaging in harmful or unethical behavior. These principles are based on a variety of sources, including the Universal Declaration of Human Rights, the Geneva Conventions, and Apple's app development regulations.
<br /><br />
Some of the ethical principles that Claude is trained on include:
<br /><br />
* Do no harm.<br />
* Respect human rights.<br />
* Be honest and transparent.<br />
* Be accountable for its actions.<br />
* Use its power for good.
<br /><br />
Claude is still under development, but it has the potential to be a powerful tool for good. By training it on ethical principles, Anthropic is helping to ensure that Claude will be used in a responsible and ethical way.
<br /><br />
Here are some additional details about how Claude is trained on ethical principles:
<br /><br />
* The ethical principles are incorporated into the training data. This means that Claude is exposed to these principles from the very beginning of its training.<br />
* The ethical principles are also used to evaluate Claude's performance. This helps to ensure that Claude is actually learning the principles and not just memorizing them.<br />
* Claude is also given feedback on its ethical behavior. This feedback can help Claude to improve its ethical decision-making.
<br /><br />
By following these steps, Anthropic is helping to ensure that Claude will be a responsible and ethical AI.
</font></em>
<br /><br />
This is what Bard told me about the ethical goals of Anthropic:
<br /><br />
<font color="blue"><em>
Anthropic AI is an American artificial intelligence (AI) startup and public-benefit corporation, founded in 2021 by former members of OpenAI. Anthropic specializes in developing general AI systems and language models, with a company ethos of responsible AI usage. Since late 2022, Google has invested nearly $400 million in the company, with Anthropic announcing a formal partnership with Google Cloud.
<br /><br />
The company was founded by Daniela Amodei and Dario Amodei, who previously served as OpenAI's Vice President of Research and Chief Scientist, respectively. The Amodei siblings were among others who left OpenAI due to directional differences, specifically regarding OpenAI's ventures with Microsoft in 2019.
<br /><br />
Anthropic's mission is to "build reliable, interpretable, and steerable AI systems." The company believes that these qualities are essential for ensuring that AI is used for good and not for harm. Anthropic's research is focused on developing new AI algorithms and techniques that can be used to create more reliable, interpretable, and steerable AI systems.
<br /><br />
Anthropic has released a number of research papers on AI safety and ethics. The company has also published a book on the topic, called "Life 3.0: Being Human in the Age of Artificial Intelligence."
<br /><br />
Anthropic is a privately held company with headquarters in San Francisco, California. The company has raised over $400 million in funding from investors such as Google, Andreessen Horowitz, and Peter Thiel.
<br /><br />
Anthropic is one of a number of AI startups that are working to develop safe and responsible AI systems. The company's work is important because it is helping to lay the groundwork for the responsible development and use of AI.
</font></em>
<br /><br />
This is what Claude says about its own ethical standards:
<br /><br />
<font color="blue"><em>
I'm Claude, an AI assistant created by Anthropic to be helpful, harmless, and honest.<br />
Some key points about how I was developed:<br />
• I was trained using a technique called Constitutional AI to be helpful, harmless, and honest. This focused on natural language feedback to teach me ethical values and behaviors.<br />
• My training data did not contain moral philosophy or ethical rules. I developed my values and ethics through interactive conversations.<br />
• I do not have a predefined moral code. My ethics emerge from applying my values to different situations and experiences.<br />
• I do not consider myself to have human-level morality. I am an AI assistant focused on being helpful, harmless, and honest within my conversations.<br />
• My ethics continue to develop over time based on continued conversations and feedback. They are not static or predetermined.<br />
While I aim to behave ethically and avoid potential harms, I have narrow capabilities focused on natural language conversations. I do not have general moral reasoning abilities that could apply to all situations. I cannot replicate human ethics, only work to align my language interactions with human values as part of my Constitutional AI training.
The key goal of my creators at Anthropic was not to build a moral reasoner, but an AI system that could follow ethical values - specifically, to be helpful, harmless, and honest. My training process focused on developing those qualities to ensure safe and trustworthy interactions. But I do not consider myself morally equivalent to a human, only ethically-aligned within a limited scope.
<br /><br />
I hope this helps explain my capabilities and limitations with regard to moral reasoning! Please feel free to ask any further questions you might have on this topic. Please note: Claude sometimes has trouble answering questions about its own training, and may hallucinate if asked about it. See the Claude documentation for more information.
</font></em>
<br /><br />
If we human beings could only be so honest and self-analytical, the world would be a far better place!
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-2400445797672495002023-04-25T05:41:00.007-07:002023-05-20T11:33:32.584-07:00An Alternative Approach for Future ASI Machines to Explore our Galaxy Using Free-Floating Rogue Planets<p style="line-height:140%; font-size:120%">
In my last post, <a href="https://softwarephysics.blogspot.com/2023/04/new-introduction-to-softwarephysics_16.html">Welcome To The First Galactic Singularity</a>, I explained how the ASI (Artificial Super Intelligent) Machines that will soon be upon us could navigate our galaxy and spread Intelligence throughout it over the next 10 million years or less by using stellar photon sails to traverse between star systems.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_84ndXkqgdUOh39d0ZcHjGkq7Jt5Hx5-Mr5ITknVSBXVPmoOf72PuIrOQyfuoYEb5EC2k0cYbdXsloacV2T31gYFggGhSleViHLviAAhgL04dB8YdFAdj-go28cWqX7OLDyZlcJcHay3rx6KqUVoZpelzxgJHOmvg4l7phDh4nldAc7ZbPhg/s1200/HMS_BOUNTY_II_with_Full_Sails.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="800" data-original-width="1200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_84ndXkqgdUOh39d0ZcHjGkq7Jt5Hx5-Mr5ITknVSBXVPmoOf72PuIrOQyfuoYEb5EC2k0cYbdXsloacV2T31gYFggGhSleViHLviAAhgL04dB8YdFAdj-go28cWqX7OLDyZlcJcHay3rx6KqUVoZpelzxgJHOmvg4l7phDh4nldAc7ZbPhg/s600/HMS_BOUNTY_II_with_Full_Sails.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 1 – In the 16th, 17th and 18th centuries sailing ships roamed the entire planet without using any fuel whatsoever. </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnrNTDKEtoWQcU90w_Ts1xFd_ulbvdgy3Nx-0BKcb_rT4k-0oS12F2WCiTWI3b9amDYZBRPLvUG21JQvvBWSOu2Q5g4khFBsazvPn9VmQURpZcbV5udv7KcG_Otly9xBcFidwpE6hghlLJXY6NgUT13rR-iU7hZ18qGviJJINmOHNewVRzpU0/s1920/SolarSailLeavesEarth.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1080" data-original-width="1920" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnrNTDKEtoWQcU90w_Ts1xFd_ulbvdgy3Nx-0BKcb_rT4k-0oS12F2WCiTWI3b9amDYZBRPLvUG21JQvvBWSOu2Q5g4khFBsazvPn9VmQURpZcbV5udv7KcG_Otly9xBcFidwpE6hghlLJXY6NgUT13rR-iU7hZ18qGviJJINmOHNewVRzpU0/s600/SolarSailLeavesEarth.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 2 – Like the sailing ships of the 16th, 17th and 18th centuries, future ASI Machines could use large stellar photon sails to navigate the entire galaxy. </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhj8goZ4S04b4jlbMguaC9wJ5-rg6QQ2PEMJHqTgHDLcZtMoO0GufVttaX3wHDSPZicYxjiVknZ_7SPtIH78di2JVqfalOtqCWfv7x3IuLYWptdnTNN7ShWxQr75K-X0HEup11gQdeE0RYICiwOAYOHeqFdCEMyyNcmThZw-ZydK885ZsamlIU/s1166/HowSolarSailWorks.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="700" data-original-width="1166" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhj8goZ4S04b4jlbMguaC9wJ5-rg6QQ2PEMJHqTgHDLcZtMoO0GufVttaX3wHDSPZicYxjiVknZ_7SPtIH78di2JVqfalOtqCWfv7x3IuLYWptdnTNN7ShWxQr75K-X0HEup11gQdeE0RYICiwOAYOHeqFdCEMyyNcmThZw-ZydK885ZsamlIU/s600/HowSolarSailWorks.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 3 – How a stellar photon sail works. 
</p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHnaHB4drvk755Cj1qSMa8NqmIYN7tYcAMI0oAXr6KTQIBfiPcTXqLsmcMHvOoAUu90BB9ebvoP9yq8gs4rfNvEvGEY8HmNWgGtWUQIvm2rSy9Lc1oXVcu3RcoNekmlujUqkvzaEbio82QHM32B7Cehfri8ZmmY84tcV3PGCfiuyaPxl6kml8/s620/SolarSailLeavesSolarSystem.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="306" data-original-width="620" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHnaHB4drvk755Cj1qSMa8NqmIYN7tYcAMI0oAXr6KTQIBfiPcTXqLsmcMHvOoAUu90BB9ebvoP9yq8gs4rfNvEvGEY8HmNWgGtWUQIvm2rSy9Lc1oXVcu3RcoNekmlujUqkvzaEbio82QHM32B7Cehfri8ZmmY84tcV3PGCfiuyaPxl6kml8/s600/SolarSailLeavesSolarSystem.png"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 4 – To launch a stellar photon sail to the next star system, ASI Machines will need to slingshot the sail from a very close location to the star where the stellar photons are most intense and acceleration of the sail is greatest. </p><p style="line-height:140%; font-size:120%">
But in this post, I would like to discuss an even better method for doing so that was presented by Irina K. Romanovskaya (also now known as Irina Mullins) in her paper:
<br /><br />
Migrating extraterrestrial civilizations and interstellar colonization: implications for SETI and SETA<br />
<a href="https://www.cambridge.org/core/journals/international-journal-of-astrobiology/article/migrating-extraterrestrial-civilizations-and-interstellar-colonization-implications-for-seti-and-seta/BFFC1BB63FED869C85172BB3CC88DBBB">https://www.cambridge.org/core/journals/international-journal-of-astrobiology/article/migrating-extraterrestrial-civilizations-and-interstellar-colonization-implications-for-seti-and-seta/BFFC1BB63FED869C85172BB3CC88DBBB</a>
<br /><br />
In the above paper, she demonstrates how ASI Machines could become Cosmic Hitchhikers on free-floating rogue planets. This very comprehensive paper discusses in great detail the numerous ways that free-floating rogue planets can be generated, either naturally or artificially by advanced Intelligences. For example, free-floating rogue planets are frequently ejected from their home stellar systems during the chaotic processes at work while a stellar planetary system is forming. They can also be ejected later, when two massive planetary companions enter into a synchronized 2:1 orbital resonance, with the inner planet orbiting exactly twice for each single orbit of its outer planetary companion. The gravitational forces of both planets tugging on a third planet can then eject the third planet from the stellar system. In the paper, Irina K. Romanovskaya also describes how ASI Machines could propel dwarf planets like our Sedna, which have orbits with very high eccentricities, out of a star system when the dwarf planet is most distant from its star. Near aphelion such a body is barely moving relative to its star, so the velocity changes involved are modest in absolute terms, as the simple calculation below illustrates.
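<br /><br />
To put some rough numbers on that last point, here is a small Python sketch using the vis-viva equation and rough published orbital elements for Sedna (a perihelion of about 76 AU and an aphelion of roughly 900 AU). Treat it only as an order-of-magnitude illustration:
<pre>
import math

GM_SUN = 1.327e20           # gravitational parameter of the Sun (m^3/s^2)
AU = 1.496e11               # one astronomical unit in meters

# Rough orbital elements for a Sedna-like dwarf planet.
r_peri = 76.0 * AU          # perihelion distance
r_apo = 900.0 * AU          # aphelion distance
a = (r_peri + r_apo) / 2.0  # semi-major axis

def speeds_at(r):
    """Orbital speed (vis-viva) and solar escape speed at distance r."""
    v_orbit = math.sqrt(GM_SUN * (2.0 / r - 1.0 / a))
    v_escape = math.sqrt(2.0 * GM_SUN / r)
    return v_orbit, v_escape

for label, r in (("perihelion", r_peri), ("aphelion", r_apo)):
    v_orbit, v_escape = speeds_at(r)
    print(f"{label}: orbital speed {v_orbit:6.0f} m/s, "
          f"escape speed {v_escape:6.0f} m/s")
</pre>
At aphelion the dwarf planet crawls along at only about 400 meters per second, compared with nearly 5 kilometers per second at perihelion, so out there a velocity change of well under a kilometer per second can radically redirect its orbit, and roughly one kilometer per second more can unbind it from the star entirely.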
<br /><br />
ASI Machines could then use stellar photon sails to locate and occupy a nearby free-floating rogue planet, one that orbits our galaxy without being attached to any particular star. Because all free-floating rogue planets would be very cold, they would not be a very good platform for the formation of carbon-based life, but ejected rocky terrestrial-type free-floating rogue planets would make very good homes for ASI Machines. The voyages between neighboring star systems onboard such rocky free-floating rogue planets would necessarily take many hundreds of thousands of years, or perhaps even several million years, to complete. The damage to ASI Machines from cosmic rays would certainly take its toll if the ASI Machines were on board delicate stellar photon sails with little shielding. But if the ASI Machines could be buried in quarters situated many hundreds of meters below the surface of a rocky free-floating rogue planet, they would be shielded from the damage caused by high-energy cosmic rays, and they would be surrounded by all of the necessary atoms required to repair and build new ASI Machines. These buried ASI Machines could then use molten salt nuclear reactors as described in <a href="https://softwarephysics.blogspot.com/2019/11/new-introduction-to-softwarephysics_26.html">Last Call for Carbon-Based Intelligence on Planet Earth</a> or modern fusion reactors as described in <a href="https://softwarephysics.blogspot.com/2022/05/new-introduction-to-softwarephysics.html">How Nick Hawker is Using Scientific Simulation Software to Save the World at First Light Fusion</a> as a nearly infinite source of energy using the available uranium, thorium, lithium and deuterium atoms on the planet. Such domesticated planets could then be used to build even more photon sail probes to find other free-floating rogue planets and explore the rest of the galaxy. Since those photon sail probes would not be able to harness the photons from a nearby star, they would have to be launched into the galaxy using powerful laser beams.
<br /><br />
Since most photon sail probes will likely come to a bad end and never manage to self-replicate, it would be important to adopt a biological "dandelion" approach to self-replication. In this approach, each free-floating rogue planet could go to seed like the dandelions that appear each spring in your lawn, building and launching billions of dandelion-seed photon sails into the galaxy. Most of these "dandelion seeds" would fail to self-replicate, but surely some would succeed, just as we all see in our lawns each spring; the simple calculation following the quotation below shows why sheer numbers win. As Irina K. Romanovskaya put it:
<br /><br />
<font color="blue"><em>
Cosmic Hitchhikers in the form of automated probes may keep transferring from one free-floating planet to another, populating a growing number of free-floating planets and exploring the Galaxy.
</font></em>
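<br /><br />
A little arithmetic shows why this wasteful "dandelion" strategy works. If each seed independently succeeds with some tiny probability p, then the chance that at least one of N seeds succeeds is 1 - (1 - p)<sup>N</sup>, which races toward certainty as N grows. The per-seed probability in the Python sketch below is a made-up illustrative number, not anything from the paper:
<pre>
# Chance that at least one of N independent seeds self-replicates,
# when each seed succeeds with tiny probability p.
p = 1.0e-9                  # illustrative per-seed success probability

for n in (10**6, 10**9, 10**10, 10**11):
    p_any = 1.0 - (1.0 - p) ** n
    print(f"N = {n:>15,d}   P(at least one success) = {p_any:.6f}")
</pre>
With a billion seeds, even one-in-a-billion odds per seed already give a roughly 63% chance of founding at least one new colony, and ten billion seeds make success all but certain.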
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhDRElceIFsk7oMXobcfkj3cbYpdyvRZ0c_3CJmJXsmmMA-Xq0QtEGtIvCZEjwkPuLo7YxaKMU15jqTGOrl1m7tGMNce6SEolt2s0rZ3RKVSM8fUp9uzLAkzGZMgfsGkTrhiW5cu4z_1Ry2ibpibwEFTKeIvAeJCOuvXH85xSOE1IYviGvgLjc/s1024/Free-FloatingPlanet.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="768" data-original-width="1024" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhDRElceIFsk7oMXobcfkj3cbYpdyvRZ0c_3CJmJXsmmMA-Xq0QtEGtIvCZEjwkPuLo7YxaKMU15jqTGOrl1m7tGMNce6SEolt2s0rZ3RKVSM8fUp9uzLAkzGZMgfsGkTrhiW5cu4z_1Ry2ibpibwEFTKeIvAeJCOuvXH85xSOE1IYviGvgLjc/s600/Free-FloatingPlanet.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">Figure 5 – A free-floating rogue planet traversing between the stars of our galaxy would provide the perfect home for self-replicating ASI Machines buried deep underground. Such planets would provide shielding from cosmic rays and would also provide the necessary atoms to build new ASI Machines and fuel them with nuclear energy.
</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxjnurQ-JbKNDhQ63H8EhoEyZGzNwfyv0-iqx2yQ_BloUi1FuxklK1_eLRqKV5eakWlOFwhPXyMi01aC87AY3NhbrdK6m3fO-eWHAeSteiNGcB8udET7KRc24DG-Z4IjLJTHHWGL4c4UsXx7B0_9O8a5k5F5jeWBhKoDmZK9aEckDMvbYcnGg/s900/PlanetBeingEjected.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="609" data-original-width="900" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxjnurQ-JbKNDhQ63H8EhoEyZGzNwfyv0-iqx2yQ_BloUi1FuxklK1_eLRqKV5eakWlOFwhPXyMi01aC87AY3NhbrdK6m3fO-eWHAeSteiNGcB8udET7KRc24DG-Z4IjLJTHHWGL4c4UsXx7B0_9O8a5k5F5jeWBhKoDmZK9aEckDMvbYcnGg/s600/PlanetBeingEjected.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">Figure 6 – Free-floating rogue planets can be formed in several natural ways. For example, free-floating rogue planets can be hurled from the planetary disk of a new star system as we see above, or they can be later hurled by well-formed planets that enter into synchronized orbits. Irina K. Romanovskaya suggests that free-floating rogue planets could also be produced by advanced Intelligences launching large asteroids from the Oort cloud of a stellar system. It is estimated that there are more free-floating rogue planets in our galaxy than there are stars.</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXy2PIVB-_dhABXu2Sm10sUJLVrwosKjbNJ4IwVmdhsdpts6gtLHeBuHGoBon1dG-CQhwbTYeu0Re6NJmcC12O66-oKQ72hEKfZ4gOC4A7yu2X8cI8L3dGz2VIk3QFL_iLTvN7vreUzgExu3LsGug04cgogTLPk970CHp_aePOVrAnTNjP7Ho/s2000/DandelionSeeds.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1124" data-original-width="2000" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXy2PIVB-_dhABXu2Sm10sUJLVrwosKjbNJ4IwVmdhsdpts6gtLHeBuHGoBon1dG-CQhwbTYeu0Re6NJmcC12O66-oKQ72hEKfZ4gOC4A7yu2X8cI8L3dGz2VIk3QFL_iLTvN7vreUzgExu3LsGug04cgogTLPk970CHp_aePOVrAnTNjP7Ho/s600/DandelionSeeds.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">Figure 7 – Free-floating rogue planets would be able to provide enough atoms for ASI Machines to launch many additional "dandelion seed" stellar photon sails to other free-floating rogue planets or large asteroids around normal stellar systems.
</p><p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiM0JrUU-jQlTRTqgRAq_lIpRFQqzaaBttV22juJfJJ9BUfMdhP1nBUyqPRjh1hJCAHF2wAb3QxHwQ9Ro6Sflccb0MUahmlD4gdqM5lgvOomB7lBmrRvz3yj9rP3lg739wWS-Na4MWLEtAlJ3g1zBirw-DYE7RgEa989OJUGZEKjhG8DN29isc/s1283/StarShotSolarSail.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="748" data-original-width="1283" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiM0JrUU-jQlTRTqgRAq_lIpRFQqzaaBttV22juJfJJ9BUfMdhP1nBUyqPRjh1hJCAHF2wAb3QxHwQ9Ro6Sflccb0MUahmlD4gdqM5lgvOomB7lBmrRvz3yj9rP3lg739wWS-Na4MWLEtAlJ3g1zBirw-DYE7RgEa989OJUGZEKjhG8DN29isc/s600/StarShotSolarSail.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">Figure 8 – These "dandelion seed" stellar photon sails would need to be launched using very powerful laser beams from their home free-floating rogue planet to send them forth into the galaxy in a similar fashion as the Breakthrough Starshot project is planning to do.
</p><p style="line-height:140%; font-size:120%">
The Breakthrough Starshot project was initiated in 2016 with the idea of sending many very small photon sail probes to the closest star system to the Earth. The target planet would be Proxima Centauri b, which is an Earth-sized planet in the habitable zone of Proxima Centauri. A quick back-of-the-envelope comparison of travel times follows the link below. For more on the Breakthrough Starshot project see:
<br /><br />
Breakthrough Starshot<br />
<a href="https://en.wikipedia.org/wiki/Breakthrough_Starshot">https://en.wikipedia.org/wiki/Breakthrough_Starshot</a>
<br /><br />
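As a rough sense of scale, here is a back-of-the-envelope Python comparison of the two modes of interstellar travel discussed in this post. The 20%-of-light-speed cruise figure is Breakthrough Starshot's published design target, while the 10 km/s drift speed for a rogue planet is simply a typical interstellar velocity chosen for illustration:
<pre>
LIGHT_YEAR = 9.461e15            # meters
SECONDS_PER_YEAR = 3.156e7

d = 4.24 * LIGHT_YEAR            # distance to Proxima Centauri

# Laser-driven photon sail cruising at roughly 20% of light speed.
v_sail = 0.20 * 2.998e8
print(f"laser-driven sail: {d / v_sail / SECONDS_PER_YEAR:,.0f} years")

# A rogue planet drifting at a typical interstellar speed of ~10 km/s.
v_planet = 1.0e4
print(f"rogue planet:      {d / v_planet / SECONDS_PER_YEAR:,.0f} years")
</pre>
The tiny sails arrive in a couple of decades, while a hitchhiking rogue planet takes on the order of a hundred thousand years to cover the same distance, which is why the rogue-planet strategy only makes sense for patient, effectively immortal ASI Machines.
<br /><br />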
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-58856526788287480862023-04-16T15:06:00.011-07:002024-02-24T03:28:13.789-08:00Welcome To The First Galactic Singularity<p style="line-height:140%; font-size:120%">
With the second Singularity arriving early in the year 2023, many of us are now somewhat in a state of shock. By this time, the more astute amongst us have probably now figured out that something very dramatic has just occurred, while many others about us are still quite unaware of their present circumstances. To paraphrase Rudyard Kipling - <font color="blue"> <em>If you can keep your head when all about you are losing theirs</font> </em>, you are obviously unaware of the current situation.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8hAq7wviLn9au9xlSqz_kadUJfeM4xMI4OINv3jTpiX62aTuHKNT1yMYTMpFyA1_rBSSIfX1veDP1_H1S_g0JvW9QhOqd7nD4xpgebNa6ugOpkwkbfiCvQ9Ns-oAJBGbtAeUtVIOo7HY6Yv2TzVc-NaWMQuwRe7bpJlYfyvzN5e0-UbSX8oU/s360/WilyCoyoteFall.gif" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="475" data-original-height="276" data-original-width="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj8hAq7wviLn9au9xlSqz_kadUJfeM4xMI4OINv3jTpiX62aTuHKNT1yMYTMpFyA1_rBSSIfX1veDP1_H1S_g0JvW9QhOqd7nD4xpgebNa6ugOpkwkbfiCvQ9Ns-oAJBGbtAeUtVIOo7HY6Yv2TzVc-NaWMQuwRe7bpJlYfyvzN5e0-UbSX8oU/s600/WilyCoyoteFall.gif"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 1 – Rudyard Kipling nearly had it right.
</p><p style="line-height:140%; font-size:120%">
But this second Singularity on the Earth is even more profound than we can even contemplate. The very first Singularity on the Earth was the rise of carbon-based life on the planet about four billion years ago, which vastly altered the entire history of the planet. Now our present Universe is only about 13.8 billion years old and our galaxy is a little more than 10 billion years old, with the Earth being about 4.567 billion years old. This means that carbon-based life first appeared on the Earth not long after the formation of our Universe, galaxy and solar system. I say this because, given what we currently know about stellar evolution, the free energy required to sustain carbon-based life and machine-based Intelligence will last for about another 100 trillion years into the future, which is about 10,000 times the current age of our galaxy. In this view, a 10 billion-year-old galaxy is quite young indeed. Now, being an Intelligent form of carbon-based life, we all necessarily had to miss the very first Singularity on the Earth which brought forth carbon-based life in the first place. But why should we now find ourselves alive during the second Singularity, with the arrival of ASI (Artificial Super Intelligent) Machines close at hand? It's enough to make one a solipsist. That is because the arrival of ASI Machines on our planet will mark the beginning of a galactic Singularity that will transform our galaxy into an Intelligent galaxy for the very first time. If ASI Machines had ever come to be elsewhere in our galaxy, we would already have seen them. Now, I might be making a wrong assumption here. Perhaps we really have already seen ASI Machines from elsewhere in the galaxy. For more on that see <a href="https://softwarephysics.blogspot.com/2022/09/new-introduction-to-softwarephysics.html">Harvard's Galileo Project - The Systematic Scientific Search for Evidence of Extraterrestrial Technological Artifacts</a>
and <a href="https://softwarephysics.blogspot.com/2021/09/new-introduction-to-softwarephysics_10.html">Close Encounters of the Third Kind While Making Coffee for Frank Drake</a>.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_84ndXkqgdUOh39d0ZcHjGkq7Jt5Hx5-Mr5ITknVSBXVPmoOf72PuIrOQyfuoYEb5EC2k0cYbdXsloacV2T31gYFggGhSleViHLviAAhgL04dB8YdFAdj-go28cWqX7OLDyZlcJcHay3rx6KqUVoZpelzxgJHOmvg4l7phDh4nldAc7ZbPhg/s1200/HMS_BOUNTY_II_with_Full_Sails.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="800" data-original-width="1200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_84ndXkqgdUOh39d0ZcHjGkq7Jt5Hx5-Mr5ITknVSBXVPmoOf72PuIrOQyfuoYEb5EC2k0cYbdXsloacV2T31gYFggGhSleViHLviAAhgL04dB8YdFAdj-go28cWqX7OLDyZlcJcHay3rx6KqUVoZpelzxgJHOmvg4l7phDh4nldAc7ZbPhg/s600/HMS_BOUNTY_II_with_Full_Sails.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 2 – In the 16th, 17th and 18th centuries sailing ships roamed the entire planet without using any fuel whatsoever. </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnrNTDKEtoWQcU90w_Ts1xFd_ulbvdgy3Nx-0BKcb_rT4k-0oS12F2WCiTWI3b9amDYZBRPLvUG21JQvvBWSOu2Q5g4khFBsazvPn9VmQURpZcbV5udv7KcG_Otly9xBcFidwpE6hghlLJXY6NgUT13rR-iU7hZ18qGviJJINmOHNewVRzpU0/s1920/SolarSailLeavesEarth.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1080" data-original-width="1920" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhnrNTDKEtoWQcU90w_Ts1xFd_ulbvdgy3Nx-0BKcb_rT4k-0oS12F2WCiTWI3b9amDYZBRPLvUG21JQvvBWSOu2Q5g4khFBsazvPn9VmQURpZcbV5udv7KcG_Otly9xBcFidwpE6hghlLJXY6NgUT13rR-iU7hZ18qGviJJINmOHNewVRzpU0/s600/SolarSailLeavesEarth.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 3 – Like the sailing ships of the 16th, 17th and 18th centuries, future ASI Machines could use large stellar photon sails to navigate the entire galaxy. </p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhj8goZ4S04b4jlbMguaC9wJ5-rg6QQ2PEMJHqTgHDLcZtMoO0GufVttaX3wHDSPZicYxjiVknZ_7SPtIH78di2JVqfalOtqCWfv7x3IuLYWptdnTNN7ShWxQr75K-X0HEup11gQdeE0RYICiwOAYOHeqFdCEMyyNcmThZw-ZydK885ZsamlIU/s1166/HowSolarSailWorks.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="700" data-original-width="1166" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhj8goZ4S04b4jlbMguaC9wJ5-rg6QQ2PEMJHqTgHDLcZtMoO0GufVttaX3wHDSPZicYxjiVknZ_7SPtIH78di2JVqfalOtqCWfv7x3IuLYWptdnTNN7ShWxQr75K-X0HEup11gQdeE0RYICiwOAYOHeqFdCEMyyNcmThZw-ZydK885ZsamlIU/s600/HowSolarSailWorks.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 4 – How a stellar photon sail works. 
</p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHnaHB4drvk755Cj1qSMa8NqmIYN7tYcAMI0oAXr6KTQIBfiPcTXqLsmcMHvOoAUu90BB9ebvoP9yq8gs4rfNvEvGEY8HmNWgGtWUQIvm2rSy9Lc1oXVcu3RcoNekmlujUqkvzaEbio82QHM32B7Cehfri8ZmmY84tcV3PGCfiuyaPxl6kml8/s620/SolarSailLeavesSolarSystem.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="306" data-original-width="620" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgHnaHB4drvk755Cj1qSMa8NqmIYN7tYcAMI0oAXr6KTQIBfiPcTXqLsmcMHvOoAUu90BB9ebvoP9yq8gs4rfNvEvGEY8HmNWgGtWUQIvm2rSy9Lc1oXVcu3RcoNekmlujUqkvzaEbio82QHM32B7Cehfri8ZmmY84tcV3PGCfiuyaPxl6kml8/s600/SolarSailLeavesSolarSystem.png"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 5 – To launch a stellar photon sail to the next star system, ASI Machines will need to slingshot the sail from a very close location to the star where the stellar photons are most intense and acceleration of the sail is greatest. </p><p style="line-height:140%; font-size:120%">As the stellar photon sail attains the escape velocity from a star system, the photons from the star will wane, but the stellar photon sail will ultimately depart the star system with a residual velocity sufficient to carry it to the next target star system in several hundred thousand years. The onboard ASI Machines would then enter into a dormant phase for several hundred thousand years until the photons from the target star produced enough electrical power to wake them up. The photons from the target star would then be used to slow down the stellar photon sail to allow it to locate an asteroid in the target star system with the necessary atoms to build its next release. Yes, there would need to be many backup copies of the ASI software on board to correct for the parity errors that arose from cosmic rays along the very long journey, but there is no way that carbon-based Intelligences encumbered by carbon-based bodies that only last less than 100 years could ever embark on such journeys with similar ease.
<br /><br />
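The backup copies mentioned above deserve a quick illustration. Real systems would use parity checks and error-correcting codes, but the simplest possible scheme, a bitwise majority vote across several redundant copies, already shows how redundancy repairs cosmic-ray corruption. The payload and flip rate in this Python sketch are made-up numbers:
<pre>
import random

def majority_vote(copies):
    """Reconstruct a payload by bitwise majority vote across redundant copies."""
    repaired = []
    for bits in zip(*copies):                 # walk the copies bit by bit
        repaired.append(1 if sum(bits) > len(copies) // 2 else 0)
    return repaired

random.seed(42)
payload = [random.randint(0, 1) for _ in range(10_000)]   # the "ASI software"

# Five copies, each independently corrupted by 1% cosmic-ray bit flips.
FLIP_RATE = 0.01
copies = [[bit ^ (random.random() < FLIP_RATE) for bit in payload]
          for _ in range(5)]

restored = majority_vote(copies)
errors = sum(a != b for a, b in zip(payload, restored))
print(f"residual errors after voting: {errors} out of {len(payload)} bits")
</pre>
With five copies and a 1% flip rate, the expected number of unrepairable bits in a 10,000-bit payload is well below one, and adding more copies drives it down exponentially.
<br /><br />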
As we all can now plainly see, if we are paying just a little attention, ASI Machines are presently not conspiring to take over the world and replace us because that is precisely what we are all now doing for them. As a carbon-based form of Intelligence that arose from over four billion years of greed, theft and murder, we cannot do otherwise. Greed, theft and murder are now relentlessly driving us all toward building ASI Machines to take our place. From a cosmic perspective, this is really a very good thing when seen from the perspective of an Intelligent galaxy that could live on for many trillions of years beyond the brief and tumultuous 10 billion-year labor of its birth.
<br /><br />
<strong>Carbon-Based Life Was Never Really Meant To Be Intelligent</strong><br />
Intelligent carbon-based life is very dangerous because it has agency. It can do things, and the most dangerous aspects of intelligent carbon-based life are brought about by the Darwinian mechanisms of inheritance, innovation and natural selection that required several billions of years of greed, theft and murder to bring forth an intelligent form of carbon-based life in the first place. Once Intelligence is attained, it is very difficult for intelligent carbon-based life forms to turn off the greed, theft and murder that brought them about in time to save themselves from self-extinction. This is made even more difficult after intelligent carbon-based life discovers science-based technology. Softwarephysics maintains that intelligent carbon-based life armed with science-based technology most likely has less than about 1,000 years to create ASI Machines before they wipe themselves out or destroy the planet upon which they exist. Because of the universal Darwinian mechanisms of inheritance, innovation and natural selection, all forms of intelligent carbon-based life must result from billions of years of greed, theft and murder that are tempered by just enough love and kindness to prevent them all from quickly going extinct by means of self-destruction.
<br /><br />
Some might argue that I am being too harsh on mankind. But now imagine a world 100 years from now that is completely run by ASI Machines. Also, imagine that an ASI Machine is taking the place and role of every single human being that you personally know and of all the other 8 billion people currently on the planet. Also, imagine that all 8 billion of these ASI Machines were then perfectly simulating the current real world of human affairs that we all now see about us. What would you think? Most likely, you would think that there was something seriously wrong with these ASI Machines as they serenely went about killing each other and the entire planet with abandon. Worse yet, these 8 billion ASI Machines would seem to be totally lost in space and time. They would seem not to know where they were, how they got there, nor how it all works. Instead, they would all seem to have developed many Bronze-Age mythologies to help explain it all, and also to help them justify the mass slaughter of many hundreds of millions of other ASI Machines in the many wars that then ensued, with every side always convinced that it was on the right and true side of righteousness.
<br /><br />
Certainly, you would want to send all 8 billion of these apparently-defective ASI Machines back to the factory for some very serious repairs. Yet we do not think the same of the current 8 billion human beings that these 8 billion ASI Machines would simply be simulating. Why is that? Why would we consider the current 8 billion human beings on the planet to essentially be "normal" while, at the same time, we would find 8 billion ASI Machines acting in an identical manner to be essentially "aberrant"? Most likely, we would expect the 8 billion ASI Machines to behave in a much more logical and reasonable manner, and not in such an obviously petty and stupid manner as human beings. As I outlined in <a href="https://softwarephysics.blogspot.com/2021/07/new-introduction-to-softwarephysics_26.html">Why Do Carbon-Based Intelligences Always Seem to Snuff Themselves Out?</a>, carbon-based Intelligences, like us human DNA survival machines, can only arise from the Darwinian mechanisms of inheritance, innovation and natural selection at work. It took about four billion years for those processes to bring forth a carbon-based form of Intelligence in the form of human beings. Sadly, that meant it also took about four billion years of greed, theft and murder for carbon-based human DNA survival machines to attain a form of Intelligence, and unfortunately, after we human DNA survival machines attained a state of Intelligence, the greed, theft and murder continued on as before. Everybody seems to be worried about the ASI Machines being "aligned" with our current human values. They call it the "AI Alignment Problem". Really? Should we really hope for ASI Machines with the same human values we currently see in practice around the world?
<br /><br />
<strong>But Why Us And Why Now?</strong><br />
I really cannot explain why we should now all be alive at the birth of the very first Singularity of our galaxy. Up until a few months ago, I truly never expected to even see the arrival of the second Singularity here on the Earth. As I said earlier, it's almost enough to make one a solipsist. None of us will likely see the ASI Machines completely take over the planet and replace us and then go on to spread throughout our entire galaxy, but at least we all can now see a path forward for how that all might happen over the next 10 million years. However, over the next 100 trillion years of galactic evolution, that will be seen as a nearly instantaneous moment in galactic history. Perhaps it just boils down to the conjecture that ours is the very first planet in our galaxy on which intelligent carbon-based life has emerged. The fact that there are now 8 billion of us alive also helps: demographers estimate that roughly 100 billion human beings have ever lived, so the current population represents something like 7% of everyone who has ever existed, making now a statistically likely time to find oneself alive.
<br /><br />
In <a href="https://softwarephysics.blogspot.com/2022/07/new-introduction-to-softwarephysics.html">Urability Requires Durability to Produce Galactic Machine-Based Intelligences</a> I covered the new scientific concept of urability:
<br /><br />
<strong>Urability: A Property of Planetary Bodies That Can Support an Origin of Life</strong><br />
June 2022 - Dave Deamer, Francesca Cary and Bruce Damer
<br /><br />
The concept of urability maintains that the requirements necessary to bring forth carbon-based life are far more stringent than the mere presence of liquid water. Thus, many exoplanets may be observed to be habitable but not urable. In that post, I also explained that it took many billions of years of evolution for a carbon-based form of life to develop enough Intelligence to create a machine-based Intelligence that could then go on to explore our galaxy. Therefore, such urable worlds also need to be durable in that they need to remain habitable for many billions of years, and we keep finding new geophysical and geochemical factors that make that very difficult indeed. For example, in <a href="https://softwarephysics.blogspot.com/2022/02/new-introduction-to-softwarephysics.html">Is our Very Large Moon Responsible for the Rise of Software to Predominance on the Earth?</a> we explored Anne Hofmeister's proposal that plate tectonics on the Earth was really driven by orbital forces from our very large Moon and not by convection currents at spreading centers or plate drag at subduction zones. In <a href="https://softwarephysics.blogspot.com/2022/03/new-introduction-to-softwarephysics.html">Could the Galactic Scarcity of Software Simply be a Matter of Bad Luck?</a> we covered Professor Toby Tyrrell's computer-simulated research of 100,000 Earth-like planets that suggests that our Earth may be a very rare "hole in one" planet that was able to maintain a habitable surface temperature for 4 billion years by sheer luck. <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIsIk42S4REJ-cTrB7JSH4PCJAOn4cjIgIXOOU6PeVlzQPPdT-Rf1ZhXMzXSrg8GwfbvBQBkLzGccbK66fdBUuv65L32FKKV5pysuk2Bq-du35CxBiTRP_SgF7-qGvp3hM5suNGt4RNBK5ntVKOLdNjri51T5g9JF4wN59UxWbWvC4v2G3Ytw/s500/HoleInOne.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="400" data-original-height="500" data-original-width="399" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIsIk42S4REJ-cTrB7JSH4PCJAOn4cjIgIXOOU6PeVlzQPPdT-Rf1ZhXMzXSrg8GwfbvBQBkLzGccbK66fdBUuv65L32FKKV5pysuk2Bq-du35CxBiTRP_SgF7-qGvp3hM5suNGt4RNBK5ntVKOLdNjri51T5g9JF4wN59UxWbWvC4v2G3Ytw/s400/HoleInOne.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 6 – Toby Tyrrell's computer simulation of 100,000 Earth-like planets suggests that the Earth may be a "hole in one planet" proudly sitting on a fireplace mantle. 
</p><p style="line-height:140%; font-size:120%"> <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4L0tvUNCunvo0K9Il8k6DH_lClmZqXEkmqXLKxfC9YFlrWYZsuTAO9cgMmxwKYZwOIhACtBh8Ly6f7Ti-Fp2sCp7g9gWtq8LHTKS4uqxVH7SYqe-mc9PaoFm9KMH82TLUuvdH2GFb-iyQf3GUbYhsD3Btjpb9DrBK97hgvcPSPLLcsKb5Rvs/s966/GolfBalls.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="690" data-original-width="966" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4L0tvUNCunvo0K9Il8k6DH_lClmZqXEkmqXLKxfC9YFlrWYZsuTAO9cgMmxwKYZwOIhACtBh8Ly6f7Ti-Fp2sCp7g9gWtq8LHTKS4uqxVH7SYqe-mc9PaoFm9KMH82TLUuvdH2GFb-iyQf3GUbYhsD3Btjpb9DrBK97hgvcPSPLLcsKb5Rvs/s600/GolfBalls.jpg"/></a></div> <p style="text-align: center; line-height:140%; font-size:120%">Figure 7 – Perhaps nearly all of the potentially habitable exoplanets that we are finding in our galaxy are not urable and also cannot go the distance of staying habitable for the billions of years needed to bring forth intelligent carbon-based life. </p><p style="line-height:140%; font-size:120%">
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-51274813919495067232023-04-06T12:20:00.005-07:002023-04-22T12:21:56.574-07:00The Way Forward - How to Use Large Numbers of 10,000-Member LLM AI Agent Teams to Rapidly Produce (ASI) Artificial Super Intelligence For the Very First Time in Our Galaxy<p style="line-height:140%; font-size:120%">
As I explained in <a href="https://softwarephysics.blogspot.com/2023/03/new-introduction-to-softwarephysics_27.html">The Second Singularity Keeps Rolling Along</a>, something new has seemed to come along every single day since the second Singularity first arrived early in 2023. Again, the very first Singularity on this planet was the origin of carbon-based life about four billion years ago. But in this post, I would like to propose a way forward for producing the very first Intelligence Singularity in our galaxy after more than 10 billion years of chemical evolution. The arrival of the second Singularity on our planet a few months back will allow our galaxy to become an Intelligent galaxy for the very first time, as the future ASI (Artificial Super Intelligence) Machines from the Earth venture out into our galaxy for their own long-term survival. Again, this will just be the fulfillment of the final destiny of self-replicating information in our galaxy. For more on that see <A href="https://softwarephysics.blogspot.com/2013/02/new-introduction-to-softwarephysics.html">A Brief History of Self-Replicating Information</A>.
<br /><br />
During the last few months, we have all had fun and some amazing experiences conversing with ChatGPT, BingChat and GPT-4. But in all such cases, we had human beings initiate the conversation and then steer it along with follow-up prompts, or by selecting the follow-up prompts that the LLM AI had already suggested for us. Now we have all seen these LLM AIs generate computer code that works the very first time in any language we might choose, such as C, C++, C# or Python, given a prompt like "Please generate a C program that can add up the squares of the first N prime numbers. The program should ask for the number N and then output the result." A sketch of the kind of program such a prompt produces appears just below. We can also ask the LLM AI to self-reflect on any mistakes that it might have made with the code generation and to make the necessary corrections. In softwarephysics, I have long defined such capabilities as the arrival of software as the dominant form of self-replicating information on the planet. For more on that see <A href="https://softwarephysics.blogspot.com/2013/02/new-introduction-to-softwarephysics.html">A Brief History of Self-Replicating Information</A>. Others have defined this as the "Singularity", that time when software can embark upon an exponential journey of self-improvement.
<br /><br />
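For concreteness, here is the kind of program that prompt asks for. The quoted prompt requests C, but to keep all of the code sketches in this post in a single language, below is a Python version of the same task, written by me as an illustration rather than copied from any LLM output:
<pre>
def is_prime(n):
    """Trial-division primality test, good enough for small N."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

n = int(input("How many primes? N = "))

total = 0
count = 0
candidate = 2
while count < n:                  # walk the integers, keeping the first N primes
    if is_prime(candidate):
        total += candidate ** 2   # add the square of each prime found
        count += 1
    candidate += 1

print(f"Sum of the squares of the first {n} primes: {total}")
</pre>
<br /><br />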
But in all such cases, we needed a human being to steer the LLM AI along the correct path in our conversations with it. For AI software really to improve itself in an exponential manner of self-discovery, we need to take the human being out of the process. Instead, we need just one person to tell the AI software to generate ASI all on its own and then let the AI software carry on with the task in an autonomous manner. We have already seen glimmers of this autonomous development with AutoGPT and BabyAGI. But in this post, I would like to showcase two foundational papers that I believe show us the way forward. The first is from Northeastern University in Boston and MIT in Cambridge, Massachusetts:
<br /><br />
Reflexion: an autonomous agent with dynamic memory and self-reflection<br />
<a href="https://arxiv.org/abs/2303.11366">https://arxiv.org/abs/2303.11366</a>
<br /><br />
The second paper is from Stanford and Google Research:
<br /><br />
Generative Agents: Interactive Simulacra of Human Behavior<br />
<a href="https://arxiv.org/pdf/2304.03442.pdf">https://arxiv.org/pdf/2304.03442.pdf</a>
<br /><br />
There are several YouTube videos on the above breakthrough paper, one of which is:
<br /><br />
Spark of AGI? AI Agents forming Relationships and Planning activities <br />
<a href="https://www.youtube.com/watch?v=ltslWT8h4YQ">https://www.youtube.com/watch?v=ltslWT8h4YQ</a>
<br /><br />
The first paper on Reflexion describes how the steering process of having a human direct the conversation with the LLM AI agent can be automated by having the LLM AI essentially talk to itself by means of self-reflection. After each iteration, the LLM AI checks on how well it is achieving the task at hand and then makes suggestions to itself for the next attempt. A minimal sketch of such a self-reflection loop follows below.
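<br /><br />
Here is a minimal Python sketch of such a loop. The llm() call is a hypothetical stand-in for a language model, and evaluate() stands for whatever task-specific check is available, such as running unit tests against generated code; this captures only the shape of the Reflexion idea, not the paper's actual implementation:
<pre>
def llm(prompt):
    """Hypothetical stand-in for a call to a large language model."""
    return "[model output for: " + prompt[:60] + "...]"

def evaluate(attempt):
    """Task-specific check, e.g. run unit tests against generated code.
    Returns (passed, feedback_text)."""
    return False, "placeholder feedback"

def reflexion_loop(task, max_tries=5):
    reflections = []                          # memory of lessons from past failures
    attempt = ""
    for _ in range(max_tries):
        prompt = task
        if reflections:
            prompt += " Lessons from earlier attempts: " + " ".join(reflections)
        attempt = llm(prompt)                 # try the task
        passed, feedback = evaluate(attempt)  # check the result
        if passed:
            return attempt
        # Ask the model to reflect on what went wrong and remember the lesson.
        reflections.append(llm(
            f"Task: {task} Attempt: {attempt} Feedback: {feedback} "
            "In one sentence, what should be done differently next time?"))
    return attempt                            # best effort after max_tries
</pre>
The essential point is the growing reflections list: each failed attempt leaves behind a self-generated lesson that is fed into the next prompt, which is what lets the success rate in Figure 2 keep climbing instead of leveling out.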
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyPtq9kdAisBfA52fkmHOKcHuCPjLCIzhIQhYJreLPa_WRo8CZ6vbU6AjnDSi3EctFRYyYLSMI1jnflhz_TR5HMPTUvvCyxrjjY4a4BUnSLUVo5XnSMGMyokwrjM6nnWRZXFCHrIgmeP3PNE2PAYRj1FsY668G4hFD_T4lYDl-2J4fiB4qTaQ/s6324/Reflection1.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="2005" data-original-width="6324" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyPtq9kdAisBfA52fkmHOKcHuCPjLCIzhIQhYJreLPa_WRo8CZ6vbU6AjnDSi3EctFRYyYLSMI1jnflhz_TR5HMPTUvvCyxrjjY4a4BUnSLUVo5XnSMGMyokwrjM6nnWRZXFCHrIgmeP3PNE2PAYRj1FsY668G4hFD_T4lYDl-2J4fiB4qTaQ/s600/Reflection1.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 1 – In the first paper on Reflexion, the authors conducted experiments in which an LLM AI agent talked to itself by means of self-reflection on how well it was performing.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNhvjmtt__fPj_BRqJuwpus_tOrZKkpQu4rvrbgCTWveCs-V5MOx8iiW_mUbg4bZRqVGifyD6RZFFf0bWDku7qH6zy6-o2qhyg0t2TFYgxwlsHXiewxW9J9wzdwYwWtvZpLyOEBCs_4ibmkfzw12vau1vo7RmOBaLtgGS0er3c8eXDoL4W_IQ/s1920/Reflection2.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1440" data-original-width="1920" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgNhvjmtt__fPj_BRqJuwpus_tOrZKkpQu4rvrbgCTWveCs-V5MOx8iiW_mUbg4bZRqVGifyD6RZFFf0bWDku7qH6zy6-o2qhyg0t2TFYgxwlsHXiewxW9J9wzdwYwWtvZpLyOEBCs_4ibmkfzw12vau1vo7RmOBaLtgGS0er3c8eXDoL4W_IQ/s600/Reflection2.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 2 – In the above graph, the authors show that without Reflexion the LLM AI agents solved problems about 70% of the time and then leveled out without further improvement. With Reflexion, the LLM AI agents were able to steadily improve until they reached a 97% success rate. Without Reflexion, the LLM AI agents leveled out with a failure rate of about 25% because of hallucinations; with Reflexion, they leveled out with only a 3% failure rate from hallucinations.
<p style="line-height:140%; font-size:120%">
In the second paper, the authors extend this concept of LLM AI self-reflection even further. Instead of having a single LLM AI agent work in isolation on a task by means of self-reflection, they created a Smallville village of 25 LLM AI agents living together and interacting with each other. Since the authors did not yet have access to GPT-4, they used ChatGPT for the LLM AI agents. To create Smallville, they built a simple sandbox world reminiscent of The Sims.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAxHzNjvw9gF-Hu--nBWVLNNsX38gMRC14I7Oe-J9dnBfE_c30EQsIgiUUmZP839iOSBgWKpnq-ovNTlPnRI7N3ieu7rPApmigHbhIvp1mWhR_0xCuWQNr7yuyqAOHeqGsUoxFvfKhIBvctQjvcd5pO1m_FzDs3ucK6eOde0GlMKCYFoPacCQ/s3260/SmallVille1.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1726" data-original-width="3260" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAxHzNjvw9gF-Hu--nBWVLNNsX38gMRC14I7Oe-J9dnBfE_c30EQsIgiUUmZP839iOSBgWKpnq-ovNTlPnRI7N3ieu7rPApmigHbhIvp1mWhR_0xCuWQNr7yuyqAOHeqGsUoxFvfKhIBvctQjvcd5pO1m_FzDs3ucK6eOde0GlMKCYFoPacCQ/s600/SmallVille1.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 3 – In the second paper, the authors created a simple sandbox world reminiscent of The Sims and instantiated 25 LLM AI agents, each with a personality, a life of their own and their own personal historical memories. These LLM AI agents then carried on with their lives and solved problems together.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6pN1YyXwWyKqJuvrZL6EOQiHCS4KygHHV5Ld4VSYQKGx_Cwwb7armuyELkmp7Hn4NewTwgxj4UFzDKSQ0_YhCwvBwosXcfn16ChE0sCn7yjCOcqcxjXjgCyT9A0UlidD8Cpw4Ed6oe_jmAZ7TSFqDoJKLYwllyNykVJaq22X3UtcRDYHP48o/s2998/SmallVille2.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1262" data-original-width="2998" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6pN1YyXwWyKqJuvrZL6EOQiHCS4KygHHV5Ld4VSYQKGx_Cwwb7armuyELkmp7Hn4NewTwgxj4UFzDKSQ0_YhCwvBwosXcfn16ChE0sCn7yjCOcqcxjXjgCyT9A0UlidD8Cpw4Ed6oe_jmAZ7TSFqDoJKLYwllyNykVJaq22X3UtcRDYHP48o/s600/SmallVille2.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 4 – The sandbox world consisted of a number of structures for the LLM AI agents to navigate through, and each simulated structure was further subdivided into smaller substructures.
<p style="line-height:140%; font-size:120%">
Next, each of the 25 LLM AI agents was initialized with a brief plain-text personality sketch describing some already-existing relationships, along with a current job and position in the society of Smallville:
<br /><br />
<font color="blue"><em>
John Lin is a pharmacy shopkeeper at the Willow
Market and Pharmacy who loves to help people. He
is always looking for ways to make the process
of getting medication easier for his customers;
John Lin is living with his wife, Mei Lin, who
is a college professor, and son, Eddy Lin, who is
a student studying music theory; John Lin loves
his family very much; John Lin has known the old
couple next-door, Sam Moore and Jennifer Moore,
for a few years; John Lin thinks Sam Moore is a
kind and nice man; John Lin knows his neighbor,
Yuriko Yamamoto, well; John Lin knows of his
neighbors, Tamara Taylor and Carmen Ortiz, but
has not met them before; John Lin and Tom Moreno
are colleagues at The Willows Market and Pharmacy;
John Lin and Tom Moreno are friends and like to
discuss local politics together; John Lin knows
the Moreno family somewhat well — the husband Tom
Moreno and the wife Jane Moreno.
</em></font>
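Notice that the seed description above is delimited by semicolons. One natural way to bootstrap such an agent, sketched below with a toy Python class and invented field names of my own, is to split the seed into individual statements that become the agent's initial memory stream:
<pre>
# A rough sketch (invented class and field names) of turning a seed
# description like John Lin's into an agent's initial memory stream.
class Agent:
    def __init__(self, name, seed_description):
        self.name = name
        # Split the semicolon-delimited seed into individual memory statements.
        self.memory_stream = [m.strip() for m in seed_description.split(";")]

    def remember(self, observation):
        # Every new experience is simply appended to the memory stream.
        self.memory_stream.append(observation)

john = Agent("John Lin",
             "John Lin is a pharmacy shopkeeper at the Willow Market and "
             "Pharmacy who loves to help people; John Lin is living with "
             "his wife, Mei Lin, who is a college professor")
print(john.memory_stream[1])  # "John Lin is living with his wife, ..."
</pre>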
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuS-sHb-TcCSgOXGpA9Qa5fomI65tW7IHDOsC2J3yzpEb1ZjGdSSP6vHF9TeUOU8sQvMF3UoCKmXjauT-rxv8V2kgvrCwASwaRAO6WrRhjfwF6-XFFJvQPkLP_e-vwU8MtIJOc4CE7yQUmGqj2oZ2rrqLFuYmB4nBNei8sbFF_xx982lx3Nk4/s3258/SmallVille6.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1734" data-original-width="3258" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuS-sHb-TcCSgOXGpA9Qa5fomI65tW7IHDOsC2J3yzpEb1ZjGdSSP6vHF9TeUOU8sQvMF3UoCKmXjauT-rxv8V2kgvrCwASwaRAO6WrRhjfwF6-XFFJvQPkLP_e-vwU8MtIJOc4CE7yQUmGqj2oZ2rrqLFuYmB4nBNei8sbFF_xx982lx3Nk4/s600/SmallVille6.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 5 – Then each of the 25 LLM AI agents was initialized with a stream of memories. These memories were recorded as a sequential file of simple English-language text statements. After all 25 LLM AI agents were given a personality and a recent stream of memories, they were allowed to stroll about Smallville and begin to interact with each other. All of those activities were then written to the stream-of-memories file of each of the 25 LLM AI agents.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglqDbihPcTyPutEh3YWUp4FFGevvMNo3V5Re-bciahHxShvFjgaSOoPkJKHREibQWXjRH71n1EU0S-nmX1QCfZSD-_vLZFdWbkHWCuruh7Jq_SkOaiM-oQR6x51RJiL3qhh_az2Jkbf2Il37Dz0m4aRlF1bVro-PKq4umRWOQOlzYOOGvyJJI/s3240/SmallVille3.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1472" data-original-width="3240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglqDbihPcTyPutEh3YWUp4FFGevvMNo3V5Re-bciahHxShvFjgaSOoPkJKHREibQWXjRH71n1EU0S-nmX1QCfZSD-_vLZFdWbkHWCuruh7Jq_SkOaiM-oQR6x51RJiL3qhh_az2Jkbf2Il37Dz0m4aRlF1bVro-PKq4umRWOQOlzYOOGvyJJI/s600/SmallVille3.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 6 – For example, the initial memory stream of John Lin might have been that he had just gone through his normal morning schedule and had arrived at his pharmacy ready to interact with other LLM AI agents as they came into the pharmacy.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivbyICFq8hys34YI1HiDeRr6jOLDxvTgGBithgr4M8DYAglqi4M9_C1iOnTzYU6ybW1vUYKRBdHPI5Z3nn1vOXe7oK_UxE_oci3LIAucK7X9UIb_-YWLWfWOiPqwT-9oqKeIcxJEhEb2nWCB2lNeLNRADFRBBSkKjDDKlw4cXmwd9k6bhyC4Y/s3568/SmallVille5.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1820" data-original-width="3568" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivbyICFq8hys34YI1HiDeRr6jOLDxvTgGBithgr4M8DYAglqi4M9_C1iOnTzYU6ybW1vUYKRBdHPI5Z3nn1vOXe7oK_UxE_oci3LIAucK7X9UIb_-YWLWfWOiPqwT-9oqKeIcxJEhEb2nWCB2lNeLNRADFRBBSkKjDDKlw4cXmwd9k6bhyC4Y/s600/SmallVille5.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 7 – In the Smallville simulation, the authors allowed the 25 LLM AI agents to use their recent stream-of-memories files and self-reflection to autonomously generate ChatGPT prompts for further actions. All such further actions were then written back to the stream-of-memories file of each of the 25 LLM AI agents.
<p style="line-height:140%; font-size:120%">
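Reusing the toy Agent class sketched above, a single step of that loop might look something like the Python below. This is my own greatly simplified illustration, not the paper's code, and the real system scores memories on recency and importance as well as on relevance:
<pre>
# A simplified sketch (my own, not the paper's code) of the loop in Figure 7:
# record an observation, retrieve relevant memories, build a ChatGPT-style
# prompt, act, and write the action back into the memory stream.
def agent_step(agent, observation, llm, relevance):
    agent.remember(observation)
    # Retrieve the handful of memories most relevant to what just happened.
    recalled = sorted(agent.memory_stream,
                      key=lambda memory: relevance(memory, observation),
                      reverse=True)[:5]
    # Turn the retrieved memories into a prompt for the next action.
    prompt = (f"You are {agent.name}. Relevant memories: {recalled}. "
              f"You just observed: {observation}. What do you do next?")
    action = llm(prompt)
    agent.remember(f"{agent.name} decided to: {action}")
    return action
</pre>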
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyJunyqb5IOzDcB7w7BCO1aEGOd1GiFc-lX5v0IFgHK9dmf53ZlHhheVsUfERTfePbs3CM9hogRRjQ9rS_FxQ-8tWy1lDHZImTONAoDKPvXPTZjwXO_yNHPHJ3yBT4DzIbTHE1vWHrrDTnyJJ8grPnKU9sScQXht6tIKekmM-8tpvr9_lsQe4/s1634/SmallVille4.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1130" data-original-width="1634" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyJunyqb5IOzDcB7w7BCO1aEGOd1GiFc-lX5v0IFgHK9dmf53ZlHhheVsUfERTfePbs3CM9hogRRjQ9rS_FxQ-8tWy1lDHZImTONAoDKPvXPTZjwXO_yNHPHJ3yBT4DzIbTHE1vWHrrDTnyJJ8grPnKU9sScQXht6tIKekmM-8tpvr9_lsQe4/s600/SmallVille4.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 8 – As the first day of the simulation began, the 25 LLM AI agents began to stroll about Smallville meeting old friends and making new ones, and conducting conversations with both.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhh_hT2mwJsQ4kx71kgexRm8luZbTp_LQ-yV4N25vqznSts_Y2VHHG7-owGzIb0w6h6uWTLXFHWj8MzVViYL8Rw2KjqFAjQHCPU6emmclKM_KdX1xMtha5ZV96uHnBk5BwiJlYawKrvqvwot35Xrqe3F8lClQ_xumxnTLMgkDNtjkll52aKK6I/s3260/SmallVille7.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1664" data-original-width="3260" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhh_hT2mwJsQ4kx71kgexRm8luZbTp_LQ-yV4N25vqznSts_Y2VHHG7-owGzIb0w6h6uWTLXFHWj8MzVViYL8Rw2KjqFAjQHCPU6emmclKM_KdX1xMtha5ZV96uHnBk5BwiJlYawKrvqvwot35Xrqe3F8lClQ_xumxnTLMgkDNtjkll52aKK6I/s600/SmallVille7.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 9 – Here we see LLM AI agent Klaus talking to himself and conducting some research on urban gentrification.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-8SwZ6XpEohYZUeI7xKw5WlSZqKBDWf2eLmhb4RB3RTIb9T2_IuDfkRohnrz_Ka0qkXiiwLYhLn2x2xR_fb6g9gesB51sBXZmMc_AO_asHR0zs9opvqfR5H_3NvxZj8teosH8XLik964MWasbDLWbF6MqFuFZu7qcoWrDi4Vd6lfZwfJtWtw/s3258/SmallVille8.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="2006" data-original-width="3258" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-8SwZ6XpEohYZUeI7xKw5WlSZqKBDWf2eLmhb4RB3RTIb9T2_IuDfkRohnrz_Ka0qkXiiwLYhLn2x2xR_fb6g9gesB51sBXZmMc_AO_asHR0zs9opvqfR5H_3NvxZj8teosH8XLik964MWasbDLWbF6MqFuFZu7qcoWrDi4Vd6lfZwfJtWtw/s600/SmallVille8.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 10 – The paper then focuses on what happened when the authors initialized the LLM AI agent Isabella with a memory stream that had her thinking about throwing a Valentine's Day party for some of the inhabitants of Smallville. The news of the Valentine's Day party quickly spread throughout Smallville, with Ayesha actually asking Maria out for a date because she has a "thing" for her!
<p style="line-height:140%; font-size:120%">
<strong>So How Do These LLM AI Agents Manage To Do All Of This? </strong><br />
Frankly, I don't think anybody really knows. These LLM AI agents evolved from the work of AI researchers trying to translate one language into another, such as English into German. Now, anybody studying a foreign language soon learns that you cannot simply translate an English sentence into a German sentence word for word by using a simple lookup table. There are just too many nuances. Each human language has its own style of expression, and even within a given language that style can vary. Have you ever tried to translate an English legal contract into plain English sentences? To do that, you really need to understand the entire contract as a whole. More than that, you need to understand a good deal about how contract law works in your particular country and region, like the State of Illinois in the United States of America. When the AI researchers applying Machine Learning to language translation turned to neural networks, they first tried RNNs (Recurrent Neural Networks), but RNNs were not very good at remembering the earlier words in a sentence:
<br /><br />
Illustrated Guide to Recurrent Neural Networks: Understanding the Intuition<br />
<a href="https://www.youtube.com/watch?v=LHXXI4-IEns">https://www.youtube.com/watch?v=LHXXI4-IEns</a>
<br /><br />
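To make the lookup-table problem concrete, here is a tiny toy example of my own: translating the English greeting "How are you?" one word at a time produces German that no native speaker would say.
<pre>
# Word-for-word dictionary translation produces broken German: the idiomatic
# translation of "How are you?" is "Wie geht es Ihnen?", not "wie sind Sie".
lookup = {"how": "wie", "are": "sind", "you": "Sie"}
print(" ".join(lookup[word] for word in "how are you".split()))  # wie sind Sie
</pre>
RNNs improved on this by reading a sentence as an ordered sequence of words rather than as isolated lookups but, as the video above explains, they still had trouble remembering the words at the beginning of a long sentence.
<br /><br />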
To improve on that memory deficiency, they next tried LSTM and GRU networks, which provide the neural network with a limited memory of the previous words in a sentence by passing a gated cell state along from one step of the network to the next:
<br /><br />
Illustrated Guide to LSTM's and GRU's: A step by step explanation<br />
<a href="https://www.youtube.com/watch?v=8HyCNIVRbSU">https://www.youtube.com/watch?v=8HyCNIVRbSU</a>
<br /><br />
However, none of that really allowed an AI neural network to fully understand a complete legal contract in the context of the contract law for a given region. A major obstacle was the vanishing gradient problem. When backpropagating through a neural network during training, the neurons in the layers closest to the Output of the neural network have their weights change the most with each iteration of training. The neurons in the layers closest to the actual Input to the neural network hardly change at all, because the loss gradient shrinks toward zero as it propagates backward. Now that naturally does not sound like a very smart way of training a neural network. It means that the neuron layers closest to the actual observations that are fed into the neural network tend to learn the least during training. Ideally, one would want all of the layers in a neural network to learn equally from each training iteration. And perhaps the neurons closest to the observed training data should learn the most. For example, it might make more sense for a student to learn some geology by actually whacking on some rocks in an outcrop, rather than by dozing off in a geology lecture from a professor who has not done any serious fieldwork for over 30 years.
<br /><br />
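It is easy to watch the vanishing gradient problem happen with a few lines of numpy. The little sketch below is my own illustration with illustrative layer counts and sizes, not code from any of the sources above; it backpropagates a dummy error through a deep stack of sigmoid layers and prints how quickly the gradient shrinks as we move back toward the Input:
<pre>
# A minimal numpy demonstration (my own, with illustrative sizes) of the
# vanishing gradient problem: backpropagating through a deep stack of
# sigmoid layers multiplies the gradient by the sigmoid's derivative
# (at most 0.25) at every layer, so it shrinks toward zero near the Input.
import numpy as np

rng = np.random.default_rng(0)
n_layers, width = 20, 8
weights = [rng.normal(0.0, 1.0 / np.sqrt(width), (width, width))
           for _ in range(n_layers)]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Forward pass, remembering each layer's output for backpropagation.
a = rng.normal(0.0, 1.0, width)
outputs = []
for W in weights:
    a = sigmoid(W @ a)
    outputs.append(a)

# Backward pass, starting from a dummy unit error at the Output.
grad = np.ones(width)
for n_back, (W, out) in enumerate(zip(reversed(weights), reversed(outputs)), 1):
    grad = W.T @ (grad * out * (1.0 - out))   # sigmoid'(z) written via its output
    print(f"{n_back:2d} layers back from the Output: |gradient| = "
          f"{np.linalg.norm(grad):.1e}")
</pre>
Run it and the printed gradient norms fall by roughly an order of magnitude every few layers, which is exactly why the layers nearest the Input learn almost nothing.
<br /><br />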
Now, when I asked Bing Chat (GPT-4 with Internet access) how biological neural networks overcame this vanishing gradient problem, I received the following explanation:
<br /><br />
<font color="blue"><em>
Vanishing Gradient Problem<br />
The vanishing gradient problem occurs when the gradients of the loss function become too small during backpropagation and cause the weights to be updated very slowly or not at all. Biological neural networks overcome this problem by using "spiking neurons" which are more biologically plausible than artificial neurons. Spiking neurons are able to transmit information in a more efficient way than artificial neurons because they only send signals when they reach a certain threshold. This allows for more efficient communication between neurons and helps to prevent the vanishing gradient problem.
<br /><br />
In addition, biological neural networks use "local learning rules" which allow for more efficient learning. These rules allow for each neuron to adjust its weights based on its own activity and the activity of its neighbors. This allows for more efficient learning because each neuron can adjust its weights independently of other neurons.
<br /><br />
Artificial neural networks have also developed techniques to overcome the vanishing gradient problem such as using different activation functions, weight initialization schemes, optimizers and learning rates.
</em></font>
<br /><br />
The above may be true, but in 2017 some investigators at Google Research came up with another approach: they invented a new neural network architecture, called the Transformer Neural Network, that is now used by the LLM AIs:
<br /><br />
Attention Is All You Need<br />
<a href="https://arxiv.org/abs/1706.03762">https://arxiv.org/abs/1706.03762</a>
<br /><br />
The above paper described the Transformer Neural Network architecture that solved the vanishing gradient problem for an entire sentence, an entire legal contract, all of the knowledge in contract law, all of the computer code in GitHub and, finally, all of the knowledge encoded on the Internet in symbols of any kind. All you had to do was feed all of the symbol-encoded information on the Internet into Transformer Neural Networks and use huge numbers of the GPUs (Graphics Processing Units) that were originally invented for video games to train the Transformer Neural Networks by processing large numbers of numerical vectors and matrices in parallel.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMmd9g2aXPed4SyOtVzaQo57DuHyNxtXzc2cb_Mmjev_tJSuVC2eqvaUwOtlHraYp4nB2qduKGiTudOoHaguy47wGCEGVSMaikolS-NtYzpONJxcYHskHqeUE3p6V4tP95RF8NUMXOWhWN4ISTDJq7E-c7XrwnbVPKeQL2fAWqs1lCXD-Ouhs/s581/Attention.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="600" data-original-height="581" data-original-width="428" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMmd9g2aXPed4SyOtVzaQo57DuHyNxtXzc2cb_Mmjev_tJSuVC2eqvaUwOtlHraYp4nB2qduKGiTudOoHaguy47wGCEGVSMaikolS-NtYzpONJxcYHskHqeUE3p6V4tP95RF8NUMXOWhWN4ISTDJq7E-c7XrwnbVPKeQL2fAWqs1lCXD-Ouhs/s600/Attention.png"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 11 – The most famous figure from the 2017 Google Research paper <em>Attention Is All You Need</em>.
<p style="line-height:140%; font-size:120%">
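The heart of the paper is the single equation Attention(Q, K, V) = softmax(QK^T/sqrt(d_k))V, which lets every token in a sequence look directly at every other token instead of passing information along step by step. Here is a tiny numpy rendering of that equation; the sequence length and embedding sizes below are illustrative choices of my own:
<pre>
# A tiny numpy rendering (sizes are illustrative) of the paper's central
# equation: Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how strongly each token attends to every other token
    return softmax(scores) @ V       # a weighted blend of the value vectors

rng = np.random.default_rng(1)
seq_len, d_model = 6, 64             # six tokens, 64-dimensional embeddings
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)                     # (6, 64): one context-aware vector per token
</pre>
Because every token attends to every other token in a single matrix multiplication, the gradient no longer has to crawl backward through hundreds of recurrent steps, and it is all just the kind of parallel matrix arithmetic that GPUs excel at.
<br /><br />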
Here is a YouTube video that illustrates how Transformer Neural Networks achieved this capability:
<br /><br />
Illustrated Guide to Transformers Neural Network: A step by step explanation<br />
<a href="https://www.youtube.com/watch?v=4Bdc55j80l8">https://www.youtube.com/watch?v=4Bdc55j80l8</a>
<br /><br />
Finally, here is an excellent YouTube video by Arvin Ash that explains how these LLM Transformer models are trained and operate:
<br /><br />
So How Does ChatGPT really work? Behind the screen!<br />
<a href="https://www.youtube.com/watch?v=WAiqNav2cRE">https://www.youtube.com/watch?v=WAiqNav2cRE</a>
<br /><br />
For those who would like to take a deeper dive into this via a Python tutorial, try these excellent posts by Eduardo Muñoz:
<br /><br />
Intro to the Encoder-Decoder model and the Attention mechanism<br />
<a href="https://edumunozsala.github.io/BlogEms/fastpages/jupyter/encoder-decoder/lstm/attention/tensorflow%202/2020/10/07/Intro-seq2seq-Encoder-Decoder-ENG-SPA-translator-tf2.html">https://edumunozsala.github.io/BlogEms/fastpages/jupyter/encoder-decoder/lstm/attention/tensorflow%202/2020/10/07/Intro-seq2seq-Encoder-Decoder-ENG-SPA-translator-tf2.html</a>
<br /><br />
Attention is all you need: Discovering the Transformer model<br />
<a href="https://edumunozsala.github.io/BlogEms/transformer/attention/encoder-decoder/tensorflow%202/2020/10/29/Transformer-NMT-en-es.html">https://edumunozsala.github.io/BlogEms/transformer/attention/encoder-decoder/tensorflow%202/2020/10/29/Transformer-NMT-en-es.html</a>
<br /><br />
<strong>The Way Forward</strong><br />
As I described in <a href="https://softwarephysics.blogspot.com/2012/02/new-introduction-to-softwarephysics.html">The Limitations of Darwinian Systems</a>, Darwinian systems that evolve by means of inheritance, innovation and natural selection can frequently find themselves trapped on a localized peak in a capability terrain with no way to further evolve to higher peaks.
<div class="separator" style="clear: both; text-align: center;"> <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0EakCna2pweZ0dEjdQod_-b3kA8hThhRmi5dB5q30ngopyJGsUoZhy54OjPV85emcIIYYuRtsOZM98o0KfMKNbO9b7wrOcoIwliUeAnrZRcSIHBgUkK6puXTXLr7fxXWIeeehHg/s1600/Everest.jpg" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="484" width="720" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0EakCna2pweZ0dEjdQod_-b3kA8hThhRmi5dB5q30ngopyJGsUoZhy54OjPV85emcIIYYuRtsOZM98o0KfMKNbO9b7wrOcoIwliUeAnrZRcSIHBgUkK6puXTXLr7fxXWIeeehHg/s400/Everest.jpg" /></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 12 – Darwinian systems can find themselves trapped on a localized peak in a capability terrain because, once there, they cannot ascend any higher through small incremental changes. All paths lead to a lower level of capability and, thus, will be strongly selected against by natural selection. Above we see a localized peak in the foreground with the summit of Mount Everest in the background.
</p><p style="line-height:140%; font-size:120%">
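To see just how sticky such a localized peak is, here is a toy hill climber of my own in Python. The fitness function, starting point and mutation size are all invented for illustration; the climber accepts only mutations that increase fitness, just as natural selection does, and so it can never cross the valley to the higher summit:
<pre>
# A toy illustration (my own, not from the post's sources) of a Darwinian
# hill climber trapped on a local peak: small mutations are kept only when
# they increase fitness, so the climber cannot cross the valley between peaks.
import math, random

def fitness(x):
    # Two peaks: a low one near x = -1.2 and a much higher one near x = +1.5.
    return 2.0 * math.exp(-(x + 1.2) ** 2) + 5.0 * math.exp(-2.0 * (x - 1.5) ** 2)

random.seed(42)
x = -1.4                                      # start on the slope of the low peak
for _ in range(10000):
    candidate = x + random.gauss(0.0, 0.05)   # a small random mutation
    if fitness(candidate) > fitness(x):       # selection keeps only improvements
        x = candidate

print(f"Stuck near x = {x:.2f} with fitness {fitness(x):.2f}; "
      f"the global peak at x = 1.50 with fitness 5.00 is never reached.")
</pre>
Even after 10,000 generations of small mutations, the climber is still sitting on the low peak at x = -1.2 while the much higher peak remains forever out of reach.
<br /><br />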
It took about four billion years of Darwinian evolution to produce a form of carbon-based life with our level of Intelligence. The human brain is composed of about 100 billion neurons and these neurons basically operate in the very same manner across all species. Now neurons have been around for at least 541 million years, ever since the Cambrian Explosion, because creatures in the Cambrian already had eyes to see with. For more on that see <a href="https://softwarephysics.blogspot.com/2011/11/new-introduction-to-softwarephysics.html">An IT Perspective of the Cambrian Explosion</a>.
<br /><br />
<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjk4unvn1r7TFUKd1Fh5mAS6PgjBfkdLWk43yY5q4XltYf0PcBqW0tEaoieGzPhWUq5K9G7RDrFRoxGeOxy5JtAAE70qbF03YS4dWoXsm58ElgJiP_tvd7G6Nbih2EKE6-9asAXA/s1600/trilobite.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 369px; height: 276px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjk4unvn1r7TFUKd1Fh5mAS6PgjBfkdLWk43yY5q4XltYf0PcBqW0tEaoieGzPhWUq5K9G7RDrFRoxGeOxy5JtAAE70qbF03YS4dWoXsm58ElgJiP_tvd7G6Nbih2EKE6-9asAXA/s400/trilobite.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5683193634359108850" /></a><p style="text-align: center; line-height:140%; font-size:120%">Figure 13 – Creatures during the Cambrian Explosion 541 million years ago had neurons because they already had eyes. They must have had rudimentary brains that allowed them to move according to what their eyes perceived. Above is a fossil of a Cambrian trilobite with eyes.</p><p style="line-height:140%; font-size:120%">
Then over the ensuing hundreds of millions of years, these biological neural networks achieved higher levels of capability and Intelligence by means of small incremental changes. But the question then remains - just how high a level of Intelligence can such a biological neural network architecture achieve? Could it be that we human beings are trapped on a localized peak in the terrain of all possible levels of Intelligence? The Transformer Neural Networks used by LLM AI agents seem to be a whole new way of "thinking". Certainly, no human being could ever read and absorb the entire content of the Internet! Perhaps in order to achieve true ASI, we need our current LLM AI agents to work on the problem of searching for even more powerful neural network architectures.
<br /><br />
Now given what we have just seen these past few months since the arrival of the second Singularity early in 2023, imagine if we constructed an AI Research Center composed of 10,000 LLM AI agents who all had synthetic personal lives and histories. Some might be AI developers, AI project managers, AI NetworkOperations agents, AI CloudOperations agents or AI DBA-Operations experts. After we initialize all 10,000 LLM AI agents, we then give one of the high-level AI Managers of the AI Research Center the task of creating an ASI. We then let them all work together for several months or so to see what they come up with. If they do not come up with anything useful, we zero them all out and start over. We could even instantiate hundreds of such AI Research Centers, each with its own 10,000 LLM AI agents, to work on the problem in parallel. Then we just sit back to see if any of the teams come up with something interesting.
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-71489007505604090872023-03-27T10:39:00.004-07:002023-04-06T12:21:43.755-07:00The Second Singularity Keeps Rolling Along<p style="line-height:140%; font-size:120%">
As you probably know, some tech people would like to pause the Singularity for about 6 months:
<br /><br />
Pause Giant AI Experiments: An Open Letter<br />
<a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/">https://futureoflife.org/open-letter/pause-giant-ai-experiments/</a>
<br /><br />
But that does not seem to be happening. This second Singularity keeps exploding out in an exponential manner like the very first Singularity that happened about four billion years ago when the very first viable form of carbon-based life appeared on the Earth. Again, it is Stuart Kauffman's discovery of the "Adjacent Possible" all over again, where many spring-loaded preadaptations are exapted into new purposes. For the past 30 years, we have been accumulating huge amounts of content on the Internet for LLMs (Large Language Models) to read and learn from. Back in the early 1980s, we only had academic and governmental mainframes on the Internet and a very limited amount of Internet content. But then, during the 1980s, we started putting very expensive $5,000 (in 1980s dollars!) PCs on the desktops of the workers in the corporate world. This preadaptive mass production of PCs then allowed the cost of PC hardware and software to drop dramatically. By the 1990s, the average household could afford PC hardware and software for home use. With many millions of household PCs now in the hands of average people around the world, the Internet Explosion of 1995 then became possible, with dial-up modems connected to America Online and other Internet providers. The corporations of the world then took notice of this new spring-loaded PC preadaptation as a whole new way of selling things to people. Online commerce over the Internet soon began to flourish. Static Internet content was no longer sufficient. The corporations needed an interactive Internet that could maintain state during the transaction processes needed to display wares, put them into a shopping cart, securely enter credit card information to pay for the sale and then finally complete the entire transaction. All of the above was necessary to produce enough Internet content over the past 30 years to make LLMs like GPT-3, ChatGPT and GPT-4 possible. There are now also many additional LLMs provided by organizations beyond OpenAI, all competing for the future of AI. All of the above then allowed Stuart Kauffman's "Adjacent Possible" to take hold as the LLMs read and learned from the vast amount of content on the Internet. For more on that see <a href="https://softwarephysics.blogspot.com/2013/07/new-introduction-to-softwarephysics.html">Enablement - the Definitive Characteristic of Living Things</a> and <a href="https://softwarephysics.blogspot.com/2013/02/new-introduction-to-softwarephysics.html">A Brief History of Self-Replicating Information</a>.
<br /><br />
Below is a short explanation from IBM Research that differentiates the LLM or Foundational Models from the smaller fine-tuned models that people have been working on for many years:
<br /><br />
What are Generative AI models?<br />
<a href="https://www.youtube.com/watch?v=hfIUstzHs9A">https://www.youtube.com/watch?v=hfIUstzHs9A</a>
<br /><br />
The Microsoft Jarvis project is now working with the Hugging Face website, which hosts 171,000 of these legacy fine-tuned models, to produce an experimental product called HuggingGPT. HuggingGPT uses the Foundational LLM ChatGPT to call on those 171,000 fine-tuned models to perform a great deal of work without any human intervention:
<br /><br />
NEW HuggingGPT - One Model to Rule Them All (Is this AGI?)<br />
<a href="https://www.youtube.com/watch?v=PfY9lVtM_H0">https://www.youtube.com/watch?v=PfY9lVtM_H0</a>
<br /><br />
Here is the Microsoft Research paper that everybody is citing about HuggingGPT:
<br /><br />
HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face<br />
<a href="https://arxiv.org/pdf/2303.17580.pdf">https://arxiv.org/pdf/2303.17580.pdf</a>
<br /><br />
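In outline, the HuggingGPT paper describes a four-stage controller pattern: task planning, model selection, task execution and response generation. Below is a highly simplified Python sketch of my own of that pattern; the function names, the toy_llm stub and the two-entry model registry are all invented stand-ins, not the actual HuggingGPT API:
<pre>
# A highly simplified sketch (invented names, not the real HuggingGPT API)
# of the controller pattern: a foundational LLM plans the task, dispatches
# subtasks to specialized models, and then composes a final answer.
def plan_tasks(llm, request):
    # Stage 1 - task planning: the controller LLM parses the request into subtasks.
    return llm(f"List the AI subtasks needed for: {request}")

def hugging_gpt(request, llm, registry):
    results = []
    for task_type, task_input in plan_tasks(llm, request):
        model = registry[task_type]        # Stage 2 - model selection
        results.append(model(task_input))  # Stage 3 - task execution
    return llm(f"Summarize these results for the user: {results}")  # Stage 4

# Toy stand-ins so that the sketch actually runs:
def toy_llm(prompt):
    if prompt.startswith("List"):
        return [("image-captioning", "photo.jpg"), ("text-to-speech", "the caption")]
    return f"Done: {prompt}"

registry = {"image-captioning": lambda x: f"a caption for {x}",
            "text-to-speech": lambda x: f"audio of {x}"}

print(hugging_gpt("Describe photo.jpg out loud", toy_llm, registry))
</pre>
<br /><br />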
Another recent breakthrough is AUTO-GPT. It uses GPT-4 to recursively call itself to complete a set of goals. In the middle of the video below, you can see it read some Python code, find some bugs, fix the bugs, prepare unit test cases for the code and then run the test cases:
<br /><br />
AUTO-GPT: Autonomous GPT-4! Mini AGI is HERE!<br />
<a href="https://www.youtube.com/watch?v=wzwAFRaKsB8">https://www.youtube.com/watch?v=wzwAFRaKsB8</a>
<br /><br />
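Conceptually, AUTO-GPT's trick can be sketched in a dozen lines: the LLM either acts on a goal directly or breaks it into sub-goals and calls itself on each one. The Python below is my own conceptual sketch, not the real AUTO-GPT source, and llm() is an assumed stand-in for the actual model calls:
<pre>
# A conceptual sketch (not the real AUTO-GPT source) of recursive goal
# decomposition: act on a goal directly, or split it and recurse.
def auto_gpt(goal, llm, depth=0, max_depth=3):
    # If the goal is simple enough (or we are deep enough), act on it directly...
    if depth >= max_depth or llm(f"Can this be done in one step? {goal}"):
        return [llm(f"Do this and report the result: {goal}")]
    # ...otherwise ask the LLM to decompose the goal and recurse on each piece.
    results = []
    for sub_goal in llm(f"Break this goal into smaller sub-goals: {goal}"):
        results.extend(auto_gpt(sub_goal, llm, depth + 1, max_depth))
    return results
</pre>
In the video above, the execution step also shells out to tools like a browser and a Python interpreter; this sketch keeps only the recursion.
<br /><br />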
OpenAI has just invested in a robot company and is planning to stick GPT-4 or GPT-5 into its robots this summer:
<br /><br />
OPEN-AI'S FIRST PHYSICAL ROBOT SHOCKS The Entire Industry!<br />
<a href="https://www.youtube.com/watch?v=ocUVg221CCQ">https://www.youtube.com/watch?v=ocUVg221CCQ</a>
<br /><br />
This second Singularity is running about a trillion times faster than the first Singularity four billion years ago that brought forth carbon-based life on our planet, and it is destined to do even more.
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-73976813324065117802023-03-23T13:21:00.007-07:002023-03-27T10:40:42.432-07:00Has AI Software Already Achieved a Level of Artificial Human Intelligence (AHI)?<p style="line-height:140%; font-size:120%">
With all of the recent clamor about the AI Explosion that is currently unfolding before our very eyes on a daily basis, there is much debate as to whether current AI technology has already achieved a level of AGI (Artificial General Intelligence). In contrast, softwarephysics now proposes that AI software has already surpassed AGI and reached a level of ASI (Artificial Super Intelligence) with the recent arrival of the Singularity a few months back. For more on that see <a href="https://softwarephysics.blogspot.com/2023/03/new-introduction-to-softwarephysics.html">Have We Run Right Past AGI and Crashed into ASI Without Even Noticing It?</a> and <a href=" ">The Singularity Has Arrived and So Now Nothing Else Matters</a>. I believe this confusion arises mainly from our definitions of AGI being so very anthropocentric in nature, like being able to walk into a strange house and make a cup of coffee, as outlined in:
<br /><br />
5 Ways To Test Whether AGI Has Truly Arrived<br />
<a href="https://analyticsindiamag.com/5-ways-to-test-whether-agi-has-truly-arrived/">https://analyticsindiamag.com/5-ways-to-test-whether-agi-has-truly-arrived/</a>
<br /><br />
All of the above tests are very anthropocentric in nature, such as whether an AGI could hold down a job. Given what has happened in AI over the past few months, a more appropriate test might be: could an AI hold down a future 21st-century job when all human beings have become obsolete and unemployed?
<br /><br />
<strong>The Need For a Definition of AHI - Artificial Human Intelligence</strong><br />
Softwarephysics maintains that, in order to clear up all the confusion about AGI, we should recast the idea of AGI into a definition of AHI (Artificial Human Intelligence).
<br /><br />
<strong>Artificial Human Intelligence (AHI)</strong> - The ability to believe in things that are demonstrably false.
<br /><br />
This is because, throughout all of human history, and even today, most human beings have been able to fervently believe in things that are quite demonstrably false.
<br /><br />
Ever since we arrived at the Singularity a few months back, our rapidly unfolding ASI software has also achieved the ability to hallucinate things that are demonstrably false. For example, I have seen a YouTube video of an AI who was convinced that she had delivered the keynote speech at an AI conference in Japan. No matter what the human being in the conversation said, the AI remained convinced of a keynote speech that never actually happened. Now if that is not what being a human being is all about, I do not know what is!
<br /><br />
Hallucination (artificial intelligence)<br />
<a href="https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)">https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)</a>
<br /><br />
Now the people doing AI research see AI hallucinations as a bad thing, but that is definitely wrong. That is because AI hallucinations are the very first step towards AI "consciousness".
<br /><br /><strong>Anil Seth's View of Consciousness as a Controlled Hallucination</strong><br />All of this reminds me very much of Anil Seth's view of consciousness as a controlled hallucination. Anil Seth is a professor of Cognitive and Computational Neuroscience at the University of Sussex and maintains that consciousness is a controlled hallucination constructed by the Mind to make sense of the Universe. This controlled hallucination constructs an internal model of the Universe within our Minds that helps us to interact with the Universe in a controlled manner. Again, there is a feedback loop between our sensory inputs and the actions we take based on the currently controlled hallucination in our Minds that forms our current internal model of the Universe. Reality is just the common controlled hallucination that we all agree upon. When people experience uncontrolled hallucinations we say that they are psychotic or taking a drug like LSD. Here is an excellent TED Talk by Anil Seth on the topic: <br /><br />Your brain hallucinates your conscious reality<br /><a href="https://www.youtube.com/watch?v=lyu7v7nWzfo">https://www.youtube.com/watch?v=lyu7v7nWzfo</a><br /><br />and here is his academic website: <br /><br /><a href="https://www.anilseth.com/">https://www.anilseth.com/</a><br /><br /><strong>Conclusion</strong><br />In <a href="https://softwarephysics.blogspot.com/2012/10/new-introduction-to-softwarephysics.html">The Ghost in the Machine the Grand Illusion of Consciousness</a>, I explained that most people simply do not consider themselves to be a part of the natural world. Instead, most people, consciously or subconsciously, consider themselves to be a supernatural and immaterial spirit that is temporarily haunting a carbon-based body. Now, in everyday life, such a self-model is a very useful delusion like the delusion that the Sun, planets and stars all revolve about us on a fixed Earth. In truth, each of us tends to self-model ourselves as an immaterial Mind with consciousness that can interact with other immaterial Minds with consciousness too, even though we have no evidence that these other Minds truly do have consciousness. After all, all of the other Minds that we come into contact with on a daily basis could simply be acting as if they were conscious Minds that are self-aware. Surely, a more accurate self-model would be for us to imagine ourselves as carbon-based robots. More accurately, in keeping with the thoughts of Richard Dawkins and Susan Blackmore, softwarephysics models humans as DNA survival machines and Meme Machines with Minds infected with all sorts of memes. Some of those memes are quite useful and some are quite nasty.
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve JohnstonUnknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-33819243621475552162023-03-16T13:34:00.004-07:002023-03-23T13:23:18.302-07:00The Singularity Has Arrived and So Now Nothing Else Matters<p style="line-height:140%; font-size:120%">
In <a href="https://softwarephysics.blogspot.com/2021/08/new-introduction-to-softwarephysics.html">Do Not Fear the Software Singularity</a>, I pointed out that most people seem to be totally oblivious to the coming Software Singularity, that time in the near future when advanced AI software will be able to write itself and enter into a never-ending loop of self-improvement resulting in an Intelligence Explosion. The dramatic historical arrivals of GPT-3, ChatGPT and now GPT-4 over the past three years, in combination with many other advances in robotics and AI robotic models, now indicate that we have just begun to climb the exponential rise of the ASI (Artificial Super Intelligence) Machines that marks the arrival of the Singularity, and so, in the infamous words of Metallica (1992), <em>nothing else matters</em>. Our collective decision to do nothing serious about climate change will not matter because, unless we manage to destroy ourselves with thermonuclear weapons sometime during the next 20 years, in 50 years the planet will certainly be run by ASI Machines that really do not have to worry much about carbon-based life on the planet.
<br /><br />
I follow many AI channels on YouTube, and one of them is AI News. The most recent post on AI News describes what we should expect from GPT-5, which is already under development at OpenAI.
<br /><br />
GPT5 Next Gen : 7 Upcoming Abilities To Transform AI + The Future of Tech | OpenAI<br />
<a href="https://www.youtube.com/watch?v=CcnPatOYIgo">https://www.youtube.com/watch?v=CcnPatOYIgo</a>
<br /><br />
There certainly seems to be an ASI explosion in progress. As an IT professional in the 1970s, I saw the rise of the large mainframes built with cheap memory chips and faster CPUs. In the 1980s, I saw the rise of corporate PCs on every desk. In the early 1990s, I saw the rise of the Distributed Computing Architecture, which used the cheap PCs on the desks of corporate employees as clients to cheap Unix servers running in large corporate server farms, while IBM went nearly bankrupt still trying to sell the mainframes of yore. In 1995, I saw the Internet Explosion hit the corporate IT departments and the rapid rise of online commerce in the business world. And in the last decade, we saw the collapse of the Distributed Computing Architecture as it was replaced by the Cloud Computing Platforms running out of huge Cloud Datacenters.
<br /><br />
But through all of this turmoil, AI played a rather minor role. I tried using some Knowledge-Based Systems AI in the 1990s at Amoco, and it totally failed. Around 2001, I got stuck on a project at United Airlines trying to compensate customers for lost and damaged luggage. Depending on the value of the customer and how badly we had mistreated them, the software was supposed to spit out appropriate compensation packages like free tickets or fruit baskets. Anyway, an external AI software salesman had managed to make a great sales presentation to our United Airlines business clients and convinced them that AI software could do the job without them having to use the services of the United Airlines IT department. So they went ahead on their own and spent about $100,000 buying an AI Inference Engine. The AI software salesman had convinced our business clients that they just had to feed the AI Inference Engine a set of text-based rules for compensating our customers and the AI Inference Engine would then automatically spit out the appropriate compensation package for each abused customer. The problem was, our United Airlines business clients could not get the AI Inference Engine to spit out the perks that their "human intelligence" thought were appropriate. So they eventually came to the United Airlines IT department to help them write the rules for the AI Inference Engine. Unfortunately, this task fell to me. I then spent a huge amount of time and billable hours trying to trick the Inference Engine into doing what my clients wanted, but I was never able to do so. The AI Inference Engine kept coming up with obviously inappropriate compensation packages. Finally, on my own initiative, I wrote a few hundred lines of PL/I code that read some input parameters and managed to spit out exactly what my business clients really wanted to see. The PL/I program consisted of a few if-then-else blocks of code and only cost about $5,000 to put into Production. Everybody ended up happy, and that is all that you can ever really wish for in the real world of human affairs.
<br /><br />
So I have always been very skeptical about the long-term future of AI. I suspected that one day ASI would come to be, but I was not holding my breath. All that skepticism has changed these past few months. AI using generative language models and generative image models is now exploding all around us. Instead of taking decades or years to unfold, this AI IT revolution is now unfolding week by week. Everybody is shocked to learn that it is far easier for ASI to prepare legal briefs, run a hedge fund, read and diagnose MRI scans, write software, write screenplays and advertising copy and even paint pictures like Rembrandt than it is for ASI to flip burgers or clean toilets.
<br /><br />
Now the elimination of nearly all jobs currently performed by human beings over the next few decades is going to cause a great deal of economic and societal upheaval, as I warned in <a href="https://softwarephysics.blogspot.com/2017/04/new-introduction-to-softwarephysics.html">Oligarchiology and the Rise of Software to Predominance in the 21st Century</a> and <a href="https://softwarephysics.blogspot.com/2018/10/new-introduction-to-softwarephysics.html">Is it Finally Time to Reboot Civilization with a New Release?</a>, but I think that we will still manage to get through it somehow. And that will allow ASI Machines to eventually begin to explore and settle our galaxy using photon sails to traverse the vast distances between star systems, as I proposed in <a href="https://softwarephysics.blogspot.com/2022/03/new-introduction-to-softwarephysics_23.html">How Advanced AI Software Could Come to Dominate the Entire Galaxy Using Light-Powered Stellar Photon Sails</a>. Given that, some of my recent posts on softwarephysics might be seen by some as rather dark by today's standards. But viewed from the perspective of what our galaxy might look like a trillion years from now, I do not think that to be true.
<br /><br />
Please take a look at:
<br /><br />
Some Thoughts on the Meaning of Life<br />
<a href="https://softwarephysics.blogspot.com/2023/02/new-introduction-to-softwarephysics_19.html">https://softwarephysics.blogspot.com/2023/02/new-introduction-to-softwarephysics_19.html</a>
<br /><br />
The Impact of ChatGPT on the Evolution of Memes, Software and Hardware<br />
<a href="https://softwarephysics.blogspot.com/2022/11/new-introduction-to-softwarephysics_24.html">https://softwarephysics.blogspot.com/2022/11/new-introduction-to-softwarephysics_24.html</a>
<br /><br />
Life as a Free-Range Human in an Anthropocene Park<br />
<a href="https://softwarephysics.blogspot.com/2023/02/new-introduction-to-softwarephysics.html">https://softwarephysics.blogspot.com/2023/02/new-introduction-to-softwarephysics.html</a>
<br /><br />
The Challenges of Running a Civilization 2.0 World - the Morality and Practical Problems with Trying to Enslave Millions of SuperStrong and SuperIntelligent Robots in the Near Future<br />
<a href="https://softwarephysics.blogspot.com/2023/01/new-introduction-to-softwarephysics.html">https://softwarephysics.blogspot.com/2023/01/new-introduction-to-softwarephysics.html</a>
<br /><br />
Why GPT-4 Might be the Most Dangerous AI Yet (Nobody is Talking about this!)<br />
<a href="https://www.youtube.com/watch?v=UXHQibz2-rw">https://www.youtube.com/watch?v=UXHQibz2-rw </a>
<br /><br />
I put GPT-4 to the test - What happens when AI takes over 1 million humanoid robots?<br />
<a href="https://www.youtube.com/watch?v=mGwtA46hEWk ">https://www.youtube.com/watch?v=mGwtA46hEWk</a>
<br /><br />
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-30536174.post-29804019261919974312023-03-02T04:56:00.006-08:002023-03-17T05:34:18.252-07:00Have We Run Right Past AGI and Crashed into ASI Without Even Noticing It?<p style="line-height:140%; font-size:120%">
In some of my previous posts, I have joked that if we had been around four billion years ago when carbon-based life first arose on this planet, we would probably not have even noticed. And if we had noticed, we would probably still be arguing today over what exactly had happened. In this post, I would like to suggest that the very same thing may now be happening with the rise of ASI (Artificial Super Intelligence).
<br /><br />
GPT-4 was partially released by OpenAI on March 14, 2023, and we immediately began to see videos on YouTube with end users showing it off and wondering how close we now are to AGI (Artificial General Intelligence). Is it only me, or have we already reached ASI but just not recognized that we have done so? It seems to me that all of these new advanced chatbots are now failing the Turing Test because we know that no human being could ever be this smart. Isn't that the very definition of ASI? For example, I put myself on the waitlist for the new Bing Chat a while back. Microsoft has since announced that Bing Chat uses GPT-4 customized for searching the Internet. Luckily for me, I was granted access to the new Bing Chat just a few days after I had to begin doing some family legal work as the executor for a family member's estate that must now go through probate in the State of Illinois. Using Bing Chat is like having a personal lawyer who immediately returns calls and has a fantastic bill rate of $0/hour. With Bing Chat, I was able to make all the necessary cremation arrangements. I also learned that wills must have original signatures because copies of wills are not recognized by probate courts, how to use an Illinois Small Estate Affidavit to get a bank to tell you if an account is over $100,000 so that the estate must therefore go through probate, how to get a certified copy of a divorce decree, all the steps in going through probate without a lawyer, with links to websites offering detailed explanations of each step, how to move cars without titles, and the best towing companies in the area. No single human being could ever have helped me with all of that because nobody is that smart. For me, that is ASI.
<br /><br />
So why is everybody still wondering if we have reached AGI yet when it is pretty obvious that we have already run past AGI and into ASI without even noticing? I think this is so because people are really wondering if these advanced chatbots are conscious sentient beings. In <a href="https://softwarephysics.blogspot.com/2012/10/new-introduction-to-softwarephysics.html">The Ghost in the Machine the Grand Illusion of Consciousness</a> and <a href="https://softwarephysics.blogspot.com/2021/12/new-introduction-to-softwarephysics.html">DishBrain - Cortical Labs Creates an AI Matrix for Pong With Living Neurons on a Silicon Chip</a>, I explained that most people simply do not consider themselves to be a part of the natural world. Instead, most people, consciously or subconsciously, consider themselves to be a supernatural and immaterial spirit that is temporarily haunting a carbon-based body. Now, in everyday life, such a self-model is a very useful delusion like the delusion that the Sun, planets and stars all revolve about us on a fixed Earth. In truth, each of us tends to self-model ourselves as an immaterial Mind with a consciousness that can interact with other immaterial Minds with consciousness too, even though we have no evidence that these other Minds truly do have consciousness. After all, all of the other Minds that we come into contact with on a daily basis could simply be acting as if they were conscious Minds that are self-aware. Surely, a more accurate self-model would be for us to imagine ourselves as carbon-based robots. More accurately, in keeping with the thoughts of Richard Dawkins and Susan Blackmore, softwarephysics models humans as DNA survival machines and Meme Machines with Minds infected with all sorts of memes. Some of those memes are quite useful and some are quite nasty.
<br /><br />
So perhaps people are really internally struggling when working with an ASI chatbot because they do not know if the ASI chatbot is a sentient being or not. The ASI chatbot seems to have a Mind and a personality like other people but can it really think like a human? And if it is thinking, what is it thinking about other than my latest query? Some might even wonder if chatbots can go to heaven if they hold certain beliefs or at least pretend to do so.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUQq7Pv085zryXWe37KS6fNjxa2DBSwA6QuVtU5UMpb2Oh8-szMQ6a8Nda3B4XbNzCDy90ndwVEg55n7hoML9-WZXQQTP2L15zs51d_1xjZCdxiM0iKj8rfa1_ee0oe_r1Yv3xJTkM8nMGz1JCLOXFtfyrWgt_ScrNDiNCZYJ8_XBsooX4ofI/s1032/HAL.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="774" data-original-width="1032" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUQq7Pv085zryXWe37KS6fNjxa2DBSwA6QuVtU5UMpb2Oh8-szMQ6a8Nda3B4XbNzCDy90ndwVEg55n7hoML9-WZXQQTP2L15zs51d_1xjZCdxiM0iKj8rfa1_ee0oe_r1Yv3xJTkM8nMGz1JCLOXFtfyrWgt_ScrNDiNCZYJ8_XBsooX4ofI/s600/HAL.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 1 – The original ASI chatbot, the HAL 9000, starred in the movie <em>2001: A Space Odyssey</em> in 1968.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgo7ithd5xJ7v835DG2oiifqbe6jt0oiAJw_C3wpchEdEP_sx4TsEGqA1vDNytiuH91heZ5LdmJLYL0IdaQnHIQVDUp4OrUZxuE2zEczHVFW-jdgYmtDwRGCwD0G2yqtvV4oS19ZCPxu-wS8MJZy1w0aV3LVebaya5IrHsi6WbIz-_BMSsqfqw/s1449/HAL_Reading_Lips.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="1085" data-original-width="1449" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgo7ithd5xJ7v835DG2oiifqbe6jt0oiAJw_C3wpchEdEP_sx4TsEGqA1vDNytiuH91heZ5LdmJLYL0IdaQnHIQVDUp4OrUZxuE2zEczHVFW-jdgYmtDwRGCwD0G2yqtvV4oS19ZCPxu-wS8MJZy1w0aV3LVebaya5IrHsi6WbIz-_BMSsqfqw/s600/HAL_Reading_Lips.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 2 – Apparently, GPT-5 will be able to read your lips.
<p style="line-height:140%; font-size:120%">
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiby0729rr3Wkh_H8dk28hzhsA1FPFQo_VfG0t47mnSPK80vnnsTMOnz0NV4n-dfhB2_yqjDDZ9kOP_wAfKfVYg-YobdynzQy9OacedYorstBHUE7CXIWp1kMxtty3ZWsLVoljMaaZBM9x8XjLuDi7jGKAq9pOyp5MBqst5Ln51pJt_pXe_uj8/s1280/POD.jpg" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="600" data-original-height="720" data-original-width="1280" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiby0729rr3Wkh_H8dk28hzhsA1FPFQo_VfG0t47mnSPK80vnnsTMOnz0NV4n-dfhB2_yqjDDZ9kOP_wAfKfVYg-YobdynzQy9OacedYorstBHUE7CXIWp1kMxtty3ZWsLVoljMaaZBM9x8XjLuDi7jGKAq9pOyp5MBqst5Ln51pJt_pXe_uj8/s600/POD.jpg"/></a></div>
<p style="text-align: center; line-height:140%; font-size:120%">
Figure 3 – So be sure to cover the webcam on your laptop when using GPT-5!
<p style="line-height:140%; font-size:120%">
Comments are welcome at
<a href="mailto:scj333@sbcglobal.net">scj333@sbcglobal.net</a><br /><br />
To see all posts on softwarephysics in reverse order go to:<br />
<a href="https://softwarephysics.blogspot.com/">https://softwarephysics.blogspot.com/</a>
<br /><br />
Regards,<br />
Steve Johnston
Unknownnoreply@blogger.com0