Tuesday, October 17, 2023

The Self-Organizing Recursive Cosmos

At an age of 72 years, I must admit that I have now spent most of my life trying to figure out what it's all about before I finally check out of this Universe, as I explained in What’s It All About? and What's It All About Again?. With the realization that I am now rapidly running out of time to do so, in this post I would again like to expand upon these ideas after finishing a very remarkable paper in the October 2023 edition of the American Journal of Physics:

All Objects and Some Questions
https://pubs.aip.org/aapt/ajp/article/91/10/819/2911822/All-objects-and-some-questions

Below is an intriguing graph from this paper. The authors plot the log of the mass of objects within our Universe versus the log of their sizes. The upper triangle, designated by "forbidden by gravity", rules out the regime of masses and sizes where the general theory of relativity predicts that the object must collapse into a black hole. The lower triangle, designated by "quantum uncertainty", rules out masses and sizes which are precluded by quantum mechanics; essentially, the Heisenberg Uncertainty Principle prevents objects in this regime from existing. The very small black triangle on the far left, designated by "QG" (Quantum Gravity), is where both the general theory of relativity and quantum mechanics preclude objects from existing. This is also where both theories break down and need to be replaced by a quantum theory of gravity that we currently do not have.

Figure 1 – A plot of the allowed masses of objects in our Universe versus the allowed sizes of objects in our Universe.
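
The two forbidden regions in Figure 1 can be roughly traced with two familiar formulas: the Schwarzschild radius rs = 2Gm/c² marks the "forbidden by gravity" boundary, and the Compton wavelength marks the "quantum uncertainty" boundary. Below is a minimal sketch in Python that computes both for a few sample masses; using the reduced Compton wavelength ħ/(mc) for the lower boundary is my assumption about the convention, not a detail taken from the paper.

# A minimal sketch (not from the paper) of the two boundary curves in Figure 1:
# an object of mass m must be larger than its Compton wavelength (quantum
# uncertainty) and larger than its Schwarzschild radius (forbidden by gravity).

G    = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8        # speed of light, m/s
hbar = 1.055e-34      # reduced Planck constant, J s

def schwarzschild_radius(m_kg):
    """Radius below which a mass of m_kg kilograms is inside a black hole."""
    return 2.0 * G * m_kg / c**2

def compton_wavelength(m_kg):
    """Reduced Compton wavelength, below which quantum uncertainty dominates."""
    return hbar / (m_kg * c)

for name, mass in [("electron", 9.11e-31), ("human", 70.0), ("Sun", 1.99e30)]:
    print(f"{name:8s}  r_s = {schwarzschild_radius(mass):.2e} m"
          f"   lambda_C = {compton_wavelength(mass):.2e} m")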

Along the boundaries of these forbidden zones, we find all of the very strange objects in our Universe, while in the pink and blue zones, we find the objects that we are more familiar with, like people, planets, stars and galaxies. Along the lower edge, we find the fundamental particles like electrons and neutrinos. Along the top edge, we find all of the black holes of various sizes and masses, from the PBHs (Primordial Black Holes) of the Big Bang to the SMBHs (Supermassive Black Holes) found at the centers of most galaxies. Notice that the Hubble radius also falls along this line of black holes of increasing mass and size. The Hubble radius is defined as the speed of light divided by the current Hubble constant H0, which is measured to be about 67 - 73 km/s/Mpc, and comes to about 13.4 - 14.6 billion light years. Given that, the authors entertain the idea that we might be living inside of a black hole, though with some severe reservations. That is because the question is more complicated than simply noting that the Hubble radius lies along the black hole line on the graph.

To understand why, recall that at the heart of a black hole is a singularity, a point-sized pinch in spacetime with infinite density where all the current laws of physics break down. Surrounding the singularity is a spherical event horizon. The black hole essentially sucks spacetime down into its singularity with increasing speed as you approach it, and the event horizon is simply where spacetime is being sucked down at the speed of light. Because nothing can travel faster than the speed of light, nothing can escape from within the event horizon of a black hole; everything inside is carried along by spacetime that is plunging towards the singularity faster than light. You can think of a black hole as a drain in a very large sink of calm water. Now imagine that you are a goldfish in that sink. When the drain is first opened, water begins to pour down into the drain with increasing velocity as you get closer to it. At some point, you reach a region of water where the current is dragging you down faster than you can swim. That would be the event horizon of the black hole drain. Once you cross the event horizon of the drain, you can still swim around inside for a while before you go crashing down into the drain's singularity, but you will never be able to escape.

Figure 2 – The "Observable Universe" is now about 46 billion light years away from us. But the current Hubble radius is only about 13.4 - 14.6 billion light years away. The current Hubble radius is where spacetime is currently expanding away from us at the speed of light. So the Hubble radius is the furthest distance that we could now reach traveling at the speed of light, and the light from objects beyond the current Hubble radius will never reach us.

Similarly, the Hubble radius is where spacetime is expanding away from us at the speed of light. That means we could never reach such a distance even if our goldfish could swim at the speed of light. Also, the light from any objects beyond the Hubble radius will never be able to reach us because it will be swimming against a current of spacetime carrying it away from us faster than the speed of light. In Figure 2 above, we see that the "Observable Universe" consists of all the light that has been traveling towards us for the entire age of the Universe. During all of those billions of years, the Universe has expanded greatly, so the objects that emitted that light are now thought to be about 46 billion light years away. On the other hand, the current Hubble radius is only about 13.4 - 14.6 billion light years away. But when the light from the current Hubble radius finally does reach us, the objects that emitted that light will themselves be many billions of light years beyond the current Hubble radius because the expanding spacetime of our Universe will have floated them far away from us. I know that these cosmic horizons are very confusing. In actuality, everything that we can currently see or touch must be within the current Hubble radius, including the photons from the "Observable Universe". Remember, the photons from the "Observable Universe" are actually now at a distance of zero from us because they are just now finally hitting our telescopes after their very long journey since the Big Bang!
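
To see where the 13.4 - 14.6 billion light year figure comes from, here is a minimal sketch that converts a Hubble constant quoted in km/s/Mpc into a Hubble radius c/H0 in billions of light years. The unit conversions are standard; the two H0 endpoints are simply the rough range quoted above.

# Minimal sketch: the Hubble radius is c / H0. Converting H0 from the usual
# km/s/Mpc into 1/s gives the radius in kilometers, then in light years.

c_km_s     = 299_792.458          # speed of light in km/s
km_per_Mpc = 3.0857e19            # kilometers in one megaparsec
ly_per_km  = 1.0 / 9.4607e12      # light years per kilometer

def hubble_radius_Gly(H0_km_s_Mpc):
    """Hubble radius in billions of light years for a given H0 in km/s/Mpc."""
    H0_per_s  = H0_km_s_Mpc / km_per_Mpc          # H0 in 1/s
    radius_km = c_km_s / H0_per_s                 # c / H0 in km
    return radius_km * ly_per_km / 1.0e9          # convert to Gly

for H0 in (67.0, 73.0):
    print(f"H0 = {H0} km/s/Mpc  ->  Hubble radius ~ {hubble_radius_Gly(H0):.1f} billion light years")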

Figure 3 – Figure 2 above displays a Universe in which the mass-energy density outside our current Hubble radius is very similar to the mass-energy density within it, and that density extends out to infinity, as most cosmologists currently think is true. The authors of the above paper go through the mathematics showing that if those cosmologists are wrong and the mass-energy density of our Universe beyond our current Hubble radius is actually zero, then we truly are living inside of a black hole and everything within our Universe will eventually collapse into a singularity in a "Big Crunch". The authors think this model is highly unlikely because it would place us exactly at the center of our Universe, and Copernicus already fought that battle for us nearly 500 years ago.

Instead, the authors propose that the white dot towards the left edge of the plot, designated as the Instanton, might represent the initial conditions of our Universe, at least in terms of our current theories of quantum mechanics and general relativity. At an age of one Planck time, the Instanton would be a black hole with a radius of one Planck length and a mass of one Planck mass, all at the tremendously high Planck temperature. Note that unlike meters and seconds, which are based on arbitrary human conventions, the Planck Units are all based on the fundamental constants of our Universe such as the gravitational constant G, the speed of light c, Boltzmann's constant kB and the reduced Planck constant ħ.

Figure 4 – The Planck Units are not based on arbitrarily chosen units like the second, which was originally defined as a fixed fraction (1/86,400) of the time between high noons on the Earth. The Planck Units are based on the fundamental constants of our Universe as listed in the Defining Expression column above.
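
As a quick check on Figure 4, here is a minimal sketch that computes the Planck length, time, mass and temperature directly from the fundamental constants. Note that the standard defining expressions use the reduced Planck constant ħ.

import math

# Minimal sketch: the Planck units follow directly from the fundamental
# constants G, c, the reduced Planck constant hbar, and Boltzmann's constant kB.

G    = 6.674e-11      # m^3 kg^-1 s^-2
c    = 2.998e8        # m/s
hbar = 1.055e-34      # J s
kB   = 1.381e-23      # J/K

planck_length = math.sqrt(hbar * G / c**3)          # ~1.6e-35 m
planck_time   = math.sqrt(hbar * G / c**5)          # ~5.4e-44 s
planck_mass   = math.sqrt(hbar * c / G)             # ~2.2e-8 kg
planck_temp   = planck_mass * c**2 / kB             # ~1.4e32 K

print(f"Planck length      : {planck_length:.2e} m")
print(f"Planck time        : {planck_time:.2e} s")
print(f"Planck mass        : {planck_mass:.2e} kg")
print(f"Planck temperature : {planck_temp:.2e} K")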

Again, to fully understand our Universe prior to the initial state of the Instanton, we would need a theory of quantum gravity and we currently do not have one. Both our general theory of relativity and quantum mechanics break down prior to the arrival of the Instanton. But at least this would imply that although we are not currently living within a black hole, our Universe may have indeed been produced by a black hole. Given that thought, the above paper inspired me to, once again, reread Lee Smolin's classic Life of the Cosmos.

Life of the Cosmos (1997)
https://tkececi.files.wordpress.com/2009/12/the-life-of-the-cosmos.pdf

For me, Lee Smolin's Life of the Cosmos is one of the most significant books shaping my current personal worldview, much like Richard Dawkins' The Selfish Gene, because it explains so much. Both books convey very deep philosophical thoughts that can be of use in trying to make sense of it all. Lee Smolin is a founding and senior faculty member at the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada, an adjunct professor of physics at the University of Waterloo and a member of the graduate faculty of the philosophy department at the University of Toronto. He has made significant contributions to a wide range of areas in physics, including quantum gravity, cosmology and the foundations of quantum mechanics. He is one of the leading proponents of loop quantum gravity, a promising approach to unifying quantum mechanics and general relativity, and he has also developed the theory of cosmological natural selection, which posits that the laws of physics themselves evolve, producing a Multiverse that is home to a multitude of different universes with different physical laws.

Could the Instanton be a Black Hole From a Previous Universe?
In the Life of the Cosmos, Lee Smolin ponders why our Universe is so seemingly fine-tuned to contain complex things like galaxies, stars, planets and living things. Lee Smolin begins by explaining that our current Standard Model of particle physics has about 20 parameters that have to be experimentally measured, such as the masses of the particles, the strengths of their electrical charges and the coupling strengths of the force-carrying bosons. The Standard Model does not predict the values of these parameters. Instead, they have to be experimentally measured and then plugged into the Standard Model in order for it to predict how our Universe works. Smolin then goes on to explain that if any of these 20 parameters were to be changed by even the slightest amount, our Universe would be incapable of complexity. For example, the odds of randomly choosing values for these 20 parameters that could even produce stars would only be about one in 10^229! Lee Smolin sees this as a huge challenge to the platonic concept of a Universe framed by a beautiful set of fixed and immutable mathematical laws with fixed constants that make it all unfold before us.

Figure 5 – The Standard Model of particle physics is composed of quantum fields that we observe as particles. The matter particles are called fermions and have a spin of ½. The force-carrying particles are called bosons and they have a spin of 1. The Higgs boson has a spin of 0 and is a scalar field.

If you count the antimatter particles, the Standard Model contains 12 quarks and 12 leptons. But each of those 12 quarks comes in one of three color charges - red, green or blue - so there really are 36 different quarks. There are also really 8 gluons because each gluon carries a combination of a color and an anticolor (one of red, green, or blue and one of antired, antigreen, or antiblue), and of the nine possible combinations only eight independent gluon states exist. So that comes to 36 quarks + 12 leptons + 13 bosons = 61 particles!
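
That bookkeeping can be tallied in a few lines. The numbers below are just the counts described in the text, nothing more:

# A small tally of the counting described above: 6 quark flavors x 3 colors x
# (particle + antiparticle), 6 lepton flavors x (particle + antiparticle),
# 8 gluons, the photon, the W+, W- and Z, and the Higgs.

quarks  = 6 * 3 * 2        # flavors x colors x (matter + antimatter) = 36
leptons = 6 * 2            # flavors x (matter + antimatter)          = 12
bosons  = 8 + 1 + 3 + 1    # gluons + photon + (W+, W-, Z) + Higgs    = 13

print(quarks + leptons + bosons)   # 61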

Many other physicists are also perplexed by the complexities of the Standard Model. There just seem to be way too many fundamental particles and forces. Some have looked for a simpler and more comprehensive "Theory of Everything" that could easily fit on a T-shirt. Smolin sees this as a desire to return to the simple Platonic Forms that eternally exist outside of our Universe as a pure and simple mathematical pattern for our reality.

Figure 6 – Early in the 20th century, physicists were also perplexed by the very numerous fundamental elements of the Periodic Table. But atom smashers soon revealed that this very large number of fundamental elements was simply composed of just three particles - protons, neutrons and electrons. Perhaps the same could be said of the numerous particles of the Standard Model.

The desire for a simpler model has led many physicists to pursue candidates such as string theory, which has been struggling for the past 40 years. Smolin is an ardent critic of such efforts - see The Danger of Believing in Things for more on that.

Beyond a platonic desire for an eternal transcendental simplicity, many physicists remembered that they had already been fooled once before by a seemingly fine-tuned mess that, nonetheless, seemed to work just fine - going all the way back to Ptolemy's original model of the Universe.

Figure 7 – In Ptolemy's model of the Universe, the Earth was not even at the center! Instead, the Earth was slightly displaced from the center of each planet's circular Deferent, and on the opposite side of that center lay a point called the Equant. The center of each planet's Epicycle moved along the Deferent at a rate that appeared uniform as seen from the Equant, and the planet itself moved in a circular motion around that Epicycle with its own radius. With all of that complicated compound motion, Ptolemy was able to predict the motions of the planets in the night sky accurately enough to match the very primitive observational astronomical instruments of the day.

Figure 8 – However, even though Ptolemy's Mess seemed to work, it resulted in some very complex compound motions for the planets in our Solar System!

Figure 9 – Copernicus had a much simpler model for the Universe that also made accurate predictions of the motions of the planets and that did not require fine-tuned compound motions of the planets. All that was needed was to place the Sun at the center of the Universe and have the Earth orbit around it like the other planets.

In contrast, Smolin suggests in the Life of the Cosmos that the very messy mathematical "laws" and "constants" of the Standard Model and general relativity might be the real deal after all, and for a very good reason. Perhaps they evolved to where we now see them in a Darwinian sense, through a random walk through an infinite number of possible Universes marked by a history of accidental modifications honed by natural selection. Smolin proposes that since the only other example of such a fine-tuned mess in our Universe that nonetheless works is manifested by our biosphere, we should look to the biosphere for an explanation of the fine-tuning that we see in the cosmos. After all, the odds of forming a human baby out of a random mixture of a few kilograms of water and organic molecules would be far slimmer than one in 10^229.

Figure 10 – Above is a simplified flowchart of the metabolic pathways used by carbon-based life. Notice that it too is a fine-tuned mess that seems to work nonetheless. In this case, however, we know for certain that it works superbly despite all of its complexity. Biologists had to come up with some mechanism to explain how such a fine-tuned mess came to be, and they finally did so with Darwinian thought.

Living things are incredible examples of highly improbable fine-tuned systems, and this fine-tuning was accomplished via the Darwinian mechanisms of inheritance and innovation honed by natural selection. Along these lines, Lee Smolin proposes that when a black hole collapses it produces a white hole in another universe, and that white hole is observed in the new universe as a Big Bang. He also proposes that the physics in the new universe would be essentially the same as the physics in the parent universe, but with the possibility for slight variations to arise when the collapsing black hole reaches the Planck density. In this view, the seemingly fixed and immutable laws and constants of our Universe are no more so than the fixed and immutable species of a pre-Darwinian biosphere. For Lee Smolin, universes with a physics that was good at creating black holes would tend to outproduce universes that were not. Thus, a selection pressure would arise, and a kind of Darwinian natural selection would occur in the Cosmic Landscape of the Multiverse: over an infinite amount of time, the universes that were good at making black holes would come to dominate the Cosmic Landscape. He calls this effect cosmological natural selection. One of the predictions made by Lee Smolin's model of the Multiverse is that we should most likely find ourselves in a universe that is very much like our own and that has an abundance of black holes; such universes should be the norm and not the exception. Thus, for Lee Smolin, searching for an eternal and external God or Platonic Form defined in eternal and unchanging mathematical terms by an eternal theory of everything is a fool's errand. It would be like trying to find an equation that could predict all of the current species of the Earth without dealing with the historical evolution of the biosphere.

Figure 11 - In the Life of the Cosmos, Lee Smolin proposes that the black holes of one universe puncture the spacetime of that universe, causing a white hole to appear as a new universe, similar to the Instanton of Figure 1.

Figure 12 – In Lee Smolin's model, cosmological natural selection selects for new Universes that are fine-tuned to produce many black holes.
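
Smolin's selection argument can be caricatured with a toy simulation. The sketch below is purely illustrative and is not taken from the Life of the Cosmos: each universe carries a single made-up parameter, the number of black holes it produces peaks when that parameter equals zero, and every black hole spawns a child universe with a slightly mutated copy of the parameter. After a few generations, the surviving population of universes clusters around the black-hole-maximizing value.

import random

# Purely illustrative toy model of cosmological natural selection (not from
# Smolin's book). Each universe has one parameter x; the number of black holes
# it produces peaks at x = 0. Each black hole spawns a child universe whose
# parameter is nudged by a small random mutation.

random.seed(42)

def black_holes(x):
    """Toy fecundity: the closer x is to 0, the more black holes (at most 10)."""
    return max(0, round(10 * (1.0 - x * x)))

population = [random.uniform(-1.0, 1.0) for _ in range(20)]   # generation 0

for generation in range(10):
    children = []
    for x in population:
        for _ in range(black_holes(x)):
            children.append(x + random.gauss(0.0, 0.05))      # slight mutation
    if children:
        # keep the population at a manageable size by random sampling
        population = random.sample(children, min(200, len(children)))

average_spread = sum(abs(x) for x in population) / len(population)
print(f"after 10 generations, average |x| ~ {average_spread:.2f} (selection pulls it toward 0)")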

For Smolin, the intelligent beings in our Universe are just a fortuitous by-product of making black holes. In order for a universe to make black holes, it must exist for many billions of years and do other useful things, like easily make carbon in the cores of stars. Carbon is necessary to make black holes because carbon is necessary to make stars, and it is not easy to make carbon. The carbon dust in molecular clouds that was expelled from earlier stars shades the molecular clouds from the bright stars that are forming within them, and the carbon-based organic molecules within molecular clouds can also radiate away infrared radiation to cool the clouds as they contract into stars. To make carbon, two helium-4 nuclei must first fuse in the core of a star to form beryllium-8. Then another helium-4 nucleus can fuse with the beryllium-8 to form carbon-12. The problem is that beryllium-8 has a half-life of only 8.19 x 10^-17 seconds before it decays back into two helium-4 nuclei, and that does not provide very much time for the beryllium-8 to absorb a helium-4 nucleus and become carbon-12. Changing the strength of the strong nuclear force or the masses of protons and neutrons by the slightest amount would mean far fewer stars and far fewer black holes in our Universe. For example, in 1953 Fred Hoyle published "On Nuclear Reactions Occurring in Very Hot Stars". In the paper, Hoyle argued that the fusion of three helium-4 nuclei could not occur at a significant rate without the existence of a nuclear resonance of carbon-12 at an energy of 7.65 million electron volts to absorb the collisional energy of the nuclei. Hoyle's prediction was confirmed shortly afterward by experimental physicists working with William Fowler at Caltech, who reported a resonance state in carbon-12 at an energy level of 7.68 million electron volts. Without this carbon-12 resonance, there would be far fewer black holes in our Universe and no us either. Similarly, in the Life of the Cosmos, Lee Smolin runs through a good number of scenarios showing that by just slightly altering the Standard Model or the general theory of relativity we would end up with a Universe that could not produce black holes or would produce far fewer of them. He has a very hard time trying to come up with a change that could produce more, and finally just leaves that as an exercise for the student.

Figure 13 – Making carbon in our fine-tuned Universe is not easy.
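
To get a feel for just how narrow the beryllium-8 window is, here is a minimal sketch that converts the quoted half-life into a mean lifetime and shows how quickly a batch of beryllium-8 nuclei decays away. The sample times are arbitrary illustration values.

import math

# Minimal sketch of the beryllium-8 bottleneck described above: with a
# half-life of 8.19e-17 s, the mean lifetime is t_half / ln(2), and the
# fraction of Be-8 nuclei still intact after a time t is exp(-t / tau).

t_half = 8.19e-17                    # beryllium-8 half-life in seconds
tau    = t_half / math.log(2)        # mean lifetime, ~1.2e-16 s

for t in (1e-17, 1e-16, 1e-15):
    surviving = math.exp(-t / tau)
    print(f"after {t:.0e} s, fraction of Be-8 still intact: {surviving:.3g}")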

All of these factors aid in the formation of intelligent beings, even if those intelligent beings might be quite rare in such a universe. I have always liked Lee Smolin’s theory about black holes in one universe spawning new universes in the Multiverse, but I have always been bothered by the idea that intelligent beings are just a by-product of black hole creation. We still have to deal with the built-in selection biases of the Weak Anthropic Principle. Nobody can deny that intelligent beings will only find themselves in a universe that is capable of supporting intelligent beings. I suppose the Weak Anthropic Principle could be restated to say that black holes will only find themselves existing in a universe capable of creating black holes and that a universe capable of creating black holes will also be capable of creating complex intelligent beings out of the leftovers of black hole creation. However, in the Life of the Cosmos, Smolin takes a very dim view of the Weak Anthropic Principle because in his view it cannot be falsified in a Popperian manner and cannot make any useful predictions beyond what is necessarily already observed. In fact, he devotes an entire chapter in the Life of the Cosmos saying so.

Could Cosmic Intelligence be the Driving Force of Creation?
Towards the end of In Search of the Multiverse: Parallel Worlds, Hidden Dimensions, and the Ultimate Quest for the Frontiers of Reality (2009), John Gribbin proposes a different solution to this quandary. Perhaps intelligent beings in a preceding universe might be responsible for creating the next generation of universes in the Multiverse by attaining the ability to create black holes on a massive scale. For example, physicists at CERN have searched for mini-black holes in the collisions of the LHC. Currently, it is thought that there is a supermassive black hole at the center of the Milky Way Galaxy and apparently at the centers of all other galaxies as well. In addition to the supermassive black holes found at the centers of galaxies, there are also numerous stellar-mass black holes that form when the most massive stars in a galaxy end their lives in supernova explosions. For example, our Milky Way galaxy contains several hundred billion stars, and about one out of every thousand of those stars is massive enough to become a black hole. Therefore, our galaxy should contain about 100 million stellar-mass black holes. Actually, the estimates run from about 10 million to a billion black holes in our galaxy, with 100 million black holes being the best order-of-magnitude guess. So let us presume that it took the current age of the Milky Way galaxy, about 10 billion years, to produce those 100 million black holes naturally. Currently, the LHC at CERN can produce at least 100 million collisions per second, which is about the same number as the black holes that the Milky Way galaxy produced in 10 billion years. Now imagine that we could build a collider that produced 100 million black holes per second. Such a prodigious rate of black hole generation would surpass the natural rate of black hole production in our galaxy by a factor of roughly 3 x 10^17, the number of seconds in 10 billion years. Clearly, if only a single technological civilization with such capabilities should arise at any time during the entire history of each galaxy within a given universe, such a universe would spawn a huge number of offspring universes compared to those universes that could not sustain intelligent beings with such capabilities. As Lee Smolin pointed out, we would then see natural selection in action again because the Multiverse would come to be dominated by universes in which it was easy for intelligent beings to make black holes with a minimum of technology. The requirements simply would be that it was very easy for a technological civilization to produce black holes, and that the universe in which these very rare technological civilizations find themselves is at least barely capable of supporting intelligent beings. It seems that these requirements describe the state of our Universe quite nicely. This hypothesis helps to explain why our Universe seems to be such a botched job from the perspective of providing a friendly home for intelligent beings and software. All that is required for a universe to dominate the Cosmic Landscape of the Multiverse is for it to meet the bare minimum of requirements for intelligent beings to evolve, and more importantly, to allow those intelligent beings to easily create black holes. Since software is needed in all such universes to run the machines that generate the black holes, that explains why our Universe is capable of supporting software, but just barely so, and that is why software is so rare within our galaxy and Universe.
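
The back-of-the-envelope comparison above can be written out explicitly. The sketch below just uses the round numbers from the text - 100 million black holes produced naturally over roughly 10 billion years versus a hypothetical collider making 100 million black holes per second:

# Back-of-the-envelope comparison from the text: ~100 million stellar-mass
# black holes produced naturally over ~10 billion years, versus a hypothetical
# collider producing 100 million black holes every second.

seconds_per_year    = 3.156e7
natural_black_holes = 1.0e8                       # over the galaxy's history
galaxy_age_seconds  = 1.0e10 * seconds_per_year   # ~3.2e17 s

natural_rate    = natural_black_holes / galaxy_age_seconds   # ~3e-10 per second
artificial_rate = 1.0e8                                      # per second

print(f"natural rate    : {natural_rate:.1e} black holes per second")
print(f"artificial rate : {artificial_rate:.1e} black holes per second")
print(f"ratio           : {artificial_rate / natural_rate:.1e}")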

Is the Multiverse a Recursive Function?
For me, such a self-replicating Multiverse has an almost fractal nature at heart, like the Mandelbrot set, which is defined by a simple recursion in the complex plane:

Zₙ₊₁ = Zₙ × Zₙ + C


Figure 14 – The Multiverse can be thought of as a fractal form of self-replicating information, like the Mandelbrot set, with no beginning and no end (click to enlarge)
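
As a concrete illustration of that recursion, here is a minimal sketch that iterates Zₙ₊₁ = Zₙ × Zₙ + C, starting from Z₀ = 0, for a few sample points C and reports whether the orbit stays bounded - the test for membership in the Mandelbrot set.

# Minimal sketch of the recursion behind the Mandelbrot set: start at Z = 0
# and repeatedly apply Z -> Z*Z + C. If |Z| stays bounded (it never exceeds 2),
# the point C belongs to the set.

def in_mandelbrot_set(c, max_iterations=100):
    """Return True if the orbit of 0 under Z -> Z*Z + C stays bounded."""
    z = 0 + 0j
    for _ in range(max_iterations):
        z = z * z + c
        if abs(z) > 2.0:          # the orbit has escaped to infinity
            return False
    return True

for c in (0 + 0j, -1 + 0j, 1j, 1 + 1j):
    print(f"C = {c}:  {'inside' if in_mandelbrot_set(c) else 'outside'} the Mandelbrot set")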

For more on that let's ask Bard to explain how recursive functions work.

In computer science, please describe what a recursive function is.

Bard
In computer science, a recursive function is a function that calls itself directly or indirectly. This means that a recursive function solves a problem by breaking it down into smaller instances of the same problem, and then recursively solving those smaller instances.

Recursive functions can be difficult to understand at first, but they can be very powerful and elegant ways to solve certain problems. For example, many common algorithms, such as quicksort and mergesort, are recursive.

Here is a simple example of a recursive function in Python:

def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

print(factorial(5))

This function calculates the factorial of a number, which is the product of all the positive integers less than or equal to that number. For example, 5 factorial (5!) is equal to 120.

The factorial function works by recursively breaking down the problem into smaller instances. For example, to calculate 5!, the function first calculates 4!, which is equal to 24. It then multiplies 5 by 24 to get the final answer of 120.

Recursive functions can be a bit tricky to debug, but they can be a very powerful tool for solving certain types of problems.

Here are some tips for writing recursive functions:

* Make sure to have a base case, which is a case where the function can return a result without calling itself recursively.
* Make sure that the recursive case always moves closer to the base case.
* Be careful not to create infinite recursion, which happens when a function calls itself forever without ever reaching the base case.

If you are new to recursion, I recommend starting with some simple examples, such as the factorial function. Once you understand how simple recursive functions work, you can start to tackle more complex problems.


The Multiverse described above would be an example of an infinite recursion, endlessly calling itself for all of eternity, without ever returning to an initial base case that never actually existed in the first place. It would also be one in which the code for the recursive function is allowed to change slightly, in a random manner, with each iteration. The successful recursive functions of such a Multiverse would be those that evolved to carry on without crashing into oblivion. As Lee Smolin explained in Life of the Cosmos:

At its root, the most elemental operation in logic is the making of a distinction. We do this whenever we separate one part of the world from another, or when we define a set that contains only certain things and not others. It is possible to make distinctions in the real world because we live in a world full of structure and variety. To the extent that this is the result of intrinsic processes of self-organization, it might be considered that the possibility of self-reference is what makes logic possible. Thus, if it is the case that the laws of nature have been constructed, over time, by processes of self-organization, then the old dream of reducing science to logic may to some extent be realized. But this will be possible, not in the timeless sense dreamed about by Platonists, but only in the sense that logic, expressed in time and complicated by the possibility of self-reference, is self-organization.

Such a model of the Multiverse would be an example of the Strong Anthropic Principle in action, where intelligent beings would only find themselves in universes that were specifically designed to sustain intelligent beings. The Strong Anthropic Principle is frowned upon by most scientists because it seems to harken back to the ancient religious mythologies of yore, with an eternal and external god-like force acting behind the scenes to create universes designed for creatures such as ourselves. However, this model really only requires that an eternal form of self-replicating Intelligence be at work, with no need for any mysticism whatsoever.

Conclusion
Lee Smolin ends Life of the Cosmos with this very Darwinian paragraph:

So there never was a God, no pilot who made the World by imposing order on chaos and who remains outside, watching and proscribing. And Nietzsche now also is dead. The eternal return, the eternal heat death, are no longer threats, they will never come, nor will heaven. The world will always be here, and it will always be different, more varied, more interesting, more alive, but still always the world in all its complexity and incompleteness. There is nothing behind it, no absolute or platonic world to transcend to. All there is of Nature is what is around us. All there is of Being is relations among real, sensible things. All we have of natural law is a world that has made itself. All we may expect of human law is what we can negotiate among ourselves, and what we take as our responsibility. All we may gain of knowledge must be drawn from what we can see with our own eyes and what others tell us they have seen with their eyes. All we may expect of justice is compassion. All we may look up to as judges are each other. All that is possible of Utopia is what we make with our own hands. Pray let it be enough.


As always, this still leaves us with the eternal unanswered question of a "First Cause" that has baffled all philosophers throughout the ages. But perhaps there never really was one.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston
