I just finished reading:
If the Universe is Teeming with Aliens…
Where is Everybody?
Fifty Solutions to the Fermi Paradox and the Problem of Extraterrestrial Life
(2002) by Stephen Webb. In Where is Everybody? Stephen Webb briefly goes through 50 possible solutions to explain Fermi’s Paradox:
Fermi’s Paradox - If the Universe is just chock full of intelligent beings, why do we not see any evidence of their existence?
He divides the 50 solutions into three categories:
1. They Are Here
2. They Exist But Have Not Yet Communicated
3. They Do Not Exist
Many of these solutions can be found in Some Additional Thoughts on the Galactic Scarcity of Software, SETS - The Search For Extraterrestrial Software, CyberCosmology and The Sounds of Silence the Unsettling Mystery of the Great Cosmic Stillness. Stephen Webb can only spend a few pages on each proposed solution, but he does so in an unbiased and even-handed manner, and his book provides the most exhaustive analysis of the Fermi Paradox that I am aware of.
I think that, like most members of the scientific community, Stephen Webb has a very low level of confidence in most of the solutions that suggest that intelligent aliens have already made their presence known here on Earth. That leaves the solutions that maintain that intelligent aliens exist but, like us, do not yet have the necessary technology or desire to communicate with the rest of the galaxy, and the solutions that maintain that we indeed are alone in our galaxy and that intelligent aliens simply do not exist elsewhere in it. However, the solutions that propose that intelligent aliens do exist within our galaxy, but have not yet made their presence known to us, all suffer from the same problem. We, ourselves, are a mere 400 years into the Scientific Revolution, and yet we have already started to make our presence known to the rest of the galaxy through radio and television broadcasts. Certainly, within the next few centuries, the Earth will have sufficient technology to unleash von Neumann probes upon our galaxy, self-replicating robotic probes that travel from star system to star system, building copies of themselves along the way. Studies have shown that once released, von Neumann probes should easily spread across our entire galaxy within a few million years, and since our galaxy is more than 10 billion years old, we should already find ourselves knee-deep in alien von Neumann probes, but that obviously is not the case.
That leaves the solutions that maintain that we are alone in the galaxy, and that alien intelligences simply do not exist. Most of those solutions hinge upon the Rare Earth Hypothesis presented in the classic Rare Earth (2000) by Peter Ward and Donald Brownlee. The Rare Earth Hypothesis maintains that our Earth and Solar System are a fluke of nature that is very hard to reproduce in the first place, and that, because our universe is also a very dangerous place, it is very hard for a planet and planetary system to remain hospitable to intelligent beings for very long.
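As a rough illustration of the von Neumann probe timescale mentioned above, here is a back-of-the-envelope sketch in Python. The probe speed, hop distance and replication pause are purely my own illustrative assumptions, not figures taken from Webb's book:

# Back-of-the-envelope estimate of how long self-replicating von Neumann
# probes might take to spread across the galaxy. All parameters below are
# illustrative assumptions, not figures from Webb's book.
GALAXY_RADIUS_LY = 50_000     # rough radius of the Milky Way's disk in light-years
PROBE_SPEED_C = 0.1           # assumed cruise speed as a fraction of the speed of light
HOP_DISTANCE_LY = 10          # assumed distance between neighboring star systems
REPLICATION_YEARS = 500       # assumed pause at each system to build copies

hops = GALAXY_RADIUS_LY / HOP_DISTANCE_LY        # star-to-star hops needed to cross the galaxy
travel_years = GALAXY_RADIUS_LY / PROBE_SPEED_C  # light-years divided by a fraction of c gives years
building_years = hops * REPLICATION_YEARS        # total time spent building copies along the way

total_years = travel_years + building_years
print(f"Rough expansion time: {total_years / 1e6:.1f} million years")
# Prints: Rough expansion time: 3.0 million years

With these assumptions the probes sweep the galaxy in a few million years, a mere instant compared with the more than 10 billion years the galaxy has been around.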
In the conclusion of Where is Everybody? Stephen Webb comes up with a personal solution for Fermi’s Paradox based upon a “death by a thousand cuts” explanation, which I find very plausible. Now that we have essentially figured out how our Universe formed and evolved, how galaxies and stars formed and evolved, how our Solar System formed and evolved, how the Earth formed and evolved, how simple prokaryotic life formed and evolved, how more complex eukaryotic life formed and evolved, how complex multicellular life based upon eukaryotic cells formed and evolved, how complex neuronetworks formed and evolved within complex multicellular organisms, and how intelligent and self-aware organisms emerged from these complex neuronetworks, it becomes quite apparent that any disruption to this very complicated chain of events could easily derail the emergence of sentient beings.
To model this effect Stephen Webb cleverly uses the sieve of Eratosthenes as an example. The sieve of Eratosthenes is a simple algorithm that can be used to quickly filter out the non-prime numbers from a population of natural numbers. To understand this we need to review what natural and prime numbers are. The natural numbers are formed by simply taking “1” and adding “1” to it over and over again, which leads to the familiar sequence of counting numbers “1, 2, 3, 4, 5, 6, 7, 8, 9, 10 …”. Prime numbers are simply the natural numbers greater than “1” that are only evenly divisible by “1” and themselves, yielding the sequence “2, 3, 5, 7, 11, 13, 17 …”. Naturally, there are an infinite number of natural numbers because you can always add “1” to a number to obtain a number that is “1” greater, and around 300 B.C. Euclid proved that there are an infinite number of prime numbers as well. Even though the prime numbers form only a part of the natural numbers, the two sets are actually the same “size” of infinity, since the primes can be matched up one-for-one with the counting numbers; what does change is that the primes become ever scarcer as the numbers get larger. The problem is that there is no simple formula that tells you in advance whether or not a particular natural number is going to be a prime number. However, there are statistical approaches, and these statistical approaches tell us that as the number of digits in a natural number increases, the odds of it being a prime number decrease, because there are so many smaller natural numbers that could be a divisor of the natural number in question. The Prime Number Theorem states that for a sufficiently large natural number N, the odds of N being a prime number are very close to 1 / ln(N). Thus the odds that a natural number with 1000 digits is prime are about one in 2300, because ln(10^1000) ≈ 2302.6, whereas for a natural number with 2000 digits the odds are about one in 4600, because ln(10^2000) ≈ 4605.2. Likewise, the average gap between consecutive prime numbers amongst the first N natural numbers is roughly ln(N), so the gaps between primes grow as N grows. This means that as natural numbers get bigger, the odds of them being prime decrease and the gaps between primes widen, and as N goes to infinity, the odds of N being a prime number go to zero. So even though there are an infinite number of prime numbers hidden amongst the infinite number of natural numbers, the vast majority of natural numbers get eliminated from the set of prime numbers because they have a divisor that disqualifies them.
Stephen Webb proposes that a similar elimination process may have occurred amongst the roughly 1 trillion planets and moons that are likely to be found within our galaxy, leaving the Earth as the sole safe harbor for intelligent technologically-capable beings. As each fluky turn of events that led to the emergence of intelligent technologically-capable beings on the Earth unfolded, it eliminated a number of the 1 trillion candidates, and perhaps eliminated some of them more than once in an “overkill” manner.
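As a quick sanity check of those Prime Number Theorem figures, here is a minimal Python sketch; it just re-computes ln(10^n) for 1000-digit and 2000-digit numbers:

import math

# Prime Number Theorem: a natural number chosen at random near N is prime
# with probability of roughly 1 / ln(N). For an n-digit number, N ~ 10^n,
# so ln(N) = n * ln(10).
for digits in (1000, 2000):
    ln_N = digits * math.log(10)   # ln(10^digits)
    print(f"A {digits}-digit number is prime with odds of about 1 in {ln_N:,.1f}")

# Prints:
#   A 1000-digit number is prime with odds of about 1 in 2,302.6
#   A 2000-digit number is prime with odds of about 1 in 4,605.2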
To see how the sieve of Eratosthenes works, take a look at Figure 1, which is a pictorial depiction of the sieve. Recall that Eratosthenes was the first person to measure the size of the Earth, by measuring the angular height of the Sun in the Egyptian cities of Syene and Alexandria at the summer solstice and then measuring the distance between the two cities. The sieve of Eratosthenes works like this. First, you pick a population of natural numbers; in Figure 1 we chose the first 120 natural numbers. The number “2” is the first prime number. We then begin to eliminate candidate prime numbers from the population by multiplying the first prime number “2” by the sequence “2, 3, 4, 5 …” and crossing out the products, yielding:
2 * 2 = 4
2 * 3 = 6
2 * 4 = 8
2 * 5 = 10
It is easy to see that this eliminates nearly half of the first 120 natural numbers. Then we do the same thing for the second prime number “3”, multiplying it by the sequence “2, 3, 4, 5 …”, yielding:
3 * 2 = 6
3 * 3 = 9
3 * 4 = 12
3 * 5 = 15
This process would target another 1/3 of the natural numbers, but there is some “overkill” in the process because numbers like “6” and “12” were already eliminated by the prime number “2”. We see similar “overkill” actions when performing the same process for the prime numbers “5” and “7”. In fact, by the time we get to the prime number “11”, all of the natural numbers in the first 120 natural numbers that “11” would eliminate, like “22”, “33”, “44”, “55”, “66”, “77”, “88”, “99” and “110”, have already been eliminated by primes smaller than “11”; the first multiple of “11” not already crossed out would be 11 * 11 = 121, which lies just beyond our population of 120.
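For readers who would like to play with the sieve directly, here is a minimal Python sketch of the procedure described above; it is my own illustration, not code from Webb's book:

def sieve_of_eratosthenes(limit):
    """Return all of the prime numbers up to and including limit."""
    # Assume every number from 2 up to limit is a candidate prime.
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False            # 0 and 1 are not prime
    for p in range(2, limit + 1):
        if is_prime[p]:
            # Cross out the proper multiples of p: 2p, 3p, 4p, ...
            # Some of them, like 6 and 12, get crossed out more than once,
            # which is the "overkill" effect described above. (A common
            # optimization starts at p * p to skip numbers that smaller
            # primes have already eliminated.)
            for multiple in range(2 * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(sieve_of_eratosthenes(120))
# Prints the 30 primes that survive: [2, 3, 5, 7, 11, 13, ..., 109, 113]

Running it on the first 120 natural numbers leaves only 30 primes standing, which is exactly the kind of drastic winnowing that Webb has in mind for the trillion or so planets and moons of our galaxy.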
Stephen Webb uses this “overkill” action of the sieve of Eratosthenes as a metaphor. Perhaps there is no single explanation for Fermi's Paradox, such as the idea that only rocky planets with plate tectonics produce intelligent beings. Perhaps it is the myriad elements of the Rare Earth Hypothesis, in combination with the inherent dangers of our galaxy, that finally “overkill” all other planets and moons in our galaxy, leaving Earth as the home of the only intelligent technologically-capable beings in the galaxy.
Figure 1 – Perhaps the myriad elements of the Rare Earth Hypothesis in combination essentially form a sieve of Eratosthenes that eliminates all other locations in our galaxy as possible candidates for intelligent technologically-capable beings.
The problem with this solution is that in recent years we have obtained evidence from the Kepler Mission:
https://www.nasa.gov/mission_pages/kepler/overview/index.html
that demonstrates that planets are quite common in our galaxy, and we have already found some Earth-like planets out there. Researchers have also begun to obtain spectra from the atmospheres of such planets, so within a few years we may be able to detect the presence of carbon-based life forms from molecular signatures in the atmospheres of these observed exoplanets. It is therefore rather difficult to propose that all of the factors that tend to eliminate intelligent technologically-capable beings in our galaxy always come together to eliminate them with 100% efficiency. So let me try to add one more elimination factor to the long list that we already have.
Another Possible Explanation For Fermi’s Paradox
Whenever I read material about such matters, I always try to remember that we are now living in a very special time in the history of the self-replicating information that has been reworking the surface of our planet for the past 4.0 billion years. Recall that softwarephysics maintains that there are currently three forms of self-replicating information on the Earth – the genes, memes and software, with software rapidly becoming the dominant form of self-replicating information on the planet (see A Brief History of Self-Replicating Information for details). Given that software will soon become the dominant form of self-replicating information on the planet, I seriously doubt that there will be any human beings, as we currently know them, on the Earth a few centuries from now. Mankind, as we know it, will simply no longer exist. By that time, we will have become a well-dispersed transition fossil marking the arrival of machine intelligence on the Earth. Hopefully, we will have merged with the machines in a parasitic/symbiotic manner, like all of the preceding parasitic/symbiotic mergers that have marked the evolutionary history of self-replicating information on the Earth over the past 4.0 billion years, but those details remain to be seen.
In Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos (2006) Seth Lloyd proposes that our Universe is simply a vast quantum computer computing its own behavior. Perhaps in 1,000 years, when software has finally become the dominant form of self-replicating information on the planet and is running on huge networks of quantum computers, it will make no distinction between the “real” Universe and the “simulated” universes that it can easily cook up in its own hardware. Perhaps, as we saw in Quantum Computing and the Many-Worlds Interpretation of Quantum Mechanics, the software running on these vast networks of quantum computers of the future will come to realize that the Many-Worlds interpretation of quantum mechanics is indeed correct, and that the humans of long ago were simply a large collection of quantum particles constantly getting entangled or “correlated” with other quantum particles, and splitting off into parallel universes in the process. This constant splitting gave the long-forgotten humans the delusion that they were conscious beings and led them to do very strange things, like look for similarly deluded entities.
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston