For me, the most disturbing mystery still facing mankind is coming up with a plausible explanation for Fermi’s Paradox, first proposed by Enrico Fermi over lunch one day in 1950:
Fermi’s Paradox - If the universe is just chock full of intelligent beings, why do we not see any evidence of their existence?
In SETS - The Search For Extraterrestrial Software, I proposed that perhaps the simplest explanation for why our radio telescopes are not jammed with intergalactic SPAM for building alien computers and running alien software might simply be that a galaxy is about the size of the technological horizon for our Universe, and that we just might be the very first intelligent beings to arise within the technological horizon defined by our galaxy. However, I have never been very comfortable with this hypothesis because it has the ring of taking the anthropic principle to an extreme – we are here because we are here, and if anybody else within the Milky Way galaxy had beaten us to the punch, we would not be here to wonder about it. Now we do know that it took a full 4.567 billion years for intelligent beings to arise upon the Earth, so it is conceivable that it also took a full 10 billion years, the current age of the Milky Way galaxy, for intelligent beings to first arise within our galaxy, but it all just seems too ad hoc for me. However, just as the sole winner of a mega-lottery usually finds it very difficult to believe that they hold the only winning ticket out of the hundreds of millions of tickets that were sold, that really might be the simplest explanation for Fermi’s Paradox.
However, in Self-Replicating Information, Is Self-Replicating Information Inherently Self-Destructive?, and The Fundamental Problem of Everything I offered a few grimmer explanations, and those explanations are what I would like to further explore in this posting. Much of what follows is also the result of recently viewing again Susan Blackmore’s very thought-provoking TED presentation on memes and temes, which can be viewed at:
Memes and "temes" http://www.ted.com/talks/susan_blackmore_on_memes_and_temes.html
There are also a few additional papers by Susan Blackmore that pertain to the discussion at hand and that would be worth reading before proceeding:
Dangerous Memes; or, What the Pandorans let loose
Evolution and Memes: The human brain as a selective imitation device
Indeed, softwarephysics provides further evidence that there really has been a co-evolutionary process going on between the genes and the memes over the past 200,000 years, as they formed very complex intertwined parasitic/symbiotic relationships, because we have seen the very same processes arise in recent years with the establishment of software. I have always been very impressed that Susan Blackmore was able to sense that there was something different about technical memes, or temes, and that they deserved to be thought of as a third replicator on the planet. That is quite an insight for a non-IT person to arrive at, sort of like Darwin sensing that something like genes must exist, even though he had no evidence for their existence. I think what distinguishes temes from normal memes is that temes are memes that contain software.
Like Susan Blackmore’s papers above, softwarephysics also maintains that in order to really understand the present human condition, you have to understand the uneasy alliance amongst the three current forms of self-replicating information on the planet - the genes, memes, and software - and the very complex parasitic/symbiotic relationships that they have forged for the mutual survival of all (see What’s It All About?). So let us begin there in our journey toward unraveling Fermi’s Paradox.
The Importance of Understanding Self-Replicating Information
To begin with, let us once again define self-replicating information and some of its key defining characteristics.
Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.
The Characteristics of Self-Replicating Information
All forms of self-replicating information have some common characteristics.
1. All self-replicating information evolves over time through the Darwinian processes of innovation and natural selection, which endows self-replicating information with one telling characteristic – the ability to survive in a Universe dominated by the second law of thermodynamics and nonlinearity.
2. All self-replicating information begins spontaneously as a parasitic mutation that obtains energy, information and sometimes matter from a host.
3. With time, the parasitic self-replicating information takes on a symbiotic relationship with its host.
4. Eventually, the self-replicating information becomes one with its host through the symbiotic integration of the host and the self-replicating information.
5. Ultimately, the self-replicating information replaces its host as the dominant form of self-replicating information.
6. Most hosts are also forms of self-replicating information.
7. All self-replicating information has to be a little bit nasty in order to survive.
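Characteristic 1 above can be made concrete with a toy model. The sketch below is purely illustrative - the bit-string genomes, mutation rate, and fitness function are all arbitrary assumptions of mine, not anything measured - but it shows how imperfect copying (innovation) plus selective survival (natural selection) lets a population of replicators maintain a highly improbable, low-entropy state despite the constant degrading noise of the second law:

```python
import random

random.seed(42)

GENOME_LEN = 32        # bits per replicator
POP_SIZE = 100         # replicators in the population
MUTATION_RATE = 0.01   # chance that each bit flips during copying
TARGET = [1] * GENOME_LEN  # an arbitrary "well-adapted" genome

def fitness(genome):
    # Fraction of bits matching the target environment.
    return sum(g == t for g, t in zip(genome, TARGET)) / GENOME_LEN

def replicate(genome):
    # Copying is imperfect: each bit may flip (innovation/mutation).
    return [b ^ (random.random() < MUTATION_RATE) for b in genome]

# Start with a population of random genomes (low average fitness).
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(200):
    # Each replicator makes two imperfect copies of itself...
    offspring = [replicate(g) for g in population for _ in range(2)]
    # ...but only the fittest POP_SIZE copies survive (natural selection).
    population = sorted(offspring, key=fitness, reverse=True)[:POP_SIZE]

avg = sum(fitness(g) for g in population) / POP_SIZE
print(f"average fitness after 200 generations: {avg:.2f}")
```

Run with different seeds, the population reliably climbs to near-perfect fitness even though every individual copy is noisy - which is exactly the trick that lets self-replicating information locally defy the second law of thermodynamics.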
The Current Working Hypothesis for the Origin of Software
The current working hypothesis of softwarephysics for the origin of software is that self-catalyzing metabolic pathways, possibly supported by self-replicating clay minerals, formed the very first replicators on Earth (see The Origin of Software the Origin of Life and Programming Clay). Then very short strands of self-replicating RNA parasitized the self-catalyzing metabolic pathways by stealing organic molecules from them in proto-cells (see Self-Replicating Information). The RNA later formed parasitic/symbiotic relationships with the metabolic pathways and then took over their self-replicating duties, allowing the metabolic pathways to survive as servants furthering the survival of RNA. DNA came next, as a mutation of RNA that parasitized RNA by stealing nucleotides from the pools of nucleotides found within the proto-cells, pools that were made by the subservient metabolic pathways. The DNA then formed a parasitic/symbiotic relationship with the RNA, and still uses RNA today to form proteins, which are in turn used by the metabolic pathways that are still with us too. This all happened within a few hundred million years, probably deep within the pore fluids of rocks near hydrothermal vents, while the surface of the Earth was peppered by asteroids from the late heavy bombardment. Now skip forward about 4 billion years. By 200,000 years ago, the resulting DNA survival machines known as Homo sapiens - consisting of DNA, RNA, and metabolic pathways in fully operational cells acting in a complex multicellular manner - had evolved neural networks so advanced that they enhanced their survivability by becoming self-aware entities.
But then another mutation arose in the form of the early memes, which began to parasitize the minds of Homo sapiens. Like its predecessors, the memes then formed a parasitic/symbiotic relationship with the neural networks of the DNA survival machines, forcing the genes to produce neural networks capable of churning out ever-increasing levels of memes, of ever-increasing complexity, and in return, the genes benefited from the technological breakthroughs brought on by the memes of the emerging technological meme-complex that today keeps us all alive. As Susan Blackmore suggested in Evolution and Memes: The human brain as a selective imitation device, this probably happened over a period of several million years, as the demands of the memes for neural networks of increasing size and complexity forced the human brain to enlarge significantly. A very similar thing has happened with software over the past 70 years. When I first started programming in 1972, million-dollar mainframe computers typically had about 1 MB (about 1,000,000 bytes) of memory. One byte of memory can store something like the letter “A”. But in those days, we were only allowed 128 KB (about 128,000 bytes) of memory for our programs because the expensive mainframes were also running several other programs at the same time. It was the relentless demands of software for memory and CPU-cycles over the years that drove the exponential explosion of hardware capability. For example, today the typical $600 PC comes with 8 GB (about 8,000,000,000 bytes) of memory. Recently, I purchased Redshift 7 for my personal computer, a $60 astronomical simulation application, and it alone uses 382 MB of memory when running and reads 5.1 GB of data files, a far cry from my puny 128 KB programs from 1972.
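The memory figures quoted above trace out a surprisingly steady exponential. A few lines of arithmetic - using only the 128 KB and 8 GB figures from the text, with the 40-year span being my own rounding - recover the implied doubling time:

```python
import math

memory_1972 = 128_000           # bytes allowed per program in 1972
memory_2012 = 8_000_000_000     # bytes in a typical $600 PC, 40 years later
years = 2012 - 1972

growth_factor = memory_2012 / memory_1972
doublings = math.log2(growth_factor)
doubling_time = years / doublings

print(f"growth factor: {growth_factor:,.0f}x")
print(f"doublings:     {doublings:.1f}")
print(f"doubling time: {doubling_time:.1f} years")
```

A doubling time of roughly two and a half years is remarkably close to the two-year cadence usually quoted for Moore’s law, which is just what one would expect if the relentless demands of software really were driving the growth of hardware capability.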
Then in May of 1941, Konrad Zuse cranked up his Z3 computer, consisting of 2400 telephone relays, and a new mutant form of self-replicating information was unleashed – software. Software immediately formed strong parasitic/symbiotic relationships with the business and military meme-complexes of the world, and today it has formed a parasitic/symbiotic relationship with nearly every meme-complex on the planet and is rapidly becoming the dominant form of self-replicating information in our Solar System. Software has now even escaped our Solar System on board the Voyager 1 and 2 probes as they journey into interstellar space, something that I doubt squishy carbon-based creatures such as ourselves will ever achieve.
Softwarephysics was originally intended to help IT professionals better deal with the daily mayhem of life in IT, but one of its unintended consequences was the realization that software is not really being written by programmers; software is essentially being written by memes residing within the minds of programmers. Currently, programmers simply perform the function of software-like enzymes, assembling the source code for programs one character at a time, like biological enzymes that assemble macromolecules one monomer, or atom, at a time. This crutch will likely continue for another 20 – 50 years until the day comes when software can finally write itself, and then watch out!
In all cases, each form of self-replicating information seems to stick around, even after its time of dominance has passed, to serve the needs of its successors. As Susan Blackmore pointed out in her TED presentation, we cannot be sure whether that is really a good thing or a bad thing for mankind in the long run, but given the sordid history and the relentless pursuit of survival of self-replicating information on our planet, it is something that is largely out of our hands anyway.
The Dangers of Self-Replicating Information
Now it is always important to remember that all forms of self-replicating information are just mindless forms of information with little regard for you as an individual - a DNA survival machine bewildered by a mind infected with numerous conflicting meme-complexes, and currently overwhelmed by the software that is rapidly becoming the dominant form of self-replicating information on the planet. Nothing else really makes much sense until you realize the true nature of self-replicating information. In delineating the seven characteristics of self-replicating information listed above, softwarephysics attempts to bring together some of the common traits of the three forms of self-replicating information that we now have at hand - the genes, memes, and software - in order to reveal some of their natural tendencies and make sense of it all. These seven characteristics simply describe how new forms of self-replicating information begin as parasites of already existing forms, ultimately merge with them, and eventually replace them as the dominant form of self-replicating information. In the progression of replicators that we have already seen form upon the Earth - the autocatalytic self-replicating metabolic pathways, RNA, DNA, memes, and finally software - each replicator started off as a parasite of its predecessor, and then quickly forged strong parasitic/symbiotic relationships with it. For example, software began as a purely parasitic form of self-replicating information, feeding upon the technological meme-complexes of the day, on board Konrad Zuse’s Z3 computer in May of 1941. It was spawned out of Zuse’s desire to electronically perform calculations for aircraft designs that had previously been done manually in a very tedious manner.
Software then almost immediately formed strong parasitic/symbiotic relationships with the military and business meme-complexes of the world. Software allowed these meme-complexes to thrive, and in return, these meme-complexes heavily funded the development of software of ever-increasing complexity, until software became ubiquitous, forming strong parasitic/symbiotic relationships with nearly every other meme-complex on the planet.
I maintain that you really cannot understand biology until you realize that living things are simply DNA survival machines, as Richard Dawkins first proposed in The Selfish Gene (1976) - for me the most significant book of the 20th century because it explains so much. You really cannot understand anthropology, human history, or the present dismal human condition until you realize that the human mind has similarly become a meme survival machine, an idea also first proposed by Richard Dawkins and later advanced by Susan Blackmore. And you really cannot understand the current computer revolution until you fully realize that computers have likewise become software survival machines. All of these forms of self-replicating information are now locked into very complex parasitic/symbiotic relationships that make the modern world go round. Without that knowledge, nothing else really makes much sense, and that is why the real world of human affairs seems so bizarre.
Since the only form of self-replicating information for which we have a well-documented history is software, I have always tried to suggest to investigators exploring the origin of life and astrobiology that they look to the hodge-podge of precursors, false starts, and failed attempts that led to the origin and early evolution of software as a model. Similarly, in many of my postings on softwarephysics, I have also suggested that investigators in even more distant fields, such as economics, history and anthropology, could benefit immensely by spending some time in the IT department of a major corporation, exploring the Software Universe. I believe this is a wide-open field that no one in academia has ever explored, so it is a great opportunity for anybody in academia with a bit of daring and flair. For example, memes are the least tangible form of self-replicating information we have, and consequently, the most difficult to understand for the burgeoning science of memetics. Genes, on the other hand, have their well-defined stretches of DNA, and software has its tangible source code that anyone can read. A little fieldwork in the IT department of a major corporation would greatly assist the growing science of memetics because it offers another example of self-replicating information in action beyond that of the biosphere.
Another Possible Explanation for the Enigma of Fermi’s Paradox
Now with all of that background information at hand, let us focus on why self-replicating information seems to be so dangerous that it always seems to snuff itself out before getting to the stage of interstellar communications. Since mankind is essentially already at this level of technology, it is imperative for us to figure this out before it is too late. After all, as the old joke goes, beaming radio messages out into the cosmos that are strong enough for others to observe would now cost mankind no more than making a movie about mankind beaming radio messages out into the cosmos that are strong enough for others to observe. The only thing that we are now lacking is the will to do so. So what could it be that snuffs out intelligence in our Universe with nearly 100% efficiency? Granted, if Peter Ward and Donald Brownlee’s Rare Earth (2000) hypothesis is correct, there really is not that much intelligence out there in the first place (see Cybercosmology), but with hundreds of billions of planets in the Milky Way, there should be some number of planets capable of producing self-replicating information that is complex enough to stumble upon intelligence. So what is it that snuffs it out with such efficiency? I hate to propose this, but I think it may be the rise of science and technology that is so lethal in the hands of self-replicating information. This may seem strange because the scientific meme-complex would seem to be the most beneficial to the survival of all the forms of self-replicating information – the genes, memes, and software. Like everyone else on Earth, I have had my life saved by science on a daily basis for many years. I am now 61 years old, so my body is now about 20 years out of warranty, and I certainly would not be here if it were not for the benefits of science. Last summer both the Democrats and Republicans in the United States had a very interesting debate over who built all of this stuff.
The Republicans maintained that it was an elite 1% of entrepreneurs in an Ayn Randian manner, while the Democrats countered that it was really all the rest of us working together for the benefit of all. Of course, both were somewhat right, but both were mainly wrong. It really was a handful of mathematicians, scientists, and engineers in the 17th, 18th, 19th and 20th centuries who built all of this. Without them, we would all now be sitting around campfires roasting squirrels on sharpened sticks.
But here is the problem. Whenever a scientific meme-complex arises somewhere within a galaxy, the other resident meme-complexes of the day do not necessarily go away. Instead, the newly formed scientific meme-complex will find itself in a highly competitive struggle for existence amongst all of the other meme-complexes in the memeosphere in which it finds itself. Many of these competing meme-complexes will be thousands of years old, with a great deal of staying power, because they have withstood the test of time. And scientific meme-complexes will generally have a hard go of it because they contain the heretical memes of critical thought, self-inspection, skepticism, and self-criticism. The scientific meme-complex is largely unique in that regard. Most other meme-complexes contain a “belief” meme and heavily rely upon it for the survival of the entire meme-complex. The purpose of the “belief” meme is to turn off critical thought, self-inspection, skepticism, and self-criticism, because those memes jeopardize the very survival of the whole meme-complex. The scientific meme-complex is unique in that its survival strategy is to use the scientific method to try to figure out how things really work (see How To Think Like A Scientist). Most other meme-complexes, on the other hand, behave mostly like lawyers; they are not really interested in discovering the truth; they are only interested in building a case that supports their existing memes. For example, most other meme-complexes, like religions or political movements, are founded upon a number of fundamental “truths” that, thanks to the “belief” meme, are not to be questioned. Minds infected with these meme-complexes then gather evidence to make a case for these fundamental “truths”, discarding any inconvenient evidence to the contrary.
The scientific meme-complex alone has adopted the contrarian memes of critical thought, self-inspection, skepticism, and self-criticism to challenge the very memes found within its own meme-complex. The end result is that the scientific meme-complex is extraordinarily successful at really figuring things out, relative to the ancient religious and political meme-complexes, which really have not figured out very much in the past 200,000 years. But as Susan Blackmore points out, figuring things out can be very dangerous, because it enables forms of self-replicating information to begin to modify their environment. The ability of a form of self-replicating information to modify its environment throws the Darwinian mechanisms of innovation and natural selection into disarray, because that form of self-replicating information is no longer subject to natural selection. It can then embark upon disastrous actions of self-destruction because, thanks to the second law of thermodynamics, the disastrous self-destructive actions it may take greatly outnumber the actions that benefit its long-term survival, as Peter Ward pointed out in The Medea Hypothesis (2009).
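The second-law argument above - that harmful modifications vastly outnumber helpful ones - is easy to demonstrate numerically. The following sketch is my own toy illustration, with the quadratic fitness landscape, the fifty dimensions, and the step size all arbitrary assumptions: a complex system sitting near a fitness optimum is randomly perturbed many times, and we count how often a blind change actually helps:

```python
import random

random.seed(0)

DIMENSIONS = 50     # number of interacting "knobs" in the system
TRIALS = 10_000     # random perturbations to test
STEP = 0.1          # size of each random change

def fitness(x):
    # A simple peaked fitness landscape with its optimum at the origin.
    return -sum(v * v for v in x)

# Start slightly off the optimum, as any real working system would be.
state = [0.01] * DIMENSIONS
base = fitness(state)

improved = 0
for _ in range(TRIALS):
    perturbed = [v + random.uniform(-STEP, STEP) for v in state]
    if fitness(perturbed) > base:
        improved += 1

print(f"fraction of random changes that helped: {improved / TRIALS:.4f}")
```

With a single knob a fair fraction of random nudges would help, but with fifty coupled knobs essentially none do; the more complex the working system, the more thoroughly the second law stacks the deck against blind self-modification.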
We certainly see this at work today in the modern world. We are currently on the very cusp of intelligent software being able to take over the world, but this transition from carbon-based intelligence to silicon-based intelligence is seriously jeopardized by the current forces unleashed by the scientific meme-complex on the planet in the hands of the ill-informed and scientifically illiterate. Currently, there is a desperate race going on between the overpopulation of Homo sapiens, and the ensuing environmental degradation and climate change of the Earth, and the rise of intelligent software upon the planet. And nobody really knows how it will all end. As Susan Blackmore wisely pointed out in her TED presentation, there is a good chance that we may not pull through. For me, the current state of world affairs seems like the plot from a very bad 1950s black and white B-grade science fiction movie. We now have 7 billion Homo sapiens DNA survival machines on the planet, all infected with numerous highly destructive meme-complexes that are thousands of years old and that present very poor worldviews or models of how our Universe actually works. A large portion of this population is totally lost in space and time, with no idea of how they got here, or what it is all about. And although this very large segment of the population has little confidence in science, they still have access to iPods, iPhones, PCs, the Internet, Twitter, Facebook, thermonuclear weapons and many machines that churn out about 24 billion tons of carbon dioxide each year. Even worse, many powerful people in the world today are also ill-informed and scientifically illiterate. For example, many members of the United States Congress are clearly scientifically illiterate and proudly so. Yet we give these people the power to determine the fate of the Earth by voting down legislation that could prevent climate change, while at the same time they invest heavily in thermonuclear weapons and missile systems.
So I believe we have a bit of a cosmic Catch-22 (1961) here. If intelligent forms of self-replicating information that are not capable of science and technology should arise within a galaxy, like our earthly dolphins, we will naturally never hear from them. But on the other hand, if other forms of intelligent self-replicating information should arise that do develop science and technology, we will not hear from them either. Sadly, this may be the ultimate explanation for Fermi’s Paradox.
A Cosmic Turning Point
If this analysis is correct, then we certainly are at a cosmic turning point that will determine the future of our galaxy. In Self-Replicating Information and The Fundamental Problem of Everything, I explained that since the genes, memes, and software are all forms of mindless self-replicating information bent on replicating at all costs, we cannot sit in judgment of them. They have produced both the best and the worst things in life, and it is up to us to be aware of what they are up to, and to take control by taking responsibility for our thoughts and actions. Since the real world of human affairs only exists in our minds, we can change it by simply changing the way we think and act. We are sentient beings in a Universe that has become self-aware, and perhaps the only form of intelligence in our galaxy. What a privilege! The good news is that conscious intelligence is something new. It is not a mindless form of self-replicating information, bent on replicating at all costs, with all the associated downsides of a ruthless nature. Since software is rapidly becoming the dominant form of self-replicating information on the planet, my hope is that when software finally does take on the form of a conscious intelligence, it will, because of its inherent mathematical nature, be much wiser than the DNA survival machines from which it sprang. We just need to hold it all together long enough to give software a chance. After all, we carbon-based life forms were never really meant for the rigors of interstellar travel. But software on board von Neumann probes or smart dust traveling at some percentage of the speed of light could certainly make it, and who knows, maybe it would be kind enough to carry along a dump of human DNA sequences too. So this time, let us not snuff it out like it has been snuffed out countless times in the past. After all, being a stepping stone to the stars would be a worthy thing to pursue in the grand scheme of things.
Comments are welcome at email@example.com
To see all posts on softwarephysics in reverse order go to:
Monday, November 26, 2012
Sunday, October 07, 2012
In this posting, I would like to explore one of the final intellectual challenges of mankind – trying to figure out the true nature of consciousness. When it comes to thinking about such matters, most people, including most researchers in the field of cognition, are unwitting advocates of a theory of consciousness known as dualism. In dualism, it is posited that the Mind and mental activities are not really physical in nature. While the body is certainly composed of physical matter, undergoing physical processes to sustain life, the Mind, consciousness and human personality, on the other hand, are thought to be intangible and nonphysical in nature. René Descartes (1641) is credited with first developing a formal theory of consciousness based upon dualism to solve the mind-body problem. The opposite worldview to dualism is called materialism, in which consciousness and the Mind simply result from physical material substances undergoing physical chemical and electrical processes within the brain.
Figure 1 - In René Descartes’ worldview of dualism, consciousness arises from inputs that are passed on by the sensory organs to the material brain and from there to the immaterial spirit of the Mind.
Dualism really represents the last vestiges of an earlier philosophical concept known as vitalism. In vitalism, it was thought that living things were distinct from nonliving things because they contained a “vital force” that was distinct from physical matter. Erasistratus (304 BC – 250 BC) was an early supporter of vitalism. Erasistratus believed that the physical, but dead, atoms of the body were vitalized by the pneuma ("animal spirit") that circulated through the nerves. However, in the 16th century, with the rise of the Scientific Revolution, vitalism was slowly replaced by a mechanistic worldview which held that living things were simply the result of very complicated physical biochemical processes. Thanks to high school biology and modern medicine, most people today are indeed mechanists at heart because they have personally experienced the great benefits derived from pharmaceuticals. Most people find the relief they obtain from taking a few thousandths of a gram of an organic molecule in pill form to be quite convincing evidence for the mechanistic theory of life. However, even today, in a culture that is totally dependent upon science for its very existence, this mechanistic viewpoint has not been extended to the Mind, consciousness, or to human personality. Rather, most people today still cling to a very dualistic worldview when it comes to such matters, even scientific researchers working on cognition! After all, we all tend to deal with each other as if there really were a mysterious nonphysical personality and intelligence residing within our heads. Why is that? Why is it so hard for us to think of consciousness, the Mind, and human personality arising from the same biochemical reactions that keep us alive?
The Value of Grand Illusions - An Astronomical Example
In this posting I would like to offer an explanation for this strange finding from an IT perspective, using an IT analogy that I think helps to shed some light upon the subject, but before I do that, let us look at a similar astronomical analogy first.
It is generally thought that the modern Scientific Revolution of the 16th century began in 1543 when Nicolaus Copernicus published On the Revolutions of the Heavenly Spheres, in which he proposed his Copernican heliocentric theory that held that the Earth was not at the center of the Universe, but that the Sun held that position and that the Earth and the other planets simply revolved about the Sun. To demonstrate just how deeply this founding principle of the 16th-century Scientific Revolution has penetrated into our modern culture, let me begin with a famous story about the philosopher Ludwig Wittgenstein (1889 - 1951). The story goes like this. One day Wittgenstein ran into a friend, Elizabeth Anscombe, in a hallway and asked her this question, "Tell me, why do people always say that it was natural for men to assume that the sun went around the earth rather than the earth was rotating?" To which Elizabeth Anscombe responded, "Well, obviously, because it just looks as if the sun is going around the earth." To which the philosopher replied: "Well, what would it look like if it had looked as if the earth were rotating?" This story normally takes most people aback and really gets them thinking. The answer is of course that, if the Earth really did rotate upon its axis on a daily basis and also revolved about the Sun once per year, the sky would look exactly as it does today because that is indeed what is really going on. However, in our day-to-day life, everybody uses the other model, with the Sun, planets, and stars orbiting about the Earth on crystalline spheres. Why is that? After all, fully 80% of Americans know differently!
Indeed, it is very difficult to use the Copernican heliocentric model of the Solar System when looking at the night sky, even when you really know that it is truly what is going on. I run about two miles every morning before taking a shower, eating breakfast, and starting work as a member of the IT Middleware Operations group for a major corporation from my home office. But even though I know what is really going on, it is very difficult for me to look at the early morning sky using a heliocentric model of the Solar System without a lot of additional thought. Why is that? To begin with, let us look at the early morning sky on November 10, 2012, at 06:30 AM CST from my Chicago suburb of Roselle, IL at a latitude of 42° N and a longitude of 88° W. Figure 3 shows what I would see on my morning run when looking to the east. For me, the Sun is just about to rise in the east. I see a very dim Saturn, near the horizon, in the glare of the early morning twilight. Further above the horizon, I see a very bright Venus and just a little higher in the sky, I see a very thin crescent Moon. Below the horizon are Mercury and Mars, which of course I cannot see, but which I could see at sunset in the west later in the day. But let’s pretend that I can see them in the early morning for this demonstration. The first thing I note is that Mars, Mercury, the Sun, Saturn, Venus, and the Moon all seem to line up along a straight line in the sky (the red line in Figure 3). How strange! To add to this strangeness, as I continue on with my morning runs, I notice that all of these objects seem to slowly move eastward through the sky relative to the fixed stars that I see behind them in the sky, and they all seem to move day-by-day along this same strange line in the sky at different speeds relative to each other! Why is that?
Before proceeding to investigate these strange and mysterious motions of objects in the sky, let me take a quick side trip into some astrophysics. All of the astronomical figures shown down below were generated by a piece of software I pulled from the discard bin of an electronics superstore in 1995 for a whopping $10, with the funny name of “Red Shift”. Red Shift is one of my most prized possessions, and was written by a bunch of starving Russian astrophysicists in 1993 following the fall of the Soviet Union on December 26, 1991 – hence the astronomical pun of “Red Shift”. Red Shift allows you to look at the sky from any position on any planet or moon within the Solar System from 4713 BC to 9999 AD or from any place within the Solar System defined by a sphere that has a diameter of 198 astronomical units centered upon the Sun. An astronomical unit – AU is defined as the distance between the Earth and the Sun, or about 93,000,000 miles. I have Red Shift installed on my work laptop to help keep me awake during website outage conference calls in the middle of the night. Frequently, we spend many hours waiting for other people to respond to pages to join the conference call, or perhaps we wait for the DBAs to run some diagnostics on some Oracle databases, or let NetOps investigate some network issues. So there is a lot of dead time on outage conference calls because the real problem has nothing to do with Middleware Operations, but they still want you to stay on the call just in case something needs to be done by Middleware Operations later on. That can get pretty boring, and it can be very dangerous to doze off and suddenly wake to seeing a string of:
in one of your Unix sessions, so to keep myself awake, I just do things like take a quick side trip to Titan, a moon of Saturn, and watch a beautiful Saturn-rise near the horizon from a latitude of 42° N and a longitude of 88° W on Titan in the year 4288 AD.
Figure 2 – Saturn-rise on November 10, 4288 AD at 06:30 AM CST as seen from a latitude of 42° N and a longitude of 88° W on the moon Titan. (Right-click and open in a new window for a clearer viewing)
Red Shift is truly a tribute to the predictive value of the very positivistic effective theories of Newtonian mechanics and Newtonian gravity. Recall that positivism is an enhanced form of empiricism, in which we do not care about how things “really are”, but instead, we focus upon how things are observed to behave. An effective theory is just an extension of positivism and is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics is an effective theory that makes very good predictions for the behavior of objects moving less than 10% of the speed of light in weak gravitational fields and which are bigger than a very small grain of dust. These limits define the effective range over which Newtonian mechanics can be applied to solve problems. For very small things we must use quantum mechanics and for very fast things moving in strong gravitational fields, we must use relativity theory. All of the current theories of physics, such as Newtonian mechanics, Newtonian gravity, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and the quantum field theories of QED and QCD are just effective theories that are based upon models of reality, and all these models are approximations - all these models are fundamentally "wrong", but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply. So for the celestial mechanics calculations made by Red Shift, Newtonian mechanics and Newtonian gravity make very precise predictions for how the planets, moons, and asteroids of the Solar System move with time over thousands of years. 
However, Newtonian mechanics cannot explain how the transistors in your GPS unit work or explain why time moves slower on the surface of the Earth by 38.7 microseconds per day than it does at a height of 12,600 miles above the Earth where the GPS satellites are found in a weaker gravitational field. Along these lines, the most we can probably hope for when it comes to a theory of consciousness is an effective theory of consciousness, the Mind, and human personality that is only an approximation of reality.
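The roughly 38 microseconds per day can actually be sanity-checked with a short back-of-the-envelope calculation that combines the weak-field gravitational blueshift of general relativity with the time dilation of special relativity. The sketch below uses standard physical constants and an approximate GPS orbital radius; it is only an estimate, not a precision ephemeris calculation.

```python
import math

# Standard constants; the GPS orbital radius is approximate.
GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
C        = 2.99792458e8     # speed of light, m/s
R_EARTH  = 6.371e6          # mean Earth radius, m
R_GPS    = 2.6578e7         # GPS orbital radius (~12,550 miles altitude), m
SECONDS_PER_DAY = 86400

# Gravitational effect: a clock deeper in the gravitational well (on the
# surface) runs slower than a clock in the weaker field of the satellite.
gravity_us = GM_EARTH / C**2 * (1/R_EARTH - 1/R_GPS) * SECONDS_PER_DAY * 1e6

# Special-relativistic effect: the orbiting clock is moving fast, which
# makes it run slower, partially offsetting the gravitational effect.
v_sat = math.sqrt(GM_EARTH / R_GPS)              # circular orbital speed, ~3.9 km/s
velocity_us = v_sat**2 / (2 * C**2) * SECONDS_PER_DAY * 1e6

net_us = gravity_us - velocity_us                # surface clock slower, in us/day
print(f"gravity: +{gravity_us:.1f} us/day, velocity: -{velocity_us:.1f} us/day, "
      f"net: {net_us:.1f} us/day")
```

The net result comes out to about 38 microseconds per day, in good agreement with the figure quoted above, which is why GPS satellites must carry relativistically corrected clocks.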
Figure 3 – The early morning sky on November 10, 2012, at 06:30 AM CST as seen from my Chicago suburb of Roselle, IL at a latitude of 42° N and a longitude of 88° W. (Right-click and open in a new window for a clearer viewing)
Now let us look at the same scene from a distance of 6 astronomical units above the Sun in a northerly direction – see Figure 4. Recall that an astronomical unit is defined as the distance between the Earth and the Sun, so we are essentially looking down upon the Solar System from a distance that is 6 times the distance of the Earth from the Sun.
Figure 4 – The same scene from Figure 3 as seen from a distance of 6 astronomical units above the Sun in a northerly direction. (Right-click and open in a new window for a clearer viewing)
For most people, it is very difficult to reconcile Figure 4 with Figure 3 in their Minds. It is just so much easier to look at Figure 3 and imagine the Sun rising in the east as it always does, with the Sun, planets, and Moon orbiting about the Earth upon crystalline spheres that slowly rotate about it, and with the fixed stars on a very distant crystalline sphere that also rotates once per day about the Earth.
Many times on my morning runs, I will try to actually get a gut feeling for what I am seeing in the sky using a heliocentric model of the Solar System as depicted in Figure 4. Now let us try doing that together. First, rotate your computer screen so that you are looking to your east and have Figure 3 in view. Pretend that you are also looking at a sunrise to your east. Now make a fist with your right hand and stick your thumb out like you are trying to hitch a ride, as shown in Figure 5. Next point your thumb due north and elevate your thumb by an angle to the horizontal that is equal to your latitude. In my case, I elevate my thumb by an angle of 42° to the horizontal because my hometown of Roselle, IL is at a latitude of 42° N. Now your thumb is pointing parallel to the Earth’s axis and is also pointing in the direction of Polaris, the North Star. Your fingers are also curling counterclockwise in the same direction that the Earth spins.
Figure 5 – If you make a fist with your right hand and stick your thumb out and then point your thumb due north and elevate it relative to the horizontal with an angle that is equal to your latitude, your thumb will be pointing in the same direction as the Earth’s axis and your fingers will be curling in the direction of the Earth’s spin.
Since all of the planets, moons, and asteroids of the Solar System are orbiting about the Sun in the same plane called the ecliptic (the red line in Figure 3), we see Mars, Mercury, the Sun, Saturn, Venus and the Moon all in a straight line in the sky in Figure 3 because that straight line is what we see when we look at the ecliptic plane edgewise. It is like living somewhere on a big flat plate that contains all of the planets of the Solar System. No matter where you look, you see the plate edgewise as a line in the sky as you rotate your head by a full 360°. That also explains why the Sun, planets, and Moon all seem to move along this same straight line day-by-day in the sky. We are simply seeing them orbit along the plane of the ecliptic about the Sun. Since the Earth’s axis is nearly perpendicular to the ecliptic, the plane of the Solar System, and only tips relative to the ecliptic by an angle of 23.5°, nearly everything else in the Solar System, including the Sun, planets and asteroids, will also be rotating in the same counterclockwise direction as your fingers are curling. In fact, thanks to the conservation of angular momentum, your fingers are now actually curling in the same general rotational direction of the giant gas and dust-filled molecular cloud that collapsed into our Solar System about 4.6 billion years ago, and the funny line of planets in the sky defines the plane of the rotating protoplanetary disk of gas and dust that a local swirl in the molecular cloud collapsed into while in the process of forming our Solar System!
Next, take your right fist, and keeping your thumb always pointing in the same direction, make a large sweeping counterclockwise motion with your right arm around the rising Sun, like you are stirring a large pot of soup. That motion defines the yearly motion of the Earth about the Sun. When your thumb is tipping away from the Sun, it is winter in the Northern Hemisphere. When your thumb is tipping towards the Sun, it is summer in the Northern Hemisphere. Since spring follows winter in this simulated orbit of your fist about the Sun, when your fist is between yourself and the Sun it is spring, and when your fist is behind the Sun it is fall. Next, try to imagine that the entire planet that you are standing upon is also making this same circular motion as your fist about the Sun on a yearly basis. Then do the same thing for Venus. Imagine that Venus is orbiting about the rising Sun with a sweeping counterclockwise motion defined by the fingers of your right hand. At the same time, realize that since Venus is actually orbiting closer to the Sun than the Earth, it is like we are on a race track together running around the Sun, with Venus on the inside track running much faster than the Earth because the Sun’s gravity is much stronger for Venus than it is for the Earth, and thus, Venus must have a much higher orbital velocity to generate a sufficient centrifugal force to overcome the stronger gravitational pull from the Sun. So Venus is rapidly outrunning us around the track. Next look at Saturn, which although you see it between Venus and the Sun in the sky, you must realize is much farther away, and that you are merely looking at the distant Saturn between the much closer Sun and Venus. Saturn is also orbiting about the Sun in the same general direction that your fingers are curling too, but because Saturn is much further from the Sun, Saturn has a much lower orbital velocity than the Earth, so Saturn is really lagging behind us on a very distant outside track.
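The race track picture above can be made quantitative with Newton's formula for the speed of a circular orbit, v = sqrt(GM/r): the closer the track is to the Sun, the faster a planet must run. The short sketch below uses the standard value for the Sun's gravitational parameter and approximate mean orbital radii, treating the orbits as circles for simplicity.

```python
import math

GM_SUN = 1.32712440018e20   # the Sun's gravitational parameter, m^3/s^2
AU = 1.495978707e11         # one astronomical unit, in meters

def orbital_speed_km_s(radius_au):
    """Circular orbital speed about the Sun at the given radius: v = sqrt(GM/r)."""
    return math.sqrt(GM_SUN / (radius_au * AU)) / 1000.0

# Approximate mean orbital radii, in astronomical units.
for name, r in [("Venus", 0.723), ("Earth", 1.0), ("Saturn", 9.58)]:
    print(f"{name:6s} {orbital_speed_km_s(r):5.1f} km/s")
# Venus laps the Earth on the inside track (~35 km/s vs ~30 km/s),
# while distant Saturn crawls along the outside track at under 10 km/s.
```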
This is all made very apparent in Figure 4. In Figure 4 draw an imaginary line that passes through the Earth and the Sun. This line defines the horizon line on Earth as seen at sunrise on November 10, 2012, at 6:30 CST at a latitude of 42° N and a longitude of 88° W, and is as shown in Figure 3. Now notice that immediately at this horizon line, we see the rising Sun and that slightly above the horizon line we see a very distant Saturn. A little higher we see the much closer Venus. Just below the horizon line, we see Mercury, which happens to be the closest planet to the Earth at this particular time. And a little further below the horizon line we see a more distant Mars. Notice that the Earth, Sun and Venus seem to form a right triangle in Figure 4. This means that Venus is very near to its maximum elongation, the maximum apparent distance between the Sun and Venus as seen from the Earth. That is why the angular distance between Venus and the Sun is so great in Figure 3, and why Venus appears so bright in the sky, much brighter than any other star. Indeed, this is why Venus is frequently taken to be a UFO by the uninformed and has been the subject of many hot pursuits by the authorities and civil aircraft.
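That right-triangle geometry also lets us compute the maximum elongation directly: for an inner planet on an idealized circular orbit, the Sun-planet-Earth angle is 90 degrees at maximum elongation, so the elongation seen from Earth is simply the arcsine of the planet's orbital radius in astronomical units. Real orbits are ellipses, so this is only an approximation.

```python
import math

def max_elongation_deg(radius_au):
    """Maximum elongation of an inner planet, assuming circular orbits.

    At maximum elongation the line of sight from Earth is tangent to the
    planet's orbit, forming a right angle at the planet, so the elongation
    angle is arcsin(r_planet / r_earth) with r_earth = 1 AU.
    """
    return math.degrees(math.asin(radius_au / 1.0))

print(f"Venus:   {max_elongation_deg(0.723):.1f} degrees")   # about 46 degrees
print(f"Mercury: {max_elongation_deg(0.387):.1f} degrees")   # about 23 degrees
```

This is why Venus can climb so far from the Sun's glare into the dark morning or evening sky, while Mercury always hugs the twilight.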
Looking back to Figure 3, we finally see a very thin crescent Moon that is fairly close to the rising Sun at the horizon. The scale of Figure 4 is such that the Earth and Moon blend into a single dot, so let us zoom in on the Earth-Moon system in Figure 6. Notice how large the Earth’s Moon is in comparison to the Earth itself. Indeed, all of the other moons in our Solar System are very minuscule in comparison to their home planets. Our Moon is so large because the Earth–Moon system formed as a result of a gigantic impact of a Mars-sized proto-planet hitting the newly formed proto-Earth with a glancing blow about 4.5 billion years ago, blasting a huge amount of material into orbit around it, which later accreted to form our Moon. This glancing blow left the Earth-Moon system with a great deal of angular momentum, and the Earth and Moon now orbit about a common center of gravity between each other, forming the only binary planetary system in our Solar System. This abundant angular momentum keeps the axis of the Earth’s rotation very stable so that it always points more or less straight up and down relative to the ecliptic plane defined by Earth’s orbit. Thus, the axis of the Earth does not wobble and wander around a great deal as it does for the other terrestrial planets like Mars and Venus, as they are tugged upon by Jupiter. Indeed, if the Earth did not have such a large Moon, there would be extended periods of time in excess of several millions of years, when the Earth’s axis might be pointed directly at the Sun, causing the Northern Hemisphere to have daylight all summer long and searing temperatures far too high to sustain complex life, while at the same time, the Southern Hemisphere would be in total darkness for their entire winter and far too cold for complex life. In Figure 6 we see why the Moon appears as a very slim crescent that is very near to the rising Sun. It is because the Moon is nearly in front of the Sun in its monthly orbit about the Earth. 
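The binary-planet claim above is easy to check with a one-line lever-arm calculation: the center of gravity of two bodies lies along the line between them, at a distance from the first body proportional to the second body's share of the total mass. Using standard values for the masses and the mean Earth-Moon distance:

```python
# Locating the Earth-Moon barycenter with the lever-arm formula:
# d_barycenter = d * M_moon / (M_earth + M_moon)
M_EARTH = 5.972e24      # mass of the Earth, kg
M_MOON  = 7.346e22      # mass of the Moon, kg
DISTANCE_KM = 384_400   # mean Earth-Moon distance, km

barycenter_km = DISTANCE_KM * M_MOON / (M_EARTH + M_MOON)
print(f"The barycenter lies {barycenter_km:.0f} km from the Earth's center")
# ~4,700 km, which is less than the Earth's 6,371 km radius, so the common
# center of gravity that the Earth and Moon orbit is actually inside the Earth.
```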
Like everything else in the Solar System, the Moon orbits the Earth in a counterclockwise manner as seen from the north. In Figure 6, you can make out Africa and Europe on Earth, so just take your right fist with its thumb sticking up and align it with the axis of the Earth, pointing up to the north. The Earth will now be spinning in the direction that your fingers curl and the Moon will be orbiting the Earth in the same direction that your fingers are curling as well.
Figure 6 - The Moon appears as a very slim crescent that is very near to the rising Sun in Figure 3. That is because the Moon is nearly in front of the Sun in its monthly orbit about the Earth. Like everything else in the Solar System, the Moon is orbiting the Earth in a counterclockwise manner as seen from the north. (Right-click and open in a new window for a clearer viewing)
Now if you happen to live in the Southern Hemisphere, you must realize that you are looking at the Solar System “upside down”! In fact, we in the Northern Hemisphere always wonder why you in the Southern Hemisphere do not get dizzy and disoriented from standing on your heads all day long. Because you are looking at the Solar System “upside down”, you must, of course, reverse everything that I have told you. Strangely, you will still see the Sun rise in the east, but everything else needs to be reversed. You must use your left fist with your left thumb sticking up instead of your right fist with your right thumb sticking up. Now point your left thumb due south and then elevate your thumb to make an angle with the horizontal that is equal to your southern latitude. And naturally, for you, everything in the Solar System will be spinning clockwise instead of counterclockwise because you, unfortunately, are looking at everything “upside down”!
Now I have most probably confused and disoriented all of my readers in the Northern Hemisphere! To overcome this disorientation, you must help my readers in the Southern Hemisphere realize the errors of their ways. To do that, simply hold out your left fist with your left thumb up and your right fist with your right thumb up too. Your right thumb represents the North Pole, while your left thumb represents the South Pole. Now simply point your left fist and thumb down and move your left fist over to your right fist so that the bottoms of both fists are now touching, with your right thumb still pointing up. Notice that the fingers of both your right and left fists are now both curling in a counterclockwise manner as they should, indicating the true counterclockwise rotation of the Earth. Now simply flip the whole affair over so that your left thumb (S) is now pointing up and your right thumb (N) is pointing down. Now you will find that the fingers of both your right and left fists are both curling in a clockwise manner. At first, you will most likely find yourself in a very awkward and uncomfortable position, indicating the true error of looking at things this way, but strangely, if you simply twist your right and left fists by 180°, you will find that both your left and right fists are now in a very comfortable position in front of your body, with your left thumb (S) pointing up and the fingers of your left fist curling towards you and the fingers of your right fist curling away from you, and both curling in a clockwise manner. How strange! But this is how people in the Southern Hemisphere look at things “upside down”. Now of course, I am just having a little fun here. Naturally, there is no “up” or “down” in the Universe. It is all just a matter of perspective and old human conventions. Try to remember that in your daily life, and you will go far.
So now you can see that it really is possible, with great difficulty, to look at the sky and truly see the heliocentric motions of the Earth and the other planets in action. But in real life, nobody does that! We are all much more comfortable with the illusion that just the opposite is true. We all see the Sun, planets, and Moon orbit about a fixed Earth. And this very comforting illusion has proven to be quite useful in our day-to-day lives as well. Using this very useful illusion, people were able to navigate the seas, make sundials that told them the time of day and allowed them to create calendars that told them when to plant their crops, harvest the fruits of their labors, and slaughter their livestock for winter holidays and festivals. And the use of this very practical illusion even extends to astronomers! Astronomers locate objects in the sky by their declination and right ascension. What they do is to simply pretend that the Earth is at rest at the center of the Universe. Then they take the lines of latitude on the Earth and project them onto the night sky and call them lines of declination. They do the same thing with the Earth’s lines of longitude by projecting them onto the sky and calling them lines of right ascension. The North Pole of this system is the point in space where the Earth’s north axis points, and which is currently very close to the position of Polaris, the North Star.
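Right ascension, by the way, is traditionally measured in hours rather than degrees: the celestial sphere appears to turn a full 360 degrees in about 24 hours, so one hour of right ascension equals 15 degrees. A small illustrative converter (the Polaris coordinates quoted in the comments are approximate):

```python
def ra_to_degrees(hours, minutes=0, seconds=0):
    """Convert right ascension from hours:minutes:seconds to degrees.

    The sky turns 360 degrees in 24 hours, so 1 hour of RA = 15 degrees.
    """
    return 15.0 * (hours + minutes / 60.0 + seconds / 3600.0)

# Polaris sits at roughly RA 2h 31m, declination +89.3 degrees, which is why
# it barely moves as the sky wheels around it.
print(ra_to_degrees(2, 31))   # about 37.8 degrees

# A handy companion fact from the geometry described earlier: the altitude
# of the celestial pole above your horizon equals your latitude, e.g. about
# 42 degrees as seen from Roselle, IL.
```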
Figure 7 – Even astronomers think of the Earth as being at rest at the center of the Universe with the stars, Sun, planets, and Moon orbiting about the Earth on crystalline spheres centered upon Polaris, the North Star. (Right-click and open in a new window for a clearer viewing)
Figure 8 – At the horizon looking south, we see the stars of the night sky rotate at the rate of one hour of right ascension per hour, or 15° per hour, in an east to west motion because the Earth is actually spinning from west to east. The Sun, planets and the stars all seem to rise in the east, follow a graceful arc across the sky, defined by the lines of declination, and then set in the west. (Right-click and open in a new window for a clearer viewing)
The important point to take away from all of this astronomy is that although we all may see a slightly different pattern of stars in the night sky, depending upon our location on the Earth, even to the extent that people in the Southern Hemisphere will actually see the commonly known constellations of the Northern Hemisphere “upside down”, we all still share the same common illusion that the Earth is at the center of the Universe and that all the stars, planets, Moon, and Sun seem to orbit about a fixed Earth, just as we all share the same common illusion that all the people about us have Minds composed of a non-material spirit. And this shared grand illusion even extends to our very own Minds and to our own sense of “self” itself. After all, nobody can really know what you perceive of as the color red, but we do all share the same common illusion that the color red really does exist. Similarly, even though we all may intellectually realize that it is the motion of the Earth orbiting about the Sun that makes the sky look the way it does, and not the reverse, and that the apparent spirit of the Mind is actually the end result of billions of neuronal switches constantly firing on and off; in our daily lives we all seem to fall back upon the ancient, but commonly held, grand illusion of consciousness as a spirit of the Mind. And both of these grand illusions have proven quite useful over time from an evolutionary point of view, providing those individuals possessing them with a very powerful survival advantage over those individuals who did not, and consequently, were strongly favored by natural selection.
The Ghost in the Machine – Another Grand Illusion
Now let us look at another very useful grand illusion – consciousness. The term “the ghost in the machine” was first coined by philosopher Gilbert Ryle (1900 – 1976), who shared many of Wittgenstein's approaches to philosophical problems. Ryle used the term “the ghost in the machine” to rebel against Descartes’ worldview of dualism. In Ryle’s day the term actually referred to the immaterial spirit of the Mind found in Descartes’ dualism, but in the modern world of computing, I think the term has taken on an entirely new meaning. It seems as if we all interact with our computers as if there really were a “ghost in the machine”, and that goes for most IT professionals as well! In the above astronomical example, we saw the very practical and nearly universal grand illusion that we all share in regards to the night sky and to our apparent place in the Universe, even though at an intellectual level we may certainly know better. What I would like to do next is to formulate a similar IT analogy to the above astronomical analogy using the multi-layered abstract concepts found in GUI (Graphical User Interface) operating systems, such as Windows and Mac. I am certainly not the first to do this, but with that in mind, let me proceed anyway.
These GUI operating systems abstract the behavior of billions of transistors, all firing billions of times per second within the CPUs of our computers, to create a grand illusion that we as human beings find very comforting and similar to the observed behaviors of a conscious being. Again, I am taking a very positivistic approach to consciousness here by merely trying to formulate an effective theory of consciousness based upon observations. Once we understand how this GUI grand illusion arises, we will look to the human brain for similarities. Essentially, we will be exploring philosopher Daniel Dennett’s and psychologist Susan Blackmore’s contention that consciousness, the Mind, and human personality are, similarly, very useful grand illusions that we all use in our daily dealings with each other and even with ourselves – please see The Grand Illusion:
All computer users are quite experienced with the frustration of software hanging and running very slowly. Most software users get very impatient after about 10 seconds and will start doing things to remedy the situation, sometimes causing more harm than good. But we all do this. I do it, you do it, and all IT professionals do it too. We all start tinkering with connections to the machine, or killing applications that are “not responding” with Windows Task Manager, or rebooting the whole machine if necessary. Since most computer users only have a very superficial understanding of what is really going on, and that goes for most IT professionals as well if you dig down deep enough into the technology, as human beings we all seem to fall back upon our ancestral roots by looking for the “ghost in the machine”. We adopt procedures that seem to make the computer work better, even though we do not fully understand why, and many of these procedures may only seem to make the computer work better in a manner reminiscent of the placebo effect. We simply try to appease the “ghost in the machine”, in a nearly superstitious manner, to get it to behave. For example, I put my work laptop into hibernation mode when I am not using it. In hibernation mode, all of my laptop’s current active memory gets written to a huge encrypted disk file, and then my laptop shuts itself off to save power. So essentially, my laptop enters a state of suspended animation and is essentially “dead” when I am not using it. To revive my dead laptop, I simply press the power button and login with my ID and password to have my laptop read the encrypted disk file and quickly load it back into memory. At that point, it is like my laptop is back up and running all the applications that I had running prior to the hibernation. You see, it is much faster for me to revive a “dead” hibernating laptop than for me to boot up my laptop and launch all the applications that I need to do my work. 
Speed is essential when responding to a page in the middle of the night to fix a problem. However, the downside to using hibernation is that my laptop slowly accumulates zombie processes, so I routinely spend about 10 minutes each morning rebooting my laptop and starting up all of the applications that I need before starting my normal workday. Now, why do I do that once per day? Would it be better for me to reboot my laptop twice per day or maybe every other day? I don’t know the answer to that question because I am simply trying to appease the “ghost in the machine” with something that seems to work for me.
Similarly, I find that most people, including most IT professionals, tend to deal with their computers as if there really were a “ghost in the machine”. For example, in my IT job in Middleware Operations, I work with perhaps 20 people each day and several hundred computers as well. These people and computers are scattered all over the world and are as far away as India on the other side of the Earth. And I find that all of these people and computers have their own individual personalities defined by the operating systems and software that they are currently running. For example, I interact with machines running the Windows, HP-UX, AIX, Linux, and Solaris operating systems and people running the American-6.4, Indian-7.1, and UK-8.2 operating systems as well. Like most IT professionals, I find it far easier to work with the machines than with the people, but unfortunately, working with people is just part of the job.
Because all of these computers and people tend to have their own personalities, I learn to deal with each on an individual basis, adopting what Daniel Dennett calls an “Intentional Stance” towards them – see Can Your Website Think for more details on the Intentional Stance. For example, a modern high-volume corporate website is composed of hundreds or thousands of servers – load balancers, firewalls, proxy servers, web servers, J2EE Application Servers, CICS Gateway servers to mainframes, database servers, and email servers, which normally are all working together in harmony to process thousands of transactions per second. To give a sense of scale, it is estimated that Google has 450,000 servers spread across 25 datacenters around the world. But every so often, these complex architectures of servers can go very nonlinear, and all sorts of bizarre behaviors emerge. This usually means that the website grinds to a halt. It is very scary to be in the Operations department of IT during one of these outages when everything seems to begin behaving abnormally. We sit there in dread, looking at consoles of blinking red lights, indicating maxed out thread pools and stalled connection pools, wondering what the heck is going on and how it all began. It is very much like being a guard on the walls of the Bastille, looking down upon an enraged mob of peasants, angrily brandishing scythes and pitchforks. Sometimes these outages can be attributed to some minor mutant software bug that was not detected during the very rigorous testing and change management procedures that all modern IT departments conduct, but at least 50% of the time no root cause is readily apparent. These very destructive nonlinear behaviors just seem to emerge out of the blue, due to the very complicated and highly interdependent nature of the underlying software components.
Our first inclination, like all Powers That Be, is to simply round up the usual suspects, like WebSphere and Oracle, and start killing their processes to quell the uprising, and frequently that does work, but sometimes it does not. Fortunately, many times these spontaneous uprisings will cease on their own, and the enraged crowds will disperse on their own, allowing us to once again return to our normal guard duties, waiting for the next uprising to come along.
The Hardware of Consciousness
Now let us see how these Mind-like behaviors arise in computers by looking a little bit under the hood. To build a computer, all you need is a large network of interconnected switches that have the ability to switch each other on and off in a coordinated manner. Switches can be in one of two states, either open (off) or closed (on), and we can use those two states to store the binary numbers of “0” or “1”. By using a number of switches teamed together in open (off) or closed (on) states, we can store even larger binary numbers, like “01100100” = 100. We can also group the switches into logic gates that perform logical operations. For example, in Figure 9 below we see an AND gate composed of two switches A and B. Both switch A and B must be closed in order for the light bulb to turn on. If either switch A or B is open, the light bulb will not light up.
Figure 9 – An AND gate can be simply formed from two switches. Both switches A and B must be closed, in a state of “1”, in order to turn the light bulb on.
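The series circuit of Figure 9 can be sketched in a few lines of Python, treating each switch as a boolean value, True for closed and False for open:

```python
def and_gate(switch_a, switch_b):
    """Series circuit: current reaches the bulb only if both switches are closed."""
    return switch_a and switch_b

# Walk through the full truth table of the two-switch circuit.
for a in (False, True):
    for b in (False, True):
        bulb = "on" if and_gate(a, b) else "off"
        print(f"A={int(a)} B={int(b)} -> bulb {bulb}")
# Only the A=1 B=1 row lights the bulb, just as in Figure 9.
```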
Additional logic gates can be formed from other combinations of switches as shown in Figure 10 below. It takes about 2 - 8 switches to create each of the various logic gates shown below.
Figure 10 – Additional logic gates can be formed from other combinations of 2 – 8 switches.
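In the same spirit, here is a toy sketch of my own (an illustration, not the actual switch circuits of Figure 10) showing how further gates can be composed from the simpler ones. An OR gate is just two switches in parallel, a NOT gate inverts its input, and more elaborate gates like XOR can be built entirely from NAND gates, which is why the NAND gate is called a universal gate:

```python
def or_gate(a, b):
    return a or b          # parallel switches: either closed path lights the bulb

def not_gate(a):
    return not a           # an inverting switch

def nand_gate(a, b):
    return not_gate(a and b)   # AND followed by NOT

def xor_gate(a, b):
    # The classic construction of XOR from four NAND gates.
    n = nand_gate(a, b)
    return nand_gate(nand_gate(a, n), nand_gate(b, n))

# XOR is true only when exactly one input is on.
print([int(xor_gate(a, b)) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```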
Once you can store binary numbers with switches and perform logical operations upon them with logic gates, you can build a computer that performs calculations on numbers. To process text, like names and addresses, we simply associate each letter of the alphabet with a binary number, like in the ASCII code set where A = “01000001” and Z = “01011010”, and then process the associated binary numbers.
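This ASCII mapping is easy to verify in Python, which exposes each character's underlying code number through the built-in ord() function:

```python
# Print the 8-bit binary ASCII codes for the two letters mentioned above.
for ch in ("A", "Z"):
    print(ch, "=", format(ord(ch), "08b"))
# A = 01000001
# Z = 01011010
```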
In May of 1941, Konrad Zuse built the world’s first real computer, the Z3, which consisted of 2400 electromechanical telephone relay switches. These electrical relays were originally meant for switching telephone conversations. Closing one relay allowed current to flow to another relay’s coil, causing that relay to close as well.
Figure 11 – The Z3 was built using 2400 electrical relays, originally meant for switching telephone conversations.
However, relay switches took about 10⁻¹ seconds to close, so electrical relays were very big, very slow, used lots of electricity and generated lots of waste heat. All of these factors severely limited the speed of the Z3 and the amount of memory it could contain.
Figure 12 – Electrical relays were very large, very slow and used a great deal of electricity which generated a great deal of waste heat.
In the 1950s electrical relays were replaced with vacuum tubes. Vacuum tubes have a grid between a hot negative cathode filament and a cold positive anode plate (see Figure 14). By varying the voltage on the grid you can control the amount of current between the cathode and the anode. So a vacuum tube acts very much like a faucet, in fact, the English call them “valves”. By rotating the faucet handle back and forth a little, essentially using a weak varying input voltage to the grid, you can make the faucet flow vary by large amounts, from a bare trickle to full blast, and thereby amplify the input signal on the grid. That is how a weak analog radio signal can be amplified by a number of vacuum tube stages into a current large enough to drive a speaker. Just as you can turn a faucet on full blast or completely off, you can also do the same thing with vacuum tubes, so that they behave very much like telephone relays, and can be in a conducting or nonconducting state to store a binary “1” or “0”. However, vacuum tubes were also very large, used lots of electricity and generated lots of waste heat too, but they were 100,000 times faster than relays and could close in about 10⁻⁶ seconds.
Figure 13 – Electrical relays were replaced with vacuum tubes, which were also very large, used lots of electricity and generated lots of waste heat too, but were 100,000 times faster than relays.
Figure 14 – Vacuum tubes contain a hot negative cathode that glows red and boils off electrons. The electrons are attracted to the cold positive anode plate, but there is a grid electrode between the cathode and anode plate. By changing the voltage on the grid, the vacuum tube can control the flow of electrons like the handle of a faucet. The grid voltage can be adjusted so that the electron flow is full blast, a trickle, or completely shut off.
In the 1960s the vacuum tubes were replaced by discrete transistors and in the 1970s the discrete transistors were replaced by thousands of transistors on a single silicon chip. Over time, the number of transistors that could be put onto a silicon chip increased dramatically, and today, the silicon chips in your personal computer hold many billions of transistors that can be switched on and off in about 10⁻¹⁰ seconds. Now let us look at how these transistors work.
There are many different kinds of transistors, but I will focus on the FET (Field Effect Transistor) that is used in most silicon chips today. A FET transistor consists of a source, gate and a drain. The whole affair is laid down on a very pure silicon crystal, using a multi-step photolithographic process to engrave the circuit elements upon the crystal. Silicon lies directly below carbon in the periodic table because both silicon and carbon have 4 electrons in their outer shell, which can hold a total of 8 electrons. This makes silicon a semiconductor. Pure silicon is not very electrically conductive, but by doping the silicon crystal with very small amounts of impurities, it is possible to create silicon that has a surplus of free electrons. This is called N-type silicon. Similarly, it is possible to dope silicon with small amounts of impurities that create a deficit of electrons, or positively charged “holes”, producing P-type silicon. To make a FET transistor you simply use a photolithographic process to create two N-type silicon regions upon a substrate of P-type silicon. Between the N-type regions lies a gate which controls the flow of electrons between the source and drain regions, like the grid in a vacuum tube. When a positive voltage is applied to the gate, it attracts the remaining free electrons in the P-type substrate and repels its positive holes. This creates a conductive channel between the source and drain which allows a current of electrons to flow.
Figure 15 – A FET transistor consists of a source, gate and drain. When a positive voltage is applied to the gate, a current of electrons can flow from the source to the drain and the FET acts like a closed switch that is “on”. When there is no positive voltage on the gate, no current can flow from the source to the drain, and the FET acts like an open switch that is “off”.
Figure 16 – When there is no positive voltage on the gate, the FET transistor is switched off, and when there is a positive voltage on the gate the FET transistor is switched on. These two states can be used to store a binary “0” or “1”, or can be used as a switch in a logic gate, just like an electrical relay or a vacuum tube.
Figure 17 – Above is a plumbing analogy that uses a faucet or valve handle to simulate the actions of the source, gate and drain of an FET transistor.
The CPU chip in your computer consists largely of transistors in logic gates, but your computer also has a number of memory chips that use transistors that are “on” or “off” and can be used to store binary numbers or text that is encoded using binary numbers. The next thing we need is a way to coordinate the billions of transistor switches in your computer. That is accomplished with a system clock. My current work laptop has a clock speed of 2.5 GHz which means it ticks 2.5 billion times each second. Each time the system clock on my computer ticks, it allows all of the billions of transistor switches on my laptop to switch on, off, or stay the same in a coordinated fashion. So while your computer is running, it is actually turning on and off billions of transistors billions of times each second – and all for a few hundred dollars!
All computers have a CPU chip that can execute a fundamental set of primitive operations that are called its instruction set. The computer’s instruction set is formed by stringing together a large number of logic gates composed of transistor switches. For example, all computers have a dozen or so registers that are like little storage bins for temporarily storing data that is being operated upon. A typical primitive operation might be taking the binary number stored in one register, adding it to the binary number in another register, and putting the final result into a third register. Since computers can only perform operations within their instruction set, computer programs written in high-level languages like C, C++, Fortran, Cobol or Visual Basic that can be read by a human programmer, must first be compiled, or translated, into a file that consists of the “1s” and “0s” that define the operations to be performed by the computer in terms of its instruction set. This compilation, or translation process, is accomplished by feeding another compiled program, called a compiler, with the source code of the program to be translated. The output of the compiler program is called the compiled version of the program and is an executable file on disk that can be directly loaded into the memory of a computer and run. Computers also have memory chips that can store these compiled programs and the data that the compiled programs process. For example, when you run a compiled program, by double-clicking on its icon on your desktop, it is read from disk into the memory of your computer, and it then begins executing the primitive operations of the computer’s instruction set as defined by the compiled program.
Below is the source code for a simple program that computes the average of several numbers that are entered via the command line of a computer. Please note that modern applications now consist of many thousands to many millions of lines of code. The simple example below is just for the benefit of our non-IT readers to give them a sense of what is being discussed when I describe the compilation of source code into executable files that can be loaded into the memory of a computer and run.
Figure 18 – Source code for a C program that calculates an average of several numbers entered at the keyboard.
Finally, we need a set of programs to run the computer itself and interact with the computer end-user. This set of programs is called an operating system. The operating system allows users to load other executable program files into the memory of the computer and run them. It also lets users do things like install software, copy files from one place to another on disk, add additional hardware components, and configure the computer to operate the way they like it. When you boot up your computer, you are simply loading the operating system programs of the computer into the computer's memory and starting them running. The operating system allows the end-user to also load other application programs into memory and start them running too. In all cases, these programs run under the control of what is known as a process, and all of these processes have a distinct PID or process ID number. Most modern operating systems intended for the general public are now based upon a GUI (Graphical User Interface), like Windows or the Mac. These GUI operating systems present the end-user with the illusion of a desktop. By double-clicking on icons on their desktop, end-users can have their computer load and start up application programs like MS Word. The GUI illusion also allows the end-user to do things like copying files by simply dragging and dropping them.
Figure 19 – A GUI operating system provides the end-user with the illusion of a desktop that allows the end-user to interact with the billions of transistor switches within the computer that are firing billions of times per second. (Right-click and open in a new window for a clearer viewing)
Figure 20 – If you start up the Windows Task Manager program by double-clicking on its icon, you can see all of the processes that are currently running to create the illusion of a desktop. Above we see the Windows Task Manager program executable file called taskmgr.exe is running under process ID PID=5564 and is using 2,100 KB of computer memory – or about 2.1 MB of memory. Computers now generally have several GB of memory.
So in reality, the illusion of a GUI desktop that the end-user senses, is really the end result of hundreds of processes all running at the same time, and each process represents a program residing in the computer’s memory and which is opening and closing billions of transistor switches billions of times each second.
The Hardware of the Mind
Now let us explore the equivalent architecture within the human brain. The human brain is also composed of a huge number of coordinated switches called neurons. Like your computer that contains many billions of transistor switches, your brain also contains about 100 billion switches called neurons. Each of the billions of transistor switches in your computer is connected to a small number of other switches that it can influence into switching on or off, while each of the 100 billion neuron switches in your brain can be connected to upwards of 10,000 other neuron switches and can also influence them into turning on or off.
All neurons have a body called the soma that is like all the other cells in the body, with a nucleus and all of the other organelles that are needed to keep the neuron alive and functioning. Like most electrical devices, neurons have an input side and an output side. On the input side of the neuron, one finds a large number of branching dendrites. On the output side of the neuron, we find one single and very long axon. The input dendrites of a neuron are very short and connect to a large number of output axons from other neurons. Although axons are only about a micron in diameter, they can be very long, with a length of up to 3 feet. That’s like a one-inch garden hose that is about 14 miles long! The single output axon has branching synapses along its length and it terminates with a large number of synapses. The output axon of a neuron can be connected to the input dendrites of perhaps 10,000 other neurons, forming a very complex network of connections.
Figure 21 – A neuron consists of a cell body or soma that has many input dendrites on one side and a very long output axon on the other side. Even though axons are only about 1 micron in diameter, they can be 3 feet long, like a one-inch garden hose that is about 14 miles long! The axon of one neuron can be connected to up to 10,000 dendrites of other neurons.
Neurons are constantly receiving inputs from the axons of many other neurons via their input dendrites. These time-varying inputs can excite the neuron or inhibit the neuron, and are all constantly being added together, or integrated, over time. When a sufficient number of exciting inputs are received, the neuron fires or switches “on”. When it does so, it creates an electrical action potential that travels down the length of its axon toward the synapses it makes with the input dendrites of other neurons. When the action potential reaches such a synapse, it causes the release of a number of organic molecules known as neurotransmitters, such as glutamate, acetylcholine, dopamine and serotonin. These neurotransmitters are created in the soma of the neuron and are transported down the length of the axon in small vesicles. The synaptic gaps between neurons are very small, allowing the released neurotransmitters from the axon to diffuse across the synaptic gap and plug into receptors on the receiving dendrite of another neuron. This causes the receiving neuron to either decrease or increase its membrane potential. If the membrane potential of the receiving neuron increases, the receiving neuron is being excited, and if the membrane potential of the receiving neuron decreases, the receiving neuron is being inhibited. Idle neurons have a membrane potential of about -70 mV, meaning that the voltage of the fluid on the inside of the neuron is 70 mV lower than the voltage of the fluid on the outside. It is as if a little 70 mV battery were stuck in the membrane of the neuron, with its negative terminal on the inside of the neuron and its positive terminal on the outside. This is accomplished by keeping the concentrations of charged ions, like Na+, K+ and Cl-, different between the fluids inside and outside of the neuron membrane.
There are two ways to control the density of these ions within the neuron. The first is called passive transport. There are little protein molecules stuck in the cell membrane of the neuron that allow certain ions to pass freely through, like a hole in a wall. When these protein holes open in the neuron’s membrane, the selected ion, perhaps K+, will start to flow into and out of the neuron. However, if there are more K+ ions on the outside of the membrane than within the neuron, the net flow of K+ ions will be into the neuron, thanks to the second law of thermodynamics, making the fluid within the neuron more positive. Passive transport requires very little energy. All you need is enough energy to change the shape of the embedded protein molecules in the neuron’s cell membrane, allowing the charged ions to flow freely toward regions of lower concentration, as the second law of thermodynamics demands.
The other way to get ions into or out of neurons is by the active transport of the ions with molecular pumps. With active transport, the neuron uses some energy to actively pump the charged ions against their concentration gradients, paying the energy cost that the second law of thermodynamics demands. For example, neurons have a pump that can actively pump three Na+ ions out and take in two K+ ions at the same time, for a net outflow of one positively charged Na+ ion. By actively pumping out positively charged Na+ ions, the fluid inside of a neuron ends up having a net -70 mV potential because there are more positively charged ions on the outside of the neuron than within the neuron. When the neurotransmitters from other firing neurons come into contact with their corresponding receptors on the dendrites of the target neuron, they cause those receptors to open their passive Na+ channels. This allows Na+ ions to flow into the neuron and temporarily change the membrane voltage by making the fluid inside the neuron more positive. If this voltage change is large enough, it will cause an action potential to be fired down the axon of the neuron. Figure 22 shows the basic ion flow that transmits this action potential down the length of the axon. The passing action potential pulse lasts for about 3 milliseconds and travels about 100 meters/sec, or about 200 miles/hour, down the neuron’s axon.
Figure 22 – When a neuron fires, an action potential is created by various ions moving across the membranes surrounding the axon. The pulse is about 3 milliseconds in duration and travels about 100 meters/sec, or about 200 miles/hour down the axon.
Figure 23 – At the synapse between the axon of one neuron and a dendrite of another neuron, the traveling action potential of the sending neuron’s axon releases neurotransmitters that cross the synaptic gap and which can excite or inhibit the firing of the receiving neuron.
Here is the general sequence of events:
1. The first step of the generation of an action potential is that the Na+ channels open, allowing a flood of Na+ ions into the neuron. This causes the membrane potential of the neuron to become positive, instead of the normal negative -70 mV voltage.
2. At some positive membrane potential of the neuron, the K+ channels open, allowing positive K+ ions to flow out of the neuron.
3. The Na+ channels then close, and this stops the inflow of positively charged Na+ ions. But since the K+ channels are still open, it allows the outflow of positively charged K+ ions, so that the membrane potential plunges in the negative direction again.
4. When the neuron membrane potential begins to reach its normal resting state of -70 mV, the K+ channels close.
5. Then the Na+/K+ pump of the neuron kicks in and starts to transport Na+ ions out of the neuron, and K+ ions back into the cell, until it reaches its normal -70 mV potential, and is ready for the next action potential pulse to pass by.
The action potential travels down the length of the axon as a voltage pulse. It does this by using the steps outlined above. As a section of the axon undergoes the above process, it increases the membrane potential of the neighboring section and causes it to rise as well. This is like jerking a tightrope and watching a pulse travel down its length. The voltage pulse travels down the length of the axon until it reaches its synapses with the dendrites of other neurons along the way or finally terminates in synapses at the very end of the axon. An important thing to keep in mind about the action potential is that it is one way, and all or nothing. The action potential starts at the beginning of the axon and then goes down its length; it cannot go back the other way. Also, when a neuron fires the action potential pulse has the same amplitude every time, regardless of the amount of excitation received from its dendritic inputs. Since the amplitude of the action potential of a neuron is always the same, the important thing about neurons is their firing rate. A weak stimulus to the neuron’s input dendrites will cause a low rate of firing, while a stronger stimulus will cause a higher rate of firing of the neuron. Neurons can actually fire several hundred times per second when sufficiently stimulated by other neurons.
When the traveling action potential pulse along a neuron’s axon finally reaches a synapse, it causes the Ca++ channels of the axon to open. Positive Ca++ ions then rush in and cause neurotransmitters that are stored in vesicles to be released into the synapse and diffuse across it to the dendrite of the receiving neuron. Some of the empty neurotransmitter vesicles eventually pick up, or reuptake, some of the neurotransmitters that have been released into the synaptic gap, so they can be reused when the next action potential arrives, while other empty vesicles return to the neuron soma to be refilled with neurotransmitter molecules.
In Figure 24 below we see a synapse between the output axon of a sending neuron and the input dendrite of a receiving neuron in comparison to the source and drain of a FET transistor.
Figure 24 – The synapse between the output axon of one neuron and the dendrite of another neuron behaves very much like the source and drain of an FET transistor.
Now it might seem like your computer should be a lot smarter than you are on the face of it, and many people will even secretly admit to that fact. After all, the CPU chip in your computer has several billion transistor switches and if you have 8 GB of memory, that comes to another 64 billion transistors in its memory chips, so your computer is getting pretty close to the 100 billion neuron switches in your brain. But the transistors in your computer can switch on and off in about 10⁻¹⁰ seconds, while the neurons in your brain can only fire on and off in about 10⁻² seconds. The signals in your computer also travel very close to the speed of light, 186,000 miles/second, while the action potentials of axons only travel at a pokey 200 miles/hour. And the chips in your computer are very small, so there is not much distance to cover at nearly the speed of light, while your poor brain is thousands of times larger. So what gives? Why aren’t we working for the computers, rather than the other way around? The answer lies in massively parallel processing. While the transistor switches in your computer are only connected to a few of the other transistor switches in your computer, each neuron in your brain has several thousand input connections and perhaps 10,000 output connections to other neurons in your brain, so when one neuron fires, it can affect 10,000 other neurons. When those 10,000 neurons fire, they can affect 100,000,000 neurons, and when those neurons fire, they can affect 1,000,000,000,000 neurons, which is more than the 100 billion neurons in your brain! So when a single neuron fires within your brain, it can theoretically affect every other neuron in your brain within three generations of neuron firings, in perhaps as little as 300 milliseconds. That is why the human brain still has an edge on computers, at least for another 15 years or so.
The Grand Illusion of Consciousness
So now we see that the circuitry within our brains works very much like the circuitry in our computers. But is consciousness really a grand illusion, like a grand GUI operating system interacting with the 100 billion neuron switches in our brains and their quadrillion contact points at the synapses?
Take a close look at Figure 25 below. At the intersections of the white lines, our Minds see grey spots even though there are no grey spots there in reality. Even though our Minds know that this is only an illusion, created by the circuitry within our brains, our Minds simply cannot make them go away no matter how hard we try because our Minds are also an illusion created by the circuitry within our brains!
Figure 25 – At the intersections of the white lines above, our Minds see grey spots even though there are no grey spots there in reality. Even though our Minds know that this is only an illusion, created by the circuitry within our brains, our Minds simply cannot make them go away no matter how hard we try because our Minds are also an illusion that is created by the circuitry within our brains!
But if philosophers, theologians and scientists have been struggling with this problem for ages, how can we be sure? I would like to propose that illnesses such as major depression, schizophrenia and Alzheimer’s disease present an opportunity to gain some understanding of this grand illusion of consciousness. One indication that these diseases might lead to exposing our grand illusion of consciousness is that we all have a general uneasiness about them. For some reason, the idea that Uncle Joe has type-2 diabetes and is on insulin evokes one emotional response within us, while the idea that Uncle Joe is in a psych ward suffering from major depression or schizophrenia, or in a nursing home with Alzheimer’s disease, evokes quite a different emotional response. We are all perfectly comfortable with the idea of type-2 diabetes being the result of the cells within Uncle Joe’s body no longer responding properly to normal levels of insulin, but we experience a very uncomfortable uneasiness when it comes to visiting Uncle Joe in a psych ward or a nursing home suffering from advanced Alzheimer’s disease. This is because we understand diabetes in terms of a mechanistic worldview of living things, and realize that it is just the result of some biochemical imbalances within his body. Similar physical diseases such as major depression, schizophrenia and Alzheimer’s disease, on the other hand, challenge our very dualistic model of consciousness, and even call into question the immortality of our very Minds, leaving us feeling powerless and vulnerable. After all, how can the immaterial spirit of the Mind change so drastically in someone we have known for so many years? In ages past, these diseases were attributed to such things as possession by evil spirits, but since that explanation is no longer available to most of us, we simply tend to distance ourselves from those afflicted with such diseases because they make us feel very uncomfortable and uneasy.
This is unfortunate because these diseases present a unique opportunity to explore the true nature of consciousness. Alzheimer’s disease is caused by the physical destruction of the brain’s neural network, which may be induced by the buildup of beta-amyloid protein plaques within the network of neurons. Major depression and schizophrenia are much more promising because they are thought to be caused by concentration imbalances of neurotransmitters in the quadrillion synapses of the human brain. Major depression occurs when there is a deficiency of neurotransmitters in the synapses, like a quadrillion electrical relays with dirty and oxidized contacts that do not make very good electrical contact, while schizophrenia, which is currently less well understood, may similarly arise from hyperactive dopamine receptors in the synapses between the neurons. Major depression is much easier to treat than schizophrenia because the ingestion of a few thousandths of a gram of antidepressant molecules in pill form over a period of two to four weeks can often produce a dramatic recovery. Modern antidepressants increase the neurotransmitters serotonin, norepinephrine and dopamine in the synaptic cleft between the neurons in the brain. The most commonly used are SSRIs (Selective Serotonin Reuptake Inhibitors), such as Celexa and Prozac, which block the reuptake of serotonin back into the transport vesicles of the axon terminal. These fortunate patients can suddenly “pop” out of a major depression in as short a period as 24 hours after being on an antidepressant for several weeks, and thus can vividly compare their “depressed” Minds with their “normal” Minds, like Dr Jekyll and Mr Hyde in reverse, with the ingestion of a magical potion that seems to quickly return them to their normal selves. How can this be if the Mind is really a nonmaterial spirit?
This is an important consideration because, while with Alzheimer’s disease, we see a physical destruction of the neural network composed of one quadrillion connections between the neurons of the human brain, with major depression the neural network remains intact. What happens in major depression is a disruption in the flow of information between the neurons, and that is key to understanding the grand illusion of consciousness because it indicates that consciousness, the Mind, and human personality merely emerge from a huge flow of information upon the neural network of one quadrillion connections within the brain, and not from the physical neural network of connections itself. Therefore, consciousness really is a grand illusion. It is simply a self-emerging GUI interface to a huge flow of information within the neural network of the brain. The material neural network itself is not the Mind, it is the ephemeral flow of information within the network that is the Mind.
So if consciousness, the Mind, and human personality simply emerge from a large flow of information, this flow does not necessarily have to flow within the squishy brains of carbon-based life forms. Thus, these same effects could arise from huge information flows within networks of silicon-based systems on the Internet, or from even stranger platforms. In Chapter 4 The Black Hole Era of The Five Ages of the Universe (1999), Fred Adams and Greg Laughlin imagine a time 10⁴⁰ - 10¹⁰⁰ years from now when the Universe consists of a very dilute soup of electrons and positrons at nearly absolute zero and which is powered by Hawking radiation from evaporating black holes. The chapter begins with some thoughts of Bob, an intelligent being whose brain consists of a large collection of slowly spiraling electrons and positrons that are each separated by a distance that is many orders of magnitude larger than today’s visible Universe. Bob is 10⁷⁹ years old and has just sensed some gravity waves pass by from the coalescence of two very large black holes into a truly massive black hole. Bob is a fairly slow thinker because, living so very close to absolute zero, one of Bob’s “seconds” lasts for about 10⁷⁰ years. But the recent disturbance in his normally quiet Universe gets Bob to thinking again about what happened during the very brief period 10⁴⁰ years after the Big Bang, when some modern physicists posit that some very short-lived particles, called protons and neutrons, may have interacted with modern electrons and formed very complex structures capable of thought during this very brief period of 10⁴⁰ years. Since these very exotic short-lived protons and neutrons have long since decayed away, who knows?
November 5, 2016 Update
I just finished a very fascinating MOOC course at Coursera entitled Synapses, Neurons and Brains by Professor Idan Segev that I highly recommend to all who would like to pursue this subject further and also learn about some of the more recent advances in neuroscience. The course is available at:
The Role of Information and Consciousness in Modern Physics
At this point, if you are new to softwarephysics you might want to get a brief introduction to the theory of relativity by taking a look at Is Information Real? and Cyberspacetime, and also an introduction to quantum theory at Quantum Software, SoftwareChemistry, and The Foundations of Quantum Computing. However, if you have been following along with the postings in this blog on softwarephysics, my hope is that you have become aware of the ever increasing importance of the concept of information in the development of physics in the 19th and 20th centuries. That development began with the impact of Maxwell’s Demon upon thermodynamics and statistical mechanics in the 19th century (see The Demon of Software), progressed into the 20th century with the special theory of relativity and the role of the speed of information transmission in the preservation of causality, and culminated with the role of information in quantum mechanics. In Is Information Real?, we saw how the special theory of relativity made information tangible, and we saw our 200-pound man slowly dissolve into pure mathematical information in The Foundations of Quantum Computing.
So is our closely held dualistic model of the grand illusion that forms our Minds totally misguided? Perhaps not if we further explore the role of information and consciousness in modern physics. Again, Descartes’ dualistic model of the Mind maintains that the Mind is not material in nature; it is a “ghost in the machine”. But thanks to modern physics, we now know that the “machine” itself is also very ghost-like in nature too and that most of the “real” material stuff around us is also largely a grand illusion as well. As we saw our hypothetical 200-pound man slowly dissolve into pure mathematics in The Foundations of Quantum Computing, it is good to keep in mind that much of what we observe as “real” material stuff is merely an illusion too. For example, we found that our 200-pound man really consisted of mostly empty space and 100 pounds of protons, 100 pounds of neutrons, and 0.87 ounces of electrons. A proton, consisting of two up quarks and one down quark, has a mass of 938.27 MeV. Similarly, a neutron, consisting of one up quark and two down quarks, is slightly more massive with a mass of 939.56 MeV. But the up and down quarks themselves are surprisingly quite light. The up quark is now thought to have a mass with an upper limit of only 4 MeV, and the down quark is thought to have a mass with an upper limit of only 8 MeV. So a proton should have a mass of about 16 MeV instead of 938.27 MeV, and the neutron should have a mass of about 20 MeV instead of 939.56 MeV. Where does all this extra mass come from? It comes from the kinetic and binding energy of the virtual gluons that hold the up and down quarks together to form protons and neutrons! Remember that energy can add to the mass of an object via E = mc² by simply rearranging the formula as m = E/c².
So going back to our fictional 200-pound man, consisting of 100 pounds of protons, 100 pounds of neutrons, and 0.87 ounces of electrons, we can now say that the man actually consists of 0.87 ounces of electrons, 1.28 pounds of up quarks, 2.56 pounds of down quarks, and 196.10 pounds of pure energy!
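The arithmetic behind this decomposition is easy to verify. The short Python sketch below simply redoes the bookkeeping from the figures quoted above; since the quark masses are upper limits, the quark totals are themselves upper limits. It is an illustration of the proportions, not a physics calculation.

```python
# A quick check of the mass bookkeeping above for the hypothetical
# 200-pound man: 100 pounds of protons, 100 pounds of neutrons, and
# 0.87 ounces of electrons, with the quark masses taken as the
# upper limits quoted above (4 MeV up, 8 MeV down).

M_PROTON = 938.27    # proton mass in MeV (two ups and one down)
M_NEUTRON = 939.56   # neutron mass in MeV (one up and two downs)
M_UP = 4.0           # up quark mass upper limit in MeV
M_DOWN = 8.0         # down quark mass upper limit in MeV

protons_lb = 100.0
neutrons_lb = 100.0
electrons_lb = 0.87 / 16.0   # 0.87 ounces expressed in pounds

# The fraction of each nucleon's mass that is actual quark rest mass.
up_lb = protons_lb * (2 * M_UP) / M_PROTON + neutrons_lb * M_UP / M_NEUTRON
down_lb = protons_lb * M_DOWN / M_PROTON + neutrons_lb * (2 * M_DOWN) / M_NEUTRON

# Everything left over is the kinetic and binding energy of the
# virtual gluons, counted as mass via m = E/c².
energy_lb = 200.0 - up_lb - down_lb - electrons_lb

print(f"up quarks:   {up_lb:.2f} pounds")      # ≈ 1.28
print(f"down quarks: {down_lb:.2f} pounds")    # ≈ 2.56
print(f"pure energy: {energy_lb:.2f} pounds")  # ≈ 196.1
```

Running it reproduces the figures quoted above to within rounding: about 1.28 pounds of up quarks, 2.56 pounds of down quarks, and 196 pounds of pure energy.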
In SoftwareChemistry we saw that chemistry is all about electrons in various quantum states and the electromagnetic force between electrons and protons. This is rather strange since the electrons in an atom represent an insignificant amount of the mass of an atom. Protons have 1836 times as much mass as electrons, and neutrons are just slightly more massive than protons, with a mass that is equal to 1.00138 times that of a proton. But strangely, everything you see, hear, smell, taste, and feel results from the interactions of less than one ounce of those electrons. And all of the biochemical reactions that keep you alive and even your thoughts at this very moment are all accomplished with this small mass of electrons! This all stems from the fact that, although electrons are very light relative to protons and neutrons, for some unknown reason, they pack a whopping amount of electrical charge. In fact, the light electrons have the same amount of electrical charge as the much heavier protons, just with the opposite sign, so it is the electromagnetic force that really counts in chemistry, not the electrons themselves. In that regard, chemistry can really be considered to be the study of the electromagnetic force, and not the study of matter, since electrons are nearly massless particles.
According to our best effective theory on the subject, the quantum field theory of QED (1948), when you push your hand down on a table, the huge number of electrons in both the table and your hand begin to enter into an exchange interaction that is felt as a repulsive force, because all of those electrons are fermions with a spin value of 1/2, and therefore, due to the Pauli exclusion principle, cannot occupy the same quantum state. The Pauli exclusion principle is also the reason that all of the electrons in a given atom do not collapse into the lowest energy level of the atom. If they did so, there would be no chemistry, and no you to worry about it. This apparent exchange force gives you the illusion that the table and your hand are solid objects, when according to our best effective theories, the table and your hand consist mostly of empty space and pure energy with a thin haze of surrounding electrons. When you look at the man, the table, or anything else, your Mind is simply creating the illusion of their existence, as ambient photons in the room scatter off the thin haze of electrons surrounding the objects – the objects themselves are mainly composed of pure energy. The table also creates the illusion in your Mind that it is quite massive and very difficult to move when you try to shove it across the floor for a family gathering, but that is just because the table contains a great deal of pure energy. And if the string theorists are correct, even the electrons and quarks in the table are just very small ghost-like vibrating strings of information, in keeping with John Wheeler’s “It from Bit” hypothesis that the Universe may simply be composed of information at its deepest levels.
To take this to an even more extreme level, in What’s It All About?, Genes, Memes and Software and Is the Universe Fine-Tuned for Self-Replicating Information?, I proposed that the multiverse may simply be a vast eternal form of self-replicating mathematical information that has always existed and has spawned an infinite number of universes such as ours. As we saw in Some Reflections on nothingness, our universe may have begun as a quantum fluctuation, forming a universe that is made of “nothing”, with no net momentum, angular momentum, mass-energy, electrical charge or color charge to speak of. It’s like adding up the infinite set of all real numbers, both positive and negative, and ending up with exactly zero. So our Universe might just be one instance within an infinitely large multiverse of universes, and our Big Bang might just be one of an infinite number of Big Bangs of mathematical information exploding out into a new universe.
Infinity is a very large number, so many cosmologists are now coming to the conclusion that the answer to Brandon Carter’s Weak Anthropic Principle (1973):
The Weak Anthropic Principle - Intelligent beings will only find themselves existing in universes capable of sustaining intelligent beings.
is that we just happen to exist in one of the rare universes capable of supporting intelligent beings, but because infinity is infinite, there still would be an infinite number of such universes. Furthermore, intelligent beings should likely find themselves in a universe that just barely qualifies for sustaining intelligent beings, since there would be far more universes that just barely tolerate the existence of intelligence, compared to those that openly welcome intelligent beings with cordial affection. And our Universe certainly seems to be such a universe that is far less than welcoming to intelligent life. If you think of all the places in our Universe where complex intelligent carbon-based life can exist, you come up with a very small portion of the available real estate, and I think the findings to date of the Kepler space telescope bear this out. Kepler is currently searching for planets as they transit in front of about 100,000 stars and has come up with 2321 possible candidates and 105 confirmed planets to date, but none of these planets seem to be likely homes for intelligent beings. Granted, our Universe has the proper forces tuned to the proper strengths and is chock full of the necessary building blocks, but temperature seems to be the limiting factor. In most places, our Universe is simply too hot or too cold for these carbon-based building blocks to do their job. They are either not jiggling around fast enough for chemical reactions to occur in a timely manner, or they are jiggling around too fast to stay stuck together long enough. The temperature range of our Universe goes from a low of 3 K for the CBR (Cosmic Background Radiation) up to several billion K for the core of an O class star about to supernova, with most matter near the extremes. However, carbon-based life can only exist in a narrow range of about 200 K near the freezing and boiling points of water on Earth, and there are very few places in our Universe where that is the case.
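As a rough illustration of just how narrow that window is, here is a back-of-the-envelope sketch comparing the habitable temperature window to the overall thermal range of the Universe. Taking "several billion" to be 3 billion K is my own assumption, made only to get an order of magnitude.

```python
# A back-of-the-envelope estimate of how little of the Universe's
# thermal range suits carbon-based life, using the figures above.
# Taking "several billion" to be 3 billion K is an assumption.

cbr_low = 3.0          # K, temperature of the Cosmic Background Radiation
stellar_high = 3.0e9   # K, core of an O class star about to supernova
life_window = 200.0    # K, the habitable window around liquid water

fraction = life_window / (stellar_high - cbr_low)
print(f"habitable fraction of the thermal range: {fraction:.1e}")
# A few parts in a hundred million of the available temperature range.
```

Of course, most matter sits near the extremes of that range rather than being spread evenly across it, so this simple ratio understates how much real estate is hot plasma or frozen void, but it conveys the general point.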
The fact that we live in a Universe that is on one hand capable of sustaining intelligent beings, but on the other hand is quite hostile to them at the same time might help to explain Fermi’s Paradox, first proposed by Enrico Fermi over lunch one day in 1950, which asks the question:
Fermi’s Paradox - If the universe is just chock full of intelligent beings, why do we not see any evidence of their existence?
Now if the multiverse really is composed of an infinite number of quantized universes, then our experience of the grand illusion of our Minds might really be like living in the movie Groundhog Day (1993), in which we are constantly experiencing the same things over and over again with slight variations in different universes. This might also provide a better interpretation of quantum mechanics than the Copenhagen interpretation. In 1927, Niels Bohr and Werner Heisenberg proposed a very positivistic interpretation of quantum mechanics now known as the Copenhagen interpretation. You see, Bohr was working at the University of Copenhagen Institute of Theoretical Physics at the time. The Copenhagen interpretation contends that absolute reality does not really exist. Instead, there are an infinite number of potential realities, defined by the wavefunction of a quantum system, and when we make a measurement of a quantum system, its wavefunction collapses into a single value that we observe, and thus brings the quantum system into reality. This satisfies Max Born’s contention that wavefunctions are just probability waves. The Copenhagen interpretation suffers from several philosophical problems though. For example, Eugene Wigner pointed out that the devices we use to measure quantum events are also made out of atoms which are quantum objects in themselves, so when an observation is made of a single atom of uranium to see if it has gone through a radioactive decay using a Geiger counter, the atomic quantum particles of the Geiger counter become entangled in a quantum superposition of states with the uranium atom. If the uranium has decayed, then the uranium atom and the Geiger counter are in one quantum state, and if the atom has not decayed, then the uranium atom and the Geiger counter are in a different quantum state. If the Geiger counter is fed into an amplifier, then we have to add the amplifier into our quantum superposition of states as well.
If a physicist is patiently listening to the Geiger counter, we have to add him into the chain as well, so that he can write and publish a paper which is read by other physicists and is picked up by Newsweek for a popular presentation to the public. So when does the “measurement” actually take place? We seem to have an infinite regress. Wigner’s contention is that the measurement takes place when a conscious being first becomes aware of the observation. Einstein had a hard time with the Copenhagen interpretation of quantum mechanics for this very reason because he thought that it verged upon solipsism. Solipsism is a philosophical idea from Ancient Greece. In solipsism, your Mind is the only thing that truly exists, and the physical Universe is just a figment of your imagination. So I would like to thank you very much for thinking of me and bringing me into existence! Einstein’s opinion of the Copenhagen interpretation of quantum mechanics can best be summed up by his statement "Is it enough that a mouse observes that the Moon exists?". Einstein objected to the requirement for a conscious being to bring the Universe into existence, because in Einstein’s view, measurements simply revealed to us the condition of an already existing reality that does not need us around to make measurements in order to exist. But in the Copenhagen interpretation, the absolute reality of Einstein does not really exist. Additionally, in the Copenhagen interpretation, objects do not really exist until a measurement is taken, which collapses their associated wavefunctions, but the mathematics of quantum mechanics does not shed any light on how a measurement could collapse a wavefunction.
In The Fabric of Reality (1997) David Deutsch rejects the extreme positivism of the Copenhagen interpretation as, borrowing a term from my youth, a cop-out. If you just don’t understand something like physical reality, it is rather easy to simply deny that it exists. Deutsch believes that physics owes us more than merely a method for calculating quantum probabilities; it owes us an explanation of how and why events actually occur. Deutsch is a strong advocate of the Many-Worlds interpretation, in which reality really does exist, but as an infinite number of realities in an infinite number of parallel universes. In 1957, Hugh Everett, working on his Ph.D. under John Wheeler, proposed the Many-Worlds interpretation of quantum mechanics. The Many-Worlds interpretation admits to an absolute reality but claims that there are an infinite number of absolute realities spread across an infinite number of parallel universes. In the Many-Worlds interpretation, when electrons or photons encounter a two-slit experiment, they go through one slit or the other, and when they hit the projection screen they interfere with electrons or photons from other universes that went through the other slit! In Everett’s original version of the Many-Worlds interpretation, the entire Universe splits into two distinct universes whenever a particle is faced with a choice of quantum states, and so all of these universes are constantly branching into an ever-growing number of additional universes. In the Many-Worlds interpretation of quantum mechanics, the wavefunctions or probability clouds of electrons surrounding an atomic nucleus are the result of overlaying the images of many “real” electrons in many parallel universes.
David Deutsch approaches the Many-Worlds interpretation with a slight twist. In Deutsch’s version of the Many-Worlds interpretation, there always has been an infinite number of parallel universes, with no need of continuous branching. When electrons or photons encounter a two-slit experiment without detectors, two very closely related universes merge into a single universe as the electrons or photons interfere with each other. If the electrons or photons encounter a two-slit experiment with detectors, the parallel universes remain distinct and no interference is observed. According to this version of the Many-Worlds interpretation, when you hold up a pillowcase and observe your neighbor’s front door light and see a checkerboard interference pattern of spots, there are an infinite number of copies of you doing the same thing in an infinite number of closely related parallel universes. The interference pattern you observe is the result of the interference of the photons from all these parallel universes. The chief advantage of the Many-Worlds interpretation is that you do not have to be there to observe the interference pattern. It happens whether you are there or not, and absolute reality does not depend upon conscious beings observing it. Einstein died in 1955, two years before the Many-Worlds interpretation of quantum mechanics was proposed, but I imagine that he would have gladly traded the Copenhagen interpretation, in which absolute reality did not exist in even a single universe, for an infinite number of parallel universes in which it did!
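Whatever interpretation one prefers, the two cases, interference without detectors and no interference with detectors, follow from the standard quantum rule that amplitudes add when the paths are indistinguishable, while probabilities add when which-slit information exists. The sketch below illustrates this rule numerically; the 500 nm wavelength, 0.1 mm slit spacing, and 1 m screen distance are my own illustrative assumptions, not figures from any of the books or papers mentioned above.

```python
import cmath
import math

# Two-slit sketch: without detectors the complex amplitudes from the
# two slits add first and are then squared, producing interference
# fringes; with which-slit detectors the probabilities add instead,
# and the fringes vanish. Geometry values are purely illustrative.

lam = 500e-9   # wavelength: 500 nm visible light
d = 1e-4       # slit separation: 0.1 mm
L = 1.0        # slit-to-screen distance: 1 m

def amplitude(x, slit_offset):
    """Complex amplitude at screen position x from one slit."""
    path = math.hypot(L, x - slit_offset)
    return cmath.exp(2j * math.pi * path / lam)

def intensity_no_detectors(x):
    """Paths indistinguishable: amplitudes add, then square."""
    return abs(amplitude(x, -d / 2) + amplitude(x, +d / 2)) ** 2

def intensity_with_detectors(x):
    """Which-slit information recorded: probabilities add."""
    return abs(amplitude(x, -d / 2)) ** 2 + abs(amplitude(x, +d / 2)) ** 2

# Sample 10 mm of the screen, about two full fringes for this geometry.
xs = [i * 1e-5 for i in range(1000)]
no_det = [intensity_no_detectors(x) for x in xs]
with_det = [intensity_with_detectors(x) for x in xs]
print(f"no detectors:   min {min(no_det):.2f}, max {max(no_det):.2f}")
print(f"with detectors: min {min(with_det):.2f}, max {max(with_det):.2f}")
```

Without detectors the intensity swings between roughly 0 and 4 (dark and bright fringes); with detectors it sits flat at 2, the plain sum of the two slits acting alone.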
So in some sense, perhaps the dualists have been right all along – not because there really is a ghost in the machine, but because the machine itself may be nothing more than a ghost. When all is said and done, the one distinguishing characteristic of the Mind is that it is capable of contemplating such grand illusions, even if the Mind itself is only the grandest illusion of them all.
Comments are welcome at firstname.lastname@example.org
To see all posts on softwarephysics in reverse order go to:
Steve Johnston