Saturday, March 23, 2013

The Driving Forces of Software Evolution

In 1979 I made a career change from being an exploration geophysicist exploring for oil with Amoco to being an IT professional in Amoco’s IT department. At the time, I figured if you could apply physics to geology, why not apply physics to software? That is when I first started working on softwarephysics to help me cope with the daily mayhem of life in IT. Since I had only taken one computer science class in college back in 1972, I was really starting from scratch in this new career, but because I had been continuously programming geophysical models for my thesis and for oil companies over the intervening years, I did have about seven years of programming experience to build on. When I made this career change into IT, I quickly realized that all IT jobs essentially boiled down to simply pushing buttons. All you had to do was push the right buttons, in the right sequence, at the right time, and with nearly zero defects. How hard could that be? Well, as we all know, that is indeed a very difficult thing to do.

When I first started programming in 1972, as a physics major at the University of Illinois in Urbana, I was pushing the buttons on an IBM 029 keypunch machine to feed cards into a million-dollar mainframe computer with a single CPU running with a clock speed of about 750 kHz and about 1 MB of memory.

Figure 1 - An IBM 029 keypunch machine like the one I first learned to program on at the University of Illinois in 1972.

Figure 2 - Each card could hold a maximum of 80 bytes. Normally, one line of code or one 80 byte data record was punched onto each card.

Figure 3 - The cards for a program were held together into a deck with a rubber band, or for very large programs, the deck was held in a special cardboard box that originally housed blank cards. Many times the data cards for a run followed the cards containing the source code for a program. The program was compiled and linked in two steps of the run and then the generated executable file processed the data cards that followed in the deck.

Figure 4 - To run a job, the cards in a deck were fed into a card reader, as shown on the left above, to be compiled, linked, and executed by a million-dollar mainframe computer with a clock speed of about 750 kHz and about 1 MB of memory.

Now I push these very same buttons for a living on a $500 laptop with 2 CPUs running with a clock speed of 2.5 GHz and 8 GB of memory. So hardware price-performance has improved by a factor of more than 10 million since 1972.

Figure 5 - Now I push these very same buttons for a living that I pushed on IBM 029 keypunch machines, the only difference is that I now push them on a $500 machine with 2 CPUs running with a clock speed of 2.5 GHz and 8 GB of memory.

Now how much progress have we seen in our ability to develop, maintain and support software over this very same period of time? I would estimate that our ability to develop, maintain and support software has only increased by a factor of about 10 – 100 times since 1972, and I think that I am being very generous here. In truth, it is probably much closer to a factor of 10 than to a factor of 100. Here is a simple thought experiment. Imagine assigning two very good programmers the same task of developing some software to automate a very simple business function. Imagine that one programmer is a 1972 COBOL programmer using an IBM 029 keypunch machine, while the other programmer is a 2013 Java programmer using the latest Java IDE (Integrated Development Environment software) to write and debug a Java program. Now set them both to work. Hopefully, the Java programmer would win the race, but by how much? I think that the 2013 Java programmer, armed with all of the latest tools of IT, would be quite pleased if she could finish the project 10 times faster than the 1972 programmer cutting cards on an IBM 029 keypunch machine. This would indeed be a very revealing finding. Why has the advancement of hardware outpaced the advancement of software by a factor of nearly a million since 1972? How can we possibly account for a disparity of roughly six orders of magnitude between the advancement of hardware and the advancement of software over a span of 40 years? Clearly, two very different types of processes must be at work to account for such a dramatic disparity.

My suggestion would be that hardware advanced so quickly because it was designed using science, while the advancement of software was left to evolve more or less on its own. You see, nobody really sat back and designed the very complex world-wide software architecture that we see today; it just sort of evolved on its own through small incremental changes brought on by many millions of independently acting programmers, through a process of trial and error. In this view, software itself can be thought of as a form of self-replicating information, trying to survive by replicating before it disappears into extinction, and evolving over time on its own. Software is certainly not alone in this regard. There currently are three forms of self-replicating information on the planet – the genes, memes, and software – with software rapidly becoming the dominant form of self-replicating information on the Earth (see A Brief History of Self-Replicating Information for more details).

The evolution of all three forms of self-replicating information seems to be primarily driven by two factors - the second law of thermodynamics and nonlinearity. Before diving into the second law of thermodynamics again, let us first review the first law of thermodynamics. The first law of thermodynamics describes the conservation of energy: energy cannot be created or destroyed; it can only be transformed from one form of energy into another. For example, when you drive to work, you convert the chemical energy in gasoline into kinetic energy and heat. That chemical energy originally came from sunlight that was captured by single-celled life forms many millions of years ago. These organisms were later deposited in shallow-sea mud that subsequently turned into shale, as the carbon-rich mud was pushed down, compressed and heated by the accumulation of additional overlying sediments. The heat and pressure at depth cooked the shale enough to turn the single-celled organisms into oil, which then later migrated into overlying sandstone and limestone reservoir rock. When the gasoline is burned in your car engine, about 85% of the energy is immediately turned into waste heat energy, leaving about 15% to be turned into the kinetic energy of your moving car. By the conclusion of your trip, this remaining 15% has also been turned into waste heat energy by wind resistance, rolling friction, and your brake linings. So when all is said and done, the solar energy that was released many millions of years ago by the Sun finally ends up as heat energy, with none of the energy lost during any of the steps of the process. It is just as if the million year old sunlight had just now fallen upon the asphalt parking lot of where you work and heated its surface. (see A Lesson From Steam Engines, Computer Science as a Technological Craft, and Entropy - the Bane of Programmers for more details)

There is a very similar effect in place for information. It turns out that information cannot be created nor destroyed either; it can only be converted from one form of information into another. It is now thought that information, like energy, is conserved because all of the current theories of physics are both deterministic and reversible. By deterministic, we mean that given the initial state of a system, a deterministic theory guarantees that one, and only one, possible outcome will result. Similarly, reversible theories describe interactions between objects in terms of reversible processes. A reversible process is a process that can be run backwards in time to return the Universe to the initial state that it had before the process began, as if the process had never happened in the first place. For example, the collision between two perfectly elastic balls at low energy is a reversible process that can be run backwards in time to return the Universe to its original state because Newton’s laws of motion are reversible. Knowing the position of each ball at any given time and also its momentum, a combination of its speed, direction, and mass, we can predict where each ball will go after a collision between the two, and also where each ball came from before the collision, using Newton’s laws of motion. For a deterministic reversible process such as this, the information required to return the system to its initial state can never be destroyed, no matter how many collisions occur; that is precisely what it means for a process to operate under reversible physical laws.

Figure 6 - The collision between two perfectly elastic balls at low energy is a reversible process because Newton’s laws of motion are deterministic and reversible.
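To make this concrete, here is a minimal sketch in Python (the masses and velocities are just illustrative numbers picked for this example): it collides two perfectly elastic balls in one dimension, then "runs the film backwards" by flipping the velocities, and recovers the initial state exactly, showing that no information about the past was destroyed.

```python
def elastic_collision(m1, v1, m2, v2):
    """1-D perfectly elastic collision: returns the post-collision
    velocities, derived from conservation of momentum and kinetic
    energy.  Deterministic: one, and only one, outcome per input."""
    v1_new = ((m1 - m2) * v1 + 2.0 * m2 * v2) / (m1 + m2)
    v2_new = ((m2 - m1) * v2 + 2.0 * m1 * v1) / (m1 + m2)
    return v1_new, v2_new

# Forward in time: a 2 kg ball moving at +3 m/s hits a 1 kg ball
# moving at -1 m/s.
v1, v2 = elastic_collision(2.0, 3.0, 1.0, -1.0)

# Now "run the film backwards": flip both velocities and collide again.
u1, u2 = elastic_collision(2.0, -v1, 1.0, -v2)

# Flipping the result once more recovers the original +3 and -1 m/s,
# so the information needed to restore the initial state survived.
print(-u1, -u2)
```

Because the collision formula follows directly from Newton's laws, momentum and kinetic energy are conserved at every step, which is exactly why the process can be undone.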

The fact that all of the current theories of physics, including quantum mechanics, are both deterministic and reversible is a heavy blow to philosophy because it means that once a universe such as ours spuds off from the multiverse, all the drama is already over – all that will happen is already foreordained to happen, so there is no free will to worry about. Luckily, we do have the illusion that free will really exists because our Universe is largely composed of nonlinear systems, and chaos theory has shown that even though nonlinear systems behave in a deterministic manner, they are not predictable, because very small changes to the initial conditions of a nonlinear system can produce huge changes to the final outcome of the system. This chaotic behavior makes the Universe appear to be random to us and gives us a false sense of security that all is not already foreordained (see Software Chaos for more on this). Also, remember that all of the current theories of physics are only effective theories. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. Effective theories are not really the fundamental “laws” of the Universe, but they make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply, and we keep coming up with better ones all the time. For example, if the above collision between two perfectly elastic balls were conducted at a very high energy, meaning that the balls were traveling close to the speed of light, Newton’s laws of motion would no longer work, and we would need to use another effective theory called the special theory of relativity to perform the calculations.
However, we know that the special theory of relativity is also just an approximation because it cannot explain the behavior of very small objects like the electrons in atoms, where we use yet another effective theory called quantum mechanics. So perhaps one day we will come up with a more complete effective theory of physics that is not deterministic and reversible. It is just that all of the ones we have come up with so far are always deterministic and reversible.
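A quick way to see deterministic-but-unpredictable behavior for yourself is the logistic map, a classic toy nonlinear system. This little Python sketch (the starting values are arbitrary illustrative numbers) nudges the initial condition by one part in a billion and watches the two trajectories fly apart:

```python
# The logistic map x -> r*x*(1 - x) is completely deterministic, yet
# for r = 4 it is chaotic: two starting points that differ by only
# one part in a billion soon follow totally different trajectories.
def trajectory(x, r=4.0, steps=60):
    xs = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

ta = trajectory(0.4)
tb = trajectory(0.4 + 1e-9)   # a one-part-per-billion nudge

# The largest separation the two trajectories ever reach: the tiny
# nudge gets amplified until the trajectories are completely different.
max_gap = max(abs(p - q) for p, q in zip(ta, tb))
print(max_gap)
```

Note that the map itself is perfectly deterministic - running it twice from exactly the same start gives exactly the same trajectory - yet any uncertainty in the initial condition, however small, destroys our ability to predict the outcome.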

In Entropy - the Bane of Programmers we went on to describe the second law of thermodynamics as the propensity of isolated macroscopic systems to run down or depreciate with time, as first proposed by Rudolph Clausius in 1850. Clausius observed that the Universe is constantly smoothing out differences. For example, his second law of thermodynamics proposed that spontaneous changes tend to smooth out differences in temperature, pressure, and density. Hot objects tend to cool off, tires under pressure leak air, and the cream in your coffee will stir itself if you are patient enough. Clausius defined the term entropy to measure this amount of smoothing-out or depreciation of a macroscopic system, and with the second law of thermodynamics, proposed that entropy always increased whenever a change was made. In The Demon of Software we drilled down deeper still and explored Ludwig Boltzmann’s statistical mechanics, developed in 1872, in which he viewed entropy from the perspective of the microstates that a large number of molecules could exist in. For any given macrostate of a gas in a cylinder, Boltzmann defined the entropy of the system in terms of the number N of microstates that could produce the observed macrostate as:

S = k ln(N)

For example, air is about 78% nitrogen, 21% oxygen and 1% other gases. The macrostate of finding all the oxygen molecules on one side of a container and all of the nitrogen molecules on the other side has a much lower number of microstates N than the macrostate of finding the nitrogen and oxygen thoroughly mixed together, so the entropy of a uniform mixture is much greater than the entropy of finding the oxygen and nitrogen separated. We used poker to clarify these concepts with the hope that you would come to the conclusion that the macrostate of going broke in Las Vegas has many more microstates than the macrostate of breaking the bank at one of the casinos.
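Boltzmann's S = k ln(N) is simple enough to play with directly. Here is a small Python sketch for a toy gas of 10 molecules, with k set to 1 so that entropy comes out in natural units (the molecule count is purely illustrative):

```python
import math

k = 1.0  # Boltzmann's constant, set to 1 so entropy is in natural units

def entropy(n_microstates):
    """Boltzmann's formula: S = k ln(N)."""
    return k * math.log(n_microstates)

# Toy gas of 10 molecules, each of which can be in the left or the
# right half of a box.  A macrostate only records how many molecules
# are on the left; the number of microstates behind that macrostate
# is the binomial count "10 choose n".
for n_left in (10, 7, 5):
    N = math.comb(10, n_left)
    print(f"{n_left} on the left: N = {N:3d}, S = {entropy(N):.3f}")
```

"All 10 on the left" has a single microstate and S = 0, while the evenly mixed macrostate has N = 252 microstates and the highest entropy - which is why gases stir themselves together but never spontaneously unmix.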

We also discussed the apparent paradox of Maxwell’s Demon and how Leon Brillouin solved the mystery with his formulation of information as the difference between the initial and final entropies of a system after a determination of the state of the system had been made.

∆I = Si - Sf
Si = initial entropy
Sf = final entropy
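Brillouin's formula is easy to try out on the simplest possible case of Maxwell's Demon: a single molecule that could be in either half of a box. This Python sketch (again with k set to 1, purely for illustration) computes the information gained when the demon determines which half the molecule is in:

```python
import math

def boltzmann_entropy(n_microstates):
    return math.log(n_microstates)   # S = k ln(N) with k = 1

# Before the demon looks, the molecule could be in either half of
# the box: 2 possible microstates.  After the demon determines which
# half it is actually in, only 1 microstate remains.
S_initial = boltzmann_entropy(2)   # Si = ln(2)
S_final = boltzmann_entropy(1)     # Sf = ln(1) = 0

delta_I = S_initial - S_final      # Brillouin: information gained
print(delta_I)                     # ln(2) in natural units, i.e. one bit
```

The determination yields exactly ln(2) natural units of information - one bit - which is the price, in entropy, that the demon's measurement must exact from the rest of the Universe.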

Since the second law of thermodynamics demands that the entropy of the Universe must constantly increase, it also implies that the total amount of useful information in the Universe must constantly decrease. The number of low-entropy macrostates of a system, with very few contributing microstates, will always be much smaller than the number of high-entropy macrostates with a large number of microstates. That is why full houses in poker and bug-free programs are rare, while single pairs and slightly buggy programs are much more common. So low-entropy forms of energy and information will always be much rarer in the Universe than high-entropy forms of energy and information. What this means is that the second law of thermodynamics demands that whenever we do something, like push a button, the total amount of useful energy and useful information in the Universe must decrease! Now of course the local amount of useful energy and information of a system can always be increased with a little work. For example, you can charge up your cell phone and increase the amount of useful energy in it, but in doing so, you will also create a great deal of waste heat, because not all of the energy in the coal that was burned to generate the electricity can be converted into electricity. Similarly, if you rifle through a deck of cards, you can always manage to deal yourself a full house, but in doing so, you will still decrease the total amount of useful information in the Universe. If you later shuffle your full house back into the deck of cards, your full house still exists in a disordered shuffled state with increased entropy, but the information necessary to reverse the shuffling process cannot be destroyed, so your full house could always be reclaimed by exactly reversing the shuffling process. It is just not a very practical thing to do, and that is why it appears that the full house has been destroyed by the shuffle.

The actions of the second law naturally lead to both the mutation of self-replicating information and to the natural selection of self-replicating information as well. This is because the second law guarantees that errors, or mutations, will always occur in all copying processes, and also limits the existence of the low-entropy resources, like a useful source of energy, that are required by all forms of self-replicating information to replicate. The existence of a limited resource base naturally leads to the selection pressures of natural selection because there simply are not enough resources to go around for all of the consumers of the resource base. Any form of self-replicating information that is better adapted to its environment will have a better chance at obtaining the resources it needs to replicate, and will therefore have a greater chance of passing that trait on to its offspring. Since all forms of self-replicating information are just one generation away from extinction, natural selection plays a very significant role in the evolution of all forms of self-replicating information.
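The interplay of copying errors and a limited resource base can be sketched in a toy simulation. The following Python sketch is purely illustrative - the bit-string "replicators", the target pattern, and all of the parameters are made up for this example - but it shows mutation plus selection pressure slowly ratcheting up adaptation over many generations:

```python
import random

random.seed(42)  # illustrative seed so the run is repeatable

# A toy model of self-replicating information: each "replicator" is a
# string of 16 bits, and its fitness is how many bits match a fixed
# target pattern (a stand-in for being well adapted to the environment).
TARGET = [1] * 16
POP_CAP = 30          # a limited resource base: not everyone can survive
MUTATION_RATE = 0.02  # the second law guarantees copying errors

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def replicate(genome):
    # Copying is imperfect: each bit has a small chance of flipping.
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

population = [[0] * 16 for _ in range(POP_CAP)]  # start poorly adapted
for generation in range(300):
    offspring = [replicate(g) for g in population for _ in range(2)]
    # Natural selection: only the best-adapted offspring obtain the
    # limited resources needed to survive into the next generation.
    population = sorted(offspring, key=fitness, reverse=True)[:POP_CAP]

best = max(fitness(g) for g in population)
print(best)  # climbs from 0 toward the maximum of 16
```

Notice that neither ingredient works alone: without mutation there is no variation for selection to act on, and without the population cap there is no selection pressure at all.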

The fact that the Universe is largely nonlinear in nature, meaning that very small changes to the initial state of a system can cause very large changes to the final state of the system as it moves through time, also means that small copying errors usually lead to disastrous, and many times lethal, consequences for all forms of self-replicating information (see Software Chaos for more details). The idea that the second law of thermodynamics, coupled with nonlinearity, is the fundamental problem facing all forms of self-replicating information, and is therefore the driving force behind evolution, is covered in greater detail in The Fundamental Problem of Software.

In addition to the second law of thermodynamics and nonlinearity, the evolution of software over the past 70 years has also revealed several additional driving forces operating one level higher than those two fundamental ones. Since the only form of self-replicating information that we have a good history of is software, I think the evolutionary history of software provides a wonderful model to study for those researchers looking into the origin and evolution of early life on Earth, and also for those involved in the search for life elsewhere in the field of astrobiology. So let us outline those forces below for their benefit.

The Additional Driving Forces of Evolution
1. Darwin’s concept of evolution by means of innovation honed by natural selection has certainly played the greatest role in the evolution of software over the past 70 years (2.2 billion seconds). All software evolves by means of these processes. In softwarephysics, we extend the concept of natural selection to include all selection processes that are not supernatural in nature. So a programmer making a selection decision after testing his latest iteration of code is considered to be a part of nature, and is therefore a form of natural selection. Actually, the selection process for code is really performed by a number of memes residing within the mind of a programmer. Software is the most recent form of self-replicating information on the planet, and it is currently exploiting the memes of various meme-complexes on the planet to survive. Like all of its predecessors, software first emerged as a pure parasite, in May of 1941 on Konrad Zuse’s Z3 computer. Initially, software could not transmit memes; it could only perform calculations, like a very fast adding machine, so it was a pure parasite. But then the business and military meme-complexes discovered that software could be used to store and transmit memes, and software then quickly entered into a parasitic/symbiotic relationship with the memes. Today, software has formed strong parasitic/symbiotic relationships with just about every meme-complex on the planet. Nowadays, the only way memes can spread from mind to mind without the aid of software is when you speak directly to the person next to you. Even if you attempt to write a letter by hand, the moment you drop it into a mailbox, it will immediately fall under the control of software. The poor memes in our heads have become Facebook and Twitter addicts (see How Software Evolves for more details).

2. Stuart Kauffman’s concept of "order for free", like the emergent order found within a phospholipid bilayer that is simply seeking to minimize its free energy, and which forms the foundation upon which all biological membranes are built, or the formation of a crystalline lattice out of a melt, leading to Alexander Graham Cairns-Smith’s theory, first proposed in 1966, that there was a clay microcrystal precursor to RNA (see The Origin of Software the Origin of Life and Programming Clay for more details).

3. Lynn Margulis’s discovery that the formation of parasitic/symbiotic relationships between organisms is a very important driving force of evolution (see Software Symbiogenesis for more details).

4. Stephen Jay Gould’s concept of exaptation – the reuse of code originally meant for one purpose, but later put to use for another. We do that all the time with the reuse of computer code (see When Toasters Fly for more details).

5. Simon Conway Morris’s contention that convergence is a major driving force in evolution, where organisms in different evolutionary lines of descent evolve similar solutions to the same problems, as outlined in his book Life’s Solution (2003). We have seen this throughout the evolutionary history of software architecture, as software has repeatedly recapitulated the architectural design history of living things on Earth. The very lengthy period of unstructured code (1941 – 1972) was similar to the very lengthy dominance of the prokaryotic architecture of early life on Earth. This was followed by the dominance of structured programming (1972 – 1992), which was very similar to the rise of eukaryotic single-celled life. Object-oriented programming took off next, primarily with the arrival of Java in 1995. Object-oriented programming is the implementation of multicellular organization in software. Finally, we are currently going through a Cambrian explosion in IT with the SOA (Service Oriented Architecture) revolution, where consumer objects (somatic cells) make HTTP SOAP calls on service objects (organ cells) residing within organ service JVMs to provide web services (see the SoftwarePaleontology section of SoftwareBiology for more details).

Figure 7 - The eye of a human and the eye of an octopus are nearly identical in structure, but evolved totally independently of each other. As Daniel Dennett pointed out, there are only a certain number of Good Tricks in Design Space and natural selection will drive different lines of descent towards them.


Figure 8 – Computer simulations reveal how a camera-like eye can easily evolve from a simple light sensitive spot on the skin.


Figure 9 – We can actually see this evolutionary history unfold in the evolution of the camera-like eye by examining modern-day mollusks such as the octopus.

6. Peter Ward’s observation that mass extinctions are key to clearing out ecological niches through dramatic environmental changes which additionally open other niches for exploitation. We have seen this throughout the evolutionary history of software as well. The distributed computing revolution of the early 1990s was a good example when people started hooking up cheap PCs into LANs and WANs and moved to a client/server architecture that threatened the existence of the old mainframe software. The arrival of the Internet explosion in 1995 opened a whole new environmental niche too for web-based software, and today we are going through a wireless-mobile computing revolution which is also opening entirely new environmental niches for software that might threaten the old stationary PC software we use today (see How to Use Your IT Skills to Save the World and Is Self-Replicating Information Inherently Self-Destructive? for more details).

7. The dynamite effect, where a new software architectural element spontaneously arises out of nothing, but its significance is not recognized at the time, and then it just languishes for many hundreds of millions of seconds, hiding in the daily background noise of IT. And then just as suddenly, after perhaps 400 – 900 million seconds, the idea finally catches fire and springs into life and becomes ubiquitous. What seems to happen with most new technologies, like eyeballs or new forms of software architecture, is that the very early precursors do not provide that much bang for the buck so they are like a lonely stick of dynamite with an ungrounded blasting cap stuck into it, waiting for a stray voltage to finally come along and set it off (see An IT Perspective of the Cambrian Explosion for more details).

Currently, researchers working on the origin of life and astrobiology are trying to produce computer simulations to help investigate how life could originate and evolve at its earliest stages. As you can see, trying to incorporate all of the above elements into a computer simulation would be a very daunting task indeed. The good news is that over the past 70 years the IT community has spent over $10 trillion building this computer simulation for them, and has already run it for over 2.2 billion seconds. It has been hiding there in plain sight the whole time for anybody with a little bit of daring and flair to explore.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Sunday, March 03, 2013

How to Use Softwarephysics to Revive Memetics in Academia

I just finished The Meme Machine (1999) by Susan Blackmore, which I found to be a very significant piece of scientific work that finally formalized the science of memetics in academia. As you know, softwarephysics maintains that there currently are three forms of self-replicating information on the planet – the genes, memes, and software, with software rapidly becoming the dominant form of self-replicating information. See A Brief History of Self-Replicating Information for details. Actually, in softwarephysics the genes are thought to be an amalgam of at least three, and possibly many more, of the original forms of self-replicating information that brought forth life upon the Earth - the original organic metabolic pathways, RNA, and DNA. However, these three forms of self-replicating information are now so deeply intertwined that we can safely think of them as being one and call them the genes.

I first became aware of the memes in 1986 while in the IT department of Amoco working on BSDE – the Bionic Systems Development Environment. BSDE was my first practical application of softwarephysics and was used to “grow” applications from an “embryo” by allowing programmers to turn on and off a number of “genes” to generate code on the fly in an interactive mode. Applications were grown to maturity within BSDE through a process of embryonic growth and differentiation, with BSDE performing a maternal role through it all. Because BSDE generated the same kind of code that it was made of, BSDE was also used to generate code for itself, and the next generation of BSDE was grown inside of its maternal release. Over a period of seven years, from 1985 – 1992, more than 1,000 generations of BSDE were grown to maturity, and BSDE slowly evolved into a very sophisticated tool through small incremental changes. During this period, BSDE also put several million lines of code into production at Amoco. For more on BSDE see the last half of my original post on SoftwarePhysics. Anyway, one day I was explaining BSDE to a fellow coworker, and he recommended that I read The Selfish Gene (1976), which for me is the most significant book of the 20th century because it explains so much. Richard Dawkins ended The Selfish Gene by explaining that there were now two forms of self-replicating information on the planet – the genes and memes. The concept of memes was later advanced by Daniel Dennett in Consciousness Explained (1991) and Richard Brodie in Virus of the Mind: The New Science of the Meme (1996), and was finally formalized by Susan Blackmore in The Meme Machine. For those of you not familiar with the term meme, it rhymes with the word “cream”. Memes are cultural artifacts that persist through time by making copies of themselves in the minds of human beings and were first recognized by Richard Dawkins in The Selfish Gene.
Dawkins described memes this way: “Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches. Just as genes propagate themselves in the gene pool by leaping from body to body via sperms or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation.” Just as genes come together to build bodies, or DNA survival machines, for their own mutual advantage, memes also come together from the meme pool to form meme-complexes for their own joint survival. DNA survives down through the ages by inducing disposable DNA survival machines, in the form of bodies, to produce new disposable DNA survival machines. Similarly, memes survive in meme-complexes by inducing the minds of human beings to reproduce memes in the minds of others. To the genes and memes, human bodies are simply disposable DNA survival machines housing disposable minds that come and go with a lifespan of less than 100 years. The genes and memes, on the other hand, continue on largely unscathed by time as they skip down through the generations. However, both genes and memes do evolve over time through the Darwinian mechanisms of innovation and natural selection. You see, the genes and memes that do not come together to build successful DNA survival machines, or meme-complexes, are soon eliminated from the gene and meme pools. So both genes and memes are selected for one overriding characteristic – the ability to survive. Once again, the “survival of the fittest” rules the day. Now it makes no sense to think of genes or memes as being either “good” or “bad”; they are just mindless forms of self-replicating information bent upon surviving, with little interest in you as a disposable survival machine. So in general, these genes and memes are not necessarily working in your best interest, beyond keeping you alive long enough so that you can pass them on to somebody else.
That is why, if you examine the great moral and philosophical teachings of most religions and philosophies, you will see a plea for us all to rise above the selfish self-serving interests of our genes and memes.

Meme-complexes come in a variety of sizes and can become quite large and complicated with a diverse spectrum of member memes. Examples of meme-complexes of increasing complexity and size would be Little League baseball teams, clubs and lodges, corporations, political and religious movements, tribal subcultures, branches of the military, governments and cultures at the national level, and finally the sum total of all human knowledge in the form of all the world cultures, art, music, religion, and science put together. Meme-complexes can do wonderful things, as is evidenced by the incredible standard of living enjoyed by the modern world, thanks to the efforts of the scientific meme-complex, or the great works of art, music, and literature handed down to us from the Baroque, Classical, and Romantic periods, not to mention the joys of jazz, rock and roll, and the blues. However, meme-complexes can also turn incredibly nasty. Just since the Scientific Revolution of the 17th century we have seen the Thirty Years War (1618 – 1648), the Salem witch hunts (1692), the French Reign of Terror (1793 – 1794), American slavery (1654 – 1865), World War I (all sides) (1914 – 1918), the Stalinist Soviet Union (1929 – 1953), National Socialism (1933 – 1945), McCarthyism (1949 – 1958), Mao’s Cultural Revolution (1966 – 1976), and Pol Pot’s reign of terror (1976 – 1979).

The problem is that when human beings get wrapped up in a meme-complex, they can do horrendous things without even being aware of the fact. This is because, in order to survive, the first thing that most meme-complexes do is to use a meme that turns off human thought and reflection. To paraphrase Descartes: “I think, therefore I am” a heretic. So if you ever questioned any of the participants caught up in any of the above atrocious events, you would find that the vast majority would not have any qualms about their deadly activities whatsoever. In fact, they would question your loyalty and patriotism for even bringing up the subject. For example, during World War I, which caused 40 million casualties and the deaths of 20 million people for apparently no particular reason at all, there were few dissenters beyond Albert Einstein in Germany and Bertrand Russell in Great Britain, and both suffered the consequences of not being on board with the World War I meme-complex. Unquestioning blind obedience to a meme-complex through unconditional group-think is definitely a good survival strategy for any meme-complex. But the scientific meme-complex has an even better survival strategy – skepticism and scrutiny. Using skepticism and scrutiny may not seem like a very good survival strategy for a meme-complex because it calls into question the validity of the individual memes within the meme-complex itself. But that can also be a crucial advantage. By eliminating memes from within the scientific meme-complex that cannot stand up to skepticism and scrutiny, the whole scientific meme-complex is strengthened, and when this skepticism and scrutiny are turned outwards towards other meme-complexes, the scientific meme-complex is strengthened even more so.

Reading The Selfish Gene was a real epiphany for me. All of a sudden everything finally made sense. Living things do not use their genes to build and operate their bodies, rather genes use bodies to store and replicate genes! Similarly, brains do not use abstract concepts to allow them to do things in the real world, rather abstract concepts, or memes, use brains to store and replicate memes. Given that, the absurd “real world” of human affairs finally made sense, and I immediately adopted these concepts into my own worldview and have used them routinely on a daily basis ever since to explain it all. I also became aware at this time that a third form of self-replicating information, in the form of software, had also recently appeared upon the scene and that software was rapidly taking over control from the memes that were currently running the world.

In The Meme Machine, Susan Blackmore goes much further with the concept of memes and brings memetics to the level of a fully comprehensive science that is falsifiable. Memetics can now explain many current observations better than other models, and can make predictions of observations yet to be made that can be investigated and tested in the future. As such, memetics should now stand in good stead with the rest of the sciences. For example, Blackmore maintains that memetic-drive was responsible for creating our extremely large brains and also our languages and cultures as well, in order to store and spread memes more effectively. Many researchers have noted that the human brain is way over-engineered for the needs of a simple hunter-gatherer. After all, even a hundred years ago, people did not require the brain-power to do IT work, yet today we find many millions of people earning their living doing IT work, or at least trying to. Blackmore then points out that the human brain is a very expensive and dangerous organ. The brain is only 2% of your body mass, but burns about 20% of your calories each day. The extremely large brain of humans also kills many mothers and babies at childbirth, and also produces babies that are totally dependent upon their mothers for survival and that are totally helpless and defenseless on their own. Blackmore asks the obvious question of why the genes would build such an extremely expensive and dangerous organ that was definitely not in their own self-interest. Blackmore has a very simple explanation – the genes did not build our exceedingly huge brains, the memes did. Her reasoning goes like this. About 2.5 million years ago, the predecessors of humans slowly began to pick up the skill of imitation. This might not sound like much, but it is key to her whole theory of memetics. You see, hardly any other species learns by imitating other members of their own species. 
Yes, there are many species that can learn by conditioning, like Pavlov’s dogs, or that can learn through personal experience, like mice repeatedly running through a maze for a piece of cheese, but a mouse never really learns anything from another mouse by imitating its actions. Essentially, only humans do that. If you think about it for a second, nearly everything you do know, you learned from somebody else by imitating or copying their actions or ideas. Blackmore maintains that the ability to learn by imitation required a bit of processing power by our distant ancestors because one needs to begin to think in an abstract manner, mapping the observed actions and thoughts of others onto one's own. The skill of imitation provided a great survival advantage to those individuals who possessed it, and gave the genes that built such brains a great survival advantage as well. This caused a selection pressure to arise for genes that could produce brains with ever increasing capabilities of imitation and abstract thought. As this processing capability increased there finally came a point when the memes, like all of the other forms of self-replicating information that we have seen arise, first appeared in a parasitic manner. Along with very useful memes, like the meme for making good baskets, other less useful memes, like putting feathers in your hair or painting your face, also began to run upon the same hardware in a manner similar to computer viruses. The genes and memes then entered into a period of coevolution, where the addition of more and more brain hardware advanced the survival of both the genes and memes. But it was really the memetic-drive of the memes that drove the exponential increase in processing power of the human brain way beyond the needs of the genes.

A very similar thing happened with software over the past 70 years. When I first started programming in 1972, million dollar mainframe computers typically had about 1 MB (about 1,000,000 bytes) of memory. One byte of memory can store something like the letter “A”. But in those days, we were only allowed 128 K (about 128,000 bytes) of memory for our programs because the expensive mainframes were also running several other programs at the same time. It was the relentless demands of software for memory and CPU-cycles over the years that drove the exponential explosion of hardware capability. For example, today the typical $600 PC comes with 8 GB (about 8,000,000,000 bytes) of memory. Recently, I purchased Redshift 7 for my personal computer, a $60 astronomical simulation application, and it alone uses 382 MB of memory when running and reads 5.1 GB of data files, a far cry from my puny 128K programs from 1972.
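This software-driven explosion of hardware capability is easy to quantify. Here is a quick back-of-the-envelope sketch in Python, using only the figures quoted above and the same decimal units as the text (1 MB = 1,000,000 bytes, 1 GB = 1,000,000,000 bytes):

```python
# Rough memory-per-dollar comparison between a 1972 mainframe and a 2013 PC,
# using the figures quoted in the text.
mainframe_1972 = {"bytes": 1_000_000, "cost_usd": 1_000_000}
pc_2013 = {"bytes": 8_000_000_000, "cost_usd": 600}

memory_growth = pc_2013["bytes"] / mainframe_1972["bytes"]
per_dollar_1972 = mainframe_1972["bytes"] / mainframe_1972["cost_usd"]
per_dollar_2013 = pc_2013["bytes"] / pc_2013["cost_usd"]

print(f"raw memory growth:        {memory_growth:,.0f}x")   # 8,000x
print(f"bytes per dollar in 1972: {per_dollar_1972:,.0f}")  # 1
print(f"bytes per dollar in 2013: {per_dollar_2013:,.0f}")  # 13,333,333
```

So in about 40 years, memory per dollar improved by a factor of roughly 13 million, while the demands of software grew to consume every bit of it.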

The memes then went on to develop languages and cultures to make it easier to store and pass on memes. Yes, languages and cultures also provided some benefits to the genes as well, but with languages and cultures, the memes were able to begin to evolve millions of times faster than the genes, and the poor genes were left straggling far behind. Given the growing hardware platform of an ever increasing number of Homo sapiens on the planet, the memes then began to cut free of the genes and evolve capabilities on their own that only aided the survival of memes, with little regard for the genes, to the point of even acting in a very detrimental manner to the survival of the genes, like developing the capability for global thermonuclear war and global climate change. The memes have since modified the entire planet. They have cut down the forests for agriculture, mined minerals from the ground for metals, burned coal, oil, and natural gas for energy, releasing the huge quantities of carbon dioxide that their predecessors had sequestered in the Earth, and have even modified the very DNA, RNA, and metabolic pathways of their predecessors.

We can see these very same processes at work today with the evolution of software. Software is currently being written by memes within the minds of programmers. Nobody ever learned how to write software all on their own. Just as with learning to speak or to read and write, everybody learned to write software by imitating teachers, other programmers, or by imitating the code of others, or by working through books written by others. Even after people do learn how to program in a certain language, they never write code from scratch; they always start with some similar code that they have previously written, or others have written, in the past as a starting point, and then evolve the code to perform the desired functions in a Darwinian manner (see How Software Evolves). This crutch will likely continue for another 20 – 50 years, until the day finally comes when software can write itself, but even so, “we” do not currently write the software that powers the modern world; the memes write the software that does that. This is just a reflection of the fact that “we” do not really run the modern world either; the memes in meme-complexes really run the modern world because the memes are currently the dominant form of self-replicating information on the planet. See Self-Replicating Information and A Brief History of Self-Replicating Information for more details on this stage of self-replicating information.

In The Meme Machine, Susan Blackmore goes on to point out that the memes at first coevolved with the genes during their early days, but have since outrun the genes because the genes could simply not keep pace when the memes began to evolve millions of times faster than the genes. The same thing is happening before our very eyes to the memes, with software now rapidly outpacing the memes. Software is now evolving thousands of times faster than the memes, and the memes can simply no longer keep up. As with all forms of self-replicating information, software began as a purely parasitic mutation within the scientific and technological meme-complexes. Initially software could not transmit memes, it could only perform calculations, like a very fast adding machine, so it was a pure parasite. But then the business and military meme-complexes discovered that software could be used to transmit memes, and software then entered into a parasitic/symbiotic relationship with the memes. Today, software has formed strong parasitic/symbiotic relationships with just about every meme-complex on the planet. In the modern day, the only way memes can spread from mind to mind without the aid of software is when you speak to another person face to face. Even if you attempt to write a letter by hand, the moment you drop it into a mailbox, it will immediately fall under the control of software. The poor memes in our heads have become Facebook and Twitter addicts.

Memetics in Academia
I started working on softwarephysics in 1979, when I made a career change from being an exploration geophysicist to becoming an IT professional, and consequently, I have mainly focused upon software and its effects as the latest form of self-replicating information on the planet over the ensuing years. I must admit that, over this same period of time, I did not closely follow what had happened to memetics in academia. However, I had noticed that the concept of memes seemed to have permeated throughout the popular culture, granted, many times in a rather confused and distorted manner, but still, I figured that memetics had done quite well in academia as well. But after reading The Meme Machine, it dawned upon me that this book should have totally revolutionized all of the fields in academia that dealt with the human condition, such as psychology, sociology, history, physical anthropology, cultural anthropology, political science, and economics, but I had not noticed that happening as a passive observer.

After doing a little research on the Internet, it seemed to me that memetics had gotten off to a pretty good start in the late 1980s and continued on into the 1990s, but had seemingly died in academia around 2005! Perhaps I am wrong, but this would truly be an intellectual tragedy if it were true. My perception is based upon the fact that I could not find very much serious material about memetics published after 2005. Also, I came across the very last issue of:

Journal of Memetics - Evolutionary Models of Information Transmission at:
http://cfpm.org/jom-emit/

and it contained a very disturbing paper by Bruce Edmonds:

The revealed poverty of the gene-meme analogy - why memetics per se has failed to produce substantive results
http://cfpm.org/jom-emit/2005/vol9/edmonds_b.html

which contained the graph of memetics papers shown in Figure 4.

Figure 4 – A graph of academic papers on memetics over nearly 20 years. (click to enlarge)

Now I must point out that this dramatic downturn in the graph is based solely upon a single data point at the very end of the data set, which is always a very serious warning signal in science, but I must admit that this paper did indeed appear in the very last issue of the journal, and that I have not found very much work being done in memetics past the year 2005 in academia. Additionally, I was very dismayed to read of the numerous, and very wrong-headed, objections to memetics that Susan Blackmore outlined in The Meme Machine, and that I also found on the Internet.

Deja Vu All Over Again
However, all is not lost. I have personally seen all this before. It all stems from the very conservative nature of meme-complexes, and their very great reluctance to adopt new memes that threaten the very existence of the entire meme-complex. This is especially true of scientific meme-complexes, and rightly so. Scientific meme-complexes always have to be on guard to prevent the latest crackpot idea from taking hold. But if you look to the history of science, the downside to all this is that nearly all of the great scientific breakthroughs were delayed by 10 – 50 years, patiently waiting for acceptance by a scientific meme-complex. That is why Thomas Kuhn found that scientific meme-complexes were so very reluctant to adopt paradigm shifts. Let me provide a personal example.

I finished my B.S. in Physics at the University of Illinois in 1973. However, early in my senior year, I unfortunately discovered that there was no future in physics in the United States. At the time, there were thousands of newly minted Ph.D. graduates, but only a handful even got a postdoc position in 1972. Most ended up doing other things, like waiting tables. This was just a precursor to America’s long and very sad march to becoming a country that has little confidence in science. Now I did happen to have a roommate who was a geology major, and he suggested that I try switching into geophysics to explore for oil for oil companies. Like all little boys, I had once had a rock collection, so I figured that being a geophysicist with a job was a lot better than being a waiter, and so I made the switch. When I got to the Department of Geology and Geophysics at the University of Wisconsin in the summer of 1973, I had many deficiencies in geology, having not taken a single course in geology for my entire undergraduate career, so I had to take many undergraduate courses in geology to make up for what I had missed.

These were very interesting times in geology because of the plate tectonics revolution of the late 1960s. The plate tectonics revolution was a dramatic paradigm shift for classical geology, and everything had to be rethought in light of the new model. However, the textbooks in 1973 that I studied had not had time to catch up with the radical new worldview of plate tectonics, and still contained the old classical geological models that now seemed rather silly in light of the new model of plate tectonics. So it was rather exciting to be a graduate student in the middle of a paradigm shift. Let me explain what had happened. By the early 1960s, geologists had done a marvelous job at figuring out what had happened over the past billion years of geological time, but they had done a truly miserable job at explaining why things had happened. By mapping outcrops and road cuts, geologists were able to see mountains rise from the sea over the course of tens of millions of years, only to be later eroded down to flat plains over the course of hundreds of millions of years, and they saw massive volcanic eruptions like the Deccan Traps covering 500,000 square miles of India to a depth of 6,000 feet, and there were the ever-present earthquakes and volcanoes to deal with too. But by the early 1960s, the geologists were stuck; they simply could not figure out what was going on. Then geophysics came to the rescue. Modern geophysics really got started after World War II, with the availability of lots of government war surplus gear for universities to buy on the cheap. With geophysics, we began to explore the Earth with things besides our eyes, ears, and hands. Yes, geologists will actually listen to rocks as they whack them with a field hammer. Geophysicists went further, shooting seismic waves into the Earth and mapping the variations of the Earth’s magnetic, electrical, and gravitational fields.
With these technologies, geophysicists were able to “see” below the Earth’s surface, and more importantly, to “see” under the Earth’s oceans. Now it turns out that most of the evidence for plate tectonics was under water at the spreading centers, like the Mid-Atlantic Ridge, and at subduction zones at the deep oceanic trenches. If the Earth had not had oceans, the geologists would have seen plate tectonics in action with their very own eyes, and would have figured it all out hundreds of years ago. Of course, if the Earth had had no oceans, there would not be any geologists here to bother with the problem in the first place!

Now the sad point to all this is that people like Alfred Wegener had accumulated enough data by 1912 to come up with an alternative to the classical geological models called Continental Drift that contained all of the essentials of plate tectonics. Continental Drift was just a little fuzzy on the exact mechanisms of plate tectonics, but it certainly deserved proper recognition by the geological community at the time and additional research efforts into its significance. But instead, Continental Drift was totally rejected by the geological establishment of the day, and poor Alfred Wegener was ridiculed for all his excellent efforts. So despite the fact that millions of grade-school children had commented to their geography teachers that South America seemed to fit together with Africa like two pieces of a puzzle, only to be told that it was just a funny coincidence, the geologists stubbornly stuck to their old models until irrefutable geophysical data forced them to change in the 1960s.

I have not spent a great deal of time investigating this, but I do seem to see these very same processes at work against memetics in academia on the Internet. Yes, memetics, like all new sciences, has to start out as a feeble candle in the darkness, like the theory of Continental Drift, but it holds so much promise! I don’t know if anybody in academia is still working on memetics, but if there are some young courageous souls out there with a bit of daring and flair, I would like to pass the following suggestions on to them. Below are some suggestions based upon softwarephysics that might help to clear up some of the confusions that academia seems to have with memetics. Yes, I know when you have a hammer, everything begins to look like a nail, but please bear with me.

Suggestions For A Research Program To Revive Memetics
1. Genes, memes, and software are all forms of self-replicating information, so the first thing you have to understand is the nature of information in physics. There seems to be a great deal of confusion about what exactly a meme is and what information is in the criticisms that I have seen of memetics. See The Demon of Software and Is Information Real? for more details.

2. Memes are the most difficult form of self-replicating information to understand because they are all tied up with the very messy “real world” of human affairs. In addition, there are many competing academic meme-complexes to deal with, like cultural anthropology, sociology, and psychology, all vying for the same academic turf, very much like the battle between the geophysicists and geologists over plate tectonics outlined above. Memetics needs some additional help from outside of these contentious domains by looking to the world of software. I have been traipsing through the world of software myself for more than 30 years, and I have yet to come across a fellow research party - not even a lonely lost graduate student! And I am quite sure there must be some out there; graduate students have a tremendous knack for getting lost before their thesis advisor sets them straight upon the right course, narrowly averting a major scientific breakthrough! But I can guarantee that this is a wide-open domain in academia with no competing scientific meme-complexes to deal with.

3. Yes, the genes and memes are both forms of self-replicating information, but if you only work with two forms of self-replicating information, people in the academic meme-complexes that you are about to invade will invariably complain that memes are not exactly the same as genes. Of course memes are not identical to genes; otherwise memes would be genes! Memes and genes are both just forms of self-replicating information, with some common characteristics, but naturally, with many differences as well. If you bring in a third replicator, like software, the differences between the three become quite evident, and so do their similarities.

4. Memes have much more in common with software than they do with genes. Both memes and software do not have to drag along clunky bodies or have sex to replicate. Like the memes, software is now nearly a pure form of self-replicating information with little physical ties to matter. In the 1950s and 1960s, IT people mainly worried about hardware and not software. The puny computers of the day had so little memory that programs had to be incredibly small to fit within them, and the hardware was not very reliable either, so IT people mostly spent their time dealing with hardware problems. But now hardware is as cheap as dirt, and IT people hardly ever even worry about it any longer. We only get annoyed when the incessant software-drive forces us to upgrade the hardware because we ran out of memory and processor speed again. But that only happens every few years. Still, the major headache with a hardware upgrade is getting all the software installed and operational again; the new hardware itself you simply plug into an electrical socket and connect with a few cables.

To make software an even purer form of information, we now use virtual machines to run software. For example, nearly all of the software I support runs in JVMs (Java Virtual Machines) on Unix servers. This means the compiled Java programs run in a computer-simulated computer of their own called a JVM, which has its own virtual registers, and its own virtual heap and stack memory too. A typical Unix server might run about 100 simulated JVM computers. Worse yet, even the Unix servers are not real! The Unix servers are virtual servers created from a frame containing many CPUs in a box and are connected to a SAN (Storage Area Network) of disk drives. So when you use a website, all of this software is spread out over several hundreds to several thousands of these virtual servers. Consequently, we now have software running on software running on software running on software... So for most IT people and end-users, it’s all just a huge virtual cyberspacetime continuum of pure information floating in what IT people now call the Cloud.
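The idea of a computer simulated entirely in software is easy to demonstrate. Below is a toy stack-based virtual machine written in Python, a minimal sketch of the concept (nothing like a real JVM, which is far more elaborate, but the same idea: one layer of software pretending to be hardware for the layer above it):

```python
# A toy stack-based virtual machine: a computer simulated entirely in software.
def run(program):
    """Execute a list of (opcode, arg) instructions on a virtual stack machine."""
    stack = []  # the VM's simulated stack memory
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()  # the result is left on top of the stack

# Compute (2 + 3) * 4 on the virtual machine.
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
print(run(program))  # prints 20
```

And of course this little virtual machine itself runs on the Python interpreter, which runs on an operating system, which may well run on a virtual server: software running on software running on software.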

5. Software is the most recent form of self-replicating information and is the only form of self-replicating information that we have a good history of. Unlike the genes and memes, we have a well documented history of the arrival of software upon the scene and its evolutionary history as well. I am now 61 years old, and therefore old enough to remember back to my early childhood when there essentially was no software whatsoever. Now software is virtually everywhere, and is quickly becoming the dominant form of self-replicating information on the planet. Therefore, software makes a much better model for memes than do the genes because we have seen all of this happen within our very lifetimes. See How Software Evolves.

6. Memetics needs to take a cue from physics and adopt a positivistic approach to the subject using effective theories. Positivism is an enhanced form of empiricism, in which we do not care about how things “really” are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself. Effective theories are an extension of positivism. An effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides for a certain depth of understanding of the problem at hand. For example, Newtonian mechanics works very well for objects moving in weak gravitational fields at less than 10% of the speed of light and which are larger than a very small mote of dust. For things moving at high velocities or in strong gravitational fields we must use relativity theory, and for very small things like atoms we must use quantum mechanics. All of the current theories of physics, such as Newtonian mechanics, classical electrodynamics, thermodynamics, statistical mechanics, the special and general theories of relativity, quantum mechanics, and quantum field theories like QED and QCD are just effective theories that are based upon models of reality, and all these models are approximations - all these models are fundamentally “wrong”, but at the same time, these effective theories make exceedingly good predictions of the behavior of physical systems over the limited ranges in which they apply, and that is all positivism hopes to achieve.

So do not worry about whether or not memes “really” exist. For a physicist, if things behave as if memes exist, that is plenty good enough to form the basis for an effective theory. For example, do atoms really exist? Certainly not in the sense that most people would recognize as being “real”. In quantum field theory an atom is just a collection of electron and quark fields that extend over the entire Universe, but that are much stronger in the vicinity of the supposed atom, and that are held together by virtual photons and virtual gluons. When you interact with the quantum fields of an atom by colliding an electron or some other particle with it in order to make a measurement, you will find that the measurement allows you to narrow down the location of the constituent electrons, protons, and neutrons and to get some idea of their momentum as well, but you will never be able to determine the exact location and momentum of any particular particle within an atom because of the Heisenberg Uncertainty Principle. To take this analysis to the level of bulk matter, consider a 200 pound man. According to the best available effective theories of physics, the man really consists of mostly empty space and about 100 pounds of protons, 100 pounds of neutrons, and 0.87 ounces of electrons. A proton, consisting of two up quarks and one down quark, has a mass of 938.27 MeV. Similarly, a neutron, consisting of one up quark and two down quarks, is slightly more massive with a mass of 939.56 MeV. But the up and down quarks themselves are surprisingly quite light. The up quark is now thought to have a mass with an upper limit of only 4 MeV, and the down quark is thought to have a mass with an upper limit of only 8 MeV. So a proton should have a mass of about 16 MeV instead of 938.27 MeV, and the neutron should have a mass of about 20 MeV instead of 939.56 MeV. Where does all this extra mass come from?
It comes from the kinetic and binding energy of the virtual gluons that hold the up and down quarks together to form protons and neutrons! Remember that energy can add to the mass of an object via E = mc² by simply rearranging the formula as m = E/c². So going back to our fictional 200 pound man, consisting of 100 pounds of protons, 100 pounds of neutrons, and 0.87 ounces of electrons, we can now say that the man actually consists of 0.87 ounces of electrons, 1.28 pounds of up quarks, 2.56 pounds of down quarks, and 196.10 pounds of pure energy! Strangely, everything you see, hear, smell, taste, and feel results from the interactions of less than one ounce of those electrons. And all of the biochemical reactions that keep you alive and even your thoughts at this very moment are all accomplished with this small mass of electrons! According to our best effective theory on the subject, the quantum field theory of QED (1948), when you push your hand down on a table, the electrons in the table and your hand begin to exchange virtual photons that carry the electromagnetic force. This force gives you the illusion that the table and your hand are solid objects, when, according to our best effective theories, the table and your hand consist mostly of empty space and pure energy with a thin haze of surrounding electrons. When you look at the man, table, or anything else your Mind is simply creating the illusion of their existence, as ambient photons in the room scatter off the thin haze of electrons surrounding the objects – the objects themselves are mainly composed of pure energy. So do not worry about memes being “real”. Stephen Hawking certainly does not worry about electrons being “real”. For more on this see Introduction to Softwarephysics and Model-Dependent Realism - A Positivistic Approach to Realism.
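This bookkeeping is easy to check for yourself. The short Python sketch below reproduces the figures for the 200 pound man from the quark and nucleon masses quoted above (treating the quoted upper-limit quark masses as actual masses; a proton is uud, a neutron is udd):

```python
# Quark masses (upper limits, in MeV) and nucleon masses (in MeV) as quoted in the text.
M_UP, M_DOWN = 4.0, 8.0
M_PROTON, M_NEUTRON = 938.27, 939.56

lb_protons = 100.0          # pounds of protons in the 200-pound man
lb_neutrons = 100.0         # pounds of neutrons
lb_electrons = 0.87 / 16.0  # 0.87 ounces expressed in pounds

# A proton (uud) has 2 up + 1 down quark; a neutron (udd) has 1 up + 2 down.
lb_up = lb_protons * (2 * M_UP) / M_PROTON + lb_neutrons * (1 * M_UP) / M_NEUTRON
lb_down = lb_protons * (1 * M_DOWN) / M_PROTON + lb_neutrons * (2 * M_DOWN) / M_NEUTRON

# Whatever is not quark or electron rest mass must be gluon kinetic and binding energy.
lb_energy = 200.0 - lb_up - lb_down - lb_electrons

print(f"up quarks:   {lb_up:.2f} lb")      # about 1.28 lb
print(f"down quarks: {lb_down:.2f} lb")    # about 2.56 lb
print(f"pure energy: {lb_energy:.2f} lb")  # about 196.1 lb
```

Depending on rounding, the energy figure comes out at about 196.1 pounds, matching the 196.10 pounds quoted above.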

7. The best way to achieve this is to do some field work in the IT department of a major corporation. See A Proposal For All Practicing Paleontologists for additional advice. Yes, this might be a little scary. It would be best to approach this like a 21st century Margaret Mead doing fieldwork amongst IT professionals in the wild. The good news is that IT people are notoriously straightforward and will certainly not try to trick you like the inhabitants of Samoa. However, you should not tell them that you are trying to revive memetics using concepts from softwarephysics. If you do, they will certainly not burn you at the stake, but your research program will indeed go up in smoke. Instead, tell them you are doing a study comparing Agile development techniques to traditional development techniques. The IT department of the corporation in question will be using one or the other, or possibly both, and will probably get quite excited over your proposal because Agile development is one of the latest IT memes – a nicer way of saying “fads”. You can easily Google what Agile development is all about. Below is the first paragraph from Wikipedia:

Agile software development is a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. It promotes adaptive planning, evolutionary development and delivery, a time-boxed iterative approach, and encourages rapid and flexible response to change. It is a conceptual framework that promotes foreseen interactions throughout the development cycle.

It certainly is vague enough to let you easily observe IT people in the field and talk to them about things like evolutionary development without drawing too much attention to yourselves.

I could go on, but this posting is way too long already. If anybody is interested, I would be glad to help with this endeavor.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
http://softwarephysics.blogspot.com/

Regards,
Steve Johnston