Last time we explored the macroscopic properties of software behavior using the thermodynamics of steam engines as a guide. We saw that in thermodynamics there is a mysterious fluid at work called entropy, which measures how run down a system is. You can think of entropy as depreciation; it always increases with time and never decreases. As I mentioned previously, the second law of thermodynamics can be expressed in many forms. Another way to state it is:
dS/dt > 0
This is a simple differential equation that states that the entropy S of an isolated system always increases with time for irreversible processes. All of the other effective theories of physics can also be expressed as differential equations with time as a factor. However, all of these other differential equations have a "=" sign in them, meaning that the interactions can be run equally forwards or backwards in time. For example, a movie of two billiard balls colliding can be run forwards or backwards in time, and you cannot tell which is which. Such a process is called a reversible process because it can be run backwards to return the Universe to its original state, like backing out a bad software install for a website. But if you watch a movie of a cup falling off a table and breaking into a million pieces, you can easily tell which is which. For all the other effective theories of physics, a broken cup can spontaneously jump back onto a table and reassemble itself into a whole cup if you give the broken shards the proper energy kick. This would clearly be a violation of the second law of thermodynamics because the entropy of the cup fragments would spontaneously decrease. The second law of thermodynamics is the only effective theory in physics that has a ">" sign in it, which many physicists consider to be the arrow of time. With the second law of thermodynamics, you can easily tell the difference between the past and the future because the future will always contain more entropy or disorder.
Energy too was seen to be subject to the second law of thermodynamics and subject to entropy increases as well. We saw that a steam engine can convert the energy in high temperature steam into useful mechanical energy by dumping a portion of the steam energy into a reservoir at a lower temperature. Because not all of the energy in steam can be converted into useful mechanical energy, steam engines, like all other heat engines, can never be 100% efficient. We also saw that software too tends to increase in entropy whenever maintenance is performed upon it. Software tends to depreciate by accumulating bugs.
The Second Law of Thermodynamics
The second law does not mean that we can never create anything of value; it just puts some limits on the process. Whenever we create something of value, like a piece of software or pig iron, the entropy of the entire Universe must always increase. For example, a piece of pig iron can be obtained by heating iron ore with coke in a blast furnace, but if you add up the entropy decrease of the pig iron with the entropy increase of degrading the chemical energy of the coke into heat energy, you will find that the net amount of entropy in the Universe has increased. The same goes with software. It is possible to write perfect bug-free software by degrading chemical energy in a programmer’s brain into heat and allowing the heat to cool off to room temperature. A programmer on a 2400 calorie diet (2400 kcal/day) produces about 100 watts of heat sitting at her desk and about 20 – 30 watts of that heat comes from the brain. So the next time your peers comment that you are as dim-witted as a 40 watt light bulb after a code review, please be sure to take it as a compliment!
Kinetic Theory of Gases and Statistical Mechanics
This time we will drill down deeper and use another couple of effective theories from physics – the kinetic theory of gases and statistical mechanics. We will see that both of these effective theories can be related to software source code at the line of code level. Recall that in 1738 Bernoulli proposed that gases are really composed of a very large number of molecules bouncing around in all directions. Gas pressure in a cylinder was simply the result of a huge number of molecular impacts from individual gas molecules striking the walls of the cylinder, and heat was just a measure of the kinetic energy of the molecules bouncing around in the cylinder. Bernoulli’s kinetic theory of gases was not well received by the physicists of the day because many physicists in the 18th and 19th centuries did not believe in atoms or molecules. The idea of the atom goes back to the Ancient Greeks, Leucippus and his student Democritus, about 450 B.C., and was formalized by John Dalton in 1803, when he showed that in chemical reactions the elements always combine in fixed proportions by weight, with those proportions related by ratios of small whole numbers. But many physicists had problems with thinking of matter as being composed of a “void” filled with “atoms” because that meant they had to worry about the forces that kept the atoms together and that repelled atoms apart when objects “touched”. To avoid this issue, many physicists simply considered “atoms” to be purely a mathematical trick used by chemists to do chemistry. This view held sway until 1897, when J. J. Thomson successfully isolated electrons in a cathode ray beam by deflecting them with electric and magnetic fields in a vacuum. It seems that physicists don’t have a high level of confidence in things until they can bust them up into smaller things.
Now imagine a container consisting of two compartments. We fill the left compartment with pure oxygen gas molecules (white dots) and the right compartment with pure nitrogen gas molecules (black dots). If we now perforate the partition between the two compartments, molecules of each gas will wander through the opening until both gases are thoroughly and uniformly mixed.
This is an example of a spatial entropy increase. The reverse process, that of a mixture of oxygen and nitrogen spontaneously separating into one compartment of pure oxygen and another compartment of pure nitrogen is never observed to occur. Such a process would be a violation of the second law of thermodynamics.
In 1859, physicist James Clerk Maxwell took Bernoulli’s idea one step further. He combined Bernoulli’s idea of a gas being composed of a large number of molecules with the new mathematics of statistics. Maxwell reasoned that the molecules in a gas would not all have the same velocities. Instead, there would be a distribution of velocities; some molecules would move very quickly while others would move more slowly, with most molecules having a velocity around some average velocity. Now imagine that the two preceding compartments (see Figure 1) are filled with nitrogen gas, but that this time the left compartment is filled with cold slow moving nitrogen molecules (white dots), while the right compartment is filled with hot fast moving nitrogen molecules (black dots). If we again perforate the partition between compartments, as in Figure 2 above, we will observe that the fast moving hot molecules on the right will mix with and collide with the slow moving cold molecules on the left and will give up kinetic energy to the slow moving molecules. Eventually both containers will be found to be at the same temperature (see Figure 3), but we will always find some molecules moving faster than the average (black dots), and some molecules moving slower than the average (white dots) just as Maxwell had determined. This is called a state of thermal equilibrium and demonstrates a thermal entropy increase. Just as with the previous example, we never observe a gas in thermal equilibrium suddenly dividing itself into hot and cold segments (the gas can go from Figure 2 to Figure 3 but never the reverse). Such a process would also be a violation of the second law of thermodynamics.
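If you would like to play with this mixing process yourself, here is a minimal toy simulation in Python (my own sketch; the molecule counts, step count, and random seed are arbitrary choices of mine, not anything from kinetic theory proper):

```python
import random

# Toy model of the two perforated compartments: 500 "oxygen" molecules
# start on the left ("L"), 500 "nitrogen" molecules start on the right
# ("R").  At each step a randomly chosen molecule wanders through the
# opening to the other side.
rng = random.Random(0)
side = ["L"] * 500 + ["R"] * 500   # the first 500 entries are the oxygen

for _ in range(20_000):
    m = rng.randrange(1000)
    side[m] = "R" if side[m] == "L" else "L"

# After many random hops, roughly half the oxygen is on each side.
oxygen_left = side[:500].count("L")
print(oxygen_left)
```

Waiting for the reverse process, with every oxygen molecule wandering back to the left at the same moment, would mean hitting one particular arrangement with probability on the order of (1/2)^1000 per look, which is why we never observe it.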
In 1867, Maxwell proposed a paradox along these lines known as Maxwell’s Demon. Imagine that we place a small demon at the opening between the two compartments and install a small trap door at this location. We instruct the demon to open the trap door whenever he sees a fast moving molecule in the left compartment approach the opening to allow the fast moving molecule to enter the right compartment. Similarly, when he sees a slow moving molecule from the right compartment approach the opening, he opens the trap door to allow the low temperature molecule to enter the left compartment. After some period of time, we will find that all of the fast moving high temperature molecules are in the right compartment and all of the slow moving low temperature molecules are in the left compartment. Thus the left compartment will become colder and the right compartment will become hotter in violation of the second law of thermodynamics (the gas would go from Figure 3 to Figure 2 above). With the aid of such a demon, we could run a heat engine between the two compartments to extract mechanical energy from the right compartment containing the hot gas as we dumped heat into the colder left compartment. This really bothered Maxwell, and he never found a satisfactory solution to his paradox. This paradox also did not help 19th century physicists become more comfortable with the idea of atoms and molecules.
Beginning in 1866, Ludwig Boltzmann began work to extend Maxwell’s statistical approach. Boltzmann’s goal was to be able to explain all the macroscopic thermodynamic properties of bulk matter in terms of the statistical analysis of microstates. Boltzmann proposed that the molecules in a gas occupied a very large number of possible energy states called microstates, and for any particular energy level of a gas there were a huge number of possible microstates producing the same macroscopic energy. The probability that the gas was in any one particular microstate was assumed to be the same for all microstates. In 1872, Boltzmann was able to relate the thermodynamic concept of entropy to the number of these microstates with the formula:
S = k ln(N)
S = entropy
N = number of microstates
k = Boltzmann’s constant
These ideas laid the foundations of statistical mechanics.
The Physics of Poker
Boltzmann’s logic might be a little hard to follow, so let’s use an example to provide some insight by delving into the physics of poker. For this example, we will bend the formal rules of poker a bit. In this version of poker, you are dealt 5 cards as usual. The normal rank of the poker hands still holds and is listed below. However, in this version of poker all hands of a similar rank are considered to be equal. Thus a full house consisting of a Q-Q-Q-9-9 is considered to be equal to a full house consisting of a 6-6-6-2-2 and both hands beat any flush. We will think of the rank of a poker hand as a macrostate. For example, we might be dealt 5 cards, J-J-J-3-6, and end up with the macrostate of three of a kind. The particular J-J-J-3-6 that we hold, including the suit of each card, would be considered a microstate. Thus for any particular rank of hand or macrostate, such as three of a kind, we would find a number of microstates. For example, for the macrostate of three of a kind there are 54,912 possible microstates or hands that constitute the macrostate of three of a kind.
Rank of Poker Hands
Royal Flush - A-K-Q-J-10 all the same suit
Straight Flush - All five cards are of the same suit and in sequence
Four of a Kind - Such as 7-7-7-7-2
Full House - Three cards of one rank and two cards of another such as K-K-K-4-4
Flush - Five cards of the same suit, but not in sequence
Straight - Five cards in sequence, but not same suit
Three of a Kind - Such as 5-5-5-7-3
Two Pair - Such as Q-Q-7-7-4
One Pair - Such as Q-Q-3-J-10
Next we create a table using Boltzmann’s equation to calculate the entropy of each hand. For this example, we set Boltzmann’s constant k = 1, since k is just a “fudge factor” used to get the units of entropy using Boltzmann’s equation to come out to those used by the thermodynamic formulas of entropy.
Thus for three of a kind where N = 54,912 possible microstates or hands:
S = ln(N)
S = ln(54,912) = 10.9134872
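You can verify both the microstate count and the entropy with a few lines of Python (a sketch of mine, not part of the original derivation):

```python
from math import comb, log

# Three of a kind: pick the rank of the triple (13 ways), 3 of its 4
# suits, then 2 different kicker ranks and one suit for each kicker.
n_three_kind = comb(13, 1) * comb(4, 3) * comb(12, 2) * 4 * 4
print(n_three_kind)      # 54912 microstates

S = log(n_three_kind)    # entropy with Boltzmann's constant k = 1
print(S)                 # ~10.9135
```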
| Hand | Number of Microstates N | Probability | Entropy = ln(N) | Information Change = Initial Entropy - Final Entropy |
| --- | --- | --- | --- | --- |
| Royal Flush | 4 | 1.54 x 10^-6 | 1.3862944 | 13.3843291 |
| Straight Flush | 40 | 1.50 x 10^-5 | 3.6888795 | 11.0817440 |
| Four of a Kind | 624 | 2.40 x 10^-4 | 6.4361504 | 8.3344731 |
| Full House | 3,744 | 1.44 x 10^-3 | 8.2279098 | 6.5427136 |
| Flush | 5,108 | 2.00 x 10^-3 | 8.5385632 | 6.2320602 |
| Three of a Kind | 54,912 | 2.11 x 10^-2 | 10.9134872 | 3.8571363 |
| Two Pairs | 123,552 | 4.75 x 10^-2 | 11.7244174 | 3.0462061 |
| Pair | 1,098,240 | 4.23 x 10^-1 | 13.9092195 | 0.8614040 |
| High Card | 1,302,540 | 5.01 x 10^-1 | 14.0798268 | 0.6907967 |
Examine the above table. Note that higher ranked hands have more order, less entropy, and are less probable than the lower ranked hands. For example, a straight flush with all cards the same color, same suit, and in numerical order has an entropy = 3.6889, while a pair with two cards of the same value has an entropy = 13.909. A hand that is a straight flush appears more orderly than a hand that contains only a pair and is certainly less probable. A pair is more probable than a straight flush because there are more microstates that produce the macrostate of a pair (1,098,240) than there are microstates that produce the macrostate of a straight flush (40). In general, probable things have lots of entropy and disorder, while improbable things, like perfectly bug-free software, have little entropy or disorder. In thermodynamics, entropy is a measure of the depreciation of a macroscopic system like how well mixed two gases are, while in statistical mechanics entropy is a measure of the microscopic disorder of a system, like the microscopic mixing of gas molecules. A pure container of oxygen gas will mix with a pure container of nitrogen gas because there are more arrangements or microstates for the mixture of the oxygen and nitrogen molecules than there are arrangements or microstates for one container of pure oxygen and the other of pure nitrogen molecules. In statistical mechanics, a neat room tends to degenerate into a messy room and increase in entropy because there are more ways to mess up a room than there are ways to tidy it up.
In statistical mechanics, the second law of thermodynamics results because systems with lots of entropy and disorder are more probable than systems with little entropy or disorder, so entropy naturally tends to increase with time.
Information and the Solution to Maxwell’s Demon
As an IT professional you deal with information all day long, but have you ever stopped to think what information is? As a start let’s begin with a simplistic definition.
Information – Something you know
For nearly 100 years physicists struggled with Maxwell’s Demon. Finally in 1950 Leon Brillouin used quantum mechanics to show that Maxwell’s Demon required information to tell if a molecule was moving slowly or quickly. Brillouin defined information as negative entropy and found that information about the velocities of the oncoming molecules could only be obtained by the demon by bouncing photons off the moving molecules. Bouncing photons off the molecules increased the total entropy of the entire system whenever the demon determined if a molecule was moving slowly or quickly. So Maxwell's Demon was really not a paradox after all, since even the Demon could not violate the second law of thermodynamics.
For Brillouin entropy was a lack of information about a system; a measure of ignorance. Brillouin proposed that information is the elimination of microstates that a system can be found to exist in. From the above analysis, information is then the difference between the initial and final entropies of a system after a determination about the system has been made.
Information = Si - Sf
Si = initial entropy
Sf = final entropy
Going back to our poker example, let’s compute the amount of information you convey when you tell your opponent what hand you hold. When you tell your opponent that you have a straight flush, you eliminate more microstates than when you tell him that you have a pair, so telling him that you have a straight flush conveys more information than telling him you hold a pair. For example, there are a total of 2,598,960 possible poker hands or microstates for a 5 card hand, but only 40 hands or microstates constitute the macrostate of a straight flush.
Straight Flush Information = Si – Sf = ln(2,598,960) – ln(40) = 11.082
For a pair we get:
Pair Information = Si – Sf = ln(2,598,960) – ln(1,098,240) = 0.8614040
When you tell your opponent that you have a straight flush you deliver 11.082 units of information, while when you tell him that you have a pair you only deliver 0.8614040 units of information. Clearly when your opponent knows that you have a straight flush, he knows more about your hand than if you tell him that you have a pair.
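The same arithmetic can be wrapped in a small helper. Here is a hypothetical Python sketch (the function name `hand_information` is my own invention):

```python
from math import comb, log

TOTAL_HANDS = comb(52, 5)   # 2,598,960 possible 5-card hands

def hand_information(n_microstates):
    # Brillouin: information = initial entropy - final entropy, with k = 1
    return log(TOTAL_HANDS) - log(n_microstates)

print(TOTAL_HANDS)                            # 2598960
print(round(hand_information(40), 3))         # straight flush: 11.082
print(round(hand_information(1_098_240), 4))  # pair: 0.8614
```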
Application to Software
Software (source code, config files, lookup tables, etc.) exists as a set of bytes. Each byte can be in one of 256 microstates if we allow for all possible ASCII states of a byte. Some programmers might object that we do not use all 256 possible ASCII characters in programs, but I am going for the most general case here. Since there are hundreds of programming languages, all using a variety of different character sets, the approximation of using all 256 possible ASCII characters is not too bad. For example, there actually is a programming language called Whitespace that only uses non-displayed characters such as spaces, tabs, and newlines for its source code character set. Such programs appear as blank whitespace in a normal editor, adding an extra layer of source code security. More on Whitespace is available at:
Now consider a program that is M bytes long. M bytes of software can be in N = 256M microstates, which will be a very large number for any M of appreciable size. That means there are 256M versions of a program that is M bytes long. Consider a medium size program of 30,000 bytes. The number of versions or microstates of a 30,000 byte program are:
N = 256^30,000 = 158 x 10^72,245
That is a 158 with 72,245 zeroes behind it! Unfortunately, nearly all of these potential programs will just be a meaningless jumble of characters. In this huge mix, you will also find car ads, the wedding invitations for every couple ever married in the western world, and the Gettysburg Address. If we narrow our search down in the mix to just the true programs, we will find all 30,000 byte programs that ever have been written, or ever will be written, in every programming language that ever has been or ever will be devised using the ASCII character set. Your job as a programmer is to find one of the small set of 30,000 byte programs in the mix that performs the task you desire with a decent response time.
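You can check the size of this haystack yourself with a quick Python sketch of mine (note that 158 x 10^72,245 is the same number as 1.58 x 10^72,247, which written out has 72,248 digits):

```python
from math import log10

M = 30_000  # program size in bytes

# Digit count of 256**M without forming the full integer:
digits_estimate = int(M * log10(256)) + 1

# Python's arbitrary-precision integers can also do it exactly:
digits_exact = len(str(256 ** M))

print(digits_estimate, digits_exact)  # both come out to 72248
```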
Later we will explore the biological aspects of softwarephysics, but to skip ahead and borrow a biological concept now, we can consider the 158 x 10^72,245 possible versions of a 30,000 byte program as the DNA Landscape of the program. DNA stores information in sequences of four nucleotides, or base pairs, abbreviated as A, C, T, and G. Each base pair in a living thing can be considered a biological bit and can exist in one of the four states A, C, T, or G. Note that living things use base 4 arithmetic as opposed to the binary or base 2 arithmetic common to all of today's computing devices. Today's computers use bits that can be in one of two states "1" or "0". A simple bacterium, like E. coli, contains about 4 million base pairs. Thus the DNA Landscape of E. coli is the set of all DNA sequences that can be made with 4 million base pairs.
N = 4^4,000,000
N = 1.0 x 10^2,408,240
which is a 1 with 2,408,240 zeroes behind it. Just as with the Landscape of our 30,000 byte program, nearly all of these DNA sequences lead to a dead bacterium that does not work.
The entropy of a piece of software can be measured using the same simplified Boltzmann’s equation with k = 1 that we used for poker hands.
S = ln(N)
N = Number of microstates
The entropy of all possible programs of length M bytes is:
S = ln(256^M) = M ln(256) = 5.5452 M
For example, the entropy of all possible 30,000 byte programs comes to
S = 5.5452 M = (5.5452) (30,000) = 166,356
Now of the 158 x 10^72,245 possible 30,000 byte programs, we know that only an infinitesimal number of them will provide the desired functionality with a decent response time. Since 158 x 10^72,245 is such a huge number, we might as well make the simplifying approximation that there is only one of the possible programs that does the job. Of course this is not true, but even if there were a billion billion suitable programs in the mix they would pale to insignificance relative to 158 x 10^72,245.
So if we assume that there is only one correct version of the program, we can calculate its information content if we remember that the ln(1) = 0:
Information of program = Si - Sf = ln(N) - ln(1) = ln(N)
Information of program = ln(N) = ln(256^M) = M ln(256)
Information of program = 5.5452 M
Since the number of correct versions of a program is always much less than the number of possible versions, our approximation that there is only one correct version of a program is not too bad. For configuration files, frequently there only is one correct version of a file. Notice in the above formula that the information content of a piece of software is proportional to the number of bytes M in the file. This prediction makes intuitive sense.
For our 30,000 byte program the information content comes to:
Information in 30,000 bytes = ln(N) = ln(256^30,000)
= 30,000 ln(256) = (5.5452) (30,000)
Information = 166,356
Thus a 30,000 byte program contains 166,356 units of information, which is a huge amount of information when you compare it to the information in a straight flush that weighs in with only 11.082 units of information. It turns out that a mere 2 bytes of correct software conveys 11.090 units of information, which is about that of a straight flush. That means the odds of getting two bytes of software correct by sheer chance are about the same as drawing a straight flush in poker! This is why programming is so difficult.
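Both of these figures follow directly from Information = M ln(256), as a short Python sketch of mine shows. (At full precision the 30,000 byte figure comes out to 166,355, a hair under the 166,356 obtained above with the rounded constant 5.5452.)

```python
from math import log

def file_information(m_bytes):
    # Information (k = 1) of the single "correct" m-byte file:
    # S_initial - S_final = ln(256**m) - ln(1) = m * ln(256)
    return m_bytes * log(256)

print(round(file_information(30_000)))  # 166355 at full precision
print(round(file_information(2), 3))    # 11.09 - about one straight flush
```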
When you write a program or other piece of software, you are creating information by eliminating microstates that a file could exist in. Essentially you eliminate all of the programs that do not work, like a sculptor who creates a statue from a block of marble by removing the marble that is not part of the statue. The number of possible microstates of a file that constitute the macrostate of being a "correct" version of a program is quite small relative to the vast number of "buggy" versions or microstates that do not work, and consequently has a very low level of entropy and correspondingly a very large amount of information. According to the second law of thermodynamics, software will naturally seek a state of maximum disorder (entropy) whenever you work on software because there are many more “buggy” versions of a file than there are “correct” versions of a file. Large chunks of software will always contain a small number of residual bugs no matter how much testing is performed. Continued maintenance of software tends to add functionality and more residual bugs. Either the second law of thermodynamics directly applies to software, or we have created a very good computer simulation of the second law of thermodynamics - the effects are the same in either case.
In Brillouin’s reformulation of the second law of thermodynamics:
dS/dt > 0
dI/dt < 0
which means that the total amount of information in the Universe must always decrease with time.
Whenever you do something, like work on a piece of software, the total amount of disorder in the Universe must increase, the total amount of information must decrease, and the most likely candidate is the software you are working on. Truly a sobering thought for any programmer. It seems that the Universe is constantly destroying information. That’s how the Universe tells time!
Entropy, Information, and Black Holes
Now I must make a point of distinction here. In future postings, we will explore the theories of special relativity (1905), general relativity (1915), and quantum mechanics (1926), and see that the concept of information plays a fundamental role in all. For the remainder of my postings on softwarephysics, I will be exclusively using Leon Brillouin’s concept of information because I believe it is the most suitable for IT, but in physics there is another formulation for the concept of information besides the one just described. In this other formulation of information, entropy itself is considered to be a form of information – hidden information. For example, suppose you are dealt a full house, K-K-K-4-4, but at the last moment a misdeal is declared and your K-K-K-4-4 gets shuffled back into the deck! In this other formulation of information, the K-K-K-4-4 still exists as scrambled hidden information in the entropy of the entire deck, and so long as the shuffling process can be reversed, the K-K-K-4-4 can be recovered, and no information is lost. Since all the current laws of physics are reversible, including quantum mechanics, we should never see information being destroyed. In other words, because entropy always increases and never decreases, the hidden information of entropy cannot be destroyed.
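This notion of reversible scrambling is easy to demonstrate. In the hypothetical Python sketch below, a shuffle is just a permutation of 52 positions; as long as we remember the permutation, applying its inverse restores the deck, so the scrambled K-K-K-4-4 was hidden but never destroyed:

```python
import random

deck = list(range(52))   # cards 0..51; pretend positions 0-4 are K-K-K-4-4

# The "misdeal" shuffle is a recorded, and therefore reversible, permutation.
rng = random.Random(7)
perm = list(range(52))
rng.shuffle(perm)
shuffled = [deck[p] for p in perm]

# Invert the permutation to unshuffle the deck.
inverse = [0] * 52
for i, p in enumerate(perm):
    inverse[p] = i
recovered = [shuffled[i] for i in inverse]

print(recovered == deck)  # True: the hidden hand is fully recovered
```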
In recent years, this has caused quite a battle in physics as outlined in Leonard Susskind’s The Black Hole War – My Battle With Stephen Hawking To Make The World Safe For Quantum Mechanics (2008). It all began in 1972, when Jacob Bekenstein suggested that black holes must have entropy. A black hole is a mass that is so concentrated that its surrounding gravitational field will allow nothing to escape, not even light. At the time, it was thought that a black hole only had three properties: mass, electrical charge, and angular momentum – a measure of its spin. Now imagine two identical black holes. Into the first black hole we start dropping a large number of card decks fresh from the manufacturer that are in perfect sort order. Into the second black hole we drop a similar number of shuffled card decks. When we are finished, both black holes will have increased in mass by exactly the same amount, and will remain identical because there will be no change to their electrical charge or angular momentum either. The problem is that the second black hole picked up much more entropy by absorbing all those shuffled card decks than the first black hole, which absorbed only fresh card decks in perfect sort order. If the two black holes truly remain identical after absorbing different amounts of entropy, then we would have a violation of the second law of thermodynamics because some entropy obviously disappeared from the Universe. Bekenstein proposed that in order to preserve the second law of thermodynamics, black holes must have a fourth property – entropy. Since the only thing that changes when a black hole absorbs decks of cards, or anything else, is the radius of the black hole’s event horizon, then the entropy of a black hole must be proportional to the area of its event horizon. The event horizon of a black hole essentially defines the size of a black hole. 
At the heart of a black hole is a singularity, a point-sized pinch in spacetime with infinite density, where all the current laws of physics break down. Surrounding the singularity is a spherical event horizon. The black hole essentially sucks spacetime down into its singularity with increasing speed as you approach the singularity, and the event horizon is simply where spacetime is being sucked down into the black hole at the speed of light. Because nothing can travel faster than the speed of light, nothing can escape from within the event horizon of a black hole because everything within the event horizon is carried along by the spacetime being sucked down into the singularity faster than the speed of light.
In Bekenstein’s model, the event horizon of a black hole is densely packed with bits, or pixels, of information to account for all the information, or entropy, that has fallen past the black hole’s event horizon. Like in a computer, the pixelated bits on the event horizon of a black hole can be in a state of “1” or “0”. Each pixel is one Planck unit in area, about 10^-70 square meters. In Quantum Software we will learn how Max Planck started off quantum mechanics in 1900 with the discovery of Planck’s constant h = 4.136 x 10^-15 eV sec. Later, Planck proposed that, rather than using arbitrary units of measure like meters, kilograms, and seconds that were simply thought up by certain human beings, we should use the fundamental constants of nature - c (the speed of light), G (Newton’s gravitational constant), and h (Planck’s constant) - to define the basic units of length, mass, and time. When you combine c, G, and h into a formula that produces a unit of length, called the Planck length, it comes out to about 10^-35 meters, so a Planck unit of area is the square of a Planck length, or about 10^-70 square meters. Now a Planck length is a very small distance indeed, about 10^25 times smaller than an atom, so a square Planck length is a very small area, which means the data density of black holes is quite large; in fact, black holes have the maximum data density allowed in the Universe. They would make great disk drives!
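The Planck length and Planck area quoted above can be reproduced from the three constants. Here is a Python sketch using modern SI values that I am supplying myself; note that the usual modern formula uses the reduced constant ħ = h/2π rather than h itself:

```python
from math import pi, sqrt

h = 6.626e-34        # Planck's constant, J s
hbar = h / (2 * pi)  # reduced Planck's constant
G = 6.674e-11        # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

planck_length = sqrt(hbar * G / c**3)
planck_area = planck_length ** 2

print(planck_length)  # ~1.6e-35 m
print(planck_area)    # ~2.6e-70 m^2, the "Planck unit of area" above
```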
In 1974, Stephen Hawking calculated the exact formula for the entropy of a black hole’s event horizon, and also determined that black holes must have a temperature and, consequently, radiate energy into space. Recall that entropy was originally defined in terms of heat flow. This meant that because black holes are constantly losing energy by radiating Hawking radiation, they must be slowly evaporating and will one day totally disappear in a brilliant flash of light. This would take a very long time; for example, a black hole with a mass equal to that of the Sun would evaporate in about 2 x 10^67 years. But what happens to all the information or entropy that black holes absorb over their lifetimes? Does it disappear from the Universe as well? This is exactly what Hawking and the other relativists believed – information, or entropy, could be destroyed as black holes evaporated.
This idea was quite repugnant to people like Leonard Susskind and other string theorists grounded in quantum mechanics. In future postings, we shall see that the general theory of relativity is a very accurate effective theory that makes very accurate predictions for large masses separated by large distances, but does not work very well for small objects separated by small atomic-sized distances. Quantum mechanics is just the opposite; it works for very small objects separated by very small distances, but does not work well for large masses at large distances. Physics has been trying to combine the two into a theory of quantum gravity for nearly 80 years to no avail. Fortunately, the relativists and quantum people work on very different problems in physics and never even have to speak to each other for the most part. These two branches of physics are only forced to confront each other at the extrema of the earliest times following the Big Bang and at the event horizon of black holes.
In The Black Hole War – My Battle With Stephen Hawking To Make The World Safe For Quantum Mechanics (2008), Susskind describes the ensuing 30 year battle he had with Stephen Hawking over this issue. The surprising solution to all this is the Holographic Principle. The Holographic Principle states that any amount of 3-dimensional stuff in our physical Universe, like an entire galaxy, can be described by the pixelated bits of information on a 2-dimensional surface surrounding the 3-dimensional stuff, just like the event horizon of a black hole. It is called the Holographic Principle because, like the holograms that you see in science museums, the 2-dimensional wiggles of an interference pattern, generated by a laser beam bouncing off 3-dimensional objects and recorded on a 2-dimensional film, can regenerate a 3-dimensional image of the objects that you can walk around, and which appears just like the original 3-dimensional objects. In the late 1990s, Susskind and other investigators demonstrated with string theory and the Holographic Principle that black holes are covered with a huge number of strings attached to the event horizon which can break off as photons or other fundamental particles. String theory has been an active area of investigation for the past 30 years in physics and contends that the fundamental particles of nature, such as photons, electrons, quarks, and neutrinos, are really very small vibrating strings or loops of energy about one Planck length in size. In this model, the pixelated bits on the event horizon of a black hole are really little strings with both ends firmly attached to the event horizon. Every so often, the strings can twist upon themselves and form a complete loop that breaks free of the event horizon to become a photon, or any other fundamental particle, that is outside of the event horizon and which, therefore, can escape from the black hole.
As it does so, it carries away the entropy or information it encoded on the black hole event horizon. This is the string theory explanation of Hawking radiation. In fact, the Holographic Principle can be extended to the entire observable Universe, which means that all the 3-dimensional stuff in our Universe can be depicted as a huge number of bits of information on a 2-dimensional surface surrounding our Universe, like a gargantuan quantum computer calculating how to behave. As I mentioned in So You Want To Be A Computer Scientist, the physics of the 20th century has led many physicists and philosophers to envision our physical Universe simply consisting of information, running on a huge quantum computer. Please keep this idea in mind, while reading the remainder of the postings in this blog.
So what happened to Boltzmann and his dream of a statistical mechanical explanation for the observed laws of thermodynamics? Boltzmann had to contend with a great deal of resistance from the physics establishment of the day, particularly from those like Ernst Mach who had adopted an extreme form of logical positivism. This group of physicists held that it was a waste of time to create theories based upon things like molecules that could not be directly observed. After a lifelong battle with depression and many years of scorn from his peers, Boltzmann tragically committed suicide in 1906. On his tombstone is found the equation:
S = k ln(N)
Scientists and engineers have developed a deep respect for the second law of thermodynamics because it is truly "spooky".
Next time we will further explore the nature of information and its role in the Universe, and surprisingly, see how it helped to upend all of 19th century physics.
Comments are welcome at firstname.lastname@example.org
To see all posts on softwarephysics in reverse order go to: