Perhaps it is time to take stock of how far we have come with softwarephysics. Recall that softwarephysics is a simulated science, where we try to match up the effective theories of physics that describe the physical Universe with the corresponding effective theories of softwarephysics that describe similar phenomena in the Software Universe. We began with the struggles of 18th century steam engine designers and the resulting development of thermodynamics. We saw that thermodynamics was an effective theory that described the macroscopic behavior of matter in terms of pressures, volumes, temperatures, and energy flows. We were also introduced to the second law of thermodynamics, which held that the entropy or disorder of the Universe was constantly increasing, and that the only way we could buck this trend and produce order out of disorder, like a car out of iron ore, was to degrade the low entropy chemical energy in a fuel into high entropy disordered energy, also known as heat. We found that entropy is a measure of the depreciation of the Universe: the total amount always increases and never decreases as a whole, but can be decreased locally with an expenditure of effort. We saw that software too was subject to the second law of thermodynamics in that software tended to depreciate, or increase in entropy, through the introduction of bugs whenever software was worked upon by programmers, and that the only way to add macroscopic functionality to software, and thereby reduce its entropy, was to degrade the chemical energy in a programmer’s brain into heat energy. We then drilled down deeper to another effective theory of physics called statistical mechanics, which took the macroscopic ideas of thermodynamics and examined them at the microscopic level of molecules and atoms.
With statistical mechanics, we saw that entropy could also be viewed in terms of the microscopic disorder of the microstates that a system could exist in, and we used the entropy of poker hands to clarify this idea. The paradox of Maxwell’s Demon was presented next, and we learned how Leon Brillouin solved the paradox with his concept of information as a decrease in the entropy of a system at the microscopic level. We then used these ideas to show that software tended to increase in entropy, and decrease in information content, whenever programmers worked on software because there were many more “buggy” versions of a piece of software than correct versions. Next we examined the special theory of relativity and found that information, the stuff that Leon Brillouin used to solve Maxwell’s Demon, was just as real as matter or energy. In order to preserve the concept of causality, we had to introduce the limitation that information could not travel faster than the speed of light; the same limitation that special relativity imposed upon matter and energy. So the information that flows through our computer systems on a 24 x 7 basis is tangible stuff after all. Finally, we ended with the general theory of relativity and the concept of spacetime, which we extended to cyberspacetime as a stage upon which the events of the Software Universe and IT could unfold.

The ultimate purpose of all this is to collect enough effective theories in softwarephysics to be able to frame the fundamental problem of software. With the fundamental problem of software in hand, we will see that a biological approach to software is in order, and at that point, we will switch gears to cover the biological aspects of softwarephysics. We are about halfway there, and I just wanted to stop briefly to recap our progress to date, because our next stop will bring us to the really strange physics of the 20th century, quantum mechanics, and its very counterintuitive assertions. As one of my old physics professors used to say, “You never really understand quantum mechanics; you just get used to it”. So be prepared to hold on tight!

**The Software Universe is Quantized**

In my last posting on cyberspacetime, we saw that the cyberspacetime of the Software Universe is quantized in both the cyberspace and time dimensions. Because the time dimension of cyberspacetime comes in quanta of time of less than a nanosecond (10^{-9} seconds) and the number of microprocessors on a server or PC is not readily apparent to end-users, the quantization of cyberspacetime vanishes at the macroscopic level familiar to IT professionals and laymen end-users. For example, even the most economical of PCs are now equipped with a dual-core processor with two CPUs, and a modern data warehouse can scale up to 1,024 nodes, with each node containing 4 dual-core processors, for a total of 8,192 microprocessors. During the course of a single browser session on the Internet, end-users can easily interact with thousands of microprocessors over the span of trillions of nanoseconds, but this all just appears as one large continuous cyberspacetime to them.

For layman end-users and IT managers, the same can be said of software itself. They only view software macroscopically and are only interested in the macroscopic functions that software performs, the speed with which the functions execute, and the stability and reliability of its performance. For them, software is a continuous macroscopic substance. Programmers, on the other hand, are well aware of the quantization of software at the source code level. For programmers, software is composed of lines of source code. And each line of code has a microscopic purpose which translates into a macroscopic effect. In turn, the lines of code are composed of characters, which must be exactly correct in both number and kind. Each character of a line of code is composed of a series of 8 quantized bits, with each bit in one of two quantum states “1” or “0”, which can also be characterized as ↑ or ↓. For example, in the following line of code:

discountedTotalCost = (totalHours * ratePerHour) - costOfNormalOffset;

some sample characters have ASCII representations of:

C = 01000011 = ↓ ↑ ↓ ↓ ↓ ↓ ↑ ↑

H = 01001000 = ↓ ↑ ↓ ↓ ↑ ↓ ↓ ↓

N = 01001110 = ↓ ↑ ↓ ↓ ↑ ↑ ↑ ↓

O = 01001111 = ↓ ↑ ↓ ↓ ↑ ↑ ↑ ↑
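The mapping from characters to quantized bits is easy to demonstrate. Here is a short sketch in Python (the language choice and the helper name are my own, not part of the original line of code) that renders each character’s 8-bit ASCII pattern as ↑ and ↓ states:

```python
# A sketch (my own helper, not from the original posting) that renders
# each character of source code as its 8 quantized ASCII bits, with
# "1" drawn as ↑ and "0" drawn as ↓.
def to_spin_states(ch):
    bits = format(ord(ch), "08b")          # 8-bit binary ASCII code
    return bits, " ".join("↑" if b == "1" else "↓" for b in bits)

for ch in "CHNO":
    bits, spins = to_spin_states(ch)
    print(f"{ch} = {bits} = {spins}")
```

Running it reproduces the table above.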

The physical characteristics of each character in a line of code and its ultimate macroscopic effects are defined by the arrangement of its 8 quantized bits ↑ ↓.

**Is the Physical Universe Quantized Too?**

Recall that the general theory of relativity relied upon Einstein’s Equivalence Principle, which held that the effects of gravity and acceleration were deemed equivalent for all observations. Thus the slowing down of accelerated clocks could be used to predict the slowing down of clocks in a gravitational field and vice versa. The Equivalence Principle is a two-way street that allows physicists to jump back and forth between accelerated reference frames and gravitational fields. In softwarephysics we have a similar two-way street in the Equivalence Conjecture of softwarephysics:

*Over the past 70 years, through the uncoordinated efforts of over 50 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.*

The Equivalence Conjecture is an outgrowth of Konrad Zuse’s *Calculating Space*, published in 1967, in which he proposed that the physical Universe was equivalent to a network of computers, and which gave birth to the concept of digital physics. Using nothing more than the Equivalence Conjecture of softwarephysics, we could logically predict that the physical Universe must be quantized too! I am just trying to have a little fun here. The discovery of the quantization of the physical Universe actually has a very long history, going back thousands of years to Ancient Greece.

**Atoms and the Void**

It all began around 450 B.C., when Leucippus and his student Democritus proposed that the world was composed of a “void” filled with eternal, unchanging, quantized particles called atoms. The Greek word *atomos* means "uncuttable". According to Democritus, if you kept cutting a piece of gold in half, ultimately you would end up with a single uncuttable atom of gold. These atoms were eternal and unchanging, so the multitude of changes seen in the real world was an illusion, the result of combining or breaking apart combinations of atoms. This atomic view of the Greek atomists contrasted sharply with the philosophy of many of the other Greek philosophers of the day, which held that matter was a continuous substance composed of 4 elemental substances – fire, water, earth, and air. But if matter was really composed of atoms, what forces held the atoms together in combinations, and why didn’t they all just fall apart in a pile at the bottom of the void? These questions plagued the atomists and hampered the acceptance of atomic theory until the 20th century. In 1803, John Dalton, a schoolteacher, used the concept of atoms to explain why elemental substances always reacted in simple proportions by weight. For example, 12 grams of carbon always reacted with 32 grams of oxygen to form 44 grams of carbon dioxide, and 4 grams of hydrogen always combined with 32 grams of oxygen to form 36 grams of water. Dalton proposed that each element consisted of a unique kind of atom, and that these atoms could join with each other to form chemical compounds:

C + O_{2} → CO_{2}

2 H_{2} + O_{2} → 2 H_{2}O
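Dalton’s simple proportions can be checked with modern atomic masses. Here is a minimal sketch in Python (the `MASS` table and `grams` helper are my own illustrative assumptions, not anything from Dalton):

```python
# Dalton's simple proportions checked with modern atomic masses.
# The MASS table and grams() helper are illustrative assumptions.
MASS = {"C": 12.0, "O": 16.0, "H": 1.0}    # grams per mole, rounded

def grams(formula):
    # formula is a list of (element, atom count) pairs
    return sum(MASS[el] * n for el, n in formula)

carbon = grams([("C", 1)])                 # 12 g of carbon
oxygen = grams([("O", 2)])                 # 32 g of oxygen
co2    = grams([("C", 1), ("O", 2)])       # 44 g of carbon dioxide
print(carbon, oxygen, co2)                 # 12.0 32.0 44.0
```

The same helper confirms the water reaction: 4 grams of hydrogen plus 32 grams of oxygen make 36 grams of water.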

Strangely enough, physics was the last branch of science to fully buy into the atomic theory. The strongly positivist school of thought, led by Ernst Mach, frowned upon atoms because they could not be directly observed. This began to change in 1897, when J. J. Thomson successfully isolated electrons in atoms by deflecting a cathode ray, a stream of electrons, in a vacuum tube with a magnetic and an electric field. Thomson discovered that the charge-to-mass ratio of the electrons from a variety of cathodes, made of different metals, was always the same. Thomson concluded that the electron must be a negatively charged particle, common to all atoms, and that all electrons were identical. Since normal matter has no net electrical charge, Thomson proposed that atoms consisted of a smeared out positive charge with embedded electrons – the plum pudding model of the atom.

In 1909, Hans Geiger and Ernest Marsden, under the direction of Ernest Rutherford, bombarded a thin gold foil, about 200 atoms thick, with alpha particles. An alpha particle is a helium nucleus, consisting of two protons and two neutrons, which manages to tunnel out of a much larger radioactive nucleus such as radon. An alpha particle obtains a high velocity, and consequently lots of kinetic energy, as it is rapidly pushed away from the mother radioactive nucleus by the many protons in the radioactive nucleus. The plum pudding model of the atom predicted that, as the alpha particles passed through the gold foil, they should only have been deflected by a small angle of a few degrees. The surprising result from this experiment was that some of the alpha particles were deflected by very large angles, with some actually reflecting right back into the radioactive source. In 1911, Rutherford proposed that the back scattering of the alpha particles was caused by a very small, positively charged nucleus, less than 3.4 x 10^{-14} meters in size, and about 100,000 times smaller than the overall size of the gold atoms. The negative charge of the surrounding electrons, orbiting the positively charged nucleus, neutralized the nucleus, yielding atoms with no net electrical charge. In this model, atoms were composed mostly of empty space, with negatively charged electrons orbiting a central positively charged nucleus, like the Earth orbiting the Sun.

**Quantization of the Atom**

As with relativity, this model presented a problem for classical electrodynamics. The orbiting electrons, moving in circles around the positively charged nucleus, would be constantly accelerating, and should, by classical electrodynamic theory, be constantly radiating off electromagnetic radiation. As the orbiting electrons in atoms radiated off all of their orbital kinetic energy, all the atoms in the Universe should immediately collapse in the blink of an eye. In an earlier posting, I cited a similar problem with classical electrodynamics that Max Planck confronted in 1900. According to classical electrodynamics, the walls of the room in which you are currently located should be at a temperature of absolute zero, having converted all of the energy of the free electrons in the walls of the room into ultraviolet light and x-rays. This was known as the “Ultraviolet Catastrophe” at the time, and is another example of an effective theory bumping up against the limitations of its effective range of reliable prediction. In 1900, Max Planck was able to resolve this dilemma by proposing that the energy of the oscillating electrons in the walls of your room was quantized into a set of discrete integer multiples of an elementary unit of energy E = hf.

Thus:

E = nhf

where

n = 1, 2, 3, ...

h = Planck’s constant = 4.136 x 10^{-15} eV sec

f = frequency of the electron oscillation

You can read a translation of his famous 1901 paper, which first introduced the concept of quantization, at:

http://theochem.kuchem.kyoto-u.ac.jp/Ando/planck1901.pdf

Max Planck regarded his inspiration/revelation of the quantization of the oscillation energy of the free electrons and their radiated energy as a mathematical trick to overcome the Ultraviolet Catastrophe. But in 1905, the same year that he published the special theory of relativity, Einstein proposed that Planck’s discovery was not a mathematical trick at all. Einstein proposed that sometimes light, an electromagnetic wave in classical electrodynamics, could also behave like a stream of quantized particles, which we now call photons, with energy:

E = hf

In 1913, Niels Bohr adopted a similar strategy of quantization to resolve the problem of atoms spontaneously collapsing due to radiating away all of their orbital energy. Bohr proposed that the angular momentum L of the electrons was quantized too.

L = nħ

where ħ = h/2π and n = 1, 2, 3, …

Bohr proposed that electrons did not radiate energy so long as they remained in one of the quantized orbits about the nucleus.

Angular momentum is a measure of the amount of spin that a body has. In classical mechanics, the angular momentum of an electron orbiting a nucleus is:

L = mvr

where

m = mass of the electron

v= the velocity of the electron

r = radius of the electron’s orbit

and the angular momentum L can take on any value as the velocity v and radius r vary continuously. For example, the conservation of angular momentum is the reason that a skater spins faster when she pulls in her arms during a spin. But in Bohr’s model of the atom, the electrons could only take on quantized values of angular momentum with quantized amounts of energy. For hydrogen, which consists of a single electron orbiting a nucleus composed of a single proton, the energy states became:

E_{n} = -13.6 eV/n²

where n = 1, 2, 3, …

An electron-volt, eV, is a very small amount of energy. It is the amount of energy that an electron gains as it accelerates in a vacuum towards the positive pole of a one volt battery and away from the negative pole of the battery. Most chemical reactions have an energy of a few electron-volts per molecular reaction, so it is a convenient unit of energy for atomic theory.

The energies of the quantized states of hydrogen run as:

n = 1: E_{1} = -13.6/1 = -13.6 eV

n = 2: E_{2} = -13.6/4 = -3.4 eV

n = 3: E_{3} = -13.6/9 = -1.5 eV

n = 4: E_{4} = -13.6/16 = -0.85 eV

where n=1 is the lowest energy level of the atom. In the Bohr model of the atom, quantized photons of light are emitted or absorbed when electrons jump from one quantum state to another. For example, when an electron in the second energy level of hydrogen with n = 2 drops to the lowest quantum state of n = 1, a photon with energy:

∆E = -3.4 - (-13.6) = 10.2 eV

is emitted. Using the formula:

E = hf

it is easy to calculate the frequency of a photon with 10.2 eV of energy, and this is what is observed spectroscopically in the laboratory, in confirmation of Bohr’s predicted value.
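As a quick sketch of that calculation in Python (rounded constants, not from the original posting), the 10.2 eV photon works out to the ultraviolet Lyman-alpha line of hydrogen:

```python
# Frequency and wavelength of the photon emitted when a hydrogen
# electron drops from n = 2 to n = 1. Rounded constants; the
# calculation itself just follows E = hf.
H = 4.136e-15            # Planck's constant in eV sec
C = 2.998e8              # speed of light in m/s

def photon_frequency(energy_ev):
    return energy_ev / H     # from E = hf, so f = E/h

E = -13.6 / 4 - (-13.6 / 1)  # the n = 2 to n = 1 jump: 10.2 eV
f = photon_frequency(E)
wavelength = C / f
print(f"f = {f:.3e} Hz, wavelength = {wavelength * 1e9:.1f} nm")
# about 2.466e15 Hz and 121.6 nm, the ultraviolet Lyman-alpha line
```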

In 1860 Gustav Kirchhoff and Robert Bunsen invented the modern spectroscope, consisting of a gas flame, a slit, a prism, and a small observing telescope. Kirchhoff and Bunsen introduced small amounts of materials into a flame and then allowed the colored light from the flame to pass through a slit and then on through a prism. They then observed the image of the slit with a small telescope. The prism caused the single slit to appear as a series of multiple slits, or lines, of different colors caused by the splitting up of the different frequencies of light within the colored light of the flame. You can repeat their experiment by sprinkling a little salt water in the flame of a gas range and observing the characteristic yellow color of excited sodium atoms. Kirchhoff and Bunsen did not know it at the time, but what was happening was that the high temperature of the flame was bouncing atoms around in the flame causing some of the electrons in the atoms to get excited. As the electrons transitioned back to lower energy levels, they gave off photons of specific frequencies or colors. You can read their original paper at:

http://chemteam.info/Chem-History/Kirchhoff-Bunsen-1860.html

In 1924, Louis de Broglie proposed in his doctoral thesis that if light waves could sometimes behave as particles, then perhaps particles, like electrons, could also behave like waves with a wavelength λ of:

λ = h/mv

where

λ = wavelength

h = Planck's constant

m = mass of the particle

v = the velocity of the particle
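To get a feel for the numbers, here is a hedged sketch in Python (rounded constants; the electron speed of 10^6 m/s is my own illustrative choice). The wavelength comes out close to atomic dimensions, which is why electrons diffract off crystal lattices:

```python
# De Broglie wavelength of an electron. Rounded constants; the
# 10^6 m/s electron speed is an illustrative assumption.
H = 6.626e-34            # Planck's constant in J sec
M_E = 9.109e-31          # electron mass in kg

def de_broglie(mass, velocity):
    return H / (mass * velocity)     # λ = h/mv

lam = de_broglie(M_E, 1.0e6)
print(f"λ = {lam:.3e} m")            # about 7.3e-10 m, atomic dimensions
```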

In 1925, Clinton Davisson and Lester Germer were at Bell Labs bombarding a nickel target with electrons in a vacuum tube. During one of the runs of their experiment, the tube leaked causing the nickel to oxidize. To salvage the nickel target, they had to heat the nickel to a high temperature. Unknown to them, the heating of the nickel fused the nickel into several large crystals. When the fused nickel target was later bombarded with electrons again, they discovered that the electrons were now reflected back in a diffraction pattern. Diffraction patterns are a telling characteristic of waves. It is the diffraction of light waves off the closely spaced pits on a music CD that creates the rainbow sparkle that you see when you shine a light on it. The light waves diffract off the regularly spaced pattern of pits on a CD and get spread out at different angles by their frequencies (or wavelengths and colors), just as a prism spreads out light by its colors or frequencies. In fact, you can make a simple homemade spectroscope using a CD and two pieces of cardboard. Tape the two pieces of cardboard together, such that there is a narrow slit between the two pieces, about 1/16th of an inch wide. Now place the CD on a flat table with the label side down, and then put the cardboard with the slit, in front of a desk lamp containing a compact fluorescent bulb. Be sure to position the desk lamp, so that the photons from the slit have to do a bank shot off the CD at an angle to reach your eyes. Reflected in the CD you will see the bright white reflection of the slit, but you will also see the image of the slit as a series of colored circular slits or lines expanding out from the center of the CD. That is the diffraction pattern. If you look carefully, you will see a complex pattern of closely spaced lines; two red lines, a broad yellow line, a green line, a blue line, and an indigo line, with black spaces between the colored lines. 
This is the spectral signature of the excited mercury atoms in the compact fluorescent bulb, and it is firsthand proof of both the quantization of the energy levels in mercury atoms and of the emitted photons too. In a similar fashion, Davisson and Germer observed essentially the same thing when they accelerated electrons to specific energies and bombarded their crystalline nickel target; they too obtained a diffraction pattern of reflected electrons at specific angles. The electrons diffracted off the regularly spaced nickel atoms in the crystalline lattice of the nickel crystal at specific angles, just as the photons in your homemade spectroscope diffracted off of the pattern of pits in the CD at specific angles with specific colors or frequencies. Davisson and Germer reported their results in a paper published in 1927, confirming de Broglie’s idea of the wavelike behavior of particles.

The idea that particles, like electrons, could behave as waves allowed for a reinterpretation of Bohr’s conjecture that the angular momentum of the electrons in a hydrogen atom was quantized as:

L = nħ

where ħ = h/2π and n = 1, 2, 3, …

We could now envision the electron as a standing wave, surrounding the central positively charged proton. Just as only certain wavelengths of sound can exactly fit into a pop bottle and resonate when you blow across the mouth of the bottle, only electrons with certain fixed wavelengths or energies could exactly fit around the circumference of an electron orbital about the proton.
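We can check this picture numerically. The following Python sketch (rounded constants; the helper name is my own) combines Bohr’s L = mvr = nħ with de Broglie’s λ = h/mv, and confirms that exactly n electron wavelengths fit around the circumference 2πr, whatever radius we pick:

```python
import math

# Rounded constants; wavelengths_in_orbit() is my own helper name.
H = 6.626e-34                  # Planck's constant in J sec
HBAR = H / (2 * math.pi)
M_E = 9.109e-31                # electron mass in kg

def wavelengths_in_orbit(n, r):
    v = n * HBAR / (M_E * r)   # from Bohr: L = mvr = n hbar
    lam = H / (M_E * v)        # de Broglie: lambda = h/mv
    return (2 * math.pi * r) / lam

# Exactly n wavelengths fit, no matter what radius we try
for n in (1, 2, 3):
    print(n, round(wavelengths_in_orbit(n, 5.29e-11 * n ** 2), 6))
```

The radius cancels out of the ratio, which is the algebraic content of Bohr’s quantization rule.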

The Bohr quantum model of the atom was very successful at predicting the spectra from hydrogen atoms, but less so for more complex atoms. It also did not explain things like why the spectral lines from atoms split into two or more lines when exposed to a strong magnetic field. As with all effective theories, it had its limitations. Clearly, some improvements were required.

**Erwin Schrödinger**

The next advance came from a 38-year-old physicist, Erwin Schrödinger, who was still struggling to make his mark in physics. The rule of thumb in physics is that most of the great advances come from people under the age of 30, probably because the inspiration/revelation step of the scientific method seems to come easier to the young at heart with less to lose. At this point, physicists had been working with the mathematics of waves for more than 100 years, and were well aware that all known waves obeyed a differential equation of a particular form known as the wave equation. Schrödinger was convinced that for the de Broglie hypothesis about the wavelike behavior of matter to advance, a wave equation for particles was required. So in the winter of 1926, Schrödinger packed up his latest mistress and left his home and wife in Vienna for a couple of weeks of sabbatical in a Swiss chalet. There he worked out a famous wave equation for particles, now known as the Schrödinger equation. The story goes that he had two pearls, one for each ear, that allowed him to work undisturbed by his mistress. His wife back in Vienna certainly was not a distraction either. He truly was young at heart at the age of 38 and ripe for some inspiration/revelation.

To understand the significance of all this, we need to delve a little into the mathematics of differential equations. Imagine a very long taut guitar string stretched between two walls that are separated by a large distance. If we pluck the string near the left wall, a pulse will begin to travel to the right. Figure 1 below is a snapshot of a small section of a possible pulse at a particular time as it moves to the right.

Figure 1

The motion of the pulse can be described by a scary looking differential equation, known as the wave equation for a stretched string, which describes how each small section of the string moves up and down as the pulse passes by.

**The Wave Equation of a Stretched String**

∂²y/∂x² = (μ/T) ∂²y/∂t²

We will be dealing a little bit with differential equations when we get to chaos theory in softwarephysics, so let’s spend a little time with the wave equation to show that it is really not so scary after all. The project plan for a large IT project can be pretty scary too, if you look at the whole thing at once. However, if you break it down into its individual tasks, it looks much less formidable. It’s important to take life one 2x4 at a time. The first term in the equation on the left is called the second partial derivative of the pulse with respect to the x-axis (distance along the string). It is just the curvature of a small section of the string as the pulse passes by (see Figure 1):

∂²y/∂x²

When this term is a big number, it means that the curvature of the string is large and the string has a peak or valley. When this term is a smaller number, it means that the curvature is low and the string is kind of flat. The variable μ is the density of the string. When μ is large, it means that the string is heavy; when μ is small, it means that the string is light. The variable T is the tension in the string, or how tightly the string is stretched. So if we just look at what we have so far, we see that the curvature of the string pulse is equal to the density of the string μ divided by the tension T of the string times “something else”. This makes sense. When the string is stretched very tightly, the tension T is large, so the curvature of the pulse should get flatter (smaller). Also, when the density μ of the string gets bigger, the curvature of the pulse should get larger too, because a heavy string should be less flattened by a tension T, than a light flimsy string. Now we have to look at the “something else” term on the far right:

∂²y/∂t²

This term is called the second partial derivative with respect to time. It is just the curvature of the pulse in time – how fast a tiny section of the string accelerates up and down as the pulse passes by. A flat pulse will not accelerate up and down very quickly as it passes by, while a tightly peaked pulse will accelerate up and down quickly as the pulse passes by.

The wave equation for the stretched string now makes sense as a whole. It simply says that the curvature of the pulse along the x-axis gets bigger when the density μ of the string gets bigger, or when tension T gets smaller, or when the rate at which the string accelerates up and down as the pulse passes by gets bigger. When you work out the mathematics, the velocity of the pulse is given by:

v = √(T/μ)
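If you would rather see the wave equation in action than solve it by hand, here is a minimal finite-difference simulation in Python (the grid sizes, tension, and density are my own illustrative assumptions): it discretizes ∂²y/∂x² = (μ/T) ∂²y/∂t² and checks that a pulse really does travel at √(T/μ).

```python
import math

# Illustrative string parameters (my own choices, not from the posting)
T, MU = 100.0, 0.01            # tension in N, density in kg/m
V = math.sqrt(T / MU)          # predicted pulse speed: 100 m/s
DX = 0.01                      # grid spacing in m
DT = DX / V                    # time step chosen so the scheme is stable
N = 2000                       # string runs from x = 0 to x = 20 m

# initial Gaussian pulse at rest, centered at x = 2 m
y = [math.exp(-((i * DX - 2.0) / 0.1) ** 2) for i in range(N)]
yp = y[:]                      # previous time step (string starts at rest)

def step(y, yp):
    """Advance the string one time step with centered differences."""
    c2 = (V * DT / DX) ** 2
    yn = [0.0] * N             # the fixed ends stay at zero
    for i in range(1, N - 1):
        yn[i] = 2 * y[i] - yp[i] + c2 * (y[i + 1] - 2 * y[i] + y[i - 1])
    return yn, y

for _ in range(1000):          # advance 1000 * DT = 0.1 s
    y, yp = step(y, yp)

# The pulse splits in two; the right-moving half should now sit near
# x = 2 m + V * 0.1 s = 12 m
peak = max(range(N), key=lambda i: y[i]) * DX
print(f"pulse peak near x = {peak:.2f} m")
```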

The hard part about differential equations is solving them. You have to find a curve that meets the above requirements. When you take a course in differential equations, they teach you all sorts of ingenious techniques using calculus to guess what curvy line fits the requirements. For the problem at hand, the solution to the wave equation for a stretched string fixed at both ends is a series of standing sine waves, which look something like Figure 2 below. The string can be excited into many standing waves, defined by a number n = 1, 2, 3,… which describes how many wavelengths of the standing wave just exactly fit between the two fixed points.

**Schrödinger’s Equation**

Working with Einstein’s equation for the energy of a photon and de Broglie’s equation for the wavelength of a particle, Schrödinger had a problem. Unlike the velocity of a wave on a string, which only depended upon the density of the string μ and its tension T, the velocity of a particle’s wave depended upon the wavelength of the particle λ:

v = h/(2mλ)

Schrödinger believed that a particle must really be a wavepacket of many superimposed waves of different wavelengths that added up in phase near the location of the particle. Given the above equation, the waves in the wavepacket would tend to move at different velocities because they all had different wavelengths. The traditional wave equation, like the one for a wave pulse on a string, would not work under such conditions. Schrödinger overcame this problem with the following compromise equation, that sort of looks like a traditional wave equation. Note that the wavefunction Ψ is just a wiggly line, like the pulse on our string, and is pronounced like the word “sigh”, and m is the mass of the particle.

-(ħ²/2m) ∂²Ψ/∂x² = iħ ∂Ψ/∂t

He had to make two modifications to the standard wave equation:

1. He used the first partial derivative with respect to time, instead of the second partial derivative with respect to time on the far right side of the “=” sign

2. The equation contained:

i = √-1

(or in other words, i² = -1)

which meant that the Schrödinger equation was a complex differential equation, with an imaginary part containing the quantity “i”, the square root of -1. Now we all know that there is no “real” number that, when multiplied by itself (squared), produces a -1, but that does not scare off mathematicians! Several hundred years ago, mathematicians became comfortable with the idea of an “imaginary” number i, which they defined as the square root of -1. “Real” numbers, the kind of numbers that we are used to dealing with, are just numbers that do not have an imaginary part. A little later, physicists discovered that the Universe seemed to just love “imaginary” numbers. The imaginary number i started to pop up in all sorts of equations and was nearly as popular as π. Now the fact that Schrödinger’s equation contained an imaginary part meant that solutions to the equation, known as wavefunctions Ψ, would not be totally “real” either, because they would contain imaginary parts using the square root of -1. As we shall see, this implication created a problem for the interpretation of what exactly a wavefunction really was. All of the other waves we had experience with, like waves on a string, light waves, or water waves, were “real” functions or curvy lines. What did a complex wavefunction, with both “real” and “imaginary” parts, mean?

**Schrödinger’s Time-Independent Equation for a Single Dimension**

We can simplify Schrödinger’s equation by getting rid of the part that depends upon time for the cases where the energy E does not change with time. That is certainly true for the electrons in an atom, so long as they remain in their orbits and do not jump from one orbit to another and emit or absorb a photon.

-(ħ²/2m) d²ψ(x)/dx² + V(x) ψ(x) = E ψ(x)

In this equation we use ψ for the wavefunction, instead of the full blown Ψ, because it does not contain the part that varies with time. ψ is still just a wiggly line, like the pulse on our string, and is still pronounced like the word “sigh”. In addition, we added a term V(x) which is another function or wiggly line that describes how the potential energy of the particle varies as it moves back and forth along the x-axis. Imagine a straight road that runs across a hilly landscape and that you are riding a bicycle. If you ride your bicycle up a hill, you can tell that you are increasing your potential energy V(x) because it is hard to pedal up the hill as you convert some of your kinetic energy into potential energy. Similarly, it is easy to coast down a hill on a bicycle, because some of the potential energy V(x) that you have stored away is converted back into kinetic energy. Another way of looking at this is that V(x) is also a way of describing a force. For example, the topography of the wiggly V(x) function seems to produce a force pulling you down the hill and another force impeding your progress up the hill. So the above formula allows you to calculate the wavefunction ψ for a particle subject to a force.

**The Particle in a Box**

Recall Figure 1 in my posting The Demon of Software, which depicted two containers full of molecules bouncing around. In 1872, Ludwig Boltzmann developed statistical mechanics by envisioning the molecules existing in a large number of microstates. This was many years before the arrival of quantum mechanics, so Boltzmann had to make up his microstates by arbitrarily force-fitting the molecules into little mathematical cubicles, like saying “I can tell by your energy that you are a perfect size 8”, at least approximately. Let us now apply Schrödinger’s equation to this problem to find the real microstates. To simplify the problem, imagine a single container of width “L” along the x-axis containing a single particle, such as an electron, in it. The electron is bouncing back and forth along the x-axis like a target in a shooting gallery. The electron only feels a force when it hits one of the walls on either the left or right side of the container at x = 0 or x = L, so the potential energy V(x) between the walls is zero. At the walls located at x = 0 and x = L, the electron feels an infinite force pushing it back into the container, so the potential energy hill V(x) at these two points is very steep and actually goes straight up to infinity.

When you solve Schrödinger’s equation for this problem, you get quantized solutions that are sine wavefunctions:

ψ_{n}(x) = √(2/L) sin(nπx/L)    n = 1, 2, 3, ...

E_{n} = n²h² / (8mL²)    n = 1, 2, 3, ...

where

m = mass of the particle (electron in this case)

L = width of the box

h = Planck’s constant

n = quantum number

For example for the lowest energy level where n = 1:

ψ_{1}(x) = √(2/L) sin(πx/L)

E_{1} = h² / (8mL²)

What is happening here is that you can only fit sine waves into the box that have wavelengths that fit just right. The quantum number n determines how many half-wavelengths of the sine wavefunction fit properly in the box, and also the energy E_{n} of each wavefunction. It is easier to see this as a plot than as an equation. Figure 2 shows the first three wavefunctions, for n = 1, 2, and 3.

Figure 2

The above wavefunctions look a lot like the first three harmonics of a vibrating guitar string, and indeed, they are identical to the standing waves you get on a vibrating string.
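These allowed wavefunctions and energies are easy to check numerically. Here is a minimal Python sketch (the function names are my own, not standard) that evaluates ψ_{n}(x) and E_{n}, confirms that the energies scale as n², and shows that the n = 2 wavefunction vanishes at the center of the box:

```python
import math

def psi(n, x, L=1.0):
    """Particle-in-a-box wavefunction: psi_n(x) = sqrt(2/L) * sin(n*pi*x/L)."""
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)

def energy(n, m, L):
    """Allowed energy levels: E_n = n^2 h^2 / (8 m L^2)."""
    h = 6.62607015e-34  # Planck's constant in joule-seconds
    return n**2 * h**2 / (8.0 * m * L**2)

m_e = 9.1093837015e-31   # mass of the electron in kilograms
L = 1e-9                 # a box one nanometer wide

# The energies grow as the square of the quantum number: E_2 = 4 E_1, E_3 = 9 E_1
E1 = energy(1, m_e, L)
assert abs(energy(2, m_e, L) / E1 - 4.0) < 1e-12
assert abs(energy(3, m_e, L) / E1 - 9.0) < 1e-12

# The n = 2 wavefunction has a node (a zero) at the exact center of the box
assert abs(psi(2, 0.5)) < 1e-12
```

Nothing in the sketch is specific to electrons; plugging in a different mass m gives the energy ladder for any particle trapped in a one-dimensional box.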

Remember, I warned you about wavefunctions being strange. At first, nobody really knew what to do with them. Physicists are just like other people. They tend to take ideas that they are familiar with, and project these ideas onto things that are new to them. Physicists have a few popular models. Grab a small pebble and keep making it smaller in your mind, until it is infinitely small with a dimension of zero. When you are finished, you have a particle. Now take the particle and throw it into a still pond. The resulting waves that propagate away are another good model. Take a bar magnet, as Michael Faraday did, and cover it with a piece of paper. Then sprinkle some iron filings over it. The “lines of force” that you see are a field. Faraday called it a field because it reminded him of a freshly plowed farmer’s field. So when Schrödinger came up with his equation, nobody really knew what to make of the wavefunction solutions to the equation. Schrödinger thought they might have something to do with the electric field of electrons, but he wasn’t quite sure.

In 1926 Max Born came up with another interpretation. Born proposed that the wavefunction was really a probability wave. Imagine that a gang of young thugs moves into your neighborhood and begins to knock off liquor stores late at night in the surrounding area. To avoid capture, the gang decides to knock off each liquor store only once, in case the police are lying in wait. If you plot the liquor stores that get knocked off versus time, you will see an expanding wave of crime. The odds of a particular liquor store getting knocked off increase as the wave passes by and diminish as the wave moves on. Recall that the wavefunctions that are solutions to Schrödinger’s equation are usually complex, meaning that they have both “real” and “imaginary” parts, and that we are only used to dealing with solutions to wave equations that are real. Born knew that it is a mathematical fact that whenever you multiply a complex number or function by its complex conjugate, you always obtain a real number, because all of the imaginary parts disappear. To obtain the complex conjugate of a wavefunction Ψ, all you have to do is change i to –i wherever you see it. The resulting function is denoted by Ψ*.
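Born’s trick rests on a simple fact about complex numbers that you can verify for yourself. A quick illustration in Python (my own aside, not part of the original derivation):

```python
# Multiplying a complex number by its complex conjugate always
# yields a real, non-negative number: the imaginary parts cancel.
z = 3 + 4j                  # a complex number with real and imaginary parts
z_conj = z.conjugate()      # change i to -i, giving 3 - 4j
product = z_conj * z

assert product.imag == 0.0           # no imaginary part survives
assert product.real == abs(z) ** 2   # what is left is |z| squared
```

The same cancellation happens point by point when Ψ* multiplies Ψ, which is why Born’s recipe always produces a real probability.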

Born proposed that the probability of finding a particle at a position x was:

Ψ*Ψ = |Ψ|²

at the point x. So to plot the probability of finding the electron in our 1-dimensional box, we just have to square the absolute value of its wavefunction. Figure 3 shows the resulting plot.

Figure 3

Now here comes the interesting part. Classical mechanics predicts that the probability of finding the electron at any point should be the same for each point along the x-axis, because the electron is just bouncing back and forth like a target in a shooting gallery, and that the electron could have any energy at all, since the energy is just defined by how fast the electron is bouncing back and forth. However, the wavefunctions that are solutions to Schrödinger’s equation predict that the energy of the electron is quantized and comes in discrete allowed values. They also predict that the probability of finding the electron along the x-axis varies according to the energy state of the electron defined by its quantum number n. For the lowest energy level, where n = 1, the electron is most likely to be found near the center of the box because ψ*ψ has a peak there. That is not too bothersome. However, for the second energy level, where n = 2, something really strange happens. There is a high probability of finding the electron on either the left or right side of the box, but never in the center! How can an electron move back and forth in the box without ever passing through the center? For the higher energy levels, where n = 3, 4, 5, …, there are even more dead spots where ψ*ψ = 0, and the electron will never be found! This is just another example of the quantum strangeness that is built into our Universe.

**Quantum Mechanics of the Atom**

Schrödinger was able to apply his new equation to the case of the hydrogen atom and derive its energy levels, defined by the quantum number n, by using a V(x) caused by the electrostatic force of the proton pulling on the electron. Again, these matched the spectroscopically observed energy levels of the hydrogen atom, also predicted by the Bohr model of the atom. Because electrons moving around a proton have some rotational motion, Schrödinger’s wavefunctions for the hydrogen atom also had two additional quantum numbers, l and m, which defined quantized amounts of angular momentum, and these quantum numbers explained most of the splitting of spectral lines in a magnetic field that Bohr’s model did not. But there was still a small amount of additional spectral line splitting that Schrödinger’s model failed to predict. And Schrödinger’s wavefunctions for electrons in atoms had one additional major failing.

In 1922, Otto Stern and Walther Gerlach performed an experiment which showed that electrons have an intrinsic quantized angular momentum, later called spin, and a small associated magnetic field, like a tiny bar magnet. Stern and Gerlach shot hot silver atoms from an oven through a non-uniform magnetic field and found that the beam of silver atoms split into two beams. This was a little strange, since electrons are now thought of as fundamental point particles with a dimension of zero, so how could electrons have any angular momentum, if angular momentum is defined as L = mvr and electrons have an r = 0? The other strange thing was that if electrons were really spinning like little tops, they should spin in all different directions, but when Stern and Gerlach performed their experiment, they always found that the electrons were either spinning in the same direction as their magnetic field or 180° in the opposite direction. How did the electrons know how to align their spins in advance, before they got to the magnetic field? Again, this is just quantum strangeness at work. Stern and Gerlach found that electrons are like little spinning magnets of dimension zero, with a spin up ↑ or spin down ↓ of magnitude:

S_{z} = ±½ħ

In 1928, Paul Dirac realized that, from the standpoint of classical mechanics, the electrons orbiting the nucleus of an atom would have to move very quickly to overcome the electrostatic force pulling them into the positively charged nucleus. In fact, the electrons would have to move at about 30% of the speed of light, and so would experience relativistic effects. Schrödinger had used the classical concept of the energy E of a particle in his equation and had not taken any relativistic effects into account. When Dirac included these effects in a much more complicated form of the wave equation, the quantized spin of electrons popped out just as Stern and Gerlach had observed. The other thing that the Dirac equation predicted was that electrons came in two forms: one with a negative charge and positive energy, and one with a positive charge and negative energy! We now call these positively charged electrons positrons. Positrons were the first form of antimatter predicted by physics, and Carl D. Anderson actually observed one in a cosmic ray experiment in 1932. With Dirac’s equation, we now have a model for the atom composed of a positively charged nucleus surrounded by electrons with wavefunctions defined by four quantum numbers n, l, m, and s.
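The four quantum numbers are not independent: for a given n, l runs from 0 to n − 1, m runs from −l to +l, and s is ±½. A short Python sketch (my own illustration) that enumerates the allowed combinations shows why each shell of an atom can hold exactly 2n² electrons:

```python
from fractions import Fraction

def states(n):
    """List every allowed (n, l, m, s) combination for principal quantum number n."""
    combos = []
    for l in range(n):                # l = 0, 1, ..., n - 1
        for m in range(-l, l + 1):    # m = -l, ..., 0, ..., +l
            for s in (Fraction(1, 2), Fraction(-1, 2)):  # spin up or spin down
                combos.append((n, l, m, s))
    return combos

# Counting the combinations gives the familiar shell capacities: 2, 8, 18, ...
for n in (1, 2, 3):
    assert len(states(n)) == 2 * n * n
```

This counting is exactly the bookkeeping behind the electron shells of the periodic table.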

In 1925, Wolfgang Pauli wondered why all of the electrons in an atom did not simply decay to their lowest energy level of n = 1 by emitting photons. He empirically proposed that each electron in an atom had to be in a unique quantum state. The implication of this proposal was that each electron had to have a unique wavefunction defined by a unique combination of quantum numbers n, l, m, and s. This meant that as you throw electrons into an atom, the electrons have to arrange themselves in shells about the nucleus, as the available slots for each combination of n, l, m, and s are taken up. This is a good thing, because the chemical characteristics of an atom are chiefly defined by the arrangement of its electrons in shells, and especially by the electrons near the surface of the atom in the outer electron shells. If all electrons were at their lowest energy level of n = 1, you would not be sitting here contemplating atoms, because there would be no chemical activity in the Universe, and you would not exist. It turns out that, theoretically, all of chemistry can be derived from the quantum mechanical model of atoms.

**Quantum Implications for Softwarephysics**

Recall that the individual characters in a line of source code:

discountedTotalCost = (totalHours * ratePerHour) - costOfNormalOffset;

are each defined by 8 quantized bits, with each bit in one of two quantum states “1” or “0”, which can also be characterized as ↑ or ↓.

C = 01000011 = ↓ ↑ ↓ ↓ ↓ ↓ ↑ ↑

H = 01001000 = ↓ ↑ ↓ ↓ ↑ ↓ ↓ ↓

N = 01001110 = ↓ ↑ ↓ ↓ ↑ ↑ ↑ ↓

O = 01001111 = ↓ ↑ ↓ ↓ ↑ ↑ ↑ ↑

We may think of each character in a line of code as an atom, and each line of code as a molecular chemical reaction which ultimately produces a macroscopic effect. The 8 quantized bits for each character are the equivalent of the spins of 8 electrons in 8 electron shells that may be in either a spin up ↑ or spin down ↓ state. And the chemical characteristics of each character are determined by the arrangement of the spin up ↑ and spin down ↓ states of the bits in the character.
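The mapping above is easy to automate. A small Python sketch (the function name is my own) that renders any character’s 8 bits as spin arrows, reproducing the table above:

```python
def char_to_spins(c):
    """Render a character's 8-bit code as spin-up/spin-down arrows (1 -> up, 0 -> down)."""
    bits = format(ord(c), '08b')   # e.g. 'C' -> '01000011'
    return ' '.join('↑' if b == '1' else '↓' for b in bits)

for c in 'CHNO':
    print(c, '=', format(ord(c), '08b'), '=', char_to_spins(c))
# first line printed: C = 01000011 = ↓ ↑ ↓ ↓ ↓ ↓ ↑ ↑
```

Run over a whole line of source code, this turns the program text into one long string of quantized spins.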

I apologize for all of the math in this posting. If you feel a little confused, you are in good company. Nobody really understands any of this stuff.

Next time we will expand upon these ideas and develop the quantum mechanical formulation of softwarechemistry and try to delve a little into the cosmic interpretation of what all this means.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:

http://softwarephysics.blogspot.com/

Regards,

Steve Johnston