Friday, September 28, 2007

Software as a Virtual Substance

Before diving into thermodynamics, let’s map out the road ahead. Remember that the underlying concept of softwarephysics is that the global IT community has unintentionally created a pretty decent computer simulation of the physical Universe, which softwarephysics depicts as the Software Universe. We then use this simulation in reverse. Understanding how the physical Universe operates allows us to better model how software behaves in the Software Universe. We do this by visualizing software as a virtual substance. Now how do you model a virtual substance? It might help to think of another virtual substance that we are all more familiar with – money. Many people devote their entire educations and careers to learning how to manipulate and model the virtual substance we call money. Nobody gets squeamish about the Federal Reserve Board expanding or contracting the money supply, manipulating interest rates, or trying to raise or lower our exchange rate with other currencies. In fact, you can win a Nobel Prize for such efforts. But for the most part, money is simply a collection of bits stored in a network of computers. Is software any less real than money? Now imagine trying to run the modern world economy without the benefit of macroeconomic and microeconomic theories to model the ebb and flow of money throughout the world's economies. The aim of softwarephysics is to achieve a similar ability to model the behavior of software at differing levels of software architecture by using the physical Universe as a guide.

The Value of Effective Theories
Our current understanding of the physical universe is based upon a collection of effective theories in physics. Recall that an effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides a certain depth of understanding of the problem at hand. Although all effective theories are fundamentally “wrong”, they are still exceedingly useful. All of the technology surrounding you - your PCs, cell phones, GPS units, air conditioners, vacuum cleaners, cars, and TV sets - was built using the current approximate effective theories of physics. It’s amazing that you can build all these useful gadgets using theories that are all fundamentally “wrong”, but that just highlights the value of effective theories. The crown jewel of physics is the effective theory called QED - Quantum Electrodynamics. QED has predicted the gyromagnetic ratio of the electron, a measure of its intrinsic magnetic field, to 11 decimal places. This prediction has been validated by rigorous experimental data. As Richard Feynman has pointed out, this is like predicting the exact distance between New York and Los Angeles to within the width of a human hair! Yet most physicists believe that someday an even better effective theory will come along to replace QED. I try to carry over this idea of approximate effective theories into my personal and professional lives as well, by trying to keep in mind that my own deeply held personal opinions are also just approximations of reality. This makes it easier to live with, and work with, people of differing viewpoints.
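
As a quick sanity check on Feynman's analogy, here is a minimal Python sketch; the New York to Los Angeles distance and the width of a human hair are assumed round numbers used purely for illustration.

# Rough check of Feynman's analogy: an agreement of about one part in 10^11
# is like knowing the New York - Los Angeles distance to within a human hair.
distance_ny_la = 4.0e6   # meters, roughly the NY - LA distance (assumed round number)
hair_width = 1.0e-4      # meters, a typical human hair (assumed round number)

print("Relative size of a hair over that distance: %.1e" % (hair_width / distance_ny_la))
# Prints about 2.5e-11, in the same ballpark as an 11-decimal-place prediction.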

It might seem disheartening that we don’t really understand what is going on in our own Universe and that all we have is a set of very useful approximations of reality. But as Ayn Rand cautioned us, be sure to “check your premises”. What exactly do you mean by reality? Physicists and philosophers have been debating the nature of reality for thousands of years. My personal favorite definition is:

Physical Reality – Something that does not go away even when you stop believing in it.

I find this definition to be flexible enough to keep most people happy, including physicists, theologians, and even most philosophers. When we discuss the implications of quantum mechanics for software, you may be surprised to learn that about 60% of physicists still subscribe to the old Copenhagen interpretation of quantum mechanics (1927), in which absolute reality does not even exist. In the Copenhagen interpretation, there are an infinite number of potential realities. About 30% adhere to a variation of the “Many-Worlds” interpretation of Hugh Everett (1957), which admits an absolute reality but claims that there are an infinite number of absolute realities spread across an infinite number of parallel universes. And the remaining 10% bank on the really strange interpretations! They all agree on the underlying mathematics of quantum mechanics; they just don’t agree on what the mathematics is trying to say. It’s like trying to figure out what the mathematical formula:

Glass = 0.5

is trying to tell you. Is the glass half empty or half full? So, many physicists skirt the whole issue by adopting a positivist approach to the subject. Logical positivism is an enhanced form of empiricism, in which we do not care about how things “really” are. We are only interested in how things are observed to behave, and effective theories are a perfect fit in that regard. In softwarephysics, I am also taking a positivist approach to software. I don’t care what software “really” is; I only care about how software is observed to behave.

Softwarephysics is a Simulated Science
Just as physics is a collection of useful effective theories, softwarephysics is also a matching collection of effective theories. Because softwarephysics is a simulated science, the challenge for softwarephysics is to find the corresponding matching effective theory of physics to apply to software at each level of complexity. At the highest level, we will be using chaos theory, which is a theory of nonlinear dynamical systems developed in the 1970s and 1980s. In the physical Universe, one might apply chaos theory to the traffic patterns on the network of expressways found in a large metropolitan area such as Chicago. In a similar fashion, we could apply chaos theory to the complex traffic patterns for a large corporate website residing on 100 production servers - load balancers, firewalls, proxy servers, webservers, WebSphere Application Servers, CICS Gateway servers to mainframes, mailservers, etc. Any IT professional who has ever worked on a large corporate website might indeed guess that chaos theory would provide a fitting description of their daily life, without ever even having heard of softwarephysics! In fact, chaos theory has already been applied to computer networks by many others.
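
If you have never played with a chaotic system, the minimal Python sketch below shows the classic logistic map, a toy nonlinear dynamical system often used to introduce chaos theory; think of x as a crudely normalized load level bouncing around between 0 and 1. The model is purely illustrative, but it shows the hallmark of chaos - two almost identical starting conditions that diverge completely after a few dozen iterations.

# Logistic map: x(n+1) = r * x(n) * (1 - x(n)), with r in the chaotic regime
r = 3.9                        # growth parameter; values near 4 produce chaos
x1, x2 = 0.500000, 0.500001    # two nearly identical initial conditions

for step in range(1, 41):
    x1 = r * x1 * (1.0 - x1)
    x2 = r * x2 * (1.0 - x2)
    if step % 10 == 0:
        print("step %2d   x1 = %.6f   x2 = %.6f   difference = %.6f"
              % (step, x1, x2, abs(x1 - x2)))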

Thermodynamics
If we pull one of the cars out of a traffic jam on one of the expressways and take a look under the hood, we come to the next level in the hierarchy of effective theories - thermodynamics, which was developed during the last half of the 19th century. Thermodynamics allows us to understand the macroscopic behaviors of matter. For example, thermodynamics allows us to understand what is going on in the cylinders of a car by relating the pressures, temperatures, volumes, and energy flows of the gases in the cylinders while the engine is running. We can apply these same basic ideas to the macroscopic behavior of software at the program level. The macroscopic behavior of a program can be viewed as the functions the program performs, the speed with which those functions are performed, and the stability and reliability of its performance.
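
As a minimal illustration of the kind of macroscopic bookkeeping that thermodynamics provides, the Python sketch below uses the ideal gas law, PV = nRT, to relate the pressure, volume, and temperature of the gas in a cylinder before and after compression; the volumes and temperatures are assumed round numbers chosen only for illustration.

# Ideal gas law: P * V = n * R * T, so for a fixed amount of gas
# P2 / P1 = (V1 / V2) * (T2 / T1)
P1 = 101325.0          # Pa, initial pressure (about 1 atmosphere)
V1, V2 = 500.0, 50.0   # cm^3, cylinder volume before and after compression (assumed)
T1, T2 = 300.0, 700.0  # K, gas temperature before and after compression (assumed)

P2 = P1 * (V1 / V2) * (T2 / T1)
print("Pressure after compression: about %.1f atmospheres" % (P2 / 101325.0))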

Statistical Mechanics
The next lower level theory in the hierarchy of effective theories is called statistical mechanics. Statistical mechanics was also developed during the last half of the 19th century and allows us to derive the thermodynamic properties of the gases in the cylinders of a car by viewing the gases as a large collection of molecules bouncing around in the cylinders. It also provides us with a definition of information and some very powerful insights into how information operates in the physical Universe. We will see that we can apply these ideas to software at the line of code level by examining the interplay of information and entropy (disorder) at the coding level.
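
We will define information more carefully in a later posting, but as a small preview, the Python sketch below computes the Shannon entropy of the characters in a string, one common way of measuring how much information, in bits per character, the string carries; the sample strings are made up purely for illustration.

import math
from collections import Counter

def shannon_entropy(text):
    # Shannon entropy in bits per character: H = -sum(p * log2(p))
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A monotonous string carries less information per character than a varied one.
print(shannon_entropy("AAAAAAAAAAAAAAAA"))    # 0.0 bits per character
print(shannon_entropy("total = total + 1;"))  # several bits per character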

QED and Chemistry
Going still deeper we come to QED (Quantum Electrodynamics), which is the basis for chemistry at the molecular level. QED is an effective theory that reached maturity in 1948 and which is an amalgam of two other effective theories: quantum mechanics (1925) and Einstein’s special theory of relativity (1905). We will use ideas from QED at the line of code level by depicting lines of code as interacting organic molecules, similar to the chemical reactions back at the refinery that made the gasoline for the car in question.

Quantum Mechanics
Deeper still we finally come to quantum mechanics (1925). Quantum mechanics describes the structure and behavior of individual atoms and we will use concepts from quantum mechanics to describe software at the level of individual characters in source code like the carbon and hydrogen atoms found in gasoline molecules.

In summary:

Software Element          Physical Counterpart     Effective Theory
Computer Networks         Expressway Traffic       Chaos Theory
Programs                  Gas in a Cylinder        Thermodynamics and Statistical Mechanics
Lines of Code             Organic Molecules        QED and Chemistry
Source Code Characters    Atoms                    Quantum Mechanics


When we are finished with all of this, we will have a collection of effective theories for softwarephysics that will allow us to define a self-consistent model for software behavior that can be used for making day-to-day decisions in IT and provide a direction for thought. It will also lead us to the fundamental problem of software and the suggestion that a biological solution is in order.

Next time we will continue on with the saga of the steam engine designers and how their struggle led to the development of the first and second laws of thermodynamics, the concept of entropy (disorder), and the discovery of information itself.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Sunday, September 23, 2007

A Lesson From Steam Engines

Let’s get back to exploring the benefits of applying science to computer science. Since the rebirth of science about 400 years ago, we have had two major economic revolutions: the Industrial Revolution and the Information Revolution. During the Industrial Revolution, mankind began to manipulate large quantities of energy, while during the Information Revolution, mankind began to manipulate large quantities of information. Both have had huge economic impacts. We can date the dawn of the Information Revolution to the spring of 1941, when Konrad Zuse built the Z3 with 2400 telephone relays. Similarly, we can date the dawn of the Industrial Revolution to 1712, when Thomas Newcomen invented the first commercially successful steam engine.

As an IT professional you are a warrior in the Information Revolution. You work with information all day long. You create, maintain, and operate software which processes information in huge quantities. Software is also a form of information, so essentially you get paid to process information with information. Have you ever stopped to wonder what information is? Is information “real” or just something we made up? Is information a tangible part of the physical Universe, or is it just a useful human contrivance like the names we use for the days of the week? Over the past 400 years, the role of information in physics has taken on more and more significance, to the point that many eminent physicists, such as John Wheeler, have proposed that the physical Universe is simply made out of information - “It from Bit”. Over the years, the concept of information has arisen in physics in several effective theories, most notably in thermodynamics and Einstein’s special theory of relativity. Today we will lay the foundations for the concept of information in thermodynamics, and leave Einstein for another time. Now let’s see if we can learn a lesson from the past warriors of the Industrial Revolution.

The early factories of the 18th century were forced to run on water power. This required them to be located in the highlands near fast-moving water, far from the lowland cities where workers and consumers resided and distant from many natural resources required for production. What was needed was a portable source of power. The Newcomen steam engine consisted of an iron cylinder with a movable piston. Low-pressure steam was sucked into the cylinder by the rising piston. When the piston reached its maximum extent, a cold water spray was shot into the cylinder, causing the steam to condense and form a partial vacuum in the cylinder. External atmospheric air pressure then forced the piston down during the power stroke. In the 18th century, steam engines used low-pressure steam and were thus called atmospheric steam engines because the power stroke came from atmospheric air pressure. High-pressure steam boilers of the day were simply too dangerous to use for steam engines. The Newcomen steam engine was used primarily to pump water out of coal mines. It had an efficiency of about 1%, meaning that about 1% of the energy in the coal used to fuel the engine ended up as useful mechanical work, while the remaining 99% ended up as useless waste heat. This did not bother owners of steam engines in the 18th century because they had never even heard of the term energy. The concept of energy did not come into existence until 1850, when Rudolf Clausius published the first law of thermodynamics. However, they did know that the Newcomen steam engine used a lot of coal. This was not a problem if you happened to own a coal mine, but for 18th-century factory owners, the Newcomen steam engine was far too expensive for their needs.
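
To get a feel for why condensing steam into a partial vacuum could do useful work, here is a minimal Python sketch estimating the force of the atmosphere on a Newcomen piston during the power stroke; the cylinder diameter and the residual pressure left after condensation are assumed round numbers, not historical measurements.

import math

cylinder_diameter = 0.5          # meters, an assumed round number
atmospheric_pressure = 101325.0  # Pa, standard atmospheric pressure
residual_pressure = 20000.0      # Pa left in the cylinder after condensation (assumed)

piston_area = math.pi * (cylinder_diameter / 2.0) ** 2
force = (atmospheric_pressure - residual_pressure) * piston_area
print("Force on the piston during the power stroke: about %.0f newtons" % force)
# Roughly 16,000 newtons - the weight of about 1.6 metric tons pushing the piston down.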

You can see the oldest surviving Newcomen steam engine at the Henry Ford Museum in Dearborn, Michigan, just outside of Detroit, as well as Thomas Edison’s original Menlo Park Laboratory, which has also been relocated to the adjoining Greenfield Village museum. This engine was built in 1760 and pumped water from an English coal pit until 1834. I had the chance to see this steam engine a few years ago. It was as big as a house and weighed in at a whopping 15 horsepower, about the horsepower of a modern riding lawnmower. You might wonder why anybody would go to the trouble of building such an engine, but you have to compare it to the effort involved in the care and feeding of 15 horses!

Figure 1 – The first commercially successful steam engine was invented by Thomas Newcomen in 1712. The Newcomen steam engine had an efficiency of 1%.

In 1763, James Watt was a handyman at the University of Glasgow, building and repairing equipment for the University. One day the Newcomen steam engine at the University broke, and Watt was called upon to fix it. During the course of his repairs, Watt realized that the main cylinder lost a lot of heat through conduction and that the water spray, which cooled the entire cylinder below 212 °F, required a lot of steam to reheat the cylinder above 212 °F on the next cycle. In 1765, Watt had one of those scientific revelations in which he realized that he could reduce the amount of coal required by a steam engine if he could just keep the main cylinder above 212 °F for the entire cycle. He came up with the idea of using a secondary condensing cylinder cooled by a water jacket to condense the steam instead of using the main cylinder. He also added a steam jacket to the main cylinder to guarantee that it always stayed above 212 °F for the entire cycle. That same year, Watt conducted a series of experiments on scale-model steam engines that proved out his ideas.

Figure 2 – In 1765, James Watt improved the Newcomen steam engine by introducing a condensing cylinder to condense steam during the power stroke and by using a steam jacket to always keep the main cylinder at a temperature higher than the boiling point of water. Watt's improved steam engine had an efficiency of 3% and, on that basis, launched the Industrial Revolution.

To learn more about the Newcomen and Watt steam engines go to:

http://technology.niagarac.on.ca/staff/mcsele/newcomen.htm

Watt’s steam engine had an efficiency of 3%, which may still sound pretty bad, but that meant it only used 1/3 the coal of a Newcomen steam engine with the same horsepower. So the Watt steam engine became an economically viable option for 18th-century factory owners. We will discuss the second law of thermodynamics at a later time, but just for the sake of comparison, the second law allows us to calculate that the maximum efficiency of a steam engine running at a room temperature of 72 °F using 212 °F steam is 21%.
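
The 21% figure comes from the Carnot limit, 1 - Tcold/Thot, with both temperatures measured on an absolute scale. Here is a minimal Python sketch of that arithmetic; the only assumption is the standard conversion from Fahrenheit to kelvins.

def fahrenheit_to_kelvin(f):
    return (f - 32.0) * 5.0 / 9.0 + 273.15

T_cold = fahrenheit_to_kelvin(72.0)    # room temperature
T_hot = fahrenheit_to_kelvin(212.0)    # boiling-point steam

carnot_efficiency = 1.0 - T_cold / T_hot
print("Maximum possible efficiency: %.0f%%" % (carnot_efficiency * 100.0))   # about 21%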

The Industrial Revolution was delayed by more than 50 years because nobody bothered to try to understand what was going on in a Newcomen steam engine. This was overcome by James Watt when he unknowingly applied the scientific method to steam engines. Based upon some empirical evidence gathered while repairing a Newcomen steam engine, he had a moment of inspiration. He then proceeded to deduce the implications of his revelation and came up with the design for a new kind of steam engine. He then tested his design with a series of controlled experiments.

We are now some 60+ years into the Information Revolution, and like our counterparts in the Industrial Revolution, we are still struggling with the inefficiency of creating and operating software. And like our counterparts, we know that we are very inefficient at producing software, but we really do not have a clue as to just how inefficient we truly are. Softwarephysics proposes that we stop and take a look into our engine compartment.

The Problem with Common Sense
Like the 18th-century engineers struggling with steam engines, IT professionals have developed a common sense approach to software based upon a set of heuristics. In IT, you quickly learn that when you write software, it never works the first time. If you are lucky, it will work on the 10th try. If it does not work by the 100th try, you need to look for another profession. But common sense is just another effective theory. Recall that an effective theory is an approximation of reality that only holds true over a certain restricted range of conditions and only provides a certain depth of understanding of the problem at hand. Take a ballpoint pen from your desk and one of your shoes. Drop them both from shoulder height and see which one hits the ground first. For nearly 2,000 years, common sense and the teachings of Aristotle held that the shoe would hit the ground first. It was not until the late 16th century that Galileo demonstrated that they hit the ground at the same time. He also discovered that if you doubled the time of a fall, the distance traveled increased by a factor of four (the square of the time). This was one of the first uses of a mathematical model in physics. The purpose of softwarephysics is to go beyond IT common sense and come up with an effective theory of software behavior at a deeper level.
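
Galileo's relationship is the familiar d = ½gt². The minimal Python sketch below simply checks the claim that doubling the time of a fall quadruples the distance traveled.

g = 9.8   # meters per second squared, the acceleration due to gravity near Earth's surface

def fall_distance(t):
    # Distance fallen from rest after t seconds: d = 0.5 * g * t^2
    return 0.5 * g * t * t

print(fall_distance(1.0))   # about  4.9 meters after 1 second
print(fall_distance(2.0))   # about 19.6 meters after 2 seconds - four times as far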

Next time we will continue on with thermodynamics and see how it led to an effective theory of information and how softwarephysics incorporates that theory into a model for software behavior.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Monday, September 17, 2007

How To Think Like A Scientist

I was just about to tell you about applying science to computer science when I realized I was getting ahead of myself. First I need to define what I mean by science. As with all of softwarephysics, this is my own operational definition. However, I think it is pretty close to the mainstream concept of what science is as held by the majority of the scientific community.

First of all, science is a way of thinking. Science has a methodology to aid in this way of thinking, one that has been very successful over the past 400 years. The purpose of the scientific method is to formulate theories or models of the Universe. A scientific model is a simplified approximation of reality that allows people to gain insight into the structure and operation of reality itself. Scientists create models to explain observations, predict future observations, and provide direction for thought. The scientific method is a little different from the way most people think in their daily lives, so let’s examine some of the ways people come up with ideas, with a little help from our philosophical friends.

There are three main approaches to gaining knowledge:

1. Inspiration/Revelation
These are ideas that just come out of the blue with no apparent source. I find that IT people are very good at this. For example, on a conference call for a website outage, I am frequently surprised at the incredible level of troubleshooting skill of many of the participants. I often wonder to myself, “Where did that insight come from?” when somebody nails a root cause out of the blue.

Most of the great ideas in science have also come from inspiration/revelation. For example, in 1900 Max Planck had the insight that he could solve the Ultraviolet Catastrophe by assuming that the oscillating charged particles in the walls of a room could only emit and absorb energy in certain fixed or quantized amounts. The classical electromagnetic theory of the day predicted that the room you are currently sitting in should be bathed in a lethal level of ultraviolet light and x-rays and that the walls of the room should be at a temperature of absolute zero, having turned over all of their available energy into zapping you to death. This was clearly evidence of a theory missing the mark by a wide margin! Planck thought that his quantization was just a mathematical trick, but in 1905 Einstein had the revelation that maybe this was not just a trick. Maybe light did not always behave as an electromagnetic wave. Maybe light sometimes behaved like a stream of particles, which we now call photons, that only came in fixed or quantized amounts of energy. The fixed energies of the photons would match up with the fixed amounts of energy exchanged by the charged particles in the walls of your room. In 1924, Louis de Broglie had another revelation and suggested that particles, like electrons, might behave like waves too, just as a stream of photons sometimes behaved like an electromagnetic wave. In 1925, Werner Heisenberg and Erwin Schrödinger developed quantum mechanics based upon these insights, and in 1947 the transistor at the heart of your PC was invented at Bell Labs based upon quantum mechanics.
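
For the mathematically curious, the minimal Python sketch below compares the classical Rayleigh-Jeans prediction with Planck's quantized formula for the thermal radiation inside a room at about 300 K. The classical formula blows up as the wavelength shrinks toward the ultraviolet - that runaway is the Ultraviolet Catastrophe - while Planck's formula stays finite.

import math

h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann's constant, J/K
T = 300.0       # room temperature in kelvins

def rayleigh_jeans(wavelength):
    # Classical spectral radiance: grows without bound as the wavelength shrinks
    return 2.0 * c * k * T / wavelength**4

def planck(wavelength):
    # Planck's spectral radiance: stays finite at short wavelengths
    return (2.0 * h * c**2 / wavelength**5) / (math.exp(h * c / (wavelength * k * T)) - 1.0)

for wl in (1e-5, 1e-6, 1e-7):   # from 10 microns down into the ultraviolet
    print("wavelength %.0e m   classical %.3e   Planck %.3e"
          % (wl, rayleigh_jeans(wl), planck(wl)))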

The limits of Inspiration/Revelation:
You never know for sure that your idea is correct.

2. Deductive Rationalism
With deductive rationalism, you make a few postulates, which usually come from inspiration/revelation, and then you deduce additional ideas or truths from them using pure rational thought. Plato and Descartes were big fans of deductive rationalism. It goes like this:

If A = B
And B = C
Then A = C

The limits of deductive rationalism:
In 1931, Kurt Gödel proved that no consistent mathematical theory rich enough to include arithmetic can prove all of the true statements expressible within it, and that no such theory can prove its own consistency (that it never contradicts itself). So you cannot deduce all truths.

3. Inductive Empiricism
With inductive empiricism, you make a lot of observations and then reverse the deductive rationalism process. Aristotle and John Locke were big fans of inductive empiricism. If I observe that A = C 99.99% of the time, then I will assume that A really is equal to C, and I will chalk up the 0.01% discrepancy to observational error. I don’t know anything about B at this point because I have no observations of B’s state. However, if I make some more observations and find that B = C 99.99% of the time, then I will infer that B really is equal to C, and therefore, that A really is equal to B too.

If A = C 99.99% of the time
And B = C 99.99% of the time
Then B = C, A = C, and A = B

The limits of empirical induction:
The above may all just be coincidences, and you have to have good technology in order to make accurate observations. Most Ancient Greek philosophers did not like inductive empiricism because they thought that all physical measurements on Earth were debased and corrupt. They believed in the power of pure uncorrupted rational thought. This was largely due to the poor level of measurement technology they possessed at the time (they had no Wily). But even in the 17th century, when Galileo was demonstrating experiments to his patrons that proved, contrary to Aristotle’s teachings, that all bodies fall with the same acceleration, they thought his experimental demonstrations were magic tricks!

People get into trouble when they only use one or two of the above three approaches to knowledge to make decisions. I know that I do. Politicians have frequently been known to not use any of them at all! The power of the scientific method is that it uses all three of the above approaches to knowledge. Like the checks and balances in the U.S. Constitution, this helps to keep you out of trouble.

The Scientific Method
1. Formulate a set of hypotheses based upon inspiration/revelation with a little empirical inductive evidence mixed in.

2. Expand the hypotheses into a self-consistent model or theory by deducing the implications of the hypotheses.

3. Use more empirical induction to test the model or theory by analyzing many documented field observations or performing controlled experiments to see if the model or theory holds up. It helps to have a healthy level of skepticism at this point. As the philosopher Karl Popper pointed out, you cannot prove a theory to be true; you can only prove it to be false. Galileo pointed out that the truth is not afraid of scrutiny; the more you pound on the truth, the more you confirm its validity.

Effective Theories
The next concept that we need to understand is that of effective theories. Physics currently does not have an all-encompassing unifying theory or model. Researchers are looking for a TOE – a Theory of Everything – but currently we do not have one. Instead, we have a series of pragmatic effective theories. An effective theory is an approximation of reality that only works over a certain range of conditions. For example, Newtonian mechanics allowed us to put men on the Moon, but it cannot explain how atoms work or why the clocks on GPS satellites run faster than clocks on Earth. All of the current theories in physics are effective theories that only work over a certain range of conditions. Physics currently comes in three sizes – Small, Medium, and Large:

• Small – less than 10⁻¹⁰ meter and tiny masses
Quantum Mechanics – atomic bombs and transistors
• Medium – 19th-Century Classical Physics
Newtonian Mechanics – space shuttle launches
Maxwell’s Electromagnetic Theory – electric motors
Thermodynamics – air conditioners
• Large – greater than 20,000 miles/sec or very massive objects
Einstein’s General Theory of Relativity – cosmology, black holes, and GPS satellites

Since all of the other sciences are built upon a foundation of underlying effective theories in physics, that means that all of science is “wrong”! But knowing that you are “wrong” gives you a huge advantage over people who know that they are “right” because knowing that you are “wrong” allows you to keep an open mind to search for models that are better approximations of reality.

In addition to covering different ranges of conditions, effective theories also come in different levels of depth, with more profound effective theories providing deeper levels of insight. For example, Charles’ Law is a very high-level effective theory that states that at a constant pressure, the volume of a gas in a cylinder is proportional to the absolute temperature of the gas. If you double the absolute temperature of a gas in a cylinder having a freely moving piston, its volume will expand and double in size. A more profound effective theory for the same phenomenon is called statistical mechanics, which views the gas as a large number of molecules bouncing around in the cylinder. When you double the temperature of the gas, you double the energy of the molecules, so they bounce around faster and take up more room. An even deeper effective theory is called quantum mechanics, which views the molecules as standing waves in the cylinder.

The goal of softwarephysics is to provide a pragmatic high-level effective theory of software behavior at a level of complexity similar to that of Charles’ Law. Having an effective theory of software behavior is useful because it allows you to make day-to-day IT decisions with more confidence. For example, suppose you learn 30 minutes before your maintenance window begins that you have a new EJB that must go into production, but that it corrupts a certain new database transaction 0.5% of the time. A young programmer on your team quickly produces a “fixed” version of the EJB, but he does not have time to regression test it. Do you put the “fixed” EJB into production, or do you go with the one with the known 0.5% bug, in the hope that the corrupted database records can be corrected later? As we shall see later, softwarephysics helps in such situations.

The Most Difficult Thing in Science
The final concept of the scientific method is the most difficult for human beings. In science, you are not allowed to believe in things. You are not allowed to accept models or theories without supporting evidence. However, you are allowed to have a level of confidence in models and theories. For example, I do not “believe” in Newtonian mechanics because I know that it is “wrong”, but I do have a high level of confidence that it could launch me into an Earth orbit. I might get blown up on the launch pad, but like all of our astronauts, I would bet my life on Newtonian mechanics getting me into an Earth orbit instead of plunging me into the Sun, if I do my calculations properly! Similarly, I have a low level of confidence in the old miasma theory of disease. In the early 19th century, the scientific community thought that diseases were caused by miasma, a substance found in foul-smelling air. And there was a lot of empirical evidence to support this model. For example, people who lived near foul-smelling 19th-century rivers were more prone to dying of cholera than people who lived further from the rivers. We had death certificate data to prove that empirical fact. If you were running a cesspool cleaning business in the 19th century, you knew that on the first day of work your rookies were likely to get sick and vomit when they were exposed to the miasma from their first cesspool, and that a few days later they might come down with a fever and die on you! The miasma theory of disease even had predictive power! If you were running a 19th-century cesspool cleaning business in the middle of a cholera epidemic, and you shut down your operation during the epidemic while your competitors kept theirs open, you would probably enjoy a larger market share when the epidemic subsided. This just highlights the dangers of relying too heavily on the inductive empiricism approach to gaining knowledge.

As a human being, it is hard not to believe in things. I have been married for 32 years and I have two wonderful adult children. And I truly believe in them all! If somebody confronted me with incontrovertible evidence that one of my children had embezzled funds, my first thought would be that there must be some horrible mistake. However, in scientific matters, you are not allowed this luxury.

The Scientific Method and Softwarephysics
So what does all of this have to do with softwarephysics? Softwarephysics is a high-level effective theory of software behavior. It is a simulated science for the simulated Software Universe that we are all immersed in. Let me explain. In the 1970s, I was an exploration geophysicist writing FORTRAN software to simulate geophysical observations for oil companies. When I transitioned into IT in 1979, it seemed like I was trapped in a frantic computer simulation, just like the ones I used to program for oil companies. After a few months in Amoco’s IT department, I had the following inspiration/revelation:

The Equivalence Conjecture of Softwarephysics

Over the past 70 years, through the uncoordinated efforts of over 50 million independently acting programmers to provide the world with a global supply of software, the IT community has accidentally spent more than $10 trillion creating a computer simulation of the physical Universe on a grand scale – the Software Universe.

I soon realized that I could use this simulation in reverse. By understanding how the physical Universe behaved, I could predict how the Software Universe would react to stimuli, and I proceeded to deduce many implications for software behavior based upon this insight. This was a bit of a role reversal; in physics, we use software to simulate the behavior of the Universe, while in softwarephysics we use the Universe to simulate the behavior of software.

The one problem that I have always had with softwarephysics has been with the confirmation of the model via inductive empiricism. How do you produce and analyze large amounts of documented field observations of software behavior or run controlled experiments for a simulated science? “Hey, Boss I would like to run a double-blind experiment where we install software into production, but only half of it goes through UAT testing. The other half comes straight from the programmers as is, and we don’t know which is which in advance”. Unfortunately, I have always had a full-time job without the luxury of graduate students! So I am relying on 30+ years of personal anecdotal observation of software behavior to offer softwarephysics as a working hypothesis.

Next time I will describe why applying science to computer science is a good idea using the challenges faced by steam engine designers in the 18th century as a case study.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Saturday, September 15, 2007

So You Want To Be A Computer Scientist?

As professional IT people, we are constantly being called upon to innovate. Unfortunately, at most places where I have worked over the past 32 years, that has meant innovating using conventional ideas – a truly difficult thing to do. I know that softwarephysics can be a bit daunting, especially as presented in SoftwarePhysics 101 – The Physics of Cyberspacetime, because the course is designed for several audiences – IT people, physicists, and biologists – and none of these folks talk to each other much. So I would like to break down softwarephysics into some smaller chunks that might be easier to absorb from an IT perspective. I work with a large number of my fellow IT people on a daily basis, and I frequently hear that “Why is this happening to me?” sound in their voices at 3:00 AM. This might help.

Let’s begin where it all started, in the spring of 1941, when Konrad Zuse built the Z3 with 2400 electromechanical telephone relays. The Z3 was the world’s first full-fledged computer. You don’t hear much about Konrad Zuse because he was working in Germany during World War II. The Z3 had a clock speed of 5.33 Hz and could multiply two very large numbers together in 3 seconds. It used a 22-bit word and had a total memory of 64 words. It only had two registers, but it could read in and run programs stored on punched tape. In 1945, while Berlin was being bombed by over 800 bombers each day, Zuse worked on the Z4 and developed Plankalkuel, the first high-level computer language, more than 10 years before the appearance of FORTRAN in 1956. Zuse was able to write the world’s first chess program with Plankalkuel. And in 1950 his startup company Zuse-Ingenieurbüro Hopferau began to sell the world’s first commercial computer, the Z4, 10 months before the sale of the first UNIVAC.

Figure 1 – Konrad Zuse with a reconstructed Z3 in 1961 (click to enlarge)


Figure 2 – Block diagram of the Z3 architecture (click to enlarge)


Now in the 66 years since, hardware has improved by a factor of about a billion. You can go to Best Buy with $500 and buy a machine that is approximately a billion times faster than the Z3 with nearly a billion times as much memory. So how much progress have we made on the software side of computer science in this same period of time? How far have we come since Plankalkuel? Now be careful! A billion seconds is about 32 years, and I know that some of you have not quite reached that milestone yet. I would estimate that, at most, we are perhaps 100 – 1,000 times better off at creating, maintaining, and operating software than Zuse was writing Plankalkuel on punched tape. And I think I am being generous here. So although we have made great strides in software, how come the hardware guys beat us out by a factor of 1 – 10 million over the past 60-some years? My suggestion is that this was not a fair fight because the hardware guys were cheating - they were using science! Yes, softwarephysics makes the outrageous suggestion that computer scientists try using science! This has already started to happen in academic computer science with the Biologically Inspired Computing community spread across many universities, but it has not yet filtered down much to the commercial IT community.

Next time I would like to discuss why in the world would you possibly want to apply science to computer science? People working on steam engines in the 18th century asked this very same question.

So what happened to Konrad Zuse? Zuse died in 1995 after making many contributions to computing that you use in your job on a daily basis. You can read about his adventures in computing in his own words at:

http://ei.cs.vt.edu/~history/Zuse.html

Being the unsung genius that he was, Zuse published Calculating Space in 1967, in which he proposed that the physical Universe was a giant computer! This crazy idea has recently been adopted and expanded upon by such huge intellects as the physicists John Wheeler, Seth Lloyd, David Deutsch, and many others now working on quantum computers. In 1687, Newton published his Principia, in which he presented the world with Newtonian mechanics. The Newtonian clockwork model of the Universe, which depicted the world as a huge machine relentlessly moving along deterministic paths, dominated Western thought throughout the 18th and 19th centuries. But the rise of quantum mechanics and chaos theory in the 20th century has recently caused many physicists and philosophers to adopt a new model of the Universe, one which depicts the Universe as a huge quantum computer constantly calculating how to behave.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston