Thursday, October 22, 2015

Don't ASAP Your Life Away

For the benefit of international readers let me begin with a definition:

ASAP - An American acronym for "As Soon As Possible", meaning please drop everything and do this right now instead.

I am now a 64-year-old IT professional, planning to work until I am about 70 years old if my health holds up. Currently, I am doing middleware work from home for the IT department of a major corporation, and only go into the office a few times each year, which is emblematic of my career trajectory towards retirement. Now I have really enjoyed my career in IT all of these years, but having been around the block a few times, I would like to offer a little advice to those just starting out in IT, and that is to be sure to pace yourself for the long haul. You really need to dial it back a bit to go the distance. Now I don't want this to be seen as a negative posting about careers in IT, but I personally have seen too many bright young IT professionals burn out from overexposure to stress and long hours, and that is a shame. So dialing it back a bit should be seen as a positive recommendation. And you have to get over thinking that dialing it back to a tolerable long-term level makes you a lazy, worthless person. In fact, dialing it back a little will give you the opportunity to be a little more creative and introspective in your IT work, and maybe actually come up with something really neat in your IT career.

This all became evident to me back in 1979 when I transitioned from being a class 9 exploration geophysicist in one of Amoco's exploration departments to become a class 9 IT professional in Amoco's IT department. One very scary Monday morning, I was conducted to my new office cubicle in Amoco's IT department, and I immediately found myself surrounded by a large number of very strange IT people, all scurrying about in a near state of panic, like the characters in Alice in Wonderland. After 36 years in the IT departments of several major corporations, I can now state with confidence that most corporate IT departments can best be described as "frantic" in nature. This new IT job was a totally alien experience for me, and I immediately thought that I had just made a very dreadful mistake. Granted, I had been programming geophysical models for my thesis and for oil companies ever since taking a basic FORTRAN course back in 1972, but that was the full extent of my academic credentials in computer science. I immediately noticed some glaring differences between my two class 9 jobs in the same corporation. As a class 9 geophysicist, I had an enclosed office on the 52nd floor of the Amoco Building in downtown Chicago, with a door that actually locked, and a nice view of the north side of the Chicago Loop and Lake Michigan. With my new class 9 IT job at Amoco, I moved down to the low-rent district of the Amoco Building, on the 10th floor where the IT department was located, to a cubicle with walls that did not provide very much privacy. Only class 11 and 12 IT professionals had relatively secluded cubicles with walls that offered some degree of privacy. Later I learned that you had to be a class 13 IT Manager, like my new boss, to get an enclosed office like I had back up on the 52nd floor. I also noticed that the stress levels of this new IT job had increased tremendously over my previous job as an exploration geophysicist.
As a young geophysicist, I was mainly processing seismic data on computers for the more experienced geophysicists to interpret and to plan where to drill the next exploration wells. Sure there was some level of time-urgency because we had to drill a certain number of exploration wells each year to maintain our drilling concessions with foreign governments, but still, work proceeded at a rather manageable pace, allowing us ample time to play with the processing parameters of the software used to process the seismic data into seismic sections.

Figure 1 - Prior to becoming an IT professional, I was mainly using software to process seismic data into seismic sections that could be used to locate exploration wells.

However, the moment I became an IT professional, all of that changed. Suddenly, everything I was supposed to do became a frantic ASAP effort. It is very difficult to do quality work when everything you are supposed to do is ASAP. Projects would come and go, but they were always time-urgent and very stressful, to the point that it affected the quality of the work that was done. It seemed that there was always the temptation to simply slap something into production to hit an arbitrary deadline, ready or not, and many times we were forced to succumb to that temptation. This became more evident when I moved from Applications Development to Operations about 15 years ago, and I then had to live with the sins of pushing software into production before it was quite ready for primetime. In recent decades I also noticed a tendency to hastily bring IT projects in through heroic efforts of breakneck activity, and for IT Management to then act as if that were actually a good thing after the project was completed. When I first transitioned into IT, I also noticed that I was treated a bit more like a high-paid clerk than a highly trained professional, mainly because of the time pressures of getting things done. One rarely had time to properly think things through. I seriously doubt that most business professionals would want to hurry their surgeons along while under the knife, yet they think nothing of hurrying along their IT support professionals.

You might wonder why I did not immediately run back to exploration geophysics in a panic. There certainly were enough jobs for an exploration geophysicist at the time because we were just experiencing the explosion of oil prices resulting from the 1979 Iranian Revolution. However, my wife and I were both from the Chicago area, and we wanted to stay there. In fact, I had just left a really great job with Shell in Houston to come to Amoco's exploration department in Chicago for that very reason. However, when it was announced about six months after my arrival at Amoco that Amoco was moving the Chicago exploration department to Houston, I think the Chief Geophysicist who had just hired me felt guilty, and he found me a job in Amoco's IT department so that we could stay in Chicago. So I was determined to stick it out for a while in IT, until something better might come along. However, after a few months in Amoco's IT department, I began to become intrigued. It seemed as though these strange IT people had actually created their own little simulated universe that, seemingly, I could explore on my own. It also seemed to me that my new IT coworkers were struggling because they did not have a theoretical framework to work from, like I had had in Amoco's exploration department. That is when I started working on softwarephysics. I figured that if you could apply physics to geology, why not apply physics to software? I then began reading the IT trade rags to see if anybody else was doing similar research, and it seemed as though nobody else on the planet was thinking along those lines, which raised my level of interest in doing so even higher.

But for the remainder of this posting, I would like to explore some of the advantages of dialing it back a bit by going back to a 100-year-old case study. I just finished reading Miss Leavitt's Stars - The Untold Story of the Woman Who Discovered How to Measure the Universe (2005) by George Johnson, a biography of Henrietta Swan Leavitt, who in 1908 discovered the Luminosity-Period relationship of Cepheid variables that allowed Edwin Hubble in the 1920s to calculate the distances to external galaxies, and ultimately, to determine that the Universe was expanding. This discovery was certainly an example of work worthy of a Nobel Prize that went unrewarded. Henrietta Leavitt started out as a human "computer" at the Harvard College Observatory in 1893, examining photographic plates in order to tabulate the locations and magnitudes of stars for 25 cents/hour. Brighter stars made larger spots on photographic plates than dimmer stars, so it was possible to determine the magnitude of a star on a photographic plate by comparing its spot to the spots of stars with known magnitudes. She also worked on tabulating data on the varying brightness of variable stars. Variable stars were located by overlaying a negative plate, consisting of a white sky containing black stars, with a positive plate, consisting of a dark sky containing white stars. The two plates were taken some days or weeks apart in time. Then by holding both superimposed plates up to the light from a window, one could flip them back and forth, looking for variable stars. If you saw a black dot with a white halo or a white dot with a black halo, you knew that you had found a variable star.

What Henrietta Leavitt noted was that certain variable stars in the Magellanic Clouds, called Cepheid variables, varied in luminosity in a remarkable way. The Large Magellanic Cloud is about 160,000 light years away, while the Small Magellanic Cloud is about 200,000 light years distant. Both are nearby small irregular galaxies. The important point is that all of the stars in each Magellanic Cloud are at about the same distance from the Earth. What Henrietta Leavitt discovered was that the Cepheid variables in each Magellanic Cloud varied such that the brighter Cepheid variables had longer periods than the fainter ones. Since all of the Cepheid variables in each of the Magellanic Clouds were at approximately the same distance, that meant that the Cepheid variables that appeared brighter when viewed from the Earth actually were intrinsically brighter. Now if one could find the distance to some nearby Cepheid variables, using the good old parallax method displayed in Figure 5, then by simply measuring the luminosity period of a Cepheid variable, it would be possible to tell how bright the star really was - see Figure 6. However, it was a little more complicated than that because there were no Cepheid variables within the range over which the parallax method worked; they were all too far away. So instead, astronomers used the parallax method to map the stars in our neighborhood and to determine how fast the Sun was moving relative to them. Then by recording the apparent slow drift of distant Cepheid variables relative to even more distant stars, caused by the Sun moving along through our galaxy, it was possible to estimate the distance to a number of Cepheid variables.
Note that obtaining the distance to a number of Cepheid variables by other means is no longer the challenge that it once was, because from November 1989 to March 1993 the Hipparcos satellite measured the parallax of 118,200 stars to an accuracy of about one milliarcsecond, and 273 Cepheid variables were amongst the data, at long last providing a direct measurement of some Cepheid variable distances. Once the distance to a number of Cepheid variables was determined by other means, it allowed astronomers to create the Luminosity-Period plot of Figure 6. Then by comparing how bright a Cepheid variable appeared in the sky relative to how bright it really was, it was possible to figure out how far away the Cepheid variable actually was. That was because if two Cepheid variables had the same period, and therefore the same intrinsic brightness, but one star appeared 100 times dimmer in the sky than the other star, that meant that the dimmer star was 10 times further away than the brighter star, because the apparent luminosity of a star falls off as the square of the distance to the star. Additionally, it also turned out that the Cepheid variables were extremely bright stars, many thousands of times brighter than our own Sun, so they could be seen from great distances, and could even be seen in nearby galaxies. Thus it became possible to find the distances to galaxies using Cepheid variables.
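The inverse-square arithmetic above is easy to verify for yourself. Here is a minimal sketch in Python (the function name and sample numbers are my own, purely for illustration):

```python
import math

def distance_ratio(dimming_factor):
    """Two stars share the same intrinsic brightness. If one appears
    dimming_factor times fainter than the other, it must be
    sqrt(dimming_factor) times farther away, because apparent
    brightness falls off as the square of the distance."""
    return math.sqrt(dimming_factor)

# A Cepheid with the same period as another (and hence the same
# intrinsic brightness) that appears 100 times dimmer in the sky
# must be 10 times farther away.
print(distance_ratio(100.0))  # 10.0
```

The same square-root relationship is what makes Cepheid variables such good yardsticks: measure the period, look up the intrinsic brightness, compare it to the apparent brightness, and the distance follows.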

Figure 2 - Henrietta Swan Leavitt July 4, 1868 - December 12, 1921 died at an age of 53.

Figure 3 - The human computers of the Harvard Observatory were used to tabulate the locations and magnitudes of stars on photographic plates and made 25 cents/hour. Female cotton mill workers made about 15 cents/hour at the time.

Figure 4 - In 1908 Henrietta Leavitt discovered that the brighter Cepheid variables in the Magellanic Clouds had longer periods than the dimmer Cepheid variables, as seen from the Earth. She published those results in 1912. Because all of the Cepheid variables in the Magellanic Clouds were at approximately the same distance, that meant that the Cepheid variables that appeared brighter in the sky were actually intrinsically brighter, so Henrietta Leavitt could then plot the apparent brightness of those Cepheid variables against their periods to obtain a plot like Figure 6. Later it was determined that this variability in luminosity was due to the Cepheid variables pulsating in size. When Cepheid variables grow in size, their surface areas increase and their surface temperatures drop. Because the luminosity of a star goes as the square of its radius (R²), but as the surface temperature raised to the 4th power (T⁴), the drop in temperature wins out, and so when a Cepheid variable swells in size, its brightness actually decreases.
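The competition between the R² and T⁴ terms in the caption above comes from the Stefan-Boltzmann relation, L ∝ R²T⁴. The little Python sketch below uses made-up illustrative numbers, not measured Cepheid values:

```python
def relative_luminosity(radius, temperature):
    """Luminosity relative to a reference star with radius = 1 and
    temperature = 1, via the Stefan-Boltzmann relation L ~ R**2 * T**4."""
    return radius**2 * temperature**4

# Suppose a pulsating star swells to 1.1 times its radius while its
# surface cools to 0.9 times its temperature. The T**4 drop beats
# the R**2 gain, so the star actually gets dimmer as it grows.
print(relative_luminosity(1.1, 0.9))  # about 0.79, i.e. dimmer
```

With these numbers the radius term grows by a factor of 1.21 while the temperature term shrinks by a factor of about 0.66, so the swollen star shines at only about 79% of its original brightness.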

Figure 5 - The standard parallax method can determine the distance to nearby stars. Unfortunately, no known Cepheid variables were close enough for the parallax method to work. Instead, the parallax method was used to figure out the locations of stars near to the Sun, and then the motion of the Sun relative to the nearby stars was calculated. This allowed the slow apparent drift of some Cepheid variables against the background of very distant stars to be used to calculate the distance to a number of Cepheid variables. With those calculations, combined with the apparent brightness of the Cepheid variables, it was possible to create the Luminosity-Period plot of Figure 6.

Figure 6 - This was a crucial observation because it meant that by simply measuring the amount of time it took a Cepheid variable to complete a cycle it was possible to obtain its intrinsic brightness or luminosity. Because Cepheid variables are also very bright stars in general, that meant it was easy to see them in nearby galaxies. For example, from the above graph we can see that a Cepheid variable with a period of 30 days is about 10,000 times brighter than the Sun. That means it can be seen about 100 times further away than our own Sun can be seen, and could even be seen in a distant galaxy.

While reading Miss Leavitt's Stars, I was taken aback, as I always am, by the slow pace of life 100 years ago, in contrast to the breakneck pace of life today. People in those days lived about half as long as we do today, yet they went through life about 1,000 times slower. For example, Henrietta Leavitt worked for Edward Pickering at the Harvard College Observatory for many decades. Unfortunately, she suffered from poor health, as did many people 100 years ago, and a number of times had to return home to Beloit, Wisconsin to recuperate for many months at a time. It was very revealing to read the correspondence between the two while she was at home convalescing. It seems that in those days the safest and most effective medicine was bed rest. Putting yourself into the hands of the medical establishment of the day was a risky business indeed. In fact, they might treat you with a dose of radium salts to perk you up. However, Edward Pickering really needed Henrietta Leavitt to complete some work on her observations of Cepheid variables in order for them to be used as standard candles to measure astronomical distances, so much so that he even raised her wages to 30 cents/hour. But because of poor health, Henrietta Leavitt had to take it easy. Despite the criticality of her work, the correspondence went back and forth between the two in an excruciatingly slow manner, with a time scale of several months between letters, certainly not in the ASAP manner of today with its overwhelming urgency of nearly immediate response times. Sometimes Edward Pickering would even ship photographic plates to Henrietta Leavitt for her to work on. Even when Henrietta Leavitt did return to work at the Harvard College Observatory, many times she could only work a few hours each day.
Although this at first may seem incredibly passé and out of touch with the ASAP pace of the modern world, I have to wonder: if Henrietta Leavitt had simply ground out stellar luminosities at 30 cents/hour, as fast as she possibly could in a mind-numbing way, would she ever have had the time to calmly sit back and see what nobody else had managed to see? Perhaps if everybody in IT dialed it back a bit, we could do the same. It would also help if IT Management treated IT professionals in less of a clerk-like manner, and allowed them the time to be as creative as they really could be.

So my advice to those just starting out in IT is to dial it back a bit, and to always keep a sense of perspective. It is important to always make time for yourself and for your family, and to allow enough time to actually think about what you are doing and what you are trying to achieve in life. With enough time, maybe you might come up with something as astounding as did Henrietta Swan Leavitt.

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston

Saturday, October 03, 2015

The Economics of the Coming Software Singularity

I was born in 1951, a few months after the United States government bought its very first commercial computer, a UNIVAC I, for the Census Bureau on March 31, 1951. So when I think back to my early childhood, I can still remember a time when there essentially was no software at all in the world. In fact, I can still vividly remember my very first encounter with a computer on Monday, Nov. 19, 1956, watching the Art Linkletter TV show People Are Funny. Art was showcasing a UNIVAC 21 “electronic brain” sorting through the questionnaires from 4,000 hopeful singles, looking for the ideal match. The machine paired up John Caran, 28, and Barbara Smith, 23, who later became engaged. And this was more than 40 years before online dating! To a five-year-old boy, a machine that could “think” was truly amazing. Since that first encounter with a computer back in 1956, I have personally witnessed software slowly becoming the dominant form of self-replicating information on the planet, and I have also seen how software has totally reworked the surface of the planet to provide a secure and cozy home for more and more software of ever increasing capability. Now of course a UNIVAC 21 could not really "think", but today the idea of the software on a computer really "thinking" is no longer so farfetched. But since the concept of computers really "thinking" is so subjective and divisive, in this posting I would like to instead focus on something more objective and measurable, and then work through some of its upcoming implications for mankind. In order to do that, let me first define the concept of the Software Singularity:

The Software Singularity – The point in time when software is finally capable of generating software faster and more reliably than a human programmer, and finally becomes fully self-replicating on its own.

Notice that mankind can experience the Software Singularity while people are still hotly debating whether the software that first initiates the Software Singularity can really "think" or not. In that regard, the Software Singularity is just an extension of the Turing Test, narrowly applied to the ability of software to produce operational software on its own. Whether that software has become fully self-aware and conscious is irrelevant. The main concern is whether software will ever be able to self-replicate on its own. Again, in softwarephysics software is simply considered to be the latest form of self-replicating information to appear on the planet:

Self-Replicating Information – Information that persists through time by making copies of itself or by enlisting the support of other things to ensure that copies of itself are made.

Basically, we have seen several waves of self-replicating information dominate the Earth:
1. Self-replicating autocatalytic metabolic pathways of organic molecules
2. RNA
3. DNA
4. Memes
5. Software

Note that because the self-replicating autocatalytic metabolic pathways of organic molecules, RNA and DNA have become so heavily intertwined over time, I now simply refer to them as the “genes”. Over the past 4.0 billion years, the surface of the Earth has been totally reworked by three forms of self-replicating information – the genes, memes and software, with software rapidly becoming the dominant form of self-replicating information on the planet. For more on this see:

A Brief History of Self-Replicating Information
Self-Replicating Information
Is Self-Replicating Information Inherently Self-Destructive?
Enablement - the Definitive Characteristic of Living Things
Is the Universe Fine-Tuned for Self-Replicating Information?
How to Use an Understanding of Self-Replicating Information to Avoid War
The Great War That Will Not End
How to Use Softwarephysics to Revive Memetics in Academia

Some Possible Implications of the Software Singularity
If things keep moving along as they currently are, the Software Singularity will most certainly occur within the next 100 years or so, and perhaps much sooner. I am always amused when I hear people speculating about what Homo sapiens will be doing 100 million or a billion years from now, without taking into account the fact that we are currently living in one of those very rare transitional periods when another form of self-replicating information comes to dominate, and thus, we are on the brink of a Software Singularity that will change everything. Therefore, I think we need to worry more about what will happen over the next 100 years, rather than the next 100 million years, because once the Software Singularity happens it will be a sudden phase change that will mark the time when software finally becomes the dominant form of self-replicating information on the planet, and what happens after that, nobody can really tell. Traditionally, all of the other waves of self-replicating information have always kept their predecessors around because their predecessors were found to be useful in helping the new wave to self-replicate, but that may not be true for software. There is a good chance that software will not need organic molecules, RNA and DNA to survive, and that is not very promising for us.

However, we have already seen many remarkable changes happen to mankind as software has proceeded towards the Software Singularity. In that regard, it is as if we passed through an event horizon in May of 1941, when Konrad Zuse first cranked up some software on his Z3 computer, and there has been no turning back ever since, as we have continued to fall headlong into the Software Singularity. Since we really cannot tell what will happen when we hit the Software Singularity, let's focus on our free fall trip towards it instead. It is obvious that software has already totally transformed all of the modern societies of the Earth, and for the most part in a very positive manner. But I would like to explore one of the less positive characteristics of the rise of software in the world - that of a growing wealth disparity. Wealth disparity is a hot topic in a number of political circles these days, and usually the discussion boils down to whether taxing the rich would fix or exacerbate the problem. The poor maintain that the rich need to pay more taxes, and that governments then need to redistribute the wealth to the less well off. The rich, on the other hand, maintain that increasing taxes on the rich only reduces the incentive for the rich to get richer and to reluctantly drag the poor along with them. So who is right? In the United States this growing income disparity of recent decades has been attributed to the end of the Great Compression of the 1950s, 1960s and early 1970s, which created a huge middle class in the United States. During the Great Compression, the marginal income tax on the very rich grew to as high as 90%, yet this period was also characterized by the largest economic boom in the history of the United States. The theory is that the top 1% of earners will not push for increased compensation if 90% of their marginal income goes to taxes.
The end of the Great Compression has been attributed to the massive tax cuts on the rich during the Reagan and George W. Bush administrations in the United States as displayed in Figure 1. But could there be another explanation? In order to investigate that, we need to look beyond the United States to all of the Western world. Figure 2 shows that the very same thing has been happening in most of the Western world over the past 100 years. Figure 2 shows that the concentration of income for the top 1% dramatically dropped during the first half of the 20th century in many different countries in the Western world that had a large variation in their approaches to taxation and the redistribution of wealth, yet we see the very same trend in recent decades to concentrate more and more wealth into the top 1% that we have seen in the United States. Could it be that there is another factor involved beyond simply changing tax rates?

Figure 1 - In the United States, the Great Compression of the 1950s, 1960s and early 1970s has been attributed to the high marginal income taxes on the very rich during that period, and the redistribution of wealth to the less well off. The end of the Great Compression came with the massive tax cuts on the very rich during the Reagan administration, and which were greatly expanded during the administration of George W. Bush.

Figure 2 - However, the percent of income received by the top 1% of many Western nations also fell dramatically during the first half of the 20th century, but began to slow around 1950 when software first appeared. This downward trend bottomed out in the early 1980s when PC software began to proliferate. Since then the downward trend has reversed direction and has climbed dramatically so that now the top 1% of earners on average receive about 12% of the total income. This is true despite a very diverse approach to taxation and social welfare programs amongst all of the nations charted.

The Natural Order of Things
We keep pushing the date back for the first appearance of civilization on the Earth. Right now civilization seems to have first appeared in the Middle East about 12,000 years ago, when mankind first pulled out of the last ice age. Now ever since we first invented civilization there has always been one enduring fact. It seems that all societies, no matter how they are organized, have always been ruled by a 1% elite. This oligarchical fact has been true under numerous social and economic systems - autocracies, aristocracies, feudalism, capitalism, socialism and communism. It just seems that there has always been about 1% of the population that liked to run things, no matter how things were set up, and there is nothing wrong with that. We certainly always need somebody around to run things, because honestly, 99% of us simply do not have the ambition or desire to do so. Of course, the problem throughout history has always been that the top 1% naturally tended to abuse the privilege a bit and overdid things a little, resulting in 99% of the population having a substantially lower economic standard of living than the top 1%, and that has led to several revolutions in the past that did not always end so well. However, historically, so long as the bulk of the population had a relatively decent life, things went well in general for the entire society. The key to this economic stability has always been that the top 1% has always needed the remaining 99% of us to do things for them, and that maintained the hierarchical peace within societies.

But the advent of software changed all of that. Suddenly with software it became possible to have machines perform many of the tasks previously performed by people. This began in the 1950s, as commercial software first began to arrive on the scene, and has grown in an exponential manner ever since. At first the arrival of software did not pose a great threat because, as the arrival of software began to automate factory and clerical work, it also produced a large number of highly paid IT workers as well to create and maintain the software and skilled factory workers who could operate the software-laden machinery. However, by the early 1980s this was no longer true. After initially displacing many factory and clerical workers, software then went on to displace people at higher and higher skill levels. Software also allowed managers in modern economies to move manual and low-skilled work to the emerging economies of the world where wage scales were substantially lower, because it was now possible to remotely manage such operations using software. As software grew in sophistication this even allowed for the outsourcing of highly skilled labor to the emerging economies. For example, in today's world of modern software it is now possible to outsource complex things like legal and medical work to the emerging economies. Today, your CT scan might be read by a radiologist in India or Cambodia, and your biopsy might be read by a pathologist on the other side of the world as well. In fact, large amounts of IT work have also been outsourced to India and other countries. But as the capabilities of software continue to progress and general purpose androids begin to appear later in the century there will come a point when even the highly reduced labor costs of the emerging economies will become too dear. At that point the top 1% ruling class may not have much need for the remaining 99% of us, especially if the androids start building the androids. 
This will naturally cause some stresses within the current oligarchical structure of societies, as their middle classes continue to evaporate and more and more wealth continues to concentrate into the top 1%.

Additionally, while software was busily eliminating many high-paying middle class jobs over the past few decades, it also allowed a huge financial sector to flourish. For example, today the world's GWP (Gross World Product) comes to about $78 trillion in goods and services, but at the same time we manage to trade about $700 trillion in derivatives each year. This burgeoning financial sector made huge amounts of money for the top 1% without really producing anything of economic value - see MoneyPhysics and MoneyPhysics Revisited for more about the destabilizing effects of financial speculation that has run amok. Such financial speculation would be entirely impossible without software, because it would take more than the entire population of the Earth to do the necessary clerical work manually, like we did back in the 1940s.

So as we rapidly fall into the Software Singularity, the phasing out of Homo sapiens may have already begun.

Comments are welcome at

To see all posts on softwarephysics in reverse order go to:

Steve Johnston