Wednesday, September 16, 2020

Agile Project Management at SpaceX

In Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework, I explained that back in the 1950s and 1960s IT professionals used an Agile methodology to develop software because programs had to be very small to fit into the puny amount of memory that mainframe computers had at the time. Recall that the original UNIVAC I came out in 1951 with only 12 KB of memory. By 1970, mainframe computers typically had only about 1 MB of memory, which the IBM OS/360 MFT operating system divided up into a number of fixed regions, the largest of which was usually only 256 KB. You really cannot do a lot of logic in 256 KB of memory, so applications in the 1960s consisted of lots of very small programs in a JCL job stream that used many tapes for input and output. Each step in the job stream ran a very small program that read the tapes written by the previous job step, did some processing on the data, and then wrote the processed data out to tapes that became the input for the next step in the JCL job stream. For more on tape processing see: An IT Perspective on the Origin of Chromatin, Chromosomes and Cancer.
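
To make the shape of such a job stream a little more concrete, here is a minimal sketch in Python (standing in for the COBOL or Assembler of the day) of two tiny job steps that pass data forward through intermediate files, just as job steps passed data forward on tapes. The file names, record layout and processing shown here are hypothetical illustrations, not anything we actually ran.

# A minimal sketch of a batch job stream: each "step" is a tiny program
# that reads the "tape" written by the previous step, transforms the
# records, and writes a new "tape" for the next step. File names and the
# record layout are hypothetical.

def step1_extract(in_path, out_path):
    # Keep only the fields that the next step needs.
    with open(in_path) as infile, open(out_path, "w") as outfile:
        for line in infile:
            customer_id, name, balance = line.rstrip("\n").split("|")
            outfile.write(customer_id + "|" + balance + "\n")

def step2_apply_interest(in_path, out_path, rate=0.01):
    # Read the intermediate "tape" from step 1 and apply a monthly interest rate.
    with open(in_path) as infile, open(out_path, "w") as outfile:
        for line in infile:
            customer_id, balance = line.rstrip("\n").split("|")
            new_balance = float(balance) * (1.0 + rate)
            outfile.write("%s|%.2f\n" % (customer_id, new_balance))

if __name__ == "__main__":
    # The "job stream": run the small steps one after another, passing data
    # forward through intermediate files (the "tapes").
    step1_extract("customers.dat", "step1_out.dat")
    step2_apply_interest("step1_out.dat", "step2_out.dat")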

So this kind of tape batch processing easily lent itself to prototyping the little programs in each job step. Developers would simply "breadboard" each little step by trial and error, much as they used to do on the Unit Record Processing machines of the 1950s, which were programmed by physically rewiring the plugboard of each machine in a job stream. The machines would then process hundreds of punch cards per minute by routing the cards from machine to machine in processing streams.

Figure 1 – In the 1950s, Unit Record Processing machines like this card sorter were programmed by physically rewiring a plugboard.

Figure 2 – The plugboard for a Unit Record Processing machine. Programming a plugboard was a trial and error process that had to be done in an Agile manner. This Agile programming technique carried over to the programming of the small batch programs in a tape batch processing stream.

To get a better appreciation for how these small programs in a long tape batch processing stream worked, and how they were developed in a piecemeal prototyping manner, let's further explore tape batch processing.

Batch Tape Processing and Sequential Access Methods
One of the simplest and oldest sequential access methods is called QSAM - Queued Sequential Access Method:

Queued Sequential Access Method
http://en.wikipedia.org/wiki/Queued_Sequential_Access_Method

I did a lot of magnetic tape processing in the 1970s and early 1980s using QSAM. At the time we used 9 track tapes that were 1/2 inch wide and 2400 feet long on a reel with a 10.5 inch diameter. The tape had 8 data tracks and one parity track across the 1/2 inch tape width. That way we could store one byte across the 8 1-bit data tracks in a frame and use the parity track to check for errors. We used odd parity: if the 8 bits on the 8 data tracks in a frame added up to an even number of 1s, we put a 1 in the parity track to make the total number of 1s odd, and if the 8 bits added up to an odd number of 1s, we put a 0 in the parity track to keep the total number of 1s odd. Originally, 9 track tapes had a density of 1600 bytes/inch of tape, with a data transfer rate of 15,000 bytes/second. Remember, a byte is 8 bits and can store one character, like the letter “A”, which we encode in the ASCII code set as A = “01000001”.
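
Here is a small Python sketch of that odd-parity bookkeeping, just to make it explicit; the function names are my own illustration, not anything from real tape-drive hardware.

# Odd parity as described above: for each byte (one frame across the 8 data
# tracks), the parity bit is chosen so that the 9-bit frame contains an odd
# number of 1s. A frame that arrives with an even number of 1s is in error.

def odd_parity_bit(byte_value):
    # Count the 1 bits in the 8 data bits; an even count gets a parity 1,
    # an odd count gets a parity 0.
    ones = bin(byte_value & 0xFF).count("1")
    return 1 if ones % 2 == 0 else 0

def frame_is_valid(byte_value, parity_bit):
    # A good frame has an odd total number of 1s across all 9 bits.
    return (bin(byte_value & 0xFF).count("1") + parity_bit) % 2 == 1

# The letter "A" is ASCII 01000001, which has two 1 bits (an even count),
# so its parity bit is 1.
assert odd_parity_bit(ord("A")) == 1
assert frame_is_valid(ord("A"), 1)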

Figure 3 – A 1/2 inch wide 9 track magnetic tape on a 2400-foot reel with a diameter of 10.5 inches

Figure 4 – 9 track magnetic tape had 8 data tracks and one parity track using odd parity which allowed for the detection of bad bytes with parity errors on the tape.

Later, 6250 bytes/inch tape drives became available, and I will use that density for the calculations that follow. Now suppose you had 10 million customers and the current account balance for each customer was stored on an 80-byte customer record. A record was like a row in a spreadsheet. The first field of the record was usually a CustomerID field that contained a unique customer ID, like a social security number, and was essentially the equivalent of a promoter region on the front end of a gene in DNA. The remainder of the 80-byte customer record contained fields for the customer’s name and billing address, along with the customer’s current account information.

Between each block of data on the tape, there was a 0.5 inch gap of “junk” tape. This “junk” tape allowed for the acceleration and deceleration of the tape reel as it spun past the read/write head of a tape drive and perhaps occasionally reversed direction. Since an 80-byte record only came to 80/6250 = 0.0128 inches of tape, which is quite short compared to the overhead of the 0.5 inch gap of “junk” tape between records, it made sense to block many records together into a single block of data that could be read by the tape drive in a single I/O operation. For example, blocking 100 80-byte records increased the block size to 8000/6250 = 1.28 inches, and between each 1.28 inch block of data on the tape, there was the 0.5 inch gap of “junk” tape. This greatly reduced the amount of wasted “junk” tape on a 2400-foot reel of tape. So each 100-record block of data took up a total of 1.78 inches of tape, and we could get 16,180 blocks on a 2400-foot tape, or the data for 1,618,000 customers per tape. So in order to store the account information for 10 million customers, you needed about six 2400-foot reels of tape. That means each step in the batch processing stream would use 6 input tapes and 6 output tapes.

The advantage of QSAM over an earlier sequential access method, known as BSAM, was that you could read and write an entire block of records at a time via an I/O buffer. In our example, a program could read one record at a time from an I/O buffer which contained the 100 records from a single block of data on the tape. When the I/O buffer was depleted of records, the next 100 records were read in from the next block of records on the tape. Similarly, programs could write one record at a time to the I/O buffer, and when the I/O buffer was filled with 100 records, the entire I/O buffer was written as the next block of data on an output tape.
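
The blocking arithmetic above is easy to check. Here is a short Python sketch that reproduces the figures quoted in this section; the numbers come straight from the text, not from the specifications of any particular tape drive.

# Reproduce the blocking arithmetic from the text: 6250 bytes/inch tape,
# 80-byte records, 100 records per block, a 0.5 inch inter-block gap,
# and a 2400-foot reel.

BYTES_PER_INCH = 6250
RECORD_BYTES = 80
RECORDS_PER_BLOCK = 100
GAP_INCHES = 0.5
TAPE_FEET = 2400

record_inches = RECORD_BYTES / BYTES_PER_INCH                      # 0.0128 inches per record
block_inches = RECORDS_PER_BLOCK * RECORD_BYTES / BYTES_PER_INCH   # 1.28 inches per block of data
inches_per_block = block_inches + GAP_INCHES                       # 1.78 inches of tape per block

blocks_per_tape = round(TAPE_FEET * 12 / inches_per_block)         # about 16,180 blocks
customers_per_tape = blocks_per_tape * RECORDS_PER_BLOCK           # about 1,618,000 customers
tapes_needed = 10_000_000 / customers_per_tape                     # about 6 reels for 10 million customers

print(record_inches, block_inches, blocks_per_tape, customers_per_tape, tapes_needed)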

Figure 5 – Between each record, or block of records, on a magnetic tape, there was a 0.5 inch gap of “junk” tape. The “junk” tape allowed for the acceleration and deceleration of the tape reel as it spun past the read/write head on a tape drive. Since an 80-byte record only came to 80/6250 = 0.0128 inches, it made sense to block many records together into a single block that could be read by the tape drive in a single I/O operation. For example, blocking 100 80-byte records increased the block size to 8000/6250 = 1.28 inches, and between each 1.28 inch block of data on the tape, there was a 0.5 inch gap of “junk” tape for a total of 1.78 inches per block.

Figure 6 – Blocking records on tape allowed data to be stored more efficiently.

So it took 6 tapes just to store the rudimentary account data for 10 million customers. The problem was that each tape could only store about 123 MB of data. Not too good, considering that today you can buy a 1 TB PC disk drive that can hold 8525 times as much data for about $100! Today, you could also store about 67 times as much data on a $7.00 8 GB thumb drive. So how could you find the data for a particular customer on 14,400 feet (2.72 miles) of tape? Well, you really could not do that reading one block of data at a time with the read/write head of a tape drive, so we processed data with batch jobs using lots of input and output tapes. Generally, we had a Master Customer File on 6 tapes and a large number of Transaction tapes with insert, update and delete records for customers. All the tapes were sorted by the CustomerID field, so a batch job could read a Master tape and a Transaction tape at the same time, apply the inserts, updates and deletes from the Transaction tape, and write the results to a single new Master output tape. These batch jobs would run for many hours, with lots of mounting and unmounting of dozens of tapes.
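
The heart of such a batch job was the classic sequential master-file update: march through two sorted inputs in step and write a new sorted Master as you go. Here is a minimal Python sketch of that merge logic; the record layout, field names and transaction codes ("I", "U", "D") are hypothetical stand-ins for the real tape record formats.

# A sketch of the sequential master-file update: both the old Master and the
# Transaction input are sorted by CustomerID, and each transaction carries an
# action code of "I" (insert), "U" (update) or "D" (delete). Python lists
# stand in for the sorted tapes.

def update_master(old_master, transactions):
    # Merge the sorted transactions into the sorted old Master,
    # yielding the records of the new Master in sorted order.
    master_iter = iter(old_master)
    trans_iter = iter(transactions)
    master = next(master_iter, None)
    trans = next(trans_iter, None)

    while master is not None or trans is not None:
        if trans is None or (master is not None and master["id"] < trans["id"]):
            # No transaction for this customer: copy the old record forward.
            yield master
            master = next(master_iter, None)
        elif master is None or trans["id"] < master["id"]:
            # Transaction for a customer not on the old Master: insert it.
            if trans["action"] == "I":
                yield {"id": trans["id"], "balance": trans["balance"]}
            trans = next(trans_iter, None)
        else:
            # Matching CustomerIDs: apply the delete or update.
            if trans["action"] == "D":
                pass  # drop the record from the new Master
            elif trans["action"] == "U":
                yield {"id": master["id"], "balance": trans["balance"]}
            else:
                yield master  # ignore an unexpected insert for an existing customer
            master = next(master_iter, None)
            trans = next(trans_iter, None)

old = [{"id": 100, "balance": 10.0}, {"id": 200, "balance": 20.0}]
trans = [{"id": 100, "action": "U", "balance": 15.0},
         {"id": 150, "action": "I", "balance": 5.0},
         {"id": 200, "action": "D"}]
print(list(update_master(old, trans)))
# [{'id': 100, 'balance': 15.0}, {'id': 150, 'balance': 5.0}]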

Figure 7 – The batch processing of 10 million customers took a lot of tapes and tape drives.

When developing batch job streams that used tapes, each little program in the job stream was first prototyped in a stepwise manner, using a small amount of data on just one input tape and one output tape.

The Arrival of the Waterfall Methodology
This all changed in the late 1970s when interactive computing began to become a significant factor in the commercial software that corporate IT departments were generating. By then mainframe computers had much more memory than they had back in the 1960s, so interactive programs could be much larger than the small programs found within the individual job steps of a batch job stream. In fact, they had to be larger: the interactive software had to be loaded into the computer all in one shot, required some kind of user interface that did things like checking the input data from an end-user, and had to interact with the end-user in a dynamic manner. These factors caused corporate IT departments to move from the Agile prototyping methodologies of the 1960s and early 1970s to the Waterfall methodology of the 1980s, and so by the early 1980s prototyping software on the fly was considered to be an immature approach. Instead, corporate IT departments decided that a formal development process was needed, and they chose the Waterfall approach used by the construction and manufacturing industries to combat the high costs of making changes late in the development process. This made sense because in the early 1980s CPU costs were still exceedingly high, so it paid to create lots of upfront design documents before coding actually began in order to minimize the CPU costs involved with creating software. For example, in the early 1980s, if I had a $100,000 project, it was usually broken down as $25,000 for programming manpower costs, $25,000 for charges from IT Management and other departments in IT, and $50,000 for program compiles and test runs to develop the software. Because just running compiles and test runs of the software under development consumed about 50% of the costs of a development project, it made sense to adopt the Waterfall development model to minimize those costs.

Along Comes NASA
Now all during this time, NASA was trying to put people on the Moon and explore the Solar System. At first, NASA also took a more Agile approach to project management, using trial and error to solve the fundamental problems of rocketry. If you YouTube some of the early NASA launches from the late 1950s and early 1960s, you will see many rockets fail and blow up in large fireballs:

Early U.S. rocket and space launch failures and explosions
https://www.youtube.com/watch?v=13qeX98tAS8

This all changed when NASA started putting people on top of their rockets. Because the NASA rockets were getting much larger, more complicated and were now carrying live passengers, NASA switched from an Agile project management approach to a Waterfall project management approach run by the top-down command-and-control management style of corporate America in the 1960s:

NASA's Marshall Space Flight Center 1960s Orientation Film
https://www.youtube.com/watch?v=qPmJR3XSLjY

All during the 1960s and 1970s, NASA went on to perform numerous technological miracles using the Waterfall project management approach and that same top-down command-and-control management style of 1960s corporate America.

SpaceX Responds with an Agile Approach to Exploring the Solar System
Despite the tremendous accomplishments of NASA over many decades, speed to market was never one of NASA's strong suits. Contrast that with the astounding progress that SpaceX has made, especially in the past year. Elon Musk started out in software, and with SpaceX he has taken the Agile techniques of software development and applied them to the development of space travel. The ultimate goal of SpaceX is to put colonies on the Moon and Mars using the SpaceX Starship.

Figure 8 – The SpaceX Starship SN8 is currently sitting on a pad at the SpaceX Boca Chica launch facility in South Texas awaiting a test flight.

SpaceX is doing this in an Agile stepping-stone manner, funding the venture by launching satellites for other organizations with its reusable Falcon 9 rockets. SpaceX is also launching thousands of its own Starlink satellites on Falcon 9 rockets. Each Falcon 9 can deliver about 60 Starlink satellites into low-Earth orbit, with the goal of eventually providing broadband Internet connectivity to the entire world. It is expected that the revenue from Starlink will far exceed the revenue obtained from launching satellites for other organizations.

Figure 9 – SpaceX is currently making money by launching satellites for other organizations using reusable Falcon 9 rockets.

Unlike NASA in the 1960s and 1970s, SpaceX uses Agile techniques to design, test and improve individual rocket components, rather than NASA's Waterfall approach of building an entire rocket in one shot.

Figure 10 – Above is a test flight of the SpaceX Starhopper lower stage of the SpaceX Starship, which was designed and built in an Agile manner.

Below is a YouTube video by Marcus House that covers many of the remarkable achievements that SpaceX has attained over the past year using Agile techniques:

This is how SpaceX is changing the space industry with Starship and Crew Dragon
https://www.youtube.com/watch?v=CmJGiJoU-Vo

Marcus House comes out with a new SpaceX YouTube video every week or two and I highly recommend following his YouTube Channel for further updates.

Marcus House SpaceX YouTube Channel
https://www.youtube.com/results?search_query=marcus+house+spacex&sp=CAI%253D

Many others have commented on the use of Agile techniques by SpaceX to make astounding progress in space technology. Below are just a few examples.

SpaceX’s Use of Agile Methods
https://medium.com/@cliffberg/spacexs-use-of-agile-methods-c63042178a33

SpaceX Lessons Which Massively Speed Improvement of Products
https://www.nextbigfuture.com/2018/12/reverse-engineering-spacex-to-learn-how-to-speed-up-technological-development.html

Pentagon advisory panel: DoD could take a page from SpaceX on software development
https://spacenews.com/pentagon-advisory-panel-dod-could-take-a-page-from-spacex-on-software-development/

If SpaceX Can Do It Than You Can Too
https://www.thepsi.com/if-spacex-can-do-it-than-you-can-too/

Agile Is Not Rocket Science
https://flyntrok.com/2020/09/01/spacex-and-an-agile-mindset/

Using Orbiter2016 to Simulate Space Flight
If you are interested in things like SpaceX, you should definitely try out some wonderful free software called Orbiter2016 at:

Orbiter Space Flight Simulator 2016 Edition
http://orbit.medphys.ucl.ac.uk/index.html

Be sure to download the Orbiter2016 core package and the high-resolution textures. Also, be sure to download Jarmonik's D3D9 graphics client and XRsound. There is also a button on the Download page for the Orbit Hangar Mods addon repository, from which you can download additional software for Orbiter2016. Be careful! Some of these addon packages contain .dll files that overlay .dll files in the Orbiter2016 core package, and the .dll files in one addon package may overlay the .dll files of another addon package. All of this software can be downloaded from the Orbiter2016 Download page, and it comes to about 57 GB of disk space! If you try to add additional addons, I would recommend keeping a backup copy of your latest working version of Orbiter2016. That way, if your latest attempt to add an addon crashes Orbiter2016, you will be able to recover from it. This sounds like a lot of software, but do not worry: Orbiter2016 runs very nicely on my laptop with only 4 GB of memory and a very anemic graphics card.
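
If you want to script that backup step, something as simple as the following Python sketch will do; the C:\Orbiter2016 install path is the one used later in this post, but adjust it to wherever you actually installed Orbiter2016.

# Copy the working Orbiter2016 folder to a dated backup before installing a
# new addon, so that an addon that overlays core or other addon .dll files
# can be undone by restoring the copy. The install path is an assumption.

import shutil
from datetime import date
from pathlib import Path

ORBITER_DIR = Path(r"C:\Orbiter2016")
BACKUP_DIR = Path(r"C:\Orbiter2016_backup_" + date.today().isoformat())

if not BACKUP_DIR.exists():
    shutil.copytree(ORBITER_DIR, BACKUP_DIR)
    print("Backed up", ORBITER_DIR, "to", BACKUP_DIR)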

Figure 11 – Launching a Delta IV rocket from Cape Canaveral.

Figure 12 – As the Delta IV rocket rises, you can have the Orbiter2016 camera automatically follow it.

Figure 13 – Each vehicle in Orbiter2016 can be viewed from the outside or from an internal console.

Figure 14 – The Delta IV rocket releases its solid rocket boosters.

Figure 15 – The Delta IV rocket second stage is ignited.

Figure 16 – The Delta IV rocket fairing is jettisoned so that the payload satellite can be released.

Figure 17 – The third stage is ignited.

Figure 18 – Launching the Atlantis Space Shuttle.

Figure 19 – Following the Atlantis Space Shuttle as it rises.

Figure 20 – Once the Atlantis Space Shuttle reaches orbit, you can open the bay doors and release the payload. You can also do an EVA with a steerable astronaut.

Figure 21 – Using Orbiter2016 to fly over the Moon.

Check your C:\Orbiter2016\Doc directory for manuals covering Orbiter2016. C:\Orbiter2016\Doc\Orbiter.pdf is a good place to start. Orbiter2016 is a very realistic simulation that can teach you a great deal about orbital mechanics. But to get the most benefit from Orbiter2016, you need an easy way to learn how to use it. Below is a really great online tutorial that does just that.

Go Play In Space
https://www.orbiterwiki.org/wiki/Go_Play_In_Space

There are also many useful tutorials on YouTube that can show you how to use Orbiter2016.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston

Thursday, September 03, 2020

How To Cope With the Daily Mayhem of Life in IT

As you probably know, I started working on softwarephysics back in 1979 when I transitioned from being an exploration geophysicist, exploring for oil with Amoco, to become an IT professional in Amoco's IT department. I then spent about 20 years in Applications Development as a programmer and then 17 years in Middleware Operations at various large corporations. I am now 69 years old, and I retired from my last paying IT position in December of 2016. My son is also an IT professional with 17 years of experience. Currently, he is a Team Lead in Applications Development doing Salesforce Cloud Computing for a small investment company. When I first transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse to better understand the behavior of commercial software by comparing software to how things behaved in the physical Universe. The original purpose of softwarephysics was to explain why IT was so difficult, to suggest possible remedies, and to provide a direction for thought. My hope was that softwarephysics might help the members of the IT community to better cope with the daily mayhem of life in IT. As we all know, writing and maintaining software is very difficult because so much can go wrong. As we saw in The Fundamental Problem of Software, this is largely due to the second law of thermodynamics introducing small bugs into software whenever software is written or changed, and also to the nonlinear nature of software that allows small software bugs to frequently produce catastrophic effects. In later postings on softwarephysics, I explained that the solution was to take a biological approach to software by "growing" code biologically instead of writing code. See Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework for more on using an Agile biological approach to software development.

People-Problems Also Contribute to the Daily Mayhem of Life in IT
However, most of my efforts over the years have been focused on the physical constraints that contribute to the daily mayhem of life in IT. In Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse and Don't ASAP Your Life Away, I did briefly touch upon some people-problem issues. But as I watched my son graduate from the University of Illinois in Urbana with a B.S. in Computer Science and then begin a career in IT, I frequently found myself counseling him about the common people-problems that can occur in IT and that also heavily contribute to the daily mayhem of life in IT. It may be true that the root cause of all IT problems can be traced back to the second law of thermodynamics and nonlinearity, but it is also true that nearly all of the real day-to-day problems that arise in IT come about from how the people around you deal with the second law of thermodynamics and nonlinearity. And it depends upon how you deal with them too! After all, if you could write perfect code off the top of your head that always worked the very first time and never failed while in Production, all of your IT problems would immediately go away! Unfortunately, so would your IT job.

If You Are an IT Professional You Need to Watch Jayme Edwards' The Healthy Software Developer Videos
So here is some fatherly advice from an IT old-timer. If you want to go the distance in IT, you need to watch the Healthy Software Developer videos that Jayme Edwards has produced for the mental health of the IT community. Jayme Edwards has produced a huge number of short videos that deal with all of the people-problems you will likely encounter in your IT career. More importantly, these videos will also help you to understand your own tendencies to descend into self-destructive behaviors and thought patterns.

Jayme Edwards' The Healthy Software Developer YouTube Home Page:
https://www.youtube.com/c/JaymeEdwardsMedia/featured

Jayme Edwards' The Healthy Software Developer YouTube videos:
https://www.youtube.com/c/JaymeEdwardsMedia/videos

Everything Old is New Again
I only wish that these videos had been available when my son first started out, and that they had also been available to me back in 1979 when I first started in IT. In fact, these videos are timeless because people-problems ultimately stem from human nature, and human nature does not change with time. For example, when I first transitioned into IT from geophysics, I used to talk to the old-timers about the good old days of IT back in the 1950s. They told me that when the operators began their shift on an old-time 1950s vacuum tube computer, the first thing they did was to crank up the voltage on the vacuum tubes to burn out the tubes that were on their last legs. Then they would replace the burned-out tubes to start the day with a fresh machine.

Figure 1 – In the 1950s, mainframe computers contained thousands of vacuum tubes that had to be constantly replaced as they burned out.

The UNIVAC I first came out in 1951 and was 25 feet by 50 feet in size. It contained 5,600 vacuum tubes, 18,000 crystal diodes and 300 electromechanical relays with a total memory of 12 KB.

Figure 2 – The UNIVAC I was very impressive on the outside.

Figure 3 – But the UNIVAC I was a little less impressive on the inside.

Figure 4 – Prior to 1955, huge mercury delay lines built from tubes of mercury that were about 3 inches long were used to store bits of computer memory. A single mercury delay line could store about 18 bits of computer memory as a series of sound waves that were continuously refreshed by quartz piezoelectric transducers at each end of the tube.

In 1955, magnetic core memory came along; it used tiny magnetic rings called "cores" to store bits. Four little wires had to be threaded by hand through each little core in order to store a single bit, so although magnetic core memory was a lot cheaper and smaller than mercury delay line memory, it was still very expensive and took up lots of space.

Figure 5 – Magnetic core memory arrived in 1955 and used a little ring of magnetic material, known as a core, to store a bit. Each little core had to be threaded by hand with 4 wires to store a single bit.

Figure 6 – Magnetic core memory was a big improvement over mercury delay lines, but it was still hugely expensive and took up a great deal of space within a computer.

Figure 7 – Finally in the early 1970s inexpensive semiconductor memory chips came along that made computer memory small and cheap.

They also told me about programming the electromechanical Unit Record Processing machines of the 1950s by physically rewiring their plugboards. The machines would then process hundreds of punch cards per minute by routing the punch cards from machine to machine in processing streams.

Figure 8 – In the 1950s, Unit Record Processing machines like this card sorter were programmed by physically rewiring a plugboard.

Figure 9 – The plugboard for a Unit Record Processing machine.

As you can see, IT technology was pretty primitive back in the 1950s, 1960s and the 1970s. But despite the primitive IT technology of the day, these old-timers also told me about many of the people-problems that they encountered back then too. And these old-timer people-problem stories sounded very familiar to me. In fact, most of the old-timer stories that I heard back in the 1980s started out after our current IT management made some new process changes. That's when I would hear something like, "Oh, we tried that back in 1955 and it made a huge mess...". So when listening to one of Jayme Edwards' videos, I frequently find myself recalling similar personal situations that I experienced back in the 1980s or similar situations from the 1950s that the old-timers had warned me about. Hardware and software may dramatically change with time, but the people-problems do not. After all, that is why people study history.

How To Go the Distance
Jayme Edwards stresses the importance of taking healthy measures so that you do not get burned out by IT. There are many things that you can do to prevent IT burn-out. Here is a good example:

Why Do So Many Programmers Lose Hope?
https://www.youtube.com/watch?v=NdA6aQR-s4U

Now I really enjoyed my IT career that spanned many decades, but having been around the block a few times, I would like to offer a little advice to those just starting out in IT, and that is to be sure to pace yourself for the long haul. You really need to dial it back a bit to go the distance. Now I don't want this to be seen as a negative posting about careers in IT, but I personally have seen way too many young bright IT professionals burn out due to an overexposure to stress and long hours, and that is a shame. So dialing it back a bit should be seen as a positive recommendation. And you have to get over thinking that dialing it back to a tolerable long-term level makes you a lazy worthless person. In fact, dialing it back a little will give you the opportunity to be a little more creative and introspective in your IT work, and maybe actually come up with something really neat in your IT career.

This all became evident to me back in 1979 when I transitioned from being a class 9 exploration geophysicist in one of Amoco's exploration departments to becoming a class 9 IT professional in Amoco's IT department. One very scary Monday morning, I was conducted to my new office cubicle in Amoco’s IT department, and I immediately found myself surrounded by a large number of very strange IT people, all scurrying about in a near state of panic, like the characters in Alice in Wonderland. After nearly 40 years in the IT departments of several major corporations, I can now state with confidence that most corporate IT departments can best be described as “frantic” in nature. This new IT job was a totally alien experience for me, and I immediately thought that I had just made a very dreadful mistake. Granted, I had been programming geophysical models for my thesis and for oil companies ever since taking a basic FORTRAN course back in 1972, but that was the full extent of my academic credentials in computer science.

I immediately noticed some glaring differences between my two class 9 jobs in the same corporation. As a class 9 geophysicist, I had an enclosed office on the 52nd floor of the Amoco Building in downtown Chicago, with a door that actually locked, and a nice view of the north side of the Chicago Loop and Lake Michigan. With my new class 9 IT job at Amoco, I moved down to the low-rent district of the Amoco Building on the 10th floor, where the IT department was located, to a cubicle with walls that did not provide very much privacy. Only class 11 and 12 IT professionals had relatively secluded cubicles with walls that offered some degree of privacy. Later I learned that you had to be a class 13 IT Manager, like my new boss, to get an enclosed office like I had back up on the 52nd floor.

I also noticed that the stress levels of this new IT job had increased tremendously over my previous job as an exploration geophysicist. As a young geophysicist, I was mainly processing seismic data on computers for the more experienced geophysicists to interpret and to plan where to drill the next exploration wells. Sure, there was some level of time-urgency because we had to drill a certain number of exploration wells each year to maintain our drilling concessions with foreign governments, but still, work proceeded at a rather manageable pace, allowing us ample time to play with the processing parameters of the software used to process the seismic data into seismic sections.

Figure 10 - Prior to becoming an IT professional, I was mainly using software to process seismic data into seismic sections that could be used to locate exploration wells.

However, the moment I became an IT professional, all of that changed. Suddenly, everything I was supposed to do became a frantic ASAP effort. It is very difficult to do quality work when everything you are supposed to do is ASAP. Projects would come and go, but they were always time-urgent and very stressful, to the point that it affected the quality of the work that was done. It seemed that there was always the temptation to simply slap something into Production to hit an arbitrary deadline, ready or not, and many times we were forced to succumb to that temptation. This became more evident to me when I moved from Applications Development to Middleware Operations, and I had to then live with the sins of pushing software into Production before it was quite ready for primetime. In recent decades, I have also noticed a tendency to hastily bring IT projects in through heroic efforts of breakneck activity, and for IT Management to then act as if that were actually a good thing after the project was completed! When I first transitioned into IT, I also noticed that I was treated a bit more like a high-paid clerk than a highly trained professional, mainly because of the time-pressures of getting things done. One rarely had time to properly think things through. I seriously doubt that most business professionals would want to hurry their surgeons along while under the knife, but that is not so for their IT support professionals.

You might wonder why I did not immediately run back to exploration geophysics in a panic. There certainly were enough jobs for an exploration geophysicist at the time because we were just experiencing the explosion of oil prices resulting from the 1979 Iranian Revolution. However, my wife and I were both from the Chicago area, and we wanted to stay there. In fact, I had just left a really great job with Shell in Houston to come to Amoco's exploration department in Chicago for that very reason. When it was announced, about six months after my arrival at Amoco, that the Chicago exploration department was moving to Houston, I think the Chief Geophysicist who had just hired me felt guilty, and he found me a job in Amoco's IT department so that we could stay in Chicago. So I was determined to stick it out for a while in IT until something better might come along. Then, after a few months in Amoco's IT department, I began to become intrigued. It seemed as though these strange IT people had actually created their own little simulated universe that, seemingly, I could explore on my own. It also seemed to me that my new IT coworkers were struggling because they did not have a theoretical framework to work from, like I had had back in Amoco's exploration department. That is when I started working on softwarephysics. I figured that if you could apply physics to geology, why not apply physics to software? I then began reading the IT trade rags to see if anybody else was doing similar research, and it seemed as though nobody else on the planet was thinking along those lines, which raised my level of interest in doing so even higher.

A Final Note
Although I spent the first 25 years of my career working for oil companies as a geophysicist, I am now very much concerned about the effects of climate change. Even if you are only a very young IT professional, climate change will likely have a dominant effect on the rest of your life and the rest of your IT career. It will most likely determine where you work, how you live, what you eat, what you drink and under what kind of government you live. For more on that, please see This Message on Climate Change Was Brought to You by SOFTWARE and Last Call for Carbon-Based Intelligence on Planet Earth. There are many more softwarephysics postings in this blog on how software may shape the rest of your life. All of my postings on softwarephysics are available and are sorted by year via the links in the upper right-hand corner of each posting. The oldest postings deal mainly with explaining how softwarephysics can help you with your job as an IT professional. The newer postings deal more with how softwarephysics can be of help with some of the Big Problems, like the origin of carbon-based life and the future of Intelligence in our Universe.

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston