Like the rest of you, I have been spending much more time at home than usual during the COVID-19 pandemic and catching up by streaming old movies and TV content on Netflix. One of the things that I missed the first time around was the AMC TV series Halt and Catch Fire, which ran from June 1, 2014, to October 14, 2017, spanning four seasons and 40 episodes. Halt and Catch Fire is a docudrama that very accurately follows the lives of a small group of IT innovators over the period of 1983 to 1995. This was a very critical period in the history of IT. The series covers in great detail the events surrounding the very first software mass extinction that ended the age of the Mainframe Dinosaurs and brought forth the Distributed Computing Revolution of the early 1990s. For more on that, see the SoftwarePaleontology section of SoftwareBiology and also Cloud Computing and the Coming Software Mass Extinction.

The series portrays the arrival of PCs and PC clones into corporate America in the early 1980s and then into the homes of millions of end-users. The Tandy (Radio Shack) TRS-80 and the Commodore 64 home computers also play important roles. The first stand-alone networks of computers that could be reached with dial-up modems then appear. Then we see IBM get into trouble when it does not take the PC Revolution seriously and, instead, stubbornly clings to the mainframe business that originally led it to dominate the computer marketplace of the 1960s and 1970s. The rise of the Internet out of academia and DARPA is then portrayed, along with the beginnings of the World Wide Web. The major impact of the Mosaic browser out of the University of Illinois is well documented. The ISP wars are also covered, including the carpet-bombing techniques that America Online (AOL) used to become the dominant dial-up ISP of the time. There really was a time in America when everybody received several AOL diskettes each week in the mail. The Search Engine wars are also accurately retold as the struggle between algorithm-based Search Engines like Google and the manually indexed approach of Yahoo. The arrival of Yahoo as the first Web Portal is also described, along with the evolution of Mosaic into the Netscape browser, which went on to take over the Web marketplace in 1995. The importance of C++ as the first widely-used object-oriented language is seen throughout the series.

I have watched many other docudramas covering this period in IT history, but Halt and Catch Fire stands out for its accuracy and depth of detail. I was shocked by the technical detail of Halt and Catch Fire, which went way beyond the needs of a general audience. It seems as though AMC produced the TV series for an audience of Silicon Valley workers wishing to relive this crucial period in the history of IT during the 1980s and early 1990s.
My Experiences of the Time
As for me, I finished up my B.S. in Physics at the University of Illinois in Urbana in 1973 and headed up north to complete an M.S. in Geophysics at the University of Wisconsin in Madison. From 1975 to 1979, I was an exploration geophysicist exploring for oil, first with Shell and then with Amoco. But in 1979 I left exploration geophysics and transitioned into IT with the sole benefit of having taken CS 101 at the University of Illinois in 1972. However, I had used a PDP-8/e minicomputer to complete my M.S. thesis at Wisconsin, and while I was an exploration geophysicist, I was mainly writing geophysical simulation software in FORTRAN. This was not unusual. In those days, the Computer Science departments of the universities were just starting to crank out large numbers of CS graduates, so most of the established IT workers of the day were refugees from the science, engineering, or accounting departments of major corporations. For example, one of my first IT bosses was actually a former refinery worker with a high school education. You see, during the late 1960s, Amoco came out to the refineries and gave IT aptitude tests to the workforce. My boss passed the IT aptitude test and soon found himself learning to program in COBOL on the primitive mainframes of the day. He was one of the sharpest IT workers I ever came across.
Figure 1 – Some graduate students huddled around a DEC PDP-8/e minicomputer. Notice the teletype machines in the foreground on the left that were used to input code and data into the machine and to print out results as well.
When I transitioned into IT in 1979 there were no Change Management groups, no DBAs, no UAT testers, no IT Security, no IT Project Management department - and we didn't even call it IT back then! I was in the ISD (Information Services Department) of Amoco, and I was a programmer, not a developer, and we ruled the day because we did it all. Yes, we had to deal with some meddling from those pesky systems analysts with their freshly minted MBA degrees but no programming experience, but thankfully, they were usually busy preparing user requirement documents of a dubious nature and did not get in our way too often.

ISD had two sections - computer systems and manual systems. The people in manual systems worked on the workflows for manual systems with the Forms Design department, while the computer systems people coded COBOL, PL/I, or FORTRAN for IBM System/370 mainframes running the MVS operating system with TSO. Luckily for me, I knew some FORTRAN and ended up in the computer systems section of ISD because, for some reason, designing paper forms all day long did not exactly appeal to me. We created systems in those days, not applications, and nearly all of them were batch. We coded up and tested individual programs that were run in sequence, one after the other, in a job stream controlled by JCL or a cataloged procedure. To implement these systems, all we had to do was fill out a 3-part carbon paper form for the Data Management clerks - they got one copy, we got one copy, and somebody else must have gotten the third copy, but I never found out who. The form simply told the Data Management clerks which load modules to move from my development PDS (partitioned dataset) to my production PDS and how to move the cataloged procedures from my development proclib to the production proclib. That was it. Systems were pretty simple back then, and there was no change management.
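For readers who never saw one, a job stream was just a fixed sequence of programs with return-code checks between the steps. The sketch below is only a conceptual analogy in modern Python, not real JCL, and the step names are hypothetical:

```python
# A conceptual sketch, in modern Python, of how a 1970s batch job stream
# behaved: a fixed sequence of programs, where each step runs only if the
# preceding steps finished with an acceptable return code. Real job streams
# were decks of JCL statements, not scripts; the step names are hypothetical.

def extract_transactions():  # STEP1: pull the day's transactions
    return 0  # return code 0 means success, as on MVS

def validate_records():      # STEP2: edit and validate the records
    return 0

def update_master_file():    # STEP3: apply updates to the master file
    return 0

def print_reports():         # STEP4: print the nightly reports
    return 0

job_stream = [extract_transactions, validate_records,
              update_master_file, print_reports]

for step_number, step in enumerate(job_stream, start=1):
    rc = step()
    # Like JCL COND processing: abandon the job if a step fails.
    if rc != 0:
        print(f"STEP{step_number} abended with return code {rc}")
        break
else:
    print("Job stream completed with return code 0")
```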
All during the 1980s, PCs kept getting cheaper and more powerful, and commercial PC software kept improving along with the hardware. The end-users were using WordPerfect for word processing and VisiCalc for spreadsheets running under Microsoft DOS on cheap PC-compatible personal computers. The Apple Mac arrived in 1984 with a GUI (Graphical User Interface) operating system, but Apple Macs were way too expensive, so all the Amoco business units continued to run Microsoft DOS on cheap PC-compatible machines. All of this PC activity went on outside of the control of the Amoco IT Department. In most corporations, end-users treat the IT Department with disdain, and being able to run PCs with commercial software gave the end-users a feeling of independence. But like IBM, Amoco's IT Department did not see the PCs as much of a risk. Then the end-user business units began to hook their cheap PCs up into LANs (Local Area Networks) that allowed them to share files and printers. That really scared Amoco's IT Department, so in 1986 all members of the IT Department were given PCs to replace their IBM 3278 terminals.
I got my first PC in the IT department of Amoco in 1986. It was an IBM PC/AT with a 6 MHz Intel 80286 processor, a 20 MB hard disk, and a total of 640 KB of memory. It cost about $1,600 at the time - about $3,800 in 2021 dollars! But Amoco's IT Department mainly just used these PCs to run IBM 3278 emulation software to continue to connect to the IBM mainframes running MVS and VM/CMS. I was actually doing primitive Cloud Computing on Amoco's network of 31 VM/CMS datacenters. For more on that see
The Origin and Evolution of Cloud Computing - Software Moves From the Sea to the Land and Back Again. At the time, I thought that PCs were a real pain because they were so anemic, expensive, and hard to support. To get a feel for this, take a look at this YouTube video that describes the original IBM PC that appeared in 1981:
The Original IBM PC 5150 - the story of the world's most influential computer
https://www.youtube.com/watch?v=0PceJO3CAGI
and this YouTube video that shows somebody installing and using Windows 1.04 on a 1985 IBM PC/XT clone:
Windows1 (1985) PC XT Hercules
https://www.youtube.com/watch?v=xnudvJbAgI0
Then Microsoft released Windows 3.0 in 1990. Suddenly, we were able to run a GUI desktop on top of Microsoft DOS on the same cheap PCs that had been running Microsoft DOS applications! To end-users, Windows 3.0 looked just like the expensive Macs that they were not supposed to use. This greatly expanded the number of Amoco end-users with cheap PCs running Windows 3.0. To further complicate the situation, some die-hard Apple II end-users bought expensive Apple Macs too! And because the Amoco IT Department had been running IBM mainframes for many decades, some Amoco IT end-users started running the IBM OS/2 GUI operating system on cheap PC-compatible machines on a trial basis. Now, instead of running the heavy-duty commercial applications on IBM MVS and the light-weight interactive applications on the Amoco IBM VM/CMS Corporate Timesharing Cloud, we had people trying to replace all of that with software running on personal computers under Microsoft DOS, Windows 3.0, the Mac operating system, and OS/2. What a mess!
Just when you would think that it could not get any worse, this all dramatically changed in 1992 when the Distributed Computing mass extinction hit IT. The end-user business units began to buy their own Unix servers and to hire young computer science graduates to write two-tier C and C++ client-server applications running on those Unix servers.
Figure 2 – The Distributed Computing Revolution hit IT in the early 1990s. Suddenly, people started writing two-tier client-server applications on Unix servers. The Unix servers ran RDBMS database software, like Oracle, while the end-users' PCs ran "fat" client software (Left). Later, a third layer of Middleware was added to do most of the heavy processing, with the end-users' PCs reduced to running "thin" client software in the form of a browser (Right).
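To make the two-tier model concrete, here is a minimal sketch of a "fat" client, assuming Python with its built-in sqlite3 module standing in for an Oracle RDBMS of the era; the wells table and the business rule are hypothetical. Notice that both the SQL and the business logic live in the client program that every end-user's PC has to run:

```python
import sqlite3

# Two-tier "fat" client sketch: the client program holds both the business
# logic and the SQL, and talks directly to the RDBMS over the network. Here
# the built-in sqlite3 module stands in for a 1990s Oracle server; the wells
# table and its columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wells (name TEXT, barrels_per_day REAL)")
conn.executemany("INSERT INTO wells VALUES (?, ?)",
                 [("Well A", 1200.0), ("Well B", 450.0), ("Well C", 2100.0)])

def high_producers(min_barrels):
    # Business logic embedded in the client - every PC that runs this
    # program needs its own copy of these rules, plus the database driver
    # DLLs that made "fat" clients such a support headache.
    rows = conn.execute(
        "SELECT name, barrels_per_day FROM wells WHERE barrels_per_day >= ?",
        (min_barrels,))
    return [name for name, _ in rows]

print(high_producers(1000.0))  # ['Well A', 'Well C']
```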
Fortunately, in 1995 the Internet Revolution hit. At first, it was just passive corporate websites that could only display static information, but soon people were building interactive corporate websites too. The "fat" client software on the end-users' PCs was then reduced to "thin" client software in the form of a browser communicating with backend Unix webservers delivering the interactive content on the fly. But to allow heavy-duty corporate websites to scale with increasing loads, a third layer was soon added to the Distributed Computing Platform to form a 3-tier Client-Server model (Right side of Figure 2). The third layer ran Middleware software containing the business logic for applications and was positioned between the backend RDBMS database layer and the webservers dishing out dynamic HTML to the end-user browsers. The 3-tier Distributed Computing Model finally put an end to the "DLL Hell" of the "fat" client two-tier Distributed Computing Model.
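And here is the same hypothetical business rule moved into a 3-tier arrangement, again only as a sketch: Python's standard-library http.server stands in for the Middleware tier, sqlite3 stands in for the backend RDBMS, and the browser is the "thin" client that only ever receives dynamic HTML. Everything here besides the general pattern (the table, the rule, the port number) is an assumed stand-in:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import sqlite3

# Three-tier sketch: browser ("thin" client) -> Middleware -> RDBMS.
# The business logic now lives in ONE place, the middle tier, instead of
# on every end-user's PC. sqlite3 again stands in for the backend RDBMS,
# and the wells table is hypothetical.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE wells (name TEXT, barrels_per_day REAL)")
conn.executemany("INSERT INTO wells VALUES (?, ?)",
                 [("Well A", 1200.0), ("Well B", 450.0), ("Well C", 2100.0)])

class MiddlewareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Business logic in the middle tier: only high producers are shown.
        rows = conn.execute(
            "SELECT name FROM wells WHERE barrels_per_day >= 1000.0")
        items = "".join(f"<li>{name}</li>" for (name,) in rows)
        body = f"<html><body><ul>{items}</ul></body></html>".encode()
        # The browser receives only dynamic HTML - no SQL, no driver DLLs.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Point any browser at http://localhost:8000/ to act as the thin client.
    HTTPServer(("localhost", 8000), MiddlewareHandler).serve_forever()
```

With the business rule centralized in the middle tier, a bug fix means redeploying one server instead of reinstalling "fat" client software and its DLLs on thousands of PCs, which is exactly why the 3-tier model ended "DLL Hell".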
Conclusion
So for the most part, like most IT professionals of the time, I was just responding as best I could to these dramatic changes in IT. But to see what it was like for those actually driving these dramatic IT innovations, you really need to watch the 40 episodes of Halt and Catch Fire and take some notes while you do it.
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston