If you are a busy IT Development professional, you probably have not had much time to keep up with the advances being made in AI text generation. But since software source code is just text, you should be paying attention, especially since the arrival of GPT-3 last May. And now that Microsoft has acquired an exclusive license to GPT-3, you should definitely be thinking about how AI-generated source code may affect your job in the near future. To get a feel for what GPT-3 can do, please view the following two YouTube videos.
From Essays to Coding, This New A.I. Can Write Anything
https://www.youtube.com/watch?v=Te5rOTcE4J4
The next one features an interview with GPT-3 itself in a Turing-Test-like manner. It is a little bit scary.
What It's Like To be a Computer: An Interview with GPT-3
https://www.youtube.com/watch?v=PqbB07n_uQ4
How Does GPT-3 Do It?
GPT-3 uses huge Deep Learning neural networks to produce coherent text of all kinds, including computer source code. All Machine Learning techniques essentially use the concepts of Universal Darwinism - inheritance, innovation and natural selection - to work their wonders. A Machine Learning technique takes some input data and applies an initial model that tries to explain the data. A selection process then measures how well each candidate model explains the data and keeps the models that do well. The surviving models are mutated slightly with some innovation, and the mutated models are inherited by the next generation of models, which is then subjected to the same selection process all over again to see if any improvements have been made. This Darwinian cycle of inheritance, innovation and natural selection is repeated over and over until a final model is produced.
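To make that loop a little more concrete, below is a minimal Python sketch of the inheritance-innovation-selection cycle applied to a toy one-parameter model that tries to explain some noisy data. GPT-3 itself is trained with gradient descent on a huge Transformer network, but the overall cycle of proposing a model, measuring how well it explains the data and keeping the improvements has the same shape.

import random

# Toy data: y = 3x + noise. The "model" is a single slope parameter.
data = [(x, 3.0 * x + random.gauss(0, 0.1)) for x in range(20)]

def fitness(slope):
    # How well does this model explain the data? (negative squared error)
    return -sum((y - slope * x) ** 2 for x, y in data)

# Start with an initial population of candidate models.
population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(100):
    # Selection: keep the models that explain the data best.
    survivors = sorted(population, key=fitness, reverse=True)[:5]
    # Inheritance + innovation: children inherit a surviving slope
    # plus a small random mutation.
    population = [s + random.gauss(0, 0.5) for s in survivors for _ in range(4)]

best = max(population, key=fitness)
print(f"best slope after 100 generations: {best:.3f}")  # should end up close to 3.0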
Developers writing computer source code manually today also use these same concepts of Universal Darwinism to produce code. For example, developers rarely code up new software entirely from scratch. Instead, they take existing code from previous applications, from others in their development group, or even from the Internet as a starting point, and then use the Darwinian processes of inheritance, innovation and natural selection to evolve the software into the final product. The developer currently does this with a very tedious manual process of:
Borrow some old code → modify code → test → fix code → test → Borrow some more old code → modify code → test → fix code → test ....
To understand how GPT-3 simulates the above process, begin with some excellent YouTube videos by Luis Serrano. If you are totally new to Machine Learning, start with this first video, which covers the major machine learning approaches currently in use:
A Friendly Introduction to Machine Learning
https://www.youtube.com/watch?v=IpGxLWOIZy4
The next video covers neural networks and the Deep Learning that large neural networks can perform. GPT-3 relies heavily on Deep Learning neural networks that have basically read everything on the Internet and taught themselves how to talk, much like a human toddler does by listening to the people around it.
A Friendly Introduction to Deep Learning and Neural Networks
https://www.youtube.com/watch?v=BR9h47Jtqyw
The next video explains how recurrent neural networks use feedback loops, like the feedback loops found in electronic circuits and biochemical reactions, to carry information from one step of a sequence to the next.
A Friendly Introduction to Recurrent Neural Networks
https://www.youtube.com/watch?v=UNmqTiOnRfg
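If you want to see the feedback idea in miniature, here is a tiny Python sketch (using NumPy) of a single recurrent cell: the hidden state computed at one step is fed back in at the next step, so each new state depends on the entire history of inputs. The weights are random and untrained; this is purely an illustration of the feedback loop, not a working language model.

import numpy as np

rng = np.random.default_rng(0)

input_size, hidden_size = 3, 4
W_x = rng.normal(0, 0.5, (hidden_size, input_size))   # input weights
W_h = rng.normal(0, 0.5, (hidden_size, hidden_size))  # feedback (recurrent) weights
b = np.zeros(hidden_size)

def rnn_step(h_prev, x_t):
    # h_t depends on the new input x_t AND on the previous state h_prev.
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

sequence = rng.normal(size=(6, input_size))  # 6 time steps of toy input
h = np.zeros(hidden_size)
for t, x_t in enumerate(sequence):
    h = rnn_step(h, x_t)
    print(f"step {t}: hidden state {np.round(h, 3)}")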
The next video explains how a generative neural network can be paired with an adversarial discriminator network that pushes back against it, much like the opposing positive and negative feedback loops that control the expression of genes stored in DNA.
A Friendly Introduction to Generative Adversarial Networks (GANs)
https://www.youtube.com/watch?v=8L11aMN5KY8
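For the adventurous, here is a minimal sketch of that adversarial pairing using the PyTorch library on a toy one-dimensional problem: a generator learns to turn random noise into numbers drawn from a Gaussian centered at 4.0, while a discriminator learns to tell the generator's output from the real thing. This is only a toy illustration of the GAN idea, not a production model, and the network sizes and learning rates are arbitrary choices.

import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0          # samples from the "real" distribution
    noise = torch.randn(64, 1)
    fake = G(noise)

    # Train the discriminator: label real samples 1 and generated samples 0.
    opt_D.zero_grad()
    loss_D = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_D.backward()
    opt_D.step()

    # Train the generator: try to make the discriminator call its output real.
    opt_G.zero_grad()
    loss_G = bce(D(fake), torch.ones(64, 1))
    loss_G.backward()
    opt_G.step()

# The mean of the generated samples should drift toward 4.0 as training proceeds.
print("mean of generated samples:", G(torch.randn(1000, 1)).mean().item())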
Now, with the above background, take a look at the following excellent posts by Jay Alammar.
Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention)
https://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/
The Illustrated Transformer
https://jalammar.github.io/illustrated-transformer/
Finally, read this last post that explains how GPT-3 uses all of the above to work.
How GPT3 Works - Visualizations and Animations
http://jalammar.github.io/how-gpt3-works-visualizations-animations/
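The key thing to take away from Jay Alammar's post is that GPT-3 generates text one token at a time: given everything produced so far, it predicts a probability distribution for the next token, samples from it, appends the result and repeats. The Python sketch below runs that same autoregressive loop with a trivially simple word-bigram "model" standing in for GPT-3's 175-billion-parameter Transformer, just to show the shape of the generation loop.

import random
from collections import Counter, defaultdict

random.seed(0)

corpus = ("the developer wrote the code and the developer tested the code "
          "and then the developer fixed the code").split()

# "Training": count which word follows which in the tiny corpus.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    candidates = counts[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

prompt = ["the"]
for _ in range(12):                  # the autoregressive generation loop
    prompt.append(next_word(prompt[-1]))
print(" ".join(prompt))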
How Will GPT-3 Affect My Programming Job?
I don't think that GPT-3 has brought us to the Software Singularity yet, but we are getting closer. Here is a YouTube video that seems to have a good take on the subject:
GPT3 - Will AI replace programmers?
https://www.youtube.com/watch?v=u5MmL3nqvfE
One commenter wisely noted, "I'm not scared about GPT-3. I'm scared about GPT-5."
But I can imagine Microsoft incorporating GPT-3 into its Visual Studio IDE. Since GPT-3 is much too large to run on a single PC and needs to run in the Cloud, Visual Studio would need metered Cloud support for GPT-3, GPT-4, GPT-5... Also, GPT-3 can be fine-tuned on specific computer languages to greatly improve its performance; currently, it is just generating code based on whatever code it happened to read on the Internet. So I think that GPT-3 will only slightly alter the normal development cycle by removing many of the tedious aspects of coding that do not require actual thinking. It will probably look more like this:
Generate some code with GPT-3 → modify code → test → fix code → test → Generate some code with GPT-3 → modify code → test → fix code → test ....
The reason I say this is because, back in 1985, while I was a programmer in Amoco's IT department, I began working on an IDE with a built-in code generator called BSDE - the Bionic Systems Development Environment. BSDE was an early mainframe-based IDE (an Integrated Development Environment like Eclipse or Microsoft Visual Studio) at a time when there were no IDEs. During the 1980s, about 30 programmers in Amoco's IT department used BSDE to grow several million lines of production code by growing applications from embryos in an interactive mode. BSDE would first generate an embryo application for a programmer by reading the genes for the application. The application genes were stored on a sequential file containing the DDL statements for the DB2 tables and indexes that the application was going to use to store data. The embryo was then grown within BSDE by the programmer injecting reusable BSDE skeleton code segments into the embryo and by turning on the embryo's genes to generate SQL code on the fly. This continued until the application grew to maturity and BSDE delivered the completed application into Production. The development process looked something like this:
Generate some code with BSDE → modify code → test → fix code → test → Generate some code with BSDE → modify code → test → fix code → test ....
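To give a feel for what "turning on the genes" meant, here is a toy Python sketch of the idea: read a simplified DDL statement describing a table and generate skeleton SQL from it. This is not the actual BSDE code generator, which was a mainframe product working against real DB2 DDL; it is just a few lines to illustrate the concept of generating code from a declarative description of the data.

import re

# A toy "gene": a simplified DDL statement describing a table.
ddl = """
CREATE TABLE CUSTOMER (
    CUSTOMER_ID   CHAR(9),
    NAME          VARCHAR(40),
    BALANCE       DECIMAL(9,2)
)
"""

match = re.search(r"CREATE TABLE (\w+)\s*\((.*)\)", ddl, re.S)
table = match.group(1)
columns = [line.split()[0] for line in match.group(2).strip().splitlines()]

# "Turn on the genes": generate skeleton SQL for the embryo application.
col_list = ", ".join(columns)
print(f"SELECT {col_list} FROM {table} WHERE {columns[0]} = ?")
print(f"INSERT INTO {table} ({col_list}) VALUES ({', '.join('?' for _ in columns)})")
print(f"UPDATE {table} SET {', '.join(c + ' = ?' for c in columns[1:])} WHERE {columns[0]} = ?")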
For more about BSDE take a look at Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework.
How Will GPT-3 Affect the Rest of Society?
Since most white-collar jobs deal primarily with reading and writing text, and that text does not have to be as perfect as computer source code, white-collar jobs beyond those in IT are probably at even more risk from AI text generators than IT jobs are. As I pointed out in Is it Finally Time to Reboot Civilization with a New Release?, Oligarchiology and the Rise of Software to Predominance in the 21st Century and The Danger of Tyranny in the Age of Software, society will soon need to come to grips with trying to run civilizations in which most people do not work as we currently understand work. In that regard, perhaps your next job interview will be conducted by a GPT-3 avatar!
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston
Tuesday, October 13, 2020
Agile Development of Molten Salt Nuclear Reactors at Copenhagen Atomics
In Last Call for Carbon-Based Intelligence on Planet Earth, I explained that we have just about run out of time before climate change possibly puts an end to carbon-based Intelligence on the Earth. This conclusion is an outgrowth of my Null Result Hypothesis explanation for Fermi's Paradox.
Fermi’s Paradox - If the universe is just chock full of intelligent beings, why do we not see any evidence of their existence?
Briefly stated:
Null Result Hypothesis - What if the explanation to Fermi's Paradox is simply that the Milky Way galaxy has yet to produce a form of interstellar Technological Intelligence because all Technological Intelligences are destroyed by the very same mechanisms that bring them forth?
By that, I mean that the Milky Way galaxy has not yet produced a form of Intelligence that can make itself known across interstellar distances, including ourselves. I then went on to propose that the simplest explanation for this lack of contact could be that the conditions necessary to bring forth a carbon-based interstellar Technological Intelligence on a planet or moon were also the same kill mechanisms that eliminated all forms of carbon-based Technological Intelligences with 100% efficiency. One of those possible kill mechanisms could certainly be for carbon-based Technological Intelligences to mess with the carbon cycle of their home planet or moon. For more on that see The Deadly Dangerous Dance of Carbon-Based Intelligence. But in Last Call for Carbon-Based Intelligence on Planet Earth, I also explained that we still had a chance to stop pumping carbon dioxide into our atmosphere by using molten salt nuclear reactors to burn the 250,000 tons of spent nuclear fuel, 1.2 million tons of depleted uranium and the huge mounds of thorium waste from rare earth mines. From the perspective of softwarephysics, this is important because the carbon-based Intelligence on this planet is so very close to producing a silicon-based Intelligence to carry on with exploring our galaxy and making itself known to other forms of Intelligence that might be out there.
Figure 1 – A ball of thorium or uranium smaller than a golf ball can fuel an American lifestyle for 100 years. This includes all of the electricity, heating, cooling, driving and flying that an American does in 100 years. We have already mined enough thorium and uranium to run the whole world for thousands of years. There is enough thorium and uranium on the Earth to run the world for hundreds of thousands of years.
How To Make Molten Salt Nuclear Reactors a Reality
Over the past decade, many organizations have begun research and development programs to produce molten salt nuclear reactors. China and India have both adopted a government-led NASA-type approach to the problem using Waterfall-like methodologies. A number of private concerns throughout the world, most notably ThorCon, are also using a Waterfall approach to produce a design for a large-scale molten salt nuclear reactor that can attain certification by governmental regulatory bodies like the NRC. Once certification by a governmental regulatory body has been attained, it is hoped that it should be possible to find the necessary private funding to build molten salt nuclear reactors on a massive scale.
Copenhagen Atomics Takes an Agile Approach
However, in this posting, I would like to showcase the efforts of a small startup called Copenhagen Atomics. Unlike the other organizations working on molten salt nuclear reactors, Copenhagen Atomics is using an Agile approach. Copenhagen Atomics is applying the same Agile research and development techniques to the production of molten salt nuclear reactors that SpaceX is currently using for the research and development of private spaceflight - only without the benefit of huge amounts of money. For more on the amazing efforts of SpaceX see Agile Project Management at SpaceX. Below is a recent video of Thomas Jam Pedersen, one of the founding members of Copenhagen Atomics, at TEAC10 explaining the Agile techniques that Copenhagen Atomics is using.
CA @ TEAC10, Oak Ridge - Thomas Jam Pedersen
https://www.youtube.com/watch?v=9nN4txzie8M
Rather than spending a great deal of time and effort on Waterfall design documents, Copenhagen Atomics is spending their time and effort on actually "building stuff" in an Agile trial and error manner. For example:
1MW Molten-Salt Test Reactor by Copenhagen Atomics - Aslak Stubsgaard @ ORNL MSRW 2020
https://www.youtube.com/watch?v=A9zfYTWjZqk
Now compare the Copenhagen Atomics presentations to the 1969 film produced by the Oak Ridge National Laboratory for the United States Atomic Energy Commission describing the Molten Salt Reactor Experiment (MSRE). Alvin Weinberg's team of 30 nuclear scientists built the very first experimental molten salt nuclear reactor from scratch, in an Agile manner, with only $10 million during the period 1960 - 1965, and then ran it for 20,000 hours from 1965 - 1969 without a hitch. As I mentioned in Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework and Agile Project Management at SpaceX, back in the 1950s and early 1960s technical people used a more Agile approach to solving complex problems than the Waterfall approach that arose in the 1970s.
The Molten-Salt Reactor Experiment
https://www.youtube.com/watch?v=tyDbq5HRs0o#t-1
More Copenhagen Atomics YouTube videos are available on their YouTube Channel:
Copenhagen Atomics YouTube Channel
https://www.youtube.com/c/CopenhagenAtomics
Funding the Copenhagen Atomics Approach
Like SpaceX, Copenhagen Atomics is partially funding its Agile research and development program by developing and building component products for molten salt nuclear reactors that can then be sold to other organizations.
Figure 2 – Copenhagen Atomics is partially funding their research and development efforts by developing and building component products for molten salt nuclear reactors. Above we see some team members sitting on portable test molten salt loops that can be used to test the interaction of molten salts with test components. Since working on molten salt nuclear reactors basically involves dealing with high-temperature plumbing, much research can be conducted with molten salts that are not even radioactive.
The ultimate goal of Copenhagen Atomics is to produce small 50 MW molten salt nuclear reactors the size of a 40-foot shipping container that can burn spent nuclear fuel.
Figure 3 – A schematic for a 50 MW molten salt nuclear waste burner.
Figure 4 – These proposed 50 MW molten salt nuclear waste burners would be the size of a 40-foot shipping container and would be constructed on assembly lines ready for shipment to customers by ship and truck.
Figure 5 – These proposed 50 MW molten salt nuclear waste burners would use a thorium-232 fuel cycle.
Figure 6 – The neutrons from plutonium-239 and plutonium-240 from spent nuclear fuel rods would be used to get the thorium-232 fuel cycle going. Additional spent nuclear fuel could also be added to provide additional energy and to turn the spent fuel into fission products that only need to be stored for 10 half-lives or 300 years.
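The 300-year figure comes from simple half-life arithmetic. The longest-lived of the major fission products, cesium-137 and strontium-90, have half-lives of roughly 30 years, and after 10 half-lives only about a thousandth of the original radioactivity remains:

# Rough half-life arithmetic behind the "10 half-lives or 300 years" figure,
# assuming a half-life of about 30 years (cesium-137, strontium-90).
half_life_years = 30.0
for n_half_lives in (1, 5, 10):
    remaining = 0.5 ** n_half_lives
    print(f"after {n_half_lives * half_life_years:5.0f} years "
          f"({n_half_lives} half-lives): {remaining:.4%} remaining")
# After 10 half-lives (about 300 years) less than 0.1% of the original activity is left.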
Time For Action
Last fall, I met with Congressman Raja Krishnamoorthi of Illinois about molten salt nuclear reactors and the possibility of using them to burn the 10,000 tons of spent nuclear fuel currently being stored at nuclear reactors in Illinois. Instead of being a burden, these 10,000 tons of spent nuclear fuel could be turned into electricity and into fission products that only need to be stored for 300 years. I explained that, currently, Illinois is committed to storing the 10,000 tons of spent nuclear fuel for about 200,000 years. Of the 535 members of the 116th Congress of the United States of America, only 11 have a background in engineering or science. Fortunately for me, Congressman Raja Krishnamoorthi received a B.S. in mechanical engineering from Princeton and a law degree from Harvard, has had some experience with solar energy, and is also very concerned about climate change. I also passed on my concerns about these issues to both of my senators.
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston
Wednesday, September 16, 2020
Agile Project Management at SpaceX
In Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework, I explained that back in the 1950s and 1960s IT professionals used an Agile methodology to develop software because programs had to be very small to fit into the puny amount of memory that mainframe computers had at the time. Recall that the original UNIVAC I came out in 1951 with only 12 KB of memory. By 1970, mainframe computers typically only had about 1 MB of memory that was divided up into a number of fixed regions by the IBM OS/360 MFT operating system. Typically, the largest memory region was only 256 KB. Now you really cannot do a lot of logic in 256 KB of memory, so applications in the 1960s consisted of lots of very small programs in a JCL job stream that used many tapes for input and output. Each step in the job stream had a very small program that would read the tapes from the previous job step, do some processing on the data, and then write out the processed data to tapes that became the input for the next step in the JCL job stream. For more on tape processing see: An IT Perspective on the Origin of Chromatin, Chromosomes and Cancer.
So this kind of tape batch processing easily lent itself to the prototyping of the little programs in each job step. Developers would simply "breadboard" each little step by trial and error, much as operators used to do on the Unit Record Processing machines of the 1950s, which were programmed by physically rewiring the plugboard of each machine in a job stream. The Unit Record Processing machines would then process hundreds of punch cards per minute by routing the punch cards from machine to machine in processing streams.
Figure 1 – In the 1950s, Unit Record Processing machines like this card sorter were programmed by physically rewiring a plugboard.
Figure 2 – The plugboard for a Unit Record Processing machine. Programming a plugboard was a trial and error process that had to be done in an Agile manner. This Agile programming technique carried over to the programming of the small batch programs in a tape batch processing stream.
To get a better appreciation for how these small programs in a long tape batch processing stream worked, and how they were developed in a piecemeal prototyping manner, let's further explore tape batch processing.
Batch Tape Processing and Sequential Access Methods
One of the simplest and oldest sequential access methods is called QSAM - Queued Sequential Access Method:
Queued Sequential Access Method
http://en.wikipedia.org/wiki/Queued_Sequential_Access_Method
I did a lot of magnetic tape processing in the 1970s and early 1980s using QSAM. At the time we used 9 track tapes that were 1/2 inch wide and 2400 feet long on a reel with a 10.5 inch diameter. The tape had 8 data tracks and one parity track across the 1/2 inch tape width. That way we could store one byte across the 8 1-bit data tracks in a frame, and we used the parity track to check for errors. We used odd parity: if the 8 bits on the 8 data tracks in a frame added up to an even number of 1s, we put a 1 in the parity track to make the total number of 1s odd, and if the 8 bits added up to an odd number of 1s, we put a 0 in the parity track to keep the total number of 1s odd. Originally, 9 track tapes had a density of 1600 bytes/inch of tape, with a data transfer rate of 15,000 bytes/second. Remember, a byte is 8 bits and can store one character, like the letter "A", which we encode in the ASCII code set as A = "01000001".
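Here is the odd-parity rule in a few lines of Python, just to make the bookkeeping explicit:

# Odd parity on a 9 track tape, in miniature: 8 data bits per frame plus one
# parity bit chosen so that the total number of 1s across all 9 tracks is odd.
def odd_parity_bit(byte):
    ones = bin(byte).count("1")
    return 0 if ones % 2 == 1 else 1

for ch in "A9":
    byte = ord(ch)
    p = odd_parity_bit(byte)
    print(f"'{ch}' = {byte:08b}  parity track = {p}  total 1s = {bin(byte).count('1') + p}")
# A single flipped bit anywhere in the frame makes the count even, which the
# tape drive detects as a parity error.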
Figure 3 – A 1/2 inch wide 9 track magnetic tape on a 2400-foot reel with a diameter of 10.5 inches
Figure 4 – 9 track magnetic tape had 8 data tracks and one parity track using odd parity which allowed for the detection of bad bytes with parity errors on the tape.
Later, 6250 bytes/inch tape drives became available, and I will use that density for the calculations that follow. Now suppose you had 10 million customers and the current account balance for each customer was stored on an 80-byte customer record. A record was like a row in a spreadsheet. The first field of the record was usually a CustomerID field that contained a unique customer ID like a social security number and was essentially the equivalent of a promoter region on the front end of a gene in DNA. The remainder of the 80-byte customer record contained fields for the customer’s name and billing address, along with the customer’s current account information. Between each block of data on the tape, there was a 0.5 inch gap of “junk” tape. This “junk” tape allowed for the acceleration and deceleration of the tape reel as it spun past the read/write head of a tape drive and perhaps occasionally reversed direction. Since an 80-byte record only came to 80/6250 = 0.0128 inches of tape, which is quite short compared to the overhead of the 0.5 inch gap of “junk” tape between records, it made sense to block many records together into a single block of data that could be read by the tape drive in a single I/O operation. For example, blocking 100 80-byte records increased the block size to 8000/6250 = 1.28 inches and between each 1.28 inch block of data on the tape, there was the 0.5 inch gap of “junk” tape. This greatly reduced the amount of wasted “junk” tape on a 2400-foot reel of tape. So each 100 record block of data took up a total of 1.78 inches of tape and we could get 16,180 blocks on a 2400-foot tape or the data for 1,618,000 customers per tape. So in order to store the account information for 10 million customers, you needed about 6 2400-foot reels of tape. That means each step in the batch processing stream would use 6 input tapes and 6 output tapes. The advantage of QSAM, over an earlier sequential access method known as BSAM, was that you could read and write an entire block of records at a time via an I/O buffer. In our example, a program could read one record at a time from an I/O buffer which contained the 100 records from a single block of data on the tape. When the I/O buffer was depleted of records, the next 100 records were read in from the next block of records on the tape. Similarly, programs could write one record at a time to the I/O buffer, and when the I/O buffer was filled with 100 records, the entire I/O buffer with 100 records in it was written as the next block of data on an output tape.
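For anyone who wants to play with these numbers, here is the same blocking arithmetic as a short Python sketch:

record_bytes = 80
records_per_block = 100
density_bpi = 6250                      # bytes per inch of tape
gap_inches = 0.5                        # inter-block gap of "junk" tape
tape_inches = 2400 * 12                 # a 2400-foot reel
customers = 10_000_000

block_inches = records_per_block * record_bytes / density_bpi      # 1.28 inches of data
inches_per_block = block_inches + gap_inches                       # 1.78 inches in total
blocks_per_tape = int(tape_inches / inches_per_block)              # 16,179 whole blocks (about 16,180)
customers_per_tape = blocks_per_tape * records_per_block           # about 1,618,000
mb_per_tape = customers_per_tape * record_bytes / 2**20            # about 123 MB

print(f"blocks per reel    : {blocks_per_tape:,}")
print(f"customers per reel : {customers_per_tape:,}")
print(f"data per reel      : {mb_per_tape:.0f} MB")
print(f"reels for 10,000,000 customers: {customers / customers_per_tape:.1f}")
# Without blocking, every 80-byte record would carry its own 0.5-inch gap,
# wasting about 97% of the tape.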
Figure 5 – Between each record, or block of records, on a magnetic tape, there was a 0.5 inch gap of “junk” tape. The “junk” tape allowed for the acceleration and deceleration of the tape reel as it spun past the read/write head on a tape drive. Since an 80-byte record only came to 80/6250 = 0.0128 inches, it made sense to block many records together into a single block that could be read by the tape drive in a single I/O operation. For example, blocking 100 80-byte records increased the block size to 8000/6250 = 1.28 inches, and between each 1.28 inch block of data on the tape, there was a 0.5 inch gap of “junk” tape for a total of 1.78 inches per block.
Figure 6 – Blocking records on tape allowed data to be stored more efficiently.
So it took 6 tapes just to store the rudimentary account data for 10 million customers. The problem was that each tape could only store about 123 MB of data. Not too good, considering that today you can buy a 1 TB PC disk drive that can hold 8525 times as much data for about $100! Today, you could also store about 67 times as much data on a $7.00 8 GB thumb drive. So how could you find the data for a particular customer on 14,400 feet (about 2.7 miles) of tape? Well, you really could not do that reading one block of data at a time with the read/write head of a tape drive, so we processed data with batch jobs using lots of input and output tapes. Generally, we had a Master Customer File on 6 tapes and a large number of Transaction tapes with insert, update and delete records for customers. All the tapes were sorted by the CustomerID field, and our programs would read a Master tape and a Transaction tape at the same time, apply the inserts, updates and deletes from the Transaction tape, and write the results to a new Master output tape. These batch jobs would run for many hours, with lots of mounting and unmounting of dozens of tapes.
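The heart of such a batch job was the classic sequential master-file update: read the sorted old Master and the sorted Transaction file together in one pass and write a new Master. Below is a toy Python sketch of that merge logic using small in-memory lists in place of tape files; it assumes well-formed transactions (an UPDATE or DELETE always matches an existing Master record), which real jobs of course had to verify.

# Old Master and Transaction data, both sorted by CustomerID.
old_master = [(101, "Alice", 50.00), (102, "Bob", 20.00), (104, "Dave", 75.00)]
transactions = [("UPDATE", 102, ("Bob", 35.00)),     # change Bob's balance
                ("DELETE", 104, None),               # drop Dave
                ("INSERT", 103, ("Carol", 10.00))]   # add Carol
transactions.sort(key=lambda t: t[1])

new_master = []
m = 0
for action, cust_id, fields in transactions:
    # Copy forward Master records with lower CustomerIDs untouched.
    while m < len(old_master) and old_master[m][0] < cust_id:
        new_master.append(old_master[m])
        m += 1
    if action == "INSERT":
        new_master.append((cust_id, *fields))
    elif action == "UPDATE":
        new_master.append((cust_id, *fields))
        m += 1                                       # replaces the old record
    elif action == "DELETE":
        m += 1                                       # skip the old record
new_master.extend(old_master[m:])                    # copy the tail of the Master

for record in new_master:
    print(record)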
Figure 7 – The batch processing of 10 million customers took a lot of tapes and tape drives.
When developing batch job streams that used tapes, each little program in the job stream was prototyped first using a small amount of data on just one input and one output tape in a stepwise manner.
The Arrival of the Waterfall Methodology
This all changed in the late 1970s when interactive computing began to become a significant factor in the commercial software that corporate IT departments were generating. By then mainframe computers had much more memory than they had back in the 1960s, so interactive programs could be much larger than the small programs found within the individual job steps of a batch job stream. Since interactive software had to be loaded into a computer all in one shot, required a user interface that did things like checking the input data from an end-user, and also had to interact with the end-user in a dynamic manner, interactive programs were necessarily much larger than the small programs found in the individual job steps of a batch job stream. These factors caused corporate IT departments to move from the Agile prototyping methodologies of the 1960s and early 1970s to the Waterfall methodology of the 1980s, and so by the early 1980s prototyping software on the fly was considered to be an immature approach. Instead, corporate IT departments decided that a formal development process was needed, and they chose the Waterfall approach used by the construction and manufacturing industries to combat the high costs of making changes late in the development process. In the early 1980s, CPU time was still exceedingly expensive, so it made sense to create lots of upfront design documents before coding actually began in order to minimize the compute costs of creating software. For example, in the early 1980s, if I had a $100,000 project, it was usually broken down as $25,000 for programming manpower costs, $25,000 for charges from IT Management and other departments in IT, and $50,000 for program compiles and test runs to develop the software. Because just running compiles and test runs of the software under development consumed about 50% of the cost of a development project, it made sense to adopt the Waterfall development model to minimize those costs.
Along Comes NASA
Now all during this time, NASA was trying to put people on the Moon and explore the Solar System. At first, NASA also adopted a more Agile approach to project management, using trial and error to solve the fundamental problems of rocketry. If you search YouTube for some of the early NASA launches from the late 1950s and early 1960s, you will see many rockets blowing up in large fireballs:
Early U.S. rocket and space launch failures and explosions
https://www.youtube.com/watch?v=13qeX98tAS8
This all changed when NASA started putting people on top of their rockets. Because the NASA rockets were getting much larger, more complicated and were now carrying live passengers, NASA switched from an Agile project management approach to a Waterfall project management approach run by the top-down command-and-control management style of corporate America in the 1960s:
NASA's Marshall Space Flight Center 1960s Orientation Film
https://www.youtube.com/watch?v=qPmJR3XSLjY
All during the 1960s and 1970s NASA then went on to perform numerous technological miracles using the Waterfall project management approach and the top-down command-and-control management style of corporate America in the 1960s.
SpaceX Responds with an Agile Approach to Exploring the Solar System
Despite the tremendous accomplishments of NASA over many decades, speed to market was never one of NASA's strong suits. Contrast that with the astounding progress that SpaceX has made, especially in the past year. Elon Musk started out in software, and with SpaceX he has taken the Agile techniques of software development and applied them to the development of space travel. The ultimate goal of SpaceX is to put colonies on the Moon and Mars using the SpaceX Starship.
Figure 8 – The SpaceX Starship SN8 is currently sitting on a pad at the SpaceX Boca Chica launch facility in South Texas awaiting a test flight.
SpaceX is doing this in an Agile stepping stone manner. SpaceX is funding this venture by launching satellites for other organizations using their reusable Falcon 9 rockets. SpaceX is also launching thousands of SpaceX Starlink satellites with Falcon 9 rockets. Each Falcon 9 rocket can deliver about 60 Starlink satellites into low-Earth orbit to eventually provide broadband Internet connectivity to the entire world. It is expected that the revenue from Starlink will far exceed the revenue obtained from launching satellites for other organizations.
Figure 9 – SpaceX is currently making money by launching satellites for other organizations using reusable Falcon 9 rockets.
Unlike NASA in the 1960s and 1970s, SpaceX uses Agile techniques to design, test and improve individual rocket components, rather than NASA's Waterfall approach of building an entire rocket in one shot.
Figure 10 – Above is a test flight of the SpaceX Starhopper lower stage of the Starship, which was designed and built in an Agile manner.
Below is a YouTube video by Marcus House that covers many of the remarkable achievements that SpaceX has attained over the past year using Agile techniques:
This is how SpaceX is changing the space industry with Starship and Crew Dragon
https://www.youtube.com/watch?v=CmJGiJoU-Vo
Marcus House comes out with a new SpaceX YouTube video every week or two and I highly recommend following his YouTube Channel for further updates.
Marcus House SpaceX YouTube Channel
https://www.youtube.com/results?search_query=marcus+house+spacex&sp=CAI%253D
Many others have commented on the use of Agile techniques by SpaceX to make astounding progress in space technology. Below are just a few examples.
SpaceX’s Use of Agile Methods
https://medium.com/@cliffberg/spacexs-use-of-agile-methods-c63042178a33
SpaceX Lessons Which Massively Speed Improvement of Products
https://www.nextbigfuture.com/2018/12/reverse-engineering-spacex-to-learn-how-to-speed-up-technological-development.html
Pentagon advisory panel: DoD could take a page from SpaceX on software development
https://spacenews.com/pentagon-advisory-panel-dod-could-take-a-page-from-spacex-on-software-development/
If SpaceX Can Do It Than You Can Too
https://www.thepsi.com/if-spacex-can-do-it-than-you-can-too/
Agile Is Not Rocket Science
https://flyntrok.com/2020/09/01/spacex-and-an-agile-mindset/
Using Orbiter2016 to Simulate Space Flight
If you are interested in things like SpaceX, you should definitely try out some wonderful free software called Orbiter2016 at:
Orbiter Space Flight Simulator 2016 Edition
http://orbit.medphys.ucl.ac.uk/index.html
Be sure to download the Orbiter2016 core package and the high-resolution textures. Also, be sure to download Jarmonik's D3D9 graphics client and XRsound. There is also a button on the Download page for the Orbit Hangar Mods addon repository.
From the Orbit Hangar Mods addon repository, you can download additional software for Orbiter2016. Be careful! Some of these addon packages contain .dll files that overlay .dll files in the Orbiter2016 core package, and some of the .dll files in one addon package may overlay the .dll files of another addon package. If you try to add additional addons, I would recommend keeping a backup copy of your latest working version of Orbiter2016; that way, if an attempt to add an addon crashes Orbiter2016, you will be able to recover from it. All of this software can be downloaded from the Orbiter2016 Download page, and it comes to about 57 GB of disk space! That sounds like a lot of software, but do not worry: Orbiter2016 runs very nicely on my laptop with only 4 GB of memory and a very anemic graphics card.
Figure 11 – Launching a Delta IV rocket from Cape Canaveral.
Figure 12 – As the Delta IV rocket rises, you can have the Orbiter2016 camera automatically follow it.
Figure 13 – Each vehicle in Orbiter2016 can be viewed from the outside or from an internal console.
Figure 14 – The Delta IV rocket releases its solid rocket boosters.
Figure 15 – The Delta IV rocket second stage is ignited.
Figure 16 – The Delta IV rocket fairing is jettisoned so that the payload satellite can be released.
Figure 17 – The third stage is ignited.
Figure 18 – Launching the Atlantis Space Shuttle.
Figure 19 – Following the Atlantis Space Shuttle as it rises.
Figure 20 – Once the Atlantis Space Shuttle reaches orbit, you can open the bay doors and release the payload. You can also do an EVA with a steerable astronaut.
Figure 21 – Using Orbiter2016 to fly over the Moon.
Check your C:\Orbiter2016\Doc directory for manuals covering Orbiter2016; C:\Orbiter2016\Doc\Orbiter.pdf is a good place to start. Orbiter2016 is a very realistic simulation that can teach you a great deal about orbital mechanics. But to get the most benefit from Orbiter2016, you need an easy way to learn how to use it. Below is a really great online tutorial that does just that.
Go Play In Space
https://www.orbiterwiki.org/wiki/Go_Play_In_Space
There are also many useful tutorials on YouTube that can show you how to use Orbiter2016.
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston
Thursday, September 03, 2020
How To Cope With the Daily Mayhem of Life in IT
As you probably know, I started working on softwarephysics back in 1979 when I transitioned from being an exploration geophysicist, exploring for oil with Amoco, to become an IT professional in Amoco's IT department. I then spent about 20 years in Applications Development as a programmer and another 17 years in Middleware Operations at various large corporations. I am now 69 years old, and I retired from my last paying IT position in December of 2016. My son is also an IT professional with 17 years of experience. Currently, he is a Team Lead in Applications Development doing Salesforce Cloud Computing for a small investment company. When I first transitioned into IT from geophysics, I figured that if you could apply physics to geology, why not apply physics to software? So like the exploration team at Amoco that I had just left, consisting of geologists, geophysicists, paleontologists, geochemists, and petrophysicists, I decided to take all the physics, chemistry, biology, and geology that I could muster and throw it at the problem of software. The basic idea was that many concepts in physics, chemistry, biology, and geology suggested to me that the IT community had accidentally created a pretty decent computer simulation of the physical Universe on a grand scale, a Software Universe so to speak, and that I could use this fantastic simulation in reverse to better understand the behavior of commercial software, by comparing software to how things behave in the physical Universe. The original purpose of softwarephysics was to explain why IT was so difficult, to suggest possible remedies, and to provide a direction for thought. My hope was that softwarephysics might help the members of the IT community to better cope with the daily mayhem of life in IT. As we all know, writing and maintaining software is very difficult because so much can go wrong. As we saw in The Fundamental Problem of Software, this is largely due to the second law of thermodynamics introducing small bugs into software whenever software is written or changed, and also to the nonlinear nature of software that allows small software bugs to frequently produce catastrophic effects. In later postings on softwarephysics, I explained that the solution was to take a biological approach to software by "growing" code biologically instead of writing code. See Agile vs. Waterfall Programming and the Value of Having a Theoretical Framework for more on using an Agile biological approach to software development.
People-Problems Also Contribute to the Daily Mayhem of Life in IT
However, most of my efforts over the years have been focused on the physical constraints that contribute to the daily mayhem of life in IT. In Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse and Don't ASAP Your Life Away, I did briefly touch upon some people-problem issues. But as I watched my son graduate from the University of Illinois in Urbana with a B.S. in Computer Science and then begin a career in IT, I frequently found myself counseling him about the common people-problems that can occur in IT and that can also heavily contribute to the daily mayhem of life in IT. It may be true that the root cause of all IT problems can be traced back to the second law of thermodynamics and nonlinearity, but it is also true that nearly all of the real day-to-day problems that arise in IT come about from how the people around you deal with the second law of thermodynamics and nonlinearity, and from how you deal with them yourself! After all, if you could write perfect code off the top of your head that always worked the very first time and never failed while in Production, all of your IT problems would immediately go away. Unfortunately, so would your IT job.
If You Are an IT Professional You Need to Watch Jayme Edwards' The Healthy Software Developer Videos
So here is some fatherly advice from an IT old-timer. If you want to go the distance in IT, you need to watch the Healthy Software Developer videos that Jayme Edwards has produced for the mental health of the IT community. Jayme Edwards has produced a huge number of short videos that deal with all of the people-problems you will likely encounter in your IT career. More importantly, Jayme Edwards will also help you to understand your own tendencies to descend into self-destructive behaviors and thought patterns.
Jayme Edwards' The Healthy Software Developer YouTube Home Page:
https://www.youtube.com/c/JaymeEdwardsMedia/featured
Jayme Edwards' The Healthy Software Developer YouTube videos:
https://www.youtube.com/c/JaymeEdwardsMedia/videos
Everything Old is New Again
I only wish that these videos were available when my son first started out and that they were also available to me back in 1979 when I first started in IT. In fact, these videos are timeless because people-problems ultimately stem from human nature, and human nature does not change with time. For example, when I first transitioned into IT from geophysics, I used to talk to the old-timers about the good old days of IT back in the 1950s. They told me that when the operators began their shift on an old-time 1950s vacuum tube computer, the first thing they did was to crank up the voltage on the vacuum tubes to burn out the tubes that were on their last legs. Then they would replace the burned-out tubes to start the day with a fresh machine.
Figure 1 – In the 1950s, mainframe computers contained thousands of vacuum tubes that had to be constantly replaced as they burned out.
The UNIVAC I first came out in 1951 and was 25 feet by 50 feet in size. It contained 5,600 vacuum tubes, 18,000 crystal diodes and 300 electromechanical relays with a total memory of 12 KB.
Figure 2 – The UNIVAC I was very impressive on the outside.
Figure 3 – But the UNIVAC I was a little less impressive on the inside.
Figure 4 – Prior to 1955, huge mercury delay lines built from tubes of mercury that were about 3 inches long were used to store bits of computer memory. A single mercury delay line could store about 18 bits of computer memory as a series of sound waves that were continuously refreshed by quartz piezoelectric transducers at each end of the tube.
In 1955, magnetic core memory came along; it used tiny magnetic rings called "cores" to store bits. Four little wires had to be threaded by hand through each core in order to store a single bit, so although magnetic core memory was a lot cheaper and smaller than mercury delay lines, it was still very expensive and took up lots of space.
Figure 5 – Magnetic core memory arrived in 1955 and used a little ring of magnetic material, known as a core, to store a bit. Each little core had to be threaded by hand with 4 wires to store a single bit.
Figure 6 – Magnetic core memory was a big improvement over mercury delay lines, but it was still hugely expensive and took up a great deal of space within a computer.
Figure 7 – Finally in the early 1970s inexpensive semiconductor memory chips came along that made computer memory small and cheap.
They also told me about programming the plugboards of electromechanical Unit Record Processing machines back in the 1950s by physically rewiring the plugboards. The Unit Record Processing machines would then process hundreds of punch cards per minute by routing the punch cards from machine to machine in processing streams.
Figure 8 – In the 1950s, Unit Record Processing machines like this card sorter were programmed by physically rewiring a plugboard.
Figure 9 – The plugboard for a Unit Record Processing machine.
As you can see, IT technology was pretty primitive back in the 1950s, 1960s and the 1970s. But despite the primitive IT technology of the day, these old-timers also told me about many of the people-problems that they encountered back then too. And these old-timer people-problem stories sounded very familiar to me. In fact, most of the old-timer stories that I heard back in the 1980s started out after our current IT management made some new process changes. That's when I would hear something like, "Oh, we tried that back in 1955 and it made a huge mess...". So when listening to one of Jayme Edwards' videos, I frequently find myself recalling similar personal situations that I experienced back in the 1980s or similar situations from the 1950s that the old-timers had warned me about. Hardware and software may dramatically change with time, but the people-problems do not. After all, that is why people study history.
How To Go the Distance
Jayme Edwards stresses the importance of taking healthy measures so that you do not get burned out by IT. There are many things that you can do to prevent IT burn-out. Here is a good example:
Why Do So Many Programmers Lose Hope?
https://www.youtube.com/watch?v=NdA6aQR-s4U
Now I really enjoyed my IT career that spanned many decades, but having been around the block a few times, I would like to offer a little advice to those just starting out in IT, and that is to be sure to pace yourself for the long haul. You really need to dial it back a bit to go the distance. Now I don't want this to be seen as a negative posting about careers in IT, but I personally have seen way too many young bright IT professionals burn out due to an overexposure to stress and long hours, and that is a shame. So dialing it back a bit should be seen as a positive recommendation. And you have to get over thinking that dialing it back to a tolerable long-term level makes you a lazy worthless person. In fact, dialing it back a little will give you the opportunity to be a little more creative and introspective in your IT work, and maybe actually come up with something really neat in your IT career.
This all became evident to me back in 1979 when I transitioned from being a class 9 exploration geophysicist in one of Amoco's exploration departments to become a class 9 IT professional in Amoco's IT department. One very scary Monday morning, I was conducted to my new office cubicle in Amoco’s IT department, and I immediately found myself surrounded by a large number of very strange IT people, all scurrying about in a near state of panic, like the characters in Alice in Wonderland. After nearly 40 years in the IT departments of several major corporations, I can now state with confidence that most corporate IT departments can best be described as “frantic” in nature. This new IT job was a totally alien experience for me, and I immediately thought that I had just made a very dreadful mistake. Granted, I had been programming geophysical models for my thesis and for oil companies ever since taking a basic FORTRAN course back in 1972, but that was the full extent of my academic credentials in computer science. I immediately noticed some glaring differences between my two class 9 jobs in the same corporation. As a class 9 geophysicist, I had an enclosed office on the 52nd floor of the Amoco Building in downtown Chicago, with a door that actually locked, and a nice view of the north side of the Chicago Loop and Lake Michigan. With my new class 9 IT job at Amoco I moved down to the low-rent district of the Amoco Building on the 10th floor where the IT department was located to a cubicle with walls that did not provide very much privacy. Only class 11 and 12 IT professionals had relatively secluded cubicles with walls that offered some degree of privacy. Later I learned that you had to be a class 13 IT Manager, like my new boss, to get an enclosed office like I had back up on the 52nd floor. I also noticed that the stress levels of this new IT job had increased tremendously over my previous job as an exploration geophysicist. As a young geophysicist, I was mainly processing seismic data on computers for the more experienced geophysicists to interpret and to plan where to drill the next exploration wells. Sure there was some level of time-urgency because we had to drill a certain number of exploration wells each year to maintain our drilling concessions with foreign governments, but still, work proceeded at a rather manageable pace, allowing us ample time to play with the processing parameters of the software used to process the seismic data into seismic sections.
Figure 10 - Prior to becoming an IT professional, I was mainly using software to process seismic data into seismic sections that could be used to locate exploration wells.
However, the moment I became an IT professional, all of that changed. Suddenly, everything I was supposed to do became a frantic ASAP effort. It is very difficult to do quality work when everything you are supposed to do is ASAP. Projects would come and go, but they were always time-urgent and very stressful, to the point that it affected the quality of the work that was done. It seemed that there was always the temptation to simply slap something into Production to hit an arbitrary deadline, ready or not, and many times we were forced to succumb to that temptation. This became more evident to me when I moved from Applications Development to Middleware Operations, and I had to then live with the sins of pushing software into Production before it was quite ready for primetime. In recent decades, I have also noticed a tendency to hastily bring IT projects in through heroic efforts of breakneck activity, and for IT Management to then act as if that were actually a good thing after the project was completed! When I first transitioned into IT, I also noticed that I was treated a bit more like a high-paid clerk than a highly trained professional, mainly because of the time-pressures of getting things done. One rarely had time to properly think things through. I seriously doubt that most business professionals would want to hurry their surgeons along while under the knife, but that is not so for their IT support professionals.
You might wonder why I did not immediately run back to exploration geophysics in a panic. There certainly were enough jobs for an exploration geophysicist at the time because we were just experiencing the explosion of oil prices resulting from the 1979 Iranian Revolution. However, my wife and I were both from the Chicago area, and we wanted to stay there. In fact, I had just left a really great job with Shell in Houston to come to Amoco's exploration department in Chicago for that very reason. However, when it was announced about six months after my arrival at Amoco that Amoco was moving the Chicago exploration department to Houston, I think the Chief Geophysicist who had just hired me felt guilty, and he found me a job in Amoco's IT department so that we could stay in Chicago. So I was determined to stick it out for a while in IT until something better might come along. However, after a few months in Amoco's IT department, I began to become intrigued. It seemed as though these strange IT people had actually created their own little simulated universe that, seemingly, I could explore on my own. It also seemed to me that my new IT coworkers were struggling because they did not have a theoretical framework to work from, like the one I had had in Amoco's exploration department. That is when I started working on softwarephysics. I figured that if you could apply physics to geology, why not apply physics to software? I then began reading the IT trade rags to see if anybody else was doing similar research, and it seemed as though nobody else on the planet was thinking along those lines, which raised my level of interest in doing so even higher.
A Final Note
Although I may have spent the very first 25 years of my career working for oil companies, as a geophysicist by training, I am now very much concerned with the effects of climate change. Even if you are only a very young IT professional, climate change will likely have a dominant effect on the rest of your life and the rest of your IT career in the future. Climate change will most likely determine where you work, how you live, what you eat, what you drink and under what kind of government you live. For more on that, please see This Message on Climate Change Was Brought to You by SOFTWARE and Last Call for Carbon-Based Intelligence on Planet Earth. There are many more softwarephysics postings in this blog on how software may shape the rest of your life in the future. All of my postings on softwarephysics are available and are sorted by year via the links in the upper right-hand corner of each posting. The oldest postings deal mainly with explaining how softwarephysics can help you with your job as an IT professional. The newer postings deal more with how softwarephysics can be of help with some of the Big Problems, like the origin of carbon-based life and the future of Intelligence in our Universe.
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston
Tuesday, August 18, 2020
Why Our Universe is not a Simulation
In Digital Physics and the Software Universe and Is Our Universe a Computer Simulation?, we covered the claim that our Universe seems to behave like a large network of quantum computers calculating how to behave. Perhaps this is not true in a literal sense, as the realists would have us believe, but at least in a positivistic manner it makes for a good model that yields useful predictions about how our Universe behaves. Recall that positivism is an enhanced form of empiricism, in which we do not care about how things "really" are; we are only interested in how things are observed to behave. With positivism, physicists only seek out models of reality - not reality itself.
In this posting, I would like to showcase two of Dr. Matt O'Dowd's PBS Space Time videos that deal with this topic:
Are We Living in an Ancestor Simulation?
https://www.youtube.com/watch?v=hmVOV7xvl58
Are You a Boltzmann Brain?
https://www.youtube.com/watch?v=nhy4Z_32kQo
I love Dr. Matt O'Dowd's PBS Space Time videos because they tend to dive a bit deeper into the physics of the subject at hand than most lectures on the Internet.
Matt O'Dowd points out that it is nearly impossible to prove that we are not living in a computer simulation, or that we are not a Boltzmann Brain, because any objection to those ideas can be refuted by positing a sufficient level of computer technology, or a Boltzmann Brain with a sufficient level of imagination, to circumvent the objection. Thus, given an infinite number of objections to the idea that our Universe is some kind of Universe Simulation, there is also an infinite number of opposing arguments available to refute those objections. In that regard, both ideas seem to verge on solipsism. Solipsism is a philosophical idea from Ancient Greece in which your Mind is the whole thing, and the physical Universe is just a figment of your imagination. So I would like to thank you very much for thinking of me and bringing me into existence! But in an infinite Multiverse that has always existed, one would think that such things must eventually arise - see The Software Universe as an Implementation of the Mathematical Universe Hypothesis for more on that.
Similarly, it is nearly impossible to prove that we are the lone form of Intelligence in the Milky Way Galaxy. Just because we see no evidence of other Intelligences does not mean they are not out there. As you all know, I am obsessed with the fact that we see no signs of Intelligence in our Milky Way galaxy after more than 10 billion years of chemical evolution that should have brought forth a carbon-based or silicon-based Intelligence to dominate the galaxy. Such thoughts naturally lead to Fermi's Paradox first proposed by Enrico Fermi over lunch one day in 1950:
Fermi’s Paradox - If the universe is just chock full of intelligent beings, why do we not see any evidence of their existence?
I have covered many explanations in other postings such as: The Deadly Dangerous Dance of Carbon-Based Intelligence, A Further Comment on Fermi's Paradox and the Galactic Scarcity of Software, Some Additional Thoughts on the Galactic Scarcity of Software, SETS - The Search For Extraterrestrial Software, The Sounds of Silence the Unsettling Mystery of the Great Cosmic Stillness, Last Call for Carbon-Based Intelligence on Planet Earth and Swarm Software and Killer Robots. The explanations for Fermi's Paradox usually fall into one of two large categories:
1. They really are out there but for some reason, we just cannot detect them.
2. We are truly alone in the Universe or at least we are alone in the Milky Way galaxy.
The first category of explanations is rather weak because it means that all forms of Intelligence in the Milky Way galaxy are intentionally, or unintentionally, hiding for one reason or another. The second category implies that our Universe is not very friendly to Intelligence of any kind. True, Intelligence could be rather rare, but more likely, our Universe is just a very dangerous place for Intelligence.
Another Approach
But after watching Matt O'Dowd's Space Time videos, I had another thought. Even though we cannot rule out being in a Universe Simulation or explain why we see no signs of other Intelligences in the Milky Way galaxy, could we use these two unprovable conjectures to null each other out? This is an old trick in physics. Probably the most famous example is Richard Feynman's Path Integral formulation of quantum mechanics, which maintains that when a particle moves from point A to point B, it follows an infinite number of unknowable paths that interfere with each other both constructively and destructively. Feynman then showed mathematically how nearly all of these infinite paths cancel each other out with destructive interference. The only paths that really count are those that add up constructively, and those paths have the particle move directly from point A to point B along a straight line. Now, proving that particles move from point A to point B along straight lines may not seem very profound, but the Path Integral formulation also does a wonderful job of explaining how quantum mechanics works for much more interesting problems. For more on this see Matt O'Dowd's Space Time video:
Feynman's Infinite Quantum Paths
https://www.youtube.com/watch?v=vSFRN-ymfgE&t=1s
But the very best source for Richard Feynman's Path Integral formulation of quantum mechanics is, of course, Richard Feynman himself, in his very accessible book QED – The Strange Theory of Light and Matter (1985). In QED, Feynman explains that when a photon reflects off of a mirror, it actually reflects off every point along the mirror and not just off the center point in the middle, as we were taught in school. In the Path Integral formulation, each photon follows an infinite number of unknowable paths between the source and receiver. Each path has a quantum amplitude that can be denoted by a little arrow spinning in the complex plane. When the little quantum amplitude arrows line up in the same direction, as they do near the middle of the mirror in Figure 2, it means that those paths contribute the most to the ultimate arrival of the photon at the receiver by means of constructive interference. Notice that the little quantum amplitude arrows that are not close to the center of the mirror point in all different directions and, consequently, cancel each other out by means of destructive interference. Now, if you think that a single photon cannot possibly follow an infinite number of paths between the source and receiver, try removing some of the mirror! Simply cover part of the mirror surface with a set of parallel strips of black paint, as shown in Figure 3. Such a mirror with many little parallel stripes is called a diffraction grating. Now when a photon bounces off the mirror, the angle of incidence will no longer be equal to the angle of reflection. Instead, the angle at which photons reflect off the diffraction grating will depend on the frequency, or color, of the photons. You can see this effect when photons reflect off of a CD. The rainbow colors come from individual photons reflecting off the shiny flat strips of undisturbed aluminum between the tracks of data pits in the CD at different angles.
Figure 1 – In school we were all taught that light rays bounced off of a mirror midway between a source and a receiver and that the angle of incidence was equal to the angle of reflection.
Figure 2 – But in Richard Feynman's Path Integral formulation of quantum mechanics, individual photons do not do that. Instead, each photon follows an infinite number of unknown paths between the source and receiver.
Figure 3 – To prove that photons bounce off of every part of the mirror, simply cover part of the mirror surface with a set of parallel strips of black paint. Now when a photon bounces off the mirror, the angle of incidence will no longer be equal to the angle of reflection. Instead, the angle with which photons reflect off the mirror will depend on the frequency or color of the photons.
Figure 4 – You can see this effect when photons reflect off of a CD. The rainbow colors come from individual photons reflecting off the shiny flat strips of undisturbed aluminum between the tracks of data pits in the CD at different angles.
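If you would like to see the little arrows cancel for yourself, here is a crude numerical sketch in Python (using NumPy). It adds up the quantum amplitude for paths bouncing off each point along a mirror and shows that the sum over a narrow strip near the center is nearly the same as the sum over the whole mirror, while a strip of mirror far from the center contributes almost nothing. The geometry and wavelength are arbitrary toy values.

import numpy as np

wavelength = 0.0005                      # arbitrary toy units
source_x, source_h = -1.0, 1.0           # source position above the mirror
receiver_x, receiver_h = 1.0, 1.0        # receiver position above the mirror
x = np.linspace(-1.0, 1.0, 200_001)      # points along the mirror surface (y = 0)

# The path length source -> mirror point -> receiver sets the phase of each arrow.
path_length = (np.hypot(x - source_x, source_h) +
               np.hypot(receiver_x - x, receiver_h))
arrows = np.exp(2j * np.pi * path_length / wavelength)

whole_mirror = np.abs(arrows.sum())
central_strip = np.abs(arrows[np.abs(x) < 0.05].sum())
edge_strip = np.abs(arrows[(x > 0.70) & (x < 0.80)].sum())

print(f"|sum of arrows over the whole mirror|  = {whole_mirror:10.1f}")
print(f"|sum of arrows over the central strip| = {central_strip:10.1f}")
print(f"|sum of arrows over an edge strip|     = {edge_strip:10.1f}")
# The strip near the stationary (shortest) path dominates; far from the center
# the arrows point in all directions and largely cancel, just as in Figure 2.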
Do the Universe Simulation Hypothesis and Fermi's Paradox Cancel Each Other Out?
Now let's try to use Richard Feynman's Path Integral formulation of quantum mechanics to deal with the nearly infinite number of paths that the Simulation Hypothesis and Fermi's Paradox can follow. Suppose that our Universe actually were some kind of Universe Simulation that we cannot distinguish from a "real" Universe because whatever test we perform on the Universe Simulation yields the same result that a "real" Universe would yield. This allows for an infinite number of possibilities. Similarly, the number of possible explanations for Fermi's Paradox is nearly infinite too. Now let's add up all of the little quantum amplitude arrows for each. For example, if our Universe really were a Universe Simulation, then it should most likely resemble the most common kind of Universe Simulation. And I would suggest that in the most common kind of Universe Simulation we should find a Universe dominated by Intelligence, because we have Intelligence; otherwise, this Universe Simulation is hugely wasteful and filled with way too much unnecessary cosmic real estate. Now, if a Universe Simulation did not contain any Intelligence at all, we would not be here pondering it, and it could be any size at all. This is reminiscent of Brandon Carter’s famous Weak Anthropic Principle (1973):
The Weak Anthropic Principle - Intelligent beings will only find themselves existing in universes capable of sustaining intelligent beings.
but with a twist:
The Simulated Weak Anthropic Principle - Intelligent simulations in a Universe Simulation will only find themselves existing in Universe Simulations capable of sustaining intelligent simulations that dominate the Universe Simulation because they are intelligent and capable of doing so. Thus, Universe Simulations should never have a Fermi's Paradox.
This hypothesis means that we should only suspect that we are indeed simulations in a Universe Simulation when we do not have a Fermi's Paradox. Since our Universe does have a Fermi's Paradox, it means that our Universe is most likely not a Universe Simulation! Granted, given an infinite number of Universe Simulations, a very few will contain a sole Intelligence observing an otherwise sterile Universe Simulation. But far fewer will contain a sole Intelligence surrounded by 7 billion other simulated Intelligences all residing by themselves on a single planet in a vast Universe Simulation. On the other hand, if we did observe a Universe just chock full of intelligent beings running the joint, we should be very suspicious of being in a Universe Simulation. First, because most Universe Simulations should either contain no Intelligence at all or should contain huge amounts of Intelligence spread out all over the place. Secondly, if we really did find ourselves in a Universe dominated by Intelligence, such a Universe would have levels of simulated technology capable of creating Universe Simulations in an infinite loop!
Comments are welcome at scj333@sbcglobal.net
To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/
Regards,
Steve Johnston