Sunday, September 13, 2015

The Danger of Believing in Things

During the course of your career as an IT professional, you will undoubtedly come across instances when your IT Management institutes new policies that seem to make no sense at all. Surprisingly, you will also find that many of your coworkers secretly agree that the new policies actually make things worse. Yet no one will openly question them. Much of this stems from basic Hierarchiology - see Hierarchiology and the Phenomenon of Self-Organizing Organizational Collapse for details. But some of it also stems from the fact that much of human thought is seriously deficient in rigor because it is largely based upon believing in things and is therefore non-critical in nature. It seems that as human beings we just tend not to question our own belief systems or the belief systems that are imposed upon us by the authorities we have grown up with. Instead, we tend to seek out people who validate our own belief systems and to just adapt as best we can to the belief systems that are imposed upon us. Politicians are keenly aware of this fact, as is evidenced by the 2016 presidential election cycle, which is now in full swing in the United States. Politicians simply seek to validate the belief systems of enough people to get elected to office.

In The Great War That Will Not End I explained that this failure in critical thinking arises primarily because our minds are infected with memes - forms of self-replicating information bent on replicating at all costs - and I discussed how Susan Blackmore pointed out in The Meme Machine (1999) that we are not so much thinking machines as we are copying machines. Susan Blackmore maintains that memetic-drive was responsible for creating our extremely large brains, and our languages and cultures as well, in order to store and spread memes more effectively. So our minds evolved to believe in things, which most times is quite useful, but it also has its downsides. For example, there is a strong selection pressure for humans to unquestioningly believe that if they accidentally let go of a branch while hiding in a tree waiting to ambush some game, they will accelerate to the ground and sustain a nasty fall. In such a situation there is no evolutionary advantage for an individual to enter into a moment of self-reflection to question their belief system in regards to the nature of falling. Instead, a quick knee-jerk reaction to grasp at any branch at all in a panic is called for. Unfortunately, this reflex tendency to unquestioningly believe in things seems to extend to most of human thought, and that can get us into lots of trouble.

In How To Think Like A Scientist I explained that there were three ways to gain knowledge:

1. Inspiration/Revelation
2. Deductive Rationalism
3. Inductive Empiricism

and that the Scientific Method was one of the very few human protocols that used all three.

The Scientific Method
1. Formulate a set of hypotheses based on Inspiration/Revelation with a little empirical inductive evidence mixed in.

2. Expand the hypotheses into a self-consistent model or theory by deducing the implications of the hypotheses.

3. Use more empirical induction to test the model or theory by analyzing many documented field observations or by performing controlled experiments to see if the model or theory holds up. It helps to have a healthy level of skepticism at this point. As the philosopher Karl Popper pointed out, you cannot prove a theory to be true; you can only prove it to be false. Galileo pointed out that the truth is not afraid of scrutiny - the more you pound on the truth, the more you confirm its validity.

Some Thoughts on Human Thinking
False memes certainly do not like the above process very much since it tends to quickly weed them out. Instead, false memes thrive when people rely primarily on the Revelation part of step 1 in the process, and usually the Revelation comes from somebody else revealing an appealing meme to an individual. Again, appealing memes are usually memes that appeal to the genes, and they typically have something to do with power, status, wealth or sex. The downside of relying primarily on Revelation for knowledge is that most times it is just a mechanism for a set of memes to replicate in a parasitic manner. Since we are primarily copying machines, and not thinking machines, the Inspiration part of step 1 does not happen very often. Now most forms of human thought do make a half-hearted attempt at step 2 in the process, by deducing some of the implications of the hypotheses that came from the Inspiration/Revelation step, but oftentimes this does not lead to a self-consistent model or theory. In fact, many times such deductions lead to a model or theory that is self-contradictory in nature, and surprisingly, this does not seem to bother people much of the time. For some reason, people tend to just take the good with the bad in such cases, stressing the value of the good parts of their theory or model while discounting the parts that appear to be a bit self-contradictory. Finally, step 3 seems to be the step most frequently skipped in human thought. People rarely try to verify their models or theories with empirical evidence. That is probably because step 3 in the process requires the most work and rigor. Collecting data in an unbiased and rigorous manner is really difficult and frequently can take many years of hard work. Hardly anybody, other than observational and experimental scientists, is willing to make that sacrifice to support their worldview. In some cases people might collect some supporting evidence, like a lawyer trying to build a strong case for a client, while discarding any evidence that contradicts their model or theory, but even that is a rarity. Besides, if you have a really good idea that came to you via Inspiration/Revelation, and that makes sense for the most part when you deduce its implications, why bother checking it? Certainly, we can just have faith in it because it must be right, especially if it is a beautiful set of memes that also leads to power, status, wealth or sex.

The Trouble With Human Thought
If you have been following this blog closely, you might think that next I am going to come down hard on political and religious meme-complexes as examples of self-replicating information that do not follow the Scientific Method, but I am not going to do that. Personally, I view political and religious meme-complexes in a very positivistic manner in that I only care about the philosophies that they espouse. If they espouse philosophies that help mankind to rise above the selfish, self-serving interests of our genes and memes through the values of the Enlightenment - evidence-based reasoning, respect for the aspirations of the individual, egalitarianism, concern for the welfare of mankind in general, tolerance of others, the education of the general public, and the solving of problems through civil discourse and democracy - then they are okay with me. Otherwise, I do not have much use for them. Religious meme-complexes invariably have very primitive mythological cosmologies, but cosmology is best handled by the sciences anyway, and that does not negate any of their more positive values.

Instead, I am going to raise concerns about one of the true loves of my life - physics itself. I just finished reading Not Even Wrong - The Failure of String Theory and the Search for Unity in Physical Law (2006) by Peter Woit. Unless you have a Ph.D. in physics and have recently done a postdoc heavily steeped in quantum field theory, I would suggest first reading Lee Smolin's very accessible The Trouble with Physics (2006), which raises the same concerns. Consequently, I would say that The Trouble with Physics best provides a cautionary tale for the general public and for physics undergraduates, while Not Even Wrong performs this same function for graduate students in physics or physicists outside of string theory research. Both books provide a very comprehensive deep-dive into the state of theoretical physics today. I certainly do not have the space here to outline all of the challenges and difficulties that theoretical physics faces today with string theory because that takes at least an entire book for a genius like Lee Smolin or Peter Woit, but here it is in a nutshell. Basically, the problem is this: what do you do when theoretical physics has outrun the technology needed to verify its theories?

It all goes back to the 1950s and 1960s when particle physicists were able to generate all sorts of new particles out of the vacuum by smashing normal protons, antiprotons, electrons and positrons together at high energies in particle accelerators. Because the colliding particles had high energies, it was possible to generate all sorts of new particles using Einstein's E = mc². With time, it was discovered that all of these hundreds of new particles could be characterized either as fundamental particles that could not be broken apart with our current technologies or as composite particles consisting of fundamental particles. Thanks to quantum field theory, we came up with the Standard Model in 1973, which arranged a set of fundamental particles into patterns of mass, charge, spin and other physical characteristics.

Figure 1 – The particles of the Standard Model are fundamental particles that we cannot bust apart with our current technologies, perhaps because it is theoretically impossible to do so with any technology.

Again, in quantum field theories everything is a field that extends over the entire Universe. So there are things like electron fields, neutrino fields, quark fields, gluon fields and more that extend over the entire Universe. For a brief introduction to quantum theory see: Quantum Software, SoftwareChemistry, and The Foundations of Quantum Computing. The quantum wavefunctions of these fundamental fields determine the probability of finding them in certain places doing certain things, and when we try to measure one of these quantum fields, we see the fundamental particle instead. Unfortunately, there are several problems with the Standard Model and the quantum field theories that explain it. Firstly, the Standard Model seems to be just too complicated. Recall that each of the above fundamental particles also has an antimatter twin, like the negatively charged electron having a positively charged twin positron with the same mass, so there are a very large number of fundamental particles, and these fundamental particles are also observed to behave in strange ways. The Standard Model also has nothing to say about the force of gravity, so it only covers three of the four known forces - the electromagnetic force, the strong nuclear force and the weak nuclear force. The Standard Model also has about 18 numbers, like the mass of the electron, that have to be plugged into it as parameters. It would be nice to have a theory that explains those values from fundamental principles. The Standard Model is also based upon quantum field theories that struggle with the problem of infinities. Let me explain.

Physicists love to use a mathematical technique called perturbation theory to solve problems that are just too hard to solve with pure brute force. Rather than solving the problem directly, they expand the problem into a series of terms that add up to the final solution. The hope is that none of the terms in the expansion series will be infinite and that adding all of the terms of the series together will also not lead to an infinite sum. For example, suppose you want to calculate the value of π. Now it is known that:

π/4 = 1/1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + 1/13 - 1/15 + 1/17 - ...

where 4 is the first even integer raised to the first even power: 4 = 2².

If you compute π/4 on your calculator you get:

π/4 = 0.7853981...

and if your calculator were powerful enough, the answer would continue on for an infinite number of digits. So let’s see how well the above series works:

1/1 = 1.0000000...
1/1 - 1/3 = 0.6666666...
1/1 - 1/3 + 1/5 = 0.8666666...
1/1 - 1/3 + 1/5 - 1/7 = 0.7238095...
1/1 - 1/3 + 1/5 - 1/7 + 1/9 = 0.8349206...
1/1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 = 0.7440115...
1/1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + 1/13 = 0.8209346...
1/1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + 1/13 - 1/15 = 0.7542679...
What we see is that the approximation of π/4 gets better and better as we add more terms and that each correction term gets smaller and smaller as we continue on, so the approximation converges to the true value of π/4 in the limit of an infinite number of terms. Also, none of the individual terms in the series is infinite, like a term of 1/0 would be, so this is a very useful and well-behaved series that approximates:

π/4 = 0.7853981...
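
To make the arithmetic concrete, here is a minimal sketch in Python (my own illustration, not from the original post) that reproduces the partial sums above and compares them with the true value of π/4:

    import math

    # Sum the first 8 terms of the series 1/1 - 1/3 + 1/5 - 1/7 + ...
    total = 0.0
    for n in range(8):
        term = (-1)**n / (2*n + 1)      # generates +1/1, -1/3, +1/5, -1/7, ...
        total += term
        print(f"After {n + 1} terms: {total:.10f}")

    print(f"True value of pi/4: {math.pi / 4:.10f}")

Running the sketch reproduces the partial sums in the table above and shows how slowly the series closes in on 0.7853981....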

The problem with the Standard Model is that it relies upon quantum field theories whose approximation series are not so well behaved and do have infinite terms in their perturbation theory series expansions. The way to get around this problem is a Nobel Prize-winning mathematical technique known as renormalization. With renormalization, one rearranges the series containing infinite terms in such a way that the positive and negative infinities cancel out. It would be as if we altered our series above to:

π/4 = 1/1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + 1/13 - 1/15 + 1/17... + 1/0 - 1/0 + 1/0 - 1/0 ...

Sure, it has some terms that alternate between +∞ and -∞, but we can arrange them so that they cancel each other out:

π/4 = 1/1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + 1/13 - 1/15 + 1/17... + (1/0 - 1/0) + (1/0 - 1/0) ...

So that allows us to approximately calculate π/4 by just adding up some of the most important terms in the series while letting the infinite terms cancel each other out:

π/4 ≈ 1/1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + 1/13 - 1/15 = 0.7542679...
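
A computer cannot literally subtract infinities - in floating-point arithmetic, ∞ - ∞ is undefined - but we can mimic the spirit of renormalization with a toy Python sketch of my own, using a huge regulator term 1/ε to stand in for each divergent 1/0 term. The point of the sketch is that the order of cancellation is everything:

    eps = 1e-16
    big = 1.0 / eps                     # ~1e16, standing in for a divergent 1/0 term

    finite = sum((-1)**n / (2*n + 1) for n in range(8))   # 0.7542679...

    # Naive ordering: the huge terms enter the running total one at a time,
    # and the finite part is swamped and lost to floating-point round-off:
    naive = ((((finite + big) + big) - big) - big)
    print(naive)                        # prints 0.0 - the answer is destroyed

    # "Renormalized" ordering: pair each +1/eps with a -1/eps first, so the
    # huge pieces cancel before they ever touch the finite sum:
    renormalized = finite + (big - big) + (big - big)
    print(renormalized)                 # prints 0.7542679... - the answer survives

Real renormalization is, of course, vastly more subtle than this, but the sketch captures the basic trick: rearrange the terms so that the divergent pieces cancel each other out before they can contaminate the finite answer.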

The main reason for this battle with infinities in quantum field theories is that they model the fundamental particles as point sources with a dimension of zero, and therefore a zero extension in space. This gets us into trouble even in classical electrodynamics because the electric field of a point charge goes as:

E ~ q/R²

where q is the amount of electric charge and R is the distance from it. The equation states that as R goes to 0 the electric field E goes to +∞, so at very small distances the electric field becomes unmanageable.
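
A few lines of Python (again, just an illustrative sketch of my own, in arbitrary units) show how quickly the field of an idealized point charge blows up as R shrinks toward zero:

    q = 1.0
    for R in [1.0, 0.1, 0.01, 0.001, 1e-6]:
        E = q / R**2                         # field strength in arbitrary units
        print(f"R = {R:g}  ->  E ~ {E:g}")   # E grows without bound as R -> 0

Every factor of 10 closer to the point charge makes the field 100 times stronger, so there is no limit to how large E becomes as R goes to 0.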

Getting back to the large number of fundamental particles in the Standard Model, we now find that the current state of particle physics is very much like the state of physics back in the year 1900, with a Periodic Table of fundamental elements that had been worked out by the chemists with much hard work during the 19th century. At that time, the question naturally was: what were these fundamental elements made of, and what made them stick together into compounds and molecules the way they did? Like the Standard Model of particle physics, it seemed like the Periodic Table of the elements was just way too complicated. There had to be some more fundamental theory that explained what all of those fundamental elements were made of and how they worked together to form compounds and molecules. Early in the 20th century, that explanation was provided by the atomic theory of quantum mechanics (1926), together with observational data from the low-energy atom smashers of the day. All of that revealed that the elements were made of a central nucleus consisting of protons and neutrons surrounded by clouds of electrons in a quantum mechanical manner. Luckily, in the 20th century, we had the technology required to build the low-energy atom smashers that verified what quantum mechanics had predicted.

Figure 2 – Early in the 20th century physics was able to figure out what the fundamental elements of the Periodic Table were made of using the low-energy atom smashers of the day that validated what quantum mechanics had predicted.

String theory got started in the late 1960s as an explanation for the strong nuclear force, but since quantum field theory did such a great job of that, work in the field was abandoned until the early 1980s. Then in 1984, the first string theory revolution took place in physics, when it was found that string theory solved a key mathematical problem (the cancellation of certain troublesome anomalies), which led theorists to think that string theory might be a candidate to explain the Standard Model. The basic idea was that the fundamental particles were actually made of very small vibrating strings and that the large number of fundamental particles could all be generated by strings in different vibrational modes.

Figure 3 – String theory maintains that the fundamental particles of the Standard Model all arise from strings in different vibrational modes.

Because in string theory the fundamental particles of the Standard Model are made from vibrating strings and no longer have a dimension of zero, the difficulties with infinities vanished. String theory also provided for the generation of particles called gravitons that could carry the gravitational force, and that plugged the big hole in the traditional Standard Model that only covered the electromagnetic, strong nuclear and weak nuclear forces. So it seemed that string theory was a very promising way to fix the problems of the Standard Model. However, string theory also came with some problems of its own. For example, the vibrating strings had to exist in a 10-dimensional world. The Universe as we know it only has four dimensions - three spatial dimensions and one dimension of time. String theorists proposed that the unseen dimensions were in fact present, but that they were so small that we could not detect them with our current level of technology. Originally it was also hoped that a unique mathematical framework would emerge for string theory that would yield the Standard Model and all 18 of its numerical parameters. The quantum field theories of the Standard Model would then be found to be low-energy approximations of this string theory mathematical framework. However, that did not happen; no unique mathematical framework for string theory was ever developed, because it was found that there were a nearly infinite number of possible geometries for the 10-dimensional spaces of string theory, and each of those geometries profoundly affected what the vibrating strings would produce. Now although string theory never really produced a unique mathematical framework that yielded the Standard Model and its 18 parameters, the fact that some form of string theory seemingly could produce just about any desired result meant that string theory would probably never have much predictive capability, and thus would not be falsifiable. Instead, string theorists proposed that string theory now offered a cosmic landscape of possible universes - see Leonard Susskind’s The Cosmic Landscape (2006).

This idea of a cosmic landscape also goes hand in hand with some of the current thoughts in cosmology, which contend that our Universe is just a single member of an infinite multiverse with no beginning and no end. In such a model, the multiverse endures forever and has always existed in a state of self-replication. In 1986 Andrei Linde formalized this with his Eternal Chaotic Inflation model, which proposes that the multiverse is in an unending state of inflation and self-replication that is constantly generating new universes where inflation ceases. When inflation ceases in a portion of the multiverse, a tiny isolated universe is formed with its own vacuum energy and a unique topology for its 10 dimensions. The way strings vibrate in the newly formed 10-dimensional universe then determines the Standard Model of that universe - see The Software Universe as an Implementation of the Mathematical Universe Hypothesis for details. That solves the problem of the fine-tuning of the Standard Model of our Universe, which seems to be fine-tuned so that intelligent beings can exist to observe it. In such a model of the multiverse, our Standard Model has the parameters that it has because of a selection bias known as the Anthropic Principle. If there are an infinite number of universes in the multiverse, each with its own particular way of doing string theory, then intelligent beings will only find themselves in those universes that can sustain intelligent beings. For example, it has been shown that if the parameters of our Standard Model were slightly different, our Universe would not be capable of supporting intelligent beings like ourselves, so we would not be here contemplating such things. Think of it this way: the mathematical framework of Newtonian mechanics and Newtonian gravity lets us calculate how the planets move around the Sun, but it does not predict that the Earth will be found to be 93 million miles from the Sun. The reason the Earth is 93 million miles from the Sun and not 33 million miles is that, if the Earth were 33 million miles from the Sun, we would not be here wondering about it. That is just another example of a selection bias in action, similar to the Anthropic Principle. The Cosmic Landscape model does fit nicely with Andrei Linde’s Eternal Chaotic Inflation model in that it does offer up the possibility of a multiverse composed of an infinite number of universes, all running with different kinds of physics. And Eternal Chaotic Inflation gains support from the general Inflationary model, which has quite a bit of supporting observational data from CBR (Cosmic Background Radiation) studies that seem to confirm most of the predictions made by the general idea of Inflation. Thus Eternal Chaotic Inflation seems like a safe bet because, theoretically, once Inflation gets started, it is very hard to stop. However, Eternal Chaotic Inflation does not need string theory to generate a Cosmic Landscape. It could do so with any other theory that explains what happens when a new universe forms out of the multiverse with some particular vacuum energy.
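
Returning to the Anthropic Principle for a moment, the logic of such a selection bias is easy to demonstrate with a toy Monte Carlo sketch in Python (my own illustration - the parameter and the habitable band are completely made up). We generate a large number of random universes, each with a single random "Standard Model parameter", and suppose that observers can only arise when that parameter happens to fall within a narrow habitable band:

    import random

    random.seed(42)
    universes = [random.uniform(0.0, 1.0) for _ in range(100000)]

    # Suppose observers can only arise when the parameter lands in a narrow band:
    habitable = [u for u in universes if 0.49 < u < 0.51]

    print(f"Fraction of universes with observers: {len(habitable) / len(universes):.3f}")
    print(f"Parameter values observers ever see: {min(habitable):.3f} to {max(habitable):.3f}")

    # Every observer measures a parameter suspiciously close to 0.5 and might
    # conclude that their universe was "fine-tuned" - yet nothing tuned anything.

Only about 2% of the simulated universes ever contain observers, but every observer who looks will find a parameter that appears exquisitely fine-tuned for their own existence.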

String Theory Difficulties
It has now been more than 30 years since the first string theory revolution of 1984 unfolded. But during all of those decades string theory has not been able to make a single verifiable prediction and has not even come together to form a single mathematical framework that explains the Standard Model of particles that we do observe. Despite many years of effort by the world's leading theoretical physicists, string theory still remains a promising model trying to become a theory. In defense of the string theorists, we do have to deal with the problem of what theoretical physics should do when it has outrun the relatively puny level of technology that we have amassed over the past 400 years. Up until recently, theoretical physics has always had the good fortune of being able to be tested and validated by observational and experimental data that could be obtained with comparatively little cost. But there is no reason why that should always be so, and perhaps we have finally come up against that technological limit. However, the most disturbing thing about string theory is not that it has failed to develop into a full-blown theory that can predict things that can be observed. That might just be theoretical physics running up against the limitations of our current state of technology. The most disturbing aspect of string theory is sociological in nature. It seems that over the past 30 years string theory has become a faith-based endeavor with a near-religious zeal that has suppressed nearly all other research programs in theoretical physics that attempt to explain the Standard Model or attempt to develop a theory of quantum gravity. In that regard string theory has indeed become a meme-complex of its own, bent on replicating at all costs, and like most religious meme-complexes, the string theory meme-complex does not look kindly upon heretics who question the memes within its meme-complex. In fact, it is nearly impossible these days to find a job in theoretical physics if you are not a string theorist. Both The Trouble with Physics and Not Even Wrong go into the gory details of the politics in academia regarding the difficulties of obtaining a tenured position in theoretical physics these days. All IT professionals can certainly relate to this based upon their own experiences with corporate politics. Since both academia and corporations have adopted hierarchical power structures, it is all just an example of Hierarchiology in action. In order to get along, you have to go along, so things that do not make sense but are part of the hierarchical groupthink must be embraced if one is to succeed in the hierarchy.

So theoretical physics now finds itself in a very strange state for the first time in 400 years because string theory is seemingly like a deity that leaves behind no empirical evidence of its existence and must be accepted on faith alone. That is a very dangerous thing for physics because we already know that the minds of human beings evolved to believe in such things. Mankind already has a large number of competing deities based upon faith, many of which have already gone extinct, proving that they cannot all be real in the long run. Adding one more may not be the best thing for the future of theoretical physics. Granted, most programs in theoretical physics must necessarily begin as speculative conjectures and should be given the latitude to explore the unknown, initially unencumbered by the limitations of the available empirical data of the day, and string theory is no exception. After all, something like string theory may turn out to be the answer. We just don't know at this time. But we do know that for the good of science, we should not allow string theory to crowd out all other competing research programs.

Déjà vu all over again
It seems that theoretical physics is currently "stuck" because it is lacking the observational and experimental data that it needs to proceed. Both Peter Woit and Lee Smolin suggest that what theoretical physics needs today is to start using other means to gain the data that it needs to progress. For example, perhaps going back to observing high-energy cosmic rays would be of use. Some protons slam into the upper atmosphere of the Earth with the energy of a baseball pitched at 100 miles/hour, or of a bowling ball dropped on your toe from waist high. Such protons have energies that are many orders of magnitude greater than the energy of the proton collisions at the LHC. Using the CBR (Cosmic Background Radiation) photons that have traveled for 13.8 billion years since the Big Bang might also be of use to bring us closer to the very high energies of the early universe.

Being theoretically "stuck" because the way you normally collect data no longer suffices, reminds me very much of the state of affairs that classical geology found itself in back in 1960, before the advent of plate tectonics. I graduated from the University of Illinois in 1973 with a B.S. in physics, only to find that the end of the Space Race and a temporary lull in the Cold War had left very few prospects for a budding physicist. So on the advice of my roommate, a geology major, I headed up north to the University of Wisconsin in Madison to obtain an M.S. in geophysics, with the hope of obtaining a job with an oil company exploring for oil. These were heady days for geology because we were just emerging from the plate tectonics revolution that totally changed the fundamental models of geology. The plate tectonics revolution peaked during the five year period 1965 – 1970. Having never taken a single course in geology during all of my undergraduate studies, I was accepted into the geophysics program with many deficiencies in geology, so I had to take many undergraduate geology courses to get up to speed in this new science. The funny thing was that the geology textbooks of the time had not yet had time to catch up with the new plate tectonics revolution of the previous decade, so they still embraced the “classical” geological models of the past which now seemed a little bit silly in light of the new plate tectonics model. But this was also very enlightening. It was like looking back at the prevailing thoughts in physics prior to Newton or Einstein. What the classical geological textbooks taught me was that over the course of several hundred years, the geologists had figured out what had happened, but not why it had happened. Up until 1960 geology was mainly an observational science relying upon the human senses of sight and touch, and by observing and mapping many outcrops in detail, the geologists had figured out how mountains had formed, but not why.

In classical geology, most geomorphology was thought to arise from local geological processes. For example, in classical geology, fold mountains formed off the coast of a continent when a geosyncline formed because the continental shelf underwent a dramatic period of subsidence for some unknown reason. Then very thick layers of sedimentary rock were deposited into the subsiding geosyncline, consisting of alternating layers of sand and mud that turned into sandstones and shales, intermingled with limestones that were deposited from the carbonate shells of dead sea life floating down or from coral reefs. Next, for some unknown reason, the sedimentary rocks were laterally compressed into folded structures that slowly rose from the sea. More horizontal compression then followed, exceeding the ability of the sedimentary rock to deform plastically, resulting in thrust faults forming that uplifted blocks of sedimentary rock even higher. As compression continued, some of the sedimentary rocks were forced down to great depths within the Earth, where they were placed under great pressures and temperatures. These sedimentary rocks were then far from the thermodynamic equilibrium of the Earth’s surface where they had originally formed, and thus the atoms within recrystallized into new metamorphic minerals. At the same time, for some unknown reason, huge plumes of granitic magma rose from deep within the Earth’s interior as granitic batholiths. Then over several hundred million years, the overlying folded sedimentary rocks slowly eroded away, revealing the underlying metamorphic rocks and granitic batholiths, allowing human beings to cut and polish them into pretty rectangular slabs for the purpose of slapping them up onto the exteriors of office buildings and onto kitchen countertops. In 1960, classical geologists had no idea why the above sequence of events, producing very complicated geological structures, seemed to happen over and over again many times over the course of billions of years. The most worrisome observational fact had to do with the high levels of horizontal compression that were necessary to produce the folding and faulting. The geologists of the time were quite comfortable with rock units moving up and down thousands of feet due to subsidence and uplift, but they did not have a good explanation for rock units moving sideways by many miles, and that was necessary to explain the horizontal compression that caused the folding and faulting of strata. One idea was that after geosynclines subsided, they were uplifted, and the sedimentary rock they contained then slipped backward against the continental strata, causing the horizontal compression that led to the folding and faulting, but that seemed a bit far-fetched, and it still left unanswered the question of where all of this subsidence and uplift came from in the first place. Fortunately, with the advent of plate tectonics (1965 – 1970), all was suddenly revealed. It was the lateral movement of plates on a global scale that made it all happen. With plate tectonics, everything finally made sense. Fold mountains did not form from purely local geological factors in play. There was the overall controlling geological process of global plate tectonics making it happen. For a quick review of this process, please take a look at the short video below:

Fold Mountains
http://www.youtube.com/watch?v=Jy3ORIgyXyk

Figure 4 – Fold mountains occur when two tectonic plates collide. A descending oceanic plate first causes subsidence offshore of a continental plate, which forms a geosyncline that accumulates sediments. When all of the oceanic plate between two continents has been consumed, the two continental plates collide and compress the accumulated sediments in the geosyncline into fold mountains. This is how the Himalayas formed when India crashed into Asia.

Now the plate tectonics revolution was really made possible by the availability of geophysical data. It turns out that most of the pertinent action of plate tectonics occurs under the oceans, at the plate spreading centers and subduction zones, far removed from the watchful eyes of geologists in the field with their notebooks and trusty hand lenses. Geophysics really took off after World War II, when universities were finally able to get their hands on cheap war surplus gear. By mapping variations in the Earth’s gravitational and magnetic fields and by conducting deep oceanic seismic surveys, geophysicists were finally able to figure out what was happening at the plate spreading centers and subduction zones. Actually, the geophysicist and meteorologist Alfred Wegener had figured this all out in 1912 with his theory of Continental Drift, but at the time Wegener was ridiculed by the geological establishment. You see, Wegener had been an arctic explorer and had noticed that sometimes sea ice split apart, like South America and Africa, only later to collide again to form mountain-like pressure ridges. Unfortunately, Wegener froze to death in 1930 trying to provision some members of his last exploration party to Greenland, never knowing that one day he would finally be vindicated.

Conclusion
The sordid details of Alfred Wegener's treatment by the geological community of the day and the struggle that plate tectonics went through for acceptance by that community show the value of tolerating differing viewpoints when a science is theoretically "stuck". That history also shows the value of seeking empirical data from non-traditional sources when the traditional sources of data have been exhausted. I think there is a valuable lesson here for theoretical physics to heed, and for IT professionals as well when confronted with similar issues. The key point to remember is that it is always very dangerous to unquestioningly believe in things. Instead, we should maintain a level of confidence in things that is never quite 100%, and always keep a healthy level of skepticism that we might just have it all wrong. For as Richard Feynman always reminded us, “The most important thing is to not fool yourself because you are the easiest one to fool.”

Comments are welcome at scj333@sbcglobal.net

To see all posts on softwarephysics in reverse order go to:
https://softwarephysics.blogspot.com/

Regards,
Steve Johnston
