From the beginning of time, people have been predisposed to accept the latest distortion of reality and pass it along to others. Deep in the human psyche there is a need to have answers, and we are all too willing to accept the first one that comes along. Science has slowly progressed over the years, and some of the "knowledge" of the past has been shown to be off the mark. But science itself has been ritualized to the point that many people think the use of such terms as studies, correlations, percentages and models is sufficient for a scientific analysis. If a report contains these terms, it is given a great deal of credibility.
The public has been so willing to use these criteria for acceptance that scientists are even skipping over the more rigorous tests required for the establishment of knowledge. After all, one of the main objectives of scientists is to be believed, and an uneducated public doesn't challenge them on the limitations of their work. And, of course, the reporting media don't have a critical eye; all they want is a story. This is human nature at work!
Let's take some types of "scientific" analyses and show how they can produce misleading conclusions:
A retrospective study uses happenstance data that has been generated by uncontrolled events. Statistical correlations are found by determining which events are usually associated with one another. These correlations are often treated as representations of cause and effect.
For example, a study might find that people with an active lifestyle live longer than those with a more sedentary lifestyle. In a typical study of this type, participants are selected randomly and questioned about their lifestyle and state of health. When the media report the results of this study, they will point out how we should all get more exercise and not be couch potatoes if we want to have a long life. The assumption is that exercise causes better health.
This may be true, but to cite this study as proof of the benefits of exercise is misleading. The weak link in this study is that the participants have chosen for themselves what lifestyle they are living. Wouldn't you expect that people in poor health might choose a sedentary lifestyle? Or, quite possibly, they may not have a choice. The study doesn't tell us if this is a factor. Poor health may cause a sedentary lifestyle!
How can we determine cause and effect with more certainty? In the above example, we would pick people at random and separate them into two groups (healthy and unhealthy). Then we would force half of each group to live an active life and force the other half of each group to live a sedentary life. It would then be clear whether exercise improves health. This is what is called an experimental design; in medicine the term would be a clinical trial. The bottom line is that in order to determine cause and effect, you must intervene to prevent random associations from occurring. Random associations have many potential causes and effects, many of which are not even contemplated.
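The difference between self-selection and intervention can be sketched with a toy simulation. All the numbers below are made up purely for illustration: underlying health drives both the choice to be active and longevity, so in the "retrospective" world exercise and long life correlate even though, by construction, exercise does nothing. Randomly assigning the lifestyle, as an experimental design would, makes the apparent effect vanish.

```python
import random

random.seed(0)
N = 20000  # simulated participants per study

def survival_gap(assigned_by_coin_flip):
    """Longevity-rate difference between the active and sedentary groups."""
    outcomes = {True: [], False: []}
    for _ in range(N):
        healthy = random.random() < 0.5
        if assigned_by_coin_flip:
            # Experimental design: lifestyle imposed at random.
            active = random.random() < 0.5
        else:
            # Retrospective study: healthy people tend to choose activity.
            active = random.random() < (0.8 if healthy else 0.2)
        # Longevity depends only on underlying health, never on activity.
        long_lived = random.random() < (0.7 if healthy else 0.3)
        outcomes[active].append(long_lived)
    return (sum(outcomes[True]) / len(outcomes[True])
            - sum(outcomes[False]) / len(outcomes[False]))

obs_gap = survival_gap(assigned_by_coin_flip=False)  # looks like exercise helps
rct_gap = survival_gap(assigned_by_coin_flip=True)   # apparent benefit vanishes
```

In the self-selected version the active group outlives the sedentary group by a wide margin; with coin-flip assignment the gap collapses toward zero, exposing the correlation as confounding rather than cause and effect.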
Many proclamations in the popular media take the form of "X study has shown that eating Y will decrease your risk of breast cancer by Z percent." How are we to make a risk assessment for ourselves based on this type of information? If we are to make an intelligent judgment concerning Y, we need to know the current incidence of breast cancer. If the incidence is 250 out of every 1000 women and Z is 20%, then the incidence will be reduced from 250 to 200. This represents an overall net change in breast cancer risk of 5% (50/1000). If, instead, the current incidence is 50 out of every 1000 women and Z is 20%, then the incidence will be reduced from 50 to 40. In this case, the overall net change in risk is 1% (10/1000). In the former scenario you might consider changing your eating habits, whereas in the latter case you may not be too concerned, even though a 20% reduction was present in each case.
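The arithmetic above can be captured in a few lines. This is just the paragraph's own calculation made explicit: the same relative reduction Z yields very different absolute changes in risk depending on the baseline incidence.

```python
def absolute_risk_change(incidence_per_1000, relative_reduction):
    """Return (new incidence per 1000, net change in overall risk)."""
    new_incidence = incidence_per_1000 * (1 - relative_reduction)
    net_change = (incidence_per_1000 - new_incidence) / 1000
    return new_incidence, net_change

# Scenario 1: baseline incidence 250 per 1000, Z = 20%
new1, change1 = absolute_risk_change(250, 0.20)

# Scenario 2: baseline incidence 50 per 1000, Z = 20%
new2, change2 = absolute_risk_change(50, 0.20)
```

The first scenario drops the incidence from 250 to 200 per 1000 (a 5% net change in overall risk); the second drops it from 50 to 40 (only a 1% net change), even though both headlines would read "20% reduction."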
Another misleading factor is that most medical (or whatever) studies are conducted using a broad sampling of the population. In most cases, it takes only a small segment of the population to skew the overall results. For example, certain people may have a genetic predisposition to react adversely to certain foods. When the results of the study are expressed in terms of the total population, it appears that everyone is somewhat at risk. This is the scientific equivalent of socialism, where everyone is treated as an equal member of a large group.
Studies using animals have become the method of choice when evaluating potentially toxic chemicals and foods. The problem is that the most toxic materials, those that cause significant problems, are so obvious that very little study is required. Now scientists are faced with determining the toxicity of materials that may only affect a small percentage of the population. They have two choices for animal studies:
A) Treat a large number of animals with the potentially toxic material at its normally encountered levels. Then determine the incidence of adverse reaction, if any, by a small number of animals.
B) Try to reduce costs and speed up the process by treating a small number of animals at greatly exaggerated levels of the material in question. Then extrapolate the results to estimate what would have occurred in a larger population exposed to the normally encountered levels of the material.
Figure 1 shows how this extrapolation works. When in doubt, scientists tend to think in terms of linear relationships. This is illustrated by the assumed dosage response in the figure. The erroneous assumption is that if the dosage is cut in half, the effect will be half as great, and so on. A more likely scenario is the lower curve, which starts with the same animal responses at high dosage levels. The lower curve is probably more accurate because the body tends to excrete toxic materials over time. A high dosage, as in the animal studies, overloads the body and does not allow elimination to occur in time to prevent damage. Lower dosage levels allow more time for the body to keep the concentration below a harmful level.
The vertical line represents normally encountered levels of the toxic material. It intersects the assumed dosage response curve at a population level of 5%. In other words, the analysis predicts that 5% of the population will be affected at the normal dosage. If we use the lower curve, taking into consideration the self-cleansing action of the body, we would predict a negligible effect on the population at normal dosage.
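The two curves in Figure 1 can be sketched numerically. All values here are hypothetical: a high-dose animal study finds a 50% response at dose 100, the normally encountered dose is 10, and the lower curve is modeled as a simple threshold below which the body eliminates the material fast enough to prevent any damage.

```python
def linear_model(dose, high_dose, high_response):
    # Assumed straight-line response: halve the dose, halve the effect.
    return high_response * dose / high_dose

def threshold_model(dose, high_dose, high_response, threshold):
    # Below the threshold the body keeps the concentration harmless,
    # so the predicted response is zero; above it, response rises linearly.
    if dose <= threshold:
        return 0.0
    return high_response * (dose - threshold) / (high_dose - threshold)

HIGH_DOSE, HIGH_RESPONSE = 100.0, 0.50   # 50% of animals affected at dose 100
NORMAL_DOSE = 10.0                       # normally encountered level

linear_pred = linear_model(NORMAL_DOSE, HIGH_DOSE, HIGH_RESPONSE)
threshold_pred = threshold_model(NORMAL_DOSE, HIGH_DOSE, HIGH_RESPONSE,
                                 threshold=20.0)
```

Both models agree at the high experimental dose, yet at the normal dose the linear extrapolation predicts 5% of the population affected while the threshold model predicts essentially none, which is exactly the gap between the two curves in the figure.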
Figure 2 shows a somewhat more complicated situation where large amounts of a material can be fatal, small amounts are necessary for life, and complete elimination can be fatal. Water is such a material. With no water we die; with moderate amounts of water we thrive; and with extraordinary amounts of water we may die. A case in point occurred about 30 years ago in England, where a woman died after drinking 5 gallons of water within a short period of time. Iodine, which the body needs in small amounts, would also have this kind of dosage response.
Alcohol can cause death when consumed in large quantities in a short time. Moderate intake of alcohol is believed to reduce the incidence of heart attack. Unlike water, however, no one has died from the complete elimination of alcohol.
Models are mathematical simulations of real world phenomena. Models are typically programmed into computers so we don't have to spend endless hours working on detailed calculations with a pencil and paper. The more complex the phenomena under study, the less likely the model accurately represents reality.
Lately, models have been the source for many scientific pronouncements. They are often coupled with retrospective studies. The global warming scare is an example. Another is the nuclear winter scenario promoted by Carl Sagan in 1985.
Sagan's modeling showed that a series of nuclear blasts would fill the atmosphere with dust for an extended period of time and block the sun. This in turn would cause a decrease in the earth's surface temperature of 54 deg. F! When the Gulf War produced a large number of burning oil wells, he predicted a similar result. He was shown to be wrong in this real-world case.
What his model failed to consider was the cleansing action of the water vapor present in the atmosphere. Fine smoke and dust particles are known to serve as nucleation sites for the formation of raindrops. In effect, the dust and smoke will soon be cleansed from the atmosphere by the falling rain. Anyone with a scientific education would have been able to pick out this flaw, but the media didn't give the public the benefit of these opinions because it would ruin the story. Sagan was more politician than scientist.
What is needed is a shot that everyone can be given that will immunize them against junk science. In a free society, opinions are many and of varying quality. Free speech is only an advantage if it has some truth associated with it. In the case of junk science, the best defense is education. Everyone should obtain a basic understanding of statistical analysis and its pitfalls. Also, knowledge of simple scientific principles will go a long way toward determining the worth or validity of what is presented in the popular media. And, believe it or not, an occasional call or e-mail to the media challenging their pronouncements might turn things around. Every media outlet should be encouraged to hire a scientific advisor - preferably a non-political one.