When Science Goes Wrong

Science is our perception of how things work. The scientific method is how we determine the current state of that perception. Science is the product of the successful application of the scientific method. They are not the same. For one thing, science changes while the scientific method is constant.

When people say “trust the science,” what they really mean to say, or should mean to say, is “trust the scientific method.” Science is constantly in a state of flux. It is never settled because there are always new things to learn. In the 1950s, I was taught that there were electrons having a negative charge, protons having a positive charge, and neutrons having no charge. My grandparents never learned about any of these when they went to school; it was all too new and unsettled. Today, there are more subatomic particles than I can count. I don’t even know what is taught about them in high school.

There are many ways that the scientific method can be perverted, if not ignored altogether, to produce erroneous results. Most research characterized as bad science is probably the result of bias on the part of the researcher. Sometimes, it is a consequence of the topic not having a theoretical basis or being near the limits of our current understanding. And, of course, in rare cases, it is intentional.

Categories of bad science go by many names, all of which are pejorative. Category definitions vary between sources, and some topics have been given as examples in more than one category. Sometimes the negative connotations are used to discredit research that challenges mainstream scientific ideas. Like an ad hominem argument, invoking terms related to bad science has been used to silence dissenters by preventing them from receiving financial support or publishing in scientific journals.

Pathological Science

Pathological science occurs when a researcher holds onto a hypothesis despite valid opposition from the scientific community. This isn’t necessarily a bad thing. Most scientific hypotheses go through periods when they are ignored in favor of the accepted hypothesis. It is only with persistence and further research that a hypothesis will be accepted. Sometimes the change is evolutionary and sometimes the change is revolutionary. The change from the Expanding-Earth hypothesis to the Continental-Drift hypothesis was revolutionary; the change from the Continental-Drift hypothesis to Plate Tectonics was evolutionary.

The pathological part of pathological science occurs when the researcher deviates from strict adherence to the scientific method in order to favor the desired hypothesis or incorporate wishful thinking into interpretation of the data. Usually, the hypothesis is experimental in nature and is developed after some research data have been generated. The effects of the results are near the limits of detectability. Sometimes, other researchers are recruited to perpetuate the delusion.

Researchers involved in pathological science tend to have the education and experience to conduct true science, so their initial results may be accepted as legitimate. Eventually, though, failure to replicate the results damages the work’s credibility.

Cold fusion is considered by some to be an example of pathological science because all or most of the research is done by a closed group of scientists who sponsor their own conferences and publish their own journals.

Pseudoscience

Pseudoscience involves hypotheses that cannot be validated by observation or experimentation (that is, they are incompatible with the scientific method) but still are claimed to be scientifically legitimate. Pseudoscience often involves long-held beliefs that pre-date experiments; consequently, it is often based on faulty premises. While less likely to be popular in the scientific community, pseudoscience may find support from the general public.

Examples that have been characterized as pseudoscience include numerology, free energy, dowsing, Lysenkoism, graphology, body memory, human auras, crystal healing, grounding therapy, macrobiotics, homeopathy, and near-death experiences.

The term pseudoscience is often used as an inflammatory buzzword for dismissing opponents’ data and results.

Fringe Science

Fringe science refers to hypotheses within an established field of study that are highly speculative, often at the extreme boundaries of mainstream studies. Proponents of some fringe sciences may come from outside the mainstream of the discipline. Nevertheless, they are often important agents in bringing about changes in traditional ways of thinking about science, leading to far-reaching paradigm shifts.

Some concepts that were once rejected as fringe science have eventually been accepted as mainstream science. Examples include heliocentrism (sun-centered solar system), peptic ulcers being caused by Helicobacter pylori, and chaos theory. The term protoscience refers to fields of study that were accepted in their time but were superseded by more rigorous formulations of similar concepts; in some cases, the original hypothesis lived on as a pseudoscience. Examples of protosciences are astrology evolving into the science of astronomy, alchemy evolving into the science of chemistry, and continental drift evolving into plate tectonics.

Other examples of fringe science include Feng shui, Ley lines, remote viewing, hypnotherapy and psychoanalysis, subliminal messaging, and the MBTI (Myers–Briggs Type Indicator). Some areas of complementary medicine, such as mind-body techniques and energy therapies, may someday become mainstream with continuing scientific attention.

The term fringe science is considered to be pejorative by some people, but it is not meant to be.

Barely Science

Barely science might be perfectly acceptable science except that it is too underdeveloped to be released outside the scientific community. Barely science may be based on a single study, on pilot studies that lack the methodological rigor of formal studies, on studies that don’t have enough samples for adequate resolution, or on studies that haven’t undergone formal peer review. The usual sources are researchers under pressure to demonstrate results to sponsors or to announce findings before competitors do. Consumers encounter barely science more often than they realize.

Junk Science

Junk science refers to research considered to be biased by legal, political, ideological, financial, or otherwise unscientific motives. The concept was popularized in the 1990s in relation to legal cases. Forensic methods that have been criticized as junk science include polygraphy (lie detection), bloodstain-pattern analysis, speech and text pattern analysis, microscopic hair comparisons, arson burn-pattern analysis, and roadside drug tests. Creation science, faith healing, eugenics, and conversion therapy are also considered to be junk sciences.

Sometimes, characterizing research as junk science is simply a way to discredit opposing claims. This use of the term is a common ploy for devaluing studies involving archeology, complementary medicine, public health, and the environment. Maligning analyses as junk science has been criticized for undermining public trust in real science.

Tooth-Fairy Science

Tooth-Fairy science is research that can be portrayed as legitimate because the data are reproducible and statistically significant, but there is no understanding of why or how the phenomenon exists. Placebos, endometriosis, yawning, out-of-place artifacts, megalithic stonework, ball lightning, and dark matter are examples. Chiropractic, acupuncture, homeopathy, therapeutic touch, and biofield tuning may also be considered to be tooth-fairy sciences.

Cargo-Cult Science

Cargo-cult science involves using apparatus, instrumentation, procedures, experimental designs, data, or results without understanding their purpose, function, or limitations, in an effort to confirm a hypothesis. Examples of cargo-cult experimentation might involve replication studies that use lower-grade chemical reagents, instruments not designed for field conditions, or data obtained using different populations and sampling schemes. In a case of fraudulent science involving experimental research on Alzheimer’s disease, over a decade of research efforts were wasted by relying on the illegitimate results.

Coerced Science

Coerced science occurs when researchers are compelled by authorities to study sometimes-objectionable topics in ways that promote speed in reaching a desired result over scientific integrity. There are many notable examples. During World War II, virtually every major power pushed their scientists and engineers to achieve a variety of desired results. In the 1960s, JFK successfully pressured NASA to land a man on the Moon. In the 1980s, Reagan prioritized efforts on his Strategic Defense Initiative (SDI) even though the goal was considered to be unachievable by experts. Many governments restrict research on their country’s cultural artefacts to individuals who agree to severe preconditions including censorship of announcements and results.

Businesses, especially in the fields of medicine and pharmaceutics, place great pressure on research staff to achieve results. For example, Elizabeth Holmes, founder of the medical diagnostic company Theranos, was convicted of fraud and sentenced to 11¼ years in prison. Businesses are also known to conceal data that would be of great benefit to society if they were available. Examples include results of pharmaceutical studies (e.g., Tamiflu, statins) and subsurface exploration for oil and mineral resources.

Academic institutions predicate tenure appointments in part on journal publications and grant awards, both of which rely on researchers finding statistical significance in their analyses (p-hacking, see Chapter 6).
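The false-positive inflation behind p-hacking is easy to simulate. The sketch below is illustrative only; the sample sizes, the number of comparisons, and the rough |t| > 2 cutoff for “p < 0.05” are assumptions for the demonstration, not anything from this chapter. It draws pairs of samples from the same population, so no real effect exists, and counts how many comparisons nonetheless look “significant.”

```python
import random
import statistics

random.seed(42)  # fixed seed so the run is repeatable

def t_statistic(a, b):
    """Welch's t-statistic for two independent samples."""
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

n_tests = 100     # number of unrelated hypotheses "tried"
n_samples = 30    # observations per group
threshold = 2.0   # |t| > 2 roughly corresponds to p < 0.05

false_positives = 0
for _ in range(n_tests):
    # Both groups come from the identical population: any "effect" is noise.
    group_a = [random.gauss(0, 1) for _ in range(n_samples)]
    group_b = [random.gauss(0, 1) for _ in range(n_samples)]
    if abs(t_statistic(group_a, group_b)) > threshold:
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons looked 'significant'")
```

With 100 null comparisons and a roughly 5% threshold, a handful will appear significant purely by chance; reporting only those few, as if they were the only tests run, is the essence of p-hacking.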

Taboo Science

Taboo science refers to areas of research that are limited or even prohibited either by governments or funding organizations. Sometimes this is reasonable and good. For example, restrictions on research involving human subjects have become increasingly strict since the atrocities that occurred during World War II. During the Cold War, U.S. military and intelligence agencies obstructed independent research on national security topics, such as encryption.

Some taboos, however, are promoted by special-interest groups, such as political and religious organizations. Examples of topics that are difficult for researchers to obtain funding for include: effectiveness of methods to control gun violence; ancient civilizations, archeological sites, artefacts, and STEM capabilities; health benefits of cannabis and psychedelics; resurrecting extinct species; and some topics in human biology such as cloning, genetic engineering, chimeras, synthetic biology, scientific aspects of racial and gender differences, and causes and treatments for pedophilia.

Fraudulent Science

Fraudulent science consists of research, experimental or observational, in which data, results, or even whole studies are faked. Creation of false data or cases is called fabrication; misrepresentation of data or results is called falsification. Plagiarism and other forms of information theft, conflicts of interest, and ethical violations are also considered aspects of fraudulent science. The goals of fraudulent science are usually to acquire money, including funding and sponsorships, and to enhance the researcher’s reputation and power within the profession.

Unfortunately, there are too many examples of fraudulent science. Perhaps the most notorious is the 1998 case of Andrew Wakefield, a British expert in gastroenterology, who claimed to have found a link between the MMR vaccine, autism, and inflammatory bowel disease. His paper published in The Lancet, which was retracted in 2010, is thought to have caused worldwide outbreaks of measles after a substantial decline in vaccinations. Wakefield later became a leader in the anti-vaxx movement in the U.S. Another infamous example involves faked images in a 2006 experimental study of memory deficits in mice, which subsequently led to an unproductive diversion of funding for Alzheimer’s research.

Sometimes, fraudulent actions are subtle and go unnoticed even by experts. Examples include pharmaceutical studies designed to accentuate positive effects while concealing undesirable side effects. Sometimes, well-meaning actions have unforeseen ramifications, such as when definitions of medical conditions are changed resulting in patients being treated differently. Examples include obesity, diabetes, and cardiac conditions.

From 2000 to 2020, 37,780 professional papers were retracted because of fraud (The Retraction Watch Database [Internet]. New York: The Center for Scientific Integrity. 2018. ISSN: 2692-465X. Accessed 4/13/2023. Available at: http://retractiondatabase.org/). Those retractions are considered to represent only a fraction of all fraudulent science.

It’s Not All Bad

Clearly, science and scientists are wrong on occasion even when they don’t intend to be. That is to be expected. Even if the scientific method isn’t all that difficult to understand, it is incredibly difficult to put into practice, simplified flowcharts notwithstanding. As a consequence, scientific studies are too often poorly designed, poorly executed, misleading, or misinterpreted. Most of the time, this is inadvertent, though sometimes not.

While this may seem like a fairly dismal portrayal of science, bear in mind that the vast majority of today’s science is real and legitimate. The difference between bad science and true science that strictly follows the scientific method is that true science will eventually correct illegitimate results.

About statswithcats

Charlie Kufs has been crunching numbers for over thirty years. He retired in 2019 and is currently working on Stats with Kittens, the prequel to Stats with Cats.