IS THE PSEUDOSCIENCE CONCEPT USEFUL FOR CLINICAL PSYCHOLOGY?
The Demise of Pseudoscience
Richard J. McNally, Department of Psychology, Harvard University.
The author wishes to thank Heidi M. Barrett, Susan A. Clancy, and Scott O. Lilienfeld for their comments on previous drafts of this manuscript. Preparation of this manuscript was supported by NIMH grant RO1 MH61268-01A1.
Correspondence concerning this article should be addressed to Richard J. McNally, Department of Psychology, Harvard University, 33 Kirkland Street, Cambridge, MA 02138. E-mail: firstname.lastname@example.org.
Talented entrepreneurs have been developing and marketing novel therapeutic methods, some touted as veritable miracle cures for diverse complaints. This phenomenon has caught the attention of scientist-practitioners in psychology, many of whom criticize these approaches as “pseudoscientific.” The purpose of this essay is to sketch a simpler, alternative approach to debunking dubious methods in clinical psychology. When therapeutic entrepreneurs make claims on behalf of their interventions, we should not waste our time trying to determine whether their interventions qualify as pseudoscientific. Rather, we should ask them: How do you know that your intervention works? What is your evidence?
Pseudoscience is like pornography: we cannot define it, but we know it when we see it. Or so it seems. But on what basis do scholars identify pseudoscience in clinical psychology (Lilienfeld, Lynn, & Lohr, 2003)? Even if no sharp criterion distinguishes pseudoscience from genuine science, we still need a way to identify it—if we assume the concept of pseudoscience is meaningful. Accordingly, scholars have identified pseudoscience by either its practitioners, its theories, or its methods.
There is little question that certain figures in contemporary clinical psychology have been strongly identified with pseudoscience. But linking pseudoscience to its alleged practitioners has its limitations. Many of history’s greatest scientists embraced ideas that clearly qualify as pseudoscientific, at least by today’s standards. Not only did early modern astronomers moonlight as astrologers (Dear, 2001, p. 18; Heilbron, 1999, pp. 83–85), but scientific pioneers such as Boyle, Leibniz, and Newton credulously swallowed all kinds of bizarre tales about the natural world resembling those featured in tabloids sold today in supermarket checkout lines (Miller, 2000, p. 27).
A fascinating American case of the scientist doubling as pseudoscientist is that of Cotton Mather, the Puritan polymath perhaps best known for his notorious role in the Salem witch trials (Boyer & Nissenbaum, 1974, p. 9). Despite his “day job” as minister of Boston’s First Church, he somehow found the time to publish enough outstanding research to earn election to England’s prestigious Royal Society (Bremer, 1995, p. 197). In fact, Mather was nearly martyred for his scientifically prescient but unpopular promotion of smallpox inoculation. A fellow Bostonian, fearing that inoculation would spread the dreaded disease, tossed a bomb through a window in Mather’s house (p. 198). Despite his scientific achievements, Mather’s impressive C.V. of 400-plus publications contains many curiosities, such as his article on two-headed snakes (Perry, 1984, p. 55) and his treatise entitled Memorable Providences Relating to Witchcrafts and Possessions, in which he described several bewitched children who could “fly like geese” by flapping their arms “like the wings of a bird” (Mather, quoted in Boyer & Nissenbaum, 1974, pp. 23–24). The upshot is that identifying pseudoscience by its practitioners fails because scientists and pseudoscientists have often been the very same people.
Another approach is to identify theories, rather than theorists, as pseudoscientific. Proclaiming falsifiability as the hallmark of science, Karl Popper (1976, pp. 41–43) consigned psychoanalysis, Marxism, and Jungian depth psychology to the dustbin of pseudoscience because, he said, they did not generate falsifiable predictions. No matter what happened, no matter what the empirical observations turned out to be, advocates of these disciplines could always interpret the outcome as support for their theory.
But clearly this approach fails, too. If psychoanalysis were nothing but unfalsifiable pseudoscience, how has it been possible to test predictions derived from Freud’s theory of repression (Holmes, 1990)? Indeed, historians of psychoanalysis have convincingly argued that Freud abandoned his early “seduction theory” because his clinical failures refuted predictions about the therapeutic benefits of recovering repressed memories of early childhood sexual abuse (Israëls & Schatzman, 1993).
Falsifiability is useless for distinguishing scientific theories from pseudoscientific ones because any theory, however bizarre, can be clarified, amended, or supplemented with auxiliary hypotheses to prevent its refutation. As Laudan (1996, pp. 218–219) pointed out, the falsifiability criterion renders “scientific” any crank claim made by flat-earthers, astrologers, creationists, or whomever, as long as they specify what would count as a falsifying observation. Falsifiability fails as a demarcation criterion because it is far too lenient.
Finally, one might identify certain methods as pseudoscientific. For example, even though a theory might be falsifiable, its advocates may act pseudoscientifically by engaging in ad hoc attempts to explain away theoretically embarrassing observations. From this Popperian perspective, Herbert et al. (2000) have accused Francine Shapiro and other EMDR advocates of practicing pseudoscience. According to these critics, EMDR mavens do not behave like real scientists, who, according to Popperian dogma, derive bold conjectures from their theories and then relentlessly seek theoretical refutation by exposing these conjectures to risky empirical tests.
Although I share Herbert et al.’s (2000) concerns about the marketing of eye movements and other amusing exotica of the EMDR movement (McNally, 1999a, 1999b), I believe the accusation of pseudoscience misses the mark. After clearing away all the neurological mumbo-jumbo, one can see that EMDR theory is eminently falsifiable (McNally, 2001a), and if Shapiro’s (1989) hypothesis about the curative powers of eye movement is not a Popperian “bold conjecture,” then nothing is. Indeed, not only is EMDR theory falsifiable, it has already been repeatedly falsified, as a recent meta-analysis has shown (Davidson & Parker, 2001). Despite many attempts, researchers have been unable to demonstrate that eye movements possess therapeutic powers. In response to these disappointing findings, EMDR theorists have cheerfully reconceptualized placebo control manipulations (e.g., rhythmic tapping) as variant forms of EMDR, and it is this ad hoc maneuver that Herbert et al. find especially problematic.
But, as Putnam (1974) points out in his devastating critique of Popper, scientists engage in these ad hoc maneuvers all the time. He illustrates this point with a historical example. Astronomers attempted to predict the orbit of Uranus by applying Newton’s law of universal gravitation plus the auxiliary assumption that all planets in the solar system were known. Their observations, however, ran counter to prediction. Rather than admitting that Newton’s theory was wrong, they brazenly engaged in an ad hoc gambit to save it from refutation. The astronomers simply assumed that there must be another planet lurking out there somewhere that was responsible for the aberrant observations. This maneuver is formally identical to Shapiro’s concluding that all kinds of rhythmic stimulation—bilateral eye movements, tapping, or whatever—are fungible and more effective than imaginal exposure minus rhythmic stimulation. Fortunately for the astronomers, they turned out to be right: Neptune was discovered. And Shapiro, one day, might also turn out to be right.
Of course, one might attempt to distinguish between legitimate ad hoc moves and illegitimate ones, condemning only the latter as pseudoscientific. One might argue that the astronomers were right to engage in ad hoc attempts to save Newton’s theory; after all, it had a better prefalsification track record than Shapiro’s. Unfortunately, this approach drains the ad hoc objection of its force; it renders it entirely parasitic on issues of evidential support. If we cannot tell whether an ad hoc move is justified without first examining a theory’s track record, why not just cut to the chase and inspect the theory’s evidential support (or lack thereof) without quibbling about ad hocness per se?
Not all psychologists who diagnose pseudoscience rely solely on Popper’s falsifiability criterion. Lilienfeld (1998), for example, has endorsed Mario Bunge’s seven hallmarks of pseudoscience. The more criteria met, the more likely the practice or theory qualifies as pseudoscience. The criteria are: (1) overuse of ad hoc hypotheses to escape refutation, (2) emphasis on confirmation rather than refutation, (3) absence of self-correction, (4) reversed burden of proof, (5) overreliance on testimonials and anecdotal evidence, (6) use of obscurantist language, and (7) absence of “connectivity” with other disciplines (p. 5). Of course, each of these individual criteria is fuzzy, too. For example, when does use of ad hoc hypotheses become “overuse,” or reliance on anecdotes become “overreliance,” or complex concepts become “obscurantist”? And as Foster and Huber (1999) have recently emphasized, first-rate science is strongly confirmationist. As they observed, authors of scientific papers are “much more likely to stress how well the data agree with some theory than how decisively they refute some theory” (p. 48).
One of Lilienfeld’s (1998) chief concerns is educating the public about the hazards of pseudoscience. But if most people fail to grasp Popper’s simple falsifiability criterion, what are the chances that John Q. Public will memorize and apply Bunge’s seven complex criteria for diagnosing pseudoscience? The chances are not great, especially when one considers that most advocates of wacky therapies hold Ph.D.s in clinical psychology, making them far more educated than the average citizen.
The term “pseudoscience” has become little more than an inflammatory buzzword for quickly dismissing one’s opponents in media sound-bites. This problem has been especially evident in debates about sociobiology and evolutionary psychology (Segerstråle, 2000, pp. 183, 329, and passim). In yet another example of terminological misuse, an erstwhile debunker of “snake oil” dismissed the work of Karl Lashley as “discredited pseudoscience” (Sarnoff, 2001, p. 28). To be sure, Lashley failed to locate the “engram” of memory, but does that make his efforts pseudoscientific?
Of course, merely because a term can be misused does not mean that it does not have its proper uses. Nevertheless, the pseudoscience concept generates more heat than light. As Laudan (1996) has said: “If we would stand up and be counted on the side of reason, we ought to drop terms like ‘pseudo-science’ and ‘unscientific’ from our vocabulary; they are just hollow phrases which do only emotive work for us” (p. 222).
I hasten to add that my ambivalence about the concept of pseudoscience should not be misunderstood as a defense of the psychologists, the theories, or the clinical practices justly criticized in this journal. EMDR, Thought Field Therapy (see McNally, 2001b), and all the rest rightly deserve critique, just not on the grounds of pseudoscience. There are much stronger grounds for critique. Rather than asking, Is this pseudoscience or genuine science? we should ask, What arguments and evidence support this clinical claim? We should be concerned with belief-worthiness, epistemic warrant, evidential basis, empirical support (pick your favorite locution), rather than attempting to determine whether the theory or practice falls on the proper side of a demarcation criterion that separates science from pseudoscience. The problem with EMDR, for example, is not that Francine Shapiro is a pseudoscientist, or that EMDR theory is unfalsifiable, or that EMDR mavens make ad hoc moves when confronted with embarrassing data. The problem is that the central claim about the therapeutic powers of eye movement lacks any convincing empirical support.
In conclusion, when clinical psychologists make claims on behalf of their theories or interventions, we should ask them, “How do you know?” Or, we can paraphrase the immortal words of Cuba Gooding Jr.: “Show me the data! Show me the data!”
I acknowledge the inspiration of Supreme Court Justice Potter Stewart’s oft-paraphrased concurring opinion in the 1964 Jacobellis v. Ohio case. The Court was addressing whether an erotic film met the description of hard-core pornography. Stewart said: “I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that” (United States Reports, 1965, p. 197).
[On the very day that I finished this manuscript, I had occasion to speak to my good friend Carol Tavris. We discussed a forthcoming book on pseudoscience, and she seconded my ambivalence about the concept, noting that she had once likened its vagueness to that of pornography! When I expressed amazement at the coincidence of our both hitting upon the same analogy, she added that she had made this point in an American Psychological Society (APS) lecture a year or so ago. I suddenly had a recovered memory of having read quotes from her talk—including the analogy between pseudoscience and pornography—in the APS Observer. I had entirely forgotten the source of what I mistakenly thought was an original idea of mine! Tavris had said, “Pseudoscience is like pornography; we can’t define it, but we know it when we see it” in her talk at the APS Presidential Symposium on Science and Pseudoscience, Denver, Colorado, June 3, 1999.]
- For a truly ghastly specimen of psychoanalytic reasoning, see Freud’s (1918/1955) famous case study of the “Wolfman.” After reading this example of Freud’s genius, one can easily understand Popper’s contempt for psychoanalysis. Specifically, Freud begins with the assumption that his patient witnessed his parents having sexual intercourse. He then embarks on a wildly unrestrained interpretive exercise whereby every bit of evidence is twisted to fit his preordained conclusions.
- Some scholars interpret Popper as identifying certain practices as pseudoscientific (e.g., ad hoc falsification evasion) rather than certain theories as pseudoscientific (e.g., Cioffi, 1998, pp. 210–227).
- Putnam is scarcely alone. Devastating critiques have been leveled against Popperian and neo-Popperian (e.g., Lakatos, 1970) philosophies of science in recent years (see, for example, Laudan, 1996; Sober, 1993, pp. 46–54; Stove, 2001).
- I am, of course, aware of Popper’s (1959, pp. 27–42) critique of induction and related notions of empirical support, confirmation, etc. Indeed, his belief that science progresses via conjectures and refutations, not confirmation of predictions, arose as a response to Hume’s (1739/2000) famous attempt to debunk inductive inference: “even after the observation of the frequent or constant conjunction of objects, we have no reason to draw any inference concerning any object beyond those of which we have had experience” (p. 95; emphasis in original).
Thus, a person who touches a flame and gets burned has “no reason” to infer that future flames will likewise be hot. One cannot validly deduce (in the logician’s sense of valid deductive inference) a theory from the facts of observation. Moreover, Hume (1739/2000) added, any appeal to previous successful inductive inference presupposes the very principle under dispute, thereby leading to an infinite regress of justificatory explanations (p. 64).
According to Popper (1979), Hume provided “a simple, straightforward, logical refutation of any claim that induction could be a valid argument, or a justifiable way of reasoning” (p. 86). Agreeing with Hume’s analysis, Popper argued that the invalidity of inductive inference means that observations can never “confirm” a theory’s probable truth. Popper endeavored to ground scientific reasoning entirely on a deductive basis, claiming that we can falsify but never verify our hypotheses. Thankfully, he said, “a principle of induction is superfluous [in science]” (Popper, 1959, p. 29). We can get by with falsification even if confirmation is nothing but an illusion.
Few scientists take Popper very seriously. As Foster and Huber (1999) wrote: “Despite Popper’s enormous prestige and the lip service that is often paid to his ideas, it is astounding how little influence he seems to have had on the practice of science” (p. 48).
Of course, no scientist believes that infallible truth can be deduced or derived from observational data. Yet merely because an inference is formally invalid by a logician’s criteria does not mean that it is unlikely to be true. Indeed, scientists make abductive inferences (“the inference to the best explanation”; Harman, 1965; Josephson & Josephson, 1996) all the time in their efforts to explain their data. Stove (2001) provides a homely example of the kind of reasoning common in science, historical scholarship, and police investigation, but offensive to deductivists like Hume and Popper: “The canary was alive and well when we left the room an hour ago; but it is dead now. Gas from the oven was leaking into the room during that time. So, if nothing else caused the canary’s death, the gas did” (p. 136).
The inference about the cause of the canary’s death is formally invalid, but not unreasonable. Moreover, it is not unreasonable to believe that all flames are hot even though one cannot deductively derive this conclusion from having gotten one’s fingers burned a few times. Scientific reasoning is not confined to formal deductive inference. Popperian concerns about inductive (in)validity should not obscure the fact that science progresses by “getting it right,” confirming hypotheses as well as by falsifying them. Therefore, a failure to test and confirm a theory is a good reason for suspending belief in its truth. On these grounds, we can easily criticize targets hitherto condemned on grounds of pseudoscience.
Boyer, P., & Nissenbaum, S. (1974). Salem possessed: The social origins of witchcraft. Cambridge, MA: Harvard University Press.
Bremer, F. J. (1995). The Puritan experiment: New England society from Bradford to Edwards (rev. ed.). Hanover, NH: University Press of New England.
Cioffi, F. (1998). Freud and the question of pseudoscience. Chicago, IL: Open Court.
Davidson, P. R., & Parker, K. C. H. (2001). Eye movement desensitization and reprocessing (EMDR): A meta-analysis. Journal of Consulting and Clinical Psychology, 69, 305–316.
Dear, P. (2001). Revolutionizing the sciences: European knowledge and its ambitions, 1500–1700. Princeton, NJ: Princeton University Press.
Foster, K. R., & Huber, P. W. (1999). Judging science: Scientific knowledge and the federal courts. Cambridge, MA: MIT Press.
Freud, S. (1955). From the history of an infantile neurosis. In J. Strachey (Ed. and Trans.), The standard edition of the complete psychological works of Sigmund Freud (Vol. 17, pp. 7–122). London: Hogarth Press. (Original work published 1918)
Harman, G. (1965). The inference to the best explanation. Philosophical Review, 74, 88–95.
Heilbron, J. L. (1999). The sun in the church: Cathedrals as solar observatories. Cambridge, MA: Harvard University Press.
Herbert, J. D., Lilienfeld, S. O., Lohr, J. M., Montgomery, R. W., O’Donohue, W. T., Rosen, G. M., et al. (2000). Science and pseudoscience in the development of eye movement desensitization and reprocessing: Implications for clinical psychology. Clinical Psychology Review, 20, 945–971.
Holmes, D. S. (1990). The evidence for repression: An examination of sixty years of research. In J. L. Singer (Ed.), Repression and dissociation: Implications for personality theory, psychopathology, and health (pp. 85–102). Chicago: University of Chicago Press.
Hume, D. (2000). A treatise of human nature. Oxford, England: Oxford University Press. (Original work published 1739)
Israëls, H., & Schatzman, M. (1993). The seduction theory. History of Psychiatry, 4, 23–59.
Josephson, J. R., & Josephson, S. G. (Eds.). (1996). Abductive inference: Computation, philosophy, technology. Cambridge, England: Cambridge University Press.
Lakatos, I. (1970). Falsification and the methodology of scientific research programmes. In I. Lakatos & A. Musgrave (Eds.), Criticism and the growth of knowledge (pp. 91–196). London: Cambridge University Press.
Laudan, L. (1996). Beyond positivism and relativism: Theory, method, and evidence. Boulder, CO: Westview Press.
Lilienfeld, S. O. (1998). Pseudoscience in contemporary clinical psychology: What it is and what we can do about it. The Clinical Psychologist, 51(4), 3–9.
Lilienfeld, S. O., Lynn, S. J., & Lohr, J. M. (Eds.). (2003). Science and pseudoscience in contemporary clinical psychology. New York: Guilford Press.
McNally, R. J. (1999a). EMDR and mesmerism: A comparative historical analysis. Journal of Anxiety Disorders, 13, 225–236.
McNally, R. J. (1999b). On eye movements and animal magnetism: A reply to Greenwald’s defense of EMDR. Journal of Anxiety Disorders, 13, 617–620.
McNally, R. J. (2001a). How to end the EMDR controversy. Psicoterapia Cognitiva e Comportamentale, 7, 153–154.
McNally, R. J. (2001b). Tertullian’s motto and Callahan’s method. Journal of Clinical Psychology, 57, 1171–1174.
Miller, P. N. (2000). Peiresc’s Europe: Learning and virtue in the seventeenth century. New Haven, CT: Yale University Press.
Perry, L. (1984). Intellectual life in America: A history. New York: Franklin Watts.
Popper, K. (1959). The logic of scientific discovery. New York: Harper & Row.
Popper, K. (1976). Unended quest: An intellectual autobiography. LaSalle, IL: Open Court.
Popper, K. R. (1979). Objective knowledge: An evolutionary approach (rev. ed.). Oxford, England: Oxford University Press.
Putnam, H. (1974). The “corroboration” of theories. In P. A. Schilpp (Ed.), The philosophy of Karl Popper (pp. 221–240). LaSalle, IL: Open Court.
Sarnoff, S. K. (2001). Sanctified snake oil: The effect of junk science on public policy. Westport, CT: Praeger.
Segerstråle, U. (2000). Defenders of the truth: The battle for science in the sociobiology debate and beyond. Oxford, England: Oxford University Press.
Shapiro, F. (1989). Efficacy of the eye movement desensitization procedure in the treatment of traumatic memories. Journal of Traumatic Stress, 2, 199–223.
Sober, E. (1993). Philosophy of biology. Boulder, CO: Westview Press.
Stove, D. (2001). Scientific irrationalism: Origins of a postmodern cult. New Brunswick, NJ: Transaction Publishers.
United States Reports. (1965). Cases adjudged in the Supreme Court at October term, 1963 (Vol. 378). Washington, DC: United States Government Printing Office.
You can read this article in The Scientific Review of Mental Health Practice, vol. 2, no. 2 (Fall/Winter 2003).