Chapter 9: Cognitive Errors

Scope

Unfortunately, errors in diagnosis and treatment selection are common in medicine; in 2000, the Institute of Medicine estimated that nearly 100,000 deaths yearly could be attributed to some kind of error.[73] Errors range from the grave, such as removing a normal kidney instead of the diseased one, to the minor, such as missing a scheduled dose of a drug by a few minutes. Causes of medical errors include poor communication among doctors or other members of a medical care team, inadequate staffing, equipment failure, misinterpretation of doctors’ orders, mistaken patient identity, misuse of drugs and laboratory tests, and wrong diagnosis.

Many of these errors are “systemic,” that is, they can be attributed to glitches, flaws, and inefficiencies in our “patch quilt” medical care system.[73] These mistakes have been studied extensively, and many methods have been implemented to repair the defects. Such systemic issues, however, are not within the scope of this book: here we consider only the errors in diagnosis and treatment attributable to flaws in reasoning. Errors of both varieties—systemic and cognitive—can coexist, and often do, as recent studies attest.[52],[74]

Classification

(Case 16, Case 27, Case 54, Case 55, Case 57, Case 58)

If diseases can be considered errors in normal structure and function, and if diseases can be classified by type, etiology, pathogenesis, epidemiology, prevention, and treatment, then by analogy diagnostic errors can be considered fallacies in normal clinical reasoning, and such errors can be organized and classified.[52],[75],[76]–[77] The classification of errors parallels the categories of the diagnostic process. Such categories include errors in hypothesis generation, context formulation, hypothesis refinement (information gathering and processing), and verification.

An additional category called “no fault” encompasses errors that a physician could not be expected to avoid (Table 9.1). Experience shows that such errors can be identified unambiguously, that multiple errors of different types may be present in a single diagnostic endeavor, and that many errors can be attributed to inadvertent cognitive biases.[52] Many examples of such errors are described in the cases in Part II.

TABLE 9.1. Classification of Cognitive Errors

Faulty hypothesis generation
Faulty context formulation
Faulty information gathering and processing
Faulty estimation of disease prevalence
Faulty interpretation of a test result
Faulty causal model
Overreliance on a clinical axiom
Faulty verification, including premature closure
“No-fault” errors

Some Errors May Have a Psychological Origin

(Case 5)

Errors in diagnosis can also have their source in factors that are neither “systemic” nor strictly cognitive. A physician may miss a pertinent physical finding, receive a faulty laboratory result, or be misinformed about factual data. In many other errors, a judgment seems to be influenced by psychological factors; such errors have been attributed to ego bias, hindsight bias, physician regret, reciprocation, and others.[78],[79]–[80] Yet another error, a kind of value-induced bias, occurs when physicians exaggerate the probability of a given diagnosis because one possible outcome is perceived as exceedingly unfavorable. Here we consider only the errors introduced by faulty information processing.

The Nature of Cognitive Errors

(Case 31, Case 36, Case 39, Case 54, Case 55, Case 56, Case 57–Case 58)

Faults in clinical cognition that provoke diagnostic errors presumably are the consequences of inadequate knowledge, defective information processing, or some combination of the two. Although we have little data on the relation between the structure or adequacy of physicians’ knowledge and the commission of errors, some information on the interplay between cognitive processes and knowledge is available. In some instances, defective hypothesis generation can be attributed to improper interpretation of clinical cues, to failure of properly identified cues to raise the possibility of a given disease, or to insufficient knowledge of a disease to invoke it at all.

Another error occurs when a correct diagnosis is eliminated even though the clinical findings actually are consistent with it. This error can be ascribed to the clinician’s overly specific expectations for the disease; in such instances, physicians presumably have constructed a faulty model of the disease. On other occasions, physicians fail to recognize that observed findings are at odds with those of the suspected disease (a failure of verification). This error can be attributed to an overestimation of the allowable range of variation of findings in a given disease and is another example of a faulty disease model. Rather than being too restrictive, like the model described earlier, the model in this instance presumably is too broad.

Cognitive Biases in the Laboratory

(Case 57)

In the everyday process of problem solving, people use shortcuts known as heuristics. These quick, intuitive judgments are often correct and produce the desired result, yet many studies show that people (including physicians) sometimes make errors in information processing when using these heuristics.[81],[82]–[83] Perhaps some of the common heuristics are best understood in their “pure culture,” that is, as they are studied in the psychology laboratory. Investigators have generally used simple problems as their experimental models and nonphysicians as their subjects, and their studies identify quite clearly the errors that people make when using these heuristics.

The representativeness heuristic—a technique used in probability assessments—derives from the practice of judging the likelihood of an event on the basis of its close resemblance to other well-defined events. In one classic experiment in the psychology laboratory, this error was revealed by describing the personal attributes of an introverted and meticulous individual and then asking subjects whether the individual was most likely an engineer, a physician, an airline pilot, or a librarian. Subjects were confident that the individual was a librarian even when the description was scant, unreliable, or outdated, and even though librarians are fewer in number than members of the other professions listed.

The availability heuristic involves assessing the chance of some event or outcome on the basis of readily recallable similar events or outcomes. The event or outcome may be particularly easy to recall because a given event was quite striking or impressive, because a combination of findings brings it readily to mind, or because the causal connections between events make a given outcome quite imaginable. In a classic laboratory experiment that revealed this error, subjects were asked to judge how many people on a list were men and how many were women (half were of each sex). Manipulating the list to contain a disproportionate number of either famous men or famous women induced the subjects to guess that the numbers were not evenly split between the sexes.

Another heuristic identified in these psychological studies is anchoring. This approach involves assessing the likelihood of an event or an outcome based on some starting point or initial value. In another classic experiment, one group of subjects was asked to estimate the product of 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 and another group was asked to estimate the product of 1 × 2 × 3 × 4 × 5 × 6 × 7 × 8. The median estimate of the former group was 2,250 and that of the latter group was 512; both fall far short of the true product, 40,320.
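
A minimal sketch of how anchoring can produce such underestimates follows; treating the product of the first three terms as the anchor is an assumption made here for illustration, not a detail reported from the experiment.

from math import prod

# Illustrative arithmetic for the anchoring example; the three-term anchor
# is an assumption of this sketch, not a value from the cited study.
descending = [8, 7, 6, 5, 4, 3, 2, 1]
ascending = descending[::-1]

true_product = prod(descending)            # 40,320
anchor_descending = prod(descending[:3])   # 8 * 7 * 6 = 336
anchor_ascending = prod(ascending[:3])     # 1 * 2 * 3 = 6

print(true_product, anchor_descending, anchor_ascending)
# Starting from the larger anchor (336 versus 6) and adjusting upward, but
# not far enough, yields a larger estimate, though both groups' medians
# (2,250 and 512) remain well below 40,320.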

Consequences of Cognitive Biases

(Case 16, Case 43, Case 52, Case 56, Case 57–Case 58)

Cognitive biases similar to those identified in laboratory experiments do taint everyday clinical reasoning and can influence clinical outcomes. Indeed, physicians make many errors similar to those described by the psychologists. In a study carried out some years ago, physicians presented with a hypothetical test for cancer (which they agreed was similar to tests in their everyday practice) made grossly incorrect interpretations of a positive test because they ignored the base rate of cancer in the population. Among the cases in Part II, we identified errors in the use of both the representativeness heuristic and the availability heuristic (see Case 1, Case 3, Case 13, Case 14, Case 54, Case 57). Although we did not identify an error attributable to the anchoring heuristic, other studies clearly show that physicians do make such errors.
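
To suggest the magnitude of such a base-rate error, a minimal sketch follows; the prevalence, sensitivity, and specificity are illustrative assumptions, not the values used in the study cited above.

# Illustrative Bayes calculation; the numbers below are assumptions for
# this sketch, not data from the study cited in the text.
prevalence = 0.01     # base rate of cancer in the population
sensitivity = 0.80    # probability of a positive test given cancer
specificity = 0.90    # probability of a negative test given no cancer

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * (1 - specificity)
post_test_probability = true_positives / (true_positives + false_positives)

print(round(post_test_probability, 3))    # about 0.075
# Ignoring the 1% base rate invites estimates near the test's sensitivity
# (80%), whereas under these assumptions the probability of cancer after a
# positive result is only about 7.5%.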

It should be pointed out that, by and large, people are excellent problem solvers, and questions have been raised about the applicability of these laboratory exercises to real-world problem solving. Indeed, the real world often provides redundant cues and multiple measures of the same cue, and the context of actual problem solving is likely to be far richer than the artificial constraints of a laboratory experiment allow. Nonetheless, many of our cases and those of others illustrate not only the existence of these cognitive biases in day-to-day medical decision making but also the gravity of such errors.[82],[83] Serious emotional consequences and many morbid outcomes can result from such faulty reasoning.

Strategies for Avoiding Cognitive Errors

This book focuses on examples of excellent and faulty reasoning, on the assumption that exposure to both kinds of examples sensitizes students to recognizing and avoiding errors. In addition, we have provided descriptions of many of the common cognitive biases. As diagnostic processes become increasingly automated, steps are being built in that reduce reliance on knowledge and memory, and these approaches further reduce many errors.

Some have suggested that other educational approaches can reduce errors, such as regularly requiring thorough consideration of alternative diagnostic possibilities, developing strategies based on specific diagnostic categories, organizing clinical information so as to simplify the cognitive task, monitoring one’s own cognitive processes, and regularly revisiting important diagnostic decisions before acting on them.[79],[83] These ideas are interesting, but, like the method of instantiation presented here, they have not been subjected to much evaluation,[84] and some authors are less optimistic that “debiasing” can change how we think.[85],[86]–[87]