Heuristics and medical errors. Part 2: How to make better medical decisions

Abstract

This publication continues the article “Heuristics, language and medical errors”, published in issue 4 of the journal Russian Family Doctor for 2020, which described ways of making medical decisions that can lead to errors in patient management, in particular the “affect heuristic/visceral bias”, “attribution error”, “frame of reference”, “availability bias”, and the “one-word-one-meaning fallacy”. This article discusses additional sources of diagnostic error, including “diagnosis momentum”, “confirmation bias”, “representativeness”, and “premature closure”; the conflict that arises from diagnostic uncertainty is also discussed. All errors in tactics and in the diagnostic process are illustrated by clinical cases from the author’s personal practice.


INTRODUCTION

In the first paper in this series, we defined heuristics and described how we use (and misuse) them when we make medical decisions [1]. We discussed several sources of error including the “affect heuristic/visceral bias”, “attribution error”, “frame of reference”, “availability bias”, and the “one-word-one-meaning fallacy”. In this second part, we will discuss additional sources of diagnostic error including “diagnosis momentum”, “confirmation bias”, “representativeness”, and “premature closure”. Finally, we will discuss the conflict that arises from diagnostic uncertainty.

CLINICAL CASE 1

SFG is a 45-year-old female who complains of unilateral headaches which are pulsating and associated with photophobia and retroorbital pain. She notes that some of the headaches last only 5 or 10 minutes (or less), but many last an hour or more. She often has 5 or more headache episodes a day. Her exam is normal.

She has been seen for these headaches multiple times before and has been given a diagnosis of “migraine headache” by one of your senior partners as well as by several other physicians who have seen her. A head CT and MRI have been normal. You start her on migraine prophylaxis, but she returns a week later still complaining of “migraine” headaches. You consider temporal arteritis, but a sedimentation rate and C-reactive protein are normal. You are more convinced that these are migraine headaches. She eventually tries several antimigraine drugs over a period of 12 weeks, but nothing seems to work. You are wondering if this is psychological, since she looks well when she is in the office.

Diagnosis Momentum. This patient has paroxysmal hemicrania (which responds to indomethacin). This case demonstrates several classic causes of diagnostic error including “diagnosis momentum”, “confirmation bias” and “premature closure”. “Diagnosis momentum” occurs when there is a diagnosis on the chart, often from a senior physician, and instead of questioning the diagnosis, other practitioners proceed based on the assumption that the diagnosis on the chart is correct [2, 3].

In this case, the chart says “migraine”, so our patient is tried on multiple medications for migraines. Rather than questioning whether the diagnosis is correct, we go down the same path as the last physician, assuming a different or “better” medicine might work. A common example in practice is the patient with “bronchitis” or “sinusitis” who is started on a course of antibiotics. This doesn’t work, so the next doctor puts the patient on a “stronger” antibiotic, and then a third course when that doesn’t work. Instead of questioning whether the diagnosis is correct and whether antibiotics are the proper choice, we just go down the easy path offered to us by the chart. Remember that neither we nor our colleagues are infallible; the first one or two doctors who saw the patient might be wrong, even if one of them is another specialist. “Diagnosis momentum” can be a particular problem with electronic medical records, where every patient diagnosis is carried forward regardless of whether or not it has been verified. There is a saying, commonly attributed to Einstein but actually from Rita Mae Brown: “Insanity is doing the same thing over and over again and expecting a different result.” It has also been put this way: “The greatest barrier to the proper diagnosis is a prior diagnosis” [4]. Instead of “just doing the same thing”, question whether you are treating the right illness.

Confirmation Bias. The second cognitive error demonstrated by this case is called “confirmation bias”. This has been defined as, “the tendency to look for confirming evidence to support a diagnosis rather than look for disconfirming evidence to refute it, despite the latter often being more persuasive and definitive” [5–7].

We look for evidence that validates our original diagnostic consideration (migraine), including a unilateral, pulsating headache with a retroorbital component, a negative CT and MRI, a normal sedimentation rate and CRP, and a normal exam. We tend to ignore information that doesn’t fit with our diagnosis, such as the fact that some of the headaches last 10 minutes or less and that she has them multiple times per day (migraine headaches generally last for hours and don’t usually occur multiple times in one day). Instead of “thinking out of the box” and rethinking our diagnosis, we ignore information that doesn’t fit into our diagnostic framework. We also ignore the evidence that antimigraine medications usually work for migraines and that if a medication isn’t working, we should question whether we are treating the right illness (and have the right diagnosis). Another example of “confirmation bias” might be a patient with fever, chills, cough and an infiltrate on chest radiograph with 10 eosinophils on peripheral smear. Instead of including “Pulmonary Infiltrates with Eosinophilia” on our differential diagnosis, we may jump to the diagnosis of pneumonia and ignore the eosinophilia.

Perhaps the best example of “confirmation bias” can be seen in everyday life. When looking on the internet, we tend to look for information that “confirms” our opinion and ignore information that does not fit with our world view.

There are several ways to try to mitigate this problem. Instead of ignoring some information, try integrating everything you know into one diagnosis. This can be difficult to do when our ability to generate a differential diagnosis is limited. This leads to “premature closure” (another cognitive bias… settling on a diagnosis without considering all of the reasonable options). One solution to this problem is to use a differential diagnosis “check list”. These have been shown to improve diagnostic accuracy [8].

There are several free tools available to help you, including one at http://pie.med.utoronto.ca/DC/index.htm and a downloadable set at https://www.dropbox.com/s/ynoacqw9xsiowj4/diffdx.doc?dl=0.

Another reasonably priced option is “Diagnosaurus”, available for Apple and Android in their respective app stores. Does this mean that you need to use a differential diagnosis checklist for every patient and work up every possible diagnosis? No. These tools are designed to “jog your memory”.

If your treatment isn’t working the way you would expect or the problem is one that you don’t frequently see, for example an unusual polyarthritis, these simple tools can help you arrive at the correct diagnosis by making sure you broaden your considerations. In difficult cases I’ll often go over the list with my patient in order to assure them that we are thinking broadly about their problem.
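To make the checklist idea concrete, here is a toy sketch of how such a tool works: you look up the presenting problem and review a stored list of diagnoses before settling on one. The complaints and differentials below are a deliberately abbreviated, illustrative sample, not the content of any of the tools mentioned above.

```python
# A toy differential diagnosis "checklist": a lookup from a presenting
# problem to a broad list of diagnoses to review before closing the case.
# The entries are illustrative examples only, not a complete checklist.
CHECKLIST = {
    "unilateral headache": [
        "migraine",
        "cluster headache",
        "paroxysmal hemicrania",
        "temporal arteritis",
        "intracranial mass",
    ],
    "chest pain": [
        "gastroesophageal reflux / esophageal spasm",
        "musculoskeletal pain",
        "pneumonia / pleurisy",
        "pulmonary embolism",
        "acute coronary syndrome",
        "aortic dissection",
    ],
}

def review(presenting_problem: str) -> None:
    """Print the stored differential for a presenting problem so nothing is skipped."""
    items = CHECKLIST.get(presenting_problem.lower())
    if items is None:
        print(f"No checklist stored for '{presenting_problem}'.")
        return
    print(f"Differential diagnoses to consider for '{presenting_problem}':")
    for dx in items:
        print(f"  - {dx}")

if __name__ == "__main__":
    review("unilateral headache")
```

The value is not in the code, of course, but in the discipline of reading the whole list before closing the case.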

CLINICAL CASE 2

JRB is a 24-year-old smoker who presents with chest pain radiating between his shoulder blades. The pain is sharp, and it hurts to take a deep breath. He has a blood pressure of 139/90, normal pulses, and a normal oxygen saturation. The rest of his exam is normal. An EKG is negative, and there is no evidence of deep venous thrombosis or other risk factors for pulmonary embolism. He is PERC rule (pulmonary embolism rule-out criteria) negative, essentially ruling out pulmonary embolism [9, 10].

His family history is negative for Ehlers-Danlos, Marfan Syndrome, etc. You think about the possibility of an aortic dissection and send him for a contrast enhanced CT scan. The patient has an anaphylactic reaction; he is discharged 3 days later with a diagnosis of esophageal spasm.

Representativeness. This is an example of an error based on the “representativeness heuristic”. When we use “representativeness”, we look at the essential features of what is presented to us (in this case a smoker, short of breath, with chest pain radiating between the shoulder blades) and use this information to judge how similar the presentation is to a particular diagnosis (for example, aortic dissection) [11, 12].

The problem with basing our decision only on representativeness, that is, on how closely the patient’s presentation resembles a diagnosis, is that it ignores the pretest probability of the diagnosis. We need to think about the “base rate” of aortic dissection, that is, how common aortic dissection is in a 24-year-old (exceedingly uncommon). So, for example, gastroesophageal reflux disease (GERD), esophageal spasm, esophagitis, pneumomediastinum, and others are much more likely in this 24-year-old than is an aortic dissection.

We did the right thing considering aortic dissection as a diagnosis but erred in putting it at the top of the list. The patient would have been better served by trying a treatment for dyspepsia, a much more common cause of chest pain in a 24-year-old, first. If the patient were 72 years old with a history of cardiovascular disease, a stronger consideration of aortic dissection would be justified (although we still cannot rule out other considerations including GI disease, cardiac disease, pulmonary embolism, etc.). In the case of the 72-year-old, the base rate of aortic dissection is going to be higher than that in a 24-year-old.
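A rough calculation shows how much the base rate matters. The numbers below are invented purely for illustration (a likelihood ratio of 20 for this presentation and assumed pretest probabilities of 0.01% and 1%); the point is only the Bayesian arithmetic of converting pretest probability to odds, multiplying by the likelihood ratio, and converting back.

```python
# Hypothetical numbers, for illustration only: suppose this presentation is
# 20 times more likely in a patient with aortic dissection than in one
# without it (likelihood ratio, LR = 20), and only the pretest probability
# (the base rate) differs between the two patients.
LR = 20.0
pretest_probabilities = {
    "24-year-old": 0.0001,                 # assumed base rate
    "72-year-old with CV disease": 0.01,   # assumed base rate
}

for label, pretest in pretest_probabilities.items():
    pretest_odds = pretest / (1 - pretest)
    post_odds = pretest_odds * LR          # Bayes' theorem in odds form
    post_prob = post_odds / (1 + post_odds)
    print(f"{label}: pretest {pretest:.2%} -> post-test {post_prob:.1%}")
# 24-year-old: pretest 0.01% -> post-test 0.2%
# 72-year-old with CV disease: pretest 1.00% -> post-test 16.8%
```

Even with an identical presentation and an identical likelihood ratio, the post-test probability of dissection in this sketch is roughly 80 times higher in the older patient, which is why the same symptoms should generate a differently ranked differential.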

A non-medical example of representativeness comes (slightly modified) from “Behavioral Economics” [13].

Bob is an opera fan who enjoys touring art museums when on holiday. Growing up, he enjoyed playing chess with family members and friends and played in the high school orchestra where he was well regarded. He has a passion for detail. Which situation is more likely?

  1. Bob plays trumpet for a major symphony orchestra
  2. Bob is a farmer

Based on what we actually know, the answer is the second option. There are many more farmers than there are trumpet players in major symphony orchestras. Even though we have described Bob as someone whose personality may fit our concept of a musician, it is still much more likely that he farms. We need to take the prior probability of each job into account. Similarly, we need to take the prior probability of a disease into account when we decide what illness a patient’s symptoms represent. That doesn’t mean that rare diseases don’t occur. But it does mean that if we put rare illnesses at the top of our list, we might prescribe unneeded medications or order unneeded tests. We will also miss the correct diagnosis.
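A back-of-the-envelope count makes the same point. The counts and percentages below are invented for illustration; the only claim is that when one group vastly outnumbers the other, even a weakly matching description is still most often found in the larger group.

```python
# Invented, order-of-magnitude numbers purely to illustrate base rates.
n_farmers = 2_000_000             # farmers in the country (hypothetical)
n_symphony_trumpeters = 500       # trumpet players in major orchestras (hypothetical)

p_profile_given_farmer = 0.01     # assume 1% of farmers fit Bob's description
p_profile_given_trumpeter = 0.60  # assume 60% of symphony trumpeters fit it

matching_farmers = n_farmers * p_profile_given_farmer                     # 20,000
matching_trumpeters = n_symphony_trumpeters * p_profile_given_trumpeter   # 300

p_farmer_given_profile = matching_farmers / (matching_farmers + matching_trumpeters)
print(f"P(farmer | profile) is about {p_farmer_given_profile:.1%}")        # about 98.5%
```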

In American medicine there is a saying, “When you hear hoofbeats, think of horses and not zebras”. This is another way of saying that common illnesses occur commonly and are statistically likely going to be the correct diagnosis. This doesn’t mean that we ignore zebras, diseases that occur rarely. But if we pursue statistically unlikely diagnoses to the exclusion of the more common, we will make a diagnostic error.

CLINICAL CASE 3

The last topic we will address in this paper is diagnostic uncertainty. We will start with a case.

QTS is a 62-year-old female who complains of shortness of breath which has been getting worse for the past 6 months. She used to walk to the store (5 blocks) without difficulty but now has trouble getting up the steps to her apartment. On exam she has decreased breath sounds on the right and a chest radiograph shows evidence of a large right pleural effusion. A diagnostic thoracentesis is done, and you look at your algorithm for “Light’s Criteria” to help differentiate an exudate from a transudate.

 

Table. Likelihood of exudates using the pleural fluid to serum protein ratio [14, 15]

| Pleural to serum protein ratio | Post-test probability of exudate if pretest probability is 10% | Post-test probability of exudate if pretest probability is 30% |
|---|---|---|
| >0.71 | 91% | 98% |
| 0.61–0.65 | 32% | 64% |
| 0.56–0.60 | 29% | 61% |

Note. Overall sensitivity and specificity: 90% and 90%.

 

The pleural-to-serum protein ratio is 0.6, which leads you to make the diagnosis of an exudate [14]. How sure are you that this is an exudate?

Diagnostic Uncertainty. If you look at the algorithm, you would diagnose an exudate, since at least one of Light’s criteria is met. However, if you look at the actual data on which the algorithm is based (Table), you will see that the probability is only 61% when the pleural-to-serum protein ratio is 0.6 [14]. You will also see that in order to interpret the pleural-to-serum protein ratio, you need to know the prior probability that the effusion is an exudate. Do you think there is a 10% pre-test probability that it is an exudate? Then the post-test probability is only 29%. Do you think there is a 30% pre-test probability that it is an exudate? The post-test probability is still only 61% [15].
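As a check on the arithmetic, the table values can be reproduced with a short calculation. For the 0.56–0.60 stratum, the published post-test probabilities imply an interval likelihood ratio of roughly 3.7 (a value back-calculated here from the table itself, not quoted from the source papers); converting the pretest probability to odds, multiplying by that likelihood ratio, and converting back to a probability gives the 29% and 61% figures. A minimal sketch:

```python
def post_test_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Pretest probability -> post-test probability via odds and a likelihood ratio."""
    odds = pretest_prob / (1.0 - pretest_prob) * likelihood_ratio
    return odds / (1.0 + odds)

# Interval likelihood ratio of ~3.7 for a pleural-to-serum protein ratio of
# 0.56-0.60, back-calculated from the table above (an approximation, not a
# figure quoted in the source papers).
LR_056_060 = 3.7
for pretest in (0.10, 0.30):
    post = post_test_probability(pretest, LR_056_060)
    print(f"pretest {pretest:.0%} -> post-test {post:.0%}")
# pretest 10% -> post-test 29%
# pretest 30% -> post-test 61%
```

The same conversion explains why a “positive” Light’s criterion can still leave substantial uncertainty when the pretest probability is low.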

This is the problem with algorithms and decision rules. Even though we like using them, algorithms encourage “black or white” thinking: either someone has a disease, or they do not. However, algorithms and decision rules are based on probabilities.

The same is true when we make most diagnoses. We really cannot be 100% certain of appendicitis, for example, until after the patient goes to the operating room. Even a CT scan has false positives and false negatives.

Another example is the pulmonary embolism rule-out criteria (PERC rule) mentioned above [9, 10]. The rule uses a cutoff of age 50 as one criterion for clinically ruling out a pulmonary embolism. What if the patient is age 50 and one week? Does this really change her probability of a pulmonary embolism significantly? No. But when developing a rule or algorithm, you have to choose a cutoff.
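A tiny illustration of the cutoff problem, using a hypothetical hard age cutoff of 50 years in isolation (the real rule combines several criteria): a rule must return a yes/no answer, so two patients whose underlying risk is essentially identical can fall on opposite sides of the line.

```python
AGE_CUTOFF = 50.0  # hypothetical hard cutoff, shown in isolation for illustration only

def meets_age_criterion(age_years: float) -> bool:
    """True if the patient is below the hard age cutoff."""
    return age_years < AGE_CUTOFF

for age in (49.98, 50.02):  # roughly one week on either side of the cutoff
    print(f"age {age}: meets age criterion = {meets_age_criterion(age)}")
# age 49.98: meets age criterion = True
# age 50.02: meets age criterion = False
```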

One final example. If you have a positive HIV test in an IV drug abuser from Africa, the HIV test result is likely correct: It is likely a true positive. But what if the patient were a celibate nun? The HIV test is likely incorrect: It is likely a false positive. So, when we are making a diagnosis, we need to know the prior probability of disease before we can decide if our test or algorithm is correct.
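The HIV example is, again, just the effect of prior probability (here, prevalence) on the meaning of a positive test. The sensitivity, specificity, and prevalences below are hypothetical placeholders chosen only to illustrate the direction of the effect.

```python
def positive_predictive_value(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Probability that a positive test is a true positive (Bayes' theorem)."""
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * (1.0 - specificity)
    return true_pos / (true_pos + false_pos)

# Hypothetical test characteristics and prevalences, for illustration only.
SENS, SPEC = 0.999, 0.999
for label, prevalence in [("high-risk patient (IV drug use)", 0.20),
                          ("very-low-risk patient", 0.000001)]:
    ppv = positive_predictive_value(prevalence, SENS, SPEC)
    print(f"{label}: P(infected | positive test) = {ppv:.1%}")
# high-risk patient (IV drug use): P(infected | positive test) = 99.6%
# very-low-risk patient: P(infected | positive test) = 0.1%
```

With the same excellent test, a positive result is almost certainly true in the high-risk patient and almost certainly a false positive in the very-low-risk patient.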

This leads to two points.

  1. When we make a diagnosis, we are dealing with probabilities. However, patients want a 100% guarantee.
  2. This uncertainty can lead to anxiety on the part of the doctor and the patient [16]. In fact, uncertainty is one of the biggest stressors in practice [17, 18]. We all feel bad if we give a patient a diagnosis and it turns out to be wrong. And the patient (or the family) may get angry. How do we prevent this? We should not be hesitant to say, “This is what I think, but we do not know for sure. It could also be diagnosis X, Y, or Z. Let’s keep evaluating this and working together until we have a firm diagnosis”. The important thing is to form a partnership with the patient and let them know you won’t abandon them until the diagnosis is as close to certain as it can be. And patients appreciate it when we are honest with them [19].

CONCLUSIONS

“Representativeness”, “diagnosis momentum”, “confirmation bias” and “premature closure” are common sources of diagnostic error. It is important to take the “base rate” of disease in the population we are seeing into account when interpreting the history, physical and laboratory tests. Diagnostic uncertainty is a source of stress in practice. Discussing the uncertainty with your patients can lead to better communication and satisfaction.


About the authors

Mark A. Graber

University of Iowa Carver College of Medicine

Author for correspondence.
Email: mark-graber@uiowa.edu

MD MSHCE FACEP, Emeritus Professor of Emergency and Family Medicine

United States, University of Iowa Carver College of Medicine, 451 Newton Road, 200 Medicine Administration Building, Iowa City, IA 52242

References

  1. Graber MA. Heuristics, language and medical errors. Russian Family Doctor. 2020;24(4):25–30. doi: 10.17816/RFD50991
  2. Howard J. Premature closure: anchoring bias, Occam’s error, availability bias, search satisficing, yin-yang error, diagnosis momentum, triage cueing, and unpacking failure. In: Cognitive Errors and Diagnostic Mistakes. Springer; 2019. P. 379–423. doi: 10.1007/978-3-319-93224-8_23
  3. Croskerry P. Cognitive and affective biases in medicine [Internet]. Critical Thinking Program. Canada. 2013. Available from: http://sjrhem.ca/wp-content/uploads/2015/11/CriticaThinking-Listof50-biases.pdf. Accessed 19.01.2021.
  4. Sterbenz C. 12 Famous Quotes That Always Get Misattributed [Internet]. 2013. Available from: https://www.businessinsider.com/misattributed-quotes-2013-10. Accessed 19.01.2021.
  5. Nickerson RS. Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology. 1998;2(2):175–220. doi: 10.1037/1089-2680.2.2.175
  6. Mendel R, Traut-Mattausch E, Jonas E, et al. Confirmation bias: why psychiatrists stick to wrong preliminary diagnoses. Psychol Med. 2011;41(12):2651–2659. doi: 10.1017/S0033291711000808
  7. Pines JM. Profiles in patient safety: Confirmation bias in emergency medicine. Acad Emerg Med. 2006;13(1):90–94. doi: 10.1197/j.aem.2005.07.028
  8. Ely JW, Graber MA. Checklists to prevent diagnostic errors: a pilot randomized controlled trial. Diagnosis (Berl). 2015;2(3):163–169. doi: 10.1515/dx-2015-0008
  9. Penaloza A, Soulié C, Moumneh T, et al. Pulmonary embolism rule-out criteria (PERC) rule in European patients with low implicit clinical probability (PERCEPIC): a multicentre, prospective, observational study. Lancet Haematol. 2017;4(12):e615–e621. doi: 10.1016/S2352-3026(17)30210-7
  10. Freund Y, Cachanado M, Aubry A, et al. Effect of the pulmonary embolism rule-out criteria on subsequent thromboembolic events among low-risk emergency department patients: The PROPER randomized clinical trial. JAMA. 2018;319(6):559–566. doi: 10.1001/jama.2017.21904
  11. Tversky A, Kahneman D. Judgment under uncertainty: Heuristics and biases. Science. 1974;185(4157):1124–1131. doi: 10.1126/science.185.4157.1124
  12. Kulkarni SS, Dewitt B, Fischhoff B, et al. Defining the representativeness heuristic in trauma triage: A retrospective observational cohort study. PLoS One. 2019;14(2):e0212201. doi: 10.1371/journal.pone.0212201
  13. Representativeness heuristic [Internet]. Behavioral Economics. Available from: https://www.behavioraleconomics.com/resources/mini-encyclopedia-of-be/representativeness-heuristic. Accessed 19.01.2021.
  14. Wilcox ME, Chong CAKY, Stanbrook MB, et al. Does this patient have an exudative pleural effusion? The rational clinical examination systematic review. JAMA. 2014;311(23):2422–2431. doi: 10.1001/jama.2014.5552
  15. Porcel JM, Light RW. Diagnostic approach to pleural effusion in adults. Am Fam Physician. 2006;73(7):1211–1220.
  16. Gu Y, Gu S, Lei Y, Li H. From uncertainty to anxiety: How uncertainty fuels anxiety in a process mediated by intolerance of uncertainty. Neural Plast. 2020:8866386. doi: 10.1155/2020/8866386
  17. Malterud K, Guassora AD, Reventlow S, et al. Embracing uncertainty to advance diagnosis in general practice. Br J Gen Pract. 2017;67(659):244–245. doi: 10.3399/bjgp17X690941
  18. Russek NS, Detsky AS, Quinn KL. Managing clinical uncertainty: a teachable moment. JAMA Intern Med. 2020;180(3):452–453. doi: 10.1001/jamainternmed.2019.6700
  19. Armstrong K. If you can’t beat it, join it: Uncertainty and trust in medicine. Ann Intern Med. 2018;168(11):818–819. doi: 10.7326/M18-0445

Copyright (c) 2021 Graber M.A.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
