Have you ever sat in M&M rounds, watching the same lists of biases and the same diagnostic errors being discussed over and over again? I know I have. I am often left wondering: how can we do better? The solution seems simple – learn the biases, know the biases, don’t let them affect you. So why do they remain a recurrent theme in M&M rounds? Is the solution just to slow down and think harder? I honestly thought I was already thinking pretty hard…

Plagued by all these questions, I sought out the evidence on what works and what doesn’t, using cognitive science as the lens. DISCLAIMER: The evidence in this area is limited, and there is controversy in the field. Experts often argue from polar-opposite perspectives on which approach might best reduce diagnostic error. I therefore present a synthesis of opinion and primary literature to surmise what might work, and what the (albeit limited) evidence suggests doesn’t work.

 

LEARNING OBJECTIVES 

We’ll aim to address the following objectives: 

      • Define and recognize the relevance of diagnostic errors; 
      • Understand bias, heuristics, and the concept of dual processing models of decision-making; 
      • Understand evidence regarding “de-biasing” strategies; 
      • Recognize alternatives to the heuristic/bias perspective to reducing diagnostic error; 
      • Develop a broader toolkit of evidence- or theory-based approaches to reducing diagnostic error. 

 

Diagnostic Errors

According to the Institute of Medicine1, diagnostic error is the failure to: 

      • Establish an accurate and timely explanation of the patient’s health problem(s); 
      • Communicate that explanation to the patient. 

Diagnostic Errors are: 

      • COMMON
        • Occur in an estimated 10-15% of patients2. For a typical academic emergency department, where a physician might see 10-20 patients per shift, that equates to roughly 1-3 patients every shift!
        • Errors are seen with both common and rare conditions. 
        • Cognitive mistakes can be identified in 96% of US malpractice cases related to delayed or missed diagnoses3.
      • DISTRESSING 
        • Stress for the physician 
        • Stress for the patient: the patient bears the impact of the error. Diagnostic errors carry proportionally higher morbidity than any other type of error4.
        • Litigation  
      • CHALLENGING 
        • Most errors go unrecognized 
        • Difficulty defining an error: How accurate is “accurate”? How timely is “timely”? Is it still an error if there are NO negative consequences? 
        • We still don’t have a way to accurately know what a clinician is really thinking at the time an error is made. 
        • All case reviews are subject to hindsight bias 

 

Emergency Medicine has been called a “NATURAL LABORATORY OF ERROR”5

 

Cognitive Science: Making a Diagnosis 

Physicians form a working hypothesis and differential diagnosis extremely quickly. 

      • Most EM physicians generate most of their diagnostic hypotheses within seconds of entering a room.6 
      • A first hypothesis is generated 78% of the time before even seeing the patient.7
      • A radiologist can categorize a chest X-ray as normal or abnormal in just 200 ms.8 

We have the capacity for very rapid, intuitive, and associative thought. We can often arrive at a diagnosis without having to actively think about it. 

SO HOW DO WE MAKE THESE DECISIONS?

 

1. DUAL-PROCESSING MODEL

One popular model is the dual-processing model. It is just one of several models, based on the concept of two physiologic patterns of thought, often termed System 1 and System 2. 

      • An example of a dual-process model is the “Default Interventionist” model, popularized in Kahneman’s book Thinking, Fast and Slow.9
      • System 1 is the fast, more associative thought process, whereas System 2 is conscious, effortful, and more analytical.
      • Both systems are prone to errors, though the errors differ in nature. 

      • There is a structural correlate in fMRI evidence: certain parts of the brain become metabolically active when one system or the other is engaged. This has been experimentally manipulated and demonstrated as observable physiology. 

 

2. HEURISTICS

      • A common definition is “mental shortcuts used in clinical decision-making”10
        • Though keep in mind that this is a metaphor – the mind probably doesn’t work like a roadway where A leads to B leads to C, so it may be inaccurate to call heuristics “shortcuts”. 
      • Heuristics can also be thought of as the brain’s mechanism for rapid associations and categorizations. They are part of the efficiency of the brain. 
      • You would not be able to think meaningfully without heuristics. 

 

3. BIAS

      • Bias is a pattern of deviation from the norm.
      • In the context of cognitive science, we can also think of it as the way our brain is unconsciously predisposed or having “tendencies” to certain thoughts. 
        • Croskerry has previously called them “cognitive dispositions to respond” in order to emphasize this point10.
      • Most bias is UNCONSCIOUS and UNRECOGNIZED at the time of error. 
      • Labels are typically applied retrospectively to diagnostic errors, but that recognition is itself subject to hindsight bias.
      • In medicine, bias has taken on a very negative connotation. It is bad, personal, and to be avoided. 
      • Types of Cognitive Bias: 
        • Many “comprehensive” lists have been described over the years5
        • For a review, see First10EM’s Blog on the subject.5, 11 
      • Bias ≠ Error
        • A key point to keep in mind is that bias is not the same thing as error; 
        • These terms are often used interchangeably, but the presence of a bias does not mean that you will make an error. It’s probably best to think of bias as simply influencing our thought process. 
        • For instance, anchoring bias describes the tendency to stick with your first hypothesis as the diagnosis, but the diagnosis you ultimately make depends on many other factors.
        • In fact, our tendencies are correct more often than not, in which case there is no error at all!
        • These terms have become coupled in our language, but many experts in the field make a point of de-coupling these concepts. 

 

The De-Biasing Approach

“I need to just be aware of that bias, so I can recognize it next time and avoid it”

“I think I just shut my brain off.  I didn’t stop to think”

“I should have slowed down to think”

“I should have forced myself to think through it harder”

De-biasing strategies have been promoted by many experts in the field based on the dual-processing theory of the mind. A particularly high-profile proponent is Dr. Croskerry, who has brought the concepts of cognitive science into the common language of emergency medicine. Overall, this has had a very positive effect on how we pay attention to errors and to our cognitive processes. Recognition of how bias affects our decisions has also been popularized in books like Thinking, Fast and Slow and How Doctors Think. The idea of “de-biasing” shows up in many places in the literature, in organizational recommendations, and throughout the FOAM world. 

What exactly is “de-biasing”? The terminology tends to be used fairly interchangeably, but generally combines ideas from metacognition, cognitive forcing strategies, and slowing down. 

      • METACOGNITION 
        • Thinking about thinking 
        • “Stepping back….to examine and reflect on the thinking process”10
        • Being “aware” of one’s thought process and the impact of bias, and thus intentionally avoiding it
      • COGNITIVE FORCING STRATEGIES 
        • Deliberately choosing analytic reasoning 
        • “Triggering” System 2 thought consciously
      • SLOWING DOWN 
        • Consciously thinking “slower” to apply deductive and analytic reasoning
        • Going slower to allow more reasoning through System 2 thought

Summarized another way, it’s having “the appropriate knowledge of solutions and strategic rules to substitute for a heuristic response” and the ability to override System 1 processing.12 

 

But… Show me the evidence!

Some authors have challenged the heuristic- and bias-based approach to preventing diagnostic errors. They argue that the theory rests on faulty assumptions and loose connections, and that it does not hold up against the complexities of diagnostic error in real-life contexts. Two literature reviews of methods for reducing diagnostic error found little evidence to support de-biasing strategies13,14. However, evidence remains very limited overall. The following are select studies from those reviews and from my own review of the primary literature. This is not meant to be comprehensive, but rather to highlight the nature of the current evidence:

      • Studies of cognitive forcing strategies did not show improved diagnostic accuracy
        • Training in cognitive forcing strategies did not reduce medical students’ diagnostic error.15
        • Training in debiasing techniques did not improve family medicine residents’ ability to recognize risk of bias.16
        • Cognitive debiasing checklists did not improve physician or resident diagnostic performance.17
      • Studies of slowing down did not show benefit
        • Having residents consciously slow down increased time spent on the task, but did not improve diagnostic accuracy.18
        • Neither physicians nor trainees were more accurate when going slower.19
        • Emergency physicians and residents tested had similar accuracy when subjected to intentionally rapid or slow conditions.20
        • Having trainees and faculty use a slowing-down tool did not improve diagnostic accuracy.21
      • An observational study found that accuracy was worse among those with longer response times.22
      • Studies of metacognition and deliberate reflection have been mixed
        • Conscious and deliberate reflection decreases the tendency toward availability bias in controlled settings, but generalization to real practice is unclear.23

 

“For a physician to successfully apply debiasing tactics, he or she must first be aware of common biases and their impact on cognitive error. Then the physician must detect the bias, decide to intervene, and successfully apply strategies to mitigate risk, all the while not becoming paralyzed in decision-making.” (Hartigan, 2020)

 

The question remains – Can we ACTUALLY do this?

      • Can we reliably detect biases? 
      • Can we recognize when we need to intervene? 
      • And then can we successfully apply the strategies to prevent error?

 

The evidence suggests… MAYBE not

      • We are UNRELIABLE AT IDENTIFYING OUR BIASES
        • Bias is unconscious and unrecognized at the time of the error
        • We tend to retrospectively attach bias labels to errors we have made, but this is heavily influenced by hindsight bias.
        • Retrospective labelling of biases heavily de-emphasizes the many more times that bias did not lead to error.
        • Observers do not reliably identify cognitive biases retrospectively
          • See the study by Zwaan et al. (2017) below

Zwaan et al. (2017) – The difficulty of reliably recognizing biases was creatively illustrated in this study. The authors manipulated written scenarios to have two equiprobable diagnoses (e.g., RLQ pain, where the vignette sounds equally like appendicitis or a tubo-ovarian abscess). Physicians then gave a suspected diagnosis for each case, and because the case features had been engineered as a 50-50 toss-up, the answers were indeed roughly a 50-50 split. Then, regardless of the answer provided, a conclusion to the vignette was given whose wording implied the diagnosis was either correct or incorrect. Physicians from the Society to Improve Diagnosis in Medicine (SIDM) then reviewed the cases to identify biases contributing to diagnostic error. When the diagnosis seemed wrong in retrospect, raters identified twice as many biases as when the same diagnosis seemed right – clear evidence of hindsight bias. Further, raters did not consistently or reliably identify the same cognitive biases: they couldn’t agree on which biases were present.

 

      • We are UNRELIABLE AT IDENTIFYING WHEN WE ARE WRONG 
        • Confidence in a diagnosis correlates poorly with accuracy.24,25 This holds across experience levels, from learners to experts.
        • “94% of academic doctors rate themselves as performing within the top half of their profession, and most have difficulty recalling any errors they make.” 26
        • This is partly explained by two psychological principles:
          • Blind-spot bias – Tendency to not recognize our own weaknesses or cognitive errors.
          • Dunning-Kruger effect – Tendency for unskilled individuals to overestimate their abilities. Likewise, there is a tendency for highly skilled individuals to underestimate their abilities.
        • We can’t rely on ourselves to trigger analytic thought to override unconscious processes.

 

So, do we throw away de-biasing completely?

We probably don’t have a clear enough picture to completely discount an understanding of biases or attempts at de-biasing. Proponents argue that the evidence to date is too reductionist. Of course, evidence should be taken in context, but evidence supporting de-biasing strategies has been conspicuously absent. This does not conclusively tell us these strategies don’t work – perhaps more clearly defined techniques can be tested with better results. Yet, if 9 out of 10 drug trials failed to show a benefit of a drug, you would not use that drug.

Proponents also argue that there is still value in talking about bias and in attempts to de-bias ourselves, including:

      • Discussion sensitizes clinicians to the pitfalls of decision-making
      • It offers a ‘language and logic’ for discussing diagnostic safety
      • It encourages reflective practice
      • It encourages recognition of personal limitations
      • It might improve recognition of “high-risk” situations

 

Other Approaches: What can YOU do about Diagnostic Error

Monteiro, Norman, and Sherbino (2018) took an epistemological approach to reviewing the literature on preventing diagnostic error27. This is a great way to look at a topic where opinion and language vary and consensus is lacking. Each “perspective” represents a general theme in how authors in the literature approach the topic. They identified 3 major “perspectives”:

      1. The heuristics/bias perspective
      2. The population/epidemiology perspective
      3. The exemplar/associative perspective

To their list, I have added two other broad categories for discussion:

      1. Cognitive load theory
      2. Optimizing self – Fatigue, Wellness, Emotional regulation, and Stress

These are not meant to be one-size-fits-all or mutually exclusive approaches. Instead, they are different ways of looking at the problem, and there is probably value in understanding the issue from as many sides as possible. We can choose what seems most relevant or helpful given the theory and evidence that we do have. 

 

1. HEURISTIC/BIAS PERSPECTIVE

      • This is the “de-biasing” approach covered above
      • The central assumption is that an understanding of how biases affect our heuristics will allow us to develop strategies to use conscious thought to override those biases.
      • Despite minimal supporting evidence, this is probably still the most represented of the views in organizational recommendations and popular thought.
      • Suggested approaches include,
        • Metacognition – covered above
        • Slowing down – covered above
        • Cognitive forcing strategies – covered above
        • Second opinions
          • Evidence has been minimal, but generally not very supportive13,14.
          • As explained above, we’re not very accurate at knowing when we’re making errors, so when do you choose to get a second opinion? And if you do get one, when do you discount your own thought process and follow your colleague’s suggestions? Does that just introduce new bias and risk error?
        • Diagnostic checklists
          • Externalizing the de-biasing process into a checklist of common areas where errors arise
          • Mixed results – some minor successes, mostly with complicated cases and novice learners14,17,28–31
          • The biggest issues seem to be when to use them and how feasible they are to implement. There may be a role for novice learners, for example around handover or with overnight admissions to internal medicine. 

 

2. EXPERTISE/ASSOCIATIVE

      • Considers decisions to be knowledge retrieval and categorization, with exemplars that are the basis for association. It emphasizes the importance of knowledge and experience rather than reasoning.
      • This position argues that the process of heuristics is not the problem, but rather the lack of readily accessible knowledge or experience. Instead of fighting our system 1 thought, we should improve the content of that thought by improving our expertise.
      • This view most closely parallels Naturalistic Decision Making (NDM)
      • Essentially all traditional medical training is, to some degree, based on the implicit assumption that better expertise leads to better performance.
      • Suggested approaches include improving expertise itself – better knowledge, more experience, and feedback to calibrate both (see the take-away messages below)
      • Supportive evidence?
        • When the correct diagnosis is hypothesized within the first minutes of an encounter, clinicians are right 95% of the time; when it is not, the likelihood of an eventual correct diagnosis is only 20%32. In other words, when we think of the answer we’re usually right; when we don’t think of the answer, it is very unlikely for us to get there through reasoning alone.
        • Priming with content knowledge reduces the impact of availability bias23,33
        • The use of a differential diagnosis checklist (but not a general de-biasing checklist) improved medical students’ diagnostic performance.29
        • A working hypothesis improves the recognition of important features in written cases.34–36
        • Starting with a diagnosis in mind improves diagnostic accuracy.37

 

3. POPULATION/EPIDEMIOLOGY

      • Emphasizes evidence-based medicine (EBM) and Bayes’ theorem
      • Involves estimating pre-test probability from population rates and deriving post-test probability by applying test characteristics
      • Those supporting this perspective might say, “if only more physicians understood epidemiology and likelihood ratios, we would have less misdiagnosis.”
      • Challenges with this approach
        • Access to data (e.g., what IS the true denominator for incidence of PE in Ottawa? What is the LR for every feature of my history and physical? How many cases of Ebola are there right now?)
        • Difficulty in applying Bayesian thought – we aren’t computers, so it’s hard to think like one
        • Tends to devalue intuition
        • In practice, we tend to use EBM to inform our gestalt and as a supplement, rather than as a strict basis for what we do.
      • Suggested approaches include,
        • Learning and using Bayesian thought (e.g., likelihood ratios, nomograms) – a minimal worked example follows this list
        • Evidence-based decision rules and diagnostic scores
        • Computerized diagnostic aids (i.e., Clinical decision support systems (CDSS))
          • CDSS is essentially a computerized version of the second-opinion approach – software designed to aid clinical decision-making using a computerized knowledge base and patient-specific information, with recommendations presented to the clinician to aid decisions38.
          • Some mixed evidence of effectiveness13,14,24,25,38, but where to implement and how? When to rely on them, or when to override?
          • When implemented, they are generally ignored38.
        • Artificial intelligence (AI)
          • AI can support much higher-level data processing than humans can.
            • Isabel – As just one example, Isabel is software that can pool huge volumes of population and patient data and interpret them in the context of the current presentation. It can produce a differential diagnosis with relative likelihoods. In an industry-funded study (with obvious bias concerns), Isabel’s differential list included the correct diagnosis nearly 100% of the time39.
            • Google search – One study found that Google searching by physicians in complicated cases yielded the correct diagnosis about 58% of the time!40
          • AI will almost certainly play a role in the future. The question is not whether this is coming, but how we integrate it into practice.
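
To make the Bayesian approach above concrete (as flagged in the list), here is a minimal sketch in Python of how a likelihood ratio updates a pre-test probability. The probabilities and the LR value are illustrative assumptions, not validated estimates.

    # Bayesian updating with a likelihood ratio (the odds form of Bayes' theorem).
    # All numbers below are illustrative assumptions, not validated estimates.

    def post_test_probability(pre_test_prob, likelihood_ratio):
        """Convert probability to odds, apply the LR, convert back to probability."""
        pre_test_odds = pre_test_prob / (1 - pre_test_prob)
        post_test_odds = pre_test_odds * likelihood_ratio
        return post_test_odds / (1 + post_test_odds)

    # Illustration: a hypothetical 15% pre-test probability and a positive
    # test with LR+ = 4 yields a post-test probability of roughly 41%.
    print(round(post_test_probability(0.15, 4), 2))  # 0.41

This odds form of Bayes’ theorem is exactly what a likelihood-ratio nomogram does graphically; the sketch simply makes the arithmetic explicit.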

 

4. COGNITIVE LOAD THEORY

      • This is primarily an educational theory, but it has been applied to many wider fields, including medicine.
      • An essential concept is that conscious thought relies on working memory, which can hold only about 7 ± 2 “items” for roughly 15 seconds.
      • Cognitive load theory breaks down tasks into intrinsic/extraneous/germane cognitive load.41 Using handover as an example,
        • Intrinsic load: things associated with the task (e.g., patient name, comorbidities, their vitals, the plan)
        • Extraneous load: all things non-essential to the task itself (e.g., people chatting in the background, alarms going off in a patient room, a code STEMI called overhead). It might also include the emotional stress you are carrying from home.
        • Germane load: the brain’s background processing that makes sense of all of this information. When total load is too high, this germane processing suffers, and we are unable to properly process everything (something can be missed).
      • You want to manage intrinsic load, minimize extraneous load, and maximize the germane load.

Cognitive load in the emergency department (ED)

      • ED physicians are interrupted about 10 times per hour, and two out of three interruptions result in a task change.42
      • Westbrook (2018) found task errors are associated with:43
        • frequent interruptions
        • multitasking and task-switching
        • working memory being at capacity
      • Cognitive load reduces decision-making flexibility, contributes to more simplistic reasoning processes, and may contribute to the marginalization of patient groups44.

Measuring cognitive load

      • Several measures have been developed45
      • NASA uses the NASA Task Load Index (NASA-TLX) to calculate a cognitive load index score for everything an astronaut might do. For example, every step of a spacewalk is catalogued for the load it carries, to avoid over-taxing astronauts and to minimize the chance of error.
      • In medicine, it might be used to assess workload burdens and to develop patterns or protocols that reduce that workload46,47 – a minimal sketch of the scoring is shown below.
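
For a concrete sense of how such an index is computed, here is a minimal sketch in Python of the unweighted “Raw TLX” variant. The six subscale names follow the published instrument; the ratings and the clinical framing are made-up assumptions for illustration only.

    # Raw TLX: the unweighted variant of the NASA Task Load Index.
    # Each subscale is self-rated on a 0-100 scale; the index is their mean.

    NASA_TLX_SUBSCALES = (
        "mental_demand", "physical_demand", "temporal_demand",
        "performance", "effort", "frustration",
    )

    def raw_tlx(ratings):
        """Mean of the six 0-100 subscale ratings."""
        return sum(ratings[s] for s in NASA_TLX_SUBSCALES) / len(NASA_TLX_SUBSCALES)

    # Hypothetical self-ratings after a busy resuscitation:
    ratings = {
        "mental_demand": 85, "physical_demand": 40, "temporal_demand": 90,
        "performance": 30, "effort": 80, "frustration": 60,
    }
    print(round(raw_tlx(ratings), 1))  # 64.2 -> overall workload out of 100

The full instrument adds a pairwise-comparison weighting step across the six subscales; the simple mean shown here is the simplified variant commonly used in research.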

How to address your cognitive load?

Evidence is beginning to emerge around cognitive load theory in medicine, but practice-changing suggestions remain almost entirely theory- and expert-based. Below are some suggestions for how cognitive load might be addressed on shift in the ED. For other reviews and suggestions, see Cognitive load (emdocs.net) and Cognitive load theory (Life in the Fast Lane).48,49

Cognitive Load and Diagnostic Error

 

      1. Offload tasks. This minimizes the amount of intrinsic load you are dealing with. Rely on your team members where you can – to get things off your plate. Call for help when you need it, and likewise, help others when they need it.
      2. Write more, remember less. Use patient lists and to-do lists. Write out your differential and you won’t forget it. Write your plan if it isn’t straightforward. Again, this offloads your working memory to do other things.
      3. Look stuff up. We have such ready access to information that relying on memory alone can be a liability.
      4. Avoid task switching. Switching tasks burdens working memory. Where possible, complete tasks one at a time so you can get them off your plate. Only touch the task once. By batching similar tasks, you can further minimize the effect of task-switching by combining similar intrinsic loads.
      5. Avoid decision density. Don’t procrastinate on decisions or leave all of your dispositions until the end of your shift, because you’ll end up overloading yourself later on. Flatten the decision curve.
      6. Personal algorithms. Plan your routine practice decisions ahead of time. Where possible, decide how you will respond to certain patterns/problems with routine and reflexive algorithms. “When X happens, I will do Y”. Patterns of practice that you are familiar with will reduce the cognitive burden of the decision – you’ve already made the decision, you’re just enacting it today.
      7. Close the loop. Close the loop on orders, consults, patient plans – etc. Don’t leave yourself wondering if it’s going to get done.
      8. Manage interruptions. We can’t truly prevent interruptions (they are beyond our control), but we can control how and when we receive them to some extent. This minimizes extraneous load building at inopportune times.
      9. Plan quiet time. Make explicit times to be quiet when you need quiet – e.g., handover, thinking through a challenging case, batch-charting. Maybe this means taking a few minutes to put out fires before handover. The goal is to minimize extraneous load.

 

5. OPTIMIZING SELF – FATIGUE, WELLNESS, EMOTIONAL REGULATION, AND STRESS

      • These are different concepts, but I’ve lumped them together here because there is a general theme: optimizing yourself.
      • This was not a focus of this talk, but the impacts of fatigue, wellness, and emotion should hopefully be self-evident and are well-covered elsewhere.
      • Recognize that these concepts apply to other physiologic needs as well – if you haven’t eaten or peed all shift, it is certainly affecting your decision-making! These are variables I personally don’t want to leave to chance. 
      • Suggested approaches,
        • Address fatigue
        • Address personal wellness
        • Consider strategies for emotional regulation
        • Stress inoculation training

 

Don’t discount or neglect this! Your patients deserve YOUR BEST.

Take Away Messages: Minimizing Diagnostic Errors

DISCLAIMER again: Evidence on reducing diagnostic errors is surprisingly limited and fraught with controversy. Here I have presented a review of the various perspectives and approaches taken in the literature. These are my suggestions based on a review of the primary evidence, discussions with experts, personal experience, and a deep dive into the literature. The intent is that you take a more comprehensive approach to reducing your own errors, and to reflecting on errors you have made at your next M&M rounds.

What NOT to do

      • When you make an error, don’t just label bias.
      • Don’t focus on keeping a list of biases “in mind” in an effort to consciously try hard not to make the same error again.
      • The evidence suggests this won’t work. In fact, it might be counter-productive and even harmful.

What TO DO instead

      • Make a learning plan
        • Improve the quality of your heuristics with better knowledge and expertise
        • Attend departmental Grand Rounds, M&M rounds, and case rounds
        • Value reflective practice to calibrate your expertise.
        • Seek feedback on patient cases through deliberate follow-up systems, conversations with consultants, feedback from patients, etc.
      • Use evidence when you can
        • Population incidence rates, clinical decision tools, evidence-based medicine
      • Address your cognitive load
        • Optimize cognitive load (e.g., manage interruptions, handover, task-switching)
        • Make safe practice routine – handover tools, checklists, routinely listing a differential
      • Maintain your peak performance
        • Address fatigue – sleep hygiene, scheduling, screen-time, etc.
        • Address your wellness
        • Develop strategies for emotional and stress regulation

COMING ONE DAY/UNCLEAR EVIDENCE

      • Diagnostic checklists – may have a role at certain high-risk moments where there is low time pressure (e.g., admissions, handover)
      • Clinical decision support systems – may be helpful, but are highly dependent on the implementation
      • Artificial intelligence
      • Personalized task-load analysis
      • Stress inoculation training

 

References

  1. Committee on Diagnostic Error in Health Care. Improving Diagnosis in Health Care. (Balogh EP, Miller BT, Ball JR, eds.). National Academies Press; 2015. doi:10.17226/21794
  2. Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165(13):1493-1499. doi:10.1001/archinte.165.13.1493
  3. Kachalia A, Gandhi TK, Puopolo AL, et al. Missed and delayed diagnoses in the emergency department: A study of closed malpractice claims from 4 liability insurers. Ann Emerg Med. 2007;49(2):196-205. doi:10.1016/j.annemergmed.2006.06.035
  4. Brennan TA, Leape LL, Laird NM, et al. Incidence of adverse events and negligence in hospitalized patients. Results of the Harvard Medical Practice Study I. N Engl J Med. 1991;324(6):370-376. doi:10.1056/NEJM199102073240604
  5. Croskerry P. ED cognition: Any decision by anyone at any time. CJEM. 2014;16(1):13-19. doi:10.2310/8000.2013.131053
  6. Pelaccia T, Tardif J, Triby E, et al. How and when do expert emergency physicians generate and evaluate diagnostic hypotheses? A qualitative study using head-mounted video cued-recall interviews. Ann Emerg Med. 2014;64(6):575-585. doi:10.1016/j.annemergmed.2014.05.003
  7. Gruppen LD, Woolliscroft JO, Wolf FM. The contribution of different components of the clinical encounter in generating and eliminating diagnostic hypotheses. Res Med Educ. 1988;27:242-247.
  8. Kundel HL, Nodine CF, Carmody D. Visual scanning, pattern recognition and decision-making in pulmonary nodule detection. Invest Radiol. 1978;13(3):175-181. doi:10.1097/00004424-197805000-00001
  9. Kahneman D. Thinking, Fast and Slow. Farrar, Straus and Giroux; 2011.
  10. Croskerry P. Diagnostic failure: A cognitive and affective approach. In: Henriksen K, Battles JB, Marks ES, Lewin DI, eds. Advances in Patient Safety: From Research to Implementation. Agency for Healthcare Research and Quality; 2005.
  11. Morgenstern J. Cognitive errors in medicine: The common errors. First10EM blog. https://first10em.com/performance-under-pressure/
  12. Hartigan S, Brooks M, Hartley S, Miller RE, Santen SA, Hemphill RR. Review of the Basics of Cognitive Error in Emergency Medicine: Still No Easy Answers. West J Emerg Med. 2020;21(6):125-131. doi:10.5811/westjem.2020.7.47832
  13. Graber ML, Kissam S, Payne VL, et al. Cognitive interventions to reduce diagnostic error: A narrative review. BMJ Qual Saf. 2012;21(7):535-557. doi:10.1136/bmjqs-2011-000149
  14. Lambe KA, O’Reilly G, Kelly BD, Curristan S. Dual-process cognitive interventions to enhance diagnostic reasoning: A systematic review. BMJ Qual Saf. 2016;25(10):808-820. doi:10.1136/bmjqs-2015-004417
  15. Sherbino J, Kulasegaram K, Howey E, Norman G. Ineffectiveness of cognitive forcing strategies to reduce biases in diagnostic reasoning: a controlled trial. CJEM. 2014;16(1):34-40. doi:10.2310/8000.2013.130860
  16. Smith BW, Slack MB. The effect of cognitive debiasing training among family medicine residents. Diagnosis (Berl). 2015;2(2):117-121. doi:10.1515/dx-2015-0007
  17. Sibbald M, Sherbino J, Ilgen JS, et al. Debiasing versus knowledge retrieval checklists to reduce diagnostic error in ECG interpretation. Adv Health Sci Educ. 2019;24(3):427-440. doi:10.1007/s10459-019-09875-8
  18. Norman G, Sherbino J, Dore K, et al. The etiology of diagnostic errors: A controlled trial of system 1 versus system 2 reasoning. Acad Med. 2014;89(2):277-284. doi:10.1097/ACM.0000000000000105
  19. Ilgen JS, Bowen JL, McIntyre LA, et al. Comparing diagnostic performance and the utility of clinical vignette-based assessment under testing conditions designed to encourage either automatic or analytic thought. Acad Med. 2013;88(10):1545-1551. doi:10.1097/ACM.0b013e3182a31c1e
  20. Monteiro SD, Sherbino JD, Ilgen JS, et al. Disrupting diagnostic reasoning: Do interruptions, instructions, and experience affect the diagnostic accuracy and response time of residents and emergency physicians? Acad Med. 2015;90(4):511-517. doi:10.1097/ACM.0000000000000614
  21. O’Sullivan ED, Schofield SJ. A cognitive forcing tool to mitigate cognitive bias – a randomised control trial. BMC Med Educ. 2019;19(1):12. doi:10.1186/s12909-018-1444-3
  22. Sherbino J, Dore KL, Wood TJ, et al. The relationship between response time and diagnostic accuracy. Acad Med. 2012;87(6):785-791. doi:10.1097/ACM.0b013e318253acbd
  23. Mamede S, de Carvalho-Filho MA, de Faria RMD, et al. “Immunising” physicians against availability bias in diagnostic reasoning: A randomised controlled experiment. BMJ Qual Saf. 2020;29(7):550-559. doi:10.1136/bmjqs-2019-010079
  24. Friedman C, Gatti G, Elstein A, Franz T, Murphy G, Wolf F. Are clinicians correct when they believe they are correct? Implications for medical decision support. Stud Health Technol Inform. 2001;84(Pt 1):454-458.
  25. Friedman CP, Gatti GG, Franz TM, et al. Do physicians know when their diagnoses are correct? Implications for decision support and error reduction. J Gen Intern Med. 2005;20(4):334-339. doi:10.1111/j.1525-1497.2005.30145.x
  26. Mele AR. Real self-deception. Behav Brain Sci. 1997;20(1):36-91. doi:10.1017/s0140525x97000034
  27. Monteiro S, Norman G, Sherbino J. The 3 faces of clinical reasoning: Epistemological explorations of disparate error reduction strategies. J Eval Clin Pract. 2018;24(3):666-673. doi:10.1111/jep.12907
  28. Sibbald M, de Bruin ABH, van Merrienboer JJG. Checklists improve experts’ diagnostic decisions. Med Educ. 2013;47(3):301-308. doi:10.1111/medu.12080
  29. Shimizu T, Matsumoto K, Tokuda Y. Effects of the use of differential diagnosis checklist and general de-biasing checklist on diagnostic performance in comparison to intuitive diagnosis. Med Teach. 2013;35(6):e1218-29. doi:10.3109/0142159X.2012.742493
  30. Chew KS, Durning SJ, van Merriënboer JJ. Teaching metacognition in clinical decision-making using a novel mnemonic checklist: An exploratory study. Singapore Med J. 2016;57(12):694-700. doi:10.11622/smedj.2016015
  31. Graber ML, Sorensen AV, Biswas J, et al. Developing checklists to prevent diagnostic error in Emergency Room settings. Diagnosis (Berl). 2014;1(3):223-231. doi:10.1515/dx-2014-0019
  32. Barrows HS, Norman GR, Neufeld VR, Feightner JW. The clinical reasoning of randomly selected physicians in general medical practice. Clin Investig Med. 1982;5(1):49-55.
  33. Mamede S, Goeijenbier M, Schuit SCE, et al. Specific Disease Knowledge as Predictor of Susceptibility to Availability Bias in Diagnostic Reasoning: A Randomized Controlled Experiment. J Gen Intern Med. 2021;36(3):640-646. doi:10.1007/s11606-020-06182-6
  34. Brooks LR, LeBlanc VR, Norman GR. On the difficulty of noticing obvious features in patient appearance. Psychol Sci. 2000;11(2):112-117. doi:10.1111/1467-9280.00225
  35. Leblanc VR, Brooks LR, Norman GR. Believing is seeing: The influence of a diagnostic hypothesis on the interpretation of clinical features. Acad Med. 2002;77(10 Suppl):S67-9. doi:10.1097/00001888-200210001-00022
  36. Leblanc VR, Norman GR, Brooks LR. Effect of a diagnostic suggestion on diagnostic accuracy and identification of clinical features. Acad Med. 2001;76(10 Suppl):S18-20. doi:10.1097/00001888-200110001-00007
  37. Norman GR, Brooks LR, Colle CL, Hatala RM. The Benefit of Diagnostic Hypotheses in Clinical Reasoning: Experimental Study of an Instructional Intervention for Forward and Backward Reasoning. Cogn Instr. 1999;17(4):433-448. doi:10.1207/S1532690XCI1704_3
  38. Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: Benefits, risks, and strategies for success. NPJ Digit Med. 2020;3:17. doi:10.1038/s41746-020-0221-y
  39. Riches N, Panagioti M, Alam R, et al. The Effectiveness of Electronic Differential Diagnoses (DDX) Generators: A Systematic Review and Meta-Analysis. PLoS One. 2016;11(3):e0148991. doi:10.1371/journal.pone.0148991
  40. Tang H, Ng JHK. Googling for a diagnosis–use of Google as a diagnostic aid: internet based study. BMJ. 2006;333(7579):1143-1145. doi:10.1136/bmj.39003.640567.AE
  41. Szulewski A, Howes D, van Merriënboer JJG, Sweller J. From Theory to Practice: The Application of Cognitive Load Theory to the Practice of Medicine. Acad Med. 2021;96(1):24-30. doi:10.1097/ACM.0000000000003524
  42. Chisholm CD, Collison EK, Nelson DR, Cordell WH. Emergency department workplace interruptions: Are emergency physicians “interrupt-driven” and “multitasking”? Acad Emerg Med. 2000;7(11):1239-1243. doi:10.1111/j.1553-2712.2000.tb00469.x
  43. Westbrook JI, Raban MZ, Walter SR, Douglas H. Task errors by emergency physicians are associated with interruptions, multitasking, fatigue and working memory capacity: A prospective, direct observation study. BMJ Qual Saf. 2018;27(8):655-663. doi:10.1136/bmjqs-2017-007333
  44. Burgess DJ. Are providers more likely to contribute to healthcare disparities under high levels of cognitive load? How features of the healthcare setting may lead to biases in medical decision making. Med Decis Mak. 2010;30(2):246-257. doi:10.1177/0272989X09341751
  45. Naismith LM, Cavalcanti RB. Validity of Cognitive Load Measures in Simulation-Based Training: A Systematic Review. Acad Med. 2015;90(11 Suppl):S24-35. doi:10.1097/ACM.0000000000000893
  46. Asselin N, Choi B, Pettit CC, et al. Comparative Analysis of Emergency Medical Service Provider Workload During Simulated Out-of-Hospital Cardiac Arrest Resuscitation Using Standard Versus Experimental Protocols and Equipment. Simul Healthc. 2018;13(6):376-386. doi:10.1097/SIH.0000000000000339
  47. Prottengeier J, Petzoldt M, Jess N, et al. The effect of a standardised source of divided attention in airway management: A randomised, crossover, interventional manikin study. Eur J Anaesthesiol. 2016;33(3):195-203. doi:10.1097/EJA.0000000000000315
  48. O’Shea J. Cognitive load and the emergency physician. emdocs.net. Published 2016. http://www.emdocs.net/cognitiveload/
  49. Nickson C. Cognitive load theory. Life in the Fast Lane blog. Published 2020. https://litfl.com/cognitive-load-theory/

Authors

  • Mark McKinney

    Dr. Mark McKinney is a resident at the University of Ottawa, in the Department of Emergency Medicine.

  • Josee Malette

    Dr. Josée Malette is an Emergency Medicine Resident in the Department of Emergency Medicine, University of Ottawa. She is a Senior Editor with the Digital Scholarship and Knowledge Dissemination team for the EMOttawaBlog. Her interests involve critical care in low resource settings, medical education, rural medicine and prehospital medicine.