October 21, 2011
In a recent study of high-severity patient injury cases, the underlying cause was almost nine times as likely to be diagnostic error as medication error. Furthermore, the injuries caused by diagnostic errors cost more than all other error categories combined. So I chose this topic both as generally important and as a timely contribution to Healthcare Quality Week 2011 (for Twitter users, #hqw11).
AHRQ’s patient safety indicators, JCAHO’s sentinel event registry and IHI’s Global Trigger Tool don’t even include a category for diagnostic error. Malpractice and autopsy cases may not generalize well, but they are the source of most of the published data. In an AHRQ-funded study of physician-reported diagnostic errors, 28% of the errors were rated as major, resulting in patient death, permanent disability, or a near-life-threatening event.
Detection of diagnostic errors is difficult, but estimates of their prevalence generally range from 5% to 15%, depending on specialty. Causes of diagnostic errors include both systemic factors (poor training, poorly defined procedures, etc.) and cognitive factors (premature closure, representativeness bias, confirmation bias, etc.). I will write another post that discusses cognitive errors in more detail.
What does all of this have to do with systems engineering? First, systems engineers understand that the kinds of cognitive errors that contribute to misdiagnoses are themselves errors of systems design. Extending the argument Donald Norman makes in his classic book The Design of Everyday Things, predictable human errors in using machines or objects are really design flaws. Norman’s argument applies equally well to systems. Second, we should design systems to reduce or eliminate errors. Of course, the current “system” in which physicians work is, for the most part, not explicitly designed, which is a large part of the problem.
If we want to design a system to reduce errors in medical diagnosis, here are some recommendations:
- improve feedback. In some cases this involves creating feedback loops where none exist. Many diagnostic errors occur in physician offices, and if the error is detected later in the treatment stream, the physician who made the error may never know. The increasing use of EHRs provides great opportunity here.
- track and analyze diagnostic errors. The information we have thus far about these errors and their causes may be skewed, and as we learn more, we can design our systems to mitigate or eliminate the causes.
- in some cases, elimination of error is not possible. There may be tradeoffs between increased accuracy and delay in diagnosis, or between false positives and false negatives. In these cases we should use our industrial engineering tools to try to find the optimal tradeoffs.
- technical support systems to correct for cognitive biases (such as premature closure, anchoring, and the availability bias) and to lessen reliance on the memory of individual physicians. These technical support systems can include information about base rates and about temporally and geographically localized conditional probabilities, alternative diagnoses that should be considered, and perhaps even automated reading of radiological scans. The particular tools chosen should focus on the diagnostic errors with the greatest frequency and impact on patient outcomes.
- in cases where technical support systems are not an option, having review and input from other physicians can significantly reduce cognitive errors — especially confirmation bias.
- improve communication among everyone involved: physicians, nurses, laboratories, patients, etc.
- in repetitive tasks, attentiveness declines as a function of the number of repetitions — so we should design work patterns and schedules to take this into account.
- educate physicians about cognitive biases and about best practices (as they evolve) in decision-making under uncertainty.
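To make the base-rate point above concrete, here is a minimal sketch of the arithmetic a decision-support tool could surface to a physician: Bayes’ theorem applied to a test result. All of the numbers (sensitivity, specificity, prevalence) are hypothetical illustrations, not data from the studies cited here.

```python
# Minimal sketch of the base-rate reasoning a decision-support tool
# could surface. All numbers below are hypothetical illustrations.

def post_test_probability(prevalence, sensitivity, specificity):
    """Probability of disease given a positive test (Bayes' theorem)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A fairly accurate test (95% sensitive, 95% specific) applied to a
# rare condition (prevalence 1 in 1,000):
p_rare = post_test_probability(prevalence=0.001, sensitivity=0.95, specificity=0.95)
print(f"{p_rare:.1%}")    # ≈ 1.9% — most positives are false positives

# The same test where the condition is locally common (prevalence 10%):
p_common = post_test_probability(prevalence=0.10, sensitivity=0.95, specificity=0.95)
print(f"{p_common:.1%}")  # ≈ 67.9%
```

This is exactly why localized prevalence information matters: the identical test result carries very different weight in the two settings, and unaided intuition tends to ignore the base rate.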
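The tradeoff point can also be sketched in the industrial-engineering style suggested above: if we can attach (assumed) costs to false negatives and false positives, choosing a test threshold becomes an expected-cost minimization. The operating points and costs below are invented for illustration only.

```python
# Hypothetical sketch of finding an "optimal tradeoff": pick the test
# operating point that minimizes expected misclassification cost.
# All operating points, costs, and prevalence values are assumptions.

def expected_cost(sensitivity, specificity, prevalence, cost_fn, cost_fp):
    """Expected per-patient cost of misclassification."""
    misses = prevalence * (1 - sensitivity) * cost_fn         # false negatives
    alarms = (1 - prevalence) * (1 - specificity) * cost_fp   # false positives
    return misses + alarms

# An assumed operating curve: loosening the threshold trades
# specificity away for sensitivity.
operating_points = [(0.80, 0.99), (0.90, 0.95), (0.95, 0.90), (0.99, 0.70)]

best = min(
    operating_points,
    key=lambda sp: expected_cost(sp[0], sp[1],
                                 prevalence=0.05, cost_fn=100.0, cost_fp=1.0),
)
print(best)  # → (0.99, 0.7): when a miss is 100x costlier, favor sensitivity
```

Change the cost ratio and the optimum shifts; the point is that the tradeoff can be made explicit and analyzed, rather than left implicit in individual habit.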
The measures of “correctness” should vary depending on the stage of the treatment stream. As Mark Graber et al. point out, early in the stream the best measure would be inclusion of all clinically significant diagnoses, not simply inclusion of the correct diagnosis.
The ultimate question is whether we have the will to tackle the problem of diagnostic error. There are many reasons for its relative neglect thus far, but Thomas et al., in a recent article in the Archives of Internal Medicine, suggest that business motives may be the underlying factor:
Finally, though, we should ask whether the health care system will support interventions to reduce diagnostic errors. It has not done so thus far. It would appear that the health care system tolerates some background rate of errors, so long as practitioners or hospitals are not wild outliers. There is little business rationale for improving diagnosis because most of the costs of diagnostic AEs are never uncovered and are absorbed quietly by payers. Money is made in health care by moving forward with ever more costly interventions, not by looking back at errors that could have been avoided.
The authors go on to suggest that this might change with the advent of ACOs and a focus on population health. Any reform structure which includes accountability for health outcomes will have an incentive to analyze and reduce diagnostic error. (Systems engineers should favor such structures anyway, because health outcomes should be the clear purpose of health care.) So is there reason to be optimistic? I welcome your comments.