Patient safety: What if you can’t speak up?

Everyone in healthcare has stories of being silenced with a look or comment.  You need clarification, or think something might be wrong, and when you try to tell the physician, s/he snaps at you, or is sarcastic, or condescending.  In some cases, the physician may even be verbally abusive or threatening.  You quickly learn, at least with that physician, to keep quiet.   This can happen in any communication where there is a strong power differential:  between nurse or doctor and patient, between senior and junior healthcare professionals, between managers and their staff, etc. 

Still worse, many people have pointed out this situation to their manager or someone else in authority, only to be told to just deal with it, or to be labeled a complainer.  Or the manager may respond attentively and with concern, but then nothing changes.

All of these situations work to stifle the concerns healthcare professionals have about the quality and safety of patient care.  This is demoralizing: the message is that your perspective is unimportant, that you are not worth listening to.  You come to understand that the organization in which you work is not committed to quality and safety, but instead often chooses to protect the egos of powerful people (like physicians who are not employees but control a large revenue stream) over the safety of patients (and of other healthcare providers).  You are caught in a double bind, and feel increasing stress and frustration.

Yes, physicians are under tremendous pressure — but that’s when they are more likely to make mistakes, and more likely to need the situational awareness of the others on the care team to prevent these mistakes from harming the patient.  In a complex adaptive system such as healthcare delivery, where error-proofing is often not possible, we need to rely on the dynamic awareness of everyone involved in patient care to catch mistakes before they cause harm.  

Most physicians, and most managers and administrators, do not habitually intimidate people or dismiss their concerns.  But anyone can be under so much stress that s/he occasionally responds this way.

According to The Joint Commission, in Sentinel Event Alert Issue 40,

Intimidating and disruptive behaviors can foster medical errors, contribute to poor patient satisfaction and to preventable adverse outcomes, increase the cost of care, and cause qualified clinicians, administrators and managers to seek new positions in more professional environments.  Safety and quality of patient care is dependent on teamwork, communication, and a collaborative work environment. To assure quality and to promote a culture of safety, health care organizations must address the problem of behaviors that threaten the performance of the health care team.

Intimidating and disruptive behaviors include overt actions such as verbal outbursts and physical threats, as well as passive activities such as refusing to perform assigned tasks or quietly exhibiting uncooperative attitudes during routine activities. Intimidating and disruptive behaviors are often manifested by health care professionals in positions of power. Such behaviors include reluctance or refusal to answer questions, return phone calls or pages; condescending language or voice intonation; and impatience with questions. Overt and passive behaviors undermine team effectiveness and can compromise the safety of patients.  All intimidating and disruptive behaviors are unprofessional and should not be tolerated.
. . .
Several surveys have found that most care providers have experienced or witnessed intimidating or disruptive behaviors.
. . .
Organizations that fail to address unprofessional behavior through formal systems are indirectly promoting it.

Lucian Leape, in an article about problematic physician behavior, notes that:  “Performance failures of one type or another are not uncommon among physicians, posing substantial threats to patient welfare and safety. Few hospitals manage these situations promptly or well.” 

So what can we do?

At the organizational level, the Joint Commission has already required that hospitals set behavior standards and implement a process for managing disruptive and inappropriate behaviors.  It has also made a number of further recommendations, including training, contractual provisions, non-retaliation, and leaders modeling appropriate behavior.   

At the individual level, Maxwell et al. use positive deviance to identify what enables some people to speak up successfully in spite of an environment that is less than fully supportive.  Their strategies include: working behind the scenes when the situation is not urgent; avoiding anything that provokes a defensive reaction; keeping your own frustration and anger in check; and explaining your intention to help the caregiver as well as the patient.

As the organizational culture begins to change, scripted interventions may be very helpful.  In one hospital, “Doctor, I have a concern…” was used as a signal that something might be going wrong and a request for the physician’s attention.  If that didn’t work, the same person would follow up with, “Doctor, I have a patient safety concern.”  Regardless of whether the concern turns out to be well founded or misguided, the physician or other caregiver being questioned should always thank the person who raised it, in order to encourage this behavior in the future.  (This can be very hard to do when the concern is unfounded and the caregiver is irritated, but it is crucial to remember that this is how we prevent our mistakes from causing harm.)

In your organization, do all staff members have the skills they need to speak up tactfully?  Will they be encouraged to speak up, and supported when they do so?  Are the behavioral standards applied equally to everyone in the organization?  “Respect for people” is a pillar of the Lean approach.  Are hospitals that have adopted Lean more likely to encourage and support speaking up? 

Diagnostic errors: a challenge for systems engineering

In a recent study of high-severity patient injury cases, the underlying cause was almost nine times as likely to be diagnostic error as medication error.  Furthermore, the injuries caused by diagnostic errors cost more than all other error categories combined.  So I chose this topic as both generally important and a timely contribution to Healthcare Quality Week 2011 (for Twitter users, #hqw11).

AHRQ’s patient safety indicators, JCAHO’s sentinel event registry, and IHI’s Global Trigger Tool don’t even include a category for diagnostic error.  Malpractice and autopsy cases may not generalize well, but they are the source of most of the published data.  In an AHRQ-funded study of physician-reported diagnostic errors, 28% of the errors were rated as major, resulting in patient death, permanent disability, or a near-life-threatening event.

Detection of diagnostic errors is difficult, but estimates of their prevalence generally range from 5% to 15%, depending on specialty.  Causes of diagnostic errors include both systemic factors (poor training, poorly defined procedures, etc.) and cognitive factors (premature closure, representativeness bias, confirmation bias, etc.).  I will write another post that discusses cognitive errors in more detail.

What does all of this have to do with systems engineering?  First, systems engineers understand that the kinds of cognitive errors that contribute to misdiagnoses are themselves errors of systems design.  Extending the argument Donald Norman makes in his classic book The Design of Everyday Things: predictable human errors in using machines or objects are really design flaws, and Norman’s argument applies equally well to systems.  Second, we should design systems to reduce or eliminate errors.  Of course, the current “system” in which physicians work is, for the most part, not explicitly designed, which is a large part of the problem.

If we want to design a system to reduce errors in medical diagnosis, here are some recommendations: 

  • improve feedback.  In some cases this involves creating feedback loops where none exist.  Many diagnostic errors occur in physicians’ offices, and if the error is detected later in the treatment stream, the physician who made the error may never know.  The increasing use of EHRs offers a great opportunity here.
  • track and analyze diagnostic errors.  The information we have thus far about these errors and their causes may be skewed, and as we learn more, we can design our systems to mitigate or eliminate the causes (a simple tally-and-rank sketch follows this list).
  • in some cases, elimination of error is not possible.  There may be tradeoffs between increased accuracy and delay in diagnosis, or between false positives and false negatives.  In these cases we should use our industrial engineering tools to find the optimal tradeoffs (see the cost-threshold sketch after this list).
  • build technical support systems to correct for cognitive biases (such as premature closure, anchoring, and the availability bias) and to lessen reliance on the memory of individual physicians.  These support systems can include information about base rates and about temporally and geographically localized conditional probabilities, alternative diagnoses that should be considered, and perhaps even automated reading of radiological scans (a Bayes-rule sketch follows this list).  The particular tools chosen should focus on the diagnostic errors with the greatest frequency and impact on patient outcomes.
  • in cases where technical support systems are not an option, having review and input from other physicians can significantly reduce cognitive errors — especially confirmation bias. 
  • improve communication among everyone involved:  physicians, nurses, laboratories, patients, etc. 
  • in repetitive tasks, attentiveness declines as a function of the number of repetitions — so we should design work patterns and schedules to take this into account. 
  • educate physicians about cognitive biases and about best practices (as they evolve) in decisionmaking under uncertainty.
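To make the tracking recommendation concrete, here is a minimal sketch in Python of the tally-and-rank analysis mentioned above.  The reports, cause categories, and counts are all invented for illustration, not drawn from any study.

```python
from collections import Counter

# Hypothetical error reports, each tagged with one or more contributing causes.
# In practice these would come from an incident-reporting system or chart review.
reports = [
    ["premature closure"],
    ["confirmation bias", "poor handoff"],
    ["premature closure", "missing lab result"],
    ["missing lab result"],
    ["premature closure"],
]

# Tally the causes and rank them, so that improvement effort targets the few
# causes that account for most of the errors (a simple Pareto analysis).
tally = Counter(cause for report in reports for cause in report)
total = sum(tally.values())

cumulative = 0
for cause, count in tally.most_common():
    cumulative += count
    print(f"{cause:<20} {count:>2}  cumulative {cumulative / total:.0%}")
```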
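For the tradeoff item, one standard industrial engineering framing is threshold selection: choose the operating point that minimizes expected cost.  The sketch below does this by brute force; the false-positive and false-negative costs and the scored patient list are made-up placeholders, not clinical estimates.

```python
# Choose a decision threshold on a diagnostic score to minimize expected cost.
# All numbers here are illustrative placeholders, not clinical estimates.

def expected_cost(threshold, patients, cost_fp=1_000, cost_fn=50_000):
    """Expected cost per patient if we work up everyone whose score >= threshold.

    patients: list of (score, has_disease) pairs.
    cost_fp:  cost of an unnecessary workup (false positive).
    cost_fn:  cost of a missed diagnosis (false negative).
    """
    cost = 0
    for score, has_disease in patients:
        flagged = score >= threshold
        if flagged and not has_disease:
            cost += cost_fp
        elif not flagged and has_disease:
            cost += cost_fn
    return cost / len(patients)

# Hypothetical patients: (diagnostic score in [0, 1], true disease status).
patients = [(0.10, False), (0.30, False), (0.45, True), (0.60, False),
            (0.70, True), (0.85, True), (0.90, True)]

# Only the observed scores need to be tried as candidate thresholds.
candidates = [score for score, _ in patients]
best = min(candidates, key=lambda t: expected_cost(t, patients))
print(f"best threshold: {best}, expected cost: {expected_cost(best, patients):.0f}")
```

Because missing a diagnosis is assumed to cost far more than an unnecessary workup, the optimal threshold lands low, tolerating false positives to avoid false negatives; changing the cost ratio moves the operating point.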
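And for the decision-support item, the core calculation such a tool would perform with base rates is Bayes’ rule.  A minimal sketch, assuming illustrative sensitivity, specificity, and prevalence values rather than real clinical figures:

```python
def posterior_probability(prevalence, sensitivity, specificity):
    """P(disease | positive test) by Bayes' rule."""
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1 - specificity
    p_pos = (prevalence * p_pos_given_disease
             + (1 - prevalence) * p_pos_given_healthy)
    return prevalence * p_pos_given_disease / p_pos

# Illustrative numbers only: a test with 90% sensitivity and 95% specificity.
# The local base rate matters enormously: the same positive result means very
# different things in a low-prevalence screening clinic vs. a referral center.
for prevalence in (0.001, 0.05, 0.30):
    p = posterior_probability(prevalence, sensitivity=0.90, specificity=0.95)
    print(f"prevalence {prevalence:>6.1%} -> P(disease | +) = {p:.1%}")
```

The output makes the base-rate point vividly: at a 0.1% prevalence even this good test leaves the posterior probability below 2%, while at a 30% prevalence the same positive result pushes it close to 90%.  This is exactly the calculation that cognitive biases like representativeness tend to short-circuit.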

The measures of “correctness” should vary depending on the stage of the treatment stream.  As Mark Graber et al. point out, early in the stream the best measure would be inclusion of all clinically significant diagnoses, not simply inclusion of the correct diagnosis.  
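As a rough illustration of that early-stage measure, the sketch below scores a working differential by its coverage of the clinically significant diagnoses rather than by whether it happens to contain the single final diagnosis.  The diagnosis lists are invented for the example.

```python
def differential_coverage(differential, significant):
    """Fraction of clinically significant diagnoses included in the working
    differential -- a better early-stage measure than 'contains the final
    diagnosis', following Graber et al.'s point."""
    considered = {d.lower() for d in differential}
    missed = [d for d in significant if d.lower() not in considered]
    coverage = 1 - len(missed) / len(significant)
    return coverage, missed

# Invented example: an early chest-pain workup.
working = ["acute coronary syndrome", "GERD", "musculoskeletal pain"]
must_consider = ["acute coronary syndrome", "pulmonary embolism",
                 "aortic dissection", "GERD"]

coverage, missed = differential_coverage(working, must_consider)
print(f"coverage: {coverage:.0%}, missed: {missed}")
```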

The ultimate question is whether we have the will to tackle the problem of diagnostic error.  There are many reasons for its relative neglect thus far, but Thomas et al., in a recent article in the Archives of Internal Medicine, suggest that business motives may be the underlying factor:

Finally, though, we should ask whether the health care system will support interventions to reduce diagnostic errors. It has not done so thus far. It would appear that the health care system tolerates some background rate of errors, so long as practitioners or hospitals are not wild outliers.  There is little business rationale for improving diagnosis because most of the costs of diagnostic AEs are never uncovered and are absorbed quietly by payers. Money is made in health care by moving forward with ever more costly interventions, not by looking back at errors that could have been avoided.

The authors go on to suggest that this might change with the advent of ACOs and a focus on population health.  Any reform structure which includes accountability for health outcomes will have an incentive to analyze and reduce diagnostic error.  (Systems engineers should favor such structures anyway, because health outcomes should be the clear purpose of health care.)  So is there reason to be optimistic?  I welcome your comments.

Field Note: Infection Prevention in Outpatient Settings

By Alina Hsu

The CDC recently published a new infection prevention guide and checklist specifically for health care providers in outpatient care settings such as endoscopy clinics, surgery centers, primary care offices, and pain management clinics.

According to the CDC, more than three-quarters of all the operations in the United States are performed at outpatient facilities.  “Patients deserve the same basic levels of protection in a hospital or any other healthcare setting,” said Michael Bell, M.D., deputy director of CDC’s Division of Healthcare Quality Promotion.  “Failure to follow standard precautions … cannot be tolerated.”

The guidelines recommend specific procedures or practices in the following areas:  administrative, educational, HAI surveillance/reporting, hand hygiene, use of personal protective equipment, injection safety, environmental cleaning, sterilization of medical equipment, respiratory hygiene, and triage of potentially infectious patients.

In hospitals, implementation of infection control programs lags far behind our knowledge of best practices.  The missing element appears to be the sustained will and commitment of organizational leaders.  Certainly, if the commitment is there, industrial engineers and other process improvement professionals can help with the execution.

What physicians don’t know about the risks of CT scans

By Alina Hsu

There’s been a lot of discussion recently about the risks of CT scans, partly in response to the new imaging data reported by Hospital Compare.

According to an article by Rita F. Redberg, M.D., M.Sc., in Engineering a Learning Healthcare System: A Look at the Future: Workshop Summary (p. 128), “[an estimated] 2 percent of all cancers in the United States are attributable to radiation from CT scans, and…some 3 million additional cancers can be expected in the next decade because of increased use of CT scans.”  Further, the increased use of CT scans has not been associated with decreases in mortality or improved health outcomes.  According to the FDA, CT screening is also associated with false positives, which may necessitate costly, painful, invasive and unneeded follow-up procedures that may themselves present additional risks.  The incremental increase in cancer risk for an individual is very small compared to the baseline risk, but at a population level there is reason for concern.
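To see why a risk that is trivial for one patient can still matter at the population level, consider a back-of-envelope calculation.  The per-scan excess risk, annual scan volume, and baseline lifetime risk below are rough illustrative figures, not numbers from Redberg’s article.

```python
# Back-of-envelope contrast between individual and population-level CT risk.
# All inputs are rough illustrative assumptions, not estimates from the article.

excess_risk_per_scan = 1 / 2000   # assumed excess lifetime cancer risk per scan
baseline_lifetime_risk = 0.40     # approximate baseline lifetime cancer incidence
scans_per_year = 70_000_000       # assumed order of magnitude for annual US CT volume

relative_increase = excess_risk_per_scan / baseline_lifetime_risk
expected_excess_cancers = excess_risk_per_scan * scans_per_year

print(f"Individual: one scan adds about {excess_risk_per_scan:.3%} lifetime risk "
      f"(a {relative_increase:.2%} relative increase on the baseline).")
print(f"Population: across {scans_per_year:,} scans per year, roughly "
      f"{expected_excess_cancers:,.0f} excess cancers would be expected.")
```

A fraction of a percent added to a 40% baseline is invisible to any one patient, yet multiplied across tens of millions of scans it becomes tens of thousands of cancers, which is why the population-level concern is real even when the individual risk is small.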


Triple Aim: Impossible dream, or bold new reality?

By Alina Hsu

While many hospitals and healthcare providers are still struggling to provide care that is safe, effective, patient-centered, timely, efficient and equitable, IHI has raised the bar.  The next step is the Triple Aim:

  • Improve the health of the population;
  • Enhance the patient experience of care (including quality, access, and reliability); and
  • Reduce, or at least control, the per capita cost of care.
