The power of a question

By Todd Schneider

Often in improvement work we rely on Lean tools, Six Sigma methodologies, classic industrial engineering methods, and so on.  However, we may be overlooking one of the most valuable skills of a change agent: asking questions.

Most often, improvement professionals are not the subject matter experts in the clinical or operational area they are working to improve.  This provides the perfect excuse to ask questions.  Sure, some questions are part of other tools or methodologies.  Obviously we ask questions when we apply the 5 Whys.  And it’s common to ask “what happens next?” when constructing a flowchart.  But do we ask “what if” or “why not” often enough?  When done well, the questions asked are not so much for your own learning as to stimulate the group’s creativity and awareness.

Several years ago, I was working with an organization that had been awarded a grant from the Robert Wood Johnson Foundation.  As part of the grant, the organization was provided technical assistance from the Institute for Healthcare Improvement (IHI).  One of the project teams was working to improve the care of Acute Myocardial Infarction (AMI) patients, who typically present to the ER with chest pain.  At that time, the team was working to achieve two main goals: 1) obtain an EKG within 5 minutes of arrival, and 2) go to the Cath Lab and open the vessel within 90 minutes of arrival (commonly referred to as “Door to Balloon”).  We were reviewing this project with staff from IHI during a site visit.  During the discussion, Don Berwick, then President/CEO of IHI, asked a simple yet important question: “If I told you that a chest pain patient would arrive at the ER at 5:00 p.m. today, would you be able to meet the goals for Door to EKG and Door to Balloon?”  Almost immediately the team leader responded, “Yes, of course.”  Berwick replied, “Then go do it.  Get ready.  And if the patient doesn’t come at 5:00, keep waiting.  We know the patient will come.”

The team did just that.  They figured out how to get the process right for that next patient.  That conversation was a turning point for the project team.  It helped them focus on how they could do it right once, and then figure out how to do it every time, for every patient.  This organization went on to become an early leader in the care of the cardiac patient, and several years later still referred to that moment as an important stimulus for its work.  The question didn’t change the overall goal or even the general focus.  But it did re-frame the project, allowing the team to look at it from a different perspective.

By all means, we need to continue to use all the tools we have available, but we must not forget about the power of asking questions.  The questions may be simple; they don’t need to be complex or technical.  However, the questions should stimulate new ideas and help the team see the potential for new solutions. 

Speaking of questions, have you asked a few today?

Applying Manufacturing Quality Paradigms to Healthcare in South Africa

By Maria Treurnicht

Quality management has played a vital role in the improvement of effectiveness and efficiency in the manufacturing industry. Over the past century the manufacturing industry has grown systematically in the way quality is understood and managed. The focus on quality and effectiveness in the manufacturing industry is largely a result of the competitiveness in this industry. Public healthcare, in contrast, especially in the developing world, is not driven by competition. Public healthcare could therefore benefit considerably from using quality management principles of the manufacturing industry. In this post I will discuss how different manufacturing quality paradigms correlate to South African public healthcare provision.

Custom-Craft Paradigm

This paradigm is best explained by the primitive example of a blacksmith making swords: kings and knights had their swords made to their exact specifications. The direct communication between the craftsman and the customer assured a high quality product and high customer satisfaction. Nevertheless, this time-consuming and costly process had a low production rate that made it infeasible for the general population to have custom-made swords. This paradigm closely corresponds to private practice consultations, where patients can make direct appointments with their GP. The quality of care received is of a high standard, but unfortunately these consultations are costly and time-consuming.

Mass Production Paradigm

Thanks to Henry Ford, assembly-line manufactured cars and appliances changed the lives of the middle class around the world. Mass production of standardized parts has allowed almost continuous improvement of efficiency and reduction of real costs since the 1920s. In 2004, the national antiretroviral (ARV) treatment program was launched in South Africa. This led to the introduction of many clinics that specialize in ARV treatment for HIV-infected individuals. Another application of mass production in South African healthcare is the introduction of tuberculosis clinics and hospitals. By specializing in specific treatments and thus improving their productivity, these clinics are able to implement lean processes. These mass production healthcare facilities can therefore deliver services efficiently while providing quality care to the large proportion of HIV patients in South Africa. However, when a patient does not fit the typical profile, the ARV clinic is likely to refer the patient to a hospital. These patients are therefore examined twice, consuming unnecessary resources, much as unnecessary production is targeted as waste in manufacturing. Nevertheless, just as mass production reduced costs and improved accessibility for the broad population, the ARV clinics are effectively bringing basic healthcare to the broad population.

Mass Customization Paradigm

Mass customization combines the custom-craft and mass production paradigms to produce products or services that approach mass production efficiency while still meeting individual customers’ needs. The Primary Health Care model, in which patients visit low-level care facilities and are referred for specialist care, could be regarded as a form of mass customization. The referral process contains elements of both mass production and custom-craft: administrative processes during the referral are standardized, whereas consultations are customized and focused on the specific patient’s needs. Unfortunately, the referral system requires transportation and accommodation, so the move from mass production to mass customization sacrifices some of the sustained effectiveness of the manufacturing paradigm.

The introduction of telemedicine referrals in South Africa is playing a vital role in improving mass customization processes. Telemedicine, using Information and Communication Technologies (ICT) in patient referrals, reduces the need to transport patients between hospitals. Telemedicine could improve the quality of a consultation by including a remote specialist using ICT. The standardization of these telemedicine and other healthcare processes is the Industrial Engineer’s opportunity to play a similar role in healthcare as in the manufacturing industry, to bring quality care to the general population, specifically in the developing world.

Patient safety: What if you can’t speak up?

Everyone in healthcare has stories of being silenced with a look or comment.  You need clarification, or think something might be wrong, and when you try to tell the physician, s/he snaps at you, or is sarcastic, or condescending.  In some cases, the physician may even be verbally abusive or threatening.  You quickly learn, at least with that physician, to keep quiet.   This can happen in any communication where there is a strong power differential:  between nurse or doctor and patient, between senior and junior healthcare professionals, between managers and their staff, etc. 

Still worse, many people have pointed out this situation to their manager or someone else in authority, only to be told to just deal with it, or to become labeled a complainer.  Or the manager may respond attentively and with concern, but then nothing changes. 

All of these situations work to stifle the concerns healthcare professionals have about the quality and safety of patient care.  This is demoralizing: the message is that your perspective is unimportant, that you are not worth listening to.  You come to understand that the organization in which you work is not committed to quality and safety, but instead often chooses to protect the egos of powerful people (like physicians who are not employees but control a large revenue stream) over the safety of patients (and of other healthcare providers).  You are in a double-bind situation, and feel increasing stress and frustration.

Yes, physicians are under tremendous pressure — but that’s when they are more likely to make mistakes, and more likely to need the situational awareness of the others on the care team to prevent these mistakes from harming the patient.  In a complex adaptive system such as healthcare delivery, where error-proofing is often not possible, we need to rely on the dynamic awareness of everyone involved in patient care to catch mistakes before they cause harm.  

Most physicians, and most managers and administrators, do not habitually intimidate people or dismiss their concerns.  But anyone can be under so much stress that s/he occasionally responds this way.

According to The Joint Commission, in Sentinel Event Alert Issue 40,

Intimidating and disruptive behaviors can foster medical errors, contribute to poor patient satisfaction and to preventable adverse outcomes, increase the cost of care, and cause qualified clinicians, administrators and managers to seek new positions in more professional environments.  Safety and quality of patient care is dependent on teamwork, communication, and a collaborative work environment. To assure quality and to promote a culture of safety, health care organizations must address the problem of behaviors that threaten the performance of the health care team.

Intimidating and disruptive behaviors include overt actions such as verbal outbursts and physical threats, as well as passive activities such as refusing to perform assigned tasks or quietly exhibiting uncooperative attitudes during routine activities. Intimidating and disruptive behaviors are often manifested by health care professionals in positions of power. Such behaviors include reluctance or refusal to answer questions, return phone calls or pages; condescending language or voice intonation; and impatience with questions. Overt and passive behaviors undermine team effectiveness and can compromise the safety of patients.  All intimidating and disruptive behaviors are unprofessional and should not be tolerated.
. . .
Several surveys have found that most care providers have experienced or witnessed intimidating or disruptive behaviors.
. . .
Organizations that fail to address unprofessional behavior through formal systems are indirectly promoting it.

Lucian Leape, in an article about problematic physician behavior, notes that:  “Performance failures of one type or another are not uncommon among physicians, posing substantial threats to patient welfare and safety. Few hospitals manage these situations promptly or well.” 

So what can we do?

At the organizational level, the Joint Commission has already required that hospitals set behavior standards and implement a process for managing disruptive and inappropriate behaviors.  It has also made a number of further recommendations, including training, contractual provisions, non-retaliation, and leaders modeling appropriate behavior.   

At the individual level, Maxwell et al. use positive deviance to identify what enables some people to speak up successfully in spite of an environment that is less than fully supportive.  Their strategies include: work behind the scenes if the situation is not urgent; avoid provoking a defensive reaction; keep your own frustration and anger in check; and explain your intention to help the caregiver as well as the patient.

As the organizational culture begins to change, scripted interventions may be very helpful.  In one hospital, “Doctor, I have a concern…” was used to signal that something might be going wrong and to request the physician’s attention.  If that didn’t work, the same person would follow up with, “Doctor, I have a patient safety concern.”  Whether the concern turns out to be well founded or not, astute or misguided, the physician or other caregiver being questioned should always thank the person who raised it, to encourage the same behavior in the future.  (This can be very hard to do when the concern is unfounded and the caregiver is irritated, but it is crucial to remember that this is how we prevent our mistakes from causing harm.)

In your organization, do all staff members have the skills they need to speak up tactfully?  Will they be encouraged to speak up, and supported when they do so?  Are the behavioral standards applied equally to everyone in the organization?  “Respect for people” is a pillar of the Lean approach.  Are hospitals that have adopted Lean more likely to encourage and support speaking up? 

Diagnostic errors: a challenge for systems engineering

In a recent study of high-severity patient injury cases, the underlying cause was almost nine times as likely to be a diagnostic error as a medication error.  Furthermore, the injuries caused by diagnostic errors cost more than those in all other error categories combined.  So I chose this topic as generally important, but also as a timely contribution to Healthcare Quality Week 2011 (for Twitter users, #hqw11).

AHRQ’s patient safety indicators, the Joint Commission’s sentinel event registry, and IHI’s Global Trigger Tool don’t even include a category for diagnostic error.  Malpractice and autopsy cases may not generalize well, but they are the source of most of the published data.  In an AHRQ-funded study of physician-reported diagnostic errors, 28% of the errors were rated as major, resulting in patient death, permanent disability, or a near-life-threatening event.

Detection of diagnostic errors is difficult, but estimates of their prevalence generally range from 5% to 15%, depending on specialty.  Causes of diagnostic errors include both systemic factors (poor training, poorly defined procedures, etc.) and cognitive factors (premature closure, representativeness bias, confirmation bias, etc.).  I will write another post discussing cognitive errors in more detail.

What does all of this have to do with systems engineering?  First, systems engineers understand that the kinds of cognitive errors that contribute to misdiagnoses are themselves errors of systems design.  Extending the argument Donald Norman makes in his classic book, The Design of Everyday Things: predictable human errors in using machines or objects are really design flaws.  Norman’s argument applies equally well to systems.  Second, we should design systems to reduce or eliminate errors.  Of course, the current “system” in which physicians work is, for the most part, not explicitly designed, which is a large part of the problem.

If we want to design a system to reduce errors in medical diagnosis, here are some recommendations: 

  • Improve feedback.  In some cases this means creating feedback loops where none exist.  Many diagnostic errors occur in physician offices, and if the error is detected later in the treatment stream, the physician who made it may never know.  The increasing use of EHRs provides a great opportunity here. 
  • Track and analyze diagnostic errors.  The information we have so far about these errors and their causes may be skewed; as we learn more, we can design our systems to mitigate or eliminate the causes. 
  • Accept that in some cases elimination of error is not possible.  There may be tradeoffs between increased accuracy and delay in diagnosis, or between false positives and false negatives.  In these cases we should use our industrial engineering tools to find the optimal tradeoffs. 
  • Build technical support systems to correct for cognitive biases (such as premature closure, anchoring, and the availability bias) and to lessen reliance on the memory of individual physicians.  These systems can include information about base rates and about temporally and geographically localized conditional probabilities, alternative diagnoses that should be considered, and perhaps even automated reading of radiological scans.  The particular tools chosen should focus on the diagnostic errors with the greatest frequency and impact on patient outcomes. 
  • Where technical support systems are not an option, seek review and input from other physicians, which can significantly reduce cognitive errors, especially confirmation bias. 
  • Improve communication among everyone involved: physicians, nurses, laboratories, patients, etc. 
  • Recognize that in repetitive tasks attentiveness declines as a function of the number of repetitions, and design work patterns and schedules to take this into account. 
  • Educate physicians about cognitive biases and about best practices (as they evolve) in decision-making under uncertainty.
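To make the base-rate point concrete, here is a minimal sketch of the kind of calculation a decision support tool could surface.  The function name and all of the numbers are hypothetical, chosen only for illustration; real tools would draw prevalence from local, current data.

```python
# A minimal sketch of base-rate support for diagnosis (illustrative only).

def posterior_given_positive(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = prevalence * sensitivity          # P(disease and positive)
    false_pos = (1 - prevalence) * (1 - specificity)  # P(no disease and positive)
    return true_pos / (true_pos + false_pos)

# A test with 90% sensitivity and 95% specificity, applied where the
# condition's local prevalence is only 1%:
p = posterior_given_positive(0.01, 0.90, 0.95)
# p is only about 0.15 -- far lower than base-rate neglect would suggest.
```

The same test applied where prevalence is high yields a posterior above 90%, which is why temporally and geographically localized base rates, not just test characteristics, belong in any diagnostic aid.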

The measures of “correctness” should vary depending on the stage of the treatment stream.  As Mark Graber et al. point out, early in the stream the best measure would be inclusion of all clinically significant diagnoses, not simply inclusion of the correct diagnosis.  

The ultimate question is whether we have the will to tackle the problem of diagnostic error.  There are many reasons for its relative neglect thus far, but Thomas et al., in a recent article in the Archives of Internal Medicine, suggest that business motives may be the underlying factor: 

Finally, though, we should ask whether the health care system will support interventions to reduce diagnostic errors. It has not done so thus far. It would appear that the health care system tolerates some background rate of errors, so long as practitioners or hospitals are not wild outliers.  There is little business rationale for improving diagnosis because most of the costs of diagnostic AEs are never uncovered and are absorbed quietly by payers. Money is made in health care by moving forward with ever more costly interventions, not by looking back at errors that could have been avoided.

The authors go on to suggest that this might change with the advent of ACOs and a focus on population health.  Any reform structure which includes accountability for health outcomes will have an incentive to analyze and reduce diagnostic error.  (Systems engineers should favor such structures anyway, because health outcomes should be the clear purpose of health care.)  So is there reason to be optimistic?  I welcome your comments.

Reflections: Project ECHO, Quality and Leadership

By Alina Hsu

As I described in an earlier post, Project ECHO is an innovative approach to delivering specialist care for complex chronic conditions to remote and/or underserved populations.  It has been shown to be as effective for treating hepatitis C as care in an academic medical center.

Primary care clinicians who have participated in this project with Dr. Arora are uniformly enthusiastic, and personally grateful to Dr. Arora for enabling them to better care for their patients and for enriching their practice, as comments on a post on The Healthcare Blog attest.

Play the Health Policy (Simulation) Game

by Alina Hsu

If payers cut reimbursement rates, what is the effect on healthcare cost over time?  On morbidity and mortality?

How about if we improve quality but make no other changes to the system?

Or if we simply move to universal coverage?

What if we try combinations of interventions?

HealthBound, a simulation game available on the CDC website, is based on a system dynamics model of the causal relationships among components of the US healthcare system, including outcome measures.  Models are simplified, abstracted representations of complex realities.  In complex systems, especially given unintended consequences and time lags between interventions and results, it can be difficult or impossible to intuit what the results of a change will be.  The only ways to learn are to implement changes in the actual system or to run simulations.  Real-world interventions don’t permit us to go back and try something different, and when the risks or costs of changing the actual system are very high, simulation lets us experiment with various options and use what we learn to guide policy.  HealthBound also provides a neutral, consistent framework for considering alternative approaches to healthcare reform.
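To give a flavor of what a system dynamics model is, here is a toy single-stock illustration (not HealthBound’s actual model; every number and name below is hypothetical): a stock of people with an unmanaged chronic condition, with a constant inflow of new cases and an outflow that depends on an assumed insurance coverage level.

```python
# Toy stock-and-flow model, Euler-integrated (illustrative only).
# Stock: people with an unmanaged chronic condition.
# Inflow: new cases per year; outflow: cases brought under management,
# scaled by the fraction of the population with coverage.

def simulate(coverage, years=20, dt=0.25):
    """Return the trajectory of the unmanaged-illness stock over time."""
    unmanaged = 1000.0          # initial stock (hypothetical units)
    new_cases_per_year = 100.0  # hypothetical constant inflow
    treatment_rate = 0.5        # fraction managed per year at full coverage
    history = []
    for _ in range(int(years / dt)):
        inflow = new_cases_per_year
        outflow = treatment_rate * coverage * unmanaged
        unmanaged += (inflow - outflow) * dt
        history.append(unmanaged)
    return history

low = simulate(coverage=0.6)
high = simulate(coverage=0.9)
# Higher coverage drives the stock toward a lower equilibrium,
# but only gradually, over years of simulated time.
```

Even in this toy, the benefit of raising coverage appears only after a lag of simulated years, which illustrates why time delays and feedback make such systems hard to reason about intuitively, and why a model like HealthBound’s (with many interacting stocks rather than one) is useful.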
