
By ISQua, Monday, Dec 12, 2011

Conversations with Leaders in the Field of Patient-Centered Care - Dr. Peter Pronovost

Dr. Pronovost's checklist program has brought about a radical decrease in the incidence of hospital-acquired infections.

A practicing anesthesiologist and critical care physician at Johns Hopkins Hospital, Dr. Pronovost is a professor at Johns Hopkins University, with appointments in the School of Medicine (Departments of Anesthesiology, Critical Care Medicine and Surgery), the Bloomberg School of Public Health (Department of Health Policy and Management) and the School of Nursing, and he is medical director of the Center for Innovation in Quality Patient Care.

Dr. Pronovost established the Quality and Safety Research Group in 2003 to advance the science of safety, and he is active in a number of other safety organizations.

Dr. Pronovost holds a Ph.D. from Johns Hopkins Bloomberg School of Public Health, an M.D. from Johns Hopkins School of Medicine and a B.S. from Fairfield University.


People are always interested to know how so renowned a doctor as you became interested in intensive care, and what it was in 2001 that led you, as an intensive-care specialist in Maryland, to the simple checklist you would go on to refine to the point that it brought about a major revolution in how healthcare is delivered.


When I was a fourth-year medical student, my father died, needlessly, after being misdiagnosed with lymphoma. He was by then beyond being able to receive the bone marrow transplant he needed. And I watched him die—and die, frankly, writhing in pain—and I became convinced that patients deserved better than our healthcare system often gave them.


I went on to finish medical school and then trained in critical care because I liked that area. Early in my career in the ICU at Johns Hopkins, a little girl named Josie King died from what started as a catheter infection. At the time, our rates of catheter infections were unacceptably high, and I was one of the doctors causing those high rates. Like most doctors, I certainly didn’t want to harm patients. I wanted to give them the best care possible, yet I wasn’t, and patients were being harmed needlessly. The little girl’s mother came to me on the first anniversary of her death and said, Peter, could you tell me that Josie would be less likely to die today than she was a year ago?


And the sad reality was that I couldn’t give her an answer. At the time, we didn’t focus very much on these infections because we thought we were great. We were Johns Hopkins. We take care of sick patients, and those infections were in what I call “the inevitable bucket”: they just happen. But Josie’s mother’s words haunted me, because I thought she deserved an answer.


We started to look for a way we could improve these infection rates. We went first to the Centers for Disease Control guidelines, which are elegant and scholarly but less helpful at the bedside because they are 200 pages long. We culled out from them a simple checklist and discovered that we were doing the things on that checklist 30 percent of the time—I was doing those things 30 percent of the time! So that’s how we got started.


We applied what I called the science of patient safety. I didn’t just berate the doctors, or myself, and say, Try harder. We brought the science of human factors and systems engineering to the problem, and what did we find? We had to go to eight different places to get all the equipment needed to comply with the checklist! Caps were in one place, masks were in another and gowns were in yet another, and half the time they weren’t stocked—and when they weren’t stocked, or when it was hard to find them, I made what was in my mind a rational economic decision: I said, Okay, is it worth my running down the hall and spending 10 minutes getting supplies, which would mean 10 minutes less with a different patient, or do I just go ahead without them?


Every day doctors and nurses in healthcare are faced with this kind of decision, where we make tradeoffs because of inefficient systems: safety for efficiency. And the reality was that it was a managerial failure, a failure of leadership that made that tradeoff necessary. The solution was a line cart stocked with all the things clinicians said they needed. And compliance with the checklist went from 30 percent up to 75 percent, and infection rates went from 11 per 1000 catheter days down to five. And this did not come from exhorting doctors to try harder but from changing the system to make it easier by applying some basic principles.
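[Editor’s note: the rates quoted here and throughout this interview use the standard epidemiological measure for central-line infections, the number of infections per 1,000 catheter-days, where one catheter-day is one patient with a catheter in place for one day:

$$\text{infection rate} = \frac{\text{number of infections}}{\text{total catheter-days}} \times 1000$$

So, to take an illustrative example with made-up counts, a unit that recorded 22 infections over 2,000 cumulative catheter-days would have a rate of 11, the starting figure Dr. Pronovost cites.]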


But then I discovered that the culture itself needed to be changed, because 75 percent was good, but it wasn’t good enough. So I called the doctors and nurses together and asked the nurses to work with the doctors when they put catheters in, to be sure they used the checklist. And I nearly caused World War III. The nurses said, It’s not my job to police the doctors, and if I do I’ll get my head bitten off. The doctors said, Peter, there’s no way you can have a nurse question me in public—it hurts my credibility.


This was the illusion that doctors had to be perfect. And what was striking was that there was no debate about the efficacy of the checklist—everyone agreed we should do those things. What we were debating was power and politics, and shame, and maintaining credibility. I pulled everyone together and said, Doctors, we have permission to forget to wash our hands. We have permission to not remember an item on the checklist—we’re human, we’re going to do that. But we don’t have permission to put patients needlessly at risk. So if you forget, the nurses will see you and remind you to go back and fix the defect, not because of power or politics but because of the patient.


And nurses, I need you to speak up. I know you’re worried about getting your head bitten off, so, docs and nurses, let me be clear: if doctors give you flak about questioning them, then unless it’s an emergency I want you to page me, any time, day or night. We must ensure patients receive the best care, and the nurses will be supported. If we disagree with the evidence for the checklist, let’s have a conversation about that. But you just told me you all agree with it, so the only reason for not complying is ego, and that’s really not acceptable.


I wasn’t paged, and compliance went up from 75 percent to 98 percent, and infection rates went from five per 1000 catheter days to two. The team was really excited about how good we could get, but we didn’t just leave it there. We applied the third principle of patient safety. The first is to standardize work, the second is to create independent checks for things that are important and the third is to learn from our mistakes.

We started investigating every infection as a defect. This first changed the mental model from “These infections are inevitable” to “They are preventable.” But more important, we found opportunities. We found that half the infections we had were from catheters placed in the operating room, and that we hadn’t taken our checklist there. Infection rates came down to one. Then we found that the remaining infections had nothing to do with how the catheter was inserted but with how it was maintained. Patients who had had catheters for a long time got infections. We discovered that our nursing practices for catheter maintenance were highly unstandardized and variable, so we standardized them, we created independent checks, we started auditing and learning, and we virtually eliminated infections. That’s when we realized the potential to make large-scale change by bringing good science and good measurement to bear. And that’s the program that has spread throughout this country and almost a dozen others.


Just as important, these are principles you can apply to other areas. Every year, central line-associated bloodstream infections kill about as many people in the United States as breast cancer does. It’s a public health problem every bit as large as breast cancer, but nowhere near as visible, as well funded, or as attention-getting. And yet, unlike breast cancer, for which we do not yet have a cure, we have a cure for central line-associated bloodstream infections right now: a simple program that prevents them. It needs to get the attention it deserves, because if you think about it from that perspective, to have some hospitals not adopting these kinds of programs and not implementing them—when we know from hundreds of hospitals that this disease can be cured—is really a leadership failure.


Did it amaze you that something so simple—and obvious—as washing hands and creating a sterile zone could have so dramatic an effect on in-hospital infection rates? How had medicine lost sight of this simple truth to the extent that a massive culture change is needed to reintroduce it? Do you think the CUSP program does enough to bring that about?


I was shocked. In fact, early on in this process, to give you a little anecdote, the doctors pushed back—these things happen to sick patients, they said. I said, Okay, let’s not argue about it; let’s see how low the infection rate can go. If you agree with the items on this checklist, let’s create a healthcare system that makes sure every patient always gets them, and then let’s see how low the rate can go. Maybe we’ll do them and the rates will stay high, but my hunch is that if we do, the rates will come down dramatically.


In the beginning, of course, we didn’t really know what would happen, and I will admit that as a doctor I had some self-doubt that the rate could go that low. I was floored to see not only that the rate decreased but that we could virtually eliminate these infections, the vast majority of which are preventable. Why hasn’t medicine accomplished this? It comes down to a really important mental model: in medicine we view research as finding new genes or finding new drugs. We’re really focused on solving puzzles rather than solving problems. And the science of healthcare delivery—the science that we brought to this about culture change and human factors engineering—is neglected. Most medical schools will have a hundred people studying genes and practically no psychologists or sociologists or human factors engineers—none. And we wonder why we haven’t made progress in the delivery of safe healthcare! Though our program seems simple, it is based on strong theory and informed by the science of healthcare delivery. In addition, we kept score with a measure clinicians believed was valid.


The other science piece of this is accountability for patient outcomes. We invest taxpayer dollars to study and find genes and therapies, but how they then get used is left to your own judgment. We are way over-invested in physician autonomy: you may want to do it, but there is no accountability for your outcomes. What we did here was seek to solve a problem: we said we’re going to eliminate these infections, and to do so we’re going to package a program that uses all the known evidence.

Now I got a lot of pushback, a lot of criticism: because I did everything at once, I didn’t know whether the checklist could have been four items rather than five, or three rather than five, or how much was culture versus the checklist. And my response is—and I’m not being flippant—that in many senses I don’t know and I don’t care, because unless a component of that checklist is really risky for the patient or costly, it’s not worth finding out whether the checklist could be four or five items. It would have taken ten years to design and run studies looking at each item individually, and in the meantime as many people die every year as die from breast cancer. And to me that’s untenable. So we packaged an intervention based on science, and on theory about human behavior and teamwork, that we thought would most quickly bring infections down. And it did.


I think medicine is ripe to take that same approach to other types of problems. It’s the same thing John F. Kennedy did when he said he wanted to put a man on the moon in ten years: here’s the goal, now let’s work backward and solve the problem, rather than medicine’s usual habit of working forward and asking, Does drug A work better than drug B? We need that approach too, but once we have enough evidence about how therapies work, once we know how to measure the outcome, I think we have to flip our mindset and say, Okay, now let’s work backward to drive this complication down as low as we can and package everything together to get us there.


The CUSP program, or Comprehensive Unit-Based Safety Program, is the intervention that we use to change culture, and it’s one of the only programs that has been validated to show that it really does move culture. It’s based on some really simple ideas—we learned from Toyota. The CEO of Toyota was asked, What’s the secret of your production system, which gets a lot of notoriety and has a lot of fancy buzzwords around it? His answer was simplicity itself: The secret is that we do two things. We improve teamwork, and we learn from our mistakes. And healthcare didn’t have a mechanism to do either. We’re really good at recovering from mistakes, so things go wrong and we recover, or we run down the hall and get the needed supplies. But we don’t have mechanisms to step back and learn. And by learning I mean, how are we going to reduce the risk that a future patient suffers this same harm?


The second insight we had is that culture is local. If you compare one hospital to another, there is actually very little variation in culture at the hospital level. But if you look at the units within a hospital—the nursing units, for example—the culture varies widely, from 10 percent to 95 percent of staff reporting good culture and teamwork. This same variation occurs with patient satisfaction. There is a unit-level culture of caring. We realized that hospital-level solutions alone won’t work; we have to get to the unit, where the work takes place. CUSP is a program that builds on that idea and creates a unit-level team to solve its own problems. It has five simple steps. First, we educate staff in the science of safety, because the science that we had applied earlier is not known to most physicians. They’re not taught it in medical or nursing or pharmacy school, and they need to get some basic principles down, so we have a training program for that. It’s on the Web and it’s free.


Second, we ask staff members to identify their defects, and by defects we mean anything that you don’t want to have happen again. What’s gone wrong in your unit? We have error-reporting systems and liability claims to do that, but the most powerful way we found is simply asking the clinicians in that area how they think the next patient is going to be harmed. Because there is overwhelming data from psychology that frontline operators know the risks; it’s just that they’re not often asked about them. They know what the hazards are. They live them every day—I live them every day as a practicing doctor—and we have to get that wisdom out of their heads and into some collective pool so we can begin to address it.


Third, we assign an executive to partner with these teams. This builds on executive walk-rounds, but you might think of it as executive walk-rounds on steroids, or evidence-based walk-rounds, because what we found is that the more things executives work to fix, and the more times they visit the unit, the better the improvement in culture. These executives are actually part of the team. They roll up their sleeves; they help the team learn from defects.


The fourth step is that we say, you have to learn from one defect a month. And we trust you: you know what your biggest risks are. The accountability is that you have to learn; the empowerment is that you focus on whatever you think your biggest risks are. The team has to answer four questions: What happened, or could have happened? Why did it happen? What did you do to reduce risk? And then, most important, how do you know that risk was actually reduced? If they write a one-page answer to those four questions, we’ve learned.


You can almost think of this approach to learning from defects as root-cause lite. We have a big economic problem in healthcare, and what I mean by that is that the way we learn from defects is, for the most part, root-cause analysis. Done well, it’s very effective. But the way it’s done in healthcare, it takes about 200 to 300 person-hours for each event. It’s very labor-intensive. Every unit in a hospital has from 15 to 20 defects a day, so the math just doesn’t add up. All these problems—and the solution takes 300 hours? We need a different hammer.
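[Editor’s note: to make the arithmetic concrete, even at the low end of the figures Dr. Pronovost cites, a single unit would need

$$15\ \text{defects/day} \times 200\ \text{person-hours/defect} = 3{,}000\ \text{person-hours per day}$$

to run a full root-cause analysis on every defect, which is far more time than any unit’s staff can supply; hence the one-page, “root-cause lite” alternative.]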


Fifth, we ask staff to try tools to improve teamwork. We have a menu of tools that work, but really it’s user-driven. We provide some basic training on teamwork, and then focus further training and tools on their specific issues. The unit says, Okay, we’re struggling with understanding each other’s roles, or, we’re struggling with understanding the care plan. And when they find out what their challenges are, they choose from the menu the tools that might help address their particular need. Often, they create new tools. There’s been a dramatic improvement in culture across the whole Johns Hopkins organization, where we have 120 nursing units working on CUSP. We’re confident that this kind of program, one that engages local teams, is what’s needed in healthcare.


You have been called—in addition to “genius,” courtesy of the MacArthur grant—a true visionary, a pioneer, a true hero and many other complimentary things, so your opinion is important. What other kinds of changes do you think are needed? What is your major interest right now?


I’m really focusing on two things. One is to have the field of safety or health delivery research viewed as a legitimate science and get the attention it deserves. You’ve probably seen the recent news reports that despite the decades of awareness of the issue of patient safety, the empirical evidence that we’ve made progress is virtually nonexistent, and that’s quite disappointing.


I think we’ve learned a lot, and our bloodstream infection example is probably one of the few shining examples of where we did make a difference nationally and internationally. If I think about why, it’s because we were guided by science, we kept score using a valid measure and we committed ourselves to collaborate. We formed a clinical community, and clinicians co-created the program. The improvement was done with clinicians rather than to them. Though we need regulatory, economic and management incentives, I believe significant improvements in safety will only come when clinicians lead the way. Safety has to be something they do rather than something done to them.


I’d like to see other programs like that developed, so that at the end of the day we really can document that patient outcomes are better and that this isn’t just hype or mere exhortation to focus on safety. I think that, as in every other part of medicine, science has to guide us. That might be management science, social science, psychology—and it won’t happen unless that science is viewed as legitimate.


Our research group straddles operations and research, and we try to advance the science while being ruthlessly practical, trying to find what I call “the sweet spot” between being scientifically sound and being feasible. What I see out there in the quality movement is well-intentioned people doing things but not, as the evidence shows, having an impact, because the science just isn’t good—they’re not evaluating, they’re not using the right theories.


On the other hand, I see a lot of researchers, including me at times, doing elegant work that is completely impractical to scale. Our group includes both practitioners and researchers working together on problems—clinicians trained in research, economists, psychologists, human factors engineers, statisticians—all together in this mixing bowl bringing their different views to solve these problems. It’s a really rich environment, and I wish there were more of them around the country.


The second is getting good measurement, so that we know exactly how big a problem we face and so that we can monitor progress. Some of the evidence of the failure to make progress is pretty concerning, and the state of our ability to measure safety is still really rudimentary. The public deserves to know how big the problem is and whether we are making any progress at all.


There is a reference in Wikipedia to your “alarm about the unintended consequences of computerization of patient records.” Again, those consequences are the result of abandoning a very simple truth: don’t discard the old way of doing things until the new way has been tested and proved. Is this an issue you’re tackling as you did the checklist?


What I see in this quality and safety movement is the belief that it is going to “transform healthcare.” You hear it all the time, and that gives me pause for a couple of reasons. It assumes that everything we’ve done before this is bad. It doesn’t honor the past. Healthcare isn’t perfect, but the systems we’ve evolved to meet the need—and they’ve evolved thanks to some really wise people—were built by well-intentioned people working very hard. Certainly they need to improve, and there’s harm occurring, but we also need to see what’s good about them, what’s working, what we want to preserve. We don’t do that enough, and it sends a message to clinicians—and when I wear my clinician hat I hear it—that says, you’re just a dumb doc, everything you do is wrong and all you do is hurt people. That’s not a good way to engage the people we need to help solve this problem.


It gives me pause too because every time you change the system, you may defend against some mistakes, but you will inevitably introduce new risks. You always do. And it may be that those new risks are worse than what you’re trying to solve, because there’s wisdom in the way the systems evolved. They’re not perfect, but there is no quick fix, no magic formula for cutting mortality by half, and I think that goal is largely illusory.


What recommendations do you have for the field?


To begin with, remember to put patients first. Keep them as the North Star, because if we do that, it’ll guide so much of our work. Second, commit to collaborate. No one group is going to do this alone. There are different ways to change behavior: regulation, economics, hierarchies, management, or community. And though all of them are needed, community is what is going to solve this problem. So clinicians have to see this work as theirs, and I hope they accept that charge.


Third, let’s be guided by science. Healthcare has to embrace the science of healthcare delivery as every bit as valid and important as basic clinical research so that programs are informed by good theory and good measurement and we know if they work.


ISQua is delighted to announce that Dr. Pronovost will be a Plenary Speaker at our 29th International Conference, to be held in Geneva, Switzerland.
