Welcome to Collective Conversations, a series of discussions aimed at highlighting people and groups working to improve health through better health care systems.
In this conversation you'll hear from Dr. Joshua Rosen about his research on decision-making in acute care surgery.
Joy Lee: What led you to work with a multidisciplinary team on projects through the Decision Science Group?
Joshua Rosen: This was a really exciting opportunity for me to work in a different space, with people who brought a bunch of different expertise that I really had no experience with, and a chance to look at problems that were interesting to me from different perspectives. So, I kind of came into our research with an interest in understanding how it is that doctors and patients work together to make decisions, especially for patients who are really sick, coming in with big medical problems and making those 2AM, middle-of-the-night, consequential health decisions; and wondering how it is that we can help patients make better decisions – decisions they are happier with in the end. And I realized that there were so many other professions that had spent a lot more time thinking about those questions: even, what makes a good decision? How is it that humans go about making decisions? People asking those very fundamental questions and spending so much time thinking about them in relation to so many other fields – in marketing, in psychology. There is such a rich amount of expertise out there, and so much of it exists at the University of Washington, so there was this great opportunity to collaborate with these people and start thinking about how we can apply some of that to these health care decisions. So, it was kind of a fortuitous meeting of an interest that we have around some of these health care questions and a lot of expertise here around the basic science of decision-making that came together around some of these questions.
JL: You’ve most recently created a type of tool – a patient decision aid – for a medical condition called acute appendicitis. Before we get into the specifics, could you first talk about why decision aids are necessary to support care delivery and system redesign, especially in emergency surgery settings?
JR: So, I think when we think about emergency surgery settings, or really any acute care setting, there are a number of features we’ve identified that make these particularly challenging settings for decision-making – particularly patient-centered decision-making, where we really are making a decision that involves the patient and that tries to align with their deeper preferences and values about what they want for their lives. One of those features is just that usually patients are meeting the doctor or health care provider they are working with to make this decision for the first time. You have no relationship with each other, no foundation of shared trust or shared knowledge. That might often be the case even in non-acute decisions – you know, if you are meeting a specialist to make a decision on cancer treatment for the first time – but it’s a different kind of environment; you might at least have a visit or two; you might have the opportunity to go home and think about it after an office visit, talk it over with your family, and then have a phone call or follow-up visit to make a decision about treatment. And that’s often not the case in an acute situation in the emergency department, for example.
There are also a lot of environmental factors that impede that relationship building. When I go to see a patient in the emergency room, sometimes there’s a lot of noise. Sometimes I’m going to see them just as they are being whisked out to a CT scan or to another test, or there’s someone coming in to draw their blood while they’re talking to me, or another specialist is coming in to talk to them while I’m talking to them. There are a lot of system factors that impede that ability. There are also things on my end – you know, I’m maybe getting three pages asking me to go see three different patients or something like that. So, there’s a ton of different system factors that impede the ability to really have a thorough conversation with a patient.
So, we saw that maybe there’s a role for decision aids, which are tools for delivering information to a patient and can be as simple as a paper pamphlet or as sophisticated as an online, web-based activity with videos and rich multimedia experiences. They have often been used more in elective settings – doctor’s office settings and things like that – and a little less so in emergency or acute settings. But they might have a role here to help overcome some of those barriers and facilitate better decision-making in these cases: by offloading some of the information-delivery tasks from physicians or other health care providers, by shifting some of that time burden around, and by letting patients review information when the health care provider is not present. Also, sometimes, you know, the time that I happen to come by to speak with a patient is the time when they’re just having a really bad attack of their pain or another distressing symptom, and it’s not conducive to them listening to me right then. And sometimes having someone sit and talk to them is not the best way for them to receive information, right? They would rather look at something, or have some type of multimedia tool with graphics that shows them different comparisons in a better way.
So, I think there are a lot of different ways that decision aids can help people improve their understanding and comprehension in these distressing environments. They might not be for everybody, and they might not work for every patient, but I think they’re another tool that can help overcome some of the barriers that patients face in making really good decisions in these challenging environments.
JL: Back to the decision aid you created - why the focus on acute appendicitis?
JR: So, appendicitis is a bread-and-butter condition for general surgeons that we’ve been treating for many, many years. And traditionally, there’s been one treatment for it, which was surgery: if you came in with appendicitis, in almost all cases the treatment was an appendectomy. Over the past number of years, there has been mounting evidence, most recently from a large randomized trial called the CODA trial (Comparison of Outcomes of Antibiotic Drugs and Appendectomy) that was run out of one of the research groups here at the University of Washington and a number of sites around the country, showing that it’s both safe and effective to treat appendicitis with antibiotics as well, and to offer that as a treatment to patients. The trial showed that antibiotics were non-inferior to surgery on the primary endpoint of patient-reported quality of life, but that there were differences in other metrics, like time missed from work or chance of readmission to the hospital, that might be really important to certain patients. So, all of a sudden, we moved from appendicitis basically being a disease with one treatment that the doctor told you you were going to get, to being what we call a preference-sensitive decision – patients really have a choice to make. And it’s not simple. There’s a whole bunch of different outcomes that might be prioritized very differently by different patients. That might be confusing for folks, and there are a lot of data points to present. So, we thought this was a really interesting opportunity to create a decision aid to help patients make this decision that they never used to have to make, and to help them understand the nuances between the two different treatments.
JL: That sounds pretty exciting to know that we actually have some options here and we can give some better care through a decision aid.
JR: I think that, like many things, there’s going to be a number of patients that come in and they’re going to have a really easy decision – they’re going to be like “I never want to experience this pain again.” “I want surgery.” “Take my appendix out.” And that’s a really easy decision for them.
And there’s going to be another group that comes in and be like, “Wait, you mean I have a treatment that might help me avoid surgery? Heck yeah, I really don’t want surgery.” And that’s similarly, a very easy decision for them.
But then you’re going to have this group in the middle. And that’s the group that we are really trying to cater to with this decision aid who are like, “Oh I really don’t know. Maybe I have an exam coming up that I kind of need but I also have this other thing and I kind of am a little scared of surgery.” I think that that’s a group that we can really help with a tool like this and help to visualize the data in different ways; help them figure out what’s really most important to them. And that this might be a really exciting thing for them to use to help figure out what’s going to make them ultimately the most satisfied with their care in the end.
JL: Absolutely, and I think we can all agree that patient satisfaction is the biggest and best goal that we want to uphold.
JR: Yeah, I think we just want people to be happy with the choices they make in their care and know that they have two reasonable options here to pick from.
JL: What was your approach in creating this patient decision aid and how was it “empirically informed”?
JR: So, we took a somewhat novel approach in designing this decision aid, which was to start with a lot of the things you would traditionally do when designing decision aids. We had great stakeholder groups that we worked with – patient advisory groups and clinician advisory groups – that really helped develop the overall design: what data points we wanted to present and the overall structure of the decision aid.
But then, we were still left with a number of design questions that were not answered for us, either by the traditional process of engaging with stakeholders or by really combing through the literature for guidelines and best practices around decision aids. Things like: what’s the best way to present some of this comparative outcome data to patients? What’s the best way to design certain graphics? So then, working with some of the multidisciplinary groups that we talked about earlier – with colleagues from, for example, the marketing and psychology departments here – we took the approach of running large randomized experiments on online crowdsourcing platforms, like Amazon’s Mechanical Turk, to do a lot of empirical testing of different designs and formats using objective outcome scales. If I present this information to 200-300 people using percentages or fractions, which group actually understood the data better? If I present it with this type of chart or that type of chart, which one did people find more comprehensible? If I present a confidence interval this way and an uncertainty estimate that way, did people interpret them properly or not? That was, I think, a really interesting approach. Between all the different experiments, we probably tested different designs with over 3,000 participants in maybe a month, a month and a half, which was a really cool way of doing this and allowed us to take an evidence-based approach to a lot of our design decisions in this decision aid.
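The kind of format experiment described here – randomize participants to one of two presentations, then compare an objective comprehension outcome between arms – can be sketched roughly as follows. This is an illustrative sketch only: the counts, format names, and analysis choice (a two-proportion z-test) are invented for the example, not taken from the actual study.

```python
import math
import random

def assign_format(formats=("percentages", "fractions")):
    """Randomly assign a participant to one presentation format."""
    return random.choice(formats)

def two_proportion_z(correct_a, n_a, correct_b, n_b):
    """Two-proportion z-test: compare comprehension rates between formats."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    pooled = (correct_a + correct_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Made-up results: 180 of 250 answered the comprehension question
# correctly when shown percentages, 150 of 250 when shown fractions.
z = two_proportion_z(180, 250, 150, 250)
print(round(z, 2))  # |z| > 1.96 suggests a real difference at the 0.05 level
```

In a real deployment the randomization would happen inside the survey platform rather than in analysis code, but the logic is the same: random assignment plus a pre-specified objective outcome is what turns a design preference into a testable question.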
JL: That is incredibly interesting and I can imagine such a rich amount of data in just a month to help inform this.
JR: And it was something I certainly would not have thought about before working with this kind of multidisciplinary team that we talked about earlier. It’s something they do every day with, for example, market research for products and A/B testing. It was really cool to apply that to a health care example, but then also to bring in things like the validated outcome scales that we would use in testing health care or decision science interventions as the outcomes for these experiments, and to apply more of a randomized-trial mindset to the design of these A/B test experiments. So, it was a really cool merger of those fields.
JL: What recommendations would you give to folks who want to improve systems of care by creating or improving patient decision aids, and who might consider a similar process that you and colleagues have taken?
JR: I think the first would be to familiarize yourself with the amazing resources out there. There are so many groups that have devoted their entire careers to developing these resources – the Ottawa Group is one, and there are so many others – with great collected reviews and resources for people who want to develop these types of tools. It’s really such a rich field. People have worked really hard to create a lot of good resources for people like myself who want to come along, try to get into this, and help patients, and it’s been really cool to see how much exists out there.
And then, specifically thinking about the empirical testing, I think it was a super interesting way of doing this, because you are not left scratching your head over some of these design decisions. Instead of wondering how you should present some piece of information and just making an arbitrary choice, or having two people sit in a room and decide together, you can very quickly put together an experiment through Qualtrics, make a little survey, run it in 100-200 people for a pretty minimal expense – on the order of $100-$150 or something like that – and get some real data behind your decision-making. The key, I would say, is that, if you can, partner with people who have done it before, because there are a lot of tricks just in the actual administration of those types of experiments: making sure you are getting good quality data, that you are not getting internet bots answering your surveys, that you are getting real humans doing it. There is a little bit of a learning curve to getting things done efficiently and getting good quality data, so it’s worth partnering with someone who’s done it a few times. But there are also some pretty good resources on the internet for figuring some of that out, and simple solutions like a captcha for making sure you have actual humans doing your survey, and things like that as well.
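The data-quality screening mentioned here – attention checks, bot detection, filtering careless responders – often boils down to a simple filter applied to each response before analysis. A minimal sketch, with entirely hypothetical field names and thresholds (real Qualtrics or MTurk exports label this metadata differently):

```python
def passes_quality_checks(response):
    """Screen one crowdsourced survey response for basic data quality.

    `response` is a dict with invented keys used only for illustration.
    """
    # Attention check: an item whose required answer is stated in its prompt.
    if response.get("attention_check") != "strongly agree":
        return False
    # Implausibly fast completions often indicate bots or careless clicking.
    if response.get("duration_seconds", 0) < 60:
        return False
    # A short free-text item that bots tend to leave empty or garbled.
    if len(response.get("free_text", "").split()) < 3:
        return False
    return True

responses = [
    {"attention_check": "strongly agree", "duration_seconds": 240,
     "free_text": "I would choose antibiotics to avoid surgery."},
    {"attention_check": "agree", "duration_seconds": 12, "free_text": ""},
]
clean = [r for r in responses if passes_quality_checks(r)]
print(len(clean))  # 1 – only the first response survives screening
```

The specific checks and cutoffs are study-dependent; the point is simply that the filtering rules should be written down and applied uniformly before any comparison between experimental arms.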
JL: Thank you so much Dr. Joshua Rosen. I really appreciate your time today to talk.
JR: Thanks for having me. I really appreciate the conversation.