CDS: Practical Lessons Learned—and Dirty Laundry

CDS is a good idea because it provides actionable, reliable advice on the right study to order, prior to ordering, when the clinician is trying to determine what they need to order.

Greg Mogel, MD
April 22, 2019

Taking the podium mid-stream during a series of talks on AI at the annual meeting of the California Radiological Society in February, Greg Mogel, MD, was mock-apologetic about interjecting a dose of cold reality into radiology’s topic du jour. Yet the timeliness of his talk on implementing order-entry radiology clinical decision support could not have been better. With CDS on the docket for a test year in 2020, the day of reckoning is near, barring any further delays. Furthermore—as you will read—he sees a very real and pressing role for AI in future iterations of CDS.

Recently retired from leadership roles in several Kaiser Permanente radiology departments, Dr. Mogel remains active in the Lung Cancer Screening and AI communities at the ACR. He also serves as Clinical Lead for Imaging at the National Decision Support Company (NDSC), acquired by Change Healthcare, whose CDS tool employs the ACR Appropriateness Criteria. “Everything I’m presenting is relevant across multiple vendors, who all tend to solve this problem slightly differently,” he assured.

Reasons to Engage

Mogel began with the reasons to engage: inappropriate imaging continues unabated, CDS is a much better solution than prior authorization, the number of imaging studies has exploded, PCPs and physician extenders make up a greater portion of the ordering spectrum, and, due to increased productivity expectations and distributed reading, you are not as available as you used to be to provide guidance.

“When I started in radiology there were maybe two ways to do a chest CT and now there are at least 15,” he said. “Subspecialists may order a small range of studies frequently, but a PCP may order one study for a given indication two or three times a year. CDS is a good idea because it provides actionable, reliable advice on the right study to order, prior to ordering, when the clinician is trying to determine what they need to order.”

Besides, it’s the law, and failing to implement successfully could get in the way of your reimbursement. The Protecting Access to Medicare Act (PAMA) of 2014 mandated CDS in the outpatient, ED, ASC, and IDTF settings, and the fun begins with a year of testing and evaluation on January 1, 2020. “At that point, you need to consult a Qualified CDS Mechanism (QCDSM) and demonstrate that you’ve done so as you move forward,” Mogel related. “In year 2, our payment will be at risk if we complete and interpret studies for which we cannot document that appropriate use criteria were consulted prior to ordering. In the years beyond that, other punitive measures will theoretically be put in place.”

Keys to Successful Implementation

Mogel reminded all that CDS systems date back to the early 2000s, and many of them failed, typically for two reasons: people did not trust the information in the system, and they could not figure out how to use it. With gold-standard information available in the form of the ACR Appropriateness Criteria (ACR-AC) and a variety of qualified delivery mechanisms to choose from, present-day implementers have a real opportunity to succeed.

Mogel outlined the implementation process and offered the following advice based on hundreds of NDSC implementations:

DEMONSTRATE A FEEDBACK MECHANISM. Your users need to feel that the content in the QCDSM reflects what they do and that the words reflect their language. Therefore, your vendor must have real-time engagement with subject matter experts to convey user input. For instance, in his role with NDSC, Mogel communicates telephonically with the ACR Rapid Response Committee to convey the gems culled from the constant, real-time feedback provided by people who disagree with a score, can’t find an appropriate indication, or spot an inappropriate one. The AC committees get re-seated every several years, so the Rapid Response Committee is there to make adjustments and additions in the interim.

MAKE IT EASY. Ideally, order-entry decision support works like this: the clinician searches the orderables in the EHR for head CT, and the system returns the options. The clinician selects the desired option and is shown a new screen via the CDS application that asks for an indication. If the indication is scored 7-9 (green/appropriate), the order is entered. If it is scored 4-6 (yellow/possibly inappropriate) or 1-3 (red/inappropriate), the user receives a Best Practice Advisory that shows studies with a higher appropriateness score, along with some information about cost and relative radiation exposure.

“Most importantly,” said Mogel, “they are given the opportunity to remove the study they ordered and replace it with one of the studies that score more highly. The only way to get it to work on a large scale, quickly and efficiently, is to embed it in the EMR. Doing the right thing has to be the easy thing to do.”
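
For readers who think in code, the flow Mogel describes reduces to a simple banding rule. What follows is a minimal sketch, not any vendor’s implementation: the 1-9 score bands come from his description, while the function names and data fields are invented for illustration.

    # Minimal sketch of the order-entry scoring flow described above.
    # The 1-9 score bands follow the talk; the names and the
    # "alternatives" structure are hypothetical, not a vendor API.

    def score_band(score: int) -> str:
        """Map a 1-9 appropriateness score to its color band."""
        if 7 <= score <= 9:
            return "green"   # appropriate: the order is entered
        if 4 <= score <= 6:
            return "yellow"  # possibly inappropriate
        return "red"         # inappropriate

    def handle_order(score: int, alternatives: list[dict]) -> dict:
        """Enter the order, or raise a Best Practice Advisory with better options."""
        if score_band(score) == "green":
            return {"action": "enter_order"}
        # Yellow or red: surface higher-scoring studies, with cost and
        # relative radiation exposure, and offer to swap the order.
        better = [a for a in alternatives if a["score"] > score]
        return {"action": "best_practice_advisory",
                "suggestions": sorted(better, key=lambda a: -a["score"]),
                "allow_replace": True}

The point of the sketch is Mogel’s observation above: the advisory only works if replacing the study is a one-click action embedded in the EMR.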

FIRST STEPS. Your first tasks will be to pick your vendor, map your exams to the content provider’s exam names, and take care of the technical issues: unexciting but necessary preliminaries. Mogel recommends carefully considering your next steps, which are critical to your implementation. “In planning implementations, there are options,” he advised. “Understand your options.”

DECIDE WHICH EXAMS TO COVER. Deciding which exams and which indications you want to cover is the first clinical decision you will make. “The ACR-AC covers a huge range of exams and clinical scenarios—do you want to cover them all on the first day, or do you want to have a focused core of exams?” he asked.

One option is to cover the studies CMS identified as priority clinical areas, including low back pain, neck pain, shoulder pain, headache, and a few others. “Clearly, there is something they will do with them, but they haven’t actually suggested what it is,” he noted. “Some people think that PAMA will focus on those first when it is time to withhold payment or be punitive about red scores.”

Implement the most impactful, actionable areas first, such as the CMS priority clinical areas, and perhaps a few others that show up frequently, such as abdominal pain, he recommends. “That will minimize the burden on clinicians, give them a little less exposure and, hopefully, more acceptance. You can begin to expand the CDS coverage once you have some success.”

DECIDE WHEN TO TURN ON FEEDBACK. After a site determines which exams to cover, data collection begins in silent mode. Clinicians will start seeing the request for exam indications, and appropriateness rates can be tracked and monitored. The next critical decision is when to start exposing clinicians to the exam score by activating the feedback mechanism for their exam selections.

Don’t wait more than six months in silent data-collection mode before turning on the feedback mechanism, Mogel advised. “In other words, the longer you have this turned on with physicians entering an unnecessary click without giving them feedback about why they are doing it, the greater the chance that the initiative will lose its velocity and its energy,” he explained. “I can understand why not turning it on immediately would be right for some places also.”

DECIDE WHERE TO TURN ON FEEDBACK. Feedback can be turned on in stages for different clinicians and locations. It can be activated by location, targeted to certain levels of training, and withheld from attendings. For instance, you can turn on feedback for primary care but not for specialists. Do not begin in the ED, Mogel advised.
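
As a thought experiment, that staging can be written down as a small rollout policy. The sketch below is hypothetical (the location and role names are invented), but it gates feedback exactly as Mogel suggests: on for primary care, held back for specialists and attendings, and off in the ED while silent data collection continues there.

    # Hypothetical staged-rollout policy; all names are illustrative only.
    FEEDBACK_POLICY = {
        "locations": {"main_campus_outpatient"},  # ED deliberately excluded
        "roles": {"primary_care", "resident"},    # specialists/attendings held back
    }

    def feedback_enabled(location: str, role: str) -> bool:
        """True if this clinician should see appropriateness feedback."""
        return (location in FEEDBACK_POLICY["locations"]
                and role in FEEDBACK_POLICY["roles"])

    # A primary care clinician at the outpatient site sees feedback;
    # an ED attending stays in silent data-collection mode.
    assert feedback_enabled("main_campus_outpatient", "primary_care")
    assert not feedback_enabled("emergency_department", "attending")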

DECIDE WHICH CASES WILL RECEIVE FEEDBACK. “We don’t tell people generally that the study they order is green,” Mogel reported. “We have not found that to be very successful except for certain very narcissistic individuals. Most people assume that what they order is the right study.” However, you will need to decide whether to only deliver feedback when the order is red—an inappropriate study—or report to clinicians both inappropriate (red) and possibly inappropriate (yellow) studies. “That’s a cultural decision that we have to make,” Mogel said.

Airing the Dirty Laundry

Mogel shared a case study reported in a peer-reviewed journal in which the institution successfully reduced its rate of inappropriate orders to 5%, cut its possibly inappropriate studies in half, and increased the proportion of indicated studies from 64% to 82%. “That’s a win,” he said. However, he also shared that 70% of the studies were never scored, a problem that plagues every CDS study going back to the CMS Demonstration Project: in nearly every study published, between 20% and 80% of studies are not scored. “That is the dirty laundry,” Mogel said.

Mogel identified two reasons a study goes unscored. The minor reason is that no score exists for a particular structured indication. “None of these QCDSMs make content; we get content from specialty societies,” he said, “and that content currently does not cover everything. This problem is shrinking every day.”

The bigger problem is an age-old one: 85% of the time there is no score, it is because the clinician did not want to put in a structured reason for the exam. NDSC offers 8 to 12 common indications for every study, but it also offers a free-text alternative. “Guess what people do?” Mogel asked. “We are radiologists; the problem of history precedes us. If you read, you know that you see it every day: rule out, follow up, pain, I want it.”

Requiring a clinician to input a structured indication is even more of a problem, structurally and logistically, than expecting a prose history, he pointed out. A clinician may know that they want to order a study for pancreatitis, but that is not a structured indication; it is a diagnosis. “Some patients that you are imaging for pancreatitis do have the diagnosis of pancreatitis, and there is a study that is appropriate for them,” said Mogel. Most patients for whom clinicians choose pancreatitis are merely suspected of having it; the real indication for the study is fever and epigastric pain, which calls for a completely different study than follow-up of known pancreatitis.

Nonetheless, a structured indication is critically important, because free text does not get scored. Currently, an ordering clinician will get a decision support number even if “two cats”—a favorite in Mogel’s collection of free-text indications—is written into the box. That is unlikely to last, since the intention is for the patient to get the right study. Also, clinicians only get best-practices feedback if their orders are scored, and that feedback is the only opportunity to have an impact on ordering behavior.

Handling Free-Texters, Building Buy-in

Researchers at the University of Pennsylvania found that 60% of their studies were unscored during the silent-mode data collection period, and that 97% of free-text indications already existed as structured indications. “The real work is getting people to see that it is easier to click this box than to type in pneumonia when pneumonia is already here,” he explained. “Ninety-seven percent of these indications were right there. This is the work: it is not sexy, and it is hard.”
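
The Penn numbers suggest that most free text could be resolved mechanically against the existing pick list. Here is a minimal sketch of that idea, assuming nothing more than normalized string matching; the indication codes and helper names are invented, since real pick lists come from the content provider.

    # Illustrative only: check whether a free-text entry already exists
    # as a structured indication after trivial normalization.
    STRUCTURED_INDICATIONS = {
        "pneumonia": "CHEST-012",               # codes are made up
        "fever and epigastric pain": "ABD-044",
        "follow-up of known pancreatitis": "ABD-045",
    }

    def normalize(text: str) -> str:
        """Lowercase and collapse whitespace so trivial variants still match."""
        return " ".join(text.lower().split())

    def match_free_text(entry: str) -> str | None:
        """Return the structured indication code if the free text is already there."""
        return STRUCTURED_INDICATIONS.get(normalize(entry))

    print(match_free_text("  Pneumonia "))  # -> CHEST-012
    print(match_free_text("two cats"))      # -> None: genuinely unscorable

Per the Penn finding, roughly 97% of free-text entries would resolve the way “pneumonia” does here; the remainder is the true content gap.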

Mogel’s suggestions for handling the problem include:

  • Evaluate the data: look for patterns, and identify exams or indications that draw a lot of free-text input.
  • Communicate with free-texters and ask what’s missing.
  • Make sure that the narrow range of studies that specialists order is in their favorites.
  • Keep the stakeholders engaged by showing them the reports with their percentages of green, yellow, red, and no score. Don’t be punitive with those who have high red scores; use it as an opportunity to educate. “For free-texters, you want to be a little punitive,” Mogel said. “Say, ‘C’mon, work with us.’”
  • You will have data: Analyze it, release it to users, department chairs, administration, and wherever it will do the most good.
  • Remove free text as an option once users trust the system.

While the work Mogel outlined above can be reduced to a data collection initiative, a CDS implementation will succeed only when it is embedded in the culture. “All of this work is just to get to the data, but it doesn’t necessarily change behavior,” he said. “This is where the real improvement in health care is. It is people trusting the system and using the system. It’s accountability and visibility.”

In conclusion, he urged implementers to:

  • Analyze the data and release it to users, department chairs, health systems, and administrators.
  • Build buy-in before you begin. “Telling physicians they have to do it because it is the law, or to be compliant, or someone else says they have to do it—that never works.”
  • Give data feedback to individual users. “Physicians do like data and they love data about themselves.”

As for the role of artificial intelligence, Mogel believes the field could help solve radiology’s age-old lack-of-history problem and alleviate CDS’s free-text issue: “Fundamentally, we think AI can help to create tight EMR integration, to prevent physicians from having to reenter information that has already been entered, and, whether through natural language processing or something else, to have the system understand why the study is being requested.”

 
