Workflow-driven AI: Introducing the Automated Impression

“If we want AI to succeed for radiologists, the long-term vision needs to be focused around making each aspect of the radiologist’s daily life easier and better, in such a way that the AI is nearly invisible. It needs to work exactly as a radiologist would expect.”

Jeffrey Chang, MD, MBA
Co-founder, Rad AI
November 20, 2019

Jeffrey Chang, MD, MBA—co-founder of Rad AI, a machine learning start-up based in Berkeley, CA, and a private practice radiologist—has an explanation for why there are dozens of FDA-cleared AI applications that most radiologists are not using: for the most part, they impede, rather than improve, workflow.

Chang, who graduated from medical school at age 20, gave members of Strategic Radiology a 40-minute demonstration of Rad AI’s first product for the radiology marketplace, an application that automatically generates an Impression from a radiologist’s Findings and Indications.

While radiologists tend to be protective of the Impression section of their reports, the fact that Rad AI is trained on tens of thousands of each radiologist’s own reports, becoming more proficient with that radiologist’s own language and style, may soften their grip. Here’s another salient fact from Chang: “It’s amazing how much time radiologists spend on their Impressions. We’ve measured it, and radiologists average 30% to 35% of report time spent on Impressions—of course, that varies depending on the radiologist.”

AI Status Report

Chang observes that AI is neither good nor bad. “It’s a tool, and its impact, good or bad, depends entirely on how it is applied,” he says. Before delving into how Rad AI’s first application works, he offers insight into why AI adoption has been slow in radiology, the challenges to success that are unique to radiology, and where he thinks the opportunity lies.

Most AI applications fall into six categories, a taxonomy Chang attributes to the FDA and to Hugh Harvey of Kheiron Medical:

  1. CAD-E (detection)
  2. CAD-X (diagnosis)
  3. CAD-Q (quantification)
  4. CAST (smart triage)
  5. Radiomics and Biomarkers
  6. Workflow Automation

Chang points out that workflow has been one of the most misunderstood aspects of radiology for developers. “Machine learning models mostly aren’t designed to be practically useful for radiologists,” Chang notes. “At many radiology groups, the workflow is already optimized and carefully customized in hundreds of ways. You can’t just throw in an AI model and expect it to work seamlessly in that workflow.”

On the other hand, he believes that automating workflow presents the greatest opportunity for developers, predicting that there could be as many applications in workflow automation as in all the other categories combined.

“Radiologists have limited time,” he emphasizes. “As reimbursements keep dropping, the amount of work we need to do every day only rises. Of course we can’t spare time for AI products that make us slower, add more work, stay permanently in the medical record, or don’t have any measurable benefits. No matter how technically brilliant those applications happen to be, it’s no surprise that they fall by the wayside.”

Data Is a Problem

Another major problem is limited datasets. “Data is the lifeblood of AI, which is why many of us share concerns that AI is one field in which China can pull ahead of us in the next decade,” Chang notes. “Unlike quantum computing or fusion reactors, machine learning doesn’t require much in the way of hardware research and development. You just need more data, cleaner data, segmented data. In the US, data is highly siloed and protected per HIPAA security and privacy rules.”

Furthermore, once you have data, it needs to be cleaned and pre-processed. If a machine learning model is based on incorrectly labeled data, inaccurate data, or data that is dependent on individual radiologists’ varying perspectives, then the results of that machine learning model will be unreliable, Chang says. Those errors need to be identified and repaired, and inconsistencies need to be addressed—a daunting, time-consuming task.
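As one concrete illustration of what that cleaning can mean in practice (a minimal sketch, not Rad AI’s pipeline; the schema and records below are invented), a first pass might simply flag studies whose labels disagree across annotators so they can be re-adjudicated before training:

```python
from collections import defaultdict

# Toy records: (study_id, annotator, label). In a real pipeline these would
# come from a labeling database; the schema here is purely illustrative.
annotations = [
    ("study-001", "rad_a", "nodule"),
    ("study-001", "rad_b", "nodule"),
    ("study-002", "rad_a", "nodule"),
    ("study-002", "rad_b", "no_nodule"),  # disagreement -> needs review
]

labels_by_study = defaultdict(set)
for study_id, _annotator, label in annotations:
    labels_by_study[study_id].add(label)

# Studies with more than one distinct label are inconsistent and should be
# re-adjudicated before they are used to train a model.
needs_review = [s for s, labels in labels_by_study.items() if len(labels) > 1]
print(needs_review)  # ['study-002']
```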

“In almost every case, AI product design has been driven by the need to apply existing engineering solutions to limited datasets, resulting in models that do well in testing and validation but can’t actually work in a production environment,” he says, citing the experience at Stanford, which tested three commercially available pulmonary nodule detection products, all FDA-cleared. All failed on different cases, and one product was so inaccurate on Stanford data that it was clinically useless.

Noise is another problem. “The model is not just learning to read the patterns that you and I read, the ones visible to the human eye; it’s learning to interpret from what all of us would consider noise, the nondescript artifact we see on every CT. Different scanners have very different noise patterns. If a model has never seen data from that scanner before, then it might consider much of the data to be unknown—in that case, who knows what results the model will produce?”
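One common mitigation, offered here as a generic illustration rather than anything Chang described, is to augment training images with noise spanning the range that different scanners produce, so a model cannot latch onto one scanner’s noise signature:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment_with_scanner_noise(image: np.ndarray, sigma_range=(0.0, 25.0)) -> np.ndarray:
    """Add Gaussian noise of randomly drawn strength, loosely mimicking the
    varying noise characteristics of different CT scanners. The sigma range
    (in HU) is an assumed, illustrative value."""
    sigma = rng.uniform(*sigma_range)
    return image + rng.normal(0.0, sigma, size=image.shape)

# Toy 'CT slice' standing in for real pixel data.
slice_hu = np.zeros((512, 512))
augmented = augment_with_scanner_noise(slice_hu)
print(augmented.std())  # noise level varies from call to call
```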

How We Work

Compared to radiology, tasks like business intelligence, self-driving cars, StarCraft gameplay, language translation, automated quality control in factories, and even writing stories are considered lower-hanging fruit for AI, says Chang.

Tremendous variability in how individual radiologists interpret medical images is yet another big challenge for machine learning, because applications must account for each radiologist’s habits and practice patterns. “To build something that is seamless, simple, and fast for every radiologist, you really have to understand how each radiologist does his or her work,” Chang emphasizes.

Finally, radiology is home to an incredible range of edge cases—thousands of types of findings that any radiologist would see only once or twice in a career, if ever. “Traditional neural networks don’t do well with data they only ever see once,” Chang explains. “One-shot and few-shot training involve model architectures that learn from very small datasets, but they are generally difficult to apply to radiology.”

Chang shared an image of a free-floating IUD in the pelvis. “It could be any one of a dozen IUDs, all with a different appearance, and could be anywhere in the abdomen or pelvis in any orientation,” he says. “If it were aligned craniocaudally, all you might see is a bright metallic dot on this image. How many hundreds of uterine perforation cases does your model need to see before it knows how to describe a finding like this?”
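To make the few-shot idea concrete: a prototypical-network-style classifier averages a handful of embedded examples into one “prototype” per class, then labels new cases by the nearest prototype. The sketch below substitutes random vectors for real image embeddings and is meant only to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend embeddings: 3 support examples for each of two classes. In reality
# these would come from a trained image encoder, not random draws.
support = {
    "free_floating_iud": rng.normal(0.0, 1.0, size=(3, 64)),
    "normal_pelvis": rng.normal(5.0, 1.0, size=(3, 64)),
}

# One prototype per class: the mean of its few support embeddings.
prototypes = {name: emb.mean(axis=0) for name, emb in support.items()}

def classify(query: np.ndarray) -> str:
    """Assign the query embedding to the class with the nearest prototype."""
    return min(prototypes, key=lambda name: np.linalg.norm(query - prototypes[name]))

query = rng.normal(5.0, 1.0, size=64)  # drawn near the 'normal_pelvis' cluster
print(classify(query))                 # 'normal_pelvis'
```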

“Radiology is really complicated,” he concludes. “We all spend at least half a decade learning to be a radiologist, and many more years honing our craft. I tend to call radiology the most complex form of pattern recognition known to humankind. That makes it really tough for machine learning.”

Building the machine learning model itself is only a tiny part of deploying AI in a production environment, Chang notes. The vast majority of the code lies outside the model: data pipeline management, model training and review, production infrastructure, and quality testing.
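As a rough sketch of that ratio (illustrative only, not Rad AI’s architecture; every function name here is invented), the model call is a single line, while ingestion, validation, and quality checks account for most of the code:

```python
# Illustrative skeleton only: the model call is one line; everything around
# it (ingestion, validation, quality checks) is where most of the code lives.

def ingest(raw_report: str) -> dict:
    """Parse and normalize an incoming report (data pipeline management)."""
    return {"findings": raw_report.strip()}

def validate(example: dict) -> bool:
    """Reject malformed or empty inputs before they reach the model."""
    return bool(example["findings"])

def run_model(example: dict) -> str:
    """Stand-in for the trained model; a real system would load weights here."""
    return "IMPRESSION: " + example["findings"][:40]

def quality_check(output: str) -> bool:
    """Post-hoc checks (quality testing) before the result is surfaced."""
    return output.startswith("IMPRESSION:")

def serve(raw_report: str) -> str | None:
    example = ingest(raw_report)
    if not validate(example):
        return None
    output = run_model(example)
    return output if quality_check(output) else None

print(serve("Findings: no acute cardiopulmonary abnormality."))
```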

“User experience is key,” Chang asserts. If the following four questions cannot be answered in the application’s favor, it is likely to fail in the real world:

  1. Is it more or less work for the radiologist? 
  2. Is it intuitive and familiar for radiologists? 
  3. Does it live within existing workflow, or do you have to exit the workflow to run the product?
  4. Does the product improve the relevant metrics, while allowing the radiologist to maintain their exact workflow? 

Saving Time: Automating the Impression

When Chang set out to create a product, he and the Rad AI team asked themselves this question: How can we help radiologists?

The answer was to improve radiologists’ lives by automating tedious and repetitive tasks, starting with the Impression. “We can save them time so that radiologists can focus on the most interesting parts of interpretation, get home less exhausted, and still enjoy what it means to be a radiologist,” Chang says.

Rad AI automatically generates a report Impression from the Findings and Indications sections of the report, customized to each radiologist’s individual language preferences. It has been trained on more than 10 million radiology reports, including more than 20,000 reports from each radiologist user. Because it is integrated seamlessly into PowerScribe and Fluency, using the application involves zero clicks under normal circumstances.
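The article does not describe Rad AI’s internals, but the per-radiologist customization can be pictured as conditioning generation on a style profile learned from that radiologist’s past reports. A toy sketch, with all names and profile fields invented for illustration:

```python
# Toy illustration of per-radiologist style conditioning. A real system would
# learn these preferences from tens of thousands of that radiologist's
# reports; here the 'profile' is a hand-written stand-in.

STYLE_PROFILES = {
    "dr_chang": {"numbered": True, "negation": "No evidence of"},
    "dr_lee": {"numbered": False, "negation": "Negative for"},
}

def generate_impression(findings: list[str], radiologist: str) -> str:
    profile = STYLE_PROFILES[radiologist]
    items = findings or [f"{profile['negation']} acute abnormality."]
    if profile["numbered"]:
        return "\n".join(f"{i}. {item}" for i, item in enumerate(items, 1))
    return " ".join(items)

print(generate_impression(["Pulmonary embolism in the right lower lobe."], "dr_chang"))
print(generate_impression([], "dr_lee"))
```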

The current release applies to all CT and X-ray studies; Chang expects MRI and ultrasound applications to be available in the next few months. Right now, the product increases radiologist productivity on CT by 20% to 25% and on X-ray by 15%. At the same time, it improves report accuracy, and is customized to each individual radiologist.

“It doesn’t change a thing in how a radiologist works; there is no need for extensive training because it doesn’t affect workflow at all,” Chang says. “Using automatic deployment software, it can be rolled out across a group’s workstations in a matter of minutes.”

For a side-by-side demonstration of the product, Chang dictated an impression for a chest CTA with numerous significant Findings while the application generated its Impression on the other side of the screen. Dictating the impression manually took 2 minutes and 20 seconds; the application finished in 12 seconds. The output language is customized to each radiologist, based on how that radiologist formulates, structures, and words their impressions.
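Taken at face value, that gap compounds quickly. A back-of-the-envelope calculation, where the daily study count is an assumed figure for illustration and the complex CTA case likely overstates the typical per-study saving:

```python
manual_seconds = 2 * 60 + 20   # 2 min 20 s dictating the impression by hand
automated_seconds = 12         # time the application took in the demo
studies_per_day = 60           # assumed daily volume, for illustration only

saved_per_study = manual_seconds - automated_seconds        # 128 s
saved_per_day_min = saved_per_study * studies_per_day / 60  # ~128 minutes
print(f"{saved_per_day_min:.0f} minutes saved per day")
```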

Augmenting Performance

Not only does the application save radiologists time, it also helps them remember every finding of importance by automatically pulling significant findings into the Impression.

“For AI to really augment us as radiologists, we have to make sure we’re teaching it to solve the right problem,” he says. “We need to give it a deep understanding of each facet of the problem, the complexities of the edge cases and exceptions, and allow for the individual preferences of each radiologist. AI has to be designed for how we work as radiologists; otherwise, it won’t succeed in augmenting us.”

Over the next few years, Chang predicts that we will see more carefully integrated automation of the repetitive, the mundane, and the disruptive tasks in radiology. “From the protocoling of studies, to image orientation errors, to having to repeat the same actions dozens of times a day, all of these can be improved with carefully developed AI products,” he suggests.

“If we want AI to succeed for radiologists, the long-term vision needs to be focused around making each aspect of the radiologist’s daily life easier and better, in such a way that the AI is nearly invisible,” he concludes. “It needs to work exactly as a radiologist would expect.”

“We as radiologists must lead the way on the future of AI in radiology,” he continues. “If we don’t, someone else will create the vision of what healthcare looks like in fifteen years, someone who doesn’t understand the science and the art of radiology, who might neglect to focus on the patient, who sees health care as just another vertical to be automated with AI.”

“The tools of AI are here,” he adds, “and they are getting more advanced every day—what are we going to do with them?”
