As the healthcare industry’s relationship with data becomes increasingly sophisticated, executives, supervisors and other key decision makers have more tools than ever to measure and guide the performance of their organization. Collecting and surfacing new streams of raw data can only help to a certain extent, however, as even the most insightful individuals will struggle to identify actionable opportunities when facing an inelegant glut of information.
“Today’s dashboards are a bit static. Yes, they get a daily feed, maybe from the [EHR], or yes, they provide trends. But generally, you are left with the information that’s on the dashboard — it is difficult to click through,” said Dr. Jean Drouin, CEO of Clarify Health Solutions, an artificial intelligence-enabled healthcare insights startup.
Could dashboards become more dynamic and more granular in the actionability of their suggestions using AI and machine learning?
“The answer is yes,” Drouin said. “It’s just that the difficulty for a CIO these days is differentiating those who are using ML and AI in an actionable way, and those who are making bold claims but aren’t really delivering anything that’s either actionable or fixes an issue where there’s much of an ROI.”
What goals can AI dashboards best achieve?
If an organization is looking to integrate AI into its monitoring dashboard, the first step is ensuring that any implementation will actively work toward increasing the dashboard user’s awareness and understanding of operations, Drouin said.
“A good dashboard is something that individuals who need to monitor or to act on something can go to to rapidly answer a couple of key questions,” he said. “I think of it really in terms of [whether] the ML and AI help to provide information more quickly. Does it help to provide more precise or trusted information, or does it help to provide more actionable information? It turns out, depending on the metric, ML and AI can help with all three, and I think it’s helpful to first think [where] it can really be helpful, or what concept can be smoke and mirrors.”
For Mudit Garg, CEO and founder of Qventus, which uses machine learning and AI to create highly reliable operations for healthcare, AI is an opportunity to offload a “tremendous amount of cognitive load” by intelligently automating data interpretation. As such, an organization deciding where best to point the technology might want to start by targeting analysis projects a human employee could perform, if given an open schedule.
“If you, as an end user, had completely free time to be looking at this all the time, what are the decisions you would make?” Garg asked. “You don’t want to focus on things you can’t do anything about. It’s a lot more about what is the decision you are trying to make, who makes it, what is the cost and benefit of that decision, and then with all that information you think about what is the best path of operation?”
Drouin’s recommendation was to look to other industries: much in the way banking has already employed ML and AI to stratify loan risks, a similar approach could be taken to gauge a patient’s risk of readmission or other complications, or to identify and draw insights from the performance of an individual physician.
“That kind of information about the relative performance of a physician can then be delivered into a physician’s monthly dashboard about their performance, or on an administrator’s dashboard [for] the items I need to discuss with my physician when I have my quarterly discussion about his or her performance,” he explained. “Where AI can be useful in this context is rather than have an analyst have to spin through lots and lots of analyses to find which physicians are more variated in their performance, or [among] what types of patients that physician has variance in performance, you can get a computer to very, very quickly go in and get all the outliers.”
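In practice, that outlier hunt can be automated in a few lines of analysis. The sketch below is purely illustrative (the metric, numbers and threshold are assumptions, not Clarify Health’s actual method), but it shows how a robust z-score can quickly surface the physicians whose results sit far from their peers:

```python
import pandas as pd

# Hypothetical risk-adjusted readmission rates; names and values are invented.
df = pd.DataFrame({
    "physician": ["A", "B", "C", "D", "E", "F"],
    "readmission_rate": [0.11, 0.09, 0.22, 0.10, 0.12, 0.08],
})

# Robust z-score (median/MAD) so one extreme value cannot mask itself by
# inflating the standard deviation; 0.6745 rescales the MAD to match a
# standard deviation under a normal distribution.
median = df["readmission_rate"].median()
mad = (df["readmission_rate"] - median).abs().median()
df["robust_z"] = 0.6745 * (df["readmission_rate"] - median) / mad

# Flag the outliers an analyst would otherwise have to hunt for by hand.
print(df[df["robust_z"].abs() > 3.0])
```

The same pattern extends to slicing by patient type, which is the second question Drouin raises: group by cohort first, then compute the score within each group.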
Regardless of its specific task, an AI implementation can’t drive change in an organization without clearly communicating actionable insights to its user.
“The end goal is making sure the right things are happening reliably and consistently,” Garg said. “As a person responsible to make decisions in my department, [rather than] showing me everything that’s going on and relying on my cognitive ability to spot what I think and to figure out what to do with that … draw my attention to the thing I should be paying attention to, then [connect] it to what I should do, and make it really straightforward for me to do that.”
Trusting insights from the black box
As with any tool, it’s important to ensure that the data and insights provided by an AI tool are accurate. Certain AI and ML designs in particular can obfuscate the means by which they reached a conclusion, and that “black box” effect can greatly undermine trust in the software.
“The insight needing to be trusted is massive. One of the common complaints, particularly from physicians, is that it’s a black box,” Drouin said. “The ability to un-black box this stuff and show how it is that one arrived at a given value, or how one derives it, is in my mind absolutely essential. It’s true from a statistical point of view that black boxes often give you a bit more precision. Having the courage to trade off on that precision to un-black box it, and in particular [so that] clinical users are able to trust it, in my mind is a valid tradeoff. What that requires is that the ML approach [has] not improved something by a couple of percent, but that it improved it a lot — you say ‘fine, it improved it by 20 percent. I’m going to take 3 percent out so I’m still improving it by 17 percent, but I’m not needing a black box.’”
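Drouin’s 20-versus-17-percent arithmetic can be made concrete by training a black-box model and a transparent one on the same data and measuring the gap. The sketch below uses synthetic data and off-the-shelf scikit-learn models as a hedged illustration; it is not Clarify Health’s methodology:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a readmission dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The opaque model and the one whose coefficients can be shown to clinicians.
black_box = GradientBoostingClassifier().fit(X_train, y_train)
transparent = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Quantify exactly how much precision the transparent model gives up.
for name, model in [("black box", black_box), ("transparent", transparent)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

If the gap between the two numbers is small relative to the overall lift, that is precisely the trade Drouin argues is worth making.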
As far as identifying and correcting any mistakes that may arise, Garg noted that such investigations often fall short if every variable isn’t taken into account.
“Too often, people only measure the AI or ML portion of it, which is whether the model is performing well — I think that is a pitfall of being too focused on the AI,” Garg said. “The whole pipeline of operationalization needs to be measured and observed because there are many things that can throw it off [the] rails, all the way from data being available, to data being good quality, to the machine learning model doing what it should do, to the information going to the right user [and] if the user acts on it.”
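That end-to-end view can itself be instrumented. A minimal sketch of such a pipeline health check appears below; the stage names and thresholds are illustrative assumptions, not Qventus’ actual monitoring:

```python
from datetime import datetime, timedelta, timezone

def pipeline_health(feed_last_updated, null_rate, model_auc,
                    alerts_delivered, alerts_acted_on):
    """Check every stage Garg lists, not just the model."""
    return {
        # 1. Is the data arriving at all?
        "data_available": datetime.now(timezone.utc) - feed_last_updated
                          < timedelta(hours=6),
        # 2. Is the data usable?
        "data_quality": null_rate < 0.05,
        # 3. Is the model still performing?
        "model_performance": model_auc > 0.75,
        # 4. Are insights reaching the right user?
        "delivery": alerts_delivered > 0,
        # 5. Is the user actually acting on them?
        "adoption": alerts_acted_on / max(alerts_delivered, 1) > 0.30,
    }

status = pipeline_health(
    feed_last_updated=datetime.now(timezone.utc) - timedelta(hours=2),
    null_rate=0.01, model_auc=0.81, alerts_delivered=40, alerts_acted_on=9,
)
print(status)  # everything passes except adoption: 9/40 = 0.225 < 0.30
```

The example output makes Garg’s point: a healthy model can still fail in operation if its alerts never move the user.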
To integrate, or spin out
While augmenting a dashboard to include new AI capabilities is certainly a possibility, Garg and Drouin alike mused on whether or not such an approach would allow the technology to reach its full potential. Garg in particular said that his own experience working in healthcare had soured him on traditional, unintelligent monitoring tools.
“For a long time, the national goal for healthcare has been toward having more dashboards that can give information back. The challenge with that is that there are now so many dashboards and reports and graphs for people on the front line to look at, and no one really has time to look at them,” he explained.
“The supervisor sitting and observing that information, he or she might get this push or nudge through what might look like a dashboard with all this information. My point is that this screen isn’t ever useful. It’s just that [it's] the traditional way of building that; cramming all that information you could possibly need for any situation.”
As an alternative, Garg advocated for the adoption of independent AI platforms that, unlike static dashboards, are designed from the ground up with a focus on interactivity and insight generation. These kinds of tools, which are Qventus’ primary offering, can more directly communicate a single idea without the clutter of unrelated data.
“As an example, I could show you in a dashboard that it got busy in the ER yesterday at 4 o’clock. You’ve got to log in, you have to talk to people to figure out why, etc. … it just requires a tremendous amount of effort,” Garg said. “Versus, in the moment, the AI looking at how cold it is outside, what’s going on in the area, if Dr. Smith is working and he orders more labs and that [the] labs are slow. [An AI tool looks] at all the stuff right now without expecting the user to process that complicated information, and then says ‘yeah, this looks like a situation where we might not have capacity, and it might be best to focus on one of the bottlenecks we have on the lab side.’ That has a dramatically higher chance of changing what actions will be taken in that situation, versus the dashboard that someone looks at after the fact.”
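Stripped to its essentials, that in-the-moment reasoning is a scoring function over live contextual signals. The features, weights and threshold below are invented for illustration; a real system would learn them from historical emergency department data:

```python
def ed_capacity_risk(outside_temp_f, local_event, high_order_physician_on,
                     lab_turnaround_min):
    """Combine live contextual signals into a single capacity-risk score."""
    score = 0.0
    score += 0.25 if outside_temp_f < 20 else 0.0      # cold snaps drive visits
    score += 0.25 if local_event else 0.0              # nearby events add volume
    score += 0.20 if high_order_physician_on else 0.0  # ordering patterns slow flow
    score += 0.30 if lab_turnaround_min > 90 else 0.0  # slow labs bottleneck beds
    return score

# A cold day, a local event, a high-ordering physician on shift, slow labs.
risk = ed_capacity_risk(15, True, True, 110)
if risk > 0.6:
    print("Nudge: ED may hit capacity; consider adding lab staff now.")
```

The user never sees the inputs, only the nudge, which is the cognitive offloading Garg describes.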
Drouin addressed these kinds of challenges as well, but wasn’t so quick to write off the option of modifying an existing dashboard. For some organizations, he said the end user’s familiarity with the existing system and the logistical burdens of implementing a new system could make retrofitting the more appealing route.
“Presumably, the ML or AI-driven metrics could, in an ideal world, be fed directly into an existing dashboard,” Drouin said. “So if one accepts that premise, the questions are about to what degree you trust the metric or the piece of information that happens to have been derived from ML or AI. If you say it comes at the required periodicity, I trust it, and it has an actionable purpose, then presumably it meets the criteria you would have had anyhow for adding a piece of information to a dashboard.”
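That acceptance test translates naturally into a simple gate. In the sketch below, the field names and the 24-hour refresh requirement are illustrative assumptions; the logic simply mirrors Drouin’s three criteria of periodicity, trust and actionable purpose:

```python
from dataclasses import dataclass

@dataclass
class MLMetric:
    name: str
    refresh_hours: float     # how often the value is recomputed
    validated: bool          # has the metric been audited and trusted?
    has_linked_action: bool  # does it point the user to a concrete next step?

def belongs_on_dashboard(m: MLMetric, required_refresh_hours: float = 24) -> bool:
    # The same bar any conventional metric would have to clear.
    return (m.refresh_hours <= required_refresh_hours
            and m.validated
            and m.has_linked_action)

readmission_risk = MLMetric("30-day readmission risk", 24, True, True)
print(belongs_on_dashboard(readmission_risk))  # True: feed it to the dashboard
```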