CDS Coalition releases voluntary guidelines for unregulated clinical decision support software

By Jonah Comstock

Computers, whether through simple decision trees or complex neural networks, are playing an ever-larger role in helping healthcare providers make treatment decisions. As the role of clinical decision support software in healthcare grows, so too does the need for guidelines that allow clinicians and patients to trust and understand that software.

This week the CDS Coalition, an industry group led by Bradley Merrill Thompson and Kim Tyrrell-Knott of the law firm Epstein Becker Green, released the final version of its voluntary industry guidelines for medium-risk clinical decision support software.

The guidelines seek to add a layer of regulatory clarity beyond the 21st Century Cures Act, which stipulates that the FDA will regulate only high-risk clinical decision support. In the introductory comments to the guidelines, the coalition explains where the lines fall between three risk categories: high, medium, and low.

"FDA will regulate high risk CDS software where, among other things, (1) the user does not have a reasonable opportunity to review the basis of a recommendation and (2) the software performs important functions where, should the software not work as intended, someone could get seriously hurt," they write. "Further, in our judgment, low risk CDS software where the risk of injury is low regardless of whether there is reasonable opportunity to review the basis for the recommendation need not be burdened by these guidelines. Instead, these guidelines focus on what we call 'medium risk clinical decision support' software or 'MRCDS,' which naturally enough falls between those two other categories."

In part, the document lays out as clearly as possible which kinds of clinical decision support software can fly under the FDA's radar. But the voluntary guidance goes further, seeking in some cases to impose a higher standard in the name of industry self-regulation.

"At their core, these guidelines are intended to give software developers a framework for discerning whether additional validation – beyond that which they would ordinarily do – is required as a consequence of the software taking over decision-making from healthcare professionals. These guidelines reflect the view that taking over, in any substantial way, the healthcare decision-making carries with it heightened responsibility for validation," the guidelines state.

The guidelines, previously circulated as a draft for public comment, lay out a tripartite test for whether a piece of clinical decision support software is medium risk. First, is the software transparent about its methods for generating suggestions and recommendations? Second, is the intended user of the software sufficiently trained and knowledgeable to make a decision without it? Third, does the software give the user adequate time to reflect on the recommendations it has presented?
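The test amounts to a three-way conjunction, and its logic can be sketched as a simple self-assessment check. The Python sketch below is purely illustrative: the class, function, and field names are hypothetical and are not drawn from the coalition's document, and the framing follows this article's description of the test rather than any official criteria.

```python
from dataclasses import dataclass

@dataclass
class CDSAssessment:
    """Hypothetical self-assessment inputs for a CDS product.

    The field names mirror the coalition's three questions as described
    in this article; they are illustrative, not an official schema.
    """
    transparent_methods: bool            # discloses the basis for its recommendations
    user_can_decide_independently: bool  # intended user could decide without the software
    adequate_time_to_reflect: bool       # workflow leaves time to review the recommendation

def is_medium_risk(assessment: CDSAssessment) -> bool:
    """Return True only if all three prongs of the tripartite test are met.

    Per the article's framing, meeting all three keeps software in the
    medium-risk band the guidelines address rather than the high-risk
    band the FDA intends to regulate. A sketch, not regulatory advice.
    """
    return (
        assessment.transparent_methods
        and assessment.user_can_decide_independently
        and assessment.adequate_time_to_reflect
    )

# Opaque recommendations in a time-pressured workflow fail the test.
print(is_medium_risk(CDSAssessment(False, True, False)))  # False
print(is_medium_risk(CDSAssessment(True, True, True)))    # True
```

In practice, of course, each prong is a judgment call rather than a boolean, which is why the document runs to 40 pages.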

While these three criteria are simple on the surface, they are nuanced in practice, as the 40-page document attests. In particular, as Thompson explained in an accompanying LinkedIn post, the concept of transparency is a little murky in a world where CDS is increasingly wrapped up with machine learning and artificial intelligence.

"This is an emerging area, and we recognize that many people are studiously working to figure out a way to make machine learning software less of a black box," he wrote. "For example, biomedical research scientists are working to address the challenge of articulating machine learning models in a clear and concise manner. In the meantime, the guidelines lay out five key steps developers can take to address the need to empower the user to review the basis for the recommendation."

Although these guidelines are "final" in a sense, the coalition acknowledges that the technology will continue to change, and it plans to update the guidelines accordingly.
