At AcademyHealth’s 2018 Health Datapalooza on Thursday, the US Food and Drug Administration offered a vote of confidence for artificial intelligence in healthcare, promising more refined strategies for regulation, touting its tech incubator for AI innovation, and announcing a new machine learning partnership with Harvard.
"We’re implementing a new approach to the review of artificial intelligence," FDA Commissioner Dr. Scott Gottlieb said. As one example, he pointed to the agency's approval earlier this year of a new clinical decision support software that uses AI algorithms to help alert neurovascular specialists of brain deterioration faster than existing technologies.
"AI holds enormous promise for the future of medicine, and we’re actively developing a new regulatory framework to promote innovation in this space and support the use of AI-based technologies," Gottlieb said. "So, as we apply our Pre-Cert program — where we focus on a firm’s underlying quality — we’ll account for one of the greatest benefits of machine learning — that it can continue to learn and improve as it is used."
Doing so, he said, "may allow a firm to make certain minor changes to its devices without having to make submissions each time."
FDA also plans to ensure its regulatory framework and software validation tools are "sufficiently flexible to keep pace with the unique attributes of this rapidly advancing field," Gottlieb said.
That said, the agency still needs to "establish appropriate guardrails for patients," he emphasized. "And even as we cross new frontiers in innovation, we must make sure that these novel technologies can deliver benefits to patients by meeting our standards for safety and effectiveness. The technology won’t be scaled or reimbursed without that level of confidence that it protects and promotes patients."
Gottlieb said the agency is fully expecting to see an ever-increasing number of AI-powered healthcare tools submitted for approval in the years to come, from imaging devices to technology derived from other industries such as finance "that are already widely using AI platforms for fraud detection."
FDA's regulatory approach will "focus on the ways in which real-world data flows," he said. "This includes structured and unstructured data from pathology slides, electronic medical records, wearable devices, and insurance claims data. We want to better understand, and unlock, the ways that this data can be used to inform development and validation of AI devices."
Whether it's tools that can scan digital biomarkers for early diagnosis, tools that leverage EHRs to enable clinical trials at the point of care, or technology that someday "might even be taught to explain itself to clinicians," Gottlieb said he's excited about the prospects for AI in healthcare.
Toward that end, he pointed to ongoing work at FDA's internal data science incubator, called Information Exchange and Data Transformation, or INFORMED.
Among its projects, researchers are exploring opportunities for machine learning and AI to improve existing clinical practices, as well as the ways open-access platforms and technologies such as blockchain can enable the widespread and secure exchange of health data.
"Among the ongoing INFORMED projects are collaborations with Project Data Sphere, a nonprofit open-access cancer data repository, aimed at developing algorithms for classification of tumor dynamics using medical imaging data," he said.
FDA has also recently launched a fellowship program with Harvard on AI and machine learning, which is focused on designing, developing, and implementing algorithms for regulatory science applications, he added.
"It’ll look at developing new clinical endpoints and signal detection methods for evaluation of the safety and effectiveness of therapies," Gottlieb said. "These efforts also will help us develop new approaches for understanding variations in individual patient experience using diverse data sets from clinical trials, EHRs, and biometric monitoring devices."