Photo: Sarah Silbiger/Getty Images
The Food and Drug Administration (FDA) has released guidance providing recommendations on what information should be included in a predetermined change control plan (PCCP) as part of a marketing submission for an AI-enabled medical device.
"This final guidance is part of the FDA's broader commitment to develop and apply innovative approaches to the regulation of device software functions and contains recommendations for manufacturers to support iterative improvement through modifications to an artificial intelligence-enabled device software function while continuing to provide a reasonable assurance of device safety and effectiveness," the agency said in a Linkedin post.
"This guidance is intended to provide recommendations on certain types of modifications that, at this time, FDA believes generally may be appropriate for inclusion in a PCCP for an AI-DSF. This guidance is not intended to delineate a comprehensive list of modifications FDA would consider appropriate for inclusion in a PCCP for an AI-DSF," the agency wrote.
The agency suggests that manufacturers use the Q-Submission Program to obtain FDA feedback on a proposed PCCP for a specific device and submission type before submitting a marketing submission.
Although manufacturers are encouraged to discuss their plans via a presubmission, the FDA does not authorize a PCCP through a presubmission.
The agency says that, when using an authorized PCCP to implement device modifications, the manufacturer should update the device's labeling as specified in the authorized PCCP.
"For AI-DSFs with an authorized PCCP, the labeling should explain that the device incorporates machine learning and has an authorized PCCP so that users are aware that the device may require the user to perform software updates, and that such software updates may modify the device’s performance, inputs or use," the agency wrote.
According to the FDA, the description of modifications section in a PCCP should identify the specific, planned modifications to the AI-DSF that the manufacturer intends to implement. This description should include the specifications for the characteristics and performance of the device that, following the verification and validation activities described in the modification protocol, can be implemented without a new marketing submission.
"To achieve these goals, FDA recommends that the description of modifications enumerate the list of individual proposed device modifications discussed in the PCCP, as well as the specific rationale for the change to each part of the AI-DSF that is planned to be modified. In some situations, a Description of Modifications may consist of multiple modifications," the agency wrote.
The agency notes that guidance documents generally do not establish legally enforceable responsibilities; rather, they describe the agency's current thinking on a topic and should be viewed as recommendations, unless specific regulatory or statutory requirements are cited.
"The recommendations in this guidance apply to AI-enabled devices, including the device constituent part of device-led combination products, reviewed through the 510(k), De Novo, and PMA pathways," the agency wrote.
"As technology continues to advance all facets of healthcare, medical software incorporating AI , including the subset of AI known as machine learning (ML), has become an integral part of many medical devices."
According to the FDA, the guidance is intended to provide a forward-thinking approach to encourage the development of safe and effective AI-enabled devices.
THE LARGER TREND
In October, the FDA released its perspective on regulating AI in healthcare and biomedicine, stating that oversight needs to be coordinated across all regulated industries, international organizations and the U.S. government.
The FDA said it regulates industries that distribute their products to the global market, and therefore U.S. regulatory standards must be compatible with international standards.
In August, the EU AI Act came into effect, which outlines regulations for the development, market placement, implementation and use of artificial intelligence in the European Union.
According to the Act, high-risk use cases of AI include:
- implementing the technology within medical devices.
- using it for biometric identification.
- determining access to services like healthcare.
- any form of automated processing of personal data, and emotion recognition for medical or safety reasons.
The Council stated that the Act intends to "promote the uptake of human-centric and trustworthy artificial intelligence while ensuring a high level of protection of health, safety, [and] fundamental rights … including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation."