The Society of General Internal Medicine released a position statement on generative AI use in healthcare, highlighting the need for clinicians and technologists to partner to enhance genAI decision-making and for healthcare organizations to use the technology while avoiding "the urge to rapidly displace the human clinical workforce with generative AI."
The Society says clinicians must remain receptive to the use of genAI but approach the technology similarly to other biomedical advancements by critically evaluating its evidence and appraising its value in the clinical setting.
It's also necessary for clinicians to incorporate genAI tools in the clinical setting thoughtfully, recognizing that "errors and omissions are the major technical weakness of generative AI."
For technologists developing genAI for use in healthcare, there must be a significant shift away from the expectation that genAI will be "supervised" by clinicians.
"Rather, these organizations and individuals should hold themselves and their technologies to the same set of high standards expected of the clinical workforce and strive to design high-performing, well-studied tools that improve care and foster the therapeutic relationship, not simply those that improve efficiency of market share," the Society wrote.
The organization also says deep and ongoing partnerships between technologists, clinicians and patients are necessary to understand real-world user needs and ensure that output can be viewed as ground truth.
The Society says genAI "must provide obvious and intuitive mechanisms for verification and error-proofing."
Healthcare organizations should pursue incremental and transformative change with genAI, but avoid replacing the human workforce with these tools. Instead, organizations should partner with physicians to determine where genAI is being accepted into clinical workflows.
Additionally, care organizations should focus on using genAI for preventative care and chronic condition management, as these areas have been shown to have the largest impact on modern internal medicine practice.
"We affirm that the practice of medicine remains a fundamentally human endeavor which should be enhanced by technology, not displaced by it," the Society wrote.
THE LARGER TREND
As generative AI has made its way into healthcare, more organizations and governments are delineating how the technology should be utilized and regulated.
In October, the Food and Drug Administration (FDA) released its perspective on regulating AI in healthcare and biomedicine, stating that oversight needs to be coordinated across all regulated industries, international organizations and the U.S. government.
The agency also said that regulators must "advance flexible mechanisms to keep up with the pace of change in AI," and that there must be transparency about the use of AI in premarket development and post-market performance.
It added that regulation must be created to balance the needs of the healthcare ecosystem, from large companies to startups, and that regulators must concentrate on patient health outcomes while weighing AI use for financial optimization by health systems, developers and payers.
In May, the Bipartisan Senate AI Working Group released a roadmap for AI policy in the U.S. Senate that encourages the Senate Appropriations Committee to fund cross-government AI research and development projects, including research for biotechnology and applications of AI that would fundamentally transform medicine.
The senators also wrote that legislation should provide transparency requirements for providers and the general public to understand AI's use in healthcare products and the clinical setting, including information on the data used to train the AI models.