Last fall, a group of experts advising the U.S. Food and Drug Administration debated for two days over how to regulate generative artificial intelligence tools in medicine. One report presented at the meeting showed that a generative AI tool supposedly used by 40% of radiology practices in the U.S. produced clinically significant errors in one of every 21 reports.
Those errors, “I’ll be honest, gave me palpitations,” said committee chair Ami Bhatt, chief innovation officer at the American College of Cardiology, at the meeting. “And I don’t just say that because I’m a cardiologist.” The committee considered many complicating factors in generative AI regulation, but the FDA has not yet issued any guidelines on how it plans to police the technology.
In Europe, things are moving faster. In April, the U.K.’s National Health Service announced it would regulate highly popular ambient AI scribes as Class 1 medical devices. Earlier, in March, the first generative AI tool for providing medical information, “Prof. Valmed,” was certified in Europe as a medium-to-high-risk medical device. While the U.S. FDA has shied away from judging whether all medical generative AI tools count as medical devices and need approval, the decisions from the U.K. and Europe raise the question yet again for American regulators.