Exploring international guidelines for artificial intelligence-powered medical devices
In the rapidly advancing world of artificial intelligence (AI), the regulatory approval process for AI-enabled software as a medical device (SaMD) faces significant challenges worldwide. Traditional medical device regulations, designed for static devices with fixed functions, struggle to accommodate AI systems that continuously learn, adapt, and execute complex autonomous workflows without constant human oversight [1][2][5].
One of the main challenges is managing the complexities of adaptive and autonomous AI. Regulators grapple with overseeing ongoing changes to AI algorithms post-approval while balancing innovation and patient safety [2][5]. The Food and Drug Administration's (FDA) use of predetermined change control plans (PCCPs) is one approach to anticipating algorithm changes without requiring full re-approval each time, although higher-risk changes still necessitate additional review [2][5].
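To make the idea concrete, the sketch below shows how a developer might gate a model update against acceptance criteria fixed at authorization time, in the spirit of a PCCP. It is a minimal illustration only: the metric names, thresholds, and data structures are hypothetical, not values or formats from any FDA guidance.

```python
# Minimal sketch of gating a model update against pre-specified performance
# bounds, in the spirit of a predetermined change control plan (PCCP).
# All names, thresholds, and metrics are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class PccpBounds:
    """Acceptance criteria pre-specified at authorization time."""
    min_sensitivity: float
    min_specificity: float
    max_auc_drop: float  # allowed AUC regression vs. the cleared model

def update_within_pccp(candidate: dict, cleared: dict, bounds: PccpBounds) -> bool:
    """Return True if the candidate update stays inside the PCCP envelope.

    `candidate` and `cleared` hold validation metrics on a locked test set,
    e.g. {"sensitivity": 0.94, "specificity": 0.91, "auc": 0.96}.
    """
    return (
        candidate["sensitivity"] >= bounds.min_sensitivity
        and candidate["specificity"] >= bounds.min_specificity
        and cleared["auc"] - candidate["auc"] <= bounds.max_auc_drop
    )

bounds = PccpBounds(min_sensitivity=0.92, min_specificity=0.90, max_auc_drop=0.01)
cleared = {"sensitivity": 0.94, "specificity": 0.91, "auc": 0.96}
candidate = {"sensitivity": 0.95, "specificity": 0.92, "auc": 0.97}
print(update_within_pccp(candidate, cleared, bounds))  # True: deployable under the plan
```

Under a scheme like this, an update that falls outside the pre-agreed envelope would trigger a new regulatory submission rather than a silent deployment.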
Another concern is regulatory fragmentation and a lack of harmonization. Different global regions impose distinct regulatory requirements and risk classifications on AI SaMD, complicating compliance for developers seeking international market access. These differences span clinical validation standards, data privacy laws, and oversight mechanisms [2][3].
Bias, fairness, and transparency raise further concerns. Ensuring that AI tools perform equitably across diverse populations is difficult, especially when training data are biased or limited. Regulatory authorities therefore emphasize transparency and risk mitigation measures to protect patient safety and equitable care [4].
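One concrete way developers probe for such disparities is a subgroup performance audit. The sketch below computes sensitivity per demographic group and flags groups that fall below an acceptance floor; the data, group labels, and floor are hypothetical.

```python
# Minimal sketch of a subgroup performance audit: per-group sensitivity
# to surface disparities before release. Data and threshold are hypothetical.
from collections import defaultdict

def sensitivity_by_group(records):
    """records: iterable of (group, y_true, y_pred) with binary labels."""
    tp, pos = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            tp[group] += y_pred
    return {g: tp[g] / pos[g] for g in pos}

records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 0),
]
per_group = sensitivity_by_group(records)
print({g: round(s, 2) for g, s in per_group.items()})  # {'A': 0.67, 'B': 0.33}

FLOOR = 0.80  # hypothetical acceptance floor for every subgroup
flagged = [g for g, s in per_group.items() if s < FLOOR]
print(flagged)  # ['A', 'B'] -- both groups would warrant investigation here
```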
Regulating emergent generative AI systems that produce new medical text or images raises novel safety and accuracy issues and carries unpredictable regulatory and clinical risks [1][4].
Despite these challenges, regulatory frameworks for AI-enabled SaMD are evolving rapidly. Innovations such as predetermined change control plans, harmonized guidance, and adaptive oversight models exemplify ongoing advancements [1]. The FDA had authorized more than 1,000 AI/ML-based medical devices by mid-2024 and has introduced initiatives such as PCCPs to streamline updates to continuously learning algorithms while maintaining safety [3][4][5].
The European Union (EU) has integrated AI SaMD under its stringent Medical Device Regulation (MDR) and the Artificial Intelligence Act, emphasizing risk-based classification, human oversight, and post-market surveillance. Following industry feedback, EU policymakers have also moved to ease regulatory burdens to better balance innovation and safety [2][3].
International harmonization efforts, such as joint guidance from European Notified Bodies and standardized questionnaires, represent practical steps toward regulatory convergence [2]. Collaborative standardization initiatives, such as those involving the FDA, AAMI, and BSI, strengthen regulatory science collaboration and further aid the development of risk management and standards for AI/ML medical devices [5].
When assessing the safety and effectiveness of algorithms within an AI-enabled SaMD, the FDA considers factors including data quality, robustness, and clinical performance. Quality system and post-market requirements, including adverse event reporting, apply to AI-enabled devices [5].
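As one illustration of what a robustness assessment can look like in practice, the sketch below measures how often a toy classifier's output flips under small input perturbations. The model, noise scale, and inputs are hypothetical stand-ins, not an FDA-prescribed test.

```python
# Minimal sketch of one robustness probe: how often do predictions flip
# under small input perturbations? All details are illustrative.
import random

def flip_rate(predict, inputs, noise=0.05, trials=200, seed=0):
    """Fraction of perturbed predictions that differ from the prediction
    on the clean input -- lower means more robust near these inputs."""
    rng = random.Random(seed)
    flips = total = 0
    for x in inputs:
        base = predict(x)
        for _ in range(trials):
            flips += int(predict(x + rng.uniform(-noise, noise)) != base)
            total += 1
    return flips / total

predict = lambda x: int(x > 0.5)   # toy threshold classifier
inputs = [0.2, 0.49, 0.51, 0.9]    # two values sit near the decision boundary
print(flip_rate(predict, inputs))  # roughly 0.2, driven by 0.49 and 0.51
```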
Under the 510(k) pathway, applicants must demonstrate that their device is substantially equivalent to a device the FDA has already authorized (a predicate device) [5]. The majority of AI-enabled devices in the US reach the market via the 510(k) pathway, while devices without a predicate can be authorized through De Novo classification [5].
AI-enabled software tools intended to assist with administrative tasks such as scheduling, inventory, or financial management are exempt from FDA regulation [5]. In January 2025, the FDA issued draft guidance entitled "Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations."
AI is particularly well suited to pattern recognition through image and waveform analysis, making it a natural fit for medical imaging and radiology devices [5]. PAPNET, the first AI device approved (in 1995), was more accurate than human pathologists at diagnosing cervical cancer but proved insufficiently cost-effective for widespread adoption [5].
AI-enabled software that matches patient data to current treatment guidelines for common illnesses is likewise exempt from FDA regulation [5]. AI algorithms can be "locked" or "adaptive" (also called "continual learning"), depending on whether the learning component is removed or retained after deployment [5].
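The locked/adaptive distinction is easy to see in code. In the toy sketch below (the model, update rule, and data are purely illustrative), the locked model's behavior is frozen after validation, while the adaptive model's decision boundary drifts as it learns from post-market feedback:

```python
# Toy contrast between "locked" and "adaptive" algorithms using a
# one-feature threshold classifier. All details are illustrative.

class ThresholdModel:
    """Classifies x as positive when x exceeds a learned threshold."""
    def __init__(self, threshold: float):
        self.threshold = threshold

    def predict(self, x: float) -> int:
        return int(x > self.threshold)

class LockedModel(ThresholdModel):
    """Locked: the learning component is removed; behavior is fixed
    after validation, so every deployed copy behaves identically."""
    pass  # no update method, by design

class AdaptiveModel(ThresholdModel):
    """Adaptive (continual learning): the threshold keeps moving with
    post-deployment data, so behavior can drift after authorization."""
    def __init__(self, threshold: float, lr: float = 0.1):
        super().__init__(threshold)
        self.lr = lr

    def update(self, x: float, y_true: int) -> None:
        # Nudge the threshold in the direction that reduces the error.
        error = y_true - self.predict(x)
        self.threshold -= self.lr * error

locked, adaptive = LockedModel(0.5), AdaptiveModel(0.5)
for x, y in [(0.6, 0), (0.55, 0), (0.7, 0)]:  # post-market feedback
    adaptive.update(x, y)
print(locked.threshold, adaptive.threshold)    # 0.5 vs. a drifted value
```

It is precisely this post-deployment drift that mechanisms such as PCCPs are meant to keep within pre-specified bounds.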
As we navigate this evolving landscape, developers, engineers, and regulators must carefully consider the data an algorithm will have access to for continued learning, so as to ensure patient safety and equitable care [5]. The FDA has recognized that traditional medical device regulations were not designed for AI and has issued guidance documents addressing the issue since 2021 [5].
- The regulatory approval process for AI-enabled software as a medical device (SaMD) faces challenges from adaptive and autonomous AI complexities and from ongoing post-approval changes to algorithms.
- Regulatory fragmentation and lack of harmonization present concerns as different global regions have distinct regulatory requirements for AI SaMD, making compliance complex for developers seeking international market access.
- Ensuring AI tools perform fairly across diverse populations is important, but challenging when training data are biased or limited, raising concerns about bias, fairness, and transparency.
- Regulating emergent generative AI systems that produce new medical texts or images raises novel safety and accuracy issues and poses unpredictable regulatory and clinical risks.