Regulatory Compliance for SaMD and AI-Based Medical Devices – What You Must Know
Software as a Medical Device (SaMD) and artificial intelligence (AI)-driven healthcare solutions are transforming modern medicine, offering enhanced diagnostic capabilities, real-time decision support, and improved patient outcomes. However, these innovations also introduce complex regulatory challenges across different jurisdictions. The regulatory landscape for SaMD and AI-based medical devices is evolving to ensure safety, effectiveness, and compliance with global health standards.
This guide explores the key regulatory requirements in the United States (FDA), European Union (EU MDR), and United Kingdom (MHRA), providing insights into classification, approval pathways, risk management, post-market surveillance, and emerging AI-specific compliance trends.
Regulatory Definitions and Classification of SaMD and AI-Based Devices
United States (FDA)
Defining SaMD and AI-Based Medical Devices
The US Food and Drug Administration (FDA) aligns its regulatory approach with the International Medical Device Regulators Forum (IMDRF), defining Software as a Medical Device (SaMD) as software intended to be used for one or more medical purposes that performs those purposes without being part of a hardware medical device. In practical terms, this includes mobile apps for clinical decision support, AI-powered diagnostic tools, and cloud-based patient monitoring platforms.
For AI-driven medical software, the intended use determines whether the software is classified as a medical device. If AI is used for disease diagnosis, treatment, or prevention, it must comply with FDA medical device regulations. However, software purely for administrative functions (e.g. hospital scheduling tools) is not considered a medical device.
FDA's Risk-Based Classification System
The FDA employs a three-tier risk-based classification model, where the potential impact on patient safety determines the level of regulatory scrutiny (a small illustrative sketch follows this list):
Class I (Low risk): Covers basic wellness applications, such as fitness trackers or symptom checkers. Many Class I devices are 510(k)-exempt, meaning they do not require premarket submission.
Class II (Moderate risk): Includes AI-powered clinical decision-support tools that assist clinicians rather than making autonomous decisions. Most require 510(k) clearance, demonstrating substantial equivalence to a legally marketed predicate device.
Class III (High risk): Reserved for AI-driven autonomous diagnostic systems or life-critical applications, such as AI that autonomously detects strokes on CT scans. These devices require Premarket Approval (PMA), involving rigorous clinical validation and extensive safety testing.
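To make the three tiers concrete, here is a minimal, purely illustrative Python sketch. The function name and inputs are hypothetical, and real classification is determined by the device's intended use and the applicable FDA classification regulation, not by code:

```python
# Illustrative sketch only: actual FDA classification is determined by the
# device's intended use and the applicable classification regulation.

def sketch_fda_class(drives_clinical_action: bool, life_critical: bool) -> str:
    """Rough mapping of the three-tier description above to a class label."""
    if drives_clinical_action and life_critical:
        return "Class III - Premarket Approval (PMA)"
    if drives_clinical_action:
        return "Class II - usually 510(k) clearance"
    return "Class I - often 510(k)-exempt"

# Example: a clinician-assist decision-support tool that is not life-critical
print(sketch_fda_class(drives_clinical_action=True, life_critical=False))
```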
European Union (EU MDR 2017/745)
Definition and Scope of Regulation
Under the EU Medical Device Regulation (MDR 2017/745), software intended for a medical purpose is classified as Medical Device Software (MDSW). Unlike the FDA, the EU does not distinguish between SaMD and Software in a Medical Device (SiMD)—any software influencing medical decisions falls under regulation, even if it interacts with hardware.
For AI-based devices, any system that processes patient data and informs clinical decisions is considered a medical device. This means AI-based tools used in diagnosis, monitoring, and personalised treatment recommendations are subject to MDR compliance.
EU MDR Risk-Based Classification – Rule 11
EU MDR Annex VIII, Rule 11 governs software classification:
Class III (Highest risk): Covers software providing information used for decisions that could cause death or an irreversible deterioration of health, such as AI that recommends or alters life-sustaining treatments.
Class IIb: Includes software informing decisions that could cause a serious deterioration of health or require surgical intervention (e.g. oncology AI models assessing tumour malignancy).
Class IIa: The default class for decision-support software, such as AI that flags potential abnormalities without directly driving critical decisions.
Class I (Low risk): Covers software that provides general monitoring or wellness insights without influencing medical decisions.
EU MDR applies stringent requirements to software, particularly under Rule 11, which classifies many AI-based medical devices as Class IIa or higher when they significantly influence clinical decisions. However, some software may remain Class I if it does not directly drive or alter medical decisions.
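A similarly hedged sketch of the Rule 11 logic summarised above. The function name and harm labels are hypothetical, and a real classification requires a documented assessment against MDR Annex VIII rather than a lookup function:

```python
# Illustrative sketch of the Rule 11 logic summarised above; not a substitute
# for a documented classification assessment against MDR Annex VIII.

def sketch_rule_11_class(informs_decisions: bool, worst_case_harm: str) -> str:
    """worst_case_harm is a hypothetical label for the consequence of acting on a wrong output."""
    if not informs_decisions:
        return "Class I"
    if worst_case_harm == "death_or_irreversible_deterioration":
        return "Class III"
    if worst_case_harm in ("serious_deterioration", "surgical_intervention"):
        return "Class IIb"
    return "Class IIa"  # default for decision-informing software under Rule 11

print(sketch_rule_11_class(True, "serious_deterioration"))  # -> Class IIb
```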
United Kingdom (MHRA)
Post-Brexit SaMD & AI Regulations
Post-Brexit, the UK Medicines and Healthcare products Regulatory Agency (MHRA) continues to regulate medical devices under the UK MDR 2002, originally derived from EU law. While the UK currently follows EU-like classification rules, its Software and AI as a Medical Device Change Programme aims to develop AI-specific regulations.
Current Classification Approach
The UK maintains the Class I, IIa, IIb, III model, similar to EU MDR. However, the MHRA is reviewing AI regulations to introduce:
Clearer risk classifications for AI-based medical devices.
Guidelines on AI model transparency and explainability.
New post-market surveillance rules to address evolving AI models.
Regulatory Approval Pathways – FDA (United States) for SaMD & AI-Based Devices
Navigating the US Food and Drug Administration (FDA) approval process for Software as a Medical Device (SaMD) and AI-driven medical devices is one of the most complex and critical steps in bringing an innovative healthcare solution to market. The regulatory pathway chosen depends largely on risk classification, intended use, and the presence (or absence) of a legally marketed predicate device.
Below, we explore the four primary approval routes for SaMD and AI-based medical technologies, providing deeper insights, practical considerations, and best practices to help manufacturers streamline regulatory compliance.
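Before looking at each route in detail, here is a minimal, hypothetical sketch of the decision logic the following sections walk through. The inputs, names, and output strings are illustrative only; real pathway selection is agreed with the FDA, often via a Pre-Submission (Q-Sub) interaction:

```python
# Hedged sketch of the pathway logic described in the sections below.
# Inputs and outputs are illustrative, not an FDA decision tool.

def pick_fda_pathway(device_class: int, has_predicate: bool,
                     breakthrough_designated: bool = False) -> str:
    if device_class == 3:
        pathway = "Premarket Approval (PMA)"
    elif device_class == 2 and has_predicate:
        pathway = "510(k) clearance"
    else:
        pathway = "De Novo classification request"
    if breakthrough_designated:
        pathway += " with Breakthrough Devices Program interactions"
    return pathway

print(pick_fda_pathway(device_class=2, has_predicate=False))  # -> De Novo
```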
1. 510(k) Clearance – The Most Common Route for Moderate-Risk SaMD & AI Tools
What is 510(k) Clearance?
The 510(k) pathway is the fastest and most widely used premarket route for moderate-risk Class II medical devices, including AI-powered clinical decision-support tools. The fundamental requirement is substantial equivalence (SE): the new device must be demonstrated to be as safe and effective as a legally marketed predicate device. Some devices may instead qualify under the Safety and Performance Based Pathway, which does not require a direct predicate comparison.
Key Requirements for 510(k) Clearance
For a successful 510(k) submission, manufacturers must provide:
✔ Predicate Device Identification – The reference device must be legally marketed and have a similar intended use.
✔ Comparative Performance Data – The new device must be shown to be as safe and effective as the predicate.
✔ Software Validation & Verification – Proof that the AI-driven system performs reliably under intended conditions.
✔ Human Factors & Usability Testing – Ensuring safe and effective user interactions.
✔ Cybersecurity & Data Integrity Evidence – Compliance with FDA cybersecurity guidance, especially for AI models processing sensitive patient data.
Challenges & Considerations
Lack of Suitable Predicate Devices – Many innovative AI-based solutions lack direct equivalence to existing devices, forcing manufacturers to consider the De Novo pathway instead.
Regulatory Scrutiny on AI Variability – AI-based tools can learn and adapt over time, making static equivalence comparisons challenging. The FDA requires that AI devices either maintain a fixed algorithm at clearance or implement an authorised Predetermined Change Control Plan (PCCP) that allows future modifications without a new 510(k) submission (a minimal sketch of a PCCP's structure follows this list).
Growing Focus on Real-World Performance Data – The FDA increasingly expects manufacturers to conduct post-market performance monitoring to validate real-world effectiveness.
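The PCCP mentioned above is, in essence, a pre-agreed description of what may change and how each change will be validated. Here is a minimal, hypothetical sketch of how a team might represent one internally; the field names and contents are illustrative, not an FDA template:

```python
# Hypothetical internal representation of a Predetermined Change Control Plan.
# Field names and contents are illustrative only; the actual PCCP is defined
# and agreed with the FDA as part of the marketing submission.

pccp_sketch = {
    "description_of_modifications": [
        "Retrain the model on additional, representative imaging data",
        "Adjust the decision threshold within a pre-specified range",
    ],
    "modification_protocol": {
        "data_management": "curated, labelled datasets with documented provenance",
        "retraining": "fixed architecture; only weights and threshold may change",
        "performance_evaluation": "sensitivity/specificity must meet or exceed the cleared baseline",
        "update_procedure": "staged rollout with a documented rollback plan",
    },
    "impact_assessment": "benefit-risk analysis for each pre-specified change type",
}

for change in pccp_sketch["description_of_modifications"]:
    print("Pre-specified change:", change)
```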
Advantages of 510(k) Clearance
Faster time to market (typically 3-9 months to clearance).
Lower regulatory burden compared to De Novo or PMA.
Predictable pathway for AI solutions with clear predicates.
2. De Novo Classification – The Pathway for First-of-Its-Kind AI Medical Devices
What is the De Novo Pathway?
The De Novo classification process is designed for novel medical devices that lack a suitable predicate but present low to moderate risk (Class I or II). This is a critical route for AI-based devices that introduce new functionalities, such as machine-learning-driven diagnostics or predictive analytics tools.
When is De Novo Classification Required?
If the device does not fit into an existing classification and lacks a predicate for 510(k) clearance.
If the FDA finds a 510(k) submission not substantially equivalent (NSE) because the device differs too much from existing products.
If the AI device offers a new method of diagnosis, monitoring, or treatment recommendation that has not been previously regulated.
Key Submission Requirements
Risk-Based Justification – Demonstrating why the device is not high-risk (Class III) and why no existing classification applies.
Clinical Performance Data – Unlike many 510(k) devices, De Novo submissions often require clinical validation to prove safety and effectiveness.
Software & AI Model Transparency – The FDA expects transparency and explainability in AI decision-making to ensure accountability in clinical use.
Comprehensive Risk Management – Following ISO 14971 risk management standards.
Challenges & Considerations
Longer approval process (typically 9-18 months compared to 510(k)’s 3-9 months).
Higher regulatory burden, requiring a full risk assessment and clinical evaluation.
Higher costs, due to clinical validation studies and additional regulatory interactions.
Strategic Advantage of De Novo Classification
Once the FDA grants a De Novo request, it creates a new regulatory classification, allowing future AI devices of the same type to use the De Novo device as a predicate for 510(k) clearance, which significantly simplifies approval for subsequent products.
3. Premarket Approval (PMA) – The Most Stringent FDA Pathway for High-Risk AI Devices
What is the PMA Process?
The Premarket Approval (PMA) process is reserved for Class III medical devices that pose significant health risks if they fail. This includes AI-driven autonomous diagnostics, AI-integrated robotic surgical systems, and AI-based life-support systems.
Why Does PMA Have the Strictest Requirements?
- Comprehensive Clinical Trials – Manufacturers must conduct rigorous human clinical trials, often comparable in scale to pharmaceutical approvals.
- Extensive Safety & Efficacy Data – The FDA requires a full benefit-risk analysis proving the device improves patient outcomes without unacceptable risk.
- Manufacturing Oversight – Companies must establish Quality Management Systems (QMS) compliant with FDA 21 CFR Part 820.
- Post-Market Surveillance Requirements – Ongoing performance tracking to detect any adverse effects once deployed.
Challenges & Considerations
Significantly longer approval timeline (1-3 years on average).
Extremely high costs (millions of dollars in clinical trials and regulatory fees).
Highly unpredictable approval—many PMA applications face extensive revisions or outright rejections.
Who Needs PMA?
AI-based software that makes direct, autonomous clinical decisions.
High-risk diagnostic AI tools, such as AI-driven cancer detection.
AI-integrated robotic surgical systems.
PMA Considerations for AI Developers
Given the challenges, AI companies often try to avoid PMA by:
Applying De Novo first, if possible.
Limiting AI decision-making autonomy, keeping a human in the loop so the device qualifies for a lower risk classification.
Leveraging real-world evidence (RWE) to supplement clinical trials.
4. Breakthrough Devices Program – Accelerating AI Innovation in Healthcare
What is the FDA Breakthrough Devices Program?
The Breakthrough Devices Program is an expedited development and review route for AI-driven medical devices that address serious or life-threatening conditions where no adequate alternatives exist. It accelerates FDA interaction and review, but devices still reach the market through 510(k), De Novo, or PMA.
Key Benefits
Faster FDA feedback and priority review.
Eligibility for rolling submissions, reducing approval time.
Close collaboration with FDA experts, improving approval success rates.
Who Qualifies?
AI software addressing critical unmet medical needs (e.g., AI-driven stroke detection in emergency care).
AI systems improving diagnostic speed and accuracy in life-threatening conditions.
AI solutions for rare diseases or high-mortality conditions.
Challenges & Considerations
Breakthrough status does not guarantee final approval – devices must still meet all safety and effectiveness standards.
Requires a robust evidence base demonstrating significant clinical benefit.
EU MDR (CE Marking Process) – Approval Pathway for AI & SaMD
The CE marking process under EU MDR involves multiple regulatory steps:
Classification: Most AI-based medical devices fall into Class IIa or higher.
Technical Documentation: Manufacturers must provide a comprehensive risk assessment, software validation reports, and evidence of regulatory compliance.
Notified Body Review: Class IIa, IIb, and III software must undergo conformity assessment by a Notified Body designated under the MDR.
Clinical Evaluation: AI-driven SaMD must demonstrate clinical performance and safety, typically via Clinical Evaluation Reports (CERs).
CE Marking & Post-Market Surveillance: Once approved, manufacturers must maintain ongoing performance monitoring.
EU MDR has significantly increased regulatory scrutiny for SaMD and AI, making compliance a more resource-intensive process than under the previous Medical Device Directive (MDD).
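As a rough illustration of how a manufacturer might track the steps above internally, here is a hypothetical, simplified checklist. The entries assume common supporting standards (e.g. ISO 14971 for risk management, IEC 62304 for the software lifecycle) and are far from an exhaustive statement of MDR requirements:

```python
# Hypothetical, simplified tracker for the CE marking steps above.
# Entries are illustrative, not an exhaustive list of MDR requirements.

ce_marking_checklist = {
    "classification": "assessed against MDR Annex VIII (including Rule 11)",
    "technical_documentation": [
        "risk management file (ISO 14971)",
        "software lifecycle documentation (IEC 62304)",
        "clinical evaluation report (CER)",
    ],
    "notified_body_review": "required for Class IIa, IIb and III software",
    "post_market_surveillance": ["PMS plan", "periodic safety update report",
                                 "vigilance procedures"],
}

for step, evidence in ce_marking_checklist.items():
    print(f"{step}: {evidence}")
```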
MHRA (UKCA Marking Process) – Post-Brexit Medical Device Approval
The UK will continue recognising CE-marked devices until June 2028 for most medical devices and until 2030 for some in vitro diagnostic (IVD) devices. However, AI-based medical devices placed on the UK market without relying on a valid CE marking must obtain UKCA certification:
Class I: Self-certification.
Class IIa, IIb, III: Conformity assessment by a UK Approved Body designated by the MHRA.
Key Future Changes in UK AI Regulation
The UK is developing new AI-specific classification rules under the Software and AI as a Medical Device Change Programme, which may align with but not directly mirror EU MDR Rule 11.
New AI-specific regulatory requirements for transparency, performance validation, and algorithm drift monitoring (a minimal monitoring sketch follows this list).
Streamlined premarket pathways for AI devices demonstrating exceptional clinical benefit.
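On the algorithm-drift point above, here is a minimal, hypothetical sketch of the kind of post-market check a manufacturer might automate. The metric, baseline, and tolerance are assumptions for illustration only:

```python
# Hypothetical post-market drift check: compare live performance against the
# baseline agreed at approval. Metric choice and tolerance are illustrative.

def performance_drifted(baseline_sensitivity: float,
                        observed_sensitivity: float,
                        tolerance: float = 0.05) -> bool:
    """Flag a drop in sensitivity larger than the agreed tolerance."""
    return (baseline_sensitivity - observed_sensitivity) > tolerance

if performance_drifted(baseline_sensitivity=0.93, observed_sensitivity=0.86):
    print("Drift beyond tolerance: investigate, document, and assess whether "
          "corrective action or regulator notification is required.")
```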
Conclusion
Navigating regulatory compliance for SaMD and AI-driven medical devices requires a comprehensive understanding of risk classification, approval pathways, software lifecycle compliance, cybersecurity, and post-market monitoring. The FDA, EU MDR, and MHRA are rapidly evolving their frameworks to address AI's unique challenges, ensuring patient safety while fostering innovation.
By integrating robust quality systems, clinical validation, and proactive regulatory engagement, manufacturers can streamline market entry and maintain compliance in an increasingly AI-driven healthcare ecosystem.