Compliance Checklist: How to Meet the EU AI Act's High-Risk Requirements
A step-by-step guide to compliance for High-Risk AI Systems under the EU AI Act, with a focus on Technical Documentation and Conformity Assessment.
Summary:
This article provides a detailed checklist that High-Risk AI System providers must follow, covering Quality Management System (QMS) requirements, fundamental rights impact assessments, and the process for obtaining CE marking in accordance with the EU AI Act. Providers must adopt an iterative, lifecycle-based approach to Risk Management and create comprehensive Technical Documentation as the core evidence of compliance.
Audience:
Product Managers, AI Engineers, Legal Counsel in companies exporting or operating in the European Union.
Introduction:
The Imperative for Structured Compliance
The European Union’s Artificial Intelligence Act (AI Act) is the world's first comprehensive legal framework for regulating AI. It establishes a tiered, risk-based approach, with the most stringent obligations placed upon High-Risk AI Systems.
For providers—developers, manufacturers, or entities placing these systems on the EU market—compliance is not optional; it is the prerequisite for market access. Meeting the obligations requires a structured, lifecycle-based methodology, shifting AI development from an experimental process to a regulated engineering discipline.
This checklist provides the actionable steps to compile the Technical Documentation, complete the Conformity Assessment, and affix the necessary CE Marking.
Determine Your Risk Classification and Role
The first and most critical step is to legally confirm that your system is classified as High-Risk and to identify your precise role under the Act.
A. High-Risk Confirmation
A system is classified as High-Risk if it falls under one of two conditions, as defined by Article 6 and Annex III of the Act:
- Safety Components of Regulated Products: The AI system is intended to be used as a safety component of a product, or is itself a product covered by existing EU harmonisation legislation (e.g., Medical Devices Regulation (MDR), Machinery Regulation, Aviation Safety).
- Standalone Systems in Annex III: The system is explicitly listed in Annex III, which includes use cases that critically affect fundamental rights:
  - Biometric Identification and Categorisation (excluding those explicitly prohibited).
  - Critical Infrastructure (e.g., controlling traffic, managing energy or water supply).
  - Education and Vocational Training (e.g., determining access, assessing performance).
  - Employment, Worker Management, and Access to Self-Employment (e.g., recruitment, performance evaluation).
  - Essential Private and Public Services (e.g., assessing creditworthiness, dispatching emergency services).
  - Law Enforcement (e.g., polygraphs, predicting criminal activity).
  - Migration, Asylum, and Border Control (e.g., assessing eligibility, verifying documents).
  - Administration of Justice and Democratic Processes (e.g., influencing judicial outcomes).
B. Define Your Role: Provider vs. Deployer
The Provider (the entity that develops or has a High-Risk AI System developed and places it on the market) bears the bulk of the compliance burden, including the QMS, Technical Documentation, and CE Marking.
The Deployer (the user operating the system under its authority) has obligations related to logging, human oversight, and monitoring.
Actionable Step: Create a formal Use Case Document detailing the intended purpose, context, and environment. Have legal counsel confirm the Annex III classification and your legal role.
Establish the Foundational Governance:
QMS and RMS
Compliance under the AI Act is centered on a robust Quality Management System (QMS) and an iterative Risk Management System (RMS). These systems must be established before market placement.
A. Quality Management System (QMS)
The QMS must cover all aspects of the AI system's lifecycle. It acts as the organizational framework for compliance.
- Mandatory QMS Procedures (Article 17):
  - Regulatory Compliance Strategy: Documented procedures for adhering to the Act’s requirements.
  - Design and Development Control: Procedures covering the entire development process, including design specifications, testing, and validation.
  - Data Governance: Procedures for collecting, managing, and preparing data, ensuring data quality checks for relevance and representativeness.
  - Post-Market Monitoring (PMM): Procedures for gathering and reviewing performance data and incident reports once the system is live.
  - Management Review: Procedures for periodic internal auditing and management review of the entire QMS to ensure its effectiveness.
B. Risk Management System (RMS)
The RMS is a continuous process that must be updated throughout the entire lifecycle (design, development, deployment, and monitoring).
- Systematic Risk Identification: Identify risks to health, safety, and fundamental rights resulting from both the intended use and reasonably foreseeable misuse (e.g., unintended biases, vulnerabilities to cyberattacks, lack of transparency).
- Risk Evaluation: Assess the severity (impact) and probability (likelihood) of the identified risks.
- Risk Mitigation: Implement appropriate risk reduction measures.
  - Mitigation must prioritize technical solutions (e.g., design changes, data quality improvements) over procedural or informational controls.
- Documentation and Residual Risk: Document all steps, including the measures taken and the resulting residual risks.
  - The deployer must be explicitly informed of all known residual risks.
Expert Tip: Consider the principles established in standards such as ISO 31000 and ISO/IEC 23894 (AI risk management), alongside the forthcoming harmonised standards developed to support the AI Act, to structure your RMS. A minimal risk-register sketch follows below.
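To make the RMS concrete, here is a minimal sketch of a machine-readable risk register in Python, assuming a simple severity-times-likelihood scoring scheme. The class, field names, and scales are illustrative assumptions, not terminology prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical scales; the AI Act does not prescribe a scoring scheme.
SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4}
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "frequent": 4}

@dataclass
class RiskEntry:
    risk_id: str
    description: str                  # e.g., "Biased outcomes for protected groups"
    affected_rights: List[str]        # health, safety, or fundamental rights at stake
    severity: str
    likelihood: str
    mitigations: List[str] = field(default_factory=list)  # technical measures first
    residual_risk: str = ""           # must be communicated to the deployer
    last_reviewed: date = field(default_factory=date.today)

    def score(self) -> int:
        """Severity x likelihood score used to prioritise mitigation work."""
        return SEVERITY[self.severity] * LIKELIHOOD[self.likelihood]

# Example entry, reviewed at each lifecycle stage (design, training, deployment).
register = [
    RiskEntry(
        risk_id="R-001",
        description="Training data under-represents older applicants",
        affected_rights=["non-discrimination"],
        severity="serious",
        likelihood="likely",
        mitigations=["Re-sample dataset", "Add fairness metrics to validation"],
        residual_risk="Small residual disparity; disclosed in Instructions for Use",
    )
]
print(sorted(register, key=lambda r: r.score(), reverse=True)[0].risk_id)
```

Keeping such a register in version control alongside the model code makes it easier to demonstrate that risk reviews actually happened at each lifecycle stage.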
Core Technical Requirements Implementation
These technical obligations are non-negotiable and must be built directly into the AI system's architecture.
A. Data Governance and Datasets (Article 10)
The quality of data dictates the quality and compliance of the AI system.
- Data Management: Implement rigorous procedures to ensure data meets criteria for relevance, representativeness, completeness, and freedom from errors; a minimal validation sketch follows this list.
- Bias Mitigation: Proactively identify and mitigate the risks of bias in the data that could lead to discriminatory outcomes for protected groups.
- Data Provenance: Document the source, collection method, and preparation of all training, validation, and testing datasets.
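As one possible starting point, the sketch below automates a few of these data governance checks with pandas, assuming a hypothetical recruitment dataset. The file name, column names, and thresholds are assumptions and would need to be tailored to your own data and protected attributes.

```python
import pandas as pd

# Hypothetical recruitment dataset; file and column names are illustrative only.
df = pd.read_csv("training_data.csv")

checks = {
    # Completeness: no missing values in the features the model actually uses.
    "completeness": df[["age", "experience_years", "label"]].notna().all().all(),
    # Freedom from errors (basic plausibility): values fall inside expected ranges.
    "plausible_age": df["age"].between(16, 80).all(),
    # Representativeness: no group in a protected attribute is near-absent.
    "representativeness": (df["gender"].value_counts(normalize=True) > 0.05).all(),
}

# Bias screening: compare positive-label rates across a protected attribute.
rates = df.groupby("gender")["label"].mean()
checks["label_rate_gap_ok"] = (rates.max() - rates.min()) < 0.10  # threshold is an assumption

failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"Data governance checks failed: {failed}")
```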
B. Technical Robustness and Accuracy (Article 15)
High-Risk AI Systems must function reliably and consistently.
- Accuracy Metrics: Define clear, measurable, and appropriate accuracy metrics (e.g., precision, recall, F1-score) for the system’s intended purpose and context.
- Robustness and Stability: Design the system to be resilient against errors, faults, and inconsistencies within the data or environment. Implement mechanisms for graceful degradation.
- Cybersecurity: Address cybersecurity risks proportionate to the system's risk level, including protection against manipulation by malicious third parties (e.g., data poisoning, model evasion, and adversarial attacks).
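Returning to the accuracy metrics above, a simple way to keep measured performance honest is to compare it against the levels declared in your documentation at every validation run. The sketch below illustrates this with scikit-learn; the labels and thresholds are placeholder assumptions.

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Hypothetical validation results; in practice these come from your test pipeline.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]

metrics = {
    "precision": precision_score(y_true, y_pred),
    "recall": recall_score(y_true, y_pred),
    "f1": f1_score(y_true, y_pred),
}

# Declared thresholds are assumptions: they should match the accuracy levels
# stated in the Instructions for Use and Technical Documentation.
declared = {"precision": 0.75, "recall": 0.75, "f1": 0.75}

for name, value in metrics.items():
    status = "OK" if value >= declared[name] else "BELOW DECLARED LEVEL"
    print(f"{name}: {value:.2f} ({status})")
```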
C. Record-Keeping and Logging (Article 12)
Traceability is essential for accountability.
- Automatic Logging: The system must be designed to automatically record events (logs) during its operation.
- Key Logged Information: Logs should include the date, time, user, input data, and, where possible, the decision or output generated.
- Retention: Logs must be maintained for a period suitable for post-market monitoring and investigations by authorities.
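A minimal sketch of such automatic logging, assuming a structured audit record written through Python's standard logging module, might look like this. The field names are illustrative, and retention and immutability controls are not shown.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("high_risk_ai.audit")
logging.basicConfig(level=logging.INFO)

def log_decision(user_id: str, input_ref: str, output: str, model_version: str) -> None:
    """Write one structured, append-only audit record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # date and time of use
        "user": user_id,                 # who invoked the system
        "input_ref": input_ref,          # reference to the input data, not the raw data itself
        "output": output,                # decision or score produced
        "model_version": model_version,  # supports traceability across updates
    }
    logger.info(json.dumps(record))

# Example call
log_decision(user_id="recruiter-42", input_ref="application/2024/0193",
             output="shortlisted", model_version="1.4.2")
```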
D. Transparency and Human Oversight (Articles 13 & 14)
Users must be able to understand and control the system.
- Transparency: Provide clear and comprehensive Instructions for Use to the deployer, detailing:
  - The system’s capabilities, limitations, and expected performance.
  - The known residual risks.
  - The required human oversight measures.
- Human Oversight: The system must allow for effective human intervention to prevent or correct erroneous or adverse decisions.
  - This may require an appropriate user interface, mechanisms to monitor the system's operation, or a simple stop button.
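The sketch below illustrates one possible human-oversight pattern: a confidence-threshold gate that escalates uncertain cases to a reviewer and honours an operator-level stop switch. The threshold and routing logic are assumptions, not requirements taken from the Act.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    score: float          # model confidence or risk score
    outcome: str          # proposed automated outcome

REVIEW_THRESHOLD = 0.80   # assumption: below this, a human must decide

def decide(decision: Decision, system_enabled: bool) -> str:
    """Route low-confidence or halted cases to a human reviewer."""
    if not system_enabled:                    # "stop button": operator has halted the system
        return "SYSTEM HALTED - manual processing only"
    if decision.score < REVIEW_THRESHOLD:     # human confirms or overrides uncertain cases
        return f"ESCALATED to human reviewer (proposed: {decision.outcome})"
    return decision.outcome                   # high-confidence path, still logged and reviewable

print(decide(Decision(score=0.62, outcome="reject"), system_enabled=True))
```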
The Compliance Nexus:
Technical Documentation
The Technical Documentation (TD) is the core evidence of compliance. It must be a comprehensive, living document demonstrating how every requirement of the Act has been met. It must be drawn up before placing the system on the market.
| Required Documentation Element (Annex IV) | Description |
|---|---|
| I. System Description | Intended purpose, name of provider, versioning, hardware/software requirements, and regulatory framework application. |
| II. Development Process | Methods, steps, and specifications for the design, training, testing, and validation of the AI system (e.g., ML-Ops pipeline documentation). |
| III. Data Documentation | Detailed description of training, validation, and testing datasets, their provenance, scope, and quality checks for bias and representativeness. |
| IV. Risk Management System | Complete records of the RMS, including identified risks, mitigation measures implemented, and residual risk communication. |
| V. Validation & Testing | Detailed results and procedures of testing against the defined metrics (accuracy, robustness), including performance reports. |
| VI. Post-Market Monitoring Plan | Description of the plan and procedures for ongoing monitoring, logging, and incident reporting. |
| VII. Instructions for Use | The complete set of information provided to the deployer, covering the system's limitations and required human oversight. |
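One lightweight way to keep the TD "living" is to track each Annex IV element as a versioned artifact and check completeness automatically. The sketch below assumes a hypothetical folder layout with one Markdown file per element; the file names are illustrative only.

```python
from pathlib import Path

# Hypothetical repository layout: one Markdown file per Annex IV element.
ANNEX_IV_ELEMENTS = {
    "01_system_description.md",
    "02_development_process.md",
    "03_data_documentation.md",
    "04_risk_management_system.md",
    "05_validation_and_testing.md",
    "06_post_market_monitoring_plan.md",
    "07_instructions_for_use.md",
}

def missing_td_elements(doc_dir: str = "technical_documentation") -> list:
    """Return the Annex IV elements that have no corresponding file yet."""
    docs = Path(doc_dir)
    present = {p.name for p in docs.glob("*.md")} if docs.is_dir() else set()
    return sorted(ANNEX_IV_ELEMENTS - present)

missing = missing_td_elements()
if missing:
    print("Technical Documentation is incomplete:", missing)
```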
Connection to the Pillar: A robust compliance strategy, particularly in terms of system traceability and risk documentation, often relies on advanced governance technologies. For financial institutions seeking to automate this documentation and management, integrating AI compliance into a wider Strategic RegTech Implementation for Financial Institutions is essential for long-term efficiency.
Final Step:
Conformity Assessment and CE Marking
The Conformity Assessment is the formal procedure to verify compliance.
A. The Two Conformity Assessment Paths
The path depends on the type of High-Risk system:
| Path | Applicable High-Risk Systems | Assessment Requirement |
|---|---|---|
| 1. Internal Control (Self-Assessment) | Most systems listed in Annex III (Points 2 to 8), e.g., employment, essential services, law enforcement. | The Provider conducts the assessment internally (Annex VI), rigorously verifying the QMS and Technical Documentation against the Act’s requirements. |
| 2. Assessment with Notified Body | Biometric systems listed in Annex III, Point 1 where Harmonised Standards are not applied in full, and AI systems that are safety components of products covered by existing EU harmonisation legislation (assessed under the relevant sectoral procedure). | An independent Notified Body audits the QMS and reviews the Technical Documentation. |
Crucial Insight: Applying Harmonised Standards (once they are fully available) grants a presumption of conformity, significantly simplifying the assessment process.
B. The CE Marking and Registration
Upon successful completion of the assessment:
- EU Declaration of Conformity: The Provider signs the declaration, formally stating that the AI system meets all the requirements of the Act.
- Affix the CE Marking: The CE marking must be affixed to the system, its packaging, or documentation, signifying compliance for market access.
- If a Notified Body was used, its identification number must accompany the CE mark.
- Registration: The Provider must register the High-Risk AI System in the EU Database for High-Risk AI Systems (required for Annex III systems, but not for safety components).
How-To:
Actionable Implementation Guide
To kickstart your High-Risk AI Act compliance, start by establishing a cross-functional compliance team involving engineering, legal, and product management.
- First, perform a gap analysis by mapping your current development processes against the Technical Requirements (Articles 10-15).
- Second, formalize your Risk Management System (RMS) and integrate it into your ML-Ops pipeline so that risk assessments are mandatory at every development stage (data collection, model training, deployment); a minimal gating sketch follows this list.
- Third, build the Technical Documentation (TD) concurrently from day one rather than leaving it until the end, and use your internal QMS procedures to govern its creation.
- Finally, choose your Conformity Assessment path: if you are performing a self-assessment, implement a rigorous internal audit process that mimics a Notified Body review before issuing your Declaration of Conformity to ensure you are truly market-ready.
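As referenced in the second step, a compliance gate in the ML-Ops pipeline can enforce that the required risk-management artifacts exist before a stage is allowed to proceed. The sketch below is a minimal illustration; the stage names and artifact file names are assumptions.

```python
import sys
from pathlib import Path

# Hypothetical artifacts each pipeline stage must produce before promotion.
REQUIRED_ARTIFACTS = {
    "data_collection": ["risk_assessment_data.md", "dataset_datasheet.md"],
    "model_training": ["risk_assessment_training.md", "validation_report.md"],
    "deployment": ["risk_assessment_deployment.md", "instructions_for_use.md"],
}

def gate(stage: str, artifact_dir: str = "compliance_artifacts") -> bool:
    """Fail the pipeline stage if its risk-management artifacts are missing."""
    missing = [f for f in REQUIRED_ARTIFACTS[stage]
               if not (Path(artifact_dir) / f).exists()]
    if missing:
        print(f"[{stage}] blocked - missing artifacts: {missing}")
        return False
    print(f"[{stage}] compliance gate passed")
    return True

if __name__ == "__main__":
    stage = sys.argv[1] if len(sys.argv) > 1 else "deployment"
    sys.exit(0 if gate(stage) else 1)
```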
FAQs
Does the EU AI Act require all High-Risk systems to be externally audited by a Notified Body?
- No. Most High-Risk systems listed in Annex III (Points 2 to 8, e.g., in employment, education, or essential services) can follow the Internal Control (self-assessment) procedure.
- Notified Body involvement is generally required for biometric systems under Annex III, Point 1 where harmonised standards are not applied in full, and for AI systems that are safety components of products already subject to third-party conformity assessment (such as medical devices).
What is a 'substantial modification' and how does it affect compliance?
- A substantial modification is any change to the AI system that affects its compliance with the Act or changes its risk classification.
- If such a modification occurs, the system is considered a new High-Risk AI System, requiring a new Conformity Assessment and updated Technical Documentation before it can be placed back on the market.
How does the AI Act affect open-source models?
- The AI Act regulates general-purpose AI models (GPAI) under a separate set of obligations and exempts many free and open-source models from parts of those obligations, provided they do not pose systemic risk; GPAI models as such are not subject to the High-Risk requirements.
- However, if a Provider takes a GPAI model and integrates it into a High-Risk system for a specific intended purpose listed in Annex III, the Provider is responsible for ensuring the final High-Risk system meets all compliance requirements.
Conclusion
Compliance with the EU AI Act for High-Risk systems is a demanding but necessary undertaking that requires a paradigm shift from ad-hoc development to regulated AI engineering.
The core takeaway for providers is that success hinges upon two intertwined pillars: the Quality Management System providing the robust process framework, and the Technical Documentation serving as the irrefutable evidence.
By embedding the Risk Management System iteratively throughout the development lifecycle and rigorously completing the necessary Conformity Assessment—whether through self-assessment or Notified Body involvement—providers can successfully achieve the CE Marking and ensure legal, trustworthy access to the European market.
This structured, human-centric approach will ultimately foster greater trust in AI technologies across the Union.
Reference
This article framework and content are based on the official text and principles established in Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).
Specific obligations concerning High-Risk systems, QMS, Technical Documentation, and Conformity Assessment are detailed primarily in Articles 6, 11, 17, and 43, together with Annexes III and IV of the Act.
Further guidance on compliance frameworks and product regulation principles is informed by the European Commission's work.
- Official EU AI Act Text:
- European Commission - Internal Market, Industry, Entrepreneurship and SMEs (High-Risk AI Systems):
- EU Declaration of Conformity and CE Marking:



