EU AI Act Compliance Audit Tool
Interactive compliance checklist based on Regulation (EU) 2024/1689. Track your organisation's compliance status across all mandatory requirements for high-risk AI systems. Click any item to mark as Compliant, Partial, or Gap Identified.
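As a rough illustration of how the tool models each item, the sketch below shows one plausible data shape and click behaviour, assuming a TypeScript front end; all names are illustrative, not part of any official tooling.

```typescript
// Hypothetical data model for the checklist; names are illustrative.
type ComplianceStatus = "compliant" | "partial" | "gap";

interface ChecklistItem {
  id: string;                       // e.g. "art9-risk-management"
  article: string;                  // legal basis, e.g. "Article 9"
  requirement: string;              // requirement text shown to the user
  status: ComplianceStatus | null;  // null until the user marks the item
  evidence?: string;                // notes or links to supporting documents
}

// Clicking an item cycles through the three states, starting from unmarked.
function nextStatus(s: ComplianceStatus | null): ComplianceStatus {
  switch (s) {
    case null:        return "compliant";
    case "compliant": return "partial";
    case "partial":   return "gap";
    case "gap":       return "compliant";
  }
}
```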
Verify that your AI system does not engage in any of the practices prohibited by Article 5, including subliminal manipulation, exploitation of vulnerabilities, social scoring (by public or private actors), predictive policing based solely on profiling, untargeted scraping of facial images, emotion recognition in the workplace and in education, biometric categorisation inferring sensitive attributes, and real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).
Determine whether your AI system qualifies as high-risk under Annex III (8 categories) or as a safety component of a product covered by Union harmonisation legislation listed in Annex I.
Assess whether your system uses or constitutes a General-Purpose AI (GPAI) model. If so, determine whether it is a GPAI model with systemic risk: systemic risk is presumed where cumulative training compute exceeds 10^25 FLOPs, or where the Commission designates the model as such.
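For illustration only, a minimal sketch of that systemic-risk presumption under Article 51; the 10^25 FLOPs constant comes from the Regulation, everything else is a hypothetical helper.

```typescript
// Article 51: a GPAI model is presumed to have systemic risk when the
// cumulative compute used for its training exceeds 10^25 FLOPs; the
// Commission may also designate models directly.
const SYSTEMIC_RISK_FLOPS = 1e25;

function presumedSystemicRisk(
  trainingFlops: number,
  designatedByCommission = false
): boolean {
  return designatedByCommission || trainingFlops > SYSTEMIC_RISK_FLOPS;
}

console.log(presumedSystemicRisk(3.1e25)); // true: above the threshold
console.log(presumedSystemicRisk(9e24));   // false, unless designated
```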
For AI systems that interact with natural persons (chatbots, virtual assistants), systems that generate or manipulate content (deepfakes, synthetic media), and emotion recognition or biometric categorisation systems: implement the transparency disclosures mandated by Article 50.
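A minimal sketch of one such disclosure for a chatbot; the wording is an assumption, not the statutory text.

```typescript
// Hypothetical chatbot disclosure shown before the first exchange; the
// wording is an assumption, not quoted from the Regulation.
function disclosureBanner(systemName: string): string {
  return `You are interacting with ${systemName}, an AI system, not a human.`;
}

console.log(disclosureBanner("SupportBot")); // shown before the conversation starts
```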
Establish, implement, document and maintain a risk management system as a continuous iterative process throughout the entire lifecycle of a high-risk AI system.
Identify and analyse known and reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when used as intended and under conditions of reasonably foreseeable misuse.
Evaluate residual risks after implementation of risk management measures. Ensure that residual risks associated with each hazard are acceptable and that the overall residual risk is acceptable.
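One common way to operationalise residual-risk acceptability is a severity-times-likelihood register, sketched below; the 1-5 scales and the threshold are illustrative policy choices, not prescribed by the Act.

```typescript
// Hypothetical residual-risk register: each hazard carries post-mitigation
// severity and likelihood scores; the acceptability rule is illustrative.
interface Hazard {
  id: string;
  severity: 1 | 2 | 3 | 4 | 5;    // 5 = most severe
  likelihood: 1 | 2 | 3 | 4 | 5;  // 5 = most likely
}

const ACCEPTABLE_SCORE = 8; // illustrative threshold, not from the Act

// Hazards whose residual risk score still exceeds the acceptable level.
function unacceptableResiduals(hazards: Hazard[]): Hazard[] {
  return hazards.filter((h) => h.severity * h.likelihood > ACCEPTABLE_SCORE);
}
```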
Test the high-risk AI system to identify the most appropriate and targeted risk management measures. Testing must be performed prior to placing on the market and must be adequate for the intended purpose.
Implement data governance and management practices covering training, validation and testing datasets. These practices must address: design choices; data collection processes; data preparation operations; formulation of assumptions; assessment of availability, quantity and suitability; examination for biases; identification of data gaps.
Training, validation and testing datasets must be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose.
Training, validation and testing datasets must take into account the characteristics, capabilities and limitations of the AI system, including with regard to the persons or groups of persons on which the high-risk AI system is to be used, and must avoid possible biases that could lead to discrimination prohibited under Union law.
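A hedged sketch of one representativeness check that could feed the bias examination above: compare each group's share in the training data against a reference share. The tolerance and field names are illustrative assumptions.

```typescript
// Hypothetical representativeness check: compare each group's share in the
// training data against a reference share and flag large deviations.
interface Sample {
  group: string; // group membership label; field name is an assumption
}

function representationGaps(
  data: Sample[],
  referenceShares: Map<string, number>, // expected share per group
  tolerance = 0.05                      // illustrative, not from the Act
): string[] {
  if (data.length === 0) throw new Error("dataset is empty");
  const counts = new Map<string, number>();
  for (const s of data) counts.set(s.group, (counts.get(s.group) ?? 0) + 1);

  const flagged: string[] = [];
  for (const [group, expected] of referenceShares) {
    const observed = (counts.get(group) ?? 0) / data.length;
    if (Math.abs(observed - expected) > tolerance) flagged.push(group);
  }
  return flagged; // groups whose representation deviates beyond tolerance
}
```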
Where necessary to ensure bias monitoring, detection and correction for high-risk AI systems, providers may process special categories of personal data for training, subject to appropriate safeguards.
Technical documentation must include: intended purpose; number of persons affected; categories of natural persons; specific groups particularly at risk; how the AI system interacts with hardware/software; versions of relevant software/firmware; description of all forms in which the AI system is placed on the market.
Technical documentation must include detailed description of: system elements and development process; methods and steps for system development; design specifications; system architecture; computational resources used; data requirements and data sheets.
Technical documentation must describe capabilities and limitations of the AI system, including: accuracy, robustness and cybersecurity; foreseeable unintended outcomes and sources of risk; human oversight measures; technical measures for human oversight; technical specifications for input data.
Technical documentation must include: validation and testing procedures and results; standards applied; EU declaration of conformity; post-market monitoring plan; description of changes made to the system after initial conformity assessment.
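The documentation items above could be tracked with a simple manifest like the sketch below; the section names paraphrase Annex IV and the field types are assumptions.

```typescript
// Hypothetical manifest mirroring the documentation sections tracked above;
// section names paraphrase Annex IV, field types are assumptions.
interface TechnicalDocumentation {
  generalDescription: string;         // intended purpose, versions, market forms
  developmentProcess: string;         // methods, design specs, architecture, data
  capabilitiesAndLimitations: string; // accuracy, robustness, oversight measures
  validationAndTesting: string;       // procedures, results, standards applied
  euDeclarationOfConformity: string;
  postMarketMonitoringPlan: string;
  changeLog: string[];                // changes after initial conformity assessment
}
```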
High-risk AI systems must be accompanied by instructions for use in appropriate digital or non-digital format. Instructions must enable deployers to use the system appropriately and implement human oversight.
Instructions for use must include the level of accuracy, robustness and cybersecurity against which the high-risk AI system has been tested and validated, and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level.
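As one illustration, the declared accuracy figure could be derived from labelled test results as sketched below; reporting a single point estimate is an assumption, and providers may also report intervals or per-group metrics.

```typescript
// Sketch: derive the accuracy figure declared in the instructions for use
// from labelled test results. Function and parameter names are hypothetical.
function accuracy(predictions: string[], labels: string[]): number {
  if (predictions.length !== labels.length || labels.length === 0) {
    throw new Error("predictions and labels must be non-empty and aligned");
  }
  const correct = predictions.filter((p, i) => p === labels[i]).length;
  return correct / labels.length;
}

console.log(accuracy(["a", "b", "b"], ["a", "b", "a"])); // 0.666...
```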
Instructions for use must describe the human oversight measures, including the technical measures to facilitate the interpretation of the outputs of AI systems by the deployers.
High-risk AI systems must be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.
Human oversight measures must enable the persons to whom oversight is assigned to fully understand the capabilities and limitations of the high-risk AI system and be able to duly monitor its operation.
Human oversight measures must enable the persons to whom oversight is assigned to intervene in the operation of the high-risk AI system, or to interrupt it through a "stop" button or a similar procedure that allows the system to come to a halt in a safe state.
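An illustrative "stop button" pattern, assuming the system is invoked through a wrapper the overseer controls; all names are hypothetical.

```typescript
// Illustrative "stop button": a wrapper the overseer can interrupt, after
// which the system refuses further inferences and runs a safe-shutdown hook.
class OverseeableSystem {
  private halted = false;

  constructor(
    private infer: (input: string) => string,
    private onHalt: () => void // e.g. flush logs, notify the deployer
  ) {}

  run(input: string): string {
    if (this.halted) throw new Error("System halted by human overseer");
    return this.infer(input);
  }

  // Invoked by the human overseer; brings the system to a safe state.
  stop(): void {
    this.halted = true;
    this.onHalt();
  }
}
```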
Human oversight measures must enable the persons to whom oversight is assigned to be able to take appropriate action when the AI system shows signs of operating in a way that leads to risks to health, safety or fundamental rights.
High-risk AI systems must be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.
High-risk AI systems must be resilient with regard to errors, faults or inconsistencies that may occur within the system or the environment in which the system operates.
High-risk AI systems must be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities, such as data poisoning, model poisoning, adversarial examples, or model theft.
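One narrow example of such resilience, sketched under stated assumptions: verifying a model artefact's hash before loading so that unauthorised tampering with the file is detected. This does not address data poisoning or adversarial inputs, which need their own controls.

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Verify a model artefact's SHA-256 digest before loading, so unauthorised
// modification of the file is detected. Path and digest are illustrative.
function verifyModelArtifact(path: string, expectedSha256: string): void {
  const digest = createHash("sha256").update(readFileSync(path)).digest("hex");
  if (digest !== expectedSha256) {
    throw new Error(`Model artefact at ${path} failed integrity check`);
  }
}
```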
Before placing a high-risk AI system on the market, providers must carry out a conformity assessment. For most Annex III systems this is an internal assessment based on the technical documentation; for remote biometric identification systems (Annex III, point 1), assessment by a notified body is required where harmonised standards or common specifications have not been applied in full.
Providers must draw up an EU declaration of conformity for each high-risk AI system and keep it up to date. The declaration must contain all information listed in Annex V.
Before placing a high-risk AI system on the market, providers must register the system in the EU database established under Article 71. Deployers that are public authorities, or that act on their behalf, must also register their use of the system.
High-risk AI systems that are not safety components of products must bear the CE marking before being placed on the market. The CE marking must be affixed visibly, legibly and indelibly.
Providers must establish and document a post-market monitoring system. The system must actively collect and review data on the performance of high-risk AI systems throughout their lifetime, including data from deployers.
Providers of high-risk AI systems placed on the EU market must report any serious incident to the market surveillance authorities of the Member States where the incident occurred, within the deadlines set by Article 73 (as a rule no later than 15 days after becoming aware of the incident, with shorter deadlines for widespread infringements and for deaths).
The post-market monitoring system must be designed to collect, document and analyse relevant data provided voluntarily by deployers, as well as data generated through the use of the AI system.
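A minimal sketch of an event record such a monitoring system might collect; the field set is an assumption, not prescribed by the Act.

```typescript
// Hypothetical event schema for the post-market monitoring log; the field
// set is an assumption, not prescribed by the Act.
interface MonitoringEvent {
  timestamp: string;       // ISO 8601
  systemVersion: string;
  source: "deployer_report" | "system_telemetry";
  kind: "performance_metric" | "user_complaint" | "serious_incident";
  payload: unknown;        // metric values, complaint text, incident details
}

// Serious incidents feed the Article 73 reporting workflow.
function isSeriousIncident(e: MonitoringEvent): boolean {
  return e.kind === "serious_incident";
}
```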
Deployers must use high-risk AI systems in accordance with the instructions for use accompanying the systems and implement appropriate technical and organisational measures to ensure use in accordance with those instructions.
Deployers must ensure that the natural persons to whom human oversight is assigned have the necessary competence, training and authority, and the necessary resources to carry out that task.
Deployers that are bodies governed by public law or private entities providing public services, and deployers of high-risk AI systems used for creditworthiness assessment or for risk assessment and pricing in life and health insurance (Annex III, points 5(b) and (c)), must conduct a Fundamental Rights Impact Assessment before first deploying the system.
Where high-risk AI systems are used to make or assist in making decisions related to natural persons, deployers must inform those natural persons that they are subject to the use of the high-risk AI system.