AI Ministry – Compliance Tools

EU AI Act Compliance Audit Tool

Interactive compliance checklist based on Regulation (EU) 2024/1689 (the EU AI Act). Track your organisation's compliance status across 37 mandatory requirements for high-risk AI systems. Click any item to mark it as Compliant, Partial, or Gap Identified.

Legend:
Compliant – requirement fully met
Partial – partially implemented
Gap – not yet implemented
N/A – not applicable
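
For teams that want to track the checklist outside this page, the sketch below shows one way the overall score could be computed from the statuses in the legend. The Status names mirror the legend; treating Partial as half-met is an assumed weighting, not something the Regulation or this tool prescribes, and the example Requirement entries are illustrative.

from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    COMPLIANT = "Compliant"
    PARTIAL = "Partial"
    GAP = "Gap Identified"
    NA = "N/A"

@dataclass
class Requirement:
    article: str              # e.g. "Art. 9(1)"
    title: str
    status: Status = Status.GAP

def compliance_score(items: list[Requirement]) -> float:
    # Percentage of applicable items met; Partial counts as half (assumed weighting).
    applicable = [r for r in items if r.status is not Status.NA]
    if not applicable:
        return 0.0
    points = sum({Status.COMPLIANT: 1.0, Status.PARTIAL: 0.5}.get(r.status, 0.0)
                 for r in applicable)
    return 100.0 * points / len(applicable)

checklist = [
    Requirement("Art. 5", "Prohibited AI Practices Check", Status.COMPLIANT),
    Requirement("Art. 9(1)", "Risk Management System: Establishment", Status.PARTIAL),
    Requirement("Art. 49", "Registration in EU AI Database"),
]
print(f"Compliance score: {compliance_score(checklist):.0f}%")
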
Art. 5 – Prohibited AI Practices Check (Mandatory)

Verify that your AI system does not fall under any of the prohibited practices listed in Article 5, including subliminal manipulation, exploitation of vulnerabilities, social scoring by public authorities, real-time biometric surveillance, and predictive policing based solely on profiling.

2 February 2025 – Already in force
Annex III + Art. 6 – High-Risk Classification Assessment (Mandatory)

Determine whether your AI system qualifies as high-risk under Annex III (8 categories) or as a safety component of a product covered by Union harmonisation legislation listed in Annex I.

2 August 2026 โ€” High-risk compliance deadline
Art. 51 + Annex XII – GPAI Model Classification (Mandatory)

Assess whether your system uses or constitutes a General-Purpose AI (GPAI) model. If so, determine whether it is a GPAI model with systemic risk (cumulative training compute above 10^25 FLOPs, or designation by the Commission); a rough compute check is sketched after this item.

2 August 2025 – GPAI obligations already apply
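
As a back-of-the-envelope aid for the item above, the sketch below compares an estimated training compute against the 10^25 FLOP systemic-risk threshold. The 6 x parameters x training tokens approximation is a common heuristic for dense transformer training compute, not a method defined in the Regulation, and the model figures are hypothetical.

# Art. 51 systemic-risk presumption: cumulative training compute greater than 10^25 FLOPs.
SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    # Common heuristic for dense transformers (~6 FLOPs per parameter per token);
    # an approximation only, not a formula from the Regulation.
    return 6.0 * n_parameters * n_training_tokens

flops = estimated_training_flops(n_parameters=70e9, n_training_tokens=15e12)  # hypothetical model
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Exceeds systemic-risk threshold:", flops > SYSTEMIC_RISK_FLOPS)
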
Art. 50 – Transparency Obligations for Limited-Risk AI (Mandatory)

Implement the mandatory transparency disclosures for AI systems that interact with natural persons (chatbots, virtual assistants), for AI-generated content (deepfakes, synthetic media), and for emotion recognition or biometric categorisation systems.

2 August 2026
Art. 9(1) – Risk Management System: Establishment (Mandatory)

Establish, implement, document and maintain a risk management system as a continuous iterative process throughout the entire lifecycle of a high-risk AI system.

2 August 2026
Art. 9(2) – Risk Identification and Analysis (Mandatory)

Identify and analyse known and reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when used as intended and under conditions of reasonably foreseeable misuse.

2 August 2026
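
One way to make Art. 9(2) actionable is a structured risk register. The sketch below is a minimal, assumed data model: the severity and likelihood scales and the example entries are illustrative choices, not requirements of the Regulation.

from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    affected_interest: str        # "health", "safety" or "fundamental rights"
    foreseeable_misuse: bool      # arises under reasonably foreseeable misuse rather than intended use
    severity: int                 # 1 (negligible) to 5 (critical) -- assumed scale
    likelihood: int               # 1 (rare) to 5 (almost certain) -- assumed scale
    mitigations: list[str] = field(default_factory=list)

    @property
    def rating(self) -> int:
        return self.severity * self.likelihood

register = [
    Risk("Misclassification of applicants from under-represented groups",
         "fundamental rights", foreseeable_misuse=False, severity=4, likelihood=3),
    Risk("Operators relying on outputs outside the documented intended purpose",
         "safety", foreseeable_misuse=True, severity=3, likelihood=2),
]
for risk in sorted(register, key=lambda r: r.rating, reverse=True):
    print(risk.rating, risk.affected_interest, "-", risk.description)
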
Art. 9(4) – Residual Risk Evaluation (Mandatory)

Evaluate residual risks after implementation of risk management measures. Ensure that residual risks associated with each hazard are acceptable and that the overall residual risk is acceptable.

2 August 2026
Art. 9(7) – Testing for Risk Management (Mandatory)

Test the high-risk AI system to identify the most appropriate and targeted risk management measures. Testing must be performed prior to placing on the market and must be adequate for the intended purpose.

2 August 2026
Art. 10(1) – Data Governance Practices (Mandatory)

Implement data governance and management practices covering training, validation and testing datasets. These practices must address: design choices; data collection processes; data preparation operations; formulation of assumptions; assessment of availability, quantity and suitability; examination for biases; identification of data gaps.

2 August 2026
Art. 10(2) – Training Data Quality Requirements (Mandatory)

Training, validation and testing datasets must be relevant, sufficiently representative, and to the best extent possible, free of errors and complete in view of the intended purpose.

2 August 2026
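
A few automated checks can support, though not replace, the Art. 10(2) assessment of completeness and freedom from errors. The sketch below uses pandas for some basic dataset hygiene checks; the checks and thresholds a provider actually applies are their own design choice, and the sample data is illustrative.

import pandas as pd

def dataset_quality_report(df: pd.DataFrame) -> dict:
    # Basic hygiene checks aligned with the completeness / freedom-from-errors themes.
    return {
        "rows": len(df),
        "missing_values_per_column": df.isna().sum().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        "constant_columns": [c for c in df.columns if df[c].nunique(dropna=True) <= 1],
    }

train = pd.DataFrame({
    "age": [34, 51, None, 29],
    "income": [42_000, 58_000, 61_000, 42_000],
    "label": [1, 0, 0, 1],
})
print(dataset_quality_report(train))
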
Art. 10(3) – Bias Detection and Mitigation (Mandatory)

Training, validation and testing datasets must take into account the characteristics, capabilities and limitations of the AI system, including with regard to the persons or groups of persons on whom the high-risk AI system is intended to be used, in order to avoid possible biases that could lead to prohibited discrimination.

2 August 2026
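
Group-level outcome comparisons are one common starting point for the bias examination described above. The sketch below computes per-group positive-outcome rates with pandas; a large gap is a signal to investigate, not a legal test defined by the Regulation, and the column names and data are illustrative.

import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    # Mean positive-outcome rate per group (demographic-parity style comparison).
    return df.groupby(group_col)[outcome_col].mean()

decisions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "hired": [1, 1, 0, 1, 0, 0],
})
rates = selection_rates(decisions, "group", "hired")
print(rates)
print("Largest selection-rate gap:", float(rates.max() - rates.min()))
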
Art. 10(5) – Special Category Data in Training (Mandatory)

Where necessary to ensure bias monitoring, detection and correction for high-risk AI systems, providers may process special categories of personal data for training, subject to appropriate safeguards.

2 August 2026
Art. 11 + Annex IV §1 – General Description of AI System (Mandatory)

Technical documentation must include: intended purpose; number of persons affected; categories of natural persons; specific groups particularly at risk; how the AI system interacts with hardware/software; versions of relevant software/firmware; description of all forms in which the AI system is placed on the market.

2 August 2026
Annex IV §2 – Detailed Description of System Elements (Mandatory)

Technical documentation must include detailed description of: system elements and development process; methods and steps for system development; design specifications; system architecture; computational resources used; data requirements and data sheets.

2 August 2026
Annex IV §3 – Monitoring, Functioning and Control (Mandatory)

Technical documentation must describe capabilities and limitations of the AI system, including: accuracy, robustness and cybersecurity; foreseeable unintended outcomes and sources of risk; human oversight measures; technical measures for human oversight; technical specifications for input data.

2 August 2026
Annex IV §4–9 – Validation, Testing, Standards and Conformity (Mandatory)

Technical documentation must include: validation and testing procedures and results; standards applied; EU declaration of conformity; post-market monitoring plan; description of changes made to the system after initial conformity assessment.

2 August 2026
Art. 13(1) – Instructions for Use: Deployer Information (Mandatory)

High-risk AI systems must be accompanied by instructions for use in appropriate digital or non-digital format. Instructions must enable deployers to use the system appropriately and implement human oversight.

2 August 2026
Art. 13(3)(b) – Performance Metrics Disclosure (Mandatory)

Instructions for use must include the level of accuracy, robustness and cybersecurity against which the high-risk AI system has been tested and validated, and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level.

2 August 2026
Art. 13(3)(d) – Human Oversight Measures in Instructions (Mandatory)

Instructions for use must describe the human oversight measures, including the technical measures to facilitate the interpretation of the outputs of AI systems by the deployers.

2 August 2026
Art. 14(1) – Human Oversight Design Requirement (Mandatory)

High-risk AI systems must be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.

2 August 2026
Art. 14(4)(a) – Understanding Capabilities and Limitations (Mandatory)

Human oversight measures must enable the persons to whom oversight is assigned to fully understand the capabilities and limitations of the high-risk AI system and be able to duly monitor its operation.

2 August 2026
Art. 14(4)(d) – Override and Disregard Capability (Mandatory)

Human oversight measures must enable the persons to whom oversight is assigned to be able to intervene on the operation of the high-risk AI system or interrupt the system through a "stop" button or a similar procedure.

2 August 2026
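
The sketch below illustrates the intervention idea in code: a wrapper through which an overseer can halt the system or substitute their own decision for the model's output. The class, the placeholder model callable and the override mechanism are assumptions for illustration, not a prescribed design.

from typing import Callable, Optional

class OverseenSystem:
    def __init__(self, model: Callable[[dict], str]):
        self._model = model
        self._stopped = False

    def stop(self) -> None:
        # The "stop button": halts further automated operation until reviewed.
        self._stopped = True

    def decide(self, inputs: dict, human_override: Optional[str] = None) -> str:
        if self._stopped:
            raise RuntimeError("System halted by the human overseer")
        output = self._model(inputs)
        # The overseer may disregard, override or reverse the output before it takes effect.
        return human_override if human_override is not None else output

system = OverseenSystem(lambda features: "approve")          # placeholder model
print(system.decide({"applicant_id": 123}))
print(system.decide({"applicant_id": 456}, human_override="refer to manual review"))
system.stop()
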
Art. 14(4)(e) – Fundamental Rights Oversight (Mandatory)

Human oversight measures must enable the persons to whom oversight is assigned to be able to take appropriate action when the AI system shows signs of operating in a way that leads to risks to health, safety or fundamental rights.

2 August 2026
Art. 15(1) – Accuracy Levels and Metrics (Mandatory)

High-risk AI systems must be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.

2 August 2026
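
Declaring an accuracy level implies measuring it with some notion of uncertainty. The sketch below reports test-set accuracy with a normal-approximation confidence interval; it is one simple reporting choice, since the Regulation does not mandate a particular metric, and the counts are illustrative.

import math

def accuracy_with_interval(correct: int, total: int, z: float = 1.96) -> tuple[float, float, float]:
    # Point accuracy with a normal-approximation (Wald) 95% interval.
    p = correct / total
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

acc, low, high = accuracy_with_interval(correct=912, total=1000)
print(f"Accuracy {acc:.1%} (95% CI {low:.1%} to {high:.1%})")
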
Art. 15(3) – Robustness and Resilience (Mandatory)

High-risk AI systems must be resilient with regard to errors, faults or inconsistencies that may occur within the system or the environment in which the system operates.

2 August 2026
Art. 15(4) – Cybersecurity Measures (Mandatory)

High-risk AI systems must be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities, such as data poisoning, model poisoning, adversarial examples, or model theft.

2 August 2026
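
A very crude robustness probe is to check how often small input perturbations flip a model's decision. The numpy sketch below does exactly that with a toy linear classifier standing in for the real model; it is an assumed illustration and not a substitute for proper adversarial testing (for example gradient-based attacks) or the broader security controls this requirement expects.

import numpy as np

def perturbation_flip_rate(predict, x: np.ndarray, epsilon: float = 0.01,
                           trials: int = 100, seed: int = 0) -> float:
    # Fraction of small random perturbations that change the predicted class.
    rng = np.random.default_rng(seed)
    baseline = predict(x)
    flips = sum(predict(x + rng.uniform(-epsilon, epsilon, size=x.shape)) != baseline
                for _ in range(trials))
    return flips / trials

weights = np.array([0.8, -0.5, 0.3])                  # toy stand-in for the real model
classify = lambda v: int(v @ weights > 0)
print("Flip rate:", perturbation_flip_rate(classify, np.array([0.2, 0.1, 0.4])))
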
Art. 43 – Conformity Assessment Procedure (Mandatory)

Before placing a high-risk AI system on the market, providers must carry out a conformity assessment. For most Annex III systems: internal conformity assessment based on technical documentation. For biometric identification and law enforcement AI: third-party assessment by notified body required.

2 August 2026
Art. 47 – EU Declaration of Conformity (Mandatory)

Providers must draw up an EU declaration of conformity for each high-risk AI system and keep it up to date. The declaration must contain all information listed in Annex V.

2 August 2026
Art. 49 – Registration in EU AI Database (Mandatory)

Before placing a high-risk AI system on the market, providers must register the system in the EU database established under Art. 71. Deployers of high-risk AI used by public authorities must also register.

2 August 2026
Art. 48 – CE Marking (Mandatory)

High-risk AI systems that are not safety components of products must bear the CE marking before being placed on the market. The CE marking must be affixed visibly, legibly and indelibly.

2 August 2026
Art. 72 – Post-Market Monitoring Plan (Mandatory)

Providers must establish and document a post-market monitoring system. The system must actively collect and review data on the performance of high-risk AI systems throughout their lifetime, including data from deployers.

2 August 2026
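
Post-market monitoring ultimately needs a concrete place where performance data and deployer feedback accumulate. The sketch below appends structured records to a JSON Lines log; the field set and the file-based storage are assumptions to be aligned with the provider's documented monitoring plan.

import json
import time
from dataclasses import dataclass, asdict

@dataclass
class MonitoringRecord:
    timestamp: float
    system_version: str
    deployer_id: str
    metric: str               # e.g. "accuracy", "override_rate", "complaint"
    value: float
    source: str               # "deployer_feedback" or "system_telemetry"

def append_record(path: str, record: MonitoringRecord) -> None:
    # Append-only JSON Lines log so records stay reviewable over the system's lifetime.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

append_record("post_market_log.jsonl",
              MonitoringRecord(time.time(), "1.4.2", "deployer-017",
                               "override_rate", 0.07, "system_telemetry"))
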
Art. 73 – Serious Incident Reporting (Mandatory)

Providers of high-risk AI systems placed on the EU market must report any serious incident to the market surveillance authorities of the Member States where the incident occurred.

Ongoing – from 2 August 2026
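
Incident handling benefits from tracking when the reporting clock starts. The sketch below is a minimal, assumed incident record with a due-date helper; the reporting window is passed in as a parameter because Art. 73 sets different timeframes depending on the type of incident, and the 15-day value in the example is illustrative only.

from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class SeriousIncident:
    description: str
    member_state: str                  # Member State where the incident occurred
    became_aware_on: date
    reported_on: Optional[date] = None

def report_due_by(incident: SeriousIncident, window_days: int) -> date:
    # Deadline tracker; set window_days to the Art. 73 timeframe applicable to the incident type.
    return incident.became_aware_on + timedelta(days=window_days)

incident = SeriousIncident("Output error contributing to a wrongful benefit denial",
                           "NL", became_aware_on=date(2026, 9, 1))
print("Report due by:", report_due_by(incident, window_days=15))   # illustrative window
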
Art. 72(2) – Deployer Feedback Integration (Mandatory)

The post-market monitoring system must be designed to collect, document and analyse relevant data provided voluntarily by deployers, as well as data generated through the use of the AI system.

2 August 2026
Art. 26(1) – Use in Accordance with Instructions (Mandatory)

Deployers must use high-risk AI systems in accordance with the instructions for use accompanying the systems and implement appropriate technical and organisational measures to ensure use in accordance with those instructions.

2 August 2026
Art. 26(2) – Human Oversight Implementation (Mandatory)

Deployers must ensure that the natural persons to whom human oversight is assigned have the necessary competence, training and authority, and the necessary resources to carry out that task.

2 August 2026
Art. 26(6) – Fundamental Rights Impact Assessment (FRIA) (Mandatory)

Deployers that are public authorities, or private entities providing public services, or deployers of high-risk AI systems in employment, education, essential services, law enforcement, migration, or justice must conduct a Fundamental Rights Impact Assessment before deploying the system.

2 August 2026
Art. 26(7) – Transparency to Affected Persons (Mandatory)

Where high-risk AI systems are used to make or assist in making decisions related to natural persons, deployers must inform those natural persons that they are subject to the use of the high-risk AI system.

2 August 2026
AI Ministry – Expert Support

Gaps identified? We can close them.

Our EU AI Act specialists provide gap assessment, technical documentation, conformity assessment support, and ongoing compliance monitoring for high-risk AI systems across all sectors.
