Understanding the EU AI Act and its implications for your organisation
The EU AI Act: Explained
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It classifies AI systems by risk level and imposes obligations on providers and deployers operating in the EU market.
Adopted on 13 March 2024 and published in the Official Journal of the EU on 12 July 2024, the AI Act entered into force on 1 August 2024. It applies to providers, deployers, importers, and distributors of AI systems in the EU market.
Key Compliance Dates
The Act applies in stages:
2 February 2025: Prohibitions on unacceptable-risk AI practices and AI literacy obligations apply.
2 August 2025: Obligations for general-purpose AI (GPAI) models and the governance framework apply.
2 August 2026: Most remaining provisions, including the bulk of the high-risk requirements, apply.
2 August 2027: The extended transition period for high-risk AI embedded in regulated products ends.
Risk Classification Framework
The EU AI Act uses a risk-based approach. The obligations placed on providers and deployers depend on the risk level of the AI system.
Unacceptable risk: AI systems that pose an unacceptable risk to fundamental rights and safety are banned outright.
High risk: AI systems in critical sectors must undergo conformity assessments, maintain documentation, and be subject to ongoing monitoring.
Limited risk: AI systems with specific transparency obligations: users must know they are interacting with AI.
Minimal risk: The vast majority of AI systems fall here. There are no mandatory requirements, but voluntary codes of conduct are encouraged.
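The four tiers and their headline obligations can be sketched as a simple data structure. This is an illustrative simplification for internal tooling (the tier names come from the Act; the obligation strings are paraphrased summaries, not legal text):

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified mapping of tiers to headline obligations; a starting point
# for an internal inventory, not a substitute for legal analysis.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
    RiskTier.HIGH: [
        "conformity assessment before deployment",
        "technical documentation and risk management",
        "registration in the EU AI database",
        "post-market monitoring",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: ["no mandatory requirements; voluntary codes of conduct"],
}

def headline_obligations(tier: RiskTier) -> list[str]:
    """Return the simplified obligation list for a given risk tier."""
    return OBLIGATIONS[tier]
```

For example, `headline_obligations(RiskTier.LIMITED)` returns only the transparency duty, while the high-risk tier carries the full set of assessment, documentation, registration, and monitoring obligations.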
EU Regulatory Acts Affecting AI
AI adoption in Europe is governed by a comprehensive ecosystem of regulations. Understanding the interplay between these acts is essential for full compliance.
AI & Digital
EU AI Act
The world's first comprehensive AI regulation. Risk-based framework classifying AI systems as prohibited, high-risk, limited-risk, or minimal-risk. Applies to all AI providers and deployers in the EU market.
EU Data Act
Governs who can access and use data generated by connected products and services. Critical for AI systems that rely on IoT and industrial data. Enables data sharing between businesses and public sector.
EU Data Governance Act
Establishes mechanisms for data sharing across sectors and member states. Creates data intermediaries and altruistic data sharing frameworks that enable AI training datasets.
Digital Services Act (DSA)
Regulates online platforms and intermediaries. Includes obligations for algorithmic transparency, recommender system disclosures, and risk assessments for very large platforms using AI.
Digital Markets Act (DMA)
Regulates gatekeepers in digital markets. Impacts AI-powered platforms and services from large technology companies operating in the EU.
Privacy & Data Protection
GDPR
The cornerstone of EU data protection law. Governs all processing of personal data, including AI systems that process personal information. Art. 22 specifically addresses automated decision-making and profiling.
EU Health Data Space (EHDS)
Creates a common European health data infrastructure enabling secondary use of health data for AI research, training, and public health purposes, subject to strict access controls.
Cybersecurity
NIS2 Directive
Strengthens cybersecurity requirements for critical infrastructure and essential services. AI systems in critical sectors must meet NIS2 security standards. Introduces supply chain security obligations.
Cyber Resilience Act (CRA)
Introduces mandatory cybersecurity requirements for products with digital elements, including AI-enabled hardware and software. Manufacturers must ensure security throughout the product lifecycle.
DORA
The Digital Operational Resilience Act for the financial sector. It requires financial entities to manage ICT and AI-related risks, conduct resilience testing, and oversee third-party AI providers.
Sector-Specific
EU Medical Device Regulation (MDR)
Regulates medical devices including AI-based diagnostic and therapeutic software (SaMD). AI medical devices must comply with both MDR and EU AI Act. Requires clinical evaluation and post-market surveillance.
EU Machinery Regulation
Replaces the Machinery Directive. Explicitly addresses AI and autonomous systems in machinery. Requires safety assessments for AI-enabled machines and collaborative robots (cobots).
UNECE WP.29 / R155 / R156
UN regulations for vehicle cybersecurity (R155) and software update management (R156). Mandatory for type approval of new vehicle models in the EU. Critical for AI-enabled vehicles and ADAS systems.
EU AI Liability Directive (proposed)
Proposed directive adapting civil liability rules for AI. Introduces rebuttable presumption of causality for AI-related damages. Requires disclosure of high-risk AI system documentation in litigation.
Product Liability Directive (revised)
Modernised product liability rules explicitly covering AI systems and software. Manufacturers of AI products can be held liable for damages caused by defective AI systems.
Practical Guidance for Companies
A structured 6-step approach to EU AI Act compliance, developed by AI Ministry based on the official regulation text and implementation guidance.
Classify Your AI Systems
Identify all AI systems in your organisation and map them to the EU AI Act risk categories; identify any general-purpose AI (GPAI) models, which carry separate obligations under the Commission's official guidance.
Conduct a Risk Assessment
For high-risk systems, conduct a conformity assessment. Document the intended purpose and risk management measures, and maintain the required technical documentation.
Implement Governance
Establish an AI governance framework: designate an AI Officer, create internal policies, and set up monitoring procedures.
Register High-Risk Systems
High-risk AI systems must be registered in the EU AI database before deployment. Maintain up-to-date technical documentation.
Ensure Transparency
Implement transparency measures: inform users when they are interacting with AI, and provide explanations for automated decisions.
Continuous Monitoring
Establish post-market monitoring systems. Report serious incidents to national authorities. Conduct periodic reviews.
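The six steps above lend themselves to a per-system compliance record. Below is a minimal sketch of such an inventory entry; the field and method names are hypothetical choices for illustration, not terminology mandated by the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry tracking compliance status
    for one AI system across the six steps."""
    name: str
    risk_tier: str                      # step 1: classification
    conformity_assessed: bool = False   # step 2: risk/conformity assessment
    owner: str = ""                     # step 3: governance responsibility
    registered_in_eu_db: bool = False   # step 4: registration (high-risk only)
    users_informed: bool = False        # step 5: transparency
    monitoring_in_place: bool = False   # step 6: post-market monitoring

    def open_actions(self) -> list[str]:
        """List compliance steps not yet completed for this system.
        Registration is only checked for high-risk systems."""
        checks = {
            "conformity assessment": self.conformity_assessed,
            "EU database registration": (
                self.registered_in_eu_db or self.risk_tier != "high"
            ),
            "user transparency": self.users_informed,
            "post-market monitoring": self.monitoring_in_place,
        }
        return [step for step, done in checks.items() if not done]
```

A record with all relevant steps completed reports no open actions, giving governance teams a simple basis for periodic reviews.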