AI Across Industries
Sector-specific AI use cases, regulatory requirements, and implementation pathways
AI in Action: Industry Examples
Leading European companies are already deploying AI systems that fall under EU AI Act obligations. Understanding the regulatory implications is no longer optional — it is a market requirement.
Automated Driving & AI Safety Systems
European automotive OEMs and Tier-1 suppliers are deploying AI systems for automated driving — processing real-world sensor data, training perception models, and making safety-critical decisions at high speed. These systems directly affect human lives on public roads and fall under the strictest EU AI Act obligations.
- Mandatory conformity assessment before market placement
- Registration in EU high-risk AI database
- Continuous post-market monitoring & incident reporting
- UNECE WP.29 R155/R156 cybersecurity compliance
- Full technical documentation and audit trail required
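The audit-trail and incident-reporting obligations above are defined by the Act only at the level of outcomes, not data formats. As an illustration only, a minimal structured incident record might look like the sketch below; every field name here is a hypothetical stand-in, not a prescribed schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class IncidentReport:
    """Hypothetical post-market incident record for a high-risk AI system."""
    system_id: str          # identifier of the AI system (illustrative)
    occurred_at: str        # ISO 8601 timestamp of the incident
    severity: str           # e.g. a "serious" incident triggers notification duties
    description: str        # what happened, in plain language
    corrective_action: str  # what the provider did or plans to do

    def to_json(self) -> str:
        """Serialise for an append-only audit log (illustrative format)."""
        return json.dumps(asdict(self), sort_keys=True)

report = IncidentReport(
    system_id="ads-perception-v3",
    occurred_at=datetime.now(timezone.utc).isoformat(),
    severity="serious",
    description="Perception model failed to classify a partially occluded pedestrian.",
    corrective_action="Model retraining scheduled; affected fleet notified.",
)
print(report.to_json())
```

The point of the sketch is structural: incidents become machine-readable records that can feed both the internal audit trail and any regulator-facing report.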
AI-Powered In-Vehicle Virtual Assistants
Premium automotive manufacturers are integrating AI-powered virtual assistants that hold natural conversations, remember previous interactions, and personalise the driving experience using cloud AI infrastructure and mapping services. Built on automotive AI agents and cloud platforms, these systems process personal data (location, preferences, conversation history) and therefore trigger transparency obligations under both the GDPR and the EU AI Act.
- Transparency: users must know they interact with AI
- GDPR compliance for conversation and location data
- Data minimisation and purpose limitation principles
- Right to explanation for AI-driven recommendations
- Cross-border data transfer rules (EU–US cloud providers)
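Two of the obligations above, AI disclosure and data minimisation, translate directly into engineering practice. The sketch below shows one possible shape, not a mandated one: the disclosure string, field names, and request structure are all hypothetical.

```python
# Illustrative only: one way an in-vehicle assistant might implement
# transparency (users know they talk to AI) and data minimisation
# (only fields needed for the current purpose leave the vehicle).

AI_DISCLOSURE = "You are talking to an AI assistant, not a human."

# Fields actually required for the stated purpose (purpose limitation).
ALLOWED_FIELDS = {"utterance", "locale"}

def minimise(request: dict) -> dict:
    """Drop any personal-data field not required for the stated purpose."""
    return {k: v for k, v in request.items() if k in ALLOWED_FIELDS}

raw_request = {
    "utterance": "Find a charging station nearby",
    "locale": "de-DE",
    "precise_location": (48.1371, 11.5754),   # withheld unless the user opts in
    "conversation_history": ["..."],          # withheld from this request
}

print(AI_DISCLOSURE)
print(minimise(raw_request))
```

In a real system the allow-list would be per-feature and consent-aware; the design point is that minimisation happens before data crosses to the cloud provider, which also narrows the cross-border transfer surface.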
AI Testing in Vehicles: The Standard That Doesn't Exist Yet
Leading European OEMs are actively developing AI systems for automated driving — yet a critical gap exists: no unified standard currently defines how AI and machine learning models in vehicles must be tested for functional safety compliance. ISO/PAS 8800:2024 (AI safety for road vehicles) explicitly does not define ASIL-level compliance for ML models. ISO 26262 covers functional safety but predates modern AI architectures. SOTIF (ISO 21448) addresses intended functionality but lacks ML-specific test protocols.
Autonomous and semi-autonomous driving systems are classified as High-Risk AI under EU AI Act Annex III. From August 2026, providers must demonstrate conformity — including testing and validation — yet the testing methodology standard remains undefined. AI Ministry is actively engaging with industry stakeholders and EU bodies to develop practical guidance for this gap.
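Because no standard yet defines how ML models must be tested, providers today fall back on ad-hoc internal gates. The sketch below illustrates one such pattern, a frozen regression gate over a fixed validation set; the metric, dataset, and threshold are hypothetical and carry no regulatory status.

```python
# Illustrative only: a fixed regression gate for a perception model.
# The model outputs, ground truth, and 0.85 threshold are invented
# stand-ins, not values drawn from any standard.

def detection_accuracy(predictions: list[bool], labels: list[bool]) -> float:
    """Fraction of frames where pedestrian detection matches ground truth."""
    assert len(predictions) == len(labels)
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# Frozen validation labels and (simulated) model outputs for 10 frames.
ground_truth = [True, True, False, True, False, False, True, True, False, True]
model_output = [True, True, False, True, False, True, True, True, False, True]

RELEASE_GATE = 0.85  # hypothetical minimum accuracy before release

acc = detection_accuracy(model_output, ground_truth)
print(f"accuracy={acc:.2f}, gate={'PASS' if acc >= RELEASE_GATE else 'FAIL'}")
```

The gap the section describes is exactly that nothing in ISO 26262, ISO 21448, or ISO/PAS 8800:2024 tells a provider which metric, which dataset, or which threshold makes such a gate sufficient for conformity.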