EU AI Act compliance

The EU AI Act is the world's most comprehensive AI regulation to date. Enterprise organizations must understand their obligations and implement appropriate controls, or face fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher.

Understanding the risk-based framework

The EU AI Act classifies AI systems into four risk categories, each with progressively stricter requirements:

  • Unacceptable risk — Banned AI practices: social scoring, manipulative subliminal techniques, and real-time biometric surveillance.
  • High risk — AI in employment, credit scoring, education, law enforcement, and critical infrastructure. Requires conformity assessments, audit trails, and human oversight.
  • Limited risk — AI chatbots, deepfakes, and emotion recognition. Requires transparency labeling.
  • Minimal risk — Most general-purpose AI usage. Subject to voluntary codes of practice.
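As a first triage step, the four tiers above can be captured in a simple lookup. A minimal sketch in Python, where the use-case keys and the `classify_use_case` function are illustrative assumptions, not terms defined by the Act:

```python
# Minimal triage sketch: map common AI use cases to EU AI Act risk tiers.
# The tier names follow the Act; the use-case keys are illustrative.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "employment_screening": "high",
    "credit_scoring": "high",
    "customer_chatbot": "limited",
    "code_completion": "minimal",
}

def classify_use_case(use_case: str) -> str:
    """Return the risk tier for a known use case.

    Unknown use cases default to 'minimal' here; a real triage
    process would instead flag them for legal review.
    """
    return RISK_TIERS.get(use_case, "minimal")

print(classify_use_case("credit_scoring"))    # high
print(classify_use_case("customer_chatbot"))  # limited
```

The same deployment can fall into different tiers depending on context, so a lookup like this is a starting point for inventory, not a legal determination.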

Implementation timeline

Feb 2025

Prohibited Practices

Ban on social scoring, real-time biometric identification, and manipulative AI.

Aug 2025

GPAI Requirements

Transparency obligations for general-purpose AI models (e.g. GPT, Claude, Gemini), plus governance and penalty provisions.

Aug 2026

High-Risk Obligations

Conformity assessments, documentation, and monitoring for high-risk AI systems listed in Annex III.

Aug 2027

Full Applicability

All remaining provisions apply, including high-risk AI embedded in regulated products. Fines up to €35M or 7% of worldwide turnover.

Key requirements for enterprises

Regardless of risk classification, enterprises using AI should implement:

1. Record-keeping and audit trails (Article 12)

AI systems must maintain logs sufficient to determine compliance. PromptWall's audit trail records every AI interaction with full inspection detail — who sent what, what was detected, and what action was taken.
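A minimal sketch of what one such log entry might contain (who sent what, what was detected, what action was taken), assuming a JSON-lines store; the field names are illustrative, not PromptWall's actual schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user: str, prompt: str, detections: list[str], action: str) -> str:
    """Serialize one append-only audit entry: who sent what, what was
    detected in it, and what action was taken.

    The prompt is stored as a SHA-256 hash to keep the example short;
    a real trail would retain the inspected content under access controls.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "detections": detections,
        "action": action,
    }
    return json.dumps(entry)

line = audit_record("alice@example.com", "Summarize this contract ...", ["PII:name"], "redact")
print(line)
```

Appending entries like this to tamper-evident storage is what makes it possible to answer an auditor's "who, what, when" questions after the fact.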

2. Risk management system (Article 9)

A continuous risk management process must identify, analyze, and mitigate AI risks. PromptWall's risk management capabilities quantify AI risks through detection metrics and compliance dashboards.

3. Data governance (Article 10)

Training data and operational data must be governed with appropriate controls. For enterprise AI usage, AI DLP ensures that personal data is detected, classified, and protected before reaching AI providers.
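A toy redaction pass gives the flavor of this detect-and-protect step, assuming two simple regex detectors (production DLP engines cover far more data types and use context, not just patterns):

```python
import re

# Two illustrative detectors; real AI DLP covers many more PII types.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace detected personal data with typed placeholders before
    the prompt leaves the organization, and report what was found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label}]", text)
    return text, found

clean, found = redact("Pay jan.kowalski@example.eu via DE44500105175407324931")
print(clean)   # Pay [EMAIL] via [IBAN]
print(found)   # ['EMAIL', 'IBAN']
```

The key design point is that redaction happens on the egress path, so the AI provider only ever sees the placeholder, while the classification result feeds the audit trail.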

4. Transparency and human oversight

Users must be informed when they interact with AI systems. Organizations must ensure human oversight for high-risk decisions. Policy enforcement automates compliance with transparency requirements.
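The oversight rule can be expressed as a simple gate. A sketch, reusing the Act's tier labels with a hypothetical approval flag:

```python
def enforce(risk_tier: str, human_approved: bool = False) -> str:
    """Decide what happens to an AI interaction based on its risk tier.

    Unacceptable-risk uses are blocked outright; high-risk uses are
    held until a human reviewer signs off; everything else proceeds
    with a transparency notice attached.
    """
    if risk_tier == "unacceptable":
        return "block"
    if risk_tier == "high" and not human_approved:
        return "hold_for_human_review"
    return "allow_with_ai_notice"

print(enforce("high"))                       # hold_for_human_review
print(enforce("high", human_approved=True))  # allow_with_ai_notice
print(enforce("unacceptable"))               # block
```

Encoding the rule as policy, rather than training, means the same gate applies uniformly across every tool and team.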

Compliance automation with PromptWall

Manual compliance is expensive and error-prone. PromptWall automates key compliance activities: audit trail generation, policy enforcement, data classification, PII detection, and compliance reporting — reducing the burden on GRC teams while improving compliance posture.

See also our broader AI governance framework for a comprehensive approach to compliance and our compliance checklist for a practical implementation guide.

Prepare for the EU AI Act

Deploy AI governance controls that satisfy EU AI Act requirements.

Frequently asked questions

When does the EU AI Act take effect?

The EU AI Act entered into force in August 2024, with a phased approach: prohibited AI practices apply from February 2025, general-purpose AI model requirements from August 2025, and high-risk AI obligations from August 2026. Full applicability, including high-risk AI embedded in regulated products, follows in August 2027.

Does the EU AI Act apply to companies outside the EU?

Yes. The EU AI Act has extraterritorial scope. It applies to any organization that deploys AI systems in the EU market or whose AI outputs are used by EU-based individuals — regardless of where the company is headquartered.

Is using ChatGPT considered a high-risk AI system?

General-purpose ChatGPT usage is not classified as high-risk. However, if you use AI for employment decisions, credit scoring, education assessment, or other high-risk applications listed in Annex III, those specific deployments are subject to high-risk requirements — including documentation, audit trails, and human oversight.

How does PromptWall help with EU AI Act compliance?

PromptWall provides several compliance building blocks: complete audit trails for AI interactions (Article 12 — Record-keeping), automated policy enforcement for governance (Article 9 — Risk management), PII protection for data governance (Article 10 — Data governance), and transparency logging showing what AI decisions were made and why.


Bring AI under policy before risk reaches production.

Talk to PromptWall about browser, editor, CLI, and shared policy rollout for governed AI access.
