EU AI Act: what you need to know
A practical summary: the timeline, the penalties, who is affected, and what must be in place by August 2, 2026.
What the AI Act is
An EU regulation (Regulation (EU) 2024/1689) adopted in 2024. It regulates AI systems by risk level: from prohibited practices, through high-risk systems, to general-purpose AI models. It applies both to providers of AI systems and to deployers, which in practice means most EU companies.
Timeline of application
- February 2, 2025: bans on prohibited practices and mandatory AI literacy take effect
- August 2, 2025: rules for general-purpose AI models and governance structures
- August 2, 2026: full application, including high-risk systems, transparency obligations, and penalties
- August 2, 2027: rules for AI embedded in regulated products (Annex I)
Penalties
Up to EUR 35 million or 7% of global annual turnover for prohibited practices; up to EUR 15 million or 3% for breaches of most other obligations; up to EUR 7.5 million or 1% for supplying incorrect information to authorities (Article 99).
Deployer obligations
- Use high-risk systems per the provider's instructions
- Ensure human oversight appropriate to the use case
- Monitor operation and report incidents
- Inform affected people about AI decisions
- Data governance: quality of input data, DPIA where relevant
- AI literacy across the team (Article 4)
The Shadow AI problem
Employees use AI tools that IT and leadership often don't know about: personal ChatGPT accounts, Copilot outside the company IdP, AI features embedded in SaaS products. When an auditor asks "which AI systems do you use?", nobody truly knows. Shadow AI is the most frequent source of real incidents.
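Getting a first handle on shadow AI usually starts with comparing what employees actually connect to against a sanctioned-tools register. A minimal sketch, assuming you can export OAuth app grants from your identity provider; the domain list, field names, and `flag_shadow_ai` helper are all illustrative, not part of any official tooling:

```python
# Hypothetical sketch: flag potential shadow-AI tools in an exported
# list of OAuth app grants. Domains and record fields are illustrative.
KNOWN_AI_DOMAINS = {"openai.com", "anthropic.com", "copilot.microsoft.com"}

def flag_shadow_ai(grants, sanctioned):
    """Return domains that look like AI tools but are absent from the
    company's sanctioned-tools register."""
    flagged = []
    for app in grants:
        domain = app["domain"]
        if domain in KNOWN_AI_DOMAINS and domain not in sanctioned:
            flagged.append(domain)
    return flagged

grants = [
    {"user": "alice", "domain": "openai.com"},      # personal account
    {"user": "bob", "domain": "example-crm.com"},   # ordinary SaaS
]
print(flag_shadow_ai(grants, sanctioned={"copilot.microsoft.com"}))
# flags openai.com: an AI tool not on the sanctioned register
```

Anything flagged this way becomes an entry in the AI system inventory an auditor will ask for, rather than a surprise.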
Overlap with GDPR, DORA, NIS2, ISO 42001
The AI Act does not exist in isolation. Personal data processed by AI systems is still subject to the GDPR. Financial firms must also comply with DORA, and critical-infrastructure operators with NIS2. ISO/IEC 42001 is a voluntary AI management system standard that covers many of these topics systematically. The GovReady framework is designed so that evidence collected once serves multiple frameworks.
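The "collect once, reuse everywhere" idea can be sketched as a simple mapping from evidence items to the frameworks they satisfy. The evidence names and framework assignments below are hypothetical examples, not taken from the AI Act or any standard:

```python
# Illustrative only: one piece of evidence can serve several frameworks.
EVIDENCE_MAP = {
    "ai-system-inventory": ["AI Act", "ISO 42001"],
    "dpia-records": ["AI Act", "GDPR"],
    "incident-response-log": ["AI Act", "DORA", "NIS2"],
}

def frameworks_covered(evidence_items):
    """Union of frameworks addressed by the evidence collected so far."""
    covered = set()
    for item in evidence_items:
        covered.update(EVIDENCE_MAP.get(item, []))
    return sorted(covered)

print(frameworks_covered(["dpia-records", "incident-response-log"]))
# one DPIA record and one incident log already touch four frameworks
```

The design point is that the register is keyed by evidence, not by framework, so adding a new framework means extending the mapping, not re-collecting documents.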