EU AI Act: what your company needs to know if you use or plan to use AI
The EU AI Act entered into force in August 2024 and is being applied in phases; its main provisions become fully mandatory by August 2026. This is not future regulation; it is already in effect. And unlike GDPR, which many companies implemented late, the AI Act carries sanctions that make GDPR's look moderate: up to €35 million or 7% of global annual turnover for the most serious violations.
This article is not legal advice. It is a practical guide to understanding whether your company has obligations and what they are.
The AI Act’s logic: risk classification
The AI Act classifies AI systems into four categories based on the risk they represent:
Unacceptable risk (prohibited): systems that use subliminal techniques to manipulate behavior, social scoring by public authorities, real-time facial recognition in public spaces (with narrow exceptions). If your company uses any of these, it has an immediate legal problem.
High risk: AI used in critical infrastructure, education, employment (candidate screening, performance evaluation), essential services, and justice. These systems require a conformity assessment, technical documentation, human oversight, and registration in the EU database.
Limited risk: chatbots, content generation systems. Mainly transparency obligations: users must know they are interacting with AI.
Minimal risk: spam filters, AI in video games, etc. No specific obligations beyond general laws.
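If you want to track this classification inside your own tooling, the four tiers map naturally onto a small lookup table. A minimal sketch in Python (the tier names and obligation summaries are paraphrased from the list above, not the regulation's exact text):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Headline obligations per tier, paraphrased from the categories above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited: stop using the system",
    RiskTier.HIGH: "conformity assessment, technical documentation, "
                   "human oversight, registration in the EU database",
    RiskTier.LIMITED: "transparency: users must know they are interacting with AI",
    RiskTier.MINIMAL: "no specific obligations beyond general laws",
}

print(OBLIGATIONS[RiskTier.HIGH])
```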
Concrete cases affecting mid-sized companies
You use AI to filter CVs or evaluate candidates: high risk. You need a conformity assessment and human oversight of decisions.
You have a chatbot on your website: limited risk. You must clearly inform users that they are talking with AI (a minimal sketch of how to enforce this follows the list).
You use LLMs to generate marketing content: minimal risk in general, but if the content could be mistaken for authentic or factual information, labeling obligations may apply.
You process data with AI to detect fraud: may be high risk depending on the sector and the decisions the system makes.
You deploy AI in critical infrastructure processes: high risk with strict obligations.
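For the chatbot case, one robust way to meet the transparency obligation is to build the disclosure into the response pipeline itself, so no UI layer can omit it. A minimal sketch (the function name and disclosure wording are illustrative, not mandated text):

```python
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses may contain errors."
)

def wrap_chatbot_reply(reply: str, first_message: bool) -> str:
    """Prepend the AI disclosure to the first reply of every conversation."""
    if first_message:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply
```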
What to do now
1. Inventory. List all AI systems your company uses, including those that come as features within third-party software (CRM with AI, HR with automatic scoring, etc.). A minimal record format is sketched after this list.
2. Classify. For each system, determine the risk category. If in doubt, assume the highest category.
3. Document. For high-risk systems, start building the required technical documentation: system description, training data, performance metrics, human oversight mechanisms.
4. Integrate with GDPR. The AI Act does not replace GDPR; it overlaps with it. If the AI system processes personal data, both regulations apply simultaneously.
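Steps 1 through 3 lend themselves to a structured inventory. Here is a minimal sketch of one possible record format (the field names and the example entry are illustrative assumptions, not prescribed by the Act):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One inventory entry per AI system, including AI features in third-party tools."""
    name: str                      # e.g. "CRM lead-scoring module"
    vendor: str                    # who supplies the model or feature
    purpose: str                   # what the system is used for
    risk_tier: str                 # "unacceptable" / "high" / "limited" / "minimal"
    processes_personal_data: bool  # if True, GDPR applies alongside the AI Act
    human_oversight: str = ""      # how humans review or override decisions
    documentation: list[str] = field(default_factory=list)  # links to technical docs

inventory = [
    AISystemRecord(
        name="CV screening tool",
        vendor="ExampleHR Inc.",   # hypothetical vendor
        purpose="rank incoming job applications",
        risk_tier="high",          # employment use, so high risk
        processes_personal_data=True,
        human_oversight="recruiter reviews every rejection",
    ),
]
```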
The private LLM angle
One of the strongest technical arguments for deploying private on-premise LLMs (beyond data privacy) is that it can significantly simplify AI Act compliance: you have full control over the model, the training data, the system's behavior, and the audit trail.
With third-party cloud models, much of that information is opaque or inaccessible, which complicates the technical documentation required for high-risk systems.
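To make the audit-trail point concrete: with a self-hosted model you can log every inference at the point where it happens. A minimal sketch (generate is a stand-in for whatever local inference call you use; nothing here is a specific product's API):

```python
import hashlib
import json
import time

def audited_generate(generate, prompt: str, log_path: str = "llm_audit.jsonl") -> str:
    """Call a local LLM and append an audit record for each request."""
    response = generate(prompt)  # stand-in for your local inference call
    record = {
        "timestamp": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "model": "local-model",  # record the exact model/version you deploy
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return response
```

Hashing the prompt and response instead of storing them verbatim keeps personal data out of the audit log itself, which matters when GDPR applies alongside the AI Act.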