
AI Act article 4: what must SMB employers do now?

Laurens van Dijk

Founder, DataDream

The deadline passed long ago, and most SMBs haven't noticed

On 2 February 2025, article 4 of the European AI Act became binding. Not "phased in". Not "a guideline for later". Simply: from that day, every EU employer must ensure that employees who use AI have an appropriate understanding of it. We're now over a year on, and most SMBs using ChatGPT, Copilot or Claude don't even have this on their radar.

The law is brief. Article 4 requires "a sufficient level of AI literacy" for everyone in your organisation who uses or operates AI systems on your behalf. Not a non-binding intention, but a legal obligation with enforcement powers held by the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) and fines that can in theory run into millions of euros.

This article explains what art. 4 literally requires, who it covers, what an appropriate level looks like for your SMB team, and how to arrange it pragmatically without setting up a consultancy circus. Even if you've done nothing yet: it's not a disaster, but you need to start.

What does article 4 say?

The official text is short: providers and deployers of AI systems must take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff and other people operating and using AI systems on their behalf, taking into account technical knowledge, experience, education and training, the context in which the AI systems will be used, and the persons or groups affected.

Three takeaways:

  1. The duty rests with two parties. The provider (often a software vendor) and the deployer (you, as the company using AI). For most SMBs, you're a deployer, not a provider. You use ChatGPT or Copilot, you don't build it.
  2. The level is "appropriate". Not "everyone at PhD level". What's appropriate depends on role, type of AI, and risk. A marketing employee using ChatGPT for blog posts needs a different level than an HR manager using AI to screen candidates.
  3. It applies to anyone using AI on your behalf. Not just permanent staff. Interns, freelancers and temp workers acting for you fall under it too.

In the Netherlands, enforcement falls to the Autoriteit Persoonsgegevens (AP), designated by the cabinet as the market supervisor for algorithms and AI. The AP has published its own guide ("Aan de slag met AI-geletterdheid", "Getting started with AI literacy") and is clear that this is not a paper tiger.

Who exactly does this cover in your business?

This is where many SMB employers go wrong. They think: "We don't use AI." Then it turns out:

  • the marketing manager uses ChatGPT daily for copy, emails and newsletters
  • the office manager uses AI features in Microsoft 365 or Google Workspace (Copilot in Outlook, Gemini in Gmail)
  • the finance employee uses AI tools to process transactions or summarise reports
  • the recruiter has AI summarise cover letters or screen CVs
  • customer service uses AI chatbots or AI-suggested replies
  • the CRM has AI features (HubSpot AI, Salesforce Einstein) that are actively used
  • developers run Copilot or Cursor inside their IDE

In our experience, nine out of ten SMBs underestimate how much AI is already in their processes. It doesn't have to be a large AI platform to fall under art. 4. One employee using ChatGPT for customer communication is enough to trigger the obligation.

Interns and freelancers count. The law speaks of "persons who operate and use AI systems on their behalf". If an external copywriter writes for your brand using AI, or a freelancer handles customer support via an AI tool, you as the client should ensure they're AI-literate for that task.

What is an "appropriate level" for your team?

Art. 4 doesn't ask for a uniform course for everyone. It asks for context-appropriate knowledge. A workable model is to think of your team in layers:

Base layer (everyone, including those not using AI daily). What is AI, how does a large language model work at a high level, what are the risks (hallucinations, bias, data leaks), when shouldn't you use AI (customer data in a free ChatGPT account is a bad idea), and how do you report an incident. You can cover this in an hour or two for the whole team.

User layer (employees using AI daily or weekly). Tool-specific: how does ChatGPT/Copilot/Claude work, what makes a good prompt, how do you validate output (AI invents facts), when not to use it, how to handle GDPR-sensitive data. Plan for half a day to a day per user, plus a prompt library with good examples for your work.

Governance layer (management, AI champion, IT lead). Which tools you allow, agreements with vendors, how you document compliance, what transparency obligations you have toward customers. This is more a working session than a training course: you set policy.

Risk-role layer (HR, finance, legal, customer service). Anyone using AI for decisions about people (recruitment, credit assessment, customer segmentation) enters high-risk territory under the AI Act. Extra rules apply: documentation duty, human oversight, transparency to those affected. Specific training needed per role.

These layers aren't sequential. They run in parallel to different audiences.

How to arrange this pragmatically

No consultancy circus required. A structured five-step approach works for most SMBs.

Step 1: inventory who uses which AI tools. Send a short survey, or do a round through departments. The list is almost always longer than you expected. This is also valuable input for your GDPR register.
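
A plain spreadsheet is enough to capture the results, but for illustration here is a minimal sketch in Python of what one inventory row could contain. The field names and example entries are our own invention, not anything art. 4 prescribes.

```python
# Minimal sketch of an AI-tool inventory (Step 1).
# Field names and example entries are illustrative, not prescribed by the AI Act.
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AIToolUse:
    department: str      # e.g. "Marketing"
    employee_role: str   # e.g. "Content marketer"
    tool: str            # e.g. "ChatGPT (free account)"
    purpose: str         # e.g. "Drafting newsletters"
    data_involved: str   # e.g. "No personal data" / "Applicant data"
    frequency: str       # e.g. "Daily"

inventory = [
    AIToolUse("Marketing", "Content marketer", "ChatGPT (free)",
              "Drafting newsletters", "No personal data", "Daily"),
    AIToolUse("HR", "Recruiter", "CV screening add-on",
              "Summarising cover letters", "Applicant data", "Weekly"),
]

# Export to CSV so the same file can feed your GDPR register.
with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fl.name for fl in fields(AIToolUse)])
    writer.writeheader()
    writer.writerows(asdict(row) for row in inventory)
```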

Step 2: write a short AI policy of one to two pages. Not a 40-page handbook. Do cover: which tools are allowed, what data may go in (and what absolutely may not), when human review is required, how to report incidents, and who in your company is the contact point for AI questions.
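
To make the "which tools, which data" part concrete: below is a hypothetical sketch of how the allowlist in such a policy could be expressed as a simple check. The tool names and data classes are invented for illustration; your own policy defines the real ones.

```python
# Illustrative sketch: the "which tools / which data" rules of a one-page
# AI policy as a simple check. Tool names and data classes are hypothetical.
ALLOWED_TOOLS = {
    # tool -> highest data sensitivity permitted for that tool
    "Copilot (business tenant)": "internal",
    "ChatGPT (free account)": "public",
}
SENSITIVITY_ORDER = ["public", "internal", "personal"]

def is_use_allowed(tool: str, data_class: str) -> bool:
    """True if the tool is on the allowlist and the data class is within its limit."""
    if tool not in ALLOWED_TOOLS:
        return False  # not on the allowlist: escalate to the AI contact point
    if data_class not in SENSITIVITY_ORDER:
        return False  # unknown classification: treat as not allowed
    limit = ALLOWED_TOOLS[tool]
    return SENSITIVITY_ORDER.index(data_class) <= SENSITIVITY_ORDER.index(limit)

print(is_use_allowed("ChatGPT (free account)", "personal"))    # False: never
print(is_use_allowed("Copilot (business tenant)", "internal")) # True
```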

Step 3: arrange training along the layers above. Base training for everyone, user training for active users, a governance session for management, and role-specific training where risk roles exist. You don't have to buy all of this externally. The Autoriteit Persoonsgegevens offers its own guide, and open-source material is widely available. For specific roles or larger teams, external guidance pays off.

Step 4: set up an incident reporting process. As with GDPR: when in doubt, report it. An AI accidentally leaking customer data, a recruitment screening systematically rejecting people with certain surnames, a chatbot giving incorrect legal information: those are incidents. Who reports what where? Set this out in advance.
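
As an illustrative sketch, a single log entry could look like this. The fields are our own suggestion, not a legal requirement; the point is that the answers to "who, what, where" are written down.

```python
# Minimal sketch of an AI incident log entry, mirroring the "who reports
# what where" questions in Step 4. Fields and values are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIIncident:
    reported_on: date
    reported_by: str    # name or role of the reporter
    tool: str           # which AI system was involved
    description: str    # what happened, in plain language
    data_affected: str  # e.g. "customer email addresses", or "None"
    action_taken: str   # containment / correction / notification
    escalated_to: str   # internal contact point, or the AP if required

incident = AIIncident(
    reported_on=date(2026, 3, 12),
    reported_by="Customer service lead",
    tool="Support chatbot",
    description="Chatbot gave incorrect legal advice to a customer",
    data_affected="None",
    action_taken="Customer corrected, chatbot prompt adjusted",
    escalated_to="AI contact point",
)
```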

Step 5: review annually. AI changes fast, regulation too. The AI Act itself rolls out in phases (high-risk obligations from August 2026, some obligations only in 2027). "Train once and done" doesn't work.

Common misconceptions

"Our employees don't use AI." Almost always untrue. ChatGPT browser extensions, AI features in M365 and Google Workspace, AI in CRM systems: it creeps in everywhere. One round of asking shows it.

"One training and we comply." No. Art. 4 asks for an appropriate level that keeps fitting how the work and tools change. Annual review is part of it.

"Only IT people need to be AI-literate." Incorrect. The law covers anyone using or operating AI on your behalf. A marketing employee with no technical background falls under it just as much.

"It's only for large companies." Incorrect. The AI Act applies to every EU employer using AI, regardless of size. The European Commission does take size and risk into account when enforcing and setting fines. Initial enforcement actions are likely to focus on larger players with clear breaches, but that doesn't release SMBs from the obligation.

"GDPR training already covers it." Incorrect. GDPR and the AI Act overlap on privacy aspects, but AI literacy requires additional knowledge: how a model works, what hallucinations are, what bias in training data means, when human oversight is required. That's not in a GDPR training.

"Our vendor handles it." Partly. A software vendor (provider) has its own duties, but that doesn't release you as deployer from your responsibility for your own employees. The law assigns both parties, not just the provider.

What if you do nothing?

Practical consequences run on three tracks.

Supervisor. In the Netherlands, the Autoriteit Persoonsgegevens is the market supervisor for algorithms and AI. The AP investigates, issues enforcement decisions, and can impose fines. For art. 4 and related breaches, the fine levels in article 99 of the AI Act apply: up to €15 million or 3% of worldwide annual turnover, whichever is higher. In practice the AP is expected to focus first on large players and evident breaches, not on an SMB that is just setting up its AI policy. But the power is there.
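
For a feel of how that ceiling works: the cap is the higher of the fixed amount and the turnover percentage. A quick sketch with made-up turnover figures:

```python
# The article 99 ceiling in one line: the higher of €15M and 3% of turnover.
# Turnover figures below are invented examples.
def max_fine(annual_turnover_eur: float) -> float:
    return max(15_000_000, 0.03 * annual_turnover_eur)

print(f"{max_fine(10_000_000):,.0f}")    # SMB with €10M turnover -> 15,000,000
print(f"{max_fine(1_000_000_000):,.0f}") # €1B turnover -> 30,000,000
```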

Liability and damages. If an employee uses AI without training and harms a customer (incorrect legal information, a data leak, a discriminatory decision), you as the employer carry primary responsibility. A lack of demonstrable AI literacy is then an aggravating factor. Damage claims, reputational damage and relationship damage follow.

Reputation. The Netherlands is strict on GDPR enforcement and known for privacy scandals that haunt organisations for years (the childcare benefits affair, the SyRI ruling). An AI incident at an SMB without policy hits the press faster than you'd think. Especially if you're in B2B with larger customers who have their own compliance requirements for suppliers.

How DataDream can help

No package you have to buy, but several ways to start faster.

  • Free AI scan. Short survey showing where you stand: which AI you're probably already using, which risks are acute, and where to begin.
  • AI Quickscan via /ai-strategie. A working session where we inventory per department who uses what, draft a first version of your policy, and assess training needs per role. Start small, roll out in phases.
  • AI training that explicitly meets art. 4. In three layers: base for everyone, user training for active users, a governance session for management. On-site or online, in your context (we use your use cases, not a generic slide deck).
  • Specific guidance for HR recruitment AI. Recruitment AI is high-risk under the AI Act. Heavier documentation and transparency duties apply here. Separate approach needed.

What we don't do: hand over a 40-page handbook nobody reads. What we do: a workable policy, a team that understands what it's doing, and documentation that proves art. 4 compliance. That's what the AP wants to see if they ever come knocking, and what customers will ask of you when they include AI compliance in procurement terms.

Frequently asked questions

What is AI literacy exactly?

AI literacy is the combination of skills, knowledge and understanding needed to deploy AI systems responsibly. It covers technical basics (how an AI model works), risk awareness (hallucinations, bias, privacy), legal context (GDPR, AI Act) and practical use (prompting, output validation). The level depends on the role: an end user needs less depth than an AI champion or management member setting governance.

When do I have to comply with art. 4?

The duty has been directly applicable since 2 February 2025. There's no transition period left. If you've arranged nothing, you're technically in breach. Practical supervisor focus is on large players and clear breaches, but that doesn't release you. Start now.

How long does this take for an SMB team of 20?

Depends on where you stand and how deep you go. A workable base (inventory, short AI policy, base training for the whole team, user training for heavy users) can be set up in a few weeks if everyone cooperates. Deeper governance, documentation and risk-role-specific training takes longer. You don't have to have everything perfect at once: start small, in phases.

Can I arrange this internally or do I have to hire an external party?

You can do it yourself. The law doesn't prescribe an external party. The Autoriteit Persoonsgegevens offers a guide that's a fine starting point for many SMBs. External guidance pays off mainly if you don't have anyone internally combining AI and legal knowledge, or if you operate in a regulated sector with heavier requirements.

What if an employee uses AI without training? Am I liable?

In principle, yes. As an employer you're responsible for what employees do in their role. A lack of demonstrable AI literacy is an aggravating factor in an incident. An AI policy plus demonstrable training is your best defence. Having a policy without enforcing it doesn't help: there must also be evidence that the policy is applied in practice.

Does art. 4 also cover freelancers working for me?

Yes. The law speaks of "persons operating and using AI systems on their behalf". If a freelancer works for your brand using AI, you as the client must ensure they are AI-literate for that task. You can solve this by giving them access to the same training, or by including in the contract that they themselves arrange adequate AI literacy.

How do I document that I comply with art. 4?

Keep three things in order. One: an AI policy with date and version number. Two: a register of who took which training, when, at what level. Three: a log of AI incidents and how they were handled. Keep it simple but updated. An Excel file or a section in your intranet works fine for SMBs.
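
As an illustration, the training register from point two can be as simple as the sketch below, kept as a CSV so it works just as well as the Excel file mentioned above. The columns are our own choice.

```python
# Sketch of a training register (point two of the documentation list).
# Columns and entries are illustrative, not prescribed by art. 4.
import csv

RECORDS = [
    # employee, training, layer, completion date, version of the material
    ("A. Jansen", "AI basics", "base", "2026-01-15", "v1.0"),
    ("B. de Vries", "Prompting & validation", "user", "2026-02-02", "v1.1"),
]

with open("training_register.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["employee", "training", "layer", "completed_on", "material_version"])
    writer.writerows(RECORDS)
```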

Starting with a first step

If you read this article and think "we haven't done any of this": it's not a disaster, provided you start now. Most SMBs are in the same situation. A first workable version of inventory + AI policy + base training can be in place in a few weeks.

No idea where to start? Take the free AI scan to see where your business stands. Want guidance on policy and strategy? Discuss the approach via /ai-strategie. For team training that explicitly meets art. 4: /ai-training.
