
What is the AI Act? A clear explainer for Dutch SMBs

Laurens van Dijk

Founder, DataDream

The AI Act is no longer hypothetical

After years of negotiation in Brussels, the EU AI Act was adopted in 2024, and the first obligations have applied since 2 February 2025. For Dutch SMBs that means: the prohibitions are already in force, AI literacy for staff is already required, and in 2026-2027 heavier obligations follow for anyone using or building high-risk AI. The law is no longer something to "tackle in 2027". It is here, and your organisation almost certainly touches it.

This article explains the law in plain language, without legal jargon. For the full hub with tools and checks, see /en/ai-act; for a fast compliance self-scan, see /en/ai-act-checker.

The AI Act in a few sentences

The AI Act is the first broad AI law in the world. Its approach is risk-based: not every AI application gets the same rules, but obligations scale with what the AI does and where it is deployed. The law applies to anyone offering AI on the EU market or using AI in the EU, including American vendors selling here, and Dutch companies deploying external AI in their operations.

The law distinguishes two roles you must clearly separate: provider (who builds or markets the AI) and deployer (who uses AI in their organisation). Most SMBs are deployers; only organisations that put AI models or AI systems on the market are providers.

The four risk categories

The law has four categories, in increasing severity:

1. Unacceptable risk (banned since 2 February 2025). AI applications that violate fundamental rights. Examples: government social scoring, real-time biometric identification in public spaces (outside narrowly defined exceptions), manipulative AI targeting vulnerable groups, and emotion recognition in workplaces or education (with exceptions for medical or safety purposes). SMBs rarely touch this category, but know the bans so you know what to avoid.

2. High risk (obligations from August 2026). AI with a direct impact on people, their rights or their safety. Examples: AI in recruitment and selection, credit scoring, educational evaluation, healthcare diagnostics, and critical infrastructure. This category carries an extensive set of obligations: risk management, dataset quality control, technical documentation, user-facing transparency, human oversight, robustness, accuracy and cybersecurity. For SMBs deploying AI in HR or customer screening, this is the most relevant chapter.

3. Limited risk (transparency obligations). AI that interacts with humans (chatbots, voicebots) must make clear that it is AI. AI-generated content (deepfakes, manipulated text, images) must be identifiable as such. For SMBs with chatbots, AI voice reception or AI content publication: arrange this transparency now (a minimal labelling sketch follows after this list).

4. Minimal risk (no specific obligations). The vast majority of AI applications: spam filters, AI in search engines, product recommendations, AI features in office software. No specific AI Act obligations apply, but the general duty (Art. 4) to provide AI literacy to your staff still does.
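
For the limited-risk category, the work is mostly clear labelling. Below is a minimal sketch in Python of what that can look like for a chatbot greeting and for published content; the wording and the label_generated helper are illustrative assumptions, not text prescribed by the law.

    # Illustrative wording; the AI Act prescribes the duty to disclose,
    # not this exact text.
    CHATBOT_GREETING = (
        "Hi! You are chatting with an AI assistant. "
        "I can make mistakes; ask for a colleague if you prefer a human."
    )

    def label_generated(text: str) -> str:
        """Append a provenance note to AI-generated content before publishing."""
        return text + "\n\n[This text was generated with the help of AI.]"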

The timeline: what is now and what is coming?

Date            | What kicks in
1 August 2024   | AI Act enters into force (published in the EU Official Journal on 12 July 2024)
2 February 2025 | Bans + AI literacy (Art. 4) apply
2 August 2025   | Rules for general-purpose AI (GPAI providers) apply
2 August 2026   | High-risk AI obligations apply
2 August 2027   | Remaining obligations apply (notably high-risk AI embedded in products under existing EU product legislation)

For SMBs, August 2026 is the pivotal date: from then on, the heavy obligations for high-risk AI apply and enforcement ramps up in stages.

Article 4: AI literacy is now required

The most concrete obligation since February 2025 is Article 4 of the AI Act. Every organisation that provides or deploys AI must ensure its staff and involved external parties have "an appropriate level of AI literacy". There is no size threshold: a 10-person SMB using ChatGPT falls under this duty just like a multinational running agent systems.

What does it mean concretely?

  • Staff must know what AI is, how it works at a high level, what risks it brings (hallucinations, bias, data leakage) and how to use it responsibly.
  • Higher-risk applications call for deeper training; for everyday office use, a basic introduction suffices.
  • Document what you do: trainings, policies, prompts and guidelines. If a regulator asks, you want to be able to show that you have this in order.

For practical training see /en/ai-training; for a deeper article see AI Act Art. 4 for SMBs.

What should you do now? (six steps for SMBs)

Step 1: Inventory where you use AI. Not just ChatGPT and Copilot, but also the AI features inside tools you already had (HubSpot, Salesforce, Mailchimp, recruitment software, customer support tools). Make a list per department.

Step 2: Classify per use case. For each AI application, ask: is this minimal, limited, high or unacceptable risk? Most use cases land in minimal or limited; check specifically for high risk in HR (selection), credit checks, student evaluation, healthcare output and customer scoring. A minimal register sketch follows after step 6.

Step 3: Arrange transparency for limited risk. Chatbot? Make clear it is an AI. Voicebot? Same. AI-generated marketing content at a scale where your audience needs to know? Label it.

Step 4: Train your staff. A basic AI literacy training for everyone using AI. Document attendance and content. We typically run 90 minutes interactive, with sector examples, prompt basics, risks and internal policy.

Step 5: Document your policy. A short internal AI policy document: which tools are allowed, which data yes/no, when do you escalate to IT, what do you do on incidents. Keep it concise (two to three A4 pages) and treat it as a living document.

Step 6: Plan for 2026. If high-risk applications sit in your use cases, start now with technical documentation, dataset control and the governance structure you must have ready in August 2026.
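
Steps 1 and 2 come down to keeping a structured register of your AI use. As a minimal sketch in Python, assuming a simplified risk scale and hypothetical entries (classify your own use cases, with legal review where needed):

    # A minimal AI-use register: one entry per application, per department.
    from dataclasses import dataclass
    from enum import Enum

    class Risk(Enum):
        MINIMAL = "minimal"
        LIMITED = "limited"            # transparency obligations
        HIGH = "high"                  # full obligations from August 2026
        UNACCEPTABLE = "unacceptable"  # banned since February 2025

    @dataclass
    class AIUseCase:
        department: str
        tool: str
        purpose: str
        risk: Risk
        notes: str = ""

    # Hypothetical entries for illustration only.
    register = [
        AIUseCase("Marketing", "ChatGPT", "draft blog posts", Risk.MINIMAL),
        AIUseCase("Support", "Website chatbot", "customer Q&A", Risk.LIMITED,
                  "must disclose that it is AI"),
        AIUseCase("HR", "CV screening in ATS", "pre-select candidates", Risk.HIGH,
                  "human oversight + candidate information required"),
    ]

    # Print the register with the heaviest risk category first.
    severity = list(Risk)  # definition order: minimal -> unacceptable
    for uc in sorted(register, key=lambda u: severity.index(u.risk), reverse=True):
        print(f"{uc.risk.value:<12} {uc.department:<10} {uc.tool}: {uc.purpose}")

Sorting the register by severity gives you the priority list from step 2 almost for free: the high-risk entries surface first and point straight at your 2026 work.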

What the AI Act means per sector (in practice)

The law looks abstract, but client conversations keep returning to the same sector-specific questions. Below are four common scenarios and what they concretely mean.

HR and recruitment. Are you using AI to screen CVs, score candidates or automatically pre-select? That is high risk. From August 2026: document your dataset (which examples train the system, how do you safeguard against unintended discrimination), ensure human oversight on every rejection, and inform candidates that AI is in the process. Many ATS vendors (Werken bij, Recruitee, Greenhouse) offer helper features here, but as deployer you remain responsible.

Healthcare. AI in diagnostics, treatment advice or triage is high risk. A GP practice deploying a chatbot for complaint intake quickly falls into this category if its output feeds into decisions in the consulting room. The approach: technical documentation of the AI system, clinical validation, human oversight baked into the process, and transparency towards the patient.

Legal and accounting. AI that generates legal or tax advice falls under generative AI with transparency requirements, not automatically under high risk. But disciplinary responsibility for the output stays with the advisor. Practically: always treat AI output as a draft, apply professional review, and inform clients about the use of AI in your process. For accountants, see also AI for accountants.

Education. AI for student evaluation, exam grading or admission decisions is high risk. For teaching support (a student chatbot, an AI tutor) you sit in limited risk, with transparency requirements. For a targeted explanation per education level, see AI in education.

How enforcement works in the Netherlands

In the Netherlands, multiple regulators are involved: the Autoriteit Persoonsgegevens (AP) acts as the coordinating market regulator, alongside sector-specific authorities (DNB for finance, IGJ for healthcare, the Education Inspectorate for education, ACM for consumer aspects). Fines are heavy: up to 35 million euro or 7% of global annual revenue for prohibited AI practices, and a lower but still significant 15 million euro or 3% of revenue for other violations.

Practically, for SMBs: real enforcement of the high-risk obligations only starts in 2026, and in 2025 regulators are communicating mostly through guidance and outreach. But waiting for a letter is not a strategy. The enforcement catch-up that government agencies will make is predictable; companies that get their documentation in order now will have peace of mind later.

How DataDream helps

We guide Dutch SMBs and scale-ups through four concrete steps:

1. Compliance scan. A structured inventory of your AI use and classification per use case. Output: a risk overview and a priority list. Start via /en/ai-act-checker for a first self-assessment.

2. AI literacy training. Live or online, 90 minutes or half a day, with sector examples from your world. Including attendance list and certification for your records. See /en/ai-training.

3. AI policy and governance. A workable internal AI policy, a role split (DPO, AI officer, decision-makers) and escalation paths. Fits in two weeks alongside your existing compliance work. See /en/ai-strategie.

4. Implementation of controls. For high-risk applications: technical documentation, monitoring set-up, incident response, audit trail. Tailored per use case.

A fair summary

The AI Act is no reason for SMBs to panic, but it is a reason to get the basics in order now. AI literacy is required, transparency towards chatbot users is required, and if you use AI in HR or customer scoring, you have until August 2026 to get the heavier obligations in place. Companies that have done nothing by 2026 risk fines (up to 35 million euro or 7% of global revenue for the heaviest violations) and reputational damage.

Start small. A compliance scan, a training, a policy document. That is the minimum. For the broader context (what obligations, what deadlines, what sectors) see /en/ai-act. For a free self-assessment see /en/ai-act-checker. Want help executing? Schedule a free discovery call.
