EU AI Act for Dutch SMEs: what to do, and when?
The EU AI Act has been in force since 1 August 2024. The first obligations already kicked in on 2 February 2025. This is what it means for your business, in plain language, no consultancy circus.
The AI Act (Regulation 2024/1689) is the world's first broad AI law. It is not just about Big Tech: if you use AI for customer service, recruitment, credit scoring, or even run a chatbot that talks to customers, you are in scope. The law phases in step by step, with most obligations applying from 2 August 2026, and fines of up to 7% of global annual turnover or 35 million euros for the heaviest violations.
For Dutch SMEs the panic is usually bigger than needed, but the risks are also bigger than many founders think. Article 4 (AI literacy) and article 5 (prohibited practices) are already in force. A ChatGPT licence without any training or policy is, technically speaking, a light breach. On a complaint or audit you must show something.
We help Dutch SMEs get into compliance pragmatically, no panic, and no ten-chapter consultancy report nobody reads. A short quickscan to map where you stand, an AI register to get a grip on what is in use, AI literacy training to cover article 4, and only the heavy compliance stack where it is truly required. For overlap with other regulation we work with lawyers via /ai-juridisch and, at the cybersecurity edge, with sister site nis2-compliant.com.
Want to know where you stand? Start with a free indication via /ai-scan. For a real AI Act quickscan, get in touch.
Challenges
You are not sure if the AI Act applies to you
The definition of AI system in article 3 is broad. Many companies say they do not do AI while running ChatGPT add-ins, applicant screening, or dynamic pricing models that fall squarely under the law. Lack of clarity is no excuse, and non-compliance is expensive.
We do an AI inventory: every system, tool, and use case in your organisation that qualifies as AI under the law. Then we classify by risk (prohibited, high-risk, limited risk, minimal risk). Outcome: a short document so you know where you stand.
Article 4 (AI literacy) has been mandatory since 2 February 2025
Every organisation that uses or provides AI must ensure staff and relevant third parties have a sufficient level of AI literacy. This applies now, not somewhere in 2026. Many SMEs have not heard of it and have nothing in place.
We deliver AI literacy training at role level (management, marketing, HR, customer service, IT). With attendance records, materials, and a short test, so you can show it to a regulator. See /ai-training.
No overview of which AI tools your people already use
Shadow AI is everywhere. Marketing runs Midjourney, sales chats with ChatGPT, HR tests cover letters with Claude, finance tests forecasts with Copilot. No one tracks it. For high-risk uses this is not just risky, it directly breaches article 26.
We build an AI register: which tool, who uses it, for which purpose, which data goes in, how the output is validated. A living document, short and usable, not a 500-row spreadsheet.
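As an illustration, one register entry could be modelled as a small record. The field names below are our own sketch, not terms from the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AIRegisterEntry:
    # One row per AI tool or use case; all field names are illustrative.
    tool: str                    # e.g. "ChatGPT (Teams plan)"
    owner: str                   # team or person accountable for this use
    purpose: str                 # what the tool is used for
    data_in: list[str] = field(default_factory=list)  # categories of input data
    validation: str = ""         # how output is checked before use
    risk_class: str = "minimal"  # prohibited / high / limited / minimal

entry = AIRegisterEntry(
    tool="ChatGPT",
    owner="Marketing",
    purpose="Drafting ad copy",
    data_in=["public product info"],
    validation="Human review before publication",
    risk_class="minimal",
)
```

A handful of such entries, kept current, already answers the first questions a regulator or customer will ask.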
High-risk systems without realising it
Annex III lists high-risk uses: recruitment, lending, biometric identification, critical infrastructure, education assessment. A screener or scoring model that looks innocent today gets a heavy compliance stack from 2 August 2026: documentation, data quality, human oversight, post-market monitoring.
We screen your use cases against Annex III and help you choose between "build the high-risk stack" or "redesign the use case so it stops being high-risk". Often the second route is cheaper and faster.
AI Act, GDPR and NIS2 land on the same desk
The AI Act does not replace GDPR, it stacks on top. NIS2 (cybersecurity) is added too. Three frameworks, three logs, three risk analyses. For an SME without a compliance officer this is unmanageable without help.
We integrate the three frameworks: a shared risk inventory, a DPIA that also covers AI Act requirements, and a security baseline that satisfies NIS2 and AI Act at once. For the cybersecurity side we point at sister project nis2-compliant.com.
Results
- You know within a week where you stand on the AI Act, no 80-page report
- Evidence for article 4 (AI literacy) properly arranged before the first complaint
- AI register that also helps your GDPR processing register, no double work
- Risk classification per system, so you know where the real obligations sit
- Concrete action list with priority, deadlines, and owner per action
- Integrated with your existing GDPR and (if relevant) NIS2 work
- Training materials your people actually want to follow
- Tools and use cases redesigned where possible so they stop being high-risk
- Support for questions from the Dutch DPA (AP) or your own customers
- No vendor lock-in: all documentation is yours, in plain Dutch or English
Frequently asked questions
What is the EU AI Act and when did it take effect?
The AI Act, formally Regulation (EU) 2024/1689, is the world's first horizontal AI law. It was adopted on 13 June 2024, published in the Official Journal of the EU on 12 July 2024, and entered into force on 1 August 2024. From that date obligations phase in, with the bulk of the law applying from 2 August 2026 and rules for AI embedded in Annex I products from 2 August 2027. The law applies directly in all EU member states, including the Netherlands, without national transposition.
When does each provision apply?
Four key dates:
- 2 February 2025: prohibited AI practices (article 5) apply, and the AI literacy duty (article 4) applies to every organisation that uses AI.
- 2 August 2025: obligations for providers of general-purpose AI models (GPAI), such as OpenAI, Anthropic, Google and Mistral, together with governance provisions and most penalties.
- 2 August 2026: the bulk of the law, including rules for high-risk systems (Annex III) and transparency duties for limited-risk systems.
- 2 August 2027: rules for high-risk AI embedded in products covered by existing EU product legislation (Annex I: think medical devices, machinery, toys).
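The phase-in can be sketched as a simple date lookup. The dates come from the Regulation; the function itself is just an illustration:

```python
from datetime import date

# Milestones from Regulation (EU) 2024/1689; labels are shortened summaries.
MILESTONES = [
    (date(2025, 2, 2), "prohibitions (art. 5) and AI literacy (art. 4)"),
    (date(2025, 8, 2), "GPAI provider duties, governance, most penalties"),
    (date(2026, 8, 2), "high-risk (Annex III) and transparency duties"),
    (date(2027, 8, 2), "high-risk AI in Annex I products"),
]

def applicable_on(day: date) -> list[str]:
    """Return the obligation sets already in application on a given day."""
    return [label for start, label in MILESTONES if day >= start]

# Example: by mid-2026, the first two milestones already apply.
print(applicable_on(date(2026, 6, 1)))
```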
What are the four risk categories?
The AI Act is risk-based, with four categories:
- Unacceptable risk (prohibited): government social scoring, manipulative AI that harms vulnerable groups, biometric categorisation by sensitive characteristics, real-time biometric identification in public spaces (with narrow law-enforcement exceptions), and emotion recognition in workplaces and education. Banned since 2 February 2025.
- High risk: systems listed in Annex III (recruitment, lending, critical infrastructure, education, biometrics not under the ban, law enforcement, migration, justice) and AI in products under Annex I. Heavy obligations: risk management, data governance, technical documentation, logging, human oversight, robustness, cybersecurity.
- Limited risk: chatbots, deepfakes, AI-generated content. Required: tell users they are dealing with AI and label AI-generated content.
- Minimal risk: spam filters, AI in video games, inventory optimisation. No specific AI Act obligations, but GDPR and ordinary law still apply.
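A rough first-pass triage of use cases can be expressed as a lookup, with the caveat that real classification means reading article 5 and Annex III, not a dictionary. The examples are drawn from the categories above:

```python
# Illustrative triage only; actual classification follows art. 5 and Annex III.
RISK_EXAMPLES = {
    "social scoring by government": "prohibited",
    "CV screening for recruitment": "high",
    "credit scoring for lending": "high",
    "customer-service chatbot": "limited",
    "spam filter": "minimal",
}

def triage(use_case: str) -> str:
    """First-pass risk label; anything unknown needs a manual assessment."""
    return RISK_EXAMPLES.get(use_case, "unknown - assess manually")
```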
Who must comply with the AI Act?
Four roles:
- Providers: those who develop an AI system, or have one developed, and place it on the market under their own name or trademark. The heaviest role.
- Deployers: organisations that use an AI system in a professional context. This is where most Dutch SMEs sit.
- Importers: businesses that bring an AI system from a third country to the EU market.
- Distributors: businesses in the supply chain that make AI systems available without making or importing them.
Important: even as a deployer you have obligations, especially for high-risk systems (article 26): you must arrange human oversight, keep logs, report incidents, and ensure your staff are AI literate (article 4).
What does AI literacy under article 4 mean in practice?
Article 4 requires providers and deployers to take measures to ensure, to their best extent, that their staff and other persons using AI systems on their behalf have a sufficient level of AI literacy. It must take account of technical knowledge, experience, training, the context of AI use, and the persons affected. In practice: training at role level, not one general course for everyone. A marketing employee using ChatGPT for copy needs a different curriculum from an HR manager running a recruitment screener. Regulators, in the Netherlands the Dutch Data Protection Authority (AP) and the RDI, have signalled they will enforce on demonstrability: documentation, participants, learning objectives, evaluation. We deliver this as a package via /ai-training.
What are the penalties under the AI Act?
Fines are tiered. Up to 35 million euros or 7% of global annual turnover (whichever is higher) for prohibited AI practices (article 5). Up to 15 million euros or 3% for breaches of most other duties, including high-risk system rules, transparency, and GPAI provider duties. Up to 7.5 million euros or 1% for supplying incorrect or misleading information to regulators. SMEs and startups get specific treatment: under article 99(6) each fine is capped at the lower of the percentage and the fixed amount, and proportionality must be considered. In the Netherlands the Dutch DPA (AP) is the lead regulator, alongside sector regulators such as the RDI, AFM, and NZa.
How does the AI Act relate to GDPR and NIS2?
They stack, they do not replace each other. GDPR remains fully in force for anything touching personal data. An AI system that processes personal data must comply with GDPR (legal basis, purpose limitation, data minimisation, data subject rights) and the AI Act (risk classification, documentation, human oversight). NIS2, the cybersecurity directive for essential and important entities, touches AI on technical security and incident reporting. In practice you run integrated risk assessments: a DPIA for GDPR, a fundamental rights impact assessment (FRIA) for high-risk public-sector deployers, and a NIS2 risk analysis, all from the same input. For the NIS2 side we partner with nis2-compliant.com.
What does the Dutch DPA (AP) say about AI?
The Autoriteit Persoonsgegevens (AP) is the lead Dutch market surveillance authority for the AI Act, alongside its existing role as GDPR regulator. The AP runs a dedicated algorithms and AI page with guidelines, FAQ, and sector reports. Key positions: AI literacy is not a formality but a serious obligation, deepfakes without clear labelling are problematic under both GDPR and the AI Act, and automated decision-making with legal effects falls under both article 22 GDPR and article 26 of the AI Act. The AP also publishes a periodic report on algorithmic and AI risks in the Netherlands.
How do you help with AI Act compliance?
Three tracks, in this order. Track 1, fast picture: AI quickscan of half a day to a day, all AI systems mapped, risk classification, top 3 actions on one page. See also the free AI scanner on /ai-scan for a first impression. Track 2, groundwork: build an AI register, roll out AI literacy training via /ai-training, write the AI policy and internal procedures, integrate with existing GDPR work. Track 3, high risk or large scale: for organisations with real high-risk systems or GPAI builders we propose an implementation track with risk management, technical documentation, human oversight, and post-market monitoring. We work with lawyers via /ai-juridisch and, for finance and admin overlap, with accountants via /ai-accountants.
Should I start now or wait until 2 August 2026?
Waiting is an expensive strategy. Three reasons. One: article 4 (AI literacy) and article 5 (prohibited practices) have been in force since 2 February 2025. If you use AI today without any training or any check on prohibited use, you are already in breach. Two: high-risk implementation realistically takes six to twelve months for risk management, documentation, data quality, and audits. To be compliant on 2 August 2026, you start in early 2026, not in July. Three: customers and suppliers are already asking for evidence. A large buyer procuring an AI component wants documentation, even before full enforcement. Starting costs less than rushing. A one-day quickscan gives you immediate visibility and a prioritised list.
Let's get acquainted.
Book a free call or send us a message. We always respond within 24 hours on business days.
Phone / WhatsApp
+31 85 124 95 22
Location
Middelburg, Zeeland
Office hours
Mon – Fri, 09:00 – 17:00