
How to start with AI in your business: a practical step-by-step guide for SMBs

Laurens van Dijk

Founder, DataDream

Most AI projects in SMBs run aground at the same moment: right after the strategy session. A consultant delivers an 80-page report, you read through it, and then everyone goes back to the daily grind. Three months later somebody asks at a Friday drink whether "that AI thing" is still running anywhere. The answer is usually no.

This piece is about how it actually works. No consultancy circus, no strategy without execution, no six-month platform shopping. Instead: a concrete pilot within two weeks, measured in time and money, on one clearly scoped process. Only then do you scale.

This approach isn't for everyone. If you're building a data warehouse for 200 employees you genuinely need a strategy phase. But for some 90% of Dutch SMBs (which, according to CBS figures, make up the overwhelming majority of Dutch companies), this is the path that works.

Step 1: first understand what AI can and cannot do

Before you pick a tool or call a vendor: make sure you broadly understand what the current generation of AI does. Not to become an expert, but to avoid building a chatbot when you needed a spreadsheet, or deploying an agent when an email template would have done the job.

A few minimal things to grasp:

  • The difference between an LLM (language model like ChatGPT or Claude) and an AI agent: an LLM answers a question, an agent executes tasks across multiple steps.
  • What AI does well: summarising text, classifying, rewriting, extracting structured data from unstructured input, generating code, making simple rule-based decisions.
  • What AI doesn't do well: arithmetic without tools, exact recall without a memory layer, guarantees on factual accuracy, replacing existing software where deterministic logic is required.

Read our explainer on what AI is for SMBs if you haven't yet. Forty-five minutes of reading saves you three months of heading in the wrong direction.

Step 2: find your three time-sinks

No abstract exercise, no post-it workshop. Take a week, set a timer on your phone, and three times a day note what you or your team is actually doing. Or even simpler: look at your last 20 working days and ask the three most productive people on your team where their time goes.

You're looking for tasks that meet three criteria:

  1. Digital input and output. A typical good candidate: drafting quotes based on a client PDF. A typical bad candidate: counting physical inventory.
  2. Repetitive, with variation. Not identical (a macro is enough then), but the same pattern. Think of answering emails, rewriting content for different channels, entering invoice data, categorising support tickets.
  3. Quantifiable in time. You should be able to say: "this takes me 20 minutes on average, I do it 30 times a week".

Write down the three candidates with estimated hours per week. Without that number you can't measure later whether it worked. According to the Stanford AI Index Report 2025, the average productivity gain from LLM-based tools on repetitive knowledge work sits between 25% and 40%, with peaks up to 70% on specific writing tasks. So you need a time-sink of at least 4-5 hours per week to make it worth the effort.

Step 3: pick one pilot

One. Not three at once, not "we'll start broad". One. From your three candidates pick the one with the highest time investment and the lowest complexity. Often that's not the sexiest option, and that's exactly right.

Good pilots in the Dutch SMB market that we've seen work over the past two years:

  • Categorising incoming emails and pre-drafting replies in Outlook or Gmail.
  • Rewriting product descriptions for webshops from a supplier feed.
  • Auto-summarising meetings with action points per person.
  • Drafting quotes based on an intake call and a price list.
  • Handling first-line customer service for standard questions (the actual 80% of volume).

What you should not pick as a first pilot: a chatbot on your website that needs to "do everything", a sales prediction model based on data you still need to collect, or an AI agent that takes over "the entire customer process". Those are phase-3 projects, not phase-1.

If you can't decide between candidates: take our AI scan; in 5 minutes it gives you a ranking of where the highest return per hour sits for your business.

Step 4: put two weeks on the clock

This is where most projects still run aground: execution. Not because it's technically hard, but because it needs to feel urgent. So: block two weeks in the calendar, no longer.

Week 1: build.

  • Day 1: lock down the happy path (10 examples of input + desired output).
  • Day 2-3: build the first version. For 80% of pilots in this category you don't need to write any code. ChatGPT with custom instructions, Claude Projects, a Zapier flow or a Make scenario will get you very far. Only when the pilot works and scaling is on the table do you look at custom integrations.
  • Day 4-5: test with the actual user (yourself or a colleague), not with an ideal demo. Note where it breaks.
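For the minority of pilots where a bit of glue code does help, Day 1's happy-path examples translate almost directly into a few-shot prompt. A minimal sketch in Python (the example emails, categories, and function name are hypothetical, not from the article):

```python
# Hypothetical happy-path examples for an email-categorisation pilot:
# each pair is one real input and the output you want the model to give.
HAPPY_PATH = [
    {"input": "Where is my order #1042?", "output": "order_status"},
    {"input": "Can I get a copy of invoice 2024-117?", "output": "invoice_request"},
    {"input": "Your product broke after two days.", "output": "complaint"},
]

def build_prompt(new_email: str) -> str:
    """Assemble a few-shot classification prompt from the examples."""
    lines = ["Classify the incoming email into exactly one category.", ""]
    for ex in HAPPY_PATH:
        lines.append(f"Email: {ex['input']}")
        lines.append(f"Category: {ex['output']}")
        lines.append("")
    lines.append(f"Email: {new_email}")
    lines.append("Category:")
    return "\n".join(lines)

prompt = build_prompt("I never received order #2093.")
```

The resulting string is what you paste into ChatGPT custom instructions, a Claude Project, or a Zapier/Make step; the point is that your 10 locked-down examples are the specification, whichever tool runs them.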

Week 2: polish.

  • Day 6-8: fix the three most common errors. Not every edge case, just the three that cause 80% of the problems.
  • Day 9: deploy it into the real workflow for one person.
  • Day 10: measure (see step 5).

No platform comparisons, no RFPs, no vendor beauty contests. If the pilot works, in phase 2 you'll look at whether it should sit on a more serious platform. The OpenAI documentation and Anthropic's prompt engineering guide are free and good enough for the first version.

Step 5: measure what it delivers in time/money

Without measurement you don't have a pilot, you have a hobby. The measurement doesn't need to be complex, but it does need to be hard.

Three numbers you record before and after:

  1. Time per task. Stopwatch, not estimate. Before the pilot: take 5 representative cases and time how long the old method takes. After the pilot: time the same 5 cases with the new workflow.
  2. Error rate. How many of the 5 outputs need correction? This matters more than time, because a process that's 90% faster but wrong 50% of the time costs you net time.
  3. User satisfaction. Not a formal NPS, just: "would you want to keep working with this tomorrow, yes or no?". If the answer is no, the time savings don't matter.

Translate it into money. A process going from 30 minutes to 8 minutes, run 30 times a week, is 11 hours saved per week. At an internal hourly rate of EUR 60 that's EUR 660 per week, or EUR 34,000 per year for one pilot. Against EUR 15-30 a month in tooling, payback is usually within a week.
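That back-of-the-envelope calculation fits in a few lines; a sketch you can adapt with your own numbers:

```python
# Reproduce the article's ROI calculation for one pilot.
minutes_before = 30     # time per task, old method
minutes_after = 8       # time per task, new workflow
runs_per_week = 30      # how often the task occurs
hourly_rate_eur = 60    # internal hourly rate

hours_saved_per_week = (minutes_before - minutes_after) * runs_per_week / 60
weekly_value_eur = hours_saved_per_week * hourly_rate_eur
annual_value_eur = weekly_value_eur * 52

print(hours_saved_per_week)  # 11.0 hours per week
print(weekly_value_eur)      # 660.0
print(annual_value_eur)      # 34320.0 (the article's "EUR 34,000 per year")
```

Against EUR 15-30 a month in tooling, the annual figure makes the payback period obvious without any further modelling.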

According to a 2024 McKinsey study, 23% of companies starting with AI achieve measurable revenue or cost impact in the first year. The difference between that 23% and the rest is almost always this point: do they measure concretely, or not.

Step 6: document and train your team

Pilot works, numbers check out. Now comes the part 80% of companies skip: making sure it still works in six months when you're on holiday.

Three things that need to happen now:

  1. Write down what the pilot does and doesn't do. Not as an ISO document, but as one A4 page in Notion or Confluence. What's the input, what's the output, where is the human in the loop, when do you escalate to manual.
  2. Train at least two people. One bottleneck isn't a success, it's a single point of failure. Our AI training programmes are built specifically for this, but you can also do it yourself with 2 hours of explanation and a week of shadowing.
  3. Check your obligations under Article 4 of the AI Act. This has been in force since February 2025: anyone working with AI systems in your organisation needs to be sufficiently AI-literate. That sounds heavier than it is: you need to be able to demonstrate that the people using the system understand what it does. Read our AI Act explainer or take the AI Act check to see what applies to your situation.

Documentation and training are often the difference between "fun pilot from last year" and "standard part of how we work".

Step 7: scale or stop

At the end of the pilot you have three options: continue and expand, replace with something else, or stop. None of those three are wrong. What is wrong: staying in the floating middle ground where you're no longer enthusiastic but haven't officially stopped either.

Decide on numbers, not feelings:

  • Time saved >= 25% and high user satisfaction: scale. Add the second task from your time-sink list. Start over at step 3.
  • Time saved between 10% and 25%, mixed satisfaction: two more weeks of polish. No longer. If it's still not there, stop.
  • Time saved <10% or low satisfaction: stop, learn, pick a different pilot. It's not failure, it's information.
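The decision rules above are simple enough to write down explicitly, which also forces you to agree on them before the numbers come in. A sketch (the function name is ours):

```python
def pilot_decision(time_saved_pct: float, satisfied: bool) -> str:
    """Apply the scale/polish/stop rules from step 7.

    time_saved_pct is the measured saving as a percentage, e.g. 27.5;
    satisfied is the user's "keep working with this tomorrow?" answer.
    """
    if time_saved_pct >= 25 and satisfied:
        return "scale"   # add the next task, start over at step 3
    if 10 <= time_saved_pct < 25:
        return "polish"  # two more weeks, then stop if still not there
    return "stop"        # learn, pick a different pilot

print(pilot_decision(30, True))   # scale
print(pilot_decision(18, False))  # polish
print(pilot_decision(6, True))    # stop
```

Note that a large time saving with an unhappy user still falls through to "stop", matching the rule that low satisfaction overrides the numbers.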

One scaling rule: only add a second pilot once the first has run without your attention for at least 4 weeks. Otherwise you build a pile of half-working systems that all demand your time.

What this isn't

For clarity, these are the three patterns this guide explicitly pushes back against:

  • No "AI strategy first". An AI strategy without pilots is a PowerPoint without a customer. Strategy follows from patterns you see in pilots, not the other way around. If a consultant says you first need a two-year vision: thank them and start at step 2 above.
  • No "platform first". Microsoft Copilot, Google Gemini, your own LLM on Azure: those are all fine choices, but only in phase 2. In phase 1 it doesn't matter. A pilot that works on ChatGPT also works on Copilot, and vice versa.
  • No "big bang implementation". Switching the whole company to AI workflows in one go is how you lose everyone. One pilot, one team, one month. Then the next.

Common mistakes

In the pilots we've guided we see the same five mistakes:

  1. Too large a scope. "We want to automate our entire customer service". Start with the top 3 questions that drive 60% of the volume. Only when those work, look further.
  2. No baseline measured. Without a before-measurement you don't know whether you delivered anything. "It feels faster" is not a measurement.
  3. Locking into a platform too early. Signing a yearly contract with an AI vendor before you have a working pilot is almost always more expensive than two months of your own experimentation.
  4. Skipping the user. The person who does the work daily knows better where the friction is than the boardroom. If that person isn't building along, the pilot fails.
  5. No exit criterion. Agreeing in advance when you stop is just as important as agreeing when you scale. Otherwise every half-working pilot keeps drifting along until somebody quietly retires it.

When to bring in outside help

Most of the above you can do yourself. A few moments where external help genuinely makes a difference:

  • You don't have time to focus for two weeks. If the pilot keeps drifting between "tomorrow" and "next week", an external partner is sometimes the only way to enforce a deadline.
  • The pilot touches customer data, financial data or personnel data. You don't need to pay top dollar for a GDPR consultancy here, but you do want someone who can check whether your setup is legally sound.
  • You want to scale to more than 5 processes. At that point thinking about architecture is smart, and an AI strategy phase is a valid investment. But only then, not earlier.
  • Your team isn't AI-literate under Article 4 AI Act. A one-day training programme gets you over the legal threshold and saves you fines and hassle later.

Even when you bring in help: stay the owner of the problem. The external party moves faster; you understand the business better. The combination works; a full handover rarely does.

Conclusion

Starting with AI isn't a matter of strategy, it's a matter of one two-week pilot on one concrete time-sink, with hard numbers before and after. Do that once well, and your second pilot takes half the energy. Do it ten times and AI isn't a project anymore, it's just how you work.

The biggest mistake we see in SMBs isn't that people pick the wrong tool, it's that they spend months picking. Two weeks of concrete action delivers more than six months of comparing.

Want to know which pilot has the highest return per hour for your business? Take the AI scan, 5 minutes, no sales pitch afterwards. Or get in touch if you'd rather spar directly about your three time-sinks.
