Dashboards that tell you what you did not know
Connect data sources, build custom dashboards, layer AI analysis on top of your own data. For SMBs and scale-ups that want more from their numbers than a monthly Excel export.
Most businesses do not have a data shortage; they have data chaos. Customer information in the CRM, revenue in the accounting system, ad data in Meta and Google, inventory in the ERP, support tickets in a separate tool, and last week's export in an Excel file on someone's desktop. Every month someone builds a report by hand that nobody really trusts. Decisions are made on gut feel, because pulling the numbers together is too much work. This can be better, and it does not have to be expensive or complicated.
We connect your data sources, clean up what needs cleaning, and build dashboards that are correct every morning without anyone having to export anything. On top of that we add AI analysis where it makes sense: anomaly detection on transactions, customer segmentation by behaviour, forecasts on cashflow or inventory, AI summaries that land in your inbox every Monday saying "what changed this week, and why". AI on top of data is only valuable if the data is good, so we are honest about the cleanup work that sometimes has to come first.
The technical stack is deliberately open and boring. BigQuery, Postgres, or DuckDB as the warehouse, dbt for modelling, Fivetran or Airbyte for connectors to your existing tools, and Looker Studio, Metabase, or Hex for dashboards. For AI analysis we use Claude or GPT via API, with a natural-language interface so colleagues without SQL knowledge can also ask questions of the data. Code and dashboards live in your repository, not with us. No vendor lock-in, no black box, and you can always maintain it yourself.
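As an impression, a minimal sketch of such a natural-language layer, assuming the Anthropic Python SDK and DuckDB; the schema, table names, and model string are illustrative, not a fixed implementation:

```python
# Minimal sketch: translate a plain-language question into SQL and run
# it against the warehouse. Schema and model name are illustrative.
import anthropic
import duckdb

SCHEMA = """Tables:
  orders(order_id, customer_id, order_date, region, amount_eur)
  customers(customer_id, segment, signup_date)"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(question: str):
    msg = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=500,
        system=f"Answer with one DuckDB SQL query, no prose.\n{SCHEMA}",
        messages=[{"role": "user", "content": question}],
    )
    sql = msg.content[0].text.strip().removeprefix("```sql").removesuffix("```")
    return duckdb.connect("warehouse.duckdb").execute(sql).df()

print(ask("How many orders did we get last month, per region?"))
```

In practice this runs on read-only credentials against the clean modelling layer, so a generated query can never change data.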
We work for SMBs and scale-ups across the Netherlands: e-commerce that wants to know which ad euros actually pay off, manufacturers with inventory questions, service providers wanting to forecast churn, accountants automating reporting for their clients. GDPR compliance is the default: pseudonymisation where possible, no customer data to external AI without a DPA, and a processing register you can show in an audit. You can start small, with one dashboard on one source, and grow into a full data layer when it delivers value.
What you get
Connecting data sources
Your data is scattered: CRM, ERP, accounting (Twinfield, Exact, AFAS), ad platforms, internal databases, marketing tools. Each source has its own export, format, and update cadence. The only way to combine them is by hand.
We connect your sources via Fivetran, Airbyte, or custom connectors into one warehouse (BigQuery, Postgres, or DuckDB). Daily or hourly refresh, an audit trail per source, and modelling so your marketing data and sales data finally line up in the same table. For accountants with bookkeeping data see /ai-accountants.
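For sources Fivetran or Airbyte do not cover, a custom connector can stay small. A sketch under assumed names (the endpoint, field names, and table are hypothetical):

```python
# Hypothetical custom connector: pull orders from a REST API and land
# them in DuckDB with an audit timestamp per row.
import duckdb
import requests

def sync_orders(api_url: str, db_path: str = "warehouse.duckdb") -> int:
    rows = requests.get(f"{api_url}/orders", timeout=30).json()
    con = duckdb.connect(db_path)
    con.execute("""
        CREATE TABLE IF NOT EXISTS raw_orders (
            order_id    VARCHAR PRIMARY KEY,
            customer_id VARCHAR,
            order_date  DATE,
            amount_eur  DECIMAL(12, 2),
            _synced_at  TIMESTAMP DEFAULT current_timestamp  -- audit trail
        )
    """)
    con.executemany(
        "INSERT OR REPLACE INTO raw_orders (order_id, customer_id, order_date, amount_eur) "
        "VALUES (?, ?, ?, ?)",
        [(r["id"], r["customer_id"], r["date"], r["amount"]) for r in rows],
    )
    return len(rows)
```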
Custom dashboards built for you
Vendor default dashboards show what they think is important, not what you want to know. Excel exports are manual work every month and go stale fast. You need something that fits how your team actually makes decisions.
Dashboards in Looker Studio, Metabase, Hex, or your existing BI tool (Power BI, Tableau). Your metrics, your segmentation, your drill-downs. Cross-system reports where ad spend and CRM sales sit side by side. No one-size-fits-all template. For e-commerce performance & customer behaviour see /ai-e-commerce.
AI analysis & forecasting
A dashboard shows what happened. But you want to know what is going to happen, and which outliers you would not spot yourself. Anomaly detection, customer segmentation, cashflow and inventory forecasting, sentiment analysis on reviews.
AI layer on top of your clean data. Cashflow forecast based on invoicing and payment patterns. Customer segmentation via behavioural clustering, not just demographics. Anomaly detection that runs nightly and alerts on deviations. Sentiment on support tickets and reviews. With back-tests so you know how well it works.
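What the nightly anomaly check looks like in miniature, assuming a daily_revenue table in the warehouse; the rolling window and threshold are illustrative defaults, not tuned values:

```python
# Sketch of a nightly anomaly check: flag days that deviate strongly
# from the recent baseline. Table name, window, and threshold are
# illustrative.
import duckdb

def find_anomalies(db_path: str = "warehouse.duckdb", z_threshold: float = 3.0):
    df = duckdb.connect(db_path).execute(
        "SELECT day, revenue_eur FROM daily_revenue ORDER BY day"
    ).df()
    roll = df["revenue_eur"].rolling(28)  # 28-day rolling baseline
    df["z"] = (df["revenue_eur"] - roll.mean()) / roll.std()
    return df[df["z"].abs() > z_threshold]

anomalies = find_anomalies()
if not anomalies.empty:
    print(anomalies)  # in production: an alert to Slack or email
```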
Automated reporting
Someone builds a report by hand every Monday or every month. It costs hours, is error-prone, and depends on one person. If they are ill or leave, it is gone.
Weekly or monthly reports that automatically land in your inbox or Slack. Including AI-generated executive summary: "what changed this week, and why". Filtered per recipient, so sales sees different numbers than finance. Ready to forward, without anyone needing to look at it first.
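As a sketch, the Monday report in miniature; the webhook URL, KPI table, and model name are assumptions for illustration:

```python
# Sketch of the weekly report: pull the KPI table, let the model write
# the "what changed this week, and why" summary, post it to Slack.
import anthropic
import duckdb
import requests

WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # hypothetical

def weekly_summary() -> None:
    df = duckdb.connect("warehouse.duckdb").execute(
        "SELECT metric, this_week, last_week FROM weekly_kpis"
    ).df()
    msg = anthropic.Anthropic().messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=400,
        messages=[{
            "role": "user",
            "content": "Write a short executive summary: what changed this "
                       f"week, and why.\n\n{df.to_string(index=False)}",
        }],
    )
    requests.post(WEBHOOK, json={"text": msg.content[0].text}, timeout=10)
```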
Data-quality audits & cleanup
Before you build anything on your data you want to know if it is correct. Duplicates, empty fields, broken formats, outdated categories, definitions that have shifted over the years. A dashboard on dirty data is worse than no dashboard.
Data-quality audit per source: which fields are reliable, where is the noise, which definitions need to be pinned down. Dimensional modelling with dbt so the logic lives in one place. Automated dbt tests that check on every refresh whether the data meets the rules. Want to see first what data you have and are allowed to use? Start with the AI Quickscan at /ai-strategie. For education (learning outcomes / school data) see /ai-onderwijs; for market analysis in real estate, /ai-makelaars.
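The kind of rules dbt tests encode declaratively (uniqueness, not-null, referential integrity), shown here as a standalone Python sketch with illustrative table names:

```python
# Illustrative data-quality checks on the warehouse. In our projects
# these live as dbt tests; this standalone version shows the idea.
import duckdb

def run_quality_checks(db_path: str = "warehouse.duckdb") -> list[str]:
    con = duckdb.connect(db_path)
    failures = []
    # Uniqueness: no duplicate customer IDs.
    dupes = con.execute(
        "SELECT COUNT(*) FROM (SELECT customer_id FROM customers "
        "GROUP BY customer_id HAVING COUNT(*) > 1)"
    ).fetchone()[0]
    if dupes:
        failures.append(f"customers.customer_id: {dupes} duplicates")
    # Not-null: every order needs an amount.
    nulls = con.execute(
        "SELECT COUNT(*) FROM raw_orders WHERE amount_eur IS NULL"
    ).fetchone()[0]
    if nulls:
        failures.append(f"raw_orders.amount_eur: {nulls} empty values")
    # Referential integrity: every order points at a known customer.
    orphans = con.execute(
        "SELECT COUNT(*) FROM raw_orders o "
        "LEFT JOIN customers c USING (customer_id) "
        "WHERE c.customer_id IS NULL"
    ).fetchone()[0]
    if orphans:
        failures.append(f"raw_orders: {orphans} orders without a known customer")
    return failures
```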
What it delivers
- One warehouse where all your sources come together, refreshed daily
- Dashboards that are correct every morning without manual exports
- AI analysis on top of your own data: forecasting, segmentation, anomaly detection
- Automated reports to email or Slack, with AI summary
- Data-quality tests that catch deviations before you see them in a chart
- Open stack without vendor lock-in, code and dashboards in your repository
- Works alongside your existing BI (Power BI, Tableau) or replaces it fully
- GDPR-compliant: pseudonymisation, retention periods, DPAs, an up-to-date processing register
- Cross-system insight: ad spend and sales revenue in one report
- Transferable so you can hire your own internal data person later
Frequently asked questions
Our data is a mess. Can you clean it up first?
Yes, and that is often where we start. A dashboard built on dirty data produces nice charts that mean nothing, or worse: numbers that contradict each other. We start with a data-quality audit: which sources do you have, which fields are reliable, where are the duplicates, empty records, broken formats, outdated definitions. Then we set up a lightweight modelling layer (usually with dbt) where the logic lives once: what is an active customer, how do we count revenue, which records belong to which campaign. Only on top of that clean layer do we build dashboards and AI analysis. It sounds like a detour but is cheaper than building twice on data you cannot trust. We are honest: sometimes this groundwork is bigger than the dashboard itself, and we will say so.
Which tools do you use and is there vendor lock-in?
We work with BigQuery, Postgres, or DuckDB as the warehouse, dbt for modelling, Fivetran or Airbyte for connectors (or custom scripts when that is cheaper), and Looker Studio, Metabase, or Hex for dashboards. For AI analysis we use Claude or GPT via API. This is deliberately an open stack: your SQL models run on any warehouse, dbt is open source, dashboards can be moved to a tool you can manage yourself. We always deliver documentation and code in your repository, not in a platform that locks you in. If you switch vendor later or want to take it over, you can. No black box.
How does GDPR work for customer behaviour analysis?
Customer segmentation and behaviour analysis fall under GDPR the moment you process personal data, even if it is only an email address or customer ID. Important: the legal basis must be clear. For analysis on your own customer data this is usually legitimate interest or contract performance, provided you can justify it and the impact on the customer is limited. We help make that assessment and document it. For heavier profiling with automated decision-making, stricter rules apply (Article 22 GDPR). Practical measures we set up by default: pseudonymisation where possible, retention periods on datasets, no customer data sent to external AI vendors without a DPA, and a processing register update so you can defend it in an audit. For edge cases, the AI Quickscan at /ai-strategie first checks what data you are allowed to use.
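Pseudonymisation in miniature: a keyed hash replaces the customer ID before data leaves the warehouse, so segments and joins keep working while the real identifier never reaches an external vendor. A sketch (the key handling is simplified here):

```python
# Sketch of pseudonymisation with a keyed hash (HMAC-SHA256). The same
# input always yields the same token, so grouping per customer still
# works, but the token is not reversible without the key.
import hashlib
import hmac

SECRET_KEY = b"store-this-in-a-secrets-manager"  # simplified for the sketch

def pseudonymise(customer_id: str) -> str:
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()[:16]

print(pseudonymise("customer-00412"))  # stable token, no personal data
```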
Can we maintain dashboards ourselves or are we dependent?
Self-maintenance is the starting point. We build dashboards in tools you can open, adjust, and share without us, like Looker Studio (free, Google account) or Metabase (open source, your own server). The SQL models underneath sit in dbt and are version-controlled in your GitHub or GitLab, so any colleague with SQL knowledge can read and edit them. We document every metric: what counts, what does not, and why. If you have or hire an internal data person, they can take over. We are happy to stay involved for heavier AI analysis and new sources, but that is a choice, not a dependency.
What is the difference between a data warehouse and a data lake?
A data warehouse is structured storage for clean, modelled data, typically SQL tables that dashboards and reports run on. Examples: BigQuery, Snowflake, Postgres, DuckDB. A data lake is rawer storage, often on cheap object storage (S3, GCS), where data lands in its original form before being modelled. For most SMBs, a data warehouse is enough. A data lake only becomes useful at high volumes of unstructured data, like log files, images, or sensor data you might want to use later. We almost always recommend starting with a simple warehouse (BigQuery is cheap until you genuinely need more) and only expanding to a lake architecture when the use cases justify it. Data warehousing principles apply either way: clean modelling, traceable definitions, tested pipelines.
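One tool can show the difference. A DuckDB sketch (the bucket path and table names are examples, not a recommended setup): lake-style means querying raw files where they land; warehouse-style means querying a modelled, tested table.

```python
# Lake-style versus warehouse-style in one DuckDB session.
import duckdb

con = duckdb.connect("warehouse.duckdb")
con.execute("INSTALL httpfs; LOAD httpfs;")  # needed for object storage

# Lake-style: read raw files in their original form, schema inferred on the fly.
raw_count = con.execute(
    "SELECT COUNT(*) FROM read_parquet('s3://example-bucket/events/*.parquet')"
).fetchone()[0]

# Warehouse-style: a modelled table with pinned, tested definitions.
monthly = con.execute(
    "SELECT month, SUM(revenue_eur) AS revenue FROM fct_revenue GROUP BY month"
).df()
```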
Does this work with our existing BI (Power BI, Tableau)?
Yes. We often work with Looker Studio, Metabase, or Hex because it is faster and cheaper for SMBs, but if you have already invested in Power BI or Tableau we simply build on top of that. The warehouse and dbt models work the same; only the visualisation layer differs. What we often see: the dashboard work in Power BI is fine, but the data underneath is a tangle of Excel exports and manual steps. That is where the value is, not in the visualisation. We then tackle the pipeline and keep Power BI as the presentation layer. Same for Tableau, Qlik, or Pyramid.
Can you put AI analysis on top of existing data?
Yes, and that is often where most value sits. If your data is already clean in a warehouse, we can build on top: anomaly detection (which transactions deviate from the pattern), customer segmentation by behaviour (clustering with machine learning), forecasts (cashflow, inventory, churn), sentiment analysis on reviews or support tickets, and natural-language interfaces ("ask your data how many orders we had last month from Brabant"). We connect to your existing tables, do not add new systems unless we have to, and return output to dashboards you already know or in automated reports to email or Slack.
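Behavioural segmentation in miniature, assuming scikit-learn on top of an existing orders table; the features (recency, frequency, spend) and the cluster count are illustrative starting points, not the final model:

```python
# Sketch of behavioural clustering on an existing orders table.
# Feature choice and n_clusters are illustrative starting points.
import duckdb
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = duckdb.connect("warehouse.duckdb").execute("""
    SELECT customer_id,
           date_diff('day', MAX(order_date), current_date) AS recency_days,
           COUNT(*)                                        AS order_count,
           SUM(amount_eur)                                 AS total_spend
    FROM raw_orders
    GROUP BY customer_id
""").df()

features = StandardScaler().fit_transform(
    df[["recency_days", "order_count", "total_spend"]]
)
df["segment"] = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(features)
```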
How do we know the insights are correct?
Three checks we build in by default. One: every number in a dashboard has a definition and a traceable query. You can always drill down to the underlying rows to verify. Two: dbt has tests (uniqueness, not-null, referential integrity, business rules) that run on every refresh. If the data deviates we get an alert before you see it in the dashboard. Three: for AI analysis (forecasts, segmentations) we run a back-test on historical data and show how well the output matches what actually happened. No blind trust in a model. If the back-test is poor, we say so. A good dashboard tells you something you did not know, but it has to be correct.
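The back-test idea in miniature: hold out recent history, forecast it from what came before, and score against what actually happened. A sketch with a naive seasonal baseline (the table and the 13-week holdout are illustrative):

```python
# Sketch of a back-test: hold out the last 13 weeks, "forecast" them
# with a naive seasonal baseline, score with MAPE. Any real model has
# to beat this baseline before we trust it.
import duckdb
import numpy as np

df = duckdb.connect("warehouse.duckdb").execute(
    "SELECT week, revenue_eur FROM weekly_revenue ORDER BY week"
).df()

train, test = df[:-13], df[-13:]
forecast = train["revenue_eur"].tail(13).to_numpy()  # repeat last 13 weeks
actual = test["revenue_eur"].to_numpy()

mape = np.mean(np.abs((actual - forecast) / actual)) * 100
print(f"Back-test MAPE: {mape:.1f}%")
```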
Let's get acquainted.
Book a free call or send us a message. We always respond within 24 hours on business days.
Phone / WhatsApp
+31 85 124 95 22
Location
Middelburg, Zeeland
Office hours
Mon – Fri, 09:00 – 17:00