
Let’s be honest. If your team uses ChatGPT to draft fundraising emails or Claude to polish your next newsletter (guilty 🙋‍♂️), the EU AI Act officially applies to you.
Before you roll your eyes — “not another regulation” — hear me out. This one isn’t nearly as bad as it sounds. The truth is, getting ahead of it now is a lot easier than scrambling in 2026 when compliance becomes mandatory.
So let’s talk about what this really means for mission-driven organisations like ours — in plain English, no legal fluff, and no fearmongering.
What Is the EU AI Act, Really?
Think of the EU AI Act as GDPR’s cousin, only for artificial intelligence instead of data privacy. It became law in August 2024, and organisations have until August 2026 to comply. That gives us about eighteen months — enough time to get our house in order if we start now.
The law introduces a risk-based system for AI, which sounds complicated but isn’t. It’s basically the EU saying: “Not all AI is equal.” The higher the risk of harm or manipulation, the tighter the rules.
At one end, you’ve got things like social scoring or manipulative AI systems — the kind of stuff that belongs in a dystopian TV series. Those are banned outright. At the other end, you’ve got AI that helps with internal productivity — tools that summarise meeting notes, draft reports, or improve accessibility. Those are considered minimal risk, and you don’t have to do anything special.
Most nonprofits, social enterprises, and community organisations will fall somewhere in the middle — what’s called “limited risk.” This includes things like chatbots, AI-generated emails, and donor segmentation tools. You can still use all of them. You just need to be transparent about it.
Does It Actually Apply to Small Nonprofits?
Almost certainly, yes. Even if you’re small, even if you’re not actively targeting Europe.
If someone in the EU can visit your website, interact with your chatbot, or receive your newsletter — you’re covered. If you work with EU-based partners, donors, or beneficiaries, you’re covered too.
In short, if your organisation operates online or collaborates internationally in any way, you’re in scope.
But don’t panic. For most of us, compliance comes down to a few simple, common-sense steps.
What You Actually Need To Do
The first step is awareness. Take half an hour and jot down which AI tools your team is using. ChatGPT, Canva’s Magic Write, HubSpot’s AI assistant, whatever’s part of your workflow. Just listing them out gives you a clear picture of where AI is showing up in your organisation.
Next, be transparent. If you’re creating content with AI, include a simple note like “This content was created with AI assistance and reviewed by our team.” If your website has a chatbot, make sure people know it’s a chatbot. A short message like “Hi, I’m an AI assistant here to help — want to speak to a human?” does the job.
That’s it for transparency — no fancy templates required.
Then document it. Open a new Google Doc and outline four things: what tools you use, why you use them, how humans still review or oversee their output, and who’s responsible for keeping an eye on this going forward. It doesn’t have to be long or formal — a couple of pages is plenty.
By doing this, you’ve covered most of what a typical limited-risk nonprofit needs for compliance.
What Happens If You Don’t
Yes, the law includes fines of up to €35 million or 7% of global annual turnover, whichever is higher, but let’s be real. Regulators are not coming after small charities and social enterprises the same way they would a Silicon Valley giant. They’re required to take your size, resources, and intent into account.
The bigger risk isn’t financial, it’s reputational. If your AI tool makes a mistake — say, it generates an insensitive fundraising message or misclassifies a supporter — and you have no idea how it happened, that’s where you lose trust.
And trust is the one thing none of us can afford to lose.
Why This Actually Matters
The EU AI Act isn’t about paperwork — it’s about accountability and values. It’s asking the right questions:
Are we using AI in a way that aligns with our mission?
Are we being honest with supporters about how we create and communicate?
Do we understand what our AI tools are actually doing?
And most importantly, are humans still making the key decisions?
For most purpose-driven organisations, the answer to these questions is already yes — we care deeply about doing things ethically and transparently. The EU AI Act simply gives us a framework to prove it.
In that sense, it’s not a barrier. It’s an opportunity to show that we’re leading by example.
A Simple Action Plan
If you want to make progress this week, keep it simple. On Monday, ask your team what AI tools they’re using — get everything down in one place. Midweek, update your chatbot and content workflows to include transparency notes. By Friday, draft your two-page AI usage document and assign one person to review it quarterly.
That’s it. Three small steps, and you’ll already be ahead of most organisations.
The Bigger Picture
Here’s why this matters beyond compliance: the way we use AI says a lot about who we are as organisations. It shows whether we value transparency, trust, and human judgment over convenience.
When we get this right, we don’t just protect ourselves from risk — we build credibility with our donors, attract better talent, and set a higher standard for the sector.
AI is here to stay. It’s helping teams save time, make better decisions, and focus on what really matters. The EU AI Act just ensures we’re doing it responsibly — and that’s something we should want anyway.
Final Thoughts
Don’t overthink it. This isn’t about bureaucracy; it’s about building trust.
If you stay transparent, document what you’re doing, and keep humans in control of the key decisions, you’re already most of the way there.
The organisations that embrace this early won’t just be compliant — they’ll be the ones shaping what ethical, purpose-driven AI looks like in practice.

