Ripples Story: “Could AI be a Catch-22?”

25 Mar 2026   /   Nhung Phung

In this article, Kye Lockwood, CEO of DataKind UK, explores how charities can approach generative AI responsibly, recognising there is no one-size-fits-all answer.

At DataKind UK, we’ve been debating the question, ‘How can charities use generative AI tools responsibly?’ so much that we recently hosted a webinar to explore this further. It’s a challenge for a sector rooted in values, ethics, and justice that, like many of the challenges we address, doesn’t have an easy one-size-fits-all solution.

You might expect, as the ‘data experts,’ that we’d have all the answers. However, we believe that defining ‘responsible AI’ must be rooted in your own organisation’s values, not in a universal checklist. Just as we support charities with their impact, we don’t claim to be the experts on what constitutes impact (that’s for you and your beneficiaries to decide). Similarly, we can’t be prescriptive about what responsible AI looks like for every charity.

Before we go any further, let's define what we're talking about when we talk about AI. Here, I'm focusing on generative AI: artificial intelligence that creates new content, such as text, images, or even videos. Think of ChatGPT writing emails, DALL-E creating images, or tools like Claude helping with reports. Unlike traditional software that follows set rules, these systems learn patterns from massive amounts of data and use that knowledge to generate something new.

These models work by predicting what should come next. When you ask ChatGPT a question, it's essentially making incredibly sophisticated guesses about what the most likely response would be, based on patterns it learned during training. It's remarkably good at this, but it's still making educated guesses and, crucially, it is not accessing some database of truth. Indeed, what makes these tools so convincing can also be their downfall: AI can confidently present completely false information. Earlier this year, the BBC found that 51% of AI-generated news summaries had issues.
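For readers curious about what "predicting what should come next" means in practice, here is a deliberately tiny sketch (not how ChatGPT or any real product is built, which involves vastly more data and far more sophisticated models). It counts which words follow which in a toy text, then "generates" by picking the most frequent follower, an educated guess rather than a lookup in a database of truth:

```python
from collections import Counter, defaultdict

# A toy training text: real systems learn from billions of documents.
corpus = "the cat sat on the mat the cat sat on the sofa".split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (seen twice, vs "mat" and "sofa" once each)
```

The point of the toy: the model has no idea what a cat is. It only knows that "cat" usually follows "the" in the text it saw, which is why confident-sounding output can still be wrong.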

Be aware of the potential costs

These systems can perpetuate existing biases, struggle with mathematical calculations, and create what’s increasingly known as ‘slop’: AI summaries of dubious usefulness filling previously helpful platforms. Then there are the significant environmental and social costs that many charities (especially those focused on sustainability or social justice) need to confront when considering GenAI use. According to MIT, “researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple web search,” and the infrastructure required for AI is incredibly resource-intensive.*

Coupled with this is the human cost: behind every "intelligent" system are human workers, often in the Global South, labelling data and reviewing AI outputs in jobs that frequently involve disturbing content and are typically low-paid with poor working conditions. The question for your charity becomes stark: Does using these tools align with your values around environmental protection and fair labour practices? It is this tension between utility and values that sits at the core of responsible AI adoption.

*Estimates vary, and a major difficulty is the lack of transparency and data, though public pressure is slowly changing that.

Problem before Tech

At DataKind UK, our most fundamental approach is one that echoes throughout all our work: start with the problem you are trying to solve, not the technology. We've seen charities deploy generative AI for the more typical use cases, such as note-taking or content generation for fundraising, reports, or marketing. But we've also seen some fantastically innovative uses: Citizens Advice's "Caddy" copilot, which helps advisors quickly find information (with crucial human oversight); AI being used to combat disinformation; and organisations using AI to analyse thousands of survey responses, drawing out themes in moments that would previously have taken weeks to identify manually.

A Guiding Framework

Unfortunately, we can’t hand you a ready-made checklist for responsible AI. However, the folks over at mySociety have developed an excellent AI framework that can be applied to charities. It covers six domains:

  • Practical questions: Are we solving a real problem, or working backwards from a solution? Is this the best way to address our challenge, or are we just excited by new technology?
  • Societal questions: What are the best and worst-case scenarios of our AI use? Is this shift consistent with our strategy and ethical framework?
  • Legal and ethical questions: What’s the nature of the organisation producing these tools? Is the training data publicly available? Are we comfortable with the intellectual property implications?
  • Reputational questions: Does this tool touch on areas requiring high accuracy or trust? Could it create potential for bad-faith attacks on our services?
  • Infrastructural questions: What are the long-term costs of deploying this tool? Do we have the skills to manage it sustainably?
  • Environmental questions: Are we tracking the ongoing environmental impact? Are there more efficient alternatives?

Responsible tech use means using it strategically where it genuinely adds value, while advocating for more sustainable and ethical AI development, such as championing open-source tools, using ‘frugal AI’, and interrogating AI supply chains. Ask challenging questions: Do these tools align with your values? Are there less resource-intensive alternatives?

Remember, your goal isn’t to be cutting-edge, it’s to better serve your mission in a way that doesn’t undermine the very causes you’re working to support. Sometimes that means using AI responsibly. Sometimes it means choosing not to use it at all.

Read the latest issue of Rank Ripples magazine in full.
