How to use AI for the common good

07 December 2025

Using AI can be hugely beneficial to charities. It can save time and money, and even help with service delivery. But it comes with risks. Here’s how to use AI for the common good.

Artificial intelligence (AI) has enormous potential as a force for good. However, there are risks involved, as well as ethical and environmental considerations. In this article, we explore the steps you can take to use AI for the common good.

 

 

What is generative AI?

 

Generative AI is a type of AI that can create content based on patterns that it’s learned from existing data. The content can include text, images, music, and video. Examples of generative AI platforms are ChatGPT, Sora, and MusicLM. 

 

 

How charities use generative AI

 

According to the Charity Digital Skills Report 2024, 61% of the more than 630 charities surveyed use AI in their day-to-day operations. The most common uses are content creation (33%), admin (32%), and drafting reports and documents (28%).

 

Charity communicators, according to the CharityComms Salary and Organisational Culture Survey 2024, are using generative AI for copywriting (39%), transcribing events (21%), design (10%), and creating images (6%).

 

Charities’ use of generative AI has increased since 2023, and it will no doubt keep growing year on year as more people learn how to make the most of it.

 

In just a year, the proportion of charities using AI for copywriting rose by 14 percentage points, from 25% in 2023 to 39% in 2024, according to the same CharityComms survey.

 

 

Five steps to using AI for the common good

 

Ready to start using AI, develop your AI skills, or build a culture of learning at your charity? Here are five steps to using AI responsibly and ethically.

 

 

Understand the risks and how to mitigate them

 

As with any technology, there are risks to using AI. It’s important to understand what the risks are and to put processes in place to mitigate them. For example, generative AI can produce false, misleading, or nonsensical information, known as ‘hallucinations’.

One way to mitigate this risk is to ask for the source of the information in your prompt. That way you can check whether it’s a legitimate source of information or not.
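To make that concrete, here’s a minimal sketch of a source-requesting prompt using the OpenAI Python library as one example. The model name and prompt wording are illustrative assumptions, not recommendations.

```python
# A minimal sketch of asking for sources in a prompt - illustrative
# only. Assumes the OpenAI Python library (v1+) and an OPENAI_API_KEY
# environment variable; the model and wording are example choices.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model, swap in whatever you use
    messages=[
        {
            "role": "user",
            "content": (
                "What proportion of UK charities use AI day to day? "
                "Cite the source of every figure you give so I can "
                "verify it independently."
            ),
        }
    ],
)

print(response.choices[0].message.content)
# Still check that each cited source exists and says what the model
# claims - models can fabricate citations too.
```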

 

Another risk is that AI learns from data created by humans, which can reflect real-life inequalities and stereotypes. One way to mitigate this bias is to ensure that your data is diverse and represents a wide range of demographics and cultures.
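As a simple illustration, here’s one way to sanity-check representation in a dataset before using it. The records and field names are hypothetical.

```python
# A tiny sketch of checking demographic representation in a dataset -
# the records and field names below are hypothetical examples.
from collections import Counter

records = [
    {"text": "...", "region": "North West", "age_group": "18-24"},
    {"text": "...", "region": "London", "age_group": "65+"},
    {"text": "...", "region": "London", "age_group": "18-24"},
    # ...the rest of your dataset...
]

for field in ("region", "age_group"):
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    print(f"{field}:")
    for value, n in counts.most_common():
        print(f"  {value}: {n / total:.0%}")

# Groups that barely appear, or don't appear at all, are a warning
# sign that the data may not represent the communities you serve.
```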

 

 

Apply human oversight

 

The use of AI has helped charities save time and money and become more efficient. However, it’s imperative to always have human oversight, as AI is not perfect. A great example of this is Caddy, the AI-powered assistant for Citizens Advice advisors, which is used for service delivery. Caddy helps advisors find information quickly, using trusted sources and Citizens Advice’s own knowledge base.

 

The team that built Caddy quickly agreed that it would have a “human in the loop” validation system, to ensure that the advice advisors give clients, whether by phone or in person, is accurate and reliable. So all responses are approved by a supervisor before they can be shared with the client.
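The pattern itself is easy to picture. Here’s a minimal sketch of a hypothetical draft-and-approve flow, illustrative only and not Caddy’s actual implementation: nothing reaches the client until a supervisor signs it off.

```python
# A minimal sketch of "human in the loop" validation - illustrative
# only, not Caddy's actual implementation. Every AI draft is held
# until a supervisor explicitly approves it.
from dataclasses import dataclass, field

@dataclass
class Draft:
    question: str
    ai_answer: str
    approved: bool = False

@dataclass
class ReviewQueue:
    pending: list[Draft] = field(default_factory=list)

    def submit(self, question: str, ai_answer: str) -> Draft:
        # AI output goes into the queue, never straight to the client.
        draft = Draft(question, ai_answer)
        self.pending.append(draft)
        return draft

    def approve(self, draft: Draft) -> str:
        # Only a supervisor's explicit sign-off releases the answer.
        draft.approved = True
        self.pending.remove(draft)
        return draft.ai_answer

queue = ReviewQueue()
draft = queue.submit(
    "Can I get help with my energy bills?",
    "You may be eligible for support with your supplier...",
)
# ...a supervisor reads the draft and checks it against trusted sources...
answer = queue.approve(draft)  # only now can it be shared with the client
```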

 

It’s always best to use photos of people in real-life situations in your comms. But if this is not possible, AI can create images for you, and it’s vital that you check the accuracy of what it creates. Amnesty International was criticised for using AI-generated images of the 2021 protests in Colombia: one showed a woman with the Colombian flag draped around her shoulders, but the flag’s colours were in the wrong order, and the police escorting her away wore outdated uniforms.

 

Using AI-generated images that are inaccurate or misleading poses a reputational risk.

 

 

Be mindful of digital exclusion 

 

A lack of digital skills, both within charities and within the communities they serve, is a concern – and AI may be widening that gap. 

 

A 2023 report by Connect Humanity, a global study of more than 7,500 civil society organisations, found that 39% of respondents said that a lack of digital skills was a barrier in their organisation.

 

And 50% said that the absence of digital skills was a top challenge for their service users and communities. Of those surveyed, 60% said that their organisation didn’t provide any digital literacy training.

 

Whilst AI is being used more by charities, we must be mindful of the potential for digital exclusion – both within and outside our organisations.

 

 

Champion the use of ethical AI platforms

 

Not all AI platforms are created equal. As charities, it’s important to champion the use of ethical AI platforms: those that prioritise responsible use, fairness, and transparency.

 

One such ethical AI platform is Hugging Face, a machine-learning and data science platform and community. It’s open source, which means that anyone can access its code, and developers and researchers can discover and share AI models, collaborate with others, and access tools to build and experiment with AI.
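To give a flavour of what that openness looks like in practice, here’s a minimal sketch that runs an openly shared model from the Hugging Face Hub using the transformers library. The task and model are illustrative choices, not recommendations.

```python
# A minimal sketch of running an openly shared model from the Hugging
# Face Hub with the transformers library. The task and model are
# illustrative choices.
from transformers import pipeline

# Downloads an open sentiment-analysis model from the Hub on first use.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

result = classifier("Volunteering at the food bank was so rewarding!")
print(result)
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```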

 

 

Advocate for the responsible use of AI

 

The charity sector has a role to play in ensuring that AI is used responsibly, inclusively, and collaboratively. Play your part by joining the Charity AI Taskforce, which so far has 50 UK charities and social good organisations signed up to promote responsible and ethical use of AI – both inside and outside the sector. 
