Ripples Story: “AI’s ethical challenges”
25 Mar 2026 / Nhung Phung
In this article, Nina Karwalska, Strategic Operations & Production Manager at Futures in Film CIC, reflects on the ethical challenges AI brings to the social sector, and why a people-first approach must remain central.
Artificial Intelligence is reshaping every sector of society, from healthcare and education to housing and creative industries. The social sector is no exception. But our work is about people, and that is why we must hold to the highest ethical standards as AI begins to touch how we work. The key is learning as much as we can about the ethical challenges, and about how to mitigate and work around them.
At Futures in Film CIC, where we create access into film and high-end television for underrepresented and faith communities, we have seen how disruptive AI can be. In 2023, writers and actors took strike action across the United States, protesting low pay, unsafe working conditions, and the threat of AI replacing human creativity. The resulting industry-wide shutdown of production caused an estimated economic impact of $5bn in the US alone. The clear lesson is that AI is a tool, and it mustn’t be allowed to replace human imagination and creativity.

The same applies across the social sector. Whether organisations work with homeless people, mentor at-risk youth, or provide community sports programmes, the challenge is not about adopting AI for efficiency; it is about making sure technology never undermines the values that define our work. By recognising the ethical challenges AI raises, we can prepare, adapt, and prevent harm before it occurs.
Research consistently highlights the values that underpin good practice: justice, dignity, solidarity, and accountability (Machado et al., 2020; NASW Standards, 2017). In healthcare, systematic reviews of AI ethics echo these same concerns: fairness, transparency, privacy, and responsibility (Neiva et al., 2023; PLOS Digital Health, 2025). These shape how we interact with people every day.
Key Ethical Challenges
Fairness and bias
AI systems learn from data, and that data often carries the inequalities of the world it comes from. This means AI can replicate and even amplify existing patterns of disadvantage. For the social sector, fairness underpins trust. If technology quietly excludes or misrepresents the very people we aim to serve, the damage to relationships can be lasting.
Reflection: Are the tools we use tested for fairness, or do we assume neutrality where none exists?
Transparency and accountability
Many AI systems produce outcomes without clear explanations. This is a challenge for organisations that pride themselves on openness. If we cannot explain why a decision was influenced by AI, we risk undermining trust. Accountability is equally vital: when harm occurs, responsibility must rest with people, not hidden systems.
Reflection: Can we explain the role AI plays in our decisions, and do we know who is answerable if things go wrong?
Consent and privacy
AI complicates what it means to give informed consent. A single agreement can lead to data being reused or shared in ways that service users never anticipated. In contexts where trust is fragile, this can deter people from engaging at all. Privacy is not only about compliance but about dignity, agency and the confidence that personal stories remain under an individual’s control.
Reflection: Does our approach to consent empower people, or does it prioritise organisational convenience?
Human dignity and autonomy
Efficiency is often presented as AI’s strength, but efficiency is not the same as care. When people are reduced to data points or scores, individuality is lost. The social sector’s strength lies in human relationships, empathy and judgment. If these are displaced by technology, we risk hollowing out the very heart of our work.
Reflection: Are we allowing AI to support human judgment, or letting it redefine what good practice looks like?
Equity of access
AI often requires resources and expertise that smaller organisations may not have. This risks widening the gap between large institutions and grassroots groups, and by extension, the communities they represent. Equity means not only ensuring access to tools, but also ensuring that diverse voices shape how AI is developed and deployed.
Reflection: Are smaller organisations included in decisions about AI, or forced to adapt to systems built without them?
Moving Forward with Awareness
To meet the opportunity as well as the challenge of AI, we must engage critically: questioning bias, accountability, consent, dignity, and equity when adopting new systems. Human oversight must remain central. The values of justice, solidarity, and care must remain our guide.
At Futures in Film CIC, we have learned that disruption is inevitable. AI can support our work, but only if it remains people-focused and is used as a tool. The question is not whether AI will be used; it is whether it will serve our values.
Read the full latest issue of Rank Ripples magazine