How the UK public feels about social workers using Notes
Trust in new technology, especially in sectors like social care, needs to be earned. That’s why we took part in Nesta’s AI Social Readiness pilot to measure public confidence in Notes. In this blog, we share five key themes from the research, and what it tells us about building trust in AI in social care.
Social care in the UK is under serious strain. Rising demand, an ageing population and inflation are putting more pressure on already stretched services.
For social workers, this means more paperwork, higher caseloads, and less quality time with the people they support. So, it’s no surprise that burnout, stress and turnover rates are increasing.
Technology has the potential to help. AI tools can reduce admin and give social workers time back to focus on delivering high quality, human care. But when it comes to AI in public services, people are understandably cautious.
In the social care sector, where decisions affect people’s lives, trust in new technology has to be earned. That’s why we chose to take part in Nesta’s AI Social Readiness pilot and open up Notes to public scrutiny.
Considering only 40% of the UK trusts the public sector to use AI responsibly, we went into the research prepared for some tough questions. But following workshops with 137 members of the public and people who access social care, 83% said they felt positive about social workers using Notes.
We also got to hear the public’s ideas for how we can mitigate risks and further build public trust in our tools, which is directly informing our product development. In this blog, we share five key themes from the research, and what they tell us about trust and the role of AI in social care.
AI in social care needs human oversight
After learning how Notes works and the safeguards around it, 83% of participants said they felt positive about social workers using the tool. But they were clear that how the tool is used matters.
People liked the fact that Notes is limited to transcription and summarisation, and doesn’t make decisions about care. They also valued that every summary must be reviewed, edited and approved by a social worker before it’s saved.
As one participant put it:
“It’s all great to have AI to make things simpler, but it should still have a person to check. There needs to be that one person to make the final yes or no.”
Participants also spoke about the importance of empathy, context and emotional understanding in social care conversations. These are things AI can’t fully capture, and shouldn’t try to.
The takeaway: When AI is used in social care, human oversight must be built in by design. Being transparent about what AI does and doesn’t do is essential to earning public trust.
Saving time only matters if it improves care
Participants strongly valued the benefits of Notes. Many felt it could save time, improve documentation quality, and support social workers’ wellbeing, all helping them to provide better care.
But people also wanted to know what would happen to that time. Some worried that efficiency gains could lead to larger caseloads, rather than easing pressure on already stretched teams.
Participants wanted to see the time reinvested in ways that would genuinely improve care, like giving social workers more time with people and investing in the quality of support.
The takeaway: For the public, the value of AI in social care isn’t measured in minutes saved, but in human outcomes improved.
Trust is built through visible safeguards
After learning about the safeguards we’ve built into Notes, 72% felt comfortable with the level of risk involved.
Many recognised that it would be impossible to eliminate all risk in any technology, especially in complex public services like social care. What mattered most was how those risks were managed.
As one participant put it:
"You can't remove all risks entirely. What's good is that there's clearly a lot of work being done to mitigate or even eliminate the risks."
We were encouraged to see that concerns about the risks of Notes were much lower than concerns about the risks of AI tools in general, and we think this reflects the safeguards and design choices we’ve put in place across the product.
Participants said they felt reassured by the steps we’ve taken to reduce harm, protect data, and ensure human oversight.
The takeaway: Public trust in AI grows through transparency and practical safeguards people can see and understand.
Accuracy is crucial
Accuracy was the biggest concern raised by participants. In social care, records don’t just describe what happened, they shape future decisions and influence people’s access to support. So small errors can have real consequences.
As one person put it:
“Let's say AI, being AI, doesn't capture the information right – I think accuracy is a big issue."
People raised thoughtful questions about transcription errors, particularly for those with strong accents or second-language English. They also worried about missed nuance or context, and the risk of social workers over-relying on the tool and failing to spot mistakes.
That’s why participants placed such importance on human review and sign-off. They valued the fact that Notes requires social workers to check, edit and approve every summary before it’s saved.
The takeaway: If AI is used in social care, accuracy must come first, and human judgement must always have the final say.
AI won’t fix a broken system, but it can help
Participants didn’t hold back when talking about the social care system. Only 13% said they were satisfied with how it works today, and many described it as “broken” and in need of investment.
They spoke openly about long waiting times, growing demand, and the pressure social workers are under. Some questioned whether funding should focus on fixing systemic issues rather than introducing new technology.
But these concerns didn’t cancel out support for Notes. 86% of participants said Notes would benefit social care as a whole. People held two things true at once: social care needs reform and investment, and responsibly designed tools can still make a difference for staff working within today’s constraints.
The takeaway: AI can’t fix the system, but it can support social workers working within it.
What these findings tell us about trust in AI
The findings from Nesta’s research tell a clear story. The UK public supports the use of AI in social care when it is:
- Designed responsibly
- Clear about its role, and what it can and can’t do
- Built with visible, practical safeguards
- Used alongside human judgement, not instead of it
- Focused on real outcomes, with time savings reinvested into better quality care
If you’d like to explore the findings and recommendations, you can download the full report here.
Our reflections
We really valued taking part in this process. The small-group deliberation sessions were well designed, giving people space to learn, ask questions, and reflect together. That care showed up in the thoughtfulness and quality of the insights.
The process gave us a deeper understanding of how members of the public feel about the use of AI in care settings. We also got to hear ideas for how we can reduce risks and build more trust in our tools. These insights are directly shaping how we build and improve our product.
Most of all, this process reinforced our belief that decisions about AI should be shaped with the people they affect. We’d encourage any organisation building AI for public services to take a similar approach.
