Artificial intelligence tools used to record and summarise social work meetings are generating potentially harmful errors, including false claims of suicidal ideation and nonsensical transcriptions, frontline workers have warned.
An eight-month study across 17 councils in England and Scotland found that AI-generated “hallucinations” are appearing in official care records. In one case, a transcription tool incorrectly stated that a client had expressed suicidal thoughts, despite no such discussion having taken place. Other workers reported transcripts containing irrelevant or inaccurate wording, particularly when service users spoke with regional accents.
Dozens of councils have introduced AI note-taking systems to help address staff shortages and reduce administrative burdens. Tools such as Magic Notes and Microsoft Copilot are being used to record and summarise meetings with children and adults receiving care. While the technology has delivered noticeable time savings and allowed some social workers to focus more on direct support, concerns are growing about reliability and oversight.
The study found instances where AI systems altered the tone of reports or inserted language that had not been used, raising fears that inaccurate summaries could influence care decisions. Experts warned that errors in official records could lead to inappropriate safeguarding judgments, missed risks, or professional consequences for staff who fail to detect mistakes.
Training on AI tools appears limited in some cases, and checking practices vary widely. While some social workers spend significant time reviewing transcripts, others admit to conducting only brief checks before uploading content into official systems.
The British Association of Social Workers has called for clearer regulatory guidance on AI use, warning that insufficient oversight may leave practitioners exposed to disciplinary risks. It also stressed the importance of reflective practice, cautioning that reliance on automated summaries could undermine critical thinking in complex cases.
Developers of specialist social work AI tools maintain that their products are designed to produce draft notes rather than final records and include safeguards to detect potential hallucinations. However, researchers argue that stronger risk assessments and clearer standards are needed as AI adoption accelerates across local authorities.
