🧠 When AI Gets It Wrong: Understanding “Hallucinations” in Social Housing

AI tools like ChatGPT are unlocking new possibilities in social housing, from speeding up report writing to streamlining tenant communications and diagnostics. Providers are increasingly turning to these technologies to improve efficiency and support their teams. But with these benefits comes a risk that all organisations should understand: AI hallucinations.

What Are AI Hallucinations?

AI models such as ChatGPT generate text by predicting likely sequences of words, drawing on extensive datasets. While this often produces articulate and accurate content, these models do not "know" facts in the traditional sense. Sometimes they generate information that sounds entirely plausible but is simply incorrect, a phenomenon known in the industry as an AI hallucination.

This might mean:

  • Inventing a regulation or policy reference,
  • Generating a statistic that isn’t found in any authoritative dataset,
  • Citing a document or source that does not exist.

Importantly, these outputs are usually delivered confidently, making them harder to spot, especially if you are under time pressure.

Why Does It Matter in Social Housing?

Accuracy is fundamental to our sector. Errors in tenant communications, compliance reporting, board briefings, or draft policies can cause reputational harm, create regulatory issues, or even affect resident wellbeing.

For example, in recent tests using ChatGPT to summarise public performance data, the responses were polished and professional, yet included references to quartile rankings and trends that simply were not present in the data. These were not just simple misunderstandings; they were confidently delivered errors that could go undetected without careful review.

While more tailored prompts can sometimes improve reliability, the core issue is that AI tools don’t naturally signal when they’re making things up. Mistakes can pass through undetected if proper oversight isn't in place, a risk that is amplified in sectors where decisions have real-world implications.
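For teams experimenting with this, the sketch below shows what a more "grounded" prompt might look like. It is a minimal illustration, assuming Python with the openai client library (v1+) and an API key in the environment; the model name, example figures, and instruction wording are illustrative assumptions, not a recommended configuration.

```python
# Minimal sketch of a "grounded" summarisation prompt.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# Illustrative figures only: the sole data the model is allowed to use.
performance_data = """
Repairs completed within target: 87.2%
Gas safety compliance: 99.6%
Complaints responded to within timescale: 91.4%
"""

system_prompt = (
    "You are summarising housing performance data for a board report. "
    "Use ONLY the figures provided in the data below. "
    "Do not add rankings, quartiles, trends, or comparisons that are not "
    "explicitly stated. If something is not in the data, say 'not stated in "
    "the data provided'."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model could be substituted
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Data:\n{performance_data}\n\nSummarise this in two sentences."},
    ],
)

print(response.choices[0].message.content)
```

Even with instructions like these, the output still needs checking against the source figures: grounding reduces the risk of hallucination, it does not eliminate it.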

A New Kind of Risk—and Emerging Protections

Recognising these new challenges, some insurers, including Lloyd’s of London, have begun offering dedicated cover for AI output errors, such as hallucinations that result in legal, financial, or regulatory harm. However, these insurance products are still emerging and may not be widely available or suitable for all scenarios. Coverage terms are likely to change as the sector, and associated risks, evolve.

It’s worth noting that responsible use and robust internal safeguards remain your primary protection against AI-related risk. Insurance should be seen as a backstop, not a replacement for due diligence.

How Should Housing Providers Respond?

  • Treat AI as an assistant, not an authority. Human expertise and sector knowledge are essential for reviewing AI outputs.
  • Implement robust oversight. Don’t use AI "out of the box" for critical tasks; test, monitor, and validate its outputs thoroughly (one simple validation check is sketched after this list).
  • Craft clear prompts and review the results. Quality input helps, but never forgo manual checks.
  • Upskill teams. Foster digital literacy across your organisation to build awareness about both AI’s potential and its limitations.
  • Formalise guidance. Document when and how AI should be used and ensure every process using AI has human review before decisions are made.
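To illustrate what "validate its outputs" can mean in practice, here is a minimal sketch in Python (3.9+): it pulls every numeric figure out of an AI-generated summary and flags any that do not appear in the source data. The function names and example figures are hypothetical, and a check like this supplements human review rather than replacing it.

```python
import re

def extract_numbers(text: str) -> set[str]:
    """Collect every numeric figure (e.g. '87.2', '99.6') mentioned in a piece of text."""
    return set(re.findall(r"\d+(?:\.\d+)?", text))

def flag_unsupported_figures(summary: str, source_data: str) -> set[str]:
    """Return any figure quoted in the AI summary that does not appear in the source data."""
    return extract_numbers(summary) - extract_numbers(source_data)

# Illustrative figures only.
source_data = "Repairs within target: 87.2%. Gas safety compliance: 99.6%."
ai_summary = (
    "Repairs performance was 87.2%, placing the provider in the top quartile "
    "(75th percentile), while gas safety compliance reached 99.6%."
)

unsupported = flag_unsupported_figures(ai_summary, source_data)
if unsupported:
    # Here '75' is flagged: the quartile ranking was never in the source data.
    print(f"Figures not found in the source data, check before publishing: {sorted(unsupported)}")
```

A check of this kind only catches numeric fabrications; invented policy references or citations still need review by someone who knows the source material.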

A Note on Evolving Best Practice

Best practice in applying AI is not static: technology, regulation, and sector expectations are all in flux. Guidance will continue to evolve as regulators, industry bodies, and professional associations learn more about both the opportunities and pitfalls of these tools. Providers should regularly review their policies and remain alert to new sector recommendations.

Final Thought

AI is transforming our sector. But adopting it safely and responsibly means understanding where it excels and where it can fall short. Hallucinations are not theoretical; they are already shaping the debate on trust, compliance, and risk in social housing. The most effective path is an informed, balanced approach: leveraging AI’s value while ensuring strong governance and continuous learning.

Before adopting any new AI tool, ask: Do we have the checks in place to know when it gets it wrong? If the answer isn’t a confident yes, now is the time to strengthen your safeguards.
