The Computer Said No – Understanding Bias in AI

We like to think of AI as neutral—after all, it’s just code, right? But the truth is, AI can reflect the same biases found in the data it’s trained on. And in a sector like social housing, that matters.

⚖️ What Is AI Bias?

AI bias happens when the data used to train an AI system contains assumptions, gaps, or inequalities—often without anyone realising. The AI learns those patterns and replicates them, even if they’re unfair or outdated.

For example, if a chatbot has only ever been trained on formal complaint language, it might ignore or misroute a tenant who writes casually or emotionally. That’s bias in action—not because anyone intended it, but because the system wasn’t designed with enough variety in mind.
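To make that failure mode concrete, here's a minimal, purely illustrative sketch in Python. It imagines a triage rule built only from formal phrases; the keyword list, example messages, and routing labels are all hypothetical, not taken from any real housing system.

```python
# Illustrative only: a toy complaint router "tuned" on formal language.
# Every phrase, keyword and team label below is hypothetical.

FORMAL_COMPLAINT_KEYWORDS = {
    "formal complaint", "wish to lodge", "unsatisfactory", "escalate",
}

def route_message(message: str) -> str:
    """Route a tenant message using keywords seen only in formal complaints."""
    text = message.lower()
    if any(keyword in text for keyword in FORMAL_COMPLAINT_KEYWORDS):
        return "complaints team"
    return "general enquiries"  # everything else falls through here

# A formal message is routed correctly...
print(route_message("I wish to lodge a formal complaint about the damp in my flat."))
# -> complaints team

# ...but an informal, equally serious message is not.
print(route_message("the damp in my flat is making my kid's asthma worse, sort it please"))
# -> general enquiries
```

The rule isn't malicious; it simply never saw informal wording, so it quietly treats it as lower priority. Real AI systems are far more complex than a keyword list, but the failure mode is the same.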

🏘️ Bias Risks in Housing

Here are a few areas where AI bias can creep in:

- **Tenant screening tools** may favour applicants who resemble the demographics over-represented in historical data

- **Repairs triage systems** might prioritise reports that use particular keywords, so the same fault described differently can wait longer

- **Chatbots** could misunderstand informal or regional language

This isn’t about being ‘woke’—it’s about ensuring fairness. AI systems should serve *everyone*, not just the data majority.

🔍 Spotting and Reducing Bias

You don’t have to be a data scientist to help reduce bias. Ask questions like:

- Where did this AI get its training data?

- Does it recognise plain English, or only formal phrases?

- Who tested it before it was rolled out?

- Is it producing unequal outcomes—faster help for some, slower for others?
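That last question can often be answered with very basic analysis rather than anything sophisticated. The sketch below uses entirely made-up figures to show one way of comparing average time-to-resolution across groups of tenants; the group names and numbers are hypothetical.

```python
# Illustrative only: checking whether an AI-assisted triage process
# resolves repairs faster for some groups than for others.
# The groups and the days-to-resolution figures are made up.
from statistics import mean

days_to_resolution = {
    "reported via online form": [3, 4, 2, 5, 3],
    "reported by phone":        [9, 7, 8, 10, 6],
}

for group, days in days_to_resolution.items():
    print(f"{group}: average {mean(days):.1f} days")

# If the averages differ sharply and there is no operational reason for it,
# that is a prompt to look more closely at how the system prioritises cases.
```

Nothing here needs a data scientist; the point is simply to put outcomes for different groups side by side and ask why they differ.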

💡 Common Sense AI Takeaway

AI is only as fair as the data it’s built on. If you’re introducing AI into housing services, make sure it’s tested with real, diverse, and local examples. Bias isn’t always obvious—but it can be challenged and improved.


Next up in the Common Sense AI series: “Chatbots in Housing – Are They Just Annoying, or Actually Useful?”
