Who’s Accountable When AI Gets It Wrong?

AI can be incredibly helpful—but what happens when it makes a mistake? Who’s responsible if a tenant receives the wrong letter, or a repairs appointment is missed because of a system error?
⚠️ The Myth of ‘Blame the Machine’
It’s tempting to treat AI as a neutral tool—just another bit of software. But unlike a spreadsheet formula, which gives the same answer every time, AI systems can produce unexpected or incorrect results even when they’re doing exactly what they were built to do.
So when an AI tool sends out a confusing message or flags a complaint incorrectly, we can’t just shrug and say 'the computer did it.' Someone in the organisation is still responsible for how it was used, what data it relied on, and whether it was fit for purpose.
🏘️ Real-World Risks
In social housing, missteps can have serious consequences:
- A chatbot misunderstanding a vulnerable tenant’s message
- A system wrongly downgrading a priority repair
- An AI-generated letter causing anxiety or confusion
These outcomes might seem small, but they can damage trust and lead to complaints, reputational harm, or even legal risk.
🧑‍⚖️ Where Does Accountability Sit?
Ultimately, responsibility lies with the organisation deploying the tool—not the software vendor, and certainly not the algorithm. That means you need good governance around AI use:
- Clear staff guidance and training
- Human oversight for sensitive tasks
- The ability to override or challenge AI decisions (see the sketch after this list)
- Regular review of outcomes for unintended impacts
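To make that less abstract, here is a minimal sketch of what a human-in-the-loop check could look like. It is illustrative only: the AiSuggestion fields, the SENSITIVE_CATEGORIES set, and the confidence threshold are hypothetical assumptions, not a reference to any particular product or supplier.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical values for illustration; a real service would set its own.
SENSITIVE_CATEGORIES = {"vulnerability", "priority_repair", "complaint"}
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class AiSuggestion:
    tenant_id: str
    category: str      # e.g. "priority_repair" or "general_enquiry"
    confidence: float  # the model's own confidence score, 0.0 to 1.0
    draft_text: str    # e.g. an AI-drafted letter or triage note

audit_log: list[dict] = []  # in practice, a proper database table

def route_suggestion(suggestion: AiSuggestion) -> str:
    """Send sensitive or low-confidence suggestions to a person, and log every decision."""
    needs_human = (
        suggestion.category in SENSITIVE_CATEGORIES
        or suggestion.confidence < CONFIDENCE_THRESHOLD
    )
    decision = "human_review" if needs_human else "auto_approved"

    # Recording every decision is what makes regular review of outcomes possible.
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tenant_id": suggestion.tenant_id,
        "category": suggestion.category,
        "confidence": suggestion.confidence,
        "decision": decision,
    })
    return decision

# A repair-priority suggestion is always routed to a named member of staff.
print(route_suggestion(AiSuggestion("T-1024", "priority_repair", 0.92, "Downgrade to routine")))
```

The point is not the code itself but the design choice it embodies: the AI never has the final word on anything sensitive, and every decision leaves a trail that a named owner can review.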
💡 Common Sense AI Takeaway
AI is powerful, but it’s not infallible. Make sure a named person in your organisation owns each AI tool and its outcomes, whether that’s a manager, a data lead, or a service head. Because when AI goes wrong, the accountability doesn’t disappear into the code.
Next up in the Common Sense AI series: “5 Ways to Start Using AI in Social Housing Without Big Budgets”