Scenario 4 – Gap in user understanding
An agency is building an AI chatbot to route clients to the right digital service. A PoC team is formed from a small internal technical team and a vendor partner. Citizens, frontline staff and diversity advocates are not consulted in the early design. During the pilot phase, before business deployment, the resulting chatbot fails to recognise diverse accents and does not provide easy-to-use pathways for users with low digital literacy.
The chatbot uses an LLM fine-tuned on the agency's FAQs. When a prospective end-user asks during the pilot stage, "why do I need to provide this document to support my application for assistance?", the chatbot gives a vague answer rather than citing the underlying policy or legislation. Citizens could feel they are being given a "black box" answer without any clear traceability. Advocacy groups start to raise concerns about fairness and accountability.
Inclusive co-design is essential for AI solutions to meet real-world needs; excluding end-users leads to poor accessibility and usability. Early and ongoing engagement with stakeholders helps identify and address issues before deployment, reducing the risk of public criticism and reputational damage. Ultimately, embedding user-centric design principles throughout development ensures that AI solutions are usable by, and deliver value to, all intended users, demonstrating that technical success alone is not enough for meaningful impact.
What worked well
- The pilot highlighted accessibility and explainability issues before the system reached the public.
- The shortcomings prompted a shift toward user-centric design principles and stronger governance oversight.
Lessons learned
- Lack of early engagement with citizens, frontline staff and diverse user groups resulted in a chatbot unable to recognise diverse accents or support users with low digital literacy.
- The model produced vague, non-traceable answers, leading to perceptions of a "black box" system.
- Failure to embed explainability from the start led to concerns from advocacy groups around fairness, transparency and accountability.
- Technical success alone is insufficient; human-centred design and policy traceability must be embedded from the outset.