
Needs, not tech - a human-centred approach to AI design


A recent MIT study found that 95 percent of AI pilots fail to deliver meaningful business results. That's huge.

The technology itself isn't the problem. Instead, the failures come from how organisations approach it: launching pilots without clear objectives, investing in areas unlikely to generate impact, and failing to integrate AI into workflows in ways that make sense for people. In other words, most AI projects stumble because they start with the technology, not the human need.

Our experience in the AI design space has illuminated a more effective path forward. We begin with people, not technology. The first step is to understand the progress users are trying to make in their own context, how they measure success, and which parts of the process they want to control themselves. It's a set of simple analysis and mapping activities that anyone can get involved with. Once the picture is clear, it becomes easier to see where AI can genuinely enhance an experience and where it risks getting in the way. Only then do we ask if AI is the right tool and, if so, how it should be introduced.

A recent project shows how this works in practice. The challenge was to design a workplace appreciation portal where employees could recognise one another. The obvious temptation was to automate recognition messages with AI. But when we examined what gives appreciation its meaning (authenticity, effort, and connection), it was clear that automation would strip away the value of the act. Instead, we used AI in a supporting role: offering prompts, suggesting phrasing, and helping people articulate their thoughts. Employees still wrote in their own voice, but with the confidence and ease that smart assistance provided. The result was a tool that kept appreciation authentic while reducing friction.

This example highlights a broader principle: AI should amplify human intent, not replace it. Applied with care, it can streamline tasks, provide guidance, and scale that support across an organisation. But when it tries to own moments of trust or emotion, it risks hollowing out the very experience it is meant to improve.

That is why human-centred design is so important for AI. By starting with users, mapping the progress they seek, and testing solutions with both people and business stakeholders, we validate not just what works technically but also what feels right and genuine. This approach addresses the very issues the MIT study uncovered: it creates clarity of purpose, aligns investment with real needs, and ensures technology adds value rather than becoming an expensive distraction.

For us, human-centred AI is not about resisting technology. It is about putting it in service of people. Done well, it leads to solutions that are efficient, authentic, and meaningful. Or put simply, solutions that strengthen trust and engagement in the workplace and beyond.
