
AI + Research: Saying "Yes, and..."

Written by Rich Brophy | November 4, 2024 at 10:05 PM


There's a rule in improvised comedy that performers always take an opportunity to say "Yes, and..." It's a mechanism for opening up new possibilities and seeing where things can go. It's also a great rule of thumb for innovation and evolution in business.

In mid-2023, we had a "Yes, but..." problem. New AI tools were coming online at pace. Our researchers and designers would look at what these tools could do, but the discussion often turned to how our existing practices were superior. It was a fair response, and it showed critical analysis was happening, but it didn't open up new possibilities. About 12 months ago, we decided to change tack. We agreed to stop saying "Yes, but..." and start saying "Yes, and...".

The result has been profound. Experimentation with AI is now widespread across our agency. We have operationalised AI into our processes, developed shared recipes for success, and leaned into the emergence of a Human + AI design practice. We've refined our AI policy and embraced collaboration to adapt our design and research processes. We have gathered a suite of tools and features that can expand the scope, scale and certainty of our research for clients. We've also defined a framework for engaging with AI-fuelled products, platforms, and features—ensuring we use these tools smartly, safely, and effectively.

Recently, the Voluntary AI Safety Standard was launched here in Australia. This has given us strong foundations for thinking about our use of AI. Our framework adds another layer - ensuring we are using the right tool for the right job in the right way.

There is a broad range of interesting AI products to support, enhance, and sometimes even replace researchers – each tool has its strengths and shortcomings. Knowing what to use, when, and why is critical. We see these tools as additional components in our toolkit—to be engaged as needed rather than used by default.

Our Framework for AI in Research: The Four Lenses

There are four lenses we apply when we’re considering if, why and how we will support our research with AI-driven tools: Context, Content, Constraints and Considerations.

The four lenses for appraising opportunities to roll AI into our practice.

1. Context

Context is about understanding the specific circumstances and objectives that shape the research—what we are trying to achieve, who we are helping, and the environment we operate in.

For instance, if the goal is to speed up survey analysis for a non-profit project, an AI summarisation tool might be a great fit. But if the context involves sensitive information (like patient data), we’d only consider tools with robust privacy controls, and we’d make sure our own data practices are strong.

The clarity we get from thinking about the context helps us identify the right approach and AI tools, especially when balancing human and AI contributions.


2. Content

Content refers to the information we work with—whether we are creating new data, processing existing data, or both. It involves understanding the quality and source of this data, which influences how we use AI effectively whilst maintaining integrity.

For example, discovery research may include a review of analytics, where analysing a large bank of structured data makes predictive models a useful tool. However, discovery also involves generating new data from users (traditionally through interviews or surveys) - in this instance we’d consider conversational AI research tools to add depth and scale to the data generated by our in-person interviews.

The content lens is a pretty straightforward one: What do we have? What do we need? What’s the right kind of AI tool for the job?


3. Constraints

Constraints are the limitations or requirements - such as ethical guidelines, privacy concerns, and legal obligations - that our clients or participants bring to the table. These define how we can responsibly use AI while safeguarding their data.

Constraints might come in the form of an organisation’s privacy policy, its compliance obligations, or broader ethical guidelines.

Starting with a sharp view of the constraints we need to work within makes auditing AI tools fast and effective - and much better than a tool-first approach.


4. Considerations

Considerations involve broader reflections that guide decision-making, including evaluating risks, ensuring humans stay in the loop, and balancing AI-driven perspectives with human judgement.

A company might have a strong AI agenda or an existing toolset they want us to use. Timelines or budgets may preclude face-to-face interviews. Some things simply can’t be achieved with AI tools just yet - like reading emotional responses, good old-fashioned empathy, or simply making sure people feel heard and seen as part of the design process. These are all worthy and impactful considerations.

At the moment, considerations emerge from round-table discussions with the project team and clients, and we gather more as we go. It’s a broad bucket, so anything not covered by the first three lenses pops up here.
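
To make the four lenses concrete, here’s a minimal sketch of how an appraisal might be captured as a structured record before any tool is shortlisted. It’s illustrative only - the LensAppraisal class, its fields, and the requires_human_lead heuristic are assumptions for the sake of the example, not a description of our actual tooling.

```python
# A minimal sketch (Python 3.9+) of recording a four-lens appraisal.
# All names here are illustrative, not part of a real internal system.
from dataclasses import dataclass, field


@dataclass
class LensAppraisal:
    """One appraisal of a proposed AI use, framed by the four lenses."""
    context: str  # what we are trying to achieve, and for whom
    content: str  # the data we have versus the data we need to create
    constraints: list[str] = field(default_factory=list)      # privacy, legal, ethical limits
    considerations: list[str] = field(default_factory=list)   # everything else shaping the call

    def requires_human_lead(self) -> bool:
        # A crude heuristic: flag appraisals where emotional nuance or
        # sensitivity is central, since that sits outside what current
        # AI tools do well.
        signals = ("emotion", "empathy", "sensitive")
        text = f"{self.context} {self.content}".lower()
        return any(s in text for s in signals)


# Example drawn from the Context lens discussion above:
appraisal = LensAppraisal(
    context="Speed up survey analysis for a non-profit project",
    content="Existing structured survey responses; no new data generation needed",
    constraints=["client privacy policy", "no identifiable data leaves the organisation"],
    considerations=["tight timeline", "client has an existing AI toolset"],
)
print(appraisal.requires_human_lead())  # -> False
```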


Putting it into action

We have built a database of AI research tools, with their key attributes and features recorded. Once we have our lenses in place, we can quickly find the right tool (if any) for the job. It’s a growing list, so if you’ve heard of anything interesting, let us know and we will review it!
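
To give a feel for how that lens-first lookup can work, here’s a small hypothetical sketch: a few invented tool records filtered against constraint flags before anything else is weighed. The tool names, attributes, and the shortlist function are all made up for illustration - they’re not entries from our real database.

```python
# Invented tool records - in practice these would live in the tool database.
TOOLS = [
    {"name": "SummariserA", "handles": "survey analysis",
     "privacy_controls": True, "on_premise": False},
    {"name": "InterviewBotB", "handles": "conversational research",
     "privacy_controls": False, "on_premise": False},
    {"name": "PredictModelC", "handles": "structured analytics",
     "privacy_controls": True, "on_premise": True},
]


def shortlist(task: str, must_have: list[str]) -> list[str]:
    """Return tools matching the task that satisfy every constraint flag."""
    return [
        t["name"] for t in TOOLS
        if task in t["handles"] and all(t.get(flag) for flag in must_have)
    ]


# Sensitive data scenario: only tools with robust privacy controls that
# keep data on-premise make the shortlist.
print(shortlist("analytics", ["privacy_controls", "on_premise"]))
# -> ['PredictModelC']
```

Starting from constraints keeps the audit fast: any tool that fails a hard requirement drops out before we spend time comparing features.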


Moving Forward

This framework is, and always will be, a work in progress. Our approach to AI and research is grounded in being a smart and safe pair of hands. We see AI as an opportunity to enhance what we do, not replace it. By asking the right questions and staying true to our framework, we ensure we're delivering quality research that blends the best of human insight and AI capabilities.

We're excited to keep evolving—to keep saying "Yes, and..." as we push the boundaries of what research can achieve.