Product discovery has always been information-heavy work. In any given cycle you might be sitting on interview recordings, a survey with hundreds of responses, a backlog of support tickets, and a dashboard full of analytics. The job is to make sense of all of it, find the signal, and turn it into something your team can act on.
The volume of data available to product teams has grown faster than the time available to process it. Something has to give.
AI doesn't change what good discovery looks like. You still need to talk to users, ask the right questions, and make difficult judgment calls about what matters. But it can meaningfully compress the time between raw input and structured understanding, if you use it for the right things.
In practice, AI earns its place in exactly two areas: summarising findings and generating new insights from analytics. Everything else is either marginal or actively risky.
Summarising findings
The most straightforward use case is also the most valuable.
When you're working with high volumes of qualitative input (interview notes, open-ended survey responses, support tickets), the bottleneck is rarely gathering the data. It's processing it. Reading through fifty support tickets looking for a pattern takes time. Doing it across multiple sources, trying to hold everything in your head at once, is where things start to slip. You miss connections. You unconsciously weight the most recent thing you read. You run out of time and stop before you're done.
AI handles this well. Feed it a batch of interview summaries and ask it to identify recurring themes, and it will do so faster and more consistently than you can manually. For support tickets in particular, where volume can be much higher, it can compress hours of reading into minutes.
A few things help in practice. The prompt matters more than you'd expect. "Summarise these findings" gives you something generic, while "identify the three most common frustrations, and for each one find a direct quote that best illustrates it" gives you something you can actually use. Specificity in the prompt translates directly to usefulness in the output.
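To make that concrete, here is a minimal sketch of the second, specific style of prompt using the OpenAI Python SDK. The file name, ticket format, and model name are placeholders, not a prescription; the same pattern works with any capable model:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical input: a plain-text export with one ticket per line.
with open("support_tickets.txt") as f:
    tickets = f.read()

# A specific prompt, not "summarise these findings".
prompt = (
    "Below are support tickets, one per line.\n"
    "Identify the three most common frustrations. For each one, "
    "give it a short name, estimate how many tickets mention it, "
    "and quote one ticket verbatim that best illustrates it.\n\n"
    + tickets
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Asking for counts and verbatim quotes also gives you something you can spot-check against the originals, which matters for the next point.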
Read the source material anyway. AI summaries are good at breadth but can miss the outlier that changes everything: the one user who describes the problem in a way that reframes how you think about it. Skim the originals even after you've read the summary.
Use AI to triangulate, not conclude. An AI summary of your interviews compared against an AI summary of your support tickets is a fast way to check whether what users say matches what they actually complain about. The contradictions are where the most interesting questions live.
Generating new insights from analytics
The second area is less obvious but potentially more powerful: using AI to interrogate analytics data and surface patterns you weren't looking for.
Most analytics workflows are hypothesis-driven. You have a question, you build a query, you get an answer. This works, but it means you only ever find what you go looking for. Your blind spots stay blind.
AI changes this. Give it access to your analytics data and ask open-ended questions. It will often surface correlations or segments worth investigating that weren't on your radar, not because it's smarter than you, but because it doesn't share your assumptions about what's worth examining.
The key phrase is "worth investigating". AI-generated insights are hypotheses, not conclusions. Treat them as prompts for further analysis, not findings you can act on directly.
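As a sketch of what this can look like in practice: rather than handing over raw event data, you can pass the model aggregate views and ask what looks worth a closer look. Everything here, the file name, the column names, the model, is a hypothetical stand-in for whatever your analytics export actually contains:

```python
import pandas as pd
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical analytics export: one row per user-session.
df = pd.read_csv("analytics_export.csv")

# Hand the model aggregate views rather than raw rows: cheaper,
# and it avoids sending user-level data to a third party.
stats = df.describe(include="all").to_string()
segments = df.groupby("plan_tier")["sessions_per_week"].mean().to_string()

prompt = (
    "Here are summary statistics from our product analytics:\n\n"
    f"{stats}\n\nAverage weekly sessions by plan tier:\n{segments}\n\n"
    "What patterns or segments look worth investigating? "
    "Frame each one as a hypothesis, not a conclusion, and say what "
    "follow-up analysis would confirm or refute it."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Note the last line of the prompt does the hedging for you: asking the model to frame its output as hypotheses with suggested follow-ups makes it much harder to mistake the result for a finding.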
What AI can't do
This is the part that's easy to underestimate, especially once you've seen how much time it can save.
The parts of discovery that matter most are judgment calls, and AI is not reliable at those. Deciding which problem is worth solving is a judgment call. Knowing when a user is telling you what they think you want to hear versus what's actually true is a judgment call. Understanding why a metric is moving (not just that it is) requires context that lives in your head and your team's heads, not in a dataset.
There's a subtler risk: AI can make discovery feel done when it isn't. A clean summary with clear themes and a few supporting quotes looks like thorough research. It can be, but it can also be a confident-sounding artefact built on shallow input. The quality of what comes out is still determined by the quality of what went in: the questions you asked in interviews, the way you structured the survey, the hypotheses you brought to the analytics.
This is the trap. The speed and polish of AI output can create a false sense of completeness. Discovery done in half the time isn't good discovery if the underlying thinking was also halved.
Fitting it into your workflow
The workflow itself doesn't change much. Collect your sources as normal: interview notes, survey responses, support tickets, analytics exports. Don't change how you gather; AI works with whatever format you already have.
Where AI earns its keep is in synthesis. Summarise each source independently first, then ask AI to compare across sources. Where do the themes align? Where do they contradict? The contradictions are usually the most productive places to dig.
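A minimal sketch of that comparison step, assuming you've already saved one per-source summary to disk in the first pass (the file names and model are placeholders):

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical per-source summaries produced in the first pass.
sources = {
    "interviews": Path("interview_summary.txt").read_text(),
    "support tickets": Path("ticket_summary.txt").read_text(),
    "survey": Path("survey_summary.txt").read_text(),
}

blocks = "\n\n".join(f"## {name}\n{text}" for name, text in sources.items())

prompt = (
    "Below are independent summaries of three discovery sources.\n\n"
    f"{blocks}\n\n"
    "Compare them. Where do the themes align across sources? "
    "Where do they contradict each other? For each contradiction, "
    "suggest one question we should dig into next."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Summarising each source independently before comparing is deliberate: it stops a loud theme in one source from colouring how the model reads the others.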
After that, treat the output as a starting point. Push back on anything that feels off. Add your own observations. Identify what needs validation before you act on it. The AI has done the volume work; you still own the analysis.
What changes is the ratio: less time on processing, more time on thinking. That's the trade worth making. Just make sure the thinking is actually happening.