The AI era's most underrated product skill: knowing your users
User understanding is the core competency of product management in the age of AI
I’ve spent my career trying to understand why products fail, specifically the decision that set the failure in motion. The answer, more often than I expected, is simple: the team didn’t really know their users.
They had personas. They had surveys. They had dashboards full of behavioral data. What they lacked was an accurate, living model of the person on the other side of the screen: what that person was actually trying to accomplish, what they believed about the product before they even opened it, and what made them close it and never come back.
This piece is my attempt to lay out the evidence for something I’ve come to treat as foundational: user understanding is the mechanism through which durable products get built. When it’s missing, everything else, the roadmap, the experimentation, the retention work, rests on a shaky foundation. That’s always been true. In the age of AI, it’s more consequential than ever. Teams can build faster than at any point in the history of software, and the real risk now is moving quickly in the wrong direction, accumulating expensive mistakes at a speed that would have been impossible a few years ago. Building at warp speed is new; the research on why teams drift from their users, and what to do about it, has been accumulating for decades. Failure analysis, behavioral research, neuroscience, and business performance data all point in the same direction. I’ll walk through what they show.
1. Why products fail
CB Insights analyzed post-mortems across 431 VC-backed companies that have shut down since 2023. Running out of capital topped the list at 70%, but as CB Insights noted, that figure is where these stories end; the more telling causes sit underneath it. Of the 385 companies for which underlying failure reasons could be identified, 43% cited poor product-market fit as a primary cause, ranking above bad timing (29%), unsustainable unit economics (19%), and competition (6%). The capital dried up because the product never earned its place in users’ lives.1
The persistence of this finding across multiple cohorts suggests it reflects something structural. “No market need” is often read as a market research failure. The more precise diagnosis is a user understanding failure. The companies in this category typically had a clearly defined market segment. What they lacked was accurate knowledge of what people in that segment actually needed, how they currently solved the problem, and whether the proposed solution represented a meaningful improvement over their existing behavior. They operated from a model of their users that was too abstract to be useful and too static to survive contact with reality.
The pattern is consistent: teams build from internal assumptions, validate those assumptions against other internally generated artifacts (roadmaps, design specs, internal demos), and ship products that are coherent within their own logic but misaligned with the reality of use.
2. The limits of current discovery and research methods
Behavioral data and surveys are the primary research instruments available to most product teams, and both have real limitations that compound when teams treat them as the whole picture.
Quantitative data describes behavior without explaining it. A drop in activation rate tells you something broke somewhere in a flow. It can’t tell you which part of the experience caused the problem, what the user was trying to accomplish, or what expectation they brought to that moment that went unmet. Funnel analysis narrows the search space; figuring out what actually happened requires getting closer to the user.
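To make that concrete, here’s a minimal sketch of what funnel analysis actually computes, in Python. Everything in it is hypothetical: the step names, the events, and the helper itself are invented for illustration, not taken from any real analytics tool.

    from collections import Counter

    def funnel_conversion(events, steps):
        # events: (user_id, step) pairs from a hypothetical analytics export
        seen = {}
        for user_id, step in events:
            seen.setdefault(user_id, set()).add(step)
        # count a user at a step only if they also completed every prior step
        users_at_step = Counter()
        for completed in seen.values():
            for i, step in enumerate(steps):
                if all(s in completed for s in steps[: i + 1]):
                    users_at_step[step] += 1
        total = users_at_step[steps[0]] or 1  # avoid division by zero
        return {step: users_at_step[step] / total for step in steps}

    steps = ["signup", "verify_email", "create_project", "invite_teammate"]
    events = [("u1", "signup"), ("u1", "verify_email"), ("u1", "create_project"),
              ("u2", "signup"), ("u2", "verify_email"),
              ("u3", "signup")]
    print(funnel_conversion(events, steps))
    # roughly: signup 1.0, verify_email 0.67, create_project 0.33, invite_teammate 0.0

The output localizes where users drop off, down to the step. It says nothing about why they dropped, what they expected, or what they did instead, which is exactly the boundary described above.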
Surveys have their own blind spot. They collect responses to the questions a researcher thought to ask, phrased the way the researcher thought to phrase them. On top of that, people aren’t very good at explaining why they do what they do. Nisbett and Wilson’s foundational 1977 paper “Telling More Than We Can Know” documented that people frequently construct explanations for their behavior after the fact, explanations that often bear little relationship to what actually drove the decision. Surveys surface the story users tell themselves, shaped by memory and self-perception.2
Clayton Christensen’s milkshake study, conducted as part of the development of the Jobs-to-Be-Done framework, illustrates this gap in a way that’s hard to forget. McDonald’s had been trying to improve milkshake sales through standard methods: demographic segmentation, taste testing, and direct consumer interviews. None of it produced actionable insight. When researchers shifted to observational methods and started watching customers in the actual context of purchase, they found that a large share of morning milkshake purchases were being made by commuters who wanted something to occupy them during a long, boring drive and hold off hunger until lunch. The product was being hired to do a job that had nothing to do with the “better milkshake” framing that had organized all previous research. The fixes that followed from understanding the real job (a thicker shake that lasted longer, a faster purchase experience) came from a completely different direction than anything the taste tests had surfaced.3
What the study shows is how much gets lost when research methods can’t capture the context of use. People can’t reliably report on behavior that is habitual, automatic, or emotionally driven. If your research depends entirely on asking users to explain themselves, you’re working with incomplete information.
3. The gap between what users say they need and the real opportunity
There’s a related problem that shows up constantly in product work: when users put their experience into words, something gets lost in translation. The stated need is almost always a simplified version of something more complex going on underneath.
Indi Young’s work on mental models gets at why. Users bring a layered set of prior experiences, beliefs, emotional states, and goals to every product interaction, often all at once. A user who says “I want faster search” is describing a symptom. The actual experience driving that frustration might be anxiety about their own competence, distrust built up from a previous failure in the product, or a time pressure that makes even small friction feel unbearable. The surface request points toward query performance. The real problem might be about confidence, error recovery, or what happens when search returns nothing useful.4
This matters enormously for product decisions. When teams build to the surface request, they solve the wrong layer. The real opportunity is the emotional problem underneath, and solving that is what drives retention.
Getting there requires methods that go beyond asking direct questions. Contextual inquiry, as developed by Beyer and Holtzblatt, involves observing users in their actual environment while they do real tasks, with the researcher asking questions in the moment about what the user is doing and why. The technique is designed to surface the reasoning and emotional responses that users would never bring up in a structured interview, because they’ve stopped noticing them, forgotten them, or don’t have words for them.5
4. The most frustrated users are the ones to watch
Microsoft’s Inclusive Design program, which grew out of decades of accessibility research, surfaced a finding that applies well beyond accessibility: designing for users with significant constraints tends to produce solutions that work better for everyone.6
The reason is that highly constrained users feel friction that everyone else has learned to live with. A wheelchair user can’t get over a curb at all; a person carrying heavy luggage finds it annoying but manages. When designers solved for the wheelchair user, they created curb cuts that turned out to make life easier for a much broader group of people. The constraint made a widespread problem visible that normalization had been hiding.
In product terms, the users most worth paying attention to are often the ones who complain most specifically. A detailed complaint is a diagnostic instrument. It describes with precision an experience that a much larger group of users is having at a lower intensity and never reporting. The person who writes a long email about a confusing onboarding flow is putting into words something that many other users resolved by simply leaving. Understanding their specific experience gets you to the mechanism behind the churn, which is far more useful than knowing the churn rate alone.
5. The neuroscience of user behavior
Antonio Damasio’s research on patients with damage to the ventromedial prefrontal cortex has a direct implication for how people experience and evaluate products. These patients retained their full reasoning abilities but lost the capacity to make decisions. They could analyze options, articulate consequences, and follow logical arguments, but without the emotional signal that makes one option feel preferable to another, they couldn’t reach a conclusion.7
What this tells us is that how people feel about a decision is not a soft add-on to how they think about it. Feeling is part of how people decide. Users don’t assess a product rationally and then react emotionally to their conclusion. They’re responding emotionally the entire time they’re using the product, and those emotional responses are the material their evaluation is made from.
The practical implication for user research is significant. Understanding what users do and what they think about a product only gets you so far. The emotional texture of the experience, where users feel uncertain, where they feel capable, where they feel like the product actually sees them, is part of the primary data. Research methods that don’t surface how users feel are leaving out a lot of what actually determines whether the product works for them.
6. Knowing your users directly impacts revenue
The relationship between user understanding and business outcomes is measurable. Forrester Research found that companies with above-average customer experience scores grew revenue at approximately five times the rate of lower-scoring companies over a five-year period.8 McKinsey’s research on personalization found that companies doing it well generate 40% more revenue from those activities than average, and effective personalization is a direct function of how well a company actually knows its users.9
The connection between user understanding and revenue runs through a few channels: making better bets on which features will actually matter, reducing churn by solving the problems that are really driving users away, and generating word of mouth from users who feel genuinely understood by a product. All of it traces back to the same place: knowing what users need and how they actually experience what you’ve built.
7. The teams that win prioritize continuous proximity to users
The practical implication of everything above is that user understanding is a continuous operational function, woven into the normal rhythm of the team. Users change. Their contexts change. The problems they’re trying to solve evolve as their lives and the world around them change. A user model built at launch becomes progressively less accurate over time if it isn’t actively maintained.
The question for most teams is how to build the capacity for regular, direct contact with users into how they actually operate day to day. That means direct observation in real contexts of use, unstructured conversations that give users room to raise things the team hasn’t thought to ask about, and a habit of treating support interactions and complaints as research data rather than noise to be managed.
Teams that maintain this kind of closeness accumulate something that analytics can’t replicate: a genuine feel for the people they’re building for. They develop the ability to sense, before the data confirms it, when a decision is drifting away from what users actually need. That instinct is hard to build and easy to lose.
In my experience, teams with true closeness to their users ship tests that hit at higher rates. The more interesting effect is the ambition it unlocks: they start seeing bigger opportunities that stay invisible to teams that never did the work to see past the surface.
8. The hidden cost of using AI to build at warp speed
AI has raised the stakes on user understanding in a specific way: the cost of skipping it is higher than it’s ever been.
The speed at which product teams can now build has increased dramatically. A PM with access to the right tools can go from idea to testable prototype in a day. That’s genuinely exciting, and it’s also a new kind of risk. When the bottleneck was engineering time, slow build cycles created a natural forcing function for rigor. Building anything took weeks, which made it harder to build the wrong thing carelessly. That friction is largely gone now.
What replaces it has to be discipline. Specifically, the discipline to use the time that AI gives back for deeper user understanding, getting to real user signal earlier, and pressure-testing assumptions before they become churn-inducing features. The teams that will win in this environment are the ones that stay close enough to their users to build genuine intuition about them, the kind of gut-level familiarity with how users think, feel, and behave that sharpens every decision before a single line of code is written.
Final thoughts
The evidence across failure analysis, behavioral research, neuroscience, and business performance data points consistently in the same direction: user understanding is the primary variable in product success, and its absence accounts for a disproportionate share of product failure.
I’ve seen this play out from both sides. Teams that stay close to their users make better bets, course-correct faster, and build products that earn long-term loyalty. Teams that drift from their users, relying on internal assumptions and lagging metrics to guide them, tend to discover the gap only after churn has already set in.
The challenge is that acting on this requires sustained investment in methods that are slower and less legible than the quantitative alternatives. The value shows up in compounding ways over time: better prioritization, lower churn, stronger retention, and a product that stays relevant as the users it serves continue to change. That’s the return on knowing your users. It just takes longer to show up on a dashboard.
Sources
1. CB Insights (2024). Why Startups Fail. cbinsights.com/research/report/startup-failure-reasons-top
2. Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259.
3. Christensen, C. M., Cook, S., & Hall, T. (2005). Marketing malpractice: The cause and the cure. Harvard Business Review, 83(12), 74–83.
4. Young, I. (2008). Mental Models: Aligning Design Strategy with Human Behavior. Rosenfeld Media.
5. Beyer, H., & Holtzblatt, K. (1998). Contextual Design: Defining Customer-Centered Systems. Morgan Kaufmann.
6. Microsoft Design (2016). Inclusive: A Microsoft Design Toolkit. microsoft.com/design/inclusive
7. Damasio, A. R. (1994). Descartes’ Error: Emotion, Reason, and the Human Brain. Putnam.
8. Forrester Research (2014). The Business Impact of Customer Experience.
9. McKinsey & Company (2021). The value of getting personalization right — or wrong — is multiplying. mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/the-value-of-getting-personalization-right-or-wrong-is-multiplying
