Every VoC team has numbers they’re responsible for: CSAT, NPS, customer effort. These numbers move up and down, and every movement demands an explanation in the boardroom.
What’s less often said out loud is this: most of the time, that explanation is a best guess. Not because teams aren’t capable, but because the data behind the number rarely tells the full story.
According to leading customer experience (CX) analysts, this dynamic is creating a growing risk.
In a December 2025 episode of Forrester’s CX Cast podcast, VP and principal analyst Maxie Schmidt explained why VoC teams risk falling into a trap: “Metrics obsession comes from this deep-seated and very earnest need to have metrics to look to…metrics cease to inform about progress and they become a goal in themselves.”
In practice, this looks like dashboards that track metrics but fail to explain which customer problems are occurring, how to approach them, or their impact on the business. The root issue isn’t the metrics themselves; it’s how they’re collected, interpreted, and used.
Where Traditional VoC Breaks Down
Metrics like CSAT and NPS are undeniably useful; they give you a directional sense of how your customers feel.
But most VoC programs rely on surveys as the primary way to collect customers’ thoughts, and that introduces real limitations:
- Low response rates - often a small, single-digit percentage of customers
- Response bias - overrepresentation of extreme experiences
- Priming effects - leading or framed questions that may influence answers
- Lack of context - no clear connection to what actually happened in the interaction
- Time lag - responses collected hours or days after the experience
The result is a thin slice of signal, often treated as if it represents the whole. And from this fragmented view, teams are expected to explain why something happened, and what next strategic steps should look like.
How Metric Obsession Leads to Bad Decisions
When the underlying data is incomplete, something predictable happens: teams fill in the gaps. They connect the metric to a plausible narrative:
- “It’s probably due to hold time.”
- “It’s likely the new feature rollout.”
- “We’ve seen this trend before.”
And often those narratives align, intentionally or not, with what leadership already believes. Not because anyone is trying to mislead, but because the data available doesn’t provide a clear, defensible answer.
This is the core of Forrester’s “metric obsession”:
- Dashboards without context
- Metrics without explanation
- Reporting without real guidance on what to fix and how
Over time, VoC risks becoming a reporting function, rather than a strategic one.
What’s Missing: The Actual Voice of the Customer
Most VoC programs today still rely primarily on solicited feedback, the answers customers give when you ask them directly, usually through surveys. But some of the richest customer insight lives in unsolicited feedback: what customers are already saying in calls, chats, and service interactions, without being asked.
This feedback is immediate, contextual, and tied to the experience as it happens. It’s the actual voice of the customer, not filtered through memory, delayed by hours or days, or influenced by how a survey question is phrased. The problem is that unsolicited feedback has historically been much harder to operationalize: it’s unstructured, and until recently most teams lacked the tools, and often even the infrastructure, to systematically capture it and turn it into something measurable and actionable.
That’s starting to change with advances in conversation intelligence.
Instead of relying on a small sample of responses, forward-looking teams are asking: What if we could understand the customer experience across every interaction?
This is where approaches like predictive CSAT come in.
By applying AI to customer conversations, teams can measure sentiment across every interaction, without guessing based on a small sample.
Rather than waiting for customers to respond to a survey, these models:
- Analyze conversations directly
- Infer satisfaction signals from behavior, language, and outcomes
- Provide coverage across 100% of interactions, not just the standard 2-5% from surveys
The result is a more representative, more immediate view of the customer experience. In effect, you’re no longer flying blind with a small sample.
You’re building a modeled view of your entire customer base, grounded in real interactions.
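As a loose illustration of what a modeled satisfaction score could look like, the sketch below infers a signal from conversation features like negative language, transfers, and resolution. This is a toy heuristic, not any vendor’s actual model; the marker list, weights, and field names are all assumptions for illustration.

```python
# Toy illustration of predictive CSAT: score every interaction instead of
# sampling the small share of customers who answer a survey. A production
# system would use a trained model; this sketch uses simple heuristics.

NEGATIVE_MARKERS = {"frustrated", "cancel", "unacceptable", "still waiting"}

def predict_csat(transcript: str, resolved: bool, transfers: int) -> float:
    """Return a satisfaction score in [0, 1] inferred from the interaction."""
    text = transcript.lower()
    score = 1.0
    score -= 0.2 * sum(marker in text for marker in NEGATIVE_MARKERS)
    score -= 0.1 * transfers          # each transfer adds friction
    if not resolved:
        score -= 0.3                  # unresolved issues weigh heavily
    return max(0.0, min(1.0, score))

interactions = [
    ("Thanks, that fixed it right away.", True, 0),
    ("I'm frustrated, I want to cancel, this is unacceptable.", False, 2),
]
scores = [predict_csat(t, r, n) for t, r, n in interactions]
```

Because a score like this can be computed for every conversation, coverage is no longer limited by who chooses to respond.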
Moving from Metrics to Meaning
The goal isn’t just to measure sentiment; it’s to understand what’s driving it and what can be done about it.
That requires:
- Connecting CSAT to topics, intents, and behaviors
- Identifying patterns across interactions at scale
- Exploring those patterns in a way that’s both flexible and statistically grounded
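To make the idea of a “statistically grounded” pattern concrete, here is one simple approach: a 2x2 chi-square test asking whether interactions tagged with one topic show a significantly higher rate of low CSAT than everything else. The counts below are purely illustrative.

```python
# Sketch of statistically grounding a pattern: does the "billing" topic have
# a significantly higher rate of low-CSAT interactions than other topics?
# Uses a 2x2 chi-square test (df = 1); 3.84 is the 5% critical value.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# rows: billing vs. other topics; columns: low CSAT vs. ok CSAT
low_billing, ok_billing = 120, 380     # hypothetical counts
low_other, ok_other = 150, 1350

stat = chi_square_2x2(low_billing, ok_billing, low_other, ok_other)
significant = stat > 3.84
```

A check like this is what separates “billing calls seem worse” from a defensible finding that can survive scrutiny in the boardroom.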
This is where newer capabilities, like natural language analysis tools designed for CX, are starting to change what’s possible.
Instead of manually slicing dashboards, teams can:
- Ask questions directly (“What’s driving low CSAT in billing calls?”)
- Get answers backed by real data and statistical validation
- Drill into specific examples to further understand context
This kind of natural language analysis allows teams to explore CX data the same way they’d investigate a problem: iteratively, and with context.
None of this means that surveys disappear. They still play an important role, allowing teams to:
- Ask targeted, forward-looking questions
- Track perception over time
- Gather feedback that can’t be inferred from behavior alone
But on their own, they’re incomplete. Without behavioral context, surveys tell you what customers say; they don’t fully explain why customers experienced things that way.
The most effective VoC programs in 2026 won’t choose between surveys and conversations. They’ll combine them to enable:
- Broad, continuous measurement - sentiment across every interaction
- Targeted, structured input - surveys to capture specific questions and trends
- Deep, explainable insight - linking outcomes to behaviors, intents, and specific moments in the journey
This is what moves VoC from reporting to decision-making and business-wide impact.
Looking Ahead for VoC
Forrester’s concern is clear: teams that focus on reporting metrics without improving the underlying signal risk becoming less relevant over time. Teams that evolve will build stronger, more representative data foundations, allowing them not just to measure experience, but to explain it, defend it, and ultimately act on it. That’s the longer-term vision: moving from retrospective reporting to proactive intervention, including earlier forms of experience recovery when customer frustration first starts to surface.
Imagine seeing an alert that a single high-paying customer has shared negative feedback about your returns policy on three different calls in the last six weeks, and immediately dispatching an AI agent to call them, offer an exception to the policy, and recover their experience, all without needing to send out surveys or have analysts manually parse responses.
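A detection rule for that kind of scenario could be sketched as follows. Every field name, threshold, and data value here is hypothetical; a real system would draw on modeled sentiment and topics from actual conversations.

```python
# Hypothetical alert rule: flag a customer who has voiced negative feedback
# on the same topic in 3+ calls within a 6-week window.
from collections import Counter
from datetime import date, timedelta

WINDOW = timedelta(weeks=6)
MIN_OCCURRENCES = 3

def repeated_complaints(calls, today):
    """calls: list of (date, topic, sentiment) tuples for one customer.
    Returns the topics that crossed the repeat-complaint threshold."""
    recent = [topic for when, topic, sentiment in calls
              if sentiment == "negative" and today - when <= WINDOW]
    return [t for t, n in Counter(recent).items() if n >= MIN_OCCURRENCES]

calls = [
    (date(2026, 1, 5), "returns policy", "negative"),
    (date(2026, 1, 19), "returns policy", "negative"),
    (date(2026, 2, 2), "returns policy", "negative"),
    (date(2026, 1, 10), "shipping", "positive"),
]
alerts = repeated_complaints(calls, today=date(2026, 2, 10))
```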
Because in 2026, VoC teams won’t be judged only by the metrics they report, but by the clarity of the decisions they enable, and their ability to help the business recover experiences before they turn into churn.
Platforms like Cresta are built around this model: combining conversation intelligence, predictive measurement, and AI-powered analysis to help teams move from sampled feedback to a complete, explainable view of customer experience.







