How do I interpret xCES scores?
Required Feature Flags
The following feature flags and permissions are required to use this feature:
| Feature Flag | Description |
| --- | --- |
| Analytics Data Processing | Enables AI-powered analytics processing for conversations, including AI metrics like xCES |
xCES itself is configured by evaluagent as part of your AI metrics setup — it doesn't have its own customer-facing feature flag. Contact your evaluagent administrator if you'd like xCES enabled or disabled.
Required Permissions:
- View imported contacts (quality.evaluations.imported-contacts) — required to open conversations and see the xCES result in the Insights sidebar
- Insights (reporting.insights) — required to see xCES widgets on reporting dashboards
xCES (Customer Effort Score) is the AI's prediction of how hard a customer had to work to get their issue resolved. High effort is a leading indicator of churn — when customers have to repeat themselves, get transferred, or wade through a complicated process, they're more likely to leave.
This guide covers how to read xCES, what the xCES Driver tells you, and when xCES matters more than xCSAT or xNPS.
Step 1: Open a conversation and find xCES
Go to Conversations > Imported contacts and open any analysed conversation.
In the Insights sidebar, look under the xMetrics section. xCES sits alongside the other xMetrics with a status badge.
Step 2: Read the result
xCES has three result categories plus N/A:
| Result | Badge | What it means |
| --- | --- | --- |
| Easy | Green | The customer's issue was handled smoothly with minimal effort |
| Neutral | Yellow | Some effort required, but not overly difficult |
| Difficult | Red | The customer had to work hard to get their issue resolved — process or experience problem |
| N/A | Grey | Not enough conversation content to assess |
Step 3: Read the xCES Driver
Below the xCES result you'll see an info badge called the xCES Driver. This is the AI's call on what most influenced the score. The driver text is generated per conversation, so the wording varies — but it gives you a clear handle on what to fix, not just that there's a problem. If you keep seeing similar drivers across Difficult conversations, that's a pattern worth investigating.
Step 4: Drill into the reasoning and evidence
Click into the xCES result to see the full reasoning. This explains why the AI scored the conversation the way it did.
Click View evidence to highlight the exact phrases that informed the result. Evidence usually includes:
- Customer statements about difficulty or ease
- Mentions of multiple contacts or transfers
- Frustration about the process itself (not the issue)
- Indicators of a smooth or complicated path to resolution
Step 5: Filter and report on xCES
Filter conversations
In the Imported contacts filter panel, find xCES under the evaluagent Insight Fields section. Filter to Difficult to find every high-effort conversation in your selection.
Dashboard widgets
If your organisation has dashboards enabled:
- xCES Trend — Effort distribution over time. Watch for spikes in Difficult after process changes.
- xCES Distribution — The breakdown of Easy, Neutral, and Difficult across your conversations.
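If you export analysed conversations (for example as CSV), the Distribution widget's breakdown is easy to sanity-check offline. This is a minimal sketch, assuming each exported record carries an xces field with one of the four result values — the field names are illustrative, not evaluagent's actual export schema:

```python
from collections import Counter

# Hypothetical exported records -- field names are assumptions,
# not evaluagent's actual export schema.
conversations = [
    {"id": "c1", "xces": "Easy"},
    {"id": "c2", "xces": "Difficult"},
    {"id": "c3", "xces": "Easy"},
    {"id": "c4", "xces": "Neutral"},
    {"id": "c5", "xces": "N/A"},
]

# Count each xCES result, then express Easy/Neutral/Difficult as a
# share of assessable conversations (N/A excluded, mirroring the
# Distribution widget).
counts = Counter(c["xces"] for c in conversations)
assessable = sum(n for result, n in counts.items() if result != "N/A")
distribution = {
    result: round(n / assessable * 100, 1)
    for result, n in counts.items()
    if result != "N/A"
}
print(distribution)  # {'Easy': 50.0, 'Difficult': 25.0, 'Neutral': 25.0}
```

Excluding N/A keeps the percentages comparable across periods with different transcription coverage.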
When to use xCES vs xCSAT vs xNPS
The three metrics measure related but different things. Use them together:
| Metric | What it measures | Best for |
| --- | --- | --- |
| xCES | How hard the customer had to work | Spotting process friction and churn risk |
| xCSAT | How satisfied the customer is with the interaction | Tracking immediate, interaction-level satisfaction |
| xNPS | Whether the customer would recommend you | Tracking long-term loyalty and brand sentiment |
A customer can be satisfied with the resolution but rate the effort as high. That's an important signal — they got what they needed, but the process was painful, and they may not stick around.
A customer can have a high-effort experience and still rate as a Promoter if the agent went above and beyond. xCES tells you the experience needed work; xNPS tells you the customer forgave it.
Use xCES when you're focused on:
- Reducing churn driven by friction
- Improving processes, routing, or self-service
- First-contact resolution work (combine with xResolution and xRepeats)
Use xCSAT when you're focused on:
- Day-to-day satisfaction tracking
- Comparing AI predictions against your survey CSAT programme
- Coaching individual interactions
Use xNPS when you're focused on:
- Brand-level loyalty signals at scale
- Validating survey NPS coverage gaps
- Long-term customer experience tracking
The strongest reading comes from looking at all three together. A Difficult, low-xCSAT, Detractor conversation is a clear priority. A Difficult, high-xCSAT conversation tells you the agent saved the day on a broken process — fix the process so they don't have to.
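That combined reading can be expressed as a simple triage rule over exported records. A sketch under assumed field names and result values (illustrative only, not evaluagent's export schema):

```python
# Hypothetical triage rule combining the three xMetrics.
# Field names and result values are assumptions for illustration.
def priority(convo):
    """Rank a conversation: broken process plus unhappy customer first."""
    difficult = convo["xces"] == "Difficult"
    unhappy = convo["xcsat"] == "Dissatisfied"
    detractor = convo["xnps"] == "Detractor"
    if difficult and unhappy and detractor:
        return "urgent"       # all three point the same way: clear priority
    if difficult and not unhappy:
        return "fix-process"  # agent saved the day on a broken process
    if difficult:
        return "review"
    return "ok"

convo = {"xces": "Difficult", "xcsat": "Satisfied", "xnps": "Promoter"}
print(priority(convo))  # fix-process
```

The "fix-process" branch is the case the paragraph above describes: the customer left satisfied, but only because the agent compensated for a painful process.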
How to use xCES data
- For process improvement: Pull all Difficult conversations and group by xCES Driver. The most common driver tells you where to focus.
- For agent recognition: Identify agents who consistently produce Easy results, especially on issue types that are typically Difficult. Use their conversations as training material.
- For root cause: Combine xCES with xResolution and xRepeats. A Difficult, Not resolved conversation that becomes a Repeat is a process failure worth chasing down.
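The process-improvement grouping is a straightforward count once conversations are exported. A minimal sketch, assuming hypothetical export rows; because the driver text is generated per conversation, real data would usually need light normalisation (lowercasing, deduplicating near-identical wordings) before counting:

```python
from collections import Counter

# Hypothetical export rows -- field names are assumptions.
rows = [
    {"xces": "Difficult", "driver": "Multiple transfers"},
    {"xces": "Difficult", "driver": "Multiple transfers"},
    {"xces": "Difficult", "driver": "Customer repeated details"},
    {"xces": "Easy", "driver": "Resolved on first contact"},
]

# Count drivers across Difficult conversations only.
driver_counts = Counter(
    r["driver"] for r in rows if r["xces"] == "Difficult"
)

# The most common driver is where process-improvement effort pays off.
print(driver_counts.most_common(1))  # [('Multiple transfers', 2)]
```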
Troubleshooting
xCES not appearing
- The conversation hasn't been processed yet
- The integration doesn't have xCES enabled
- The conversation has no transcript (voice calls need to be transcribed)
Result doesn't seem right
- Read the reasoning — it explains the AI's logic
- Check the xCES Driver for the main influence
- Review the evidence
- Remember xCES is based on conversation content only
Report consistent inaccuracies to evaluagent support.
