How Do I Interpret xCSAT in Reports?

Written by Alex Richards

Required Feature Flags

The following feature flags and permissions are required to use this feature:

  • Analytics Data Processing: Enables AI-powered analytics processing for conversations, including AI metrics like xCSAT

xCSAT itself is configured by evaluagent as part of your AI metrics setup — it doesn't have its own customer-facing feature flag. Contact your evaluagent administrator if you'd like xCSAT enabled or disabled.

Required Permissions:

  • View imported contacts (quality.evaluations.imported-contacts) — required to open conversations and see the xCSAT score in the Insights sidebar

  • Insights (reporting.insights) — required to see xCSAT widgets on reporting dashboards

xCSAT is the AI's prediction of how satisfied a customer was, scored 1 to 5. Unlike survey CSAT, you get a score on every conversation — no waiting for survey responses, no skewed data from only the very happy or very unhappy customers.

This guide covers how to read xCSAT scores, drill into the AI's reasoning, and filter conversations by score.

Step 1: Open a conversation

Go to Conversations > Imported contacts and open any analysed conversation.

In the Insights sidebar, look under the xMetrics section. You'll see xCSAT with a status badge and score.

Step 2: Read the score and badge

xCSAT gives you a score from 1 to 5. The badge colour reflects how positive the score is:

  • 4–5 (Green): Strong positive signals; generally a successful interaction

  • 3 (Yellow): Mixed or unclear signals

  • 1–2 (Red): Negative indicators, frustration, unresolved issues

  • No score (Grey): Not enough conversation content to assess
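The score-to-badge mapping is simple enough to sketch as a function. This is only an illustration of the thresholds in the table above, not an evaluagent API; the function name and the use of None for the grey (unassessable) case are assumptions:

```python
def xcsat_badge(score):
    """Map an xCSAT score to its badge colour, per the table above.

    `score` is 1-5, or None when there wasn't enough conversation
    content to assess. Illustrative only, not an evaluagent API.
    """
    if score is None:
        return "grey"    # not enough content to assess
    if score >= 4:
        return "green"   # strong positive signals
    if score == 3:
        return "yellow"  # mixed or unclear signals
    return "red"         # negative indicators (1-2)

xcsat_badge(5)     # "green"
xcsat_badge(None)  # "grey"
```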

Step 3: Drill into the reasoning

Click the xCSAT entry to see the AI's reasoning. This is a written explanation of why the AI landed on that score — what it picked up on in the conversation.

Click View evidence to highlight the exact phrases from the conversation that informed the score. If there's more than one piece of evidence, click again to cycle through each one.

Evidence usually includes:

  • Direct expressions of satisfaction or frustration

  • Language about resolution (or lack of it)

  • Tone shifts during the conversation

  • Closing statements that show the customer's final mood

Step 4: Filter conversations by xCSAT score

To find all the low-scoring (or high-scoring) conversations:

  • Go to Conversations > Imported contacts

  • Open the filter panel

  • Find the xCSAT filter under the evaluagent Insight Fields section

  • Select the score range you want (e.g. 1–2 for low satisfaction)

  • Apply the filter

This is useful for prioritising QA reviews — focus your manual evaluation effort where customers were unhappy.
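If you export conversation data and want to reproduce the same range filter outside the UI, the logic amounts to the following. The dict shape and field names here are hypothetical, not evaluagent's export format:

```python
# Illustrative only: the evaluagent filter panel does this for you in the UI.
# Conversations are modelled as plain dicts with a predicted "xcsat" score.
conversations = [
    {"id": "c1", "xcsat": 5},
    {"id": "c2", "xcsat": 2},
    {"id": "c3", "xcsat": 1},
    {"id": "c4", "xcsat": 4},
]

def filter_by_xcsat(items, low, high):
    """Keep conversations whose xCSAT falls in the inclusive range [low, high]."""
    return [c for c in items if low <= c["xcsat"] <= high]

# e.g. 1-2 for low satisfaction, as in the step above
low_satisfaction = filter_by_xcsat(conversations, 1, 2)
[c["id"] for c in low_satisfaction]  # ['c2', 'c3']
```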

Step 5: Track xCSAT on dashboards

If your organisation has dashboard features enabled, you can add two xCSAT widgets:

  • xCSAT Trend — Average predicted satisfaction over time. Useful for tracking the impact of process or training changes.

  • xCSAT Distribution — The breakdown of scores across all your conversations. Useful for spotting whether you're scoring 4s and 5s or clustering around 2s and 3s.
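Both widgets are straightforward aggregations. A sketch of what each one computes, using made-up scores (evaluagent calculates these for you on the dashboard):

```python
from collections import Counter
from statistics import mean

# Hypothetical xCSAT scores grouped by day — not real export data.
scores_by_day = {
    "2024-06-01": [4, 5, 3, 4],
    "2024-06-02": [2, 3, 4],
}

# xCSAT Trend: average predicted satisfaction per period
trend = {day: round(mean(s), 2) for day, s in scores_by_day.items()}
# {'2024-06-01': 4.0, '2024-06-02': 3.0}

# xCSAT Distribution: how many conversations landed on each score
all_scores = [s for day in scores_by_day.values() for s in day]
distribution = Counter(all_scores)
```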

xCSAT vs survey CSAT

If you import survey CSAT alongside xCSAT, you can compare the two:

  • Source: xCSAT comes from AI analysis of the conversation; survey CSAT from the customer's survey response

  • Coverage: xCSAT covers every conversation; survey CSAT only the customers who respond

  • Reasoning: xCSAT includes written reasoning with evidence; survey CSAT has none

  • Timing: xCSAT is available immediately after processing; survey CSAT is delayed

  • Bias: xCSAT is scored on every conversation, so there is no response bias; survey CSAT skews to the very satisfied and very unsatisfied

Comparing the two can validate the AI's predictions and show you the response bias in your survey programme.
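A rough sketch of that comparison on exported data: survey coverage shows how much of your traffic the survey actually sees, and a simple agreement rate shows how often the two scores land close together. The record shape and field names are hypothetical:

```python
# Illustrative records: survey scores exist only where customers responded.
records = [
    {"id": "c1", "xcsat": 5, "survey": 5},
    {"id": "c2", "xcsat": 2, "survey": None},  # no survey response
    {"id": "c3", "xcsat": 1, "survey": 1},
    {"id": "c4", "xcsat": 4, "survey": None},
]

# Pair up conversations where both scores exist
paired = [(r["xcsat"], r["survey"]) for r in records if r["survey"] is not None]

# What fraction of conversations got a survey response at all?
survey_coverage = len(paired) / len(records)

# How often are the two scores within 1 point of each other?
agreement = sum(1 for x, s in paired if abs(x - s) <= 1) / len(paired)

print(f"survey coverage: {survey_coverage:.0%}")   # 50%
print(f"within-1 agreement: {agreement:.0%}")      # 100%
```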

How to use xCSAT data

For QA prioritisation: Sort the conversation list by low xCSAT and focus manual reviews there. You'll find the at-risk customers and the coaching moments faster.

For agent coaching: Pull a few high-xCSAT conversations from a top-performing agent and use the evidence as training material. Same for low scores — review with the agent and walk through the reasoning.

For trend tracking: Watch the xCSAT Trend widget over time. If it's dropping, drill into the distribution and the recent low-scoring conversations to find why.
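The drop check the trend widget prompts you to do by eye can be sketched as: compare the average of the most recent periods against the preceding ones. The window and threshold values here are arbitrary assumptions:

```python
from statistics import mean

# Hypothetical weekly xCSAT averages, oldest first.
weekly_avg = [4.2, 4.1, 4.0, 3.6, 3.3]

def is_dropping(series, window=2, threshold=0.3):
    """Flag a drop when the average of the last `window` periods falls
    more than `threshold` below the average of the periods before them."""
    recent = mean(series[-window:])
    earlier = mean(series[:-window])
    return earlier - recent > threshold

is_dropping(weekly_avg)  # True — recent weeks average well below earlier ones
```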

Troubleshooting

xCSAT not appearing on a conversation

  • The conversation hasn't been processed yet

  • The integration doesn't have xCSAT enabled

  • The conversation has no transcript (voice calls need to be transcribed)

Score doesn't match what you'd expect

  • Read the reasoning — the AI explains itself

  • Check the evidence to see exactly what it picked up on

  • Remember xCSAT is based only on conversation content; it can't see external context

If you're seeing the same kind of inaccuracy repeatedly, report it to evaluagent support so the model can improve.
