How do I read a SmartScore report?
Required Feature Flags
The following feature flags and permissions are required to use this feature:
Feature Flag | Technical Name | Description |
SmartScore V2 | | Enables AI-powered quality scoring |
You also need at least one of these contract-level features to see the report:
Feature Flag | Technical Name |
Out-of-the-box Topics | |
Blended Scorecards | |
Boost Plan | |
Scale Plan | |
Insights Only | |
Required Permissions:
Smartscore reporting (reporting.evaluagent-cx.view-smartscore-reporting), to access the report
Introduction
SmartScore is evaluagent's AI-powered quality scoring. It generates an initial quality score for an interaction, which a human reviewer can then confirm or correct.
The SmartScore report shows you how that AI is performing. It tracks how often reviewers agreed with the AI, how often they revised it, and which areas of your scorecards are causing the most corrections.
This guide explains where to find the report and how to read what it's telling you.
What SmartScore Does
AI analyses an interaction and generates line item scores
An evaluator reviews those scores
The reviewer confirms or corrects them
Corrections are fed back to improve the AI's future accuracy
A high agreement rate means the AI is well-calibrated for your scorecards. A high revision rate means there's room for the AI to learn — or for your scorecard criteria to be made clearer for both AI and humans.
Step 1: Open the Report
Go to Conversation Analytics and open SmartScore reporting. The page loads with summary metrics at the top and data tables below.
Step 2: Read the Summary Metrics
The top of the page shows three headline numbers:
Line Items SmartScored — how many line items the AI scored in the period
Line Items Changed — how many of those line items were changed during the initial evaluation
SmartScore accuracy — the percentage of SmartScored line items that weren't changed by an evaluator
A high change rate suggests the AI needs calibration. Tighten your scorecard criteria, or raise the issue with the evaluagent team.
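To make the relationship between these three numbers concrete, here is a small Python illustration. The figures are invented, not taken from a real report:

```python
# Illustrative arithmetic only: how the accuracy figure relates to the other two metrics.
line_items_smartscored = 1200   # example value
line_items_changed = 180        # example value

# Accuracy is the share of SmartScored line items left unchanged by evaluators.
accuracy = (line_items_smartscored - line_items_changed) / line_items_smartscored
print(f"SmartScore accuracy: {accuracy:.1%}")  # -> 85.0%
```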
Step 3: Use the Tabs
The page has two tabs:
Under review tab
Lists scores still flagged for human review.
Reviewed tab
Lists scores that have already been reviewed and moved across from the Under review tab.
Both tabs show the same columns:
Contact ref. — click to open the evaluation
Scorecard & Line Item — the scorecard name and the line item the AI scored
SmartScore — the AI-generated score
Correction — the score after evaluator review
Evaluator — the evaluator and the evaluation date
Reason for change — the reason the evaluator captured (if any)
You can move scores between tabs in bulk by selecting them and clicking the Move … to under review or Move … to reviewed action that appears.
There's a Hide / Show changes without comments toggle above the table — handy when you only want to see corrections that include a written reason.
Step 4: Filter the Report
Use the filter controls at the top to narrow the view by:
Date range (Evaluation publish date or Date of score)
Scorecards
Evaluator
Line item
Click Run Report to update both the metrics and the tables.
Step 5: Export the Data
Use the export option in the filter bar to download a CSV of the current filtered view. Useful if you want to dig deeper in a spreadsheet or your BI tool.
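If you take the spreadsheet or BI route, a minimal sketch of loading the export with pandas might look like the following. The file name is a placeholder and the column headers in your CSV may differ, so check them before building anything on top:

```python
# A minimal sketch of loading the exported CSV for further analysis.
# "smartscore_export.csv" is a placeholder file name; confirm the real column
# headers of your own export before relying on them.
import pandas as pd

df = pd.read_csv("smartscore_export.csv")
print(df.columns.tolist())   # check the actual column names first
print(df.head())
```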
How to Use the Report
Spot Problem Line Items
Filter by line item and look at the SmartScore accuracy per line item. Items with low accuracy are candidates for clearer wording on the scorecard or extra examples for reviewers.
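If you prefer to do this outside the UI, a hedged sketch using the exported CSV is below. It assumes columns named Line item, SmartScore and Correction, and treats a blank correction as unchanged; rename the columns to match your actual export:

```python
# Hypothetical sketch: SmartScore accuracy per line item from an exported CSV.
# Column names ("Line item", "SmartScore", "Correction") are assumptions.
import pandas as pd

df = pd.read_csv("smartscore_export.csv")

# Treat a line item as unchanged when no correction was recorded, or when the
# corrected score matches the AI score.
df["unchanged"] = df["Correction"].isna() | (df["SmartScore"] == df["Correction"])

accuracy_by_item = (
    df.groupby("Line item")["unchanged"]
      .mean()
      .sort_values()            # lowest accuracy (most-corrected items) first
)
print(accuracy_by_item.head(10))
```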
Identify Patterns in Corrections
Look at the Correction column versus the original SmartScore. If the AI consistently scores higher than evaluators, it may be too lenient; if consistently lower, too strict. Either pattern is useful for the evaluagent team and for your own scorecard design.
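For numerically scored line items, a rough way to check this direction from the export is sketched below. It assumes the same hypothetical column names as the earlier sketches plus a Scorecard column, and it only makes sense where scores are numeric; for pass/fail items you would count the direction of changes instead:

```python
# Hypothetical sketch: is the AI leaning lenient or strict, per scorecard?
# Assumes numeric scores and assumed column names; adjust to your export.
import pandas as pd

df = pd.read_csv("smartscore_export.csv")

# Only look at rows the evaluator actually changed.
changed = df[df["Correction"].notna() & (df["SmartScore"] != df["Correction"])].copy()

# Positive mean = AI tends to score higher than the evaluator (too lenient);
# negative mean = AI tends to score lower (too strict).
changed["difference"] = changed["SmartScore"] - changed["Correction"]
print(changed.groupby("Scorecard")["difference"].mean().round(2))
```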
Track Trends Over Time
Run the report weekly and compare. SmartScore accuracy that climbs suggests the AI is learning the patterns of your scorecards. Drops are worth investigating — has a scorecard changed recently?
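If you export the data regularly, you can also plot the trend yourself. This sketch assumes an Evaluation date column alongside the same hypothetical columns as above:

```python
# Hypothetical sketch: weekly SmartScore accuracy trend from an exported CSV.
# "Evaluation date" is an assumed column name; adjust to your export.
import pandas as pd

df = pd.read_csv("smartscore_export.csv", parse_dates=["Evaluation date"])
df["unchanged"] = df["Correction"].isna() | (df["SmartScore"] == df["Correction"])

# Weekly accuracy: the share of line items left unchanged each week.
weekly = df.set_index("Evaluation date")["unchanged"].resample("W").mean()
print(weekly)   # a falling trend is the cue to check for recent scorecard changes
```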
Quality-Check Your Reviewers
Compare patterns across evaluators. If one evaluator consistently changes scores far more than the rest, that's worth a calibration conversation.
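A quick way to surface such outliers from the export, again using assumed column names, is to compare each evaluator's change rate alongside their volume:

```python
# Hypothetical sketch: change rate per evaluator, to spot outliers worth a
# calibration conversation. Column names are assumptions; adjust to your export.
import pandas as pd

df = pd.read_csv("smartscore_export.csv")
df["changed"] = df["Correction"].notna() & (df["SmartScore"] != df["Correction"])

# Change rate plus volume; a high rate on a small count may just be noise.
per_evaluator = (
    df.groupby("Evaluator")["changed"]
      .agg(change_rate="mean", line_items="count")
      .sort_values("change_rate", ascending=False)
)
print(per_evaluator)
```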
Tips
Run the report after a meaningful volume of evaluations — small samples don't tell you much
When a reason for change is captured during review, use those reasons to guide scorecard tweaks
Share SmartScore accuracy with your team. It's a useful, simple measure of how aligned the AI and your evaluators are
Troubleshooting
No Data Is Showing
Check that SmartScore is enabled for the contract and that evaluations exist in the date range. Confirm your filters aren't too restrictive.
Metrics Look Off
The metrics need a reasonable volume of completed reviews to be meaningful. Widen the date range and try again.
Export Isn't Working
Try a smaller date range and check your browser allows downloads from evaluagent.
