Understanding conversation metadata
Every conversation imported into evaluagent carries a set of metadata fields alongside the transcript and audio. The metadata is what makes conversations searchable, filterable, and meaningful in reporting. This page explains what each field means, where it comes from, and why it matters.
Why metadata matters
Metadata is the structured information about a conversation: when it happened, who handled it, what platform it came from, what category it belongs to, and what AI analysis has detected in it. Without metadata, all you have is a transcript. With it, you can:
Find specific conversations in seconds rather than scrolling through lists
Filter to a meaningful population — a particular agent, channel, time period, or customer issue
Build saved filters for recurring review tasks
Sample contacts representatively for QA
Report on quality scores split by channel, source, contact type, or sentiment
Spot patterns across thousands of contacts that you'd never see one at a time
Most metadata is set automatically when the conversation is imported. Some fields are populated later by AI analysis. A few can be edited manually after import.
Core contact fields
These are the basic identity fields that every conversation has.
Contact reference
The unique identifier from your source system. For a Zendesk ticket this is the ticket number; for a phone call it might be the call ID from your telephony platform; for a Salesforce case it's the case number.
The reference is how evaluagent matches updates from the source system back to the existing conversation. If a Zendesk ticket is reopened with new replies, the next import uses the same reference and updates the existing record rather than creating a duplicate.
You can use the reference to find a specific conversation quickly using the Reference quick search at the top of the conversations list.
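The de-duplication behaviour is easiest to picture as a lookup keyed on source plus reference. The sketch below is illustrative only; the types, store, and function are hypothetical, not evaluagent's internal API:

```typescript
// Illustrative only: de-duplicating imports by keying on source + reference.
interface ImportedConversation {
  source: string;     // e.g. "zendesk"
  reference: string;  // e.g. the ticket number from the source system
  transcript: string;
}

const store = new Map<string, ImportedConversation>();

function upsertConversation(incoming: ImportedConversation): void {
  // The same (source, reference) pair always maps to one record, so a
  // reopened ticket updates the existing conversation rather than
  // creating a duplicate.
  const key = `${incoming.source}:${incoming.reference}`;
  store.set(key, incoming);
}
```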
Source
The integration the conversation came from — Zendesk, Intercom, Five9, Genesys Cloud, Salesforce, and so on. Use this to filter the list to one platform when your organisation has multiple integrations active.
Channel
The communication medium of the conversation: voice call, chat, email, messaging, social, and so on. Channels are configured by your administrator under quality settings, and each conversation is tagged with the channel it belongs to.
The channel matters because:
Scorecards can be linked to specific channels — only the scorecards associated with the channel will appear when you start an evaluation (see the sketch after this list)
Reporting can be split by channel to compare quality across mediums
The conversation view layout differs slightly by channel — voice calls show an audio player, digital channels show a chat-style thread
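To make the first point concrete, scorecard selection amounts to filtering scorecards by channel membership. A minimal sketch with hypothetical types, not the platform's API:

```typescript
// Illustrative: only scorecards linked to the conversation's channel
// are offered when an evaluation starts.
interface Scorecard {
  name: string;
  channels: string[]; // the channels this scorecard is linked to
}

function scorecardsForChannel(all: Scorecard[], channel: string): Scorecard[] {
  return all.filter((card) => card.channels.includes(channel));
}
```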
Direction
Whether the contact was inbound (the customer reached out to you) or outbound (the agent reached out to the customer). Direction is mainly relevant for voice and messaging channels and is used in reporting and filtering.
Contact start date and time
When the interaction took place, taken from your source system. This is the timestamp used by the date picker on the conversations list and by date-based reports.
Handle time
The total duration of the interaction, in seconds or minutes depending on the channel. For voice calls this is the call length; for digital channels it's the time between the first and last message in the thread.
You can filter and sort the conversations list by handle time, which is useful for spotting unusually short or unusually long interactions.
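For digital channels, the rule above reduces to a timestamp difference between the first and last message. A minimal sketch, assuming a hypothetical Message type:

```typescript
// Illustrative: handle time for a digital thread is the gap between
// the earliest and latest message timestamps.
interface Message {
  sentAt: Date;
}

function handleTimeSeconds(thread: Message[]): number {
  if (thread.length === 0) return 0;
  const times = thread.map((m) => m.sentAt.getTime());
  return (Math.max(...times) - Math.min(...times)) / 1000;
}
```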
Agent
The agent who handled the contact, linked to their evaluagent user profile. The agent is identified by your integration's configured matching strategy — usually email address, but some integrations match by username, full name, or switch ID.
If the agent identifier from the source system doesn't match any evaluagent user, the conversation is flagged as Agent Unmatched and needs to be linked manually before it can be evaluated. See Managing imported contacts for the full agent matching workflow.
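Conceptually, matching is a lookup against user profiles on whichever attribute the strategy names. The sketch below is hypothetical, not the integration's actual code:

```typescript
// Illustrative: resolving a source-system agent identifier to a user
// via the integration's configured matching strategy.
type MatchStrategy = "email" | "username" | "fullName" | "switchId";

interface User {
  email: string;
  username: string;
  fullName: string;
  switchId?: string;
}

function matchAgent(
  users: User[],
  strategy: MatchStrategy,
  identifier: string
): User | undefined {
  // No match here is what produces the Agent Unmatched status,
  // and the conversation then needs manual linking.
  return users.find(
    (u) => u[strategy]?.toLowerCase() === identifier.toLowerCase()
  );
}
```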
Contact type
A classification of what the contact is about — Sales Inquiry, Support Request, Complaint, Follow-up, and so on. Contact types are configured by your administrator and are independent of channel: a complaint can be a phone call, an email, or a chat.
Like channels, contact types can be linked to scorecards and used in reporting.
Status
Where the conversation is in the evaluagent lifecycle:
| Status | What it means |
| --- | --- |
| Processing | Import is in progress; data is still being collected |
| Ready | The conversation is fully imported and available for action |
| Evaluated | One or more evaluations have been completed against it |
| Agent Unmatched | The agent from the source system isn't linked to an evaluagent user — manual linking needed before evaluation |
| Analytics Pending | The conversation is in the queue for AI analysis |
| Analytics Complete | AI analysis has finished and topics, sentiment, and other AI fields are available |
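Treated as data, the lifecycle is a small closed set of states. A hypothetical sketch, not the platform's schema:

```typescript
// Illustrative: the conversation lifecycle as a closed set of states.
type ConversationStatus =
  | "Processing"
  | "Ready"
  | "Evaluated"
  | "Agent Unmatched"
  | "Analytics Pending"
  | "Analytics Complete";

function isEvaluable(status: ConversationStatus): boolean {
  // Conversations still importing, or with no linked agent,
  // can't be evaluated yet.
  return status !== "Processing" && status !== "Agent Unmatched";
}
```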
Custom fields from your integration
Each integration imports its own set of metadata fields specific to that platform. Common ones include:
Tags and labels — for example Zendesk tags, Intercom tags, Salesforce labels
Status and state — the source platform's own status (open, closed, pending, resolved, escalated), distinct from the evaluagent lifecycle status above
Priority — low, normal, high, urgent
Groups, queues, or teams — the routing destination the contact was assigned to
Custom fields — anything your team has configured in the source platform (department, product area, customer segment, and so on)
For Zendesk, Intercom, Salesforce, and Freshdesk, your administrator chooses which integration fields appear and how they are named, using Metadata Field Mapping in the integration settings. For other integrations, the default field set is imported.
Custom fields are filterable in the filter builder under [Integration] Fields and can be used in reports.
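A mapping entry is essentially a rename-and-show rule per source field. What such a configuration could look like, with entirely hypothetical names:

```typescript
// Illustrative: a per-integration mapping of source fields to the
// names shown in the filter builder and reports.
interface FieldMapping {
  sourceField: string;  // field name in the source platform
  displayName: string;  // name shown in evaluagent
  visible: boolean;     // whether the field appears at all
}

// Hypothetical example for a Zendesk integration.
const zendeskMapping: FieldMapping[] = [
  { sourceField: "tags", displayName: "Zendesk Tags", visible: true },
  { sourceField: "custom_department", displayName: "Department", visible: true },
  { sourceField: "via_channel", displayName: "Via", visible: false },
];
```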
AI-generated fields
Several fields are populated after import by evaluagent's AI features. Whether you see them depends on which features are enabled for your organisation.
Insight Topics
Topics detected by analytics — for example "billing query", "cancellation request", "agent transferred call". Topics are organised in your Insight Topics configuration and can be filtered, reported on, and used as Auto Work Queue criteria.
SmartScore moments and summaries
When auto-tagging is run on a conversation, SmartScore generates:
Moments — key highlights extracted from the transcript with timestamps
Summary — a short AI-written summary of the whole conversation
Summaries appear in the metadata sidebar on the conversation page. Moments are visible inline against the transcript.
Sentiment
Sentiment analysis produces several fields:
Overall agent sentiment — the dominant sentiment across the agent's messages (see the sketch after this list)
Overall customer sentiment — the dominant sentiment across the customer's messages
Overall sentiment — combined sentiment for the whole conversation
Sentiment score — a numeric value summarising the sentiment of the conversation
Prolonged sentiment — flags extended periods of a specific sentiment, for example sustained negative customer sentiment
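One reading of the "dominant sentiment" fields is a majority count over a party's message-level labels. A sketch under that assumption; how the analysis actually weighs messages is not specified here:

```typescript
// Illustrative: the dominant sentiment as the most frequent label
// across one participant's messages.
type Sentiment = "positive" | "neutral" | "negative";

function dominantSentiment(labels: Sentiment[]): Sentiment | undefined {
  const counts = new Map<Sentiment, number>();
  for (const label of labels) {
    counts.set(label, (counts.get(label) ?? 0) + 1);
  }
  let best: Sentiment | undefined;
  let bestCount = 0;
  for (const [label, count] of counts) {
    if (count > bestCount) {
      best = label;
      bestCount = count;
    }
  }
  return best;
}
```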
xMetrics — predictive AI metrics
If xMetrics is enabled, the following predicted fields are added to each analysed conversation:
xNPS — the predicted Net Promoter Score category for the customer (Promoter, Passive, Detractor)
xRepeats — whether the customer mentioned a previous contact about the same issue, indicating a repeat contact
xResolution — the predicted resolution status of the conversation
xVulnerability — flags conversations where the customer shows signs of vulnerability
These metrics are useful for finding conversations that warrant deeper review — detractors, repeats, and vulnerable customers in particular.
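As a sketch of that triage idea: the field names follow the list above, but the shapes and value sets (for example the xResolution labels) are assumptions:

```typescript
// Illustrative: shortlisting analysed conversations for human review.
interface XMetrics {
  xNPS: "Promoter" | "Passive" | "Detractor";
  xRepeats: boolean;       // customer mentioned a previous contact
  xResolution: "Resolved" | "Unresolved"; // assumed label set
  xVulnerability: boolean; // signs of customer vulnerability
}

function needsDeeperReview(m: XMetrics): boolean {
  // Detractors, repeat contacts, and vulnerable customers in particular.
  return m.xNPS === "Detractor" || m.xRepeats || m.xVulnerability;
}
```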
Audio metrics (voice calls)
If Conversation Intelligence is enabled, voice calls also include audio metrics like overtalk and silence durations. These are captured per-call and can be filtered on. See Understanding audio metrics for the full list.
Auto-QA results
If Auto-QA scorecards have been run against a conversation, the metadata includes which scorecards were applied and the PASS/FAIL result for each. You can filter the conversations list by Auto-QA result to find contacts that need human review.
Evaluation-related metadata
Once a conversation has been evaluated, additional fields become available:
Evaluation outcome — line item results, section results, and the overall scorecard outcome
Quality score — the overall percentage score from the evaluation
Evaluator — who completed the evaluation
Evaluation mode — for example "Official" or "Extra", as configured in your quality settings
These fields are filterable under Evaluation Results in the filter builder, which is what reports and dashboards use to surface quality performance.
How metadata is used across evaluagent
Metadata isn't just for browsing the conversations list. It's the connective tissue across the platform:
Auto Work Queues use metadata filters to decide which conversations are sampled and assigned to evaluators automatically (see the sketch after this list)
Auto-QA uses metadata to choose which conversations to score with AI scorecards
Reports and dashboards group and slice quality data by channel, source, contact type, sentiment, agent, and any custom fields you have mapped
Analytics and Insights use metadata to build trends — for example tracking how negative sentiment changes over time for a specific contact type
Calibration sessions can be built from filtered metadata to ensure participants score representative conversations
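For instance, an Auto Work Queue's selection step can be pictured as a metadata filter followed by random sampling. Everything below is a hypothetical sketch, not the queue's actual implementation:

```typescript
// Illustrative: filter conversations by metadata, then sample a fixed
// number for evaluation.
interface Conversation {
  channel: string;
  contactType: string;
  customerSentiment: "positive" | "neutral" | "negative";
}

function sampleForQueue(
  all: Conversation[],
  predicate: (c: Conversation) => boolean,
  size: number
): Conversation[] {
  const matching = all.filter(predicate);
  // Naive shuffle for illustration; a real sampler would use an
  // unbiased method such as Fisher-Yates.
  return [...matching].sort(() => Math.random() - 0.5).slice(0, size);
}

// Example: up to 10 negative-sentiment voice complaints.
// sampleForQueue(conversations, (c) =>
//   c.channel === "voice" &&
//   c.contactType === "Complaint" &&
//   c.customerSentiment === "negative", 10);
```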
The richer and more accurate your metadata, the more useful all of these features become.
Keeping metadata accurate
A few things help keep your metadata clean:
Map agents correctly — set the right matching strategy on each integration so contacts are linked to the right evaluagent user without manual intervention. Adding agents in evaluagent before turning an integration on avoids "Agent Unmatched" backlogs.
Review your metadata field mapping — for Zendesk, Intercom, Salesforce, and Freshdesk, ask your administrator to enable the integration fields you actually use for filtering and reporting, and to hide ones that just add noise.
Configure channels and contact types deliberately — too few and you can't segment your data; too many and the lists become unmanageable. Most teams start with the defaults and add custom values as patterns emerge.
Use auto-tagging where it adds value — running SmartScore against high-volume contacts gives you searchable summaries and moments without manual work.
