evaluagent CX: How to evaluate a conversation using SmartScore

Written by Alex Richards
Updated yesterday

Prerequisites

To follow this guide, you must have the following:

  • evaluagent CX must be enabled on your account

  • The contact you wish to evaluate must have a conversation attached (i.e., call transcript or copy of the ticket)

  • You must have the permission "View SmartScore analysis"

  • You must have created and published a Scorecard with at least one Automated Line Item within it. We call this a "Blended Scorecard"

Getting started

Within evaluagent CX there are three ways in which a conversation can be automatically evaluated once you have created a scorecard that contains Automated Line Items.

Option 1 - Automatically score Automated Line Items with human review

Option 1 is our suggested approach until a sufficient level of accuracy and comfort is reached with your Automated Line Items.

  1. Build an AutoWorkQueue and select the scorecard you want evaluators to use for those evaluations.

  2. As part of the assignment process, all automated line items will be pre-scored by the evaluagent system.

  3. The "Contacts to Evaluate" screen will then be populated with evaluations as per the current AutoWorkQueues process (Described here)

  4. When the evaluator opens the evaluation, all automated line items will already be pre-scored.

  5. The evaluator's role is then to review the automated scores, complete any remaining line items in the scorecard, check SmartScore's evidence (by clicking View evidence), copy any relevant coaching tips provided by SmartScore into the standard feedback model, and finally click Publish.

  6. It is important to note that SmartScore is your co-pilot, and everything after the evaluation is published remains the same. Users without the "View SmartScore analysis" permission will not see any evidence of its use within the evaluation process.

Option 2 - Automatically score and publish evaluations

Once you have reached a sufficient level of comfort with your automation, you may be ready to publish evaluation results to reporting without human approval. To do this, simply select "Publish without review" when setting up your AutoWorkQueue with a scorecard whose line items are entirely automated.


This will publish the evaluation directly to your reports, assigning the evaluation to "CoPilot".

Important: Auto-publishing generates quality scoring data automatically. This data provides insights for human decision-makers. Decisions affecting employees (coaching, performance reviews, disciplinary actions) are made by managers and supervisors using these insights - not by the AI system.

Note: AutoPublish does not work with blended scorecards (those containing both manual and automated line items). All manual line items must be completed before the evaluation can be published.
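To make the constraint concrete, the auto-publish rule boils down to a single condition: every line item on the scorecard must be automated. The sketch below is illustrative pseudocode only; the LineItem type and can_auto_publish function are assumptions for the example, not part of the evaluagent CX product.

```python
from dataclasses import dataclass

@dataclass
class LineItem:
    name: str
    automated: bool  # True if the item is scored by SmartScore, False if scored by a human

def can_auto_publish(line_items: list[LineItem]) -> bool:
    """An evaluation can skip human review only when every line item is automated."""
    return all(item.automated for item in line_items)

# A blended scorecard (manual + automated items) is not eligible for auto-publish:
blended = [LineItem("Greeting", True), LineItem("Empathy", False)]
print(can_auto_publish(blended))  # False - the manual item must be completed by a human first
```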

Alternative method: Evaluate a contact using the SmartScore button

From time to time, you may wish to evaluate a conversation using a scorecard that contains Automated Line Items, even though the contact hasn't been assigned through the AutoWorkQueue process.

In this case, you can follow the steps below, provided all the prerequisites mentioned at the start of this article are met.

  1. Load the evaluation for the contact you wish to assess

  2. Select your evaluation mode, and choose the scorecard you wish to use.

  3. Click the SmartScore button, and watch the magic happen

  4. Review the automated scores, complete any remaining line items in the scorecard, check SmartScore's evidence (by clicking View evidence), copy any relevant coaching tips provided by SmartScore into the standard feedback model, and finally click Publish.

Why are some scores missing?

Like any form of automation, SmartScore may occasionally encounter errors, but we'll do our best to highlight these in the product so that appropriate action can be taken. One of the most likely causes is that the conversation is too long for SmartScore to process. If this continues to be an issue, please share examples with our support team or your CSM directly so that we can address the query on a one-to-one basis.

If scoring fails for a line item, the system automatically retries up to 3 times with exponential backoff between retry attempts. Failed items are flagged for manual review after all retries are exhausted.
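For readers curious what that retry pattern looks like, here is a generic sketch of retrying with exponential backoff. It is not evaluagent's actual implementation; score_line_item and the delay values are assumptions for illustration.

```python
import time

def score_with_retries(score_line_item, max_retries=3, base_delay=1.0):
    """Attempt automated scoring; retry up to `max_retries` times with exponential backoff.

    `score_line_item` stands in for whatever call performs the scoring.
    Returns the score, or None once all retries are exhausted, at which point
    the line item would be flagged for manual review.
    """
    for attempt in range(1 + max_retries):  # one initial attempt plus the retries
        try:
            return score_line_item()
        except Exception:
            if attempt == max_retries:
                return None  # all retries exhausted: flag for manual review
            time.sleep(base_delay * 2 ** attempt)  # e.g. 1s, 2s, 4s between attempts
```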

How do I change a score?

Depending on the quality of your prompts, guidelines, and the conversation itself, it's not uncommon for an evaluator to disagree with the automated score provided by SmartScore. As SmartScore is your co-pilot, you remain in control and changing a score is easy to do.

To change a score:

  1. Simply select a different outcome to override the automated score

  2. If you wish to reverse the override, simply select the original outcome to reinstate SmartScore's analysis and suggested improvement.
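In effect, the override behaves as a toggle between the automated outcome and a human-selected one. Below is a minimal sketch of that state logic, using hypothetical names (LineItemScore, select, effective_outcome) that are not part of the product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LineItemScore:
    smartscore_outcome: str          # the outcome SmartScore suggested
    override: Optional[str] = None   # a human-selected outcome, if any

    def select(self, outcome: str) -> None:
        # Re-selecting the original outcome reinstates SmartScore's analysis;
        # selecting anything else records a human override.
        self.override = None if outcome == self.smartscore_outcome else outcome

    @property
    def effective_outcome(self) -> str:
        return self.override or self.smartscore_outcome
```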

Why are some contacts unavailable for SmartScore?

Currently, there are three main scenarios where SmartScore will not be available for use:

  1. When an evaluation lacks a transcript of the conversation

  2. When the length of the conversation exceeds the limit currently supported by our Large Language Models (LLMs).

  3. When a conversation has already been scored using SmartScore.
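Taken together, these three conditions amount to a simple availability check. The sketch below merely restates them as code; the names and the token limit are illustrative assumptions, since the actual LLM context limit is not published.

```python
MAX_TRANSCRIPT_TOKENS = 100_000  # placeholder; the real LLM limit is not documented

def smartscore_available(has_transcript: bool,
                         transcript_tokens: int,
                         already_smartscored: bool) -> bool:
    """SmartScore is offered only when all three conditions hold."""
    return (has_transcript
            and transcript_tokens <= MAX_TRANSCRIPT_TOKENS
            and not already_smartscored)
```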

What happens post-evaluation?

Since SmartScore is your co-pilot, everything after the evaluation is published remains the same. Users without the "View SmartScore analysis" permission will see no evidence of its use within the evaluation process.
