How do I request a transcription settings change?
Required Feature Flags
The following feature flags and permissions are required to use this feature:
| Feature Flag | Description |
| --- | --- |
| Transcription | Available on contracts with transcription enabled. Settings are managed by evaluagent on your behalf. |
Required Permissions:
No customer-facing permission is required; transcription configuration is managed by evaluagent staff.
How transcription works at evaluagent
When voice recordings are imported from your contact centre platform, evaluagent transcribes them into searchable text with speaker labels and timestamps. You don't pick a vendor or wire up an external service — transcription is part of the platform.
Configuration of transcription is handled by evaluagent staff against each integration. That's deliberate: most settings need careful tuning against your audio, and getting them wrong is the difference between transcripts you can score against and transcripts that miss half the conversation.
This guide tells you what's configurable, how to request changes, and what to give us so the change is right first time.
Step 1: Decide what you want changed
These are the settings evaluagent can tune for your integrations. Pick the ones relevant to your request.
Turning transcription on or off for an integration
Whether transcription runs for a specific integration is set during provisioning. If you've added a new integration and want transcription enabled, or you need it disabled on one, ask us.
Language settings
| Setting | What it does |
| --- | --- |
| Language mode | Auto-detect, multi-language, or single language |
| Primary language | The default language when running in single mode |
| Output locale | Regional variant, for example en-GB, en-US, en-AU |
| Identify languages | The set of languages to detect when running in multi mode |
| Low confidence action | What happens when language can't be detected: fall back to the default, fail the job, or do nothing |
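The low confidence action is easiest to picture as a small decision rule. Here's a minimal Python sketch of that behaviour; the function name, the confidence threshold, and the action labels are our own illustrative assumptions, not evaluagent's internal implementation:

```python
def resolve_language(detected, confidence,
                     primary="en-GB", low_conf_action="fallback",
                     threshold=0.5):
    """Decide what to do when language detection isn't confident.

    Hypothetical sketch: the threshold and action names are
    illustrative, not real evaluagent settings values.
    """
    if confidence >= threshold:
        return detected                 # confident: keep the detected language
    if low_conf_action == "fallback":
        return primary                  # fall back to the primary language
    if low_conf_action == "fail":
        raise ValueError("language detection confidence too low")
    return detected                     # "do nothing": keep the low-confidence guess
```

So a confident detection passes through unchanged, while a low-confidence one is handled according to whichever action is configured.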
Speaker detection
Speaker detection works out who's talking at each point in the conversation. For stereo recordings this can use channel separation. For mono, the system uses phrase libraries and algorithmic diarisation.
| Setting | What it does |
| --- | --- |
| Detection mode | Phrase detection, agent speaks first, or customer speaks first |
| Assign unknown to | Default role (Agent, Customer, or Bot) for utterances that can't be confidently assigned |
| Agent / Customer / Bot phrases | Up to 100 phrases per role, max five words each, that indicate the speaker |
| Force diarisation | Apply diarisation to recordings that look stereo but actually have identical audio on both channels |
For multilingual call centres, phrase libraries need to be set up per language.
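As a rough picture of how phrase detection and the "assign unknown to" default fit together, here's a minimal Python sketch. The phrase lists and the substring matching are hypothetical illustrations, not evaluagent's actual algorithm:

```python
# Hypothetical example phrase libraries (the real ones are per-integration
# and per-language, with up to 100 phrases of max five words per role).
PHRASE_LIBRARIES = {
    "Agent": ["thank you for calling", "how can i help"],
    "Customer": ["i'm calling about", "i'd like to ask"],
}

def assign_role(utterance, libraries=PHRASE_LIBRARIES, default="Agent"):
    """Assign a speaker role by matching the utterance against
    per-role phrase libraries; fall back to the configured
    'assign unknown to' role when nothing matches."""
    text = utterance.lower()
    for role, phrases in libraries.items():
        if any(phrase in text for phrase in phrases):
            return role
    return default
```

This is why good opening-line phrases matter: an utterance that matches nothing simply lands on the default role.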
Word boosting
Up to 40 domain-specific terms the transcription service should prioritise. Useful for product names, industry jargon, and words that get consistently misheard. Send us a list with examples of where each term gets misheard if you have them.
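Before sending your list, it's worth deduplicating it and checking it against the 40-term limit. A small Python sketch for tidying the request (the function is our own helper for preparing the email, not an evaluagent API):

```python
MAX_BOOST_TERMS = 40  # documented limit for word-boost lists

def validate_boost_list(terms):
    """Strip, deduplicate, and sort a word-boost request list,
    raising if it exceeds the documented 40-term limit."""
    unique = sorted({t.strip() for t in terms if t.strip()})
    if len(unique) > MAX_BOOST_TERMS:
        raise ValueError(
            f"{len(unique)} terms exceeds the {MAX_BOOST_TERMS}-term limit"
        )
    return unique
```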
Processing options
| Setting | What it does |
| --- | --- |
| Enhanced transcription | Higher-accuracy transcription. Incurs additional charges. |
| Exclude system notes | Excludes internal notes (for example Intercom internal notes) from AI processing |
| Volume threshold | Filters audio below a minimum volume |
| Speaker sensitivity | How aggressively the system detects speaker changes |
| Prefer current speaker | Reduces erratic speaker switching on ambiguous utterances |
| Remove disfluencies | Strips filler words (um, uh) from transcripts. Off by default to preserve coaching detail. |
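To see what disfluency removal does to a transcript line, here's an illustrative Python sketch; the filler list and word-level matching are simplified assumptions, not the platform's actual logic:

```python
FILLERS = {"um", "uh", "erm", "hmm"}  # hypothetical filler list

def remove_disfluencies(text):
    """Drop standalone filler words from a transcript line,
    ignoring case and trailing punctuation."""
    kept = [word for word in text.split()
            if word.strip(",.").lower() not in FILLERS]
    return " ".join(kept)
```

The coaching trade-off is visible here: the filler words are exactly the hesitations a coach might want to discuss, which is why the setting is off by default.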
Sampling
Sampling controls whether every contact gets transcribed or only a sample, and how that sample is selected. It's set during provisioning; ask us if you want it reviewed.
| Setting | What it does |
| --- | --- |
| Sampling method | Random or filtered selection |
| Sampling percentage | What percentage of eligible contacts to transcribe |
| Minimum audio length | Skip recordings shorter than this |
| Lookback window | How far back to collect recordings from |
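How these settings combine can be sketched in a few lines of Python: filter by minimum length and lookback window, then take a random percentage of what's eligible. This is a minimal illustration of random sampling, not evaluagent's actual selection code:

```python
import random
from datetime import datetime, timedelta

def select_sample(recordings, pct=25, min_seconds=30,
                  lookback_days=7, now=None, seed=None):
    """Filter recordings by minimum audio length and lookback
    window, then randomly sample the configured percentage."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=lookback_days)
    eligible = [r for r in recordings
                if r["duration_s"] >= min_seconds
                and r["recorded_at"] >= cutoff]
    rng = random.Random(seed)           # seed only for reproducible demos
    k = round(len(eligible) * pct / 100)
    return rng.sample(eligible, k)
```

Note that the percentage applies to *eligible* contacts: recordings below the minimum length or outside the lookback window never enter the pool.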
Step 2: Raise the request
Email your evaluagent representative or support@evaluagent.com with:

- The integration the change applies to (for example, Genesys Cloud, Aircall)
- Which settings you want changed and what you want them set to
- Any context that helps us tune the change: example calls, common phrases, languages spoken
If you're requesting word boosting, include the terms. If you're changing speaker detection phrases, include typical opening lines for each role and language.
Step 3: Confirm the result
After we apply the change, transcripts ingested from that point will use the new settings. Older transcripts aren't reprocessed.
Listen to a handful of new transcripts and check:

- Speaker labels are landing on the right speaker
- Domain terms are coming through correctly if you requested word boosting
- Language is detected correctly if you changed language settings
If something's not right, reply to the same thread and we'll tune further.
What you can do yourself
Audio quality at source is on you. The cleaner the audio going in, the better the transcript coming out. The things to get right in your contact centre:
- Stereo recordings where your platform supports it: far better speaker separation than mono
- 16kHz or higher sample rate
- Clear phone lines, decent microphones, minimal background noise
evaluagent normalises audio to 16kHz during processing, so anything above that doesn't add accuracy, but anything below it loses detail.
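You can check a recording against these guidelines yourself before raising a request. A minimal Python sketch using the standard-library wave module (so it handles WAV files only; the function name is our own):

```python
import wave

def check_recording(path):
    """Check a WAV file against the guidance above:
    stereo channels and a 16 kHz-or-higher sample rate."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        channels = w.getnchannels()
    return {
        "sample_rate": rate,
        "stereo": channels == 2,
        "meets_16khz": rate >= 16000,
    }
```

If `meets_16khz` comes back False, re-export or re-record at a higher rate; upsampling an 8 kHz file afterwards won't recover the lost detail.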
Why can't I see transcription settings in my menu?
You won't see a settings page for transcription. The configuration sits in an internal admin tool that customers don't have access to. Everything goes via your CSM or support@evaluagent.com.
Related guides
- Language Support: which languages are supported across transcription, AI, and analytics
