SDK - Krisp Blog (https://krisp.ai/blog/category/enterprise/sdk/)

Audio-Only Turn-Taking Model v2
https://krisp.ai/blog/krisp-turn-taking-v2-voice-ai-viva-sdk/
Mon, 27 Oct 2025 13:19:26 +0000

The post Audio-Only Turn-Taking Model v2 appeared first on Krisp.

Introducing Krisp’s Turn-Taking v2

We’ve already discussed the challenges of turn-taking in conversational AI in this blog post.
Now, we’re excited to announce our newest Turn-Taking model, available as part of Krisp’s VIVA SDK.

In this article, we’ll walk through the technology behind the new model and share our latest testing results. The new generation of models is more streamlined than ever—making it simple to integrate Voice Isolation, Turn-Taking, and VAD into your Voice AI pipelines.

If you’d like to see how Krisp’s VIVA SDK can enhance your Voice AI agent experience, apply now from our Developers page.


How the New Model Works

Our latest model predicts End-of-Turns using only audio input—perfect for real-time conversational systems like human-bot interactions.

Compared to v1, krisp-viva-tt-v2 represents a major step forward. It was trained on a more diverse and better-structured dataset, with richer data augmentations that help the model perform more reliably in real-world conditions.


Key Improvements in v2

  • Greater robustness in noisy environments
  • Higher accuracy when paired with Krisp’s Voice Isolation models
  • Faster and more stable turn detection in live conversations

Testing Results

Testing on Clean Audio

We evaluated both model versions on ~1800 audio samples from real conversations, including ~1000 “hold” cases and ~800 “shift” cases, with mild background noise.

Although the numerical difference between versions is small on this clean dataset, the results show that v2 achieves a faster mean shift prediction time at the same false positive rate.

Model              Balanced Accuracy   AUC     F1 Score
krisp-viva-tt-v1   0.82                0.89    0.804
krisp-viva-tt-v2   0.823               0.904   0.813

Mean shift time vs false positive rate for Krisp TT

Insight: Even in clean audio conditions, krisp-viva-tt-v2 offers slightly better prediction stability and overall performance.


Testing on Noisy Audio

Next, we evaluated the models on noisy audio mixes at 5 dB, 10 dB, and 15 dB noise levels. Two scenarios were tested:

  1. Directly on the noisy dataset
  2. On the same dataset after processing through the Krisp VIVA Voice Isolation model
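Noisy mixes like those used in this evaluation are typically created by scaling a noise signal so that the speech-to-noise ratio hits a target level in dB. A minimal sketch (illustrative only, not Krisp's actual test tooling; `mix_at_snr` is a hypothetical helper name):

```python
import math

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`,
    then add it to `speech`. Inputs are equal-length lists of float samples."""
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(n * n for n in noise) / len(noise)
    # Required noise power for the target SNR: SNR_dB = 10 * log10(Ps / Pn)
    target_noise_power = p_speech / (10 ** (snr_db / 10))
    gain = math.sqrt(target_noise_power / p_noise)
    return [s + gain * n for s, n in zip(speech, noise)]
```

For example, mixing at 5 dB, 10 dB, and 15 dB simply means calling this with `snr_db` set to each level against the same clean samples.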

In both scenarios, krisp-viva-tt-v2 consistently outperformed v1.

Model              Balanced Accuracy   AUC     F1 Score
krisp-viva-tt-v1   0.723               0.799   0.71
krisp-viva-tt-v2   0.768               0.842   0.757

Performance comparison on noisy datasets

Insight: krisp-viva-tt-v2 delivers up to a 6% improvement in F1 score under noisy conditions, demonstrating greater resilience in real-world environments.


Testing After Noise and Voice Removal

Finally, we tested both models on the same noisy dataset after applying background noise and voice removal with the krisp-viva-tel-v2 model.

Model              Balanced Accuracy   AUC     F1 Score
krisp-viva-tt-v1   0.787               0.854   0.775
krisp-viva-tt-v2   0.816               0.885   0.808

Performance after noise removal

Insight: When combined with Krisp’s Voice Isolation technology, v2 achieves even greater accuracy and stability.


Conclusion

The new krisp-viva-tt-v2 model marks a significant leap forward in real-time conversation handling for Voice AI. With improved robustness against noise and smoother integration with Krisp’s other models, developers can now build faster, smarter, and more natural-sounding conversational agents.

Explore the VIVA SDK today and see how Krisp’s advanced models can elevate your Voice AI experience.

Audio-only, 6M weights Turn-Taking model for Voice AI Agents
https://krisp.ai/blog/turn-taking-for-voice-ai/
Mon, 04 Aug 2025 23:20:04 +0000

The post Audio-only, 6M weights Turn-Taking model for Voice AI Agents appeared first on Krisp.

In this article we discuss an outstanding problem in today's Voice AI Agents – turn-taking. We examine why it is a hard problem and present a solution in Krisp's VIVA SDK. We also benchmark the Krisp solution against some of the established solutions in the market.

Note: The Turn-Taking model is included in the VIVA SDK offering at no additional charge.

What is turn-taking?

Turn-taking is the fundamental mechanism by which participants in a conversation coordinate who speaks when. While seemingly effortless in human interaction, modeling this process computationally for human-to-AI-agent conversations is highly complex. In the context of Voice AI Agents (including voice assistants, customer support bots, and AI meeting agents), turn-taking determines when the agent should speak, listen, or remain silent.

Without effective turn-taking, even the most advanced dialogue systems can come across as unnatural, unresponsive, and frustrating to use. A precise and lightweight turn-taking model enables natural, seamless conversations by minimizing interruptions and awkward pauses while adapting in real time to human cues such as hesitations, prosody, and pauses.

In general, turn-taking includes the following tasks:

  • End-of-turn prediction – predicting when the current speaker is likely to finish their turn
  • Backchannel prediction – detecting moments where a listener may provide short verbal acknowledgments like “uh-huh”, “yeah”, etc. to show engagement, without intending to take over the speaking turn.

In this article, we present our first audio-based turn-taking model, which focuses on the end-of-turn prediction task using only audio input. We chose to release the audio-based turn-taking model first, as it enables faster response times and a lightweight solution compared to text-based models, which usually require large architectures and depend on the availability of a streamable ASR providing real-time, accurate transcriptions.

Approaches to Turn-Taking

Solutions to the turn-taking problem are usually implemented as AI models that operate on audio and/or text representations of the conversation.

1. Audio-based

Audio-based approaches rely on analyzing acoustic and prosodic features of speech, including changes in pitch, energy levels, intonation, pauses, and speaking rate. By detecting silence or overlapping speech, the system predicts when the user has finished speaking and when it is safe to respond. For example, a sudden drop in energy followed by a pause can be interpreted as a turn-ending cue. Such models are effective in real-time, low-latency scenarios where immediate response timing is critical.

2. Text-based

Text-based solutions analyze the transcribed content of speech rather than the raw audio. These models detect linguistic cues that indicate turn completion, such as sentence boundaries, punctuation, discourse markers (e.g., “so,” “anyway”), natural language patterns or semantics (e.g., user might directly ask the bot not to speak). Text-based systems are often integrated with dialogue state tracking and natural language processing (NLP) modules, making them effective for scenarios where accurate semantic interpretation of user intent is essential. However, they may require larger neural network architectures to effectively analyze the linguistic content.

3. Audio-Text Multimodal (Fusion)

Multimodal solutions combine both acoustic and textual inputs, leveraging the strengths of each. While audio-based methods capture real-time prosodic cues, text-based analysis provides deeper semantic understanding. By integrating both modalities, fusion models can make accurate and context-aware predictions of turn boundaries. These systems are effective in complex, multi-turn conversations where relying on either audio or text alone might lead to errors in timing or intent detection.

Challenges of turn-taking

Hesitation and filler words

In natural dialogue, speakers often take a pause using fillers like “um” or “you know” without intending to give up their turn. For instance:

“I think we should, um, maybe –” [The agent jumps in, assuming the sentence is over]

Here, a turn-taking system must distinguish hesitation from completion, or risk interrupting too early.

Natural pauses vs. true end-of-turns

Pauses are not always indicators that a speaker has finished. For example:

“Yesterday I woke up early, then… [pause] I went to work…”

A model might misinterpret the pause as a turn boundary, generating a premature response and breaking the conversational flow.

Quick turn prediction

Minimizing response latency is essential for maintaining natural conversational flow. Humans tend to respond quickly, sometimes even reactively, when the end of the speech is obvious. If a model fails to predict the turn boundary fast enough, the system may sound sluggish or unnatural. The challenge is to trigger responses at just the right moment – early enough to sound fluid, but not so early that it risks interrupting the speaker.

Varying speaking styles and accents

People speak in diverse rhythms, intonations, and speeds. A fast speaker with sharp pitch drops might appear to end a sentence even when they haven’t. Conversely, a slow, melodic speaker may stretch syllables in ways that confuse timing-based systems. Modeling these variations effectively requires a neural network–based approach.

Krisp’s audio-based Turn-Taking model

Krisp recently released AI models for effective noise cancellation and voice isolation in Voice AI Agent use-cases, particularly reducing premature turn-taking caused by background noise (see that post for details). This technology is widely deployed and has recently passed a 1B mins/month milestone.

It was only natural for us to take on the larger problem of turn-taking (TT). In this first iteration, we designed a lightweight, low-latency, audio-based turn-taking model optimized to run efficiently on a CPU. The Krisp TT model is built into Krisp's VIVA SDK, where the Python SDK lets you easily chain it with the Voice Isolation models, placing it in front of a voice agent to create a complete, end-to-end conversational flow, as shown in the following diagram.
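Conceptually, the chaining works like this. The sketch below is illustrative only: the function names (`voice_isolation`, `turn_taking`, `on_end_of_turn`) are hypothetical stand-ins, not the actual VIVA SDK API.

```python
def run_pipeline(frames, voice_isolation, turn_taking, on_end_of_turn):
    """Feed each captured audio frame through voice isolation, then the
    turn-taking model; fire the agent callback once a shift is predicted."""
    for frame in frames:
        clean = voice_isolation(frame)         # denoised frame
        shift_confidence = turn_taking(clean)  # 0..1 score per frame
        if shift_confidence > 0.7:             # illustrative threshold
            on_end_of_turn()                   # hand the turn to the agent
            break
```

The key design point is that the TT model sees denoised audio, so background sounds are less likely to distort its shift predictions.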


Here, the TT model continuously outputs a confidence score (probability) ranging from 0 to 1, indicating the likelihood of a shift – a point where the speaker is expected to finish their turn. It operates on 100ms audio frames, assigning a shift confidence score to each frame. To convert this score into a binary decision, we apply a configurable threshold (Δ). If the score exceeds the threshold, we interpret it as a shift (end-of-turn) prediction; otherwise, the model considers that the current speaker is still holding the turn.

We also define a maximum hold duration, which defaults to 5 seconds. The model is designed such that, during uninterrupted silence, the confidence score gradually increases and reaches a value of 1 precisely at the end of this maximum hold period.
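The thresholding and maximum-hold behavior described above can be sketched as follows (the 0.7 threshold is an illustrative value, not a recommended setting):

```python
FRAME_MS = 100      # the model scores one 100 ms frame at a time
MAX_HOLD_MS = 5000  # default maximum hold duration

def detect_shift(scores, threshold=0.7):
    """Return the time (ms) at which an end-of-turn is declared, given
    per-frame shift confidence scores, or None if the turn is still held.
    Because the score ramps to 1.0 at MAX_HOLD_MS of silence, a shift is
    always declared by then regardless of the threshold."""
    for i, score in enumerate(scores):
        elapsed_ms = (i + 1) * FRAME_MS
        if score >= threshold or elapsed_ms >= MAX_HOLD_MS:
            return elapsed_ms
    return None
```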

Comparison with other Turn-Taking models

Let’s take a closer look at how other solutions handle the turn-taking problem in comparison to Krisp.

Simple VAD (Voice Activity Detection)

The basic VAD-based approach is as straightforward as it gets: if you take a pause in your speech, you have probably finished your turn. Technically, once a few (usually configurable) seconds of silence are detected, the system assumes the speaker has finished and hands over the turn. While efficient, this method lacks awareness of conversational context and often struggles with natural pauses or hesitant speech. In our comparisons, we use the Silero-VAD model with a 1-second silence detection window as the simple VAD-based turn-taking approach.
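A silence-window turn-taker of this kind reduces to a counter over per-frame VAD decisions; a minimal sketch (frame size and window are illustrative parameters):

```python
def vad_turn_taker(vad_flags, frame_ms=30, silence_window_ms=1000):
    """Naive VAD-based end-of-turn: declare a shift once `silence_window_ms`
    of consecutive non-speech frames is observed. `vad_flags` is a sequence
    of booleans (True = speech detected in that frame)."""
    silence_ms = 0
    for i, is_speech in enumerate(vad_flags):
        silence_ms = 0 if is_speech else silence_ms + frame_ms
        if silence_ms >= silence_window_ms:
            return (i + 1) * frame_ms  # time at which the turn is handed over
    return None
```

Note how any brief speech resets the counter, which is exactly why hesitant speakers with mid-sentence pauses defeat this approach.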

SmartTurn

SmartTurn v1 and SmartTurn v2 by Pipecat are open-source AI models designed to detect exactly when a speaker has finished their turn. We picked them for in-depth comparison because, like Krisp TT, they are audio-based models.

Interestingly, SmartTurn models introduce a hybrid strategy. They first wait for 200ms of silence detected by Silero VAD, then evaluate whether a turn shift should occur. If the confidence is too low to switch, the system defers the decision. However, if silence persists for 3 seconds (default value, configurable parameter in SmartTurn), it forcefully initiates the turn transition. This layered approach aims to strike a balance between speed and caution in handling user pauses.
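The layered decision just described can be sketched as a single function. This is our reading of the strategy, not SmartTurn's actual code; the 0.5 confidence threshold is an assumed illustrative value.

```python
def hybrid_end_of_turn(silence_ms, model_confidence,
                       min_silence_ms=200, force_after_ms=3000, threshold=0.5):
    """Layered decision in the spirit of SmartTurn: wait for a short silence
    before consulting the model; defer on low confidence; force the turn
    transition once silence exceeds the (configurable) timeout."""
    if silence_ms < min_silence_ms:
        return False                     # too early to judge
    if model_confidence >= threshold:
        return True                      # model is confident the turn ended
    return silence_ms >= force_after_ms  # otherwise only a long silence decides
```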

Tested Models

The following table gives a high-level comparison between the contenders:

Attribute                 Krisp TT   SmartTurn v1   SmartTurn v2   VAD-based TT
Model parameter count     6.1M       581M           95M            260k
Model size                65 MB      2.3 GB         360 MB         2.3 MB
Recommended execution     On CPU     On GPU         On GPU         On CPU
Overall accuracy          Good       Good           Good           Poor

Test Dataset

The test dataset was built from real conversational recordings with manually labeled turn-taking and hold scenarios. A turn-taking instance – which we will call a shift – marks a point where one speaker hands over the conversation, while a hold scenario captures cases where the speaker continues after a brief pause, filler words, or unfinished context.

The dataset consists of 1,875 labeled audio samples, including a significant number of labeled shift and hold scenarios. Each audio file is annotated to include the silence at the end of a speaker’s segment – either resulting in a turn shift or a hold. The test data was annotated according to multiple criteria, including context, intonation, filler words (e.g., “um,” “am”), keywords (e.g., “but,” “and”), and breathing patterns.

Below are the statistics on silence duration for each scenario type as well as the distribution of shift and hold cases based on mentioned criteria.


Training Dataset

Our training dataset comprises approximately 2,000 hours of conversational speech, containing around 700,000 speaker turns.

Evaluation: Prediction Quality Metrics

To assess the performance of the turn-taking model, we used a combination of classification metrics and timing-based analysis:

Metric   Description
TP       True Positives: correctly predicted positive-class cases
TN       True Negatives: correctly predicted negative-class cases
FP       False Positives: incorrectly predicted positive-class cases
FN       False Negatives: missed positive-class cases

Metric              Formula                                           Description
Precision           TP / (TP + FP)                                    Proportion of predicted positives that are actually positive
Recall              TP / (TP + FN)                                    Proportion of actual positives correctly predicted
Specificity         TN / (TN + FP)                                    Proportion of actual negatives correctly predicted
Balanced Accuracy   (Recall + Specificity) / 2                        Average performance across both classes (positive and negative)
F1 Score            2 × (Precision × Recall) / (Precision + Recall)   Harmonic mean of Precision and Recall; balances false positives and false negatives
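These formulas translate directly into code; a small sketch for computing all of them from raw confusion-matrix counts:

```python
def classification_metrics(tp, tn, fp, fn):
    """Derive the standard metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "precision": precision,
        "recall": recall,
        "specificity": specificity,
        "balanced_accuracy": (recall + specificity) / 2,
        "f1": 2 * precision * recall / (precision + recall),
    }
```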

AUC: The AUC is the area under the ROC (receiver operating characteristic) curve, which shows the trade-off between the true positive rate and the false positive rate as the decision threshold is varied. A higher AUC value indicates better classification performance. For more details on AUC and other metrics, read here.
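AUC has an equivalent rank-based interpretation that is easy to compute directly: it is the probability that a randomly chosen positive example scores higher than a randomly chosen negative one (ties counted as half). A minimal sketch:

```python
def auc(scores_pos, scores_neg):
    """AUC via the rank statistic: P(score of a positive > score of a
    negative), with ties contributing 0.5. O(n*m), fine for small sets."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```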

Evaluation: Latency vs. Accuracy tradeoff (MST vs FPR)

We realized that there is a natural tradeoff between accuracy and latency, i.e., how quickly the system detects a true shift. We can reduce latency by lowering the threshold; however, this will likely increase the false-positive rate (FPR) and cause unwanted interruptions. On the other hand, we don't want to wait too long to predict a shift, because the increased latency results in awkward interaction (see the chart below).


Therefore, the latency-to-accuracy relationship is important, and here we measure a TT system's latency by mean shift time (MST). Shift time is defined as the duration between the onset of silence and the moment the end-of-turn (shift) is predicted. If the model outputs a confidence score, the end-of-turn prediction can be controlled via a threshold. This makes the threshold an important control lever in the trade-off between reaction speed and prediction accuracy:

  • Higher thresholds result in delayed shift predictions, which help reduce false positives (i.e., shift detections during the current speaker hold period which leads to interruption from the bot). However, this increases the mean shift time, making the system slower to respond.
  • Lower thresholds lead to faster responses, decreasing mean shift time, but at the cost of increased false positives, potentially causing the bot to interrupt speakers prematurely.

To visualize this trade-off and provide a comparative summary of the models, we plot a chart showing the relationship between mean shift time (calculated on end-of-speech cases) and false positive (interruption) rate as the threshold varies from 0 to 1. A lower curve indicates a faster mean response time for the same interruption rate – or, from another perspective, fewer interruptions for the same mean response time. Here you can see the corresponding plots for Krisp TT, SmartTurn v1, and SmartTurn v2. Note that we can't directly plot such a chart for the VAD-based TT: MST vs FPR requires a model that outputs a confidence score, whereas the VAD-based model produces binary outputs (0 or 1). The same limitation applies to the AUC-shift values in the evaluation results table.

This basically means that the Krisp TT model reaches a true-positive decision with a considerably faster average response time than SmartTurn (0.9 vs. 1.3 seconds at a 0.06 FPR).

To summarize the overall latency-accuracy tradeoff, we also compute the area under the MST vs FPR curve. This single scalar score captures the model’s ability to respond quickly while minimizing interruptions across different thresholds. A lower area indicates better performance.

Evaluation Results

Model          Balanced Accuracy   AUC Shift   F1 Score Shift   F1 Score Hold   AUC (MST vs FPR)
Krisp TT       0.82                0.89        0.80             0.83            0.21
VAD-based TT   0.59                –           0.48             0.70            –
SmartTurn V1   0.78                0.86        0.73             0.84            0.39
SmartTurn V2   0.78                0.83        0.76             0.78            0.44

(AUC values cannot be computed for the VAD-based TT, which produces binary rather than confidence-score outputs.)

💡 It’s important to note that the Krisp TT model delivers comparable quality in terms of predictive quality metrics and significantly better quality in terms of latency vs accuracy tradeoff while being 5-10x smaller and optimized to run efficiently on a CPU. The VAD-based turn-taking approach is more lightweight, but it performs significantly worse than dedicated TT models – highlighting the importance of modeling the complex relationships between speech structure, acoustic features, and turn-taking behavior.

Demo

Here’s a simple dialogue showing how Krisp’s Turn-Taking model works in practice. In the demo, you’ll hear intentional utterances, pauses, filler words and interruptions. The response time you observe includes the Turn-Taking model’s speed, plus the latency from the speech-to-text (STT) system and the language model (LLM).

Krisp’s Turn-Taking Model

Krisp’s TT model vs Pipecat’s SmartTurn V2

This demo compares Krisp’s Turn-Taking model with Pipecat’s SmartTurn model (3-second default value, configurable parameter in SmartTurn). To highlight the differences visually, we’ve also overlaid a speech-to-text transcript on the video.

Future Plans

Improved Accuracy in TT

While this initial, audio-based TT model provides balanced accuracy and latency, it is mainly limited to analyzing prosodic and acoustic features, such as changes in intonation, pitch and rhythm. By analyzing linguistic features like the syntactic completion of a sentence we can further improve the accuracy of the TT model.

We plan to build the following features as well:

  • Text-based Turn-Taking: This model will use text only input and predict end-of-turn with a custom Neural Network trained for this use case.
  • Audio-Text Multimodal (Fusion): This model will use both audio and text inputs to leverage the best from these two modalities and give the highest accuracy end-of-turn prediction.

Early prototypes show promising results, with the multimodal approach outperforming the audio-based turn-taking models noticeably.

Backchannel support

Backchannel detection is another challenge encountered during the development of Voice AI agents. A "backchannel" is a secondary, parallel form of communication that occurs alongside the primary conversation: the short responses a listener gives to a speaker to indicate they are paying attention, without taking over the main speaking role.

While interacting with an AI agent, in some cases the user may genuinely want to interrupt – to ask a question or shift the conversation. In others, they might simply be using backchannel cues like "right" or "okay" to signal that they're actively listening. The core challenge lies in distinguishing meaningful interruptions from casual acknowledgments.

Our roadmap includes the release of a reliable dedicated backchannel prediction model.

Improving Turn-Taking of AI Voice Agents with Background Noise and Voice Cancellation
https://krisp.ai/blog/improving-turn-taking-of-ai-voice-agents-with-background-voice-cancellation/
Mon, 24 Mar 2025 17:08:31 +0000

The post Improving Turn-Taking of AI Voice Agents with Background Noise and Voice Cancellation appeared first on Krisp.

Turn-Taking is a big challenge

AI Voice Agents are rapidly evolving, powering critical use-cases such as customer support automation, virtual assistants, gaming, and remote collaboration platforms. For these voice-driven interactions to feel natural and practical, the underlying audio pipeline must be resilient to noise, responsive, and accurate—especially in real-time scenarios.


In a typical deployment, audio streams originate from diverse endpoints like mobile applications, web browsers, or traditional telephony and are delivered via real-time communication protocols like WebRTC or WebSockets (WSS). This audio is aggregated and managed through specialized providers like LiveKit, Daily, or Agora, which ensure reliable, low-latency audio transport to the server-side pipeline.


Within the server pipeline, once the audio arrives, it undergoes optional preprocessing steps for formatting or basic adjustments, after which it moves directly into a Voice Activity Detection (VAD) module.

VAD identifies active speech segments, driving automatic end-pointing and intelligent interruption handling. When VAD detects silence after user speech, relevant API events trigger downstream Voice AI models to generate and deliver responses. If the user resumes speaking during the voice bot's response generation, the pipeline seamlessly cancels the ongoing output and clears buffers, ensuring natural conversational turn-taking.


In this scenario, background noises—such as music, traffic sounds, TVs, or nearby conversations—remain embedded within the audio stream, reaching the VAD module unfiltered. Because VAD is designed to detect human speech activity, these background sounds often cause false-positive speech detections. As a result, the VAD mistakenly interprets noise or background voices as active user speech, triggering unintended interruptions. These false triggers negatively impact turn-taking, a core component of natural, human-like conversational interactions.


Here, by placing Krisp Background Voice and Noise Cancellation before the VAD, the pipeline substantially reduces false-positive triggers and prevents interruptions from common background distractions.

Additionally, Krisp significantly improves downstream speech processing accuracy by delivering cleaner audio.

Introducing Krisp Server SDK for AI Voice Agents

We’re excited to announce the launch of Krisp Server SDK, featuring two advanced AI models engineered explicitly for superior noise cancellation for AI Voice Agents.


Compared to our on-device AI models, these models are optimized to deliver unmatched performance and voice quality, especially in challenging corner cases.


Both models remove background noise, chatter, and secondary voices, ensuring the retention and clarity of only the primary speaker’s voice.

  1. BVC-tel (General-Purpose Model):
    • Designed as a robust, versatile solution ideal for a wide variety of audio sources, including WebRTC, mobile, and traditional telephony inputs.
    • Specifically engineered to be highly resilient against audio artifacts introduced by common telephony codecs, such as the G711 codec, widely used in telecommunication networks.
    • Supports audio sampling rates up to 16 kHz, which is optimal for AI Voice Agents as it effectively captures the essential frequency ranges of human speech.
  2. BVC-app (High-Fidelity Model):
    • Specifically optimized for WebRTC use-cases where high-quality audio streams are required.
    • Supports higher sampling rates up to 32 kHz, enabling clearer, more natural-sounding voice interactions suitable for applications with superior audio fidelity.

    ℹ If the incoming audio source has a sampling rate higher than the model’s supported rate (e.g., 48 kHz), the SDK intelligently manages the audio processing by automatically downsampling to the model’s working rate, applying the noise cancellation and then seamlessly upsampling back to the original audio quality.
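The downsample → process → upsample flow in that note can be sketched as follows. This is purely illustrative: naive linear interpolation stands in for the SDK's internal resampler, and `model_fn` is a hypothetical stand-in for the noise-cancellation model.

```python
def process_with_model(samples, input_rate, model_rate, model_fn):
    """Downsample to the model's working rate, apply processing, then
    upsample back to the original rate (naive linear interpolation)."""
    def resample(x, src, dst):
        if src == dst:
            return list(x)
        n_out = int(len(x) * dst / src)
        out = []
        for i in range(n_out):
            pos = i * src / dst           # fractional source index
            lo = int(pos)
            hi = min(lo + 1, len(x) - 1)
            frac = pos - lo
            out.append(x[lo] * (1 - frac) + x[hi] * frac)
        return out

    down = resample(samples, input_rate, model_rate)
    cleaned = model_fn(down)              # noise cancellation at the model rate
    return resample(cleaned, model_rate, input_rate)
```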

Despite significant quality enhancements, server-side models maintain a low algorithmic latency of just 15 milliseconds, identical to our on-device models. This ensures real-time responsiveness, which is critical for conversational interactions.


The new Krisp Server SDK models are CPU-optimized and support a range of platforms, including:

  • Linux (x64 and ARM64 architectures)
  • Windows (x64) with ARM64 support coming soon.

Quantifying the Krisp BVC Impact

We comprehensively evaluated how the new Background Voice and Noise Cancellation (BVC) model improves turn-taking accuracy and speech recognition quality.

Using the BVC-tel model, we specifically tested two distinct audio pipeline scenarios:

  1. BVC-VAD-STT: Krisp BVC-processed audio is used for VAD and is also passed downstream to the AI Voice Agent (STT).
  2. BVC-VAD only: The original (unprocessed) audio is passed downstream to the AI Voice Agent, with Krisp BVC-processed audio used solely to improve VAD accuracy.


The following graphics and audio examples show a typical case: Krisp BVC effectively canceling background TV speech during interaction with the AI Voice Agent.


The red-circled areas represent the TV speech. The green-circled areas represent the primary speaker’s speech.

Turn-taking with VAD only: TV speech passes through VAD, potentially interrupting the AI Voice Agent during its response.

Turn-taking with BVC-VAD: the TV speech is removed by Krisp BVC before reaching VAD, so it no longer triggers interruptions.

[Spectrograms and audio samples: original audio, audio after VAD processing only, audio after BVC processing, audio after BVC + VAD processing]

In the following sections, we perform more comprehensive evaluations to capture and quantify improvements in turn-taking and WER improvements in STT.


Evaluation Setup:

  • Dataset: We selected the widely-used AMI corpus, specifically the individual headset recordings. This dataset is ideal due to its realistic mix of background conversations and noise, which is representative of many typical mobile and telephony scenarios.
  • Voice Activity Detection: Latest version of open-source SileroVAD
  • Speech-To-Text Models: Whisper V3 (base version). In our tests, the difference between the base and large versions was insignificant, so we present only the base model results.

Impact on Turn-Taking

Applying Krisp BVC upstream had a clear, positive impact on VAD precision within the AMI dataset—especially in reducing false-positive speech detections. Lower false positives are particularly critical for ensuring smooth, uninterrupted conversational experiences.

Our tests show that with Krisp BVC, false-positive triggers in VAD were reduced by 3.5x on average. This means the AI Voice Agent is significantly less likely to experience unintended interruptions caused by background speech or noise. Overall, the precision after Krisp BVC increases by over a quarter—a major improvement.

Impact on Speech Recognition Accuracy (WER)

Using Krisp BVC also markedly reduces the Word Error Rate (WER) of Whisper V3 models on the AMI dataset—achieving more than a 2x improvement. This result aligns with expectations, given Krisp’s effectiveness in eliminating distracting background speech.
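For reference, WER is the fraction of reference words that would have to be substituted, deleted, or inserted to turn the hypothesis into the reference; it reduces to a word-level edit distance. A minimal sketch:

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + deletions + insertions) divided by
    the reference length, via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[-1][-1] / len(ref)
```

A "2x improvement" here means the WER computed this way roughly halves after BVC processing.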

Interestingly, the WER improvements were consistent in both BVC-VAD and BVC-VAD-STT modes.


To further explore this, we evaluated an additional dataset with minimal background speech: the ITU-T P.501 dataset, which mixes single-speaker audio with 24 different noise types at three intensity levels (0 dB, 5 dB, 10 dB).


Modern STT models, including Whisper, generally have strong built-in noise robustness. We aimed to measure any further WER improvements achievable by applying Krisp BVC upstream.


Indeed, the WER metric was generally much lower in this case compared to the AMI dataset.


In the BVC-VAD mode, where Whisper operated on original audio while leveraging Krisp BVC-processed audio for enhanced VAD, we observed an 18% improvement in WER.


Conversely, in the BVC-VAD-STT mode — where Whisper processed Krisp-modified audio—the WER increased by about 2x, although the absolute WER number is still relatively low. This increase is attributed to Whisper never encountering Krisp NC-processed audio during its training, which could cause suboptimal performance for such modified audio.


💡Note that WER% results in BVC-VAD-STT mode could be very different on other datasets and STT engines. We recommend experimenting with both BVC-VAD and BVC-VAD-STT modes to determine the optimal audio pipeline setup for you.


Overall, these evaluations demonstrate that incorporating Krisp BVC into AI Voice Agent pipelines substantially improves turn-taking and speech recognition quality, especially in real-world scenarios where background noise and secondary conversations are prevalent.

Krisp and Fixie Bring AI Noise Cancellation to Ultravox to Improve Bot-to-Human Communication https://krisp.ai/blog/krisp-and-fixie-partner-to-bring-unparalleled-audio-for-voice-ai-conversations-to-ultravox/ https://krisp.ai/blog/krisp-and-fixie-partner-to-bring-unparalleled-audio-for-voice-ai-conversations-to-ultravox/#respond Fri, 20 Dec 2024 14:56:32 +0000 https://krisp.ai/blog/?p=19032 Berkeley, CA, December 20, 2024–Krisp, the leader in AI-powered voice productivity software, has partnered with Fixie.ai to integrate Krisp’s AI Noise Cancellation into Ultravox Realtime Voice AI managed service. This integration enhances Ultravox’s real-time, text-free voice processing by ensuring clear, natural conversations—even in noisy environments.   Ultravox Realtime is a managed service that builds on […]

The post Krisp and Fixie Bring AI Noise Cancellation to Ultravox to Improve Bot-to-Human Communication appeared first on Krisp.

]]>
Berkeley, CA, December 20, 2024–Krisp, the leader in AI-powered voice productivity software, has partnered with Fixie.ai to integrate Krisp’s AI Noise Cancellation into Ultravox Realtime Voice AI managed service. This integration enhances Ultravox’s real-time, text-free voice processing by ensuring clear, natural conversations—even in noisy environments.

 

Ultravox Realtime is a managed service that builds on open-source foundations to deliver real-time AI voice conversations in applications. It includes built-in support for telephony, voice assistants, and other key tools required for advanced voice agents. Unlike most voice AI systems, Ultravox processes speech directly as embeddings, bypassing traditional automatic speech recognition (ASR) to enable more natural, fluent AI interactions.

 

“Our unique approach to voice AI is all anchored on delivering unmatched quality,” said Zach Koch, Founder and CEO of Ultravox. “Integrating Krisp eliminates one of the biggest challenges—unwanted interruptions—ensuring seamless, interruption-free conversations.”

 

“Voice AI should feel natural and effortless to customers,” said Robert Schoenfield, EVP of Licensing and Partnerships at Krisp. “Ultravox is redefining conversational voice AI, and we’re excited to help power their journey toward seamless interactions.”

 

About Krisp

 

Founded in 2017, Krisp pioneered the world’s first AI-powered Voice Productivity software. Krisp’s Voice AI technology enhances digital voice communication through audio cleansing, noise cancelation, accent localization, and call transcription and summarization. Offering full privacy, Krisp works on-device, across all audio hardware configurations and applications that support digital voice communication. Today, Krisp processes over 75 billion minutes of voice conversations every month, eliminating background noise, echoes, and voices in real-time, helping individuals and businesses harness the power of voice to unlock higher productivity and deliver better business outcomes.

 

About Fixie.ai

 

Founded in 2022, Fixie.ai is building the next generation of real-time AI Voice Agents that can communicate as naturally as humans do. Through Ultravox, its open-source speech language model, and Ultravox Realtime, a comprehensive platform for building and scaling AI Voice Agents, Fixie.ai is tackling the messy realities of natural conversation – from interruptions to group dynamics and emotional understanding. As AI moves beyond simple text interactions, Fixie.ai is creating the technology stack needed to enable AI systems that can fully participate in the rapid, fluid exchanges that drive human progress.

 


]]>
https://krisp.ai/blog/krisp-and-fixie-partner-to-bring-unparalleled-audio-for-voice-ai-conversations-to-ultravox/feed/ 0
Krisp and Vodex Partner to Perfect GenAI-Powered Voicebot Calls for High-Quality Lead Qualification https://krisp.ai/blog/krisp-and-vodex-partnership/ https://krisp.ai/blog/krisp-and-vodex-partnership/#respond Mon, 22 Jul 2024 15:09:13 +0000 https://krisp.ai/blog/?p=13409 BERKELEY, Calif., July 22, 2024–Krisp, the world’s leading AI-powered voice productivity software, today announced a new integration with Vodex’s AI-powered voicebots, which specialize in generating qualified leads to help B2C businesses in North America connect exclusively with premium prospects. The partnership delivers Krisp’s advanced AI Noise Cancellation as part of their voicebot solution, ensuring that […]

The post Krisp and Vodex Partner to Perfect GenAI-Powered Voicebot Calls for High-Quality Lead Qualification appeared first on Krisp.

]]>
BERKELEY, Calif., July 22, 2024–Krisp, the world’s leading AI-powered voice productivity software, today announced a new integration with Vodex’s AI-powered voicebots, which specialize in generating qualified leads to help B2C businesses in North America connect exclusively with premium prospects. The partnership delivers Krisp’s advanced AI Noise Cancellation as part of the voicebot solution, ensuring that leads are clearly heard and understood by Vodex’s voicebot, even in difficult background noise conditions such as television, music, and busy offices.

 

Vodex harnesses the power of Generative AI to fully automate outbound calls, delivering a transformative communication experience. Unlike machine-sounding or robotic calls, Vodex speaks in a human-like voice, so every call feels friendly and genuine. It understands and recognizes speech, and now, with Krisp included, it achieves a higher rate of understanding.

 

“Vodex’s Gen AI-powered voicebots are experiencing tremendous growth in the North American market,” said Kumar Saurav, Co-founder & CTO of Vodex. “To ensure exceptional customer experiences as we scale, we’re excited to be integrating Krisp’s best-in-class noise cancellation technology. This partnership empowers Vodex to reach new heights in delivering clear and uninterrupted voice interactions.”

 

“GenAI voicebots are becoming core to many market-facing sales processes,” said Robert Schoenfield, EVP of Licensing and Partnerships at Krisp. “We are thrilled to be part of Vodex’s solution by leveraging our core Voice AI technology for human-to-voicebot communications.”

 

The integration is now live, enhancing audio quality for Vodex’s voicebots.

 

About Krisp

 

Founded in 2017, Krisp pioneered the world’s first AI-powered Voice Productivity software. Krisp’s Voice AI technology enhances digital voice communication through audio cleansing, noise cancelation, accent localization, and call transcription and summarization. Offering full privacy, Krisp works on-device, across all audio hardware configurations and applications that support digital voice communication. Today, Krisp processes over 75 billion minutes of voice conversations every month, eliminating background noise, echoes, and voices in real-time, helping individuals and businesses harness the power of voice to unlock higher productivity and deliver better business outcomes.

 

Learn more at www.krisp.ai

 

About Vodex

 

Vodex harnesses the power of Generative AI to fully automate outbound calls, delivering a transformative communication experience. With Vodex, you can expand your reach, generate qualified leads, schedule appointments, confirm bookings, share updates, connect with potential customers, and close deals faster without increasing your workforce.

 

https://vodex.ai 


]]>
https://krisp.ai/blog/krisp-and-vodex-partnership/feed/ 0
Krisp launches Accent Conversion SDK Early Access Program for Communications Providers https://krisp.ai/blog/krisp-launches-accent-conversion/ https://krisp.ai/blog/krisp-launches-accent-conversion/#respond Fri, 21 Jun 2024 11:03:53 +0000 https://krisp.ai/blog/?p=12734 BERKELEY, Calif., June 2, 2024 – Krisp, the world’s leading AI-powered voice productivity software, announced today an Early Access program for its breakthrough accent conversion technology. The Early Access program is available to select CCaaS partners focusing on delivering breakthrough Voice AI technology to clients and end customers.    Krisp’s Accent Conversion SDKs are available […]

The post Krisp launches Accent Conversion SDK Early Access Program for Communications Providers appeared first on Krisp.

]]>
BERKELEY, Calif., June 2, 2024 – Krisp, the world’s leading AI-powered voice productivity software, announced today an Early Access program for its breakthrough accent conversion technology. The Early Access program is available to select CCaaS partners focusing on delivering breakthrough Voice AI technology to clients and end customers. 

 

Krisp’s Accent Conversion SDKs are available initially for Windows OS and will become available for browser applications via WASM JS later this year. AI Accent Conversion dynamically adjusts a contact center agent’s accent to the accent natively understood by the customer calling them. Krisp’s AI Accent Conversion initially supports agents in India and the Philippines, whose customer markets are predominantly US-based. Agents and administrators can select one of five output voices for both male and female agents. Like all other Krisp technologies, AI Accent Conversion processes voice exclusively on-device, minimizing latency and maximizing quality while preserving privacy and security.

 

“A number of our leading AI Noise Cancellation customers are already moving forward, integrating and testing Krisp’s AI Accent Conversion SDKs,” said Robert Schoenfield, EVP of Licensing and Partnerships at Krisp. “We are thrilled to be diversifying our offerings and opening up our Early Access program to other CCaaS providers as well.”

 

Krisp SDKs process more than 75 billion minutes of audio every month, supporting leading enterprise contact centers, BPOs and CCaaS platforms directly and through partnerships globally. 

AI Accent Conversion revolutionizes customer service by eliminating the need for resource-intensive, cognitively demanding accent neutralization training, putting all call center agents on a level playing field. This allows contact centers to rapidly scale operations from any location in the world by increasing access to a larger employable talent pool, while delivering consistent, superior service without additional resources.

Krisp Accent Conversion supports a wide range of Indian and Filipino accent dialects, making it an ideal solution for BPOs with India and Philippine-based contact center operations. Soon to follow are accent conversion packs for English-speaking Latin American, South African and other contact center agents.

 

About Krisp

Founded in 2017, Krisp pioneered the world’s first AI-powered Voice Productivity software. Krisp’s Voice AI technology enhances digital voice communication through audio cleansing, noise cancelation, accent conversion, and call transcription and summarization. Offering full privacy, Krisp works on-device, across all audio hardware configurations and applications that support digital voice communication. Today, Krisp processes over 75 billion minutes of voice conversations every month, eliminating background noise, echoes, and voices in real-time, helping businesses harness the power of voice to unlock higher productivity and deliver better business outcomes. 

 

Learn more about Krisp’s SDK for developers.


]]>
https://krisp.ai/blog/krisp-launches-accent-conversion/feed/ 0
Elevate Your Contact Center Experience with Krisp Background Voice Cancellation (BVC) https://krisp.ai/blog/contact-center-background-voice-cancellation/ https://krisp.ai/blog/contact-center-background-voice-cancellation/#respond Wed, 19 Jun 2024 14:49:18 +0000 https://krisp.ai/blog/?p=12690 In the energetic environment of a contact center, maintaining clear and focused communications with customers is critical, and foundational. Agents often face the challenge of background noise and overlapping voices, which not only distract customers but can also lead to inadvertent disclosure of sensitive information. Traditional headsets and hardware solutions fall short in addressing these […]

The post Elevate Your Contact Center Experience with Krisp Background Voice Cancellation (BVC) appeared first on Krisp.

]]>
In the energetic environment of a contact center, maintaining clear, focused communication with customers is critical and foundational. Agents often face background noise and overlapping voices, which not only distract customers but can also lead to inadvertent disclosure of sensitive information. Traditional headsets and hardware solutions fall short of addressing these issues effectively. Krisp’s Background Voice Cancellation (BVC) is a game-changer for contact center operations, materially improving average handle time (AHT), customer satisfaction (CSAT), and employee satisfaction (ESAT).

What is Krisp Background Voice Cancellation?

Krisp BVC is an advanced AI noise-canceling technology that eliminates all background noise and competing voices nearby, including the voices of other agents. This breakthrough technology is enabled as soon as an agent plugs in their headset, without requiring individual voice enrollment or training. It integrates smoothly with both native applications and browser-based calling applications via WebAssembly JavaScript (WASM JS), ensuring high performance and efficiency.

Why Choose Krisp BVC for Your Contact Center?

1. Enhanced Customer Experience

Customers often struggle with understanding agents when there’s background chatter, leading to frustration and reduced satisfaction. By using Krisp BVC, all extraneous voices and noises are filtered out, allowing customers to focus solely on the agent they are speaking with. This ensures a smooth and professional interaction every time, which directly contributes to higher CSAT scores.

2. Privacy and Confidentiality

In a contact center, the risk of customers overhearing personal information from other calls is a significant concern, especially for financial and healthcare customers. Krisp BVC addresses this by completely isolating the agent’s voice from the background, ensuring that sensitive information remains confidential.

3. Hardware Independence

While headsets and other hardware solutions provide some noise reduction, they do not eliminate background voices. Krisp BVC works independently of hardware, offering superior noise and background voice cancellation without the need for additional devices or complicated setups.

4. Plug-and-Play Functionality

Once the agent’s headset is plugged in, Krisp BVC is activated automatically. There’s no need for agents to enroll their voice or go through any training process, making it an effortless solution that saves time and resources.

5. Versatility Across Platforms

Krisp BVC is uniquely available for both native applications and browser-based calling applications through WASM JS. This means it can be integrated effortlessly into various platforms, ensuring consistent performance and reliability.

6. Efficient Performance

Krisp BVC is designed to run efficiently in the browser, making it an ideal solution for Contact Center as a Service (CCaaS) platforms. Its high-performance capabilities ensure minimal latency and a smooth user experience.

7. Improved CSAT Metrics

With the enhanced clarity of communication provided by Krisp BVC, customers are more likely to have positive interactions with agents. This leads to increased satisfaction, as reflected in improved CSAT metrics reported to us by a number of customers. Clear and effective communication is crucial in resolving issues promptly and accurately, which in turn boosts customer loyalty and satisfaction.

Integration Made Easy

Integrating Krisp BVC into your contact center application is straightforward. Here’s a sample code snippet to demonstrate how simple it is to get started:
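As an illustration only: the class and method names below (`KrispBvcFilter`, `process`, `attachBvc`) are hypothetical stand-ins, not Krisp’s actual SDK API, and the filter body is a pass-through placeholder. The real interface is documented on Krisp’s Developer Portal.

```typescript
// Hypothetical stand-in for the Krisp BVC filter object -- the real SDK's
// class name, method names, and frame format differ.
class KrispBvcFilter {
  // Pass-through placeholder: the real filter suppresses noise and
  // competing background voices in each PCM frame.
  process(frame: Float32Array): Float32Array {
    return frame;
  }
}

// Insert the filter between microphone capture and the outgoing call
// stream: every captured frame is processed before it is sent.
function attachBvc(
  filter: KrispBvcFilter,
  micFrames: Float32Array[]
): Float32Array[] {
  return micFrames.map((frame) => filter.process(frame));
}
```

In a real browser deployment the frames would come from the microphone track (for example via an AudioWorklet) rather than an in-memory array, but the shape of the integration is the same: one processing call per captured frame.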

Visualizing the Difference

The graphical representation above illustrates the clarity and focus achieved by using Krisp BVC. Notice how the agent’s speech is clear and distinct, free from background distractions.

Hear the Difference

Experience the transformative power of Krisp BVC with this audio comparison:

Without BVC – Competing Agent Voices

 

With BVC – Clear communication

 

Conclusion

Integrating Krisp BVC into your contact center solutions can significantly enhance the quality of interactions and customer satisfaction. Its ease of integration, combined with superior performance and versatility, makes Krisp BVC a must-have feature for modern contact centers. Upgrade your communication systems today with Krisp Background Voice Cancellation and experience the difference it makes, including improved CSAT metrics.

Ready to get started? Visit Krisp’s Developer Portal for more information and comprehensive integration guides.


]]>
https://krisp.ai/blog/contact-center-background-voice-cancellation/feed/ 0
Krisp launches On-Device Transcription SDKs for Integration https://krisp.ai/blog/krisp-launches-on-device-transcription-sdks-for-app-integration/ https://krisp.ai/blog/krisp-launches-on-device-transcription-sdks-for-app-integration/#respond Wed, 29 May 2024 16:36:15 +0000 https://krisp.ai/blog/?p=12374 BERKELEY, Calif., May 29, 2024 – Krisp, the world’s leading AI-powered voice productivity software, announced today general availability of its on-device Speech-to-Text (STT) SDKs. Krisp STT technology runs on the end-user device, delivering accuracy and privacy without the need of expensive servers or connectivity. This breakthrough technology has been available within the Krisp application for […]

The post Krisp launches On-Device Transcription SDKs for Integration appeared first on Krisp.

]]>
BERKELEY, Calif., May 29, 2024 – Krisp, the world’s leading AI-powered voice productivity software, announced today general availability of its on-device Speech-to-Text (STT) SDKs. Krisp STT technology runs on the end-user device, delivering accuracy and privacy without the need for expensive servers or connectivity. This breakthrough technology has been available within the Krisp application for Contact Centers and Enterprises, and is now available for integration into communications applications via SDKs.

 

How it works: Krisp STT employs noise-robust deep learning algorithms for real-time, on-device speech-to-text conversion. The process consists of several stages: converting speech into unformatted text; adding punctuation, capitalization, and numerical formatting; redacting PII/PCI; and removing filler words, all on-device and in real time. It then assigns text to speakers with timestamps before securely transmitting the encrypted transcript to a private cloud within each application.
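The staged flow described above can be sketched as a chain of transforms. The stages here are deliberately naive string operations (and the ordering is simplified), standing in for Krisp’s on-device models:

```typescript
// Illustrative placeholder stages -- simple string transforms, not
// Krisp's actual on-device models.

// Remove common filler words and collapse the resulting whitespace.
const removeFillers = (t: string): string =>
  t.replace(/\b(um|uh|er)\b/gi, "").replace(/\s+/g, " ").trim();

// Redact long digit runs (e.g. card/account numbers) as a crude PII stand-in.
const redactPii = (t: string): string =>
  t.replace(/\b\d{4,}\b/g, "[REDACTED]");

// Capitalize the first word and add terminal punctuation.
const addPunctuation = (t: string): string =>
  t.charAt(0).toUpperCase() + t.slice(1) + ".";

// Run the stages over the raw STT output.
function formatTranscript(rawSttText: string): string {
  return addPunctuation(redactPii(removeFillers(rawSttText)));
}
```

For example, `formatTranscript("um my card is 12345678 uh thanks")` yields `"My card is [REDACTED] thanks."`.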

 

“The success of our on-device transcriptions within the Krisp app has fueled demand for our STT SDKs,” said Robert Schoenfield, EVP of Licensing and Partnerships at Krisp. “Krisp STT SDKs are uniquely suited for applications and devices that require secure and accurate on-device transcriptions without the need of servers or an internet connection.”

 

Since introducing Transcriptions and Meeting Notes within the Krisp application, end users have generated more than 50 million hours of transcriptions. Today, Krisp STT SDKs are available for English and are robust to speaker accents, with plans to add other languages later this year, including French, Spanish, and German. Krisp STT SDKs are also noise robust, as the underlying model is trained with noise representative of many use cases.

Krisp STT SDKs are available today for Windows, Mac and Linux, with support for browser-based applications via WASM JS becoming available in the second half of the year. Krisp SDKs deliver industry-leading performance on-device while consuming minimal CPU and memory resources.

 

About Krisp

Founded in 2017, Krisp pioneered the world’s first AI-powered Voice Productivity software. Krisp’s Voice AI technology enhances digital voice communication through audio cleansing, noise cancelation, accent localization, and call transcription and summarization. Offering full privacy, Krisp works on-device, across all audio hardware configurations and applications that support digital voice communication. Today, Krisp processes over 75 billion minutes of voice conversations every month, eliminating background noise, echoes, and voices in real-time, helping businesses harness the power of voice to unlock higher productivity and deliver better business outcomes. 

 

Learn more about Krisp’s SDK for developers.

 


]]>
https://krisp.ai/blog/krisp-launches-on-device-transcription-sdks-for-app-integration/feed/ 0
Enhancing Browser App Experiences: Krisp JS SDK Pioneers In-browser AI Voice Processing for Desktop and Mobile https://krisp.ai/blog/enhancing-browser-apps-experience/ https://krisp.ai/blog/enhancing-browser-apps-experience/#respond Wed, 15 May 2024 07:17:29 +0000 https://krisp.ai/blog/?p=12076   In today’s connected world, where web browsers serve as gateways to an assortment of online experiences, ensuring a seamless and productive user experience is paramount. One crucial aspect often overlooked in browser-based communication applications is voice quality, especially in scenarios where clarity of communication is essential.    Diverse Applications of Noise Cancellation on the […]

The post Enhancing Browser App Experiences: Krisp JS SDK Pioneers In-browser AI Voice Processing for Desktop and Mobile appeared first on Krisp.

]]>
 

In today’s connected world, where web browsers serve as gateways to an assortment of online experiences, ensuring a seamless and productive user experience is paramount. One crucial aspect often overlooked in browser-based communication applications is voice quality, especially in scenarios where clarity of communication is essential. 

 

Diverse Applications of Noise Cancellation on the Web

From virtual meetings and online classes to contact center operations, the demand for clear audio communications has become ever more important, making AI Voice processing with noise and background voice cancellation an expected and highly sought-after feature. While standalone applications have provided this functionality, integrating this directly into browser-based applications has proven to be a challenge.

The need for noise and background voice cancellation extends beyond conventional communication platforms. In telehealth, for instance, where accurate communication is vital for call-based diagnosis and consultation, background noise and voices can hinder effective communication. Another interesting example is insurance companies taking calls from customers at the scene of an incident. Eliminating background noise ensures that critical information is accurately conveyed, leading to smoother claims processing and greater customer satisfaction. These, and many other use cases, often involve one-click web sessions for the calls.

 

Overcoming Challenges for Mobile Browser Integration

The growing demand for quality communications in browser-based applications extends to both desktop and mobile devices. Until recently, achieving compatibility with mobile devices, particularly iOS Safari, posed significant difficulties. Limitations within Apple’s WebKit framework and the inherently CPU-intensive nature of JavaScript solutions made it difficult to bring the power of Krisp’s technologies to mobile browser applications.

The introduction of Single Instruction, Multiple Data (SIMD) support marked a significant opening for Krisp to deliver its market-leading technology into Safari specifically, and mobile browsers generally. SIMD enables parallel processing of data, significantly boosting performance and efficiency, particularly on mobile devices with limited computational resources.

By leveraging SIMD, the Krisp JS SDK runs with very low CPU usage, making its market-leading noise cancellation available to users in mobile browser applications. This breakthrough not only enhances the user experience but also opens up new possibilities for web-based applications across various industries.

As Krisp’s technologies continue to evolve and extend into new territories, making AI Voice features available to all users across desktop and mobile browser-based applications is fundamental, giving users seamless access to the best voice processing technologies on the market.

 

Try next-level audio and voice technologies

Krisp licenses its SDKs to embed directly into applications and devices. Learn more about Krisp’s SDKs and begin your evaluation today.

 


]]>
https://krisp.ai/blog/enhancing-browser-apps-experience/feed/ 0
Vonage to Launch Enhanced Noise Cancellation Powered by Krisp’s Voice AI https://krisp.ai/blog/vonage-to-launch-enhanced-noise-cancellation-powered-by-krisps-voice-ai/ https://krisp.ai/blog/vonage-to-launch-enhanced-noise-cancellation-powered-by-krisps-voice-ai/#comments Tue, 26 Mar 2024 19:48:51 +0000 https://krisp.ai/blog/?p=11507 Enhanced Noise Cancellation boosts agent productivity and elevates customer experience with embedded AI for Vonage Contact Center   BERKELEY, CALIF., (March 26 2024) Vonage, a global leader in cloud communications helping businesses accelerate their digital transformation and a part of Ericsson, has announced the addition of Vonage Enhanced Noise Cancellation to Vonage Contact Center (VCC). […]

The post Vonage to Launch Enhanced Noise Cancellation Powered by Krisp’s Voice AI appeared first on Krisp.

]]>
Enhanced Noise Cancellation boosts agent productivity and elevates customer experience with embedded AI for Vonage Contact Center

 

BERKELEY, CALIF., (March 26 2024) Vonage, a global leader in cloud communications helping businesses accelerate their digital transformation and a part of Ericsson, has announced the addition of Vonage Enhanced Noise Cancellation to Vonage Contact Center (VCC). This noise and echo cancellation feature uses Krisp’s machine learning technology to eliminate disruptive background noises and voices, boosting agent productivity, reducing average handle time, and improving the overall customer experience.

 

Through Krisp’s proprietary Voice AI technology, Vonage Enhanced Noise Cancellation is one of the few offerings on the market with noise cancellation fully embedded and available out of the box, eliminating the need for onsite developers or IT departments to manually integrate the technology. Users simply click to enable the noise-canceling feature in the VCC dashboard. Unique to this feature is its ability to cancel noise and voices around agents as well as any inbound noise behind the caller, providing an exceptionally clear connection and a better experience for all.

 

In addition to eliminating local and remote audio quality issues – including removal of background voices, fan sounds, pet sounds, acoustic echo, and more – Vonage Enhanced Noise Cancellation enables better recordings and more accurately captures call data, delivering superior analytics that extract more meaningful insights, such as customer behavior and agent performance. This drives improved efficiency, better customer support, and clearer communication over voice, the leading channel for connecting agents and customers.

 

“The ongoing prevalence of voice in a world where consumers are connecting with businesses and their favorite brands from literally anywhere has driven a considerable demand for clear, concise and noise-free, two-way connections,” said Mary Wardley, VP, Customer Service and Contact Center for IDC. “Vonage’s introduction of Enhanced Noise Cancellation provides an immediate and effective way to improve customer experiences that drive engagement while also providing better recordings and analytics to help businesses resolve issues faster and gain better insights that drive data-driven decisions.”



“Audio quality is often a concern when it comes to a busy contact center environment, and the addition of inbound noise from a caller can make the experience a challenging one,” said Savinay Berry, EVP Product and Engineering for Vonage. “By embedding the technology to ensure optimal audio quality from within the Vonage Contact Center, agents are more productive and more efficient while providing a better experience for customers. This seamless experience helps drive the kind of personal and meaningful engagement that leads to long-lasting customer relationships.”

 

“Customer communications with contact centers should only be focused on solving issues and extending services, and agents should not have concern or angst about the noises and other voices around them,” said Robert Schoenfield, EVP of Licensing and Partnerships at Krisp. “Krisp’s Voice AI technology, integrated seamlessly within Vonage Contact Center, delivers on the promise of clear communications for agents and customers on every call, no matter the environment on either side of the call.”

 

Vonage customer Champion Power Equipment, a global market leader in power generation equipment, relies on Vonage Enhanced Noise Cancellation to help its agents diagnose and triage inbound callers seeking service on equipment.

 

“At Champion Power Equipment, the Vonage Contact Center has been a game-changer in our customer support journey. The Enhanced Noise Cancellation feature’s remarkable effectiveness in minimizing background noise has transformed the way our agents handle inbound calls, leading to quicker resolution and heightened customer satisfaction,” said Pedram Koukia, Customer Service Supervisor for Champion Power Equipment, Inc.

 

Koukia continued, “With a notable 20 percent decrease in call abandonment rates and a remarkable 25 percent increase in first-call issue resolution, Vonage has become an indispensable asset in our commitment to delivering seamless service and maintaining our reputation as a global leader in power generation equipment.”

 

Lowell Five Savings Bank, a Boston area savings and financial institution for more than 165 years, has employed Enhanced Noise Cancellation to its Vonage Contact Center solution, which serves its in-office agents: “Most of our agents are in-house and noise is a frequent challenge, not only for the productivity of our agents but in enabling them to deliver a personal and private experience for our clients. With Vonage Enhanced Noise Cancellation, that is no longer an issue,” said the Contact Center Manager for Lowell Five Savings Bank. “Our agents are empowered with the tools they need to provide every client with the kind of one-on-one engagement that makes them feel comfortable and heard and that ultimately drives loyalty and repeat business.”

 

Vonage Enhanced Noise Cancellation is currently in beta and will be Generally Available in April 2024. To date, more than 100,000 calls into VCC and a total of 350,000 minutes have been completed by Vonage customers using Vonage Enhanced Noise Cancellation. A demo of this new feature will be available in the Vonage Booth #818 at Enterprise Connect, March 25–28, 2024, at the Gaylord Palms Convention Center in Orlando, Florida.

 

About Vonage

Vonage, a global cloud communications leader, helps businesses accelerate their digital transformation. Vonage’s Communications Platform is fully programmable and allows for the integration of Video, Voice, Chat, Messaging, AI and Verification into existing products, workflows and systems. The Vonage conversational commerce application enables businesses to create AI-powered omnichannel experiences that boost sales and increase customer satisfaction. Vonage’s fully programmable unified communications, contact center and conversational commerce applications are built from the Vonage platform and enable companies to transform how they communicate and operate from the office or remotely – providing the flexibility required to create meaningful engagements.

 

Vonage is headquartered in New Jersey, with offices throughout the United States, Europe, Israel and Asia and is a wholly-owned subsidiary of Ericsson (NASDAQ: ERIC), and a business area within the Ericsson Group called Business Area Global Communications Platform. To follow Vonage on X (formerly known as Twitter), please visit twitter.com/vonage. To follow on LinkedIn, visit linkedin.com/company/Vonage/. To become a fan on Facebook, go to facebook.com/vonage. To subscribe on YouTube, visit youtube.com/vonage.

 

About Krisp

Founded in 2017, Krisp pioneered the world’s first AI-powered Voice Productivity software. Krisp’s Voice AI technology enhances digital voice communication through audio cleansing, noise cancelation, accent localization, and call transcription and summarization. Offering full privacy, Krisp works on-device, across all audio hardware configurations and applications that support digital voice communication. Today, Krisp has transcribed over 20 million calls and processes over 75 billion minutes of voice conversations every month, helping businesses harness the power of voice to unlock higher productivity and deliver better business outcomes.

 

Learn more about Krisp SDKs here.

 

This announcement originally appeared on Vonage.com


]]>
https://krisp.ai/blog/vonage-to-launch-enhanced-noise-cancellation-powered-by-krisps-voice-ai/feed/ 3