High-Fidelity Simultaneous Speech-To-Speech Translation

Kyutai - code on GitHub

Abstract. We introduce Hibiki ('echo' in Japanese), a model for simultaneous speech translation. Hibiki leverages a multistream language model to synchronously process source and target speech, and jointly produces text and audio tokens to perform speech-to-text and speech-to-speech translation. We furthermore address the fundamental challenge of simultaneous interpretation, which, unlike its consecutive counterpart (where one waits for the end of the source utterance to start translating), adapts its flow to accumulate just enough context to produce a correct translation in real time, chunk by chunk.
To do so, we introduce a weakly-supervised method that leverages the perplexity of an off-the-shelf text translation system to identify optimal delays on a per-word basis and create aligned synthetic data. After supervised training, Hibiki performs adaptive, simultaneous speech translation with vanilla temperature sampling. On a French-English simultaneous speech translation task, Hibiki demonstrates state-of-the-art performance in translation quality, speaker fidelity and naturalness. Moreover, the simplicity of its inference process makes it compatible with batched translation and even real-time on-device deployment.
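The per-word delay search can be pictured with a short sketch. The following is a minimal illustration of the idea, not Kyutai's released code: it assumes a hypothetical logprob(source_prefix, target_prefix, next_word) callable backed by an off-the-shelf text translation model, and greedily finds, for each target word, the smallest source prefix under which that word becomes easy to predict.

```python
from typing import Callable, List

def word_delays(
    source_words: List[str],
    target_words: List[str],
    logprob: Callable[[str, str, str], float],
    threshold: float = -1.5,
) -> List[int]:
    """For each target word, return how many source words must have been
    heard before emitting it (a sketch of the weakly-supervised alignment)."""
    delays: List[int] = []
    start = 0  # delays are monotonic: never use less source context than before
    for t, word in enumerate(target_words):
        target_prefix = " ".join(target_words[:t])
        s = start
        for s in range(start, len(source_words) + 1):
            source_prefix = " ".join(source_words[:s])
            # Stop at the first prefix that makes the word confidently predictable.
            if logprob(source_prefix, target_prefix, word) >= threshold:
                break
        delays.append(s)
        start = s
    return delays

# Toy scorer: pretend each target word becomes predictable once the source
# prefix is strictly longer than the target prefix.
toy = lambda src, tgt, w: 0.0 if len(src.split()) > len(tgt.split()) else -9.9
print(word_delays(["bonjour", "tout", "le", "monde"],
                  ["hello", "everyone"], toy))  # -> [1, 2]
```

Such per-word delays can then be realized in the synthetic training data by inserting silence before each target word, so that it never starts before its source context has been heard.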

In-the-Wild Examples

The first example comes from a video explaining automated translation. (source, original video (c) Arte)

The second example comes from a humorous video. The source voice is high-pitched on purpose; it is a good showcase of how well Hibiki replicates pitch and prosody, and of how robust it is to background noise, since no denoising is applied and the audio is fed to Hibiki raw. (source, original video (c) Canal+)

Examples with Ground Truth Interpretation

These samples come from the VoxPopuli dataset, where the ground truth is real human interpretation. The volume of the sources has been reduced so that the translations are easier to hear.

Multistream Visualization

The source and translated audio are on separate channels; use headphones to hear both at the same time. These samples are the same as in the VoxPopuli section above, with CFG set to 3.
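If you want to build this kind of two-channel file yourself from separate source and translation recordings, a sketch along these lines works (file names are placeholders, and both inputs are assumed to be mono WAVs at the same sample rate):

```python
import numpy as np
import soundfile as sf

# Hypothetical file names; any two mono WAVs at the same sample rate work.
source, sr = sf.read("source_fr.wav")
target, sr_t = sf.read("hibiki_en.wav")
assert sr == sr_t, "resample one of the files first"

# Pad the shorter signal so both channels have equal length.
n = max(len(source), len(target))
source = np.pad(source, (0, n - len(source)))
target = np.pad(target, (0, n - len(target)))

# Left channel: source speech; right channel: Hibiki's translation.
sf.write("multistream.wav", np.stack([source, target], axis=1), sr)
```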

Impact of Classifier-Free Guidance

Samples taken from the VoxPopuli dataset. The Hibiki samples are presented with different levels of classifier-free guidance (CFG). The higher the CFG value, the closer the generated voice is to the original voice; this results in very strong accents for the generations with the highest values.
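For reference, classifier-free guidance combines the outputs of a conditioned and an unconditioned forward pass at sampling time. The sketch below shows the standard formulation on next-token logits; exactly where and how Hibiki applies it is our assumption, not a statement of its internals.

```python
import torch

def cfg_logits(cond: torch.Tensor, uncond: torch.Tensor, gamma: float) -> torch.Tensor:
    # Standard classifier-free guidance: gamma = 1 recovers plain
    # conditioned sampling; gamma > 1 extrapolates away from the
    # unconditioned distribution, strengthening the conditioning
    # (here, similarity to the source speaker's voice).
    return uncond + gamma * (cond - uncond)
```

With gamma = 1 this reduces to ordinary conditioned sampling; the CFG-3 and CFG-10 columns below push further along the conditioning direction, hence the stronger source accents.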

Audio columns: Source | Hibiki CFG-1 | Hibiki CFG-3 | Hibiki CFG-10 | Seamless

Long-form Simultaneous Translations

Samples taken from the audio-NTREX dataset.

Audio columns: Source | Hibiki | Seamless

Short-form Simultaneous Translations

Samples taken from the CVSS-C dataset.

Audio columns: Source | Hibiki | Seamless

This page was adapted from the SoundStorm project page.