|Latest release||December 20, 2022|
|Type of format||speech codec|
|Free format?||Yes (Apache-2.0)|
Lyra is a lossy audio codec developed by Google that is designed for compressing speech at very low bitrates. Unlike most other audio formats, it compresses data using a machine learning-based algorithm.
The Lyra codec is designed to transmit speech in real time when bandwidth is severely restricted, such as over slow or unreliable network connections. It runs at fixed bitrates of 3.2, 6, and 9 kbit/s and is intended to provide better quality than codecs that use traditional waveform-based algorithms at similar bitrates. Rather than coding the waveform directly, compression is achieved via a machine learning algorithm that encodes the input with feature extraction, and then reconstructs an approximation of the original using a generative model. This model was trained on thousands of hours of speech recorded in over 70 languages so that it works across a wide variety of speakers. Because generative models are more computationally complex than traditional codecs, a simple model that processes different frequency ranges in parallel is used to obtain acceptable performance. Lyra imposes 20 ms of latency due to its frame size. Google's reference implementation is available for Android and Linux.
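The fixed bitrates and the 20 ms frame size together determine how many bits the codec can spend on each frame, which is simple arithmetic to work out. The sketch below assumes each frame gets an equal share of the bitstream (a simplification; the source does not describe the exact packetization):

```python
# Bits available per 20 ms frame at each of Lyra's fixed bitrates.
# Assumes frames share the bitstream evenly (an illustrative simplification).
FRAME_MS = 20

def bits_per_frame(bitrate_bps: float, frame_ms: int = FRAME_MS) -> float:
    """Payload bits available for one frame at a given bitrate."""
    return bitrate_bps * frame_ms / 1000

for kbps in (3.2, 6, 9):
    print(f"{kbps} kbit/s -> {bits_per_frame(kbps * 1000):g} bits per 20 ms frame")
```

At 3.2 kbit/s this leaves only 64 bits per frame, which illustrates why a generative decoder is needed: no waveform description that small can be replayed directly with acceptable quality.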
Lyra's initial version performed significantly better than traditional codecs at similar bitrates. Ian Buckley at MakeUseOf said, "It succeeds in creating almost eerie levels of audio reproduction with bitrates as low as 3 kbps." Google claims that it reproduces natural-sounding speech, and that Lyra at 3 kbit/s beats Opus at 8 kbit/s. Tsahi Levent-Levi writes that Satin, Microsoft's AI-based codec, outperforms it at higher bitrates.
In December 2017, Google researchers published a preprint paper on replacing the Codec 2 decoder with a WaveNet neural network. They found that a neural network is able to extrapolate features of the voice not described in the Codec 2 bitstream and give better audio quality, and that the use of conventional features makes the neural network calculation simpler compared to a purely waveform-based network. Lyra version 1 would reuse this overall framework of feature extraction, quantization, and neural synthesis.
Lyra was first announced in February 2021, and in April, Google released the source code of their reference implementation. The initial version had a fixed bitrate of 3 kbit/s and around 90 ms latency. The encoder calculates a log mel spectrogram and performs vector quantization to store the spectrogram in a data stream. The decoder is a WaveNet neural network that takes the spectrogram and reconstructs the input audio.
A second version (v2/1.2.0), released in September 2022, improved sound quality, latency, and performance, and added support for multiple bitrates. V2 uses a "SoundStream" structure in which both the encoder and decoder are neural networks, a kind of autoencoder. A residual vector quantizer is used to turn the feature values into transferable data.
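A residual vector quantizer refines a plain quantizer by running several stages, each encoding the error left over by the previous stage. The sketch below shows the idea with random placeholder codebooks; the stage count, codebook sizes, and vector dimension are illustrative, not Lyra's:

```python
import numpy as np

# Illustrative residual vector quantizer (RVQ): each stage quantizes the
# residual left by the previous stage. Codebooks are random placeholders.

def rvq_encode(vec: np.ndarray, codebooks: list) -> list:
    """Return one codebook index per stage."""
    indices, residual = [], vec.copy()
    for cb in codebooks:
        idx = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
        indices.append(idx)
        residual = residual - cb[idx]   # next stage encodes what's left
    return indices

def rvq_decode(indices: list, codebooks: list) -> np.ndarray:
    """Sum the selected codewords to approximate the original vector."""
    return sum(cb[i] for cb, i in zip(codebooks, indices))

rng = np.random.default_rng(1)
codebooks = [rng.standard_normal((8, 4)) for _ in range(3)]  # 3 stages, 3 bits each
vec = rng.standard_normal(4)
idxs = rvq_encode(vec, codebooks)
approx = rvq_decode(idxs, codebooks)
```

Transmitting more or fewer stages trades reconstruction error against bits, which is one natural way such a structure can support multiple bitrates from a single model.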
- Buckley, Ian (2021-04-08). "Google Makes Its Lyra Low Bitrate Speech Codec Public". MakeUseOf. Retrieved 2022-07-21.
- "Lyra: A New Very Low-Bitrate Codec for Speech Compression". Google AI Blog. 25 February 2021. Retrieved 2022-07-21.
- "Lyra V2 - a better, faster, and more versatile speech codec". Google Open Source Blog. Retrieved 2023-04-26.
- "Google Duo uses a new codec for better call quality over poor connections". XDA. 2021-04-09. Retrieved 2022-07-21.
- Levent-Levi, Tsahi (2021-04-19). "Lyra, Satin and the future of voice codecs in WebRTC". BlogGeek.me. Retrieved 2022-07-21.
- Kleijn, W. B.; Lim, F. S.; Luebs, A.; Skoglund, J.; Stimberg, F.; Wang, Q.; Walters, T. C. (April 2018). "Wavenet Based Low Rate Speech Coding". 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE. pp. 676–680.
- Google (2021). "Lyra: A Very Low-Bitrate Codec for Speech Compression". GitHub. Retrieved 21 July 2022.
- "Lyra: A New Very Low-Bitrate Codec for Speech Compression" – Google blog post with a demonstration comparing codecs