Speech recognition software for Linux
There are currently several speech recognition software packages for Linux. Some of them are free and open source software while others are proprietary. Speech recognition usually refers to software that attempts to distinguish thousands of words in a human language. Voice control may refer to software used for sending operational commands to a computer.
- 1 Native Linux speech recognition
- 2 Voice control and keyboard shortcuts
- 3 Running Windows speech recognition software with Linux
- 4 See also
- 5 References
- 6 External links
Native Linux speech recognition
Current development status
There has been a push to develop a high-quality native speech recognition engine for Linux. As a result, numerous projects dedicated to creating Linux speech recognition solutions were established, such as Mycroft, an open-source assistant similar to Microsoft's Cortana.
Crowdsourcing of speech samples
It is essential to compile a speech corpus to produce acoustic models for speech recognition projects. VoxForge is a free repository of speech corpora and acoustic models, built with the aim of collecting transcribed speech for use in speech recognition projects. VoxForge accepts crowdsourced speech samples and corrections of recognized speech sequences. It is licensed under the GPL.
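The core of such a corpus is the pairing of each recording with its transcription. A minimal sketch of that idea in Python (the one-line-per-utterance file layout here is illustrative, not VoxForge's actual on-disk format):

```python
import tempfile
from pathlib import Path

def write_prompts(entries, path):
    """Write a prompts file mapping each audio file stem to its transcript.

    `entries` is a dict such as {"sample_001": "open the terminal"}.
    Each output line pairs an utterance id with its transcript text,
    which is what acoustic-model training tools consume.
    """
    lines = [f"{stem} {text.upper()}" for stem, text in sorted(entries.items())]
    Path(path).write_text("\n".join(lines) + "\n", encoding="utf-8")
    return lines

entries = {
    "sample_001": "open the terminal",
    "sample_002": "switch to the next workspace",
}
prompts = write_prompts(entries, Path(tempfile.gettempdir()) / "PROMPTS")
# prompts[0] is "sample_001 OPEN THE TERMINAL"
```

Real corpora additionally record speaker metadata, sample rate, and license information alongside each utterance.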
Speech recognition concept
The first step is to begin recording an audio stream on a Linux machine. The user has two main processing options:
- Discrete Speech Recognition (DSR) - voice recognition is processed entirely on the local machine. This refers to self-contained systems in which all aspects of speech recognition (SR) are performed within the user's computer. Local processing helps protect intellectual property (IP) and avoid unwanted surveillance.
- Server-based (remote) SR - the speech file is transmitted to a remote server, which converts the audio into a text string. Because the audio leaves the machine and may be stored or mined in the cloud, this method more easily exposes the user to surveillance, theft of IP and introduction of malware.
The second (remote) option was previously used on smartphones because they lacked the performance, storage and RAM to process speech recognition on the device. These limitations have largely been overcome, although server-based SR remains the norm on mobile devices.
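The distinction between the two options can be sketched in a few lines of Python. Both the local decoder and the remote endpoint below are placeholders (no real engine or service is wired in); the point is only where the audio bytes end up:

```python
import urllib.request

def recognize_locally(audio: bytes) -> str:
    """Stand-in for an on-device decoder (e.g. a CMU Sphinx call).

    All processing stays on the local machine; the audio never
    leaves it, which is the privacy argument for this option.
    """
    return "<decoded on this machine>"

def build_remote_request(audio: bytes, url: str) -> urllib.request.Request:
    """Build (but do not send) the HTTP request a server-based
    recognizer would receive: the raw audio leaves the machine."""
    return urllib.request.Request(
        url, data=audio,
        headers={"Content-Type": "audio/wav"}, method="POST")

audio = b"RIFF....WAVE"  # placeholder for real PCM/WAV bytes
text = recognize_locally(audio)
req = build_remote_request(audio, "https://example.invalid/stt")
# req.data holds the outgoing audio - exactly the data a local
# decoder would never have transmitted.
```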
Speech Recognition in Browser
Discrete speech recognition can be performed within supported web browsers. Remote SR requires no software installation on the desktop computer or mobile device, as it is primarily server-based, with the inherent security issues noted above.
- (Remote): https://dictation.io (use Chromium/Chrome). The service records an audio track of the user via the web browser and uses the Google API for speech recognition. Similarly, Google voice typing in Google Docs works within the Chrome browser regardless of operating system, as it is server-based.
- (DSR): There are solutions that work on the client only, without sending data to servers, e.g. pocketsphinx.js.
Free speech recognition engines
The following is a list of current projects dedicated to implementing speech recognition on Linux, as well as major native solutions. These are not end-user applications but programming libraries that a developer may use to build an end-user application.
- CMU Sphinx is a general term to describe a group of speech recognition systems developed at Carnegie Mellon University.
- Julius is a high-performance, two-pass large vocabulary continuous speech recognition (LVCSR) decoder software for speech-related researchers and developers.
- Kaldi is a toolkit for speech recognition provided under the Apache license.
- Mozilla DeepSpeech is an open source speech-to-text engine based on Baidu's Deep Speech research paper, under active development and intended for end-user use.
Possibly active projects:
- Lera (Large Vocabulary Speech Recognition) based on Simon and CMU Sphinx for KDE.
- Speechpad.pw uses Google's speech recognition engine and Chrome native messaging API to provide direct speech input in Linux.
- Speech uses Google's speech recognition engine to support dictation in many different languages.
- Speech Control is a Qt-based application that uses CMU Sphinx's tools, such as SphinxTrain and PocketSphinx, to provide speech recognition utilities like desktop control, dictation and transcription to the Linux desktop.
- Platypus is an open source shim that allows the proprietary Dragon NaturallySpeaking, running under Wine, to work with any Linux X11 application.
- FreeSpeech, from the developer of Platypus, is a free and open source cross-platform desktop application for GTK that uses CMU Sphinx's tools to provide voice dictation, language learning, and editing in the style of Dragon NaturallySpeaking.
- Vedics (Voice Enabled Desktop Interaction and Control System) is a speech assistant for the GNOME environment.
- GnomeVoiceControl is a dialogue system to control the GNOME Desktop that was developed in the Google Summer of Code in 2007.
- NatI is a multi-language voice control system written in Python.
- SphinxKeys allows the user to type keyboard keys and mouse clicks by speaking into their microphone.
- VoxForge is a free speech corpus and acoustic model repository for open source speech recognition engines.
- Simon aims to be flexible enough to compensate for dialects or even speech impairments. It uses either HTK/Julius or CMU Sphinx, works on Windows and Linux, and supports training.
- Speeral is a group of speech recognition tools developed at the University of Avignon.
- Jasper (https://jasperproject.github.io/) is an open source platform for developing always-on, voice-controlled applications. It is an embedded Raspberry Pi front end for CMU Sphinx or Julius.
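Always-on front ends such as Jasper must decide when speech is present before handing audio to a recognizer. A common first pass is an energy threshold over short frames; a minimal stdlib-only sketch (the frame size and threshold are illustrative values, not taken from any of the projects above):

```python
import math

def frame_rms(samples):
    """Root-mean-square energy of one frame of PCM samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def detect_speech(samples, frame_len=160, threshold=500.0):
    """Return indices of frames whose energy exceeds the threshold.

    At 16 kHz, 160 samples is a 10 ms frame. Real systems layer
    hangover smoothing and adaptive noise floors on top of this.
    """
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    return [i for i, f in enumerate(frames) if frame_rms(f) > threshold]

# Synthetic signal: silence, then a loud 440 Hz burst, then silence.
rate, amp = 16000, 8000
tone = [int(amp * math.sin(2 * math.pi * 440 * t / rate)) for t in range(1600)]
signal = [0] * 1600 + tone + [0] * 1600
active = detect_speech(signal)
# Only the middle ten frames (the tone) are marked active.
```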
It is possible for developers to create Linux speech recognition software by using existing packages derived from open-source projects.
- CVoiceControl is a KDE- and X Window-independent version of its predecessor, KVoiceControl. Development ceased while the project was still in alpha.
- Open Mind Speech, a part of the Open Mind Initiative, aimed to develop free (GPL) speech recognition tools and applications, as well as to collect speech data. Development ended in 2000.
- PerlBox is a Perl-based voice control and speech output tool. Development ended in its early stages in 2004.
- Xvoice is a user application that provides dictation and command control to any X application; it requires the proprietary ViaVoice engine to function. Development ended in 2009 during early project testing.
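Because the engines listed above are libraries rather than applications, a developer typically hides the chosen engine behind a small interface so it can be swapped out later. A hypothetical sketch (no real engine is bound here; a real backend would wrap, e.g., PocketSphinx or Kaldi bindings):

```python
from abc import ABC, abstractmethod

class Recognizer(ABC):
    """Engine-neutral interface the application codes against."""

    @abstractmethod
    def transcribe(self, audio: bytes) -> str: ...

class DummyRecognizer(Recognizer):
    """Placeholder backend; a real one would call into an engine's
    bindings behind the same transcribe() signature."""

    def transcribe(self, audio: bytes) -> str:
        return "" if not audio else "hello world"

def dictate(recognizer: Recognizer, audio: bytes) -> str:
    # Application code depends only on the interface, so the
    # underlying engine can change without touching this function.
    return recognizer.transcribe(audio)

result = dictate(DummyRecognizer(), b"\x00\x01")
```

This separation is why the same end-user application can offer, say, both a Sphinx and a Julius backend.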
Proprietary speech recognition engines
- Verbio ASR is a commercial speech recognition server for Linux and Windows platforms.
- DynaSpeak, from SRI International, is a speaker-independent speech recognition software development kit that scales from small- to large-scale systems, for use in commercial, consumer, and military applications.
- Janus Recognition Toolkit (JRTk) is a closed-source speech recognition toolkit, mainly targeted at Linux, developed by the Interactive Systems Laboratories at Carnegie Mellon University and the Karlsruhe Institute of Technology; commercial and research licenses are available.
- LumenVox Speech Engine is a commercial library for Linux and Windows for inclusion in other software. It has been integrated into the Asterisk private branch exchange system.
- VoxSigma is a speech recognition software suite developed by Vocapia Research.
Voice control and keyboard shortcuts
Whereas speech recognition attempts to distinguish thousands of words in a human language, voice control sends operational commands to a computer or appliance. Voice control typically requires a much smaller vocabulary and is thus much easier to implement.
Simple voice-control software combined with keyboard shortcuts offers the nearest-term path to practical, accurate voice control on Linux.
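The small fixed vocabulary is what makes voice control tractable: a recognized phrase only has to be matched against a short command table. A hypothetical sketch mapping phrases to keyboard shortcuts (the phrases and shortcut strings are illustrative; a real tool would inject the keystrokes with something like xdotool):

```python
from typing import Optional

# Hypothetical phrase -> shortcut table.
COMMANDS = {
    "open terminal": "ctrl+alt+t",
    "close window": "alt+F4",
    "next workspace": "super+Right",
}

def match_command(utterance: str) -> Optional[str]:
    """Return the shortcut for a recognized phrase, or None.

    Normalizing case and whitespace tolerates minor recognizer
    variation; anything outside the table is rejected rather than
    guessed, which is why small vocabularies work so well.
    """
    return COMMANDS.get(" ".join(utterance.lower().split()))

shortcut = match_command("  Open   Terminal ")
# shortcut is "ctrl+alt+t"; unknown phrases return None
```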
Running Windows speech recognition software with Linux
Using a compatibility layer
Using virtualized Windows
It is also possible to run Windows speech recognition software under Linux. Using no-cost virtualization software such as VMware Server or VirtualBox, Windows and NaturallySpeaking can run inside a virtual machine; both hypervisors support copy and paste to and from the guest, making dictated text easily transferable.
References
- "A TensorFlow implementation of Baidu's DeepSpeech architecture", Mozilla, 2017-12-05. Retrieved 2017-12-05.
- Lera KDE git repository (2015), https://cgit.kde.org/scratch/grasch/lera.git/. Retrieved 2017-07-25.
- NatI (Natural Language Interface)
- Simon, KDE; main developer until 2015: Peter Grasch, http://simon.kde.org/. Retrieved 2017-09-04.
- Open Mind Speech
- Open Mind Initiative. Archived 2003-08-05 at the Wayback Machine.
- Verbio ASR
- Janus Recognition Toolkit (JRTk)
- "Speech Recognition Software - LumenVox". Retrieved 2013-02-28.
- Speech-to-text software by Vocapia
External links
- Dragon NaturallySpeaking - Wine Application Database
- Speech Synthesis & Analysis Software
- Gnome Voice Control (an incomplete speech recognition solution for GNOME) - Demonstration
- Speech Recognition Software - list of speech recognition projects and solutions in Linux
- Accessibility / SpeechRecognition - Ubuntu Help
- Alternatives to Nuance Dragon NaturallySpeaking