Speech recognition software for Linux

From Wikipedia, the free encyclopedia

There are currently several speech recognition software packages for Linux. Some of them are free and open source software while others are proprietary. Speech recognition usually refers to software that attempts to distinguish thousands of words in a human language. Voice control may refer to software used for sending operational commands to a computer.

Native Linux speech recognition[edit]

History[edit]

In the late 1990s, a Linux version of ViaVoice (created by IBM) was made available to users for no charge. However, the free SDK was removed by the developer in 2002.

Current development status[edit]

In recent years there has been a push to develop a high-quality native speech recognition engine for Linux. As a result, numerous projects dedicated to creating Linux speech recognition solutions were established, such as Mycroft, which is similar to Microsoft's Cortana but open source.

Crowdsourcing of speech samples[edit]

To compile a speech corpus and enable the production of acoustic models for speech recognition, VoxForge was set up with the aim of collecting transcribed speech for use with speech recognition projects. It is licensed under the GPL. VoxForge accepts crowdsourced speech samples and corrections of recognized speech sequences.

Speech recognition concept[edit]

The first step is to record an audio stream on a Linux machine. The user then has two main processing options:

  • (DSR) Discrete Speech Recognition - voice recognition is processed entirely on the local machine: a self-contained system in which all aspects of SR (Speech Recognition) are performed within the user's computer. This is becoming critical for protecting IP (Intellectual Property) and avoiding unwanted surveillance (as of 2018).
  • (Remote) Server-based SR - the speech file is transmitted to a remote server, which converts the audio into a text string. Because of recent cloud storage schemes and data mining, this method more easily allows surveillance, theft of IP and introduction of malware.

The second (remote) option was previously used on smartphones because they did not possess sufficient performance, disk space or RAM to process speech recognition on board. These limitations have largely been overcome, although server-based SR on mobile devices remains universal.
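The trade-off between the two processing options above can be sketched as a simple dispatch. The sketch below is illustrative only: the transcribe functions are hypothetical stubs (a real DSR path would hand the samples to an on-device engine such as PocketSphinx or Kaldi; a real remote path would upload the audio to a server), and the server URL is a placeholder.

```python
import wave

def transcribe_local(wav_path):
    # DSR: decoding stays entirely on the local machine.  A real
    # implementation would pass the samples to an on-device engine;
    # this stub only reads the WAV header so the sketch stays
    # self-contained.
    with wave.open(wav_path, "rb") as wav:
        n_frames = wav.getnframes()
    return f"<local decode of {n_frames} frames>"

def transcribe_remote(wav_path, server_url="https://asr.example.org"):
    # Server-based SR: the audio leaves the machine.  A real client
    # would POST the file with urllib.request; stubbed out here.
    return f"<remote decode of {wav_path} via {server_url}>"

def choose_backend(privacy_sensitive):
    # The trade-off described above: keep speech local when IP
    # protection or surveillance is a concern, otherwise a server
    # may be used.
    return transcribe_local if privacy_sensitive else transcribe_remote
```

The dispatcher simply returns one of the two functions, mirroring the privacy argument made above for on-device processing.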

Speech Recognition in Browser[edit]

Discrete Speech Recognition can be performed within a web browser and works well in supported browsers. Remote SR does not require installation of software on the desktop computer or mobile device, as it is primarily a server-based system with the inherent security issues noted above.

  • (Remote): https://dictation.io (use Chromium/Chrome) The dictation service records an audio track of the user via the web browser. In turn, dictation.io uses the Google API for speech recognition. Within Google Docs, Google voice typing works within the Chrome browser, regardless of operating system as it is a server-based system.
  • (DSR): There are solutions that work on the client only, without sending data to servers, e.g. pocketsphinx.js.


Free speech recognition engines[edit]

The following is a list of current projects dedicated to implementing speech recognition on Linux, as well as major native solutions. These are not end-user applications but programming libraries that a programmer may use to develop an end-user application.

  • CMU Sphinx is a general term to describe a group of speech recognition systems developed at Carnegie Mellon University.
  • Julius is a high-performance, two-pass large vocabulary continuous speech recognition (LVCSR) decoder software for speech-related researchers and developers.
  • Kaldi is a toolkit for speech recognition provided under the Apache license.
  • Mozilla DeepSpeech is an open source Speech-To-Text engine under development, based on Baidu's Deep Speech research paper. It is intended for end-user use in the coming months.[1]

Possibly active projects:

  • Lera (Large Vocabulary Speech Recognition) based on Simon and CMU Sphinx for KDE[2].
  • Speechpad.pw[3] uses Google's speech recognition engine and Chrome native messaging API to provide direct speech input in Linux.
  • Speech[4] uses Google's speech recognition engine to support dictation in many different languages.
  • Speech Control is a Qt-based application that uses CMU Sphinx tools such as SphinxTrain and PocketSphinx to provide speech recognition utilities like desktop control, dictation and transcription to the Linux desktop.
  • Platypus[5] is an open source shim that will allow the proprietary Dragon NaturallySpeaking running under Wine to work with any Linux X11 application.
  • FreeSpeech,[6] from the developer of Platypus, is a free and open source cross-platform desktop application for GTK that uses CMU Sphinx's tools to provide voice dictation, language learning, and editing in the style of Dragon NaturallySpeaking.
  • Vedics[7] (Voice Enabled Desktop Interaction and Control System) is a speech assistant for the GNOME environment.
  • GnomeVoiceControl[8] is a dialogue system to control the GNOME Desktop that was developed in the Google Summer of Code in 2007.
  • NatI[9] is a multi-language voice control system written in Python.
  • SphinxKeys[10] allows the user to type keyboard keys and mouse clicks by speaking into their microphone.
  • VoxForge is a free speech corpus and acoustic model repository for open source speech recognition engines.
  • Simon[11] aims to be extremely flexible in order to compensate for dialects or even speech impairments. It uses either HTK/Julius or CMU Sphinx, works on Windows and Linux, and supports training.
  • Speeral is a group of speech recognition tools developed at the University of Avignon.
  • Jasper (https://jasperproject.github.io/) is an open source platform for developing always-on, voice-controlled applications. It is an embedded Raspberry Pi front-end for CMU Sphinx or Julius.

It is possible for developers to create Linux speech recognition software by using existing packages derived from open-source projects.
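As a minimal illustration of driving one of these engines from code, the following Python sketch wraps the `pocketsphinx_continuous` command-line tool shipped with CMU PocketSphinx. The tool name and its `-infile` flag are as distributed with PocketSphinx; the wrapper is an assumption-laden sketch, not an official API, and it degrades gracefully when the engine is not installed.

```python
import shutil
import subprocess

def sphinx_transcribe(wav_path):
    """Transcribe a 16 kHz mono WAV file with CMU PocketSphinx.

    Returns the decoded text, or None if PocketSphinx is not
    installed on this system.
    """
    exe = shutil.which("pocketsphinx_continuous")
    if exe is None:
        return None  # engine not available; caller decides what to do
    result = subprocess.run(
        [exe, "-infile", wav_path],   # decode from file, not microphone
        capture_output=True, text=True,
    )
    # pocketsphinx_continuous prints hypotheses on stdout and
    # diagnostic logging on stderr.
    return result.stdout.strip()
```

An end-user application would layer command handling or dictation on top of a call like this, or link the engine's C library directly for lower latency.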

Inactive projects:

  • CVoiceControl[12] is a KDE- and X Window-independent version of its predecessor, KVoiceControl. The owner ceased development during the alpha stage.
  • Open Mind Speech,[13] a part of the Open Mind Initiative,[14] aims to develop free (GPL) speech recognition tools and applications, as well as collect speech data. Production ended in 2000.
  • PerlBox[15] is a Perl-based voice control and speech output tool. Development ended in its early stages in 2004.
  • Xvoice[16] is a user application providing dictation and command control to any X application; it requires the proprietary ViaVoice engine to function. Development ended in 2009 during early project testing.

Proprietary speech recognition engines[edit]

Voice control and keyboard shortcuts[edit]

Speech recognition usually refers to software that attempts to distinguish thousands of words in a human language. Voice control refers to software used for sending operational commands to a computer or appliance; it typically requires a much smaller vocabulary and is thus much easier to implement.

Simple software combined with keyboard shortcuts has the earliest potential for practically accurate voice control in Linux.
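A minimal sketch of this approach maps a small command vocabulary onto keyboard shortcuts and injects them with the `xdotool` X11 utility (`xdotool key ctrl+c` is real xdotool syntax; the command list and the recognizer that would supply the spoken word are hypothetical):

```python
import shutil
import subprocess

# Small command vocabulary -> X11 key sequence (xdotool syntax).
# The vocabulary is illustrative; a real setup would be user-defined.
COMMANDS = {
    "copy":  "ctrl+c",
    "paste": "ctrl+v",
    "undo":  "ctrl+z",
    "close": "alt+F4",
}

def handle_command(word):
    """Send the shortcut for a recognized command word, if any.

    Returns the key sequence that was (or would be) sent, else None.
    The `word` would come from a small-vocabulary engine such as
    PocketSphinx running in keyword-spotting mode.
    """
    keys = COMMANDS.get(word.lower())
    if keys is None:
        return None                      # not in the vocabulary
    if shutil.which("xdotool"):          # inject only if xdotool exists
        subprocess.run(["xdotool", "key", keys])
    return keys
```

Because the vocabulary is tiny, recognition accuracy is far higher than for open dictation, which is the point made above about voice control being easier to implement.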

Running Windows speech recognition software with Linux[edit]

Using a compatibility layer[edit]

It is possible to use programs such as Dragon NaturallySpeaking in Linux, by utilizing Wine, though some problems may arise, depending on which version is used.[22]

Using virtualized Windows[edit]

It is also possible to run Windows speech recognition software, such as NaturallySpeaking, inside a no-cost virtualized Windows installation under Linux. VMware Server and VirtualBox support copy and paste to and from a virtual machine, making dictated text easily transferable.

See also[edit]

References[edit]

External links[edit]