|Developer(s)||Lee Akinobu (Nagoya Institute of Technology)|
|Stable release||4.3.1 / January 15, 2014|
|Operating system||Unix systems (GNU/Linux, BSD, etc.), Windows (via Cygwin)|
|License||Free, GPL-incompatible license|
Julius is a high-performance, two-pass large vocabulary continuous speech recognition (LVCSR) decoder for speech-related researchers and developers. It can perform near-real-time decoding on most current PCs in a 60,000-word dictation task using word 3-grams and context-dependent HMMs. Major search techniques are fully incorporated. It is also carefully modularized to be independent of model structures, and various HMM types are supported, such as shared-state triphones and tied-mixture models, with any number of mixtures, states, or phones. Standard formats are adopted for interoperability with other free modeling toolkits. The main platform is Linux and other Unix workstations, and it also works on Windows. Julius is open source and distributed with a revised BSD-style license.
Julius has been developed as part of a free software toolkit for Japanese LVCSR research since 1997, and the work was continued at the Continuous Speech Recognition Consortium (CSRC), Japan, from 2000 to 2003.
From rev. 3.4, a grammar-based recognition parser named "Julian" has been integrated into Julius. Julian is a modified version of Julius that uses a hand-designed deterministic finite automaton (DFA) grammar as its language model. It can be used to build small-vocabulary voice-command systems or various spoken dialog system tasks.
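As a sketch of what such a hand-designed grammar looks like, Julius's grammar kit pairs a `.grammar` file of rewrite rules with a `.voca` file listing the words (and their phone sequences) for each category; the category names, words, and phone transcriptions below are illustrative assumptions, and the phones must match whatever acoustic model is actually used:

```
# fruit.grammar — rewrite rules; NS_B/NS_E are the sentence-edge silence categories
S     : NS_B FRUIT_N NS_E

# fruit.voca — words per category, each with an example phone sequence (assumed)
% NS_B
<s>      silB
% NS_E
</s>     silE
% FRUIT_N
apple    a p u r u
orange   o r e N j i
```

These two files are compiled into the `.dfa` and `.dict` files that the recognizer loads (the Julius distribution ships a `mkdfa.pl` script for this step).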
Julius adopts acoustic models in HTK ASCII format, pronunciation dictionary in HTK-like format, and word 3-gram language models in ARPA standard format (forward 2-gram and reverse 3-gram as trained from speech corpus with reversed word order).
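For illustration, the ARPA standard N-gram format mentioned above is a plain-text file of log10 probabilities and back-off weights; the tiny forward 2-gram below uses made-up words and scores purely as a shape example:

```
\data\
ngram 1=3
ngram 2=2

\1-grams:
-0.8 </s>
-0.7 <s>    -0.3
-0.5 hello  -0.2

\2-grams:
-0.4 <s> hello
-0.3 hello </s>

\end\
```

In the two-pass setup described above, the forward 2-gram is used in the first pass and the reverse 3-gram (trained on the corpus with word order reversed) in the second pass.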
Although Julius is distributed only with Japanese models, the VoxForge project is working on creating English acoustic models for use with the Julius Speech Recognition Engine.
In April 2018, thanks to the efforts of the Mozilla Foundation, which made a 350-hour corpus of spoken English audio available, a new open-source English speech model, ENVR-v5.4, was released alongside Polish PLPL-v7.1 models, available from