Subvocal recognition

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by BG19bot (talk | contribs) at 00:57, 19 March 2016 (Remove blank line(s) between list items per WP:LISTGAP to fix an accessibility issue for users of screen readers. Do WP:GENFIXES and cleanup if needed. Discuss this at Wikipedia talk:WikiProject Accessibility#LISTGAP). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Electrodes used in subvocal speech recognition research at NASA's Ames Research Center.

Subvocal recognition (SVR) is the process of detecting subvocalization and converting the detected signals into digital output, either audible or text-based.

Concept

A set of electrodes is attached to the skin of the throat and, without the speaker opening the mouth or uttering a sound, the words are recognized by a computer.

Subvocal speech recognition works with electromyogram (EMG) signals, which differ from speaker to speaker; even the positioning of an electrode can throw off consistency. To improve accuracy, researchers in this field rely on statistical models that become better at pattern-matching the more often a subject "speaks" through the electrodes, but even then there are lapses. At Carnegie Mellon University, researchers found that the same "speaker" could achieve an accuracy rate of 94% one day and see it drop to 48% a day later; between two different speakers it drops even further.[citation needed]
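The pattern-matching idea described above can be illustrated with a minimal sketch. This is not any lab's actual pipeline; it assumes hypothetical pre-segmented EMG windows and uses two simple amplitude features with a nearest-centroid classifier, which improves as more training windows per word are averaged in:

```python
# Illustrative sketch only: nearest-centroid classification of word-like
# EMG windows using simple amplitude features. All names are hypothetical.
import numpy as np

def features(window):
    """Per-window features: root-mean-square and mean absolute amplitude."""
    rms = np.sqrt(np.mean(window ** 2))
    mav = np.mean(np.abs(window))
    return np.array([rms, mav])

def train_centroids(windows_by_word):
    """Average the feature vectors of each word's training windows."""
    return {word: np.mean([features(w) for w in ws], axis=0)
            for word, ws in windows_by_word.items()}

def classify(window, centroids):
    """Assign the window to the word whose feature centroid is nearest."""
    f = features(window)
    return min(centroids, key=lambda word: np.linalg.norm(f - centroids[word]))

# Synthetic demo: two "words" producing different muscle-activation intensity.
rng = np.random.default_rng(0)
train = {
    "stop": [0.2 * rng.standard_normal(200) for _ in range(20)],
    "go":   [1.0 * rng.standard_normal(200) for _ in range(20)],
}
centroids = train_centroids(train)
print(classify(0.2 * rng.standard_normal(200), centroids))  # prints: stop
```

The speaker-to-speaker and day-to-day inconsistency noted above corresponds, in this sketch, to the centroids drifting whenever electrode placement changes the signal amplitudes, which is why retraining or adaptive models are needed.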

Relevant applications for this technology include settings where audible speech is impossible: astronauts, underwater Navy SEALs, fighter pilots, and emergency workers charging into loud, harsh environments. At Worcester Polytechnic Institute in Massachusetts, research is underway to use subvocal information as a control source for sophisticated computer music instruments.[citation needed]

Research and patents

With a grant from the U.S. Army, research into synthetic telepathy using subvocalization is taking place at the University of California, Irvine under lead scientist Mike D'Zmura.[1]

NASA's Ames Research Center in Mountain View, California, under the supervision of Charles Jorgensen, is conducting subvocalization research.[citation needed]

The Brain Computer Interface R&D program at the Wadsworth Center, under the New York State Department of Health, has demonstrated the ability to decipher consonants and vowels from imagined speech, which allows for brain-based communication using imagined speech.[2]

US patents on silent communication technologies include: US Patent 6587729, "Apparatus for audibly communicating speech using the radio frequency hearing effect";[3] US Patent 5159703, "Silent subliminal presentation system";[4] US Patent 6011991, "Communication system and method including brain wave analysis and/or use of brain activity";[5] and US Patent 3951134, "Apparatus and method for remotely monitoring and altering brain waves".[6]

References

  1. ^ http://www.nbcnews.com/id/27162401/[full citation needed]
  2. ^ Pei, Xiaomei; Barbour, Dennis L; Leuthardt, Eric C; Schalk, Gerwin (2011). "Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans". Journal of Neural Engineering. 8 (4): 046028. Bibcode:2011JNEng...8d6028P. doi:10.1088/1741-2560/8/4/046028. PMC 3772685. PMID 21750369.
  3. ^ Apparatus for audibly communicating speech using the radio frequency hearing effect
  4. ^ Silent subliminal presentation system
  5. ^ Communication system and method including brain wave analysis and/or use of brain activity
  6. ^ Apparatus and method for remotely monitoring and altering brain waves